The UAE has become the first country globally to implement regulations specifically addressing the use of Artificial Intelligence in election campaigns. These new rules aim to ensure fairness and transparency by managing how AI tools are deployed by candidates and parties. The regulations cover aspects such as deepfakes, misinformation, and the use of AI for voter targeting. Why it matters: This move establishes a significant precedent for responsible AI governance, particularly concerning democratic processes, and positions the UAE as a leader in proactive AI policy development.
MBZUAI researchers are studying how AI can be used to combat disinformation and improve news verification during elections, as AI amplifies both the volume and the speed of fake news. Dilshod Azizov is applying machine learning to spot patterns in news content that can strengthen verification, while Preslav Nakov's FRAPPE system uses machine learning and NLP to identify persuasive techniques and framing in news articles, helping users understand the underlying context of how news is presented and reported. Why it matters: This research highlights AI's potential to both harm and help democratic processes, underscoring the need for tools that analyze and verify information amid rising AI-generated disinformation.
A panel discussion hosted by MBZUAI in collaboration with the Manara Center for Coexistence and Dialogue addressed misinformation and its threat to elections. The talk covered the reasons behind the rise of misinformation, citizen perspectives, and the role of social media influencers. Two cases, the 2024 Indian general elections and the upcoming US presidential election in November 2024, were used to illustrate the contours of misinformation. Why it matters: Understanding the dynamics of misinformation, especially as spread through social media influencers, is crucial for safeguarding democratic processes in the region and globally.
A new methodology assesses the factuality and bias of news outlets with LLMs by emulating the criteria professional fact-checkers use. The approach prompts LLMs with questions derived from fact-checking criteria, then aggregates the responses into outlet-level predictions. Experiments demonstrate improvements over baselines, with error analysis by media popularity and region; the dataset and code are released at https://github.com/mbzuai-nlp/llm-media-profiling.
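The prompt-and-aggregate idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the criteria below are hypothetical examples of fact-checking questions, and `query_llm` is a stub standing in for a real model call.

```python
from collections import Counter

# Hypothetical fact-checking criteria, loosely in the spirit of the
# approach described above; the paper's actual criteria may differ.
CRITERIA = [
    "Does the outlet routinely cite primary sources?",
    "Does the outlet publish corrections for its errors?",
    "Does the outlet clearly separate news from opinion?",
]

def query_llm(criterion: str, outlet: str) -> str:
    """Placeholder for an LLM call returning a per-criterion verdict.

    A real system would prompt a model with the criterion plus evidence
    about the outlet; here canned verdicts are used for illustration.
    """
    canned = {"Example Daily": ["high", "high", "mixed"]}
    verdicts = canned.get(outlet, ["mixed"] * len(CRITERIA))
    return verdicts[CRITERIA.index(criterion)]

def profile_outlet(outlet: str) -> str:
    """Elicit one verdict per criterion and aggregate them into a
    single factuality label by majority vote."""
    verdicts = [query_llm(c, outlet) for c in CRITERIA]
    label, _count = Counter(verdicts).most_common(1)[0]
    return label

print(profile_outlet("Example Daily"))  # majority of ["high", "high", "mixed"] -> "high"
```

Majority voting is only one possible aggregation; weighted schemes or a second LLM pass over the per-criterion verdicts would fit the same skeleton.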
Senate Foreign Relations Committee Democrats issued a statement regarding reports of an alleged bribe from UAE officials to the Trump family. They urged the Biden Administration to investigate the allegations thoroughly and provide Congress with relevant information. The statement highlights concerns about foreign influence and potential illicit financial activity in U.S. politics involving Middle Eastern actors. Why it matters: This political development concerns allegations of corruption involving foreign officials and could affect diplomatic relations between the United States and the UAE, though it is not directly related to artificial intelligence.
This paper introduces a new task: detecting propaganda techniques in code-switched text. The authors created and released a corpus of 1,030 English-Roman Urdu code-switched texts annotated with 20 propaganda techniques. Experiments show the importance of directly modeling multilinguality and of choosing an appropriate fine-tuning strategy for this task.
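Since a text can carry several propaganda techniques at once, such a corpus is naturally framed as multi-label classification. A minimal sketch of the label encoding, using invented technique names and an invented example rather than the paper's actual 20-label inventory or data:

```python
# Illustrative subset of technique labels; the real corpus uses 20.
TECHNIQUES = ["loaded_language", "name_calling", "appeal_to_fear"]

def to_multi_hot(labels: list[str]) -> list[int]:
    """Encode a list of technique names as a multi-hot vector over
    TECHNIQUES, the standard target format for multi-label training."""
    return [1 if t in labels else 0 for t in TECHNIQUES]

# A made-up English-Roman Urdu code-switched example for illustration.
example = {
    "text": "Yeh log hamesha jhoot bolte hain, total disaster!",
    "labels": ["loaded_language", "name_calling"],
}

print(to_multi_hot(example["labels"]))  # [1, 1, 0]
```

Vectors like these would then be fed to a multilingual encoder fine-tuned with a per-label sigmoid loss, which is where the choice of fine-tuning strategy the paper studies comes into play.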
MBZUAI Professor Preslav Nakov is researching methods to identify and combat the harmful uses of large language models in generating disinformation. He notes that disinformation, unlike fake news, is weaponized with the intent to persuade, not just to lie. His research focuses on the linguistic differences between human-written and machine-generated disinformation, such as the use of rhetorical devices in human propaganda. Why it matters: As AI-generated content becomes more prevalent, understanding and mitigating its potential for spreading disinformation is critical for maintaining trust and integrity in information ecosystems, especially during major election cycles.