MBZUAI 2023 graduate Muhammad Umar is researching propaganda detection in low-resource, code-switched languages like Roman Urdu. His master's thesis focuses on detecting propaganda techniques in social media text using deep learning models. Umar aims to submit a paper on his findings to the EMNLP 2023 conference. Why it matters: This research addresses the under-explored area of propaganda detection in low-resource languages, which is crucial for combating misinformation in bilingual communities.
This paper introduces a new task: detecting propaganda techniques in code-switched text. The authors created and released a corpus of 1,030 English-Roman Urdu code-switched texts annotated with 20 propaganda techniques. Experiments show the importance of directly modeling multilinguality and using the right fine-tuning strategy for this task.
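Tasks like this are commonly framed as multi-label classification, with each text carrying a set of technique labels and systems scored by micro-averaged F1. As a minimal sketch of that evaluation (the set-based formulation is an assumption here, and the technique names below are illustrative, not the paper's exact 20-label inventory):

```python
# Hypothetical sketch: micro-averaged F1 for multi-label propaganda
# technique detection, where gold and predicted labels for each text
# are represented as sets of technique names.

def micro_f1(gold: list[set[str]], pred: list[set[str]]) -> float:
    """Micro-averaged F1 over per-example label sets."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # correctly predicted labels
    fp = sum(len(p - g) for g, p in zip(gold, pred))  # predicted but not gold
    fn = sum(len(g - p) for g, p in zip(gold, pred))  # gold but missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative labels only, not the corpus's actual annotation scheme.
gold = [{"loaded_language", "name_calling"}, {"doubt"}]
pred = [{"loaded_language"}, {"doubt", "smears"}]
print(round(micro_f1(gold, pred), 3))  # tp=2, fp=1, fn=1 -> P=R=2/3, F1 ~ 0.667
```

Micro-averaging pools true/false positives across all examples before computing F1, so frequent techniques dominate the score; macro-averaging would instead weight each of the 20 techniques equally.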
MBZUAI Professor Preslav Nakov believes AI can outpace human fact-checkers in detecting fake news by analyzing language and sentence structure. AI systems can identify common sources of fake news and flag such domains for blocking. Nakov's research focuses on disinformation, fact-checking, and media bias detection. Why it matters: AI-driven solutions for combating fake news could help mitigate the spread of misinformation and its impact on society, especially in the Arabic-speaking world.
MBZUAI researchers are studying how AI can be used to combat disinformation and improve news verification during elections, at a time when AI is amplifying the volume and speed of fake news. Dilshod Azizov is using machine learning to spot patterns in news coverage that can improve verification, while Preslav Nakov's FRAPPE system identifies persuasive techniques and framing in news articles. FRAPPE uses machine learning and NLP to analyze how news is presented and reported, aiming to help users understand the underlying context of what they read. Why it matters: This research highlights the potential of AI to both harm and support democratic processes, emphasizing the need for tools to analyze and verify information in the face of increasing AI-generated disinformation.
MBZUAI Professor Preslav Nakov is researching methods to identify and combat the harmful uses of large language models in generating disinformation. He notes that disinformation, unlike fake news, is weaponized with the intent to persuade, not just to lie. His research focuses on the linguistic differences between human-written and machine-generated disinformation, such as the use of rhetorical devices in human propaganda. Why it matters: As AI-generated content becomes more prevalent, understanding and mitigating its potential for spreading disinformation is critical for maintaining trust and integrity in information ecosystems, especially during major election cycles.