Zeerak Talat, an independent scholar, gave a talk at MBZUAI on ethical concerns in NLP. The talk covered disparities in research on biases in NLP, performance differences based on socio-economic language variations, and risks of malicious reuse of NLP tools. Talat's research considers how machine learning interacts with and impacts societies through content moderation technologies. Why it matters: As NLP technologies become more integrated into society, understanding and addressing their potential harms and ethical implications is crucial for responsible development and deployment in the region and beyond.
Iryna Gurevych from TU Darmstadt presented research on using large language models for real-world fact-checking, focusing on dismantling misleading narratives from misinterpreted scientific publications and detecting misinformation via visual content. The research aims to explain why a false claim was believed, why it is false, and why the alternative is correct. Why it matters: Addressing misinformation, especially when supported by seemingly credible sources, is critical for public health, conflict resolution, and maintaining trust in institutions in the Middle East and globally.
Iryna Gurevych from TU Darmstadt discussed challenges in using NLP for misinformation detection, highlighting the gap between current fact-checking research and real-world scenarios. Her team is working on detecting emerging misinformation topics and has constructed two fact-checking corpora built from longer evidence documents. They are also collaborating with cognitive scientists to detect and respond to vaccine hesitancy using effective communication strategies. Why it matters: Addressing misinformation is crucial in the Middle East, especially regarding public health and socio-political issues, making advancements in NLP-based fact-checking highly relevant.
This paper introduces two shared tasks for abusive and threatening language detection in Urdu, a low-resource language with over 170 million speakers. The tasks involve binary classification of Urdu tweets as Abusive/Non-Abusive and Threatening/Non-Threatening, respectively. The organizers created and manually annotated datasets of 2,400/6,000 training tweets and 1,100/3,950 testing tweets, and provided logistic-regression and BERT-based baselines. Twenty-one teams participated; the best systems, built on m-BERT, achieved F1-scores of 0.880 on the abusive-language task and 0.545 on the threatening-language task.
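A baseline of the kind described above can be sketched as a TF-IDF plus logistic-regression pipeline. This is a minimal illustrative sketch, not the shared-task system: the toy English examples stand in for Urdu tweets, and the character n-gram setup is an assumption (a common choice for morphologically rich, low-resource languages), not a detail confirmed by the paper.

```python
# Hedged sketch of a logistic-regression baseline for binary
# Abusive/Non-Abusive classification. Toy placeholder data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training examples standing in for annotated Urdu tweets.
train_texts = [
    "you are awful and stupid",
    "have a great day friend",
    "everyone hates you",
    "thanks for the kind words",
]
train_labels = [1, 0, 1, 0]  # 1 = Abusive, 0 = Non-Abusive

# Character n-grams within word boundaries; an assumed feature choice.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)

# Predict a label (0 or 1) for an unseen tweet.
pred = clf.predict(["you are stupid"])[0]
```

The same pipeline, with labels swapped to Threatening/Non-Threatening, would serve as a baseline for the second task; the BERT-based baselines replace the TF-IDF features with contextual embeddings.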
This paper introduces a methodology that emulates the criteria professional fact-checkers use in order to assess the factuality and political bias of news outlets with LLMs. The approach prompts an LLM with questions derived from fact-checking criteria, then aggregates the responses into predictions. Experiments demonstrate improvements over baselines, with error analysis by media popularity and region; the dataset and code are released at https://github.com/mbzuai-nlp/llm-media-profiling.
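The prompt-and-aggregate idea can be sketched as follows. Everything here is an illustrative assumption rather than the paper's pipeline: the criteria prompts are invented examples, `ask_llm` is a hypothetical stub standing in for a real LLM call, and majority voting is just one plausible aggregation scheme.

```python
# Hedged sketch: query an LLM with fact-checking-style criteria about an
# outlet, then aggregate the per-criterion answers into one prediction.
from collections import Counter

# Invented example prompts loosely modeled on fact-checker criteria.
CRITERIA_PROMPTS = [
    "Does {outlet} clearly separate news from opinion? Answer High/Mixed/Low.",
    "Does {outlet} publish corrections for its errors? Answer High/Mixed/Low.",
    "Does {outlet} cite verifiable sources? Answer High/Mixed/Low.",
]

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; returns a canned answer."""
    return "High"

def profile_outlet(outlet: str) -> str:
    """Elicit one answer per criterion and aggregate by majority vote."""
    answers = [ask_llm(p.format(outlet=outlet)) for p in CRITERIA_PROMPTS]
    return Counter(answers).most_common(1)[0][0]
```

In a real system, `ask_llm` would call an actual model and the aggregation might weight criteria differently; the sketch only shows the overall shape of eliciting and combining criterion-level judgments.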