GCC AI Research


Results for "Preslav Nakov"

On a mission to end fake news

MBZUAI

MBZUAI Professor Preslav Nakov is researching methods to combat fake news and online disinformation through NLP techniques. His work focuses on detecting harmful memes and identifying the stance of individuals regarding disinformation. Four of Nakov’s recent papers on these topics were presented at NAACL 2022. Why it matters: This research aims to mitigate the impact of weaponized news and online manipulation, contributing to a more trustworthy information environment in the region and globally.

The war on fake news can be won

MBZUAI

MBZUAI Professor Preslav Nakov believes AI can outpace human fact-checkers in detecting fake news by analyzing language and sentence structure. AI systems can identify common sources of fake news and flag domains for blocking. Nakov's research focuses on disinformation, fact-checking, and media bias detection. Why it matters: AI-driven solutions for combating fake news could help mitigate the spread of misinformation and its impact on society, especially in the Arabic-speaking world.

Tackling human-written disinformation and machine hallucinations

MBZUAI

MBZUAI Professor Preslav Nakov is researching methods to identify and combat the harmful uses of large language models in generating disinformation. He notes that disinformation, unlike fake news, is weaponized with the intent to persuade, not just to lie. His research focuses on the linguistic differences between human-written and machine-generated disinformation, such as the use of rhetorical devices in human propaganda. Why it matters: As AI-generated content becomes more prevalent, understanding and mitigating its potential for spreading disinformation is critical for maintaining trust and integrity in information ecosystems, especially during major election cycles.

Decoding the news: a new application to identify persuasion techniques in the media

MBZUAI

MBZUAI Professor Preslav Nakov has developed FRAPPE, an interactive website that analyzes news articles to identify persuasion techniques. FRAPPE helps users understand framing, persuasion, and propaganda at an aggregate level, across different news outlets and countries. Presented at EACL, FRAPPE detects 23 specific techniques grouped into six broader categories, such as "attack on reputation" and "manipulative wording". Why it matters: The tool addresses the increasing difficulty of discerning factual information from disinformation, providing a means to identify biases in news media from different countries.

MBZUAI at ACL2023

MBZUAI

MBZUAI researchers had 26 papers accepted at ACL 2023, a top NLP conference. Assistant Professor Alham Fikri Aji co-authored eight papers, including one on crosslingual generalization through multitask finetuning (MTF). Deputy Department Chair Preslav Nakov co-authored a paper on a Bulgarian language understanding benchmark dedicated to the memory of Yale computer scientist Dragomir R. Radev. Why it matters: MBZUAI's strong presence at ACL highlights its growing influence in the NLP field and its contributions to multilingual AI research.

MBZUAI is changing the landscape of large language models in the region

MBZUAI

MBZUAI has been actively involved in developing generative AI models, contributing to models such as Llama 2, Jais, Vicuna, and LaMini. Professor Preslav Nakov notes Llama 2's improvements in size and carbon footprint over Llama 1. MBZUAI aims to tackle challenges such as information accuracy, economic costs, and the scarcity of Arabic content online. Why it matters: MBZUAI's work helps address the limitations of current LLMs, particularly for Arabic, and promotes sustainable AI development in the region.

Fact checking with ChatGPT

MBZUAI

A new paper from MBZUAI researchers explores using ChatGPT to combat the spread of fake news. The researchers, including Preslav Nakov and Liangming Pan, demonstrate that ChatGPT can be used to fact-check published information. Their paper, "Fact-Checking Complex Claims with Program-Guided Reasoning," was accepted at ACL 2023. Why it matters: This research highlights the potential of large language models to address the growing challenge of misinformation, with implications for maintaining information integrity in the digital age.
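The idea behind program-guided reasoning can be illustrated with a toy sketch: a complex claim is decomposed into sub-questions, each sub-question is answered against evidence, and the verdicts are aggregated. Everything below (the sub-questions, the evidence store, and the helper names) is an illustrative assumption, not the paper's actual implementation, which uses an LLM to generate and execute the reasoning program.

```python
# Hedged sketch of program-guided fact-checking, loosely in the spirit of
# "Fact-Checking Complex Claims with Program-Guided Reasoning".
# The decomposition and the evidence store are hypothetical stand-ins
# for LLM-generated programs and real retrieval.

from dataclasses import dataclass


@dataclass
class Step:
    question: str  # sub-question the reasoning program asks
    answer: bool   # verdict for that sub-question


# Toy "evidence store" standing in for retrieval plus an LLM answerer.
EVIDENCE = {
    "Was the paper accepted at ACL 2023?": True,
    "Do the authors include Preslav Nakov?": True,
}


def decompose(claim: str) -> list[str]:
    # In the paper, an LLM generates a reasoning program; here we
    # hard-code sub-questions for one hypothetical claim.
    return list(EVIDENCE)


def verify(claim: str) -> bool:
    # Execute each step and aggregate: the claim is supported only if
    # every sub-question checks out against the evidence.
    steps = [Step(q, EVIDENCE.get(q, False)) for q in decompose(claim)]
    return all(s.answer for s in steps)


print(verify("Nakov co-authored a fact-checking paper at ACL 2023"))
```

The aggregation step here is a simple conjunction; in practice, a program-guided system can also branch, compare quantities, or call different tools per step.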

Can crowdsourced fact-checking curb misinformation on social media?

MBZUAI

MBZUAI Professor Preslav Nakov discusses Meta's shift to crowdsourced fact-checking via Community Notes, replacing third-party fact-checkers. Community Notes, originating from Twitter's Birdwatch, allows users to add context to potentially misleading posts, visible after community consensus. Research indicates this approach can reduce misinformation and lead to post retractions. Why it matters: The adoption of crowdsourcing for content moderation by major platforms like Meta could significantly impact online information quality for billions of users.
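The consensus requirement can be sketched in a few lines: a note surfaces only when raters who usually disagree both find it helpful. The cluster labels and the majority threshold below are simplifying assumptions for illustration; the production Community Notes algorithm instead fits a matrix factorization over the full rating matrix to estimate cross-viewpoint helpfulness.

```python
# Toy sketch of "bridging-based" note ranking in the style of
# Community Notes: a note becomes visible only when raters from at
# least two distinct viewpoint clusters give it a helpful majority.
# Cluster labels and the threshold are illustrative assumptions.

def note_visible(ratings: list[tuple[str, bool]], threshold: float = 0.5) -> bool:
    """ratings: (rater_cluster, found_helpful) pairs."""
    by_cluster: dict[str, list[bool]] = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    # Clusters whose raters, on balance, found the note helpful.
    agreeing = [c for c, votes in by_cluster.items()
                if sum(votes) / len(votes) > threshold]
    # Require agreement across at least two clusters, not raw popularity.
    return len(agreeing) >= 2


# One-sided support is not enough to surface the note:
print(note_visible([("left", True), ("left", True), ("right", False)]))  # False
# Cross-cluster agreement surfaces it:
print(note_visible([("left", True), ("right", True), ("right", True)]))  # True
```

The design point this illustrates is that bridging-based ranking rewards agreement across divides rather than volume, which is why a heavily upvoted but one-sided note still stays hidden.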