GCC AI Research

Improving through argument: a symbolic approach to fake-news detection

MBZUAI · Notable

Summary

MBZUAI researchers developed a symbolic adversarial learning framework (SALF) for fake news detection built on LLM-powered agents. SALF pits a generator against a detector in a debate-style setup, with a third LLM acting as judge, so that each agent iteratively improves at creating and identifying fake news. In testing, the SALF generator degraded the performance of existing fake news detectors by 53.4% on a Chinese dataset and 34.2% on an English dataset. Why it matters: This research offers a novel approach to combating the evolving threat of LLM-generated disinformation, a critical issue for maintaining reliable information ecosystems in the region and globally.
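The debate-style setup described above can be sketched as a simple control loop. This is a hypothetical illustration, not the authors' implementation: `adversarial_debate`, `generate`, `detect`, and `judge` are assumed names standing in for LLM-backed agents, and the toy lambdas replace real model calls.

```python
# Hypothetical sketch of a SALF-style adversarial debate loop (not the
# authors' code): a generator agent drafts a fake article, a detector
# agent critiques it, an LLM judge scores the round, and the generator
# refines its draft from the critique.

def adversarial_debate(generate, detect, judge, topic, rounds=3):
    """Run generator/detector rounds and return the transcript of
    (draft, critique, verdict) triples."""
    draft = generate(topic, feedback=None)
    transcript = []
    for _ in range(rounds):
        critique = detect(draft)          # detector argues why the draft is fake
        verdict = judge(draft, critique)  # judge decides who won the round
        transcript.append((draft, critique, verdict))
        draft = generate(topic, feedback=critique)  # generator refines its draft
    return transcript

# Toy stand-ins so the loop runs without any model calls:
gen = lambda topic, feedback: f"story about {topic}" + ("" if feedback is None else " (revised)")
det = lambda draft: f"critique of: {draft}"
jdg = lambda draft, critique: "detector wins"

log = adversarial_debate(gen, det, jdg, "elections", rounds=2)
```

In the paper's setting, each stand-in would be a prompted LLM call, and the judge's verdicts would drive prompt updates for both agents rather than a fixed revision.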


Related

Overview of the Shared Task on Fake News Detection in Urdu at FIRE 2021

arXiv

This paper provides an overview of the UrduFake@FIRE2021 shared task, which focused on fake news detection in the Urdu language. The task involved binary classification of news articles as real or fake using a dataset of 1,300 training and 300 testing articles across five domains. A total of 34 teams registered, 18 submitted results, and 11 provided technical reports describing approaches ranging from bag-of-words (BoW) features to Transformer models; the best system achieved an F1-macro score of 0.679.

FIRE: Fact-checking with Iterative Retrieval and Verification

arXiv

A novel agent-based framework called FIRE is introduced for fact-checking long-form text. FIRE iteratively integrates evidence retrieval and claim verification, deciding whether to provide a final answer or generate a subsequent search query. Experiments show FIRE achieves comparable performance to existing methods while reducing LLM costs by 7.6x and search costs by 16.5x.
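FIRE's retrieve-or-answer control flow can be illustrated with a short loop. This is a hedged sketch under assumed interfaces, not the released code: `fire_check`, `search`, and `verify` are hypothetical names, and `verify` is assumed to return either a final verdict or a follow-up search query.

```python
# Hypothetical sketch of a FIRE-style iterative loop: at each step the
# verifier either emits a final verdict or a follow-up query, so
# retrieval stops as soon as the evidence suffices. This early stopping
# is what would reduce both LLM and search costs.

def fire_check(claim, search, verify, max_steps=5):
    """Alternate retrieval and verification until the verifier answers
    or the step budget is spent."""
    evidence, query = [], claim
    for _ in range(max_steps):
        evidence.extend(search(query))
        # verify returns ('answer', verdict) or ('search', next_query)
        kind, payload = verify(claim, evidence)
        if kind == "answer":
            return payload
        query = payload
    return "unverified"  # budget exhausted without a confident verdict

# Toy stand-ins: the second batch of evidence settles the claim.
def toy_search(query):
    return [f"doc for {query}"]

def toy_verify(claim, evidence):
    if len(evidence) >= 2:
        return ("answer", "supported")
    return ("search", claim + " details")

result = fire_check("water boils at 100C", toy_search, toy_verify)
```

In practice `search` would wrap a web-search API and `verify` an LLM call; the cost savings come from the loop terminating as soon as the verifier is confident rather than retrieving a fixed number of documents.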

Truth-O-Meter: Making neural content meaningful and truthful

MBZUAI

A new content improvement system has been developed to address issues of randomness and incorrectness in text generated by deep learning models like GPT-3. The system uses text mining to identify correct sentences and employs syntactic/semantic generalization to substitute problematic elements. The system can substantially improve the factual correctness and meaningfulness of raw content. Why it matters: Improving the quality of automatically generated content is crucial for ensuring reliability and trustworthiness across various AI applications.

Combatting the spread of scientific falsehoods with NLP

MBZUAI

Researchers from MBZUAI and other institutions presented a study at ACL 2024 on combatting misinformation by identifying misrepresented scientific research. They compiled a dataset called MISSCI, comprising real-world examples of misinformation gathered from the HealthFeedback fact-checking website. Annotators classified the reasoning errors into nine classes. Why it matters: This work addresses a critical need to combat the spread of scientific falsehoods online, especially given the challenges of manual fact-checking.