Middle East AI

This Week arXiv

Profiling News Media for Factuality and Bias Using LLMs and the Fact-Checking Methodology of Human Experts

arXiv · Significant research

Summary

A new methodology uses LLMs to assess the factuality and bias of news outlets by emulating the criteria applied by professional fact-checkers. The approach prompts LLMs with questions derived from those fact-checking criteria, then aggregates the responses into outlet-level predictions. Experiments demonstrate improvements over baselines, with an error analysis by media popularity and region; the dataset and code are released at https://github.com/mbzuai-nlp/llm-media-profiling.
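The elicit-and-aggregate step can be illustrated with a minimal sketch. The criteria questions and the `query_llm` stand-in below are hypothetical placeholders for an actual LLM API call, not the paper's own prompts:

```python
# Minimal sketch: prompt an LLM once per (criterion, article) pair,
# then majority-vote the labels into one outlet-level prediction.
from collections import Counter

CRITERIA = [
    "Does the outlet clearly separate news from opinion?",
    "Does the outlet issue corrections for factual errors?",
    "Does the outlet cite verifiable primary sources?",
]

def query_llm(criterion: str, article: str) -> str:
    # Placeholder: a real system would call an LLM with the criterion
    # as a prompt and parse its answer into a factuality label.
    return "high" if "correction" in article.lower() else "mixed"

def profile_outlet(articles: list[str]) -> str:
    """Elicit one label per (criterion, article) pair and majority-vote."""
    votes = Counter(
        query_llm(criterion, article)
        for criterion in CRITERIA
        for article in articles
    )
    return votes.most_common(1)[0][0]
```

Aggregation by majority vote is only one option; the paper's actual aggregation scheme may differ.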

Keywords

fact-checking · LLM · bias detection · media profiling · misinformation


Related

Can LLMs Automate Fact-Checking Article Writing?

arXiv

Researchers at MBZUAI have introduced QRAFT, an LLM-based framework designed to automate the generation of fact-checking articles. The system mimics the writing workflow of human fact-checkers, aiming to bridge the gap between automated fact-checking systems and public dissemination. While QRAFT outperforms existing text-generation methods, it still falls short of expert-written articles, highlighting areas for further research.

Fact-Checking Complex Claims with Program-Guided Reasoning

arXiv

This paper introduces ProgramFC, a fact-checking model that decomposes complex claims into simpler sub-tasks using a library of functions. The model uses LLMs to generate reasoning programs and executes them by delegating sub-tasks, enhancing explainability and data efficiency. Experiments on fact-checking datasets demonstrate ProgramFC's superior performance compared to baseline methods, with publicly available code and data.
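The program-guided idea can be sketched as follows. This is a toy interpreter in the spirit of ProgramFC, not the paper's implementation; the `Verify` function name, the fact set, and the two-step program are illustrative assumptions:

```python
# Toy sketch: a complex claim is decomposed into a "program" of sub-tasks,
# and an executor delegates each step to a specialized sub-module.

def verify(claim: str, facts: set[str]) -> bool:
    # Stand-in for a learned sub-module that checks one simple sub-claim.
    return claim in facts

def execute_program(program: list[tuple[str, str]], facts: set[str]) -> bool:
    """Run a reasoning program step by step, delegating each sub-task."""
    results = []
    for op, arg in program:
        if op == "Verify":
            results.append(verify(arg, facts))
    # The complex claim holds only if every sub-claim is supported.
    return all(results)

facts = {"Paris is in France", "France is in Europe"}
program = [("Verify", "Paris is in France"),
           ("Verify", "France is in Europe")]
```

Because the program itself is explicit, each intermediate verdict is inspectable, which is the source of the explainability the summary mentions.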

OpenFactCheck: A Unified Framework for Factuality Evaluation of LLMs

arXiv

MBZUAI researchers release OpenFactCheck, a unified framework to evaluate the factual accuracy of large language models. The framework includes modules for response evaluation, LLM evaluation, and fact-checker evaluation. OpenFactCheck is available as an open-source Python library, a web service, and via GitHub.

SectEval: Evaluating the Latent Sectarian Preferences of Large Language Models

arXiv

The paper introduces SectEval, a new benchmark for evaluating sectarian bias in LLMs with respect to Sunni and Shia Islam, available in English and Hindi. Results show significant inconsistencies in LLM responses across languages, with some models favoring Shia-aligned responses in English but Sunni-aligned ones in Hindi. Location-based experiments further reveal that advanced models adapt their responses to the user's claimed country, while smaller models exhibit a consistent Sunni-leaning bias.