GCC AI Research

Results for "subjectivity analysis"

Dhati+: Fine-tuned Large Language Models for Arabic Subjectivity Evaluation

arXiv ·

This paper introduces AraDhati+, a comprehensive new dataset for Arabic subjectivity analysis created by combining existing datasets such as ASTD, LABR, HARD, and SANAD. The researchers fine-tuned several language models, including XLM-RoBERTa, AraBERT, and ArabianGPT, on AraDhati+ for subjectivity classification. An ensemble decision approach achieved 97.79% accuracy. Why it matters: The work addresses the under-resourced state of Arabic NLP by providing a new dataset and demonstrating strong subjectivity-classification results, advancing sentiment analysis capabilities for Arabic.
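The summary does not specify how the ensemble decision is made; a common choice is a majority vote over the labels predicted by each fine-tuned model. A minimal sketch under that assumption (the model names come from the summary, the voting logic and example labels are illustrative):

```python
from collections import Counter

def ensemble_decision(predictions):
    """Majority vote over per-model labels.

    Ties are broken in favor of the label listed first,
    since Counter preserves insertion order for equal counts.
    """
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model outputs for a single sentence
model_outputs = {
    "XLM-RoBERTa": "subjective",
    "AraBERT": "subjective",
    "ArabianGPT": "objective",
}
label = ensemble_decision(list(model_outputs.values()))
print(label)  # subjective
```

In practice each entry would be produced by running the corresponding fine-tuned classifier on the input sentence; only the aggregation step is shown here.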

Detecting Propaganda Techniques in Code-Switched Social Media Text

arXiv ·

This paper introduces a new task: detecting propaganda techniques in code-switched text. The authors created and released a corpus of 1,030 English-Roman Urdu code-switched texts annotated with 20 propaganda techniques. Experiments show the importance of directly modeling multilinguality and using the right fine-tuning strategy for this task.

Profiling News Media for Factuality and Bias Using LLMs and the Fact-Checking Methodology of Human Experts

arXiv ·

This paper proposes a methodology that uses LLMs to assess the factuality and bias of news outlets by emulating the criteria of human fact-checkers. Prompts grounded in those fact-checking criteria elicit LLM responses, which are then aggregated into predictions. Experiments demonstrate improvements over baselines, with error analysis by media popularity and region; the dataset and code are released at https://github.com/mbzuai-nlp/llm-media-profiling.
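The summary describes eliciting per-criterion LLM responses and aggregating them into a prediction, but not the aggregation rule. One simple possibility is to score the fraction of criteria an outlet satisfies and threshold it; the criterion names, threshold, and labels below are assumptions for illustration, not the paper's actual scheme:

```python
def aggregate_responses(criterion_answers, threshold=0.5):
    """Aggregate per-criterion yes/no LLM answers into a factuality label.

    criterion_answers: dict mapping a fact-checking criterion (str)
                       to a bool (True = the LLM judged it satisfied).
    Returns (label, score), where score is the fraction of criteria met.
    The thresholding rule here is an assumed stand-in for the paper's
    aggregation step.
    """
    score = sum(criterion_answers.values()) / len(criterion_answers)
    label = "high-factuality" if score >= threshold else "low-factuality"
    return label, score

# Hypothetical LLM judgments for one news outlet
answers = {
    "cites primary sources": True,
    "issues corrections": True,
    "separates news from opinion": False,
}
label, score = aggregate_responses(answers)
```

Each boolean would come from prompting the LLM with one criterion-specific question about the outlet; only the aggregation is sketched here.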

Revisiting Common Assumptions about Arabic Dialects in NLP

arXiv ·

This paper critically examines common assumptions about Arabic dialects used in NLP. The authors analyze a multi-label dataset where sentences in 11 country-level dialects were assessed by native speakers. The analysis reveals that widely held assumptions about dialect grouping and distinctions are oversimplified and not always accurate. Why it matters: The findings suggest that current approaches in Arabic NLP tasks like dialect identification may be limited by these inaccurate assumptions, hindering further progress in the field.