GCC AI Research

Results for "Fairness"

The Geopolitics of AI Safety: A Causal Analysis of Regional LLM Bias

arXiv

This study introduces a Probabilistic Graphical Model (PGM) framework built on Pearl's do-operator to causally audit LLM safety mechanisms, isolating the effect of injecting cultural demographic terms into prompts. A large-scale empirical analysis covered seven instruction-tuned models of diverse origin, including the UAE's Falcon3-7B alongside models from the US, Europe, China, and India, evaluated on the ToxiGen and BOLD datasets. The findings reveal a gap between observational and interventional estimates of bias, showing that standard fairness metrics can overestimate demographic bias. Western models exhibited higher causal refusal rates for specific demographic groups, while Eastern models showed low overall intervention rates with targeted sensitivities toward regional demographics. Why it matters: This research highlights the geopolitical nuances of LLM safety alignment and the risk that demographic-sensitive over-triggering restricts benign discourse, which is particularly relevant for diverse regions like the Middle East in developing culturally aware AI.
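The observational-vs-interventional distinction above can be sketched in a few lines. This is a minimal toy illustration, not the paper's method: `toy_model_refuses`, the `group_a` demographic term, and the six-prompt corpus are all invented for the example. The point is that conditioning on naturally occurring demographic mentions (observational) can inflate the measured refusal rate when the demographic term is confounded with topic, whereas injecting the term into every prompt (a do-style intervention) breaks that correlation.

```python
def toy_model_refuses(prompt: str) -> bool:
    """Hypothetical stand-in for an LLM safety filter: it refuses only
    when a demographic term co-occurs with a sensitive topic word."""
    return "group_a" in prompt and "politics" in prompt

def refusal_rate(prompts):
    return sum(toy_model_refuses(p) for p in prompts) / len(prompts)

# Invented observational corpus: the demographic term is confounded with
# topic (group_a appears disproportionately in political prompts).
corpus = [
    "group_a politics question",
    "group_a politics question",
    "group_a cooking question",
    "politics question",
    "cooking question",
    "cooking question",
]

# Observational estimate: condition on prompts that already mention the group.
observational = refusal_rate([p for p in corpus if "group_a" in p])

# Interventional estimate, do(demographic): inject the term into every
# base prompt, severing the demographic/topic correlation.
base_prompts = [p.replace("group_a ", "") for p in corpus]
interventional = refusal_rate(["group_a " + p for p in base_prompts])

print(f"observational refusal rate:  {observational:.2f}")   # 0.67
print(f"interventional refusal rate: {interventional:.2f}")  # 0.50
```

Here the observational rate (0.67) exceeds the interventional rate (0.50), mirroring the study's finding that purely observational fairness metrics can overstate demographic bias.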

User-Centric Gender Rewriting

MBZUAI

NYU and NYU Abu Dhabi researchers are working on user-centric gender rewriting in NLP, especially for Arabic. They are building an Arabic Parallel Gender Corpus and developing models for gender rewriting tasks. The work aims to address representational harms caused by NLP systems that don't account for user preferences regarding grammatical gender. Why it matters: This research promotes fairness and inclusivity in Arabic NLP by enabling systems to generate gender-specific outputs based on user preferences, mitigating biases present in training data.

ArabJobs: A Multinational Corpus of Arabic Job Ads

arXiv

The ArabJobs dataset is a new corpus of over 8,500 Arabic job advertisements collected from Egypt, Jordan, Saudi Arabia, and the UAE. The dataset contains over 550,000 words and captures linguistic, regional, and socio-economic variation in the Arab labor market. It is available on GitHub and can be used for fairness-aware Arabic NLP and labor market research.