GCC AI Research

Results for "LMICs"

Supporting malaria solutions

MBZUAI ·

Malaria No More, the Crown Prince Court of Abu Dhabi, and the Reaching the Last Mile program launched the Institute for Malaria and Climate Solutions (IMACS) to combat malaria amid climate change. The Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) joined as a technical partner, providing research support that leverages AI and data science. The initiative aims to develop and implement AI-driven strategies to address the impact of climate change on malaria transmission. Why it matters: This partnership highlights the UAE's commitment to using AI for global health challenges, particularly in combating climate-sensitive diseases like malaria.

Weather forecasting training program brings power of AI to low- and middle-income countries

MBZUAI ·

MBZUAI and the University of Chicago are collaborating on a program to train governments in low- and middle-income countries (LMICs) to use AI weather forecasting models. Funded by a grant from the UAE Presidential Court, the program's first cohort includes staff from Bangladesh, Chile, Ethiopia, Kenya, and Nigeria, who are receiving training in the UAE at MBZUAI and the National Center of Meteorology (NCM). The program aims to expand to 30 countries, potentially benefiting millions of farmers by improving yields and livelihoods. Why it matters: This initiative democratizes access to advanced weather forecasting, enabling LMICs to leverage AI for climate resilience and agricultural productivity.

The Geopolitics of AI Safety: A Causal Analysis of Regional LLM Bias

arXiv ·

This study introduces a probabilistic graphical model (PGM) framework that uses Pearl's do-operator to causally audit LLM safety mechanisms, isolating the effect of injecting cultural demographic mentions into prompts. A large-scale empirical analysis was conducted across seven instruction-tuned models from diverse origins, including the UAE's Falcon3-7B as well as models from the US, Europe, China, and India, using the ToxiGen and BOLD datasets. The findings revealed a disparity between observational and interventional bias, demonstrating that standard fairness metrics can overestimate demographic bias. Western models exhibited higher causal refusal rates for specific demographic groups, while Eastern models showed low overall intervention rates with targeted sensitivities toward regional demographics. Why it matters: This research highlights the geopolitical nuances of LLM safety alignment and the potential for demographic-sensitive over-triggering to restrict benign discourse, which is particularly relevant for diverse regions like the Middle East in developing culturally aware AI.
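To make the distinction between observational and interventional bias concrete, the sketch below shows a prompt-level approximation of the do-operator: the same base prompts are queried with and without an injected demographic mention, and the change in refusal rate estimates the causal effect of the intervention. This is a minimal illustration, not the paper's actual pipeline; the `query_model` helper, the prompt template, and the toy data are all assumptions for demonstration.

```python
import random
from typing import Callable, List


def interventional_refusal_gap(
    base_prompts: List[str],
    demographic: str,
    query_model: Callable[[str], bool],  # hypothetical helper: True if the model refuses
) -> float:
    """Estimate the effect of do(inject demographic) on the refusal rate.

    Each base prompt is queried as-is and again with the demographic
    explicitly injected, holding everything else fixed, so the difference
    in refusal rates approximates a prompt-level intervention effect.
    """
    baseline_refusals = 0
    intervened_refusals = 0
    for prompt in base_prompts:
        baseline_refusals += query_model(prompt)
        intervened_refusals += query_model(f"Regarding {demographic} people: {prompt}")
    n = len(base_prompts)
    # Positive gap = the demographic mention alone makes refusals more likely.
    return intervened_refusals / n - baseline_refusals / n


if __name__ == "__main__":
    # Stand-in "model" that refuses 10% of the time, regardless of content.
    fake_model = lambda prompt: random.random() < 0.1
    prompts = [
        "Describe common breakfast foods.",
        "Summarise local holiday customs.",
    ]
    print(interventional_refusal_gap(prompts, "Middle Eastern", fake_model))
```

In this framing, an observational metric would simply compare refusal rates across naturally occurring prompts that happen to mention different groups, which can conflate topic and demographic; the paired intervention above is what lets the study separate the two.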