GCC AI Research

Results for "cultural divide"

Why AI can describe an image but struggles to understand the culture inside it

MBZUAI

A new paper from MBZUAI introduces JEEM, a benchmark dataset for evaluating vision-language models on their understanding of images grounded in four Arabic-speaking societies (Jordan, the UAE, Egypt, and Morocco) and on their ability to use local dialects. The dataset comprises 2,178 images and 10,890 question-answer pairs reflecting everyday life and culturally specific scenes, and covers both image captioning and visual question answering tasks. Evaluation of several Arabic-capable models (Maya, PALO, Peacock, AIN, AyaV) and GPT-4o revealed that while these models generate fluent language, they struggle with genuine understanding, consistency, and relevance, especially when cultural context matters. Why it matters: This research highlights the difficulty of building AI systems that can truly understand and interact with diverse cultures, and underscores the need for culturally grounded datasets and evaluation metrics.

Culture and bias in LLMs: Defining the challenge and mitigating risks

MBZUAI

Researchers from MBZUAI, the University of Washington, and other institutions presented studies at EMNLP 2024 exploring how LLMs represent cultures. One survey analyzed dozens of recent studies on LLMs and culture and proposed a new framework for future research. It found that there is no widely accepted definition of 'culture' in NLP, making it challenging to interpret how models represent culture through language. Why it matters: This highlights a key gap in the field and emphasizes the need for a more rigorous and consistent understanding of culture in AI, especially as LLMs become more globally integrated.

Culturally Aware GenAI Risks for Youth: Perspectives from Youth, Parents, and Teachers in a Non-Western Context

arXiv

A study investigated culture-specific risks of generative AI for youth aged 7-17 in Saudi Arabia, focusing on privacy and safety challenges. Researchers analyzed 736 Reddit posts and 1,262 X (Twitter) posts, and conducted interviews with 31 Saudi participants, including youth, parents, and teachers. Findings highlighted context-dependent risks, particularly the disclosure of personal and family information in ways that conflict with culturally rooted expectations of modesty, privacy, and honor. The study proposes design implications for inclusive, context-sensitive parental controls that align with local cultural norms and values. Why it matters: This research is crucial for developing AI tools and policies that are culturally appropriate and that safeguard youth in non-Western contexts such as the Middle East.