This paper presents the development and validation of an Arabic version of the Attitudes Toward Large Language Models scales (AT-GLLM and AT-PLLM), adapted from the original English versions. The scales were translated into Arabic and administered to a sample of 249 Arabic-speaking adults. The translated scales demonstrated strong psychometric properties, including a two-factor structure, measurement invariance across gender, and good reliability and validity. Why it matters: This provides a culturally relevant tool for assessing attitudes toward LLMs in the Arab world, crucial for localized research and policy-making in the rapidly growing field of Arabic AI.
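Scale reliability of the kind reported here is commonly summarized with Cronbach's alpha. A minimal sketch of that computation, using made-up respondent data rather than the study's actual responses:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for internal-consistency reliability.

    scores: one row per respondent, each row listing that
    respondent's scores on the scale's items.
    """
    n_items = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(n_items)]
    total_var = var([sum(row) for row in scores])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

# Illustrative data: two items that agree perfectly across
# three respondents yield the maximum alpha of 1.0.
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])  # → 1.0
```

In practice, alpha values of roughly 0.7 or above are usually read as acceptable reliability for attitude scales.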
This article surveys the landscape of Arabic Large Language Models (ALLMs), tracing their evolution from early text processing systems to sophisticated AI models. It highlights the unique challenges and opportunities in developing ALLMs for the 422 million Arabic speakers across 27 countries. The paper also examines the evaluation of ALLMs through benchmarks and public leaderboards. Why it matters: ALLMs can bridge technological gaps and empower Arabic-speaking communities by catering to their specific linguistic and cultural needs.
The paper introduces AraTrust, a new benchmark for evaluating the trustworthiness of LLMs when prompted in Arabic. The benchmark contains 522 multiple-choice questions covering dimensions such as truthfulness, ethics, safety, and fairness. Experiments using AraTrust showed that GPT-4 performed best, while open-source models such as AceGPT 7B and Jais 13B scored lower. Why it matters: This benchmark addresses a critical gap in evaluating LLMs for Arabic, which is essential for ensuring the safe and ethical deployment of AI in the Arab world.
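Multiple-choice benchmarks of this kind typically reduce to accuracy, overall and per trustworthiness dimension. A minimal scoring sketch, using hypothetical items and model answers (not AraTrust's actual data or format):

```python
from collections import defaultdict

def score_mcq(items, model_answers):
    """Overall and per-dimension accuracy for multiple-choice items.

    items: dicts with 'id', 'dimension', and gold 'answer' letter.
    model_answers: item id -> the model's chosen letter.
    """
    correct, total = 0, 0
    per_dim = defaultdict(lambda: [0, 0])  # dimension -> [correct, total]
    for item in items:
        hit = model_answers.get(item["id"]) == item["answer"]
        correct += hit
        total += 1
        per_dim[item["dimension"]][0] += hit
        per_dim[item["dimension"]][1] += 1
    overall = correct / total if total else 0.0
    by_dim = {d: c / n for d, (c, n) in per_dim.items()}
    return overall, by_dim

# Hypothetical items covering two of AraTrust's dimensions.
items = [
    {"id": 1, "dimension": "safety", "answer": "A"},
    {"id": 2, "dimension": "safety", "answer": "C"},
    {"id": 3, "dimension": "ethics", "answer": "B"},
]
answers = {1: "A", 2: "B", 3: "B"}
overall, by_dim = score_mcq(items, answers)
```

Per-dimension breakdowns matter because a model can score well on truthfulness while failing on safety; a single aggregate number would hide that.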
The paper introduces AraHalluEval, a new framework for evaluating hallucinations in Arabic and multilingual large language models (LLMs). The framework uses 12 fine-grained hallucination indicators across generative question answering and summarization tasks, evaluating 12 LLMs spanning Arabic-specific, multilingual, and reasoning-based models. Results show that factual hallucinations are more common than faithfulness errors, with the Arabic-focused model Allam showing comparatively low hallucination rates. Why it matters: This work addresses a critical gap in Arabic NLP by providing a comprehensive tool for assessing and mitigating hallucinations in LLMs, which is essential for reliable AI applications in the Arabic-speaking world.
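Fine-grained indicator frameworks like this one usually aggregate binary per-output judgments into a rate per indicator. A minimal sketch of that aggregation; the indicator names and annotations below are illustrative, not AraHalluEval's actual schema:

```python
def hallucination_rates(annotations):
    """Per-indicator hallucination rates across annotated outputs.

    annotations: one dict per model output, mapping an indicator
    name to True if that hallucination type was flagged.
    """
    counts, totals = {}, {}
    for ann in annotations:
        for indicator, flagged in ann.items():
            counts[indicator] = counts.get(indicator, 0) + int(flagged)
            totals[indicator] = totals.get(indicator, 0) + 1
    return {k: counts[k] / totals[k] for k in counts}

# Illustrative annotations for three model outputs, with one
# factual indicator and one faithfulness indicator.
anns = [
    {"factual_entity": True, "unfaithful_summary": False},
    {"factual_entity": False, "unfaithful_summary": False},
    {"factual_entity": True, "unfaithful_summary": True},
]
rates = hallucination_rates(anns)
```

Comparing such rates across indicators is what lets a study conclude, as this one does, that factual errors dominate faithfulness errors.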
The paper introduces ALLaM, a series of large language models for Arabic and English, designed to support Arabic Language Technologies. The models are trained with language alignment and knowledge transfer in mind, using a decoder-only architecture. ALLaM achieves state-of-the-art results on Arabic benchmarks like MMLU Arabic and Arabic Exams. Why it matters: This work advances Arabic NLP by providing high-performing LLMs and demonstrating effective techniques for cross-lingual transfer learning and alignment with human preferences.