GCC AI Research


Results for "Arabic language"

A Panoramic Survey of Natural Language Processing in the Arab World

arXiv

This survey paper reviews the landscape of Natural Language Processing (NLP) research and applications in the Arab world. It discusses the unique challenges posed by the Arabic language, such as its morphological complexity and dialectal diversity. The paper also presents a historical overview of Arabic NLP and surveys various research areas, including machine translation, sentiment analysis, and speech recognition. Why it matters: The survey provides a comprehensive resource for researchers and practitioners interested in the current state and future directions of Arabic NLP, a field critical for enabling AI technologies to serve Arabic-speaking communities.

Studying the History of the Arabic Language: Language Technology and a Large-Scale Historical Corpus

arXiv

This paper introduces a large-scale historical corpus of written Arabic spanning 1,400 years. The corpus was cleaned and processed with Arabic NLP tools, including the identification of reused text. Using a novel automatic periodization algorithm, the study traces the history of the Arabic language and confirms the conventional division into Classical and Modern Standard Arabic. Why it matters: This resource enables further computational research into the evolution of Arabic and the development of NLP tools for historical texts.

On the importance of Data Scale in Pretraining Arabic Language Models

arXiv

This paper studies the impact of data scale on Arabic Pretrained Language Models (PLMs). Researchers retrained BERT-base and T5-base models on large Arabic corpora, achieving state-of-the-art results on the ALUE and ORCA benchmarks. The analysis indicates that pretraining data volume is the most important factor for performance. Why it matters: This work provides valuable insights into building effective Arabic language models, emphasizing the importance of large, high-quality datasets for advancing Arabic NLP.

The Landscape of Arabic Large Language Models (ALLMs): A New Era for Arabic Language Technology

arXiv

This article surveys the landscape of Arabic Large Language Models (ALLMs), tracing their evolution from early text processing systems to sophisticated AI models. It highlights the unique challenges and opportunities in developing ALLMs for the 422 million Arabic speakers across 27 countries. The paper also examines the evaluation of ALLMs through benchmarks and public leaderboards. Why it matters: ALLMs can bridge technological gaps and empower Arabic-speaking communities by catering to their specific linguistic and cultural needs.

Arabic Stable LM: Adapting Stable LM 2 1.6B to Arabic

arXiv

The paper introduces Arabic Stable LM, a 1.6B parameter Arabic-centric language model, in both base and chat versions. The Arabic Stable LM 1.6B chat model achieves strong results on several benchmarks, outperforming models with up to 8x more parameters. The study also demonstrates the benefit of incorporating synthetic instruction tuning data through a large synthetic dialogue dataset. Why it matters: This work makes Arabic LLMs more accessible by reducing the parameter size while maintaining strong performance, facilitating deployment in resource-constrained environments.

AI and the Arabic language: Preserving cultural heritage and enabling future discovery

MBZUAI

This article discusses MBZUAI's efforts in advancing Arabic language AI, including the development of advanced linguistic models using deep learning techniques. Key initiatives include Jais, a 13B parameter Arabic LLM developed in collaboration with G42's Inception, and Atlas-Chat, which understands the Moroccan dialect. The university is also incorporating Arabic in practical AI solutions like BiMediX2, a healthcare multi-modal model that understands medical queries in both English and Arabic. Why it matters: These initiatives are crucial for preserving Arabic cultural heritage, enabling future discovery, and addressing linguistic challenges specific to the Arabic language in AI applications.

ArabianGPT: Native Arabic GPT-based Large Language Model

arXiv

The paper introduces ArabianGPT, a suite of transformer-based language models designed specifically for Arabic, including versions with 0.1B and 0.3B parameters. A key component is the AraNizer tokenizer, tailored for Arabic script's morphology. Fine-tuning ArabianGPT-0.1B achieved 95% accuracy in sentiment analysis, up from 56% in the base model, and improved F1 scores in summarization. Why it matters: The models address the gap in native Arabic LLMs, offering better performance on Arabic NLP tasks through tailored architecture and tokenization.
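A morphology-aware tokenizer like AraNizer matters because Arabic attaches articles and clitics directly to stems. The paper does not spell out AraNizer's algorithm here, so the following is only a generic sketch of the underlying idea, greedy longest-match subword segmentation against a vocabulary, with a tiny hypothetical vocabulary:

```python
def subword_tokenize(word: str, vocab: set, unk: str = "<unk>") -> list:
    """Split a word into the longest vocabulary pieces, left to right.

    Illustrative only: this is generic greedy subword matching, not the
    actual AraNizer algorithm, and the vocabulary below is made up.
    """
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest substring first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:  # no vocabulary piece matched: emit an unknown marker
            pieces.append(unk)
            i += 1
    return pieces

# Toy example: the definite article "ال" is split off from the stem.
vocab = {"ال", "كتاب", "مدرس", "ة"}
print(subword_tokenize("الكتاب", vocab))   # ['ال', 'كتاب']
print(subword_tokenize("المدرسة", vocab))  # ['ال', 'مدرس', 'ة']
```

Separating such affixes keeps the vocabulary compact, since one stem entry covers its many inflected surface forms.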

A new standard for evaluating Arabic language models presented at ACL

MBZUAI

MBZUAI researchers have created ArabicMMLU, the first benchmark dataset in Modern Standard Arabic for evaluating language understanding across multiple tasks. The dataset contains over 14,000 multiple-choice questions from school exams across the Arabic-speaking world and addresses the limitations of translated English datasets. It was presented at the 62nd Annual Meeting of the Association for Computational Linguistics in Bangkok. Why it matters: This benchmark enables a more accurate and culturally relevant evaluation of LLMs' capabilities in Arabic, which is crucial for developing AI tailored to the Arab world.
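Multiple-choice benchmarks of this kind are usually scored as plain accuracy: the fraction of questions where the model's chosen option matches the answer key. A minimal sketch with made-up data (not the real ArabicMMLU loader or evaluation harness):

```python
def accuracy(predictions: list, answers: list) -> float:
    """Fraction of questions where the predicted choice matches the key.

    Illustrative only: the answer keys and model picks below are
    hypothetical, not drawn from ArabicMMLU.
    """
    assert len(predictions) == len(answers), "one prediction per question"
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

gold = ["A", "C", "B", "D"]    # hypothetical answer keys
model = ["A", "C", "D", "D"]   # hypothetical model picks
print(f"accuracy = {accuracy(model, gold):.2f}")  # accuracy = 0.75
```

Per-subject or per-country breakdowns then follow by computing the same ratio over each subset of questions.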