GCC AI Research

Egyptian Arabic to English Statistical Machine Translation System for NIST OpenMT'2015

arXiv · Notable

Summary

This paper describes the QCRI-Columbia-NYUAD group's Egyptian Arabic-to-English statistical machine translation system submitted to the NIST OpenMT'2015 competition. The system used the 3arrib and MADAMIRA tools to preprocess and normalize informal dialectal Arabic, and was trained as a phrase-based SMT system with features including an operation sequence model, a class-based language model, and a neural network joint model. Why it matters: The work demonstrates advances in machine translation for dialectal Arabic, a challenging but important area for regional communication and NLP research.


Related

QCRI Machine Translation Systems for IWSLT 16

arXiv

This paper describes QCRI's machine translation systems for the IWSLT 2016 evaluation campaign, focusing on the Arabic-English and English-Arabic tracks. QCRI built both phrase-based and neural machine translation models. A neural MT system, trained by stacking data from different genres via fine-tuning and ensembling 8 models, outperformed a strong phrase-based system by 2 BLEU points in the Arabic-to-English direction. Why it matters: The research highlights the early promise of neural machine translation for Arabic language pairs, demonstrating its potential to surpass traditional methods.
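To make the "2 BLEU points" comparison concrete, here is a minimal sketch of sentence-level BLEU in plain Python. This is a simplified illustration of the metric (clipped n-gram precisions plus a brevity penalty), not the evaluation script used in the paper; real campaigns use standardized tooling such as sacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, with counts."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        log_prec += math.log(max(clipped, 1e-9) / total)  # smooth zero matches
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))  # brevity penalty
    return 100 * bp * math.exp(log_prec / max_n)

ref = "the cat sat on the mat"
print(round(bleu(ref, ref), 1))              # identical strings score 100.0
print(round(bleu("a cat sat there", ref), 1))
```

A "2 BLEU point" gain means the neural system's corpus-level score on this 0-100 scale was two points higher than the phrase-based baseline's.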

Advancing Dialectal Arabic to Modern Standard Arabic Machine Translation

arXiv

This paper explores Dialectal Arabic (DA) to Modern Standard Arabic (MSA) machine translation using prompting and fine-tuning techniques for Levantine, Egyptian, and Gulf dialects. The study found that few-shot prompting outperformed zero-shot and chain-of-thought methods across six large language models, with GPT-4o achieving the highest performance. A quantized Gemma2-9B model achieved a chrF++ score of 49.88, outperforming zero-shot GPT-4o (44.58). Why it matters: The research provides a resource-efficient pipeline for DA-MSA translation, enabling more inclusive language technologies by addressing the challenges posed by dialectal variations in Arabic.
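The chrF++ scores quoted above are character-level F-scores. The sketch below is a simplified chrF computation (character n-gram F-beta with recall-favoring beta=2), written as an illustration only; the full chrF++ metric, as implemented in sacreBLEU, additionally averages in word unigram and bigram F-scores.

```python
from collections import Counter

def char_ngrams(text, n):
    """All character n-grams of a string, with counts."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chrF: mean character n-gram F-beta score over
    n = 1..max_n. beta=2 weights recall twice as much as precision."""
    f_scores = []
    for n in range(1, max_n + 1):
        hyp_c, ref_c = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum(min(c, ref_c[g]) for g, c in hyp_c.items())
        p = overlap / max(sum(hyp_c.values()), 1)  # n-gram precision
        r = overlap / max(sum(ref_c.values()), 1)  # n-gram recall
        f = (1 + beta**2) * p * r / (beta**2 * p + r) if p + r > 0 else 0.0
        f_scores.append(f)
    return 100 * sum(f_scores) / max_n

s = "ذهبت إلى السوق"
print(round(chrf(s, s), 1))  # identical strings score 100.0
```

Character-level metrics like this are popular for Arabic because they reward partially correct surface forms, which matters for a morphologically rich language with many valid spellings across dialects.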

Nile-Chat: Egyptian Language Models for Arabic and Latin Scripts

arXiv

The authors introduce Nile-Chat, a collection of LLMs (4B, 3x4B-A6B, and 12B) built specifically for the Egyptian dialect and capable of understanding and generating text in both Arabic and Latin scripts. A novel language adaptation approach based on the Branch-Train-MiX strategy merges script-specialized experts into a single mixture-of-experts (MoE) model. Nile-Chat models outperform multilingual and Arabic LLMs like LLaMa, Jais, and ALLaM on newly introduced Egyptian benchmarks, with the 12B model achieving a 14.4% performance gain over Qwen2.5-14B-Instruct on Latin-script benchmarks; all resources are publicly available. Why it matters: This work addresses the overlooked aspect of adapting LLMs to dual-script languages, providing a methodology for creating more inclusive and representative language models in the Arabic-speaking world.
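The Branch-Train-MiX idea, training experts separately and then merging them behind a router, can be illustrated with a deliberately tiny sketch. Everything here (the two stand-in "expert" functions and the hand-written router) is a hypothetical toy, not the authors' architecture; in a real MoE model the experts are transformer feed-forward layers and the router is a learned gating network.

```python
# Toy sketch of a mixture-of-experts layer, illustrating (in miniature)
# how separately trained, script-specialized experts can be combined
# behind a router. Stand-in functions replace real FFN experts.

def expert_a(x):
    """Stand-in for one script-specialized feed-forward expert."""
    return [2 * v for v in x]

def expert_b(x):
    """Stand-in for the other script-specialized expert."""
    return [v + 1 for v in x]

def toy_router(x):
    """Hand-written gate: weights over experts from a trivial feature.
    A real MoE learns these weights from the hidden state."""
    w_a = 1.0 if sum(x) > 0 else 0.0
    return [w_a, 1.0 - w_a]

def moe_layer(x, experts, router):
    """Weighted combination of every expert's output for input x."""
    weights = router(x)
    outputs = [e(x) for e in experts]
    return [sum(w * o[i] for w, o in zip(weights, outputs))
            for i in range(len(x))]

print(moe_layer([1.0, 2.0], [expert_a, expert_b], toy_router))    # → [2.0, 4.0]
print(moe_layer([-1.0, -2.0], [expert_a, expert_b], toy_router))  # → [0.0, -1.0]
```

The design point the toy captures: because each expert is trained in isolation first, merging preserves their specialized behavior, and only the routing decides which specialization an input activates.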

Aladdin-FTI @ AMIYA Three Wishes for Arabic NLP: Fidelity, Diglossia, and Multidialectal Generation

arXiv

The paper introduces Aladdin-FTI, a system designed for generating and translating dialectal Arabic (DA). Aladdin-FTI supports text generation in Moroccan, Egyptian, Palestinian, Syrian, and Saudi dialects. It also handles bidirectional translation between these dialects, Modern Standard Arabic (MSA), and English. Why it matters: This work contributes to addressing the under-representation of Arabic dialects in NLP research and enables more inclusive Arabic language models.