Researchers introduce Swan, a family of Arabic-centric embedding models including Swan-Small (based on ARBERTv2) and Swan-Large (based on ArMistral). They also propose ArabicMTEB, a benchmark suite for evaluating cross-lingual and multi-dialectal Arabic text embedding performance across 8 tasks and 94 datasets. Swan-Large achieves state-of-the-art results, outperforming Multilingual-E5-large on most Arabic tasks. Why it matters: The new models and benchmarks address a critical need for high-quality Arabic language models that are both dialectally and culturally aware, enabling more effective NLP applications in the region.
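Benchmarks like ArabicMTEB typically score an embedding model per task; for retrieval-style tasks this usually means embedding queries and documents, ranking by cosine similarity, and reporting a metric. A minimal sketch (the metric and toy vectors here are illustrative, not ArabicMTEB's actual protocol):

```python
import numpy as np

def cosine_sim_matrix(queries, docs):
    # Normalize rows so that a dot product gives cosine similarities.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return q @ d.T

def accuracy_at_1(queries, docs, gold):
    # gold[i] is the index of the relevant document for query i.
    sims = cosine_sim_matrix(queries, docs)
    preds = sims.argmax(axis=1)
    return float((preds == np.asarray(gold)).mean())

# Toy 2-D "embeddings" for three queries and three documents.
queries = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
docs = np.array([[0.9, 0.1], [0.1, 0.9], [1.0, 0.9]])
print(accuracy_at_1(queries, docs, gold=[0, 1, 2]))  # → 1.0
```

Real benchmark runs swap the toy arrays for model-produced embeddings and report task-appropriate metrics (e.g. nDCG for retrieval, accuracy for classification).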
This paper describes the QCRI-Columbia-NYUAD group's Egyptian Arabic-to-English statistical machine translation system submitted to the NIST OpenMT'2015 competition. The system used tools like 3arrib and MADAMIRA to process and standardize informal dialectal Arabic. It was trained as a phrase-based SMT system with features such as an operation sequence model, a class-based language model, and a neural network joint model. Why it matters: The work demonstrates advances in machine translation for dialectal Arabic, a challenging but important area for regional communication and NLP research.
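Phrase-based SMT systems of this kind combine such feature functions in a log-linear model: each hypothesis is scored as a weighted sum of feature values, and decoding searches for the highest-scoring translation. A minimal sketch with hypothetical feature names and weights (not the paper's tuned values):

```python
import math

def loglinear_score(features, weights):
    # Each feature h_i contributes lambda_i * h_i to the hypothesis score;
    # the decoder searches for the hypothesis maximizing this sum.
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical log-probability feature values and illustrative weights
# (real weights are tuned on a dev set, e.g. with MERT).
features = {"lm": math.log(0.02), "tm": math.log(0.10), "osm": math.log(0.05)}
weights = {"lm": 0.5, "tm": 0.3, "osm": 0.2}
print(round(loglinear_score(features, weights), 3))  # → -3.246
```

The operation sequence model, class-based LM, and neural joint model the paper mentions each enter as one more term in this sum.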
The ArabJobs dataset is a new corpus of over 8,500 Arabic job advertisements collected from Egypt, Jordan, Saudi Arabia, and the UAE. The dataset contains over 550,000 words and captures linguistic, regional, and socio-economic variation in the Arab labor market. It is available on GitHub and can be used for fairness-aware Arabic NLP and labor market research.
The paper introduces ADAB (Arabic Politeness Dataset), a new annotated Arabic dataset for politeness detection collected from online platforms. The dataset covers Modern Standard Arabic and multiple dialects (Gulf, Egyptian, Levantine, and Maghrebi). It contains 10,000 samples across 16 politeness categories and achieves substantial inter-annotator agreement (kappa = 0.703). Why it matters: This dataset addresses the under-explored area of Arabic-language resources for politeness detection, which is crucial for culturally aware NLP systems.
This paper introduces an enhanced Dense Passage Retrieval (DPR) framework tailored for Arabic text retrieval. The core innovation is an Attentive Relevance Scoring (ARS) mechanism that improves semantic relevance modeling between questions and passages, replacing standard interaction methods. The method integrates pre-trained Arabic language models and architectural refinements, achieving improved retrieval and ranking accuracy for Arabic question answering. Why it matters: This work addresses the underrepresentation of Arabic in NLP research by providing a novel approach and publicly available code to improve Arabic text retrieval, which can benefit various applications like Arabic search engines and question-answering systems.
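The paper's exact ARS formulation is not given in this summary; one plausible (but hypothetical) reading of an attention-based relevance score is to let each question token attend over passage tokens and aggregate the attention-weighted similarities, rather than comparing only pooled [CLS]-style vectors:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_relevance(q_tokens, p_tokens):
    # q_tokens: (m, d) question token embeddings; p_tokens: (n, d) passage.
    sims = q_tokens @ p_tokens.T           # (m, n) token-level similarities
    attn = softmax(sims, axis=1)           # each question token attends to the passage
    per_token = (attn * sims).sum(axis=1)  # attention-weighted similarity per token
    return float(per_token.mean())         # aggregate to one relevance score

# A matching passage should score higher than a mismatched one.
q = np.array([[1.0, 0.0], [0.0, 1.0]])
print(attentive_relevance(q, q) > attentive_relevance(q, -q))  # → True
```

This is a sketch of the general technique, not the paper's implementation; the released code is the authoritative reference.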
This paper describes QCRI's machine translation systems for the IWSLT 2016 evaluation campaign, focusing on the Arabic-English and English-Arabic tracks. They built both phrase-based and neural machine translation models. A neural MT system, fine-tuned on stacked data from different genres and combined as an ensemble of 8 models, outperformed a strong phrase-based system by 2 BLEU points in the Arabic-to-English direction. Why it matters: The research highlights the early promise of neural machine translation for Arabic language pairs, demonstrating its potential to surpass traditional methods.
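Ensembling NMT models is commonly done by averaging the models' per-step output distributions over the vocabulary before choosing the next token. A minimal sketch of that step (a generic technique, not necessarily QCRI's exact implementation):

```python
import numpy as np

def ensemble_next_token(step_probs):
    # step_probs: list of per-model probability vectors over the vocabulary
    # at the current decoding step. Averaging the distributions and taking
    # the argmax is the greedy version; beam search uses the same averaged
    # distribution to score candidate extensions.
    avg = np.mean(step_probs, axis=0)
    return int(avg.argmax())

# Toy 4-token vocabulary, three "models":
p1 = np.array([0.10, 0.60, 0.20, 0.10])
p2 = np.array([0.05, 0.55, 0.30, 0.10])
p3 = np.array([0.20, 0.20, 0.50, 0.10])
print(ensemble_next_token([p1, p2, p3]))  # → 1
```

Note that the ensemble can disagree with any single member: here model 3 alone would pick token 2, but the averaged distribution favors token 1.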