The paper introduces AraELECTRA, a new Arabic language representation model pre-trained with the replaced-token-detection objective on large Arabic text corpora. The model is evaluated on several Arabic NLP tasks, including reading comprehension, sentiment analysis, and named-entity recognition. Why it matters: AraELECTRA outperforms prior state-of-the-art Arabic language representation models despite using the same pretraining data and a smaller model size, advancing Arabic NLP.
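The replaced-token-detection objective can be sketched minimally: a generator corrupts some input tokens, and a discriminator learns to flag every position as original or replaced. A hedged toy sketch (uniform random replacement stands in for ELECTRA's learned masked-LM generator; all names here are illustrative, not from the AraELECTRA code):

```python
import random

def corrupt(tokens, vocab, replace_prob=0.15, seed=0):
    """Toy 'generator': replace some tokens and record per-token labels.

    Returns the corrupted sequence and labels (1 = replaced, 0 = original).
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            # Swap in a different token from the vocabulary.
            corrupted.append(rng.choice([v for v in vocab if v != tok]))
            labels.append(1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

# The discriminator is then trained to predict, at every position, whether
# the token is original or replaced -- a dense binary signal over all
# tokens, unlike masked-LM loss, which only covers the masked positions.
vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
sent = ["the", "cat", "sat", "on", "the", "mat"]
corrupted, labels = corrupt(sent, vocab, replace_prob=0.5)
```

The design point is that every token contributes to the loss, which is part of why ELECTRA-style pretraining is sample-efficient relative to masked language modeling.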
DERC is partnering with EPFL in Switzerland on a four-year project applying electromagnetic time reversal (EMTR) and machine learning (ML) to localize electromagnetic disturbances in printed circuit boards (PCBs). Professor Farhad Rachidi (EPFL) and Dr. Nicolas Mora (DERC) will mentor a PhD student. The collaboration builds on prior relationships between DERC researchers and Prof. Rachidi's lab. Why it matters: The partnership strengthens DERC's methodological expertise and international recognition in electromagnetic studies, potentially leading to further collaborations.
The ArabJobs dataset is a new corpus of over 8,500 Arabic job advertisements collected from Egypt, Jordan, Saudi Arabia, and the UAE. The dataset contains over 550,000 words and captures linguistic, regional, and socio-economic variation in the Arab labor market. It is available on GitHub. Why it matters: The corpus enables fairness-aware Arabic NLP and data-driven labor market research.
This paper introduces GigaBERT, a customized bilingual BERT model pre-trained for Arabic NLP and English-to-Arabic zero-shot transfer learning. The study evaluates GigaBERT's performance on four information extraction tasks: named entity recognition, part-of-speech tagging, argument role labeling, and relation extraction. Results show that GigaBERT outperforms mBERT, XLM-RoBERTa, and AraBERT in both supervised and zero-shot transfer settings. Why it matters: GigaBERT advances Arabic NLP by providing a high-performing, publicly available model tailored for the complexities of the Arabic language and cross-lingual applications.
The article discusses parameter-efficient fine-tuning methods for large NLP models, highlighting their importance due to the increasing size and computational demands of state-of-the-art language models. It provides an overview of these methods, presenting them in a unified view to emphasize their similarities and differences. Indraneil, a PhD candidate at TU Darmstadt's UKP Lab, is researching parameter-efficient fine-tuning, sparsity, and conditional computation methods to improve LLM performance in multilingual, multi-task settings. Why it matters: Efficient fine-tuning techniques are crucial for democratizing access to and accelerating the deployment of large language models in the region and beyond.
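One widely used method in the family the article surveys is a LoRA-style low-rank update: the pretrained weight matrix stays frozen, and only two small matrices are trained. A minimal numpy sketch (illustrative only; the article presents a unified view of many such methods, and the sizes and names below are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                          # hidden size, low rank (r << d)
W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def forward(x):
    # Frozen path plus a low-rank trainable correction: (W + B @ A) @ x.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d)
y = forward(x)

# Only A and B are updated during fine-tuning: 2*d*r parameters
# instead of d*d for full fine-tuning of W.
trainable = A.size + B.size
```

Because B starts at zero, the adapted model is exactly the base model at initialization, and the trainable parameter count scales with the rank r rather than with the full weight size.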
Technology Innovation Institute (TII) has launched Electromagnetic Compatibility (EMC) laboratories in Abu Dhabi, the first such facility in the Arab world. The facility at TII's Directed Energy Research Center (DERC) includes three labs: an EMC semi-anechoic chamber, a pulsed power laboratory, and a low-noise emanation laboratory. These labs will enable evaluation of technologies against electromagnetic interference and support in-country R&D and local industry in line with UAE's 'Operation 300bn'. Why it matters: This advanced infrastructure signals the UAE's commitment to fostering innovation in electronics and related sectors, reducing reliance on foreign testing and certification.
The paper introduces AlcLaM, an Arabic dialectal language model trained on 3.4M sentences collected from social media. AlcLaM expands the vocabulary of an existing BERT-based model and continues its pre-training on only 13GB of dialectal text. Despite this comparatively small training corpus, AlcLaM outperforms models such as CAMeL, MARBERT, and ArBERT on various Arabic NLP tasks. Why it matters: AlcLaM offers a more efficient and accurate approach to Arabic NLP by focusing on dialectal Arabic, which is often underrepresented in existing models.
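The vocabulary-expansion step can be illustrated generically: new dialectal tokens are appended to the vocabulary, and the embedding table grows matching rows before pre-training continues. A hedged sketch of the general technique (toy tokens, sizes, and initialization; not AlcLaM's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = {"[UNK]": 0, "kitab": 1}           # toy existing vocabulary
emb = rng.normal(size=(len(vocab), 4))     # existing embedding table (dim 4)

new_tokens = ["3ayez", "mnih"]             # hypothetical dialectal tokens
for tok in new_tokens:
    if tok not in vocab:
        vocab[tok] = len(vocab)            # assign the next free id

# Grow the embedding table: old rows are kept intact, and the new rows are
# randomly initialized, to be learned during continued pre-training on
# dialectal text.
extra = rng.normal(size=(len(vocab) - emb.shape[0], emb.shape[1])) * 0.02
emb = np.vstack([emb, extra])
```

Keeping the original rows untouched preserves what the base model already knows, while the new rows let frequent dialectal words get dedicated embeddings instead of being split into subwords or mapped to `[UNK]`.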