GCC AI Research

Results for "vocabulary expansion"

Second Language (Arabic) Acquisition of LLMs via Progressive Vocabulary Expansion

arXiv

This paper introduces AraLLaMA, a new Arabic large language model (LLM) trained with a progressive vocabulary expansion method inspired by second language acquisition. The model uses a modified byte-pair encoding (BPE) algorithm to dynamically add Arabic subwords to its vocabulary during training, keeping the out-of-vocabulary (OOV) ratio under control. Experiments show AraLLaMA achieves performance comparable to existing Arabic LLMs on a variety of benchmarks, and all models, data, and code will be open-sourced. Why it matters: This work addresses the need for more accessible and performant Arabic LLMs, contributing to the democratization of AI in the Arab world.
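The paper's modified BPE algorithm is not reproduced here, but the core idea — growing the vocabulary until the OOV ratio falls below a threshold — can be illustrated with a toy sketch. All names, thresholds, and the token-level stand-in for BPE merges are assumptions for illustration, not the paper's method:

```python
from collections import Counter

def oov_ratio(tokens, vocab):
    """Fraction of corpus tokens not covered by the current vocabulary."""
    if not tokens:
        return 0.0
    return sum(t not in vocab for t in tokens) / len(tokens)

def expand_vocab(corpus_tokens, vocab, target_oov=0.05, batch=2):
    """Greedily admit the most frequent out-of-vocabulary tokens
    until the OOV ratio drops to the target. Whole tokens stand in
    for learned BPE merges in this illustrative sketch."""
    vocab = set(vocab)
    while oov_ratio(corpus_tokens, vocab) > target_oov:
        counts = Counter(t for t in corpus_tokens if t not in vocab)
        if not counts:
            break
        for tok, _ in counts.most_common(batch):
            vocab.add(tok)
    return vocab
```

In the paper this expansion is progressive across training, so the model sees a gradually richer Arabic subword inventory rather than a fixed one from the start.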

Retrieval Augmentation as a Shortcut to the Training Data

MBZUAI

This article discusses retrieval augmentation in text generation, in which information retrieved from an external source is used to condition a model's predictions. It references recent work on retrieval-augmented image captioning showing that model size can be greatly reduced when the training data remains available through retrieval. The author plans to continue this work, focusing on the intersection of retrieval augmentation and in-context learning, and on controllable image captioning for language-learning materials. Why it matters: This research direction has the potential to improve transfer learning in vision-language models, which could be especially relevant for downstream applications in Arabic NLP and multimodal tasks.
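The conditioning step described above can be sketched minimally: retrieve the training examples most similar to the input, then prepend them as context for the generator. This is an illustrative sketch using word-overlap similarity, not the article's actual retriever or prompt format:

```python
def jaccard(a, b):
    """Word-overlap (Jaccard) similarity between two strings."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve(query, datastore, k=1):
    """Return the k stored training examples most similar to the query."""
    return sorted(datastore, key=lambda d: jaccard(query, d), reverse=True)[:k]

def augmented_prompt(query, datastore, k=1):
    """Condition generation on retrieved text by prepending it as context."""
    context = " ".join(retrieve(query, datastore, k))
    return f"Context: {context}\nInput: {query}\nOutput:"
```

In practice the retriever would use learned embeddings rather than word overlap, but the shortcut to the training data works the same way: what the model would otherwise have to memorize, it can look up.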

Machine learning and natural language processing in support of interactive automated tutoring for non-native

MBZUAI

Ted Briscoe from the University of Cambridge discussed using machine learning and NLP to develop learning-oriented assessment (LOA) for non-native writers. The technology is used in Cambridge English courseware like Empower and Linguaskill, as well as Write and Improve. Briscoe is also the co-founder and CEO of iLexIR Ltd. Why it matters: Improving automated language assessment could significantly enhance online language learning platforms in the Arab world and beyond.

A new approach to improve vision-language models

MBZUAI

MBZUAI researchers have developed a new approach to improve the generalizability of vision-language models on out-of-distribution data. The study, led by Sheng Zhang and involving multiple MBZUAI professors and researchers, addresses the challenge that AI applications must cope with circumstances not anticipated during development. The new method aims to improve how these models, which combine natural language processing and computer vision, handle information not seen during training. Why it matters: Improving the adaptability of vision-language models is critical for real-world AI applications such as autonomous driving and medical imaging, especially in diverse and changing environments.

Culturally Yours: A new tool for understanding cultural references in text

MBZUAI

MBZUAI researchers have developed "Culturally Yours," a reading assistant that highlights and explains culturally specific items on webpages to help users understand unfamiliar terms. The tool addresses the "cold-start" problem by asking users for demographic information, which it uses to personalize the identification of potentially unfamiliar cultural references. It was presented at the 31st International Conference on Computational Linguistics (COLING 2025) in Abu Dhabi. Why it matters: This tool can help bridge linguistic and cultural gaps, particularly for underrepresented languages and cultures, and aid businesses in reaching diverse audiences.
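The demographic-based cold-start idea can be sketched simply: given a table of which audiences are assumed to know each cultural item, flag the items a particular user likely will not. The lookup table, region labels, and function below are all hypothetical stand-ins, not the tool's actual data or logic:

```python
# Hypothetical familiarity table: regions assumed to know each cultural item.
# The real tool personalizes from richer demographic signals; these entries
# are illustrative only.
FAMILIARITY = {
    "iftar": {"MENA", "South Asia"},
    "majlis": {"MENA"},
    "thanksgiving": {"North America"},
}

def flag_unfamiliar(text, user_region):
    """Return cultural references likely unfamiliar to a user from user_region."""
    return [w for w in text.lower().split()
            if w in FAMILIARITY and user_region not in FAMILIARITY[w]]
```

A reading assistant would then attach a short explanation to each flagged term rather than merely listing it.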

NLP “dream team” on the agenda

MBZUAI

MBZUAI has appointed Professor Timothy Baldwin as Associate Provost and acting chair of its new NLP Department. Baldwin will focus on strengthening the curriculum and building a world-class faculty team. He previously spent 17 years at the University of Melbourne. Why it matters: The recruitment signals MBZUAI's commitment to becoming a leading center for NLP research and education in the region.

The evolving of Data Science and the Saudi Arabia case. How much have we changed in 13 years?

arXiv

This study analyzes the evolution of data-science vocabulary using 16,018 abstracts containing the phrase "data science" published over 13 years. It tracks the introduction of new terms and their integration into the scientific literature using techniques such as exploratory data analysis (EDA), latent semantic analysis (LSA), latent Dirichlet allocation (LDA), and n-gram analysis. The research compares overall scientific publications with those specific to Saudi Arabia, identifying representative articles based on vocabulary usage. Why it matters: The work provides insights into the development of data science terminology and its specific adoption within the Saudi Arabian research landscape.
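Tracking when a term enters a corpus, the basic operation behind this kind of vocabulary-evolution analysis, can be sketched as follows. This is an illustrative unigram version under assumed inputs (a dict of year to abstracts), not the study's pipeline:

```python
def yearly_term_counts(abstracts_by_year, term):
    """Occurrences of a unigram term in each year's abstracts."""
    return {year: sum(a.lower().split().count(term) for a in abstracts)
            for year, abstracts in abstracts_by_year.items()}

def first_appearance(abstracts_by_year, term):
    """Earliest year a term appears in the corpus, or None if absent."""
    counts = yearly_term_counts(abstracts_by_year, term)
    hits = [y for y in sorted(counts) if counts[y] > 0]
    return hits[0] if hits else None
```

Extending the same counts to bigrams and trigrams, and comparing the Saudi-affiliated subset against the full corpus, gives the kind of adoption timelines the study reports.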

Creating Arabic LLM Prompts at Scale

arXiv

This paper introduces two methods for creating Arabic LLM prompts at scale: translating existing English prompt datasets into Arabic, and generating natural language prompts from existing Arabic NLP datasets. Using these methods, the authors produced more than 67.4 million Arabic prompts covering tasks such as summarization and question answering. A 7B Qwen2 model fine-tuned on these prompts outperforms a 70B Llama3 model at handling Arabic prompts. Why it matters: The research provides a cost-effective approach to scaling Arabic LLM training data, potentially improving the performance of smaller, more accessible models for Arabic NLP.
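The second method, turning labeled NLP datasets into natural language prompts, amounts to filling task templates with dataset fields. The templates and field names below are illustrative assumptions, not the paper's actual templates:

```python
# Illustrative Arabic prompt templates (not the paper's actual templates).
TEMPLATES = {
    "summarization": "لخص النص التالي:\n{text}",
    "qa": "أجب عن السؤال بناءً على النص التالي.\nالنص: {text}\nالسؤال: {question}",
}

def make_prompts(task, records):
    """Turn labeled NLP records into (prompt, answer) pairs for fine-tuning."""
    return [(TEMPLATES[task].format(**record), record["answer"])
            for record in records]
```

Applied across many datasets and templates, this kind of mapping is how tens of millions of prompts can be generated without manual annotation.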