GCC AI Research


Results for "low-resource languages"

Challenges in low-resourced NLP: an Irish case study

MBZUAI ·

Dr. Teresa Lynn of Dublin City University (DCU) discussed the challenges of developing NLP tools for Irish, a low-resource language facing digital extinction. She highlighted the scarcity of speech and language applications for Irish, as well as of the fundamental language resources needed to build them. Lynn also described her work at DCU on the GaelTech project and her involvement in the European Language Equality project. Why it matters: Developing NLP tools for low-resource languages like Irish is crucial for preserving linguistic diversity and preventing digital marginalization in the AI era.

Addressing NLP problems in low resource settings

MBZUAI ·

Thamar Solorio from the University of Houston will discuss machine learning approaches to processing spontaneous human language. The talk will cover adapting multilingual transformers to code-switched data and using data augmentation for domain adaptation in sequence labeling tasks. Solorio will also give an overview of other research projects at the RiTUAL lab, all centered on the scarcity of labeled data. Why it matters: This presentation addresses key challenges in Arabic NLP related to data scarcity, a persistent obstacle to developing effective AI applications for the region.

Navigating NLP for Underrepresented Languages: Dataset Challenges, Efficient Techniques, and Evaluations

MBZUAI ·

MBZUAI's Dr. Fajri Koto presented research on overcoming challenges in NLP for underrepresented languages. His work includes building multilingual datasets for Indonesian languages by engaging native speakers, finding that datasets composed directly in those languages yield better results than datasets translated from other languages. He also discussed vocabulary adaptation and zero-shot learning as ways to work around limited computational resources, and emphasized the importance of datasets with local context for evaluating LLMs. Why it matters: This research addresses critical gaps in NLP for low-resource languages, providing insights and techniques to improve the performance and cultural relevance of multilingual AI models both within the region and globally.

Processing language like a human

MBZUAI ·

MBZUAI's Hanan Al Darmaki is working to improve automatic speech recognition (ASR) for low-resource languages, where labeled data is scarce. She notes that Arabic presents unique challenges due to its dialectal variation and the lack of written resources corresponding to spoken dialects. Al Darmaki's research focuses on unsupervised speech recognition to address this gap. Why it matters: Overcoming these challenges can make virtual assistants more effective across diverse languages and enable more inclusive AI applications in the Arabic-speaking world.

A Panoramic Survey of Natural Language Processing in the Arab World

arXiv ·

This survey paper reviews the landscape of Natural Language Processing (NLP) research and applications in the Arab world. It discusses the unique challenges posed by the Arabic language, such as its morphological complexity and dialectal diversity. The paper also presents a historical overview of Arabic NLP and surveys various research areas, including machine translation, sentiment analysis, and speech recognition. Why it matters: The survey provides a comprehensive resource for researchers and practitioners interested in the current state and future directions of Arabic NLP, a field critical for enabling AI technologies to serve Arabic-speaking communities.

Study on the paradox of ‘low-resource’ languages wins Outstanding Paper Award at EMNLP

MBZUAI ·

A study co-authored by researchers from UC Berkeley, University of the Witwatersrand, Lelapa AI, and MBZUAI received the Outstanding Paper Award at EMNLP 2024. The paper critiques the term "low-resource" languages in NLP, highlighting its limitations in capturing the diverse challenges faced by different languages. The authors propose a more detailed analysis of resourcedness to encourage targeted support for languages currently underserved by technology. Why it matters: The research challenges assumptions in NLP and promotes more nuanced approaches to supporting the world's many languages, including Arabic, in AI systems.

Towards Inclusive NLP: Assessing Compressed Multilingual Transformers across Diverse Language Benchmarks

arXiv ·

This paper benchmarks multilingual and monolingual LLMs across Arabic, English, and Indic languages, examining the effects of model-compression techniques such as pruning and quantization. Multilingual models outperform their language-specific counterparts, demonstrating cross-lingual transfer. Quantization preserves accuracy while improving efficiency, but aggressive pruning degrades performance, particularly in larger models. Why it matters: The findings point to strategies for scalable and fair multilingual NLP, addressing hallucination and generalization errors in low-resource languages.