Ekaterina Vylomova from the University of Melbourne gave a talk on using NLP models to advance research in linguistic morphology, typology, and social psychology. The talk covered the use of such models to study morphology, phonetic change in words over time, and diachronic shifts in word meaning. Vylomova presented the UniMorph project, a cross-lingual annotation schema and database of morphological paradigms covering over 150 languages. Why it matters: This research demonstrates the potential of NLP to contribute to a deeper understanding of language evolution and structure, with applications in linguistic research and the study of social and cultural changes.
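UniMorph data is distributed as tab-separated triples of lemma, inflected form, and semicolon-delimited morphological feature tags. As a minimal sketch (the example entry and helper function below are illustrative, not part of any official UniMorph tooling):

```python
# Parse a UniMorph-style entry: a tab-separated triple of
# lemma, inflected form, and semicolon-delimited feature tags.

def parse_unimorph_line(line: str) -> dict:
    """Split one UniMorph entry into lemma, surface form, and feature set."""
    lemma, form, features = line.rstrip("\n").split("\t")
    return {"lemma": lemma, "form": form, "features": set(features.split(";"))}

# Illustrative paradigm cell: the past tense of the English verb "run"
entry = parse_unimorph_line("run\tran\tV;PST")
print(entry["form"])                # ran
print(sorted(entry["features"]))    # ['PST', 'V']
```

Because every language in the database shares this schema, the same parsing logic works across all 150+ languages, which is what makes UniMorph useful for cross-lingual morphology research.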
MBZUAI doctoral student Hawau Toyin is applying AI to the identification, correction, and evaluation of stuttering, particularly in developing countries where it often goes undiagnosed. She is collaborating with the SpeechCare Center UAE and her advisor Dr. Hanan Aldarmaki to develop AI tools for faster and more accessible diagnosis and treatment. The research focuses on data collection from around the world to build an effective AI system that can analyze the various forms of stuttering. Why it matters: This research addresses a critical healthcare gap by leveraging AI to improve diagnosis and treatment of speech disorders in underserved regions.
A research talk was given on privacy and security issues in speech processing, highlighting the unique privacy challenges due to the biometric information embedded in speech. The talk covered the legal landscape, proposed solutions like cryptographic and hashing-based methods, and adversarial processing techniques. Dr. Bhiksha Raj from Carnegie Mellon University, an expert in speech and audio processing, delivered the talk. Why it matters: As speech-based interfaces become more prevalent in the Middle East, understanding and addressing the associated privacy risks is crucial for ethical AI development and deployment.
MBZUAI student Karima Kadaoui is developing machine learning algorithms to help speech-impaired individuals communicate more easily. Her project aims to create an app that translates impaired speech into clear, understandable language, facilitating communication with others and integration with voice-enabled technologies like Siri and Google Assistant. The AI-powered app could assist individuals affected by conditions such as stroke and cerebral palsy, which often impair the muscle control needed for clear speech. Why it matters: The research addresses a critical need for inclusive AI solutions, potentially improving the quality of life for speech-impaired individuals in the region and beyond.
MBZUAI's Hanan Aldarmaki is working to improve automated speech recognition (ASR) for low-resource languages, where labeled data is scarce. She notes that Arabic presents unique challenges due to dialectal variation and a lack of written resources corresponding to spoken dialects. Aldarmaki's research focuses on unsupervised speech recognition to address this gap. Why it matters: Overcoming these challenges can improve virtual assistant effectiveness across diverse languages and enable more inclusive AI applications in the Arabic-speaking world.
This survey paper reviews the landscape of Natural Language Processing (NLP) research and applications in the Arab world. It discusses the unique challenges posed by the Arabic language, such as its morphological complexity and dialectal diversity. The paper also presents a historical overview of Arabic NLP and surveys various research areas, including machine translation, sentiment analysis, and speech recognition. Why it matters: The survey provides a comprehensive resource for researchers and practitioners interested in the current state and future directions of Arabic NLP, a field critical for enabling AI technologies to serve Arabic-speaking communities.
Pedro J. Moreno, former head of ASR R&D at Google, presented a talk at MBZUAI on the past, present, and future of speech technologies. The talk covered the evolution of speech tech, his career contributions including work on Google Voice Search, and the impact of LLMs on speech science. He also discussed the interplay between foundational and applied research and preparing the next generation of scientists. Why it matters: The talk provides insights into the trajectory of speech technologies from a leading researcher, highlighting future directions and the ethical considerations surrounding AI's impact on society.
The Qatar Computing Research Institute (QCRI) has released SpokenNativQA, a multilingual spoken question-answering dataset for evaluating LLMs in conversational settings. The dataset contains 33,000 naturally spoken questions and answers across multiple languages, including low-resource and dialect-rich languages. It aims to address the limitations of text-based QA datasets by incorporating speech variability, accents, and linguistic diversity. Why it matters: This benchmark enables more robust evaluation of LLMs in speech-based interactions, particularly for Arabic dialects and other low-resource languages.