GCC AI Research

I see what you’re saying: the Abu Dhabi AI researchers making video dubbing sync

MBZUAI · Notable

Summary

Researchers at MBZUAI have developed Auto-DUB, a system using deep learning, natural language processing (NLP), and computer vision (CV) to improve audio-visual dubbing, particularly for educational videos. The three-step process generates subtitles, creates an audio representation, and synchronizes the audio with lip movements. The system aims to overcome language barriers in e-learning by providing accurate translations and lip-synced audio. Why it matters: This research addresses a critical need in online education by making content more accessible to non-native English speakers, potentially expanding access to global educational resources in the Arab world.
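The three-step process above can be sketched as a simple pipeline. This is an illustrative stand-in, not Auto-DUB's actual implementation: the function names, data shapes, and the fixed three-second subtitle windows are all assumptions made for the example.

```python
# Hypothetical sketch of a three-step dubbing pipeline (subtitles -> audio
# -> lip sync). Names and data layouts are assumptions, not MBZUAI's API.

def generate_subtitles(transcript: str, target_lang: str) -> list[dict]:
    """Step 1: produce timed, translated subtitle segments (stubbed:
    real systems would run ASR + machine translation here)."""
    segments = [s.strip() for s in transcript.split(". ") if s.strip()]
    return [
        {"start": i * 3.0, "end": i * 3.0 + 3.0, "text": seg, "lang": target_lang}
        for i, seg in enumerate(segments)
    ]

def synthesize_audio(subtitles: list[dict]) -> list[dict]:
    """Step 2: render each subtitle into an audio clip (stubbed TTS)."""
    return [{**s, "audio": f"<speech:{s['text']}>"} for s in subtitles]

def sync_to_lips(clips: list[dict]) -> list[dict]:
    """Step 3: fit each clip to its subtitle window so the dubbed audio
    lines up with on-screen lip movements."""
    for clip in clips:
        clip["duration"] = clip["end"] - clip["start"]
    return clips

dubbed = sync_to_lips(synthesize_audio(
    generate_subtitles("Welcome to the course. Today we study graphs.", "ar")))
```

Keeping the stages separate like this lets each one (translation quality, voice quality, timing) be improved or evaluated independently.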

Keywords

MBZUAI · dubbing · e-learning · NLP · CV


Related

MBZUAI team wins top prize at inaugural Arabic Natural Language Processing Conference

MBZUAI ·

An MBZUAI team won the best paper award at the inaugural Arabic Natural Language Processing Conference for their work on processing Arabic speech. Their study establishes a new approach to tackling the complexities of spoken Arabic, which differs significantly from the written Arabic that text-based language models are trained on. The team's approach aims to advance new tools for Arabic speakers by addressing challenges like intonation and the continuous nature of speech. Why it matters: This award highlights the importance of specialized research in Arabic NLP, as mainstream LLMs often face limitations in accurately processing the nuances of Arabic speech.

The future of audio AI: adoption use cases powering the Middle East

MBZUAI ·

ElevenLabs, a voice AI research and product company, presented at MBZUAI's Incubation and Entrepreneurship Center (IEC) on the adoption of audio AI in the Middle East. Hussein Makki, general manager for the Middle East at ElevenLabs, highlighted the potential of voice-native AI across sectors like telecommunications, banking, and education. ElevenLabs focuses on making content accessible and engaging across languages and voices through its text-to-speech models. Why it matters: This signals growing interest and investment in voice AI applications within the region, potentially transforming customer service and content accessibility in Arabic.

VideoMolmo: Spatio-Temporal Grounding Meets Pointing

arXiv ·

Researchers from MBZUAI have introduced VideoMolmo, a large multimodal model for spatio-temporal pointing conditioned on textual descriptions. The model incorporates a temporal module with an attention mechanism and a temporal mask fusion pipeline using SAM2 for improved coherence across video sequences. They also curated a dataset of 72k video-caption pairs and introduced VPoS-Bench, a benchmark for evaluating generalization across real-world scenarios, with code and models publicly available.
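The temporal mask fusion idea can be illustrated with a minimal sketch: per-frame segmentation masks (here random stand-ins for SAM2 outputs) are blended with the previous frame's fused mask so predictions stay coherent across the video. The blending weight `alpha` and the whole fusion rule are assumptions for illustration, not VideoMolmo's actual pipeline.

```python
import numpy as np

def fuse_masks(frame_masks: list[np.ndarray], alpha: float = 0.7) -> list[np.ndarray]:
    """Blend each frame's mask with the running fused mask, then binarize.
    Higher alpha trusts the current frame more; lower alpha smooths harder."""
    fused: list[np.ndarray] = []
    prev = None
    for mask in frame_masks:
        cur = mask.astype(float)
        if prev is not None:
            cur = alpha * cur + (1 - alpha) * prev  # temporal smoothing
        prev = cur
        fused.append(cur > 0.5)  # binarize the smoothed mask
    return fused

# Stand-in masks: three 4x4 boolean frames from a seeded RNG.
rng = np.random.default_rng(0)
masks = [rng.random((4, 4)) > 0.5 for _ in range(3)]
coherent = fuse_masks(masks)
```

The smoothing suppresses single-frame flicker, which is the kind of cross-frame coherence the paper's fusion module targets.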

Text-to-speech system brings real-time speech to LLMs

MBZUAI ·

MBZUAI researchers developed LLMVoX, a system enabling LLMs to produce real-time speech, including Arabic. LLMVoX addresses limitations of existing end-to-end and cascaded pipeline approaches, which suffer from either degraded reasoning or high latency. LLMVoX was developed as part of Project OMER, which was recently awarded a Regional Research Grant from Meta. Why it matters: This enhances the potential of LLMs to function as more natural, multimodal virtual assistants, especially for Arabic-speaking users in the Middle East.
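The latency trade-off can be made concrete with a sketch of a decoupled streaming pipeline in this spirit: text tokens stream out of the LLM while a separate TTS stage speaks them chunk by chunk, so audio starts before the full reply exists. All names here are hypothetical stand-ins, not the LLMVoX API.

```python
# Illustrative decoupled streaming pipeline: the LLM and the TTS stage are
# separate, so the LLM's reasoning is untouched while speech begins early.

def llm_stream(prompt: str):
    """Stand-in for an LLM emitting tokens incrementally."""
    for token in f"Answer to: {prompt}".split():
        yield token

def speak_stream(tokens, chunk_size: int = 3) -> list[str]:
    """Buffer tokens into small chunks and 'synthesize' each immediately,
    instead of waiting for the complete response."""
    buffer: list[str] = []
    spoken: list[str] = []
    for tok in tokens:
        buffer.append(tok)
        if len(buffer) == chunk_size:
            spoken.append("<speech:" + " ".join(buffer) + ">")
            buffer.clear()
    if buffer:  # flush any trailing partial chunk
        spoken.append("<speech:" + " ".join(buffer) + ">")
    return spoken

clips = speak_stream(llm_stream("What is dubbing?"))
```

Smaller chunks lower the time to first audio at the cost of less prosodic context per synthesis call; that tension is exactly what streaming TTS systems have to balance.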