GCC AI Research

This Week G42

G42 Releases Nanda 87B, Opening New Frontiers in Hindi-English Language AI

G42 · Significant research

Summary

G42 has launched Nanda 87B, an open-source Hindi-English LLM developed by MBZUAI in collaboration with Inception and Cerebras. Nanda 87B is built upon Llama-3.1 70B and trained on a dataset containing over 65 billion Hindi tokens. The model is engineered for real-world use, handling formal Hindi, casual speech, and Hinglish, and is designed for translation, summarization, instruction-following, and transliteration tasks. Why it matters: This release marks a major advancement in creating inclusive AI technology tailored to one of the world's largest linguistic communities.

Keywords

G42 · MBZUAI · Nanda 87B · Hindi · LLM


Related

UAE to deploy 8 exaflop supercomputer in India to strengthen local sovereign AI infrastructure

MBZUAI

G42 and Cerebras, in partnership with MBZUAI and C-DAC, will deploy an 8 exaflop AI supercomputer in India. The system will operate under India's governance frameworks, with all data remaining within national jurisdiction to meet sovereign security and compliance requirements. The supercomputer will be accessible to Indian researchers, startups, and government entities under the India AI Mission.

PALO: A Polyglot Large Multimodal Model for 5B People

arXiv

Researchers introduce PALO, a polyglot large multimodal model with visual reasoning capabilities in 10 major languages, including Arabic. A semi-automated translation approach was used to adapt the multimodal instruction dataset from English to the target languages. The models are trained at three scales (1.7B, 7B, and 13B parameters), and a multilingual multimodal benchmark is proposed for evaluation.

BiMediX: Bilingual Medical Mixture of Experts LLM

arXiv

MBZUAI researchers introduce BiMediX, a bilingual (English and Arabic) mixture-of-experts LLM for medical applications. The model is trained on BiMed1.3M, a new bilingual instruction dataset of 1.3 million samples, and outperforms existing models such as Med42 and Jais-30B on medical benchmarks. Code and models are available on GitHub.