IFM has released K2-V2, a 70B-class LLM that takes a "360-open" approach, making its weights, training data, training details, intermediate checkpoints, and fine-tuning recipes publicly available. K2-V2 matches the performance of leading open-weight models while offering full transparency, in contrast to proprietary and semi-open Chinese models. Independent evaluations position K2-V2 as a high-performance, fully open-source alternative in the AI landscape. Why it matters: K2-V2 gives developers a transparent, reproducible foundation model, fostering trust and enabling customization without sacrificing performance, which is crucial for sensitive applications in the region.
MBZUAI's Institute of Foundation Models has released K2, a 70-billion-parameter, reasoning-centric foundation model designed to be fully inspectable: its weights, training code, data composition, mid-training checkpoints, and evaluation harnesses are all open. K2 outperforms Qwen2.5-72B and approaches the performance of Qwen3-235B. Why it matters: This release promotes transparency and reproducibility in AI development, giving researchers the resources they need to study, adapt, and build upon a strong foundation model.
Hamad Bin Khalifa University (HBKU) has released Fanar 2.0, the second generation of Qatar's Arabic-centric generative AI platform, built entirely at QCRI. At its core is Fanar-27B, continually pre-trained from a Gemma-3-27B backbone on 120 billion high-quality tokens using only 256 NVIDIA H100 GPUs. Fanar 2.0 also includes FanarGuard (moderation), Aura (speech recognition), Oryx (vision understanding), Fanar-Sadiq (Islamic content), Fanar-Diwan (poetry generation), and FanarShaheen (translation). Why it matters: This demonstrates that sovereign Arabic-language AI can be developed under tight resource constraints while producing systems competitive in the region.
MBZUAI's Institute of Foundation Models (IFM) has released K2 Think V2, a 70-billion-parameter open-source general reasoning model built on K2 V2 Instruct. The model excels on complex reasoning benchmarks such as AIME2025 and GPQA-Diamond, and pairs a low hallucination rate with long-context reasoning capabilities. K2 Think V2 is fully sovereign and open, from pre-training through post-training, and was trained on IFM-curated data and a Guru dataset. Why it matters: This release helps close the gap between community-owned, reproducible AI and proprietary models, particularly in reasoning and long-context understanding for Arabic NLP tasks.
MBZUAI is a global partner in Meta's release of Llama 2, joining organizations such as IBM, AWS, Microsoft, and NVIDIA. MBZUAI will provide early feedback and help build the software alongside the global community. The university is already active in large language models, developing a sustainable LLM named Vicuna and strengthening infrastructure for LLM chat evaluation. Why it matters: MBZUAI's involvement promises a new generation of UAE-born AI advances built around the Llama 2 ecosystem, along with fact-checking capabilities.
Researchers from MBZUAI have released MobiLlama, a fully transparent, open-source 0.5-billion-parameter Small Language Model (SLM). MobiLlama is designed for resource-constrained devices, emphasizing strong performance with reduced resource demands. The full training data pipeline, code, model weights, and checkpoints are available on GitHub. Why it matters: A fully transparent, compact model enables on-device deployment while letting researchers reproduce and audit every stage of training.