MBZUAI's Institute of Foundation Models (IFM) has released K2-V2, a 70B-class LLM that takes a "360-open" approach, making its weights, data, training details, checkpoints, and fine-tuning recipes publicly available. K2-V2 matches the performance of leading open-weight models while offering full transparency, in contrast to proprietary and semi-open Chinese models. Independent evaluations position K2-V2 as a high-performance, fully open-source alternative in the AI landscape. Why it matters: K2-V2 provides developers with a transparent and reproducible foundation model, fostering trust and enabling customization without sacrificing performance, which is crucial for sensitive applications in the region.
MBZUAI's Institute of Foundation Models has released K2, a 70-billion-parameter, reasoning-centric foundation model. K2 is designed to be fully inspectable, with open weights, training code, data composition, mid-training checkpoints, and evaluation harnesses. K2 outperforms Qwen2.5-72B and approaches the performance of Qwen3-235B. Why it matters: This release promotes transparency and reproducibility in AI development, providing researchers with the resources needed to study, adapt, and build upon a strong foundation model.
MBZUAI's Institute of Foundation Models (IFM) has released K2 Think V2, a 70-billion-parameter open-source general reasoning model built on K2-V2 Instruct. The model excels on complex reasoning benchmarks such as AIME2025 and GPQA-Diamond, and pairs a low hallucination rate with long-context reasoning capabilities. K2 Think V2 is fully sovereign and open, from pre-training through post-training, using IFM-curated data and the Guru dataset. Why it matters: This release helps close the gap between community-owned, reproducible AI and proprietary models, particularly in reasoning and long-context understanding for Arabic NLP tasks.
MBZUAI, G42, and Cerebras Systems have launched K2 Think V2, a 70-billion-parameter reasoning system built on the K2-V2 base model. K2 Think V2 is fully open-source, from pre-training data to post-training alignment, ensuring transparency and reproducibility. It achieves leading results on complex reasoning benchmarks such as AIME2025 and GPQA-Diamond. Why it matters: This release marks a significant advance in the UAE's AI capabilities, demonstrating leadership in building globally accessible and fully sovereign AI systems focused on reasoning.
MBZUAI, Petuum, and LLM360 have launched K2-65B, an open-source 65-billion-parameter LLM trained on 1.4T tokens using 480 A100 GPUs. K2-65B outperforms Llama 2 70B while using 35% fewer resources, emphasizing sustainable AI development. The model and its chat variant, K2-Chat, excel in math, coding, medicine, and human-like response generation, and the model is available under the Apache 2.0 license. Why it matters: This launch highlights the UAE's growing capability to develop efficient, high-performing LLMs, promoting open-source collaboration and setting new standards for sustainable AI practices in the region.