IFM has released K2-V2, a 70B-class LLM that takes a "360-open" approach by making its weights, data, training details, checkpoints, and fine-tuning recipes publicly available. K2-V2 matches leading open-weight model performance while offering full transparency, in contrast with proprietary and semi-open Chinese models. Independent evaluations position K2-V2 as a high-performance, fully open-source alternative in the AI landscape. Why it matters: K2-V2 gives developers a transparent and reproducible foundation model, fostering trust and enabling customization without sacrificing performance, which is crucial for sensitive applications in the region.
MBZUAI has launched the Institute of Foundation Models (IFM) with a new Silicon Valley Lab in Sunnyvale, CA, joining existing facilities in Paris and Abu Dhabi. The launch event showcased PAN, a world model for simulating diverse realities from multimodal inputs. The IFM lab is also advancing the K2-65B and JAIS AI systems. Why it matters: This expansion extends MBZUAI's global presence and connects it to Silicon Valley's AI ecosystem, supporting the UAE's economic diversification through advanced AI technologies.
MBZUAI's Institute of Foundation Models (IFM) has released K2 Think V2, a 70-billion-parameter open-source general reasoning model built on K2 V2 Instruct. The model excels on complex reasoning benchmarks such as AIME2025 and GPQA-Diamond, and combines a low hallucination rate with long-context reasoning capabilities. K2 Think V2 is fully sovereign and open, from pre-training through post-training, using IFM-curated data and the GURU dataset. Why it matters: This release helps close the gap between community-owned, reproducible AI and proprietary models, particularly in reasoning and long-context understanding for Arabic NLP tasks.
MBZUAI's Institute of Foundation Models (IFM) has launched five new specialized language and multimodal models: BiMediX, PALO, GLaMM, GeoChat, and MobiLLaMA. These models address real-world applications in healthcare, visual reasoning, multilingual capabilities, geospatial analysis, and mobile-device efficiency. BiMediX is a bilingual medical LLM, while GLaMM produces natural-language responses grounded to objects in an image at the pixel level. Why it matters: This launch demonstrates MBZUAI's commitment to advancing AI research and developing practical AI solutions for various industries, especially with a focus on Arabic language capabilities.
KAUST Ph.D. student Chiheb Ben Hammouda won the best poster award at the Society for Industrial and Applied Mathematics Conference on Financial Mathematics & Engineering (FM19) for his work on option pricing under the rough Bergomi model. The winning poster, titled "Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model," details research carried out under the supervision of KAUST Professor Raul Tempone. The research group designed new efficient numerical methods for pricing derivatives under the rough Bergomi model by combining smoothing techniques with hierarchical adaptive sparse grids and quasi-Monte Carlo quadrature. Why it matters: This award highlights KAUST's growing expertise in financial mathematics and its contribution to solving complex problems in the field using advanced numerical methods.
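To illustrate the quasi-Monte Carlo idea behind the award-winning work, here is a minimal sketch of QMC option pricing. It deliberately uses simple Black-Scholes dynamics rather than the rough Bergomi model (which requires simulating a rough volatility path), and a basic van der Corput low-discrepancy sequence rather than the poster's hierarchical adaptive sparse grids; all function names are illustrative, not from the research.

```python
from math import exp, sqrt
from statistics import NormalDist

def van_der_corput(n, base=2):
    """n-th point of the van der Corput low-discrepancy sequence in (0, 1)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def qmc_call_price(s0, strike, r, sigma, t, n=4096):
    """QMC estimate of a European call price under Black-Scholes dynamics.

    Low-discrepancy points replace pseudo-random draws: they cover (0, 1)
    more evenly, so the integration error typically shrinks faster than
    the O(n^-1/2) rate of plain Monte Carlo.
    """
    inv_cdf = NormalDist().inv_cdf          # map uniforms to standard normals
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * sqrt(t)
    total = 0.0
    for i in range(1, n + 1):               # start at 1: inv_cdf(0) is undefined
        z = inv_cdf(van_der_corput(i))
        s_t = s0 * exp(drift + vol * z)     # terminal stock price
        total += max(s_t - strike, 0.0)     # call payoff
    return exp(-r * t) * total / n          # discounted average payoff
```

With `s0=100, strike=100, r=0.05, sigma=0.2, t=1.0`, the estimate lands close to the Black-Scholes closed-form value of about 10.45. The rough Bergomi setting is far harder because the payoff is a non-smooth functional of a rough volatility path, which is exactly why the group's smoothing techniques matter before sparse grids or QMC can pay off.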
MBZUAI researchers at the Institute of Foundation Models (IFM) investigated the role of reinforcement learning (RL) in improving the reasoning abilities of language models. Their study found that RL acts as an "elicitor" of reasoning in domains frequently encountered during pre-training (e.g., math, coding), while genuinely teaching new reasoning skills in underrepresented domains (e.g., logic, simulations). To support their analysis, they created a new dataset called GURU containing 92,000 examples across six domains. Why it matters: This research clarifies the impact of reinforcement learning on language model reasoning, paving the way for models with more generalizable reasoning abilities across diverse domains, an important direction for more capable AI systems.
Researchers at ETH Zurich have formalized models of the EMV payment protocol using the Tamarin prover, a symbolic model checker. They discovered flaws allowing attackers to bypass PIN requirements for high-value purchases on EMV cards from networks such as Mastercard and Visa. The team also collaborated with an EMV consortium member to verify the improved EMV Kernel C-8 protocol. Why it matters: This research highlights the importance of formal methods in identifying critical vulnerabilities in widely used payment systems, potentially impacting financial security for consumers in the GCC region and worldwide.