GCC AI Research
Results for "Foundation Model"

Understanding and improving foundation models: from environmental risk to social responsibility

MBZUAI

Dr. Jindong Wang from Microsoft Research Asia gave a talk at MBZUAI on the limitations of large foundation models, including their difficulty adapting to real-world unpredictability and their security vulnerabilities. He also discussed the need for interdisciplinary collaboration to evaluate the benefits and risks of these models. Dr. Wang shared his research and insights on harnessing the power of large foundation models while addressing their constraints and fostering responsible AI integration. Why it matters: This highlights MBZUAI's role in hosting discussions about responsible AI development and the challenges of deploying foundation models.

UAE’s Technology Innovation Institute Launches ‘Falcon Foundation’ to Champion Open-sourcing of Generative AI Models

TII

The Technology Innovation Institute (TII) in Abu Dhabi has launched the Falcon Foundation, a non-profit dedicated to advancing open-source generative AI models. TII is committing $300 million to fund open-source AI projects, beginning with its Falcon models. The foundation aims to foster collaboration among developers, academia, and industry to promote transparent governance and knowledge exchange in AI. Why it matters: This initiative signals the UAE's commitment to leading in AI development through open-source innovation and collaboration, potentially accelerating AI adoption and customization across sectors.

TII, LightOn Partner to Build NOOR Platform for Exascale Computing for Foundation Models

TII

TII and LightOn have partnered to build the NOOR Platform for exascale computing, aimed at developing foundation models. The collaboration will leverage LightOn's expertise in large language models, with the first output being the largest Arabic language model to date. The platform will provide high-quality data pipelines and facilitate extreme-scale distributed training and serving. Why it matters: This partnership aims to establish Abu Dhabi as a center of AI excellence and boost the UAE's ambitions in high-tech innovation and NLP research.

MBZUAI Launches Institute of Foundation Models and Establishes Silicon Valley AI Lab

MBZUAI

MBZUAI has launched the Institute of Foundation Models (IFM) with a new Silicon Valley lab in Sunnyvale, CA, joining existing facilities in Paris and Abu Dhabi. The launch event showcased PAN, a world model for simulating diverse realities from multimodal inputs. The institute is also advancing the K2-65B and JAIS AI systems. Why it matters: This expansion enhances MBZUAI's global presence and connects it with a critical AI ecosystem, supporting the UAE's economic diversification through advanced AI technologies.

Adapting foundation models for medical image segmentation: a new approach presented at MICCAI

MBZUAI

MBZUAI researchers developed a method to adapt Meta's Segment Anything Model (SAM) for medical image segmentation, closing the gap between its strong performance on natural images and its weaker results on medical scans. Their approach improves SAM's accuracy without requiring extensive retraining or large medical image datasets. The research, led by Chao Qin, was nominated for the Best Paper Award at the MICCAI conference in Marrakesh. Why it matters: This offers a more efficient and effective way to apply foundation models to specialized medical imaging, potentially improving diagnostic accuracy and reducing the need for large-scale, domain-specific training data.
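
The article doesn't spell out the adaptation mechanism, so the snippet below is only a minimal PyTorch sketch of the general parameter-efficient recipe such work typically uses: freeze the pretrained encoder and train small residual adapters. Every dimension and module name here is an illustrative assumption, not MBZUAI's code.

```python
# Minimal sketch of adapter-based tuning (an assumed recipe, not the
# paper's exact method). A stand-in encoder plays the role of SAM's
# pretrained ViT; only the tiny adapters are trained.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small residual bottleneck placed after each frozen block."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.nn.functional.gelu(self.down(x)))

# Stand-in for the pretrained image encoder (kept frozen).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=4,
)
for p in encoder.parameters():
    p.requires_grad = False

adapters = nn.ModuleList(Adapter(256) for _ in range(4))

def forward_with_adapters(patch_tokens: torch.Tensor) -> torch.Tensor:
    x = patch_tokens
    for block, adapter in zip(encoder.layers, adapters):
        x = adapter(block(x))  # adapt each frozen block's output
    return x

tokens = torch.randn(2, 196, 256)        # a batch of patch embeddings
features = forward_with_adapters(tokens)  # (2, 196, 256)
trainable = sum(p.numel() for p in adapters.parameters())
print(features.shape, f"trainable params: {trainable}")
```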

TerraFM: A Scalable Foundation Model for Unified Multisensor Earth Observation

arXiv

MBZUAI researchers introduce TerraFM, a scalable self-supervised learning model for Earth observation that uses Sentinel-1 and Sentinel-2 imagery. The model unifies radar and optical inputs through modality-specific patch embeddings and adaptive cross-attention fusion. TerraFM achieves strong generalization on classification and segmentation tasks, outperforming prior models on GEO-Bench and Copernicus-Bench.
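
As a rough illustration of that design (not the actual TerraFM code), the sketch below gives each sensor its own patch embedding and fuses the token streams with gated cross-attention. The band counts match public Sentinel-1 (2 SAR polarizations) and Sentinel-2 (13 optical bands) products; all other shapes and the gating scheme are assumptions.

```python
# Illustrative sketch: modality-specific patch embeddings plus
# cross-attention fusion, loosely following the summary above.
import torch
import torch.nn as nn

dim = 256
embed_s1 = nn.Conv2d(2, dim, kernel_size=16, stride=16)   # Sentinel-1 SAR
embed_s2 = nn.Conv2d(13, dim, kernel_size=16, stride=16)  # Sentinel-2 optical
attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
gate = nn.Parameter(torch.zeros(1))  # learnable mixing weight (assumed "adaptive" part)

def to_tokens(conv: nn.Conv2d, x: torch.Tensor) -> torch.Tensor:
    return conv(x).flatten(2).transpose(1, 2)  # (B, C, H, W) -> (B, N, dim)

s1 = torch.randn(4, 2, 224, 224)   # radar batch
s2 = torch.randn(4, 13, 224, 224)  # optical batch
t1, t2 = to_tokens(embed_s1, s1), to_tokens(embed_s2, s2)

# Optical tokens attend to radar tokens; a symmetric block could
# attend the other way in a full model.
update, _ = attn(query=t2, key=t1, value=t1)
fused = t2 + torch.tanh(gate) * update  # gated residual fusion
print(fused.shape)  # torch.Size([4, 196, 256])
```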

K2: An open-source model that delivers frontier capabilities

MBZUAI

MBZUAI's Institute of Foundation Models has released K2, a 70-billion-parameter, reasoning-centric foundation model. K2 is designed to be fully inspectable, with open weights, training code, data composition, mid-training checkpoints, and evaluation harnesses. K2 outperforms Qwen2.5-72B and approaches the performance of Qwen3-235B. Why it matters: This release promotes transparency and reproducibility in AI development, providing researchers with the resources needed to study, adapt, and build upon a strong foundation model.
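
For readers who want to inspect such an open-weights release, the standard Hugging Face transformers pattern looks like the sketch below. The repository id is a placeholder assumption; the actual identifier should be taken from the official release page.

```python
# Sketch of loading and querying an open-weights model with the
# Hugging Face transformers library. The repo id below is a
# placeholder, not a verified identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MBZUAI-IFM/K2"  # placeholder; check the official release
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, device_map="auto", torch_dtype="auto"
)

prompt = "Why does releasing training data matter for reproducibility?"
inputs = tok(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(output[0], skip_special_tokens=True))
```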

K2-V2: Full Openness Finally Meets Real Performance

MBZUAI

IFM has released K2-V2, a 70B-class LLM that takes a "360-open" approach, making its weights, data, training details, checkpoints, and fine-tuning recipes publicly available. K2-V2 matches the performance of leading open-weight models while offering full transparency, in contrast to proprietary and semi-open Chinese models. Independent evaluations position K2-V2 as a high-performance, fully open-source alternative in the AI landscape. Why it matters: K2-V2 gives developers a transparent and reproducible foundation model, fostering trust and enabling customization without sacrificing performance, which is crucial for sensitive applications in the region.
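
A common way to exercise the customization that full openness enables is LoRA fine-tuning with the peft library, sketched below. The repo id is a placeholder, and the target module names assume a Llama-style attention layout; neither is taken from IFM's published recipes.

```python
# Hedged sketch of LoRA customization on an open base model using the
# peft library; repo id and target modules are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "MBZUAI-IFM/K2-V2",  # placeholder repo id
    torch_dtype="auto",
)
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumes Llama-style attention
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights train
```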