GCC AI Research


Results for "Open-source"

K2-V2: Full Openness Finally Meets Real Performance

MBZUAI ·

MBZUAI's Institute of Foundation Models (IFM) has released K2-V2, a 70B-class LLM that takes a "360-open" approach, making its weights, data, training details, checkpoints, and fine-tuning recipes publicly available. K2-V2 matches the performance of leading open-weight models while offering full transparency, in contrast with proprietary and semi-open Chinese models. Independent evaluations position K2-V2 as a high-performance, fully open-source alternative in the AI landscape. Why it matters: K2-V2 gives developers a transparent and reproducible foundation model, fostering trust and enabling customization without sacrificing performance, which is crucial for sensitive applications in the region.

KAUST's Technology Transfer and Innovation (TTI) Department opens doors for scientist's open-source software

KAUST ·

KAUST's Technology Transfer and Innovation (TTI) department has facilitated the release of KUBE, an open-source benchmarking framework developed by Craig Kapfer and his team. KUBE lets users analyze the performance of software applications and high-performance computing (HPC) systems over time, using user-defined metrics. The software integrates with batch-scheduling tools and provides historical reporting and visualization of performance over time. Why it matters: This release provides a valuable tool for optimizing applications and systems, potentially enhancing research and development in computational labs and computing centers in Saudi Arabia and beyond.
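The core idea of such a framework, recording a user-defined metric for each benchmark run and comparing it against its own history, can be sketched as follows. This is a minimal illustration of the concept only; the function and field names here are hypothetical and do not come from KUBE's actual API.

```python
import statistics
import time

def run_benchmark(workload, metric):
    """Execute a workload and record a user-defined metric with a timestamp.

    `metric` is any callable mapping (result, elapsed_seconds) to a number,
    which is what makes the metric user-defined rather than fixed.
    """
    start = time.perf_counter()
    result = workload()
    elapsed = time.perf_counter() - start
    return {"timestamp": time.time(), "metric": metric(result, elapsed)}

def trend(history):
    """Compare the latest run's metric against the mean of all earlier runs.

    For a time-like metric, a positive value suggests a regression.
    """
    values = [run["metric"] for run in history]
    baseline = statistics.mean(values[:-1])
    return values[-1] - baseline

# Example: plain wall-clock time as the user-defined metric.
history = [
    run_benchmark(lambda: sum(range(100_000)),
                  lambda result, elapsed: elapsed)
    for _ in range(5)
]
print(f"latest vs. baseline: {trend(history):+.6f}s")
```

A real framework would persist the history between batch jobs and feed it to a plotting layer; the sketch keeps everything in memory to show only the metric-over-time comparison.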

K2: An open source model that delivers frontier capabilities

MBZUAI ·

MBZUAI's Institute of Foundation Models has released K2, a 70-billion-parameter, reasoning-centric foundation model. K2 is designed to be fully inspectable, with open weights, training code, data composition, mid-training checkpoints, and evaluation harnesses. K2 outperforms Qwen2.5-72B and approaches the performance of Qwen3-235B. Why it matters: This release promotes transparency and reproducibility in AI development, providing researchers with the resources needed to study, adapt, and build upon a strong foundation model.

UAE's Falcon 40B is now Royalty-Free

TII ·

The Technology Innovation Institute (TII) in the UAE has made its Falcon 40B large language model royalty-free for commercial and research use. Falcon 40B is ranked #1 on Hugging Face's leaderboard for LLMs, outperforming models like LLaMA. The model is now available under the Apache 2.0 license, promoting open access and collaboration in AI. Why it matters: This decision could accelerate AI innovation in the region by providing easier access to a state-of-the-art LLM for both public and private sector applications.

UAE’s Technology Innovation Institute Launches ‘Falcon Foundation’ to Champion Open-sourcing of Generative AI Models

TII ·

The Technology Innovation Institute (TII) in Abu Dhabi has launched the Falcon Foundation, a non-profit dedicated to advancing open-source generative AI models. TII is committing $300 million to fund open-source AI projects, beginning with its Falcon AI models. The foundation aims to foster collaboration among stakeholders, developers, academia, and industry to promote transparent governance and knowledge exchange in AI. Why it matters: This initiative signals the UAE's commitment to leading in AI development through open-source innovation and collaboration, potentially accelerating AI adoption and customization across various sectors.

MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT

arXiv ·

Researchers from MBZUAI have released MobiLlama, a fully transparent, open-source 0.5-billion-parameter Small Language Model (SLM). MobiLlama is designed for resource-constrained devices, emphasizing strong performance with reduced resource demands. The full training data pipeline, code, model weights, and checkpoints are available on GitHub. Why it matters: A fully transparent, lightweight model lowers the barrier to on-device deployment while enabling researchers to reproduce and study every stage of training.

KVL releases new open source to visualize supercomputer simulations

KAUST ·

KAUST's Visualization Core Lab (KVL) has released inshimtu, a pseudo in situ visualization system for scientists working with large datasets and supercomputer simulations. Inshimtu simplifies the implementation of in situ visualization by using existing simulation output files without requiring changes to the simulation code. It helps scientists determine if implementing a full in situ visualization into their code is worthwhile. Why it matters: This open-source tool can improve the efficiency of supercomputing research in the region by allowing researchers to assess the value of in situ visualization before fully committing to it.

Short course on the development of open-source machine learning packages

MBZUAI ·

MBZUAI is hosting a short course on developing open-source machine learning packages. The course will be led by Chih-Jen Lin, an affiliated professor at MBZUAI and distinguished professor at National Taiwan University, who has developed widely used ML packages like LIBSVM and LibMultiLabel. The course will cover topics such as starting a project, choosing functionalities, and identifying research problems from user feedback. Why it matters: This course can help improve the quality and usability of open-source machine learning tools coming from the region's research institutions.