GCC AI Research


Results for "Mistral-7B"

TII Launches Falcon H1R 7B: Best-in-Class 7B Reasoning Model That Also Outperforms Larger Models

TII

Technology Innovation Institute (TII) has launched Falcon H1R 7B, an open-source 7B parameter AI model with reasoning capabilities. It outperforms larger models like Microsoft Phi 4 Reasoning Plus 14B, Alibaba Qwen3 32B, and NVIDIA Nemotron H 47B on key benchmarks. The model uses a hybrid Transformer–Mamba architecture for improved accuracy and speed and is available on Hugging Face under the Falcon TII License. Why it matters: This release highlights the UAE's growing role in AI innovation by providing an efficient and accessible model for global research and development.

Arabic Large Language Models for Medical Text Generation

arXiv

This study explores fine-tuning large language models (LLMs) for Arabic medical text generation to improve hospital management systems. A unique dataset was collected from social media, capturing medical conversations between patients and doctors, and used to fine-tune models including Mistral-7B, LLaMA-2-7B, and GPT-2. The fine-tuned Mistral-7B model outperformed the others with a BERTScore F1 of 68.5%. Why it matters: The research demonstrates the potential of generative AI to provide scalable and culturally relevant solutions for healthcare challenges in Arabic-speaking regions.
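For readers unfamiliar with the metric, BERTScore compares candidate and reference texts at the level of contextual token embeddings rather than exact word overlap. The following is a minimal sketch of its greedy-matching F1, using toy 3-dimensional vectors in place of real BERT embeddings (the vectors and dimensionality here are illustrative, not from the study):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bertscore_f1(cand_emb, ref_emb):
    """Greedy-matching BERTScore: precision matches each candidate
    token embedding to its most similar reference token embedding,
    recall does the reverse, and F1 is their harmonic mean."""
    precision = sum(max(cosine(c, r) for r in ref_emb)
                    for c in cand_emb) / len(cand_emb)
    recall = sum(max(cosine(c, r) for c in cand_emb)
                 for r in ref_emb) / len(ref_emb)
    return 2 * precision * recall / (precision + recall)

# Toy 3-d "embeddings" standing in for contextual BERT vectors.
ref = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
cand = [[0.9, 0.1, 0.0], [0.0, 0.8, 0.2]]
print(round(bertscore_f1(cand, ref), 3))  # close to 1.0: near-paraphrase
```

In practice the embeddings come from a pretrained encoder (for Arabic text, a multilingual or Arabic BERT), which is what lets the metric credit paraphrases that token-overlap metrics like BLEU would penalize.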

K2-V2: Full Openness Finally Meets Real Performance

MBZUAI

MBZUAI's Institute of Foundation Models (IFM) has released K2-V2, a 70B-class LLM that takes a "360-open" approach by making its weights, data, training details, checkpoints, and fine-tuning recipes publicly available. K2-V2 matches the performance of leading open-weight models while offering full transparency, contrasting with proprietary and semi-open Chinese models. Independent evaluations show K2 as a high-performance, fully open-source alternative in the AI landscape. Why it matters: K2-V2 provides developers with a transparent and reproducible foundation model, fostering trust and enabling customization without sacrificing performance, which is crucial for sensitive applications in the region.

K2 Think V2: a fully sovereign reasoning model

MBZUAI

MBZUAI's Institute of Foundation Models (IFM) has released K2 Think V2, a 70-billion-parameter open-source general reasoning model built on K2 V2 Instruct. The model excels on complex reasoning benchmarks like AIME 2025 and GPQA-Diamond, and features a low hallucination rate and long-context reasoning capabilities. K2 Think V2 is fully sovereign and open, from pre-training through post-training, using IFM-curated data and the Guru dataset. Why it matters: This release contributes to closing the gap between community-owned reproducible AI and proprietary models, particularly in reasoning and long-context understanding for Arabic NLP tasks.

Fanar 2.0: Arabic Generative AI Stack

arXiv

Hamad Bin Khalifa University (HBKU) has released Fanar 2.0, the second generation of Qatar's Arabic-centric Generative AI platform, built entirely at QCRI. The core of Fanar 2.0 is Fanar-27B, which was continually pre-trained from a Gemma-3-27B backbone using 120 billion high-quality tokens and only 256 NVIDIA H100 GPUs. Fanar 2.0 includes capabilities like FanarGuard, Aura, Oryx, Fanar-Sadiq, Fanar-Diwan, and FanarShaheen for moderation, speech recognition, vision understanding, Islamic content, poetry generation, and translation. Why it matters: This shows that sovereign, resource-constrained AI development in the Arabic language is possible, producing competitive systems in the region.
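The "resource-constrained" claim can be sanity-checked with a back-of-envelope calculation using the standard C ≈ 6·N·D approximation for dense-transformer training FLOPs. The per-GPU peak throughput and utilization figures below are assumptions for illustration, not numbers from the article:

```python
# Rough training-cost estimate for Fanar-27B's continual pre-training.
# Assumptions (not from the article): H100 BF16 dense peak of
# 989 TFLOP/s and 40% model FLOPs utilization (MFU).
params = 27e9          # model parameters N (Gemma-3-27B backbone)
tokens = 120e9         # continual pre-training tokens D
gpus = 256             # NVIDIA H100s, per the article
peak_flops = 989e12    # per-GPU BF16 dense peak, FLOP/s (assumed)
mfu = 0.40             # assumed utilization

total_flops = 6 * params * tokens                   # ~1.9e22 FLOPs
seconds = total_flops / (gpus * peak_flops * mfu)
days = seconds / 86400
print(f"{total_flops:.2e} FLOPs, roughly {days:.1f} days on {gpus} H100s")
```

Under these assumptions the run is on the order of a few days of cluster time, which is consistent with the article's point that a competitive sovereign Arabic model can be produced on a modest GPU budget.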

K2: An open source model that delivers frontier capabilities

MBZUAI

MBZUAI's Institute of Foundation Models has released K2, a 70-billion-parameter, reasoning-centric foundation model. K2 is designed to be fully inspectable, with open weights, training code, data composition, mid-training checkpoints, and evaluation harnesses. K2 outperforms Qwen2.5-72B and approaches the performance of Qwen3-235B. Why it matters: This release promotes transparency and reproducibility in AI development, providing researchers with the resources needed to study, adapt, and build upon a strong foundation model.