GCC AI Research


Results for "small models"

Knowledge distillation and the greening of LLMs

MBZUAI ·

Researchers from MBZUAI, University of British Columbia, and Monash University have created LaMini-LM, a collection of small language models distilled from ChatGPT. LaMini-LM is trained on a dataset of 2.58M instructions and can be deployed on consumer laptops and mobile devices. The smaller models perform almost as well as larger counterparts while addressing security concerns. Why it matters: This work enables the deployment of LLMs in resource-constrained environments and enhances data security by reducing reliance on cloud-based LLMs.
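LaMini-LM's distillation is sequence-level: the small student is fine-tuned directly on teacher-generated instruction-response pairs. For readers new to the concept, the classic logit-matching formulation of knowledge distillation (Hinton et al.) is easy to sketch in plain Python. This is an illustrative toy, not the authors' pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# The loss is zero when the student matches the teacher exactly,
# and grows as the student's distribution drifts away:
assert distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) < 1e-9
assert distillation_loss([3.0, 1.0, 0.0], [0.0, 1.0, 3.0]) > 0
```

In practice the student minimizes this KL term (or, as in LaMini-LM, an ordinary cross-entropy loss on teacher-written text), inheriting much of the teacher's behavior at a fraction of the parameter count.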

Sadeed: Advancing Arabic Diacritization Through Small Language Model

arXiv ·

The paper introduces Sadeed, a fine-tuned decoder-only language model for improved Arabic text diacritization, based on the Kuwain 1.5B model (Hennara et al.). Sadeed is fine-tuned on high-quality diacritized datasets and achieves competitive results compared to larger proprietary models. The authors also introduce SadeedDiac-25, a new benchmark for fairer evaluation of Arabic diacritization across diverse text genres. Why it matters: This work advances Arabic NLP by providing both a competitive diacritization model and a more robust evaluation benchmark, facilitating further research and development in the field.

Alumni Spotlight: Making AI accessible for all

MBZUAI ·

MBZUAI alumnus Ahmed Sharshar is developing smaller AI models to make the technology more accessible, especially in resource-constrained environments like Egypt. His master's thesis involved creating an app that assesses lung health using mobile phone video analysis, eliminating the need for traditional medical devices. Sharshar is pursuing his Ph.D. at MBZUAI, focusing on lightweight and energy-efficient models for various applications. Why it matters: Democratizing AI through smaller, efficient models can enable broader applications and innovation across diverse sectors in the Middle East and beyond.

MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT

arXiv ·

Researchers from MBZUAI have released MobiLlama, a fully transparent open-source 0.5 billion parameter Small Language Model (SLM). MobiLlama is designed for resource-constrained devices, aiming for strong accuracy while keeping compute and memory demands low. The full training data pipeline, code, model weights, and checkpoints are available on GitHub.

Falcon 3: UAE’s Technology Innovation Institute Launches World’s most Powerful Small AI Models that can also be run on Light Infrastructures, including Laptops

TII ·

The Technology Innovation Institute (TII) in Abu Dhabi has launched Falcon 3, a new series of open-source language models. Falcon 3 models range in size from 1B to 10B parameters and have been trained on 14 trillion tokens. Falcon 3 achieved the top spot on Hugging Face's LLM leaderboard for models under 13 billion parameters. Why it matters: This release democratizes access to high-performance AI by enabling efficient operation on laptops and light infrastructure, solidifying the UAE's position as a leader in open-source AI development.

Training Deep Neural Networks in Tiny Subspaces

MBZUAI ·

Xiaolin Huang from Shanghai Jiao Tong University presented a talk at MBZUAI on training deep neural networks in tiny subspaces. The talk covered the low-dimension hypothesis in neural networks and methods for finding subspaces that support efficient training, and suggested that training in smaller subspaces can improve training efficiency, generalization, and robustness. Why it matters: Investigating efficient training methods is crucial for resource-constrained environments and can enable broader access to advanced AI.
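The core idea can be sketched in a few lines: instead of updating all parameters freely, each gradient step is projected onto a fixed low-dimensional subspace, so the model only ever moves along a handful of directions. This is a minimal toy in plain Python, not the speaker's actual method (which extracts the subspace from the training trajectory):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def subspace_gd(grad_fn, w, basis, lr=0.1, steps=200):
    """Gradient descent restricted to span(basis).
    `basis` is assumed orthonormal; coordinates outside the
    subspace are never updated."""
    for _ in range(steps):
        g = grad_fn(w)
        # coordinates of the gradient in the subspace
        coeffs = [dot(g, b) for b in basis]
        # projection of the gradient back into full parameter space
        proj = [sum(c * b[i] for c, b in zip(coeffs, basis))
                for i in range(len(w))]
        w = [wi - lr * pi for wi, pi in zip(w, proj)]
    return w

# Toy objective: ||w - target||^2 in 3-D, trained in a 2-D subspace.
target = [1.0, 2.0, 3.0]
grad_fn = lambda w: [2 * (wi - ti) for wi, ti in zip(w, target)]
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
w = subspace_gd(grad_fn, [0.0, 0.0, 0.0], basis)
# w ends near [1, 2, 0]: the third coordinate never moves,
# because no basis vector points along it.
```

With far fewer free directions than parameters, each step is cheaper to represent and the optimization is easier to analyze, which is the intuition behind the efficiency and robustness claims above.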