GCC AI Research


Results for "LaMini"

Knowledge distillation and the greening of LLMs

MBZUAI ·

Researchers from MBZUAI, the University of British Columbia, and Monash University have created LaMini-LM, a collection of small language models distilled from ChatGPT. LaMini-LM is trained on a dataset of 2.58M instructions and can be deployed on consumer laptops and mobile devices. The distilled models perform nearly as well as much larger counterparts, and because they run locally, they avoid sending user data to cloud-hosted LLMs. Why it matters: This work enables the deployment of LLMs in resource-constrained environments and enhances data security by reducing reliance on cloud-based LLMs.
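As a minimal sketch of what local deployment can look like, the snippet below loads a distilled LaMini checkpoint with the Hugging Face transformers library. The model id is an assumption drawn from the LaMini model releases, not from this announcement; any of the published LaMini checkpoints would work the same way.

```python
# Minimal local-inference sketch for a distilled LaMini model.
# Assumes: `pip install transformers torch` and the model id
# "MBZUAI/LaMini-Flan-T5-248M" (a published LaMini checkpoint,
# not named in the announcement above).
from transformers import pipeline

generator = pipeline(
    "text2text-generation",             # LaMini-Flan-T5 is an encoder-decoder model
    model="MBZUAI/LaMini-Flan-T5-248M", # ~248M parameters, runs on a consumer laptop CPU
)

response = generator(
    "Explain knowledge distillation in one sentence.",
    max_new_tokens=64,
)
print(response[0]["generated_text"])
```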

CRC Seminar Series - Conor McMenamin

TII ·

Conor McMenamin from Universitat Pompeu Fabra presented a seminar on State Machine Replication (SMR) without honest participants. The talk covered the limitations of current SMR protocols and introduced the ByRa model, a framework for characterizing players that does not assume any honest participants. He then described FAIRSICAL, a sandbox SMR protocol, and discussed how the ideas could be extended to real-world protocols, with a focus on blockchains and cryptocurrencies. Why it matters: This research on SMR protocols and their incentive compatibility could lead to more robust and secure blockchain technologies in the region.
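To make the incentive problem concrete, here is a toy expected-payoff calculation. It is not FAIRSICAL or the ByRa formalism, and all numbers are invented; it only illustrates why a rational (rather than honest-by-assumption) participant may deviate when penalties are weak, which is the failure mode this line of work targets.

```python
# Toy payoff sketch (invented numbers, not the seminar's protocol):
# a rational validator compares following the protocol against deviating.

def expected_payoff(follow: bool, reward: float, deviation_gain: float,
                    penalty: float, detect_prob: float) -> float:
    """Expected payoff for one validator in one round."""
    if follow:
        return reward
    # A deviating player pockets the extra gain but risks a penalty.
    return reward + deviation_gain - detect_prob * penalty

print(expected_payoff(True,  reward=1.0, deviation_gain=0.5, penalty=2.0, detect_prob=0.1))  # 1.0
print(expected_payoff(False, reward=1.0, deviation_gain=0.5, penalty=2.0, detect_prob=0.1))  # 1.3

# Deviating pays 1.3 > 1.0, so "honest" behavior is not an equilibrium here.
# Incentive-compatible SMR design aims to make following the protocol the
# strictly best strategy for every rational player.
```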

Technology Innovation Institute’s Directed Energy Research Center Unveils First-in-GCC Region Laser-Matter Interaction Laboratory

TII ·

Technology Innovation Institute's (TII) Directed Energy Research Center (DERC) in Abu Dhabi has launched the GCC region's first Laser-Matter Interaction (LMI) Laboratory. The LMI Lab, part of DERC's Laser, Photonics, and Optoelectronics Division, will investigate laser interactions with matter. This lab will enable local research and development in laser materials processing, plasma physics, and nanotechnology, reducing reliance on foreign outsourcing. Why it matters: This regional first enhances the UAE's position in advanced technology research and expands the application of lasers across diverse industries in the GCC.

MBZUAI is changing the landscape of large language models in the region

MBZUAI ·

MBZUAI has been actively involved in developing generative AI models, contributing to models such as Llama 2, Jais, Vicuna, and LaMini. Professor Preslav Nakov notes Llama 2's improvements in size and carbon footprint over Llama 1. MBZUAI aims to tackle challenges such as information accuracy, economic cost, and the scarcity of Arabic content online. Why it matters: MBZUAI's work helps address the limitations of current LLMs, particularly for Arabic, and promotes sustainable AI development in the region.

MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT

arXiv ·

Researchers from MBZUAI have released MobiLlama, a fully transparent, open-source 0.5-billion-parameter Small Language Model (SLM). MobiLlama is designed for resource-constrained devices, aiming for strong performance with reduced compute and memory demands. The full training data pipeline, code, model weights, and checkpoints are available on GitHub. Why it matters: Full transparency of data, code, and checkpoints makes on-device SLM research reproducible and easier to build on.
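A minimal sketch of loading such a model for on-device inference follows, assuming the Hugging Face model id "MBZUAI/MobiLlama-05B"; the exact id and whether custom model code must be trusted should be confirmed on the project's GitHub.

```python
# Sketch of loading MobiLlama for local inference.
# Assumes: `pip install transformers torch` and the model id
# "MBZUAI/MobiLlama-05B" (an assumption; verify against the project repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MBZUAI/MobiLlama-05B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # a 0.5B model runs comfortably in fp32 on CPU
    trust_remote_code=True,     # the repo may ship a custom model class
)

inputs = tokenizer("The key idea behind small language models is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```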

LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content

arXiv ·

Researchers have introduced LlamaLens, a specialized multilingual LLM for analyzing news and social media content. The model targets both domain specificity and multilinguality, focusing on news and social media in Arabic, English, and Hindi. LlamaLens was evaluated on 18 tasks represented by 52 datasets, outperforming the state of the art on 23 test sets. Why it matters: This work contributes a valuable resource for multilingual NLP research, particularly for analyzing news and social media content across diverse languages.
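The evaluation setup described above amounts to prompt-based classification over news content. The sketch below shows that pattern; since the released LlamaLens checkpoint id is not given here, a small public instruction-tuned model stands in, and the real checkpoint would be swapped in to reproduce the reported results.

```python
# Sketch of prompt-based news classification in the style described above.
# "google/flan-t5-small" is only a stand-in; substitute the released
# LlamaLens checkpoint for the actual model.
from transformers import pipeline

clf = pipeline("text2text-generation", model="google/flan-t5-small")  # stand-in model

prompt = (
    "Classify the news headline into one of: politics, sports, business, technology.\n"
    "Headline: Central bank raises interest rates by 50 basis points.\n"
    "Label:"
)
print(clf(prompt, max_new_tokens=5)[0]["generated_text"].strip())
```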

Empowering Large Language Models with Reliable Reasoning

MBZUAI ·

Liangming Pan from UCSB presented research on building reliable generative AI agents by integrating symbolic representations with LLMs. The neuro-symbolic strategy combines the flexibility of language models with precise knowledge representation and verifiable reasoning. The work covers Logic-LM, ProgramFC, and learning from automated feedback, aiming to address LLM limitations in complex reasoning tasks. Why it matters: Improving the reliability of LLMs is crucial for high-stakes applications in finance, medicine, and law within the region and globally.
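A minimal sketch of the neuro-symbolic pattern in the Logic-LM spirit follows (this is not the actual Logic-LM code): the LLM's job is to translate a natural-language question into formal constraints, here mocked by hand, while a symbolic solver performs the verifiable reasoning. The Z3 SMT solver (pip install z3-solver) plays the solver role.

```python
# Neuro-symbolic sketch: an LLM would emit the formal constraints below;
# that translation step is mocked here. Z3 then does the verifiable reasoning.
from z3 import Bool, Implies, Not, Solver, sat

# Natural-language puzzle: "If it rains, the ground is wet. The ground is
# not wet. Did it rain?"  The constraints are what the LLM would produce.
rain, wet = Bool("rain"), Bool("wet")

solver = Solver()
solver.add(Implies(rain, wet))  # premise 1: rain implies wet ground
solver.add(Not(wet))            # premise 2: the ground is not wet
solver.add(rain)                # hypothesis under test: it rained

# If the premises plus "rain" are unsatisfiable, the solver has *proved*
# that it did not rain -- a guarantee the LLM alone cannot provide.
print("'It rained' is", "possible" if solver.check() == sat else "refuted")
```

The division of labor is the point: the language model supplies flexible translation from text to logic, and the solver supplies exact, checkable inference, which is what makes the overall agent's answers reliable.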