Researchers from MBZUAI have released MobiLlama, a fully transparent, open-source 0.5 billion parameter Small Language Model (SLM). MobiLlama is designed for resource-constrained devices, delivering strong performance while keeping resource demands low. The full training data pipeline, code, model weights, and checkpoints are available on GitHub.
The paper introduces Juhaina, a 9.24B parameter Arabic-English bilingual LLM trained with an 8,192 token context window. It identifies limitations in the Open Arabic LLM Leaderboard (OALL) and proposes a new benchmark, CamelEval, for more comprehensive evaluation. Juhaina outperforms comparable models such as Llama and Gemma at generating helpful Arabic responses and understanding cultural nuances. Why it matters: This culturally aligned LLM and its accompanying benchmark could significantly advance Arabic NLP and democratize AI access for Arabic speakers.
The paper introduces ALLaM, a series of large language models for Arabic and English, designed to support Arabic Language Technologies. The models are trained with language alignment and knowledge transfer in mind, using a decoder-only architecture. ALLaM achieves state-of-the-art results on Arabic benchmarks like MMLU Arabic and Arabic Exams. Why it matters: This work advances Arabic NLP by providing high-performing LLMs and demonstrating effective techniques for cross-lingual transfer learning and alignment with human preferences.
MBZUAI is a global partner in Meta's release of Llama 2, joining organizations like IBM, AWS, Microsoft, and NVIDIA. MBZUAI will provide early feedback and help develop the software alongside a global community. The university is also working on large language models of its own, including the sustainable LLM Vicuna, and is strengthening infrastructure for LLM chat evaluation. Why it matters: MBZUAI's involvement promises a new generation of UAE-born AI advances built on the Llama 2 ecosystem, including fact-checking capabilities.
This paper presents a UI-level evaluation of ALLaM-34B, an Arabic-centric LLM developed by SDAIA and deployed in the HUMAIN Chat service. The evaluation used a prompt pack spanning Arabic dialects, code-switching, reasoning, and safety, with outputs scored by frontier LLMs acting as judges. Results show strong performance in generation, code-switching, Modern Standard Arabic (MSA) handling, and reasoning, along with improved dialect fidelity, positioning ALLaM-34B as a robust Arabic LLM suitable for real-world use.
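The LLM-as-judge setup described above can be sketched as a simple aggregation loop: each prompt in the pack is tagged with a capability axis, a judge model assigns a rubric score to the system's response, and scores are averaged per axis. This is a minimal illustrative sketch only; the axis names and the stubbed `judge_score` function are assumptions, not the paper's actual rubric or judging pipeline (a real judge would be a frontier-LLM API call).

```python
# Minimal sketch of score aggregation for an LLM-as-judge evaluation.
# judge_score() is a hypothetical stand-in for a frontier-LLM judge call;
# the axis names are illustrative, not the paper's actual rubric.
from collections import defaultdict
from statistics import mean

def judge_score(prompt: str, response: str) -> int:
    """Stand-in judge: a real implementation would prompt a frontier
    LLM with a rubric and parse out a score (e.g. 1-5)."""
    return 4  # placeholder score

def evaluate(prompt_pack):
    """prompt_pack: iterable of (axis, prompt, model_response) tuples.
    Returns the mean judge score per capability axis."""
    scores = defaultdict(list)
    for axis, prompt, response in prompt_pack:
        scores[axis].append(judge_score(prompt, response))
    return {axis: mean(vals) for axis, vals in scores.items()}

pack = [
    ("code-switching", "اشرح الـ blockchain بالعربي", "..."),
    ("reasoning", "If all A are B and all B are C, ...", "..."),
    ("code-switching", "Translate this mixed-language sentence", "..."),
]
print(evaluate(pack))
```

In practice the per-axis means are what a table like the one in the paper would report, letting dialect fidelity or reasoning be compared across model versions.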
Researchers have introduced LlamaLens, a specialized multilingual LLM designed for analyzing news and social media content. The model addresses domain specificity and multilinguality, focusing on news and social media in Arabic, English, and Hindi. LlamaLens was evaluated on 18 tasks spanning 52 datasets, outperforming the state of the art on 23 of the test sets. Why it matters: This work contributes a valuable resource for multilingual NLP research, particularly for analyzing news and social media content across diverse languages.
The article discusses the rise of large language models like ChatGPT and Gemini. It highlights their role in driving the first wave of AI development. Why it matters: While lacking specifics, the article suggests ongoing interest in the impact and future of LLMs, a key area of AI research and development.