Researchers from MBZUAI, UC Berkeley, CMU, Stanford, and UC San Diego collaborated to create Vicuna, an open-source chatbot that cost about $300 to train, compared with the more than $4 million reportedly required for ChatGPT. Vicuna achieves roughly 90% of ChatGPT's subjective response quality while being far more energy-efficient, and it can run on a single GPU. It was fine-tuned from Meta AI's LLaMA model on user-shared conversations and has gained significant traction on GitHub. Why it matters: This research demonstrates that high-quality chatbots can be developed at a fraction of the cost and environmental impact, opening up new possibilities for sustainable AI development in the region.
MBZUAI President Eric Xing led a global collaboration to develop Vicuna, an open LLM alternative to ChatGPT that addresses the unsustainable cost of training LLMs. OpenAI CEO Sam Altman acknowledged Abu Dhabi's role in the global AI conversation, building on achievements like Vicuna. Xing and colleagues are publishing research at MLSys 2023 on "cross-mesh resharding," a technique for improving communication between groups of devices in model-parallel deep learning, with the aim of low-carbon, affordable, and miniaturized AI. Why it matters: This research signals a push towards sustainable AI development in the region, emphasizing efficiency and reduced environmental impact.
MBZUAI is a global partner in Meta's release of Llama 2, joining organizations such as IBM, AWS, Microsoft, and NVIDIA. MBZUAI will provide early feedback and help improve the software as part of a global community. The university is also working on large language models more broadly: developing the sustainable LLM Vicuna and strengthening infrastructure for evaluating LLM-based chat systems. Why it matters: MBZUAI's involvement promises a new generation of UAE-born AI advancements built on the Llama 2 ecosystem, including fact-checking capabilities.
Researchers from MBZUAI, the University of British Columbia, and Monash University have created LaMini-LM, a collection of small language models distilled from ChatGPT. The models are trained on a dataset of 2.58M instructions and can be deployed on consumer laptops and mobile devices, performing nearly as well as much larger counterparts while easing data-security concerns. Why it matters: This work enables the deployment of LLMs in resource-constrained environments and enhances data security by reducing reliance on cloud-based LLMs.
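Distillation of this kind typically pairs instructions with responses generated by a larger teacher model, then fine-tunes a small model on the concatenated text. The field names and prompt template below are illustrative assumptions, not LaMini-LM's exact schema:

```python
import json

# Hypothetical distilled examples: instructions paired with responses
# produced by a larger teacher model (field names are assumptions,
# not LaMini-LM's exact dataset schema).
examples = [
    {"instruction": "Explain photosynthesis in one sentence.",
     "response": "Plants use sunlight, water, and CO2 to make sugar and oxygen."},
]

def format_for_sft(ex):
    # Concatenate prompt and target into one training string, a common
    # recipe for supervised fine-tuning of small models.
    return (f"### Instruction:\n{ex['instruction']}\n\n"
            f"### Response:\n{ex['response']}")

record = format_for_sft(examples[0])
line = json.dumps(examples[0])  # one JSONL row of the distilled dataset
```

Each formatted string would then be tokenized and fed to a standard language-modeling training loop for the small student model.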
Researchers introduce Arabic Mini-ClimateGPT, an Arabic LLM tailored to climate change and sustainability. The model is fine-tuned on the Clima500-Instruct dataset and augments inference with vector-embedding retrieval. In evaluations, it outperforms baseline LLMs and is preferred by experts in 81.6% of cases. Why it matters: A dedicated Arabic model makes climate and sustainability information more accessible to Arabic-speaking communities.
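The retrieval step mentioned above can be sketched in miniature: embed the user query, rank stored passages by similarity, and prepend the best matches to the LLM prompt. The bag-of-words embedding and the example passages below are stand-ins; a real deployment like Arabic Mini-ClimateGPT's would use a trained sentence-embedding model over a climate knowledge base.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a trained
    # sentence-embedding model (which model is an assumption here).
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, passages, k=2):
    # Rank stored passages by similarity to the query; the top-k become
    # the retrieved context for the LLM prompt.
    q = embed(query)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

# Hypothetical knowledge-base passages (illustrative only).
passages = [
    "Climate change drives rising sea levels in coastal regions.",
    "Solar power adoption is growing across the Gulf.",
    "The recipe calls for two cups of flour.",
]

question = "How does climate change affect sea levels?"
context = retrieve(question, passages)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + question
```

The assembled prompt grounds the model's answer in retrieved passages, which is the mechanism behind the expert-preferred responses reported above.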