GCC AI Research

Results for "large language model"

LLMs 101: Large language models explained

MBZUAI ·

The article provides a basic overview of large language models (LLMs), explaining how they work and where they are applied. LLMs are AI systems that process and generate human-like text using the transformer architecture, trained on vast datasets to predict the next word in a sequence. The piece differentiates between general-purpose, task-specific, and multimodal models, as well as closed-source and open-source LLMs. Why it matters: LLMs are foundational for advancements in Arabic NLP, as evidenced by models like MBZUAI's Jais, and understanding their mechanics is crucial for regional AI development.
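The core mechanic described above, i.e. scoring every vocabulary word and sampling or picking the most likely next one, can be illustrated with a toy example. The vocabulary and logit values below are made up for illustration; a real transformer computes logits over tens of thousands of tokens.

```python
import math

# Toy next-word prediction (hypothetical numbers): a transformer assigns a
# logit (score) to every vocabulary word, and softmax converts the logits
# into a probability distribution over the next word.
logits = {"cat": 0.2, "sat": 2.5, "mat": 0.4, "the": 1.1}

def softmax(scores):
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(logits)
next_word = max(probs, key=probs.get)  # greedy decoding: pick the most likely word
```

In practice, models often sample from `probs` (with temperature or nucleus sampling) rather than always taking the argmax, which is why the same prompt can yield different continuations.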

Towards Real-world Fact-Checking with Large Language Models

MBZUAI ·

Iryna Gurevych from TU Darmstadt presented research on using large language models for real-world fact-checking, focusing on dismantling misleading narratives from misinterpreted scientific publications and detecting misinformation via visual content. The research aims to explain why a false claim was believed, why it is false, and why the alternative is correct. Why it matters: Addressing misinformation, especially when supported by seemingly credible sources, is critical for public health, conflict resolution, and maintaining trust in institutions in the Middle East and globally.

Empowering Large Language Models with Reliable Reasoning

MBZUAI ·

Liangming Pan from UCSB presented research on building reliable generative AI agents by integrating symbolic representations with LLMs. The neuro-symbolic strategy combines the flexibility of language models with precise knowledge representation and verifiable reasoning. The work covers Logic-LM, ProgramFC, and learning from automated feedback, aiming to address LLM limitations in complex reasoning tasks. Why it matters: Improving the reliability of LLMs is crucial for high-stakes applications in finance, medicine, and law within the region and globally.
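The neuro-symbolic idea can be sketched in miniature. This is an illustration of the general approach, not the actual Logic-LM pipeline: there, an LLM translates a natural-language problem into symbolic facts and rules, and a deterministic solver carries out the reasoning, so answers are verifiable rather than free-form generated. The facts below are hypothetical stand-ins for what an LLM might extract.

```python
# Hypothetical facts an LLM might extract from a passage of text.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparent(x, z, facts):
    """Symbolic rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
    middles = {y for (rel, a, y) in facts if rel == "parent" and a == x}
    return any(("parent", y, z) in facts for y in middles)
```

Because the rule is executed deterministically over explicit facts, each conclusion can be traced back to the premises that produced it, which is what makes the reasoning auditable in a way that raw LLM generation is not.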

Complex disease modeling and efficient drug discovery with large language models

MBZUAI ·

A KAUST alumnus presented research on using large language models for complex disease modeling and drug discovery. LLMs were trained on insurance claims from 123 million people in the US to model diseases and predict genetic parameters. Protein language models were developed to discover remote homologs and functional biomolecules, while RNA language models were used for RNA structure prediction and reverse design. Why it matters: This work highlights the potential of LLMs to accelerate computational biology research and drug development, with a KAUST connection.

LLM Post-Training: A Deep Dive into Reasoning Large Language Models

arXiv ·

A new survey paper provides a deep dive into post-training methodologies for Large Language Models (LLMs), analyzing their role in refining LLMs beyond pretraining. It addresses key challenges such as catastrophic forgetting, reward hacking, and inference-time trade-offs, and highlights emerging directions in model alignment, scalable adaptation, and inference-time reasoning. The paper also provides a public repository to continually track developments in this fast-evolving field.

Knowledge distillation and the greening of LLMs

MBZUAI ·

Researchers from MBZUAI, University of British Columbia, and Monash University have created LaMini-LM, a collection of small language models distilled from ChatGPT. LaMini-LM is trained on a dataset of 2.58M instructions and can be deployed on consumer laptops and mobile devices. The smaller models perform almost as well as larger counterparts while addressing security concerns. Why it matters: This work enables the deployment of LLMs in resource-constrained environments and enhances data security by reducing reliance on cloud-based LLMs.
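The distillation data pipeline behind models like LaMini-LM can be sketched as follows. The teacher function here is a hypothetical stub; a real pipeline queries the large teacher model (ChatGPT, in LaMini-LM's case) and then fine-tunes a small student model on the collected instruction-response pairs.

```python
def teacher_respond(instruction: str) -> str:
    # Stand-in for an API call to the large teacher model; the canned
    # response below is a made-up example, not real teacher output.
    canned = {"Define AI.": "AI is the simulation of human intelligence by machines."}
    return canned.get(instruction, "")

instructions = ["Define AI."]  # LaMini-LM collected 2.58M such instructions
distill_set = [(ins, teacher_respond(ins)) for ins in instructions]
# A small student model is then fine-tuned on distill_set, learning to
# imitate the teacher's outputs at a fraction of the parameter count.
```

The key design choice is that the expensive teacher is queried once to build the dataset, after which the cheap student runs locally, which is what enables deployment on laptops and phones.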

Large Language Models for the Real World: Explorations of Sparse, Cross-lingual Understanding and Instruction-Tuned LLMs

MBZUAI ·

Veselin Stoyanov of Tome (previously Facebook AI) gave a talk at MBZUAI on challenges to LLM adoption. The talk covered sparse models, multilingual LLMs, and instruction finetuning. Stoyanov previously led development of RoBERTa, XLM-R, and MultiRay. Why it matters: This talk highlights MBZUAI's role as a forum for discussing key challenges and advancements in large language models, with implications for Arabic NLP and regional AI development.