Iryna Gurevych from TU Darmstadt presented research on using large language models for real-world fact-checking, focusing on dismantling misleading narratives that arise from misinterpreted scientific publications and on detecting misinformation conveyed through visual content. The research aims to explain why a false claim was believed, why it is false, and why the alternative is correct. Why it matters: Addressing misinformation, especially when it is supported by seemingly credible sources, is critical for public health, conflict resolution, and maintaining trust in institutions in the Middle East and globally.
Yanwei Fu from Fudan University will present research on multimodal models, robotic grasping, and fMRI neural decoding. Topics include few-shot learning, object-centric self-supervised learning, image manipulation, and visual-language alignment. The research also covers Transformer compression and the application of large models, combined with multi-view stereo (MVS) 3D reconstruction, to robotic arm grasping. Why it matters: While the talk is not directly about Middle East AI, the topics covered are core to advancing AI research and applications in the region.
MBZUAI's president Eric Xing warns against the unchecked pursuit of increasingly large AI models, drawing an analogy to an "atomic bomb" due to the unpredictability of their behavior. He argues that the field lacks sufficient understanding of what these models learn and whether their outputs are reliable, advocating for more efficient models. Xing emphasizes the need for debuggability and error tracking in AI, similar to established engineering practices. Why it matters: The piece highlights growing concerns within the AI community about the scalability and potential risks associated with increasingly complex AI models, particularly regarding transparency and control.
A KAUST alumnus presented research on using large language models for complex disease modeling and drug discovery. LLMs were trained on insurance claims covering 123 million people in the US to model diseases and predict genetic parameters. Protein language models were developed to discover remote homologs and functional biomolecules, while RNA language models were used for RNA structure prediction and inverse design. Why it matters: This work highlights the potential of LLMs to accelerate computational biology research and drug development, with a KAUST connection.
MBZUAI has been actively involved in developing AI and generative models, contributing to models like Llama 2, Jais, Vicuna, and LaMini. Professor Preslav Nakov notes Llama 2's improvements over Llama 1 in model size and carbon footprint. MBZUAI aims to tackle challenges like information accuracy, economic costs, and the scarcity of Arabic online content. Why it matters: MBZUAI's work helps address the limitations of current LLMs, particularly for Arabic, and promotes sustainable AI development in the region.
IFM has released K2-V2, a 70B-class LLM that takes a "360-open" approach by making its weights, data, training details, checkpoints, and fine-tuning recipes publicly available. K2-V2 matches leading open-weight model performance while offering full transparency, contrasting with proprietary and semi-open Chinese models. Independent evaluations show K2 as a high-performance, fully open-source alternative in the AI landscape. Why it matters: K2-V2 provides developers with a transparent and reproducible foundation model, fostering trust and enabling customization without sacrificing performance, which is crucial for sensitive applications in the region.
Jan Buchmann from TU Darmstadt presented research on NLP for long, structured documents at MBZUAI. The research addresses gaps in how models use document structure and in the verifiability of LM responses. Experiments showed that models learn to represent document structure during pre-training, and that larger models cite sources more reliably. Why it matters: This research contributes to making NLP more effective for complex documents like scientific articles and legal texts, which is crucial for information accessibility.
Veselin Stoyanov of Tome (previously Facebook AI) gave a talk at MBZUAI on challenges to LLM adoption. The talk covered sparse models, multilingual LLMs, and instruction finetuning. Stoyanov previously led development of RoBERTa, XLM-R, and MultiRay. Why it matters: This talk highlights MBZUAI's role as a forum for discussing key challenges and advancements in large language models, with implications for Arabic NLP and regional AI development.