MBZUAI faculty and researchers had 27 papers accepted at the 2022 NeurIPS conference. Twelve MBZUAI faculty members had at least one paper accepted, with Professor Kun Zhang leading with 10. Other faculty with accepted publications include Eric Xing, Le Song, and Fahad Khan. Why it matters: This achievement highlights MBZUAI's growing prominence in the global machine learning research community.
MBZUAI faculty and students will present 53 papers at NeurIPS 2023. Key faculty include Eric Xing, Kun Zhang, and Tongliang Liu, each contributing to nine papers. One paper was selected for oral presentation and two for spotlight presentations. Why it matters: MBZUAI's strong presence at NeurIPS highlights its growing influence in the global AI research community and its focus on high-impact AI research.
A research paper co-authored by Dr. Maxim Panov and Kirill Fedyanin from the AI and Digital Science Research Center (AIDRC) has been accepted for publication at NeurIPS 2022. The paper, titled “Nonparametric Uncertainty Quantification for Single Deterministic Neural Network”, proposes a fast and scalable method for uncertainty quantification in ML models. The method disentangles aleatoric and epistemic uncertainties and was validated on text classification and image datasets including MNIST and ImageNet. Why it matters: This demonstrates the growing AI research capabilities and contributions from the UAE to the global AI community, particularly in fundamental machine learning research.
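The general idea of disentangling aleatoric (data) uncertainty from epistemic (model) uncertainty can be illustrated with the standard entropy-based decomposition over an ensemble of predictive distributions. The sketch below is a minimal illustration of that decomposition only; it is not the paper's nonparametric estimator.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of categorical distributions along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(probs):
    """Split total predictive uncertainty into aleatoric and epistemic parts.

    probs: array of shape (n_members, n_inputs, n_classes) holding predictive
    distributions from several ensemble members or stochastic forward passes.
    """
    mean_pred = probs.mean(axis=0)            # averaged predictive distribution
    total = entropy(mean_pred)                # total predictive uncertainty
    aleatoric = entropy(probs).mean(axis=0)   # expected entropy (data noise)
    epistemic = total - aleatoric             # mutual information (model uncertainty)
    return total, aleatoric, epistemic

# Toy usage: 5 ensemble members, 2 inputs, 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=(5, 2))
print(decompose_uncertainty(probs))
```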
MBZUAI researchers will present 20 papers at the 40th International Conference on Machine Learning (ICML) in Honolulu. Visiting Associate Professor Tongliang Liu leads with seven publications, followed by Kun Zhang with six. One paper compares semi-supervised learning with model-based methods for handling noisy data annotations when training deep neural networks. Why it matters: The research addresses the critical issue of data quality and accessibility in machine learning, particularly for organizations with limited resources for data annotation.
MBZUAI researchers presented a new second-order method for optimizing neural networks at NeurIPS 2024. The method addresses optimization problems formulated as variational inequalities, which are common in machine learning. They demonstrated that for monotone variational inequalities with inexact second-order derivatives, no faster second- or first-order methods can exist in theory, and supported this result with experiments. Why it matters: This research has the potential to reduce the computational cost of training large and complex neural networks, which could accelerate AI development in the region.
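For readers unfamiliar with the problem class, a bilinear min-max game is a textbook example of a monotone variational inequality. The sketch below solves one with the classical first-order extragradient method, purely to illustrate the kind of problem studied; it is not the second-order algorithm from the paper.

```python
import numpy as np

# min_x max_y x^T A y as a variational inequality: find z* = (x*, y*) such
# that <F(z*), z - z*> >= 0 for all z, with operator F(x, y) = (A y, -A^T x).
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])

def F(z):
    x, y = z[:3], z[3:]
    return np.concatenate([A @ y, -A.T @ x])

rng = np.random.default_rng(0)
z, step = rng.normal(size=6), 0.2
for _ in range(2000):
    z_half = z - step * F(z)   # extrapolation ("look-ahead") step
    z = z - step * F(z_half)   # update using the operator at the midpoint

print(np.linalg.norm(F(z)))    # near zero: z has converged to the saddle point
```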
Researchers from MBZUAI and King's College London have developed a new prompting strategy called self-guided exploration to improve LLM performance on combinatorial problems. The method was tested on complex challenges like the traveling salesman problem. The findings will be presented at the 38th Annual Conference on Neural Information Processing Systems (NeurIPS) in Vancouver. Why it matters: This research could lead to practical applications of LLMs in industries like logistics, planning, and scheduling by offering new approaches to computationally complex problems.
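As a rough schematic, a prompting loop for the traveling salesman problem might propose a tour, score it, and feed the result back into the next prompt. The snippet below is only an assumed sketch of that loop; `query_llm` is a hypothetical stand-in for any chat-completion API, and the actual self-guided exploration prompts are described in the paper.

```python
import math

def tour_length(tour, coords):
    """Total length of a closed tour over 2D city coordinates."""
    return sum(math.dist(coords[a], coords[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def solve_tsp_with_llm(coords, query_llm, rounds=5):
    """Iteratively ask the model for tours, keeping the shortest one found."""
    best_tour, best_len = list(range(len(coords))), float("inf")
    feedback = "No previous attempts."
    for _ in range(rounds):
        prompt = (f"Cities: {coords}\n{feedback}\n"
                  "Propose a shorter tour as a permutation of city indices.")
        tour = query_llm(prompt)                  # e.g. [0, 3, 1, 2, ...]
        length = tour_length(tour, coords)
        if length < best_len:
            best_tour, best_len = tour, length
        feedback = f"Best tour so far: {best_tour} with length {best_len:.2f}."
    return best_tour, best_len
```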
MBZUAI researchers, in collaboration with over 70 researchers, have created the Culturally diverse Visual Question Answering (CVQA) benchmark to evaluate cultural understanding in multimodal LLMs. The CVQA dataset includes over 10,000 questions in 31 languages and 13 scripts, testing models on images of local dishes, personalities, and monuments. Testing of several multimodal LLMs on the CVQA benchmark revealed significant challenges, even for top models. Why it matters: This benchmark highlights the need for AI models to better understand diverse cultures, promoting fairness and relevance across different languages and regions.
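Benchmarks of this kind are typically scored as per-language accuracy on multiple-choice questions. The sketch below shows one way such scoring might be implemented; the field names (question, options, answer_idx, language) are illustrative assumptions, not CVQA's actual schema.

```python
from collections import defaultdict

def score_by_language(examples, predict):
    """Compute multiple-choice accuracy per language.

    `predict(image, question, options)` returns the index of the chosen option.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        pred = predict(ex["image"], ex["question"], ex["options"])
        total[ex["language"]] += 1
        correct[ex["language"]] += int(pred == ex["answer_idx"])
    return {lang: correct[lang] / total[lang] for lang in total}
```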
MBZUAI researchers introduced Web2Code, a new dataset suite, at NeurIPS to enhance multimodal LLM performance in web page analysis and HTML generation. The suite includes a fine-tuning dataset and two benchmark datasets. Instruction tuning with Web2Code improved performance on specialized tasks without affecting general capabilities. Why it matters: This contribution addresses a key limitation in current multimodal LLMs, potentially boosting productivity in web design and development by providing targeted training data.
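To make the instruction-tuning setting concrete, a single training record for a screenshot-to-HTML task could pair a webpage image with a user instruction and the target markup. The record below is an assumed illustration; the field names and instruction text are not Web2Code's actual schema.

```python
import json

# Hypothetical instruction-tuning record for webpage-screenshot-to-HTML
# generation; structure and field names are illustrative only.
record = {
    "image": "screenshots/page_00042.png",
    "conversations": [
        {"role": "user",
         "content": "Generate HTML that reproduces the webpage in the image."},
        {"role": "assistant",
         "content": "<!DOCTYPE html>\n<html>\n  <body>\n    ...\n  </body>\n</html>"},
    ],
}
print(json.dumps(record, indent=2))
```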