MBZUAI researchers will present 20 papers at the 40th International Conference on Machine Learning (ICML) in Honolulu. Visiting Associate Professor Tongliang Liu leads with seven publications, followed by Kun Zhang with six. One paper compares semi-supervised learning with model-based approaches for training deep neural networks on noisily annotated data. Why it matters: The research addresses the critical issue of data quality and accessibility in machine learning, particularly for organizations with limited resources for data annotation.
MBZUAI had 22 papers accepted at ICLR 2023, with Professor Kun Zhang co-authoring seven of them. Yuanzhi Li, an affiliated assistant professor at MBZUAI, received an honorable mention for his paper on knowledge distillation. Additionally, a paper co-authored by MBZUAI President Eric Xing was recognized as a top 5% paper at the conference. Why it matters: MBZUAI's strong presence at a top-tier machine learning conference like ICLR demonstrates the university's growing influence and research capabilities in the global AI landscape.
MBZUAI Department Chair Le Song served as a program co-chair at the 39th International Conference on Machine Learning (ICML). MBZUAI faculty, researchers, and students had seven papers accepted at ICML 2022. Song noted the increasing focus on biomedicine and other scientific domains within the AI research community. Why it matters: Song's leadership role at ICML and MBZUAI's strong presence highlight the university's growing influence in the global machine learning landscape.
MBZUAI faculty and researchers had 27 papers accepted at the 2022 NeurIPS conference. Twelve MBZUAI faculty members had at least one paper accepted, with Professor Kun Zhang leading with 10 papers. Other faculty with accepted publications include Eric Xing, Le Song, and Fahad Khan. Why it matters: This achievement highlights MBZUAI's growing prominence in the global machine learning research community.
MBZUAI and KAUST researchers collaborated to present new optimization methods at ICML 2024 for composite and distributed machine learning settings. The study addresses the challenges that data size and computational demands pose for training large models. Their work focuses on minimizing the loss function by adjusting a model's internal trainable parameters, using techniques such as gradient clipping. Why it matters: This research contributes to the ongoing advancement of machine learning optimization, crucial for improving the performance and efficiency of AI models in the region and globally.
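To make the idea concrete, here is a minimal sketch of gradient clipping inside a generic PyTorch training loop. This illustrates the general technique referenced above, not the specific methods proposed in the MBZUAI/KAUST paper; the model, data, and max_norm value are placeholders chosen for illustration.

```python
# A minimal sketch of gradient clipping in a generic training loop (PyTorch).
# Placeholder model and random data; not the paper's actual setup.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                      # toy model with trainable parameters
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(32, 10)                       # dummy input batch
y = torch.randn(32, 1)                        # dummy targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)               # evaluate the loss function
    loss.backward()                           # compute gradients of the loss
    # Clip the global gradient norm so a single noisy batch cannot
    # trigger an oversized, destabilizing parameter update.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()                          # adjust the trainable parameters
```

In practice, clipping the gradient norm is a common way to keep training stable when gradients occasionally spike, which is one of the issues optimization research in this area aims to handle more rigorously.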