MBZUAI had 22 papers accepted at ICLR 2023, with faculty member Kun Zhang co-authoring seven of them. Yuanzhi Li, an affiliated assistant professor at MBZUAI, received an honorable mention for his paper on knowledge distillation. Additionally, a paper co-authored by MBZUAI President Eric Xing was recognized as a top 5% paper at the conference. Why it matters: MBZUAI's strong presence at a top-tier machine learning conference like ICLR demonstrates the university's growing influence and research capabilities in the global AI landscape.
MBZUAI researchers presented a new strategy for handling complex optimization problems in machine learning at ICLR 2024. The study, a collaboration with ISAM, combines zeroth-order (gradient-free) methods with hard-thresholding, which enforces sparsity by retaining only the largest entries of a solution. The approach aims to improve convergence, so that algorithms reach high-quality sparse solutions efficiently even when exact gradients are unavailable or costly to compute. Why it matters: Improving optimization techniques is crucial for advancing machine learning models used in various applications, potentially accelerating development and enhancing performance.
MBZUAI researchers presented a new machine learning method at ICLR for uncovering hidden variables from observed data. The method, called "complementary gains," combines two weak assumptions to provide identifiability guarantees. This approach aims to recover true latent variables that reflect real-world processes while remaining computationally efficient. Why it matters: The research advances disentangled representation learning by finding minimal assumptions necessary for identifiability, improving the applicability of AI models to real-world data.
MBZUAI researchers will present 20 papers at the 40th International Conference on Machine Learning (ICML) in Honolulu. Visiting Associate Professor Tongliang Liu leads with seven publications, followed by Kun Zhang with six. One paper investigates semi-supervised learning vs. model-based methods for noisy data annotation in deep neural networks. Why it matters: The research addresses the critical issue of data quality and accessibility in machine learning, particularly for organizations with limited resources for data annotation.
MBZUAI researchers developed a new conditional independence test (DCT) that determines whether two variables are dependent, whether both are discrete, both are continuous, or one is discrete and the other continuous. The new test addresses cases where variables are inherently continuous but represented in discretized form due to data collection limits. The findings will be presented at the 13th International Conference on Learning Representations (ICLR) in Singapore. Why it matters: This research addresses a fundamental problem in machine learning and statistics, improving causal relationship discovery in mixed datasets common across finance, public health, and other fields.
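For readers unfamiliar with conditional independence testing, the sketch below shows a classical baseline for fully continuous data: a Fisher-z test on the partial correlation of X and Y given Z. This is not the DCT from the paper (which specifically handles discretized variables); it only illustrates what such a test computes and returns. All names are illustrative.

```python
import math
import numpy as np

def fisher_z_ci_test(x, y, z):
    """Test X ⟂ Y | Z via partial correlation (Fisher-z transform).
    x, y: 1-D samples; z: (n, d) conditioning set.
    Returns a p-value; a small p-value rejects conditional independence."""
    n = len(x)
    Z = np.column_stack([np.ones(n), z])  # add intercept
    # Residualize x and y on Z by least squares, then correlate residuals
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r = np.corrcoef(rx, ry)[0, 1]
    fz = 0.5 * np.log((1 + r) / (1 - r))          # Fisher-z transform
    stat = np.sqrt(n - z.shape[1] - 3) * abs(fz)  # ~ N(0,1) under H0
    phi = 0.5 * (1 + math.erf(stat / math.sqrt(2)))
    return 2 * (1 - phi)  # two-sided p-value

# Toy data: X and Y are both driven by Z (conditionally independent),
# while Y_dep is driven directly by X (conditionally dependent).
rng = np.random.default_rng(1)
n = 500
zc = rng.standard_normal((n, 1))
x = zc[:, 0] + 0.5 * rng.standard_normal(n)
y = zc[:, 0] + 0.5 * rng.standard_normal(n)
y_dep = x + 0.3 * rng.standard_normal(n)

p_ci = fisher_z_ci_test(x, y, zc)      # expected: large p, no rejection
p_dep = fisher_z_ci_test(x, y_dep, zc) # expected: tiny p, rejection
```

A test like this breaks down when continuous variables arrive in coarsely binned form, which is exactly the regime the summary above says the new test is designed for.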
Mingyu Ding from UC Berkeley presented research on endowing robots with human-like commonsense and physical reasoning capabilities. The talk covered multimodal commonsense reasoning integrating vision, world models, and language-based task planners. It also discussed physical reasoning approaches for robots to infer dynamics and physical properties of objects. Why it matters: Enhancing robots with these capabilities can improve their ability to generalize across everyday tasks, leading to greater social benefits and impact.
MBZUAI Professor Timothy Baldwin delivered the presidential keynote at the 60th Annual Meeting of the Association for Computational Linguistics (ACL). Baldwin also published three papers at the conference, including work on biomedical literature summarization, NLP for Indonesian languages, and understanding procedural texts. The papers address challenges such as reducing human effort in reviewing medical documents and digitally preserving Indonesian indigenous languages. Why it matters: Baldwin's contributions and leadership role at ACL highlight the growing prominence of MBZUAI and GCC-based researchers in the global NLP community.