GCC AI Research


MBZUAI research at ICLR 2023

MBZUAI ·

MBZUAI had 22 papers accepted at ICLR 2023, with faculty member Kun Zhang co-authoring seven of them. Yuanzhi Li, an affiliated assistant professor at MBZUAI, received an honorable mention for his paper on knowledge distillation. Additionally, a paper co-authored by MBZUAI President Eric Xing was recognized as a top-5% paper at the conference. Why it matters: MBZUAI's strong presence at a top-tier machine learning conference like ICLR demonstrates the university's growing influence and research capabilities in the global AI landscape.

Two weak assumptions, one strong result presented at ICLR

MBZUAI ·

MBZUAI researchers presented a new machine learning method at ICLR for uncovering hidden variables from observed data. The method, called "complementary gains," combines two weak assumptions to provide identifiability guarantees. The approach aims to recover the true latent variables underlying real-world processes while remaining computationally efficient. Why it matters: The research advances disentangled representation learning by identifying the minimal assumptions needed for identifiability, broadening the applicability of AI models to real-world data.

MBZUAI researchers at ICML

MBZUAI ·

MBZUAI researchers will present 20 papers at the 40th International Conference on Machine Learning (ICML) in Honolulu. Visiting Associate Professor Tongliang Liu leads with seven publications, followed by Kun Zhang with six. One paper investigates semi-supervised learning vs. model-based methods for noisy data annotation in deep neural networks. Why it matters: The research addresses the critical issue of data quality and accessibility in machine learning, particularly for organizations with limited resources for data annotation.

A new strategy for complex optimization problems in machine learning presented at ICLR

MBZUAI ·

MBZUAI researchers presented a new strategy for handling complex optimization problems in machine learning at ICLR 2024. The study, a collaboration with ISAM, combines zeroth-order (gradient-free) methods with hard thresholding, which enforces sparsity in the solution. The approach aims to improve convergence, ensuring the algorithm reaches high-quality solutions efficiently. Why it matters: Improving optimization techniques is crucial for advancing machine learning models used in various applications, potentially accelerating development and enhancing performance.

MBZUAI at ACL 2023

MBZUAI ·

MBZUAI researchers had 26 papers accepted at ACL 2023, a top NLP conference. Assistant Professor Alham Fikri Aji co-authored eight papers, including one on crosslingual generalization through multitask finetuning (MTF). Deputy Department Chair Preslav Nakov co-authored a paper on a Bulgarian language understanding benchmark dedicated to the memory of Yale computer scientist Dragomir R. Radev. Why it matters: MBZUAI's strong presence at ACL highlights its growing influence in the NLP field and its contributions to multilingual AI research.

34 MBZUAI papers accepted at CVPR

MBZUAI ·

MBZUAI faculty, researchers, and students will present 34 papers at the 35th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023). Fahad Khan is a co-author on 11 accepted papers, while Salman Khan and Shijian Lu have 10 and 9 papers, respectively. One paper focuses on person image synthesis via a denoising diffusion model, and another introduces PromptCAL for generalized novel category discovery. Why it matters: This large volume of acceptances at a top-tier conference highlights MBZUAI's growing prominence and research contributions in computer vision, with potential impact across various industries from online retail to autonomous driving.

New test that recovers hidden relationships in data to be presented at ICLR

MBZUAI ·

MBZUAI researchers developed a new conditional independence test (DCT) that determines whether two variables are independent given a conditioning set, whether the variables are both discrete, both continuous, or mixed (one discrete, one continuous). The test also addresses cases where variables are inherently continuous but recorded in discretized form due to data-collection limits. The findings will be presented at the 13th International Conference on Learning Representations (ICLR) in Singapore. Why it matters: This research addresses a fundamental problem in machine learning and statistics, improving causal relationship discovery in mixed datasets common across finance, public health, and other fields.
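The DCT itself is not specified in the summary, but the general shape of a conditional independence test can be illustrated with the classical Fisher-z partial-correlation test for continuous data. This is explicitly not the authors' method (which additionally handles discrete and discretized variables); it is a standard baseline sketch:

```python
import math
import numpy as np

def fisher_z_ci_test(x, y, z):
    """Classical partial-correlation CI test for continuous data:
    regress z out of x and y, correlate the residuals, and apply the
    Fisher z-transform to get a two-sided normal p-value."""
    Z = np.column_stack([np.ones(len(z)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r = float(np.corrcoef(rx, ry)[0, 1])
    n, k = len(x), 1                              # k = conditioning-set size
    stat = math.sqrt(n - k - 3) * math.atanh(r)
    p = math.erfc(abs(stat) / math.sqrt(2))       # 2 * (1 - Phi(|stat|))
    return r, p

# Common-cause structure X <- Z -> Y: X and Y are correlated marginally,
# but conditionally independent given Z.
rng = np.random.default_rng(0)
z = rng.standard_normal(2000)
x = z + 0.5 * rng.standard_normal(2000)
y = z + 0.5 * rng.standard_normal(2000)
r, p = fisher_z_ci_test(x, y, z)
```

Once Z is regressed out, the residual correlation r should be near zero, so the test correctly declines to link X and Y directly; this separation of direct from mediated dependence is what causal discovery algorithms rely on.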

Shorter but not Worse: Frugal Reasoning via Easy Samples as Length Regularizers in Math RLVR

arXiv ·

A new method is proposed to reduce the verbosity of LLMs in step-by-step reasoning by retaining moderately easy problems during Reinforcement Learning with Verifiable Rewards (RLVR) training. These easy samples act as an implicit length regularizer, preventing the model from excessively increasing output length on harder problems. Experiments with Qwen3-4B-Thinking-2507 show the model matches baseline accuracy with solutions roughly half as long.
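The curation mechanic can be sketched generically: estimate each problem's pass rate under the current policy and keep moderately easy problems in the training batch, rather than filtering down to hard problems only. All thresholds, names, and the toy stand-in for model rollouts below are illustrative assumptions, not the paper's settings:

```python
import random

def estimate_pass_rate(solve_prob, n_rollouts=64, rng=None):
    """Monte-Carlo pass rate: fraction of rollouts a verifier accepts.
    (solve_prob stands in for running the model plus a verifiable checker.)"""
    if rng is None:
        rng = random.Random(0)
    return sum(rng.random() < solve_prob for _ in range(n_rollouts)) / n_rollouts

def curate_rlvr_batch(problems, high=0.9):
    """Drop only trivially easy problems (pass rate > high), keeping
    moderately easy ones. The retained easy items keep rewarding short,
    already-correct solutions, so the policy has no incentive to drift
    toward ever-longer chains of thought -- an implicit length regularizer."""
    kept = []
    for name, solve_prob in problems:
        if estimate_pass_rate(solve_prob) <= high:
            kept.append(name)
    return kept

problems = [("hard", 0.1), ("medium", 0.5), ("easy", 0.8), ("trivial", 0.99)]
batch = curate_rlvr_batch(problems)
```

The contrast is with the common RLVR practice of training only on problems the model usually fails; keeping the moderately easy band in the batch is what supplies the length-shortening pressure.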