GCC AI Research

Results for "ICLR"

MBZUAI research at ICLR 2023

MBZUAI ·

MBZUAI had 22 papers accepted at ICLR 2023, with faculty member Kun Zhang co-authoring seven of them. Yuanzhi Li, an affiliated assistant professor at MBZUAI, received an honorable mention for his paper on knowledge distillation. Additionally, a paper co-authored by MBZUAI President Eric Xing was recognized as a top-5% paper at the conference. Why it matters: MBZUAI's strong presence at a top-tier machine learning conference like ICLR demonstrates the university's growing influence and research capabilities in the global AI landscape.

A new strategy for complex optimization problems in machine learning presented at ICLR

MBZUAI ·

MBZUAI researchers presented a new strategy for handling complex optimization problems in machine learning at ICLR 2024. The study, a collaboration with ISAM, combines zeroth-order methods, which optimize using only function evaluations when gradients are unavailable or expensive, with hard-thresholding, which enforces sparse solutions. The approach aims to improve convergence guarantees, helping algorithms reach high-quality solutions efficiently. Why it matters: Improving optimization techniques is crucial for advancing machine learning models used in various applications, potentially accelerating development and enhancing performance.

Two weak assumptions, one strong result presented at ICLR

MBZUAI ·

MBZUAI researchers presented a new machine learning method at ICLR for uncovering hidden variables from observed data. The method, called "complementary gains," combines two weak assumptions to provide identifiability guarantees. This approach aims to recover true latent variables reflecting real-world processes while remaining computationally efficient. Why it matters: The research advances disentangled representation learning by identifying minimal assumptions necessary for identifiability, improving the applicability of AI models to real-world data.

New test that recovers hidden relationships in data to be presented at ICLR

MBZUAI ·

MBZUAI researchers developed a new conditional independence test (DCT) that determines whether two variables are dependent when both are discrete, both are continuous, or one is discrete and the other continuous. The new test addresses cases where variables are inherently continuous but are recorded in discretized form due to data collection limits. The findings will be presented at the 13th International Conference on Learning Representations (ICLR) in Singapore. Why it matters: This research addresses a fundamental problem in machine learning and statistics, improving causal relationship discovery in the mixed datasets common across finance, public health, and other fields.
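The DCT itself is defined in the paper, and conditional testing with discretized variables is substantially more involved than what fits here. As a hedged illustration of the mixed-type setting only, the sketch below runs a generic permutation test of marginal (unconditional) dependence between a discrete-coded variable and a continuous one; all names are our own:

```python
import numpy as np

def perm_dependence_test(x, y, n_perm=500, seed=0):
    """Permutation test of marginal dependence between x and y using
    |Pearson correlation| as the statistic. x may be a numerically
    coded discrete variable or a continuous one."""
    rng = np.random.default_rng(seed)
    stat = abs(np.corrcoef(x, y)[0, 1])
    # Build the null distribution by shuffling x, which breaks any
    # pairing between x and y while preserving both marginals.
    null = [abs(np.corrcoef(rng.permutation(x), y)[0, 1])
            for _ in range(n_perm)]
    # Add-one correction keeps the p-value valid under the null.
    p = (1 + sum(s >= stat for s in null)) / (n_perm + 1)
    return stat, p

rng = np.random.default_rng(2)
x = rng.integers(0, 3, size=200).astype(float)  # discrete variable
y_dep = x + rng.standard_normal(200)            # depends on x
y_ind = rng.standard_normal(200)                # independent of x
_, p_dep = perm_dependence_test(x, y_dep)
_, p_ind = perm_dependence_test(x, y_ind)
```

The dependent pair yields a small p-value while the independent pair does not; a conditional test like the paper's must additionally control for a conditioning set, which is where discretization of continuous variables becomes delicate.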

MBZUAI researchers at ICML

MBZUAI ·

MBZUAI researchers will present 20 papers at the 40th International Conference on Machine Learning (ICML) in Honolulu. Visiting Associate Professor Tongliang Liu leads with seven publications, followed by Kun Zhang with six. One paper investigates semi-supervised learning vs. model-based methods for noisy data annotation in deep neural networks. Why it matters: The research addresses the critical issue of data quality and accessibility in machine learning, particularly for organizations with limited resources for data annotation.

Uncovering causal relationships in multimodal biological data: A new framework presented at ICLR

MBZUAI ·

MBZUAI researchers presented a new causal representation learning framework at ICLR for identifying latent causal variables in multimodal biological data. The framework addresses the challenge of uncovering underlying causal factors from lab tests, genetic information, and medical images. The new approach can identify latent causal variables and their influence on observed biological outcomes across modalities. Why it matters: The model's ability to analyze causal mechanisms between modalities can lead to more complete insights in biomedical research.

Baldwin headlines ACL 2022

MBZUAI ·

MBZUAI Professor Timothy Baldwin delivered the presidential keynote at the 60th Annual Meeting of the Association for Computational Linguistics (ACL). Baldwin also presented three papers at the conference, including work on biomedical literature summarization, NLP for Indonesian languages, and understanding procedural texts. The papers address challenges such as reducing human effort in reviewing medical documents and digitally preserving Indonesian indigenous languages. Why it matters: Baldwin's contributions and leadership role at ACL highlight the growing prominence of MBZUAI and GCC-based researchers in the global NLP community.

New approaches for machine learning optimization presented at ICML

MBZUAI ·

MBZUAI and KAUST researchers collaborated to present new optimization methods at ICML 2024 for composite and distributed machine learning settings. The study addresses challenges in training large models posed by data size and limited computational power. Their work focuses on minimizing the loss function by adjusting a model's trainable parameters, using techniques such as gradient clipping to stabilize training. Why it matters: This research contributes to the ongoing advancement of machine learning optimization, crucial for improving the performance and efficiency of AI models in the region and globally.
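Gradient clipping, mentioned in the summary above, is commonly applied by global norm: if the gradient's L2 norm exceeds a threshold, the whole vector is scaled down, preserving its direction. The paper's precise clipping scheme may differ; this is a minimal sketch of the standard variant:

```python
import numpy as np

def clip_by_global_norm(grad, max_norm):
    """Rescale grad so its L2 norm is at most max_norm,
    keeping its direction unchanged."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([3.0, 4.0])  # L2 norm is 5.0
g_clipped = clip_by_global_norm(g, max_norm=1.0)
# g_clipped is [0.6, 0.8]: same direction, norm 1.0
```

Bounding the update size this way guards against occasional exploding gradients that would otherwise destabilize training, at the cost of biasing very large steps toward the threshold.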