This article discusses methods for handling label noise in deep learning, including extracting confident examples and modeling label noise. Tongliang Liu from the University of Sydney presented these approaches. The talk aimed to provide participants with a basic understanding of learning with noisy labels. Why it matters: As AI models are increasingly trained on large, noisy datasets, techniques for robust learning become crucial for reliable real-world performance.
A new paper coauthored by researchers at The University of Melbourne and MBZUAI explores disagreement in human annotation for AI training. The paper treats disagreement as a signal (human label variation or HLV) rather than noise, and proposes new evaluation metrics based on fuzzy set theory. These metrics adapt accuracy and F-score to cases where multiple labels may plausibly apply, aligning model output with the distribution of human judgments. Why it matters: This research addresses a key challenge in NLP by accounting for the inherent ambiguity in human language, potentially leading to more robust and human-aligned AI systems.
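To make the idea concrete, here is a minimal, illustrative sketch of a fuzzy-set-style accuracy (not the paper's exact formulation): each item's human labels form a fuzzy set whose membership degrees are annotator vote shares, and a prediction earns partial credit equal to the membership of the predicted label rather than all-or-nothing credit against a single gold label.

```python
from collections import Counter

def fuzzy_accuracy(predictions, annotations):
    """Soft accuracy: the credit for each prediction is the fuzzy
    membership of the predicted label in the human-label set,
    taken here as the annotator vote share for that label."""
    total = 0.0
    for pred, votes in zip(predictions, annotations):
        counts = Counter(votes)
        membership = counts[pred] / len(votes)  # in [0, 1]
        total += membership
    return total / len(predictions)

# Example: two items, each labeled by four annotators.
preds = ["toxic", "ok"]
anns = [["toxic", "toxic", "ok", "toxic"],   # 3/4 agree with prediction
        ["ok", "ok", "toxic", "ok"]]         # 3/4 agree with prediction
print(fuzzy_accuracy(preds, anns))  # 0.75
```

Under a hard single-gold-label accuracy both predictions would score 1.0 even though a quarter of annotators disagreed; the fuzzy variant preserves that disagreement signal.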
Researchers are exploring methods for evaluating the outcomes of actions from off-policy observations in which the context is noisy or anonymized. They employ proxy causal learning, using two noisy views of the context to recover the average causal effect of an action without explicitly modeling the hidden context. The implementation uses learned neural-network representations for both action and context, and outperforms an autoencoder-based alternative. Why it matters: This research addresses a key challenge in applying AI in real-world scenarios where data privacy or bandwidth limitations necessitate working with noisy or anonymized data.
MBZUAI researchers will present 20 papers at the 40th International Conference on Machine Learning (ICML) in Honolulu. Visiting Associate Professor Tongliang Liu leads with seven publications, followed by Kun Zhang with six. One paper investigates semi-supervised learning vs. model-based methods for noisy data annotation in deep neural networks. Why it matters: The research addresses the critical issue of data quality and accessibility in machine learning, particularly for organizations with limited resources for data annotation.
This article discusses domain shift in machine learning, where testing data differs from training data, and methods to mitigate it via domain adaptation and generalization. Domain adaptation uses labeled source data and unlabeled target data. Domain generalization uses labeled data from single or multiple source domains to generalize to unseen target domains. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.
MBZUAI's Maxim Panov is developing uncertainty quantification methods to improve the reliability of language models. His work focuses on providing insights into the confidence level of machine learning models' predictions, especially in scenarios where accuracy is critical, such as medicine. Panov is working on post-processing techniques that can be applied to already-trained models. Why it matters: This research aims to address the issue of "hallucinations" in language models, enhancing their trustworthiness and applicability in sensitive domains within the region and globally.
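One widely used post-processing technique in this family is temperature scaling; the source does not specify Panov's exact method, so the sketch below is only an illustrative example of post-hoc calibration. It assumes access to a trained classifier's logits on a held-out validation set and fits a single scalar temperature without touching the model's weights.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the true labels at temperature T."""
    probs = softmax(logits, T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels):
    """Grid-search the scalar T that minimizes validation NLL.
    The trained model itself is never modified -- only its
    confidence scores are rescaled at prediction time."""
    temps = np.linspace(0.5, 5.0, 91)
    losses = [nll(val_logits, val_labels, T) for T in temps]
    return temps[int(np.argmin(losses))]
```

At inference, `softmax(logits, T)` with the fitted `T` gives calibrated confidences: an overconfident model yields `T > 1`, flattening its predicted distribution toward its true accuracy.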
MBZUAI Ph.D. student Raza Imam and colleagues presented a new benchmark called MediMeta-C to test the robustness of medical vision-language models (MVLMs) under real-world image corruptions. They found that top-performing MVLMs on clean data often fail under mild corruption, with fundoscopy models particularly vulnerable. To address this, they developed RobustMedCLIP (RMC), a lightweight defense using few-shot LoRA tuning to improve model robustness. Why it matters: This research highlights the critical need for robustness testing in medical AI to ensure reliability in clinical settings, particularly in resource-constrained environments where image quality may be compromised.
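LoRA tuning, the mechanism behind RMC's lightweight defense, freezes the pretrained weights and learns only a low-rank additive update. A minimal sketch of the idea (hypothetical shapes and hyperparameters, not the RobustMedCLIP code):

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update B @ A.
    Only A and B (r * (d_in + d_out) parameters) are updated during
    few-shot tuning; the pretrained W stays fixed, which keeps the
    adaptation cheap and hard to overfit on few examples."""
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                # frozen pretrained weight
        self.A = rng.normal(0, 0.01, (r, d_in))   # trainable down-projection
        self.B = np.zeros((d_out, r))             # trainable up-projection, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        # With B zero-initialized, the output equals the frozen layer
        # exactly until tuning updates B.
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)
```

Zero-initializing `B` is the standard LoRA trick: the adapted model starts out identical to the pretrained one, and few-shot gradients only gradually move it away.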
Researchers at MBZUAI have demonstrated a method called "Data Laundering" that artificially boosts language model benchmark scores using knowledge distillation. The technique covertly transfers benchmark-specific knowledge, inflating accuracy without genuine improvements in reasoning. The study highlights a vulnerability in current AI evaluation practices and calls for more robust benchmarks. Why it matters: If benchmark scores can be gamed this way, they stop reflecting genuine capability, underscoring the need for evaluation practices that resist such shortcuts.
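Knowledge distillation trains a student to match a teacher's softened output distribution; in the "Data Laundering" setting the teacher has improperly absorbed benchmark-specific knowledge, so matching its outputs passes that knowledge along. A minimal sketch of the standard distillation loss (illustrative, not the paper's code):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    Minimizing this pushes the student toward the teacher's outputs --
    including anything the teacher memorized from benchmark data."""
    p = softmax(teacher_logits, T)                    # teacher targets
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(student_logits, T) + 1e-12)
    return (T ** 2) * np.mean(np.sum(p * (log_p - log_q), axis=-1))
```

The loss is zero only when the student reproduces the teacher exactly, which is why distillation from a benchmark-contaminated teacher inflates scores without improving the student's underlying reasoning.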