GCC AI Research

Results for "OOD Robustness"

Asymmetry Learning and OOD Robustness

MBZUAI ·

Bruno Ribeiro from Purdue University presented a talk on Asymmetry Learning and Out-of-Distribution (OOD) Robustness. The talk introduced Asymmetry Learning, a new paradigm that looks for evidence of asymmetries in data to improve classifier performance in both in-distribution and out-of-distribution scenarios. The approach performs a causal structure search to find classifiers that perform well across different environments. Why it matters: This research addresses a key challenge in AI by proposing a novel approach to improving the reliability and generalization of classifiers in unseen environments, potentially leading to more robust AI systems.
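The core selection criterion — prefer a classifier that performs well in every environment, not just on average — can be sketched in a few lines. This is a minimal stand-in for the causal structure search described in the talk, not Ribeiro's actual algorithm; the risk values below are hypothetical.

```python
def pick_robust_classifier(env_risks):
    """Pick the hypothesis whose worst-case risk across training
    environments is smallest. Intuition: a classifier relying on a
    stable (causal) mechanism has similar risk in every environment,
    while one exploiting a spurious asymmetry collapses in at least
    one of them. A toy stand-in for a causal structure search."""
    return min(env_risks, key=lambda name: max(env_risks[name]))

# Hypothetical per-environment risks for two candidate classifiers.
risks = {
    "spurious": [0.05, 0.60],  # excellent in env 1, fails in env 2
    "causal":   [0.15, 0.17],  # consistent across both environments
}
print(pick_robust_classifier(risks))  # prints "causal"
```

The min-over-max criterion is one simple way to operationalize "performs well across different environments"; the actual method searches over causal structures rather than a fixed candidate set.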

Towards Robust Multimodal Open-set Test-time Adaptation via Adaptive Entropy-aware Optimization

arXiv ·

This paper introduces Adaptive Entropy-aware Optimization (AEO), a new framework to tackle Multimodal Open-set Test-time Adaptation (MM-OSTTA). AEO uses Unknown-aware Adaptive Entropy Optimization (UAE) and Adaptive Modality Prediction Discrepancy Optimization (AMP) to distinguish unknown class samples during online adaptation by amplifying the entropy difference between known and unknown samples. The study establishes a new benchmark derived from existing datasets with five modalities and evaluates AEO's performance across various domain shift scenarios, demonstrating its effectiveness in long-term and continual MM-OSTTA settings.
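The key signal AEO amplifies — prediction entropy separating known from unknown samples — can be sketched as follows. This is a simplified illustration with a fixed, hypothetical threshold; AEO adapts its optimization online and operates across multiple modalities.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a softmax output."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def split_known_unknown(batch_probs, threshold):
    """Flag samples as 'known' (confident, low entropy) or 'unknown'
    (uncertain, high entropy). Illustrates the entropy gap AEO's
    unknown-aware optimization amplifies; the fixed threshold here
    is a hypothetical simplification."""
    return ["unknown" if entropy(p) > threshold else "known"
            for p in batch_probs]

confident = [0.97, 0.01, 0.01, 0.01]  # peaked prediction -> known
uncertain = [0.25, 0.25, 0.25, 0.25]  # uniform prediction -> unknown
print(split_known_unknown([confident, uncertain], threshold=0.7))
# prints "['known', 'unknown']"
```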

Provable Unrestricted Adversarial Training without Compromise with Generalizability

arXiv ·

This paper introduces Provable Unrestricted Adversarial Training (PUAT), a novel adversarial training approach. PUAT enhances robustness against both unrestricted and restricted adversarial examples while improving standard generalizability by aligning the distributions of adversarial examples, natural data, and the classifier's learned distribution. The approach uses partially labeled data and an augmented triple-GAN to generate effective unrestricted adversarial examples, demonstrating superior performance on benchmarks.
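For context, the *restricted* adversarial examples PUAT defends against are typically generated by norm-bounded gradient attacks. Below is a minimal FGSM-style sketch on a logistic model; PUAT's unrestricted examples instead come from a generative model (the augmented triple-GAN), which is not shown here, and all weights and inputs below are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(x, w, b, y):
    """Binary cross-entropy of a logistic model on one example."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step: x' = x + eps * sign(dL/dx).
    A classic restricted (norm-bounded) attack, shown only to
    illustrate what 'restricted adversarial example' means."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]  # dL/dx for cross-entropy
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b, x, y = [1.0, -2.0], 0.0, [0.5, 0.5], 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
# The perturbed input raises the loss relative to the clean input.
print(bce_loss(x, w, b, y), bce_loss(x_adv, w, b, y))
```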

On Transferability of Machine Learning Models

MBZUAI ·

This article discusses domain shift in machine learning, where testing data differs from training data, and methods to mitigate it via domain adaptation and generalization. Domain adaptation uses labeled source data and unlabeled target data. Domain generalization uses labeled data from single or multiple source domains to generalize to unseen target domains. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.
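A very simple way to quantify the domain shift that these methods try to reduce is to compare feature statistics between source and target data. The moment-matching penalty below is a minimal sketch in that spirit (a crude stand-in for richer alignment losses such as MMD), not a method from the article.

```python
def mean_discrepancy(source_feats, target_feats):
    """Squared distance between source and target feature means.
    Zero when the first moments match; large when the target
    distribution has drifted — a toy proxy for domain shift."""
    def mean(col):
        return sum(col) / len(col)
    dims = len(source_feats[0])
    s = [mean([f[d] for f in source_feats]) for d in range(dims)]
    t = [mean([f[d] for f in target_feats]) for d in range(dims)]
    return sum((si - ti) ** 2 for si, ti in zip(s, t))

src = [[0.0, 0.0], [2.0, 2.0]]  # source mean (1, 1)
tgt = [[1.0, 1.0], [1.0, 1.0]]  # target mean (1, 1) -> no shift
print(mean_discrepancy(src, tgt))        # prints "0.0"
print(mean_discrepancy(src, [[3.0, 1.0]]))  # positive: shifted target
```

In domain adaptation, a penalty of this form can be minimized jointly with the task loss so that source and target features align; domain generalization has no target data, so it must rely on source domains alone.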

Deep Ensembles Work, But Are They Necessary?

MBZUAI ·

A recent study questions the necessity of deep ensembles by comparing them with single, larger models of matched accuracy. It demonstrates that ensemble diversity does not meaningfully improve uncertainty quantification on out-of-distribution data, and that the out-of-distribution performance of ensembles is strongly determined by their in-distribution performance. Why it matters: The findings suggest that larger, single neural networks can replicate the benefits of deep ensembles, potentially simplifying model deployment and reducing computational costs.
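For readers unfamiliar with the mechanics, a deep ensemble predicts by averaging its members' softmax outputs, and the entropy of that average is a common uncertainty score. The sketch below illustrates this with hypothetical two-class outputs; it is a generic illustration, not the study's evaluation protocol.

```python
import math

def ensemble_predict(member_probs):
    """A deep ensemble's prediction: the elementwise average of its
    members' softmax outputs on the same input."""
    n = len(member_probs)
    return [sum(m[c] for m in member_probs) / n
            for c in range(len(member_probs[0]))]

def predictive_entropy(probs):
    """Entropy of the averaged prediction, a standard uncertainty
    measure: member disagreement pushes the average toward uniform
    and therefore raises the entropy."""
    return -sum(p * math.log(p) for p in probs if p > 0)

agree    = [[0.9, 0.1], [0.9, 0.1]]  # members agree -> confident
disagree = [[0.9, 0.1], [0.1, 0.9]]  # members disagree -> uncertain
print(predictive_entropy(ensemble_predict(agree)))
print(predictive_entropy(ensemble_predict(disagree)))  # larger value
```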

A new approach to improve vision-language models

MBZUAI ·

MBZUAI researchers have developed a new approach to enhance the generalizability of vision-language models when processing out-of-distribution data. The study, led by Sheng Zhang and involving multiple MBZUAI professors and researchers, addresses the challenge of AI applications needing to manage unforeseen circumstances. The new method aims to improve how these models, which combine natural language processing and computer vision, handle new information not used during training. Why it matters: Improving the adaptability of vision-language models is critical for real-world AI applications like autonomous driving and medical imaging, especially in diverse and changing environments.

Teaching machines what they don’t know: a new approach to open-world object detection

MBZUAI ·

MBZUAI researchers are presenting a new approach to open-world object detection at the AAAI conference. The method enables machines to distinguish between known and unknown objects in images, and then learn to classify the unknown objects. PhD student Sahal Shaji Mullappilly is the lead author of the study, titled "Semi-Supervised Open-World Detection". Why it matters: This research addresses a key limitation in current object detection systems, allowing for more adaptable and robust AI in real-world applications.

VENOM: Text-driven Unrestricted Adversarial Example Generation with Diffusion Models

arXiv ·

The paper introduces VENOM, a text-driven framework for generating high-quality unrestricted adversarial examples using diffusion models. VENOM unifies image content generation and adversarial synthesis into a single reverse diffusion process, enhancing both attack success rate and image quality. The framework incorporates an adaptive adversarial guidance strategy with momentum to ensure the generated adversarial examples align with the distribution of natural images.
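The momentum component of the guidance strategy can be sketched as an exponential moving average over per-step adversarial gradients, which keeps the attack direction from oscillating across reverse-diffusion steps. This is a generic EMA illustration under assumed values; the smoothing factor `beta` and the gradient vectors below are hypothetical, not VENOM's actual parameters.

```python
def momentum_guidance(step_grads, beta=0.9):
    """Exponential moving average of per-step adversarial gradients:
    m <- beta * m + (1 - beta) * g at each reverse-diffusion step.
    Smooths out noisy per-step directions so the accumulated guidance
    changes gradually. beta is a hypothetical smoothing factor."""
    m = [0.0] * len(step_grads[0])
    for g in step_grads:
        m = [beta * mi + (1.0 - beta) * gi for mi, gi in zip(m, g)]
    return m

# With a constant gradient, the EMA converges to that gradient.
steady = momentum_guidance([[1.0, -1.0]] * 100)
print(steady)  # close to [1.0, -1.0]
```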