GCC AI Research

Results for "distribution shift"

Adapting to Distribution Shifts: Recent Advances in Importance Weighting Methods

MBZUAI ·

This article discusses distribution shifts in machine learning and importance-weighting methods for addressing them. Masashi Sugiyama of the University of Tokyo and RIKEN AIP presented recent advances in importance-based distribution shift adaptation, covering joint importance-predictor estimation, dynamic importance weighting, and multi-step class-prior shift adaptation. Why it matters: Understanding and mitigating distribution shifts is crucial for deploying robust and reliable AI models in real-world scenarios within the GCC region and beyond.
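
The core idea behind importance weighting is to reweight each training loss by the density ratio w(x) = p_test(x)/p_train(x), so that the empirical risk on training data becomes an unbiased estimate of the test risk. The sketch below illustrates this under covariate shift with known densities; it is a minimal illustration of the principle, not the joint or dynamic estimators from the talk.

```python
# Minimal sketch of importance-weighted risk estimation under covariate
# shift (illustrative setup, not the talk's specific estimators).
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: training and test inputs come from different Gaussians.
x_train = rng.normal(loc=0.0, scale=1.0, size=500)
x_test = rng.normal(loc=1.0, scale=0.5, size=500)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Density-ratio weights w(x) = p_test(x) / p_train(x). Here the densities
# are known; in practice they would be estimated, e.g., by a probabilistic
# classifier separating training from test samples.
w = gaussian_pdf(x_train, 1.0, 0.5) / gaussian_pdf(x_train, 0.0, 1.0)

# Importance-weighted empirical risk: an unbiased estimate of the test
# risk computed from labeled training data only (squared loss, toy model).
y_train = np.sin(x_train) + 0.1 * rng.normal(size=x_train.shape)
losses = (0.8 * x_train - y_train) ** 2
print("unweighted risk:", losses.mean())
print("importance-weighted risk:", (w * losses).mean())
```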

On Transferability of Machine Learning Models

MBZUAI ·

This article discusses domain shift in machine learning, where the test data distribution differs from the training distribution, and methods to mitigate it via domain adaptation and domain generalization. Domain adaptation uses labeled source data together with unlabeled target data; domain generalization uses labeled data from one or more source domains to generalize to unseen target domains. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.
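
As one concrete (and deliberately simple) example of the adaptation setting, many methods add an unsupervised alignment penalty between source and target feature statistics to the supervised source loss. The CORAL-style penalty below is an illustrative choice, not a specific technique from the article.

```python
# Toy sketch of one common domain-adaptation ingredient: aligning the
# feature statistics of labeled source data and unlabeled target data.
import numpy as np

rng = np.random.default_rng(1)
source_feats = rng.normal(0.0, 1.0, size=(200, 16))  # labeled source domain
target_feats = rng.normal(0.5, 1.5, size=(200, 16))  # unlabeled target domain

def coral_penalty(fs, ft):
    """Frobenius distance between source and target feature covariances."""
    cs = np.cov(fs, rowvar=False)
    ct = np.cov(ft, rowvar=False)
    return np.sum((cs - ct) ** 2) / (4 * fs.shape[1] ** 2)

# Added to the supervised source loss, this term pushes the feature
# extractor toward domain-invariant representations.
print("alignment penalty:", coral_penalty(source_feats, target_feats))
```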

Towards Robust Multimodal Open-set Test-time Adaptation via Adaptive Entropy-aware Optimization

arXiv ·

This paper introduces Adaptive Entropy-aware Optimization (AEO), a new framework for Multimodal Open-set Test-time Adaptation (MM-OSTTA). AEO combines Unknown-aware Adaptive Entropy Optimization (UAE) with Adaptive Modality Prediction Discrepancy Optimization (AMP) to distinguish unknown-class samples during online adaptation by amplifying the entropy gap between known and unknown samples. The study establishes a new benchmark derived from existing datasets spanning five modalities and evaluates AEO across various domain shift scenarios, demonstrating its effectiveness in long-term and continual MM-OSTTA settings.
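
The entropy signal that AEO amplifies can be seen in a few lines: predictions on known classes are confident (low softmax entropy), while unknown-class inputs yield diffuse, high-entropy predictions. The toy sketch below shows the separation and a simple threshold; the actual UAE/AMP objectives are more involved.

```python
# Sketch of the entropy gap exploited by open-set test-time adaptation:
# known-class samples have low prediction entropy, unknowns high entropy.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(probs):
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

rng = np.random.default_rng(2)
known_logits = rng.normal(0, 1, size=(100, 10))
known_logits[np.arange(100), rng.integers(0, 10, 100)] += 6.0  # confident
unknown_logits = rng.normal(0, 1, size=(100, 10))              # diffuse

h_known = entropy(softmax(known_logits))
h_unknown = entropy(softmax(unknown_logits))
threshold = (h_known.mean() + h_unknown.mean()) / 2  # naive split
print("mean entropy known/unknown:", h_known.mean(), h_unknown.mean())
print("fraction flagged as unknown:", (h_unknown > threshold).mean())
```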

Asymmetry Learning and OOD Robustness

MBZUAI ·

Bruno Ribeiro from Purdue University presented a talk on Asymmetry Learning and Out-of-Distribution (OOD) Robustness. Asymmetry Learning is a new paradigm that looks for evidence of asymmetries in data to improve classifier performance both in-distribution and out-of-distribution, performing a causal structure search to find classifiers that perform well across different environments. Why it matters: This research addresses a key challenge in AI by proposing a novel approach to improve the reliability and generalization of classifiers in unseen environments, potentially leading to more robust AI systems.

DGM-DR: Domain Generalization with Mutual Information Regularized Diabetic Retinopathy Classification

arXiv ·

This paper introduces a domain generalization (DG) method for Diabetic Retinopathy (DR) classification that regularizes the learned representation by maximizing its mutual information with a large pretrained model. The method aims to address the challenge of domain shift in medical imaging caused by variations in data acquisition. Experiments on public datasets demonstrate that the proposed method outperforms state-of-the-art techniques, achieving a 5.25% improvement in average accuracy.
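
In practice, mutual information with a frozen pretrained network is typically maximized through a tractable surrogate. The sketch below uses cosine alignment between trainable and frozen features as one such surrogate; the weight and functional form are assumptions for illustration, not DGM-DR's exact objective.

```python
# Hedged sketch of mutual-information-style regularization against a
# frozen pretrained model: penalize misalignment between the classifier's
# features and the pretrained model's features, a simple surrogate for
# maximizing their mutual information.
import numpy as np

rng = np.random.default_rng(3)
student_feats = rng.normal(size=(64, 128))     # trainable encoder output
pretrained_feats = rng.normal(size=(64, 128))  # frozen pretrained output

def mi_surrogate(fs, fp):
    """Negative mean cosine similarity; lower means better alignment."""
    fs = fs / np.linalg.norm(fs, axis=1, keepdims=True)
    fp = fp / np.linalg.norm(fp, axis=1, keepdims=True)
    return -np.mean(np.sum(fs * fp, axis=1))

lam = 0.1  # regularization weight (hypothetical value)
print("regularization term:", lam * mi_surrogate(student_feats, pretrained_feats))
```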

ConDiSR: Contrastive Disentanglement and Style Regularization for Single Domain Generalization

arXiv ·

This paper introduces a new Single Domain Generalization (SDG) method called ConDiSR for medical image classification, using channel-wise contrastive disentanglement and reconstruction-based style regularization. The method is evaluated on multicenter histopathology image classification, achieving a 1% improvement in average accuracy compared to state-of-the-art SDG baselines. Code is available at https://github.com/BioMedIA-MBZUAI/ConDiSR.
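
As a loose illustration of what channel-wise disentanglement can look like (an assumed, generic formulation rather than ConDiSR's actual losses), one can split the feature channels into content and style groups and pull together the content features of same-class samples:

```python
# Generic sketch of channel-wise disentanglement with a contrastive pull
# on the content channels (illustrative, not ConDiSR's exact objective).
import numpy as np

rng = np.random.default_rng(4)
feats = rng.normal(size=(8, 64))               # features for 8 images
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])

# Channel-wise split into content and style groups (assumed partition).
content, style = feats[:, :32], feats[:, 32:]

def contrastive_pull(f, y):
    """Mean squared distance between normalized same-class feature pairs."""
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    d = np.sum((f[:, None, :] - f[None, :, :]) ** 2, axis=-1)
    return d[same].mean()

print("content alignment loss:", contrastive_pull(content, labels))
```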

Unscented Autoencoder

arXiv ·

The paper introduces the Unscented Autoencoder (UAE), a novel deep generative model built on the Variational Autoencoder (VAE) framework. The UAE uses the Unscented Transform (UT) to obtain a more informative posterior representation than the reparameterization trick used in VAEs, replaces the Kullback-Leibler (KL) divergence with a Wasserstein metric, and achieves competitive Fréchet Inception Distance (FID) scores.
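
The Unscented Transform propagates a Gaussian through a nonlinearity by decoding a small set of deterministically chosen sigma points rather than a single reparameterized sample. The sketch below shows this for a diagonal posterior, together with the closed-form 2-Wasserstein distance to a standard normal that can stand in for the KL term; the decoder and scaling constants are illustrative assumptions.

```python
# Minimal sketch of the Unscented Transform for a diagonal Gaussian
# posterior N(mu, diag(sigma^2)) pushed through a toy decoder.
import numpy as np

rng = np.random.default_rng(5)
W = rng.normal(size=(8, 4))  # stand-in for a decoder network

def decode(z):
    return np.tanh(z @ W)

def sigma_points(mu, sigma, kappa=1.0):
    """2d+1 sigma points of the diagonal Gaussian N(mu, diag(sigma**2))."""
    d = mu.shape[0]
    scale = np.sqrt(d + kappa) * sigma
    pts = [mu]
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0
        pts += [mu + scale[i] * e, mu - scale[i] * e]
    return np.stack(pts)

def ut_weights(d, kappa=1.0):
    w = np.full(2 * d + 1, 1.0 / (2.0 * (d + kappa)))
    w[0] = kappa / (d + kappa)
    return w

mu, sigma = rng.normal(size=8), np.abs(rng.normal(size=8))

# Instead of one reparameterized sample, decode all sigma points and take
# their weighted mean: a deterministic summary of the decoded posterior.
out = decode(sigma_points(mu, sigma))
print("UT mean of decoder outputs:", np.average(out, axis=0, weights=ut_weights(8)))

# Closed-form squared 2-Wasserstein distance between N(mu, diag(sigma^2))
# and N(0, I) -- the kind of regularizer that can replace the KL term.
print("W2^2 to standard normal:", np.sum(mu**2) + np.sum((sigma - 1.0)**2))
```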

Gaussian Variational Inference in high dimension

MBZUAI ·

This article discusses approximating a high-dimensional distribution with a Gaussian via variational inference, i.e., minimizing the Kullback-Leibler divergence over Gaussian candidates. Building on previous research, it characterizes the minimizer as a Gaussian with explicitly described mean and covariance, and quantifies the approximation accuracy and range of applicability in terms of an effective dimension, which is relevant for analyzing sampling schemes in optimization. Why it matters: This theoretical research can inform the development of more efficient and accurate AI algorithms, particularly in areas dealing with high-dimensional data such as machine learning and data analysis.
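
The objective being minimized has a closed form whenever the target is itself Gaussian, which makes the setup easy to sanity-check. The sketch below evaluates KL(N(m0, S0) || N(m1, S1)) and confirms that it vanishes when the variational Gaussian matches the target; the variable names and test values are illustrative.

```python
# Sanity-check sketch for Gaussian variational inference: the KL between
# two multivariate Gaussians in closed form, minimized when they match.
import numpy as np

def kl_gaussians(m0, S0, m1, S1):
    """KL( N(m0, S0) || N(m1, S1) ) in d dimensions."""
    d = m0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

d = 5
target_mean = np.ones(d)
target_cov = 2.0 * np.eye(d)

# A mismatched Gaussian has positive KL; matching the target drives it to 0,
# as expected when the minimizer over all Gaussians is the target itself.
print(kl_gaussians(np.zeros(d), np.eye(d), target_mean, target_cov))
print(kl_gaussians(target_mean, target_cov, target_mean, target_cov))
```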