This article discusses domain shift in machine learning, where the distribution of test data differs from that of the training data, and methods to mitigate it via domain adaptation and domain generalization. Domain adaptation uses labeled source data together with unlabeled target data. Domain generalization uses labeled data from one or more source domains to generalize to unseen target domains. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.
This paper introduces Provable Unrestricted Adversarial Training (PUAT), a novel adversarial training approach. PUAT enhances robustness against both unrestricted and restricted adversarial examples while also improving standard generalization, by aligning the distribution of adversarial examples, the natural data distribution, and the distribution learned by the classifier. The approach uses partially labeled data and an augmented triple-GAN to generate effective unrestricted adversarial examples, demonstrating superior performance on benchmarks.
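PUAT's triple-GAN pipeline is beyond a short sketch, but the *restricted* adversarial examples it also defends against can be illustrated with a one-step FGSM-style attack on a toy linear classifier. This is a generic illustration, not the paper's method; the weights and the `eps` budget are made-up assumptions:

```python
import numpy as np

# Toy linear classifier with logistic loss; all values are illustrative.
def fgsm(x, y, w, b, eps):
    """One-step L-infinity attack: x' = x + eps * sign(d loss / d x)."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0]); b = 0.0
x = np.array([0.5, 0.2]); y = 1.0      # true label is class 1
x_adv = fgsm(x, y, w, b, eps=0.1)

# The perturbation stays inside the eps-ball but lowers the true-class logit.
print(np.max(np.abs(x_adv - x)))       # 0.1
print(w @ x_adv + b < w @ x + b)       # True
```

"Unrestricted" adversarial examples, by contrast, are not confined to such a small norm ball, which is why PUAT turns to generative models to produce them.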
MBZUAI researchers have developed a new approach to enhance the generalizability of vision-language models when processing out-of-distribution data. The study, led by Sheng Zhang and involving multiple MBZUAI professors and researchers, addresses the challenge AI applications face in handling unforeseen inputs. The new method aims to improve how these models, which combine natural language processing and computer vision, handle data unlike anything seen during training. Why it matters: Improving the adaptability of vision-language models is critical for real-world AI applications like autonomous driving and medical imaging, especially in diverse and changing environments.
MBZUAI doctoral student Umaima Rahman is researching domain adaptation and generalization in deep learning for medical imaging to improve AI model performance across diverse hospitals and equipment. Her work focuses on building models that learn consistent features across different data sources to ensure reliability in various healthcare settings. Rahman emphasizes that generalization in healthcare AI is a necessity, especially in resource-limited settings, and aims to develop AI that assists clinicians rather than replaces them. Why it matters: This research addresses a critical challenge in deploying AI in healthcare, ensuring that models can be reliably used in diverse settings, particularly benefiting developing countries and improving global healthcare accessibility.
MBZUAI researchers presented a study at ICML 2024 examining how data aggregation distorts causal discovery. The study argues that current methods are misled because real-world interactions happen at a micro timescale while observations are aggregated over coarser intervals. Using the example of ice cream sales and temperature, they show how aggregation can introduce spurious "instantaneous causality" where the true micro-level relationship is time-lagged. Why it matters: The research identifies a fundamental limitation in current causal discovery methods, potentially impacting disciplines relying on accurate causal inference from observational data.
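The distortion can be reproduced in a few lines. Below is a hedged simulation (not the authors' code) of the ice cream example: at the daily scale, temperature drives sales with a one-day lag, but after monthly averaging the lag vanishes and the dependence looks instantaneous. All coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 30 * 120                                   # 120 "months" of daily data
# Smoothed daily temperature, so adjacent days are correlated:
temp = np.convolve(rng.normal(size=T), np.ones(5) / 5, mode="same")
sales = np.empty(T)
sales[0] = 0.0
sales[1:] = 0.8 * temp[:-1] + 0.2 * rng.normal(size=T - 1)  # true 1-day lag

daily_same = np.corrcoef(temp, sales)[0, 1]          # contemporaneous, micro
daily_lag = np.corrcoef(temp[:-1], sales[1:])[0, 1]  # lagged, micro (true link)

temp_m = temp.reshape(-1, 30).mean(axis=1)           # monthly aggregation
sales_m = sales.reshape(-1, 30).mean(axis=1)
monthly_same = np.corrcoef(temp_m, sales_m)[0, 1]    # appears instantaneous

print(daily_lag > daily_same, monthly_same > daily_same)  # True True
```

At the daily scale the lagged correlation dominates, correctly reflecting the causal delay; at the monthly scale the contemporaneous correlation is near 1, which a causal discovery method would misread as instantaneous causation.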
This paper introduces a method for quantifying the transferability of architectural components in Single Image Super-Resolution (SISR) models, termed "Universality," and proposes a Universality Assessment Equation (UAE). Guided by the UAE, the authors design optimized modules, Cycle Residual Block (CRB) and Depth-Wise Cycle Residual Block (DCRB), and demonstrate their effectiveness across various datasets and low-level tasks. Results show that networks using these modules outperform state-of-the-art methods, achieving improved PSNR or parameter reduction.
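For context on the reported metric: PSNR measures reconstruction fidelity against a ground-truth image on a log scale, so a higher value means lower mean-squared error. The sketch below is the standard definition, not code from the paper; images are assumed normalized to [0, 1]:

```python
import numpy as np

def psnr(ref, out, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref - out) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Uniform error of 0.1 gives MSE = 0.01, hence 10 * log10(1 / 0.01) = 20 dB.
ref = np.zeros((4, 4))
out = np.full((4, 4), 0.1)
print(round(psnr(ref, out), 2))  # 20.0
```

A gain of even a few tenths of a dB is considered meaningful in SISR benchmarks, which is why the paper reports improved PSNR alongside parameter reduction.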
Emilio Porcu from Khalifa University presented on temporally evolving generalized networks, where graphs evolve over time with changing topologies. The presentation addressed challenges in building semi-metrics and isometric embeddings for these networks. The research combines kernel specifications with network-based metrics and is illustrated on a traffic accident dataset. Why it matters: This work advances the application of kernel methods to dynamic graph structures, relevant for modeling evolving relationships in various domains.
MBZUAI Professor Kun Zhang's research focuses on causality in AI systems, aiming to understand underlying processes beyond data correlation. He emphasizes the importance of causality and graphical representations to model why systems produce observations and account for uncertainty. Zhang served as a program chair at the 38th Conference on Uncertainty in Artificial Intelligence (UAI) in Eindhoven. Why it matters: This highlights the growing importance of causality and uncertainty in AI research, crucial for responsible AI deployment and decision-making in the region.