GCC AI Research

Results for "continual learning"

Continual Learning in Medical Imaging: A Survey and Practical Analysis

arXiv

This survey reviews recent literature on continual learning in medical imaging, addressing challenges such as catastrophic forgetting and distribution shifts. It covers classification, segmentation, detection, and other tasks, provides a taxonomy of existing studies, and identifies open problems. The authors also maintain a GitHub repository to keep the survey current with the latest research.

Continuously Streaming Artificial Intelligence

MBZUAI

MBZUAI hosted a talk by Visiting Associate Professor Adrian Bors on continuously streaming AI and the challenge of catastrophic forgetting. The talk covered approaches to continual learning like expanding mixtures of models and generative replay mechanisms. Results were presented on image classification and generation tasks. Why it matters: Continual learning is crucial for AI systems to adapt to new environments and real-world data without forgetting previous knowledge.
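The generative-replay idea mentioned in the talk can be sketched in a few lines: a frozen copy of a generator replays pseudo-samples of earlier tasks, which are interleaved with real samples from the current task so the learner is never trained on new data alone. The class and function names below are hypothetical stubs for illustration, not code from the talk; a real system would train a generative model (e.g. a VAE or GAN) rather than memorize samples.

```python
import random

class CountingLearner:
    """Stub learner that records the task labels it was trained on."""
    def __init__(self):
        self.seen = []

    def update(self, x, y):
        self.seen.append(y)

class MemoryGenerator:
    """Stub 'generator' that memorizes samples and replays them uniformly."""
    def __init__(self):
        self.samples = []

    def update(self, x, y):
        self.samples.append((x, y))

    def snapshot(self):
        frozen = MemoryGenerator()
        frozen.samples = list(self.samples)
        return frozen

    def sample(self):
        return random.choice(self.samples)

def train_with_generative_replay(learner, generator, new_task_data,
                                 replay_ratio=0.5, steps=200):
    """Mix replayed pseudo-samples of old tasks with real new-task samples."""
    frozen = generator.snapshot()  # freeze the old-task distribution
    for _ in range(steps):
        if frozen.samples and random.random() < replay_ratio:
            x, y = frozen.sample()               # replay an old-task pseudo-sample
        else:
            x, y = random.choice(new_task_data)  # draw a real current-task sample
        learner.update(x, y)
        generator.update(x, y)  # the generator also absorbs the new task
    return learner
```

Because roughly half of each training run is drawn from the frozen generator, the learner keeps seeing old-task data it would otherwise forget.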

DynaMMo: Dynamic Model Merging for Efficient Class Incremental Learning for Medical Images

arXiv

Researchers at MBZUAI have developed DynaMMo, a dynamic model merging method for efficient class-incremental learning on medical images. DynaMMo merges multiple networks from different training stages using lightweight learnable modules, reducing computational overhead. Evaluated on three datasets, DynaMMo achieved an approximately 10-fold reduction in GFLOPs compared with existing dynamic methods, at the cost of a 2.76-point average drop in accuracy.
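The merging step can be illustrated with a drastically simplified sketch: each training-stage checkpoint contributes to the merged model through a softmax over a few learnable scalars, so the extra parameters stay lightweight compared with keeping every network. This is an illustration of the general idea only, not the paper's actual modules, and parameters are scalars here rather than tensors.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def merge_checkpoints(checkpoints, logits):
    """Merge parameter dicts from several training stages.

    `checkpoints` is a list of {param_name: value} dicts (scalar values here
    for simplicity; real parameters are tensors). `logits` are the lightweight
    learnable coefficients, one per checkpoint.
    """
    alphas = softmax(logits)
    return {
        name: sum(a * ckpt[name] for a, ckpt in zip(alphas, checkpoints))
        for name in checkpoints[0]
    }
```

With equal logits the merge reduces to a plain average; as the logits are learned, checkpoints that help the current task dominate the merged weights.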

Intelligence Autonomy via Lifelong Learning AI

MBZUAI

Professor Hava Siegelmann, a computer science expert, is researching lifelong learning AI, drawing inspiration from the brain's abstraction and generalization capabilities. The research aims to enable intelligent systems in satellites, robots, and medical devices to adapt and improve their expertise in real time, even with limited communication and power. The goal is to develop AI systems for far-edge computing that can learn at runtime and handle unanticipated situations. Why it matters: This research could lead to more resilient and adaptable AI systems for critical applications in remote and resource-constrained environments, with potential benefits for various sectors in the Middle East.

Using child’s play for machine learning

MBZUAI

MBZUAI Professor Salman Khan is researching continuous, lifelong learning systems for computer vision, aiming to mimic human learning processes like curiosity and discovery. His work focuses on learning from limited data and on the adversarial robustness of deep neural networks. Khan, along with MBZUAI professors Fahad Khan and Rao Anwer and partners from other universities, presented research at CVPR 2022. Why it matters: This research has the potential to significantly improve the ability of AI systems to understand and adapt to the real world, enabling more intelligent autonomous systems.

Representation learning for deep clustering and few-shot learning

MBZUAI

Michael Kampffmeyer from UiT The Arctic University of Norway presented a talk at MBZUAI on representation learning for deep clustering and few-shot learning. The talk covered deep clustering in multi-view settings and the influence of geometrical representation properties on few-shot classification performance. He specifically discussed embedding representations on the hypersphere and their connection to the hubness phenomenon. Why it matters: This highlights MBZUAI's role in hosting discussions on advanced machine learning topics like few-shot learning, which are crucial for addressing data scarcity challenges in the region and beyond.

Towards Robust Multimodal Open-set Test-time Adaptation via Adaptive Entropy-aware Optimization

arXiv

This paper introduces Adaptive Entropy-aware Optimization (AEO), a new framework to tackle Multimodal Open-set Test-time Adaptation (MM-OSTTA). AEO uses Unknown-aware Adaptive Entropy Optimization (UAE) and Adaptive Modality Prediction Discrepancy Optimization (AMP) to distinguish unknown-class samples during online adaptation by amplifying the entropy difference between known and unknown samples. The study establishes a new benchmark derived from existing datasets spanning five modalities and evaluates AEO across various domain-shift scenarios, demonstrating its effectiveness in long-term and continual MM-OSTTA settings.
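The core signal AEO builds on, predictive entropy, can be illustrated with a minimal sketch (generic entropy thresholding, not the paper's optimization): known-class samples tend to produce peaked, low-entropy predictions, while unknown-class samples produce flatter, high-entropy ones, so widening that gap lets a simple threshold separate the two.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a softmax probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_unknown(prob_batch, threshold):
    """Flag samples whose predictive entropy exceeds the threshold as unknown."""
    return [entropy(p) > threshold for p in prob_batch]
```

For example, a confident prediction like `[0.97, 0.01, 0.01, 0.01]` has entropy of roughly 0.17 nats, while a uniform prediction over four classes has ln 4 ≈ 1.39 nats, so any threshold between the two separates them; amplifying that entropy gap during adaptation makes the threshold more robust.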

Fine-tuning Text-to-Image Models: Reinforcement Learning and Reward Over-Optimization

MBZUAI

The article discusses research on fine-tuning text-to-image diffusion models, including reward function training, online reinforcement learning (RL) fine-tuning, and addressing reward over-optimization. A Text-Image Alignment Assessment (TIA2) benchmark is introduced to study reward over-optimization. TextNorm, a method for confidence calibration in reward models, is presented to reduce over-optimization risks. Why it matters: Improving the alignment and fidelity of text-to-image models is crucial for generating high-quality content, and addressing over-optimization enhances the reliability of these models in creative applications.
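The calibration idea can be illustrated generically (this sketch is inspired by, but not identical to, TextNorm; the prompts and function name are hypothetical): rather than trusting a raw reward score, normalize the target prompt's reward against the scores the same image receives under semantically contrastive prompts, so an over-optimized image that scores high under every prompt no longer looks well aligned.

```python
import math

def normalized_reward(rewards_by_prompt, target_prompt):
    """Softmax-normalize the target prompt's reward against contrastive prompts.

    `rewards_by_prompt` maps each prompt (the target plus contrastive
    alternatives) to the raw reward the image received under it.
    """
    m = max(rewards_by_prompt.values())  # subtract max for numerical stability
    exps = {p: math.exp(r - m) for p, r in rewards_by_prompt.items()}
    return exps[target_prompt] / sum(exps.values())
```

An image scoring 2.0 under "a red cube" and 0.0 under the contrastive "a blue cube" gets a normalized reward of about 0.88; if it scored equally under both prompts, the normalized reward would collapse to 0.5, exposing the miscalibration.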