GCC AI Research


Results for "meta-learning"

On Transferability of Machine Learning Models

MBZUAI ·

This article discusses domain shift in machine learning, where the test distribution differs from the training distribution, and methods to mitigate it via domain adaptation and domain generalization. Domain adaptation uses labeled source data together with unlabeled target data. Domain generalization uses labeled data from one or more source domains to generalize to unseen target domains. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.
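The effect described above can be illustrated with a minimal sketch (not from the article): a nearest-centroid classifier is fit on a labeled source domain, evaluated on a covariate-shifted target domain, and then re-evaluated after a simple first-moment alignment — a toy stand-in for unsupervised domain adaptation. All data and numbers here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled source domain: two well-separated Gaussian classes.
Xs = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
ys = np.array([0] * 100 + [1] * 100)

# Unlabeled target domain: same classes, shifted by a covariate offset.
shift = np.array([-3.0, -3.0])
Xt = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))]) + shift
yt = np.array([0] * 100 + [1] * 100)  # held out, used only for evaluation

def nearest_centroid_predict(X, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Fit class centroids on the labeled source data only.
centroids = np.stack([Xs[ys == c].mean(axis=0) for c in (0, 1)])

acc_before = (nearest_centroid_predict(Xt, centroids) == yt).mean()

# Simple adaptation: align the target mean with the source mean
# (first-moment matching; requires no target labels).
Xt_aligned = Xt - Xt.mean(axis=0) + Xs.mean(axis=0)
acc_after = (nearest_centroid_predict(Xt_aligned, centroids) == yt).mean()

print(acc_before, acc_after)
```

Real domain-adaptation methods match richer statistics (or learn invariant features), but the failure mode and the fix follow the same pattern.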

Continual Learning in Medical Imaging: A Survey and Practical Analysis

arXiv ·

This survey paper reviews recent literature on continual learning in medical imaging, addressing challenges like catastrophic forgetting and distribution shifts. It covers classification, segmentation, detection, and other tasks, while providing a taxonomy of studies and identifying challenges. The authors also maintain a GitHub repository to keep the survey up-to-date with the latest research.
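Rehearsal with a small replay buffer is one of the standard baselines against the catastrophic forgetting discussed in the survey. A minimal sketch (illustrative, not from the paper) using reservoir sampling so the buffer stays an unbiased sample of the whole stream:

```python
import random

class ReplayBuffer:
    """Reservoir-sampled rehearsal buffer: a common baseline for
    mitigating catastrophic forgetting in continual learning."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.n_seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Keep the new item with probability capacity / n_seen,
            # replacing a uniformly chosen stored item.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        """Draw a rehearsal mini-batch of past examples."""
        return self.rng.sample(self.items, min(k, len(self.items)))

buf = ReplayBuffer(capacity=50)
for task in range(3):          # three sequential "tasks"
    for i in range(1000):
        buf.add((task, i))     # stream examples from the current task

print(len(buf.items), buf.n_seen)
```

During training on a new task, each mini-batch would be mixed with `buf.sample(k)` so earlier tasks keep contributing gradient signal.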

Towards Robust Multimodal Open-set Test-time Adaptation via Adaptive Entropy-aware Optimization

arXiv ·

This paper introduces Adaptive Entropy-aware Optimization (AEO), a new framework for Multimodal Open-set Test-time Adaptation (MM-OSTTA). AEO combines Unknown-aware Adaptive Entropy Optimization (UAE) with Adaptive Modality Prediction Discrepancy Optimization (AMP), amplifying the entropy difference between known and unknown samples so that unknown-class samples can be distinguished during online adaptation. The study establishes a new benchmark derived from existing datasets spanning five modalities and evaluates AEO across various domain-shift scenarios, demonstrating its effectiveness in long-term and continual MM-OSTTA settings.
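The core intuition — known-class samples yield confident (low-entropy) predictions while unknown-class samples yield near-uniform (high-entropy) ones — can be sketched generically; this is a toy illustration of entropy-based rejection, not AEO itself:

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_unknown(probs, threshold):
    """Flag a sample as unknown-class when predictive entropy is high,
    i.e. the model is uncertain across all known classes."""
    return entropy(probs) > threshold

# Confident prediction on a known class: low entropy.
known = [0.9, 0.05, 0.05]
# Near-uniform prediction, typical of an unseen class: high entropy.
unknown = [0.34, 0.33, 0.33]

# Illustrative threshold: half the maximum entropy for 3 classes.
threshold = 0.5 * math.log(3)

print(flag_unknown(known, threshold), flag_unknown(unknown, threshold))
```

AEO's contribution is to keep this separation usable under continual adaptation by actively widening the known/unknown entropy gap rather than relying on a fixed threshold.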

Intelligence Autonomy via Lifelong Learning AI

MBZUAI ·

Professor Hava Siegelmann, a computer science expert, is researching lifelong learning AI, drawing inspiration from the brain's abstraction and generalization capabilities. The research aims to enable intelligent systems in satellites, robots, and medical devices to adapt and improve their expertise in real time, even with limited communication and power. The goal is to develop AI systems for far-edge computing that can learn at runtime and handle unanticipated situations. Why it matters: This research could lead to more resilient and adaptable AI systems for critical applications in remote and resource-constrained environments, with potential benefits for various sectors in the Middle East.

MedNNS: Supernet-based Medical Task-Adaptive Neural Network Search

arXiv ·

The paper introduces MedNNS, a neural network search framework designed for medical imaging, addressing challenges in architecture selection and weight initialization. MedNNS constructs a meta-space encoding datasets and models based on their performance using a Supernetwork-based approach, expanding the model zoo size by 51x. The framework incorporates rank loss and Fréchet Inception Distance (FID) loss to capture inter-model and inter-dataset relationships, improving alignment in the meta-space and outperforming ImageNet pre-trained DL models and SOTA NAS methods.
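A rank loss of the kind mentioned above rewards predicted scores that preserve the true performance ordering of models. A generic margin-based pairwise form (an illustration of the idea, not MedNNS's exact loss) looks like this:

```python
def pairwise_rank_loss(scores, perfs, margin=0.1):
    """Margin-based pairwise ranking loss: for every pair where model i
    truly outperforms model j, penalize the predicted scores unless
    scores[i] exceeds scores[j] by at least `margin`."""
    loss, pairs = 0.0, 0
    for i in range(len(perfs)):
        for j in range(len(perfs)):
            if perfs[i] > perfs[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

# True validation accuracies of three candidate models on one dataset.
perfs = [0.72, 0.85, 0.60]
# Predicted meta-space scores that preserve the ordering -> zero loss.
good = pairwise_rank_loss([0.5, 0.9, 0.1], perfs)
# Predicted scores that invert part of the ordering -> positive loss.
bad = pairwise_rank_loss([0.9, 0.1, 0.5], perfs)
print(good, bad)
```

Only the ordering matters, which is exactly what is needed when the meta-space is used to rank candidate architectures for a new dataset.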

Lifelong learning with the metaverse

MBZUAI ·

MBZUAI's Metaverse Lab is developing AI algorithms for photorealistic virtual humans and dynamic environments. Hao Li, Director of the lab, envisions using the metaverse for immersive learning experiences related to history and culture. He is also working on tools to prevent deepfakes and other cyberthreats. Why it matters: This research at MBZUAI aims to advance AI and immersive technologies for education and address potential risks in the metaverse.

Beyond self-driving simulations: teaching machines to learn

KAUST ·

KAUST researchers in the Image and Video Understanding Lab are applying machine learning to computer vision for automated navigation, including self-driving cars and UAVs. They tested their algorithms on KAUST roads, aiming to replicate the brain's efficiency in tasks like activity and object recognition. The team is also exploring the possibility of creative algorithms that can transfer skills without direct training. Why it matters: This research contributes to the advancement of autonomous systems and explores the fundamental questions of replicating human intelligence in machines within the GCC region.

Performance Prediction via Bayesian Matrix Factorisation for Multilingual Natural Language Processing Tasks

MBZUAI ·

A new Bayesian matrix factorization approach is explored for performance prediction in multilingual NLP, aiming to reduce the experimental burden of evaluating various language combinations. The approach outperforms state-of-the-art methods in NLP benchmarks like machine translation and cross-lingual entity linking. It also avoids hyperparameter tuning and provides uncertainty estimates over predictions. Why it matters: Accurate performance prediction methods accelerate multilingual NLP research by reducing computational costs and improving experimental efficiency, especially valuable for Arabic NLP tasks.
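The underlying idea — treat task-language performance numbers as a partially observed matrix and predict missing entries via low-rank factorization — can be sketched with plain SGD. This is a point-estimate toy (the paper's Bayesian treatment additionally yields uncertainty estimates); all matrix values and names are made up for illustration:

```python
import random

def factorize(M, mask, k=1, lr=0.05, steps=4000, seed=0):
    """Fit a rank-k factorization M ~ U @ V^T on observed entries only,
    by stochastic gradient descent on the squared error."""
    rng = random.Random(seed)
    n, m = len(M), len(M[0])
    U = [[rng.uniform(0.1, 0.5) for _ in range(k)] for _ in range(n)]
    V = [[rng.uniform(0.1, 0.5) for _ in range(k)] for _ in range(m)]
    obs = [(i, j) for i in range(n) for j in range(m) if mask[i][j]]
    for _ in range(steps):
        i, j = obs[rng.randrange(len(obs))]
        pred = sum(U[i][f] * V[j][f] for f in range(k))
        err = M[i][j] - pred
        for f in range(k):  # simultaneous update of both factors
            U[i][f], V[j][f] = (U[i][f] + lr * err * V[j][f],
                                V[j][f] + lr * err * U[i][f])
    return U, V

# Toy performance matrix: rows = NLP tasks, cols = language pairs,
# following a rank-1 pattern (task difficulty x language resource level).
M = [[0.8, 0.6, 0.4],
     [0.4, 0.3, 0.2],
     [0.6, 0.45, 0.3]]
mask = [[1, 1, 1],
        [1, 1, 0],   # entry (1, 2) held out: the "unrun experiment"
        [1, 1, 1]]

U, V = factorize(M, mask)
pred = sum(U[1][f] * V[2][f] for f in range(1))
print(round(pred, 2))  # estimate for the held-out experiment
```

The held-out entry is recovered from the row and column structure alone — the same mechanism that lets the paper's method skip expensive evaluation runs for unseen language combinations.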