MBZUAI researchers introduce UniMed-CLIP, a unified Vision-Language Model (VLM) for diverse medical imaging modalities, trained on the new large-scale, open-source UniMed dataset. UniMed comprises over 5.3 million image-text pairs across six modalities: X-ray, CT, MRI, Ultrasound, Pathology, and Fundus, created using LLMs to transform classification datasets into image-text formats. In zero-shot evaluations, UniMed-CLIP significantly outperforms existing generalist VLMs and matches modality-specific medical VLMs, with an average absolute gain of +12.61 over BiomedCLIP across 21 datasets while using 3x less training data.
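At inference, a CLIP-style model such as UniMed-CLIP can classify images zero-shot by comparing an image embedding against embeddings of label prompts. The sketch below illustrates that general mechanism only; the encoder objects, the `tokenize` helper, and the prompt template are hypothetical stand-ins, not UniMed-CLIP's actual API.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image, class_names, image_encoder, text_encoder, tokenize):
    """Generic CLIP-style zero-shot classification sketch.
    `image_encoder`, `text_encoder`, and `tokenize` are placeholders for the
    actual model components, which are not specified here."""
    # One text prompt per candidate label (template is illustrative).
    prompts = [f"a medical image showing {c}" for c in class_names]
    with torch.no_grad():
        img_feat = F.normalize(image_encoder(image.unsqueeze(0)), dim=-1)  # (1, d)
        txt_feat = F.normalize(text_encoder(tokenize(prompts)), dim=-1)    # (k, d)
    # Cosine similarity between the image and each label prompt.
    logits = 100.0 * img_feat @ txt_feat.T                                 # (1, k)
    return logits.softmax(dim=-1)                                          # class probabilities
```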
Researchers at MBZUAI introduce FissionFusion, a hierarchical model-merging approach that improves medical image analysis performance. The method aggregates fine-tuned models locally and globally based on their hyperparameter configurations, and uses a cyclical learning rate scheduler to generate candidate models efficiently. Experiments show FissionFusion outperforms standard model souping by approximately 6% on the HAM10000 and CheXpert datasets and also improves out-of-distribution (OOD) performance.
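Model souping, the baseline that FissionFusion improves on, simply averages the weights of independently fine-tuned models. Below is a minimal sketch of that baseline plus a two-level (local-then-global) merge; the grouping logic and scheduler details from the paper are omitted, so this illustrates the merging pattern rather than the authors' implementation.

```python
import torch

def average_state_dicts(state_dicts):
    """Uniformly average a list of model state_dicts (a 'model soup')."""
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in state_dicts[0].items()}
    for sd in state_dicts:
        for k, v in sd.items():
            avg[k] += v.float() / len(state_dicts)
    return avg

def hierarchical_merge(groups):
    """Two-level merge: average within each hyperparameter group (local),
    then average the group means (global). The grouping criteria here are
    a simplification of the paper's procedure."""
    local_means = [average_state_dicts(g) for g in groups]
    return average_state_dicts(local_means)
```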
MBZUAI researchers introduce MedNNS, a system to be presented at MICCAI 2025 that recommends the best AI architecture and initialization for a given medical imaging task. MedNNS addresses the inefficient trial-and-error of building medical imaging AI by reframing model selection as a retrieval problem. The system employs a Once-For-All ResNet-like model and a learned meta-space of 720k model-dataset pairs, using dataset embeddings to predict which models will perform best. Why it matters: By automating model selection, MedNNS promises to significantly reduce the time and resources required to develop effective AI solutions for healthcare, particularly in medical imaging.
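Conceptually, framing model selection as retrieval means embedding a new dataset into the learned meta-space and ranking candidate architectures by similarity. The sketch below assumes precomputed embeddings (`dataset_emb`, `model_embs`) already exist; how MedNNS actually learns them is the core of the paper and is not reproduced here.

```python
import numpy as np

def recommend_model(dataset_emb, model_embs, model_ids, top_k=3):
    """Rank candidate architectures by cosine similarity between a dataset
    embedding and precomputed model embeddings in a shared meta-space.
    The embedding functions themselves are learned and not shown here."""
    d = dataset_emb / np.linalg.norm(dataset_emb)
    m = model_embs / np.linalg.norm(model_embs, axis=1, keepdims=True)
    scores = m @ d                       # similarity of each model to the dataset
    best = np.argsort(-scores)[:top_k]   # highest-scoring architectures first
    return [(model_ids[i], float(scores[i])) for i in best]
```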
A new paper at ICCV 2025, co-authored by MBZUAI Ph.D. student Dmitry Demidov, introduces Dense-WebVid-CoVR, a 1.6-million-sample benchmark for composed video retrieval (CoVR). The benchmark features longer, context-rich descriptions and modification texts, generated with Gemini Pro and GPT-4o and manually verified. The paper also presents a unified fusion approach that jointly reasons over video and text inputs, improving performance on fine-grained edit details. Why it matters: This work advances video search by enabling more human-like queries, which is crucial for creative and analytic workflows that require nuanced video retrieval.
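A unified fusion query for CoVR can be pictured as modification-text tokens cross-attending over video tokens to produce a single retrieval embedding. The module below is only a schematic of that idea, with illustrative dimensions and a made-up class name; the paper's fusion architecture is more involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryFusion(nn.Module):
    """Illustrative joint fusion of a source-video embedding and a
    modification-text embedding into one retrieval query (a sketch,
    not the paper's architecture)."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, video_tokens, text_tokens):
        # Text tokens attend over video tokens, grounding the edit request.
        fused, _ = self.attn(text_tokens, video_tokens, video_tokens)
        query = self.proj(fused.mean(dim=1))   # pooled joint query
        return F.normalize(query, dim=-1)      # unit-norm for retrieval
```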
MBZUAI Ph.D. student Raza Imam and colleagues presented MediMeta-C, a new benchmark that tests the robustness of medical vision-language models (MVLMs) under real-world image corruptions. They found that MVLMs which excel on clean data often fail under even mild corruption, with fundoscopy models particularly vulnerable. To address this, they developed RobustMedCLIP (RMC), a lightweight defense that uses few-shot LoRA tuning to improve robustness. Why it matters: This research highlights the critical need for robustness testing in medical AI to ensure reliability in clinical settings, particularly in resource-constrained environments where image quality may be compromised.
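LoRA tuning, the mechanism behind RMC's lightweight defense, freezes the pretrained weights and learns a small low-rank additive update, so only a few parameters change during few-shot adaptation. Below is a generic LoRA adapter sketch with illustrative rank and scaling values; it is not RMC's exact configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA adapter: freeze the pretrained linear layer and learn a
    low-rank update x @ A @ B. Rank and alpha below are illustrative
    defaults, not the values used in RobustMedCLIP."""
    def __init__(self, base: nn.Linear, rank=4, alpha=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pretrained weights frozen
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen pretrained path plus the learned low-rank correction.
        return self.base(x) + self.scale * (x @ self.A @ self.B)
```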
KAUST and the Al-Madinah Region Development Authority (MDA) signed an MoU to enhance efficiency, resiliency, and safety in Al-Madinah. KAUST will share high-resolution climate change projections and assess soil loss dynamics. The collaboration aims to tackle challenges in the environmental and water sectors through research, development, and training. Why it matters: This partnership showcases KAUST's role in translating research into practical smart city solutions for regional development, addressing critical environmental concerns.