MBZUAI and Corniche Hospital researchers have developed FetalCLIP, a foundation model for analyzing fetal ultrasound images to detect congenital conditions. FetalCLIP outperformed other foundation models on ultrasound analysis tasks. The model aims to improve early diagnosis of conditions such as congenital heart defects. Why it matters: This innovation has the potential to dramatically improve health outcomes for millions of children annually by providing physicians with better insights into fetal health.
This paper introduces a multi-task learning approach for fetal biometric estimation from ultrasound images, classifying anatomical regions (head, abdomen, femur) and estimating the corresponding biometric parameters. The model, a U-Net architecture with a classification head, achieved a mean absolute error of 1.08 mm for head circumference, 1.44 mm for abdominal circumference, and 1.10 mm for femur length, with 99.91% classification accuracy. The researchers are affiliated with MBZUAI. Why it matters: This research demonstrates advancements in automated fetal health monitoring using AI, potentially improving prenatal care and diagnostics in the region.
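Biometric measurements like head circumference are commonly read off an ellipse fitted to the segmented structure. A minimal sketch of that final step, converting a fitted ellipse to a circumference in millimetres via Ramanujan's perimeter approximation (the axis lengths and pixel spacing below are hypothetical, not from the paper):

```python
import math

def ellipse_circumference_mm(semi_major_px, semi_minor_px, mm_per_px):
    """Approximate the perimeter of a fitted ellipse (Ramanujan's formula),
    converting axis lengths from pixels to millimetres first."""
    a = semi_major_px * mm_per_px
    b = semi_minor_px * mm_per_px
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# Hypothetical fitted head ellipse: 120 x 95 px semi-axes at 0.5 mm/px spacing
hc = ellipse_circumference_mm(120, 95, 0.5)
```

For a circle (equal semi-axes) the formula reduces exactly to 2πr, which makes it easy to sanity-check.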
Manling Li from UIUC proposes a new research direction: Event-Centric Multimodal Knowledge Acquisition, which transforms traditional entity-centric single-modal knowledge into event-centric multi-modal knowledge. The approach addresses challenges in understanding multimodal semantic structures using zero-shot cross-modal transfer (CLIP-Event) and long-horizon temporal dynamics through the Event Graph Model. Li's work aims to enable machines to capture complex timelines and relationships, with applications in timeline generation, meeting summarization, and question answering. Why it matters: This research pioneers a new approach to multimodal information extraction, moving from static entity-based understanding to dynamic, event-centric knowledge acquisition, which is essential for advanced AI applications in understanding complex scenarios.
MBZUAI researchers introduce UniMed-CLIP, a unified Vision-Language Model (VLM) for diverse medical imaging modalities, trained on the new large-scale, open-source UniMed dataset. UniMed comprises over 5.3 million image-text pairs across six modalities: X-ray, CT, MRI, Ultrasound, Pathology, and Fundus, created using LLMs to transform classification datasets into image-text formats. UniMed-CLIP significantly outperforms existing generalist VLMs and matches modality-specific medical VLMs in zero-shot evaluations, improving over BiomedCLIP by +12.61 on average across 21 datasets while using 3x less training data.
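Zero-shot evaluation of a CLIP-style model like UniMed-CLIP amounts to matching an image embedding against text-prompt embeddings by cosine similarity. A minimal sketch of that step, with placeholder random embeddings standing in for the model's actual encoders (the prompts and dimensions are illustrative assumptions):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Return the index of the text prompt whose embedding has the highest
    cosine similarity with the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))

# Hypothetical prompts for a chest X-ray; random vectors stand in for
# real text-encoder outputs.
rng = np.random.default_rng(0)
prompts = ["an X-ray showing pneumonia", "a normal chest X-ray"]
text_embs = rng.normal(size=(2, 512))
# Fake image embedding close to prompt 1 ("a normal chest X-ray")
image_emb = text_embs[1] + 0.1 * rng.normal(size=512)
pred = zero_shot_classify(image_emb, text_embs)
```

Because no task-specific classifier is trained, the same routine works across modalities simply by swapping the prompt set.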
Researchers at MBZUAI introduce FissionFusion, a hierarchical model merging approach to improve medical image analysis performance. The method uses local and global aggregation of models based on hyperparameter configurations, along with a cyclical learning rate scheduler for efficient model generation. Experiments show FissionFusion outperforms standard model souping by approximately 6% on the HAM10000 and CheXpert datasets and improves out-of-distribution (OOD) performance.
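The model-souping baseline that FissionFusion builds on is simply parameter averaging across fine-tuned checkpoints. A minimal sketch of a uniform soup (the checkpoint dictionaries below are toy placeholders; FissionFusion's hierarchical local-then-global aggregation is not reproduced here):

```python
import numpy as np

def uniform_soup(checkpoints):
    """Average the parameters of several fine-tuned models parameter-wise
    ('uniform model souping'). Each checkpoint is a dict of name -> array."""
    return {name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
            for name in checkpoints[0]}

# Toy checkpoints from two hypothetical hyperparameter configurations
ckpt_a = {"w": np.array([1.0, 2.0]), "b": np.array([0.0])}
ckpt_b = {"w": np.array([3.0, 4.0]), "b": np.array([2.0])}
soup = uniform_soup([ckpt_a, ckpt_b])
```

A hierarchical variant would first average checkpoints within each hyperparameter group (local), then average the resulting group models (global), rather than pooling everything at once.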
MBZUAI is developing AI-powered applications to help reduce malaria's impact in Indonesia, supported by Sheikh Mohamed bin Zayed Al Nahyan's Reaching the Last Mile initiative. The applications use sensory data fusion to create "digital twins" for precise weather forecasting and real-time environmental representation. AI and clustering analysis identify recurring features contributing to malaria outbreaks, enabling preventative measures and early treatment. Why it matters: This project demonstrates AI's potential in combating climate-sensitive diseases and improving public health in vulnerable regions.
A new benchmark, LongShOTBench, is introduced for evaluating multimodal reasoning and tool use in long videos, featuring open-ended questions and diagnostic rubrics. The benchmark addresses the limitations of existing datasets by combining temporal length with multimodal richness, using human-validated samples. LongShOTAgent, an agentic system for analyzing long videos, is also presented; together, the benchmark and agent demonstrate the challenges these tasks pose for state-of-the-art multimodal large language models (MLLMs).