MBZUAI faculty, researchers, and students presented eight academic papers at the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2022) in Singapore. Seven of the accepted papers feature a master’s or doctoral student as first author. The papers are the outcome of two MBZUAI faculty-led labs – the BioMedical Image Analysis (BioMedIA) lab and SPriNT-AI. Why it matters: This highlights MBZUAI's growing prominence in medical image analysis and AI, showcasing the university's commitment to producing high-quality research and fostering young talent in the field.
MBZUAI researchers developed a method to adapt Meta's Segment Anything Model (SAM) for medical image segmentation, addressing the performance gap SAM exhibits when moving from the natural images it was trained on to medical scans. Their approach improves SAM's accuracy without requiring extensive retraining or large medical image datasets. The research, led by Chao Qin, was nominated for the Best Paper Award at the MICCAI conference in Marrakesh. Why it matters: This offers a more efficient and effective way to leverage foundation models in specialized medical imaging applications, potentially improving diagnostic accuracy and reducing the need for large-scale, domain-specific training data.
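The summary above does not detail the team's adaptation mechanism. As a hedged illustration of how large pretrained models are commonly adapted without full retraining, the NumPy sketch below shows a low-rank adapter update applied to a frozen weight matrix; all names, shapes, and the zero-initialization convention are illustrative assumptions, not details from the paper.

```python
import numpy as np

def adapter_forward(x, W_frozen, A, B):
    """Forward pass through a frozen weight plus a trainable low-rank update.

    Only the small factors A and B would be trained on medical images,
    leaving the large pretrained weight W_frozen untouched.
    """
    return x @ (W_frozen + B @ A).T

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4                    # rank r << d: few trainable parameters
W_frozen = rng.normal(size=(d_out, d_in))     # stands in for a pretrained layer
A = rng.normal(scale=0.01, size=(r, d_in))    # trainable
B = np.zeros((d_out, r))                      # trainable; zero-init => no change at start

x = rng.normal(size=(2, d_in))
y = adapter_forward(x, W_frozen, A, B)
# With B zero-initialized, the adapted layer reproduces the frozen model exactly,
# so training starts from the pretrained behavior.
assert np.allclose(y, x @ W_frozen.T)
```

The design point is that the number of trainable parameters scales with the rank r rather than with the full weight size, which is why such schemes avoid "extensive retraining."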
MBZUAI researchers developed a new approach called Multimodal Optimal Transport via Grounded Retrieval (MOTOR) to improve the accuracy of vision-language models for medical image analysis. MOTOR combines retrieval-augmented generation (RAG) with an optimal transport algorithm to retrieve and rank relevant image and textual data. Testing on two medical datasets showed that MOTOR improved average performance by 6.45%. Why it matters: This technique addresses the challenges of limited specialized medical datasets and computational costs associated with training AI models for medical image interpretation, offering a more efficient and accurate solution.
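MOTOR's actual formulation is more involved than this summary conveys. As a minimal sketch of the core idea of ranking retrieved candidates with optimal transport, the NumPy example below uses entropic-regularized OT (Sinkhorn iterations) to score each candidate against a query; the function names, token counts, and cost function are hypothetical, not taken from the paper.

```python
import numpy as np

def sinkhorn_cost(cost, eps=0.1, n_iter=200):
    """Entropic-regularized optimal transport between two uniform
    distributions; returns the total transport cost <P, cost>."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]           # transport plan
    return float((P * cost).sum())

def rank_candidates(query_tokens, candidates):
    """Rank retrieved candidates by OT distance to the query (lower = better)."""
    q = query_tokens / np.linalg.norm(query_tokens, axis=1, keepdims=True)
    dists = []
    for cand in candidates:
        c = cand / np.linalg.norm(cand, axis=1, keepdims=True)
        dists.append(sinkhorn_cost(1.0 - q @ c.T))  # cosine-style cost matrix
    return np.argsort(dists)

rng = np.random.default_rng(1)
query = rng.normal(size=(5, 16))                  # e.g., 5 query token embeddings
near = query + 0.01 * rng.normal(size=(5, 16))    # near-duplicate candidate
far = rng.normal(size=(5, 16))                    # unrelated candidate
order = rank_candidates(query, [far, near])
assert order[0] == 1  # the near-duplicate candidate ranks first
```

Compared with ranking by a single pooled-embedding similarity, the OT distance matches query tokens to candidate tokens individually, which is one motivation for transport-based retrieval scoring.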
This paper introduces BRIQA, a new method for automated assessment of artifact severity in pediatric brain MRI, which is important for diagnostic accuracy. BRIQA uses gradient-based loss reweighting and a rotating batching scheme to handle class imbalance across artifact severity levels. Experiments show BRIQA improves the average macro F1 score from 0.659 to 0.706, with the largest gains on Noise, Zipper, Positioning, and Contrast artifacts.
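BRIQA's exact reweighting formula is not given in this summary. As an illustrative sketch of the general idea of deriving class weights from gradient signals to counter imbalance, the NumPy example below computes per-class weights from cross-entropy gradient norms and applies them to the loss; the specific rule shown is an assumption for illustration, not BRIQA's method.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def class_weights_from_gradients(logits, labels, n_classes, eps=1e-8):
    """Illustrative rule: upweight classes whose samples currently produce
    small cross-entropy gradients, so rare severity levels are not drowned
    out by the majority class. Not BRIQA's exact formula."""
    p = softmax(logits)
    grad = p.copy()
    grad[np.arange(len(labels)), labels] -= 1.0   # dCE/dlogits per sample
    per_sample = np.linalg.norm(grad, axis=1)
    w = np.ones(n_classes)
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            w[c] = 1.0 / (per_sample[mask].mean() + eps)
    return w * n_classes / w.sum()                # normalize to mean 1

rng = np.random.default_rng(2)
logits = rng.normal(size=(8, 4))
labels = np.array([0, 0, 0, 0, 0, 1, 2, 3])       # imbalanced severity labels
w = class_weights_from_gradients(logits, labels, n_classes=4)
# Weighted cross-entropy: each sample's loss is scaled by its class weight.
p = softmax(logits)
loss = -(w[labels] * np.log(p[np.arange(8), labels])).mean()
```

A rotating batching scheme, as the paper describes it, would additionally cycle which classes populate each mini-batch; the weight normalization above keeps the overall loss scale stable as weights change.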
This survey paper reviews recent literature on continual learning in medical imaging, addressing challenges such as catastrophic forgetting and distribution shifts. It covers classification, segmentation, detection, and other tasks, provides a taxonomy of the reviewed studies, and identifies open challenges. The authors also maintain a GitHub repository to keep the survey up to date with the latest research.
Researchers at MBZUAI have developed a new machine learning method called survival rank-n-contrast (SurvRNC) to improve survival models for cancer prognoses. The method is designed to predict survival times for head and neck cancer patients using multimodal data while accounting for censored data (cases where a patient's event time is only partially observed, for example because the patient left the study before the event occurred). Numan Saeed presented the team’s work at the 27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). Why it matters: Accurate prognoses can significantly improve patient outcomes, and this research contributes to advancements in machine learning techniques for handling complex and incomplete medical data.
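SurvRNC's rank-n-contrast objective is more elaborate than this summary can show. As a minimal sketch of how ranking-style survival losses handle censoring, the NumPy example below forms only the pairs that censoring allows and penalizes risk scores that disagree with observed survival order; this is a generic pairwise ranking loss, not the SurvRNC loss itself.

```python
import numpy as np

def censored_ranking_loss(risk, time, event):
    """Pairwise ranking loss over comparable pairs under right-censoring.

    A pair (i, j) is comparable only if subject i's event was observed
    (event[i] == 1) and time[i] < time[j]; the model should then assign
    a higher risk score to i. Censored subjects (event == 0) can still
    serve as the longer-surviving member j of a pair.
    """
    losses = []
    for i in range(len(risk)):
        if event[i] != 1:
            continue                      # censored subjects cannot anchor a pair
        for j in range(len(risk)):
            if time[i] < time[j]:
                # logistic loss encouraging risk[i] > risk[j]
                losses.append(np.log1p(np.exp(risk[j] - risk[i])))
    return float(np.mean(losses)) if losses else 0.0

# Toy example: shorter observed survival should mean higher predicted risk.
time = np.array([2.0, 5.0, 8.0])
event = np.array([1, 1, 0])               # third patient is censored
good = censored_ranking_loss(np.array([3.0, 2.0, 1.0]), time, event)
bad = censored_ranking_loss(np.array([1.0, 2.0, 3.0]), time, event)
assert good < bad                         # correct ordering yields the lower loss
```

The key point about censoring is visible in the pair construction: a censored patient never anchors a comparison, because their true event time is unknown, yet the information that they survived past a given time is still used.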