MBZUAI hosted a panel discussion in collaboration with the Manara Center for Coexistence and Dialogue. The discussion focused on the intersection of AI and medical image computing. Jiebo Luo, a professor at the University of Rochester, discussed his work on applying AI to healthcare, including moving beyond classification to semantic description and expanding use from hospitals to home telemedicine. Why it matters: This highlights the increasing focus on AI applications in healthcare within the Middle East, particularly at institutions like MBZUAI, which are fostering discussions on the ethical and practical implications of AI in medicine.
MBZUAI's BioMedIA lab, led by Mohammad Yaqub, is developing AI solutions for healthcare challenges in cardiology, pulmonology, and oncology using computer vision. Yaqub's previous research analyzed fetal ultrasound images to correlate bone development with maternal vitamin D levels. The lab is now applying image analysis to improve the treatment of head and neck cancer using PET and CT scans. Why it matters: This research demonstrates the potential of AI and computer vision to improve diagnostic accuracy and accessibility of healthcare in the region and beyond.
Pascal Fua from EPFL gave a talk at MBZUAI on physics-based deep learning for medical imaging. The talk covered how self-supervision and knowledge of human anatomy and physics can improve deep learning algorithms when training data is limited. Applications discussed included endoscopic heart surgery, colonoscopy, and intubation. Why it matters: This highlights the growing importance of domain knowledge and self-supervision in overcoming data scarcity challenges for AI in healthcare applications within the region.
This survey paper reviews recent literature on continual learning in medical imaging, addressing challenges such as catastrophic forgetting and distribution shift. It covers classification, segmentation, detection, and other tasks, provides a taxonomy of the surveyed studies, and identifies open problems. The authors also maintain a GitHub repository to keep the survey up to date with the latest research.
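The survey's central challenge, catastrophic forgetting, is commonly mitigated with experience replay: a small buffer of past examples is mixed into training on each new task. The sketch below is illustrative only (it is not taken from the survey, and all names are hypothetical); it shows a reservoir-sampling replay buffer in Python:

```python
import random

class ReplayBuffer:
    """Fixed-size buffer of past examples, filled by reservoir sampling.

    Mixing replayed examples into each new task's batches is one common
    way to mitigate catastrophic forgetting in continual learning.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.seen = 0            # total examples offered so far
        self.buffer = []
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Reservoir sampling keeps each seen example in the buffer
            # with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        """Draw up to k stored examples to interleave with new-task data."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

Reservoir sampling keeps the buffer an unbiased sample of everything seen so far, which matters when tasks arrive as a stream of unknown length.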
Researchers propose a universal anatomical embedding (UAE) framework for medical image analysis that learns appearance, semantic, and cross-modality anatomical embeddings. UAE combines semantic embedding learning with a prototypical contrastive loss, a fixed-point-based matching strategy, and an iterative approach to cross-modality embedding learning. Evaluated on landmark detection, lesion tracking, and CT-MRI registration tasks, the framework outperforms existing state-of-the-art methods.
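The paper's exact prototypical contrastive loss is not reproduced above, but the general idea, pulling each embedding toward its class prototype (the mean embedding of its class) and away from other classes' prototypes, can be sketched in NumPy. This is an illustrative simplification, not the authors' formulation:

```python
import numpy as np

def prototypical_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative prototypical contrastive loss (NumPy sketch).

    Each embedding is attracted to its own class prototype and repelled
    from the others via a softmax over cosine similarities.
    """
    # L2-normalise embeddings so dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

    # Prototype = mean of the normalised embeddings of each class.
    classes = np.unique(labels)
    protos = np.stack([z[labels == c].mean(axis=0) for c in classes])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)

    logits = z @ protos.T / temperature        # (N, num_classes)
    targets = np.searchsorted(classes, labels) # label -> prototype row

    # Numerically stable log-softmax cross-entropy.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), targets].mean()
```

On well-separated clusters this loss is near zero; on shuffled labels it rises toward log(num_classes), which is why it encourages semantically discriminative embeddings.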
This paper introduces MOTOR, a multimodal retrieval and re-ranking approach for medical visual question answering (MedVQA). MOTOR uses grounded captions and optimal transport to capture relationships between queries and retrieved context, leveraging both textual and visual information, and identifies clinically relevant contexts to augment the input of a vision-language model (VLM). Empirical analysis shows MOTOR outperforms state-of-the-art methods on MedVQA datasets by an average of 6.45%.
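MOTOR's precise optimal-transport formulation is not spelled out above. A standard way to score the alignment between a query's features and a retrieved context's features is entropic-regularised optimal transport solved with Sinkhorn iterations, sketched below; the function names and parameters are illustrative assumptions, not MOTOR's API:

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.1, n_iters=200):
    """Entropic-regularised optimal transport via Sinkhorn iterations.

    cost : (n, m) pairwise cost matrix between two feature sets
    a, b : marginal weights for each set (each sums to 1)
    Returns the transport plan P; sum(P * cost) gives a soft alignment
    cost usable as a re-ranking score (lower = better aligned).
    """
    K = np.exp(-cost / reg)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)               # scale columns toward marginal b
        u = a / (K @ v)                 # scale rows toward marginal a
    return u[:, None] * K * v[None, :]  # transport plan
```

Because the plan's marginals match `a` and `b`, every query feature is softly matched to context features, so the score rewards contexts that cover all parts of the question rather than a single token.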
Researchers have developed robotic path-planning and control algorithms for minimally invasive surgery (MIS) that steer flexible needles, incorporating teleoperation and haptic feedback. An AI algorithm was designed to predict target motion due to respiratory movement, improving needle placement accuracy. GANs were used to generate synthetic images visualizing organ and tumor motion. Why it matters: This research demonstrates the potential of AI and robotics to enhance precision and adaptability in MIS, potentially reducing patient trauma and improving recovery times in the region and beyond.
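The respiratory-motion predictor is described only at a high level above. As a deliberately simple stand-in for it (not the authors' method; the breathing frequency `freq_hz` and all names are illustrative assumptions), one can fit a sinusoid to observed target positions by least squares and extrapolate to the expected needle-arrival time:

```python
import numpy as np

def fit_respiratory_motion(t, pos, freq_hz):
    """Least-squares fit of a sinusoid to observed target positions.

    Fits pos(t) ~ c0 + c1*sin(2*pi*f*t) + c2*cos(2*pi*f*t) and returns
    a predictor that extrapolates the target's position, e.g. to the
    moment the needle is expected to reach it.
    """
    w = 2.0 * np.pi * freq_hz
    # Design matrix: constant offset plus in-phase/quadrature components.
    X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(X, pos, rcond=None)

    def predict(t_query):
        return coef[0] + coef[1] * np.sin(w * t_query) + coef[2] * np.cos(w * t_query)

    return predict
```

A learned model would additionally handle irregular breathing; the point of the sketch is only the predict-then-steer structure such a system relies on.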