GCC AI Research

Results for "biomedical image analysis"

UAE: Universal Anatomical Embedding on Multi-modality Medical Images

arXiv ·

Researchers propose a universal anatomical embedding (UAE) framework for medical image analysis that learns appearance, semantic, and cross-modality anatomical embeddings. UAE combines semantic embedding learning with a prototypical contrastive loss, a fixed-point-based matching strategy, and an iterative approach to cross-modality embedding learning. The framework was evaluated on landmark detection, lesion tracking, and CT-MRI registration, outperforming existing state-of-the-art methods.
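The summary mentions a prototypical contrastive loss for semantic embedding learning. As a rough illustration only (not the paper's exact formulation), such a loss typically builds a prototype per semantic class by averaging its embeddings, then applies a temperature-scaled cross-entropy over each embedding's similarities to all prototypes:

```python
import math

def prototypical_contrastive_loss(embeddings, labels, temperature=0.1):
    """Simplified sketch of a prototypical contrastive loss.

    embeddings: list of (roughly unit-norm) vectors as lists of floats.
    labels: class id per embedding. Hypothetical names/defaults, not UAE's API.
    """
    # 1. Class prototypes: mean embedding per class.
    classes = sorted(set(labels))
    protos = {}
    for c in classes:
        members = [e for e, l in zip(embeddings, labels) if l == c]
        protos[c] = [sum(vals) / len(members) for vals in zip(*members)]
    # 2. Softmax over similarities to prototypes; penalize -log p(own prototype).
    total = 0.0
    for e, l in zip(embeddings, labels):
        sims = [sum(a * b for a, b in zip(e, protos[c])) / temperature
                for c in classes]
        m = max(sims)
        log_z = m + math.log(sum(math.exp(s - m) for s in sims))
        total += log_z - sims[classes.index(l)]
    return total / len(embeddings)
```

With well-separated clusters the loss is near zero; when labels are shuffled across identical clusters, prototypes collapse and the loss approaches log of the class count.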

Xu pursues AI-based biomedical image analysis

MBZUAI ·

Dr. Min Xu joins MBZUAI as Affiliated Assistant Professor in Computer Vision to advance AI-based biomedical image analysis. His research focuses on cellular cryo-electron tomography (Cryo-ET) 3D image analysis, spatial transcriptomics, digital pathology, and automated science. Xu will collaborate with MBZUAI faculty and advise master’s students, leveraging his expertise in computational biology and bioinformatics. Why it matters: This appointment strengthens MBZUAI's capabilities in applying AI to critical areas of biomedical research, potentially leading to breakthroughs in disease understanding and treatment.

ConDiSR: Contrastive Disentanglement and Style Regularization for Single Domain Generalization

arXiv ·

This paper introduces a new Single Domain Generalization (SDG) method called ConDiSR for medical image classification, using channel-wise contrastive disentanglement and reconstruction-based style regularization. The method is evaluated on multicenter histopathology image classification, achieving a 1% improvement in average accuracy compared to state-of-the-art SDG baselines. Code is available at https://github.com/BioMedIA-MBZUAI/ConDiSR.

Continual Learning in Medical Imaging: A Survey and Practical Analysis

arXiv ·

This survey paper reviews recent literature on continual learning in medical imaging, addressing challenges like catastrophic forgetting and distribution shifts. It covers classification, segmentation, detection, and other tasks, while providing a taxonomy of studies and identifying challenges. The authors also maintain a GitHub repository to keep the survey up-to-date with the latest research.

Improving patient care with computer vision

MBZUAI ·

MBZUAI's BioMedIA lab, led by Mohammad Yaqub, is developing AI solutions for healthcare challenges in cardiology, pulmonology, and oncology using computer vision. Yaqub's previous research analyzed fetal ultrasound images to correlate bone development with maternal vitamin D levels. The lab is now applying image analysis to improve the treatment of head and neck cancer using PET and CT scans. Why it matters: This research demonstrates the potential of AI and computer vision to improve diagnostic accuracy and accessibility of healthcare in the region and beyond.

Medical Image Computing: Harvesting the Healing Power of AI and Domain Knowledge

MBZUAI ·

MBZUAI hosted a panel discussion in collaboration with the Manara Center for Coexistence and Dialogue. The discussion focused on the intersection of AI and medical image computing. Jiebo Luo, a professor at the University of Rochester, discussed his work on applying AI to healthcare, including moving beyond classification to semantic description and expanding use from hospitals to home telemedicine. Why it matters: This highlights the increasing focus on AI applications in healthcare within the Middle East, particularly at institutions like MBZUAI, which are fostering discussions on the ethical and practical implications of AI in medicine.

MOTOR: Multimodal Optimal Transport via Grounded Retrieval in Medical Visual Question Answering

arXiv ·

This paper introduces MOTOR, a multimodal retrieval and re-ranking approach for medical visual question answering (MedVQA) that uses grounded captions and optimal transport to capture relationships between queries and retrieved context, leveraging both textual and visual information. MOTOR identifies clinically relevant contexts to augment VLM input, achieving higher accuracy on MedVQA datasets. Empirical analysis shows MOTOR outperforms state-of-the-art methods by an average of 6.45%.
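The summary says MOTOR uses optimal transport to relate query and retrieved-context features for re-ranking. A minimal sketch of that idea, assuming an entropy-regularized (Sinkhorn) solver over a token-level cost matrix; function names, defaults, and the scoring rule are illustrative, not MOTOR's actual implementation:

```python
import math

def sinkhorn_plan(cost, reg=0.1, iters=200):
    """Entropy-regularized OT plan between two uniform marginals.

    cost[i][j] might be 1 - similarity between query token i and
    retrieved-context token j. Pure-Python sketch for illustration.
    """
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u = [1.0 / n] * n
    v = [1.0 / m] * m
    for _ in range(iters):  # alternate scaling to match both marginals
        u = [(1.0 / n) / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [(1.0 / m) / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

def ot_score(cost, reg=0.1):
    """Transport cost for re-ranking retrieved contexts (lower = better match)."""
    plan = sinkhorn_plan(cost, reg)
    return sum(plan[i][j] * cost[i][j]
               for i in range(len(cost)) for j in range(len(cost[0])))
```

A context whose tokens align cheaply with the query (low-cost matching) receives a lower transport cost than an unrelated one, so sorting candidates by `ot_score` gives a transport-based re-ranking.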