MBZUAI's BioMedIA lab, led by Mohammad Yaqub, is developing computer-vision-based AI solutions for healthcare challenges in cardiology, pulmonology, and oncology. Yaqub's previous research analyzed fetal ultrasound images to correlate bone development with maternal vitamin D levels. The lab is now applying image analysis to improve the treatment of head and neck cancer using PET and CT scans. Why it matters: This research demonstrates the potential of AI and computer vision to improve diagnostic accuracy and accessibility of healthcare in the region and beyond.
MBZUAI and Sheikh Shakhbout Medical City researchers developed PECon, a deep learning method for pulmonary embolism detection that combines CT scans with electronic health records. PECon uses neural network encoders and contrastive learning to align the image and clinical-record representations. The method aims to improve diagnostic accuracy and speed, potentially saving lives. Why it matters: This research demonstrates AI's potential to enhance medical diagnostics in the UAE, addressing a critical healthcare challenge.
A KAUST team led by Xin Gao developed an AI model for COVID-19 detection from CT scans, addressing limitations of existing methods. The model incorporates a novel embedding strategy, a CT scan simulator, and a 2.5D deep-learning algorithm. Tested at King Faisal Specialist Hospital, the model demonstrated high accuracy in detecting COVID-19 cases. Why it matters: This research provides a valuable tool for rapid and accurate COVID-19 diagnosis in the region, especially in early-stage infections, improving healthcare outcomes.
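A 2.5D approach typically feeds a 2D network a slice together with its neighboring slices stacked as channels, giving it some through-plane context without the cost of full 3D convolutions. The sketch below is illustrative only and is not the KAUST team's implementation; the function name and the clamped-edge handling are assumptions.

```python
import numpy as np

def make_25d_input(volume: np.ndarray, index: int, context: int = 1) -> np.ndarray:
    """Build a 2.5D input: the target slice plus `context` neighbors on each
    side, stacked along the channel axis.

    volume: CT volume of shape (depth, H, W).
    Returns an array of shape (2 * context + 1, H, W).
    Edge slices are clamped by repeating the boundary slice (an assumption;
    zero-padding is another common choice).
    """
    depth = volume.shape[0]
    idxs = [min(max(index + offset, 0), depth - 1)
            for offset in range(-context, context + 1)]
    return np.stack([volume[i] for i in idxs], axis=0)
```

For example, with `context=1` the slice at index 0 of a 5-slice volume yields a 3-channel input whose first two channels both repeat slice 0.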
MBZUAI researchers led by Dr. Mohammad Yaqub are developing AI algorithms for real-time medical diagnoses, including tools for multiple sclerosis and congenital heart disease. The team developed ScanNav, an AI fetal anomaly assessment system licensed by GE Healthcare for Voluson SWIFT ultrasound machines. ScanNav assists doctors during anomaly scans after 20 weeks of gestation to check for conditions like heart issues and spina bifida. Why it matters: This research has the potential to significantly improve the speed and accuracy of medical diagnoses in the UAE and beyond, addressing critical gaps in healthcare.
A new brain tumor segmentation method based on convolutional neural networks is proposed for the BraTS-GoAT challenge. The method employs the MedNeXt architecture and model ensembling to segment tumors in brain MRI scans from diverse populations. Experiments on the unseen validation set demonstrate promising results, with an average Dice similarity coefficient (DSC) of 85.54%.
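The DSC reported above measures the overlap between a predicted mask and the ground truth: twice the intersection divided by the sum of the two mask sizes. A minimal sketch of the metric (illustrative, not the challenge's official evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray,
                     eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |pred AND target| / (|pred| + |target|), in [0, 1];
    `eps` avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Example: a 4-voxel prediction against a 6-voxel ground truth sharing 4 voxels
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
score = dice_coefficient(a, b)  # 2*4 / (4+6) ≈ 0.8
```

An average DSC of 85.54% thus means that, on average, the predicted tumor masks overlap the reference annotations at this ratio.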
A senior lecturer at the University of New South Wales discussed the use of AI to improve early prognosis and personalized treatment plans for neurodegenerative and cardiovascular diseases, drawing on cardiovascular imaging and multiomics. The lecture highlighted the potential of AI algorithms to detect subtle changes at early stages through advanced multiomics techniques and medical imaging analysis. The speaker has expertise in analyzing medical images and has collaborated with medical professionals to develop AI tools for the diagnosis of cancer, neurodegenerative disease, and heart disease. Why it matters: AI-driven prognosis and treatment planning promises earlier intervention and improved outcomes for challenging diseases in the region.
MBZUAI's first Ph.D. graduate, Numan Saeed, developed deep learning models to diagnose head and neck cancers using PET and CT scan imagery. His research focused on improving early detection and accurate localization of tumors, aiming to enhance diagnosis and prognosis. Early diagnosis can reduce mortality rates by up to 70%. Why it matters: This research showcases the potential of AI in healthcare to improve cancer diagnosis and treatment, addressing a critical need in resource-constrained healthcare systems.
This paper introduces Pulmonary Embolism Detection using Contrastive Learning (PECon), a supervised contrastive pretraining strategy using both CT scans and EHR data to improve feature alignment between modalities for better PE diagnosis. PECon pulls sample features of the same class together while pushing away features of other classes. The approach achieves state-of-the-art results on the RadFusion dataset, with an F1-score of 0.913 and AUROC of 0.943.
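The pull-together/push-apart behavior described above is the core of supervised contrastive learning: each anchor's loss rewards high similarity to same-class samples relative to all others. The NumPy sketch below illustrates that general objective; it is not PECon's implementation, and the function name, temperature value, and batch layout are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(features: np.ndarray, labels: np.ndarray,
                                temperature: float = 0.1) -> float:
    """Supervised contrastive loss over a batch of feature vectors.

    features: (n, d) array; labels: (n,) class labels. Assumes every class
    in the batch has at least two samples, so each anchor has a positive.
    Positives (same label) are pulled together; other classes pushed apart.
    """
    n = features.shape[0]
    # L2-normalize so similarity is cosine similarity
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    not_self = ~np.eye(n, dtype=bool)           # exclude self-comparisons
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    exp_sim = np.exp(sim) * not_self
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1) / pos_mask.sum(axis=1)
    return float(-mean_log_prob_pos.mean())
```

With well-separated class clusters the loss is near zero; when features of different classes are mixed together it grows, which is what drives the encoders to align same-class image and EHR features.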