MBZUAI's Dr. Mohammad Yaqub is developing AI algorithms to power point-of-care ultrasound (PoCUS) on mobile devices, expanding on his prior work on an AI-based fetal anomaly detection system used in GE Healthcare ultrasound machines. These algorithms aim to make smaller, affordable PoCUS devices practical in remote areas for faster diagnoses. The handheld devices, costing around US$5,000, connect to mobile devices and provide built-in intelligence to interpret images, addressing the shortage of specialists in remote locations. Why it matters: This initiative democratizes access to critical diagnostic tools, potentially saving lives by enabling early detection of life-threatening conditions in underserved communities.
Dr. Alison Noble from the University of Oxford presented her work on smart medical ultrasound technology at the KAUST Research Open Week, focusing on automated image analysis and deep learning. Her research aims to improve data collection, patient-doctor relations, and accessibility of healthcare. Portable ultrasound technology can increase accessibility for patients in remote areas. Why it matters: AI-enhanced ultrasound has the potential to significantly improve healthcare delivery and diagnostics in Saudi Arabia and the broader region, especially in underserved communities.
This paper introduces Pulmonary Embolism Detection using Contrastive Learning (PECon), a supervised contrastive pretraining strategy that uses both CT scans and electronic health record (EHR) data to improve feature alignment between modalities for better PE diagnosis. PECon pulls sample features of the same class together while pushing away features of other classes. The approach achieves state-of-the-art results on the RadFusion dataset, with an F1-score of 0.913 and AUROC of 0.943.
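The "pull same-class features together, push other classes away" objective the summary describes can be sketched as a supervised contrastive (SupCon-style) loss. This is a minimal illustrative implementation, not the paper's actual code; the function name, temperature value, and shapes are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, maximize similarity
    to same-class samples relative to all other samples in the batch."""
    # L2-normalize embeddings so the dot product is cosine similarity
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature
    n = len(labels)
    logits_mask = ~np.eye(n, dtype=bool)                      # exclude self-similarity
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    # log-softmax over all non-self samples (numerically stabilized)
    sim_max = sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * logits_mask
    log_prob = (sim - sim_max) - np.log(exp_sim.sum(axis=1, keepdims=True))
    # average log-probability over each anchor's positives
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                                    # anchors with >=1 positive
    mean_log_prob_pos = (log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

A batch whose same-class embeddings already cluster together yields a much lower loss than one where classes are mixed, which is exactly the alignment pressure PECon applies across the CT and EHR modalities.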
MBZUAI and Sheikh Shakhbout Medical City researchers developed PECon, a deep learning method for pulmonary embolism detection using CT scans and electronic health records. PECon uses neural networks and contrastive learning to encode and align the imaging and EHR data. The method aims to improve diagnosis accuracy and speed, potentially saving lives. Why it matters: This research demonstrates AI's potential to enhance medical diagnostics in the UAE, addressing a critical healthcare challenge.
This paper introduces a self-supervised contrastive learning method for segmenting the left ventricle in echocardiography images when limited labeled data is available. The approach uses contrastive pretraining to improve the performance of UNet and DeepLabV3 segmentation networks. Experiments on the EchoNet-Dynamic dataset show the method achieves a Dice score of 0.9252, outperforming existing approaches, with code available on GitHub.
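The reported Dice score of 0.9252 measures overlap between the predicted and ground-truth left-ventricle masks. A minimal sketch of the metric (illustrative only; the paper's evaluation pipeline may differ in detail):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A score of 0.9252 therefore means the predicted ventricle mask shares roughly 93% of its area (in the Dice sense) with the expert annotation.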
Khalifa University's podcast, KU Pulse, featured Dr. Jamal Alsawalhi discussing the challenges of decarbonizing the transport sector. The episode explores battery costs, infrastructure, supply chains, and public perception, highlighting emerging KU technologies such as rare-earth-free motors and wireless charging. Another episode featured Dr. Mohamed Ramy Elmarry and Omar Aldhanhani discussing the UAE's first mainland Antarctic expedition. Why it matters: The podcast highlights KU's research contributions to sustainable technologies and polar science, crucial for addressing climate change and promoting innovation in the UAE.
This paper introduces a multi-task learning approach for fetal biometry estimation from ultrasound images that simultaneously classifies the anatomical region (head, abdomen, femur) and estimates the corresponding biometric measurement. The model, a U-Net architecture with an added classification head, achieved mean absolute errors of 1.08 mm for head circumference, 1.44 mm for abdomen circumference, and 1.10 mm for femur length, with 99.91% classification accuracy. The researchers are affiliated with MBZUAI. Why it matters: This research demonstrates advancements in automated fetal health monitoring using AI, potentially improving prenatal care and diagnostics in the region.
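The multi-task setup — one shared feature extractor feeding both a regression head (measurement in mm) and a classification head (anatomical region) — can be sketched with toy linear heads. Everything here (class names, loss weighting `alpha`, feature dimensions) is an illustrative assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class MultiTaskHeads:
    """Toy shared-feature model: a linear regression head for the biometric
    measurement and a softmax head for the region class (head/abdomen/femur)."""
    def __init__(self, feat_dim, n_classes):
        self.w_reg = rng.normal(scale=0.1, size=(feat_dim, 1))
        self.w_cls = rng.normal(scale=0.1, size=(feat_dim, n_classes))

    def forward(self, feats):
        return feats @ self.w_reg, softmax(feats @ self.w_cls)

def multitask_loss(pred_mm, probs, true_mm, true_cls, alpha=1.0):
    """Weighted sum of the two task losses trained jointly."""
    mae = np.abs(pred_mm.ravel() - true_mm).mean()                            # regression (mm)
    ce = -np.log(probs[np.arange(len(true_cls)), true_cls] + 1e-12).mean()    # classification
    return mae + alpha * ce
```

Sharing the encoder lets the classification task tell the network *which* structure it is measuring, which is the usual rationale for jointly training the two heads.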
Researchers from MBZUAI have developed EchoCoTr, a novel spatiotemporal deep learning method for estimating left ventricular ejection fraction (LVEF) from echocardiograms. EchoCoTr combines CNNs and vision transformers to overcome the limitations of each when applied to medical video data. The method achieves state-of-the-art results on the EchoNet-Dynamic dataset, demonstrating improved accuracy compared to existing approaches, with code available on GitHub.
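The hybrid pattern behind EchoCoTr — convolutional features extracted per frame, then attention across the time axis to produce a single ejection-fraction estimate — can be sketched as follows. This is a loose structural illustration under assumed shapes, not the published EchoCoTr model:

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention over time steps.
    x has shape (n_frames, feat_dim); each row is one frame's CNN feature."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)        # softmax over frames
    return weights @ x

def estimate_ef(frame_features, w_out):
    """Attend across frames, pool, and map to a scalar EF percentage."""
    attended = self_attention(frame_features)
    pooled = attended.mean(axis=0)                       # temporal average pooling
    return 100.0 / (1.0 + np.exp(-(pooled @ w_out)))     # sigmoid -> (0, 100)
```

The motivation the summary gives — CNNs capture local spatial detail while transformers model long-range temporal dependencies across the cardiac cycle — corresponds to the per-frame feature step and the cross-frame attention step, respectively.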