This paper introduces a multi-task learning approach for fetal biometric estimation from ultrasound images, jointly classifying anatomical regions (head, abdomen, femur) and estimating the corresponding biometric measurements. The model, a U-Net architecture with an added classification head, achieved a mean absolute error of 1.08 mm for head circumference, 1.44 mm for abdominal circumference, and 1.10 mm for femur length, with 99.91% classification accuracy. The researchers are affiliated with MBZUAI. Why it matters: This research demonstrates advancements in automated fetal health monitoring using AI, potentially improving prenatal care and diagnostics in the region.
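The summary does not give the exact architecture, but a minimal sketch of the described setup, a U-Net-style segmentation network with an auxiliary classification head branching off the bottleneck, might look like the following (channel widths and layer choices are assumptions, not the authors' implementation):

```python
# Sketch of a multi-task U-Net: a segmentation decoder plus a
# classification head over the bottleneck features. Illustrative only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MultiTaskUNet(nn.Module):
    def __init__(self, n_classes=3):  # head / abdomen / femur
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.seg_head = nn.Conv2d(32, 1, 1)            # structure mask
        self.cls_head = nn.Sequential(                 # anatomical region
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_classes))

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d1 = self.dec1(torch.cat([self.up1(b), e2], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d1), e1], dim=1))
        return self.seg_head(d2), self.cls_head(b)
```

In designs like this, the biometric measurements are commonly derived from the predicted mask (e.g., head circumference from a fitted ellipse), though the paper's exact estimation step is not specified in this summary.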
Researchers propose a universal anatomical embedding (UAE) framework that learns appearance, semantic, and cross-modality anatomical embeddings for medical image analysis. UAE combines semantic embedding learning with a prototypical contrastive loss, a fixed-point-based matching strategy, and an iterative approach to cross-modality embedding learning. The framework was evaluated on landmark detection, lesion tracking, and CT-MRI registration tasks, outperforming existing state-of-the-art methods.
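To make the semantic-embedding objective concrete, here is a hedged sketch of a prototypical contrastive loss in the spirit of what the summary describes; the temperature and the momentum-based prototype update are assumptions, and the UAE paper's exact formulation may differ:

```python
# Prototypical contrastive loss: attract each feature to its class
# prototype and repel it from the other prototypes.
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(embeddings, labels, prototypes, tau=0.07):
    """embeddings: (N, D) L2-normalised pixel/voxel features
    labels:      (N,)   anatomical class index of each feature
    prototypes:  (C, D) L2-normalised per-class prototype vectors
    """
    logits = embeddings @ prototypes.t() / tau   # (N, C) similarity to prototypes
    return F.cross_entropy(logits, labels)

def update_prototypes(prototypes, embeddings, labels, momentum=0.9):
    # Prototypes maintained as running means of class features (assumed rule).
    for c in labels.unique():
        mean_c = F.normalize(embeddings[labels == c].mean(0), dim=0)
        prototypes[c] = F.normalize(
            momentum * prototypes[c] + (1 - momentum) * mean_c, dim=0)
    return prototypes
```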
MBZUAI and Corniche Hospital researchers have developed FetalCLIP, a foundation model for analyzing fetal ultrasound images to detect congenital conditions. FetalCLIP outperformed other foundation models on fetal ultrasound analysis tasks. The model aims to improve the early diagnosis of conditions such as congenital heart defects. Why it matters: This innovation has the potential to dramatically improve health outcomes for millions of children annually by providing physicians with better insights into fetal health.
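FetalCLIP's training recipe is not detailed in this summary, but as the name suggests it builds on CLIP-style pretraining; a generic symmetric image-text contrastive objective of the kind such models use is sketched below (the temperature value is an assumption):

```python
# Generic CLIP-style loss: paired image and text embeddings are pulled
# together while mismatched pairs in the batch are pushed apart.
import torch
import torch.nn.functional as F

def clip_loss(image_feats, text_feats, temperature=0.07):
    """image_feats, text_feats: (B, D) paired ultrasound / caption embeddings."""
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(text_feats, dim=-1)
    logits = img @ txt.t() / temperature          # (B, B) pairwise similarities
    targets = torch.arange(len(img), device=img.device)
    # Symmetric InfoNCE: each image matches its caption and vice versa.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```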
Researchers at MBZUAI introduce FissionFusion, a hierarchical model-merging approach for improving medical image analysis performance. The method aggregates fine-tuned models locally and globally based on their hyperparameter configurations, and uses a cyclical learning rate scheduler to generate candidate models efficiently. Experiments show FissionFusion outperforms standard model souping by approximately 6% on the HAM10000 and CheXpert datasets and improves out-of-distribution (OOD) performance.
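A minimal sketch of the hierarchical merging idea as summarised above: checkpoints are first averaged within groups sharing a hyperparameter configuration (local), then the group averages are merged (global). The grouping key and helper names are illustrative, not taken from the paper:

```python
# Two-level weight averaging ("souping"): local per-configuration
# averages, then one global average over the local soups.
import torch
from collections import defaultdict

def average_state_dicts(state_dicts):
    keys = state_dicts[0].keys()
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(0)
            for k in keys}

def hierarchical_merge(checkpoints):
    """checkpoints: list of (hyperparam_config_id, state_dict) pairs."""
    groups = defaultdict(list)
    for config_id, sd in checkpoints:
        groups[config_id].append(sd)
    local_soups = [average_state_dicts(sds) for sds in groups.values()]  # local
    return average_state_dicts(local_soups)                              # global
```

Averaging per configuration first keeps dissimilar fine-tuning runs from dominating the merge, which is the intuition behind going beyond a single flat soup.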
This paper introduces a self-supervised contrastive learning method for segmenting the left ventricle in echocardiography images when labeled data is limited. The approach uses contrastive pretraining to improve the performance of UNet and DeepLabV3 segmentation networks. Experiments on the EchoNet-Dynamic dataset show the method achieves a Dice score of 0.9252, outperforming existing approaches; code is available on GitHub.
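The summary does not specify which contrastive objective is used, so the sketch below shows a standard SimCLR-style NT-Xent loss as one plausible instantiation of the pretraining step (the temperature and view-pairing scheme are assumptions):

```python
# NT-Xent loss over two augmented views of the same echo frames; the
# pretrained encoder then initialises a UNet/DeepLabV3 for fine-tuning
# on the labelled left-ventricle masks.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (B, D) projections of two augmented views of the same frames."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.t() / tau                                # (2B, 2B) similarities
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))                    # drop self-similarity
    # The positive for row i is the other view of the same frame.
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```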