GCC AI Research


Results for "FetalCLIP"

Using AI to detect congenital conditions before birth

MBZUAI ·

MBZUAI and Corniche Hospital researchers have developed FetalCLIP, a foundation model for analyzing fetal ultrasound images to detect congenital conditions. FetalCLIP outperformed other foundation models on ultrasound analysis tasks. The AI model aims to improve the early diagnosis of ailments like congenital heart defects. Why it matters: This innovation has the potential to dramatically improve health outcomes for millions of children annually by providing physicians with better insights into fetal health.

Multi-Task Learning Approach for Unified Biometric Estimation from Fetal Ultrasound Anomaly Scans

arXiv ·

This paper introduces a multi-task learning approach for fetal biometric estimation from ultrasound images, jointly classifying the anatomical region (head, abdomen, femur) and estimating the corresponding biometric parameters. The model, a U-Net architecture with a classification head, achieved a mean absolute error of 1.08 mm for head circumference, 1.44 mm for abdominal circumference, and 1.10 mm for femur length, with 99.91% classification accuracy. The researchers are affiliated with MBZUAI. Why it matters: This research demonstrates advancements in automated fetal health monitoring using AI, potentially improving prenatal care and diagnostics in the region.
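A multi-task model of this kind is typically trained on a single objective that combines the regression and classification branches. The sketch below illustrates the idea with NumPy; the function names and the weighting factor `alpha` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def multitask_loss(pred_mm, true_mm, class_logits, class_label, alpha=1.0):
    """Combine a biometric regression loss (mean absolute error, in mm)
    with a cross-entropy loss over the three anatomical regions.
    `alpha` weights the classification term against the regression term."""
    mae = np.mean(np.abs(pred_mm - true_mm))        # regression branch
    shifted = class_logits - class_logits.max()     # numerically stable softmax
    probs = np.exp(shifted) / np.exp(shifted).sum()
    ce = -np.log(probs[class_label])                # classification branch
    return mae + alpha * ce

# Toy example: predicted vs. true head circumference, 3-way region logits
loss = multitask_loss(np.array([180.5]), np.array([179.4]),
                      np.array([4.0, 0.5, 0.2]), class_label=0)
```

In practice both branches share the U-Net encoder, so minimizing this joint loss lets the region classification and the biometric estimates reinforce each other.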

Five ways AI is creating a healthier future

MBZUAI ·

MBZUAI researchers developed FetalCLIP, an AI model trained on 210,000 ultrasound images for fast and reliable interpretation of fetal scans. MBZUAI's President Eric Xing contributed to the General Expression Transformer (GET), an AI foundation model acting as a biological simulator to predict gene behavior. MBZUAI and Carleton University created MedPromptX for quicker disease diagnosis and treatment plans using multimodal AI. Why it matters: These AI advancements from MBZUAI have the potential to revolutionize healthcare in the region and globally, from prenatal care to drug discovery and personalized medicine.

UniMed-CLIP: Towards a Unified Image-Text Pretraining Paradigm for Diverse Medical Imaging Modalities

arXiv ·

MBZUAI researchers introduce UniMed-CLIP, a unified Vision-Language Model (VLM) for diverse medical imaging modalities, trained on the new large-scale, open-source UniMed dataset. UniMed comprises over 5.3 million image-text pairs across six modalities: X-ray, CT, MRI, Ultrasound, Pathology, and Fundus, created using LLMs to transform classification datasets into image-text formats. UniMed-CLIP significantly outperforms existing generalist VLMs and matches modality-specific medical VLMs in zero-shot evaluations, improving over BiomedCLIP by +12.61 on average across 21 datasets while using 3x less training data.
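CLIP-style models such as UniMed-CLIP are pretrained with a symmetric contrastive objective over batches of paired image and text embeddings. The following is a generic NumPy sketch of that objective, not UniMed-CLIP's actual code; the temperature value is an illustrative assumption.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings,
    as used in CLIP-style image-text pretraining."""
    # L2-normalise so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(logits))             # matched pairs sit on the diagonal

    def xent(l):                                # row-wise softmax cross-entropy
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
loss = clip_contrastive_loss(emb, emb)  # perfectly aligned pairs give a low loss
```

This diagonal-matching objective is also what enables the zero-shot evaluations reported above: at test time, a class is scored by the similarity between the image embedding and the embedding of a text prompt describing that class.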

FissionFusion: Fast Geometric Generation and Hierarchical Souping for Medical Image Analysis

arXiv ·

Researchers at MBZUAI introduce FissionFusion, a hierarchical model merging approach to improve medical image analysis performance. The method uses local and global aggregation of models based on hyperparameter configurations, along with a cyclical learning rate scheduler for efficient model generation. Experiments show FissionFusion outperforms standard model souping by approximately 6% on the HAM10000 and CheXpert datasets and improves out-of-distribution (OOD) performance.
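Model souping means averaging the weights of several fine-tuned checkpoints rather than ensembling their predictions. The sketch below shows the hierarchical variant described above: average within groups of runs first (local), then across the group averages (global). This is a minimal NumPy illustration under those assumptions, not the FissionFusion implementation.

```python
import numpy as np

def uniform_soup(state_dicts):
    """Average each parameter tensor across fine-tuned checkpoints
    ("model souping"). All checkpoints must share the same architecture."""
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}

def hierarchical_soup(groups):
    """groups: list of checkpoint lists, one inner list per hyperparameter
    configuration. Aggregate locally within each group, then globally."""
    local = [uniform_soup(g) for g in groups]   # local aggregation
    return uniform_soup(local)                  # global aggregation

# Toy checkpoints with a single weight matrix "w"
g1 = [{"w": np.ones((2, 2))}, {"w": 3 * np.ones((2, 2))}]
g2 = [{"w": 5 * np.ones((2, 2))}]
souped = hierarchical_soup([g1, g2])   # local means 2 and 5, global mean 3.5
```

Averaging group means rather than all checkpoints at once keeps one heavily sampled hyperparameter configuration from dominating the final soup.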

Lab-grown stem cells used to study embryogenesis

KAUST ·

Researchers at KAUST and Peking University Third Hospital have created a novel blastoid model for studying early human development using extended pluripotent stem cells (EPSCs). The blastoid is a 3D cell model mimicking the blastocyst stage, avoiding ethical concerns associated with using human embryos. The team showed that blastoids can be cultured to mimic post-implantation development, offering insights into early cell lineages. Why it matters: This innovation provides a way to study human embryogenesis without the ethical constraints of using actual embryos, potentially advancing our understanding of miscarriage and birth defects.

Deep learning accelerates research on early pregnancies

KAUST ·

KAUST researchers have developed deepBlastoid, a deep learning tool for evaluating models of human embryo development, called blastoids. deepBlastoid can evaluate images of blastoids 1,000 times faster than expert scientists, processing 273 images per second. Trained on over 2,000 microscopic images of blastoids, the tool was then used to assess the impact of chemicals on blastoid development across more than 10,000 images. Why it matters: This AI tool accelerates research into early pregnancy, fertility complications, and the impact of chemicals on embryo development, with implications for reproductive technologies.