GCC AI Research

Results for "LVEF"

EchoCoTr: Estimation of the Left Ventricular Ejection Fraction from Spatiotemporal Echocardiography

arXiv ·

Researchers from MBZUAI have developed EchoCoTr, a novel spatiotemporal deep learning method for estimating left ventricular ejection fraction (LVEF) from echocardiograms. EchoCoTr combines CNNs and vision transformers to overcome the limitations each architecture faces on its own when applied to medical video data. The method achieves state-of-the-art results on the EchoNet-Dynamic dataset, demonstrating improved accuracy over existing approaches, with code available on GitHub.

Accelerating echocardiogram analysis with AI: a new deep learning method presented at MICCAI

MBZUAI ·

MBZUAI researchers developed a new deep learning method for rapid and accurate estimation of clinical measurements from echocardiograms. The method focuses on improving the measurement of the left ventricular ejection fraction, a key indicator of heart health. Their deep learning approach improves on previous methods through a better-organized data representation, enhancing both performance and transferability. Why it matters: The AI-driven solution can potentially reduce analysis time for cardiologists, improve patient care, and be particularly beneficial in regions with limited healthcare resources.

Contrastive Pretraining for Echocardiography Segmentation with Limited Data

arXiv ·

This paper introduces a self-supervised contrastive learning method for segmenting the left ventricle in echocardiography images when limited labeled data is available. The approach uses contrastive pretraining to improve the performance of UNet and DeepLabV3 segmentation networks. Experiments on the EchoNet-Dynamic dataset show the method achieves a Dice score of 0.9252, outperforming existing approaches, with code available on GitHub.
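
The Dice score reported above measures the overlap between a predicted segmentation mask and the ground truth. A minimal sketch, assuming binary NumPy masks (function name ours, not from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).

    A score of 1.0 means perfect overlap; 0.0 means no overlap.
    `eps` avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

In segmentation papers this metric is typically averaged over the test set, which is how a single number like 0.9252 is obtained.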

PECon: Contrastive Pretraining to Enhance Feature Alignment between CT and EHR Data for Improved Pulmonary Embolism Diagnosis

arXiv ·

This paper introduces Pulmonary Embolism Detection using Contrastive Learning (PECon), a supervised contrastive pretraining strategy using both CT scans and EHR data to improve feature alignment between modalities for better PE diagnosis. PECon pulls sample features of the same class together while pushing away features of other classes. The approach achieves state-of-the-art results on the RadFusion dataset, with an F1-score of 0.913 and AUROC of 0.943.
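
The pull-together/push-apart behavior described above is the standard supervised contrastive (SupCon-style) objective. The NumPy sketch below illustrates that general objective; the function name and details are ours, not the authors' implementation:

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of L2-normalized features (N, D).

    For each anchor, features with the same label are treated as positives
    (pulled together); all other samples act as negatives (pushed apart).
    """
    n = features.shape[0]
    sim = features @ features.T / temperature        # pairwise cosine similarities
    logits_mask = ~np.eye(n, dtype=bool)             # exclude self-similarity
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    # log-softmax of each similarity over all other samples in the batch
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # average log-probability over the positives of each anchor that has any
    has_pos = pos_mask.sum(axis=1) > 0
    mean_log_pos = (log_prob * pos_mask).sum(axis=1)[has_pos] / pos_mask.sum(axis=1)[has_pos]
    return -mean_log_pos.mean()
```

In a multimodal setting like PECon, the batch would mix CT-derived and EHR-derived embeddings so that same-class features align across modalities.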

SALT: Parameter-Efficient Fine-Tuning via Singular Value Adaptation with Low-Rank Transformation

arXiv ·

Researchers introduce SALT, a parameter-efficient fine-tuning method for medical image segmentation that combines singular value adaptation with low-rank transformation. SALT selectively adapts influential singular values and complements this with a low-rank update for the remaining subspace. Experiments on five medical datasets show SALT outperforms state-of-the-art PEFT methods by 2–5% in Dice score while training only 3.9% of the parameters.
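
The idea of adapting influential singular values plus a low-rank residual can be illustrated with a rough sketch. The parameterization below (a trainable scale and shift on the top-k singular values, plus a rank-r additive update) is an assumed form for illustration, not the paper's exact formulation:

```python
import numpy as np

def salt_reparam(W, k=4, r=2, rng=None):
    """Sketch of a SALT-style reparameterization of a frozen weight matrix W.

    Decomposes W via SVD, exposes trainable scale/shift on the top-k singular
    values, and adds a rank-r low-rank update covering the remaining subspace.
    Returns a forward() closure plus the trainable parameters.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    U, S, Vt = np.linalg.svd(W, full_matrices=False)   # frozen factors
    scale = np.ones(k)                                  # trainable: multiplies top-k values
    shift = np.zeros(k)                                 # trainable: offsets top-k values
    A = rng.standard_normal((W.shape[0], r)) * 0.01     # trainable low-rank factor
    B = np.zeros((r, W.shape[1]))                       # init to zero so W is unchanged at start

    def forward():
        S_adapted = S.copy()
        S_adapted[:k] = S[:k] * scale + shift
        return (U * S_adapted) @ Vt + A @ B

    return forward, (scale, shift, A, B)
```

With scale = 1, shift = 0, and B = 0, the adapted matrix equals the original W, so fine-tuning starts from the pretrained behavior; the trainable parameter count is 2k + r·(m + n) rather than m·n.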

Breathing new life into medical applications

MBZUAI ·

MBZUAI graduate Ahmed Sharshar developed a computer vision application that assesses lung health from a video of a person breathing, estimating Forced Vital Capacity (FVC), Forced Expiratory Volume in 1 second (FEV1), and Peak Expiratory Flow (PEF). The model achieved up to 100% accuracy using thermal video data from 60 participants. Sharshar aims to create lightweight models applicable in developing countries without high-end GPUs. Why it matters: This research showcases the potential of AI to democratize healthcare access through non-invasive, accessible diagnostic tools.

A Benchmark and Agentic Framework for Omni-Modal Reasoning and Tool Use in Long Videos

arXiv ·

A new benchmark, LongShOTBench, is introduced for evaluating multimodal reasoning and tool use in long videos, featuring open-ended questions and diagnostic rubrics. The benchmark addresses the limitations of existing datasets by combining temporal length with multimodal richness, using human-validated samples. LongShOTAgent, an agentic system for analyzing long videos, is also presented; evaluations with both the benchmark and the agent show that long-video reasoning remains challenging for state-of-the-art MLLMs.