The paper introduces MedPromptX, a clinical decision support system for chest X-ray diagnosis that combines multimodal large language models (MLLMs), few-shot prompting (FP), and visual grounding (VG), integrating imagery with electronic health record (EHR) data. MedPromptX dynamically refines its few-shot examples to adapt to new patient scenarios in real time, while visual grounding narrows the model's attention to relevant regions of the X-ray. The study also introduces MedPromptX-VQA, a new visual question answering dataset, and reports state-of-the-art performance, with an 11% F1-score improvement over baselines.
Researchers from MBZUAI have developed XReal, a diffusion model for generating realistic chest X-ray images with precise control over anatomy and pathology location. The model uses an Anatomy Controller and a Pathology Controller to introduce spatial control into a pre-trained text-to-image diffusion model without fine-tuning. XReal outperforms existing X-ray diffusion models in realism, as judged by both quantitative metrics and radiologists' ratings, and its code and model weights are available.
MBZUAI researchers introduce XrayGPT, a conversational medical vision-language model for analyzing chest radiographs and answering open-ended questions. The model aligns a medical visual encoder (MedCLIP) with a fine-tuned large language model (Vicuna) via a single linear transformation. To enhance performance, the LLM was fine-tuned on 217k interactive summaries generated from radiology reports.
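As described, the only trainable bridge between the frozen visual encoder and the LLM is a single linear transformation. A minimal sketch of that alignment step, where the dimensions are assumptions for illustration, not the paper's actual sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 512-d visual features (MedCLIP-like encoder),
# 4096-d token embeddings (Vicuna-7B-like LLM). Both are assumptions.
VIS_DIM, LLM_DIM, N_TOKENS = 512, 4096, 49

# The only trainable piece in this alignment scheme: one linear map.
W = rng.standard_normal((VIS_DIM, LLM_DIM)) * 0.02
b = np.zeros(LLM_DIM)

def project_visual_tokens(visual_tokens: np.ndarray) -> np.ndarray:
    """Map frozen visual-encoder tokens into the LLM's embedding space."""
    return visual_tokens @ W + b

x = rng.standard_normal((N_TOKENS, VIS_DIM))  # features from the frozen encoder
llm_ready = project_visual_tokens(x)
print(llm_ready.shape)  # (49, 4096): ready to prepend to text embeddings
```

Because both the encoder and the LLM stay frozen, only `W` and `b` need gradients during alignment training, which keeps the approach cheap.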
This paper introduces Pulmonary Embolism Detection using Contrastive Learning (PECon), a supervised contrastive pretraining strategy using both CT scans and EHR data to improve feature alignment between modalities for better PE diagnosis. PECon pulls sample features of the same class together while pushing away features of other classes. The approach achieves state-of-the-art results on the RadFusion dataset, with an F1-score of 0.913 and AUROC of 0.943.
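The pull-together/push-apart objective described here is a supervised contrastive (SupCon-style) loss. A minimal NumPy sketch of that idea, operating on already-fused feature vectors; the toy features and function are illustrative, not PECon's actual implementation:

```python
import numpy as np

def supcon_loss(features: np.ndarray, labels: np.ndarray, tau: float = 0.1) -> float:
    """Supervised contrastive loss: for each anchor, all other samples of
    the same class are positives; everything else is a negative."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / tau
    self_mask = np.eye(len(labels), dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)            # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # mean log-likelihood of each anchor's positives, negated and averaged
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) / pos.sum(axis=1)
    return float(-per_anchor.mean())

labels = np.array([0, 0, 1, 1])
aligned   = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])  # classes cluster
scrambled = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])  # positives apart
low, high = supcon_loss(aligned, labels), supcon_loss(scrambled, labels)
print(low < high)  # True: clustered same-class features minimize the loss
```

In PECon's setting the class label is the PE diagnosis, so minimizing this loss pulls CT and EHR features of same-diagnosis samples toward each other, which is the cross-modal alignment the summary describes.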
Researchers at MBZUAI have introduced TiBiX, a novel approach leveraging temporal information from previous chest X-rays (CXRs) and reports for bidirectional generation of current CXRs and reports. TiBiX addresses two key challenges: generating current images from previous images and reports, and generating current reports from both previous and current images. The study also introduces a curated temporal benchmark dataset derived from the MIMIC-CXR dataset and achieves state-of-the-art results in report generation.
Researchers propose a universal anatomical embedding (UAE) framework for medical image analysis that learns appearance, semantic, and cross-modality anatomical embeddings. UAE combines semantic embedding learning with a prototypical contrastive loss, a fixed-point-based matching strategy, and an iterative approach for cross-modality embedding learning. The framework was evaluated on landmark detection, lesion tracking, and CT-MRI registration, outperforming existing state-of-the-art methods.
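The fixed-point-based matching strategy can be pictured as bouncing a query point between the two embedding sets until the round-trip mapping stabilizes. A simplified sketch of that idea using nearest-neighbor matching on toy embeddings; UAE's actual procedure is more involved:

```python
import numpy as np

def fixed_point_match(emb_a: np.ndarray, emb_b: np.ndarray,
                      query: int, max_iters: int = 10) -> int:
    """Bounce a query between two embedding sets (A -> B -> A -> ...)
    until the round trip stops moving, i.e. reaches a fixed point."""
    sim = emb_a @ emb_b.T               # cosine sims (rows are normalized)
    i = query
    for _ in range(max_iters):
        j = int(np.argmax(sim[i]))      # best match in B for A-point i
        i_back = int(np.argmax(sim[:, j]))  # best match in A for B-point j
        if i_back == i:                 # round trip returned home: done
            return j
        i = i_back                      # otherwise keep iterating
    return j

rng = np.random.default_rng(0)
emb_a = rng.standard_normal((5, 8))
emb_a /= np.linalg.norm(emb_a, axis=1, keepdims=True)
emb_b = emb_a[[2, 0, 1, 3, 4]]          # same anatomy, rows shuffled
print(fixed_point_match(emb_a, emb_b, query=0))  # -> 1 (A row 0 is B row 1)
```

Requiring the match to survive the round trip filters out the one-directional false matches that plain nearest-neighbor lookup produces.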
A KAUST team led by Xin Gao developed an AI model for COVID-19 detection from CT scans, addressing limitations of existing methods. The model incorporates a novel embedding strategy, a CT scan simulator, and a 2.5D deep-learning algorithm. Tested at King Faisal Specialist Hospital, the model demonstrated high accuracy in detecting COVID-19 cases. Why it matters: This research provides a valuable tool for rapid and accurate COVID-19 diagnosis in the region, especially in early-stage infections, improving healthcare outcomes.
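"2.5D" commonly means feeding a 2D network each slice together with its neighbors as extra channels, giving limited through-plane context without the cost of full 3D convolutions. A sketch of that input construction, under the assumption that the KAUST model follows this general pattern (its exact design may differ):

```python
import numpy as np

def make_25d_inputs(volume: np.ndarray, k: int = 1) -> np.ndarray:
    """Bundle each CT slice with its k neighbors above and below as
    channels, so a 2D network sees limited through-plane context."""
    s = volume.shape[0]
    # Neighbor indices per slice, clipped at the volume boundaries.
    idx = np.clip(np.arange(s)[:, None] + np.arange(-k, k + 1), 0, s - 1)
    return volume[idx]                  # shape: (s, 2k+1, H, W)

vol = np.arange(10 * 4 * 4, dtype=float).reshape(10, 4, 4)  # toy CT volume
stacks = make_25d_inputs(vol, k=1)
print(stacks.shape)  # (10, 3, 4, 4): each slice plus one neighbor per side
```

Boundary slices simply repeat themselves via clipping, so every stack has the same channel count.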
MBZUAI doctoral student Mai A. Shaaban and colleagues developed MedPromptX, a system that analyzes chest X-rays and patient data to aid lung disease diagnoses. MedPromptX uses multimodal large language models with visual grounding and few-shot prompting, trained on a new dataset of 6,000 patient records (MedPromptX-VQA) derived from MIMIC-IV and MIMIC-CXR. The system addresses the challenge of incomplete electronic health records by leveraging the knowledge embedded in large language models to interpret lab results. Why it matters: This research advances AI-driven medical diagnostics by integrating diverse data sources and addressing data gaps, potentially leading to quicker and more accurate diagnoses.
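Few-shot prompting here amounts to prepending a handful of labeled (labs, X-ray finding, diagnosis) exemplars to the query before it reaches the model. A toy sketch of that prompt assembly; all field names and clinical values below are invented for illustration, not MedPromptX-VQA's actual schema:

```python
# Hypothetical records; MedPromptX's real schema and prompts differ.
few_shot_examples = [
    {"labs": "WBC 14.2 (high), CRP 88 (high)",
     "xray": "Right lower lobe consolidation",
     "dx": "Pneumonia"},
    {"labs": "WBC 7.1 (normal), BNP 1450 (high)",
     "xray": "Cardiomegaly with vascular congestion",
     "dx": "Heart failure"},
]

def build_prompt(examples, query_labs, query_xray):
    """Prepend labeled exemplars, then leave the query's answer blank."""
    parts = ["Given lab results and an X-ray finding, name the likely diagnosis."]
    for ex in examples:
        parts.append(f"Labs: {ex['labs']}\nX-ray: {ex['xray']}\nDiagnosis: {ex['dx']}")
    parts.append(f"Labs: {query_labs}\nX-ray: {query_xray}\nDiagnosis:")
    return "\n\n".join(parts)

prompt = build_prompt(few_shot_examples, "WBC 13.5 (high)", "Left lobar opacity")
print(prompt.endswith("Diagnosis:"))  # True: the LLM completes the answer
```

MedPromptX's contribution is selecting and refining these exemplars dynamically per patient rather than using a fixed set, so the in-context examples stay relevant to the current case.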