GCC AI Research

Results for "clinical decision support"

MedPromptX: Grounded Multimodal Prompting for Chest X-ray Diagnosis

arXiv ·

The paper introduces MedPromptX, a clinical decision support system that combines multimodal large language models (MLLMs), few-shot prompting (FP), and visual grounding (VG) for chest X-ray diagnosis, integrating imagery with EHR data. MedPromptX dynamically refines its few-shot examples to adapt in real time to new patient scenarios, and uses visual grounding to narrow the model's focus to the relevant regions of the X-ray image. The study also introduces MedPromptX-VQA, a new visual question answering dataset, and reports state-of-the-art performance with an 11% improvement in F1-score over baselines.
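To make the few-shot prompting idea concrete, here is a minimal sketch of how EHR text, selected in-context examples, and a grounded image region might be assembled into a single prompt. The function name, prompt wording, and bounding-box format are illustrative assumptions, not the paper's actual prompt template.

```python
# Sketch of few-shot multimodal prompt assembly in the spirit of MedPromptX.
# All names and formats here are hypothetical illustrations.

def build_prompt(examples, query_ehr, query_finding_box):
    """Assemble a few-shot prompt mixing EHR text with a grounded image region.

    `examples` are (ehr_summary, diagnosis) pairs selected dynamically for
    the current patient; `query_finding_box` is the region a visual-grounding
    model proposed on the query X-ray.
    """
    parts = []
    for ehr, dx in examples:
        parts.append(f"Patient record: {ehr}\nDiagnosis: {dx}")
    parts.append(
        f"Patient record: {query_ehr}\n"
        f"X-ray region of interest: {query_finding_box}\n"
        "Diagnosis:"
    )
    return "\n\n".join(parts)

prompt = build_prompt(
    examples=[("WBC elevated, fever 39C", "pneumonia")],
    query_ehr="mild dyspnea, normal labs",
    query_finding_box=(120, 80, 310, 260),  # (x1, y1, x2, y2) from the grounder
)
```

The model then completes the final "Diagnosis:" line, conditioned on both the in-context examples and the grounded region.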

From Big Data to Bedside (DB2B): Artificial Intelligence in Precision Oncology

MBZUAI ·

This article discusses the use of artificial intelligence in precision oncology, particularly in understanding individual tumor mechanisms and aiding clinical decision-making. Dr. Xinghua Lu, with extensive experience in medicine and biomedical informatics, will present research on individualized Bayesian causal inference methods for investigating oncogenic mechanisms. These methods aim to provide clinical decision support at the cellular, tumor, and patient levels. Why it matters: AI-driven precision oncology can enable more personalized and effective cancer treatments, improving patient outcomes in the region and globally.

Clinical prediction system of complications among COVID-19 patients: a development and validation retrospective multicentre study

arXiv ·

A retrospective study in Abu Dhabi, UAE, developed a machine learning-based prognostic system to predict the risk of seven complications in COVID-19 patients using data from 3,352 patient encounters. The system, trained on data from the first 24 hours of admission, achieved high accuracy (AUROC > 0.80) in predicting complications like AKI, ARDS, and elevated biomarkers in geographically split test sets. The models primarily used gradient boosting and logistic regression.
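The AUROC threshold cited above can be made concrete: AUROC is the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. A minimal pure-Python sketch, using made-up risk scores rather than the study's data:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative
    (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: 1 = complication occurred, scores = predicted risk.
labels = [0, 0, 0, 1, 1]
scores = [0.2, 0.6, 0.3, 0.5, 0.9]
score = auroc(labels, scores)  # 5/6 ≈ 0.83, which would clear the 0.80 bar
```

A model that clears AUROC > 0.80 on a held-out, geographically split test set, as reported here, separates patients who develop a complication from those who do not in more than 80% of such pairwise comparisons.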

The diagnosis game: A simulated hospital environment to measure AI agents’ diagnostic abilities

MBZUAI ·

MBZUAI researchers developed MedAgentSim, a simulated hospital environment to evaluate AI diagnostic abilities. The simulation uses LLM-powered agents to mimic doctor-patient conversations, providing a dynamic assessment of diagnostic skills. The system includes doctor, patient, and evaluator agents that interact within the simulated hospital, making real-time decisions. Why it matters: This research offers a more realistic evaluation of AI in clinical settings, addressing limitations of current benchmarks and potentially improving AI's use in healthcare.
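The three-agent structure described above can be sketched as a simple interaction loop. The rule-based agents below are stand-ins for MedAgentSim's LLM-powered agents; the case data, question list, and decision rule are all illustrative assumptions.

```python
# Minimal sketch of a doctor/patient/evaluator loop in the style of
# MedAgentSim (rule-based stand-ins for the LLM agents; hypothetical case).

CASE = {"symptoms": {"cough": "yes", "fever": "yes", "rash": "no"},
        "true_diagnosis": "pneumonia"}

def patient_agent(question):
    # Reveals only what the doctor asks about, mimicking incremental
    # history-taking in a real consultation.
    return CASE["symptoms"].get(question, "unknown")

def doctor_agent():
    # Gathers findings turn by turn, then commits to a diagnosis.
    findings = {q: patient_agent(q) for q in ("cough", "fever", "rash")}
    if findings["cough"] == "yes" and findings["fever"] == "yes":
        return "pneumonia"
    return "undetermined"

def evaluator_agent(diagnosis):
    # Scores the doctor's conclusion against the hidden ground truth.
    return diagnosis == CASE["true_diagnosis"]

diagnosis = doctor_agent()
correct = evaluator_agent(diagnosis)
```

The dynamic element that static benchmarks lack is visible even in this toy version: the doctor agent's conclusion depends on which questions it chooses to ask, not on a fixed question-answer pair.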

A multimodal approach for developing medical diagnoses with AI

MBZUAI ·

MBZUAI doctoral student Mai A. Shaaban and colleagues developed MedPromptX, a system that analyzes chest X-rays and patient data to aid lung disease diagnoses. MedPromptX uses multimodal large language models with visual grounding and few-shot prompting, trained on a new dataset of 6,000 patient records (MedPromptX-VQA) derived from MIMIC-IV and MIMIC-CXR. The system addresses the challenge of incomplete electronic health records by leveraging the knowledge embedded in large language models to interpret lab results. Why it matters: This research advances AI-driven medical diagnostics by integrating diverse data sources and addressing data gaps, potentially leading to quicker and more accurate diagnoses.

Machine learning algorithms for precision medicine

MBZUAI ·

Agathe Guilloux, a professor in Data Science at Evry Paris Saclay University, presented on machine learning algorithms for precision medicine at MBZUAI. Her talk covered the main challenges of precision medicine and how AI can address them. She also discussed algorithms developed for decision support tools. Why it matters: This highlights MBZUAI's role as a platform for discussing advanced AI applications in healthcare, even when the research is not directly conducted in the GCC.

Benchmarking the Medical Understanding and Reasoning of Large Language Models in Arabic Healthcare Tasks

arXiv ·

This paper benchmarks the performance of large language models (LLMs) on Arabic medical natural language processing tasks using the AraHealthQA dataset. The study evaluated LLMs on multiple-choice question answering, fill-in-the-blank, and open-ended question answering. An ensemble taking a majority vote across Gemini Flash 2.5, Gemini Pro 2.5, and GPT o3 achieved 77% accuracy on the multiple-choice questions, while other LLMs reached a BERTScore of 86.44% on the open-ended questions. Why it matters: The research highlights both the potential and limitations of current LLMs in Arabic clinical contexts, providing a baseline for future improvements in Arabic medical AI.
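Majority voting across models is simple to sketch. The tie-break rule below (fall back to the first model's answer) is an assumption; the paper's exact ensemble procedure may differ.

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most common answer across models; when every model
    disagrees, fall back to the first model's answer (assumed tie-break)."""
    top, count = Counter(answers).most_common(1)[0]
    return top if count > 1 else answers[0]

# Hypothetical per-question answers from three models, in a fixed order.
votes = {"q1": ["B", "B", "C"], "q2": ["A", "C", "D"]}
picked = {q: majority_vote(a) for q, a in votes.items()}
```

For q1 two of three models agree on "B", so the ensemble outputs "B"; for q2 all three disagree, so the tie-break returns the first model's "A".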