This study explores fine-tuning large language models (LLMs) for Arabic medical text generation to improve hospital management systems. A unique dataset of medical conversations between patients and doctors was collected from social media and used to fine-tune models including Mistral-7B, LLaMA-2-7B, and GPT-2. The fine-tuned Mistral-7B outperformed the others with a BERTScore F1 of 68.5%. Why it matters: The research demonstrates the potential of generative AI to provide scalable and culturally relevant solutions for healthcare challenges in Arabic-speaking regions.
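The BERTScore F1 reported here comes from greedy token-level matching of contextual embeddings between a generated answer and a reference. As a minimal illustrative sketch (toy embeddings in place of a real BERT-family encoder, not the actual `bert-score` library), the matching logic looks like this:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bertscore_f1(cand_emb, ref_emb):
    """Greedy-matching F1 over token embeddings, as in BERTScore.

    cand_emb / ref_emb: lists of token embedding vectors; in the real
    metric these come from a pretrained contextual encoder.
    """
    # Precision: each candidate token matched to its best reference token.
    precision = sum(max(cosine(c, r) for r in ref_emb)
                    for c in cand_emb) / len(cand_emb)
    # Recall: each reference token matched to its best candidate token.
    recall = sum(max(cosine(r, c) for c in cand_emb)
                 for r in ref_emb) / len(ref_emb)
    return 2 * precision * recall / (precision + recall)
```

Identical candidate and reference embeddings yield an F1 of 1.0; scores such as the 68.5% above reflect partial semantic overlap between model output and reference answers.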
Researchers at MBZUAI have introduced TiBiX, a novel approach leveraging temporal information from previous chest X-rays (CXRs) and reports for bidirectional generation of current CXRs and reports. TiBiX addresses two key challenges: generating current images from previous images and reports, and generating current reports from both previous and current images. The study also introduces a curated temporal benchmark dataset derived from the MIMIC-CXR dataset and achieves state-of-the-art results in report generation.
Researchers address the challenge of limited Arabic medical dialogue data by generating 80,000 synthetic question-answer pairs using ChatGPT-4o and Gemini 2.5 Pro, expanding an initial dataset of 20,000 records. They fine-tuned five LLMs, including Mistral-7B and AraGPT2, and evaluated performance using BERTScore and expert review. Results showed that training with ChatGPT-4o-generated data led to higher F1-scores and fewer hallucinations across models. Why it matters: This demonstrates the potential of synthetic data augmentation to improve domain-specific Arabic language models, particularly for low-resource medical NLP applications.
The paper introduces MedPromptX, a clinical decision support system using multimodal large language models (MLLMs), few-shot prompting (FP), and visual grounding (VG) for chest X-ray diagnosis, integrating imagery with EHR data. MedPromptX refines few-shot data dynamically for real-time adjustment to new patient scenarios and narrows the search area in X-ray images. The study introduces MedPromptX-VQA, a new visual question answering dataset, and demonstrates state-of-the-art performance with an 11% improvement in F1-score compared to baselines.
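The dynamic refinement of few-shot data amounts to selecting, at inference time, the stored examples most relevant to the incoming patient case. The sketch below illustrates the idea with plain token-overlap similarity on text; MedPromptX itself operates over multimodal (image plus EHR) features, so both the similarity function and the example pool here are simplifying assumptions:

```python
def jaccard(a, b):
    """Token-overlap similarity between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def select_few_shots(query, pool, k=2):
    """Pick the k pool examples most similar to the current query.

    pool: list of (case_description, label) pairs.  The selected pairs
    would be prepended to the prompt as few-shot demonstrations.
    """
    ranked = sorted(pool, key=lambda ex: jaccard(query, ex[0]), reverse=True)
    return ranked[:k]
```

Re-ranking the pool per query keeps the prompt short while adapting the demonstrations to each new patient scenario, which is the behavior the paper attributes to its few-shot prompting component.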
MBZUAI researchers introduce XrayGPT, a conversational medical vision-language model for analyzing chest radiographs and answering open-ended questions. The model aligns a medical visual encoder (MedCLIP) with a fine-tuned large language model (Vicuna) using a linear transformation. To enhance performance, the LLM was fine-tuned on 217k interactive summaries generated from radiology reports.
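The linear transformation mentioned above is a single learned affine map that projects visual features into the LLM's embedding space. A minimal sketch, with illustrative (not XrayGPT's actual) dimensions:

```python
def linear_project(features, weight, bias):
    """Project a visual feature vector into the LLM embedding space: y = W x + b.

    features: in_dim vector from the visual encoder;
    weight:   out_dim x in_dim matrix;  bias: out_dim vector.
    In XrayGPT a layer of this form bridges the visual encoder's output
    and the language model's token-embedding space.
    """
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, features)) + b
            for row, b in zip(weight, bias)]
```

Because the bridge is a single linear layer, training it is cheap relative to the encoder and the LLM, which is why this alignment style is common in vision-language models of this family.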
This paper introduces MOTOR, a multimodal retrieval and re-ranking approach for medical visual question answering (MedVQA) that uses grounded captions and optimal transport to capture relationships between queries and retrieved context, leveraging both textual and visual information. MOTOR identifies clinically relevant contexts to augment VLM input, achieving higher accuracy on MedVQA datasets. Empirical analysis shows MOTOR outperforms state-of-the-art methods by an average of 6.45%.
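The optimal-transport step in approaches like MOTOR computes a soft matching between query elements and retrieved-context elements given a pairwise cost matrix. A generic sketch of entropic-regularized optimal transport via Sinkhorn iterations (the cost matrix and marginals here are placeholders; MOTOR's actual costs come from grounded-caption and visual embeddings):

```python
from math import exp

def sinkhorn(cost, a, b, eps=0.1, iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    cost: n x m pairwise cost matrix; a, b: source/target marginals
    (each summing to 1).  Returns a transport plan whose row and column
    sums approximately match a and b, with mass favoring low-cost pairs.
    """
    K = [[exp(-c / eps) for c in row] for row in cost]
    u = [1.0] * len(a)
    v = [1.0] * len(b)
    for _ in range(iters):
        # Alternately rescale rows and columns to match the marginals.
        u = [a[i] / sum(K[i][j] * v[j] for j in range(len(b)))
             for i in range(len(a))]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(len(a)))
             for j in range(len(b))]
    return [[u[i] * K[i][j] * v[j] for j in range(len(b))]
            for i in range(len(a))]
```

The resulting plan can then score each retrieved context by how cheaply its elements transport onto the query's, giving the re-ranking signal that decides which contexts augment the VLM input.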
A new study introduces Sporo AraSum, a language model designed for Arabic clinical documentation, and compares it to JAIS using synthetic datasets and modified PDQI-9 metrics. Sporo AraSum significantly outperformed JAIS in quantitative AI metrics and qualitative attributes related to accuracy, utility, and cultural competence. The model addresses the nuances of Arabic while reducing AI hallucinations, making it suitable for Arabic-speaking healthcare. Why it matters: The model offers a more culturally and linguistically sensitive solution for Arabic clinical documentation, potentially improving healthcare workflows and patient outcomes in the region.