GCC AI Research

Arabic Large Language Models for Medical Text Generation

arXiv

This study explores fine-tuning large language models (LLMs) for Arabic medical text generation to improve hospital management systems. A unique dataset of medical conversations between patients and doctors was collected from social media and used to fine-tune models including Mistral-7B, LLaMA-2-7B, and GPT-2. The fine-tuned Mistral-7B model outperformed the others, achieving a BERT F1-score of 68.5%. Why it matters: The research demonstrates the potential of generative AI to provide scalable and culturally relevant solutions for healthcare challenges in Arabic-speaking regions.
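The BERT F1-score used to compare these models pairs each generated token with its most similar reference token in embedding space, then combines the resulting precision and recall. The paper does not give its scoring code, so the following is a minimal pure-Python sketch of that greedy-matching idea; the function names and the toy 2-dimensional embeddings are illustrative stand-ins for real contextual embeddings from a BERT-family model.

```python
import math


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def bertscore_f1(cand_embs, ref_embs):
    """Greedy-matching F1 in the style of BERTScore.

    Each candidate token is matched to its most similar reference token
    (precision), and each reference token to its most similar candidate
    token (recall); the two are combined into an F1 score.
    """
    precision = sum(max(cosine(c, r) for r in ref_embs)
                    for c in cand_embs) / len(cand_embs)
    recall = sum(max(cosine(r, c) for c in cand_embs)
                 for r in ref_embs) / len(ref_embs)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With identical candidate and reference embeddings the score is 1.0; orthogonal embeddings score 0.0. In practice one would use a library such as `bert-score` with real contextual embeddings rather than this toy version.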

Severity-Aware Weighted Loss for Arabic Medical Text Generation

arXiv

Researchers proposed a severity-aware weighted loss method to fine-tune Arabic language models for medical text generation, prioritizing severe clinical cases. This approach uses soft severity probabilities, derived from an AraBERT-based classifier, to dynamically scale token-level loss contributions during optimization on the MAQA dataset. The method consistently improved performance across ten Arabic LLMs, with AraGPT2-Base's score rising from 54.04% to 66.14% and AraGPT2-Medium's from 59.16% to 67.18%. Why it matters: This novel fine-tuning strategy addresses a critical limitation in medical AI by enhancing the safety and reliability of Arabic medical large language models, particularly in high-stakes clinical scenarios.
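The core mechanism, scaling each token's loss by a classifier-derived severity probability, can be sketched in a few lines. The paper's exact weighting scheme is not reproduced here; this is a hedged, pure-Python illustration in which `severity_weighted_loss`, the scaling form `1 + alpha * p`, and the hyperparameter `alpha` are all assumptions, and the per-token negative log-likelihoods stand in for values a framework like PyTorch would compute.

```python
def severity_weighted_loss(token_nlls, severity_probs, alpha=1.0):
    """Weighted-average loss that up-weights tokens from severe cases.

    token_nlls     -- per-token negative log-likelihoods from the LM
    severity_probs -- soft severity probabilities in [0, 1] for each token
                      (in the paper these come from an AraBERT classifier)
    alpha          -- assumed scaling hyperparameter: weight = 1 + alpha * p
    """
    weights = [1.0 + alpha * p for p in severity_probs]
    weighted = sum(w * nll for w, nll in zip(weights, token_nlls))
    return weighted / sum(weights)
```

When every severity probability is zero, this reduces to the ordinary mean loss; as probabilities approach one, severe-case tokens contribute up to `1 + alpha` times as much, pushing optimization toward getting high-stakes content right.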