GCC AI Research

Advancing Complex Medical Communication in Arabic with Sporo AraSum: Surpassing Existing Large Language Models

arXiv · Significant research

Summary

A new study introduces Sporo AraSum, a language model designed for Arabic clinical documentation, and compares it to JAIS using synthetic datasets and modified PDQI-9 metrics. Sporo AraSum significantly outperformed JAIS in quantitative AI metrics and qualitative attributes related to accuracy, utility, and cultural competence. The model addresses the nuances of Arabic while reducing AI hallucinations, making it suitable for Arabic-speaking healthcare. Why it matters: The model offers a more culturally and linguistically sensitive solution for Arabic clinical documentation, potentially improving healthcare workflows and patient outcomes in the region.


Related

Benchmarking the Medical Understanding and Reasoning of Large Language Models in Arabic Healthcare Tasks

arXiv

This paper benchmarks the performance of large language models (LLMs) on Arabic medical natural language processing tasks using the AraHealthQA dataset. The study evaluated LLMs in multiple-choice question answering, fill-in-the-blank, and open-ended question answering scenarios. The results showed that a majority voting solution using Gemini Flash 2.5, Gemini Pro 2.5, and GPT o3 achieved 77% accuracy on MCQs, while other LLMs achieved a BERTScore of 86.44% on open-ended questions. Why it matters: The research highlights both the potential and limitations of current LLMs in Arabic clinical contexts, providing a baseline for future improvements in Arabic medical AI.
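The majority-voting setup described above can be sketched as a simple ensemble over per-item answers. This is a minimal illustration, not the paper's implementation: the model names are taken from the summary, but the prediction values and the `majority_vote` helper are hypothetical.

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most common answer; ties broken by first occurrence."""
    counts = Counter(answers)
    best = max(counts.values())
    for a in answers:
        if counts[a] == best:
            return a

# Hypothetical per-model predictions for three MCQ items (choices A-D).
predictions = {
    "gemini-flash-2.5": ["A", "C", "B"],
    "gemini-pro-2.5":   ["A", "D", "B"],
    "gpt-o3":           ["B", "D", "B"],
}

# Vote item by item across the three models.
ensemble = [majority_vote(list(votes)) for votes in zip(*predictions.values())]
print(ensemble)  # ['A', 'D', 'B']
```

With three voters and four answer choices, a clear majority exists whenever at least two models agree; the fallback here simply keeps the first model's answer on a three-way split.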