This paper introduces a predictive analysis of Arabic court decisions, using 10,813 real commercial court cases. The study evaluates LLaMA-7b, JAIS-13b, and GPT-3.5-turbo under zero-shot, one-shot, and fine-tuning settings, also experimenting with summarization and translation. GPT-3.5 models significantly outperformed the others, exceeding JAIS performance by 50%, while the study also demonstrated the unreliability of most automated metrics. Why it matters: This research bridges computational linguistics and Arabic legal analytics, offering insights for enhancing judicial processes and legal strategies in the Arabic-speaking world.
The paper introduces a benchmark of 1,000 multiple-choice questions to evaluate LLMs on Islamic inheritance law ('ilm al-mawarith). Seven LLMs were tested, with o3 and Gemini 2.5 achieving over 90% accuracy, while ALLaM, Fanar, LLaMA, and Mistral scored below 50%. Error analysis revealed limitations in handling structured legal reasoning. Why it matters: This research highlights the challenges and opportunities in adapting LLMs to complex, culturally specific legal domains like Islamic jurisprudence.
Justice Connect, an Australian charity, collaborated with MBZUAI's Prof. Timothy Baldwin to improve their legal intake tool using NLP. The tool helps route legal requests, but users struggled to identify the relevant area of law, leading to delays and frustration. By applying NLP, the collaboration aims to help users more easily navigate the tool and access appropriate legal resources. Why it matters: This project demonstrates how NLP can be applied to improve access to justice and address unmet legal needs, particularly for those unfamiliar with legal terminology.
Researchers introduce ALARB, a new benchmark for evaluating reasoning in Arabic LLMs, built from 13K Saudi commercial court cases. The benchmark includes tasks such as verdict prediction, reasoning-chain completion, and identification of relevant regulations. Instruction-tuning a 12B-parameter model on ALARB achieves performance comparable to GPT-4o on verdict prediction and generation.
Researchers introduce ArabLegalEval, a multitask benchmark dataset for assessing Arabic legal knowledge in LLMs. The dataset contains tasks sourced from Saudi legal documents and synthesized questions, drawing inspiration from MMLU and LegalBench. Experiments benchmarked models including GPT-4 and Jais, exploring in-context learning and various evaluation methods. Why it matters: This resource should help accelerate AI research and evaluation in the Arabic legal domain, where datasets are lacking.
This paper introduces an AI-driven decision support system for green hydrogen investment in Oman, specifically for the Duqm R3 auction. The system uses publicly available meteorological data to predict maintenance pressure on hydrogen infrastructure, creating a Maintenance Pressure Index (MPI). This tool supports regulatory oversight and operational decision-making by enabling temporal benchmarking against performance claims.
Researchers introduce a new task for generating question-passage pairs to aid in developing regulatory question-answering (QA) systems. The ObliQA dataset, comprising 27,869 questions from Abu Dhabi Global Markets (ADGM) financial regulations, is presented. A baseline Regulatory Information Retrieval and Answer Generation (RIRAG) system is designed and evaluated using the RePASs metric.