The article discusses the rise of large language models like ChatGPT and Gemini, highlighting their role in driving the first wave of AI development. Why it matters: Though short on specifics, the article reflects continued interest in the impact and future of LLMs, a key area of AI research and development.
This paper introduces a predictive analysis of Arabic court decisions, utilizing 10,813 real commercial court cases. The study evaluates the LLaMA-7b, JAIS-13b, and GPT-3.5-turbo models under zero-shot, one-shot, and fine-tuned training paradigms, also experimenting with summarization and translation. The GPT-3.5 models significantly outperformed the others, exceeding the JAIS models' performance by 50%, and the study also demonstrated the unreliability of most automated metrics. Why it matters: This research bridges computational linguistics and Arabic legal analytics, offering insights for enhancing judicial processes and legal strategies in the Arabic-speaking world.
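The zero-shot versus one-shot paradigms mentioned above differ only in whether the prompt includes a worked example before the target case. A minimal sketch of that difference, using an entirely hypothetical case and prompt wording (not taken from the paper):

```python
# Hypothetical demonstration pair for the one-shot setting.
EXAMPLE_CASE = "Case facts: a supplier delivered goods three months late."
EXAMPLE_RULING = "Ruling: partial compensation awarded to the buyer."

def build_prompt(case_text: str, shots: int = 0) -> str:
    """Build a zero-shot (shots=0) or one-shot (shots=1) prediction prompt."""
    instruction = "Predict the court's ruling for the following commercial case.\n\n"
    # In the one-shot setting, prepend a single solved example.
    demo = f"{EXAMPLE_CASE}\n{EXAMPLE_RULING}\n\n" if shots >= 1 else ""
    return instruction + demo + f"Case facts: {case_text}\nRuling:"

zero_shot = build_prompt("a tenant disputes early contract termination")
one_shot = build_prompt("a tenant disputes early contract termination", shots=1)
```

The fine-tuned paradigm, by contrast, updates the model's weights on training cases rather than changing the prompt.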
This paper presents a UI-level evaluation of ALLaM-34B, an Arabic-centric LLM developed by SDAIA and deployed in the HUMAIN Chat service. The evaluation used a prompt pack spanning various Arabic dialects, code-switching, reasoning, and safety, with outputs scored by frontier LLM judges. Results indicate strong performance in generation, code-switching, MSA handling, reasoning, and improved dialect fidelity, positioning ALLaM-34B as a robust Arabic LLM suitable for real-world use.
MBZUAI researchers introduce LLM-BabyBench, a benchmark suite for evaluating grounded planning and reasoning in LLMs. The suite, built on a textual adaptation of the BabyAI grid world, assesses LLMs on predicting action consequences, generating action sequences, and decomposing instructions. Datasets, evaluation harness, and metrics are publicly available to facilitate reproducible assessment.
The article provides a basic overview of large language models (LLMs), explaining their functionality and applications. LLMs are AI systems that process and generate human-like text using transformer architecture, trained on vast datasets to predict the next word in a sequence. The piece differentiates between general-purpose, task-specific, and multimodal models, as well as closed-source and open-source LLMs. Why it matters: LLMs are foundational for advancements in Arabic NLP, as evidenced by models like MBZUAI's Jais, and understanding their mechanics is crucial for regional AI development.