GCC AI Research

LLMeBench: A Flexible Framework for Accelerating LLMs Benchmarking

arXiv · Notable

Summary

Researchers have introduced LLMeBench, a customizable framework for evaluating large language models (LLMs) across diverse NLP tasks and languages. The framework features generic dataset loaders, multiple model providers, and pre-implemented evaluation metrics, supporting in-context learning with zero- and few-shot settings. LLMeBench was tested on 31 unique NLP tasks using 53 datasets across 90 experimental setups with 296K data points, and the code has been open-sourced. Why it matters: The framework's flexibility and ease of customization should accelerate LLM benchmarking, especially for Arabic and other low-resource languages.
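To make the framework's moving parts concrete, here is a minimal sketch of how a pluggable benchmarking pipeline of this kind fits together: a generic dataset loader, a swappable model provider, a prompt builder supporting zero- and few-shot modes, and a metric. All class and function names below are hypothetical illustrations, not LLMeBench's actual API.

```python
# Hypothetical sketch of a pluggable LLM benchmarking pipeline.
# None of these names come from LLMeBench itself; they only
# illustrate the kind of components such a framework composes.
from dataclasses import dataclass
from typing import Callable, Iterable, List, Sequence


@dataclass
class Sample:
    input_text: str
    label: str


class DatasetLoader:
    """Generic loader: yields (input, gold label) pairs for any task."""

    def __init__(self, samples: Iterable[Sample]):
        self.samples = list(samples)

    def __iter__(self):
        return iter(self.samples)


def run_benchmark(
    loader: DatasetLoader,
    model_fn: Callable[[str], str],          # swappable model provider
    prompt_fn: Callable[[str], str],          # task-specific prompt template
    metric_fn: Callable[[List[str], List[str]], float],
    few_shot_examples: Sequence[Sample] = (),  # empty tuple => zero-shot
) -> float:
    """Run one experimental setup and return the metric score."""
    preds, golds = [], []
    for sample in loader:
        # Prepend in-context demonstrations when few-shot examples are given.
        demos = "\n".join(
            f"Input: {s.input_text}\nLabel: {s.label}" for s in few_shot_examples
        )
        prompt = (demos + "\n" if demos else "") + prompt_fn(sample.input_text)
        preds.append(model_fn(prompt))
        golds.append(sample.label)
    return metric_fn(preds, golds)


def accuracy(preds: List[str], golds: List[str]) -> float:
    """Simple pre-implemented metric: exact-match accuracy."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)
```

Because the loader, model, prompt, and metric are all injected, swapping any one of them (a new dataset, a different provider, a few-shot variant) yields a new experimental setup without touching the loop, which is how a single framework can scale to dozens of tasks and hundreds of configurations.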

Keywords

LLM · benchmarking · NLP · framework · Arabic


Related

LAraBench: Benchmarking Arabic AI with Large Language Models

arXiv

LAraBench introduces a benchmark for Arabic NLP and speech processing, evaluating LLMs like GPT-3.5-turbo, GPT-4, BLOOMZ, Jais-13b-chat, Whisper, and USM. The benchmark covers 33 tasks across 61 datasets, using zero-shot and few-shot learning techniques. Results show that task-specific SOTA models generally outperform LLMs in zero-shot settings, though larger LLMs with few-shot learning narrow the gap. Why it matters: This benchmark helps assess and improve the performance of LLMs on Arabic language tasks, highlighting areas where specialized models still excel.

When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards

arXiv

Researchers from the National Center for AI in Saudi Arabia investigated the sensitivity of Large Language Model (LLM) leaderboards to minor benchmark perturbations. They found that small changes, like choice order, can shift rankings by up to 8 positions. The study recommends hybrid scoring and warns against over-reliance on simple benchmark evaluations, providing code for further research.
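The choice-order sensitivity the study describes can be sketched as a small consistency check: permute the order of a question's multiple-choice options and measure how often the model's letter answer maps back to the same underlying option. The function and formatting below are illustrative assumptions, not the paper's actual evaluation code.

```python
# Hedged sketch of a choice-order perturbation check: a fully
# order-invariant model picks the same underlying option no matter
# how the choices are shuffled; a position-biased model does not.
import itertools


def format_mcq(question: str, choices) -> str:
    """Render a question with lettered choices (A., B., C., ...)."""
    letters = "ABCD"
    lines = [question] + [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    return "\n".join(lines)


def answer_consistency(model_fn, question: str, choices) -> float:
    """Fraction of choice orderings on which the model selects its
    most frequent underlying option. 1.0 means order-invariant."""
    picked = []
    for perm in itertools.permutations(choices):
        letter = model_fn(format_mcq(question, perm))  # e.g. "A"
        picked.append(perm["ABCD".index(letter)])      # map letter -> option
    most_common = max(set(picked), key=picked.count)
    return picked.count(most_common) / len(picked)
```

A model that always answers "A" regardless of content scores 1/3 on a three-choice question (each option sits in slot A for a third of the orderings), which is exactly the kind of position bias that can reshuffle leaderboard rankings.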

LLM-BABYBENCH: Understanding and Evaluating Grounded Planning and Reasoning in LLMs

arXiv

MBZUAI researchers introduce LLM-BabyBench, a benchmark suite for evaluating grounded planning and reasoning in LLMs. The suite, built on a textual adaptation of the BabyAI grid world, assesses LLMs on predicting action consequences, generating action sequences, and decomposing instructions. Datasets, evaluation harness, and metrics are publicly available to facilitate reproducible assessment.

SocialMaze: A Benchmark for Evaluating Social Reasoning in Large Language Models

arXiv

MBZUAI researchers introduce SocialMaze, a new benchmark for evaluating social reasoning capabilities in large language models (LLMs). SocialMaze includes six diverse tasks across social reasoning games, daily-life interactions, and digital community platforms, emphasizing deep reasoning, dynamic interaction, and information uncertainty. Experiments show that LLMs vary in their handling of dynamic interactions and degrade under uncertainty, but can be improved via fine-tuning on curated reasoning examples.