GCC AI Research


Results for "CamelEval"

CamelEval: Advancing Culturally Aligned Arabic Language Models and Benchmarks

arXiv

The paper introduces Juhaina, a 9.24B-parameter Arabic-English bilingual LLM trained with an 8,192-token context window. It identifies limitations in the Open Arabic LLM Leaderboard (OALL) and proposes a new benchmark, CamelEval, for more comprehensive evaluation. Juhaina outperforms models such as Llama and Gemma at generating helpful Arabic responses and capturing cultural nuance. Why it matters: this culturally aligned LLM and its accompanying benchmark could significantly advance Arabic NLP and broaden AI access for Arabic speakers.

Spot-the-Camel: Computer Vision for Safer Roads

arXiv

Researchers in Saudi Arabia are applying computer vision techniques to reduce Camel-Vehicle Collisions (CVCs). They tested object detection models including CenterNet, EfficientDet, Faster R-CNN, SSD, and YOLOv8 on the task, finding YOLOv8 to be the most accurate and efficient. Future work will focus on developing a system to improve road safety in rural areas.
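Detector comparisons like this one are typically scored by intersection-over-union (IoU) between predicted and ground-truth boxes. As a minimal sketch of that standard metric (the boxes below are hypothetical, not from the paper, and use the common `(x1, y1, x2, y2)` convention):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical example: a predicted camel box vs. the labeled ground truth.
pred, truth = (100, 50, 300, 200), (120, 60, 310, 210)
overlap = iou(pred, truth)
# A detection usually counts as correct when IoU exceeds a threshold such as 0.5.
```

Accuracy comparisons between models like YOLOv8 and CenterNet are then aggregated from such per-box matches (e.g. as mean average precision).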

Computer Vision for a Camel-Vehicle Collision Mitigation System

arXiv

Researchers are exploring computer vision models to mitigate Camel-Vehicle Collisions (CVCs) in Saudi Arabia, which have a high fatality rate. They tested CenterNet, EfficientDet, Faster R-CNN, and SSD for camel detection, finding CenterNet to be the most accurate and efficient. Future work involves developing a comprehensive system to enhance road safety in rural areas.

EvoLMM: Self-Evolving Large Multimodal Models with Continuous Rewards

arXiv

Researchers at MBZUAI have introduced EvoLMM, a self-evolving framework for large multimodal models that enhances reasoning capabilities without human-annotated data or reward distillation. EvoLMM uses two cooperative agents, a Proposer and a Solver, which generate image-grounded questions and solve them through internal consistency, using a continuous self-rewarding process. Evaluations using Qwen2.5-VL as the base model showed performance gains of up to 3% on multimodal math-reasoning benchmarks like ChartQA, MathVista, and MathVision using only raw training images.
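The Proposer/Solver loop can be sketched roughly as follows. This is a simplified illustration, not the paper's implementation: `propose_question` and `sample_answers` are stand-ins for the two LMM agents, and the continuous reward is taken here to be the agreement rate of the solver's most common answer.

```python
import random
from collections import Counter

def propose_question(image):
    # Stand-in for the Proposer agent: generate an image-grounded question.
    return f"How many bars are in {image}?"

def sample_answers(question, n=8):
    # Stand-in for the Solver agent: draw n stochastic answer samples.
    return [random.choice(["3", "3", "3", "4"]) for _ in range(n)]

def consistency_reward(answers):
    """Continuous self-reward: agreement rate of the most common answer.
    High when the solver is internally consistent, low when it is uncertain."""
    counts = Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / len(answers)

random.seed(0)
question = propose_question("chart_001.png")
answers = sample_answers(question)
pseudo_label, reward = consistency_reward(answers)
# reward lies in (0, 1]; in the full framework it would drive policy
# updates for both agents without any human-annotated labels.
```

The key design point the snippet illustrates is that the reward signal is continuous and self-generated, so training can proceed on raw images alone.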

When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards

arXiv

Researchers from the National Center for AI in Saudi Arabia investigated the sensitivity of Large Language Model (LLM) leaderboards to minor benchmark perturbations. They found that small changes, such as reordering a question's answer choices, can shift model rankings by up to 8 positions. The study recommends hybrid scoring, warns against over-reliance on simple benchmark evaluations, and releases code for further research.
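The choice-order effect is easy to demonstrate: a model with even a mild positional bias scores differently when the same question is presented with reordered options. A toy sketch (the biased scorer and the MCQ item are hypothetical, not the study's code):

```python
# One MCQ item: the options plus the index of the correct one.
item = {"options": ["Riyadh", "Jeddah", "Dammam", "Mecca"], "answer": 0}

def biased_model(options):
    """Toy model with positional bias: it recognizes the answer 'Riyadh'
    only when it appears among the first two options, else it picks A."""
    for i, opt in enumerate(options[:2]):
        if opt == "Riyadh":
            return i
    return 0

def score(item, order):
    # Present the options in the given order and check the model's pick.
    options = [item["options"][i] for i in order]
    correct_pos = order.index(item["answer"])
    return biased_model(options) == correct_pos

original = score(item, [0, 1, 2, 3])  # correct answer listed first
shuffled = score(item, [3, 2, 1, 0])  # same item, choices reversed
# original is True, shuffled is False: identical knowledge, different score.
```

Aggregated over a benchmark, biases like this are what let a cosmetic perturbation reshuffle a leaderboard.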