GCC AI Research

The diagnosis game: A simulated hospital environment to measure AI agents’ diagnostic abilities

MBZUAI · Significant research

Summary

MBZUAI researchers developed MedAgentSim, a simulated hospital environment to evaluate AI diagnostic abilities. The simulation uses LLM-powered agents to mimic doctor-patient conversations, providing a dynamic assessment of diagnostic skills. The system includes doctor, patient, and evaluator agents that interact within the simulated hospital, making real-time decisions. Why it matters: This research offers a more realistic evaluation of AI in clinical settings, addressing limitations of current benchmarks and potentially improving AI's use in healthcare.
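The summary describes a three-agent architecture. As a rough illustration of how such a doctor-patient-evaluator loop could be wired up, consider the sketch below; the prompts, the `chat` helper, and the termination rule are assumptions for this sketch, not MedAgentSim's actual implementation.

```python
# Minimal sketch of a MedAgentSim-style consultation loop (illustrative only;
# the real system's prompts, APIs, and stopping logic are not given in the
# summary). `chat` is a hypothetical helper wrapping any chat-completion LLM.

def chat(system_prompt: str, history: list[dict]) -> str:
    """Hypothetical LLM call: returns the next message for the given role."""
    raise NotImplementedError("plug in your LLM client here")

def run_consultation(case: dict, max_turns: int = 10) -> dict:
    doctor_sys = "You are a physician. Ask questions, then state 'DIAGNOSIS: ...'."
    patient_sys = (
        f"You are a patient with this hidden case: {case['facts']}. "
        "Answer only what is asked."
    )
    history: list[dict] = []
    diagnosis = None
    for _ in range(max_turns):
        question = chat(doctor_sys, history)
        history.append({"role": "doctor", "text": question})
        if "DIAGNOSIS:" in question:
            diagnosis = question.split("DIAGNOSIS:", 1)[1].strip()
            break
        answer = chat(patient_sys, history)
        history.append({"role": "patient", "text": answer})
    # A third, evaluator agent grades the outcome against the ground-truth label.
    verdict = chat(
        "You are an evaluator. Grade the diagnosis against the ground truth.",
        history + [{"role": "meta", "text": f"ground truth: {case['label']}"}],
    )
    return {"diagnosis": diagnosis, "verdict": verdict, "transcript": history}
```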


Related

A new stress test for AI agents that plan, look and click

MBZUAI

MBZUAI researchers won second place at the AgentX Competition at UC Berkeley for their benchmark measuring AI agents' reasoning across images, comparisons, and video. The Agent-X dataset includes 828 tasks across six domains, requiring agents to use 14 executable tools without explicit instructions. Agent-X analyzes the agent's full reasoning trajectory, unlike typical evaluations that focus only on final answers. Why it matters: The benchmark exposes limitations in current multimodal AI agents and provides a more rigorous evaluation framework for real-world applications in the region and beyond.
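To make the trajectory-versus-final-answer distinction concrete, here is a hedged sketch of step-level scoring; the step fields and the 50/50 weighting are illustrative inventions, not Agent-X's published metric.

```python
# Illustrative sketch of trajectory-level evaluation: grade every intermediate
# reasoning/tool step, not only the final answer. Field names and weights are
# assumptions for this example, not the Agent-X paper's actual scheme.

from dataclasses import dataclass

@dataclass
class Step:
    thought: str        # the agent's stated reasoning for this step
    tool: str           # the tool the agent chose to invoke
    correct_tool: str   # ground-truth tool annotation for this step
    grounded: bool      # does the thought cite actual visual evidence?

def trajectory_score(steps: list[Step], final_correct: bool) -> float:
    """Average per-step credit combined with final-answer credit."""
    if not steps:
        return 1.0 if final_correct else 0.0
    step_credit = sum(
        0.5 * (s.tool == s.correct_tool) + 0.5 * s.grounded for s in steps
    ) / len(steps)
    return 0.5 * step_credit + 0.5 * float(final_correct)
```

Under a scheme like this, an agent that guesses the right answer through a flawed chain of steps scores lower than one whose whole trajectory holds up.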

Can LLMs reason? New benchmark puts models to the test

MBZUAI

MBZUAI researchers created a new benchmark dataset called TextGames to evaluate the reasoning abilities of LLMs. The dataset uses simple, text-based games requiring skills like pattern recognition and logical thinking. LLMs struggled with the hardest questions, suggesting limitations in their reasoning capabilities despite advancements in language understanding. Why it matters: This research highlights the need for specialized reasoning models and benchmarks that go beyond memorization to truly test AI's problem-solving abilities.
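What makes text-based games attractive as a benchmark is that correctness can be checked by rules rather than by another model. A toy example in that spirit (the specific puzzle below is made up for illustration, not an actual TextGames task):

```python
# Illustrative sketch of a TextGames-style task: pose a constraint-based
# puzzle and verify a model's free-text answer programmatically.

import re

def make_puzzle() -> dict:
    # Ask for a 5-letter word that starts with 's' and ends with 'e'.
    return {
        "prompt": "Give a 5-letter English word that starts with 's' and ends with 'e'.",
        "pattern": r"s[a-z]{3}e",
    }

def verify(puzzle: dict, answer: str) -> bool:
    """Rule-based check: correctness is decidable without another LLM."""
    return re.fullmatch(puzzle["pattern"], answer.strip().lower()) is not None

puzzle = make_puzzle()
assert verify(puzzle, "stone")      # satisfies every constraint
assert not verify(puzzle, "star")   # wrong length, wrong ending
```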

Agent-X: Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks

arXiv

MBZUAI introduces Agent-X, a benchmark for evaluating multi-step reasoning in vision-centric agents across real-world, multimodal settings. Agent-X includes 828 tasks with diverse visual contexts spanning six environments, requiring tool use and stepwise decision-making. Experiments show that current LMMs (large multimodal models) struggle with multi-step vision tasks, achieving less than 50% success and highlighting room for improvement in LMM reasoning and tool use.
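The observe-think-act loop such tasks demand might look roughly like the sketch below; the tool registry and the `propose_action` helper are hypothetical stand-ins, not the benchmark's actual interface.

```python
# Minimal sketch of an agent loop for Agent-X-style tasks: the agent picks
# from a registry of executable tools with no explicit instructions about
# which to use. Tool names and `propose_action` are assumptions.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    # Stand-ins for executable tools (e.g., OCR or object detection).
    "ocr": lambda arg: f"<text read from {arg}>",
    "object_detect": lambda arg: f"<objects found in {arg}>",
}

def propose_action(task: str, history: list[str]) -> tuple[str, str]:
    """Hypothetical model call: returns (tool_name, argument) or ('answer', text)."""
    raise NotImplementedError("plug in your multimodal model here")

def run_agent(task: str, max_steps: int = 8) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = propose_action(task, history)
        if tool == "answer":
            return arg                      # final answer ends the episode
        observation = TOOLS[tool](arg)      # execute the chosen tool
        history.append(f"{tool}({arg}) -> {observation}")
    return "no answer within step budget"
```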