GCC AI Research

Results for "agent-based model"

From Individual to Society: Social Simulation Driven by LLM-based Agent

MBZUAI ·

Fudan University's Zhongyu Wei presented research on LLM-driven social simulation, covering both individual-level behavior and large-scale social movements. Wei directs the Data Intelligence and Social Computing Lab (Fudan DISC) and has published extensively on multimodal large models and social computing. His work includes the Volcano multimodal model, DISC-MedLLM, and ElectionSim. Why it matters: Using LLMs for social simulation could provide new tools for understanding and potentially predicting social dynamics in the Arab world.

Better models show how infectious diseases spread

KAUST ·

KAUST researchers developed a model that couples temporal SIR compartmental modeling with a spatio-temporal point-process approach, while accounting for age-specific contact patterns. A two-step framework models infection locations over time for different age groups. In simulations and a COVID-19 case study in Cali, Colombia, the model showed better predictive accuracy than existing approaches. Why it matters: This model can assist decision-makers in identifying high-risk locations and vulnerable populations for better disease control strategies in the region and globally.
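The KAUST paper's full spatio-temporal point-process formulation is not reproduced in the summary above; as orientation, here is a minimal sketch of only the temporal SIR backbone it builds on, using forward-Euler integration (the function name, parameters, and rates are illustrative, not taken from the paper):

```python
def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Integrate the classic SIR compartment ODEs with forward Euler.

    beta: transmission rate, gamma: recovery rate.
    Returns the (S, I, R) trajectory sampled every dt days.
    """
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r  # total population, conserved by the dynamics
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Illustrative parameters: R0 = beta / gamma = 3 in a population of 1,000.
hist = simulate_sir(beta=0.3, gamma=0.1, s0=990, i0=10, r0=0, days=160)
peak_infected = max(i for _, i, _ in hist)
```

The KAUST model extends this kind of temporal backbone with a point process over space-time and age-stratified contact matrices, which is what lets it flag specific high-risk locations rather than only aggregate case counts.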

The diagnosis game: A simulated hospital environment to measure AI agents’ diagnostic abilities

MBZUAI ·

MBZUAI researchers developed MedAgentSim, a simulated hospital environment to evaluate AI diagnostic abilities. The simulation uses LLM-powered agents to mimic doctor-patient conversations, providing a dynamic assessment of diagnostic skills. The system includes doctor, patient, and evaluator agents that interact within the simulated hospital, making real-time decisions. Why it matters: This research offers a more realistic evaluation of AI in clinical settings, addressing limitations of current benchmarks and potentially improving AI's use in healthcare.
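MedAgentSim's actual implementation is not shown in the summary above; the following is a hypothetical sketch of the doctor-patient-evaluator loop such a system might use, with scripted stand-ins replacing the LLM-powered agents (all class names, replies, and the toy diagnosis rule are illustrative assumptions):

```python
class PatientAgent:
    """Stand-in for an LLM patient: reveals one scripted symptom per question."""
    def __init__(self, symptoms):
        self.symptoms = list(symptoms)

    def reply(self, question):
        return self.symptoms.pop(0) if self.symptoms else "No other complaints."


class DoctorAgent:
    """Stand-in for an LLM doctor: asks questions, then diagnoses from notes."""
    def __init__(self, max_turns=3):
        self.max_turns = max_turns
        self.notes = []

    def run_consultation(self, patient):
        for turn in range(self.max_turns):
            answer = patient.reply(f"Question {turn + 1}: describe a symptom.")
            self.notes.append(answer)
        # Toy keyword rule standing in for LLM diagnostic reasoning.
        return "flu" if "fever" in " ".join(self.notes) else "unknown"


class EvaluatorAgent:
    """Scores the doctor's diagnosis against the case's ground truth."""
    def score(self, diagnosis, ground_truth):
        return 1.0 if diagnosis == ground_truth else 0.0


patient = PatientAgent(["fever", "cough", "fatigue"])
doctor = DoctorAgent()
diagnosis = doctor.run_consultation(patient)
score = EvaluatorAgent().score(diagnosis, "flu")
```

The point of the loop structure is that diagnosis quality is measured through multi-turn interaction rather than a single static question, which is what distinguishes this style of evaluation from conventional medical QA benchmarks.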

Governing What the EU AI Act Excludes: Accountability for Autonomous AI Agents in Smart City Critical Infrastructure

arXiv ·

This research paper identifies an accountability deficit for autonomous AI agents operating in smart city critical infrastructure under the EU AI Act, noting that specific provisions exclude safety-component AI from certain explanation rights and impact assessments. It proposes AgentGov-SC, a three-layer governance architecture specifying 25 measures, 5 conflict resolution rules, and an autonomy-calibrated activation model, with bidirectional traceability to established AI frameworks. A scenario analysis traces the governance activation through a multi-agent corridor cascade involving documented UAE smart-city systems. Why it matters: This paper addresses a significant regulatory gap in AI governance for complex, multi-agent systems in critical urban infrastructure, offering a novel architectural solution highly relevant to global smart city initiatives, including those in the Middle East.

Learning to Cooperate in Multi-Agent Systems

MBZUAI ·

Dr. Yali Du from King's College London will give a presentation on learning to cooperate in multi-agent systems. Her research focuses on enabling cooperative and responsible behavior in machines using reinforcement learning and foundation models. She will discuss enhancing collaboration within social contexts, fostering human-AI coordination, and achieving scalable alignment. Why it matters: This highlights the growing importance of research into multi-agent systems and human-AI interaction, crucial for developing AI that integrates effectively and ethically into society.