Dr. Yali Du from King's College London will give a presentation on learning to cooperate in multi-agent systems. Her research focuses on enabling cooperative and responsible behavior in machines using reinforcement learning and foundation models. She will discuss enhancing collaboration within social contexts, fostering human-AI coordination, and achieving scalable alignment. Why it matters: This highlights the growing importance of research into multi-agent systems and human-AI interaction, crucial for developing AI that integrates effectively and ethically into society.
This paper introduces a decentralized multi-agent decision-making framework for search and action problems under time constraints, treating time as a budgeted resource in which each action has a cost and an expected reward. The approach uses probabilistic reasoning to select actions that maximize total reward within the available time. Evaluated in a simulated search, pick, and place scenario inspired by the Mohamed Bin Zayed International Robotics Challenge (MBZIRC), the algorithm outperformed benchmark strategies. Why it matters: Validation in a Gazebo simulation environment signals potential for real-world deployment, particularly in time-sensitive cooperative robotics tasks in the UAE.
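The paper's exact formulation isn't reproduced here, but the core idea of treating time as a budgeted resource can be sketched as follows. All names, costs, and the greedy reward-per-second heuristic are illustrative assumptions, not the authors' algorithm:

```python
# Hypothetical sketch of time-budgeted action selection: each candidate action
# has a time cost and an expected reward (success probability x reward value);
# an agent greedily picks actions by reward per second until the budget runs out.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    time_cost: float      # seconds the action is expected to take
    success_prob: float   # probability the action yields its reward
    reward: float         # reward if the action succeeds

    @property
    def expected_reward(self) -> float:
        return self.success_prob * self.reward


def plan(actions: list[Action], budget: float) -> list[Action]:
    """Greedy budgeted plan: highest expected reward per unit time first."""
    chosen, remaining = [], budget
    ranked = sorted(actions, key=lambda a: a.expected_reward / a.time_cost, reverse=True)
    for a in ranked:
        if a.time_cost <= remaining:
            chosen.append(a)
            remaining -= a.time_cost
    return chosen


candidates = [
    Action("search_zone_A", 30.0, 0.9, 10.0),
    Action("search_zone_B", 50.0, 0.6, 25.0),
    Action("pick_object",   20.0, 0.8, 15.0),
]
chosen = plan(candidates, budget=60.0)
print([a.name for a in chosen])  # → ['pick_object', 'search_zone_A']
```

A greedy ratio rule is only a baseline; the paper's probabilistic reasoning would weigh uncertainty over outcomes rather than fixed expected values.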
Researchers from MBZUAI, Carnegie Mellon University, and Meta AI presented ThoughtComm, a framework in which AI agents communicate through internal latent representations instead of natural language, at NeurIPS 2025. The framework extracts and selectively shares latent "thoughts" from agents' internal states, capturing the underlying structure of their reasoning. Results show that agents using this method coordinate more effectively, reach consensus faster, and solve problems more accurately. Why it matters: Bypassing the limitations of natural language in AI communication could lead to more efficient and accurate multi-agent systems, with impact on robotics, collaborative AI, and distributed problem-solving.
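ThoughtComm's actual architecture is not described in this summary; the following toy sketch only illustrates the general idea of exchanging latent vectors rather than text, with all classes and update rules being assumptions for illustration:

```python
# Toy sketch (NOT the ThoughtComm architecture): agents exchange fixed-size
# latent vectors derived from their internal state instead of generating
# natural-language messages, and each agent folds the received latents into
# its own state update, driving the group toward consensus.
import numpy as np

rng = np.random.default_rng(0)


class Agent:
    def __init__(self, dim: int):
        self.state = rng.normal(size=dim)  # internal "thought" state
        # Linear map from internal state to a shareable latent message.
        self.encode = rng.normal(size=(dim, dim)) / np.sqrt(dim)

    def share(self) -> np.ndarray:
        """Emit a latent message extracted from the internal state."""
        return self.encode @ self.state

    def receive(self, latents: list[np.ndarray], lr: float = 0.5) -> None:
        """Move the state toward the mean of received latents (consensus step)."""
        self.state += lr * (np.mean(latents, axis=0) - self.state)


agents = [Agent(dim=8) for _ in range(4)]
for _ in range(20):
    messages = [a.share() for a in agents]  # broadcast latents, no language
    for a in agents:
        a.receive(messages)

# Because every agent steps halfway toward the same mean latent each round,
# pairwise state differences halve per round and the group converges.
spread = max(np.linalg.norm(a.state - agents[0].state) for a in agents)
print(f"state spread after exchange: {spread:.2e}")
```

In the real framework the "selective sharing" would presumably gate which latents are transmitted; this sketch broadcasts everything to keep the consensus dynamic visible.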
The Robotics, Intelligent Systems, and Control (RISC) lab at KAUST is developing swarm robotics, enabling robots to work together on collaborative tasks with limited human supervision. RISC is using game theory to improve how robots make coordinated decisions in scenarios like engaging intruders or tracking oil spills. The lab is also researching programmable self-assembly for robot swarms. Why it matters: This research advances autonomous multi-agent systems for critical applications like search and rescue and environmental monitoring in the region.
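RISC's specific game-theoretic models aren't detailed here, but the flavor of coordinated decision-making can be shown with a toy two-robot, two-intruder game. The payoffs and scenario are invented for illustration:

```python
# Toy illustration (not RISC's actual models): two robots each choose which of
# two intruders to engage. Splitting up covers both intruders, so the payoffs
# reward anti-coordination; we enumerate pure-strategy Nash equilibria.
import itertools

# payoff[(a, b)] = (robot 1's payoff, robot 2's payoff) when robot 1 plays a
# and robot 2 plays b; a strategy is the intruder a robot engages.
payoff = {
    ("intruder_1", "intruder_1"): (1, 1),  # both crowd the same target
    ("intruder_1", "intruder_2"): (3, 3),  # targets split: best outcome
    ("intruder_2", "intruder_1"): (3, 3),
    ("intruder_2", "intruder_2"): (1, 1),
}
strategies = ["intruder_1", "intruder_2"]


def is_nash(a: str, b: str) -> bool:
    """True if neither robot can improve by unilaterally switching targets."""
    u1, u2 = payoff[(a, b)]
    best1 = all(payoff[(alt, b)][0] <= u1 for alt in strategies)
    best2 = all(payoff[(a, alt)][1] <= u2 for alt in strategies)
    return best1 and best2


equilibria = [p for p in itertools.product(strategies, repeat=2) if is_nash(*p)]
print(equilibria)  # the two "split up" profiles are the equilibria
```

The point of the game-theoretic framing is exactly this: stable, decentralized task allocation (splitting up) falls out of each robot best-responding to the others, with no central assignment step.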
Giulia De Masi, Principal Scientist at the Technology Innovation Institute (TII) in Abu Dhabi, specializes in Collective Intelligence and Swarm Robotics. Her work focuses on designing emergent behaviors in robot swarms through local interactions, drawing inspiration from social insects. De Masi's background includes positions at academic institutions in the UAE and a PhD from the University of Rome La Sapienza. Why it matters: This highlights the growing focus on swarm robotics and collective intelligence research within the UAE, with potential applications in various industries.
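The design of emergent swarm behavior from local interactions can be illustrated with a minimal aggregation rule; this sketch is a generic textbook-style example, not De Masi's specific models:

```python
# Illustrative sketch: a single local rule, "step toward the average position
# of nearby neighbors," produces emergent aggregation of the swarm with no
# central controller -- the insect-inspired pattern the summary describes.
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 10.0, size=(20, 2))  # 20 robots in a 10x10 arena
RADIUS, STEP = 3.0, 0.2                           # local sensing range, step size


def spread(p: np.ndarray) -> float:
    """Mean distance of robots from the swarm's centroid."""
    return float(np.linalg.norm(p - p.mean(axis=0), axis=1).mean())


initial = spread(positions)
for _ in range(200):
    new = positions.copy()
    for i, p in enumerate(positions):
        # Each robot only sees neighbors within RADIUS (including itself).
        near = positions[np.linalg.norm(positions - p, axis=1) < RADIUS]
        new[i] = p + STEP * (near.mean(axis=0) - p)  # local cohesion rule
    positions = new

print(f"mean spread: {initial:.2f} -> {spread(positions):.2f}")
```

No robot knows the swarm's centroid, yet the group contracts into clusters: the global behavior emerges purely from the repeated local rule.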