GCC AI Research


Results for "Collaborative Learning"

Investigating collaborative learning at MBZUAI’s AI Quorum

MBZUAI ·

MBZUAI launched the AI Quorum, a winter series running from October 2022 to March 2023, to stimulate AI research. The first session, led by Professor Michael Jordan, brought together around 20 research experts to discuss collaborative learning. Discussions covered how edge devices such as cell phones, and institutions such as hospitals, can contribute data to build large models, as well as risks such as free-riding and adversarial attacks. Why it matters: The AI Quorum initiative positions MBZUAI as a hub for global AI collaboration, addressing key challenges and opportunities in collaborative learning for real-world applications.

Building Planetary-Scale Collaborative Intelligence

MBZUAI ·

Sai Praneeth Karimireddy from UC Berkeley presented a talk on building planetary-scale collaborative intelligence, highlighting the challenges of using distributed data in machine learning due to data silos and ethical-legal restrictions. He proposed collaborative systems like federated learning as a solution to bring together distributed data while respecting privacy. The talk addressed the need for efficiency, reliability, and management of divergent goals in these systems, suggesting the use of tools from optimization, statistics, and economics. Why it matters: Collaborative AI systems can unlock valuable distributed data in the region, especially in sensitive sectors like healthcare, while ensuring privacy and addressing ethical concerns.
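The collaborative systems the talk describes can be illustrated with federated averaging (FedAvg), the canonical federated-learning algorithm: each client trains locally on data that never leaves it, and a server averages the resulting models. The sketch below is a minimal, illustrative version on a toy linear-regression task; the model, data, and hyperparameters are stand-ins, not from the talk.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training: plain gradient steps on a least-squares
    loss. The linear model here is an illustrative placeholder."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=10, dim=3):
    """Server loop: broadcast the global model, collect local updates,
    and average them weighted by each client's dataset size."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for data, labels in clients:  # raw data stays on the client
            updates.append(local_update(global_w, data, labels))
            sizes.append(len(labels))
        total = sum(sizes)
        global_w = sum(n / total * w for n, w in zip(sizes, updates))
    return global_w

# Four simulated data silos sharing an underlying signal.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = federated_averaging(clients)
```

Only model weights cross the network; each silo's `(X, y)` never leaves `local_update`, which is the privacy property the talk emphasizes.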

Frontiers of federation at the AI Quorum

MBZUAI ·

MBZUAI hosted the Second Workshop on Collaborative Learning as part of the AI Quorum in Abu Dhabi, focusing on collaborative and federated learning for sustainable development. Researchers discussed applications in medicine, biology, ecological conservation, and humanitarian aid. Eric Xing highlighted the potential of large biology models, similar to LLMs, to revolutionize biological data analysis. Why it matters: This workshop underscores the UAE's commitment to advancing AI research in crucial sectors like healthcare and sustainability through collaborative learning approaches.

Learning to Cooperate in Multi-Agent Systems

MBZUAI ·

Dr. Yali Du from King's College London will give a presentation on learning to cooperate in multi-agent systems. Her research focuses on enabling cooperative and responsible behavior in machines using reinforcement learning and foundation models. She will discuss enhancing collaboration within social contexts, fostering human-AI coordination, and achieving scalable alignment. Why it matters: This highlights the growing importance of research into multi-agent systems and human-AI interaction, crucial for developing AI that integrates effectively and ethically into society.

KAUST advances scalable AI through global collaboration

KAUST ·

KAUST is hosting a workshop on distributed training in November 2025, led by Professors Peter Richtarik and Marco Canini, focusing on scaling large models like LLMs and ViTs. Richtarik's team recently solved a 75-year-old problem in asynchronous optimization, developing time-optimal stochastic gradient descent algorithms. This research improves the speed and reliability of large model training and supports applications in distributed and federated learning. Why it matters: KAUST's focus on scalable AI and federated learning contributes to Saudi Arabia's Vision 2030 goals and addresses critical challenges in AI deployment and data privacy.
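The core difficulty in asynchronous optimization is that worker gradients arrive stale, computed against an older copy of the parameters. Richtarik's time-optimal algorithms are more sophisticated than this, but the toy simulation below, with a fixed-depth queue standing in for in-flight workers, shows the staleness mechanism; all names and numbers are illustrative.

```python
import numpy as np
from collections import deque

def async_sgd(grad_fn, w0, n_workers=4, steps=200, lr=0.05, seed=0):
    """Simulated asynchronous SGD: each worker computes a gradient at the
    parameters current when it started; the server applies the oldest
    finished gradient. A queue models the pipeline of in-flight workers."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    inflight = deque()  # gradients computed on (possibly stale) copies of w
    for _ in range(steps):
        inflight.append(grad_fn(w, rng))  # worker starts from current w
        if len(inflight) >= n_workers:    # pipeline full: stalest one lands
            w -= lr * inflight.popleft()
    return w

# Illustrative quadratic objective f(w) = 0.5 * ||w - target||^2.
target = np.array([2.0, -1.0])
noisy_grad = lambda w, rng: (w - target) + 0.01 * rng.normal(size=2)
w = async_sgd(noisy_grad, np.zeros(2))
```

With a small step size the stale gradients still contract toward the optimum; the time-optimal methods mentioned above aim to get the best convergence time achievable despite such delays.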

Multi-agent Time-based Decision-making for the Search and Action Problem

arXiv ·

This paper introduces a decentralized multi-agent decision-making framework for search and action problems under time constraints, treating time as a budgeted resource in which actions have costs and rewards. The approach uses probabilistic reasoning to optimize decisions, maximizing reward within the available time. Evaluated in a simulated search, pick, and place scenario inspired by the Mohamed Bin Zayed International Robotics Challenge (MBZIRC), the algorithm outperformed benchmark strategies. Why it matters: The framework's validation in a Gazebo environment signals potential for real-world deployment, particularly in time-sensitive, cooperative robotics tasks in the UAE.
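The paper's probabilistic framework is richer than this, but its budgeted-time core can be sketched as selecting actions with time costs and success probabilities to maximize expected reward within a budget, which reduces to a 0/1 knapsack. Action names, costs, and probabilities below are invented for illustration, not taken from the paper.

```python
def best_plan(actions, budget):
    """Choose a subset of actions maximizing total expected reward subject
    to a total time budget (0/1 knapsack over (cost, expected reward))."""
    # dp[t] = (best expected reward with total cost <= t, chosen action names)
    dp = [(0.0, [])] * (budget + 1)
    for name, cost, p_success, reward in actions:
        exp_reward = p_success * reward  # expected reward if attempted
        new_dp = dp[:]
        for t in range(cost, budget + 1):
            cand = dp[t - cost][0] + exp_reward
            if cand > new_dp[t][0]:
                new_dp[t] = (cand, dp[t - cost][1] + [name])
        dp = new_dp
    return dp[budget]

# Hypothetical search/pick/place tasks: (name, time cost, P(success), reward).
actions = [
    ("search_zone_A", 3, 0.9, 10.0),
    ("pick_object_1", 2, 0.7, 8.0),
    ("place_object_1", 2, 0.95, 6.0),
    ("search_zone_B", 4, 0.5, 12.0),
]
value, plan = best_plan(actions, budget=7)
```

With a budget of 7 time units, the planner skips the low-probability `search_zone_B` in favor of the full search-pick-place chain, the kind of trade-off the MBZIRC-inspired scenario exercises.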

DaringFed: A Dynamic Bayesian Persuasion Pricing for Online Federated Learning under Two-sided Incomplete Information

arXiv ·

This paper introduces DaringFed, a novel dynamic Bayesian persuasion pricing mechanism for online federated learning (OFL) that addresses the challenge of two-sided incomplete information (TII) regarding resources. It formulates the interaction between the server and clients as a dynamic signaling and pricing allocation problem within a Bayesian persuasion game, demonstrating the existence of a unique Bayesian persuasion Nash equilibrium. Evaluations on real and synthetic datasets show that DaringFed improves accuracy, convergence speed, and the server's utility.
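The incentive problem DaringFed tackles can be seen in a much simpler setting: a server must pay clients to participate without knowing their private costs. The toy posted-price search below is a stand-in for the paper's dynamic persuasion and pricing mechanism (which additionally shapes clients' beliefs via signals); every function and number here is illustrative.

```python
import random

def client_participates(price, private_cost):
    """A client joins training only if the payment covers its private cost,
    which the server cannot observe (the incomplete-information side)."""
    return price >= private_cost

def server_utility(price, costs, value_per_client=1.0):
    """Server gains value_per_client per participant and pays `price` each:
    a higher price recruits more clients but shrinks the margin on all."""
    joined = sum(client_participates(price, c) for c in costs)
    return joined * (value_per_client - price)

def best_posted_price(costs, grid):
    """Grid-search the utility-maximizing posted price. A crude stand-in
    for DaringFed's equilibrium pricing; illustrative only."""
    return max(grid, key=lambda p: server_utility(p, costs))

random.seed(0)
costs = [random.uniform(0.1, 0.9) for _ in range(100)]  # private client costs
grid = [i / 100 for i in range(1, 100)]
p_star = best_posted_price(costs, grid)
```

Even this static version shows the tension between recruiting enough clients and preserving server utility; DaringFed's contribution is resolving it dynamically when information is incomplete on both sides.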