Alexander Gasnikov from the Moscow Institute of Physics and Technology presented a talk on open problems in convex optimization. The talk covered stochastic approximation versus sample average approximation, saddle-point problems and accelerated methods, homogeneous federated learning, and decentralized optimization. Gasnikov's research focuses on optimization algorithms, and he has published in NeurIPS, ICML, EJOR, OMS, and JOTA. Why it matters: While the talk itself isn't directly tied to AI in the GCC, convex optimization underpins the machine learning algorithms being advanced in the region.
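The talk's specific open problems aren't detailed in the summary, but the "accelerated methods" it refers to are exemplified by Nesterov-style acceleration. Below is a minimal illustrative sketch, not from the talk itself: the constant-momentum variant of Nesterov's accelerated gradient method applied to a toy strongly convex quadratic.

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, steps=200):
    """Nesterov's accelerated gradient method (constant-momentum variant)
    for an L-smooth, mu-strongly convex objective."""
    x = y = np.asarray(x0, dtype=float)
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    for _ in range(steps):
        x_next = y - grad(y) / L          # gradient step from the extrapolated point
        y = x_next + beta * (x_next - x)  # momentum extrapolation
        x = x_next
    return x

# Toy quadratic f(x) = 0.5 * x^T A x with known constants L and mu.
A = np.diag([1.0, 100.0])
x_star = nesterov_agd(lambda v: A @ v, x0=[1.0, 1.0], L=100.0, mu=1.0)
```

For ill-conditioned problems like this one (condition number 100), acceleration improves the iteration complexity from O(L/mu) to O(sqrt(L/mu)), which is the kind of gain the accelerated-methods literature studies.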
This article discusses the need for a decentralized approach to AI, especially in contexts where data and knowledge are distributed. It highlights five key technical challenges: privacy, verifiability, incentives, orchestration, and crowd UX. The author, Ramesh Raskar from MIT Media Lab, advocates for integrating privacy tech, distributed verifiable AI, data markets, orchestration, and crowd experience into the Web3 framework. Why it matters: Decentralized AI could unlock new possibilities for collaboration and problem-solving in the region, particularly in sectors like healthcare and logistics where data is often siloed.
A talk at MBZUAI discussed federated learning, a distributed machine learning approach that trains models across devices while keeping data localized. The presentation covered a straggler-resilient federated learning scheme that uses adaptive node participation to tackle system heterogeneity. It also presented a robust optimization formulation for addressing data heterogeneity and a new algorithm for personalizing learned models. Why it matters: Federated learning is crucial for AI applications involving decentralized data sources, and research on improving its robustness and personalization is essential for real-world deployment in the region.
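The core federated learning loop the talk builds on can be sketched with the standard FedAvg pattern: clients train locally on private data, and a server averages the resulting models weighted by dataset size. This is a generic illustration, not the straggler-resilient scheme from the talk; the linear-regression clients are a stand-in for arbitrary local training.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of full-batch gradient descent on one client's
    linear-regression data (a stand-in for local training)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One FedAvg round: each client trains locally on its own data,
    then the server averages updates weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = [local_sgd(w_global, X, y) for X, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Three clients, each holding a private shard drawn from the same model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = [(X := rng.normal(size=(50, 2)), X @ w_true) for _ in range(3)]

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
```

Note that no raw data ever leaves a client; only model parameters are communicated, which is the property that makes the approach attractive for the privacy-sensitive deployments the talk targets.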
MBZUAI and KAUST researchers collaborated to present new optimization methods at ICML 2024 for composite and distributed machine learning settings. The study addresses challenges in training large models due to data size and computational power. Their work focuses on minimizing the loss function by adjusting a model's internal trainable parameters, using techniques such as gradient clipping. Why it matters: This research contributes to the ongoing advancement of machine learning optimization, crucial for improving the performance and efficiency of AI models in the region and globally.
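Gradient clipping, one of the techniques mentioned, caps the size of each update so that an occasional huge gradient cannot destabilize training. A minimal sketch of the common global-norm variant (illustrative; the paper's exact clipping rule isn't given in the summary):

```python
import numpy as np

def clip_gradient(grad, max_norm=1.0):
    """Global-norm gradient clipping: if the gradient's L2 norm exceeds
    max_norm, rescale it to max_norm while keeping its direction."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([3.0, 4.0])    # L2 norm 5.0, above the threshold
clipped = clip_gradient(g)  # rescaled to norm 1.0, same direction
```

Because only the magnitude changes, the update still points downhill; this is why clipping is a standard guard against heavy-tailed gradient noise in large-model training.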
MBZUAI researchers are applying federated learning to optimize smart grids while protecting user data privacy. This approach leverages techniques from smart healthcare systems to enhance energy efficiency and local energy sharing. The research addresses the challenge of balancing grid optimization against the privacy risks, such as user identification and identity theft, posed by traditional data-intensive smart grids. Why it matters: This research demonstrates a practical application of privacy-preserving AI in critical infrastructure, addressing key concerns around data security and fostering trust in smart grid technologies.
Sai Praneeth Karimireddy from UC Berkeley presented a talk on building planetary-scale collaborative intelligence, highlighting the challenges of using distributed data in machine learning due to data silos and ethical and legal restrictions. He proposed collaborative systems such as federated learning as a way to bring distributed data together while respecting privacy. The talk addressed the need for efficiency, reliability, and the management of divergent goals in these systems, suggesting the use of tools from optimization, statistics, and economics. Why it matters: Collaborative AI systems can unlock valuable distributed data in the region, especially in sensitive sectors like healthcare, while ensuring privacy and addressing ethical concerns.
Qingbiao Li from the Oxford Robotics Institute is researching decentralized multi-robot coordination using Graph Neural Networks (GNNs). The approach builds an information-sharing mechanism within a decentralized multi-robot system through GNNs and imitation learning. It also uses vision-based machine learning with panoramic cameras to guide robots through unseen environments. Why it matters: This research could improve the effectiveness of automated mobile robot systems in urban rail transit and warehousing logistics in the GCC region, where smart city initiatives are growing.
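The information-sharing mechanism behind such GNN-based coordination can be illustrated with a single message-passing layer: each robot (a graph node) combines its own features with an aggregate of its neighbours' features. This is a generic mean-aggregation sketch with hypothetical dimensions, not Li's architecture.

```python
import numpy as np

def gnn_layer(features, adjacency, W_self, W_neigh):
    """One message-passing layer: each node mixes its own feature vector
    with the mean of its neighbours' features, then applies a nonlinearity."""
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    neighbour_mean = (adjacency @ features) / deg
    return np.tanh(features @ W_self + neighbour_mean @ W_neigh)

# Four robots connected in a line graph; 3-dim feature per robot (hypothetical sizes).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
out = gnn_layer(X, A, rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
```

Because each robot only reads its neighbours' messages, the same learned layer runs on every robot without a central controller, which is what makes the scheme decentralized.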
KAUST is hosting a workshop on distributed training in November 2025, led by Professors Peter Richtarik and Marco Canini, focusing on scaling large models like LLMs and ViTs. Richtarik's team recently solved a 75-year-old problem in asynchronous optimization, developing time-optimal stochastic gradient descent algorithms. This research improves the speed and reliability of large model training and supports applications in distributed and federated learning. Why it matters: KAUST's focus on scalable AI and federated learning contributes to Saudi Arabia's Vision 2030 goals and addresses critical challenges in AI deployment and data privacy.
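The summary doesn't spell out Richtarik's time-optimal algorithms, but the core difficulty in asynchronous optimization that they address can be illustrated generically: workers return gradients computed at stale iterates. A minimal simulation of delayed-gradient SGD on a toy quadratic (an illustration of the problem setting, not the team's method):

```python
import numpy as np
from collections import deque

def delayed_sgd(grad, x0, lr=0.1, delay=2, steps=300):
    """SGD with a fixed gradient delay, mimicking an asynchronous worker:
    the update at step t uses the gradient evaluated at the iterate from
    `delay` steps earlier (a stale gradient)."""
    x = np.asarray(x0, dtype=float)
    history = deque([x.copy()] * (delay + 1), maxlen=delay + 1)
    for _ in range(steps):
        stale = history[0]         # oldest iterate still in the window
        x = x - lr * grad(stale)   # apply the stale gradient
        history.append(x.copy())
    return x

A = np.diag([1.0, 4.0])
x_final = delayed_sgd(lambda v: A @ v, x0=[1.0, 1.0])
```

With a small enough step size the iterates still converge despite staleness, but the tolerable step size shrinks as delays grow; characterizing the best possible trade-off under arbitrary delays is the kind of question that open problem concerned.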