GCC AI Research

Results for "decentralized optimization"

Open Problems in Modern Convex Optimization

MBZUAI ·

Alexander Gasnikov from the Moscow Institute of Physics and Technology presented a talk on open problems in convex optimization. The talk covered stochastic approximation versus sample average approximation, saddle-point problems and accelerated methods, homogeneous federated learning, and decentralized optimization. Gasnikov's research focuses on optimization algorithms, with publications in NeurIPS, ICML, EJOR, OMS, and JOTA. Why it matters: While the talk itself isn't directly related to GCC AI, convex optimization underpins the machine learning algorithms being advanced in the region.
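As a rough illustration of the last topic, decentralized optimization replaces a central coordinator with peer-to-peer averaging: each node mixes its estimate with its neighbors' estimates, then takes a gradient step on its own local objective. The sketch below uses toy quadratic objectives and a hypothetical `decentralized_gd` helper; it is not one of the methods from the talk.

```python
def decentralized_gd(targets, neighbors, steps=200, lr=0.1):
    """Toy decentralized gradient descent: node i minimizes
    f_i(x) = (x - targets[i])^2. Each round, nodes average their
    estimates over their neighborhood (gossip mixing), then take a
    local gradient step. The network-wide average drifts toward the
    minimizer of the sum, mean(targets)."""
    x = [0.0] * len(targets)
    for _ in range(steps):
        # Mixing step: replace each estimate by its neighborhood average.
        mixed = [sum(x[j] for j in neighbors[i]) / len(neighbors[i])
                 for i in range(len(x))]
        # Local step: the gradient of (x - a)^2 is 2 * (x - a).
        x = [mixed[i] - lr * 2.0 * (mixed[i] - targets[i])
             for i in range(len(x))]
    return x
```

With a constant step size the nodes hover near, rather than exactly at, consensus, which is one reason accelerated and variance-corrected decentralized methods remain an active research area.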

Graph neural network approach for decentralized multi-robot coordination

MBZUAI ·

Qingbiao Li from the Oxford Robotics Institute is researching decentralized multi-robot coordination using graph neural networks (GNNs). The approach builds an information-sharing mechanism within a decentralized multi-robot team through GNNs and imitation learning, and uses vision-based, learning-assisted navigation with panoramic cameras to guide robots through unseen environments. Why it matters: This research could improve the effectiveness of automated mobile robot systems in urban rail transit and warehousing logistics in the GCC region, where smart city initiatives are growing.
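The information-sharing idea behind such GNNs can be sketched as a message-passing layer: each robot combines its own feature with an aggregate of its neighbors' features. The scalar features, weights, and `gnn_layer` helper below are a toy illustration, not Li's actual architecture.

```python
def gnn_layer(features, adjacency, w_self, w_neigh):
    """One mean-aggregation message-passing layer over a robot graph.
    features[i] is robot i's (scalar) feature; adjacency[i] lists the
    indices of robots within communication range of robot i."""
    out = []
    for i, f in enumerate(features):
        nbrs = [features[j] for j in adjacency[i]]
        mean_nbr = sum(nbrs) / len(nbrs) if nbrs else 0.0
        # Combine own state with the neighborhood message, then apply ReLU.
        out.append(max(0.0, w_self * f + w_neigh * mean_nbr))
    return out
```

Stacking several such layers lets information propagate multiple hops, so each robot's decision can reflect teammates it cannot directly observe.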

Enabling Fast, Robust, and Personalized Federated Learning

MBZUAI ·

A talk at MBZUAI discussed federated learning, a distributed machine learning approach that trains models across many devices while keeping data localized. The presentation covered a straggler-resilient federated learning scheme using adaptive node participation to tackle system heterogeneity. It also presented a robust optimization formulation for addressing data heterogeneity and a new algorithm for personalizing learned models. Why it matters: Federated learning is crucial for AI applications involving decentralized data sources, and research on improving its robustness and personalization is essential for real-world deployment in the region.
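As background, one round of FedAvg-style federated learning can be sketched as follows: selected clients train locally on their own data, and the server averages their models weighted by data size. The `participants` argument loosely mimics adaptive node participation (e.g. training only the fastest clients in a round); the `fedavg_round` helper and its toy least-squares objective are illustrative, not the scheme from the talk.

```python
def fedavg_round(global_w, client_data, lr=0.1, local_steps=5, participants=None):
    """One FedAvg-style round for a 1-D model w fitting scalar data.
    Each participating client minimizes the mean of 0.5*(w - x)^2 over
    its local data; the server averages the resulting models weighted
    by local dataset size."""
    if participants is None:
        participants = range(len(client_data))
    updates, sizes = [], []
    for c in participants:
        w = global_w
        for _ in range(local_steps):
            # Local gradient of the mean squared objective: mean of (w - x).
            grad = sum(w - x for x in client_data[c]) / len(client_data[c])
            w -= lr * grad
        updates.append(w)
        sizes.append(len(client_data[c]))
    # FedAvg aggregation: data-size-weighted average of client models.
    total = sum(sizes)
    return sum(w * n for w, n in zip(updates, sizes)) / total
```

Straggler resilience then becomes a question of which `participants` set to choose each round without biasing the averaged model.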

Programmable Networks for Distributed Deep Learning: Advances and Perspectives

MBZUAI ·

A presentation discusses using programmable network devices to reduce communication bottlenecks in distributed deep learning. It explores in-network aggregation and data processing to lower memory requirements and improve bandwidth utilization. The talk also covers gradient compression and the potential role of programmable NICs. Why it matters: Optimizing distributed deep learning infrastructure is critical for scaling AI model training in resource-constrained environments.
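Gradient compression can be illustrated with top-k sparsification: each worker transmits only the largest-magnitude gradient entries, shrinking the traffic that the network (or an in-network aggregator) must carry. The `topk_compress` helper is a toy sketch, not tied to any specific programmable-network design.

```python
def topk_compress(grad, k):
    """Top-k gradient sparsification: keep the k largest-magnitude
    entries of the gradient and zero out the rest. In practice only
    the kept (index, value) pairs would be sent over the wire."""
    if k >= len(grad):
        return list(grad)
    # Indices of the k entries with the largest absolute value.
    keep = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    out = [0.0] * len(grad)
    for i in keep:
        out[i] = grad[i]
    return out
```

Real systems typically pair this with error feedback, accumulating the discarded entries locally so they are eventually transmitted.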

Unlocking Decentralized AI and Vision: Overcoming Incentive Barriers, Orchestration Challenges, and Data Silos

MBZUAI ·

This article discusses the need for a decentralized approach to AI, especially in contexts where data and knowledge are distributed. It highlights five key technical challenges: privacy, verifiability, incentives, orchestration, and crowd UX (user experience). The author, Ramesh Raskar from the MIT Media Lab, advocates integrating privacy technology, distributed verifiable AI, data markets, orchestration, and crowd experience into the Web3 framework. Why it matters: Decentralized AI could unlock new possibilities for collaboration and problem-solving in the region, particularly in sectors like healthcare and logistics where data is often siloed.

New approaches for machine learning optimization presented at ICML

MBZUAI ·

MBZUAI and KAUST researchers collaborated to present new optimization methods at ICML 2024 for composite and distributed machine learning settings. The study addresses the difficulty of training large models as data sizes and computational demands grow. Their work focuses on minimizing the loss function by adjusting a model's internal trainable parameters, using techniques such as gradient clipping. Why it matters: This research contributes to the ongoing advancement of machine learning optimization, crucial for improving the performance and efficiency of AI models in the region and globally.
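Gradient clipping caps the norm of the gradient before each update so that a single noisy batch cannot destabilize training. A minimal sketch of the standard norm-clipping rule (the `clip_by_norm` helper is illustrative, not the paper's method):

```python
def clip_by_norm(grad, max_norm):
    """Scale a gradient vector down so its Euclidean norm is at most
    max_norm; gradients already within the bound pass through unchanged."""
    norm = sum(g * g for g in grad) ** 0.5
    if norm <= max_norm:
        return list(grad)
    scale = max_norm / norm
    return [g * scale for g in grad]
```

Because clipping rescales rather than truncates, the update keeps the gradient's direction while bounding the step size.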

A new strategy for complex optimization problems in machine learning presented at ICLR

MBZUAI ·

MBZUAI researchers presented a new strategy for handling complex optimization problems in machine learning at ICLR 2024. The study, a collaboration with ISAM, combines zeroth-order methods, which rely on function evaluations rather than explicit gradients, with hard-thresholding, which enforces sparse solutions. This combination aims to improve convergence, helping algorithms reach high-quality solutions efficiently. Why it matters: Improving optimization techniques is crucial for advancing machine learning models used in various applications, potentially accelerating development and enhancing performance.
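The combination can be sketched in a few lines: estimate the gradient from function values alone, take a descent step, then hard-threshold to keep only the k largest-magnitude coordinates. For simplicity this sketch uses deterministic coordinate-wise finite differences rather than the randomized estimators such papers typically analyze; the helpers below are a toy illustration, not the authors' algorithm.

```python
def zo_grad(f, x, mu=1e-5):
    """Zeroth-order gradient estimate via coordinate-wise forward
    differences: uses only evaluations of f, no analytic gradient."""
    fx = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += mu
        g.append((f(xp) - fx) / mu)
    return g

def zo_hard_threshold(f, x0, k, steps=100, lr=0.1, mu=1e-5):
    """Gradient-free descent with hard-thresholding: after each step,
    keep only the k largest-magnitude coordinates (sparsity constraint)
    and zero the rest."""
    x = list(x0)
    for _ in range(steps):
        g = zo_grad(f, x, mu)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
        keep = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k]
        x = [x[i] if i in keep else 0.0 for i in range(len(x))]
    return x
```

On a simple quadratic with a dominant coordinate, the iterate settles on a k-sparse solution concentrated on that coordinate.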

A Decentralized Multi-Agent Unmanned Aerial System to Search, Pick Up, and Relocate Objects

arXiv ·

This paper presents a decentralized multi-agent unmanned aerial system designed to search for, pick up, and relocate objects. The system integrates multi-agent aerial exploration, object detection and tracking, and aerial gripping. It relies on global state estimation, reactive collision avoidance, and sweep planning for exploration. Why it matters: The system's successful deployment in demonstrations and competitions like MBZIRC highlights the potential of integrated robotic solutions for complex tasks such as search and rescue in the region.
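Sweep planning for exploration is commonly implemented as a boustrophedon ("lawnmower") pattern of parallel passes over the search area. A toy sketch generating such waypoints over a rectangle (the `sweep_waypoints` helper is illustrative, not the paper's planner):

```python
def sweep_waypoints(width, height, spacing):
    """Boustrophedon sweep over a width x height rectangle: fly parallel
    rows `spacing` apart, alternating direction each row so the path
    covers the area without retracing."""
    waypoints = []
    y, row = 0.0, 0
    while y <= height:
        xs = [0.0, float(width)]
        if row % 2:  # reverse every other row to zigzag
            xs.reverse()
        waypoints += [(x, y) for x in xs]
        y += spacing
        row += 1
    return waypoints
```

In a multi-agent setting, each drone would typically be assigned a disjoint sub-rectangle and sweep it independently.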