GCC AI Research

Results for "stochastic gradient descent"

SGD from the Lens of Markov process: An Algorithmic Stability Perspective

MBZUAI

A Marie Curie Fellow from Inria and UIUC presented research on stochastic gradient descent (SGD) through the lens of Markov processes, exploring the relationships between heavy-tailed distributions, generalization error, and algorithmic stability. The research challenges existing theories about the monotonic relationship between heavy tails and generalization error. It introduces a unified approach for proving Wasserstein stability bounds in stochastic optimization, applicable to convex and non-convex losses. Why it matters: The work provides novel insights into the theoretical underpinnings of stochastic optimization, relevant to researchers at MBZUAI and other institutions in the region working on machine learning algorithms.
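
The stability analysis treats the SGD trajectory itself as the random object of study. As a minimal illustration (toy least-squares problem and all names our own, not from the talk), the iterates form a Markov chain because each update depends only on the current point and a freshly drawn mini-batch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical least-squares problem: f(x) = (1/2n) * ||A @ x - b||^2
n, d = 1000, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def minibatch_grad(x, batch_size=32):
    """Unbiased stochastic gradient computed from a fresh mini-batch."""
    idx = rng.integers(0, n, size=batch_size)
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch_size

x = np.zeros(d)
eta = 0.01
for _ in range(500):
    # x_{k+1} depends only on x_k and fresh noise: the iterates are a Markov chain
    x = x - eta * minibatch_grad(x)
```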

An Adaptive Stochastic Sequential Quadratic Programming with Differentiable Exact Augmented Lagrangians

MBZUAI

Mladen Kolar from the University of Chicago Booth School of Business discussed stochastic optimization with equality constraints at MBZUAI. He presented a stochastic algorithm based on sequential quadratic programming (SQP) that uses a differentiable exact augmented Lagrangian. The algorithm selects stepsizes adaptively via a stochastic line search procedure, and the analysis establishes global almost-sure convergence. Why it matters: The presentation highlights MBZUAI's role in hosting discussions on advanced optimization techniques, fostering research and knowledge exchange in the field of machine learning.
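
The full method runs SQP on an exact augmented Lagrangian merit function, which is not reproduced here. As a rough sketch of the stochastic line search idea alone (toy unconstrained problem and all names our own), one backtracks until a sufficient-decrease test evaluated on fresh samples passes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unconstrained surrogate; the paper applies the idea within SQP on an
# exact augmented Lagrangian merit function, which this sketch omits.
def loss_and_grad(x, batch=64):
    Z = rng.normal(size=(batch, x.size))        # hypothetical streaming data
    r = Z @ x - 1.0
    return 0.5 * np.mean(r**2), Z.T @ r / batch

def stochastic_armijo_step(x, alpha0=1.0, c=1e-4, shrink=0.5, max_tries=20):
    """Backtrack until a sampled Armijo-type sufficient-decrease test passes."""
    f0, g = loss_and_grad(x)
    d = -g                                      # descent direction (here: steepest descent)
    alpha = alpha0
    for _ in range(max_tries):
        f_new, _ = loss_and_grad(x + alpha * d)
        if f_new <= f0 + c * alpha * (g @ d):   # decrease measured on fresh samples
            break
        alpha *= shrink                         # stepsize adapts to the stochastic estimates
    return x + alpha * d

x = np.zeros(5)
for _ in range(200):
    x = stochastic_armijo_step(x)
```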

KAUST Professor Peter Richtárik wins Distinguished Speaker Award

KAUST

KAUST Professor Peter Richtárik received a Distinguished Speaker Award at the Sixth International Conference on Continuous Optimization (ICCOPT 2019) in Berlin. Richtárik's lecture series, totaling six hours, focused on stochastic gradient descent (SGD) methods, drawing from recent research by his KAUST group. He highlighted key principles and new variants of SGD, the workhorse method for training modern machine learning models. Why it matters: This award recognizes KAUST's contribution to fundamental machine learning optimization, which is critical for advancing AI in the region.
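
One family of SGD variants studied by Richtárik's group changes which data point is sampled. A minimal sketch of SGD with importance sampling (toy finite-sum problem and all names our own, not taken from the lectures): sampling probabilities follow per-example smoothness, and the gradient is rescaled so the estimate stays unbiased:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical finite-sum problem f(x) = (1/n) * sum_i 0.5 * (a_i @ x - b_i)^2
# with deliberately uneven row norms, where uniform sampling is inefficient
n, d = 500, 8
A = rng.normal(size=(n, d)) * rng.uniform(0.1, 3.0, size=(n, 1))
b = rng.normal(size=n)

# Sample row i with probability proportional to its smoothness constant L_i = ||a_i||^2
L = np.sum(A**2, axis=1)
p = L / L.sum()

x = np.zeros(d)
eta = 0.5 / L.mean()
for _ in range(5000):
    i = rng.choice(n, p=p)
    g = (A[i] @ x - b[i]) * A[i]      # gradient of the i-th summand
    x = x - eta * g / (n * p[i])      # 1/(n * p_i) reweighting keeps the estimate unbiased
```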

Open Problems in Modern Convex Optimization

MBZUAI

Alexander Gasnikov from the Moscow Institute of Physics and Technology presented a talk on open problems in convex optimization. The talk covered stochastic approximation (SA) versus sample average approximation (SAA), saddle-point problems and accelerated methods, homogeneous federated learning, and decentralized optimization. Gasnikov's research focuses on optimization algorithms, and he has published in NeurIPS, ICML, EJOR, OMS, and JOTA. Why it matters: While the talk itself isn't directly related to GCC AI, understanding convex optimization is crucial for advancing machine learning algorithms used in the region.
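
The first topic contrasts two standard ways of consuming samples (assuming the SA/SAA reading above is what was meant). A minimal sketch on a toy one-dimensional objective, names our own: stochastic approximation updates online as samples arrive, while sample average approximation fixes the sample and minimizes the empirical average:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy objective: minimize E[(x - xi)^2] with xi ~ N(1, 1); the minimizer is x* = 1
samples = rng.normal(loc=1.0, size=10_000)

# Stochastic approximation (SA): stream the samples through SGD, one update each
x_sa, eta = 0.0, 0.01
for xi in samples:
    x_sa -= eta * 2.0 * (x_sa - xi)

# Sample average approximation (SAA): fix the sample, then minimize the empirical
# average exactly; for this quadratic the empirical minimizer is the sample mean
x_saa = samples.mean()
```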

Accelerating neural network optimization: The power of second-order methods

MBZUAI

MBZUAI researchers presented a new second-order method for optimizing neural networks at NeurIPS 2024. The method addresses optimization problems cast as variational inequalities, which are common in machine learning. They showed that for monotone variational inequalities with inexact second-order derivatives, no faster second- or first-order methods can exist in theory, and they supported this claim with experiments. Why it matters: This research has the potential to reduce the computational cost of training large and complex neural networks, which could accelerate AI development in the region.
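
The article does not spell out the new method. For context only, here is the classical first-order extragradient method on a toy monotone variational inequality (a bilinear saddle point; operator and names our own), the kind of baseline against which such second-order methods are measured:

```python
import numpy as np

rng = np.random.default_rng(4)

# Monotone variational inequality: find z* with <F(z*), z - z*> >= 0 for all z.
# Toy operator from the bilinear saddle point min_x max_y x.T @ M @ y:
# F(x, y) = (M @ y, -M.T @ x) is monotone but is not the gradient of any function.
d = 5
M = rng.normal(size=(d, d))

def F(z):
    x, y = z[:d], z[d:]
    return np.concatenate([M @ y, -M.T @ x])

z = rng.normal(size=2 * d)
eta = 0.5 / np.linalg.norm(M, 2)   # stepsize below 1/L, L = Lipschitz constant of F
for _ in range(2000):
    z_half = z - eta * F(z)        # extrapolation (look-ahead) step
    z = z - eta * F(z_half)        # update using the operator at the look-ahead point
```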

Scalable Community Detection in Massive Networks Using Aggregated Relational Data

MBZUAI

A new mini-batch strategy using aggregated relational data is proposed to fit the mixed membership stochastic blockmodel (MMSB) to large networks. The method combines nodal information with stochastic gradients over bipartite graphs for scalable inference. The approach was applied to a citation network with over two million nodes and 25 million edges, capturing explainable structure. Why it matters: This research enables more efficient community detection in massive networks, which is crucial for analyzing complex relationships across many domains, though the work itself has no clear connection to the Middle East.
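
The full model (an MMSB fit with aggregated relational data) is not reproduced here. The sketch below shows only the mini-batch idea, on a heavily simplified latent-position network model of our own invention rather than the MMSB: sample node pairs, take a stochastic gradient of the pairwise likelihood, and update, so no iteration ever touches all O(n²) pairs:

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Heavily simplified stand-in for the pairwise network likelihood: a latent-position
# model with edge probability sigmoid(u_i @ u_j) instead of the full MMSB.
n, k = 2000, 4
U_true = rng.normal(size=(n, k))            # "ground-truth" embeddings generating edges
U = 0.01 * rng.normal(size=(n, k))          # parameters to be learned

eta, batch = 0.1, 256
for _ in range(3000):
    # Mini-batch of node pairs: the key to scalability on massive networks
    i = rng.integers(0, n, size=batch)
    j = rng.integers(0, n, size=batch)
    y = (rng.random(batch) < sigmoid(np.sum(U_true[i] * U_true[j], axis=1))).astype(float)
    p = sigmoid(np.sum(U[i] * U[j], axis=1))
    g = (p - y)[:, None]                    # d(-loglik)/d(score) for a Bernoulli edge
    gi, gj = g * U[j], g * U[i]             # chain rule to both endpoints' embeddings
    np.add.at(U, i, -eta * gi)              # scatter-add handles repeated indices
    np.add.at(U, j, -eta * gj)
```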

KAUST advances scalable AI through global collaboration

KAUST

KAUST is hosting a workshop on distributed training in November 2025, led by Professors Peter Richtárik and Marco Canini, focusing on scaling large models like LLMs and ViTs. Richtárik's team recently solved a 75-year-old problem in asynchronous optimization, developing time-optimal stochastic gradient descent algorithms. This research improves the speed and reliability of large-model training and supports applications in distributed and federated learning. Why it matters: KAUST's focus on scalable AI and federated learning contributes to Saudi Arabia's Vision 2030 goals and addresses critical challenges in AI deployment and data privacy.
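
The time-optimal algorithms themselves are not described in the article. To make the asynchronous setting concrete, here is a minimal Hogwild-style sketch (toy problem and names our own) in which workers update a shared iterate without waiting for one another, so gradients may be computed at stale points:

```python
import threading
import numpy as np

rng = np.random.default_rng(6)

# Shared iterate for a toy least-squares problem; workers update it without locks
n, d = 1000, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
x = np.zeros(d)

def worker(steps=500, batch=32, eta=0.005):
    local_rng = np.random.default_rng()
    for _ in range(steps):
        idx = local_rng.integers(0, n, size=batch)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch  # gradient at a possibly stale x
        x[:] -= eta * g                               # in-place update, no synchronization

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```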

New approaches for machine learning optimization presented at ICML

MBZUAI

MBZUAI and KAUST researchers collaborated to present new optimization methods at ICML 2024 for composite and distributed machine learning settings. The study addresses the challenges of training large models posed by data size and computational requirements. Their work focuses on minimizing the loss function by adjusting a model's trainable parameters, using techniques such as gradient clipping. Why it matters: This research contributes to the ongoing advancement of machine learning optimization, crucial for improving the performance and efficiency of AI models in the region and globally.
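
Gradient clipping is the one technique the summary names explicitly. A minimal sketch (toy objective with heavy-tailed noise, names our own): rescale any gradient whose norm exceeds a threshold before taking the SGD step:

```python
import numpy as np

rng = np.random.default_rng(7)

def clip(g, tau=1.0):
    """Rescale g so its norm never exceeds the clipping threshold tau."""
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

# Toy objective ||x||^2 with heavy-tailed gradient noise, the regime where clipping helps
x = np.ones(10)
eta = 0.05
for _ in range(1000):
    g = 2.0 * x + rng.standard_t(df=2, size=x.size)  # true gradient plus heavy-tailed noise
    x = x - eta * clip(g)                            # clipped SGD step
```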