GCC AI Research


Results for "Peter Richtárik"

Faculty Focus: Peter Richtárik

KAUST ·

Peter Richtárik, an associate professor of computer science and mathematics, joined KAUST in February 2017. He is affiliated with the Visual Computing Center and the Extreme Computing Research Center at KAUST. His research sits at the intersection of optimization and machine learning, and he values the support KAUST provides his students, including funding for travel and conference attendance. Why it matters: This highlights KAUST's commitment to attracting and supporting leading researchers in AI and related fields, fostering innovation and talent development in the region.

KAUST Professor Peter Richtárik wins Distinguished Speaker Award

KAUST ·

KAUST Professor Peter Richtárik received a Distinguished Speaker Award at the Sixth International Conference on Continuous Optimization (ICCOPT 2019) in Berlin. Richtárik's lecture series, totaling six hours, focused on stochastic gradient descent (SGD) methods, drawing from recent research by his KAUST group. He highlighted key principles and new variants of SGD, the workhorse method for training modern machine learning models. Why it matters: This award recognizes KAUST's contribution to fundamental machine learning optimization, which is critical for advancing AI in the region.
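For readers unfamiliar with the method at the center of the lectures, here is a minimal sketch of vanilla SGD on a least-squares objective. The function name, data, and stepsize are illustrative, not taken from the talk:

```python
import numpy as np

def sgd(A, b, lr=0.01, epochs=500, seed=0):
    """Minimize f(x) = (1/2n) * ||Ax - b||^2 by sampling one row per step."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # stochastic gradient from a single sampled row
            g = (A[i] @ x - b[i]) * A[i]
            x -= lr * g
    return x

# tiny example: recover x_true from exact linear measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x_hat = sgd(A, b)
```

Because each step uses one sampled row instead of the full dataset, the per-iteration cost is independent of n, which is what makes SGD viable for modern large-scale training.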

Open Problems in Modern Convex Optimization

MBZUAI ·

Alexander Gasnikov from the Moscow Institute of Physics and Technology presented a talk on open problems in convex optimization. The talk covered stochastic approximation versus sample average approximation, saddle-point problems and accelerated methods, homogeneous federated learning, and decentralized optimization. Gasnikov's research focuses on optimization algorithms, and he has published in NeurIPS, ICML, EJOR, OMS, and JOTA. Why it matters: While the talk itself isn't directly related to GCC AI, understanding convex optimization is crucial for advancing machine learning algorithms used in the region.

KAUST advances scalable AI through global collaboration

KAUST ·

KAUST is hosting a workshop on distributed training in November 2025, led by Professors Peter Richtárik and Marco Canini, focusing on scaling large models like LLMs and ViTs. Richtárik's team recently solved a 75-year-old problem in asynchronous optimization, developing time-optimal stochastic gradient descent algorithms. This research improves the speed and reliability of large model training and supports applications in distributed and federated learning. Why it matters: KAUST's focus on scalable AI and federated learning contributes to Saudi Arabia's Vision 2030 goals and addresses critical challenges in AI deployment and data privacy.
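To illustrate the federated learning setting mentioned above, here is a sketch of federated averaging (FedAvg), the baseline scheme in which clients train locally and a server averages their models. This is a generic textbook sketch, not the Richtárik team's algorithm; all names and data are illustrative:

```python
import numpy as np

def local_sgd(x, A, b, lr, steps, rng):
    """Run a few local SGD steps on one client's least-squares data."""
    for _ in range(steps):
        i = rng.integers(len(b))
        x = x - lr * (A[i] @ x - b[i]) * A[i]
    return x

def fedavg(clients, rounds=100, local_steps=10, lr=0.05, d=3, seed=0):
    """FedAvg: each round, clients train locally from the current
    global model, then the server averages the resulting models."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    for _ in range(rounds):
        updates = [local_sgd(x.copy(), A, b, lr, local_steps, rng)
                   for A, b in clients]
        x = np.mean(updates, axis=0)  # server aggregation step
    return x

# three clients whose data share one ground-truth model (homogeneous case)
rng = np.random.default_rng(2)
x_true = np.array([0.5, -1.0, 2.0])
clients = []
for _ in range(3):
    A = rng.standard_normal((40, 3))
    clients.append((A, A @ x_true))
x_hat = fedavg(clients)
```

Only model parameters cross the network, never the clients' raw data, which is the privacy property that makes federated learning attractive.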

KAUST master’s degree student wins best poster award at Data Science Summer School

KAUST ·

KAUST master’s degree student Samuel Horváth won a best poster award at the Data Science Summer School (DS3) in Paris for his poster entitled "Nonconvex Variance Reduced Optimization with Arbitrary Sampling". The poster is based on a paper of the same name currently under review and is joint work between Horváth and his supervisor Professor Peter Richtárik from the KAUST Visual Computing Center. Horváth's research interests are at the interface of statistical learning and big data optimization, with a focus on randomized methods for non-convex problems. Why it matters: This award recognizes the quality of KAUST's research and its students' contributions to the field of data science and optimization.
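As background on the variance-reduction idea behind the poster, here is a sketch of SVRG, a standard variance-reduced method: a periodic full gradient anchors the stochastic updates so their variance shrinks as the iterates converge. This is a generic textbook method, not the Horváth–Richtárik algorithm (which concerns nonconvex problems with arbitrary sampling); all names and data are illustrative:

```python
import numpy as np

def svrg(A, b, lr=0.01, outer=50, inner=100, seed=0):
    """SVRG for f(x) = (1/2n)||Ax - b||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(outer):
        x_ref = x.copy()
        full_grad = A.T @ (A @ x_ref - b) / n  # snapshot gradient
        for _ in range(inner):
            i = rng.integers(n)
            # control variate: g_i(x) - g_i(x_ref) + full_grad
            g = ((A[i] @ x - b[i]) * A[i]
                 - (A[i] @ x_ref - b[i]) * A[i]
                 + full_grad)
            x -= lr * g
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 4))
x_true = np.array([1.0, 0.0, -1.0, 2.0])
b = A @ x_true
x_hat = svrg(A, b)
```

Near the snapshot point the two per-sample gradients nearly cancel, so the update behaves like full gradient descent at the per-sample cost of SGD.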

An Adaptive Stochastic Sequential Quadratic Programming with Differentiable Exact Augmented Lagrangians

MBZUAI ·

Mladen Kolar from the University of Chicago Booth School of Business discussed stochastic optimization with equality constraints at MBZUAI. He presented a stochastic algorithm based on sequential quadratic programming (SQP) using a differentiable exact augmented Lagrangian. The algorithm selects stepsizes adaptively via a stochastic line search procedure, and global almost-sure convergence is established. Why it matters: The presentation highlights MBZUAI's role in hosting discussions on advanced optimization techniques, fostering research and knowledge exchange in the field of machine learning.
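To make the SQP building block concrete, here is one textbook deterministic SQP step for an equality-constrained quadratic program: it solves the KKT linear system for a primal step and new multipliers. This is a minimal sketch of classical SQP, not Kolar's stochastic algorithm (which additionally handles noisy gradients, adaptive stepsizes, and the augmented Lagrangian merit function); all names are illustrative:

```python
import numpy as np

def sqp_step(x, lam, Q, c, A, b):
    """One SQP step for: min 0.5 x^T Q x - c^T x  s.t.  A x = b.
    Solves the KKT system for the primal step p and new multipliers."""
    m, d = A.shape
    grad = Q @ x - c  # gradient of the objective at x
    kkt = np.block([[Q, A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([-grad, b - A @ x])
    sol = np.linalg.solve(kkt, rhs)
    p, lam_new = sol[:d], sol[d:]
    return x + p, lam_new

# small example: minimize 0.5||x||^2 - c.x subject to sum(x) = 1
Q = np.eye(3)
c = np.array([1.0, 2.0, 3.0])
A = np.ones((1, 3))
b = np.array([1.0])
x, lam = sqp_step(np.zeros(3), np.zeros(1), Q, c, A, b)
# → x = [-2/3, 1/3, 4/3], which satisfies sum(x) = 1
```

Because the objective here is quadratic and the constraint linear, a single step lands exactly on the constrained optimum; for general nonlinear problems the step is repeated, and the stochastic setting replaces the exact gradient with a noisy estimate.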

Working to make AI faster, smarter, and more punctual

MBZUAI ·

MBZUAI Associate Professor Martin Takáč is working on high-performance computing and machine learning with applications in logistics, supply chain management, and other areas. His research focuses on using AI to improve precision and efficiency in tasks like predicting demand and optimizing delivery routes. Takáč's interests include imitation learning, predictive modeling, and reinforcement learning to enable AI to mimic human behavior and predict future outcomes. Why it matters: This research contributes to the development of more efficient and reliable AI systems that can be applied to a wide range of industries in the UAE and beyond.

Better Optimization Algorithms for Machine Learning

MBZUAI ·

Francesco Orabona from Boston University, who holds a PhD from the University of Genova, researches online learning, optimization, and statistical learning theory. He previously worked at Yahoo Labs and the Toyota Technological Institute at Chicago, and spoke at MBZUAI on better optimization algorithms for machine learning. Why it matters: Optimization algorithms are crucial for advancing machine learning and AI, and researchers like Orabona contribute to this field.