GCC AI Research

Results for "Peter Richtárik"

Faculty Focus: Peter Richtárik

KAUST ·

Peter Richtárik, an associate professor of computer science and mathematics, joined KAUST in February 2017. He is affiliated with the Visual Computing Center and the Extreme Computing Research Center at KAUST. Richtárik's research combines optimization and machine learning, and he values the support KAUST provides to his students, including funding for travel and conference attendance. Why it matters: This highlights KAUST's commitment to attracting and supporting leading researchers in AI and related fields, fostering innovation and talent development in the region.

KAUST Professor Peter Richtárik wins Distinguished Speaker Award

KAUST ·

KAUST Professor Peter Richtárik received a Distinguished Speaker Award at the Sixth International Conference on Continuous Optimization (ICCOPT 2019) in Berlin. Richtárik's lecture series, totaling six hours, focused on stochastic gradient descent (SGD) methods, drawing on recent research by his KAUST group. He highlighted key principles and new variants of SGD, the workhorse method for training modern machine learning models. Why it matters: This award recognizes KAUST's contribution to fundamental machine learning optimization, which is critical for advancing AI in the region.
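For readers unfamiliar with SGD itself, a minimal sketch of the basic method on a least-squares problem might look like the following. This is a generic textbook illustration, not one of the variants from the lecture series; all names and constants here are chosen for the example.

```python
import numpy as np

def sgd_least_squares(A, b, lr=0.01, epochs=100, seed=0):
    """Minimize (1/2n) * ||A x - b||^2 by taking a gradient step
    on one randomly ordered row (data point) at a time."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Stochastic gradient: gradient of the i-th term only.
            grad_i = (A[i] @ x - b[i]) * A[i]
            x -= lr * grad_i
    return x

# Synthetic consistent problem, so the exact solution is recoverable.
rng = np.random.default_rng(42)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x_hat = sgd_least_squares(A, b)
```

Each step uses the gradient of a single data point rather than the full sum, which is what makes the method cheap per iteration and suitable for large-scale training.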

KAUST advances scalable AI through global collaboration

KAUST ·

KAUST is hosting a workshop on distributed training in November 2025, led by Professors Peter Richtárik and Marco Canini, focusing on scaling large models such as LLMs and ViTs. Richtárik's team recently solved a 75-year-old problem in asynchronous optimization, developing time-optimal stochastic gradient descent algorithms. This research improves the speed and reliability of large model training and supports applications in distributed and federated learning. Why it matters: KAUST's focus on scalable AI and federated learning contributes to Saudi Arabia's Vision 2030 goals and addresses critical challenges in AI deployment and data privacy.

KAUST master’s degree student wins best poster award at Data Science Summer School

KAUST ·

KAUST master’s degree student Samuel Horváth won a best poster award at the Data Science Summer School (DS3) in Paris for his poster entitled "Nonconvex Variance Reduced Optimization with Arbitrary Sampling". The poster is based on a paper of the same name currently under review and is joint work between Horváth and his supervisor Professor Peter Richtárik from the KAUST Visual Computing Center. Horváth's research interests are at the interface of statistical learning and big data optimization, with a focus on randomized methods for non-convex problems. Why it matters: This award recognizes the quality of KAUST's research and its students' contributions to the field of data science and optimization.
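To illustrate what "variance reduced optimization" refers to, here is a generic SVRG-style sketch on a least-squares objective. This uses plain uniform sampling and is not the poster's algorithm (the arbitrary-sampling framework it studies is more general); all parameters are illustrative.

```python
import numpy as np

def svrg(A, b, lr=0.01, outer=50, inner=500, seed=0):
    """SVRG-style variance reduction: correct each stochastic
    gradient with a periodically recomputed full gradient."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    x_ref = np.zeros(d)
    for _ in range(outer):
        # Full gradient at the reference (snapshot) point.
        full_grad = A.T @ (A @ x_ref - b) / n
        x = x_ref.copy()
        for _ in range(inner):
            i = rng.integers(n)
            g_i = (A[i] @ x - b[i]) * A[i]
            g_ref = (A[i] @ x_ref - b[i]) * A[i]
            # Variance-reduced estimate: unbiased, and its variance
            # vanishes as x and x_ref approach the solution.
            x -= lr * (g_i - g_ref + full_grad)
        x_ref = x
    return x_ref

# Synthetic consistent problem with a known solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))
x_true = rng.standard_normal(3)
b = A @ x_true
x_hat = svrg(A, b)
```

The correction term `- g_ref + full_grad` is what distinguishes this from plain SGD: it shrinks the noise in the gradient estimate without requiring a full gradient at every step.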

Open Problems in Modern Convex Optimization

MBZUAI ·

Alexander Gasnikov from the Moscow Institute of Physics and Technology presented a talk on open problems in convex optimization. The talk covered stochastic approximation versus sample average approximation, saddle-point problems and accelerated methods, homogeneous federated learning, and decentralized optimization. Gasnikov's research focuses on optimization algorithms, and he has published in NeurIPS, ICML, EJOR, OMS, and JOTA. Why it matters: While the talk itself isn't directly related to GCC AI, understanding convex optimization is crucial for advancing machine learning algorithms used in the region.

An Adaptive Stochastic Sequential Quadratic Programming with Differentiable Exact Augmented Lagrangians

MBZUAI ·

Mladen Kolar from the University of Chicago Booth School of Business discussed stochastic optimization with equality constraints at MBZUAI. He presented a stochastic algorithm based on sequential quadratic programming (SQP) using a differentiable exact augmented Lagrangian. The algorithm selects its stepsizes adaptively via a stochastic line search procedure and is shown to converge globally almost surely. Why it matters: The presentation highlights MBZUAI's role in hosting discussions on advanced optimization techniques, fostering research and knowledge exchange in the field of machine learning.
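As a rough intuition for the stochastic line search idea (heavily simplified: no equality constraints, no SQP subproblem, no augmented Lagrangian, so this is not Kolar's algorithm), one can backtrack the stepsize until a sufficient-decrease condition holds on the sampled batch. All names and constants below are illustrative.

```python
import numpy as np

def stochastic_line_search_step(grad_fn, loss_fn, x, batch, alpha0=1.0,
                                beta=0.5, c=1e-4, max_backtracks=30):
    """One descent step: shrink the stepsize geometrically until the
    batch loss satisfies an Armijo sufficient-decrease condition."""
    g = grad_fn(x, batch)
    f0 = loss_fn(x, batch)
    alpha = alpha0
    for _ in range(max_backtracks):
        x_new = x - alpha * g
        if loss_fn(x_new, batch) <= f0 - c * alpha * (g @ g):
            return x_new
        alpha *= beta
    return x  # no acceptable stepsize found on this batch

# Least-squares test problem with a known solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 4))
b = A @ rng.standard_normal(4)

def loss_fn(x, idx):
    r = A[idx] @ x - b[idx]
    return 0.5 * (r @ r) / len(idx)

def grad_fn(x, idx):
    return A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)

x = np.zeros(4)
for _ in range(300):
    batch = rng.choice(200, size=32, replace=False)
    x = stochastic_line_search_step(grad_fn, loss_fn, x, batch)
```

Because the acceptance test is evaluated on a random batch rather than the full objective, the decrease guarantee only holds in a probabilistic sense, which is why analyses of such methods (including the one presented) establish almost-sure rather than deterministic convergence.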

Ph.D. student wins PACE Challenge

KAUST ·

KAUST Ph.D. student Lukas Larisch won the Optimal Tree Decomposition track of the Parameterized Algorithms and Computational Experiments (PACE) 2017 Challenge, solving more instances than his competitors. He received the award at the International Symposium on Parameterized and Exact Computation (IPEC 2017) in Vienna, Austria. Larisch is pursuing his Ph.D. at KAUST and working in the University's Extreme Computing Research Center, focusing on acoustics and graph structure theory. Why it matters: This recognition highlights KAUST's contribution to advanced computer science research and its ability to attract and foster talented researchers in niche areas like parameterized complexity.

KAUST Ph.D. student Jinhui Xiong wins best paper award

KAUST ·

KAUST Ph.D. student Jinhui Xiong won the best paper award at the 24th International Symposium on Vision, Modeling, and Visualization in Germany for his paper "Stochastic Convolutional Sparse Coding". The paper, co-authored with KAUST Professors Peter Richtárik and Wolfgang Heidrich, introduces a novel stochastic spatial-domain solver for Convolutional Sparse Coding (CSC). The proposed algorithm outperforms state-of-the-art solutions in terms of execution time and offers an improved representation for learning dictionaries from sample images. Why it matters: This award recognizes significant research in efficient image representation and dictionary learning, contributing to advancements in visual computing and AI at KAUST.