A Marie Curie Fellow from Inria and UIUC presented research on stochastic gradient descent (SGD) through the lens of Markov processes, exploring the relationships between heavy-tailed distributions, generalization error, and algorithmic stability. The research challenges existing theories about the monotonic relationship between heavy tails and generalization error. It introduces a unified approach for proving Wasserstein stability bounds in stochastic optimization, applicable to convex and non-convex losses. Why it matters: The work provides novel insights into the theoretical underpinnings of stochastic optimization, relevant to researchers at MBZUAI and other institutions in the region working on machine learning algorithms.
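The emergence of heavy tails in SGD can be illustrated with a toy simulation (a minimal sketch under simplifying assumptions, not the presented analysis): on a one-dimensional quadratic loss with noisy curvature estimates, the SGD update is a multiplicative (Kesten-type) Markov chain, and a moderately large step size yields a stationary distribution with infinite fourth moment even though the chain remains stable.

```python
import numpy as np

# Illustrative sketch (not the speaker's method): SGD on f(x) = 0.5*a*x^2
# with noisy curvature samples a_k gives the recursion
#   x_{k+1} = (1 - eta * a_k) * x_k + eta * b_k,
# a Kesten-type Markov chain. With eta = 0.9 and a_k ~ N(1, 1),
# E[(1 - eta*a)^2] ≈ 0.82 < 1 (the chain is L2-stable), but
# E[(1 - eta*a)^4] ≈ 2.02 > 1, so the stationary law has an infinite
# fourth moment — i.e., heavy tails despite convergence in distribution.
rng = np.random.default_rng(0)
eta = 0.9
n_steps = 50_000
a = rng.normal(1.0, 1.0, n_steps)   # noisy curvature estimates
b = rng.normal(0.0, 1.0, n_steps)   # additive gradient noise

x = 0.0
trace = np.empty(n_steps)
for k in range(n_steps):
    x = (1.0 - eta * a[k]) * x + eta * b[k]
    trace[k] = x

# Heavy tails show up as sample kurtosis far above the Gaussian value of 3.
tail = trace[10_000:]               # discard burn-in
kurtosis = np.mean((tail - tail.mean()) ** 4) / np.var(tail) ** 2
```

The step size `eta` and noise scales here are arbitrary illustrative choices; the point is only that tail heaviness is controlled by the interplay of step size and gradient noise, which is the regime the talk's stability analysis concerns.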
The article mentions several KAUST faculty and staff, including Matteo Parsani (Assistant Professor of Applied Mathematics), Teofilo Abrajano (Director of Sponsored Research), and David Keyes (Director of the Extreme Computing Research Center). It also references a talk by NASA Senior Scientist Mark Carpenter at the SIAM CSE 2017 conference. The article includes a photograph of King Abdullah bin Abdulaziz Al Saud. Why it matters: This appears to be general information about KAUST faculty and activities, but lacks specific details on research or AI developments.
Bruno Ribeiro from Purdue University presented a talk on Asymmetry Learning and Out-of-Distribution (OOD) Robustness. The talk introduced Asymmetry Learning, a new paradigm that focuses on finding evidence of asymmetries in data to improve classifier performance in both in-distribution and out-of-distribution scenarios. Asymmetry Learning performs a causal structure search to find classifiers that perform well across different environments. Why it matters: This research addresses a key challenge in AI by proposing a novel approach to improve the reliability and generalization of classifiers in unseen environments, potentially leading to more robust AI systems.
Dr. Xinwei Sun from Microsoft Research Asia presented research on trustworthy AI, focusing on statistical learning with theoretical guarantees. The work covers methods for sparse recovery with false-discovery rate analysis and causal inference tools for robustness and explainability. Consistency and identifiability were addressed theoretically, with applications shown in medical imaging analysis. Why it matters: The research contributes to addressing key limitations of current AI models regarding explainability, reproducibility, robustness, and fairness, which are crucial for real-world applications in sensitive fields like healthcare.
KAUST Associate Professor Xiangliang Zhang leads the Machine Intelligence and Knowledge Engineering (MINE) group, focusing on machine learning and data mining algorithms for AI applications. The MINE group studies complex graph data to profile nodes, predict links, detect communities, and understand how they are connected. Zhang's team also works on graph alignment and recommender systems. Why it matters: This research contributes to advancing machine learning techniques at a leading GCC institution, potentially impacting various AI applications in the region.
KAUST researchers developed a machine learning algorithm to control a deformable mirror within the Subaru Telescope's exoplanet imaging camera, compensating for atmospheric turbulence. The algorithm, which computes a partial singular value decomposition (SVD), outperforms a standard SVD by a factor of four. The KAUST team received a best paper award at the PASC Conference for this work, which has already been deployed at the Subaru Telescope. Why it matters: This advancement enables sharper images of exoplanets, facilitating their identification and study, and showcases the impact of optimizing core linear algebra algorithms.
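The gain from computing only a partial decomposition can be sketched generically (using SciPy's ARPACK-based solver on a random matrix; this is not the KAUST implementation): when only the top k modes of a response matrix matter, a rank-k partial SVD recovers the same leading singular triplets as the full factorization while doing far less work.

```python
import numpy as np
from scipy.sparse.linalg import svds

# Generic illustration: adaptive-optics control typically needs only the
# dominant modes of the system response matrix, so a rank-k partial SVD
# can replace the full factorization.
rng = np.random.default_rng(1)
m, n, k = 2000, 500, 20
A = rng.standard_normal((m, n))     # stand-in for a response matrix

# Full SVD: all min(m, n) singular triplets.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Partial SVD: only the k largest triplets. svds returns singular values
# in ascending order, so reverse them for comparison.
Uk, sk, Vtk = svds(A, k=k)
sk = sk[::-1]

# The k leading singular values match the full decomposition.
assert np.allclose(sk, s[:k])
```

The matrix size and rank here are arbitrary; in practice the speedup depends on how small k is relative to min(m, n) and on the structure of the matrix.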
KAUST is hosting a workshop on distributed training in November 2025, led by Professors Peter Richtarik and Marco Canini, focusing on scaling large models like LLMs and ViTs. Richtarik's team recently solved a 75-year-old problem in asynchronous optimization, developing time-optimal stochastic gradient descent algorithms. This research improves the speed and reliability of large model training and supports applications in distributed and federated learning. Why it matters: KAUST's focus on scalable AI and federated learning contributes to Saudi Arabia's Vision 2030 goals and addresses critical challenges in AI deployment and data privacy.
A new framework for constructing confidence sets for causal orderings within structural equation models (SEMs) is presented. It leverages a residual bootstrap procedure to test the goodness-of-fit of causal orderings, quantifying uncertainty in causal discovery. The method is computationally efficient and suitable for medium-sized problems while maintaining theoretical guarantees as the number of variables increases. Why it matters: This offers a new dimension of uncertainty quantification that enhances the robustness and reliability of causal inference in complex systems, but there is no indication of connection to the Middle East.
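The residual-bootstrap idea can be sketched in a toy setting (assumed three-variable linear SEM with non-Gaussian noise; both the model and the fit statistic below are illustrative stand-ins, not the paper's actual procedure): under a candidate ordering, regress each variable on its predecessors, then resample the residuals through the fitted model to calibrate a goodness-of-fit statistic and obtain a bootstrap p-value.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative linear SEM with true causal ordering (0, 1, 2):
#   X0 = e0,  X1 = 0.8*X0 + e1,  X2 = 0.5*X0 - 0.7*X1 + e2
n = 500
e = rng.laplace(size=(n, 3))        # non-Gaussian noise
X = np.empty((n, 3))
X[:, 0] = e[:, 0]
X[:, 1] = 0.8 * X[:, 0] + e[:, 1]
X[:, 2] = 0.5 * X[:, 0] - 0.7 * X[:, 1] + e[:, 2]

def fit_ordering(X, order):
    """OLS-fit each variable on its predecessors under the given ordering."""
    n, p = X.shape
    resid = np.empty((n, p))
    coefs = []
    for j, v in enumerate(order):
        if j == 0:
            coefs.append(np.empty(0))
            resid[:, 0] = X[:, v] - X[:, v].mean()
            continue
        Z1 = np.column_stack([np.ones(n), X[:, order[:j]]])
        beta, *_ = np.linalg.lstsq(Z1, X[:, v], rcond=None)
        coefs.append(beta)
        resid[:, j] = X[:, v] - Z1 @ beta
    return coefs, resid

def statistic(X, order, resid):
    """Illustrative statistic: dependence of squared residuals on predecessors."""
    t = 0.0
    for j in range(1, len(order)):
        r2 = resid[:, j] ** 2
        for i in order[:j]:
            t = max(t, abs(np.corrcoef(r2, X[:, i])[0, 1]))
    return t

def bootstrap_pvalue(X, order, B=200):
    """Residual bootstrap: resample residuals through the fitted SEM."""
    coefs, resid = fit_ordering(X, order)
    t_obs = statistic(X, order, resid)
    n, p = X.shape
    count = 0
    for _ in range(B):
        Xb = np.empty((n, p))
        for j, v in enumerate(order):
            rb = rng.choice(resid[:, j], size=n, replace=True)
            if j == 0:
                Xb[:, v] = X[:, v].mean() + rb
            else:
                Z1 = np.column_stack([np.ones(n), Xb[:, order[:j]]])
                Xb[:, v] = Z1 @ coefs[j] + rb
        _, resid_b = fit_ordering(Xb, order)
        if statistic(Xb, order, resid_b) >= t_obs:
            count += 1
    return (count + 1) / (B + 1)

p_true = bootstrap_pvalue(X, [0, 1, 2])   # p-value for the true ordering
```

A confidence set of orderings would then collect all candidate orderings whose bootstrap p-value exceeds the chosen level; the framework's actual test statistic and theoretical guarantees differ from this simplified sketch.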