GCC AI Research


Results for "convergence"

Gaussian Variational Inference in high dimension

MBZUAI ·

This article discusses approximating a high-dimensional probability distribution with Gaussian variational inference, which selects the Gaussian that minimizes the Kullback-Leibler divergence to the target. Building on previous research, the study approximates this minimizer by a Gaussian with an explicitly characterized mean and variance, and quantifies the accuracy and applicability of the approximation in terms of an effective dimension, a notion relevant to analyzing sampling schemes in optimization. Why it matters: This theoretical research can inform the development of more efficient and accurate AI algorithms, particularly in areas dealing with high-dimensional data such as machine learning and data analysis.
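The core computation described here, fitting a Gaussian by minimizing KL divergence, can be sketched in a few lines. This is a minimal 1-D illustration, not the paper's method: the target density, step size, and sample count below are all hypothetical, and gradients of KL(q‖p) are estimated by Monte Carlo via the reparameterization trick.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x):
    # Score of a hypothetical 1-D target p = N(3, 2^2) (illustration only).
    return -(x - 3.0) / 4.0

# Variational parameters of q = N(mu, exp(log_sigma)^2).
mu, log_sigma = 0.0, 0.0
lr, n_samples = 0.05, 256
for _ in range(2000):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(n_samples)
    x = mu + sigma * eps              # reparameterized samples from q
    s = grad_log_p(x)
    # KL(q||p) = -H(q) - E_q[log p]; reparameterization gives the gradients:
    grad_mu = -np.mean(s)
    grad_log_sigma = -sigma * np.mean(s * eps) - 1.0  # -1 from the entropy term
    mu -= lr * grad_mu
    log_sigma -= lr * grad_log_sigma
```

At convergence the fitted mean and standard deviation approach the target's (3 and 2 here); in high dimension the same recipe applies with a mean vector and a covariance factor, which is where the effective-dimension analysis becomes relevant.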

A new strategy for complex optimization problems in machine learning presented at ICLR

MBZUAI ·

MBZUAI researchers presented a new strategy for handling complex optimization problems in machine learning at ICLR 2024. The study, a collaboration with ISAM, combines zeroth-order (derivative-free) methods with hard-thresholding, an operation that enforces sparsity by keeping only the largest entries of the iterate, to address settings in machine learning where exact gradients are unavailable or expensive. This approach aims to improve convergence, ensuring algorithms reach quality solutions efficiently. Why it matters: Improving optimization techniques is crucial for advancing machine learning models used in various applications, potentially accelerating development and enhancing performance.
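To make the combination concrete, here is a minimal sketch of zeroth-order optimization with hard-thresholding on a toy sparse least-squares problem. Everything below (the objective, the two-point gradient estimator, the step size, and the problem sizes) is an illustrative assumption, not the ICLR paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse recovery: minimize f(x) = ||Ax - b||^2 / 2 subject to
# x having at most k nonzero entries (illustration only).
d, k = 20, 3
x_true = np.zeros(d)
x_true[[2, 7, 11]] = [1.0, -2.0, 1.5]
A = rng.standard_normal((60, d))
b = A @ x_true

def f(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

def zo_gradient(x, mu=1e-4, n_dirs=64):
    # Two-point zeroth-order estimate: average directional finite differences
    # along random Gaussian directions; uses only function evaluations.
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_dirs

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
x = np.zeros(d)
for _ in range(300):
    x = hard_threshold(x - lr * zo_gradient(x), k)
```

Plain iterative hard-thresholding would use the exact gradient `A.T @ (A @ x - b)`; the zeroth-order estimator replaces it with function evaluations only, which is the gradient-free regime the summary refers to.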

The new combustion conversation

KAUST ·

The 2017 KAUST Research Conference focused on new combustion concepts, bringing together experts from academia, national labs, and industry. Talks covered various aspects of combustion, including a presentation on energy conversion by Professor Igor Adamovich of Ohio State University and one on plasma-assisted applications by Anne Bourdon of École Polytechnique. Why it matters: The conference facilitated knowledge exchange and collaboration on advancing combustion technologies, a field relevant to energy and environmental sustainability in the region.

KAUST advances scalable AI through global collaboration

KAUST ·

KAUST is hosting a workshop on distributed training in November 2025, led by Professors Peter Richtarik and Marco Canini, focusing on scaling large models like LLMs and ViTs. Richtarik's team recently solved a 75-year-old problem in asynchronous optimization, developing time-optimal stochastic gradient descent algorithms. This research improves the speed and reliability of large model training and supports applications in distributed and federated learning. Why it matters: KAUST's focus on scalable AI and federated learning contributes to Saudi Arabia's Vision 2030 goals and addresses critical challenges in AI deployment and data privacy.

Merchants in innovation

KAUST ·

KAUST hosted the KAUST Research Conference: Advances in Well Construction with Focus on Near-Wellbore Physics and Chemistry from November 7 to 9. The conference was co-chaired by Eric van Oort, a professor at UT Austin, and Tadeusz Patzek, director of the University’s Upstream Petroleum Engineering Research Center. Attendees included professors from the University of Queensland and UT Austin, and directors from GenesisRTS and Labyrinth Consulting Services, Inc. Why it matters: The conference facilitates international collaboration on advancements in petroleum engineering and well construction technologies, which are strategically important for Saudi Arabia.

Point correlations for graphics, vision and machine learning

MBZUAI ·

The article discusses the importance of sample correlations in computer graphics, vision, and machine learning, highlighting how tailored randomness can improve the efficiency of existing models. It covers the various correlations studied in computer graphics, the tools used to characterize them, and the use of neural networks to learn new correlation patterns. Gurprit Singh from the Max Planck Institute for Informatics will present on the topic. Why it matters: Understanding and applying sample correlations to optimize sampling techniques can lead to significant advancements and efficiency gains across multiple AI fields.
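The efficiency gain from "tailored randomness" can be seen in a tiny Monte Carlo experiment: negatively correlated (jittered, i.e. stratified) sample points estimate an integral with far lower variance than independent ones. The integrand and sample counts below are arbitrary illustrations, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_estimate(samples, f=np.square):
    # Monte Carlo estimate of the integral of f over [0, 1].
    return f(samples).mean()

def iid_samples(n):
    # Uncorrelated uniform points.
    return rng.random(n)

def jittered_samples(n):
    # Stratified ("jittered") points: one uniform sample per 1/n stratum,
    # a classic negatively correlated point pattern from graphics.
    return (np.arange(n) + rng.random(n)) / n

n, trials = 64, 500
iid_var = np.var([mc_estimate(iid_samples(n)) for _ in range(trials)])
jit_var = np.var([mc_estimate(jittered_samples(n)) for _ in range(trials)])
# The jittered estimator has dramatically lower variance at the same cost.
```

Both estimators target the same integral (1/3 for f(x) = x²); the correlation structure of the points, not their count, is what drives the variance down.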

Accelerating the combustion conversation

KAUST ·

KAUST's Clean Combustion Research Center (CCRC) hosted the Combustion in Extreme Conditions research conference from March 5-8. The conference focused on combustion under extreme conditions in modern engines, covering high-pressure combustion, advanced diagnostics, and high-performance computations. Experts from academia, national labs, and industry discussed global collaborations toward clean combustion systems, alternative fuels, and emission reduction techniques. Why it matters: The conference highlights KAUST's role as a global hub for combustion research and its commitment to advancing technologies for cleaner and more efficient energy solutions.

SGD from the Lens of Markov process: An Algorithmic Stability Perspective

MBZUAI ·

A Marie Curie Fellow from Inria and UIUC presented research on stochastic gradient descent (SGD) through the lens of Markov processes, exploring the relationships between heavy-tailed distributions, generalization error, and algorithmic stability. The research challenges existing theories about the monotonic relationship between heavy tails and generalization error. It introduces a unified approach for proving Wasserstein stability bounds in stochastic optimization, applicable to convex and non-convex losses. Why it matters: The work provides novel insights into the theoretical underpinnings of stochastic optimization, relevant to researchers at MBZUAI and other institutions in the region working on machine learning algorithms.
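Algorithmic stability, the central quantity in this line of work, measures how much an algorithm's output changes when a single training point is replaced. A minimal sketch of that neighboring-dataset experiment for SGD (toy 1-D quadratic loss; all constants here are hypothetical):

```python
import numpy as np

def sgd(data, w0=0.0, lr=0.1, epochs=50, seed=3):
    # Plain SGD with random reshuffling on the 1-D loss l(w, z) = (w - z)^2 / 2.
    rng = np.random.default_rng(seed)
    w = w0
    for _ in range(epochs):
        for z in rng.permutation(data):
            w -= lr * (w - z)          # per-sample gradient step
    return w

# Neighboring datasets: identical except for one replaced sample, the
# standard setup in algorithmic-stability (and Wasserstein-stability) bounds.
data = np.array([0.5, -1.2, 2.0, 0.3, 1.1])
data_prime = data.copy()
data_prime[0] = 5.0

# A shared seed couples the two runs; the gap between outputs is a
# simple empirical proxy for the algorithm's stability.
gap = abs(sgd(data) - sgd(data_prime))
```

A stable algorithm keeps this gap small no matter which sample is swapped; bounds on such gaps (e.g. in Wasserstein distance over the output distribution) translate directly into generalization guarantees, which is the connection the presented research formalizes.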