GCC AI Research

Optimization

28 articles

Abu Dhabi’s Technology Innovation Institute Develops Quantum Solver for Large Scale Optimization Problems

TII · · Research Quantum Computing

Abu Dhabi's Technology Innovation Institute (TII) has developed a new quantum optimization solver in collaboration with NVIDIA, Los Alamos National Laboratory, and Caltech. The solver addresses large-scale combinatorial optimization problems using a small number of qubits, encoding over 7000 variables with only 17 qubits. Published in Nature Communications, the research demonstrates a hybrid quantum-classical algorithm with a novel encoding scheme that maximizes the use of quantum resources. Why it matters: This advancement marks a significant step toward practical quantum computing applications in the UAE and beyond, particularly in solving complex optimization challenges across various sectors.
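The paper's qubit-efficient encoding is not reproduced here, but the class of problems such solvers target can be stated compactly: quadratic unconstrained binary optimization (QUBO). A minimal classical brute-force sketch (the function name and the Max-Cut instance are illustrative, not from the paper) shows what a quantum solver must beat at scale:

```python
import itertools

def solve_qubo_brute_force(Q):
    """Exhaustively minimize sum_{ij} Q[i,j] * x_i * x_j over binary x.

    Q is a dict mapping (i, j) index pairs to coefficients. Brute force
    is only feasible for small n; quantum and hybrid solvers target the
    regime where this enumeration blows up exponentially.
    """
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_val = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        val = sum(c * bits[i] * bits[j] for (i, j), c in Q.items())
        if val < best_val:
            best_x, best_val = bits, val
    return best_x, best_val

# Max-Cut on a triangle written as a QUBO: any assignment separating
# one vertex from the other two cuts 2 edges, giving minimum energy -2.
Q = {(0, 0): -2, (1, 1): -2, (2, 2): -2,
     (0, 1): 2, (0, 2): 2, (1, 2): 2}
x, val = solve_qubo_brute_force(Q)
```

Enumeration over 7000 binary variables is hopeless classically, which is why squeezing that many variables into 17 qubits is the headline result.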

Optimization of Module Transferability in Single Image Super-Resolution: Universality Assessment and Cycle Residual Blocks

arXiv · · CV Research

This paper introduces "Universality," a measure of how well architectural components transfer across Single Image Super-Resolution (SISR) models, along with a Universality Assessment Equation (UAE) for quantifying it. Guided by the UAE, the authors design two optimized modules, the Cycle Residual Block (CRB) and the Depth-Wise Cycle Residual Block (DCRB), and demonstrate their effectiveness across various datasets and low-level vision tasks. Networks equipped with these modules outperform state-of-the-art methods, achieving higher PSNR or reduced parameter counts.

Diffusion-BBO: Diffusion-Based Inverse Modeling for Online Black-Box Optimization

arXiv · · Research RL

This paper introduces Diffusion-BBO, an online black-box optimization (BBO) framework that uses a conditional diffusion model as an inverse surrogate model. An Uncertainty-aware Exploration (UaE) acquisition function proposes target values in the objective space, on which the diffusion model conditions its sampling of candidate designs. The approach is shown theoretically to achieve a near-optimal solution, and it empirically outperforms existing online BBO baselines across six scientific discovery tasks.

Solving complex problems with LLMs: A new prompting strategy presented at NeurIPS

MBZUAI · · LLM Research

Researchers from MBZUAI and King's College London have developed a new prompting strategy called self-guided exploration to improve LLM performance on combinatorial problems. The method was tested on complex challenges like the traveling salesman problem. The findings will be presented at the 38th Annual Conference on Neural Information Processing Systems (NeurIPS) in Vancouver. Why it matters: This research could lead to practical applications of LLMs in industries like logistics, planning, and scheduling by offering new approaches to computationally complex problems.

Bayesian Optimization-based Tire Parameter and Uncertainty Estimation for Real-World Data

arXiv · · RL Robotics

This paper introduces a Bayesian optimization method for estimating tire parameters and their uncertainty, addressing a gap in existing literature. The methodology uses Stochastic Variational Inference to estimate parameters and uncertainties, and it is validated against a Nelder-Mead algorithm. The approach is applied to real-world data from the Abu Dhabi Autonomous Racing League, revealing uncertainties in identifying curvature and shape parameters due to insufficient excitation. Why it matters: The research provides a practical tool for assessing tire model parameters in real-world conditions, with implications for autonomous racing and vehicle dynamics modeling in the GCC region.
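The paper's Bayesian pipeline with Stochastic Variational Inference is not reproduced here; as a rough sketch of the Nelder-Mead baseline it is validated against, the following fits a simplified Pacejka "magic formula" tire model to synthetic data (the model form is standard, but the coefficients, data, and starting point are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def magic_formula(slip, B, C, D):
    # Simplified Pacejka tire model: lateral force vs. slip angle.
    return D * np.sin(C * np.arctan(B * slip))

# Synthetic "measurements" generated from known parameters plus noise.
rng = np.random.default_rng(0)
true_params = (10.0, 1.9, 1.0)
slip = np.linspace(-0.3, 0.3, 50)
force = magic_formula(slip, *true_params) + rng.normal(0, 0.01, slip.size)

def loss(p):
    # Sum of squared residuals between model prediction and data.
    return np.sum((magic_formula(slip, *p) - force) ** 2)

res = minimize(loss, x0=[8.0, 1.5, 0.8], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 5000})
B, C, D = res.x
```

A point estimate like this gives no uncertainty over (B, C, D); quantifying that uncertainty under weak excitation is exactly the gap the Bayesian approach addresses.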

Energy Pricing in P2P Energy Systems Using Reinforcement Learning

arXiv · · RL Research

This paper presents a reinforcement learning framework for optimizing energy pricing in peer-to-peer (P2P) energy systems. The framework aims to maximize the profit of all components in a microgrid, including consumers, prosumers, the service provider, and a community battery. Experimental results on the Pymgrid dataset demonstrate the approach's effectiveness in price optimization, considering the interests of different components and the impact of community battery capacity.

Ph.D. student Michał Mańkowski helps advance transplantation field

KAUST · · Healthcare Research

KAUST Ph.D. student Michał Mańkowski's research on kidney allocation strategies was recognized as one of the American Journal of Transplantation's "Top 10 Articles of 2019." The research demonstrated how an accelerated allocation strategy could increase the utilization of kidneys at risk for non-use, potentially reducing discard rates. Mańkowski aims to translate his U.S.-focused research to improve organ transplantation within the Saudi Arabian healthcare system. Why it matters: This research has the potential to improve organ transplant outcomes and resource allocation in Saudi Arabia, addressing a critical healthcare need.

KAUST Professor Peter Richtárik wins Distinguished Speaker Award

KAUST · · Research KAUST

KAUST Professor Peter Richtárik received a Distinguished Speaker Award at the Sixth International Conference on Continuous Optimization (ICCOPT 2019) in Berlin. Richtárik's lecture series, totaling six hours, focused on stochastic gradient descent (SGD) methods, drawing from recent research by his KAUST group. He highlighted key principles and new variants of SGD, the workhorse for training modern machine learning models. Why it matters: This award recognizes KAUST's contribution to fundamental machine learning optimization, which is critical for advancing AI in the region.
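For readers unfamiliar with the method the lectures covered, SGD fits in a few lines: repeatedly take a step against the gradient computed on a single sample. A minimal sketch on synthetic least-squares data (sizes and stepsize are illustrative, not drawn from the lectures):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(0, 0.1, n)

w = np.zeros(d)
lr = 0.05
for epoch in range(30):
    # One pass over the data in random order.
    for i in rng.permutation(n):
        # Stochastic gradient of the per-sample loss 0.5 * (x_i . w - y_i)^2
        g = (X[i] @ w - y[i]) * X[i]
        w -= lr * g
```

Each update touches one sample rather than the whole dataset, which is what makes the method scale; the many SGD variants differ in how they choose samples, stepsizes, and variance-reduction corrections.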

Ph.D. student Michał Mańkowski wins poster award at the 18th Annual American Society of Transplant Surgeons Symposium

KAUST · · Healthcare Research

KAUST Ph.D. student Michał Mańkowski won a Poster of Distinction Award at the American Society of Transplant Surgeons (ASTS) 18th Annual State of the Art Winter Symposium for his work on kidney allocation systems. His poster described a simulation for a new kidney allocation system to accelerate organ placement, focusing on marginal quality kidneys. The research combines combinatorial optimization, operations research, and management science with healthcare applications, stemming from a collaboration with the Johns Hopkins School of Medicine. Why it matters: The research aims to improve organ transplantation efficiency and save lives by optimizing kidney allocation systems, demonstrating the potential of AI and optimization techniques in healthcare.

KAUST master’s degree student wins best poster award at Data Science Summer School

KAUST · · Research KAUST

KAUST master’s degree student Samuel Horváth won a best poster award at the Data Science Summer School (DS3) in Paris for his poster entitled "Nonconvex Variance Reduced Optimization with Arbitrary Sampling". The poster is based on a paper of the same name currently under review and is joint work between Horváth and his supervisor Professor Peter Richtárik from the KAUST Visual Computing Center. Horváth's research interests are at the interface of statistical learning and big data optimization, with a focus on randomized methods for non-convex problems. Why it matters: This award recognizes the quality of KAUST's research and its students' contributions to the field of data science and optimization.

Faculty Focus: Peter Richtárik

KAUST · · Research KAUST

Peter Richtárik, an associate professor of computer science and mathematics, joined KAUST in February 2017. He is affiliated with the Visual Computing Center and the Extreme Computing Research Center at KAUST. Richtárik's research combines optimization and machine learning, and he values the support KAUST provides to his students, including funding for travel and conference attendance. Why it matters: This highlights KAUST's commitment to attracting and supporting leading researchers in AI and related fields, fostering innovation and talent development in the region.

Power-Watershed: a graph-based optimization framework for image and data processing

MBZUAI · · CV Research

Laurent Najman presented the Power Watershed (PW) optimization framework for image and data processing. The PW framework enhances graph-based data processing algorithms like random walker and ratio-cut clustering, leading to faster solutions. It can be adapted for graph-based cost minimization methods and integrated with deep learning networks. Why it matters: This framework could enable more efficient and scalable image and data processing algorithms relevant to computer vision and related fields in the Middle East.

Discrete and Continuous Submodular Bandits with Full Bandit Feedback

MBZUAI · · RL Research

Vaneet Aggarwal from Purdue University presented new research on discrete and continuous submodular bandits with full bandit feedback. The research introduces a framework transforming discrete offline approximation algorithms into sublinear α-regret methods using bandit feedback. Additionally, it introduces a unified approach for maximizing continuous DR-submodular functions, accommodating various settings and oracle access types. Why it matters: This research provides new methods for optimization under uncertainty, which is crucial for real-world AI applications in the region, such as resource allocation and automated decision-making.
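The transformation itself is not reproduced here, but the kind of offline approximation algorithm it starts from is classic: greedy selection for a monotone submodular objective such as maximum coverage, which carries a (1 - 1/e) approximation guarantee. A sketch with illustrative sets:

```python
def greedy_max_coverage(sets, k):
    """Greedy (1 - 1/e)-approximation for picking k sets covering the
    most elements: the canonical offline approximation algorithm for a
    monotone submodular objective."""
    covered, chosen = set(), []
    for _ in range(k):
        # Pick the set with the largest marginal coverage gain.
        best = max(range(len(sets)),
                   key=lambda i: -1 if i in chosen else len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
```

In the bandit setting the marginal gains are not observable directly, only noisy values of the chosen action; the presented framework converts guarantees like greedy's into sublinear α-regret under that feedback.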

Understanding modern machine learning models through the lens of high-dimensional statistics

MBZUAI · · Research LLM

This talk explores modern machine learning through high-dimensional statistics, using random matrix theory to analyze learning models. The speaker, Denny Wu from the University of Toronto and the Vector Institute, presents two examples: hyperparameter selection in overparameterized models and gradient-based representation learning in neural networks. The analysis reveals insights such as the possibility of a negative optimal ridge penalty and the advantages of feature learning over random features. Why it matters: This research provides a deeper theoretical understanding of deep learning phenomena, with potential implications for optimizing training and improving model performance in the region.

Optimizing AI Systems through Cross-Layer Design: A Data-Centric Approach

MBZUAI · · Research Infrastructure

A Duke University professor presented a data-centric approach to optimizing AI systems by addressing the memory capacity and bandwidth bottleneck. The presentation covered collaborative optimization across algorithms, systems, architecture, and circuit layers. It also explored compute-in-memory as a solution for integrating computation and memory. Why it matters: Optimizing AI systems through a data-centric approach can improve efficiency and performance, critical for advancing AI applications in the region.

SGD from the Lens of Markov process: An Algorithmic Stability Perspective

MBZUAI · · Research NLP

A Marie Curie Fellow from Inria and UIUC presented research on stochastic gradient descent (SGD) through the lens of Markov processes, exploring the relationships between heavy-tailed distributions, generalization error, and algorithmic stability. The research challenges existing theories about the monotonic relationship between heavy tails and generalization error. It introduces a unified approach for proving Wasserstein stability bounds in stochastic optimization, applicable to convex and non-convex losses. Why it matters: The work provides novel insights into the theoretical underpinnings of stochastic optimization, relevant to researchers at MBZUAI and other institutions in the region working on machine learning algorithms.

Fast Rates for Maximum Entropy Exploration

MBZUAI · · RL Research

This paper addresses exploration in reinforcement learning (RL) in unknown environments with sparse rewards, focusing on maximum entropy exploration. It introduces a game-theoretic algorithm for visitation entropy maximization with improved sample complexity of O(H^3S^2A/ε^2). For trajectory entropy, the paper presents an algorithm with O(poly(S, A, H)/ε) complexity, showing the statistical advantage of regularized MDPs for exploration. Why it matters: The research offers new techniques to reduce the sample complexity of RL, potentially enhancing the efficiency of AI agents in complex environments.
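The quantity being maximized is easy to state: the Shannon entropy of the policy's state-visitation distribution. A small numpy sketch (the distributions are illustrative):

```python
import numpy as np

def visitation_entropy(d):
    """Shannon entropy H(d) = -sum_s d(s) log d(s) of a state-visitation
    distribution d. Maximum-entropy exploration seeks the policy whose
    induced d maximizes this, spreading visits across states."""
    d = np.asarray(d, dtype=float)
    nz = d[d > 0]  # 0 * log 0 is taken as 0
    return float(-(nz * np.log(nz)).sum())

# Uniform visitation over 4 states attains the maximum, log(4);
# a policy stuck in a single state attains the minimum, 0.
uniform = [0.25, 0.25, 0.25, 0.25]
stuck = [1.0, 0.0, 0.0, 0.0]
```

The two algorithms in the paper differ in whether the entropy is taken over visitation frequencies or over whole trajectories, with trajectory entropy admitting the faster O(poly(S, A, H)/ε) rate.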

Accelerating neural network optimization: The power of second-order methods

MBZUAI · · Research NLP

MBZUAI researchers presented a new second-order method for optimizing neural networks at NeurIPS 2024. The method addresses optimization problems arising from variational inequalities, which are common in machine learning. They demonstrated that for monotone variational inequalities with inexact second-order derivatives, no theoretically faster second- or first-order methods can exist, and supported this claim with experiments. Why it matters: This research has the potential to reduce the computational cost of training large and complex neural networks, which could accelerate AI development in the region.

A new strategy for complex optimization problems in machine learning presented at ICLR

MBZUAI · · Research Optimization

MBZUAI researchers presented a new strategy for handling complex optimization problems in machine learning at ICLR 2024. The study, a collaboration with ISAM, combines zeroth-order methods with hard-thresholding to address specific settings in machine learning. This approach aims to improve convergence, ensuring algorithms reach quality solutions efficiently. Why it matters: Improving optimization techniques is crucial for advancing machine learning models used in various applications, potentially accelerating development and enhancing performance.
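The paper's exact algorithm and constants are not reproduced here, but its two building blocks are standard and compose naturally: a two-point zeroth-order gradient estimator (using only function evaluations, no derivatives) followed by a hard-thresholding projection onto sparse vectors. A sketch on an illustrative sparse quadratic:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate: average of directional
    finite differences along random Gaussian directions."""
    rng = rng if rng is not None else np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

# Minimize a quadratic centered on a 2-sparse target using only function
# evaluations, projecting onto 2-sparse vectors after every step.
target = np.array([3.0, 0.0, -2.0, 0.0, 0.0])
f = lambda x: np.sum((x - target) ** 2)
x = np.zeros(5)
rng = np.random.default_rng(1)
for _ in range(200):
    x = hard_threshold(x - 0.1 * zo_gradient(f, x, rng=rng), k=2)
```

Convergence analyses for this combination are delicate precisely because hard thresholding is a nonconvex projection, which is the setting the ICLR work targets.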

New approaches for machine learning optimization presented at ICML

MBZUAI · · Research NLP

MBZUAI and KAUST researchers collaborated to present new optimization methods at ICML 2024 for composite and distributed machine learning settings. The study addresses the challenges of training large models given growing data sizes and computational demands. Their work focuses on minimizing the loss function by adjusting a model's internal trainable parameters, using techniques such as gradient clipping. Why it matters: This research contributes to the ongoing advancement of machine learning optimization, crucial for improving the performance and efficiency of AI models in the region and globally.
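Gradient clipping, mentioned above, is simple to state: rescale any gradient whose norm exceeds a fixed threshold, leaving the rest untouched. A minimal sketch with illustrative values:

```python
import numpy as np

def clip_gradient(g, max_norm):
    """Rescale g so its Euclidean norm is at most max_norm. Clipping
    tames the occasional huge (e.g. heavy-tailed) gradient that would
    otherwise destabilize a training step."""
    norm = np.linalg.norm(g)
    if norm > max_norm:
        return g * (max_norm / norm)
    return g

g = np.array([3.0, 4.0])               # norm 5, exceeds the threshold
clipped = clip_gradient(g, max_norm=1.0)  # rescaled to norm 1
```

The rescaling preserves the gradient's direction while bounding the step size, which is what makes it attractive in distributed and composite settings where individual gradients can be erratic.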

Developing efficient algorithms to spread the benefits of AI

MBZUAI · · Research MBZUAI

MBZUAI PhD graduate William de Vazelhes is researching hard-thresholding algorithms to enable AI to work from smaller datasets. His work focuses on optimization algorithms that simplify data, making it easier to analyze and work with, useful for energy-saving and deploying AI models on low-memory devices. He demonstrated that his approach can obtain results similar to those of convex algorithms in many usual settings. Why it matters: This research could broaden AI accessibility by reducing computational costs, and has potential applications in sectors like finance, particularly for portfolio management under budgetary constraints.
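The contrast with convex methods is easy to see side by side: hard thresholding keeps the largest entries unchanged, while the convex (L1/Lasso) route shrinks everything toward zero. Both operators below are standard textbook building blocks, not de Vazelhes's specific algorithms:

```python
import numpy as np

def hard_threshold(x, k):
    """Nonconvex route: keep the k largest-magnitude entries, unshrunk."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def soft_threshold(x, lam):
    """Convex route (L1 / Lasso): shrink every entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([0.2, -3.0, 0.05, 1.5])
sparse_hard = hard_threshold(x, k=2)      # exactly 2 nonzeros, values intact
sparse_soft = soft_threshold(x, lam=0.3)  # survivors shrunk by 0.3
```

Hard thresholding gives direct control over the number of nonzeros (useful under a hard budget, as in constrained portfolio selection) at the cost of nonconvexity, which is why matching the guarantees of convex algorithms is notable.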

Collective Intelligence: from biological and social to robotic systems

MBZUAI · · Robotics Research

Giulia De Masi, Principal Scientist at the Technology Innovation Institute (TII) in Abu Dhabi, specializes in Collective Intelligence and Swarm Robotics. Her work focuses on designing emergent behaviors in robot swarms through local interactions, drawing inspiration from social insects. De Masi's background includes positions at academic institutions in the UAE and a PhD from the University of Rome La Sapienza. Why it matters: This highlights the growing focus on swarm robotics and collective intelligence research within the UAE, with potential applications in various industries.

On the Utility of Gradient Compression in Distributed Training Systems

MBZUAI · · Research Infrastructure

A CMU researcher, Dr. Hongyi Wang, presented an evaluation of gradient compression methods in distributed training, finding limited speedup in most realistic setups. The research identifies the root causes and proposes desirable properties for gradient compression methods to provide significant speedup. The talk was promoted by MBZUAI. Why it matters: Understanding the limitations of gradient compression techniques can help optimize distributed training strategies for AI models in the region.

Achieving black box vertical federated learning

MBZUAI · · Research Privacy

MBZUAI Assistant Professor Bin Gu is working on black-box optimization techniques, especially in the context of vertical federated learning. Gu's work, in collaboration with JD.com, aims to enhance data and model privacy in machine learning. He is also focused on large-scale optimization and spiking neural networks to bring machine automation closer to the way the human brain operates. Why it matters: This research contributes to advancements in privacy-preserving machine learning techniques relevant to sensitive sectors like finance and healthcare in the region.

Gaussian Variational Inference in high dimension

MBZUAI · · Research Inference

This article discusses approximating a high-dimensional distribution with Gaussian variational inference, which minimizes the Kullback-Leibler divergence to the target. Building on previous research, it approximates the minimizer using a Gaussian distribution with a specific mean and variance. The study characterizes the approximation's accuracy and applicability in terms of an effective dimension, relevant for analyzing sampling schemes in optimization. Why it matters: This theoretical research can inform the development of more efficient and accurate AI algorithms, particularly in areas dealing with high-dimensional data such as machine learning and data analysis.
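Against a general target the KL objective is intractable, but between two Gaussians it has a closed form that shows the ingredients Gaussian VI works with: mean mismatch, covariance mismatch, and dimension. A sketch with illustrative inputs:

```python
import numpy as np

def kl_gaussians(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0, S0) || N(mu1, S1)). Gaussian variational
    inference minimizes a divergence of this shape, with an intractable
    target distribution in place of the second Gaussian."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)        # covariance mismatch
                  + diff @ S1_inv @ diff       # mean mismatch
                  - d                          # dimension term
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu = np.zeros(2)
I = np.eye(2)
```

The explicit dimension term hints at why an effective-dimension analysis governs how the approximation behaves as dimension grows.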

Better Optimization Algorithms for Machine Learning

MBZUAI · · Research NLP

Francesco Orabona from Boston University, who holds a PhD from the University of Genova, researches online learning, optimization, and statistical learning theory. He previously worked at Yahoo Labs and the Toyota Technological Institute at Chicago, and spoke in a panel discussion hosted by MBZUAI. Why it matters: Optimization algorithms are crucial for advancing machine learning and AI, and researchers like Orabona contribute directly to this field.

An Adaptive Stochastic Sequential Quadratic Programming with Differentiable Exact Augmented Lagrangians

MBZUAI · · Research Optimization

Mladen Kolar from the University of Chicago Booth School of Business discussed stochastic optimization with equality constraints at MBZUAI. He presented a stochastic algorithm based on sequential quadratic programming (SQP) using a differentiable exact augmented Lagrangian. The algorithm adapts random stepsizes using a stochastic line search procedure, establishing global "almost sure" convergence. Why it matters: The presentation highlights MBZUAI's role in hosting discussions on advanced optimization techniques, fostering research and knowledge exchange in the field of machine learning.
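In generic form (the differentiable exact augmented Lagrangian in the talk adds further correction terms), the equality-constrained problem and the penalty function at the core of such SQP methods are:

```latex
\min_{x}\; f(x) \quad \text{s.t.}\quad c(x) = 0,
\qquad
\mathcal{L}_\rho(x, \lambda) \;=\; f(x) + \lambda^\top c(x) + \frac{\rho}{2}\,\lVert c(x)\rVert^2
```

The quadratic penalty term discourages constraint violation while the multiplier term keeps the stationary points of the Lagrangian aligned with those of the original problem; the stochastic line search adapts stepsizes when f and c are only accessible through noisy estimates.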

Open Problems in Modern Convex Optimization

MBZUAI · · Research Optimization

Alexander Gasnikov from the Moscow Institute of Physics and Technology presented a talk on open problems in convex optimization. The talk covered stochastic averaging vs stochastic average approximation, saddle-point problems and accelerated methods, homogeneous federated learning, and decentralized optimization. Gasnikov's research focuses on optimization algorithms and he has published in NeurIPS, ICML, EJOR, OMS, and JOTA. Why it matters: While the talk itself isn't directly related to GCC AI, understanding convex optimization is crucial for advancing machine learning algorithms used in the region.