Researchers from MBZUAI and King's College London have developed a new prompting strategy called self-guided exploration to improve LLM performance on combinatorial problems. The method was tested on complex challenges like the traveling salesman problem. The findings will be presented at the 38th Annual Conference on Neural Information Processing Systems (NeurIPS) in Vancouver. Why it matters: This research could lead to practical applications of LLMs in industries like logistics, planning, and scheduling by offering new approaches to computationally complex problems.
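Whatever the prompting strategy proposes, evaluating it on the traveling salesman problem needs the same two pieces: a tour-length evaluator and a gap-to-optimum metric. Below is a minimal, hypothetical harness; the `propose_tour` stub stands in for the LLM proposer (the paper's actual prompting loop is not shown here), and all names are illustrative.

```python
import itertools
import math
import random

def tour_length(points, tour):
    """Total length of a closed tour visiting 2D points in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def propose_tour(n, rng):
    """Stand-in proposer: a random permutation. In the paper, an LLM would
    propose and refine tours from feedback; this stub keeps the harness runnable."""
    tour = list(range(n))
    rng.shuffle(tour)
    return tour

rng = random.Random(0)
points = [(rng.random(), rng.random()) for _ in range(7)]

# Best tour found by the proposer over a fixed budget of 200 proposals.
best = min((propose_tour(7, rng) for _ in range(200)),
           key=lambda t: tour_length(points, t))

# Exact optimum by brute force (7! = 5040 permutations), for the gap metric.
optimal = min(itertools.permutations(range(7)),
              key=lambda t: tour_length(points, t))

gap = tour_length(points, best) / tour_length(points, optimal)
print(round(gap, 2))
```

A gap of 1.0 means the proposer matched the optimum; any stronger proposer (including a prompting loop) plugs into the same harness.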
This paper introduces Diffusion-BBO, a new online black-box optimization (BBO) framework that uses a conditional diffusion model as an inverse surrogate model. An Uncertainty-aware Exploration (UaE) acquisition function proposes target scores in the objective space, which the diffusion model then conditions on to sample new candidate designs. The approach is shown theoretically to achieve a near-optimal solution and empirically outperforms existing online BBO baselines across 6 scientific discovery tasks.
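In spirit, the framework alternates between choosing a target score and sampling a design conditioned on it. The sketch below is a heavily simplified stand-in, assuming a toy objective and replacing both the conditional diffusion model and the UaE acquisition with crude heuristics (`conditional_sample` and the `target` rule are illustrative inventions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy black-box objective with its maximum (0.0) at x = 0.
    return -float(np.sum(x ** 2))

def conditional_sample(xs, ys, target_y, rng):
    # Stand-in for the inverse surrogate p(x | y): instead of a trained
    # conditional diffusion model, sample near the designs whose observed
    # scores are closest to the requested target score.
    idx = np.argsort(np.abs(np.asarray(ys) - target_y))[:3]
    anchor = np.mean([xs[i] for i in idx], axis=0)
    return anchor + 0.1 * rng.standard_normal(anchor.shape)

# Online loop: pick a target score, sample a design conditioned on it,
# evaluate the true objective, and grow the dataset.
xs = [rng.standard_normal(2) for _ in range(5)]
ys = [objective(x) for x in xs]
for _ in range(30):
    target = max(ys) + 0.1   # crude stand-in for the UaE acquisition score
    x_new = conditional_sample(xs, ys, target, rng)
    xs.append(x_new)
    ys.append(objective(x_new))

print(round(max(ys), 3))
```

The point of the inverse-surrogate view is visible even in this caricature: the optimizer never models y = f(x) directly; it asks "which x would score y?" and steers by choosing which y to ask for.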
Abu Dhabi's Technology Innovation Institute (TII) has developed a new quantum optimization solver in collaboration with NVIDIA, Los Alamos National Laboratory, and Caltech. The solver addresses large-scale combinatorial optimization problems using a small number of qubits, encoding over 7000 variables with only 17 qubits. Published in Nature Communications, the research demonstrates a hybrid quantum-classical algorithm with a novel encoding scheme that maximizes the use of quantum resources. Why it matters: This advancement marks a significant step toward practical quantum computing applications in the UAE and beyond, particularly in solving complex optimization challenges across various sectors.
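As a back-of-the-envelope check of why so few qubits can cover so many variables (this is not the paper's actual encoding scheme): a register of q qubits has 2^q computational basis states, so even naive label-per-basis-state packing fits 7000 variables in 13 qubits, comfortably under the 17 reported.

```python
from math import ceil, log2

def min_index_qubits(n_vars):
    # Qubits needed so that each classical variable can be labelled
    # by a distinct computational basis state.
    return ceil(log2(n_vars))

print(min_index_qubits(7000))   # -> 13, since 2**13 = 8192 >= 7000
print(2 ** 17)                  # -> 131072 basis states on 17 qubits
```

The paper's hybrid scheme is far more involved than basis-state labelling, but the exponential headroom is the resource it exploits.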
Alexander Gasnikov from the Moscow Institute of Physics and Technology presented a talk on open problems in convex optimization. The talk covered stochastic approximation versus sample average approximation, saddle-point problems and accelerated methods, homogeneous federated learning, and decentralized optimization. Gasnikov's research focuses on optimization algorithms; he has published in NeurIPS, ICML, EJOR, OMS, and JOTA. Why it matters: While the talk itself isn't directly related to GCC AI, understanding convex optimization is crucial for advancing machine learning algorithms used in the region.
This paper addresses exploration in reinforcement learning (RL) in unknown environments with sparse rewards, focusing on maximum entropy exploration. It introduces a game-theoretic algorithm for visitation entropy maximization with an improved sample complexity of O(H^3S^2A/ε^2). For trajectory entropy, the paper presents an algorithm with O(poly(S, A, H)/ε) sample complexity, showing the statistical advantage of regularized MDPs for exploration. Why it matters: The research offers new techniques to reduce the sample complexity of RL, potentially enhancing the efficiency of AI agents in complex environments.
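Visitation entropy itself is easy to state: it is the Shannon entropy of the distribution over visited states, maximized by policies that spread their visits uniformly. A small sketch of the standard definition (not the paper's algorithm):

```python
import numpy as np

def visitation_entropy(counts):
    """Shannon entropy (in nats) of the empirical state-visitation distribution."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log(p)).sum())

# A policy spreading 100 visits uniformly over S = 4 states attains the
# maximum log(S); a policy stuck in one state scores near 0.
uniform = visitation_entropy([25, 25, 25, 25])
skewed = visitation_entropy([97, 1, 1, 1])
print(round(uniform, 3), round(skewed, 3))  # -> 1.386 0.168
```

The algorithms in the paper seek policies whose induced visitation distribution maximizes this quantity, using as few environment interactions as possible.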
KAUST held a research workshop on Optimization and Big Data, gathering researchers to discuss challenges and opportunities in the field. Speakers presented novel optimization algorithms and distributed systems for handling large datasets. The workshop featured 20 speakers from KAUST, global universities, and Microsoft Research. Why it matters: The event highlights KAUST's role as a regional hub for advancing research and development in big data and optimization, crucial for AI and various computational fields.
MBZUAI researchers have developed SVRPBench, a new open benchmark for testing vehicle routing algorithms under real-world conditions. SVRPBench simulates unpredictable urban delivery scenarios including rush-hour traffic, accidents, and customer delivery time preferences. The benchmark uses realistic city models with clustered customer locations, unlike existing deterministic benchmarks. Why it matters: This benchmark offers a more practical evaluation for vehicle routing algorithms, potentially leading to significant cost savings and improved efficiency in logistics within the region and beyond.
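A stochastic VRP benchmark differs from a deterministic one mainly in that travel times are random and time-dependent. The toy generator below illustrates that idea with an invented rush-hour multiplier and lognormal noise; it is a sketch in the benchmark's spirit, not SVRPBench's actual model.

```python
import random

def sample_travel_time(base_minutes, depart_hour, rng):
    """Draw a stochastic travel time: multiplicative lognormal noise plus an
    (illustrative) rush-hour surcharge at 8-9am and 5-6pm."""
    rush = 1.6 if depart_hour in (8, 9, 17, 18) else 1.0
    noise = rng.lognormvariate(0.0, 0.25)  # uncertainty around the base time
    return base_minutes * rush * noise

rng = random.Random(42)
offpeak = [sample_travel_time(20, 13, rng) for _ in range(1000)]
peak = [sample_travel_time(20, 8, rng) for _ in range(1000)]
print(sum(peak) / len(peak) > sum(offpeak) / len(offpeak))
```

A deterministic solver that plans against `base_minutes` alone will systematically miss delivery windows under a model like this, which is exactly the gap the benchmark is built to expose.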
KAUST Professor Peter Richtárik received a Distinguished Speaker Award at the Sixth International Conference on Continuous Optimization (ICCOPT 2019) in Berlin. Richtárik's lecture series, totaling six hours, focused on stochastic gradient descent (SGD) methods, drawing from recent research by his KAUST group. He highlighted key principles and new variants of SGD, the key method for training modern machine learning models. Why it matters: This award recognizes KAUST's contribution to fundamental machine learning optimization, which is critical for advancing AI in the region.