KAUST researchers developed a new algorithm for detecting cause and effect in large datasets. The algorithm aims to find underlying models that generate data, helping uncover cause-and-effect dynamics. It could aid researchers across fields like cell biology and genetics by answering questions that typical machine learning cannot. Why it matters: This advancement could equip current machine learning methods with abilities to better deal with abstraction, inference, and concepts such as cause and effect.
KAUST researchers developed a machine learning algorithm to control a deformable mirror within the Subaru Telescope's exoplanet imaging camera, compensating for atmospheric turbulence. The algorithm computes a partial singular value decomposition (SVD) and runs roughly four times faster than a standard full SVD. The KAUST team received a best paper award at the PASC Conference for this work, which has already been deployed at the Subaru Telescope. Why it matters: This advancement enables sharper images of exoplanets, facilitating their identification and study, and showcases the impact of optimizing core linear algebra algorithms.
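The source of the speedup, computing only the leading singular triplets instead of all of them, can be illustrated with SciPy's generic partial-SVD routine `svds`. This is a minimal sketch with a toy random matrix standing in for the telescope's interaction matrix; the KAUST team's actual implementation is not reproduced here.

```python
import numpy as np
from scipy.sparse.linalg import svds

# Toy stand-in for a wavefront-sensor interaction matrix; the real
# Subaru control matrices are much larger and are not shown here.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 200))

# Partial SVD: compute only the k leading singular triplets instead of
# all min(m, n) of them. This truncation is where the speedup comes from.
k = 20
U, s, Vt = svds(A, k=k)

# A regularized pseudo-inverse built from the leading modes, the usual
# ingredient when turning sensor measurements into mirror commands.
A_pinv_k = Vt.T @ np.diag(1.0 / s) @ U.T
print(A_pinv_k.shape)  # (200, 500)
```

Truncating to the top k modes also regularizes the control problem, since the smallest singular values (which amplify sensor noise when inverted) are discarded.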
Researchers from MBZUAI presented a new algorithm at ICLR 2024 that identifies causal relationships involving both observed and latent variables. The algorithm addresses limitations of existing methods that struggle with latent variables or assume observed variables don't directly influence latent variables. The proposed algorithm can accommodate both scenarios, offering a more generalizable approach to causal discovery. Why it matters: This research advances the development of AI systems that can analyze complex data and identify causal relationships, with potential applications in fields like medicine where understanding causality is crucial for developing treatments and preventative measures.
KAUST Ph.D. student Jinhui Xiong won the best paper award at the 24th International Symposium on Vision, Modeling, and Visualization in Germany for his paper "Stochastic Convolutional Sparse Coding". The paper, co-authored with KAUST Professors Peter Richtárik and Wolfgang Heidrich, introduces a novel stochastic spatial-domain solver for Convolutional Sparse Coding (CSC). The proposed algorithm outperforms state-of-the-art solutions in terms of execution time and offers an improved representation for learning dictionaries from sample images. Why it matters: This award recognizes significant research in efficient image representation and dictionary learning, contributing to advancements in visual computing and AI at KAUST.
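The paper's solver is not reproduced here, but the proximal update at the heart of most sparse coding methods, convolutional variants included, is the l1 soft-thresholding operator. Below is a minimal sketch of one ISTA-style step; the dictionary `D`, signal `y`, and step size are toy assumptions, not the paper's setup.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrink toward zero by lam,
    zeroing any coefficient whose magnitude is below lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# One ISTA step for min_z 0.5 * ||D z - y||^2 + lam * ||z||_1.
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))   # toy dictionary
y = rng.standard_normal(20)         # toy signal
z = np.zeros(50)
step = 1.0 / np.linalg.norm(D, 2) ** 2  # standard ISTA step size
z = soft_threshold(z - step * D.T @ (D @ z - y), step * 0.1)
# The thresholding step is what produces a sparse code z.
```

Convolutional sparse coding replaces the dense dictionary `D` with a set of convolutional filters, but the same shrinkage update drives the sparsity.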
KAUST Ph.D. student Lukas Larisch won the optimal tree decomposition track of the Parameterized Algorithms and Computational Experiments (PACE) 2017 Challenge, solving more instances than his competitors. He received the award at the International Symposium on Parameterized and Exact Computation (IPEC 2017) in Vienna, Austria. Larisch is pursuing his Ph.D. at KAUST and working in the University's Extreme Computing Research Center, focusing on acoustics and graph structure theory. Why it matters: This recognition highlights KAUST's contribution to advanced computer science research and its ability to attract and foster talented researchers in niche areas like parameterized complexity.
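For readers unfamiliar with the problem: a tree decomposition groups a graph's vertices into overlapping bags arranged as a tree, and the width is the largest bag size minus one. NetworkX ships a min-degree heuristic that upper-bounds the treewidth; the PACE challenge, by contrast, required provably optimal decompositions, so this sketch only illustrates the object being computed.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# A 6-cycle has treewidth 2; the min-degree heuristic finds this bound.
G = nx.cycle_graph(6)
width, decomposition = treewidth_min_degree(G)

# Each node of `decomposition` is a bag (a frozenset of vertices of G);
# the reported width is the largest bag size minus one.
print(width)  # 2
print(max(len(bag) for bag in decomposition.nodes) - 1)  # 2, same as width
```

Many NP-hard graph problems become tractable on graphs of small treewidth via dynamic programming over the decomposition, which is why computing good decompositions is a challenge track in its own right.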
This paper addresses exploration in reinforcement learning (RL) in unknown environments with sparse rewards, focusing on maximum entropy exploration. It introduces a game-theoretic algorithm for visitation entropy maximization with improved sample complexity of O(H^3 S^2 A / ε^2), where S and A are the numbers of states and actions, H is the horizon, and ε is the target accuracy. For trajectory entropy, the paper presents an algorithm with O(poly(S, A, H)/ε) complexity, showing the statistical advantage of regularized MDPs for exploration. Why it matters: The research offers new techniques to reduce the sample complexity of RL, potentially enhancing the efficiency of AI agents in complex environments.
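To make the objective concrete: maximum-entropy exploration seeks a policy whose state-visitation distribution is as close to uniform as possible. The following sketch computes the Shannon entropy being maximized; it is illustrative only, not the paper's game-theoretic algorithm.

```python
import numpy as np

def visitation_entropy(counts):
    """Shannon entropy of an empirical state-visitation distribution.

    counts[s] is how often state s was visited. A maximum-entropy
    explorer tries to push this distribution toward uniform, whose
    entropy log(S) is the largest attainable over S states.
    """
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    nz = p > 0  # 0 * log(0) is taken to be 0
    return float(-(p[nz] * np.log(p[nz])).sum())

print(round(visitation_entropy([10, 10, 10, 10]), 3))  # 1.386, i.e. log(4)
print(visitation_entropy([37, 1, 1, 1]) < np.log(4))   # True: skewed visits
```

Trajectory entropy, the paper's second objective, is defined over whole trajectories rather than per-state visit counts, which is what changes the achievable sample complexity.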
Alexander Gasnikov from the Moscow Institute of Physics and Technology presented a talk on open problems in convex optimization. The talk covered stochastic approximation versus sample average approximation, saddle-point problems and accelerated methods, homogeneous federated learning, and decentralized optimization. Gasnikov's research focuses on optimization algorithms, and he has published in NeurIPS, ICML, EJOR, OMS, and JOTA. Why it matters: While the talk is not directly about AI in the GCC, convex optimization underpins the machine learning algorithms used across the region.