GCC AI Research

Results for "Normalizing Flows"

Generative models, manifolds and symmetries: From QFT to molecules

MBZUAI ·

A DeepMind researcher presented work on incorporating symmetries into machine learning models, with applications to lattice QCD and molecular dynamics. The work includes permutation- and translation-invariant normalizing flows for free-energy estimation in molecular dynamics. They also presented U(N) and SU(N) gauge-equivariant normalizing flows for pure-gauge simulations, along with extensions that incorporate fermions in lattice QCD. Why it matters: Applying symmetry principles to generative models could improve AI's ability to model complex physical systems relevant to materials science and other fields in the region.
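To illustrate the invariance idea (this is a minimal sketch, not the presented models): a permutation-invariant flow starts from a base density that is unchanged when the particle ordering is shuffled. The function and variable names below are illustrative; summing an identical per-particle term over rows gives the invariance by construction.

```python
import numpy as np

def perm_invariant_log_prob(x):
    """Standard-normal log-density summed over all particles and dimensions.
    Summing an identical per-particle term makes the result invariant to
    reordering the particles (rows of x)."""
    return float(np.sum(-0.5 * x**2 - 0.5 * np.log(2 * np.pi)))

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))          # 5 particles in 3 dimensions
perm = rng.permutation(5)            # an arbitrary reordering of the particles

lp_original = perm_invariant_log_prob(x)
lp_permuted = perm_invariant_log_prob(x[perm])
# the two log-probabilities agree for any particle ordering
```

Equivariant flow layers then transform this base density while preserving the symmetry, so the resulting model density stays permutation-invariant.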

Unscented Autoencoder

arXiv ·

The paper introduces the Unscented Autoencoder (UAE), a novel deep generative model based on the Variational Autoencoder (VAE) framework. The UAE uses the Unscented Transform (UT) for a more informative posterior representation than the reparameterization trick used in VAEs. It replaces the Kullback-Leibler (KL) divergence with the Wasserstein distance and demonstrates competitive Fréchet Inception Distance (FID) scores.
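The core of the Unscented Transform can be sketched in a few lines (an illustrative implementation, not the paper's code): instead of one random reparameterization sample, a Gaussian posterior is represented by 2n+1 deterministic sigma points whose weighted mean and covariance exactly match the posterior's. The `kappa` scaling parameter is a standard UT knob.

```python
import numpy as np

def sigma_points(mu, cov, kappa=1.0):
    """Return 2n+1 deterministic sigma points and weights whose weighted
    mean and covariance exactly reproduce (mu, cov)."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * cov)     # scaled matrix square root
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w0 = kappa / (n + kappa)
    w = np.array([w0] + [1.0 / (2 * (n + kappa))] * (2 * n))
    return np.array(pts), w

mu = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])
pts, w = sigma_points(mu, cov)
recovered_mean = w @ pts    # equals mu exactly
```

Passing all sigma points through the decoder, rather than a single sample, is what gives the UAE its more informative posterior representation.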

Gaussian Variational Inference in high dimension

MBZUAI ·

This article discusses approximating a high-dimensional distribution using Gaussian variational inference by minimizing the Kullback-Leibler divergence. It builds on previous research and approximates the minimizer with a Gaussian distribution of a specific mean and variance. The study characterizes the approximation's accuracy and applicability in terms of an effective dimension, which is relevant for analyzing sampling schemes in optimization. Why it matters: This theoretical research can inform the development of more efficient and accurate AI algorithms, particularly in areas dealing with high-dimensional data such as machine learning and data analysis.
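A toy one-dimensional sketch of the setup (not the article's high-dimensional analysis): fit a variational Gaussian q = N(m, s²) to a Gaussian target p = N(mu, sigma²) by gradient descent on the closed-form KL(q‖p). With a Gaussian target the minimizer is the target itself, so the iterates should recover (mu, sigma).

```python
import numpy as np

mu, sigma = 2.0, 0.5    # target distribution N(mu, sigma^2)

def kl_grad(m, log_s):
    """Gradients of KL(N(m, s^2) || N(mu, sigma^2)) w.r.t. m and log s,
    using KL = log(sigma/s) + (s^2 + (m - mu)^2) / (2 sigma^2) - 1/2."""
    s = np.exp(log_s)
    dm = (m - mu) / sigma**2          # d KL / d m
    dlog_s = s**2 / sigma**2 - 1.0    # d KL / d log s
    return dm, dlog_s

m, log_s = 0.0, 0.0                   # initialize q at N(0, 1)
lr = 0.1
for _ in range(500):
    dm, dls = kl_grad(m, log_s)
    m -= lr * dm
    log_s -= lr * dls
# (m, exp(log_s)) converges to (mu, sigma)
```

In high dimension the same minimization is over a mean vector and covariance matrix, and the article's effective-dimension analysis quantifies when the Gaussian approximation remains accurate.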

Diffusion-BBO: Diffusion-Based Inverse Modeling for Online Black-Box Optimization

arXiv ·

This paper introduces Diffusion-BBO, a new online black-box optimization (BBO) framework that uses a conditional diffusion model as an inverse surrogate model. The framework employs an Uncertainty-aware Exploration (UaE) acquisition function to propose scores in the objective space for conditional sampling. The approach is shown theoretically to achieve a near-optimal solution and empirically outperforms existing online BBO baselines across 6 scientific discovery tasks.
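The exact form of the UaE acquisition function is defined in the paper; as a generic illustration of the pattern it follows, the sketch below scores candidate objective values by predicted gain minus an uncertainty penalty and proposes the best one as the conditioning target. All names and numbers here are hypothetical.

```python
import numpy as np

def choose_target(candidate_ys, pred_gain, pred_std, beta=1.0):
    """Pick the candidate objective value maximizing predicted gain minus
    a beta-weighted uncertainty penalty (generic uncertainty-aware rule,
    not the paper's exact UaE formula)."""
    scores = pred_gain - beta * pred_std
    return candidate_ys[int(np.argmax(scores))]

ys   = np.array([0.2, 0.5, 0.8, 1.1])     # candidate objective values
gain = np.array([0.1, 0.4, 0.7, 0.9])     # surrogate's predicted gain
std  = np.array([0.05, 0.1, 0.2, 0.8])    # uncertainty grows far from data

target = choose_target(ys, gain, std)     # selects 0.8: high gain, modest risk
```

The chosen target value would then condition the diffusion model, which samples candidate designs consistent with that objective score.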

Golden Noise and Zigzag Sampling of Diffusion Models

MBZUAI ·

Dr. Zeke Xie from HKUST(GZ) presented research on noise initialization and sampling strategies for diffusion models. The talk covered golden noise for text-to-image models, zigzag diffusion sampling, smooth initializations for video diffusion, and leveraging image diffusion for video synthesis. Xie leads the xLeaF Lab, focusing on optimization, inference, and generative AI, with previous experience at Baidu Research. Why it matters: The work addresses core challenges in improving the quality and diversity of generated content from diffusion models, a key area of advancement for AI applications in the region.

A Unified Deep Model of Learning from both Data and Queries for Cardinality Estimation

arXiv ·

This paper introduces a unified deep autoregressive model (UAE) for cardinality estimation that learns joint data distributions from both data and query workloads. It uses differentiable progressive sampling with the Gumbel-Softmax trick to incorporate supervised query information into the deep autoregressive model. Experiments show UAE achieves better accuracy and efficiency compared to state-of-the-art methods.
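The Gumbel-Softmax trick mentioned above can be sketched independently of the model (an illustrative numpy version, not the paper's implementation): discrete categorical draws are relaxed into soft one-hot vectors controlled by a temperature, which is what makes the progressive sampling step differentiable.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable relaxation of a categorical sample: add Gumbel(0,1)
    noise to the logits, then apply a temperature-scaled softmax."""
    if rng is None:
        rng = np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())                               # stable softmax
    return y / y.sum()

logits = np.log(np.array([0.7, 0.2, 0.1]))   # a 3-way categorical
sample = gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0))
# sample sums to 1; as tau -> 0 it approaches a hard one-hot argmax
```

Because the relaxed sample is a smooth function of the logits, gradients from the query-driven loss can flow back into the autoregressive model's parameters.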

Two weak assumptions, one strong result presented at ICLR

MBZUAI ·

MBZUAI researchers presented a new machine learning method at ICLR for uncovering hidden variables from observed data. The method, called "complementary gains," combines two weak assumptions to provide identifiability guarantees. This approach aims to recover true latent variables reflecting real-world processes, while solving problems efficiently. Why it matters: The research advances disentangled representation learning by finding minimal assumptions necessary for identifiability, improving the applicability of AI models to real-world data.

Adapting to Distribution Shifts: Recent Advances in Importance Weighting Methods

MBZUAI ·

This article discusses distribution shifts in machine learning and the use of importance weighting methods to address them. Masashi Sugiyama from the University of Tokyo and RIKEN AIP presented recent advances in importance-based distribution shift adaptation methods. The talk covered joint importance-predictor estimation, dynamic importance weighting, and multistep class prior shift adaptation. Why it matters: Understanding and mitigating distribution shifts is crucial for deploying robust and reliable AI models in real-world scenarios within the GCC region and beyond.
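The basic importance-weighting idea underlying these methods can be shown in a toy covariate-shift example (a minimal sketch with synthetic 1D Gaussians, not from the talk): training-set quantities are reweighted by the density ratio p_test(x) / p_train(x). Here the ratio is known in closed form; in practice it must be estimated, e.g. by direct density-ratio estimation or a probabilistic classifier.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x_train = rng.normal(0.0, 1.0, size=10_000)   # training distribution N(0, 1)
# density ratio p_test / p_train for a shifted test distribution N(1, 1)
w = gauss_pdf(x_train, 1.0, 1.0) / gauss_pdf(x_train, 0.0, 1.0)

# self-normalized importance-weighted mean of x over the training sample
# approximates the test-distribution mean of 1.0
est_test_mean = np.mean(w * x_train) / np.mean(w)
```

The same reweighting applied to per-example losses turns empirical risk on the training set into an estimate of risk under the shifted test distribution, which is the backbone of the joint and dynamic weighting methods covered in the talk.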