Ahmed Elhag, a PhD student at the University of Oxford, presented a new training procedure that approximates equivariance in unconstrained machine learning models via a multitask objective. The approach adds an equivariance loss to an otherwise unconstrained model, letting it learn approximate symmetries without the computational cost of fully equivariant architectures. Because equivariance is formulated as a flexible learning objective, the extent of symmetry enforced can be tuned, and the method matches strictly equivariant baselines at lower cost. Why it matters: This research from a speaker at MBZUAI balances rigorous theory and practical scalability in geometric deep learning, potentially accelerating drug discovery and design.
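To make the objective concrete, here is a minimal sketch of such a multitask loss, assuming a point-cloud model whose outputs should rotate with its inputs; the QR-based rotation sampler, the helper names, and the weight `lam` are illustrative choices, not details from the talk:

```python
import torch

def random_rotation(dim: int = 3) -> torch.Tensor:
    # Sample a rotation matrix: QR-decompose a Gaussian matrix, then fix
    # signs so the result is a proper rotation (det = +1).
    q, r = torch.linalg.qr(torch.randn(dim, dim))
    q = q * torch.sign(torch.diagonal(r))
    if torch.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

def equivariance_loss(model, x):
    # Penalize the gap between "rotate, then predict" and "predict, then
    # rotate". Assumes x and model(x) are both (..., 3) point clouds.
    R = random_rotation()
    return ((model(x @ R.T) - model(x) @ R.T) ** 2).mean()

def multitask_loss(model, x, y, task_loss_fn, lam=1.0):
    # Task loss plus a soft equivariance penalty.
    return task_loss_fn(model(x), y) + lam * equivariance_loss(model, x)
```

Setting `lam` to zero recovers the plain unconstrained model, while larger values push it toward strict equivariance; that weighting is the control knob over how much symmetry is enforced.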
The paper introduces UAE-3D, a multi-modal VAE for 3D molecule generation that compresses molecules into a unified latent space while maintaining near-zero reconstruction error. This simplifies latent diffusion modeling by removing the need to handle multi-modality and equivariance separately. Experiments on the GEOM-Drugs and QM9 datasets show that UAE-3D sets new benchmarks in de novo and conditional 3D molecule generation, with significant gains in both efficiency and quality.
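A toy version of the unified-latent idea might look like the sketch below, which fuses atom-type and coordinate features into a single per-atom latent code; the actual UAE-3D architecture is considerably more elaborate, and every layer size and name here is invented for illustration:

```python
import torch
import torch.nn as nn

class UnifiedMolVAE(nn.Module):
    # Toy sketch: embed both modalities, fuse them into ONE latent vector
    # per atom, and decode both modalities back out of that shared code.
    def __init__(self, n_atom_types=16, d_latent=32, d_hidden=64):
        super().__init__()
        self.type_emb = nn.Embedding(n_atom_types, d_hidden)
        self.coord_in = nn.Linear(3, d_hidden)
        self.to_stats = nn.Linear(2 * d_hidden, 2 * d_latent)  # mean, log-var
        self.decode_type = nn.Linear(d_latent, n_atom_types)
        self.decode_xyz = nn.Linear(d_latent, 3)

    def forward(self, atom_types, coords):
        # atom_types: (B, N) integer labels; coords: (B, N, 3) positions.
        h = torch.cat([self.type_emb(atom_types), self.coord_in(coords)], dim=-1)
        mu, logvar = self.to_stats(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decode_type(z), self.decode_xyz(z), mu, logvar
```

The payoff of the shared code `z` is that a downstream latent diffusion model sees one ordinary vector per atom, with no special machinery for separate modalities or for equivariance.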
The paper introduces the Prism Hypothesis, which posits a correspondence between an encoder's feature spectrum and its functional role, with semantic encoders capturing low-frequency components and pixel encoders retaining high-frequency information. Based on this, the authors propose Unified Autoencoding (UAE), a model that harmonizes semantic structure and pixel details using a frequency-band modulator. Experiments on ImageNet and MS-COCO demonstrate that UAE effectively unifies semantic abstraction and pixel-level fidelity, achieving state-of-the-art performance.
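The paper's modulator is not reproduced here, but its underlying mechanic can be sketched as a radial split of a feature map's spectrum, with the band weights standing in for whatever learned modulation UAE applies; `cutoff`, `w_low`, and `w_high` are hypothetical parameters:

```python
import torch

def frequency_band_modulate(feat, cutoff=0.25, w_low=1.0, w_high=1.0):
    # feat: (B, C, H, W). Split the spectrum into a low band (semantic
    # structure) and a high band (pixel detail), reweight each band, and
    # transform back. cutoff is a fraction of the normalized frequency range.
    B, C, H, W = feat.shape
    spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    fy = torch.linspace(-0.5, 0.5, H).view(H, 1)
    fx = torch.linspace(-0.5, 0.5, W).view(1, W)
    low = ((fy**2 + fx**2).sqrt() <= cutoff).to(spec.dtype)  # radial mask
    spec = spec * (w_low * low + w_high * (1 - low))
    return torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
```

Emphasizing the low band mimics a semantic encoder's features, while emphasizing the high band preserves pixel detail, which is the Prism Hypothesis read as a tunable knob.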
The paper introduces the Unscented Autoencoder (UAE), a deep generative model based on the Variational Autoencoder (VAE) framework. In place of the reparameterization trick's single random draw, the UAE uses the Unscented Transform (UT), which propagates a small set of deterministic sigma points, for a more informative posterior representation. It also replaces the Kullback-Leibler (KL) divergence with the Wasserstein metric and achieves competitive Fréchet Inception Distance (FID) scores.
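For a diagonal Gaussian posterior, the UT's deterministic sigma points can be sketched as follows; `kappa` is the usual UT spread parameter, and the full UAE training recipe, including its Wasserstein regularizer, is not reproduced here:

```python
import torch

def sigma_points(mu, logvar, kappa=1.0):
    # Deterministic sigma points of N(mu, diag(exp(logvar))): 2d+1 weighted
    # points per sample instead of a single random reparameterized draw.
    d = mu.shape[-1]
    std = (0.5 * logvar).exp()
    offsets = ((d + kappa) ** 0.5) * std.unsqueeze(-2) * torch.eye(d)  # (..., d, d)
    center = mu.unsqueeze(-2)                                          # (..., 1, d)
    pts = torch.cat([center, center + offsets, center - offsets], dim=-2)
    weights = torch.full((2 * d + 1,), 1.0 / (2 * (d + kappa)))
    weights[0] = kappa / (d + kappa)                # weights sum to 1
    return pts, weights                             # pts: (..., 2d+1, d)
```

Decoding every sigma point and combining the results with `weights` exposes the decoder to the posterior's spread rather than to one random sample, which is the sense in which the representation is more informative.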
A DeepMind researcher presented work on incorporating symmetries into machine learning models, with applications to lattice QCD and molecular dynamics. The work includes permutation- and translation-invariant normalizing flows for free-energy estimation in molecular dynamics, as well as U(N) and SU(N) gauge-equivariant normalizing flows for pure gauge simulations and their extensions incorporating fermions in lattice QCD. Why it matters: Applying symmetry principles to generative models could improve AI's ability to model complex physical systems relevant to materials science and other fields in the region.
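Both molecular-dynamics symmetries can be written down in their simplest form, as sketched below for a coupling-flow conditioner; these are the generic mechanisms, not the presented architectures, and all names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class CenterOfMass(nn.Module):
    # Translation invariance: work in the zero-center-of-mass frame, so the
    # modeled density no longer depends on where the system sits in space.
    def forward(self, x):  # x: (B, N, 3) particle coordinates
        return x - x.mean(dim=1, keepdim=True)

class DeepSetsConditioner(nn.Module):
    # Permutation equivariance: each particle's flow parameters depend on
    # the others only through a symmetric sum, so relabeling the particles
    # relabels the outputs identically.
    def __init__(self, d=3, h=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d, h), nn.ReLU(), nn.Linear(h, h))
        self.rho = nn.Sequential(nn.Linear(2 * h, h), nn.ReLU(), nn.Linear(h, 2 * d))

    def forward(self, x):  # x: (B, N, 3) -> per-particle (shift, log-scale)
        e = self.phi(x)
        pooled = e.sum(dim=1, keepdim=True).expand_as(e)
        return self.rho(torch.cat([e, pooled], dim=-1))
```

The gauge-equivariant flows for lattice QCD bake their symmetry group into the flow layers in an analogous, though far more involved, way.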