Efficiently Approximating Equivariance in Unconstrained Models
Ahmed Elhag, a PhD student at the University of Oxford, presented a new training procedure that approximates equivariance in unconstrained machine learning models via a multitask objective. The approach adds an equivariance loss to unconstrained models, allowing them to learn approximate symmetries without the computational cost of fully equivariant architectures. Formulating equivariance as a flexible learning objective gives control over how strictly symmetry is enforced, and the resulting models match strictly equivariant baselines at lower cost. Why it matters: This research from a speaker at MBZUAI balances rigorous theory and practical scalability in geometric deep learning, potentially accelerating drug discovery and design.
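The core idea, a task loss plus a weighted equivariance penalty, can be sketched in a few lines. The sketch below is illustrative, not the presented method: it uses a toy linear model, 2D rotations as the symmetry group, and a hypothetical weight `lam` that plays the role of the control knob over how much symmetry is enforced.

```python
import numpy as np

def rotate(x, theta):
    """Apply a 2D rotation (a group action g) to points of shape (n, 2)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return x @ R.T

def model(x, W):
    """A toy unconstrained model: a single linear map on 2D points."""
    return x @ W

def equivariance_loss(x, W, thetas):
    """Penalize || f(g.x) - g.f(x) ||^2, averaged over sampled group elements."""
    errs = [np.mean((model(rotate(x, t), W) - rotate(model(x, W), t)) ** 2)
            for t in thetas]
    return np.mean(errs)

def multitask_loss(x, y, W, thetas, lam=1.0):
    """Task loss plus weighted equivariance penalty; lam (hypothetical name)
    controls how strongly symmetry is enforced (lam=0 recovers the plain task)."""
    task = np.mean((model(x, W) - y) ** 2)
    return task + lam * equivariance_loss(x, W, thetas)
```

A model that already commutes with the group (here, any scalar multiple of the identity) incurs zero penalty, while a generic weight matrix is pushed toward approximate equivariance as `lam` grows.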