GCC AI Research

Results for "geometric deep learning"

Deep Surface Meshes

MBZUAI

Pascal Fua from EPFL presented an approach to implementing convolutional neural nets that output complex 3D surface meshes. The method overcomes limitations in converting implicit representations to explicit surface representations. Applications include single view reconstruction, physically-driven shape optimization, and bio-medical image segmentation. Why it matters: This research advances geometric deep learning by enabling end-to-end trainable models for 3D surface mesh generation, with potential impact on various applications in computer vision and biomedical imaging in the region.

A Geometric Understanding of Deep Learning

MBZUAI

This article discusses a talk by Dr. David Xianfeng Gu at MBZUAI on gaining a geometric understanding of deep learning. The talk addresses what a deep-learning system learns, how it learns, and how the learning process can be improved. Dr. Gu is a professor at Stony Brook University (SUNY) and is affiliated with several other prestigious institutions. Why it matters: Understanding the fundamentals of deep learning is crucial for advancing AI research and development in the region.

Actionable and responsible AI in Medicine: a geometric deep learning approach

MBZUAI

Pietro Liò from the University of Cambridge will discuss geometric deep learning techniques for building a digital patient twin using graph and hypergraph representation learning. The talk will focus on integrating computational biology and deep learning, taking physiological, clinical, and molecular variables into account. He will also cover explainable methodologies for clinicians and protein design using diffusion models. Why it matters: This highlights the growing interest in applying advanced AI techniques like geometric deep learning and diffusion models to healthcare challenges in the region, particularly for personalized medicine.

Efficiently Approximating Equivariance in Unconstrained Models

MBZUAI

Ahmed Elhag, a PhD student at the University of Oxford, presented a new training procedure that approximates equivariance in unconstrained machine learning models via a multitask objective. The approach adds an equivariance loss to unconstrained models, allowing them to learn approximate symmetries without the computational cost of fully equivariant methods. Formulating equivariance as a flexible learning objective allows control over the extent of symmetry enforced, matching the performance of strictly equivariant baselines at a lower cost. Why it matters: This research from a speaker at MBZUAI balances rigorous theory and practical scalability in geometric deep learning, potentially accelerating drug discovery and design.
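The core idea, adding an equivariance penalty to an otherwise unconstrained model, can be illustrated with a minimal sketch. The linear model, the 2D rotation group, and the loss weight below are illustrative assumptions, not details from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unconstrained model: a plain linear map with no built-in symmetry.
W = rng.normal(size=(2, 2))

def model(x):
    return x @ W.T

def rotate(x, theta):
    # Group action: 2D rotation applied to inputs or outputs.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return x @ R.T

def equivariance_loss(x, theta):
    # Penalise the gap between "rotate then predict" and "predict then rotate".
    return np.mean((model(rotate(x, theta)) - rotate(model(x), theta)) ** 2)

x = rng.normal(size=(8, 2))
theta = np.pi / 4
task_loss = np.mean((model(x) - x) ** 2)  # stand-in for the actual task objective
lam = 0.5                                  # weight controlling how much symmetry is enforced
total = task_loss + lam * equivariance_loss(x, theta)
```

The weight `lam` is where the "flexible learning objective" enters: raising it pushes the model toward strict equivariance, lowering it relaxes the constraint.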

Generative models, manifolds and symmetries: From QFT to molecules

MBZUAI

A DeepMind researcher presented work on incorporating symmetries into machine learning models, with applications to lattice QCD and molecular dynamics. The work includes permutation- and translation-invariant normalizing flows for free-energy estimation in molecular dynamics. They also presented U(N) and SU(N) gauge-equivariant normalizing flows for pure gauge simulations, along with extensions that incorporate fermions in lattice QCD. Why it matters: Applying symmetry principles to generative models could improve AI's ability to model complex physical systems relevant to materials science and other fields in the region.
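The invariance property these flows target can be demonstrated with a toy example: any log-density built purely from pairwise distances is unchanged by permuting particles or translating the whole configuration. The energy function below is a hypothetical stand-in, not one of the presented models:

```python
import numpy as np

rng = np.random.default_rng(3)

def invariant_energy(x):
    # Toy log-density built from pairwise distances: permuting the particles
    # or translating the whole configuration leaves its value unchanged.
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return -np.sum(np.triu(d, k=1) ** 2)  # sum over unordered pairs i < j

x = rng.normal(size=(5, 3))       # 5 particles in 3D
perm = rng.permutation(5)
shift = rng.normal(size=3)

e0 = invariant_energy(x)
e_perm = invariant_energy(x[perm])   # relabel the particles
e_shift = invariant_energy(x + shift)  # translate the whole system
```

Normalizing flows built for this setting are constructed so that the learned density inherits exactly these invariances by design rather than by penalty.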

Understanding modern machine learning models through the lens of high-dimensional statistics

MBZUAI

This talk explores modern machine learning through high-dimensional statistics, using random matrix theory to analyze learning models. The speaker, Denny Wu from the University of Toronto and the Vector Institute, presents two examples: hyperparameter selection in overparameterized models and gradient-based representation learning in neural networks. The analysis reveals insights such as the possibility of a negative optimal ridge penalty and the advantages of feature learning over random features. Why it matters: This research provides a deeper theoretical understanding of deep learning phenomena, with potential implications for optimizing training and improving model performance in the region.

Temporally Evolving Generalised Networks

MBZUAI

Emilio Porcu from Khalifa University presented work on temporally evolving generalized networks, graphs whose topology changes over time. The presentation addressed the challenges of building semi-metrics and isometric embeddings for such networks. The research relies on kernel specification and network-based metrics, and is illustrated on a traffic-accident dataset. Why it matters: This work advances the application of kernel methods to dynamic graph structures, relevant for modeling evolving relationships in various domains.

Upsampling Autoencoder for Self-Supervised Point Cloud Learning

arXiv

This paper introduces a self-supervised learning method for point cloud analysis using an upsampling autoencoder (UAE). The model subsamples the input cloud and uses an encoder-decoder architecture to reconstruct the original point cloud, learning both semantic and geometric information in the process. Experiments show that the UAE outperforms existing methods on shape classification, part segmentation, and point cloud upsampling tasks.
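As a rough sketch of the self-supervised objective, assuming a Chamfer-style reconstruction loss (the paper's actual loss and network may differ); the repeat-based "reconstruction" below is a placeholder for the real encoder-decoder:

```python
import numpy as np

rng = np.random.default_rng(2)

def chamfer_distance(a, b):
    # Symmetric Chamfer distance between two point sets of shape (N, 3) and (M, 3).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Original cloud and a random subsample, mirroring the self-supervised setup.
cloud = rng.normal(size=(256, 3))
idx = rng.choice(256, size=64, replace=False)
subsampled = cloud[idx]

# A real UAE would encode `subsampled` and decode an upsampled reconstruction;
# a naive point-repetition stands in here just to exercise the objective.
reconstruction = subsampled.repeat(4, axis=0)
loss = chamfer_distance(reconstruction, cloud)
```

The training signal is free: the "label" is simply the original cloud, so minimizing this loss forces the decoder to hallucinate the points the subsampling step threw away.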