GCC AI Research


Results for "anatomical embedding"

UAE: Universal Anatomical Embedding on Multi-modality Medical Images

arXiv ·

Researchers propose a universal anatomical embedding (UAE) framework for medical image analysis that learns appearance, semantic, and cross-modality anatomical embeddings. UAE combines semantic embedding learning with a prototypical contrastive loss, a fixed-point-based matching strategy, and an iterative approach for cross-modality embedding learning. The framework was evaluated on landmark detection, lesion tracking, and CT-MRI registration, outperforming existing state-of-the-art methods.
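The summary mentions a prototypical contrastive loss for semantic embedding learning. As a minimal illustrative sketch (not the UAE authors' implementation), the core idea can be written as a cross-entropy over cosine similarities between embeddings and class prototypes: each embedding is pulled toward its own class prototype and pushed away from the others. The function name, NumPy formulation, and temperature value below are assumptions for illustration.

```python
import numpy as np

def prototypical_contrastive_loss(embeddings, labels, prototypes, temperature=0.07):
    """Illustrative sketch of a prototypical contrastive loss (assumed form).

    embeddings: (N, D) array of sample embeddings
    labels:     (N,) integer class index per sample
    prototypes: (K, D) array, one prototype vector per class
    """
    # L2-normalize so dot products become cosine similarities
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)

    # (N, K) similarity logits, sharpened by the temperature
    logits = (e @ p.T) / temperature

    # Numerically stable log-softmax over the prototype axis
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Negative log-likelihood of each sample's own class prototype
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Under this formulation the loss is small when embeddings cluster near their class prototypes and grows when they align with the wrong prototype, which is the behavior a contrastive semantic-embedding objective needs.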

Physics-Based Deep Learning for Medical Imaging

MBZUAI ·

Pascal Fua from EPFL gave a talk at MBZUAI on physics-based deep learning for medical imaging. The talk covered how self-supervision and knowledge of human anatomy and physics can improve deep learning algorithms when training data is limited. Applications discussed included endoscopic heart surgery, colonoscopy, and intubation. Why it matters: This highlights the growing importance of domain knowledge and self-supervision in overcoming data scarcity challenges for AI in healthcare applications within the region.

Deep Surface Meshes

MBZUAI ·

Pascal Fua from EPFL presented an approach to implementing convolutional neural networks that output complex 3D surface meshes. The method overcomes limitations in converting implicit representations to explicit surface representations. Applications include single-view reconstruction, physically driven shape optimization, and biomedical image segmentation. Why it matters: This research advances geometric deep learning by enabling end-to-end trainable models for 3D surface mesh generation, with potential impact on various applications in computer vision and biomedical imaging in the region.