GCC AI Research

Results for "trainable decomposition"

Orchestrated efficiency: A new technique to increase model efficiency during training

MBZUAI

MBZUAI's Samuel Horváth presented a new framework called Maestro at ICML 2024 for efficiently training machine learning models in federated settings. Maestro identifies and removes redundant components of a model through trainable decomposition to increase efficiency on edge devices. The approach decomposes layers into low-dimensional approximations, discarding the unneeded components to reduce model size. Why it matters: This research addresses the challenge of running complex models on resource-constrained devices, crucial for expanding AI applications while preserving data privacy.
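The low-rank idea behind this kind of layer decomposition can be sketched in a few lines (a hypothetical illustration, not Maestro's actual code): replacing a dense layer's m×n weight matrix with factors U (m×r) and V (r×n) shrinks the parameter count whenever the rank r is small relative to m and n.

```python
# Hypothetical sketch (not Maestro's code): parameter savings from replacing a
# dense m x n weight matrix with a rank-r factorization U (m x r) @ V (r x n).

def dense_params(m: int, n: int) -> int:
    # Parameters in an unfactorized dense layer.
    return m * n

def lowrank_params(m: int, n: int, r: int) -> int:
    # Parameters after decomposing the layer into two low-rank factors.
    return m * r + r * n

m, n, r = 512, 512, 16
print(dense_params(m, n))       # 262144
print(lowrank_params(m, n, r))  # 16384 -- a 16x reduction at rank 16
```

At rank 16 a 512×512 layer needs 16× fewer parameters; a trainable decomposition learns, per layer, how small r can be before accuracy suffers.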

A “divide-and-conquer” approach to learning from demonstration

MBZUAI

MBZUAI researchers have developed a "divide-and-conquer" technique to improve learning from demonstration in robotics. The approach breaks down complex dynamical systems into independently solvable subsystems, modeled as linear parameter-varying systems. This method aims to simplify computations while maintaining stability and accurately capturing joint interactions for robots in complex environments. Why it matters: The research addresses a key challenge in robotics, potentially enabling more efficient and safer robot learning from human demonstrations.
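The decoupling idea can be shown with a toy example (purely illustrative code, not the authors' method): when a linear system's dynamics are block-diagonal, each subsystem can be simulated on its own and the results match a joint simulation exactly.

```python
# Toy illustration of divide-and-conquer dynamics (not the paper's method):
# with diagonal (decoupled) dynamics, simulating each subsystem independently
# reproduces the trajectory of the joint simulation.

def euler_scalar(a, x0, dt, steps):
    # Forward-Euler integration of the scalar system x' = a * x.
    x = x0
    for _ in range(steps):
        x += dt * a * x
    return x

def euler_diagonal(rates, state, dt, steps):
    # Forward-Euler for the joint system, whose dynamics decouple per component.
    state = list(state)
    for _ in range(steps):
        state = [x + dt * a * x for a, x in zip(rates, state)]
    return state

rates, state = [-1.0, -2.0], [1.0, 1.0]
joint = euler_diagonal(rates, state, dt=0.01, steps=100)
split = [euler_scalar(a, x, 0.01, 100) for a, x in zip(rates, state)]
print(joint == split)  # True: the subsystems can be solved independently
```

The real systems in the paper are coupled, which is why the decomposition into linear parameter-varying subsystems (rather than a trivial block-diagonal split) is the contribution.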

FissionFusion: Fast Geometric Generation and Hierarchical Souping for Medical Image Analysis

arXiv

Researchers at MBZUAI introduce FissionFusion, a hierarchical model merging approach to improve medical image analysis performance. The method uses local and global aggregation of models based on hyperparameter configurations, along with a cyclical learning rate scheduler for efficient model generation. Experiments show FissionFusion outperforms standard model souping by approximately 6% on HAM10000 and CheXpert datasets and improves OOD performance.
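The baseline being improved on, uniform model souping, is simple to sketch (hypothetical code; FissionFusion's hierarchical local/global aggregation builds on this basic averaging step):

```python
# Minimal sketch of uniform model souping (weight averaging). Weights are shown
# as flat lists of floats; FissionFusion's hierarchical variant first averages
# within local groups of fine-tuning runs, then across those groups.

def soup(models):
    # Average each weight position across all models.
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

model_a = [0.25, 1.0, -0.5]
model_b = [0.75, 0.5, 0.5]
print(soup([model_a, model_b]))  # [0.5, 0.75, 0.0]
```

Souping costs nothing at inference time, since the averaged weights form a single model of the original size.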

On Transferability of Machine Learning Models

MBZUAI

This article discusses domain shift in machine learning, where the test-time data distribution differs from the training distribution, and methods to mitigate it via domain adaptation and domain generalization. Domain adaptation uses labeled source data together with unlabeled target data. Domain generalization uses labeled data from one or more source domains to generalize to unseen target domains. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.

The Prism Hypothesis: Harmonizing Semantic and Pixel Representations via Unified Autoencoding

arXiv

The paper introduces the Prism Hypothesis, which posits a correspondence between an encoder's feature spectrum and its functional role, with semantic encoders capturing low-frequency components and pixel encoders retaining high-frequency information. Based on this, the authors propose Unified Autoencoding (UAE), a model that harmonizes semantic structure and pixel details using a frequency-band modulator. Experiments on ImageNet and MS-COCO demonstrate that UAE effectively unifies semantic abstraction and pixel-level fidelity, achieving state-of-the-art performance.
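The band-split intuition can be illustrated on a toy 1-D signal (an illustrative sketch only, not the paper's modulator, which operates on encoder features): a low-pass filter yields the smooth, "semantic-like" component, and the residual carries the high-frequency, "pixel-like" detail.

```python
# Illustrative only (not the UAE frequency-band modulator): split a 1-D signal
# into a low-frequency part (moving-average low-pass filter) and a
# high-frequency residual; the two bands sum back to the original signal.

def split_bands(signal, k=3):
    # k-point moving average with edges padded by repetition.
    n = len(signal)
    low = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - k // 2, i + k // 2 + 1)]
        low.append(sum(window) / k)
    high = [s - l for s, l in zip(signal, low)]  # residual = high-freq band
    return low, high

sig = [1.0, 2.0, 4.0, 2.0, 1.0]
low, high = split_bands(sig)
print(all(abs(l + h - s) < 1e-12 for l, h, s in zip(low, high, sig)))  # True
```

Under the Prism Hypothesis, a semantic encoder would keep something like `low` while a pixel encoder must also retain `high`; the proposed modulator lets one autoencoder carry both bands.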

Actionable and responsible AI in Medicine: a geometric deep learning approach

MBZUAI

Pietro Liò from the University of Cambridge will discuss geometric deep learning techniques for building a digital patient twin using graph and hypergraph representation learning. The talk will focus on integrating computational biology and deep learning, considering physiological, clinical, and molecular variables. He will also cover explainable methodologies for clinicians and protein design using diffusion models. Why it matters: This highlights the growing interest in applying advanced AI techniques like geometric deep learning and diffusion models to healthcare challenges in the region, particularly for personalized medicine.