GCC AI Research

Efficiency

2 articles

Making computer vision more efficient with state-space models

MBZUAI · CV Research

MBZUAI researchers developed GroupMamba, a new family of state-space models (SSMs) for computer vision that addresses the computational-efficiency and training-stability limitations of existing visual SSMs. GroupMamba introduces a new layer, the modulated group mamba layer, which improves efficiency and stabilizes training. In benchmark tests, GroupMamba matched the performance of comparable SSM systems at lower computational cost, offering a backbone for tasks such as image classification, object detection, and segmentation. Why it matters: By improving SSMs, this research aims to bridge the gap between vision transformers and CNNs, potentially leading to more efficient and powerful computer vision models.
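The efficiency idea behind grouping can be sketched in a few lines of numpy: split the channel dimension into groups and run an independent, smaller state-space scan on each group. This is a minimal illustration under stated assumptions, not GroupMamba's actual layer; the function names, shapes, and the fixed `A`, `B`, `C` matrices are hypothetical, and the real modulated group mamba layer adds input-dependent (selective) parameters and a channel-modulation step not shown here.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal linear state-space recurrence over time:
    h_t = A h_{t-1} + B x_t,  y_t = C h_t."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:                      # x has shape (T, channels)
        h = A @ h + B @ x_t
        ys.append(C @ h)
    return np.stack(ys)                # shape (T, channels)

def grouped_ssm(x, params, n_groups):
    """Split channels into groups and scan each group with its own
    small SSM, then concatenate the outputs. Each scan touches only
    d/n_groups channels, which is where the efficiency comes from."""
    chunks = np.split(x, n_groups, axis=-1)
    outs = [ssm_scan(c, *p) for c, p in zip(chunks, params)]
    return np.concatenate(outs, axis=-1)

# Toy run: 6 timesteps, 8 channels, 4 groups of 2 channels each.
rng = np.random.default_rng(0)
T, d, n_groups, n_state = 6, 8, 4, 3
d_g = d // n_groups
x = rng.standard_normal((T, d))
params = [(0.5 * np.eye(n_state),                 # A: (n_state, n_state)
           rng.standard_normal((n_state, d_g)),   # B: (n_state, d_g)
           rng.standard_normal((d_g, n_state)))   # C: (d_g, n_state)
          for _ in range(n_groups)]
y = grouped_ssm(x, params, n_groups)
```

The output `y` has the same `(T, d)` shape as the input, so a grouped layer can drop into a backbone wherever a full-width scan would go.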

Orchestrated efficiency: A new technique to increase model efficiency during training

MBZUAI · Research · Federated Learning

MBZUAI's Samuel Horváth presented Maestro, a new framework for efficiently training machine learning models in federated settings, at ICML 2024. Maestro identifies and removes redundant components of a model through trainable decomposition, increasing efficiency on edge devices. The approach decomposes layers into low-dimensional approximations and discards the parts that contribute little, reducing model size. Why it matters: This research addresses the challenge of running complex models on resource-constrained devices, which is crucial for expanding AI applications while preserving data privacy.
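The core idea of decomposing a layer into a low-dimensional approximation can be illustrated with a post-hoc SVD sketch: factor a weight matrix into two thin factors and keep only the ranks that carry most of the spectral energy. This is an illustrative assumption, not Maestro's method; Maestro learns the decomposition during training rather than factoring a trained matrix, and the function name and `energy` threshold below are hypothetical.

```python
import numpy as np

def decompose_layer(W, energy=0.95):
    """Factor W into thin factors U_r, V_r with W ≈ U_r @ V_r, keeping
    the smallest rank r that retains `energy` of the squared singular
    values. Dropping the weak ranks shrinks the layer from m*n to
    r*(m+n) parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest r with enough energy
    return U[:, :r] * s[:r], Vt[:r]             # shapes (m, r) and (r, n)

# Example: a 64x32 weight matrix with effective rank 2.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 32))
U_r, V_r = decompose_layer(W)
```

For this toy matrix the factorization keeps at most two ranks, storing roughly `2 * (64 + 32)` numbers instead of `64 * 32` while approximating the original layer.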