GCC AI Research

Orchestrated efficiency: A new technique to increase model efficiency during training

MBZUAI · Notable

Summary

At ICML 2024, MBZUAI's Samuel Horváth presented Maestro, a new framework for efficiently training machine learning models in federated settings. Maestro identifies and removes redundant components of a model through trainable decomposition, improving efficiency on edge devices. The approach decomposes layers into low-rank approximations and discards the components that contribute little, reducing model size. Why it matters: This research addresses the challenge of running complex models on resource-constrained devices, which is crucial for expanding AI applications while preserving data privacy.
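The core idea lends itself to a short illustration. The sketch below shows a generic trainable low-rank factorization of a linear layer with a post-training pruning step, assuming PyTorch. It is a simplified stand-in, not the Maestro implementation; the `LowRankLinear` class, its initialization scale, and the pruning threshold are all hypothetical choices.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """A linear layer factored as W ~= U @ V, so rank can be trimmed after training.

    Generic sketch of trainable low-rank decomposition; NOT the Maestro code.
    """

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(rank, in_features) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equivalent to x @ (U @ V).T, but never materializes the full matrix.
        return x @ self.V.T @ self.U.T

    def prune(self, threshold: float = 1e-3) -> None:
        # Drop rank components whose combined column/row norm is negligible,
        # shrinking the layer for deployment on edge devices.
        scores = self.U.norm(dim=0) * self.V.norm(dim=1)
        keep = scores > threshold
        self.U = nn.Parameter(self.U[:, keep].detach())
        self.V = nn.Parameter(self.V[keep, :].detach())

layer = LowRankLinear(512, 256, rank=64)
out = layer(torch.randn(8, 512))  # train as usual, then call layer.prune()
```

After training, calling `prune()` shrinks the factors so that the deployed model carries only the rank components the data actually used.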

Related

Green Learning — New Generation Machine Learning and Applications

MBZUAI ·

A recent talk at MBZUAI discussed "Green Learning" and Operational Neural Networks (ONNs) as efficient alternatives to CNNs. ONNs generalize convolution with learnable "nodal" and "pool" operators, and their self-organized variant uses "generative neurons" to expand each neuron's learning capacity. Moncef Gabbouj of Tampere University presented these Self-Organized ONNs (Self-ONNs) and their signal-processing applications. Why it matters: Exploring more efficient AI models is crucial for the sustainable development of AI in the region, since it addresses computational resource constraints and promotes broader accessibility.
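For readers unfamiliar with ONNs, the sketch below caricatures a Self-ONN-style generative neuron for 1-D signals: each kernel tap applies a learnable truncated Maclaurin polynomial (playing the role of the nodal operator), and the per-tap results are summed (playing the role of the pool operator). This is a simplified illustration assuming PyTorch; the class name, polynomial order `q`, and tanh activation are assumptions, not the presenters' code.

```python
import torch
import torch.nn as nn

class GenerativeNeuron1d(nn.Module):
    """Sketch of a Self-ONN-style generative neuron for 1-D signals.

    Hypothetical, simplified code; not the authors' implementation.
    """

    def __init__(self, kernel_size: int, q: int = 3):
        super().__init__()
        # One weight per kernel tap and per polynomial order.
        self.weights = nn.Parameter(torch.randn(q, kernel_size) * 0.1)
        self.q = q
        self.kernel_size = kernel_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length); slide a window over the signal.
        windows = x.unfold(dimension=1, size=self.kernel_size, step=1)  # (B, L', K)
        out = torch.zeros(windows.shape[:2])
        for power in range(1, self.q + 1):
            # Nodal operator: w_q * x^q per tap; pool operator: sum over taps.
            out = out + (windows ** power * self.weights[power - 1]).sum(dim=-1)
        return torch.tanh(out)

neuron = GenerativeNeuron1d(kernel_size=5, q=3)
y = neuron(torch.randn(4, 64))  # output shape: (4, 60)
```

With `q = 1` this reduces to an ordinary convolution tap, which is why ONNs are described as a generalization of CNNs.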

LLMEffiChecker: Understanding and Testing Efficiency Degradation of Large Language Models

arXiv ·

The paper introduces LLMEffiChecker, a tool to test the computational efficiency robustness of LLMs by identifying vulnerabilities that can significantly degrade performance. LLMEffiChecker uses both white-box (gradient-guided perturbation) and black-box (causal inference-based perturbation) methods to delay the generation of the end-of-sequence token. Experiments on nine public LLMs demonstrate that LLMEffiChecker can substantially increase response latency and energy consumption with minimal input perturbations.
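A toy version of the black-box measurement is easy to sketch. The snippet below compares generated length and latency for an input and a crudely perturbed variant of it, assuming the Hugging Face `transformers` library and the public `t5-small` checkpoint; the hard-coded one-character edit is only a placeholder for LLMEffiChecker's gradient-guided and causal-inference-based perturbation searches.

```python
import time
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative efficiency-robustness harness, not LLMEffiChecker itself:
# measure how many tokens the model emits (and how long it takes) before
# producing the end-of-sequence token, for an input and a perturbed variant.
name = "t5-small"  # example checkpoint, chosen for illustration only
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

def measure(text: str) -> tuple[int, float]:
    inputs = tok(text, return_tensors="pt")
    start = time.perf_counter()
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=256)
    return out.shape[-1], time.perf_counter() - start

original = "translate English to German: The weather is nice today."
perturbed = original.replace("nice", "n1ce")  # a one-character perturbation

for label, text in [("original", original), ("perturbed", perturbed)]:
    n_tokens, secs = measure(text)
    print(f"{label}: {n_tokens} tokens in {secs:.2f}s")
```

An efficiency attack succeeds when the perturbed input yields a markedly longer generation (and thus higher latency and energy use) while the input change stays imperceptibly small.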

Accelerating neural network optimization: The power of second-order methods

MBZUAI ·

MBZUAI researchers presented a new second-order method for optimizing neural networks at NeurIPS 2024. The method targets variational-inequality problems, which arise frequently in machine learning. They proved that, for monotone variational inequalities solved with inexact second-order derivatives, no faster second- or first-order method can exist in theory, and they supported this result with experiments. Why it matters: This research has the potential to reduce the computational cost of training large and complex neural networks, which could accelerate AI development in the region.
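For context on the problem class, a monotone variational inequality asks for a point z* such that ⟨F(z*), z − z*⟩ ≥ 0 for all z, where F is a monotone operator. The sketch below runs the classic first-order extragradient method on a bilinear toy instance; it only illustrates the setting the paper studies and is not the second-order method the MBZUAI team presented.

```python
import numpy as np

def F(z: np.ndarray) -> np.ndarray:
    # Bilinear saddle-point operator F(x, y) = (A y, -A^T x), arising from
    # min_x max_y x^T A y. It is monotone, a standard toy VI instance.
    x, y = z[:2], z[2:]
    A = np.array([[1.0, 2.0], [0.0, 1.0]])
    return np.concatenate([A @ y, -A.T @ x])

z = np.ones(4)
step = 0.1
for _ in range(500):
    z_half = z - step * F(z)     # extrapolation (look-ahead) step
    z = z - step * F(z_half)     # update using the look-ahead operator value
print(z)  # converges toward the solution z* = 0
```

Plain gradient descent-ascent diverges on this operator; the extrapolation step is what makes the iteration converge, which is one reason this problem class needs specialized methods at all.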