Muhammad Shafique from NYU Abu Dhabi discusses building energy-efficient and robust EdgeAI systems. The talk covers trends, challenges, and techniques for optimizing the software and hardware stacks, with the goal of enabling embodied AI in autonomous systems, IoT healthcare, industrial IoT, and smart environments. Why it matters: The research addresses key challenges in deploying AI on resource-constrained edge devices in the GCC region, particularly energy efficiency and security.
MBZUAI Assistant Professors Bin Gu and Huan Xiong are advancing spiking neural networks (SNNs) to improve computational power and energy efficiency. They will present their latest research on SNNs at the 38th Annual AAAI Conference on Artificial Intelligence in Vancouver. SNNs process information as discrete events, or spikes, mimicking biological neurons and offering better energy efficiency than traditional neural networks. Why it matters: Because SNNs are far more energy efficient than conventional networks, this research could eventually make it practical to run advanced AI applications such as GPTs directly on mobile devices.
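To make the event-driven computation concrete, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python. It is a generic textbook sketch, not code from Gu and Xiong's research; the threshold and leak values are arbitrary illustrative choices.

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Simulate a single leaky integrate-and-fire (LIF) neuron.

    The membrane potential leaks toward zero each step and accumulates
    input current; whenever it crosses the threshold, the neuron emits
    a binary spike and resets. Information is thus carried by discrete
    events rather than continuous activations, which is what lets SNN
    hardware stay idle (and save energy) between spikes.
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current    # leaky integration of input current
        if v >= threshold:
            spikes.append(1)      # discrete spike event
            v = 0.0               # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still fires periodically as charge builds up.
print(lif_spikes([0.4] * 10))  # spikes every third step
```

Note that with no input the neuron does nothing at all, which is the intuition behind SNNs' energy advantage over dense matrix multiplications.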
AI's energy consumption is a growing concern: AI, data centers, and cryptocurrency together consumed nearly 2% of the world's electricity in 2022, a figure that could double by 2026. Training an LLM like GPT-3 consumes roughly as much electricity as 130 homes use in a year, and an AI-powered task can consume 33 times more energy than task-specific software doing the same job. MBZUAI's computer science department, led by Xiaosong Ma, is researching energy efficiency in AI hardware to address this problem. Why it matters: As AI adoption accelerates in the GCC, energy-efficient AI hardware and algorithms are critical for sustainable development and reducing carbon emissions in the region.
In a recent talk at MBZUAI, Moncef Gabbouj from Tampere University discussed "Green Learning" and Operational Neural Networks (ONNs) as efficient alternatives to CNNs. ONNs replace the fixed multiply-and-sum operations of a convolutional neuron with learnable "nodal" and "pool" operators, and "generative neurons" further expand each neuron's learning capacity. Gabbouj also presented Self-Organized ONNs (Self-ONNs) and their signal processing applications. Why it matters: Exploring more efficient AI models is crucial for sustainable development of AI in the region, as it addresses computational resource constraints and promotes broader accessibility.
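To illustrate the "nodal" and "pool" terminology, here is a simplified sketch of a single operational neuron in Python. The nodal operator (an arbitrary sinusoid here, one of the forms commonly cited in the ONN literature) replaces elementwise multiplication, and the pool operator replaces summation. This is an illustration of the general idea only, not Gabbouj's Self-ONN implementation.

```python
import math

def operational_neuron(x, w,
                       nodal=lambda xi, wi: math.sin(wi * xi),
                       pool=sum,
                       activation=math.tanh):
    """Generalized (operational) neuron.

    The fixed multiply/add of a classical perceptron is replaced by a
    learnable nodal operator (applied per input) and a pool operator
    (aggregating the results), expanding what one neuron can represent.
    """
    return activation(pool(nodal(xi, wi) for xi, wi in zip(x, w)))

def classical_neuron(x, w):
    """The classical neuron is the special case: nodal=multiply, pool=sum."""
    return operational_neuron(x, w, nodal=lambda xi, wi: xi * wi)
```

Because the classical neuron falls out as a special case, an ONN can in principle only match or exceed the representational capacity of a same-sized CNN layer.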
A Duke University professor presented a data-centric approach to optimizing AI systems that targets the memory capacity and bandwidth bottlenecks limiting modern AI workloads. The presentation covered collaborative optimization across the algorithm, system, architecture, and circuit layers, and explored compute-in-memory, which performs computation where the data is stored, as a way to reduce costly data movement. Why it matters: Optimizing AI systems through a data-centric approach can improve efficiency and performance, critical for advancing AI applications in the region.
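One way to see why memory bandwidth, rather than raw compute, is often the bottleneck is a back-of-the-envelope arithmetic-intensity (roofline-style) calculation. The sketch below is illustrative and not taken from the talk; the matrix size is an arbitrary assumption.

```python
def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte moved between memory and compute.

    Kernels whose intensity falls below the hardware's balance point
    (peak FLOP/s divided by memory bandwidth) are memory-bound: the
    compute units sit idle waiting for data. That is the bottleneck a
    data-centric, compute-in-memory design tries to attack.
    """
    return flops / bytes_moved

n = 4096  # illustrative matrix dimension

# Matrix-vector product (n x n, fp32): ~2n^2 FLOPs over ~4n^2 bytes read,
# so intensity is a constant 0.5 FLOP/byte no matter how large n grows.
matvec = arithmetic_intensity(2 * n * n, 4 * n * n)

# Matrix-matrix product: ~2n^3 FLOPs over ~3 matrices of 4n^2 bytes,
# so intensity grows linearly with n and can become compute-bound.
matmul = arithmetic_intensity(2 * n**3, 12 * n * n)

print(matvec, matmul)
```

The contrast explains the data-centric framing: memory-bound kernels like matrix-vector products (common in LLM inference) gain little from faster arithmetic alone, which motivates moving computation closer to, or into, the memory itself.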