GCC AI Research

Results for "AI computing"

Optimizing AI Systems through Cross-Layer Design: A Data-Centric Approach

MBZUAI ·

A Duke University professor presented a data-centric approach to optimizing AI systems by addressing the memory capacity and bandwidth bottleneck. The presentation covered collaborative optimization across algorithms, systems, architecture, and circuit layers. It also explored compute-in-memory as a solution for integrating computation and memory. Why it matters: Optimizing AI systems through a data-centric approach can improve efficiency and performance, critical for advancing AI applications in the region.
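The talk's specific designs aren't detailed here, but the core idea of compute-in-memory can be sketched: weights stay stationary in the memory array, an input vector is applied across the rows, and each column accumulates a dot product, so the array as a whole performs a matrix-vector product without moving weights to a separate processor. A minimal functional emulation (NumPy, with hypothetical example values):

```python
import numpy as np

# Compute-in-memory emulation: the multiply-accumulate happens "inside"
# the memory array. Weights are stored in the cells; inputs drive the
# row lines; each column line sums its products. Functionally this is
# an ordinary matrix-vector product.
weights = np.array([[1, -1, 1],
                    [0,  1, 1]])  # values held in the (emulated) array
inputs = np.array([1, 1, 0])      # applied on the row lines
outputs = weights @ inputs        # per-column accumulation
print(outputs)                    # → [0 1]
```

The efficiency win in real hardware comes from avoiding the data movement between memory and compute units, which the memory/bandwidth bottleneck makes expensive; the arithmetic itself is unchanged.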

Climate conscious computing

MBZUAI ·

MBZUAI's Qirong Ho and colleagues are developing an Artificial Intelligence Operating System (AIOS) for decarbonization, aiming to reduce energy waste in AI development. The AIOS focuses on improving communication efficiency between machines during AI model training, as inefficient communication leads to prolonged tasks and increased energy consumption. This system addresses the high computing power demands of large language models like ChatGPT and LLaMA-2. Why it matters: By optimizing energy usage in AI development, the AIOS could significantly reduce the carbon footprint of AI technologies in the region and globally.
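The AIOS internals aren't public, but one widely used technique for cutting communication cost in distributed training is gradient sparsification: each machine sends only its largest-magnitude gradient entries instead of the full dense vector. A minimal sketch (the function name and values are illustrative, not from the AIOS):

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude gradient entries before
    transmitting; zero out the rest. This shrinks communication volume
    at the cost of a sparsification error (in practice often corrected
    by accumulating the dropped residual locally)."""
    idx = np.argsort(np.abs(grad))[-k:]  # indices of the k largest |g|
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

g = np.array([0.1, -2.0, 0.05, 1.5, -0.3])
print(topk_sparsify(g, 2))  # → [ 0.  -2.   0.   1.5  0. ]
```

Sending 2 of 5 entries here is a 60% reduction in traffic; at the scale of LLM training, less time spent waiting on the network translates directly into less wasted energy.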

Working to make AI faster, smarter, and more punctual

MBZUAI ·

MBZUAI Associate Professor Martin Takáč is working on high-performance computing and machine learning with applications in logistics, supply chain management, and other areas. His research focuses on using AI to improve precision and efficiency in tasks like predicting demand and optimizing delivery routes. Takáč's interests include imitation learning, predictive modeling, and reinforcement learning, enabling AI to mimic human behavior and predict future outcomes. Why it matters: This research contributes to the development of more efficient and reliable AI systems that can be applied to a wide range of industries in the UAE and beyond.

Reaping the full benefits of AI-driven applications

MBZUAI ·

MBZUAI Assistant Professors Bin Gu and Huan Xiong are advancing spiking neural networks (SNNs) to improve computational power and energy efficiency. They will present their latest research on SNNs at the 38th Annual AAAI Conference on Artificial Intelligence in Vancouver. SNNs process information in discrete events, mimicking biological neurons and offering improved energy efficiency compared to traditional neural networks. Why it matters: This research could enable running advanced AI applications like GPTs on mobile devices, unlocking their full potential due to the energy efficiency of SNNs.
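The "discrete events" that make SNNs energy-efficient can be illustrated with the standard leaky integrate-and-fire model: a neuron accumulates input into a membrane potential, leaks a fraction of it each time step, and emits a binary spike only when the potential crosses a threshold. A minimal sketch (parameters are illustrative, not from the authors' AAAI work):

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential leaks by a factor each step, accumulates the
    incoming signal, and fires a binary spike (a discrete event) when it
    crosses the threshold, resetting afterwards.
    """
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = leak * potential + x  # leaky integration
        if potential >= threshold:
            spikes.append(1)              # discrete spike event
            potential = 0.0               # reset after firing
        else:
            spikes.append(0)              # silent: no energy-costly event
    return spikes

print(lif_neuron([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))  # → [0, 0, 1, 0, 0, 1]
```

Because the neuron is silent most of the time and communicates in sparse 1-bit spikes rather than dense floating-point activations, neuromorphic hardware can skip work for non-spiking neurons, which is the source of the efficiency gain the article describes.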

Bruteforce computing is the next “winter of AI”

MBZUAI ·

Prof. Mérouane Debbah of the Technology Innovation Institute (TII) warns that current AI development relies on unsustainable, energy-intensive "bruteforce computing." He argues that the field needs more energy-efficient algorithms instead of simply scaling up GPUs. Debbah suggests neuromorphic computing as a potential solution, drawing inspiration from the human brain's energy efficiency. Why it matters: This critique highlights a crucial sustainability challenge for AI in the GCC and globally, as the region invests heavily in compute-intensive AI models.

Computing in the Post-Moore Era

MBZUAI ·

A professor from EPFL (Lausanne) gave a talk at MBZUAI on computing in the post-Moore era, highlighting the slowing of Moore's Law due to physical limits in transistor miniaturization. He discussed research challenges and opportunities for future computing technologies. He presented examples of post-Moore technologies he helped develop in the datacenter space. Why it matters: As Moore's Law slows, research into alternative computing paradigms becomes critical for the continued advancement of AI and digital services in the UAE and globally.

Chip Design and Manufacturing with AI

MBZUAI ·

This article discusses the application of AI in semiconductor chip design and manufacturing, focusing on examples such as IR-drop estimation and lithography processes. It features Youngsoo Shin, a KAIST professor and founder of Baum, who is an expert in this area. The article also briefly mentions a panel discussion hosted by MBZUAI. Why it matters: AI-driven chip design and manufacturing could accelerate semiconductor innovation in the GCC region and beyond.

Low-Complexity NN Technology: Model and Precision Search, Acceleration Circuit, and Applications

MBZUAI ·

Researchers at National Taiwan University are developing low-complexity neural network technologies using quantization to reduce model size while maintaining accuracy. Their work includes binary-weight CNNs and transformers, along with a neural architecture search scheme (TPC-NAS) applied to image recognition, object detection, and NLP tasks. They have also built a PE-based CNN/transformer hardware accelerator on a Xilinx FPGA SoC with a PyTorch-based software framework. Why it matters: This research provides practical methods for deploying efficient deep learning models on resource-constrained hardware, potentially enabling broader adoption of AI in embedded systems and edge devices.
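The binary-weight idea can be sketched in a few lines: each float weight is replaced by its sign times a shared scale (commonly the mean absolute value of the tensor), so storage drops from 32 bits to 1 bit per weight plus one scale factor. A minimal sketch of this generic scheme (not the TPC-NAS implementation; values are illustrative):

```python
import numpy as np

def binarize_weights(w):
    """Binary-weight quantization: approximate a float tensor by values
    in {-alpha, +alpha}, where alpha is the mean absolute value.
    Only signs (1 bit each) and the single scale need to be stored."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w), alpha

w = np.array([0.5, -0.25, 0.75, -1.0])
w_bin, alpha = binarize_weights(w)
print(w_bin, alpha)  # → [ 0.625 -0.625  0.625 -0.625] 0.625
```

With binary weights, multiplications in a convolution or attention layer reduce to sign flips and additions, which is what makes hardware acceleration on an FPGA with simple processing elements attractive.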