GCC AI Research


Results for "CPU"

Nvidia challenges Intel, AMD in CPU arena - Gulf Business

UAE AI Jobs

Nvidia is expanding its market beyond GPUs with the development of a central processing unit (CPU) based on Arm architecture. This move positions Nvidia to compete directly with established CPU manufacturers like Intel and AMD. The company aims to offer integrated hardware and software solutions optimized for AI and data science workloads. Why it matters: Nvidia's entry into the CPU market could accelerate AI development and adoption in the Gulf region by providing more specialized and efficient computing solutions.

Computing in the Post-Moore Era

MBZUAI

A professor from EPFL (Lausanne) gave a talk at MBZUAI on computing in the post-Moore era, highlighting the slowing of Moore's Law due to physical limits in transistor miniaturization. He discussed research challenges and opportunities for future computing technologies. He presented examples of post-Moore technologies he helped develop in the datacenter space. Why it matters: As Moore's Law slows, research into alternative computing paradigms becomes critical for the continued advancement of AI and digital services in the UAE and globally.

Software-Directed Hardware Reliability for ML Systems

MBZUAI

Abdulrahman Mahmoud, a postdoctoral fellow at Harvard University, discusses software-directed tools and techniques for processor design and reliability enhancement in ML systems. He emphasizes the need for a nuanced approach to numerical data formats, backed by robust hardware support, and advocates for integrating reliability as a foundational element of the design process. Why it matters: This research addresses the critical challenge of hardware reliability in AI processors, particularly relevant as the field moves towards hardware-software co-design for sustained growth.

Optimizing AI Systems through Cross-Layer Design: A Data-Centric Approach

MBZUAI

A Duke University professor presented a data-centric approach to optimizing AI systems by addressing the memory capacity and bandwidth bottleneck. The presentation covered collaborative optimization across algorithms, systems, architecture, and circuit layers. It also explored compute-in-memory as a solution for integrating computation and memory. Why it matters: Optimizing AI systems through a data-centric approach can improve efficiency and performance, critical for advancing AI applications in the region.
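The memory bottleneck the talk addresses is often explained with the roofline model: an operation's attainable throughput is capped by either peak compute or by memory bandwidth times its arithmetic intensity (FLOPs per byte moved). A minimal sketch, with hypothetical hardware numbers not taken from the presentation:

```python
# Roofline-style estimate of why memory bandwidth, not peak compute,
# often bounds AI workloads. The hardware figures below are assumptions
# chosen for illustration, not numbers from the talk.

PEAK_FLOPS = 100e12  # hypothetical accelerator: 100 TFLOP/s peak compute
MEM_BW = 1e12        # hypothetical memory bandwidth: 1 TB/s

def attainable_flops(arithmetic_intensity):
    """Attainable performance is the lower of peak compute and
    bandwidth * arithmetic intensity (FLOPs per byte moved)."""
    return min(PEAK_FLOPS, MEM_BW * arithmetic_intensity)

# Low-intensity op (e.g. an elementwise add, ~0.125 FLOPs/byte in fp32):
# bandwidth-limited, far below peak.
print(attainable_flops(0.125) / 1e12)   # -> 0.125 TFLOP/s

# High-intensity op (a large matrix multiply with heavy data reuse):
# compute-limited, reaches peak.
print(attainable_flops(1000.0) / 1e12)  # -> 100.0 TFLOP/s
```

Compute-in-memory attacks the same gap from the hardware side: by performing arithmetic where the data resides, it raises the effective bytes-to-FLOPs ratio instead of fighting the bandwidth ceiling.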

Bruteforce computing is the next “winter of AI”

MBZUAI

Prof. Mérouane Debbah of the Technology Innovation Institute (TII) warns that current AI development relies on unsustainable, energy-intensive "bruteforce computing." He argues that the field needs more energy-efficient algorithms instead of simply scaling up GPUs. Debbah suggests neuromorphic computing as a potential solution, drawing inspiration from the human brain's energy efficiency. Why it matters: This critique highlights a crucial sustainability challenge for AI in the GCC and globally, as the region invests heavily in compute-intensive AI models.

Going under the hood to improve AI efficiency

MBZUAI

MBZUAI's computer science department, led by Xiaosong Ma, focuses on improving AI efficiency and sustainability by reducing wasted resources. Ma's background in high-performance computing informs her approach to optimizing AI workloads, and she aims to collaborate with experts across different AI domains at MBZUAI to address these challenges. Why it matters: Optimizing AI efficiency is crucial for reducing the environmental impact and computational costs associated with increasingly complex AI models in the GCC region and globally.

Climate conscious computing

MBZUAI

MBZUAI's Qirong Ho and colleagues are developing an Artificial Intelligence Operating System (AIOS) for decarbonization, aiming to reduce energy waste in AI development. The AIOS focuses on improving communication efficiency between machines during AI model training, as inefficient communication leads to prolonged tasks and increased energy consumption. This system addresses the high computing power demands of large language models like ChatGPT and LLaMA-2. Why it matters: By optimizing energy usage in AI development, the AIOS could significantly reduce the carbon footprint of AI technologies in the region and globally.
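A standard way to cut the communication waste the article describes is to overlap gradient exchange with ongoing backpropagation rather than serializing the two phases. A minimal scheduling sketch, assuming hypothetical per-layer timings (this illustrates the general technique, not the AIOS design itself):

```python
# Illustrative schedule comparison: serialize compute then communication,
# versus starting each layer's gradient all-reduce as soon as its
# gradients are ready. All timings are made-up numbers for illustration.

compute_ms = [4.0, 3.0, 5.0, 2.0]  # hypothetical backprop time per layer
comm_ms    = [2.0, 2.5, 1.5, 3.0]  # hypothetical all-reduce time per layer

def sequential_step(compute, comm):
    """Naive schedule: finish all compute, then send all gradients."""
    return sum(compute) + sum(comm)

def overlapped_step(compute, comm):
    """Overlapped schedule: each layer's all-reduce starts once its
    gradients exist, while later layers keep computing. Models a single
    FIFO communication channel."""
    t = 0.0          # current time on the compute stream
    comm_free = 0.0  # time the communication channel becomes free
    for c, m in zip(compute, comm):
        t += c                     # this layer's gradients ready at t
        start = max(t, comm_free)  # channel may still be busy
        comm_free = start + m      # this layer's all-reduce finishes
    return max(t, comm_free)       # step ends when both streams drain

print(sequential_step(compute_ms, comm_ms))  # -> 23.0 ms
print(overlapped_step(compute_ms, comm_ms))  # -> 17.0 ms
```

With these numbers, overlapping hides most of the communication behind compute, shortening the step from 23 ms to 17 ms; shorter steps mean machines spend less time idle and less energy per trained model.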

On Optimizing Mobile Memory, Storage, and Beyond

MBZUAI

Prof. Chun Jason Xue from the City University of Hong Kong presented research on optimizing mobile memory and storage by characterizing mobile application behavior, noting how it differs from that of server applications. The research examines system software designs inherited from the Linux kernel and identifies optimization opportunities in mobile memory and storage management. Xue's work aims to enhance user experience on mobile devices, with a focus on non-volatile and flash memories. Why it matters: Optimizing mobile systems around the unique characteristics of mobile applications can significantly improve device performance and user experience in the region.