GCC AI Research

Results for "High Performance Computing"

Everything needs HPC

KAUST ·

An advertisement for KAUST Discovery related to high-performance computing (HPC); it mentions King Abdullah bin Abdulaziz Al Saud. Why it matters: The ad suggests KAUST is investing in HPC, a critical infrastructure component for AI research and development.

Building an HPC ecosystem

KAUST ·

This article discusses KAUST's efforts to build a high-performance computing (HPC) ecosystem. It mentions Jysoo Lee, director of the KAUST Supercomputing Core Lab, and Robert G. Voigt from the Krell Institute, both speakers at the HPC Saudi event held at KAUST. The article also acknowledges King Abdullah's role in establishing KAUST. Why it matters: HPC is crucial for advancing AI research and development in the region, and KAUST is playing a key role in fostering this ecosystem.

Computing in the Post-Moore Era

MBZUAI ·

A professor from EPFL (Lausanne) gave a talk at MBZUAI on computing in the post-Moore era, highlighting the slowing of Moore's Law due to physical limits in transistor miniaturization. He discussed research challenges and opportunities for future computing technologies. He presented examples of post-Moore technologies he helped develop in the datacenter space. Why it matters: As Moore's Law slows, research into alternative computing paradigms becomes critical for the continued advancement of AI and digital services in the UAE and globally.

KAUST wins “Nobel” of high-performance computing for climate modeling

KAUST ·

KAUST has been awarded the ACM Gordon Bell Prize for Climate Modelling, considered the "Nobel" of high-performance computing, for their work on exascale climate emulators. The winning paper, a collaborative effort with institutions including the NSF National Center for Atmospheric Research, addresses the computational and storage demands of high-resolution earth system models. The KAUST team included Sameh Abdulah, Marc G. Genton, David E. Keyes, and others. Why it matters: This is the first time an institution in the Middle East has won the prize, highlighting KAUST's leadership in high-performance computing and climate research in the region.

KAUST makes a distinctive presence at SC17

KAUST ·

KAUST participated in the Supercomputing Conference (SC17) in Denver, Colorado, with faculty, staff, and students. The university's Shaheen II Cray XC40 system was ranked the 20th fastest globally and the fastest in the Middle East. KAUST's IT department hosted talks featuring David Keyes, Jack Dongarra, Thierry-Laurent, Mootaz Elnozahy, and Jason Roos. Why it matters: KAUST's strong presence at SC17 highlights its commitment to advancing supercomputing capabilities in the Middle East and to fostering international collaboration.

Using supercomputers to enable industrial competitiveness

KAUST ·

A KAUST article highlights the role of supercomputers like Shaheen in enhancing industrial competitiveness. Jean Tachiji, Cray's manager for the Middle East, Steven Scott, Cray's CTO, and Saber Feki of the KAUST Supercomputing Core Laboratory are pictured in front of Shaheen. Why it matters: This underscores the strategic importance of high-performance computing for research and development in the region.

KAUST delivers supercomputing breakthrough in multi-dimensional seismic processing

KAUST ·

KAUST and Cerebras Systems collaborated on multi-dimensional seismic processing using the Condor Galaxy AI supercomputer, achieving record sustained memory bandwidth of 92.58 petabytes per second. They developed a Tile Low-Rank Matrix-Vector Multiplication (TLR-MVM) kernel to exploit the architecture of Cerebras CS-2 systems. This work was recognized as a finalist for the 2023 Gordon Bell Prize. Why it matters: This demonstrates the potential of AI-customized architectures for seismic processing, with broader implications for climate modeling and other scientific domains in the region and globally.
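The summary names a Tile Low-Rank Matrix-Vector Multiplication (TLR-MVM) kernel. The core idea is to partition the matrix into tiles, approximate each tile by rank-k factors U·Vᵀ, and apply each tile to the vector in compressed form. The sketch below illustrates that idea only; the function name, tile layout, and factor shapes are illustrative assumptions, not the team's actual Cerebras implementation:

```python
import numpy as np

def tlr_mvm(tiles, x, tile_size):
    """Multiply a tile low-rank matrix by a vector x.

    tiles[(i, j)] = (U, V) holds rank-k factors for tile (i, j),
    approximating that block of the matrix as U @ V.T.
    (Illustrative data layout, not the actual TLR-MVM kernel.)
    """
    n_row_tiles = max(i for i, _ in tiles) + 1
    y = np.zeros(n_row_tiles * tile_size)
    for (i, j), (U, V) in tiles.items():
        xj = x[j * tile_size:(j + 1) * tile_size]
        # Compress first (V.T @ xj is only k-dimensional), then expand:
        # O(k * tile_size) work per tile instead of O(tile_size**2).
        y[i * tile_size:(i + 1) * tile_size] += U @ (V.T @ xj)
    return y
```

Because each tile is applied through its thin factors, the kernel's cost is dominated by streaming the small U and V arrays through memory, which is why the technique maps well onto bandwidth-rich architectures like the Cerebras CS-2.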

Optimizing AI Systems through Cross-Layer Design: A Data-Centric Approach

MBZUAI ·

A Duke University professor presented a data-centric approach to optimizing AI systems by addressing the memory capacity and bandwidth bottleneck. The presentation covered collaborative optimization across algorithms, systems, architecture, and circuit layers. It also explored compute-in-memory as a solution for integrating computation and memory. Why it matters: Optimizing AI systems through a data-centric approach can improve efficiency and performance, critical for advancing AI applications in the region.