GCC AI Research


Results for "scalability"

KAUST advances scalable AI through global collaboration

KAUST ·

KAUST is hosting a workshop on distributed training in November 2025, led by Professors Peter Richtarik and Marco Canini, focusing on scaling large models like LLMs and ViTs. Richtarik's team recently solved a 75-year-old problem in asynchronous optimization, developing time-optimal stochastic gradient descent algorithms. This research improves the speed and reliability of large model training and supports applications in distributed and federated learning. Why it matters: KAUST's focus on scalable AI and federated learning contributes to Saudi Arabia's Vision 2030 goals and addresses critical challenges in AI deployment and data privacy.
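The asynchronous setting can be illustrated with a toy sketch. This is a generic delayed-gradient simulation for intuition only, not the time-optimal algorithm from the KAUST work: each worker computes a gradient from a possibly stale copy of the parameter, and the server applies updates in whatever order workers finish.

```python
import random

# Toy asynchronous SGD on f(x) = 0.5 * x^2 (gradient = x).
# Workers hold stale parameter snapshots; the server applies their
# (possibly outdated) gradients as they arrive.
def async_sgd(steps=300, n_workers=4, lr=0.1, seed=0):
    rng = random.Random(seed)
    x = 5.0
    snapshots = [x] * n_workers          # parameter copy each worker last read
    for _ in range(steps):
        w = rng.randrange(n_workers)     # an arbitrary worker finishes next
        grad = snapshots[w]              # gradient of 0.5*x^2 at a stale point
        x -= lr * grad                   # server applies the stale update
        snapshots[w] = x                 # that worker reads the fresh parameter
    return x

print(async_sgd())
```

Despite the staleness, the iterates still converge toward the minimizer at zero for a small enough learning rate, which is the basic phenomenon asynchronous-optimization theory makes precise.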

The AI Quorum continues with the first CASL Workshop

MBZUAI ·

MBZUAI's AI Quorum continued with its second event, the first workshop in the CASL series, "Building Ecosystems for AI at Scale," focusing on AI scalability and business applications. The workshop aims to define the steps organizations can take to become self-sufficient with AI and to explore new use cases. Speakers include MBZUAI faculty and researchers from CMU, Stanford, KAUST, UC Berkeley, and Google. Why it matters: The workshop highlights the UAE's growing role in fostering AI innovation and bridging the gap between academic research and industry applications in the region.

Principled Scaling of Neural Networks

MBZUAI ·

Soufiane Hayou of the National University of Singapore, who received his PhD in statistics from Oxford in 2021, presented a talk at MBZUAI on principled scaling of neural networks, covering how mathematical results can be leveraged to scale neural networks efficiently. Why it matters: Understanding neural network scaling is crucial for developing more efficient and powerful AI models in the region.
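One well-known example of the kind of mathematical result such work builds on (a generic illustration, not Hayou's specific contributions) is fan-in-scaled initialization: drawing weights with standard deviation 1/sqrt(width) keeps activation magnitudes roughly constant as a layer widens, so wider networks remain trainable without retuning.

```python
import math
import random
import statistics

# Measure the output scale of one linear unit as its input width grows,
# using weights initialized with std = 1/sqrt(fan_in).
def layer_output_scale(width, seed=0, trials=200):
    rng = random.Random(seed)
    std = 1.0 / math.sqrt(width)                           # fan-in scaling
    outs = []
    for _ in range(trials):
        x = [rng.gauss(0, 1) for _ in range(width)]        # unit-variance input
        w = [rng.gauss(0, std) for _ in range(width)]      # scaled weights
        outs.append(sum(wi * xi for wi, xi in zip(w, x)))  # one output unit
    return statistics.pstdev(outs)

for width in (64, 256, 1024):
    print(width, layer_output_scale(width))
```

Each of the `width` terms in the sum has variance 1/width, so the output variance stays near 1 at every width; without the 1/sqrt(width) factor it would grow with the layer size.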

Sustainable AI at scale

MBZUAI ·

MBZUAI is developing the AI Operating System (AIOS) to reduce the energy, time, and talent costs of AI computing. AIOS aims to make AI models smaller, faster, and more efficient, reducing reliance on expensive hardware and speeding up compute operations. It also enables cost-aware model tuning and standardizes AI modules for reliable operation. Why it matters: By addressing the environmental impact and resource demands of AI, AIOS could promote more sustainable and accessible AI development in the region and globally.

Scaling Generative Adversarial Networks

MBZUAI ·

Axel Sauer from the University of Tübingen presented research on scaling Generative Adversarial Networks (GANs) using pretrained representations. The work explores shaping GANs into causal structures, training them up to 40 times faster, and achieving state-of-the-art image synthesis. The presentation mentions "Counterfactual Generative Networks", "Projected GANs", "StyleGAN-XL", and "StyleGAN-T". Why it matters: Scaling GANs and improving their training efficiency is crucial for advancing image and video synthesis, with implications for various applications in computer vision, graphics, and robotics.

Many-cell sequencing: machine learning principles and methods for moving beyond single cells to population-scale analysis

MBZUAI ·

A talk discusses the challenges of single-cell data analysis, such as feature sparsity and the effects of rare cells. AI/ML strategies are uniquely positioned to model this data. ImYoo, a startup founded in 2021, is applying single-cell model architectures for unsupervised discovery of patient groupings and predicting sample-level phenotypical data in autoimmune disease. Why it matters: This highlights the growing application of AI/ML in analyzing single-cell data for population-scale human health studies, an area ripe for innovation and improvement in the Middle East's growing biotech sector.

Developing efficient algorithms to spread the benefits of AI

MBZUAI ·

MBZUAI PhD graduate William de Vazelhes is researching hard-thresholding algorithms that enable AI to learn from smaller datasets. His work focuses on optimization algorithms that simplify data by retaining only its most informative components, which saves energy and makes it possible to deploy AI models on low-memory devices. He demonstrated that his approach can match the results of convex algorithms in many common settings. Why it matters: This research could broaden AI accessibility by reducing computational costs, and has potential applications in sectors like finance, particularly for portfolio management under budgetary constraints.
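The general idea behind hard-thresholding methods can be sketched with textbook iterative hard thresholding (IHT) for sparse linear regression; this is a hedged illustration of the technique family, not de Vazelhes's specific algorithm. After each gradient step, all but the k largest-magnitude coefficients are zeroed out, so the learned model stays sparse.

```python
# Iterative hard thresholding for min 0.5 * ||A x - y||^2 with x k-sparse.
def iht(A, y, k, lr=0.1, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient step: x <- x - lr * A^T r
        x = [x[j] - lr * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # hard threshold: keep only the k largest-magnitude entries
        keep = sorted(range(n), key=lambda j: abs(x[j]), reverse=True)[:k]
        x = [x[j] if j in keep else 0.0 for j in range(n)]
    return x

# Tiny example: y is generated by a 2-sparse signal over 4 features.
A = [[1.0, 0.0, 0.5, 0.0],
     [0.0, 1.0, 0.0, 0.5],
     [0.5, 0.0, 1.0, 0.0],
     [0.0, 0.5, 0.0, 1.0]]
true_x = [2.0, 0.0, 0.0, -3.0]
y = [sum(A[i][j] * true_x[j] for j in range(4)) for i in range(4)]
x_hat = iht(A, y, k=2)
print(x_hat)
```

On this well-conditioned toy problem IHT recovers the 2-sparse signal; the thresholding step is non-convex, which is why matching the guarantees of convex relaxations is a research question rather than a given.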

Going under the hood to improve AI efficiency

MBZUAI ·

MBZUAI's computer science department, led by Xiaosong Ma, focuses on improving AI efficiency and sustainability by reducing wasted resources. Ma's background in high-performance computing informs her approach to optimizing AI workloads. She aims to collaborate with experts across different AI domains at MBZUAI to address these challenges. Why it matters: Optimizing AI efficiency is crucial for reducing the environmental impact and computational costs associated with increasingly complex AI models in the GCC region and globally.