A presentation discusses using programmable network devices to reduce communication bottlenecks in distributed deep learning. It explores in-network aggregation and data processing to lower memory requirements and improve bandwidth utilization. The talk also covers gradient compression and the potential role of programmable NICs. Why it matters: Optimizing distributed deep learning infrastructure is critical for scaling AI model training in resource-constrained environments.
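To make the gradient-compression theme concrete, here is a minimal top-k sparsification sketch in NumPy. It is a generic illustration of the technique, not the specific scheme from the talk; the function names and the 1% ratio are illustrative assumptions.

```python
import numpy as np

def topk_compress(grad, ratio=0.01):
    """Keep only the largest-magnitude entries of a gradient tensor.

    Returns (indices, values), so only ~ratio of the data crosses the network.
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k magnitudes
    return idx, flat[idx]

def topk_decompress(idx, vals, shape):
    """Rebuild a dense gradient: zeros everywhere except the kept entries."""
    flat = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    flat[idx] = vals
    return flat.reshape(shape)

# Example: a worker compresses its gradient before sending it for aggregation.
grad = np.random.randn(1024, 1024).astype(np.float32)
idx, vals = topk_compress(grad, ratio=0.01)
restored = topk_decompress(idx, vals, grad.shape)
```

In-network aggregation then only needs to combine these sparse (index, value) streams from each worker, which is what makes the memory and bandwidth savings on the switch possible.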
MBZUAI President Eric Xing led a global collaboration to develop Vicuna, an open-source LLM alternative to proprietary models such as ChatGPT, addressing the unsustainable costs of training LLMs. OpenAI CEO Sam Altman acknowledged Abu Dhabi's role in the global AI conversation, building on achievements like Vicuna. Xing and colleagues are publishing research at MLSys 2023 on "cross-mesh resharding" to improve communication between device meshes in distributed deep learning, aiming for low-carbon, affordable, and miniaturized AI. Why it matters: This research signals a push towards sustainable AI development in the region, emphasizing efficiency and reduced environmental impact.
A new mini-batch strategy using aggregated relational data is proposed to fit the mixed membership stochastic blockmodel (MMSB) to large networks. The method combines nodal information with stochastic gradients computed over sampled bipartite subgraphs for scalable inference. The approach was applied to a citation network with over two million nodes and 25 million edges, recovering interpretable structure. Why it matters: This research enables more efficient community detection in massive networks, which is crucial for analyzing complex relationships in various domains, though the work itself has no clear connection to the Middle East.
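The scalability argument rests on subsampling: rather than touching every node pair per update, the parameters move along noisy gradients computed from small batches of observed edges and random non-edges. Below is a minimal sketch of that minibatch pattern, using a simpler Bernoulli latent-space model as a stand-in for the MMSB; `minibatch_step` and all constants are illustrative assumptions, not the paper's actual aggregated-relational-data estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def minibatch_step(U, edges, n_nodes, batch_edges=256, batch_noise=256, lr=0.05):
    """One noisy ascent step on a Bernoulli edge model p_ij = sigmoid(u_i . u_j).

    Sampling a few observed edges plus random (mostly absent) pairs gives a
    cheap stochastic gradient without touching the full adjacency matrix.
    """
    grad = np.zeros_like(U)
    # Positive examples: a minibatch of observed edges (label y = 1).
    for i, j in edges[rng.choice(len(edges), batch_edges)]:
        r = 1.0 - sigmoid(U[i] @ U[j])          # residual y - p
        grad[i] += r * U[j]
        grad[j] += r * U[i]
    # Negative examples: random node pairs, treated as non-edges (y = 0).
    for _ in range(batch_noise):
        i, j = rng.integers(n_nodes, size=2)
        r = 0.0 - sigmoid(U[i] @ U[j])
        grad[i] += r * U[j]
        grad[j] += r * U[i]
    U += lr * grad                              # stochastic gradient ascent
    return U

# Toy run: 1000 nodes, a random edge list, 16-dimensional latent positions.
n, d = 1000, 16
edges = rng.integers(n, size=(5000, 2))
U = 0.1 * rng.standard_normal((n, d))
for _ in range(100):
    U = minibatch_step(U, edges, n)
```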
QRC has developed Qibo, a Python library enabling classical simulation of quantum algorithms with double precision. Qibo leverages hardware accelerators such as GPUs and supports multi-threaded CPU execution. It also incorporates a multi-GPU distributed approach for circuit simulation. Why it matters: This framework allows researchers and developers in the region to explore and prototype quantum algorithms using existing classical computing infrastructure, fostering innovation in quantum computing research and applications.
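A minimal Bell-state simulation based on Qibo's documented high-level API (exact names may vary slightly across versions; GPU backends require installing the corresponding accelerator packages):

```python
# Build and simulate a two-qubit Bell state with Qibo.
import qibo
from qibo import gates
from qibo.models import Circuit

qibo.set_backend("numpy")   # swap for a GPU-capable backend if available

c = Circuit(2)
c.add(gates.H(0))           # Hadamard on qubit 0
c.add(gates.CNOT(0, 1))     # entangle qubits 0 and 1
c.add(gates.M(0, 1))        # measure both qubits

result = c(nshots=1000)     # execute the circuit on the selected backend
print(result.frequencies()) # expect roughly equal counts of '00' and '11'
```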
A professor from EPFL (Lausanne) gave a talk at MBZUAI on computing in the post-Moore era, highlighting the slowing of Moore's Law due to physical limits in transistor miniaturization. He discussed research challenges and opportunities for future computing technologies. He presented examples of post-Moore technologies he helped develop in the datacenter space. Why it matters: As Moore's Law slows, research into alternative computing paradigms becomes critical for the continued advancement of AI and digital services in the UAE and globally.
KAUST Ph.D. graduate Tariq Alturkestani won the best paper award at Euro-Par 2020 for his doctoral thesis on overlapping I/O and compute in large-scale scientific computation using multilayered buffering mechanisms. His work re-evaluates the Reverse Time Migration (RTM) method used by geoscientists for oil and gas exploration, utilizing emerging storage technologies. The paper was co-authored with Professor David Keyes and Dr. Hatem Ltaief from the KAUST Extreme Computing Research Center (ECRC). Why it matters: This award highlights KAUST's growing prominence as a hub for Saudi talent and research in supercomputing and extreme computing, particularly in applications relevant to the region's energy sector.
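The core systems idea, overlapping I/O with computation so that storage latency hides behind the compute kernel, can be sketched with simple double buffering. This toy Python version, with stand-in `load_chunk` and `compute` functions, only hints at the multilayered, multi-tier buffering developed in the thesis.

```python
import threading
import numpy as np

def load_chunk(k):
    """Stand-in for reading one RTM snapshot from disk."""
    return np.full((1024, 1024), float(k))

def compute(chunk):
    """Stand-in for the compute kernel applied to one snapshot."""
    return chunk.sum()

def pipeline(n_chunks):
    """Double buffering: prefetch chunk k+1 on a helper thread while the
    main thread computes on chunk k, so I/O time hides behind compute."""
    nxt = {}

    def prefetch(k):
        nxt["chunk"] = load_chunk(k)

    current = load_chunk(0)
    for k in range(n_chunks):
        t = None
        if k + 1 < n_chunks:
            t = threading.Thread(target=prefetch, args=(k + 1,))
            t.start()            # I/O proceeds in the background
        compute(current)         # compute overlaps the prefetch
        if t is not None:
            t.join()             # wait for the next chunk before using it
            current = nxt["chunk"]

pipeline(8)
```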
MBZUAI Associate Professor Martin Takáč is working on high-performance computing and machine learning with applications in logistics, supply chain management, and other areas. His research focuses on using AI to improve precision and efficiency in tasks like predicting demand and optimizing delivery routes. Takáč's interests include imitation learning, predictive modeling, and reinforcement learning to enable AI to mimic human behavior and predict future outcomes. Why it matters: This research contributes to the development of more efficient and reliable AI systems that can be applied to a wide range of industries in the UAE and beyond.
Marcus Engsig from DERC will present a paper at the MATLAB User Group Meeting in Abu Dhabi on October 6. The paper, titled ‘Generalization of Higher Order Methods For Fast Iterative Matrix Inversion Compatible With GPU Acceleration’, presents a novel approach to iterative matrix inversion on GPUs. The method, named Nested Neumann, achieves 4-100x speedups over standard MATLAB methods on large matrices. Why it matters: This research contributes to faster computation in numerical and physical modeling, crucial for processing large datasets in various scientific and engineering applications in the region.
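The Nested Neumann formulation itself is not reproduced here, but it belongs to the well-known family of higher-order Neumann-series/Newton-Schulz iterations, which invert a matrix using only dense matrix multiplications. A generic sketch of that family in NumPy, with an assumed order parameter and the classical safe starting guess:

```python
import numpy as np

def neumann_inverse(A, order=3, iters=30):
    """Higher-order Newton-Schulz / Neumann-series iteration for A^{-1}.

    With residual R = I - X A, the order-p update is
        X <- (I + R + R^2 + ... + R^(p-1)) X,
    which converges at p-th order once ||I - X0 A|| < 1. Every step is a
    handful of dense matrix products, which is why the family maps to GPUs.
    """
    n = A.shape[0]
    I = np.eye(n)
    # Classical starting guess that guarantees ||I - X0 A|| < 1.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        R = I - X @ A
        S, P = I.copy(), I.copy()
        for _ in range(order - 1):   # accumulate I + R + ... + R^(p-1)
            P = P @ R
            S = S + P
        X = S @ X
    return X

A = np.random.randn(200, 200) + 200 * np.eye(200)  # well-conditioned test matrix
X = neumann_inverse(A)
print(np.linalg.norm(X @ A - np.eye(200)))          # ~machine precision
```

Because each step reduces to a few GEMMs, throughput tracks the GPU's matrix-multiply performance rather than the sequential bottlenecks of a factorization-based inverse.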