A presentation discusses using programmable network devices to reduce communication bottlenecks in distributed deep learning. It explores in-network aggregation and data processing to reduce memory requirements and improve bandwidth utilization. The talk also covers gradient compression and the potential role of programmable NICs. Why it matters: Optimizing distributed deep learning infrastructure is critical for scaling AI model training in resource-constrained environments.
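To make the gradient-compression idea concrete, here is a minimal sketch of top-k sparsification, one common compression scheme: each worker transmits only the k largest-magnitude gradient entries as (index, value) pairs instead of the full dense tensor. This is an illustrative stand-in, not the specific method from the talk; the function names are made up for this example.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude gradient entries.

    Only k (index, value) pairs cross the network; the receiver
    treats every other entry as zero.
    """
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_decompress(idx, vals, shape):
    """Rebuild a dense gradient from the sparse (index, value) pairs."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

grad = np.array([[0.1, -2.0, 0.05],
                 [3.0, -0.2, 0.0]])
idx, vals = topk_compress(grad, k=2)
restored = topk_decompress(idx, vals, grad.shape)
# only the two largest-magnitude entries (-2.0 and 3.0) survive
```

In an in-network aggregation setting, a programmable switch could sum such sparse updates from many workers on the fly, so the parameter server receives one aggregated stream rather than one per worker.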
KAUST faculty member Marco Canini is researching networked systems, focusing on improving their design, implementation, and operation. His work centers on Software-Defined Advanced Networked and Distributed Systems (SANDS). Canini aims to address challenges related to reliability, performance, security, and energy efficiency in large-scale networked computer systems. Why it matters: This research contributes to the development of more dependable and efficient digital infrastructure in Saudi Arabia, aligning with KAUST's mission to advance science and technology.
KAUST hosted the "Human-Machine Networks and Intelligent Infrastructures" conference, co-organized by Prof. Jeff Shamma and Asst. Prof. Meriem Laleg. The conference explored the blend of engineered devices and human elements in large-scale systems like smart grids. Keynote speaker Dr. Pramod Khargonekar discussed cyber-physical-social systems and emerging trends. Why it matters: The conference highlights the growing importance of understanding the interplay between AI, infrastructure, and human behavior in the development of smart cities and intelligent systems in the region.
Dr. Zhiqiang Lin from Ohio State University presented the Security-Enhanced Radio Access Network (SE-RAN) project to address cellular network threats using O-RAN. The project includes 5G-Spector, a framework for detecting L3 protocol exploits via MobiFlow and MobieXpert, and 5G-XSec, a framework leveraging deep learning and LLMs for threat analysis at the network edge. Dr. Lin also outlined a vision for AI convergence with cellular security for enhanced threat detection. Why it matters: Enhancing 5G security through AI and open architectures is critical for protecting next-generation mobile networks in the GCC region and globally.
Ang Chen from the University of Michigan presented a talk at MBZUAI on reducing cloud manageability burdens. The talk covered detecting semantic errors before cloud deployment and curating datasets for automated generation of cloud management programs. He introduced the concept of "cloudless computing" to free tenants from cloud management tasks. Why it matters: This research direction could simplify cloud infrastructure management for businesses in the UAE and beyond, allowing them to focus on core activities.
This paper introduces ProgramFC, a fact-checking model that decomposes complex claims into simpler sub-tasks drawn from a library of functions. The model uses LLMs to generate reasoning programs and executes them by delegating each sub-task to a specialized handler, enhancing explainability and data efficiency. Experiments on fact-checking datasets demonstrate ProgramFC's superior performance compared to baseline methods, with publicly available code and data.
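The decompose-and-verify idea can be sketched in a few lines. This is a toy illustration of the control flow only: in ProgramFC an LLM generates the reasoning program and trained models handle each sub-task, whereas here the sub-claims and the verifier are hard-coded stand-ins.

```python
def fact_check(sub_claims, verifier):
    """A complex claim is accepted only if every sub-claim produced
    by the (here, pre-supplied) reasoning program is verified."""
    return all(verifier(s) for s in sub_claims)

# Stand-in verifier: a lookup table instead of a trained model.
facts = {"Paris is in France": True, "France is in Europe": True}
verifier = lambda s: facts.get(s, False)

# Sub-claims that a reasoning program might emit for
# the claim "Paris is in Europe".
sub_claims = ["Paris is in France", "France is in Europe"]
verdict = fact_check(sub_claims, verifier)  # True
```

Because the verdict is assembled from explicit sub-task results, the chain of evidence is inspectable, which is the source of the explainability benefit the paper claims.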
Munther Dahleh, director of the MIT Institute for Data, Systems, and Society (IDSS), discussed his group's research on network systems at the KAUST 2018 Winter Enrichment Program. The research focuses on the fragility of large networked systems, like highway systems, in response to disruptions that may lead to catastrophic failures. Dahleh's team studies transportation networks, electrical grids, and financial markets to understand how system interconnection contributes to systemic risk. Why it matters: Understanding networked systems is crucial for building resilient infrastructure and mitigating risks in critical sectors across the GCC region.
A recent talk at MBZUAI discussed "Green Learning" and Operational Neural Networks (ONNs) as efficient alternatives to CNNs. ONNs generalize the convolutional neuron with learnable "nodal" and "pool" operators, and "generative" neurons expand each neuron's learning capacity further. Moncef Gabbouj from Tampere University presented Self-Organized ONNs (Self-ONNs) and their signal-processing applications. Why it matters: Exploring more efficient AI models is crucial for sustainable development of AI in the region, as it addresses computational resource constraints and promotes broader accessibility.
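The generative-neuron idea can be sketched as follows, assuming the usual formulation in the Self-ONN literature: the linear product w·x of a CNN neuron is replaced by a learned Q-th order polynomial of the input (a truncated Taylor/Maclaurin series), so each neuron effectively learns its own nodal operator. The function name and toy values here are illustrative, not from the talk.

```python
import numpy as np

def generative_neuron(x, weights):
    """Sketch of a Self-ONN generative neuron.

    Instead of w * x, each input passes through a learned polynomial
    sum_q w_q * x**q for q = 1..Q; the results are then pooled
    (summed) over inputs. weights has shape (Q,).
    """
    Q = len(weights)
    powers = np.stack([x ** q for q in range(1, Q + 1)])  # shape (Q, n)
    return float(np.sum(weights @ powers))

x = np.array([0.5, -0.2])
w = np.array([1.0, 0.5, 0.25])  # Q = 3 polynomial terms
y = generative_neuron(x, w)
```

Setting Q = 1 recovers an ordinary linear (convolutional) neuron, which is why ONNs are described as a strict generalization of CNNs.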