GCC AI Research

Results for "reliability"

Software-Directed Hardware Reliability for ML Systems

MBZUAI ·

Abdulrahman Mahmoud, a postdoctoral fellow at Harvard University, discusses software-directed tools and techniques for processor design and reliability enhancement in ML systems. He emphasizes the need for a nuanced approach to numerical data formats, supported by robust hardware, and advocates for integrating reliability as a foundational element of the design process. Why it matters: This research addresses the critical challenge of hardware reliability in AI processors, which grows more pressing as the field moves toward hardware-software co-design for sustained growth.

Reliability Exploration of Neural Network Accelerator

MBZUAI ·

This article discusses the reliability of deep neural networks (DNNs) and their hardware platforms, especially regarding soft errors caused by cosmic rays. It highlights that while DNNs are generally robust to individual bit flips, such errors can still cause miscalculations in AI accelerators. The talk, led by Prof. Masanori Hashimoto from Kyoto University, will cover identifying vulnerabilities in neural networks and exploring the reliability of AI accelerators for edge computing. Why it matters: As DNNs are deployed in safety-critical applications in the region, ensuring the reliability of AI hardware is crucial for safe and trustworthy operation.

Confidence sets for Causal Discovery

MBZUAI ·

A new framework for constructing confidence sets for causal orderings within structural equation models (SEMs) is presented. It leverages a residual bootstrap procedure to test the goodness-of-fit of causal orderings, quantifying uncertainty in causal discovery. The method is computationally efficient and suitable for medium-sized problems while maintaining theoretical guarantees as the number of variables increases. Why it matters: This offers a new dimension of uncertainty quantification that enhances the robustness and reliability of causal inference in complex systems, though the work has no stated connection to the Middle East.
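The residual bootstrap at the core of the method can be sketched in miniature. The toy below illustrates the general residual-bootstrap idea (not the paper's algorithm): refit a regression slope on datasets rebuilt from resampled residuals, so the spread of refitted slopes quantifies uncertainty.

```python
import random
import statistics

def fit_slope(x, y):
    """Ordinary least squares fit of y ~ a + b*x; returns (a, b)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def residual_bootstrap_slopes(x, y, n_boot=500, seed=0):
    """Refit the slope on datasets rebuilt from resampled residuals."""
    rng = random.Random(seed)
    a, b = fit_slope(x, y)
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    slopes = []
    for _ in range(n_boot):
        # Keep x fixed; regenerate y from the fit plus resampled residuals.
        y_star = [a + b * xi + rng.choice(resid) for xi in x]
        slopes.append(fit_slope(x, y_star)[1])
    return slopes

# Toy data: y = 2x + noise. The bootstrap distribution of the slope
# quantifies the uncertainty of the fitted value.
data_rng = random.Random(1)
x = [i / 10 for i in range(50)]
y = [2 * xi + data_rng.gauss(0, 0.3) for xi in x]
slopes = residual_bootstrap_slopes(x, y)
print(statistics.fmean(slopes))  # close to 2
```

The paper applies the same resampling principle to test whole causal orderings rather than a single slope.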

Advances in uncertainty quantification methods

KAUST ·

KAUST hosted the Advances in Uncertainty Quantification Methods, Algorithms and Applications conference (UQAW2016) in January 2016. The event featured 75 presentations and 20 invited speakers from various countries. Professor Raul Tempone presented research on computational approaches to fouling accumulation and wear degradation using stochastic differential equations. Why it matters: This work provides a new computational approach, based on stochastic differential equations, to predict fouling patterns in heat exchangers, which can optimize maintenance operations and reduce engine shut-down periods.
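The stochastic-differential-equation approach can be sketched with the standard Euler-Maruyama scheme. The drift and diffusion below are toy assumptions for a generic degradation process, not the model presented at the conference:

```python
import random

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng):
    """Simulate dX = drift(X) dt + diffusion(X) dW with Euler-Maruyama steps."""
    path = [x0]
    for _ in range(n_steps):
        x = path[-1]
        dw = rng.gauss(0.0, dt ** 0.5)  # Brownian increment over dt
        path.append(x + drift(x) * dt + diffusion(x) * dw)
    return path

# Toy degradation model (an assumed form): fouling thickness grows at mean
# rate mu with random fluctuations of size sigma.
mu, sigma = 0.5, 0.1
rng = random.Random(42)
paths = [euler_maruyama(0.0, lambda x: mu, lambda x: sigma, 0.01, 1000, rng)
         for _ in range(200)]
finals = [p[-1] for p in paths]
print(sum(finals) / len(finals))  # Monte Carlo mean at t = 10, near mu * 10 = 5
```

Monte Carlo statistics over many simulated paths are what turn such a model into maintenance-scheduling predictions.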

Towards Trustworthy AI: From High-dimensional Statistics to Causality

MBZUAI ·

Dr. Xinwei Sun from Microsoft Research Asia presented research on trustworthy AI, focusing on statistical learning with theoretical guarantees. The work covers methods for sparse recovery with false-discovery rate analysis and causal inference tools for robustness and explainability. Consistency and identifiability were addressed theoretically, with applications shown in medical imaging analysis. Why it matters: The research contributes to addressing key limitations of current AI models regarding explainability, reproducibility, robustness, and fairness, which are crucial for real-world applications in sensitive fields like healthcare.
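False-discovery-rate control, one ingredient mentioned above, is commonly done with the Benjamini-Hochberg step-up procedure. The sketch below shows that standard procedure (not Dr. Sun's specific method):

```python
def benjamini_hochberg(pvals, alpha=0.1):
    """Benjamini-Hochberg step-up: indices of hypotheses rejected at FDR alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its step-up threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    # Reject the k hypotheses with the smallest p-values.
    return sorted(order[:k])

# The three smallest p-values clear their thresholds; the rest do not.
print(benjamini_hochberg([0.001, 0.008, 0.04, 0.2, 0.5]))  # → [0, 1, 2]
```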

Understanding networked systems

KAUST ·

Munther Dahleh, director at the MIT Institute for Data, Systems, and Society (IDSS), discussed his group's research on network systems at the KAUST 2018 Winter Enrichment Program. The research focuses on the fragility of large networked systems, like highway systems, in response to disruptions that may lead to catastrophic failures. Dahleh's team studies transportation networks, electrical grids, and financial markets to understand how system interconnections give rise to systemic risk. Why it matters: Understanding networked systems is crucial for building resilient infrastructure and mitigating risks in critical sectors across the GCC region.
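The fragility of networked systems to cascading failures can be sketched with a toy load-redistribution model (an illustrative assumption, not Dahleh's model): a failed node sheds its load onto live neighbors, which may overload and fail in turn.

```python
def simulate_cascade(capacity: dict, load: dict, neighbors: dict,
                     seed_failure: str) -> set:
    """Fail one node, shed its load onto live neighbors, repeat until stable."""
    load = dict(load)  # do not mutate the caller's loads
    failed = {seed_failure}
    frontier = [seed_failure]
    while frontier:
        node = frontier.pop()
        live = [n for n in neighbors[node] if n not in failed]
        for n in live:
            load[n] += load[node] / len(live)  # equal-share redistribution
            if load[n] > capacity[n]:
                failed.add(n)
                frontier.append(n)
        load[node] = 0.0
    return failed

# A three-node line a-b-c: losing "a" overloads "b", which overloads "c".
capacity = {"a": 1.0, "b": 1.2, "c": 2.0}
load = {"a": 0.9, "b": 0.9, "c": 0.9}
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(sorted(simulate_cascade(capacity, load, neighbors, "a")))  # → ['a', 'b', 'c']
```

Raising the capacity of "b" above 1.8 stops the cascade at the first failure, illustrating how small margin changes separate local faults from systemic collapse.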

Empowering Large Language Models with Reliable Reasoning

MBZUAI ·

Liangming Pan from UCSB presented research on building reliable generative AI agents by integrating symbolic representations with LLMs. The neuro-symbolic strategy combines the flexibility of language models with precise knowledge representation and verifiable reasoning. The work covers Logic-LM, ProgramFC, and learning from automated feedback, aiming to address LLM limitations in complex reasoning tasks. Why it matters: Improving the reliability of LLMs is crucial for high-stakes applications in finance, medicine, and law within the region and globally.
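The neuro-symbolic idea behind Logic-LM (the LLM translates text into symbolic form, and a deterministic solver performs the reasoning) can be sketched with a toy forward-chaining prover; the translation step, normally done by the LLM, is hard-coded here:

```python
def forward_chain(facts: set, rules: list) -> set:
    """Derive every provable atom by applying Horn-clause rules to a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# In Logic-LM the natural-language-to-symbols translation is produced by the
# LLM; for this toy syllogism it is written out by hand.
facts = {"penguin(tweety)"}
rules = [
    (frozenset({"penguin(tweety)"}), "bird(tweety)"),
    (frozenset({"penguin(tweety)"}), "cannot_fly(tweety)"),
]
provable = forward_chain(facts, rules)
print("cannot_fly(tweety)" in provable)  # → True
```

Because the solver step is deterministic and verifiable, any claim it derives can be traced back to explicit facts and rules, which is the reliability benefit the talk targets.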