GCC AI Research

Trustworthiness Assurance for Autonomous Software Systems in the AI Era

MBZUAI · Notable

Summary

Dr. Youcheng Sun from the University of Manchester presented research on ensuring the trustworthiness of AI systems through formal verification, software testing, and explainable AI. He discussed applying these techniques to challenges such as copyright protection for AI models. Dr. Sun's research has been funded by organizations including Google, the Ethereum Foundation, and the UK's Defence Science and Technology Laboratory. Why it matters: As AI adoption grows in the GCC, ensuring the safety, dependability, and trustworthiness of these systems is crucial for public trust and responsible innovation.


Related

Towards Trustworthy AI: From High-dimensional Statistics to Causality

MBZUAI ·

Dr. Xinwei Sun from Microsoft Research Asia presented research on trustworthy AI, focusing on statistical learning with theoretical guarantees. The work covers methods for sparse recovery with false-discovery rate analysis and causal inference tools for robustness and explainability. The talk addressed consistency and identifiability theoretically, with applications demonstrated in medical imaging analysis. Why it matters: The research contributes to addressing key limitations of current AI models regarding explainability, reproducibility, robustness, and fairness, which are crucial for real-world applications in sensitive fields like healthcare.

Automated Decision Making for Safety Critical Applications

MBZUAI ·

Mykel Kochenderfer from Stanford University gave a talk on building robust decision-making for autonomous systems, highlighting the challenge of balancing safety and efficiency in uncertain environments. The talk addressed computational tractability and establishing trust in these systems. Kochenderfer outlined methodologies and research applications for building safer systems, drawing on his work in air traffic control, unmanned aircraft, and automated driving. Why it matters: The development of safe and reliable autonomous systems is crucial for various applications in the region, and insights from experts like Kochenderfer can guide research and development efforts at institutions like MBZUAI.

The Role of AI in Revolutionizing Autonomous Vehicles

MBZUAI ·

Daniela Rus from MIT CSAIL discussed the role of AI in revolutionizing autonomous vehicles, emphasizing the need for risk evaluation, intent understanding, and adaptation to diverse driving styles. The talk highlighted integrating risk and behavior analysis into autonomous vehicle control systems, showing how Social Value Orientation (SVO) can be incorporated into decision-making for self-driving vehicles. Why it matters: This research advances the development of safer and more adaptive autonomous vehicles, crucial for their successful deployment in diverse real-world driving scenarios within the GCC region and globally.

Software-Directed Hardware Reliability for ML Systems

MBZUAI ·

Abdulrahman Mahmoud, a postdoctoral fellow at Harvard University, discussed software-directed tools and techniques for processor design and reliability enhancement in ML systems. He emphasized the need for a nuanced approach to numerical data formats supported by robust hardware and advocated for integrating reliability as a foundational element of the design process. Why it matters: This research addresses the critical challenge of hardware reliability in AI processors, particularly relevant as the field moves towards hardware-software co-design for sustained growth.