Mykel Kochenderfer from Stanford University gave a talk on building robust decision-making systems for autonomy, highlighting the challenge of balancing safety and efficiency in uncertain environments. The talk addressed computational tractability and establishing trust in these systems. Kochenderfer outlined methodologies and research applications for building safer systems, drawing from his work on air traffic control, unmanned aircraft, and automated driving. Why it matters: The development of safe and reliable autonomous systems is crucial for various applications in the region, and insights from experts like Kochenderfer can guide research and development efforts at institutions like MBZUAI.
The article discusses the potential of AI in piloting planes, noting that current autopilot systems still require human input. Martin Takáč from MBZUAI expresses confidence in AI's ability to handle flight scenarios, citing its capacity for extensive simulation and error minimization through reinforcement learning. AI is already used in aviation for tasks like route planning and maintenance. Why it matters: The piece highlights the growing role of AI in aviation and raises important questions about the future of autonomous flight in the region.
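The idea of training flight policies through extensive simulation can be illustrated with a toy sketch. The following is not Takáč's method or any real aviation system; it is a minimal tabular Q-learning loop on an invented "altitude hold" task, where repeated simulated trials gradually minimize the deviation from a target altitude:

```python
import random

# Illustrative sketch only: tabular Q-learning on an invented "altitude
# hold" task, echoing the idea of learning flight behavior in simulation.
# States, actions, and rewards are made up for illustration.

ACTIONS = [-1, 0, +1]          # descend, hold, climb (one altitude step)
TARGET = 5                     # desired altitude level
LEVELS = range(0, 11)          # discrete altitude levels 0..10

def step(alt, action):
    """Simulated dynamics: move one level; reward closeness to target."""
    new_alt = min(max(alt + action, 0), 10)
    reward = -abs(new_alt - TARGET)   # penalty grows with deviation
    return new_alt, reward

random.seed(0)
Q = {(s, a): 0.0 for s in LEVELS for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(500):
    alt = random.choice(list(LEVELS))
    for _ in range(20):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(alt, x)])
        new_alt, r = step(alt, a)
        # standard Q-learning update toward reward plus discounted future value
        best_next = max(Q[(new_alt, x)] for x in ACTIONS)
        Q[(alt, a)] += alpha * (r + gamma * best_next - Q[(alt, a)])
        alt = new_alt

# The greedy policy learned from simulated trials steers toward the target:
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in LEVELS}
```

After enough simulated episodes, the policy climbs when below the target level and descends when above it, which is the "error minimization through repeated simulation" intuition in miniature.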
Dr. Youcheng Sun from the University of Manchester presented on ensuring the trustworthiness of AI systems using formal verification, software testing, and explainable AI. He discussed applying these techniques to challenges like copyright protection for AI models. Dr. Sun's research has been funded by organizations including Google, Ethereum Foundation, and the UK’s Defence Science and Technology Laboratory. Why it matters: As AI adoption grows in the GCC, ensuring the safety, dependability, and trustworthiness of these systems is crucial for public trust and responsible innovation.
Patrick van der Smagt, Director of AI Research at Volkswagen Group, discussed the use of generative machine learning models for predicting and controlling complex stochastic systems in robotics. The talk highlighted examples in robotics and beyond and addressed the challenges of achieving quality and trust in AI systems. He also mentioned his involvement in a European industry initiative on trust in AI and his membership in the AI Council of the State of Bavaria. Why it matters: Control of complex systems in robotics and trust in AI are key issues for the further development of autonomous systems, especially in industrial applications within the GCC region.
MBZUAI faculty member Kun Zhang is researching methods to improve the reliability of generative AI, particularly in healthcare applications. Current generative AI models often act as "black boxes," making it difficult to understand why a specific result was produced. Zhang's research focuses on incorporating causal relationships into AI systems to ensure more accurate and meaningful information. Why it matters: Improving the trustworthiness of generative AI is crucial for sensitive sectors like healthcare and for ensuring responsible AI deployment across the region.
Christian Montag from Ulm University gave a talk about assessing attitudes towards AI, covering the IMPACT framework (Interaction, Modality, Person, Area, Country/Culture, and Transparency). He discussed how factors like age, gender, personality, and culture relate to attitudes toward AI, and how those attitudes link to trust in automation and specific AI models like ChatGPT and Ernie Bot. Montag's research explores the intersection of psychology, neuroscience, behavioral economics, and computer science, focusing on the impact of AI on the human mind. Why it matters: Understanding public perception of AI is crucial for responsible development and deployment, especially in the Arab world, where cultural and demographic factors can significantly shape attitudes.
Xiuying Chen from KAUST presented her work on improving the trustworthiness of AI-generated text, focusing on accuracy and robustness. Her research analyzes causes of hallucination in language models, such as gaps in semantic understanding and neglect of input knowledge, and proposes solutions. She also demonstrated the vulnerability of language models to noise and proposed augmentation techniques to enhance robustness. Why it matters: Improving the reliability of AI-generated text is crucial for its deployment in sensitive domains like healthcare and scientific discovery, where accuracy is paramount.
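Noise augmentation of this general kind can be sketched briefly. The code below is a generic illustration, not Chen's specific technique: it corrupts copies of training sentences with character-level noise so that a model trained on the augmented corpus also sees noisy variants of its inputs.

```python
import random

# Generic sketch of character-level noise augmentation (not the method
# from the talk): produce noisy variants of each training sentence so a
# model learns to tolerate perturbed inputs.

def add_noise(text, p=0.1, rng=None):
    """Randomly drop or duplicate each character with total probability p."""
    rng = rng or random.Random(0)
    chars = []
    for ch in text:
        r = rng.random()
        if r < p / 2:
            continue                # drop the character
        elif r < p:
            chars.append(ch + ch)   # duplicate the character
        else:
            chars.append(ch)        # keep the character unchanged
    return "".join(chars)

def augment(corpus, copies=2):
    """Return the clean corpus followed by noisy variants of each sentence."""
    rng = random.Random(42)
    out = list(corpus)
    for sent in corpus:
        for _ in range(copies):
            out.append(add_noise(sent, p=0.15, rng=rng))
    return out

corpus = ["the model summarizes the input", "accuracy is paramount"]
augmented = augment(corpus)   # 2 clean sentences + 4 noisy variants
```

Training on `augmented` rather than `corpus` is the simplest form of the robustness-through-augmentation idea: the model's loss is computed on both clean and perturbed inputs.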
Giuseppe Loianno from NYU presented research on creating "Super Autonomous" robots (USARC) that are Unmanned, Small, Agile, Resilient, and Collaborative. The research focuses on learning models, control, and navigation policies for single and collaborative robots operating in challenging environments. The talk highlighted the potential of these robots in logistics, reconnaissance, and other time-sensitive tasks. Why it matters: This points to growing research interest in advanced robotics in the region, especially given the focus on smart cities and automation.