GCC AI Research

Results for "Trustworthy AI"

Towards Trustworthy AI: From High-dimensional Statistics to Causality

MBZUAI

Dr. Xinwei Sun of Microsoft Research Asia presented research on trustworthy AI, focusing on statistical learning with theoretical guarantees. The work covers sparse-recovery methods with false-discovery-rate analysis and causal-inference tools for robustness and explainability, establishes consistency and identifiability results, and demonstrates applications in medical imaging analysis. Why it matters: The research addresses key limitations of current AI models around explainability, reproducibility, robustness, and fairness, all of which are crucial for real-world applications in sensitive fields like healthcare.
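
The summary above only names the techniques; as a concrete illustration, here is a minimal sparse-recovery sketch in Python (scikit-learn assumed). It is a generic Lasso example, not Dr. Sun's method, whose contribution is the false-discovery-rate theory layered on top of such estimators: a "false discovery" is a recovered nonzero coefficient that lies outside the true support.

```python
# Minimal sparse-recovery sketch with the Lasso (not Dr. Sun's method;
# his work adds theoretical false-discovery-rate guarantees on top).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 200, 500, 5           # samples, features, true nonzeros
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 3.0                  # true sparse signal
y = X @ beta + 0.5 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)

# A "false discovery" is a selected feature outside the true support.
false_discoveries = [j for j in selected if j >= k]
print(f"selected {len(selected)} features, "
      f"{len(false_discoveries)} false discoveries")
```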

Towards trustworthy generative AI

MBZUAI

MBZUAI faculty member Kun Zhang is researching methods to improve the reliability of generative AI, particularly in healthcare applications. Current generative AI models often act as "black boxes," making it difficult to understand why a specific result was produced. Zhang's research focuses on incorporating causal relationships into AI systems so that they produce more accurate and meaningful information. Why it matters: Improving the trustworthiness of generative AI is crucial for sensitive sectors like healthcare and for responsible AI deployment across the region.
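
To make the causal angle concrete, the toy structural causal model below (pure NumPy; the variables and numbers are invented for illustration, not taken from Zhang's work) shows why observing a value and intervening on it license different conclusions:

```python
# Toy structural causal model: disease -> test_result.
# Illustrates, in general terms, why encoding causal structure
# changes what a model should conclude. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

disease = rng.random(N) < 0.1                      # P(disease) = 0.1
test = np.where(disease, rng.random(N) < 0.9,      # sensitivity 0.9
                          rng.random(N) < 0.05)    # false-positive 0.05

# Observational query: P(disease | test positive) is informative.
print("P(disease | test+):", disease[test].mean())

# Intervention: forcing the test to read positive (do(test=1)) breaks
# the arrow from disease, so the disease rate stays at its prior.
print("P(disease | do(test+)):", disease.mean())
```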

Towards Trustworthy AI-Generated Text

MBZUAI

Xiuying Chen from KAUST presented her work on improving the trustworthiness of AI-generated text, focusing on accuracy and robustness. Her research analyzes causes of hallucination in language models, tracing them to weaknesses in semantic understanding and neglect of input knowledge, and proposes solutions. She also demonstrated the vulnerability of language models to noisy input and showed how to enhance robustness using data augmentation techniques. Why it matters: Improving the reliability of AI-generated text is crucial for its deployment in sensitive domains like healthcare and scientific discovery, where accuracy is paramount.
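
As a rough sketch of what noise-based robustness augmentation can look like in practice (the perturbation function below is a generic illustration, not Chen's actual technique):

```python
# Sketch of character-level noise augmentation for robustness work.
# The perturbation below is a generic illustration, not Chen's method.
import random

def add_noise(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """Randomly drop, duplicate, or swap characters at the given rate."""
    rng = random.Random(seed)
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        if rng.random() < rate:
            op = rng.choice(["drop", "dup", "swap"])
            if op == "dup":
                out.extend([chars[i], chars[i]])
            elif op == "swap" and i + 1 < len(chars):
                out.extend([chars[i + 1], chars[i]])
                i += 1
            # "drop": emit nothing
        else:
            out.append(chars[i])
        i += 1
    return "".join(out)

clean = "The patient shows no sign of pneumonia."
noisy = add_noise(clean, rate=0.1)
# Training or evaluating on (clean, noisy) pairs tests whether the
# model's output stays stable under realistic input corruption.
print(noisy)
```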

Trustworthiness Assurance for Autonomous Software Systems in the AI Era

MBZUAI

Dr. Youcheng Sun from the University of Manchester presented research on ensuring the trustworthiness of AI systems through formal verification, software testing, and explainable AI. He discussed applying these techniques to challenges such as copyright protection for AI models. Dr. Sun's research has been funded by organizations including Google, the Ethereum Foundation, and the UK’s Defence Science and Technology Laboratory. Why it matters: As AI adoption grows in the GCC, ensuring the safety, dependability, and trustworthiness of these systems is crucial for public trust and responsible innovation.
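
Formal verification of even a tiny learned function can be sketched with an SMT solver such as Z3 (the z3-solver package is assumed; the model and property below are invented for illustration, not taken from Dr. Sun's talk):

```python
# Sketch: use the Z3 SMT solver to check a property of a tiny "model"
# f(x) = relu(2x - 1). Property: for x in [0, 1], f(x) <= 1.
# The model and property are illustrative assumptions.
from z3 import Real, If, Solver, sat

x = Real("x")
f = If(2 * x - 1 > 0, 2 * x - 1, 0)   # ReLU(2x - 1)

s = Solver()
s.add(x >= 0, x <= 1)
s.add(f > 1)                           # search for a counterexample

if s.check() == sat:
    print("property violated at x =", s.model()[x])
else:
    print("verified: f(x) <= 1 for all x in [0, 1]")
```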

Latent Space Exploration for Safe and Trustworthy AI Models

MBZUAI

Hassan Sajjad from Dalhousie University presented research on exploring the latent space of AI models to assess their safety and trustworthiness. He discussed use cases in which analyzing the latent space helps explain the robustness-generalization tradeoff in adversarial training and evaluate language comprehension. Sajjad's work aims to build better AI models and increase trust in their capabilities by examining model internals. Why it matters: Intrinsic evaluation of model internals will become increasingly important for improving AI safety and robustness.
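
A common concrete instance of latent-space analysis is the linear probe: extract hidden states from a pretrained model and test whether a simple classifier can read a property off them. The sketch below (transformers and scikit-learn packages assumed, toy data invented) illustrates the general idea, not Sajjad's specific methodology:

```python
# Sketch of latent-space analysis via a linear probe: if a linear
# classifier separates the classes from hidden states, the property
# is linearly encoded in that layer's latent space.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

texts = ["the movie was great", "the movie was awful"] * 20
labels = [1, 0] * 20               # toy sentiment labels

with torch.no_grad():
    enc = tok(texts, padding=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state[:, 0]   # [CLS]-position vector

probe = LogisticRegression(max_iter=1000).fit(hidden.numpy(), labels)
print("probe accuracy:", probe.score(hidden.numpy(), labels))
```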

‘Rising Stars’ in AI research explore reasoning, trust, and real-world impact

KAUST

KAUST hosted the fifth Rising Stars in AI Symposium, convening 25 early-career AI researchers selected from more than 430 applicants. Discussions centered on reasoning in AI models, AI's role in addressing global challenges, embodied systems, and the necessity of trustworthy AI. Participants, including Dr. Sahar Abdelnabi from the ELLIS Institute Tübingen, emphasized the symposium's value for collaboration and for identifying future AI research directions. Why it matters: The event highlights KAUST's commitment to fostering emerging AI talent and addressing critical issues in the field, with a focus on AI's real-world impact and ethical considerations.

Empowering Large Language Models with Reliable Reasoning

MBZUAI

Liangming Pan from UCSB presented research on building reliable generative AI agents by integrating symbolic representations with large language models (LLMs). The neuro-symbolic strategy combines the flexibility of language models with precise knowledge representation and verifiable reasoning. The work covers Logic-LM, ProgramFC, and learning from automated feedback, aiming to address LLMs' limitations in complex reasoning tasks. Why it matters: Improving the reliability of LLMs is crucial for high-stakes applications in finance, medicine, and law within the region and globally.
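
The division of labor behind Logic-LM-style systems can be sketched in a few lines: a language model translates the question into symbolic facts and a query (stubbed out below), and a deterministic symbolic engine derives the answer by explicit inference. The toy relation and solver here are illustrative assumptions, not the published implementation:

```python
# Sketch of the Logic-LM-style division of labor: an LLM translates a
# natural-language question into a formal program, and a symbolic
# engine (here, plain Python) executes it deterministically.
# The "translation" is hard-coded; in Logic-LM an LLM produces it.

question = ("Alice is older than Bob. Bob is older than Carol. "
            "Is Alice older than Carol?")

# Step 1 (LLM, stubbed): natural language -> symbolic facts and query.
facts = {("Alice", "Bob"), ("Bob", "Carol")}       # older(x, y)
query = ("Alice", "Carol")

# Step 2 (symbolic solver): transitive closure of the older-than relation.
def entails(facts: set, query: tuple) -> bool:
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return query in closure

# The answer is verifiable: it follows from explicit inference steps,
# not from the opaque internals of a language model.
print(entails(facts, query))    # True
```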