GCC AI Research

Results for "AI provenance"

Towards Trustworthy AI: From High-dimensional Statistics to Causality

MBZUAI ·

Dr. Xinwei Sun from Microsoft Research Asia presented research on trustworthy AI, focusing on statistical learning with theoretical guarantees. The work covers methods for sparse recovery with false-discovery-rate analysis, along with causal inference tools for robustness and explainability. Consistency and identifiability were addressed theoretically, with applications demonstrated in medical imaging analysis. Why it matters: The research addresses key limitations of current AI models in explainability, reproducibility, robustness, and fairness, which are crucial for real-world applications in sensitive fields like healthcare.

Sovereign AI: Rethinking Autonomy in the Age of Global Interdependence

arXiv ·

This paper proposes a framework for understanding AI sovereignty as a balance between autonomy and interdependence, accounting for global data flows, supply chains, and standards. It introduces a planner's model with policy heuristics for equalizing marginal returns across sovereignty pillars and for setting the degree of openness. The model is applied to India and the Middle East (Saudi Arabia and the UAE), finding that managed interdependence, rather than isolation, is key to AI sovereignty.

Trustworthiness Assurance for Autonomous Software Systems in the AI Era

MBZUAI ·

Dr. Youcheng Sun from the University of Manchester presented on ensuring the trustworthiness of AI systems through formal verification, software testing, and explainable AI. He discussed applying these techniques to challenges such as copyright protection for AI models. Dr. Sun's research has been funded by organizations including Google, the Ethereum Foundation, and the UK's Defence Science and Technology Laboratory. Why it matters: As AI adoption grows in the GCC, ensuring the safety, dependability, and trustworthiness of these systems is crucial for public trust and responsible innovation.

Towards trustworthy generative AI

MBZUAI ·

MBZUAI faculty Kun Zhang is researching methods to improve the reliability of generative AI, particularly in healthcare applications. Current generative AI models often act as "black boxes," making it difficult to understand why a specific result was produced. Zhang's research focuses on incorporating causal relationships into AI systems to ensure more accurate and meaningful information. Why it matters: Improving the trustworthiness of generative AI is crucial for sensitive sectors like healthcare and ensuring responsible AI deployment across the region.

Time Travel: A Comprehensive Benchmark to Evaluate LMMs on Historical and Cultural Artifacts

arXiv ·

Researchers introduce TimeTravel, a benchmark dataset for evaluating large multimodal models (LMMs) on historical and cultural artifacts. The benchmark comprises 10,250 expert-verified samples across 266 cultures and 10 historical regions, designed to assess AI in tasks like classification and interpretation of manuscripts, artworks, inscriptions, and archaeological discoveries. The goal is to establish AI as a reliable partner in preserving cultural heritage and assisting researchers.

The AI Pentad, the CHARME²D Model, and an Assessment of Current-State AI Regulation

arXiv ·

This paper introduces the AI Pentad model, comprising humans/organizations, algorithms, data, computing, and energy, as a framework for AI regulation. It also presents the CHARME²D Model, which links the AI Pentad with regulatory enablers such as registration, monitoring, and enforcement. Using CHARME²D, the paper assesses AI regulatory efforts in the EU, China, the UAE, the UK, and the US, highlighting their respective strengths and weaknesses.