GCC AI Research

Results for "Explainable AI"

Evaluating Models and their Explanations

MBZUAI

This article discusses growing concerns about the interpretability of large deep learning models. It highlights a talk by Danish Pruthi, an Assistant Professor at the Indian Institute of Science (IISc), Bangalore, who presented a framework for quantifying the value of explanations and argued for more holistic model evaluation. Pruthi's talk also covered geographically representative artifacts in the outputs of text-to-image models, and how well conversational LLMs push back on false assumptions in user prompts. Why it matters: Addressing interpretability and evaluation is crucial for building trustworthy and reliable AI systems, particularly in sensitive applications within the Middle East and globally.

Community-Based Early-Stage Chronic Kidney Disease Screening using Explainable Machine Learning for Low-Resource Settings

arXiv

This paper introduces an explainable machine learning framework for early-stage chronic kidney disease (CKD) screening, specifically designed for low-resource settings in Bangladesh and South Asia. The framework utilizes a community-based dataset from Bangladesh and evaluates multiple ML classifiers with feature selection techniques. Results show that the ML models achieve high accuracy and sensitivity, outperforming existing screening tools and demonstrating strong generalizability across independent datasets from India, the UAE, and Bangladesh.
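The framework's emphasis on explainability implies that each prediction should come with per-feature reasoning. A minimal sketch of that idea follows; the feature names and weights are illustrative placeholders, not values from the paper, and a transparent weighted score stands in for the paper's actual ML classifiers:

```python
# Illustrative, transparent CKD screening score: the "explanation" is simply
# each feature's contribution to the total. Weights are hypothetical.
FEATURE_WEIGHTS = {
    "age_over_60": 1.0,
    "hypertension": 1.5,
    "diabetes": 1.5,
    "proteinuria": 2.0,
}

def ckd_risk_score(patient: dict) -> tuple:
    """Return (risk score, per-feature contributions)."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * float(patient.get(name, 0))
        for name in FEATURE_WEIGHTS
    }
    return sum(contributions.values()), contributions

score, why = ckd_risk_score({"age_over_60": 1, "diabetes": 1})
# score == 2.5; 'why' maps each feature to its contribution
```

In a real deployment the weights would be learned from the community dataset, but the point of the sketch holds: a clinician can read off exactly which risk factors drove a positive screen.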

Sir Michael Brady on why healthcare AI must move from detection to articulation

MBZUAI

Sir Michael Brady, professor at Oxford and MBZUAI, argues that AI in healthcare must move beyond pattern recognition to causal understanding. He states that clinicians require AI models to articulate their reasoning behind diagnoses and therapy recommendations, not just provide statistical scores. He believes AI's immediate impact will be in personalized medicine, tailoring treatments to the individual rather than relying on epidemiological averages. Why it matters: This perspective highlights the critical need for explainable AI in sensitive domains like healthcare, paving the way for more trustworthy and clinically relevant AI applications in the region.

Towards trustworthy generative AI

MBZUAI

MBZUAI faculty member Kun Zhang is researching methods to improve the reliability of generative AI, particularly in healthcare applications. Current generative AI models often act as "black boxes," making it difficult to understand why a specific result was produced. Zhang's research focuses on incorporating causal relationships into AI systems so that they produce more accurate and meaningful outputs. Why it matters: Improving the trustworthiness of generative AI is crucial for sensitive sectors like healthcare and for ensuring responsible AI deployment across the region.

Explainable Fact Checking for Statistical and Property Claims

MBZUAI

EURECOM researchers developed data-driven verification methods that use structured datasets to assess statistical and property claims. For statistical claims, the approach translates the text of a claim into an SQL query over a relational database; for property claims, it verifies against knowledge graphs and generates explanations. Why it matters: The methods aim to support fact-checkers by efficiently labeling claims with interpretable explanations, potentially combating misinformation in the region and beyond.
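The statistical-claim pathway can be sketched end to end with a toy relational table. The table, values, and `check_claim` helper below are hypothetical stand-ins, not the EURECOM system; the sketch only shows the shape of "claim in, SQL query out, verdict with evidence back":

```python
import sqlite3

# Toy relational table standing in for a structured statistical dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE population (country TEXT, year INTEGER, millions REAL)")
conn.executemany(
    "INSERT INTO population VALUES (?, ?, ?)",
    [("UAE", 2020, 9.9), ("Qatar", 2020, 2.9)],
)

def check_claim(country: str, year: int, claimed_millions: float,
                tolerance: float = 0.5) -> bool:
    """Verify 'country had ~X million people in year' against the table."""
    row = conn.execute(
        "SELECT millions FROM population WHERE country = ? AND year = ?",
        (country, year),
    ).fetchone()
    if row is None:
        return False  # no supporting evidence found
    return abs(row[0] - claimed_millions) <= tolerance

print(check_claim("UAE", 2020, 10.0))  # True: within tolerance of 9.9
```

The retrieved row doubles as the interpretable explanation a fact-checker would see alongside the verdict.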

Interpretable and synergistic deep learning for visual explanation and statistical estimations of segmentation of disease features from medical images

arXiv

The study compares deep learning models trained via transfer learning from ImageNet (TII-models) against those trained solely on medical images (LMI-models) for disease segmentation. Results show that combining outputs from both model types can improve segmentation performance by up to 10% in certain scenarios. A repository of models, code, and over 10,000 medical images is available on GitHub to facilitate further research.
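Combining the two model families' outputs can be as simple as averaging per-pixel probability maps before thresholding. The sketch below uses made-up probability values and plain averaging; it illustrates the idea of output fusion, not the paper's specific combination method:

```python
# Toy fusion of two segmentation probability maps: average per pixel, then
# binarize. The 2x2 "maps" below are made-up values for illustration.
def combine_masks(probs_a, probs_b, threshold=0.5):
    """Average two per-pixel probability maps and binarize the result."""
    return [
        [1 if (a + b) / 2 >= threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(probs_a, probs_b)
    ]

tii = [[0.9, 0.2], [0.6, 0.1]]   # transfer-learned (ImageNet-initialized) model
lmi = [[0.7, 0.4], [0.3, 0.2]]   # medical-image-only model
print(combine_masks(tii, lmi))   # [[1, 0], [0, 0]]
```

Averaging lets pixels where the models disagree fall below the decision threshold, which is one intuition for why complementary models can outperform either alone.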

Actionable and responsible AI in Medicine: a geometric deep learning approach

MBZUAI

Pietro Liò from the University of Cambridge will discuss geometric deep learning techniques for building a digital patient twin using graph and hypergraph representation learning. The talk will focus on integrating computational biology and deep learning, considering physiological, clinical, and molecular variables. He will also cover explainable methodologies for clinicians and protein design using diffusion models. Why it matters: This highlights the growing interest in applying advanced AI techniques like geometric deep learning and diffusion models to healthcare challenges in the region, particularly for personalized medicine.
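The core operation in graph representation learning over patient variables is message passing: each node updates its representation from its neighbours. A minimal sketch follows; the patient graph, variable names, feature values, and mean-aggregation rule are all hypothetical illustrations, not Liò's method:

```python
# Hypothetical patient graph: nodes are clinical variables, edges connect
# related variables. One round of mean-neighbour message passing.
graph = {
    "blood_pressure": ["heart_rate", "age"],
    "heart_rate": ["blood_pressure"],
    "age": ["blood_pressure"],
}
features = {"blood_pressure": 1.0, "heart_rate": 0.5, "age": 0.25}

def message_pass(graph, features):
    """Update each node with the mean of its own and its neighbours' features."""
    updated = {}
    for node, neighbours in graph.items():
        values = [features[node]] + [features[n] for n in neighbours]
        updated[node] = sum(values) / len(values)
    return updated

print(message_pass(graph, features))
```

Stacking such rounds (with learned transforms instead of a plain mean) is how graph neural networks build up the patient-level representations a digital twin would use.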