GCC AI Research

Results for "explainability"

Evaluating Models and their Explanations

MBZUAI ·

This article discusses growing concerns about the interpretability of large deep learning models. It highlights a talk by Danish Pruthi, an Assistant Professor at the Indian Institute of Science (IISc), Bangalore, who presented a framework for quantifying the value of explanations and argued for more holistic model evaluation. The talk also touched on how geographically representative the outputs of text-to-image models are and how well conversational LLMs challenge false assumptions in user queries. Why it matters: Addressing interpretability and evaluation is crucial for building trustworthy and reliable AI systems, particularly in sensitive applications within the Middle East and globally.
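One common way to make "the value of explanations" concrete is a simulatability test: an explanation is valuable if it helps a second model (or a person) predict what the explained model will do. The sketch below is a toy illustration of that general idea on made-up data, not Pruthi's actual framework; the teacher model, the feature mask used as the "explanation", and all numbers are hypothetical.

```python
# Toy simulatability-style score for an explanation (illustrative only).
# The "teacher" is a fixed linear model; its "explanation" is the set of
# features it relies on most. Value = how much that explanation helps a
# student model reproduce the teacher's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
teacher_weights = rng.normal(size=20)                 # hypothetical teacher model
y_teacher = (X @ teacher_weights > 0).astype(int)     # labels the student must simulate

top5 = np.argsort(np.abs(teacher_weights))[-5:]       # "explanation": most-used features

def student_accuracy(features):
    X_tr, X_te, y_tr, y_te = train_test_split(features, y_teacher, random_state=0)
    return LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

baseline = student_accuracy(X[:, rng.choice(20, 5, replace=False)])  # random 5 features
with_expl = student_accuracy(X[:, top5])
print(f"explanation value (accuracy gain): {with_expl - baseline:.3f}")
```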

Community-Based Early-Stage Chronic Kidney Disease Screening using Explainable Machine Learning for Low-Resource Settings

arXiv ·

This paper introduces an explainable machine learning framework for early-stage chronic kidney disease (CKD) screening, designed specifically for low-resource settings in Bangladesh and South Asia. The framework uses a community-based dataset from Bangladesh and evaluates multiple ML classifiers combined with feature-selection techniques. The resulting models achieve high accuracy and sensitivity, outperform existing screening tools, and generalize well across independent datasets from India, the UAE, and Bangladesh.
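As a rough illustration of the kind of pipeline described (not the authors' actual code), the sketch below pairs feature selection with a tree-based classifier whose feature importances double as a simple global explanation. The file name, target column, and hyperparameters are placeholders.

```python
# Minimal sketch of an explainable screening classifier with feature selection.
# "ckd_screening.csv" and the "ckd" column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("ckd_screening.csv")                      # community-collected tabular data
X, y = df.drop(columns=["ckd"]), df["ckd"]                 # y: 1 = early-stage CKD, 0 = healthy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),              # keep the 10 most informative features
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
pipe.fit(X_tr, y_tr)

print("sensitivity:", recall_score(y_te, pipe.predict(X_te)))   # screening prioritizes recall
selected = X.columns[pipe.named_steps["select"].get_support()]
print(dict(zip(selected, pipe.named_steps["clf"].feature_importances_.round(3))))
```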

Interpretable and synergistic deep learning for visual explanation and statistical estimations of segmentation of disease features from medical images

arXiv ·

The study compares deep learning models trained via transfer learning from ImageNet (TII-models) against those trained solely on medical images (LMI-models) for disease segmentation. Results show that combining outputs from both model types can improve segmentation performance by up to 10% in certain scenarios. A repository of models, code, and over 10,000 medical images is available on GitHub to facilitate further research.
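The paper's exact fusion scheme is not spelled out in this summary, but the basic idea of combining the two model families can be sketched as a weighted average of their per-pixel probabilities. In the sketch below, `tii_model` and `lmi_model` are hypothetical callables standing in for the two trained networks.

```python
# Illustrative fusion of two segmentation models' outputs (not the paper's method).
import numpy as np

def fused_mask(image, tii_model, lmi_model, weight=0.5, threshold=0.5):
    """Average per-pixel probabilities from both models, then threshold."""
    p_tii = tii_model(image)                  # ImageNet transfer-learned model (TII)
    p_lmi = lmi_model(image)                  # trained only on medical images (LMI)
    p = weight * p_tii + (1 - weight) * p_lmi
    return (p >= threshold).astype(np.uint8)

def dice(pred, target, eps=1e-8):
    """Dice overlap used to compare fused masks against ground truth."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```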

On Transferability of Machine Learning Models

MBZUAI ·

This article discusses domain shift in machine learning, where the data a model is tested on differs from the data it was trained on, and methods to mitigate it via domain adaptation and domain generalization. Domain adaptation assumes access to labeled source data plus unlabeled data from the target domain during training; domain generalization uses labeled data from one or more source domains and must generalize to target domains never seen during training. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.
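Many concrete techniques fall under these headings; one classic unsupervised domain adaptation step, shown below as an assumed example rather than anything from the article, is CORAL-style correlation alignment, which re-colors source features to match the target domain's statistics before a classifier is trained on the source labels.

```python
# CORAL-style feature alignment: an illustrative domain adaptation step.
import numpy as np
from scipy import linalg

def coral_align(source_feats, target_feats, eps=1e-6):
    """Whiten source features, then re-color them with the target covariance."""
    d = source_feats.shape[1]
    cs = np.cov(source_feats, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target_feats, rowvar=False) + eps * np.eye(d)
    whiten = linalg.fractional_matrix_power(cs, -0.5).real
    recolor = linalg.fractional_matrix_power(ct, 0.5).real
    centered = source_feats - source_feats.mean(axis=0)
    return centered @ whiten @ recolor + target_feats.mean(axis=0)

# A classifier fit on coral_align(source, target) with the source labels can then
# be applied to the unlabeled target domain with reduced covariate shift.
```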

Towards Trustworthy AI: From High-dimensional Statistics to Causality

MBZUAI ·

Dr. Xinwei Sun from Microsoft Research Asia presented research on trustworthy AI, focusing on statistical learning with theoretical guarantees. The work covers methods for sparse recovery with false-discovery-rate analysis and causal inference tools for robustness and explainability. Consistency and identifiability are treated theoretically, with applications demonstrated in medical imaging analysis. Why it matters: The research contributes to addressing key limitations of current AI models regarding explainability, reproducibility, robustness, and fairness, which are crucial for real-world applications in sensitive fields like healthcare.
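To make the two ingredients named above concrete, the toy example below pairs a standard sparse-recovery baseline (the Lasso) with Benjamini-Hochberg FDR control over simple univariate tests. This is a textbook illustration on synthetic data, not Dr. Sun's methods, which come with stronger theoretical guarantees.

```python
# Textbook illustration: sparse recovery (Lasso) and Benjamini-Hochberg FDR control.
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p, k = 200, 50, 5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = 3.0                                           # only the first 5 features are active
y = X @ beta + rng.normal(size=n)

# Sparse recovery: the Lasso should select roughly the 5 active features.
lasso = LassoCV(cv=5).fit(X, y)
print("Lasso-selected features:", np.flatnonzero(lasso.coef_))

# FDR control: Benjamini-Hochberg over per-feature correlation p-values at level q.
pvals = np.array([stats.pearsonr(X[:, j], y)[1] for j in range(p)])
q = 0.1
order = np.argsort(pvals)
passed = pvals[order] <= q * np.arange(1, p + 1) / p
n_reject = passed.nonzero()[0].max() + 1 if passed.any() else 0
print("BH-selected features:", sorted(order[:n_reject]))
```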

Sir Michael Brady on why healthcare AI must move from detection to articulation

MBZUAI ·

Sir Michael Brady, professor at Oxford and MBZUAI, argues that AI in healthcare must move beyond pattern recognition to causal understanding. He states that clinicians need AI models to articulate the reasoning behind their diagnoses and therapy recommendations, not just provide statistical scores. He believes AI's immediate impact will be in personalized medicine, tailoring treatments to the individual rather than relying on epidemiological averages. Why it matters: This perspective highlights the critical need for explainable AI in sensitive domains like healthcare, paving the way for more trustworthy and clinically relevant AI applications in the region.

Explainable Fact Checking for Statistical and Property Claims

MBZUAI ·

EURECOM researchers developed data-driven verification methods that use structured datasets to assess statistical and property claims. For statistical claims, the approach translates claim text into SQL queries over relational databases; for property claims, it verifies the claim against knowledge graphs and generates explanations. Why it matters: The methods aim to support fact-checkers by efficiently labeling claims with interpretable explanations, potentially combating misinformation in the region and beyond.
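For the statistical-claim path, the flavor of the idea can be sketched as follows: map a claim to a SQL query over a reference table, and let the query result serve as both the verdict and the explanation. The table, values, and query below are an illustrative toy, not EURECOM's system.

```python
# Toy sketch: verify a statistical claim by translating it to SQL over a reference table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE population (country TEXT, year INTEGER, millions REAL)")
conn.executemany("INSERT INTO population VALUES (?, ?, ?)",
                 [("UAE", 2023, 9.5), ("Qatar", 2023, 2.7)])   # illustrative numbers only

# Claim: "The UAE's population exceeded 10 million in 2023."
claim_sql = "SELECT millions FROM population WHERE country = 'UAE' AND year = 2023"
(value,) = conn.execute(claim_sql).fetchone()

verdict = "supported" if value > 10 else "refuted"
print(f"Claim is {verdict}: the reference table reports {value} million.")
print(f"Explanation (evidence query): {claim_sql}")
```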