This article discusses the increasing concerns about the interpretability of large deep learning models. It highlights a talk by Danish Pruthi, an Assistant Professor at the Indian Institute of Science (IISc), Bangalore, who presented a framework for quantifying the value of explanations and argued for holistic model evaluation. Pruthi's talk also touched on geographically representative artifacts in text-to-image models and on how well conversational LLMs challenge false assumptions. Why it matters: Addressing interpretability and evaluation is crucial for building trustworthy and reliable AI systems, particularly in sensitive applications within the Middle East and globally.
This paper introduces an explainable machine learning framework for early-stage chronic kidney disease (CKD) screening, specifically designed for low-resource settings in Bangladesh and South Asia. The framework utilizes a community-based dataset from Bangladesh and evaluates multiple ML classifiers with feature selection techniques. Results show that the ML models achieve high accuracy and sensitivity, outperforming existing screening tools and demonstrating strong generalizability across independent datasets from India, the UAE, and Bangladesh.
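The paper's exact features, classifiers, and selection method are not detailed in this summary, but the general pattern it describes, feature selection followed by an interpretable classifier evaluated on sensitivity, can be sketched as follows. Everything here (the synthetic data, the choice of `SelectKBest` with logistic regression, the value of `k`) is an illustrative assumption, not the paper's actual pipeline.

```python
# Hedged sketch of a feature-selection + classifier screening pipeline.
# Synthetic data stands in for the community-based CKD dataset; the real
# study's features and models may differ.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a community-screening dataset
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=8)),     # keep the 8 most predictive features
    ("clf", LogisticRegression(max_iter=1000)),  # coefficients are directly inspectable
])
pipe.fit(X_tr, y_tr)

# Screening tools prioritize sensitivity (recall on the positive class)
sensitivity = recall_score(y_te, pipe.predict(X_te))
kept = pipe.named_steps["select"].get_support(indices=True)
print(f"sensitivity: {sensitivity:.2f}, selected feature indices: {kept}")
```

A linear model over a small selected feature set keeps the screening tool auditable: each retained feature's coefficient can be inspected by clinicians, which matters in the low-resource settings the paper targets.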
Sir Michael Brady, professor at Oxford and MBZUAI, argues that AI in healthcare must move beyond pattern recognition to causal understanding. He states that clinicians require AI models to articulate their reasoning behind diagnoses and therapy recommendations, not just provide statistical scores. He believes AI's immediate impact will be in personalized medicine, tailoring treatments to the individual rather than relying on epidemiological averages. Why it matters: This perspective highlights the critical need for explainable AI in sensitive domains like healthcare, paving the way for more trustworthy and clinically relevant AI applications in the region.
MBZUAI faculty Kun Zhang is researching methods to improve the reliability of generative AI, particularly in healthcare applications. Current generative AI models often act as "black boxes," making it difficult to understand why a specific result was produced. Zhang's research focuses on incorporating causal relationships into AI systems to ensure more accurate and meaningful information. Why it matters: Improving the trustworthiness of generative AI is crucial for sensitive sectors like healthcare and ensuring responsible AI deployment across the region.
The study compares deep learning models trained via transfer learning from ImageNet (TII-models) against those trained solely on medical images (LMI-models) for disease segmentation. Results show that combining outputs from both model types can improve segmentation performance by up to 10% in certain scenarios. A repository of models, code, and over 10,000 medical images is available on GitHub to facilitate further research.
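The summary does not specify how the two model families' outputs are combined, so the snippet below shows one common fusion rule as an assumption: averaging the per-pixel probability maps from a transfer-learned model and a medical-only model, then thresholding to a binary mask. The random arrays are stand-ins for real model outputs.

```python
# Hedged sketch: fuse two segmentation models' per-pixel probability maps
# by simple averaging, then threshold. The study's actual combination
# strategy may differ.
import numpy as np

rng = np.random.default_rng(0)
prob_transfer = rng.random((64, 64))  # stand-in: ImageNet-pretrained (TII) model output
prob_medical = rng.random((64, 64))   # stand-in: medical-images-only (LMI) model output

fused = (prob_transfer + prob_medical) / 2.0  # average the probability maps
mask = fused > 0.5                            # binary segmentation decision

print(mask.shape, mask.dtype)
```

Averaging is the simplest ensemble; weighted averaging or per-region selection are natural variants when one model family is known to be stronger on certain structures.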
MBZUAI researchers have developed K2 Think, an open-source AI reasoning system for interpretable energy decisions. K2 Think uses long chain-of-thought supervised fine-tuning and reinforcement learning to improve accuracy on multi-step reasoning in complex energy problems. The system breaks down challenges into smaller, auditable steps and uses test-time scaling for real-time adaptation. Why it matters: The open-source nature of K2 Think promotes transparency, trust, and compliance in critical energy environments while allowing secure deployment on sovereign infrastructure.
Pietro Liò from the University of Cambridge will discuss geometric deep learning techniques for building a digital patient twin using graph and hypergraph representation learning. The talk will focus on integrating computational biology and deep learning, considering physiological, clinical, and molecular variables. He will also cover explainable methodologies for clinicians and protein design using diffusion models. Why it matters: This highlights the growing interest in applying advanced AI techniques like geometric deep learning and diffusion models to healthcare challenges in the region, particularly for personalized medicine.