An MBZUAI team has developed a self-ensembling vision transformer to harden AI systems used in medical imaging. The model aims to protect patient anonymity and preserve the validity of medical image analysis, addressing vulnerabilities in which AI systems can be manipulated into misinterpretations with potentially harmful consequences for healthcare. Why it matters: This research is crucial for building trust and enabling the safe deployment of AI in sensitive medical applications, protecting against fraud and ensuring patient safety.
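The core idea behind a self-ensembling transformer can be illustrated with a minimal sketch: attach a classifier to several intermediate blocks of the network and take a majority vote over their predictions, so that an adversarial perturbation crafted to fool the final layer is outvoted by the earlier ones. This is a simplified illustration under assumed shapes, not the team's exact architecture.

```python
import numpy as np

def self_ensemble_predict(block_logits):
    """Majority vote over classifiers attached to intermediate
    transformer blocks (simplified sketch, not the published model).

    block_logits: array of shape (n_blocks, n_classes) -- one logit
    vector per intermediate-block classifier for a single image.
    """
    votes = np.argmax(block_logits, axis=1)                 # each block votes for a class
    counts = np.bincount(votes, minlength=block_logits.shape[1])
    return int(np.argmax(counts))                           # most-voted class wins

# A perturbation that flips only the final block's prediction
# leaves the ensemble decision unchanged while most blocks agree.
logits = np.array([
    [2.0, 0.1],   # block 1 -> class 0
    [1.5, 0.3],   # block 2 -> class 0
    [0.2, 1.8],   # final block fooled -> class 1
])
print(self_ensemble_predict(logits))  # -> 0
```

The intuition is that an attacker must now fool a majority of internal classifiers simultaneously, which is considerably harder than fooling a single output head.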
Researchers from MBZUAI, KAUST, and Mila are collaborating to develop methods for identifying and mitigating the impact of malicious actors in federated learning systems used for health data analysis. These systems aggregate anonymized data from numerous devices to generate insights for healthcare improvements. The team's research, accepted at ICLR 2023, focuses on using variance reduction techniques to counteract the disruptive effects of skewed or corrupted data submitted by dishonest users. Why it matters: Protecting the integrity of AI-driven health systems is crucial for ensuring the reliability and safety of insights derived from sensitive patient data in the GCC region and globally.
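To see why robust aggregation matters, consider what a single corrupted client update does to a plain average. A generic defense is a coordinate-wise trimmed mean, sketched below; this is an illustrative robust aggregator, not the variance-reduction method from the ICLR 2023 paper.

```python
import numpy as np

def trimmed_mean_aggregate(updates, trim_k=1):
    """Coordinate-wise trimmed mean: drop the trim_k largest and
    smallest values per coordinate before averaging. A generic
    robust aggregator (not the paper's exact technique) showing
    how extreme, possibly malicious, client updates get discarded."""
    stacked = np.sort(np.stack(updates), axis=0)     # sort each coordinate across clients
    return stacked[trim_k:len(updates) - trim_k].mean(axis=0)

honest = [np.array([1.0, -0.5]), np.array([1.1, -0.4]),
          np.array([0.9, -0.6]), np.array([1.05, -0.45])]
malicious = [np.array([100.0, 50.0])]                # one corrupted submission

plain = np.mean(honest + malicious, axis=0)          # dragged far from the honest updates
robust = trimmed_mean_aggregate(honest + malicious)  # stays near the honest mean
```

A plain mean here is pulled to roughly 20.8 in the first coordinate by the single outlier, while the trimmed mean stays at 1.05, close to the honest clients' consensus.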
MBZUAI, in collaboration with the Manara Center for Coexistence and Dialogue, hosted a panel discussion on the intersection of AI and medical image computing. Jiebo Luo, a professor at the University of Rochester, discussed his work applying AI to healthcare, including moving beyond classification to semantic description and extending deployment from hospitals to home telemedicine. Why it matters: This highlights the increasing focus on AI applications in healthcare within the Middle East, particularly at institutions like MBZUAI, which are fostering discussions on the ethical and practical implications of AI in medicine.
MBZUAI doctoral student Umaima Rahman is researching domain adaptation and generalization in deep learning for medical imaging to improve AI model performance across diverse hospitals and equipment. Her work focuses on building models that learn consistent features across different data sources to ensure reliability in various healthcare settings. Rahman emphasizes that generalization in healthcare AI is a necessity, especially in resource-limited settings, and aims to develop AI that assists clinicians rather than replaces them. Why it matters: This research addresses a critical challenge in deploying AI in healthcare, ensuring that models can be reliably used in diverse settings, particularly benefiting developing countries and improving global healthcare accessibility.
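One common way to encourage features that stay consistent across data sources is to add an alignment penalty to the training loss that shrinks the gap between per-domain feature statistics. The sketch below uses a simple mean-alignment proxy (an MMD-style formulation chosen for illustration, not Rahman's specific objective; the hospital names are made up).

```python
import numpy as np

def alignment_penalty(features_by_domain):
    """Squared distance of each domain's mean feature vector from the
    overall mean. Minimizing this alongside the task loss nudges the
    network toward domain-invariant representations (generic sketch)."""
    means = np.stack([f.mean(axis=0) for f in features_by_domain])
    center = means.mean(axis=0)
    return float(((means - center) ** 2).sum())

rng = np.random.default_rng(0)
hospital_a = rng.normal(0.0, 1.0, (100, 8))          # features from scanner A
hospital_b = rng.normal(0.0, 1.0, (100, 8)) + 2.0    # systematic shift at hospital B
print(alignment_penalty([hospital_a, hospital_b]))   # large: domains are misaligned
print(alignment_penalty([hospital_a, hospital_a]))   # zero: identical distributions
```

In practice this penalty would be computed on minibatches drawn from each site and added to the supervised loss with a weighting coefficient.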
MBZUAI Ph.D. student Raza Imam and colleagues presented a new benchmark called MediMeta-C to test the robustness of medical vision-language models (MVLMs) under real-world image corruptions. They found that top-performing MVLMs on clean data often fail under mild corruption, with fundoscopy models particularly vulnerable. To address this, they developed RobustMedCLIP (RMC), a lightweight defense using few-shot LoRA tuning to improve model robustness. Why it matters: This research highlights the critical need for robustness testing in medical AI to ensure reliability in clinical settings, particularly in resource-constrained environments where image quality may be compromised.
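A corruption benchmark like MediMeta-C boils down to re-evaluating a fixed model on the same test set under corruptions of increasing severity and tracking the accuracy drop. The sketch below shows that evaluation loop with one assumed corruption type (Gaussian noise with a made-up sigma schedule); the benchmark itself covers a wider corruption suite.

```python
import numpy as np

def gaussian_noise(images, severity):
    """One corruption type at severity 1-5, in the spirit of
    MediMeta-C; the noise schedule here is an assumption."""
    rng = np.random.default_rng(severity)
    return np.clip(images + rng.normal(0, 0.05 * severity, images.shape), 0, 1)

def robustness_curve(predict, images, labels, severities=(1, 2, 3, 4, 5)):
    """Accuracy of `predict` on the same test set at each severity."""
    return [float((predict(gaussian_noise(images, s)) == labels).mean())
            for s in severities]

# Toy test set: bright vs. dark 4x4 "images" with a threshold classifier.
images = np.concatenate([np.full((50, 4, 4), 0.8), np.full((50, 4, 4), 0.2)])
labels = np.array([1] * 50 + [0] * 50)
predict = lambda x: (x.mean(axis=(1, 2)) > 0.5).astype(int)
print(robustness_curve(predict, images, labels))
```

Plotting accuracy against severity for each model gives the robustness profile the benchmark reports; a model whose curve collapses at mild severities is the failure mode the authors observed in fundoscopy.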