GCC AI Research

Safeguarding AI-for-health systems

MBZUAI · Significant research

Summary

Researchers from MBZUAI, KAUST, and Mila are collaborating on methods for identifying and mitigating malicious actors in federated learning systems used for health data analysis. These systems aggregate anonymized model updates from numerous devices to generate insights for healthcare improvements. The team's research, accepted at ICLR 2023, uses variance-reduction techniques to counteract the disruptive effect of skewed or corrupted updates submitted by dishonest participants. Why it matters: Protecting the integrity of AI-driven health systems is crucial for the reliability and safety of insights derived from sensitive patient data, in the GCC region and globally.
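The paper's exact variance-reduction algorithm is not reproduced here, but the problem it tackles can be illustrated with one standard robust-aggregation rule, the coordinate-wise trimmed mean (this is a generic sketch, not the team's method; all values are synthetic):

```python
import numpy as np

def robust_aggregate(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean: sort client updates per coordinate
    and drop the most extreme values from both ends before averaging,
    so a small minority of corrupted updates cannot drag the result.
    (Illustrative only; not the aggregation rule from the paper.)"""
    updates = np.sort(np.asarray(updates), axis=0)
    k = int(len(updates) * trim_frac)
    return updates[k:len(updates) - k].mean(axis=0)

# Eight honest clients report model updates near the true value 1.0;
# two dishonest clients submit wildly inflated updates.
honest = [np.array([1.0 + 0.01 * i]) for i in range(8)]
malicious = [np.array([100.0]), np.array([100.0])]

naive = np.mean(honest + malicious, axis=0)    # pulled toward 100
robust = robust_aggregate(honest + malicious)  # stays near 1.0
```

A plain average lets two dishonest clients out of ten shift the aggregate by an order of magnitude; the trimmed mean discards the extremes and stays close to the honest consensus.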


Related

Safeguarding AI medical imaging

MBZUAI ·

An MBZUAI team developed a self-ensembling vision transformer to enhance the security of AI in medical imaging. The model aims to protect patient anonymity and ensure the validity of medical image analysis. It addresses vulnerabilities where AI systems can be manipulated, leading to misinterpretations with potentially harmful consequences in healthcare. Why it matters: This research is crucial for building trust and enabling the safe deployment of AI in sensitive medical applications, protecting against fraud and ensuring patient safety.

Forget-MI: Machine Unlearning for Forgetting Multimodal Information in Healthcare Settings

arXiv ·

Researchers from MBZUAI introduce Forget-MI, a machine-unlearning method tailored to multimodal medical data that enhances privacy by removing specific patients' data from trained AI models. Forget-MI uses dedicated loss functions and perturbation techniques to unlearn both unimodal and joint data representations. It reduces vulnerability to membership inference attacks and removes the targeted data more effectively than existing techniques, while preserving overall model performance.
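Forget-MI's specific losses are not spelled out here, but the behavior any unlearning method targets can be seen in the gold-standard baseline it approximates without retraining: exact unlearning, i.e., refitting the model on the retained records only. A toy least-squares sketch (all data synthetic, names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "patient records": features X, noisy targets y.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

forget = np.arange(10)      # records a patient asked to be removed
retain = np.arange(10, 100)

def fit(idx):
    """Least-squares model fit on the given record subset."""
    return np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]

def loss(w, idx):
    """Mean squared error of model w on the given records."""
    return np.mean((X[idx] @ w - y[idx]) ** 2)

w_full = fit(np.arange(100))  # model trained on everything
w_unlearned = fit(retain)     # exact unlearning: refit without forget set

# The unlearned model fits the retained data at least as well, while its
# loss on the forgotten records rises back toward what a model that never
# saw them would show -- the gap that membership inference attacks probe.
```

Exact refitting is prohibitively expensive for large clinical models, which is precisely the gap retraining-free methods like Forget-MI aim to close.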

A new playbook for patient privacy in the age of foundation models

MBZUAI ·

MBZUAI researchers Darya Taratynova and Shahad Hardan developed Forget-MI, a method for making clinical AI models "unlearn" specific patient data without retraining the entire model. Forget-MI addresses the need, driven by regulations like GDPR and HIPAA, to remove patient data from AI models trained on multimodal records such as chest X-rays and their accompanying reports. Built on a late-fusion multimodal classifier, the method unlearns both unimodal (image or text) and joint (image-text) associations while retaining overall accuracy. Why it matters: This research provides a practical solution to a critical privacy concern in healthcare AI, enabling compliance with data protection regulations and fostering trust in AI-driven medical applications.
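The late-fusion design mentioned above keeps per-modality encoders independent, with their embeddings meeting only at the final classifier head; this separation is what makes it possible to address unimodal and joint representations individually. A minimal sketch (dimensions, weights, and names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Stand-in for a trained per-modality encoder
    (e.g., an X-ray backbone or a report text model)."""
    return np.tanh(x @ W)

# Illustrative dimensions: 64-d image features and 32-d text features,
# each mapped to a 16-d embedding, fused for a 2-class head.
params = {
    "W_img": rng.normal(size=(64, 16)) * 0.1,
    "W_txt": rng.normal(size=(32, 16)) * 0.1,
    "W_head": rng.normal(size=(32, 2)) * 0.1,
}

def late_fusion_logits(x_img, x_txt, p):
    """Late fusion: modalities are encoded independently and only
    concatenated at the classifier head, so image-only, text-only,
    and joint pathways remain separately addressable."""
    z = np.concatenate([encode(x_img, p["W_img"]),
                        encode(x_txt, p["W_txt"])])
    return z @ p["W_head"]

logits = late_fusion_logits(rng.normal(size=64), rng.normal(size=32), params)
```

Because each modality's embedding occupies its own slice of the fused vector, an unlearning objective can perturb the image pathway, the text pathway, or their joint combination without touching the others.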

Enhancing Human Touch in Healthcare: The Role of Generative AI and Multimodal Technologies

MBZUAI ·

Ehsan Hoque from the University of Rochester gave a talk at MBZUAI discussing how to integrate AI into healthcare to improve access and equity. He emphasized that technology should align with values and infrastructure, advocating for AI solutions developed through collaboration between computer scientists and healthcare professionals. Hoque presented examples like using AI to quantify movement disorders and improve empathy skills. Why it matters: This highlights the importance of human-centered AI development in the GCC region, particularly in sensitive sectors like healthcare, and MBZUAI's role in fostering such discussions.