GCC AI Research

A prescription for privacy

MBZUAI · Significant research

Summary

MBZUAI researchers developed FeSViBS, a new federated split learning technique for vision transformers that addresses data scarcity and privacy concerns in healthcare image classification. The method combines federated learning and split learning to train models collaboratively without sharing sensitive patient data directly. It overcomes limitations of traditional centralized training and vulnerabilities in federated learning. Why it matters: This approach enables the development of AI-powered healthcare applications while adhering to stringent data privacy regulations, unlocking the potential of machine learning in medical imaging.
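The core idea of split learning described above can be illustrated with a toy sketch. This is not FeSViBS's actual architecture; the layer shapes, client count, and function names are all hypothetical. It only shows the split: early layers run on each client, and only intermediate activations (never raw images) reach the server.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 clients, each holding its own early layers locally,
# plus one shared server-side head. Shapes are illustrative only.
W_client = {c: rng.normal(size=(16, 8)) for c in range(3)}
W_server = rng.normal(size=(8, 2))

def client_forward(c, x):
    # Early layers execute on the client; only this activation leaves it.
    return np.tanh(x @ W_client[c])

def server_forward(h):
    # The server completes the forward pass on the received activations.
    return h @ W_server

x = rng.normal(size=(4, 16))   # a private batch held by client 0
h = client_forward(0, x)       # what is actually transmitted (not x)
logits = server_forward(h)
print(logits.shape)            # (4, 2)
```

In a federated variant of this split setup, many clients would share the server head while their private batches, like `x` here, never leave the device.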


Related

A new playbook for patient privacy in the age of foundation models

MBZUAI ·

MBZUAI researchers Darya Taratynova and Shahad Hardan developed Forget-MI, a method for making clinical AI models "unlearn" specific patient data without retraining the entire model. Forget-MI addresses the challenge of removing patient data from AI models trained on multimodal records (like chest X-rays and reports) due to regulations like GDPR and HIPAA. The method unlearns both unimodal (image or text) and joint (image-text) associations while retaining overall accuracy using a late-fusion multimodal classifier. Why it matters: This research provides a practical solution to a critical privacy concern in healthcare AI, enabling compliance with data protection regulations and fostering trust in AI-driven medical applications.
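As a rough intuition for machine unlearning, here is a minimal sketch on a toy logistic-regression model. These are not Forget-MI's loss functions or perturbations; it is only the generic pattern of raising the loss on a "forget" set while a "retain" term preserves overall accuracy. All names and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad(w, X, y):
    # Logistic-regression gradient of the mean cross-entropy loss.
    p = 1 / (1 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

# Ordinary training on all 100 records.
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(4)
for _ in range(200):
    w -= 0.5 * grad(w, X, y)

# Unlearning: gradient ASCENT on the 5-record forget set,
# descent on the retain set to keep overall performance.
forget, retain = (X[:5], y[:5]), (X[5:], y[5:])
w_unlearned = w.copy()
for _ in range(50):
    w_unlearned += 0.05 * grad(w_unlearned, *forget)
    w_unlearned -= 0.05 * grad(w_unlearned, *retain)
```

Forget-MI additionally has to handle paired modalities (image and text), which this single-modality toy does not attempt.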

Powerful predictions and privacy

MBZUAI ·

MBZUAI Assistant Professor Samuel Horváth is researching federated learning to address the tension between data privacy and the predictive power of machine learning models. Federated learning trains models on decentralized data, keeping sensitive information on devices. Horváth's research focuses on designing algorithms that can efficiently train on distributed data while respecting user privacy. Why it matters: This work is crucial for advancing AI in sensitive domains like healthcare, where privacy regulations limit centralized data collection.
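The decentralized training loop described here can be sketched in miniature with federated averaging, a standard federated-learning baseline (not necessarily Horváth's specific algorithms; the data, model, and hyperparameters below are hypothetical). Each client takes a local gradient step on its own data, and only model weights, never the data, are sent for aggregation.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_step(w, X, y, lr=0.1):
    # One gradient step of least-squares regression on the client's
    # private data; the data itself never leaves the client.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Four clients, each with its own private dataset.
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(4)]
w_global = np.zeros(5)

for _ in range(10):  # communication rounds
    local_weights = [local_step(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)  # server averages weights only

print(w_global.shape)  # (5,)
```

Much of the research effort in this area goes into what this sketch omits: communication efficiency, clients with very different data distributions, and formal privacy guarantees on the shared weights.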

Safeguarding AI medical imaging

MBZUAI ·

An MBZUAI team developed a self-ensembling vision transformer to enhance the security of AI in medical imaging. The model aims to protect patient anonymity and ensure the validity of medical image analysis. It addresses vulnerabilities where AI systems can be manipulated, leading to misinterpretations with potentially harmful consequences in healthcare. Why it matters: This research is crucial for building trust and enabling the safe deployment of AI in sensitive medical applications, protecting against fraud and ensuring patient safety.

Forget-MI: Machine Unlearning for Forgetting Multimodal Information in Healthcare Settings

arXiv ·

Researchers from MBZUAI introduce Forget-MI, a machine unlearning method tailored for multimodal medical data, enhancing privacy by removing specific patient data from AI models. Forget-MI uses dedicated loss functions and perturbation techniques to unlearn both unimodal and joint data representations. Compared with existing techniques, it better reduces vulnerability to membership inference attacks and more effectively removes the targeted data, while preserving overall model performance.