GCC AI Research

Safeguarding AI-for-health systems

MBZUAI · Significant research

Summary

Researchers from MBZUAI, KAUST, and Mila are collaborating to develop methods for identifying and mitigating the impact of malicious actors in federated learning systems used for health data analysis. These systems aggregate anonymized data from numerous devices to generate insights for healthcare improvements. The team's research, accepted at ICLR 2023, focuses on using variance reduction techniques to counteract the disruptive effects of skewed or corrupted data submitted by dishonest users. Why it matters: Protecting the integrity of AI-driven health systems is crucial for ensuring the reliability and safety of insights derived from sensitive patient data in the GCC region and globally.
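The core idea can be illustrated with a minimal sketch: when a server aggregates model updates from many clients, a plain average is fragile, since a single dishonest client can skew it arbitrarily, whereas a robust aggregator such as a coordinate-wise median tolerates a minority of corrupted submissions. This is an illustrative example of robust aggregation in general, not the specific variance-reduction method from the ICLR 2023 paper; all function names and the toy gradient values below are hypothetical.

```python
import numpy as np

def mean_aggregate(updates):
    # naive aggregation: a plain average, which a single
    # malicious update can pull arbitrarily far off course
    return np.mean(np.stack(updates), axis=0)

def median_aggregate(updates):
    # robust aggregation: the coordinate-wise median is
    # resistant to a minority of corrupted updates
    return np.median(np.stack(updates), axis=0)

# three honest clients report similar gradients;
# one malicious client submits a wildly skewed update
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
malicious = [np.array([100.0, -100.0])]

naive = mean_aggregate(honest + malicious)    # dragged toward the attacker
robust = median_aggregate(honest + malicious) # stays near the honest consensus
```

Here `robust` lands close to the honest clients' consensus of roughly `[1.0, 2.0]`, while `naive` is dominated by the attacker. The paper's contribution goes further: combining robust aggregation with variance reduction shrinks the spread among honest updates, which makes corrupted ones easier to detect and filter.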


Related

Safeguarding AI medical imaging

MBZUAI ·

An MBZUAI team developed a self-ensembling vision transformer to enhance the security of AI in medical imaging. The model aims to protect patient anonymity and ensure the validity of medical image analysis. It addresses vulnerabilities where AI systems can be manipulated, leading to misinterpretations with potentially harmful consequences in healthcare. Why it matters: This research is crucial for building trust and enabling the safe deployment of AI in sensitive medical applications, protecting against fraud and ensuring patient safety.

Making Machine Learning Safe for the World - New Lines Institute

The National ·

The New Lines Institute published a report analyzing the risks associated with advanced AI systems. It examines potential harms like disinformation, bias, and autonomous weapons. Why it matters: The report highlights the need for proactive safety measures and ethical guidelines in AI development to mitigate negative impacts in the Middle East and globally.