GCC AI Research

Results for "data protection"

Powerful predictions and privacy

MBZUAI

MBZUAI Assistant Professor Samuel Horváth is researching federated learning to address the tension between data privacy and the predictive power of machine learning models. Federated learning trains models on decentralized data, keeping sensitive information on devices. Horváth's research focuses on designing algorithms that can efficiently train on distributed data while respecting user privacy. Why it matters: This work is crucial for advancing AI in sensitive domains like healthcare, where privacy regulations limit centralized data collection.
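
The federated learning loop this summary describes can be sketched in a few lines: each client updates the model on its own private data, and only the resulting weights (never the raw data) are sent back and averaged. The function names (`local_update`, `fed_avg`) and the least-squares objective are illustrative assumptions, not Horváth's actual algorithms.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local gradient steps on a least-squares objective
    (an illustrative stand-in for the client's private training)."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient on this client's data only
        w = w - lr * grad
    return w

def fed_avg(global_w, client_datasets):
    """One round of federated averaging: each client trains locally,
    and only the updated weights are averaged by the server, weighted
    by client dataset size. Raw data never leaves the device."""
    updates = [local_update(global_w, d) for d in client_datasets]
    sizes = [len(d[1]) for d in client_datasets]
    return np.average(updates, axis=0, weights=sizes)
```

Repeating `fed_avg` over many rounds drives the global model toward a solution fitted to all clients' data combined, even though the server never sees any of it.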

Building Planetary-Scale Collaborative Intelligence

MBZUAI

Sai Praneeth Karimireddy from UC Berkeley presented a talk on building planetary-scale collaborative intelligence, highlighting the challenges of using distributed data in machine learning due to data silos and ethical and legal restrictions. He proposed collaborative systems such as federated learning as a way to bring distributed data together while respecting privacy. The talk addressed the need for efficiency, reliability, and the management of divergent goals in these systems, drawing on tools from optimization, statistics, and economics. Why it matters: Collaborative AI systems can unlock valuable distributed data in the region, especially in sensitive sectors like healthcare, while ensuring privacy and addressing ethical concerns.

Research talk on Privacy and Security Issues in Speech

MBZUAI

Dr. Bhiksha Raj of Carnegie Mellon University, an expert in speech and audio processing, gave a research talk on privacy and security issues in speech processing, highlighting the unique privacy challenges posed by the biometric information embedded in speech. The talk covered the legal landscape and proposed solutions, including cryptographic and hashing-based methods and adversarial processing techniques. Why it matters: As speech-based interfaces become more prevalent in the Middle East, understanding and addressing the associated privacy risks is crucial for ethical AI development and deployment.

Hardware Security through the Lens of Dr ML

MBZUAI

NYU Abu Dhabi hosted a talk by Prof. Debdeep Mukhopadhyay on the intersection of machine learning and hardware security. The talk covered using ML/DL for side-channel attacks, leakage assessment in crypto-devices, and threats to hardware security primitives. Prof. Mukhopadhyay is a visiting professor at NYU Abu Dhabi and Institute Chair Professor at IIT Kharagpur. Why it matters: The talk highlights the growing importance of hardware security in modern systems and the role of machine learning in both attacking and defending hardware vulnerabilities.

Iranian drone attacks on Amazon’s Gulf data centers a harbinger of new tactics in future conflicts, experts say - Fortune

GCC AI Events

A recent Fortune article discusses the potential vulnerability of Gulf data centers, including those operated by Amazon, to drone attacks. Experts suggest that Iranian-backed groups may employ such tactics in future regional conflicts. The hypothetical scenario raises concerns about data security and infrastructure resilience in the region. Why it matters: Highlights the increasing importance of protecting critical digital infrastructure in the GCC from emerging security threats.

Safeguarding AI-for-health systems

MBZUAI

Researchers from MBZUAI, KAUST, and Mila are collaborating to develop methods for identifying and mitigating the impact of malicious actors in federated learning systems used for health data analysis. These systems aggregate anonymized data from numerous devices to generate insights for healthcare improvements. The team's research, accepted at ICLR 2023, focuses on using variance reduction techniques to counteract the disruptive effects of skewed or corrupted data submitted by dishonest users. Why it matters: Protecting the integrity of AI-driven health systems is crucial for ensuring the reliability and safety of insights derived from sensitive patient data in the GCC region and globally.
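
The summary credits the ICLR 2023 paper with variance reduction techniques. As a simpler illustration of the same goal, limiting the influence of skewed or corrupted client updates, the sketch below uses coordinate-wise median aggregation, a standard robust-aggregation building block rather than the paper's actual method; `coordinate_median` is a hypothetical helper name.

```python
import numpy as np

def coordinate_median(updates):
    """Aggregate client updates by taking the median of each coordinate.
    Unlike a plain mean, the median is barely moved by a minority of
    arbitrarily corrupted (skewed or malicious) updates."""
    return np.median(np.stack(updates), axis=0)
```

With a plain mean, a single dishonest client submitting extreme values can drag the aggregate arbitrarily far; the median bounds that influence as long as honest clients form a majority.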

Security-Enhanced Radio Access Networks for 5G OpenRAN

MBZUAI

Dr. Zhiqiang Lin from Ohio State University presented the Security-Enhanced Radio Access Network (SE-RAN) project to address cellular network threats using O-RAN. The project includes 5G-Spector, a framework for detecting L3 protocol exploits via MobiFlow and MobieXpert, and 5G-XSec, a framework leveraging deep learning and LLMs for threat analysis at the network edge. Dr. Lin also outlined a vision for AI convergence with cellular security for enhanced threat detection. Why it matters: Enhancing 5G security through AI and open architectures is critical for protecting next-generation mobile networks in the GCC region and globally.

A new playbook for patient privacy in the age of foundation models

MBZUAI

MBZUAI researchers Darya Taratynova and Shahad Hardan developed Forget-MI, a method for making clinical AI models "unlearn" specific patient data without retraining the entire model. Forget-MI addresses the challenge of removing patient data from AI models trained on multimodal records (such as chest X-rays and radiology reports), a requirement driven by regulations like GDPR and HIPAA. Working with a late-fusion multimodal classifier, the method unlearns both unimodal (image or text) and joint (image-text) associations while retaining overall accuracy. Why it matters: This research provides a practical solution to a critical privacy concern in healthcare AI, enabling compliance with data protection regulations and fostering trust in AI-driven medical applications.
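
Forget-MI's exact objective is not given in the summary, so the sketch below shows a generic gradient-based unlearning step of the kind common in this literature: ascend the loss on the data to be forgotten while descending it on retained data. The linear model and the helper name `unlearning_step` are illustrative assumptions, not the authors' method.

```python
import numpy as np

def unlearning_step(w, forget_batch, retain_batch, lr=0.05, lam=1.0):
    """One gradient-based unlearning step on a linear model: descend the
    loss on retained data while ascending it on the forget set, pushing
    the model away from what it learned from the forgotten patients."""
    Xf, yf = forget_batch
    Xr, yr = retain_batch
    grad_forget = Xf.T @ (Xf @ w - yf) / len(yf)  # loss gradient on forget set
    grad_retain = Xr.T @ (Xr @ w - yr) / len(yr)  # loss gradient on retain set
    return w - lr * (grad_retain - lam * grad_forget)
```

The weight `lam` trades off how aggressively the forget set is erased against preserving accuracy on the retained data.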