GCC AI Research

Results for "anonymization"

CRC Seminar Series - Associate Professor Anamaria Costache

TII ·

Associate Professor Anamaria Costache from the Norwegian University of Science and Technology (NTNU) will present a seminar on Fully Homomorphic Encryption (FHE). The talk will cover recent advancements in FHE, its mathematical foundations, and implementation results. It will also address remaining challenges in the field. Why it matters: FHE's growing importance is driven by Machine Learning as a Service and the increasing value of secure computation, though the seminar itself has no direct connection to the Middle East.
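The core idea behind FHE is computing directly on ciphertexts. As a toy illustration only (not FHE itself), the classic Paillier scheme is *additively* homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The sketch below uses deliberately tiny, insecure primes purely to make the algebra visible; it is not connected to the seminar's material.

```python
import math
import random

# Toy Paillier parameters -- tiny, insecure primes, for illustration only.
p, q = 17, 19
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)    # Carmichael's lambda

def L(x):
    return (x - 1) // n

# Precomputed decryption constant: modular inverse of L(g^lam mod n^2).
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m, rng=random):
    """Encrypt m < n with fresh randomness r coprime to n."""
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: ciphertext product = plaintext sum.
c = (encrypt(3) * encrypt(4)) % n2   # decrypts to 7
```

FHE schemes extend this idea to arbitrary circuits (both addition and multiplication on ciphertexts), which is what makes outsourced computation on encrypted data possible.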

Digital Privacy in Personalized Pricing and New Directions in Web3

MBZUAI ·

Xi Chen from NYU Stern gave a talk at MBZUAI on digital privacy in personalized pricing using differential privacy. The talk also covered research in Web3 and decentralized finance, including delta hedging of liquidity positions on Uniswap V3, and highlighted open problems in the field. Why it matters: The talk suggests MBZUAI's interest in exploring the intersection of AI, privacy, and blockchain technologies, reflecting growing trends in data protection and decentralized systems.
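Differential privacy typically protects aggregate queries (such as counting how many customers would pay above a threshold) by adding calibrated noise. A minimal sketch of the standard Laplace mechanism, which is a general illustration and not Chen's specific implementation:

```python
import math
import random

def laplace_sample(scale, rng):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon, rng=random):
    """Epsilon-differentially-private count.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_sample(1.0 / epsilon, rng)

# Example: how many customers' willingness-to-pay exceeds 100?
willingness_to_pay = [120, 80, 150, 60, 200]
noisy = dp_count(willingness_to_pay, lambda v: v > 100, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; repeated queries consume the privacy budget additively.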

Forget-MI: Machine Unlearning for Forgetting Multimodal Information in Healthcare Settings

arXiv ·

Researchers from MBZUAI introduce Forget-MI, a machine unlearning method tailored for multimodal medical data, enhancing privacy by removing specific patient data from AI models. Forget-MI utilizes loss functions and perturbation techniques to unlearn both unimodal and joint data representations. The method reduces vulnerability to membership inference attacks and achieves more complete data removal than existing techniques, while preserving overall model performance.
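As a rough illustration of the general idea (this is not the Forget-MI method itself, which operates on multimodal representations), unlearning objectives of this kind are often built by combining gradient descent on the retained data with gradient ascent on the forget set. A minimal sketch using a one-parameter least-squares model:

```python
def grad(w, x, y):
    """Gradient of the squared error (w*x - y)^2 with respect to w."""
    return 2.0 * (w * x - y) * x

def unlearn_step(w, retain, forget, lr=0.1, alpha=1.0):
    """One unlearning update (illustrative sketch, not Forget-MI):
    descend on the retain-set loss, ascend on the forget-set loss.
    alpha trades off forgetting strength against retained accuracy."""
    g = sum(grad(w, x, y) for x, y in retain)
    g -= alpha * sum(grad(w, x, y) for x, y in forget)
    return w - lr * g

# One step: the model moves toward the retained point and away
# from the point being forgotten.
w_new = unlearn_step(1.0, retain=[(1.0, 2.0)], forget=[(1.0, 0.0)])
```

After the step, the loss on the forgotten example rises while the loss on the retained example falls, which is the behavior membership inference attacks are then used to verify.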

Evaluating Web Search Engines Results for Personalization and User Tracking

arXiv ·

This paper presents six experiments evaluating personalization and user tracking in web search engine results. The experiments involve comparing search results based on VPN location (including UAE vs others), logged-in status, network type, search engine, browser, and trained Google accounts. The study measures total hits, first hit, and correlation between hits to identify patterns of personalization. Why it matters: The findings shed light on the extent of filter bubble effects and potential biases in search results for users in the UAE and globally.
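Correlation between two engines' (or two accounts') result lists can be quantified in several ways; the paper's exact metric is not specified here, but a simple illustrative choice is Jaccard overlap of the top-k URLs:

```python
def result_overlap(list_a, list_b, k=10):
    """Jaccard similarity of the top-k results of two ranked lists.

    Returns 1.0 for identical top-k sets, 0.0 for disjoint ones.
    Illustrative metric only -- not necessarily the paper's measure."""
    a, b = set(list_a[:k]), set(list_b[:k])
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Example: comparing results fetched via two VPN locations.
uae_results = ["example.com/a", "example.com/b", "example.com/c"]
us_results = ["example.com/a", "example.com/b", "example.com/x"]
score = result_overlap(uae_results, us_results)
```

Low overlap across otherwise-identical queries is evidence of personalization; rank-aware measures such as rank-biased overlap additionally weight agreement near the top of the list.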

A new playbook for patient privacy in the age of foundation models

MBZUAI ·

MBZUAI researchers Darya Taratynova and Shahad Hardan developed Forget-MI, a method for making clinical AI models "unlearn" specific patient data without retraining the entire model. Forget-MI addresses the challenge of removing patient data from AI models trained on multimodal records (like chest X-rays and reports), a requirement driven by regulations such as GDPR and HIPAA. The method unlearns both unimodal (image or text) and joint (image-text) associations while retaining overall accuracy using a late-fusion multimodal classifier. Why it matters: This research provides a practical solution to a critical privacy concern in healthcare AI, enabling compliance with data protection regulations and fostering trust in AI-driven medical applications.

Following in the footsteps of the Godfather

MBZUAI ·

MBZUAI master's graduate Rohit Bharadwaj is pursuing a Ph.D. at the University of Edinburgh, following in the footsteps of Geoffrey Hinton. His research focuses on developing generative models, specifically diffusion models, to anonymize datasets while preserving utility, addressing GDPR compliance. He aims to balance privacy protection with the need for useful data in AI systems. Why it matters: This highlights the growing importance of MBZUAI as a feeder institution for top global AI research programs and the increasing focus on privacy-preserving AI technologies.

Powerful predictions and privacy

MBZUAI ·

MBZUAI Assistant Professor Samuel Horváth is researching federated learning to address the tension between data privacy and the predictive power of machine learning models. Federated learning trains models on decentralized data, keeping sensitive information on devices. Horváth's research focuses on designing algorithms that can efficiently train on distributed data while respecting user privacy. Why it matters: This work is crucial for advancing AI in sensitive domains like healthcare, where privacy regulations limit centralized data collection.
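The central server-side step in federated learning is aggregating model updates from clients without ever seeing their raw data. A minimal FedAvg-style sketch (illustrative only, not Horváth's algorithms), where each client's parameters are weighted by its local dataset size:

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging of client model parameters.

    client_weights: list of parameter vectors (one list of floats per client),
    client_sizes: number of local training examples per client.
    Returns the size-weighted average -- the new global model."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Example: two hospitals train locally and share only parameters,
# never patient records; the second holds three times as much data.
global_model = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

In a full system this averaging runs once per communication round, after each client performs a few local gradient steps; much of the research effort goes into reducing communication cost and handling heterogeneous client data.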