MBZUAI Assistant Professor Samuel Horváth is researching federated learning to address the tension between data privacy and the predictive power of machine learning models. Federated learning trains models on decentralized data, keeping sensitive information on devices. Horváth's research focuses on designing algorithms that can efficiently train on distributed data while respecting user privacy. Why it matters: This work is crucial for advancing AI in sensitive domains like healthcare, where privacy regulations limit centralized data collection.
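Federated averaging (FedAvg) is the canonical algorithm in this setting: clients train locally on private data and a server only aggregates model weights. The sketch below is a minimal, illustrative version using plain NumPy logistic regression on synthetic data; it is not Horváth's actual method, just the general scheme his work builds on.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on its private data (logistic regression).
    Raw data (X, y) never leaves the client; only weights are shared."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

def fedavg_round(global_w, clients):
    """Server averages client updates, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# toy demo: two clients hold disjoint private shards of the same task
rng = np.random.default_rng(0)
d = 5
true_w = rng.normal(size=d)
clients = []
for _ in range(2):
    X = rng.normal(size=(100, d))
    y = (X @ true_w > 0).astype(float)
    clients.append((X, y))

w = np.zeros(d)
for _ in range(20):          # 20 communication rounds
    w = fedavg_round(w, clients)
```

The key design point is that the server sees only weight vectors, never training examples, which is what makes the approach attractive for regulated domains.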
Xi Chen of NYU Stern gave a talk at MBZUAI on digital privacy in personalized pricing using differential privacy. He also covered research in Web3 and decentralized finance, including delta hedging of liquidity positions on Uniswap V3, and highlighted open problems in the field. Why it matters: The talk reflects MBZUAI's interest in the intersection of AI, privacy, and blockchain technologies, mirroring broader trends in data protection and decentralized systems.
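The core idea of differential privacy is to add calibrated noise to any released statistic so that no single individual's record can be inferred from it. Below is a minimal sketch of the standard Laplace mechanism; the pricing scenario and all numbers are hypothetical illustrations, not taken from Chen's work.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a noisy statistic satisfying epsilon-differential privacy:
    noise scale = sensitivity / epsilon (Laplace distribution)."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# hypothetical example: a retailer releases customers' average
# willingness-to-pay without exposing any individual's value
rng = np.random.default_rng(42)
prices = rng.uniform(10, 50, size=1000)  # each customer's private value, bounded in [10, 50]
n = len(prices)
sensitivity = (50 - 10) / n              # changing one customer shifts the mean by at most this
private_mean = laplace_mechanism(prices.mean(), sensitivity, epsilon=0.5, rng=rng)
```

Smaller `epsilon` means stronger privacy but more noise; the sensitivity bound is what ties the noise to the worst-case influence of one individual.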
An MBZUAI team developed a self-ensembling vision transformer to enhance the security of AI in medical imaging. The model aims to protect patient anonymity and ensure the validity of medical image analysis. It addresses vulnerabilities where AI systems can be manipulated, leading to misinterpretations with potentially harmful consequences in healthcare. Why it matters: This research is crucial for building trust and enabling the safe deployment of AI in sensitive medical applications, protecting against fraud and ensuring patient safety.
Technology Innovation Institute (TII) in Abu Dhabi has launched the UAE’s first secure cloud technologies programme via its Cryptography Research Center (CRC). The programme will focus on advancing Privacy Enhancing Technologies (PETs) like fully homomorphic encryption (FHE) and secure multi-party computation (MPC). TII researchers are also developing hardware accelerators to improve the efficiency of FHE. Why it matters: The initiative addresses growing security and privacy challenges in cloud computing, positioning the UAE as a leader in advanced cryptographic solutions for data protection.
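Secure multi-party computation can be illustrated with its simplest building block, additive secret sharing: each input is split into random shares, and parties compute on shares without ever seeing the underlying values. The toy sketch below shows that primitive only; it says nothing about TII's actual protocols.

```python
import secrets

P = 2**61 - 1  # a large prime modulus for share arithmetic

def share(x, n):
    """Split integer x into n additive shares mod P.
    Any n-1 shares together reveal nothing about x."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# two inputs, three parties: compute a sum without revealing either input
a_shares = share(123, 3)
b_shares = share(456, 3)
# each party adds its own shares locally, then results are recombined
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
result = reconstruct(sum_shares)  # 579, yet no single party saw 123 or 456
```

Addition of shares is trivial, as above; multiplication requires extra machinery (e.g. Beaver triples), which is where real MPC protocols get their complexity.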
Dr. Bhiksha Raj of Carnegie Mellon University, an expert in speech and audio processing, gave a research talk on privacy and security in speech processing, highlighting the unique challenges posed by the biometric information embedded in speech. The talk covered the legal landscape, proposed solutions such as cryptographic and hashing-based methods, and adversarial processing techniques. Why it matters: As speech-based interfaces become more prevalent in the Middle East, understanding and addressing the associated privacy risks is crucial for ethical AI development and deployment.
Researchers from KAUST, University of St. Andrews, and the Center for Unconventional Processes of Sciences have developed an uncrackable security system using optical chips. The system uses silicon chips with complex structures that are irreversibly changed to send information, achieving "perfect secrecy" through a one-time key. This method leverages classical physics and the second law of thermodynamics to ensure that keys are never stored, communicated, or recreated, making interception impossible. Why it matters: This breakthrough has the potential to revolutionize communications privacy globally, offering an unbreakable method for securing confidential data on public channels.
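The "perfect secrecy" the researchers invoke is the information-theoretic guarantee of the classical one-time pad: a truly random key, as long as the message and used exactly once, makes the ciphertext statistically independent of the plaintext. The sketch below shows that primitive in software for illustration only; the chip-based system's contribution is generating and destroying such keys physically rather than in code.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR plaintext with a one-time key of equal length (Vernam cipher).
    XOR is its own inverse, so the same function decrypts."""
    assert len(key) == len(plaintext), "key must match message length and never be reused"
    return bytes(p ^ k for p, k in zip(plaintext, key))

msg = b"meet at dawn"
key = secrets.token_bytes(len(msg))  # one-time key: random, used once, then destroyed
ct = otp_encrypt(msg, key)
pt = otp_encrypt(ct, key)            # decryption recovers the message
```

The scheme's weakness in practice is key distribution, which is exactly the problem the optical-chip work targets: the key is never stored or transmitted, so it cannot be intercepted.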
MBZUAI hosted a panel discussion in collaboration with the Manara Center for Coexistence and Dialogue. Chaoyang He, co-founder of FedML, presented on federated learning (FL), covering privacy/security, resource constraints, label scarcity, and scalable system design. FedML is a platform for zero-code, cross-platform, secure federated learning across industries like healthcare and finance. Why it matters: Federated learning is an important subfield for the GCC region, allowing privacy-preserving model training across distributed data sources.
MBZUAI researchers Darya Taratynova and Shahad Hardan developed Forget-MI, a method for making clinical AI models "unlearn" specific patient data without retraining the entire model. Forget-MI addresses the challenge of removing patient data from AI models trained on multimodal records (like chest X-rays and reports) due to regulations like GDPR and HIPAA. The method unlearns both unimodal (image or text) and joint (image-text) associations while retaining overall accuracy using a late-fusion multimodal classifier. Why it matters: This research provides a practical solution to a critical privacy concern in healthcare AI, enabling compliance with data protection regulations and fostering trust in AI-driven medical applications.
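A common way to approximate machine unlearning is to ascend the loss on the records to be forgotten while descending on the retained data, so the model sheds the targeted associations without losing overall accuracy. The sketch below shows that generic objective on a toy logistic-regression model; it illustrates the unlearning idea in general and is not the Forget-MI method itself, which operates on multimodal (image-text) models.

```python
import numpy as np

def unlearn_step(w, X_forget, y_forget, X_retain, y_retain, lr=0.05, lam=1.0):
    """One unlearning step: gradient *ascent* on the forget set,
    gradient descent on the retain set to preserve accuracy."""
    def grad(X, y):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        return X.T @ (preds - y) / len(y)
    return w + lr * grad(X_forget, y_forget) - lr * lam * grad(X_retain, y_retain)

# toy setup: train, then unlearn the first 10 records
rng = np.random.default_rng(1)
d = 4
true_w = rng.normal(size=d)
X = rng.normal(size=(200, d))
y = (X @ true_w > 0).astype(float)

w = np.zeros(d)
for _ in range(200):  # standard training
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (preds - y) / len(y)

X_f, y_f = X[:10], y[:10]   # records a patient asked to remove
X_r, y_r = X[10:], y[10:]
for _ in range(30):
    w = unlearn_step(w, X_f, y_f, X_r, y_r)
```

The retain-set term (`lam`) is what distinguishes unlearning from simple damage: without it, ascending the forget loss would degrade the whole model rather than surgically removing the targeted data's influence.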