GCC AI Research

Results for "Confidential AI"

OPAQUE Acquires Abu Dhabi-Developed Cryptographic AI Technology from TII, Extending Confidential AI Across the Full Lifecycle with Post-Quantum Protection

TII ·

OPAQUE, a San Francisco-based Confidential AI company, has acquired advanced cryptographic AI technologies from Abu Dhabi's Technology Innovation Institute (TII), the applied research pillar of ATRC. The acquired technology extends OPAQUE's platform with confidential AI model training based on multi-party computation and fully homomorphic encryption, alongside post-quantum cryptographic protections. The deal, overseen by H.E. Faisal Al Bannai and Ion Stoica, marks the first time UAE-developed cryptographic AI has been acquired and deployed globally by a leading US technology company. Why it matters: the acquisition strengthens OPAQUE's secure AI workflow capabilities and highlights the UAE's growing role in developing and exporting foundational AI technologies.
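
Multi-party computation of the kind named above can be illustrated with additive secret sharing, one of its basic building blocks. The sketch below is a toy illustration only, not OPAQUE's or TII's actual protocol: each input is split into random shares so that no single party sees it, yet the parties can jointly compute an aggregate.

```python
import random

def share(value, n_parties, modulus=2**31):
    # Split a value into n random shares that sum to it (mod modulus);
    # any n-1 shares reveal nothing about the value.
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus=2**31):
    return sum(shares) % modulus

secrets = [42, 7, 100]                       # each party's private input
all_shares = [share(s, 3) for s in secrets]
# Party i sums the i-th share of every input; no party sees a raw input.
partials = [sum(col) % 2**31 for col in zip(*all_shares)]
total = reconstruct(partials)                # 149 = 42 + 7 + 100
```

The names and three-party setup here are invented for illustration; production systems layer authenticated shares and secure channels on top of this idea.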

Powerful predictions and privacy

MBZUAI ·

MBZUAI Assistant Professor Samuel Horváth is researching federated learning to address the tension between data privacy and the predictive power of machine learning models. Federated learning trains models on decentralized data, keeping sensitive information on devices. Horváth's research focuses on designing algorithms that can efficiently train on distributed data while respecting user privacy. Why it matters: This work is crucial for advancing AI in sensitive domains like healthcare, where privacy regulations limit centralized data collection.
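
The federated setup described above can be sketched in a few lines. The toy FedAvg-style loop below is an illustration under assumed details (a 1-D linear model, synthetic data), not Horváth's algorithms: clients compute local updates on their private data, and the server only ever averages model weights.

```python
# Minimal federated-averaging sketch: raw data never leaves a client.

def local_update(w, data, lr=0.1):
    # One gradient step of a 1-D linear model y = w * x on local data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(global_w, client_datasets, rounds=50):
    # The server receives only updated weights and averages them.
    for _ in range(rounds):
        updates = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(updates) / len(updates)
    return global_w

# Two clients, each holding private samples of y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = fed_avg(0.0, clients)   # converges to the true slope 2.0
```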

Forget-MI: Machine Unlearning for Forgetting Multimodal Information in Healthcare Settings

arXiv ·

Researchers from MBZUAI introduce Forget-MI, a machine unlearning method tailored to multimodal medical data that enhances privacy by removing specific patients' data from trained AI models. Forget-MI combines dedicated unlearning loss functions with perturbation techniques to unlearn both unimodal and joint data representations. Compared with existing techniques, the method reduces vulnerability to membership inference attacks and removes the targeted data more effectively, while preserving overall model performance.
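
The idea of removing a record's influence can be illustrated with a deliberately naive baseline: fine-tune on the retained data only and check that the model no longer fits the forgotten record. This is a hedged toy sketch, not Forget-MI's loss-and-perturbation scheme; the data, model, and `fit` helper are invented for illustration.

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 100.0])  # last record is the one to forget

def fit(idx):
    # Closed-form least squares for y ~ w * x on the selected records.
    return float(X[idx] @ y[idx] / (X[idx] @ X[idx]))

w = fit([0, 1, 2, 3])   # model whose weight reflects all four records
retain = [0, 1, 2]

for _ in range(200):    # gradient descent on the retained loss only
    grad = 2 * np.mean((w * X[retain] - y[retain]) * X[retain])
    w -= 0.02 * grad

# A large error on the dropped record suggests its influence is gone;
# a membership-inference check would exploit exactly this gap.
err_forgotten = abs(w * X[3] - y[3])
```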

A prescription for privacy

MBZUAI ·

MBZUAI researchers developed FeSViBS, a new federated split learning technique for vision transformers that addresses data scarcity and privacy concerns in healthcare image classification. The method combines federated learning and split learning to train models collaboratively without sharing sensitive patient data directly. It overcomes limitations of traditional centralized training and vulnerabilities in federated learning. Why it matters: This approach enables the development of AI-powered healthcare applications while adhering to stringent data privacy regulations, unlocking the potential of machine learning in medical imaging.
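
Split learning's core move, sending intermediate activations instead of raw data, can be sketched as below. This is a minimal assumed illustration (a tiny two-layer network in NumPy), not FeSViBS itself, which splits a vision transformer across federated clients.

```python
import numpy as np

rng = np.random.default_rng(0)

class ClientHead:
    # First layers live on the client; raw patient data never leaves it.
    def __init__(self):
        self.W = rng.normal(size=(4, 3)) * 0.1
    def forward(self, x):
        return np.maximum(x @ self.W, 0.0)   # only activations are sent

class ServerTail:
    # Remaining layers live on the server, which sees activations only.
    def __init__(self):
        self.W = rng.normal(size=(3, 2)) * 0.1
    def forward(self, h):
        z = h @ self.W
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)  # class probabilities

x = rng.normal(size=(5, 4))          # a private batch of 5 records
probs = ServerTail().forward(ClientHead().forward(x))
```

In training, gradients would flow back across the same cut, so the two halves learn jointly while the data stays put.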

TII’s SSRC joins Confidential Computing Consortium

TII ·

Technology Innovation Institute’s (TII) Secure Systems Research Center (SSRC) has joined the Confidential Computing Consortium (CCC). The CCC aims to accelerate the adoption of confidential computing through hardware-based Trusted Execution Environment (TEE) technologies. SSRC will contribute to standardizing hardware-level security capabilities, particularly for secure RISC-V solutions. Why it matters: This partnership strengthens the UAE's position in cyber-physical systems security by enhancing data protection during processing, an area often overlooked in conventional infrastructure.

Achieving black box vertical federated learning

MBZUAI ·

MBZUAI Assistant Professor Bin Gu is working on black-box optimization techniques, especially in the context of vertical federated learning. Gu's work, in collaboration with JD.com, aims to enhance data and model privacy in machine learning. He is also focused on large-scale optimization and spiking neural networks to bring machine automation closer to the way the human brain operates. Why it matters: This research contributes to advancements in privacy-preserving machine learning techniques relevant to sensitive sectors like finance and healthcare in the region.

DaringFed: A Dynamic Bayesian Persuasion Pricing for Online Federated Learning under Two-sided Incomplete Information

arXiv ·

This paper introduces DaringFed, a dynamic Bayesian persuasion pricing mechanism for online federated learning (OFL) that addresses two-sided incomplete information about client and server resources. It formulates the interaction between the server and clients as a dynamic signaling and pricing allocation problem within a Bayesian persuasion game and proves the existence of a unique Bayesian persuasion Nash equilibrium. Evaluations on real and synthetic datasets show that DaringFed improves accuracy, convergence speed, and the server's utility.