Xi Chen of NYU Stern gave a talk at MBZUAI on digital privacy in personalized pricing using differential privacy. The talk also covered his research in Web3 and decentralized finance, including delta hedging of liquidity positions on Uniswap V3, and highlighted open problems in the field. Why it matters: The talk signals MBZUAI's interest in the intersection of AI, privacy, and blockchain technologies, reflecting growing trends in data protection and decentralized systems.
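The talk's specific methods aren't reproduced here; as a generic sketch of the differential-privacy building block it mentions, the classic Laplace mechanism can release an aggregate pricing statistic with an ε-DP guarantee. All function names, prices, and parameters below are illustrative assumptions, not material from the talk.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_average_price(prices: list[float], epsilon: float, price_cap: float,
                     rng: random.Random) -> float:
    """Release the average of individual prices with epsilon-DP.

    Each customer's price is clipped to [0, price_cap], so changing one
    record moves the average by at most price_cap / n (the sensitivity).
    """
    n = len(prices)
    clipped = [min(max(p, 0.0), price_cap) for p in prices]
    sensitivity = price_cap / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon, rng)

prices = [9.5, 12.0, 10.75, 11.25, 8.9]   # hypothetical per-customer prices
noisy = dp_average_price(prices, epsilon=1.0, price_cap=20.0,
                         rng=random.Random(0))
```

Smaller ε means a larger noise scale and stronger privacy; the clipping bound is what makes the sensitivity, and hence the required noise, finite.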
MBZUAI Assistant Professor Samuel Horváth is researching federated learning to address the tension between data privacy and the predictive power of machine learning models. Federated learning trains models on decentralized data, keeping sensitive information on devices. Horváth's research focuses on designing algorithms that can efficiently train on distributed data while respecting user privacy. Why it matters: This work is crucial for advancing AI in sensitive domains like healthcare, where privacy regulations limit centralized data collection.
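Horváth's own algorithms aren't detailed in this summary; as a minimal sketch of the federated-learning idea it describes, the standard FedAvg scheme trains locally and shares only model weights, never raw data. The one-parameter model and the data below are toy assumptions for illustration.

```python
# Minimal federated averaging (FedAvg) sketch: each client runs local SGD
# on its private data, and the server only ever sees model weights.

def local_update(weights, data, lr=0.1):
    """One local SGD step on a 1-parameter least-squares model y = w * x."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def fed_avg(client_weights, client_sizes):
    """Server aggregation: average client models, weighted by data size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two clients whose private data follows y = 2x; the data never leaves them.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
global_model = [0.0]
for _ in range(50):
    updates = [local_update(global_model, d) for d in clients]
    global_model = fed_avg(updates, [len(d) for d in clients])
```

After a few rounds the shared model converges to w = 2 even though neither the server nor any client ever sees the other client's data.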
This paper introduces DaringFed, a dynamic Bayesian persuasion pricing mechanism for online federated learning (OFL) that addresses the challenge of two-sided incomplete information (TII) about resources. It formulates the interaction between the server and clients as a dynamic signaling and pricing allocation problem within a Bayesian persuasion game and proves the existence of a unique Bayesian persuasion Nash equilibrium. Evaluations on real and synthetic datasets show that DaringFed improves accuracy, convergence speed, and the server's utility.
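DaringFed's own mechanism isn't reproduced here; as a generic illustration of the Bayesian persuasion framework it builds on, the classic binary Kamenica–Gentzkow example shows how a sender commits to a signal that moves the receiver's posterior exactly to the action threshold. The prior and threshold values are illustrative.

```python
# Toy Bayesian persuasion (classic binary example, not DaringFed itself):
# the state is "good" w.p. `prior`; the receiver acts iff their posterior
# P(good) >= threshold. The sender commits to: always signal "act" in the
# good state, and signal "act" with probability q in the bad state, with
# q chosen so the posterior after "act" lands exactly on the threshold.

def optimal_signal(prior: float, threshold: float = 0.5) -> float:
    """Probability of signaling 'act' in the bad state."""
    # Solve prior / (prior + (1 - prior) * q) = threshold for q.
    q = prior * (1.0 - threshold) / ((1.0 - prior) * threshold)
    return min(q, 1.0)

def prob_receiver_acts(prior: float, threshold: float = 0.5) -> float:
    q = optimal_signal(prior, threshold)
    return prior + (1.0 - prior) * q
```

With prior 0.3 the receiver would never act on the prior alone, yet the optimal signal induces action 60% of the time, which is the sender's gain from committing to a signaling scheme.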
MBZUAI Assistant Professor Bin Gu is working on black-box optimization techniques, especially in the context of vertical federated learning. Gu's work, in collaboration with JD.com, aims to enhance data and model privacy in machine learning. He is also focused on large-scale optimization and spiking neural networks to bring machine automation closer to the way the human brain operates. Why it matters: This research contributes to advancements in privacy-preserving machine learning techniques relevant to sensitive sectors like finance and healthcare in the region.
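Gu's specific algorithms aren't public in this summary; as a minimal sketch of the black-box (zeroth-order) optimization idea, a two-point finite-difference estimator recovers a usable gradient from function queries alone, which is the regime where one party can query another's model but not inspect it. The objective and parameters below are toy assumptions.

```python
# Zeroth-order optimization sketch: estimate gradients from two function
# evaluations along a random direction, then run gradient descent.
import random

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point gradient estimate of f at x along a random direction u."""
    rng = rng or random.Random()
    u = [rng.gauss(0.0, 1.0) for _ in x]
    x_plus = [xi + mu * ui for xi, ui in zip(x, u)]
    x_minus = [xi - mu * ui for xi, ui in zip(x, u)]
    scale = (f(x_plus) - f(x_minus)) / (2.0 * mu)
    return [scale * ui for ui in u]

def f(x):
    # The "black box": only function values are observable, no gradients.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

rng = random.Random(42)
x = [0.0, 0.0]
for _ in range(2000):
    g = zo_gradient(f, x, rng=rng)
    x = [xi - 0.05 * gi for xi, gi in zip(x, g)]
```

Despite never touching an analytic gradient, the iterates converge to the minimizer (1, -2); the cost is extra function queries and noisier steps, which is the core trade-off of black-box methods.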
MBZUAI researchers developed FeSViBS, a new federated split learning technique for vision transformers that addresses data scarcity and privacy concerns in healthcare image classification. The method combines federated learning and split learning to train models collaboratively without sharing sensitive patient data directly. It sidesteps both the data-pooling constraints of traditional centralized training and known vulnerabilities of standard federated learning. Why it matters: This approach enables the development of AI-powered healthcare applications while adhering to stringent data privacy regulations, unlocking the potential of machine learning in medical imaging.
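The FeSViBS architecture itself isn't reproduced here; as a minimal sketch of the split-learning half of the idea, the client runs the first layers locally and transmits only the intermediate ("smashed") activation, so raw inputs such as medical images never leave the device. All weights and inputs below are illustrative.

```python
# Split learning sketch (illustrative, not the FeSViBS code): the model is
# cut in two; the client computes the front layers, the server the rest.

def client_forward(x, w_client):
    """Client-side layers: a linear map plus ReLU, run on-device."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w_client]
    return h  # only this "smashed" activation is transmitted

def server_forward(h, w_server):
    """Server-side layers: finish the forward pass on the activation."""
    return sum(wi * hi for wi, hi in zip(w_server, h))

x = [1.0, -2.0, 0.5]                         # private input, stays local
w_client = [[0.2, 0.1, -0.3], [0.5, -0.5, 0.0]]
w_server = [1.0, 2.0]
smashed = client_forward(x, w_client)
y = server_forward(smashed, w_server)
```

Federated split learning repeats this across many clients, aggregating the client-side weights the way federated learning aggregates full models.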
Researchers are exploring methods for evaluating the outcome of actions from off-policy observations in which the context is noisy or anonymized. They employ proxy causal learning, using two noisy views of the context to recover the average causal effect of an action without explicitly modeling the hidden context. The implementation uses learned neural-network representations for both action and context and outperforms an autoencoder-based alternative. Why it matters: This research addresses a key challenge in applying AI in real-world scenarios where data privacy or bandwidth limitations necessitate working with noisy or anonymized data.
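The paper's estimator isn't reproduced here; a toy simulation of its setting shows what is at stake: a hidden context confounds action and outcome, so a naive treated-versus-untreated comparison is biased, while adjusting for the context recovers the true effect. Proxy causal learning aims to match that oracle using only the noisy views. All distributions and effect sizes below are assumptions for illustration.

```python
import random

# Setting sketch (not the paper's estimator): hidden context U drives both
# the action A and the outcome Y; only two noisy views W and Z of U are
# observable, which is all a proxy causal method would get to see.
rng = random.Random(0)
data = []
for _ in range(200_000):
    u = rng.random() < 0.5                      # hidden context
    a = rng.random() < (0.8 if u else 0.2)      # action depends on context
    y = 1.0 * a + 2.0 * u + rng.gauss(0, 0.1)   # true action effect: +1.0
    w = u if rng.random() < 0.9 else not u      # noisy view 1 of the context
    z = u if rng.random() < 0.9 else not u      # noisy view 2 of the context
    data.append((u, a, y, w, z))

def mean(xs):
    return sum(xs) / len(xs)

# Naive contrast: badly biased upward because U confounds A and Y.
naive = (mean([y for u, a, y, w, z in data if a])
         - mean([y for u, a, y, w, z in data if not a]))

# Oracle adjustment on the hidden context recovers the true effect (+1.0);
# this is the target proxy methods estimate from W and Z alone.
oracle = mean([
    mean([y for u, a, y, w, z in data if u == ctx and a])
    - mean([y for u, a, y, w, z in data if u == ctx and not a])
    for ctx in (False, True)
])
```

The gap between `naive` and `oracle` is exactly the bias that anonymized-context evaluation must remove without ever observing U directly.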