Xi Chen from NYU Stern gave a talk at MBZUAI on digital privacy in personalized pricing using differential privacy. The talk also covered research in Web3 and decentralized finance, including delta hedging liquidity positions on Uniswap V3, and highlighted open problems in decentralized finance. Why it matters: The talk suggests MBZUAI's interest in exploring the intersection of AI, privacy, and blockchain technologies, reflecting growing trends in data protection and decentralized systems.
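To make the delta-hedging idea concrete, here is a minimal sketch (not code from the talk) of the delta of a Uniswap V3 concentrated-liquidity position, using the standard V3 reserve formulas. For an in-range position with liquidity L and price range [Pa, Pb], the value in token1 terms is V(P) = L·(2√P − P/√Pb − √Pa), so the delta dV/dP equals the current token0 holdings L·(1/√P − 1/√Pb); hedging means shorting that many units of token0 elsewhere.

```python
import math

def v3_position_delta(liquidity: float, price: float,
                      price_lower: float, price_upper: float) -> float:
    """Delta (dV/dP, value measured in token1) of a Uniswap V3 position.

    In range, delta equals the position's current token0 holdings
    L * (1/sqrt(P) - 1/sqrt(Pb)); out of range, the position is either
    all token0 (below the range) or all token1 (above it, delta 0)."""
    if price <= price_lower:   # entirely token0
        return liquidity * (1 / math.sqrt(price_lower) - 1 / math.sqrt(price_upper))
    if price >= price_upper:   # entirely token1 -> no price exposure
        return 0.0
    return liquidity * (1 / math.sqrt(price) - 1 / math.sqrt(price_upper))

def hedge_size(liquidity, price, price_lower, price_upper):
    """Units of token0 to short (e.g. via a perpetual) to delta-hedge."""
    return v3_position_delta(liquidity, price, price_lower, price_upper)
```

Note the delta shrinks as the price rises toward the upper bound, so the hedge must be rebalanced dynamically; that rebalancing cost versus fee income is part of what makes these positions interesting to analyze.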
MBZUAI Assistant Professor Samuel Horváth is researching federated learning to address the tension between data privacy and the predictive power of machine learning models. Federated learning trains models on decentralized data, keeping sensitive information on devices. Horváth's research focuses on designing algorithms that can efficiently train on distributed data while respecting user privacy. Why it matters: This work is crucial for advancing AI in sensitive domains like healthcare, where privacy regulations limit centralized data collection.
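The core mechanism can be sketched with plain federated averaging (FedAvg) — a generic baseline, not Horváth's specific algorithms: each client trains locally on data that never leaves the device, and the server only averages the returned model weights.

```python
import numpy as np

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on a linear
    least-squares loss. Raw (X, y) stay on the device; only the
    updated weight vector is sent back."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, rounds=20, dim=3):
    """Server loop: broadcast the global model, collect local updates,
    and average them weighted by each client's sample count."""
    w_global = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_sgd(w_global, X, y) for X, y in clients]
        w_global = sum(len(y) / total * w
                       for (_, y), w in zip(clients, updates))
    return w_global
```

The design questions Horváth's work targets — communication efficiency, heterogeneous devices, and privacy guarantees — all live inside this loop: what to send instead of full weights, how often, and with what noise or compression.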
This paper introduces DaringFed, a novel dynamic Bayesian persuasion pricing mechanism for online federated learning (OFL) that addresses the challenge of two-sided incomplete information (TII) regarding resources. It formulates the interaction between the server and clients as a dynamic signaling and pricing allocation problem within a Bayesian persuasion game, and demonstrates the existence of a unique Bayesian persuasion Nash equilibrium. Evaluations on real and synthetic datasets show that DaringFed improves accuracy, convergence speed, and the server's utility.
MBZUAI Assistant Professor Bin Gu is working on black-box optimization techniques, especially in the context of vertical federated learning. Gu's work, in collaboration with JD.com, aims to enhance data and model privacy in machine learning. He is also focused on large-scale optimization and spiking neural networks to bring machine automation closer to the way the human brain operates. Why it matters: This research contributes to advancements in privacy-preserving machine learning techniques relevant to sensitive sectors like finance and healthcare in the region.
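A standard building block in black-box optimization is the zeroth-order gradient estimator, which needs only function evaluations — the situation that arises in vertical federated learning when another party's model cannot be differentiated through. The sketch below shows the classic two-point random-direction estimator; it is an illustrative baseline, not Gu's specific method.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate: average directional
    finite differences (f(x + mu*u) - f(x - mu*u)) / (2*mu) along
    random Gaussian directions u. Only black-box evaluations of f
    are required, never analytic gradients."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

def zo_minimize(f, x0, lr=0.05, steps=200):
    """Gradient descent driven entirely by the black-box estimator."""
    x = x0.copy()
    rng = np.random.default_rng(0)
    for _ in range(steps):
        x -= lr * zo_gradient(f, x, rng=rng)
    return x
```

The cost is variance: each estimate uses 2·n_dirs function calls, and scaling this to large models is exactly the kind of large-scale optimization challenge the blurb mentions.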
Sai Praneeth Karimireddy from UC Berkeley presented a talk on building planetary-scale collaborative intelligence, highlighting the challenges of using distributed data in machine learning due to data silos and ethical and legal restrictions. He proposed collaborative systems like federated learning as a solution to bring together distributed data while respecting privacy. The talk addressed the need for efficiency, reliability, and management of divergent goals in these systems, suggesting the use of tools from optimization, statistics, and economics. Why it matters: Collaborative AI systems can unlock valuable distributed data in the region, especially in sensitive sectors like healthcare, while ensuring privacy and addressing ethical concerns.
MBZUAI researchers developed FeSViBS, a new federated split learning technique for vision transformers that addresses data scarcity and privacy concerns in healthcare image classification. The method combines federated learning and split learning to train models collaboratively without sharing sensitive patient data directly. It overcomes limitations of traditional centralized training and vulnerabilities in federated learning. Why it matters: This approach enables the development of AI-powered healthcare applications while adhering to stringent data privacy regulations, unlocking the potential of machine learning in medical imaging.
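The split-learning half of the idea can be sketched in a few lines: the network is cut at an intermediate layer, the client runs the early layers on-device, and only the "smashed" activations — never the raw medical images — cross to the server, which completes the forward pass. This toy uses dense layers for brevity; FeSViBS itself operates on vision transformer blocks and adds federated block sampling on top.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side half: raw inputs stay on the client device.
W_client = rng.normal(scale=0.1, size=(64, 16))

def client_forward(x_raw):
    """Early layers run locally; only these 'smashed' activations
    are transmitted, not the raw data."""
    return np.maximum(x_raw @ W_client, 0.0)  # ReLU features

# Server-side half: sees activations only, never patient images.
W_server = rng.normal(scale=0.1, size=(16, 2))

def server_forward(smashed):
    """Remaining layers plus a softmax classification head."""
    logits = smashed @ W_server
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = rng.normal(size=(4, 64))            # batch of raw inputs, client-side
probs = server_forward(client_forward(x))
```

In training, gradients flow back across the same cut in the reverse direction, so the privacy question becomes what the transmitted activations and gradients leak — one of the federated-learning vulnerabilities the blurb alludes to.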
MBZUAI researchers are applying federated learning to optimize smart grids while protecting user data privacy. This approach leverages techniques from smart healthcare systems to enhance energy efficiency and local energy sharing. The research addresses the challenge of balancing grid optimization with the risk of user identity theft associated with traditional data-intensive smart grids. Why it matters: This research demonstrates a practical application of privacy-preserving AI in critical infrastructure, addressing key concerns around data security and fostering trust in smart grid technologies.
Researchers are exploring methods for evaluating the outcome of actions using off-policy observations where the context is noisy or anonymized. They employ proxy causal learning, using two noisy views of the context to recover the average causal effect of an action without explicitly modeling the hidden context. The implementation uses learned neural net representations for both action and context, and outperforms an autoencoder-based alternative. Why it matters: This research addresses a key challenge in applying AI in real-world scenarios where data privacy or bandwidth limitations necessitate working with noisy or anonymized data.
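The two-noisy-views idea can be illustrated with the simplest linear instance of proximal causal inference — two-stage least squares with proxies. A hidden context U confounds both the action and the outcome; a naive regression is biased, but using one noisy view as an instrument-like proxy and the other as an outcome proxy recovers the causal effect. This linear toy is only an illustration of the principle; the paper's method uses learned neural representations, not linear regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
beta = 2.0                                 # true causal effect of the action

U = rng.normal(size=n)                     # hidden context (never observed)
Z = U + rng.normal(size=n)                 # noisy view 1 of the context
W = U + rng.normal(size=n)                 # noisy view 2 of the context
A = 0.8 * U + rng.normal(size=n)           # action, confounded by U
Y = beta * A + 1.5 * U + rng.normal(size=n)  # outcome, also confounded

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive regression of Y on A absorbs the confounding and is biased upward.
naive = ols(np.column_stack([A, np.ones(n)]), Y)[0]

# Proximal two-stage least squares:
#   stage 1 - predict the outcome proxy W from (A, Z);
#   stage 2 - regress Y on (A, W_hat); the coefficient on A is causal.
S1 = np.column_stack([A, Z, np.ones(n)])
W_hat = S1 @ ols(S1, W)
causal = ols(np.column_stack([A, W_hat, np.ones(n)]), Y)[0]
```

The key point matches the blurb: neither Z nor W reveals U exactly, yet together they carry enough information to debias the effect estimate — without ever reconstructing the hidden context.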