MBZUAI Assistant Professor Bin Gu is working on black-box optimization techniques, especially in the context of vertical federated learning. Gu's work, in collaboration with JD.com, aims to enhance data and model privacy in machine learning. He is also focused on large-scale optimization and spiking neural networks to bring machine automation closer to the way the human brain operates. Why it matters: This research contributes to advancements in privacy-preserving machine learning techniques relevant to sensitive sectors like finance and healthcare in the region.
MBZUAI researchers developed FeSViBS, a new federated split learning technique for vision transformers that addresses data scarcity and privacy concerns in healthcare image classification. The method combines federated learning and split learning to train models collaboratively without sharing sensitive patient data directly. It overcomes limitations of traditional centralized training and vulnerabilities in federated learning. Why it matters: This approach enables the development of AI-powered healthcare applications while adhering to stringent data privacy regulations, unlocking the potential of machine learning in medical imaging.
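The split-learning half of this idea can be sketched in a few lines: each client (e.g. a hospital) runs the first part of the network locally and sends only intermediate activations, never raw patient images, to a shared server that completes the forward pass. The layer split, dimensions, and class names below are illustrative assumptions, not the FeSViBS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ClientHead:
    """Client-side front of the network: raw images never leave the client."""
    def __init__(self, in_dim, hid_dim):
        self.W = rng.normal(scale=0.1, size=(in_dim, hid_dim))

    def forward(self, x):
        # Only this intermediate activation ("smashed data") crosses the wire.
        return np.maximum(x @ self.W, 0.0)

class ServerBody:
    """Server-side remainder of the network, shared across all clients."""
    def __init__(self, hid_dim, n_classes):
        self.W = rng.normal(scale=0.1, size=(hid_dim, n_classes))

    def forward(self, h):
        logits = h @ self.W
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)  # softmax over classes

# Two hospitals, each with a private batch of shape (4, 16).
clients = [ClientHead(16, 8) for _ in range(2)]
server = ServerBody(8, 3)

for c in clients:
    x_private = rng.normal(size=(4, 16))   # stays on the client
    smashed = c.forward(x_private)         # only activations are shared
    probs = server.forward(smashed)        # server completes the forward pass
```

In the federated variant, the clients' local weights would additionally be averaged between rounds; the sketch shows only the privacy-relevant split of the forward pass.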
MBZUAI hosted a panel discussion in collaboration with the Manara Center for Coexistence and Dialogue. Chaoyang He, co-founder of FedML, presented on federated learning (FL), covering privacy/security, resource constraints, label scarcity, and scalable system design. FedML is a platform for zero-code, cross-platform, secure federated learning across industries like healthcare and finance. Why it matters: Federated learning is an important subfield for the GCC region, allowing privacy-preserving model training across distributed data sources.
MBZUAI researchers have developed "Byzantine antidote" (Bant), a novel defense mechanism against Byzantine attacks in federated learning, in which malicious nodes intentionally disrupt the training process. Bant uses trust scores and a trial function to dynamically filter and neutralize corrupted updates, even when a majority of nodes are compromised, and it can identify poorly labeled data while still training models effectively, addressing both unconscious mistakes and deliberate sabotage. The research was presented at the 40th Annual AAAI Conference on Artificial Intelligence. Why it matters: This research enhances the reliability and security of federated learning in sensitive sectors like healthcare and finance, enabling safer collaborative AI development.
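The general idea of trust-scored aggregation can be illustrated with a minimal sketch: score each client update against a reference update computed on a small trusted sample, discard updates that score poorly, and average the rest. The cosine-similarity score, the threshold `tau`, and the function name are illustrative assumptions standing in for Bant's trust scores and trial function, not the published algorithm.

```python
import numpy as np

def robust_aggregate(updates, trusted_update, tau=0.5):
    """Filter client updates by a trust score, then average the survivors.

    Trust here is cosine similarity to a reference update (a stand-in
    for a trial function evaluated on trusted data); updates scoring
    below `tau` are treated as corrupted and dropped. Illustrative only.
    """
    ref = trusted_update / np.linalg.norm(trusted_update)
    scores = np.array([u @ ref / np.linalg.norm(u) for u in updates])
    keep = scores >= tau
    if not keep.any():
        return trusted_update  # fall back to the trusted reference
    return np.mean([u for u, k in zip(updates, keep) if k], axis=0)

rng = np.random.default_rng(1)
true_grad = np.ones(5)
honest = [true_grad + 0.1 * rng.normal(size=5) for _ in range(3)]
byzantine = [-10.0 * true_grad for _ in range(4)]  # attackers in the majority
agg = robust_aggregate(honest + byzantine, trusted_update=true_grad)
```

Note that a plain average would be dragged far from the true gradient by the four attackers; the filtered average stays close to it even though honest nodes are the minority, which mirrors the compromised-majority setting the blurb describes.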