MBZUAI researchers have developed a new method called "Byzantine antidote" (Bant) to defend federated learning systems against Byzantine attacks, in which malicious nodes intentionally disrupt the training process. Bant uses trust scores and a trial function to dynamically filter out corrupted updates, even when a majority of nodes are compromised. The system can identify poorly labeled data while still training models effectively, addressing both honest mistakes and deliberate sabotage. The research was presented at the 40th Annual AAAI Conference on Artificial Intelligence. Why it matters: This research enhances the reliability and security of federated learning in sensitive sectors like healthcare and finance, enabling safer collaborative AI development.
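Bant's exact trial function isn't described here, but the general idea of trust-score aggregation can be sketched: compare each client's update against a "trial" gradient computed on a small trusted batch, and down-weight updates that disagree with it. Everything below (the cosine-similarity score, the fallback rule, the synthetic data) is an illustrative assumption, not the published algorithm.

```python
import numpy as np

def trusted_aggregate(updates, trial_grad, threshold=0.0):
    """Aggregate client updates, down-weighting those that disagree with a
    trusted 'trial' gradient from a small clean batch. Generic trust-score
    sketch, not Bant's published method."""
    trial = trial_grad / (np.linalg.norm(trial_grad) + 1e-12)
    scores = []
    for u in updates:
        cos = float(u @ trial) / (np.linalg.norm(u) + 1e-12)
        scores.append(max(cos, threshold))   # clip negative trust to zero
    scores = np.array(scores)
    if scores.sum() == 0:
        return trial_grad                    # fall back to the trusted gradient
    weights = scores / scores.sum()
    return (weights[:, None] * np.stack(updates)).sum(axis=0)

# Honest clients roughly agree with the trial gradient; attackers invert it.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, 2.0, -1.0])
honest = [true_grad + 0.1 * rng.normal(size=3) for _ in range(3)]
byzantine = [-10.0 * true_grad for _ in range(5)]  # majority is malicious
agg = trusted_aggregate(honest + byzantine, trial_grad=true_grad)
print(np.sign(agg), np.sign(true_grad))  # aggregate keeps the honest direction
```

Because the Byzantine updates point opposite the trial gradient, their trust scores clip to zero and the aggregate is driven by the honest minority, which is why a majority of compromised nodes does not dominate the result.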
Conor McMenamin from Universitat Pompeu Fabra presented a seminar on State Machine Replication (SMR) without honest participants. The talk covered the limitations of current SMR protocols and introduced the ByRa model, a framework for characterizing players without assuming any of them are honest. He then described FAIRSICAL, a sandbox SMR protocol, and discussed how these ideas could be extended to real-world protocols, with a focus on blockchains and cryptocurrencies. Why it matters: This research on SMR protocols and their incentive compatibility could lead to more robust and secure blockchain technologies in the region.
Researchers at ETH Zurich have formalized models of the EMV payment protocol using the Tamarin model checker. They discovered flaws allowing attackers to bypass PIN requirements for high-value purchases on EMV cards issued under schemes like Mastercard and Visa. The team also collaborated with an EMV consortium member to verify the improved EMV Kernel C-8 protocol. Why it matters: This research highlights the importance of formal methods in identifying critical vulnerabilities in widely used payment systems, potentially impacting financial security for consumers in the GCC region and worldwide.
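The class of flaw involved can be illustrated with a toy model: if the terminal's decision to request a PIN depends on a field the card network does not authenticate, a man-in-the-middle can rewrite that field in transit. The field names and decision logic below are deliberately simplified assumptions, not the real EMV encoding or the researchers' Tamarin model.

```python
# Toy model of a PIN-bypass: an attacker rewrites the card's (unauthenticated)
# Cardholder Verification Method (CVM) field so the terminal skips PIN entry.
# Field names and logic are simplified for illustration only.

def terminal_decision(card_data, amount, pin_entered):
    # The terminal trusts the unauthenticated CVM field in this toy check.
    if amount > 50 and card_data["cvm"] == "online_pin" and not pin_entered:
        return "declined"
    return "approved"

genuine = {"cvm": "online_pin"}
assert terminal_decision(genuine, amount=200, pin_entered=False) == "declined"

# Attacker tampers with the unauthenticated field in transit.
tampered = dict(genuine, cvm="no_cvm_required")
assert terminal_decision(tampered, amount=200, pin_entered=False) == "approved"
print("high-value purchase approved without PIN")
```

Model checkers like Tamarin find such flaws by exhaustively searching for protocol traces in which an attacker-controlled message leads to a state (here, an approved high-value transaction without PIN) that violates a stated security property.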
Researchers at TII, in cooperation with Paderborn University and Ruhr University Bochum, have discovered a vulnerability called the Opossum Attack in Transport Layer Security (TLS) impacting protocols like HTTP(S), FTP(S), POP3(S), and SMTP(S). The vulnerability exposes a risk of desynchronization between client and server communications, potentially leading to exploits like session fixation and content confusion. Scans revealed over 2.9 million potentially affected servers, including over 1.4 million IMAP servers and 1.1 million POP3 servers. Why it matters: This discovery highlights the importance of ongoing cybersecurity research in the UAE and internationally to identify and address vulnerabilities in fundamental internet protocols, especially as it led to immediate action by Apache and Cyrus IMAPd.
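The desynchronization idea can be sketched abstractly: if an attacker manages to inject one request of their own ahead of the victim's traffic, every response the victim subsequently reads actually answers the previous request, producing content confusion. This is a generic request/response off-by-one sketch under that assumption, not the actual Opossum attack traffic or its TLS upgrade mechanics.

```python
# Toy sketch of response desynchronization: an attacker prepends one request
# before relaying the victim's connection, so every response the victim reads
# answers the wrong request. Illustrative only.

def serve(requests):
    # A simple in-order server: one response per request, in sequence.
    return [f"response-to:{r}" for r in requests]

victim_requests = ["GET /account", "GET /logout"]
injected = ["GET /attacker-page"]
responses = serve(injected + victim_requests)

# The victim consumes responses in order, unaware of the injected request:
victim_view = dict(zip(victim_requests, responses))
print(victim_view["GET /account"])  # the victim sees the attacker's page
```

The fix in affected software is to ensure both endpoints agree on exactly where the application protocol begins relative to the TLS handshake, so no attacker-controlled bytes can be queued ahead of legitimate traffic.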
Researchers at MBZUAI and other institutions have published a study at ACL 2024 investigating how jailbreak attacks work on LLMs. Using a dataset of 30,000 prompts and non-linear probing of model internals, the study found that existing interpretations of jailbreak attacks were inadequate. The researchers propose a new approach to improving LLM safety against such attacks by identifying the layers in the network where the jailbreak behavior emerges. Why it matters: Understanding and mitigating jailbreak attacks is crucial for ensuring the responsible and secure deployment of LLMs, particularly in the Arabic-speaking world where these models are increasingly being used.
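Layer-wise probing in general works by fitting a small classifier on each layer's hidden states and asking at which depth two behaviors become separable. The sketch below uses a leave-one-out 1-nearest-neighbour probe (one simple non-parametric, non-linear choice) on synthetic activations; the probe type, data, and dimensions are all assumptions standing in for the paper's actual setup.

```python
import numpy as np

# Sketch of layer-wise probing: fit a cheap non-linear probe on each layer's
# hidden states and ask where two prompt classes become separable. Synthetic
# data stands in for real LLM activations; not the paper's exact probe.

rng = np.random.default_rng(1)

def knn_probe_accuracy(states, labels):
    """Leave-one-out 1-nearest-neighbour accuracy as a non-parametric probe."""
    dists = np.linalg.norm(states[:, None] - states[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude each point itself
    preds = labels[np.argmin(dists, axis=1)]  # label of nearest neighbour
    return float((preds == labels).mean())

n, d = 40, 16
labels = np.array([0] * (n // 2) + [1] * (n // 2))
# "Early layer": both classes drawn from the same distribution (no signal).
early = rng.normal(size=(n, d))
# "Late layer": one class is shifted, so the probe can separate it.
late = rng.normal(size=(n, d))
late[labels == 1] += 3.0

for name, states in [("early", early), ("late", late)]:
    print(name, knn_probe_accuracy(states, labels))
```

Near-chance accuracy at early layers and high accuracy at later ones would localize the depth at which the model's representation starts encoding the behavior of interest, which is the kind of evidence used to target safety interventions at specific layers.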
Cristofaro Mune and Niek Timmers presented a seminar on bypassing "unbreakable" crypto using fault injection on Espressif ESP32 chips. The presentation detailed how the ESP32 SoC's hardware-based Encrypted Secure Boot implementation was bypassed using a single electromagnetic (EM) glitch, without knowledge of the decryption key. The attack exploited multiple hardware vulnerabilities, enabling arbitrary code execution and extraction of plaintext data from external flash. Why it matters: The research highlights critical security vulnerabilities in embedded systems and the potential for fault injection attacks to bypass secure boot mechanisms, necessitating stronger hardware-level security measures.
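Why can a single glitch defeat secure boot? If the verification result feeds one conditional branch, corrupting or skipping that single comparison lets unsigned code run. The toy model below simulates that single point of failure; it is a conceptual illustration, not the ESP32 attack or its EM glitch mechanics.

```python
import hashlib

# Toy model of a single-point-of-failure boot check: one glitched comparison
# is enough to accept unsigned firmware. Illustrative sketch only.

TRUSTED_DIGEST = hashlib.sha256(b"signed firmware").hexdigest()

def secure_boot(image, glitched=False):
    ok = hashlib.sha256(image).hexdigest() == TRUSTED_DIGEST
    if glitched:
        ok = True   # the fault flips/skips the single verification branch
    return "boot" if ok else "halt"

assert secure_boot(b"signed firmware") == "boot"
assert secure_boot(b"attacker payload") == "halt"
assert secure_boot(b"attacker payload", glitched=True) == "boot"
print("single-point check bypassed")
```

Typical hardware countermeasures address exactly this structure: redundant verification checks, random delays to make glitch timing unreliable, and control-flow integrity so that one corrupted branch cannot silently grant execution.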
This article discusses adversarial training (AT) as a method to improve the robustness of machine learning models against adversarial attacks. AT simulates adversarial attacks during training, aiming both to classify data correctly and to keep it away from decision boundaries. Dr. Jingfeng Zhang from RIKEN-AIP will present on improvements to AT and its application in evaluating and enhancing the reliability of ML methods. Why it matters: As ML models become more prevalent in real-world applications in the GCC region, ensuring their robustness against adversarial attacks is crucial for maintaining their reliability and security.
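A minimal version of the training loop described above can be sketched with an FGSM-style attack on logistic regression: each step perturbs inputs in the gradient-sign direction and trains on the perturbed batch, pushing the decision boundary away from the data. This is a generic AT sketch on synthetic data, not Dr. Zhang's specific improvements; the epsilon, learning rate, and dataset are arbitrary assumptions.

```python
import numpy as np

# Minimal adversarial training: perturb inputs with an FGSM-style step, then
# fit on the perturbed batch. Generic sketch on synthetic 2-D data.

rng = np.random.default_rng(0)
n = 100
X = np.vstack([rng.normal(-2, 1, size=(n, 2)), rng.normal(2, 1, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.5

def predict(X, w, b):
    return 1 / (1 + np.exp(-(X @ w + b)))

for _ in range(200):
    p = predict(X, w, b)
    grad_x = (p - y)[:, None] * w            # d(logistic loss)/d(input)
    X_adv = X + eps * np.sign(grad_x)        # FGSM: one epsilon step per input
    p_adv = predict(X_adv, w, b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)   # train on the perturbed batch
    b -= lr * (p_adv - y).mean()

# Robust accuracy: evaluate on freshly attacked inputs.
grad_x = (predict(X, w, b) - y)[:, None] * w
X_test_adv = X + eps * np.sign(grad_x)
acc = float(((predict(X_test_adv, w, b) > 0.5) == y.astype(bool)).mean())
print(f"robust accuracy: {acc:.2f}")
```

Because the model only ever sees worst-case perturbed inputs, it learns a boundary with margin at least roughly epsilon from the clean points, which is the geometric intuition behind keeping data away from decision boundaries.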