GCC AI Research


Results for "PIN bypass"

Formal Methods for Modern Payment Protocols

MBZUAI ·

Researchers at ETH Zurich have built formal models of the EMV payment protocol using the Tamarin model checker. They discovered flaws allowing attackers to bypass PIN requirements for high-value purchases on Mastercard and Visa cards. The team also collaborated with an EMV consortium member to verify the improved EMV Kernel C-8 protocol. Why it matters: This research highlights the importance of formal methods in identifying critical vulnerabilities in widely used payment systems, potentially impacting financial security for consumers in the GCC region and worldwide.

CRC Seminar Series - Cristofaro Mune, Niek Timmers

TII ·

Cristofaro Mune and Niek Timmers presented a seminar on bypassing "unbreakable" crypto using fault injection on Espressif ESP32 chips. The presentation detailed how the hardware-based Encrypted Secure Boot implementation of the ESP32 SoC was bypassed using a single EM glitch, without knowledge of the decryption key. The attack exploited multiple hardware vulnerabilities, enabling arbitrary code execution and extraction of plaintext data from external flash. Why it matters: The research highlights critical security vulnerabilities in embedded systems and the potential for fault injection attacks to bypass secure boot mechanisms, necessitating stronger hardware-level security measures.

Hard to crack hardware

KAUST ·

KAUST researchers have designed an integrated circuit logic lock to protect electronic devices from cyberattacks. The protective logic locks are based on spintronics and can be incorporated into electronic chips. The lock uses a magnetic tunnel junction (MTJ) where the keys are stored in tamper-proof memory, ensuring hardware security. Why it matters: This hardware-based security feature could significantly increase confidence in globalized integrated circuit manufacturing, protecting against counterfeiting and malicious modifications.

How jailbreak attacks work and a new way to stop them

MBZUAI ·

Researchers at MBZUAI and other institutions have published a study at ACL 2024 investigating how jailbreak attacks work on LLMs. The study used a dataset of 30,000 prompts and non-linear probing to interpret the effects of jailbreak attacks, finding that existing interpretations were inadequate. The researchers propose a new approach to improving LLM safety against such attacks by identifying the network layers where the jailbreak behavior emerges. Why it matters: Understanding and mitigating jailbreak attacks is crucial for ensuring the responsible and secure deployment of LLMs, particularly in the Arabic-speaking world where these models are increasingly being used.
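The layer-localization idea can be illustrated with a small probing sketch. Everything below is illustrative, not the paper's method: the "activations" are synthetic stand-ins for hidden states extracted from a real model, the layer count and dimensions are invented, and the probe is a minimal one-hidden-layer classifier trained with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-layer hidden activations of an LLM on a set
# of prompts. In the real study these would be extracted from the model;
# here we plant a signal only in later layers to mimic the finding that
# the behavior is localized.
n_layers, n_prompts, dim = 6, 400, 32
labels = rng.integers(0, 2, n_prompts)          # 1 = jailbreak prompt
acts = rng.normal(size=(n_layers, n_prompts, dim))
for layer in range(3, n_layers):                # signal appears from layer 3 on
    acts[layer, labels == 1] += 2.0

def fit_probe(X, y, hidden=16, lr=0.5, steps=400, seed=1):
    """Train a tiny one-hidden-layer (non-linear) probe with gradient descent."""
    r = np.random.default_rng(seed)
    W1 = r.normal(scale=0.1, size=(X.shape[1], hidden))
    w2 = r.normal(scale=0.1, size=hidden)
    for _ in range(steps):
        h = np.tanh(X @ W1)                     # hidden layer
        p = 1.0 / (1.0 + np.exp(-(h @ w2)))     # sigmoid output
        g = (p - y) / len(y)                    # d(BCE loss)/d(logit)
        W1 -= lr * X.T @ (np.outer(g, w2) * (1 - h ** 2))
        w2 -= lr * h.T @ g
    return W1, w2

def probe_accuracy(X, y, split=300):
    """Fit on the first `split` prompts, report held-out accuracy."""
    W1, w2 = fit_probe(X[:split], y[:split])
    pred = np.tanh(X[split:] @ W1) @ w2 > 0
    return float(np.mean(pred == y[split:]))

scores = [probe_accuracy(acts[l], labels) for l in range(n_layers)]
for layer, acc in enumerate(scores):
    print(f"layer {layer}: probe accuracy {acc:.2f}")
```

Per-layer probe accuracy near chance on early layers and high on later ones is the kind of evidence used to argue that a behavior "lives" at particular depths.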

New security system to revolutionize communications privacy

KAUST ·

Researchers from KAUST, University of St. Andrews, and the Center for Unconventional Processes of Sciences have developed an uncrackable security system using optical chips. The system uses silicon chips with complex structures that are irreversibly changed to send information, achieving "perfect secrecy" through a one-time key. This method leverages classical physics and the second law of thermodynamics to ensure that keys are never stored, communicated, or recreated, making interception impossible. Why it matters: This breakthrough has the potential to revolutionize communications privacy globally, offering an unbreakable method for securing confidential data on public channels.

Your voice can jailbreak a speech model – here’s how to stop it, without retraining

MBZUAI ·

A new paper from MBZUAI demonstrates that state-of-the-art speech models can be easily jailbroken using audio perturbations to generate harmful content, achieving success rates of 76-93% on models like Qwen2-Audio and LLaMA-Omni. The researchers adapted projected gradient descent (PGD) to the audio domain to optimize waveforms that push the model towards harmful responses. They propose a defense mechanism based on post-hoc activation patching that hardens models at inference time without retraining. Why it matters: This research highlights a critical vulnerability in speech-based LLMs and offers a practical solution, contributing to the development of more secure and trustworthy AI systems in the region and globally.
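The core of a PGD-style audio attack can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the real attack backpropagates through a full speech LLM, whereas the stand-in "model" here is a fixed linear scorer whose gradient is available in closed form, and all sizes and step settings are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a differentiable speech model: a fixed linear scorer
# over the raw waveform, where a higher score plays the role of
# "probability of a harmful response".
dim = 16000                                  # 1 second of audio at 16 kHz
w = rng.normal(size=dim) / np.sqrt(dim)

def score(x):
    return float(w @ x)

def pgd_attack(x0, eps=0.01, alpha=0.002, steps=40):
    """L-infinity PGD: take signed-gradient ascent steps on the objective,
    projecting the perturbation back into the eps-ball after each step."""
    delta = np.zeros_like(x0)
    for _ in range(steps):
        grad = w                             # d(score)/d(x) for the linear scorer
        delta = delta + alpha * np.sign(grad)
        delta = np.clip(delta, -eps, eps)    # projection onto the L-inf ball
    return np.clip(x0 + delta, -1.0, 1.0)    # keep a valid waveform in [-1, 1]

clean = rng.normal(scale=0.1, size=dim).clip(-1, 1)
adv = pgd_attack(clean)
print(f"clean score {score(clean):.3f}, adversarial score {score(adv):.3f}")
```

The projection step is what keeps the perturbation imperceptibly small while the objective is pushed up, which is why such perturbed audio can sound normal to a listener yet flip the model's behavior.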

Challenging the promise of invisible ink in the era of large models

MBZUAI ·

MBZUAI researchers Nils Lukas and Toluwani Samuel Aremu will present a paper at ICML 2025 demonstrating the vulnerability of current watermarking techniques in LLMs. Their research shows that adaptive paraphrasers can evade watermark detection with negligible impact on text quality, for less than $10 of GPU compute. The attack fine-tunes a small open-weight model to rewrite sentences until surrogate keys no longer trigger detection. Why it matters: This work highlights critical weaknesses in current AI provenance methods, suggesting the need for more robust watermarking techniques to maintain trust in the authenticity of AI-generated content.
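The rewrite-and-recheck loop at the heart of such an attack can be sketched with toy components. Everything here is hypothetical: the hash-based "green-list" detector is a minimal stand-in for a surrogate watermark detector, and the synonym table stands in for the fine-tuned paraphraser model; the real attack uses a trained rewriter and a learned surrogate key.

```python
import hashlib

def is_green(prev, tok):
    """Toy green-list watermark check: a token counts as 'green' if a hash
    of the (previous token, token) pair falls in the lower half of the range."""
    h = hashlib.sha256(f"{prev}|{tok}".encode()).digest()
    return h[0] < 128

def green_fraction(tokens):
    """Surrogate detector score: fraction of green tokens in the text."""
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(p, t) for p, t in pairs) / max(len(pairs), 1)

# Hypothetical synonym table standing in for the fine-tuned paraphraser.
SYNONYMS = {"big": ["large", "huge"], "fast": ["quick", "rapid"],
            "smart": ["clever", "bright"], "said": ["stated", "noted"]}

def paraphrase_until_evaded(tokens, threshold=0.5, max_rounds=20):
    """Greedily swap words for synonyms until the surrogate detector score
    drops below the threshold (mirrors the rewrite-and-recheck loop)."""
    tokens = list(tokens)
    for _ in range(max_rounds):
        if green_fraction(tokens) < threshold:
            break                              # surrogate no longer triggers
        for i, tok in enumerate(tokens):
            for alt in SYNONYMS.get(tok, []):
                trial = tokens[:i] + [alt] + tokens[i + 1:]
                if green_fraction(trial) < green_fraction(tokens):
                    tokens = trial             # accept only improving swaps
                    break
    return tokens

text = "the big dog said fast smart things and said big fast words".split()
evaded = paraphrase_until_evaded(text)
print(green_fraction(text), "->", green_fraction(evaded))
```

Because each accepted swap strictly lowers the surrogate score while preserving meaning, detection confidence degrades with little change to the text, which is the crux of the attack's low cost.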

Self-powered dental braces

KAUST ·

(No summary available: the source page for this item contains only a copyright notice and a link to KAUST.)