GCC AI Research

Results for "vulnerability"

Opossum Attack

TII ·

Researchers at TII, in cooperation with Paderborn University and Ruhr University Bochum, have discovered a vulnerability in Transport Layer Security (TLS), dubbed the Opossum Attack, that affects application protocols such as HTTP(S), FTP(S), POP3(S), and SMTP(S). The flaw lets an attacker desynchronize client and server communications, potentially enabling exploits such as session fixation and content confusion. Internet-wide scans revealed over 2.9 million potentially affected servers, including more than 1.4 million IMAP servers and 1.1 million POP3 servers. Why it matters: The discovery underscores the importance of ongoing cybersecurity research, in the UAE and internationally, to identify and address vulnerabilities in fundamental internet protocols, and it prompted immediate action by Apache and Cyrus IMAPd.

How secure is AI-generated Code: A Large-Scale Comparison of Large Language Models

arXiv ·

A study compared the vulnerability of C programs generated by nine state-of-the-art Large Language Models (LLMs) using a zero-shot prompt. The researchers introduced FormAI-v2, a dataset of 331,000 C programs generated by these LLMs, and found that at least 62.07% of the generated programs contained vulnerabilities, detected via formal verification. The research highlights the need for risk assessment and validation when deploying LLM-generated code in production environments.
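
The study's call for validation can be illustrated with a deliberately minimal screen. The sketch below is not the paper's method (FormAI-v2 relies on formal verification); it is a hypothetical lexical pass that flags a few classically unsafe C library calls in a generated snippet, the kind of shallow first-line check a production pipeline might layer beneath deeper verification. The snippet and all warning strings are invented for illustration.

```python
import re

# Patterns for a few classically unsafe C library calls. This lexical
# screen is only an illustration: the FormAI study uses formal
# verification, which catches far deeper bugs than any regex can.
UNSAFE_CALLS = {
    r"\bgets\s*\(": "gets(): unbounded read, use fgets()",
    r"\bstrcpy\s*\(": "strcpy(): no bounds check, use strncpy()",
    r"\bsprintf\s*\(": "sprintf(): no bounds check, use snprintf()",
}

def screen_generated_c(source: str):
    """Return (line_number, warning) pairs for unsafe calls in C source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in UNSAFE_CALLS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

# A snippet in the style of LLM-generated C (hypothetical example).
snippet = '''#include <stdio.h>
#include <string.h>
int main(void) {
    char name[16];
    gets(name);            /* CWE-242: dangerous function */
    char copy[16];
    strcpy(copy, name);    /* CWE-120: copy without size check */
    printf("%s\\n", copy);
    return 0;
}'''
findings = screen_generated_c(snippet)
```

A screen like this would flag the `gets` and `strcpy` lines above; the study's 62.07% figure is a reminder that such output needs checking before it ever reaches production.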

How many queries does it take to break an AI? We put a number on it.

MBZUAI ·

MBZUAI researchers presented a NeurIPS 2024 Spotlight paper that quantifies AI vulnerability by measuring the bits leaked per query. Their formula predicts the minimum number of queries an attack needs, based on the mutual information between the model's output and the attacker's target. Experiments across seven models and three attack types (system-prompt extraction, jailbreaks, relearning) validate the relationship. Why it matters: This work offers a framework for translating UI choices (such as exposing log-probs or chain-of-thought) into concrete attack surfaces, informing more secure AI design and deployment in the region.
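
Assuming the paper's relationship reduces to "queries needed is roughly the secret's entropy divided by the bits leaked per query" (a simplification of the actual formula), the sketch below computes per-query leakage as the mutual information of a toy channel and derives a minimum query count. The channel probabilities and the 128-bit target size are invented for illustration.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def mutual_information(prior, channel):
    """I(X;Y) = H(Y) - H(Y|X) for a discrete channel p(y|x)."""
    n_y = len(channel[0])
    p_y = [sum(prior[x] * channel[x][y] for x in range(len(prior)))
           for y in range(n_y)]
    h_y_given_x = sum(prior[x] * entropy(channel[x])
                      for x in range(len(prior)))
    return entropy(p_y) - h_y_given_x

# Toy leakage channel (invented numbers): each query reveals one bit of
# the attacker's target correctly with probability 0.9 -- a binary
# symmetric channel, so I(X;Y) = 1 - H(0.9), about 0.531 bits/query.
prior = [0.5, 0.5]
channel = [[0.9, 0.1],
           [0.1, 0.9]]
bits_per_query = mutual_information(prior, channel)

secret_entropy_bits = 128  # assumed entropy of the attacker's target
min_queries = math.ceil(secret_entropy_bits / bits_per_query)
print(f"{bits_per_query:.3f} bits/query -> at least {min_queries} queries")
```

The design lever the paper quantifies falls out directly: exposing richer outputs such as log-probs or chain-of-thought widens the channel, raising bits per query and lowering the number of queries an attack needs.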

Hackers and the Internet of Things

KAUST ·

Cybersecurity specialist James Lyne spoke at KAUST's 2018 Winter Enrichment Program (WEP) about cybersecurity threats and techniques. Lyne demonstrated hacking and phishing attacks, emphasizing how hackers can exploit personal information by bypassing basic security measures. He highlighted the increasing sophistication of cybercriminals and the existence of illicit marketplaces on the dark web where hacking applications are sold. Why it matters: Raising awareness of cybersecurity threats is crucial for protecting individuals and organizations in Saudi Arabia and the broader region as digital infrastructure expands.

Jose Martinez Named among Google Chrome’s 20 Top Vulnerability Researchers in 2021

TII ·

Jose Martinez, a Principal Researcher at the DSRC, was named one of Google's Top 20 Chrome Vulnerability Researchers for 2021, ranking 14th. He was recognized for detecting and demonstrating the exploitation of a serious vulnerability in the Chrome browser. This helped Google improve Chrome's security and contributed to safer development practices. Why it matters: The recognition highlights the growing cybersecurity expertise within the UAE and TII's ability to attract global talent in advanced security research.

LLMEffiChecker: Understanding and Testing Efficiency Degradation of Large Language Models

arXiv ·

The paper introduces LLMEffiChecker, a tool to test the computational efficiency robustness of LLMs by identifying vulnerabilities that can significantly degrade performance. LLMEffiChecker uses both white-box (gradient-guided perturbation) and black-box (causal inference-based perturbation) methods to delay the generation of the end-of-sequence token. Experiments on nine public LLMs demonstrate that LLMEffiChecker can substantially increase response latency and energy consumption with minimal input perturbations.
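
The black-box idea can be sketched with a toy stand-in for the model: a stub whose output length depends on its input replaces a real LLM, and a greedy search keeps any character-level edit that delays the (simulated) end-of-sequence condition. The stub's behavior and every parameter below are invented; LLMEffiChecker's actual causal-inference-based perturbation is more principled than this random search.

```python
import random

def toy_generate(text, max_new_tokens=50):
    """Stand-in for an LLM decode loop: returns the number of tokens
    produced before the end-of-sequence condition fires.
    (Hypothetical stub; a real test would decode with an actual model.)"""
    # Contrived rule: rare characters make the stub slower to stop.
    rare = sum(1 for ch in text if ch in "~^|@")
    return min(max_new_tokens, 10 + 5 * rare)

def black_box_attack(seed_text, budget=20, rng=random.Random(0)):
    """Greedy character-level perturbation search that keeps any edit
    increasing output length, i.e. delaying the end-of-sequence token."""
    best_text, best_len = seed_text, toy_generate(seed_text)
    for _ in range(budget):
        cand = list(best_text)
        cand[rng.randrange(len(cand))] = rng.choice("~^|@abcde ")
        cand = "".join(cand)
        cand_len = toy_generate(cand)
        if cand_len > best_len:  # accept only efficiency-degrading edits
            best_text, best_len = cand, cand_len
    return best_text, best_len

adv, length = black_box_attack("translate this sentence please")
```

Even this crude loop only ever accepts edits that lengthen decoding, mirroring the paper's finding that minimal input perturbations can substantially inflate latency and energy use.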

Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark

arXiv ·

This paper introduces a novel black-box adversarial attack method, Mixup-Attack, to generate universal adversarial examples for remote sensing data. The method identifies common vulnerabilities in neural networks by attacking features in the shallow layer of a surrogate model. The authors also present UAE-RS, the first dataset of black-box adversarial samples in remote sensing, to benchmark the robustness of deep learning models against adversarial attacks.
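
The shallow-feature idea can be sketched in miniature. Below, a fixed linear map stands in for the surrogate model's shallow layer (an assumption; the paper attacks CNN features), and projected gradient descent finds a single L-infinity-bounded perturbation, shared across all inputs, that pushes each input's features toward a mixed-up target built from two clean samples. All dimensions and constants are invented.

```python
import random

random.seed(0)

# Toy "shallow layer" of a surrogate model: a fixed 2x3 linear map W.
# (Illustrative stand-in chosen so the gradient is analytic.)
W = [[0.5, -0.2, 0.1],
     [0.3, 0.4, -0.5]]

def features(x):
    """Shallow-layer features W @ x of the surrogate model."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def feature_loss(xs, delta, target):
    """Mean squared distance from perturbed features to the target."""
    total = 0.0
    for x in xs:
        f = features([xi + d for xi, d in zip(x, delta)])
        total += sum((fk - tk) ** 2 for fk, tk in zip(f, target))
    return total / len(xs)

def universal_perturbation(xs, target, eps=0.3, steps=200, lr=0.05):
    """One L-inf-bounded perturbation shared by all inputs, driving
    every input's shallow features toward the mixed-up target."""
    delta = [0.0] * len(xs[0])
    for _ in range(steps):
        grad = [0.0] * len(delta)
        for x in xs:
            f = features([xi + d for xi, d in zip(x, delta)])
            err = [fk - tk for fk, tk in zip(f, target)]
            for j in range(len(delta)):  # d/d delta_j of ||W(x+d)-t||^2
                grad[j] += 2 * sum(err[k] * W[k][j]
                                   for k in range(len(W)))
        # Gradient step, then project back into the L-inf ball.
        delta = [max(-eps, min(eps, d - lr * g / len(xs)))
                 for d, g in zip(delta, grad)]
    return delta

xs = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
f0, f1 = features(xs[0]), features(xs[1])
target = [0.6 * a + 0.4 * b for a, b in zip(f0, f1)]  # mixed-up target
delta = universal_perturbation(xs, target)
```

Because the perturbation is optimized against shallow, broadly shared features rather than one model's decision boundary, the same delta transfers across inputs, which is the property that makes the attack "universal."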

CRC Seminar Series - Cristofaro Mune, Niek Timmers

TII ·

Cristofaro Mune and Niek Timmers presented a seminar on bypassing unbreakable crypto using fault injection on Espressif ESP32 chips. The presentation detailed how the hardware-based Encrypted Secure Boot implementation of the ESP32 SoC was bypassed using a single EM glitch, without knowing the decryption key. This attack exploited multiple hardware vulnerabilities, enabling arbitrary code execution and extraction of plain-text data from external flash. Why it matters: The research highlights critical security vulnerabilities in embedded systems and the potential for fault injection attacks to bypass secure boot mechanisms, necessitating stronger hardware-level security measures.
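
A fault-injection campaign of this kind is, at heart, a parameter search: sweep the glitch's timing and width, fire, and observe whether the target resets or misbehaves. The sketch below simulates such a sweep; the "target," its timing window, and the success probability are entirely invented, since real EM glitching requires hardware.

```python
import random

rng = random.Random(1)

def simulated_boot(glitch_delay_ns, glitch_width_ns):
    """Stand-in for the target's secure-boot check under an EM glitch.
    (Pure simulation: the real attack perturbs an ESP32 signature
    check with an electromagnetic pulse; every number is invented.)"""
    # The check is only skippable inside a narrow timing window,
    # and even then the glitch lands only some of the time.
    in_window = (480 <= glitch_delay_ns <= 520
                 and 20 <= glitch_width_ns <= 40)
    return "boot-bypassed" if in_window and rng.random() < 0.3 else "reset"

def glitch_campaign(max_attempts=10_000):
    """Sweep glitch timing parameters until the boot check is bypassed."""
    for attempt in range(1, max_attempts + 1):
        delay = rng.randrange(0, 1000)  # ns after reset (simulated)
        width = rng.randrange(5, 100)   # pulse width in ns (simulated)
        if simulated_boot(delay, width) == "boot-bypassed":
            return attempt, delay, width
    return None

result = glitch_campaign()
```

The takeaway mirrors the seminar's: an attacker does not need the key, only enough attempts to land one glitch inside the vulnerable window, which is why hardware countermeasures against fault injection matter.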