GCC AI Research

Opossum Attack

TII · Significant research

Summary

Researchers at TII, in cooperation with Paderborn University and Ruhr University Bochum, have discovered the Opossum Attack, a vulnerability in Transport Layer Security (TLS) affecting protocols such as HTTP(S), FTP(S), POP3(S), SMTP(S), and IMAP(S). By abusing services that are reachable both over implicit TLS and via an opportunistic TLS (STARTTLS) upgrade, an attacker can desynchronize client and server communications, enabling exploits such as session fixation and content confusion. Internet-wide scans revealed over 2.9 million potentially affected servers, including more than 1.4 million IMAP servers and 1.1 million POP3 servers.

Why it matters: The discovery highlights the importance of ongoing cybersecurity research in the UAE and internationally to identify and address vulnerabilities in fundamental internet protocols, and it prompted immediate action by Apache and Cyrus IMAPd.
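The attack's precondition — the same service exposed both over implicit TLS and via opportunistic STARTTLS — can be illustrated with a minimal sketch. The port pairs and the `at_risk` helper below are illustrative assumptions, not the researchers' scanning tooling.

```python
# Well-known port pairs where a protocol is commonly offered both with
# implicit TLS and with an opportunistic (STARTTLS/Upgrade) variant.
PORT_PAIRS = {
    "pop3": (995, 110),   # POP3S / POP3 with STARTTLS
    "imap": (993, 143),   # IMAPS / IMAP with STARTTLS
    "smtp": (465, 587),   # SMTPS / submission with STARTTLS
    "http": (443, 80),    # HTTPS / HTTP with Upgrade
}

def at_risk(open_ports: set[int]) -> list[str]:
    """Return protocols exposed in both TLS modes -- the precondition
    for an Opossum-style desynchronization between the two endpoints."""
    return [
        proto for proto, (implicit, opportunistic) in PORT_PAIRS.items()
        if implicit in open_ports and opportunistic in open_ports
    ]

# Example: a mail host exposing POP3S (995) alongside plain POP3 (110).
print(at_risk({995, 110, 22}))  # → ['pop3']
```

A real scan would additionally confirm that both endpoints serve the same application and accept the TLS upgrade, but the pairing logic above captures why dual-mode deployments widen the attack surface.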


Related

VENOM: Text-driven Unrestricted Adversarial Example Generation with Diffusion Models

arXiv

The paper introduces VENOM, a text-driven framework for generating high-quality unrestricted adversarial examples using diffusion models. VENOM unifies image content generation and adversarial synthesis into a single reverse diffusion process, enhancing both attack success rate and image quality. The framework incorporates an adaptive adversarial guidance strategy with momentum to ensure the generated adversarial examples align with the distribution of natural images.
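The momentum-based guidance mentioned above can be sketched generically: the raw adversarial gradient is smoothed into a running buffer before steering each reverse-diffusion sample. The update rule and names below (`guide_step`, `beta`, `eta`) are illustrative assumptions, not VENOM's actual implementation.

```python
import numpy as np

def guide_step(x, grad, m, beta=0.9, eta=0.1):
    """One momentum-smoothed guidance step: blend the current
    adversarial gradient into an exponential moving average, then
    nudge the sample along it, damping per-step gradient noise."""
    m = beta * m + (1.0 - beta) * grad   # momentum buffer update
    x = x + eta * m                      # guided update of the sample
    return x, m

# Toy usage: with a constant gradient, the momentum buffer converges
# toward that gradient over repeated steps.
x = np.zeros(3)
m = np.zeros(3)
g = np.array([1.0, -1.0, 0.5])
for _ in range(50):
    x, m = guide_step(x, g, m)
```

Smoothing the guidance signal this way is a common trick for keeping adversarial perturbations stable across denoising steps, which is plausibly why it helps the generated examples stay close to the natural-image distribution.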

LLM-based Multi-class Attack Analysis and Mitigation Framework in IoT/IIoT Networks

arXiv

This paper introduces a framework that combines machine learning for multi-class attack detection in IoT/IIoT networks with large language models (LLMs) for attack behavior analysis and mitigation suggestion. The framework uses role-play prompt engineering with RAG to guide LLMs like ChatGPT-o3 and DeepSeek-R1, and introduces new evaluation metrics for quantitative assessment. Experiments using Edge-IIoTset and CICIoT2023 datasets showed Random Forest as the best detection model and ChatGPT-o3 outperforming DeepSeek-R1 in attack analysis and mitigation.
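Role-play prompting grounded with retrieved context, as the framework uses, amounts to structured prompt assembly. The template, role wording, and field names below are illustrative assumptions, not the paper's exact prompts.

```python
def build_analysis_prompt(attack_class: str, retrieved_docs: list[str]) -> str:
    """Assemble a role-play prompt: cast the LLM as a security analyst,
    ground it with retrieved reference snippets (the RAG step), then ask
    it to analyze the attack class flagged by the ML detector."""
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "You are a senior IoT security analyst.\n"
        f"Reference material:\n{context}\n"
        f"A detector flagged traffic as: {attack_class}.\n"
        "Explain the attack behavior and suggest mitigations."
    )

# Example: analyzing a class produced by the detection model.
prompt = build_analysis_prompt(
    "DDoS_ICMP",
    ["ICMP floods exhaust device CPU and uplink bandwidth."],
)
```

The detection model supplies `attack_class`, and the retrieval step supplies `retrieved_docs`, so the same template serves every attack category in the multi-class setting.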

ScoreAdv: Score-based Targeted Generation of Natural Adversarial Examples via Diffusion Models

arXiv

The paper introduces ScoreAdv, a novel approach for generating natural adversarial examples using diffusion models. It incorporates an adversarial guidance mechanism and saliency maps to shift the sampling distribution and inject visual information. Experiments on the ImageNet and CelebA datasets demonstrate state-of-the-art attack success rates, image quality, and robustness against defenses.

Provable Unrestricted Adversarial Training without Compromise with Generalizability

arXiv

This paper introduces Provable Unrestricted Adversarial Training (PUAT), a novel adversarial training approach. PUAT enhances robustness against both unrestricted and restricted adversarial examples while improving standard generalizability by aligning the distributions of adversarial examples, natural data, and the classifier's learned distribution. The approach uses partially labeled data and an augmented triple-GAN to generate effective unrestricted adversarial examples, demonstrating superior performance on benchmarks.