GCC AI Research

Results for "Threat Detection"

TII-SSRC-23 Dataset: Typological Exploration of Diverse Traffic Patterns for Intrusion Detection

arXiv

Researchers introduce TII-SSRC-23, a new network intrusion detection dataset designed to improve the diversity and representation of modern network traffic for machine learning models. The dataset includes a range of traffic types and subtypes to address the limitations of existing datasets. Feature importance analysis and baseline experiments for supervised and unsupervised intrusion detection are also provided.
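
The supervised baseline described above can be sketched roughly as follows. This is an illustrative example, not the paper's code: synthetic flows stand in for TII-SSRC-23, and the feature names (`duration`, `bytes`, `packets`, `iat_mean`) are hypothetical placeholders for real flow features.

```python
# Illustrative supervised intrusion-detection baseline with feature
# importance, in the spirit of the paper's experiments. Synthetic data
# stands in for TII-SSRC-23; feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
features = ["duration", "bytes", "packets", "iat_mean"]
X = rng.normal(size=(n, 4))
# Toy labeling rule: flows with jointly high byte and packet counts are "malicious"
y = ((X[:, 1] + X[:, 2]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
# Feature importance analysis: which flow features drive the detector
importances = dict(zip(features, clf.feature_importances_))
```

Under this toy rule the importance scores concentrate on `bytes` and `packets`, mirroring how the paper's feature-importance analysis highlights the attributes that actually separate benign from malicious traffic.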

LLM-based Multi-class Attack Analysis and Mitigation Framework in IoT/IIoT Networks

arXiv

This paper introduces a framework that combines machine learning for multi-class attack detection in IoT/IIoT networks with large language models (LLMs) for attack-behavior analysis and mitigation recommendations. The framework uses role-play prompt engineering with retrieval-augmented generation (RAG) to guide LLMs such as ChatGPT-o3 and DeepSeek-R1, and introduces new evaluation metrics for quantitative assessment. In experiments on the Edge-IIoTset and CICIoT2023 datasets, Random Forest was the best detection model, and ChatGPT-o3 outperformed DeepSeek-R1 in attack analysis and mitigation.
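
The role-play-with-RAG step can be sketched as below. This is a hedged illustration of the general pattern, not the paper's actual prompts: the template wording, the `retrieve_context` lookup, and the toy knowledge base are all hypothetical stand-ins.

```python
# Hedged sketch of role-play prompting with retrieved context, in the
# spirit of the framework above. Template and retrieval are hypothetical.
def retrieve_context(attack_label, knowledge_base):
    """Toy retrieval: return knowledge entries keyed by the predicted attack class."""
    return knowledge_base.get(attack_label, [])

def build_analysis_prompt(attack_label, context_snippets):
    """Assemble a role-play prompt asking the LLM to analyze and mitigate the attack."""
    context = "\n".join(f"- {s}" for s in context_snippets)
    return (
        "You are a senior SOC analyst for an IIoT network.\n"
        f"Detected attack class: {attack_label}\n"
        f"Retrieved knowledge:\n{context}\n"
        "Explain the attack behavior and suggest concrete mitigations."
    )

kb = {"ddos": ["SYN floods exhaust connection tables", "Rate-limit at the edge"]}
prompt = build_analysis_prompt("ddos", retrieve_context("ddos", kb))
```

The key design idea is that the ML classifier's predicted label selects the retrieved knowledge, so the LLM reasons over grounded context rather than the raw traffic alone.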

Scientists Develop Ground-breaking Deep Learning Model for Real-time Security Environments

TII

Researchers including Dr. Najwa Aaraj developed ML-FEED, a new exploit detection framework using pattern-based techniques. The model is 70x faster than LSTMs and 75,000x faster than Transformers in exploit detection tasks, while also being slightly more accurate. The "ML-FEED" paper won best paper at the 2022 IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications. Why it matters: This research enables more efficient real-time security applications and highlights growing AI expertise in the Arab world.
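
To give a flavor of why pattern-based detection can be so much faster than sequence models, here is a minimal sketch: matching an event stream against known attack-step patterns is a linear scan, with no recurrent or attention computation. The patterns and event names are hypothetical; ML-FEED's actual feature extraction and classifier are described in the paper.

```python
# Hedged sketch of pattern-based exploit detection: match event streams
# against known attack-step subsequences instead of running an LSTM or
# Transformer. Patterns and events here are hypothetical examples.
EXPLOIT_PATTERNS = {
    "buffer_overflow": ("recv", "strcpy", "exec"),
    "priv_escalation": ("open_passwd", "write_passwd"),
}

def detect_exploits(event_stream):
    """Return names of exploit patterns that occur as subsequences of the events."""
    hits = []
    for name, pattern in EXPLOIT_PATTERNS.items():
        it = iter(event_stream)
        # `step in it` consumes the iterator, so this checks an ordered subsequence
        if all(step in it for step in pattern):
            hits.append(name)
    return hits

alerts = detect_exploits(["recv", "parse", "strcpy", "log", "exec"])
```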

Detecting the undetectable: Transforming policing with AI

MBZUAI

Salem AlMarri, the first Emirati Ph.D. graduate from MBZUAI, developed a video anomaly detection (VAD) system for his thesis. The VAD system can detect subtle anomalies in video, such as suspicious interactions, to help police prevent crimes and save lives. AlMarri's work was carried out under the guidance of Karthik Nandakumar, Affiliated Associate Professor of Computer Vision at MBZUAI. Why it matters: This research showcases the potential of AI in enhancing public safety and security in the UAE, demonstrating practical applications of computer vision in law enforcement.

Analyzing Threats of Large-Scale Machine Learning Systems

MBZUAI

A PhD candidate from the University of Waterloo gave a talk at MBZUAI on threats posed by large-scale machine learning systems. The talk covered data privacy during inference and the misuse of ML systems to generate deepfakes. The speaker also analyzed differential privacy and watermarking as potential mitigations. Why it matters: Understanding and mitigating the risks of large ML systems is crucial for responsible AI development and deployment in the region.
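
One of the mitigations mentioned, differential privacy, is commonly implemented with the Laplace mechanism: a query answer is released with noise whose scale is the query's sensitivity divided by the privacy budget ε. The sketch below is illustrative; the query, sensitivity, and ε values are assumptions, not from the talk.

```python
# Hedged sketch of the Laplace mechanism for differential privacy.
# Noise scale = sensitivity / epsilon; smaller epsilon means more noise
# and stronger privacy. Query and parameters are illustrative.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a numeric query answer with Laplace noise calibrated to (sensitivity, epsilon)."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
true_count = 120  # e.g., number of records matching a query
# A counting query changes by at most 1 when one record changes: sensitivity = 1
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
```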

Hackers and the Internet of Things

KAUST

Cybersecurity specialist James Lyne spoke at KAUST's 2018 Winter Enrichment Program (WEP) about cybersecurity threats and techniques. Lyne demonstrated hacking and phishing attacks, emphasizing how hackers can exploit personal information by bypassing basic security measures. He highlighted the increasing sophistication of cybercriminals and the existence of illicit marketplaces on the dark web where hacking applications are sold. Why it matters: Raising awareness of cybersecurity threats is crucial for protecting individuals and organizations in Saudi Arabia and the broader region as digital infrastructure expands.

Adversarial Training: Improvements and Applications

MBZUAI

This article discusses adversarial training (AT) as a method to improve the robustness of machine learning models against adversarial attacks. AT simulates adversarial attacks during training so that the model both classifies data correctly and keeps samples away from decision boundaries. Dr. Jingfeng Zhang from RIKEN-AIP will present on improvements to AT and its application in evaluating and enhancing the reliability of ML methods. Why it matters: As ML models become more prevalent in real-world applications in the GCC region, ensuring their robustness against adversarial attacks is crucial for maintaining their reliability and security.
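
The basic AT loop can be sketched on a toy model: each step perturbs the inputs in the direction of the loss gradient's sign (an FGSM-style attack) and then takes a gradient step on the perturbed batch. This is a minimal illustration on logistic regression, not Dr. Zhang's method; the data, ε, and learning rate are assumptions.

```python
# Hedged sketch of adversarial training on a toy logistic-regression model.
# Each iteration: craft FGSM-style perturbed inputs, then train on them.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable toy labels
w, b = np.zeros(2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eps, lr = 0.1, 0.5  # illustrative attack budget and learning rate
for _ in range(200):
    p = sigmoid(X @ w + b)
    # Input gradient of the logistic loss: (p - y) * w
    grad_x = (p - y)[:, None] * w[None, :]
    # FGSM-style perturbation: step of size eps along the gradient's sign
    X_adv = X + eps * np.sign(grad_x)
    # Train on the adversarial batch instead of the clean one
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
```

The inner perturbation pushes training points toward the decision boundary, so minimizing loss on them forces the boundary away from the data, which is exactly the margin effect described above.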