GCC AI Research

Results for "threats"

Hackers and the Internet of Things

KAUST ·

Cybersecurity specialist James Lyne spoke at KAUST's 2018 Winter Enrichment Program (WEP) about current threats and attack techniques. Lyne demonstrated live hacking and phishing attacks, showing how attackers can exploit personal information by bypassing basic security measures. He highlighted the growing sophistication of cybercriminals and the illicit marketplaces on the dark web where hacking tools are sold. Why it matters: Raising awareness of cybersecurity threats is crucial for protecting individuals and organizations in Saudi Arabia and the broader region as digital infrastructure expands.

Analyzing Threats of Large-Scale Machine Learning Systems

MBZUAI ·

A PhD candidate from the University of Waterloo gave a talk at MBZUAI on the threats posed by large-scale machine learning systems. The talk covered data privacy risks during inference and the misuse of ML systems to generate deepfakes. The speaker also analyzed differential privacy and watermarking as potential mitigations. Why it matters: Understanding and mitigating the risks of large ML systems is crucial for responsible AI development and deployment in the region.
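Differential privacy, one of the mitigations discussed, can be illustrated with the classic Laplace mechanism: noise calibrated to a query's sensitivity bounds how much any single record can change the released answer. The sketch below is illustrative and not from the talk; the data and epsilon value are invented.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    # log(1 - 2|u|) is <= 0; copysign gives the draw the sign of u.
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so the Laplace noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy query: how many people in this (invented) list are over 30?
ages = [23, 35, 41, 29, 52, 38]
noisy = private_count(ages, lambda a: a > 30, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers; the trade-off is the central design choice in any deployment.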

Overview of Abusive and Threatening Language Detection in Urdu at FIRE 2021

arXiv ·

This paper introduces two shared tasks on abusive and threatening language detection in Urdu, a low-resource language with over 170 million speakers. The tasks are binary classification of Urdu tweets as Abusive/Non-Abusive and Threatening/Non-Threatening, respectively. Manually annotated datasets of 2,400/6,000 training tweets and 1,100/3,950 test tweets (abusive/threatening, respectively) were created, along with logistic regression and BERT-based baselines. Twenty-one teams participated, and the best systems achieved F1-scores of 0.880 on the abusive task and 0.545 on the threatening task, with m-BERT performing best.
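A logistic-regression baseline of the kind the paper describes can be sketched from scratch: bag-of-words features plus gradient descent on the log-loss. This is a minimal stand-in, not the organizers' code; the example tweets are invented English placeholders, whereas the shared task uses annotated Urdu tweets (and real baselines typically use TF-IDF features).

```python
import math
from collections import Counter

def featurize(text):
    """Bag-of-words counts; real baselines typically use TF-IDF n-grams."""
    return Counter(text.lower().split())

def train_logreg(texts, labels, lr=0.5, epochs=200):
    """Binary logistic regression trained with plain gradient descent."""
    feats = [featurize(t) for t in texts]
    w, b = {}, 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = b + sum(w.get(f, 0.0) * v for f, v in x.items())
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - y                      # gradient of log-loss w.r.t. z
            b -= lr * err
            for f, v in x.items():
                w[f] = w.get(f, 0.0) - lr * err * v
    return w, b

def predict(w, b, text):
    x = featurize(text)
    return 1 if b + sum(w.get(f, 0.0) * v for f, v in x.items()) > 0 else 0

# Invented placeholder data; the real tasks classify annotated Urdu tweets.
texts = ["i will hurt you", "have a nice day", "you are awful", "see you tomorrow"]
labels = [1, 0, 1, 0]  # 1 = Abusive/Threatening, 0 = benign
w, b = train_logreg(texts, labels)
```

The large gap between the two best F1-scores (0.880 vs. 0.545) reflects that threatening language is rarer and subtler than general abuse, which makes class imbalance handling matter as much as the classifier itself.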

Iranian drone attacks on Amazon’s Gulf data centers a harbinger of new tactics in future conflicts, experts say - Fortune

GCC AI Events ·

A recent Fortune article discusses the potential vulnerability of Gulf data centers, including those operated by Amazon, to drone attacks. Experts suggest that Iranian-backed groups may employ such tactics in future regional conflicts. The hypothetical scenario raises concerns about data security and infrastructure resilience in the region. Why it matters: Highlights the increasing importance of protecting critical digital infrastructure in the GCC from emerging security threats.

Security-Enhanced Radio Access Networks for 5G OpenRAN

MBZUAI ·

Dr. Zhiqiang Lin from Ohio State University presented the Security-Enhanced Radio Access Network (SE-RAN) project to address cellular network threats using O-RAN. The project includes 5G-Spector, a framework for detecting L3 protocol exploits via MobiFlow and MobieXpert, and 5G-XSec, a framework leveraging deep learning and LLMs for threat analysis at the network edge. Dr. Lin also outlined a vision for AI convergence with cellular security for enhanced threat detection. Why it matters: Enhancing 5G security through AI and open architectures is critical for protecting next-generation mobile networks in the GCC region and globally.

LLMEffiChecker: Understanding and Testing Efficiency Degradation of Large Language Models

arXiv ·

The paper introduces LLMEffiChecker, a tool to test the computational efficiency robustness of LLMs by identifying vulnerabilities that can significantly degrade performance. LLMEffiChecker uses both white-box (gradient-guided perturbation) and black-box (causal inference-based perturbation) methods to delay the generation of the end-of-sequence token. Experiments on nine public LLMs demonstrate that LLMEffiChecker can substantially increase response latency and energy consumption with minimal input perturbations.
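The attack's core mechanism, suppressing the end-of-sequence token so the model keeps generating, can be illustrated with a toy decode loop. This is a simulation of the effect, not the paper's tool: the EOS-probability schedules, suppression factor, and token cap are all invented.

```python
import random

def generate_length(eos_prob_fn, max_tokens=512, seed=0):
    """Toy decode loop: emit tokens until EOS fires or the cap is hit.

    eos_prob_fn(step) returns the probability of emitting EOS at that step;
    the returned value is the number of tokens generated (a latency proxy)."""
    rng = random.Random(seed)
    for step in range(1, max_tokens + 1):
        if rng.random() < eos_prob_fn(step):
            return step
    return max_tokens

# Benign input: EOS probability ramps up quickly, so outputs stay short.
benign_len = generate_length(lambda s: min(1.0, 0.05 * s))

# Adversarially perturbed input: EOS probability suppressed 10x, so the
# same sampler runs far longer, inflating latency and energy consumption.
attacked_len = generate_length(lambda s: min(1.0, 0.005 * s))
```

Because decoding cost grows with output length, even a small perturbation that shifts EOS probability mass translates directly into longer responses and higher serving cost.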

VENOM: Text-driven Unrestricted Adversarial Example Generation with Diffusion Models

arXiv ·

The paper introduces VENOM, a text-driven framework for generating high-quality unrestricted adversarial examples using diffusion models. VENOM unifies image content generation and adversarial synthesis into a single reverse diffusion process, enhancing both attack success rate and image quality. The framework incorporates an adaptive adversarial guidance strategy with momentum to ensure the generated adversarial examples align with the distribution of natural images.
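Guidance with momentum can be sketched in isolation: an exponentially averaged attack gradient steers the sample a little at every step, which smooths out noisy per-step gradients. The snippet below is a schematic of that update only, not VENOM itself; the quadratic "attack gradient", momentum coefficient, and guidance scale are invented, and the real method applies this inside a diffusion model's reverse process.

```python
import numpy as np

def guided_steps(x, attack_grad, steps=100, mu=0.9, eta=0.1):
    """Momentum-averaged guidance: nudge x along an exponential moving
    average of the attack gradient at each (toy) reverse step.

    mu (momentum) and eta (guidance scale) are illustrative values."""
    g = np.zeros_like(x)
    for _ in range(steps):
        g = mu * g + attack_grad(x)  # accumulate direction across steps
        x = x + eta * g              # steer the sample along the average
    return x

# Toy attack objective: pull the sample toward a fixed target point,
# standing in for "maximize the victim classifier's loss".
target = np.array([3.0, -2.0])
x_adv = guided_steps(np.zeros(2), lambda x: target - x)
```

Averaging across steps is what lets the adversarial pull coexist with the denoiser's pull toward natural images; a raw per-step gradient would jerk the sample off the image manifold and degrade quality.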