GCC AI Research

Security

28 articles

Abu Dhabi’s TII and Honeywell Team to Advance Quantum-Secure Satellite Technology

TII · · Partnership Infrastructure

TII and Honeywell are partnering to develop quantum-secure satellite communication systems. Honeywell's ‘QKDSat’ platform will integrate with TII’s Abu Dhabi Quantum Optical Ground Station (ADQOGS) to test QKD links between satellites and terrestrial networks. The collaboration aims to build quantum-resilient communication infrastructure for government, security, and commercial use. Why it matters: This initiative positions Abu Dhabi as a key player in advancing global cybersecurity and quantum communication technologies.

UAE Launches Next-Gen GPS-Less Navigation and Secure Flight Control to Strengthen Aviation Security

TII · · Product Partnership

ADASI has adopted VentureOne's Perceptra, a GPS-less navigation technology, and Saluki, a high-security flight control technology, both developed by the Technology Innovation Institute (TII). These technologies enhance resilience, precision, and security for autonomous aerial operations, addressing vulnerabilities in GPS-dependent systems. The agreement was formalized at IDEX 2025. Why it matters: This deployment of advanced autonomous flight technologies in the UAE strengthens aviation security and positions the region as a leader in resilient, GPS-independent navigation solutions.

Scientists Develop Ground-breaking Deep Learning Model for Real-time Security Environments

TII · · Research Security

Researchers including Dr. Najwa Aaraj developed ML-FEED, a new exploit detection framework using pattern-based techniques. The model is 70x faster than LSTMs and 75,000x faster than Transformers in exploit detection tasks, while also being slightly more accurate. The "ML-FEED" paper won best paper at the 2022 IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications. Why it matters: This research enables more efficient real-time security applications and highlights growing AI expertise in the Arab world.
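
The article describes ML-FEED only as a pattern-based alternative to sequence models, so the following is a minimal sketch of that general idea, not the paper's method: each exploit signature is an ordered sequence of sensitive operations, and a lightweight per-signature cursor advances as matching events arrive, flagging when a full sequence completes. All signature and event names here are illustrative.

```python
# Toy pattern-based exploit-sequence detector (illustrative, not ML-FEED itself).
class SequenceDetector:
    def __init__(self, signatures):
        # signatures: {name: [event, event, ...]} — ordered operation sequences
        self.signatures = signatures
        self.cursors = {name: 0 for name in signatures}

    def observe(self, event):
        """Feed one event; return the signature names completed by it."""
        completed = []
        for name, pattern in self.signatures.items():
            pos = self.cursors[name]
            if pattern[pos] == event:
                pos += 1
                if pos == len(pattern):
                    completed.append(name)
                    pos = 0  # reset so repeated exploits are re-detected
            self.cursors[name] = pos
        return completed

detector = SequenceDetector({
    "heap-overflow": ["malloc", "strcpy", "free"],
    "format-string": ["read_input", "printf"],
})

alerts = []
for ev in ["read_input", "malloc", "strcpy", "printf", "free"]:
    alerts.extend(detector.observe(ev))
print(alerts)  # → ['format-string', 'heap-overflow']
```

Advancing a handful of integer cursors per event is what makes this kind of matching so much cheaper than running an LSTM or Transformer over the full event history.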

Researchers at CRC Set Decoding Records at INRIA’s McEliece Challenge

TII · · Research Cryptography

A cryptanalysis team at the UAE's Cryptography Research Center (CRC) has set new computational records by decrypting a McEliece ciphertext without the secret key in INRIA’s McEliece decoding challenge, taking first and second place. The record computation took about 31.4 days on a cluster using 256 CPU cores. The team also achieved top ranks in decoding quasi-cyclic codes and ternary codes, which are used in post-quantum cryptography. Why it matters: This achievement demonstrates the UAE's growing capabilities in advanced cryptography research and its contributions to the global effort to develop quantum-resistant algorithms.

UAE trains Emiratis in AI as tech plays increasing role in security, governance - Khaleej Times

Khaleej Times News · · Policy Infrastructure

The UAE is actively engaged in training its Emirati citizens in Artificial Intelligence technologies. This national initiative aims to equip the workforce with advanced AI skills to meet the increasing demands across various sectors. The program specifically emphasizes AI's expanding role in enhancing national security and improving governmental operations and efficiency. Why it matters: This strategic investment in local AI talent underscores the UAE's commitment to technological self-reliance and the proactive integration of AI into critical public and security domains.

How secure is AI-generated Code: A Large-Scale Comparison of Large Language Models

arXiv · · Research LLM

A study compared the vulnerability of C programs generated by nine state-of-the-art Large Language Models (LLMs) using a zero-shot prompt. The researchers introduced FormAI-v2, a dataset of 331,000 C programs generated by these LLMs, and found that at least 62.07% of the generated programs contained vulnerabilities, detected via formal verification. The research highlights the need for risk assessment and validation when deploying LLM-generated code in production environments.

Provable Unrestricted Adversarial Training without Compromise with Generalizability

arXiv · · Research NLP

This paper introduces Provable Unrestricted Adversarial Training (PUAT), a novel adversarial training approach. PUAT enhances robustness against both unrestricted and restricted adversarial examples while improving standard generalizability by aligning the distributions of adversarial examples, natural data, and the classifier's learned distribution. The approach uses partially labeled data and an augmented triple-GAN to generate effective unrestricted adversarial examples, demonstrating superior performance on benchmarks.

LLMEffiChecker: Understanding and Testing Efficiency Degradation of Large Language Models

arXiv · · LLM Research

The paper introduces LLMEffiChecker, a tool to test the computational efficiency robustness of LLMs by identifying vulnerabilities that can significantly degrade performance. LLMEffiChecker uses both white-box (gradient-guided perturbation) and black-box (causal inference-based perturbation) methods to delay the generation of the end-of-sequence token. Experiments on nine public LLMs demonstrate that LLMEffiChecker can substantially increase response latency and energy consumption with minimal input perturbations.
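
The black-box side of this idea can be sketched as a simple search loop: apply tiny character-level edits to an input and keep whichever variant makes the model generate for longer. This is only a shape-of-the-attack sketch; the `cost` function below is a stand-in for measuring real generation length or latency, and the edit operations are illustrative.

```python
# Black-box efficiency attack sketch in the spirit of LLMEffiChecker:
# small input perturbations searched to maximize a (stubbed) generation cost.
import random

def perturb(text, rng):
    """Return a copy of `text` with one random character-level edit."""
    i = rng.randrange(len(text))
    op = rng.choice(["insert", "delete", "swap"])
    if op == "insert":
        return text[:i] + rng.choice("abcdefgh") + text[i:]
    if op == "delete" and len(text) > 1:
        return text[:i] + text[i + 1:]
    j = min(i + 1, len(text) - 1)
    return text[:i] + text[j] + text[i] + text[j + 1:] if i != j else text

def cost(text):
    # Stand-in for "tokens generated before the end-of-sequence token":
    # a real harness would query the target LLM and count output tokens.
    return sum(1 for ch in text if ch not in "etaoin shrdlu")

def attack(text, steps=200, seed=0):
    """Greedy search: keep a perturbation only if it raises the cost."""
    rng = random.Random(seed)
    best = text
    for _ in range(steps):
        cand = perturb(best, rng)
        if cost(cand) > cost(best):
            best = cand
    return best

original = "translate this sentence into french"
adversarial = attack(original)
print(cost(original), cost(adversarial))
```

The white-box variant in the paper replaces this random search with gradient guidance, but the objective — delaying the end-of-sequence token with minimal, near-invisible input edits — is the same.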

Challenging the promise of invisible ink in the era of large models

MBZUAI · · LLM Research

MBZUAI researchers Nils Lukas and Toluwani Samuel Aremu will present a paper at ICML 2025 demonstrating the vulnerability of current watermarking techniques in LLMs. Their research shows that adaptive paraphrasers can evade detection from watermarks with negligible impact on text quality, costing less than $10 of GPU compute. The attack involves fine-tuning a small open-weight model to rewrite sentences until surrogate keys no longer trigger detection. Why it matters: This work highlights critical weaknesses in current AI provenance methods, suggesting the need for more robust watermarking techniques to maintain trust in the authenticity of AI-generated content.
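
The attack's overall shape — rewrite text until a surrogate detector stops firing — can be illustrated with a toy green-list watermark. Everything below is schematic: the hash-based green list stands in for a real watermark key, and the tiny synonym table stands in for a fine-tuned paraphraser.

```python
# Toy green-list watermark and a paraphrase-style evasion loop (illustrative).
import hashlib

def is_green(prev_tok, tok):
    """A token is 'green' if a hash seeded by the previous token selects it."""
    h = hashlib.sha256(f"{prev_tok}|{tok}".encode()).digest()
    return h[0] % 2 == 0  # roughly half of all tokens are green per context

def green_fraction(tokens):
    """Detector score: fraction of green (prev, current) token pairs."""
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(p, t) for p, t in pairs) / max(len(pairs), 1)

SYNONYMS = {"big": "large", "quick": "fast", "smart": "clever", "said": "stated"}

def paraphrase_until_clean(tokens, threshold=0.5):
    """Greedily swap synonyms while the surrogate detector still fires."""
    tokens = list(tokens)
    for i in range(len(tokens)):
        if green_fraction(tokens) < threshold:
            break  # surrogate detector no longer triggers
        tok = tokens[i]
        if tok in SYNONYMS:
            swapped = tokens[:i] + [SYNONYMS[tok]] + tokens[i + 1:]
            if green_fraction(swapped) < green_fraction(tokens):
                tokens = swapped
    return tokens

watermarked = "the quick fox said something big".split()
cleaned = paraphrase_until_clean(watermarked)
print(round(green_fraction(watermarked), 2), round(green_fraction(cleaned), 2))
```

The paper's adaptive attacker is far stronger — it fine-tunes a small open-weight model as the paraphraser — but the control loop is the same: keep rewrites only when they lower the surrogate detection score.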

Formal Methods for Modern Payment Protocols

MBZUAI · · Finance Ethics

Researchers at ETH Zurich have formalized models of the EMV payment protocol using the Tamarin model checker. They discovered flaws allowing attackers to bypass PIN requirements for high-value purchases on EMV cards like Mastercard and Visa. The team also collaborated with an EMV consortium member to verify the improved EMV Kernel C-8 protocol. Why it matters: This research highlights the importance of formal methods in identifying critical vulnerabilities in widely used payment systems, potentially impacting financial security for consumers in the GCC region and worldwide.

Is that person real? Zoom rolls out new tool to verify meeting participants - Gulf News

Gulf News News · · Product Security

Zoom is reportedly rolling out a new tool designed to verify the identity of participants in online meetings, according to Gulf News. The initiative aims to enhance the security and authenticity of virtual interactions on its platform. The report does not detail which technologies, such as AI or computer vision, power the verification. Why it matters: This feature could significantly improve trust and security in virtual communication for businesses and individuals across the Middle East region.

ARRC Team’s Research Paper Features in IEEE Transactions on Industrial Informatics Journal

TII · · Research Robotics

A research paper by Fatima Al Nuaimi, Dr. Pietro Tedeschi, and Dr. Enrico Natalizio from the Autonomous Robotics Research Center (ARRC) has been published in IEEE Transactions on Industrial Informatics. The paper, titled “Privacy-Aware Remote Identification for Unmanned Aerial Vehicles: Current Solutions, Potential Threats, and Future Directions”, examines vulnerabilities in UAV Remote ID systems. It identifies challenges for industry and academia in enhancing UAV security and privacy. Why it matters: The research highlights critical security and privacy considerations for the rapidly growing UAV sector in the region and globally.

Technology Innovation Institute Launches UAE-first Cryptography Challenges to Enhance Online Security

TII · · Cryptography Security

The Technology Innovation Institute (TII) has launched the "TII McEliece Challenges," the UAE's first cryptography challenges focused on evaluating the McEliece cryptosystem's hardness. The challenges, led by TII’s Cryptography Research Center (CRC), will present cryptanalysis problems across three tracks: Theoretical Key Recovery Algorithms, Practical Key Recovery, and Message Recovery. Participants can compete for a share of a US$75,000 prize pool by identifying vulnerabilities in the McEliece system. Why it matters: This initiative aims to enhance online security, foster local talent in cryptography, and strengthen the UAE's position in post-quantum encryption research.

DSRC appoints Prof. David Naccache as Newest Board of Advisors Member

TII · · Partnership Research

The Digital Science Research Center (DSRC) has appointed Prof. David Naccache to its Board of Advisors for the Digital Security Unit. Prof. Naccache's experience includes cryptography and security, with prior roles at École Normale Supérieure (ENS) and Gemplus. He will provide external research assessment and foster collaboration between TII, ENS, RHUL, and ULux. Why it matters: The appointment strengthens DSRC's digital security research capabilities through Prof. Naccache's expertise and academic network.

Technology Innovation Institute’s Secure Systems Research Center in Abu Dhabi Announces Integration of Secure PX4 Stack into RISC-V Based Drone

TII · · Robotics Research

TII's Secure Systems Research Center in Abu Dhabi has integrated a secure PX4 stack into a RISC-V based drone, marking a milestone in making RISC-V UAV systems a reality. The center ported Dronecode's PX4 open source software to RISC-V using a commercially available RISC-V development platform. SSRC aims to improve the security and resilience of the PX4 flight control software and NuttX real-time OS, contributing modifications back to the open-source community. Why it matters: This achievement enhances TII's position in drone and autonomous systems research, contributing to safer and more efficient smart city applications in the region.

CRC Seminar Series - Cristofaro Mune, Niek Timmers

TII · · Research Ethics

Cristofaro Mune and Niek Timmers presented a seminar on bypassing unbreakable crypto using fault injection on Espressif ESP32 chips. The presentation detailed how the hardware-based Encrypted Secure Boot implementation of the ESP32 SoC was bypassed using a single EM glitch, without knowing the decryption key. This attack exploited multiple hardware vulnerabilities, enabling arbitrary code execution and extraction of plain-text data from external flash. Why it matters: The research highlights critical security vulnerabilities in embedded systems and the potential for fault injection attacks to bypass secure boot mechanisms, necessitating stronger hardware-level security measures.

UAE arrests 10 for posting interception videos and fake AI clips targeting national security - Gulf News

Gulf News News · · Policy Ethics

UAE authorities arrested 10 individuals for creating and sharing videos that falsely depicted security interceptions and used AI to fabricate content threatening national security. The videos, circulated on social media, aimed to disrupt public order and incite negative reactions. The Public Prosecution Office is investigating the case and emphasizes the importance of responsible social media use. Why it matters: This incident highlights growing concerns around AI-generated misinformation and the UAE's commitment to combating digital threats to its stability.

Iranian drone attacks on Amazon’s Gulf data centers a harbinger of new tactics in future conflicts, experts say - Fortune

GCC AI Events · · Infrastructure Policy

A recent Fortune article discusses the potential vulnerability of Gulf data centers, including those operated by Amazon, to drone attacks. Experts suggest that Iranian-backed groups may employ such tactics in future regional conflicts. The hypothetical scenario raises concerns about data security and infrastructure resilience in the region. Why it matters: Highlights the increasing importance of protecting critical digital infrastructure in the GCC from emerging security threats.

What to expect during the Saudi Crown Prince’s visit to the White House - Al Arabiya English

Al Arabiya News · · Policy Partnership

Saudi Crown Prince Mohammed bin Salman is scheduled to visit the White House to meet with the US President. Discussions are expected to cover a range of topics including security, energy, and economic cooperation. The visit aims to strengthen the strategic partnership between Saudi Arabia and the United States. Why it matters: The high-level meeting signals a potential reset in US-Saudi relations and could influence regional stability and energy markets.

TII-SSRC-23 Dataset: Typological Exploration of Diverse Traffic Patterns for Intrusion Detection

arXiv · · Research NLP

Researchers introduce TII-SSRC-23, a new network intrusion detection dataset designed to improve the diversity and representation of modern network traffic for machine learning models. The dataset includes a range of traffic types and subtypes to address the limitations of existing datasets. Feature importance analysis and baseline experiments for supervised and unsupervised intrusion detection are also provided.

Computer vision: Teaching computers how to see the world

KAUST · · CV Research

KAUST's Visual Computing Center (VCC) is researching computer vision, image processing, and machine learning, with applications in self-driving cars, surveillance, and security. Professor Bernard Ghanem is working on teaching machines to understand visual data semantically, similar to how humans perceive the world. Self-driving cars use visual sensors to interpret traffic signals and detect obstacles, while computer vision also assists governments and corporations with security applications like facial recognition and detecting unattended luggage. Why it matters: Advancements in computer vision at KAUST can contribute to innovations in autonomous vehicles and enhance security measures in the region.

Research talk on Privacy and Security Issues in Speech

MBZUAI · · NLP Ethics

A research talk was given on privacy and security issues in speech processing, highlighting the unique privacy challenges due to the biometric information embedded in speech. The talk covered the legal landscape, proposed solutions like cryptographic and hashing-based methods, and adversarial processing techniques. Dr. Bhiksha Raj from Carnegie Mellon University, an expert in speech and audio processing, delivered the talk. Why it matters: As speech-based interfaces become more prevalent in the Middle East, understanding and addressing the associated privacy risks is crucial for ethical AI development and deployment.

Security-Enhanced Radio Access Networks for 5G OpenRAN

MBZUAI · · Security 5G

Dr. Zhiqiang Lin from Ohio State University presented the Security-Enhanced Radio Access Network (SE-RAN) project to address cellular network threats using O-RAN. The project includes 5G-Spector, a framework for detecting L3 protocol exploits via MobiFlow and MobieXpert, and 5G-XSec, a framework leveraging deep learning and LLMs for threat analysis at the network edge. Dr. Lin also outlined a vision for AI convergence with cellular security for enhanced threat detection. Why it matters: Enhancing 5G security through AI and open architectures is critical for protecting next-generation mobile networks in the GCC region and globally.

Energy-Efficient and Secure EdgeAI Systems: From Architectures to Applications

MBZUAI · · Research EdgeAI

Muhammad Shafique from NYU Abu Dhabi discusses building energy-efficient and robust EdgeAI systems. The talk covers trends, challenges, and techniques for optimizing software and hardware stacks. These optimizations aim to enable embodied AI in autonomous systems, IoT-Healthcare, Industrial-IoT, and smart environments. Why it matters: The research addresses key challenges in deploying AI on resource-constrained edge devices in the GCC region, particularly regarding energy efficiency and security.

Hardware Security through the Lens of Dr ML

MBZUAI · · Hardware Security

NYU Abu Dhabi hosted a talk by Prof. Debdeep Mukhopadhyay on the intersection of machine learning and hardware security. The talk covered using ML/DL for side-channel attacks, leakage assessment in crypto-devices, and threats to hardware security primitives. Prof. Mukhopadhyay is a visiting professor at NYU Abu Dhabi and Institute Chair Professor at IIT Kharagpur. Why it matters: The talk highlights the growing importance of hardware security in modern systems and the role of machine learning in both attacking and defending hardware vulnerabilities.

Analyzing Threats of Large-Scale Machine Learning Systems

MBZUAI · · Research Ethics

A PhD candidate from the University of Waterloo presented on threats from large machine learning systems at MBZUAI. The talk covered data privacy during inference and the misuse of ML systems to generate deepfakes. The speaker also analyzed differential privacy and watermarking as potential solutions. Why it matters: Understanding and mitigating the risks of large ML systems is crucial for responsible AI development and deployment in the region.

Adversarial Training: Improvements and Applications

MBZUAI · · Research Ethics

This article discusses adversarial training (AT) as a method to improve the robustness of machine learning models against adversarial attacks. AT simulates adversarial attacks during training so that models both classify data correctly and keep their decision boundaries away from the data. Dr. Jingfeng Zhang from RIKEN-AIP will present on improvements to AT and its application in evaluating and enhancing the reliability of ML methods. Why it matters: As ML models become more prevalent in real-world applications in the GCC region, ensuring their robustness against adversarial attacks is crucial for maintaining their reliability and security.
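
The core idea — perturb inputs toward the loss gradient before each update, so the model trains on worst-case-ish examples — can be sketched with an FGSM-style loop on a NumPy logistic-regression classifier. This is a generic illustration of adversarial training, not Dr. Zhang's specific improvements; the data, epsilon, and learning rate are arbitrary.

```python
# Minimal FGSM-style adversarial training sketch for logistic regression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable toy labels

w, b = np.zeros(2), 0.0
eps, lr = 0.1, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    p = sigmoid(X @ w + b)
    # FGSM step: move each input eps along the sign of dLoss/dx = (p - y) * w
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    # Gradient-descent update computed on the perturbed inputs
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

Because every update is computed on perturbed inputs, the learned boundary is pushed at least `eps` away from the training points — the "no data near the decision boundary" property the article describes.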

Aligning research and student careers

MBZUAI · · Research CV

MBZUAI Associate Professor Karthik Nandakumar is conducting computer vision research with students, focusing on security, privacy, and trustworthiness in AI. His SPriNT-AI lab is developing machine learning algorithms for industries like energy, health, and security, aligning research with the UAE’s strategic goals. One project involves using drones to detect defects in solar plants in collaboration with Masdar. Why it matters: This applied research contributes to the UAE's sustainable development goals and enhances the practical skills of AI students for local industry needs.