TII and Honeywell are partnering to develop quantum-secure satellite communication systems. Honeywell's "QKDSat" platform will integrate with TII's Abu Dhabi Quantum Optical Ground Station (ADQOGS) to test QKD links between satellites and terrestrial networks. The collaboration aims to build quantum-resilient communication infrastructure for government, security, and commercial use. Why it matters: This initiative positions Abu Dhabi as a key player in advancing global cybersecurity and quantum communication technologies.
ADASI has adopted VentureOne's Perceptra, a GPS-less navigation technology, and Saluki, a high-security flight control technology, both developed by the Technology Innovation Institute (TII). These technologies enhance resilience, precision, and security for autonomous aerial operations, addressing vulnerabilities in GPS-dependent systems. The agreement was formalized at IDEX 2025. Why it matters: This deployment of advanced autonomous flight technologies in the UAE strengthens aviation security and positions the region as a leader in resilient, GPS-independent navigation solutions.
Researchers including Dr. Najwa Aaraj developed ML-FEED, a new exploit detection framework using pattern-based techniques. The model is 70x faster than LSTMs and 75,000x faster than Transformers on exploit detection tasks, while also being slightly more accurate. The ML-FEED paper won the Best Paper Award at the 2022 IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications. Why it matters: This research enables more efficient real-time security applications and highlights growing AI expertise in the Arab world.
A cryptanalysis team at the UAE's Cryptography Research Center (CRC) has set new computational records by decrypting a McEliece ciphertext without the secret key in INRIA's McEliece decoding challenge, taking first and second place. The record computation took about 31.4 days on a cluster using 256 CPU cores. The team also achieved top ranks in decoding quasi-cyclic codes and ternary codes, which are used in post-quantum cryptography. Why it matters: This achievement demonstrates the UAE's growing capabilities in advanced cryptography research and its contributions to the global effort to develop quantum-resistant algorithms.
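At its core, the challenge is a syndrome decoding problem: given a binary parity-check matrix H and a syndrome s, find a low-weight error vector e with He = s (mod 2). Below is a minimal sketch of Prange's information-set decoding, the ancestor of the algorithm family behind such record computations, on toy parameters; the record attacks use far more sophisticated ISD variants and massive parallelism, so this is illustrative only.

```python
import numpy as np

def prange_isd(H, s, t, max_iters=5000, seed=0):
    """Find an error e of weight <= t with H @ e = s (mod 2), or None."""
    rng = np.random.default_rng(seed)
    r, n = H.shape
    for _ in range(max_iters):
        perm = rng.permutation(n)          # guess an information set
        A = H[:, perm].copy()
        b = s.copy()
        # Gauss-Jordan elimination over GF(2) on the first r columns
        singular = False
        for i in range(r):
            piv = np.flatnonzero(A[i:, i])
            if piv.size == 0:
                singular = True
                break
            p = piv[0] + i
            A[[i, p]] = A[[p, i]]
            b[i], b[p] = b[p], b[i]
            for j in np.flatnonzero(A[:, i]):
                if j != i:
                    A[j] ^= A[i]
                    b[j] ^= b[i]
        # If the guessed columns were invertible and the solution is sparse, done
        if not singular and int(b.sum()) <= t:
            e = np.zeros(n, dtype=np.uint8)
            e[perm[:r]] = b
            return e
    return None
```

Each iteration succeeds only if the true error happens to be supported on the guessed column set, which is why real attacks need enormous numbers of iterations and cleverer search strategies.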
The UAE is actively engaged in training its Emirati citizens in Artificial Intelligence technologies. This national initiative aims to equip the workforce with advanced AI skills to meet the increasing demands across various sectors. The program specifically emphasizes AI's expanding role in enhancing national security and improving governmental operations and efficiency. Why it matters: This strategic investment in local AI talent underscores the UAE's commitment to technological self-reliance and the proactive integration of AI into critical public and security domains.
A study compared the vulnerability of C programs generated by nine state-of-the-art Large Language Models (LLMs) using a zero-shot prompt. The researchers introduced FormAI-v2, a dataset of 331,000 C programs generated by these LLMs, and found that at least 62.07% of the generated programs contained vulnerabilities, detected via formal verification. The research highlights the need for risk assessment and validation when deploying LLM-generated code in production environments.
This paper introduces Provable Unrestricted Adversarial Training (PUAT), a novel adversarial training approach. PUAT enhances robustness against both unrestricted and restricted adversarial examples while improving standard generalizability by aligning the distributions of adversarial examples, natural data, and the classifier's learned distribution. The approach uses partially labeled data and an augmented triple-GAN to generate effective unrestricted adversarial examples, demonstrating superior performance on benchmarks.
The paper introduces LLMEffiChecker, a tool to test the computational efficiency robustness of LLMs by identifying vulnerabilities that can significantly degrade performance. LLMEffiChecker uses both white-box (gradient-guided perturbation) and black-box (causal inference-based perturbation) methods to delay the generation of the end-of-sequence token. Experiments on nine public LLMs demonstrate that LLMEffiChecker can substantially increase response latency and energy consumption with minimal input perturbations.
MBZUAI researchers Nils Lukas and Toluwani Samuel Aremu will present a paper at ICML 2025 demonstrating the vulnerability of current watermarking techniques in LLMs. Their research shows that adaptive paraphrasers can evade detection from watermarks with negligible impact on text quality, costing less than $10 of GPU compute. The attack involves fine-tuning a small open-weight model to rewrite sentences until surrogate keys no longer trigger detection. Why it matters: This work highlights critical weaknesses in current AI provenance methods, suggesting the need for more robust watermarking techniques to maintain trust in the authenticity of AI-generated content.
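The attack pattern can be illustrated against a toy "green-list" watermark (in the style of hash-seeded token watermarks), with a greedy synonym-substitution loop standing in for the fine-tuned paraphraser. The hashing rule, detection threshold, and synonym table below are illustrative assumptions, not the authors' setup.

```python
import hashlib
import math

GAMMA = 0.5  # expected fraction of "green" tokens without a watermark

def is_green(prev_tok, tok):
    # Toy keyed rule: the previous token seeds a hash that splits the vocabulary
    h = hashlib.sha256(f"{prev_tok}|{tok}".encode()).digest()
    return h[0] % 2 == 0

def z_score(tokens):
    # Detection statistic: how far the green-token count exceeds chance
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

def paraphrase_attack(tokens, synonyms, threshold=2.0):
    # Greedy stand-in for the paraphraser: swap words while detection stays high
    toks = list(tokens)
    for i, tok in enumerate(toks):
        if z_score(toks) < threshold:
            break
        for alt in synonyms.get(tok, []):
            cand = toks[:i] + [alt] + toks[i + 1:]
            if z_score(cand) < z_score(toks):
                toks = cand
                break
    return toks
```

The paper's attack replaces this greedy loop with a fine-tuned rewriting model, but the objective is the same: lower the detector's statistic while preserving meaning.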
Researchers at ETH Zurich have formalized models of the EMV payment protocol using the Tamarin model checker. They discovered flaws allowing attackers to bypass PIN requirements for high-value purchases on EMV cards like Mastercard and Visa. The team also collaborated with an EMV consortium member to verify the improved EMV Kernel C-8 protocol. Why it matters: This research highlights the importance of formal methods in identifying critical vulnerabilities in widely used payment systems, potentially impacting financial security for consumers in the GCC region and worldwide.
Zoom is reportedly rolling out a new tool designed to verify the identity of participants in online meetings, according to a report from Gulf News. This initiative aims to enhance the security and authenticity of virtual interactions on its platform. The report does not detail the specific technologies employed for this verification, such as AI or computer vision. Why it matters: This feature could significantly improve trust and security in virtual communication for businesses and individuals across the Middle East region.
A research paper by Fatima Al Nuaimi, Dr. Pietro Tedeschi, and Dr. Enrico Natalizio from the Autonomous Robotics Research Center (ARRC) has been published in IEEE Transactions on Industrial Informatics. The paper, titled “Privacy-Aware Remote Identification for Unmanned Aerial Vehicles: Current Solutions, Potential Threats, and Future Directions”, examines vulnerabilities in UAV Remote ID systems. It identifies challenges for industry and academia in enhancing UAV security and privacy. Why it matters: The research highlights critical security and privacy considerations for the rapidly growing UAV sector in the region and globally.
The Technology Innovation Institute (TII) has launched the "TII McEliece Challenges," the UAE's first cryptography challenges focused on evaluating the McEliece cryptosystem's hardness. The challenges, led by TII’s Cryptography Research Center (CRC), will present cryptanalysis problems across three tracks: Theoretical Key Recovery Algorithms, Practical Key Recovery, and Message Recovery. Participants can compete for a share of a US$75,000 prize pool by identifying vulnerabilities in the McEliece system. Why it matters: This initiative aims to enhance online security, foster local talent in cryptography, and strengthen the UAE's position in post-quantum encryption research.
The Digital Science Research Center (DSRC) has appointed Prof. David Naccache to its Board of Advisors for the Digital Security Unit. Prof. Naccache's experience spans cryptography and security, with prior roles at École Normale Supérieure (ENS) and Gemplus. He will provide external research assessment and foster collaboration between TII, ENS, RHUL, and ULux. Why it matters: The appointment strengthens DSRC's digital security research capabilities through Prof. Naccache's expertise and academic network.
TII's Secure Systems Research Center (SSRC) in Abu Dhabi has integrated a secure PX4 stack into a RISC-V based drone, marking a milestone in making RISC-V UAV systems a reality. The center ported Dronecode's PX4 open-source flight control software to RISC-V using a commercially available RISC-V development platform. SSRC aims to improve the security and resilience of the PX4 flight control software and the NuttX real-time OS, contributing its modifications back to the open-source community. Why it matters: This achievement enhances TII's position in drone and autonomous systems research, contributing to safer and more efficient smart city applications in the region.
Cristofaro Mune and Niek Timmers presented a seminar on bypassing unbreakable crypto using fault injection on Espressif ESP32 chips. The presentation detailed how the hardware-based Encrypted Secure Boot implementation of the ESP32 SoC was bypassed using a single EM glitch, without knowing the decryption key. This attack exploited multiple hardware vulnerabilities, enabling arbitrary code execution and extraction of plain-text data from external flash. Why it matters: The research highlights critical security vulnerabilities in embedded systems and the potential for fault injection attacks to bypass secure boot mechanisms, necessitating stronger hardware-level security measures.
UAE authorities arrested 10 individuals for creating and sharing videos that falsely depicted security interceptions and used AI to fabricate content threatening national security. The videos, circulated on social media, aimed to disrupt public order and incite negative reactions. The Public Prosecution Office is investigating the case and emphasizes the importance of responsible social media use. Why it matters: This incident highlights growing concerns around AI-generated misinformation and the UAE's commitment to combating digital threats to its stability.
A recent Fortune article discusses the potential vulnerability of Gulf data centers, including those operated by Amazon, to drone attacks. Experts suggest that Iranian-backed groups may employ such tactics in future regional conflicts. The hypothetical scenario raises concerns about data security and infrastructure resilience in the region. Why it matters: Highlights the increasing importance of protecting critical digital infrastructure in the GCC from emerging security threats.
Saudi Crown Prince Mohammed bin Salman is scheduled to visit the White House to meet with US President Joe Biden. Discussions are expected to cover a range of topics including security, energy, and economic cooperation. The visit aims to strengthen the strategic partnership between Saudi Arabia and the United States. Why it matters: The high-level meeting signals a potential reset in US-Saudi relations and could influence regional stability and energy markets.
Researchers introduce TII-SSRC-23, a new network intrusion detection dataset designed to improve the diversity and representation of modern network traffic for machine learning models. The dataset includes a range of traffic types and subtypes to address the limitations of existing datasets. Feature importance analysis and baseline experiments for supervised and unsupervised intrusion detection are also provided.
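As a sketch of what a supervised baseline on such a dataset looks like, here is a minimal nearest-centroid classifier over numeric per-flow features; the feature layout and class names are illustrative assumptions, not the TII-SSRC-23 schema or the paper's baseline models.

```python
import numpy as np

# Illustrative features per flow: [duration, bytes/s, packets/s, mean pkt size]
def fit_centroids(X, y):
    """Learn one mean feature vector per traffic class (e.g. 'benign', 'dos')."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    # Assign each flow to the class whose centroid is nearest in feature space
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[dists.argmin(axis=0)]
```

In practice features are scaled first and stronger models (tree ensembles, neural networks) replace the centroid rule, but the fit/predict structure of the baseline experiments is the same.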
KAUST's Visual Computing Center (VCC) is researching computer vision, image processing, and machine learning, with applications in self-driving cars, surveillance, and security. Professor Bernard Ghanem is working on teaching machines to understand visual data semantically, similar to how humans perceive the world. Self-driving cars use visual sensors to interpret traffic signals and detect obstacles, while computer vision also assists governments and corporations with security applications like facial recognition and detecting unattended luggage. Why it matters: Advancements in computer vision at KAUST can contribute to innovations in autonomous vehicles and enhance security measures in the region.
A research talk was given on privacy and security issues in speech processing, highlighting the unique privacy challenges due to the biometric information embedded in speech. The talk covered the legal landscape, proposed solutions like cryptographic and hashing-based methods, and adversarial processing techniques. Dr. Bhiksha Raj from Carnegie Mellon University, an expert in speech and audio processing, delivered the talk. Why it matters: As speech-based interfaces become more prevalent in the Middle East, understanding and addressing the associated privacy risks is crucial for ethical AI development and deployment.
Dr. Zhiqiang Lin from Ohio State University presented the Security-Enhanced Radio Access Network (SE-RAN) project to address cellular network threats using O-RAN. The project includes 5G-Spector, a framework for detecting L3 protocol exploits via MobiFlow and MobieXpert, and 5G-XSec, a framework leveraging deep learning and LLMs for threat analysis at the network edge. Dr. Lin also outlined a vision for AI convergence with cellular security for enhanced threat detection. Why it matters: Enhancing 5G security through AI and open architectures is critical for protecting next-generation mobile networks in the GCC region and globally.
Muhammad Shafique from NYU Abu Dhabi discusses building energy-efficient and robust EdgeAI systems. The talk covers trends, challenges, and techniques for optimizing software and hardware stacks. These optimizations aim to enable embodied AI in autonomous systems, IoT-Healthcare, Industrial-IoT, and smart environments. Why it matters: The research addresses key challenges in deploying AI on resource-constrained edge devices in the GCC region, particularly regarding energy efficiency and security.
NYU Abu Dhabi hosted a talk by Prof. Debdeep Mukhopadhyay on the intersection of machine learning and hardware security. The talk covered using ML/DL for side-channel attacks, leakage assessment in crypto-devices, and threats to hardware security primitives. Prof. Mukhopadhyay is a visiting professor at NYU Abu Dhabi and Institute Chair Professor at IIT Kharagpur. Why it matters: The talk highlights the growing importance of hardware security in modern systems and the role of machine learning in both attacking and defending hardware vulnerabilities.
A PhD candidate from the University of Waterloo presented on threats from large machine learning systems at MBZUAI. The talk covered data privacy during inference and the misuse of ML systems to generate deepfakes. The speaker also analyzed differential privacy and watermarking as potential solutions. Why it matters: Understanding and mitigating the risks of large ML systems is crucial for responsible AI development and deployment in the region.
This article discusses adversarial training (AT) as a method to improve the robustness of machine learning models against adversarial attacks. AT simulates adversarial attacks during training so that the model both classifies data correctly and keeps data away from its decision boundaries. Dr. Jingfeng Zhang from RIKEN-AIP will present on improvements to AT and its application in evaluating and enhancing the reliability of ML methods. Why it matters: As ML models become more prevalent in real-world applications in the GCC region, ensuring their robustness against adversarial attacks is crucial for maintaining their reliability and security.
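The AT loop can be sketched concretely with FGSM (fast gradient sign method) perturbations on a toy logistic-regression model; the data, step sizes, and model are illustrative assumptions, not the talk's setup, which concerns deeper models and refinements of this basic scheme.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, x, y, eps):
    # FGSM: move x one eps-step along the sign of the loss gradient w.r.t. the input
    grad_x = (sigmoid(x @ w + b) - y) * w   # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

def adversarial_train(X, Y, eps=0.1, lr=0.5, epochs=200):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        # Train on the worst-case (FGSM-perturbed) version of each example
        Xadv = np.array([fgsm(w, b, x, y, eps) for x, y in zip(X, Y)])
        P = sigmoid(Xadv @ w + b)
        w -= lr * Xadv.T @ (P - Y) / len(Y)
        b -= lr * (P - Y).mean()
    return w, b
```

Training on the perturbed inputs rather than the clean ones is what pushes the decision boundary away from the data, which is exactly the robustness property AT targets.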
MBZUAI Associate Professor Karthik Nandakumar is conducting computer vision research with students, focusing on security, privacy, and trustworthiness in AI. His SPriNT-AI lab is developing machine learning algorithms for industries like energy, health, and security, aligning research with the UAE’s strategic goals. One project involves using drones to detect defects in solar plants in collaboration with Masdar. Why it matters: This applied research contributes to the UAE's sustainable development goals and enhances the practical skills of AI students for local industry needs.