GCC AI Research


Results for "software security"

How secure is AI-generated Code: A Large-Scale Comparison of Large Language Models

arXiv ·

A study compared the vulnerability of C programs generated by nine state-of-the-art Large Language Models (LLMs) using a zero-shot prompt. The researchers introduced FormAI-v2, a dataset of 331,000 C programs generated by these LLMs, and found, using formal verification, that at least 62.07% of the generated programs contained vulnerabilities. The research highlights the need for risk assessment and validation before deploying LLM-generated code in production environments.

Hackers and the Internet of Things

KAUST ·

Cybersecurity specialist James Lyne spoke at KAUST's 2018 Winter Enrichment Program (WEP) about cybersecurity threats and techniques. Lyne demonstrated hacking and phishing attacks, emphasizing how hackers can exploit personal information by bypassing basic security measures. He highlighted the increasing sophistication of cybercriminals and the existence of illicit marketplaces on the dark web where hacking applications are sold. Why it matters: Raising awareness of cybersecurity threats is crucial for protecting individuals and organizations in Saudi Arabia and the broader region as digital infrastructure expands.

Hardware Security through the Lens of Dr ML

MBZUAI ·

NYU Abu Dhabi hosted a talk by Prof. Debdeep Mukhopadhyay on the intersection of machine learning and hardware security. The talk covered using ML/DL for side-channel attacks, leakage assessment in crypto-devices, and threats to hardware security primitives. Prof. Mukhopadhyay is a visiting professor at NYU Abu Dhabi and Institute Chair Professor at IIT Kharagpur. Why it matters: The talk highlights the growing importance of hardware security in modern systems and the role of machine learning in both attacking and defending hardware vulnerabilities.

Trustworthiness Assurance for Autonomous Software Systems in the AI Era

MBZUAI ·

Dr. Youcheng Sun from the University of Manchester presented on ensuring the trustworthiness of AI systems using formal verification, software testing, and explainable AI. He discussed applying these techniques to challenges like copyright protection for AI models. Dr. Sun's research has been funded by organizations including Google, Ethereum Foundation, and the UK’s Defence Science and Technology Laboratory. Why it matters: As AI adoption grows in the GCC, ensuring the safety, dependability, and trustworthiness of these systems is crucial for public trust and responsible innovation.

Building a secure digital future for Saudi Arabia

KAUST ·

KAUST professors Roberto Di Pietro and Marc Dacier co-authored a paper on cybersecurity strategies for Saudi Arabia and the Arab world, published in Communications of the ACM. The paper outlines a multidisciplinary framework for digitization aligned with Saudi Vision 2030, emphasizing global best practices, cultural adaptation, and capacity building. KAUST is positioned to advise on national cybersecurity policy in cooperation with the Saudi National Cybersecurity Authority. Why it matters: The framework addresses the critical need for advanced cybersecurity to support Saudi Arabia's rapidly growing digital economy and infrastructure.

CRC Seminar Series - Cristofaro Mune, Niek Timmers

TII ·

Cristofaro Mune and Niek Timmers presented a seminar on bypassing unbreakable crypto using fault injection on Espressif ESP32 chips. The presentation detailed how the hardware-based Encrypted Secure Boot implementation of the ESP32 SoC was bypassed with a single electromagnetic (EM) glitch, without knowledge of the decryption key. The attack exploited multiple hardware vulnerabilities, enabling arbitrary code execution and extraction of plain-text data from external flash. Why it matters: The research highlights critical security vulnerabilities in embedded systems and the potential for fault injection attacks to bypass secure boot mechanisms, necessitating stronger hardware-level security measures.
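One widely used software countermeasure against single-glitch attacks like the one described is redundancy: evaluate the security-critical decision more than once, with non-trivial constants, and fail closed on any disagreement. The sketch below is a simplified, hypothetical illustration of that pattern, not the ESP32's actual boot code; the function names and constants are invented:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Non-trivial status constants: a single bit flip cannot turn FAIL
 * into OK, unlike 0/1 flags. */
#define BOOT_OK   0xA5C3u
#define BOOT_FAIL 0x5A3Cu

/* Placeholder check; real secure boot verifies a cryptographic
 * signature over the firmware image here. */
static uint16_t verify_image(const uint8_t *img, const uint8_t *ref) {
    return (memcmp(img, ref, 4) == 0) ? BOOT_OK : BOOT_FAIL;
}

/* Double-check pattern: run the verification twice and require both
 * results to be BOOT_OK and equal. A single glitch that corrupts one
 * evaluation leaves a mismatch, and the device refuses to boot. */
int secure_boot(const uint8_t *img, const uint8_t *ref) {
    uint16_t r1 = verify_image(img, ref);
    uint16_t r2 = verify_image(img, ref);  /* redundant re-check */
    if (r1 == BOOT_OK && r2 == BOOT_OK && r1 == r2)
        return 1;  /* proceed with boot */
    return 0;      /* fail closed */
}
```

Such redundancy raises the bar from one precisely timed glitch to several, though as the talk shows, it does not replace hardware-level protections.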

SSRC Joins Forces with UNSW to Fortify Systems, Prevent Hacking

TII ·

The Secure Systems Research Center (SSRC) has partnered with the University of New South Wales (UNSW Sydney) to research enhancing and scaling the seL4 microkernel on edge devices. The collaboration aims to extend the seL4 microkernel to support dynamic virtualization, combining a minimal trusted computing base with strong isolation. This will address challenges related to heterogeneous hardware, software, and environmental factors in edge computing. Why it matters: This partnership aims to improve the security of edge devices in critical sectors, addressing vulnerabilities in cyber-physical and autonomous systems.

Security-Enhanced Radio Access Networks for 5G OpenRAN

MBZUAI ·

Dr. Zhiqiang Lin from Ohio State University presented the Security-Enhanced Radio Access Network (SE-RAN) project to address cellular network threats using O-RAN. The project includes 5G-Spector, a framework for detecting L3 protocol exploits via MobiFlow and MobieXpert, and 5G-XSec, a framework leveraging deep learning and LLMs for threat analysis at the network edge. Dr. Lin also outlined a vision for AI convergence with cellular security for enhanced threat detection. Why it matters: Enhancing 5G security through AI and open architectures is critical for protecting next-generation mobile networks in the GCC region and globally.