GCC AI Research

Results for "Formal verification"

Formal Methods for Modern Payment Protocols

MBZUAI ·

Researchers at ETH Zurich have formalized models of the EMV payment protocol using the Tamarin model checker. They discovered flaws that let attackers bypass the PIN requirement for high-value purchases on both Mastercard and Visa cards. The team also collaborated with an EMV consortium member to verify the improved EMV Kernel C-8 protocol. Why it matters: This research highlights the importance of formal methods for identifying critical vulnerabilities in widely used payment systems, with direct implications for consumer financial security in the GCC region and worldwide.
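The published PIN-bypass attacks work roughly like this: a man-in-the-middle rewrites unauthenticated card data in transit so the terminal believes cardholder verification has already been handled. The toy Python model below illustrates the idea; the field name, PIN limit, and decision logic are simplified stand-ins, not the actual EMV message format or terminal code.

```python
# Toy model of the EMV contactless PIN-bypass idea: a man-in-the-middle
# rewrites the card's (unauthenticated) transaction qualifiers so the
# terminal believes no PIN is required. All names and values here are
# illustrative simplifications, not real EMV fields.

PIN_LIMIT = 50  # toy threshold: terminal asks for a PIN above this amount

def terminal_decision(amount, card_qualifiers):
    """Terminal approves without a PIN if the card claims CVM was handled."""
    if amount <= PIN_LIMIT:
        return "approve"
    if card_qualifiers.get("consumer_device_cvm_performed"):
        return "approve"  # trusts a flag the card never authenticated
    return "require_pin"

def mitm(card_qualifiers):
    """Attacker modifies the qualifiers between card and terminal."""
    tampered = dict(card_qualifiers)
    tampered["consumer_device_cvm_performed"] = True
    return tampered

honest = {"consumer_device_cvm_performed": False}

print(terminal_decision(200, honest))        # require_pin
print(terminal_decision(200, mitm(honest)))  # approve -- PIN bypassed
```

The essence of the fix the researchers analyzed is to make such qualifiers cryptographically authenticated by the card, so the terminal can detect tampering in transit.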

Martingale-based Verification of Probabilistic Programs

MBZUAI ·

Amir Goharshady from Hong Kong University of Science and Technology presented a talk at MBZUAI on martingale-based verification of probabilistic programs. The talk covered using martingale-based approaches for proving termination and synthesizing cost bounds for probabilistic programs, automating program analysis with template-based methods. He also discussed remaining challenges and open problems in the area. Why it matters: Advances in formal verification and analysis of probabilistic programs are crucial for ensuring the reliability and safety of AI systems that rely on randomization.
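To give a flavor of the martingale approach: for a probabilistic loop one exhibits an expression over the program state whose expected value strictly decreases by some fixed amount on every iteration (a ranking supermartingale), which certifies termination with probability 1. The sketch below is a minimal hand-checked example for a biased random walk, not code from the talk.

```python
import random

# Biased random walk: x decreases by 1 with probability 3/4,
# increases by 1 with probability 1/4, and the loop runs while x > 0.
# Candidate ranking supermartingale: M(x) = x.
# One-step expected change: (3/4)*(-1) + (1/4)*(+1) = -1/2 < 0,
# so the loop terminates with probability 1.

def expected_drift():
    """Exact one-step drift of M(x) = x, computed from the distribution."""
    return 0.75 * (-1) + 0.25 * (+1)

def run(x0, rng):
    """Execute the loop once; returns the number of iterations taken."""
    x, steps = x0, 0
    while x > 0:
        x += -1 if rng.random() < 0.75 else 1
        steps += 1
    return steps

assert expected_drift() < 0   # the supermartingale decrease condition
print(expected_drift())       # -0.5
print(run(10, random.Random(0)))
```

Template-based methods of the kind the talk discussed automate the hard step: instead of guessing M(x) by hand, one fixes a parametric template such as M(x) = a*x + b and solves the drift constraints for a and b with a constraint solver.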

How secure is AI-generated Code: A Large-Scale Comparison of Large Language Models

arXiv ·

A study compared the security of C programs generated by nine state-of-the-art Large Language Models (LLMs) from zero-shot prompts. The researchers introduced FormAI-v2, a dataset of 331,000 C programs generated by these LLMs, and found via formal verification that at least 62.07% of the programs contained vulnerabilities. The research underscores the need for risk assessment and validation before deploying LLM-generated code in production environments.

Trustworthiness Assurance for Autonomous Software Systems in the AI Era

MBZUAI ·

Dr. Youcheng Sun from the University of Manchester presented on ensuring the trustworthiness of AI systems using formal verification, software testing, and explainable AI. He discussed applying these techniques to challenges like copyright protection for AI models. Dr. Sun's research has been funded by organizations including Google, Ethereum Foundation, and the UK’s Defence Science and Technology Laboratory. Why it matters: As AI adoption grows in the GCC, ensuring the safety, dependability, and trustworthiness of these systems is crucial for public trust and responsible innovation.

CRC Seminar Series - Conor McMenamin

TII ·

Conor McMenamin from Universitat Pompeu Fabra presented a seminar on State Machine Replication (SMR) without honest participants. The talk covered the limitations of current SMR protocols and introduced the ByRa model, a player-characterization framework that drops the assumption of honest participants. He then described FAIRSICAL, a sandbox SMR protocol, and discussed how the ideas could be extended to real-world protocols, with a focus on blockchains and cryptocurrencies. Why it matters: This research on SMR protocols and their incentive compatibility could lead to more robust and secure blockchain technologies in the region.

SSRC Secures seL4 Membership

TII ·

The Secure Systems Research Center (SSRC) has joined the seL4 Foundation. Membership allows SSRC to participate in and contribute to the open-source development of seL4, a formally verified operating-system microkernel. SSRC aims to research, contribute to, and advance next-generation high-end edge-device environments built on seL4's capabilities. Why it matters: This move enhances the UAE's capabilities in developing secure and resilient edge computing solutions, fostering innovation in critical sectors like secure communications and drone technology.