An MBZUAI team developed a self-ensembling vision transformer to enhance the security of AI in medical imaging. The model aims to protect patient anonymity and ensure the validity of medical image analysis. It addresses a known weakness of imaging models: adversarially manipulated inputs can steer them toward misinterpretations with potentially harmful consequences in healthcare. Why it matters: This research is crucial for building trust and enabling the safe deployment of AI in sensitive medical applications, protecting against fraud and ensuring patient safety.
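The summary does not spell out the mechanism, but a common way to realize self-ensembling in a vision transformer is to attach lightweight classification heads to the intermediate transformer blocks and combine their votes, so that a perturbation must fool many classifiers at once. The PyTorch sketch below illustrates that general idea only; the class name, dimensions, and voting rule are invented for illustration and are not the team's implementation.

```python
import torch
import torch.nn as nn

class SelfEnsemblingViT(nn.Module):
    """Sketch of a self-ensembling ViT: a lightweight head after each
    transformer block votes alongside the others, so an adversarial
    perturbation must fool many intermediate classifiers at once.
    Illustrative assumption only, not the authors' architecture."""

    def __init__(self, embed_dim=768, depth=12, num_heads=12, num_classes=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
            for _ in range(depth))
        self.heads = nn.ModuleList(
            nn.Linear(embed_dim, num_classes) for _ in range(depth))

    def forward(self, tokens):
        # `tokens`: pre-embedded patch tokens with a CLS token at index 0;
        # the patch-embedding stage is omitted to keep the sketch short.
        probs = []
        x = tokens
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            probs.append(head(x[:, 0]).softmax(dim=-1))  # classify CLS token
        return torch.stack(probs).mean(dim=0)            # soft-voting ensemble

model = SelfEnsemblingViT()
out = model(torch.randn(4, 197, 768))   # 4 images, 196 patches + CLS each
print(out.shape)                        # torch.Size([4, 2])
```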
This paper introduces Provable Unrestricted Adversarial Training (PUAT), a novel adversarial training approach. PUAT improves robustness against both unrestricted and restricted adversarial examples, as well as standard generalization, by aligning three distributions: that of the adversarial examples, that of the natural data, and the one learned by the classifier. The approach uses partially labeled data and an augmented triple-GAN to generate effective unrestricted adversarial examples, and it achieves superior performance on standard benchmarks.
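To make the distribution-alignment idea concrete, here is a heavily simplified PyTorch sketch of one triple-GAN-style step: a discriminator judges (example, label) pairs, while a label-conditioned generator and the classifier both learn to fool it, which at equilibrium pulls the three distributions together. The architectures, dimensions, and names are toy assumptions rather than the paper's model, and the separate optimizers a real training loop needs are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy dimensions and MLPs; every architectural choice here is an
# illustrative assumption, not the paper's model.
X_DIM, Z_DIM, N_CLASSES, HIDDEN = 32, 16, 10, 64

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, HIDDEN), nn.ReLU(),
                         nn.Linear(HIDDEN, out_dim))

G = mlp(Z_DIM + N_CLASSES, X_DIM)   # G(z, y): label-conditioned generator
C = mlp(X_DIM, N_CLASSES)           # classifier p_C(y | x)
D = mlp(X_DIM + N_CLASSES, 1)       # D(x, y): real-pair discriminator

def alignment_step(x_lab, y_lab, x_unlab):
    """One triple-GAN-style step: D separates real (x, y) pairs from
    generator pairs and classifier-labeled pairs, while G and C learn
    to fool D. At equilibrium the three joint distributions coincide,
    which is the alignment PUAT builds on."""
    bce = F.binary_cross_entropy_with_logits
    y_1hot = F.one_hot(y_lab, N_CLASSES).float()
    z = torch.randn(x_lab.size(0), Z_DIM)
    x_fake = G(torch.cat([z, y_1hot], -1))              # generated pair
    y_pseudo = F.gumbel_softmax(C(x_unlab), hard=True)  # classifier pair

    def score(x, y):
        return D(torch.cat([x, y], -1))

    real = score(x_lab, y_1hot)
    fake_g = score(x_fake.detach(), y_1hot)
    fake_c = score(x_unlab, y_pseudo.detach())
    d_loss = (bce(real, torch.ones_like(real))
              + bce(fake_g, torch.zeros_like(fake_g))
              + bce(fake_c, torch.zeros_like(fake_c)))

    g_out = score(x_fake, y_1hot)
    g_loss = bce(g_out, torch.ones_like(g_out))         # G fools D
    c_out = score(x_unlab, y_pseudo)
    c_loss = (F.cross_entropy(C(x_lab), y_lab)          # fit labeled data
              + bce(c_out, torch.ones_like(c_out)))     # C fools D
    return d_loss, g_loss, c_loss

x_lab = torch.randn(8, X_DIM)
y_lab = torch.randint(0, N_CLASSES, (8,))
x_unlab = torch.randn(16, X_DIM)
print([float(l) for l in alignment_step(x_lab, y_lab, x_unlab)])
```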
Researchers from MBZUAI, KAUST, and Mila are collaborating to develop methods for identifying and mitigating the impact of malicious actors in federated learning systems used for health data analysis. These systems aggregate anonymized data from numerous devices to generate insights for healthcare improvements. The team's research, accepted at ICLR 2023, focuses on using variance reduction techniques to counteract the disruptive effects of skewed or corrupted updates submitted by dishonest (Byzantine) participants. Why it matters: Protecting the integrity of AI-driven health systems is crucial for ensuring the reliability and safety of insights derived from sensitive patient data in the GCC region and globally.
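As a rough illustration of why variance reduction helps against bad actors, the NumPy sketch below pairs SARAH-style recursive gradient estimates on honest clients with a coordinate-wise median aggregator on the server: because honest estimates stay tightly clustered, outlier submissions cannot drag the robust aggregate far. The quadratic objective, constants, restart schedule, and choice of aggregator are toy assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy federated setup: honest clients see noisy gradients of a shared
# quadratic 0.5 * ||w - w_star||^2; Byzantine clients send large junk.
DIM, N_HONEST, N_BYZ, STEPS, LR, RESTART = 10, 16, 4, 200, 0.2, 20
w_star = rng.normal(size=DIM)

def grad(w, xi):
    """Stochastic gradient at w under sample noise xi."""
    return (w - w_star) + xi

w = np.zeros(DIM)
prev_w = w.copy()
est = np.zeros((N_HONEST, DIM))

for step in range(STEPS):
    for i in range(N_HONEST):
        if step % RESTART == 0:
            # Periodic restart: average a small batch for a low-variance anchor.
            est[i] = np.mean([grad(w, rng.normal(size=DIM))
                              for _ in range(8)], axis=0)
        else:
            # SARAH-style recursion: the same sample xi is used at w and
            # prev_w, so its noise cancels in the correction term (exactly
            # in this toy, approximately in general), keeping honest
            # estimates tightly clustered.
            xi = rng.normal(size=DIM)
            est[i] = est[i] + grad(w, xi) - grad(prev_w, xi)
    # Byzantine clients submit adversarial vectors.
    submissions = np.vstack([est, 50.0 * rng.normal(size=(N_BYZ, DIM))])
    update = np.median(submissions, axis=0)   # robust aggregation
    prev_w, w = w.copy(), w - LR * update

print("distance to optimum:", np.linalg.norm(w - w_star))
```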
Dr. Zhiqiang Lin from Ohio State University presented the Security-Enhanced Radio Access Network (SE-RAN) project to address cellular network threats using O-RAN. The project includes 5G-Spector, a framework for detecting layer-3 (L3) protocol exploits via MobiFlow and MobieXpert, and 5G-XSec, a framework leveraging deep learning and LLMs for threat analysis at the network edge. Dr. Lin also outlined a vision for converging AI and cellular security to enhance threat detection. Why it matters: Enhancing 5G security through AI and open architectures is critical for protecting next-generation mobile networks in the GCC region and globally.
A study compared the vulnerability of C programs generated by nine state-of-the-art Large Language Models (LLMs) prompted in a zero-shot setting. The researchers introduced FormAI-v2, a dataset of 331,000 C programs generated by these LLMs, and found that at least 62.07% of the programs contained vulnerabilities, detected via formal verification (bounded model checking). The research highlights the need for risk assessment and validation before deploying LLM-generated code in production environments.
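A minimal sketch of the kind of triage workflow the study motivates: run a formal verifier over each generated C file and quarantine anything it cannot prove safe. The verifier command below is a hypothetical placeholder, not a real CLI; substitute the actual invocation of a bounded model checker such as ESBMC, and note that the exit-code convention is also an assumption.

```python
import pathlib
import subprocess

# Placeholder command, not a real tool: swap in the actual invocation of
# your bounded model checker (e.g. ESBMC) with its real flags.
VERIFIER_CMD = ["formal-verifier", "--check-safety"]
TIMEOUT_S = 60

def classify(c_file: pathlib.Path) -> str:
    """Return 'safe', 'vulnerable', or 'unknown' for one C program."""
    try:
        result = subprocess.run(VERIFIER_CMD + [str(c_file)],
                                capture_output=True, text=True,
                                timeout=TIMEOUT_S)
    except subprocess.TimeoutExpired:
        return "unknown"          # verification did not terminate in time
    # Assumes the verifier exits non-zero when a safety property fails.
    return "safe" if result.returncode == 0 else "vulnerable"

counts = {"safe": 0, "vulnerable": 0, "unknown": 0}
for path in pathlib.Path("generated_c_programs").glob("*.c"):
    counts[classify(path)] += 1
print(counts)
```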
This paper proposes a framework for understanding AI sovereignty as a balance between autonomy and interdependence, considering global data, supply chains, and standards. It introduces a planner's model with policy heuristics for equalizing marginal returns across sovereignty pillars and for setting the degree of openness. The model is applied to India and the Middle East (Saudi Arabia and the UAE), finding that managed interdependence, rather than isolation, is the key to AI sovereignty.
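The equal-marginal-returns heuristic has a simple closed form when pillar returns are concave: with log returns, equalizing marginal returns allocates budget in proportion to pillar weights. A toy Python illustration follows, assuming log returns and made-up weights, not the paper's calibration.

```python
import numpy as np

# Toy planner: sovereignty index S = sum_i w_i * log(b_i) over pillars,
# subject to a total budget B. Equalizing marginal returns w_i / b_i
# yields the closed form b_i = B * w_i / sum(w). The pillar names,
# weights, and budget are illustrative assumptions.
pillars = ["compute", "data", "talent", "standards"]
w = np.array([0.4, 0.3, 0.2, 0.1])   # assumed pillar weights
B = 100.0                            # total budget (arbitrary units)

b = B * w / w.sum()                  # equal-marginal-return allocation
marginal = w / b                     # d/db_i of w_i * log(b_i)

for name, alloc, m in zip(pillars, b, marginal):
    print(f"{name:10s} budget={alloc:6.1f}  marginal return={m:.4f}")
# The marginal returns print as identical across pillars, which is the
# planner's first-order optimality condition.
```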