MBZUAI researchers Nils Lukas and Toluwani Samuel Aremu will present a paper at ICML 2025 demonstrating the vulnerability of current watermarking techniques for LLMs. Their research shows that adaptive paraphrasers can evade watermark detection with negligible impact on text quality, for less than $10 in GPU compute. The attack fine-tunes a small open-weight model to rewrite sentences until surrogate keys no longer trigger detection. Why it matters: This work highlights critical weaknesses in current AI provenance methods, suggesting the need for more robust watermarking techniques to maintain trust in the authenticity of AI-generated content.
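To make the attack loop concrete, here is a minimal sketch of the rewrite-until-undetected procedure described above. The `paraphrase` and `surrogate_detector_score` functions are hypothetical stand-ins for a fine-tuned open-weight paraphraser and a detector keyed with surrogate keys; they are not the authors' implementation.

```python
# Minimal sketch of the rejection-style attack loop: keep rewriting a sentence
# until the surrogate watermark detector no longer flags it.
import random

def paraphrase(sentence: str) -> str:
    """Hypothetical stand-in for a small fine-tuned paraphrase model."""
    words = sentence.split()
    random.shuffle(words)           # placeholder: a real attack would decode from the paraphraser
    return " ".join(words)

def surrogate_detector_score(sentence: str) -> float:
    """Hypothetical watermark score under surrogate keys (higher = more likely watermarked)."""
    return random.random()          # placeholder score

def evade_watermark(sentence: str, threshold: float = 0.5, max_tries: int = 20) -> str:
    """Rewrite a sentence until the surrogate keys no longer trigger detection."""
    candidate = sentence
    for _ in range(max_tries):
        if surrogate_detector_score(candidate) < threshold:
            return candidate        # detector no longer fires on this rewrite
        candidate = paraphrase(candidate)
    return candidate                # give up after max_tries rewrites

if __name__ == "__main__":
    print(evade_watermark("The quick brown fox jumps over the lazy dog."))
```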
A PhD candidate from the University of Waterloo gave a talk at MBZUAI on threats posed by large machine learning systems. The talk covered data privacy during inference and the misuse of ML systems to generate deepfakes. The speaker also analyzed differential privacy and watermarking as potential solutions. Why it matters: Understanding and mitigating the risks of large ML systems is crucial for responsible AI development and deployment in the region.
Xiuying Chen from KAUST presented her work on improving the trustworthiness of AI-generated text, focusing on accuracy and robustness. Her research analyzes causes of hallucination in language models related to semantic understanding and neglect of input knowledge, and proposes solutions. She also demonstrated the vulnerability of language models to input noise and showed how augmentation techniques can enhance robustness. Why it matters: Improving the reliability of AI-generated text is crucial for its deployment in sensitive domains like healthcare and scientific discovery, where accuracy is paramount.
Researchers at MBZUAI have developed LLM-DetectAIve, a tool to classify the degree of machine involvement in text generation. The system categorizes text into four types: human-written, machine-generated, machine-written then machine-humanized, and human-written then machine-polished. A demo website allows users to test the tool's ability to detect machine involvement. Why it matters: This research addresses the growing need to identify and classify AI-generated content in academic and professional settings, particularly in light of increasing LLM misuse.
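For illustration, a toy classifier over the same four labels might look like the sketch below. LLM-DetectAIve itself relies on fine-tuned transformer models; the TF-IDF pipeline and the placeholder training sentences here are assumptions chosen only to show the labeling interface.

```python
# Illustrative four-class text classifier with the same label set as LLM-DetectAIve.
# Not the actual tool: model choice and training examples are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

LABELS = [
    "human-written",
    "machine-generated",
    "machine-written, machine-humanized",
    "human-written, machine-polished",
]

# Toy training data: one placeholder sentence per class.
texts = [
    "I scribbled this note on the train this morning, typos and all.",
    "As an AI language model, I can provide a comprehensive overview of the topic.",
    "Honestly, the model wrote this, but I tweaked it so it sounds like me.",
    "My rough draft, cleaned up by an assistant for grammar and flow.",
]
labels = LABELS

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

sample = "This essay was drafted by a chatbot and lightly edited afterwards."
print(clf.predict([sample])[0])   # prints the predicted involvement category
```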
This paper introduces Mixup-Attack, a novel black-box method for generating universal adversarial examples for remote sensing data. The method identifies common vulnerabilities in neural networks by attacking features in the shallow layer of a surrogate model. The authors also present UAE-RS, the first dataset of black-box adversarial samples in remote sensing, to benchmark the robustness of deep learning models against adversarial attacks.
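A rough, hedged sketch of a feature-level attack in this spirit is shown below: it perturbs an input so that the shallow-layer features of a surrogate network drift toward a mixup of features from other images. The surrogate choice (ResNet-18), the layer treated as "shallow", the step sizes, and the per-image (rather than universal) formulation are all assumptions, not the authors' exact method.

```python
# Sketch of a shallow-feature mixup attack against a surrogate model (assumptions noted above).
import torch
import torchvision.models as models

surrogate = models.resnet18(weights=None).eval()

def shallow_features(x):
    """Features after the first residual stage, used here as the 'shallow layer'."""
    x = surrogate.conv1(x); x = surrogate.bn1(x)
    x = surrogate.relu(x); x = surrogate.maxpool(x)
    return surrogate.layer1(x)

def mixup_attack(x, others, eps=8 / 255, steps=10, lam=0.5):
    """Push x's shallow features toward a mixup of other images' features, within an L-inf budget."""
    with torch.no_grad():
        target = lam * shallow_features(others).mean(0, keepdim=True) \
                 + (1 - lam) * shallow_features(x)
    adv = x.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(shallow_features(adv), target)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - (eps / steps) * grad.sign()    # descend toward the mixed feature target
            adv = x + (adv - x).clamp(-eps, eps)       # keep the perturbation within the budget
    return adv.detach()

# Example with random tensors standing in for remote-sensing images.
x = torch.rand(1, 3, 224, 224)
others = torch.rand(4, 3, 224, 224)
adv = mixup_attack(x, others)
```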