MBZUAI researchers, in collaboration with Monash University, have introduced ArEnAV, a new deepfake-detection dataset built around Arabic-English code-switching. The dataset comprises 765 hours of manipulated YouTube videos featuring intra-utterance code-switching and dialectal variation. Experiments showed that code-switching significantly degrades the performance of existing deepfake detectors. Why it matters: This work addresses a critical gap in AI's ability to handle linguistic diversity, particularly in regions where code-switching is prevalent, enhancing the reliability of deepfake detection in real-world scenarios.
A talk explores multimodal, user-behavior-inspired approaches to detecting deepfakes, drawing on user studies of multicultural deepfakes and a benchmark presented at ACM Multimedia 2024. The research leverages insights into how different audiences perceive manipulated media. Abhinav Dhall of Flinders University will present findings and future directions in deepfake analysis at MBZUAI. Why it matters: Addressing deepfakes is crucial for maintaining trust in digital content, especially as AI-driven manipulation tools grow more sophisticated and accessible.
MBZUAI researchers developed a new AI-generated image detection method called 'consistency verification' (ConV). Instead of training on labeled real and fake images, ConV identifies structural patterns unique to real photos by modeling the data manifold on which they lie. The system applies a transformation to each image and uses DINOv2 embeddings to measure how far the representation shifts between the original and the transformed version, classifying an image as real or generated based on its proximity to that manifold. Why it matters: This approach offers a more robust way to detect AI-generated images without needing training data from every image generator, addressing a key limitation in the rapidly evolving landscape of AI image synthesis.
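The transform-and-compare idea can be sketched in a few lines. This is an illustrative toy, not the authors' pipeline: the encoder below is a hypothetical stand-in for DINOv2, the perturbation and threshold are made-up placeholders, and only the overall logic (embed the image and its transform, score their consistency, threshold the score) reflects the description above.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a pretrained encoder such as DINOv2 (hypothetical).
    Here: a fixed random projection of the flattened image, L2-normalized."""
    rng = np.random.default_rng(0)                     # fixed seed -> deterministic "encoder"
    proj = rng.standard_normal((64, image.size))
    v = proj @ image.ravel()
    return v / np.linalg.norm(v)

def transform(image: np.ndarray) -> np.ndarray:
    """Mild perturbation; stand-in for whatever modification ConV applies."""
    return np.clip(image + 0.05 * np.sin(10.0 * image), 0.0, 1.0)

def consistency_score(image: np.ndarray) -> float:
    """Cosine similarity between embeddings of the image and its transform.
    Intuition from the summary: for real photos on the data manifold, a mild
    transform shifts the representation only slightly; generated images drift more."""
    a, b = embed(image), embed(transform(image))
    return float(a @ b)

def is_real(image: np.ndarray, threshold: float = 0.9) -> bool:
    # Threshold is an illustrative value, not taken from the paper.
    return consistency_score(image) >= threshold
```

In practice the encoder would be a frozen DINOv2 model and the decision rule would be calibrated on real photos only, which is what lets the method sidestep generator-specific training data.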