MBZUAI researchers, in collaboration with Monash University, have introduced ArEnAV, a new dataset for deepfake detection featuring Arabic-English code-switching. The dataset comprises 765 hours of manipulated YouTube videos, incorporating intra-utterance code-switching and dialect variations. Experiments showed that code-switching significantly reduces the performance of existing deepfake detectors. Why it matters: This work addresses a critical gap in AI's ability to handle linguistic diversity, particularly in regions where code-switching is prevalent, enhancing the reliability of deepfake detection in real-world scenarios.
A talk at MBZUAI explores multimodal, user-behavior-inspired approaches to detecting deepfakes, drawing on user studies of multicultural deepfakes and the ACM Multimedia 2024 benchmark. The research leverages insights into how different audiences perceive manipulated media. Abhinav Dhall of Flinders University will present the findings and discuss future directions in deepfake analysis. Why it matters: Addressing deepfakes is crucial for maintaining trust in digital content, especially as AI-driven manipulation tools grow more sophisticated and accessible.
MBZUAI's Metaverse Lab is developing AI algorithms for photorealistic virtual humans and dynamic environments. Hao Li, Director of the lab, envisions using the metaverse for immersive learning experiences related to history and culture. He is also working on tools to prevent deepfakes and other cyberthreats. Why it matters: This research at MBZUAI aims to advance AI and immersive technologies for education and address potential risks in the metaverse.
The UAE government has issued a warning to the public about the dangers of misleading AI-generated videos, particularly those used to spread rumors and false information. Authorities emphasized the importance of verifying the credibility of video content before sharing it on social media, and the warning highlights potential legal consequences for individuals involved in creating or disseminating such content. Why it matters: This proactive stance reflects the UAE's growing concern about the misuse of AI-driven technologies and its commitment to combating disinformation.
A PhD candidate from the University of Waterloo presented a talk at MBZUAI on threats posed by large machine learning systems. The talk covered data privacy during inference and the misuse of ML systems to generate deepfakes, and the speaker analyzed differential privacy and watermarking as potential mitigations. Why it matters: Understanding and mitigating the risks of large ML systems is crucial for responsible AI development and deployment in the region.
Ekaterina Radionova from Smarter AI (formerly Samsung AI Center) presented an approach to generating lifelike real-time avatars. The work focuses on producing high-quality video with authentic facial features fast enough to support real-time (online) generation. Radionova holds a master's degree in Data Science from Skoltech and a bachelor's degree in Applied Mathematics from the Moscow Institute of Physics and Technology. Why it matters: Achieving realistic real-time avatars is critical for applications in online communication, entertainment, and virtual reality within the region.