GCC AI Research

Results for "Refik Anadol"

Mohamed bin Zayed University of Artificial Intelligence launches The Academy and world’s first AI x Arts Fellowship

MBZUAI ·

MBZUAI has launched The Academy, a global platform for AI knowledge exchange, along with the world’s first AI x Arts Fellowship. The inaugural cohort of eight fellows includes artists and innovators like Refik Anadol and Amrita Sethi, who will participate in residency programs in Abu Dhabi in 2026. The Academy also encompasses the MBZUAI Executive Program (MEP), which empowers UAE leaders to leverage AI. Why it matters: This initiative signals MBZUAI's commitment to fostering interdisciplinary collaboration between AI and the arts, positioning Abu Dhabi as a hub for innovative AI applications in culture and heritage.

Expanding artistic frontiers in artificial intelligence

KAUST ·

KAUST computer scientist Mohamed Elhoseiny and his VISION CAIR team developed Creative Walk Adversarial Networks (CWAN) for novel art generation. CWAN learns from existing art styles and then deviates from them using a "random walk deviation" method. In human evaluations, judges preferred CWAN-generated art over work produced by baselines such as StyleGAN2. Why it matters: The research demonstrates AI's potential as a valuable tool for artists, enabling the creation of unique and meaningful art; the work also explores more emotionally expressive language for image captioning.
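The "random walk deviation" idea can be illustrated with a toy sketch. This is not the published CWAN method (which is adversarial and operates on learned generative models); it is only a hedged, minimal analogy in a hypothetical 2-D style-embedding space: a walk that starts near known styles and is repeatedly pushed away from the nearest one, ending up near the learned manifold but not on any existing style.

```python
# Toy analogy of "random walk deviation" in a latent style space.
# Assumptions (not from the paper): 2-D style embeddings, a fixed step
# size, and Gaussian noise. CWAN itself is an adversarial network; this
# only illustrates the deviate-from-known-styles intuition.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings of three known art styles.
styles = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])

z = styles.mean(axis=0)  # start at the centroid of known styles
for _ in range(50):
    # Find the nearest known style and step away from it, plus noise.
    nearest = styles[np.argmin(np.linalg.norm(styles - z, axis=1))]
    z = z + 0.1 * (z - nearest) + rng.normal(scale=0.05, size=2)

# z is now a "novel" style code: informed by the known styles,
# but not coincident with any single one of them.
print(z, np.linalg.norm(styles - z, axis=1).min())
```

In the actual system, a discriminator-style signal would replace the hand-coded repulsion term, balancing novelty against staying close enough to learned art styles to remain meaningful.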

A unified theory of all things visual

MBZUAI ·

MBZUAI Professor Fahad Khan is working on a unified theory of machine visual intelligence. His goal is to enable AI systems to better understand and function in complex, chaotic visual environments. The aim is to improve real-world applications like smart cities, personalized healthcare, and autonomous vehicles. Why it matters: This research could significantly advance AI's ability to perceive and interact with the real world, especially in challenging environments common in the developing world.

Cross-modal understanding and generation of multimodal content

MBZUAI ·

Nicu Sebe from the University of Trento presented recent work on video generation, focusing on animating objects in a source image using external information like labels, driving videos, or text. He introduced a Learnable Game Engine (LGE) trained from monocular annotated videos, which maintains states of scenes, objects, and agents to render controllable viewpoints. Why it matters: This talk highlights advancements in cross-modal AI, potentially enabling new applications in gaming, simulation, and content creation within the region.

Is Human Motion a Language without Words?

MBZUAI ·

This article previews a talk by Gül Varol from École des Ponts ParisTech on bridging natural language and 3D human motions. The talk will cover text-to-motion synthesis using generative models and text-to-motion retrieval models based on the ACTOR, TEMOS, TMR, TEACH, and SINC papers. Varol's research interests include video representation learning, human motion synthesis, and sign languages. Why it matters: Research in this area could enable more intuitive human-computer interaction and new applications in areas like virtual reality and robotics.

AI and Digital Science Research Center’s Prof. George Alexandropoulos to Address Prestigious RISTA Cutting-Edge Forum

TII ·

Prof. George Alexandropoulos from the AI and Digital Science Research Center (AIDRC) at TII presented a keynote at the RIS Technical Alliance (RISTA) forum. The presentation focused on hybrid reconfigurable intelligent surfaces for wireless communication and sensing applications. He discussed the role of RIS technology in enabling smart wireless environments within 5G and 6G networks. Why it matters: This highlights the UAE's contribution to cutting-edge research in next-generation wireless communication technologies and its potential impact on future network architectures.

Multimodality for story-level understanding and generation of visual data

MBZUAI ·

Vicky Kalogeiton from École Polytechnique discussed the importance of multimodality for story-level recognition and generation using video, audio, text, masks, and clinical data. She presented on multimodal video understanding using FunnyNet-W and the Short Film Dataset, and showed examples of visual generation from text and other modalities (ET, CAD, DynamicGuidance). Why it matters: Multimodal AI research is growing globally, and this talk highlights how combining different data types can enhance both understanding and generation, with potential applications relevant to the Middle East.

AI and Biomedicine: the Hospital of the Future

MBZUAI ·

Pierre Baldi from UC Irvine presented applications of AI to biomedicine, covering molecular-level analysis of circadian rhythms, real-time polyp detection in colonoscopy videos, and prediction of post-operative adverse outcomes. He discussed how such systems could be integrated into the AI-driven hospital of the future. The presentation was likely part of a panel discussion hosted by MBZUAI in collaboration with the Manara Center for Coexistence and Dialogue. Why it matters: This highlights the growing interest in AI applications within the healthcare sector in the UAE, particularly through institutions like MBZUAI.