Michael Yu Wang, Chair Professor and Founding Dean of the School of Engineering at Great Bay University, argues for combining "good old-fashioned engineering" (GOFE) with learning-based approaches such as LLMs for robot skill acquisition, particularly in manipulation. He proposes a modular framework that integrates engineering principles with learning, drawing inspiration from human hand-eye coordination and tactile perception. Wang emphasizes the need to address the engineering characteristics of robot tactile sensors, such as spatial and temporal resolution, to achieve human-like robot manipulation skills. Why it matters: This perspective highlights the value of hybrid approaches that combine traditional engineering with modern AI for advancing robotics, especially in the complex manipulation tasks relevant to industries in the GCC region.
Song Chaoyang from the Southern University of Science and Technology (SUSTech) presented research on Vision-Based Tactile Sensing (VBTS) for robot learning, combining soft robotic design with learning algorithms to achieve state-of-the-art performance in tactile perception. The VBTS solution demonstrates robustness over 1 million test cycles and produces multi-modal outputs from a single vision-based input, enabling applications such as amphibious tactile grasping and industrial welding. The talk also highlighted the DeepClaw system for capturing human demonstration actions, which aims to provide a universal interaction interface. Why it matters: This research advances embodied intelligence by improving robot dexterity and adaptability through enhanced tactile sensing, which is crucial for complex manipulation tasks in sectors such as manufacturing and healthcare within the region.
Ivan Laptev from INRIA Paris presented a talk at MBZUAI on embodied multi-modal visual understanding, covering advances in video understanding tasks such as question answering and captioning. The talk highlighted recent work on vision-language navigation and manipulation. He argued that detailed understanding of the physical world through vision is still in its early stages, and discussed open research directions in robotics and video generation. Why it matters: The discussion of robotics applications and future research directions in embodied AI could influence the direction of AI research and development in the UAE, particularly at MBZUAI.
MBZUAI Professor Ian Reid discusses his career in embodied AI, from early work on active vision at Oxford to his current research. He highlights three key developments: the use of cameras as geometric sensors, visual SLAM, and advances in robot navigation. Reid distinguishes embodied AI from systems like ChatGPT, emphasizing its need to understand and interact with the physical world. Why it matters: The insights from a leading expert underscore the importance of embodied AI as the next frontier in intelligent systems and robotics in the region.
Dr. Hao Dong from Peking University presented research on addressing the challenge of limited large-scale training data in embodied AI, particularly for manipulation, task planning, and navigation. The presentation covered learning in simulation and the use of large models. Dr. Dong is a chief scientist of China's National Key Research and Development Program and an area chair/associate editor for NeurIPS, CVPR, AAAI, and ICRA. Why it matters: Overcoming data scarcity is crucial for advancing embodied AI research and enabling more sophisticated robotic applications in the region.
Professor Hava Siegelmann, a computer science expert, is researching lifelong learning AI, drawing inspiration from the brain's capacity for abstraction and generalization. The research aims to enable intelligent systems in satellites, robots, and medical devices to adapt and improve their expertise in real time, even with limited communication and power. The goal is to develop AI systems suited to far-edge computing that can learn at runtime and handle unanticipated situations. Why it matters: This research could lead to more resilient and adaptable AI systems for critical applications in remote and resource-constrained environments, with potential benefits for various sectors in the Middle East.
Lorenzo Jamone from Queen Mary University of London presented on cognitive robotics, focusing on tactile exploration and manipulation by robots. The talk covered combining biology, engineering, and AI for advanced robotic systems. Jamone directs the CRISP group and has over 100 publications in cognitive robotics. Why it matters: This highlights the ongoing research into more sophisticated robotic systems that can interact with complex environments, an area crucial for future applications in manufacturing and human-robot collaboration in the GCC.
Paul Liang from CMU presented on machine learning foundations for multisensory AI, discussing a theoretical framework for modality interactions. The talk covered cross-modal attention and multimodal transformer architectures, and applications in mental health, pathology, and robotics. Liang's research aims to enable AI systems to integrate and learn from diverse real-world sensory modalities. Why it matters: This highlights the growing importance of multimodal AI research and its potential for advancements across various sectors in the region, including healthcare and robotics.
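Cross-modal attention, mentioned above, is the core mechanism that lets one modality query another inside a multimodal transformer. As a rough illustration (not drawn from the talk itself; the modalities, shapes, and names here are assumptions), a minimal NumPy sketch of scaled dot-product cross-attention looks like this:

```python
import numpy as np

def cross_modal_attention(queries, keys, values):
    """Queries from one modality attend to keys/values from another.

    queries: (n_q, d); keys, values: (n_kv, d).
    Returns (n_q, d) features for the query modality, each row a
    softmax-weighted combination of the other modality's values.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)           # (n_q, n_kv) similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the other modality
    return weights @ values                          # fused features

# Hypothetical example: 3 text tokens attending to 5 image patches, dim 8.
rng = np.random.default_rng(0)
text = rng.normal(size=(3, 8))
image = rng.normal(size=(5, 8))
fused = cross_modal_attention(text, image, image)
```

In a full multimodal transformer this block would be repeated with learned projections and multiple heads; the sketch only shows how information flows from one sensory stream into another, which is the interaction the theoretical framework in the talk reasons about.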