Tetsunari Inamura's talk explores using virtual reality (VR) to collect human-robot interaction (HRI) data and tailor assistive robotic functionalities to individual users. He discusses symbol emergence via multimodal interaction, interactive behavior generation through symbol manipulation, and VR for data collection. The talk emphasizes long-term enhancement of human capabilities while avoiding over-reliance on technology. Why it matters: This research promotes independence and growth in human-robot interactions, potentially revolutionizing assistive technologies in the region.
Researchers from MBZUAI, Khalifa University, and Sorbonne University Abu Dhabi developed H-SURF, a system of underwater robotic fish that can swim, communicate, and gather information without human guidance. The robotic fish follow a bioinspired design, using streamlined bodies, fins, and propellers to produce fluid movement. They communicate with each other using light instead of sound to reduce noise. Why it matters: This award-winning system represents a significant advancement in autonomous underwater robotics, offering a less intrusive way to monitor marine environments and gather data, with potential applications in marine biology and environmental research.
Mingyu Ding from UC Berkeley presented research on endowing robots with human-like commonsense and physical reasoning capabilities. The talk covered multimodal commonsense reasoning integrating vision, world models, and language-based task planners. It also discussed physical reasoning approaches for robots to infer dynamics and physical properties of objects. Why it matters: Enhancing robots with these capabilities can improve their ability to generalize across everyday tasks, leading to greater social benefits and impact.
Gregory Chirikjian presented an overview talk on applying probability, harmonic analysis, and geometry to robotics, emphasizing the need for robots to function beyond traditional industrial programming. He discussed a new approach in which robots define affordances of objects, using simulation to 'imagine' object use and enabling reasoning about novel objects. Probabilistic methods on Lie groups, initially developed for mobile robot state estimation, are now adapted for one-shot learning of affordances, with plans to integrate large language models. Why it matters: This research direction aims to enhance robot intelligence and adaptability, crucial for service robots in dynamic environments and aligning with broader goals of advanced AI integration in robotics.
Sami Haddadin from the Technical University of Munich (TUM) discusses a shift in robotics towards machines that autonomously develop their own blueprints and controls. He highlights advancements driven by human-centered design, soft control, and model-based machine learning, enabling human-robot collaboration in manufacturing and healthcare. Haddadin also presents progress towards autonomous machine design and modular control architectures for complex manipulation tasks. Why it matters: This research has implications for advancing robotics and AI in the GCC region, especially in manufacturing and healthcare, by enabling safer and more efficient human-robot collaboration.
MBZUAI held its inaugural Human-Computer Interaction (HCI) Symposium in Abu Dhabi, focusing on the human and societal impacts of AI. The event, led by Professor Elizabeth Churchill, featured workshops and keynotes from figures like Google's Matias Duarte. Participants collaborated to address critical design aspects of human-AI interaction and co-author a book. Why it matters: The symposium highlights the increasing importance of human-centered design in AI development, ensuring AI tools are useful, desirable, and beneficial for society in the GCC region and beyond.
Lorenzo Jamone from Queen Mary University of London presented on cognitive robotics, focusing on tactile exploration and manipulation by robots. The talk covered combining biology, engineering, and AI for advanced robotic systems. Jamone directs the CRISP group and has over 100 publications in cognitive robotics. Why it matters: This highlights the ongoing research into more sophisticated robotic systems that can interact with complex environments, an area crucial for future applications in manufacturing and human-robot collaboration in the GCC.
MBZUAI Professor Sami Haddadin and his team developed a new framework called Tactile Skills to teach robots manual skills through touch and trial-and-error learning. The framework aims to close the gap between robots' ability to learn basic physical tasks and AI's advancements in language and image generation. The research, published in Nature Machine Intelligence, focuses on enabling robots to perform manipulation skills at industrial levels with low energy and compute demands. Why it matters: This research could lead to robots capable of performing household maintenance, industrial tasks, and even assisting in medical or rehabilitation settings, potentially solving labor shortages in various sectors in the region and beyond.