Sami Haddadin from the Technical University of Munich (TUM) discussed a shift in robotics towards machines that autonomously develop their own blueprints and controllers. He highlighted advances driven by human-centered design, soft control, and model-based machine learning that enable human-robot collaboration in manufacturing and healthcare, and presented progress towards autonomous machine design and modular control architectures for complex manipulation tasks. Why it matters: This research has implications for advancing robotics and AI in the GCC region, especially in manufacturing and healthcare, by enabling safer and more efficient human-robot collaboration.
Song Chaoyang from the Southern University of Science and Technology (SUSTech) presented research on Vision-Based Tactile Sensing (VBTS) for robot learning, combining soft robotic design with learning algorithms to achieve state-of-the-art performance in tactile perception. Their VBTS solution demonstrated robustness over up to 1 million test cycles and produces multi-modal outputs from a single vision-based input, enabling applications such as amphibious tactile grasping and industrial welding. The talk also highlighted the DeepClaw system for capturing human demonstration actions, aiming towards a universal interaction interface. Why it matters: This research advances embodied intelligence by improving robot dexterity and adaptability through enhanced tactile sensing, which is crucial for complex manipulation tasks in various sectors such as manufacturing and healthcare within the region.
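The talk's VBTS hardware and models are not reproduced here, but the core idea of vision-based tactile sensing — inferring contact from how markers embedded in a soft gel move in a camera image — can be sketched in a few lines. Everything below (function name, stiffness constant, marker layout) is a hypothetical illustration, not the SUSTech implementation:

```python
import numpy as np

def contact_from_markers(rest, pressed, stiffness=0.5):
    """Estimate a planar contact-force proxy from tracked marker positions.

    rest, pressed: (N, 2) arrays of marker pixel coordinates before/after contact.
    stiffness: hypothetical gel-stiffness constant (pixel displacement -> force units).
    """
    disp = pressed - rest                        # per-marker displacement field
    force = stiffness * disp.sum(axis=0)         # net shear-force proxy
    # The most-displaced marker crudely localizes the contact center.
    center = rest[np.linalg.norm(disp, axis=1).argmax()]
    return force, center

# Toy example: 4 markers, gel sheared uniformly 2 pixels to the right.
rest = np.array([[10., 10.], [20., 10.], [10., 20.], [20., 20.]])
pressed = rest + np.array([2., 0.])
force, center = contact_from_markers(rest, pressed)
```

The appeal of this design, as the talk emphasized, is that one cheap camera stream yields several tactile modalities at once (normal/shear force, contact location, slip), since they are all readable from the same displacement field.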
Lorenzo Jamone from Queen Mary University of London presented work on cognitive robotics, focusing on tactile exploration and manipulation by robots. The talk covered combining biology, engineering, and AI to build advanced robotic systems. Jamone directs the CRISP group and has over 100 publications in cognitive robotics. Why it matters: This highlights the ongoing research into more sophisticated robotic systems that can interact with complex environments, an area crucial for future applications in manufacturing and human-robot collaboration in the GCC.
MBZUAI researchers have developed "Tactile Skills," a new embodied AI framework enabling robots to rapidly learn complex tactile tasks. The framework combines expert process knowledge with reusable tactile control and adaptation components, reducing reliance on extensive datasets. Tested on 28 industrial tasks, the robots achieved nearly 100% success, demonstrating adaptability to changing conditions. Why it matters: This breakthrough offers a practical and scalable approach to robotic automation, potentially transforming robots into adaptable assistants across diverse industries in the GCC.
MBZUAI Professor Sami Haddadin and his team developed a new framework called Tactile Skills to teach robots manual skills through touch-based trial and error. The framework aims to close the gap between robots' ability to learn basic physical tasks and AI's advances in language and image generation. The research, published in Nature Machine Intelligence, focuses on enabling robots to perform manipulation skills at industrial levels with low energy and compute demands. Why it matters: This research could lead to robots capable of performing household maintenance, industrial tasks, and even assisting in medical or rehabilitation settings, potentially easing labor shortages in various sectors in the region and beyond.
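The published framework's code is not reproduced here, but the kind of reusable tactile-control primitive such a framework composes can be illustrated with a minimal Cartesian impedance law, F = K(x_des − x) − D·ẋ, sequenced into phases. All class and variable names below, and the gains, are hypothetical sketch choices, not the paper's actual components:

```python
import numpy as np

def impedance_step(x, x_dot, x_des, K, D):
    """One step of a Cartesian impedance law: F = K (x_des - x) - D x_dot.

    Compliance like this lets a robot trade position accuracy for bounded,
    gentle contact forces -- the behavior tactile skills are built on.
    """
    return K @ (x_des - x) - D @ x_dot

class TactileSkill:
    """Hypothetical skill container: a sequence of (target, stiffness, damping)
    phases, e.g. approach softly, then press with higher stiffness."""
    def __init__(self, phases):
        self.phases = phases  # list of (x_des, K, D) tuples

    def command(self, phase_idx, x, x_dot):
        x_des, K, D = self.phases[phase_idx]
        return impedance_step(x, x_dot, x_des, K, D)

# Toy 1-D insertion skill: soft approach phase, then a stiffer press phase.
soft = (np.array([0.10]), np.eye(1) * 50.0, np.eye(1) * 5.0)
firm = (np.array([0.12]), np.eye(1) * 400.0, np.eye(1) * 20.0)
skill = TactileSkill([soft, firm])
f = skill.command(0, x=np.array([0.0]), x_dot=np.array([0.0]))
```

Parameterizing skills as a small set of phases and gains, rather than learning a policy from raw data, is one plausible reading of how expert process knowledge keeps the data and compute requirements low.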
KAUST, Stanford University, and Meka Robotics are collaborating on a new underwater robotic platform called the Red Sea Robotics Exploratorium. The project aims to create a robotic avatar diver that can explore deep-sea coral reefs with greater dexterity than existing underwater vehicles. The robot will address the limitations of current ROVs, which are large and difficult to operate in confined spaces. Why it matters: This technology could significantly advance marine research in the Red Sea and other challenging underwater environments, enabling more detailed exploration and sample collection of unique deep-sea ecosystems.
Krishna Murthy, a postdoc at MIT, researches computational world models to enable robots to understand and operate effectively in the physical world. His work focuses on differentiable computing approaches for spatial perception and interfaces large image, language, and audio models with 3D scenes. Murthy envisions structured world models working with scaling-based approaches to create versatile robot perception and planning algorithms. Why it matters: This research could significantly advance robotics by enabling more sophisticated perception, reasoning, and action capabilities in embodied agents.
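Differentiable computing for spatial perception means writing perception objectives so that scene parameters can be recovered by gradient descent. As a toy stand-in for that idea (not Murthy's actual systems), the sketch below recovers the translation aligning two point sets by descending the analytic gradient of a mean-squared alignment error; the function name and data are illustrative:

```python
import numpy as np

def fit_translation(src, dst, lr=0.1, steps=200):
    """Recover the translation aligning src to dst by gradient descent on the
    mean squared alignment error -- a toy instance of optimizing scene
    parameters through a differentiable objective."""
    t = np.zeros(src.shape[1])
    for _ in range(steps):
        residual = (src + t) - dst           # (N, d) alignment error
        grad = 2.0 * residual.mean(axis=0)   # analytic gradient of MSE w.r.t. t
        t -= lr * grad
    return t

src = np.array([[0., 0.], [1., 0.], [0., 1.]])
dst = src + np.array([0.5, -0.25])           # ground-truth shift to recover
t = fit_translation(src, dst)
```

Real differentiable-perception systems optimize far richer parameters (poses, geometry, appearance) through learned or rendered objectives, but the mechanism — gradients flowing from an error back to scene parameters — is the same.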
KAUST Ph.D. candidate Ahmed Alfadhel won the IEEE best research paper award for his work on artificial skin. The design uses a surface of flexible magnetic nano-composite cilia paired with a magnetic field sensing element; embedding both the cilia and the sensor in a polymeric surface gives the device its unprecedented flexibility. Why it matters: This research enables the development of cheaper, more versatile tactile sensors for health monitoring, robotics, and prosthetics, potentially advancing personalized healthcare and human-machine interfaces in the region.
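In a sensor like this, applied force bends the cilia, which shifts the magnetic field seen by the sensing element; turning raw field readings into force estimates is then a calibration problem. The sketch below shows a generic least-squares calibration with made-up data points — the numbers and the linear model are illustrative assumptions, not measurements from the actual device:

```python
import numpy as np

# Hypothetical calibration data: applied normal force (mN) vs. measured
# change in magnetic field (arbitrary units) as the cilia bend.
force = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
field = np.array([0.0, 1.1, 2.0, 3.1, 3.9])

# Least-squares linear calibration: force ~ a * field + b
a, b = np.polyfit(field, force, deg=1)

def read_force(field_reading):
    """Convert a raw field measurement into an estimated contact force."""
    return a * field_reading + b

estimate = read_force(2.0)  # force estimate for a mid-range field reading
```

A linear fit is the simplest choice; a real device would likely need a nonlinear curve and temperature compensation, but the calibration workflow is the same.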