MBZUAI researchers have developed a new action tokenization method called LipVQ-VAE to improve in-context robot learning. LipVQ-VAE combines a vector-quantized variational autoencoder (VQ-VAE) with a Lipschitz constraint, so that similar actions map to similar tokens and the resulting robot motions are smoother than those produced by conventional tokenizers. The technique was tested on both simulated and real robots, where it improved imitation-learning performance. Why it matters: This research advances robot learning by enabling more fluid and successful robot actions through better action representation, drawing inspiration from tokenization techniques in NLP.
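To make the idea concrete, here is a minimal sketch of a Lipschitz-constrained VQ action tokenizer in PyTorch. It is not the authors' implementation: the layer sizes, the use of spectral normalization to bound the encoder's Lipschitz constant, and the standard VQ-VAE loss terms are all illustrative assumptions.

```python
# Hypothetical sketch of a Lipschitz-constrained VQ action tokenizer.
# All names and hyperparameters are assumptions, not the LipVQ-VAE code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LipschitzVQTokenizer(nn.Module):
    def __init__(self, action_dim=7, latent_dim=32, codebook_size=256):
        super().__init__()
        # Spectral norm caps each linear layer's Lipschitz constant near 1,
        # so nearby actions encode to nearby latents (and similar tokens).
        self.encoder = nn.Sequential(
            nn.utils.parametrizations.spectral_norm(nn.Linear(action_dim, 64)),
            nn.ReLU(),
            nn.utils.parametrizations.spectral_norm(nn.Linear(64, latent_dim)),
        )
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, action_dim)
        )

    def forward(self, action):
        z = self.encoder(action)                      # (B, latent_dim)
        # The discrete action token is the nearest codebook entry.
        dists = torch.cdist(z, self.codebook.weight)  # (B, codebook_size)
        tokens = dists.argmin(dim=-1)                 # (B,)
        z_q = self.codebook(tokens)
        # Standard VQ-VAE terms: pull codes toward encoder outputs and
        # commit the encoder to its chosen codes.
        codebook_loss = F.mse_loss(z_q, z.detach())
        commit_loss = F.mse_loss(z, z_q.detach())
        z_st = z + (z_q - z).detach()                 # straight-through estimator
        recon = self.decoder(z_st)
        loss = F.mse_loss(recon, action) + codebook_loss + 0.25 * commit_loss
        return tokens, recon, loss
```

Because spectral normalization bounds how much the latent can change per unit change in the input action, consecutive, similar actions tend to quantize to nearby codes, which is the smoothness property the method targets.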
Krishna Murthy, a postdoc at MIT, researches computational world models that enable robots to understand and operate effectively in the physical world. His work focuses on differentiable computing approaches to spatial perception and on interfacing large image, language, and audio models with 3D scenes. Murthy envisions structured world models working in tandem with scaling-based approaches to yield versatile robot perception and planning algorithms. Why it matters: This research could significantly advance robotics by enabling more sophisticated perception, reasoning, and action in embodied agents.
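The core of the differentiable-computing idea is that perception quantities such as robot pose become variables optimized by gradient descent through a differentiable observation model. The toy below is an assumption-laden illustration (translation-only pose, noise-free landmark measurements), not any of Murthy's actual systems.

```python
# Toy illustration of differentiable spatial perception: pose estimation
# as gradient descent through a differentiable observation model.
# All values here are illustrative assumptions.
import torch

torch.manual_seed(0)
landmarks = torch.randn(50, 3)            # known 3D landmarks in the world frame
true_translation = torch.tensor([0.5, -0.2, 1.0])
observed = landmarks + true_translation   # "sensor" measurements (noise-free toy)

translation = torch.zeros(3, requires_grad=True)  # unknown pose (translation only)
optimizer = torch.optim.Adam([translation], lr=0.05)

for step in range(200):
    predicted = landmarks + translation
    loss = torch.mean((predicted - observed) ** 2)  # differentiable alignment error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(translation.detach())  # converges toward [0.5, -0.2, 1.0]
```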
Song Chaoyang from the Southern University of Science and Technology (SUSTech) presented research on Vision-Based Tactile Sensing (VBTS) for robot learning, combining soft robotic design with learning algorithms to achieve state-of-the-art tactile perception. Their VBTS solution remains robust over 1 million test cycles and produces multi-modal outputs from a single vision-based input, enabling applications such as amphibious tactile grasping and industrial welding. The talk also highlighted the DeepClaw system for capturing human demonstration actions, which aims to become a universal interaction interface. Why it matters: This research advances embodied intelligence by improving robot dexterity and adaptability through enhanced tactile sensing, which is crucial for complex manipulation tasks in sectors such as manufacturing and healthcare across the region.
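A common way to obtain multi-modal outputs from one tactile camera image is a shared backbone with task-specific heads. The sketch below illustrates that general pattern under stated assumptions (a small CNN, force and contact-location heads); it is not SUSTech's architecture.

```python
# Hedged sketch of the multi-head pattern behind vision-based tactile sensing:
# one internal camera image of the deforming skin feeds several output heads.
# The architecture and head choices are illustrative assumptions.
import torch
import torch.nn as nn

class TactileNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(          # shared features from the tactile image
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.force_head = nn.Linear(32, 3)      # 3-axis contact force estimate
        self.contact_head = nn.Linear(32, 2)    # (x, y) contact location on the skin

    def forward(self, tactile_image):
        feats = self.backbone(tactile_image)
        return self.force_head(feats), self.contact_head(feats)

net = TactileNet()
force, location = net(torch.randn(1, 3, 128, 128))  # one image, multiple modalities
```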
MBZUAI Professor Sami Haddadin and his team developed a new framework, Tactile Skills, for teaching robots manual skills through touch and trial and error. The framework targets the gap between AI's rapid advances in language and image generation and robots' still-limited ability to learn basic physical tasks. The research, published in Nature Machine Intelligence, focuses on enabling robots to perform manipulation skills at industrial performance levels with low energy and compute demands. Why it matters: This research could lead to robots capable of performing household maintenance, industrial tasks, and even assisting in medical or rehabilitation settings, potentially easing labor shortages in many sectors in the region and beyond.
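To illustrate the touch-plus-trial-and-error loop in the abstract, here is a toy example in which a felt contact force corrects the next insertion attempt. Everything here (the environment, the force model, the gain) is a made-up assumption for illustration; it is not the Tactile Skills framework itself.

```python
# Toy trial-and-error loop in the spirit of learning a contact skill from
# force feedback. All functions and parameters are hypothetical.
import random

def attempt_insertion(offset, tolerance=0.02):
    """Hypothetical skill primitive: succeeds when lateral offset is small."""
    return abs(offset) < tolerance

def sense_contact_force(offset):
    """Hypothetical force reading: larger misalignment, larger lateral force."""
    return 10.0 * offset

offset = 0.3   # initial misalignment (arbitrary units)
gain = 0.02    # compliant correction gain

for trial in range(50):
    if attempt_insertion(offset):
        print(f"success on trial {trial}")
        break
    force = sense_contact_force(offset) + random.gauss(0, 0.1)  # noisy touch signal
    offset -= gain * force  # use the felt force to correct the next attempt
```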
Mingyu Ding from UC Berkeley presented research on endowing robots with human-like commonsense and physical reasoning capabilities. The talk covered multimodal commonsense reasoning that integrates vision, world models, and language-based task planners, along with physical reasoning approaches that let robots infer the dynamics and physical properties of objects. Why it matters: Equipping robots with these capabilities improves their ability to generalize across everyday tasks, yielding greater social benefit and impact.
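As a concrete (and deliberately simple) instance of physical reasoning, a robot can recover a hidden property such as sliding friction from an observed trajectory, since v(t) = v0 − μgt for an object sliding to rest on a flat surface. The synthetic values below are assumptions for illustration only.

```python
# Minimal example of physical reasoning: inferring a hidden physical property
# (sliding friction) from an observed velocity trajectory. Data is synthetic.
import numpy as np

g = 9.81
true_mu = 0.4
t = np.linspace(0, 0.5, 20)
v0 = 2.0
observed_v = v0 - true_mu * g * t + np.random.default_rng(0).normal(0, 0.01, t.size)

# For an object sliding to rest, v(t) = v0 - mu*g*t, so mu is recoverable
# from a least-squares line fit to the velocity measurements.
slope, intercept = np.polyfit(t, observed_v, 1)
mu_estimate = -slope / g
print(f"estimated friction coefficient: {mu_estimate:.3f}")  # close to 0.4
```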