GCC AI Research


Results for "robot learning"

Learning Robot Super Autonomy

MBZUAI

Giuseppe Loianno from NYU presented research on creating "Super Autonomous" robots (USARC) that are Unmanned, Small, Agile, Resilient, and Collaborative. The research focuses on learning models, control, and navigation policies for single and collaborative robots operating in challenging environments. The talk highlighted the potential of these robots in logistics, reconnaissance, and other time-sensitive tasks. Why it matters: This points to growing research interest in advanced robotics in the region, especially given the focus on smart cities and automation.

Tactile robots: building the machine and learning the self

MBZUAI

Sami Haddadin from the Technical University of Munich (TUM) discusses a shift in robotics towards machines that autonomously develop their own blueprints and controls. He highlights advancements driven by human-centered design, soft control, and model-based machine learning, enabling human-robot collaboration in manufacturing and healthcare. Haddadin also presents progress towards autonomous machine design and modular control architectures for complex manipulation tasks. Why it matters: This research has implications for advancing robotics and AI in the GCC region, especially in manufacturing and healthcare, by enabling safer and more efficient human-robot collaboration.

Structured World Models for Robots

MBZUAI

Krishna Murthy, a postdoc at MIT, researches computational world models that enable robots to understand and operate effectively in the physical world. His work focuses on differentiable computing approaches to spatial perception and on interfacing large image, language, and audio models with 3D scenes. Murthy envisions structured world models working in tandem with scaling-based approaches to create versatile robot perception and planning algorithms. Why it matters: This research could significantly advance robotics by enabling more sophisticated perception, reasoning, and action capabilities in embodied agents.
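
To make the "differentiable computing" idea concrete, here is a minimal, self-contained sketch: an illustration of the general technique, not Murthy's actual system. A 2D pose is recovered by gradient descent, backpropagating through a differentiable point-alignment error; the known correspondences, shapes, and optimizer settings are all assumptions for this toy example.

```python
import torch

torch.manual_seed(0)
src = torch.randn(100, 2)                         # model points
ang = torch.tensor(0.6)                           # ground-truth rotation (rad)
R_true = torch.stack([torch.stack([torch.cos(ang), -torch.sin(ang)]),
                      torch.stack([torch.sin(ang),  torch.cos(ang)])])
t_true = torch.tensor([1.0, -0.5])                # ground-truth translation
obs = src @ R_true.T + t_true                     # "observed" points

# Pose parameters to estimate; gradients flow through the alignment error.
theta = torch.zeros(1, requires_grad=True)
t = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([theta, t], lr=0.05)
for _ in range(400):
    c, s = torch.cos(theta), torch.sin(theta)
    R = torch.stack([torch.cat([c, -s]), torch.cat([s, c])])
    loss = ((src @ R.T + t - obs) ** 2).mean()    # differentiable objective
    opt.zero_grad(); loss.backward(); opt.step()

print(theta.item(), t.detach())                   # ≈ 0.6 and ≈ [1.0, -0.5]
```

The same pattern, posing perception as optimization through a differentiable forward model, is what scales up to full 3D scene representations.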

Robot Navigation in the Wild

MBZUAI

Gregory Chirikjian presented an overview of research on robot navigation in unstructured environments, combining computer vision, sensor technology, machine learning, and motion planning. The methods fuse multi-modal observations from RGB cameras, 3D LiDAR, and robot odometry for scene perception, and use deep reinforcement learning for planning. They have been integrated with wheeled, home, and legged robots and tested in crowded indoor scenes, home environments, and dense outdoor terrains. Why it matters: This research pushes the boundaries of robotics in complex environments, paving the way for more versatile and autonomous robots in the Middle East.
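
As a rough illustration of that perception-to-planning pipeline (an assumed architecture, not the authors' published one), the sketch below encodes RGB images, LiDAR point clouds, and odometry separately, fuses the features, and maps them to a velocity command; in a deep RL setup, this network would be the policy being trained. All input shapes and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class MultiModalNavPolicy(nn.Module):
    """Toy fusion policy: RGB + LiDAR + odometry -> velocity command."""
    def __init__(self, odom_dim=6, act_dim=2):
        super().__init__()
        # Small CNN encoder for RGB images (3 x 64 x 64 assumed).
        self.rgb_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(128), nn.ReLU(),
        )
        # PointNet-style per-point encoder for a LiDAR cloud (N x 3).
        self.lidar_enc = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128),
        )
        self.odom_enc = nn.Linear(odom_dim, 32)
        # Policy head maps fused features to linear/angular velocity.
        self.head = nn.Sequential(
            nn.Linear(128 + 128 + 32, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, rgb, lidar, odom):
        f_rgb = self.rgb_enc(rgb)                          # (B, 128)
        f_lidar = self.lidar_enc(lidar).max(dim=1).values  # pool over points
        f_odom = self.odom_enc(odom)
        return self.head(torch.cat([f_rgb, f_lidar, f_odom], dim=-1))

policy = MultiModalNavPolicy()
cmd = policy(torch.randn(1, 3, 64, 64),   # RGB image
             torch.randn(1, 1024, 3),     # LiDAR points
             torch.randn(1, 6))           # odometry state
```

A reinforcement learning algorithm would then optimize this policy's weights against a navigation reward, while the encoders are shared across robot embodiments.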

Beyond self-driving simulations: teaching machines to learn

KAUST

KAUST researchers in the Image and Video Understanding Lab are applying machine learning to computer vision for automated navigation, including self-driving cars and UAVs. They tested their algorithms on KAUST roads, aiming to replicate the brain's efficiency in tasks like activity and object recognition. The team is also exploring whether algorithms can creatively transfer skills to tasks they were never directly trained on. Why it matters: This research contributes to the advancement of autonomous systems in the GCC region and explores fundamental questions about replicating human intelligence in machines.

Human Commonsense and Physical Reasoning for Robot Learning

MBZUAI

Mingyu Ding from UC Berkeley presented research on endowing robots with human-like commonsense and physical reasoning capabilities. The talk covered multimodal commonsense reasoning that integrates vision, world models, and language-based task planners, as well as physical reasoning approaches that let robots infer the dynamics and physical properties of objects. Why it matters: Enhancing robots with these capabilities can improve their ability to generalize across everyday tasks, broadening their real-world benefit and impact.

A “divide-and-conquer” approach to learning from demonstration

MBZUAI

MBZUAI researchers have developed a "divide-and-conquer" technique to improve learning from demonstration in robotics. The approach breaks down complex dynamical systems into independently solvable subsystems, modeled as linear parameter-varying systems. This method aims to simplify computations while maintaining stability and accurately capturing joint interactions for robots in complex environments. Why it matters: The research addresses a key challenge in robotics, potentially enabling more efficient and safer robot learning from human demonstrations.
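
Here is a minimal sketch of the linear parameter-varying (LPV) idea for a single subsystem, under simplifying assumptions not taken from the paper: the velocity field is modeled as a state-scheduled blend of linear modes, ẋ = Σₖ γₖ(x) Aₖ x, with fixed Gaussian scheduling functions γₖ, so fitting the Aₖ to demonstrated (x, ẋ) pairs reduces to ordinary least squares. The actual method additionally enforces stability constraints, which this sketch omits.

```python
import numpy as np

def gaussian_weights(X, centers, sigma=1.0):
    """Normalized RBF scheduling functions gamma_k(x) (an assumed choice)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)            # (N, K)

def fit_lpv(X, Xdot, centers):
    """Least-squares fit of xdot = sum_k gamma_k(x) A_k x, one subsystem."""
    N, n = X.shape
    K = len(centers)
    G = gaussian_weights(X, centers)                   # (N, K)
    # Regressor: gamma_k(x) * x_j for every (k, j), flattened per sample.
    Phi = (G[:, :, None] * X[:, None, :]).reshape(N, K * n)
    # Solve all output dimensions at once; coeffs are the A_k entries.
    A = np.linalg.lstsq(Phi, Xdot, rcond=None)[0]      # (K*n, n)
    return A.reshape(K, n, n).transpose(0, 2, 1)       # K matrices (n x n)

# Toy demonstration: a stable linear spiral sampled at random states.
rng = np.random.default_rng(0)
A_true = np.array([[-1.0, -2.0], [2.0, -1.0]])
X = rng.normal(size=(500, 2))
Xdot = X @ A_true.T
A_k = fit_lpv(X, Xdot, centers=rng.normal(size=(3, 2)))
print(np.round(A_k[0], 2))   # each mode recovers ~A_true for linear data
```

On real demonstrations, each independently solvable subsystem would get its own fit like this one, which is what makes the divide-and-conquer decomposition computationally attractive.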

Tools of the trade: teaching robots to learn manual skills

MBZUAI

MBZUAI Professor Sami Haddadin and his team developed a new framework called Tactile Skills to teach robots manual skills through touch and trial and error. The framework aims to close the gap between robots' ability to learn basic physical tasks and AI's rapid advances in language and image generation. The research, published in Nature Machine Intelligence, focuses on enabling robots to perform manipulation skills at industrial levels with low energy and compute demands. Why it matters: This research could lead to robots capable of performing household maintenance, industrial tasks, and even assisting in medical or rehabilitation settings, potentially easing labor shortages across sectors in the region and beyond.