GCC AI Research


Results for "learning from demonstration"

A “divide-and-conquer” approach to learning from demonstration

MBZUAI ·

MBZUAI researchers have developed a “divide-and-conquer” technique to improve learning from demonstration in robotics. The approach decomposes complex dynamical systems into independently solvable subsystems, each modeled as a linear parameter-varying (LPV) system. This decomposition simplifies the computation while preserving stability and accurately capturing the joint interactions that robots need in complex environments. Why it matters: The research addresses a key challenge in robotics, potentially enabling more efficient and safer robot learning from human demonstrations.
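The LPV idea behind the summary can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the MBZUAI team's method: the state evolves as x&#775; = A(p)x, where A(p) is a convex blend of stable local linear models and the scheduling signal p is a simple, made-up function of the state.

```python
import numpy as np

def lpv_dynamics(x, p, A_list):
    """Linear parameter-varying dynamics: x_dot = A(p) x,
    where A(p) is a convex combination of local linear models."""
    A = sum(w * Ai for w, Ai in zip(p, A_list))
    return A @ x

def scheduling(x):
    """Illustrative scheduling signal: blend two local models
    based on the state's norm (purely a toy choice)."""
    w = 1.0 / (1.0 + np.linalg.norm(x))
    return np.array([w, 1.0 - w])

# Two stable local models for one subsystem; because both have a
# negative-definite symmetric part, any convex blend is also stable.
A_list = [np.array([[-1.0, 0.5], [-0.5, -1.0]]),
          np.array([[-2.0, 0.0], [0.0, -0.5]])]

# Euler simulation: the trajectory contracts toward the origin,
# mimicking a stable motion reproduced from demonstration.
x = np.array([1.0, -1.0])
dt = 0.01
for _ in range(2000):
    x = x + dt * lpv_dynamics(x, scheduling(x), A_list)

print(np.linalg.norm(x))  # residual norm, close to zero
```

In a real learning-from-demonstration pipeline, the local models and the scheduling function would be fit to human demonstration data per subsystem rather than hand-picked as here.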

Beyond self-driving simulations: teaching machines to learn

KAUST ·

KAUST researchers in the Image and Video Understanding Lab are applying machine learning to computer vision for automated navigation, including self-driving cars and UAVs. They tested their algorithms on KAUST roads, aiming to replicate the brain's efficiency in tasks like activity and object recognition. The team is also exploring creative algorithms that can transfer skills without direct training. Why it matters: This research contributes to the advancement of autonomous systems and explores the fundamental question of replicating human intelligence in machines within the GCC region.

Learn to control

MBZUAI ·

Patrick van der Smagt, Director of AI Research at Volkswagen Group, discussed the use of generative machine learning models for predicting and controlling complex stochastic systems in robotics. The talk highlighted examples in robotics and beyond and addressed the challenges of achieving quality and trust in AI systems. He also mentioned his involvement in a European industry initiative on trust in AI and his membership in the AI Council of the State of Bavaria. Why it matters: Control in robotics and trust in AI are key issues for the further development of autonomous systems, especially in industrial applications within the GCC region.

Super-aligned Machine Intelligence via a Soft Touch

MBZUAI ·

Song Chaoyang from the Southern University of Science and Technology (SUSTech) presented research on Vision-Based Tactile Sensing (VBTS) for robot learning, combining soft robotic design with learning algorithms to achieve state-of-the-art performance in tactile perception. Their VBTS solution demonstrates robustness up to 1 million test cycles and enables multi-modal outputs from a single, vision-based input, facilitating applications such as amphibious tactile grasping and industrial welding. The talk also highlighted the DeepClaw system for capturing human demonstration actions, aiming for a universal interaction interface. Why it matters: This research advances embodied intelligence by improving robot dexterity and adaptability through enhanced tactile sensing, which is crucial for complex manipulation tasks in various sectors such as manufacturing and healthcare within the region.

Learning Robot Super Autonomy

MBZUAI ·

Giuseppe Loianno from NYU presented research on creating "Super Autonomous" robots (USARC) that are Unmanned, Small, Agile, Resilient, and Collaborative. The research focuses on learning models, control, and navigation policies for single and collaborative robots operating in challenging environments. The talk highlighted the potential of these robots in logistics, reconnaissance, and other time-sensitive tasks. Why it matters: This points to growing research interest in advanced robotics in the region, especially given the focus on smart cities and automation.