GCC AI Research

Visual SLAM in the era of Deep Learning

MBZUAI · Notable

Summary

Ian Reid, a Professor of Computer Science at the University of Adelaide, gave a talk at MBZUAI on leveraging deep learning to go beyond geometric SLAM. The talk covered using prior domain knowledge to improve map and shape estimation, and enabling navigation in previously unvisited environments. The research aims to turn cameras into "Spatial AI" sensors: devices for flexible, large-scale situational awareness. Why it matters: Integrating deep learning with SLAM could significantly advance robotic navigation and spatial understanding, with applications for autonomous systems across many industries.


Related

Robot Navigation in the Wild

MBZUAI

Gregory Chirikjian presented an overview of research on robot navigation in unstructured environments, using computer vision, sensor tech, ML, and motion planning. The methods use multi-modal observations from RGB cameras, 3D LiDAR, and robot odometry for scene perception, along with deep RL for planning. These methods have been integrated with wheeled, home, and legged robots and tested in crowded indoor scenes, home environments, and dense outdoor terrains. Why it matters: This research pushes the boundaries of robotics in complex environments, paving the way for more versatile and autonomous robots in the Middle East.

Spatial AI to help humans and enable robots

MBZUAI

Marc Pollefeys from ETH Zurich and Microsoft Spatial AI Lab will discuss building 3D environment representations for assisting humans and robots. The talk covers visual 3D mapping, localization, spatial data access, and navigation using geometry and learning-based methods. It also explores building rich 3D semantic representations for scene interaction via open vocabulary queries leveraging foundation models. Why it matters: Advancements in spatial AI and 3D scene understanding are critical for enabling more capable robots and AI assistants in various applications within the region.

Towards embodied multi-modal visual understanding

MBZUAI

Ivan Laptev from INRIA Paris presented a talk at MBZUAI on embodied multi-modal visual understanding, covering advancements in video understanding tasks such as question answering and captioning. The talk highlighted recent work on vision-language navigation and manipulation. He argued that detailed understanding of the physical world through vision is still in its early stages, and discussed open research directions related to robotics and video generation. Why it matters: The discussion of robotics applications and future research directions in embodied AI could influence the direction of AI research and development in the UAE, particularly at MBZUAI.

Human-Computer Conversational Vision-and-Language Navigation

MBZUAI

A presentation discusses the evolution of Vision-and-Language Navigation (VLN) from benchmarks like Room-to-Room (R2R). It highlights the role of Large Language Models (LLMs) such as GPT-4 in enabling more natural human-machine interactions. The presentation showcases work using LLMs to decode navigational instructions and improve robotic navigation. Why it matters: This research demonstrates the potential of merging vision, language, and robotics for advanced AI applications in navigation and human-computer interaction.