Gregory Chirikjian presented an overview of research on robot navigation in unstructured environments, combining computer vision, sensor technology, machine learning, and motion planning. The methods fuse multi-modal observations from RGB cameras, 3D LiDAR, and robot odometry for scene perception, with deep reinforcement learning for planning. They have been integrated with wheeled, home, and legged robots and tested in crowded indoor scenes, home environments, and dense outdoor terrain. Why it matters: This research pushes the boundaries of robotics in complex environments, paving the way for more versatile and autonomous robots in the Middle East.
ARRC researchers, in collaboration with the University of Bologna and ETH Zürich, have developed a CNN running on an AI deck that enables autonomous navigation of a 27 g nano-drone in unknown environments. The CNN lets the drone recognize and avoid obstacles using only an onboard camera, running 10x faster and using 10x less memory than previous versions. The demo also featured a swarm of nano-drones flying in formation using ultra-wideband communication. Why it matters: This advancement could significantly enhance the capabilities of nano-drones for applications such as disaster response, where quick and efficient intervention is crucial.
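To make the obstacle-avoidance loop concrete, here is a minimal sketch of how a CNN's typical outputs (a steering correction and a collision probability, as in nano-drone navigation networks of this kind) might be mapped to velocity commands. The function name, thresholds, and gains are illustrative assumptions, not taken from the ARRC system.

```python
def control_from_cnn(steering_angle, collision_prob,
                     max_forward=1.5, critical=0.7):
    """Map hypothetical CNN outputs to velocity commands.

    steering_angle: predicted yaw correction in radians (illustrative)
    collision_prob: predicted probability of an obstacle ahead
    Returns (forward_speed_m_s, yaw_rate_rad_s).
    """
    if collision_prob >= critical:
        # Too risky: stop forward motion entirely
        forward = 0.0
    else:
        # Slow down linearly as collision probability rises
        forward = max_forward * (1.0 - collision_prob / critical)
    return forward, steering_angle
```

On a real nano-drone, these commands would feed the low-level attitude controller at the camera's frame rate.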
A presentation discusses the evolution of Vision-and-Language Navigation (VLN) from benchmarks like Room-to-Room (R2R). It highlights the role of Large Language Models (LLMs) such as GPT-4 in enabling more natural human-machine interactions. The presentation showcases work using LLMs to decode navigational instructions and improve robotic navigation. Why it matters: This research demonstrates the potential of merging vision, language, and robotics for advanced AI applications in navigation and human-computer interaction.
Giuseppe Loianno from NYU presented research on creating "Super Autonomous" robots that are Unmanned, Small, Agile, Resilient, and Collaborative (USARC). The research focuses on learning models, control, and navigation policies for single and collaborative robots operating in challenging environments. The talk highlighted the potential of these robots in logistics, reconnaissance, and other time-sensitive tasks. Why it matters: This points to growing research interest in advanced robotics in the region, especially given the focus on smart cities and automation.
This paper presents the design and deployment of an autonomous unmanned ground vehicle (UGV) equipped with a robotic arm for urban firefighting. The UGV uses on-board sensors for navigation and a thermal camera for fire source identification, with a custom pump for fire suppression. The system was developed for the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020, where it achieved the highest score among UGV solutions and contributed to winning first place. Why it matters: This demonstrates the potential of autonomous robotics in addressing complex and dangerous real-world challenges like urban firefighting in the GCC region and beyond.
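The fire-identification step the UGV relies on can be sketched in a few lines: scan a thermal image for the hottest pixel above an ignition-like temperature. This is a toy illustration under assumed units and thresholds, not the paper's actual detection pipeline.

```python
def hottest_pixel(thermal, threshold=80.0):
    """Toy hotspot detector over a 2-D thermal image.

    thermal: nested lists of per-pixel temperatures in deg C (assumed units)
    Returns (row, col) of the hottest pixel above threshold, else None.
    """
    best, best_t = None, threshold
    for r, row in enumerate(thermal):
        for c, t in enumerate(row):
            if t > best_t:
                # Keep the warmest candidate seen so far
                best, best_t = (r, c), t
    return best
```

A real system would cluster hot regions and reject reflections, but the core lookup is this simple.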
This paper presents a UAV-UGV team designed for autonomous firefighting, developed for the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020. The system uses LiDAR for localization in GNSS-restricted environments and fuses LiDAR and thermal camera data to track fires. Relative navigation with respect to the tracked fire then enables successful extinguishing. Why it matters: This research demonstrates the potential of robotic systems in autonomous firefighting, addressing challenges in dangerous and inaccessible environments, and advancing robotics research within the UAE.
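One way to fuse a thermal camera with LiDAR, as a hedged sketch: a thermal detection gives a bearing to the hotspot but no range, while LiDAR returns give range. Picking the LiDAR point whose direction best matches the thermal bearing yields a 3D fire location. The function, tolerance, and frame assumptions below are illustrative, not the paper's actual method.

```python
import math

def locate_fire(bearing, lidar_points, max_angle_deg=3.0):
    """Hypothetical fusion step: attach range to a thermal bearing.

    bearing: unit vector toward the hotspot (sensors assumed co-registered)
    lidar_points: list of (x, y, z) returns in the same frame
    Returns the 3D point whose direction best matches the bearing,
    or None if nothing lies within max_angle_deg of it.
    """
    best, best_angle = None, math.radians(max_angle_deg)
    for p in lidar_points:
        r = math.sqrt(sum(c * c for c in p))
        if r == 0.0:
            continue
        # Angle between the LiDAR return direction and the thermal bearing
        dot = sum(b * c for b, c in zip(bearing, p)) / r
        angle = math.acos(max(-1.0, min(1.0, dot)))
        if angle < best_angle:
            best, best_angle = p, angle
    return best
```

The resulting 3D target is what a relative-navigation controller would steer toward.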
The paper presents MonoRace, an onboard drone racing approach using a monocular camera and IMU. The system combines neural-network-based gate segmentation with a drone model for robust state estimation, along with offline optimization using gate geometry. MonoRace won the 2025 Abu Dhabi Autonomous Drone Racing Competition (A2RL), outperforming AI teams and human world champions, reaching speeds up to 100 km/h. Why it matters: This demonstrates a significant advancement in autonomous drone racing, achieving champion-level performance with a resource-efficient monocular system, validated in a real-world competition setting in the UAE.
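The camera-plus-IMU fusion at the heart of such a system can be illustrated with a toy 1-D complementary filter: the IMU is integrated at high rate for prediction, and the lower-rate vision fix (e.g., a position inferred from gate segmentation) corrects the drift. The function, gain, and 1-D simplification are assumptions for illustration; MonoRace's actual estimator is not specified here.

```python
def fuse(est_pos, est_vel, accel, dt, vision_pos=None, gain=0.2):
    """One predict/correct step of a toy 1-D complementary filter.

    accel: IMU acceleration along the axis (gravity-compensated, assumed)
    vision_pos: optional low-rate position fix from the camera
    Returns the updated (position, velocity) estimate.
    """
    # Predict: dead-reckon with the accelerometer
    est_vel += accel * dt
    est_pos += est_vel * dt
    # Correct: nudge the estimate toward the vision measurement when available
    if vision_pos is not None:
        est_pos += gain * (vision_pos - est_pos)
    return est_pos, est_vel
```

At 100 km/h the IMU step runs hundreds of times per second, while vision corrections arrive only when a gate is segmented, which is why drift-tolerant fusion is essential.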
This paper presents a fully autonomous micro aerial vehicle (MAV) developed to pop balloons using onboard sensing and computing. The system was evaluated at the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020. The MAV successfully popped all five balloons in under two minutes in each of the three competition runs. Why it matters: This demonstrates the potential of autonomous robotics and computer vision for real-world applications in challenging environments.