GCC AI Research


Results for "LiDAR"

OmniGen: Unified Multimodal Sensor Generation for Autonomous Driving

arXiv ·

The paper introduces OmniGen, a unified framework for generating aligned multimodal sensor data for autonomous driving using a shared Bird's Eye View (BEV) space. It uses a novel generalizable multimodal reconstruction method (UAE) to jointly decode LiDAR and multi-view camera data through volume rendering. The framework incorporates a Diffusion Transformer (DiT) with a ControlNet branch to enable controllable multimodal sensor generation, and the authors report strong generation quality and cross-modal consistency.
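The shared BEV space at the heart of this summary can be illustrated with a minimal sketch: rasterizing a LiDAR point cloud into a 2D occupancy grid. The function name, ranges, and cell size below are hypothetical choices for illustration; the paper's UAE decoder and volume rendering are far richer than this.

```python
import numpy as np

def lidar_to_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), cell=0.5):
    """Rasterize a LiDAR point cloud (N, 3) into a 2D BEV occupancy grid."""
    xs, ys = points[:, 0], points[:, 1]
    # Keep only points inside the BEV extent.
    mask = (xs >= x_range[0]) & (xs < x_range[1]) & \
           (ys >= y_range[0]) & (ys < y_range[1])
    xs, ys = xs[mask], ys[mask]
    width = int((x_range[1] - x_range[0]) / cell)
    height = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((height, width), dtype=np.float32)
    # Map metric coordinates to grid indices and mark occupied cells.
    cols = ((xs - x_range[0]) / cell).astype(int)
    rows = ((ys - y_range[0]) / cell).astype(int)
    grid[rows, cols] = 1.0
    return grid
```

Camera features can be lifted into the same grid, which is what makes BEV a convenient meeting point for multimodal generation.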

Autonomous Fire Fighting with a UAV-UGV Team at MBZIRC 2020

arXiv ·

This paper presents a UAV-UGV team designed for autonomous firefighting, developed for the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020. The system uses LiDAR for localization in GNSS-restricted environments and fuses LiDAR and thermal camera data to track fires. Relative navigation positions the robots with respect to the fire, enabling successful extinguishing. Why it matters: This research demonstrates the potential of robotic systems in autonomous firefighting, addressing challenges in dangerous and inaccessible environments, and advancing robotics research within the UAE.
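The LiDAR–thermal fusion described above ultimately anchors a 2D thermal detection in 3D. A minimal sketch under a pinhole camera model, with hypothetical intrinsics and a LiDAR-derived depth standing in for the paper's actual pipeline:

```python
import numpy as np

def thermal_pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a thermal-camera detection at pixel (u, v), combined with
    a depth estimate from LiDAR, into a 3D point in the camera frame.
    Assumes a pinhole model with focal lengths (fx, fy) and principal
    point (cx, cy); all values here are illustrative."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```

A fire detected at the image center with 5 m of LiDAR depth would resolve to the point (0, 0, 5) directly ahead of the camera.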

Computing in three dimensions: A conversation with Peter Wonka

KAUST ·

KAUST's Peter Wonka discusses the challenges and advancements in creating data-rich, three-dimensional maps for various applications. His team is working with Boeing on 3D modeling tools for aerospace design. KAUST-funded FalconViz uses UAV drones to create 3D maps of disaster areas for first responders. Why it matters: This highlights KAUST's contribution to cutting-edge 3D modeling and its practical applications in industries like aerospace and disaster response in the region.

KAUST's 3D mapping technology helps preserve a landmark

KAUST ·

KAUST researchers used 3D mapping technology deployed from a remote-controlled helicopter to survey and create detailed renderings of Jeddah's Al Balad, a UNESCO World Heritage Site. The team, from KAUST's Visual Computing Center and FalconViz, captured high-definition images from about 50 meters above street level. This enabled the creation of accurate 3D models, revealing building shifts and potential problems for urban planners. Why it matters: This method provides a rapid and accurate way to document and preserve historical landmarks, especially in areas where traditional surveying is difficult or infeasible, aiding in cultural heritage preservation efforts.

A new way of seeing: vision transformers for radar data

MBZUAI ·

MBZUAI researchers presented "TransRadar" at WACV, a study proposing new uses of radar for object identification. Led by Yahia Dalbah, the work explores fusing radar with other sensing technologies to identify objects, particularly for autonomous vehicles. The TransRadar approach uses an adaptive-directional transformer for real-time multi-view radar semantic segmentation. Why it matters: This research addresses the limitations of radar by enhancing its object recognition capabilities, potentially improving the reliability of autonomous systems in adverse conditions.

Co-Modality Active sensing and Perception (C-MAP) in Autonomous Vehicles, Augmented Reality, Remote Environmental Monitoring, and Robotic Grasping

MBZUAI ·

Dezhen Song from Texas A&M University presented a talk on Co-Modality Active sensing and Perception (C-MAP) for robotics, covering sensor fusion for autonomous vehicles, augmented reality, and remote environmental monitoring. The talk highlighted lessons learned in sensor fusion, using autonomous motorcycles and NASA's Robonaut as examples. Recent work in robotic remote environmental monitoring, focused especially on subsurface void and pipeline mapping, was also discussed. Why it matters: This research explores sensor fusion techniques to enhance robot perception, which could improve the robustness and capabilities of autonomous systems developed and deployed in the Middle East, particularly in challenging environments.

Autonomous Wall Building with a UGV-UAV Team at MBZIRC 2020

arXiv ·

This paper presents two robotic systems developed for the MBZIRC 2020 competition, designed for autonomous wall construction: a UGV that uses 3D LiDAR for precise brick pose estimation and a UAV that relies on real-time visual servoing. The authors report results from the competition and lab experiments, discussing lessons learned from the autonomous wall-building task. Why it matters: The work highlights advancements in mobile manipulation and autonomous robotics, with potential applications in construction and infrastructure development in the region.
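Brick pose estimation from a segmented LiDAR cluster can be sketched as a centroid plus a PCA-derived yaw. This is an illustrative assumption about how such an estimator might look, not the authors' actual method:

```python
import numpy as np

def estimate_brick_pose(points):
    """Estimate the position (centroid) and yaw (principal horizontal axis)
    of a brick-like point cluster (N, 3) via PCA on the x-y coordinates.
    Illustrative sketch; assumes the cluster is already segmented."""
    centroid = points.mean(axis=0)
    centered = points[:, :2] - centroid[:2]
    # 2x2 covariance of the ground-plane coordinates.
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # longest horizontal axis
    yaw = np.arctan2(major[1], major[0])    # ambiguous up to 180 degrees
    return centroid, yaw
```

The 180-degree ambiguity in the yaw is harmless for a rectangular brick, whose long axis has no preferred direction.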