GCC AI Research

Results for "monocular vision"

MonoRace: Winning Champion-Level Drone Racing with Robust Monocular AI

arXiv ·

The paper presents MonoRace, an onboard drone racing approach using a monocular camera and IMU. The system combines neural-network-based gate segmentation with a drone model for robust state estimation, along with offline optimization that exploits gate geometry. MonoRace won the 2025 Abu Dhabi Autonomous Drone Racing Competition (A2RL), outperforming both rival AI teams and human world champions at speeds of up to 100 km/h. Why it matters: This demonstrates a significant advance in autonomous drone racing, achieving champion-level performance with a resource-efficient monocular system, validated in a real-world competition setting in the UAE.
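One reason known gate geometry helps a monocular system is that a single camera cannot observe absolute scale on its own; a gate of known physical size restores it. The sketch below is not from the paper: it illustrates only the basic pinhole-camera idea, and the focal length and gate dimensions are illustrative assumptions.

```python
# Illustrative sketch (not the MonoRace method): recovering distance to a
# race gate of known physical width from its apparent width in a monocular
# image, using the pinhole camera model. All numbers are assumptions.

def gate_distance(focal_px: float, gate_width_m: float, gate_width_px: float) -> float:
    """Distance along the optical axis, from similar triangles:
    gate_width_px / focal_px = gate_width_m / distance."""
    return focal_px * gate_width_m / gate_width_px

# Example: a 1.5 m gate appearing 150 px wide with a 600 px focal length
# lies 6 m ahead of the camera.
d = gate_distance(600.0, 1.5, 150.0)
```

In practice, full gate-relative pose is typically recovered from all four detected corners (e.g. via a perspective-n-point solve) rather than from apparent width alone, but the scale information comes from the same known-geometry principle.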

A vision to change how we see

KAUST ·

Dr. Andrew Bastawrous, CEO and co-founder of Peek, discussed his work on mobile eye care in a talk at KAUST. He developed Peek Acuity and Peek Retina, which turn smartphones into tools for detecting visual impairment, using smartphone screens and camera clip-ons to image the inside of the eye. Why it matters: This low-cost mobile ophthalmic tool has the potential to prevent and treat vision loss in underserved communities.

A unified theory of all things visual

MBZUAI ·

MBZUAI Professor Fahad Khan is working on a unified theory of machine visual intelligence. His goal is to enable AI systems to better understand and function in complex, chaotic visual environments. The aim is to improve real-world applications like smart cities, personalized healthcare, and autonomous vehicles. Why it matters: This research could significantly advance AI's ability to perceive and interact with the real world, especially in challenging environments common in the developing world.

Art as a window into sight

KAUST ·

Margaret Livingstone, a neurobiology professor at Harvard Medical School, lectured at KAUST's Winter Enrichment Program 2018 on how art can reveal insights into the human brain. She discussed how artists have long understood the independent roles of color and luminance in visual perception. Livingstone highlighted examples from Picasso, Monet, and Warhol to illustrate how artists manipulate visual cues. Why it matters: This interdisciplinary approach can potentially lead to new understandings of how the brain processes visual information and inform advances in both neuroscience and art.

Computer Vision: A Journey of Pursuing 3D World Understanding

MBZUAI ·

Dr. Xiaoming Liu from Michigan State University discussed computer vision techniques for 3D world understanding at a talk hosted by MBZUAI. The talk covered 3D reconstruction, detection, depth estimation, and velocity estimation, with applications in biometrics and autonomous driving. Dr. Liu also touched on anti-spoofing and fair face recognition research at MSU's Computer Vision Lab. Why it matters: Showcasing international experts and research directions helps to catalyze computer vision and 3D understanding research efforts within the UAE's AI ecosystem.

Drift-Corrected Monocular VIO and Perception-Aware Planning for Autonomous Drone Racing

arXiv ·

This paper details the autonomous drone racing system developed for the Abu Dhabi Autonomous Racing League (A2RL) x Drone Champions League competition. The system uses drift-corrected monocular Visual-Inertial Odometry (VIO), with YOLO-based gate detections providing global position measurements that are fused via a Kalman filter. A perception-aware planner generates trajectories balancing speed and gate visibility. Why it matters: The system's podium finishes validate the effectiveness of monocular vision-based autonomous drone flight and showcase advances in AI-powered robotics within the UAE.
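The fusion scheme described above follows a standard pattern: odometry drifts over time, and occasional absolute fixes (here, from gate detections) pull the estimate back. A minimal one-dimensional sketch of that pattern, assuming illustrative noise values and not the authors' actual filter:

```python
# Hedged sketch (not the paper's implementation): fusing a drifting odometry
# estimate with occasional absolute position fixes via a 1-D Kalman filter.
# Noise variances and the drift magnitude below are illustrative assumptions.

def kf_predict(x, P, dx, q):
    """Propagate state with odometry increment dx; q is process noise variance."""
    return x + dx, P + q

def kf_update(x, P, z, r):
    """Correct with an absolute measurement z; r is measurement noise variance."""
    K = P / (P + r)                      # Kalman gain: trust in the measurement
    return x + K * (z - x), (1.0 - K) * P

# Odometry with a +5% scale error drifts away from the true position (10.0);
# a single gate-based fix pulls the estimate most of the way back.
x, P = 0.0, 1.0
for _ in range(10):
    x, P = kf_predict(x, P, dx=1.05, q=0.01)   # drifting VIO increments
x, P = kf_update(x, P, z=10.0, r=0.1)          # absolute fix from gate detection
```

After the update, the estimate sits close to the measured position and its variance shrinks; in the real system the same correction happens in full 3-D pose space, but the gain structure is analogous.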

Unifying Vision Representation

MBZUAI ·

This seminar explores vision systems through self-supervised representation learning, addressing challenges and solutions in mainstream vision self-supervised learning methods. It discusses developing versatile representations across modalities, tasks, and architectures to propel the evolution of vision foundation models. Tong Zhang of EPFL, with a background spanning Beihang University, New York University, and the Australian National University, will lead the talk. Why it matters: Advancing vision foundation models is crucial for expanding AI applications, especially in the Middle East, where computer vision can address challenges in areas like urban planning, agriculture, and environmental monitoring.