GCC AI Research

Results for "visual intelligence"

A unified theory of all things visual

MBZUAI

MBZUAI Professor Fahad Khan is working on a unified theory of machine visual intelligence, aiming to enable AI systems to better understand and function in complex, chaotic visual environments and thereby improve real-world applications like smart cities, personalized healthcare, and autonomous vehicles. Why it matters: This research could significantly advance AI's ability to perceive and interact with the real world, especially in challenging environments common in the developing world.

To Make Just-Noticeable Difference (JND) Computable toward Visual Intelligence

MBZUAI

A professor from Nanyang Technological University (NTU), Singapore, gave a talk at MBZUAI on just-noticeable difference (JND) models in visual intelligence. The talk covered existing visual JND models, their research and applications, and future opportunities for JND modeling. JND can help tackle big-data challenges with limited resources by focusing on user-centric and green systems. Why it matters: Exploring JND could lead to advancements in AI applications related to visual signal processing, image synthesis, and generative AI in the region.
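The core idea behind making JND "computable" can be illustrated with Weber's law, the classic perceptual result that the smallest noticeable change in a stimulus is roughly proportional to its background intensity. This is a minimal sketch of that relationship only; the Weber fraction used here (2%) is an illustrative value, and the speaker's actual JND models are far more sophisticated than this.

```python
def weber_jnd(intensity: float, weber_fraction: float = 0.02) -> float:
    """Smallest intensity change a viewer can notice, per Weber's law.

    Weber's law: the just-noticeable difference dI is roughly
    proportional to the background intensity I, i.e. dI = k * I.
    The fraction k = 0.02 is an illustrative placeholder, not a
    measured constant.
    """
    return weber_fraction * intensity


def is_noticeable(base: float, changed: float,
                  weber_fraction: float = 0.02) -> bool:
    """True if the change from `base` exceeds the Weber JND threshold."""
    return abs(changed - base) >= weber_jnd(base, weber_fraction)
```

A "green", resource-aware system could use such a threshold to skip processing or transmitting signal changes that fall below what a viewer can perceive.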

Computer vision: Teaching computers how to see the world

KAUST

KAUST's Visual Computing Center (VCC) is researching computer vision, image processing, and machine learning, with applications in self-driving cars, surveillance, and security. Professor Bernard Ghanem is working on teaching machines to understand visual data semantically, similar to how humans perceive the world. Self-driving cars use visual sensors to interpret traffic signals and detect obstacles, while computer vision also assists governments and corporations with security applications like facial recognition and detecting unattended luggage. Why it matters: Advancements in computer vision at KAUST can contribute to innovations in autonomous vehicles and enhance security measures in the region.

Visualizing the future

KAUST

KAUST's Visual Computing Center (VCC) hosted an Open House event on March 28, showcasing its interdisciplinary research in visual computing. Demonstrations included a virtual reality driving simulator by FalconViz, intended for driver education in Saudi Arabia. Researchers also presented a drone trained to autonomously navigate race courses and a neural network for autonomous driving using image-based technology without GPS. Why it matters: The VCC's work highlights KAUST's role in advancing visual computing applications relevant to Saudi Arabia, from driver training to autonomous systems.

Teaching algorithms to see

KAUST

KAUST's Image and Video Understanding Lab is developing machine learning algorithms for computer vision and object tracking, with applications in video content search and UAV navigation. Their algorithms can detect specific activities in videos, helping platforms flag unwanted content and deliver relevant ads. Their object tracking algorithm also empowers UAVs, enabling them to follow objects autonomously. Why it matters: This research enhances video content analysis and UAV capabilities, positioning KAUST as a leader in computer vision and AI applications within the region.

Multimodality for story-level understanding and generation of visual data

MBZUAI

Vicky Kalogeiton from École Polytechnique discussed the importance of multimodality for story-level recognition and generation using video, audio, text, masks, and clinical data. She presented on multimodal video understanding using FunnyNet-W and the Short Film Dataset. She also showed examples of visual generation from text and other modalities (ET, CAD, DynamicGuidance). Why it matters: Multimodal AI research is growing globally, and this talk highlights the potential of combining different data types for enhanced understanding and generation, which could have implications for various applications, including those relevant to the Middle East.

Satellites are speaking a visual language that today’s AI doesn’t quite get

MBZUAI

Researchers from MBZUAI, IBM, and ServiceNow introduced GEOBench-VLM, a benchmark for evaluating vision-language models on Earth observation tasks using satellite and aerial imagery. The benchmark includes over 10,000 human-verified instructions across 31 sub-tasks spanning object classification, localization, change detection, and more. GEOBench-VLM addresses the gap in current VLMs' ability to perform spatially grounded reasoning and change detection in satellite imagery. Why it matters: This benchmark will drive progress in AI's ability to analyze satellite data for critical applications like disaster response, climate monitoring, and urban planning in the Middle East and globally.
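A benchmark like this boils down to comparing model answers against human-verified ground truth and reporting scores per sub-task. The sketch below shows that shape in the simplest possible form; the record layout, field names, and exact-match scoring are assumptions for illustration, and the real GEOBench-VLM format and metrics may differ.

```python
from collections import defaultdict

# Hypothetical record format: (sub_task, model_answer, ground_truth).
# All values below are invented examples, not GEOBench-VLM data.
records = [
    ("object_classification", "ship", "ship"),
    ("object_classification", "plane", "helicopter"),
    ("change_detection", "new building", "new building"),
    ("localization", "top-left", "bottom-right"),
]


def per_subtask_accuracy(records):
    """Exact-match accuracy per sub-task, as a benchmark report might list it."""
    correct, total = defaultdict(int), defaultdict(int)
    for sub_task, answer, truth in records:
        total[sub_task] += 1
        if answer.strip().lower() == truth.strip().lower():
            correct[sub_task] += 1
    return {task: correct[task] / total[task] for task in total}
```

Reporting scores per sub-task rather than as one aggregate number is what lets a benchmark expose specific weaknesses, such as spatial grounding, that an overall accuracy figure would hide.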