GCC AI Research


Results for "ACVA"

To Make Just-Noticeable Difference (JND) Computable toward Visual Intelligence

MBZUAI ·

A professor from Nanyang Technological University (NTU), Singapore, gave a talk at MBZUAI on "Just-Noticeable Difference (JND)" models in visual intelligence. The talk covered visual JND models, current research and applications, and future opportunities for JND modeling. JND modeling can help tackle big-data challenges under limited resources by enabling user-centric, green (energy-efficient) systems. Why it matters: Exploring JND could lead to advances in AI applications for visual signal processing, image synthesis, and generative AI in the region.

Computer vision: Teaching computers how to see the world

KAUST ·

KAUST's Visual Computing Center (VCC) is researching computer vision, image processing, and machine learning, with applications in self-driving cars, surveillance, and security. Professor Bernard Ghanem is working on teaching machines to understand visual data semantically, similar to how humans perceive the world. Self-driving cars use visual sensors to interpret traffic signals and detect obstacles, while computer vision also assists governments and corporations with security applications like facial recognition and detecting unattended luggage. Why it matters: Advancements in computer vision at KAUST can contribute to innovations in autonomous vehicles and enhance security measures in the region.

Video search gets closer to how humans look for clips

MBZUAI ·

A new paper at ICCV 2025, co-authored by MBZUAI Ph.D. student Dmitry Demidov, introduces Dense-WebVid-CoVR, a 1.6-million-sample benchmark for composed video retrieval (CoVR). The benchmark features longer, context-rich descriptions and modification texts, generated with Gemini Pro and GPT-4o and manually verified. The paper also presents a unified fusion approach that reasons jointly over video and text inputs, improving performance on fine-grained edit details. Why it matters: This work advances video search by enabling more human-like queries, which is crucial for creative and analytic workflows that require nuanced video retrieval.
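In composed video retrieval, a query is a pair (reference video, modification text), and the system ranks candidate videos against a fused representation of both. The paper's fusion module is a learned neural network; the overall retrieval loop can nonetheless be sketched with a toy element-wise fusion. Everything below (the averaging fusion, the tiny corpus, all embeddings and names) is illustrative and not taken from the paper:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def fuse(video_emb, text_emb):
    # Toy stand-in for a learned fusion module: element-wise average
    # of the reference-video and modification-text embeddings.
    return [(v + t) / 2 for v, t in zip(video_emb, text_emb)]

def retrieve(query_video_emb, modification_text_emb, corpus):
    # corpus: list of (video_id, embedding); rank by similarity to the fused query.
    fused = fuse(query_video_emb, modification_text_emb)
    return sorted(corpus, key=lambda item: cosine(fused, item[1]), reverse=True)

# Hypothetical 3-dimensional embeddings for three candidate clips.
corpus = [
    ("clip_a", [0.9, 0.1, 0.0]),
    ("clip_b", [0.1, 0.9, 0.1]),
    ("clip_c", [0.4, 0.5, 0.8]),
]
ranking = retrieve([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], corpus)
print([vid for vid, _ in ranking])
```

The point of the sketch is the query structure, not the fusion itself: because both modalities contribute to the fused vector, a candidate must match the reference video *as modified by* the text, which is what distinguishes CoVR from plain text-to-video or video-to-video retrieval.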

A Benchmark and Agentic Framework for Omni-Modal Reasoning and Tool Use in Long Videos

arXiv ·

A new benchmark, LongShOTBench, is introduced for evaluating multimodal reasoning and tool use in long videos, featuring open-ended questions and diagnostic rubrics. It addresses the limitations of existing datasets by combining temporal length with multimodal richness, using human-validated samples. LongShOTAgent, an agentic system for analyzing long videos, is also presented; together, the benchmark and agent expose the challenges these tasks pose for state-of-the-art multimodal large language models (MLLMs).