GCC AI Research

Unlocking the Potential of Large Models for Vision Related Tasks

MBZUAI

Summary

Yanwei Fu from Fudan University will present research on multimodal models, robotic grasping, and fMRI neural decoding. Topics include few-shot learning, object-centric self-supervised learning, image manipulation, and visual-language alignment. The research also covers Transformer compression and applications of large models combined with multi-view stereo (MVS) 3D modeling for robotic arm grasping. Why it matters: While the talk is not directly about Middle East AI, the topics covered are core to advancing AI research and applications in the region.

Related

A Benchmark and Agentic Framework for Omni-Modal Reasoning and Tool Use in Long Videos

arXiv

A new benchmark, LongShOTBench, is introduced for evaluating multimodal reasoning and tool use in long videos, featuring open-ended questions and diagnostic rubrics. The benchmark addresses the limitations of existing datasets by combining temporal length and multimodal richness, using human-validated samples. LongShOTAgent, an agentic system, is also presented for analyzing long videos, with both the benchmark and agent demonstrating the challenges faced by state-of-the-art MLLMs.

How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs

arXiv

Researchers from MBZUAI have introduced the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES) for assessing Video-LMMs. The benchmark evaluates models across 11 real-world video dimensions, revealing challenges in robustness and reasoning, particularly for open-source models. A training-free Dual-Step Contextual Prompting (DSCP) technique is proposed to enhance Video-LMM performance, with the dataset and code made publicly available.