GCC AI Research

Results for "ECCV"

International Conference on Computer Vision highlights MBZUAI’s position at the forefront of global AI research

MBZUAI ·

MBZUAI had 30 papers accepted at the International Conference on Computer Vision (ICCV) in Paris, which received 8,260 submissions in total. Visiting Professor Ivan Laptev served as one of the ICCV Program Chairs. Two papers from MBZUAI researchers focused on analyzing moving images: one introduced Video-FocalNets for action analysis, and the other explored transferring knowledge from still-image analysis to video. Why it matters: MBZUAI's strong presence at ICCV demonstrates its growing prominence in the global computer vision research landscape.

Computer vision: Teaching computers how to see the world

KAUST ·

KAUST's Visual Computing Center (VCC) is researching computer vision, image processing, and machine learning, with applications in self-driving cars, surveillance, and security. Professor Bernard Ghanem is working on teaching machines to understand visual data semantically, similar to how humans perceive the world. Self-driving cars use visual sensors to interpret traffic signals and detect obstacles, while computer vision also assists governments and corporations with security applications like facial recognition and detecting unattended luggage. Why it matters: Advancements in computer vision at KAUST can contribute to innovations in autonomous vehicles and enhance security measures in the region.

MBZUAI faculty member wins top prizes at European AI conference

MBZUAI ·

MBZUAI faculty member Dr. Hang Dai won first and second place in the Commands 4 Autonomous Vehicles (C4AV) Workshop Challenge at ECCV 2020, competing as part of two teams. The C4AV Workshop Challenge aims to develop models for joint understanding of vision and language in self-driving cars. Why it matters: This win demonstrates MBZUAI's commitment to advancing AI research and its applications in key areas like autonomous vehicles.

EchoCoTr: Estimation of the Left Ventricular Ejection Fraction from Spatiotemporal Echocardiography

arXiv ·

Researchers from MBZUAI have developed EchoCoTr, a novel spatiotemporal deep learning method for estimating left ventricular ejection fraction (LVEF) from echocardiograms. EchoCoTr combines CNNs and vision transformers to overcome the limitations of each when applied to medical video data. The method achieves state-of-the-art results on the EchoNet-Dynamic dataset, demonstrating improved accuracy compared to existing approaches, with code available on GitHub.
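
The published EchoCoTr code is on GitHub; as a rough, hedged sketch of the general idea rather than the authors' implementation, the PyTorch snippet below shows a minimal CNN-plus-transformer hybrid that extracts per-frame spatial features and aggregates them over time to regress a single LVEF value. The CnnTransformerRegressor class and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class CnnTransformerRegressor(nn.Module):
    """Hedged sketch of a CNN + transformer hybrid for LVEF regression (not EchoCoTr)."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Per-frame CNN backbone extracts spatial features from each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, embed_dim)
        # A small transformer encoder mixes the per-frame features over time.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(embed_dim, 1)  # single scalar: the LVEF estimate

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, frames, channels, height, width)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w)).flatten(1)  # (b*t, 64)
        tokens = self.proj(feats).reshape(b, t, -1)                 # (b, t, dim)
        pooled = self.temporal(tokens).mean(dim=1)                  # (b, dim)
        return self.head(pooled).squeeze(-1)                        # (b,)


model = CnnTransformerRegressor()
clip = torch.randn(2, 32, 3, 112, 112)  # two clips, 32 frames of 112x112 pixels
print(model(clip).shape)  # torch.Size([2])
```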

Making sense of space and time in video

MBZUAI ·

MBZUAI researchers, led by Syed Talal Wasim, presented a new approach to video analysis at ICCV in Paris. The approach builds on still-image processing techniques such as focal modulation, analyzing the spatial and temporal information in video separately. It aims to improve temporal aggregation while avoiding the computational complexity of transformers. Why it matters: This research advances video understanding in computer vision by offering a more efficient method for temporal modeling, crucial for applications like activity recognition and video surveillance.
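
As a hedged illustration of the general idea of separating spatial and temporal processing (not the Video-FocalNets code), the sketch below uses a depthwise 2D convolution per frame for spatial mixing and a depthwise 1D convolution along the frame axis for temporal aggregation, so the cost grows linearly with the number of frames rather than quadratically as in full spatio-temporal attention. The FactorizedSpaceTimeBlock class and its layer choices are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class FactorizedSpaceTimeBlock(nn.Module):
    """Hedged sketch of factorized space-time processing (not Video-FocalNets)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Spatial mixing: depthwise 2D convolution applied to each frame.
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels)
        # Temporal mixing: depthwise 1D convolution along the frame axis.
        self.temporal = nn.Conv1d(channels, channels, kernel_size=3,
                                  padding=1, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, t, c, h, w = x.shape
        # Spatial pass runs independently on every frame.
        s = self.spatial(x.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)
        # Fold spatial positions into the batch so the 1D conv sees (b*h*w, c, t).
        tmp = s.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
        out = self.temporal(tmp).reshape(b, h, w, c, t)
        # Residual connection, restoring the (batch, frames, channels, h, w) layout.
        return out.permute(0, 4, 3, 1, 2) + x


block = FactorizedSpaceTimeBlock(channels=64)
features = torch.randn(1, 8, 64, 14, 14)  # 8 frames of 14x14 feature maps
print(block(features).shape)  # torch.Size([1, 8, 64, 14, 14])
```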

Towards Practical Remote Photoplethysmography Detector

MBZUAI ·

Pong C Yuen from Hong Kong Baptist University will present a talk on remote photoplethysmography (rPPG) detection. The talk will review the development of rPPG detection, share recent research, and discuss future directions. rPPG is a non-contact technique that uses computer vision to estimate physiological signals, such as heart rate, from video, with applications in healthcare. Why it matters: Advancements in rPPG could enable new remote patient monitoring and diagnostic tools in the region, reducing the need for physical contact.
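
For readers unfamiliar with rPPG, a classical baseline (not the speaker's method) estimates heart rate by averaging the green channel over a face region in each frame, restricting the resulting signal to plausible pulse frequencies, and reading off the dominant one. The sketch below illustrates that pipeline on synthetic data; the estimate_heart_rate function and its parameters are illustrative assumptions.

```python
import numpy as np


def estimate_heart_rate(face_frames: np.ndarray, fps: float = 30.0) -> float:
    """face_frames: (num_frames, height, width, 3) RGB crops of the face region."""
    # 1. Spatially average the green channel to get one sample per frame.
    signal = face_frames[..., 1].reshape(len(face_frames), -1).mean(axis=1)
    signal = signal - signal.mean()
    # 2. Look at the frequency spectrum of the pulse signal.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # 3. Restrict to plausible pulse rates (0.7-4 Hz, i.e. 42-240 bpm) and
    #    take the strongest component as the heart rate.
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return float(peak * 60.0)  # beats per minute


# Synthetic 10-second clip whose brightness pulses at 1.2 Hz (72 bpm).
t = np.arange(300) / 30.0
pulse = 128 + 10 * np.sin(2 * np.pi * 1.2 * t)
frames = np.broadcast_to(pulse[:, None, None, None], (300, 32, 32, 3))
print(round(estimate_heart_rate(frames)))  # ~72
```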

Contrastive Pretraining for Echocardiography Segmentation with Limited Data

arXiv ·

This paper introduces a self-supervised contrastive learning method for segmenting the left ventricle in echocardiography images when limited labeled data is available. The approach uses contrastive pretraining to improve the performance of UNet and DeepLabV3 segmentation networks. Experiments on the EchoNet-Dynamic dataset show the method achieves a Dice score of 0.9252, outperforming existing approaches, with code available on GitHub.
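
The paper's exact recipe is in its GitHub repository; as a hedged sketch of the underlying contrastive-pretraining idea only, the snippet below implements a standard InfoNCE-style loss that pulls two augmented views of the same image together and pushes other images apart, the kind of objective used to pretrain an encoder before fine-tuning a segmentation network on scarce labels. The info_nce_loss function and its temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2B, dim) stacked views
    sim = z @ z.t() / temperature             # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))         # a sample is never its own positive
    batch = z1.size(0)
    # The positive for sample i is its other augmented view, offset by the batch size.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)


# Toy usage with random embeddings standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2).item())
```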

Reconstruction and Animation of Realistic Head Avatars

MBZUAI ·

Egor Zakharov from the ETH Zurich AIT lab will present research on creating controllable and detailed 3D head avatars from data captured with consumer-grade devices. The presentation will cover high-fidelity image-based facial reconstruction and animation, as well as video-based reconstruction of detailed structures such as hairstyles. He will also showcase the integration of human-centric assets into virtual environments for real-time telepresence and entertainment. Why it matters: This research contributes to advancements in digital human modeling and telepresence, with applications in communication and gaming within the region.