KAUST's Visual Computing Center (VCC) is researching computer vision, image processing, and machine learning, with applications in self-driving cars, surveillance, and security. Professor Bernard Ghanem is working on teaching machines to understand visual data semantically, similar to how humans perceive the world. Self-driving cars use visual sensors to interpret traffic signals and detect obstacles, while computer vision also assists governments and corporations with security applications like facial recognition and detecting unattended luggage. Why it matters: Advancements in computer vision at KAUST can contribute to innovations in autonomous vehicles and enhance security measures in the region.
KAUST's Image and Video Understanding Lab is developing machine learning algorithms for computer vision and object tracking, with applications in video content search and UAV navigation. Its algorithms detect specific activities in videos, helping platforms flag unwanted content and deliver relevant ads. The lab's object-tracking algorithm also enables UAVs to follow objects autonomously. Why it matters: This research enhances video content analysis and UAV capabilities, positioning KAUST as a leader in computer vision and AI applications within the region.
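The kind of frame-to-frame object tracking described above can be illustrated with a minimal greedy IoU (intersection-over-union) matcher. This is a generic sketch, not the lab's actual algorithm; the box coordinates and the 0.3 threshold are illustrative assumptions:

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2); compute intersection-over-union.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def track(prev_boxes, detections, threshold=0.3):
    """Greedily match each tracked box to the new detection with highest IoU."""
    matches, used = {}, set()
    for tid, box in prev_boxes.items():
        best, best_iou = None, threshold
        for i, det in enumerate(detections):
            if i in used:
                continue
            score = iou(box, det)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[tid] = detections[best]
            used.add(best)
    return matches

# Track 0 follows the detection that overlaps its previous box.
matched = track({0: (0, 0, 10, 10)}, [(1, 1, 11, 11), (50, 50, 60, 60)])
```

Real trackers add motion models and appearance features on top of this association step, but the IoU-matching core is the same idea a UAV uses to keep locking onto the same target across frames.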
MBZUAI's BioMedIA lab, led by Mohammad Yaqub, is developing AI solutions for healthcare challenges in cardiology, pulmonology, and oncology using computer vision. Yaqub's previous research analyzed fetal ultrasound images to correlate bone development with maternal vitamin D levels. The lab is now applying image analysis to improve the treatment of head and neck cancer using PET and CT scans. Why it matters: This research demonstrates the potential of AI and computer vision to improve diagnostic accuracy and accessibility of healthcare in the region and beyond.
Dr. Xiaoming Liu from Michigan State University discussed computer vision techniques for 3D world understanding at a talk hosted by MBZUAI. The talk covered 3D reconstruction, detection, depth estimation, and velocity estimation, with applications in biometrics and autonomous driving. Dr. Liu also touched on anti-spoofing and fair face recognition research at MSU's Computer Vision Lab. Why it matters: Showcasing international experts and research directions helps to catalyze computer vision and 3D understanding research efforts within the UAE's AI ecosystem.
MBZUAI researchers are working to improve computer vision models by incorporating common-sense knowledge. They aim to address issues such as the generation of unrealistic human features, for example hands with an incorrect number of fingers. By integrating common-sense knowledge, like the fact that humans typically have five fingers per hand, they seek to make deep learning models more reliable. Why it matters: This research could improve the accuracy and trustworthiness of AI-generated content, making it more suitable for real-world applications.
MBZUAI Professor Fahad Khan is working on a unified theory of machine visual intelligence. His goal is to enable AI systems to better understand and function in complex, chaotic visual environments. The aim is to improve real-world applications like smart cities, personalized healthcare, and autonomous vehicles. Why it matters: This research could significantly advance AI's ability to perceive and interact with the real world, especially in challenging environments common in the developing world.
Laurent Najman presented the Power Watershed (PW) optimization framework for image and data processing. The PW framework speeds up graph-based algorithms such as the random walker and ratio-cut clustering, can be adapted to other graph-based cost-minimization methods, and can be integrated with deep learning networks. Why it matters: This framework could enable more efficient and scalable image and data processing algorithms relevant to computer vision and related fields in the Middle East.
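To make the graph-based algorithms PW builds on concrete, here is a minimal NumPy sketch of the classic random-walker formulation on a tiny path graph (this is not Najman's PW implementation; the graph and seed choices are made up for the example). Seeded nodes have fixed label probabilities, and the unseeded nodes' probabilities are obtained by solving a linear system in the graph Laplacian:

```python
import numpy as np

# Tiny 4-node path graph 0 - 1 - 2 - 3 with unit edge weights.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W  # graph Laplacian

# Seeds: node 0 is foreground (probability 1), node 3 is background (0).
seeds = {0: 1.0, 3: 0.0}
unseeded = [i for i in range(4) if i not in seeds]

# Random walker: solve L_uu p_u = -L_us p_s for the unseeded nodes.
L_uu = L[np.ix_(unseeded, unseeded)]
L_us = L[np.ix_(unseeded, list(seeds))]
p_s = np.array(list(seeds.values()))
p_u = np.linalg.solve(L_uu, -L_us @ p_s)
```

On the path graph this gives the harmonic interpolation p_1 = 2/3 and p_2 = 1/3; each unseeded node's foreground probability is the chance a random walk started there reaches the foreground seed first. PW can be viewed as studying the limit behavior of such energies as edge weights are raised to a power.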
A talk introduced a computational framework for learning a compact, structured representation of real-world datasets that is both discriminative and generative. The framework learns a closed-loop transcription between the distribution of a high-dimensional multi-class dataset and an arrangement of multiple independent subspaces, known as a linear discriminative representation (LDR). The optimality of the closed-loop transcription can be characterized in closed form by an information-theoretic measure known as the rate reduction. Why it matters: The framework unifies the concepts and benefits of auto-encoders and GANs and generalizes them to learning representations of multi-class visual data that are both discriminative and generative.
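The rate-reduction measure can be computed directly. The sketch below follows the standard definition (whole-set coding rate minus the average class-conditional coding rate) rather than the talk's specific code; the `eps` value and the toy features are arbitrary choices for illustration:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(m*eps^2) Z Z^T) for a d x m feature matrix Z."""
    d, m = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (m * eps**2) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R: coding rate of the whole set minus the class-weighted average."""
    m = Z.shape[1]
    rc = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]
        rc += (Zc.shape[1] / m) * coding_rate(Zc, eps)
    return coding_rate(Z, eps) - rc

labels = np.array([0, 0, 1, 1])
# Classes on orthogonal subspaces: rate reduction is large.
Z_orth = np.array([[1., 1., 0., 0.],
                   [0., 0., 1., 1.]])
# Classes collapsed onto the same direction: rate reduction vanishes.
Z_same = np.array([[1., 1., 1., 1.],
                   [0., 0., 0., 0.]])
```

Maximizing this quantity pushes features of different classes toward independent subspaces while keeping each class compact, which is what makes the learned LDR both discriminative and (via the closed loop) generative.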