In a talk at KAUST, Dr. Andrew Bastawrous, CEO and co-founder of Peek, discussed his work on mobile eye clinics. He developed Peek Acuity and Peek Retina, which turn smartphones into tools for detecting visual impairment: the smartphone screen administers vision tests, while a camera clip-on images the inside of the eye. Why it matters: This low-cost mobile ophthalmic tool has the potential to prevent and treat vision loss in underserved communities.
MBZUAI Professor Fahad Khan is working on a unified theory of machine visual intelligence. His goal is to enable AI systems to better understand and function in complex, chaotic visual environments. The aim is to improve real-world applications like smart cities, personalized healthcare, and autonomous vehicles. Why it matters: This research could significantly advance AI's ability to perceive and interact with the real world, especially in challenging environments common in the developing world.
A professor from Nanyang Technological University (NTU), Singapore gave a talk at MBZUAI on "Just-Noticeable Difference (JND)" models in visual intelligence. The talk covered visual JND models, current research and applications, and future opportunities for JND modeling. By quantifying the smallest change a human viewer can perceive, JND models let systems discard imperceptible detail, helping tackle big-data visual workloads with limited resources in a user-centric, energy-efficient way. Why it matters: Exploring JND could lead to advancements in AI applications related to visual signal processing, image synthesis, and generative AI in the region.
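To make the JND idea concrete, here is a minimal sketch of a luminance JND threshold based on Weber's law, one of the simplest models in this family. The Weber fraction `k` (roughly 0.02 under bright viewing conditions) and the `floor` parameter are illustrative assumptions, not values from the talk:

```python
def jnd_threshold(intensity, k=0.02, floor=1.0):
    """Smallest intensity change a viewer would notice at this background level.

    Weber's law says delta_I / I is roughly constant (k), so the
    threshold grows with background intensity. `floor` keeps the
    threshold nonzero in dark regions, where Weber's law breaks down.
    """
    return max(k * intensity, floor)


def is_noticeable(original, modified, k=0.02):
    """True if the change between two pixel values exceeds the JND threshold."""
    return abs(modified - original) > jnd_threshold(original, k)


# The same +2 change is invisible on a bright background (threshold 4.0)
# but noticeable on a dark one (threshold 1.0).
print(is_noticeable(200.0, 202.0))  # False
print(is_noticeable(20.0, 22.0))    # True
```

A codec or renderer built on this idea would skip encoding differences that fall below the per-pixel threshold, which is how JND modeling saves bandwidth and compute.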
Shozo Yokoyama, a biology professor at Emory University specializing in color vision evolution, was interviewed by KAUST. Yokoyama's lab identified amino acids regulating red-green and UV vision in vertebrates. He emphasizes the importance of young scientists developing fresh perspectives on evolution and learning directly from animals. Why it matters: While not directly an AI story, the piece highlights KAUST's broader research focus and its investment in attracting and showcasing international scientific expertise, relevant to building a strong research ecosystem.
KAUST researchers in the Sensors Lab are developing neuromorphic circuits for vision sensors, drawing inspiration from the human eye. They created flexible photoreceptors from hybrid perovskite materials whose capacitance can be tuned by light stimulation, mimicking the human retina. The team collaborates with experts in image characterization and brain pattern recognition to connect this artificial 'eye' to a 'brain' capable of object identification. Why it matters: This biomimetic approach promises advancements in AI, machine learning, and smart city development within the region.