This article previews a talk on "Assistive Augmentation," the design of human-computer interfaces that augment human abilities. Examples include 'AiSee' for blind users, 'Prospero' for memory training, and 'MuSS-Bits,' which lets deaf users feel music. Suranga Nanayakkara of the National University of Singapore will present the talk, drawing on insights from psychology, human-centered machine learning, and design thinking. Why it matters: Such assistive technologies can significantly improve the quality of life of people with disabilities and extend human capabilities.
One vision in computer science has computing devices becoming proactive assistants that enhance many aspects of daily life through user digitization. Current devices capture only coarse digital representations of their users, leaving significant room for improvement. Karan, a Ph.D. candidate at CMU, develops technologies that let consumer devices capture richer user representations without sacrificing practicality. Why it matters: Advances in user digitization can improve extended reality experiences, health tracking, and workplace productivity, enhancing the utility of consumer devices.
Tetsunari Inamura's talk explores using VR to collect human-robot interaction (HRI) data and to tailor assistive robotic functions to individual users. He discusses symbol emergence through multimodal interaction, interactive behavior generation via symbol manipulation, and VR-based data collection. The talk emphasizes enhancing human capabilities over the long term while avoiding over-reliance on technology. Why it matters: This research promotes independence and growth in human-robot interaction and could reshape assistive technologies.
MBZUAI will present two assistive AI prototypes at GITEX 2025: smart glasses with a camera and eye tracker that identify objects and medication, and a brain-computer interface (BCI) device that controls a robotic dog's movements. The smart glasses use a multimodal large language model (LLM) to help visually impaired individuals, while the BCI aims to restore hands-free communication for people with mobility limitations. Hisham Cholakkal leads the research team, which received a Meta Regional Research Grant 2025 for its work on multimodal LLMs for smart wearables. Why it matters: The research demonstrates the potential of AI to improve the quality of life for vulnerable populations and addresses the challenge of providing cost-effective care for aging societies.
The article discusses the potential of AI-enabled assistive technologies to empower people with disabilities (PWD), noting that over one billion people worldwide live with some form of disability. It highlights examples such as communication tools, assistive robots, and smart visual aids, and emphasizes the need to address security and privacy concerns. The author, Ishfaq Ahmad of the University of Texas at Arlington, points out that as the global population grows, over two billion people will need assistive products by 2030. Why it matters: The piece advocates using AI to tackle critical human rights issues and improve the lives of a significant portion of the global population in the face of rising disability rates.