GCC AI Research

Results for "social robot"

Humanoid Robots and the Computational Problems Regarding the Human

MBZUAI ·

Yoshihiko Nakamura from the University of Tokyo discusses the computational challenges of humanoid robots, which extend beyond sensing and control to understanding human movement, sensation, and relationships. The talk covers recent research on mechanical humanoid robots, with a focus on actuators and on computational problems related to human movement. Nakamura argues that humanoid robots must be able to interpret human actions and interactions if they are to be applied effectively. Why it matters: Addressing these computational challenges is crucial for developing more sophisticated, human-compatible robots for human-centered applications in the region and globally.

The chameleon effect in education with social AI: can children learn by subconsciously mimicking a social robot?

MBZUAI ·

Maha Elgarf from NYU Abu Dhabi presented research on using social robots to stimulate creativity in children through subconscious mimicry, leveraging the 'chameleon effect'. The research involved a series of studies in which children engaged in storytelling with a social robot and their creativity was assessed. Elgarf also discussed the use of Large Language Models (LLMs) in education and challenges in the field. Why it matters: This work explores innovative applications of social robotics and AI in education within the UAE, potentially enhancing children's learning and creativity.

Super-aligned Machine Intelligence via a Soft Touch

MBZUAI ·

Song Chaoyang from the Southern University of Science and Technology (SUSTech) presented research on Vision-Based Tactile Sensing (VBTS) for robot learning, combining soft robotic design with learning algorithms to achieve state-of-the-art performance in tactile perception. Their VBTS solution demonstrates robustness up to 1 million test cycles and enables multi-modal outputs from a single, vision-based input, facilitating applications such as amphibious tactile grasping and industrial welding. The talk also highlighted the DeepClaw system for capturing human demonstration actions, aiming for a universal interaction interface. Why it matters: This research advances embodied intelligence by improving robot dexterity and adaptability through enhanced tactile sensing, which is crucial for complex manipulation tasks in various sectors such as manufacturing and healthcare within the region.

A Cross-cultural Corpus of Annotated Verbal and Nonverbal Behaviors in Receptionist Encounters

arXiv ·

Researchers created a cross-cultural corpus of annotated verbal and nonverbal behaviors in receptionist interactions. The corpus captures native speakers of American English and of Arabic as they role-play scenarios at university reception desks in Doha, Qatar, and Pittsburgh, USA. Nonverbal behaviors, including gaze direction, hand gestures, torso positions, and facial expressions, were manually annotated. Why it matters: This resource can be valuable for the human-robot interaction community, especially for building culturally aware AI systems.

Integrating Virtual Reality and Robotics: Enhancing Human and Robot Experiences in Assistive Technologies

MBZUAI ·

Tetsunari Inamura's talk explores the use of virtual reality (VR) to collect human-robot interaction (HRI) data and to tailor assistive robotic functionalities to individual users. He discusses symbol emergence through multimodal interaction, interactive behavior generation through symbol manipulation, and VR-based data collection. The talk emphasizes enhancing human capabilities over the long term while avoiding over-reliance on technology. Why it matters: This research promotes independence and growth in human-robot interaction, potentially advancing assistive technologies in the region.

Human-Computer Conversational Vision-and-Language Navigation

MBZUAI ·

A presentation discusses the evolution of Vision-and-Language Navigation (VLN) from benchmarks like Room-to-Room (R2R). It highlights the role of Large Language Models (LLMs) such as GPT-4 in enabling more natural human-machine interactions. The presentation showcases work using LLMs to decode navigational instructions and improve robotic navigation. Why it matters: This research demonstrates the potential of merging vision, language, and robotics for advanced AI applications in navigation and human-computer interaction.

The intelligence of the hand

MBZUAI ·

Lorenzo Jamone from Queen Mary University of London presented research on cognitive robotics, focusing on tactile exploration and manipulation by robots. The talk covered how biology, engineering, and AI can be combined to build advanced robotic systems. Jamone directs the CRISP group and has over 100 publications in cognitive robotics. Why it matters: This highlights ongoing research into more sophisticated robotic systems that can interact with complex environments, an area crucial for future applications in manufacturing and human-robot collaboration in the GCC.

Tactile robots: building the machine and learning the self

MBZUAI ·

Sami Haddadin from the Technical University of Munich (TUM) discusses a shift in robotics towards machines that autonomously develop their own blueprints and controls. He highlights advancements driven by human-centered design, soft control, and model-based machine learning, enabling human-robot collaboration in manufacturing and healthcare. Haddadin also presents progress towards autonomous machine design and modular control architectures for complex manipulation tasks. Why it matters: This research has implications for advancing robotics and AI in the GCC region, especially in manufacturing and healthcare, by enabling safer and more efficient human-robot collaboration.