GCC AI Research

Results for "robot phone"

HONOR advances its AI vision at MWC 2026 with robot phone, humanoid robot and Magic V6 - Saudi Gazette

Saudi Gazette ·

HONOR showcased advancements in its AI strategy at MWC 2026, featuring a robot phone concept capable of adapting to user needs and a humanoid robot prototype. The company also introduced the Magic V6, which integrates AI for enhanced user experiences. Why it matters: This signals growing investment and innovation in AI-driven mobile devices from Chinese manufacturers, potentially shaping the future of human-device interaction in the region and globally.

Learning Robot Super Autonomy

MBZUAI ·

Giuseppe Loianno from NYU presented research on creating "Super Autonomous" robots that are Unmanned, Small, Agile, Resilient, and Collaborative (USARC). The research focuses on learning models, control, and navigation policies for single robots and robot teams operating in challenging environments. The talk highlighted the potential of these robots in logistics, reconnaissance, and other time-sensitive tasks. Why it matters: This points to growing research interest in advanced robotics in the region, especially given the focus on smart cities and automation.

Humanoid Robots and the Computational Problems Regarding the Human

MBZUAI ·

Yoshihiko Nakamura from the University of Tokyo discusses the computational challenges of humanoid robots, extending beyond sensing and control to understanding human movement, sensation, and relationships. The talk covers recent research on mechanical humanoid robots with a focus on actuators and computational problems related to human movements. Nakamura highlights the need for humanoid robots to interpret human actions and interactions for effective application. Why it matters: Addressing these computational challenges is crucial for developing more sophisticated and human-compatible robots for use in various human-centered applications within the region and globally.

ARRC's Groundbreaking Advancements in Underwater Communication Technology

TII ·

The Autonomous Robotics Research Center (ARRC) is developing underwater communication systems, including a multimode modem prototype, and has filed three patents. One key technology is the Universal Underwater Software Defined Modem (UniSDM), which supports sound, magnetic induction, light, and radio waves. ARRC also developed a network management framework for automatic network slicing (ANS) of communication resources. Why it matters: These advancements are crucial for improving underwater exploration, industrial maintenance, and marine monitoring in the region, enabling more efficient and reliable communication for underwater robots.

The intelligence of the hand

MBZUAI ·

Lorenzo Jamone from Queen Mary University of London presented on cognitive robotics, focusing on tactile exploration and manipulation by robots. The talk covered combining biology, engineering, and AI for advanced robotic systems. Jamone directs the CRISP group and has over 100 publications in cognitive robotics. Why it matters: This highlights the ongoing research into more sophisticated robotic systems that can interact with complex environments, an area crucial for future applications in manufacturing and human-robot collaboration in the GCC.

Exploring deep-sea exploration

KAUST ·

Stanford's Robotics Laboratory, in collaboration with KAUST professors Khaled Nabil Salama and Christian Voolstra and MEKA Robotics, developed OceanOne, a bimanual underwater humanoid robot avatar with haptic feedback. OceanOne lets human pilots explore ocean depths with high fidelity by streaming real-time imagery back to the surface. The robot has two fully articulated arms and a tail section housing batteries, computers, and thrusters. Why it matters: This collaboration between KAUST and Stanford highlights the increasing role of robotics and AI in deep-sea exploration, with potential applications in underwater research and resource discovery in the Red Sea and beyond.

Structured World Models for Robots

MBZUAI ·

Krishna Murthy, a postdoc at MIT, researches computational world models that enable robots to understand and operate effectively in the physical world. His work develops differentiable computing approaches for spatial perception and interfaces large image, language, and audio models with 3D scenes. Murthy envisions structured world models working alongside scaling-based approaches to create versatile robot perception and planning algorithms. Why it matters: This research could significantly advance robotics by enabling more sophisticated perception, reasoning, and action capabilities in embodied agents.

Super-aligned Machine Intelligence via a Soft Touch

MBZUAI ·

Song Chaoyang from the Southern University of Science and Technology (SUSTech) presented research on Vision-Based Tactile Sensing (VBTS) for robot learning, combining soft robotic design with learning algorithms to achieve state-of-the-art tactile perception. Their VBTS solution remains robust over 1 million test cycles and produces multi-modal outputs from a single vision-based input, enabling applications such as amphibious tactile grasping and industrial welding. The talk also highlighted the DeepClaw system for capturing human demonstration actions, aiming toward a universal interaction interface. Why it matters: This research advances embodied intelligence by improving robot dexterity and adaptability through enhanced tactile sensing, which is crucial for complex manipulation tasks in sectors such as manufacturing and healthcare within the region.