GCC AI Research

Results for "motion synthesis"

Synthesis of a Six-Bar Gripper Mechanism for Aerial Grasping

arXiv ·

This paper presents the synthesis of a 1-DoF six-bar gripper mechanism for aerial grasping, designed for a task in the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020. The synthesis process involves selecting the mechanism class, determining the number of links and joints using algebraic methods, and optimizing link dimensions via geometric programming. The gripper was modeled in CAD software, additively manufactured, and mounted on a UAV; a single DC motor actuates it to grip spherical objects. Why it matters: The research contributes to advancements in robotics and aerial manipulation, with potential applications in various industries, particularly for tasks requiring remote object retrieval and manipulation.
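The algebraic link-and-joint-count step can be illustrated with the standard Chebyshev–Grübler–Kutzbach mobility criterion for planar linkages; this is a generic sketch of that textbook formula, not necessarily the exact method used in the paper.

```python
def planar_mobility(n_links: int, n_joints: int) -> int:
    """Chebyshev-Grubler-Kutzbach criterion for a planar linkage
    whose joints each allow one degree of freedom (revolute/prismatic):
    DoF = 3*(n - 1) - 2*j."""
    return 3 * (n_links - 1) - 2 * n_joints

# A six-bar linkage (Watt or Stephenson type) has 6 links and 7 revolute
# joints, giving a single degree of freedom -- so one DC motor suffices.
print(planar_mobility(6, 7))   # -> 1
print(planar_mobility(4, 4))   # -> 1 (the familiar four-bar linkage)
```

The 1-DoF result is what lets a single actuator drive the whole gripper, consistent with the one-motor design described above.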

Is Human Motion a Language without Words?

MBZUAI ·

This article previews a talk by Gül Varol from École des Ponts ParisTech on bridging natural language and 3D human motion. The talk will cover text-to-motion synthesis using generative models and text-to-motion retrieval models based on the ACTOR, TEMOS, TMR, TEACH, and SINC papers. Varol's research interests include video representation learning, human motion synthesis, and sign languages. Why it matters: Research in this area could enable more intuitive human-computer interaction and new applications in areas like virtual reality and robotics.
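Text-to-motion retrieval systems like those mentioned above typically embed a text query and candidate motion clips into a shared vector space and rank clips by similarity. The following is a minimal, generic sketch of that ranking step with toy placeholder embeddings; the actual encoders and training in TMR and related work are far more involved.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(text_emb, motion_embs):
    """Rank motion-clip embeddings by similarity to a text embedding;
    returns clip indices, best match first."""
    scores = [cosine(text_emb, m) for m in motion_embs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# Toy 3-d embeddings standing in for real encoder outputs:
text = [1.0, 0.0, 0.5]
motions = [[0.9, 0.1, 0.4], [-1.0, 0.2, 0.0], [0.0, 1.0, 0.0]]
print(retrieve(text, motions))  # -> [0, 2, 1]
```

In practice the two encoders are trained jointly (e.g. with a contrastive loss) so that matching text-motion pairs land close together in the shared space.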

FancyVideo: Towards Dynamic and Consistent Video Generation via Cross-frame Textual Guidance

arXiv ·

FancyVideo, a new video generator, introduces a Cross-frame Textual Guidance Module (CTGM) to enhance text-to-video models. CTGM uses a Temporal Information Injector and a Temporal Affinity Refiner to achieve frame-specific textual guidance, improving the model's grasp of temporal logic in prompts. Experiments on the EvalCrafter benchmark demonstrate FancyVideo's state-of-the-art performance in generating dynamic and consistent videos; it also supports image-to-video generation.
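The core idea of frame-specific textual guidance can be sketched as cross-attention in which each frame carries its own query over the text tokens, so different frames extract different guidance from the same prompt. This is a generic, dependency-free illustration of that pattern; it is not the paper's actual CTGM, whose injector and refiner components are more elaborate.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def frame_text_attention(frame_queries, text_keys, text_values):
    """For each frame's query vector, attend over the text-token keys and
    return a per-frame weighted sum of the text-token values."""
    d = len(text_keys[0])
    out = []
    for q in frame_queries:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
                  for k in text_keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, text_values))
                    for j in range(len(text_values[0]))])
    return out

# Two frames with different queries pull different guidance from two tokens:
guided = frame_text_attention([[1.0, 0.0], [0.0, 1.0]],
                              [[1.0, 0.0], [0.0, 1.0]],
                              [[1.0, 0.0], [0.0, 1.0]])
```

Because each row of `guided` is a different mixture of the text values, successive frames can follow different parts of a temporally ordered prompt (e.g. "a cat sits, then jumps").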