EE Seminar: Learning the Structure of Motion

February 3, 2020, 15:00 
Room 011, Kitot Building 

Speaker: Kfir Aberman

Ph.D. student under the supervision of Prof. Daniel Cohen-Or and Prof. Shai Avidan


Monday, February 3rd, 2020 at 15:00
Room 206, Wolfson Mechanical Eng. Bldg., Faculty of Engineering

Learning the Structure of Motion


Abstract

Human motion is an abstract, fundamental attribute underlying actions, gestures, and behavior, and its analysis and synthesis have long been central research topics in computer vision and computer animation. However, understanding motion, and controlling or editing it in a high-level, intuitive manner, has been hindered by the lack of methods for effectively disentangling its various attributes. As with many other areas of vision and graphics, motion analysis and synthesis have benefited greatly from recent progress in machine learning and deep neural networks.


In this research, we propose neural network architectures that decompose motion into distinct, disentangled attributes and re-compose them into newly synthesized sequences: for example, separating the dynamic aspects of motion from the static ones, distinguishing between motion style and motion content, and learning an abstract, character-agnostic motion representation that can be transferred to characters with different body proportions.

This pipeline makes it possible to mix attributes extracted from different inputs and to synthesize motion in an intuitive, exemplar-based manner.
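To illustrate the idea of exemplar-based attribute mixing, here is a minimal sketch, not the authors' actual architecture: two placeholder "encoders" split a motion clip into separate latent codes (stand-ins for content and style), and a "decoder" recomposes codes taken from two different clips into a new sequence. The encoder/decoder functions and clip shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_content(motion):
    # placeholder "content" encoder: temporal average over frames
    return motion.mean(axis=0)

def encode_style(motion):
    # placeholder "style" encoder: per-feature variability across frames
    return motion.std(axis=0)

def decode(content_code, style_code, n_frames):
    # placeholder decoder: combine the two codes and broadcast
    # them back into a sequence of frames
    return np.tile(content_code + style_code, (n_frames, 1))

# two example motion clips, shaped (frames, joint features)
clip_a = rng.normal(size=(60, 24))   # supplies the "content"
clip_b = rng.normal(size=(60, 24))   # supplies the "style"

# mix attributes extracted from different inputs into a new sequence
mixed = decode(encode_content(clip_a), encode_style(clip_b), n_frames=60)
print(mixed.shape)  # (60, 24)
```

In the actual research the encoders and decoder are learned neural networks trained so that each latent code captures only one attribute; the sketch only shows the data flow of decompose-then-recompose.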


Our methodology is applied to 3D motion originating from animated characters as well as to motion capture (MoCap) data. Furthermore, our research enables the use of ordinary videos, which account for the vast majority of available depictions of human motion, for motion capture and motion editing, paving the way for a multitude of applications in computer vision and computer graphics. In particular, we demonstrate state-of-the-art performance in several fundamental motion analysis and synthesis tasks, including motion retargeting, motion style transfer, video performance cloning, and monocular motion reconstruction.
