Publication

Human motion prediction via spatio-temporal inpainting

Conference Article

Conference

International Conference on Computer Vision (ICCV)

Edition

2019

Pages

7133-7142

Doc link

https://doi.org/10.1109/ICCV.2019.00723

Abstract

We propose a Generative Adversarial Network (GAN) to forecast 3D human motion given a sequence of past 3D skeleton poses. While recent GANs have shown promising results, they can only forecast plausible motion over relatively short periods of time (a few hundred milliseconds) and typically ignore the absolute position of the skeleton w.r.t. the camera. Our scheme provides long-term predictions (two seconds or more) for both the body pose and its absolute position. Our approach builds upon three main contributions. First, we represent the data using a spatio-temporal tensor of 3D skeleton coordinates, which allows formulating the prediction problem as an inpainting one, for which GANs work particularly well. Second, we design an architecture to learn the joint distribution of body poses and global motion, capable of hypothesizing large chunks of the input 3D tensor with missing data. And finally, we argue that the L2 metric, considered so far by most approaches, fails to capture the actual distribution of long-term human motion. We propose two alternative metrics, based on the distribution of frequencies, that are able to capture more realistic motion patterns. Extensive experiments demonstrate that our approach significantly improves the state of the art, while also handling situations in which past observations are corrupted by occlusions, noise and missing frames.
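The inpainting formulation described above can be illustrated with a minimal sketch: the observed skeleton sequence is packed into a spatio-temporal tensor, and the frames to be forecast are treated as a masked-out region for the generator to fill in. The function name, tensor layout `(frames, joints, 3)`, and the zero-fill/binary-mask encoding below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def make_inpainting_input(poses, n_future):
    """Frame motion prediction as spatio-temporal inpainting (illustrative).

    poses:    array of shape (T, J, 3) -- T observed frames of J joints in
              absolute 3D coordinates (global position included).
    n_future: number of future frames to forecast, appended as missing data.

    Returns a tensor covering past + future, with the future region zeroed,
    and a binary mask marking observed entries (1) vs. entries the generator
    must hypothesize (0).
    """
    t_obs, n_joints, _ = poses.shape
    total = t_obs + n_future

    tensor = np.zeros((total, n_joints, 3), dtype=np.float32)
    tensor[:t_obs] = poses            # copy the observed past

    mask = np.zeros((total, n_joints, 3), dtype=np.float32)
    mask[:t_obs] = 1.0                # mark past frames as observed

    return tensor, mask
```

The same mechanism naturally covers the corrupted-observation setting mentioned in the abstract: occlusions, noise bursts, or dropped frames in the past can simply be zeroed in the mask as well, so the generator inpaints them alongside the future.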

Categories

computer vision, pattern recognition.

Scientific reference

A. Hernandez Ruiz, J. Gall and F. Moreno-Noguer. Human motion prediction via spatio-temporal inpainting. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, 2019, pp. 7133-7142.