Publication

Neural dense non-rigid structure from motion with latent space constraints

Conference Article

Conference

European Conference on Computer Vision (ECCV)

Edition

16th

File

Download the PDF of the paper

Authors

V. Sidhu, E. Tretschk, V. Golyanik, A. Agudo and C. Theobalt

Projects associated

Abstract

We introduce the first dense neural non-rigid structure from motion (N-NRSfM) approach, which can be trained end-to-end in an unsupervised manner from 2D point tracks. In contrast to competing methods, our combination of loss functions is fully differentiable and can be readily integrated into deep-learning systems. We formulate the deformation model as an auto-decoder and impose subspace constraints on the recovered latent space function in the frequency domain. Thanks to the state recurrence cue, we classify the reconstructed non-rigid surfaces based on their similarity and recover the period of the input sequence. Our N-NRSfM approach achieves competitive accuracy on widely used benchmark sequences and high visual quality on various real videos. Apart from being a standalone technique, our method enables multiple applications, including shape compression, completion and interpolation, among others. Combined with an encoder trained directly on 2D images, we perform scenario-specific monocular 3D shape reconstruction at interactive frame rates. To facilitate the reproducibility of the results and boost this new research direction, we open-source our code and provide trained models for research purposes.
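To make the two components named in the abstract more concrete (a deformation auto-decoder driven by per-frame latent codes, and a subspace constraint on the latent trajectory in the frequency domain), the following PyTorch sketch illustrates the general idea. It is a minimal, hypothetical rendering and not the authors' released code: all names (DeformationDecoder, freq_penalty, latent_codes), the layer sizes, and the number of retained frequencies are assumptions, and the full method combines such a term with reprojection and further losses described in the paper.

```python
# Illustrative sketch only, not the authors' implementation.
import torch
import torch.nn as nn

class DeformationDecoder(nn.Module):
    """Maps a low-dimensional latent code to per-point 3D displacements (hypothetical architecture)."""
    def __init__(self, latent_dim=32, num_points=5000, hidden=256):
        super().__init__()
        self.num_points = num_points
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_points * 3),
        )

    def forward(self, z):                                   # z: (T, latent_dim)
        return self.net(z).view(-1, self.num_points, 3)     # per-frame displacements (T, N, 3)

T, N, D = 100, 5000, 32                                     # frames, tracked points, latent size (assumed)
latent_codes = nn.Parameter(torch.randn(T, D) * 0.01)       # auto-decoder: codes are free optimisation variables
decoder = DeformationDecoder(D, N)
mean_shape = nn.Parameter(torch.zeros(N, 3))

def reconstruct_shapes():
    # Non-rigid shape per frame = mean shape + decoded deformation.
    return mean_shape.unsqueeze(0) + decoder(latent_codes)  # (T, N, 3)

def freq_penalty(z, keep=10):
    """Soft subspace constraint: damp high-frequency components of the latent
    trajectory over time (a stand-in for the paper's frequency-domain term)."""
    spectrum = torch.fft.rfft(z, dim=0)                     # (T//2 + 1, D), complex
    return spectrum[keep:].abs().pow(2).mean()

loss = freq_penalty(latent_codes)  # in the full method this is combined with reprojection and other losses
```

In this sketch, the per-frame latent codes are optimised directly together with the decoder weights (the auto-decoder setup), and the FFT-based penalty encourages a low-frequency, near-periodic latent trajectory, which is the kind of structure the abstract exploits to recover the sequence period.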

Categories

Computer Vision

Author keywords

Neural non-rigid structure from motion, sequence period detection, latent space constraints, deformation auto-decoder

Scientific reference

V. Sidhu, E. Tretschk, V. Golyanik, A. Agudo and C. Theobalt. Neural dense non-rigid structure from motion with latent space constraints, 16th European Conference on Computer Vision, 2020, Online, to appear.