Publication

Segmentation and 3D reconstruction of non-rigid shape from RGB video

Conference Article

Conference

IEEE International Conference on Image Processing (ICIP)

Edition

27th

Pages

2845-2849

Doc link

https://doi.org/10.1109/ICIP40778.2020.9190750

Abstract

In this paper we propose an unsupervised and unified approach to simultaneously recover time-varying 3D shape, camera motion, and a temporal clustering into deformations, all from partial 2D point tracks in an RGB video and without assuming any pre-trained model. As the data are drawn from sequentially ordered images, we fully exploit this information to constrain all the model parameters we estimate. We present an energy-based formulation that is efficiently solved and allows all model parameters to be estimated in the same loop via augmented Lagrange multipliers in polynomial time, enforcing similarities between images at every level. Validation is performed on a wide variety of human video sequences, including articulated and continuous motion, and on both dense and missing tracks. Our approach is shown to outperform state-of-the-art solutions in terms of both 3D reconstruction and clustering.
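
The abstract mentions that all model parameters are estimated in the same loop via augmented Lagrange multipliers. As a purely illustrative aid, the minimal Python sketch below shows the generic ALM primal/dual update pattern on a toy equality-constrained least-squares problem; the function name, the problem, and all parameters are assumptions for illustration only, not the paper's energy, variables, or formulation.

# Generic augmented-Lagrange-multiplier (ALM) update pattern, shown on a toy
# problem:  minimize 0.5 * ||A x - b||^2  subject to  C x = d.
# This is NOT the paper's method; it only illustrates how primal and dual
# variables can be updated inside the same loop.
import numpy as np

def alm_constrained_lsq(A, b, C, d, rho=1.0, iters=50):
    """Solve min 0.5||Ax - b||^2 s.t. Cx = d with an augmented Lagrangian."""
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(C.shape[0])          # Lagrange multipliers for Cx = d
    AtA, Atb = A.T @ A, A.T @ b
    CtC, Ctd = C.T @ C, C.T @ d
    for _ in range(iters):
        # Primal step: minimize the augmented Lagrangian in x (closed form here).
        x = np.linalg.solve(AtA + rho * CtC, Atb - C.T @ y + rho * Ctd)
        # Dual step: ascend on the multipliers using the constraint residual.
        y = y + rho * (C @ x - d)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
    C, d = rng.normal(size=(2, 5)), rng.normal(size=2)
    x = alm_constrained_lsq(A, b, C, d)
    print("constraint residual:", np.linalg.norm(C @ x - d))  # close to 0

Running the script drives the constraint residual towards zero, which is the role the multiplier update plays in coupling the sub-problems solved within a single loop.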

Categories

computer vision

Author keywords

Non-Rigid Structure from Motion, Deformation Segmentation, Sequential Data, Optimization

Scientific reference

A. Agudo. Segmentation and 3D reconstruction of non-rigid shape from RGB video, 27th IEEE International Conference on Image Processing, 2020, Abu Dhabi, United Arab Emirates (Virtual), pp. 2845-2849.