Publication

3DPeople: Modeling the geometry of dressed humans

Conference Article

Conference

International Conference on Computer Vision (ICCV)

Edition

2019

Pages

2242-2251

Doc link

https://doi.org/10.1109/ICCV.2019.00233


Abstract

Recent advances in 3D human shape estimation build upon parametric representations that model the shape of the naked body very well, but are not appropriate for representing clothing geometry. In this paper, we present an approach to model dressed humans and predict their geometry from single images. We contribute to three fundamental aspects of the problem, namely, a new dataset, a novel shape parameterization algorithm and an end-to-end deep generative network for predicting shape. First, we present 3DPeople, a large-scale synthetic dataset with 2.5 million photo-realistic images of 80 subjects performing 70 activities and wearing diverse outfits. Besides providing textured 3D meshes for clothes and body, we annotate the dataset with segmentation masks, skeletons, depth, normal maps and optical flow. Together, these annotations make 3DPeople suitable for a wide range of tasks.

Categories

Computer vision

Author keywords

3D reconstruction, 3DPeople dataset

Scientific reference

A. Pumarola, J. Sanchez, G. P. T. Choi, A. Sanfeliu and F. Moreno-Noguer. 3DPeople: Modeling the geometry of dressed humans, 2019 International Conference on Computer Vision, 2019, Seoul, pp. 2242-2251.