Publication
LayerNet: high-resolution semantic 3D reconstruction of clothed people
Journal Article (2024)
Journal
IEEE Transactions on Pattern Analysis and Machine Intelligence
Pages
1257-1272
Volume
46
Number
2
Doc link
http://dx.doi.org/10.1109/TPAMI.2023.3332677
Authors
- Corona Puyané, Enric
- Alenyà Ribas, Guillem
- Pons-Moll, Gerard
- Moreno Noguer, Francesc
Abstract
In this paper we introduce SMPLicit, a novel generative model that jointly represents body pose, shape and clothing geometry, and LayerNet, a deep network that, given a single image of a person, simultaneously performs detailed 3D reconstruction of body and clothes. In contrast to existing learning-based approaches that require training specific models for each type of garment, SMPLicit can represent different garment topologies in a unified manner (e.g., from sleeveless tops to hoodies and open jackets), while controlling other properties like garment size or tightness/looseness. LayerNet follows a coarse-to-fine multi-stage strategy: it first predicts smooth cloth geometries from SMPLicit, which are then refined by an image-guided displacement network that gracefully fits the body, recovering high-frequency details and wrinkles. LayerNet achieves competitive accuracy in the task of 3D reconstruction against the current "garment-agnostic" state of the art for images of people in upright positions and controlled environments, and consistently surpasses these methods on challenging body poses and uncontrolled settings. Furthermore, the semantically rich outcome of our approach is suitable for performing virtual try-on directly in 3D, a task which, so far, has only been addressed in the 2D domain.
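To make the coarse-to-fine strategy from the abstract concrete, the sketch below outlines the two-stage pipeline in Python. It is an illustrative sketch only: every name (SMPLicitPrior, DisplacementNet, the encode/decode methods) is a hypothetical placeholder reflecting the description above, not the authors' released API.

```python
# Illustrative sketch of the coarse-to-fine pipeline described in the
# abstract. All object and method names are hypothetical placeholders,
# not the authors' released code.
import torch


def reconstruct(image: torch.Tensor, smplicit_prior, displacement_net):
    """Two-stage 3D reconstruction of a clothed person from one image.

    smplicit_prior:   assumed generative model mapping an image to body
                      pose/shape and per-garment latent codes, and
                      decoding those into smooth (coarse) meshes.
    displacement_net: assumed image-guided network predicting per-vertex
                      displacements for refinement.
    """
    # Stage 1 (coarse): estimate pose, shape and garment latent codes,
    # then decode smooth, layered body and cloth geometry with SMPLicit.
    pose, shape, cloth_codes = smplicit_prior.encode(image)
    body_mesh = smplicit_prior.decode_body(pose, shape)
    cloth_meshes = [smplicit_prior.decode_garment(z, pose, shape)
                    for z in cloth_codes]

    # Stage 2 (fine): refine each coarse garment with image-conditioned
    # per-vertex displacements, recovering wrinkles and high-frequency
    # detail while keeping the layers fitted to the body.
    refined = [mesh + displacement_net(image, mesh) for mesh in cloth_meshes]
    return body_mesh, refined
```

Keeping the body and each garment as separate layered meshes, rather than a single fused surface, is what makes semantic downstream tasks such as 3D virtual try-on possible.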
Categories
Pattern recognition
Scientific reference
E. Corona, G. Alenyà, G. Pons-Moll and F. Moreno-Noguer. LayerNet: high-resolution semantic 3D reconstruction of clothed people. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(2): 1257-1272, 2024.