Publication
PhysXNet: A customizable approach for learning cloth dynamics on dressed people
Conference Article
Conference
International Conference on 3D Vision (3DV)
Edition
2021
Pages
879-888
Doc link
https://doi.org/10.1109/3DV53792.2021.00096
Abstract
We introduce PhysXNet, a learning-based approach to predict the dynamics of deformable clothes given 3D skeleton motion sequences of humans wearing these clothes. The proposed model is adaptable to a large variety of garments and changing topologies, without needing to be retrained. Such simulations are typically carried out by physics engines that require manual human expertise and are computationally intensive. PhysXNet, by contrast, is a fully differentiable deep network that at inference is able to estimate the geometry of dense cloth meshes in a matter of milliseconds, and thus can be readily deployed as a layer of a larger deep learning architecture. This efficiency is achieved thanks to the specific parameterization of the clothes we consider, based on 3D UV maps encoding spatial garment displacements.
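To make the UV-map parameterization concrete, the sketch below illustrates one plausible encoding of per-vertex garment displacements as a UV image, and how vertex displacements can be read back from it. This is a hypothetical minimal sketch, not the paper's implementation: the function names, nearest-pixel scatter, and map resolution are all assumptions made for illustration.

```python
import numpy as np

def displacements_to_uv_map(uv, rest, posed, size=64):
    """Scatter per-vertex 3D displacements into a (size, size, 3) UV map.

    uv:    (N, 2) vertex UV coordinates in [0, 1]
    rest:  (N, 3) vertex positions on the rest-pose garment
    posed: (N, 3) vertex positions after cloth dynamics

    A network that regresses such maps is decoupled from the mesh
    resolution and connectivity, which is what makes the representation
    adaptable to varying garments (hypothetical sketch).
    """
    uv_map = np.zeros((size, size, 3), dtype=np.float32)
    disp = posed - rest  # per-vertex spatial displacement
    # Nearest-pixel scatter of each vertex into the UV image.
    px = np.clip(np.rint(uv * (size - 1)).astype(int), 0, size - 1)
    uv_map[px[:, 1], px[:, 0]] = disp
    return uv_map

def uv_map_to_displacements(uv_map, uv):
    """Sample displacements back at the vertex UV coordinates."""
    size = uv_map.shape[0]
    px = np.clip(np.rint(uv * (size - 1)).astype(int), 0, size - 1)
    return uv_map[px[:, 1], px[:, 0]]
```

In this scheme, applying predicted dynamics reduces to sampling the regressed UV map at each vertex's UV coordinate and adding the result to the rest-pose geometry.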
Categories
Object recognition
Author keywords
GAN, cloth, simulation
Scientific reference
J. Sanchez, A. Pumarola and F. Moreno-Noguer. PhysXNet: A customizable approach for learning cloth dynamics on dressed people, 2021 International Conference on 3D Vision, 2021, London, UK (Virtual), pp. 879-888.