Publication
PoseScript: 3D human poses from natural language
Conference Article
Conference
European Conference on Computer Vision (ECCV)
Edition
17th
Pages
346-362
Doc link
http://dx.doi.org/10.1007/978-3-031-20068-7_20
Authors
-
Delmas, Ginger Diana
-
Weinzaepfel, Philippe
-
Lucas, Thomas
-
Moreno-Noguer, Francesc
-
Rogez, Grégory
Abstract
Natural language is leveraged in many computer vision tasks such as image captioning, cross-modal retrieval, or visual question answering, to provide fine-grained semantic information. While human pose is key to human understanding, current 3D human pose datasets lack detailed language descriptions. In this work, we introduce the PoseScript dataset, which pairs a few thousand 3D human poses from AMASS with rich human-annotated descriptions of the body parts and their spatial relationships. To increase the size of this dataset to a scale compatible with typical data-hungry learning algorithms, we propose an elaborate captioning process that generates automatic synthetic descriptions in natural language from given 3D keypoints. This process extracts low-level pose information – the posecodes – using a set of simple but generic rules on the 3D keypoints. The posecodes are then combined into higher-level textual descriptions using syntactic rules. Automatic annotations substantially increase the amount of available data, and make it possible to effectively pretrain deep models for finetuning on human captions. To demonstrate the potential of annotated poses, we show applications of the PoseScript dataset to retrieval of relevant poses from large-scale datasets and to synthetic pose generation, both based on a textual pose description.
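The posecode idea described above can be sketched in a few lines: a simple geometric rule classifies a joint configuration into a coarse category, which a syntactic rule then turns into a sentence. This is a minimal illustrative sketch, not the paper's actual pipeline; the joint names, angle thresholds, and category labels below are assumptions chosen for the example.

```python
import numpy as np

def angle_at(a, b, c):
    """Angle at joint b formed by segments b->a and b->c, in degrees."""
    v1 = a - b
    v2 = c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def elbow_posecode(shoulder, elbow, wrist):
    """Classify the elbow angle into a coarse posecode category.

    Thresholds are illustrative, not taken from the paper.
    """
    theta = angle_at(shoulder, elbow, wrist)
    if theta < 75:
        return "completely bent"
    elif theta < 135:
        return "partially bent"
    return "straight"

# Toy 3D keypoints forming a right angle at the elbow.
shoulder = np.array([0.0, 0.0, 0.0])
elbow = np.array([0.0, -0.3, 0.0])
wrist = np.array([0.3, -0.3, 0.0])

code = elbow_posecode(shoulder, elbow, wrist)
sentence = f"The left elbow is {code}."  # trivial syntactic rule
print(sentence)  # -> The left elbow is partially bent.
```

The real captioning process in the paper aggregates many such posecodes over the whole body and composes them with richer syntactic rules; this sketch only shows the rule-then-sentence pattern.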
Categories
Computer vision
Scientific reference
G.D. Delmas, P. Weinzaepfel, T. Lucas, F. Moreno-Noguer and G. Rogez. PoseScript: 3D human poses from natural language, 17th European Conference on Computer Vision, 2022, Tel Aviv (Israel), in Computer Vision – ECCV 2022, Vol. 13666 of Lecture Notes in Computer Science, pp. 346-362, 2022.