Body size and depth disambiguation in multi-person reconstruction from single images

Conference Article


International Conference on 3D Vision (3DV)







We address the problem of multi-person 3D body pose and shape estimation from a single image. While this problem can be tackled by applying a single-person approach multiple times to the same scene, recent works have shown the advantages of deep architectures that reason about all people in the scene simultaneously and holistically, e.g., by enforcing depth-order constraints or minimizing interpenetration among reconstructed bodies. However, existing approaches are still unable to capture the size variability of people caused by the inherent body-scale and depth ambiguity. In this work we tackle this challenge by devising a novel optimization scheme that learns the appropriate body scale and relative camera pose by enforcing the feet of all people to remain on the ground floor. A thorough evaluation on the MuPoTS-3D and 3DPW datasets demonstrates that our approach robustly estimates the body translation and shape of multiple people while retrieving their spatial arrangement, consistently improving over the current state of the art, especially in scenes with people of very different heights.
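The body-scale and depth ambiguity mentioned in the abstract can be illustrated with a minimal pinhole-camera sketch. This is our own illustrative example, not the paper's implementation; the focal length and function names are assumptions.

```python
# Sketch of the body-scale / depth ambiguity under a pinhole camera.
# Variable names and the focal length are illustrative assumptions.
f = 1000.0  # focal length in pixels (assumed)

def projected_height(body_height_m, depth_m, focal=f):
    """Image-space height (pixels) of an upright person at a given depth."""
    return focal * body_height_m / depth_m

# A 1.7 m person at 5 m depth and a 3.4 m person at 10 m depth
# project to the same image height, so a single image cannot
# separate body scale from depth:
h1 = projected_height(1.7, 5.0)
h2 = projected_height(3.4, 10.0)
assert abs(h1 - h2) < 1e-9

# A shared ground plane breaks the tie: once all feet are constrained
# to lie on the same floor, the image row of each person's feet pins
# down their depth, so the body scale is no longer a free parameter.
```

Doubling both the body height and the depth leaves the projection unchanged, which is why a scene-level constraint such as a common ground floor is needed to recover consistent sizes.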


computer vision, pattern recognition.

Author keywords

3D pose and shape estimation, 3D pose, depth

Scientific reference

N. Ugrinovic, A. Ruiz, A. Agudo, A. Sanfeliu and F. Moreno-Noguer. Body size and depth disambiguation in multi-person reconstruction from single images, 2021 International Conference on 3D Vision, 2021, London, UK (Virtual), pp. 53-63.