We introduce a novel approach for automatically recovering 3D human pose from a single image. Most previous work follows a pipelined approach: first, a set of 2D features such as edges, joints or silhouettes is detected in the image, and then these observations are used to infer the 3D pose. Solving these two problems separately may lead to erroneous 3D poses when the feature detector performs poorly. In this paper, we address this issue by jointly solving the 2D detection and 3D inference problems. For this purpose, we propose a Bayesian framework that integrates a generative model based on latent variables with discriminative 2D part detectors based on HOG features, and we perform inference using evolutionary algorithms. Experiments on real images demonstrate competitive results and show that our methodology provides accurate 2D and 3D pose estimates even when the 2D detectors are inaccurate.
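The idea of jointly scoring 2D detections and a 3D pose hypothesis under a Bayesian posterior, optimized with an evolutionary algorithm, can be illustrated with a toy sketch. Everything below is a hypothetical, minimal stand-in, not the paper's actual model: a random linear map plays the role of the generative latent-variable model, an orthographic reprojection error stands in for the discriminative HOG detector scores, and a simple (mu, lambda) evolution strategy replaces the paper's inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 4     # toy skeleton with 4 joints (illustrative)
LATENT_DIM = 3   # low-dimensional latent pose space (illustrative)

# Assumed linear generative model: latent vector z -> 3D joint positions.
W = rng.normal(size=(N_JOINTS * 3, LATENT_DIM))

def latent_to_3d(z):
    """Map a latent pose vector to 3D joint coordinates."""
    return (W @ z).reshape(N_JOINTS, 3)

def project_2d(joints_3d):
    """Orthographic projection of the 3D joints onto the image plane."""
    return joints_3d[:, :2]

def log_posterior(z, detections_2d, sigma=0.5):
    """Gaussian prior on z plus a 2D reprojection likelihood that
    stands in for the discriminative part-detector scores."""
    prior = -0.5 * np.sum(z ** 2)
    reproj = project_2d(latent_to_3d(z))
    likelihood = -0.5 * np.sum((reproj - detections_2d) ** 2) / sigma ** 2
    return prior + likelihood

def evolve(detections_2d, pop=40, elites=8, gens=60, step=0.5):
    """Simple (mu, lambda) evolution strategy over the latent space:
    sample candidates around the current mean, keep the top scorers,
    recenter, and anneal the mutation scale."""
    mean = np.zeros(LATENT_DIM)
    for _ in range(gens):
        cand = mean + step * rng.normal(size=(pop, LATENT_DIM))
        scores = np.array([log_posterior(z, detections_2d) for z in cand])
        best = cand[np.argsort(scores)[-elites:]]
        mean = best.mean(axis=0)
        step *= 0.95
    return mean

# Simulate noisy 2D detections from a ground-truth latent pose, then
# recover a latent pose that explains them.
z_true = rng.normal(size=LATENT_DIM)
detections = project_2d(latent_to_3d(z_true)) + 0.05 * rng.normal(size=(N_JOINTS, 2))
z_est = evolve(detections)
```

Because the 2D evidence enters the posterior directly, a poor individual detection is traded off against the pose prior rather than committed to, which is the intuition behind solving detection and 3D inference jointly instead of in a pipeline.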


Keywords: computer vision, pose estimation.

Scientific reference

E. Simo-Serra, A. Quattoni, C. Torras and F. Moreno-Noguer. A Joint Model for 2D and 3D Pose Estimation from a Single Image. 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 2013, pp. 3634-3641.