LETHA: Learning from high quality inputs for 3D pose estimation in low quality images

Conference Article


International Conference on 3D Vision (3DV)







We introduce LETHA (Learning on Easy data, Test on Hard), a new learning paradigm that builds strong priors from high-quality training data and combines them with discriminative machine learning to handle low-quality test data. Our main contribution is an implementation of this concept for pose estimation. We first automatically build a 3D model of the object of interest from high-definition images, and devise from it a pose-indexed feature extraction scheme. We then train a single classifier to process these feature vectors. Given a low-quality test image, we visit many hypothetical poses, extract features consistently, and evaluate the response of the classifier. Since this process uses locations recorded during learning, it no longer requires point matching. We train this classifier, common to all poses, with a boosting procedure that can handle missing features, which arise in this context from self-occlusion. Our results demonstrate that the method combines the strengths of global image representations, which remain discriminative even for very small images, with the robustness to occlusion of approaches based on local feature point descriptors.
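The test-time pipeline described above (hypothesize a pose, extract pose-indexed features at locations recorded during learning, score with a single shared classifier that tolerates missing features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pinhole projection, the NaN convention for occluded features, and the linear stand-in for the boosted classifier are all assumptions made for clarity.

```python
import numpy as np

# Hypothetical pose-indexed feature extractor: for each 3D model point,
# project it into the image under the hypothesized pose (R, t) and sample
# the intensity there. NaN marks features that are missing, e.g. because
# the point projects outside the image or lies behind the camera.
def extract_features(image, pose, model_points, f=100.0):
    R, t = pose
    cam = (R @ model_points.T).T + t          # model points in camera frame
    h, w = image.shape
    feats = np.full(len(model_points), np.nan)
    for i, (x, y, z) in enumerate(cam):
        if z <= 0:                            # behind the camera: treat as occluded
            continue
        u = int(f * x / z + w / 2)            # simple pinhole projection
        v = int(f * y / z + h / 2)
        if 0 <= u < w and 0 <= v < h:
            feats[i] = image[v, u]
    return feats

# Stand-in for the boosted classifier: a fixed linear scorer that simply
# skips missing (NaN) entries, mimicking robustness to self-occlusion.
def score(feats, weights):
    mask = ~np.isnan(feats)
    return float(weights[mask] @ feats[mask]) if mask.any() else -np.inf

# Exhaustive test-time search: visit every hypothetical pose, extract
# features consistently, and keep the pose with the highest response.
def estimate_pose(image, poses, model_points, weights):
    scores = [score(extract_features(image, p, model_points), weights)
              for p in poses]
    return int(np.argmax(scores))
```

Because the sampling locations are fixed by the pose hypothesis rather than detected in the test image, the loop works even on images too small or too degraded for reliable keypoint matching.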


computer vision, pattern recognition

Author keywords

3D pose estimation

Scientific reference

A. Penate-Sanchez, F. Moreno-Noguer, J. Andrade-Cetto and F. Fleuret. LETHA: Learning from high quality inputs for 3D pose estimation in low quality images, 2nd International Conference on 3D Vision, 2014, Tokyo, pp. 517-524.