Publication

H3D-Net: Few-shot high-fidelity 3D head reconstruction

Conference Article

Conference

International Conference on Computer Vision (ICCV)

Edition

2021

Pages

5600-5609

Doc link

https://doi.org/10.1109/ICCV48922.2021.00557

Abstract

Recent learning approaches that implicitly represent surface geometry using coordinate-based neural representations have shown impressive results in the problem of multi-view 3D reconstruction. The effectiveness of these techniques is, however, subject to the availability of a large number (several tens) of input views of the scene, and computationally demanding optimizations. In this paper, we tackle these limitations for the specific problem of few-shot full 3D head reconstruction, by endowing coordinate-based representations with a probabilistic shape prior that enables faster convergence and better generalization when using few input images (down to three). First, we learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations. At test time, we jointly overfit two coordinate-based neural networks to the scene, one modelling the geometry and another estimating the surface radiance, using implicit differentiable rendering. We devise a two-stage optimization strategy in which the learned prior is used to initialize and constrain the geometry during an initial optimization phase. Then, the prior is unfrozen and fine-tuned to the scene. By doing this, we achieve high-fidelity head reconstructions, including hair and shoulders, and with a high level of detail that consistently outperforms both state-of-the-art 3D Morphable Model-based methods in the few-shot scenario, and non-parametric methods when large sets of views are available.
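
The two-stage optimization outlined above can be illustrated with a minimal PyTorch-style sketch. The names below (ImplicitMLP, geometry_prior, radiance_net, photometric_loss, the latent code z) are illustrative assumptions rather than the authors' implementation, and the stand-in loss replaces the actual implicit differentiable rendering pipeline; only the freeze-then-unfreeze treatment of the learned prior follows the description in the abstract.

# Minimal sketch of the two-stage fitting described in the abstract.
# All class/function names are hypothetical; the real method uses a learned
# implicit head prior and implicit differentiable rendering.

import torch
import torch.nn as nn

class ImplicitMLP(nn.Module):
    """Small coordinate-based MLP mapping 3D points (plus an optional latent) to values."""
    def __init__(self, in_dim, out_dim, hidden=256, depth=4):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), nn.Softplus(beta=100)]
            dim = hidden
        layers.append(nn.Linear(dim, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

latent_dim = 256
geometry_prior = ImplicitMLP(3 + latent_dim, 1)   # SDF decoder, assumed pretrained on head scans
radiance_net = ImplicitMLP(3, 3)                  # per-scene surface radiance network
z = nn.Parameter(torch.zeros(latent_dim))         # per-scene shape latent code

def photometric_loss(points, target_rgb):
    """Stand-in loss; a real pipeline would ray-march the SDF to find surface
    points and compare rendered colors against the observed input views."""
    sdf = geometry_prior(torch.cat([points, z.expand(points.shape[0], -1)], dim=-1))
    rgb = torch.sigmoid(radiance_net(points))
    # Color term plus a placeholder SDF regularizer.
    return (rgb - target_rgb).abs().mean() + 0.1 * sdf.abs().mean()

# Dummy supervision standing in for pixels from the few input views.
points = torch.rand(1024, 3) * 2 - 1
target_rgb = torch.rand(1024, 3)

# Stage 1: the learned prior stays frozen; only the latent code and radiance net are fit.
geometry_prior.requires_grad_(False)
opt = torch.optim.Adam([z, *radiance_net.parameters()], lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    photometric_loss(points, target_rgb).backward()
    opt.step()

# Stage 2: unfreeze the prior and fine-tune everything to recover scene-specific detail.
geometry_prior.requires_grad_(True)
opt = torch.optim.Adam([z, *geometry_prior.parameters(), *radiance_net.parameters()], lr=1e-4)
for _ in range(100):
    opt.zero_grad()
    photometric_loss(points, target_rgb).backward()
    opt.step()

The key design choice the sketch mirrors is that the prior-constrained first stage gives a plausible head geometry from very few views, while the second stage lets the network depart from the prior to capture details such as hair and shoulders.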

Categories

Computer Vision

Scientific reference

E. Ramon, G. Triginer, J. Escur, A. Pumarola, J. García, X. Giro-i-Nieto and F. Moreno-Noguer. H3D-Net: Few-shot high-fidelity 3D head reconstruction, 2021 International Conference on Computer Vision (ICCV), Montreal, Canada, pp. 5600-5609.