Publication

2D-to-3D facial expression transfer

Conference Article

Conference

International Conference on Pattern Recognition (ICPR)

Edition

24th

Pages

2008-2013

Doc link

https://doi.org/10.1109/ICPR.2018.8545228


Abstract

Automatically changing the expression and physical features of a face from an input image is a topic that has been traditionally tackled in a 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes the subject's 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape --obtained from standard factorization approaches over the input video-- using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.
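The core idea of mapping expression deformations from a source shape onto a differing target shape can be illustrated with a minimal sketch. The paper solves a geometrically consistent mapping with semantic region constraints over clustered macro-segments; the snippet below replaces that optimization with a simple nearest-neighbour correspondence and copies per-vertex blend-shape displacements, purely for intuition. All function and variable names here are hypothetical, not from the paper.

```python
import numpy as np

def transfer_expression(target_rest, source_rest, source_expr):
    """Illustrative expression transfer between two face meshes.

    target_rest: (N, 3) target rest-shape vertices (e.g. from factorization).
    source_rest: (M, 3) source neutral blend shape.
    source_expr: (M, 3) source expression blend shape.

    NOTE: the paper establishes correspondences via a region-constrained,
    geometrically consistent mapping; here we use plain nearest neighbours.
    """
    # Per-vertex displacement of the source from neutral to the expression.
    disp = source_expr - source_rest                          # (M, 3)
    # Nearest-neighbour correspondence, target vertex -> source vertex.
    d2 = ((target_rest[:, None, :] - source_rest[None, :, :]) ** 2).sum(-1)
    corr = d2.argmin(axis=1)                                  # (N,)
    # Apply the mapped displacements to the target rest shape.
    return target_rest + disp[corr]
```

In practice, copying raw displacements across shapes of very different geometry would distort the target, which is why the paper enforces that points in specific regions map onto semantically equivalent regions rather than merely nearby ones.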

Categories

computer vision, optimisation.

Author keywords

Facial Expression Transfer

Scientific reference

G. Rotger, F. Lumbreras, F. Moreno-Noguer and A. Agudo. 2D-to-3D facial expression transfer, 24th International Conference on Pattern Recognition, 2018, Beijing, China, pp. 2008-2013.