Enhancing egocentric 3D pose estimation with third person views
Journal Article (2023)
Journal
Pattern Recognition
Volume
138
Number
109358
Doc link
https://doi.org/10.1016/j.patcog.2023.109358
Abstract
We propose a novel approach to enhance the 3D body pose estimation of a person computed from videos captured from a single wearable camera. The main technical contribution consists of leveraging high-level features linking first- and third-person views in a joint embedding space. To learn such an embedding space we introduce First2Third-Pose, a new paired synchronized dataset of nearly 2000 videos depicting human activities captured from both first- and third-person perspectives. We explicitly consider spatial- and motion-domain features, combined using a semi-Siamese architecture trained in a self-supervised fashion. Experimental results demonstrate that the joint multi-view embedding space learned with our dataset is useful for extracting discriminative features from arbitrary single-view egocentric videos, without requiring any domain adaptation or knowledge of camera parameters. An extensive evaluation demonstrates a significant improvement in egocentric 3D body pose estimation performance on two unconstrained datasets, over three supervised state-of-the-art approaches. The collected dataset and pre-trained model are available for research purposes.
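The abstract describes learning a joint embedding space for synchronized first- and third-person videos with a semi-Siamese architecture trained self-supervised. The sketch below is a hypothetical minimal illustration of that idea, not the paper's implementation: each view gets its own projection (the "semi" part), a shared head maps both into the joint space, and an InfoNCE-style contrastive objective treats synchronized first/third-person pairs as positives. All dimensions, weight shapes, and the loss form are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64-d per-view features, 32-d joint embedding.
FEAT_DIM, EMB_DIM = 64, 32

# Semi-Siamese encoders: a view-specific projection per view,
# followed by a weight-shared head into the joint space.
W_ego = rng.normal(scale=0.1, size=(FEAT_DIM, EMB_DIM))    # first-person branch
W_exo = rng.normal(scale=0.1, size=(FEAT_DIM, EMB_DIM))    # third-person branch
W_shared = rng.normal(scale=0.1, size=(EMB_DIM, EMB_DIM))  # shared head

def embed(x, w_view):
    """Project features into the joint space and L2-normalize."""
    z = np.tanh(x @ w_view) @ W_shared
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def contrastive_loss(z_ego, z_exo, temperature=0.1):
    """InfoNCE-style loss: matched (synchronized) first/third-person
    pairs are positives; other pairs in the batch are negatives."""
    logits = (z_ego @ z_exo.T) / temperature
    # Log-softmax over each row; the diagonal holds the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(logits))
    return float(-log_probs[idx, idx].mean())

# Toy batch of 8 synchronized first/third-person feature pairs.
ego_feats = rng.normal(size=(8, FEAT_DIM))
exo_feats = ego_feats + 0.05 * rng.normal(size=(8, FEAT_DIM))  # correlated views

loss = contrastive_loss(embed(ego_feats, W_ego), embed(exo_feats, W_exo))
print(round(loss, 4))
```

Minimizing this loss with gradient descent would pull embeddings of synchronized views together, which is the self-supervised signal the paired dataset makes possible.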
Categories
Pattern recognition
Author keywords
3D pose estimation; Self-supervised learning; Egocentric vision
Scientific reference
A. Dhamanaskar, M. Dimiccoli, E. Corona, A. Pumarola and F. Moreno-Noguer. Enhancing egocentric 3D pose estimation with third person views. Pattern Recognition, 138: 109358, 2023.