Publication

How2Sign: A large-scale multimodal dataset for continuous American Sign Language

Conference Article

Conference

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Edition

2021

Pages

2734-2743

Doc link

https://doi.org/10.1109/CVPR46437.2021.00276

Authors

  • Duarte, Amanda

  • Palaskar, Shruti

  • Ventura, Lucas

  • Ghadiyaram, Deepti

  • DeHaan, Kenneth

  • Metze, Florian

  • Torres, Jordi

  • Giró-i-Nieto, Xavier

Abstract

One of the factors that have hindered progress in the areas of sign language recognition, translation, and production is the absence of large annotated datasets. Towards this end, we introduce How2Sign, a multimodal and multiview continuous American Sign Language (ASL) dataset, consisting of a parallel corpus of more than 80 hours of sign language videos and a set of corresponding modalities, including speech, English transcripts, and depth. A three-hour subset was further recorded in the Panoptic Studio, enabling detailed 3D pose estimation. To evaluate the potential of How2Sign for real-world impact, we conduct a study with ASL signers and show that synthesized videos using our dataset can indeed be understood. The study further gives insights into the challenges that computer vision should address in order to make progress in this field. Dataset website: http://how2sign.github.io/

Categories

computer vision

Author keywords

sign language, dataset

Scientific reference

A. Duarte, S. Palaskar, L. Ventura, D. Ghadiyaram, K. DeHaan, F. Metze, J. Torres and X. Giró-i-Nieto. How2Sign: A large-scale multimodal dataset for continuous American Sign Language. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA (Virtual), pp. 2734-2743. Computer Vision Foundation.