Publication

Textual visual semantic dataset for Text Spotting

Conference Article

Conference

CVPR Workshop on Text and Documents in the Deep Learning Era (CVPRW)

Edition

2020

Pages

2306-2315

Doc link

https://doi.org/10.1109/CVPRW50498.2020.00279

File

Download the PDF of the paper

Abstract

Text Spotting in the wild consists of detecting and recognizing text appearing in images (e.g. signboards, traffic signals, or brands on clothing or objects). This is a challenging problem due to the complexity of the contexts in which text appears (uneven backgrounds, shading, occlusions, perspective distortions, etc.). Only a few approaches try to exploit the relation between text and its surrounding environment to better recognize text in the scene. In this paper, we propose a visual context dataset for Text Spotting in the wild, where the publicly available dataset COCO-text [40] has been extended with information about the scene (such as objects and places appearing in the image) to enable researchers to include semantic relations between text and scene in their Text Spotting systems, and to offer a common framework for such approaches. For each text in an image, we extract three kinds of context information: objects in the scene, an image location label, and a textual image description (caption). We use state-of-the-art, out-of-the-box tools to extract this additional information. Since this information has textual form, it can be used to incorporate text similarity or semantic relatedness methods into Text Spotting systems, either as a post-processing step or in an end-to-end training strategy. Our data is publicly available at https://git.io/JeZTb.
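The abstract describes using the textual scene context (object labels, a place label, and caption words) to re-rank a spotter's candidate transcriptions by semantic relatedness. The following is a minimal sketch of that post-processing idea, not the paper's actual method: the embed() function is a hypothetical stand-in (in practice you would plug in pretrained word2vec, GloVe, or fastText vectors), and the fusion weight alpha is likewise illustrative.

import numpy as np
import zlib

def embed(word: str) -> np.ndarray:
    # Hypothetical placeholder: a deterministic pseudo-random vector per word.
    # Replace with real pretrained embeddings (word2vec/GloVe/fastText).
    rng = np.random.default_rng(zlib.crc32(word.lower().encode()))
    return rng.standard_normal(300)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rerank(candidates, context_words, alpha=0.5):
    # candidates: list of (word, spotter_confidence) pairs.
    # context_words: textual scene context (object/place labels, caption words).
    # Fuse the spotter's confidence with the best similarity to the context.
    ctx = [embed(w) for w in context_words]
    scored = [(w, alpha * s + (1 - alpha) * max(cosine(embed(w), c) for c in ctx))
              for w, s in candidates]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# With real embeddings, a "coffee"-themed context would promote "espresso"
# over a visually similar but semantically unrelated hypothesis.
print(rerank([("espresso", 0.40), ("expresso", 0.45)], ["coffee", "cafe", "cup"]))

In a real system the caption would be tokenized and stop-words removed before comparison; here the context word list is given directly for brevity.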

Categories

computer vision, image recognition, object detection, pattern recognition

Author keywords

Text Spotting, text recognition

Scientific reference

A. Sabir, F. Moreno-Noguer and L. Padró. Textual visual semantic dataset for Text Spotting. CVPR Workshop on Text and Documents in the Deep Learning Era (CVPRW), Seattle, WA, USA, 2020, pp. 2306-2315, IEEE.