Publication

Enhancing text spotting with a language model and visual context information

Conference Article

Conference

Catalan Conference on Artificial Intelligence (CCIA)

Edition

21st

Pages

271-280

Doc link

https://doi.org/10.3233/978-1-61499-918-8-271

Authors

A. Sabir, F. Moreno-Noguer and L. Padró

Abstract

This paper addresses the problem of detecting and recognizing text in images acquired ‘in the wild’. This is a severely under-constrained problem that requires tackling a number of challenges, including large occlusions, changing lighting conditions, cluttered backgrounds, and different font types and sizes. To address this problem we leverage recent and successful developments at the intersection of machine learning and natural language understanding. In particular, we initially rely on off-the-shelf deep networks, already trained with large amounts of data, that provide a series of text hypotheses per input image. The outputs of this network are then combined with different priors obtained both from the semantic interpretation of the image and from a scene-based language model. As a result of this combination, the performance of the original network is consistently boosted. We validate our approach on the ICDAR’17 shared task dataset.
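
The abstract describes re-ranking the recognizer's text hypotheses with priors from a language model and from the visual context of the scene. The sketch below is a minimal, hypothetical illustration of such a score fusion; the function names, weights, and toy scores are assumptions made for illustration and do not reproduce the authors' implementation.

    # Hypothetical sketch: re-rank text hypotheses with a language-model
    # prior and a visual-context prior via a log-linear combination.
    # All names, weights and scores are illustrative assumptions, not the
    # authors' actual method.
    import math

    def rerank(candidates, lm_prob, visual_sim, w_lm=1.0, w_vis=1.0):
        # candidates: list of (word, recognizer confidence) pairs.
        # lm_prob(word): prior from a scene-based language model.
        # visual_sim(word): semantic similarity between the word and the
        # objects detected in the image (visual context prior).
        rescored = []
        for word, rec_score in candidates:
            log_score = (math.log(rec_score)
                         + w_lm * math.log(lm_prob(word))
                         + w_vis * math.log(visual_sim(word)))
            rescored.append((word, log_score))
        return sorted(rescored, key=lambda x: x[1], reverse=True)

    # Toy usage: the recognizer slightly prefers "care", but the language
    # model and the visual context (e.g. a storefront scene) favour "cafe".
    candidates = [("care", 0.42), ("cafe", 0.40), ("core", 0.18)]
    lm_prob = lambda w: {"cafe": 0.30, "care": 0.05, "core": 0.02}.get(w, 1e-4)
    visual_sim = lambda w: {"cafe": 0.7, "care": 0.1, "core": 0.1}.get(w, 1e-3)
    print(rerank(candidates, lm_prob, visual_sim)[0][0])  # -> "cafe"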

Categories

Computer vision

Scientific reference

A. Sabir, F. Moreno-Noguer and L. Padró. Enhancing text spotting with a language model and visual context information, 21st Catalan Conference on Artificial Intelligence (CCIA), 2018, Roses, in Artificial Intelligence Research and Development, Vol. 308 of Frontiers in Artificial Intelligence and Applications, pp. 271-280, 2018, IOS Press.