Publication

Semantic segmentation priors for object discovery

Conference Article

Conference

International Conference on Pattern Recognition (ICPR)

Edition

23rd

Pages

549-554

Doc link

http://dx.doi.org/10.1109/ICPR.2016.7899691

Abstract

Reliable object discovery in realistic indoor scenes is a necessity for many computer vision and service robot applications. Semantic segmentation methods have made great advances on such scenes in recent years. These methods can provide useful prior information for object discovery by removing false positives and by delineating object boundaries. We propose a novel method that combines bottom-up object discovery and semantic priors to produce generic object candidates in RGB-D images. We use a deep learning method for semantic segmentation to classify colour and depth superpixels into meaningful categories. Separately for each category, we use saliency to estimate the location and scale of objects, and superpixels to find their precise boundaries. Finally, object candidates of all categories are combined and ranked. We evaluate our approach on the NYU Depth V2 dataset and show that we outperform other state-of-the-art object discovery methods in terms of recall.
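
The pipeline outlined in the abstract can be summarised in a short sketch. The following Python snippet is purely illustrative and is not the authors' implementation: the function name, the saliency threshold, and the mean-saliency ranking score are assumptions, and the semantic, saliency and superpixel maps are synthetic placeholders standing in for the outputs of the deep segmentation network, the saliency estimator and the superpixel segmentation.

    # Illustrative sketch only (not the paper's code): combine a semantic
    # segmentation prior, a saliency map and superpixels into ranked
    # generic object candidates, per the pipeline described in the abstract.
    import numpy as np
    from scipy import ndimage

    def object_candidates(semantic_labels, saliency, superpixels, sal_thresh=0.5):
        """semantic_labels, superpixels: (H, W) int maps; saliency: (H, W) in [0, 1].
        sal_thresh and the mean-saliency score below are illustrative choices."""
        candidates = []
        for category in np.unique(semantic_labels):
            cat_mask = semantic_labels == category
            # Saliency estimates the location and scale of objects within the category.
            salient = cat_mask & (saliency > sal_thresh)
            regions, n = ndimage.label(salient)
            for r in range(1, n + 1):
                seed = regions == r
                # Snap the candidate to superpixel boundaries: keep every
                # superpixel that overlaps the salient seed region.
                sp_ids = np.unique(superpixels[seed])
                mask = np.isin(superpixels, sp_ids) & cat_mask
                score = saliency[mask].mean()  # simple ranking score (assumption)
                candidates.append((score, category, mask))
        # Combine candidates of all categories and rank them, best first.
        return sorted(candidates, key=lambda c: c[0], reverse=True)

    if __name__ == "__main__":
        # Synthetic placeholder inputs in place of network, saliency and superpixel outputs.
        rng = np.random.default_rng(0)
        H, W = 60, 80
        semantic = rng.integers(0, 3, size=(H, W))
        saliency = rng.random((H, W))
        superpix = rng.integers(0, 50, size=(H, W))
        ranked = object_candidates(semantic, saliency, superpix)
        print(f"{len(ranked)} candidates, top score {ranked[0][0]:.3f}")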

Categories

computer vision, object detection, object recognition, robot vision, service robots.

Author keywords

Semantic segmentation, object discovery, deep learning

Scientific reference

G. Martín, F. Husain, H. Schulz, S. Frintrop, C. Torras and S. Behnke. Semantic segmentation priors for object discovery, 23rd International Conference on Pattern Recognition, 2016, Cancún, Mexico, pp. 549-554.