Publication

Combining semantic and geometric features for object class segmentation of indoor scenes

Journal Article (2017)

Journal

IEEE Robotics and Automation Letters

Pages

49-55

Volume

2

Number

1

Doc link

http://dx.doi.org/10.1109/LRA.2016.2532927

Abstract

Scene understanding is a necessary prerequisite for robots acting autonomously in complex environments. Low-cost RGB-D cameras such as the Microsoft Kinect have enabled new methods for analyzing indoor scenes and are now ubiquitous in indoor robotics. We investigate strategies for efficient pixelwise object class labeling of indoor scenes that combine pretrained semantic features, transferred from a large color image dataset, with geometric features computed relative to the room structure, including a novel distance-from-wall feature that encodes the proximity of scene points to a detected major wall of the room. We evaluate our approach on the popular NYU v2 dataset. Several deep learning models designed to exploit different characteristics of the data are tested, including feature learning with two different pooling sizes. Our results indicate that combining semantic and geometric features yields significantly improved results for the task of object class segmentation.
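The distance-from-wall feature can be pictured as a per-pixel point-to-plane distance. The sketch below is not the authors' implementation: it assumes the "wall" is simply the dominant plane recovered by a basic RANSAC fit on the back-projected depth image, and the function names (depth_to_points, fit_dominant_plane, distance_from_wall) and intrinsics values are illustrative placeholders.

# Hypothetical sketch of a distance-from-wall feature, assuming the wall
# is approximated by the dominant plane of the back-projected depth map.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to an HxWx3 point map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def fit_dominant_plane(points, n_iters=200, threshold=0.02, seed=None):
    """Simple RANSAC plane fit; returns unit normal n and offset d with n.p + d = 0."""
    rng = np.random.default_rng(seed)
    pts = points.reshape(-1, 3)
    pts = pts[np.isfinite(pts).all(axis=1) & (pts[:, 2] > 0)]
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate sample, skip
        n /= norm
        d = -n @ sample[0]
        inliers = np.sum(np.abs(pts @ n + d) < threshold)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model

def distance_from_wall(depth, fx, fy, cx, cy):
    """Per-pixel unsigned distance to the detected dominant plane (the 'wall')."""
    points = depth_to_points(depth, fx, fy, cx, cy)
    n, d = fit_dominant_plane(points)
    return np.abs(points @ n + d)

# Toy example with synthetic depth; on NYU v2 one would use the real depth
# frames and the Kinect camera intrinsics instead of these placeholders.
depth = np.full((480, 640), 3.0) + 0.01 * np.random.rand(480, 640)
feature = distance_from_wall(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(feature.shape, feature.mean())

In the paper's pipeline such a geometric channel would be fed to the network alongside the pretrained semantic features; the sketch only illustrates how the per-pixel distance itself can be computed.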

Categories

computer vision, pattern recognition

Author keywords

Semantic scene understanding, categorization, segmentation

Scientific reference

F. Husain, H. Schulz, B. Dellen, C. Torras and S. Behnke. Combining semantic and geometric features for object class segmentation of indoor scenes. IEEE Robotics and Automation Letters, 2(1): 49-55, 2017.