Publication
Modeling robot's world with minimal effort
Conference Article
Conference
IEEE International Conference on Robotics and Automation (ICRA)
Edition
2015
Pages
4890-4896
Doc link
http://dx.doi.org/10.1109/ICRA.2015.7139878
Abstract
We propose an efficient Human-Robot Interaction approach for modeling the appearance of all relevant objects in the robot's environment. Given an input video stream recorded while the robot is navigating, the user only needs to annotate a very small number of frames to build specific classifiers for each object of interest. At the core of the method are several random ferns classifiers that share the same features and are updated online. The resulting methodology is fast (it runs at 8 fps), versatile (it can be applied to unconstrained scenarios), scalable (real experiments show we can model up to 30 different object classes), and minimizes the amount of human intervention by leveraging the uncertainty measures associated with each classifier. We thoroughly validate the approach on synthetic data and on real sequences acquired with a mobile platform in challenging outdoor scenarios containing a multitude of different objects. We show that, with minimal effort, the human can provide the robot with a detailed model of the objects in the scene.
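To illustrate the kind of classifier the abstract refers to, the sketch below shows a generic random ferns classifier with features shared across classes and incremental (online) updates. This is an assumption-level illustration of the general technique, not the authors' implementation; all names (`SharedRandomFerns`, the pairwise-comparison binary test, Laplace smoothing constants) are hypothetical choices.

```python
import math
import random


class SharedRandomFerns:
    """Minimal sketch of a random ferns classifier: every fern is a small
    set of binary tests shared by all classes, and each fern keeps
    per-leaf, per-class counts that can be updated online."""

    def __init__(self, num_ferns, fern_size, num_features, seed=0):
        rng = random.Random(seed)
        # Each fern is a list of (a, b) index pairs; the binary test
        # compares two feature values of the input vector x.
        self.ferns = [
            [(rng.randrange(num_features), rng.randrange(num_features))
             for _ in range(fern_size)]
            for _ in range(num_ferns)
        ]
        self.fern_size = fern_size
        # counts[f][leaf][label] = number of training hits.
        self.counts = [{} for _ in range(num_ferns)]
        self.classes = set()

    def _leaf(self, fern, x):
        # Concatenate the binary test outcomes into an integer leaf index.
        idx = 0
        for a, b in fern:
            idx = (idx << 1) | (1 if x[a] > x[b] else 0)
        return idx

    def update(self, x, label):
        # Online update: increment the reached leaf's count in every fern.
        self.classes.add(label)
        for fern, table in zip(self.ferns, self.counts):
            cell = table.setdefault(self._leaf(fern, x), {})
            cell[label] = cell.get(label, 0) + 1

    def predict(self, x):
        # Semi-naive Bayes combination: sum Laplace-smoothed
        # log-posteriors over all ferns, return the best class.
        scores = {c: 0.0 for c in self.classes}
        n_leaves = 2 ** self.fern_size
        for fern, table in zip(self.ferns, self.counts):
            cell = table.get(self._leaf(fern, x), {})
            total = sum(cell.values())
            for c in scores:
                p = (cell.get(c, 0) + 1) / (total + n_leaves)
                scores[c] += math.log(p)
        return max(scores, key=scores.get)
```

Because each fern only stores counts, a single user annotation updates the model in constant time, which is what makes this family of classifiers attractive for the interactive, online setting described above.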
Categories
computer vision, image classification, mobile robots, object detection, pattern classification, robot vision.
Author keywords
online learning, random ferns, object detection, robot, interaction
Scientific reference
M. Villamizar, A. Garrell Zulueta, A. Sanfeliu and F. Moreno-Noguer. Modeling robot's world with minimal effort, 2015 IEEE International Conference on Robotics and Automation, 2015, Seattle, WA, USA, pp. 4890-4896.