Publication
Human-robot collaborative scene mapping from relational descriptions
Conference Article
Conference
Iberian Robotics Conference (ROBOT)
Edition
1st
Pages
331-346
Doc link
http://dx.doi.org/10.1007/978-3-319-03413-3_24
Abstract
In this article we propose a method for cooperatively building a scene map between a human and a robot, using a spatial relational model that the robot employs to interpret human descriptions of the scene. A description consists of a set of spatial relations between the objects in the scene, and the scene map contains the positions of these objects. To this end we propose a model based on the generation of scalar applicability fields for each of the available relations. The method can be summarized as follows. First, a person enters the room and describes the scene to the robot, including semantic information about the objects that the robot cannot obtain from its sensors. From this description the robot builds the scene mental map. Second, the robot senses the scene with a 2D laser range finder, building the scene sensed map; the objects' positions in the mental map guide the sensing process. Third, the robot fuses the two maps, linking the semantic information about the described objects to the corresponding sensed objects. The resulting map is called the scene enriched map.
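As an illustration of the idea of a scalar applicability field, the sketch below scores candidate object positions against a described relation such as "left of the table". The parametrisation (a sigmoid along the relation axis combined with a Gaussian lateral decay, and the `sigma` parameter) is a hypothetical choice for this example, not the model from the paper:

```python
import numpy as np

def applicability_left_of(points, reference, sigma=1.0):
    """Scalar applicability field for the relation 'left of reference'.

    Returns a value in [0, 1] for each query point: high where the point
    lies in the -x direction of the reference, decaying with lateral (y)
    offset. Illustrative parametrisation, not the paper's exact model.
    """
    d = points - reference                  # offsets from the reference object
    along = -d[:, 0]                        # positive when left of the reference
    lateral = np.abs(d[:, 1])               # sideways deviation from the axis
    # Sigmoid along the relation axis, Gaussian decay laterally.
    return (1.0 / (1.0 + np.exp(-along / sigma))) * np.exp(-(lateral / (2 * sigma)) ** 2)

# A described relation ("the chair is left of the table") scores candidate
# positions; the most applicable one can seed the mental map.
table = np.array([0.0, 0.0])
candidates = np.array([[-2.0, 0.0], [2.0, 0.0], [-2.0, 3.0]])
scores = applicability_left_of(candidates, table)
best = candidates[np.argmax(scores)]        # -> the point left of and aligned with the table
```

In a full system, one such field per stated relation would be combined (e.g. multiplied) to localise each described object before fusing with the laser-sensed map.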
Categories
mobile robots, uncertainty handling.
Scientific reference
E. Retamino and A. Sanfeliu. Human-robot collaborative scene mapping from relational descriptions, 1st Iberian Robotics Conference, 2013, Madrid, in Robot 2013: First Iberian Robotics Conference, Vol 252-3 of Advances in Intelligent Systems and Computing, pp. 331-346, 2014, Springer.