Autonomous Robot Localization Using Vision
It is well known that estimating the robot's position from odometry is subject to large errors in the long term. Consequently, additional sources of information must be used to determine the robot's position. In our case, we advocate the use of vision.
We use the appearance-based approach, which is appealing for its simplicity. In this approach, the robot's position is determined by directly comparing the most recently observed image with previously observed images whose observation positions are known.
So, appearance-based localization starts from a training set of images shot at known positions. For efficiency reasons, the dimensionality of the images in this training set is reduced using a standard PCA process. When the robot moves through the environment, the collected images (after the corresponding dimensionality reduction) are compared with those in the training set, and a particle filter algorithm is used to determine the robot's position.
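The pipeline above can be sketched as follows. This is a minimal illustration, not our actual implementation: the data, the number of PCA components, the particle count, and the Gaussian likelihood model are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training set (hypothetical data): images shot at known positions ---
# Each row of `train_images` is a flattened grayscale image; `train_pos`
# holds the (x, y) position at which each image was taken.
n_train, n_pixels, k = 50, 400, 8           # k = number of PCA components kept
train_images = rng.random((n_train, n_pixels))
train_pos = rng.random((n_train, 2)) * 10.0

# --- Standard PCA on the training images ---
mean = train_images.mean(axis=0)
centered = train_images - mean
# The right singular vectors give the principal directions of the image set.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:k]                              # (k, n_pixels) projection basis
train_feats = centered @ basis.T            # training images in PCA space

def project(image):
    """Reduce a new image to the same k-dimensional PCA space."""
    return (image - mean) @ basis.T

# --- Particle filter measurement update (simplified sketch) ---
# Particles are position hypotheses; each is weighted by how well the
# current image matches the training image taken nearest to that hypothesis.
n_particles = 200
particles = rng.random((n_particles, 2)) * 10.0
weights = np.full(n_particles, 1.0 / n_particles)

def measurement_update(particles, weights, image, sigma_img=1.0):
    feat = project(image)
    # Appearance distance from the observation to every training image.
    img_dist = np.linalg.norm(train_feats - feat, axis=1)
    # For each particle, pick the training image taken closest to it.
    pos_dist = np.linalg.norm(
        particles[:, None, :] - train_pos[None, :, :], axis=2)
    nearest = pos_dist.argmin(axis=1)
    # Gaussian likelihood: a better appearance match gives a higher weight.
    lik = np.exp(-img_dist[nearest] ** 2 / (2 * sigma_img ** 2))
    w = weights * lik
    return w / w.sum()

weights = measurement_update(particles, weights, rng.random(n_pixels))
estimate = weights @ particles              # weighted-mean position estimate
```

In a full filter this measurement update would alternate with a motion update that propagates the particles using odometry, followed by resampling.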
The robot is equipped with a stereo camera mounted on a pan-and-tilt device. With the pan-and-tilt device, different images can readily be obtained from a given position. Therefore, we can simply discard those images that, due to occlusions or changes in the environment, do not match any image in the training set.
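The discarding step can be sketched as a distance gate in the reduced PCA space. The features, the threshold value, and the gating rule here are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PCA-space features of the training images (k = 8 components).
train_feats = rng.random((50, 8))

def matches_training_set(feat, threshold=0.5):
    """Return True if this image feature lies close enough to some
    training image to be useful for localization (threshold is assumed)."""
    dists = np.linalg.norm(train_feats - feat, axis=1)
    return bool(dists.min() < threshold)

# Features of images taken at several pan-and-tilt orientations from the
# same position; those matching no training image (e.g. occlusions or
# scene changes) are dropped before the particle filter update.
pan_tilt_feats = [rng.random(8) for _ in range(5)]
usable = [f for f in pan_tilt_feats if matches_training_set(f)]
```

Only the surviving `usable` features would then be passed to the measurement update of the particle filter.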
Additionally, stereo vision can provide not only intensity images but also depth maps, which are less sensitive to changes in illumination. We want to explore the possibility of using those depth maps for localization.

You can download videos of the experiments from here.