Publication

Depth from the visual motion of a planar target induced by zooming

Conference Article

Conference

IEEE International Conference on Robotics and Automation (ICRA)

Edition

2007

Pages

4727-4732

Doc link

http://dx.doi.org/10.1109/ROBOT.2007.364207

Abstract

Robot egomotion can be estimated from an acquired video stream only up to the scale of the scene. To remove this ambiguity (and obtain true egomotion), a distance within the scene needs to be known. If no a priori knowledge of the scene is assumed, the usual solution is to derive, in some way, the initial distance from the camera to a target object. This paper proposes a new, very simple way to obtain such a distance when a zooming camera is available and there is a planar target in the scene. As in “two-grid calibration” algorithms, no estimation of the camera parameters is required, and no assumption about the stability of the optical axis across the different focal lengths is needed. Quite the reverse: the non-stability of the optical axis across the different focal lengths is the key ingredient that allows our depth estimate to be derived by applying a result from projective geometry. Experiments carried out on a mobile robot platform show the promise of the approach.
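
For context only (this is not the zoom-based method proposed in the paper): a minimal sketch, assuming OpenCV and already-matched image points, of the claim in the abstract's opening sentences that monocular egomotion is recovered only up to scale and that one known distance, such as the camera-to-target depth the paper estimates, fixes that scale. The function name, its parameters, and the use of a target mask are illustrative assumptions, not part of the paper.

```python
import cv2
import numpy as np

def metric_egomotion(pts1, pts2, K, target_mask, known_depth):
    """Resolve the scale of monocular egomotion from one known distance.

    pts1, pts2  : Nx2 float arrays of matched pixel coordinates in two frames.
    K           : 3x3 camera intrinsics.
    target_mask : boolean array of length N selecting matches on the target.
    known_depth : metric distance from the first camera to the target.
    """
    # Relative pose from the essential matrix: the translation comes back
    # with unit norm, i.e. egomotion is known only up to the scene scale.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Triangulate the matches in this up-to-scale reconstruction.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    depths = X[2] / X[3]  # z-coordinates in the first camera frame

    # The ratio between the known metric depth of the target and its
    # reconstructed (arbitrary-scale) depth fixes the global scale.
    scale = known_depth / np.median(depths[target_mask])
    return R, scale * t  # rotation and metric translation
```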

Categories

computer vision.

Scientific reference

G. Alenyà, M. Alberich-Carramiñana and C. Torras. Depth from the visual motion of a planar target induced by zooming, 2007 IEEE International Conference on Robotics and Automation, 2007, Rome, Italy, pp. 4727-4732, IEEE.