Assessing image features for vision-based robot positioning

Journal Article (2001)


Journal of Intelligent and Robotic Systems







The development of any robotics application relying on visual information always raises the key question of which image features would be most informative about the motion to be performed. In this paper, we address this question in the context of visual robot positioning, where a neural network is used to learn the mapping between image features and robot movements, and global image descriptors are preferred to local geometric features. Using a statistical measure of variable interdependence called Mutual Information, we select the subsets of image features most relevant for determining pose variations along each of the six degrees of freedom (dof's) of camera motion. Four families of global features are considered: geometric moments, eigenfeatures, Local Feature Analysis vectors, and a novel feature called Pose-Image Covariance vectors. The experimental results show the quantitative and qualitative benefits of performing this feature selection prior to training the neural network: fewer network inputs are needed, which considerably shortens training times; the dof's that would yield larger errors can be determined beforehand, so that more informative features can be sought; and the order of the features selected for each dof often admits an intuitive explanation, which in turn helps to provide insights for devising features tailored to each dof.
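The core idea of ranking features by their Mutual Information with each pose variable can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a design matrix of global image features and a vector of pose values for one dof, and uses a simple histogram-based MI estimator (the function and bin count are illustrative choices).

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based estimate of I(X; Y) in nats between two samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def rank_features(features, pose_dof, top_k=5):
    """Rank feature columns by estimated MI with one pose degree of freedom.

    features : (n_samples, n_features) array of global image descriptors
    pose_dof : (n_samples,) array with the value of one dof per sample
    Returns the indices of the top_k features and their MI scores.
    """
    scores = np.array([mutual_information(features[:, j], pose_dof)
                       for j in range(features.shape[1])])
    order = np.argsort(scores)[::-1]
    return order[:top_k], scores[order[:top_k]]
```

Repeating `rank_features` for each of the six dof's yields one (possibly different) feature subset per dof, which is then used as the input layer of the corresponding network.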



Author keywords

feature selection, global image descriptors, mutual information, robot neurocontrol, variable interdependence, visual robot positioning

Scientific reference

G. Wells and C. Torras. Assessing image features for vision-based robot positioning. Journal of Intelligent and Robotic Systems, 30(1): 95-118, 2001.