Learning in categorizable environments

Conference Article


International Conference on Simulation of Adaptive Behavior (SAB)







Efficient learning in animats living in complex environments can only be achieved if we depart from more restrictive initial hypotheses about the environment than the ones usually assumed in standard Reinforcement Learning. In this paper, we observe that in realistic situations most of the sensory information available to the robot is irrelevant for determining the effects of an animat's actions and that, consequently, those effects can be correctly foreseen by attending to only a few of the animat's sensors. Environments in which this condition holds are called categorizable environments. In this kind of environment, the reinforcement learning problem can be redefined from the usual state-action evaluation approach to one in which the objective is to determine the relevance of the sensors with respect to each action (and, of course, the expected reward for each action). Based on this analysis, we present an animat control system that is able to find the conditions under which the effect of each action can be determined. The system focuses its attention on the situations and actions that are most critical for the animat and, additionally, tries to make efficient use of the rarely presented opportunities to learn interesting things. We present results of applying this animat control architecture to the task of step coordination in a simulated six-legged robot walking on flat and rough terrain.
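To make the shift from state-action evaluation to per-action sensor relevance concrete, here is a minimal sketch of the idea, not the paper's actual algorithm: for each action, we keep the reward statistics conditioned on each individual sensor value, and call a sensor relevant to an action when conditioning on it changes the action's expected reward. All class and function names here are illustrative assumptions.

```python
import random
from collections import defaultdict

class CategorizableLearner:
    """Toy illustration of learning in a categorizable environment:
    for each action, estimate which sensors are relevant, i.e. which
    sensor readings change the action's expected reward."""

    def __init__(self, n_sensors, n_actions):
        self.n_sensors = n_sensors
        self.n_actions = n_actions
        # per (action, sensor_index, sensor_value): [running mean reward, count]
        self.stats = defaultdict(lambda: [0.0, 0])

    def update(self, sensors, action, reward):
        # update the running mean reward conditioned on each sensor's value
        for i, v in enumerate(sensors):
            m, n = self.stats[(action, i, v)]
            self.stats[(action, i, v)] = [m + (reward - m) / (n + 1), n + 1]

    def relevance(self, action, sensor):
        """Spread of conditional mean rewards across this sensor's values:
        near zero if the sensor does not affect the action's outcome."""
        means = [m for (a, i, v), (m, n) in self.stats.items()
                 if a == action and i == sensor and n > 0]
        return max(means) - min(means) if len(means) > 1 else 0.0

# Demo: three binary sensors, one action whose reward depends only on sensor 0.
random.seed(0)
learner = CategorizableLearner(n_sensors=3, n_actions=1)
for _ in range(500):
    s = [random.randint(0, 1) for _ in range(3)]
    r = 1.0 if s[0] == 1 else 0.0
    learner.update(s, 0, r)

print(learner.relevance(0, 0))  # large: sensor 0 determines the reward
print(learner.relevance(0, 1))  # near zero: sensor 1 is irrelevant
```

The learner discovers that the action's effect can be foreseen from sensor 0 alone, so sensors 1 and 2 can be ignored for this action; this is the condition the abstract exploits to avoid enumerating the full joint state space.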



Scientific reference

J.M. Porta and E. Celaya. Learning in categorizable environments, 6th International Conference on Simulation of Adaptive Behavior, 2000, Paris, France, in From Animals to Animats, pp. 343-352, 2000, MIT Press, Cambridge, United States of America.