Publication

Generalization in reinforcement learning with a task-related world description using rules

Technical Report (2006)

IRI code

IRI-TR-06-01

Abstract

A reinforcement learning (RL) problem is formulated as that of finding the action policy that maximizes the reward accumulated by the agent over time. One of the most popular RL algorithms is Q-Learning, which uses an action-value function Q(s,a) to estimate the maximum expected cumulative future reward obtainable by executing action a in state s. Q-Learning, like conventional RL techniques in general, is defined for discrete environments with finite sets of states and actions, and the action-value function is represented explicitly by storing a value for each state-action pair (s,a). To reach a good approximation of the value function, every (s,a) pair must be experienced many times, but in practical applications it is unfeasible to collect the amount of experience required for learning to take place. Therefore, the value function must be generalized so that it can be inferred in situations never experienced before. The generalization problem has been widely studied in the field of machine learning: supervised learning addresses this issue directly, and many generalization techniques have been developed there. Any of the representations used in supervised learning could, in principle, be applied to RL, but some important issues make good generalization in RL very hard to achieve; one of the most remarkable is that the value function is learned while it is being represented. In this work we propose an RL approach that uses a new representation of the Q function, one that allows good generalization by capturing function regularities in decision rules. The representation is a kind of decision list in which each rule delimits a subspace of the state-action space and provides an approximation of the Q function in the region it covers. For action evaluation, the selected rule is the one that combines good accuracy in its estimate with high confidence in the associated statistics.
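The tabular setting that the abstract contrasts with the proposed rule-based representation can be sketched as follows. This is a minimal illustration of the standard Q-Learning update on a toy 5-state chain environment; the environment, the epsilon-greedy policy, and all parameter values are illustrative assumptions, not taken from the report.

```python
import random

# Illustrative assumptions: a 5-state deterministic chain with reward 1
# only on reaching the last state, and standard Q-Learning parameters.
N_STATES, ACTIONS = 5, [0, 1]        # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Explicit table with one stored value per (s, a) pair -- the
# representation whose poor generalization motivates the rule-based one.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Deterministic chain transition; reward 1 on reaching the last state."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                 # episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection over the table.
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # Q-Learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print(round(Q[(3, 1)], 2))  # value of moving right from the next-to-last state -> 1.0
```

Even in this tiny example every (s,a) entry must be visited repeatedly before its value is accurate, which is the scaling problem the rule-based representation in the report is designed to avoid.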

Categories

generalisation (artificial intelligence), intelligent robots, learning (artificial intelligence).

Author keywords

Reinforcement learning, generalization, categorization, decision list

Scientific reference

A. Agostini and E. Celaya. Generalization in reinforcement learning with a task-related world description using rules. Technical Report IRI-TR-06-01, Institut de Robòtica i Informàtica Industrial, CSIC-UPC, 2006.