Publication
V-MIN: Efficient reinforcement learning through demonstrations and relaxed reward demands
Conference Article
Conference
AAAI Conference on Artificial Intelligence (AAAI)
Edition
29th
Pages
2857-2863
Doc link
http://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9634/9952
Abstract
Reinforcement learning (RL) is a common paradigm for learning tasks in robotics. However, a lot of exploration is usually required, making RL too slow for high-level tasks. We present V-MIN, an algorithm that integrates teacher demonstrations with RL to learn complex tasks faster. The algorithm combines active demonstration requests and autonomous exploration to find policies yielding rewards higher than a given threshold Vmin. This threshold sets the degree of quality with which the robot is expected to complete the task, thus allowing the user either to opt for very good policies that require many learning experiences, or to be more permissive with sub-optimal policies that are easier to learn. The threshold can also be increased online to force the system to improve its policies until the desired behavior is obtained. Furthermore, the algorithm generalizes previously learned knowledge, adapting well to changes. The performance of V-MIN has been validated through experimentation, including domains from the International Planning Competition. Our approach achieves the desired behavior where previous algorithms failed.
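Illustrative sketch
The paper is the authoritative description of V-MIN; the toy Python sketch below only illustrates the threshold-driven decision rule outlined in the abstract: exploit the learned policy once its value reaches v_min, explore autonomously while a budget remains, and otherwise request a teacher demonstration. All names here (TRUE_REWARDS, teacher_demonstration, explore_budget) are hypothetical placeholders, not the authors' implementation.

# Toy sketch (hypothetical names, not the paper's code): act on the learned
# values once they reach v_min, explore while a budget remains, otherwise
# request a teacher demonstration.

TRUE_REWARDS = {"push": 0.3, "grasp": 0.9, "wait": 0.0}  # hidden from the agent

def teacher_demonstration():
    """Hypothetical teacher: reveals the best action and its reward."""
    best = max(TRUE_REWARDS, key=TRUE_REWARDS.get)
    return best, TRUE_REWARDS[best]

def run(v_min, episodes=10, explore_budget=1):
    estimates = {a: 0.0 for a in TRUE_REWARDS}    # learned action values
    untried = list(TRUE_REWARDS)                  # actions not yet tried autonomously
    demos = explorations = 0
    for _ in range(episodes):
        best_action = max(estimates, key=estimates.get)
        if estimates[best_action] >= v_min:
            action = best_action                  # demand met: exploit the learned policy
        elif explorations < explore_budget and untried:
            action = untried.pop(0)               # autonomous exploration
            explorations += 1
        else:
            action, _ = teacher_demonstration()   # still below v_min: ask the teacher
            demos += 1
        estimates[action] = TRUE_REWARDS[action]  # toy deterministic value update
    return estimates, demos

if __name__ == "__main__":
    for threshold in (0.2, 0.8):                  # increasing v_min demands a better policy
        est, demos = run(v_min=threshold)
        print(f"v_min={threshold}: best learned value {max(est.values()):.1f}, "
              f"demonstrations requested {demos}")

With the permissive threshold the agent settles for the first acceptable policy it finds on its own; with the stricter one it exhausts its exploration budget and asks the teacher for a demonstration, mirroring the trade-off described in the abstract.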
Categories
learning (artificial intelligence), uncertainty handling.
Author keywords
reinforcement learning, active learning, model-based reinforcement learning
Scientific reference
D. Martínez, G. Alenyà and C. Torras. V-MIN: Efficient reinforcement learning through demonstrations and relaxed reward demands, 29th AAAI Conference on Artificial Intelligence, 2015, Austin, Texas, pp. 2857-2863.