Publication
Finding safe policies in model-based active learning
Conference Article
Conference
IROS Machine Learning in Planning and Control of Robot Motion Workshop (IROS MLPC)
Edition
2014
Pages
1-6
Doc link
http://www.cs.unm.edu/amprg/mlpc14Workshop/proceedings.html
Authors
D. Martínez, G. Alenyà and C. Torras
Abstract
Task learning in robotics is a time-consuming process, and model-based reinforcement learning algorithms have been proposed to learn from just a small number of experiences. However, reducing the number of experiences used to learn means the algorithm may overlook crucial actions required for optimal behavior. For example, a robot may learn simple policies that carry a high risk of never reaching the goal because they often fall into dead-ends.
We propose a new method that allows the robot to reason about dead-ends and their causes. By analyzing its current model and experiences, the robot hypothesizes the possible causes of a dead-end and identifies the actions that may lead to it, marking them as dangerous. Afterwards, whenever a dangerous action is included in a plan that has a high risk of leading to a dead-end, the robot triggers the special action "request teacher confirmation" to actively confirm with a teacher that the planned risky action should be executed.
This method permits learning safer policies with the addition of just a few teacher demonstration requests. Experimental validation of the approach is provided in two different scenarios: a robotic assembly task and a domain from the International Planning Competition. Our approach achieves success ratios very close to 1 in problems where previous approaches had high probabilities of reaching dead-ends.
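As a rough illustration of the gating idea the abstract describes, below is a minimal Python sketch: dangerous actions are flagged after dead-end analysis, and a plan whose estimated dead-end risk is too high gets a teacher-confirmation request inserted before each risky action. All names and numbers here (Action, estimate_dead_end_risk, RISK_THRESHOLD, the per-action risk value) are hypothetical stand-ins for exposition, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    dangerous: bool = False  # marked as dangerous after dead-end cause analysis

# Special action with which the robot asks the teacher to confirm a risky step.
REQUEST_TEACHER_CONFIRMATION = Action("request_teacher_confirmation")

RISK_THRESHOLD = 0.2  # hypothetical: maximum acceptable dead-end probability

def estimate_dead_end_risk(plan):
    """Hypothetical stand-in for the model-based risk estimate:
    assume each dangerous action independently fails into a dead-end
    with a fixed probability."""
    per_action_risk = 0.15
    safe_prob = 1.0
    for a in plan:
        if a.dangerous:
            safe_prob *= 1.0 - per_action_risk
    return 1.0 - safe_prob

def gate_plan(plan):
    """If the plan's overall dead-end risk is too high, insert a
    teacher-confirmation request before each dangerous action."""
    if estimate_dead_end_risk(plan) <= RISK_THRESHOLD:
        return plan
    gated = []
    for a in plan:
        if a.dangerous:
            gated.append(REQUEST_TEACHER_CONFIRMATION)
        gated.append(a)
    return gated

if __name__ == "__main__":
    plan = [Action("approach"),
            Action("insert_peg", dangerous=True),
            Action("release")]
    for a in gate_plan(plan):
        print(a.name)

Run as a script, this prints the plan with "request_teacher_confirmation" inserted before "insert_peg", since the single dangerous action already pushes the estimated risk above the threshold.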
Categories
intelligent robots, learning (artificial intelligence), planning (artificial intelligence), uncertainty handling.
Scientific reference
D. Martínez, G. Alenyà and C. Torras. Finding safe policies in model-based active learning, 2014 IROS Machine Learning in Planning and Control of Robot Motion Workshop, 2014, Chicago, pp. 1-6.