Publication
Handling high parameter dimensionality in reinforcement learning with dynamic motor primitives
Conference Article
Conference
ICRA Workshop on Novel Methods for Learning and Optimization of Control Policies and Trajectories for Robotics (LOCPTR)
Edition
2013
Doc link
http://www.robot-learning.de/Research/ICRA2013
Abstract
Dynamic Motor Primitives (DMP) are nowadays widely used as a movement parametrization for learning trajectories, thanks to their linearity in the parameters, robustness to rescaling, and continuity. However, when learning a movement with DMPs, where a set of Gaussians distributed along the trajectory is used to approximate an acceleration excitation function, a very large number of Gaussian approximations must be performed. Summing these over all joints yields too many parameters to explore, thus requiring a prohibitive number of experiments/simulations to converge to a solution with a (locally or globally) optimal reward. We propose here two strategies to reduce this dimensionality: the first explores only the most significant directions in the parameter space, and the second adds a reduced second set of Gaussians that optimizes the trajectory only after fixing the Gaussians that approximate the demonstrated movement.
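The abstract's dimensionality problem can be illustrated with the standard Gaussian-basis forcing term used in DMPs (in the style of Ijspeert et al.), which is linear in its weight parameters. The sketch below is illustrative only: the basis counts, centers, and widths are hypothetical choices, not values taken from the paper.

```python
import numpy as np

def dmp_forcing(x, centers, widths, weights):
    """Forcing term f(x) = x * sum_i(psi_i(x) * w_i) / sum_i(psi_i(x)),
    where psi_i are Gaussian basis functions of the phase variable x.
    Note f is linear in the weights w, as the abstract points out."""
    psi = np.exp(-widths * (x - centers) ** 2)  # Gaussian activations
    return x * psi.dot(weights) / psi.sum()

# Illustrative sizing: with N Gaussians per joint and J joints, the
# policy search must explore N * J weights, which quickly becomes
# prohibitive for reinforcement learning.
N, J = 20, 7
print("parameters to explore:", N * J)  # 140

centers = np.linspace(0.0, 1.0, N)   # spread along the phase interval
widths = np.full(N, 100.0)           # uniform basis widths
weights = np.zeros(N)                # zero weights -> zero forcing term
print(dmp_forcing(0.5, centers, widths, weights))  # 0.0
```

Exploring only the most significant directions of this weight space (rather than all N * J coordinates) is the first strategy the abstract proposes for taming this parameter count.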
Categories
intelligent robots, manipulators, robot kinematics, robot programming.
Author keywords
dynamic motor primitives, learning by demonstration
Scientific reference
A. Colomé, G. Alenyà and C. Torras. Handling high parameter dimensionality in reinforcement learning with dynamic motor primitives, 2013 ICRA Workshop on Novel Methods for Learning and Optimization of Control Policies and Trajectories for Robotics, 2013, Karlsruhe, Germany.