A Friction-Model-Based Framework for Reinforcement Learning of Robotic Tasks in Non-Rigid Environments

Adrià Colomé, Antoni Planells and Carme Torras

Learning motion tasks in a real environment with deformable objects requires not only a Reinforcement Learning (RL) algorithm, but also a good motion characterization, a preferably compliant robot controller, and an agent providing the feedback used as rewards/costs in the RL algorithm. In this paper, we unify all these parts in a simple but effective way to properly learn safety-critical robotic tasks such as wrapping a scarf around the neck (so far, of a mannequin). We found that a suitable compliant controller ought to have a good Inverse Dynamic Model (IDM) of the robot. However, most approaches to building such a model do not account for friction hysteresis, which is present in robots such as the Barrett WAM. For this reason, in order to improve the available IDM, we derived an analytical model of friction in the seven robot joints, whose parameters can be automatically tuned for each particular robot. This permits compliantly tracking diverse trajectories over the whole workspace. By using such a friction-aware controller, Dynamic Movement Primitives (DMPs) as motion characterization, and visual/force feedback within the RL algorithm, experimental results demonstrate that the robot can consistently learn tasks that could not be learned otherwise.
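
To illustrate the kind of friction compensation involved, the sketch below implements a simple per-joint friction term in which the Coulomb level depends on the direction of motion, one simple way to approximate the hysteretic torque-velocity behavior observed on robots such as the Barrett WAM. The parameter names and the dead-band handling are illustrative assumptions, not the paper's model or notation.

# Minimal sketch of a direction-dependent per-joint friction model, a
# simple approximation of frictional hysteresis. Names and values are
# illustrative assumptions, not the paper's analytical model.

def friction_torque(qdot, Fc_pos, Fc_neg, Fv, v_eps=1e-3):
    """Estimated friction torque (Nm) for a single joint.

    qdot   : joint velocity (rad/s)
    Fc_pos : Coulomb friction magnitude for positive velocities (Nm)
    Fc_neg : Coulomb friction magnitude for negative velocities (Nm)
    Fv     : viscous friction coefficient (Nm*s/rad)
    v_eps  : dead band around zero velocity, where compensation is
             disabled to avoid chattering
    """
    if qdot > v_eps:
        return Fc_pos + Fv * qdot
    if qdot < -v_eps:
        return -Fc_neg + Fv * qdot
    return 0.0

# The per-joint parameters (Fc_pos, Fc_neg, Fv) would be identified from
# data, e.g. by least-squares on the torque residual left by the stock
# IDM, and the result added to each joint's feedforward command.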

Experiments

In this video, we show two experiments. The first shows how a Dynamic Movement Primitive is taught to a robotic arm, and how the robot reproduces it compliantly thanks to the addition of a friction model that improves the controller. The second shows the process of learning to place a scarf around a mannequin's neck. With the help of visual feedback and a compliant controller, the robot successfully learns this task, which would probably be unachievable otherwise.
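
For readers unfamiliar with DMPs, the following is a minimal sketch of a one-dimensional discrete DMP rollout of the kind used to encode a taught motion; the gains, basis-function widths, and integration step are illustrative assumptions rather than the settings used in the video.

import numpy as np

def dmp_rollout(y0, g, w, tau=1.0, dt=0.002, alpha_z=25.0, alpha_x=8.0):
    """Integrate a 1-D discrete DMP from start y0 to goal g.

    w : weights of the Gaussian basis functions shaping the motion
    """
    beta_z = alpha_z / 4.0                           # critical damping
    n = len(w)
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n))  # basis centers in phase
    h = n ** 1.5 / c                                 # widths (common heuristic)
    y, z, x = y0, 0.0, 1.0                           # state and phase variable
    traj = []
    while x > 1e-3:
        psi = np.exp(-h * (x - c) ** 2)              # basis activations
        f = x * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)  # forcing term
        z += dt * (alpha_z * (beta_z * (g - y) - z) + f) / tau
        y += dt * z / tau
        x += dt * (-alpha_x * x) / tau               # canonical system decay
        traj.append(y)
    return np.array(traj)

# Example: with w learned from a demonstration (e.g. by locally weighted
# regression), the rollout reproduces the taught shape between new start
# and goal positions.
print(dmp_rollout(0.0, 1.0, np.zeros(20))[-1])       # ~1.0: converges to goal

In an RL setting such as the scarf-wrapping experiment, it is typically the weights w that are perturbed and updated between rollouts, with the visual/force feedback providing the cost for each attempt.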