Robot motion adaptation through user intervention and reinforcement learning

Aleksandar Jevtić, Adrià Colomé, Guillem Alenyà and Carme Torras

Assistant robots are designed to perform specific tasks for the user, but their performance is rarely optimal, so they need to adapt to user preferences or new task requirements. We tackle this problem at the level of robot trajectory adaptation. We propose an interactive learning framework, based on user intervention and reinforcement learning (RL), that allows the user to correct an unsuitable segment of the robot trajectory by using hand movements to guide the robot along a corrective path.
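
To make the interaction loop concrete, below is a minimal Python sketch, assuming the trajectory is a sequence of end-effector waypoints. The callbacks (detect_intervention, record_guided_path, rl_update) and the toy obstacle scenario are hypothetical placeholders for illustration, not the actual implementation used in our framework.

```python
import numpy as np

def interactive_adaptation(trajectory, detect_intervention, record_guided_path,
                           rl_update, n_trials=20):
    """Repeatedly execute a waypoint trajectory; when the user intervenes on an
    unsuitable segment, collect the corrective path and run an RL update."""
    guided = []  # (waypoint index, corrected waypoint) pairs supplied by the user
    for _ in range(n_trials):
        for i in range(len(trajectory)):
            # A real system would command the robot towards trajectory[i] here.
            if detect_intervention(i, trajectory):
                guided.append((i, record_guided_path(i, trajectory)))
                break  # stop autonomous execution; the user is teleoperating
        trajectory = rl_update(trajectory, guided)
    return trajectory

# Toy stand-ins for the robot, the user and the learner, purely for illustration.
waypoints = np.linspace([0.0, 0.0], [1.0, 0.0], 11)   # straight-line reference path (x, y)

def detect(i, traj):                                   # "user" intervenes near an obstacle at x = 0.5
    return abs(traj[i][0] - 0.5) < 0.05 and traj[i][1] < 0.1

def guide(i, traj):                                    # "user" lifts the waypoint over the obstacle
    return traj[i] + np.array([0.0, 0.15])

def update(traj, guided):                              # trivial stand-in update: adopt the latest corrections
    traj = traj.copy()
    for i, corrected in guided:
        traj[i] = corrected
    return traj

adapted = interactive_adaptation(waypoints, detect, guide, update, n_trials=5)
print(adapted[5])                                      # the corrected waypoint near the obstacle
```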

Below, we show two video demos of user intervention for (1) a single-arm robot task and (2) a dual-arm robot task. The robots are programmed to displace their end-effectors along a straight line at a constant height above the table. The user's goal is to guide the robot end-effectors around the obstacles. A Kinect camera placed opposite the user tracks the hand movements. At a desired robot position, the user raises the right hand to stop the robot and initiate the teleoperation mode. The robot trajectory is then adapted using hand gestures to guide the robot along a corrective path. The user-guided trajectories are used as input to the RL algorithm, which produces an improved trajectory solution. The videos show the final trajectories learned after 5 and 20 user interventions, i.e., trials.
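
The RL algorithm itself is not detailed here; as a hedged illustration of how a set of user-guided trials could be combined into an improved trajectory, the following Python sketch applies a reward-weighted average of the guided trials, in the spirit of episodic policy-improvement methods such as PoWER. The reward values, the temperature beta and the toy trajectories are hypothetical and are not taken from the videos.

```python
import numpy as np

def reward_weighted_update(reference, guided_trials, rewards, beta=5.0):
    """Combine user-guided trajectories into an improved one by weighting each
    trial with exp(beta * reward), so higher-reward corrections dominate.
    All trajectories are (T, D) arrays of end-effector positions."""
    reference = np.asarray(reference, dtype=float)
    weights = np.exp(beta * (np.asarray(rewards) - np.max(rewards)))  # shift for numerical stability
    weights /= weights.sum()
    deviations = [np.asarray(t, dtype=float) - reference for t in guided_trials]
    return reference + sum(w * d for w, d in zip(weights, deviations))

# Toy example: two guided trials around an obstacle at x = 0.5, starting from a
# straight-line path at constant table height.
T = 11
straight = np.stack([np.linspace(0, 1, T), np.zeros(T)], axis=1)
bump = np.exp(-((straight[:, :1] - 0.5) ** 2) / 0.02)        # localised detour shape
trial_a = straight + np.array([0.0, 0.10]) * bump
trial_b = straight + np.array([0.0, 0.20]) * bump
rewards = [0.6, 0.9]   # hypothetical, e.g. obstacle clearance minus a path-length penalty
improved = reward_weighted_update(straight, [trial_a, trial_b], rewards)
```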

(1) User intervention on a single-arm robot.

(2) User intervention on a dual-arm robot.