User Evaluation of an Interactive Learning Framework for Single-Arm and Dual-Arm Robots

Aleksandar Jevtić, Adrià Colomé, Guillem Alenyà and Carme Torras

Social robots are expected to adapt to their users and, like their human counterparts, learn from the interaction. In our previous work, we proposed an interactive learning framework that enables a user to intervene and teach the robot new motions required to execute a modified manipulation task. In the current work, we implemented the proposed framework on single-arm and dual-arm robots and compared the user experience with these two robot settings. Two Barrett 7-DOF WAM robots used in the experiments were equipped with a Microsoft Kinect camera for user tracking and gesture recognition. User performance and workload were measured in a series of experiments with a group of 12 participants. Not surprisingly, the results showed that, for the same task, users required less time and produced shorter robot trajectories with the single-arm robot than with the dual-arm robot. The results also showed that the users who performed the task with the single-arm robot first experienced considerably less workload when performing the task with the dual-arm robot, while achieving a higher task success rate in a shorter time.

The video shows a user interacting with the dual-arm robot that implements the proposed interactive learning framework for a manipulation task. The system comprises two WAM robots and a Microsoft Kinect camera mounted on a tripod on the opposite side of the table from the user. The algorithms run in ROS on a Linux-based desktop PC. The manipulation task consists of displacing the robot end-effectors above the table without knocking over the obstacles. The user uses hand gestures to take over control of the robot arms and guide them around the obstacles along the shortest trajectory and in the shortest time possible.
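As a rough illustration of the gesture-based take-over behavior described above, the sketch below maps recognized gesture labels to per-arm control modes. This is not the authors' implementation: the gesture names, mode names, and the state structure are all hypothetical, and a real system would receive gesture labels from the Kinect pipeline via ROS messages rather than plain function calls.

```python
# Minimal sketch (hypothetical names, assumptions labeled): switching an arm
# between autonomous execution and user-guided motion based on gesture labels.
from dataclasses import dataclass


@dataclass
class ArmState:
    """Control state for one robot arm."""
    name: str
    mode: str = "autonomous"  # "autonomous" or "user_guided"


def handle_gesture(state: ArmState, gesture: str) -> ArmState:
    """Update the arm's control mode from a recognized gesture label.

    Gesture labels here are assumptions for illustration; any unrecognized
    label leaves the mode unchanged.
    """
    if gesture == "open_palm":        # hypothetical: user requests control
        state.mode = "user_guided"
    elif gesture == "closed_fist":    # hypothetical: user releases control
        state.mode = "autonomous"
    return state


# Example: the user takes over the left arm, then releases it.
left = ArmState("left_wam")
handle_gesture(left, "open_palm")
print(left.mode)   # user_guided
handle_gesture(left, "closed_fist")
print(left.mode)   # autonomous
```

In a ROS setup such as the one in the video, this mode switch would typically sit in a gesture-message callback, with each arm tracked independently so the user can guide one end-effector while the other continues its learned trajectory.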