Master Thesis

Speech and motion recognition for a robot assistant in dressing

Information

  • Started: 23/02/2016
  • Finished: 31/01/2017

Description

Robot assistants are expected to help users with their everyday tasks, for which they must be able to recognize users' attention and intentions. This project focuses on attention and intention recognition from motion and speech, as well as the development of a unified interaction framework that can efficiently combine these two modalities. The proposed framework will be tested in an assistive dressing scenario, i.e. helping the user put on and take off a piece of clothing.

Methodology:
The project will start with an evaluation of existing tools for user motion tracking (e.g. Kinect) and speech recognition (e.g. ROS-based tools), together with a study of the relevant literature on multi-modal interaction. The expected outcomes of the project are:
- Development of an algorithm for user following in the assistive dressing task. (User following requires both user motion tracking and robot motion planning.)
- Definition of a speech vocabulary relevant to the assistive dressing task, allowing the user to interrupt the robot and suggest a corrected action.
- Development of an algorithm that combines the two modalities and suggests robot actions based on the recognized user intention.
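The last outcome could be approached, in its simplest form, as rule-based fusion of the two modalities. The sketch below is purely illustrative and not taken from the project: the vocabulary words, joint-height pose test, and action names are all hypothetical placeholders, and it assumes speech (as a user interruption) takes priority over motion.

```python
# Illustrative rule-based fusion of speech and motion for assistive
# dressing. Vocabulary, thresholds, and action names are hypothetical.

# Small speech vocabulary for the dressing task (illustrative).
VOCABULARY = {"stop", "slower", "continue", "pull", "release"}

def motion_state(shoulder_y, wrist_y):
    """Classify a coarse arm pose from tracked joint heights (metres)."""
    return "arm_raised" if wrist_y > shoulder_y else "arm_lowered"

def suggest_action(speech_word, state):
    """Combine the two modalities: speech overrides, motion refines."""
    if speech_word == "stop":
        return "halt"              # user interruption has priority
    if speech_word == "slower":
        return "reduce_speed"
    if state == "arm_raised":
        return "slide_sleeve"      # user is ready for the dressing motion
    return "wait_for_pose"         # otherwise, wait for a suitable pose

# Usage: the user says "stop" while the arm is raised.
word = "stop" if "stop" in VOCABULARY else None
print(suggest_action(word, motion_state(1.4, 1.6)))  # halt
```

In practice the recognized intention would come from probabilistic models over tracked skeleton data and speech hypotheses rather than fixed rules, but the priority structure (explicit speech interruptions overriding ongoing motion-driven behaviour) is the part this sketch is meant to show.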

Platform:
The algorithms will be implemented on two Barrett WAM robotic arms and external depth cameras (Kinect 1 and Kinect 2).

UPCommons link

The work is under the scope of the following projects:

  • I-DRESS: Assistive interactive robotic system for support in dressing (web)