Master's Thesis

Gesture and haptic human-robot interaction


Supervisor/s

Information

  • If you are interested in this proposal, please contact the supervisors.

Description

Service robots need to be able to adapt to user needs, and teaching the robot new tasks is usually done through close human-robot interaction. This project will explore two interaction modalities, namely gestures and physical interaction. The goal is to develop a multi-modal interaction framework that integrates these two modalities to recognize user intentions. The developed algorithms will be tested in an assistive-dressing scenario, in which the robot will assist the user in putting on a shoe or a coat.

Methodology:
The project will start with an evaluation of existing tools for user tracking and gesture recognition (e.g., Kinect skeleton tracking), together with a study of the relevant literature on gesture-based and physical human-robot interaction. The expected outcomes of the project are:
- Development of a user intention recognition algorithm that uses gestures and physical interaction as input.
- Development of an interaction framework that allows switching between the two modalities depending on the recognized user intention (a minimal sketch is given after this list).
The project allows for possible extensions depending on the results obtained.
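To illustrate the second outcome, the following is a minimal C++ sketch (not part of the proposal) of how a rule-based intention recognizer and modality switch could be structured; the gesture labels, force threshold, and intention set are hypothetical placeholders that the thesis would replace with its own design, e.g. a learned classifier:

    #include <iostream>
    #include <string>

    // Hypothetical input: a recognized gesture label and the magnitude of the
    // external contact force (e.g., estimated from the WAM joint torques).
    struct ObservedInput {
        std::string gesture;   // e.g. "raise_arm", "stop", "none"
        double contact_force;  // [N]
    };

    enum class Intention { StartDressing, AdjustGarment, Stop, Unknown };
    enum class Modality  { Gesture, Physical };

    // Toy rule-based recognizer; hand-tuned rules stand in for a learned model.
    Intention recognizeIntention(const ObservedInput& in) {
        if (in.contact_force > 5.0)    return Intention::AdjustGarment;
        if (in.gesture == "raise_arm") return Intention::StartDressing;
        if (in.gesture == "stop")      return Intention::Stop;
        return Intention::Unknown;
    }

    // Choose the active modality from the recognized intention: physical
    // (compliant) control when the user is in contact, gestures otherwise.
    Modality selectModality(Intention intention) {
        return intention == Intention::AdjustGarment ? Modality::Physical
                                                     : Modality::Gesture;
    }

    int main() {
        ObservedInput sample{"raise_arm", 0.0};
        Modality m = selectModality(recognizeIntention(sample));
        std::cout << "Active modality: "
                  << (m == Modality::Physical ? "physical" : "gesture") << "\n";
        return 0;
    }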

The candidate will have access to the WAM robotic arms and Kinect 1 and Kinect 2 cameras of our Perception and Manipulation Lab.
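As a possible starting point for the tool evaluation, the sketch below shows how skeleton joints published by the ROS openni_tracker package (one TF frame per joint of each tracked user, e.g. "left_hand_1") could be read from a Kinect 1. The fixed frame name and joint naming are assumptions that depend on the tracker actually used, and the Kinect 2 would require a different driver:

    #include <ros/ros.h>
    #include <tf/transform_listener.h>

    int main(int argc, char** argv) {
        ros::init(argc, argv, "skeleton_listener");
        ros::NodeHandle nh;
        tf::TransformListener listener;

        ros::Rate rate(30.0);
        while (nh.ok()) {
            tf::StampedTransform hand;
            try {
                // openni_tracker publishes each joint of tracked user N as a TF
                // frame such as "left_hand_N", relative to the depth camera frame.
                listener.lookupTransform("openni_depth_frame", "left_hand_1",
                                         ros::Time(0), hand);
                ROS_INFO("Left hand at (%.2f, %.2f, %.2f)",
                         hand.getOrigin().x(), hand.getOrigin().y(),
                         hand.getOrigin().z());
            } catch (const tf::TransformException& ex) {
                ROS_WARN_THROTTLE(5.0, "No skeleton tracked yet: %s", ex.what());
            }
            rate.sleep();
        }
        return 0;
    }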

Required skills:
- Great interest in robotics.
- Good programming skills in C++.
Desired skills:
- Programming in ROS.
- Previous experience working with 3D cameras.

This work falls within the scope of the following project:

  • I-DRESS: Assistive interactive robotic system for support in dressing (web)