Publication

Body gestures recognition for social human-robot interaction

Conference Article

Conference

Iberian Robotics Conference (ROBOT)

Edition

7th

Doc link

https://doi.org/10.1109/ROBOT61475.2024.10797349

Abstract

This paper proposes a Deep Learning solution for human gesture classification, intended to support non-verbal communication between humans and robots. The research focuses on the creation of the “temPoral bOdy geSTUre REcognition model” (POSTURE), which can recognise continuous gestures performed in real-life situations. The proposed model takes both spatial and temporal components into account in order to recognise more natural and intuitive gestures. First, a framework extracts the landmarks of each body joint from every image. Next, data filtering techniques are applied to mitigate problems in the extracted data. Finally, different neural network configurations and approaches are tested to find the one with optimal performance. The model is validated through an extensive set of simulations and real-life experiments.
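The pipeline described in the abstract (per-frame landmark extraction, data filtering, then a temporal classifier) can be sketched in broad strokes. The snippet below is a minimal illustration, not the paper's actual implementation: it assumes the landmark extractor occasionally fails (producing NaN frames), fills those gaps by linear interpolation as one plausible filtering step, and slices the continuous stream into fixed-length clips for a temporal network. All function names, window sizes, and the interpolation strategy are hypothetical.

```python
import numpy as np

def interpolate_missing(landmarks):
    """Fill frames where the pose detector failed (rows of NaN) by linear
    interpolation between neighbouring valid frames -- one plausible reading
    of the paper's data-filtering step (hypothetical, not from the paper)."""
    filled = landmarks.copy()
    idx = np.arange(len(filled))
    for j in range(filled.shape[1]):          # interpolate each coordinate
        col = filled[:, j]
        nans = np.isnan(col)
        if nans.any() and not nans.all():
            col[nans] = np.interp(idx[nans], idx[~nans], col[~nans])
    return filled

def sliding_windows(landmarks, window=30, stride=10):
    """Cut the continuous landmark stream into fixed-length clips so that a
    spatio-temporal network can classify each clip independently."""
    starts = range(0, len(landmarks) - window + 1, stride)
    return np.stack([landmarks[s:s + window] for s in starts])

# Synthetic example: 100 frames of 4 landmark coordinates, with one
# dropped frame that the filtering step must repair.
stream = np.random.rand(100, 4)
stream[50] = np.nan                    # simulated detector failure
clean = interpolate_missing(stream)
clips = sliding_windows(clean)         # shape: (8, 30, 4)
```

Each clip in `clips` would then be fed to whatever temporal model (e.g. a recurrent or attention-based network) is being evaluated; the paper compares several such configurations.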

Categories

artificial intelligence, computer vision, image recognition, service robots

Scientific reference

J. Laplaza, J.J. Oliver, A. Sanfeliu and A. Garrell Zulueta. Body gestures recognition for social human-robot interaction, 7th Iberian Robotics Conference, 2024, Madrid, Spain.