Publication

What's “up”? — Resolving interaction ambiguity through non-visual cues for a robotic dressing assistant

Conference Article

Conference

IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)

Edition

26th

Pages

284-291

Doc link

https://doi.org/10.1109/ROMAN.2017.8172315

Authors

G. Chance, P. Caleb-Solly, A. Jevtić and S. Dogramadzi

Abstract

Robots that assist with activities of daily living (ADL), such as dressing, need to be capable of intuitive and safe interaction. Vision systems are often used to provide information on the position and movement of the robot and user. However, in a dressing context, technical complexity, occlusion and concerns over user privacy push research to investigate other approaches to human-robot interaction (HRI). We analysed verbal, proprioceptive and force feedback from 18 participants during a human-human dressing experiment in which users received dressing assistance from a researcher mimicking robot behaviour. This paper investigates the occurrence of deictic speech in an assisted-dressing task and how any ambiguity could be resolved to ensure safe and reliable HRI. We focus on one of the most frequently occurring deictic words, "up", which was captured over 300 times during the experiments and is used as an example of an ambiguous command. We attempt to resolve the ambiguity of these commands through predictive models, which were used to predict end-effector choice and the direction in which the garment should move. The model for predicting end-effector choice, based on the user's head orientation, achieved 70.4% accuracy. The model for predicting garment direction used the angle of the user's arm and achieved 87.8% accuracy. We also found that additional inputs, such as the starting position of the user's arms and end-effector height, may improve the accuracy of a predictive model. We present suggestions on how these inputs may be obtained through non-visual means, for example through haptic perception of end-effector position, proximity sensors and acoustic source localisation.
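
The abstract describes two predictive models: one mapping head orientation to end-effector choice, and one mapping arm angle to garment direction. As a minimal illustration of what such mappings could look like, the sketch below fits toy classifiers on synthetic data. The angle encodings, thresholds, labels and choice of decision trees are assumptions for demonstration only, not the authors' published method, and the synthetic data does not reproduce the reported accuracies.

# Illustrative sketch only -- the paper does not release code, so the feature
# encodings, synthetic data and classifier choice here are all assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# --- Model 1: end-effector choice from head orientation (paper: 70.4%) ---
# Hypothetical feature: head yaw in degrees; negative yaw = looking left.
yaw = rng.uniform(-60, 60, size=200).reshape(-1, 1)
effector = np.where(yaw.ravel() < 0, "left_arm", "right_arm")  # toy labels

effector_model = DecisionTreeClassifier(max_depth=2).fit(yaw, effector)
print(effector_model.predict([[-25.0]]))  # e.g. -> ['left_arm']

# --- Model 2: garment direction from arm angle (paper: 87.8%) ---
# Hypothetical feature: elevation of the user's forearm in degrees.
arm_angle = rng.uniform(0, 90, size=200).reshape(-1, 1)
# Toy rule: a raised arm means "up" moves the garment along the arm,
# a lowered arm means it moves vertically.
direction = np.where(arm_angle.ravel() > 45, "along_arm", "vertical")

direction_model = DecisionTreeClassifier(max_depth=2).fit(arm_angle, direction)
print(direction_model.predict([[70.0]]))  # e.g. -> ['along_arm']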

Categories

generalisation (artificial intelligence), speech recognition

Author keywords

Human-Robot Interaction, Assistive Robots

Scientific reference

G. Chance, P. Caleb-Solly, A. Jevtić and S. Dogramadzi. What's “up”? — Resolving interaction ambiguity through non-visual cues for a robotic dressing assistant, 26th IEEE International Symposium on Robot and Human Interactive Communication, 2017, Lisbon, Portugal, pp. 284-291.