Publication
Adaptable multimodal interaction framework for robot-assisted cognitive training
Conference Article
Conference
ACM/IEEE International Conference on Human-Robot Interaction (HRI)
Edition
2018
Pages
327-328
Doc link
http://dx.doi.org/10.1145/3173386.3176911
Abstract
The size of the population with cognitive impairment is increasing worldwide, and socially assistive robotics offers a solution to the growing demand for professional carers. Adaptation to users generates more natural, human-like behavior, which may be crucial for wider robot acceptance. This work focuses on robot-assisted cognitive training of patients who suffer from mild cognitive impairment (MCI) or Alzheimer's disease. We propose a framework that adjusts the level of robot assistance and the way robot actions are executed according to user input. Actions can be performed using any of the following modalities, or a combination of them: speech, gesture, and display. The choice of modalities depends on the availability of the required resources. The user's memory state is modeled as a hidden Markov model (HMM) and is used to determine the level of robot assistance. A pilot user study was performed to evaluate the effects of the proposed framework on the quality of interaction with the robot.
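The abstract's central mechanism (a hidden Markov model over the user's memory state that drives the assistance level) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the two memory states, the transition and emission probabilities, and the assistance thresholds are all assumptions made here for demonstration.

```python
# Minimal HMM-filtering sketch (illustrative values, not from the paper).
# Hidden states: 0 = "remembers", 1 = "forgot"
TRANS = [[0.9, 0.1],   # P(next state | remembers)
         [0.3, 0.7]]   # P(next state | forgot)
# Observations: 0 = correct answer, 1 = incorrect answer
EMIT = [[0.8, 0.2],    # P(obs | remembers)
        [0.2, 0.8]]    # P(obs | forgot)

def forward_step(belief, obs):
    """One HMM filtering step: predict with TRANS, weight by EMIT, normalize."""
    predicted = [sum(belief[i] * TRANS[i][j] for i in range(2)) for j in range(2)]
    updated = [predicted[j] * EMIT[j][obs] for j in range(2)]
    total = sum(updated)
    return [u / total for u in updated]

def assistance_level(belief):
    """Map P(forgot) to a discrete assistance level (hypothetical thresholds)."""
    p_forgot = belief[1]
    if p_forgot < 0.3:
        return "none"
    if p_forgot < 0.7:
        return "hint"
    return "full"

belief = [0.5, 0.5]        # uniform prior over the memory states
for obs in [1, 1]:         # two incorrect answers in a row
    belief = forward_step(belief, obs)
print(assistance_level(belief))
```

Under these example parameters, repeated incorrect answers shift the belief toward the "forgot" state, so the framework would escalate the robot's assistance accordingly.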
Categories
Intelligent robots
Author keywords
human-robot interaction, social robotics
Scientific reference
A. Taranović, A. Jevtić and C. Torras. Adaptable multimodal interaction framework for robot-assisted cognitive training. 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI), Chicago, USA, pp. 327-328.