Aleksandar Taranović1, Aleksandar Jevtić1, Joan Hernández-Farigola2, Natalia Tantinya2, Carla Abdelnour2 and Carme Torras1
1Institut de Robòtica i Informàtica Industrial, CSIC-UPC, C/ Llorens i Artigas 4-6, 08028 Barcelona, Spain.
2Memory Clinic and Research Center of Fundació ACE, Institut Català de Neurociències Aplicades, Barcelona, Spain.
Most socially assistive robots interact with their users through multiple modalities. Multimodality is an important feature that can enable them to adapt to user behavior and the environment. In this work, we propose a resource-based modality-selection algorithm that adjusts the robot's use of interaction modalities according to the available resources, keeping the interaction with the user comfortable and safe. For example, the robot should not enter the board space while the user is occupying it, or speak while the user is speaking. We performed a pilot study in which the robot acted as a caregiver in cognitive training. We compared a system with the proposed algorithm to a baseline system that uses all modalities for all actions unconditionally. The results suggest that reduced interaction complexity does not significantly affect the user experience and may improve task performance.
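The resource-based selection described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the modality names, the resource mapping, and the function are all assumptions chosen to mirror the examples in the abstract (speech vs. the user speaking, gestures vs. the occupied board space).

```python
# Hypothetical sketch of resource-based modality selection.
# Assumption: each interaction modality occupies one shared resource,
# and the robot uses a modality only when the user is not occupying
# that resource.

# Illustrative mapping from modality to the resource it consumes.
MODALITY_RESOURCE = {
    "speech": "audio_channel",
    "arm_gesture": "board_space",
    "screen_text": "screen",
}

def select_modalities(requested, occupied_resources):
    """Return the subset of requested modalities whose resources
    are currently free (i.e., not occupied by the user)."""
    return [m for m in requested
            if MODALITY_RESOURCE[m] not in occupied_resources]

# Example: the user is speaking, so the audio channel is occupied;
# the robot falls back to gesture and on-screen text.
available = select_modalities(
    ["speech", "arm_gesture", "screen_text"],
    occupied_resources={"audio_channel"},
)
```

Under this sketch, the baseline system in the study would correspond to always returning the full `requested` list, regardless of which resources the user occupies.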