Publication

Evaluating the impact of explainability on the users’ mental models of robots over time

Conference Article

Conference

IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)

Edition

33rd

Doc link

https://www.inf.uni-hamburg.de/research/projects/trail/publications/pdfs/2024-03-gebelli-lbr-roman24.pdf

Abstract

To evaluate how explanations affect users' understanding of robots, researchers typically elicit the user's Mental Model (MM) of the robot and compare it to the robot's actual decision-making and behaviour. However, the user's self-rating of their level of understanding, which we define as "user-perceived understanding", is generally not evaluated. Moreover, this evaluation is typically performed only once, while robots are often expected to interact with the same users over long periods. In this work, we propose a framework to analyse the evolution of mental models over time along the dimensions of completeness and correctness. We argue that the goal of explainability should be two-fold: on the one hand, it should align the user's perceived understanding with their actual understanding; on the other hand, it should raise the completeness of the mental model to a target level, which varies depending on the user type, while also striving for maximum correctness.

Categories

Mobile robots, planning (artificial intelligence), service robots

Author keywords

Human-Robot Interaction, Social robots

Scientific reference

F. Gebelli, R. Ros, S. Lemaignan and A. Garrell Zulueta. Evaluating the impact of explainability on the users’ mental models of robots over time, 33rd IEEE International Symposium on Robot and Human Interactive Communication, 2024, Pasadena, California, USA, IEEE.