Publication
What would I do if? Promoting understanding in HRI through real-time explanations in the wild
Conference Article
Conference
IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Edition
33rd
Pages
504-509
Doc link
http://dx.doi.org/10.1109/RO-MAN60168.2024.10731403
Abstract
As robots become more integrated into human spaces, it is increasingly important for them to be able to explain their decisions to the people they interact with. These explanations need to be generated automatically and in real-time, in response to decisions taken in dynamic and often unstructured environments. However, most research in explainable human-robot interaction considers only explanations (often manually selected) presented in controlled environments. We present an explanation generation method based on counterfactuals and demonstrate it in an “in-the-wild” experiment, using automatically generated and selected explanations of autonomous interactions with real people to assess the effect of these explanations on participants’ ability to predict the robot’s behaviour in hypothetical scenarios. Our results suggest that explanations aid one’s ability to predict the robot’s behaviour, but also that the addition of counterfactual statements may add some burden and counteract this beneficial effect.
Categories
Social aspects of automation
Author keywords
Explainability, Transparency, Human-Robot Interaction, Counterfactual Explanation
Scientific reference
T. Love, A. Andriella and G. Alenyà. What would I do if? Promoting understanding in HRI through real-time explanations in the wild, 33rd IEEE International Symposium on Robot and Human Interactive Communication, 2024, Pasadena, California, USA, pp. 504-509, IEEE.