Publication
Towards explainable proactive robot interactions for groups of people in unstructured environments
Conference Article
Conference
ACM/IEEE International Conference on Human-Robot Interaction (HRI)
Edition
2024
Pages
697–701
Doc link
https://doi.org/10.1145/3610978.3640734
Abstract
For social robots to operate in unstructured public spaces, they need to gauge complex factors such as human-robot engagement and inter-person social groups, and to decide how and with whom to interact. Additionally, such robots should be able to explain their decisions after the fact, to improve accountability and confidence in their behavior. To address this, we present a two-layered proactive system that extracts high-level social features from low-level perceptions and uses these features to make high-level decisions regarding the initiation and maintenance of human-robot interactions. With this system outlined, the primary focus of this work is a novel method for generating counterfactual explanations in response to a variety of contrastive queries. We provide an early proof of concept to illustrate how these explanations can be generated by leveraging the two-layer system.
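To make the two-layer idea concrete, below is a minimal, hypothetical Python sketch, not the paper's implementation: the feature names (engaged, in_group, distance_m), the thresholds, and the rule-based decision layer are illustrative assumptions only. It shows a perception-to-feature layer, a feature-to-decision layer, and a contrastive query answered by perturbing one high-level feature at a time.

# Hypothetical sketch (illustrative only, not the authors' system): a two-layer
# structure where low-level perceptions are abstracted into high-level social
# features, a rule-based layer decides on them, and a counterfactual explanation
# is produced by changing one feature at a time and re-running the decision.
from dataclasses import dataclass, replace

@dataclass
class SocialFeatures:
    engaged: bool       # person appears engaged with the robot
    in_group: bool      # person belongs to an ongoing social group
    distance_m: float   # distance from the robot in metres

def extract_features(gaze_score: float, group_size: int, distance_m: float) -> SocialFeatures:
    """Layer 1: abstract low-level perceptions into high-level social features."""
    return SocialFeatures(engaged=gaze_score > 0.6,
                          in_group=group_size > 1,
                          distance_m=distance_m)

def decide(f: SocialFeatures) -> str:
    """Layer 2: decide whether to initiate an interaction from the features."""
    if f.engaged and not f.in_group and f.distance_m < 3.0:
        return "initiate"
    return "wait"

def counterfactual(f: SocialFeatures, query_decision: str) -> str:
    """Answer a contrastive query ('why not <query_decision>?') by finding a
    single-feature change that would have produced that decision."""
    for name, alt in [("engaged", not f.engaged),
                      ("in_group", not f.in_group),
                      ("distance_m", 1.0 if f.distance_m >= 3.0 else 5.0)]:
        if decide(replace(f, **{name: alt})) == query_decision:
            return f"The robot would have chosen '{query_decision}' if {name} had been {alt}."
    return "No single-feature change would alter the decision."

if __name__ == "__main__":
    features = extract_features(gaze_score=0.8, group_size=3, distance_m=2.0)
    print(decide(features))                      # -> wait (person is part of a group)
    print(counterfactual(features, "initiate"))  # -> ... if in_group had been False.

In this toy version the explanation is found by exhaustive single-feature perturbation of the high-level layer; the point is only that explanations are phrased over the same high-level social features the decision layer consumes, which is the property the abstract attributes to the two-layer design.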
Categories
pose estimation, social aspects of automation
Author keywords
Explainability, Human-Robot Interaction, Engagement, Proactive Decision-Making
Scientific reference
T. Love, A. Andriella and G. Alenyà. Towards explainable proactive robot interactions for groups of people in unstructured environments, in Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 2024, pp. 697–701.