Publication

Knowledge Representation for Explainability in Collaborative Robotics and Adaptation

Conference Article

Conference

International Workshop on Data meets Applied Ontologies (DAO-XAI)

Edition

3rd

Pages

1-8

Doc link

http://ceur-ws.org/Vol-2998/paper4.pdf

Abstract

Autonomous robots will be deployed in a wide variety of contexts, interacting and/or collaborating with humans, who add uncertainty to the collaboration and trigger re-planning and adaptation of the robots' plans. Hence, trustworthy robots must be able to store and retrieve relevant knowledge about their collaborations and adaptations. Furthermore, they should also use that knowledge to generate explanations for their human collaborators. A reasonable approach is to first represent the domain knowledge as triples using an ontology, and then generate natural language explanations from the stored knowledge. In this article, we propose ARE-OCRA, an algorithm that generates explanations about target queries, which are answered by a knowledge base built using the Ontology for Collaborative Robotics and Adaptation (OCRA). The algorithm first queries the knowledge base to retrieve the set of triples sufficient to answer the queries, and then generates a natural language explanation from those triples. We also present an implementation of the algorithm's core routine, construct explanation, which generates explanations from a given set of triples. We consider three levels of abstraction, so that explanations can be tailored to different uses and preferences; this contrasts with most ontology-based work in the literature, which provides only a single type of explanation. The least abstract level, the set of triples, is intended for ontology experts and debugging; the second level, aggregated triples, is inspired by baselines from the literature; and the third level, which combines the triples' knowledge with natural language definitions of the ontological terms, is our novel contribution. We showcase the implementation in a collaborative robotic scenario, presenting the generated explanations for the set of OCRA's competency questions.
This work is a step toward explainable agency in collaborative scenarios where robots adapt their plans.
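The three abstraction levels described in the abstract can be sketched roughly as follows. This is a hypothetical illustration only: the function name, triple format, and definition strings are assumptions for the example, not the paper's actual ARE-OCRA implementation or OCRA's real term definitions.

```python
# Hypothetical sketch of a construct-explanation routine over
# (subject, predicate, object) triples, with three abstraction levels.
# Names and wording are illustrative, not taken from the paper.

def construct_explanation(triples, level, definitions=None):
    """Render a set of (subject, predicate, object) triples as an
    explanation at one of three abstraction levels."""
    if level == 1:
        # Least abstract: the raw triples, for ontology experts/debugging.
        return "\n".join(f"({s}, {p}, {o})" for s, p, o in triples)
    if level == 2:
        # Aggregated triples: group objects sharing a subject and predicate.
        grouped = {}
        for s, p, o in triples:
            grouped.setdefault((s, p), []).append(o)
        return "\n".join(f"{s} {p} " + " and ".join(objs)
                         for (s, p), objs in grouped.items())
    if level == 3:
        # Most abstract: triples enriched with natural language
        # definitions of the ontological terms (hypothetical wording).
        lines = []
        for s, p, o in triples:
            definition = (definitions or {}).get(p, "")
            suffix = f" ({definition})" if definition else ""
            lines.append(f"{s} {p} {o}.{suffix}")
        return "\n".join(lines)
    raise ValueError("level must be 1, 2 or 3")

# Example usage with made-up triples and a made-up term definition.
triples = [("robot1", "collaboratesWith", "operator1"),
           ("robot1", "adaptsPlan", "plan2")]
defs = {"collaboratesWith": "two agents share a goal and act jointly"}
print(construct_explanation(triples, 3, defs))
```

The design choice the levels reflect: level 1 preserves the knowledge base's exact contents for debugging, level 2 trades precision for readability by merging triples, and level 3 adds the ontology's own natural language definitions for non-expert collaborators.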

Categories

Knowledge engineering

Scientific reference

A. Olivares-Alarcos, S. Foix and G. Alenyà. Knowledge Representation for Explainability in Collaborative Robotics and Adaptation, 3rd International Workshop on Data meets Applied Ontologies, 2021, Bratislava, Slovakia (Virtual), pp. 1-8.