Knowledge Representation for Explainability in Collaborative Robotics and Adaptation

Alberto Olivares-Alarcos, Sergi Foix and Guillem Alenyà

Institut de Robòtica i Informàtica Industrial, CSIC-UPC, C/ Llorens i Artigas 4-6, 08028 Barcelona, Spain.

Abstract - Autonomous robots are going to be used in a large diversity of contexts, interacting and/or collaborating with humans, who will add uncertainty to the collaborations and cause adaptations in the execution of robots' plans. Hence, trustworthy robots must be able to store and retrieve relevant knowledge about their collaborations and adaptations. Furthermore, they shall also use that knowledge to generate explanations for human collaborators. A reasonable approach is first to represent the domain knowledge as triples using an ontology, and then to generate natural language explanations from the stored knowledge. In this article, we propose an algorithm that generates explanations about target queries, which are answered by a knowledge base built using an Ontology for Collaborative Robotics and Adaptation (OCRA). Specifically, the algorithm first queries the knowledge base to retrieve the set of triples sufficient to answer the queries. Then, it generates the explanation in natural language from those triples. In this article, we focus on the latter part, providing an implementation that generates the explanations from a set of given triples. We consider three different levels of abstraction, so that explanations can be generated for different uses and preferences. This differs from most ontology-based works in the literature, which provide only a single type of explanation. The least abstract level, the set of triples, is intended for ontology experts and debugging, while the second level, aggregated triples, is inspired by other literature baselines. Finally, the third level of abstraction, which combines the triples' knowledge and the natural language definitions of the ontological terms, is our novel contribution. We showcase the performance of the implementation in a collaborative robotic scenario, presenting the generated explanations for the set of OCRA's competency questions.
This work is a step forward to explainable agency in collaborative scenarios where robots adapt their plans.


The 'construct explanation' routine has been implemented in Python; a notebook with the code and some use cases can be found here.
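A minimal sketch of such a routine is shown below, illustrating the three abstraction levels described above. All names here (Triple, DEFINITIONS, construct_explanation) and the example definitions are illustrative assumptions, not the actual implementation from the notebook.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    """An RDF-style (subject, predicate, object) statement."""
    subject: str
    predicate: str
    obj: str

# Hypothetical natural-language definitions of ontological terms,
# used by the most abstract explanation level.
DEFINITIONS = {
    "adaptsTo": "changes its plan in response to",
}

def construct_explanation(triples, level=3):
    """Return a textual explanation at the requested abstraction level."""
    if level == 1:
        # Least abstract: the raw triples, for ontology experts and debugging.
        return "; ".join(f"({t.subject}, {t.predicate}, {t.obj})" for t in triples)
    if level == 2:
        # Aggregated triples rendered as simple subject-predicate-object sentences.
        return " ".join(f"{t.subject} {t.predicate} {t.obj}." for t in triples)
    # Most abstract: substitute each predicate with its natural-language
    # definition, falling back to the term itself when no definition exists.
    return " ".join(
        f"{t.subject} {DEFINITIONS.get(t.predicate, t.predicate)} {t.obj}."
        for t in triples
    )

triples = [Triple("robot", "adaptsTo", "human intervention")]
print(construct_explanation(triples, level=3))
# prints: robot changes its plan in response to human intervention.
```

The design choice here is that only the third level needs the ontology's term definitions; the first two levels are purely syntactic transformations of the retrieved triples.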


Demo of the collaborative task from which we generate the explanations to enhance human collaborators' trust.


OCRA's OWL DL version was implemented using Protégé; the developed OWL file is publicly available to facilitate reuse of and comparison with OCRA. The application ontology implemented for the collaborative task of filling a tray can be downloaded here.


Olivares-Alarcos, A.; Foix, S.; Alenyà, G. Knowledge Representation for Explainability in Collaborative Robotics and Adaptation. 2021, submitted to workshop.