PhD Thesis

Fostering human-robot mutual understanding by explaining the internal beliefs





  • Started: 01/07/2023


Robots must interact with the real world under partial knowledge and noisy perception. To decide on the most suitable next action, robots usually rely, often implicitly, on an internal representation of the human's knowledge about the task, the human's preferences, and the robot's own capabilities. This project will investigate methods to represent and communicate the internal beliefs that a robot's decision system has employed. The goal is for the robot to execute a task and then be able to recall and explain the decisions it took, either after the task is done (only the past is relevant) or while it is still being executed (both past and future are relevant).
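As a rough illustration of the idea, the sketch below records each decision together with the beliefs that supported it, so the robot can later answer "why did you do that?". All class and field names here are hypothetical placeholders, not part of any project codebase:

```python
from dataclasses import dataclass


@dataclass
class Belief:
    """One belief the planner held when choosing an action (hypothetical schema)."""
    statement: str    # e.g. "the human knows where the cup is"
    confidence: float # the planner's subjective probability


@dataclass
class Decision:
    action: str
    beliefs: list  # the beliefs that supported this choice


class ExplainableLog:
    """Stores decisions with their supporting beliefs for later explanation."""

    def __init__(self):
        self.decisions = []

    def record(self, action, beliefs):
        # Called at decision time: remember why the action was chosen.
        self.decisions.append(Decision(action, beliefs))

    def explain(self, action):
        # Post-hoc query ("only the past is relevant"): retrieve the
        # recorded beliefs and phrase them as a justification.
        for d in self.decisions:
            if d.action == action:
                reasons = "; ".join(
                    f"{b.statement} (p={b.confidence:.2f})" for b in d.beliefs
                )
                return f"I chose '{d.action}' because: {reasons}"
        return f"No record of action '{action}'."


log = ExplainableLog()
log.record("hand over the cup", [
    Belief("the human is looking for the cup", 0.9),
    Belief("the cup is within my reach", 0.8),
])
print(log.explain("hand over the cup"))
```

Explaining a task while it is running would additionally require logging the planner's expectations about future steps, which this minimal post-hoc sketch omits.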

The work is under the scope of the following projects:

  • TRAIL: TRAnsparent InterpretabLe robots (web)