Publication

Personalised explainable robots using LLMs

Conference Article

Conference

ACM/IEEE International Conference on Human-Robot Interaction (HRI)

Edition

2025

Pages

1304-1308

Doc link

https://dl.acm.org/doi/10.5555/3721488.3721668

Abstract

In the field of Human-Robot Interaction (HRI), a key challenge lies in enabling humans to comprehend the decisions and behaviours of robots. One promising approach involves leveraging Theory of Mind (ToM) frameworks, wherein a robot estimates the mental model that a user holds about its functioning and compares it with its own internal model. This comparison allows the robot to identify potential mismatches and generate communicative actions to bridge such gaps. Effective communication requires the robot to maintain a unique mental model for each user and to personalise explanations based on past interactions. To address this, we propose an architecture grounded in Large Language Models (LLMs) that operationalises this theoretical framework. We demonstrate the feasibility of this approach through qualitative examples, showcasing responses provided by a robot patrolling a geriatric hospital.
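
The architecture is only described at a high level here. The sketch below illustrates the general idea under stated assumptions: it keeps one estimated mental model per user, compares it with the robot's own internal model to detect mismatches, and asks an LLM to phrase a personalised explanation. The data structures, the mismatch check and the prompt wording are illustrative assumptions, not the authors' implementation, and query_llm is a placeholder for whatever LLM backend the robot uses.

# Minimal sketch of a ToM-based explanation loop (illustrative, not the
# authors' implementation). `query_llm` is a placeholder callable that takes
# a prompt string and returns the LLM's text response.

from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class MentalModel:
    """Beliefs about the robot's functioning, stored as named facts."""
    beliefs: dict[str, str] = field(default_factory=dict)


@dataclass
class UserProfile:
    """Per-user state: the estimated mental model plus past interactions."""
    name: str
    estimated_model: MentalModel = field(default_factory=MentalModel)
    interaction_history: list[str] = field(default_factory=list)


def find_mismatches(robot_model: MentalModel, user_model: MentalModel) -> dict[str, tuple[str | None, str]]:
    """Return facts where the user's belief differs from (or lacks) the robot's."""
    return {
        key: (user_model.beliefs.get(key), value)
        for key, value in robot_model.beliefs.items()
        if user_model.beliefs.get(key) != value
    }


def explain(robot_model: MentalModel, user: UserProfile, query_llm) -> str | None:
    """Generate a personalised explanation when the mental models diverge."""
    mismatches = find_mismatches(robot_model, user.estimated_model)
    if not mismatches:
        return None  # mental models agree; no communicative action needed
    prompt = (
        f"You are a robot patrolling a geriatric hospital.\n"
        f"User: {user.name}\n"
        f"Past interactions with this user: {user.interaction_history}\n"
        f"The user's beliefs about you differ from your internal state: {mismatches}\n"
        f"Produce a short, personalised explanation that bridges this gap."
    )
    return query_llm(prompt)

Keeping the comparison separate from the LLM call, as above, means the language model is only used for the communicative step (phrasing the explanation), while the per-user mental models remain explicit and inspectable.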

Categories

artificial intelligence, service robots.

Author keywords

HRI, Explainable Robots

Scientific reference

F. Gebelli, L.B. Hriscu, R. Ros, S. Lemaignan, A. Sanfeliu and A. Garrell Zulueta. Personalised explainable robots using LLMs. 2025 ACM/IEEE International Conference on Human-Robot Interaction (HRI), Melbourne, Australia, pp. 1304-1308.