Adaptive Human-Robot Collaboration: Evolutionary Learning of Action Costs Using an Action Outcome Simulator

Silvia Izquierdo Badiola, Guillem Alenyà, Carlos Rizzo

Abstract: One of the main challenges for successful human-robot collaborative applications lies in adapting the plan to the human agent's changing state and preferences. A promising solution is to bridge the gap between agent modelling and AI task planning, which can be done by integrating the agent state as action costs in the task planning domain. This allows the plan to be adapted to different partners by influencing the action allocation. The difficulty then lies in setting appropriate action costs. This paper presents a novel framework to learn a set of planning action costs that reflect an agent's preferred actions based on their state. An evolutionary optimisation algorithm is used for this purpose, and an action outcome simulator, built on both an agent model and an action type model, acts as the black-box objective function. This addresses the challenge of collecting data in real-world HRC scenarios, accelerating the learning for subsequent fine-tuning in real applications. The coherence of the models and the simulator is validated through a survey, and the learning algorithm is shown to learn appropriate action costs, producing plans that satisfy both the agents' preferences and the prioritised plan requirements. The resulting system is a generic learning framework whose components can be easily extended to a wide range of applications, models and planning formalisms.
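To give a concrete picture of the learning loop described above, the following is a minimal Python sketch of evolving a vector of action costs against a black-box fitness function. It is an illustrative assumption throughout: the toy AGENT_PREFERENCES table, the simulate_outcome stand-in and the simple (mu + lambda) evolution strategy are hypothetical placeholders, not the paper's actual agent model, simulator interface or optimisation algorithm.

import random

# Action types handled by the planner (illustrative set).
ACTIONS = ["pick", "place", "inspect", "handover"]

# Assumed toy agent model: a preference score in [0, 1] per action type.
AGENT_PREFERENCES = {"pick": 0.9, "place": 0.7, "inspect": 0.2, "handover": 0.5}

def simulate_outcome(costs):
    """Black-box fitness: stand-in for the action outcome simulator.
    Rewards cost vectors that are low for preferred actions (so the
    planner allocates them to this agent) and high for dispreferred ones."""
    score = 0.0
    for action, cost in zip(ACTIONS, costs):
        score -= AGENT_PREFERENCES[action] * cost          # preferred -> cheap
        score += (1.0 - AGENT_PREFERENCES[action]) * cost  # dispreferred -> expensive
    return score

def mutate(costs, sigma=0.5):
    """Gaussian mutation, clamped to a valid cost range [0, 10]."""
    return [min(10.0, max(0.0, c + random.gauss(0.0, sigma))) for c in costs]

def evolve(pop_size=20, generations=100):
    """Simple (mu + lambda) evolution strategy over cost vectors."""
    population = [[random.uniform(0.0, 10.0) for _ in ACTIONS]
                  for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(p) for p in population]
        population = sorted(population + offspring,
                            key=simulate_outcome, reverse=True)[:pop_size]
    return population[0]

if __name__ == "__main__":
    best = evolve()
    for action, cost in zip(ACTIONS, best):
        print(f"{action}: cost {cost:.2f}")

In the framework itself, the fitness evaluation would instead query the action outcome simulator (combining the agent model and the action type model) on plans generated by the task planner under the candidate costs; the sketch only shows the overall evolutionary structure.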

Paper: Open PDF file.

RO-MAN23 Presentation Video

The following video describes the key concepts of the paper, as presented at RO-MAN 2023.

Additional Material

The survey designed to evaluate the validity of the action outcome simulator is available at the following link: Try the survey!