Publication

Force and velocity prediction in human-robot collaborative transportation tasks through video retentive networks

Conference Article

Conference

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Edition

2024

Pages

9307-9313

Doc link

https://doi.org/10.1109/IROS58592.2024.10801981

File

Download the PDF of the document

Abstract

In this article, we propose a generalization of a state-of-the-art deep learning architecture, Retentive Networks, so that it can accept video sequences as input. With this generalization, we design a force/velocity predictor for the medium-distance human-robot collaborative object transportation task. We achieve better results than with our previous predictor, reaching success rates on the test set of up to 93.7% in predicting the force to be exerted by the human and up to 96.5% in predicting the velocity of the human-robot pair over the next 1 s, and up to 91.0% and 95.0%, respectively, in real experiments. The new architecture also improves inference times by up to 32.8% across different graphics cards. Finally, an ablation test reveals that one of the input variables used so far, the position of the task goal, can be discarded, allowing the goal to be chosen dynamically by the human instead of being pre-set.
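The abstract's central idea, feeding a video sequence to a Retentive Network, builds on the retention mechanism's recurrent form, which updates a fixed-size state per input step and so keeps inference cost constant per frame. The following is a minimal sketch of that recurrence applied to per-frame embeddings, not the authors' implementation: the frame-embedding step, dimensions, and decay value `gamma` are illustrative assumptions.

```python
import numpy as np

def retention_recurrent(frames, Wq, Wk, Wv, gamma=0.9):
    """Recurrent form of a single retention head applied to a sequence
    of per-frame embeddings:
        S_n = gamma * S_{n-1} + k_n (outer) v_n,   o_n = q_n S_n
    `frames` has shape (num_frames, d_model); each row is assumed to be
    the embedding of one video frame (how frames are embedded is not
    specified here).
    """
    S = np.zeros((Wk.shape[1], Wv.shape[1]))  # fixed-size recurrent state
    outputs = []
    for x in frames:
        q, k, v = x @ Wq, x @ Wk, x @ Wv      # per-frame projections
        S = gamma * S + np.outer(k, v)        # decayed state update
        outputs.append(q @ S)                 # read-out for this frame
    return np.stack(outputs)

# Illustrative dimensions (not from the paper).
rng = np.random.default_rng(0)
d_model, d_head = 16, 8
frames = rng.standard_normal((5, d_model))    # 5 video-frame embeddings
Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
out = retention_recurrent(frames, Wq, Wk, Wv)
print(out.shape)  # one d_head-dimensional output per frame
```

Because the state `S` has constant size regardless of sequence length, per-frame inference cost does not grow with the video's duration, which is consistent with the reduced inference times reported in the abstract.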

Categories

artificial intelligence, generalisation (artificial intelligence), humanoid robots, intelligent robots, mobile robots.

Author keywords

Physical Human-Robot Interaction, Object Transportation, Force Prediction, Human-in-the-Loop

Scientific reference

J.E. Domínguez and A. Sanfeliu. Force and velocity prediction in human-robot collaborative transportation tasks through video retentive networks, 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, UAE, pp. 9307-9313.