Publication
Exploring transformers and visual transformers for force prediction in human-robot collaborative transportation tasks
Conference Article
Conference
IEEE International Conference on Robotics and Automation (ICRA)
Edition
2024
Pages
3191-3197
Doc link
https://doi.org/10.1109/ICRA57147.2024.10611205
Abstract
In this paper, we analyze the possibilities offered by state-of-the-art deep learning architectures such as Transformers and Visual Transformers for predicting the human's force in a human-robot collaborative object transportation task at a middle distance. We outperform our previous predictor, achieving a success rate of 93.8% on the test set and 90.9% in real experiments with 21 volunteers, in both cases predicting the force that the human will exert during the next 1 s. A modification of the architecture provides a second output with a velocity prediction, which improves the capabilities of our predictor when it is used to estimate the trajectory that the human-robot pair will follow. An ablation study is also performed to verify the relative contribution of each input to performance.
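
As a rough illustration of the kind of dual-output model described in the abstract, the sketch below shows a minimal Transformer encoder with two regression heads, one for force and one for velocity, over a window of past interaction features. It is a hypothetical PyTorch example with assumed input dimensions, horizon length, and pooling strategy, not the authors' actual architecture.

import torch
import torch.nn as nn

class DualHeadForcePredictor(nn.Module):
    """Minimal sketch: Transformer encoder over a window of past
    interaction features, with two heads predicting force and velocity
    over the next horizon. All dimensions here are assumptions."""

    def __init__(self, in_dim=6, d_model=64, n_heads=4, n_layers=2,
                 horizon=10, out_dim=2):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Two regression heads share the same encoded sequence.
        self.force_head = nn.Linear(d_model, horizon * out_dim)
        self.vel_head = nn.Linear(d_model, horizon * out_dim)
        self.horizon, self.out_dim = horizon, out_dim

    def forward(self, x):
        # x: (batch, time, in_dim) window of past force/velocity features.
        h = self.encoder(self.embed(x))
        pooled = h.mean(dim=1)  # simple temporal pooling
        force = self.force_head(pooled).view(-1, self.horizon, self.out_dim)
        vel = self.vel_head(pooled).view(-1, self.horizon, self.out_dim)
        return force, vel

# Usage with dummy data (shapes are assumptions).
model = DualHeadForcePredictor()
window = torch.randn(8, 20, 6)           # batch of 20-step input windows
force_pred, vel_pred = model(window)
print(force_pred.shape, vel_pred.shape)  # (8, 10, 2) each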
Categories
humanoid robots, learning (artificial intelligence), mobile robots.
Author keywords
Physical Human-Robot Interaction, Object Transportation, Force Prediction
Scientific reference
J.E. Domínguez and A. Sanfeliu. Exploring transformers and visual transformers for force prediction in human-robot collaborative transportation tasks, 2024 IEEE International Conference on Robotics and Automation, 2024, Yokohama (Japan), pp. 3191-3197.