Publication
Dynamic cloth manipulation with deep reinforcement learning
Conference Article
Conference
IEEE International Conference on Robotics and Automation (ICRA)
Edition
2020
Pages
4630-4636
Doc link
http://dx.doi.org/10.1109/ICRA40945.2020.9196659
Abstract
In this paper we present a Deep Reinforcement Learning approach to solve dynamic cloth manipulation tasks. Unlike the case of rigid objects, we stress that the followed trajectory (including speed and acceleration) has a decisive influence on the final state of the cloth, which can vary greatly even if the positions reached by the grasped points are the same. We explore how goal positions for non-grasped points can be attained through learning adequate trajectories for the grasped points. Our approach uses a few demonstrations to improve control policy learning, and a sparse reward to avoid engineering complex reward functions. Since perception of textiles is challenging, we also study different state representations to assess the minimum observation space required for learning to succeed. Finally, we compare different combinations of control policy encodings, demonstrations, and sparse reward learning techniques, and show that our proposed approach can learn dynamic cloth manipulation in an efficient way, i.e., using a reduced observation space, a few demonstrations, and a sparse reward.
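As a rough illustration only (not code from the paper), the sparse reward described in the abstract, where success depends on the non-grasped cloth points reaching their goal positions rather than on a shaped distance term, might be sketched as follows. The function name, point representation, and distance threshold are all illustrative assumptions:

```python
import numpy as np

def sparse_reward(cloth_points, goal_points, threshold=0.05):
    """Illustrative sparse reward: 0.0 on success, -1.0 otherwise.

    Success means every tracked (non-grasped) cloth point lies within
    `threshold` (in the same units as the positions) of its goal.
    No intermediate shaping is given, so the agent only receives a
    distinct signal when the goal configuration is actually reached.
    """
    # Euclidean distance of each tracked point to its goal position.
    dists = np.linalg.norm(
        np.asarray(cloth_points, dtype=float) - np.asarray(goal_points, dtype=float),
        axis=-1,
    )
    return 0.0 if np.all(dists < threshold) else -1.0
```

Such binary rewards are what make demonstrations valuable here: without them, random exploration rarely stumbles on the rewarded configurations.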
Categories
intelligent robots, manipulators.
Author keywords
Deep Reinforcement Learning, Dynamic Manipulation, Learning in Simulation, Deformable Object Manipulation
Scientific reference
R. Jangir, G. Alenyà and C. Torras. Dynamic cloth manipulation with deep reinforcement learning. 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, pp. 4630-4636, IEEE.