Using CNNs to classify and grasp cloth garments



  • Started: 01/02/2016
  • Finished: 11/07/2016


Robots are becoming more autonomous every day, but it is still hard for them to work with deformable objects such as cloth garments. Because these objects change shape, robots must deal with situations that are unknown to them. In the case of clothing, before a robot can perform tasks with a garment (dressing a person, folding clothes...), it should be able to identify the garment and grasp it in a known configuration. This lets it manipulate each garment in the expected way.

The objective of this project is to classify pieces of cloth and extract information relevant to manipulation tasks. To do so, we will apply Convolutional Neural Networks (CNNs), which are achieving very promising results in classification tasks, to depth images of the garments. Training an accurate CNN requires a huge amount of data, so in order to obtain enough images we will use cloth simulation software to generate synthetic data.
Real pieces of clothing will be grasped from a table by a robotic arm and held in the same way as in the synthetic images. A camera will then capture a depth image of the garment, which will be evaluated by the network, letting the robot know which garment it is and how to manipulate it.
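The classification step described above can be sketched as a small CNN operating on single-channel depth images. This is only an illustrative sketch, not the project's actual network: the layer sizes, the 128x128 input resolution, and the four garment classes are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class GarmentCNN(nn.Module):
    """Toy CNN for classifying depth images of hanging garments.

    Hypothetical architecture for illustration only; the real network,
    input size, and number of classes may differ.
    """

    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            # Depth images have a single channel (distance), not RGB.
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),  # 128x128 input -> 16x16 feature maps
            nn.ReLU(),
            nn.Linear(64, num_classes),   # one score per garment category
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = GarmentCNN(num_classes=4)
# A batch of 8 synthetic 128x128 depth images (random values stand in
# for renders from the cloth simulator).
depth_batch = torch.randn(8, 1, 128, 128)
logits = model(depth_batch)
print(logits.shape)  # torch.Size([8, 4])
```

At inference time, the camera's depth image of the grasped garment would be preprocessed to the same resolution and passed through the trained network, with the highest-scoring class determining which garment the robot is holding.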

The work is under the scope of the following projects:

  • I-DRESS: Assistive interactive robotic system for support in dressing (web)