Publication

Abstract

Robotic handling of textile objects in household environments is an emerging application that has recently received considerable attention thanks to the development of domestic robots. Most current approaches follow a multiple re-grasp strategy, in which clothes are sequentially grasped from different points until one of the grasps yields a desired configuration.

In this work we propose a vision-based method, built on the Bag of Visual Words approach, that combines appearance and 3D information to detect parts of clothes that are suitable for grasping, even when the garments are highly wrinkled.
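
As a rough illustration of how a bag-of-visual-words pipeline can combine the two modalities, the Python sketch below quantizes local appearance and depth descriptors against separate k-means vocabularies, concatenates the per-window histograms, and trains an SVM to score candidate windows. It is a generic sketch of the technique, not the implementation from the paper; the window structure and its "appearance"/"depth" descriptor fields are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Hypothetical input: each candidate window holds two sets of local
# descriptors, e.g. SIFT on the RGB image ("appearance") and a local
# 3D descriptor such as FPFH on the depth map ("depth").

def build_vocabulary(descriptors, k=256):
    # Cluster a sample of local descriptors into k visual words.
    return KMeans(n_clusters=k, n_init=10).fit(descriptors)

def bovw_histogram(descriptors, vocabulary):
    # Quantize descriptors against the vocabulary and return a
    # normalized histogram of visual-word occurrences.
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_grasp_classifier(windows, labels, k=256):
    # One vocabulary per modality; the concatenated histograms feed an
    # SVM that scores each window as graspable or not.
    app_vocab = build_vocabulary(np.vstack([w["appearance"] for w in windows]), k)
    dep_vocab = build_vocabulary(np.vstack([w["depth"] for w in windows]), k)
    X = np.array([np.hstack([bovw_histogram(w["appearance"], app_vocab),
                             bovw_histogram(w["depth"], dep_vocab)])
                  for w in windows])
    return app_vocab, dep_vocab, SVC(kernel="rbf").fit(X, labels)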

We also contribute a new annotated garment part dataset that can be used for benchmarking classification, part detection, and segmentation algorithms. The dataset is used to evaluate our approach and several state-of-the-art 3D descriptors on the task of garment part detection. Results indicate that appearance is a reliable source of information, but that augmenting it with 3D information helps the method perform better on previously unseen clothing items.

Categories

computer vision, manipulators, object detection, robot vision

Author keywords

computer vision, pattern recognition, machine learning, garment part detection, classification, bag-of-visual-words

Scientific reference

A. Ramisa, G. Alenyà, F. Moreno-Noguer and C. Torras. Learning RGB-D descriptors of garment parts for informed robot grasping. Engineering Applications of Artificial Intelligence, 35: 246-258, 2014.