Enric Corona, Guillem Alenyà, Toni Gabàs and Carme Torras
Abstract: Identification and bi-manual handling of deformable objects, like textiles, is one of the most challenging tasks in industrial and service robotics. Their unpredictable shape and pose make it very difficult to identify the type of garment and to locate the parts most suitable for grasping. In this paper, we propose an algorithm that first identifies the type of garment and then searches for the two grasping points that allow a robot to bring the garment to a known pose. We show that, using an active search strategy, it is possible to grasp a garment directly at predefined grasping points, as opposed to the usual approach based on multiple re-graspings of the lowest hanging parts. Our approach uses a hierarchy of three Convolutional Neural Networks (CNNs) with different levels of specialization, trained on both synthetic and real images. The results obtained in the three steps (recognition, first grasping point, second grasping point) are promising. Experiments with real robots show that most of the errors are due to unsuccessful grasps rather than to the localization of the grasping points, so a more robust grasping strategy is required.
We present a pipeline based on Convolutional Neural Networks (CNNs) to identify a garment and bring it to a known configuration. A piece of cloth is in a known configuration when it is grasped at two predefined reference points, so that a task such as folding the garment or dressing a person can be performed. The process consists of three steps: garment recognition, localization of the first grasping point, and localization of the second grasping point.
The pipeline needs one classifier CNN plus two more CNNs per garment, apart from the towel, whose corners can be reached by simply grasping the lowest point in the image while it hangs. Once the reference points for each garment are indicated, the whole process can be automated, since the networks are trained on simulated images.
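The hierarchy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `GarmentPipeline`, the stub models, and their outputs are all hypothetical placeholders standing in for trained CNNs.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

Point = Tuple[int, int]  # a 2D grasping point in image coordinates

@dataclass
class GarmentPipeline:
    # CNN 1: predicts the garment type from an input image.
    classify: Callable[[object], str]
    # Per-garment CNNs predicting the first and second grasping points.
    first_point: Dict[str, Callable[[object], Point]]
    second_point: Dict[str, Callable[[object], Point]]

    def run(self, image):
        garment = self.classify(image)
        if garment == "towel":
            # Towels need no dedicated CNNs: grasping the lowest hanging
            # point twice reaches the corners.
            return garment, ("lowest_point", "lowest_point")
        p1 = self.first_point[garment](image)
        p2 = self.second_point[garment](image)
        return garment, (p1, p2)

# Stub models standing in for trained networks (illustrative only):
pipeline = GarmentPipeline(
    classify=lambda img: "jeans",
    first_point={"jeans": lambda img: (120, 45)},
    second_point={"jeans": lambda img: (130, 50)},
)
garment, (p1, p2) = pipeline.run(None)
```

The dictionary of per-garment networks mirrors the specialization levels of the hierarchy: one generic classifier at the top, and increasingly specialized point predictors below it.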
We use a pair of jeans to illustrate the whole process in the following video, and we evaluate the performance of the manipulation process. In the left column of the video, the jeans are initially grasped from a random point. The ground truth, predefined on the waist, and the predictions are shown as white and green points, respectively. Note that at this step the garment can be in an infinite range of poses. If no points are predicted, the robot rotates the garment until at least one point becomes visible. The middle column then shows the jeans being grasped at the point whose coordinates are most accurately localized in the point cloud. Finally, a last CNN predicts the second grasping point, which is grasped in the third column.
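The active search for the first grasping point can be sketched as a simple loop: rotate the garment until the per-garment CNN predicts at least one visible point, then grasp the candidate whose 3D localization is most confident. All names here (`predict`, `rotate`, `localize`) are placeholders for illustration, not functions from the paper.

```python
def select_first_grasp(predict, rotate, localize, max_rotations=8):
    """Rotate the garment until at least one grasping point is predicted,
    then return the candidate with the highest localization confidence.

    predict  -- callable returning a (possibly empty) list of 2D points
    rotate   -- callable that turns the hanging garment for a new view
    localize -- callable scoring how well a point maps into the point cloud
    """
    for _ in range(max_rotations):
        points = predict()
        if points:
            return max(points, key=localize)
        rotate()  # no point visible: turn the garment and look again
    return None   # give up after a full sweep

# Illustrative stubs: nothing is visible until after one rotation.
views = [[], [(110, 40), (200, 80)]]
state = {"i": 0}

def predict():
    return views[min(state["i"], len(views) - 1)]

def rotate():
    state["i"] += 1

# Pretend the second candidate is better localized in the point cloud.
scores = {(110, 40): 0.6, (200, 80): 0.9}
best = select_first_grasp(predict, rotate, scores.get)
```

Selecting the best-localized candidate, rather than the first visible one, reduces the chance of grasping a point whose depth estimate is noisy.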
We performed experiments on the whole process of bringing a grasped real garment to a known configuration. Our setup includes two Barrett WAM robot arms and an Xtion camera. The predictions are not as accurate as in simulation but, still, the process brings each garment to a pose similar to the reference one. Regarding the robot execution, we have observed that the grasping action is a critical aspect. Most of the failures were caused by defective grasps, mainly because the robot gripper sometimes collides with the garment along the approach trajectory, changing the position of the grasping point. We think that a more elaborate grasping strategy would help, for example using a specific grasping orientation for each point. This orientation could be either predicted by the CNN or computed from the garment point cloud. Moreover, our gripper is generic, and a specialized gripper for garment manipulation may help.
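One plausible way to compute a grasping orientation from the point cloud, sketched below, is to estimate the local surface normal at the grasp point via PCA over neighbouring points and approach along it. The neighbourhood radius and the SVD-based normal estimation are our assumptions for illustration, not the paper's method.

```python
import numpy as np

def grasp_orientation(cloud: np.ndarray, grasp_point: np.ndarray,
                      radius: float = 0.03) -> np.ndarray:
    """Estimate an approach direction as the local surface normal around
    `grasp_point`, using PCA over points within `radius` (metres).

    cloud -- (N, 3) array of 3D points from the depth camera
    """
    dists = np.linalg.norm(cloud - grasp_point, axis=1)
    neighbours = cloud[dists < radius]
    centred = neighbours - neighbours.mean(axis=0)
    # The normal is the direction of least variance: the right-singular
    # vector with the smallest singular value of the centred neighbourhood.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

# Illustrative check on a flat horizontal patch: the normal should be +/-z.
xs = np.linspace(-0.05, 0.05, 11)
flat = np.array([[x, y, 0.0] for x in xs for y in xs])
n = grasp_orientation(flat, np.array([0.0, 0.0, 0.0]))
```

Approaching perpendicular to the estimated local surface could reduce the gripper-garment collisions observed in the experiments, since the fingers would close around the fabric instead of pushing it aside.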