Grasping highly deformable objects, such as textiles, is an emerging research area that involves both perception and manipulation abilities. As new techniques appear, it becomes essential to design strategies to compare them. However, this is not an easy task, since the large state space of textile objects explodes when coupled with the variability of the grippers, robotic hands and robot arms performing the manipulation task. This high variability makes it very difficult to design repeatable experiments that evaluate the performance of a system and compare it to others. We propose a framework for comparing different grasping methods for textile objects. Instead of measuring each component separately, we propose a methodology that explicitly measures the vision-manipulation correlation, taking into account the throughput of the actions. Perceptions of deformable objects are grouped into clusters, and each available grasping action is tested on each perception type to obtain the action-perception success ratio. This characterization makes it possible to compare very different systems in terms of specialized actions and perceptions, or widely applicable actions, along with the cost of performing each action. We also show that this categorization is useful for manipulation planning of deformable objects.
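The action-perception success ratio described above can be sketched as a simple tabulation over grasping trials. The following is a minimal illustration, not the paper's implementation; the cluster names, action names and the `success_ratios` helper are all hypothetical, assuming each trial is recorded as a (perception cluster, action, outcome) triple.

```python
from collections import defaultdict

def success_ratios(trials):
    """Return {(perception, action): successes / attempts} for each tested pair."""
    counts = defaultdict(lambda: [0, 0])  # pair -> [successes, attempts]
    for perception, action, success in trials:
        counts[(perception, action)][1] += 1
        if success:
            counts[(perception, action)][0] += 1
    return {pair: s / n for pair, (s, n) in counts.items()}

# Hypothetical trial log: perception clusters and grasping actions are made up.
trials = [
    ("flat", "pinch-grasp", True),
    ("flat", "pinch-grasp", True),
    ("flat", "pinch-grasp", False),
    ("crumpled", "pinch-grasp", False),
    ("crumpled", "scoop-grasp", True),
]
ratios = success_ratios(trials)
# e.g. ratios[("flat", "pinch-grasp")] == 2/3
```

Pairing each ratio with the cost (e.g. execution time) of the corresponding action then yields the throughput-aware characterization that allows comparing systems with specialized versus widely applicable actions.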


Keywords

feature extraction, manipulators, robot vision

Author keywords

robot vision, textile manipulation, repeatable experiments, system comparison

Scientific reference

G. Alenyà, A. Ramisa, F. Moreno-Noguer and C. Torras. Characterization of textile grasping experiments, 2012 ICRA Workshop on Conditions for Replicable Experiments and Performance Comparison in Robotics Research, 2012, St Paul, Minnesota, USA, pp. 1-6.