Multi-FinGAN: Generative coarse-to-fine sampling of multi-finger grasps

Conference Article


IEEE International Conference on Robotics and Automation (ICRA)







While many methods exist for manipulating rigid objects with parallel-jaw grippers, grasping with multi-finger robotic hands remains a relatively unexplored research topic. Reasoning about, and planning collision-free trajectories over, the additional degrees of freedom of several fingers is an important challenge that has so far required computationally costly and slow processes. In this work, we present Multi-FinGAN, a fast generative multi-finger grasp sampling method that synthesizes high-quality grasps directly from RGB-D images in about a second. We achieve this by training, in an end-to-end fashion, a coarse-to-fine model composed of a classification network that distinguishes grasp types according to a specific taxonomy and a refinement network that produces refined grasp poses and joint angles. We experimentally validate and benchmark our method against a standard grasp-sampling method on 790 grasps in simulation and 20 grasps on a real Franka Emika Panda. All experimental results using our method show consistent improvements both in terms of grasp quality metrics and grasp success rate. Remarkably, our approach is up to 20-30 times faster than the baseline, a significant improvement that opens the door to feedback-based grasp re-planning and task-informative grasping. Code is available at
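The coarse-to-fine idea described above can be sketched as a two-stage pipeline: a coarse stage that classifies the grasp type from image features according to a taxonomy, and a fine stage that refines a canonical pre-shape for that type into a grasp pose and joint angles. The sketch below is purely illustrative; the taxonomy labels, joint counts, and the random linear "networks" are hypothetical stand-ins for the paper's learned models, not the authors' implementation.

```python
import numpy as np

# Hypothetical grasp taxonomy (stand-in labels, not the paper's exact classes).
GRASP_TAXONOMY = {
    0: "parallel_extension",
    1: "pen_pinch",
    2: "palmar_pinch",
    3: "precision_sphere",
}

# Canonical pre-shape joint angles per grasp type for a hypothetical
# 7-joint hand (made-up values for illustration only).
CANONICAL_JOINTS = {t: np.full(7, 0.1 * (t + 1)) for t in GRASP_TAXONOMY}


def classify_grasp_type(rgbd_features: np.ndarray) -> int:
    """Coarse stage: score each taxonomy class from image features.

    Stands in for the classification network; a fixed random linear
    layer replaces the learned weights here.
    """
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((len(GRASP_TAXONOMY), rgbd_features.size))
    logits = weights @ rgbd_features.ravel()
    return int(np.argmax(logits))


def refine_grasp(grasp_type: int, rgbd_features: np.ndarray):
    """Fine stage: output a 6-DoF pose offset and refined joint angles.

    Stands in for the refinement network; a small deterministic
    perturbation of the canonical pre-shape replaces the learned
    regression.
    """
    feats = rgbd_features.ravel()
    joints = CANONICAL_JOINTS[grasp_type] + 0.01 * np.tanh(feats[:7])
    pose = np.zeros(6)  # (x, y, z, roll, pitch, yaw) offset
    pose[:3] = 0.05 * np.tanh(feats[:3])
    return pose, joints


def sample_grasp(rgbd_features: np.ndarray):
    """Full coarse-to-fine pass: classify, then refine."""
    grasp_type = classify_grasp_type(rgbd_features)
    pose, joints = refine_grasp(grasp_type, rgbd_features)
    return GRASP_TAXONOMY[grasp_type], pose, joints


grasp_type, pose, joints = sample_grasp(np.linspace(-1, 1, 16))
print(grasp_type, pose.shape, joints.shape)
```

In the actual method both stages are trained end-to-end on RGB-D input, which is what makes the roughly one-second sampling time possible compared with search-based samplers.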


Keywords: computer vision, manipulators.

Scientific reference

J. Lundell, E. Corona, T. Nguyen Le, F. Verdoja, P. Weinzaepfel, G. Rogez, F. Moreno-Noguer and V. Kyrki. Multi-FinGAN: Generative coarse-to-fine sampling of multi-finger grasps. 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, 2021, pp. 4495-4501.