Publication

BiFold: Bimanual cloth folding with language guidance

Conference Article

Conference

IEEE International Conference on Robotics and Automation (ICRA)

Edition

2025

Pages

5652-5659

Doc link

https://dx.doi.org/10.1109/ICRA55743.2025.11127549

Abstract

Cloth folding is a complex task due to the inevitable self-occlusions of clothes, their complicated dynamics, and the disparate materials, geometries, and textures that garments can have. In this work, we learn folding actions conditioned on text commands. Translating high-level, abstract instructions into precise robotic actions requires sophisticated language understanding and manipulation capabilities. To this end, we leverage a pre-trained vision-language model and repurpose it to predict manipulation actions. Our model, BiFold, can take context into account and achieves state-of-the-art performance on an existing language-conditioned folding benchmark. To address the lack of annotated bimanual folding data, we introduce a novel dataset with automatically parsed actions and language-aligned instructions, enabling better learning of text-conditioned manipulation. BiFold attains the best performance on our dataset and demonstrates strong generalization to new instructions, garments, and environments.

Categories

artificial intelligence, computer vision

Scientific reference

O. Barbany, A. Colomé and C. Torras. BiFold: Bimanual cloth folding with language guidance. 2025 IEEE International Conference on Robotics and Automation (ICRA), Atlanta (GA, USA), pp. 5652-5659.