Publication

Joint segmentation and tracking of object surfaces in depth movies along human/robot manipulations

Conference Article

Conference

International Conference on Computer Vision Theory and Applications (VISAPP)

Edition

8th

Pages

244-251

Doc link

http://www.visapp.visigrapp.org/Abstracts/2013/VISAPP_2013_Abstracts.htm

Abstract

A novel framework for joint segmentation and tracking of object surfaces in depth videos is presented. Initially, the 3D colored point cloud obtained with the Kinect camera is used to segment the scene into surface patches, each defined by a quadratic function. The computed segments, together with their functional descriptions, are then used to partition the depth image of the subsequent frame consistently with the preceding frame. This way, solutions established in previous frames can be reused, which improves both the efficiency of the algorithm and the coherence of the segmentations throughout the video. The algorithm is tested on scenes showing human and robot manipulations of objects. We demonstrate that the method can successfully segment and track the human/robot arm and object surfaces throughout the manipulations. The performance is evaluated quantitatively by measuring the temporal coherence of the segmentations and the segmentation covering with respect to ground truth. The method provides a visual front end designed for robotic applications and can potentially be used in the context of manipulation recognition, visual servoing, and robot grasping tasks.
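To illustrate the kind of surface model the abstract refers to, the sketch below fits a quadratic patch to a segmented point-cloud region and scores how well that model explains a set of points. The paper does not specify its exact parameterization or fitting procedure; this is a minimal sketch assuming a bivariate quadratic z = ax^2 + by^2 + cxy + dx + ey + f estimated by linear least squares with NumPy, and the function names are hypothetical.

```python
import numpy as np

def fit_quadratic_patch(points):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to an (N, 3) point-cloud
    patch by linear least squares. Returns the six coefficients."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Design matrix: one row [x^2, y^2, xy, x, y, 1] per point.
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, _, _, _ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def residual_error(points, coeffs):
    """Mean absolute deviation of the points from the fitted surface; a score
    like this could be used to check whether points in the next frame are
    still consistent with a segment's stored surface model."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    return np.mean(np.abs(A @ coeffs - z))

# Hypothetical usage with a stand-in patch of 500 random 3D points.
patch = np.random.rand(500, 3)
model = fit_quadratic_patch(patch)
print(residual_error(patch, model))
```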

Categories

computer vision, pattern matching

Author keywords

range data, segmentation, motion, shape, surface fitting

Scientific reference

B. Dellen, F. Husain and C. Torras. Joint segmentation and tracking of object surfaces in depth movies along human/robot manipulations, 8th International Conference on Computer Vision Theory and Applications, 2013, Barcelona, pp. 244-251.