Publication

2by2: weakly-supervised learning for global action segmentation

Conference Article

Conference

International Conference on Pattern Recognition (ICPR)

Edition

27th

Pages

380-395

Doc link

https://doi.org/10.1007/978-3-031-78125-4_26

Abstract

This paper presents a simple yet effective approach for the under-investigated task of global action segmentation, which aims to group frames capturing the same action across videos of different activities. Unlike the case of videos all depicting the same activity, the temporal order of actions is not roughly shared among all videos, making the task even more challenging. We propose to use activity labels to learn, in a weakly-supervised fashion, action representations suitable for global action segmentation. For this purpose, we introduce a triadic learning approach for video pairs, to ensure intra-video action discrimination, as well as inter-video and inter-activity action association. For the backbone architecture, we use a Siamese network based on sparse transformers that takes video pairs as input and determines whether they belong to the same activity. The proposed approach is validated on two challenging benchmark datasets, Breakfast and YouTube Instructions, outperforming state-of-the-art methods.
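To make the pairing setup concrete, the following is a minimal PyTorch sketch of a Siamese pair encoder with a same-activity classification head. It is an illustration only, not the authors' implementation: a standard dense Transformer encoder stands in for the sparse transformer described in the paper, and all module names, dimensions, and hyperparameters are assumed for the example.

    # Illustrative sketch (not the paper's code): Siamese frame encoder for video
    # pairs plus a binary head that predicts whether the pair shares an activity.
    import torch
    import torch.nn as nn

    class PairActivityModel(nn.Module):
        def __init__(self, feat_dim=2048, d_model=256, n_heads=4, n_layers=2):
            super().__init__()
            self.proj = nn.Linear(feat_dim, d_model)      # project frame features
            enc_layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True
            )
            # Dense Transformer used here as a stand-in for a sparse transformer.
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
            # Pair-level head: same activity or not.
            self.cls = nn.Sequential(
                nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1)
            )

        def encode(self, frames):                          # frames: (B, T, feat_dim)
            return self.encoder(self.proj(frames))         # per-frame action embeddings

        def forward(self, frames_a, frames_b):
            h_a = self.encode(frames_a)                    # shared (Siamese) weights
            h_b = self.encode(frames_b)
            v_a, v_b = h_a.mean(dim=1), h_b.mean(dim=1)    # pool each video over time
            logit = self.cls(torch.cat([v_a, v_b], dim=-1)).squeeze(-1)
            return h_a, h_b, logit                         # frame embeddings + pair logit

    # Minimal usage: supervision is only the weak, pair-level activity label.
    model = PairActivityModel()
    x_a, x_b = torch.randn(2, 120, 2048), torch.randn(2, 90, 2048)
    same_activity = torch.tensor([1.0, 0.0])
    _, _, logit = model(x_a, x_b)
    loss = nn.functional.binary_cross_entropy_with_logits(logit, same_activity)

The frame-level embeddings returned by the encoder are what would then be grouped into actions across videos; the triadic intra-video and inter-video objectives of the paper are omitted from this sketch.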

Categories

Pattern recognition.

Author keywords

Temporal Action Segmentation; Weakly-Supervised Learning; Video Alignment.

Scientific reference

E.B. Bueno Benito and M. Dimiccoli. 2by2: weakly-supervised learning for global action segmentation. In 27th International Conference on Pattern Recognition (ICPR), Kolkata, 2024. Pattern Recognition, Lecture Notes in Computer Science, vol. 15315, pp. 380-395, Cham, 2024.