Publication
Leveraging triplet loss for unsupervised action segmentation
Conference Article
Conference
CVPR Workshop on Learning with Limited Labelled Data (L3D-IVU)
Edition
2023
Pages
4922-4930
Doc link
https://doi.ieeecomputersociety.org/10.1109/CVPRW59228.2023.00520
Abstract
In this paper, we propose a novel fully unsupervised framework that learns action representations suitable for the action segmentation task from the single input video itself, without requiring any training data. Our method is a deep metric learning approach built on a shallow network, with a triplet loss operating on similarity distributions and a novel triplet selection strategy that models temporal and semantic priors to discover actions in the new representation space. As a result, we recover temporal boundaries in the learned action representations with higher quality than existing unsupervised approaches. The proposed method is evaluated on two widely used action segmentation benchmarks and achieves competitive performance by applying a generic clustering algorithm to the learned representations.
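To make the abstract concrete, below is a minimal sketch of single-video training with a shallow embedding network and a triplet loss. It uses the standard PyTorch triplet margin loss and a simple temporal heuristic for triplet selection (anchor and positive temporally close, negative temporally distant); the paper's actual loss operates on similarity distributions and uses a more elaborate selection strategy, so all names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ShallowEmbedder(nn.Module):
    """Two-layer MLP mapping per-frame features to a normalized embedding space."""
    def __init__(self, in_dim=2048, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, emb_dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

def sample_temporal_triplets(num_frames, num_triplets=256, near=5, far=50):
    """Illustrative temporal prior: positives within `near` frames of the anchor,
    negatives at least `far` frames away (circular offset keeps indices in range)."""
    anchors = torch.randint(0, num_frames, (num_triplets,))
    sign = torch.randint(0, 2, (num_triplets,)) * 2 - 1
    positives = (anchors + sign * torch.randint(1, near + 1, (num_triplets,))).clamp(0, num_frames - 1)
    negatives = (anchors + far + torch.randint(0, num_frames - 2 * far, (num_triplets,))) % num_frames
    return anchors, positives, negatives

# Training on per-frame features of a single video (T x D tensor); the features
# here are random stand-ins for precomputed frame descriptors.
features = torch.randn(1000, 2048)
model = ShallowEmbedder()
loss_fn = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    a, p, n = sample_temporal_triplets(features.size(0))
    emb = model(features)
    loss = loss_fn(emb[a], emb[p], emb[n])
    opt.zero_grad()
    loss.backward()
    opt.step()

After training, segment boundaries could be recovered by running a generic clustering algorithm (e.g., k-means) over the learned per-frame embeddings, in the spirit of the evaluation described in the abstract.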
Categories
Computer vision, pattern recognition
Author keywords
Video Understanding, Action Segmentation, Deep Metric Learning
Scientific reference
E.B. Bueno Benito, B. Tura and M. Dimiccoli. Leveraging triplet loss for unsupervised action segmentation, 2023 CVPR Workshop on Learning with Limited Labelled Data (L3D-IVU), 2023, Vancouver, Canada, pp. 4922-4930, IEEE.