Publication
Enhancing temporal segmentation by nonlocal self-similarity
Conference Article
Conference
IEEE International Conference on Image Processing (ICIP)
Edition
26th
Pages
3681-3685
Doc link
https://doi.org/10.1109/ICIP.2019.8803499
Authors
-
Dimiccoli, Mariella
-
Wendt, Herwig
Abstract
Temporal segmentation of untrimmed videos and photo-streams is currently an active area of research in computer vision and image processing. This paper proposes a new approach to improve the temporal segmentation of photo-streams. The method enhances image representations by encoding long-range temporal dependencies. Our key contribution is to take advantage of the temporal stationarity assumption of photo-streams to model each frame by its nonlocal self-similarity function. The proposed approach is put to the test on the EDUB-Seg dataset, a standard benchmark for egocentric photo-stream temporal segmentation. Starting from seven different (CNN-based) image features, the method yields consistent improvements in event segmentation quality, leading to an average increase in F-measure of 3.71% with respect to the state of the art.
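The core idea in the abstract, replacing each frame's feature vector with a descriptor built from its similarities to all other frames in the stream, can be sketched as follows. This is a minimal illustrative version using a Gaussian kernel on L2-normalised CNN features (as in nonlocal-means filtering); the function name, the kernel choice, and the bandwidth `sigma` are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def self_similarity_features(features, sigma=1.0):
    """Map each frame to its nonlocal self-similarity function:
    the vector of similarities between that frame and every other
    frame in the photo-stream. `features` is an (n_frames, dim)
    array of per-frame CNN features. Illustrative sketch only."""
    # L2-normalise so similarity depends on direction, not magnitude
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    f = features / np.maximum(norms, 1e-12)
    # Pairwise squared Euclidean distances between normalised features
    d2 = np.sum((f[:, None, :] - f[None, :, :]) ** 2, axis=2)
    # Gaussian kernel turns distances into similarities in (0, 1]
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Toy photo-stream: frames 0-1 belong to one "event", frames 2-3 to another
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
S = self_similarity_features(feats)
```

Each row of `S` is then used as the enhanced representation of the corresponding frame, so frames from the same event end up with similar rows even when they are far apart in time.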
Categories
Pattern recognition
Author keywords
temporal segmentation, self-similarity, nonlocal means, event representation, egocentric vision
Scientific reference
M. Dimiccoli and H. Wendt. Enhancing temporal segmentation by nonlocal self-similarity, 26th IEEE International Conference on Image Processing, 2019, Taipei, Taiwan, pp. 3681-3685.