Publication
Visual event-based egocentric human action recognition
Conference Article
Conference
Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA)
Edition
2022
Pages
402-404
Doc link
https://doi.org/10.1007/978-3-031-04881-4_32
Authors
-
Moreno Rodriguez, Francisco J.
-
Traver, V. Javier
-
Barranco, Francisco
-
Dimiccoli, Mariella
-
Pla, Filiberto
Abstract
This paper lies at the intersection of three research areas: human action recognition, egocentric vision, and visual event-based sensors. The main goal is the comparison of egocentric action recognition performance under either of two visual sources: conventional images, or event-based visual data. In this work, the events, as triggered by asynchronous event sensors or their simulation, are spatio-temporally aggregated into event frames (a grid-like representation). This allows using exactly the same neural model for both visual sources, thus easing a fair comparison. Specifically, a hybrid neural architecture combining a convolutional neural network and a recurrent network is used. It is empirically found that this general architecture works for both conventional gray-level frames and event frames. This finding is relevant because it reveals that no modification or adaptation is strictly required to deal with event data for egocentric action classification. Interestingly, action recognition is found to perform better with event frames, suggesting that these data provide discriminative information that aids the neural model to learn good features.
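The spatio-temporal aggregation of events into event frames mentioned above can be sketched as follows. This is a minimal illustration only, assuming events are given as (x, y, t, polarity) tuples; the function name, parameters, and binning scheme are hypothetical and not taken from the paper:

```python
import numpy as np

def events_to_frames(events, height, width, num_frames):
    """Aggregate asynchronous events into a stack of event frames.

    events: array of shape (N, 4) with columns (x, y, t, polarity),
    where polarity is +1 or -1. Events are split into `num_frames`
    equal temporal windows and accumulated per pixel, yielding a
    grid-like representation a CNN can consume directly.
    (Hypothetical sketch; not the paper's exact aggregation.)
    """
    frames = np.zeros((num_frames, height, width), dtype=np.float32)
    t = events[:, 2]
    t_min, t_max = t.min(), t.max()
    # Map each timestamp to a temporal bin index in [0, num_frames).
    bins = ((t - t_min) / (t_max - t_min + 1e-9) * num_frames).astype(int)
    bins = np.clip(bins, 0, num_frames - 1)
    for (x, y, _, p), b in zip(events, bins):
        frames[b, int(y), int(x)] += p
    return frames
```

Each resulting frame can then be fed to the same CNN+RNN pipeline used for conventional gray-level frames, which is what enables the paper's like-for-like comparison between the two visual sources.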
Categories
Pattern recognition
Author keywords
egocentric vision, event vision, action recognition
Scientific reference
F.J. Moreno, V.J. Traver, F. Barranco, M. Dimiccoli and F. Pla. Visual event-based egocentric human action recognition. Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA), Aveiro, Portugal, Vol. 13256 of Lecture Notes in Computer Science, pp. 402-404, 2022.