Publication
Class prototypical loss for enhanced feature separation in 3D object detection
Conference Article
Conference
IEEE Intelligent Transportation Systems Conference (ITSC)
Edition
2024
Pages
3505-3512
Doc link
http://dx.doi.org/10.1109/ITSC58415.2024.10919676
Authors
-
Pérez Quintana, Marc
-
Agudo Martínez, Antonio
-
Dubbelman, Gijs
-
Jancura, Pavol
Abstract
We present a novel loss to increase the class separation of learned features for 3D object detection from lidar point clouds. To classify objects correctly, the learned object-level feature distributions of each class need to be distinct. We therefore hypothesize that making the feature distributions of the classes more separated improves the overall performance of the object detector. To this end, we compute class prototypes as the mean and covariance of the feature vectors extracted from the annotated objects of each class. We then exploit these prototypes with a novel class prototypical loss, defined as the Mahalanobis distance from the feature vector of an annotated object to its corresponding class prototype. This auxiliary loss is integrated with the other object detection losses to improve the object-level feature separation between classes and the overall performance of the detector. We show results applying this loss on the nuScenes dataset, where we obtain improvements of +3.85% and +1.76% mAP for 1 and 10 frames, respectively, over the baseline CenterPoint detector, while keeping the same inference computational cost.
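The core idea from the abstract can be sketched in a few lines: estimate a per-class prototype (mean and covariance of object-level feature vectors), then penalize each annotated object's feature by its squared Mahalanobis distance to its class prototype. The snippet below is a minimal NumPy illustration under stated assumptions; the function names, the covariance regularization term, and the batch-wise prototype estimation are choices made here for clarity, not the authors' exact implementation.

```python
import numpy as np

def class_prototypes(features, labels, eps=1e-6):
    """Estimate a (mean, inverse covariance) prototype per class.

    features: (N, D) array of object-level feature vectors.
    labels:   (N,) array of class ids for the annotated objects.
    eps regularizes the covariance so it stays invertible (an
    assumption here, not specified in the abstract).
    """
    protos = {}
    for c in np.unique(labels):
        x = features[labels == c]
        mu = x.mean(axis=0)
        cov = np.cov(x, rowvar=False) + eps * np.eye(x.shape[1])
        protos[c] = (mu, np.linalg.inv(cov))
    return protos

def prototypical_loss(features, labels, protos):
    """Mean squared Mahalanobis distance to each object's class prototype."""
    total = 0.0
    for f, c in zip(features, labels):
        mu, inv_cov = protos[c]
        d = f - mu
        total += d @ inv_cov @ d  # squared Mahalanobis distance
    return total / len(features)
```

In training, this auxiliary term would be added (with some weight) to the detector's existing losses; because it only shapes the learned features, inference cost is unchanged, as the abstract notes.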
Categories
intelligent robots, object detection.
Author keywords
Lidar, feature separation, deep learning
Scientific reference
M. Pérez, A. Agudo, G. Dubbelman and P. Jancura. Class prototypical loss for enhanced feature separation in 3D object detection, 2024 IEEE Intelligent Transportation Systems Conference, 2024, Edmonton, Canada, pp. 3505-3512.