PhD Thesis

Multimodal Data Fusion for Multiple Object Tracking: A Reference Perception System for ADAS and AV Functions Validation


Information

  • Started: 22/03/2021
  • Thesis project presented: 24/02/2022

Description

In 2021, 1.19 million people died in traffic accidents, the leading cause of death among people aged 5 to 29. Most of these accidents are caused by the driver, so automated vehicles present a unique opportunity to reduce fatalities and improve road safety. However, the large-scale deployment and adoption of safe automated vehicles require a robust validation procedure, including open-road tests, which are significantly more challenging than tests on proving grounds or in simulation because there is no ground-truth data on the objects around the vehicle under test. This thesis studies how to build a reference perception system that enables the validation of automated vehicles and advanced driver assistance systems on the open road.

We discuss object detection and present a clustering method to detect class-agnostic objects in point clouds. These detections can be projected onto the image plane and combined with image-based detections to improve robustness and add class information. We also present a method that exploits class prototypes, defined as the mean and covariance of the features of each class, to improve the performance of a learning-based object detector on point clouds.

Object detections from different modalities can be combined in the presented multiple object tracking method, which works without assuming sensor synchronization and can be adapted to include object information from any source, such as vehicle-to-vehicle communications. The method also handles common errors that are understudied in the literature: misclassifications and partial bounding box detections.

These object detectors and multiple object trackers can be evaluated with a novel proposed methodology for evaluating perception systems with a focus on safety, which we use to assess the effect of different weather conditions on object detection performance and the robustness layers included in recent testing protocols.
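The class-prototype idea mentioned above can be sketched in a few lines. This is a minimal, illustrative implementation, not the thesis's actual method: it assumes each class prototype is the (mean, covariance) of that class's feature vectors, and scores a new feature by Mahalanobis distance to each prototype. The feature dimensionality, regularization, and how the scores feed back into the detector are assumptions for the sake of the example.

```python
import numpy as np

def build_prototypes(features, labels):
    """Build per-class prototypes as (mean, inverse covariance) pairs.

    features: (N, D) array of feature vectors; labels: (N,) class ids.
    Illustrative sketch only; the thesis's features and usage differ.
    """
    prototypes = {}
    for c in np.unique(labels):
        x = features[labels == c]
        mean = x.mean(axis=0)
        # Small ridge term keeps the covariance invertible with few samples.
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
        prototypes[int(c)] = (mean, np.linalg.inv(cov))
    return prototypes

def closest_prototype(feature, prototypes):
    """Return (class, squared Mahalanobis distance) of the nearest prototype."""
    best, best_d = None, np.inf
    for c, (mean, inv_cov) in prototypes.items():
        diff = feature - mean
        d = float(diff @ inv_cov @ diff)
        if d < best_d:
            best, best_d = c, d
    return best, best_d
```

A detector could use the returned distance as a confidence cue: a detection whose features lie far from every prototype is suspect, while one close to a prototype of a different class hints at a misclassification.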
We also present a pipeline to extract scenarios in standardized formats from recorded images and point clouds, which can be used directly by a simulator. Finally, we discuss how scenario extraction, combined with other tools, can help validate automated vehicles on the open road.
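To give a flavor of the scenario-extraction step, the sketch below converts a tracked object's states into a time-stamped trajectory record. The thesis targets standardized formats such as ASAM OpenSCENARIO; this simplified JSON structure, and the `track` field names, are hypothetical stand-ins used only to illustrate the shape of the conversion.

```python
import json

def extract_trajectory(track):
    """Map a tracked object's state history to a simple scenario record.

    Illustrative only: a real pipeline would emit a standardized format
    (e.g. ASAM OpenSCENARIO) rather than this ad hoc JSON structure.
    """
    return {
        "object_id": track["id"],
        "object_class": track["class"],
        "trajectory": [
            {"t": s["t"], "x": s["x"], "y": s["y"], "heading": s["heading"]}
            for s in track["states"]
        ],
    }

# Hypothetical track produced by the multiple object tracker.
track = {
    "id": 7,
    "class": "car",
    "states": [
        {"t": 0.0, "x": 0.0, "y": 0.0, "heading": 0.0},
        {"t": 0.1, "x": 1.2, "y": 0.0, "heading": 0.0},
    ],
}
scenario = extract_trajectory(track)
scenario_json = json.dumps(scenario)
```

A simulator-side importer would then replay `scenario["trajectory"]` as the object's motion, which is what makes extracted open-road scenarios reusable for validation.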