PhD Thesis

Deep learning of feature descriptors for event-based SLAM

Information

  • Started: 04/02/2019

Description

Event-based cameras, also known as Dynamic Vision Sensors (DVS), are bio-inspired sensors that, instead of capturing images at a fixed frame rate as conventional cameras do, measure the brightness change of each pixel on the image sensor asynchronously. An event is triggered whenever a pixel's brightness change exceeds a threshold, so the sensor output is a stream of events, each encoding its timestamp, its location on the image sensor, and the sign (polarity) of the brightness change. Event cameras offer higher temporal resolution (on the order of microseconds) and higher dynamic range than conventional cameras, and they do not suffer from motion blur. These properties give event-based cameras great potential in high-speed scenarios such as autonomous vehicles or UAVs.

These sensors require novel processing methods, since conventional computer vision techniques cannot be applied directly to their output. In this work we study how to process the unconventional data provided by event-based sensors. In particular, we are interested in obtaining event descriptors and applying them to two tasks: event-based object recognition and event-based SLAM. We believe event-based cameras can greatly improve performance on these tasks in high-speed scenarios.
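To make the event model above concrete, the Python sketch below (our own illustrative code, not taken from the thesis) shows the tuple each event carries and an idealized, frame-based approximation of the triggering rule; the names Event and generate_events and the threshold value 0.2 are assumptions chosen for illustration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    t: float       # timestamp (microsecond resolution on real sensors)
    x: int         # pixel column on the image sensor
    y: int         # pixel row on the image sensor
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def generate_events(prev_log_i, curr_log_i, t, threshold=0.2):
    """Emit one event per pixel whose log-intensity change exceeds the threshold.

    Idealized model: a real DVS fires asynchronously per pixel, whereas this
    sketch compares two log-intensity frames as a frame-based approximation.
    """
    diff = curr_log_i - prev_log_i
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    return [Event(t=t, x=int(x), y=int(y),
                  polarity=1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]
```

Note that real DVS pixels trigger on changes in log intensity independently and asynchronously; the frame-difference form above only serves to illustrate the per-event tuple (timestamp, location, polarity).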

This work falls under the scope of the following projects:

  • MdM: Unit of Excellence María de Maeztu (web)
  • EB-SLAM: Event-based simultaneous localization and mapping (web)