Time-Based Features for Simultaneous Localization and Mapping (SLAM) with an Event-based Camera

Standard Computer Vision approaches to SLAM rely on detecting and matching keypoints across images to estimate the position of the camera and navigate in the environment. These keypoints are typically derived from gradient information extracted from the local distribution of intensities in the scene. However, such approaches are computationally demanding. Moreover, keypoint tracking is prone to drift and loses accuracy because of the discrete image sampling rate.
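For context, the gradient-based detection step described above can be sketched in a few lines with OpenCV's Harris corner detector. This is only an illustrative baseline: the image path, threshold, and parameter values are placeholder choices, not part of the project.

```python
import cv2
import numpy as np

# Frame-based baseline: gradient-driven (Harris) corner detection.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Harris response from local intensity gradients; parameter values
# (blockSize, ksize, k) are illustrative defaults.
response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)

# Keep pixels with a strong response as keypoints (threshold is arbitrary).
ys, xs = np.where(response > 0.01 * response.max())
keypoints = list(zip(xs, ys))
```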

Event-based cameras offer a completely new approach to Computer Vision. The high temporal resolution of the sensor makes tracking easier, and the sparse nature of the data makes event-based algorithms more efficient.

The goal of this project is to define event-based keypoints that can be used for SLAM applications. In contrast with frame-based approaches, gradient information cannot be used to extract stable keypoints, as the sensor is sensitive only to gradients orthogonal to the direction of motion [1]. Therefore, in this project we will investigate the definition of event-based keypoints based on the temporal information carried by the events. For example, keypoints can be defined by considering the differences in time between neighboring events, or by considering the events generated by the brightest edges, which have lower latency than events from lower-contrast ones.
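To make the first idea concrete, below is a minimal sketch of a keypoint score built from timestamp differences between neighboring events, using a time-surface representation. The event format (x, y, t, polarity), the decay constant tau, and the neighbor-contrast scoring rule are all illustrative assumptions, not the method the project will necessarily adopt.

```python
import numpy as np

def time_surface(events, shape, tau=50e-3):
    """Per-pixel exponential decay of the time since the most recent event.

    events: time-ordered iterable of (x, y, t, polarity), t in seconds
    shape:  (height, width) of the sensor
    tau:    decay constant in seconds (assumed value)
    """
    last_t = np.full(shape, -np.inf)  # pixels with no event stay at -inf
    t_ref = 0.0
    for x, y, t, _ in events:
        last_t[y, x] = t
        t_ref = t  # reference time = timestamp of the latest event
    return np.exp((last_t - t_ref) / tau)  # in (0, 1]; 0 where no event

def temporal_keypoints(surface, k=100):
    """Score each pixel by how much its (decayed) timestamp differs from
    its 4-neighbors', and return the k highest-scoring (row, col) pixels."""
    s = surface
    score = np.zeros_like(s)
    score[1:-1, 1:-1] = (
        np.abs(s[1:-1, 1:-1] - s[:-2, 1:-1]) +
        np.abs(s[1:-1, 1:-1] - s[2:, 1:-1]) +
        np.abs(s[1:-1, 1:-1] - s[1:-1, :-2]) +
        np.abs(s[1:-1, 1:-1] - s[1:-1, 2:])
    )
    idx = np.argsort(score, axis=None)[-k:]
    return np.column_stack(np.unravel_index(idx, s.shape))

# Usage with a few synthetic events:
evts = [(10, 12, 0.001, 1), (11, 12, 0.002, 1), (30, 5, 0.010, -1)]
kps = temporal_keypoints(time_surface(evts, shape=(64, 64)), k=5)
```

High scores mark pixels whose event timing contrasts sharply with their neighborhood, which is one simple way to capture the "differences in time between neighboring events" mentioned above.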

Prerequisites: Good programming skills (Python/C++); a background in Mathematics and/or Computer Science; experience with frame-based Computer Vision and Deep Learning is a plus.

[1] E. Mueggler, C. Bartolozzi, and D. Scaramuzza, "Fast Event-based Corner Detection," British Machine Vision Conference (BMVC), 2017.

For more information, please contact Vincent Lepetit.