Speed invariant time surface for learning to detect corner points with event-based cameras

Vincent Lepetit, Jacques Manderscheid, Amos Sironi, Nicolas Bourdis, Davide Migliore

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

Abstract

We propose a learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions. Event-based cameras offer high temporal resolution, power efficiency, and high dynamic range. However, the properties of event-based data are very different compared to standard intensity images, and simple extensions of corner detection methods designed for these images do not perform well on event-based data. We first introduce an efficient way to compute a time surface that is invariant to the speed of the objects. We then show that we can train a Random Forest to recognize events generated by a moving corner from our time surface. Random Forests are also extremely efficient, and therefore a good choice to deal with the high capture frequency of event-based cameras---our implementation processes up to 1.6 Mev/s on a single CPU. Thanks to our time surface formulation and this learning approach, our method is significantly more robust to abrupt changes of direction of the corners compared to previous ones. Our method also naturally assigns a confidence score for the corners, which can be useful for postprocessing. Moreover, we introduce a high-resolution dataset suitable for quantitative evaluation and comparison of corner detection methods for event-based cameras. We call our approach SILC, for Speed Invariant Learned Corners, and compare it to the state-of-the-art with extensive experiments, showing better performance.
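The abstract's key idea, a time surface that is invariant to object speed, can be illustrated by encoding the local *ordering* of events rather than their absolute timestamps, so a corner sweeping past slowly or quickly produces the same pattern. The sketch below is illustrative only: the function name, neighborhood rule, and constants are assumptions for exposition, not the authors' exact SILC formulation.

```python
import numpy as np

def make_speed_invariant_surface(width, height, radius=3, t_max=255):
    """Toy speed-invariant time surface: each pixel stores a rank-like
    value reflecting how recently it fired relative to its neighbors.
    Because only event ordering matters, the surface is unchanged by
    how fast the stimulus moves (illustrative, not the SILC algorithm)."""
    surface = np.zeros((height, width), dtype=np.int32)

    def update(x, y):
        # Neighborhood bounds, clipped to the sensor array.
        x0, x1 = max(0, x - radius), min(width, x + radius + 1)
        y0, y1 = max(0, y - radius), min(height, y + radius + 1)
        patch = surface[y0:y1, x0:x1]
        # Demote every neighbor that fired more recently than (x, y) ...
        patch[patch > surface[y, x]] -= 1
        # ... then mark the current pixel as the most recent event.
        surface[y, x] = t_max
        return surface

    return update

# Usage: feed events in arrival order; timestamps are never consulted,
# so slow and fast sweeps of the same edge yield identical surfaces.
update = make_speed_invariant_surface(640, 480)
for x, y in [(10, 10), (11, 10), (12, 10)]:
    s = update(x, y)
```

A patch extracted from such a surface around each incoming event would then be the input to a per-event classifier such as the Random Forest the abstract describes.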
Original language: English
Title of host publication: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Pages: 10245-10254
Publication status: Published - 2019


Cite this

Lepetit, V., Manderscheid, J., Sironi, A., Bourdis, N., & Migliore, D. (2019). Speed invariant time surface for learning to detect corner points with event-based cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 10245-10254)

@inproceedings{ee8e23a54c274b50900fb3d15356639b,
title = "Speed invariant time surface for learning to detect corner points with event-based cameras",
author = "Vincent Lepetit and Jacques Manderscheid and Amos Sironi and Nicolas Bourdis and Davide Migliore",
year = "2019",
language = "English",
pages = "10245--10254",
booktitle = "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
}
