Combining Edge Images and Depth Maps for Robust Visual Odometry

Fabian Schenk, Friedrich Fraundorfer

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

In this work, we propose a robust visual odometry system for RGBD sensors. The core of our method is a combination of edge images and depth maps for joint camera pose estimation. Edges are more stable under varying lighting conditions than raw intensity values, and depth maps add further stability in poorly textured environments. This leads to higher accuracy and robustness in scenes where feature- or photoconsistency-based approaches often fail. We demonstrate the robustness of our method under challenging conditions on various real-world scenarios recorded with our own RGBD sensor. Further, we evaluate our approach on several sequences from standard benchmark datasets covering a wide variety of scenes and camera motions. The results show that our method performs best in terms of trajectory accuracy for most of the sequences, indicating that the chosen combination of edge and depth terms in the cost function is suitable for a multitude of scenes.
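
The abstract refers to a cost function that combines edge and depth terms for pose estimation. The sketch below illustrates one plausible form of such a joint cost under simplifying assumptions; the function name, the weighting scheme, and the use of a distance transform for the edge term are illustrative guesses, not the authors' actual implementation.

import numpy as np
from scipy.ndimage import distance_transform_edt


def joint_edge_depth_cost(T, pts_cur, depth_ref, edge_ref, K,
                          w_edge=1.0, w_depth=1.0):
    """Evaluate a weighted edge + depth cost for a candidate pose T (4x4).

    pts_cur   : (N, 3) 3D points back-projected from edges of the current frame
    depth_ref : (H, W) reference depth map (0 = missing measurement)
    edge_ref  : (H, W) binary edge image of the reference frame
    K         : (3, 3) camera intrinsics
    """
    # Edge term uses a distance transform: each pixel stores the distance
    # to the nearest edge pixel of the reference frame.
    dist_to_edge = distance_transform_edt(~edge_ref.astype(bool))

    # Transform current-frame points into the reference frame and project.
    pts_h = np.hstack([pts_cur, np.ones((len(pts_cur), 1))])
    pts_ref = (T @ pts_h.T).T[:, :3]
    z = pts_ref[:, 2]
    z_safe = np.where(z > 1e-6, z, 1.0)          # avoid division by zero
    uv = (K @ pts_ref.T).T
    u = np.round(uv[:, 0] / z_safe).astype(int)
    v = np.round(uv[:, 1] / z_safe).astype(int)

    h, w = depth_ref.shape
    valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Edge residual: reprojected edge points should land on reference edges.
    r_edge = dist_to_edge[v[valid], u[valid]]
    # Depth residual: transformed depth should agree with the reference depth.
    d_ref = depth_ref[v[valid], u[valid]]
    r_depth = np.where(d_ref > 0, d_ref - z[valid], 0.0)

    return w_edge * np.sum(r_edge ** 2) + w_depth * np.sum(r_depth ** 2)

In a full visual odometry pipeline such a cost would be minimized iteratively over the pose, typically with a robust kernel and a coarse-to-fine scheme; the sketch only evaluates the cost for a fixed pose hypothesis.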
Original language: English
Title of host publication: Proceedings 28th British Machine Vision Conference (BMVC)
Number of pages: 12
Publication status: Published - 2017
Event: 28th British Machine Vision Conference: BMVC 2017 - London, United Kingdom
Duration: 4 Sept 2017 → 7 Sept 2017

Conference

Conference: 28th British Machine Vision Conference
Abbreviated title: BMVC 2017
Country/Territory: United Kingdom
City: London
Period: 4/09/17 → 7/09/17
