Combining Edge Images and Depth Maps for Robust Visual Odometry

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this work, we propose a robust visual odometry system for RGBD sensors. The core of our method is a combination of edge images and depth maps for joint camera pose estimation. Edges are more stable under varying lighting conditions than raw intensity values, and depth maps further add stability in poorly textured environments. This leads to higher accuracy and robustness in scenes where feature- or photoconsistency-based approaches often fail. We demonstrate the robustness of our method under challenging conditions on various real-world scenarios recorded with our own RGBD sensor. Further, we evaluate on several sequences from standard benchmark datasets covering a wide variety of scenes and camera motions. The results show that our method performs best in terms of trajectory accuracy for most of the sequences, indicating that the chosen combination of edge and depth terms in the cost function is suitable for a multitude of scenes.
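
The abstract does not give the exact form of the cost function; purely as an illustrative sketch (notation assumed, not taken from the paper), a direct RGBD objective combining edge and depth terms over the image domain Omega could be written as

E(\xi) = \sum_{\mathbf{x} \in \Omega} \rho\left( r_{\mathrm{edge}}(\mathbf{x}, \xi)^{2} \right) + \lambda \sum_{\mathbf{x} \in \Omega} \rho\left( r_{\mathrm{depth}}(\mathbf{x}, \xi)^{2} \right)

where \xi is the camera pose, r_edge and r_depth denote per-pixel edge and depth residuals, \rho is a robust weighting function, and \lambda balances the two terms. Minimizing such a joint objective (e.g. with iteratively re-weighted Gauss-Newton) is the standard way edge and depth cues are fused in direct visual odometry; the paper's specific residual definitions and weighting are given in the full text.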
Language: English
Title of host publication: Proceedings 28th British Machine Vision Conference (BMVC)
Number of pages: 12
Status: Published - 2017
Event: 28th British Machine Vision Conference - London, United Kingdom
Duration: 4 Sep 2017 - 7 Sep 2017

Conference

Conference: 28th British Machine Vision Conference
Abbreviated title: BMVC
Country: United Kingdom
City: London
Period: 4/09/17 - 7/09/17

Fingerprint

Cameras
Sensors
Cost functions
Lighting
Trajectories

Cite this

Schenk, F., & Fraundorfer, F. (2017). Combining Edge Images and Depth Maps for Robust Visual Odometry. In Proceedings 28th British Machine Vision Conference (BMVC).

Schenk, F & Fraundorfer, F 2017, Combining Edge Images and Depth Maps for Robust Visual Odometry. in Proceedings 28th British Machine Vision Conference (BMVC). 28th British Machine Vision Conference, London, United Kingdom, 4/09/17.
Schenk F, Fraundorfer F. Combining Edge Images and Depth Maps for Robust Visual Odometry. In Proceedings 28th British Machine Vision Conference (BMVC). 2017.
Schenk, Fabian ; Fraundorfer, Friedrich. / Combining Edge Images and Depth Maps for Robust Visual Odometry. Proceedings 28th British Machine Vision Conference (BMVC). 2017.
@inproceedings{039d5c45a35c45e79e02b04363ead516,
title = "Combining Edge Images and Depth Maps for Robust Visual Odometry",
abstract = "In this work, we propose a robust visual odometry system for RGBD sensors. The core of our method is a combination of edge images and depth maps for joint camera pose estimation. Edges are more stable under varying lighting conditions than raw intensity values, and depth maps further add stability in poorly textured environments. This leads to higher accuracy and robustness in scenes where feature- or photoconsistency-based approaches often fail. We demonstrate the robustness of our method under challenging conditions on various real-world scenarios recorded with our own RGBD sensor. Further, we evaluate on several sequences from standard benchmark datasets covering a wide variety of scenes and camera motions. The results show that our method performs best in terms of trajectory accuracy for most of the sequences, indicating that the chosen combination of edge and depth terms in the cost function is suitable for a multitude of scenes.",
author = "Fabian Schenk and Friedrich Fraundorfer",
year = "2017",
language = "English",
booktitle = "Proceedings 28th British Machine Vision Conference (BMVC)",

}

TY - GEN

T1 - Combining Edge Images and Depth Maps for Robust Visual Odometry

AU - Schenk, Fabian

AU - Fraundorfer, Friedrich

PY - 2017

Y1 - 2017

N2 - In this work, we propose a robust visual odometry system for RGBD sensors. The core of our method is a combination of edge images and depth maps for joint camera pose estimation. Edges are more stable under varying lighting conditions than raw intensity values, and depth maps further add stability in poorly textured environments. This leads to higher accuracy and robustness in scenes where feature- or photoconsistency-based approaches often fail. We demonstrate the robustness of our method under challenging conditions on various real-world scenarios recorded with our own RGBD sensor. Further, we evaluate on several sequences from standard benchmark datasets covering a wide variety of scenes and camera motions. The results show that our method performs best in terms of trajectory accuracy for most of the sequences, indicating that the chosen combination of edge and depth terms in the cost function is suitable for a multitude of scenes.

AB - In this work, we propose a robust visual odometry system for RGBD sensors. The core of our method is a combination of edge images and depth maps for joint camera pose estimation. Edges are more stable under varying lighting conditions than raw intensity values, and depth maps further add stability in poorly textured environments. This leads to higher accuracy and robustness in scenes where feature- or photoconsistency-based approaches often fail. We demonstrate the robustness of our method under challenging conditions on various real-world scenarios recorded with our own RGBD sensor. Further, we evaluate on several sequences from standard benchmark datasets covering a wide variety of scenes and camera motions. The results show that our method performs best in terms of trajectory accuracy for most of the sequences, indicating that the chosen combination of edge and depth terms in the cost function is suitable for a multitude of scenes.

UR - https://www.youtube.com/watch?v=uj3rRyqSEnQ

M3 - Conference contribution

BT - Proceedings 28th British Machine Vision Conference (BMVC)

ER -