Robust Edge-based Visual Odometry using Machine-Learned Edges

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this work, we present a real-time robust edge-based visual odometry framework for RGBD sensors (REVO). Even though our method is independent of the edge detection algorithm, we show that the use of state-of-the-art machine-learned edges gives significant improvements in terms of robustness and accuracy compared to standard edge detection methods. In contrast to approaches that heavily rely on the photo-consistency assumption, edges are less influenced by lighting changes and the sparse edge representation offers a larger convergence basin while the pose estimates are also very fast to compute. Further, we introduce a measure for tracking quality, which we use to determine when to insert a new key frame. We show the feasibility of our system on real-world datasets and extensively evaluate on standard benchmark sequences to demonstrate the performance in a wide variety of scenes and camera motions. Our framework runs in real-time on the CPU of a laptop computer and is available online.
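
The abstract outlines the core loop: detect edges in each RGBD frame, estimate the camera pose by aligning them against a keyframe, and insert a new keyframe whenever a tracking-quality measure indicates the current keyframe no longer explains the view well. The sketch below only illustrates that last idea and is not the REVO implementation: the Canny detector, the distance-transform overlap measure, and the threshold are placeholder assumptions (the paper advocates machine-learned edges and does not specify this particular quality measure).

import cv2
import numpy as np

def edge_map(gray: np.ndarray) -> np.ndarray:
    # Placeholder edge detector (Canny on an 8-bit grayscale image); the paper
    # argues that swapping in a machine-learned detector improves robustness
    # and accuracy, which only requires replacing this function.
    return cv2.Canny(gray, 50, 150) > 0

def tracking_quality(warped_keyframe_edges: np.ndarray,
                     current_edges: np.ndarray,
                     tol_px: float = 2.0) -> float:
    # Hypothetical quality measure: fraction of keyframe edge pixels
    # (re-projected into the current view) that land within tol_px of an edge
    # in the current frame, found via a distance transform to the edge set.
    dist_to_edge = cv2.distanceTransform(
        (~current_edges).astype(np.uint8), cv2.DIST_L2, 3)
    hits = dist_to_edge[warped_keyframe_edges] <= tol_px
    return float(hits.mean()) if hits.size else 0.0

def should_insert_keyframe(quality: float, threshold: float = 0.6) -> bool:
    # Insert a new keyframe when tracking quality drops below a threshold;
    # the threshold value here is an illustrative assumption.
    return quality < threshold

# Example usage (warped_edges would come from re-projecting the keyframe's
# edge pixels with the current pose estimate):
#   if should_insert_keyframe(tracking_quality(warped_edges, edge_map(gray))):
#       ...  # promote the current frame to a keyframe

The design point carried by the abstract is that the keyframe policy is decoupled from the edge detector, so a learned detector can be dropped in without changing the rest of the pipeline.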
Language: English
Title of host publication: Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS)
Publisher: Institute of Electrical and Electronics Engineers
Pages: 1297-1304
Number of pages: 8
ISBN (Electronic): 978-1-5386-2682-5
DOIs: 10.1109/IROS.2017.8202305
Status: Published - 2017
Event: International Conference on Intelligent Robots and Systems 2017 - Vancouver, Canada
Duration: 24 Sep 2017 - 28 Sep 2017
Link: https://www.youtube.com/watch?v=PUTV9vsdpbA (video)

Conference

Conference: International Conference on Intelligent Robots and Systems 2017
Abbreviated title: IEEE/RSJ
Country: Canada
City: Vancouver
Period: 24/09/17 - 28/09/17

Fingerprint

Edge detection
Laptop computers
Program processors
Lighting
Cameras
Sensors

Cite this

Schenk, F., & Fraundorfer, F. (2017). Robust Edge-based Visual Odometry using Machine-Learned Edges. In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS) (pp. 1297-1304). Institute of Electrical and Electronics Engineers. DOI: 10.1109/IROS.2017.8202305

