Accurate real-time visual SLAM combining building models and GPS for mobile robot

Ruyu Liu, Jianhua Zhang*, Shengyong Chen, Thomas Yang, Clemens Arth

*Corresponding author for this work

Publication: Contribution to journal › Article

Abstract

This paper presents a novel 7-DOF (orientation, translation, and scale) visual simultaneous localization and mapping (vSLAM) system for mobile robots in outdoor environments. In the front end of this vSLAM system, a fast initialization method is designed for different vSLAM backbones; it improves the accuracy of the trajectory and reconstruction by recovering an absolute scale from depth maps generated from building models. In the back end, we propose a nonlinear optimization mechanism through which multimodal data are combined for more robust optimization. Incorporating the building-model modality into the optimization improves both tracking accuracy and scale estimation. By fusing the pose estimated from visual information with the position received through GPS, the optimization further alleviates drift. The experimental results show that the proposed method is well suited to outdoor AR applications, as it offers superior initialization performance, runs in real time, and achieves metric scale with higher accuracy and robustness.

Original language: English
Pages (from-to): 419-429
Number of pages: 11
Journal: Journal of Real-Time Image Processing
Volume: 18
Issue number: 2
Early online date: 7 Jun 2020
DOIs
Publication status: Published - Apr 2021

ASJC Scopus subject areas

  • Information systems

