Accurate real-time visual SLAM combining building models and GPS for mobile robot

Ruyu Liu, Jianhua Zhang*, Shengyong Chen, Thomas Yang, Clemens Arth

*Corresponding author for this work

Research output: Contribution to journal › Article

Abstract

This paper presents a novel 7-DOF (orientation, translation, and scale) visual simultaneous localization and mapping (vSLAM) system for mobile robots in outdoor environments. In the front end of this vSLAM system, a fast initialization method is designed for different vSLAM backbones; it improves the accuracy of the trajectory and reconstruction by recovering an absolute scale from depth maps generated from building models. In the back end, we propose a nonlinear optimization mechanism in which multimodal data are combined for more robust optimization. The building-model modality improves both tracking accuracy and scale estimation, and by integrating the pose estimated from visual information with the position received through GPS, the optimization further alleviates drift. The experimental results show that the proposed method is well suited to AR applications in outdoor environments, because it has superior initialization performance, runs in real time, and achieves true scale, higher accuracy, and greater robustness.
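The abstract describes recovering an absolute scale and aligning the visual trajectory with GPS positions. The paper's actual optimization is not detailed here; as a rough illustration of the underlying idea, the closed-form Umeyama similarity alignment below recovers the scale, rotation, and translation that best map an up-to-scale SLAM trajectory onto GPS coordinates. This is a minimal sketch of the general technique, not the authors' method; the function name and 2-D setup are illustrative assumptions.

```python
import numpy as np

def align_similarity(slam_xy, gps_xy):
    """Illustrative sketch (not the paper's method): estimate scale s,
    rotation R, and translation t such that s * R @ p + t maps SLAM
    trajectory points onto GPS positions (Umeyama closed-form alignment)."""
    slam_xy = np.asarray(slam_xy, dtype=float)
    gps_xy = np.asarray(gps_xy, dtype=float)
    n = len(slam_xy)

    # Center both point sets.
    mu_s = slam_xy.mean(axis=0)
    mu_g = gps_xy.mean(axis=0)
    src = slam_xy - mu_s
    dst = gps_xy - mu_g

    # Cross-covariance and SVD give the optimal rotation.
    cov = dst.T @ src / n
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1.0  # guard against a reflection
    R = U @ S @ Vt

    # Scale from the singular values; translation closes the loop.
    var_src = (src ** 2).sum() / n
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_g - s * R @ mu_s
    return s, R, t
```

In a full system this alignment would only initialize or constrain a nonlinear (e.g. pose-graph) optimization, since GPS is noisy and the drift varies along the trajectory.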

Original language: English
Journal: Journal of Real-time Image Processing
Publication status: Published - 7 Jun 2020

Keywords

  • Building models
  • Graph optimization
  • Multimodal fusion
  • Robot localization

ASJC Scopus subject areas

  • Information Systems

