Basics and Advances in Monocular vSLAM

Hideaki Uchiyama*, Takafumi Taketomi, Sei Ikeda, Shohei Mori

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

This chapter presents the basics of, and recent advances in, visual tracking for augmented reality, computer vision, and robotics applications. Specifically, we focus on visual simultaneous localization and mapping (vSLAM) algorithms, which allow both camera pose estimation and 3D model generation in unprepared environments. Owing to recent advances in the computational efficiency of vSLAM algorithms, vSLAM using a monocular RGB camera can run in real time even on mobile devices, and it has therefore been used in various applications. In this chapter, we summarize the basic computer vision technologies used in vSLAM and review existing vSLAM algorithms.
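The 3D model generation mentioned in the abstract rests on triangulation: given the same scene point observed by two calibrated cameras, its 3D position is recovered from the two projections. A minimal sketch of linear (DLT) triangulation in NumPy is shown below; the camera matrices, the point, and all names here are illustrative toy values, not taken from the chapter.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    Each 2D observation x = (u, v) under a 3x4 projection matrix P
    contributes two linear constraints on the homogeneous 3D point;
    the solution is the last right-singular vector of the stacked system.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point into normalized image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy setup: first camera at the origin, second camera translated
# by one unit along x (a simple stereo baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)  # recovers the 3D point up to numerical noise
```

In a full vSLAM pipeline, the projection matrices themselves come from camera pose estimation, and many such triangulated points are then jointly refined by bundle adjustment.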
Original language: English
Title of host publication: Smart Sensors and Systems
Subtitle of host publication: Technology Advancement and Application Demonstrations
Publisher: Springer, Cham
Pages: 93-104
Number of pages: 12
ISBN (Electronic): 9783030422349
ISBN (Print): 9783030422332
DOIs
Publication status: Published - 1 Jan 2020
Externally published: Yes

Keywords

  • 3D model generation
  • Augmented reality
  • Camera geometry
  • Camera pose estimation
  • Computer vision
  • Key-point matching
  • Monocular RGB camera
  • Robotics
  • Triangulation and bundle adjustment
  • Visual simultaneous localization and mapping (vSLAM)
  • vSLAM algorithm

ASJC Scopus subject areas

  • Engineering (all)
