3D Localization in Urban Environments from Single Images

Anil Armagan, Martin Hirzer, Peter M. Roth, Vincent Lepetit

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


In this paper, we tackle the problem of geolocalization in urban environments, overcoming the accuracy limitations of sensors such as GPS, compass, and accelerometer. For that purpose, we adopt recent findings in image segmentation and machine learning and combine them with the valuable information given by 2.5D maps of buildings. In particular, we first extract the façades of buildings and their edges and use this information to estimate the orientation and location that best align an input image to a 3D rendering of the given 2.5D map. As this step builds on a learned semantic segmentation procedure, rich training data is required. Thus, we also discuss how the required training data can be efficiently generated via a 3D tracking system.
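The alignment step described in the abstract can be illustrated with a minimal sketch: semantic classes from the input segmentation are compared pixel-wise against renderings of the 2.5D map for a set of candidate poses, and the best-agreeing pose is selected. All names here (`score_pose`, `best_pose`, the class encoding) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of pose scoring against a 2.5D map rendering.
# Semantic classes are encoded as integers; images are flattened to
# 1D lists for brevity. None of this is the paper's actual code.

BACKGROUND, FACADE, EDGE = 0, 1, 2

def score_pose(segmentation, rendering):
    """Fraction of pixels where the input image's semantic segmentation
    agrees with the map rendering for one candidate orientation/location."""
    assert len(segmentation) == len(rendering)
    matches = sum(s == r for s, r in zip(segmentation, rendering))
    return matches / len(segmentation)

def best_pose(segmentation, renderings):
    """Pick the candidate pose whose 2.5D-map rendering best aligns
    with the segmented input image."""
    return max(renderings, key=lambda pose: score_pose(segmentation, renderings[pose]))

# Toy example: two candidate poses, one matching the input perfectly.
seg = [FACADE, FACADE, EDGE, BACKGROUND]
candidates = {
    "pose_a": [FACADE, FACADE, EDGE, BACKGROUND],  # correct alignment
    "pose_b": [BACKGROUND, BACKGROUND, BACKGROUND, BACKGROUND],
}
print(best_pose(seg, candidates))  # → pose_a
```

In practice the paper refines pose continuously rather than picking from a discrete candidate set, but the agreement-scoring idea is the same.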
Original language: English
Title of host publication: Proceedings of the OAGM/AAPR & ARW Joint Workshop (OAGM/AAPR & ARW)
Publication status: Published - 2017
Event: OAGM/AAPR ARW 2017: Joint Workshop on "Vision, Automation & Robotics" - Palais Eschenbach, Wien, Austria
Duration: 10 May 2017 - 12 May 2017


Conference: OAGM/AAPR ARW 2017
Abbreviated title: OAGM/AAPR ARW 2017
