Semantic Segmentation for 3D Localization in Urban Environments

Anil Armagan, Martin Hirzer, Vincent Lepetit

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

Abstract

We show how to use simple 2.5D maps of buildings and recent advances in image segmentation and machine learning to geo-localize an input image of an urban scene: We first extract the façades of the buildings and their edges from the image, and then look for the orientation and location that align a 3D rendering of the map with these segments. We discuss how to use a 3D tracking system to acquire the data required for training the segmentation method, the segmentation itself, and how we use the segmentations to evaluate the quality of the alignment.
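The abstract outlines a two-step pipeline: segment the input image into façade and façade-edge regions, then search over camera orientations and locations for the pose whose rendered 2.5D building map best overlaps those segments. Below is a minimal sketch of such a pose-scoring loop, assuming the segmentation step yields per-pixel class probability maps and that rasterizers for the 2.5D map (render_facade_mask, render_edge_mask) are supplied by the caller. The names and the additive scoring function are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of scoring candidate camera poses against a facade/edge
# segmentation. render_facade_mask / render_edge_mask are hypothetical
# callables that rasterize the 2.5D map for a given pose; they are not
# part of the paper's published code.

import numpy as np

def pose_score(facade_prob, edge_prob, rendered_facades, rendered_edges):
    """Average segmentation probability under the rendered map masks.

    facade_prob, edge_prob : HxW arrays of per-pixel class probabilities
                             from the image segmentation step.
    rendered_facades, rendered_edges : HxW boolean masks obtained by
                             rendering the 2.5D map from a candidate pose.
    """
    facade_term = facade_prob[rendered_facades].mean() if rendered_facades.any() else 0.0
    edge_term = edge_prob[rendered_edges].mean() if rendered_edges.any() else 0.0
    return facade_term + edge_term

def localize(facade_prob, edge_prob, candidate_poses,
             render_facade_mask, render_edge_mask):
    """Score every candidate location/orientation and keep the best one."""
    best_pose, best_score = None, -np.inf
    for pose in candidate_poses:
        score = pose_score(facade_prob, edge_prob,
                           render_facade_mask(pose), render_edge_mask(pose))
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score
```

In practice the candidate poses could be sampled around a coarse GPS and compass prior to keep the search tractable; this setup is a plausible reading of the abstract rather than a detail confirmed by the record above.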
Original language: English
Title of host publication: Proceedings of the Joint Urban Remote Sensing Event (JURSE)
Publication status: Published - 2017
Event: Joint Urban Remote Sensing Event 2017 - Dubai, United Arab Emirates
Duration: 6 Mar 2017 - 8 Mar 2017

Conference

Conference: Joint Urban Remote Sensing Event 2017
Abbreviated title: JURSE 2017
Country: United Arab Emirates
City: Dubai
Period: 6/03/17 - 8/03/17

Fingerprint

Semantics
Image segmentation
Learning systems
Rendering (computer graphics)

Cite this

Armagan, A., Hirzer, M., & Lepetit, V. (2017). Semantic Segmentation for 3D Localization in Urban Environments. In Proceedings of the Joint Urban Remote Sensing Event (JURSE).
