Learning to Align Semantic Segmentation and 2.5D Maps for Geolocalization

Anil Armagan, Martin Hirzer, Peter M. Roth, Vincent Lepetit

Publication: Contribution to book/report/conference proceedings › Conference paper › Research › Peer-reviewed

Abstract

We present an efficient method for geolocalization in urban environments starting from a coarse estimate of the location provided by a GPS and using a simple untextured 2.5D model of the surrounding buildings. Our key contribution is a novel efficient and robust method to optimize the pose: We train a Deep Network to predict the best direction to improve a pose estimate, given a semantic segmentation of the input image and a rendering of the buildings from this estimate. We then iteratively apply this CNN until converging to a good pose. This approach avoids the use of reference images of the surroundings, which are difficult to acquire and match, while 2.5D models are broadly available. We can therefore apply it to places unseen during training.
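The refinement procedure described above (a network repeatedly predicts a direction of improvement for the current pose until convergence) can be sketched as a simple loop. The sketch below is purely illustrative: `predict_direction` stands in for the trained CNN and is stubbed here with a function that points toward a known target pose, and `render_fn` stands in for the untextured 2.5D model renderer. All names, the fixed step size, and the stop condition are assumptions, not the authors' actual implementation.

```python
import numpy as np

STEP = 0.1  # assumed fixed update magnitude per iteration


def predict_direction(segmentation, rendering, pose, target):
    """Stand-in for the trained CNN: compares the segmentation with the
    rendering from the current pose and returns a unit step direction.
    Here it simply points toward a known target, and signals convergence
    (zero vector) once the pose is within one step of the target."""
    diff = target - pose
    norm = np.linalg.norm(diff)
    if norm < STEP:
        return np.zeros_like(pose)
    return diff / norm


def refine_pose(pose, segmentation, render_fn, target, max_iters=100):
    """Iteratively apply the direction predictor until it signals
    convergence or the iteration budget is exhausted."""
    for _ in range(max_iters):
        rendering = render_fn(pose)  # render the 2.5D model from the pose
        direction = predict_direction(segmentation, rendering, pose, target)
        if np.linalg.norm(direction) == 0.0:
            break  # network predicts no further improvement
        pose = pose + STEP * direction
    return pose


# Toy usage: a 3-DoF pose (x, y, heading), a dummy renderer, and a
# hypothetical target pose standing in for the true camera pose.
start = np.array([0.0, 0.0, 0.0])
target = np.array([1.0, -0.5, 0.2])
refined = refine_pose(start, segmentation=None,
                      render_fn=lambda p: None, target=target)
```

In the paper's setting, the network's input is the concatenation of the semantic segmentation and the rendering rather than a direct comparison with a known target; the stub only mimics the resulting control flow.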
Original language: English
Title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Publication status: Published - 2017
Event: 2017 IEEE Conference on Computer Vision and Pattern Recognition - Honolulu, United States
Duration: 21 Jul 2017 – 26 Jul 2017

Conference

Conference: 2017 IEEE Conference on Computer Vision and Pattern Recognition
Abbreviated title: CVPR 2017
Country: United States
City: Honolulu
Period: 21/07/17 – 26/07/17

Cite this

Armagan, A., Hirzer, M., Roth, P. M., & Lepetit, V. (2017). Learning to Align Semantic Segmentation and 2.5D Maps for Geolocalization. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

@inproceedings{98985536ebc64a82a3828357da43a409,
title = "Learning to Align Semantic Segmentation and 2.5D Maps for Geolocalization",
abstract = "We present an efficient method for geolocalization in urban environments starting from a coarse estimate of the location provided by a GPS and using a simple untextured 2.5D model of the surrounding buildings. Our key contribution is a novel efficient and robust method to optimize the pose: We train a Deep Network to predict the best direction to improve a pose estimate, given a semantic segmentation of the input image and a rendering of the buildings from this estimate. We then iteratively apply this CNN until converging to a good pose. This approach avoids the use of reference images of the surroundings, which are difficult to acquire and match, while 2.5D models are broadly available. We can therefore apply it to places unseen during training.",
author = "Armagan, Anil and Hirzer, Martin and Roth, {Peter M.} and Lepetit, Vincent",
year = "2017",
language = "English",
booktitle = "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",

}

TY - GEN

T1 - Learning to Align Semantic Segmentation and 2.5D Maps for Geolocalization

AU - Armagan, Anil

AU - Hirzer, Martin

AU - Roth, Peter M.

AU - Lepetit, Vincent

PY - 2017

Y1 - 2017

N2 - We present an efficient method for geolocalization in urban environments starting from a coarse estimate of the location provided by a GPS and using a simple untextured 2.5D model of the surrounding buildings. Our key contribution is a novel efficient and robust method to optimize the pose: We train a Deep Network to predict the best direction to improve a pose estimate, given a semantic segmentation of the input image and a rendering of the buildings from this estimate. We then iteratively apply this CNN until converging to a good pose. This approach avoids the use of reference images of the surroundings, which are difficult to acquire and match, while 2.5D models are broadly available. We can therefore apply it to places unseen during training.

AB - We present an efficient method for geolocalization in urban environments starting from a coarse estimate of the location provided by a GPS and using a simple untextured 2.5D model of the surrounding buildings. Our key contribution is a novel efficient and robust method to optimize the pose: We train a Deep Network to predict the best direction to improve a pose estimate, given a semantic segmentation of the input image and a rendering of the buildings from this estimate. We then iteratively apply this CNN until converging to a good pose. This approach avoids the use of reference images of the surroundings, which are difficult to acquire and match, while 2.5D models are broadly available. We can therefore apply it to places unseen during training.

M3 - Conference contribution

BT - Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

ER -