Optimization of OpenStreetMap Building Footprints Based on Semantic Information of Oblique UAV Images

Xiangyu Zhuo, Friedrich Fraundorfer, Franz Kurz, Peter Reinartz

Publication: Journal article › Research › Peer-reviewed


Building footprint information is vital for 3D building modeling. Traditionally, in remote sensing, building footprints are extracted and delineated from aerial imagery and/or LiDAR point clouds. Taking a different approach, this paper is dedicated to the optimization of OpenStreetMap (OSM) building footprints by exploiting contour information derived from deep learning-based semantic segmentation of oblique images acquired by an Unmanned Aerial Vehicle (UAV). First, a simplified 3D building model of Level of Detail 1 (LoD 1) is initialized using the footprint information from OSM and the elevation information from a Digital Surface Model (DSM). In parallel, a deep neural network for pixel-wise semantic image segmentation is trained to extract building boundaries as contour evidence. Subsequently, an optimization that integrates the contour evidence from multi-view images as a constraint yields a refined 3D building model with optimized footprint and height. Our method is applied to optimize OSM building footprints for four datasets with different building types, demonstrating robust performance for both individual and multiple buildings regardless of image resolution. Finally, we compare our results with reference data from the German Authoritative Topographic-Cartographic Information System (ATKIS). Quantitative and qualitative evaluations reveal that the original OSM building footprints have large offsets, but their accuracy can be significantly improved from the meter level to the decimeter level after optimization.
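The LoD 1 initialization step described above (extruding an OSM footprint polygon to a height estimated from the DSM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the footprint coordinates, the dictionary-based DSM grid, the median roof-height heuristic, and the `lod1_height` helper are all assumptions introduced here for clarity.

```python
# Minimal sketch (not the authors' code): initialize a LoD 1 block model by
# estimating building height from DSM samples inside an OSM footprint polygon.
# The footprint, the DSM representation, and the sampling are illustrative.

def lod1_height(footprint, dsm, ground_elev):
    """Estimate building height as the median DSM elevation inside the
    footprint minus the ground elevation (simplified approach)."""
    def inside(x, y, poly):
        # Ray-casting point-in-polygon test.
        n, hit = len(poly), False
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
        return hit

    samples = sorted(z for (x, y), z in dsm.items() if inside(x, y, footprint))
    roof = samples[len(samples) // 2]  # median roof elevation
    return roof - ground_elev

# Toy example: a 10 m x 10 m footprint over a synthetic flat-roof DSM.
footprint = [(0, 0), (10, 0), (10, 10), (0, 10)]
dsm = {(x, y): 12.0 for x in range(1, 10) for y in range(1, 10)}  # roof at 12 m
print(lod1_height(footprint, dsm, ground_elev=3.0))  # -> 9.0
```

In the paper's pipeline, this initial block model is only a starting point; its footprint and height are subsequently refined by the multi-view contour constraints from the semantic segmentation.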
Journal: Remote Sensing
Publication status: Published - 2018



  • building footprint; oblique UAV images; semantic segmentation; deep neural network
