SyntCities: A Large Synthetic Remote Sensing Dataset for Disparity Estimation

Mario Fuentes Reyes, Pablo D'Angelo, Friedrich Fraundorfer

Research output: Contribution to journal › Article › peer-review

Abstract

Studies in recent years have proven the outstanding performance of deep learning for computer vision tasks in the remote sensing field, such as disparity estimation. However, available datasets mostly focus on close-range applications like autonomous driving or robot manipulation. To reduce the domain gap during training, we present SyntCities, a synthetic dataset resembling aerial imagery of urban areas. The pipeline used to render the images is based on 3-D modeling, which avoids acquisition costs, provides subpixel-accurate dense ground truth, and simulates different illumination conditions. The dataset additionally provides multiclass semantic maps and can be converted to point cloud format to benefit a wider research community. We focus on the task of disparity estimation and evaluate the performance of traditional semiglobal matching and state-of-the-art architectures, trained with SyntCities and other datasets, on real aerial and satellite images. A comparison with the widely used SceneFlow dataset is also presented. Strategies using a mixture of real and synthetic samples are studied as well. Results show significant improvements in the accuracy of the disparity maps.
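As an illustration of the semiglobal matching baseline evaluated in the paper, the sketch below computes a disparity map from a rectified stereo pair using OpenCV's StereoSGBM implementation. This is not the authors' code: the file names and parameter values are assumptions chosen for the example, not those used in the paper.

```python
# Minimal sketch of a semiglobal matching (SGM) baseline using OpenCV.
# Input file names and parameter values below are illustrative only.
import cv2
import numpy as np

# Hypothetical rectified stereo pair (left/right views of the same scene).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block_size = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,                 # search range; must be divisible by 16
    blockSize=block_size,
    P1=8 * block_size * block_size,     # penalty for small disparity changes
    P2=32 * block_size * block_size,    # penalty for larger jumps (P2 > P1)
    uniquenessRatio=10,
)

# OpenCV returns fixed-point disparities scaled by 16; convert to float pixels.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
```

For a dataset such as SyntCities, the resulting disparity map would be compared against the dense ground truth to score the baseline; the deep architectures mentioned in the abstract would be evaluated on the same pairs.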
Original language: English
Article number: 9960780
Pages (from-to): 10087-10098
Number of pages: 12
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Volume: 15
Publication status: Published - 23 Nov 2022

Keywords

  • Estimation
  • Semantics
  • Three-dimensional displays
  • Remote sensing
  • Synthetic data
  • Software
  • Training
