A Deep Primal-Dual Network for Guided Depth Super-Resolution

Gernot Riegler, David Ferstl, Matthias Rüther, Horst Bischof

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a deep primal-dual network. The joint network computes a noise-free, high-resolution estimate from a noisy, low-resolution input depth map. Additionally, a high-resolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating it as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. The training of such a deep network requires a large dataset for supervision. Therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state-of-the-art on multiple benchmarks.
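The unrolling idea in the abstract can be illustrated with a plain first-order primal-dual (Chambolle-Pock-style) loop for a simple TV-regularized denoising energy. This is a hedged sketch only, not the paper's non-local guided formulation: the gradient/divergence operators, the step sizes `tau` and `sigma`, and the data-term weight `lam` are hand-set here, whereas the paper learns all such parameters (and the operators themselves) end-to-end by treating each iteration as a network layer.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad, so that <grad u, p> = <u, -div p>.
    d = np.zeros_like(px)
    d[:, 0] = px[:, 0]
    d[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    d[:, -1] = -px[:, -2]
    d[0, :] += py[0, :]
    d[1:-1, :] += py[1:-1, :] - py[:-2, :]
    d[-1, :] += -py[-2, :]
    return d

def unrolled_primal_dual(f, n_iters=10, lam=8.0, tau=0.25, sigma=0.25, theta=1.0):
    """Unrolled primal-dual iterations for min_u lam/2 ||u - f||^2 + ||grad u||_1.

    Each loop body is one fixed 'layer'. In a learned unrolled network the
    per-layer step sizes (tau, sigma), the trade-off lam, and the linear
    operators would all be trainable parameters instead of constants.
    """
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iters):
        # Dual ascent step, followed by projection onto the unit ball.
        gx, gy = grad(u_bar)
        px = px + sigma * gx
        py = py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm; py /= norm
        # Primal descent step: proximal map of the quadratic data term.
        u_prev = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        # Over-relaxation of the primal variable.
        u_bar = u + theta * (u - u_prev)
    return u
```

With `tau = sigma = 0.25` the standard step-size condition `tau * sigma * ||K||^2 <= 1` holds (the discrete gradient satisfies `||K||^2 <= 8`), so the fixed-iteration loop behaves like a truncated convergent solver.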
Language: English
Title of host publication: British Machine Vision Conference
Publisher: The British Machine Vision Association
Status: Published - 2016

Cite this

Riegler, G., Ferstl, D., Rüther, M., & Bischof, H. (2016). A Deep Primal-Dual Network for Guided Depth Super-Resolution. In British Machine Vision Conference The British Machine Vision Association.

A Deep Primal-Dual Network for Guided Depth Super-Resolution. / Riegler, Gernot; Ferstl, David; Rüther, Matthias; Bischof, Horst.

British Machine Vision Conference. The British Machine Vision Association, 2016.

Riegler, G, Ferstl, D, Rüther, M & Bischof, H 2016, A Deep Primal-Dual Network for Guided Depth Super-Resolution. in British Machine Vision Conference. The British Machine Vision Association.
Riegler G, Ferstl D, Rüther M, Bischof H. A Deep Primal-Dual Network for Guided Depth Super-Resolution. In British Machine Vision Conference. The British Machine Vision Association. 2016.
Riegler, Gernot ; Ferstl, David ; Rüther, Matthias ; Bischof, Horst. / A Deep Primal-Dual Network for Guided Depth Super-Resolution. British Machine Vision Conference. The British Machine Vision Association, 2016.
@inproceedings{41eda7698dc543a28060a77bdf1044d9,
title = "A Deep Primal-Dual Network for Guided Depth Super-Resolution",
abstract = "In this paper we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a \textit{deep primal-dual network}. The joint network computes a noise-free, high-resolution estimate from a noisy, low-resolution input depth map. Additionally, a high-resolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating it as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. The training of such a deep network requires a large dataset for supervision. Therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state-of-the-art on multiple benchmarks.",
author = "Gernot Riegler and David Ferstl and Matthias R{\"u}ther and Horst Bischof",
year = "2016",
language = "English",
booktitle = "British Machine Vision Conference",
publisher = "The British Machine Vision Association",
address = "United Kingdom",

}

TY - GEN

T1 - A Deep Primal-Dual Network for Guided Depth Super-Resolution

AU - Riegler, Gernot

AU - Ferstl, David

AU - Rüther, Matthias

AU - Bischof, Horst

PY - 2016

Y1 - 2016

N2 - In this paper we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a deep primal-dual network. The joint network computes a noise-free, high-resolution estimate from a noisy, low-resolution input depth map. Additionally, a high-resolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating it as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. The training of such a deep network requires a large dataset for supervision. Therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state-of-the-art on multiple benchmarks.

AB - In this paper we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a deep primal-dual network. The joint network computes a noise-free, high-resolution estimate from a noisy, low-resolution input depth map. Additionally, a high-resolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating it as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. The training of such a deep network requires a large dataset for supervision. Therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state-of-the-art on multiple benchmarks.

M3 - Conference contribution

BT - British Machine Vision Conference

PB - The British Machine Vision Association

ER -