Variational Shape from Light Field

Stefan Heber, Rene Ranftl, Thomas Pock

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

In this paper, we propose an efficient method to calculate a high-quality depth map from a single raw image captured by a light field or plenoptic camera. The proposed model combines the main idea of Active Wavefront Sampling (AWS) with the light field technique, i.e., we extract so-called sub-aperture images from the raw image of a plenoptic camera such that the virtual viewpoints are arranged on circles around a fixed center view. By tracking an imaged scene point over a sequence of sub-aperture images corresponding to a common circle, one can observe a virtual rotation of the scene point on the image plane. Our model is able to measure a dense field of these rotations, which are inversely related to the scene depth.
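The geometric intuition behind the abstract can be illustrated with a minimal sketch: virtual viewpoints placed on a circle in the sub-aperture plane induce, for a scene point off the plane of focus, an apparent circular motion on the image plane whose radius encodes depth. The code below is an illustrative assumption using a simplified thin-lens disparity relation; the helper names and the exact depth-disparity formula are not taken from the paper.

```python
import numpy as np

def circular_viewpoints(radius, n_views):
    """Virtual viewpoints arranged on a circle around a fixed center view
    (coordinates in the sub-aperture plane). Hypothetical helper for
    illustration only."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    return np.stack([radius * np.cos(thetas), radius * np.sin(thetas)], axis=1)

def apparent_shift(point_depth, focus_depth, viewpoints):
    """Shift of a scene point on the image plane as the virtual viewpoint
    moves: proportional to the viewpoint offset, scaled by a disparity term
    that is inversely related to depth (simplified thin-lens assumption).
    Points on a circle of viewpoints therefore trace a circle on the sensor,
    mirroring the Active Wavefront Sampling idea."""
    disparity = 1.0 / point_depth - 1.0 / focus_depth
    return disparity * viewpoints  # per-view 2D shift on the image plane

# Example: points at different depths trace circles of different radii.
views = circular_viewpoints(radius=3.0, n_views=8)
for z in (0.5, 1.0, 2.0):
    shifts = apparent_shift(z, focus_depth=1.0, viewpoints=views)
    print(f"depth {z}: traced circle radius {np.linalg.norm(shifts, axis=1).max():.3f}")
```

In this toy model, a point at the plane of focus (depth 1.0) does not move at all, while nearer and farther points trace circles whose radii grow with their deviation in inverse depth, which is the quantity the paper's variational model estimates densely.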
Original language: English
Title of host publication: Energy Minimization Methods in Computer Vision and Pattern Recognition
Subtitle of host publication: 9th International Conference, EMMCVPR 2013, Lund, Sweden, August 19-21, 2013. Proceedings
Publisher: Springer Berlin Heidelberg
Pages: 66-79
Volume: 8081
ISBN (Electronic): 978-3-642-40395-8
ISBN (Print): 978-3-642-40394-1
DOIs
Publication status: Accepted/In press - 2013
