In this paper we propose an efficient method to compute a high-quality depth map from a single raw image captured by a light field (plenoptic) camera. The proposed model combines the main idea of Active Wavefront Sampling (AWS) with the light field technique: we extract so-called sub-aperture images from the raw image of a plenoptic camera such that the virtual viewpoints are arranged on circles around a fixed center view. By tracking an imaged scene point over the sequence of sub-aperture images corresponding to a common circle, one observes a virtual rotation of the scene point on the image plane. Our model measures a dense field of these rotations, which are inversely related to the scene depth.
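The AWS principle the abstract refers to can be sketched numerically. The following is a minimal, hypothetical illustration (not the authors' implementation) under a thin-lens assumption: virtual viewpoints sampled on an aperture circle of radius `r_aperture` make an out-of-focus scene point trace a circle on the image plane, with a rotation radius proportional to `|1/z - 1/z_focus|`, hence inversely related to depth. All names and the exact proportionality constant are assumptions for illustration.

```python
import numpy as np

def image_offsets(z, z_focus, r_aperture, focal_length, n_views=8):
    """Image-plane positions of one scene point at depth z, as seen from
    n_views virtual viewpoints arranged on a circle in the aperture plane.

    Hypothetical thin-lens sketch: the rotation radius on the image plane
    scales with r_aperture * focal_length * |1/z - 1/z_focus|, so points
    farther from the focal plane rotate on larger circles.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    # Rotation radius on the image plane (inversely related to depth).
    rho = r_aperture * focal_length * abs(1.0 / z - 1.0 / z_focus)
    return np.stack([rho * np.cos(angles), rho * np.sin(angles)], axis=1)

# A point on the focal plane does not move over the circular sequence;
# a point closer to the camera rotates with a larger radius.
on_focus = image_offsets(z=2.0, z_focus=2.0, r_aperture=1.0, focal_length=1.0)
near = image_offsets(z=1.0, z_focus=2.0, r_aperture=1.0, focal_length=1.0)
print(np.allclose(on_focus, 0.0))  # True
```

Inverting this relation, a measured rotation radius for each pixel yields a depth estimate, which is the quantity the proposed model recovers densely.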
Title of host publication: Energy Minimization Methods in Computer Vision and Pattern Recognition
Subtitle of host publication: 9th International Conference, EMMCVPR 2013, Lund, Sweden, August 19-21, 2013. Proceedings
Publisher: Springer Berlin Heidelberg
Publication status: Accepted/In press - 2013