Loss-Specific Training of Random Forests for Super-Resolution

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

Abstract

Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches to loss-specific training of random forests. In contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the-art approaches.
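The abstract does not spell out the optimization itself, so the sketch below is only an illustration of the generic idea of globally re-fitting a forest's prediction models under one training loss; it is not a reproduction of the authors' method. It grows a standard scikit-learn forest and then jointly re-estimates all leaf constants by ridge regression on a sparse leaf-indicator matrix (one common form of global leaf refinement). The data, feature dimensions, and hyperparameters are placeholders.

"""Illustrative sketch: global re-fitting of random forest leaf predictions.

NOT the authors' algorithm; only demonstrates re-optimizing all leaf values of
a pre-grown forest under a single (ridge-regularized) squared loss.
"""
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Placeholder data: X stands in for low-resolution patch features,
# y for the high-resolution detail to predict (1-D here for brevity).
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 36))
y = X @ rng.standard_normal(36) + 0.1 * rng.standard_normal(2000)

# 1) Grow a standard forest (greedy, locally optimized structure).
forest = RandomForestRegressor(n_estimators=8, max_depth=6, random_state=0).fit(X, y)

# 2) Build a sparse leaf-indicator matrix A with A[i, j] = 1/n_trees if sample i
#    reaches global leaf j, so that the forest prediction equals A @ w.
leaf_ids = forest.apply(X)                      # (n_samples, n_trees) per-tree leaf ids
n_samples, n_trees = leaf_ids.shape
lookup, n_leaves = [], 0
for tree in forest.estimators_:
    leaves = np.flatnonzero(tree.tree_.children_left == -1)
    lookup.append({node: n_leaves + k for k, node in enumerate(leaves)})
    n_leaves += len(leaves)
rows = np.repeat(np.arange(n_samples), n_trees)
cols = np.array([lookup[t][leaf_ids[i, t]] for i in range(n_samples) for t in range(n_trees)])
A = csr_matrix((np.full(rows.shape, 1.0 / n_trees), (rows, cols)), shape=(n_samples, n_leaves))

# 3) Jointly re-fit all leaf values under one global regularized squared loss.
w = Ridge(alpha=1.0, fit_intercept=False).fit(A, y).coef_

# 4) Write the refined constants back so forest.predict() uses them.
for t, tree in enumerate(forest.estimators_):
    for node, col in lookup[t].items():
        tree.tree_.value[node, 0, 0] = w[col]

print("refined forest MSE:", np.mean((forest.predict(X) - y) ** 2))

In this sketch only the leaf constants are re-optimized; the paper additionally optimizes the forest structure under the global objective, which the snippet does not attempt.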
Original language: English
Title of host publication: Proceedings of the 22nd Computer Vision Winter Workshop
Editors: Walter G. Kropatsch, Ines Janusch, Nicole M. Artner
Publisher: TU Wien, Pattern Recognition and Image Processing Group
Chapter: 23
Number of pages: 9
ISBN (Electronic): 978-3-200-04969-7
Publication status: Published - 2017


Cite this

Grabner, A., Poier, G., Opitz, M., Schulter, S., & Roth, P. M. (2017). Loss-Specific Training of Random Forests for Super-Resolution. In W. G. Kropatsch, I. Janusch, & N. M. Artner (Eds.), Proceedings of the 22nd Computer Vision Winter Workshop. TU Wien, Pattern Recognition and Image Processing Group.

@inproceedings{db15697cbc834f6f825512cbef5066dd,
title = "Loss-Specific Training of Random Forests for Super-Resolution",
abstract = "Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches on loss-specific training of random forests. However, in contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the art approaches.",
author = "Alexander Grabner and Georg Poier and Michael Opitz and Samuel Schulter and Roth, {Peter M.}",
year = "2017",
language = "English",
editor = "Kropatsch, {Walter G.} and Ines Janusch and Artner, {Nicole M.}",
booktitle = "Proceedings of the 22nd Computer Vision Winter Workshop",
publisher = "TU Wien, Pattern Recongition and Image Processing Group",
address = "Austria",

}

TY - GEN

T1 - Loss-Specific Training of Random Forests for Super-Resolution

AU - Grabner, Alexander

AU - Poier, Georg

AU - Opitz, Michael

AU - Schulter, Samuel

AU - Roth, Peter M.

PY - 2017

Y1 - 2017

N2 - Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches to loss-specific training of random forests. In contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the-art approaches.

AB - Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches to loss-specific training of random forests. In contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the-art approaches.

M3 - Conference contribution

BT - Proceedings of the 22nd Computer Vision Winter Workshop

A2 - Kropatsch, Walter G.

A2 - Janusch, Ines

A2 - Artner, Nicole M.

PB - TU Wien, Pattern Recognition and Image Processing Group

ER -