Loss-Specific Training of Random Forests for Super-Resolution

Publication: Contribution to a book/report/conference proceedings › Conference contribution › Research › Peer-reviewed

Abstract

Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches to loss-specific training of random forests. However, in contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the-art approaches.
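
The abstract only summarizes the approach, so the following is a minimal sketch (not the authors' implementation) of one of the two ingredients it names, globally optimized prediction models: grow an ordinary random forest, then re-fit all leaf predictions jointly under a single regularized squared loss instead of averaging the targets in each leaf independently. The paper additionally optimizes the tree structure under the global objective, which this sketch omits; the scikit-learn API and the toy data below are assumptions made purely for illustration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Toy stand-in data: rows play the role of low-resolution patch features,
# targets the missing high-frequency detail to be predicted (hypothetical).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 16))
y = X @ rng.standard_normal(16) + 0.1 * rng.standard_normal(500)

# 1) Grow a standard random forest with locally (greedily) chosen splits.
forest = RandomForestRegressor(n_estimators=8, max_depth=6, random_state=0)
forest.fit(X, y)

# 2) Encode each sample by the leaf it reaches in every tree
#    (one indicator column per (tree, node) pair, stacked across the forest).
leaves = forest.apply(X)                                  # (n_samples, n_trees)
node_counts = [t.tree_.node_count for t in forest.estimators_]
offsets = np.concatenate(([0], np.cumsum(node_counts)[:-1]))
cols = leaves + offsets
Z = np.zeros((X.shape[0], sum(node_counts)))
Z[np.arange(X.shape[0])[:, None], cols] = 1.0

# 3) Re-fit all leaf predictions jointly under one regularized squared loss,
#    rather than fitting each leaf independently from its local samples.
refined = Ridge(alpha=1.0, fit_intercept=False).fit(Z, y)

# Prediction sums the globally optimized values of the leaves a sample hits.
y_hat = refined.predict(Z)
print("training MSE after global refinement:", np.mean((y_hat - y) ** 2))

Because the re-fit couples all trees through a single loss over the indicator features, the leaves share the prediction work instead of duplicating it, which is the kind of coupling the abstract contrasts with conventional, independently fitted leaf models.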
Original language: English
Title: Proceedings of the 22nd Computer Vision Winter Workshop
Editors: Walter G. Kropatsch, Ines Janusch, Nicole M. Artner
Publisher: TU Wien, Pattern Recognition and Image Processing Group
Chapter: 23
Number of pages: 9
ISBN (electronic): 978-3-200-04969-7
Publication status: Published - 2017

Cite this

Grabner, A., Poier, G., Opitz, M., Schulter, S., & Roth, P. M. (2017). Loss-Specific Training of Random Forests for Super-Resolution. In W. G. Kropatsch, I. Janusch, & N. M. Artner (Eds.), Proceedings of the 22nd Computer Vision Winter Workshop. TU Wien, Pattern Recognition and Image Processing Group.

Loss-Specific Training of Random Forests for Super-Resolution. / Grabner, Alexander; Poier, Georg; Opitz, Michael; Schulter, Samuel; Roth, Peter M.

Proceedings of the 22nd Computer Vision Winter Workshop. ed. / Walter G. Kropatsch; Ines Janusch; Nicole M. Artner. TU Wien, Pattern Recognition and Image Processing Group, 2017.

Grabner, A, Poier, G, Opitz, M, Schulter, S & Roth, PM 2017, Loss-Specific Training of Random Forests for Super-Resolution. in WG Kropatsch, I Janusch & NM Artner (eds), Proceedings of the 22nd Computer Vision Winter Workshop. TU Wien, Pattern Recognition and Image Processing Group.
Grabner A, Poier G, Opitz M, Schulter S, Roth PM. Loss-Specific Training of Random Forests for Super-Resolution. In Kropatsch WG, Janusch I, Artner NM, editors, Proceedings of the 22nd Computer Vision Winter Workshop. TU Wien, Pattern Recognition and Image Processing Group. 2017
Grabner, Alexander ; Poier, Georg ; Opitz, Michael ; Schulter, Samuel ; Roth, Peter M. / Loss-Specific Training of Random Forests for Super-Resolution. Proceedings of the 22nd Computer Vision Winter Workshop. editor / Walter G. Kropatsch ; Ines Janusch ; Nicole M. Artner. TU Wien, Pattern Recognition and Image Processing Group, 2017.
@inproceedings{db15697cbc834f6f825512cbef5066dd,
title = "Loss-Specific Training of Random Forests for Super-Resolution",
abstract = "Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches to loss-specific training of random forests. However, in contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the-art approaches.",
author = "Alexander Grabner and Georg Poier and Michael Opitz and Samuel Schulter and Roth, {Peter M.}",
year = "2017",
language = "English",
editor = "Kropatsch, {Walter G.} and Ines Janusch and Artner, {Nicole M.}",
booktitle = "Proceedings of the 22nd Computer Vision Winter Workshop",
publisher = "TU Wien, Pattern Recognition and Image Processing Group",
address = "Austria",

}

TY - GEN

T1 - Loss-Specific Training of Random Forests for Super-Resolution

AU - Grabner, Alexander

AU - Poier, Georg

AU - Opitz, Michael

AU - Schulter, Samuel

AU - Roth, Peter M.

PY - 2017

Y1 - 2017

N2 - Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches to loss-specific training of random forests. However, in contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the-art approaches.

AB - Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches to loss-specific training of random forests. However, in contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the-art approaches.

M3 - Conference contribution

BT - Proceedings of the 22nd Computer Vision Winter Workshop

A2 - Kropatsch, Walter G.

A2 - Janusch, Ines

A2 - Artner, Nicole M.

PB - TU Wien, Pattern Recognition and Image Processing Group

ER -