| Original language | English |
| --- | --- |
| Title | Proceedings of the 22nd Computer Vision Winter Workshop |
| Editors | Walter G. Kropatsch, Ines Janusch, Nicole M. Artner |
| Publisher | TU Wien, Pattern Recognition and Image Processing Group |
| Chapter | 23 |
| Number of pages | 9 |
| ISBN (electronic) | 978-3-200-04969-7 |
| Publication status | Published - 2017 |
Cite this
Loss-Specific Training of Random Forests for Super-Resolution. / Grabner, Alexander; Poier, Georg; Opitz, Michael; Schulter, Samuel; Roth, Peter M.
Proceedings of the 22nd Computer Vision Winter Workshop. Ed. / Walter G. Kropatsch; Ines Janusch; Nicole M. Artner. TU Wien, Pattern Recognition and Image Processing Group, 2017. Publication: Chapter in book/report/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Loss-Specific Training of Random Forests for Super-Resolution
AU - Grabner, Alexander
AU - Poier, Georg
AU - Opitz, Michael
AU - Schulter, Samuel
AU - Roth, Peter M.
PY - 2017
Y1 - 2017
N2 - Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches on loss-specific training of random forests. However, in contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the-art approaches.
AB - Super-resolution addresses the problem of image upscaling by reconstructing high-resolution output images from low-resolution input images. One successful approach for this problem is based on random forests. However, this approach has a large memory footprint, since complex models are required to achieve high accuracy. To overcome this drawback, we present a novel method for constructing random forests under a global training objective. In this way, we improve the fitting power and reduce the model size. In particular, we combine and extend recent approaches on loss-specific training of random forests. However, in contrast to previous works, we train random forests with globally optimized structure and globally optimized prediction models. We evaluate our proposed method on benchmarks for single image super-resolution. Our method shows significantly reduced model size while achieving competitive accuracy compared to state-of-the-art approaches.
M3 - Conference contribution
BT - Proceedings of the 22nd Computer Vision Winter Workshop
A2 - Kropatsch, Walter G.
A2 - Janusch, Ines
A2 - Artner, Nicole M.
PB - TU Wien, Pattern Recognition and Image Processing Group
ER -
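The abstract's key idea is replacing independently fitted leaf predictions with leaf models optimized jointly across the whole forest under a global loss. The sketch below illustrates that idea only; it is not the paper's implementation. It uses toy 1D regression data in place of low-/high-resolution patch pairs, depth-2 trees with random splits, a squared loss, and invented helper names (`build_tree`, `leaf_index`). Jointly refitting all leaf values by least squares can never increase the training error relative to per-leaf means, since the per-leaf-mean solution remains feasible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for (low-res patch, high-res target) pairs.
n, d, n_trees, n_leaves = 500, 8, 10, 4
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

def build_tree():
    # A "tree" here is just a random depth-2 stump: two (feature, threshold)
    # pairs routing each sample to one of 4 leaves. No split optimization.
    return rng.integers(0, d, size=2), rng.normal(size=2)

def leaf_index(X, tree):
    f, t = tree
    return 2 * (X[:, f[0]] > t[0]).astype(int) + (X[:, f[1]] > t[1]).astype(int)

trees = [build_tree() for _ in range(n_trees)]

# Indicator matrix Z: entry (i, j*n_leaves + k) is 1 iff sample i lands in
# leaf k of tree j. The forest prediction is the average over trees: Z @ w / T.
Z = np.zeros((n, n_trees * n_leaves))
for j, tree in enumerate(trees):
    Z[np.arange(n), j * n_leaves + leaf_index(X, tree)] = 1.0

# Baseline (standard random forest): each leaf independently stores the mean
# of the training targets routed to it.
w_indep = np.zeros(n_trees * n_leaves)
for c in range(n_trees * n_leaves):
    mask = Z[:, c] > 0
    if mask.any():
        w_indep[c] = y[mask].mean()
mse_indep = float(np.mean((Z @ w_indep / n_trees - y) ** 2))

# Globally optimized leaf predictions: solve min_w || (Z / T) w - y ||^2
# jointly over all leaves of all trees, i.e. a squared global training loss.
w_glob, *_ = np.linalg.lstsq(Z / n_trees, y, rcond=None)
mse_glob = float(np.mean((Z @ w_glob / n_trees - y) ** 2))

print(f"training MSE, independent leaf means: {mse_indep:.4f}")
print(f"training MSE, globally refit leaves:  {mse_glob:.4f}")
```

Because the global least-squares problem includes the independent-means solution in its feasible set, `mse_glob` is at most `mse_indep` on the training data; the paper additionally optimizes the tree *structure* under the global objective, which this sketch does not attempt.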