Variational Networks: Connecting Variational Methods and Deep Learning

Erich Kobler, Teresa Klatzer, Kerstin Hammernik, Thomas Pock

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we introduce variational networks (VNs) for image reconstruction. VNs are fully learned models based on the framework of incremental proximal gradient methods. They provide a natural transition between classical variational methods and state-of-the-art residual neural networks. Due to their incremental nature, VNs are very efficient, but they only approximately minimize the underlying variational model. Surprisingly, our numerical experiments on image reconstruction problems show that giving up exact minimization leads to a consistent performance increase, in particular for convex models.
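To make the framework mentioned in the abstract concrete, the following is a minimal, illustrative sketch of a plain proximal gradient iteration for a sparse image-reconstruction-style problem. It is not the authors' VN architecture (which learns the regularizer and step parameters); the soft-thresholding prox, the l1 regularizer, and all variable names here are assumptions chosen for illustration.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1 (elementwise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient_step(x, A, y, step, tau):
    # One step of proximal gradient descent on
    #   min_x 0.5 * ||A x - y||^2 + tau * ||x||_1
    grad = A.T @ (A @ x - y)                  # gradient of the data term
    return soft_threshold(x - step * grad, step * tau)

# Tiny demo: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(30)

x = np.zeros(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
for _ in range(200):
    x = proximal_gradient_step(x, A, y, step, tau=0.1)
```

A VN in the paper's sense would replace the fixed shrinkage with learned activation functions and learned filters, and unroll only a small, fixed number of such incremental steps rather than iterating to convergence.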
Original language: English
Title of host publication: Pattern Recognition
Subtitle of host publication: German Conference, GCPR 2017, Proceedings
Publisher: Springer
Pages: 281-293
ISBN (Print): 978-3-319-66708-9
Publication status: Published - 2017

Publication series

Name: Lecture Notes in Computer Science
Volume: 10496

Keywords

  • variational methods
  • machine learning

