Adaptive FISTA for nonconvex optimization

Peter Ochs, Thomas Pock

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper we propose an adaptively extrapolated proximal gradient method, which is based on the accelerated proximal gradient method (also known as FISTA); however, we locally optimize the extrapolation parameter by carrying out an exact (or inexact) line search. It turns out that in some situations, the proposed algorithm is equivalent to a class of SR1 (identity minus rank 1) proximal quasi-Newton methods. Convergence is proved in a general nonconvex setting, and hence, as a byproduct, we also obtain new convergence guarantees for proximal quasi-Newton methods. The efficiency of the new method is shown in numerical experiments on a sparsity regularized nonlinear inverse problem.
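The abstract describes a FISTA-like proximal gradient method in which the extrapolation parameter is chosen per iteration by a line search. The sketch below illustrates that idea on a sparsity-regularized (l1) composite problem; it is a minimal simplification, not the paper's exact scheme — the grid-based "exact" line search, function names, and parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (used for the sparsity term).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adaptive_fista(grad_f, f, lam, x0, step, n_iter=100,
                   betas=np.linspace(0.0, 1.0, 21)):
    """FISTA-style proximal gradient for min_x f(x) + lam*||x||_1,
    where the extrapolation parameter beta is picked each iteration by
    a crude line search over a grid (a simplification of the paper's
    adaptive scheme; all names here are illustrative)."""
    x_prev = x0.copy()
    x = x0.copy()
    obj = lambda z: f(z) + lam * np.sum(np.abs(z))
    for _ in range(n_iter):
        d = x - x_prev                    # extrapolation direction
        best_x, best_val = None, np.inf
        for beta in betas:                # line search over beta
            y = x + beta * d              # extrapolated point
            x_new = soft_threshold(y - step * grad_f(y), step * lam)
            val = obj(x_new)
            if val < best_val:
                best_val, best_x = val, x_new
        x_prev, x = x, best_x
    return x
```

Classical FISTA fixes beta by a predetermined sequence; replacing it with a per-iteration search over candidate values is what makes the extrapolation "adaptive" in the sense sketched here.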

Original language: English
Pages (from-to): 2482-2503
Number of pages: 22
Journal: SIAM Journal on Optimization
Volume: 29
Issue number: 4
DOI: 10.1137/17M1156678
Publication status: Published - 1 Jan 2019

Keywords

  • FISTA
  • Proximal algorithm
  • Proximal quasi-Newton
  • SR1 quasi-Newton

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science

Cite this

Ochs, P., & Pock, T. (2019). Adaptive FISTA for nonconvex optimization. SIAM Journal on Optimization, 29(4), 2482-2503. https://doi.org/10.1137/17M1156678
@article{a82eb97ba245492fa0fa1476c71db1b5,
title = "Adaptive FISTA for nonconvex optimization",
abstract = "In this paper we propose an adaptively extrapolated proximal gradient method, which is based on the accelerated proximal gradient method (also known as FISTA); however, we locally optimize the extrapolation parameter by carrying out an exact (or inexact) line search. It turns out that in some situations, the proposed algorithm is equivalent to a class of SR1 (identity minus rank 1) proximal quasi-Newton methods. Convergence is proved in a general nonconvex setting, and hence, as a byproduct, we also obtain new convergence guarantees for proximal quasi-Newton methods. The efficiency of the new method is shown in numerical experiments on a sparsity regularized nonlinear inverse problem.",
keywords = "FISTA, Proximal algorithm, Proximal quasi-Newton, SR1 quasi-Newton",
author = "Peter Ochs and Thomas Pock",
year = "2019",
month = "1",
day = "1",
doi = "10.1137/17M1156678",
language = "English",
volume = "29",
pages = "2482--2503",
journal = "SIAM Journal on Optimization",
issn = "1052-6234",
publisher = "Society for Industrial and Applied Mathematics",
number = "4",
}
