Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Nonconvex Optimization

Mahesh Chandra Mukkamala, Peter Ochs, Thomas Pock, Shoham Sabach

Research output: Contribution to journal › Article › Peer review

Abstract

Backtracking line search is an old yet powerful strategy for finding better step sizes to be used in proximal gradient algorithms. The main principle is to locally find a simple convex upper bound of the objective function, which in turn controls the step size that is used. In the case of inertial proximal gradient algorithms, the situation becomes much more difficult and usually leads to very restrictive rules for the extrapolation parameter. In this paper, we show that the extrapolation parameter can be controlled by also locally finding a simple concave lower bound of the objective function. This gives rise to a double convex-concave backtracking procedure, which allows for an adaptive choice of both the step size and the extrapolation parameter. We apply this procedure to the class of inertial Bregman proximal gradient methods and prove that any sequence generated by these algorithms converges globally to a critical point of the …
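
To make the double backtracking idea concrete, the following is a minimal sketch in Python of one inertial proximal gradient step with convex-concave backtracking, restricted to the Euclidean setting where the Bregman distance reduces to (1/2)‖x − y‖². The function names (f, grad_f, prox_g), the parameter names, and the specific acceptance tests are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def cc_backtracking_step(f, grad_f, prox_g, x, x_prev,
                         L=1.0, L_bar=1.0, beta0=0.9,
                         shrink=0.5, grow=2.0):
    """One inertial proximal gradient step with double backtracking.

    Euclidean sketch only: the Bregman distance is (1/2)||x - y||^2,
    so the two bounds below are ordinary quadratic models of f.
    """
    # Concave backtracking: shrink the extrapolation parameter beta until
    # a simple concave lower bound of f holds at the extrapolated point y,
    # i.e. f(x) >= f(y) + <grad f(y), x - y> - (L_bar/2) * ||x - y||^2.
    beta = beta0
    while True:
        y = x + beta * (x - x_prev)
        d = x - y
        if f(x) >= f(y) + grad_f(y) @ d - 0.5 * L_bar * (d @ d) or beta < 1e-12:
            break
        beta *= shrink  # extrapolation too aggressive: reduce inertia

    # Convex backtracking: increase L until a simple convex upper bound of f
    # holds at the proximal gradient trial point, which fixes the step 1/L,
    # i.e. f(x+) <= f(y) + <grad f(y), x+ - y> + (L/2) * ||x+ - y||^2.
    while True:
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
        d = x_new - y
        if f(x_new) <= f(y) + grad_f(y) @ d + 0.5 * L * (d @ d):
            return x_new, L, beta
        L *= grow  # upper bound violated: take a smaller step
```

In the paper's full method the two estimates are additionally coupled (the admissible extrapolation depends on the relation between the lower- and upper-bound parameters) and both bounds are expressed via Bregman distances generated by a kernel h; the sketch only shows where each of the two local bounds enters the iteration.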
Original language: English
Pages (from-to): 658-682
Journal: SIAM Journal on Mathematics of Data Science
Volume: 2
Issue number: 3
Publication status: Published - 2020
