New error bounds and their applications to convergence analysis of iterative algorithms

Zhi Quan Luo

Research output: Contribution to journal › Article › peer-review

Abstract

We present two new error bounds for optimization problems over a convex set whose objective function f is either semianalytic or γ-strictly convex, with γ ≥ 1. We then apply these error bounds to analyze the rate of convergence of a wide class of iterative descent algorithms for the aforementioned optimization problem. Our analysis shows that the function sequence {f(x^k)} converges at least at the sublinear rate of k^{-ε} for some positive constant ε, where k is the iteration index. Moreover, the distances from the iterate sequence {x^k} to the set of stationary points of the optimization problem converge to zero at least sublinearly.
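The sublinear rate described in the abstract, f(x^k) - f* = O(k^{-ε}), can be observed numerically. The following sketch is not taken from the paper: it runs fixed-step gradient descent on the illustrative objective f(x) = x^4 (γ-strictly convex with γ = 4), for which the function values decay sublinearly with ε = 2, and estimates the rate empirically from a log-log slope. The objective, step size, and iteration counts are all assumptions chosen for the demonstration.

```python
import math

# Illustrative gamma-strictly convex objective (gamma = 4) and its gradient.
def f(x):
    return x ** 4

def grad(x):
    return 4 * x ** 3

def descend(x0=1.0, eta=0.01, iters=10000):
    """Fixed-step gradient descent; records f(x^k) at each iteration."""
    x, values = x0, []
    for _ in range(iters):
        x -= eta * grad(x)
        values.append(f(x))
    return values

values = descend()

# Empirical rate: slope of log f(x^k) versus log k between k = 1000 and k = 10000.
# A slope near -eps indicates the sublinear rate f(x^k) = O(k^{-eps}).
slope = (math.log(values[9999]) - math.log(values[999])) / (math.log(10000) - math.log(1000))
print(f"empirical log-log slope: {slope:.2f}")  # close to -2 for this objective
```

For this objective the iterates satisfy x_k ≈ (1 + 0.08k)^{-1/2}, so f(x^k) ≈ (1 + 0.08k)^{-2}, matching the measured slope of roughly -2. A smooth strongly convex objective would instead show linear (geometric) convergence, which is why the γ-strictly convex case in the paper requires the finer sublinear analysis.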

Original language: English (US)
Pages (from-to): 341-355
Number of pages: 15
Journal: Mathematical Programming, Series B
Volume: 88
Issue number: 2
DOIs
State: Published - Aug 2000
