Author
Cartis, C
Gould, N
Toint, P
Journal title
SIAM Journal on Optimization
DOI
10.1137/16M1106316
Issue
1
Volume
29
Pages
595–615
Abstract
Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity among second-order methods. Here we consider a general new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective, regularized by an arbitrary power of the step size, and applied to convexly constrained optimization problems. We investigate the worst-case evaluation complexity (global rate of convergence) of these algorithms when the level of sufficient smoothness of the objective may be unknown or even absent. We find that the methods accurately reflect in their complexity the degree of smoothness of the objective and satisfy increasingly better bounds as model accuracy improves. The bounds vary continuously and robustly with respect to the regularization power, the accuracy of the model, and the degree of smoothness of the objective.
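To illustrate the kind of iteration the abstract describes, the following is a minimal one-dimensional sketch of an adaptive regularization method using a first-order Taylor model with a quadratic regularizer (regularization power r = 2). All names, parameter values, and update rules here are illustrative assumptions for exposition, not the paper's algorithm.

```python
def ar1_minimize(f, grad, x0, sigma=1.0, eta=0.1, tol=1e-8, max_iter=1000):
    """Sketch of an adaptive regularization iteration (first-order model).

    At x, the regularized Taylor model is
        m(s) = f(x) + grad(x) * s + (sigma / 2) * s**2,
    whose minimizer is s = -grad(x) / sigma. The regularization weight
    sigma is adapted to the ratio of actual to predicted decrease.
    """
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) <= tol:           # approximate first-order stationarity
            break
        s = -g / sigma                          # model minimizer
        pred = (g * g) / (2.0 * sigma)          # predicted decrease m(0) - m(s) > 0
        actual = f(x) - f(x + s)                # actual decrease
        if actual >= eta * pred:                # successful step
            x = x + s
            sigma = max(sigma / 2.0, 1e-8)      # be more ambitious next time
        else:                                   # unsuccessful: regularize harder
            sigma *= 2.0
    return x

# Usage: a quartic with minimizer at x = 1 (its curvature vanishes there,
# so smoothness assumptions matter for the achievable rate).
x_star = ar1_minimize(lambda x: (x - 1.0) ** 4,
                      lambda x: 4.0 * (x - 1.0) ** 3,
                      x0=3.0)
```

Higher-order variants replace the linear model with a degree-p Taylor expansion and the quadratic penalty with a power-r penalty on the step norm; the paper's complexity bounds vary continuously with that power, the model order, and the objective's smoothness.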
Publication type
Journal Article
Publication date
05 Mar 2019