Date: Tue, 09 Feb 2016
Time: 14:00 - 14:30
Location: L5
Speaker: Coralia Cartis
Organisation: University of Oxford

Adaptive cubic regularization methods have recently emerged as a credible alternative to line-search and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective, regularized by any power of the step size. We investigate the worst-case complexity/global rate of convergence of these algorithms in the presence of varying (unknown) smoothness of the objective. We find that some methods automatically adapt their complexity to the degree of smoothness of the objective, while others take advantage of the power used in the regularization to satisfy increasingly better bounds as the order of the models grows. This work is joint with Nick Gould (RAL) and Philippe Toint (Namur).
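
To make the framework concrete, below is a minimal sketch in Python of one member of this class: a second-order Taylor model with cubic (power-3) regularization, in the spirit of adaptive cubic regularization. The algorithm parameters (sigma0, eta1, eta2, gamma) and the use of a generic solver for the regularized subproblem are illustrative assumptions, not details from the talk.

import numpy as np
from scipy.optimize import minimize

def ar2(f, grad, hess, x0, sigma0=1.0, eta1=0.1, eta2=0.9,
        gamma=2.0, tol=1e-6, max_iter=200):
    """Sketch of adaptive regularization with a second-order Taylor
    model and cubic regularization of the step size (parameter values
    are illustrative, not from the talk)."""
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= tol:
            break
        # Regularized local model: quadratic Taylor expansion plus
        # (sigma/3) * ||s||^3; higher-order variants raise the model
        # degree and the regularization power together.
        def model(s):
            return g @ s + 0.5 * s @ H @ s + (sigma / 3.0) * np.linalg.norm(s) ** 3
        # Approximate subproblem solve via a generic smooth optimizer.
        s = minimize(model, np.zeros_like(x), method="BFGS").x
        pred = -model(s)                  # predicted decrease
        actual = f(x) - f(x + s)          # actual decrease
        rho = actual / pred if pred > 0 else -np.inf
        if rho >= eta1:                   # successful step: accept
            x = x + s
        if rho >= eta2:                   # very successful: relax regularization
            sigma = max(sigma / gamma, 1e-8)
        elif rho < eta1:                  # unsuccessful: regularize more
            sigma *= gamma
    return x

# Example: minimize the (nonconvex) Rosenbrock function.
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
rosen_g = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                              200 * (x[1] - x[0] ** 2)])
rosen_h = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
                              [-400 * x[0], 200.0]])
print(ar2(rosen, rosen_g, rosen_h, [-1.2, 1.0]))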
