Author
Blanchet, J
Cartis, C
Menickelly, M
Scheinberg, K
Journal title
INFORMS Journal on Optimization
DOI
10.1287/ijoo.2019.0016
Issue
2
Volume
1
Pages
92-119
Abstract
We propose a novel framework for analyzing convergence rates of stochastic optimization algorithms with adaptive step sizes. This framework is based on analyzing properties of an underlying generic stochastic process; in particular, we derive a bound on the expected stopping time of this process. We utilize this framework to analyze the expected global convergence rates of a stochastic variant of a traditional trust-region method. Although traditional trust-region methods rely on exact computations of the gradient, Hessian, and values of the objective function, this method assumes that these values are available only up to some dynamically adjusted accuracy. Moreover, this accuracy is assumed to hold only with some sufficiently large—but fixed—probability without any additional restrictions on the variance of the errors. This setting applies, for example, to standard stochastic optimization and machine learning formulations. Improving upon prior analysis, we show that the stochastic process defined by the trust-region method satisfies the assumptions of our proposed general framework. The stopping time in this setting is defined by an iterate satisfying a first-order accuracy condition. We demonstrate the first global complexity bound for a stochastic trust-region method under the assumption of sufficiently accurate stochastic gradients. Finally, we apply the same framework to derive second-order complexity bounds under additional assumptions.
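To make the setting described in the abstract concrete, the following is a minimal, illustrative Python sketch of a first-order stochastic trust-region loop of the kind the paper analyzes. It is not the authors' algorithm or implementation: the helper functions `sample_value` and `sample_grad` are hypothetical stand-ins for noisy oracles that, in the paper's setting, are only assumed to be sufficiently accurate with some fixed probability, and the parameter choices are purely for illustration.

```python
import numpy as np

def stochastic_trust_region(x0, sample_value, sample_grad,
                            delta0=1.0, eta=0.1, gamma=2.0,
                            eps=1e-3, max_iter=1000):
    """Illustrative stochastic trust-region loop (sketch, not the paper's method).

    sample_value(x) and sample_grad(x) return noisy estimates of f(x) and
    its gradient; accuracy is assumed to hold only with some probability.
    """
    x, delta = np.asarray(x0, dtype=float), delta0
    for k in range(max_iter):
        g = sample_grad(x)                      # stochastic gradient model
        if np.linalg.norm(g) <= eps:
            break                               # approximate first-order stationarity
        # Cauchy-type step: move along -g to the trust-region boundary.
        s = -delta * g / np.linalg.norm(g)
        # Estimated vs. predicted reduction; both sides are noisy estimates.
        ared = sample_value(x) - sample_value(x + s)
        pred = delta * np.linalg.norm(g)
        rho = ared / pred
        if rho >= eta:                          # successful step: accept and expand
            x, delta = x + s, gamma * delta
        else:                                   # unsuccessful step: reject and shrink
            delta = delta / gamma
    return x
```

In the paper's framework, the iteration count until the stopping test on the gradient estimate is satisfied is treated as a stopping time of an underlying stochastic process, and the expected value of that stopping time yields the global complexity bound.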
Publication type
Journal Article
Publication date
07 Jun 2019