Convergence rate analysis of a stochastic trust-region method via supermartingales

Authors: 

Blanchet, J
Cartis, C
Scheinberg, K
Menickelly, M

Publication Date: 

7 June 2019

Journal: 

INFORMS Journal on Optimization

Last Updated: 

2020-01-22T09:28:49.247+00:00

Issue: 

2

Volume: 

1

DOI: 

10.1287/ijoo.2019.0016

Pages: 

92–119

Abstract: 

We propose a novel framework for analyzing convergence rates of stochastic optimization algorithms with adaptive step sizes. This framework is based on analyzing properties of an underlying generic stochastic process; in particular, we derive a bound on the expected stopping time of this process. We utilize this framework to analyze the expected global convergence rates of a stochastic variant of a traditional trust-region method. Although traditional trust-region methods rely on exact computations of the gradient, Hessian, and values of the objective function, this method assumes that these values are available only up to some dynamically adjusted accuracy. Moreover, this accuracy is assumed to hold only with some sufficiently large—but fixed—probability without any additional restrictions on the variance of the errors. This setting applies, for example, to standard stochastic optimization and machine learning formulations. Improving upon prior analysis, we show that the stochastic process defined by the trust-region method satisfies the assumptions of our proposed general framework. The stopping time in this setting is defined by an iterate satisfying a first-order accuracy condition. We demonstrate the first global complexity bound for a stochastic trust-region method under the assumption of sufficiently accurate stochastic gradients. Finally, we apply the same framework to derive second-order complexity bounds under additional assumptions.
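To make the setting concrete, the following is a minimal, illustrative sketch of a stochastic trust-region iteration of the kind the abstract describes: the gradient is available only inexactly (modeled here as additive Gaussian noise), the step minimizes a linear model over the trust region, and the radius is adapted based on observed decrease. This is a simplified toy, not the paper's exact method; the function names, noise model, and parameter values (`sigma`, `eta`, `gamma`) are assumptions for illustration, and for simplicity the acceptance test uses exact function values rather than noisy estimates.

```python
import numpy as np

def noisy_grad(f_grad, x, sigma, rng):
    """Inexact gradient oracle: exact gradient plus Gaussian noise (assumed model)."""
    return f_grad(x) + sigma * rng.standard_normal(x.shape)

def stochastic_trust_region(f, f_grad, x0, delta0=1.0, sigma=0.1,
                            eta=0.1, gamma=2.0, max_iter=200, seed=0):
    """Toy stochastic trust-region loop (illustrative sketch only).

    Each iteration builds a linear model from a noisy gradient, steps to the
    minimizer of that model on the trust-region boundary, and accepts the step
    only if the objective decreases by at least a fraction eta of the model's
    predicted decrease; the radius delta grows on success and shrinks otherwise.
    """
    rng = np.random.default_rng(seed)
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = noisy_grad(f_grad, x, sigma, rng)
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:
            continue  # degenerate draw; try a fresh gradient estimate
        step = -delta * g / gnorm      # minimizer of the linear model on the ball
        pred = delta * gnorm           # decrease predicted by the linear model
        actual = f(x) - f(x + step)
        if actual >= eta * pred:       # sufficient decrease: accept and expand
            x, delta = x + step, gamma * delta
        else:                          # insufficient decrease: reject and contract
            delta /= gamma
    return x
```

Run on a simple quadratic, `f(x) = ||x||^2` with gradient `2x`, the iterates drive the objective down despite the gradient noise, with `delta` oscillating near the solution as accepted and rejected steps alternate.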

Symplectic id: 

1024951

Submitted to ORA: 

Not Submitted

Publication Type: 

Journal Article