We show that, under standard assumptions, the steepest-descent method and Newton's method for unconstrained nonconvex optimization may both require a number of iterations and function evaluations arbitrarily close to steepest descent's global worst-case complexity bound. This implies that the latter upper bound is essentially tight for steepest descent, and that Newton's method may be as slow as steepest descent in the worst case.
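For context (these orders are standard in the literature and only implicit in the abstract): for objectives with Lipschitz-continuous gradients, steepest descent needs at most on the order of

\[
O\!\left(\epsilon^{-2}\right)
\]

iterations and function evaluations to produce an iterate with \(\|\nabla f(x_k)\| \le \epsilon\); it is this bound that the constructions mentioned above essentially attain.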
Then the cubic regularization of Newton's method (Griewank, 1981; Nesterov & Polyak, 2006) is considered and extended to large-scale problems, while preserving the improved order of its worst-case complexity relative to that of steepest descent; this improved worst-case bound is also shown to be tight.
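To make the cubic regularization idea concrete, the following is a minimal sketch in the spirit of such methods, not the algorithm analyzed in the talk: each step minimizes a cubic model of the objective around the current iterate (which yields the improved \(O(\epsilon^{-3/2})\)-evaluation order established by Nesterov & Polyak, 2006), and the weight sigma is adapted from the ratio of actual to predicted decrease. The names cubic_newton_step and arc_minimize, all parameter values, and the bisection-based subproblem solver are illustrative assumptions.

import numpy as np

def cubic_newton_step(g, H, sigma, tol=1e-10):
    """Minimize the cubic model m(s) = g@s + 0.5*s@H@s + (sigma/3)*||s||^3
    via the optimality condition s = -(H + lam*I)^{-1} g with lam = sigma*||s||,
    locating lam by bisection (the degenerate "hard case" is ignored)."""
    eye = np.eye(len(g))
    s_of = lambda lam: np.linalg.solve(H + lam * eye, -g)
    psi = lambda lam: sigma * np.linalg.norm(s_of(lam)) - lam
    lo = max(0.0, -np.linalg.eigvalsh(H)[0]) + 1e-12  # keep H + lam*I positive definite
    hi = lo + 1.0
    while psi(hi) > 0:          # grow the bracket until psi changes sign
        hi *= 2.0
    while hi - lo > tol:        # psi is strictly decreasing on this interval
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) > 0 else (lo, mid)
    return s_of(hi)

def arc_minimize(f, grad, hess, x0, sigma=1.0, eps=1e-6, max_iter=500):
    """Adaptive cubic regularization loop: accept a step when the actual
    decrease matches a fraction of the model decrease, else raise sigma."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= eps:
            break
        s = cubic_newton_step(g, H, sigma)
        model_decrease = -(g @ s + 0.5 * s @ H @ s
                           + (sigma / 3.0) * np.linalg.norm(s) ** 3)
        rho = (f(x) - f(x + s)) / max(model_decrease, 1e-16)
        if rho > 0.1:            # successful: take the step, relax sigma
            x, sigma = x + s, max(0.5 * sigma, 1e-8)
        else:                    # unsuccessful: strengthen the regularization
            sigma *= 2.0
    return x

# Illustrative use on the 2-d Rosenbrock function.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(arc_minimize(f, grad, hess, [-1.2, 1.0]))   # approaches (1, 1)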
We further show that the cubic regularization approach is, in fact, optimal from a worst-case complexity point of view amongst a wide class of second-order methods. The worst-case problem-evaluation complexity of constrained optimization will also be discussed. This is joint work with Nick Gould (Rutherford Appleton Laboratory, UK) and Philippe Toint (University of Namur, Belgium).