Past Numerical Analysis Group Internal Seminar

16 February 2016
14:30
Abstract

At the heart of the interior point method in optimization is a linear system solve, but how accurate must this solve be?  The behaviour of such methods is well understood when a direct solver is used, but the scale of problems tackled today means that users increasingly turn to iterative methods to approximate the solution.  Current suggestions for the accuracy required can be shown to be too stringent, leading to inefficiency.

In this talk I will give conditions on the accuracy of the solution that guarantee the inexact interior point method converges at the same rate as if the solve were exact.  These conditions can be shown numerically to be tight, in that performance degrades rapidly if a weaker condition is used.  Finally, I will describe how the norms that appear in these conditions are related to the natural norms minimized by several popular Krylov subspace methods. This, in turn, could help in the development of new preconditioners in this important field.
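As a generic illustration of the idea (not the talk's specific conditions, which concern interior point methods), the sketch below shows an inexact Newton iteration in which each linear system is solved only to a relative residual tolerance eta by an inner iterative method; the function names and the toy problem are illustrative assumptions.

```python
import numpy as np

def inexact_newton(F, J, x, eta=1e-2, tol=1e-10, max_iter=50):
    """Inexact Newton: at each outer step, J(x) s = -F(x) is solved only
    until ||J s + F|| <= eta * ||F||, mimicking the use of an iterative
    linear solver with a prescribed accuracy."""
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        A, b = J(x), -f
        s = np.zeros_like(b)
        # Richardson iteration on the normal equations as a stand-in for
        # a Krylov solver, stopped once the inexactness condition holds.
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
        while np.linalg.norm(A @ s - b) > eta * np.linalg.norm(b):
            s = s + omega * (A.T @ (b - A @ s))
        x = x + s
    return x

# Toy example: intersect the circle x^2 + y^2 = 4 with the line x = y.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
root = inexact_newton(F, J, np.array([1.0, 2.0]))
```

Tightening or loosening eta trades inner-solver work against the number of outer iterations, which is exactly the balance the talk's accuracy conditions address.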

  • Numerical Analysis Group Internal Seminar
16 February 2016
14:00
Jared Aurentz
Abstract

Operators, functions, and functionals are combined in many problems of computational science in a fashion that has the same logical structure as is familiar for block matrices and vectors.  It is proposed that the explicit consideration of such block structures at the continuous as opposed to discrete level can be a useful tool.  In particular, block operator diagrams provide templates for spectral discretization by the rectangular differentiation, integration, and identity matrices introduced by Driscoll and Hale.  The notion of the rectangular shape of a linear operator can be made rigorous by the theory of Fredholm operators and their indices, and the block operator formulations apply to nonlinear problems too, where the convention is proposed of representing nonlinear blocks as shaded.  At each step of a Newton iteration, the structure is linearized and the blocks become unshaded, representing Fréchet derivative operators, square or rectangular.  The use of block operator diagrams makes it possible to precisely specify discretizations of even rather complicated problems with just a few lines of pseudocode.

[Joint work with Nick Trefethen]
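The rectangular block structure can be sketched with a simple finite-difference stand-in for the Driscoll–Hale rectangular spectral matrices (an assumption for illustration, not the chebop machinery itself): a rectangular differentiation block maps n values of u to n-2 interior values of u'', and two boundary rows are appended to square up the system.

```python
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Rectangular second-difference block: n values of u -> n-2 interior
# values of u'' (a finite-difference stand-in for a rectangular
# differentiation matrix).
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i+3] = [1.0, -2.0, 1.0]
D2 /= h**2

# Boundary rows make the block operator square: [D2; e_1^T; e_n^T].
e1 = np.zeros(n); e1[0] = 1.0
en = np.zeros(n); en[-1] = 1.0
A = np.vstack([D2, e1, en])

# Solve u'' = -pi^2 sin(pi x), u(0) = u(1) = 0; exact solution sin(pi x).
f = -np.pi**2 * np.sin(np.pi * x[1:-1])
rhs = np.concatenate([f, [0.0, 0.0]])
u = np.linalg.solve(A, rhs)
```

The point of the diagrammatic notation is that this "rectangular operator plus boundary rows" template can be read off directly from the block operator diagram, for far more complicated couplings than this scalar example.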

  • Numerical Analysis Group Internal Seminar
9 February 2016
14:00
Coralia Cartis
Abstract

Adaptive cubic regularization methods have recently emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective regularized by a(ny) power of the step size. We investigate the worst-case complexity/global rate of convergence of these algorithms in the presence of varying (unknown) smoothness of the objective. We find that some methods automatically adapt their complexity to the degree of smoothness of the objective, while others take advantage of the power of the regularization step to satisfy increasingly better bounds with the order of the models. This work is joint with Nick Gould (RAL) and Philippe Toint (Namur).
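A minimal one-dimensional sketch of the adaptive cubic regularization idea (a generic ARC-style scheme under illustrative assumptions, not the exact algorithm of the paper): each step minimizes the second-order Taylor model regularized by a cubic term, and the regularization weight sigma is adapted according to how well the model predicted the actual decrease.

```python
import numpy as np

def arc_1d(f, g, h, x, sigma=1.0, tol=1e-8, max_iter=100):
    """1D adaptive cubic regularization sketch: minimize the model
    m(s) = g(x) s + 0.5 h(x) s^2 + (sigma/3) |s|^3 at each step,
    accepting the step and shrinking sigma when the predicted decrease
    is realised, otherwise increasing sigma."""
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        hx = h(x)
        # Crude global minimization of the model over a grid; practical
        # methods solve the regularized subproblem with a Newton-like method.
        s_grid = np.linspace(-2.0, 2.0, 4001)
        m = gx*s_grid + 0.5*hx*s_grid**2 + (sigma/3.0)*np.abs(s_grid)**3
        idx = np.argmin(m)
        s, pred = s_grid[idx], -m[idx]   # m(0) = 0, so -min(m) >= 0
        actual = f(x) - f(x + s)
        if pred > 0 and actual >= 0.1 * pred:
            x, sigma = x + s, max(sigma / 2.0, 1e-3)  # successful step
        else:
            sigma *= 2.0                              # unsuccessful: regularize more
    return x

# Example: a nonconvex quartic, f(x) = x^4 - 3x^2 + x.
fun  = lambda x: x**4 - 3*x**2 + x
grad = lambda x: 4*x**3 - 6*x + 1
hess = lambda x: 12*x**2 - 6.0
xmin = arc_1d(fun, grad, hess, x=2.0)
```

Replacing the cubic power by a general power of the step size, and the quadratic model by a higher-order Taylor model, gives the family of methods whose complexity the talk analyses.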
