Date
Tue, 31 Oct 2017
Time
14:00 - 14:30
Location
L5
Speaker
Matthew Geleta
Organisation
University of Cambridge


Poor algorithmic scalability is a critical problem in large-scale machine learning and data science, and it has driven a resurgence of first-order (Hessian-free) algorithms from classical optimisation. The major drawback of first-order methods is that they tend to converge extremely slowly; however, efficient techniques exist for accelerating them.
    
The topic of this talk is the Dual Regularisation Nonlinear Acceleration algorithm (DRNA) (Geleta, 2017) for nonconvex optimisation. Numerical studies on the CUTEst optimisation problem set show that the method accelerates several nonconvex optimisation algorithms, including the quasi-Newton BFGS and steepest descent methods, and that DRNA compares favourably with a number of existing accelerators.
    
DRNA extends to the nonconvex setting a recent acceleration algorithm due to Scieur et al. (Advances in Neural Information Processing Systems 29, 2016). We have proven theorems relating DRNA to the Krylov subspace method GMRES, as well as to Anderson's acceleration method and the family of multi-secant quasi-Newton methods.
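
For readers unfamiliar with this family of accelerators, the sketch below illustrates the basic regularised extrapolation step in the style of Scieur et al.: an affine combination of past iterates is chosen by solving a small regularised least-squares problem over the residuals. This is only an illustrative sketch of the underlying idea, not the DRNA algorithm presented in the talk; the function name, regularisation weight, and the quadratic test problem are assumptions made for the example.

```python
import numpy as np

def rna_extrapolate(iterates, lam=1e-8):
    """Regularised nonlinear acceleration in the style of Scieur et al. (2016).

    iterates : list of points x_0, ..., x_k produced by a base first-order method.
    lam      : Tikhonov regularisation weight on the combination coefficients.

    Returns an extrapolated point formed as an affine combination of the iterates.
    """
    X = np.column_stack(iterates)   # shape (d, k+1)
    R = np.diff(X, axis=1)          # residuals r_i = x_{i+1} - x_i, shape (d, k)
    k = R.shape[1]

    # Solve  min_c ||R c||^2 + lam ||c||^2  subject to  sum(c) = 1.
    # Closed form: c is proportional to (R^T R + lam I)^{-1} 1, rescaled to sum to one.
    M = R.T @ R + lam * np.eye(k)
    z = np.linalg.solve(M, np.ones(k))
    c = z / z.sum()

    # Extrapolated point: affine combination of the last k iterates.
    return X[:, 1:] @ c

# Example: accelerate plain steepest descent on a quadratic f(x) = 0.5 x^T A x.
A = np.diag(np.linspace(1.0, 100.0, 50))
x = np.ones(50)
history = [x.copy()]
for _ in range(10):
    x = x - 0.01 * (A @ x)          # steepest-descent step
    history.append(x.copy())
x_acc = rna_extrapolate(history)    # typically much closer to the minimiser than x
```
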
 
