Seminar series
Date
Mon, 31 Oct 2022
14:00
Location
L4
Speaker
Stephen Becker
Organisation
University of Colorado Boulder

Numerical optimization is an indispensable tool of modern data analysis, and there are many optimization problems where it is difficult or impossible to compute the full gradient of the objective function. The field of derivative-free optimization (DFO) addresses these cases by using only function evaluations, and has wide-ranging applications from hyper-parameter tuning in machine learning to PDE-constrained optimization.

We present two projects that aim to scale DFO techniques to higher dimensions. The first method converges slowly but works in very high dimensions, while the second converges quickly but does not scale quite as well with dimension. The first method is a family of algorithms called "stochastic subspace descent" that uses a few directional derivatives at every step (i.e., projections of the gradient onto a random subspace). In special cases it is related to Spall's SPSA, Nesterov's Gaussian smoothing, and block-coordinate descent. We provide convergence analysis and discuss Johnson-Lindenstrauss-style concentration. The second method uses conventional interpolation-based trust-region methods, which require large, ill-conditioned linear algebra operations. We use randomized linear algebra techniques to mitigate these issues and scale to larger dimensions; we also use a matrix-free approach that reduces memory requirements. These projects are in collaboration with David Kozak, Luis Tenorio, Alireza Doostan, Kevin Doherty and Katya Scheinberg.
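To make the random-subspace idea concrete, the sketch below shows one iteration scheme consistent with the description above: sample a few orthonormal directions, estimate the directional derivatives by forward differences, and step along the resulting projected gradient estimate. The parameter names, the forward-difference approximation, and the d/l rescaling are illustrative assumptions, not the speaker's implementation.

    import numpy as np

    def stochastic_subspace_descent(f, x0, step=1e-2, n_dirs=5,
                                    fd_eps=1e-6, n_iters=500, seed=0):
        """Sketch of random-subspace descent using finite-difference
        directional derivatives (illustrative, not the authors' code)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        d = x.size
        for _ in range(n_iters):
            # Random subspace: n_dirs orthonormal directions in R^d.
            Q, _ = np.linalg.qr(rng.standard_normal((d, n_dirs)))
            # Directional derivatives of f along each column of Q
            # (forward differences, one extra function evaluation each).
            fx = f(x)
            g = np.array([(f(x + fd_eps * Q[:, i]) - fx) / fd_eps
                          for i in range(n_dirs)])
            # Step along the projected gradient estimate; the d/n_dirs
            # factor makes the estimate unbiased in expectation.
            x -= step * (d / n_dirs) * (Q @ g)
        return x

    # Example: minimize a simple quadratic in R^100.
    x_min = stochastic_subspace_descent(lambda x: np.dot(x, x), np.ones(100))

Each iteration touches only n_dirs directional derivatives rather than a full d-dimensional gradient, which is what allows the approach to operate in very high dimensions at the cost of slower convergence.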
