Seminar series
Date: Mon, 20 Nov 2023
Time: 14:00–15:00
Location: Lecture Room 6
Speaker: Prof. Elad Hazan
Organisation: Princeton University and Google DeepMind

How can we find and apply the best optimization algorithm for a given problem? This question is as old as mathematical optimization itself, and it is notoriously hard: even a special case such as finding the optimal learning rate for gradient descent is nonconvex in general.
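The nonconvexity claim is easy to verify numerically. The sketch below (my own illustration, not material from the talk) treats the loss reached by a few steps of gradient descent as a function of the learning rate, and checks the midpoint convexity inequality on a grid; the objective f(x) = x⁴ and the step count are arbitrary choices for the demonstration.

```python
# Illustration: the loss after k steps of gradient descent, viewed as a
# function of the learning rate eta, can be nonconvex even for f(x) = x**4.

def loss_after_gd(eta, steps=5, x0=1.0):
    """Run `steps` of gradient descent on f(x) = x**4 from x0; return final loss."""
    x = x0
    for _ in range(steps):
        x = x - eta * 4 * x ** 3   # gradient of x**4 is 4 * x**3
    return x ** 4

# Sample L(eta) on a grid over [0, 0.5] and search for a violation of the
# midpoint convexity inequality  L((a+b)/2) <= (L(a) + L(b)) / 2.
etas = [i * 0.005 for i in range(101)]
L = [loss_after_gd(e) for e in etas]
violation = any(
    L[i + d] > 0.5 * (L[i] + L[i + 2 * d]) + 1e-12
    for d in range(1, 50)
    for i in range(len(etas) - 2 * d)
)
print(violation)  # True: a convexity violation exists, so L(eta) is nonconvex
```

For this objective the landscape is bumpy: eta = 0.25 reaches the minimum in one step, while nearby rates overshoot and decay slowly, which is exactly why a convexity violation appears between them.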

In this talk we will discuss a dynamical systems approach to this question. We begin with an emerging paradigm in differentiable reinforcement learning called “online nonstochastic control”. This approach applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. We then show how the same methodology can yield global guarantees for learning the best algorithm in certain settings of stochastic and online optimization.
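The workhorse behind the online convex optimization techniques mentioned above is online gradient descent: play a point, observe a convex loss, step against its gradient, and project back onto the feasible set. The sketch below is my own minimal illustration under assumed quadratic losses f_t(w) = (w − z_t)² on [0, 1], not code from the talk; it checks the standard regret bound of 1.5·G·D·√T for step sizes η_t = D/(G√t).

```python
import math
import random

def online_gradient_descent(zs, D=1.0, G=2.0):
    """OGD with step sizes eta_t = D / (G * sqrt(t)); return cumulative loss."""
    w, total = 0.0, 0.0
    for t, z in enumerate(zs, start=1):
        total += (w - z) ** 2                    # suffer the loss f_t(w_t)
        grad = 2 * (w - z)                       # gradient of f_t at w_t
        w -= (D / (G * math.sqrt(t))) * grad     # gradient step
        w = min(max(w, 0.0), 1.0)                # project back onto [0, 1]
    return total

random.seed(0)
T = 2000
zs = [random.random() for _ in range(T)]

# Best fixed comparator in hindsight for these quadratics is the mean of z_t.
best = sum(zs) / T
best_loss = sum((best - z) ** 2 for z in zs)

regret = online_gradient_descent(zs) - best_loss
print(regret <= 3 * math.sqrt(T))  # True: within the 1.5 * G * D * sqrt(T) bound
```

Here D = 1 is the diameter of [0, 1] and G = 2 bounds the gradient magnitude, so the bound evaluates to 3√T; the sublinear regret (o(T)) is what makes the online-to-control reductions in this line of work possible.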

No background is required for this talk, but relevant material can be found in a recent text on online control and a paper on meta-optimization.



Last updated on 05 Oct 2023 14:57.