Seminar series

Date: Mon, 12 Feb 2024
Time: 14:00–15:00
Location: Lecture Room 3
Speaker: Kfir Levy
Organisation: Technion – Israel Institute of Technology

The tremendous success of the Machine Learning paradigm relies heavily on the development of powerful optimization methods, and the canonical algorithm for training learning models is SGD (Stochastic Gradient Descent). Nevertheless, SGD behaves quite differently from Gradient Descent (GD), its noiseless counterpart. Concretely, SGD requires a careful choice of the learning rate, which depends on the properties of the noise as well as on the quality of the initialization.
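To give a sense of this dependence (a standard textbook illustration for smooth convex objectives, not a claim about the talk's setting): tuned SGD typically uses a step size on the order of

    \eta \;=\; \min\left\{ \tfrac{1}{L},\; \tfrac{D}{\sigma\sqrt{T}} \right\},

where L is the smoothness constant, D = \|x_0 - x^*\| measures the quality of the initialization, \sigma^2 bounds the gradient-noise variance, and T is the number of iterations, yielding a rate of O(LD^2/T + \sigma D/\sqrt{T}). Noiseless GD, in contrast, can simply run with the fixed step size \eta = 1/L.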

SGD further requires a test set to estimate the generalization error throughout its run. In this talk, we will present a new SGD variant that obtains the same optimal rates as SGD while using noiseless machinery as in GD. Concretely, it allows the use of the same fixed learning rate as GD and does not require a test/validation set. Curiously, our results rely on a novel gradient estimate that combines two recent mechanisms related to the notion of momentum.
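As a concrete illustration of a momentum-related gradient estimate (a minimal sketch only, assuming a STORM-style recursive correction on a toy least-squares problem; this is not claimed to be the estimator presented in the talk, and the names stoch_grad, alpha, and eta are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n, dim = 200, 5
    A = rng.normal(size=(n, dim))
    b = A @ rng.normal(size=dim) + 0.1 * rng.normal(size=n)

    def stoch_grad(x, i):
        # gradient of the i-th squared residual; an unbiased estimate of the full gradient
        return (A[i] @ x - b[i]) * A[i]

    eta, alpha, T = 0.05, 0.1, 500         # fixed, GD-like step size (illustrative values)
    x_prev = x = np.zeros(dim)
    d = stoch_grad(x, rng.integers(n))     # initialise with a plain stochastic gradient

    for _ in range(T):
        i = rng.integers(n)                # fresh sample, reused at both query points
        d = stoch_grad(x, i) + (1 - alpha) * (d - stoch_grad(x_prev, i))   # momentum-corrected estimate
        x_prev, x = x, x - eta * d         # SGD step driven by the corrected estimate

    print("final mean squared residual:", np.mean((A @ x - b) ** 2))

The point of the sketch is that the corrected estimate d tends to accumulate less noise than a plain stochastic gradient, which is the kind of behaviour that makes a fixed, GD-like learning rate plausible.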

Finally, time permitting, I will discuss several applications to which our method can be extended.
