Data Science Seminar


Past events in this series
13 March 2020, 12:00
Speaker: Armin Eftekhari
Abstract

Linear networks provide valuable insight into the workings of neural networks in general. In this talk, we improve on the state of the art of Bah et al. (2019) by identifying conditions under which gradient flow successfully trains a linear network, despite the non-strict saddle points present in the optimization landscape. We also improve on the state of the art of Arora et al. (2018a) for the computational complexity of training linear networks, by establishing non-local linear convergence rates for gradient flow.
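
As a rough illustration of the setting only (not of the speaker's analysis or code), the sketch below simulates gradient flow on a three-layer linear network by taking small explicit gradient steps on a least-squares loss, i.e. a forward-Euler discretization of the continuous-time dynamics. The layer sizes, step size, iteration count, and random data are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes and data (assumptions, not taken from the paper)
    d_in, d_hidden, d_out, n = 5, 4, 3, 50
    X = rng.standard_normal((d_in, n))    # inputs, columns are samples
    Y = rng.standard_normal((d_out, n))   # targets

    # Three-layer linear network: f(x) = W3 @ W2 @ W1 @ x
    W1 = 0.1 * rng.standard_normal((d_hidden, d_in))
    W2 = 0.1 * rng.standard_normal((d_hidden, d_hidden))
    W3 = 0.1 * rng.standard_normal((d_out, d_hidden))

    eta = 1e-3  # small step size: forward-Euler approximation of gradient flow
    for _ in range(20_000):
        R = W3 @ W2 @ W1 @ X - Y            # residuals
        loss = 0.5 * np.sum(R ** 2) / n     # least-squares loss
        # Gradients of the loss with respect to each factor (chain rule)
        G1 = (W3 @ W2).T @ R @ X.T / n
        G2 = W3.T @ R @ (W1 @ X).T / n
        G3 = R @ (W2 @ W1 @ X).T / n
        W1 -= eta * G1
        W2 -= eta * G2
        W3 -= eta * G3

    print("final least-squares loss:", loss)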

Crucially, these new results are not in the lazy training regime, cautioned against by Chizat et al. (2019) and Yehudai & Shamir (2019). Our results require the network to have a layer with a single neuron; this setting corresponds to the popular spiked covariance model in statistics and subsumes the important case of networks with a scalar output. Extending these results to all linear networks remains an open problem.
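
The role of the width-one layer can be seen directly: if any layer of a linear network has a single neuron, the end-to-end linear map factors through a one-dimensional space and therefore has rank at most one, the rank-one structure shared with the spiked covariance model and with scalar-output networks. The short check below, with arbitrary example dimensions chosen only for illustration, verifies this rank collapse numerically.

    import numpy as np

    rng = np.random.default_rng(1)

    # Example dimensions (assumptions); the first layer has a single neuron
    W1 = rng.standard_normal((1, 6))   # width-one layer: R^6 -> R^1
    W2 = rng.standard_normal((4, 1))
    W3 = rng.standard_normal((3, 4))

    end_to_end = W3 @ W2 @ W1          # 3 x 6 end-to-end linear map
    print("rank of end-to-end map:", np.linalg.matrix_rank(end_to_end))  # prints 1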

References:
- Bah, B., Rauhut, H., Terstiege, U., and Westdickenberg, M. (2019). Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers.
- Arora, S., Cohen, N., Golowich, N., and Hu, W. (2018a). A convergence analysis of gradient descent for deep linear neural networks.
- Chizat, L., Oyallon, E., and Bach, F. (2019). On lazy training in differentiable programming.
- Yehudai, G. and Shamir, O. (2019). On the power and limitations of random features for understanding neural networks.