15:30
Quantitative Convergence of Deep Neural Networks to Gaussian Processes
Abstract
In this seminar, we explore the quantitative convergence of wide deep neural networks with Gaussian weights to Gaussian processes, establishing novel rates for their Gaussian approximation. We show that the Wasserstein distance between the network output and its Gaussian counterpart scales inversely with the network width, with bounds that hold for any finite input set under suitable non-degeneracy conditions on the covariances. We then extend the analysis to the Bayesian setting by studying exact posteriors of neural networks endowed with Gaussian priors and regular likelihood functions, and we also present recent advances in the quantitative approximation of networks trained via gradient descent in the NTK regime. Based on joint works with A. Basteri, and with A. Agazzi and E. Mosig.
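As a schematic illustration of the type of bound discussed above (the notation and the constant are illustrative assumptions, not statements from the abstract): for a finite input set x^(1), ..., x^(k) and a network f whose hidden layers all have width at least n, a bound of the announced form reads

  W_2( (f(x^(1)), ..., f(x^(k))), N(0, K) )  <=  C / n,

where W_2 is the Wasserstein distance of order 2, K is the covariance of the limiting Gaussian process evaluated on the input set, and C is a constant depending on the depth, the activation function, and the inputs, but not on n.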