Date
Thu, 23 Nov 2023
16:00
Location
Lecture Room 4, Mathematical Institute
Speaker
Dr Gholamali Aminian
Organisation
Alan Turing Institute

We propose a novel framework for exploring the weak and $L_2$ generalization errors of algorithms through the lens of differential calculus on the space of probability measures. Specifically, we consider the KL-regularized empirical risk minimization problem and establish generic conditions under which the generalization error convergence rate, when training on a sample of size $n$, is $\mathcal{O}(1/n)$. In the context of supervised learning with a one-hidden-layer neural network in the mean-field regime, these conditions are reflected in suitable integrability and regularity assumptions on the loss and activation functions.
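For concreteness, a minimal sketch of the objective in question, assuming the standard mean-field formulation (the prior $\pi$, regularization strength $\sigma$, and neuron map $\phi$ below are illustrative notation, not taken from the abstract): the KL-regularized empirical risk minimization problem is posed over probability measures $\mu$ on the parameter space,

$$ \min_{\mu \in \mathcal{P}(\mathbb{R}^d)} \; \frac{1}{n} \sum_{i=1}^{n} \ell\!\left( \int \phi(\theta, x_i)\, \mu(\mathrm{d}\theta),\; y_i \right) \;+\; \sigma\, \mathrm{KL}(\mu \,\|\, \pi), $$

where, for a one-hidden-layer network in the mean-field regime, one may take $\phi(\theta, x) = a\, s(\langle w, x \rangle)$ with parameters $\theta = (a, w)$ and activation function $s$; the stated $\mathcal{O}(1/n)$ rate then holds under the integrability and regularity assumptions on $\ell$ and $s$.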
