Date
Tue, 20 Feb 2024
Time
14:30 - 15:00
Location
L6
Speaker
Corrado Coppola
Organisation
Sapienza University of Rome
The supervised training of a deep neural network on a given dataset amounts to the unconstrained minimization of a finite sum of continuously differentiable functions, the losses associated with the individual samples. These functions depend on the network parameters and are, in most cases, non-convex. We develop CMA Light, a new globally convergent mini-batch gradient method to tackle this problem. We build on the recently introduced Controlled Minibatch Algorithm (CMA) framework and overcome its main bottleneck, removing the need for at least one evaluation of the whole objective function per iteration. We prove global convergence of CMA Light under mild assumptions and discuss extensive computational results on the same experimental test bed used for CMA, showing that CMA Light requires less computational effort than most state-of-the-art optimizers. Finally, we present early results on a large-scale image classification task.
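For readers unfamiliar with the setting, the training problem described above can be written as the finite-sum minimization sketched below; the notation is illustrative and not taken from the talk abstract:

\[
\min_{w \in \mathbb{R}^n} \; f(w) = \sum_{i=1}^{N} f_i(w),
\]

where $w$ collects the network parameters and each $f_i$ is the continuously differentiable, generally non-convex, loss on sample $i$. A plain mini-batch gradient method would update $w^{k+1} = w^k - \eta_k \sum_{i \in B_k} \nabla f_i(w^k)$ over a sampled mini-batch $B_k \subset \{1,\dots,N\}$; broadly speaking, the CMA framework adds safeguarding rules on top of steps of this kind to obtain global convergence, and CMA Light does so without requiring full evaluations of $f$. The precise rules are given in the pre-print linked below.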
 
The reference pre-print is available on arXiv at https://arxiv.org/abs/2307.15775.