Date: Thu, 30 Jan 2025
Time: 12:00 - 12:30
Location: Lecture Room 5
Speaker: Sadok Jerad
Organisation: Mathematical Institute (University of Oxford)

An adaptive regularization algorithm for unconstrained nonconvex optimization is presented in which the objective function is never evaluated: only derivatives are used, and no prior knowledge of the Lipschitz constant is required. The algorithm belongs to the class of adaptive regularization methods, for which optimal worst-case complexity results are known in the standard framework where the objective function is evaluated. It is shown that these excellent complexity bounds are also valid for the new algorithm. Theoretical analyses of both the exact and stochastic cases are discussed, and new probabilistic conditions on tensor derivatives are proposed. Initial experiments on large-scale binary classification problems highlight the merits of the method.
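To give a flavour of the objective-function-free idea, the following is a minimal first-order sketch, not the algorithm presented in the talk: it minimizes a logistic loss on a synthetic binary classification problem using only gradients, with an AdaGrad-style adaptive regularization weight standing in for the method's regularization update. All problem sizes, the weight rule, and the stopping budget here are illustrative assumptions.

# Illustrative sketch only: a first-order objective-function-free (OFFO)
# adaptive method applied to logistic-regression binary classification.
# The regularization weight sigma grows with the accumulated gradient
# norms (an AdaGrad-like rule), so no function values and no Lipschitz
# constant are ever needed. The talk's algorithm may use a different rule.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 50
X = rng.standard_normal((n, d))
y = (X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n) > 0).astype(float)

def grad(w):
    # Gradient of the logistic loss; the loss value itself is never computed.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / n

w = np.zeros(d)
sq_norms = 0.0
for k in range(500):
    g = grad(w)
    sq_norms += np.dot(g, g)
    sigma = np.sqrt(1e-8 + sq_norms)   # adaptive regularization weight
    w -= g / sigma                     # step uses derivatives only

print("final gradient norm:", np.linalg.norm(grad(w)))

Because the weight sigma only ever uses observed gradient norms, the iteration is well defined without any line search or function-value comparison, which is the defining feature of the objective-function-free setting described above.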
