Date: Thu, 26 Feb 2026
Time: 16:00–17:00
Location: L5
Speaker: Lukas Gonon

In recent years, deep learning algorithms have been applied to numerous classical problems in mathematical finance. In particular, deep learning has been used to numerically solve high-dimensional derivatives pricing and hedging tasks. The theoretical foundations of deep learning for these tasks, however, are far less developed. In this talk, we start by revisiting deep hedging and introduce a recently developed adversarial training approach that makes it more robust. We then present our recent results on theoretical foundations for approximating option prices, solutions to jump-diffusion PDEs and optimal stopping problems using (random) neural networks, allowing us to obtain more explicit convergence guarantees. We address neural network expressivity, highlight challenges in analysing optimization errors and show the potential of random neural networks for mitigating these difficulties.
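To illustrate the kind of random neural network the abstract refers to, here is a minimal, hypothetical sketch (not the speaker's method): hidden-layer weights are sampled once and frozen, and only the linear readout is trained, which turns training into a convex least-squares problem and sidesteps the optimization-error difficulties mentioned above. All function names, the payoff, and the parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_random_features(x, y, width=300, scale=4.0):
    """Fit a one-hidden-layer network whose hidden weights are random and frozen.

    Only the output layer (beta) is trained, via linear least squares,
    so there is no non-convex optimization step.
    """
    W = rng.normal(scale=scale, size=(width, 1))   # frozen random input weights
    b = rng.uniform(-np.pi, np.pi, size=width)     # frozen random biases
    H = np.tanh(x @ W.T + b)                       # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # convex readout fit
    return W, b, beta

def predict(x, W, b, beta):
    return np.tanh(x @ W.T + b) @ beta

# Toy target: a call-style payoff max(x - K, 0) on [0, 2] with strike K = 1.
x = np.linspace(0.0, 2.0, 400).reshape(-1, 1)
y = np.maximum(x - 1.0, 0.0).ravel()
W, b, beta = fit_random_features(x, y)
err = np.max(np.abs(predict(x, W, b, beta) - y))
print(f"max abs error: {err:.4f}")
```

Because the hidden layer is never trained, the only error sources are the random-feature approximation and the least-squares fit, which is what makes explicit convergence guarantees more tractable for such networks.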

Last updated on 24 Jan 2026, 4:46pm.