Date
Mon, 19 Jan 2026
Time
14:00 - 15:00
Location
Lecture Room 3
Speaker
Professor Olivier Bokanowski
Organisation
Université Paris Cité

In this talk, we are interested in neural network approximations of Hamilton–Jacobi–Bellman (HJB) equations. These are nonlinear PDEs whose solutions should be understood in the viscosity sense; they also correspond to value functions of deterministic or stochastic optimal control problems. For these equations, it is well known that solving the PDE almost everywhere may lead to wrong solutions.

We present a new method for approximating these PDEs using neural networks. We closely follow previous work by C. Esteve-Yagüe, R. Tsai and A. Massucco (2025), while extending the versatility of the approach.

We will first show existence and uniqueness for a general abstract monotone scheme (which can be chosen consistent with the PDE), including implicit schemes. Then, rather than directly approximating the PDE -- as is done in methods such as PINNs (Physics-Informed Neural Networks) or the DGM (Deep Galerkin Method) -- we incorporate the monotone numerical scheme into the definition of the loss function.
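As a toy illustration of this idea (not taken from the talk; the setup and function names below are our own assumptions), consider the 1D eikonal equation |u'(x)| = 1 on (0,1) with u(0) = u(1) = 0, discretized by the monotone Godunov scheme. Building the loss from the scheme residual, rather than from the PDE pointwise, makes the loss vanish at the viscosity solution min(x, 1-x) while rejecting a function that satisfies the PDE almost everywhere but not in the viscosity sense:

```python
import numpy as np

def godunov_residual(u, h):
    # Godunov monotone scheme for the eikonal equation |u'(x)| = 1:
    # max(D^- u, -D^+ u, 0) - 1 = 0 at each interior grid node.
    dm = (u[1:-1] - u[:-2]) / h   # backward differences D^- u
    dp = (u[2:] - u[1:-1]) / h    # forward differences  D^+ u
    return np.maximum(np.maximum(dm, -dp), 0.0) - 1.0

def scheme_loss(u, h):
    # Least-squares loss built from the monotone scheme residual,
    # plus penalty terms enforcing the boundary conditions u(0) = u(1) = 0.
    return np.sum(godunov_residual(u, h) ** 2) + u[0] ** 2 + u[-1] ** 2

N = 100
x = np.linspace(0.0, 1.0, N + 1)
h = 1.0 / N

u_visc = np.minimum(x, 1.0 - x)   # the viscosity solution
u_fake = -u_visc                  # satisfies |u'| = 1 a.e., but is not the viscosity solution

print(scheme_loss(u_visc, h))  # ~0: the viscosity solution zeroes the scheme loss
print(scheme_loss(u_fake, h))  # ~1: the a.e. solution is penalized at the kink
```

In the method discussed in the talk, the grid values u would be replaced by a neural network evaluated at the grid nodes, and the loss would be minimized over the network parameters; this toy version only shows why a monotone scheme residual, unlike a pointwise PDE residual, distinguishes the viscosity solution.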

Finally, we show that the critical point of the loss function is unique and corresponds to the solution of the desired scheme. When coupled with neural networks, this strategy allows for a (more) rigorous convergence analysis and accommodates a broad class of schemes. Preliminary numerical results are presented to support our theoretical findings.

This is joint work with C. Esteve-Yagüe and R. Tsai.

Last updated on 17 Jan 2026, 9:25am.