Date
Mon, 09 Nov 2020
Time
16:00 - 17:00
Speaker
DIYORA SALIMOVA
Organisation
ETH Zurich


It is one of the most challenging issues in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs), and most of the numerical approximation methods for PDEs in the scientific literature suffer from the so-called curse of dimensionality (CoD) in the sense that the number of computational operations employed in the corresponding approximation scheme to achieve an approximation precision $\varepsilon > 0$ grows exponentially in the PDE dimension and/or the reciprocal of $\varepsilon$. Recently, certain deep learning based approximation methods for PDEs have been proposed, and various numerical simulations for such methods suggest that deep neural network (DNN) approximations might indeed have the capacity to overcome the CoD in the sense that the number of real parameters used to describe the approximating DNNs grows at most polynomially in both the PDE dimension $d \in \mathbb{N}$ and the reciprocal of the prescribed approximation accuracy $\varepsilon > 0$.

There are by now also a few rigorous mathematical results in the scientific literature which substantiate this conjecture by proving that DNNs overcome the CoD in approximating solutions of PDEs. Each of these results establishes that DNNs overcome the CoD in approximating suitable PDE solutions at a fixed time point $T > 0$ and on a compact cube $[a, b]^d$, but none of them answers the question of whether the entire PDE solution on $[0, T] \times [a, b]^d$ can be approximated by DNNs without the CoD.
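In schematic terms, writing $\mathcal{C}(d, \varepsilon)$ for the number of computational operations of a classical scheme and $\mathcal{P}(d, \varepsilon)$ for the number of real parameters of an approximating DNN (this notation and the constants $c, \kappa \in (0, \infty)$ are introduced here purely for illustration and are not objects defined in the talk), the CoD corresponds to exponential growth such as $\mathcal{C}(d, \varepsilon) \geq c\, 2^{c d}$ or $\mathcal{C}(d, \varepsilon) \geq c\, \varepsilon^{-c d}$, whereas overcoming the CoD corresponds to a polynomial bound of the form
$$
\mathcal{P}(d, \varepsilon) \leq \kappa\, d^{\kappa}\, \varepsilon^{-\kappa} \qquad \text{for all } d \in \mathbb{N},\ \varepsilon \in (0, 1].
$$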
In this talk we show that for every $a \in \mathbb{R}$ and $b \in (a, \infty)$, solutions of suitable Kolmogorov PDEs can be approximated by DNNs on the space-time region $[0, T] \times [a, b]^d$ without the CoD.
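Phrased informally, and with the $d$-dimensional PDE solution $u_d$, the realisation $\mathcal{R}(\Phi_{d, \varepsilon})$ of a DNN $\Phi_{d, \varepsilon}$, its parameter count $\mathcal{P}(\Phi_{d, \varepsilon})$, and the constants $c, \kappa \in (0, \infty)$ serving only as illustrative notation for the shape of such a statement (not the precise formulation of the talk), a result of this type asserts that for every $d \in \mathbb{N}$ and $\varepsilon \in (0, 1]$ there exists a DNN $\Phi_{d, \varepsilon}$ satisfying
$$
\sup_{(t, x) \in [0, T] \times [a, b]^d} \bigl| u_d(t, x) - \bigl(\mathcal{R}(\Phi_{d, \varepsilon})\bigr)(t, x) \bigr| \leq \varepsilon
\qquad\text{and}\qquad
\mathcal{P}(\Phi_{d, \varepsilon}) \leq c\, d^{\kappa}\, \varepsilon^{-\kappa}.
$$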

 
