Date: Thu, 11 Oct 2018
Time: 12:00 - 13:00
Location: L4
Speaker: Philipp Petersen
Organisation: University of Oxford

Novel machine learning techniques based on deep learning, i.e., the data-driven manipulation of neural networks, have achieved remarkable results in many areas such as image classification, game intelligence, and speech recognition. Driven by these successes, many scholars have started applying them in areas beyond traditional machine learning tasks. For instance, more and more researchers are employing neural networks to develop tools for the discretisation and solution of partial differential equations. Two driving forces can be identified behind the increased interest in neural networks in the area of the numerical analysis of PDEs. On the one hand, powerful approximation-theoretical results have been established which demonstrate that neural networks can represent functions from the most relevant function classes with a minimal number of parameters. On the other hand, highly efficient machine learning techniques for the training of these networks are now available and can be used as a black box. In this talk, we will give an overview of some approaches towards the numerical treatment of PDEs with neural networks and study the two aspects above. We will recall some classical and some novel approximation-theoretical results and tie these results to PDE discretisation. Afterwards, providing a counterpoint, we will analyse the structure of network spaces and deduce considerable problems for the black-box solver. In particular, we will identify a number of structural properties of the set of neural networks that render optimisation over this set especially challenging and sometimes impossible. The talk is based on joint work with Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Felix Voigtlaender, and Mones Raslan.
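The abstract does not prescribe a particular discretisation scheme, but the general idea of treating a PDE with a neural network can be made concrete with a small sketch: parametrise the trial solution by a network and minimise the PDE residual at sampled points. The PyTorch code below is purely illustrative and not taken from the talk; the model problem (a 1D Poisson equation), the architecture, and the optimiser settings are all assumptions for the sake of the example.

```python
# Minimal sketch: train a network u_theta to satisfy -u'' = pi^2 sin(pi x)
# on (0, 1) with u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small fully connected network representing the trial solution u_theta(x).
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def pde_residual(x):
    """Residual -u''(x) - pi^2 sin(pi x), via automatic differentiation."""
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi ** 2 * torch.sin(torch.pi * x)
    return -d2u - f

for step in range(5000):
    opt.zero_grad()
    x_int = torch.rand(128, 1)            # collocation points in (0, 1)
    x_bdy = torch.tensor([[0.0], [1.0]])  # boundary points
    # Mean squared PDE residual plus a penalty enforcing zero boundary values.
    loss = pde_residual(x_int).pow(2).mean() + model(x_bdy).pow(2).mean()
    loss.backward()
    opt.step()

# After training, model(x) approximates sin(pi x) on [0, 1].
```

Note that the loss is minimised over the network's parameter set, which is exactly the object studied in the second part of the talk: structural properties of this set, such as the non-convexity of the associated class of network functions, are what can make such optimisation problems ill-behaved.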
