Domain decomposition training strategies for physics-informed neural networks [talk hosted by Rutherford Appleton Lab]
Abstract
Physics-informed neural networks (PINNs) [2] are a method for solving boundary value problems based on partial differential equations (PDEs). The key idea of PINNs is to incorporate the residual of the PDE, as well as the boundary conditions, into the loss function of the neural network. This yields a simple and mesh-free approach for solving PDE-based problems. However, a key limitation of PINNs is their lack of accuracy and efficiency when solving problems with larger domains and more complex, multi-scale solutions.
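As a minimal illustration of this loss construction (a toy sketch under assumptions of my own choosing; the equation, network size, and optimizer are not taken from the talk), consider the boundary value problem $u''(x) = -\sin(x)$ on $(0, \pi)$ with $u(0) = u(\pi) = 0$, whose exact solution is $u(x) = \sin(x)$:

```python
# Minimal PINN loss sketch in JAX (illustration only): solve u''(x) = -sin(x)
# on (0, pi) with u(0) = u(pi) = 0, whose exact solution is u(x) = sin(x).
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

def init_params(key, width=32):
    # One-hidden-layer tanh network R -> R^width -> R (sizes chosen arbitrarily).
    k1, k2 = jax.random.split(key)
    return {"W1": jax.random.normal(k1, (width, 1)),
            "b1": jnp.zeros(width),
            "W2": jax.random.normal(k2, (1, width)) / jnp.sqrt(width),
            "b2": jnp.zeros(1)}

def u_net(params, x):
    # Network evaluation for a scalar input x.
    h = jnp.tanh(params["W1"] @ jnp.array([x]) + params["b1"])
    return (params["W2"] @ h + params["b2"])[0]

def pde_residual(params, x):
    # Residual of u'' + sin(x) = 0; u'' comes from automatic differentiation.
    u_xx = jax.grad(jax.grad(u_net, argnums=1), argnums=1)(params, x)
    return u_xx + jnp.sin(x)

def loss(params, x_interior, x_boundary):
    # PINN loss: mean squared PDE residual plus mean squared boundary mismatch.
    r = jax.vmap(lambda x: pde_residual(params, x))(x_interior)
    ub = jax.vmap(lambda x: u_net(params, x))(x_boundary)
    return jnp.mean(r**2) + jnp.mean(ub**2)

params = init_params(jax.random.PRNGKey(0))
x_int = jnp.linspace(0.0, jnp.pi, 64)[1:-1]   # interior collocation points
x_bnd = jnp.array([0.0, jnp.pi])              # boundary collocation points

# Plain gradient descent for illustration; Adam or L-BFGS would be used in practice.
grad_loss = jax.jit(jax.grad(loss))
lr = 1e-3
for step in range(5000):
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g,
                                    params, grad_loss(params, x_int, x_bnd))

u_pred = jax.vmap(lambda x: u_net(params, x))(x_int)
print("final loss:", float(loss(params, x_int, x_bnd)))
print("max error vs sin(x):", float(jnp.max(jnp.abs(u_pred - jnp.sin(x_int)))))
```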
In a more recent approach, Finite Basis Physics-Informed Neural Networks (FBPINNs) [1], the authors use ideas from domain decomposition to accelerate the training of PINNs and to improve their accuracy in this setting. In this talk, we show how Schwarz-like additive, multiplicative, and hybrid iteration methods for training FBPINNs can be developed. Furthermore, we will present numerical experiments on the influence of these different variants on convergence and accuracy.
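The sketch below is purely schematic and not the algorithms analysed in the talk: it shows how a partition-of-unity ansatz built from overlapping subdomain networks, in the spirit of FBPINNs, can be trained with additive (Jacobi-like) or multiplicative (Gauss-Seidel-like) sweeps over the subdomains. The toy problem, window functions, and optimizer are my own assumptions.

```python
# Schematic additive vs. multiplicative Schwarz-style training of a
# partition-of-unity ansatz in the spirit of FBPINNs (toy setup only;
# problem, windows, and optimizer are assumptions, not the talk's algorithms).
# Toy problem: u'(x) = cos(x) on [0, 2*pi] with u(0) = 0, exact solution sin(x).
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

L = 2.0 * jnp.pi
centers = jnp.array([0.25 * L, 0.75 * L])   # two overlapping subdomains
width = 0.35 * L                            # controls the overlap

def windows(x):
    # Normalized Gaussian bumps: a smooth partition of unity over the domain.
    phi = jnp.exp(-((x - centers) / width) ** 2)
    return phi / jnp.sum(phi)

def init_subdomain(key, hidden=16):
    k1, k2 = jax.random.split(key)
    return {"W1": jax.random.normal(k1, (hidden, 1)), "b1": jnp.zeros(hidden),
            "W2": jax.random.normal(k2, (1, hidden)) / jnp.sqrt(hidden), "b2": jnp.zeros(1)}

def local_net(p, x):
    h = jnp.tanh(p["W1"] @ jnp.array([x]) + p["b1"])
    return (p["W2"] @ h + p["b2"])[0]

def u_global(params_list, x):
    # Global ansatz: window-weighted sum of the subdomain networks.
    w = windows(x)
    return sum(w[j] * local_net(params_list[j], x) for j in range(len(params_list)))

def loss(params_list, xs):
    # PDE residual u'(x) - cos(x) plus a soft boundary term at x = 0.
    du = jax.vmap(jax.grad(u_global, argnums=1), in_axes=(None, 0))(params_list, xs)
    return jnp.mean((du - jnp.cos(xs)) ** 2) + u_global(params_list, 0.0) ** 2

params = [init_subdomain(k) for k in jax.random.split(jax.random.PRNGKey(1), 2)]
xs = jnp.linspace(0.0, L, 128)
grad_all = jax.jit(jax.grad(loss))
lr = 1e-3

def update_subdomain(params_list, j, steps=50):
    # Gradient steps on the global loss, applied to subdomain j only
    # (for brevity, the full gradient is formed and only block j is used).
    for _ in range(steps):
        g = grad_all(params_list, xs)
        params_list = [jax.tree_util.tree_map(lambda p, gp: p - lr * gp, params_list[k], g[k])
                       if k == j else params_list[k] for k in range(len(params_list))]
    return params_list

for outer in range(20):
    # Additive (Jacobi-like) sweep: all subdomain updates start from the same state.
    params = [update_subdomain(params, j)[j] for j in range(2)]
    # A multiplicative (Gauss-Seidel-like) sweep would instead be:
    #     for j in range(2):
    #         params = update_subdomain(params, j)

print("loss after additive Schwarz-like sweeps:", float(loss(params, xs)))
```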
This is joint work with Alexander Heinlein (Delft) and Benjamin Moseley (Oxford).
References
[1] B. Moseley, A. Markham, and T. Nissen-Meyer. Finite basis physics-informed neural networks (FBPINNs): a scalable domain decomposition approach for solving differential equations. arXiv:2107.07871, 2021.
[2] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
Scalable Second-order Tensor-Based Methods for Unconstrained Non-convex Optimization
Regularization by inexact Krylov methods with applications to blind deblurring
Abstract
In this talk I will present a new class of algorithms for separable nonlinear inverse problems based on inexact Krylov methods. In particular, I will focus on semi-blind deblurring applications. In this setting, inexactness stems from the uncertainty in the parameters defining the blur, which are updated throughout the iterations. After giving a brief overview of the theoretical properties of these methods, as well as of strategies to monitor the amount of inexactness that can be tolerated, I will illustrate the performance of the algorithms through numerical examples. This is joint work with Silvia Gazzola (University of Bath).
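The following sketch illustrates only the separable structure of such problems, not the inexact Krylov methods of the talk: it alternates between a conjugate-gradient (Krylov) solve of a Tikhonov-regularized deblurring problem and a crude grid-search update of the uncertain blur width, so that the operator changes between outer iterations. The 1D test signal, regularization parameter, and update rule are all assumptions made for illustration.

```python
# Schematic semi-blind deblurring: alternate a Krylov (CG) solve for the signal
# with an update of the uncertain blur width (illustration only; not the
# inexact Krylov algorithms of the talk).
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

n = 200
t = jnp.linspace(0.0, 1.0, n)

def blur_matrix(s):
    # Dense Gaussian blurring operator with (unknown) width s, rows normalized.
    d = t[:, None] - t[None, :]
    A = jnp.exp(-0.5 * (d / s) ** 2)
    return A / jnp.sum(A, axis=1, keepdims=True)

# Ground truth: a piecewise-constant signal blurred with width 0.03, plus noise.
x_true = jnp.where((t > 0.2) & (t < 0.4), 1.0, 0.0) + jnp.where((t > 0.6) & (t < 0.7), 0.5, 0.0)
s_true = 0.03
b = blur_matrix(s_true) @ x_true + 1e-3 * jax.random.normal(jax.random.PRNGKey(0), (n,))

lam = 1e-4          # Tikhonov regularization parameter (arbitrary choice)
s = 0.05            # initial guess for the blur width
x = jnp.zeros(n)

for outer in range(5):
    A = blur_matrix(s)
    # Inner Krylov solve of the regularized normal equations, warm-started at x.
    normal_op = lambda v: A.T @ (A @ v) + lam * v
    x, _ = jax.scipy.sparse.linalg.cg(normal_op, A.T @ b, x0=x, maxiter=30)
    # Update the blur parameter by a coarse grid search on the data misfit.
    candidates = jnp.linspace(0.01, 0.08, 29)
    misfits = jnp.array([jnp.linalg.norm(blur_matrix(c) @ x - b) for c in candidates])
    s = candidates[jnp.argmin(misfits)]
    rel_err = jnp.linalg.norm(x - x_true) / jnp.linalg.norm(x_true)
    print(f"outer {outer}: blur width estimate {float(s):.4f}, "
          f"relative error in x {float(rel_err):.3f}")
```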
Some recent developments in high order finite element methods for incompressible flow
Abstract
Computing functions of matrices via composite rational functions
Abstract
Most algorithms for computing a matrix function $f(A)$ are based on finding a rational (or polynomial) approximant $r(A) \approx f(A)$ to the scalar function on the spectrum of $A$. Such approximants often take a composite form, that is, $f(z) \approx r(z) = r_k(\cdots r_2(r_1(z)))$, where $k$ is the number of compositions (often the iteration count) and is proportional to the computational cost; in this way, $r$ is a rational function whose degree grows exponentially in $k$. I will review algorithms that fall into this category and highlight the remarkable power of composite (rational) functions.
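A familiar example of this composite structure, included here only as an illustration: the Newton iteration for the matrix sign function applies the rational map $r_1(z) = (z + 1/z)/2$ at every step, so $k$ steps evaluate a composite rational approximation to $\mathrm{sign}(z)$ whose degree grows like $2^k$ while the cost grows only linearly in $k$. A minimal sketch (the test matrix and step count are arbitrary choices):

```python
# Composite rational approximation in action (my illustration): the Newton
# iteration for the matrix sign function applies r(z) = (z + 1/z)/2 at every
# step, so k steps evaluate a composite rational function of degree ~2^k
# at the cost of k matrix inversions.
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

n = 50
B = jax.random.normal(jax.random.PRNGKey(0), (n, n))
A = (B + B.T) / 2 + jnp.diag(jnp.linspace(-2.0, 2.0, n))  # symmetric, indefinite

# Reference: sign(A) from the eigendecomposition.
w, V = jnp.linalg.eigh(A)
sign_ref = V @ jnp.diag(jnp.sign(w)) @ V.T

X = A
for k in range(1, 11):
    X = 0.5 * (X + jnp.linalg.inv(X))     # one composition step r_1(X)
    err = jnp.linalg.norm(X - sign_ref) / jnp.linalg.norm(sign_ref)
    print(f"after {k} steps (rational degree ~2^{k}): relative error {float(err):.2e}")
```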
Rational approximation of functions with branch point singularities
Abstract
Rational functions are able to approximate functions containing branch point singularities with a root-exponential convergence rate. Such singularities appear, for example, in the solutions of boundary value problems on domains containing corners or edges. Results of Newman from 1964 indicate that the poles of the optimal rational approximant cluster exponentially near the branch point singularities. Trefethen and collaborators use this knowledge to linearize the approximation problem by fixing the poles in advance, giving rise to the lightning approximation. However, the resulting approximation set is highly ill-conditioned, which raises the question of stability. We show that augmenting the approximation set with polynomial terms greatly improves stability. This observation leads to a decoupling of the approximation problem into two regimes, related to the singular and the smooth behaviour of the function. In addition, adding polynomial terms to the approximation set can result in a significant increase in convergence speed. However, the convergence rate is very sensitive to the speed at which the clustered poles approach the singularity.
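The sketch below is a minimal numerical illustration of this clustering idea (a toy setup with arbitrarily chosen clustering parameters, not the experiments of the talk): it approximates $\sqrt{x}$ on $[0, 1]$, which has a branch point at $x = 0$, by least squares over a partial-fraction basis with exponentially clustered poles, with and without additional polynomial terms.

```python
# Minimal lightning-style approximation sketch (toy setup; clustering and degree
# parameters are arbitrary choices): approximate f(x) = sqrt(x) on [0, 1], which
# has a branch point at x = 0, by a partial-fraction basis with poles clustered
# exponentially at the singularity, optionally augmented by polynomial terms.
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

f = lambda x: jnp.sqrt(x)

# Exponentially clustered poles on the negative real axis, approaching 0.
n_poles, sigma = 40, 3.0
j = jnp.arange(1, n_poles + 1)
poles = -jnp.exp(-sigma * (jnp.sqrt(n_poles) - jnp.sqrt(j)))

# Sample points: clustered near the singularity plus uniform points.
x = jnp.concatenate([jnp.logspace(-14, 0, 400), jnp.linspace(0.0, 1.0, 400)])

def design_matrix(x, poles, poly_degree):
    cols = [1.0 / (x[:, None] - poles[None, :])]          # partial fractions
    if poly_degree >= 0:
        cols.append(x[:, None] ** jnp.arange(poly_degree + 1)[None, :])
    return jnp.hstack(cols)

for poly_degree in (-1, 10):                              # without / with polynomial part
    M = design_matrix(x, poles, poly_degree)
    coeffs, *_ = jnp.linalg.lstsq(M, f(x), rcond=None)
    xt = jnp.linspace(0.0, 1.0, 3000)
    err = jnp.max(jnp.abs(design_matrix(xt, poles, poly_degree) @ coeffs - f(xt)))
    print(f"poles = {n_poles}, polynomial degree = {poly_degree}: "
          f"max error on [0, 1] = {float(err):.2e}")
```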
Fooled by optimality
Abstract
An occupational hazard of mathematicians is the investigation of objects that are "optimal" in a mathematically precise sense, yet may be far from optimal in practice. This talk will discuss an extreme example of this effect: Gauss-Hermite quadrature on the real line. For large numbers of quadrature points, Gauss-Hermite quadrature is a very poor method of integration, much less efficient than simply truncating the interval and applying Gauss-Legendre quadrature or the periodic trapezoidal rule. We will present a theorem quantifying this difference and explain where the standard notion of optimality has failed.
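The following toy experiment is in the spirit of this comparison (my own setup, not the theorem of the talk): it integrates $e^{-x^2}/(1+x^2)$ over the real line, whose exact value is $\pi e\,\mathrm{erfc}(1)$, using $n$-point Gauss-Hermite quadrature and an $n$-point trapezoidal rule on a truncated interval, and prints the errors as $n$ grows.

```python
# Toy comparison in the spirit of the talk (my own setup, not its theorem):
# integrate g(x) = exp(-x^2) / (1 + x^2) over the real line, exact value
# pi * e * erfc(1), with n-point Gauss-Hermite quadrature versus an n-point
# trapezoidal rule on a truncated interval [-L, L].
import math
import numpy as np
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

exact = math.pi * math.e * math.erfc(1.0)

def gauss_hermite(n):
    # Gauss-Hermite approximates integrals of f(x) * exp(-x^2); here f = 1/(1+x^2).
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    return float(jnp.sum(jnp.asarray(weights) / (1.0 + jnp.asarray(nodes) ** 2)))

def truncated_trapezoid(n, L=6.0):
    # Plain trapezoidal rule for the full integrand on [-L, L]; the truncation
    # error is of the order exp(-L^2) and is negligible for L = 6.
    x = jnp.linspace(-L, L, n)
    g = jnp.exp(-x**2) / (1.0 + x**2)
    h = x[1] - x[0]
    return float(h * (jnp.sum(g) - 0.5 * (g[0] + g[-1])))

for n in (10, 20, 40, 80):
    print(f"n = {n:3d}:  Gauss-Hermite error = {abs(gauss_hermite(n) - exact):.2e},  "
          f"truncated trapezoid error = {abs(truncated_trapezoid(n) - exact):.2e}")
```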