Domain decomposition training strategies for physics-informed neural networks [talk hosted by Rutherford Appleton Lab]
Abstract
Physics-informed neural networks (PINNs) [2] are a method for solving boundary value problems based on partial differential equations (PDEs). The key idea of PINNs is to incorporate the residual of the PDE as well as the boundary conditions into the loss function of the neural network. This yields a simple, mesh-free approach for solving PDE-based problems. However, a key limitation of PINNs is their loss of accuracy and efficiency when solving problems with larger domains and more complex, multi-scale solutions.
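The loss construction described above can be sketched as follows. This is an illustrative toy only: a tiny fixed-weight network for a 1D Poisson problem, with the second derivative approximated by central finite differences in place of the automatic differentiation a real PINN would use; all names and parameter choices are for illustration.

```python
import numpy as np

# Toy 1D problem: u''(x) = -pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x).

def net(params, x):
    """Tiny one-hidden-layer tanh network u_theta(x)."""
    W1, b1, W2, b2 = params
    return np.tanh(np.outer(x, W1) + b1) @ W2 + b2

def pinn_loss(params, x_int, x_bc, h=1e-4):
    """PDE residual at interior collocation points plus boundary terms.
    (A real PINN would obtain u'' by automatic differentiation.)"""
    u = lambda x: net(params, x)
    u_xx = (u(x_int + h) - 2.0 * u(x_int) + u(x_int - h)) / h**2
    residual = u_xx + np.pi**2 * np.sin(np.pi * x_int)  # PDE residual
    bc = u(x_bc)                                        # u(0) = u(1) = 0
    return np.mean(residual**2) + np.mean(bc**2)

rng = np.random.default_rng(0)
params = [rng.normal(size=8), rng.normal(size=8), rng.normal(size=8), 0.0]
x_int = np.linspace(0.05, 0.95, 19)   # interior collocation points
x_bc = np.array([0.0, 1.0])           # boundary points
print(pinn_loss(params, x_int, x_bc))
```

Training then amounts to minimizing this loss over the network parameters with a gradient-based optimizer; the FBPINN idea is to replace the single network by a sum of localized networks, one per overlapping subdomain.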
In a more recent approach, Finite Basis Physics-Informed Neural Networks (FBPINNs) [1], the authors use ideas from domain decomposition to accelerate the learning process of PINNs and to improve their accuracy in this setting. In this talk, we show how Schwarz-like additive, multiplicative, and hybrid iteration methods for training FBPINNs can be developed. Furthermore, we present numerical experiments on the influence of these different variants on convergence and accuracy.
This is joint work with Alexander Heinlein (Delft) and Benjamin Moseley (Oxford).
References
[1] B. Moseley, A. Markham, and T. Nissen-Meyer. Finite basis physics-informed neural networks (FBPINNs): a scalable domain decomposition approach for solving differential equations. arXiv:2107.07871, 2021.
[2] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
Scalable Second-order Tensor Based Methods for Unconstrained Non-convex Optimization
Regularization by inexact Krylov methods with applications to blind deblurring
Abstract
In this talk I will present a new class of algorithms for separable nonlinear inverse problems based on inexact Krylov methods. In particular, I will focus on semi-blind deblurring applications. In this setting, inexactness stems from the uncertainty in the parameters defining the blur, which are updated throughout the iterations. After a brief overview of the theoretical properties of these methods, as well as of strategies to monitor the amount of inexactness that can be tolerated, the performance of the algorithms will be illustrated through numerical examples. This is joint work with Silvia Gazzola (University of Bath).
Some recent developments in high order finite element methods for incompressible flow
Abstract
Computing functions of matrices via composite rational functions
Abstract
Most algorithms for computing a matrix function f(A) are based on finding a rational (or polynomial) approximant r(A) ≈ f(A) to the scalar function on the spectrum of A. These functions are often in composite form, that is, f(z) ≈ r(z) = r_k(⋯ r_2(r_1(z)) ⋯), where k is the number of compositions, which is often the iteration count and is proportional to the computational cost; in this way, r is a rational function whose degree grows exponentially in k. I will review algorithms that fall into this category and highlight the remarkable power of composite (rational) functions.
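A classical instance of this idea is Newton's iteration for the matrix sign function: each step applies the scalar rational map r_1(z) = (z + 1/z)/2, so k steps compose to a rational function of degree 2^k while costing only k matrix inversions. A minimal sketch (the example matrix and iteration count are chosen for illustration):

```python
import numpy as np

# Newton's iteration for the matrix sign function,
#   X_{k+1} = (X_k + X_k^{-1}) / 2,  X_0 = A,
# composes the scalar map r_1(z) = (z + 1/z)/2 with itself k times,
# yielding a rational approximant of degree 2^k to sign(z).

def matrix_sign(A, iters=20):
    X = A.copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

# Eigenvalues 2 and -3 should be mapped to +1 and -1, respectively.
A = np.array([[2.0,  1.0],
              [0.0, -3.0]])
S = matrix_sign(A)
print(np.round(S, 6))
```

The quadratic convergence of the iteration reflects exactly the exponential degree growth of the composed rational function: a few compositions already reproduce sign(z) to machine precision away from the imaginary axis.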