Multifidelity Multilevel Monte Carlo to Accelerate Approximate Bayesian Parameter Inference for Partially Observed Stochastic Processes
Warne, D Prescott, T Baker, R Simpson, M Journal of Computational Physics
Polynomial growth and asymptotic dimension
Papazoglou, P Israel Journal of Mathematics volume 255 issue 2 985-1000 (13 Mar 2023)
Multiscale methods for signal selection in single-cell data
Hoekzema, R Marsh, L Sumray, O Carroll, T Lu, X Byrne, H Harrington, H Entropy volume 24 issue 8 (13 Aug 2022)
There is nothing medically magical about machine learning
Bowman, C Journal of the Royal Society of Medicine volume 115 issue 9 332-332 (02 Nov 2022)
Analysis of cellular kinetic models suggests that physiologically based model parameters may be inherently, practically unidentifiable
Brown, L Coles, M McConnell, M Ratushny, A Gaffney, E Journal of pharmacokinetics and pharmacodynamics (06 Aug 2022)
Fixed and distributed gene expression time delays in reaction–diffusion systems
Sargood, A Gaffney, E Krause, A Bulletin of Mathematical Biology volume 84 issue 9 (07 Aug 2022)
Shape-morphing structures based on perforated kirigami
Zhang, Y Yang, J Liu, M Vella, D Extreme Mechanics Letters 101857 (01 Aug 2022)
Thu, 27 Oct 2022

14:00 - 15:00
Zoom

Domain decomposition training strategies for physics-informed neural networks [talk hosted by Rutherford Appleton Lab]

Victorita Dolean
(University of Strathclyde)
Abstract

Physics-informed neural networks (PINNs) [2] are a solution method for boundary value problems based on partial differential equations (PDEs). The key idea of PINNs is to incorporate the residual of the PDE as well as the boundary conditions into the loss function of the neural network. This provides a simple and mesh-free approach for solving problems relating to PDEs. However, a key limitation of PINNs is their lack of accuracy and efficiency when solving problems with larger domains and more complex, multi-scale solutions.
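The structure of the PINN loss described above can be sketched on a toy 1D problem. This is a minimal illustration only: the candidate solution is a plain Python function rather than a neural network, and the second derivative is approximated by finite differences where a real PINN would use automatic differentiation; the problem, function names, and weighting are all chosen for the example.

```python
import numpy as np

def pinn_style_loss(u, xs, f, bcs, h=1e-3):
    # PDE residual term: mean squared residual of u''(x) - f(x) at
    # interior collocation points, with u'' approximated by central
    # finite differences (a PINN would use autodiff here instead).
    upp = (u(xs + h) - 2.0 * u(xs) + u(xs - h)) / h**2
    residual = np.mean((upp - f(xs)) ** 2)
    # Boundary term: squared mismatch at the boundary points.
    boundary = sum((u(x) - val) ** 2 for x, val in bcs)
    return residual + boundary

# Toy problem: u'' = 2 on (0, 1), u(0) = 0, u(1) = 1; exact solution u(x) = x**2.
xs = np.linspace(0.05, 0.95, 19)          # interior collocation points
f = lambda x: 2.0 * np.ones_like(x)
bcs = [(0.0, 0.0), (1.0, 1.0)]

exact = lambda x: x ** 2
wrong = lambda x: x ** 3
print(pinn_style_loss(exact, xs, f, bcs))  # near zero for the exact solution
print(pinn_style_loss(wrong, xs, f, bcs))  # clearly positive
```

Training a PINN amounts to minimising such a composite loss over the parameters of the network that represents u.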


In a more recent approach, Finite Basis Physics-Informed Neural Networks (FBPINNs) [1], the authors use ideas from domain decomposition to accelerate the learning process of PINNs and improve their accuracy in this setting. In this talk, we show how Schwarz-like additive, multiplicative, and hybrid iteration methods for training FBPINNs can be developed. Furthermore, we will present numerical experiments on the influence of these different variants on convergence and accuracy.
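The Schwarz-like iterations mentioned above originate in classical overlapping domain decomposition. As background, a minimal sketch of the classical alternating (multiplicative) Schwarz method for a 1D Poisson problem, solved with finite differences rather than neural networks; the grid, subdomain split, and iteration count are illustrative choices, not taken from the talk.

```python
import numpy as np

def local_solve(f_vals, h, left, right):
    # Solve u'' = f on a subinterval with Dirichlet data (left, right),
    # using the standard second-order finite-difference stencil.
    n = len(f_vals)                       # number of interior points
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f_vals.copy()
    rhs[0] -= left / h**2
    rhs[-1] -= right / h**2
    return np.linalg.solve(A, rhs)

# Global problem: u'' = 2 on (0, 1), u(0) = 0, u(1) = 1 (exact: u = x**2).
N = 50
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = 2.0 * np.ones_like(x)
u = np.zeros(N + 1)
u[-1] = 1.0

# Two overlapping subdomains, given by grid-index ranges.
subdomains = [(0, 30), (20, 50)]

for _ in range(30):  # alternating Schwarz sweeps
    for a, b in subdomains:
        # Solve locally, taking artificial boundary values from the
        # current global iterate at the subdomain endpoints.
        u[a + 1:b] = local_solve(f[a + 1:b], h, u[a], u[b])

print(np.max(np.abs(u - x**2)))  # converges to the global solution
```

In the additive variant the subdomain solves use the previous global iterate and can run in parallel; FBPINNs replace the local solves with local network training, which is the setting the talk explores.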

This is joint work with Alexander Heinlein (Delft) and Benjamin Moseley (Oxford).


References 
[1]  B. Moseley, A. Markham, and T. Nissen-Meyer. Finite basis physics-informed neural networks (FBPINNs): a scalable domain decomposition approach for solving differential equations. arXiv:2107.07871, 2021.
[2]  M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
