Shape-morphing structures based on perforated kirigami
Y. Zhang, J. Yang, M. Liu, D. Vella, Extreme Mechanics Letters, 101857 (1 Aug 2022)
Thu, 27 Oct 2022

14:00 - 15:00
Zoom

Domain decomposition training strategies for physics-informed neural networks [talk hosted by Rutherford Appleton Lab]

Victorita Dolean
(University of Strathclyde)
Abstract

Physics-informed neural networks (PINNs) [2] are a method for solving boundary value problems based on partial differential equations (PDEs). The key idea of PINNs is to incorporate the residual of the PDE, as well as the boundary conditions, into the loss function of the neural network. This provides a simple, mesh-free approach to solving PDE-based problems. However, a key limitation of PINNs is their lack of accuracy and efficiency when solving problems with larger domains and more complex, multi-scale solutions.
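
As a toy illustration of this loss construction (not code from [2]), the following sketch replaces the neural network by a quadratic surrogate $u(x) = ax^2 + bx + c$ for the model problem $u'' = 2$ on $(0,1)$ with $u(0) = 0$, $u(1) = 1$; all names and parameters are illustrative:

```python
import numpy as np

def pinn_style_loss(params, xs):
    """PINN-style loss for u''(x) = 2 on (0, 1) with u(0) = 0, u(1) = 1,
    using a quadratic surrogate u(x) = a*x**2 + b*x + c in place of a
    neural network (illustrative only)."""
    a, b, c = params
    u = lambda x: a * x**2 + b * x + c
    residual = 2 * a - 2.0                       # u''(x) - f(x), constant here
    interior = np.mean(np.full_like(xs, residual) ** 2)
    boundary = (u(0.0) - 0.0) ** 2 + (u(1.0) - 1.0) ** 2
    return interior + boundary

xs = np.linspace(0.0, 1.0, 50)                   # collocation points
print(pinn_style_loss((1.0, 0.0, 0.0), xs))      # exact solution u = x**2: loss 0.0
print(pinn_style_loss((0.0, 1.0, 0.0), xs))      # u = x: PDE residual penalized
```

A real PINN replaces the surrogate by a network and computes $u''$ by automatic differentiation, but the structure of the loss (mean-squared PDE residual plus boundary penalties) is the same.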


A more recent approach, finite basis physics-informed neural networks (FBPINNs) [1], uses ideas from domain decomposition to accelerate the training of PINNs and improve their accuracy in this setting. In this talk, we show how Schwarz-like additive, multiplicative, and hybrid iterative methods for training FBPINNs can be developed. Furthermore, we present numerical experiments on the influence of these different variants on convergence and accuracy.
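
FBPINNs blend subdomain networks through smooth window functions that form a partition of unity over overlapping subdomains. A minimal numpy sketch of such windows in 1D (illustrative parameters, not the authors' code):

```python
import numpy as np

def window_functions(x, centers, width):
    """Smooth, compactly supported windows over overlapping 1D subdomains,
    normalised so that they sum to one (a partition of unity)."""
    d = x[:, None] - centers[None, :]
    raw = np.where(np.abs(d) < width / 2,
                   np.cos(np.pi * d / width) ** 2, 0.0)
    return raw / raw.sum(axis=1, keepdims=True)   # subdomains must overlap

x = np.linspace(0.0, 1.0, 101)
centers = np.linspace(0.0, 1.0, 5)                # 5 overlapping subdomains
w = window_functions(x, centers, width=0.5)
print(np.allclose(w.sum(axis=1), 1.0))            # True: a partition of unity
```

In an FBPINN, the global solution is represented as the window-weighted sum of the subdomain networks, which is what makes Schwarz-like subdomain-wise training iterations possible.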

This is joint work with Alexander Heinlein (Delft) and Benjamin Moseley (Oxford).


References
[1] B. Moseley, A. Markham, and T. Nissen-Meyer. Finite basis physics-informed neural networks (FBPINNs): a scalable domain decomposition approach for solving differential equations. arXiv:2107.07871, 2021.
[2] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.

Random cliques in random graphs and sharp thresholds for F-factors
O. Riordan, Random Structures and Algorithms (22 Jul 2022)
Tue, 22 Nov 2022

14:00 - 14:30
L3

Regularization by inexact Krylov methods with applications to blind deblurring

Malena Sabate Landman
(Cambridge)
Abstract

In this talk I will present a new class of algorithms for separable nonlinear inverse problems based on inexact Krylov methods. In particular, I will focus on semi-blind deblurring applications. In this setting, inexactness stems from the uncertainty in the parameters defining the blur, which are updated throughout the iterations. After giving a brief overview of the theoretical properties of these methods, as well as of strategies to monitor the amount of inexactness that can be tolerated, I will illustrate the performance of the algorithms through numerical examples. This is joint work with Silvia Gazzola (University of Bath).
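
The talk's inexact Krylov methods are not reproduced here, but the underlying deblurring setting can be sketched with a plain Tikhonov-regularized solve of a 1D Gaussian blur model (all parameters illustrative; in blind or semi-blind deblurring the blur parameters below would themselves be unknown):

```python
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n)
u_true = ((x > 0.3) & (x < 0.7)).astype(float)       # box signal

s = 0.03                                             # blur width (assumed known here)
A = np.exp(-(x[:, None] - x[None, :])**2 / (2 * s**2))
A /= A.sum(axis=1, keepdims=True)                    # row-normalised Gaussian blur

rng = np.random.default_rng(0)
b = A @ u_true + 1e-3 * rng.standard_normal(n)       # blurred, noisy data

lam = 1e-2                                           # Tikhonov parameter
u_rec = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
print(np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true))
```

Krylov methods such as LSQR solve the same regularized problem iteratively without forming the normal equations; inexactness enters when the operator A changes between iterations as the blur parameters are re-estimated.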

Tue, 25 Oct 2022

14:30 - 15:00
L3

Some recent developments in high order finite element methods for incompressible flow

Charles Parker
(Mathematical Institute University of Oxford)
Abstract
Over the past 30-40 years, high order finite element methods (FEMs) have gained significant traction. While much of the theory and implementation of high order continuous FEMs is well-understood, FEMs with increased smoothness are less popular in the literature and in practice. Nevertheless, engineering problems involving plates and shells give rise to fourth order elliptic equations, whose conforming approximations typically employ the Argyris elements, which are globally C1 and C2 at element vertices. The Argyris elements and their high order counterparts can then be used to construct the mass-conserving Falk-Neilan elements for incompressible flow problems. In particular, the Falk-Neilan elements inherit a level of extra smoothness at element vertices. In this talk, we will give a brief overview of some recent developments concerning the uniform hp-stability and preconditioning of the Falk-Neilan elements.
Tue, 08 Nov 2022

14:00 - 14:30
L3

Computing functions of matrices via composite rational functions

Yuji Nakatsukasa
(University of Oxford)
Abstract

Most algorithms for computing a matrix function $f(A)$ are based on finding a rational (or polynomial) approximant $r(A) \approx f(A)$ to the scalar function on the spectrum of $A$. These functions are often in a composite form, that is, $f(z) \approx r(z) = r_k(\cdots r_2(r_1(z)))$ (where $k$ is the number of compositions, which is often the iteration count, and proportional to the computational cost); this way $r$ is a rational function whose degree grows exponentially in $k$. I will review algorithms that fall into this category and highlight the remarkable power of composite (rational) functions.
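
A classical example of this structure is the Newton iteration for the matrix sign function: each step applies the degree-2 rational map $r(z) = (z + 1/z)/2$, so after $k$ steps the overall approximant is a composite rational function of degree roughly $2^k$. A short numpy sketch (illustrative parameters):

```python
import numpy as np

def matrix_sign(A, steps=20):
    """Newton iteration X <- (X + X^{-1})/2 for sign(A). Each step composes
    the rational map r(z) = (z + 1/z)/2, so the overall approximant after k
    steps is a rational function of degree ~2^k."""
    X = A.astype(float).copy()
    for _ in range(steps):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ np.diag([-3.0, -1.5, 0.5, 2.0, 4.0]) @ Q.T   # symmetric, eigenvalues off 0
S = matrix_sign(A)
print(np.linalg.norm(S @ S - np.eye(5)))              # ~0, since sign(A)^2 = I
```

The iteration converges quadratically for matrices with no eigenvalues on the imaginary axis, which is exactly why a modest iteration count $k$ (and cost) buys an exponentially high-degree rational approximant.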

Tue, 08 Nov 2022

14:30 - 15:00
L3

Rational approximation of functions with branch point singularities

Astrid Herremans
(KU Leuven)
Abstract

Rational functions are able to approximate functions containing branch point singularities with a root-exponential convergence rate. Such singularities appear, for example, in solutions of boundary value problems on domains containing corners or edges. Results of Newman from 1964 indicate that the poles of the optimal rational approximant are exponentially clustered near the branch point singularities. Trefethen and collaborators use this knowledge to linearize the approximation problem by fixing the poles in advance, giving rise to the lightning approximation. The resulting approximation set is, however, highly ill-conditioned, which raises the question of stability. We show that augmenting the approximation set with polynomial terms greatly improves stability. This observation leads to a decoupling of the approximation problem into two regimes, related to the singular and the smooth behaviour of the function. In addition, adding polynomial terms to the approximation set can result in a significant increase in convergence speed. The convergence rate is, however, very sensitive to the speed at which the clustered poles approach the singularity.
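
A rough numpy sketch of this construction (with illustrative, untuned clustering parameters) fits $\sqrt{x}$ on $[0,1]$, which has a branch point at $x = 0$, by least squares over exponentially clustered poles augmented with polynomial terms:

```python
import numpy as np

n_poles, n_poly = 30, 10
j = np.arange(1, n_poles + 1)
poles = -np.exp(-4.0 * (np.sqrt(n_poles) - np.sqrt(j)))   # clustered at x = 0

# sample points, also clustered towards the singularity
x = np.concatenate([np.linspace(0.0, 1.0, 300),
                    np.exp(np.linspace(-14.0, 0.0, 300))])
A = np.hstack([1.0 / (x[:, None] - poles[None, :]),        # partial fractions
               x[:, None] ** np.arange(n_poly)[None, :]])  # polynomial terms
A /= np.linalg.norm(A, axis=0)                             # column scaling
c, *_ = np.linalg.lstsq(A, np.sqrt(x), rcond=None)
err = np.max(np.abs(A @ c - np.sqrt(x)))
print(err)                                                 # small sampled max error
```

The partial-fraction columns handle the singular behaviour near the branch point while the polynomial columns capture the smooth part, which is the decoupling described above; the basis is nevertheless very ill-conditioned, so a backward-stable least-squares solver is essential.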

Tue, 11 Oct 2022

14:30 - 15:00
L3

Fooled by optimality

Nick Trefethen
(University of Oxford)
Abstract

An occupational hazard of mathematicians is the investigation of objects that are "optimal" in a mathematically precise sense, yet may be far from optimal in practice. This talk will discuss an extreme example of this effect: Gauss-Hermite quadrature on the real line. For large numbers of quadrature points, Gauss-Hermite quadrature is a very poor method of integration, much less efficient than simply truncating the interval and applying Gauss-Legendre quadrature or the periodic trapezoidal rule. We will present a theorem quantifying this difference and explain where the standard notion of optimality has failed.
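
To fix ideas (this is a simple illustration, not the theorem from the talk), one can compute $\int_{-\infty}^{\infty} e^{-x^2}\cos x \, dx = \sqrt{\pi}\, e^{-1/4}$ both with Gauss-Hermite quadrature and with the trapezoidal rule on a truncated interval; the point of the talk is that for large node counts the latter, far simpler approach is much more efficient:

```python
import numpy as np

exact = np.sqrt(np.pi) * np.exp(-0.25)   # integral of e^{-x^2} cos(x) over R

# Gauss-Hermite: nodes and weights for the weight function e^{-x^2}
xg, wg = np.polynomial.hermite.hermgauss(40)
gh = np.sum(wg * np.cos(xg))

# truncate to [-6, 6] (the integrand is below 1e-15 outside) and apply
# the plain trapezoidal rule
xt = np.linspace(-6.0, 6.0, 81)
ft = np.exp(-xt**2) * np.cos(xt)
h = xt[1] - xt[0]
tr = h * (0.5 * ft[0] + ft[1:-1].sum() + 0.5 * ft[-1])

print(abs(gh - exact), abs(tr - exact))  # both errors are tiny
```

Both rules reach machine precision on this smooth, rapidly decaying integrand; the talk's theorem quantifies how, as the number of points grows, the "optimal" Gauss-Hermite rule falls behind such truncated rules in accuracy per point.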
