Partial differential equations (PDEs) are among the most universal tools used in modelling problems in nature and man-made complex systems. For example, stochastic PDEs are a fundamental ingredient in models for nonlinear filtering problems in chemical engineering and weather forecasting, deterministic Schrödinger PDEs describe the wave function in a quantum physical system, deterministic Hamilton-Jacobi-Bellman PDEs are employed in operations research to describe optimal control problems where companies aim to minimise their costs, and deterministic Black-Scholes-type PDEs are widely employed in portfolio optimization models as well as in state-of-the-art pricing and hedging models for financial derivatives. The PDEs appearing in such models are often high-dimensional, as the number of dimensions, roughly speaking, corresponds to the number of all involved interacting substances, particles, resources, agents, or assets in the model. For instance, in the case of the above-mentioned financial engineering models the dimensionality of the PDE often corresponds to the number of financial assets in the involved hedging portfolio. Such PDEs can typically not be solved explicitly, and it is one of the most challenging tasks in applied mathematics to develop approximation algorithms which are able to approximately compute solutions of high-dimensional PDEs. Nearly all approximation algorithms for PDEs in the literature suffer from the so-called "curse of dimensionality", in the sense that the number of computational operations the algorithm requires to achieve a given approximation accuracy grows exponentially in the dimension of the considered PDE. With such algorithms it is impossible to approximately compute solutions of high-dimensional PDEs even when the fastest currently available computers are used. In the case of linear parabolic PDEs and approximations at a fixed space-time point, the curse of dimensionality can be overcome by means of Monte Carlo approximation algorithms and the Feynman-Kac formula. In this talk we introduce new nonlinear Monte Carlo algorithms for high-dimensional nonlinear PDEs. We prove that such algorithms do indeed overcome the curse of dimensionality in the case of a general class of semilinear parabolic PDEs, and we thereby prove, for the first time, that a general semilinear parabolic PDE with a nonlinearity depending on the PDE solution can be solved approximately without the curse of dimensionality.
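To make the linear case concrete, here is a minimal Monte Carlo sketch (an illustration, not code from the talk) of the Feynman-Kac approach for the $d$-dimensional heat equation $u_t = \Delta u$, $u(0,\cdot) = g$, whose solution at a single point is $u(t,x) = \mathbb{E}[g(x + \sqrt{2t}\,Z)]$ with $Z \sim N(0, I_d)$; the cost grows only linearly in $d$:

```python
import numpy as np

# Minimal Monte Carlo sketch: approximate u(t, x) = E[g(x + sqrt(2 t) Z)],
# Z ~ N(0, I_d), the solution of the d-dimensional heat equation
# u_t = Laplace(u), u(0, .) = g, at a single space-time point.
# The cost per point grows linearly in d: no curse of dimensionality.

def heat_mc(g, x, t, n_samples=10**5, seed=None):
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    z = rng.standard_normal((n_samples, d))   # samples of Z
    return g(x + np.sqrt(2.0 * t) * z).mean() # Monte Carlo average

# Example: g(x) = exp(-|x|^2) in d = 100 dimensions.
d = 100
g = lambda y: np.exp(-np.sum(y**2, axis=-1))
print(heat_mc(g, np.zeros(d), t=0.5, seed=0))
```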

# Past Computational Mathematics and Applications Seminar

The Landau-de Gennes Q-model of uniaxial nematic liquid crystals seeks a rank-one traceless tensor Q that minimizes a Frank-type energy plus a double-well potential that confines the eigenvalues of Q to lie between -1/2 and 1. We propose a finite element method (FEM) which preserves this basic structure and satisfies a discrete form of the fundamental energy estimates. We prove that the discrete problem Gamma-converges to the continuous one as the mesh size tends to zero, and propose a discrete gradient flow to compute discrete minimizers. Numerical experiments confirm the ability of the scheme to approximate configurations with half-integer defects, and to deal with colloidal and electric field effects. This work, joint with J.P. Borthagaray and S. Walker, builds on our previous work for the Ericksen model, which we review briefly.

As parallel computers approach exascale (10^18 floating-point operations per second), processor failure and data corruption are of increasing concern. Numerical linear algebra solvers are at the heart of many scientific and engineering applications, and with increasing failure rates they may fail to compute a solution or produce an incorrect one. It is therefore crucial to develop novel parallel linear algebra solvers capable of providing correct solutions on unreliable computing systems. The common way to mitigate failures in high-performance computing systems consists of periodically saving data onto a reliable storage device such as a remote disk. But given the increasing failure rate and the ever-growing volume of data involved in numerical simulations, these state-of-the-art fault-tolerant strategies are becoming too time-consuming, and therefore unsuitable, for large-scale simulations. In this talk, we will present a novel class of fault-tolerant algorithms that do not require any additional resources. The key idea is to leverage knowledge of the numerical properties of the solvers involved in a simulation to regenerate data lost to system failures. We will also share the lessons learned and report on the numerical properties and the performance of the new resilience algorithms.
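As one instance of this idea (a hypothetical sketch, not the speaker's algorithm): in an iterative solve of $Ax = b$, if a failure wipes out the entries of the current iterate owned by one node, they can be regenerated from the surviving entries using the equations themselves, as in interpolation-restart strategies:

```python
import numpy as np

# Hypothetical sketch: regenerate lost entries x[lost] of an iterate x for
# A x = b from the surviving entries x[kept], by solving the local block system
#     A[lost, lost] x[lost] = b[lost] - A[lost, kept] x[kept].
# Names and details are illustrative, not taken from the talk.

def regenerate(A, b, x, lost):
    kept = np.setdiff1d(np.arange(len(b)), lost)
    rhs = b[lost] - A[np.ix_(lost, kept)] @ x[kept]
    x = x.copy()
    x[lost] = np.linalg.solve(A[np.ix_(lost, lost)], rhs)
    return x

# Toy demo: SPD system; a "failure" wipes three entries of a near-converged iterate.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
x_true = rng.standard_normal(20)
b = A @ x_true
x = x_true + 1e-6 * rng.standard_normal(20)    # iterate near convergence
x[[3, 7, 11]] = 0.0                            # simulated data loss
x = regenerate(A, b, x, np.array([3, 7, 11]))
print(np.linalg.norm(x - x_true))              # lost entries recovered
```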

One of the major steps in adaptive finite element methods (AFEM) is the adaptive selection of the next partition. The process is usually governed by a strategy based on carefully chosen local error indicators and aims at convergence results with optimal rates. One can formally relate the refinement of the partitions to growing an oriented graph or a tree. Each node of the tree/graph then corresponds to a cell of a partition, and the approximation of a function on adaptive partitions can be expressed through the local errors related to the cell, i.e., the node. The total approximation error is then calculated as the sum of the errors on the leaves (the terminal nodes) of the tree/graph, and the problem of finding an optimal error for a given budget of nodes is known as tree approximation. Establishing a near-best tree approximation result is a key ingredient in proving optimal convergence rates for AFEM.
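The leaf-sum structure is easy to see in a toy greedy algorithm (an illustration of the setting, not the near-best algorithm of the talk): grow the tree by always refining the leaf with the largest local error, and measure the total error as the sum over the current leaves.

```python
import heapq

# Illustrative sketch: a partition is a tree whose leaves are the current
# cells; each leaf carries a local error. The total error is the sum over
# leaves; a greedy strategy refines the leaf with the largest error.

def greedy_tree(root_error, refine, budget):
    """Grow a tree to `budget` leaves; `refine(e)` gives the child errors
    produced by subdividing a cell with local error e."""
    leaves = [(-root_error, 0)]            # max-heap via negated errors
    next_id = 1
    while len(leaves) < budget:
        e, _ = heapq.heappop(leaves)       # leaf with largest local error
        for child_e in refine(-e):
            heapq.heappush(leaves, (-child_e, next_id))
            next_id += 1
    return sum(-e for e, _ in leaves)      # total error = sum over leaves

# Toy error model: subdividing a cell splits its error into two quarters.
print(greedy_tree(1.0, lambda e: (e / 4, e / 4), budget=64))
```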

The classical tree approximation problems are usually related to so-called h-adaptive approximation, in which the improvements are due to reducing the size of the cells in the partition. This talk will also consider an extension of this framework to hp-adaptive approximation, allowing different polynomial spaces to be used for the local approximations at different cells while maintaining near-optimality in terms of the combined number of degrees of freedom used in the approximation.

The problem of conformity of the resulting partition will be discussed as well. Typically in AFEM, certain elements of the current partition are marked and subdivided, together with some additional ones, to maintain desired properties of the partition such as conformity. This strategy is often described as “mark → subdivide → complete”. The process is very well understood for triangulations obtained via the newest vertex bisection procedure. In particular, it is proven that the number of elements in the final partition is bounded by a constant times the number of marked cells (see the bound sketched below). This hints at the possibility of designing a marking procedure that is limited to cells of the partition whose subdivision results in a conforming partition, so that no completion step would be necessary. This talk will present such a strategy together with theoretical results about its near-optimal performance.
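For reference, the completion bound alluded to above is (in notation of my own choosing) the classical Binev-Dahmen-DeVore estimate for newest vertex bisection:

$$ \#\mathcal{T}_k - \#\mathcal{T}_0 \;\le\; C \sum_{j=0}^{k-1} \#\mathcal{M}_j, $$

where $\mathcal{T}_k$ is the conforming triangulation obtained after $k$ rounds of marking sets $\mathcal{M}_0,\dots,\mathcal{M}_{k-1}$, subdivision, and completion, and $C$ depends only on the initial triangulation $\mathcal{T}_0$, not on $k$ or on the marking strategy.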

In this talk, I am going to give an introduction to operator preconditioning as a general and robust strategy to precondition linear systems arising from the Galerkin discretization of PDEs or boundary integral equations. Then, in order to illustrate the applicability of this preconditioning technique, I will discuss the simple case of weakly singular and hypersingular integral equations, arising from exterior Dirichlet and Neumann BVPs for the Laplacian in 3D. Finally, I will show how we can also tackle operators with a more difficult structure, like the electric field integral equation (EFIE) on screens, which models the scattering of time-harmonic electromagnetic waves at bounded, infinitely thin, perfectly conducting objects, like patch antennas in 3D.
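To fix ideas, here is a schematic statement in my own notation, following the standard operator preconditioning framework: if $\mathsf{A}$ is the Galerkin matrix of a continuous bilinear form $a$ on $V \times V$, $\mathsf{B}$ that of a form $b$ on $W \times W$, and $\mathsf{D}$ that of a duality pairing $d$ on $V \times W$, all satisfying inf-sup conditions with constants $\alpha$, $\beta$, $\delta$, then

$$ \kappa\big(\mathsf{D}^{-1}\mathsf{B}\,\mathsf{D}^{-T}\mathsf{A}\big) \;\le\; \frac{\lVert a\rVert\,\lVert b\rVert\,\lVert d\rVert^{2}}{\alpha\,\beta\,\delta^{2}}. $$

The bound involves only continuity and inf-sup constants, not the mesh size, which is what makes the strategy robust under refinement.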

We present a novel approach to the solution of time-dependent PDEs via the so-called monolithic or all-at-once formulation.

This approach will be explained for simple parabolic problems and its utility in the context of PDE constrained optimization problems will be elucidated.

The underlying linear algebra includes circulant matrix approximations of Toeplitz-structured matrices and allows for effective parallel implementation; a toy sketch of this idea follows below. Simple computational results will be shown for the heat equation and the wave equation, which indicate its potential as a parallel-in-time method.

This is joint work with Elle McDonald (CSIRO, Australia), Jennifer Pestana (Strathclyde University, UK) and Anthony Goddard (Durham University, UK).
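The circulant idea can be seen on a toy all-at-once system (my own illustration, not the speakers' code): backward Euler for $u' = -u$ over $n$ steps yields a lower bidiagonal Toeplitz system, and the circulant matrix obtained by wrapping the subdiagonal around can be inverted in $O(n \log n)$ with the FFT, giving a preconditioner whose application parallelises across the time steps.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy all-at-once system: backward Euler for u' = -u gives T u = b with
# T lower bidiagonal Toeplitz: diagonal (1 + tau), subdiagonal -1.
n, tau, u0 = 64, 0.1, 1.0
T = (1 + tau) * np.eye(n) - np.eye(n, k=-1)
b = np.zeros(n); b[0] = u0                      # right-hand side carries u(0)

# Circulant approximation C: same first column, subdiagonal wrapped around.
# Its eigenvalues are the FFT of the first column, so C^{-1} applies via FFTs.
c = np.zeros(n); c[0] = 1 + tau; c[1] = -1.0    # first column of C
eig = np.fft.fft(c)
M = LinearOperator((n, n),
                   matvec=lambda v: np.real(np.fft.ifft(np.fft.fft(v) / eig)))

u, info = gmres(T, b, M=M)                      # circulant-preconditioned GMRES
print(info, np.linalg.norm(T @ u - b))          # converges in very few iterations
```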


We approximate the solution of the stationary Stokes equations with various conforming and nonconforming inf-sup stable pairs of finite element spaces on simplicial meshes. Based on each pair, we design a discretization that is quasi-optimal and pressure-robust, in the sense that the velocity H^1-error is proportional to the best H^1-approximation error of the analytical velocity. This shows that such a property can be achieved without using conforming and divergence-free pairs. We also bound the pressure L^2-error, only in terms of the best approximation errors of the analytical velocity and the analytical pressure. Our construction can be summarized as follows. First, a linear operator acts on discrete velocity test functions, before the application of the load functional, and maps the discrete kernel into the analytical one.

Second, in order to enforce consistency, we employ where necessary a new augmented Lagrangian formulation, inspired by discontinuous Galerkin methods.
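Schematically, and in notation of my own choosing rather than the authors', the first step replaces the discrete load by a modified one:

$$ a(u_h, v_h) + b(v_h, p_h) \;=\; F(E\,v_h) \quad \text{for all discrete velocities } v_h, $$

where $E$ is the operator that maps discretely divergence-free test functions to exactly divergence-free fields. Gradient parts of the load then act only on the pressure, as in the continuous problem, which is precisely what pressure robustness requires.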

Small-block overlapping and non-overlapping Schwarz methods are theoretically highly attractive as multilevel smoothers for a wide variety of problems that are not amenable to point relaxation methods. Examples include monolithic Vanka smoothers for Stokes, overlapping vertex-patch decompositions for $H(\text{div})$ and $H(\text{curl})$ problems, nearly incompressible elasticity, and augmented Lagrangian schemes.

While it is possible to program each of these schemes by hand, their use in general-purpose libraries has been held back by a lack of generic, composable interfaces. We present a new approach to the specification and development of such additive Schwarz methods in PETSc that cleanly separates the topological space decomposition from the discretisation and assembly of the equations. Our preconditioner is flexible enough to support overlapping and non-overlapping additive Schwarz methods, and can be used to formulate line and plane smoothers, Vanka iterations, and others. I will illustrate these new features with examples using the Firedrake finite element library (an example configuration is sketched below), in particular showing how the design of an appropriate computational interface enables these schemes to be used as building blocks inside block preconditioners.

This is joint work with Patrick Farrell and Florian Wechsung (Oxford), and Matt Knepley (Buffalo).
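As a flavour of the interface, here is a hypothetical Firedrake solver configuration for a monolithic Vanka smoother inside a multigrid cycle; the option names follow the public PCPATCH/`firedrake.PatchPC` interface, but treat the specific values as illustrative rather than authoritative:

```python
# Hypothetical sketch: Stokes solved with FGMRES + multigrid, using a
# topologically constructed Vanka relaxation (one patch per mesh vertex,
# gathering the dofs on the cells around it) as the level smoother.
vanka = {
    "mat_type": "matfree",
    "ksp_type": "fgmres",
    "pc_type": "mg",
    "mg_levels": {
        "ksp_type": "chebyshev",
        "pc_type": "python",
        "pc_python_type": "firedrake.PatchPC",
        # Topological decomposition: patches built around dimension-0
        # mesh entities (vertices), Vanka-style.
        "patch_pc_patch_construct_type": "vanka",
        "patch_pc_patch_construct_dim": 0,
        # Each small patch problem is solved exactly by LU.
        "patch_sub_ksp_type": "preonly",
        "patch_sub_pc_type": "lu",
    },
}
# Usage in Firedrake (w, F, bcs as set up elsewhere):
# solve(F == 0, w, bcs=bcs, solver_parameters=vanka)
```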