Small-block overlapping and non-overlapping Schwarz methods are theoretically highly attractive as multilevel smoothers for a wide variety of problems that are not amenable to point relaxation methods. Examples include monolithic Vanka smoothers for Stokes, overlapping vertex-patch decompositions for $H(\text{div})$ and $H(\text{curl})$ problems, nearly incompressible elasticity, and augmented Lagrangian schemes.

While it is possible to program these different schemes by hand, their use in general-purpose libraries has been held back by a lack of generic, composable interfaces. We present a new approach to the specification and development of such additive Schwarz methods in PETSc that cleanly separates the topological space decomposition from the discretisation and assembly of the equations. Our preconditioner is flexible enough to support overlapping and non-overlapping additive Schwarz methods, and can be used to formulate line and plane smoothers, Vanka iterations, and others. I will illustrate these new features with some examples using the Firedrake finite element library, in particular showing how the design of an appropriate computational interface enables these schemes to be used as building blocks inside block preconditioners.
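As a concrete illustration of the "specification through composable options" idea, a Vanka-style patch smoother can be requested from Firedrake purely through solver options via `firedrake.PatchPC` (the PETSc PCPATCH backend). The sketch below is my own guess at a typical configuration, not a configuration from the talk; option names should be checked against the PETSc/Firedrake documentation.

```python
# A hypothetical solver configuration sketch: a GMRES outer solve with a
# Vanka patch smoother built by PCPATCH around each mesh vertex (dim 0),
# solving each small patch problem exactly with LU.
vanka_parameters = {
    "ksp_type": "gmres",
    "pc_type": "python",
    "pc_python_type": "firedrake.PatchPC",     # PETSc PCPATCH, driven from Firedrake
    "patch_pc_patch_save_operators": True,     # assemble and store the patch operators
    "patch_pc_patch_construct_type": "vanka",  # Vanka-style patch construction
    "patch_pc_patch_construct_dim": 0,         # patches seeded from mesh vertices
    "patch_sub_ksp_type": "preonly",
    "patch_sub_pc_type": "lu",                 # direct solve on each patch
}
```

These parameters would be passed as `solver_parameters` to a Firedrake variational solve; the point of the design is that switching to, say, a line or star smoother changes only this options dictionary, not the assembly code.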

This is joint work with Patrick Farrell and Florian Wechsung (Oxford), and Matt Knepley (Buffalo).

# Past Computational Mathematics and Applications Seminar

Classical methods for X-ray computed tomography (CT) are based on the assumption that the X-ray source intensity is known. In practice, however, the intensity is measured and hence uncertain. Under normal circumstances, when the exposure time is sufficiently high, this kind of uncertainty typically has a negligible effect on the reconstruction quality. However, in time- or dose-limited applications such as dynamic CT, this uncertainty may cause severe and systematic artifacts known as ring artifacts.

By modeling the measurement process and by taking uncertainties into account, it is possible to derive a convex reconstruction model that leads to improved reconstructions when the signal-to-noise ratio is low. We discuss some computational challenges associated with the model and illustrate its merits with some numerical examples based on simulated and real data.
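The mechanism behind the ring artifacts can be seen in a few lines: if the log-transform of the data assumes a nominal source intensity while the true, per-detector intensity differs, the error in the recovered line integrals acquires a detector-wise constant offset, which is exactly what backprojects into rings. The following sketch (all sizes, intensities, and distributions are my own toy choices, not the talk's model) verifies the predicted offset numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

n_angles, n_dets = 2000, 8                        # many angles so averages are sharp
b = rng.uniform(0.2, 1.0, size=(n_angles, n_dets))  # "true" line integrals

I0_nominal = 1e5                                      # intensity assumed in the reconstruction
I0_true = I0_nominal * rng.uniform(0.9, 1.1, n_dets)  # actual, detector-dependent intensity

counts = rng.poisson(I0_true * np.exp(-b))            # photon-count measurements
b_hat = -np.log(counts / I0_nominal)                  # standard log-transformed data

# The log-transform error has a detector-wise systematic part:
# E[b_hat - b] ~ log(I0_nominal / I0_true), constant along each projection angle,
# hence a ring after backprojection.
offset = (b_hat - b).mean(axis=0)
predicted = np.log(I0_nominal / I0_true)
print(np.max(np.abs(offset - predicted)))
```

Averaging over angles suppresses the Poisson noise but not the systematic offset, so the measured column means match the predicted per-detector bias closely.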

Multiscale methods based on coupled partial differential equations defined on bulk and embedded manifolds are still poorly explored from the theoretical standpoint, although they are successfully used in applications such as microcirculation and flow in perforated subsurface reservoirs. This work aims at shedding light on some theoretical aspects of a multiscale method consisting of coupled partial differential equations defined on one-dimensional domains embedded into three-dimensional ones. Mathematical issues arise because the dimensionality gap between the bulk and the inclusions is larger than one; we refer to this as the high dimensionality gap case. First, we show that such a model derives from a system of full three-dimensional equations by the application of a topological model reduction approach. Secondly, we rigorously analyze the problem, showing that the averaging operators applied for the model reduction introduce a regularization effect that resolves the issues due to the singularity of solutions and to the ill-posedness of restriction operators. Then, we discretize the problem by means of the finite element method and analyze the approximation error. Finally, we exploit the structure of the model reduction technique to analyze the modeling error. This study confirms that for infinitesimally small inclusions, the modeling error vanishes.
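For orientation, a representative coupled 3D–1D problem of this type (the notation and the precise form of the coupling are my own simplification, not necessarily those of the talk) reads

$$
-\Delta u = f - q\,\delta_\Lambda \ \text{ in } \Omega \subset \mathbb{R}^3, \qquad
-\frac{d^2 \hat u}{ds^2} = g + q \ \text{ on } \Lambda, \qquad
q = \beta\,(\overline{u} - \hat u),
$$

where $\Lambda$ is the one-dimensional inclusion, $\delta_\Lambda$ the Dirac measure concentrated on it, and $\overline{u}(s)$ the average of $u$ over a small circle around $\Lambda$. It is this averaging operator $u \mapsto \overline{u}$, rather than a pointwise trace (which is ill-posed for a dimensionality gap of two), that supplies the regularization mentioned above.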

This is a joint work with Federica Laurino, Department of Mathematics, Politecnico di Milano.

Driven by its numerous applications in computational science, the approximation of smooth, high-dimensional functions via sparse polynomial expansions has received significant attention in the last five to ten years. In the first part of this talk, I will give a brief survey of recent progress in this area. In particular, I will demonstrate how the proper use of compressed sensing tools leads to new techniques for high-dimensional approximation which can mitigate the curse of dimensionality to a substantial extent. The rest of the talk is devoted to approximating functions defined on irregular domains. The vast majority of works on high-dimensional approximation assume the function in question is defined over a tensor-product domain. Yet this assumption is often unrealistic. I will introduce a method, known as polynomial frame approximation, suitable for broad classes of irregular domains and present theoretical guarantees for its approximation error, stability, and sample complexity. These results show the suitability of this approach for high-dimensional approximation through the independence (or weak dependence) of the various guarantees on the ambient dimension d. Time permitting, I will also discuss several extensions.
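A small self-contained sketch of the idea behind polynomial frame approximation (the domain, target function, degree, and regularization are my own toy choices): the function is only defined on the unit disk, the basis is a tensor Legendre basis of the bounding square restricted to the disk, which forms a frame rather than a basis, and a truncated-SVD least squares handles the resulting redundancy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Irregular domain: the unit disk, viewed as a subset of the square [-1,1]^2.
def sample_disk(n):
    pts = rng.uniform(-1.0, 1.0, size=(3 * n, 2))
    pts = pts[(pts ** 2).sum(axis=1) <= 1.0]   # rejection sampling
    return pts[:n]

f = lambda x, y: np.exp(x - 2 * y)             # a smooth target function

deg = 10                                       # degree per variable (assumption)
train = sample_disk(3000)
A = np.polynomial.legendre.legvander2d(train[:, 0], train[:, 1], [deg, deg])
rhs = f(train[:, 0], train[:, 1])

# Truncated-SVD least squares: rcond plays the role of the regularization
# parameter that makes the redundant (frame) system well behaved.
coef, *_ = np.linalg.lstsq(A, rhs, rcond=1e-12)

test_pts = sample_disk(500)
B = np.polynomial.legendre.legvander2d(test_pts[:, 0], test_pts[:, 1], [deg, deg])
err = np.max(np.abs(B @ coef - f(test_pts[:, 0], test_pts[:, 1])))
print(err)
```

Because the target is smooth, the error decays rapidly with the degree even though no orthogonal basis on the disk is ever constructed; that robustness to the domain shape is the selling point of the frame approach.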

When the linear system in Newton's method is approximately solved using an iterative method, we have an inexact or truncated Newton method. The outer method is Newton's method, and the inner iterations are those of the iterative solver. The inexact Newton framework is now close to 30 years old and is widely used, giving rise to names such as Newton-Arnoldi and Newton-CG depending on the inner iterative method. In this talk we will explore convergence properties when the outer method is Gauss-Newton, the Halley method, or an interior point method for linear programming problems.
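The inner/outer structure can be stated in a few lines of code. Below is a minimal Newton-CG sketch with a fixed forcing term; the nonlinear system, the forcing term $\eta = 0.1$, and all tolerances are my own toy choices, not examples from the talk.

```python
import numpy as np

def cg(A, b, tol):
    """Plain conjugate gradients, stopped at relative residual tol (the inner method)."""
    x = np.zeros_like(b)
    r = b.copy(); p = r.copy()
    rs = r @ r
    for _ in range(len(b) * 10):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def F(x):   # a small nonlinear system with a symmetric positive definite Jacobian
    return x + 0.1 * x ** 3 - np.arange(1.0, 4.0)

def J(x):
    return np.eye(3) + np.diag(0.3 * x ** 2)

# Outer loop: inexact (truncated) Newton. The Newton system J dx = -F is solved
# only to the loose inner tolerance eta ("forcing term").
x = np.zeros(3)
eta = 0.1
for k in range(50):
    Fx = F(x)
    if np.linalg.norm(Fx) < 1e-10:
        break
    x += cg(J(x), -Fx, tol=eta)
print(k, np.linalg.norm(F(x)))
```

With a fixed forcing term the outer iteration converges linearly with rate roughly $\eta$; choices of $\eta_k \to 0$ recover superlinear or quadratic convergence, which is the kind of property the talk examines for other outer methods.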

We discuss the key role that bespoke linear algebra plays in modelling PDEs with random coefficients using stochastic Galerkin approximation methods. As a specific example, we consider nearly incompressible linear elasticity problems with an uncertain spatially varying Young's modulus. The uncertainty is modelled with a finite set of parameters with prescribed probability distribution. We introduce a novel three-field mixed variational formulation of the PDE model and assess its stability with respect to a weighted norm. The main focus will be the efficient solution of the associated high-dimensional indefinite linear system of equations. Eigenvalue bounds for the preconditioned system can be established and shown to be independent of the discretisation parameters and the Poisson ratio. We also discuss an associated a posteriori error estimation strategy and assess proxies for the error reduction associated with selected enrichments of the approximation spaces. We will show by example that these proxies enable the design of efficient adaptive solution algorithms that terminate when the estimated error falls below a user-prescribed tolerance.
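For orientation, the uncertain modulus in stochastic Galerkin methods is typically represented by an affine expansion in the parameters (notation mine), and this is what gives the Galerkin system the Kronecker structure that bespoke linear algebra exploits:

$$
E(x,\boldsymbol{\xi}) = E_0(x) + \sum_{m=1}^{M} E_m(x)\,\xi_m,
\qquad
\Big(\sum_{m=0}^{M} G_m \otimes K_m\Big)\,\mathbf{u} = \mathbf{f},
$$

where the $G_m$ are small matrices arising from the parameter-space (polynomial chaos) basis and the $K_m$ are finite element matrices. Solvers that act on this sum of Kronecker products, rather than on the assembled high-dimensional matrix, are what make such systems tractable.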

This is joint work with Arbaz Khan and Catherine Powell.

Just like optimization needs derivatives, shape optimization needs shape derivatives. Their definition and computation is a classical subject, at least concerning first order shape derivatives. Second derivatives have been studied as well, but some aspects of their theory still remain a bit mysterious for practitioners. As a result, most algorithms for shape optimization are first order methods.

To understand this situation better and in a general way, we consider first and second order shape sensitivities of integrals on smooth submanifolds using a variant of shape differentiation. Instead of computing the second derivative as the derivative of the first derivative, we choose a one-parameter family of perturbations and compute first and second derivatives with respect to that parameter. The result is a quadratic form in terms of a perturbation vector field that yields a second order quadratic model of the perturbed functional, which can be used as the basis of a second order shape optimization algorithm. We discuss the structure of this derivative, derive domain expressions and Hadamard forms in a general geometric framework, and give a detailed geometric interpretation of the arising terms.
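In symbols (notation mine), for a perturbation vector field $V$ and the perturbed domain $\Omega_t = (\mathrm{id} + tV)(\Omega)$, the approach differentiates the scalar function

$$
j(t) := J\big((\mathrm{id} + tV)(\Omega)\big),
\qquad
j(t) = j(0) + t\, dJ(\Omega)[V] + \tfrac{t^2}{2}\, d^2J(\Omega)[V,V] + o(t^2),
$$

so that $dJ(\Omega)[V] = j'(0)$ and $d^2J(\Omega)[V,V] = j''(0)$. The quadratic form $V \mapsto d^2J(\Omega)[V,V]$ is the second order model minimized in each step of a shape-SQP method.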

Finally, we use our results to construct a second order SQP algorithm for shape optimization that indeed exhibits fast local convergence.

Standard quadratic programs have numerous applications and play an important role in copositivity detection. We consider reformulating a standard quadratic program as a mixed integer linear programming (MILP) problem. We propose alternative MILP reformulations that exploit the specific structure of standard quadratic programs. We report extensive computational results on various classes of instances. Our experiments reveal that our MILP reformulations significantly outperform other global solution approaches.
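For orientation, a standard quadratic program asks for the minimum of a quadratic form over the unit simplex, and one generic KKT-based MILP reformulation (the big-M linearization shown here is my own illustration, not necessarily one of the talk's structure-exploiting reformulations) looks as follows:

$$
\min\;\{\,x^\top Q x \;:\; e^\top x = 1,\; x \ge 0\,\}
$$

is linearized via its KKT conditions with binaries $y \in \{0,1\}^n$:

$$
\min_{x,\,s,\,\lambda,\,y}\ \lambda
\quad\text{s.t.}\quad
s = Qx - \lambda e,\quad e^\top x = 1,\quad 0 \le x \le y,\quad 0 \le s \le M(e-y),
$$

so that $x_j s_j = 0$ holds for every $j$ at any feasible point, which forces $x^\top Q x = \lambda e^\top x = \lambda$; minimizing $\lambda$ over KKT points then recovers the global minimum.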

This is joint work with Jacek Gondzio.

In this talk I will present two recent findings concerning the preconditioning of elliptic problems. The first result concerns the preconditioning of elliptic problems with a variable coefficient K by the inverse Laplacian. Here we show that there is a close relationship between the eigenvalues of the preconditioned system and the coefficient K.
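The flavour of the first result can be illustrated in a toy 1D finite-difference setting (the discretization and coefficient are my own construction): the generalized eigenvalues of the stiffness matrix for $-(K u')'$ preconditioned by the Laplacian lie in the interval $[\min K, \max K]$, by a Rayleigh quotient argument.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
h = 1.0 / (n + 1)

def stiffness(k):
    """Finite-difference stiffness matrix for -(k u')' on (0,1) with
    homogeneous Dirichlet BCs; k is sampled at the n+1 cell interfaces."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (k[i] + k[i + 1]) / h**2
        if i + 1 < n:
            A[i, i + 1] = -k[i + 1] / h**2
            A[i + 1, i] = -k[i + 1] / h**2
    return A

K = rng.uniform(1.0, 9.0, n + 1)       # variable coefficient
A = stiffness(K)
L = stiffness(np.ones(n + 1))          # the Laplacian (the preconditioner)

# Generalized eigenvalues A v = mu L v, i.e. the spectrum of L^{-1} A.
mu = np.linalg.eigvals(np.linalg.solve(L, A)).real
print(mu.min(), mu.max())
```

The computed eigenvalues fall inside $[\min K, \max K]$, so the inverse Laplacian is a good preconditioner exactly when the coefficient has moderate contrast.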

The second result concerns the problem in mixed form, where K approaches zero. Here, we show a uniform inf-sup condition and a corresponding robust preconditioner.