Past Computational Mathematics and Applications Seminar

13 June 2019
14:00
Professor Ricardo Nochetto
Abstract

The Landau-de Gennes Q-model of uniaxial nematic liquid crystals seeks a rank-one traceless tensor Q that minimizes a Frank-type energy plus a double-well potential that confines the eigenvalues of Q to lie between -1/2 and 1. We propose a finite element method (FEM) which preserves this basic structure and satisfies a discrete form of the fundamental energy estimates. We prove that the discrete problem Gamma-converges to the continuous one as the mesh size tends to zero, and propose a discrete gradient flow to compute discrete minimizers. Numerical experiments confirm the ability of the scheme to approximate configurations with half-integer defects, and to deal with colloidal and electric field effects. This work, joint with J.P. Borthagaray and S. Walker, builds on our previous work for the Ericksen model, which we review briefly.
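For orientation, the following is a schematic one-constant form of the energy in question; the elastic constant, the scaling of the potential, and the uniaxial ansatz are illustrative assumptions rather than the talk's exact formulation.

```latex
% A schematic one-constant form of the energy (the elastic constant kappa,
% the scaling 1/epsilon, and the uniaxial ansatz below are illustrative
% assumptions, not the talk's exact formulation):
E[Q] \;=\; \frac{\kappa}{2} \int_\Omega \lvert \nabla Q \rvert^{2} \, dx
\;+\; \frac{1}{\varepsilon} \int_\Omega \psi(Q) \, dx ,
\qquad
Q \;=\; s \left( n \otimes n - \tfrac{1}{3} I \right), \quad \lvert n \rvert = 1,
% where the double-well potential psi penalizes eigenvalue configurations
% outside the admissible range described above.
```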

Abstract

As parallel computers approach Exascale (10^18 floating-point operations per second), processor failure and data corruption are of increasing concern. Numerical linear algebra solvers are at the heart of many scientific and engineering applications, and with increasing failure rates they may fail to compute a solution or may produce an incorrect one. It is therefore crucial to develop novel parallel linear algebra solvers capable of providing correct solutions on unreliable computing systems. The common way to mitigate failures in high-performance computing systems consists of periodically saving data onto a reliable storage device such as a remote disk. But considering the increasing failure rate and the ever-growing volume of data involved in numerical simulations, these state-of-the-art fault-tolerant strategies are becoming too time-consuming, and therefore unsuitable, for large-scale simulations. In this talk, we will present a novel class of fault-tolerant algorithms that do not require any additional resources. The key idea is to leverage knowledge of the numerical properties of the solvers involved in a simulation to regenerate data lost to system failures. We will also share the lessons learned and report on the numerical properties and the performance of the new resilience algorithms.
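As an illustration of the key idea, here is a minimal NumPy sketch of regenerating a lost block of an iterate from the surviving entries and the solver's own data; the function name and setup are illustrative assumptions, not the speaker's implementation.

```python
# Sketch of interpolation-based recovery (assumed technique: regenerate the
# lost block of the current iterate from the surviving entries, using the
# rows of A associated with the failed node). Names are illustrative.
import numpy as np

def recover_lost_block(A, b, x, lost):
    """Regenerate x[lost], assuming the diagonal block A_LL is nonsingular.

    Solves A_LL x_L = b_L - A_LS x_S, where S are the surviving indices.
    """
    lost = np.asarray(lost)
    surv = np.setdiff1d(np.arange(A.shape[0]), lost)
    A_LL = A[np.ix_(lost, lost)]
    rhs = b[lost] - A[np.ix_(lost, surv)] @ x[surv]
    x = x.copy()
    x[lost] = np.linalg.solve(A_LL, rhs)
    return x

# Tiny demo: "lose" two entries of a vector and regenerate them.
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant
x_true = rng.standard_normal(n)
b = A @ x_true
x = x_true.copy()
x[[2, 5]] = 0.0                                   # simulated node failure
x = recover_lost_block(A, b, x, [2, 5])
print(np.allclose(x, x_true))                     # exact here, since x solved Ax=b
```

If x is only an intermediate iterate rather than the exact solution, the same local solve yields an interpolated replacement from which the iteration can restart.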

30 May 2019
14:00
Professor Peter Binev
Abstract

One of the major steps in adaptive finite element methods (AFEM) is the adaptive selection of the next partition. The process is usually governed by a strategy based on carefully chosen local error indicators and aims at convergence results with optimal rates. One can formally relate the refinement of the partitions to growing an oriented graph or a tree. Each node of the tree/graph then corresponds to a cell of a partition, and the approximation of a function on adaptive partitions can be expressed through the local errors related to the cells, i.e., the nodes. The total approximation error is then calculated as the sum of the errors on the leaves (the terminal nodes) of the tree/graph, and the problem of finding an optimal error for a given budget of nodes is known as tree approximation. Establishing a near-best tree approximation result is a key ingredient in proving optimal convergence rates for AFEM.
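Schematically, the tree approximation problem described above can be stated as follows (notation assumed):

```latex
% Schematic statement of the tree approximation problem (notation assumed):
% T ranges over admissible trees grown from the root partition, L(T) denotes
% the set of leaves of T, and e(Delta) the local error on cell Delta.
\sigma_n \;=\; \min_{\# T \,\le\, n} \; E(T),
\qquad
E(T) \;=\; \sum_{\Delta \in \mathcal{L}(T)} e(\Delta).
% A near-best algorithm produces, for a comparable budget, a tree T with
% E(T) <= C * sigma_{c n} for fixed constants C and c.
```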

 

The classical tree approximation problems are usually related to so-called h-adaptive approximation, in which the improvements are due to reducing the size of the cells in the partition. This talk will also consider an extension of this framework to hp-adaptive approximation, allowing different polynomial spaces to be used for the local approximations at different cells while maintaining near-optimality in terms of the combined number of degrees of freedom used in the approximation.
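One plausible way to formalize the hp budget (an illustrative formalization, not necessarily the talk's exact setting):

```latex
% Illustrative formalization of the hp variant (notation assumed): each leaf
% carries a polynomial degree p(Delta), the local error e_{p(Delta)}(Delta)
% depends on that degree, and the budget N counts combined degrees of freedom.
\sigma_N^{hp} \;=\; \min_{T,\,p} \Big\{ \sum_{\Delta \in \mathcal{L}(T)} e_{p(\Delta)}(\Delta)
\;:\; \sum_{\Delta \in \mathcal{L}(T)} \dim \mathbb{P}_{p(\Delta)} \,\le\, N \Big\}.
```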

 

The problem of conformity of the resulting partition will be discussed as well. Typically in AFEM, certain elements of the current partition are marked and subdivided together with some additional ones to maintain desired properties of the partition, such as conformity. This strategy is often described as “mark → subdivide → complete”. The process is very well understood for triangulations obtained via the newest vertex bisection procedure. In particular, it is proven that the number of elements in the final partition is bounded by a constant times the number of marked cells. This hints at the possibility of designing a marking procedure that is limited to cells of the partition whose subdivision results in a conforming partition, so that no completion step is necessary. This talk will present such a strategy together with theoretical results about its near-optimal performance.
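The completion bound referred to above is usually stated as follows (in the form proved by Binev, Dahmen and DeVore for newest vertex bisection):

```latex
% If T_k arises from T_0 through marked sets M_0, ..., M_{k-1}, each followed
% by subdivision and completion, then the added completion cells are controlled:
\#\mathcal{T}_k - \#\mathcal{T}_0 \;\le\; C \sum_{j=0}^{k-1} \#\mathcal{M}_j .
```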

23 May 2019
14:00
Abstract

In this talk, I am going to give an introduction to operator preconditioning as a general and robust strategy for preconditioning linear systems arising from Galerkin discretization of PDEs or boundary integral equations. Then, in order to illustrate the applicability of this preconditioning technique, I will discuss the simple case of weakly singular and hypersingular integral equations, arising from exterior Dirichlet and Neumann BVPs for the Laplacian in 3D. Finally, I will show how we can also tackle operators with a more difficult structure, like the electric field integral equation (EFIE) on screens, which models the scattering of time-harmonic electromagnetic waves at perfectly conducting, bounded, infinitely thin objects, such as patch antennas in 3D.
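For the simple case mentioned above, the mechanism can be sketched as follows (notation assumed):

```latex
% Schematic operator preconditioning bound (notation assumed): A_h is the
% Galerkin matrix of an operator of order +s, B_h that of an operator of
% order -s on a dual-paired trial space, and M_h the duality (mass) matrix.
% If the discrete duality pairing is inf-sup stable, then
\kappa\!\left( M_h^{-1} B_h M_h^{-T} A_h \right) \;\le\; C
\quad \text{uniformly in the mesh size } h.
% For the Laplace BVPs above, the Calderon identity
V\,W \;=\; \tfrac{1}{4}\, I - K^{2}
% suggests preconditioning the weakly singular operator V with the
% hypersingular operator W, and vice versa.
```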

16 May 2019
14:00
Professor Andy Wathen
Abstract

We present a novel approach to the solution of time-dependent PDEs via the so-called monolithic or all-at-once formulation.

This approach will be explained for simple parabolic problems and its utility in the context of PDE constrained optimization problems will be elucidated.

The underlying linear algebra includes circulant matrix approximations of Toeplitz-structured matrices and allows for effective parallel implementation. Simple computational results will be shown for the heat equation and the wave equation, which indicate its potential as a parallel-in-time method.
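A minimal NumPy sketch of this idea for the backward-Euler heat equation; the block-circulant approximation of the Toeplitz time-stepping block is applied exactly via an FFT in time, and the discretization details and names are illustrative assumptions.

```python
# All-at-once backward Euler for u' = -K u: (B ⊗ I + dt · I ⊗ K) u = rhs,
# where B is Toeplitz (1 on the diagonal, -1 on the subdiagonal). The
# preconditioner replaces B by its circulant analogue C, diagonalized by the
# DFT: FFT in time, nt decoupled nx-by-nx solves (parallel in time), inverse FFT.
import numpy as np

nx, nt, dt = 32, 16, 0.01
K = ((nx + 1)**2) * (2*np.eye(nx) - np.eye(nx, k=1) - np.eye(nx, k=-1))
B = np.eye(nt) - np.eye(nt, k=-1)
A = np.kron(B, np.eye(nx)) + dt * np.kron(np.eye(nt), K)

def apply_circulant_prec(r):
    """Apply P^{-1}, with P = C ⊗ I + dt · I ⊗ K and C circulant."""
    R = r.reshape(nt, nx)
    c = np.zeros(nt); c[0], c[1] = 1.0, -1.0   # first column of C
    lam = np.fft.fft(c)                        # eigenvalues of C
    Rhat = np.fft.fft(R, axis=0)
    for k in range(nt):                        # independent: parallel in time
        Rhat[k] = np.linalg.solve(lam[k]*np.eye(nx) + dt*K, Rhat[k])
    return np.real(np.fft.ifft(Rhat, axis=0)).ravel()

rhs = np.random.default_rng(1).standard_normal(nt * nx)
z = apply_circulant_prec(rhs)

# Check: build P explicitly and verify the FFT application was exact.
C = B.copy(); C[0, -1] = -1.0
P = np.kron(C, np.eye(nx)) + dt * np.kron(np.eye(nt), K)
print(np.allclose(P @ z, rhs))                 # True
```

Since P differs from A only in the wrap-around corner block, it serves as an effective preconditioner for the all-at-once system, and each of the nt frequency solves can run on a different processor.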

This is joint work with Elle McDonald (CSIRO, Australia), Jennifer Pestana (Strathclyde University, UK) and Anthony Goddard (Durham University, UK).

9 May 2019
14:00
Dr Pietro Zanotti
Abstract

We approximate the solution of the stationary Stokes equations with various conforming and nonconforming inf-sup stable pairs of finite element spaces on simplicial meshes. Based on each pair, we design a discretization that is quasi-optimal and pressure robust, in the sense that the velocity H^1-error is proportional to the best H^1-approximation error of the analytical velocity. This shows that such a property can be achieved without using conforming and divergence-free pairs. We also bound the pressure L^2-error, only in terms of the best approximation errors of the analytical velocity and the analytical pressure. Our construction can be summarized as follows. First, a linear operator acts on discrete velocity test functions, before the application of the load functional, and maps the discrete kernel into the analytical one.

Second, in order to enforce consistency, we employ, where needed, a new augmented Lagrangian formulation inspired by discontinuous Galerkin methods.
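Schematically, the two bounds described above read as follows (with generic constants):

```latex
% Quasi-optimality and pressure robustness for the velocity,
\| u - u_h \|_{H^1} \;\le\; C \inf_{v_h \in V_h} \| u - v_h \|_{H^1},
% and a pressure error controlled only by best approximation errors,
\| p - p_h \|_{L^2} \;\le\; C \Big( \inf_{v_h \in V_h} \| u - v_h \|_{H^1}
\;+\; \inf_{q_h \in Q_h} \| p - q_h \|_{L^2} \Big).
```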

7 March 2019
14:00
Dr Lawrence Mitchell
Abstract

Small-block overlapping and non-overlapping Schwarz methods are theoretically highly attractive as multilevel smoothers for a wide variety of problems that are not amenable to point relaxation methods. Examples include monolithic Vanka smoothers for Stokes, overlapping vertex-patch decompositions for $H(\text{div})$ and $H(\text{curl})$ problems, along with nearly incompressible elasticity, and augmented Lagrangian schemes.

While it is possible to manually program these different schemes, their use in general-purpose libraries has been held back by a lack of generic, composable interfaces. We present a new approach to the specification and development of such additive Schwarz methods in PETSc that cleanly separates the topological space decomposition from the discretisation and assembly of the equations. Our preconditioner is flexible enough to support overlapping and non-overlapping additive Schwarz methods, and can be used to formulate line and plane smoothers, Vanka iterations, and others. I will illustrate these new features with some examples utilising the Firedrake finite element library, in particular how the design of an appropriate computational interface enables these schemes to be used as building blocks inside block preconditioners.
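A generic NumPy sketch of the kind of small-block additive Schwarz method being composed, used here as a CG preconditioner; this is a stand-in illustration with assumed patch construction and names, not the PETSc/Firedrake interface.

```python
# Additive Schwarz: z = sum_i R_i^T A_i^{-1} R_i r, with each patch a small
# set of dof indices (e.g. the star of a vertex) and A_i the local block of A.
import numpy as np

def additive_schwarz(A, r, patches):
    """Solve on each small patch and add the local corrections."""
    z = np.zeros_like(r)
    for dofs in patches:
        z[dofs] += np.linalg.solve(A[np.ix_(dofs, dofs)], r[dofs])
    return z

def pcg(A, b, prec, iters=30):
    """Bare-bones preconditioned conjugate gradient iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = prec(r)
    p = z.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-12:
            break
        z_new = prec(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Demo: 1D Laplacian with overlapping vertex-star patches {i-1, i, i+1},
# used as a building block inside CG, as in the talk's last point.
n = 20
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
patches = [[i - 1, i, i + 1] for i in range(1, n - 1)]
b = np.ones(n)
x = pcg(A, b, lambda r: additive_schwarz(A, r, patches))
print(np.linalg.norm(b - A @ x))   # small residual
```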

This is joint work with Patrick Farrell and Florian Wechsung (Oxford), and Matt Knepley (Buffalo).

Prof Martin Skovgaard Andersen
Abstract

Classical methods for X-ray computed tomography (CT) are based on the assumption that the X-ray source intensity is known. In practice, however, the intensity is measured and hence uncertain. Under normal circumstances, when the exposure time is sufficiently high, this kind of uncertainty typically has a negligible effect on the reconstruction quality. However, in time- or dose-limited applications such as dynamic CT, this uncertainty may cause severe and systematic artifacts known as ring artifacts. By modeling the measurement process and by taking uncertainties into account, it is possible to derive a convex reconstruction model that leads to improved reconstructions when the signal-to-noise ratio is low. We discuss some computational challenges associated with the model and illustrate its merits with some numerical examples based on simulated and real data.
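One schematic convex model consistent with this description (an illustrative assumption, not necessarily the speaker's exact formulation):

```latex
% Illustrative convex reconstruction model (assumed form): A is the forward
% projector, b the log-transformed data, W per-ray statistical weights,
% s unknown log-intensity offsets (one per detector element, the source of
% ring artifacts), P replicates those offsets across projection angles, and
% R is a convex regularizer.
\min_{x \,\ge\, 0,\; s} \;\;
\tfrac{1}{2} \big\| W^{1/2} \left( A x + P s - b \right) \big\|_2^2
\;+\; \lambda\, R(x).
```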
