Forthcoming events in this series


Thu, 22 Oct 2015

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Constraint preconditioning for the coupled Stokes-Darcy system

Dr. Scott Ladenheim
(Manchester University)
Abstract

We propose the use of a constraint preconditioner for the iterative solution of the linear system arising from the finite element discretization of the coupled Stokes-Darcy system. The Stokes-Darcy system is a set of coupled PDEs that can be used to model a freely flowing fluid over a porous medium. The fully coupled system matrix is large, sparse, non-symmetric, and of saddle point form. For exact versions of the constraint preconditioner we provide spectral and field-of-values bounds that are independent of the underlying mesh width. We present several numerical experiments, using the deal.II finite element library, that illustrate our results in both two and three dimensions. We compare exact and inexact versions of the constraint preconditioner against standard block diagonal and block lower triangular preconditioners to illustrate its favorable properties.
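
As a schematic illustration of the constraint-preconditioning idea (on a generic saddle-point system, not the Stokes-Darcy discretization from the talk), the sketch below keeps the constraint blocks exact and replaces the (1,1) block by its diagonal; the sizes, the approximation G and the direct factorization of the preconditioner are ad hoc choices.

```python
# Hedged sketch of a constraint preconditioner P = [G B^T; B 0] for a generic
# saddle-point matrix K = [A B^T; B 0]; G = diag(A) is an ad hoc choice.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 60, 20
A = sp.random(n, n, density=0.1, random_state=0)
A = (A @ A.T + sp.identity(n)).tocsc()                      # SPD (1,1) block
B = sp.hstack([sp.identity(m),
               sp.random(m, n - m, density=0.2, random_state=1)]).tocsc()

K = sp.bmat([[A, B.T], [B, None]], format='csc')            # saddle-point matrix
G = sp.diags(A.diagonal())                                  # cheap approximation of A
P = sp.bmat([[G, B.T], [B, None]], format='csc')            # constraint preconditioner

Plu = spla.splu(P)                                          # factor P once
M = spla.LinearOperator(K.shape, matvec=Plu.solve)          # action of P^{-1}

rng = np.random.default_rng(0)
b = rng.standard_normal(n + m)
x, info = spla.gmres(K, b, M=M)
print("GMRES flag:", info, " residual:", np.linalg.norm(K @ x - b))
```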

Thu, 08 Oct 2015

14:00 - 15:00
L4

Randomized iterative methods for linear systems

Dr Peter Richtárik
(Edinburgh University)
Abstract

We develop a novel, fundamental and surprisingly simple randomized iterative method for solving consistent linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update and random fixed point. By varying its two parameters—a positive definite matrix (defining geometry), and a random matrix (sampled in an i.i.d. fashion in each iteration)—we recover a comprehensive array of well known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method and random Gaussian pursuit. We naturally also obtain variants of all these methods using blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, which leads to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from which existing complexity results for known variants can be obtained. However, we also give an exact formula for the evolution of the expected iterates, which allows us to give lower bounds on the convergence rate. 

This is joint work with Robert M. Gower (Edinburgh).
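
As an illustration of one special case of the sketch-and-project family described above (the randomized Kaczmarz method with importance sampling by squared row norms), a minimal sketch, not the authors' implementation:

```python
# Randomized Kaczmarz for a consistent system Ax = b: one special case of the
# sketch-and-project family (the sketch is a single random row).  Sketch only.
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                                   # consistent system

probs = np.sum(A**2, axis=1) / np.sum(A**2)      # importance sampling by row norms
x = np.zeros(n)
for k in range(20000):
    i = rng.choice(m, p=probs)                   # pick a random row
    a = A[i]
    x += (b[i] - a @ x) / (a @ a) * a            # project onto {y : a^T y = b_i}

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```
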
Thu, 18 Jun 2015

14:00 - 15:00
L5

Linear Algebra for Matrix-Free Optimization

Dominique Orban
(École Polytechnique Montréal)
Abstract

When formulated appropriately, the broad families of sequential quadratic programming, augmented Lagrangian and interior-point methods all require the solution of symmetric saddle-point linear systems. When regularization is employed, the systems become symmetric and quasi-definite. The latter are indefinite, but their rich structure and strong relationships with definite systems enable specialized linear algebra, and make them prime candidates for matrix-free implementations of optimization methods. In this talk, I explore various formulations of the step equations in optimization and corresponding iterative methods that exploit their structure.

Thu, 11 Jun 2015

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Interior Point Methods for Optimal Power Flow Formulations

Andreas Grothey
(University of Edinburgh)
Abstract

Security Constrained Optimal Power Flow is an increasingly important problem for power systems operation both in its own right and as a subproblem for more complex problems such as transmission switching or
unit commitment.

The structure of the problem resembles stochastic programming problems in that one aims to find a cost optimal operation schedule that is feasible for all possible equipment outage scenarios
(contingencies). Due to the presence of power flow constraints (in their "DC" or "AC" version), the resulting problem is a large scale linear or nonlinear programming problem.

However, it is known that only a small subset of the contingencies is active at the solution. We show how Interior Point methods can exploit this structure, both by simplifying the linear algebra operations and by generating the necessary contingencies on the fly and integrating them into the algorithm using IPM warm-starting techniques. The final problem solved by this scheme is significantly smaller than the full contingency-constrained problem, resulting in substantial speed gains.

Numerical and theoretical results of our algorithm will be presented.

Thu, 04 Jun 2015

14:00 - 15:00
L5

Polytopic Finite Element Methods

Dr Andrea Cangiani
(Leicester University)
Abstract

Can we extend the FEM to general polytopic, i.e. polygonal and polyhedral, meshes while retaining the ease of implementation and a computational cost comparable to that of standard FEM? In this talk, I present two approaches that achieve just that (and much more): the Virtual Element Method (VEM) and an hp-version discontinuous Galerkin (dG) method.

The Virtual Element spaces are like the usual (polynomial) finite element spaces with the addition of suitable non-polynomial functions. This is far from being a novel idea. The novelty of the VEM approach is that it avoids expensive evaluations of the non-polynomial "virtual" functions by basing all 
computations solely on the method's carefully chosen degrees of freedom. This way we can easily deal 
with complicated element geometries and/or higher continuity requirements (like C1, C2, etc.), while 
maintaining the computational complexity comparable to that of standard finite element computations.

As you might expect, the choice and number of the degrees of freedom depend on such continuity requirements. If mesh flexibility is the goal and one is ready to give up on global regularity, other approaches can be considered. For instance, dG approaches are naturally suited to dealing with polytopic meshes. Here I present an extension of the classical Interior Penalty dG method which achieves optimal rates of convergence on polytopic meshes even under elemental edge/face degeneration.

The next step is to exploit mesh flexibility in the efficient resolution of problems characterised by 
complicated geometries and solution features, for instance within the framework of automatic FEM 
adaptivity. I shall finally introduce ongoing work in this direction.

Thu, 28 May 2015

14:00 - 15:00
L5

Semi-Lagrangian Methods for Monge-Ampère Equations

Dr Max Jensen
(University of Sussex)
Abstract

In this seminar I will present a semi-Lagrangian discretisation of the Monge-Ampère operator, which is of interest in optimal transport and differential geometry as well as in related fields of application.

I will discuss the proof of convergence to viscosity solutions. To address the challenge of uniqueness and convexity we draw upon the classical relationship with Hamilton-Jacobi-Bellman equations, which we extend to the viscosity setting. I will explain that the monotonicity of semi-Lagrangian schemes implies that they possess large stencils, which in turn requires careful treatment of the boundary conditions.

The content of the seminar is based on current work with X. Feng from the University of Tennessee.

Thu, 21 May 2015

14:00 - 15:00
L5

Leverage Scores in Data Analysis

Petros Drineas
(Rensselaer Polytechnic Institute)
Abstract

The Singular Value Decomposition (SVD) of matrices and the related Principal Components Analysis (PCA) express a matrix in terms of singular vectors, which are linear combinations of all the input data and lack an intuitive physical interpretation. Motivated by the application of PCA and SVD in the analysis of population genetics data, we will discuss the notion of leverage scores: a simple statistic that reveals columns/rows of a matrix that lie in the subspace spanned by the top principal components (left/right singular vectors). We will then use the leverage scores to present matrix decompositions that express the structure in a matrix in terms of actual columns (and/or rows) of the matrix. Such decompositions are easier to interpret in applications, since the selected columns and rows are subsets of the data. We will also discuss extensions of the leverage scores to reveal influential entries of a matrix.
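
A minimal sketch of the statistic itself (leverage scores of the columns of a data matrix with respect to its top-k right singular vectors); the rank-k cut-off and the data are arbitrary choices here:

```python
# Leverage scores of the columns of a data matrix X with respect to its top-k
# right singular vectors: a simple statistic highlighting influential columns.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 100))  # low-rank-ish data

k = 10
U, s, Vt = np.linalg.svd(X, full_matrices=False)
lev = np.sum(Vt[:k, :]**2, axis=0)        # leverage score of each column
lev /= k                                  # normalised so that the scores sum to 1

top = np.argsort(lev)[::-1][:5]
print("columns with largest leverage scores:", top)
print("sum of scores:", lev.sum())        # equals 1 after normalisation
```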

Thu, 14 May 2015

14:00 - 15:00
L5

A Trust Region Algorithm with Improved Iteration Complexity for Nonconvex Smooth Optimization

Frank Curtis
(Lehigh University)
Abstract

We present a trust region algorithm for solving nonconvex optimization problems that, in the worst-case, is able to drive the norm of the gradient of the objective below a prescribed threshold $\epsilon > 0$ after at most ${\cal O}(\epsilon^{-3/2})$ function evaluations, gradient evaluations, or iterations.  Our work has been inspired by the recently proposed Adaptive Regularisation framework using Cubics (i.e., the ARC algorithm), which attains the same worst-case complexity bound.  Our algorithm is modeled after a traditional trust region algorithm, but employs modified step acceptance criteria and a novel trust region updating mechanism that allows it to achieve this desirable property.  Importantly, our method also maintains standard global and fast local convergence guarantees.

Thu, 07 May 2015

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

A preconditioned MINRES method for nonsymmetric Toeplitz matrices

Dr. Jennifer Pestana
(University of Manchester)
Abstract

Although Toeplitz matrices are often dense, matrix-vector products with Toeplitz matrices can be quickly performed via circulant embedding and the fast Fourier transform. This makes their solution by preconditioned Krylov subspace methods attractive. 

For a wide class of symmetric Toeplitz matrices, symmetric positive definite circulant preconditioners that cluster eigenvalues have been proposed. MINRES or the conjugate gradient method can be applied to these problems and descriptive convergence theory based on eigenvalues guarantees fast convergence. 

In contrast, although circulant preconditioners have been proposed for nonsymmetric Toeplitz systems, guarantees of fast convergence are generally only available for CG applied to the normal equations (CGNE). This is somewhat unsatisfactory because CGNE has certain drawbacks, including slow convergence and a larger condition number. In this talk we discuss a simple alternative symmetrization of nonsymmetric Toeplitz matrices that allows us to use MINRES to solve the resulting linear system. We show how existing circulant preconditioners for nonsymmetric Toeplitz matrices can be straightforwardly adapted to this situation and give convergence estimates similar to those in the symmetric case.
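
One well-known way to carry out such a symmetrization is to premultiply by the reversal ("flip") permutation, so that the resulting Hankel matrix is symmetric and MINRES applies; the following sketch (without any circulant preconditioner) illustrates only that mechanism and is not taken from the talk:

```python
# Symmetrising a nonsymmetric Toeplitz system by premultiplying with the
# reversal ("flip") matrix Y: Y @ T is Hankel and hence symmetric, so MINRES
# can be applied to (Y T) x = Y b.  Sketch without a circulant preconditioner.
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import minres

rng = np.random.default_rng(3)
n = 200
c = rng.standard_normal(n)                                   # first column
r = np.concatenate(([c[0]], rng.standard_normal(n - 1)))     # first row
T = toeplitz(c, r)                                           # nonsymmetric Toeplitz

YT = np.flipud(T)                                            # Y @ T, Y = reversal permutation
print("YT symmetric:", np.allclose(YT, YT.T))

b = rng.standard_normal(n)
x, info = minres(YT, np.flip(b))                             # solve the symmetrised system
print("MINRES flag:", info, " residual:", np.linalg.norm(T @ x - b))
```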

Thu, 30 Apr 2015

14:00 - 15:00
L5

A Finite-Element Approach to Free-Energy Minimisation

Dr. Scott MacLachlan
(Memorial University of Newfoundland)
Abstract

Numerical simulation tools for fluid and solid mechanics are often based on the discretisation of coupled systems of partial differential equations, which can easily be identified in terms of physical
conservation laws.  In contrast, much physical insight is often gained from the equivalent formulation of the relevant energy or free-energy functional, possibly subject to constraints.  Motivated by the
nonlinear static and dynamic behaviour of nematic liquid crystals and of magnetosensitive elastomers, we propose a finite-element framework for minimising these free-energy functionals, using Lagrange multipliers to enforce additional constraints.  This talk will highlight challenges, limitations, and successes, both in the formulation of these models and their use in numerical simulation.
This is joint work with PhD students Thomas Benson, David Emerson, and Dong Han, and with James Adler, Timothy Atherton, and Luis Dorfmann.

Thu, 12 Mar 2015

14:00 - 15:00
L5

Preconditioning: A Review

Professor Andrew Wathen
(Oxford University)
Abstract

Preconditioning is of significant importance in the solution of large dimensional systems of linear equations such as those that arise from the numerical solution of partial differential equation problems. In this talk we will attempt a broad ranging review of preconditioning.

Thu, 05 Mar 2015

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Preconditioned Iterative Solvers for Constrained Optimization

John Pearson
(Edinburgh University)
Abstract

In this talk, we discuss the development of fast iterative solvers for matrix systems arising from various constrained optimization problems. In particular, we seek to exploit the saddle point structure of these problems to construct powerful preconditioners for the resulting systems, using appropriate approximations of the (1,1)-block and Schur complement.

The problems we consider arise from two well-studied subject areas within computational optimization. Specifically, we investigate the
numerical solution of PDE-constrained optimization problems, and the interior point method (IPM) solution of linear/quadratic programming
problems. Indeed a particular focus in this talk is the interior point method solution of PDE-constrained optimization problems with
additional inequality constraints on the state and control variables.

We present a range of optimization problems which we seek to solve using our methodology, and examine the theoretical and practical
convergence properties of our iterative methods for these problems.
 

Thu, 26 Feb 2015

14:00 - 15:00
L5

Quasi-optimal stability estimates for the hp-Raviart-Thomas projection operator on the cube

Dr Alexey Chernov
(Reading University)
Abstract

Stability of the hp-Raviart-Thomas projection operator as a mapping H^1(K) -> H^1(K) on the unit cube K in R^3 has been addressed e.g. in [2], see also [1]. These results are suboptimal with respect to the polynomial degree. In this talk we present quasi-optimal stability estimates for the hp-Raviart-Thomas projection operator on the cube. The analysis involves elements of the polynomial approximation theory on an interval and the real method of Banach space interpolation.

(Joint work with Herbert Egger, TU Darmstadt)

[1] Mark Ainsworth and Katia Pinchedez. hp-approximation theory for BDFM and RT finite elements on quadrilaterals. SIAM J. Numer. Anal., 40(6):2047–2068 (electronic) (2003), 2002.

[2] Dominik Schötzau, Christoph Schwab, and Andrea Toselli. Mixed hp-DGFEM for incompressible flows. SIAM J. Numer. Anal., 40(6):2171–2194 (electronic) (2003), 2002.

Thu, 19 Feb 2015

14:00 - 15:00
L5

Distinct solutions of nonlinear systems via deflation

Dr Patrick Farrell
(Oxford University)
Abstract

Nonlinear systems of partial differential equations (PDEs) may permit several distinct solutions. The typical current approach to finding distinct solutions is to start Newton's method with many different initial guesses, hoping to find starting points that lie in different basins of attraction. In this talk, we present an infinite-dimensional deflation algorithm for systematically modifying the residual of a nonlinear PDE problem to eliminate known solutions from consideration. This enables the Newton--Kantorovitch iteration to converge to several different solutions, even starting from the same initial guess. The deflated Jacobian is dense, but an efficient preconditioning strategy is devised, and the number of Krylov iterations is observed not to grow as solutions are deflated. The technique is then applied to computing distinct solutions of nonlinear PDEs, tracing bifurcation diagrams, and to computing multiple local minima of PDE-constrained optimisation problems.
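
A toy scalar analogue of the deflation idea, under the assumption that the deflated residual has the form G(x) = (1/|x - r|^2 + 1) F(x) once a root r is known; this is a sketch of the mechanism, not the infinite-dimensional algorithm from the talk:

```python
# Deflation for finding distinct roots with Newton's method: once a root r is
# known, Newton is applied to the deflated residual G(x) = (1/(x-r)^2 + 1) F(x).
# Toy scalar sketch only.
import numpy as np

def newton(G, dG, x, tol=1e-12, maxit=100):
    for _ in range(maxit):
        step = G(x) / dG(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

F  = lambda x: x**2 - 1.0          # roots at +1 and -1
dF = lambda x: 2.0 * x

x0 = 2.0
r1 = newton(F, dF, x0)             # first root found from x0

# deflated residual G = M * F with deflation factor M(x) = 1/(x - r1)^2 + 1
M  = lambda x: 1.0 / (x - r1)**2 + 1.0
dM = lambda x: -2.0 / (x - r1)**3
G  = lambda x: M(x) * F(x)
dG = lambda x: dM(x) * F(x) + M(x) * dF(x)

r2 = newton(G, dG, x0)             # same initial guess, different root
print("roots found from the same initial guess:", r1, r2)
```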

Thu, 12 Feb 2015

14:00 - 15:00
L5

The evolution of the universe recreated in a supercomputer

Professor Christian Klingenberg
(University of Wuerzburg)
Abstract

In this talk we will describe the steps towards self-consistent computer simulations of the evolution of the universe beginning soon after the Big Bang and ending with the formation of realistic stellar systems like the Milky Way. This is a multi-scale problem of vast proportions. The first step has been the Millennium Simulation, one of the largest and most successful numerical simulations of the Universe ever carried out. Now we are in the midst of the next step, where this is carried to a much higher level of physical fidelity on the latest supercomputing platforms. This talk will illustrate how the role of mathematics is essential in this endeavor. Computer simulations will also be shown. This is joint work with, among others, Volker Springel.

 

Thu, 05 Feb 2015

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Rational Krylov Approximation of Matrix Functions and Applications

Dr Stefan Guettel
(Manchester University)
Abstract

Some problems in scientific computing, like the forward simulation of electromagnetic waves in geophysical prospecting, can be
solved via approximation of f(A)b, the action of a large matrix function f(A) onto a vector b. Iterative methods based on rational Krylov
spaces are powerful tools for these computations, and the choice of parameters in these methods is an active area of research.
We provide an overview of different approaches for obtaining optimal parameters, with an emphasis on the exponential and resolvent function, and the square root. We will discuss applications of the rational Arnoldi method for iteratively generating near-optimal absorbing boundary layers for indefinite Helmholtz problems, and for rational least squares vector fitting.
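
As a flavour of these computations, a single-pole rational Krylov (shift-and-invert Arnoldi) approximation of exp(A)b is sketched below; the pole and the test matrix are ad hoc choices, not the optimal parameters discussed in the talk:

```python
# Shift-and-invert Arnoldi: a rational Krylov approximation of f(A)b built from
# the Krylov space of (A - sigma*I)^{-1}, here with f = exp and one ad hoc pole.
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

rng = np.random.default_rng(4)
n, m, sigma = 400, 20, 1.0
A = -np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

lu = lu_factor(A - sigma * np.eye(n))            # factor the shifted matrix once
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
V[:, 0] = b / np.linalg.norm(b)
for j in range(m):                               # Arnoldi on (A - sigma I)^{-1}
    w = lu_solve(lu, V[:, j])
    for i in range(j + 1):                       # modified Gram-Schmidt
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

Vm = V[:, :m]
Am = Vm.T @ A @ Vm                               # Galerkin projection of A
fAb_approx = Vm @ (expm(Am) @ (Vm.T @ b))
fAb_exact = expm(A) @ b
print("relative error:",
      np.linalg.norm(fAb_approx - fAb_exact) / np.linalg.norm(fAb_exact))
```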

Thu, 29 Jan 2015

14:00 - 15:00
L5

High-order approximations for some classical Gaussian quadrature

Dr Ignace Bogaert
(University of Ghent)
Abstract

Gaussian quadrature rules are of theoretical and practical interest because of their role in numerical integration and interpolation. For general weighting functions, their computation can be performed with the Golub-Welsch algorithm or one of its refinements. However, for the specific case of Gauss-Legendre quadrature, computation methods based on asymptotic series representations of the Legendre polynomials have recently been proposed. 
For large quadrature rules, these methods provide superior accuracy and speed at the cost of generality. We provide an overview of the progress that was made with these asymptotic methods, focusing on the ideas and asymptotic formulas that led to them. 
Finally, the limited generality will be discussed with Gauss-Jacobi quadrature rules as a prominent possibility for extension.
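
For reference, the classical Golub-Welsch computation of Gauss-Legendre nodes and weights (the baseline that the asymptotic methods accelerate for large n) can be sketched as follows:

```python
# Golub-Welsch for Gauss-Legendre quadrature: nodes are the eigenvalues of the
# symmetric tridiagonal Jacobi matrix, weights come from the first components
# of its eigenvectors.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def gauss_legendre(n):
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)        # off-diagonal of the Jacobi matrix
    nodes, vecs = eigh_tridiagonal(np.zeros(n), beta)
    weights = 2.0 * vecs[0, :]**2               # mu_0 = integral of 1 over [-1,1] = 2
    return nodes, weights

x, w = gauss_legendre(12)
print("integral of x^4 on [-1,1]:", w @ x**4, " exact:", 2.0 / 5.0)
print("matches numpy.leggauss:",
      np.allclose(x, np.polynomial.legendre.leggauss(12)[0]))
```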

Thu, 22 Jan 2015

14:00 - 15:00
L5

Electron correlation in van der Waals interactions

Professor Ridgway Scott
(University of Chicago)
Abstract
We examine a technique of Slater and Kirkwood which provides an exact resolution of the asymptotic behavior of the van der Waals attraction between two hydrogen atoms. We modify their technique to make the problem more tractable analytically and more easily solvable by numerical methods. Moreover, we prove rigorously that this approach provides an exact solution for the asymptotic electron correlation. The proof makes use of recent results that utilize the Feshbach-Schur perturbation technique. We provide visual representations of the asymptotic electron correlation (entanglement) based on the use of Laguerre approximations.
Thu, 04 Dec 2014

14:00 - 15:00
L5

Is the Helmholtz equation really sign-indefinite?

Dr Euan Spence
(University of Bath)
Abstract

The usual variational formulations of the Helmholtz equation are sign-indefinite (i.e. not coercive). In this talk, I will argue that this indefiniteness is not an inherent feature of the Helmholtz equation itself, only of its standard formulations. I will do this by presenting new sign-definite formulations of several Helmholtz boundary value problems.

This is joint work with Andrea Moiola (Reading).
 

Thu, 27 Nov 2014

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Incomplete Cholesky preconditioners based on orthogonal dropping : theory and practice

Artem Napov
(Universite Libre de Bruxelles)
Abstract

Incomplete Cholesky factorizations are commonly used as black-box preconditioners for the iterative solution of large sparse symmetric positive definite linear systems. Traditionally, incomplete factorizations are obtained by dropping (i.e., replacing by zero) some entries of the factors during the factorization process. Here we consider a less common way to approximate the factors: through low-rank approximations of some off-diagonal blocks. We focus more specifically on approximation schemes that satisfy the orthogonality condition: the approximation should be orthogonal to the corresponding approximation error.

The resulting incomplete Cholesky factorizations have attractive theoretical properties. First, the underlying factorization process can be shown to be breakdown-free. Further, the condition number of the preconditioned system, which characterizes the convergence rate of standard iterative schemes, can be shown to be bounded as a function of the accuracy of the individual approximations. Hence, such a bound can benefit from better approximations, but also from some algorithmic peculiarities. Finally, the above results can be shown to hold for any symmetric positive definite system matrix.

On the practical side, we consider a particular variant of the preconditioner. It relies on a nested dissection ordering of the unknowns to ensure an attractive memory usage and operation count. Further, it exploits in an algebraic way the low-rank structure present in system matrices that arise from PDE discretizations. A preliminary implementation of the method is compared with similar Cholesky and incomplete Cholesky factorizations based on dropping of individual entries.
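
The orthogonality condition mentioned above is satisfied, for example, by replacing an off-diagonal block with its truncated SVD: the retained part is then orthogonal to the approximation error in the Frobenius inner product. A small numerical check (illustrative only):

```python
# Orthogonal low-rank dropping: if an off-diagonal block B is replaced by its
# rank-k truncated SVD B_k, the error B - B_k is Frobenius-orthogonal to B_k.
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((60, 40))

U, s, Vt = np.linalg.svd(B, full_matrices=False)
k = 10
Bk = U[:, :k] * s[:k] @ Vt[:k, :]                  # rank-k approximation
E = B - Bk                                         # approximation error

print("Frobenius inner product <Bk, E>:", np.sum(Bk * E))   # ~ 0 (orthogonal)
print("relative error:", np.linalg.norm(E) / np.linalg.norm(B))
```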

Thu, 20 Nov 2014

14:00 - 15:00
L5

The Dynamic Dictionary of Mathematical Functions

Dr Marc Mezzarobba
(CNRS and Ecole Normale Superieure)
Abstract

The Dynamic Dictionary of Mathematical Functions (or DDMF, http://ddmf.msr-inria.inria.fr/) is an interactive website on special functions inspired by reference books such as the NIST Handbook of Special Functions. The originality of the DDMF is that each of its “chapters” is automatically generated from a short mathematical description of the corresponding function.

To make this possible, the DDMF focuses on so-called D-finite (or holonomic) functions, i.e., complex analytic solutions of linear ODEs with polynomial coefficients. D-finite functions include in particular most standard elementary functions (exp, log, sin, sinh, arctan...) as well as many of the classical special functions of mathematical physics (Airy functions, Bessel functions, hypergeometric functions...). A function of this class can be represented by a finite amount of data (a differential equation along with sufficiently many initial values), 
and this representation makes it possible to develop a computer algebra framework that deals with the whole class in a unified way, instead of ad hoc algorithms and code for each particular function. The DDMF attempts to put this idea into practice.

In this talk, I will present the DDMF, some of the algorithms and software libraries behind it, and ongoing projects based on similar ideas, with an emphasis on symbolic-numeric algorithms.

Thu, 13 Nov 2014

14:00 - 15:00
L5

Quadrature in infinite dimensions and applications in uncertainty quantification

Professor Rob Scheichl
(University of Bath)
Abstract

The coefficients in mathematical models of physical processes are often impossible to determine fully or accurately, and are hence subject to uncertainty. It is of great importance to quantify the uncertainty in the model outputs based on the (uncertain) information that is available on the model inputs. This invariably leads to very high dimensional quadrature problems associated with the computation of statistics of quantities of interest, such as the time it takes a pollutant plume in an uncertain subsurface flow problem to reach the boundary of a safety region or the buckling load of an airplane wing. Higher order methods, such as stochastic Galerkin or polynomial chaos methods, suffer from the curse of dimensionality and when the physical models themselves are complex and computationally costly, they become prohibitively expensive in higher dimensions. Instead, some of the most promising approaches to quantify uncertainties in continuum models are based on Monte Carlo sampling and the “multigrid philosophy”. Multilevel Monte Carlo (MLMC) Methods have been introduced recently and successfully applied to many model problems, producing significant gains. In this talk I want to recall the classical MLMC method and then show how the gains can be improved further (significantly) by using quasi-Monte Carlo (QMC) sampling rules. More importantly the dimension independence and the improved gains can be justified rigorously for an important model problem in subsurface flow. To achieve uniform bounds, independent of the dimension, it is necessary to work in infinite dimensions and to study quadrature in sequence spaces. I will present the elements of this new theory for the case of lognormal random coefficients in a diffusion problem and support the theory with numerical experiments.

Thu, 06 Nov 2014

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Tomographic problems as linear algebra

Bill Lionheart
(Manchester University)
Abstract

For many tomographic imaging problems there are explicit inversion formulas, and depending on the completeness of the data these are unstable to differing degrees. Increasingly we are solving tomographic problems as though they were any other linear inverse problem using numerical linear algebra. I will illustrate the use of numerical singular value decomposition to explore the (in)stability for various problems. I will also show how standard techniques from numerical linear algebra, such as conjugate gradient least squares, can be employed with systematic regularization compared with the ad hoc use of slowly convergent iterative methods more traditionally used in computed tomography. I will mainly illustrate the talk with examples from three dimensional x-ray tomography but I will also touch on tensor tomography problems.
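
As an illustration of the systematic-regularization point, a damped least-squares solve with a Krylov method might look as follows; the matrix here is a random stand-in, not a real projection operator:

```python
# Solving a linear inverse problem min ||Ax - b||^2 + lam^2 ||x||^2 with a
# Krylov least-squares method (damped LSQR, mathematically a regularised
# CGLS analogue).  Generic sketch -- A is not a real tomography matrix.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(6)
m, n = 800, 400
A = sp.random(m, n, density=0.05, random_state=7)       # stand-in for a projection matrix
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)           # noisy data

lam = 0.1                                                # Tikhonov parameter
x = lsqr(A, b, damp=lam)[0]                              # damped least-squares solve
print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```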
 

Thu, 30 Oct 2014

14:00 - 15:00
L5

Polynomial hulls, low rank perturbations and multicentric calculus

Professor Olavi Nevanlinna
(Aalto University)
Abstract

We outline a path from polynomial numerical hulls to multicentric calculus for evaluating f(A). Consider
$$V_p(A) = \{ z \in \mathbb{C} : |p(z)| \le \|p(A)\| \}$$
where p is a polynomial and A a bounded linear operator (or matrix). Intersecting these sets over polynomials of degree 1 gives the closure of the numerical range, while intersecting over all polynomials gives the spectrum of A, with possible holes filled in.
Outside any set $V_p(A)$ one can write the resolvent down explicitly and this leads to multicentric holomorphic functional calculus.
The spectrum, pseudospectrum or the polynomial numerical hulls can move rapidly in low rank perturbations. However, this happens in a very controlled way and when measured correctly one gets an identity which shows e.g. the following: if you have a low-rank homotopy between self-adjoint and quasinilpotent, then the identity forces the nonnormality to increase in exact compensation with the spectrum shrinking.
In this talk we shall mention how the multicentric calculus leads to a nontrivial extension of the von Neumann theorem
$$\|f(A)\| \le \sup_{|z| \le 1} |f(z)|$$
where A is a contraction in a Hilbert space, and conclude with some new results on (nonholomorphic) functional calculus for operators for which p(A) is normal for a nontrivial polynomial p. Notice that this is always true for matrices.

 

Thu, 23 Oct 2014

14:00 - 15:00
L5

Stabilised finite element methods for non-symmetric, non-coercive and ill-posed problems

Professor Erik Burman
(UCL)
Abstract

In numerical analysis the design and analysis of computational methods is often based on, and closely linked to, a well-posedness result for the underlying continuous problem. In particular, the continuous dependence of the continuous model is inherited by the computational method when such an approach is used. In this talk our aim is to design a stabilised finite element method that can exploit continuous dependence of the underlying physical problem without making use of a standard well-posedness result such as the Lax-Milgram lemma or the Babuska-Brezzi theorem. This is of particular interest for inverse problems or data assimilation problems which may not enter the framework of the above-mentioned well-posedness results, but can nevertheless satisfy some weak continuous dependence properties. First we will discuss non-coercive elliptic and hyperbolic equations, where the discrete problem can be ill-posed even for well-posed continuous problems, and then we will discuss the linear elliptic Cauchy problem as an example of an ill-posed problem where there are continuous dependence results available that are suitable for the framework that we propose.

Thu, 16 Oct 2014

14:00 - 15:00
L5

Adjoint-based optimisation for flow analysis and flow control

Professor Peter Schmid
(Imperial College London)
Abstract

Gradient-based optimisation techniques have become a common tool in the analysis of fluid systems. They have been applied to replace and extend large-scale matrix decompositions to compute optimal amplification and optimal frequency responses in unstable and stable flows. We will show how to efficiently extract linearised and adjoint information directly from nonlinear simulation codes and how to use this information for determining common flow characteristics. We also extend this framework to deal with the optimisation of less common norms. Examples from aero-acoustics and mixing will be presented.

Thu, 09 Oct 2014

14:00 - 15:00
L5

Variational segmentation models for selective extraction of features in an image – challenges in modelling, algorithms and applications

Professor Ke Chen
(University of Liverpool)
Abstract

Mathematical imaging is not only a multidisciplinary research area but also a major cross-discipline subject within mathematical sciences as image analysis techniques involve differential geometry, optimization, nonlinear partial differential equations (PDEs), mathematical analysis, computational algorithms and numerical analysis. Segmentation refers to the essential problem in imaging and vision of automatically detecting objects in an image.

 

In this talk I first review various models and techniques in the variational framework that are used for segmentation of images, with the purpose of discussing the state of the art rather than giving a comprehensive survey. Then I introduce the practically demanding task of detecting local features in a large image and our recent segmentation methods using energy minimization and PDEs. To ensure uniqueness and fast solution, we reformulate one non-convex functional as a convex one and further consider how to adapt the additive operator splitting method for the subsequent solution. Finally I show our preliminary work attempting segmentation of blurred images in the framework of joint deblurring and segmentation.

  

This talk covers joint work with Jianping Zhang, Lavdie Rada, Bryan Williams, Jack Spencer (Liverpool, UK), N. Badshah and H. Ali (Pakistan). Other collaborators in imaging in general include T. F. Chan, R. H. Chan, B. Yu,  L. Sun, F. L. Yang (China), C. Brito (Mexico), N. Chumchob (Thailand),  M. Hintermuller (Germany), Y. Q. Dong (Denmark), X. C. Tai (Norway) etc. [Related publications from   http://www.liv.ac.uk/~cmchenke ]


Thu, 19 Jun 2014
14:00
Rutherford Appleton Laboratory, nr Didcot

Preconditioning and deflation techniques for interior point methods

Dr Rachael Tappenden
(Edinburgh University)
Abstract

The accurate and efficient solution of linear systems Ax = b is very important in many engineering and technological applications, and systems of this form also arise as subproblems within other algorithms. In particular, this is true for interior point methods (IPM), where the Newton system must be solved to find the search direction at each iteration. Solving this system is a computational bottleneck of an IPM, and in this talk I will explain how preconditioning and deflation techniques can be used to lessen this computational burden.

This is joint work with Jacek Gondzio.

Thu, 12 Jun 2014
14:00
L5

Cyclic Schemes for PDE-Based Image Analysis

Professor Joachim Weickert
(Universität des Saarlandes)
Abstract

Many successful methods in image processing and computer vision involve parabolic and elliptic partial differential equations (PDEs). Thus, there is a growing demand for simple and highly efficient numerical algorithms that work for a broad class of problems. Moreover, these methods should also be well-suited for low-cost parallel hardware such as GPUs.

In this talk we show that two of the simplest methods for the numerical analysis of PDEs can lead to remarkably efficient algorithms when they are only slightly modified: To this end, we consider cyclic variants of the explicit finite difference scheme for approximating parabolic problems, and of the Jacobi overrelaxation method for solving systems of linear equations.

Although cyclic algorithms have been around in the numerical analysis community for a long time, they have never been very popular for a number of reasons. We argue that most of these reasons have become obsolete and that cyclic methods ideally satisfy the needs of modern image processing applications. Interestingly this transfer of knowledge is not a one-way road from numerical analysis to image analysis: By considering a factorisation of general smoothing filters, we introduce novel, signal processing based ways of deriving cycle parameters. They lead to hitherto unexplored methods with alternative parameter cycles. These methods offer better smoothing properties than classical numerical concepts such as Super Time Stepping and the cyclic Richardson algorithm.

We present a number of prototypical applications that demonstrate the wide applicability of our cyclic algorithms. They include isotropic and anisotropic nonlinear diffusion processes, higher dimensional variational problems, and higher order PDEs.

Thu, 29 May 2014
14:00
L5

Atomistic/Continuum Multiscale Methods

Christoph Ortner
(University of Warwick)
Abstract

For many questions of scientific interest, all-atom molecular simulations are still out of reach, in particular in materials engineering where large numbers of atoms, and often expensive force fields, are required. A long standing challenge has been to construct concurrent atomistic/continuum coupling schemes that use atomistic models in regions of space where high accuracy is required, with computationally efficient continuum models (e.g., FEM) of the elastic far-fields.

Many different mechanisms for achieving such atomistic/continuum couplings have been developed. To assess their relative merits, in particular accuracy/cost ratio, I will present a numerical analysis framework. I will use this framework to analyse in more detail a particularly popular scheme (the BQCE scheme), identifying key approximation parameters which can then be balanced (in a surprising way) to obtain an optimised formulation.

Finally, I will demonstrate that this analysis shows how to remove a severe bottlenecks in the BQCE scheme, leading to a new scheme with optimal convergence rate.

Thu, 22 May 2014
14:00
L5

A finite element exterior calculus framework for the rotating shallow water equations

Dr Colin Cotter
(Imperial College, London)
Abstract

We describe discretisations of the shallow water equations on the sphere using the framework of finite element exterior calculus. The formulation can be viewed as an extension of the classical staggered C-grid energy-enstrophy conserving and energy-conserving/enstrophy-dissipating schemes which were defined on latitude-longitude grids. This work is motivated by the need to use pseudo-uniform grids on the sphere (such as an icosahedral grid or a cube grid) in order to achieve good scaling on massively parallel computers, and forms part of the multi-institutional UK “Gung Ho” project which aims to design a next generation dynamical core for the Met Office Unified Model climate and weather prediction system. The rotating shallow water equations are a single layer model that is used to benchmark the horizontal component of numerical schemes for weather prediction models.

We show, within the finite element exterior calculus framework, that it is possible to build numerical schemes with horizontal velocity and layer depth that have a conserved diagnostic potential vorticity field, by making use of the geometric properties of the scheme. The schemes also conserve energy and enstrophy, which arise naturally as conserved quantities out of a Poisson bracket formulation. We show that it is possible to modify the discretisation, motivated by physical considerations, so that enstrophy is dissipated, either by using the Anticipated Potential Vorticity Method, or by inducing stabilised advection schemes for potential vorticity such as SUPG or higher-order Taylor-Galerkin schemes. We illustrate our results with convergence tests and numerical experiments obtained from a FEniCS implementation on the sphere.

Thu, 15 May 2014
14:00
L5

Plane Wave Discontinuous Galerkin Methods: Exponential Convergence of the hp-version

Andrea Moiola
(Reading University)
Abstract

Computer simulation of the propagation and interaction of linear waves
is a core task in computational science and engineering.
The finite element method represents one of the most common
discretisation techniques for Helmholtz and Maxwell's equations, which
model time-harmonic acoustic and electromagnetic wave scattering.
At medium and high frequencies, resolution requirements and the
so-called pollution effect entail an excessive computational effort
and prevent standard finite element schemes from an effective use.
The wave-based Trefftz methods offer a possible way to deal with this
problem: trial and test functions are special solutions of the
underlying PDE inside each element, thus the information about the
frequency is directly incorporated in the discrete spaces.

This talk is concerned with a family of those methods: the so-called
Trefftz-discontinuous Galerkin (TDG) methods, which include the
well-known ultraweak variational formulation (UWVF).
We derive a general formulation of the TDG method for Helmholtz
impedance boundary value problems and we discuss its well-posedness
and quasi-optimality.
A complete theory for the (a priori) h- and p-convergence for plane
and circular/spherical wave finite element spaces has been developed,
relying on new best approximation estimates for the considered
discrete spaces.
In the two-dimensional case, on meshes with very general element
shapes geometrically graded towards domain corners, we prove
exponential convergence of the discrete solution in terms of number of
unknowns.

This is a joint work with Ralf Hiptmair, Christoph Schwab (ETH Zurich,
Switzerland) and Ilaria Perugia (Vienna, Austria).

Thu, 01 May 2014
14:00
L5

Adjoint sensitivity analysis in Thermoacoustics

Dr Matthew Juniper
(Cambridge)
Abstract

Thermoacoustic oscillations occur in combustion chambers when heat release oscillations lock into pressure oscillations. They were first observed in lamps in the 18th century, in rockets in the 1930s, and are now one of the most serious problems facing gas turbine manufacturers.

This theoretical and numerical study concerns an infinite-rate chemistry diffusion flame in a tube, which is a simple model for a flame in a combustion chamber. The problem is linearized around the non-oscillating state in order to derive the direct and adjoint equations governing the evolution of infinitesimal oscillations.

The direct equations are used to predict the frequency, growth rate, and mode shape of the most unstable thermoacoustic oscillations. The adjoint equations are then used to calculate how the frequency and growth rate change in response to (i) changes to the base state such as the flame shape or the composition of the fuel (ii) generic passive feedback mechanisms that could be added to the device. This information can be used to stabilize the system, which is verified by subsequent experiments.

This analysis reveals that, as expected from a simple model, the phase delay between velocity and heat-release fluctuations is the key parameter in determining the sensitivities. It also reveals that this thermo-acoustic system is exceedingly sensitive to changes in the base state. This analysis can be extended to more accurate models and is a promising new tool for the analysis and control of thermo-acoustic oscillations.

Thu, 24 Apr 2014
14:00
L4

Modeling of reactive events

Professor Eric Van den Eijnden
(New York University)
Abstract

Dynamics in nature often proceed in the form of reactive events, aka activated processes. The system under study spends very long periods of time in various metastable states; only very rarely does it transition from one such state to another. Understanding the dynamics of such events requires us to study the ensemble of transition paths between the different metastable states. Transition path theory (TPT) is a general mathematical framework developed for this purpose. It is also the foundation for developing modern numerical algorithms such as the string method for finding the transition pathways or milestoning to calculate the reaction rate, and it can also be used in the context of Markov State Models (MSMs). In this talk, I will review the basic ingredients of the transition path theory and discuss connections with transition state theory (TST) as well as approaches to metastability based on potential theory and large deviation theory. I will also discuss how the string method arises in order to find approximate solutions in the framework of the transition path theory, the connections between milestoning and TPT, and the way the theory help building MSMs. The concepts and methods will be illustrated using examples from molecular dynamics, material science and atmosphere/ocean sciences.

Thu, 13 Mar 2014

14:00 - 15:00
L5

Instance optimality of an AFEM with maximum marking strategy

Professor Christian Kreuzer
(Ruhr University Bochum)
Abstract

Adaptive finite element methods (AFEMs) with Dörfler's marking strategy are known to converge with optimal asymptotic rates. Practical experience shows that AFEMs with a maximum marking strategy produce optimal results, thereby being less sensitive to choices of the marking parameter.

In this talk, we prove that an AFEM with a modified maximum strategy is even instance optimal for the total error, i.e., for the sum of the error and the oscillation. This is a non-asymptotic optimality result. Our approach uses new techniques based on the minimisation of the Dirichlet energy and a newly developed tree structure of the nodes of admissible triangulations.
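
For concreteness, the two marking strategies compared above can be stated in a few lines, given a vector of elementwise error indicators; the Dörfler variant below (cumulative squared indicators) is one common convention, not necessarily the exact one used in the talk:

```python
# Maximum marking selects all elements within a factor theta of the largest
# error indicator; Doerfler marking selects a minimal set whose (squared)
# indicators carry a theta-fraction of the total.  Illustrative sketch only.
import numpy as np

def maximum_marking(eta, theta=0.5):
    return np.flatnonzero(eta >= theta * eta.max())

def doerfler_marking(eta, theta=0.5):
    order = np.argsort(eta)[::-1]                 # largest indicators first
    csum = np.cumsum(eta[order]**2)
    k = np.searchsorted(csum, theta * csum[-1]) + 1
    return order[:k]

rng = np.random.default_rng(8)
eta = rng.random(20)                              # mock elementwise indicators
print("maximum strategy marks:", maximum_marking(eta))
print("Doerfler strategy marks:", doerfler_marking(eta))
```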

Thu, 06 Mar 2014

14:00 - 15:00
L5

Kullback-Leibler Approximation Of Probability Measures

Professor Andrew Stuart
(University of Warwick)
Abstract

Many problems in the physical sciences require the determination of an unknown function from a finite set of indirect measurements. Examples include oceanography, oil recovery, water resource management and weather forecasting. The Bayesian approach to these problems is natural for many reasons, including the under-determined and ill-posed nature of the inversion, the noise in the data and the uncertainty in the differential equation models used to describe complex multiscale physics. The object of interest in the Bayesian approach is the posterior probability distribution on the unknown field [1].

However, the Bayesian approach presents a computationally formidable task, as it results in the need to probe a probability measure on a separable Banach space. Markov chain Monte Carlo (MCMC) methods may be used to achieve this [2], but can be prohibitively expensive. In this talk I will discuss approximation of probability measures by a Gaussian measure, looking for the closest approximation with respect to the Kullback-Leibler divergence. This methodology is widely used in machine learning [3]. In the context of target measures on separable Banach space which themselves have density with respect to a Gaussian, I will show how to make sense of the resulting problem in the calculus of variations [4]. Furthermore, I will show how the approximate Gaussians can be used to speed up MCMC sampling of the posterior distribution [5].

[1] A.M. Stuart. "Inverse problems: a Bayesian perspective." Acta Numerica 19 (2010). http://arxiv.org/abs/1302.6989

[2] S.L. Cotter, G.O. Roberts, A.M. Stuart and D. White. "MCMC methods for functions: modifying old algorithms to make them faster." Statistical Science 28 (2013). http://arxiv.org/abs/1202.0709

[3] C.M. Bishop. "Pattern recognition and machine learning." Springer, 2006.

[4] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber. "Kullback-Leibler approximations for measures on infinite dimensional spaces." http://arxiv.org/abs/1310.7845

[5] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber. "Algorithms for Kullback-Leibler approximation of probability measures in infinite dimensions." In preparation.
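
A toy one-dimensional analogue of the Kullback-Leibler approximation problem described above: find the Gaussian q = N(m, s^2) minimising KL(q || p) for a non-Gaussian target p, using quadrature on a grid and a generic optimiser (the target, grid and optimiser are arbitrary choices for the sketch):

```python
# Toy 1-D analogue of Kullback-Leibler approximation by a Gaussian measure.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def log_p(x):                                    # non-Gaussian target: mixture of two normals
    return np.log(0.7 * norm.pdf(x, -1.0, 0.6) + 0.3 * norm.pdf(x, 2.0, 0.8))

xs = np.linspace(-8, 10, 4001)
dx = xs[1] - xs[0]

def kl(params):                                  # KL(q || p) for q = N(m, s^2)
    m, log_s = params
    s = np.exp(log_s)
    q = norm.pdf(xs, m, s)
    integrand = q * (norm.logpdf(xs, m, s) - log_p(xs))
    return np.sum(integrand) * dx                # Riemann-sum approximation

res = minimize(kl, x0=np.array([0.0, 0.0]), method='Nelder-Mead')
m_opt, s_opt = res.x[0], np.exp(res.x[1])
print("best Gaussian approximation: mean %.3f, std %.3f, KL %.4f"
      % (m_opt, s_opt, res.fun))
```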

Thu, 27 Feb 2014

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Alternating minimal energy methods for linear systems in higher dimensions

Dr Dmitry Savostyanov
(University of Southampton)
Abstract

Where high-dimensional problems are concerned, few algorithms can break the curse of dimensionality and solve them efficiently and reliably. Among those, tensor product algorithms, which implement the idea of separation of variables for multi-index arrays (tensors), seem to be the most general and also very promising. They originated in quantum physics and chemistry and descend broadly from the density matrix renormalization group (DMRG) and matrix product states (MPS) formalisms. The same tensor formats were recently re-discovered in the numerical linear algebra (NLA) community as the tensor train (TT) format.

Algorithms developed in the quantum physics community are based on optimisation in tensor formats, performed successively over all components of the tensor format (i.e. all sites or modes). The DMRG/MPS schemes are very efficient but very difficult to analyse, and at the moment only local convergence results for the simplest algorithm are available. In the NLA community, a common approach is to use a classical iterative scheme (e.g. GMRES) and enforce compression to a tensor format at every step. The formal analysis is quite straightforward, but the tensor ranks of the vectors which span the Krylov subspace grow rapidly with iterations, and the methods struggle in practice.

The first attempt to merge classical iterative algorithms and DMRG/MPS methods was made by White (2005), where the second Krylov vector is used to expand the search space on the optimisation step.
The idea proved to be useful, but the implementation was based on a fair amount of physical intuition, and the algorithm is not completely justified.

We have recently proposed the AMEn algorithm for linear systems, which also injects the gradient direction in the optimisation step, but in a way that allows us to prove the global convergence of the resulting scheme. The scheme can easily be applied to the computation of the ground state --- the differences to the algorithm of S. White are emphasized in Dolgov and Savostyanov (2013). The AMEn scheme is already acknowledged in the NLA community --- for example, it was recently applied to the computation of extreme eigenstates by Kressner, Steinlechner and Uschmajew (2013), using the block-TT format proposed in Dolgov, Khoromskij, Oseledets and Savostyanov (2014).

To date, the AMEn algorithm has been applied
 - to simulate the NMR spectra of large molecules (such as ubiquitin),
 - to solve the Fokker-Planck equation for the non-Newtonian polymeric flows,
 - to the chemical master equation describing the mesoscopic model of gene regulative networks,
 - to solve the Heisenberg model problem for a periodic spin chain.
We aim to extend this framework and the analysis to other problems of NLA: eigenproblems, time-dependent problems, high-dimensional interpolation, and matrix functions, as well as to a wider class of high-dimensional problems.

This is joint work with Sergey Dolgov from the Max-Planck Institute for Mathematics in the Sciences, Leipzig, Germany.

Thu, 13 Feb 2014

14:00 - 15:00
L5

Finite element approximation of a quasi-static model of rock detachment

Dr Leonardo Figueroa
(Universidad de Concepción)
Abstract

We report on a numerical implementation of a quasi-static model of rock detachment based on Allaire, Jouve and Van Goethem's implementation of Francfort and Marigo's model of damage in brittle solids. As such, local minimizers of a cost functional involving both stored elastic energy and a damage penalization term are sought by using a procedure which alternates between approximately solving a linear elasticity system and advancing a transport equation for a level set function describing the loci of still-attached rock. We pay special attention to the mixed finite element method used in the approximation of the linear elasticity system.

Thu, 06 Feb 2014

14:00 - 15:00
L5

Approximation on surfaces with radial basis functions: from global to local methods

Professor Grady Wright
(Boise State University)
Abstract

Radial basis function (RBF) methods are becoming increasingly popular for numerically solving partial differential equations (PDEs) because they are geometrically flexible, algorithmically accessible, and can be highly accurate. There have been many successful applications of these techniques to various types of PDEs defined on planar regions in two and higher dimensions, and to PDEs defined on the surface of a sphere. Originally, these methods were based on global approximations and their computational cost was quite high. Recent efforts have focused on reducing the computational cost by using "local" techniques, such as RBF generated finite differences (RBF-FD).

In this talk, we first describe our recent work on developing a new, high-order, global RBF method for numerically solving PDEs on relatively general surfaces, with a specific focus on reaction-diffusion equations. The method is quite flexible, only requiring a set of "scattered" nodes on the surface and the corresponding normal vectors to the surface at these nodes. We next present a new scalable local method based on the RBF-FD approach with this same flexibility. This is the first application of the RBF-FD method to general surfaces. We conclude with applications of these methods to some biologically relevant problems.

This talk represents joint work with Edward Fuselier (High Point University), Aaron Fogelson, Mike Kirby, and Varun Shankar (all at the University of Utah).
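
To give a flavour of the basic ingredients (scattered nodes plus a kernel evaluated on distances in R^3), a minimal global RBF interpolant on the unit sphere is sketched below; the kernel, shape parameter and test function are ad hoc choices, and this is not the RBF-FD PDE solver from the talk:

```python
# Global RBF interpolation on scattered nodes of the unit sphere: the only
# ingredients are the node set and a kernel evaluated on distances in R^3.
import numpy as np

rng = np.random.default_rng(9)
N = 400
X = rng.standard_normal((N, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)        # scattered nodes on the sphere

def target(P):                                       # smooth test function on the sphere
    return np.cos(3 * P[:, 0]) * P[:, 2] + P[:, 1]**2

eps = 3.0                                            # shape parameter (ad hoc choice)
def kernel(P, Q):
    d2 = np.sum((P[:, None, :] - Q[None, :, :])**2, axis=-1)
    return np.exp(-(eps**2) * d2)                    # Gaussian RBF

coeffs = np.linalg.solve(kernel(X, X), target(X))    # interpolation coefficients

Y = rng.standard_normal((2000, 3))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)        # evaluation points
err = kernel(Y, X) @ coeffs - target(Y)
print("max interpolation error on the sphere:", np.abs(err).max())
```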

Thu, 23 Jan 2014

14:00 - 15:00
L5

Direct Search Based on Probabilistic Descent

Professor Luis Nunes Vicente
(University of Coimbra)
Abstract

Direct-search methods are a class of popular derivative-free algorithms characterized by evaluating the objective function using a step size and a number of (polling) directions. When applied to the minimization of smooth functions, the polling directions are typically taken from positive spanning sets, which in turn must have at least n+1 vectors in an n-dimensional variable space. In addition, to ensure the global convergence of these algorithms, the positive spanning sets used throughout the iterations must be uniformly non-degenerate in the sense of having a positive (cosine) measure bounded away from zero.

However, recent numerical results indicated that randomly generating the polling directions without imposing the positive spanning property can improve the performance of these methods, especially when the number of directions is chosen considerably less than n+1.

In this talk, we analyze direct-search algorithms when the polling directions are probabilistic descent, meaning that with a certain probability at least one of them is of descent type. Such a framework enjoys almost-sure global convergence. More interestingly, we will show a global decaying rate of $1/\sqrt{k}$ for the gradient size, with overwhelmingly high probability, matching the corresponding rate for the deterministic versions of the gradient method or of direct search. Our analysis helps to understand numerical behavior and the choice of the number of polling directions.

This is joint work with Clément Royer, Serge Gratton, and Zaikun Zhang.
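
A bare-bones direct-search iteration with randomly generated polling directions and a sufficient-decrease test, in the spirit of (but much simpler than) the probabilistic-descent framework analysed in the talk; the number of polling directions is deliberately below n+1:

```python
# Direct search with random polling: poll a few random unit directions per
# iteration, accept only under sufficient decrease, otherwise shrink the step.
import numpy as np

def direct_search(f, x0, n_dirs=2, alpha=1.0, c=1e-4, maxit=2000):
    x, fx = np.asarray(x0, float), f(x0)
    rng = np.random.default_rng(10)
    for _ in range(maxit):
        D = rng.standard_normal((n_dirs, x.size))
        D /= np.linalg.norm(D, axis=1, keepdims=True)      # random unit directions
        for d in D:                                        # polling step
            if f(x + alpha * d) < fx - c * alpha**2:       # sufficient decrease
                x, fx = x + alpha * d, f(x + alpha * d)
                alpha *= 2.0                               # successful: expand
                break
        else:
            alpha *= 0.5                                   # unsuccessful: contract
    return x, fx

rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
x, fx = direct_search(rosen, [-1.2, 1.0])
print("approximate minimiser:", x, " f =", fx)
```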

Thu, 05 Dec 2013

14:00 - 15:00
L5

Certified upper and lower bounds for the eigenvalues of the Maxwell operator

Dr Gabriel Barrenechea
(University of Strathclyde)
Abstract

We propose a strategy which allows computing eigenvalue enclosures for the Maxwell operator by means of the finite element method. The origins of this strategy can be traced back to over 20 years ago. One of its main features lies in the fact that it can be implemented on any type of regular mesh (structured or otherwise) and any type of elements (nodal or otherwise). In the first part of the talk we formulate a general framework which is free from spectral pollution and allows estimation of eigenfunctions.

We then prove the convergence of the method, which implies precise convergence rates for nodal finite elements. Various numerical experiments on benchmark geometries, with and without symmetries, are reported.

Thu, 28 Nov 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Block LU factorization with panel Rank Revealing Pivoting and its Communication Avoiding version

Dr Amal Khabou
(University of Manchester)
Abstract

We present a block LU factorization with panel rank revealing pivoting (block LU_PRRP), an algorithm based on strong rank revealing QR for the panel factorization. Block LU_PRRP is more stable than Gaussian elimination with partial pivoting (GEPP), with a theoretical upper bound of the growth factor of $(1+\tau b)^{(n/b)-1}$, where $b$ is the size of the panel used during the block factorization, $\tau$ is a parameter of the strong rank revealing QR factorization, and $n$ is the number of columns of the matrix. For example, if the size of the panel is $b = 64$ and $\tau = 2$, then $(1+2b)^{(n/b)-1} = (1.079)^{n-64} \ll 2^{n-1}$, where $2^{n-1}$ is the upper bound of the growth factor of GEPP. Our extensive numerical experiments show that the new factorization scheme is as numerically stable as GEPP in practice, but it is more resistant to some pathological cases where GEPP fails. We note that the block LU_PRRP factorization does only $O(n^2 b)$ additional floating point operations compared to GEPP.
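
The bounds quoted above can be compared numerically (in log10, to avoid overflow for large n):

```python
# Comparing the growth-factor bounds quoted above: (1 + tau*b)^(n/b - 1) for
# block LU_PRRP versus 2^(n-1) for GEPP, with the example values b = 64, tau = 2.
import math

b, tau = 64, 2
print("(1 + tau*b)^(1/b) =", (1 + tau * b) ** (1.0 / b))   # ~ 1.079, as in the abstract
for n in (256, 1024, 4096):
    log10_lu_prrp = (n / b - 1) * math.log10(1 + tau * b)
    log10_gepp = (n - 1) * math.log10(2)
    print(f"n = {n:4d}:  log10(LU_PRRP bound) = {log10_lu_prrp:7.1f},"
          f"  log10(GEPP bound) = {log10_gepp:7.1f}")
```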