Forthcoming events in this series


Thu, 29 Jan 2015

14:00 - 15:00
L5

High-order approximations for some classical Gaussian quadrature

Dr Ignace Bogaert
(University of Ghent)
Abstract

Gaussian quadrature rules are of theoretical and practical interest because of their role in numerical integration and interpolation. For general weight functions, their computation can be performed with the Golub-Welsch algorithm or one of its refinements. However, for the specific case of Gauss-Legendre quadrature, computation methods based on asymptotic series representations of the Legendre polynomials have recently been proposed. For large quadrature rules, these methods provide superior accuracy and speed at the cost of generality. We provide an overview of the progress made with these asymptotic methods, focusing on the ideas and asymptotic formulas that led to them. Finally, the limited generality will be discussed, with Gauss-Jacobi quadrature rules as a prominent possibility for extension.
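
To make the contrast concrete, here is a minimal sketch of the asymptotics-plus-Newton approach (my illustration with assumed parameter choices, not Bogaert's actual algorithm): a classical asymptotic approximation of the roots of the Legendre polynomial P_n seeds a few Newton steps, avoiding the symmetric eigenvalue problem of Golub-Welsch entirely.

```python
# Sketch: Gauss-Legendre nodes/weights from an asymptotic initial guess
# plus Newton refinement (illustrative; not the speaker's code).
import numpy as np

def legendre_and_deriv(n, x):
    # Evaluate P_n and P_n' via the three-term recurrence.
    p0, p1 = np.ones_like(x), x.copy()
    for j in range(2, n + 1):
        p0, p1 = p1, ((2*j - 1)*x*p1 - (j - 1)*p0)/j
    return p1, n*(x*p1 - p0)/(x*x - 1.0)

def gauss_legendre(n, newton_steps=3):
    k = np.arange(1, n + 1)
    # Classical asymptotic approximation of the roots of P_n
    x = (1 - 1/(8*n**2) + 1/(8*n**3))*np.cos(np.pi*(4*k - 1)/(4*n + 2))
    for _ in range(newton_steps):
        p, dp = legendre_and_deriv(n, x)
        x = x - p/dp                      # Newton step towards the true root
    _, dp = legendre_and_deriv(n, x)
    w = 2.0/((1.0 - x*x)*dp*dp)           # standard weight formula
    return x, w

x, w = gauss_legendre(50)
print(w @ x**4, 2/5)   # integral of x^4 over [-1, 1]
```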

Thu, 22 Jan 2015

14:00 - 15:00
L5

Electron correlation in van der Waals interactions

Professor Ridgway Scott
(University of Chicago)
Abstract
We examine a technique of Slater and Kirkwood which provides an exact resolution of the asymptotic behavior of the van der Waals attraction between two hydrogen atoms. We modify their technique to make the problem more tractable analytically and more easily solvable by numerical methods. Moreover, we prove rigorously that this approach provides an exact solution for the asymptotic electron correlation. The proof makes use of recent results that utilize the Feshbach-Schur perturbation technique. We provide visual representations of the asymptotic electron correlation (entanglement) based on the use of Laguerre approximations.

Thu, 04 Dec 2014

14:00 - 15:00
L5

Is the Helmholtz equation really sign-indefinite?

Dr Euan Spence
(University of Bath)
Abstract

The usual variational formulations of the Helmholtz equation are sign-indefinite (i.e. not coercive). In this talk, I will argue that this indefiniteness is not an inherent feature of the Helmholtz equation itself, only of its standard formulations. I will do this by presenting new sign-definite formulations of several Helmholtz boundary value problems.
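
The indefiniteness is easy to observe numerically. A small sketch (my illustration with an assumed mesh and wavenumber): the standard P1 Galerkin matrix for -u'' - k^2 u on (0,1) is K - k^2 M, and for k above the first Dirichlet eigenvalue it has eigenvalues of both signs.

```python
# Sketch: the standard Galerkin matrix K - k^2 M for the 1D Helmholtz
# equation is indefinite once k is large enough (illustrative parameters).
import numpy as np

n, k = 50, 10.0    # interior grid points and wavenumber (assumed values)
h = 1.0/(n + 1)
K = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))/h        # stiffness
M = h*(4*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))/6      # mass
A = K - k**2*M
eig = np.linalg.eigvalsh(A)
print(eig.min(), eig.max())   # one negative, one positive: sign-indefinite
```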

This is joint work with Andrea Moiola (Reading).

Thu, 27 Nov 2014

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Incomplete Cholesky preconditioners based on orthogonal dropping: theory and practice

Artem Napov
(Universite Libre de Bruxelles)
Abstract

Incomplete Cholesky factorizations are commonly used as black-box preconditioners for the iterative solution of large sparse symmetric positive definite linear systems. Traditionally, incomplete factorizations are obtained by dropping (i.e., replacing by zero) some entries of the factors during the factorization process. Here we consider a less common way to approximate the factors: through low-rank approximations of some off-diagonal blocks. We focus more specifically on approximation schemes that satisfy the orthogonality condition: the approximation should be orthogonal to the corresponding approximation error.

The resulting incomplete Cholesky factorizations have attractive theoretical properties. First, the underlying factorization process can be shown to be breakdown-free. Further, the condition number of the preconditioned system, which characterizes the convergence rate of standard iterative schemes, can be bounded as a function of the accuracy of the individual approximations. Such a bound can therefore benefit from better approximations, but also from some algorithmic peculiarities. Finally, the above results can be shown to hold for any symmetric positive definite system matrix.

On the practical side, we consider a particular variant of the preconditioner. It relies on a nested dissection ordering of the unknowns to ensure attractive memory usage and operation counts. Further, it exploits in an algebraic way the low-rank structure present in system matrices that arise from PDE discretizations. A preliminary implementation of the method is compared with similar Cholesky and incomplete Cholesky factorizations based on the dropping of individual entries.
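
As I read the orthogonality condition, a truncated SVD of an off-diagonal block is its canonical instance: the discarded part is orthogonal, in the trace inner product, to the retained low-rank approximation. A small sketch with synthetic data:

```python
# Sketch: low-rank "orthogonal dropping" of a block via truncated SVD
# (illustrative; the talk's preconditioner is built around this condition).
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((40, 30)) @ np.diag(0.5**np.arange(30)) \
    @ rng.standard_normal((30, 30))      # a block with decaying spectrum
U, s, Vt = np.linalg.svd(B, full_matrices=False)
r = 5                                    # target rank (assumed)
B_r = (U[:, :r]*s[:r]) @ Vt[:r]          # low-rank approximation
E = B - B_r                              # approximation error
print(np.trace(E.T @ B_r))               # ~0: error orthogonal to approximation
```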

Thu, 20 Nov 2014

14:00 - 15:00
L5

The Dynamic Dictionary of Mathematical Functions

Dr Marc Mezzarobba
(CNRS and Ecole Normale Superieure)
Abstract

The Dynamic Dictionary of Mathematical Functions (or DDMF, http://ddmf.msr-inria.inria.fr/) is an interactive website on special functions inspired by reference books such as the NIST Handbook of Special Functions. The originality of the DDMF is that each of its “chapters” is automatically generated from a short mathematical description of the corresponding function.

To make this possible, the DDMF focuses on so-called D-finite (or holonomic) functions, i.e., complex analytic solutions of linear ODEs with polynomial coefficients. D-finite functions include in particular most standard elementary functions (exp, log, sin, sinh, arctan...) as well as many of the classical special functions of mathematical physics (Airy functions, Bessel functions, hypergeometric functions...). A function of this class can be represented by a finite amount of data (a differential equation along with sufficiently many initial values), and this representation makes it possible to develop a computer algebra framework that deals with the whole class in a unified way, instead of ad hoc algorithms and code for each particular function. The DDMF attempts to put this idea into practice.
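
A toy instance of this representation (my sketch, not the DDMF's own code): the Airy function Ai is D-finite, satisfying y'' - xy = 0, and the differential equation together with two initial values determines every Taylor coefficient through a short recurrence.

```python
# Sketch: a D-finite function (Airy Ai) reconstructed from its finite data:
# the ODE y'' = x*y plus the initial values y(0) = Ai(0), y'(0) = Ai'(0).
from math import gamma
import numpy as np

Ai0  = 1.0/(3**(2/3)*gamma(2/3))     # Ai(0)
Aip0 = -1.0/(3**(1/3)*gamma(1/3))    # Ai'(0)

N = 40
a = np.zeros(N)
a[0], a[1], a[2] = Ai0, Aip0, 0.0
for m in range(1, N - 2):
    a[m + 2] = a[m - 1]/((m + 1)*(m + 2))    # recurrence from y'' = x*y

x = 1.0
print(sum(a[n]*x**n for n in range(N)))      # ~0.13529, the value Ai(1)
```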

In this talk, I will present the DDMF, some of the algorithms and software libraries behind it, and ongoing projects based on similar ideas, with an emphasis on symbolic-numeric algorithms.

Thu, 13 Nov 2014

14:00 - 15:00
L5

Quadrature in infinite dimensions and applications in uncertainty quantification

Professor Rob Scheichl
(University of Bath)
Abstract

The coefficients in mathematical models of physical processes are often impossible to determine fully or accurately, and are hence subject to uncertainty. It is of great importance to quantify the uncertainty in the model outputs based on the (uncertain) information that is available on the model inputs. This invariably leads to very high-dimensional quadrature problems associated with the computation of statistics of quantities of interest, such as the time it takes a pollutant plume in an uncertain subsurface flow problem to reach the boundary of a safety region, or the buckling load of an airplane wing.

Higher-order methods, such as stochastic Galerkin or polynomial chaos methods, suffer from the curse of dimensionality, and when the physical models themselves are complex and computationally costly, they become prohibitively expensive in higher dimensions. Instead, some of the most promising approaches to quantify uncertainties in continuum models are based on Monte Carlo sampling and the “multigrid philosophy”. Multilevel Monte Carlo (MLMC) methods have been introduced recently and successfully applied to many model problems, producing significant gains.

In this talk I want to recall the classical MLMC method and then show how the gains can be improved further (significantly) by using quasi-Monte Carlo (QMC) sampling rules. More importantly, the dimension independence and the improved gains can be justified rigorously for an important model problem in subsurface flow. To achieve uniform bounds, independent of the dimension, it is necessary to work in infinite dimensions and to study quadrature in sequence spaces. I will present the elements of this new theory for the case of lognormal random coefficients in a diffusion problem and support the theory with numerical experiments.
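
The telescoping identity behind MLMC fits in a few lines. The sketch below uses an artificial quantity of interest with a prescribed bias decay; all parameter choices are illustrative, not the lognormal diffusion problem of the talk.

```python
# Sketch: multilevel Monte Carlo for E[P] with P_l of increasing accuracy,
# via E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] (toy problem, assumed data).
import numpy as np

rng = np.random.default_rng(1)

def P(level, z):
    # toy "discretised" output: bias decays like 2^(-2*level)
    return np.sin(z) + 2.0**(-2*level)*np.cos(z)

L = 4
N = [10000, 4000, 1600, 640, 256]    # fewer samples on expensive fine levels
est = 0.0
for l in range(L + 1):
    z = rng.standard_normal(N[l])    # same samples couple the two levels
    Y = P(l, z) - (P(l - 1, z) if l > 0 else 0.0)
    est += Y.mean()
print(est)   # approximates E[P_L] = 2^(-2L) * exp(-1/2) here
```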

Thu, 06 Nov 2014

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Tomographic problems as linear algebra

Bill Lionheart
(Manchester University)
Abstract

For many tomographic imaging problems there are explicit inversion formulas, and depending on the completeness of the data these are unstable to differing degrees. Increasingly we are solving tomographic problems as though they were any other linear inverse problem, using numerical linear algebra. I will illustrate the use of the numerical singular value decomposition to explore the (in)stability of various problems. I will also show how standard techniques from numerical linear algebra, such as conjugate gradient least squares, can be employed with systematic regularization, in contrast to the ad hoc use of the slowly convergent iterative methods more traditionally used in computed tomography. I will mainly illustrate the talk with examples from three-dimensional X-ray tomography, but I will also touch on tensor tomography problems.

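In the same spirit, a tiny experiment (an assumed toy blurring operator, not a real Radon transform) exhibits the singular value decay behind the instability, and a damped LSQR solve as a systematically regularised CGLS-type method:

```python
# Sketch: SVD reveals ill-posedness; damped LSQR (a CGLS-type method)
# gives Tikhonov-style regularisation (toy 1D deblurring, assumed data).
import numpy as np
from scipy.sparse.linalg import lsqr

n = 200
t = np.linspace(0, 1, n)
A = np.exp(-50.0*(t[:, None] - t[None, :])**2)/n   # smoothing kernel matrix
x_true = ((t > 0.3) & (t < 0.6)).astype(float)
b = A @ x_true + 1e-4*np.random.default_rng(2).standard_normal(n)

s = np.linalg.svd(A, compute_uv=False)
print(s[0]/s[20], s[0]/s[40])          # rapid singular value decay
x_reg = lsqr(A, b, damp=1e-3)[0]       # damp > 0 adds Tikhonov regularisation
```
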
Thu, 30 Oct 2014

14:00 - 15:00
L5

Polynomial hulls, low rank perturbations and multicentric calculus

Professor Olavi Nevanlinna
(Aalto University)
Abstract

We outline a path from polynomial numerical hulls to multicentric calculus for evaluating f(A). Consider
$$V_p(A) = \{\, z \in \mathbb{C} : |p(z)| \le \|p(A)\| \,\}$$
where p is a polynomial and A a bounded linear operator (or matrix). Intersecting these sets over polynomials of degree 1 gives the closure of the numerical range, while intersecting over all polynomials gives the spectrum of A, with possible holes filled in.
Outside any set V_p(A) one can write down the resolvent explicitly, and this leads to the multicentric holomorphic functional calculus.
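
As a toy numerical reading of the definition (my sketch, not from the talk), membership in V_p(A) can be tested pointwise on a grid:

```python
# Sketch: sample points z of the plane with |p(z)| <= ||p(A)|| for a single
# polynomial p; intersecting over polynomials shrinks these sets as described.
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                  # Jordan block with eigenvalue 1
p = np.polynomial.Polynomial([-1.0, 1.0])   # p(z) = z - 1, a degree-1 choice
bound = np.linalg.norm(A - np.eye(2), 2)    # spectral norm ||p(A)|| = 1
xs = np.linspace(-2.0, 2.0, 401)
Z = xs[None, :] + 1j*xs[:, None]            # grid in the complex plane
inside = np.abs(p(Z)) <= bound              # membership test for V_p(A)
print(inside.mean()*16)                     # ~pi: here V_p(A) = disc |z-1|<=1
```
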
The spectrum, pseudospectrum or polynomial numerical hulls can move rapidly under low-rank perturbations. However, this happens in a very controlled way, and when measured correctly one obtains an identity which shows, for example, the following: given a low-rank homotopy between a self-adjoint and a quasinilpotent operator, the identity forces the nonnormality to increase in exact compensation with the shrinking of the spectrum.
In this talk we shall mention how the multicentric calculus leads to a nontrivial extension of von Neumann's theorem
$$\|f(A)\| \le \sup_{|z| \le 1} |f(z)|,$$
where A is a contraction in a Hilbert space, and conclude with some new results on (nonholomorphic) functional calculus for operators for which p(A) is normal for some nontrivial polynomial p. Notice that the latter always holds for matrices.

Thu, 23 Oct 2014

14:00 - 15:00
L5

Stabilised finite element methods for non-symmetric, non-coercive and ill-posed problems

Professor Erik Burman
(UCL)
Abstract

In numerical analysis, the design and analysis of computational methods is often based on, and closely linked to, a well-posedness result for the underlying continuous problem. In particular, the continuous dependence on data of the continuous model is inherited by the computational method when such an approach is used. In this talk our aim is to design a stabilised finite element method that can exploit continuous dependence of the underlying physical problem without making use of a standard well-posedness result such as the Lax-Milgram lemma or the Babuska-Brezzi theorem. This is of particular interest for inverse problems or data assimilation problems, which may not fit the framework of the above-mentioned well-posedness results but can nevertheless satisfy some weak continuous dependence properties. First we will discuss non-coercive elliptic and hyperbolic equations, where the discrete problem can be ill-posed even for well-posed continuous problems, and then we will discuss the linear elliptic Cauchy problem as an example of an ill-posed problem where continuous dependence results are available that are suitable for the framework we propose.

Thu, 16 Oct 2014

14:00 - 15:00
L5

Adjoint-based optimisation for flow analysis and flow control

Professor Peter Schmid
(Imperial College London)
Abstract

Gradient-based optimisation techniques have become a common tool in the analysis of fluid systems. They have been applied to replace and extend large-scale matrix decompositions to compute optimal amplification and optimal frequency responses in unstable and stable flows. We will show how to efficiently extract linearised and adjoint information directly from nonlinear simulation codes and how to use this information for determining common flow characteristics. We also extend this framework to deal with the optimisation of less common norms. Examples from aero-acoustics and mixing will be presented.
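
As a minimal sketch of the adjoint mechanism in a steady, linear setting (a generic toy, not the speaker's codes): one adjoint solve yields the derivative of a scalar output with respect to a model parameter, confirmed by a finite difference.

```python
# Sketch: for J(u) = ||u||^2 subject to (A0 + m*I) u = f, the adjoint solve
# A^T lam = dJ/du gives dJ/dm = -lam^T (dA/dm) u (toy data, dA/dm = I).
import numpy as np

rng = np.random.default_rng(3)
n = 5
A0 = rng.standard_normal((n, n)) + n*np.eye(n)   # well-conditioned base matrix
f = rng.standard_normal(n)

def solve(m):
    return np.linalg.solve(A0 + m*np.eye(n), f)

m = 0.7
u = solve(m)
lam = np.linalg.solve((A0 + m*np.eye(n)).T, 2*u)   # adjoint solve, dJ/du = 2u
dJdm = -lam @ u                                    # since dA/dm = I
eps = 1e-6
fd = (np.linalg.norm(solve(m + eps))**2 - np.linalg.norm(u)**2)/eps
print(dJdm, fd)                                    # the two values agree
```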

Thu, 09 Oct 2014

14:00 - 15:00
L5

Variational segmentation models for selective extraction of features in an image – challenges in modelling, algorithms and applications

Professor Ke Chen
(University of Liverpool)
Abstract

Mathematical imaging is not only a multidisciplinary research area but also a major cross-discipline subject within the mathematical sciences, as image analysis techniques involve differential geometry, optimization, nonlinear partial differential equations (PDEs), mathematical analysis, computational algorithms and numerical analysis. Segmentation refers to the essential problem in imaging and vision of automatically detecting objects in an image.

In this talk I first review various models and techniques in the variational framework that are used for segmentation of images, with the aim of discussing the state of the art rather than giving a comprehensive survey. Then I introduce the practically demanding task of detecting local features in a large image and our recent segmentation methods using energy minimization and PDEs. To ensure uniqueness and fast solution, we reformulate one non-convex functional as a convex one and further consider how to adapt the additive operator splitting method for the subsequent solution. Finally I show our preliminary work on segmentation of blurred images in the framework of joint deblurring and segmentation.

This talk covers joint work with Jianping Zhang, Lavdie Rada, Bryan Williams, Jack Spencer (Liverpool, UK), N. Badshah and H. Ali (Pakistan). Other collaborators in imaging in general include T. F. Chan, R. H. Chan, B. Yu, L. Sun, F. L. Yang (China), C. Brito (Mexico), N. Chumchob (Thailand), M. Hintermuller (Germany), Y. Q. Dong (Denmark), X. C. Tai (Norway) etc. [Related publications from http://www.liv.ac.uk/~cmchenke ]
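
As a bare-bones illustration of the additive operator splitting (AOS) step mentioned above (my sketch for plain linear diffusion; the talk's setting is a nonlinear segmentation functional): one implicit 1D solve per axis, then an average, stable for any step size tau.

```python
# Sketch: one AOS step u_new = (1/2) * sum_l (I - 2*tau*A_l)^{-1} u for 2D
# linear diffusion (illustrative sizes and step; not the talk's model).
import numpy as np

n, tau = 64, 5.0
L1 = np.eye(n, k=1) - 2*np.eye(n) + np.eye(n, k=-1)   # 1D Laplacian, h = 1
M = np.eye(n) - 2.0*tau*L1                            # per-axis implicit matrix

u = np.zeros((n, n)); u[24:40, 24:40] = 1.0           # a blocky test image
ux = np.linalg.solve(M, u)                            # implicit sweep, axis 0
uy = np.linalg.solve(M, u.T).T                        # implicit sweep, axis 1
u_new = 0.5*(ux + uy)                                 # AOS average
print(u.std(), u_new.std())                           # contrast is smoothed
```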


Thu, 19 Jun 2014
14:00
Rutherford Appleton Laboratory, nr Didcot

Preconditioning and deflation techniques for interior point methods

Dr Rachael Tappenden
(Edinburgh University)
Abstract

The accurate and efficient solution of linear systems Ax = b is very important in many engineering and technological applications, and systems of this form also arise as subproblems within other algorithms. In particular, this is true for interior point methods (IPMs), where the Newton system must be solved to find the search direction at each iteration. Solving this system is a computational bottleneck of an IPM, and in this talk I will explain how preconditioning and deflation techniques can be used to lessen this computational burden.
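
A tiny illustration of the bottleneck (generic, with assumed data): in the normal equations form A D^2 A^T of the Newton system, the entries of D^2 spread over more and more orders of magnitude as the iterates approach the boundary, and the conditioning deteriorates accordingly; this is what preconditioning and deflation are meant to counter.

```python
# Sketch: conditioning of the IPM normal equations A D^2 A^T degrades as
# the scaling D^2 = X S^{-1} spreads out (toy data, assumed spreads).
import numpy as np

rng = np.random.default_rng(4)
m, n = 30, 80
A = rng.standard_normal((m, n))                 # constraint matrix (toy)
for spread in [1, 3, 6]:                        # growing x_i/s_i spread
    d2 = 10.0**rng.uniform(-spread, spread, n)  # diagonal of D^2
    M_ne = A @ (d2[:, None]*A.T)                # normal equations matrix
    print(spread, np.linalg.cond(M_ne))         # condition number blows up
```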

This is joint work with Jacek Gondzio.

Thu, 12 Jun 2014
14:00
L5

Cyclic Schemes for PDE-Based Image Analysis

Professor Joachim Weickert
(Universität des Saarlandes)
Abstract

Many successful methods in image processing and computer vision involve parabolic and elliptic partial differential equations (PDEs). Thus, there is a growing demand for simple and highly efficient numerical algorithms that work for a broad class of problems. Moreover, these methods should also be well-suited for low-cost parallel hardware such as GPUs.

In this talk we show that two of the simplest methods for the numerical analysis of PDEs can lead to remarkably efficient algorithms when they are only slightly modified: to this end, we consider cyclic variants of the explicit finite difference scheme for approximating parabolic problems, and of the Jacobi overrelaxation method for solving systems of linear equations.

Although cyclic algorithms have been around in the numerical analysis community for a long time, they have never been very popular for a number of reasons. We argue that most of these reasons have become obsolete and that cyclic methods ideally satisfy the needs of modern image processing applications. Interestingly, this transfer of knowledge is not a one-way road from numerical analysis to image analysis: by considering a factorisation of general smoothing filters, we introduce novel, signal processing based ways of deriving cycle parameters. They lead to hitherto unexplored methods with alternative parameter cycles. These methods offer better smoothing properties than classical numerical concepts such as Super Time Stepping and the cyclic Richardson algorithm.

We present a number of prototypical applications that demonstrate the wide applicability of our cyclic algorithms. They include isotropic and anisotropic nonlinear diffusion processes, higher dimensional variational problems, and higher order PDEs.
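
For the classical baseline named at the end, here is a sketch of the cyclic Richardson idea (my reconstruction with assumed problem data; the talk's novel parameter cycles differ): the relaxation weights cycle through reciprocals of Chebyshev points in the spectral interval, so one full cycle damps all error components.

```python
# Sketch: cyclic Richardson x += w_i (b - A x) with Chebyshev-based weights
# for a 1D Laplacian (illustrative sizes; not the talk's new cycles).
import numpy as np

n = 32
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
b = np.ones(n)
lmin = 2 - 2*np.cos(np.pi/(n + 1))                   # extreme eigenvalues
lmax = 2 - 2*np.cos(np.pi*n/(n + 1))

m = 16                                               # cycle length (assumed)
i = np.arange(m)
mu = 0.5*(lmin + lmax) + 0.5*(lmax - lmin)*np.cos(np.pi*(2*i + 1)/(2*m))
omega = 1.0/mu                                       # one cycle of weights

x = np.zeros(n)
for cycle in range(20):
    for w in omega:
        x += w*(b - A @ x)                           # Richardson step
print(np.linalg.norm(b - A @ x))                     # residual after 20 cycles
```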

Thu, 29 May 2014
14:00
L5

Atomistic/Continuum Multiscale Methods

Christoph Ortner
(University of Warwick)
Abstract

For many questions of scientific interest, all-atom molecular simulations are still out of reach, in particular in materials engineering, where large numbers of atoms and often expensive force fields are required. A long-standing challenge has been to construct concurrent atomistic/continuum coupling schemes that use atomistic models in regions of space where high accuracy is required, with computationally efficient continuum models (e.g., FEM) of the elastic far fields.

Many different mechanisms for achieving such atomistic/continuum couplings have been developed. To assess their relative merits, in particular accuracy/cost ratio, I will present a numerical analysis framework. I will use this framework to analyse in more detail a particularly popular scheme (the BQCE scheme), identifying key approximation parameters which can then be balanced (in a surprising way) to obtain an optimised formulation.

Finally, I will demonstrate that this analysis shows how to remove a severe bottleneck in the BQCE scheme, leading to a new scheme with an optimal convergence rate.

Thu, 22 May 2014
14:00
L5

A finite element exterior calculus framework for the rotating shallow water equations

Dr Colin Cotter
(Imperial College, London)
Abstract

We describe discretisations of the shallow water equations on the sphere using the framework of finite element exterior calculus. The formulation can be viewed as an extension of the classical staggered C-grid energy-enstrophy conserving and energy-conserving/enstrophy-dissipating schemes which were defined on latitude-longitude grids. This work is motivated by the need to use pseudo-uniform grids on the sphere (such as an icosahedral grid or a cube grid) in order to achieve good scaling on massively parallel computers, and forms part of the multi-institutional UK “Gung Ho” project which aims to design a next generation dynamical core for the Met Office Unified Model climate and weather prediction system. The rotating shallow water equations are a single layer model that is used to benchmark the horizontal component of numerical schemes for weather prediction models.

We show, within the finite element exterior calculus framework, that it is possible to build numerical schemes with horizontal velocity and layer depth that have a conserved diagnostic potential vorticity field, by making use of the geometric properties of the scheme. The schemes also conserve energy and enstrophy, which arise naturally as conserved quantities out of a Poisson bracket formulation. We show that it is possible to modify the discretisation, motivated by physical considerations, so that enstrophy is dissipated, either by using the Anticipated Potential Vorticity Method, or by inducing stabilised advection schemes for potential vorticity such as SUPG or higher-order Taylor-Galerkin schemes. We illustrate our results with convergence tests and numerical experiments obtained from a FEniCS implementation on the sphere.

Thu, 15 May 2014
14:00
L5

Plane Wave Discontinuous Galerkin Methods: Exponential Convergence of the hp-version

Andrea Moiola
(Reading University)
Abstract

Computer simulation of the propagation and interaction of linear waves is a core task in computational science and engineering. The finite element method represents one of the most common discretisation techniques for the Helmholtz and Maxwell equations, which model time-harmonic acoustic and electromagnetic wave scattering. At medium and high frequencies, resolution requirements and the so-called pollution effect entail an excessive computational effort and prevent the effective use of standard finite element schemes. Wave-based Trefftz methods offer a possible way to deal with this problem: trial and test functions are special solutions of the underlying PDE inside each element, so the information about the frequency is directly incorporated in the discrete spaces.

This talk is concerned with a family of those methods: the so-called Trefftz-discontinuous Galerkin (TDG) methods, which include the well-known ultraweak variational formulation (UWVF). We derive a general formulation of the TDG method for Helmholtz impedance boundary value problems and discuss its well-posedness and quasi-optimality. A complete theory of (a priori) h- and p-convergence for plane and circular/spherical wave finite element spaces has been developed, relying on new best approximation estimates for the discrete spaces considered. In the two-dimensional case, on meshes with very general element shapes geometrically graded towards domain corners, we prove exponential convergence of the discrete solution in terms of the number of unknowns.

This is joint work with Ralf Hiptmair, Christoph Schwab (ETH Zurich, Switzerland) and Ilaria Perugia (Vienna, Austria).
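
A small experiment in the spirit of such plane wave spaces (my own toy, not the TDG method itself): fitting a circular wave on the unit circle by plane waves exp(ik d.x) shows how quickly the best approximation error decays as directions are added.

```python
# Sketch: least-squares fit of a circular Helmholtz solution by plane waves
# with p equispaced directions (assumed wavenumber and sampling).
import numpy as np
from scipy.special import jv

k = 10.0
t = np.linspace(0, 2*np.pi, 200, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)]          # sample points on the unit circle
u = jv(3, k)*np.exp(3j*t)                  # circular wave J_3(kr)e^{i3t}, r = 1

for p in [8, 16, 24, 32]:
    phi = 2*np.pi*np.arange(p)/p           # p equispaced directions
    D = np.c_[np.cos(phi), np.sin(phi)]
    B = np.exp(1j*k*(pts @ D.T))           # plane waves sampled on the circle
    c = np.linalg.lstsq(B, u, rcond=None)[0]
    print(p, np.linalg.norm(u - B @ c))    # error drops sharply with p
```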

Thu, 01 May 2014
14:00
L5

Adjoint sensitivity analysis in Thermoacoustics

Dr Matthew Juniper
(Cambridge)
Abstract

Thermoacoustic oscillations occur in combustion chambers when heat release oscillations lock into pressure oscillations. They were first observed in lamps in the 18th century, in rockets in the 1930s, and are now one of the most serious problems facing gas turbine manufacturers.

This theoretical and numerical study concerns an infinite-rate chemistry diffusion flame in a tube, which is a simple model for a flame in a combustion chamber. The problem is linearized around the non-oscillating state in order to derive the direct and adjoint equations governing the evolution of infinitesimal oscillations.

The direct equations are used to predict the frequency, growth rate, and mode shape of the most unstable thermoacoustic oscillations. The adjoint equations are then used to calculate how the frequency and growth rate change in response to (i) changes to the base state such as the flame shape or the composition of the fuel (ii) generic passive feedback mechanisms that could be added to the device. This information can be used to stabilize the system, which is verified by subsequent experiments.

This analysis reveals that, as expected from a simple model, the phase delay between velocity and heat-release fluctuations is the key parameter in determining the sensitivities. It also reveals that this thermoacoustic system is exceedingly sensitive to changes in the base state. This analysis can be extended to more accurate models and is a promising new tool for the analysis and control of thermoacoustic oscillations.
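
The underlying mechanism, in plain matrix terms (a generic sketch, not the flame model): with the direct and adjoint eigenvectors in hand, the first-order shift of an eigenvalue under any perturbation dA follows from a single inner product, so one pair of solves prices every candidate feedback mechanism at once.

```python
# Sketch: adjoint-based eigenvalue sensitivity dlam = (y^T dA x)/(y^T x)
# for a real matrix A, checked against a direct recomputation (toy data).
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6))
lam, V = np.linalg.eig(A)
mu, W = np.linalg.eig(A.T)                 # left eigenvectors of A
i = np.argmax(lam.real)                    # the "most unstable" mode
j = np.argmin(np.abs(mu - lam[i]))         # matching left eigenpair
x, y = V[:, i], W[:, j]

dA = 1e-6*rng.standard_normal((6, 6))      # a small perturbation of the model
pred = (y @ dA @ x)/(y @ x)                # first-order adjoint prediction
new = np.linalg.eig(A + dA)[0]
print(pred, new[np.argmin(np.abs(new - lam[i]))] - lam[i])   # agree
```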

Thu, 24 Apr 2014
14:00
L4

Modeling of reactive events

Professor Eric Van den Eijnden
(New York University)
Abstract

Dynamics in nature often proceed in the form of reactive events, also known as activated processes. The system under study spends very long periods of time in various metastable states; only very rarely does it transition from one such state to another. Understanding the dynamics of such events requires us to study the ensemble of transition paths between the different metastable states. Transition path theory (TPT) is a general mathematical framework developed for this purpose. It is also the foundation for modern numerical algorithms such as the string method for finding the transition pathways, or milestoning for calculating the reaction rate, and it can also be used in the context of Markov State Models (MSMs). In this talk, I will review the basic ingredients of transition path theory and discuss connections with transition state theory (TST), as well as approaches to metastability based on potential theory and large deviation theory. I will also discuss how the string method arises in the search for approximate solutions within transition path theory, the connections between milestoning and TPT, and the way the theory helps in building MSMs. The concepts and methods will be illustrated using examples from molecular dynamics, materials science and atmosphere/ocean science.
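
One core object of TPT, the committor function, reduces to a single linear solve for a finite-state chain. A sketch with assumed toy dynamics, a random walk on a path graph:

```python
# Sketch: the committor q(i) = P(reach B before A from i) solves
# (I - P) q = 0 on the interior with q = 0 on A and q = 1 on B.
import numpy as np

n = 20
P = np.zeros((n, n))                     # random walk with absorbing ends
for i in range(1, n - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 0] = P[-1, -1] = 1.0
A, B = [0], [n - 1]                      # the two metastable sets (toy)

interior = [i for i in range(n) if i not in A + B]
M = np.eye(n) - P
rhs = P[:, B].sum(axis=1)                # probability of stepping into B
q = np.zeros(n)
q[interior] = np.linalg.solve(M[np.ix_(interior, interior)], rhs[interior])
q[B] = 1.0
print(q)   # linear ramp from 0 to 1, as expected for this walk
```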

Thu, 13 Mar 2014

14:00 - 15:00
L5

Instance optimality of an AFEM with maximum marking strategy

Professor Christian Kreuzer
(Ruhr University Bochum)
Abstract

Adaptive finite element methods (AFEMs) with Dörfler's marking strategy are known to converge with optimal asymptotic rates. Practical experience shows that AFEMs with a maximum marking strategy produce optimal results while being less sensitive to the choice of the marking parameter.

In this talk, we prove that an AFEM with a modified maximum strategy is even instance optimal for the total error, i.e., for the sum of the error and the oscillation. This is a non-asymptotic optimality result. Our approach uses new techniques based on the minimisation of the Dirichlet energy and a newly developed tree structure on the nodes of admissible triangulations.
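
For reference, the maximum marking strategy itself fits in two lines (a sketch with random indicators; the talk's modified strategy refines this rule):

```python
# Sketch: maximum marking strategy; mark every element whose estimated
# error is within a factor theta of the largest indicator (toy data).
import numpy as np

eta = np.random.default_rng(6).random(100)   # per-element error indicators
theta = 0.5                                  # marking parameter (assumed)
marked = np.where(eta >= theta*eta.max())[0]
print(len(marked), "elements marked for refinement")
```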

Thu, 06 Mar 2014

14:00 - 15:00
L5

Kullback-Leibler Approximation Of Probability Measures

Professor Andrew Stuart
(University of Warwick)
Abstract

Many problems in the physical sciences require the determination of an unknown function from a finite set of indirect measurements. Examples include oceanography, oil recovery, water resource management and weather forecasting. The Bayesian approach to these problems is natural for many reasons, including the under-determined and ill-posed nature of the inversion, the noise in the data and the uncertainty in the differential equation models used to describe complex multiscale physics. The object of interest in the Bayesian approach is the posterior probability distribution on the unknown field [1].

However, the Bayesian approach presents a computationally formidable task, as it results in the need to probe a probability measure on a separable Banach space. Markov chain Monte Carlo (MCMC) methods may be used to achieve this [2], but can be prohibitively expensive. In this talk I will discuss approximation of probability measures by a Gaussian measure, looking for the closest approximation with respect to the Kullback-Leibler divergence. This methodology is widely used in machine learning [3]. In the context of target measures on a separable Banach space which themselves have density with respect to a Gaussian, I will show how to make sense of the resulting problem in the calculus of variations [4]. Furthermore, I will show how the approximate Gaussians can be used to speed up MCMC sampling of the posterior distribution [5].
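
A one-dimensional caricature of this variational problem (my toy, not the setting of [4]): minimise KL(q || p) over Gaussians q = N(m, s^2) against a bimodal target; the minimiser settles on one mode, reflecting the mode-seeking character of this direction of the divergence.

```python
# Sketch: best Gaussian approximation in KL(q||p), up to the constant
# log-normaliser of p (toy 1D target with modes near +/- 2).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def log_p(x):                          # unnormalised target log-density
    return -0.25*(x**2 - 4)**2

def kl(params):                        # KL(q||p) + const
    mean, log_s = params
    s = np.exp(log_s)
    q = lambda x: np.exp(-0.5*((x - mean)/s)**2)/(s*np.sqrt(2*np.pi))
    log_q = lambda x: -0.5*((x - mean)/s)**2 - np.log(s*np.sqrt(2*np.pi))
    f = lambda x: q(x)*(log_q(x) - log_p(x))
    return quad(f, mean - 10*s, mean + 10*s)[0]

res = minimize(kl, x0=[1.0, 0.0], method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1]))      # settles near m = 2, s = 1/sqrt(8)
```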

[1] A.M. Stuart. "Inverse problems: a Bayesian perspective." Acta Numerica 19 (2010). http://arxiv.org/abs/1302.6989

[2] S.L. Cotter, G.O. Roberts, A.M. Stuart and D. White. "MCMC methods for functions: modifying old algorithms to make them faster." Statistical Science 28 (2013). http://arxiv.org/abs/1202.0709

[3] C.M. Bishop. "Pattern recognition and machine learning." Springer, 2006.

[4] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber. "Kullback-Leibler approximations for measures on infinite dimensional spaces." http://arxiv.org/abs/1310.7845

[5] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber. "Algorithms for Kullback-Leibler approximation of probability measures in infinite dimensions." In preparation.

Thu, 27 Feb 2014

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Alternating minimal energy methods for linear systems in higher dimensions

Dr Dmitry Savostyanov
(University of Southampton)
Abstract

When high-dimensional problems are concerned, few algorithms can break the curse of dimensionality and solve them efficiently and reliably. Among those, tensor product algorithms, which implement the idea of separation of variables for multi-index arrays (tensors), seem to be the most general and also very promising. They originated in quantum physics and chemistry and descend broadly from the density matrix renormalization group (DMRG) and matrix product states (MPS) formalisms. The same tensor formats were recently rediscovered in the numerical linear algebra (NLA) community as the tensor train (TT) format.
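
For concreteness, the TT format can be produced by successive reshapes and truncated SVDs. The following compact TT-SVD is a standard construction from the literature (my sketch, not the speaker's code):

```python
# Sketch: TT-SVD; factor a d-dimensional array into tensor-train cores by
# sweeping reshape + truncated SVD from left to right.
import numpy as np

def tt_svd(T, eps=1e-10):
    d, dims = T.ndim, T.shape
    cores, r = [], 1
    M = T.reshape(r*dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int((s > eps*s[0]).sum()))        # truncation rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        r = rk
        M = (s[:rk, None]*Vt[:rk]).reshape(r*dims[k + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

# f(i,j,k,l) = i + j + k + l has TT ranks 2, and the sweep recovers that.
grid = np.indices((8, 8, 8, 8)).sum(axis=0).astype(float)
print([c.shape for c in tt_svd(grid)])   # [(1,8,2), (2,8,2), (2,8,2), (2,8,1)]
```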

Algorithms developed in the quantum physics community are based on optimisation in tensor formats, performed successively over all components of the format (i.e. all sites or modes). The DMRG/MPS schemes are very efficient but very difficult to analyse, and at the moment only local convergence results for the simplest algorithm are available. In the NLA community, a common approach is to use a classical iterative scheme (e.g. GMRES) and enforce compression to a tensor format at every step. The formal analysis is quite straightforward, but the tensor ranks of the vectors which span the Krylov subspace grow rapidly with the iterations, and the methods struggle in practice.

The first attempt to merge classical iterative algorithms and DMRG/MPS methods was made by White (2005), where the second Krylov vector is used to expand the search space in the optimisation step. The idea proved to be useful, but the implementation was based on a fair amount of physical intuition, and the algorithm is not completely justified.

We have recently proposed the AMEn algorithm for linear systems, which also injects the gradient direction in the optimisation step, but in a way that allows us to prove global convergence of the resulting scheme. The scheme can easily be applied to the computation of the ground state; the differences from the algorithm of S. White are emphasized in Dolgov and Savostyanov (2013). The AMEn scheme is already acknowledged in the NLA community: for example, it was recently applied to the computation of extreme eigenstates by Kressner, Steinlechner and Uschmajew (2013), using the block-TT format proposed in Dolgov, Khoromskij, Oseledets and Savostyanov (2014).

To date, the AMEn algorithm has been applied:
 - to simulate the NMR spectra of large molecules (such as ubiquitin),
 - to solve the Fokker-Planck equation for non-Newtonian polymeric flows,
 - to the chemical master equation describing the mesoscopic model of gene regulatory networks,
 - to solve the Heisenberg model problem for a periodic spin chain.
We aim to extend this framework and its analysis to other problems of NLA: eigenproblems, time-dependent problems, high-dimensional interpolation, and matrix functions, as well as to a wider class of high-dimensional problems.

This is joint work with Sergey Dolgov from the Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany.