Past Computational Mathematics and Applications Seminar

29 January 2015
14:00
Dr Ignace Bogaert
Abstract

Gaussian quadrature rules are of theoretical and practical interest because of their role in numerical integration and interpolation. For general weight functions, their computation can be performed with the Golub-Welsch algorithm or one of its refinements. However, for the specific case of Gauss-Legendre quadrature, computation methods based on asymptotic series representations of the Legendre polynomials have recently been proposed. For large quadrature rules, these methods provide superior accuracy and speed at the cost of generality. We give an overview of the progress made with these asymptotic methods, focusing on the ideas and asymptotic formulas that led to them. Finally, we discuss their limited generality, with Gauss-Jacobi quadrature rules as a prominent candidate for extension.
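
For orientation, here is a minimal NumPy sketch of the classical Golub-Welsch computation for the Legendre weight, i.e. the eigenvalue approach that the asymptotic methods aim to outperform for large n; this is a textbook illustration, not one of the new algorithms of the talk.

```python
import numpy as np

def gauss_legendre_golub_welsch(n):
    """n-point Gauss-Legendre rule via Golub-Welsch.

    The three-term recurrence of the (monic) Legendre polynomials defines a
    symmetric tridiagonal Jacobi matrix; its eigenvalues are the nodes and
    the weights follow from the first components of the eigenvectors."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)       # off-diagonal entries
    J = np.diag(beta, 1) + np.diag(beta, -1)   # diagonal is zero for Legendre
    nodes, vecs = np.linalg.eigh(J)
    weights = 2.0 * vecs[0, :] ** 2            # mu_0 = integral of the weight = 2
    return nodes, weights

# Sanity check against NumPy's built-in rule.
x, w = gauss_legendre_golub_welsch(10)
xr, wr = np.polynomial.legendre.leggauss(10)
assert np.allclose(x, xr) and np.allclose(w, wr)
```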

22 January 2015
14:00
Professor Ridgway Scott
Abstract
We examine a technique of Slater and Kirkwood which provides an exact resolution of the asymptotic behavior of the van der Waals attraction between two hydrogen atoms. We modify their technique to make the problem more tractable analytically and more easily solvable by numerical methods. Moreover, we prove rigorously that this approach provides an exact solution for the asymptotic electron correlation. The proof makes use of recent results that utilize the Feshbach-Schur perturbation technique. We provide visual representations of the asymptotic electron correlation (entanglement) based on the use of Laguerre approximations.
4 December 2014
14:00
Dr Euan Spence
Abstract

The usual variational formulations of the Helmholtz equation are sign-indefinite (i.e. not coercive). In this talk, I will argue that this indefiniteness is not an inherent feature of the Helmholtz equation itself, only of its standard formulations. I will do this by presenting new sign-definite formulations of several Helmholtz boundary value problems.
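
For concreteness, here is the standard sesquilinear form for one such boundary value problem, the interior impedance problem (this is the textbook formulation that is indefinite, not one of the new sign-definite formulations of the talk):

$$a(u,v) = \int_\Omega \nabla u \cdot \overline{\nabla v}\,\mathrm{d}x - k^2 \int_\Omega u\,\overline{v}\,\mathrm{d}x - \mathrm{i}k \int_{\partial\Omega} u\,\overline{v}\,\mathrm{d}s,$$

so that

$$\operatorname{Re} a(v,v) = \|\nabla v\|_{L^2(\Omega)}^2 - k^2 \|v\|_{L^2(\Omega)}^2,$$

which changes sign over H^1(Ω) once k is large enough; the form fails to be coercive even though the boundary value problem itself is well posed.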

This is joint work with Andrea Moiola (Reading).

Abstract

Incomplete Cholesky factorizations are commonly used as black-box preconditioners for the iterative solution of large sparse symmetric positive definite linear systems. Traditionally, incomplete factorizations are obtained by dropping (i.e., replacing by zero) some entries of the factors during the factorization process. Here we consider a less common way to approximate the factors: through low-rank approximations of some off-diagonal blocks. More specifically, we focus on approximation schemes that satisfy the orthogonality condition: the approximation should be orthogonal to the corresponding approximation error.

The resulting incomplete Cholesky factorizations have attractive theoretical properties. First, the underlying factorization process can be shown to be breakdown-free. Further, the condition number of the preconditioned system, which characterizes the convergence rate of standard iterative schemes, can be shown to be bounded as a function of the accuracy of the individual approximations. Hence, this bound can benefit from better approximations, but also from some algorithmic peculiarities. Finally, the above results can be shown to hold for any symmetric positive definite system matrix.

On the practical side, we consider a particular variant of the preconditioner. It relies on a nested dissection ordering of the unknowns to ensure an attractive memory usage and operation count. Further, it exploits in an algebraic way the low-rank structure present in system matrices that arise from PDE discretizations. A preliminary implementation of the method is compared with similar Cholesky and incomplete Cholesky factorizations based on the dropping of individual entries.
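
As a small illustration of the orthogonality condition (a NumPy sketch only, not the preconditioner described above): a truncated SVD of a block is orthogonal, in the Frobenius inner product, to the error it leaves behind, because the retained and discarded terms live in the spans of disjoint sets of singular vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((60, 40))            # stands in for an off-diagonal block

# Rank-r approximation by truncated SVD.
r = 5
U, s, Vt = np.linalg.svd(B, full_matrices=False)
B_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
E = B - B_r                                  # approximation error

# Orthogonality condition: <B_r, E>_F = 0 up to round-off.
print(np.trace(B_r.T @ E))                   # ~ 1e-13
print(np.linalg.norm(E, 2) / s[0])           # relative accuracy of the block
```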

20 November 2014
14:00
Abstract

The Dynamic Dictionary of Mathematical Functions (or DDMF, http://ddmf.msr-inria.inria.fr/) is an interactive website on special functions inspired by reference books such as the NIST Handbook of Mathematical Functions. The originality of the DDMF is that each of its “chapters” is automatically generated from a short mathematical description of the corresponding function.

To make this possible, the DDMF focuses on so-called D-finite (or holonomic) functions, i.e., complex analytic solutions of linear ODEs with polynomial coefficients. D-finite functions include in particular most standard elementary functions (exp, log, sin, sinh, arctan, ...) as well as many of the classical special functions of mathematical physics (Airy functions, Bessel functions, hypergeometric functions, ...). A function of this class can be represented by a finite amount of data (a differential equation along with sufficiently many initial values), and this representation makes it possible to develop a computer algebra framework that deals with the whole class in a unified way, instead of relying on ad hoc algorithms and code for each particular function. The DDMF attempts to put this idea into practice.
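
As a toy illustration of such a representation (hypothetical code, not the DDMF's own software): the data "y' - y = 0 with y(0) = 1" pins down exp, and the recurrence this ODE induces on the Taylor coefficients already suffices to evaluate the function.

```python
from fractions import Fraction

def taylor_exp(n):
    """First n Taylor coefficients of exp at 0, derived from the D-finite
    representation y' - y = 0, y(0) = 1: the ODE forces the recurrence
    (k + 1) * c_{k+1} = c_k on the coefficients."""
    c = [Fraction(1)]              # initial value y(0) = 1
    for k in range(n - 1):
        c.append(c[k] / (k + 1))   # exact rational arithmetic
    return c

# Evaluate the truncated series at x = 1; the tail is easy to bound.
coeffs = taylor_exp(20)
print(sum(float(c) for c in coeffs))   # ~ 2.718281828459045
```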

In this talk, I will present the DDMF, some of the algorithms and software libraries behind it, and ongoing projects based on similar ideas, with an emphasis on symbolic-numeric algorithms.

13 November 2014
14:00
Professor Rob Scheichl
Abstract

The coefficients in mathematical models of physical processes are often impossible to determine fully or accurately, and are hence subject to uncertainty. It is of great importance to quantify the uncertainty in the model outputs based on the (uncertain) information that is available on the model inputs. This invariably leads to very high-dimensional quadrature problems associated with the computation of statistics of quantities of interest, such as the time it takes a pollutant plume in an uncertain subsurface flow problem to reach the boundary of a safety region, or the buckling load of an airplane wing. Higher-order methods, such as stochastic Galerkin or polynomial chaos methods, suffer from the curse of dimensionality, and when the physical models themselves are complex and computationally costly, they become prohibitively expensive in higher dimensions.

Instead, some of the most promising approaches to quantifying uncertainties in continuum models are based on Monte Carlo sampling and the “multigrid philosophy”. Multilevel Monte Carlo (MLMC) methods have been introduced recently and successfully applied to many model problems, producing significant gains.

In this talk I want to recall the classical MLMC method and then show how the gains can be improved further (significantly) by using quasi-Monte Carlo (QMC) sampling rules. More importantly, the dimension independence and the improved gains can be justified rigorously for an important model problem in subsurface flow. To achieve uniform bounds, independent of the dimension, it is necessary to work in infinite dimensions and to study quadrature in sequence spaces. I will present the elements of this new theory for the case of lognormal random coefficients in a diffusion problem and support the theory with numerical experiments.
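
To fix ideas, here is a minimal sketch of the plain MLMC estimator for a toy problem: estimating E[S(1)] for a geometric Brownian motion, where level l means an Euler-Maruyama step size 2^-l. The names and the fixed per-level sample counts are illustrative only; the QMC variant discussed in the talk would replace the i.i.d. normal samples by randomised lattice points.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_gbm(h, dW):
    """Euler-Maruyama for dS = mu*S dt + sigma*S dW, with S(0) = 1."""
    mu, sigma, S = 0.05, 0.2, 1.0
    for dw in dW:
        S += mu * S * h + sigma * S * dw
    return S

def level_sample(l):
    """One sample of the level correction P_l - P_{l-1}: the same Brownian
    increments drive the fine grid (step 2**-l) and, pairwise summed,
    the coarse grid (step 2**-(l-1))."""
    nf = 2 ** l
    hf = 1.0 / nf
    dW = rng.normal(0.0, np.sqrt(hf), nf)
    Pf = euler_gbm(hf, dW)
    if l == 0:
        return Pf
    return Pf - euler_gbm(2.0 * hf, dW[0::2] + dW[1::2])

# Fixed sample counts per level for simplicity; a practical MLMC code picks
# them adaptively from the observed variances of the level corrections.
N = [40000, 20000, 10000, 5000, 2500, 1250]
est = sum(np.mean([level_sample(l) for _ in range(n)]) for l, n in enumerate(N))
print(est)   # exact answer: exp(0.05) ~ 1.05127
```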

Bill Lionheart
Abstract

For many tomographic imaging problems there are explicit inversion formulas, and depending on the completeness of the data these are unstable to differing degrees. Increasingly, we solve tomographic problems as though they were any other linear inverse problem, using numerical linear algebra. I will illustrate the use of the numerical singular value decomposition to explore the (in)stability of various problems. I will also show how standard techniques from numerical linear algebra, such as conjugate gradient least squares (CGLS), can be employed with systematic regularization, in contrast to the ad hoc use of the slowly convergent iterative methods more traditionally employed in computed tomography. I will mainly illustrate the talk with examples from three-dimensional X-ray tomography, but I will also touch on tensor tomography problems.
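
As a concrete example of the numerical linear algebra toolbox referred to here, below is a sketch of CGLS for a generic system Ax ≈ b (standalone illustrative code, not taken from any tomography package). Stopping the iteration early already acts as a regulariser; explicit Tikhonov regularisation amounts to running the same routine on an augmented least-squares problem.

```python
import numpy as np

def cgls(A, b, n_iter=50):
    """Conjugate gradient least squares: minimises ||Ax - b||_2 by running
    CG on the normal equations A^T A x = A^T b without forming A^T A."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

def cgls_tikhonov(A, b, lam, n_iter=50):
    """Tikhonov regularisation with parameter lam: solve the augmented
    least-squares problem [A; sqrt(lam)*I] x ~ [b; 0] by CGLS."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return cgls(A_aug, b_aug, n_iter)
```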

30 October 2014
14:00
Professor Olavi Nevanlinna
Abstract

We outline a path from polynomial numerical hulls to multicentric calculus for evaluating f(A). Consider
$$V_p(A) = \{\, z \in \mathbb{C} : |p(z)| \le \|p(A)\| \,\}$$
where p is a polynomial and A a bounded linear operator (or matrix). Intersecting these sets over polynomials of degree 1 gives the closure of the numerical range, while intersecting over all polynomials gives the spectrum of A, with possible holes filled in.
Outside any set V_p(A) one can write down the resolvent explicitly, and this leads to a multicentric holomorphic functional calculus.
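
As a small numerical illustration of the definition of V_p(A) (a hypothetical example matrix and polynomial, evaluated naively on a grid; this only visualises the set, it is not the multicentric calculus itself):

```python
import numpy as np

# A sample non-normal matrix and a sample polynomial p (illustration only).
A = np.array([[0.0, 2.0],
              [0.0, 0.5]])

def p(z):
    return z * (z - 1.0)

pA = A @ A - A                          # p(A) = A^2 - A
lvl = np.linalg.norm(pA, 2)             # spectral norm ||p(A)||

# V_p(A) = { z in C : |p(z)| <= ||p(A)|| }, tested on a grid in the plane.
xs = np.linspace(-1.5, 2.5, 500)
ys = np.linspace(-2.0, 2.0, 500)
X, Y = np.meshgrid(xs, ys)
inside = np.abs(p(X + 1j * Y)) <= lvl
print(inside.mean())                    # fraction of grid points in V_p(A)
```
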
The spectrum, pseudospectrum and polynomial numerical hulls can move rapidly under low-rank perturbations. However, this happens in a very controlled way, and when measured correctly one obtains an identity which shows, for example, the following: in a low-rank homotopy between a self-adjoint operator and a quasinilpotent one, the identity forces the nonnormality to increase in exact compensation with the shrinking of the spectrum.
In this talk we shall mention how the multicentric calculus leads to a nontrivial extension of von Neumann's theorem
$$\|f(A)\| \le \sup_{|z| \le 1} |f(z)|,$$
where A is a contraction in a Hilbert space, and conclude with some new results on (nonholomorphic) functional calculus for operators for which p(A) is normal for some nontrivial polynomial p. Notice that such a polynomial always exists for matrices.

23 October 2014
14:00
Professor Erik Burman
Abstract

In numerical analysis, the design and analysis of computational methods is often based on, and closely linked to, a well-posedness result for the underlying continuous problem. In particular, the continuous dependence of the continuous model is then inherited by the computational method. In this talk our aim is to design a stabilised finite element method that can exploit the continuous dependence of the underlying physical problem without making use of a standard well-posedness result such as the Lax-Milgram lemma or the Babuška-Brezzi theorem. This is of particular interest for inverse problems or data assimilation problems, which may not fit the framework of the above-mentioned well-posedness results but can nevertheless satisfy some weak continuous dependence properties. First we will discuss non-coercive elliptic and hyperbolic equations, where the discrete problem can be ill-posed even for well-posed continuous problems, and then we will discuss the linear elliptic Cauchy problem as an example of an ill-posed problem for which continuous dependence results suitable for the proposed framework are available.

16 October 2014
14:00
Professor Peter Schmid
Abstract

Gradient-based optimisation techniques have become a common tool in the analysis of fluid systems. They have been applied to replace and extend large-scale matrix decompositions to compute optimal amplification and optimal frequency responses in unstable and stable flows. We will show how to efficiently extract linearised and adjoint information directly from nonlinear simulation codes and how to use this information for determining common flow characteristics. We also extend this framework to deal with the optimisation of less common norms. Examples from aero-acoustics and mixing will be presented.
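
As a toy illustration of the linearise-and-adjoint pattern described here (a scalar sketch with hypothetical names, not the speaker's framework): for an objective constrained by a nonlinear state equation, one adjoint solve yields the gradient, which can be checked against a finite difference.

```python
import numpy as np
from scipy.optimize import brentq

y_target = 0.5

def solve_state(q):
    """The nonlinear 'simulation code': solve f(y, q) = y**3 + y - q = 0."""
    return brentq(lambda y: y**3 + y - q, -10.0, 10.0)

def objective_and_gradient(q):
    """J(q) = 0.5*(y(q) - y_target)**2 and its gradient via the adjoint:
    solve (df/dy)^T lam = dJ/dy, then dJ/dq = -lam^T df/dq."""
    y = solve_state(q)
    J = 0.5 * (y - y_target) ** 2
    lam = (y - y_target) / (3 * y**2 + 1)   # df/dy = 3y^2 + 1
    grad = -lam * (-1.0)                    # df/dq = -1
    return J, grad

q = 2.0
J, g = objective_and_gradient(q)
eps = 1e-6
Jp, _ = objective_and_gradient(q + eps)
print(g, (Jp - J) / eps)                    # adjoint vs finite-difference check
```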
