Forthcoming events in this series


Thu, 11 Feb 2010

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Resolution of sharp fronts in the presence of model error in variational data assimilation

Dr. Melina Freitag
(University of Bath)
Abstract

We show that four-dimensional variational data assimilation (4DVar) can be interpreted as a form of Tikhonov regularisation, a familiar method for solving ill-posed inverse problems. It is known from image restoration problems that $L_1$-norm penalty regularisation recovers sharp edges in the image better than $L_2$-norm penalty regularisation does. We apply this idea to 4DVar for problems where shocks are present and give some examples in which the $L_1$-norm penalty approach performs much better than the standard $L_2$-norm regularisation in 4DVar.
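
For orientation, the connection can be written schematically as follows; the notation is generic ($x_b$ the background state, $B$ and $R_i$ error covariances, $H_i$ observation operators, $M_{0\to i}$ the model) and is not taken from the talk. The strong-constraint 4DVar cost function
\[
J(x_0) \;=\; \|x_0 - x_b\|_{B^{-1}}^2 \;+\; \sum_{i=0}^{N} \|y_i - H_i(x_i)\|_{R_i^{-1}}^2, \qquad x_i = M_{0\to i}(x_0),
\]
has the familiar Tikhonov form of a data misfit plus a quadratic penalty. One natural $L_1$ variant of the kind alluded to above replaces the quadratic background penalty by
\[
\lambda \,\|L(x_0 - x_b)\|_1
\]
for some operator $L$ (for instance a discrete gradient), which is the analogue of edge-preserving regularisation in image restoration.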

Thu, 04 Feb 2010

14:00 - 15:00
3WS SR

Determination of the Basin of Attraction in Dynamical Systems using Meshless Collocation

Dr Peter Giesl
(University of Sussex)
Abstract

In dynamical systems given by an ODE, one is interested in the basin of attraction of invariant sets, such as equilibria or periodic orbits. The basin of attraction consists of those initial states whose solutions converge towards the invariant set. To determine the basin of attraction, one can use a solution of a certain linear PDE, which can be approximated by meshless collocation.

The basin of attraction of an equilibrium can be determined through sublevel sets of a Lyapunov function, i.e. a scalar-valued function which is decreasing along solutions of the dynamical system. One method to construct such a Lyapunov function is to solve a certain linear PDE approximately using meshless collocation. Error estimates ensure that the approximation is a Lyapunov function.
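
As a point of reference (generic notation, not taken from the talk): for $\dot{x} = f(x)$ with equilibrium $\bar{x}$, the linear PDE in question is typically the orbital-derivative equation
\[
V'(x) \;=\; \nabla V(x)\cdot f(x) \;=\; -p(x),
\]
where $p$ is a prescribed positive function, e.g. $p(x) = \|x-\bar{x}\|^2$. An approximate solution obtained by meshless (RBF) collocation whose orbital derivative can be verified to be negative is itself a Lyapunov function, and its sublevel sets give subsets of the basin of attraction.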

The basin of attraction of a periodic orbit can be analysed by Borg’s criterion, which measures the time evolution of the distance between adjacent trajectories with respect to a certain Riemannian metric. The sufficiency and necessity of this criterion will be discussed, and methods for computing a suitable Riemannian metric using meshless collocation will be presented in this talk.

Thu, 28 Jan 2010

14:00 - 15:00
3WS SR

Preconditioning Stochastic Finite Element Matrices

Dr. Catherine Powell
(University of Manchester)
Abstract

In the last few years, there has been renewed interest in stochastic finite element methods (SFEMs), which facilitate the approximation of statistics of solutions to PDEs with random data. SFEMs based on sampling, such as stochastic collocation schemes, lead to decoupled problems requiring only deterministic solvers. SFEMs based on Galerkin approximation satisfy an optimality condition but require the solution of a single linear system of equations that couples deterministic and stochastic degrees of freedom. This is regarded as a serious bottleneck in computations, and the difficulty is even more pronounced when we attempt to solve systems of PDEs with random data via stochastic mixed FEMs.

In this talk, we give an overview of solution strategies for the saddle-point systems that arise when the mixed form of the Darcy flow problem, with correlated random coefficients, is discretised via stochastic Galerkin and stochastic collocation techniques. For the stochastic Galerkin approach, the systems are orders of magnitude larger than those arising for deterministic problems. We report on fast solvers and preconditioners based on multigrid, which have proved successful for deterministic problems. In particular, we examine their robustness with respect to the random diffusion coefficients, which can be either a linear or non-linear function of a finite set of random parameters with a prescribed probability distribution.
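
To fix ideas (schematic, generic notation, not from the talk): stochastic Galerkin discretisation of the mixed Darcy problem leads to a saddle-point system whose leading block has Kronecker-product structure,
\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad
A \;=\; \sum_{k=0}^{m} G_k \otimes A_k ,
\]
where the $A_k$ are deterministic finite element matrices and the $G_k$ couple the stochastic degrees of freedom. Preconditioners are then built from approximations of these blocks, for example multigrid applied to a mean-based or Kronecker-structured approximation of $A$.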

Tue, 26 Jan 2010

14:00 - 15:00
3WS SR

On the existence of modified equations for stochastic differential equations

Dr Konstantinos Zyglakis
(OCCAM (Oxford))
Abstract

In this talk we describe a general framework for deriving modified equations for stochastic differential equations with respect to weak convergence. We will start by quickly recapping how to derive modified equations in the case of ODEs and describe how these ideas can be generalized to the case of SDEs. Results will be presented for first-order methods such as the Euler-Maruyama and the Milstein method. In the case of linear SDEs, using the Gaussianity of the underlying solutions, we will derive an SDE that the numerical method solves exactly in the weak sense. Applications of modified equations to the numerical study of Langevin equations and to the calculation of effective diffusivities will also be discussed.
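
Schematically (standard notation, not taken from the talk): for an SDE $dX = f(X)\,dt + g(X)\,dW$ discretised with step size $h$, e.g. by Euler-Maruyama, one seeks a modified SDE
\[
d\widetilde{X} \;=\; \bigl[f(\widetilde{X}) + h\,f_1(\widetilde{X})\bigr]\,dt \;+\; \bigl[g(\widetilde{X}) + h\,g_1(\widetilde{X})\bigr]\,dW,
\]
chosen so that the numerical method approximates $\widetilde{X}$ in the weak sense to one order higher than it approximates $X$; the correction terms $f_1, g_1$ play the role of the higher-order terms in the classical backward error analysis of ODE integrators.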

Thu, 21 Jan 2010

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

An excursion through the world of complex networks guided by matrix theory

Prof. Ernesto Estrada
(University of Strathclyde)
Abstract

A brief introduction to the field of complex networks is given by means of some examples. Then we focus on defining and applying centrality measures to characterise the nodes of complex networks. We combine this approach with methods for detecting communities as well as for identifying good expansion properties in graphs. All these concepts are formally defined in the presentation. We introduce the subgraph centrality from a combinatorial point of view and then connect it with the theory of graph spectra. Continuing along this line, we introduce some modifications to this measure by considering known matrix functions, e.g. psi matrix functions, as well as new ones introduced here. Finally, we illustrate some examples of applications, in particular the identification of essential proteins in proteomic maps.
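
As background on one of the measures mentioned above: subgraph centrality weights the closed walks at a node by inverse factorials of their length, which amounts to taking the diagonal of the matrix exponential of the adjacency matrix. A minimal sketch (the small example graph is hypothetical):

    import numpy as np
    from scipy.linalg import expm

    def subgraph_centrality(A):
        # Sum over closed walks at each node, weighted by 1/k! for walks of length k:
        # SC(i) = sum_k (A^k)_{ii} / k! = [exp(A)]_{ii}.
        return np.diag(expm(A))

    # hypothetical 5-node undirected graph (adjacency matrix)
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 1, 1],
                  [0, 1, 1, 0, 1],
                  [0, 0, 1, 1, 0]], dtype=float)
    print(subgraph_centrality(A))   # larger values indicate more "central" nodes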

Tue, 19 Jan 2010

14:00 - 15:00
3WS SR

Discovery of Mechanisms from Mathematical Modeling of DNA Microarray Data by Using Matrix and Tensor Algebra: Computational Prediction and Experimental Verification

Dr Orly Alter
(University of Texas at Austin)
Abstract

Future discovery and control in biology and medicine will come from the mathematical modeling of large-scale molecular biological data, such as DNA microarray data, just as Kepler discovered the laws of planetary motion by using mathematics to describe trends in astronomical data. In this talk, I will demonstrate that mathematical modeling of DNA microarray data can be used to correctly predict previously unknown mechanisms that govern the activities of DNA and RNA.

First, I will describe the computational prediction of a mechanism of regulation, by using the pseudoinverse projection and a higher-order singular value decomposition to uncover a genome-wide pattern of correlation between DNA replication initiation and RNA expression during the cell cycle. Then, I will describe the recent experimental verification of this computational prediction, by analyzing global expression in synchronized cultures of yeast under conditions that prevent DNA replication initiation without delaying cell cycle progression. Finally, I will describe the use of the singular value decomposition to uncover "asymmetric Hermite functions," a generalization of the eigenfunctions of the quantum harmonic oscillator, in genome-wide mRNA lengths distribution data.

These patterns might be explained by a previously undiscovered asymmetry in RNA gel electrophoresis band broadening and hint at two competing evolutionary forces that determine the lengths of gene transcripts.
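
For readers unfamiliar with the linear-algebra machinery: the SVD of a gene-by-sample expression matrix decomposes the data into rank-one patterns (often called "eigengenes" and "eigenarrays" in this line of work), and the pseudoinverse provides the least-squares projection of one data set onto the patterns of another. A minimal sketch with synthetic data (array sizes and names are illustrative assumptions only):

    import numpy as np

    rng = np.random.default_rng(0)
    genes, samples = 200, 12
    data = rng.standard_normal((genes, samples))        # synthetic expression matrix

    # SVD: columns of U are "eigenarrays", rows of Vt are "eigengenes";
    # s**2 gives the fraction of overall expression captured by each pattern.
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    fractions = s**2 / np.sum(s**2)

    # pseudoinverse projection of a second data set onto the leading patterns of the first
    basis = U[:, :3]
    other = rng.standard_normal((genes, samples))
    projection = basis @ (np.linalg.pinv(basis) @ other)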

Thu, 14 Jan 2010

14:00 - 15:00
3WS SR

Golub-Kahan Iterative Bidiagonalization and Revealing Noise in the Data

Prof. Zdenek Strakos
(Academy of Sciences of the Czech Republic)
Abstract

Regularization techniques based on the Golub-Kahan iterative bidiagonalization are among the popular approaches for solving large discrete ill-posed problems. First, the original problem is projected onto a lower-dimensional subspace using the bidiagonalization algorithm, which by itself represents a form of regularization by projection. The projected problem, however, inherits a part of the ill-posedness of the original problem, and therefore some form of inner regularization must be applied. Stopping criteria for the whole process are then based on the regularization of the projected (small) problem.

We consider an ill-posed problem with a noisy right-hand side (observation vector), where the noise level is unknown. We show how the information from the Golub-Kahan iterative bidiagonalization can be used for estimating the noise level. Such information can be useful for constructing efficient stopping criteria in solving ill-posed problems.

This is joint work by Iveta Hn\v{e}tynkov\'{a}, Martin Ple\v{s}inger, and Zden\v{e}k Strako\v{s} (Faculty of Mathematics and Physics, Charles University, and Institute of Computer Science, Academy of Sciences, Prague)
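
For reference, the (lower) Golub-Kahan bidiagonalization of $A$ with starting vector $b$ is generated by the recurrences (standard notation, not specific to the talk)
\[
\beta_1 u_1 = b, \qquad
\alpha_k v_k = A^{T} u_k - \beta_k v_{k-1}, \qquad
\beta_{k+1} u_{k+1} = A v_k - \alpha_k u_k, \qquad k = 1, 2, \ldots,
\]
with $v_0 = 0$ and the scalars $\alpha_k, \beta_k \ge 0$ chosen so that $\|u_k\| = \|v_k\| = 1$. The projected problem is then a small least-squares problem with the bidiagonal matrix formed by the $\alpha_k$ and $\beta_k$.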

Thu, 03 Dec 2009

14:00 - 15:00
3WS SR

Rational Approximations to the Complex Error Function

Prof. Andre Weideman
(University of Stellenbosch)
Abstract

We consider rational approximations to the Faddeeva or plasma dispersion function, defined as $w(z) = e^{-z^{2}} \mbox{erfc}(-iz)$. The function has many important applications in physics, so good software for computing it reliably everywhere in the complex plane is required. In this talk we shall derive rational approximations to $w(z)$ via quadrature, M\"{o}bius transformations, and best approximation. The various approximations are compared with regard to speed of convergence, numerical stability, and ease of generation of the coefficients of the formula. In addition, we give preference to methods for which a single expression yields uniformly high accuracy in the entire complex plane, and which reproduce exactly the asymptotic behaviour $w(z) \sim i/(\sqrt{\pi}\, z)$ as $z \rightarrow \infty$ (in an appropriate sector).

This is joint work with Stephan Gessner and St\'efan van der Walt.
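
The asymptotic behaviour quoted above is easy to check numerically against a standard library implementation of $w(z)$ (this is only a sanity check, not one of the approximations derived in the talk):

    import numpy as np
    from scipy.special import wofz    # Faddeeva function w(z) = exp(-z^2) erfc(-iz)

    # relative deviation of w(z) from i/(sqrt(pi) z) for increasingly large |z|
    # in the upper half-plane
    z = np.array([5 + 5j, 20 + 10j, 100 + 50j])
    asymptotic = 1j / (np.sqrt(np.pi) * z)
    print(np.abs(wofz(z) - asymptotic) / np.abs(wofz(z)))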

Thu, 26 Nov 2009

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Invariant pairs of matrix polynomials

Dr. Timo Betcke
(University of Reading)
Abstract

Invariant subspaces are a well-established tool in the theory of linear eigenvalue problems. They are also computationally more stable objects than single eigenvectors if one is interested in a group of closely clustered eigenvalues. A generalization of invariant subspaces to matrix polynomials can be given by using invariant pairs.
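
Concretely, following the usual definition in this setting: for a matrix polynomial $P(\lambda) = \sum_{i=0}^{d} \lambda^{i} A_i$ with $A_i \in \mathbb{C}^{n \times n}$, a pair $(X, S) \in \mathbb{C}^{n \times k} \times \mathbb{C}^{k \times k}$ is called an invariant pair if
\[
\sum_{i=0}^{d} A_i X S^{i} \;=\; 0
\]
(together with a suitable minimality condition on $(X,S)$). The eigenvalues of $S$ are then eigenvalues of $P$, and the columns of $X$ carry the corresponding eigenvector information, in analogy with $AX = XS$ for invariant subspaces of a single matrix.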

We investigate some basic properties of invariant pairs and give perturbation results which show that invariant pairs have similarly favorable properties for matrix polynomials as invariant subspaces have for linear eigenvalue problems. In the second part of the talk we discuss computational aspects, namely how to extract invariant pairs from linearizations of matrix polynomials and how to perform efficient iterative refinement on them. Numerical examples are shown using the NLEVP collection of nonlinear eigenvalue test problems.

This talk is joint work with Daniel Kressner from ETH Zuerich.

Thu, 19 Nov 2009

14:00 - 15:00
3WS SR

Molecular Dynamics Simulations and why they are interesting for Numerical Analysts

Dr. Pedro Gonnet
(ETH Zurich and Oxford University)
Abstract

Molecular Dynamics Simulations are a tool to study the behaviour of atomic-scale systems. The simulations themselves solve the equations of motion for hundreds to millions of particles over thousands to billions of time steps. Due to the size of the problems studied, such simulations are usually carried out on large clusters or special-purpose hardware.

At first glance, there is not much of interest for a Numerical Analyst: the equations of motion are simple, the integrators are of low order and the computational aspects seem to focus on hardware or ever larger and faster computer clusters.

The field, however, having been ploughed mainly by domain scientists (e.g. Chemists, Biologists, Material Scientists) and a few Computer Scientists, is a goldmine of interesting computational problems which have been solved either badly or not at all. These problems, although domain specific, require sufficient mathematical and computational skill to make finding a good solution potentially interesting for Numerical Analysts. The proper solution of such problems can result in speed-ups beyond what can be achieved by pushing the envelope on Moore's Law.

In this talk I will present three examples where problems interesting to Numerical Analysts arise. For the first two, Constraint Resolution Algorithms and Interpolated Potential Functions, I will present some of my own results. For the third, using interpolations to efficiently compute long-range potentials, I will only present some observations and ideas, as this will be the main focus of my research in Oxford and therefore no results are available yet.

Thu, 12 Nov 2009

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

CFD in the Gas Turbine Industry

Dr. Leigh Lapworth (t.b.c.)
(Rolls Royce)
Abstract

CFD is an indispensable part of the design process for all major gas turbine components. The growth in the use of CFD from single-block structured mesh steady state solvers to highly resolved unstructured mesh unsteady solvers will be described, with examples of the design improvements that have been achieved. The European Commission has set stringent targets for the reduction of noise, emissions and fuel consumption to be achieved by 2020. The application of CFD to produce innovative designs to meet these targets will be described. The future direction of CFD towards whole engine simulations will also be discussed.

Thu, 05 Nov 2009

14:00 - 15:00
3WS SR

On rational interpolation

Dr. Joris van Deun
(University of Antwerp and University of Oxford)
Thu, 29 Oct 2009

14:00 - 15:00
3WS SR

Is the Outer Solar System Chaotic?

Dr. Wayne Hayes
(UC Irvine and Imperial College London)
Abstract

The stability of our Solar System has been debated since Newton devised the laws of gravitation to explain planetary motion. Newton himself doubted the long-term stability of the Solar System, and the question remained unanswered despite centuries of intense study by generations of illustrious names such as Laplace, Lagrange, Gauss, and Poincare. Finally, in the 1990s, with the advent of computers fast enough to accurately integrate the equations of motion of the planets for billions of years, the question was settled: for the next 5 billion years, and barring interlopers, the shapes of the planetary orbits will remain roughly as they are now. This is called "practical stability": none of the known planets will collide with each other, fall into the Sun, or be ejected from the Solar System in the next 5 billion years.

Although the Solar System is now known to be practically stable, it may still be "chaotic". This means that we may---or may not---be able to predict precisely the positions of the planets within their orbits for the next 5 billion years. The precise positions of the planets affect the tilt of each planet's axis, and so can have a measurable effect on the Earth's climate. Although the inner Solar System is almost certainly chaotic, for the past 15 years there has been some debate about whether the outer Solar System exhibits chaos or not. In particular, when performing numerical integrations of the orbits of the outer planets, some astronomers observe chaos and some do not. This is particularly disturbing, since it is known that inaccurate integration can inject chaos into a numerical solution whose exact solution is known to be stable.

In this talk I will demonstrate how I closed that 15-year debate on chaos in the outer Solar System by performing the most carefully justified high-precision integrations of the orbits of the outer planets that have yet been done. The answer surprised even the astronomical community, and was published in _Nature Physics_.

I will also show lots of pretty pictures demonstrating the fractal nature of the boundary between chaos and regularity in the outer Solar System.

Thu, 22 Oct 2009

14:00 - 15:00
3WS SR

Mesh redistribution algorithms and error control for time-dependent PDEs

Prof. Charalambos Makridakis
(University of Crete)
Abstract

Self-adjusted meshes have important benefits when approximating PDEs with solutions that exhibit nontrivial characteristics. When appropriately chosen, they lead to efficient, accurate and robust algorithms. Error control is also important, since appropriate analysis can provide guarantees on how accurate the approximate solution is through a posteriori estimates. Error control may lead to appropriate adaptive algorithms by identifying areas of large errors and adjusting the mesh accordingly. Error control and the associated adaptive algorithms for important equations in Mathematical Physics remain an open problem.

In this talk we consider the main structure of an algorithm which permits mesh redistribution with time and the nontrivial characteristics associated with it. We present improved algorithms and we discuss successful approaches towards error control for model problems (linear and nonlinear) of parabolic or hyperbolic type.
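
One classical building block for mesh redistribution in one dimension is equidistribution of a monitor function: new mesh points are placed so that each cell carries the same share of the monitor's integral. A small illustrative sketch (the layer problem and the arc-length monitor are hypothetical, not taken from the talk):

    import numpy as np

    def equidistribute(x, monitor, n_new=None):
        # Redistribute mesh points so that each new cell carries (approximately)
        # the same integral of the monitor function.
        if n_new is None:
            n_new = len(x)
        M = 0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x)   # cellwise integral (trapezoid)
        F = np.concatenate([[0.0], np.cumsum(M)])             # cumulative "mass"
        targets = np.linspace(0.0, F[-1], n_new)
        return np.interp(targets, F, x)                       # invert the cumulative map

    # hypothetical example: cluster points near a sharp layer at x = 0.5
    x = np.linspace(0.0, 1.0, 41)
    u = np.tanh(50 * (x - 0.5))
    monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)           # arc-length monitor
    x_new = equidistribute(x, monitor)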

Thu, 15 Oct 2009

14:00 - 15:00
3WS SR

Sparsity, $\ell_1$ Minimization, and the Geometric Separation Problem

Prof. Gitta Kutyniok
(University of Osnabruck)
Abstract

During the last two years, sparsity has become a key concept in various areas of applied mathematics, computer science, and electrical engineering. Sparsity methodologies explore the fundamental fact that many types of data/signals can be represented by only a few non-vanishing coefficients when choosing a suitable basis or, more generally, a frame. If signals possess such a sparse representation, they can in general be recovered from few measurements using $\ell_1$ minimization techniques.

One application of this novel methodology is the geometric separation of data, which is composed of two (or more) geometrically distinct constituents -- for instance, pointlike and curvelike structures in astronomical imaging of galaxies. Although it seems impossible to extract those components -- as there are two unknowns for every datum -- suggestive empirical results using sparsity considerations have already been obtained.

In this talk we will first give an introduction to the concept of sparse representations and sparse recovery. Then we will develop a very general theoretical approach to the problem of geometric separation based on these methodologies by introducing novel ideas such as geometric clustering of coefficients. Finally, we will apply our results to the situation of separation of pointlike and curvelike structures in astronomical imaging of galaxies, where a deliberately overcomplete representation made of wavelets (suited to pointlike structures) and curvelets/shearlets (suited to curvelike structures) will be chosen. The decomposition principle is to minimize the $\ell_1$ norm of the frame coefficients. Our theoretical results, which are based on microlocal analysis considerations, show that at all sufficiently fine scales, nearly-perfect separation is indeed achieved.

This is joint work with David Donoho (Stanford University).
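
To make the decomposition principle concrete, here is a toy one-dimensional sketch: the two geometrically distinct parts are modelled by a spike dictionary and a few smooth cosine modes (hypothetical stand-ins for the wavelet and curvelet/shearlet frames of the talk), and the combined coefficient vector is found by $\ell_1$-penalised least squares solved with plain iterative soft thresholding:

    import numpy as np

    def ista(A, b, lam, n_iter=500):
        # Iterative soft thresholding for min_x 0.5*||A x - b||^2 + lam*||x||_1.
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - A.T @ (A @ x - b) / L      # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return x

    # hypothetical separation: signal = one spike (pointlike) + a few smooth modes
    n = 128
    Phi1 = np.eye(n)                                                  # spikes
    Phi2 = np.cos(np.outer(np.arange(n), np.arange(8)) * np.pi / n)   # smooth modes
    A = np.hstack([Phi1, Phi2])
    b = np.zeros(n); b[40] = 1.0
    b += Phi2 @ np.array([0.0, 0.5, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0])
    c = ista(A, b, lam=0.01)
    spike_part, smooth_part = Phi1 @ c[:n], Phi2 @ c[n:]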

Thu, 18 Jun 2009

14:00 - 15:00
Comlab

Radial Basis Function Methods for Modeling Atmospheric and Solid Earth Flows

Dr. Natasha Flyer
(National Center for Atmospheric Research)
Abstract

Current community models in the geosciences employ a variety of numerical methods from finite-difference, finite-volume, finite- or spectral elements, to pseudospectral methods. All have specialized strengths but also serious weaknesses. The first three methods are generally considered low-order and can involve high algorithmic complexity (as in triangular elements or unstructured meshes). Global spectral methods do not practically allow for local mesh refinement and often involve cumbersome algebra. Radial basis functions have the advantage of being spectrally accurate for irregular node layouts in multi-dimensions with extreme algorithmic simplicity, and naturally permit local node refinement on arbitrary domains. We will show test examples ranging from vortex roll-ups, modeling idealized cyclogenesis, to the unsteady nonlinear flows posed by the shallow water equations to 3-D mantle convection in the earth’s interior. The results will be evaluated based on numerical accuracy, stability and computational performance.

Wed, 17 Jun 2009

14:00 - 15:00
Comlab

Random triangles: are they acute or obtuse?

Prof Gil Strang
(MIT)
Abstract

This is a special talk outside the normal Computational Mathematics and Applications seminar series. Please note it takes place on a Wednesday.

Thu, 11 Jun 2009

14:00 - 15:00
Comlab

A fast domain decomposition solver for the discretized Stokes equations by a stabilized finite element method

Dr. Atsushi Suzuki
(Czech Technical University in Prague / Kyushu University)
Abstract

An iterative substructuring method with a balancing Neumann-Neumann preconditioner is known as an efficient parallel algorithm for the elasticity equations. This method was extended to the Stokes equations by Pavarino and Widlund [2002]. In their extension, Q2/P0-discontinuous elements are used for velocity/pressure, and a Schur complement system within the "benign space", where incompressibility is satisfied, is solved by the CG method.

For the construction of the coarse space for the balancing preconditioner, some supplementary solvability conditions are considered. In our algorithm for 3-D computation, P1/P1 elements for velocity/pressure with pressure stabilization are used to save computational cost in the stiffness matrix. We introduce a simple coarse space similar to the one for the elasticity equations. Owing to the stability term, solvability of the local Dirichlet problems for the Schur complement system, of the Neumann problems for the preconditioner, and of the coarse space problem is ensured. In our implementation, local Dirichlet and Neumann problems are solved by a 4x4-block modified Cholesky factorization procedure with an envelope method, which leads to fast computation with small memory requirements. Numerical results on parallel efficiency on a shared-memory computer will be shown. Direct use of the Stokes solver in an application to Earth's mantle convection will also be shown.
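
In block form (generic notation, not taken from the talk): the stabilized P1/P1 discretisation replaces the usual Stokes saddle-point system by
\[
\begin{pmatrix} A & B^{T} \\ B & -C \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\]
where $C$ is the symmetric positive semi-definite pressure stabilization matrix; it is this stability term that guarantees the solvability of the local Dirichlet and Neumann problems and of the coarse problem mentioned above.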

Thu, 04 Jun 2009

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Approximate Gauss-Newton methods using reduced order models

Dr. Amos Lawless
(University of Reading)
Abstract

Work with N.K. Nichols (Reading), C. Boess & A. Bunse-Gerstner (Bremen)

The Gauss-Newton (GN) method is a well known iterative technique for solving nonlinear least squares problems subject to dynamical system constraints. Such problems arise commonly from applications in optimal control and state estimation. Variational data assimilation systems for weather, ocean and climate prediction currently use approximate GN methods. The GN method solves a sequence of linear least squares problems subject to linearized system constraints. For very large systems, low resolution linear approximations to the model dynamics are used to improve the efficiency of the algorithm. We propose a new method for deriving low order system approximations based on model reduction techniques from control theory. We show how this technique can be combined with the GN method to give a state estimation technique that retains more of the dynamical information of the full system. Numerical experiments using a shallow-water model illustrate the superior performance of model reduction to standard truncation techniques.
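
For readers less familiar with the method: the basic (undamped) Gauss-Newton iteration, shown here without the reduced-order-model approximation that is the subject of the talk, repeatedly solves a linearized least-squares problem. A minimal sketch on a hypothetical curve-fitting problem:

    import numpy as np

    def gauss_newton(residual, jacobian, x0, n_iter=20):
        # At each step solve the linearized least-squares problem
        # min_p ||J(x) p + r(x)||^2 and update x <- x + p.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            r, J = residual(x), jacobian(x)
            p, *_ = np.linalg.lstsq(J, -r, rcond=None)
            x = x + p
        return x

    # hypothetical toy problem: fit y = a*exp(b*t) to data
    t = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(-1.5 * t)
    residual = lambda x: x[0] * np.exp(x[1] * t) - y
    jacobian = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
    print(gauss_newton(residual, jacobian, [1.0, 0.0]))   # approaches (2.0, -1.5)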

Thu, 28 May 2009

14:00 - 15:00
Comlab

Radial Basis Functions for Solving Partial Differential Equations

Prof. Bengt Fornberg
(University of Colorado)
Abstract

For the task of solving PDEs, finite difference (FD) methods are particularly easy to implement. Finite element (FE) methods are more flexible geometrically, but tend to be difficult to make very accurate. Pseudospectral (PS) methods can be seen as a limit of FD methods if one keeps on increasing their order of accuracy. They are extremely effective in many situations, but this strength comes at the price of very severe geometric restrictions. A more standard introduction to PS methods (rather than via FD methods of increasing orders of accuracy) is in terms of expansions in orthogonal functions (such as Fourier, Chebyshev, etc.).

Radial basis functions (RBFs) were first proposed around 1970 as a tool for interpolating scattered data. Since then, both our knowledge about them and their range of applications have grown tremendously. In the context of solving PDEs, we can see the RBF approach as a major generalization of PS methods, abandoning the orthogonality of the basis functions and in return obtaining much improved simplicity and flexibility. Spectral accuracy now becomes easily available also when using completely unstructured meshes, permitting local node refinements in critical areas. A very counterintuitive parameter range (making all the RBFs very flat) turns out to be of special interest. Computational cost and numerical stability were initially seen as serious difficulties, but major progress has recently been made in these areas as well.
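
As a small illustration of the RBF approach to PDEs, here is a Kansa-style collocation sketch on a toy 1-D Poisson problem with multiquadric basis functions; the node set, shape parameter, and test problem are illustrative assumptions, not taken from the talk:

    import numpy as np

    def solve_poisson_rbf(nodes, f, c=0.2):
        # Collocation with multiquadric basis phi(r) = sqrt(r^2 + c^2):
        # enforce -u''(x_i) = f(x_i) at interior nodes and u = 0 at the two endpoints.
        r = nodes[:, None] - nodes[None, :]
        phi = np.sqrt(r**2 + c**2)
        d2phi = c**2 / (r**2 + c**2)**1.5          # second derivative of the multiquadric
        A = -d2phi
        A[0, :], A[-1, :] = phi[0, :], phi[-1, :]  # boundary rows impose u = 0
        rhs = f(nodes).copy()
        rhs[0] = rhs[-1] = 0.0
        coeffs = np.linalg.solve(A, rhs)
        return lambda x: np.sqrt((x[:, None] - nodes[None, :])**2 + c**2) @ coeffs

    rng = np.random.default_rng(1)
    nodes = np.concatenate([[0.0], np.sort(rng.uniform(0.03, 0.97, 28)), [1.0]])  # scattered nodes
    f = lambda x: np.pi**2 * np.sin(np.pi * x)     # exact solution u(x) = sin(pi x)
    u = solve_poisson_rbf(nodes, f)
    x = np.linspace(0.0, 1.0, 200)
    print(np.max(np.abs(u(x) - np.sin(np.pi * x))))  # error of the collocation sketch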

Thu, 21 May 2009

14:00 - 15:00
Comlab

Introduction to Quasicontinuum Methods: Formulation, Classification, Analysis

Dr. Christoph Ortner
(Computing Laboratory, Oxford)
Abstract

Quasicontinuum methods are a prototypical class of atomistic-to-continuum coupling methods. For example, we may wish to model a lattice defect (a vacancy or a dislocation) by an atomistic model, but the elastic far field by a continuum model. If the continuum model is consistent with the atomistic model (e.g., the Cauchy--Born model) then the main question is how the interface treatment affects the method.
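
Schematically, in the simplest one-dimensional setting with nearest-neighbour interaction potential $\phi$ (my notation, not the talk's), an energy-based atomistic-to-continuum coupling replaces the atomistic energy by
\[
E^{\mathrm{qc}}(u) \;=\; \sum_{\ell \in \mathcal{A}} \varepsilon\, \phi\!\left(\frac{u_{\ell+1}-u_\ell}{\varepsilon}\right)
\;+\; \int_{\Omega_c} W\bigl(\partial_x u\bigr)\, dx ,
\qquad W(F) = \phi(F),
\]
where $\mathcal{A}$ is the atomistic region containing the defect, $\Omega_c$ the continuum region, $\varepsilon$ the lattice spacing, and $W$ the Cauchy--Born energy density; the various quasicontinuum variants differ precisely in how the two contributions are blended at the interface.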

In this talk I will introduce three of the main ideas for treating the interface. I will explain their strengths and weaknesses by formulating the simplest possible non-trivial model problem and then simply analyzing the two classical concerns of numerical analysis: consistency and stability.

Thu, 30 Apr 2009

14:00 - 15:00
Comlab

Approximation of Inverse Problems

Prof. Andrew Stuart
(University of Warwick)
Abstract

Inverse problems are often ill-posed, with solutions that depend sensitively on data. Regularization of some form is often used to counteract this. I will describe an approach to regularization, based on a Bayesian formulation of the problem, which leads to a notion of well-posedness for inverse problems, at the level of probability measures.

The stability which results from this well-posedness may be used as the basis for understanding approximation of inverse problems in finite dimensional spaces. I will describe a theory which carries out this program.

The ideas will be illustrated with the classical inverse problem for the heat equation, and then applied to some more complicated inverse problems arising in data assimilation, such as determining the initial condition for the Navier-Stokes equation from observations.
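
A schematic statement of the framework, in standard notation: with prior measure $\mu_0$ on the unknown $u$ and data $y$ related to $u$ through a negative log-likelihood $\Phi(u;y)$, the posterior $\mu^{y}$ is defined via its density with respect to the prior,
\[
\frac{d\mu^{y}}{d\mu_0}(u) \;=\; \frac{1}{Z(y)} \exp\bigl(-\Phi(u;y)\bigr),
\qquad
Z(y) = \int \exp\bigl(-\Phi(u;y)\bigr)\, d\mu_0(u),
\]
and well-posedness means that $\mu^{y}$ depends continuously on $y$, for example in the Hellinger metric. Approximation of the inverse problem in finite dimensional spaces can then be studied as approximation of $\Phi$ (or of the underlying forward map) and hence of $\mu^{y}$.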