Forthcoming events in this series


Thu, 05 Nov 2009

14:00 - 15:00
3WS SR

On rational interpolation

Dr. Joris van Deun
(University of Antwerp and University of Oxford)
Thu, 29 Oct 2009

14:00 - 15:00
3WS SR

Is the Outer Solar System Chaotic?

Dr. Wayne Hayes
(UC Irvine and Imperial College London)
Abstract

The stability of our Solar System has been debated since Newton devised the laws of gravitation to explain planetary motion. Newton himself doubted the long-term stability of the Solar System, and the question remained unanswered despite centuries of intense study by generations of illustrious names such as Laplace, Lagrange, Gauss, and Poincare. Finally, in the 1990s, with the advent of computers fast enough to accurately integrate the equations of motion of the planets for billions of years, the question was settled: for the next 5 billion years, and barring interlopers, the shapes of the planetary orbits will remain roughly as they are now. This is called "practical stability": none of the known planets will collide with each other, fall into the Sun, or be ejected from the Solar System, for the next 5 billion years.

Although the Solar System is now known to be practically stable, it may still be "chaotic". This means that we may---or may not---be able to predict precisely the positions of the planets within their orbits for the next 5 billion years. The precise positions of the planets affect the tilt of each planet's axis, and so can have a measurable effect on the Earth's climate. Although the inner Solar System is almost certainly chaotic, for the past 15 years there has been some debate about whether the outer Solar System exhibits chaos or not. In particular, when performing numerical integrations of the orbits of the outer planets, some astronomers observe chaos, and some do not. This is particularly disturbing, since it is known that inaccurate integration can inject chaos into a numerical solution whose exact solution is known to be stable.

In this talk I will demonstrate how I closed that 15-year debate on chaos in the outer solar system by performing the most carefully justified high-precision integrations of the orbits of the outer planets that have yet been done. The answer surprised even the astronomical community, and was published in _Nature Physics_.

I will also show lots of pretty pictures demonstrating the fractal nature of the boundary between chaos and regularity in the outer Solar System.
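Numerically, chaos is diagnosed by a positive Lyapunov exponent, i.e. exponential divergence of nearby trajectories. A toy sketch of the idea (my illustration, using the one-dimensional logistic map rather than any planetary model):

```python
import math

def lyapunov_logistic(r, x0=0.3, burn_in=1000, n=100_000):
    """Estimate the Lyapunov exponent of the map x -> r*x*(1-x) by
    averaging log|f'(x)| along an orbit; a positive value indicates chaos."""
    x = x0
    for _ in range(burn_in):                      # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r - 2 * r * x))     # f'(x) = r - 2*r*x
        x = r * x * (1 - x)
    return total / n
```

For r = 4 the orbit is chaotic and the exact exponent is ln 2, which the estimate reproduces; for r = 3.2 the orbit settles on a stable 2-cycle and the exponent is negative.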

Thu, 22 Oct 2009

14:00 - 15:00
3WS SR

Mesh redistribution algorithms and error control for time-dependent PDEs

Prof. Charalambos Makridakis
(University of Crete)
Abstract

Self-adjusting meshes have important benefits when approximating PDEs whose solutions exhibit nontrivial characteristics. When appropriately chosen, they lead to efficient, accurate and robust algorithms. Error control is also important, since appropriate analysis can provide guarantees on how accurate the approximate solution is through a posteriori estimates. Error control may lead to appropriate adaptive algorithms by identifying areas of large errors and adjusting the mesh accordingly. Error control and the associated adaptive algorithms for important equations in Mathematical Physics remain an open problem.

In this talk we consider the main structure of an algorithm which permits mesh redistribution with time and the nontrivial characteristics associated with it. We present improved algorithms and we discuss successful approaches towards error control for model problems (linear and nonlinear) of parabolic or hyperbolic type.
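As a minimal illustration of the mesh redistribution idea (my sketch, not the speaker's algorithm), the classical equidistribution principle places nodes so that each cell carries an equal share of a monitor function:

```python
import numpy as np

def equidistribute(x_fine, M, n_cells):
    """Place n_cells+1 mesh points so each cell holds an equal share of
    the integral of the monitor function M, sampled on the grid x_fine."""
    # cumulative integral of M via the trapezoidal rule
    I = np.concatenate(
        [[0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x_fine))])
    targets = np.linspace(0.0, I[-1], n_cells + 1)   # equal-area levels
    return np.interp(targets, I, x_fine)             # invert the cumulative map

# Monitor function concentrating points near a steep layer at x = 0.5
x_fine = np.linspace(0.0, 1.0, 2001)
M = 1.0 + 50.0 * np.exp(-200.0 * (x_fine - 0.5) ** 2)
mesh = equidistribute(x_fine, M, 20)
```

The resulting mesh clusters nodes where the monitor (e.g. an error indicator or solution gradient) is large; in a time-dependent method this redistribution is applied repeatedly as the solution evolves.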

Thu, 15 Oct 2009

14:00 - 15:00
3WS SR

Sparsity, $\ell_1$ Minimization, and the Geometric Separation Problem

Prof. Gitta Kutyniok
(University of Osnabruck)
Abstract

During the last two years, sparsity has become a key concept in various areas of applied mathematics, computer science, and electrical engineering. Sparsity methodologies explore the fundamental fact that many types of data/signals can be represented by only a few non-vanishing coefficients when choosing a suitable basis or, more generally, a frame. If signals possess such a sparse representation, they can in general be recovered from few measurements using $\ell_1$ minimization techniques.
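A minimal sketch of such a recovery (my illustration; the problem sizes and random matrix are assumptions): minimizing $\|x\|_1$ subject to $Ax = b$ can be cast as a linear program via the splitting $x = x^+ - x^-$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 50, 100, 4                        # 50 measurements, 100 unknowns, 4 nonzeros
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0                                  # few noiseless measurements

# min sum(xp + xm)  s.t.  A (xp - xm) = b,  xp, xm >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, method="highs")
x_rec = res.x[:n] - res.x[n:]
```

Although the system is underdetermined (100 unknowns, 50 equations), the $\ell_1$ solution recovers the sparse vector exactly, which is the basic phenomenon behind sparse recovery.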

One application of this novel methodology is the geometric separation of data, which is composed of two (or more) geometrically distinct constituents -- for instance, pointlike and curvelike structures in astronomical imaging of galaxies. Although it seems impossible to extract those components -- as there are two unknowns for every datum -- suggestive empirical results using sparsity considerations have already been obtained.

In this talk we will first give an introduction to the concept of sparse representations and sparse recovery. Then we will develop a very general theoretical approach to the problem of geometric separation based on these methodologies by introducing novel ideas such as geometric clustering of coefficients. Finally, we will apply our results to the situation of separation of pointlike and curvelike structures in astronomical imaging of galaxies, where a deliberately overcomplete representation made of wavelets (suited to pointlike structures) and curvelets/shearlets (suited to curvelike structures) will be chosen. The decomposition principle is to minimize the $\ell_1$ norm of the frame coefficients. Our theoretical results, which are based on microlocal analysis considerations, show that at all sufficiently fine scales, nearly-perfect separation is indeed achieved.

This is joint work with David Donoho (Stanford University).

Thu, 18 Jun 2009

14:00 - 15:00
Comlab

Radial Basis Functions Methods for Modeling Atmospheric and Solid Earth Flows

Dr. Natasha Flyer
(National Center for Atmospheric Research)
Abstract

Current community models in the geosciences employ a variety of numerical methods from finite-difference, finite-volume, finite- or spectral elements, to pseudospectral methods. All have specialized strengths but also serious weaknesses. The first three methods are generally considered low-order and can involve high algorithmic complexity (as in triangular elements or unstructured meshes). Global spectral methods do not practically allow for local mesh refinement and often involve cumbersome algebra. Radial basis functions have the advantage of being spectrally accurate for irregular node layouts in multi-dimensions with extreme algorithmic simplicity, and naturally permit local node refinement on arbitrary domains. We will show test examples ranging from vortex roll-ups, modeling idealized cyclogenesis, to the unsteady nonlinear flows posed by the shallow water equations to 3-D mantle convection in the earth’s interior. The results will be evaluated based on numerical accuracy, stability and computational performance.
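The core mechanism can be sketched in a few lines (my illustration, not the speaker's code): RBF interpolation on scattered nodes solves one dense linear system built from a radial kernel, with no mesh required:

```python
import numpy as np

def rbf_interpolant(nodes, values, eps=3.0):
    """Multiquadric RBF interpolation: solve for weights w with
    sum_j w_j * phi(|x_i - x_j|) = f(x_i), phi(r) = sqrt(1 + (eps*r)^2)."""
    phi = lambda r: np.sqrt(1.0 + (eps * r) ** 2)
    # dense interpolation matrix over all node pairs
    w = np.linalg.solve(phi(np.abs(nodes[:, None] - nodes[None, :])), values)
    return lambda x: phi(np.abs(np.asarray(x)[:, None] - nodes[None, :])) @ w

# Irregular (scattered) nodes: no grid structure is needed
rng = np.random.default_rng(1)
nodes = np.sort(rng.uniform(-1, 1, 30))
f = lambda x: np.tanh(3 * x)
interp = rbf_interpolant(nodes, f(nodes))
```

Derivative matrices for PDE solvers are built the same way, by differentiating the kernel; the algorithmic simplicity on irregular node sets in any dimension is the point of the approach.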

Wed, 17 Jun 2009

14:00 - 15:00
Comlab

Random triangles: are they acute or obtuse?

Prof Gil Strang
(MIT)
Abstract

This is a special talk outside the normal Computational Mathematics and Applications seminar series. Please note it takes place on a Wednesday.

Thu, 11 Jun 2009

14:00 - 15:00
Comlab

A fast domain decomposition solver for the discretized Stokes equations by a stabilized finite element method

Dr. Atsushi Suzuki
(Czech Technical University in Prague / Kyushu University)
Abstract

An iterative substructuring method with a balancing Neumann-Neumann preconditioner is known as an efficient parallel algorithm for the elasticity equations. This method was extended to the Stokes equations by Pavarino and Widlund [2002]. In their extension, Q2/P0-discontinuous elements are used for velocity/pressure, and a Schur complement system within a "benign space", where incompressibility is satisfied, is solved by the CG method.

For the construction of the coarse space for the balancing preconditioner, some supplementary solvability conditions are considered. In our algorithm for 3-D computation, P1/P1 elements for velocity/pressure with pressure stabilization are used to save computational cost in the stiffness matrix. We introduce a simple coarse space similar to that of the elasticity equations. Owing to the stability term, the solvabilities of the local Dirichlet problem for a Schur complement system, of the Neumann problem for the preconditioner, and of the coarse space problem are ensured. In our implementation, local Dirichlet and Neumann problems are solved by a 4x4-block modified Cholesky factorization procedure with an envelope method, which leads to fast computation with small memory requirements. Numerical results on parallel efficiency obtained with a shared-memory computer will be shown, as will the direct use of the Stokes solver in an application to Earth's mantle convection.

Thu, 04 Jun 2009

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Approximate Gauss-Newton methods using reduced order models

Dr. Amos Lawless
(University of Reading)
Abstract

Work with N.K. Nichols (Reading), C. Boess & A. Bunse-Gerstner (Bremen)

The Gauss-Newton (GN) method is a well-known iterative technique for solving nonlinear least squares problems subject to dynamical system constraints. Such problems arise commonly from applications in optimal control and state estimation. Variational data assimilation systems for weather, ocean and climate prediction currently use approximate GN methods. The GN method solves a sequence of linear least squares problems subject to linearized system constraints. For very large systems, low resolution linear approximations to the model dynamics are used to improve the efficiency of the algorithm. We propose a new method for deriving low order system approximations based on model reduction techniques from control theory. We show how this technique can be combined with the GN method to give a state estimation technique that retains more of the dynamical information of the full system. Numerical experiments using a shallow-water model illustrate the superior performance of model reduction over standard truncation techniques.
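The basic GN iteration itself can be sketched as follows (a generic toy example, not the reduced-order variant proposed in the talk): at each step the residual is linearized and a linear least squares problem is solved for the update.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta, iters=20):
    """Minimize 0.5*||r(theta)||^2 by solving J*delta = -r in the
    least squares sense at each step."""
    for _ in range(iters):
        r, J = residual(theta), jacobian(theta)
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta = theta + delta
    return theta

# Toy problem: fit y = a * exp(b*t) to noiseless data generated with a=2, b=-1
t = np.linspace(0, 2, 30)
y = 2.0 * np.exp(-1.0 * t)
residual = lambda th: th[0] * np.exp(th[1] * t) - y
jacobian = lambda th: np.column_stack([np.exp(th[1] * t),
                                       th[0] * t * np.exp(th[1] * t)])
theta = gauss_newton(residual, jacobian, np.array([1.5, -0.8]))
```

In the data assimilation setting described above, the expensive part is each linearized solve, which is why low order approximations of the model dynamics are introduced there.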

Thu, 28 May 2009

14:00 - 15:00
Comlab

Radial Basis Functions for Solving Partial Differential Equations

Prof. Bengt Fornberg
(University of Colorado)
Abstract

For the task of solving PDEs, finite difference (FD) methods are particularly easy to implement. Finite element (FE) methods are more flexible geometrically, but tend to be difficult to make very accurate. Pseudospectral (PS) methods can be seen as a limit of FD methods if one keeps on increasing their order of accuracy. They are extremely effective in many situations, but this strength comes at the price of very severe geometric restrictions. A more standard introduction to PS methods (rather than via FD methods of increasing orders of accuracy) is in terms of expansions in orthogonal functions (such as Fourier, Chebyshev, etc.).

Radial basis functions (RBFs) were first proposed around 1970 as a tool for interpolating scattered data. Since then, both our knowledge about them and their range of applications have grown tremendously. In the context of solving PDEs, we can see the RBF approach as a major generalization of PS methods, abandoning the orthogonality of the basis functions and in return obtaining much improved simplicity and flexibility. Spectral accuracy is now easily available also when using completely unstructured meshes, permitting local node refinements in critical areas. A very counterintuitive parameter range (making all the RBFs very flat) turns out to be of special interest. Computational cost and numerical stability were initially seen as serious difficulties, but major progress has recently been made in these areas as well.

Thu, 21 May 2009

14:00 - 15:00
Comlab

Introduction to Quasicontinuum Methods: Formulation, Classification, Analysis

Dr. Christoph Ortner
(Computing Laboratory, Oxford)
Abstract

Quasicontinuum methods are a prototypical class of atomistic-to-continuum coupling methods. For example, we may wish to model a lattice defect (a vacancy or a dislocation) by an atomistic model, but the elastic far field by a continuum model. If the continuum model is consistent with the atomistic model (e.g., the Cauchy--Born model) then the main question is how the interface treatment affects the method.

In this talk I will introduce three of the main ideas how to treat the interface. I will explain their strengths and weaknesses by formulating the simplest possible non-trivial model problem and then simply analyzing the two classical concerns of numerical analysis: consistency and stability.

Thu, 30 Apr 2009

14:00 - 15:00
Comlab

Approximation of Inverse Problems

Prof. Andrew Stuart
(University of Warwick)
Abstract

Inverse problems are often ill-posed, with solutions that depend sensitively on data. Regularization of some form is often used to counteract this. I will describe an approach to regularization, based on a Bayesian formulation of the problem, which leads to a notion of well-posedness for inverse problems, at the level of probability measures.

The stability which results from this well-posedness may be used as the basis for understanding approximation of inverse problems in finite dimensional spaces. I will describe a theory which carries out this program.

The ideas will be illustrated with the classical inverse problem for the heat equation, and then applied to some more complicated inverse problems arising in data assimilation, such as determining the initial condition for the Navier-Stokes equations from observations.
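For the linear-Gaussian case the well-posedness is easy to see concretely (a toy sketch under assumptions of my own: identity prior covariance and i.i.d. Gaussian noise): the posterior mean depends continuously on the data, so small data perturbations produce small changes in the reconstruction.

```python
import numpy as np

# Linear-Gaussian toy problem: y = G u + noise, prior u ~ N(0, C0),
# noise ~ N(0, s^2 I). The posterior is Gaussian, and its mean solves a
# regularized least squares problem.
rng = np.random.default_rng(2)
n, m, s = 20, 15, 0.1
G = rng.standard_normal((m, n))
C0 = np.eye(n)                               # prior covariance (assumption)

def posterior_mean(y):
    # mean = (G^T G / s^2 + C0^{-1})^{-1} G^T y / s^2
    H = G.T @ G / s**2 + np.linalg.inv(C0)
    return np.linalg.solve(H, G.T @ y / s**2)

y = rng.standard_normal(m)
dy = 1e-6 * rng.standard_normal(m)           # tiny data perturbation
drift = np.linalg.norm(posterior_mean(y + dy) - posterior_mean(y))
```

The perturbation in the posterior mean is of the same (tiny) order as the perturbation in the data; without the prior term the normal equations could be arbitrarily ill-conditioned and this stability would be lost.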

Thu, 12 Mar 2009

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

On fast multilevel algorithms for nonlinear variational imaging models

Prof Ke Chen
(The University of Liverpool)
Abstract

In recent years, the interdisciplinary field of imaging science has been experiencing an explosive growth in research activities including more models being developed, more publications generated, and above all wider applications attempted.
In this talk I shall first give an overview of the various imaging work carried out in our Liverpool group, some in collaboration with UCLA (T F Chan), CUHK (R H Chan), Bergen (X C Tai) and several colleagues from other departments in Liverpool. Then I shall focus on two pieces of recent work, on denoising and segmentation respectively:
(i) Image denoising has been a research topic deeply investigated over the last two decades; even the well-known ROF model (1992) can now be solved efficiently. However, less work has been done on models using high-order regularization. I shall describe our first and successful attempt to develop a working multilevel algorithm for a 4th-order nonlinear denoising model, and our work on solving the combined denoising and deblurring problem, which differs from the reformulation approach of M N Ng and W T Yin (2008) et al.
(ii) The image active contour model by Chan-Vese (2001) can be solved efficiently both by a geometric multigrid method and by an optimization-based multilevel method. Surprisingly, the new multilevel methods can find a solution closer to the global minimizer than the existing unilevel methods. Also discussed is some recent work (jointly with N Badshah) on selective segmentation that has useful medical applications.

Thu, 05 Mar 2009

14:00 - 15:00
Comlab

Geometric Numerical Integration of Differential Equations

Prof Reinout Quispel
(Latrobe University Melbourne)
Abstract

Geometric integration is the numerical integration of a differential equation, while preserving one or more of its geometric/physical properties exactly, i.e. to within round-off error.

Many of these geometric properties are of crucial importance in physical applications: preservation of energy, momentum, angular momentum, phase-space volume, symmetries, time-reversal symmetry, symplectic structure and dissipation are examples. The field has tantalizing connections to dynamical systems, as well as to Lie groups.

In this talk we first present a survey of geometric numerical integration methods for differential equations, and then exemplify this by discussing symplectic vs energy-preserving integrators for ODEs as well as for PDEs.
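A standard toy example of the phenomenon (my sketch, not from the talk): for the harmonic oscillator, symplectic Euler keeps the energy error bounded for all time, while the equally cheap explicit Euler method lets it grow without bound.

```python
import numpy as np

def integrate(h, n, symplectic=True):
    """Harmonic oscillator H = (p^2 + q^2)/2, starting at q=1, p=0.
    Returns the energy after each step."""
    q, p, energies = 1.0, 0.0, []
    for _ in range(n):
        if symplectic:
            p = p - h * q                     # kick using the old q ...
            q = q + h * p                     # ... then drift using the NEW p
        else:
            q, p = q + h * p, p - h * q       # explicit Euler: both use old values
        energies.append(0.5 * (p * p + q * q))
    return np.array(energies)

E_sym = integrate(0.1, 1000, symplectic=True)
E_euler = integrate(0.1, 1000, symplectic=False)
```

The symplectic scheme exactly conserves a nearby "modified" energy, so the true energy merely oscillates close to its initial value; explicit Euler multiplies the energy by (1 + h^2) every step, spiraling outward.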

Thu, 19 Feb 2009

14:00 - 15:00
Comlab

Numerical methods for palindromic eigenvalue problems

Dr Christian Mehl
(University of Birmingham)
Abstract

We discuss numerical methods for the solution of the palindromic eigenvalue problem $Ax = \lambda A^T x$, where $A$ is a complex matrix. Such eigenvalue problems occur, for example, in the vibration analysis of rail tracks.

The structure of palindromic eigenvalue problems leads to a symmetry in the spectrum: all eigenvalues occur in reciprocal pairs. The need for preservation of this symmetry in finite precision arithmetic requires the use of structure-preserving numerical methods. In this talk, we explain how such methods can be derived.
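The reciprocal pairing is easy to observe numerically (a generic sketch using an unstructured solver, unlike the structure-preserving methods of the talk):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Generalized eigenvalues of the pencil A x = lambda A^T x
lam = eig(A, A.T, right=False)

# For each eigenvalue lambda there should be a partner mu with lambda*mu = 1
pairing = np.array([np.min(np.abs(lam * l - 1.0)) for l in lam])
```

In exact arithmetic every eigenvalue has an exact reciprocal partner; a general-purpose solver only reproduces this up to roundoff, which is precisely why structure-preserving methods are preferable.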

Thu, 12 Feb 2009

14:00 - 15:00
Comlab

A new perspective on the complexity of interior point methods for linear programming

Dr Raphael Hauser
(Computing Laboratory, Oxford)
Abstract

The aim of this talk is to render the power of (short-step) interior-point methods for linear programming (and by extension, convex programming) intuitively understandable to those who have a basic training in numerical methods for solving dynamical systems. The connection between the two areas is made by interpreting line-search methods in a forward Euler framework, and by analysing the algorithmic complexity in terms of the stiffness of the vector field of search directions. Our analysis cannot replicate the best complexity bounds, but due to its weak assumptions it also applies to inexactly computed search directions and has explanatory power for a wide class of algorithms.

Co-Author: Coralia Cartis, Edinburgh University School of Mathematics.

Thu, 29 Jan 2009

14:00 - 15:00
Comlab

Coverage Processes on Spheres and Condition Numbers for Linear Programming

Dr Martin Lotz
(Oxford University and City University of Hong Kong)
Abstract

This talk is concerned with the probabilistic behaviour of a condition number C(A) for the problem of deciding whether a system of n homogeneous linear inequalities in m unknowns has a non-zero solution. In the case where the input system is feasible, the exact probability distribution of the condition number for random inputs is determined, and a sharp bound is given for the general case. In particular, for the expected value of the logarithm log C(A), an upper bound of order O(log m) (m the number of variables) is presented which does not depend on the number of inequalities.

The probability distribution of the condition number C(A) is closely related to the probability of covering the m-sphere with n spherical caps of a given radius. As a corollary, we obtain bounds on the probability of covering the sphere with random caps.

Thu, 22 Jan 2009

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Preconditioning of linear systems in an ocean flow model

Dr Fred Wubs
(University of Groningen)
Abstract

The climate is largely determined by the ocean flow, which in itself is driven by wind and by gradients in temperature and salinity. Nowadays numerical models exist that are able to describe the occurring phenomena not only qualitatively but also quantitatively. At the Institute for Marine and Atmospheric research Utrecht (IMAU) a so-called thermohaline circulation model has been developed in which methods of dynamical systems theory are used to study the stability of ocean flows. Here bifurcation diagrams are constructed by varying the strength of the forcing, for instance the amount of fresh water coming in from the north due to melting. For every value of the strength we have to solve a nonlinear system, which is handled by a Newton-type method. This produces many linear systems to be solved.

In the talk the following will be addressed: the form of the system of equations, and a special-purpose method which uses Trilinos and MRILU. The latter is a multilevel ILU preconditioner developed at Groningen University. Results of the approach obtained on the Dutch national supercomputer will be shown.

Thu, 15 Jan 2009

14:00 - 15:00
Comlab

On the accuracy of inexact saddle point solvers

Dr Miro Rozloznik
(Academy of Sciences of the Czech Republic)
Abstract

For large-scale saddle point problems, the application of exact iterative schemes and preconditioners may be computationally expensive. In practical situations, only approximations to the inverses of the diagonal blocks or the related cross-product matrices are considered, giving rise to inexact versions of various solvers. Therefore, the approximation effects must be carefully studied. In this talk we study the numerical behavior of several iterative Krylov subspace solvers applied to the solution of large-scale saddle point problems. Two main representatives of the segregated solution approach are analyzed: the Schur complement reduction method, based on an (iterative) elimination of primary variables, and the null-space projection method, which relies on a basis for the null-space of the constraints. We concentrate on the question of the best accuracy we can get from inexact schemes solving either the Schur complement system or the null-space projected system when implemented in finite precision arithmetic. The fact that the inner solution tolerance strongly influences the accuracy of computed iterates is known and has been studied in several contexts.

In particular, for several mathematically equivalent implementations we study the influence of inexactly solving the inner systems and estimate their maximum attainable accuracy. When considering the outer iteration process, our rounding error analysis leads to results similar to those which can be obtained assuming exact arithmetic. The situation is different when we look at the residuals in the original saddle point system. We can show that some implementations lead ultimately to residuals on the roundoff unit level, independently of the fact that the inner systems were solved inexactly at a much higher level than their level of limiting accuracy. Indeed, our results confirm that the generic, and actually the cheapest, implementations deliver approximate solutions which satisfy either the second or the first block equation to working accuracy. In addition, the schemes with a corrected direct substitution are also very attractive. We give a theoretical explanation for behavior which has probably been observed before, or is already tacitly known. The implementations that we point out as optimal are actually those which are widely used and suggested in applications.
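For reference, the exact (non-iterative) Schur complement reduction on a small dense saddle point system looks as follows (my sketch; the inexactness studied in the talk enters when the solves with A and S become inner iterations):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 30, 10
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                     # SPD (1,1) block
B = rng.standard_normal((m, n))                 # full-rank constraint block
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Schur complement reduction: eliminate u, solve for p, back-substitute
S = B @ np.linalg.solve(A, B.T)                 # S = B A^{-1} B^T
p = np.linalg.solve(S, B @ np.linalg.solve(A, f) - g)
u = np.linalg.solve(A, f - B.T @ p)

# Residuals in the two block equations of [[A, B^T], [B, 0]] [u; p] = [f; g]
r1 = np.linalg.norm(A @ u + B.T @ p - f)
r2 = np.linalg.norm(B @ u - g)
```

With exact solves both block residuals sit at roundoff level; the question analyzed in the talk is which of these residuals can still reach working accuracy when the inner solves are truncated early.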

Thu, 04 Dec 2008

14:00 - 15:00
Comlab

Cholesky factorizations for multi-core systems

Jonathan Hogg
(Rutherford Appleton Laboratory)
Abstract

Multicore chips are nearly ubiquitous in modern machines, and to fully exploit this continuation of Moore's Law, numerical algorithms need to be able to exploit parallelism. We describe recent approaches to both dense and sparse parallel Cholesky factorization on shared-memory multicore systems and present results from our new codes for problems arising from large real-world applications. In particular, we describe our experiences using directed acyclic graph based scheduling in the dense case and retrofitting parallelism to a sparse serial solver.
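The blocked, task-based structure can be sketched as follows (a serial toy version of my own; a DAG scheduler would dispatch the three kernels below as concurrent tasks whose dependencies form the graph):

```python
import numpy as np

def blocked_cholesky(A, nb=32):
    """Right-looking blocked Cholesky. The three kernels (diagonal block
    factorization, panel triangular solve, trailing symmetric update) are
    the tasks a DAG scheduler would distribute over cores."""
    L = np.tril(A.copy())                    # work only on the lower triangle
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        L[k:e, k:e] = np.linalg.cholesky(L[k:e, k:e])   # factorize diagonal block
        if e < n:
            # panel solve: L21 = A21 * L11^{-T}
            L[e:, k:e] = np.linalg.solve(L[k:e, k:e], L[e:, k:e].T).T
            # symmetric rank-nb update of the trailing matrix
            L[e:, e:] -= np.tril(L[e:, k:e] @ L[e:, k:e].T)
    return L

rng = np.random.default_rng(5)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)              # symmetric positive definite
L = blocked_cholesky(A)
```

In the dense multicore codes described above, each block operation becomes a node in a directed acyclic graph, and independent panel solves and trailing updates run in parallel as soon as their inputs are ready.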