Mon, 21 Feb 2005
15:45
DH 3rd floor SR

Perspectives on the mathematics of the integral of geometric Brownian motion

Professor Michael Schroeder
(University of Mannheim)
Abstract

This talk attempts to survey key aspects of the mathematics that has been developed in recent years towards an explicit understanding of the structure of exponential functionals of Brownian motion, starting with the work of Yor in the 1990s.
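As a rough numerical illustration (mine, not part of the talk), the simplest such exponential functional, A_t = ∫₀ᵗ exp(2B_s) ds, can be estimated by Monte Carlo and checked against the elementary identity E[A_t] = (e^{2t} − 1)/2, which follows from E[exp(2B_s)] = e^{2s}. A minimal sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_exp_functional(t=1.0, n_steps=1_000, n_paths=20_000):
    """Monte Carlo estimate of E[A_t], where A_t = int_0^t exp(2 B_s) ds."""
    dt = t / n_steps
    # Brownian increments, one row per simulated path
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    # B at the left endpoints t_0 = 0, ..., t_{n-1}
    B = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)[:, :-1]], axis=1
    )
    # left-point Riemann sum of exp(2 B_s) over [0, t]
    A = np.exp(2.0 * B).sum(axis=1) * dt
    return A.mean()

est = mean_exp_functional()
exact = (np.exp(2.0) - 1.0) / 2.0  # E[A_1], since E[exp(2 B_s)] = e^{2s}
```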

Mon, 21 Feb 2005
14:15
DH 3rd floor SR

Characterisation of paths by their signatures

Dr Nadia Sidorova
(Oxford)
Abstract

It is known that a continuous path of bounded variation can be reconstructed from a sequence of its iterated integrals (called the signature) in a similar way to a function on the circle being reconstructed from its Fourier coefficients. We study the radius of convergence of the corresponding logarithmic signature for paths in an arbitrary Banach space. This convergence has important consequences for control theory (in particular, it can be used for computing the logarithm of a flow) and the efficiency of numerical approximations to solutions of SDEs. We also discuss the nonlinear structure of the space of logarithmic signatures and the problem of reconstructing a path by its signature.
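As a toy illustration (mine, not the speaker's), the signature of a piecewise-linear path can be computed up to level two using Chen's identity: a straight segment with increment Δ has level-one term Δ and level-two term Δ⊗Δ/2, and concatenating paths multiplies their signatures in the tensor algebra. A minimal sketch assuming NumPy:

```python
import numpy as np

def segment_sig(delta):
    # signature of a straight-line segment, truncated at level 2:
    # level 1 = increment, level 2 = outer(delta, delta) / 2
    return delta, np.outer(delta, delta) / 2.0

def chen(sig_a, sig_b):
    # Chen's identity for the concatenation of two paths (level 2 truncation)
    a1, a2 = sig_a
    b1, b2 = sig_b
    return a1 + b1, a2 + np.outer(a1, b1) + b2

def signature(points):
    """Level-1 and level-2 signature of the piecewise-linear path
    through the given points (rows of a 2-D array)."""
    deltas = np.diff(points, axis=0)
    sig = segment_sig(deltas[0])
    for d in deltas[1:]:
        sig = chen(sig, segment_sig(d))
    return sig

# right-then-up path in the plane
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
s1, s2 = signature(pts)
# s1 is the total increment; the antisymmetric part of s2 is the Lévy area
```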

Thu, 10 Feb 2005
14:00
Rutherford Appleton Laboratory, nr Didcot

Preconditioning for eigenvalue problems: ideas, algorithms, error analysis

Dr Eugene Ovtchinnikov
(University of Westminster)
Abstract

The convergence of iterative methods for solving the linear system Ax = b with a Hermitian positive definite matrix A depends on the condition number of A: the smaller the latter, the faster the former. Hence the idea of multiplying the equation by a matrix T such that the condition number of TA is much smaller than that of A. The above is a common interpretation of the technique known as preconditioning, the matrix T being referred to as the preconditioner for A.
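To make the effect concrete (an illustration of mine, not from the abstract), the sketch below solves a 1-D Laplacian system by conjugate gradients with and without an incomplete-LU preconditioner, assuming SciPy; the preconditioned iteration count should be far smaller:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# 1-D Laplacian: Hermitian positive definite, condition number O(n^2)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

def cg_iterations(M=None):
    count = [0]
    def cb(xk):
        count[0] += 1
    x, info = spla.cg(A, b, M=M, maxiter=10_000, callback=cb)
    assert info == 0  # converged
    return count[0]

# T ~ A^{-1} from an incomplete LU factorisation
# (exact here, since a tridiagonal matrix has no fill-in)
ilu = spla.spilu(A, drop_tol=1e-4)
T = spla.LinearOperator((n, n), matvec=ilu.solve)

plain = cg_iterations()
preconditioned = cg_iterations(M=T)
```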
The eigenvalue computation does not seem to benefit from the direct application of such a technique. Indeed, what is the point in replacing the standard eigenvalue problem Ax = λx with the generalized one TAx = λTx that does not appear to be any easier to solve? It is hardly surprising then that modern eigensolvers, such as ARPACK, do not use preconditioning directly. Instead, an option is provided to accelerate the convergence to the sought eigenpairs by applying spectral transformation, which generally requires the user to supply a subroutine that solves the system (A−σI)y = z, and it is entirely up to the user to employ preconditioning if they opt to solve this system iteratively.
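For comparison, the shift-invert route described above is what, for example, SciPy's ARPACK wrapper exposes: passing a shift sigma makes eigsh iterate with (A − σI)^{-1}, factorised directly unless the user supplies their own (possibly iterative, possibly preconditioned) solver via the OPinv argument. A short sketch of mine, assuming SciPy:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# sigma triggers shift-invert mode: ARPACK works with (A - sigma*I)^{-1},
# which SciPy factorises directly by default
vals, vecs = spla.eigsh(A, k=3, sigma=0.0, which="LM")

# the 1-D Laplacian's smallest eigenvalues are 4 sin^2(j*pi / (2(n+1)))
exact = 4.0 * np.sin(np.arange(1, 4) * np.pi / (2 * (n + 1))) ** 2
```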
In this talk we discuss some alternative views on the preconditioning technique that are more general and more useful in the convergence analysis of iterative methods and that show, in particular, that the direct preconditioning approach does make sense in eigenvalue computation. We review some iterative algorithms that can benefit from direct preconditioning, present available convergence results and demonstrate both theoretically and numerically that the direct preconditioning approach has advantages over the two-level approach. Finally, we discuss the role that preconditioning can play in a posteriori error analysis, present some a posteriori error estimates that use preconditioning and compare them with commonly used estimates in terms of the Euclidean norm of the residual.
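One well-known eigensolver in which the preconditioner enters the iteration directly, rather than through a shift-invert solve, is LOBPCG; the sketch below (my example, assuming SciPy, and not necessarily one of the algorithms reviewed in the talk) computes the three smallest eigenpairs of the same 1-D Laplacian with an incomplete-LU preconditioner:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# preconditioner T ~ A^{-1}, applied inside the eigensolver iteration itself
ilu = spla.spilu(A, drop_tol=1e-3)
T = spla.LinearOperator((n, n), matvec=ilu.solve)

X = rng.standard_normal((n, 3))  # random initial block of 3 vectors
vals, vecs = spla.lobpcg(A, X, M=T, largest=False, tol=1e-10, maxiter=500)
```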