Computational Mathematics and Applications
Thu, 26/04/2012 14:00
Dr Alfredo Buttari (CNRS-IRIT Toulouse)
Computational Mathematics and Applications
Rutherford Appleton Laboratory, nr Didcot

The advent of multicore processors represents a disruptive event in the history of computer science, as conventional parallel programming paradigms are proving incapable of fully exploiting their potential for concurrent computation. The need for different or new programming models clearly arises from recent studies which identify fine granularity and dynamic execution as the keys to achieving high efficiency on multicore systems. This talk shows how these models can be effectively applied to the multifrontal method for the QR factorization of sparse matrices, achieving very high efficiency through a fine-grained partitioning of data and a dynamic scheduling of computational tasks based on a dataflow parallel programming model. Moreover, preliminary results will be discussed showing how the multifrontal QR factorization can be accelerated by using low-rank approximation techniques.
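The dataflow idea behind the talk can be illustrated with a toy scheduler: each fine-grained task declares its data dependencies and runs as soon as they are satisfied, instead of waiting at coarse synchronisation points. This is a sketch only; the task names mimic tiled QR kernels (geqrt, gemqrt, tpqrt, tpmqrt) purely for flavour, and this is not the runtime used in the talk.

```python
# Minimal dataflow task scheduling sketch: a task becomes ready as soon as
# all of its declared dependencies have completed.
from collections import deque

def dataflow_execute(tasks, deps):
    """tasks: dict name -> callable; deps: dict name -> set of prerequisite names."""
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    dependents = {t: [] for t in tasks}
    for t, ds in remaining.items():
        for d in ds:
            dependents[d].append(t)
    ready = deque(t for t, ds in remaining.items() if not ds)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()          # in a real runtime this would run on any free core
        order.append(t)
        for u in dependents[t]:
            remaining[u].discard(t)
            if not remaining[u]:
                ready.append(u)
    return order

# Toy tile tasks for a 2x2 blocked QR-like sweep (names are illustrative only):
log = []
tasks = {name: (lambda n=name: log.append(n))
         for name in ["geqrt(0,0)", "gemqrt(0,1)", "tpqrt(1,0)", "tpmqrt(1,1)"]}
deps = {"gemqrt(0,1)": {"geqrt(0,0)"},
        "tpqrt(1,0)": {"geqrt(0,0)"},
        "tpmqrt(1,1)": {"gemqrt(0,1)", "tpqrt(1,0)"}}
order = dataflow_execute(tasks, deps)
```

The point of the model is that no global barrier separates the four kernels: the two middle tasks can run concurrently the moment the first panel factorization finishes.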
Thu, 03/05/2012 14:00
Dr Cécile Piret (Université catholique de Louvain)
Computational Mathematics and Applications
Gibson Grd floor SR

Although much work has been done on using RBFs (radial basis functions) for reconstructing arbitrary surfaces, using RBFs to solve PDEs on arbitrary manifolds is only now being considered and is the subject of this talk. We will review current methods and introduce a new technique, loosely inspired by the Closest Point Method: the Orthogonal Gradients method (OGr). This new technique benefits from the advantages of using RBFs: their simplicity and high accuracy, but also their meshfree character, which gives the flexibility to represent even the most complex geometries in any dimension.
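For readers unfamiliar with RBF approximation, the following is a one-dimensional sketch of plain global Gaussian RBF interpolation, the building block that manifold methods such as OGr extend; the node locations, shape parameter and helper names are all illustrative.

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_interpolant(centers, values, eps=1.0):
    """Global Gaussian RBF interpolant s(x) = sum_j w_j exp(-(eps*|x - x_j|)^2):
    the weights are found by enforcing s(x_i) = f(x_i) at every node."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in centers] for xi in centers]
    w = gauss_solve(A, values)
    return lambda x: sum(wj * phi(abs(x - xj)) for wj, xj in zip(w, centers))

# Interpolate f(x) = sin(x) at a few scattered nodes:
nodes = [0.0, 0.7, 1.3, 2.0]
s = rbf_interpolant(nodes, [math.sin(x) for x in nodes])
```

Note that nothing in the construction refers to a mesh: only pairwise distances between scattered nodes enter, which is what makes the approach attractive on complicated geometries.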
Thu, 10/05/2012 14:00
Professor Mario Bebendorf (University of Bonn)
Computational Mathematics and Applications
Gibson Grd floor SR

We present recent numerical techniques for the treatment of integral formulations of Helmholtz boundary value problems in the case of high frequencies. The combination of ℋ-matrices (hierarchical matrices) with further developments of the adaptive cross approximation makes it possible to solve such problems with logarithmic-linear complexity independent of the frequency. An advantage of this new approach over existing techniques such as fast multipole methods is its stability over the whole range of frequencies, whereas other methods are efficient either for low or for high frequencies.
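The adaptive cross approximation (ACA) mentioned above builds a low-rank approximation of a matrix from a few of its rows and columns. The sketch below uses full pivoting on an explicitly formed matrix for clarity only; the practical method uses partial pivoting and never assembles the whole matrix.

```python
def aca(A, tol=1e-10, max_rank=None):
    """Cross approximation sketch: repeatedly pick the largest residual entry
    as pivot, add the rank-1 cross (pivot column x pivot row) to the
    approximation, and subtract it from the residual."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]                  # residual, updated in place
    U, V = [], []
    for _ in range(max_rank or min(m, n)):
        i, j = max(((i, j) for i in range(m) for j in range(n)),
                   key=lambda ij: abs(R[ij[0]][ij[1]]))
        if abs(R[i][j]) < tol:
            break
        u = [R[r][j] / R[i][j] for r in range(m)]   # pivot column, scaled
        v = R[i][:]                                  # pivot row
        for r in range(m):
            for c in range(n):
                R[r][c] -= u[r] * v[c]
        U.append(u)
        V.append(v)
    return U, V                                # A ~ sum_k outer(U[k], V[k])

# A smooth kernel matrix A_ij = 1/(1 + |i - j|) is numerically low-rank:
A = [[1.0 / (1 + abs(i - j)) for j in range(8)] for i in range(8)]
U, V = aca(A, tol=1e-6)
approx = [[sum(U[k][i] * V[k][j] for k in range(len(U))) for j in range(8)]
          for i in range(8)]
```

In an ℋ-matrix, this compression is applied block by block to the admissible (well-separated) sub-blocks of the discretised integral operator, which is where the logarithmic-linear complexity comes from.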
Thu, 17/05/2012 14:00
Dr Mike Botchev (University of Twente)
Computational Mathematics and Applications
Gibson Grd floor SR

Exponential time integrators are a powerful tool for the numerical solution of time dependent problems. The actions of matrix functions on vectors, necessary for exponential integrators, can be efficiently computed by elegant numerical techniques such as Krylov subspace methods. Unfortunately, in some situations the additional work required by exponential integrators per time step does not pay off, because accuracy restrictions prevent the time step from being increased much. To get around this problem, we propose a so-called time-stepping-free approach. This approach works for linear ordinary differential equation (ODE) systems where the time dependent part forms a small-dimensional subspace. In this case the time dependence can be projected out by block Krylov methods onto a small projected ODE system. Thus, there is just one block Krylov subspace involved and there are no time steps. We refer to this method as EBK, the exponential block Krylov method. The accuracy of EBK is determined by the Krylov subspace error and the solution accuracy in the projected ODE system. EBK works well for linear systems; its extension to nonlinear problems is an open problem, and we discuss possible ways to achieve such an extension.
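The Krylov evaluation of a matrix function times a vector, the workhorse of exponential integrators mentioned above, can be sketched for exp(A)v as follows. This is the standard single-vector Arnoldi variant, not the block (EBK) method of the talk, and the pure-Python dense exponential is for self-containedness only.

```python
import math

def expm_small(H, terms=30):
    """Dense matrix exponential by truncated Taylor series
    (adequate for small matrices of modest norm, as here)."""
    n = len(H)
    E = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[sum(T[i][l] * H[l][j] for l in range(n)) / k for j in range(n)]
             for i in range(n)]                                  # T = H^k / k!
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

def krylov_expv(A, v, m):
    """Approximate exp(A) v from an m-dimensional Krylov subspace:
    Arnoldi gives V_m and H_m, then y ~ beta * V_m expm(H_m) e_1.
    (Assumes no breakdown before step m.)"""
    n = len(v)
    beta = math.sqrt(sum(x * x for x in v))
    V = [[x / beta for x in v]]
    H = [[0.0] * m for _ in range(m)]
    for j in range(m):
        w = [sum(A[i][l] * V[j][l] for l in range(n)) for i in range(n)]
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i][j] = sum(w[l] * V[i][l] for l in range(n))
            w = [w[l] - H[i][j] * V[i][l] for l in range(n)]
        h = math.sqrt(sum(x * x for x in w))
        if j + 1 < m:
            H[j + 1][j] = h
            V.append([x / h for x in w])
    E = expm_small(H)
    return [beta * sum(V[i][l] * E[i][0] for i in range(m)) for l in range(n)]

A = [[-2.0, 1.0, 0.0], [1.0, -2.0, 1.0], [0.0, 1.0, -2.0]]
v = [1.0, 0.0, 0.0]
y = krylov_expv(A, v, 3)     # with m = n the result is exact up to rounding
```

The attraction is that only the small m-by-m matrix H needs an explicit exponential; for large sparse A, m can be far smaller than n.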
Thu, 24/05/2012 14:00
Dr Elias Jarlebring (KTH Stockholm)
Computational Mathematics and Applications
Gibson Grd floor SR

The Arnoldi method for standard eigenvalue problems possesses several attractive properties making it robust, reliable and efficient for many problems. We will present here a new algorithm, equivalent to the Arnoldi method but designed for nonlinear eigenvalue problems (NEPs), i.e. problems associated with a matrix depending on a parameter in a nonlinear but analytic way. As a first result we show that the solutions of the NEP are the reciprocals of the eigenvalues of an infinite dimensional operator. We consider the Arnoldi method for this operator and show that, with a particular choice of starting function and a particular choice of scalar product, the structure of the operator can be exploited in a very effective way. The structure of the operator is such that when the Arnoldi method is started with a constant function, the iterates will be polynomials. For a large class of NEPs, we show that we can carry out the infinite dimensional Arnoldi algorithm for the operator in arithmetic based on standard linear algebra operations on vectors and matrices of finite size. This is achieved by representing the polynomials by their vector coefficients. The resulting algorithm is by construction completely equivalent to the standard Arnoldi method and also inherits many of its attractive properties, which are illustrated with examples.
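For reference, the standard (finite dimensional) Arnoldi method that the talk generalises can be sketched as follows; the dominant Ritz value is extracted from the small Hessenberg matrix by simple power iteration. This is illustrative background only, not the infinite dimensional algorithm of the talk.

```python
import math

def arnoldi(A, b, m):
    """Arnoldi iteration: an orthonormal basis V of the Krylov subspace
    K_m(A, b) and the small upper-Hessenberg matrix H = V^T A V."""
    n = len(b)
    beta = math.sqrt(sum(x * x for x in b))
    V = [[x / beta for x in b]]
    H = [[0.0] * m for _ in range(m)]
    for j in range(m):
        w = [sum(A[i][l] * V[j][l] for l in range(n)) for i in range(n)]
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i][j] = sum(w[l] * V[i][l] for l in range(n))
            w = [w[l] - H[i][j] * V[i][l] for l in range(n)]
        h = math.sqrt(sum(x * x for x in w))
        if j + 1 < m:
            H[j + 1][j] = h
            V.append([x / h for x in w])
    return V, H

def dominant_ritz(H, iters=200):
    """Dominant Ritz value: power iteration plus a Rayleigh quotient
    on the small dense matrix H."""
    m = len(H)
    x = [1.0] * m
    for _ in range(iters):
        y = [sum(H[i][j] * x[j] for j in range(m)) for i in range(m)]
        s = max(abs(c) for c in y)
        x = [c / s for c in y]
    num = sum(x[i] * sum(H[i][j] * x[j] for j in range(m)) for i in range(m))
    return num / sum(c * c for c in x)

# Dominant eigenvalue of a small symmetric matrix via its Ritz value:
A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]]
V, H = arnoldi(A, [1.0, 0.0, 0.0], 3)
lam = dominant_ritz(H)
```

The talk's construction keeps exactly this iteration but replaces the vectors by polynomials represented through their coefficient vectors, so each step still reduces to finite linear algebra.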
Thu, 31/05/2012 14:00
Dr David Kay (University of Oxford)
Computational Mathematics and Applications
Gibson Grd floor SR

This talk will present a computationally efficient method of simulating cardiac electrical propagation using an adaptive high-order finite element method. The refinement strategy automatically concentrates computational effort in space where it is most needed on each time-step. We drive the adaptivity using a residual-based error indicator, and demonstrate, using norms of the error, that the indicator allows the error to be controlled successfully. Our results using two-dimensional domains of varying complexity demonstrate that significant improvements in efficiency are possible over the state of the art, indicating that these methods should be investigated for implementation in whole-heart-scale software.
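The refine-where-indicated loop can be illustrated in one dimension, with the midpoint deviation from a linear interpolant standing in for the residual-based indicator of the talk (the talk's setting is high-order FEM in two dimensions; everything below is a hypothetical simplification).

```python
import math

def adapt(f, a, b, tol, max_iter=30):
    """Adaptive 1D refinement sketch: the indicator for each interval is the
    deviation of f at the midpoint from the linear interpolant; intervals
    whose indicator exceeds the tolerance are bisected."""
    mesh = [a, b]
    for _ in range(max_iter):
        indicators = []
        for x0, x1 in zip(mesh, mesh[1:]):
            xm = 0.5 * (x0 + x1)
            indicators.append(abs(f(xm) - 0.5 * (f(x0) + f(x1))))
        if max(indicators) < tol:
            break
        new_mesh = []
        for (x0, x1), eta in zip(zip(mesh, mesh[1:]), indicators):
            new_mesh.append(x0)
            if eta >= tol:
                new_mesh.append(0.5 * (x0 + x1))   # refine only where needed
        new_mesh.append(mesh[-1])
        mesh = new_mesh
    return mesh

# A steep front (a stand-in for a propagating activation wavefront):
f = lambda x: math.tanh(40 * (x - 0.3))
mesh = adapt(f, 0.0, 1.0, 1e-3)
```

The resulting mesh is dense only around the front at x = 0.3 and stays coarse elsewhere, which is the same effort-concentration effect the talk exploits on each time-step.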
Thu, 07/06/2012 14:00
Dr Chris Farmer (University of Oxford)
Computational Mathematics and Applications
Rutherford Appleton Laboratory, nr Didcot

Uncertainty quantification can begin by specifying the initial state of a system as a probability measure. Part of the state (the 'parameters') might not evolve, and might not be directly observable. Many inverse problems are generalisations of uncertainty quantification such that one modifies the probability measure to be consistent with measurements, a forward model and the initial measure. The inverse problem, interpreted as computing the posterior probability measure of the states, including the parameters and the variables, from a sequence of noise-corrupted observations, is reviewed in the talk. Bayesian statistics provides a natural framework for a solution but leads to very challenging computational problems, particularly when the dimension of the state space is very large, as when it arises from the discretisation of a partial differential equation.

In this talk we show how the Bayesian framework leads to a new algorithm, the 'Variational Smoothing Filter', that unifies the leading techniques in use today. In particular the framework provides an interpretation and generalisation of Tikhonov regularisation, a method of forecast verification and a way of quantifying and managing uncertainty. To deal with the problem that a good initial prior may not be Gaussian, as with a general prior intended to describe, for example, a geological structure, a Gaussian mixture prior is used. This has many desirable properties, including ease of sampling to make 'numerical rocks' or 'numerical weather' for visualisation purposes and statistical summaries, and in principle it can approximate any probability density. Robustness is sought by combining a variational update with this full mixture representation of the conditional posterior density.
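The ease of sampling from a Gaussian mixture prior can be sketched in a few lines: pick a component by weight, then draw from that Gaussian. Here the state is a single scalar and the weights, means and spreads are invented for illustration; in the talk's setting each draw would be a whole 'numerical rock'.

```python
import random

def sample_mixture(weights, means, stds, n, seed=0):
    """Draw n samples from a one-dimensional Gaussian mixture:
    choose a component with probability given by its weight,
    then sample from that component's Gaussian."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u, acc, k = rng.random(), 0.0, 0
        for k, w in enumerate(weights):
            acc += w
            if u <= acc:
                break
        out.append(rng.gauss(means[k], stds[k]))
    return out

# A bimodal prior, e.g. two candidate geological facies (illustrative numbers):
samples = sample_mixture([0.3, 0.7], [-2.0, 3.0], [0.5, 0.5], 5000)
```

A single Gaussian could never represent this bimodality, which is exactly why a mixture prior is attractive when the geology itself is uncertain between qualitatively different structures.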
Thu, 14/06/2012 14:00
Dr Christoph Reisinger (University of Oxford)
Computational Mathematics and Applications
Gibson Grd floor SR

While a general framework of approximating the solution to Hamilton-Jacobi-Bellman (HJB) equations by difference methods is well established, and efficient numerical algorithms are available for one-dimensional problems, much less is known in the multi-dimensional case. One difficulty is the monotone approximation of cross-derivatives, which guarantees convergence to the viscosity solution. We propose a scheme combining piecewise freezing of the policies in time with a suitable spatial discretisation to establish convergence for a wide class of equations, and give numerical illustrations for a diffusion equation with uncertain parameters. These equations arise, for instance, in the valuation of financial derivatives under model uncertainty. This is joint work with Peter Forsyth.
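A minimal example of a monotone scheme for an HJB equation with uncertain parameters is the one-dimensional toy problem v_t = max_sigma (sigma^2/2) v_xx, where the optimal 'policy' picks the volatility node by node. This is purely illustrative (no cross-derivatives, explicit time stepping, invented parameter values); the talk's scheme with time-frozen policies is more sophisticated.

```python
def hjb_step(v, dx, dt, sig_lo, sig_hi):
    """One explicit, monotone time step for v_t = max_sigma (sigma^2/2) v_xx:
    the pointwise maximisation reduces to choosing sigma from the sign
    of the second difference at each node."""
    new = v[:]                                  # endpoints stay fixed (Dirichlet)
    for i in range(1, len(v) - 1):
        d2 = (v[i + 1] - 2.0 * v[i] + v[i - 1]) / dx ** 2
        sig = sig_hi if d2 > 0 else sig_lo      # maximise the generator
        new[i] = v[i] + dt * 0.5 * sig ** 2 * d2
    return new

def solve(payoff, nx=41, nt=200, T=0.1, sig_lo=0.2, sig_hi=0.4):
    """March the toy uncertain-volatility problem forward from the payoff."""
    dx = 1.0 / (nx - 1)
    dt = T / nt
    assert dt <= dx * dx / sig_hi ** 2          # CFL bound keeps the scheme monotone
    v = [payoff(i * dx) for i in range(nx)]
    for _ in range(nt):
        v = hjb_step(v, dx, dt, sig_lo, sig_hi)
    return v

# Worst-case value of a hat-shaped payoff under volatility uncertainty:
v = solve(lambda x: max(0.0, 0.2 - abs(x - 0.5)))
```

Monotonicity (each updated value is a convex combination of neighbouring old values under the CFL restriction) is what makes such schemes converge to the viscosity solution; the multi-dimensional difficulty discussed in the talk is preserving this property in the presence of cross-derivative terms.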