Computational Mathematics and Applications (past)
|
Thu, 18/10/2012 14:00 |
Dr Lehel Banjai (Heriot-Watt University) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
|
We will discuss the numerical simulation of acoustic wave propagation with localized inhomogeneities. To do this we will apply a standard finite element method (FEM) in space and explicit time-stepping in time on a finite spatial domain containing the inhomogeneities. The equations in the exterior computational domain will be dealt with using a time-domain boundary integral formulation discretized by the Galerkin boundary element method (BEM) in space and convolution quadrature in time.
We will give the analysis of the proposed method, starting with the proof of a positivity preservation property of convolution quadrature as a consequence of a variant of the Herglotz theorem. Combining this result with standard energy analysis of leap-frog discretization of the interior equations will give us both stability and convergence of the method. Numerical results will also be given. |
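To fix ideas, here is a minimal sketch of the explicit leap-frog time-stepping used for the interior equations, applied to a 1D model wave equation with a spatially varying wave speed standing in for the localized inhomogeneity. The exterior BEM/convolution-quadrature coupling is not shown, and all parameters are illustrative.

```python
# Leap-frog for u_tt = c(x)^2 u_xx on [0,1] with homogeneous Dirichlet BCs.
import numpy as np

n, L, T = 200, 1.0, 0.5
x = np.linspace(0.0, L, n + 1)
h = L / n
c = np.where(np.abs(x - 0.5) < 0.1, 0.5, 1.0)    # localized inhomogeneity
dt = 0.9 * h / c.max()                           # CFL-type stability bound

def laplacian(v):
    out = np.zeros_like(v)
    out[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / h**2
    return out

u_prev = np.exp(-200.0 * (x - 0.3) ** 2)         # initial pulse
u_prev[0] = u_prev[-1] = 0.0
u = u_prev + 0.5 * dt**2 * c**2 * laplacian(u_prev)  # first step, zero velocity

for _ in range(int(T / dt)):
    u_next = 2.0 * u - u_prev + dt**2 * c**2 * laplacian(u)
    u_next[0] = u_next[-1] = 0.0
    u_prev, u = u, u_next
```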
|||
|
Thu, 11/10/2012 14:00 |
Dr Patrick Farrell (Imperial College London) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
|
The derivatives of PDE models are key ingredients in many
important algorithms of computational science. They find applications in
diverse areas such as sensitivity analysis, PDE-constrained
optimisation, continuation and bifurcation analysis, error estimation,
and generalised stability theory.
These derivatives, computed using the so-called tangent linear and adjoint models, have made an enormous impact in certain scientific fields (such as aeronautics, meteorology, and oceanography). However, their use in other areas has been hampered by the great practical difficulty of the derivation and implementation of tangent linear and adjoint models. In his recent book, Naumann (2011) describes the problem of the robust automated derivation of parallel tangent linear and adjoint models as "one of the great open problems in the field of high-performance scientific computing". In this talk, we present an elegant solution to this problem for the common case where the original discrete forward model may be written in variational form, and discuss some of its applications. |
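The talk concerns the automated derivation of such models; the following is a hand-coded sketch of what an adjoint computation delivers for a toy discrete forward model A(m) u = b with functional J(u) = ||u - d||^2 / 2. All matrices and data here are illustrative placeholders, not the talk's method.

```python
# Adjoint gradient: solve A(m)^T lam = dJ/du, then dJ/dm = -lam^T (dA/dm) u.
import numpy as np

n = 5
rng = np.random.default_rng(0)
A0 = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
A1 = 0.1 * rng.standard_normal((n, n))    # dA/dm for a scalar parameter m
b = rng.standard_normal(n)
d = rng.standard_normal(n)

def solve_forward(m):
    return np.linalg.solve(A0 + m * A1, b)

m = 0.3
u = solve_forward(m)
lam = np.linalg.solve((A0 + m * A1).T, u - d)    # one adjoint solve
grad = -lam @ (A1 @ u)                           # gradient of J w.r.t. m

# sanity check against a central finite difference
J = lambda m: 0.5 * np.sum((solve_forward(m) - d) ** 2)
eps = 1e-6
print(grad, (J(m + eps) - J(m - eps)) / (2 * eps))
```

The point of the adjoint approach is visible even here: one extra linear solve yields the gradient with respect to any number of parameters, independent of their count.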
|||
|
Thu, 04/10/2012 14:00 |
Professor Max Gunzburger (Florida State University) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| The melting of ice in Greenland and Antarctica would, of course, be by far the major contributor to any possible sea-level rise. Thus, to make science-based predictions about sea-level rise, it is crucial that the ice sheets covering those land masses be accurately mathematically modeled and computationally simulated. In fact, the 2007 IPCC report on the state of the climate did not include predictions about sea level rise because it was concluded there that the science of ice sheets was not developed to a sufficient degree. As a result, predictions could not be rationally and confidently made. In recent years, there has been much activity in trying to improve the state-of-the-art of ice sheet modeling and simulation. In this lecture, we review a hierarchy of mathematical models for the flow of ice, pointing out the relative merits and demerits of each, showing how they are coupled to other climate system components (ocean and atmosphere), and discussing where further modeling work is needed. We then discuss algorithmic approaches for the approximate solution of ice sheet flow models and present and compare results obtained from simulations using the different mathematical models. | |||
|
Thu, 14/06/2012 14:00 |
Dr Christoph Reisinger (University of Oxford) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
|
While a general framework of approximating the solution to Hamilton-Jacobi-Bellman (HJB) equations by difference methods is well established, and efficient numerical algorithms are available for one-dimensional problems, much less is known in the multi-dimensional case. One difficulty is the construction of monotone approximations of cross-derivatives, since monotonicity guarantees convergence to the viscosity solution. We propose a scheme combining piecewise freezing of the policies in time with a suitable spatial discretisation to establish convergence for a wide class of equations, and give numerical illustrations for a diffusion equation with uncertain parameters. These equations arise, for instance, in the valuation of financial derivatives under model uncertainty. This is joint work with Peter Forsyth. |
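As background, here is a minimal sketch of a monotone explicit scheme for a 1D uncertain-volatility HJB equation u_t = sup over sigma in [s_min, s_max] of (sigma^2/2) u_xx. In 1D the pointwise maximum over the two extreme volatilities is exact and no cross-derivative terms (the hard part in several dimensions, and the subject of the talk) appear; all parameters are illustrative.

```python
import numpy as np

n, T = 100, 1.0
x = np.linspace(-2.0, 2.0, n + 1)
h = x[1] - x[0]
s_min, s_max = 0.1, 0.4
dt = 0.9 * h**2 / s_max**2          # explicit monotone stability restriction

u = np.maximum(1.0 - np.abs(x), 0.0)        # payoff (after time reversal)
for _ in range(int(T / dt)):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    # Bellman max: the maximizing volatility is s_max where u_xx > 0,
    # s_min where u_xx < 0, so the pointwise max is exact here.
    gen = 0.5 * np.maximum(s_max**2 * uxx, s_min**2 * uxx)
    u = u + dt * gen
```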
|||
|
Thu, 07/06/2012 14:00 |
Dr Chris Farmer (University of Oxford) |
Computational Mathematics and Applications |
Rutherford Appleton Laboratory, nr Didcot |
|
Uncertainty quantification can begin by specifying the initial state of a system as a probability measure. Part of the state (the 'parameters') might not evolve, and might not be directly observable. Many inverse problems are generalisations of uncertainty quantification such that one modifies the probability measure to be consistent with measurements, a forward model and the initial measure. The inverse problem, interpreted as computing the posterior probability measure of the states, including the parameters and the variables, from a sequence of noise-corrupted observations, is reviewed in the talk. Bayesian statistics provides a natural framework for a solution but leads to very challenging computational problems, particularly when the dimension of the state space is very large, as when it arises from the discretisation of a partial differential equation.
In this talk we show how the Bayesian framework leads to a new algorithm - the 'Variational Smoothing Filter' - that unifies the leading techniques in use today. In particular the framework provides an interpretation and generalisation of Tikhonov regularisation, a method of forecast verification and a way of quantifying and managing uncertainty. To deal with the problem that a good initial prior may not be Gaussian, as with a general prior intended to describe, for example, a geological structure, a Gaussian mixture prior is used. This has many desirable properties, including ease of sampling to make 'numerical rocks' or 'numerical weather' for visualisation purposes and statistical summaries, and in principle can approximate any probability density. Robustness is sought by combining a variational update with this full mixture representation of the conditional posterior density. |
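The ease of sampling mentioned above is easy to see in code. Here is a minimal sketch of drawing prior samples ('numerical rocks') from a Gaussian mixture; the component weights, means and covariances are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([0.6, 0.4])              # mixture weights, summing to 1
means = [np.zeros(3), np.full(3, 2.0)]      # e.g. two facies types
covs = [np.eye(3), 0.5 * np.eye(3)]

def sample_prior(n):
    # pick a mixture component per sample, then draw from that Gaussian
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[k], covs[k]) for k in comps])

samples = sample_prior(1000)                # ensemble for visualisation
```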
|||
|
Thu, 31/05/2012 14:00 |
Dr David Kay (University of Oxford) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| This talk will present a computationally efficient method of simulating cardiac electrical propagation using an adaptive high-order finite element method. The refinement strategy automatically concentrates computational effort where it is most needed in space on each time-step. We drive the adaptivity using a residual-based error indicator, and demonstrate, using norms of the error, that the indicator allows the error to be controlled successfully. Our results using two-dimensional domains of varying complexity demonstrate that significant improvements in efficiency are possible over the state-of-the-art, indicating that these methods should be investigated for implementation in whole-heart scale software. | |||
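The skeleton of indicator-driven adaptivity is a mark-and-refine loop. The following is a minimal 1D sketch with a made-up per-element indicator; the talk's method additionally adapts the polynomial order and works on the cardiac equations, none of which is shown.

```python
import numpy as np

def refine(nodes, eta, frac=0.3):
    """Bisect the elements carrying the largest indicators
    (fixed-fraction marking)."""
    k = max(1, int(frac * len(eta)))
    marked = np.argsort(eta)[-k:]                      # k largest indicators
    mids = 0.5 * (nodes[marked] + nodes[marked + 1])   # element midpoints
    return np.sort(np.concatenate([nodes, mids]))

nodes = np.linspace(0.0, 1.0, 11)
for _ in range(3):
    centers = 0.5 * (nodes[:-1] + nodes[1:])
    eta = np.exp(-100.0 * (centers - 0.5) ** 2)        # stand-in indicator
    nodes = refine(nodes, eta)                         # mesh concentrates at 0.5
```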
|
Thu, 24/05/2012 14:00 |
Dr Elias Jarlebring (KTH Stockholm) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| The Arnoldi method for standard eigenvalue problems possesses several attractive properties making it robust, reliable and efficient for many problems. We will present here a new algorithm equivalent to the Arnoldi method, but designed for nonlinear eigenvalue problems (NEPs), i.e., problems associated with a matrix depending on a parameter in a nonlinear but analytic way. As a first result we show that the reciprocals of the eigenvalues of the NEP are the eigenvalues of an infinite dimensional operator. We consider the Arnoldi method for this operator and show that, with a particular choice of starting function and a particular choice of scalar product, the structure of the operator can be exploited in a very effective way. The structure of the operator is such that when the Arnoldi method is started with a constant function, the iterates will be polynomials. For a large class of NEPs, we show that we can carry out the infinite dimensional Arnoldi algorithm for the operator in arithmetic based on standard linear algebra operations on vectors and matrices of finite size. This is achieved by representing the polynomials by vector coefficients. The resulting algorithm is by construction such that it is completely equivalent to the standard Arnoldi method and also inherits many of its attractive properties, which are illustrated with examples. | |||
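For reference, here is a minimal sketch of the standard Arnoldi method that the infinite dimensional algorithm mirrors: build an orthonormal Krylov basis V and a small Hessenberg matrix H whose eigenvalues (Ritz values) approximate those of A. In the talk's method, A is replaced by the infinite dimensional operator acting on polynomials stored as stacked vector coefficients.

```python
import numpy as np

def arnoldi(A, v0, m):
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)      # (no breakdown handling here)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

A = np.diag(np.arange(1.0, 101.0))
V, H = arnoldi(A, np.random.default_rng(2).standard_normal(100), 20)
ritz = np.linalg.eigvals(H[:20, :20])        # approximates extremal eigenvalues
```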
|
Thu, 17/05/2012 14:00 |
Dr Mike Botchev (University of Twente) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| Exponential time integrators are a powerful tool for the numerical solution of time dependent problems. The actions of the matrix functions on vectors, necessary for exponential integrators, can be efficiently computed by different elegant numerical techniques, such as Krylov subspaces. Unfortunately, in some situations the additional work required by exponential integrators per time step does not pay off because the time step cannot be increased sufficiently due to accuracy restrictions. To get around this problem, we propose the so-called time-stepping-free approach. This approach works for linear ordinary differential equation (ODE) systems where the time dependent part forms a small-dimensional subspace. In this case the time dependence can be projected out by block Krylov methods onto the small, projected ODE system. Thus, there is just one block Krylov subspace involved and there are no time steps. We refer to this method as EBK, the exponential block Krylov method. The accuracy of EBK is determined by the Krylov subspace error and the solution accuracy in the projected ODE system. EBK works well for linear systems; its extension to nonlinear problems is an open problem, and we discuss possible ways to achieve such an extension. | |||
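As background, here is a minimal sketch of the Krylov evaluation of a matrix exponential action y = exp(tA)b that exponential integrators rely on: project A onto a small Krylov subspace and exponentiate the small Hessenberg factor. EBK replaces the single vector b by a block spanning the time-dependent sources; the test matrix below is illustrative.

```python
import numpy as np
from scipy.linalg import expm

def expm_krylov(A, b, t, m=30):
    """Approximate exp(t A) b via an m-step Arnoldi projection."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)      # (no breakdown handling here)
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

A = -np.diag(np.arange(1.0, 201.0))          # stiff diagonal test matrix
y = expm_krylov(A, np.ones(200), 0.1)
print(np.linalg.norm(y - np.exp(-0.1 * np.arange(1.0, 201.0))))
```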
|
Thu, 10/05/2012 14:00 |
Professor Mario Bebendorf (University of Bonn) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
We present recent numerical techniques for the treatment of integral formulations of Helmholtz boundary value problems in the case of high frequencies. The combination of ℋ-matrices with further developments of the adaptive cross approximation allows one to solve such problems with logarithmic-linear complexity independent of the frequency. An advantage of this new approach over existing techniques such as fast multipole methods is its stability over the whole range of frequencies, whereas other methods are efficient either for low or high frequencies. |
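Here is a minimal sketch of plain adaptive cross approximation (ACA) with partial pivoting: a low-rank factorization A ≈ UV is built from a few rows and columns of a block without ever forming it. The stabilised variants and the ℋ-matrix block partitioning used in the talk are omitted, and the smooth test kernel is illustrative.

```python
import numpy as np

def aca(get_row, get_col, shape, tol=1e-8, max_rank=50):
    m, n = shape
    us, vs, used = [], [], set()
    i = 0                                            # first pivot row
    for _ in range(max_rank):
        r = get_row(i) - sum(u[i] * v for u, v in zip(us, vs))
        j = int(np.argmax(np.abs(r)))                # pivot column of residual
        if abs(r[j]) < tol:
            break
        v = r / r[j]
        u = get_col(j) - sum(vk[j] * uk for uk, vk in zip(us, vs))
        us.append(u); vs.append(v); used.add(i)
        cand = np.abs(u)
        cand[list(used)] = -1.0                      # do not reuse pivot rows
        i = int(np.argmax(cand))
    if not us:
        return np.zeros((m, 0)), np.zeros((0, n))
    return np.column_stack(us), np.vstack(vs)

m = n = 200
row = lambda i: 1.0 / (1.0 + i + np.arange(n))
col = lambda j: 1.0 / (1.0 + np.arange(m) + j)
U, V = aca(row, col, (m, n))
A = 1.0 / (1.0 + np.add.outer(np.arange(m), np.arange(n)))
print(U.shape[1], np.linalg.norm(A - U @ V) / np.linalg.norm(A))
```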
|||
|
Thu, 03/05/2012 14:00 |
Dr Cécile Piret (Université catholique de Louvain) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| Although much work has been done on using RBFs for reconstructing arbitrary surfaces, using RBFs to solve PDEs on arbitrary manifolds is only now being considered and is the subject of this talk. We will review current methods and introduce a new technique that is loosely inspired by the Closest Point Method. This new technique, the Orthogonal Gradients Method (OGr), benefits from the advantages of using RBFs: the simplicity and high accuracy, but also the meshfree character, which gives the flexibility to represent the most complex geometries in any dimension. | |||
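The common building block of these methods is RBF interpolation on scattered nodes. Here is a minimal sketch with a Gaussian kernel; the orthogonal-gradients construction on a manifold is not shown, and the node set, data and shape parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, (50, 2))              # scattered nodes
f = np.sin(np.pi * X[:, 0]) * X[:, 1]            # data to interpolate
eps = 2.0                                        # shape parameter

def phi(r):                                      # Gaussian RBF
    return np.exp(-(eps * r) ** 2)

# symmetric collocation system: sum_j lam_j phi(|x_i - x_j|) = f_i
R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
lam = np.linalg.solve(phi(R), f)

def interp(x):
    return phi(np.linalg.norm(X - x, axis=-1)) @ lam

print(interp(X[0]), f[0])                        # reproduces the data
```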
|
Thu, 26/04/2012 14:00 |
Dr Alfredo Buttari (CNRS-IRIT Toulouse) |
Computational Mathematics and Applications |
Rutherford Appleton Laboratory, nr Didcot |
| The advent of multicore processors represents a disruptive event in the history of computer science, as conventional parallel programming paradigms are proving incapable of fully exploiting their potential for concurrent computations. The need for different or new programming models clearly arises from recent studies which identify fine granularity and dynamic execution as the keys to achieving high efficiency on multicore systems. This talk shows how these models can be effectively applied to the multifrontal method for the QR factorization of sparse matrices, achieving very high efficiency through a fine-grained partitioning of data and a dynamic scheduling of computational tasks that relies on a dataflow parallel programming model. Moreover, preliminary results will be discussed showing how the multifrontal QR factorization can be accelerated by using low-rank approximation techniques. | |||
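Here is a minimal sketch of the dataflow execution model such approaches build on: tasks become runnable as soon as the tasks they depend on have completed, rather than following a fixed fork-join order. The task graph below is a toy DAG with placeholder names, not the multifrontal QR factorization itself.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# toy DAG: task name -> list of prerequisite tasks (illustrative names)
deps = {"panel0": [], "update01": ["panel0"], "update02": ["panel0"],
        "panel1": ["update01"], "update12": ["panel1", "update02"]}

def run(name):
    print("running", name)       # stand-in for a block factorization kernel

done, futures = set(), {}
with ThreadPoolExecutor() as pool:
    while len(done) < len(deps):
        for t, pre in deps.items():
            # schedule a task as soon as all of its inputs are available
            if t not in futures and all(p in done for p in pre):
                futures[t] = pool.submit(run, t)
        for t, f in list(futures.items()):
            if t not in done and f.done():
                f.result()
                done.add(t)
        time.sleep(0.01)
```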
|
Thu, 19/04/2012 14:00 |
Professor Endre Süli (University of Oxford) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
|
The talk will survey recent developments concerning the existence and the approximation of global weak solutions to a general class of coupled microscopic-macroscopic bead-spring chain models that arise in the kinetic theory of dilute solutions of polymeric liquids with noninteracting polymer chains. The class of models involves the unsteady incompressible Navier-Stokes equations in a bounded domain for the velocity and the pressure of the fluid, with an elastic extra-stress tensor appearing on the right-hand side of the momentum equation. The extra-stress tensor stems from the random movement of the polymer chains and is defined by the Kramers expression through the associated probability density function that satisfies a Fokker-Planck type parabolic equation. Models of this kind were proposed in the work of Hans Kramers in the early 1940s, and the existence of global weak solutions to the model has been a long-standing question in the mathematical analysis of kinetic models of dilute polymers.
We also discuss computational challenges associated with the numerical approximation of the high-dimensional Fokker-Planck equation featuring in the model. |
|||
|
Thu, 08/03/2012 14:00 |
Professor Rosie Renaut (Arizona State University) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| In this talk I review the use of the spectral decomposition for understanding the solution of ill-posed inverse problems. It is immediately apparent that regularization is needed in order to find stable solutions. These solutions, however, do not typically allow reconstruction of signal features such as edges. Generalized regularization assists but is still insufficient, and methods of total variation are commonly suggested as an alternative. In the talk I consider application of standard approaches from Tikhonov regularization for finding appropriate regularization parameters in the total variation augmented Lagrangian implementations. Areas for future research will be considered. | |||
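The SVD viewpoint the talk starts from is compact in code: Tikhonov regularization damps small singular values through filter factors sigma/(sigma^2 + lambda^2), which stabilises the solution but smooths away edges, motivating total variation methods. The blurring matrix, true signal and parameter below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
A = np.array([[1.0 / (1 + abs(i - j)) ** 2 for j in range(n)]
              for i in range(n)])                 # ill-conditioned blur
x_true = (np.arange(n) > n // 2).astype(float)    # a step: an 'edge'
b = A @ x_true + 1e-3 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
lam = 1e-2                                        # regularization parameter
filt = s / (s**2 + lam**2)                        # Tikhonov filter factors
x_reg = Vt.T @ (filt * (U.T @ b))                 # stable, but edge is smeared
```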
|
Thu, 01/03/2012 14:00 |
Professor Paul Houston (University of Nottingham) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
In this talk we present an overview of some recent developments concerning the a posteriori error analysis and adaptive mesh design of h- and p-version discontinuous Galerkin finite element methods for the numerical approximation of second-order quasilinear elliptic boundary value problems. In particular, we consider the derivation of computable bounds on the error measured in terms of an appropriate (mesh-dependent) energy norm in the case when a two-grid approximation is employed. In this setting, the fully nonlinear problem is first computed on a coarse finite element space V_H. The resulting 'coarse' numerical solution is then exploited to provide the necessary data needed to linearise the underlying discretization on the finer space V_h; thereby, only a linear system of equations is solved on the richer space V_h. Here, an adaptive hp-refinement algorithm is proposed which automatically selects the local mesh size and local polynomial degrees on both the coarse and fine spaces V_H and V_h, respectively. Numerical experiments confirming the reliability and efficiency of the proposed mesh refinement algorithm are presented. |
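Here is a minimal sketch of the two-grid idea on a 1D quasilinear model problem -((1 + u^2) u')' = 1 with u(0) = u(1) = 0, using finite differences in place of the talk's hp-DG discretisation: iterate the nonlinearity to convergence on the coarse mesh only, then solve a single linear problem on the fine mesh with the coefficient frozen at the interpolated coarse solution. All meshes and the Picard linearisation are illustrative choices.

```python
import numpy as np

def solve(n, w=None, sweeps=30):
    """Solve -((1+v^2) u')' = 1 on an n-cell mesh. If w is given, the
    coefficient is frozen at v = w (one linear solve); otherwise v = u is
    updated by a Picard fixed-point iteration."""
    h = 1.0 / n
    u = np.zeros(n + 1)
    for _ in range(1 if w is not None else sweeps):
        v = u if w is None else w
        am = 1.0 + (0.5 * (v[:-1] + v[1:])) ** 2       # coefficient at midpoints
        A = (np.diag(am[:-1] + am[1:])
             - np.diag(am[1:-1], 1) - np.diag(am[1:-1], -1)) / h**2
        u[1:-1] = np.linalg.solve(A, np.ones(n - 1))
    return u

uc = solve(10)                                          # nonlinear solve, coarse mesh
w = np.interp(np.linspace(0, 1, 101), np.linspace(0, 1, 11), uc)
uf = solve(100, w=w)                                    # one linear solve, fine mesh
```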
|||
|
Thu, 23/02/2012 14:00 |
Dr Stephen Langdon (University of Reading) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| Standard numerical schemes for acoustic scattering problems suffer from the restriction that the number of degrees of freedom required to maintain a prescribed level of accuracy must grow at least linearly with respect to frequency. In this talk, we review recent progress on the development and analysis of hybrid numerical-asymptotic boundary integral equation methods for these problems. The key idea of this approach is to form an ansatz for the solution based on knowledge of the high frequency asymptotics, allowing one to achieve any required accuracy via the approximation of only (in many cases provably) non-oscillatory functions. In particular, we discuss very recent work extending these ideas for the first time to non-convex scatterers. | |||
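The gain from the hybrid ansatz is easy to demonstrate in one dimension: if the oscillatory factor exp(ikx) is supplied by the asymptotics, only a slowly varying amplitude needs approximating, at a cost independent of the wavenumber k. The amplitude and wavenumber below are illustrative.

```python
import numpy as np

k = 500.0                                        # high wavenumber
x = np.linspace(0.0, 1.0, 2000)
v = 1.0 / (1.0 + x**2)                           # non-oscillatory amplitude
f = v * np.exp(1j * k * x)                       # oscillatory 'solution'

# approximate v, not f: divide out the known phase, fit a low-degree polynomial
coeffs = np.polynomial.polynomial.polyfit(x, (f * np.exp(-1j * k * x)).real, 8)
v_fit = np.polynomial.polynomial.polyval(x, coeffs)
print(np.max(np.abs(v - v_fit)))                 # small, with only 9 unknowns
```

Resolving f itself at this wavenumber would instead require the number of unknowns to grow with k.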
|
Thu, 16/02/2012 14:00 |
Professor Spencer Sherwin (Imperial College London) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| |||
|
Thu, 09/02/2012 14:00 |
Dr Yuji Nakatsukasa (University of Manchester) |
Computational Mathematics and Applications |
Rutherford Appleton Laboratory, nr Didcot |
|
Computing the eigenvalue decomposition of a symmetric matrix and the singular value decomposition of a general matrix are two of the central tasks in numerical linear algebra. There has been much recent work in the development of linear algebra algorithms that minimize communication cost. However, the reduction in communication cost sometimes comes at the expense of significantly more arithmetic and potential instability.
In this talk I will describe algorithms for the two decompositions that have optimal communication cost and arithmetic cost within a small factor of those for the best known algorithms. The key idea is to use the best rational approximation of the sign function, which lets the algorithm converge in just two steps. The algorithms are backward stable and easily parallelizable. Preliminary numerical experiments demonstrate their efficiency. |
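To illustrate the role of the sign function in spectral divide-and-conquer, here is a minimal sketch using the classical Newton iteration X <- (X + X^{-1})/2 rather than the talk's optimal rational approximations (which converge in just two steps): (I + sign(A))/2 projects onto the invariant subspace for eigenvalues to the right of the imaginary axis, splitting the eigenproblem in two. The test matrix is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = Q @ np.diag([-3.0, -2.0, -1.0, -0.5, 0.5, 1.0, 2.0, 3.0]) @ Q.T

X = A.copy()
for _ in range(30):                          # Newton iteration for sign(A)
    X = 0.5 * (X + np.linalg.inv(X))

P = 0.5 * (np.eye(8) + X)                    # spectral projector
print(np.round(np.trace(P)))                 # number of positive eigenvalues: 4
```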
|||
|
Thu, 02/02/2012 14:00 |
Dr Coralia Cartis (University of Edinburgh) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| We show that the steepest-descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to the steepest-descent's global worst-case complexity bound. This implies that the latter upper bound is essentially tight for steepest descent and that Newton's method may be as slow as the steepest-descent method in the worst case. Then the cubic regularization of Newton's method (Griewank (1981), Nesterov & Polyak (2006)) is considered and extended to large-scale problems, while preserving the same order of its improved worst-case complexity (by comparison to that of steepest-descent); this improved worst-case bound is also shown to be tight. We further show that the cubic regularization approach is, in fact, optimal from a worst-case complexity point of view amongst a wide class of second-order methods. The worst-case problem-evaluation complexity of constrained optimization will also be discussed. This is joint work with Nick Gould (Rutherford Appleton Laboratory, UK) and Philippe Toint (University of Namur, Belgium). | |||
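Here is a minimal sketch of one cubic-regularization step on a scalar nonconvex test function: minimise the local model m(s) = f + g s + (H/2) s^2 + (sigma/3)|s|^3 and adjust sigma when a step fails. The brute-force model minimisation and the crude acceptance rule stand in for the Krylov subproblem solvers and trust-region-style updates used at large scale; the test function is illustrative.

```python
import numpy as np

f = lambda x: x**4 - 3.0 * x**2 + x              # nonconvex test function
g = lambda x: 4.0 * x**3 - 6.0 * x + 1.0         # gradient
H = lambda x: 12.0 * x**2 - 6.0                  # Hessian

x, sigma = 2.0, 1.0
for _ in range(20):
    s = np.linspace(-2.0, 2.0, 4001)
    model = f(x) + g(x) * s + 0.5 * H(x) * s**2 + sigma / 3.0 * np.abs(s) ** 3
    step = s[np.argmin(model)]                   # brute-force model minimiser
    if f(x + step) < f(x):                       # crude acceptance test
        x += step
    else:
        sigma *= 2.0                             # larger sigma: shorter steps
print(x, g(x))                                   # near a stationary point
```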
|
Thu, 26/01/2012 14:00 |
Dr Andreas Grothey (University of Edinburgh) |
Computational Mathematics and Applications |
Gibson Grd floor SR |
| We present progress on an Interior Point based multi-step solution approach for stochastic programming problems. Our approach works with a series of scenario trees that can be seen as successively more accurate discretizations of an underlying probability distribution and employs IPM warmstarts to "lift" approximate solutions from one tree to the next larger tree. | |||
|
Thu, 19/01/2012 14:00 |
Dr Jennifer Scott (STFC, Rutherford Appleton Laboratory) |
Computational Mathematics and Applications |
Rutherford Appleton Laboratory, nr Didcot
| |||