Forthcoming events in this series
High-order surface integral algorithms for 3D computational electromagnetics
Abstract
We discuss a class of high-order spectral-Galerkin surface integral algorithms, with specific focus on simulating the scattering of electromagnetic waves by a collection of three-dimensional deterministic and stochastic particles.
Numerical Methods for Monge-Kantorovich Transportation Problems
Abstract
In the eighteenth century Gaspard Monge considered the problem of finding the best way of moving a pile of material from one site to another. This optimal transport problem has many applications such as mesh generation, moving mesh methods, image registration, image morphing, optical design, cartograms, probability theory, etc. The solution to an optimal transport problem can be found by solving the Monge-Amp\`{e}re equation, a highly nonlinear second order elliptic partial differential equation. Leonid Kantorovich, however, showed that it is possible to analyse optimal transport problems in a framework that naturally leads to a linear programming formulation. In recent years several efficient methods have been proposed for solving the Monge-Amp\`{e}re equation. For the linear programming problem, standard methods do not exploit the special properties of the solution and require a number of operations that is quadratic or even cubic in the number of points in the discretisation. In this talk I will discuss techniques that can be used to obtain more efficient methods.
Joint work with Chris Budd (University of Bath).
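A minimal sketch of the Kantorovich linear-programming formulation (generic textbook material, not the techniques of the talk; the point clouds, weights and cost below are arbitrary choices). Posing the discrete problem directly as an LP and handing it to a general-purpose solver illustrates exactly the poor scaling with the number of points that the talk addresses.

    # Discrete Monge-Kantorovich transport as a linear program (illustrative only).
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m = 5, 6                                  # sizes of source and target point clouds
    x, y = rng.random((n, 2)), rng.random((m, 2))
    mu = np.full(n, 1.0 / n)                     # source masses
    nu = np.full(m, 1.0 / m)                     # target masses
    C = ((x[:, None, :] - y[None, :, :])**2).sum(-1)   # cost c(x_i, y_j)

    # Marginal constraints: row sums of the plan equal mu, column sums equal nu.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([mu, nu])

    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    plan = res.x.reshape(n, m)                   # optimal transport plan
    print("optimal transport cost:", res.fun)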
RBF collocation methods for delayed differential equations
Abstract
Meshless (or meshfree) methods are a relatively new numerical approach for the solution of ordinary and partial differential equations. They offer the geometrical flexibility of finite elements but without requiring connectivity of the discretization support (i.e. a mesh). Meshless methods based on the collocation of radial basis functions (RBF methods) are particularly easy to code, and have a number of theoretical advantages as well as practical drawbacks. In this talk, an adaptive RBF scheme is presented for a novel application, namely the solution of (a rather broad class of) delay and neutral differential equations.
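A minimal sketch of RBF collocation in its simplest (Kansa-type) form, applied to a toy boundary value problem rather than a delay equation; the Gaussian kernel, shape parameter and node count are ad hoc choices, and the least-squares solve is a crude guard against the ill-conditioning that flat kernels are known to cause.

    # Gaussian RBF collocation for u''(x) = f(x), u(0) = u(1) = 0 (toy problem).
    import numpy as np

    eps = 6.0                                  # shape parameter (ad hoc)
    x = np.linspace(0.0, 1.0, 15)              # collocation points = RBF centres
    f = lambda t: -np.pi**2 * np.sin(np.pi * t)   # exact solution u = sin(pi x)

    D = x[:, None] - x[None, :]
    phi = np.exp(-(eps * D)**2)                # phi_j evaluated at x_i
    phi_xx = (4 * eps**4 * D**2 - 2 * eps**2) * phi

    A = phi_xx.copy()
    A[0, :], A[-1, :] = phi[0, :], phi[-1, :]  # replace first/last rows by boundary conditions
    rhs = f(x)
    rhs[0] = rhs[-1] = 0.0

    c = np.linalg.lstsq(A, rhs, rcond=None)[0] # expansion coefficients
    u = phi @ c
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())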
A Preconditioned Conjugate Gradient Method for Optimal Control Problems with Control and State Constraints
Abstract
We consider saddle point problems arising as (linearized) optimality conditions in elliptic optimal control problems. The efficient solution of such systems is a core ingredient in second-order optimization algorithms. In the spirit of Bramble and Pasciak, the preconditioned systems are symmetric and positive definite with respect to a suitable scalar product. We extend previous work by Schoeberl and Zulehner and consider problems with control and state constraints. A particular feature of this approach is that an appropriate symmetric indefinite preconditioner can be constructed from standard preconditioners, such as multigrid cycles, for those matrices which represent the inner products.
Numerical examples in 2D and 3D are given which illustrate the performance of the method, and limitations and open questions are addressed.
A Primal-Dual Regularized Interior-Point Method for Convex Quadratic Programs
Abstract
Interior-point methods for linear and convex quadratic programming
require the solution of a sequence of symmetric indefinite linear
systems which are used to derive search directions. Safeguards are
typically required in order to handle free variables or rank-deficient
Jacobians. We propose a consistent framework and accompanying
theoretical justification for regularizing these linear systems. Our
approach is akin to the proximal method of multipliers and can be
interpreted as a simultaneous proximal-point regularization of the
primal and dual problems. The regularization is termed "exact" to
emphasize that, although the problems are regularized, the algorithm
recovers a solution of the original problem. Numerical results will be
presented. If time permits we will illustrate current research on a
matrix-free implementation.
This is joint work with Michael Friedlander, University of British Columbia, Canada
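Schematically, and in notation chosen here rather than taken from the talk, such a regularization replaces the usual step equations by a uniformly nonsingular symmetric system of the form
\[
\begin{pmatrix} H + \rho I & J^T \\ J & -\delta I \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
= \begin{pmatrix} -g \\ -r \end{pmatrix},
\qquad \rho,\ \delta > 0,
\]
where the $-\delta I$ block guards against rank-deficient Jacobians, the $\rho$-shift handles free variables, and "exactness" refers to the property stated above that the iterates nevertheless converge to a solution of the original, unregularized problem.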
Spectral analysis of the discrete Helmholtz operator preconditioned with a shifted Laplacian
Abstract
Shifted Laplace preconditioners have attracted considerable attention as
a technique to speed up convergence of iterative solution methods for the
Helmholtz equation. In the talk we present a comprehensive spectral
analysis of the discrete Helmholtz operator preconditioned with a shifted
Laplacian. Our analysis is valid under general conditions. The propagating
medium can be heterogeneous, and the analysis also holds for different types
of damping, including a radiation condition for the boundary of the computational
domain. By combining the results of the spectral analysis of the
preconditioned Helmholtz operator with an upper bound on the GMRES-residual
norm we are able to derive an optimal value for the shift, and to
explain the mesh-dependency of the convergence of GMRES preconditioned
with a shifted Laplacian. We will illustrate our results with a seismic test
problem.
Joint work with: Yogi Erlangga (University of British Columbia) and Kees Vuik (TU Delft)
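A minimal one-dimensional sketch of the construction (not the heterogeneous seismic setting of the talk): the Helmholtz operator is preconditioned by a complex-shifted Laplacian inside GMRES, with a direct factorization standing in for the multigrid cycle that would be used in practice; grid size, wavenumber and shift are arbitrary choices.

    # 1D Helmholtz A = -d^2/dx^2 - k^2 preconditioned by M = -d^2/dx^2 - (1 + 0.5i) k^2.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, k = 200, 20.0
    h = 1.0 / (n + 1)
    L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2
    A = (L - k**2 * sp.identity(n)).astype(complex).tocsc()        # Helmholtz, Dirichlet BCs
    M = (L - (1.0 + 0.5j) * k**2 * sp.identity(n)).tocsc()         # shifted Laplacian

    Msolve = spla.splu(M)                        # stand-in for a multigrid cycle on M
    prec = spla.LinearOperator(A.shape, matvec=Msolve.solve, dtype=complex)

    b = np.zeros(n, dtype=complex)
    b[n // 2] = 1.0 / h                          # point source
    u, info = spla.gmres(A, b, M=prec)
    print("GMRES flag:", info, "  relative residual:",
          np.linalg.norm(A @ u - b) / np.linalg.norm(b))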
Nonlinear Eigenvalue Problems
Abstract
The nonlinear eigenvalue problem (NEP) is a class of eigenvalue problems in which the matrix depends on the eigenvalue. We will first introduce some NEPs arising in real applications and some algorithms for general NEPs. Then we introduce our recent advances in NEPs, including second-order Arnoldi algorithms for the large-scale quadratic eigenvalue problem (QEP), analysis and algorithms for the symmetric eigenvalue problem with nonlinear rank-one updating, and a new linearization for the rational eigenvalue problem (REP).
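As background for the quadratic case, a minimal sketch of the standard companion linearization (not the second-order Arnoldi approach of the talk), applied to a small dense QEP with arbitrary coefficient matrices:

    # Solve (lam^2 M + lam C + K) x = 0 via the first companion linearization.
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(1)
    n = 4
    M, C, K = rng.random((n, n)), rng.random((n, n)), rng.random((n, n))

    # A z = lam B z with z = [x; lam*x].
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
    B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
    lam, Z = eig(A, B)

    # Residual check for one eigenpair: the top block of z is the QEP eigenvector.
    x = Z[:n, 0]
    res = (lam[0]**2 * M + lam[0] * C + K) @ x
    print("first eigenvalue:", lam[0])
    print("QEP residual norm:", np.linalg.norm(res))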
Split Bregman methods for L1-Regularized Problems with Applications to Image Processing
Abstract
This talk will introduce L1-regularized optimization problems that arise in image processing, and numerical methods for their solution. In particular, we will focus on methods of the split-Bregman type, which very efficiently solve large scale problems without regularization or time stepping. Applications include image
denoising, segmentation, non-local filters, and compressed sensing.
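A minimal one-dimensional sketch of the split-Bregman idea for anisotropic TV denoising (parameters, signal and stopping rule are illustrative; the talk covers far more general problems): the $L_1$ term is decoupled from the quadratic term by an auxiliary variable, so each sweep needs only a linear solve, a soft-thresholding step, and a Bregman update.

    # Split Bregman for  min_u (mu/2)||u - f||^2 + ||D u||_1,  D = forward difference.
    import numpy as np

    def shrink(v, t):
        """Soft thresholding: closed-form solution of the decoupled l1 subproblem."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    n, mu, lam = 200, 8.0, 2.0
    rng = np.random.default_rng(0)
    truth = np.where(np.arange(n) < n // 2, 0.0, 1.0)    # piecewise-constant signal
    f = truth + 0.1 * rng.standard_normal(n)             # noisy observation

    D = np.diff(np.eye(n), axis=0)                       # (n-1) x n difference matrix
    Q = mu * np.eye(n) + lam * D.T @ D                   # fixed matrix of the u-update

    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(50):
        u = np.linalg.solve(Q, mu * f + lam * D.T @ (d - b))   # quadratic subproblem
        d = shrink(D @ u + b, 1.0 / lam)                       # l1 subproblem
        b = b + D @ u - d                                      # Bregman update
    print("relative error vs truth:", np.linalg.norm(u - truth) / np.linalg.norm(truth))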
Numerical Aspects of Optimization in Finance
Abstract
Mathematical tools are widely used in finance, and their importance has grown over the last two decades. In this talk we concentrate on optimization problems in finance, in particular on their numerical aspects. The emphasis is on the mathematical problems and aspects, whereas all the applications are connected to the pricing of derivatives and are the
outcome of a cooperation with an international finance institution.
As one example, we take an in-depth look at the problem of hedging
barrier options. We review approaches from the literature and illustrate
advantages and shortcomings. Then we rephrase the problem as an
optimization problem and point out that it leads to a semi-infinite
programming problem. We give numerical results and put them in relation
to known results from other approaches. As an extension, we consider the
robustness of this approach, since it is known that optimality is
lost if the market data change too much. To avoid this effect, one can
formulate a robust version of the hedging problem, again by the use of
semi-infinite programming. The numerical results presented illustrate
the robustness of this approach and its advantages.
As a further aspect, we address the calibration of models used in
finance through optimization. This may lead to PDE-constrained
optimization problems and their solution through SQP-type or
interior-point methods. An important issue in this context is
preconditioning, for example of KKT systems, a very active research
area. Another aspect is preconditioning through the use of
implied volatilities. We also take a look at the
numerical effects of non-smooth data for certain models in derivative
pricing. Finally, we discuss how to speed up the optimization for
calibration problems by using reduced order models.
Saddle point problems in liquid crystal modelling
Abstract
Saddle-point problems occur frequently in liquid crystal modelling. For example, they arise whenever Lagrange multipliers are used for the pointwise-unit-vector constraints in director modelling, or in both general director and order tensor models when an electric field is present that stems from a constant voltage. Furthermore, in a director model with associated constraints and Lagrange multipliers, together with a coupled electric-field interaction, a particular "double" saddle-point structure arises. This talk will focus on a simple example of this type and discuss appropriate numerical solution schemes.
This is joint work with Eugene C. Gartland, Jr., Department of Mathematical Sciences, Kent State University.
Resolution of sharp fronts in the presence of model error in variational data assimilation
Abstract
We show that four-dimensional variational data assimilation
(4DVar) can be interpreted as a form of Tikhonov regularisation, a
familiar method for solving ill-posed inverse problems. It is known from
image restoration problems that $L_1$-norm penalty regularisation recovers
sharp edges in the image better than the $L_2$-norm penalty
regularisation. We apply this idea to 4DVar for problems where shocks are
present and give some examples where the $L_1$-norm penalty approach
performs much better than the standard $L_2$-norm regularisation in 4DVar.
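In schematic form (notation ours, and only one of several possible ways to place the penalty; the talk's precise formulation may differ), the Tikhonov view of 4DVar and its $L_1$-penalty variant read
\[
J_2(x_0) = \|x_0 - x^b\|_{B^{-1}}^2 + \sum_i \|H_i(x_i) - y_i\|_{R_i^{-1}}^2,
\qquad
J_1(x_0) = \lambda \|x_0 - x^b\|_1 + \sum_i \|H_i(x_i) - y_i\|_{R_i^{-1}}^2,
\]
where $x^b$ is the background state, $x_i$ the model state at observation time $i$, $H_i$ the observation operators, and $B$, $R_i$ the background and observation error covariances; replacing the quadratic background penalty by the $L_1$ penalty is what allows sharp fronts to survive the regularisation.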
Determination of the Basin of Attraction in Dynamical Systems using Meshless Collocation
Abstract
In dynamical systems given by an ODE, one is interested in the basin
of attraction of invariant sets, such as equilibria or periodic
orbits. The basin of attraction consists of solutions which converge
towards the invariant set. To determine the basin of attraction, one
can use a solution of a certain linear PDE which can be approximated
by meshless collocation.
The basin of attraction of an equilibrium can be determined through
sublevel sets of a Lyapunov function, i.e. a scalar-valued function
which is decreasing along solutions of the dynamical system. One
method to construct such a Lyapunov function is to solve a certain
linear PDE approximately using Meshless Collocation. Error estimates
ensure that the approximation is a Lyapunov function.
The basin of attraction of a periodic orbit can be analysed by Borg’s
criterion measuring the time evolution of the distance between
adjacent trajectories with respect to a certain Riemannian metric.
The sufficiency and necessity of this criterion will be discussed,
and methods for computing a suitable Riemannian metric using
Meshless Collocation will be presented in this talk.
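A heavily simplified sketch of the collocation step for the equilibrium case, using Gaussian kernels and the plain ansatz $V = \sum_j c_j \phi_j$ instead of the Wendland-function construction and error estimates of the talk: the linear PDE $\nabla V(x)\cdot f(x) = -\|x\|^2$ is collocated at scattered points, and the orbital derivative of the result is checked to be negative.

    # Lyapunov function for a toy 2D system by Gaussian RBF collocation (illustrative).
    import numpy as np

    def f(p):                                    # ODE right-hand side; origin is asymptotically stable
        x, y = p[..., 0], p[..., 1]
        return np.stack([-x + 0.25 * y**2, -y - 0.25 * x * y], axis=-1)

    eps = 3.0                                    # shape parameter (ad hoc; flat kernels are ill-conditioned)
    g = np.linspace(-1.0, 1.0, 9)
    X = np.array([[a, b] for a in g for b in g if a**2 + b**2 > 1e-6])   # centres, origin excluded

    diff = X[:, None, :] - X[None, :, :]         # x_i - x_j
    phi = np.exp(-eps**2 * (diff**2).sum(-1))
    grad_phi = -2 * eps**2 * diff * phi[..., None]            # grad of phi_j evaluated at x_i
    A = (grad_phi * f(X)[:, None, :]).sum(-1)    # A_ij = grad phi_j(x_i) . f(x_i)
    c = np.linalg.lstsq(A, -(X**2).sum(-1), rcond=None)[0]    # collocation system

    # Orbital derivative of the computed V at a few test points (should be ~ -|x|^2 < 0).
    T = np.array([[0.5, 0.5], [-0.6, 0.3], [0.2, -0.7]])
    d = T[:, None, :] - X[None, :, :]
    phiT = np.exp(-eps**2 * (d**2).sum(-1))
    gradV = np.einsum('j,ijk->ik', c, -2 * eps**2 * d * phiT[..., None])
    print("orbital derivative at test points:", (gradV * f(T)).sum(-1))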
Preconditioning Stochastic Finite Element Matrices
Abstract
In the last few years, there has been renewed interest in stochastic
finite element methods (SFEMs), which facilitate the approximation
of statistics of solutions to PDEs with random data. SFEMs based on
sampling, such as stochastic collocation schemes, lead to decoupled
problems requiring only deterministic solvers. SFEMs based on
Galerkin approximation satisfy an optimality condition but require
the solution of a single linear system of equations that couples
deterministic and stochastic degrees of freedom. This is regarded as
a serious bottleneck in computations and the difficulty is even more
pronounced when we attempt to solve systems of PDEs with
random data via stochastic mixed FEMs.
In this talk, we give an overview of solution strategies for the
saddle-point systems that arise when the mixed form of the Darcy
flow problem, with correlated random coefficients, is discretised
via stochastic Galerkin and stochastic collocation techniques. For
the stochastic Galerkin approach, the systems are orders of
magnitude larger than those arising for deterministic problems. We
report on fast solvers and preconditioners based on multigrid, which
have proved successful for deterministic problems. In particular, we
examine their robustness with respect to the random diffusion
coefficients, which can be either a linear or non-linear function of
a finite set of random parameters with a prescribed probability
distribution.
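A schematic of the structure being exploited, for a scalar diffusion model problem rather than the mixed Darcy system of the talk (all matrices below are small stand-ins): the stochastic Galerkin matrix is a sum of Kronecker products, and a mean-based preconditioner applies the inverse of the mean block without ever forming the inverse of the coupled matrix.

    # Stochastic Galerkin system A = sum_k G_k (x) K_k with a mean-based preconditioner.
    import numpy as np
    import scipy.sparse.linalg as spla

    nx, m = 20, 5                                  # spatial dofs, stochastic modes
    K0 = 3 * np.eye(nx) - np.eye(nx, k=1) - np.eye(nx, k=-1)   # "mean" stiffness (SPD stand-in)
    K1 = 0.2 * np.diag(np.linspace(0.1, 1.0, nx))              # fluctuation stiffness
    G0 = np.eye(m)
    G1 = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)  # polynomial coupling

    A = np.kron(G0, K0) + np.kron(G1, K1)          # coupled Galerkin matrix
    b = np.ones(nx * m)

    # Mean-based preconditioner: apply K0^{-1} to each stochastic block.
    Kfac = np.linalg.cholesky(K0)
    def apply_prec(r):
        R = r.reshape(m, nx).T                     # columns = stochastic blocks
        Z = np.linalg.solve(Kfac.T, np.linalg.solve(Kfac, R))
        return Z.T.ravel()

    P = spla.LinearOperator(A.shape, matvec=apply_prec)
    u, info = spla.cg(A, b, M=P)
    print("CG converged:", info == 0)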
On the existence of modified equations for stochastic differential equations
Abstract
In this talk we describe a general framework for deriving
modified equations for stochastic differential equations with respect to
weak convergence. We will start by quickly recapping how to derive
modified equations in the case of ODEs and describe how these ideas can
be generalized to SDEs. Results will be presented for first
order methods such as the Euler-Maruyama and the Milstein method. In the
case of linear SDEs, using the Gaussianity of the underlying solutions,
we will derive an SDE that the numerical method solves exactly in the
weak sense. Applications of modified equations in the numerical study
of Langevin equations and in the calculation of effective diffusivities
will also be discussed.
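As a minimal sketch of the setting (the modified-equation analysis itself is not reproduced), the code below applies Euler-Maruyama to a scalar linear SDE and estimates the weak error in a simple test functional by Monte Carlo; all parameters are arbitrary.

    # Weak error of Euler-Maruyama for dX = a X dt + sig X dW (geometric Brownian motion).
    import numpy as np

    rng = np.random.default_rng(0)
    T, X0, a, sig = 1.0, 1.0, -1.0, 0.5
    g = lambda x: x                       # test functional; E[g(X_T)] = X0 * exp(a*T)

    def euler_maruyama(h, n_paths):
        X = np.full(n_paths, X0)
        for _ in range(int(round(T / h))):
            dW = np.sqrt(h) * rng.standard_normal(n_paths)
            X = X + a * X * h + sig * X * dW
        return X

    exact = X0 * np.exp(a * T)
    for h in (0.1, 0.05, 0.025):
        err = abs(np.mean(g(euler_maruyama(h, 200_000))) - exact)
        print(f"h = {h:5.3f}   estimated weak error = {err:.4f}")   # roughly O(h)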
An excursion through the world of complex networks guided by matrix theory
Abstract
A brief introduction to the field of complex networks is given by means of some examples. We then focus on defining and applying centrality measures to characterise the nodes of complex networks. We combine this approach with methods for detecting communities as well as for identifying good expansion properties in graphs. All these concepts are formally defined in the presentation. We introduce the subgraph centrality from a combinatorial point of view and then connect it with the theory of graph spectra. Continuing along this line, we introduce some modifications to this measure by considering some known matrix functions, e.g., psi matrix functions, as well as new ones introduced here. Finally, we illustrate some applications, in particular the identification of essential proteins in proteomic maps.
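A minimal sketch of the subgraph centrality mentioned above, on an arbitrary toy graph: the $i$-th diagonal entry of $e^A$ weights closed walks through node $i$ of length $k$ by $1/k!$, and the equivalent spectral expression connects it to the graph spectrum.

    # Subgraph centrality via the matrix exponential and via the spectrum of A.
    import numpy as np
    from scipy.linalg import expm

    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
    n = 5
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0

    sc = np.diag(expm(A))                     # subgraph centralities [exp(A)]_ii
    lam, V = np.linalg.eigh(A)
    sc_spectral = (V**2) @ np.exp(lam)        # sum_j v_j(i)^2 exp(lambda_j)
    print("subgraph centrality:", np.round(sc, 4))
    print("via the spectrum:   ", np.round(sc_spectral, 4))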
Discovery of Mechanisms from Mathematical Modeling of DNA Microarray Data by Using Matrix and Tensor Algebra: Computational Prediction and Experimental Verification
Abstract
Future discovery and control in biology and medicine will come from
the mathematical modeling of large-scale molecular biological data,
such as DNA microarray data, just as Kepler discovered the laws of
planetary motion by using mathematics to describe trends in
astronomical data. In this talk, I will demonstrate that
mathematical modeling of DNA microarray data can be used to correctly
predict previously unknown mechanisms that govern the activities of
DNA and RNA.
First, I will describe the computational prediction of a mechanism of
regulation, by using the pseudoinverse projection and a higher-order
singular value decomposition to uncover a genome-wide pattern of
correlation between DNA replication initiation and RNA expression
during the cell cycle. Then, I will describe the recent
experimental verification of this computational prediction, by
analyzing global expression in synchronized cultures of yeast under
conditions that prevent DNA replication initiation without delaying
cell cycle progression. Finally, I will describe the use of the
singular value decomposition to uncover "asymmetric Hermite functions,"
a generalization of the eigenfunctions of the quantum harmonic
oscillator, in genome-wide mRNA length distribution data.
These patterns might be explained by a previously undiscovered asymmetry
in RNA gel electrophoresis band broadening and hint at two competing
evolutionary forces that determine the lengths of gene transcripts.
Golub-Kahan Iterative Bidiagonalization and Revealing Noise in the Data
Abstract
Regularization techniques based on the Golub-Kahan iterative bidiagonalization belong among popular approaches for solving large discrete ill-posed problems. First, the original problem is projected onto a lower dimensional subspace using the bidiagonalization algorithm, which by itself represents a form of regularization by projection. The projected problem, however, inherits a part of the ill-posedness of the original problem, and therefore some form of inner regularization must be applied. Stopping criteria for the whole process are then based on the regularization of the projected (small) problem.
We consider an ill-posed problem with a noisy right-hand side (observation vector), where the noise level is unknown. We show how the information from the Golub-Kahan iterative bidiagonalization can be used for estimating the noise level. Such information can be useful for constructing efficient stopping criteria in solving ill-posed problems.
This is joint work by Iveta Hn\v{e}tynkov\'{a}, Martin Ple\v{s}inger, and Zden\v{e}k Strako\v{s} (Faculty of Mathematics and Physics, Charles University, and Institute of Computer Science, Academy of Sciences, Prague)
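A minimal sketch of the underlying Golub-Kahan (Lanczos) bidiagonalization itself, without reorthogonalization and without the noise-estimation procedure of the talk; the test matrix is random.

    # Golub-Kahan bidiagonalization: A V_k ~= U_{k+1} B_k with B_k lower bidiagonal.
    import numpy as np

    def golub_kahan(A, b, k):
        m, n = A.shape
        U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
        U[:, 0] = b / np.linalg.norm(b)
        for j in range(k):
            r = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
            alpha = np.linalg.norm(r); V[:, j] = r / alpha
            p = A @ V[:, j] - alpha * U[:, j]
            beta = np.linalg.norm(p); U[:, j + 1] = p / beta
            B[j, j], B[j + 1, j] = alpha, beta
        return U, V, B

    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 40)); b = rng.standard_normal(60)
    U, V, B = golub_kahan(A, b, 10)
    print("||A V - U B|| =", np.linalg.norm(A @ V - U @ B))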
Rational Approximations to the Complex Error Function
Abstract
We consider rational approximations to the Faddeeva or plasma dispersion function, defined
as
$w(z) = e^{-z^{2}} \mbox{erfc} (-iz)$.
The function has many important applications in physics, and good software for
computing it reliably everywhere in the complex plane is required. In this talk
we shall derive rational approximations to $w(z)$ via quadrature, M\"{o}bius transformations, and best approximation. The various approximations are compared with regard to speed of convergence, numerical stability, and ease of generation of the coefficients of the formula.
In addition, we give preference to methods for which a single expression yields uniformly
high accuracy in the entire complex plane, as well as being able to reproduce exactly the
asymptotic behaviour
$w(z) \sim i/(\sqrt{\pi} z), z \rightarrow \infty$
(in an appropriate sector).
This is joint work with Stephan Gessner and St\'efan van der Walt.
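A small numerical check of the definition and of the stated asymptotics, using SciPy's Faddeeva routine (the rational approximations of the talk are not reproduced; the evaluation points are arbitrary):

    # w(z) = exp(-z^2) erfc(-iz): spot checks and the asymptotic ratio w(z) sqrt(pi) z / i -> 1.
    import numpy as np
    from scipy.special import wofz, erfc

    print("w(0)              :", wofz(0.0))                 # equals 1
    y = 0.7
    print("w(iy)             :", wofz(1j * y))              # equals exp(y^2) erfc(y)
    print("exp(y^2) erfc(y)  :", np.exp(y**2) * erfc(y))

    for r in (10.0, 100.0, 1000.0):
        z = r * np.exp(1j * np.pi / 3)                      # a ray in the upper half-plane
        print(f"|z| = {r:7.1f}   w(z) * sqrt(pi) * z / i =", wofz(z) * np.sqrt(np.pi) * z / 1j)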
Invariant pairs of matrix polynomials
Abstract
Invariant subspaces are a well-established tool in the theory of linear eigenvalue problems. They are also computationally more stable objects than single eigenvectors if one is interested in a group of closely clustered eigenvalues. A generalization of invariant subspaces to matrix polynomials can be given by using invariant pairs.
We investigate some basic properties of invariant pairs and give perturbation results, which show that invariant pairs have similarly favorable properties for matrix polynomials as invariant subspaces do for linear eigenvalue problems. In the second part of the talk we discuss computational aspects, namely how to extract invariant pairs from linearizations of matrix polynomials and how to perform efficient iterative refinement on them. Numerical examples are shown using the NLEVP collection of nonlinear eigenvalue test problems.
This talk is joint work with Daniel Kressner from ETH Zuerich.
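A minimal illustration of the invariant-pair condition for a quadratic matrix polynomial $P(\lambda) = A_0 + \lambda A_1 + \lambda^2 A_2$: a pair $(X, S)$ is invariant when $A_0 X + A_1 X S + A_2 X S^2 = 0$. Here $(X, S)$ is assembled from two eigenpairs of a companion linearization of a random polynomial (a toy example, not the NLEVP problems or the refinement procedure of the talk).

    # Invariant-pair residual check for a random quadratic matrix polynomial.
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(2)
    n = 5
    A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

    L = np.block([[np.zeros((n, n)), np.eye(n)], [-A0, -A1]])   # companion linearization
    M = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), A2]])
    lam, Z = eig(L, M)

    idx = np.argsort(np.abs(lam))[:2]          # pick two eigenvalues
    X = Z[:n, idx]                             # corresponding eigenvector block
    S = np.diag(lam[idx])                      # diagonal here; a general S need not be

    residual = A0 @ X + A1 @ X @ S + A2 @ X @ S @ S
    print("invariant-pair residual norm:", np.linalg.norm(residual))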
Molecular Dynamics Simulations and why they are interesting for Numerical Analysts
Abstract
Molecular Dynamics Simulations are a tool to study the behaviour
of atomic-scale systems. The simulations themselves solve the
equations of motion for hundreds to millions of particles over
thousands to billions of time steps. Due to the size of the
problems studied, such simulations are usually carried out on
large clusters or special-purpose hardware.
At first glance, there is nothing much of interest for a
Numerical Analyst: the equations of motion are simple, the
integrators are of low order and the computational aspects seem
to focus on hardware or ever larger and faster computer
clusters.
The field, however, having been ploughed mainly by domain
scientists (e.g. Chemists, Biologists, Material Scientists) and
a few Computer Scientists, is a goldmine for interesting
computational problems which have been solved either badly or
not at all. These problems, although domain specific, require
sufficient mathematical and computational skill to make finding
a good solution potentially interesting for Numerical Analysts.
The proper solution of such problems can result in speed-ups
beyond what can be achieved by pushing the envelope on Moore's
Law.
In this talk I will present three examples where problems
interesting to Numerical Analysts arise. For the first two
problems, Constraint Resolution Algorithms and Interpolated
Potential Functions, I will present some of my own results. For
the third problem, using interpolations to efficiently compute
long-range potentials, I will only present some observations and
ideas, as this will be the main focus of my research in Oxford
and therefore no results are available yet.
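For readers unfamiliar with the field, a minimal sketch of the kind of low-order integrator referred to above: velocity Verlet for a single Lennard-Jones pair in reduced units (constraint resolution and interpolated potentials, the actual topics of the talk, are not shown).

    # Velocity Verlet for the relative coordinate of a Lennard-Jones dimer (reduced units).
    import numpy as np

    def lj_force(r):
        """Radial force -dU/dr for U(r) = 4 (r^-12 - r^-6)."""
        return 24.0 * (2.0 / r**13 - 1.0 / r**7)

    dt, n_steps = 0.002, 5000
    r, v = 1.3, 0.0                     # separation and relative velocity
    f = lj_force(r)
    for _ in range(n_steps):
        v += 0.5 * dt * f               # half kick
        r += dt * v                     # drift
        f = lj_force(r)
        v += 0.5 * dt * f               # half kick
    energy = 0.5 * v**2 + 4.0 * (r**-12 - r**-6)
    print("separation:", round(r, 4), "  total energy:", round(energy, 6))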
CFD in the Gas Turbine Industry
Abstract
CFD is an indispensable part of the design process for all major gas turbine components. The growth in the use of CFD from single-block structured mesh steady state solvers to highly resolved unstructured mesh unsteady solvers will be described, with examples of the design improvements that have been achieved. The European Commission has set stringent targets for the reduction of noise, emissions and fuel consumption to be achieved by 2020. The application of CFD to produce innovative designs to meet these targets will be described. The future direction of CFD towards whole engine simulations will also be discussed.
Is the Outer Solar System Chaotic?
Abstract
The stability of our Solar System has been debated since Newton devised
the laws of gravitation to explain planetary motion. Newton himself
doubted the long-term stability of the Solar System, and the question
has remained unanswered despite centuries of intense study by
generations of illustrious names such as Laplace, Lagrange, Gauss, and
Poincare. In the 1990s, with the advent of computers fast
enough to accurately integrate the equations of motion of the planets
for billions of years, the question was finally settled: for the
next 5 billion years, and barring interlopers, the shapes of the
planetary orbits will remain roughly as they are now. This is called
"practical stability": none of the known planets will collide with each
other, fall into the Sun, or be ejected from the Solar System, for the
next 5 billion years.
Although the Solar System is now known to be practically stable, it may
still be "chaotic". This means that we may---or may not---be able
precisely to predict the positions of the planets within their orbits,
for the next 5 billion years. The precise positions of the planets
affect the tilt of each planet's axis, and so can have a measurable
effect on the Earth's climate. Although the inner Solar System is
almost certainly chaotic, for the past 15 years, there has been
some debate about whether the outer Solar System exhibits chaos or not.
In particular, when performing numerical integrations of the orbits of
the outer planets, some astronomers observe chaos, and some do not. This
is particularly disturbing since it is known that inaccurate integration
can inject chaos into the numerical solution of a problem whose exact solution is known
to be stable.
In this talk I will demonstrate how I closed that 15-year debate on
chaos in the outer solar system by performing the most carefully justified
high precision integrations of the orbits of the outer planets that have
yet been done. The answer surprised even the astronomical community,
and was published in _Nature Physics_.
I will also show lots of pretty pictures demonstrating the fractal nature
of the boundary between chaos and regularity in the outer Solar System.
Mesh redistribution algorithms and error control for time-dependent PDEs
Abstract
Self-adjusting meshes have important benefits when approximating PDEs whose solutions exhibit nontrivial characteristics. When appropriately chosen, they lead to efficient, accurate and robust algorithms. Error control is also important, since appropriate analysis can provide guarantees, through a posteriori estimates, on how accurate the approximate solution is. Error control may lead to appropriate adaptive algorithms by identifying areas of large errors and adjusting the mesh accordingly. Error control and associated adaptive algorithms for important equations in Mathematical Physics remain an open problem.
In this talk we consider the main structure of an algorithm which permits mesh redistribution with time and the nontrivial characteristics associated with it. We present improved algorithms and we discuss successful approaches towards error control for model problems (linear and nonlinear) of parabolic or hyperbolic type.
Sparsity, $\ell_1$ Minimization, and the Geometric Separation Problem
Abstract
During the last two years, sparsity has become a key concept in various areas
of applied mathematics, computer science, and electrical engineering. Sparsity
methodologies explore the fundamental fact that many types of data/signals can
be represented by only a few non-vanishing coefficients when choosing a suitable
basis or, more generally, a frame. If signals possess such a sparse representation,
they can in general be recovered from few measurements using $\ell_1$ minimization
techniques.
One application of this novel methodology is the geometric separation of data,
which is composed of two (or more) geometrically distinct constituents -- for
instance, pointlike and curvelike structures in astronomical imaging of galaxies.
Although it seems impossible to extract those components -- as there are two
unknowns for every datum -- suggestive empirical results using sparsity
considerations have already been obtained.
In this talk we will first give an introduction to the concept of sparse
representations and sparse recovery. Then we will develop a very general
theoretical approach to the problem of geometric separation based on these
methodologies by introducing novel ideas such as geometric clustering of
coefficients. Finally, we will apply our results to the situation of separation
of pointlike and curvelike structures in astronomical imaging of galaxies,
where a deliberately overcomplete representation made of wavelets (suited
to pointlike structures) and curvelets/shearlets (suited to curvelike
structures) will be chosen. The decomposition principle is to minimize the
$\ell_1$ norm of the frame coefficients. Our theoretical results, which
are based on microlocal analysis considerations, show that at all sufficiently
fine scales, nearly-perfect separation is indeed achieved.
This is joint work with David Donoho (Stanford University).
Solving partial differential equations on surfaces with the Closest Point Method
Radial Basis Functions Methods for Modeling Atmospheric and Solid Earth Flows
Abstract
Current community models in the geosciences employ a variety of numerical methods from finite-difference, finite-volume, finite- or spectral elements, to pseudospectral methods. All have specialized strengths but also serious weaknesses. The first three methods are generally considered low-order and can involve high algorithmic complexity (as in triangular elements or unstructured meshes). Global spectral methods do not practically allow for local mesh refinement and often involve cumbersome algebra. Radial basis functions have the advantage of being spectrally accurate for irregular node layouts in multi-dimensions with extreme algorithmic simplicity, and naturally permit local node refinement on arbitrary domains. We will show test examples ranging from vortex roll-ups and idealized cyclogenesis, to the unsteady nonlinear flows posed by the shallow water equations, to 3-D mantle convection in the Earth's interior. The results will be evaluated based on numerical accuracy, stability and computational performance.
Random triangles: are they acute or obtuse?
Abstract
This is a special talk outside the normal Computational Mathematics and Applications seminar series. Please note that it takes place on a Wednesday.
A fast domain decomposition solver for the discretized Stokes equations by a stabilized finite element method
Abstract
An iterative substructuring method with a balancing Neumann-Neumann preconditioner is known as an efficient parallel algorithm for the elasticity equations. This method was extended to the Stokes equations by Pavarino and Widlund [2002]. In their extension, Q2/P0-discontinuous elements are used for velocity/pressure and a Schur complement system within the "benign space", where incompressibility is satisfied, is solved by the CG method.
For the construction of the coarse space for the balancing preconditioner, some supplementary solvability conditions are considered. In our algorithm for 3-D computation, P1/P1 elements for velocity/pressure with pressure stabilization are used to save computational cost in the stiffness matrix. We introduce a simple coarse space similar to the one for the elasticity equations. Owing to the stability term, the solvability of the local Dirichlet problems for the Schur complement system, of the Neumann problems for the preconditioner, and of the coarse space problem is ensured. In our implementation, local Dirichlet and Neumann problems are solved by a 4x4-block modified Cholesky factorization procedure with an envelope method, which leads to fast computation with small memory requirements. Numerical results on parallel efficiency obtained on a shared memory computer will be shown. Direct use of the Stokes solver in an application to Earth's mantle convection will also be shown.
Approximate Gauss-Newton methods using reduced order models
Abstract
Work with N.K. Nichols (Reading), C. Boess & A. Bunse-Gerstner (Bremen)
The Gauss-Newton (GN) method is a well-known iterative technique for solving nonlinear least squares problems subject to dynamical system constraints. Such problems arise commonly from applications in optimal control and state estimation. Variational data assimilation systems for weather, ocean and climate prediction currently use approximate GN methods. The GN method solves a sequence of linear least squares problems subject to linearized system constraints. For very large systems, low resolution linear approximations to the model dynamics are used to improve the efficiency of the algorithm. We propose a new method for deriving low order system approximations based on model reduction techniques from control theory. We show how this technique can be combined with the GN method to give a state estimation technique that retains more of the dynamical information of the full system. Numerical experiments using a shallow-water model illustrate the superior performance of model reduction over standard truncation techniques.
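A minimal sketch of the basic (unconstrained, full-order) Gauss-Newton iteration on a small curve-fitting problem; the reduced-order approximations and data assimilation setting of the talk are not reproduced, and the model and data are illustrative.

    # Gauss-Newton for fitting y ~ p0 * exp(-p1 * t) in the least squares sense.
    import numpy as np

    t = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(-1.5 * t) + 0.01 * np.random.default_rng(0).standard_normal(20)

    def residual(p):
        return p[0] * np.exp(-p[1] * t) - y

    def jacobian(p):
        e = np.exp(-p[1] * t)
        return np.column_stack([e, -p[0] * t * e])

    p = np.array([1.0, 1.0])                            # initial guess
    for _ in range(10):
        r, J = residual(p), jacobian(p)
        p = p + np.linalg.lstsq(J, -r, rcond=None)[0]   # GN step: min || J dp + r ||
    print("estimated parameters:", p)                   # close to (2, 1.5)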
Radial Basis Functions for Solving Partial Differential Equations
Abstract
For the task of solving PDEs, finite difference (FD) methods are particularly easy to implement. Finite element (FE) methods are more flexible geometrically, but tend to be difficult to make very accurate. Pseudospectral (PS) methods can be seen as a limit of FD methods if one keeps on increasing their order of accuracy. They are extremely effective in many situations, but this strength comes at the price of very severe geometric restrictions. A more standard introduction to PS methods (rather than via FD methods of increasing orders of accuracy) is in terms of expansions in orthogonal functions (such as Fourier, Chebyshev, etc.).
Radial basis functions (RBFs) were first proposed around 1970 as a tool for interpolating scattered data. Since then, both our knowledge about them and their range of applications have grown tremendously. In the context of solving PDEs, we can see the RBF approach as a major generalization of PS methods, abandoning the orthogonality of the basis functions and in return obtaining much improved simplicity and flexibility. Spectral accuracy now becomes easily available even when using completely unstructured meshes, permitting local node refinements in critical areas. A very counterintuitive parameter range (making all the RBFs very flat) turns out to be of special interest. Computational cost and numerical stability were initially seen as serious difficulties, but major progress has recently been made in these areas as well.
Introduction to Quasicontinuum Methods: Formulation, Classification, Analysis
Abstract
Quasicontinuum methods are a prototypical class of atomistic-to-continuum coupling methods. For example, we may wish to model a lattice defect (a vacancy or a dislocation) by an atomistic model, but the elastic far field by a continuum model. If the continuum model is consistent with the atomistic model (e.g., the Cauchy-Born model) then the main question is how the interface treatment affects the method.
In this talk I will introduce three of the main ideas for treating the interface. I will explain their strengths and weaknesses by formulating the simplest possible non-trivial model problem and then simply analyzing the two classical concerns of numerical analysis: consistency and stability.
Hybrid asymptotic-numerical methods for high frequency scattering
Generalized Nested Factorization - a recursive preconditioner for spatially discretized linear systems
Approximation of Inverse Problems
Abstract
Inverse problems are often ill-posed, with solutions that depend sensitively on data. Regularization of some form is often used to counteract this. I will describe an approach to regularization, based on a Bayesian formulation of the problem, which leads to a notion of well-posedness for inverse problems, at the level of probability measures.
The stability which results from this well-posedness may be used as the basis for understanding approximation of inverse problems in finite dimensional spaces. I will describe a theory which carries out this program.
The ideas will be illustrated with the classical inverse problem for the heat equation, and then applied to some more complicated inverse problems arising in data assimilation, such as determining the initial condition for the Navier-Stokes equation from observations.
How sharp is the restricted isometry property? An investigation into sparse approximation techniques
Abstract
On fast multilevel algorithms for nonlinear variational imaging models
Abstract
In recent years, the interdisciplinary field of imaging science has been experiencing explosive growth in research activities, including more models being developed, more publications generated, and, above all, wider applications attempted.
In this talk I shall first give an overview of the various imaging work carried out in our Liverpool group, some with collaborations with UCLA (T F Chan), CUHK (R H Chan) and Bergen (X C Tai) and several colleagues from other departments in Liverpool. Then I shall focus on two pieces of recent work, denoising and segmentation respectively:
(i) Image denoising has been a research topic deeply investigated within the last two decades. Algorithmically, even the well-known ROF model (1992) can be solved efficiently. However, less work has been done on models using high order regularization. I shall describe our first and successful attempt to develop a working multilevel algorithm for a 4th order nonlinear denoising model, and our work on solving the combined denoising and deblurring problem, different from the reformulation approach by M N Ng and W T Yin (2008) et al.
(ii) The image active contour model by Chan-Vese (2001) can be solved efficiently both by a geometric multigrid method and by an optimization-based multilevel method. Surprisingly, the new multilevel methods can find a solution closer to the global minimizer than the existing unilevel methods. Also discussed is some recent work (jointly with N Badshah) on selective segmentation, which has useful medical applications.
Geometric Numerical Integration of Differential Equations
Abstract
Geometric integration is the numerical integration of a differential equation, while preserving one or more of its geometric/physical properties exactly, i.e. to within round-off error.
Many of these geometric properties are of crucial importance in physical applications: preservation of energy, momentum, angular momentum, phase-space volume, symmetries, time-reversal symmetry, symplectic structure and dissipation are examples. The field has tantalizing connections to dynamical systems, as well as to Lie groups.
In this talk we first present a survey of geometric numerical integration methods for differential equations, and then exemplify this by discussing symplectic vs energy-preserving integrators for ODEs as well as for PDEs.
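A minimal illustration of the distinction, for the harmonic oscillator $q' = p$, $p' = -q$ (this generic example is not taken from the talk): the symplectic method keeps the energy error bounded over long times, while the non-geometric explicit Euler method does not.

    # Explicit Euler vs symplectic Euler for the harmonic oscillator.
    import numpy as np

    def explicit_euler(q, p, h, n):
        for _ in range(n):
            q, p = q + h * p, p - h * q
        return q, p

    def symplectic_euler(q, p, h, n):
        for _ in range(n):
            p = p - h * q          # update momentum first,
            q = q + h * p          # then position with the new momentum
        return q, p

    h, n = 0.1, 1000
    H = lambda q, p: 0.5 * (q**2 + p**2)
    for name, method in [("explicit Euler", explicit_euler), ("symplectic Euler", symplectic_euler)]:
        q, p = method(1.0, 0.0, h, n)
        print(f"{name:17s} energy drift after {n} steps: {H(q, p) - 0.5:+.3e}")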
Golden syrup, lubrication theory, and PETSc -- a recipe for models of ice-sheet dynamics
Numerical methods for palindromic eigenvalue problems
Abstract
We discuss numerical methods for the solution of the palindromic eigenvalue problem $Ax = \lambda A^{T}x$, where $A$ is a complex matrix. Such eigenvalue problems occur, for example, in the vibration analysis of rail tracks.
The structure of palindromic eigenvalue problems leads to a symmetry in the spectrum: all eigenvalues occur in reciprocal pairs. The need for preservation of this symmetry in finite precision arithmetic requires the use of structure-preserving numerical methods. In this talk, we explain how such methods can be derived.
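A small numerical illustration of the spectral symmetry (using an unstructured QZ solve on a random matrix; the structure-preserving methods of the talk are precisely what keeps this symmetry intact in finite precision):

    # Eigenvalues of A x = lambda A^T x occur in reciprocal pairs (lambda, 1/lambda).
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(3)
    n = 6
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    lam = eig(A, A.T, right=False)
    dist = np.abs(lam[:, None] - (1.0 / lam)[None, :])
    print("eigenvalue moduli:", np.round(np.sort(np.abs(lam)), 4))
    print("max distance from each eigenvalue to the reciprocal set:", dist.min(axis=1).max())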
A new perspective on the complexity of interior point methods for linear programming
Abstract
The aim of this talk is to render the power of (short-step) interior-point methods for linear programming (and, by extension, convex programming) intuitively understandable to those who have a basic training in numerical methods for solving dynamical systems. The connection between the two areas is made by interpreting line-search methods in a forward Euler framework, and by analysing the algorithmic complexity in terms of the stiffness of the vector field of search directions. Our analysis cannot replicate the best complexity bounds, but due to its weak assumptions it also applies to inexactly computed search directions and has explanatory power for a wide class of algorithms.
Co-Author: Coralia Cartis, Edinburgh University School of Mathematics.
Parametric approximation of geometric evolution equations and their coupling to bulk equations
Coverage Processes on Spheres and Condition Numbers for Linear Programming
Abstract
This talk is concerned with the probabilistic behaviour of a condition
number C(A) for the problem of deciding whether a system of n
homogeneous linear inequalities in m unknowns has a non-zero solution.
In the case where the input system is feasible, the exact probability
distribution of the condition number for random inputs is determined,
and a sharp bound is given for the general case. In particular, for the
expected value of the logarithm log C(A), an upper bound of order
O(log m) (m the number of variables) is presented which does not
depend on the number of inequalities.
The probability distribution of the condition number C(A) is closely
related to the probability of covering the m-sphere with n spherical
caps of a given radius. As a corollary, we obtain bounds on the
probability of covering the sphere with random caps.
Preconditioning of linear systems in an ocean flow model
Abstract
The climate is largely determined by the ocean flow, which is itself driven by wind and by gradients in temperature and salinity. Nowadays numerical models exist that are able to describe the occurring phenomena not only qualitatively but also quantitatively. At the Institute for Marine and Atmospheric Research Utrecht (IMAU) a so-called thermohaline circulation model has been developed in which methods of dynamical systems theory are used to study the stability of ocean flows. Here bifurcation diagrams are constructed by varying the strength of the forcing, for instance the amount of fresh water coming in from the north due to melting. For every value of the strength we have to solve a nonlinear system, which is handled by a Newton-type method. This produces many linear systems to be solved.
In the talk the following will be addressed: the form of the system of equations, and a special-purpose solution method which uses Trilinos and MRILU. The latter is a multilevel ILU preconditioner developed at Groningen University. Results of the approach obtained on the Dutch national supercomputer will be shown.
On the accuracy of inexact saddle point solvers
Abstract
For large-scale saddle point problems, the application of exact iterative schemes and preconditioners may be computationally expensive. In practical situations, only approximations to the inverses of the diagonal block or the related cross-product matrices are considered, giving rise to inexact versions of various solvers. Therefore, the approximation effects must be carefully studied. In this talk we study the numerical behavior of several iterative Krylov subspace solvers applied to the solution of large-scale saddle point problems. Two main representatives of the segregated solution approach are analyzed: the Schur complement reduction method, based on an (iterative) elimination of primary variables, and the null-space projection method, which relies on a basis for the null-space of the constraints. We concentrate on the question of what is the best accuracy we can get from inexact schemes solving either the Schur complement system or the null-space projected system when implemented in finite precision arithmetic. The fact that the inner solution tolerance strongly influences the accuracy of computed iterates is known and has been studied in several contexts.
In particular, for several mathematically equivalent implementations we study the influence of inexactly solving the inner systems and estimate their maximum attainable accuracy. When considering the outer iteration process, our rounding error analysis leads to results similar to those which can be obtained assuming exact arithmetic. The situation is different when we look at the residuals in the original saddle point system. We can show that some implementations lead ultimately to residuals on the roundoff unit level, independently of the fact that the inner systems were solved inexactly on a much higher level than their level of limiting accuracy. Indeed, our results confirm that the generic, and actually the cheapest, implementations deliver approximate solutions which satisfy either the second or the first block equation to working accuracy. In addition, the schemes with a corrected direct substitution are also very attractive. We give a theoretical explanation for behavior which has probably been observed, or is already tacitly known. The implementations that we point out as optimal are actually those which are widely used and suggested in applications.
Cholesky factorizations for multi-core systems
Abstract
Multicore chips are nearly ubiquitous in modern machines, and to fully exploit this continuation of Moore's Law, numerical algorithms need to be able to exploit parallelism. We describe recent approaches to both dense and sparse parallel Cholesky factorization on shared memory multicore systems and present results from our new codes for problems arising from large real-world applications. In particular we describe our experiences using directed acyclic graph based scheduling in the dense case and retrofitting parallelism to a
sparse serial solver.
Topology Optimisation: Achievements and Challenges
Abstract
As research in topology optimisation has reached a level of maturity, two main classes of methods have emerged and their applications to real engineering design in industry are increasing. It has therefore become important to identify the limitations and challenges in order to ensure that topology optimisation is appropriately employed during the design process whilst research may continue to offer a more reliable and fast design tool to engineers.
The seminar will begin by introducing the topology optimisation problem and the two popular finite element based approaches. A range of numerical methods used in typical implementations will be outlined. This will form the basis for the discussion of the shortcomings and challenges of topology optimisation as an easy-to-use design tool for engineers, particularly in the context of reliably providing consistent optimum solutions to given problems with minimum a priori information. Another industrial requirement is a fast solution time for easy-to-set-up problems. The seminar will present recent efforts in addressing some of these issues and the remaining challenges for the future.
Approximation of harmonic maps and wave maps
Abstract
Partial differential equations with a nonlinear pointwise constraint defined through a manifold occur in a variety of applications: The magnetization of a ferromagnet can be described by a unit length vector field and the orientation of the rod-like molecules that constitute a liquid crystal is often modeled by a vector field that attains its values in the real projective plane thus respecting the head-to-tail symmetry of the molecules. Other applications arise in geometric
modeling, quantum mechanics, and general relativity. Simple examples reveal that it is impossible to satisfy pointwise constraints exactly by lowest order finite elements. For two model problems we discuss the practical realization of the constraint, the efficient solution of the resulting nonlinear systems of equations, and weak accumulation of approximations at exact solutions.
Optimal domain decomposition methods (Neumann-Neumann or FETI types) for systems of PDEs
Abstract
We focus on domain decomposition methods for systems of PDEs (as opposed to scalar PDEs). The Smith factorization (a "pure" algebra tool) is used systematically to derive new domain decomposition methods for symmetric and unsymmetric systems of PDEs: the compressible Euler equations, the Stokes and Oseen (linearized Navier-Stokes) problems. We will focus on the Stokes system. In two dimensions the key idea is the transformation of the Stokes problem into a scalar bi-harmonic problem. We show how a proposed domain decomposition method for the bi-harmonic problem leads to a domain decomposition method for the Stokes equations which inherits the convergence behavior of the scalar problem. Thus, it is sufficient to study the convergence of the scalar algorithm. The same procedure can also be applied to the three-dimensional Stokes problem.