Efficient, communication-minimizing algorithms for the symmetric eigenvalue decomposition and the singular value decomposition
Abstract
Computing the eigenvalue decomposition of a symmetric matrix and the singular value decomposition of a general matrix are two of the central tasks in numerical linear algebra. There has been much recent work in the development of linear algebra algorithms that minimize communication cost. However, the reduction in communication cost sometimes comes at the expense of significantly more arithmetic and potential instability.
\\
\\
In this talk I will describe algorithms for the two decompositions that have optimal communication cost and arithmetic cost within a small factor of those for the best known algorithms. The key idea is to use the best rational approximation of the sign function, which lets the algorithm converge in just two steps. The algorithms are backward stable and easily parallelizable. Preliminary numerical experiments demonstrate their efficiency.
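As a rough sketch of the underlying idea — using the classical Newton iteration for the matrix sign function rather than the talk's optimal rational approximation, and with function names of our own choosing — one spectral divide-and-conquer step might look like this in NumPy:

```python
import numpy as np

def matrix_sign(A, iters=60, tol=1e-13):
    # Newton iteration X <- (X + X^{-1})/2 converges to sign(A);
    # the talk's algorithms replace this with the best (Zolotarev-type)
    # rational approximation, reaching convergence in about two steps.
    X = A.copy()
    for _ in range(iters):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X_new, 1):
            return X_new
        X = X_new
    return X

def split_spectrum(A, shift=0.0):
    # One divide-and-conquer step for the symmetric eigenproblem:
    # sign(A - shift*I) yields the spectral projector onto the invariant
    # subspace belonging to eigenvalues above the shift.
    n = A.shape[0]
    S = matrix_sign(A - shift * np.eye(n))
    P = 0.5 * (np.eye(n) + S)        # spectral projector, eigenvalues > shift
    k = int(round(np.trace(P)))      # dimension of that subspace
    Q = np.linalg.svd(P)[0][:, :k]   # orthonormal basis of range(P)
    return Q                         # recurse on Q.T @ A @ Q and its complement
```

Recursing on the two deflated blocks gives the full decomposition; the communication-optimal algorithms organize exactly this kind of computation around QR factorizations and matrix multiplication.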
Optimal Newton-type methods for nonconvex smooth optimization
Abstract
We show that the steepest-descent and Newton's methods for unconstrained nonconvex optimization
under standard assumptions may both require a number of iterations and function evaluations
arbitrarily close to the steepest-descent method's global worst-case complexity bound. This implies that
the latter upper bound is essentially tight for steepest descent and that Newton's method may be as
slow as the steepest-descent method in the worst case. Then the cubic regularization of Newton's
method (Griewank (1981), Nesterov & Polyak (2006)) is considered and extended to large-scale
problems, while preserving the same order of its improved worst-case complexity (by comparison to
that of steepest-descent); this improved worst-case bound is also shown to be tight. We further
show that the cubic regularization approach is, in fact, optimal from a worst-case complexity point
of view amongst a wide class of second-order methods. The worst-case problem-evaluation complexity
of constrained optimization will also be discussed. This is joint work with Nick Gould (Rutherford
Appleton Laboratory, UK) and Philippe Toint (University of Namur, Belgium).
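To fix ideas, here is a minimal sketch of a cubic-regularization iteration: the model $g^Ts + \frac{1}{2}s^THs + \frac{\sigma}{3}\|s\|^3$ is minimized via bisection on the secular equation, and the update rule for $\sigma$ is a crude stand-in for the careful acceptance tests of the actual methods (all names and parameter choices below are ours, for illustration):

```python
import numpy as np

def cubic_reg_step(g, H, sigma):
    # Minimize g^T s + 0.5 s^T H s + (sigma/3)||s||^3 using the secular
    # equation: s(lam) = -(H + lam*I)^{-1} g with lam = sigma*||s(lam)||.
    n = len(g)
    def snorm(lam):
        return np.linalg.norm(np.linalg.solve(H + lam * np.eye(n), -g))
    lam_min = max(0.0, -np.linalg.eigvalsh(H)[0]) + 1e-12
    lo, hi = lam_min, lam_min + 1.0
    while sigma * snorm(hi) > hi:        # expand until the root is bracketed
        hi *= 2.0
    for _ in range(100):                 # bisection on sigma*||s(lam)|| - lam
        mid = 0.5 * (lo + hi)
        if sigma * snorm(mid) > mid:
            lo = mid
        else:
            hi = mid
    return np.linalg.solve(H + 0.5 * (lo + hi) * np.eye(n), -g)

def arc_minimize(f, grad, hess, x0, sigma=1.0, iters=50, tol=1e-6):
    # Bare-bones adaptive loop: accept if f decreases, otherwise
    # increase the regularization weight and retry.
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = cubic_reg_step(g, hess(x), sigma)
        if f(x + s) < f(x):
            x = x + s
            sigma = max(sigma / 2.0, 1e-4)
        else:
            sigma *= 2.0
    return x
```

The large-scale variants discussed in the talk replace the exact subproblem solve with inexact, factorization-free inner iterations while preserving the improved worst-case complexity.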
Interior Point warmstarts and stochastic programming
Abstract
We present progress on an Interior Point based multi-step solution approach for stochastic programming problems. Our approach works with a series of scenario trees that can be seen as successively more accurate discretizations of an underlying probability distribution and employs IPM warmstarts to "lift" approximate solutions from one tree to the next larger tree.
Antibandwidth maximization: a graph colouring problem
Spectral decompositions and nonnormality of boundary integral operators in acoustic scattering
Abstract
Nonnormality is a well studied subject in the context of partial differential operators. Yet little is known for boundary integral operators. The only well studied case is the unit ball, where the standard single layer, double layer and conjugate double layer potential operators in acoustic scattering diagonalise in a unitary basis. In this talk we present recent results for the analysis of spectral decompositions and nonnormality of boundary integral operators on more general domains. One particular application is the analysis of stability constants for boundary element discretisations. We demonstrate how these are affected by nonnormality and give several numerical examples illustrating these issues on various domains.
Orthogonality and stability in large matrix iterative algorithms
Abstract
Many iterative algorithms for large sparse matrix problems are based on orthogonality (or $A$-orthogonality, bi-orthogonality, etc.), but these properties can be lost very rapidly using vector orthogonalization (subtracting multiples of earlier supposedly orthogonal vectors from the latest vector to produce the next orthogonal vector). Yet many of these algorithms are some of the best we have for very large sparse problems, such as Conjugate Gradients, Lanczos' method for the eigenproblem, Golub and Kahan bidiagonalization, and MGS-GMRES.
\\
\\
Here we describe an ideal form of orthogonal matrix that arises from any sequence of supposedly orthogonal vectors. We illustrate some of its fascinating properties, including a beautiful measure of orthogonality of the original set of vectors. We will indicate how the ideal orthogonal matrix leads to expressions for new concepts of stability of such iterative algorithms. These are expansions of the concept of backward stability for matrix transformation algorithms that was so effectively developed and applied by J. H. Wilkinson (FRS). The resulting new expressions can be used to understand the subtle and effective performance of some (and hopefully eventually all) of these iterative algorithms.
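The loss of orthogonality mentioned above is easy to demonstrate; the following sketch compares classical and modified Gram-Schmidt using $\|I - Q^TQ\|$ as a simple measure of departure from orthogonality (a stand-in for the more refined measure of the talk):

```python
import numpy as np

def gram_schmidt(A, modified=True):
    # Orthogonalize the columns of A. With modified=False this is
    # classical Gram-Schmidt, which loses orthogonality much faster
    # in finite precision than the modified variant.
    A = A.astype(float).copy()
    n = A.shape[1]
    Q = np.zeros_like(A)
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            coeff = Q[:, i] @ (v if modified else A[:, j])
            v -= coeff * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def orthogonality_loss(Q):
    # Departure of the computed Q from having orthonormal columns.
    return np.linalg.norm(np.eye(Q.shape[1]) - Q.T @ Q)

# Ill-conditioned example (monomial basis on [0,1]): classical GS
# typically loses orthogonality completely, modified GS degrades
# only in proportion to the condition number.
A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)
loss_cgs = orthogonality_loss(gram_schmidt(A, modified=False))
loss_mgs = orthogonality_loss(gram_schmidt(A, modified=True))
```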
Some linear algebra problems arising in the analysis of complex networks
Climate, Assimilation of Data and Models - When Data Fail Us
Abstract
The fundamental task in climate variability research is to eke
out structure from climate signals. Ideally we want a causal
connection between a physical process and the structure of the
signal. Sometimes we have to settle for a correlation between
these. The challenge is that the data is often poorly
constrained and/or sparse. Even though many data gathering
campaigns are taking place or are being planned, the very high
dimensional state space of the system makes the prospects of
climate variability analysis from data alone impractical.
Progress in the analysis is possible by the use of models and
data. Data assimilation is one such strategy. In this talk we
will describe the methodology, illustrate some of its
challenges, and highlight some of the ways our group has
proposed to improve the methodology.
Energy-law preserving continuous finite element methods for simulation of liquid crystal and multi-phase flows
Abstract
The liquid crystal (LC) flow model is a coupling between
orientation (director field) of LC molecules and a flow field.
The model is probably one of the simplest models of complex fluids and
is very similar to an Allen-Cahn phase field model for
multiphase flows if the orientation variable is replaced by a
phase function. There are a few large or small parameters
involved in the model (e.g. the small penalty parameter for
the unit length LC molecule or the small phase-change
parameter, possibly large Reynolds number of the flow field,
etc.). We propose a C^0 finite element formulation in space
and a modified midpoint scheme in time which accurately
preserves the inherent energy law of the model. We use C^0
elements because they are simpler than existing C^1 element
and mixed element methods. We emphasise the energy law
preservation because from the PDE analysis point of view the
energy law is very important to correctly catch the evolution
of singularities in the LC molecule orientation. In addition
we will see numerical examples that the energy law preserving
scheme performs better under some choices of parameters. We
shall apply the same idea to a Cahn-Hilliard phase field model
where the biharmonic operator is decomposed into two Laplacian
operators. But we find that under our scheme non-physical
oscillation near the interface occurs. We figure out the
reason from the viewpoint of differential algebraic equations
and then remove the non-physical oscillation by doing only one
step of a modified backward Euler scheme at the initial time.
A number of numerical examples demonstrate the good
performance of the method. At the end of the talk we will show
how to apply the method to compute a superconductivity model,
especially in the regime of Hc2 or beyond. The talk is based
on a few joint papers with Chun Liu, Qi Wang, Xingbin Pan and
Roland Glowinski, etc.
Data assimilation using reduced order modelling for unstable systems
Abstract
Variational data assimilation techniques for optimal state estimation in very large environmental systems currently use approximate Gauss-Newton (GN) methods. The GN method solves a sequence of linear least squares problems subject to linearized system constraints. For very large systems, low resolution linear approximations to the model dynamics are used to improve the efficiency of the algorithm. We propose a new approach for deriving low order system approximations based on model reduction techniques from control theory which can be applied to unstable stochastic systems. We show how this technique can be combined with the GN method to retain the response of the dynamical system more accurately and improve the performance of the approximate GN method.
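For orientation, a bare-bones Gauss-Newton iteration for $\min_x \frac{1}{2}\|r(x)\|^2$ looks as follows; in the data-assimilation setting $r$ stacks the background and observation misfits and $J$ contains the linearized model dynamics, which is where a low-order (reduced) approximation would be substituted in the inner solve. This is a generic sketch, not the talk's specific implementation:

```python
import numpy as np

def gauss_newton(r, J, x0, iters=20, tol=1e-10):
    # Gauss-Newton: at each outer step solve the linear least squares
    # problem J(x) dx = -r(x) and update x. In incremental variational
    # assimilation, J would be replaced by a cheaper low-resolution or
    # reduced-order linearization.
    x = x0.copy()
    for _ in range(iters):
        rx, Jx = r(x), J(x)
        dx = np.linalg.lstsq(Jx, -rx, rcond=None)[0]
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```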
SOPHY: An Automated, Aerothermal Design and Optimisation System for Aero-Engine Components
Abstract
Computational Fluid Dynamics (CFD) has become an
indispensable tool in designing turbomachinery components across all
Rolls-Royce business sectors, namely Aerospace, Industrial, Marine and Nuclear.
Increasingly sophisticated search and optimisation techniques are used, based on
traditional optimisation methods as well as design-of-experiments
techniques, advanced surrogate methods, and evolutionary optimisation
techniques. Geometry and data representation as well as access, queuing and
loading control of large high performance computing clusters are areas of
research to establish the most efficient techniques for improving the
performance of an already highly efficient modern jet engine.
\\
\\
This presentation focuses on a high fidelity design
optimisation framework called SOPHY that is used in Rolls-Royce to provide
parametric geometry, automatic meshing, advanced design-space search
algorithms, accurate and robust CFD methodology and post-processing. The
significance of including the so-called real geometry features and interaction
of turbomachinery components in the optimisation cycle are discussed. Examples are drawn from real world
applications of the SOPHY design systems in an engine project.
On hypergraph partitioning based ordering methods for sparse matrix factorization
Abstract
We will discuss the use of hypergraph-based methods for orderings of sparse matrices in Cholesky, LU and QR factorizations. For the Cholesky factorization case, we will investigate a recent result on pattern-wise decomposition of sparse matrices, generalize the result and develop algorithmic tools to obtain effective ordering methods. We will also see that the generalized results help us formulate the ordering problem in LU much like we do for the Cholesky case, without ever symmetrizing the given matrix $A$ as $A+A^{T}$ or $A^{T}A$. For the QR factorization case, the use of hypergraph models is fairly standard. We will nonetheless highlight the fact that the method again does not form the possibly much denser matrix $A^{T}A$. We will see comparisons of the hypergraph-based methods with the most common alternatives in all three cases.
\\
\\
This is joint work with Iain S. Duff.
Writing the matrix adjoint as a rational function in the matrix can be interesting
Abstract
We will study the question of whether the adjoint of a given matrix can be written as a rational function in the matrix. After showing necessary and sufficient conditions, rational interpolation theory will help to characterize the most important existing cases. Several topics related to our question will be explored. They range from short recurrence Krylov subspace methods to the roots of harmonic polynomials and harmonic rational functions. The latter have recently found interesting applications in astrophysics, which will briefly be discussed as well.
Multivariate Chebyshev Polynomials; Theory and Applications
Efficiency of serial and parallel block-coordinate descent methods for minimizing composite functions
Abstract
We develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O[(2n/\epsilon)\log(1/\rho)]$ iterations, where $n$ is the dimension of the problem. For strongly convex functions the method converges linearly. This extends recent results of Nesterov [Efficiency of coordinate descent methods on huge-scale optimization problems, CORE Discussion Paper \#2010/2], which cover the smooth case, to composite minimization, while at the same time improving the complexity by the factor of 4 and removing $\epsilon$ from the logarithm. More importantly, in contrast with the aforementioned work in which the authors achieve the results by applying their method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving true iteration complexity bounds. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Time permitting, we will also mention new iteration complexity results for a parallel version of the method.
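A serial sketch of such a method for the $\ell_1$-regularized least squares case (blocks of size one, uniform sampling) might look like this; the stepsize $1/L_i$ and the soft-threshold prox follow the standard coordinate-descent setup rather than the paper's exact notation:

```python
import numpy as np

def rcdm_lasso(A, b, lam, iters=5000, seed=0):
    # Randomized coordinate descent for 0.5*||Ax-b||^2 + lam*||x||_1.
    # Each step picks a coordinate i uniformly at random, takes a
    # gradient step with stepsize 1/L_i (L_i = ||A[:,i]||^2) and applies
    # the soft-threshold prox of the separable |.| term.
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = (A * A).sum(axis=0)            # coordinate-wise Lipschitz constants
    x = np.zeros(n)
    r = -b.copy()                      # running residual A @ x - b
    for _ in range(iters):
        i = rng.integers(n)
        g = A[:, i] @ r                # partial derivative of the smooth part
        t = x[i] - g / L[i]
        xi_new = np.sign(t) * max(abs(t) - lam / L[i], 0.0)   # prox step
        r += (xi_new - x[i]) * A[:, i]  # O(m) residual update, no full A @ x
        x[i] = xi_new
    return x
```

The O(m) residual update per step is what makes such methods viable at huge scale; the parallel version of the talk updates many coordinates per iteration.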
\\
\\
In the second part of the talk we demonstrate numerically that the algorithm is able to solve huge-scale $\ell_1$-regularized support vector machine and least squares problems with a billion variables. Finally, we present preliminary computational results with a GPU-accelerated parallel version of the method, on truss topology design instances, achieving speedups of up to two orders of magnitude when compared to a single-core implementation in C.
\\
\\
References:
\\
P. Richtarik and M. Takac, Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function, 2011; http://arxiv.org/abs/1107.2848
\\
P. Richtarik and M. Takac, Efficient serial and parallel coordinate descent methods for huge-scale truss topology design, 2011; http://www.optimization-online.org/DB_HTML/2011/08/3118.html
\\
P. Richtarik and M. Takac, Efficiency of randomized coordinate descent methods on minimization problems with a composite objective function, Proceedings of SPARS11
The numerical computation of violent liquid motion
Abstract
Liquid impact is a key issue in various industrial applications (seawalls, offshore structures, breakwaters, sloshing in tanks of liquefied natural gas vessels, wave energy converters, offshore wind turbines, etc). Numerical simulations dealing with these applications have been performed by many groups, using various types of numerical methods. In terms of the numerical results, the outcome is often impressive, but the question remains of how relevant these results are when it comes to determining impact pressures. The numerical models are too simplified to reproduce the high variability of the measured pressures. In fact, for the time being, it is not possible to simulate accurately both global and local effects. Unfortunately it appears that local effects predominate over global effects when the behaviour of pressures is considered.
\\
\\
Having said this, it is important to point out that numerical studies can be quite useful to perform sensitivity analyses in idealized conditions such as a liquid mass falling under gravity on top of a horizontal wall and then spreading along the lateral sides. Simple analytical models inspired by numerical results on idealized problems can also be useful to predict trends.
\\
\\
The talk is organized as follows: After an introduction on some of the industrial applications, it will be explained to what extent numerical studies can be used to improve our understanding of impact pressures. Results on a liquid mass hitting a wall obtained by various numerical codes will be shown.
RBFs on Spheres
Abstract
In this talk, I will discuss various aspects of approximation by radial basis functions on spheres. After a short introduction to the subject of scattered data approximation on spheres and optimal recovery, I will particularly talk about error analysis, a hybrid approximation scheme involving polynomials and radial basis functions and, if time permits, solving nonlinear parabolic equations on spheres.
Several kinds of Chebyshev polynomials in higher dimensions
Abstract
Chebyshev polynomials are arguably the most useful orthogonal polynomials for computational purposes. In one dimension they arise from the close relationship that exists between Fourier series and polynomials. We describe how this relationship generalizes to Fourier series on certain symmetric lattices, that exist in all dimensions. The associated polynomials cannot be seen as tensor-product generalizations of the one-dimensional case. Yet, they still enjoy excellent properties for interpolation, integration, and spectral approximation in general, with fast FFT-based algorithms, on a variety of domains. The first interesting case is the equilateral triangle in two dimensions (almost). We further describe the generalization of Chebyshev polynomials of the second kind, and many new kinds are found when the theory is completed. Connections are made to Laplacian eigenfunctions, representation theory of finite groups, and the Gibbs phenomenon in higher dimensions.
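The one-dimensional relationship alluded to above is the familiar fact that Chebyshev coefficients are Fourier coefficients of an even (periodic) extension, so they can be computed by FFT. A minimal sketch of this 1D building block:

```python
import numpy as np

def cheb_coeffs(f, n):
    # Chebyshev coefficients of f on [-1,1] from its values at the n+1
    # Chebyshev extreme points x_j = cos(pi*j/n), via an FFT of the even
    # extension of the sampled values.
    x = np.cos(np.pi * np.arange(n + 1) / n)
    v = f(x)
    V = np.concatenate([v, v[-2:0:-1]])   # even extension, length 2n
    c = np.real(np.fft.fft(V)) / n
    c[0] /= 2.0
    c[n] /= 2.0
    return c[: n + 1]

def cheb_eval(c, x):
    # Evaluate sum_k c_k T_k(x) by the Clenshaw recurrence.
    b1 = b2 = 0.0
    for ck in c[:0:-1]:
        b1, b2 = 2 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + c[0]
```

The multivariate theory of the talk replaces the even extension by symmetrization over a larger finite group acting on a lattice, keeping the FFT available.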
Analysis of a multiscale method for nonlinear nonmonotone elliptic problems
Abstract
Following the framework of the heterogeneous multiscale method, we present a numerical method for nonlinear elliptic homogenization problems. We briefly review the numerical method, which relies on an efficient coupling of macro and micro solvers, for linear problems. A fully discrete analysis is then given for nonlinear (nonmonotone) problems: optimal convergence rates in the H1 and L2 norms are derived, and the uniqueness of the numerical solution is shown on sufficiently fine macro and micro meshes.
Numerical examples confirm the theoretical convergence rates and illustrate the performance and versatility of our approach.
IDR -- A New Class of Krylov Subspace Solvers: Benefits and Drawbacks
Abstract
This talk is about the Induced Dimension Reduction (IDR) methods developed by Peter Sonneveld and, more recently, Martin van Gijzen. We sketch the history, outline the underlying principle, and give a few details about different points of view on this class of Krylov subspace methods. If time permits, we briefly outline some recent developments in this field and the benefits and drawbacks of these and IDR methods in general.
Modelling and simulation of the self-assembly of thin solid films
Abstract
Many continuum models have been derived in recent years which describe the self-assembly of industrially utilisable crystalline films to a level of detail that allows qualitative comparisons with experiments. For thin-film problems, where the characteristic length scales in vertical and horizontal directions differ significantly, the governing surface diffusion equations can be reduced to simpler PDEs by making use of asymptotic expansions. Many mathematical problems and solutions emerge from such new evolution equations, and many of them are reminiscent of Cahn-Hilliard type equations. The surface diffusion models are of high (fourth or even sixth) order.
We present the modeling, model reduction and simulation results for heteroepitaxial growth, such as Ge/Si quantum dot self-assembly. The numerical methods we are using are based on trigonometric interpolation. These kinds of pseudospectral methods seem very well suited for simulating the coarsening of large quantum dot arrays. When the anisotropy of the growing crystalline film is strong, it might become necessary to add a corner regularisation to the model. Then the transition region between neighboring facets is still smooth, but its scale is rather small. In this case it might be useful to think about an adaptive extension of the existing method.

Figure 1: Ostwald ripening process of quantum dots depicted at consecutive time points. One fourth of the whole, periodic, simulated domain is shown.
Joint work with Peter Evans and Barbara Wagner
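As an illustration of the trigonometric-interpolation (pseudospectral) approach, here is a toy semi-implicit Fourier step for a fourth-order Cahn-Hilliard-type equation, standing in for the higher-order surface-diffusion models of the talk; the equation and all parameters are simplified for illustration:

```python
import numpy as np

def spectral_step(h, dt, eps=0.1):
    # One semi-implicit pseudospectral step for the toy model
    #   h_t = Laplacian(h^3 - h) - eps^2 * Laplacian^2 h
    # on a periodic square: nonlinear term treated explicitly via FFT,
    # the stiff biharmonic term implicitly, so the update is diagonal
    # in Fourier space.
    n = h.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lap = -(kx**2 + ky**2)                 # Fourier symbol of the Laplacian
    nl_hat = lap * np.fft.fft2(h**3 - h)   # explicit nonlinear part
    h_hat = (np.fft.fft2(h) + dt * nl_hat) / (1.0 + dt * eps**2 * lap**2)
    return np.real(np.fft.ifft2(h_hat))
```

The zero wavenumber is untouched by the update, so the mean film height (mass) is conserved, as it should be for this conservative form.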
Uncertainty Analysis for Flow of an Incompressible Fluid in a Sudden Expansion in Two-Dimensional Channel
Abstract
This seminar will be held at the Rutherford Appleton Laboratory near Didcot.
Numerical calculations of laminar flow in a two-dimensional channel with a sudden expansion exhibit a symmetry-breaking bifurcation at Reynolds number 40.45 when the expansion ratio is 3:1. In the experiments reported by Fearn, Mullin and Cliffe [1] there is a large perturbation to this bifurcation and the agreement with the numerical calculations is surprisingly poor. Possible reasons for this discrepancy are explored using modern techniques for uncertainty quantification.
When experimental equipment is constructed there are, inevitably, small manufacturing imperfections that can break the symmetry in the apparatus. In this work we considered a simple model for these imperfections. It was assumed that the inlet section of the channel was displaced by a small amount and that the centre line of the inlet section was not parallel to the centre line of the outlet section. Both imperfections were modelled as normal random variables with variance equal to the manufacturing tolerance. Thus the problem to be solved is the Navier-Stokes equations in a geometry with small random perturbations. A co-ordinate transformation technique was used to transform the problem to a fixed deterministic domain but with random coefficients appearing in the transformed Navier-Stokes equations. The resulting equations were solved using a stochastic collocation technique that took into account the fact that the problem has a discontinuity in parameter space arising from the bifurcation structure in the problem.
The numerical results are in the form of an approximation to a probability measure on the set of bifurcation diagrams. The experimental data of Fearn, Mullin and Cliffe are consistent with the computed solutions, so it appears that a satisfactory explanation for the large perturbation can be provided by manufacturing imperfections in the experimental apparatus.
The work demonstrates that modern methods for uncertainty quantification can be applied successfully to a bifurcation problem arising in fluid mechanics. It should be possible to apply similar techniques to a wide range of bifurcation problems in fluid mechanics in the future.
References:
[1] R M Fearn, T Mullin and K A Cliffe Nonlinear flow phenomena in a symmetric sudden expansion, J. Fluid Mech. 211, 595-608, 1990.
Multilevel Monte Carlo method
Abstract
Please note that this is a short notice change from the originally advertised talk by Dr Shahrokh Shahpar (Rolls-Royce plc.)
The new talk "Multilevel Monte Carlo method" is given by Mike Giles, Oxford-Man Institute of Quantitative Finance, Mathematical Institute, University of Oxford.
Joint work with Rob Scheichl, Aretha Teckentrup (Bath) and Andrew Cliffe (Nottingham)
An Overview of Adaptive Mesh Generation and Variational Methods
Abstract
Over the last several decades, many mesh generation methods and a plethora of adaptive methods for solving differential equations have been developed. In this talk, we take a general approach for describing the mesh generation problem, which can be considered as being in some sense equivalent to determining a coordinate transformation between physical space and a computational space. Our description provides some new theoretical insights into precisely what is accomplished from mesh equidistribution (which is a standard adaptivity tool used in practice) and mesh alignment. We show how variational mesh generation algorithms, which have historically been the most common and important ones, can generally be compared using these mesh generation principles. Lastly, we relate these to a variety of moving mesh methods for solving time-dependent PDEs.
This is joint work with Weizhang Huang, Kansas University
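The equidistribution principle mentioned above is simplest in one dimension: choose the mesh so that every cell carries the same integral of a monitor function, which amounts to inverting the cumulative integral of the monitor at uniform values. A sketch, with an illustrative monitor of our own choosing:

```python
import numpy as np

def equidistribute(monitor, n, a=0.0, b=1.0, samples=2000):
    # Build an (n+1)-point mesh on [a,b] equidistributing the monitor M:
    # each cell carries the same integral of M. We tabulate the
    # cumulative (trapezoidal) integral c(x) and invert it at uniform
    # levels by linear interpolation -- the 1D special case of the
    # coordinate-transformation view of mesh generation.
    s = np.linspace(a, b, samples)
    M = monitor(s)
    c = np.concatenate([[0.0],
                        np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(s))])
    targets = np.linspace(0.0, c[-1], n + 1)
    return np.interp(targets, c, s)

# Example: points cluster near a steep layer at x = 0.5
mesh = equidistribute(lambda x: 1.0 + 50.0 * np.exp(-200 * (x - 0.5)**2), 20)
```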
Optimal Iterative Solvers for Saddle Point Problems
Abstract
In this talk we discuss the design of efficient numerical methods for solving symmetric indefinite linear systems arising from mixed approximation of elliptic PDE problems with associated constraints. Examples include linear elasticity (Navier-Lame equations), steady fluid flow (Stokes' equations) and electromagnetism (Maxwell's equations).
The novel feature of our iterative solution approach is the incorporation of error control in the natural "energy" norm in combination with an a posteriori estimator for the PDE approximation error. This leads to a robust and optimally efficient stopping criterion: the iteration is terminated as soon as the algebraic error is insignificant compared to the approximation error. We describe a "proof of concept" MATLAB implementation of this algorithm, which we call EST_MINRES, and we illustrate its effectiveness when integrated into our Incompressible Flow Iterative Solution Software (IFISS) package (http://www.manchester.ac.uk/ifiss/).
Analytical Results on the PAUSE Auction Procedure
Abstract
In this talk, we focus on the analytical properties of a decentralized auction, namely the PAUSE Auction Procedure. We prove that the revenue of the auctioneer from PAUSE is greater than or equal to the profit from the well-known VCG auction when there are only two bidders and provide lower bounds on the profit for arbitrary number of bidders. Based on these bounds and observations from auctions with few items, we propose a modification of the procedure that increases the profit. We believe that this study, which is still in progress, will be a milestone in designing better decentralized auctions since it is the first analytical study on such auctions with promising results.
Iterative Valid Polynomial Inequalities Generation for Polynomial Programming
Abstract
Polynomial programs are usually solved using hierarchies of convex relaxations. This scheme rapidly becomes computationally expensive and is often tractable only for problems of small size. We propose an iterative scheme that improves an initial relaxation without incurring exponential growth in size. The key ingredient is a dynamic scheme for generating valid polynomial inequalities for general polynomial programs. These valid inequalities are then used to construct better approximations of the original problem. As a result, the proposed scheme is in principle scalable to large general combinatorial optimization problems.
Joint work with Bissan Ghaddar and Miguel Anjos
OP2 -- an open-source parallel library for unstructured grid computations
Abstract
Based on an MPI library written over 10 years ago, OP2 is a new open-source library which is aimed at application developers using unstructured grids. Using a single API, it targets a variety of HPC architectures, including both manycore GPUs and multicore CPUs with vector units. The talk will cover the API design, key aspects of the parallel implementation on the different platforms, and preliminary performance results on a small but representative CFD test code.
Project homepage: http://people.maths.ox.ac.uk/gilesm/op2/
Backward Perturbation Analysis of Linear Least Squares Problems
Abstract
We consider the iterative solution of large sparse linear least squares (LS) problems. Specifically, we focus on the design and implementation of reliable stopping criteria for the widely-used algorithm LSQR of Paige and Saunders. First we perform a backward perturbation analysis of the LS problem. We show why certain projections of the residual vector are good measures of convergence, and we propose stopping criteria that use these quantities. These projections are too expensive to compute to be used directly in practice. We show how to estimate them efficiently at every iteration of the algorithm LSQR. Our proposed stopping criteria can therefore be used in practice.
This talk is based on joint work with Xiao-Wen Chang, Chris Paige, Pavel Jiranek, and Serge Gratton.
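For illustration, the quantity behind one of the classical stopping tests can be formed directly; LSQR instead estimates it essentially for free from the bidiagonalization. The specific normalization below is one common choice and not necessarily the exact criterion proposed in the talk:

```python
import numpy as np

def ls_convergence_measures(A, b, x):
    # Quantities monitored when iteratively solving min ||Ax - b||:
    # the scaled normal-equation residual ||A^T r|| / (||A|| ||r||),
    # which is the classical test for (possibly inconsistent) LS
    # problems, and the plain residual norm. Here we form them
    # explicitly; LSQR estimates them cheaply at every iteration.
    r = b - A @ x
    nrm_r = np.linalg.norm(r)
    if nrm_r == 0.0:
        return 0.0, 0.0
    test2 = np.linalg.norm(A.T @ r) / (np.linalg.norm(A, "fro") * nrm_r)
    return test2, nrm_r
```

At an exact LS solution the first measure vanishes (the residual is orthogonal to the column space), which is why projections of the residual of this kind make good convergence measures.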
Optimized domain decomposition methods that scale weakly
Abstract
In various fields of application, one must solve very large scale boundary value problems using parallel solvers and supercomputers. The domain decomposition approach partitions the large computational domain into smaller computational subdomains. In order to speed up the convergence, there are several ``optimized'' algorithms that use Robin transmission conditions across the artificial interfaces (FETI-2LM). It is known that this approach alone is not sufficient: as the number of subdomains increases, the number of iterations required for convergence also increases and hence the parallel speedup is lost. A known solution for classical Schwarz methods as well as FETI algorithms is to incorporate a ``coarse grid correction'', which is able to transmit low-frequency information more quickly across the whole domain. Such algorithms are known to ``scale weakly'' to large supercomputers. A coarse grid correction is also necessary for FETI-2LM methods. In this talk, we will introduce and analyze coarse grid correction algorithms for FETI-2LM domain decomposition methods.
Mathematics enters the picture
Abstract
Can one of the most important Italian Renaissance frescoes, reduced to hundreds of thousands of fragments by a bombing during the Second World War, be re-composed more than 60 years after its destruction? Can we reconstruct the missing parts, and can we say something about their original color?
In this talk we would like to exemplify, hopefully effectively, by taking advantage of the seduction of art, how mathematics today can be applied to real-life problems which were considered unsolvable only a few years ago.
A high performance dual revised simplex solver
Abstract
Implementations of the revised simplex method for solving large scale sparse linear programming (LP) problems are highly efficient for single-core architectures. This talk will discuss the limitations of the underlying techniques in the context of modern multi-core architectures, in particular with respect to memory access. Novel techniques for implementing the dual revised simplex method will be introduced, and their use in developing a dual revised simplex solver for multi-core architectures will be described.
Primal-dual active set methods for solving Non-local Allen-Cahn Systems
Abstract
We propose and analyze a primal-dual active set method for local and non-local vector-valued Allen-Cahn variational inequalities.
We show existence and uniqueness of a solution for the non-local vector-valued Allen-Cahn variational inequality in a formulation involving Lagrange multipliers for local and non-local constraints. Furthermore, convergence of the algorithm is shown by interpreting the approach as a semi-smooth Newton method and numerical simulations are presented.
Optimization with time-periodic PDE constraints: Numerical methods and applications
Abstract
Optimization problems with time-periodic parabolic PDE constraints can arise in important chemical engineering applications, e.g., in periodic adsorption processes. I will present a novel direct numerical method for this problem class. The main numerical challenges are the high nonlinearity and high dimensionality of the discretized problem. The method is based on Direct Multiple Shooting and inexact Sequential Quadratic Programming with globalization of convergence based on natural level functions. I will highlight the use of a generalized Richardson iteration with a novel two-grid Newton-Picard preconditioner for the solution of the quadratic subproblems. At the end of the talk I will explain the principle of Simulated Moving Bed processes and conclude with numerical results for optimization of such a process.
Applications of linear barycentric rational interpolation at equidistant points
Abstract
Efficient linear and infinitely smooth approximation of functions from equidistant samples is a fascinating problem, at least since Runge showed in 1901 that it is not delivered by the interpolating polynomial.
In 1988, I suggested substituting linear rational interpolation for polynomial interpolation, by replacing the denominator 1 with a polynomial depending on the nodes, though not on the interpolated function. Unfortunately the interpolant so obtained converges merely as the square of the mesh size. In 2007, Floater and Hormann gave, for every integer, a denominator that yields convergence of that prescribed order.
In the present talk I shall present the corresponding interpolant as well as some of its applications to differentiation, integration and the solution of boundary value problems. This is joint work with Georges Klein and Michael Floater.
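As a concrete illustration (my own sketch, not from the talk), the Floater-Hormann interpolant at equispaced nodes can be evaluated with the usual barycentric formula; the parameter d controls the convergence order, and d = 1 recovers (up to scaling) the 1988 weights:

```python
import math

def fh_weights(n, d):
    """Floater-Hormann barycentric weights for n+1 equispaced nodes,
    blending parameter d; d = 1 gives weights (-1)^k with halved endpoints."""
    w = []
    for k in range(n + 1):
        s = sum(math.comb(d, k - i)
                for i in range(max(0, k - d), min(k, n - d) + 1))
        w.append((-1) ** k * s)
    return w

def fh_interpolate(xs, fs, w, x):
    """Evaluate the barycentric rational interpolant at x."""
    num = den = 0.0
    for xk, fk, wk in zip(xs, fs, w):
        if x == xk:          # exact hit on a node
            return fk
        t = wk / (x - xk)
        num += t * fk
        den += t
    return num / den

# interpolate exp on [-1, 1] from 41 equispaced samples, d = 3
n, d = 40, 3
xs = [-1 + 2 * k / n for k in range(n + 1)]
fs = [math.exp(x) for x in xs]
w = fh_weights(n, d)
approx = fh_interpolate(xs, fs, w, 0.123)
```

Because the denominator depends only on the nodes, the scheme stays linear in the data, which is what makes the applications to differentiation, integration and boundary value problems straightforward.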
The Convergence Behaviour of BiCG
Abstract
The Bi-Conjugate Gradient method (BiCG) is a well-known iterative solver (Krylov method) for linear systems of equations, proposed about 35 years ago, and the basis for some of the most successful iterative methods today, like BiCGSTAB. Nevertheless, its convergence behavior is poorly understood. The method satisfies a Petrov-Galerkin property, and hence its residual is constrained to a space of decreasing dimension (decreasing by one per iteration). However, that does not explain why the method often converges in, say, a hundred or a few hundred iterations for problems involving a hundred thousand or a million unknowns. For many problems, BiCG converges not much more slowly than an optimal method, like GMRES, even though it satisfies no optimality property. In fact, Anne Greenbaum showed that every three-term recurrence, for the first (n/2)+1 iterations (for a system of dimension n), is BiCG for some initial 'left' starting vector. So, why does the method work so well in most cases? We will introduce Krylov methods, discuss the convergence of optimal methods, describe the BiCG method, and provide an analysis of its convergence behavior.
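For concreteness, a bare-bones unpreconditioned BiCG iteration can be written in a few lines of Python (my own sketch; the shadow residual is initialized to r0, a common but arbitrary choice, and the well-known breakdown cases are not guarded against):

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bicg(A, b, tol=1e-10, maxit=100):
    """Bi-Conjugate Gradient for a dense nonsymmetric system A x = b."""
    At = [list(col) for col in zip(*A)]   # transpose for the shadow recurrence
    x = [0.0] * len(b)
    r = b[:]                              # residual, since x0 = 0
    rt = r[:]                             # shadow residual rt0 = r0
    p, pt = r[:], rt[:]
    rho = dot(rt, r)
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rho / dot(pt, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        Atpt = matvec(At, pt)
        rt = [ri - alpha * ai for ri, ai in zip(rt, Atpt)]
        rho_new = dot(rt, r)
        if dot(r, r) ** 0.5 < tol:
            break
        beta = rho_new / rho
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        pt = [ri + beta * pi for ri, pi in zip(rt, pt)]
        rho = rho_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 2.0, 5.0]]
b = [1.0, 2.0, 3.0]
x = bicg(A, b)
res = [bi - sum(aij * xj for aij, xj in zip(row, x))
       for row, bi in zip(A, b)]
```

Note that each iteration needs a product with A and one with its transpose, the price BiCG pays for its short recurrences.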
Algebraic multigrid with guaranteed convergence rate
Abstract
Algebraic multigrid methods are nowadays popular for solving linear systems arising from the discretization of elliptic PDEs. They try to combine the efficiency of well-tuned specific schemes, like classical (geometry-based) multigrid methods, with the ease of use of general-purpose preconditioning techniques. This requires defining automatic coarsening procedures, which set up a hierarchy of coarse representations of the problem at hand using only information from the system matrix.
In this talk, we focus on aggregation-based algebraic multigrid methods. With these, the coarse unknowns are simply formed by grouping variables into disjoint subsets called aggregates.
In the first part of the talk, we consider symmetric M-matrices with nonnegative row-sum. We show how aggregates can then be formed in such a way that the resulting method satisfies a prescribed bound on its convergence rate. That is, instead of the classical paradigm that applies an algorithm and then performs its analysis, the analysis is integrated into the set-up phase so as to enforce minimal quality requirements. As a result, we obtain one of the first algebraic multigrid methods with a full convergence proof. The efficiency of the method is further illustrated by numerical results on finite difference and linear finite element discretizations of second-order elliptic PDEs; the set of problems includes problems with jumps, anisotropy, reentrant corners and/or unstructured meshes, sometimes with local refinement.
In the second part of the talk, we discuss nonsymmetric problems. We show how the previous approach can be extended to M-matrices with both row- and column-sums nonnegative, which includes some stable discretizations of convection-diffusion equations with divergence-free convective flow. Some (preliminary) numerical results are also presented.
This is joint work with Artem Napov.
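To fix ideas, the following Python sketch (my own illustration, not the authors' code) builds the piecewise-constant prolongation P for pairwise aggregates and forms the Galerkin coarse operator P^T A P for a 1D Laplacian; the coarse matrix turns out to be the same three-point stencil on half as many unknowns:

```python
def laplacian1d(n):
    """Tridiagonal 1D Laplacian stencil [-1, 2, -1] with Dirichlet ends."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        if i > 0:
            A[i][i - 1] = -1.0
        if i < n - 1:
            A[i][i + 1] = -1.0
    return A

def pairwise_prolongation(n):
    """Piecewise-constant prolongation for aggregates {0,1}, {2,3}, ..."""
    nc = n // 2
    P = [[0.0] * nc for _ in range(n)]
    for i in range(n):
        P[i][i // 2] = 1.0   # unknown i belongs to aggregate i // 2
    return P

def galerkin_coarse(A, P):
    """Coarse operator A_c = P^T A P (dense triple product, for clarity)."""
    n, nc = len(P), len(P[0])
    AP = [[sum(A[i][k] * P[k][j] for k in range(n)) for j in range(nc)]
          for i in range(n)]
    return [[sum(P[k][i] * AP[k][j] for k in range(n)) for j in range(nc)]
            for i in range(nc)]

A = laplacian1d(8)
Ac = galerkin_coarse(A, pairwise_prolongation(8))
```

The quality requirement discussed in the talk amounts to choosing the aggregates so that a provable bound holds for the resulting two-grid rate, rather than grouping by fixed pairs as in this toy example.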
Diffuse interface models for two-phase flow
Abstract
Starting from a Navier-Stokes-Cahn-Hilliard equation for a two-phase flow problem, we discuss efficient numerical approaches based on adaptive finite element methods. Various extensions of the model are discussed: a) we consider the model on implicitly described geometries, which is used to simulate the sliding of droplets over nano-patterned surfaces, b) we consider the effect of soluble surfactants and show their influence on tip splitting of droplets under shear flow, and c) we consider bijels as a new class of soft matter materials, in which colloidal particles are jammed on the fluid-fluid interface and affect the motion of the interface due to an elastic force.
The work is based on joint work with Sebastian Aland (TU Dresden), John Lowengrub (UC Irvine) and Knut Erik Teigen (U Trondheim).
A Nonlinear Discretization Theory with Applications to Meshfree Methods
Abstract
We extend for the first time the linear discretization theory of Schaback, developed for meshfree methods, to nonlinear operator equations, relying heavily on methods of Böhmer, Vol I. There is no restriction to elliptic problems or to symmetric numerical methods like Galerkin techniques.
Trial spaces can be arbitrary, but have to approximate the solution well, and testing can be weak or strong. We present Galerkin techniques as an example. On the downside, stability is not easy to prove for special applications, and numerical methods have to be formulated as optimization problems. Results of this discretization theory cover error bounds and convergence rates. These results remain valid for the general case of fully nonlinear elliptic differential equations of second order. Some numerical examples are added for illustration.
A fast and simple algorithm for the computation of Legendre coefficients
Abstract
We present an O(N log N) algorithm for the calculation of the first N coefficients in an expansion of an analytic function in Legendre polynomials. In essence, the algorithm consists of an integration of a suitably weighted function along an ellipse, a task which can be accomplished with the Fast Fourier Transform, followed by some post-processing.
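For reference, the quantities being computed are c_n = (2n+1)/2 ∫_{-1}^{1} f(x) P_n(x) dx. The sketch below (my own baseline, emphatically not the O(N log N) algorithm of the talk) computes them naively by Bonnet's three-term recurrence and composite Simpson quadrature, which is useful for checking a fast implementation on small N:

```python
import math

def legendre_values(x, N):
    """P_0(x), ..., P_{N-1}(x) via Bonnet's recurrence
    (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    P = [1.0, x]
    for n in range(1, N - 1):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return P[:N]

def legendre_coeffs(f, N, m=2000):
    """First N Legendre coefficients c_n = (2n+1)/2 * int f P_n dx,
    by composite Simpson's rule with m (even) subintervals: O(N*m) work."""
    h = 2.0 / m
    c = [0.0] * N
    for j in range(m + 1):
        x = -1.0 + j * h
        wgt = 1 if j in (0, m) else (4 if j % 2 else 2)
        fx = f(x)
        for n, Pn in enumerate(legendre_values(x, N)):
            c[n] += wgt * fx * Pn
    return [(2 * n + 1) / 2.0 * (h / 3.0) * cn for n, cn in enumerate(c)]

# x^2 = (1/3) P_0 + (2/3) P_2, a handy sanity check
coeffs = legendre_coeffs(lambda x: x * x, 4)
```

The fast algorithm replaces this quadratic-cost double loop by a single contour integral evaluated with the FFT.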
Towards Effective Computation with Kernels on Manifolds
Abstract
High-order surface integral algorithms for 3D computational electromagnetics
Abstract
We discuss a class of high-order spectral-Galerkin surface integral algorithms with specific focus on simulating the scattering of electromagnetic waves by a collection of three dimensional deterministic and stochastic particles.
Numerical Methods for Monge-Kantorovich Transportation Problems
Abstract
In the eighteenth century, Gaspard Monge considered the problem of finding the best way of moving a pile of material from one site to another. This optimal transport problem has many applications, such as mesh generation, moving mesh methods, image registration, image morphing, optical design, cartograms, and probability theory. The solution to an optimal transport problem can be found by solving the Monge-Ampère equation, a highly nonlinear second-order elliptic partial differential equation. Leonid Kantorovich, however, showed that it is possible to analyse optimal transport problems in a framework that naturally leads to a linear programming formulation. In recent years several efficient methods have been proposed for solving the Monge-Ampère equation. For the linear programming problem, standard methods do not exploit the special properties of the solution and require a number of operations that is quadratic or even cubic in the number of points in the discretisation. In this talk I will discuss techniques that can be used to obtain more efficient methods.
Joint work with Chris Budd (University of Bath).
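In the discrete setting with equal unit masses, the Kantorovich linear program reduces to an assignment problem. The toy sketch below (my own example) solves a tiny 1D instance with squared-distance cost by brute-force enumeration, which makes the combinatorial cost that efficient methods must avoid very tangible:

```python
from itertools import permutations

def transport_cost(sources, targets, perm):
    """Total squared-distance cost of sending sources[i] to targets[perm[i]]."""
    return sum((sources[i] - targets[j]) ** 2 for i, j in enumerate(perm))

def brute_force_transport(sources, targets):
    """Optimal assignment for equal unit masses by O(n!) enumeration;
    real solvers exploit the LP structure instead."""
    n = len(sources)
    return min(permutations(range(n)),
               key=lambda p: transport_cost(sources, targets, p))

best = brute_force_transport([0.0, 1.0, 2.0], [2.1, 0.1, 1.1])
```

For 1D squared-distance costs the optimal plan is the monotone matching of sorted points, which the enumeration confirms; the structure-exploiting methods of the talk are what make such problems tractable at realistic sizes.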