Preconditioning for eigenvalue problems: ideas, algorithms, error analysis
Abstract
The convergence of iterative methods for solving the linear system Ax = b with a Hermitian positive definite matrix A depends on the condition number of A: the smaller the latter the faster the former. Hence the idea to multiply the equation by a matrix T such that the condition number of TA is much smaller than that of A. The above is a common interpretation of the technique known as preconditioning, the matrix T being referred to as the preconditioner for A.
The eigenvalue computation does not seem to benefit from the direct application of such a technique. Indeed, what is the point in replacing the standard eigenvalue problem Ax = λx with the generalized one TAx = λTx that does not appear to be any easier to solve? It is hardly surprising then that modern eigensolvers, such as ARPACK, do not use preconditioning directly. Instead, an option is provided to accelerate the convergence to the sought eigenpairs by applying spectral transformation, which generally requires the user to supply a subroutine that solves the system (A−σI)y = z, and it is entirely up to the user to employ preconditioning if they opt to solve this system iteratively.
In this talk we discuss some alternative views on the preconditioning technique that are more general and more useful in the convergence analysis of iterative methods and that show, in particular, that the direct preconditioning approach does make sense in eigenvalue computation. We review some iterative algorithms that can benefit from direct preconditioning, present available convergence results and demonstrate both theoretically and numerically that the direct preconditioning approach has advantages over the two-level approach. Finally, we discuss the role that preconditioning can play in a posteriori error analysis, present some a posteriori error estimates that use preconditioning and compare them with commonly used estimates in terms of the Euclidean norm of the residual.
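As a concrete illustration of a directly preconditioned eigensolver, here is a minimal sketch of the classical preconditioned inverse iteration, offered for orientation rather than as the algorithm of the talk; it assumes A is Hermitian positive definite and that T approximates the action of $A^{-1}$:

    import numpy as np

    def pinvit(A, T, x0, iters=100):
        """Preconditioned inverse iteration for the smallest eigenpair of a
        Hermitian positive definite A; T(v) applies a preconditioner that
        approximates the action of A^{-1}."""
        x = x0 / np.linalg.norm(x0)
        for _ in range(iters):
            rho = x.conj() @ A @ x            # Rayleigh quotient (x has unit norm)
            r = A @ x - rho * x               # eigenvalue residual
            x = x - T(r)                      # directly preconditioned update
            x = x / np.linalg.norm(x)
        return rho, x

    # Illustrative choice of A and of a diagonal (Jacobi) preconditioner
    n = 200
    A = np.diag(np.arange(1.0, n + 1)) + 0.01 * np.ones((n, n))
    T = lambda v: v / np.diag(A)
    lam, x = pinvit(A, T, np.random.rand(n))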
Computing ratings for eigenvectors
Abstract
We consider the problem of computing ratings using the results of games (such as chess) played between a set of n players, and show how this problem can be reduced to computing the positive eigenvectors corresponding to the dominant eigenvalues of certain n by n matrices. There is a close connection with the stationary probability distributions of certain Markov chains. In practice, if n is large, then the matrices involved will be sparse, and the power method may be used to solve the eigenvalue problems efficiently.
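A minimal sketch of the reduction described above, using a hypothetical score matrix (the talk's actual construction of the matrix from game results is not reproduced here):

    import numpy as np

    def ratings_by_power_method(S, iters=200):
        """Given a nonnegative n-by-n score matrix S (S[i, j] measuring how well
        player i did against player j), return the positive eigenvector of the
        dominant eigenvalue; its entries serve as ratings.  The matrix below is
        only illustrative, not the construction from the talk."""
        r = np.ones(S.shape[0]) / S.shape[0]
        for _ in range(iters):
            r = S @ r
            r /= np.linalg.norm(r, 1)          # normalise to avoid over/underflow
        return r

    # Hypothetical results among 3 players: S[i, j] = games i won against j
    S = np.array([[0.0, 2.0, 1.0],
                  [1.0, 0.0, 3.0],
                  [2.0, 1.0, 0.0]])
    print(ratings_by_power_method(S))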
The use of coupled solvers for complex multiphase and reacting flows
Abstract
Many industrial flow problems, especially in the minerals and process
industries, are very complex, with strong interactions between phases
and components, and with very different length and time scales. This
presentation outlines the algorithms used in the CFX-5 software, and
describes the extension of its coupled solver approach to some
multi-scale industrial problems, including Population Balance modelling
to predict size distributions of a disperse phase. These results will be
illustrated on some practical industrial problems.
Resolution of Gibbs' phenomenon from global to semi-global
Abstract
Spectral projections enjoy high order convergence for globally smooth functions. However, a single discontinuity introduces O(1) spurious oscillations near the discontinuity and reduces the high order convergence rate to first order; this is Gibbs' phenomenon. Although a direct expansion of the function in terms of its global moments yields this low order approximation, high resolution information is retained in the global moments. Two techniques for the resolution of Gibbs' phenomenon are discussed: filtering and reprojection methods. An adaptive filter with optimal joint time-frequency localization is presented, which recovers a function from its N term Fourier projection within the error bound \exp(-Nd(x)), where d(x) is the distance from the point being recovered to the nearest discontinuity. Symmetric filtering, however, must sacrifice accuracy when approaching a discontinuity. To overcome this limitation, Gegenbauer postprocessing was introduced by Gottlieb, Shu, et al., which recovers a function from its N term Fourier projection within the error bound \exp(-N). An extension of Gegenbauer postprocessing with improved convergence and robustness properties is presented, the robust Gibbs complements. Filtering and reprojection methods will be put in a unifying framework, and their properties such as robustness and computational cost contrasted. This research was conducted jointly with Eitan Tadmor and Anne Gelb.
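As an illustration of the filtering idea (a generic exponential filter, not the adaptive, optimally localised filter of the talk), one damps the high Fourier modes before summing the truncated series:

    import numpy as np

    def filtered_fourier(f, N, alpha=32.0, p=8, M=1024):
        """Recover f on [0, 2*pi) from its 2N+1 Fourier modes, damping the high
        modes with an exponential filter sigma(eta) = exp(-alpha * eta^p).
        A generic spectral filter used purely for illustration."""
        x = 2 * np.pi * np.arange(M) / M
        k = np.arange(-N, N + 1)
        fx = f(x)
        # Fourier coefficients via the (spectrally accurate) trapezoidal rule
        coeffs = np.array([np.mean(fx * np.exp(-1j * kk * x)) for kk in k])
        sigma = np.exp(-alpha * (np.abs(k) / N) ** p)      # exponential filter
        fN = sum(s * c * np.exp(1j * kk * x) for s, c, kk in zip(sigma, coeffs, k))
        return x, np.real(fN)

    # Example: a step function, which exhibits Gibbs oscillations if unfiltered
    x, fN = filtered_fourier(lambda t: np.where(t < np.pi, 1.0, -1.0), N=64)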
Weighted matchings for the preconditioning of symmetric indefinite matrices
Abstract
The use of weighted matchings is becoming increasingly standard in the
solution of sparse linear systems. While non-symmetric permutations based on these
matchings have been the state-of-the-art for
several years (especially for direct solvers), approaches for symmetric
matrices have only recently gained attention.
\\
In this talk we discuss results of our work on using weighted matchings in
the preconditioning of symmetric indefinite linear systems, following ideas
introduced by Duff and Gilbert. In order to maintain symmetry,
the weighted matching is symmetrized and the cycle structure of the
resulting matching is used to build reorderings that form small diagonal
blocks from the matched entries.
\\
For the preconditioning we investigated two approaches. One is an
incomplete $LDL^{T}$ preconditioning that chooses 1x1 or 2x2 diagonal pivots
based on a simple tridiagonal pivoting criterion. The second approach
targets distributed computing, and is based on factorized sparse approximate
inverses, whose existence, in turn, is based on the existence of an $LDL^{T}$
factorization. Results for a number of comprehensive test sets are given,
including comparisons with sparse direct solvers and other preconditioning
approaches.
An interior-point method for MPECs based on strictly feasible relaxations
Abstract
An interior-point method for solving mathematical programs with
equilibrium constraints (MPECs) is proposed. At each iteration of the
algorithm, a single primal-dual step is computed from each subproblem of
a sequence. Each subproblem is defined as a relaxation of the MPEC with
a nonempty strictly feasible region. In contrast to previous
approaches, the proposed relaxation scheme preserves the nonempty strict
feasibility of each subproblem even in the limit. Local and superlinear
convergence of the algorithm is proved even with a less restrictive
strict complementarity condition than the standard one. Moreover,
mechanisms for inducing global convergence in practice are proposed.
Numerical results on the MacMPEC test problem set demonstrate the
fast local convergence properties of the algorithm.
The Trapezoidal rule in the complex plane
Abstract
The trapezoidal rule for numerical integration is remarkably accurate when
the integrand under consideration is smooth and periodic. In this
situation it is superior to more sophisticated methods like Simpson's rule
and even the Gauss-Legendre rule. In the first part of the talk we
discuss this phenomenon and give a few elementary examples. In the second
part of the talk we discuss the application of this idea to the numerical
evaluation of contour integrals in the complex plane.
Demonstrations involving numerical differentiation, the computation
of special functions, and the inversion of the Laplace transform will be
presented.
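A minimal sketch of the second part of the idea, assuming $f$ is analytic on and inside the contour: the Cauchy integral formula for $f'(z_0)$ becomes a periodic integral over the circle $|z-z_0|=r$, to which the trapezoidal rule applies with its full accuracy:

    import numpy as np

    def derivative_by_contour(f, z0, r=0.1, N=32):
        """Approximate f'(z0) via the Cauchy integral formula
        f'(z0) = (1/(2*pi*i)) * integral of f(z)/(z - z0)^2 dz over |z - z0| = r,
        discretised by the trapezoidal rule in the angle.  Requires f analytic
        in the closed disc of radius r about z0."""
        theta = 2 * np.pi * np.arange(N) / N
        z = z0 + r * np.exp(1j * theta)
        # After parametrisation the integral reduces to an average over the circle
        return np.mean(f(z) * np.exp(-1j * theta)) / r

    # Example: f(z) = exp(z), so f'(1) should be e
    print(derivative_by_contour(np.exp, 1.0))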
Patterns of turbulence
Abstract
Plane Couette flow - the flow between two infinite parallel plates moving in opposite directions -
undergoes a discontinuous transition from laminar flow to turbulence as the Reynolds number is
increased. Due to its simplicity, this flow has long served as one of the canonical examples for understanding shear turbulence and the subcritical transition process typical of channel and pipe flows. Only recently was it discovered in very large aspect ratio experiments that this flow also exhibits remarkable pattern formation near transition. Steady, spatially periodic patterns of distinct regions of turbulent and laminar flow emerge spontaneously from uniform turbulence as the Reynolds number is decreased. The length scale of these patterns is more than an order of magnitude larger than the plate separation. It now appears that turbulent-laminar patterns are inevitable intermediate states on the route from turbulent to laminar flow in many shear flows. I will explain how we have overcome the difficulty of simulating these large scale patterns and show results from studies of three types of patterns: periodic, localized, and intermittent.
Analysis of the sparse grid combination technique and high dimensional applications in option pricing
Abstract
Sparse grids yield numerical solutions to PDEs with a
significantly reduced number of degrees of freedom. The relative
benefit increases with the dimensionality of the problem, which makes
multi-factor models for financial derivatives computationally tractable.
An outline of a convergence proof for the so-called combination
technique will be given for a finite difference discretisation of the
heat equation, for which sharp error bounds can be shown.
Numerical examples demonstrate that by an adaptive (heuristic)
choice of the subspaces European and American options with up to thirty
(and most likely many more) independent variables can be priced with
high accuracy.
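For orientation, the standard two-dimensional form of the combination technique combines solutions $u_{i,j}$ computed on anisotropic grids with mesh widths $(2^{-i}, 2^{-j})$:
\[
u_{n}^{c} \;=\; \sum_{i+j=n} u_{i,j} \;-\; \sum_{i+j=n-1} u_{i,j},
\]
so that only $O(n\,2^{n})$ unknowns are needed rather than the $O(4^{n})$ of the full grid; the talk addresses the convergence of this construction and its higher-dimensional analogues.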
Computational fluid dynamics
Abstract
The computation of flows of compressible fluids will be
discussed, exploiting the symmetric form of the equations describing
compressible flow.
Modelling and simulation issues in computational cell biology
Abstract
A cell is a wondrously complex object. In this talk I will
give an overview of some of the mathematical frameworks that are needed
in order to make progress towards understanding the complex dynamics of a
cell. The talk will consist of a directed random walk through discrete
Markov processes, stochastic differential equations, anomalous diffusion
and fractional differential equations.
Alternatives to eigenvalues - describing the behaviour of nonnormal matrices and linear operators
Generating good meshes and inverting good matrices
Abstract
An essential first step in many problems of numerical analysis and
computer graphics is to cover a region with a reasonably regular mesh.
We describe a short MATLAB code that begins with a "distance function"
to describe the region: $d(x)$ is the distance to the boundary
(with $d<0$ inside the region).
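By way of illustration (a sketch in the spirit of the description above, written in Python rather than as the speakers' MATLAB code), simple regions can be composed from signed distance functions; the min/max combinations below are only approximate distance functions, which is usually adequate for meshing:

    import numpy as np

    def d_circle(p, xc=0.0, yc=0.0, r=1.0):
        """Signed distance from points p (an n-by-2 array) to a circle:
        negative inside the region, positive outside."""
        return np.sqrt((p[:, 0] - xc) ** 2 + (p[:, 1] - yc) ** 2) - r

    def d_union(d1, d2):          # union of two regions
        return np.minimum(d1, d2)

    def d_difference(d1, d2):     # region 1 with region 2 removed
        return np.maximum(d1, -d2)

    # Hypothetical example: an annulus, described by two concentric circles
    p = np.random.rand(1000, 2) * 2.0 - 1.0
    d = d_difference(d_circle(p, r=1.0), d_circle(p, r=0.4))
    inside = p[d < 0]             # candidate mesh points lie where d < 0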
Fast and high-quality display of large relational information with an introduction to recent advances in Mathematica
Abstract
The talk will start with an introduction to recent developments in Mathematica, with emphasis on numerical computing. This will be followed by a discussion of graph drawing algorithms for the display of relational information, in particular force-directed algorithms. The talk will show that by employing a multilevel approach and an octree data structure, it is possible to achieve fast display of very large relational information without compromising quality.
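A minimal sketch of the basic force-directed idea (all-pairs repulsion plus attraction along edges); its O(n^2) cost per step is exactly what the multilevel and octree techniques mentioned above are designed to avoid on large graphs:

    import numpy as np

    def force_directed_step(pos, A, k=1.0, step=0.05):
        """One Fruchterman-Reingold style iteration: every pair of vertices
        repels (force k^2/d), adjacent vertices attract (force d^2/k).
        pos is n-by-2, A a symmetric 0/1 adjacency matrix.  Purely illustrative;
        real codes replace the all-pairs loop with octree (Barnes-Hut style)
        approximations and multilevel coarsening."""
        n = pos.shape[0]
        disp = np.zeros_like(pos)
        for i in range(n):
            d = pos[i] - pos                          # vectors from each vertex to i
            dist = np.linalg.norm(d, axis=1) + 1e-12
            disp[i] += ((k * k / dist ** 2)[:, None] * d).sum(axis=0)   # repulsion
            disp[i] -= ((A[i] * dist / k)[:, None] * d).sum(axis=0)     # attraction
        return pos + step * disp

    # Hypothetical example: lay out a 5-cycle from a random start
    A = np.zeros((5, 5))
    for i in range(5):
        A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
    pos = np.random.rand(5, 2)
    for _ in range(200):
        pos = force_directed_step(pos, A)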
Practical implementation of an inexact GMRES method
Abstract
We consider the solution of a linear system of equations using the GMRES iterative method. In some applications, performing inexact matrix-vector products in this method may be of interest, provided that a reasonable convergence of GMRES is achieved. A GMRES algorithm where the matrix-vector product is performed inexactly is termed an "inexact GMRES algorithm". An application of this idea occurs in computational electromagnetics, where the fast multipole method provides approximations of the matrix-vector product within a user-defined precision, and where these inaccurate matrix-vector products become cheaper (in terms of CPU time) as the user-defined precision is lowered. The key point is then to design a control strategy for the accuracy of the matrix-vector product so that GMRES converges well, in the sense that 1) the inexact method achieves a satisfactory limiting accuracy, and 2) it does so within a reasonable number of steps.
\\
In [1], a relaxation strategy is proposed for general systems and validated on a large set of numerical experiments. This work is based on heuristic considerations and proposes a strategy that enables convergence of the GMRES iterates $x_{k}$ to within a relative normwise backward error $\frac{\|b-Ax_{k}\|}{\|A\| \|x_{k}\| + \|b\|}$ smaller than a prescribed quantity $\eta > 0$, on a significant number of numerical experiments. Similar strategies have been applied to the solution of device simulation problems using domain decomposition [2] and to the preconditioning of a radiation diffusion problem in [5].
\\
A step toward a theoretical explanation of the observed behaviour of the inexact GMRES is proposed in [3, 4]. In this talk, we show that in spite of this considerable theoretical study, the experimental work of [1] is not fully understood yet. We give an overview of the questions that still remain open both in exact arithmetic and in floating-point arithmetic, and we provide some insights into the solution of some of them. Possible applications of this work for the preconditioned GMRES method, when the matrix-vector product is accurate but the preconditioning operation is approximate, are also investigated, based on [3].
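Schematically, relaxation strategies of this kind allow the matrix-vector product at outer step $k$ to be perturbed by roughly
\[
\|\Delta A_k\| \;\lesssim\; \min\!\left(\frac{\eta}{\|r_{k-1}\|},\, 1\right)\|A\|, \qquad r_{k-1} = b - A x_{k-1},
\]
that is, the products may be made progressively less accurate as the residual decreases; the precise form of such bounds and their justification are among the open questions examined in the talk.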
\\
\\
References
\\
[1] A. Bouras and V. Frayssé. Inexact matrix-vector products in Krylov methods for solving linear systems: a relaxation strategy. SIAM Journal on Matrix Analysis and Applications, 2004. To appear.
\\
[2] A. Bouras, V. Frayssé, and L. Giraud. A relaxation strategy for inner-outer linear solvers in domain decomposition methods. Technical Report TR/PA/00/17, CERFACS, Toulouse, France, 2000.
\\
[3] V. Simoncini and D. B. Szyld. Theory of inexact Krylov subspace methods and applications to scientific computing. SIAM Journal on Scientific Computing, 25:454–477, 2003.
\\
[4] J. van den Eshof and G. L. G. Sleijpen. Inexact Krylov subspace methods for linear systems. SIAM Journal on Matrix Analysis and Applications, February 2004. To appear.
\\
[5] J. S. Warsa, M. Benzi, T. A. Wareing, and J. E. Morel. Preconditioning a mixed discontinuous finite element method for radiation diffusion. Numerical Linear Algebra with Applications, 2004. To appear.
Discontinuous Galerkin methods for time-harmonic Maxwell's equations
Abstract
In recent years, there has been considerable interest, especially in the context of
fluid-dynamics, in nonconforming finite element methods that are based on discontinuous
piecewise polynomial approximation spaces; such approaches are referred to as discontinuous
Galerkin (DG) methods. The main advantages of these methods lie in their conservation properties, their ability to treat a wide range of problems within the same unified framework, and their great flexibility in the mesh-design. Indeed, DG methods can easily handle non-matching grids and non-uniform, even anisotropic, polynomial approximation degrees. Moreover, orthogonal bases can easily be constructed which lead to diagonal mass matrices; this is particularly advantageous in unsteady problems. Finally, in combination with block-type preconditioners, DG methods can easily be parallelized.
\\
\\
In this talk, we introduce DG discretizations of mixed field and potential-based formulations of
eddy current problems in the time-harmonic regime. For the electric field formulation, the
divergence-free constraint within non-conductive regions is imposed by means of a Lagrange
multiplier. This allows for the correct capturing of edge and corner singularities in polyhedral domains; in contrast, additional Sobolev regularity must be assumed in the DG formulation, and in its conforming counterparts, when regularization techniques are employed. In particular, we present a mixed method involving discontinuous $P^\ell-P^\ell$ elements, which includes a normal jump stabilization term, and a non-stabilized variant employing discontinuous $P^\ell-P^{\ell+1}$ elements. The first formulation delivers optimal convergence rates for the vector-valued unknowns in a suitable energy norm, while the second (non-stabilized) formulation is designed to yield optimal convergence rates in both the $L^2$--norm, as well as in a suitable energy norm. For this latter method, we also develop the {\em a posteriori} error estimation of the mixed DG approximation of the Maxwell operator. Indeed, by employing suitable Helmholtz decompositions of the error, together with the conservation properties of the underlying method, computable upper bounds on the error, measured in terms of the energy norm, are derived.
\\
\\
Numerical examples illustrating the performance of the proposed methods will be presented; here,
both conforming and non-conforming (irregular) meshes will be employed. Our theoretical and
numerical results indicate that the proposed DG methods provide promising alternatives to standard conforming schemes based on edge finite elements.
Towards an SDP-based Algorithm for the satisfiability problem
Abstract
The satisfiability (SAT) problem is a central problem in mathematical
logic, computing theory, and artificial intelligence. We consider
instances of SAT specified by a set of boolean variables and a
propositional formula in conjunctive normal form. Given such an instance,
the SAT problem asks whether there is a truth assignment to the variables
such that the formula is satisfied. It is well known that SAT is in
general NP-complete, although several important special cases can be
solved in polynomial time. Extending the work of de Klerk, Warners and van
Maaren, we present new linearly sized semidefinite programming (SDP)
relaxations arising from a recently introduced paradigm of higher
semidefinite liftings for discrete optimization problems. These
relaxations yield truth assignments satisfying the SAT instance if a
feasible matrix of sufficiently low rank is computed. The sufficient rank
values differ between relaxations and can be viewed as a measure of the
relative strength of each relaxation. The SDP relaxations also have the
ability to prove that a given SAT formula is unsatisfiable. Computational
results on hard instances from the SAT Competition 2003 show that the SDP
approach has the potential to complement existing techniques for SAT.
Exponential Brownian motion and divided differences
Abstract
We calculate an analytic value for the correlation coefficient between a geometric, or exponential, Brownian motion and its time-average, a novelty being our use of divided differences to elucidate formulae. This provides a simple approximation for the value of certain Asian options regarding them as exchange options. We also illustrate that the higher moments of the time-average can be expressed neatly as divided differences of the exponential function via the Hermite-Genocchi integral relation, as well as demonstrating that these expressions agree with those obtained by Oshanin and Yor when the drift term vanishes.
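For reference, the Hermite-Genocchi relation mentioned above expresses a divided difference as an integral over a simplex,
\[
f[x_0,x_1,\ldots,x_n] \;=\; \int_{\Delta_n} f^{(n)}\bigl(t_0x_0+t_1x_1+\cdots+t_nx_n\bigr)\,\mathrm{d}t_1\cdots\mathrm{d}t_n ,
\qquad t_0 = 1-\textstyle\sum_{i=1}^{n}t_i,
\]
where $\Delta_n=\{t_i\ge 0,\ \sum_{i=1}^{n}t_i\le 1\}$; applied to $f=\exp$, this is how the moments of the time-average take the form of divided differences of the exponential function.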
Pattern formation with a conservation law
Abstract
The formation of steady patterns in one space dimension is generically
governed, at small amplitude, by the Ginzburg-Landau equation.
But in systems with a conserved quantity, there is a large-scale neutral
mode that must be included in the asymptotic analysis for pattern
formation near onset. The usual Ginzburg-Landau equation for the amplitude
of the pattern is then coupled to an equation for the large-scale mode.
\\
These amplitude equations show that for certain parameters all regular
periodic patterns are unstable. Beyond the stability boundary, there
exist stable stationary solutions in the form of spatially modulated
patterns or localised patterns. Many more exotic localised states are
found for patterns in two dimensions.
\\
Applications of the theory include convection in a magnetic field,
providing an understanding of localised states seen in numerical
simulations.
Nonhydrodynamic modes and lattice Boltzmann equations with general equations of state
Abstract
The lattice Boltzmann equation has been used successfully to simulate
nearly incompressible flows using an isothermal equation of state, but
much less work has been done to determine stable implementations for other
equations of state. The commonly used nine velocity lattice Boltzmann
equation supports three non-hydrodynamic or "ghost" modes in addition to
the macroscopic density, momentum, and stress modes. The equilibrium value
of one non-hydrodynamic mode is not constrained by the continuum equations
at Navier-Stokes order in the Chapman-Enskog expansion. Instead, we show
that it must be chosen to eliminate a high wavenumber instability. For
general barotropic equations of state the resulting stable equilibria do
not coincide with a truncated expansion in Hermite polynomials, and need
not be positive or even sign-definite as one would expect from arguments
based on entropy extremisation. An alternative approach tries to suppress
the instability by enhancing the damping of the non-hydrodynamic modes using
a collision operator with multiple relaxation times instead of the common
single relaxation time BGK collision operator. However, the resulting
scheme fails to converge to the correct incompressible limit if the
non-hydrodynamic relaxation times are fixed in lattice units. Instead we
show that they must scale with the Mach number in the same way as the
stress relaxation time.
Parameterised approximation estimators for mixed noise distributions
Abstract
Consider approximating a set of discretely defined values $f_{1}, \ldots , f_{m}$, say, at $x=x_{1}, x_{2}, \ldots, x_{m}$, with a chosen approximating form. Given prior knowledge that noise is present and that some points might be outliers, a standard least squares approach based on the $l_{2}$ norm of the error $\epsilon$ may well provide poor estimates. We instead consider a least squares approach based on a modified measure of the form $\tilde{\epsilon} = \epsilon (1+c^{2}\epsilon^{2})^{-\frac{1}{2}}$, where $c$ is a constant to be fixed.
\\
The choice of the constant $c$ in this estimator has a significant effect on the performance of the estimator both in terms of its algorithmic convergence to a solution and its ability to cope effectively with outliers. Given a prior estimate of the likely standard deviation of the noise in the data, we wish to determine a value of $c$ such that the estimator behaves like a robust estimator when outliers are present but like a least squares estimator otherwise.
\\
We describe approaches to determining suitable values of $c$ and illustrate their effectiveness on approximation with polynomial and radial basis functions. We also describe algorithms for computing the estimates based on an iteratively weighted linear least squares scheme.
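A minimal sketch of one such iteratively weighted scheme: the weight $(1+c^{2}\epsilon^{2})^{-2}$ below is simply the standard IRLS weight $\rho'(\epsilon)/(2\epsilon)$ for the loss $\rho(\epsilon)=\tilde{\epsilon}^{2}=\epsilon^{2}/(1+c^{2}\epsilon^{2})$; the talk's algorithms and, in particular, the choice of $c$ are the real subject and are not reproduced here.

    import numpy as np

    def irls_fit(X, f, c, iters=30):
        """Fit coefficients a in f ~ X @ a by (approximately) minimising
        sum_i eps_i^2 / (1 + c^2*eps_i^2), eps = f - X @ a, via iteratively
        reweighted least squares; the weight (1 + c^2*eps^2)^(-2) down-weights
        outliers.  An illustrative sketch only, not the speakers' algorithm."""
        a = np.linalg.lstsq(X, f, rcond=None)[0]          # ordinary least squares start
        for _ in range(iters):
            eps = f - X @ a
            w = (1.0 + (c * eps) ** 2) ** -2
            sw = np.sqrt(w)
            a = np.linalg.lstsq(sw[:, None] * X, sw * f, rcond=None)[0]
        return a

    # Hypothetical example: quadratic data with one gross outlier
    x = np.linspace(0.0, 1.0, 50)
    f = 1.0 + 2.0 * x - 3.0 * x ** 2
    f[25] += 10.0
    X = np.vstack([np.ones_like(x), x, x ** 2]).T
    print(irls_fit(X, f, c=5.0))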
Structured matrix computations
Abstract
We consider matrix groups defined in terms of scalar products. Examples of interest include the groups of
- complex orthogonal,
- real, complex, and conjugate symplectic,
- real perplectic,
- real and complex pseudo-orthogonal,
- pseudo-unitary
matrices. We
- Construct a variety of transformations belonging to these groups that imitate the actions of Givens rotations, Householder reflectors, and Gauss transformations.
- Describe applications for these structured transformations, including generating random matrices in the groups.
- Show how to exploit group structure when computing the polar decomposition, the matrix sign function and the matrix square root on these matrix groups.
This talk is based on recent joint work with N. Mackey, D. S. Mackey, and N. J. Higham.
Iteration between model and experiment in studying cardiac mechano-electric feedback: from clinics to channels, and back
Abstract
The heart can be described as an electrically driven mechanical pump. This
pump could not adapt to beat-by-beat changes in circulatory demand if there
were no feedback from the mechanical environment to the electrical control
processes. Cardiac mechano-electric feedback has been studied at various
levels of functional integration, from stretch-activated ion channels,
through mechanically induced changes in cardiac cells and tissue, to
clinically relevant observations in man, where mechanical stimulation of the
heart may either disturb or reinstate cardiac rhythmicity. The seminar will
illustrate the patho-physiological relevance of cardiac mechano-electric
feedback, introduce underlying mechanisms, and show the utility of iterating
between experimental research and mathematical modelling in studying this
phenomenon.
Symmetries in semidefinite programming, and how to exploit them
Abstract
Semidefinite programming (SDP) techniques have been extremely successful
in many practical engineering design questions. In several of these
applications, the problem structure is invariant under the action of
some symmetry group, and this property is naturally inherited by the
underlying optimization. A natural question, therefore, is how to
exploit this information for faster, better conditioned, and more
reliable algorithms. To this effect, we study the associative algebra
associated with a given SDP, and show the striking advantages of a
careful use of symmetries. The results are motivated and illustrated
through applications of SDP and sum of squares techniques from networked
control theory, analysis and design of Markov chains, and quantum
information theory.
A discontinuous Galerkin method for flow and transport in porous media
Abstract
Discontinuous Galerkin methods (DG) use trial and test functions that are continuous within
elements and discontinuous at element boundaries. Although DG methods were invented
in the early 1970s, they have become very popular only recently.
\\
DG methods are very attractive for flow and transport problems in porous media since they
can be used to solve hyperbolic as well as elliptic/parabolic problems, (potentially) offer
high-order convergence combined with local mass balance and can be applied to unstructured,
non-matching grids.
\\
In this talk we present a discontinuous Galerkin method based on the non-symmetric interior
penalty formulation introduced by Wheeler and Rivi\`{e}re for an elliptic equation coupled to
a nonlinear parabolic/hyperbolic equation. The equations cover models for groundwater flow and
solute transport as well as two-phase flow in porous media.
\\
We show that the method is comparable in efficiency with the mixed finite element method for
elliptic problems with discontinuous coefficients. In the case of two-phase flow the method
can outperform standard finite volume schemes by a factor of ten for a five-spot problem and
also for problems with dominating capillary pressure.
Direct calculation of transonic aeroelastic stability through bifurcation analysis
Abstract
The standard airframe industry tool for flutter analysis is based
on linear potential predictions of the aerodynamics. Despite the
limitations of the modelling this is even true in the transonic
range. There has been a heavy research effort in the past decade to
use CFD to generate the aerodynamics for flutter simulations, to
improve the reliability of predictions and thereby reduce the risk
and cost of flight testing. The first part of the talk will describe
efforts at Glasgow to couple CFD with structural codes to produce
a time domain simulation and an example calculation will be described for
the BAE SYSTEMS Hawk aircraft.
\\
\\
A drawback with time domain simulations is that unsteady CFD is still
costly and parametric searches to determine stability through the
growth or decay of responses can quickly become impractical. This has
motivated another active research effort in developing ways of
encapsulating the CFD level aerodynamic predictions in models which
are more affordable for routine application. A number of these
approaches are being developed (e.g. POD, system identification, ...)
but none have as yet reached maturity. At Glasgow effort has been
put into developing a method based on the behaviour of the
eigenspectrum of the discrete operator Jacobian, using Hopf
Bifurcation conditions to formulate an augmented system of
steady state equations which can be used to calculate flutter speeds
directly. The talk will give the first three dimensional example
of such a calculation.
\\
\\
For background reports on these topics see
http://www.aero.gla.ac.uk/Research/CFD/projects/aeroelastics/pubs/menu…
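Schematically, writing the coupled semi-discrete system as $\dot w = R(w,\mu)$ with Jacobian $J=\partial R/\partial w$, a direct Hopf calculation of this kind augments the steady equations with the conditions for a purely imaginary eigenvalue $\mathrm{i}\omega$ and solves for the critical parameter $\mu$ (e.g. speed) directly; the formulation used at Glasgow may differ in detail:
\[
R(w,\mu)=0,\qquad J(w,\mu)\,p+\omega q=0,\qquad J(w,\mu)\,q-\omega p=0,\qquad c^{\mathsf T}p=1,\quad c^{\mathsf T}q=0,
\]
where $p+\mathrm{i}q$ is the critical eigenvector, $c$ is a fixed normalising vector, and the unknowns are $(w,p,q,\omega,\mu)$.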
Boundary concentrated FEM
Abstract
It is known for elliptic problems with smooth coefficients
that the solution is smooth in the interior of the domain;
low regularity is only possible near the boundary.
The $hp$-version of the FEM allows us to exploit this
property if we use meshes where the element size grows
proportionally to the element's distance to the boundary
and the approximation order is suitably linked to the
element size. In this way most degrees of freedom are
concentrated near the boundary.
\\
In this talk, we will discuss convergence and complexity
issues of the boundary concentrated FEM. We will show
that it is comparable to the classical boundary element
method (BEM) in that it leads to the same convergence rate
(error versus degrees of freedom). Additionally, it
generalizes the classical FEM since it does not require
explicit knowledge of the fundamental solution so that
it is also applicable to problems with (smooth) variable
coefficients.
A posteriori error estimates and adaptive finite elements for meshes with high aspect ratio: application to elliptic and parabolic problems
Abstract
Following the framework of Formaggia and Perotto (Numer.
Math. 2001 and 2003), anisotropic a posteriori error estimates have been
proposed for various elliptic and parabolic problems. The error in the
energy norm is bounded above by an error indicator involving the matrix
of the error gradient, the constant being independent of the mesh aspect
ratio. The matrix of the error gradient is approximated using the
Zienkiewicz-Zhu error estimator. Numerical experiments show that the
error indicator is sharp. An adaptive finite element algorithm which
aims at producing successive triangulations with high aspect ratio is
proposed. Numerical results will be presented on various problems such
as diffusion-convection, the Stokes problem, and dendritic growth.
Spreading fronts and fluctuations in sedimentation
Abstract
While the average settling velocity of particles in a suspension has been successfully predicted, we are still unsuccessful with the r.m.s. velocity, with theories suggesting a divergence with the size of
the container and experiments finding no such dependence. A possible resolution involves stratification originating from the spreading of the front between the clear liquid above and the suspension below. One theory describes the spreading front by a nonlinear diffusion equation
$\frac{\partial \phi}{\partial t} = D \frac{\partial }{\partial z}(\phi^{4/5}(\frac{\partial \phi}{\partial z})^{2/5})$.
\\
\\
Experiments and computer simulations find differently.
Inverse scattering by rough surfaces
Abstract
We consider the problem of recovering the position of a scattering surface
from measurements of the scattered field on a finite line above the surface.
A point source algorithm is proposed, based on earlier work by Potthast,
which reconstructs, in the first instance, the scattered field in the whole
region above the scattering surface. This information is used in a second
stage to locate the scatterer. We summarise the theoretical results that can
be obtained (error bounds on the reconstructed field as a function of the
noise level in the original measurements). For the case of a point source of
the incident field we present numerical experiments for both a steady source
(time harmonic excitation) and a pulse source typical of an antenna in
ground penetrating radar applications.
\\
This is joint work with Claire Lines (Brunel University).
Recent developments in numerical simulation of failure in metals subjected to impact loading
Abstract
The seminar will address issues related to numerical simulation
of the non-linear response of solid materials subjected to impact loading.
The kinematic and constitutive aspects of the transition from
continuum to discontinuum will be presented as utilised
within an explicit finite element development framework.
Material softening, mesh sensitivity and regularisation of
solutions will be discussed.
Jacobians and Hessians are scarcely matrices!!
Abstract
To numerical analysts and other applied mathematicians Jacobians and Hessians
are matrices, i.e. rectangular arrays of numbers or algebraic expressions.
Possibly taking account of their sparsity such arrays are frequently passed
into library routines for performing various computational tasks.
\\
\\
A central goal of an activity called automatic differentiation has been the
accumulation of all nonzero entries from elementary partial derivatives
according to some variant of the chain rule. The elementary partials arise
in the user-supplied procedure for evaluating the underlying vector- or
scalar-valued function at a given argument.
\\
\\
We observe here that in this process a certain kind of structure that we
call "Jacobian scarcity" might be lost. This loss will make the subsequent
calculation of Jacobian vector-products unnecessarily expensive.
Instead we advocate the representation of the Jacobian as a linear computational
graph of minimal complexity. Many theoretical and practical questions remain unresolved.
Conditioning in optimization and variational analysis
Abstract
Condition numbers are a central concept in numerical analysis.
They provide a natural parameter for studying the behavior of
algorithms, as well as sensitivity and geometric properties of a problem.
The condition number of a problem instance is usually a measure
of the distance to the set of ill-posed instances. For instance, the
classical Eckart and Young identity characterizes the condition
number of a square matrix as the reciprocal of its relative distance
to singularity.
\\
\\
We present concepts of conditioning for optimization problems and
for more general variational problems. We show that the Eckart and
Young identity has natural extension to much wider contexts. We also
discuss conditioning under the presence of block-structure, such as
that determined by a sparsity pattern. The latter has interesting
connections with the mu-number in robust control and with the sign-real
spectral radius.
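For reference, the Eckart and Young identity referred to above states that, for a nonsingular square matrix $A$,
\[
\min\Bigl\{\tfrac{\|E\|_2}{\|A\|_2}\;:\;A+E\ \text{singular}\Bigr\} \;=\; \frac{1}{\kappa_2(A)},\qquad \kappa_2(A)=\|A\|_2\,\|A^{-1}\|_2 ,
\]
and it is the extension of this identity to optimization and more general variational problems that is discussed in the talk.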
Multiphysics modelling in FEMLAB
Abstract
The seminar will focus on mathematical modelling of physics phenomena,
with applications in e.g. mass and heat transfer, fluid flow, and
electromagnetic wave propagation. Simultaneous solutions of several
physics phenomena described by PDEs - multiphysics - will also be
presented and discussed.
\\
\\
All models will be realised through the use of the MATLAB based finite
element package FEMLAB. FEMLAB is a multiphysics modelling environment
with built-in PDE solvers for linear, non-linear, time dependent and
eigenvalue problems. For ease-of-use, it comprises ready-to-use applications
for various physics phenomena, and tailored applications for Structural
Mechanics, Electromagnetics, and Chemical Engineering. But in addition,
FEMLAB facilitates straightforward implementation of arbitrary coupled
non-linear PDEs, which brings about a great deal of flexibility in problem
definition. Please see http://ww.uk.comsol.com for more info.
\\
\\
FEMLAB is developed by the COMSOL Group, a Swedish headquartered spin-off
from the Royal Institute of Technology (KTH) in Stockholm, with offices
around the world. Its UK office is situated in The Oxford Science Park.
Robust numerical methods for computer aided process plant design
Abstract
The process industries are one of the UK's major sectors and include
petrochemicals, pharmaceuticals, water, energy and the food industry,
amongst others. The design of a processing plant is a difficult task. This
is due to both the need to cater for multiple criteria (such as economics,
environmental and safety) and the use of highly complex nonlinear models to
describe the behaviour of individual unit operations in the process. Early
in the design stages, an engineer may wish to use automated design tools to
generate conceptual plant designs which have potentially positive attributes
with respect to the main criteria. Such automated tools typically rely on
optimization for solving large mixed integer nonlinear programming models.
\\
\\
This talk presents an overview of some of the work done in the Computer
Aided Process Engineering group at UCL. Primary emphasis will be given to
recent developments in hybrid optimization methods, including the use of
graphical interfaces based on problem specific visualization techniques to
allow the engineer to interact with embedded optimization procedures. Case
studies from petrochemical and water industries will be presented to
demonstrate the complexities involved and illustrate the potential benefits
of hybrid approaches.
Preconditioning for 3D sedimentary basin simulations
Abstract
The simulation of sedimentary basins aims at reconstructing their historical
evolution in order to provide quantitative predictions about phenomena
leading to hydrocarbon accumulations. The kernel of this simulation is the
numerical solution of a complex system of time dependent, three
dimensional partial differential equations of mixed parabolic-hyperbolic
type in highly heterogeneous media. A discretisation and linearisation of
this system leads to large ill-conditioned non-symmetric linear systems
with three unknowns per mesh element.
\\
\\
In the seminar I will look at different preconditioning approaches for
these systems and at their parallelisation. The most effective
preconditioner that we have developed so far consists of three stages: (i) a
local decoupling of the equations which (in addition) aims at
concentrating the elliptic part of the system in the "pressure block'';
(ii) an efficient preconditioning of the pressure block using AMG; (iii)
the "recoupling'' of the equations. Numerical results on real case
studies exhibit (i) a significant reduction of sequential CPU times, up
to a factor 5 with respect to the current ILU(0) preconditioner, (ii)
robustness with respect to physical and numerical parameters, and (iii) a
speedup of up to 4 on 8 processors.
Computation of highly-oscillatory problems made easy
Abstract
Rapidly oscillating problems, whether differential equations or
integrals, are ubiquitous in applications and allegedly difficult to
compute. In this talk we will endeavour to persuade the audience that
this is false: high oscillation, properly understood, is good for
computation! We describe methods for differential equations, based on
Magnus and Neumann expansions of modified systems, whose efficacy
improves in the presence of high oscillation. Likewise, we analyse
generalised Filon quadrature methods, showing that also their error
sharply decreases as the oscillation becomes more rapid.
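A minimal sketch of a Filon-type rule for $\int_a^b f(x)\,e^{\mathrm{i}\omega x}\,\mathrm{d}x$: replace $f$ by its linear interpolant at the endpoints and integrate the oscillatory moments exactly. The generalised rules of the talk use richer interpolation data and more general oscillators; this is only the simplest instance.

    import numpy as np

    def filon_linear(f, a, b, omega):
        """Approximate the integral of f(x) * exp(i*omega*x) over [a, b] by
        interpolating f linearly at the endpoints and integrating the moments
        of exp(i*omega*x) exactly.  Assumes omega != 0; the error decreases as
        omega grows."""
        iw = 1j * omega
        m0 = (np.exp(iw * b) - np.exp(iw * a)) / iw                    # integral of e^{iwx}
        m1 = (b * np.exp(iw * b) - a * np.exp(iw * a)) / iw - m0 / iw  # integral of x e^{iwx}
        slope = (f(b) - f(a)) / (b - a)
        alpha = f(a) - slope * a                                       # f(x) ~ alpha + slope*x
        return alpha * m0 + slope * m1

    # Example: f(x) = cos(x) on [0, 1] with a large frequency
    print(filon_linear(np.cos, 0.0, 1.0, omega=200.0))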
Fitting stochastic models to partially observed dynamics
Abstract
In many applications of interest, such as the conformational
dynamics of molecules, large deterministic systems can exhibit
stochastic behaviour in a relatively small number of coarse-grained
variables. This kind of dimension reduction, from a large deterministic
system to a smaller stochastic one, can be very useful in understanding
the problem. Whilst the subject of statistical mechanics provides
a wealth of explicit examples where stochastic models for coarse
variables can be found analytically, it is frequently the case
that applications of interest are not amenable to analytic
dimension reduction. It is hence of interest to pursue computational
algorithms for such dimension reduction. This talk will be devoted
to describing recent work on parameter estimation aimed at
problems arising in this context.
\\
\\
Joint work with Raz Kupferman (Jerusalem) and Petter Wiberg (Warwick)
A divergence-free element for finite element prediction of radar cross sections
Abstract
In recent times, research into scattering of electromagnetic waves by complex objects
has assumed great importance due to its relevance to radar applications, where the
main objective is to identify targeted objects. In designing stealth weapon systems
such as military aircraft, control of their radar cross section is of paramount
importance. Aircraft in combat situations are threatened by enemy missiles. One
countermeasure which is used to reduce this threat is to minimise the radar cross
section. On the other hand, there is a demand for the enhancement of the radar cross
section of civilian spacecraft. Operators of communication satellites often request
a complicated differential radar cross section in order to assist with the tracking
of the satellite. To control the radar cross section, an essential requirement is a
capability for accurate prediction of electromagnetic scattering from complex objects.
\\
\\
One difficulty which is encountered in the development of suitable numerical solution
schemes is the existence of constraints which are in excess of those needed for a unique
solution. Rather than attempt to include the constraint in the equation set, the novel
approach which is presented here involves the use of the finite element method and the
construction of a specialised element in which the relevant solution variables are
appropriately constrained by the nature of their interpolation functions. For many
years, such an idea was claimed to be impossible. While the idea is not without its
difficulties, its advantages far outweigh its disadvantages. The presenter has
successfully developed such an element for primitive variable solutions to viscous
incompressible flows and wishes to extend the concept to electromagnetic scattering
problems.
\\
\\
Dr Mack has first degrees in mathematics and aeronautical engineering, plus a Masters
and a Doctorate, both in computational fluid dynamics. He has some thirty years
experience in this latter field. He pioneered the development of the innovative
solenoidal approach for the finite element solution of viscous incompressible flows.
At the time, such a radical idea was claimed in the literature to be impossible.
Much of this early research was undertaken during a six month sabbatical with the
Numerical Analysis Group at the Oxford University Computing Laboratory. Dr Mack has
since received funding from British Aerospace and the United States Department of
Defense to continue this research.
FILTRANE, a filter method for the nonlinear feasibility problem
Abstract
A new filter method will be presented that attempts to find a feasible
point for sets of nonlinear equalities and inequalities. The
method is intended to work for problems where the number of variables
or the number of (in)equalities is large, or both. No assumption is
made about convexity. The technique used is that of maintaining a list
of multidimensional "filter entries", a recent development of ideas
introduced by Fletcher and Leyffer. The method will be described, as
well as large scale numerical experiments with the corresponding
Fortran 90 module, FILTRANE.
Pascal Matrices (and Mesh Generation!)
Abstract
In addition to the announced topic of Pascal Matrices (abstract below) we will speak briefly about more recent work by Per-Olof Persson on generating simplicial meshes on regions defined by a function that gives the distance from the boundary. Our first goal was a short MATLAB code and we just submitted "A Simple Mesh Generator in MATLAB" to SIAM.
This is joint work with Alan Edelman at MIT and a little bit with Pascal. They had all the ideas.
Put the famous Pascal triangle into a matrix. It could go into a lower triangular L or its transpose L' or a symmetric matrix S:
    L  = [ 1 0 0 0 ]     L' = [ 1 1 1 1 ]     S = [ 1 1  1  1 ]
         [ 1 1 0 0 ]          [ 0 1 2 3 ]         [ 1 2  3  4 ]
         [ 1 2 1 0 ]          [ 0 0 1 3 ]         [ 1 3  6 10 ]
         [ 1 3 3 1 ]          [ 0 0 0 1 ]         [ 1 4 10 20 ]
These binomial numbers come from a recursion, or from the formula for i choose j, or functionally from taking powers of (1 + x).
The amazing thing is that L times L' equals S. (OK for 4 by 4) It follows that S has determinant 1. The matrices have other unexpected properties too, that give beautiful examples in teaching linear algebra. The proof of L L' = S comes 3 ways, I don't know which you will prefer:
1. By induction using the recursion formula for the matrix entries.
2. By an identity for the coefficients i+j choose j in S.
3. By applying both sides to the column vector [ 1 x x^2 x^3 ... ]'.
The third way also gives a proof that S^3 = -I but we doubt that result.
The rows of the "hypercube matrix" L^2 count corners and edges and faces and ... in n-dimensional cubes.
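A quick numerical check of the identity L L' = S (and of det S = 1), offered in the spirit of the abstract rather than as part of the talk:

    import numpy as np
    from math import comb

    n = 6
    # Lower triangular Pascal matrix: L[i, j] = i choose j
    L = np.array([[comb(i, j) for j in range(n)] for i in range(n)])
    # Symmetric Pascal matrix: S[i, j] = (i + j) choose j
    S = np.array([[comb(i + j, j) for j in range(n)] for i in range(n)])

    print(np.array_equal(L @ L.T, S))                # True: L L' = S
    print(round(np.linalg.det(S.astype(float))))     # 1, since det L = 1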
Clustering, reordering and random graphs
Abstract
From the point of view of a numerical analyst, I will describe some algorithms for:
- clustering data points based on pairwise similarity,
- reordering a sparse matrix to reduce envelope, two-sum or bandwidth,
- reordering nodes in a range-dependent random graph to reflect the range-dependency,
and point out some connections between seemingly disparate solution techniques. These datamining problems arise across a range of disciplines. I will mention a particularly new and important application from bioinformatics concerning the analysis of gene or protein interaction data.
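As one example of the kind of connection alluded to above, a single spectral heuristic both clusters and reorders: sort the vertices by the entries of the Fiedler vector, the eigenvector of the second-smallest eigenvalue of the graph Laplacian. A sketch, assuming a small symmetric 0/1 adjacency matrix (the talk's algorithms and applications go well beyond this):

    import numpy as np

    def fiedler_ordering(A):
        """Return a vertex ordering from the Fiedler vector of the graph
        Laplacian L = D - A.  Sorting by this vector is a classical spectral
        heuristic for bandwidth/envelope reduction and two-way clustering;
        it is offered here only as an illustration of the linked techniques."""
        L = np.diag(A.sum(axis=1)) - A
        eigvals, eigvecs = np.linalg.eigh(L)
        fiedler = eigvecs[:, 1]          # eigenvector of the 2nd smallest eigenvalue
        return np.argsort(fiedler)

    # Example: a path graph given in scrambled order is recovered (or reversed)
    A = np.zeros((6, 6))
    for i, j in [(0, 3), (3, 1), (1, 4), (4, 2), (2, 5)]:   # the path 0-3-1-4-2-5
        A[i, j] = A[j, i] = 1.0
    print(fiedler_ordering(A))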
Immersed interface methods for fluid dynamics problems
Abstract
Immersed interface methods have been developed for a variety of
differential equations on domains containing interfaces or irregular
boundaries. The goal is to use a uniform Cartesian grid (or other fixed
grid on a simple domain) and to allow other boundaries or interfaces to
cut through this grid. Special finite difference formulas are developed
at grid points near an interface that incorporate the appropriate jump
conditions across the interface so that uniform second-order accuracy
(or higher) can be obtained. For fluid flow problems with an immersed
deformable elastic membrane, the jump conditions result from a balance
between the singular force imposed by the membrane, inertial forces if
the membrane has mass, and the jump in pressure across the membrane.
A second-order accurate method of this type for Stokes flow was developed
with Zhilin Li and more recently extended to the full incompressible
Navier-Stokes equations in work with Long Lee.
Inverse eigenvalue problems for quadratic matrix polynomials
Abstract
Feedback design for a second order control system leads to an
eigenstructure assignment problem for a quadratic matrix polynomial. It is
desirable that the feedback controller not only assigns specified
eigenvalues to the second order closed loop system, but also that the
system is robust, or insensitive to perturbations. We derive here new
sensitivity measures, or condition numbers, for the eigenvalues of the
quadratic matrix polynomial and define a measure of robustness of the
corresponding system. We then show that the robustness of the quadratic
inverse eigenvalue problem can be achieved by solving a generalized linear
eigenvalue assignment problem subject to structured perturbations.
Numerically reliable methods for solving the structured generalized linear
problem are developed that take advantage of the special properties of the
system in order to minimize the computational work required.
Modelling bilevel games in electricity
Abstract
Electricity markets facilitate pricing and delivery of wholesale power.
Generators submit bids to an Independent System Operator (ISO) to indicate
how much power they can produce depending on price. The ISO takes these bids
with demand forecasts and minimizes the total cost of power production
subject to feasibility of distribution in the electrical network.
\\
\\
Each generator can optimise its bid using a bilevel program or
mathematical program with equilibrium (or complementarity) constraints, by
taking the ISO's problem, which contains all generators' bid information, at
the lower level. This leads immediately to a game between generators, where
a Nash equilibrium - at which each generator's bid maximises its profit
provided that none of the other generators changes its bid - is sought.
\\
\\
In particular, we examine the idealised model of Berry et al. (Utility
Policy 8, 1999), which gives a bilevel game that can be modelled as an
"equilibrium problem with complementarity constraints" or EPCC.
Unfortunately, like bilevel games, EPCCs on networks may not have Nash
equilibria in the (common) case when one or more of links of the network is
saturated (at maximum capacity). Nevertheless we explore some theory and
algorithms for this problem, and discuss the economic implications of
numerical examples where equilibria are found for small electricity
networks.