Rayleigh processes, real trees, and root growth with re-grafting
A divergence-free element for finite element prediction of radar cross sections
Abstract
In recent times, research into scattering of electromagnetic waves by complex objects
has assumed great importance due to its relevance to radar applications, where the
main objective is to identify targeted objects. In designing stealth weapon systems
such as military aircraft, control of their radar cross section is of paramount
importance. Aircraft in combat situations are threatened by enemy missiles. One
countermeasure which is used to reduce this threat is to minimise the radar cross
section. On the other hand, there is a demand for the enhancement of the radar cross
section of civilian spacecraft. Operators of communication satellites often request
a complicated differential radar cross section in order to assist with the tracking
of the satellite. To control the radar cross section, an essential requirement is a
capability for accurate prediction of electromagnetic scattering from complex objects.
\\
\\
One difficulty which is encountered in the development of suitable numerical solution
schemes is the existence of constraints which are in excess of those needed for a unique
solution. Rather than attempt to include the constraint in the equation set, the novel
approach which is presented here involves the use of the finite element method and the
construction of a specialised element in which the relevant solution variables are
appropriately constrained by the nature of their interpolation functions. For many
years, such an idea was claimed to be impossible. While the idea is not without its
difficulties, its advantages far outweigh its disadvantages. The presenter has
successfully developed such an element for primitive variable solutions to viscous
incompressible flows and wishes to extend the concept to electromagnetic scattering
problems.
\\
\\
Dr Mack has first degrees in mathematics and aeronautical engineering, plus a Master's
and a Doctorate, both in computational fluid dynamics. He has some thirty years'
experience in this latter field. He pioneered the development of the innovative
solenoidal approach for the finite element solution of viscous incompressible flows.
At the time, such a radical idea was claimed in the literature to be impossible.
Much of this early research was undertaken during a six month sabbatical with the
Numerical Analysis Group at the Oxford University Computing Laboratory. Dr Mack has
since received funding from British Aerospace and the United States Department of
Defense to continue this research.
FILTRANE, a filter method for the nonlinear feasibility problem
Abstract
A new filter method will be presented that attempts to find a feasible
point for sets of nonlinear equalities and inequalities. The
method is intended to work for problems where the number of variables
or the number of (in)equalities is large, or both. No assumption is
made about convexity. The technique used is that of maintaining a list
of multidimensional "filter entries", a recent development of ideas
introduced by Fletcher and Leyffer. The method will be described, as
well as large scale numerical experiments with the corresponding
Fortran 90 module, FILTRANE.
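As a rough illustration of the filter idea (the grouping of constraint violations into components, the class name and the margin rule below are assumptions made for this sketch, not FILTRANE's actual data structures), a multidimensional filter keeps a list of violation vectors and accepts a new point only if no stored entry dominates it:

```python
import numpy as np

class Filter:
    """Toy multidimensional filter: store vectors of constraint-violation
    measures and accept a new point only if no stored entry dominates it."""

    def __init__(self, margin=1e-3):
        self.entries = []      # accepted violation vectors
        self.margin = margin   # small envelope, in the spirit of Fletcher-Leyffer filters

    def acceptable(self, theta):
        theta = np.asarray(theta, dtype=float)
        # dominated means: some stored entry is (sufficiently) better in every component
        return not any(np.all(e <= theta * (1.0 - self.margin)) for e in self.entries)

    def add(self, theta):
        theta = np.asarray(theta, dtype=float)
        # discard stored entries that the new point dominates, then store it
        self.entries = [e for e in self.entries if not np.all(theta <= e)]
        self.entries.append(theta)

f = Filter()
f.add([1.0, 2.0])                    # violation vector for two groups of (in)equalities
print(f.acceptable([0.5, 3.0]))      # True: better in the first component
print(f.acceptable([1.5, 2.5]))      # False: dominated by the stored entry
```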
Pascal Matrices (and Mesh Generation!)
Abstract
In addition to the announced topic of Pascal Matrices (abstract below) we will speak briefly about more recent work by Per-Olof Persson on generating simplicial meshes on regions defined by a function that gives the distance from the boundary. Our first goal was a short MATLAB code and we just submitted "A Simple Mesh Generator in MATLAB" to SIAM.
This is joint work with Alan Edelman at MIT and a little bit with Pascal. They had all the ideas.
Put the famous Pascal triangle into a matrix. It could go into a lower triangular L or its transpose L' or a symmetric matrix S:
    [ 1 0 0 0 ]        [ 1 1 1 1 ]        [ 1  1  1  1 ]
L = [ 1 1 0 0 ]   L' = [ 0 1 2 3 ]   S =  [ 1  2  3  4 ]
    [ 1 2 1 0 ]        [ 0 0 1 3 ]        [ 1  3  6 10 ]
    [ 1 3 3 1 ]        [ 0 0 0 1 ]        [ 1  4 10 20 ]
These binomial numbers come from a recursion, or from the formula for i choose j, or functionally from taking powers of (1 + x).
The amazing thing is that L times L' equals S. (OK for 4 by 4) It follows that S has determinant 1. The matrices have other unexpected properties too, that give beautiful examples in teaching linear algebra. The proof of L L' = S comes 3 ways, I don't know which you will prefer:
1. By induction using the recursion formula for the matrix entries.
2. By an identity for the coefficients i+j choose j in S.
3. By applying both sides to the column vector [ 1 x x^2 x^3 ... ]'.
The third way also gives a proof that S^3 = -I but we doubt that result.
The rows of the "hypercube matrix" L^2 count corners and edges and faces and ... in n dimensional cubes.
Clustering, reordering and random graphs
Abstract
From the point of view of a numerical analyst, I will describe some algorithms for:
- clustering data points based on pairwise similarity,
- reordering a sparse matrix to reduce envelope, two-sum or bandwidth,
- reordering nodes in a range-dependent random graph to reflect the range-dependency,
and point out some connections between seemingly disparate solution techniques. These datamining problems arise across a range of disciplines. I will mention a particularly new and important application from bioinformatics concerning the analysis of gene or protein interaction data.
Immersed interface methods for fluid dynamics problems
Abstract
Immersed interface methods have been developed for a variety of
differential equations on domains containing interfaces or irregular
boundaries. The goal is to use a uniform Cartesian grid (or other fixed
grid on simple domain) and to allow other boundaries or interfaces to
cut through this grid. Special finite difference formulas are developed
at grid points near an interface that incorporate the appropriate jump
conditions across the interface so that uniform second-order accuracy
(or higher) can be obtained. For fluid flow problems with an immersed
deformable elastic membrane, the jump conditions result from a balance
between the singular force imposed by the membrane, inertial forces if
the membrane has mass, and the jump in pressure across the membrane.
A second-order accurate method of this type for Stokes flow was developed
with Zhilin Li and more recently extended to the full incompressible
Navier-Stokes equations in work with Long Lee.
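A minimal one-dimensional sketch of the jump-corrected stencil idea (a toy problem chosen purely for illustration, not the Stokes or Navier-Stokes schemes described above): solve u'' = 0 with prescribed jumps [u] = a and [u'] = b at an interface that does not lie on the grid, correcting the standard three-point stencil at the two neighbouring grid points.

```python
import numpy as np

# u'' = 0 on (0,1) with jumps [u] = a, [u'] = b at x = alpha (off the grid).
# Exact solution: u = x for x < alpha, u = x + a + b*(x - alpha) otherwise.
N, alpha, a, b = 40, 0.31, 0.5, 2.0
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
uL, uR = 0.0, 1.0 + a + b * (1.0 - alpha)          # boundary values of the exact solution

A = np.zeros((N - 1, N - 1))
rhs = np.zeros(N - 1)
for i in range(1, N):                              # unknowns u_1 .. u_{N-1}
    row = i - 1
    A[row, row] = -2.0 / h**2
    if i > 1:     A[row, row - 1] = 1.0 / h**2
    if i < N - 1: A[row, row + 1] = 1.0 / h**2
rhs[0]  -= uL / h**2
rhs[-1] -= uR / h**2

j = int(np.floor(alpha / h))                       # interface lies in (x_j, x_{j+1})
# jump corrections to the right-hand side (the second derivative has no jump here)
rhs[j - 1] += (a + (x[j + 1] - alpha) * b) / h**2  # stencil centred at x_j
rhs[j]     -= (a + (x[j]     - alpha) * b) / h**2  # stencil centred at x_{j+1}

u = np.linalg.solve(A, rhs)
exact = np.where(x < alpha, x, x + a + b * (x - alpha))[1:-1]
print(np.max(np.abs(u - exact)))                   # exact up to roundoff for this toy problem
```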
Inverse eigenvalue problems for quadratic matrix polynomials
Abstract
Feedback design for a second order control system leads to an
eigenstructure assignment problem for a quadratic matrix polynomial. It is
desirable that the feedback controller not only assigns specified
eigenvalues to the second order closed loop system, but also that the
system is robust, or insensitive to perturbations. We derive here new
sensitivity measures, or condition numbers, for the eigenvalues of the
quadratic matrix polynomial and define a measure of robustness of the
corresponding system. We then show that the robustness of the quadratic
inverse eigenvalue problem can be achieved by solving a generalized linear
eigenvalue assignment problem subject to structured perturbations.
Numerically reliable methods for solving the structured generalized linear
problem are developed that take advantage of the special properties of the
system in order to minimize the computational work required.
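For orientation, the forward problem referred to here is the quadratic eigenproblem (lambda^2 M + lambda C + K) x = 0 arising from a second order system. The snippet below is illustrative only, and is not the authors' eigenstructure assignment algorithm: it solves a small random instance via the standard companion linearization.

```python
import numpy as np

# (lambda^2 M + lambda C + K) x = 0 via the linearization
# [[0, I], [-K, -C]] z = lambda [[I, 0], [0, M]] z with z = [x; lambda*x].
rng = np.random.default_rng(0)
n = 3
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
eigvals = np.linalg.eigvals(np.linalg.solve(B, A))     # B is nonsingular here since M = I

# each eigenvalue should make the quadratic matrix polynomial singular
print([abs(np.linalg.det(lam**2 * M + lam * C + K)) < 1e-6 for lam in eigvals])
```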
Modelling bilevel games in electricity
Abstract
Electricity markets facilitate pricing and delivery of wholesale power.
Generators submit bids to an Independent System Operator (ISO) to indicate
how much power they can produce depending on price. The ISO takes these bids
with demand forecasts and minimizes the total cost of power production
subject to feasibility of distribution in the electrical network.
\\
\\
Each generator can optimise its bid using a bilevel program or
mathematical program with equilibrium (or complementarity) constraints, by
taking the ISO's problem, which contains all generators' bid information, at
the lower level. This leads immediately to a game between generators, where
a Nash equilibrium - at which each generator's bid maximises its profit
provided that none of the other generators changes its bid - is sought.
\\
\\
In particular, we examine the idealised model of Berry et al (Utility
Policy 8, 1999), which gives a bilevel game that can be modelled as an
"equilibrium problem with complementarity constraints" or EPCC.
Unfortunately, like bilevel games, EPCCs on networks may not have Nash
equilibria in the (common) case when one or more of the links of the network is
saturated (at maximum capacity). Nevertheless we explore some theory and
algorithms for this problem, and discuss the economic implications of
numerical examples where equilibria are found for small electricity
networks.
Combinatorial structures in nonlinear programming
Abstract
Traditional optimisation theory and methods based on the
Lagrangian function do not apply to objective or constraint functions
which are defined by means of a combinatorial selection structure. Such
selection structures can be explicit, for example in the case of "min",
"max" or "if" statements in function evaluations, or implicit as in the
case of inverse optimisation problems where the combinatorial structure is
induced by the possible selections of active constraints. The resulting
optimisation problems are typically neither convex nor smooth and do not
fit into the standard framework of nonlinear optimisation. Users typically
treat these problems either through a mixed-integer reformulation, which
drastically reduces the size of tractable problems, or by employing
nonsmooth optimisation methods, such as bundle methods, which are
typically based on convex models and therefore only allow for weak
convergence results. In this talk we argue that the classical Lagrangian
theory and SQP methodology can be extended to a fairly general class of
nonlinear programs with combinatorial constraints. The paper is available.
Exact real arithmetic
Abstract
Is it possible to construct a computational model of the real numbers in which the sign
of every computed result is correctly determined? The answer is yes, both in theory and in
practice. The resulting viewpoint contrasts strongly with the traditional floating
point model. I will review the theoretical background and software design issues,
discuss previous attempts at implementation and finally demonstrate my own Python and
C++ codes.
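A toy version of the "refine until the sign is certain" idea, using Python's decimal module (this is only an illustration of the viewpoint, not the speaker's software, and it fails, as it must, when the exact value is zero):

```python
from decimal import Decimal, getcontext

def sqrt2_enclosure(prec):
    """Enclosure of sqrt(2): decimal returns a correctly rounded value,
    so widening it by one unit in the last place gives safe bounds."""
    getcontext().prec = prec
    s = Decimal(2).sqrt()
    ulp = Decimal(1).scaleb(s.adjusted() - prec + 1)
    return s - ulp, s + ulp

def sign_sqrt2_minus(c, max_prec=200):
    """Sign of sqrt(2) - c, refining the enclosure until zero is excluded."""
    c, prec = Decimal(c), 10
    while prec <= max_prec:
        lo, hi = sqrt2_enclosure(prec)
        if lo - c > 0:
            return +1
        if hi - c < 0:
            return -1
        prec *= 2            # not yet decidable: double the working precision
    return None

# positive, although the two numbers agree to eight decimal places
print(sign_sqrt2_minus("1.41421356"))
```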
Generalised finite and infinite elements for flow acoustics
Improving spectral methods with optimized rational interpolation
Abstract
The pseudospectral method for solving boundary value problems on the interval
consists in replacing the solution by an interpolating polynomial in Lagrangian
form between well-chosen points and collocating at those same points.
\\
\\
Due to its globality, the method cannot handle steep gradients well (Markov's inequality).
We will present and discuss two means of improving upon this: the attachment of poles to
the ansatz polynomial, on one hand, and conformal point shifts on the other hand, both
optimally adapted to the problem to be solved.
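As a baseline, the plain polynomial collocation method described in the first paragraph can be stated in a few lines. The standard Chebyshev differentiation matrix and the smooth model problem below are used purely for illustration; the pole attachment and conformal point shifts of the talk are refinements of this.

```python
import numpy as np

def cheb(N):
    """Standard Chebyshev differentiation matrix and points on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

# collocation for u'' = exp(4x) on (-1, 1) with u(-1) = u(1) = 0
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                   # boundary rows/columns removed (u = 0 there)
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2, np.exp(4.0 * x[1:-1]))

exact = (np.exp(4.0 * x) - x * np.sinh(4.0) - np.cosh(4.0)) / 16.0
print(np.max(np.abs(u - exact)))           # spectrally small for this smooth solution
```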
Numerical issues arising in dynamic optimisation of process modelling applications
Abstract
Dynamic optimisation is a tool that enables the process industries to
compute optimal control strategies for important chemical processes.
Aspen Dynamics™ is a well-established commercial engineering software
package containing a dynamic optimisation tool. Its intuitive graphical
user interface and library of robust dynamic models enables engineers to
quickly and easily define a dynamic optimisation problem including
objectives, control vector parameterisations and constraints. However,
this is only one part of the story. The combination of dynamics and
non-linear optimisation can create a problem that is very difficult
to solve for a number of reasons, including non-linearities, poor
initial guesses, discontinuities, and the accuracy and speed of dynamic
integration. In this talk I will begin with an introduction to process
modelling and outline the algorithms and techniques used in dynamic
optimisation. I will move on to discuss the numerical issues that can
give us so much trouble in practice and outline some solutions we have
created to overcome some of them.
Eigenmodes of polygonal drums
Abstract
Many questions of interest to both mathematicians and physicists relate
to the behavior of eigenvalues and eigenmodes of the Laplace operator
on a polygon. Algorithmic improvements have revived the old "method
of fundamental solutions" associated with Fox, Henrici and Moler; is it
going to end up competitive with the state-of-the-art method of Descloux,
Tolley and Driscoll? This talk will outline the numerical issues but
give equal attention to applications including "can you hear the shape
of a drum?", localization of eigenmodes, eigenvalue avoidance, and the
design of drums that play chords.
\\
This is very much work in progress -- with graduate student Timo Betcke.
Convergence analysis of linear and adjoint approximations with shocks
Geometry, PDEs, fluid dynamics, and image processing
Abstract
Image processing is an area with many important applications, as well as challenging problems for mathematicians. In particular, Fourier/wavelet analysis and stochastic/statistical methods have had major impact in this area. Recently, there has been increased interest in a new and complementary approach, using partial differential equations (PDEs) and differential-geometric models. It offers a more systematic treatment of geometric features of images, such as shapes, contours and curvatures, etc., as well as allowing the wealth of techniques developed for PDEs and Computational Fluid Dynamics (CFD) to be brought to bear on image processing tasks.
I'll use two examples from my recent work to illustrate this synergy:
1. A unified image restoration model using Total Variation (TV) which can be used to model denoising, deblurring, as well as image inpainting (e.g. restoring old scratched photos). The TV idea can be traced to shock capturing methods in CFD and was first used in image processing by Rudin, Osher and Fatemi.
2. An "active contour" model which uses a variational level set method for object detection in scalar and vector-valued images. It can detect objects not necessarily defined by sharp edges, as well as objects undetectable in each channel of a vector-valued image or in the combined intensity. The contour can go through topological changes, and the model is robust to noise. The level set method was originally developed by Osher and Sethian for tracking interfaces in CFD.
(The above are joint works with Jackie Shen at the Univ. of Minnesota and Luminita Vese in the Math Dept at UCLA.)
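As a one-dimensional caricature of the TV restoration model in item 1 (a smoothed objective minimised with a general-purpose optimiser; the signal, noise level and parameters are arbitrary, and this is not the authors' numerical scheme), note how the edge survives the denoising:

```python
import numpy as np
from scipy.optimize import minimize

def tv_objective_and_grad(u, f, lam, eps=1e-4):
    """Smoothed ROF-type objective 0.5*||u - f||^2 + lam * sum sqrt(u'^2 + eps)."""
    du = np.diff(u)
    phi = np.sqrt(du**2 + eps)
    w = du / phi
    grad_tv = np.concatenate(([-w[0]], -np.diff(w), [w[-1]]))
    return 0.5 * np.sum((u - f)**2) + lam * np.sum(phi), (u - f) + lam * grad_tv

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
clean = np.where(x < 0.5, 0.0, 1.0)                  # a sharp edge
noisy = clean + 0.1 * rng.standard_normal(x.size)

res = minimize(tv_objective_and_grad, noisy, args=(noisy, 0.5), jac=True, method="L-BFGS-B")
print("noisy error   :", np.abs(noisy - clean).mean())
print("denoised error:", np.abs(res.x - clean).mean())   # noticeably smaller, edge preserved
```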
Computing solutions of Laplace's equation by conformal mapping
Special Alan Curtis event
Abstract
- 2.00 pm Professor Iain Duff (RAL) Opening remarks
- 2.15 pm Professor M J D Powell (University of Cambridge)
- Some developments of work with Alan on cubic splines
- 3.00 pm Professor Kevin Burrage (University of Queensland)
- Stochastic models and simulations for chemically reacting systems
- 3.30 pm Tea/Coffee
- 4.00 pm Professor John Reid (RAL)
- Sparse matrix research at Harwell and the Rutherford Appleton Laboratory
- 4.30 pm Dr Ian Jones (AEA PLC)
- Computational fluid dynamics and the role of stiff solvers
- 5.00 pm Dr Lawrence Daniels (Hyprotech UK Ltd)
- Current work with Alan on ODE solvers for HSL
On the convergence of interior point methods for linear programming
Abstract
Long-step primal-dual path-following algorithms constitute the
framework of practical interior point methods for
solving linear programming problems. We consider
such an algorithm and a second order variant of it.
We address the problem of the convergence of
the sequences of iterates generated by the two algorithms
to the analytic centre of the optimal primal-dual set.
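For reference (standard notation, not specific to the algorithms analysed in the talk), the central path for the linear program $\min\{c^T x : Ax = b,\ x \ge 0\}$ is the set of solutions of $Ax = b$, $A^T y + s = c$, $x, s > 0$ and $x_i s_i = \mu$ for all $i$, parameterised by $\mu > 0$; path-following methods take damped Newton steps on these equations while driving $\mu$ towards zero, and the question above is whether the resulting iterates themselves converge, and to which point of the optimal set.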
Spectral effects with quaternions
Abstract
Several real Lie and Jordan algebras, along with their associated
automorphism groups, can be elegantly expressed in the quaternion tensor
algebra. The resulting insight into structured matrices leads to a class
of simple Jacobi algorithms for the corresponding $n \times n$ structured
eigenproblems. These algorithms have many desirable properties, including
parallelizability, ease of implementation, and strong stability.
Computation of periodic orbits for the Navier-Stokes equations
Abstract
A method for computing periodic orbits for the Navier-Stokes
equations will be presented. The method uses a finite-element Galerkin
discretisation for the spatial part of the problem and a spectral
Galerkin method for the temporal part of the problem. The method will
be illustrated by calculations of the periodic flow behind a circular
cylinder in a channel. The problem has a simple reflectional symmetry
and it will be explained how this can be exploited to reduce the cost
of the computations.
On the solution of moving boundary value problems using adaptive moving meshes
Superlinear convergence of conjugate gradients
Abstract
The convergence of Krylov subspace methods like conjugate gradients
depends on the eigenvalues of the underlying matrix. In many cases
the exact location of the eigenvalues is unknown, but one has some
information about the distribution of eigenvalues in an asymptotic
sense. This could be the case for linear systems arising from a
discretization of a PDE. The asymptotic behavior then takes place
when the meshsize tends to zero.
\\
\\
We discuss two possible approaches to study the convergence of
conjugate gradients based on such information.
The first approach is based on a straightforward idea to estimate
the condition number. This method is illustrated by means of a
comparison of preconditioning techniques.
The second approach takes into account the full asymptotic
spectrum. It gives a bound on the asymptotic convergence factor
which explains the superlinear convergence observed in many situations.
This method is mathematically more involved since it deals with
potential theory. I will explain the basic ideas.
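A small illustration of the two viewpoints (the diagonal test matrix is an assumption made for this sketch): the condition-number bound $2\big((\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)\big)^k$ predicts a fixed linear rate, while the observed error ratios keep improving as CG resolves the clustered spectrum.

```python
import numpy as np

# Diagonal test matrix: eigenvalues cluster at 1 with a tail of outliers,
# a caricature of the kind of asymptotic spectral information discussed above.
n = 400
eigs = 1.0 + 1.0 / np.arange(1, n + 1)**2
rng = np.random.default_rng(0)
b = rng.standard_normal(n)
x_exact = b / eigs

def a_norm_err(x):
    e = x - x_exact
    return np.sqrt(np.sum(eigs * e * e))

kappa = eigs.max() / eigs.min()
rho = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)

x = np.zeros(n); r = b.copy(); p = r.copy(); rr = r @ r   # plain CG for diag(eigs) x = b
e0 = a_norm_err(x)
print(" k   ||e_k||_A / ||e_0||_A    condition-number bound")
for k in range(1, 13):
    Ap = eigs * p
    alpha = rr / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rr, rr_old = r @ r, rr
    p = r + (rr / rr_old) * p
    print(f"{k:2d}   {a_norm_err(x) / e0:18.3e}    {2.0 * rho**k:18.3e}")
# the observed ratios shrink faster and faster: superlinear convergence
```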
Sobolev index estimation for hp-adaptive finite element methods
Abstract
We develop an algorithm for estimating the local Sobolev regularity index
of a given function by monitoring the decay rate of its Legendre expansion
coefficients. On the basis of these local regularities, we design and
implement an hp-adaptive finite element method based on employing
discontinuous piecewise polynomials, for the approximation of nonlinear
systems of hyperbolic conservation laws. The performance of the proposed
adaptive strategy is demonstrated numerically.
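The monitoring idea can be mimicked in a few lines (the two test functions and the least-squares fit below are purely illustrative; the precise conversion of a decay rate into a Sobolev index is what the estimator itself specifies):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_coeffs(f, nmax, quad_pts=1000):
    """Coefficients a_n = (2n+1)/2 * int_{-1}^{1} f(x) P_n(x) dx (Gauss-Legendre quadrature)."""
    xg, wg = leg.leggauss(quad_pts)
    fx = f(xg)
    return np.array([(2*n + 1) / 2.0 * np.sum(wg * fx * leg.legval(xg, [0.0]*n + [1.0]))
                     for n in range(nmax + 1)])

modes = np.arange(2, 17, 2)                     # even modes (both test functions are even)
for name, f in [("smooth cosh(x)", np.cosh), ("kinked |x|    ", np.abs)]:
    a = np.abs(legendre_coeffs(f, 16)[modes])
    slope = np.polyfit(np.log(modes), np.log(a + 1e-300), 1)[0]
    print(f"{name}: |a_n| drops from {a[0]:.1e} to {a[-1]:.1e}, log-log slope ~ {slope:.1f}")
# a steep, super-algebraic drop suggests high local regularity (favouring p-refinement),
# while a mild algebraic slope suggests low regularity (favouring h-refinement)
```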
Recent results on accuracy and stability of numerical algorithms
Abstract
The study of the finite precision behaviour of numerical algorithms dates back at least as far as Turing and Wilkinson in the 1940s. At the start of the 21st century, this area of research is still very active.
We focus on some topics of current interest, describing recent developments and trends and pointing out future research directions. The talk will be accessible to those who are not specialists in numerical analysis.
Specific topics intended to be addressed include
- Floating point arithmetic: correctly rounded elementary functions, and the fused multiply-add operation.
- The use of extra precision for key parts of a computation: iterative refinement in fixed and mixed precision (see the sketch after this list).
- Gaussian elimination with rook pivoting and new error bounds for Gaussian elimination.
- Automatic error analysis.
- Application and analysis of hyperbolic transformations.
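Picking up the second bullet, a minimal sketch of mixed-precision iterative refinement (an artificial well-conditioned test matrix is assumed, and the single-precision system is re-solved rather than reusing one LU factorisation, purely to keep the sketch short):

```python
import numpy as np

# Solve Ax = b: factorise/solve in single precision, but compute residuals
# and accumulate corrections in double precision.
rng = np.random.default_rng(0)
n = 300
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)                 # symmetric positive definite, modest condition number
x_true = rng.standard_normal(n)
b = A @ x_true

A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
for it in range(3):
    r = b - A @ x                           # residual in double precision
    d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += d
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"refinement step {it + 1}: relative error {err:.1e}")
# the error drops from single-precision level (~1e-7) towards double-precision level
```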
Real symmetric matrices with multiple eigenvalues
Abstract
We describe "avoidance of crossing" and its explanation by von
Neumann and Wigner. We show Lax's criterion for degeneracy and then
discover matrices whose determinants give the discriminant of the
given matrix. This yields a simple proof of the bound given by
Ilyushechkin on the number of terms in the expansion of the discriminant
as a sum of squares. We discuss the 3 x 3 case in detail.
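The $2 \times 2$ case already shows the structure (a standard calculation, included here only for orientation): for $A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ the discriminant is $(\lambda_1 - \lambda_2)^2 = (a - c)^2 + 4b^2$, a sum of two squares, so a repeated eigenvalue requires the two conditions $a = c$ and $b = 0$. Needing two conditions rather than one is the codimension-two phenomenon behind von Neumann and Wigner's explanation of avoided crossings, and Ilyushechkin's bound concerns the analogous sum-of-squares expansion of the discriminant for general $n$.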
Some complexity considerations in sparse LU factorization
Abstract
The talk will discuss unsymmetric sparse LU factorization based on
the Markowitz pivot selection criterion. The key question for the
author is the following: Is it possible to implement a sparse
factorization where the overhead is limited to a constant times
the actual numerical work? In other words, can the work be bounded
by $O(\sum_k M(k))$, where $M(k)$ is the Markowitz count of pivot $k$?
The answer is probably NO, but how close can we get? We will give
several bad examples for traditional methods and suggest alternative
methods / data structure both for pivot selection and for the sparse
update operations.
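For readers unfamiliar with the criterion: the Markowitz count of a candidate pivot $a_{ij}$ is $(r_i - 1)(c_j - 1)$, where $r_i$ and $c_j$ are the nonzero counts of its row and column, an upper bound on the fill generated by that elimination step. A toy dense-stored version of the selection follows (real codes combine it with a numerical threshold test and sparse data structures):

```python
import numpy as np

def markowitz_pivot(A, tol=1e-8):
    """Pick a pivot minimising (r_i - 1)*(c_j - 1) over the non-negligible entries."""
    nz = np.abs(A) > tol
    r = nz.sum(axis=1)                          # nonzeros per row
    c = nz.sum(axis=0)                          # nonzeros per column
    counts = np.where(nz, np.outer(r - 1, c - 1), np.iinfo(np.int64).max)
    i, j = np.unravel_index(np.argmin(counts), A.shape)
    return i, j, counts[i, j]

A = np.array([[4.0, 0.0, 1.0, 0.0],
              [0.0, 3.0, 0.0, 0.0],
              [1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 5.0]])
print(markowitz_pivot(A))        # picks the (1,1) entry: its row is a singleton, count 0
```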
Filtering & signal processing
Abstract
We discuss two filters that are frequently used to smooth data.
One is the (nonlinear) median filter, which chooses the median
of the sample values in the sliding window. This deals effectively
with "outliers" that are beyond the correct sample range, and will
never be chosen as the median. A straightforward implementation of
the filter is expensive for large windows, particularly in two dimensions
(for images).
\\
\\
The second filter is linear, and known as "Savitzky-Golay". It is
frequently used in spectroscopy, to locate positions and peaks and
widths of spectral lines. This filter is based on a least-squares fit
of the samples in the sliding window to a polynomial of relatively
low degree. The filter coefficients are unlike the equiripple filter
that is optimal in the maximum norm, and the "maxflat" filters that
are central in wavelet constructions. Should they be better known....?
\\
\\
We will discuss the analysis and the implementation of both filters.
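Both filters are readily available in SciPy, which makes the contrast easy to demonstrate (the signal, noise level and window sizes below are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)
signal = np.exp(-0.5 * ((x - 0.5) / 0.05)**2)          # a single "spectral line"
noisy = signal + 0.02 * rng.standard_normal(x.size)
noisy[::50] += 1.0                                     # a few gross outliers

median = medfilt(noisy, kernel_size=7)                 # isolated outliers never become the median
smooth = savgol_filter(median, window_length=21, polyorder=3)   # local least-squares polynomial fit

print(f"estimated line centre: {x[np.argmax(smooth)]:.3f} (true 0.500)")
```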
Asymptotic rates of convergence - for quadrature, ODEs and PDEs
Abstract
The asymptotic rate of convergence of the trapezium rule is
defined, for smooth functions, by the Euler-Maclaurin expansion.
The extension to other methods, such as Gauss rules, is straightforward;
this talk begins with some special cases, such as periodic functions, and
functions with various singularities.
\\
\\
Convergence rates for ODEs (Initial and Boundary value problems)
and for PDEs are available, but not so well known. Extension to singular
problems seems to require methods specific to each situation. Some of
the results are unexpected - to me, anyway.
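A small experiment showing the two regimes mentioned above (the integrands are arbitrary smooth examples): the Euler-Maclaurin $O(h^2)$ rate for a smooth non-periodic integrand, and the far faster convergence when the integrand is smooth and periodic over the interval.

```python
import numpy as np

def trapezium(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y.sum() - 0.5 * (y[0] + y[-1]))

exact1 = np.exp(1.0) - 1.0                 # integral of exp(x) over [0, 1]
exact2 = 2.0 * np.pi * np.i0(1.0)          # integral of exp(cos x) over one period

print("  n   error, exp(x) on [0,1]   error, exp(cos x) on [0,2*pi]")
for n in [4, 8, 16, 32]:
    e1 = abs(trapezium(np.exp, 0.0, 1.0, n) - exact1)
    e2 = abs(trapezium(lambda t: np.exp(np.cos(t)), 0.0, 2.0 * np.pi, n) - exact2)
    print(f"{n:4d}   {e1:12.2e}              {e2:12.2e}")
# the first column decreases by a factor of ~4 per doubling of n (O(h^2));
# the second collapses to roundoff level almost immediately
```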
A toolbox for optimal design
Abstract
In the past few years we have developed some expertise in solving optimization
problems that involve large scale simulations in various areas of Computational
Geophysics and Engineering. We will discuss some of those applications here,
namely: inversion of seismic data, characterization of piezoelectrical crystals
material properties, optimal design of piezoelectrical transducers and
opto-electronic devices, and the optimal design of steel structures.
\\
\\
A common theme among these different applications is that the goal functional
is very expensive to evaluate, often no derivatives are readily available, and
sometimes the dimensionality can be large.
\\
\\
Thus parallelism is a necessity, and when no derivatives are available, search-type
methods have to be used for the optimization part. Additional difficulties can
be ill-conditioning and non-convexity, which lead to issues of global
optimization. Another area that has not been extensively explored in numerical
optimization and that is important in real applications is that of
multiobjective optimization.
\\
\\
As a result of these varied experiences we are currently designing a toolbox
to facilitate the rapid deployment of these techniques to other areas of
application with a minimum of retooling.
Analysis of some structured preconditioners for saddle point problems
A-Posteriori error estimates for higher order Godunov finite volume methods on unstructured meshes
Abstract
A-posteriori error estimates for high order Godunov finite
volume methods are presented which exploit the two solution
representations inherent in the method, viz. as piecewise
constants $u_0$ and cell-wise $q$-th order reconstructed
functions $R^0_q u_0$. The analysis provided here applies
directly to schemes such as MUSCL, TVD, UNO, ENO, WENO or any
other scheme that is a faithful extension of Godunov's method
to high order accuracy in a sense that will be made precise.
Using standard duality arguments, we construct exact error
representation formulas for derived functionals that are
tailored to the class of high order Godunov finite volume
methods with data reconstruction, $R^0_q u_0$. We then consider
computable error estimates that exploit the structure of higher
order Godunov finite volume methods. The analysis technique used
in this work exploits a certain relationship between higher
order Godunov methods and the discontinuous Galerkin method.
Issues such as the treatment of nonlinearity and the optional
post-processing of numerical dual data are also discussed.
Numerical results for linear and nonlinear scalar conservation
laws are presented to verify the analysis. Complete details can
be found in a paper appearing in the proceedings of FVCA3,
Porquerolles, France, June 24-28, 2002.
SMP parallelism: Current achievements, future challenges
Abstract
SMP (Symmetric Multi-Processors) hardware technologies are very popular
with vendors and end-users alike for a number of reasons. However, true
shared memory parallelism has seen a somewhat slower take-up
amongst the scientific programming community. NAG has been at the
forefront of SMP technology for a number of years, and the NAG SMP
Library has shown the potential of SMP systems.
\\
\\
At the very high end, SMP hardware technologies are used as building
blocks of modern supercomputers, which truly consist of clusters of SMP
systems, for which no dedicated model of parallelism yet exists.
\\
\\
The aim of this talk is to introduce SMP systems and their potential.
Results from our work at NAG will also be introduced to show how SMP
parallelism, based on a shared memory paradigm, can be used to very
good effect and can produce high performance, scalable software. The
talk also aims to discuss some aspects of the apparent slow take-up of
shared memory parallelism and the potential competition from PC (i.e.
Intel)-based cluster technology. The talk then aims to explore the
potential of SMP technology within "hybrid parallelism", i.e. mixed
distributed and shared memory modes, illustrating the point with some
preliminary work carried out by the author and others. Finally, a
number of potential future challenges to numerical analysts will be
discussed.
\\
\\
The talk is aimed at all who are interested in SMP technologies for
numerical computing, irrespective of any previous experience in the
field. The talk aims to stimulate discussion, by presenting some ideas,
backing these with data, not to stifle it in an ocean of detail!
Computed tomography for X-rays: old 2-D results, relevance to new 3-D spiral CT problems
Oscillations in discrete solutions to the convection-diffusion equation
Abstract
It is well known that discrete solutions to the convection-diffusion
equation contain nonphysical oscillations when boundary layers are present
but not resolved by the discretisation. For the Galerkin finite element
method with linear elements on a uniform 1D grid, a precise statement as
to exactly when such oscillations occur can be made, namely, that for a
problem with mesh size h, constant advective velocity and different values
at the left and right boundaries, oscillations will occur if the mesh
Péclet number $P_e$ is greater than one. In 2D, however, the situation
is not so well understood. In this talk, we present an analysis of a 2D
model problem on a square domain with grid-aligned flow which enables us
to clarify precisely when oscillations occur, and what can be done to
prevent them. We prove the somewhat surprising result that there are
oscillations in the 2D problem even when $P_e$ is less than 1. Also, we show that there
are distinct effects arising from differences in the top and bottom
boundary conditions (equivalent to those seen in 1D), and the non-zero
boundaries parallel to the flow direction.
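The 1D statement quoted above is easy to reproduce in a few lines (a sketch, with the advective velocity taken as one so that the mesh Péclet number is $h/(2\epsilon)$; the exact solution is monotone increasing, so any negative increment signals a spurious oscillation):

```python
import numpy as np

def central_difference_solution(eps, n):
    """Central differences for -eps*u'' + u' = 0 on (0,1), u(0) = 0, u(1) = 1."""
    h = 1.0 / n
    lower = -eps / h**2 - 1.0 / (2.0 * h)
    diag  =  2.0 * eps / h**2
    upper = -eps / h**2 + 1.0 / (2.0 * h)
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(n - 1):
        A[i, i] = diag
        if i > 0:     A[i, i - 1] = lower
        if i < n - 2: A[i, i + 1] = upper
    rhs[-1] = -upper                                 # boundary value u(1) = 1
    u = np.concatenate(([0.0], np.linalg.solve(A, rhs), [1.0]))
    return u, h / (2.0 * eps)

for n in [10, 100]:
    u, Pe = central_difference_solution(eps=0.01, n=n)
    oscillatory = np.any(np.diff(u) < -1e-8)         # exact solution is monotone increasing
    print(f"n = {n:3d}, mesh Peclet number = {Pe:5.2f}, oscillations: {oscillatory}")
```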
Algebraic modeling systems and mathematical programming
Abstract
Algebra based modeling systems are becoming essential elements in the
application of large and complex mathematical programs. These systems
enable the abstraction, expression and translation of practical
problems into reliable and effective operational systems. They provide
the bridge between algorithms and real-world problems by automating
the problem analysis and translation into specific data structures and
provide computational services required by different solvers. The
modeling system GAMS will be used to illustrate the design goals and
main features of such systems. Applications in use and under
development will be used to provide the context for discussing the
changes in user focus and future requirements. This presents new sets
of opportunities and challenges to the supplier and implementer of
mathematical programming solvers and modeling systems.
Adaptive finite elements for optimal control
Abstract
A systematic approach to error control and mesh adaptation for
optimal control of systems governed by PDEs is presented.
Starting from a coarse mesh, the finite element spaces are successively
enriched in order to construct suitable discrete models.
This process is guided by an a posteriori error estimator which employs
sensitivity factors from the adjoint equation.
We consider different examples with the stationary Navier-Stokes
equations as state equation.
On the condition number of bases in Banach spaces
Iterative methods for PDE eigenvalue problems
Abstract
Finite Element approximation of surfactant spreading on a thin film
A new preconditioning technique for the solution of the biharmonic problem
Abstract
In this presentation we examine the convergence characteristics of a
Krylov subspace solver preconditioned by a new indefinite
constraint-type preconditioner, when applied to discrete systems
arising from low-order mixed finite element approximation of the
classical biharmonic problem. The preconditioning operator leads to
preconditioned systems having an eigenvalue distribution consisting of
a tightly clustered set together with a small number of outliers. We
compare the convergence characteristics of a new approach with the
convergence characteristics of a standard block-diagonal Schur
complement preconditioner that has proved to be extremely effective in
the context of mixed approximation methods.
\\
\\
In the second part of the presentation we are concerned with the
efficient parallel implementation of the proposed algorithm on modern
shared memory architectures. We consider the use of efficient parallel
"black-box" solvers for the Dirichlet Laplacian problems based on
sparse Cholesky factorisation and multigrid, and for this purpose we
use publicly available codes from the HSL library and MGNet collection.
We compare the performance of our algorithm with sparse direct solvers
from the HSL library and discuss some implementation related issues.
Distribution tails of condition numbers for the polyhedral conic feasibility problem
Abstract
(Joint work with Felipe Cucker and Dennis Cheung, City University of Hong Kong.)
\\
\\
Condition numbers are important complexity-theoretic tools to capture
a "distillation" of the input aspects of a computational problem that
determine the running time of algorithms for its solution and the
sensitivity of the computed output. The motivation for our work is the
desire to understand the average case behaviour of linear programming
algorithms for a large class of randomly generated input data in the
computational model of a machine that computes with real numbers. In
this model it is not known whether linear programming is polynomial
time solvable, or so-called "strongly polynomial". Closely related to
linear programming is the problem of either proving non-existence of
or finding an explicit example of a point in a polyhedral cone defined
in terms of certain input data. A natural condition number for this
computational problem was developed by Cheung and Cucker, and we analyse
its distributions under a rather general family of input distributions.
We distinguish random sampling of primal and dual constraints
respectively, two cases that necessitate completely different techniques
of analysis. We derive the exact exponents of the decay rates of the
distribution tails and prove various limit theorems of complexity
theoretic importance. An interesting result is that the existence of
the k-th moment of Cheung-Cucker's condition number depends only very
mildly on the distribution of the input data. Our results also form
the basis for a second paper in which we analyse the distributions of
Renegar's condition number for the randomly generated linear programming
problem.
Eigenvalues of Locally Perturbed Toeplitz Matrices
Abstract
Toeplitz matrices enjoy the dual virtues of ubiquity and beauty. We begin this talk by surveying some of the interesting spectral properties of such matrices, emphasizing the distinctions between infinite-dimensional Toeplitz matrices and the large-dimensional limit of the corresponding finite matrices. These basic results utilize the algebraic Toeplitz structure, but in many applications, one is forced to spoil this structure with some perturbations (e.g., by imposing boundary conditions upon a finite difference discretization of an initial-boundary value problem). How do such
perturbations affect the eigenvalues? This talk will address this question for "localized" perturbations, by which we mean perturbations that are restricted to a single entry, or a block of entries whose size remains fixed as the matrix dimension grows. One finds, for a broad class of matrices, that sufficiently small perturbations fail to alter the spectrum, though the spectrum is exponentially sensitive to other perturbations. For larger real single-entry
perturbations, one observes the perturbed eigenvalues trace out curves in the complex plane. We'll show a number of illustrations of this phenomenon for tridiagonal Toeplitz matrices.
\\
\\
This talk describes collaborative work with Albrecht Boettcher, Marko Lindner, and Viatcheslav Sokolov of TU Chemnitz.
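A small experiment in the spirit of the talk (matrix size and perturbation values are arbitrary choices for this sketch): a nonsymmetric tridiagonal Toeplitz matrix has real eigenvalues; a tiny single-entry corner perturbation leaves the computed spectrum essentially unchanged, while a larger one sends the eigenvalues well into the complex plane.

```python
import numpy as np

# Nonsymmetric tridiagonal Toeplitz matrix (subdiagonal 1, superdiagonal 1/4):
# its exact eigenvalues are real, namely cos(k*pi/(n+1)), k = 1, ..., n.
n = 30
A = np.diag(np.ones(n - 1), -1) + 0.25 * np.diag(np.ones(n - 1), 1)

for delta in [0.0, 1e-14, 1e-6]:
    Ap = A.copy()
    Ap[0, n - 1] += delta                  # single-entry perturbation in a corner
    lam = np.linalg.eigvals(Ap)
    print(f"delta = {delta:7.0e}:  max |Im(eigenvalue)| = {np.abs(lam.imag).max():.2e}")
# tiny perturbations leave the (real) spectrum essentially unchanged, while the
# larger one pushes the eigenvalues onto a curve well away from the real axis
```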