Forthcoming events in this series


Thu, 22 Nov 2001

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

A new preconditioning technique for the solution of the biharmonic problem

Dr Milan Mihajlovic
(University of Manchester)
Abstract

In this presentation we examine the convergence characteristics of a Krylov subspace solver preconditioned by a new indefinite constraint-type preconditioner, when applied to discrete systems arising from low-order mixed finite element approximation of the classical biharmonic problem. The preconditioning operator leads to preconditioned systems whose eigenvalue distribution consists of a tightly clustered set together with a small number of outliers. We compare the convergence characteristics of the new approach with those of a standard block-diagonal Schur complement preconditioner that has proved to be extremely effective in the context of mixed approximation methods.

In the second part of the presentation we are concerned with the efficient parallel implementation of the proposed algorithm on modern shared-memory architectures. We consider the use of efficient parallel "black-box" solvers for the Dirichlet Laplacian problems, based on sparse Cholesky factorisation and multigrid; for this purpose we use publicly available codes from the HSL library and the MGNet collection. We compare the performance of our algorithm with sparse direct solvers from the HSL library and discuss some implementation-related issues.
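
For readers unfamiliar with the block-diagonal Schur complement approach mentioned above, the following is a minimal illustrative sketch (not the speaker's code): it applies a block-diagonal preconditioner built from the (1,1) block and an explicitly formed Schur complement to a small, randomly generated saddle-point system via preconditioned MINRES. The matrices A and B are arbitrary placeholders, not the biharmonic mixed finite element blocks.

    # Illustrative sketch: block-diagonal Schur-complement preconditioning of a
    # symmetric saddle-point system K = [[A, B^T], [B, 0]] with MINRES.
    # A and B below are random stand-ins, not the mixed FE biharmonic blocks.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla
    from scipy.linalg import lu_factor, lu_solve

    n, m = 200, 80
    rng = np.random.default_rng(0)
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    B = sp.random(m, n, density=0.1, random_state=rng, format="csc")

    K = sp.bmat([[A, B.T], [B, None]], format="csc")
    b = rng.standard_normal(n + m)

    # Schur complement S = B A^{-1} B^T, formed exactly here because the
    # example is tiny; in practice it would only be approximated.
    A_lu = spla.splu(A)
    S = B @ A_lu.solve(B.T.toarray())
    S_fac = lu_factor(S)

    def apply_prec(r):
        # Block-diagonal preconditioner: independent solves with A and S.
        y = np.empty_like(r)
        y[:n] = A_lu.solve(r[:n])
        y[n:] = lu_solve(S_fac, r[n:])
        return y

    M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
    x, info = spla.minres(K, b, M=M)
    print("converged" if info == 0 else f"minres info = {info}")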

Thu, 15 Nov 2001

14:00 - 15:00
Comlab

Distribution tails of condition numbers for the polyhedral conic feasibility problem

Dr Raphael Hauser
(University of Oxford)
Abstract

(Joint work with Felipe Cucker and Dennis Cheung, City University of Hong Kong.)

Condition numbers are important complexity-theoretic tools that capture a "distillation" of the input aspects of a computational problem which determine the running time of algorithms for its solution and the sensitivity of the computed output. The motivation for our work is the desire to understand the average-case behaviour of linear programming algorithms for a large class of randomly generated input data in the computational model of a machine that computes with real numbers. In this model it is not known whether linear programming is polynomial-time solvable, or so-called "strongly polynomial". Closely related to linear programming is the problem of either proving the non-existence of, or finding an explicit example of, a point in a polyhedral cone defined in terms of certain input data. A natural condition number for this computational problem was developed by Cheung and Cucker, and we analyse its distribution under a rather general family of input distributions. We distinguish random sampling of primal and dual constraints respectively, two cases that necessitate completely different techniques of analysis. We derive the exact exponents of the decay rates of the distribution tails and prove various limit theorems of complexity-theoretic importance. An interesting result is that the existence of the $k$-th moment of Cheung and Cucker's condition number depends only very mildly on the distribution of the input data. Our results also form the basis for a second paper in which we analyse the distribution of Renegar's condition number for randomly generated linear programming problems.
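
For orientation, the abstract's closing reference to Renegar's condition number can be made concrete with the standard distance-to-ill-posedness definition below; this is textbook material rather than anything taken from the talk, and the Cheung-Cucker condition number analysed here is a related but distinct quantity.

\[
  \rho(d) = \inf \bigl\{ \, \|\Delta d\| \;:\; d + \Delta d \ \text{is an ill-posed instance} \, \bigr\},
  \qquad
  C(d) = \frac{\|d\|}{\rho(d)},
\]

so $C(d)$ grows without bound as the data $d$ approach the set of ill-posed inputs, and tail bounds on the distribution of $C(d)$ typically translate into average-case complexity bounds for condition-based algorithms.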

Thu, 08 Nov 2001

14:00 - 15:00
Comlab

Eigenvalues of Locally Perturbed Toeplitz Matrices

Dr Mark Embree
(University of Oxford)
Abstract

Toeplitz matrices enjoy the dual virtues of ubiquity and beauty. We begin this talk by surveying some of the interesting spectral properties of such matrices, emphasizing the distinctions between infinite-dimensional Toeplitz matrices and the large-dimensional limit of the corresponding finite matrices. These basic results utilize the algebraic Toeplitz structure, but in many applications one is forced to spoil this structure with some perturbations (e.g., by imposing boundary conditions upon a finite difference discretization of an initial-boundary value problem). How do such perturbations affect the eigenvalues?

This talk will address this question for "localized" perturbations, by which we mean perturbations that are restricted to a single entry, or a block of entries whose size remains fixed as the matrix dimension grows. One finds, for a broad class of matrices, that sufficiently small perturbations fail to alter the spectrum, though the spectrum is exponentially sensitive to other perturbations. For larger real single-entry perturbations, one observes that the perturbed eigenvalues trace out curves in the complex plane. We'll show a number of illustrations of this phenomenon for tridiagonal Toeplitz matrices.

This talk describes collaborative work with Albrecht Boettcher, Marko Lindner, and Viatcheslav Sokolov of TU Chemnitz.
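
A small numerical experiment in the spirit of the talk (my own sketch, not the speaker's): it builds a non-symmetric tridiagonal Toeplitz matrix, perturbs a single entry by various amounts, and reports how far the eigenvalues move. The diagonal values a, b, c and the choice of perturbed entry are arbitrary illustrative assumptions.

    # Eigenvalues of a tridiagonal Toeplitz matrix before and after a
    # single-entry perturbation (illustrative experiment only).
    import numpy as np

    n = 200
    a, b, c = 1.0, 0.0, 0.25          # sub-, main- and super-diagonal entries
    T = (np.diag(np.full(n - 1, a), -1)
         + np.diag(np.full(n, b))
         + np.diag(np.full(n - 1, c), 1))
    lam0 = np.linalg.eigvals(T)

    for eps in (1e-12, 1e-6, 1.0):
        Tp = T.copy()
        Tp[n - 1, 0] = eps            # perturb one corner entry
        lam = np.linalg.eigvals(Tp)
        # distance from each perturbed eigenvalue to the unperturbed spectrum
        move = np.max(np.min(np.abs(lam[:, None] - lam0[None, :]), axis=1))
        print(f"perturbation {eps:8.0e}: eigenvalues move by up to {move:.3e}")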

Thu, 01 Nov 2001

14:00 - 15:00
Comlab

Solution of massive support vector machine problems

Dr Michael Ferris
(University of Wisconsin)
Abstract

We investigate the use of interior-point and semismooth methods for solving quadratic programming problems with a small number of linear constraints, where the quadratic term consists of a low-rank update to a positive semi-definite matrix. Several formulations of the support vector machine fit into this category. An interesting feature of these particular problems is the volume of data, which can lead to quadratic programs with between 10 and 100 million variables and, if written explicitly, a dense $Q$ matrix. Our codes are based on OOQP, an object-oriented interior-point code, with the linear algebra specialized for the support vector machine application. For the targeted massive problems, all of the data is stored out of core and we overlap computation and I/O to reduce overhead. Results are reported for several linear support vector machine formulations, demonstrating that the methods are reliable and scalable, and comparing the two approaches.
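
As background, one standard linear support vector machine formulation with exactly the low-rank structure described above is the soft-margin dual below; this is the textbook form, not necessarily the formulation used in the talk.

\[
  \min_{\alpha}\ \tfrac{1}{2}\,\alpha^{T} Q\,\alpha - e^{T}\alpha
  \quad \text{subject to} \quad y^{T}\alpha = 0, \quad 0 \le \alpha \le C e,
  \qquad
  Q = \mathrm{diag}(y)\, X X^{T} \mathrm{diag}(y),
\]

where $X$ holds one training example per row and $y$ the $\pm 1$ labels. Since $Q$ has rank at most the number of features, the Hessian is a low-rank positive semi-definite matrix even when the number of variables runs into the millions.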

Thu, 18 Oct 2001

14:00 - 15:00
Comlab

Spectral inclusion and spectral exactness for non-selfadjoint differential equation eigenproblems

Dr Marco Marletta
(University of Leicester)
Abstract

Non-selfadjoint singular differential equation eigenproblems arise in a number of contexts, including scattering theory, the study of quantum-mechanical resonances, and hydrodynamic and magnetohydrodynamic stability theory.

It is well known that the spectra of non-selfadjoint operators can be pathologically sensitive to perturbations of the operator. Wilkinson provides matrix examples in his classical text, while Trefethen has studied the phenomenon extensively through pseudospectra, which he argues are often of more physical relevance than the spectrum itself. E.B. Davies has studied the phenomenon particularly in the context of Sturm-Liouville operators and has shown that the eigenfunctions and associated functions of non-selfadjoint singular Sturm-Liouville operators may not even form a complete set in $L^2$.

In this work we ask the question: under what conditions can one expect the regularization process used for selfadjoint singular Sturm-Liouville operators to be successful for non-selfadjoint operators? The answer turns out to depend in part on the so-called Sims classification of the problem. For Sims Case I the process is not guaranteed to work, and indeed Davies has very recently described the way in which spurious eigenvalues may be generated and converge to certain curves in the complex plane.

Using the Titchmarsh-Weyl theory we develop a very simple numerical procedure which can be used a posteriori to distinguish genuine eigenvalues from spurious ones. Numerical results indicate that it is able to detect not only the spurious eigenvalues due to the regularization process, but also spurious eigenvalues due to the numerics on an already-regular problem. We present applications to quantum-mechanical resonance calculations and to the Orr-Sommerfeld equation.

This work, in collaboration with B.M. Brown in Cardiff, has recently been generalized to Hamiltonian systems.

Fri, 12 Oct 2001

14:00 - 15:00
Comlab

Numerical methods for stiff systems of ODEs

Dr Paul Matthews
(University of Nottingham)
Abstract

Stiff systems of ODEs arise commonly when solving PDEs by spectral methods, so conventional explicit time-stepping methods require very small time steps. The stiffness arises predominantly through the linear terms, and these terms can be handled implicitly or exactly, permitting larger time steps. This work develops and investigates a class of methods known as 'exponential time differencing'. These methods are shown to have a number of advantages over the better-known linearly implicit methods and integrating factor methods.
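
To illustrate the basic idea, here is a minimal sketch (my own, showing only the standard first-order exponential time differencing step rather than the methods developed in the talk) applied to a stiff scalar test problem; the coefficient c, the nonlinearity N and the step size are arbitrary illustrative choices.

    # u' = c*u + N(u) with a stiff linear part (c = -1000) and bounded N.
    # ETD1 integrates the linear term exactly, so the step size is not
    # limited by the stiff time scale 1/|c|; explicit Euler with the same
    # step size is unstable because |1 + c*h| > 1.
    import numpy as np

    c = -1000.0
    N = lambda u: np.cos(u)            # placeholder nonlinear/forcing term
    h, steps = 0.05, 20                # c*h = -50

    E = np.exp(c * h)                  # exact propagator of the linear part
    phi = (E - 1.0) / c                # ETD1 weight for the nonlinear term

    u_etd, u_euler = 1.0, 1.0
    for _ in range(steps):
        u_etd = E * u_etd + phi * N(u_etd)                     # ETD1
        u_euler = u_euler + h * (c * u_euler + N(u_euler))     # explicit Euler

    print(f"ETD1           : {u_etd:.6f}")     # settles near the fixed point
    print(f"explicit Euler : {u_euler:.3e}")   # grows roughly like |1 + c*h|^n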

Thu, 04 Oct 2001

14:00 - 15:00
Comlab

The Kestrel interface to the NEOS server

Dr Todd Munson
(Argonne National Laboratory)
Abstract

The Kestrel interface for submitting optimization problems to the NEOS Server augments the established e-mail, socket, and web interfaces by enabling easy usage of remote solvers from a local modeling environment.

Problem generation, including the run-time detection of syntax errors, occurs on the local machine using any available modeling language facilities. Finding a solution to the problem takes place on a remote machine, with the result returned in the native modeling language format for further processing. A byproduct of the Kestrel interface is the ability to solve multiple problems generated by a modeling language in parallel.

This mechanism is used, for example, in the GAMS/AMPL solver available through the NEOS Server, which internally translates a submitted GAMS problem into AMPL. The resulting AMPL problem is then solved through the NEOS Server via the Kestrel interface. An advantage of this design is that the GAMS to AMPL translator does not need to be collocated with the AMPL solver used, removing restrictions on solver choice and reducing administrative costs.

This talk is joint work with Elizabeth Dolan.

Thu, 21 Jun 2001

14:00 - 15:00
Comlab

Tridiagonal matrices and trees

Prof Gilbert Strang
(MIT)
Abstract

Tridiagonal matrices, three-term recurrences and second-order equations appear amazingly often, throughout all of mathematics. We won't try to review this subject. Instead we look in two less familiar directions.

Here is a tridiagonal matrix problem that waited surprisingly long for a solution. Forward elimination factors T into LDU, with the pivots in D as usual. Backward elimination, from row n to row 1, factors T into U_ D_ L_. Parlett asked for a proof that diag(D + D_) = diag(T) + diag(T^{-1})^{-1}, where the final inverse is taken entrywise on the diagonal of T^{-1}. In an excellent paper (Lin. Alg. Appl. 1997) Dhillon and Parlett extended this four-diagonal identity to block tridiagonal matrices, and also applied it to their "Holy Grail" algorithm for the eigenproblem. I would like to make a different connection, to the Kalman filter.
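
The identity is easy to check numerically; the following quick sketch is my own (with an arbitrary random, diagonally dominant test matrix), using the entrywise-inverse reading of the formula above.

    # Verify diag(D + D_) = diag(T) + 1/diag(T^{-1}) (entrywise) for a
    # random symmetric tridiagonal matrix T.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    main = rng.uniform(2.5, 3.5, n)          # diagonal dominance => nonzero pivots
    off = rng.uniform(-1.0, 1.0, n - 1)
    T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    def pivots_from_top(T):
        """Pivots of the LDU factorisation, eliminating from row 1 downwards."""
        n = T.shape[0]
        d = np.empty(n)
        d[0] = T[0, 0]
        for k in range(1, n):
            d[k] = T[k, k] - T[k, k - 1] * T[k - 1, k] / d[k - 1]
        return d

    d_fwd = pivots_from_top(T)
    d_bwd = pivots_from_top(T[::-1, ::-1])[::-1]   # eliminate from row n upwards

    lhs = d_fwd + d_bwd
    rhs = np.diag(T) + 1.0 / np.diag(np.linalg.inv(T))
    print(np.max(np.abs(lhs - rhs)))               # ~ 1e-15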

The second topic is a generalization of tridiagonal to "tree-diagonal". Unlike the interval, the tree can branch. In the matrix T, each vertex is connected only to its neighbors (but a branch point has more than two neighbors). The continuous analogue is a second order differential equation on a tree. The "non-jump" conditions at a meeting of N edges are continuity of the potential (N-1 equations) and Kirchhoff's Current Law (1 equation). Several important properties of tridiagonal matrices, including O(N) algorithms, survive on trees.

Thu, 14 Jun 2001

14:00 - 15:00
Comlab

No seminar

--
Abstract

No seminar this week

Thu, 07 Jun 2001

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Some properties of thin plate spline interpolation

Prof Mike J D Powell
(University of Cambridge)
Abstract

Let the thin plate spline radial basis function method be applied to interpolate values of a smooth function $f(x)$, $x \!\in\! {\cal R}^d$. It is known that, if the data are the values $f(jh)$, $j \in {\cal Z}^d$, where $h$ is the spacing between data points and ${\cal Z}^d$ is the set of points in $d$ dimensions with integer coordinates, then the accuracy of the interpolant is of magnitude $h^{d+2}$. This beautiful result, due to Buhmann, will be explained briefly. We will also survey some recent findings of Bejancu on Lagrange functions in two dimensions when interpolating at the integer points of the half-plane ${\cal Z}^2 \cap \{ x : x_2 \!\geq\! 0 \}$. Most of our attention, however, will be given to the current research of the author on interpolation in one dimension at the points $h {\cal Z} \cap [0,1]$, the purpose of the work being to establish theoretically the apparent deterioration in accuracy at the ends of the range from ${\cal O} ( h^3 )$ to ${\cal O} ( h^{3/2} )$ that has been observed in practice. The analysis includes a study of the Lagrange functions of the semi-infinite grid ${\cal Z} \cap \{ x : x \!\geq\! 0 \}$ in one dimension.
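
For readers unfamiliar with the method, the standard form of the thin plate spline interpolant is recalled below; this is textbook background rather than material from the abstract.

\[
  s(x) = \sum_{j=1}^{N} \lambda_j\, \phi\bigl( \| x - x_j \| \bigr) + p(x),
  \qquad \phi(r) = r^{2} \log r,
\]

where $p$ is a polynomial of degree at most one, the coefficients are fixed by the interpolation conditions $s(x_i) = f(x_i)$, and the side conditions $\sum_j \lambda_j q(x_j) = 0$ hold for every polynomial $q$ of degree at most one.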

Thu, 17 May 2001

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

On the robust solution of process simulation problems

Dr Lawrence Daniels and Dr Iain Strachan
(Hyprotech)
Abstract

In this talk we review experiences of using the Harwell Subroutine Library and other numerical software codes in implementing large-scale solvers for commercial industrial process simulation packages. Such packages are required to solve problems in an efficient and robust manner. A core requirement is the solution of sparse systems of linear equations; various HSL routines have been used and are compared. Additionally, the requirement for fast solvers for small dense matrices is examined.

Thu, 15 Mar 2001

14:00 - 15:00
Comlab

Scientific computing for problems on the sphere - applying good approximations on the sphere to geodesy and the scattering of sound

Prof Ian Sloan
(University of New South Wales)
Abstract

The sphere is an important setting for applied mathematics, yet the underlying approximation theory and numerical analysis needed for serious applications (such as global weather models) is much less developed than it is, for example, for the cube.

This lecture will apply recent developments in approximation theory on the sphere to two different problems in scientific computing.

First, in geodesy there is often the need to evaluate integrals using data selected from the vast amount collected by orbiting satellites. Sometimes the need is for quadrature rules that integrate exactly all spherical polynomials up to a specified degree $n$ (or equivalently, that integrate exactly all spherical harmonics $Y_{\ell ,k}(\theta ,\phi)$ with $\ell \le n$). We shall demonstrate (using results of M. Reimer, I. Sloan and R. Womersley in collaboration with W. Freeden) that excellent quadrature rules of this kind can be obtained from recent results on polynomial interpolation on the sphere, if the interpolation points (and thus the quadrature points) are chosen to be points of a so-called extremal fundamental system.
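
Written out, the exactness requirement on these quadrature rules is the following; this is a standard formulation summarised for convenience rather than quoted from the talk.

\[
  \sum_{j=1}^{m} w_j\, p(x_j) = \int_{S^2} p(x)\, d\omega(x)
  \qquad \text{for every spherical polynomial } p \text{ of degree at most } n,
\]

which is equivalent to exactness for every spherical harmonic $Y_{\ell,k}$ with $\ell \le n$, since these span the spherical polynomials of degree at most $n$.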

The second application is to the scattering of sound by smooth three-dimensional objects, and to the inverse problem of finding the shape of a scattering object by observing the pattern of the scattered sound waves. For these problems a methodology has been developed, in joint work with I.G. Graham, M. Ganesh and R. Womersley, by applying recent results on constructive polynomial approximation on the sphere. (The scattering object is treated as a deformed sphere.)

Thu, 01 Mar 2001

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Reliable process modelling and optimisation using interval analysis

Prof Mark Stadtherr
(University of Notre Dame)
Abstract

Continuing advances in computing technology provide the power not only to solve increasingly large and complex process modeling and optimization problems, but also to address issues concerning the reliability with which such problems can be solved. For example, in solving process optimization problems, a persistent issue concerning reliability is whether or not a global, as opposed to local, optimum has been achieved. In modeling problems, especially with the use of complex nonlinear models, the issue of whether a solution is unique is of concern, and, if no solution is found numerically, of whether there actually exists a solution to the posed problem. This presentation focuses on an approach, based on interval mathematics, that is capable of dealing with these issues, and which can provide mathematical and computational guarantees of reliability. That is, the technique is guaranteed to find all solutions to nonlinear equation solving problems and to find the global optimum in nonlinear optimization problems. The methodology is demonstrated using several examples, drawn primarily from the modeling of phase behavior, the estimation of parameters in models, and the modeling, using lattice density-functional theory, of phase transitions in nanoporous materials.
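
To give a flavour of the mechanism underlying such guarantees, here is a small self-contained sketch (my own illustration, not the speaker's software, and with outward rounding omitted for brevity): interval evaluation of a function encloses its range over a box, so boxes whose image provably excludes zero can be discarded, and bisection of the remainder locates every root of the example function f(x) = x^2 - 2.

    # Minimal interval branch-and-prune root finder (illustration only).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            p = (self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi)
            return Interval(min(p), max(p))

        def contains_zero(self):
            return self.lo <= 0.0 <= self.hi

        @property
        def width(self):
            return self.hi - self.lo

    def f(x):
        # Example function f(x) = x^2 - 2, with roots +/- sqrt(2).
        return x * x - Interval(2.0, 2.0)

    def find_roots(box, tol=1e-10, found=None):
        """Discard boxes whose image excludes zero; bisect the rest."""
        if found is None:
            found = []
        if not f(box).contains_zero():
            return found                   # provably no root in this box
        if box.width < tol:
            found.append(box)              # tiny box that may contain a root
            return found
        mid = 0.5 * (box.lo + box.hi)
        find_roots(Interval(box.lo, mid), tol, found)
        find_roots(Interval(mid, box.hi), tol, found)
        return found

    for box in find_roots(Interval(-10.0, 10.0)):
        print(f"candidate root in [{box.lo:.12f}, {box.hi:.12f}]")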

Thu, 22 Feb 2001

14:00 - 15:00
Comlab

Acceleration strategies for restarted minimum residual methods

Dr Oliver Ernst
(Bergakademie Freiberg)
Abstract

This talk reviews some recent joint work with Michael Eiermann and Olaf Schneider which introduced a framework for analyzing some popular techniques for accelerating restarted Krylov subspace methods for solving linear systems of equations. Such techniques attempt to compensate for the loss of information caused by restarting methods like GMRES, whose memory demands are usually too high for them to be applied to large problems in unmodified form. We summarize the basic strategies which have been proposed and present both theoretical and numerical comparisons.
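
For context, the trade-off that restarting introduces can be seen in a few lines; this is an illustrative sketch of plain restarted GMRES in SciPy, not of the acceleration techniques discussed in the talk, and the test matrix is an arbitrary placeholder. The restart length bounds the Krylov basis that must be stored, but information from earlier cycles is discarded.

    # Restarted GMRES with different restart lengths on a small test system.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 400
    rng = np.random.default_rng(0)
    A = sp.diags([2.5, -1.0, -1.2], [0, -1, 1], shape=(n, n), format="csr")
    b = rng.standard_normal(n)

    for restart in (5, 20, 100):       # memory grows with the restart length
        x, info = spla.gmres(A, b, restart=restart, maxiter=200)
        rel = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
        print(f"restart = {restart:3d}: info = {info}, relative residual = {rel:.2e}")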

Thu, 08 Feb 2001

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Support Vector Machines and related kernel methods

Dr Colin Campbell
(University of Bristol)
Abstract

Support Vector Machines are a new and very promising approach to machine learning. They can be applied to a wide range of tasks such as classification, regression, novelty detection, density estimation, etc. The approach is motivated by statistical learning theory, and the algorithms have performed well in practice on important applications such as handwritten character recognition (where they currently give state-of-the-art performance), bioinformatics and machine vision. The learning task typically involves optimisation theory (linear, quadratic and general nonlinear programming, depending on the algorithm used). In fact, the approach has stimulated new questions in optimisation theory, principally concerned with the issue of how to handle problems with a large number of variables. In the first part of the talk I will give an overview of this subject; in the second part I will describe some of my own contributions (principally novelty detection, query learning and new algorithms); and in the third part I will outline future directions and new questions stimulated by this research.
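
As a pointer for readers new to the area, the kernel form of the support vector machine decision function, which underlies the classification and novelty-detection tasks mentioned above, is the standard expression below (textbook form, not taken from the talk).

\[
  f(x) = \mathrm{sign} \Bigl( \sum_{i=1}^{N} \alpha_i\, y_i\, K(x_i, x) + b \Bigr),
\]

where the coefficients $\alpha_i$ solve a quadratic program involving the kernel matrix $K(x_i, x_j)$, only the support vectors have $\alpha_i \neq 0$, and changing the kernel $K$ (linear, polynomial, Gaussian, ...) changes the hypothesis class without changing the structure of the optimisation problem.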