# Past Forthcoming Seminars

28 November 2002
14:00
Dr Coralia Cartis
Abstract
Long-step primal-dual path-following algorithms constitute the framework of practical interior point methods for solving linear programming problems. We consider such an algorithm and a second order variant of it, and address whether the sequences of iterates generated by the two algorithms converge to the analytic centre of the optimal primal-dual set.
• Computational Mathematics and Applications Seminar
21 November 2002
14:00
Abstract
Several real Lie and Jordan algebras, along with their associated automorphism groups, can be elegantly expressed in the quaternion tensor algebra. The resulting insight into structured matrices leads to a class of simple Jacobi algorithms for the corresponding $n \times n$ structured eigenproblems. These algorithms have many desirable properties, including parallelizability, ease of implementation, and strong stability.
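As background for the structured algorithms mentioned above, the classical cyclic Jacobi iteration for a real symmetric matrix can be sketched as follows. This is a minimal generic illustration, not the quaternion-based structured variants described in the abstract:

```python
import numpy as np

def jacobi_eigenvalues(A, sweeps=10):
    """Cyclic Jacobi: repeatedly zero an off-diagonal entry (p, q) with a
    plane rotation; A converges to a diagonal matrix of eigenvalues."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-14:
                    continue
                # Choose the rotation angle that annihilates A[p, q]:
                # tan(2 theta) = 2 A[p,q] / (A[q,q] - A[p,p]).
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
    return np.sort(np.diag(A))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(jacobi_eigenvalues(A))
```

Each rotation is cheap and independent rotations can proceed in parallel, which is the source of the parallelizability the abstract mentions.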
• Computational Mathematics and Applications Seminar
Dr Andrew Cliffe
Abstract
A method for computing periodic orbits for the Navier-Stokes equations will be presented. The method uses a finite-element Galerkin discretisation for the spatial part of the problem and a spectral Galerkin method for the temporal part of the problem. The method will be illustrated by calculations of the periodic flow behind a circular cylinder in a channel. The problem has a simple reflectional symmetry and it will be explained how this can be exploited to reduce the cost of the computations.
• Computational Mathematics and Applications Seminar
31 October 2002
14:00
Dr Arno Kuijlaars
Abstract
The convergence of Krylov subspace methods like conjugate gradients depends on the eigenvalues of the underlying matrix. In many cases the exact location of the eigenvalues is unknown, but one has some information about the distribution of eigenvalues in an asymptotic sense. This could be the case for linear systems arising from a discretisation of a PDE, where the asymptotic behaviour takes place as the mesh size tends to zero.

We discuss two possible approaches to studying the convergence of conjugate gradients based on such information. The first approach is based on a straightforward idea: estimate the condition number. This method is illustrated by means of a comparison of preconditioning techniques. The second approach takes into account the full asymptotic spectrum. It gives a bound on the asymptotic convergence factor which explains the superlinear convergence observed in many situations. This method is mathematically more involved, since it deals with potential theory; I will explain the basic ideas.
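The first, condition-number-based approach can be illustrated with the classical textbook bound on the CG error in the energy norm (this is the standard bound, not the talk's potential-theoretic analysis):

```python
import math

def cg_error_bound(kappa: float, k: int) -> float:
    """Classical CG bound after k iterations:
    ||e_k||_A / ||e_0||_A <= 2 * ((sqrt(kappa) - 1) / (sqrt(kappa) + 1))**k,
    where kappa is the condition number of the (SPD) matrix."""
    r = (math.sqrt(kappa) - 1.0) / (math.sqrt(kappa) + 1.0)
    return 2.0 * r ** k

# A preconditioner that reduces kappa shrinks the bound for the same k.
print(cg_error_bound(100.0, 10))  # unpreconditioned, say
print(cg_error_bound(10.0, 10))   # after (hypothetical) preconditioning
```

Comparing the bound for two values of `kappa` is precisely how preconditioning techniques can be ranked by this first approach; the bound is often pessimistic, which is where the full-spectrum analysis of the talk comes in.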
• Computational Mathematics and Applications Seminar
24 October 2002
14:00
Prof Endre Süli
Abstract
We develop an algorithm for estimating the local Sobolev regularity index of a given function by monitoring the decay rate of its Legendre expansion coefficients. On the basis of these local regularities, we design and implement an hp-adaptive finite element method, employing discontinuous piecewise polynomials, for the approximation of nonlinear systems of hyperbolic conservation laws. The performance of the proposed adaptive strategy is demonstrated numerically.
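As a rough illustration of the underlying idea (not the talk's actual algorithm), one can estimate an algebraic decay rate $|a_n| \sim C n^{-\alpha}$ of the Legendre coefficients by a least-squares fit in log-log coordinates; faster decay indicates higher regularity:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_decay_rate(f, degree=64):
    """Fit |a_n| ~ C * n**(-alpha) to the Legendre coefficients of f on
    [-1, 1] and return alpha; larger alpha suggests higher regularity."""
    m = 4 * degree
    x = np.cos(np.pi * (np.arange(m) + 0.5) / m)  # clustered sample points
    coeffs = legendre.legfit(x, f(x), degree)
    n = np.arange(1, degree + 1)
    a = np.abs(coeffs[1:])
    mask = a > 1e-8              # drop coefficients at rounding level
    slope, _ = np.polyfit(np.log(n[mask]), np.log(a[mask]), 1)
    return -slope

# |x| has limited smoothness, so its coefficients decay only algebraically.
print(legendre_decay_rate(np.abs))
```

A genuinely smooth function would show super-algebraic decay (a large fitted `alpha`), which in an hp-setting signals that raising the polynomial degree p is preferable to refining h.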
• Computational Mathematics and Applications Seminar
17 October 2002
14:00
Prof Nick Higham
Abstract

The study of the finite precision behaviour of numerical algorithms dates back at least as far as Turing and Wilkinson in the 1940s. At the start of the 21st century, this area of research is still very active.

We focus on some topics of current interest, describing recent developments and trends and pointing out future research directions. The talk will be accessible to those who are not specialists in numerical analysis.

Specific topics intended to be addressed include

• Floating point arithmetic: correctly rounded elementary functions, and the fused multiply-add operation.
• The use of extra precision for key parts of a computation: iterative refinement in fixed and mixed precision.
• Gaussian elimination with rook pivoting and new error bounds for Gaussian elimination.
• Automatic error analysis.
• Application and analysis of hyperbolic transformations.
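The mixed-precision iterative refinement in the list above can be sketched as follows. This is a toy dense example (production codes would reuse a single LU factorisation rather than calling `solve` repeatedly):

```python
import numpy as np

def refine(A, b, iterations=3):
    """Mixed-precision iterative refinement: solve in float32, but compute
    residuals and accumulate corrections in float64."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iterations):
        r = b - A @ x                                    # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))   # correction in float32
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.random((20, 20)) + 20.0 * np.eye(20)  # well-conditioned test matrix
x_true = np.ones(20)
b = A @ x_true
print(np.linalg.norm(refine(A, b) - x_true))
```

For a well-conditioned system, a few cheap low-precision solves plus high-precision residuals recover close to full double-precision accuracy, which is the attraction of doing the bulk of the work in the lower precision.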
• Computational Mathematics and Applications Seminar
10 October 2002
14:00
Prof Beresford Parlett
Abstract
We describe "avoidance of crossing" and its explanation by von Neumann and Wigner. We present Lax's criterion for degeneracy and then exhibit matrices whose determinants give the discriminant of the given matrix. This yields a simple proof of the bound given by Ilyushechkin on the number of terms in the expansion of the discriminant as a sum of squares. We discuss the $3 \times 3$ case in detail.
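The simplest instance of this sum-of-squares structure, offered here only as an illustration of the $3 \times 3$ case treated in the talk, is the symmetric $2 \times 2$ matrix, whose discriminant is $(a - c)^2 + (2b)^2$:

```python
import numpy as np

def disc_sym_2x2(a, b, c):
    """Discriminant of the characteristic polynomial of [[a, b], [b, c]],
    written as a sum of squares: (a - c)**2 + (2b)**2."""
    return (a - c) ** 2 + (2 * b) ** 2

# The discriminant equals (lambda_1 - lambda_2)**2, so it vanishes only when
# two eigenvalues coincide -- a codimension-2 condition (a - c = 0 AND b = 0),
# which is why eigenvalue curves of one-parameter families "avoid crossing".
A = np.array([[3.0, 1.0], [1.0, 2.0]])
lam = np.linalg.eigvalsh(A)
print(disc_sym_2x2(3.0, 1.0, 2.0), (lam[1] - lam[0]) ** 2)
```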
• Computational Mathematics and Applications Seminar
Abstract
The talk will discuss unsymmetric sparse LU factorization based on the Markowitz pivot selection criterion. The key question for the author is the following: is it possible to implement a sparse factorization in which the overhead is limited to a constant times the actual numerical work? In other words, can the work be bounded by $O(\sum_k M(k))$, where $M(k)$ is the Markowitz count at pivot $k$? The answer is probably no, but how close can we get? We will give several bad examples for traditional methods and suggest alternative methods and data structures, both for pivot selection and for the sparse update operations.
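A dense toy version of Markowitz pivot selection might look like the following; real sparse codes keep rows and columns sorted by nonzero count and scan only a few candidates, rather than every nonzero as here:

```python
import numpy as np

def markowitz_pivot(A, tol=1e-12):
    """Pick the pivot (i, j) minimising the Markowitz count
    (r_i - 1) * (c_j - 1) over the nonzeros of A, where r_i and c_j
    are the nonzero counts of row i and column j."""
    nz = np.abs(A) > tol
    r = nz.sum(axis=1)          # nonzeros per row
    c = nz.sum(axis=0)          # nonzeros per column
    best, best_count = None, None
    for i, j in zip(*np.nonzero(nz)):
        count = (r[i] - 1) * (c[j] - 1)
        if best_count is None or count < best_count:
            best, best_count = (i, j), count
    return best, best_count

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])
print(markowitz_pivot(A))   # (1, 1) has count 0: eliminating it fills nothing in
```

The Markowitz count bounds the fill-in and the arithmetic generated by eliminating that pivot, which is why $\sum_k M(k)$ is the natural yardstick for the "actual numerical work" in the question above.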
• Computational Mathematics and Applications Seminar
6 June 2002
14:00
Prof Gilbert Strang and Per-Olof Persson
Abstract
We discuss two filters that are frequently used to smooth data. One is the (nonlinear) median filter, which chooses the median of the sample values in the sliding window. This deals effectively with "outliers" that lie beyond the correct sample range and will never be chosen as the median. A straightforward implementation of the filter is expensive for large windows, particularly in two dimensions (for images).

The second filter is linear, and known as "Savitzky-Golay". It is frequently used in spectroscopy, to locate the positions, peaks, and widths of spectral lines. This filter is based on a least-squares fit of the samples in the sliding window to a polynomial of relatively low degree. The filter coefficients are unlike the equiripple filter that is optimal in the maximum norm, and the "maxflat" filters that are central in wavelet constructions. Should they be better known?

We will discuss the analysis and the implementation of both filters.
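A straightforward (and, as the abstract notes, expensive for large windows) implementation of the median filter might look like:

```python
import numpy as np

def median_filter(x, window=5):
    """Sliding-window median. Robust to outliers: a value far outside the
    true sample range is never selected as the window median.
    Naive cost is O(n * window log window); fast implementations
    maintain the window's order statistics incrementally."""
    half = window // 2
    padded = np.pad(x, half, mode="edge")  # replicate endpoints at the borders
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

signal = np.array([1.0, 1.0, 1.0, 50.0, 1.0, 1.0, 1.0])  # one outlier
print(median_filter(signal, window=3))  # the outlier is removed entirely
```

A linear smoother (such as a moving average, or the Savitzky-Golay least-squares fit) would instead smear the outlier across its neighbours, which is exactly the contrast the talk draws between the two filters.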
• Computational Mathematics and Applications Seminar