Thu, 08 May 2008

14:00 - 15:00
Comlab

The Envelope Method

Prof Beresford Parlett
(UC Berkeley)
Abstract

The task is to compute orthogonal eigenvectors (without Gram-Schmidt) of symmetric tridiagonals for isolated clusters of close eigenvalues. We review an "old" method, the Submatrix method, and describe an extension which significantly enlarges the scope to include several mini-clusters within the given cluster. An essential feature is to find the envelope of the associated invariant subspace.
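As a small numerical illustration of the problem setting only (not of the Submatrix or Envelope method), the Python sketch below builds a symmetric tridiagonal matrix with an isolated cluster of close eigenvalues and checks the mutual orthogonality of the computed cluster eigenvectors; the matrix construction and tolerances are illustrative assumptions.

    # Illustration of the setting: a symmetric tridiagonal with an isolated
    # cluster of close eigenvalues, and the orthogonality of the corresponding
    # computed eigenvectors. (Not the Submatrix or Envelope method.)
    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    n = 200
    d = np.ones(n)
    d[: n // 2] = 2.0              # two diagonal plateaus, values 2.0 and 1.0
    e = 1e-8 * np.ones(n - 1)      # weak coupling -> many very close eigenvalues
    e[n // 2 - 1] = 1.0            # one strong link joining the two halves

    w, V = eigh_tridiagonal(d, e)  # LAPACK symmetric tridiagonal eigensolver

    cluster = np.where(np.abs(w - 1.0) < 1e-6)[0]   # isolated cluster near 1.0
    Vc = V[:, cluster]
    print("cluster size:", cluster.size)
    print("orthogonality loss:", np.abs(Vc.T @ Vc - np.eye(cluster.size)).max())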

Thu, 01 May 2008

14:00 - 15:00
Comlab

Eigenvalue avoidance

Prof Nick Trefethen
(Computing Laboratory, Oxford)
Abstract

"Eigenvalue avoidance" or "level repulsion" refers to the tendency of eigenvalues of matrices or operators to be distinct rather than degenerate.

The mathematics goes back to von Neumann and Wigner in 1929 and touches many subjects including numerical linear algebra, random matrix theory, chaotic dynamics, and number theory.

This talk will be an informal illustrated discussion of various aspects of this phenomenon.
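As an informal numerical illustration (my own toy example, not necessarily one from the talk), the Python sketch below tracks the eigenvalues of a one-parameter family of random real symmetric matrices; the eigenvalue curves approach each other but, generically, never cross.

    # Level repulsion in a one-parameter family of real symmetric matrices:
    # plot the eigenvalue curves of (1 - t) A0 + t A1 and report the smallest
    # gap between adjacent curves along the path (it stays strictly positive).
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    n = 8
    A0 = rng.standard_normal((n, n)); A0 = (A0 + A0.T) / 2
    A1 = rng.standard_normal((n, n)); A1 = (A1 + A1.T) / 2

    ts = np.linspace(0.0, 1.0, 400)
    curves = np.array([np.linalg.eigvalsh((1 - t) * A0 + t * A1) for t in ts])

    print("minimum gap along the path:", np.diff(curves, axis=1).min())

    plt.plot(ts, curves, color="k", lw=0.8)
    plt.xlabel("t"); plt.ylabel("eigenvalues of (1 - t) A0 + t A1")
    plt.title("Avoided crossings (level repulsion)")
    plt.show()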

Thu, 21 Feb 2008

14:00 - 15:00
Comlab

Meshfree Methods: Theory and Applications

Prof Holger Wendland
(University of Sussex)
Abstract

Meshfree methods are becoming increasingly important for the numerical simulation of complex real-world processes. Compared to classical, mesh-based methods, they have the advantage of being more flexible, in particular for higher-dimensional problems and for problems where the underlying geometry changes. However, they are also often combined with classical methods to form hybrid methods.

In this talk, I will discuss meshfree, kernel-based methods. After a short introduction along the lines of optimal recovery, I will concentrate on results concerning convergence orders and stability. After that I will address efficient numerical algorithms. Finally, I will present some examples, including one from fluid-structure interaction, which will demonstrate why these methods are currently becoming Airbus's preferred solution in aeroelasticity.
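As a rough sketch of the kind of kernel-based, meshfree approximation described above (the Gaussian kernel, its shape parameter, and the test function are my own illustrative choices, not taken from the talk), the following Python code interpolates scattered data in 2D without any mesh.

    # Kernel-based (radial basis function) interpolation of scattered 2D data.
    import numpy as np

    def gauss_kernel(X, Y, eps=5.0):
        """Gaussian kernel matrix K[i, j] = exp(-eps^2 ||X_i - Y_j||^2)."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-(eps ** 2) * d2)

    rng = np.random.default_rng(0)
    X = rng.random((60, 2))                       # scattered centres, no mesh
    f = lambda x: np.sin(2 * np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])

    # Interpolation conditions K c = f(X); for the Gaussian kernel and distinct
    # centres, K is symmetric positive definite (though often ill-conditioned).
    c = np.linalg.solve(gauss_kernel(X, X), f(X))

    # Evaluate the interpolant s(x) = sum_j c_j * kernel(x, x_j) at test points.
    Xt = rng.random((500, 2))
    s = gauss_kernel(Xt, X) @ c
    print("max error at test points:", np.abs(s - f(Xt)).max())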

Thu, 14 Feb 2008

14:00 - 15:00
Comlab

Distance Geometry Problem for Protein Modeling via Geometric Buildup

Prof Ya-xiang Yuan
(Chinese Academy of Sciences, Beijing)
Abstract

A well-known problem in protein modeling is the determination of the structure of a protein with a given set of inter-atomic or inter-residue distances obtained from either physical experiments or theoretical estimates. A general form of the problem is known as the distance geometry problem in mathematics, the graph embedding problem in computer science, and the multidimensional scaling problem in statistics. The problem also has applications in many other scientific and engineering fields, such as sensor network localization, image recognition, and protein classification. We describe the formulations and complexities of the problem in its various forms, and introduce a so-called geometric buildup approach to the problem. We present the general algorithm and discuss related computational issues including control of numerical errors, determination of rigid vs. unique structures, and tolerance of distance errors. The theoretical basis of the approach is established based on the theory of distance geometry. A set of necessary and sufficient conditions for the determination of a structure with a given set of distances using a geometric buildup algorithm is justified. The applications of the algorithm to model protein problems are demonstrated.
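As an illustration of the core step of a geometric buildup algorithm under the simplifying assumption of exact distances (the coordinates and distances below are made-up test data), the following Python sketch locates a new point in 3D from its distances to four previously determined, non-coplanar points by reducing the squared-distance equations to a 3x3 linear system.

    # One buildup step: |x - p_i|^2 = d_i^2 for four base points p_1,...,p_4;
    # subtracting the first equation from the others removes |x|^2 and leaves
    # the linear system 2 (p_i - p_1)^T x = d_1^2 - d_i^2 + |p_i|^2 - |p_1|^2.
    import numpy as np

    def buildup_step(P, d):
        """Locate x in R^3 from base points P (4x3) and exact distances d (4,)."""
        A = 2.0 * (P[1:] - P[0])
        b = (d[0] ** 2 - d[1:] ** 2) + (P[1:] ** 2).sum(1) - (P[0] ** 2).sum()
        return np.linalg.solve(A, b)

    # Made-up test data: recover a point from its distances to four base atoms.
    P = np.array([[0.0, 0.0, 0.0],
                  [1.5, 0.0, 0.0],
                  [0.0, 1.5, 0.0],
                  [0.0, 0.0, 1.5]])
    x_true = np.array([0.7, 0.4, 1.1])
    d = np.linalg.norm(P - x_true, axis=1)

    x = buildup_step(P, d)
    print("recovered:", x, " error:", np.linalg.norm(x - x_true))

With noisy distances one would instead solve the corresponding least-squares problem, which is part of the error-tolerance issue mentioned in the abstract.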

Thu, 17 Jan 2008

14:00 - 15:00
Comlab

Nonlinear problems in analysis of Krylov subspace methods

Prof Zdenek Strakos
(Academy of Sciences of the Czech Republic)
Abstract
Consider a system of linear algebraic equations $Ax=b$, where $A$ is an $n$ by $n$ real matrix and $b$ a real vector of length $n$. Unlike the linear iterative methods based on the idea of splitting of $A$, the Krylov subspace methods, which are used in computational kernels of various optimization techniques, look for an optimal approximate solution $x^k$ in the subspaces ${\cal K}_k (A, b) = \mbox{span} \{ b, Ab, \dots, A^{k-1}b\}$, $k = 1, 2, \dots$ (here we assume, with no loss of generality, $x^0 = 0$). As a consequence, though the problem $Ax = b$ is linear, Krylov subspace methods are not. Their convergence behaviour cannot be viewed as an (unimportant) initial transient stage followed by a subsequent convergence stage. Apart from very simple cases, uninteresting from the point of view of Krylov subspace methods, it cannot be meaningfully characterized by an asymptotic rate of convergence. In Krylov subspace methods such as the conjugate gradient method (CG) or the generalized minimal residual method (GMRES), the optimality at each step over Krylov subspaces of increasing dimensionality makes any linearized description inadequate.

CG applied to $Ax = b$ with a symmetric positive definite $A$ can be viewed as a method for the numerical minimization of the quadratic functional $1/2 (Ax, x) - (b,x)$. In order to reveal its nonlinear character, we consider CG as a matrix formulation of the Gauss-Christoffel quadrature, and show that it essentially solves the classical Stieltjes moment problem. Moreover, though the CG behaviour is fully determined by the spectral decomposition of the problem, the relationship between convergence and spectral information is anything but simple. We will explain several phenomena where an intuitive and commonly used argumentation, which can be found in the literature, leads to wrong conclusions. We also show that rounding error analysis of CG brings fundamental understanding of seemingly unrelated problems in convergence analysis and in the theory of the Gauss-Christoffel quadrature.

In the remaining time we demonstrate that in the unsymmetric case the spectral information is generally not sufficient for describing the behaviour of Krylov subspace methods. In particular, given an arbitrary prescribed convergence history of GMRES and an arbitrary prescribed spectrum of the system matrix, there is always a system $Ax=b$ such that GMRES follows the prescribed convergence history while $A$ has the prescribed spectrum.
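For concreteness, here is a minimal textbook conjugate gradient sketch in Python (standard CG, not the analysis of the talk): with $x^0 = 0$, each iterate minimizes the functional $1/2 (Ax, x) - (b, x)$, equivalently the $A$-norm of the error, over the growing Krylov subspace; the SPD test matrix is a made-up example.

    # Plain conjugate gradient method for SPD A; the A-norm of the error is
    # monotonically decreasing, but the size of each step depends on the whole
    # spectrum (and, in finite precision, on rounding errors), not on one rate.
    import numpy as np

    def cg(A, b, iters):
        x = np.zeros_like(b)
        r = b.copy()                  # residual for x^0 = 0
        p = r.copy()
        iterates = []
        for _ in range(iters):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
            iterates.append(x.copy())
        return iterates

    rng = np.random.default_rng(0)
    n = 100
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(np.linspace(1.0, 1e3, n)) @ Q.T   # SPD, spread-out spectrum
    b = rng.standard_normal(n)
    x_star = np.linalg.solve(A, b)

    errs = [np.sqrt((x_star - x) @ A @ (x_star - x)) for x in cg(A, b, 40)]
    print("A-norm errors (first five):", np.array(errs[:5]))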