Forthcoming events in this series
Convex quadratic semi-definite programming problem: algorithms and applications
Abstract
The talk starts with a general introduction of the convex
quadratic semidefinite programming problem (QSDP), followed by a number of
interesting examples arising from finance, statistics and computer science.
We then discuss the concept of primal nondegeneracy for QSDP and show that
some QSDPs are nondegenerate and others are not, even under the linear
independence assumption. The talk then moves on to the algorithmic side by
introducing the dual approach and how it naturally leads to Newton's method,
which is quadratically convergent for degenerate problems. On the
implementation side of the Newton method, we stress that direct methods for
the linear equations in Newton's method are impractical, simply because the
equations are large and dense. Our numerical experiments use
the conjugate gradient method, which works quite well for the nearest
correlation matrix problem. We also discuss the difficulty of finding
appropriate preconditioners for the linear systems encountered. The talk
concludes by discussing some other available methods and some future topics.
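A minimal numerical sketch of the nearest correlation matrix problem mentioned above, using the simple alternating-projections method with Dykstra's correction rather than the talk's Newton/CG approach; the 3x3 test matrix is hypothetical:

```python
import numpy as np

def nearest_correlation(A, iters=500):
    """Nearest correlation matrix by alternating projections with
    Dykstra's correction (Higham's method) -- a simple baseline,
    not the quadratically convergent Newton method of the talk."""
    Y = A.copy()
    dS = np.zeros_like(A)
    for _ in range(iters):
        R = Y - dS                          # apply Dykstra's correction
        w, V = np.linalg.eigh(R)            # project onto the PSD cone
        X = (V * np.maximum(w, 0.0)) @ V.T
        dS = X - R
        Y = X.copy()                        # project onto unit-diagonal matrices
        np.fill_diagonal(Y, 1.0)
    return Y

A = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, 0.3],
              [0.7, 0.3, 1.0]])             # indefinite, so not a correlation matrix
X = nearest_correlation(A)
print(np.diag(X), np.linalg.eigvalsh(X).min())
```

The semidefinite projection requires a full eigendecomposition at every step, which is the kind of dense linear algebra cost that motivates the faster Newton/CG machinery discussed in the talk.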
Multivariate highly oscillatory integration
Abstract
The aim of this talk is to describe several methods for numerically approximating
the integral of a multivariate highly oscillatory function. We begin with a review
of the asymptotic and Filon-type methods developed by Iserles and Nørsett. Using a
method developed by Levin as a point of departure we will construct a new method that
uses the same information as the Filon-type method, and obtains the same asymptotic
order, while not requiring moments. This allows us to integrate over nonsimplicial
domains, and with complicated oscillators.
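As a one-dimensional, moment-based illustration of the Filon idea (the talk's methods are multivariate and avoid moments entirely), one can interpolate f piecewise linearly and integrate the interpolant against the oscillator exactly; the integrand and frequency below are chosen arbitrarily:

```python
import numpy as np

def filon_linear(f, a, b, omega, panels=16):
    """Filon-type rule in 1D: interpolate f piecewise linearly and integrate
    the interpolant times exp(i*omega*x) exactly via the moments
    m0 = int e^{iwx} dx and m1 = int x e^{iwx} dx on each panel."""
    nodes = np.linspace(a, b, panels + 1)
    iw = 1j * omega
    total = 0.0 + 0.0j
    for xa, xb in zip(nodes[:-1], nodes[1:]):
        ea, eb = np.exp(iw * xa), np.exp(iw * xb)
        m0 = (eb - ea) / iw
        m1 = (xb * eb - xa * ea) / iw - m0 / iw
        fa, fb = f(xa), f(xb)
        c1 = (fb - fa) / (xb - xa)          # linear interpolant f ~ c0 + c1*x
        c0 = fa - c1 * xa
        total += c0 * m0 + c1 * m1
    return total

f = lambda x: 1.0 / (1.0 + x)
omega = 200.0
approx = filon_linear(f, 0.0, 1.0, omega)

# brute-force trapezoidal reference on a very fine grid
t = np.linspace(0.0, 1.0, 400001)
g = f(t) * np.exp(1j * omega * t)
ref = np.sum((g[:-1] + g[1:]) * np.diff(t)) / 2.0
print(abs(approx - ref))
```

Unlike standard quadrature, the cost here does not grow with the frequency omega, since the oscillation is absorbed into the analytically known moments.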
Supercomputing at Oxford
Abstract
High-performance computing is an important tool for computational science.
Oxford University has recently decided to invest £3M in a new supercomputing
facility which is under development now. In this seminar I will give an overview
of some activities in Oxford and provide a vision for supercomputing here.
I will discuss some of the numerical analysis software and tools,
such as Distributed Matlab and indicate some of the challenges at
the intersection of numerical analysis and high-performance computing.
Matrix roots: theory, computation and applications
Abstract
The aim of this talk is to give some understanding of the theory of matrix $p$'th roots (solutions to the nonlinear matrix equation $X^{p} = A$), to explain how and how not to compute roots, and to describe some applications. In particular, an application in finance will be described concerning roots of transition matrices from Markov models.
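For intuition only, the easy special case: when A is symmetric positive definite, its principal $p$'th root can be read off from an eigendecomposition. The general non-normal case, which the talk addresses, needs Schur-based methods instead:

```python
import numpy as np

def spd_pth_root(A, p):
    """Principal p-th root of a symmetric positive definite matrix,
    X = V diag(w**(1/p)) V^T from the eigendecomposition A = V diag(w) V^T."""
    w, V = np.linalg.eigh(A)
    return (V * w ** (1.0 / p)) @ V.T

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)      # a hypothetical SPD test matrix
X = spd_pth_root(A, 3)
print(np.allclose(np.linalg.matrix_power(X, 3), A))
```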
Strange discrete operators - A tour concerning meshless methods and image processing
Abstract
One of the oldest approaches in meshless methods for PDEs is the Interpolating Moving Least Squares (IMLS) technique developed in the 1980s. Although it is widely accepted by users working in fields as diverse as geoinformatics and crack dynamics, I shall take a fresh look at this method and ask about the equivalent difference operators that it generates implicitly. As it turns out, these operators are optimal only in trivial cases and are "strange" in general. I shall explore two different approaches for the computation of these operators.
On the other hand (and very different from IMLS), Total Variation Flow (TVF) PDEs are among the most recent developments in image processing and have received much attention lately. Again I shall show that they generate "strange" discrete operators and that they can easily behave badly even when properly implemented.
The surprising structure of Gaussian point clouds and its implications for signal processing
Abstract
We will explore connections between the structure of high-dimensional convex polytopes and information acquisition for compressible signals. A classical result in the field of convex polytopes is that if N points are drawn iid from a Gaussian distribution in dimension n << N, then only order (log N)^n of the points are vertices of their convex hull. Recent results show that provided n grows slowly with N, then with high probability all of the points are vertices of their convex hull. More surprisingly, a rich "neighborliness" structure emerges in the faces of the convex hull. One implication of this phenomenon is that an N-vector with k non-zeros can be recovered efficiently from only n random projections, with n = 2e k log(N/n). Alternatively, the best k-term approximation of a signal in any basis can be recovered from 2e k log(N/n) non-adaptive measurements, which is within a log factor of the optimal rate achievable for adaptive sampling. Additional implications for randomized error-correcting codes will be presented.
This work was joint with David L. Donoho.
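The recovery claim is easy to test numerically. Below is a hedged sketch of basis pursuit (minimise ||x||_1 subject to Ax = b) posed as a linear program; the dimensions are illustrative rather than tuned to the 2e k log(N/n) threshold:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, n, k = 60, 24, 2                              # illustrative sizes
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)     # n random Gaussian projections
b = A @ x_true

# basis pursuit  min ||x||_1  s.t.  Ax = b, as an LP with x = u - v, u, v >= 0
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
x_rec = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_rec - x_true)))
```

With these dimensions the sparse vector is recovered exactly (up to solver tolerance) with overwhelming probability, in line with the phase-transition phenomenon described in the abstract.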
Global performance of the Newton method
Abstract
In this talk we present different strategies for regularization of the pure Newton method (for minimization problems) and of the Gauss-Newton method (for systems of nonlinear equations). For these schemes, we prove general convergence results. We also establish global and local worst-case complexity bounds. It is shown that the corresponding search directions can be computed by standard linear algebra techniques.
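A minimal sketch of the regularisation idea, with a fixed regularisation parameter and a toy 2x2 system; the talk's schemes choose the regularisation adaptively and come with complexity guarantees:

```python
import numpy as np

def gauss_newton_reg(F, J, x0, lam=1e-3, iters=50, tol=1e-10):
    """Regularized Gauss-Newton: each step solves the standard dense
    linear algebra subproblem (J^T J + lam*I) d = -J^T F."""
    x = x0.astype(float)
    for _ in range(iters):
        r, A = F(x), J(x)
        d = np.linalg.solve(A.T @ A + lam * np.eye(len(x)), -A.T @ r)
        x = x + d
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# toy system: intersection of a circle and a line
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
x = gauss_newton_reg(F, J, np.array([2.0, 0.5]))
print(np.linalg.norm(F(x)))
```

For small fixed lam the iteration behaves like Newton's method near a well-conditioned root; the interesting theory concerns how the regularisation should scale to obtain global guarantees.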
Petrov-Galerkin Enriched Methods for Porous Media Applications
Abstract
We present a novel enhanced finite element method for the Darcy problem, starting from the unstable continuous $P_1 / P_0$ finite element spaces enriched with multiscale functions. The method is a departure from the standard mixed method framework used in these applications. The methods are derived in a Petrov-Galerkin framework in which both velocity and pressure trial spaces are enriched with functions based on residuals of the strong equations in each element and edge partition. The strategy leads to an enhanced velocity space containing an element of the lowest-order Raviart-Thomas space, and to a stable weak formulation preserving local mass conservation.
Numerical tests validate the method.
Jointly with Gabriel R Barrenechea, Universidad de Concepcion &
Frederic G C Valentin, LNCC
Numerical simulation of flows with strong density inhomogeneities
Abstract
Strong horizontal gradients of density are responsible for the occurrence of a large number of (often catastrophic) flows, such as katabatic winds, dust storms, pyroclastic flows and powder-snow avalanches. For a large number of applications, the overall density contrast in the flow remains small and simulations are carried out in the Boussinesq limit, where density variations appear only in the body-force term. However, pyroclastic flows and powder-snow avalanches involve much larger density contrasts, which implies that the inhomogeneous Navier-Stokes equations need to be solved, along with a closure equation describing the mass diffusion. We propose a Lagrange-Galerkin numerical scheme to solve this system, and prove optimal error bounds subject to constraints on the order of the discretization and the time-stepping. Simulations of physical relevance are then shown.
Modelling cerebrospinal fluid flow through the brain and hydrocephalus
Abstract
An integral part of the brain is a fluid flow system that is separate from brain tissue and the cerebral blood flow system: cerebrospinal fluid (CSF) is produced near the centre of the brain, flows out and around the brain, including around the spinal cord, and is absorbed primarily in a region between the brain tissue and the skull. Hydrocephalus covers a broad range of anomalous flow and pressure situations: the normal flow path can become blocked, and other problems can occur which result in abnormal tissue deformation or pressure changes. This talk will describe work that treats brain tissue as a poroelastic matrix through which the CSF can move when normal flow paths are blocked, producing tissue deformation and pressure changes. We have a number of models, from the simplest, treating the brain and CSF flow as having spherical symmetry, ranging to more complex, fully three-dimensional computations. As well as considering acute hydrocephalus, we touch on normal pressure hydrocephalus, idiopathic intracranial hypertension and simulation of an infusion test. The numerical methods used are a combination of finite difference and finite element techniques applied to an interesting set of hydro-elastic equations.
Recent activities in automatic differentiation and beyond
Abstract
In this talk, we report on recent activities in the development of automatic differentiation tools for Matlab and CapeML, a common intermediate language for process control, and highlight some recent AD applications. Lastly, we show the potential for parallelisation created by AD and comment on the impact on scientific computing due to emerging multicore chips which are providing substantial thread-based parallelism in a "pizza box" form factor.
Numerical integration in high dimensions: lifting the curse of dimensionality
Algebraic updates of preconditioners for solving similar algebraic linear systems
Abstract
We consider the solution of sequences of linear systems by preconditioned iterative methods. Such systems arise, for example, in applications such as CFD and structural mechanics. In some cases it is important to avoid the recomputation of preconditioners for subsequent systems. We propose an algebraic strategy that replaces new preconditioners by old preconditioners with simple updates. Efficiency of the new strategy, which generalizes the approach of Benzi and Bertaccini, is demonstrated using numerical experiments.
This talk presents results of joint work with Jurjen Duintjer Tebbens.
On the stability and convergence of moving mesh methods for a model convection-diffusion problem
Diagonal scaling of discrete differential forms
Abstract
The use of discrete differential forms in the construction of finite element discretisations of the Sobolev spaces H^s, H(div) and H(curl) is now routinely applied by numerical analysts and engineers alike. However, little attention has been paid to the conditioning of the resulting stiffness matrices, particularly in the case of the non-uniform meshes that arise when adaptive refinement algorithms are used. We study this issue and show that the matrices are generally rather poorly conditioned. Typically, diagonal scaling is applied (often unwittingly) as a preconditioner. However, whereas diagonal scaling removes the effect of the mesh non-uniformity in the case of Sobolev spaces H^s, we show this is not so in the case of the spaces H(curl) and H(div). We trace the reason behind this difference, and give a simple remedy for curing the problem.
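The phenomenon is easy to reproduce in the H^1 (Sobolev space) case, where diagonal scaling does remove the effect of mesh non-uniformity; the point of the talk is precisely that the analogous experiment for H(curl) and H(div) fails. A sketch with a 1D P1 stiffness matrix on a strongly graded mesh:

```python
import numpy as np

def stiffness_1d(x):
    """P1 stiffness matrix for -u'' = f on the nodes x, Dirichlet ends removed."""
    h = np.diff(x)
    n = len(x) - 2
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = 1.0 / h[i] + 1.0 / h[i + 1]
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -1.0 / h[i + 1]
    return K

x = np.linspace(0.0, 1.0, 41) ** 4        # strongly graded, adaptive-style mesh
K = stiffness_1d(x)
D = np.diag(1.0 / np.sqrt(np.diag(K)))    # symmetric diagonal (Jacobi) scaling
print(np.linalg.cond(K), np.linalg.cond(D @ K @ D))
```

The raw condition number blows up with the grading of the mesh, while the diagonally scaled matrix behaves like the uniform-mesh case.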
A novel, parallel PDE solver for unstructured grids
Abstract
We propose a new parallel domain decomposition algorithm to solve symmetric linear systems of equations derived from the discretization of PDEs on general unstructured grids of triangles or tetrahedra. The algorithm is based on a single-level Schwarz alternating procedure and a modified conjugate gradient solver. A single layer of overlap has been adopted in order to simplify the data structure and minimize the overhead. This approach makes the global convergence rate vary only slightly with the number of domains, and the algorithm becomes highly scalable. The algorithm has been implemented in Fortran 90 using MPI and is hence portable to different architectures. Numerical experiments have been carried out on a SunFire 15K parallel computer and have shown superlinear performance in some cases.
How to approach non-normal matrix eigenvalue problems
Abstract
Non-normal matrices can be tiresome; some eigenvalues may be phlegmatic while others may be volatile. Computable error bounds are rarely used in such computations. We offer a way to proceed. Let (e,q,p') be an approximate eigentriple for non-normal B. Form column and row residuals r = Bq - qe and s' = p'B - ep'. We establish the relation between the smallest perturbation E, in both spectral and Frobenius norms, that makes the approximations correct and the norms of r and s'. Our results extend to the case when q and p are tall thin matrices and e is a small square matrix. Now regard B as a perturbation of B-E to obtain a (first order) bound on the error in e as a product of ||E|| and the condition number of e, namely (||q|| ||p'||)/|p'q|.
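The quantities in the abstract are straightforward to compute. A small sketch for a hypothetical non-normal matrix B, with the residuals r and s' and the eigenvalue condition number (||q|| ||p'||)/|p'q|:

```python
import numpy as np

B = np.array([[1.0, 100.0],
              [0.0, 1.5]])            # non-normal: large off-diagonal coupling
w, V = np.linalg.eig(B)
e = w[0]
q = V[:, 0]                           # right eigenvector, normalised by eig
p = np.linalg.inv(V)[0, :]            # matching left (row) eigenvector
p = p / np.linalg.norm(p)

r = B @ q - q * e                     # column residual  r = Bq - qe
s = p @ B - e * p                     # row residual     s' = p'B - ep'
kappa = (np.linalg.norm(q) * np.linalg.norm(p)) / abs(p @ q)
print(np.linalg.norm(r), np.linalg.norm(s), kappa)
```

Here both residuals are at machine-precision level, yet the eigenvalue condition number is large: exactly the situation where computable error bounds of the kind discussed in the talk earn their keep.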
Adaptive preconditioners for Newton-Krylov methods
Abstract
The use of preconditioned Newton-Krylov methods is in many applications mandatory for computing efficiently the solution of large nonlinear systems of equations. However, the available preconditioners are often sub-optimal, due to the changing nature of the linearized operator. This is the case, for instance, for quasi-Newton methods where the Jacobian (and its preconditioner) is kept fixed at each nonlinear iteration, with the rate of convergence usually degraded from quadratic to linear. Updated Jacobians, on the other hand, require updated preconditioners, which may not be readily available. In this work we introduce an adaptive preconditioning technique based on the Krylov subspace information generated at previous steps in the nonlinear iteration. In particular, we use to advantage an adaptive technique suggested for restarted GMRES to enhance existing preconditioners with information about (almost) invariant subspaces constructed by GMRES at previous stages in the nonlinear iteration. We provide guidelines on the choice of invariant-subspace basis used in the construction of our preconditioner and demonstrate the improved performance on various test problems. As a useful general application we consider the case of augmented systems preconditioned by block triangular matrices based on the structure of the system matrix. We show that a sufficiently good solution involving the primal space operator allows for an efficient application of our adaptive technique restricted to the space of dual variables.
Algebraic multigrid using inverse-based coarsening
Abstract
In this talk we will review classical multigrid methods and give an overview on algebraic multigrid methods, in particular the "classical" approach to AMG by Ruge and Stueben.
After that we will introduce a new class of multilevel methods. These new AMGs exploit, on the one hand, information based on filtering vectors and, on the other hand, information about the inverse matrix to drive the coarsening process.
This new kind of AMG will be discussed and compared with "classical" AMG from a theoretical point of view as well as by showing some numerical examples.
On the numerical analysis of an augmented mixed finite element method for linear elasticity
Abstract
We present a new stabilized mixed finite element method for the linear elasticity problem. The approach is based on the introduction of Galerkin least-squares terms arising from the constitutive and equilibrium equations, and from the relation defining the rotation in terms of the displacement.
We show that the resulting augmented variational formulation and the associated Galerkin scheme are well posed, and that the latter becomes locking-free and asymptotically locking-free for Dirichlet and mixed boundary conditions, respectively. In particular, the discrete scheme allows the utilization of Raviart-Thomas spaces of lowest order for the stress tensor, piecewise linear elements for the displacement, and piecewise constants for the rotation.
In the case of mixed boundary conditions, the essential one (Neumann) is imposed weakly, which yields the introduction of the trace of the displacement as a suitable Lagrange multiplier. This trace is then approximated by piecewise linear elements on an independent partition of the Neumann boundary, whose mesh size needs to satisfy a compatibility condition with the mesh size associated with the triangulation of the domain. A reliable and efficient a posteriori error estimate is also described. Finally, several numerical results illustrating the performance of the augmented scheme are reported.
The finite element method for Cahn-Hilliard-Navier-Stokes equations
Abstract
The Cahn-Hilliard equations provide a model of phase transitions when two or more immiscible fluids interact. When coupled with the Navier-Stokes equations, we obtain a model for the dynamics of multiphase flow. This model takes into account the viscosities and densities of the various fluids present.
A finite element discretisation of the variable-density Cahn-Hilliard-Navier-Stokes equations is presented. An analysis of the discretisation and a reliable and efficient numerical solution method are presented.
Applications of radial basis functions
Abstract
I will describe some application areas for radial basis functions, and discuss how the computational problems can be overcome by the use of preconditioning methods and fast evaluation techniques.
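A minimal sketch of the basic computational problem: Gaussian RBF interpolation leads to a dense and typically ill-conditioned linear system, which is what makes the preconditioning and fast evaluation techniques of the talk necessary. The shape parameter and test function below are arbitrary choices:

```python
import numpy as np

def rbf_interpolate(centers, values, eps=8.0):
    """Gaussian RBF interpolation in 1D: solve the dense collocation system
    A c = f with A_ij = exp(-(eps*|x_i - x_j|)^2).  The matrix becomes very
    ill-conditioned as eps shrinks, motivating preconditioning."""
    A = np.exp(-(eps * (centers[:, None] - centers[None, :])) ** 2)
    c = np.linalg.solve(A, values)
    def s(x):
        return np.exp(-(eps * (x[:, None] - centers[None, :])) ** 2) @ c
    return s

xc = np.linspace(-1.0, 1.0, 25)
fc = 1.0 / (1.0 + 25.0 * xc ** 2)          # Runge's function
s = rbf_interpolate(xc, fc)
xe = np.linspace(-1.0, 1.0, 400)
print(np.max(np.abs(s(xc) - fc)),
      np.max(np.abs(s(xe) - 1.0 / (1.0 + 25.0 * xe ** 2))))
```

Direct solution of the dense system costs O(N^3); fast evaluation techniques are needed because each evaluation of the interpolant otherwise touches every centre.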
tba
Abstract
This Seminar has been cancelled and will now take place in Trinity Term, Week 3, 11 MAY.
Inverse problems and stochastic differential equations
Abstract
Using the one-dimensional diffusion equation as an example, this seminar looks at ways of constructing approximations to the solution and coefficient functions of differential equations when the coefficients are not fully defined. There may, however, be some information about the solution. The input data, usually given as values of a small number of functionals of the coefficients and the solution, is insufficient for specifying a well-posed problem, and so various extra assumptions are needed. It is argued that looking at these inverse problems as problems in Bayesian statistics is a unifying approach. We show how the standard methods of Tikhonov regularisation are related to special forms of random field. The numerical approximation of stochastic partial differential (Langevin) equations for sample generation will be discussed.
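The connection to Tikhonov regularisation can be illustrated on an ordinary least-squares problem: the regularised solution coincides with the posterior mean under a Gaussian prior. A small sketch with a hypothetical ill-conditioned design matrix:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularised least squares:
    x = argmin ||Ax - b||^2 + lam ||x||^2 = (A^T A + lam I)^{-1} A^T b.
    In the Bayesian reading, x is the posterior mean under the prior
    x ~ N(0, (1/lam) I) with iid Gaussian observation noise."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(2)
A = np.vander(np.linspace(0.0, 1.0, 20), 8, increasing=True)  # ill-conditioned
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-3 * rng.standard_normal(20)
xr = tikhonov(A, b, 1e-6)
print(np.linalg.norm(A @ xr - b), np.linalg.norm(xr - x_true))
```

The regularisation parameter lam plays the role of the prior precision; choosing it is exactly the kind of extra assumption the abstract refers to.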
High frequency scattering by convex polygons
Abstract
Standard finite element or boundary element methods for high frequency scattering problems, with piecewise polynomial approximation spaces, suffer from the limitation that the number of degrees of freedom required to achieve a prescribed level of accuracy grows at least linearly with respect to the frequency. Here we present a new boundary element method for which, by including in the approximation space the products of plane wave basis functions with piecewise polynomials supported on a graded mesh, we can demonstrate a computational cost that grows only logarithmically with respect to the frequency.
Cubature formulas, discrepancy and nonlinear approximation
Abstract
The main goal of this talk is to demonstrate connections between the following three big areas of research: the theory of cubature formulas (numerical integration), discrepancy theory, and nonlinear approximation. First, I will discuss a relation between results on cubature formulas and on discrepancy. In particular, I'll show how settings that are standard in the theory of cubature formulas can be translated into the discrepancy problem and into a natural generalization of it. This leads to the concept of r-discrepancy. Second, I'll present results on the relation between the construction of an optimal cubature formula with m knots for a given function class and the best nonlinear m-term approximation of a special function determined by that class. The nonlinear m-term approximation is taken with regard to a redundant dictionary, also determined by the function class. Third, I'll give some known lower and upper estimates of the errors of optimal cubature formulas for the class of functions with bounded mixed derivative. One of the important messages of this talk (well known in approximation theory) is that discrepancy theory is closely connected with the theory of cubature formulas for classes of functions with bounded mixed derivative.
Dynamic-load balancing issues and preliminary out-of-core experiments in a parallel sparse solver
Abstract
Parallel sparse direct solvers are an interesting alternative to iterative methods for some classes of large sparse systems of linear equations. In the context of a parallel sparse multifrontal solver (MUMPS), we describe a new dynamic scheduling strategy aiming at balancing both the workload and the memory usage. More precisely, this hybrid approach balances the workload under memory constraints. We show that the peak of memory can be significantly reduced, while we have also improved the performance of the solver.
Then, we present preliminary work concerning a parallel out-of-core extension of the solver MUMPS, enabling increasingly large simulation problems to be solved.
This is joint work with P. Amestoy, A. Guermouche, S. Pralet and E. Agullo.
Instability & transition of steady and pulsatile flow in stenotic/constricted pipes
Abstract
Through the advent of enhanced medical imaging, computational modelling can now be applied to anatomically correct arterial geometries. However, many flow features under physiological and pathological flow parameters, even in idealised problems, are relatively poorly understood. A commonly studied idealisation of an arterial blockage or stenosis, potentially generated by atherosclerosis, is a sinusoidally varying constricted tube. Although most physiological flow conditions are typically laminar, in this configuration turbulent flow states can arise due to the local increase in sectional Reynolds number. To examine the onset of turbulence in this geometry, under simple non-reversing pulsatile flow, we have applied Floquet stability analysis and direct numerical simulation.
As illustrated in the figure above, a period-doubling absolute instability mode is associated with alternating tilting of the vortex rings that are ejected out of the stenosis/constriction during each pulse. This primary instability occurs for relatively large reduced velocities associated with long pulse periods (or low Womersley numbers). For lower reduced velocities the primary instability typically manifests itself as azimuthal waves (Widnall instability modes) of low wavenumber that grow on each vortex ring. We have also observed that the shear layer of the steady axisymmetric flow is convectively unstable at still shorter temporal periods.
In this presentation we shall outline the challenges of modelling vascular flow problems with a particular focus on idealised stenotic flow. After briefly outlining the numerical analysis methods we shall discuss the flow investigations outlined above and their relation to more classical vortex instabilities.
Fast image inpainting (based on coherent transport)
Abstract
Image inpainting turns the art of image restoration, retouching, and disocclusion into a computer-automated trade. Mathematically, it may be viewed as an interpolation problem in BV, SBV, or other fancy function spaces thought suitable for digital images. It has recently drawn the attention of the numerical PDE community, which has led to some impressive results. However, stability restrictions of the explicit schemes suggested so far yield computing times that are next to prohibitive in the realm of interactive digital image processing. We address this issue by constructing an appropriate continuous energy functional that combines the possibility of a fast discrete minimization with high perceptible quality of the resulting inpainted images.
The talk will survey the background of the inpainting problem and prominent PDE-based methods before entering the discussion of the suggested new energy functional. Many images will be shown along the way, in parts with online demonstrations.
This is joint work with my student Thomas März.
Sensitivity issues for least-squares problems
Abstract
Alan Turing introduced the sensitivity of the solution of a numerical problem to changes in its data as a way to measure the difficulty of solving the problem accurately. Condition numbers are now considered fundamental to sensitivity analysis. They have been used to measure the mathematical difficulty of many linear algebra problems, including linear systems, linear least-squares, and eigenvalue problems. By definition, unless exact arithmetic is used, it is expected to be difficult to accurately solve an ill-conditioned problem.
In this talk we focus on least-squares problems. After a historical overview of condition numbers for least-squares, we introduce two related condition numbers. The first is the partial condition number, which measures the sensitivity of a linear combination of the components of the solution. The second is related to the truncated SVD solution of the problem, which is often used when the matrix is nearly rank deficient.
Throughout the talk we are interested in three types of results: closed formulas for condition numbers, sharp bounds, and statistical estimates.
Solving large sparse symmetric linear systems out of core
Abstract
Direct methods for solving large sparse linear systems of equations are popular because of their generality and robustness. As the requirements of computational scientists for more accurate models increase, so inevitably do the sizes of the systems that must be solved and the amount of memory needed by direct solvers.
For many users, the option of using a computer with a sufficiently large memory is either not available or is too expensive. Using a preconditioned iterative solver may be possible but for the "tough" systems that arise from many practical applications, the difficulties involved in finding and computing a good preconditioner can make iterative methods infeasible. An alternative is to use a direct solver that is able to hold its data structures on disk, that is, an out-of-core solver.
In this talk, we will explain the multifrontal algorithm and discuss the design and development of a new HSL sparse symmetric out-of-core solver that uses it. Both the system matrix A and its factors are stored externally. For the indefinite case, numerical pivoting using 1x1 and 2x2 pivots is incorporated. To minimise storage for the system data, a reverse communication interface is used. Input of A is either by rows or by elements.
An important feature of the package is that all input and output to disk is performed through a set of Fortran subroutines that manage a virtual memory system so that actual i/o occurs only when really necessary. Also important is to design in-core data structures that avoid expensive searches. All these aspects will be discussed.
At the time of writing, we are tuning the code for the positive-definite case and have performance figures for real problems. By the time of the seminar, we hope to have developed the indefinite case, too.
Optimization on matrix manifolds
Abstract
It is well known that the computation of a few extreme eigenvalues, and the corresponding eigenvectors, of a symmetric matrix A can be rewritten as computing extrema of the Rayleigh quotient of A. However, since the Rayleigh quotient is a homogeneous function of degree zero, its extremizers are not isolated. This difficulty can be remedied by restricting the search space to a well-chosen manifold, which brings the extreme eigenvalue problem into the realm of optimization on manifolds. In this presentation, I will show how a recently-proposed generalization of trust-region methods to Riemannian manifolds applies to this problem, and how the resulting algorithms compare with existing ones.
I will also show how the Joint Diagonalization problem (that is, approximately diagonalizing a collection of symmetric matrices via a congruence transformation) can be tackled by a differential geometric approach. This problem has an important application in Independent Component Analysis.
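For the extreme eigenvalue problem, the manifold viewpoint can be illustrated with the simplest possible method: gradient ascent on the Rayleigh quotient over the unit sphere, with a normalisation step as the retraction. This is only a crude stand-in for the Riemannian trust-region methods of the talk:

```python
import numpy as np

def max_eig_rayleigh(A, iters=5000, step=0.1):
    """Maximise the Rayleigh quotient x^T A x / x^T x over the unit sphere by
    Riemannian (projected) gradient ascent with a normalising retraction."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    s = step / np.linalg.norm(A, 2)
    for _ in range(iters):
        g = A @ x - (x @ A @ x) * x      # gradient projected to the tangent space
        x = x + s * g
        x = x / np.linalg.norm(x)        # retraction back onto the sphere
    return x @ A @ x, x

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2.0                      # a random symmetric test matrix
lam, x = max_eig_rayleigh(A)
print(abs(lam - np.linalg.eigvalsh(A).max()))
```

Restricting to the sphere removes the degeneracy caused by the Rayleigh quotient being homogeneous of degree zero; trust-region methods on the manifold then recover fast local convergence that plain gradient ascent lacks.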
From sparsity to block-sparsity: direct solution of linear systems of dimension 10^9
Abstract
We discuss a method for solving very large structured symmetric indefinite equation systems arising in optimization with interior point methods.
Many real-life economic models involve system dynamics, spatial distribution or uncertainty and lead to large-scale optimization problems. Such problems usually have a hidden structure: they are constructed by replication of some small generic block. The linear algebra subproblems which arise in optimization algorithms for such problems involve matrices which are not only sparse, but they additionally display a block-structure with many smaller blocks sparsely distributed in the large matrix.
We have developed a structure-exploiting parallel interior point solver for optimization problems. Its design uses object-orientated programming techniques. The solver OOPS (Object-Orientated Parallel Solver: http://www.maths.ed.ac.uk/~gondzio/parallel/solver.html) runs and achieves scalability on a number of different computing platforms. We illustrate its performance on a collection of problems with sizes reaching 10^9 variables, arising from asset liability management and portfolio optimization.
This is joint work with Andreas Grothey.
14:00
Scale-invariant moving finite elements for time-dependent nonlinear partial differential equations
Abstract
A scale-invariant moving finite element method is proposed for the
adaptive solution of nonlinear partial differential equations. The mesh
movement is based on a finite element discretisation of a scale-invariant
conservation principle incorporating a monitor function, while the time
discretisation of the resulting system of ordinary differential equations
may be carried out using a scale-invariant time-stepping. The accuracy and
reliability of the algorithm is tested against exact self-similar
solutions, where available, and a state-of-the-art $h$-refinement scheme
for a range of second and fourth order problems with moving boundaries.
The monitor functions used are the dependent variable and a monitor
related to the surface area of the solution manifold.
14:00
High-frequency cavity modes: efficient computation and applications
14:00
1st - A nonlinear Krylov accelerator for Modified Newton; 2nd - 3D computerized tomography from 4D data
Abstract
First, I'll give a very brief update on our nonlinear Krylov accelerator for the usual Modified Newton's method. This simple accelerator, which I devised and Neil Carlson implemented as a simple two-page Fortran add-on to our implicit stiff ODE solver, has been robust, simple, cheap, and automatic on all our moving-node computations since 1990. I report further experience with it here, by us and by others in diverse fields, because it is proving to be of great general usefulness, especially for solving nonlinear evolutionary PDEs or a smooth succession of steady states.
Second, I'll report on some recent work in computerized tomography from X-rays. With colored computer graphics I'll explain how the standard "filtered backprojection" method works for the classical 2D parallel beam problem. Then with that backprojection kernel function H(t) we'll use an integral "change of variables" approach for the 2D fan-beam geometry. Finally, we turn to the tomographic reconstruction of a 3D object f(x,y,z) from a wrapped-around cylindrical 2D array of detectors opposite a 2D array of sources, such as occurs in PET (positron-emission tomography) or in very-wide-cone-beam tomography with a finely spaced source spiral.
Structured perturbation results on matrices, eigenvalues and pseudospectra
Abstract
The famous Eckart-Young Theorem states that the (normwise) condition number of a matrix is equal to the reciprocal of its distance to the nearest singular matrix. In a recent paper we proved an extension of this to a number of structures common in matrix analysis, i.e. the structured condition number is equal to the reciprocal of the structured distance to the nearest singular matrix. In this talk we present a number of related results on structured eigenvalue perturbations and structured pseudospectra, for normwise and for componentwise perturbations.
A new look at Newton's method
Abstract
Current methods for globalizing Newton's method for solving systems of nonlinear equations fall back on steps biased towards the steepest-descent direction (e.g. Levenberg-Marquardt, trust regions, Cauchy-point dog-legs, etc.) when there is difficulty in making progress. This can occasionally lead to very slow convergence when short steps are repeatedly taken.
This talk looks at alternative strategies based on searching curved arcs related to Davidenko trajectories. Near to manifolds on which the Jacobian matrix is singular, certain conjugate steps are also interleaved, based on identifying a Pareto optimal solution.
Preliminary limited numerical experiments indicate that this approach is very effective, with rapid and ultimately second order convergence in almost all cases. It is hoped to present more detailed numerical evidence when the talk is given. The new ideas can also be incorporated with more recent ideas such as multifilters or nonmonotonic line searches without difficulty, although it may be that there is no longer much to gain by doing this.
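The curved-arc strategy itself is the speaker's; what can be sketched generically is the underlying Davidenko flow x'(t) = -J(x)^{-1} F(x), along whose exact trajectory the residual decays as F(x(t)) = e^{-t} F(x0). A crude Euler discretization of this flow (each step is just a damped Newton step, on a made-up test system of my choosing):

```python
import numpy as np

def davidenko_flow(F, J, x0, h=0.05, steps=400):
    """Follow Davidenko's ODE x' = -J(x)^{-1} F(x) with explicit Euler.
    Each step is a Newton step damped by the factor h."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = x - h * np.linalg.solve(J(x), F(x))
    return x

# example system: intersect the unit circle with the line x = y
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = davidenko_flow(F, J, [1.0, 0.3])
```

Following the trajectory this literally is slow compared with full Newton steps; the talk's interest lies precisely in smarter arc-searching near singular manifolds of J, which this sketch does not attempt.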
(a) Another Orthogonal Matrix & (b) An application of Pfaff's Theorem (on skew-symmetric matrices)
Abstract
Abstract 1 Another Orthogonal Matrix
A Householder reflection and a suitable product of Givens rotations are two well-known examples of an
orthogonal matrix with given first column. We present another way to produce such a matrix and apply
it to produce a "fast Givens" method to compute the R factor of A, A = QR. This approach avoids the danger
of underflow/overflow.
(joint work with Eric Barszcz)
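The speakers' construction is new; the classical Householder baseline it is compared against can be sketched as follows (my illustration): reflect across the hyperplane orthogonal to w = v - e1, which swaps the unit vectors e1 and v.

```python
import numpy as np

def orth_with_first_column(v):
    """Return an orthogonal matrix whose first column is the unit vector v,
    built from a single Householder reflection H = I - 2 w w^T / (w^T w)."""
    v = np.asarray(v, float)
    n = v.size
    e1 = np.zeros(n); e1[0] = 1.0
    w = v - e1
    if np.linalg.norm(w) < 1e-14:       # v is already e1
        return np.eye(n)
    return np.eye(n) - 2.0 * np.outer(w, w) / (w @ w)
```

Since ||v|| = ||e1||, the reflection maps e1 to v, so the first column of the (symmetric, orthogonal) result is exactly v.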
Abstract 2 An application of Pfaff's Theorem (on skew-symmetric matrices)
There are no constraints on the eigenvalues of a product of two real symmetric matrices but what about the
product of two real skew-symmetric matrices?
(joint work with A Dubrulle)
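One plausible reading of how Pfaff's theorem (det S = Pf(S)^2 for skew-symmetric S) bears on the question, not necessarily the speakers' argument: for invertible skew-symmetric A, det(λI - AB) = det(A) det(λA^{-1} - B) = [Pf(A) Pf(λA^{-1} - B)]^2 is a perfect square, so every eigenvalue of AB has even multiplicity. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); A = A - A.T   # real skew-symmetric
B = rng.standard_normal((n, n)); B = B - B.T

P = A @ B
# Pfaff's theorem gives det(AB) = Pf(A)^2 Pf(B)^2 >= 0, and the
# characteristic polynomial of AB is a perfect square, so the
# eigenvalues of AB should appear in equal pairs.
ev = np.sort_complex(np.linalg.eigvals(P))
```

Sorting the spectrum and comparing adjacent entries exhibits the pairing numerically.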
Quadratic eigenvalue problems and related distance problems
14:00
Backward error analysis, a new view and further improvements
Abstract
When studying invariant quantities and the stability of discretization schemes for time-dependent differential equations (ODEs), backward error analysis (BEA) has proven an invaluable tool. Although the established results give very accurate estimates, they are generally stated for "worst case" scenarios. By taking into account the structure of the differential equations themselves, these estimates can be improved, yielding sharper bounds on invariant quantities and stability. In the talk I will give an overview of BEA and its applications as it stands, emphasizing the shortcomings in the estimates. An alternative strategy is then proposed to overcome these shortcomings, resulting in a tool which, when used in connection with results from dynamical-systems theory, gives very good insight into the dynamics of discretized differential equations.
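The flavour of BEA can be seen in the simplest possible example (my illustration, not from the talk): explicit Euler with step h applied to x' = -x follows the first modified equation x' = -(1 + h/2)x to O(h^2), so the numerical solution is much closer to the modified flow than to the exact one.

```python
import numpy as np

h, T = 0.1, 2.0
x = 1.0
for _ in range(int(T / h)):
    x += h * (-x)                       # explicit Euler for x' = -x, x(0) = 1

exact = np.exp(-T)                      # flow of the original equation
modified = np.exp(-(1 + h / 2) * T)     # flow of the first modified equation
```

Here the Euler solution agrees with the modified flow more than ten times better than with the exact flow, which is exactly the kind of statement BEA makes rigorous.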
To SQP or not to SQP - modern alternatives in large-scale nonlinear optimization
14:00
Preconditioning for eigenvalue problems: ideas, algorithms, error analysis
Abstract
The convergence of iterative methods for solving the linear system Ax = b with a Hermitian positive definite matrix A depends on the condition number of A: the smaller the latter the faster the former. Hence the idea to multiply the equation by a matrix T such that the condition number of TA is much smaller than that of A. The above is a common interpretation of the technique known as preconditioning, the matrix T being referred to as the preconditioner for A.
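A minimal preconditioned conjugate gradient sketch makes the effect concrete (a generic illustration; here the preconditioner T is applied as a dense matrix, with Jacobi scaling as the example):

```python
import numpy as np

def pcg(A, b, T=None, tol=1e-8, maxit=500):
    """Conjugate gradients for SPD A, optionally preconditioned by T ~ A^{-1}.
    Returns the solution and the iteration count."""
    x = np.zeros(len(b))
    r = b.copy()
    z = T @ r if T is not None else r
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = T @ r if T is not None else r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# stiff tridiagonal test matrix with a Jacobi preconditioner
n = 200
d = np.logspace(0, 4, n) + 2.0
A = np.diag(d) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
T = np.diag(1.0 / d)
b = np.ones(n)
x_pre, it_pre = pcg(A, b, T)
x_no, it_no = pcg(A, b)
```

With the wildly varying diagonal, the Jacobi-preconditioned system has condition number close to 1 and converges in a handful of iterations, while the unpreconditioned run needs far more.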
The eigenvalue computation does not seem to benefit from the direct application of such a technique. Indeed, what is the point in replacing the standard eigenvalue problem Ax = λx with the generalized one TAx = λTx that does not appear to be any easier to solve? It is hardly surprising then that modern eigensolvers, such as ARPACK, do not use preconditioning directly. Instead, an option is provided to accelerate the convergence to the sought eigenpairs by applying spectral transformation, which generally requires the user to supply a subroutine that solves the system (A−σI)y = z, and it is entirely up to the user to employ preconditioning if they opt to solve this system iteratively.
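The spectral transformation amounts to power iteration with (A - σI)^{-1}, under which the eigenvalue nearest the shift σ becomes dominant. A bare-bones version (illustrative only; ARPACK's Lanczos/Arnoldi machinery is far more sophisticated):

```python
import numpy as np

def closest_eigenpair(A, sigma, iters=200):
    """Shift-invert iteration: eigenvalues lambda of A map to 1/(lambda - sigma),
    so the eigenvalue closest to sigma dominates the transformed operator."""
    n = A.shape[0]
    M = A - sigma * np.eye(n)
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        v = np.linalg.solve(M, v)   # apply (A - sigma I)^{-1}; in practice factor M once
        v /= np.linalg.norm(v)
    return v @ A @ v, v             # Rayleigh quotient and eigenvector

lam, v = closest_eigenpair(np.diag(np.arange(1.0, 11.0)), sigma=3.3)
```

The repeated solve with A - σI is exactly the system the abstract mentions the user may choose to solve iteratively with a preconditioner.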
In this talk we discuss some alternative views on the preconditioning technique that are more general and more useful in the convergence analysis of iterative methods and that show, in particular, that the direct preconditioning approach does make sense in eigenvalue computation. We review some iterative algorithms that can benefit from the direct preconditioning, present available convergence results and demonstrate both theoretically and numerically that the direct preconditioning approach has advantages over the two-level approach. Finally, we discuss the role that preconditioning can play in the a posteriori error analysis, present some a posteriori error estimates that use preconditioning and compare them with commonly used estimates in terms of the Euclidean norm of residual.
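One of the directly preconditioned algorithms of this kind, preconditioned inverse iteration (PINVIT), fits in a few lines (a sketch assuming an SPD matrix and an SPD preconditioner T ≈ A^{-1}; the convergence theory discussed in the talk is not reproduced):

```python
import numpy as np

def pinvit(A, T, v, iters=200):
    """Preconditioned inverse iteration for the smallest eigenpair of SPD A:
    v <- v - T (A v - rho v), with rho the Rayleigh quotient."""
    v = v / np.linalg.norm(v)
    for _ in range(iters):
        rho = v @ A @ v
        v = v - T @ (A @ v - rho * v)
        v /= np.linalg.norm(v)
    return v @ A @ v, v

d = np.arange(1.0, 51.0)
A = np.diag(d)
T = np.diag(1.0 / (d + 0.5))       # deliberately inexact inverse as preconditioner
rho, v = pinvit(A, T, np.ones(50))
```

Note that the preconditioner is applied to the eigenvalue residual A v - ρ v directly, with no inner linear solve: this is the "direct preconditioning" the talk argues does make sense for eigenvalue computation.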