Forthcoming events in this series
14:00
Scale-invariant moving finite elements for time-dependent nonlinear partial differential equations
Abstract
A scale-invariant moving finite element method is proposed for the
adaptive solution of nonlinear partial differential equations. The mesh
movement is based on a finite element discretisation of a scale-invariant
conservation principle incorporating a monitor function, while the time
discretisation of the resulting system of ordinary differential equations
may be carried out using a scale-invariant time-stepping scheme. The accuracy and
reliability of the algorithm are tested against exact self-similar
solutions, where available, and against a state-of-the-art $h$-refinement scheme
for a range of second- and fourth-order problems with moving boundaries.
The monitor functions used are the dependent variable and a monitor
related to the surface area of the solution manifold.
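As a rough illustration of the conservation principle (a generic one-dimensional sketch, not necessarily the precise formulation used in the talk): with monitor function M(u), each moving subinterval is required to keep a fixed fraction of the total monitor integral,
\[
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\int_{x_i(t)}^{x_{i+1}(t)} M(u)\,\mathrm{d}x}{\int_{\Omega(t)} M(u)\,\mathrm{d}x} \;=\; 0,
\qquad
M(u) = u \quad\text{or}\quad M(u) = \sqrt{1+u_x^2},
\]
and a finite element discretisation of this relation determines the mesh velocities; the two monitors shown correspond to the dependent variable and to the arc length (surface area) of the solution manifold mentioned above.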
14:00
High-frequency cavity modes: efficient computation and applications
14:00
1st - A nonlinear Krylov accelerator for Modified Newton; 2nd - 3D computerized tomography from 4D data
Abstract
First, I'll give a very brief update on our nonlinear Krylov accelerator for the usual Modified Newton's method. This simple accelerator, which I devised and Neil Carlson implemented as a two-page Fortran add-on to our implicit stiff ODE solver, has been robust, simple, cheap, and automatic on all our moving node computations since 1990. I publicize further experience with it here, by us and by others in diverse fields, because it is proving to be of great general usefulness, especially for solving nonlinear evolutionary PDEs or a smooth succession of steady states.
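As background, the sketch below shows a generic Anderson-type nonlinear Krylov acceleration of a fixed-point map G (for instance a modified Newton map with a frozen Jacobian). It is only illustrative Python, not the two-page Fortran add-on described in the talk, whose details differ.

    import numpy as np

    def accelerated_fixed_point(G, x0, m=5, tol=1e-10, maxit=100):
        # Anderson-type acceleration of the fixed-point iteration x <- G(x),
        # e.g. G(x) = x - solve(J0, F(x)) for modified Newton with a frozen Jacobian J0.
        x = np.asarray(x0, dtype=float)
        f = G(x) - x                        # fixed-point residual
        dX, dF = [], []                     # histories of iterate and residual differences
        for _ in range(maxit):
            if np.linalg.norm(f) < tol:
                break
            if not dF:
                dx = f                      # first step: plain fixed-point update
            else:
                Fk = np.column_stack(dF)
                gamma, *_ = np.linalg.lstsq(Fk, f, rcond=None)
                dx = f - (np.column_stack(dX) + Fk) @ gamma
            x_new = x + dx
            f_new = G(x_new) - x_new
            dX.append(x_new - x); dF.append(f_new - f)
            if len(dF) > m:                 # keep only the last m differences
                dX.pop(0); dF.pop(0)
            x, f = x_new, f_new
        return x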
Second, I'll report on some recent work in computerized tomography from X-rays. With colored computer graphics I'll explain how the standard "filtered backprojection" method works for the classical 2D parallel beam problem. Then with that backprojection kernel function H(t) we'll use an integral "change of variables" approach for the 2D fan-beam geometry. Finally, we turn to the tomographic reconstruction of a 3D object f(x,y,z) from a wrapped-around cylindrical 2D array of detectors opposite a 2D array of sources, such as occurs in PET (positron-emission tomography) or in very-wide-cone-beam tomography with a finely spaced source spiral.
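A minimal numerical sketch of parallel-beam filtered backprojection is given below (illustrative Python under simplifying assumptions of my own: detector offsets spanning [-1, 1] and angles uniformly spaced over [0, pi); the kernel function H(t) and the fan-beam and 3D extensions of the talk are not reproduced here).

    import numpy as np

    def fbp_parallel(sinogram, thetas, xx, yy):
        # sinogram[i, j]: line integral at angle thetas[i], detector offset s[j];
        # xx, yy: coordinate grids (e.g. from np.meshgrid) of the reconstruction points.
        n_det = sinogram.shape[1]
        s = np.linspace(-1.0, 1.0, n_det)
        # Ramp (Ram-Lak) filter applied to each projection in the Fourier domain.
        freqs = np.fft.fftfreq(n_det, d=s[1] - s[0])
        filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1).real
        image = np.zeros_like(xx, dtype=float)
        for theta, proj in zip(thetas, filtered):
            t = xx * np.cos(theta) + yy * np.sin(theta)   # detector coordinate of (x, y)
            image += np.interp(t, s, proj)                # smear the filtered projection back
        return image * np.pi / len(thetas)                # quadrature weight over [0, pi)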
Structured perturbation results on matrices, eigenvalues and pseudospectra
Abstract
The famous Eckart-Young Theorem states that the (normwise) condition number of a matrix is equal to the reciprocal of its distance to the nearest singular matrix. In a recent paper we proved an extension of this to a number of structures common in matrix analysis, namely that the structured condition number is equal to the reciprocal of the structured distance to the nearest singular matrix. In this talk we present a number of related results on structured eigenvalue perturbations and structured pseudospectra, for normwise and for componentwise perturbations.
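For reference, the unstructured statement can be written as follows: for an invertible matrix A and the 2-norm,
\[
\min\{\|\Delta A\|_2 : A+\Delta A \text{ singular}\} = \sigma_{\min}(A) = \frac{1}{\|A^{-1}\|_2},
\qquad\text{hence}\qquad
\frac{\min\{\|\Delta A\|_2 : A+\Delta A \text{ singular}\}}{\|A\|_2} = \frac{1}{\kappa_2(A)},
\]
and the structured results of the talk restrict the admissible perturbations ΔA to those sharing the structure of A.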
A new look at Newton's method
Abstract
Current methods for globalizing Newton's method for solving systems of nonlinear equations fall back on steps biased towards the steepest descent direction (e.g. Levenberg-Marquardt, trust regions, Cauchy-point dog-legs) when there is difficulty in making progress. This can occasionally lead to very slow convergence when short steps are repeatedly taken.
This talk looks at alternative strategies based on searching curved arcs related to Davidenko trajectories. Near to manifolds on which the Jacobian matrix is singular, certain conjugate steps are also interleaved, based on identifying a Pareto optimal solution.
Preliminary limited numerical experiments indicate that this approach is very effective, with rapid and ultimately second order convergence in almost all cases. It is hoped to present more detailed numerical evidence when the talk is given. The new ideas can also be incorporated with more recent ideas such as multifilters or nonmonotonic line searches without difficulty, although it may be that there is no longer much to gain by doing this.
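For context (a standard observation rather than the new strategy itself): the Davidenko trajectory for F(x) = 0 is the solution of
\[
\frac{\mathrm{d}x}{\mathrm{d}t} = -J(x)^{-1} F(x), \qquad x(0) = x_0,
\]
along which F(x(t)) = e^{-t} F(x_0), so the residual decays monotonically; a full Newton step is one forward Euler step of unit length along this trajectory, and a curved-arc search can be viewed as following the trajectory more faithfully where the linearisation is poor.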
(a) Another Orthogonal Matrix & (b) An application of Pfaff's Theorem (on skew-symmetric matrices)
Abstract
Abstract 1 Another Orthogonal Matrix
A Householder reflection and a suitable product of Givens rotations are two well-known examples of an
orthogonal matrix with a given first column. We present another way to produce such a matrix and apply
it to obtain a "fast Givens" method for computing the R factor of A, A = QR. This approach avoids the danger
of under/overflow.
(joint work with Eric Barszcz)
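As background, here is a sketch of the first, classical construction (illustrative Python only; the "another way" of the talk and the fast Givens application are not reproduced here): a single Householder reflection, with its first column flipped in sign if necessary, gives an orthogonal matrix whose first column is a prescribed unit vector q.

    import numpy as np

    def orthogonal_with_first_column(q):
        # q: unit vector; returns an orthogonal matrix whose first column is q.
        n = q.size
        e1 = np.zeros(n); e1[0] = 1.0
        s = -1.0 if q[0] > 0 else 1.0        # sign chosen to avoid cancellation in q - s*e1
        v = q - s * e1
        H = np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)   # Householder reflection, H e1 = s*q
        H[:, 0] *= s                         # flip the first column so that it equals q
        return H                             # still orthogonal: equals H @ diag(s, 1, ..., 1)

    # Example: for q = np.array([0.6, 0.0, 0.8]), Q = orthogonal_with_first_column(q)
    # satisfies np.allclose(Q[:, 0], q) and np.allclose(Q.T @ Q, np.eye(3)).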
Abstract 2 An application of Pfaff's Theorem (on skew-symmetric matrices)
There are no constraints on the eigenvalues of a product of two real symmetric matrices, but what about the
product of two real skew-symmetric matrices?
(joint work with A Dubrulle)
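For reference, Pfaff's theorem states that a real skew-symmetric matrix A of even order satisfies
\[
\det(A) = \operatorname{Pf}(A)^2 \ge 0,
\]
where Pf(A), the Pfaffian, is a polynomial in the entries of A; the talk asks what such facts imply for the eigenvalues of a product of two real skew-symmetric matrices.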
Quadratic eigenvalue problems and related distance problems
14:00
Backward error analysis, a new view and further improvements
Abstract
When studying invariant quantities and the stability of discretization schemes for time-dependent ordinary differential equations (ODEs), backward error analysis (BEA) has proven itself an invaluable tool. Although the established results give very accurate estimates, they are generally stated for "worst case" scenarios. By taking into account the structure of the differential equations themselves, further improvements can be made, leading to sharper estimates on invariant quantities and stability. In the talk I will give an overview of BEA and its applications as it stands, emphasizing the shortcomings in the estimates. An alternative strategy is then proposed to overcome these shortcomings, resulting in a tool which, when used in connection with results from dynamical systems theory, gives very good insight into the dynamics of discretized differential equations.
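To fix ideas (standard BEA background): a one-step method $y_{n+1} = \Phi_h(y_n)$ of order p for $\dot y = f(y)$ is interpreted as the formally exact flow of a modified equation
\[
\dot{\tilde y} = f(\tilde y) + h^{p} f_{p+1}(\tilde y) + h^{p+1} f_{p+2}(\tilde y) + \cdots,
\]
so questions about invariants and stability of the discretization become questions about a perturbed differential equation, to which results from dynamical systems theory can then be applied.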
14:00
To SQP or not to SQP - modern alternatives in large-scale nonlinear optimization
14:00
Preconditioning for eigenvalue problems: ideas, algorithms, error analysis
Abstract
The convergence of iterative methods for solving the linear system Ax = b with a Hermitian positive definite matrix A depends on the condition number of A: the smaller the latter, the faster the former. Hence the idea of multiplying the equation by a matrix T such that the condition number of TA is much smaller than that of A. The above is a common interpretation of the technique known as preconditioning, the matrix T being referred to as the preconditioner for A.
The eigenvalue computation does not seem to benefit from the direct application of such a technique. Indeed, what is the point in replacing the standard eigenvalue problem Ax = λx with the generalized one TAx = λTx, which does not appear to be any easier to solve? It is hardly surprising, then, that modern eigensolvers, such as ARPACK, do not use preconditioning directly. Instead, an option is provided to accelerate the convergence to the sought eigenpairs by applying a spectral transformation, which generally requires the user to supply a subroutine that solves the system (A−σI)y = z, and it is entirely up to the user to employ preconditioning if they opt to solve this system iteratively.
In this talk we discuss some alternative views on the preconditioning technique that are more general and more useful in the convergence analysis of iterative methods and that show, in particular, that the direct preconditioning approach does make sense in eigenvalue computation. We review some iterative algorithms that can benefit from direct preconditioning, present available convergence results and demonstrate both theoretically and numerically that the direct preconditioning approach has advantages over the two-level approach. Finally, we discuss the role that preconditioning can play in a posteriori error analysis, present some a posteriori error estimates that use preconditioning and compare them with commonly used estimates in terms of the Euclidean norm of the residual.
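A minimal sketch of one directly preconditioned eigensolver, preconditioned inverse iteration (PINVIT), is given below (illustrative Python; it assumes A is symmetric positive definite and that T applies an approximation of the inverse of A, and is not meant to represent the specific algorithms or error estimates of the talk).

    import numpy as np

    def pinvit(A, T, x0, tol=1e-8, maxit=200):
        # Preconditioned inverse iteration for the smallest eigenpair of a symmetric
        # positive definite A; T is a callable applying a preconditioner ~ inverse of A.
        x = x0 / np.linalg.norm(x0)
        rho = x @ (A @ x)                   # Rayleigh quotient (||x|| = 1)
        for _ in range(maxit):
            r = A @ x - rho * x             # eigenresidual (its norm drives a posteriori estimates)
            if np.linalg.norm(r) < tol:
                break
            x = x - T(r)                    # directly preconditioned correction
            x /= np.linalg.norm(x)
            rho = x @ (A @ x)               # Rayleigh quotient of the new iterate
        return rho, x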
14:00
Computing ratings for eigenvectors
Abstract
We consider the problem of computing ratings using the results of games (such as chess) played between a set of n players, and show how this problem can be reduced to computing the positive eigenvectors corresponding to the dominant eigenvalues of certain n by n matrices. There is a close connection with the stationary probability distributions of certain Markov chains. In practice, if n is large, then the matrices involved will be sparse, and the power method may be used to solve the eigenvalue problems efficiently.
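A sketch of the power iteration is shown below (illustrative Python; how the matrix is assembled from game results is the subject of the talk and is not reproduced here — the code assumes a nonnegative, irreducible matrix so that a positive dominant eigenvector exists).

    import numpy as np

    def dominant_eigenvector(A, tol=1e-12, maxit=1000):
        # Power method for the positive eigenvector of the dominant eigenvalue
        # of a nonnegative irreducible matrix A (sparse in practice).
        n = A.shape[0]
        x = np.full(n, 1.0 / n)
        for _ in range(maxit):
            y = A @ x
            y /= y.sum()                    # normalise; entries remain positive
            if np.linalg.norm(y - x, 1) < tol:
                return y
            x = y
        return x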
15:00
The use of coupled solvers for complex multiphase and reacting flows
Abstract
Many industrial flow problems, especially in the minerals and process
industries, are very complex, with strong interactions between phases
and components, and with very different length and time scales. This
presentation outlines the algorithms used in the CFX-5 software, and
describes the extension of its coupled solver approach to some
multi-scale industrial problems, including Population Balance modelling
to predict size distributions of a disperse phase. These results will be
illustrated on some practical industrial problems.
14:00
Resolution of Gibbs' phenomenon from global to semi-global
Abstract
Spectral projections enjoy high order convergence for globally smooth functions. However, a single discontinuity introduces O(1) spurious oscillations near the discontinuity and reduces the high order convergence rate to first order; this is the Gibbs phenomenon. Although a direct expansion of the function in terms of its global moments yields this low order approximation, high resolution information is retained in the global moments. Two techniques for the resolution of the Gibbs phenomenon are discussed: filtering and reprojection methods. An adaptive filter with optimal joint time-frequency localization is presented, which recovers a function from its N-term Fourier projection within the error bound \exp(-Nd(x)), where d(x) is the distance from the point being recovered to the nearest discontinuity. Symmetric filtering, however, must sacrifice accuracy when approaching a discontinuity. To overcome this limitation, Gegenbauer postprocessing was introduced by Gottlieb, Shu et al., which recovers a function from its N-term Fourier projection within the error bound \exp(-N). An extension of Gegenbauer postprocessing with improved convergence and robustness properties is presented, the robust Gibbs complements. Filtering and reprojection methods will be put in a unifying framework, and their properties such as robustness and computational cost contrasted. This research was conducted jointly with Eitan Tadmor and Anne Gelb.
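To illustrate the filtering idea only (a standard exponential filter in Python, not the adaptive, optimally localized filter or the Gegenbauer and robust Gibbs complement reprojections of the talk): the Fourier coefficients are damped by a smooth function of |k|/N before the partial sum is evaluated.

    import numpy as np

    def filtered_partial_sum(coeffs, x, p=8):
        # coeffs[j] is the Fourier coefficient c_k with k = j - N, for k = -N, ..., N;
        # evaluates sum_k sigma(|k|/N) c_k exp(i k x) with an exponential filter sigma.
        N = (len(coeffs) - 1) // 2
        k = np.arange(-N, N + 1)
        alpha = -np.log(np.finfo(float).eps)           # filter decays to machine precision at |k| = N
        sigma = np.exp(-alpha * (np.abs(k) / N) ** p)  # exponential filter of order p
        return np.real(np.exp(1j * np.outer(x, k)) @ (sigma * coeffs))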
Weighted matchings for the preconditioning of symmetric indefinite matrices
Abstract
The use of weighted matchings is becoming increasingly standard in the
solution of sparse linear systems. While non-symmetric permutations based on these
matchings have been the state-of-the-art for
several years (especially for direct solvers), approaches for symmetric
matrices have only recently gained attention.

In this talk we discuss results of our work on using weighted matchings in
the preconditioning of symmetric indefinite linear systems, following ideas
introduced by Duff and Gilbert. In order to maintain symmetry,
the weighted matching is symmetrized and the cycle structure of the
resulting matching is used to build reorderings that form small diagonal
blocks from the matched entries.

For the preconditioning we investigated two approaches. One is an
incomplete $LDL^{T}$ preconditioning that chooses $1\times1$ or $2\times2$ diagonal pivots
based on a simple tridiagonal pivoting criterion. The second approach
targets distributed computing, and is based on factorized sparse approximate
inverses, whose existence, in turn, is based on the existence of an $LDL^{T}$
factorization. Results for a number of comprehensive test sets are given,
including comparisons with sparse direct solvers and other preconditioning
approaches.
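One plausible reading of the cycle-based reordering, as an illustrative Python sketch (the actual heuristics for choosing and splitting cycles in the work described may differ): the cycles of the matching permutation are broken into pairs, and each pair becomes a small diagonal block after a symmetric reordering.

    def pairs_from_matching(perm):
        # perm[i] is the column matched to row i by the (symmetrized) weighted matching.
        n = len(perm)
        visited = [False] * n
        blocks = []
        for start in range(n):
            if visited[start]:
                continue
            cycle, i = [], start
            while not visited[i]:            # trace one cycle of the permutation
                visited[i] = True
                cycle.append(i)
                i = perm[i]
            for j in range(0, len(cycle) - 1, 2):
                blocks.append((cycle[j], cycle[j + 1]))   # matched pair -> 2x2 diagonal block
            if len(cycle) % 2 == 1:
                blocks.append((cycle[-1],))               # leftover entry -> 1x1 block
        return blocks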
An interior-point method for MPECs based on strictly feasible relaxations
Abstract
An interior-point method for solving mathematical programs with
equilibrium constraints (MPECs) is proposed. At each iteration of the
algorithm, a single primal-dual step is computed from each subproblem of
a sequence. Each subproblem is defined as a relaxation of the MPEC with
a nonempty strictly feasible region. In contrast to previous
approaches, the proposed relaxation scheme preserves a nonempty strictly
feasible region for each subproblem even in the limit. Local and superlinear
convergence of the algorithm is proved even with a less restrictive
strict complementarity condition than the standard one. Moreover,
mechanisms for inducing global convergence in practice are proposed.
Numerical results on the MacMPEC test problem set demonstrate the
fast local convergence properties of the algorithm.
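As background (the generic relaxation idea only; the specific scheme of the talk, which keeps strict feasibility even in the limit, is not reproduced here), the complementarity constraints of an MPEC,
\[
0 \le G(x) \;\perp\; H(x) \ge 0,
\]
are commonly relaxed to
\[
G(x) \ge 0, \qquad H(x) \ge 0, \qquad G_i(x)\,H_i(x) \le \theta \quad (i = 1,\dots,m),
\]
with a relaxation parameter θ > 0, which opens up a strictly feasible region; the difficulty is that driving θ to zero normally destroys strict feasibility, which is exactly what the proposed scheme avoids.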
The trapezoidal rule in the complex plane
Abstract
The trapezoidal rule for numerical integration is remarkably accurate when
the integrand under consideration is smooth and periodic. In this
situation it is superior to more sophisticated methods like Simpson's rule
and even the Gauss-Legendre rule. In the first part of the talk we
discuss this phenomenon and give a few elementary examples. In the second
part of the talk we discuss the application of this idea to the numerical
evaluation of contour integrals in the complex plane.
Demonstrations involving numerical differentiation, the computation
of special functions, and the inversion of the Laplace transform will be
presented.
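A small sketch of one of the applications mentioned, numerical differentiation via Cauchy's integral formula (illustrative Python; the radius, number of nodes and example are my own choices): the integrand is smooth and periodic in the angle, so the trapezoidal rule converges geometrically.

    import numpy as np
    from math import factorial

    def cauchy_derivative(f, z0, order=1, radius=0.5, n=64):
        # Approximates the order-th derivative of an analytic f at z0 by applying the
        # trapezoidal rule to f^(k)(z0) = k!/(2*pi*i) * contour integral of f(z)/(z - z0)^(k+1) dz
        # over the circle z = z0 + radius*exp(i*theta).
        theta = 2 * np.pi * np.arange(n) / n
        z = z0 + radius * np.exp(1j * theta)
        return factorial(order) * np.mean(f(z) / (radius * np.exp(1j * theta)) ** order)

    # Example: cauchy_derivative(np.exp, 0.0) is approximately exp(0) = 1.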
Patterns of turbulence
Abstract
Plane Couette flow - the flow between two infinite parallel plates moving in opposite directions -
undergoes a discontinuous transition from laminar flow to turbulence as the Reynolds number is
increased. Due to its simplicity, this flow has long served as one of the canonical examples for understanding shear turbulence and the subcritical transition process typical of channel and pipe flows. Only recently was it discovered in very large aspect-ratio experiments that this flow also exhibits remarkable pattern formation near transition. Steady, spatially periodic patterns of distinct regions of turbulent and laminar flow emerge spontaneously from uniform turbulence as the Reynolds number is decreased. The length scale of these patterns is more than an order of magnitude larger than the plate separation. It now appears that turbulent-laminar patterns are inevitable intermediate states on the route from turbulent to laminar flow in many shear flows. I will explain how we have overcome the difficulty of simulating these large-scale patterns and show results from studies of three types of patterns: periodic, localized, and intermittent.