Thu, 28 Nov 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Block LU factorization with panel Rank Revealing Pivoting and its Communication Avoiding version

Dr Amal Khabou
(University of Manchester)
Abstract

We present a block LU factorization with panel rank revealing pivoting (block LU_PRRP), an algorithm based on strong rank revealing QR for the panel factorization. Block LU_PRRP is more stable than Gaussian elimination with partial pivoting (GEPP), with a theoretical upper bound on the growth factor of $(1+ \tau b)^{(n/b)-1}$, where $b$ is the size of the panel used during the block factorization, $\tau$ is a parameter of the strong rank revealing QR factorization, and $n$ is the number of columns of the matrix. For example, if the size of the panel is $b = 64$ and $\tau = 2$, then $(1+2b)^{(n/b)-1} = (1.079)^{n-64} \ll 2^{n-1}$, where $2^{n-1}$ is the upper bound on the growth factor of GEPP. Our extensive numerical experiments show that the new factorization scheme is as numerically stable as GEPP in practice, but it is more resistant to some pathological cases where GEPP fails. We note that the block LU_PRRP factorization performs only $O(n^2 b)$ additional floating point operations compared to GEPP.
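To get a feel for the numbers quoted in the abstract, the two growth factor bounds can be evaluated directly. The following small script is illustrative only (the parameter values $n = 1024$, $b = 64$, $\tau = 2$ are chosen here; the abstract fixes only $b$ and $\tau$):

```python
import math

def lu_prrp_growth_log2(n, b, tau):
    """log2 of the block LU_PRRP growth factor bound (1 + tau*b)^((n/b) - 1)."""
    return (n / b - 1) * math.log2(1 + tau * b)

def gepp_growth_log2(n):
    """log2 of the GEPP growth factor bound 2^(n-1)."""
    return n - 1

n, b, tau = 1024, 64, 2
print(f"LU_PRRP bound: 2^{lu_prrp_growth_log2(n, b, tau):.1f}")
print(f"GEPP bound:    2^{gepp_growth_log2(n)}")

# Per-column growth base quoted in the abstract: (1 + 2*64)^(1/64) ~ 1.079
base = (1 + tau * b) ** (1 / b)
print(f"per-column growth base: {base:.3f}")
```

Working in logarithms avoids overflow: for these values the LU_PRRP bound is roughly $2^{105}$ against GEPP's $2^{1023}$, which is the gap the abstract's inequality expresses.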

Thu, 24 Oct 2013

14:00 - 15:00
L5

A geometric theory of phase transitions in convex optimization

Dr Martin Lotz
(University of Manchester)
Abstract

Convex regularization has become a popular approach to solving large-scale inverse or data separation problems. A prominent example is the problem of identifying a sparse signal from linear samples by minimizing the $\ell_1$ norm under linear constraints. Recent empirical research indicates that many convex regularization problems on random data exhibit a phase transition phenomenon: the probability of successfully recovering a signal changes abruptly from zero to one as the number of constraints increases past a certain threshold. We present a rigorous analysis that explains why phase transitions are ubiquitous in convex optimization. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems, to demixing problems, and to cone programs with random affine constraints. These applications depend on a new summary parameter, the statistical dimension of cones, that canonically extends the dimension of a linear subspace to the class of convex cones.

Joint work with Dennis Amelunxen, Mike McCoy and Joel Tropp.
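For orientation, the statistical dimension mentioned above has a concise standard definition in this line of work (this formula is supplied here for context, not quoted from the abstract):

```latex
\delta(C) \;=\; \mathbb{E}\,\bigl\|\Pi_C(g)\bigr\|_2^2,
\qquad g \sim \mathcal{N}(0, I_n),
```

where $\Pi_C$ denotes Euclidean projection onto the closed convex cone $C$. For a linear subspace $L$ one recovers $\delta(L) = \dim L$, and the phase transition with $m$ random constraints occurs around $m \approx \delta(C)$.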

Thu, 11 Oct 2012

16:00 - 17:00
DH 1st floor SR

Mathematical sociology is not an oxymoron

Martin Everett
(University of Manchester)
Abstract

The use of formal mathematical models in sociology started in the 1940s and attracted mathematicians such as Frank Harary in the 1950s. The idea is to take the rather intuitive ideas described in social theory and express these in formal mathematical terms. Social network analysis is probably the best known of these and it is the area which has caught the imagination of a wider audience and has been the subject of a number of popular books. We shall give a brief overview of the field of social networks and will then look at three examples which have thrown up problems of interest to the mathematical community. We first look at positional analysis techniques and give a formulation that tries to capture the notion of social role by using graph coloration. We look at algebraic structures, properties, characterizations, algorithms and applications, including food webs. Our second and related example looks at core-periphery structures in social networks. Our final example relates to what the network community refer to as two-mode data and a general approach to analyzing networks of this form. In all cases we shall look at the mathematics involved and discuss some open problems and areas of research that could benefit from new approaches and insights.
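One common way to formalize "social role via graph coloration" is a regular coloring: vertices with the same color must see the same set of colors among their neighbors. The sketch below is an illustrative toy of that definition (the graph, names, and roles are invented here, not taken from the talk):

```python
def is_regular_coloring(adj, color):
    """Check whether `color` is a regular (role) coloring of the graph `adj`:
    vertices with the same color must see the same *set* of colors
    among their neighbors."""
    neighbor_colors = {
        v: frozenset(color[u] for u in nbrs) for v, nbrs in adj.items()
    }
    for v in adj:
        for w in adj:
            if color[v] == color[w] and neighbor_colors[v] != neighbor_colors[w]:
                return False
    return True

# Toy advisor/student graph: two advisors, three students.
adj = {
    "a1": {"s1", "s2"},
    "a2": {"s3"},
    "s1": {"a1"},
    "s2": {"a1"},
    "s3": {"a2"},
}
roles = {"a1": "advisor", "a2": "advisor",
         "s1": "student", "s2": "student", "s3": "student"}
print(is_regular_coloring(adj, roles))  # prints True
```

Every advisor sees only students and every student sees only advisors, so the two roles are consistent; relabelling one advisor as a student would break regularity.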

Thu, 09 Feb 2012

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Efficient, communication-minimizing algorithms for the symmetric eigenvalue decomposition and the singular value decomposition

Dr Yuji Nakatsukasa
(University of Manchester)
Abstract

Computing the eigenvalue decomposition of a symmetric matrix and the singular value decomposition of a general matrix are two of the central tasks in numerical linear algebra. There has been much recent work in the development of linear algebra algorithms that minimize communication cost. However, the reduction in communication cost sometimes comes at the expense of significantly more arithmetic and potential instability.


In this talk I will describe algorithms for the two decompositions that have optimal communication cost and arithmetic cost within a small factor of those for the best known algorithms. The key idea is to use the best rational approximation of the sign function, which lets the algorithm converge in just two steps. The algorithms are backward stable and easily parallelizable. Preliminary numerical experiments demonstrate their efficiency.
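The sign function idea underlying the talk can be previewed in the scalar case: the classical Newton iteration $x \leftarrow (x + 1/x)/2$ converges to $\mathrm{sign}(x)$ for real nonzero $x$, and the algorithms described above replace it with a best rational (Zolotarev-type) approximation so that convergence takes just two steps. The scalar demo below is an illustration of the simplest member of this family, not the talk's matrix algorithm:

```python
def newton_sign(x, tol=1e-15, max_iter=60):
    """Newton iteration x <- (x + 1/x)/2, converging to sign(x) for real x != 0."""
    for _ in range(max_iter):
        x_new = 0.5 * (x + 1.0 / x)
        if abs(x_new - x) <= tol * abs(x_new):
            return x_new
        x = x_new
    return x

print(newton_sign(3.7))    # ~  1.0
print(newton_sign(-0.01))  # ~ -1.0
```

The matrix analogue replaces $1/x$ with a matrix inverse, which is where the communication and arithmetic costs arise, hence the payoff of a rational approximation with very few steps.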

Thu, 04 May 2000

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Analysis of the Cholesky method with iterative refinement for solving the symmetric definite generalized eigenproblem

Prof Nick Higham
(University of Manchester)
Abstract

The Cholesky factorization approach to solving the symmetric definite generalized eigenvalue problem $Ax = \lambda Bx$, where $A$ is symmetric and $B$ is symmetric positive definite, computes a Cholesky factorization $B = LL^T$ and solves the equivalent standard symmetric eigenvalue problem $Cy = \lambda y$, where $C = L^{-1} A L^{-T}$. Provided that a stable eigensolver is used, standard error analysis says that the computed eigenvalues are exact for $A+\Delta A$ and $B+\Delta B$ with $\max( \|\Delta A\|_2/\|A\|_2, \|\Delta B\|_2/\|B\|_2 )$ bounded by a multiple of $\kappa_2(B)u$, where $u$ is the unit roundoff. We take the Jacobi method as the eigensolver and explain how backward error bounds potentially much smaller than $\kappa_2(B)u$ can be obtained.

To show the practical utility of our bounds we describe a vibration problem from structural engineering in which $B$ is ill conditioned yet the error bounds are small. We show how, in cases of instability, iterative refinement based on Newton's method can be used to produce eigenpairs with small backward errors.

Our analysis and experiments also give insight into the popular Cholesky--QR method used in LAPACK, in which the QR method is used as the eigensolver.
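The reduction $C = L^{-1} A L^{-T}$ described in the abstract can be checked by hand on a $2\times 2$ pair. The matrices below are an illustrative toy of my own choosing; the point is only that the eigenvalues of $C$ coincide with the generalized eigenvalues of $(A, B)$:

```python
import math

def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# Toy symmetric definite pair (illustrative values).
A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric
B = [[2.0, 0.0], [0.0, 1.0]]   # symmetric positive definite

# Cholesky B = L L^T for a 2x2 matrix, and L^{-1}.
l11 = math.sqrt(B[0][0])
l21 = B[1][0] / l11
l22 = math.sqrt(B[1][1] - l21 * l21)
Li = [[1 / l11, 0.0], [-l21 / (l11 * l22), 1 / l22]]
LiT = [[Li[0][0], Li[1][0]], [Li[0][1], Li[1][1]]]

# C = L^{-1} A L^{-T}.
C = matmul2(matmul2(Li, A), LiT)

# Eigenvalues of the symmetric 2x2 matrix C via the characteristic polynomial.
tr = C[0][0] + C[1][1]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
disc = math.sqrt(tr * tr - 4 * det)
eig_C = sorted([(tr - disc) / 2, (tr + disc) / 2])

# Generalized eigenvalues directly from det(A - lambda*B) = 0 (2x2 case).
a = B[0][0] * B[1][1] - B[0][1] * B[1][0]
bq = -(A[0][0] * B[1][1] + A[1][1] * B[0][0]) + (A[0][1] * B[1][0] + A[1][0] * B[0][1])
c = A[0][0] * A[1][1] - A[0][1] * A[1][0]
d = math.sqrt(bq * bq - 4 * a * c)
eig_gen = sorted([(-bq - d) / (2 * a), (-bq + d) / (2 * a)])

print(eig_C, eig_gen)  # the two spectra agree
```

For this pair both computations give $(3 \pm \sqrt{3})/2$, confirming the equivalence of the two eigenproblems.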

Thu, 22 Nov 2001

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

A new preconditioning technique for the solution of the biharmonic problem

Dr Milan Mihajlovic
(University of Manchester)
Abstract

In this presentation we examine the convergence characteristics of a Krylov subspace solver preconditioned by a new indefinite constraint-type preconditioner, when applied to discrete systems arising from low-order mixed finite element approximation of the classical biharmonic problem. The preconditioning operator leads to preconditioned systems having an eigenvalue distribution consisting of a tightly clustered set together with a small number of outliers. We compare the convergence characteristics of the new approach with those of a standard block-diagonal Schur complement preconditioner that has proved to be extremely effective in the context of mixed approximation methods.
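For orientation, the generic structure behind such comparisons is a saddle-point system and its block-diagonal Schur complement preconditioner (a schematic form; the talk's specific discrete operators may differ):

```latex
\mathcal{A} = \begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix},
\qquad
\mathcal{P} = \begin{pmatrix} A & 0 \\ 0 & B A^{-1} B^{T} \end{pmatrix}.
```

With exact blocks, $\mathcal{P}^{-1}\mathcal{A}$ has only three distinct eigenvalues, $1$ and $(1 \pm \sqrt{5})/2$, which is the classical motivation for preconditioners of this type; practical variants replace the diagonal blocks with cheap approximations.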


In the second part of the presentation we are concerned with the efficient parallel implementation of the proposed algorithm on modern shared memory architectures. We consider the use of efficient parallel "black-box" solvers for the Dirichlet Laplacian problems based on sparse Cholesky factorisation and multigrid, and for this purpose we use publicly available codes from the HSL library and the MGNet collection. We compare the performance of our algorithm with sparse direct solvers from the HSL library and discuss some implementation-related issues.

Thu, 17 Oct 2002

14:00 - 15:00
Comlab

Recent results on accuracy and stability of numerical algorithms

Prof Nick Higham
(University of Manchester)
Abstract

The study of the finite precision behaviour of numerical algorithms dates back at least as far as Turing and Wilkinson in the 1940s. At the start of the 21st century, this area of research is still very active.

We focus on some topics of current interest, describing recent developments and trends and pointing out future research directions. The talk will be accessible to those who are not specialists in numerical analysis.

Specific topics intended to be addressed include

  • Floating point arithmetic: correctly rounded elementary functions, and the fused multiply-add operation.
  • The use of extra precision for key parts of a computation: iterative refinement in fixed and mixed precision.
  • Gaussian elimination with rook pivoting and new error bounds for Gaussian elimination.
  • Automatic error analysis.
  • Application and analysis of hyperbolic transformations.
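The iterative refinement item above can be illustrated with a toy experiment: solve a small system with a deliberately inaccurate "low-precision" solver, then correct it using residuals computed in working precision. Everything here (the rounding stand-in, the $2\times 2$ system) is an invented illustration of the technique, not code from the talk:

```python
import math

def round_sig(x, digits=4):
    """Crude stand-in for low-precision arithmetic: keep a few significant digits."""
    if x == 0.0:
        return 0.0
    mag = math.floor(math.log10(abs(x)))
    return round(x, digits - 1 - mag)

def solve2x2_lowprec(A, b):
    """Cramer's rule with rounded results: a deliberately inaccurate solver."""
    det = round_sig(A[0][0] * A[1][1] - A[0][1] * A[1][0])
    x0 = round_sig((b[0] * A[1][1] - b[1] * A[0][1]) / det)
    x1 = round_sig((A[0][0] * b[1] - A[1][0] * b[0]) / det)
    return [x0, x1]

def refine(A, b, steps=3):
    """Fixed-precision iterative refinement: correct the low-precision solve
    using residuals evaluated in full working precision."""
    x = solve2x2_lowprec(A, b)
    for _ in range(steps):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        d = solve2x2_lowprec(A, r)      # cheap correction solve
        x = [x[i] + d[i] for i in range(2)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
exact = [1.0 / 11.0, 7.0 / 11.0]        # exact solution of this toy system
raw = solve2x2_lowprec(A, b)
ref = refine(A, b)
print(max(abs(raw[i] - exact[i]) for i in range(2)))   # error of the cheap solve
print(max(abs(ref[i] - exact[i]) for i in range(2)))   # far smaller after refinement
```

Each refinement step shrinks the error by roughly the accuracy of the cheap solver, until it bottoms out near working precision, which is the basic mechanism behind mixed-precision refinement schemes.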
Thu, 19 Oct 2006

14:00 - 15:00
Comlab

Matrix roots: theory, computation and applications

Prof Nick Higham
(University of Manchester)
Abstract

The aim of this talk is to give some understanding of the theory of matrix $p$'th roots (solutions to the nonlinear matrix equation $X^{p} = A$), to explain how and how not to compute roots, and to describe some applications. In particular, an application in finance will be described concerning roots of transition matrices from Markov models.
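For the square root case $p = 2$, one well-known method is the Denman-Beavers iteration; the sketch below is a minimal illustration of that iteration on a $2\times 2$ symmetric positive definite matrix of my own choosing (the talk surveys the general theory, not this specific code):

```python
def mat_inv2(M):
    """Inverse of a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sqrtm_db(A, iters=30):
    """Denman-Beavers iteration: Y <- (Y + Z^{-1})/2, Z <- (Z + Y^{-1})/2,
    with Y0 = A, Z0 = I; Y converges to A^{1/2} (and Z to A^{-1/2})
    when A has no eigenvalues on the closed negative real axis."""
    Y = [row[:] for row in A]
    Z = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(iters):
        Yi, Zi = mat_inv2(Y), mat_inv2(Z)
        Y = [[0.5 * (Y[i][j] + Zi[i][j]) for j in range(2)] for i in range(2)]
        Z = [[0.5 * (Z[i][j] + Yi[i][j]) for j in range(2)] for i in range(2)]
    return Y

A = [[5.0, 2.0], [2.0, 2.0]]   # symmetric positive definite, illustrative
X = sqrtm_db(A)
XX = matmul2(X, X)
print(XX)  # ~ A
```

The coupled form is used because it is numerically stable, whereas a naive Newton iteration for $X^2 = A$ can be unstable, which is part of the "how not to compute roots" story.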

Thu, 10 Mar 2011

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Optimal Iterative Solvers for Saddle Point Problems

Prof David Silvester
(University of Manchester)
Abstract

In this talk we discuss the design of efficient numerical methods for solving symmetric indefinite linear systems arising from mixed approximation of elliptic PDE problems with associated constraints. Examples include linear elasticity (Navier-Lame equations), steady fluid flow (Stokes' equations) and electromagnetism (Maxwell's equations).

The novel feature of our iterative solution approach is the incorporation of error control in the natural "energy" norm in combination with an a posteriori estimator for the PDE approximation error. This leads to a robust and optimally efficient stopping criterion: the iteration is terminated as soon as the algebraic error is insignificant compared to the approximation error. We describe a "proof of concept" MATLAB implementation of this algorithm, which we call EST_MINRES, and we illustrate its effectiveness when integrated into our Incompressible Flow Iterative Solution Software (IFISS) package (http://www.manchester.ac.uk/ifiss/).
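The stopping rule described above can be caricatured in a few lines: iterate only until an algebraic-error estimate falls below a fraction of the (given) approximation-error estimate. The sketch below uses a simple Richardson iteration on a tiny SPD system as a stand-in for MINRES, and a residual norm as a crude error proxy; all names, parameters, and the iteration itself are illustrative assumptions, not the EST_MINRES implementation:

```python
def solve_with_stop(A, b, approx_error_est, theta=0.1, max_iter=500):
    """Toy version of the stopping rule: a damped Richardson iteration
    (standing in for MINRES) stopped once a residual-based algebraic-error
    estimate drops below theta * (approximation-error estimate)."""
    n = len(b)
    x = [0.0] * n
    omega = 0.2  # damping parameter chosen for this toy SPD system
    for k in range(max_iter):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        alg_err_est = sum(ri * ri for ri in r) ** 0.5   # crude error proxy
        if alg_err_est <= theta * approx_error_est:
            return x, k
        x = [x[i] + omega * r[i] for i in range(n)]
    return x, max_iter

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x, its = solve_with_stop(A, b, approx_error_est=1e-3)
print(its)  # stops early: no point iterating past the discretization accuracy
```

The payoff is exactly the one the abstract describes: once the algebraic error is negligible relative to the PDE approximation error, further iterations buy nothing, so the solver terminates.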
