Forthcoming events in this series


Thu, 12 Jan 2017
14:00
L5

Tight Optimality and Convexity Conditions for Piecewise Smooth Functions

Prof. Andreas Griewank
(Yachay Tech University)
Abstract

Functions defined by evaluation programs involving smooth elementals and absolute values, as well as max and min, are piecewise smooth. For this class we present first- and second-order necessary and sufficient conditions for the functions to be locally optimal, or convex, or at least to possess a supporting hyperplane. The conditions generalize the classical KKT and SSC theory and are constructive, though in the case of convexity they may be combinatorial to verify. As a side product we find that, under the Mangasarian-Fromovitz kink qualification, the well-established nonsmooth concept of subdifferential regularity is equivalent to first-order convexity. All results are based on piecewise linearization and suggest corresponding optimization algorithms.
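As a minimal illustration (my example, not the speaker's): for $f(x) = |x^2 - 1|$, the piecewise linearization at a point $\bar x$ replaces the smooth elemental $x^2$ by its tangent while keeping the absolute value intact,

$$ \Delta f(\bar x; \Delta x) \;=\; \left| \bar x^2 - 1 + 2\bar x\,\Delta x \right| - \left| \bar x^2 - 1 \right| , $$

which matches $f(\bar x + \Delta x) - f(\bar x)$ up to $O(\Delta x^2)$ and retains the kink structure on which the optimality and convexity conditions are formulated.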

Thu, 01 Dec 2016

14:00 - 15:00
L5

A multilevel method for semidefinite programming relaxations of polynomial optimization problems with structured sparsity

Panos Parpas
(Imperial College)
Abstract

We propose a multilevel paradigm for the global optimisation of polynomials with sparse support. Such polynomials arise through the discretisation of PDEs, optimal control problems, and in global optimisation applications in general. We construct projection operators to relate the primal and dual variables of the SDP relaxation between lower and higher levels in the hierarchy, and prove theoretical results confirming their usefulness. Numerical results are presented for polynomial problems, showing how these operators can be used in a hierarchical fashion to solve large-scale problems with high accuracy.
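As a schematic illustration of how transfer operators can act on SDP matrix variables (the interpolation matrix P and its use below are my own illustrative assumptions, not the construction from the talk), congruence with P moves a positive semidefinite matrix between levels while preserving semidefiniteness:

```python
import numpy as np

def restrict(X_fine, P):
    # Congruence P^T X P maps a fine-level PSD matrix variable to the coarse
    # level; PSD-ness is preserved since z^T (P^T X P) z = (Pz)^T X (Pz) >= 0.
    return P.T @ X_fine @ P

def prolong(X_coarse, P):
    # The adjoint operation lifts a coarse-level matrix back to the fine level.
    return P @ X_coarse @ P.T

# Toy check with a random interpolation operator and a random PSD matrix.
rng = np.random.default_rng(0)
P = rng.random((8, 4))        # hypothetical fine-to-coarse interpolation matrix
B = rng.random((8, 8))
X = B @ B.T                   # a PSD matrix on the fine level
print(np.linalg.eigvalsh(restrict(X, P)).min() >= -1e-12)  # True: still PSD
```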

Thu, 24 Nov 2016

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Stochastic methods for inverting matrices as a tool for designing Stochastic quasi-Newton methods

Dr Robert Gower
(INRIA - Ecole Normale Supérieure)
Abstract

I will present a broad family of stochastic algorithms for inverting a matrix, including specialized variants which maintain symmetry or positive definiteness of the iterates. All methods in the family converge globally and linearly, with explicit rates. In special cases, the methods obtained are stochastic block variants of several quasi-Newton updates, including bad Broyden (BB), good Broyden (GB), Powell-symmetric-Broyden (PSB), Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). After a pause for questions, I will present a block stochastic BFGS method based on the stochastic method for inverting positive definite matrices. In this method, the estimate of the inverse Hessian that the method maintains is updated at each iteration using a sketch of the Hessian, i.e., a randomly generated compressed form of the Hessian. I will propose several sketching strategies, present a new quasi-Newton method that uses stochastic block BFGS updates combined with the variance reduction approach SVRG to compute batch stochastic gradients, and prove linear convergence of the resulting method. Numerical tests on large-scale logistic regression problems reveal that our method is more robust and substantially outperforms current state-of-the-art methods.
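To give a flavour of the family (a minimal sketch under assumed Gaussian sketching, not necessarily the variants emphasised in the talk), the iteration below projects the current iterate onto the sketched constraint $S^T A X = S^T$ in the Frobenius norm, so each step only requires solving a small system of the sketch size:

```python
import numpy as np

def sketch_and_project_inverse(A, sketch_size=5, num_iters=400, seed=0):
    # Randomized iteration for X ~= inv(A): sample a sketch S, then project
    # X onto {X : S^T A X = S^T} in the Frobenius norm.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = np.zeros((n, n))
    for _ in range(num_iters):
        S = rng.standard_normal((n, sketch_size))
        W = A.T @ S                               # n x s
        R = S.T - S.T @ (A @ X)                   # residual of the sketched equation
        X += W @ np.linalg.solve(W.T @ W, R)      # small s x s solve per step
    return X

A = np.random.rand(30, 30) + 30 * np.eye(30)      # well-conditioned test matrix
X = sketch_and_project_inverse(A)
print(np.linalg.norm(X @ A - np.eye(30)))         # error shrinks as iterations grow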

Thu, 17 Nov 2016

14:00 - 15:00
L5

Second order approximation of the MRI signal for single shot parameter assessment

Prof. Rodrigo Platte
(Arizona State University)
Abstract

Most current methods of Magnetic Resonance Imaging (MRI) reconstruction interpret raw signal values as samples of the Fourier transform of the object. Although this is computationally convenient, it neglects relaxation and off-resonance evolution in phase, both of which can occur to a significant extent during a typical MRI signal. A more accurate model, known as Parameter Assessment by Recovery from Signal Encoding (PARSE), takes the time evolution of the signal into consideration. This model uses three parameters that depend on tissue properties: transverse magnetization, signal decay rate, and frequency offset from resonance. Two difficulties in recovering an image using this model are the low SNR for long acquisition times in single-shot MRI, and the nonlinear dependence of the signal on the decay rate and frequency offset. In this talk, we address the latter issue by using a second-order approximation of the original PARSE model. The linearized model can be solved using convex optimization augmented with well-established regularization techniques such as total variation. The sensitivity of the parameters to noise and computational challenges associated with this approximation will be discussed.
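In schematic form (notation mine), each tissue contributes a factor $e^{-t(R + i\omega)}$ to the signal, with decay rate $R$ and off-resonance frequency $\omega$; the second-order approximation referred to in the talk replaces this nonlinear dependence by a quadratic one,

$$ e^{-t(R + i\omega)} \;\approx\; 1 - t(R + i\omega) + \tfrac{1}{2} t^2 (R + i\omega)^2 , $$

so that, after introducing the lifted quantities $m$, $m(R + i\omega)$ and $m(R + i\omega)^2$ as unknowns, the data depend linearly on the unknowns and convex optimization with, e.g., total variation regularization applies.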

Thu, 03 Nov 2016

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Nonnegative matrix factorization through sparse regression

Dr Robert Luce
(EPFL Lausanne)
Abstract

We consider the problem of computing a nonnegative low-rank factorization of a given nonnegative input matrix under the so-called "separability condition". This assumption makes this otherwise NP-hard problem solvable in polynomial time, and we will use first-order optimization techniques to compute such a factorization. The optimization model used is based on sparse regression with a self-dictionary, in which the low-rank constraint is relaxed to the minimization of an l1-norm objective function. We apply these techniques to endmember detection and classification in hyperspectral imaging data.
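In one common way of writing the model (the talk's exact formulation may differ in details), separability means that the input $X$ contains the factor among its own columns, $X = X(:,\mathcal K) H$ with $H \ge 0$, and the self-dictionary relaxation seeks a nonnegative coefficient matrix with few nonzero rows:

$$ \min_{W \ge 0} \; \lambda \sum_i \|W_{i,:}\|_\infty \; + \; \|X - XW\|_F^2 , $$

where the $\ell_1$-type sum of row maxima promotes selecting a small set of columns of $X$ as the dictionary.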

Thu, 27 Oct 2016

14:00 - 15:00
L5

Semidefinite approximations of matrix logarithm

Hamza Fawzi
(University of Cambridge)
Abstract

 The matrix logarithm, when applied to symmetric positive definite matrices, is known to satisfy a notable concavity property in the positive semidefinite (Loewner) order. This concavity property is a cornerstone result in the study of operator convex functions and has important applications in matrix concentration inequalities and quantum information theory.
In this talk I will show that certain rational approximations of the matrix logarithm remarkably preserve this concavity property and moreover, are amenable to semidefinite programming. Such approximations allow us to use off-the-shelf semidefinite programming solvers for convex optimization problems involving the matrix logarithm. These approximations are also useful in the scalar case and provide a much faster alternative to existing methods based on successive approximation for problems involving the exponential/relative entropy cone. I will conclude by showing some applications to problems arising in quantum information theory.

This is joint work with James Saunderson (Monash University) and Pablo Parrilo (MIT).
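A key identity behind such constructions (a standard representation; the talk's precise approximations may differ) is the integral form

$$ \log X \;=\; \int_0^1 (X - I)\big((1-t)I + tX\big)^{-1} \, dt , $$

whose discretization by Gauss quadrature yields rational functions of $X$; each quadrature term is an operator-concave expression that admits a semidefinite representation, which is what makes off-the-shelf SDP solvers applicable.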

Thu, 20 Oct 2016

14:00 - 15:00
L5

Parallelization of the rational Arnoldi algorithm

Dr. Stefan Guettel
(Manchester University)
Abstract


Rational Krylov methods are applicable to a wide range of scientific computing problems, and the rational Arnoldi algorithm is a commonly used procedure for computing an orthonormal basis of a rational Krylov space. Typically, the computationally most expensive component of this algorithm is the solution of a large linear system of equations at each iteration. We explore the option of solving several linear systems simultaneously, thus constructing the rational Krylov basis in parallel. If this is not done carefully, the basis being orthogonalized may become badly conditioned, leading to numerical instabilities in the orthogonalization process. We introduce the new concept of continuation pairs, which gives rise to a near-optimal parallelization strategy that allows us to control the growth of the condition number of this nonorthogonal basis. As a consequence we obtain a significantly more accurate and reliable parallel rational Arnoldi algorithm.

The computational benefits are illustrated using several numerical examples from different application areas.

This talk is based on joint work with Mario Berljafa, available as an Eprint at http://eprints.ma.man.ac.uk/2503/.
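For orientation, here is a minimal sequential rational Arnoldi sketch (my simplification: one system per pole, with the previous basis vector as continuation vector; the talk concerns the parallel variant, where continuation pairs must be chosen carefully):

```python
import numpy as np

def rational_arnoldi(A, b, poles):
    # Build an orthonormal basis of the rational Krylov space
    # span{b, (A - xi_1 I)^{-1} b, (A - xi_2 I)^{-1} v_1, ...}.
    n = A.shape[0]
    V = np.zeros((n, len(poles) + 1))
    V[:, 0] = b / np.linalg.norm(b)
    for j, xi in enumerate(poles):
        w = np.linalg.solve(A - xi * np.eye(n), V[:, j])  # the expensive step
        for i in range(j + 1):                            # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

A = np.diag(np.linspace(1.0, 100.0, 100))
V = rational_arnoldi(A, np.ones(100), poles=[-1.0, -2.0, -4.0])
print(np.linalg.norm(V.T @ V - np.eye(4)))  # ~machine precision: orthonormal basis
```

In the parallel setting all shifted systems are solved simultaneously from continuation vectors chosen in advance, and the conditioning of the resulting nonorthogonal basis is what the continuation-pair framework controls.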

Thu, 13 Oct 2016

14:00 - 15:00
L5

Optimization with occasionally accurate data

Prof. Coralia Cartis
(Oxford University)
Abstract


We present global rates of convergence for a general class of methods for nonconvex smooth optimization that include linesearch, trust-region and regularisation strategies, but that allow inaccurate problem information. Namely, we assume the local (first- or second-order) models of our function are only sufficiently accurate with a certain probability, and that they can be arbitrarily poor otherwise. This framework subsumes certain stochastic gradient analyses and derivative-free techniques based on random sampling of function values. It can also be viewed as a robustness assessment of deterministic methods and their resilience to inaccurate derivative computation, such as that due to processor failure in a distributed framework. We show that, in terms of the order of the accuracy, the evaluation complexity of such methods is the same as that of their counterparts that use deterministic, accurate models; the use of probabilistic models only increases the complexity by a constant, which depends on the probability of the models being good. Time permitting, we will also discuss the case of inaccurate, probabilistic function value information, which arises in stochastic optimization. This work is joint with Katya Scheinberg (Lehigh University, USA).
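Schematically (constants and assumptions as in the full analysis, which the talk makes precise): if a deterministic first-order method needs $O(\epsilon^{-2})$ evaluations to guarantee $\|\nabla f(x_k)\| \le \epsilon$, then with models that are sufficiently accurate only with probability $p > 1/2$ one obtains a bound of the same order, of the form

$$ \mathbb{E}[N_\epsilon] \;\le\; \frac{C}{2p - 1}\,\epsilon^{-2} , $$

so only the constant deteriorates as the probability of a good model decreases.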
 

Thu, 16 Jun 2016

14:00 - 15:00
L5

Input-independent, optimal interpolatory model reduction: Moving from linear to nonlinear dynamics

Prof. Serkan Gugercin
(Virginia Tech)
Abstract

For linear dynamical systems, model reduction has achieved great success. In the case of linear dynamics, we know how to construct, at a modest cost, (locally) optimal input-independent reduced models; that is, reduced models that are uniformly good over all inputs having bounded energy. In addition, in some cases we can achieve this goal using only input/output data, without a priori knowledge of the internal dynamics. Even though model reduction has been successfully and effectively applied to nonlinear dynamical systems as well, in this setting both the reduction process and the reduced models are input-dependent, and the high fidelity of the resulting approximation is generically restricted to the training input/data. In this talk, we will offer remedies to this situation.

 
First, we will review model reduction for linear systems, using rational interpolation as the underlying framework. The concept of the transfer function will prove fundamental in this setting. Then, we will show how rational interpolation and transfer function concepts can be extended to nonlinear dynamics, specifically to bilinear systems and quadratic-in-state systems, allowing us to construct input-independent reduced models in this setting as well. Several numerical examples will be presented to support the discussion.
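For the linear, single-input single-output case (stated here for reference), with transfer function $H(s) = C(sE - A)^{-1}B$, a (locally) $\mathcal{H}_2$-optimal reduced model $\hat H$ of order $r$ satisfies the Hermite interpolation conditions at the mirror images of its own poles $\hat\lambda_i$:

$$ H(-\hat\lambda_i) = \hat H(-\hat\lambda_i), \qquad H'(-\hat\lambda_i) = \hat H'(-\hat\lambda_i), \qquad i = 1, \dots, r . $$

Extending such interpolation conditions to bilinear and quadratic-in-state systems is the subject of the talk.
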
Thu, 09 Jun 2016

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Conditioning of Optimal State Estimation Problems

Prof. Nancy Nichols
(Reading University)
Abstract

To predict the behaviour of a dynamical system using a mathematical model, an accurate estimate of the current state of the system is needed in order to initialize the model. Complete information on the current state is, however, seldom available. The aim of optimal state estimation, known in the geophysical sciences as ‘data assimilation’, is to determine a best estimate of the current state using measured observations of the real system over time, together with the model equations. The problem is commonly formulated in variational terms as a very large nonlinear least-squares optimization problem. The lack of complete data, coupled with errors in the observations and in the model, leads to a highly ill-conditioned inverse problem that is difficult to solve.

To understand the nature of the inverse problem, we examine how different components of the assimilation system influence the conditioning of the optimization problem. First we consider the case where the dynamical equations are assumed to model the real system exactly. We show, against intuition, that with increasingly dense and precise observations, the problem becomes harder to solve accurately. We then extend these results to a 'weak-constraint' form of the problem, where the model equations are assumed not to be exact, but to contain random errors. Two different, but mathematically equivalent, forms of the problem are derived. We investigate the conditioning of these two forms and find, surprisingly, that these have quite different behaviour.
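In standard variational notation (a schematic strong-constraint form; the talk also treats the weak-constraint version), the state estimate minimizes

$$ J(x_0) = \tfrac12 (x_0 - x^b)^T B^{-1}(x_0 - x^b) + \tfrac12 \sum_{i=0}^{N} \big(y_i - \mathcal H_i(x_i)\big)^T R_i^{-1}\big(y_i - \mathcal H_i(x_i)\big), \qquad x_{i+1} = \mathcal M_i(x_i), $$

with background state $x^b$, background and observation error covariances $B$ and $R_i$, observation operators $\mathcal H_i$ and model $\mathcal M_i$; the conditioning results concern the Hessian of $J$.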

Thu, 02 Jun 2016

14:00 - 15:00
L5

CUR Matrix Factorizations: Algorithms, Analysis, Applications

Professor Mark Embree
(Virginia Tech)
Abstract
Interpolatory matrix factorizations provide alternatives to the singular value decomposition for obtaining low-rank approximations; this class includes the CUR factorization, where the C and R matrices are subsets of columns and rows of the target matrix. While interpolatory approximations lack the SVD's optimality, their ingredients are easier to interpret than singular vectors: since they are copied from the matrix itself, they inherit the data's key properties (e.g., nonnegative/integer values, sparsity, etc.). We shall provide an overview of these approximate factorizations, describe how they can be analyzed using interpolatory projectors, and introduce a new method for their construction based on the Discrete Empirical Interpolation Method (DEIM). To conclude, we will use this algorithm to gain insight into accelerometer data from an instrumented building. (This talk describes joint work with Dan Sorensen (Rice) and collaborators in Virginia Tech's Smart Infrastructure Lab.)
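A compact sketch of a DEIM-based CUR construction (a simplified variant for illustration; the talk's algorithm and analysis are more refined):

```python
import numpy as np

def deim_indices(U):
    # Greedy DEIM selection: each new index is where the current column is
    # worst represented by interpolation at the indices chosen so far.
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return idx

def cur_deim(A, k):
    # Select k rows/columns via DEIM applied to singular vectors, then
    # form the coupling matrix U so that A ~= C @ U @ R.
    Usv, _, Vt = np.linalg.svd(A, full_matrices=False)
    rows, cols = deim_indices(Usv[:, :k]), deim_indices(Vt[:k, :].T)
    C, R = A[:, cols], A[rows, :]
    return C, np.linalg.pinv(C) @ A @ np.linalg.pinv(R), R

A = np.random.rand(60, 40) @ np.random.rand(40, 40)
C, U, R = cur_deim(A, 8)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # modest low-rank error
```
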
Thu, 19 May 2016

14:00 - 15:00
L5

Computing defective eigenpairs in parameter-dependent eigenproblems

Dr. Melina Freitag
(University of Bath)
Abstract

The requirement to compute Jordan blocks for multiple eigenvalues arises in a number of physical problems, for example panel flutter problems in aerodynamic stability, the stability of electrical power systems, and quantum mechanics. We introduce a general method for computing a 2-dimensional Jordan block in a parameter-dependent matrix eigenvalue problem, based on the so-called Implicit Determinant Method. This is joint work with Alastair Spence (Bath).
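The implicit determinant method rests on a bordered linear system: for suitable fixed vectors $b$ and $c$, one solves

$$ \begin{pmatrix} A(\lambda, s) & b \\ c^T & 0 \end{pmatrix} \begin{pmatrix} x \\ f \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} , $$

so that the scalar $f(\lambda, s)$ vanishes precisely when $A(\lambda, s)$ is singular; Newton's method applied to the resulting scalar equations (roughly, $f = 0$ together with $\partial f / \partial \lambda = 0$ to enforce the double eigenvalue) locates the 2-dimensional Jordan block. (This is a schematic summary, not the talk's precise formulation.)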

Thu, 12 May 2016

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Estimating the Largest Elements of a Matrix

Dr Sam Relton
(Manchester University)
Abstract


In many applications we need to find or estimate the $p \ge 1$ largest elements of a matrix, along with their locations. This is required, for example, in recommender systems of the kind used by Amazon and Netflix, in link prediction in graphs, and in finding the most important links in a complex network.

Our algorithm uses only matrix-vector products and is based upon a power method for mixed subordinate norms. We have obtained theoretical results on the convergence of this algorithm via a comparison with rook pivoting for the LU decomposition. We have also improved the practicality of the algorithm by producing a blocked version that iterates on $n \times t$ matrices, as opposed to vectors, where $t$ is a tunable parameter. For $p > 1$ we show how deflation can be used to improve the convergence of the algorithm.

Finally, numerical experiments on both randomly generated matrices and real-life datasets (the latter for $A^TA$ and $e^A$) show how our algorithms can reliably estimate the largest elements of a matrix whilst obtaining considerable speedups when compared to forming the matrix explicitly: over 1000x in some cases.
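A stripped-down sketch of the idea (the practical algorithm is blocked, iterating on $n \times t$ matrices, and uses deflation for $p > 1$): alternately move to the largest entry in the current column and then in the current row, using one product with $A$ or $A^T$ per step, much like rook pivoting:

```python
import numpy as np

def largest_element(matvec, rmatvec, m, n, j0=0, maxit=50):
    # matvec(x) = A @ x, rmatvec(y) = A.T @ y: only products are used.
    e = lambda k, d: np.eye(d)[:, k]
    j, best, loc = j0, -np.inf, (0, j0)
    for _ in range(maxit):
        col = matvec(e(j, n))                 # column j of A
        i = int(np.argmax(np.abs(col)))
        row = rmatvec(e(i, m))                # row i of A
        jn = int(np.argmax(np.abs(row)))
        if abs(row[jn]) <= best:              # no further growth: stop
            break
        best, loc, j = abs(row[jn]), (i, jn), jn
    return best, loc

A = np.random.randn(200, 300)
val, (i, j) = largest_element(lambda x: A @ x, lambda y: A.T @ y, 200, 300)
print(val, np.abs(A).max())                   # often, but not always, equal
```
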

Thu, 05 May 2016

14:00 - 15:00
L5

How to effectively compute the spectrum of the Laplacian with mixed Dirichlet and Neumann data

Professor Nilima Nigam
(Simon Fraser University)
Abstract
Eigenfunctions of the Laplace operator with mixed Dirichlet-Neumann boundary conditions may possess singularities, especially if the Dirichlet-Neumann junction occurs at angles $\geq \frac{\pi}{2}$. This suggests the use of boundary integral strategies to solve such eigenproblems. As with boundary value problems, integral-equation methods allow for a reduction of dimension, and for the resolution of singular behaviour which may otherwise present challenges to volumetric methods.
 
In this talk, we present a  novel integral-equation algorithm for mixed Dirichlet-Neumann eigenproblems. This is based on joint work with Oscar Bruno and Eldar Akhmetgaliyev (Caltech).
 
For domains with smooth boundary, the singular behaviour of the eigenfunctions at  Dirichlet-Neumann junctions is incorporated as part of the discretization strategy for the integral operator.  The discretization we use is based on the high-order Fourier Continuation method (FC). 
 
 For non-smooth (Lipschitz) domains an alternative high-order discretization is presented which achieves high-order accuracy on the basis of graded meshes.
 
 In either case (smooth or Lipschitz boundary), eigenvalues are evaluated by examining the minimal singular values of a suitable discrete system. A naive implementation will not succeed even in simple situations. We implement a strategy inspired by one suggested by Trefethen and Betcke, who developed a modified method of particular solutions.
 
The method is conceptually simple, and allows for highly accurate and efficient computation of eigenvalues and eigenfunctions, even in challenging geometries. 
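Schematically, the eigenvalue search reduces to locating minima of the smallest singular value of the discretized system over the spectral parameter (a toy stand-in below; the talk's point is precisely that a naive version of this scan must be stabilized, e.g. along the lines of Trefethen and Betcke):

```python
import numpy as np

def sigma_min_scan(build_matrix, lams):
    # Record sigma_min(A(lam)) over a parameter grid; eigenvalues of the
    # underlying problem appear as sharp dips.
    return np.array([np.linalg.svd(build_matrix(lam), compute_uv=False)[-1]
                     for lam in lams])

# Toy stand-in for a boundary-integral discretization: A(lam) is singular
# exactly at the eigenvalues 1, 4 and 9.
M = np.diag([1.0, 4.0, 9.0])
grid = np.linspace(0.0, 10.0, 2001)
scan = sigma_min_scan(lambda lam: M - lam * np.eye(3), grid)
print(grid[scan < 1e-3])   # [1. 4. 9.]
```
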
Thu, 28 Apr 2016

14:00 - 15:00
L5

Fast simplicial finite elements via Bernstein polynomials

Professor Rob Kirby
(Baylor University)
Abstract

For many years, sum-factored algorithms for finite elements in rectangular reference geometry have combined low complexity with the mathematical power of high-order approximation.  However, such algorithms rely heavily on the tensor product structure inherent in the geometry and basis functions, and similar algorithms for simplicial geometry have proven elusive.

Bernstein polynomials are totally nonnegative, rotationally symmetric, and geometrically decomposed bases with many other remarkable properties that lead to optimal-complexity algorithms for element-wise finite element computations. They also form natural building blocks for the finite element exterior calculus bases for the de Rham complex, so that H(div) and H(curl) bases have efficient representations as well. We will also discuss their relevance for explicit discontinuous Galerkin methods, where the element mass matrix requires special attention.
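For reference, on the unit interval the Bernstein basis of degree $n$ is

$$ B_i^n(x) = \binom{n}{i} x^i (1 - x)^{n - i}, \qquad 0 \le i \le n, $$

and on a simplex, with barycentric coordinates $\lambda$ and multi-indices $|\alpha| = n$, $B_\alpha^n = \binom{n}{\alpha} \lambda^\alpha$; the product and recurrence structure of these polynomials is what enables sum-factored, optimal-complexity element kernels.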

Thu, 03 Mar 2016

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Sparse iterative solvers on GPGPUs and applications

Dr Salvatore Filippone
(Cranfield University)
Abstract

We will review the basic building blocks of iterative solvers, i.e. sparse matrix-vector multiplication, in the context of GPU devices such as the cards by NVIDIA; we will then discuss some techniques in preconditioning by approximate inverses, and we will conclude with an application to an image processing problem from the biomedical field.
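As a reference point for the GPU discussion, the kernel at the heart of such solvers looks as follows in compressed sparse row (CSR) storage (a plain Python sketch; GPU implementations parallelize over rows and use layouts such as ELLPACK-like formats for coalesced memory access):

```python
import numpy as np

def spmv_csr(vals, col_idx, row_ptr, x):
    # y = A @ x for A stored in CSR format.
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):                   # GPU kernels parallelize this loop
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = vals[lo:hi] @ x[col_idx[lo:hi]]
    return y

# A = [[2, 0], [1, 3]] in CSR form.
vals = np.array([2.0, 1.0, 3.0])
col_idx = np.array([0, 0, 1])
row_ptr = np.array([0, 1, 3])
print(spmv_csr(vals, col_idx, row_ptr, np.array([1.0, 1.0])))  # [2. 4.]
```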

Thu, 25 Feb 2016

14:00 - 15:00
L5

On multigrid methods in convex optimization

Michal Kocvara
(Birmingham University)
Abstract

The aim of this talk is to design an efficient multigrid method for constrained convex optimization problems arising from discretization of some underlying infinite-dimensional problems. Due to the problem dependency of this approach, we only consider bound constraints with (possibly) a linear equality constraint. As our aim is to target large-scale problems, we want to avoid computation of second derivatives of the objective function, thus excluding Newton-like methods. We propose a smoothing operator that only uses first-order information and study the computational efficiency of the resulting method. In the second part, we consider the application of multigrid techniques to more general optimization problems, in particular the topology design problem.

Thu, 18 Feb 2016

14:00 - 15:00
L5

Ten things you should know about quadrature

Professor Nick Trefethen
(Oxford)
Abstract

Quadrature is the term for the numerical evaluation of integrals.  It's a beautiful subject because it's so accessible, yet full of conceptual surprises and challenges.  This talk will review ten of these, with plenty of history and numerical demonstrations.  Some are old if not well known, some are new, and two are subjects of my current research.
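In that spirit, a short demonstration of one classic surprise, the geometric convergence of Gauss quadrature for analytic integrands (this example is mine, not necessarily one of the ten):

```python
import numpy as np

# Integrate f(x) = 1/(1 + 16 x^2) on [-1, 1]; exact value is arctan(4)/2.
f = lambda x: 1.0 / (1.0 + 16.0 * x**2)
exact = 0.5 * np.arctan(4.0)

for n in (4, 8, 16, 32, 64):
    x, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre nodes/weights
    print(n, abs(w @ f(x) - exact))             # error shrinks geometrically
```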

Mon, 15 Feb 2016

14:00 - 15:00
L5

TBA

Dr. Garth Wells
(Schlumberger)
Thu, 11 Feb 2016

14:00 - 15:00
L5

Tensor product approach for solution of multidimensional differential equations

Dr. Sergey Dolgov
(Bath University)
Abstract

Partial differential equations with more than three coordinates arise naturally if the model features certain kinds of stochasticity. Typical examples are the Schroedinger, Fokker-Planck and Master equations in quantum mechanics or cell biology, as well as quantification of uncertainty.
The principal difficulty of a straightforward numerical solution of such equations is the `curse of dimensionality': the storage cost of the discrete solution grows exponentially with the number of coordinates (dimensions).

One way to reduce the complexity is the low-rank separation of variables. One can see all discrete data (such as the solution) as multi-index arrays, or tensors. These large tensors are never stored directly.
We approximate them by a sum of products of smaller factors, each carrying only one of the original variables. I will present one of the simplest yet most powerful such representations, the Tensor Train (TT) decomposition. The TT decomposition generalizes the approximation of a given matrix by a low-rank matrix to the tensor case. It was found that many interesting models allow such approximations with a significant reduction of storage demands.
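The basic compression step, TT-SVD, is short enough to sketch (a straightforward dense prototype; real solvers never form the full tensor, which is the whole point):

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    # Compress a d-dimensional array into Tensor Train cores by successive
    # reshapes and truncated SVDs.
    shape, cores, r = T.shape, [], 1
    W = T.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))      # truncated TT rank
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        W = (s[:rank, None] * Vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(W.reshape(r, shape[-1], 1))
    return cores

# A rank-1 tensor compresses to three 1 x 4 x 1 cores.
x = np.arange(1.0, 5.0)
T = np.einsum('i,j,k->ijk', x, x, x)
print([c.shape for c in tt_svd(T)])   # [(1, 4, 1), (1, 4, 1), (1, 4, 1)]
```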

A workhorse approach to computations with the TT and other tensor product decompositions is the alternating optimization of the factors. The simple realization is, however, prone to convergence issues. I will show some of the recent improvements that are indispensable for truly high-dimensional problems, or for the solution of linear systems with non-symmetric or indefinite matrices.

Thu, 04 Feb 2016

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Task-based multifrontal QR solver for heterogeneous architectures

Dr Florent Lopez
(Rutherford Appleton Laboratory)
Abstract

To face the advent of multicore processors and the ever-increasing complexity of hardware architectures, programming models based on DAG parallelism have regained popularity in the high-performance scientific computing community. Modern runtime systems offer a programming interface that complies with this paradigm and powerful engines for scheduling the tasks into which the application is decomposed. These tools have already proved their effectiveness on a number of dense linear algebra applications.

In this talk we present the design of task-based sparse direct solvers on top of runtime systems. In the context of the qr_mumps solver, we demonstrate the usability and effectiveness of our approach with the implementation of a sparse multifrontal factorization based on a Sequential Task Flow (STF) parallel programming model. Using this programming model, we developed features such as the integration of dense 2D communication-avoiding algorithms in the multifrontal method, allowing for better scalability compared to the original approach used in qr_mumps.

Following this approach, we move to heterogeneous architectures, where task granularity and scheduling strategies are critical to achieving performance. We present, for the multifrontal method, a hierarchical strategy for data partitioning and a scheduling algorithm capable of handling the heterogeneity of resources. Finally, we introduce a memory-aware algorithm to control the memory behavior of our solver and show, in the context of multicore architectures, a substantial reduction of the memory footprint for the multifrontal QR factorization with a small impact on performance.

Thu, 28 Jan 2016

14:00 - 15:00
L5

Redundant function approximation in theory and in practice

Prof. Daan Huybrechs
(KU Leuven)
Abstract
Functions are usually approximated numerically in a basis, a non-redundant and complete set of functions that spans a certain space. In this talk we highlight a number of benefits of using overcomplete sets, in particular using the more general notion of a "frame". The main benefit is that frames are easily constructed even for functions of several variables on domains with irregular shapes. On the other hand, allowing for possible linear dependencies naturally leads to ill-conditioning of approximation algorithms, and the ill-conditioning is potentially severe. We give some useful examples of frames, and we first address the numerical stability of best approximations in a frame. Next, we briefly describe special point sets in which interpolation turns out to be stable. Finally, we review so-called Fourier extensions and an efficient algorithm to approximate functions with spectral accuracy on domains without structure.
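To close with a concrete instance of the last point, a minimal Fourier extension experiment (my toy setup): approximate a non-periodic function on $[-1, 1]$ using Fourier modes that are periodic on the larger interval $[-2, 2]$; the least-squares matrix is severely ill-conditioned, and a truncated-SVD solve restores numerical stability:

```python
import numpy as np

f = lambda x: np.exp(x) * np.cos(3 * x)        # smooth, non-periodic on [-1, 1]
n, T = 30, 2.0                                  # 2n+1 modes, extended half-period T
x = np.linspace(-1.0, 1.0, 400)                 # oversampled grid on the domain
k = np.arange(-n, n + 1)
A = np.exp(1j * np.pi * np.outer(x, k) / T)     # redundant set ("frame") of modes
c, *_ = np.linalg.lstsq(A, f(x) + 0j, rcond=1e-12)  # regularized least squares
print(np.abs(A @ c - f(x)).max())               # tiny residual despite huge cond(A)
```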