Forthcoming events in this series


Tue, 21 Nov 2017

14:00 - 14:30
L5

Compressed Sensing Reconstruction of Dynamic X-ray Imaging

Joseph Field
(University of Oxford)
Abstract

Medical imaging is a key diagnostic tool, and is paramount for disease detection and for patient monitoring during ongoing care. Often, to reduce the amount of radiation that a patient is subjected to, there is a strong incentive to consider image reconstruction from incomplete sets of measurements, and so the imaging process is formulated as a compressed sensing problem.

In this talk, we will focus on compressed sensing for digital tomosynthesis (DTS), in which three-dimensional images are reconstructed from a set of two-dimensional X-ray projections. We first discuss a reconstruction approach for static bodies, with a particular interest in the choice of basis for the image representation. We will then focus on the need for accurate image reconstructions when the body of interest is not stationary, but is undergoing simple motion, discussing two different approaches for tackling this dynamic problem.
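Reconstruction from incomplete measurements of this kind can be illustrated with a generic sparse recovery sketch; the example below (not the DTS reconstruction of the talk, and with purely illustrative problem sizes and regularisation parameter) recovers a synthetic sparse signal from random Gaussian measurements using iterative soft thresholding (ISTA).

```python
import numpy as np

def ista(A, b, lam, iters):
    """Iterative soft thresholding for min 0.5||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - b)     # gradient step on the smooth part
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                         # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                               # incomplete set of measurements
x_hat = ista(A, b, lam=0.01, iters=2000)
print(np.linalg.norm(x_hat - x_true))        # small: the sparse signal is recovered
```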

Tue, 14 Nov 2017

14:30 - 15:00
L5

Shape Optimisation with Conformal Mappings

Florian Wechsung
(University of Oxford)
Abstract

The design of shapes that are in some sense optimal is a task faced by engineers in a wide range of disciplines. In shape optimisation one aims to improve a given initial shape by iteratively deforming it - if the shape is represented by a mesh, then this means that the mesh has to be deformed. This is a delicate problem as overlapping or highly stretched meshes lead to poor accuracy of numerical methods.

In the presented work we consider a novel mesh deformation method motivated by the Riemann mapping theorem and based on conformal mappings.
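A toy illustration of why conformal maps are attractive for mesh deformation (not the authors' method): a holomorphic map has a Jacobian that is a scaled rotation, so it deforms a mesh while preserving the angles between edges. The sketch below checks this numerically for the arbitrary example f(z) = z + 0.1 z².

```python
import numpy as np

def f(z):
    """A holomorphic (hence conformal where f' != 0) deformation map."""
    return z + 0.1 * z**2

def jacobian(x, y, h=1e-6):
    """Central-difference Jacobian of (x, y) -> (Re f, Im f)."""
    fx = (f((x + h) + 1j * y) - f((x - h) + 1j * y)) / (2 * h)
    fy = (f(x + 1j * (y + h)) - f(x + 1j * (y - h))) / (2 * h)
    return np.array([[fx.real, fy.real], [fx.imag, fy.imag]])

J = jacobian(0.3, -0.2)
JtJ = J.T @ J
print(JtJ)   # a multiple of the identity: J is a scaled rotation
```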

Tue, 14 Nov 2017

14:00 - 14:30
L5

An Alternative to the Coarse Solver for the Parareal Algorithm

Federico Danieli
(University of Oxford)
Abstract

Time parallelisation techniques provide an additional direction for the parallelisation of the solution of time-dependent PDEs or of systems of ODEs. In particular, the Parareal algorithm has established itself as the canonical choice for achieving parallelisation in time, not least because of its simplicity and flexibility. The algorithm works by splitting the time domain into chunks and iteratively alternating between a prediction step (parallel), in which a "fine" solver is employed to achieve a high-accuracy solution within each chunk, and a correction step (serial), where a "coarse" solver is used to quickly propagate the update between the chunks. However, the stability of the method has proven to be highly sensitive to the choice of fine and coarse solver, even more so when applied to chaotic systems or advection-dominated problems.


In this presentation, an alternative formulation of Parareal is discussed. This conducts the update by directly estimating the sensitivity of the solution with respect to the initial conditions, thus eliminating altogether the need to choose a suitable coarse solver, and potentially boosting the convergence properties of the method.
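The classic Parareal iteration described above can be sketched in a few lines. The version below is a minimal illustration on a scalar test ODE, with explicit Euler serving as both fine and coarse solver and all parameters chosen arbitrarily; the fine sweep is written as a loop but is parallel across chunks in practice.

```python
import numpy as np

def parareal(f, y0, T, n_chunks, fine_steps, coarse_steps, iters):
    """Classic Parareal for y' = f(y) with explicit Euler fine/coarse solvers."""
    def euler(y, t0, t1, n):
        h = (t1 - t0) / n
        for _ in range(n):
            y = y + h * f(y)
        return y

    ts = np.linspace(0.0, T, n_chunks + 1)
    U = [y0]                                  # initial guess: serial coarse sweep
    for j in range(n_chunks):
        U.append(euler(U[-1], ts[j], ts[j + 1], coarse_steps))
    for _ in range(iters):
        # prediction: fine solves, independent per chunk (parallel in practice)
        F = [euler(U[j], ts[j], ts[j + 1], fine_steps) for j in range(n_chunks)]
        G_old = [euler(U[j], ts[j], ts[j + 1], coarse_steps) for j in range(n_chunks)]
        Unew = [y0]
        for j in range(n_chunks):             # correction: serial coarse sweep
            G_new = euler(Unew[-1], ts[j], ts[j + 1], coarse_steps)
            Unew.append(G_new + F[j] - G_old[j])
        U = Unew
    return np.array(U)

# Test problem y' = -y, y(0) = 1 on [0, 2]; exact solution exp(-t)
U = parareal(lambda y: -y, 1.0, 2.0, n_chunks=10,
             fine_steps=100, coarse_steps=1, iters=5)
print(abs(U[-1] - np.exp(-2.0)))   # converges to the fine solver's accuracy
```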

 

Tue, 07 Nov 2017

14:30 - 15:00
L5

Monte Carlo integration: variance reduction by function approximation

Yuji Nakatsukasa
(University of Oxford)
Abstract

Classical algorithms for numerical integration (quadrature/cubature) proceed by approximating the integrand with a simple function (e.g. a polynomial), and integrate the approximant exactly. In high-dimensional integration, such methods quickly become infeasible due to the curse of dimensionality.


A common alternative is the Monte Carlo method (MC), which simply takes the average of random samples, improving the estimate as more and more samples are taken. The main issue with MC is its slow "sqrt(variance/#samples)" convergence, and various techniques have been proposed to reduce the variance.


In this work we reveal a numerical analyst's interpretation of MC: it approximates the integrand with a simple(st) function, and integrates that function exactly. This observation leads naturally to MC-like methods that combine MC with function approximation theory, including polynomial approximation and sparse grids. The resulting methods can be regarded as another variance reduction technique for Monte Carlo.
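This viewpoint can be sketched directly: fit a simple function to the integrand, integrate the fit exactly, and apply MC only to the residual. The example below is an illustrative one-dimensional instance with a degree-2 polynomial approximant; it is a generic control-variate construction in the spirit of the talk, not the authors' specific method.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.exp(x)          # integrand on [0, 1]; true integral is e - 1
N = 10_000
x = rng.random(N)

# Plain Monte Carlo: average of samples
mc = f(x).mean()

# Approximate f by a degree-2 polynomial p (least squares on the samples),
# integrate p exactly, and apply MC only to the small residual f - p
coeffs = np.polyfit(x, f(x), 2)
p = np.polyval(coeffs, x)
exact_p = np.polyval(np.polyint(coeffs), 1.0)   # integral of p over [0, 1]
cv = exact_p + (f(x) - p).mean()

true = np.e - 1
print(abs(mc - true), abs(cv - true))   # the second error is much smaller
```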

Tue, 07 Nov 2017

14:00 - 14:30
L5

OSQP: An Operator Splitting Solver for Quadratic Programs

Bartolomeo Stellato
(Oxford University)
Abstract

We develop a general purpose solver for quadratic programs based on operator splitting. We introduce a novel splitting that requires the solution of a quasi-definite linear system with the same coefficient matrix in each iteration. The resulting algorithm is very robust and, once the initial factorization is carried out, division-free; it also eliminates requirements on the problem data such as positive definiteness of the objective function or linear independence of the constraint functions. Moreover, it is able to detect primal or dual infeasible problems, providing infeasibility certificates. The method supports caching the factorization of the quasi-definite system and warm starting, making it efficient for solving parametrized problems arising in finance, control, and machine learning. Our open-source C implementation OSQP has a small footprint and is library-free. Numerical benchmarks on problems arising from several application domains show that OSQP is typically 10x faster than interior-point methods, especially when factorization caching or warm starts are used.


This is joint work with Goran Banjac, Paul Goulart, Alberto Bemporad and Stephen Boyd.
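A heavily simplified sketch of an OSQP-style splitting (relaxation parameter fixed to 1, no parameter adaptation, dense linear algebra; the test problem and penalty parameters are illustrative, not the released solver) shows the two key ingredients: a quasi-definite KKT system factorised once and reused in every iteration, and a cheap projection onto the constraint bounds.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# min 0.5 x'Px + q'x  s.t.  l <= Ax <= u   (small example QP)
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])
n, m = 2, 3

rho, sigma = 1.0, 1e-6
# Quasi-definite KKT matrix: factorise once, reuse in every iteration
K = np.block([[P + sigma * np.eye(n), A.T],
              [A, -np.eye(m) / rho]])
factor = lu_factor(K)

x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
for _ in range(2000):
    sol = lu_solve(factor, np.concatenate([sigma * x - q, z - y / rho]))
    x, nu = sol[:n], sol[n:]
    z_t = z + (nu - y) / rho           # auxiliary splitting variable
    z = np.clip(z_t + y / rho, l, u)   # projection onto the constraint box
    y = y + rho * (z_t - z)            # dual update
print(x)   # optimum of this QP is approximately [0.3, 0.7]
```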
 

Tue, 31 Oct 2017

14:30 - 15:00
L5

Error bounds for monotone schemes for parabolic Hamilton-Jacobi-Bellman equations in bounded domains

Athena Picarelli
(Imperial College)
Abstract

We provide the rate of convergence of general monotone numerical schemes for parabolic Hamilton-Jacobi-Bellman equations in bounded domains with Dirichlet boundary conditions. The so-called "shaking coefficients" technique introduced by Krylov is used. This technique is based on a perturbation of the dynamics followed by a regularization step by convolution. When restricting the equation to a domain, the perturbed problem may not satisfy such a restriction, so that a special treatment near the boundary is necessary. 

Tue, 31 Oct 2017

14:00 - 14:30
L5

Dual Acceleration for Nonconvex Optimisation

Matthew Geleta
(University of Cambridge)
Abstract


The phenomenon of poor algorithmic scalability is a critical problem in large-scale machine learning and data science. This has led to a resurgence in the use of first-order (Hessian-free) algorithms from classical optimisation. One major drawback is that first-order methods tend to converge extremely slowly. However, there exist techniques for efficiently accelerating them.
    
The topic of this talk is the Dual Regularisation Nonlinear Acceleration (DRNA) algorithm (Geleta, 2017) for nonconvex optimisation. Numerical studies using the CUTEst optimisation problem set show that the method accelerates several nonconvex optimisation algorithms, including quasi-Newton BFGS and steepest descent methods. DRNA compares favourably with a number of existing accelerators in these studies.
    
DRNA extends to the nonconvex setting a recent acceleration algorithm due to Scieur et al. (Advances in Neural Information Processing Systems 29, 2016). We have proven theorems relating DRNA to the Krylov subspace method GMRES, as well as to Anderson's acceleration method and the family of multi-secant quasi-Newton methods.
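DRNA itself is not reproduced here, but the convex predecessor it extends, the regularised nonlinear acceleration of Scieur et al., can be sketched as follows: given a window of iterates, find affine weights that minimise the combined residual (with a small regulariser) and extrapolate. The gradient descent test problem and all parameters below are illustrative.

```python
import numpy as np

def rna(X, lam=1e-10):
    """Regularised nonlinear acceleration (after Scieur et al., 2016).
    X: iterates x_0, ..., x_k as columns. Returns the extrapolated point."""
    R = np.diff(X, axis=1)             # residuals r_i = x_{i+1} - x_i
    k = R.shape[1]
    M = R.T @ R
    M = M / np.linalg.norm(M)          # normalise for conditioning
    z = np.linalg.solve(M + lam * np.eye(k), np.ones(k))
    c = z / z.sum()                    # affine weights summing to one
    return X[:, 1:] @ c

# Gradient descent on f(x) = 0.5 x'Ax, minimiser x* = 0
A = np.diag([1.0, 10.0])
x = np.array([1.0, 1.0])
iters = [x]
for _ in range(8):
    x = x - 0.05 * (A @ x)
    iters.append(x)
x_acc = rna(np.array(iters).T)
print(np.linalg.norm(iters[-1]), np.linalg.norm(x_acc))  # extrapolant is far closer
```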
 

Tue, 24 Oct 2017

14:30 - 15:00
L5

Network Block Decomposition for Revenue Management

Jaroslav Fowkes
(University of Oxford)
Abstract

In this talk we introduce a novel dynamic programming (DP) approximation that exploits the inherent network structure present in revenue management problems. In particular, our approximation provides a new lower bound on the value function for the DP, which enables conservative revenue forecasts to be made. Existing state-of-the-art approximations of the revenue management DP neglect the network structure, apportioning the prices of each product, whereas our proposed method does not: we partition the network of products into clusters by apportioning the capacities of resources. Our proposed approach allows, in principle, for better approximations of the DP than the decomposition methods currently implemented in industry, and we see it as an important stepping stone towards better approximate DP methods in practice.

Tue, 24 Oct 2017

14:00 - 14:30
L5

Gaussian Processes for Demand Unconstraining

Ilan Price
(University of Oxford)
Abstract

One of the key challenges in revenue management is unconstraining demand data. Existing state-of-the-art single-class unconstraining methods make restrictive assumptions about the form of the underlying demand and can perform poorly when applied to data which breaks these assumptions. In this talk, we propose a novel unconstraining method that uses Gaussian process (GP) regression. We develop a novel GP model by constructing and implementing a new non-stationary covariance function for the GP which enables it to learn and extrapolate the underlying demand trend. We show that this method can cope with important features of realistic demand data, including nonlinear demand trends, variations in total demand, lengthy periods of constraining, non-exponential inter-arrival times, and discontinuities/changepoints in demand data. In all such circumstances, our results indicate that GPs outperform existing single-class unconstraining methods.
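Bookings beyond capacity are never observed, so constrained periods censor the demand data; a GP fitted to unconstrained periods can extrapolate the latent trend into them. The talk's non-stationary covariance function is not reproduced here; as a simple stand-in, the sketch below uses a plain linear (dot-product) kernel, the simplest GP covariance able to extrapolate a linear trend, on a synthetic demand series.

```python
import numpy as np

rng = np.random.default_rng(3)
t_obs = np.arange(20.0)                        # unconstrained (fully observed) days
y_obs = 10.0 + 0.5 * t_obs + 0.3 * rng.standard_normal(20)
t_new = np.arange(20.0, 30.0)                  # constrained days to unconstrain

def k_lin(s, t, sb=10.0, sv=1.0):
    """Linear (dot-product) covariance: Bayesian linear regression as a GP."""
    return sb**2 + sv**2 * np.outer(s, t)

sn = 0.3                                       # observation noise std
K = k_lin(t_obs, t_obs) + sn**2 * np.eye(20)
mean = k_lin(t_new, t_obs) @ np.linalg.solve(K, y_obs)   # GP posterior mean
print(mean[-1])   # close to the true latent trend value 10 + 0.5*29 = 24.5
```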

Tue, 17 Oct 2017

14:30 - 15:00
L5

White Noise Coupling for Multilevel Monte Carlo

Matteo Croci
(University of Oxford)
Abstract

In this talk we describe a new approach that enables the use of elliptic PDEs with white noise forcing to sample Matérn fields within the multilevel Monte Carlo (MLMC) framework.

When MLMC is used to quantify the uncertainty in the solution of PDEs with random coefficients, two key ingredients are needed: 1) a sampling technique for the coefficients that satisfies the MLMC telescopic sum and 2) a numerical solver for the forward PDE problem.

When the dimensionality of the uncertainty in the problem is infinite (i.e. the coefficients are random fields), the sampling techniques commonly used in the literature are Karhunen–Loève expansions or circulant embeddings. In the specific case in which the coefficients are Gaussian fields with Matérn covariance structure, a further sampling technique relies on the solution of a linear elliptic PDE with white noise forcing.


When the finite element method (FEM) is used for the forward problem, the latter option can become advantageous as elliptic PDEs can be quickly and efficiently solved with the FEM, the sampling can be performed in parallel and the same FEM software can be used without the need for external packages. However, it is unclear how to enforce a good stochastic coupling of white noise between MLMC levels so as to respect the MLMC telescopic sum. In this talk we show how this coupling can be enforced in theory and in practice.
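The telescoping sum that the white noise coupling must respect can be illustrated with a generic MLMC estimator. The sketch below is a standard toy example with Euler–Maruyama levels for a scalar SDE (not the PDE sampling of the talk): each correction term couples its two levels by driving the coarse path with the pairwise sums of the fine path's Brownian increments.

```python
import numpy as np

# MLMC telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with P_l the terminal value of dS = sigma * S dW at timestep T / 2^l.
rng = np.random.default_rng(7)
sigma, T, S0 = 0.2, 1.0, 1.0

def euler_pair(level, n_samples):
    """Fine-level payoff P_l and coupled coarse payoff P_{l-1}."""
    nf = 2 ** level
    h = T / nf
    dW = np.sqrt(h) * rng.standard_normal((n_samples, nf))
    Sf = np.full(n_samples, S0)
    for i in range(nf):
        Sf = Sf + sigma * Sf * dW[:, i]
    if level == 0:
        return Sf, np.zeros(n_samples)
    Sc = np.full(n_samples, S0)
    dWc = dW[:, 0::2] + dW[:, 1::2]    # same Brownian path, coarser increments
    for i in range(nf // 2):
        Sc = Sc + sigma * Sc * dWc[:, i]
    return Sf, Sc

L, N = 4, 20000
est = 0.0
for level in range(L + 1):
    Pf, Pc = euler_pair(level, N)
    est += np.mean(Pf - Pc)            # correction terms have small variance
print(est)   # estimates E[S_T] = S0 = 1 for this driftless SDE
```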

Tue, 17 Oct 2017

14:00 - 14:30
L5

Multilevel weighted least squares polynomial approximation

Abdul-Lateef Haji-Ali
(University of Oxford)
Abstract

We propose and analyze a multilevel weighted least squares polynomial approximation method. Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method, which employs samples with different accuracies and is able to match the accuracy of single level approximations at reduced computational work. We prove complexity bounds under certain assumptions on polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Numerical experiments underline the practical applicability of our method.
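A single-level instance of the idea can be sketched as follows: draw random samples from a distribution suited to the polynomial basis and solve a least squares problem. The choice below (arcsine-distributed samples with a Chebyshev basis, near-optimal for that basis) and the target function are illustrative; the multilevel method combines such fits computed at different sample accuracies.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(5)
f = lambda x: np.exp(x) * np.sin(3 * x)   # smooth target on [-1, 1]

deg, m = 12, 120                          # polynomial degree, number of samples
x = np.cos(np.pi * rng.random(m))         # arcsine-distributed sample locations
V = C.chebvander(x, deg)                  # Chebyshev Vandermonde matrix
coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)

xx = np.linspace(-1, 1, 1000)
err = np.max(np.abs(C.chebval(xx, coef) - f(xx)))
print(err)   # near the best degree-12 approximation error
```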

Tue, 10 Oct 2017

14:30 - 15:00
L5

A novel DG method using the principle of discrete least squares

Jan Glaubitz
(TU Braunschweig)
Abstract

In this talk, a novel discontinuous Galerkin (DG) method is introduced by utilising the principle of discrete least squares. The key idea is to build polynomial approximations by the method of (weighted) discrete least squares instead of the usual interpolation or (discrete) $L^2$ projections. The resulting method hence uses more information about the underlying function and provides a more robust alternative to common DG methods. As a result, we are able to construct high-order schemes which are conservative as well as linearly stable on any set of collocation points. Several numerical tests show that the new discontinuous Galerkin discrete least squares (DG-DLS) method significantly outperforms present-day DG methods.
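The robustness gain of least squares over interpolation can be seen already in a one-dimensional approximation experiment, outside any DG scheme (illustrative only): on equispaced points, polynomial interpolation of the Runge function diverges, while a discrete least squares fit with more points than coefficients remains well behaved.

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge function
xx = np.linspace(-1, 1, 2001)

# Degree-20 interpolation through 21 equispaced points (Runge phenomenon)
xi = np.linspace(-1, 1, 21)
Vi = np.polynomial.chebyshev.chebvander(xi, 20)
ci = np.linalg.solve(Vi, f(xi))
err_interp = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, ci) - f(xx)))

# Degree-20 discrete least squares over 200 equispaced points
xl = np.linspace(-1, 1, 200)
Vl = np.polynomial.chebyshev.chebvander(xl, 20)
cl, *_ = np.linalg.lstsq(Vl, f(xl), rcond=None)
err_ls = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, cl) - f(xx)))

print(err_interp, err_ls)   # interpolation error is orders of magnitude larger
```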

Tue, 10 Oct 2017

14:00 - 14:30
L5

Generalised Summation-by-Parts Operators, Entropy Stability, and Split Forms

Hendrik Ranocha
(TU Braunschweig)
Abstract

High-order methods for conservation laws can be highly efficient if their stability is ensured. A suitable means of mimicking estimates from the continuous level is provided by summation-by-parts (SBP) operators and the weak enforcement of boundary conditions. Recently, there has been an increasing interest in generalised SBP operators both in the finite difference and the discontinuous Galerkin spectral element framework.

However, if generalised SBP operators are used, the treatment of boundaries becomes more difficult since some properties of the continuous level are no longer mimicked discretely: interpolating the product of two functions will in general result in a value different from the product of the interpolations. Thus, desired properties such as conservation and stability are more difficult to obtain.

In this talk, the concept of generalised SBP operators and their application to entropy stable semidiscretisations will be presented. Several recent ideas extending the range of possible methods are discussed, presenting both advantages and several shortcomings.
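A minimal concrete SBP example (the standard second-order finite difference operator on a uniform grid, not one of the generalised operators of the talk) verifies the discrete integration-by-parts property numerically.

```python
import numpy as np

# First-derivative SBP operator D = P^{-1} Q with Q + Q^T = diag(-1, 0, ..., 0, 1),
# so that u^T P (D v) + (D u)^T P v = u_N v_N - u_0 v_0 mimics integration by parts.
N = 20
h = 1.0 / N
P = h * np.diag([0.5] + [1.0] * (N - 1) + [0.5])   # diagonal quadrature (norm) matrix
Q = np.zeros((N + 1, N + 1))
for i in range(N):
    Q[i, i + 1] = 0.5
    Q[i + 1, i] = -0.5
Q[0, 0], Q[N, N] = -0.5, 0.5
D = np.linalg.solve(P, Q)                          # 2nd-order central differences inside

rng = np.random.default_rng(2)
u = rng.standard_normal(N + 1)
v = rng.standard_normal(N + 1)
lhs = u @ P @ (D @ v) + (D @ u) @ P @ v
print(lhs - (u[N] * v[N] - u[0] * v[0]))           # zero up to rounding
```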

Tue, 26 Sep 2017

14:00 - 14:30
C4

Low algebraic dimension matrix completion

Greg Ongie
(University of Michigan)
Abstract

We consider a generalization of low-rank matrix completion to the case where the data belongs to an algebraic variety, i.e., each data point is a solution to a system of polynomial equations. In this case, the original matrix is possibly high-rank, but it becomes low-rank after mapping each column to a higher dimensional space of monomial features. Many well-studied extensions of linear models, including affine subspaces and their union, can be described by a variety model. We study the sampling requirements for matrix completion under a variety model with a focus on a union of subspaces. We also propose an efficient matrix completion algorithm that minimizes a surrogate of the rank of the matrix of monomial features, which is able to recover synthetically generated data up to the predicted sampling complexity bounds. The proposed algorithm also outperforms standard low-rank matrix completion and subspace clustering techniques in experiments with real data.
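The lifting idea can be checked in a few lines. The sketch below is a toy example with data on the unit circle (not the completion algorithm of the talk): mapping each column to degree-2 monomial features turns the polynomial relation defining the variety into a rank deficiency of the lifted matrix.

```python
import numpy as np

# Points on the unit circle x^2 + y^2 = 1: the 2 x N data matrix has full
# rank 2, but the lifted feature matrix satisfies one linear relation.
rng = np.random.default_rng(4)
theta = 2 * np.pi * rng.random(50)
X = np.vstack([np.cos(theta), np.sin(theta)])    # 2 x 50 data matrix

def lift(col):
    """Degree-2 monomial features of a 2D point."""
    x, y = col
    return np.array([x * x, x * y, y * y, x, y, 1.0])

M = np.array([lift(X[:, j]) for j in range(X.shape[1])]).T   # 6 x 50 lifted matrix
s = np.linalg.svd(M, compute_uv=False)
print(s)   # smallest singular value ~ 0: rank 5, not 6, due to x^2 + y^2 - 1 = 0
```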

Tue, 20 Jun 2017

14:00 - 15:00
L5

Numerical Convolution for Tensor Operations

Professor Wolfgang Hackbusch
(Max Planck Institute Leipzig)
Abstract

Starting from an example in quantum chemistry, we explain the techniques of Numerical Tensor Calculus with particular emphasis on the convolution operation. The tensorisation technique also applies to one-dimensional grid functions and makes it possible to perform the convolution at a cost which may be much lower than that of the fast Fourier transform.
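For reference, the FFT baseline mentioned above can be sketched as follows; this is the standard O(n log n) discrete convolution of two grid functions, not the tensorisation technique of the talk.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 256
a = rng.standard_normal(n)
b = rng.standard_normal(n)

# Zero-pad to length >= 2n - 1 so the circular convolution equals the linear
# one, multiply pointwise in Fourier space, and transform back.
m = 2 * n
conv_fft = np.fft.irfft(np.fft.rfft(a, m) * np.fft.rfft(b, m), m)[: 2 * n - 1]

conv_direct = np.convolve(a, b)            # O(n^2) reference
print(np.max(np.abs(conv_fft - conv_direct)))   # agreement up to rounding
```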

Tue, 06 Jun 2017
14:00
L2

Analysis of Magnus expansion methods in the semiclassical regime

Pranav Singh
(Mathematical Institute)
Abstract


Magnus expansion based methods are an efficient class of integrators for solving Schrödinger equations that feature time-dependent potentials such as lasers. These methods have been found to be highly effective in computational quantum chemistry since the pioneering work of Tal-Ezer and Kosloff in the early 90s. The convergence of the Magnus expansion, however, is usually understood only for ODEs, and traditional analysis suggests a much poorer performance of these methods than is observed experimentally. It was not until the work of Hochbruck and Lubich in 2003 that a rigorous analysis justifying the application to PDEs with unbounded operators, such as the Schrödinger equation, was presented. In this talk we will extend this analysis to the semiclassical regime, where the highly oscillatory solution conventionally suggests large errors and a requirement for very small time steps.
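A minimal Magnus-type integrator can be sketched for a linear system y' = A(t) y: truncating the Magnus series after its first term and evaluating A at the midpoint gives the second-order exponential midpoint rule below. The test matrix and step counts are illustrative, standing in for the Schrödinger operators of the talk.

```python
import numpy as np
from scipy.linalg import expm

def magnus2(A, y0, T, n):
    """Exponential midpoint rule: Magnus series truncated after its first term,
    Omega ~ h * A(t + h/2), one matrix exponential per step."""
    h = T / n
    y, t = y0.copy(), 0.0
    for _ in range(n):
        y = expm(h * A(t + h / 2)) @ y
        t += h
    return y

# Time-dependent oscillator: A(t) at different times do not commute
A = lambda t: np.array([[0.0, 1.0],
                        [-(1.0 + 0.5 * np.sin(t)), 0.0]])
y0 = np.array([1.0, 0.0])
y_h = magnus2(A, y0, 5.0, 400)
y_ref = magnus2(A, y0, 5.0, 6400)        # fine-step reference solution
err = np.linalg.norm(y_h - y_ref)
print(err)   # small: the scheme is second-order accurate
```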
 

Tue, 30 May 2017
14:30
L5

New approaches for global optimization methods

Adilet Otemisov
(Mathematical Institute and Alan Turing Institute)
Abstract


We present some dimensionality reduction techniques for global optimisation algorithms, in order to increase their scalability. Inspired by ideas in machine learning, and extending the approach of random projections in Zhang et al. (2016), we present some new algorithmic approaches for global optimisation with theoretical guarantees of good behaviour and encouraging numerical results.
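The random projection idea can be sketched generically (illustrative sizes, with plain random search standing in for any global solver; this is not the authors' new algorithm): a high-dimensional function with low effective dimensionality is minimised over a random low-dimensional subspace x = A y.

```python
import numpy as np

rng = np.random.default_rng(8)
D, d = 100, 6                          # ambient and embedding dimensions

# f depends only on its first two coordinates (effective dimension 2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

A = rng.standard_normal((D, d))        # random embedding matrix
g = lambda y: f(A @ y)                 # reduced d-dimensional problem

# Crude random search in the embedded space
best = np.inf
for _ in range(20000):
    y = 3.0 * rng.standard_normal(d)
    best = min(best, g(y))
print(best)   # small: the global minimum is attainable within the subspace
```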
 

Tue, 30 May 2017
14:00
L5

Derivative-free optimisation methods for nonlinear least-squares problems

Lindon Roberts
(Mathematical Institute)
Abstract

Derivative-free optimisation (DFO) algorithms are a category of optimisation methods for situations when one is unable to compute or estimate derivatives of the objective, such as when the objective has noise or is very expensive to evaluate. In this talk I will present a flexible DFO framework for unconstrained nonlinear least-squares problems, and in particular discuss its performance on noisy problems.
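A minimal DFO illustration on a small nonlinear least-squares problem (simple compass/coordinate search, far cruder than the model-based framework of the talk, with an illustrative test problem): the method uses only objective evaluations, never derivatives.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    """Coordinate search: poll +/- step along each axis, shrink on failure."""
    x, fx = x0.copy(), f(x0)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for s in (1.0, -1.0):
                y = x.copy()
                y[i] += s * step
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5            # no axis direction helped: refine the stencil
        it += 1
    return x, fx

# Least-squares residuals r(x) with r = 0 at (1, 2)
r = lambda x: np.array([x[0] - 1.0, 10.0 * (x[1] - 2.0), x[0] * x[1] - 2.0])
f = lambda x: 0.5 * np.sum(r(x) ** 2)
x, fx = compass_search(f, np.array([0.0, 0.0]))
print(x, fx)   # x ~ [1, 2], fx ~ 0
```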

Tue, 23 May 2017
14:30
L5

The 2017 Problem Solving Squad

Problem Solving Squad (Roberts, Wechsung, Roy et al.)
(Mathematical Institute)
Abstract

Each year Prof. Trefethen gives the Problem Solving Squad a sequence of problems with no hints, one a week, where the solution of each problem is a single real number to be computed by any method available.  We will present this year's three problems, involving (1) an S-shaped bifurcation curve, (2) shortest path around a random web, and (3) switching a time-varying system to maximize a matrix norm.

 

The 14 students this year are Simon Vary plus InFoMM cohort 2: Matteo Croci, Davin Lunz, Michael McPhail, Tori Pereira, Lindon Roberts, Caoimhe Rooney, Ian Roper, Thomas Roy, Tino Sulzer, Bogdan Toader, Florian Wechsung, Jess Williams, and Fabian Ying.  The presentations will be by (1) Lindon Roberts, (2) Florian Wechsung, and (3) Thomas Roy.

Tue, 23 May 2017
14:00
L5

Sparse Kerdock matrices for compressive sensing

Andrew Thompson
(Mathematical Institute)
Abstract

Delsarte-Goethals frames are a popular choice for deterministic measurement matrices in compressive sensing. I will show that it is possible to construct extremely sparse matrices which share precisely the same row space as Delsarte-Goethals frames. I will also describe the combinatorial block design underlying the construction and make a connection to Steiner equiangular tight frames.
 

Tue, 16 May 2017
14:00
L5

Random functions in Chebfun

Nick Trefethen
(Mathematical Institute)
Abstract

What's the continuous analog of randn? In recent months I've been exploring such questions with Abdul-Lateef Haji-Ali and other members of the Chebfun team, and Chebfun now has commands randnfun, randnfun2, randnfunsphere, and randnfundisk. These are based on finite Fourier series with random coefficients, and interesting questions arise in the "white noise" limit as the lengths of the series approach infinity and as random ODEs become stochastic DEs. This work is at an early stage and we are grateful for input from stochastic experts at Oxford and elsewhere.
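A rough Python analogue of randnfun (an illustrative sketch, not Chebfun's implementation) builds a finite Fourier series with independent normal coefficients, scaled so that the pointwise variance is 1 regardless of the truncation length; letting the number of wavenumbers grow approaches the white noise limit mentioned above.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100                                    # wavenumbers; n -> inf gives white noise
x = np.linspace(0.0, 1.0, 2001)
a = rng.standard_normal(n + 1)             # random cosine coefficients
b = rng.standard_normal(n + 1)             # random sine coefficients
k = np.arange(n + 1)
# cos^2 + sin^2 = 1 per wavenumber, so dividing by sqrt(n + 1) gives
# pointwise variance 1 at every x
f = (a @ np.cos(2 * np.pi * np.outer(k, x)) +
     b @ np.sin(2 * np.pi * np.outer(k, x))) / np.sqrt(n + 1)
print(f.std())   # ~ 1 by construction
```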