Forthcoming events in this series


Tue, 03 May 2016
14:00
L3

Modelling weakly coupled nonlinear oscillators: volcanism and glacial cycles

Jonathan Burley
(Department of Earth Science, University of Oxford)
Abstract

This talk will be a geophysicist's view on the emerging properties of a numerical model representing the Earth's climate and volcanic activity over the past million years.

The model contains a 2D ice sheet (Glen's Law solved with a semi-implicit scheme), an energy balance for the atmosphere and planet surface (explicit), and an ODE for the time evolution of CO2 (explicit).

The dependencies between these models generate behaviour similar to weakly coupled nonlinear oscillators.

Tue, 26 Apr 2016
14:30
L3

Applications of minimum rank of matrices described by a graph or sign pattern

Leslie Hogben
(Iowa State University)
Abstract

Low-rank compression of matrices and tensors is a huge and growing business.  Closely related is low-rank compression of multivariate functions, a technique used in Chebfun2 and Chebfun3.  Not all functions can be compressed, so the question becomes, which ones?  Here we focus on two kinds of functions for which compression is effective: those with some alignment with the coordinate axes, and those dominated by small regions of localized complexity.
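
The flavour of such compression can be seen in a few lines of numpy (an illustrative sketch, not Chebfun2 itself): sampling a smooth bivariate function on a grid and inspecting its singular values shows how quickly it collapses to low rank.

```python
import numpy as np

# Sample a smooth bivariate function on a tensor grid and examine how
# quickly its singular values decay -- fast decay means the function
# compresses well into low-rank (separable) form, as in Chebfun2.
n = 200
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
F = np.cos(X * Y)            # smooth, numerically low-rank

s = np.linalg.svd(F, compute_uv=False)
print(s[10] / s[0])          # near machine precision: rank ~10 suffices
```

A function with localized complexity (say, a sharp bump in one corner) shows the same effect after the bump's region is resolved, which is the second class of compressible functions mentioned above.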

Tue, 26 Apr 2016
14:00
L3

Best L1 polynomial approximation

Yuji Nakatsukasa
(University of Oxford)
Abstract

An important observation in compressed sensing is the exact recovery of an l0 minimiser to an underdetermined linear system via the l1 minimiser, given the knowledge that a sparse solution vector exists. Here, we develop a continuous analogue of this observation and show that the best L1 and L0 polynomial approximants of a corrupted function (continuous analogue of sparse vectors) are equivalent. We use this to construct best L1 polynomial approximants of corrupted functions via linear programming. We also present a numerical algorithm for computing best L1 polynomial approximants to general continuous functions, and observe that compared with best L-infinity and L2 polynomial approximants, the best L1 approximants tend to have error functions that are more localized.

Joint work with Alex Townsend (MIT).
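
The linear-programming route can be illustrated on a discrete grid (a sketch using scipy; the talk's method operates on the continuous problem, so this is only the standard discretized analogue):

```python
import numpy as np
from scipy.optimize import linprog

def best_l1_poly(x, f, deg):
    """Discretized best L1 polynomial approximation on the grid x:
    minimize sum_i |f(x_i) - p(x_i)| over polynomials p of degree deg,
    via the standard LP reformulation with slack variables t_i >= 0."""
    m = len(x)
    V = np.vander(x, deg + 1, increasing=True)   # columns 1, x, x^2, ...
    # variables: [c_0..c_deg, t_1..t_m]; objective sum(t)
    cost = np.concatenate([np.zeros(deg + 1), np.ones(m)])
    # constraints:  V c - t <= f   and  -V c - t <= -f, i.e. |Vc - f| <= t
    A = np.block([[V, -np.eye(m)], [-V, -np.eye(m)]])
    b = np.concatenate([f, -f])
    bounds = [(None, None)] * (deg + 1) + [(0, None)] * m
    res = linprog(cost, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[:deg + 1]

# sanity check: a function that is itself a polynomial is recovered exactly
x = np.linspace(-1, 1, 50)
c = best_l1_poly(x, 1 + 2*x - x**2, deg=2)
print(np.round(c, 6))        # approx [ 1.  2. -1.]
```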

Tue, 08 Mar 2016
14:30
L3

Homogenized boundary conditions and resonance effects in Faraday cages

Dave Hewett
(University of Oxford)
Abstract

The Faraday cage effect is the phenomenon whereby electrostatic and electromagnetic fields are shielded by a wire mesh "cage". Nick Trefethen, Jon Chapman and I recently carried out a mathematical analysis of the two-dimensional electrostatic problem with thin circular wires, demonstrating that the shielding effect is not as strong as one might infer from the physics literature. In this talk I will present new results generalising the previous analysis to the electromagnetic case, and to wires of arbitrary shape. The main analytical tool is the asymptotic method of multiple scales, which is used to derive continuum models for the shielding, involving homogenized boundary conditions on an effective cage boundary. In the electromagnetic case one observes interesting resonance effects, whereby at frequencies close to the natural frequencies of the equivalent solid shell, the presence of the cage actually amplifies the incident field, rather than shielding it. We discuss applications to radiation containment in microwave ovens and acoustic scattering by perforated shells. This is joint work with Ian Hewitt.

Tue, 01 Mar 2016
14:30
L3

Kerdock matrices and the efficient quantization of subsampled measurements

Andrew Thompson
(University of Oxford)
Abstract

Kerdock matrices are an attractive choice as deterministic measurement matrices for compressive sensing. I'll explain how Kerdock matrices are constructed, and then show how they can be adapted to one particular strategy for quantizing measurements, in which measurements exceeding the desired dynamic range are rejected.

Tue, 16 Feb 2016
14:30
L5

How accurate must solves be in interior point methods?

Tyrone Rees
(Rutherford Appleton Laboratory)
Abstract

At the heart of the interior point method in optimization is a linear system solve, but how accurate must this solve be?  The behaviour of such methods is well understood when a direct solver is used, but the scale of problems being tackled today means that users increasingly turn to iterative methods to approximate the solution.  Current suggestions for the accuracy required can be seen to be too stringent, leading to inefficiency.

In this talk I will give conditions on the accuracy of the solution that guarantee the inexact interior point method converges at the same rate as if the solves were exact.  These conditions can be shown numerically to be tight, in that performance degrades rapidly if a weaker condition is used.  Finally, I will describe how the norms that appear in these conditions are related to the natural norms that are minimized in several popular Krylov subspace methods. This, in turn, could help in the development of new preconditioners in this important field.

Tue, 16 Feb 2016
14:00
L5

Block operators and spectral discretizations

Jared Aurentz
(University of Oxford)
Abstract

Operators, functions, and functionals are combined in many problems of computational science in a fashion that has the same logical structure as is familiar for block matrices and vectors.  It is proposed that the explicit consideration of such block structures at the continuous as opposed to discrete level can be a useful tool.  In particular, block operator diagrams provide templates for spectral discretization by the rectangular differentiation, integration, and identity matrices introduced by Driscoll and Hale.  The notion of the rectangular shape of a linear operator can be made rigorous by the theory of Fredholm operators and their indices, and the block operator formulations apply to nonlinear problems too, where the convention is proposed of representing nonlinear blocks as shaded.  At each step of a Newton iteration, the structure is linearized and the blocks become unshaded, representing Fréchet derivative operators, square or rectangular.  The use of block operator diagrams makes it possible to precisely specify discretizations of even rather complicated problems with just a few lines of pseudocode.

[Joint work with Nick Trefethen]
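
The rectangular-block idea can be sketched with finite differences in place of spectral discretizations (an assumption made purely for brevity; Driscoll and Hale's matrices are Chebyshev-based). A first-derivative operator with one boundary condition is naturally an (n-1) x n block stacked with a 1 x n boundary row:

```python
import numpy as np

# Sketch of the "rectangular operator" idea: to solve u' = f on [0,1]
# with u(0) = 1, discretize the derivative as an (n-1) x n map from
# nodal values to midpoint derivatives, then append one boundary row
# to square up the block system [D; e_0] u = [f; 1].
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
mid = 0.5 * (x[:-1] + x[1:])

D = np.zeros((n - 1, n))                 # rectangular differentiation
idx = np.arange(n - 1)
D[idx, idx] = -1.0 / h
D[idx, idx + 1] = 1.0 / h

e0 = np.zeros((1, n)); e0[0, 0] = 1.0    # boundary functional u(0)

A = np.vstack([D, e0])                   # square (n x n) after stacking
rhs = np.concatenate([2.0 * mid, [1.0]]) # f(x) = 2x, u(0) = 1
u = np.linalg.solve(A, rhs)
print(np.max(np.abs(u - (x**2 + 1))))    # exact for quadratics, up to roundoff
```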

Tue, 09 Feb 2016

14:00 - 14:30
L5

Regularization methods - varying the power, the smoothness and the accuracy

Coralia Cartis
(University of Oxford)
Abstract

Adaptive cubic regularization methods have recently emerged as a credible alternative to line search and trust-region for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general class of adaptive regularization methods, that use first- or higher-order local Taylor models of the objective regularized by a(ny) power of the step size. We investigate the worst-case complexity/global rate of convergence of these algorithms, in the presence of varying (unknown) smoothness of the objective. We find that some methods automatically adapt their complexity to the degree of smoothness of the objective; while others take advantage of the power of the regularization step to satisfy increasingly better bounds with the order of the models. This work is joint with Nick Gould (RAL) and Philippe Toint (Namur).
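
A one-dimensional toy sketch of the cubic-regularization mechanism, with the weight sigma adapted by the usual actual-versus-predicted decrease ratio (illustrative only, not the general methods analysed in the talk):

```python
import numpy as np

def arc_step(g, H, sigma):
    """Global minimizer of the 1D cubic-regularized model
    m(s) = g*s + H*s^2/2 + (sigma/3)*|s|^3, found in closed form."""
    phi = lambda s: g*s + 0.5*H*s*s + (sigma/3.0)*abs(s)**3
    cands = [0.0]
    d = H*H - 4.0*sigma*g            # stationary points with s > 0
    if d >= 0.0:
        r = (-H + np.sqrt(d)) / (2.0*sigma)
        if r > 0.0:
            cands.append(r)
    d = H*H + 4.0*sigma*g            # stationary points with s < 0
    if d >= 0.0:
        r = (H - np.sqrt(d)) / (2.0*sigma)
        if r < 0.0:
            cands.append(r)
    return min(cands, key=phi)

# adaptive cubic regularization on f(x) = (x^2 - 1)^2, started at x = 3
f = lambda x: (x*x - 1.0)**2
fp = lambda x: 4.0*x*(x*x - 1.0)
fpp = lambda x: 12.0*x*x - 4.0

x, sigma = 3.0, 1.0
for _ in range(200):
    if abs(fp(x)) < 1e-8:
        break
    s = arc_step(fp(x), fpp(x), sigma)
    pred = -(fp(x)*s + 0.5*fpp(x)*s*s + (sigma/3.0)*abs(s)**3)
    rho = (f(x) - f(x + s)) / pred   # actual vs predicted decrease
    if rho > 0.1:                    # successful step: accept, relax sigma
        x += s
        sigma = max(0.5*sigma, 1e-4)
    else:                            # unsuccessful: increase regularization
        sigma *= 2.0
print(x)                             # converges to the minimizer x = 1
```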

Tue, 19 Jan 2016

14:30 - 15:00
L5

Sparse information representation through feature selection

Thanasis Tsanas
(University of Oxford)
Abstract
In this talk I present a range of feature selection methods aimed at detecting the most parsimonious subset of characteristics/features/genes. This sparse representation always leads to simpler, more interpretable models, and may also improve prediction accuracy. I survey some state-of-the-art algorithms, and discuss a novel approach which is both computationally attractive and seems to work very effectively across a range of domains, in particular for fat datasets (those with many more features than samples).
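
As a concrete baseline for the parsimony idea, here is a minimal greedy forward-selection sketch (one classical method, not the novel approach of the talk):

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy forward feature selection: repeatedly add the feature
    that most reduces the least-squares residual on (X, y)."""
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(k):
        def residual(j):
            cols = X[:, selected + [j]]
            r = y - cols @ np.linalg.lstsq(cols, y, rcond=None)[0]
            return r @ r
        best = min(remaining, key=residual)
        selected.append(best)
        remaining.remove(best)
    return selected

# synthetic check: y depends only on features 0 and 3
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = 2.0 * X[:, 0] - 3.0 * X[:, 3] + 0.01 * rng.standard_normal(200)
print(sorted(forward_select(X, y, 2)))   # [0, 3]
```
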
Tue, 24 Nov 2015

14:30 - 15:00
L5

Geometric integrators in optimal control theory

Sina Ober-Blöbaum
(University of Oxford)
Abstract
Geometric integrators are structure-preserving integrators whose goal is to capture the dynamical system's behaviour as faithfully as possible. Using structure-preserving methods for the simulation of mechanical systems, specific properties of the underlying system are handed down to the numerical solution: for example, the energy of a conservative system shows no numerical drift, and momentum maps induced by symmetries are preserved exactly. One particular class of geometric integrators is the class of variational integrators. They are derived from a discrete variational principle based on a discrete action function that approximates the continuous one. The resulting schemes are symplectic-momentum conserving and exhibit good energy behaviour.
 
For the numerical solution of optimal control problems, direct methods are based on a discretization of the underlying differential equations which serve as equality constraints for the resulting finite dimensional nonlinear optimization problem. For the case of mechanical systems, we use variational integrators for the discretization of optimal control problems. By analyzing the adjoint systems of the optimal control problem and its discretized counterpart, we prove that for these particular integrators optimization and discretization commute due to the symplecticity of the discretization scheme. This property guarantees that the convergence rates are preserved for the adjoint system which is also referred to as the Covector Mapping Principle. 
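
The hallmark good energy behaviour can be seen with the simplest variational integrator, Störmer-Verlet (leapfrog), applied here to a harmonic oscillator as an illustrative sketch:

```python
import numpy as np

# Stormer-Verlet (leapfrog), the simplest variational integrator,
# applied to the harmonic oscillator q'' = -q.  Being symplectic, it
# shows no energy drift: the energy error stays bounded (O(h^2))
# over arbitrarily long times, unlike e.g. explicit Euler.
h, n = 0.01, 100_000
q, p = 1.0, 0.0
energies = np.empty(n)
for i in range(n):
    p -= 0.5 * h * q          # half kick
    q += h * p                # drift
    p -= 0.5 * h * q          # half kick
    energies[i] = 0.5 * (p * p + q * q)

drift = np.max(np.abs(energies - 0.5))
print(drift)                  # stays bounded (~1e-5) even after t = 1000
```
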
Tue, 24 Nov 2015

14:00 - 14:30
L5

Numerical calculation of permanents

Peter McCullagh
(University of Chicago)
Abstract
The $\alpha$-permanent of a square matrix is a determinant-style sum, with $\alpha=-1$ corresponding to the determinant, $\alpha=1$ to the ordinary permanent, and $\alpha=0$ to the Hamiltonian sum over cyclic permutations.  Exact computation of permanents is notoriously difficult; numerical computation using the best algorithm for $\alpha=1$ is feasible for matrices of order about 25--30; numerical computation for general $\alpha$ is feasible only for $n < 12$.  I will describe briefly how the $\alpha$-permanent arises in statistical work as the probability density function of the Boson point process, and I will discuss the level of numerical accuracy needed for statistical applications.  My hope is that, for sufficiently large matrices, it may be possible to develop a non-stochastic polynomial-time approximation of adequate accuracy.
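
For concreteness, a brute-force $n!$ evaluation of the $\alpha$-permanent under one common convention, $\mathrm{per}_\alpha(A)=\sum_\sigma \alpha^{c(\sigma)}\prod_i a_{i\sigma(i)}$ with $c(\sigma)$ the number of cycles, so that $\mathrm{per}_1$ is the ordinary permanent and $\mathrm{per}_{-1}=(-1)^n\det A$ (an illustrative sketch only, usable for tiny matrices):

```python
from itertools import permutations
from math import prod

def alpha_permanent(A, alpha):
    """Brute-force alpha-permanent: sum over permutations sigma of
    alpha^(number of cycles of sigma) * prod_i A[i][sigma(i)].
    Cost is n!, so this is only for small matrices."""
    n = len(A)
    total = 0.0
    for sigma in permutations(range(n)):
        # count the cycles of sigma
        seen, cycles = [False] * n, 0
        for i in range(n):
            if not seen[i]:
                cycles += 1
                j = i
                while not seen[j]:
                    seen[j] = True
                    j = sigma[j]
        total += alpha**cycles * prod(A[i][sigma[i]] for i in range(n))
    return total

A = [[1, 2], [3, 4]]
print(alpha_permanent(A, 1))    # 10.0  (permanent: 1*4 + 2*3)
print(alpha_permanent(A, -1))   # -2.0  ((-1)^2 * det(A) = -2)
```
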
Tue, 17 Nov 2015

14:30 - 15:00
L5

A GPU Implementation of the Filtered Lanczos Procedure

Jared Aurentz
(University of Oxford)
Abstract

This talk describes a graphics processing unit (GPU) implementation of the Filtered Lanczos Procedure for the solution of large, sparse, symmetric eigenvalue problems. The Filtered Lanczos Procedure uses a carefully chosen polynomial spectral transformation to accelerate the convergence of the Lanczos method when computing eigenvalues within a desired interval. This method has proven particularly effective when matrix-vector products can be performed efficiently in parallel. We illustrate, via example, that the Filtered Lanczos Procedure implemented on a GPU can greatly accelerate eigenvalue computations for certain classes of symmetric matrices common in electronic structure calculations and graph theory. Comparisons against previously published CPU results suggest a typical speedup of at least a factor of $10$.

Tue, 17 Nov 2015

14:00 - 14:30
L5

A fast hierarchical direct solver for singular integral equations defined on disjoint boundaries and application to fractal screens

Mikael Slevinsky
(University of Oxford)
Abstract
Olver and I recently developed a fast and stable algorithm for the solution of singular integral equations. It is a new systematic approach for converting singular integral equations into almost-banded and block-banded systems of equations. The structures of these systems lend themselves to fast direct solution via the adaptive QR factorization. However, as the number of disjoint boundaries increases, the computational effectiveness deteriorates and specialized linear algebra is required.

Our starting point for specialized linear algebra is an alternative algorithm based on a recursive block LU factorization recently developed by Aminfar, Ambikasaran, and Darve. This algorithm specifically exploits the hierarchically off-diagonal low-rank structure arising from coercive singular integral operators of elliptic partial differential operators. The hierarchical solver involves a pre-computation phase independent of the right-hand side. Once this pre-computation factorizes the operator, the solution of many right-hand sides takes a fraction of the original time. Our fast direct solver allows for the exploration of reduced-basis problems, where the boundary density for any incident plane wave can be represented by a periodic Fourier series whose coefficients are in turn expanded in weighted Chebyshev or ultraspherical bases.
 
A fractal antenna uses a self-similar design to achieve excellent broadband performance. Similarly, a fractal screen uses a fractal such as a Cantor set to screen electromagnetic radiation. Hewett, Langdon, and Chandler-Wilde have shown recently that the density on the nth convergent to a fractal screen converges to a non-zero element in the suitable Sobolev space, resulting in a physically observable and persistent scattered field as n tends to infinity. We use our hierarchical solver to show numerical results for prefractal screens.
Tue, 10 Nov 2015

14:00 - 15:00
L5

BFO: a Brute Force trainable algorithm for mixed-integer and multilevel derivative-free optimization

Philippe Toint
(University of Namur)
Abstract

The talk will describe a new "Brute Force Optimizer" whose objective is to provide a very versatile derivative-free Matlab package for bound-constrained optimization, with the distinctive feature that it can be trained to improve its own performance on classes of problems specified by the user (rather than on a single-but-wide problem class chosen by the algorithm developer).  In addition, BFO can be used to optimize the performance of other algorithms and provides facilities for mixed-integer and multilevel problems, including constrained equilibrium calculations.