Forthcoming events in this series


Tue, 18 Jun 2019

14:30 - 15:00
L3

PathFinder: a toolbox for oscillatory quadrature

Andrew Gibbs
(KU Leuven)
Abstract

Highly oscillatory integrals arise in a range of wave-based problems. For example, they may occur when a boundary element basis has been enriched with oscillatory functions, or as part of a localised approximation to various short-wavelength phenomena. A range of contemporary methods exist for the efficient evaluation of such integrals. These methods have been shown to be very effective for model integrals, but may require expertise and manual intervention for integrals with higher complexity, and can be unstable in practice.

The PathFinder toolbox aims to develop robust and fully automated numerical software for a large class of oscillatory integrals. In this talk I will introduce the method of numerical steepest descent (the technique upon which PathFinder is based) with a few simple examples, which are also intended to highlight potential causes for numerical instability or manual intervention. I will then explain the novel approaches that PathFinder uses to avoid these. Finally I will present some numerical examples, demonstrating how to use the toolbox, convergence results, and an application to the parabolic wave equation.
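To give a flavour of the underlying technique, here is a minimal numerical steepest descent sketch (my own toy code, not the PathFinder interface) for the linear-phase model integral $\int_0^1 f(x)e^{i\omega x}\,dx$: each endpoint contribution is pushed onto the complex path $x = a + it/\omega$, where the oscillation becomes $e^{-t}$ decay, and handled by Gauss-Laguerre quadrature. It assumes $f$ is analytic near these paths; the function name is hypothetical.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def nsd_linear_phase(f, omega, n=20):
    """Numerical steepest descent for I = int_0^1 f(x) e^{i omega x} dx:
    each endpoint contribution is taken along the path x = a + i t/omega,
    on which the oscillatory factor becomes e^{i omega a} e^{-t}, and the
    remaining smooth integral is evaluated by n-point Gauss-Laguerre."""
    t, w = laggauss(n)

    def contrib(a):
        return np.exp(1j * omega * a) * (1j / omega) * np.sum(w * f(a + 1j * t / omega))

    return contrib(0.0) - contrib(1.0)

omega = 200.0
approx = nsd_linear_phase(lambda x: 1.0 / (1.0 + x), omega)
```

With 20 Laguerre points the cost is independent of $\omega$, whereas a direct quadrature rule would need thousands of points at $\omega = 200$; instabilities of the kind the talk addresses appear when the phase has coalescing stationary points, which this sketch does not handle.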

Tue, 18 Jun 2019

14:00 - 14:30
L3

Improving the scalability of derivative-free optimisation for nonlinear least-squares problems

Lindon Roberts
(Oxford)
Abstract

In existing techniques for model-based derivative-free optimisation, the computational cost of constructing local models and Lagrange polynomials can be high. As a result, these algorithms are not as suitable for large-scale problems as derivative-based methods. In this talk, I will introduce a derivative-free method based on exploration of random subspaces, suitable for nonlinear least-squares problems. This method has a substantially reduced computational cost (in terms of linear algebra), while still making progress using few objective evaluations.

Tue, 11 Jun 2019

14:30 - 15:00
L2

Integrated Approaches for Stochastic Chemical Kinetics

Pamela Burrage
(Queensland)
Abstract

In this talk I discuss how we can simulate stochastic chemical kinetics when there is a memory component. This can occur when there is spatial crowding within a cell or part of a cell, which acts to constrain the motion of the molecules and in turn changes the dynamics of the chemistry. The counterpart of the Law of Mass Action in this setting is obtained by replacing the first derivative in the ODE description of the Law of Mass Action by a time-fractional derivative, where the time-fractional index is between 0 and 1. There has been much discussion in the literature, some of it wrong, as to how we model and simulate stochastic chemical kinetics in the setting of a spatially-constrained domain – this is sometimes called anomalous diffusion kinetics.

In this presentation, I discuss some of these issues and then present two (equivalent) ways of simulating fractional stochastic chemical kinetics. The first is to replace the exponential waiting times used in Gillespie's SSA by Mittag-Leffler waiting times (MacNamara et al. [2]), which have longer tails than in the exponential case. The other approach is to use some theory developed by Jahnke and Huisinga [1], who are able to write down the underlying probability density function for any set of mono-molecular chemical reactions (under the standard Law of Mass Action) as a convolution of either binomial probability density functions or binomial and Poisson probability density functions. We can then extend the Jahnke and Huisinga formulation through the concept of iterated Brownian Motion paths to produce exact simulations of the underlying fractional stochastic chemical process. We demonstrate the equivalence of these two approaches through simulations, and also by computing the probability density function of the underlying fractional stochastic process, as described by the fractional chemical master equation, whose solution is the Mittag-Leffler matrix function. This is computed based on a clever algorithm for computing matrix functions by Cauchy contours (Weideman and Trefethen [3]).
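A minimal sketch of the first approach, assuming a single decay reaction A → ∅ and the standard Fulger-Scalas-Germano inversion formula for Mittag-Leffler sampling; the function names and the scale convention are my own illustrative choices, not necessarily those of MacNamara et al. [2].

```python
import numpy as np

def ml_wait(rate, alpha, rng):
    """Mittag-Leffler waiting time via the Fulger-Scalas-Germano inversion
    formula; reduces to Exp(rate) when alpha = 1. The scale convention
    rate**(-1/alpha) is one simple choice, not necessarily the talk's."""
    u, v = rng.random(), rng.random()
    scale = rate ** (-1.0 / alpha)
    if alpha == 1.0:
        return -scale * np.log(u)
    a = alpha * np.pi
    return -scale * np.log(u) * (np.sin(a) / np.tan(a * v) - np.cos(a)) ** (1.0 / alpha)

def ssa_decay(n0, c, t_end, alpha, rng):
    """Gillespie-style SSA for the decay reaction A -> 0 with propensity
    c*n, drawing (possibly heavy-tailed) Mittag-Leffler waiting times."""
    t, n, path = 0.0, n0, [(0.0, n0)]
    while n > 0:
        t += ml_wait(c * n, alpha, rng)
        if t > t_end:
            break
        n -= 1
        path.append((t, n))
    return path

rng = np.random.default_rng(42)
path = ssa_decay(50, 1.0, 100.0, 1.0, rng)   # alpha = 1: classical SSA
```

Taking $\alpha < 1$ produces the heavy-tailed waiting times characteristic of anomalous diffusion kinetics, while $\alpha = 1$ recovers the classical exponential case.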

This is joint work with Manuel Barrio (University of Valladolid, Spain), Kevin Burrage (QUT), Andre Leier (University of Alabama), Shev MacNamara (University of Technology Sydney) and T. Marquez-Lago (University of Alabama).

[1] T. Jahnke and W. Huisinga, 2007, Solving the chemical master equation for monomolecular reaction systems analytically, J. Math. Biology 54, 1, 1–26.
[2] S. MacNamara, B. Henry and W. McLean, 2017, Fractional Euler limits and their applications, SIAM J. Appl. Math. 77, 2, 447–469.
[3] J.A.C. Weideman and L.N. Trefethen, 2007, Parabolic and hyperbolic contours for computing the Bromwich integral, Math. Comp. 76, 1341–1356.

Tue, 11 Jun 2019

14:00 - 14:30
L2

The Additive Congruential Random Number (ACORN) Generator - pseudo-random sequences that are well distributed in k-dimensions

Roy S Wikramaratna
(REAMC Limited)
Abstract

ACORN generators represent an approach to generating uniformly distributed pseudo-random numbers which is straightforward to implement for arbitrarily large order $k$ and modulus $M=2^{30t}$ (integer $t$). They give long-period sequences which can be proven theoretically to approximate to uniformity in up to $k$ dimensions, while empirical statistical testing demonstrates that (with a few very simple constraints on the choice of parameters and the initialisation) the resulting sequences can be expected to pass all the current standard tests.
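The ACORN recurrence itself is only a few lines: keep an order-$k$ vector whose first entry is a fixed odd seed, add the entry below modulo $M$ at each step, and output the last entry scaled by $1/M$. The sketch below uses hypothetical names, with Python's big integers standing in for the fixed-point arithmetic of real implementations.

```python
class Acorn:
    """k-th order ACORN generator with modulus M = 2**(30*t) (a sketch;
    Python's arbitrary-precision integers play the role of the fixed-point
    arithmetic used in production implementations)."""
    def __init__(self, k=10, t=2, seed=123456789, init=None):
        self.M = 1 << (30 * t)           # M = 2**60 for t=2, 2**120 for t=4
        assert 0 < seed < self.M and seed % 2 == 1, "seed must be odd"
        # Y[0] holds the fixed seed; Y[1..k] are the k running modular sums.
        self.Y = [seed] + list(init if init is not None else [0] * k)

    def next(self):
        # Y[m] <- (Y[m] + Y[m-1]) mod M for m = 1..k, then output Y[k]/M.
        for m in range(1, len(self.Y)):
            self.Y[m] = (self.Y[m] + self.Y[m - 1]) % self.M
        return self.Y[-1] / self.M
```

With all-zero initial values the first outputs climb deterministically from zero before the sums first exceed $M$, so in this sketch a short warm-up is discarded before the stream is used.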

The standard TestU01 Crush and BigCrush Statistical Test Suites are used to demonstrate for ACORN generators with order $8 \leq k \leq 25$ that the statistical performance improves as the modulus increases from $2^{60}$ to $2^{120}$. With $M=2^{120}$ and $k \geq 9$, it appears that ACORN generators pass all the current TestU01 tests over a wide range of initialisations; results are presented that demonstrate the remarkable consistency of these results, and explore the limits of this behaviour.

This contrasts with corresponding results obtained for the widely-used Mersenne Twister MT19937 generator, which consistently failed on two of the tests in both the Crush and BigCrush test suites.

There are other pseudo-random number generators available which will also pass all the TestU01 tests. However, for the ACORN generators it is possible to go further: we assert that an ACORN generator might also be expected to pass any more demanding tests for $p$-dimensional uniformity that may be required in the future, simply by choosing the order $k>p$, the modulus $M=2^{30t}$ for sufficiently large $t$, together with any odd value for the seed and an arbitrary set of initial values. We note that there will be $M/2$ possible odd values for the seed, with each such choice of seed giving rise to a different $k$-th order ACORN sequence satisfying all the required tests.

This talk builds on and extends results presented at the recent discussion meeting on “Numerical algorithms for high-performance computational science” at the Royal Society London, 8-9 April 2019, see download link at bottom of web page http://acorn.wikramaratna.org/references.html.

Tue, 04 Jun 2019

14:30 - 15:00
L5

The dual approach to non-negative super-resolution: impact on primal reconstruction accuracy

Bogdan Toader
(Oxford)
Abstract

We study the problem of super-resolution using TV norm minimisation, where we recover the locations and weights of non-negative point sources from a few samples of their convolution with a Gaussian kernel. A practical approach is to solve the dual problem. In this talk, we study the stability of the primal solution with respect to the solution of the dual problem. In particular, we establish a relationship between perturbations in the dual variable and in the primal variables around the optimiser. This is achieved by applying a quantitative version of the implicit function theorem in a non-trivial way.

Tue, 04 Jun 2019

14:00 - 14:30
L5

Decentralised Sparse Multi-Task Regression

Dominic Richards
(Oxford)
Abstract

We consider a sparse multi-task regression framework for fitting a collection of related sparse models. Representing models as nodes in a graph with edges between related models, a framework that fuses lasso regressions with the total variation penalty is investigated. Under a form of generalised restricted eigenvalue assumption, bounds on prediction and squared error are given that depend upon the sparsity of each model and the differences between related models. This assumption relates to the smallest eigenvalue, restricted to the intersection of two cone sets, of the covariance matrix constructed from each of the agents' covariances. In the case of a grid topology, high-probability bounds are given that match, up to log factors, the no-communication setting of fitting a lasso on each model, divided by the number of agents. A decentralised dual method that exploits a convex-concave formulation of the penalised problem is proposed to fit the models and its effectiveness demonstrated on simulations. (Joint work with Sahand Negahban and Patrick Rebeschini)

Tue, 28 May 2019

14:30 - 15:00
L5

Optimisation of 1D Piecewise Smooth Functions

Jonathan Grant-Peters
(Oxford)
Abstract

Optimisation in 1D is far simpler than multidimensional optimisation, and this is largely due to the notion of a bracket. A bracket is a trio of points such that the middle point is the one with the smallest objective function value (of the three). The existence of a bracket is sufficient to guarantee that a continuous function has a local minimum within the bracket. The most stable 1D optimisation methods, such as Golden Section or Brent's Method, make use of this fact. The mentality behind these methods is to maintain a bracket at all times, all the while finding smaller brackets until the local minimum can be guaranteed to lie within a sufficiently small range. For smooth functions, Brent's method in particular converges quickly with a minimum of function evaluations required. However, when applied to piecewise smooth functions, it achieves its realistic worst-case convergence rate. In this presentation, I will present a new method which uses ideas from Brent's method and Golden Section, while being designed to converge quickly for piecewise smooth functions.
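For reference, a minimal golden-section search illustrating the bracket-maintenance idea (the standard textbook algorithm, not the new method of the talk): the interval always contains a local minimum, and each step reuses one interior evaluation.

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimise f on [a, b] by golden-section search: shrink a bracket
    that always contains a local minimum, reusing one interior point
    (and its function value) per iteration."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0        # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                              # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                    # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```

Note that nothing here requires smoothness: the same code locates the kink of $f(x) = |x - 0.3|$, which is exactly the situation where interpolation-based refinements can degrade to their worst case.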

Tue, 28 May 2019

14:00 - 14:30
L5

On divergence-free methods for double-diffusion equations in porous media

Paul Méndez
(Concepción)
Abstract

A stationary Navier-Stokes-Brinkman model coupled to a system of advection-diffusion equations serves as a model for so-called double-diffusive viscous flow in porous media, in which both heat and a solute within the fluid phase are subject to transport and diffusion. The solvability analysis of these governing equations results from a combination of compactness arguments and fixed-point theory. In addition, an H(div)-conforming discretisation is formulated by a modification of existing methods for Brinkman flows. The well-posedness of the discrete Galerkin formulation is also discussed, and convergence properties are derived rigorously. Computational tests confirm the predicted rates of error decay and illustrate the applicability of the methods for the simulation of bacterial bioconvection and thermohaline circulation problems.

Tue, 21 May 2019

14:30 - 15:00
L5

A Model-Based Derivative-Free Approach to Black-Box Adversarial Examples in Deep Learning

Giuseppe Ughi
(Oxford)
Abstract

Neural Network algorithms have achieved unprecedented performance in image recognition over the past decade. However, their application in real world use-cases, such as self-driving cars, raises the question of whether it is safe to rely on them.

We generally associate the robustness of these algorithms with how easy it is to generate an adversarial example: a tiny perturbation of an image which leads it to be misclassified by the Neural Net (which classifies the original image correctly). Neural Nets are strongly susceptible to such adversarial examples, but when the architecture of the target neural net is unknown to the attacker it becomes more difficult to generate these examples efficiently.

In this Black-Box setting, we frame the generation of an adversarial example as an optimisation problem solvable via derivative free optimisation methods. Thus, we introduce an algorithm based on the BOBYQA model-based method and compare this to the current state of the art algorithm.

Tue, 21 May 2019

14:00 - 14:30
L5

Time-Varying Matrix Problems and Zhang Neural Networks

Frank Uhlig
(Auburn)
Abstract

We adapt convergent look-ahead and backward finite difference formulas to compute future eigenvectors and eigenvalues of piecewise smooth time-varying matrix flows $A(t)$. This is based on the Zhang Neural Network model for time-varying problems and uses the associated error function

$E(t) = A(t)V(t) - V(t)D(t)$

with the Zhang design stipulation

$\dot{E}(t) = -\eta E(t)$.

Here $E(t)$ decreases exponentially over time for $\eta > 0$. It leads to a discrete-time differential equation of the form $P(t_k)\dot{z}(t_k) = q(t_k)$ for the eigendata vector $z(t_k)$ of $A(t_k)$. Convergent high-order look-ahead difference formulas then allow us to express $z(t_{k+1})$ in terms of earlier discrete $A$ and $z$ data. Numerical tests, comparisons and open questions follow.

Tue, 14 May 2019

14:30 - 15:00
L3

Deep artificial neural networks overcome the curse of dimensionality in PDE approximation

Timo Welti
(ETHZ)
Abstract

Numerical simulations indicate that deep artificial neural networks (DNNs) seem to be able to overcome the curse of dimensionality in many computational problems in the sense that the number of real parameters used to describe the DNN grows at most polynomially in both the reciprocal of the prescribed approximation accuracy and the dimension of the function which the DNN aims to approximate. However, there are only a few special situations where results in the literature can rigorously explain the success of DNNs when approximating high-dimensional functions.

In this talk it is revealed that DNNs do indeed overcome the curse of dimensionality in the numerical approximation of Kolmogorov PDEs with constant diffusion and nonlinear drift coefficients. A crucial ingredient in our proof of this result is the fact that the artificial neural network used to approximate the PDE solution really is a deep artificial neural network with a large number of hidden layers.

Tue, 14 May 2019

14:00 - 14:30
L3

Fast Graph Sampling using Gershgorin Disc Alignment

Gene Cheung
(York University)
Abstract

Graph sampling with noise is a fundamental problem in graph signal processing (GSP). A popular biased scheme using graph Laplacian regularization (GLR) solves a system of linear equations for its reconstruction. Assuming this GLR-based reconstruction scheme, we propose a fast sampling strategy to maximize the numerical stability of the linear system, i.e., to minimize the condition number of the coefficient matrix. Specifically, we maximize the eigenvalue lower bound of the coefficient matrix given by the left-ends of its Gershgorin discs, without eigen-decomposition. We propose an iterative algorithm that traverses the graph nodes via Breadth First Search (BFS) and aligns the left-ends of all corresponding Gershgorin discs at a lower-bound threshold $T$ using two basic operations: disc shifting and scaling. We then perform binary search to maximize $T$ given a sample budget $K$. Experiments on real graph data show that the proposed algorithm can effectively promote large eigenvalue lower bounds, and that the reconstruction MSE is the same or smaller than that of existing sampling methods for different budgets $K$, at much lower complexity.
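The disc left-ends in question are cheap to compute. As a minimal illustration (the Gershgorin bound only, not the alignment algorithm itself; the function name is mine):

```python
import numpy as np

def gershgorin_lower_bound(A):
    """Smallest left-end of the Gershgorin discs of A: each eigenvalue
    lies in some disc centred at a_ii with radius sum_{j != i} |a_ij|,
    so this is a lower bound on the (real parts of the) eigenvalues,
    obtained with no eigen-decomposition."""
    A = np.asarray(A, dtype=float)
    radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return float(np.min(np.diag(A) - radii))

A = np.array([[4.0, 1.0], [1.0, 5.0]])
```

For this $A$ the bound is $3$, while the smallest eigenvalue is $(9 - \sqrt{5})/2 \approx 3.38$; shifting and scaling discs, as in the talk, tightens such bounds without ever forming the spectrum.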

Tue, 07 May 2019

14:30 - 15:00
L5

Fireshape, a look under the hood

Alberto Paganini
(Oxford)
Abstract

Fireshape is a shape optimization library based on finite elements. In this talk I will describe how Florian Wechsung and I developed Fireshape and will share my reflections on lessons learned through the process.

Tue, 07 May 2019

14:00 - 14:30
L5

Sharp error bounds for Ritz vectors and approximate singular vectors

Yuji Nakatsukasa
(Oxford)
Abstract

We derive sharp bounds for the accuracy of approximate eigenvectors (Ritz vectors) obtained by the Rayleigh-Ritz process for symmetric eigenvalue problems. Using information that is available or easy to estimate, our bounds improve the classical Davis-Kahan sin-theta theorem by a factor that can be arbitrarily large, and can give nontrivial information even when the sin-theta theorem suggests that a Ritz vector might have no accuracy at all. We also present extensions in three directions, deriving error bounds for invariant subspaces, singular vectors and subspaces computed by a (Petrov-Galerkin) projection SVD method, and eigenvectors of self-adjoint operators on a Hilbert space.
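As a reminder of the process these bounds apply to, here is a minimal Rayleigh-Ritz sketch (the standard procedure with hypothetical names; the talk's new error bounds are not implemented here):

```python
import numpy as np

def rayleigh_ritz(A, Q):
    """Rayleigh-Ritz for symmetric A: project onto the subspace spanned
    by the orthonormal columns of Q, solve the small eigenproblem, and
    lift back. The Ritz pairs (theta_i, x_i) approximate eigenpairs of A."""
    T = Q.T @ A @ Q                      # small projected matrix
    theta, S = np.linalg.eigh(T)
    return theta, Q @ S                  # Ritz values and Ritz vectors
```

When the columns of Q span an exact invariant subspace the Ritz pairs are exact eigenpairs; the interesting question, addressed by the talk, is how accurate the Ritz vectors are when the subspace is only approximate.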

Tue, 30 Apr 2019

14:30 - 15:00
L3

Exponential integrators for stiff PDEs

Lloyd Nick Trefethen
(Oxford)
Abstract

Many time-dependent PDEs -- KdV, Burgers, Gray-Scott, Allen-Cahn, Navier-Stokes and many others -- combine a higher-order linear term with a lower-order nonlinear term.  This talk will review the method of exponential integrators for solving such problems with better than 2nd-order accuracy in time.
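The simplest member of this family is the exponential Euler (ETD1) scheme, sketched below for a scalar or diagonal stiff linear part: the linear term is integrated exactly and only the nonlinear term is approximated. This sketch (names my own) is first-order; the higher-order exponential Runge-Kutta variants the talk covers refine the treatment of $N(u)$.

```python
import numpy as np

def exp_euler(L, N, u0, t_end, steps):
    """Exponential Euler (ETD1) for u' = L u + N(u), with L scalar or
    diagonal (e.g. a Fourier-space discretisation of the linear term):
    u_{n+1} = e^{hL} u_n + phi_1(hL) * N(u_n),  phi_1(hL) = (e^{hL}-1)/L.
    The stiff linear part is handled exactly, so no stability restriction
    comes from L; for constant N the scheme is, in fact, exact."""
    L = np.asarray(L, dtype=float)
    h = t_end / steps
    E = np.exp(h * L)
    safe = np.where(L == 0.0, 1.0, L)            # avoid 0/0 when L = 0
    phi1 = np.where(L == 0.0, h, (E - 1.0) / safe)
    u = np.asarray(u0, dtype=float)
    for _ in range(steps):
        u = E * u + phi1 * N(u)
    return u

u = exp_euler(-1.0, lambda v: 1.0, 0.0, 2.0, 40)   # u' = -u + 1, u(0) = 0
```

Because $e^{hL}$ and $\varphi_1(hL)$ are precomputed, the per-step cost is the same as forward Euler while arbitrarily stiff $L$ causes no blow-up.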

Tue, 30 Apr 2019

14:00 - 14:30
L3

Computable upper error bounds for Krylov subspace approximations to matrix exponentials

Tobias Jawecki
(TU Wien)
Abstract

A defect-based a posteriori error estimate for Krylov subspace approximations to the matrix exponential is introduced. This error estimate constitutes an upper norm bound on the error and can be computed during the construction of the Krylov subspace with nearly no computational effort. The matrix exponential function itself can be understood as a time propagation with restarts. In practice, we are interested in finding time steps for which the error of the Krylov subspace approximation is smaller than a given tolerance. Finding correct time steps is a simple task with our error estimate. Apart from step size control, the upper error bound can be used on the fly to test if the dimension of the Krylov subspace is already sufficiently large to solve the problem in a single time step with the required accuracy.
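For context, the Krylov subspace approximation whose error is being estimated is the standard Arnoldi one: project $A$ onto $K_m(A,b)$ and exponentiate the small Hessenberg matrix. A minimal sketch (my own names; the talk's defect-based error estimate is not implemented here):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, b, m=20):
    """Arnoldi approximation of exp(A) @ b: build an orthonormal basis V
    of the Krylov space K_m(A, b), project to the (m x m) Hessenberg
    matrix H = V^T A V, and take exp(A) b ~ ||b|| * V @ expm(H) @ e_1."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown: result exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]
```

Only expm of the small matrix $H$ is ever formed; an a posteriori estimate of the kind described in the talk would let one grow $m$, or subdivide the time step, until a prescribed tolerance is met.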

Tue, 05 Mar 2019

14:30 - 15:00
L5

MLQMC Methods for Elliptic PDEs Driven by White Noise

Matteo Croci
(Oxford)
Abstract

When solving partial differential equations driven by additive spatial white noise, the efficient sampling of white noise realizations can be challenging. In this talk we focus on the efficient sampling of white noise using quasi-random points in a finite element method and multilevel Quasi Monte Carlo (MLQMC) setting. This work is an extension of previous research on white noise sampling for MLMC.

We express white noise as a wavelet series expansion that we divide into two parts. The first part is sampled using quasi-random points and contains a finite number of terms, in order of decaying importance, to ensure good QMC convergence. The second part is a correction term which is sampled using standard pseudo-random numbers.

We show how the sampling of both terms can be performed in linear time and memory complexity in the number of mesh cells via a supermesh construction. Furthermore, our technique can be used to enforce the MLQMC coupling even in the case of non-nested mesh hierarchies. We demonstrate the efficacy of our method with numerical experiments.

Tue, 05 Mar 2019

14:00 - 14:30
L5

A VEM discretization for the transmission eigenvalue problem

David Mora
(Universidad del Bio-Bio)
Abstract

In this talk, we analyze a virtual element method (VEM) for solving a non-selfadjoint fourth-order eigenvalue problem derived from the transmission eigenvalue problem. We write a variational formulation and propose a $C^1$-conforming discretization by means of the VEM. We use the classical approximation theory for compact non-selfadjoint operators to obtain optimal order error estimates for the eigenfunctions and a double order for the eigenvalues. Finally, we present some numerical experiments illustrating the behavior of the virtual scheme on different families of meshes.

Tue, 26 Feb 2019

14:30 - 15:00
L3

Multispectral snapshot demosaicing via non-convex matrix completion

Simon Vary
(Oxford)
Abstract

Snapshot mosaic multispectral imagery acquires an undersampled data cube by taking a single spectral measurement per spatial pixel. Sensors which acquire $p$ frequencies therefore suffer from severe $1/p$ undersampling of the full data cube. We show that the missing entries can be accurately imputed using non-convex techniques from sparse approximation and matrix completion, initialised with traditional demosaicing algorithms.

Tue, 26 Feb 2019

14:00 - 14:30
L3

New mixed finite element methods for natural convection with phase-change in porous media

Bryan Gómez Vargas
(Concepción)
Abstract

This talk is concerned with the mathematical and numerical analysis of a steady phase change problem for non-isothermal incompressible viscous flow. The system is formulated in terms of pseudostress, strain rate and velocity for the Navier-Stokes-Brinkman equation, whereas temperature, normal heat flux on the boundary, and an auxiliary unknown are introduced for the energy conservation equation. In addition, and as one of the novelties of our approach, the symmetry of the pseudostress is imposed in an ultra-weak sense, thanks to which the usual introduction of the vorticity as an additional unknown is no longer needed. Then, for the mathematical analysis two variational formulations are proposed, namely mixed-primal and fully-mixed approaches, and the solvability of the resulting coupled formulations is established by combining fixed-point arguments, Sobolev embedding theorems and certain regularity assumptions. We then construct corresponding Galerkin discretizations based on adequate finite element spaces, and derive optimal a priori error estimates. Finally, numerical experiments in 2D and 3D illustrate the interest of this scheme and validate the theory.

Tue, 19 Feb 2019

14:30 - 15:00
L3

Univariate and Multivariate Polynomials in Numerical Analysis

Lloyd N. Trefethen
(Oxford)
Abstract

We begin by reviewing numerical methods for problems in one variable and find that univariate polynomials are the starting point for most of them.  A similar review in several variables, however, reveals that multivariate polynomials are not so important.  Why?  On the other hand in pure mathematics, the field of algebraic geometry is precisely the study of multivariate polynomials.  Why?

Tue, 19 Feb 2019

14:00 - 14:30
L3

Stochastic Analysis and Correction of Floating Point Errors in Monte Carlo Simulations

Oliver Sheridan-Methven
(Oxford)
Abstract

In this talk we will show how the floating point errors in the simulation of SDEs (stochastic differential equations) can be modelled as stochastic. Furthermore, we will show how these errors can be corrected within a multilevel Monte Carlo approach which performs most calculations with low precision, but a few calculations with higher precision. The same procedure can also be used to correct for errors in converting from uniform random numbers to approximate Normal random numbers. Numerical results will be generated on both CPUs (using single/double precision) and GPUs (using half/single precision).
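The correction idea can be illustrated in a toy setting (my own example, far simpler than the SDE setting of the talk): estimate E[g(Z)] with many cheap float32 evaluations plus a small float64 correction that removes the low-precision bias.

```python
import numpy as np

def two_precision_mean(n_low, n_high, rng):
    """Two-level estimator of E[g(Z)], Z ~ N(0,1): a large batch of cheap
    float32 evaluations estimates E[g32], and a small float64 batch on
    matched inputs estimates the bias E[g64 - g32], mirroring the
    multilevel Monte Carlo correction idea."""
    g = lambda z: np.exp(z / 4)                  # toy payoff (my choice)
    z32 = rng.standard_normal(n_low).astype(np.float32)
    coarse = np.mean(g(z32), dtype=np.float64)   # cheap, slightly biased
    z = rng.standard_normal(n_high)              # fresh, independent inputs
    corr = np.mean(g(z) - g(z.astype(np.float32)).astype(np.float64))
    return coarse + corr

est = two_precision_mean(200_000, 5_000, np.random.default_rng(0))
```

Because $g_{64} - g_{32}$ has tiny variance, very few high-precision samples suffice for the correction; the talk applies the same structure with an SDE time-stepper in place of $g$, and with half/single precision on GPUs.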

Tue, 12 Feb 2019

14:30 - 15:00
L5

Optimization Relaxations in Dynamic Pricing

Jaroslav Fowkes
(Oxford)
Abstract

The idea of adjusting prices in order to sell goods at the highest acceptable price, such as haggling in a market, is as old as money itself. We consider the problem of pricing multiple products on a network of resources, such as that faced by an airline selling tickets on its flight network. In this talk I will consider various optimization relaxations to the deterministic dynamic pricing problem on a network. This is joint work with Raphael Hauser.

Tue, 12 Feb 2019

14:00 - 14:30
L5

Direct solvers for the Lippmann-Schwinger equation

Abinand Gopal
(Oxford)
Abstract

In recent years, there has been an increased interest in exploiting rank structure of matrices arising from the discretization of partial differential equations to develop fast direct solvers. In this talk, I will outline the fundamental ideas of this topic in the context of solving the integral equation formulation of the Helmholtz equation, known as the Lippmann-Schwinger equation, and will discuss some plans for future work to develop new, higher-order solvers. This is joint work with Gunnar Martinsson.

Tue, 05 Feb 2019

14:30 - 15:00
L5

An Introduction to Persistent Homology

Vidit Nanda
(Oxford)
Abstract

This talk will feature a brief introduction to persistent homology, the vanguard technique in topological data analysis. Nothing will be required of the audience beyond a willingness to row-reduce enormous matrices (by hand if we can, by machine if we must).