Tue, 03 Dec 2019
14:00
L1

On symmetrizing the ultraspherical spectral method for self-adjoint problems

Mikael Slevinsky
(University of Manitoba)
Abstract

A mechanism is described to symmetrize the ultraspherical spectral method for self-adjoint problems. The resulting discretizations are symmetric and banded. An algorithm is presented for an adaptive spectral decomposition of self-adjoint operators. Several applications are explored to demonstrate the properties of the symmetrizer and the adaptive spectral decomposition.
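The abstract is terse, but the structural fact underlying the ultraspherical method is that differentiation is banded between the right bases: differentiating a Chebyshev-T series lands in the ultraspherical C^(1) (Chebyshev-U) basis via a single superdiagonal. A small sketch of this background fact (generic illustration, not the talk's symmetrizer):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
# In the ultraspherical method, differentiation maps Chebyshev-T coefficients
# to ultraspherical C^(1) (= Chebyshev-U) coefficients: d/dx T_k = k U_{k-1}.
# The resulting operator D1 is banded (a single superdiagonal).
D1 = np.diag(np.arange(1.0, n), k=1)

def chebU_val(c, x):
    """Evaluate a Chebyshev-U series via the recurrence U_{k+1} = 2x U_k - U_{k-1}."""
    u_prev, u = np.ones_like(x), 2.0 * x
    total = c[0] * u_prev + c[1] * u
    for k in range(2, len(c)):
        u_prev, u = u, 2.0 * x * u - u_prev
        total = total + c[k] * u
    return total

rng = np.random.default_rng(0)
c = rng.standard_normal(n)          # Chebyshev-T coefficients of a test function
x = np.linspace(-1.0, 1.0, 9)
# Derivative via the banded operator agrees with direct Chebyshev differentiation.
print(np.max(np.abs(chebU_val(D1 @ c, x) - C.chebval(x, C.chebder(c)))))
```

The symmetrization mechanism of the talk acts on such banded operators; it is not reproduced here.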

 

Tue, 26 Nov 2019
14:00
L5

Subspace Gauss-Newton for Nonlinear Least-Squares

Constantin Puiu
(Oxford)
Abstract


Subspace methods have the potential to outperform conventional methods, as derivatives only need to be computed in a smaller-dimensional subspace. The sub-problem solved at each iteration is also smaller, so the linear algebra cost is lower. However, if the subspace is not selected properly, the progress per iteration can be significantly lower than that of the equivalent full-space method, making the subspace method more computationally expensive per unit of progress than its full-space alternative. Popular subspace selection methods (such as randomised sampling) fall into this category when the objective function has no known, exploitable structure. We provide a simple and effective rule for choosing the subspace in this unstructured setting. We focus on Gauss-Newton for nonlinear least-squares, but the idea generalises to other solvers and objective functions. We show theoretically that the cost of this strategy per unit of progress lies (approximately) between 50% and 100% of the cost of full Gauss-Newton, and give intuition for why, in practice, it should be closer to the favourable end of this range. We confirm these expectations with numerical experiments on the CUTEst test set, and compare the proposed selection method with randomised subspace selection. We also briefly show that the method is globally convergent, with a two-step quadratic asymptotic rate of convergence for zero-residual problems.
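The abstract does not state the authors' selection rule, so the sketch below uses a random coordinate subspace purely as a placeholder; it shows the shape of a subspace Gauss-Newton iteration and where a "proper" rule would plug in:

```python
import numpy as np

def subspace_gauss_newton(r, J, x0, dim, n_iters=500, tol=1e-10, seed=None):
    """Gauss-Newton restricted to a low-dimensional coordinate subspace.
    The talk's subspace selection rule is not given in the abstract, so
    random coordinates stand in here as a generic placeholder."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        res = r(x)
        if np.linalg.norm(res) < tol:
            break
        idx = rng.choice(x.size, size=min(dim, x.size), replace=False)
        Js = J(x)[:, idx]                               # Jacobian on the subspace
        ds, *_ = np.linalg.lstsq(Js, -res, rcond=None)  # reduced GN step
        x[idx] += ds
    return x

# Toy zero-residual problem: r(x) = A x - b, full dimension 4, subspace dimension 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
b = A @ rng.standard_normal(4)
x = subspace_gauss_newton(lambda x: A @ x - b, lambda x: A, np.zeros(4), dim=2, seed=1)
print(np.linalg.norm(A @ x - b))  # residual driven towards zero
```

Each iteration only ever forms a 6-by-2 Jacobian block, which is where the claimed linear-algebra savings come from.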
 

Tue, 19 Nov 2019
14:30
L5

An approximate message passing algorithm for compressed sensing MRI

Charles Millard
(Oxford)
Abstract

The Approximate Message Passing (AMP) algorithm is a powerful iterative method for reconstructing undersampled sparse signals. Unfortunately, AMP is sensitive to the type of sensing matrix employed and frequently encounters convergence problems. One case where AMP tends to fail is compressed sensing MRI, where Fourier coefficients of a natural image are sampled with variable density. An AMP-inspired algorithm constructed specifically for MRI is presented that exhibits a 'state evolution', where at every iteration the image estimate before thresholding behaves as the ground truth corrupted by Gaussian noise with known covariance. Numerical experiments explore the practical benefits of such effective noise behaviour.
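For i.i.d. Gaussian sensing matrices, where AMP does behave well, the recursion and its "effective noise" level are only a few lines; this is the textbook soft-thresholding AMP (the talk's MRI-tailored variant modifies this recursion, and is not reproduced here):

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, n_iters=30, alpha=2.0):
    """Textbook AMP with soft thresholding for y = A x, valid for i.i.d.
    Gaussian A.  The pseudo-data x + A^T z behaves like the ground truth
    plus Gaussian noise of level sigma -- the 'state evolution' property."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)           # effective noise level
        x = soft(x + A.T @ z, alpha * sigma)             # denoise the pseudo-data
        z = y - A @ x + (z / m) * np.count_nonzero(x)    # Onsager-corrected residual
    return x

rng = np.random.default_rng(0)
m, n, k = 250, 500, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = amp(A @ x0, A)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))  # small recovery error
```

Replacing the Gaussian A with a variable-density subsampled Fourier matrix is exactly the regime where this vanilla recursion tends to diverge, motivating the talk.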
 

Tue, 19 Nov 2019
14:00
L5

Quotient-Space Boundary Element Methods for Scattering at Multi-Screens

Carolina Urzua Torres
(Oxford)
Abstract


Boundary integral equations (BIEs) are well established for solving scattering at bounded, infinitely thin objects, so-called screens, which are modelled as “open surfaces” in 3D and as “open curves” in 2D. The unknowns of these BIEs are the jumps of traces across the screen $\Gamma$. Things change considerably when considering scattering at multi-screens: arbitrary arrangements of thin panels that may not even be locally orientable because of junction points (2D) or junction lines (3D). Indeed, the notion of jumps of traces is no longer meaningful at these junctions. This issue can be resolved by switching to a quotient-space perspective of traces, as done in recent work by Claeys and Hiptmair. In this talk, we present the extension of the quotient-space approach to the Galerkin boundary element (BE) discretization of first-kind BIEs. Unlike previous approaches, the new quotient-space BEM relies on minimal geometry information and requires no special treatment at junctions. Moreover, it allows for a rigorous numerical analysis.
 

Tue, 12 Nov 2019
14:00
L5

Computing multiple local minima of topology optimisation problems

Ioannis Papadopoulos
(Oxford)
Abstract

Topology optimisation finds the optimal material distribution of a fluid or solid in a domain, subject to PDE and volume constraints. There are many formulations and we opt for the density approach which results in a PDE, volume and inequality constrained, non-convex, infinite-dimensional optimisation problem without a priori knowledge of a good initial guess. Such problems can exhibit many local minima or even no minima. In practice, heuristics are used to obtain the global minimum, but these can fail even in the simplest of cases. In this talk, we will present an algorithm that solves such problems and systematically discovers as many of these local minima as possible along the way.  
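The abstract does not spell out the mechanism, but a standard device for systematically discovering multiple solutions of a nonlinear problem is deflation: once a solution is found, the residual is modified so that iterates cannot reconverge to it. A minimal scalar sketch of the idea (my illustration, not the speaker's PDE-scale algorithm):

```python
def deflate(x, known, power=2, shift=1.0):
    """Deflation factor M(x) = prod_k (1/|x - x_k|^p + shift)."""
    m = 1.0
    for xk in known:
        m *= 1.0 / abs(x - xk) ** power + shift
    return m

def newton(g, x, tol=1e-10, h=1e-7, max_iter=100):
    """Newton's method with a central finite-difference derivative."""
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        x -= gx * 2 * h / (g(x + h) - g(x - h))
    return x

f = lambda x: x ** 2 - 1                             # two roots: x = -1, 1
r1 = newton(f, 3.0)                                  # plain Newton finds x = 1
r2 = newton(lambda x: f(x) * deflate(x, [r1]), 3.0)  # deflated Newton finds x = -1
print(round(r1, 6), round(r2, 6))
```

Both runs start from the same initial guess; the deflation factor makes the already-found root repel the iteration, so the second run lands on the other solution.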

Tue, 05 Nov 2019
14:30
L5

Parameter Optimization in a Global Ocean Biogeochemical Model

Sophy Oliver
(Oxford)
Abstract

Ocean biogeochemical models used in climate change predictions are very computationally expensive and heavily parameterised. With derivatives too costly to compute, we optimise the parameters within one such model using derivative-free algorithms, aiming to find a good optimum in the fewest possible function evaluations. We compare the performance of the evolutionary algorithm CMA-ES, a stochastic global optimisation method requiring more function evaluations, with the Py-BOBYQA and DFO-LS algorithms, which are local derivative-free solvers requiring fewer evaluations. We also use initial Latin hypercube sampling to provide DFO-LS with a good starting point, in an attempt to find the global optimum with a local solver. This is joint work with Coralia Cartis and Samar Khatiwala.
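The sample-then-solve strategy fits in a few lines. Here scipy's Latin hypercube sampler and `least_squares` stand in for the actual biogeochemical misfit and the DFO-LS solver (toy stand-ins, not the authors' setup):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import qmc

# Toy misfit with several local minima, standing in for the (expensive)
# biogeochemical model-data misfit; p = (frequency, amplitude).
t = np.linspace(0.0, 1.0, 20)
data = np.sin(6.0 * t)
residuals = lambda p: p[1] * np.sin(p[0] * t) - data

lo, hi = np.array([0.0, 0.0]), np.array([10.0, 2.0])   # parameter box

# 1. Latin hypercube sample of the parameter box.
points = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(n=16), lo, hi)

# 2. Run the local least-squares solver from the best sampled point.
start = min(points, key=lambda p: np.sum(residuals(p) ** 2))
sol = least_squares(residuals, start, bounds=(lo, hi))
print(sol.x, sol.cost)
```

The Latin hypercube guarantees each parameter range is stratified evenly, so even 16 samples give reasonable coverage of the box before the local solver is spent on the most promising start.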
 

Tue, 05 Nov 2019
14:00
L5

Globally convergent least-squares optimisation methods for variational data assimilation

Maha Kaouri
(University of Reading)
Abstract

The variational data assimilation (VarDA) problem is usually solved using a method equivalent to Gauss-Newton (GN) to obtain the initial conditions for a numerical weather forecast. However, GN is not globally convergent and, if poorly initialised, may diverge, for example when a long time window is used in VarDA (a desirable setting, as longer windows allow more satellite data to be assimilated). To overcome this, we apply two globally convergent GN variants (line search and regularisation) to the long-window VarDA problem and show when they locate a more accurate solution than GN within the available time and computational cost.
This is joint work with Coralia Cartis, Amos S. Lawless, and Nancy K. Nichols.
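As a generic illustration of the regularisation variant, here is a Levenberg-Marquardt-style sketch (my stand-in on a standard test problem, not the authors' VarDA code):

```python
import numpy as np

def regularised_gn(r, J, x0, lam=1.0, n_iters=100, tol=1e-12):
    """Gauss-Newton with adaptive Tikhonov damping: far from a solution the
    lam * I term keeps steps short (globalising the iteration); near a
    solution lam shrinks and plain GN speed is recovered."""
    x = np.array(x0, dtype=float)
    cost = 0.5 * np.sum(r(x) ** 2)
    for _ in range(n_iters):
        res, Jx = r(x), J(x)
        g = Jx.T @ res
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -g)
        new_cost = 0.5 * np.sum(r(x + d) ** 2)
        if new_cost < cost:                 # progress made: accept, trust more
            x, cost, lam = x + d, new_cost, lam / 3.0
        else:                               # no progress: damp harder
            lam *= 3.0
    return x

# Zero-residual Rosenbrock test problem, started far from the solution (1, 1).
r = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
J = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
x = regularised_gn(r, J, [-1.2, 1.0])
print(x)  # close to (1, 1)
```

The accept/reject test on the true cost is what supplies the global convergence that plain GN lacks; the line-search variant replaces the damping with a step-length rule.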

Tue, 29 Oct 2019
14:30
L5

Deciphering pattern formation via normal forms

Priya Subramanian
(Oxford)
Abstract

Complex spatial patterns such as superlattice patterns and quasipatterns occur in a variety of physical systems, ranging from vibrated fluid layers to crystallising soft matter. Reduced-order models that describe such systems are usually PDEs. Close to a phase transition, modal expansion along with perturbation methods can be applied to convert the PDEs to normal form equations in the form of coupled ODEs. I use equivariant bifurcation theory along with homotopy methods (developed in computational algebraic geometry) to obtain all solutions of the normal form equations non-iteratively. I want to talk about how this approach allows us to ask new questions about the physical systems of interest, and what extensions to this method might be possible. This forms a step in my long-term interest in exploring how to better ‘complete’ a bifurcation diagram!
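The selling point of the algebraic approach is that it returns every steady state at once, rather than one per Newton run. A one-variable toy makes the contrast concrete, with companion-matrix root-finding standing in for the homotopy solver (my illustration, not the speaker's method):

```python
import numpy as np

mu = 0.5
# Steady states of the amplitude equation dA/dt = mu*A - A^3 satisfy
# A^3 - mu*A = 0.  np.roots returns *all* roots at once (via the
# companion matrix), whereas a Newton iteration yields only one root
# per starting guess.
states = np.sort(np.roots([1.0, 0.0, -mu, 0.0]).real)
print(states)  # the three equilibria: -sqrt(mu), 0, sqrt(mu)
```

Homotopy continuation generalises this "all solutions at once" guarantee to coupled polynomial normal form equations in several amplitudes.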

Tue, 29 Oct 2019
14:00 - 14:30
L5

Sketching for Linear Least Squares

Zhen Shao
(Oxford)
Abstract

We discuss sketching techniques for sparse linear least squares (LLS) problems, which perform a randomised dimensionality reduction for more efficient and scalable solutions. We give theoretical bounds for the accuracy of the sketched solution and residual when hashing matrices are used for sketching, carefully quantifying the trade-off between the coherence of the original, un-sketched matrix and the sparsity of the hashing matrix. We then use these bounds to quantify the success of our algorithm, which employs a sparse factorisation of the sketched matrix as a preconditioner for the original LLS problem before applying LSQR. We extensively compare our algorithm to state-of-the-art direct and iterative solvers for large-scale sparse LLS, with encouraging results.
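The sketch-factorise-precondition pipeline can be compressed into a short script. Here a 1-nonzero-per-column hashing matrix and a dense QR stand in for the paper's sparse machinery (an illustration, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
m, n, s = 2000, 50, 200
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Hashing sketch S: one random +-1 entry per column (stored densely here;
# real hashing matrices are kept sparse).
S = np.zeros((s, m))
S[rng.integers(0, s, size=m), np.arange(m)] = rng.choice([-1.0, 1.0], size=m)

# Factorise the small sketched matrix; R acts as a right preconditioner.
R = np.linalg.qr(S @ A)[1]
B = LinearOperator(
    (m, n),
    matvec=lambda v: A @ solve_triangular(R, v),
    rmatvec=lambda u: solve_triangular(R, A.T @ u, trans='T'),
)
y = lsqr(B, b, atol=1e-12, btol=1e-12)[0]   # well-conditioned LSQR solve
x = solve_triangular(R, y)                   # recover the LLS solution
print(np.linalg.norm(A.T @ (A @ x - b)))     # normal-equations residual
```

Because the sketch preserves the geometry of the column space, the preconditioned operator `A R^{-1}` is well conditioned and LSQR converges in few iterations, while all the factorisation work happens on the small s-by-n sketched matrix.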

Tue, 22 Oct 2019
14:00 - 14:30
L5

A neural network based policy iteration algorithm with global H^2-superlinear convergence for stochastic games on domains

Yufei Zhang
(Oxford)
Abstract

In this work, we propose a class of numerical schemes for solving semilinear Hamilton-Jacobi-Bellman-Isaacs (HJBI) boundary value problems, which arise naturally from exit time problems of diffusion processes with controlled drift. We exploit policy iteration to reduce the semilinear problem to a sequence of linear Dirichlet problems, which are subsequently approximated by a multilayer feedforward neural network ansatz. We establish that the numerical solutions converge globally in the H^2-norm, and further demonstrate that this convergence is superlinear by interpreting the algorithm as an inexact Newton iteration for the HJBI equation. Moreover, we construct the optimal feedback controls from the numerical value functions and deduce their convergence. The numerical schemes and convergence results are then extended to oblique derivative boundary conditions. Numerical experiments on the stochastic Zermelo navigation problem and the perpetual American option pricing problem illustrate the theoretical results and demonstrate the effectiveness of the method.
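In a finite-state setting the same policy-iteration structure reads as follows: each sweep solves one linear system for the current policy, mirroring how the talk reduces the semilinear HJBI problem to a sequence of linear Dirichlet problems (a discrete toy analogue, not the neural-network PDE solver of the talk):

```python
import numpy as np

def policy_iteration(P, c, gamma=0.9, max_iter=100):
    """Howard's policy iteration for the discrete Bellman equation
    v(s) = min_a [ c(a, s) + gamma * sum_t P(a, s, t) v(t) ]."""
    n_a, n_s, _ = P.shape
    pi = np.zeros(n_s, dtype=int)
    for _ in range(max_iter):
        P_pi = P[pi, np.arange(n_s)]                 # transition rows of the policy
        c_pi = c[pi, np.arange(n_s)]
        v = np.linalg.solve(np.eye(n_s) - gamma * P_pi, c_pi)  # one *linear* solve
        pi_new = (c + gamma * P @ v).argmin(axis=0)  # greedy policy improvement
        if np.array_equal(pi_new, pi):               # policy stable: Bellman eq. holds
            break
        pi = pi_new
    return v, pi

rng = np.random.default_rng(0)
n_a, n_s = 3, 5
P = rng.random((n_a, n_s, n_s))
P /= P.sum(axis=2, keepdims=True)                    # row-stochastic transitions
c = rng.random((n_a, n_s))
v, pi = policy_iteration(P, c)
print(pi)  # the converged optimal policy
```

The superlinear convergence claimed in the talk comes from viewing exactly this alternation (linear solve, then pointwise minimisation) as an inexact Newton iteration for the nonlinear equation.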
 
