Tue, 15 Oct 2019
14:00
L5

Wilkinson, numerical analysis, and me

Nick Trefethen
(Oxford)
Abstract

The two courses I took from Wilkinson as a graduate student at Stanford influenced me greatly. Along with some reminiscences of those days, this talk will touch upon backward error analysis, Gaussian elimination, and Évariste Galois. It was originally presented at the Wilkinson 100th Birthday conference in Manchester earlier this year.


Mon, 11 Nov 2019

14:15 - 15:15
L4

Green's function estimates and the Poisson equation

Ovidiu Munteanu
(University of Connecticut)

Abstract

The Green's function of the Laplace operator has been widely studied in geometric analysis. Manifolds admitting a positive Green's function are called nonparabolic. By work of Li and Yau, sharp pointwise decay estimates are known for the Green's function on nonparabolic manifolds of nonnegative Ricci curvature. The situation is more delicate when the curvature is not nonnegative everywhere. While pointwise decay estimates are generally not possible in this case, we have obtained sharp integral decay estimates for the Green's function on manifolds admitting a Poincaré inequality and an appropriate (negative) lower bound on Ricci curvature. This has applications to solving the Poisson equation, and to the study of the structure at infinity of such manifolds.
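For reference, the sharp pointwise bound of Li and Yau alluded to above can be stated as follows (a standard formulation in our notation, where $B_x(t)$ denotes the geodesic ball of radius $t$ centred at $x$): on a complete nonparabolic manifold with nonnegative Ricci curvature, the minimal positive Green's function satisfies

$$ G(x,y) \asymp \int_{d(x,y)}^{\infty} \frac{t}{\operatorname{vol}(B_x(t))}\,\mathrm{d}t, $$

with constants depending only on the dimension; for such manifolds, nonparabolicity is equivalent to the convergence of this integral.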

Tue, 03 Dec 2019
14:00
L1

On symmetrizing the ultraspherical spectral method for self-adjoint problems

Mikael Slevinsky
(University of Manitoba)
Abstract

A mechanism is described to symmetrize the ultraspherical spectral method for self-adjoint problems. The resulting discretizations are symmetric and banded. An algorithm is presented for an adaptive spectral decomposition of self-adjoint operators. Several applications are explored to demonstrate the properties of the symmetrizer and the adaptive spectral decomposition.
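For context, here is a minimal sketch of the standard (non-symmetric) ultraspherical discretization of Olver and Townsend for $u'' = f$ on $[-1,1]$ with homogeneous Dirichlet conditions; the operator is banded apart from the two boundary rows, and the talk's symmetrizer (not reproduced here) acts on discretizations of this kind. Dense matrices are used purely for brevity.

```python
# Standard ultraspherical spectral method for u'' = f on [-1,1], u(+-1) = 0.
# The talk's symmetrizer turns such almost-banded systems into symmetric
# banded ones; only the baseline discretization is sketched here.
import numpy as np
from numpy.polynomial import chebyshev

n = 64
# Differentiation: T_k'' = 2k C^(2)_{k-2}, so D2 is a single superdiagonal.
D2 = np.diag(2.0 * np.arange(2, n), k=2)
# Banded conversions S0: Chebyshev -> C^(1) and S1: C^(1) -> C^(2).
S0 = np.diag(np.r_[1.0, 0.5 * np.ones(n - 1)]) - np.diag(0.5 * np.ones(n - 2), k=2)
S1 = np.diag(1.0 / np.arange(1, n + 1)) - np.diag(1.0 / np.arange(3, n + 1), k=2)
# Dirichlet boundary rows: T_k(-1) = (-1)^k and T_k(1) = 1.
B = np.vstack([(-1.0) ** np.arange(n), np.ones(n)])
fc = chebyshev.chebinterpolate(np.exp, n - 1)   # Chebyshev coefficients of f = e^x
A = np.vstack([B, D2[:-2, :]])                  # 2 dense BC rows + banded interior
rhs = np.r_[0.0, 0.0, (S1 @ S0 @ fc)[:-2]]      # f converted to C^(2) coefficients
u = np.linalg.solve(A, rhs)                     # Chebyshev coefficients of u
print(chebyshev.chebval(0.0, u), 1.0 - np.cosh(1.0))  # computed vs exact u(0)
```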


Tue, 26 Nov 2019
14:00
L5

Subspace Gauss-Newton for Nonlinear Least-Squares

Constantin Puiu
(Oxford)
Abstract


Subspace methods have the potential to outperform conventional methods, as derivatives only need to be computed in a lower-dimensional subspace. The subproblem solved at each iteration is also smaller, so the linear algebra cost is lower. However, if the subspace is not selected properly, the progress per iteration can be far lower than that of the equivalent full-space method, making the subspace method more computationally expensive per unit of progress than its full-space alternative. Popular subspace selection methods (such as randomized sampling) fall into this category when the objective function has no known exploitable structure. We provide a simple and effective rule for choosing the subspace when the objective function is unstructured. We focus on Gauss-Newton for nonlinear least-squares, but the idea can be generalised to other solvers and objective functions. We show theoretically that the cost of this strategy per unit of progress lies approximately between 50% and 100% of the cost of full-space Gauss-Newton, and give an intuition for why, in practice, it should be closer to the favourable end of this range. We confirm these expectations by running numerical experiments on the CUTEst test set, and compare the proposed selection method with randomized subspace selection. We also briefly show that the method is globally convergent and has a 2-step quadratic asymptotic rate of convergence for zero-residual problems.
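To fix ideas, here is a generic subspace Gauss-Newton sketch. It is a placeholder illustration: the talk's selection rule is not reproduced, a random orthonormal subspace stands in for it, and a crude backtracking step replaces the globalisation behind the convergence guarantees.

```python
# Generic subspace Gauss-Newton for min_x 0.5 * ||r(x)||^2. The basis V is a
# random orthonormal n x s matrix here, standing in for the talk's selection
# rule; backtracking is used in lieu of a proper trust region.
import numpy as np

def subspace_gauss_newton(r, J, x0, s=1, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        V, _ = np.linalg.qr(rng.standard_normal((x.size, s)))  # subspace basis
        Js = J(x) @ V                     # only s directional derivatives needed
        p, *_ = np.linalg.lstsq(Js, -r(x), rcond=None)  # small s-dim GN step
        t = 1.0                           # crude backtracking safeguard
        while np.linalg.norm(r(x + t * (V @ p))) > np.linalg.norm(r(x)) and t > 1e-8:
            t *= 0.5
        x = x + t * (V @ p)               # update stays in the subspace
    return x

# Toy zero-residual problem: solve x0^2 + x1 = 1, x0 + x1^2 = 1.
r = lambda x: np.array([x[0] ** 2 + x[1] - 1.0, x[0] + x[1] ** 2 - 1.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(subspace_gauss_newton(r, J, [0.5, 0.5]))
```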

Tue, 19 Nov 2019
14:30
L5

An approximate message passing algorithm for compressed sensing MRI

Charles Millard
(Oxford)
Abstract

The Approximate Message Passing (AMP) algorithm is a powerful iterative method for reconstructing undersampled sparse signals. Unfortunately, AMP is sensitive to the type of sensing matrix employed and frequently encounters convergence problems. One case where AMP tends to fail is compressed sensing MRI, where Fourier coefficients of a natural image are sampled with variable density. An AMP-inspired algorithm constructed specifically for MRI is presented that exhibits a 'state evolution', where at every iteration the image estimate before thresholding behaves as the ground truth corrupted by Gaussian noise with known covariance. Numerical experiments explore the practical benefits of such effective noise behaviour.
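For background, the generic AMP iteration that the talk builds on couples a soft-thresholding denoiser with an Onsager correction term; a minimal sketch for an i.i.d. Gaussian sensing matrix, where state evolution is known to hold, is below. The threshold rule and problem sizes are illustrative choices, and this is the generic algorithm, not the MRI-specific variant of the talk.

```python
# Standard AMP for y = A x with sparse x: soft thresholding plus the Onsager
# term that keeps the effective noise Gaussian (state evolution).
import numpy as np

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
n, m, k = 400, 200, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)     # i.i.d. Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x

xt, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(m)   # effective noise level estimate
    xt_new = soft(xt + A.T @ z, tau)             # denoise the 'pseudo-data'
    z = y - A @ xt_new + z * np.count_nonzero(xt_new) / m   # Onsager correction
    xt = xt_new
print(np.linalg.norm(xt - x) / np.linalg.norm(x))  # relative reconstruction error
```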

Tue, 19 Nov 2019
14:00
L5

Quotient-Space Boundary Element Methods for Scattering at Multi-Screens

Carolina Urzúa-Torres
(Oxford)
Abstract


Boundary integral equations (BIEs) are well established for solving scattering at bounded infinitely thin objects, so-called screens, which are modelled as “open surfaces” in 3D and as “open curves” in 2D. The unknowns of these BIEs are the jumps of traces across the screen $\Gamma$. Things change considerably when considering scattering at multi-screens: arbitrary arrangements of thin panels that may not even be locally orientable because of junction points (2D) or junction lines (3D). Indeed, the notion of jumps of traces is no longer meaningful at these junctions. This issue can be solved by switching to a quotient-space perspective on traces, as done in recent work by Claeys and Hiptmair. In this talk, we present the extension of the quotient-space approach to the Galerkin boundary element (BE) discretization of first-kind BIEs. Unlike previous approaches, the new quotient-space BEM relies on minimal geometry information and requires no special treatment at junctions. Moreover, it allows for a rigorous numerical analysis.
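For orientation, a model first-kind BIE of the kind meant here is the weakly singular single-layer equation on a screen (Laplace kernel shown for simplicity; for a multi-screen the unknown must instead be interpreted in the quotient-space sense above): find the jump $u$ on $\Gamma$ such that, for given Dirichlet data $g$,

$$ \int_{\Gamma} \frac{u(\mathbf{y})}{4\pi\,|\mathbf{x}-\mathbf{y}|}\,\mathrm{d}s(\mathbf{y}) = g(\mathbf{x}), \qquad \mathbf{x}\in\Gamma. $$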

Tue, 12 Nov 2019
14:00
L5

Computing multiple local minima of topology optimisation problems

Ioannis Papadopoulos
(Oxford)
Abstract

Topology optimisation finds the optimal material distribution of a fluid or solid in a domain, subject to PDE and volume constraints. There are many formulations; we opt for the density approach, which results in a PDE-, volume- and inequality-constrained, non-convex, infinite-dimensional optimisation problem without a priori knowledge of a good initial guess. Such problems can exhibit many local minima, or even no minima. In practice, heuristics are used to obtain the global minimum, but these can fail even in the simplest of cases. In this talk, we will present an algorithm that solves such problems and systematically discovers as many of these local minima as possible along the way.
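As a point of reference, a common instance of the density formulation has the following shape (a generic sketch in our notation, not the talk's specific problem), with material density $\rho$, PDE state $u$, objective $J$, PDE constraint $e(u,\rho)=0$ and volume fraction $\gamma$:

$$ \min_{\rho,\,u}\; J(u,\rho) \quad \text{s.t.} \quad e(u,\rho) = 0, \qquad \int_{\Omega} \rho \,\mathrm{d}x \le \gamma\,|\Omega|, \qquad 0 \le \rho \le 1 \ \text{a.e. in } \Omega. $$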

Tue, 05 Nov 2019
14:30
L5

Parameter Optimization in a Global Ocean Biogeochemical Model

Sophy Oliver
(Oxford)
Abstract

Ocean biogeochemical models used in climate change predictions are very computationally expensive and heavily parameterised. With derivatives too costly to compute, we optimise the parameters within one such model using derivative-free algorithms, aiming to find a good optimum in the fewest possible function evaluations. We compare the performance of the evolutionary algorithm CMA-ES, a stochastic global optimisation method requiring more function evaluations, with the Py-BOBYQA and DFO-LS algorithms, which are local derivative-free solvers requiring fewer evaluations. We also use initial Latin hypercube sampling to provide DFO-LS with a good starting point, in an attempt to find the global optimum with a local solver. This is joint work with Coralia Cartis and Samar Khatiwala.
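A minimal sketch of this sampling-then-local-solve strategy is below, with a toy misfit standing in for the expensive ocean model; the residual function, parameter bounds, and sample count are placeholders, not the talk's setup.

```python
# Latin hypercube sampling to seed DFO-LS with a good starting point. The
# residual function is a hypothetical stand-in for the model-vs-observations
# misfit; the real biogeochemical model is far too expensive to show here.
import numpy as np
from scipy.stats import qmc
import dfols  # the DFO-LS package (pip install DFO-LS)

def residuals(p):
    # Toy least-squares misfit (Rosenbrock-style), purely illustrative.
    return np.array([p[0] - 1.0, 10.0 * (p[1] - p[0] ** 2)])

lower, upper = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
starts = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(n=10), lower, upper)

best = min(starts, key=lambda p: np.sum(residuals(p) ** 2))  # cheapest screen
soln = dfols.solve(residuals, best, bounds=(lower, upper), maxfun=100)
print(soln.x, soln.f)   # local solver started from the best LH sample
```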
