Cross-diffusion systems for image enhancement and denoising
Abstract
Diffusion processes are commonly used in image processing. In particular, complex diffusion models have been successfully applied to medical image denoising. The interpretation of a complex diffusion equation as a cross-diffusion system motivates the introduction of more general models of this type and their study in the context of image processing. In this talk we will discuss the use of nonlinear cross-diffusion systems to perform image restoration. We will analyse the well-posedness, scale-space properties and long-time behaviour of the models, along with their performance on image filtering problems. Examples of application will be highlighted.
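As a rough illustration of the diffusion-based filtering the talk builds on, the sketch below runs a simple scalar nonlinear (Perona-Malik-type) diffusion on a noisy test image in Python. The cross-diffusion systems of the talk couple several such equations; the image, parameters and boundary treatment here are arbitrary choices for the sketch.

```python
import numpy as np

def nonlinear_diffusion(u, niter=50, dt=0.1, kappa=0.4):
    """Explicit Perona-Malik-type nonlinear diffusion with periodic boundaries."""
    for _ in range(niter):
        ux = np.roll(u, -1, axis=1) - u                  # forward differences
        uy = np.roll(u, -1, axis=0) - u
        g = 1.0 / (1.0 + (ux**2 + uy**2) / kappa**2)     # edge-stopping diffusivity
        fx, fy = g * ux, g * uy
        div = (fx - np.roll(fx, 1, axis=1)) + (fy - np.roll(fy, 1, axis=0))
        u = u + dt * div                                 # explicit step (dt <= 0.25 for stability)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0    # piecewise-constant test image
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = nonlinear_diffusion(noisy)
print("error std before/after:", np.std(noisy - clean), np.std(denoised - clean))
```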
Low-rank compression of functions in 2D and 3D
Abstract
Low-rank compression of matrices and tensors is a huge and growing business. Closely related is low-rank compression of multivariate functions, a technique used in Chebfun2 and Chebfun3. Not all functions can be compressed, so the question becomes, which ones? Here we focus on two kinds of functions for which compression is effective: those with some alignment with the coordinate axes, and those dominated by small regions of localized complexity.
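A quick way to see the phenomenon numerically (not Chebfun2 itself, just a plain sample-and-SVD experiment in Python) is to compare the numerical rank of an axis-aligned function with that of a generic one; the two test functions below are arbitrary examples.

```python
import numpy as np

# Sample two bivariate functions on a tensor grid and compare their numerical ranks.
nx = ny = 200
x = np.linspace(-1, 1, nx)
y = np.linspace(-1, 1, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

f_aligned = np.exp(-X**2) * np.cos(4 * Y)        # a product f(x)g(y): exactly rank 1
f_generic = np.cos(10 * (X + Y) + X * Y)         # no particular alignment with the axes

for name, F in [("axis-aligned", f_aligned), ("generic", f_generic)]:
    s = np.linalg.svd(F, compute_uv=False)
    rank = int(np.sum(s > 1e-10 * s[0]))         # numerical rank at relative tolerance 1e-10
    print(f"{name:12s}: numerical rank {rank} out of {nx}")
```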
Linear convergence rate bounds for operator splitting methods
Abstract
We establish necessary and sufficient conditions for linear convergence of operator splitting methods for a general class of convex optimization problems where the associated fixed-point operator is averaged. We also provide a tight bound on the achievable convergence rate. Most existing results establishing linear convergence in such methods require restrictive assumptions regarding strong convexity and smoothness of the constituent functions in the optimization problem. However, there are several examples in the literature showing that linear convergence is possible even when these properties do not hold. We provide a unifying analysis method for establishing linear convergence based on linear regularity and show that many existing results are special cases of our approach.
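One concrete instance where linear convergence holds without strong convexity is alternating projections onto two subspaces: the composition of the two projections is an averaged operator, and the linear rate is governed by the Friedrichs angle, a linear-regularity-type quantity. The sketch below is a generic illustration of that phenomenon in Python, not the specific methods analysed in the talk.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
n = 50
U = np.linalg.qr(rng.standard_normal((n, 3)))[0]   # orthonormal basis of subspace A
V = np.linalg.qr(rng.standard_normal((n, 3)))[0]   # orthonormal basis of subspace B
PA, PB = U @ U.T, V @ V.T

theta = subspace_angles(U, V).min()                # Friedrichs angle (the intersection is trivial here)
predicted = np.cos(theta)**2

x = rng.standard_normal(n)
errs = []
for _ in range(60):
    x = PA @ (PB @ x)                              # one sweep of alternating projections
    errs.append(np.linalg.norm(x))                 # fixed point is 0 since A and B only meet at 0
observed = np.mean([errs[k + 1] / errs[k] for k in range(40, 59)])
print(f"observed rate {observed:.4f}  vs  cos^2(Friedrichs angle) = {predicted:.4f}")
```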
Optimal preconditioners for systems defined by functions of Toeplitz matrices
Abstract
We propose several optimal preconditioners for systems defined by functions $g$ of Toeplitz matrices $T_n$. We are interested in solving $g(T_n)x=b$ by the preconditioned conjugate gradient method or the preconditioned minimal residual method, in the cases where $g(T_n)$ is one of the analytic functions $e^{T_n}$, $\sin{T_n}$ and $\cos{T_n}$. Numerical results are given to show the effectiveness of the proposed preconditioners.
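As a rough sketch of the setting, and not necessarily the preconditioners proposed in the talk, the Python snippet below solves $e^{T_n}x=b$ by conjugate gradients with a preconditioner built from a generic Strang-type circulant approximation of $T_n$, applied cheaply via FFTs; the Toeplitz symbol is an arbitrary example.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import cg, LinearOperator

n = 256
t = np.zeros(n); t[0], t[1], t[2] = 2.0, -1.0, 0.3    # first column of a banded symmetric Toeplitz T_n
T = np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])
A = expm(T)                                           # g(T_n) = exp(T_n), SPD since T_n is symmetric
b = np.ones(n)

c = t.copy(); c[-1], c[-2] = t[1], t[2]               # Strang-type circulant approximation of T_n
lam = np.fft.fft(c).real                              # its (real) eigenvalues

def apply_Pinv(x):
    # apply exp(c(T_n))^{-1} via FFTs: circulants are diagonalised by the DFT
    return np.real(np.fft.ifft(np.exp(-lam) * np.fft.fft(x)))

M = LinearOperator((n, n), matvec=apply_Pinv, dtype=float)
its = []
x, info = cg(A, b, M=M, callback=lambda xk: its.append(1))
print("CG iterations with circulant-based preconditioner:", len(its), " info:", info)
```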
Modelling weakly coupled nonlinear oscillators: volcanism and glacial cycles
Abstract
This talk will be a geophysicist's view on the emerging properties of a numerical model representing the Earth's climate and volcanic activity over the past million years.
The model contains a 2D ice sheet (Glen's Law solved with a semi-implicit scheme), an energy balance for the atmosphere and planet surface (explicit), and an ODE for the time evolution of CO2 (explicit).
The dependencies between these models generate behaviour similar to that of weakly coupled nonlinear oscillators.
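For readers unfamiliar with the phrase, the sketch below integrates two weakly coupled van der Pol oscillators in Python; it is only a generic analogue of the mode-locking behaviour described above, not the climate/volcanism model itself, and all parameters are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu1, mu2, eps = 1.0, 1.2, 0.05        # two different oscillators, weak coupling strength eps

def rhs(t, y):
    x1, v1, x2, v2 = y
    return [v1, mu1 * (1 - x1**2) * v1 - x1 + eps * (x2 - x1),
            v2, mu2 * (1 - x2**2) * v2 - 1.1 * x2 + eps * (x1 - x2)]

sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 0.0, 0.5, 0.0], max_step=0.05)
x1, x2 = sol.y[0], sol.y[2]
print("correlation between the two oscillators:", np.corrcoef(x1, x2)[0, 1])
```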
Applications of minimum rank of matrices described by a graph or sign pattern
Abstract
Best L1 polynomial approximation
Abstract
An important observation in compressed sensing is the exact recovery of an l0 minimiser of an underdetermined linear system via the l1 minimiser, given the knowledge that a sparse solution vector exists. Here, we develop a continuous analogue of this observation and show that the best L1 and L0 polynomial approximants of a corrupted function (the continuous analogue of a sparse vector) are equivalent. We use this to construct best L1 polynomial approximants of corrupted functions via linear programming. We also present a numerical algorithm for computing best L1 polynomial approximants to general continuous functions, and observe that, compared with best L-infinity and L2 polynomial approximants, the best L1 approximants tend to have error functions that are more localized.
Joint work with Alex Townsend (MIT).
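A minimal sketch of the linear-programming construction, on a plain grid discretisation of the problem (not the algorithm of the paper); the test function and its localized "corruption" are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

# Discretised best-L1 fit: minimise sum_i |p(x_i) - f(x_i)| over polynomials p of degree d.
f = lambda x: np.abs(x) + 0.5 * (np.abs(x - 0.4) < 0.02)   # a function with a narrow "corruption"
d, m = 10, 500
x = np.linspace(-1, 1, m)
V = np.polynomial.chebyshev.chebvander(x, d)               # m x (d+1) Chebyshev Vandermonde matrix
fx = f(x)

# variables: [c (d+1 coefficients, free sign), t (m slacks)]; minimise sum(t)
# subject to  V c - f <= t  and  -(V c - f) <= t
nc = d + 1
cost = np.concatenate([np.zeros(nc), np.ones(m)])
A_ub = np.block([[ V, -np.eye(m)],
                 [-V, -np.eye(m)]])
b_ub = np.concatenate([fx, -fx])
bounds = [(None, None)] * nc + [(0, None)] * m
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
coeffs = res.x[:nc]
print("mean absolute error of the best L1 fit:", np.mean(np.abs(V @ coeffs - fx)))
```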
Homogenized boundary conditions and resonance effects in Faraday cages
Abstract
The Faraday cage effect is the phenomenon whereby electrostatic and electromagnetic fields are shielded by a wire mesh "cage". Nick Trefethen, Jon Chapman and I recently carried out a mathematical analysis of the two-dimensional electrostatic problem with thin circular wires, demonstrating that the shielding effect is not as strong as one might infer from the physics literature. In this talk I will present new results generalising the previous analysis to the electromagnetic case, and to wires of arbitrary shape. The main analytical tool is the asymptotic method of multiple scales, which is used to derive continuum models for the shielding, involving homogenized boundary conditions on an effective cage boundary. In the electromagnetic case one observes interesting resonance effects, whereby at frequencies close to the natural frequencies of the equivalent solid shell, the presence of the cage actually amplifies the incident field, rather than shielding it. We discuss applications to radiation containment in microwave ovens and acoustic scattering by perforated shells. This is joint work with Ian Hewitt.
Kerdock matrices and the efficient quantization of subsampled measurements
Abstract
Kerdock matrices are an attractive choice as deterministic measurement matrices for compressive sensing. I'll explain how Kerdock matrices are constructed, and then show how they can be adapted to one particular strategy for quantizing measurements, in which measurements exceeding the desired dynamic range are rejected.
How accurate must solves be in interior point methods?
Abstract
At the heart of the interior point method in optimization is a linear system solve, but how accurate must this solve be? The behaviour of such methods is well-understood when a direct solver is used, but the scale of problems being tackled today means that users increasingly turn to iterative methods to approximate its solution. Current suggestions of the accuracy required can be seen to be too stringent, leading to inefficiency.
In this talk I will give conditions on the accuracy of the solution that guarantee the inexact interior point method converges at the same rate as if the solves were exact. These conditions can be shown numerically to be tight, in that performance degrades rapidly if a weaker condition is used. Finally, I will describe how the norms that appear in these conditions are related to the natural norms that are minimized in several popular Krylov subspace methods. This, in turn, could help in the development of new preconditioners in this important field.
Block operators and spectral discretizations
Abstract
Operators, functions, and functionals are combined in many problems of computational science in a fashion that has the same logical structure as is familiar for block matrices and vectors. It is proposed that the explicit consideration of such block structures at the continuous as opposed to discrete level can be a useful tool. In particular, block operator diagrams provide templates for spectral discretization by the rectangular differentiation, integration, and identity matrices introduced by Driscoll and Hale. The notion of the rectangular shape of a linear operator can be made rigorous by the theory of Fredholm operators and their indices, and the block operator formulations apply to nonlinear problems too, where the convention is proposed of representing nonlinear blocks as shaded. At each step of a Newton iteration, the structure is linearized and the blocks become unshaded, representing Fréchet derivative operators, square or rectangular. The use of block operator diagrams makes it possible to precisely specify discretizations of even rather complicated problems with just a few lines of pseudocode.
[Joint work with Nick Trefethen]
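The sketch below illustrates the flavour of such block discretizations in Python: the second-derivative rows at interior Chebyshev points are stacked with two boundary-functional rows to form a square system. This is a simplified boundary-bordering variant, not the resampled rectangular matrices of Driscoll and Hale, and the test BVP is an arbitrary example.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points (after Trefethen, Spectral Methods in MATLAB)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

# Solve u'' = exp(x), u(-1) = u(1) = 0 as a block system:
#   [ rows of D^2 at the interior nodes ]   (N-1) x (N+1) "operator" block
#   [ boundary functionals u(1), u(-1)  ]        2 x (N+1) appended as extra rows
N = 30
D, x = cheb(N)
D2 = D @ D
A = np.vstack([D2[1:-1, :], np.eye(N + 1)[[0, -1], :]])
rhs = np.concatenate([np.exp(x[1:-1]), [0.0, 0.0]])
u = np.linalg.solve(A, rhs)

u_exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)    # exact solution of the BVP
print("max error:", np.max(np.abs(u - u_exact)))
```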
An ultraspherical spectral method for fractional differential equations of half-integer order
Regularization methods - varying the power, the smoothness and the accuracy
Abstract
Adaptive cubic regularization methods have recently emerged as a credible alternative to line search and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general class of adaptive regularization methods, which use first- or higher-order local Taylor models of the objective regularized by a(ny) power of the step size. We investigate the worst-case complexity/global rate of convergence of these algorithms in the presence of varying (unknown) smoothness of the objective. We find that some methods automatically adapt their complexity to the degree of smoothness of the objective, while others take advantage of the power of the regularization step to satisfy increasingly better bounds with the order of the models. This work is joint with Nick Gould (RAL) and Philippe Toint (Namur).
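A schematic adaptive regularization loop with a cubic power of the step norm, run on the Rosenbrock function, is sketched below; it is only a toy instance of the framework (fixed power p = 3, crude subproblem solves, arbitrary acceptance constants), not the trained or complexity-optimal variants discussed in the talk.

```python
import numpy as np
from scipy.optimize import minimize

def fgh(x):
    """Rosenbrock function with gradient and Hessian."""
    f = 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
    g = np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]), 200 * (x[1] - x[0]**2)])
    H = np.array([[1200 * x[0]**2 - 400 * x[1] + 2, -400 * x[0]], [-400 * x[0], 200.0]])
    return f, g, H

x, sigma = np.array([-1.2, 1.0]), 1.0
for k in range(500):
    f, g, H = fgh(x)
    if np.linalg.norm(g) < 1e-8:
        break
    model = lambda s: g @ s + 0.5 * s @ H @ s + (sigma / 3.0) * np.linalg.norm(s)**3
    s = minimize(model, np.zeros(2)).x                   # (approximately) minimise the regularised model
    rho = (f - fgh(x + s)[0]) / max(-model(s), 1e-16)    # actual vs predicted decrease
    if rho > 0.1:
        x, sigma = x + s, (max(0.5 * sigma, 1e-3) if rho > 0.9 else sigma)   # accept, maybe relax
    else:
        sigma *= 2.0                                     # reject the step, regularise more heavily
print("iterations:", k, " solution:", x)
```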
Sparse information representation through feature selection
Abstract
A three-field formulation and mixed FEM for poroelasticity
Geometric integrators in optimal control theory
Abstract
Numerical calculation of permanents
Abstract
A GPU Implementation of the Filtered Lanczos Procedure
Abstract
This talk describes a graphics processing unit (GPU) implementation of the Filtered Lanczos Procedure for the solution of large, sparse, symmetric eigenvalue problems. The Filtered Lanczos Procedure uses a carefully chosen polynomial spectral transformation to accelerate the convergence of the Lanczos method when computing eigenvalues within a desired interval. This method has proven particularly effective when matrix-vector products can be performed efficiently in parallel. We illustrate, via example, that the Filtered Lanczos Procedure implemented on a GPU can greatly accelerate eigenvalue computations for certain classes of symmetric matrices common in electronic structure calculations and graph theory. Comparisons against previously published CPU results suggest a typical speedup of at least a factor of $10$.
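To illustrate the underlying idea of a polynomial spectral transformation without reproducing the actual filter construction, the sketch below uses a much cruder degree-two "spectrum folding" polynomial, -(t - c)^2, so that eigenvalues of A nearest an arbitrary target c become the largest eigenvalues of the transformed operator; it runs on the CPU with SciPy, not on a GPU.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigsh

m = 40                                           # 2D Laplacian on an m x m grid: a standard test matrix
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
A = (sp.kron(sp.eye(m), T) + sp.kron(T, sp.eye(m))).tocsr()
n = A.shape[0]

c = 3.0                                          # target point inside the spectrum
# B = -(A - cI)^2: eigenvalues of A closest to c become the LARGEST eigenvalues of B
B = LinearOperator((n, n), matvec=lambda v: -(A @ (A @ v) - 2 * c * (A @ v) + c * c * v), dtype=float)

w, V = eigsh(B, k=6, which="LA")
ritz = np.sort([v @ (A @ v) for v in V.T])       # recover eigenvalues of A via Rayleigh quotients
print("eigenvalues of A nearest", c, ":", np.round(ritz, 6))
```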
A fast hierarchical direct solver for singular integral equations defined on disjoint boundaries and application to fractal screens
Abstract
Our starting point for specialized linear algebra is an alternative algorithm based on a recursive block LU factorization recently developed by Aminfar, Ambikasaran, and Darve. This algorithm specifically exploits the hierarchically off-diagonal low-rank structure arising from coercive singular integral operators of elliptic partial differential operators. The hierarchical solver involves a pre-computation phase independent of the right-hand side. Once this pre-computation factorizes the operator, the solution of many right-hand sides takes a fraction of the original time. Our fast direct solver allows for the exploration of reduced-basis problems, where the boundary density for any incident plane wave can be represented by a periodic Fourier series whose coefficients are in turn expanded in weighted Chebyshev or ultraspherical bases.
BFO: a Brute Force trainable algorithm for mixed-integer and multilevel derivative-free optimization
Abstract
The talk will describe a new "Brute Force Optimizer" whose objective is to provide a very versatile derivative-free Matlab package for bound-constrained optimization, with the distinctive feature that it can be trained to improve its own performance on classes of problems specified by the user (rather than on a single-but-wide problem class chosen by the algorithm developer). In addition, BFO can be used to optimize the performance of other algorithms and provides facilities for mixed-integer and multilevel problems, including constrained equilibrium calculations.
Block Preconditioning for Incompressible Two-Phase Flow
Abstract
Modelling two-phase, incompressible flow with level set or volume-of-fluid formulations results in a variable coefficient Navier-Stokes system that is challenging to solve computationally. In this talk I will present work from a recent InFoMM CDT mini-project which looked to adapt current preconditioners for one-phase Navier-Stokes flows. In particular we consider systems arising from the application of finite element methodology and preconditioners which are based on approximate block factorisations. A crucial ingredient is a good approximation of the Schur complement arising in the factorisation which can be computed efficiently.
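The sketch below applies the same idea to a generic sparse saddle-point system (not a two-phase Navier-Stokes discretisation): a block-diagonal preconditioner whose Schur-complement block is approximated using the diagonal of A, applied inside MINRES.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, minres, splu

m_u, m_p = 400, 100
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(m_u, m_u))      # SPD "velocity" block
B = sp.random(m_p, m_u, density=0.05) + sp.eye(m_p, m_u)           # full-rank constraint block
K = sp.bmat([[A, B.T], [B, None]]).tocsc()                         # saddle-point matrix
rhs = np.ones(m_u + m_p)

S_hat = (B @ sp.diags(1.0 / A.diagonal()) @ B.T).tocsc()           # Schur complement approximation
A_lu, S_lu = splu(A.tocsc()), splu(S_hat)

def apply_prec(v):                                                 # block-diagonal preconditioner
    return np.concatenate([A_lu.solve(v[:m_u]), S_lu.solve(v[m_u:])])

M = LinearOperator(K.shape, matvec=apply_prec, dtype=float)
its = []
x, info = minres(K, rhs, M=M, callback=lambda xk: its.append(1))
print("MINRES iterations with block-diagonal preconditioner:", len(its), " info:", info)
```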
Collocation-based hybrid numerical-asymptotic methods for high frequency wave scattering
Abstract
Wave scattering problems arise in numerous applications in acoustics, electromagnetics and linear elasticity. In the boundary element method (BEM) one reformulates the scattering problem as an integral equation on the scatterer boundary, e.g. using Green’s identities, and then seeks an approximate solution of the boundary integral equation (BIE) from some finite-dimensional approximation space. The conventional choice is a space of piecewise polynomials; however, in the “high frequency” regime when the wavelength is small compared to the size of the scatterer, it is computationally expensive to resolve the highly oscillatory wave solution in this way. The hybrid numerical-asymptotic (HNA) approach aims to reduce the computational cost by enriching the BEM approximation space with oscillatory functions, carefully chosen to capture the high frequency asymptotic solution behaviour. To date, the HNA methodology has been implemented almost exclusively in a Galerkin variational framework. This has many attractive features, not least the possibility of proving rigorous convergence results, but has the disadvantage of requiring numerical evaluation of high dimensional oscillatory integrals. In this talk I will present the results of some investigations carried out with my MSc student Emile Parolin into collocation-based implementations, which involve lower-dimensional integrals, but appear harder to analyse in terms of convergence and stability.
"Resonance" from the textbook in preparation, "Exploring ODEs"
Simple unified convergence proofs for Trust Region and a new ARC variant, and implementation issues
Abstract
We provide a simple convergence analysis unified for TR and a new ARC algorithm, which we name ARCq and which is very close in spirit to trust region methods, closer than the original ARC is. We prove global convergence to second-order points. We also obtain as a corollary the convergence of the original ARC method. Since one of our aims is to achieve a simple presentation, we sacrifice some generality, which we discuss at the end of our developments. In this simplified setting, we prove the optimal complexity property for ARCq and identify the key elements which allow it. We then propose efficient implementations using a Cholesky-like factorization as well as a scalable version based on conjugate gradients.
The inverse eigenvector problem for real tridiagonal matrices
Abstract
TBA
Are resultant methods numerically unstable for multidimensional rootfinding?
Abstract
Hidden-variable resultant methods are a class of algorithms for multidimensional rootfinding; in one dimension, they are competitive practical rootfinders. However, in higher dimensions they are known to be notoriously difficult, if not impossible, to make numerically robust. We will show that the most popular variant based on the Cayley resultant is inherently and spectacularly numerically unstable by a factor that grows exponentially with the dimension. Disastrous. Yet, perhaps, it can be circumnavigated.
Best approximations in Chebfun and applications to digital filters
Abstract
In this talk I will give an overview of the algorithms used by Chebfun to numerically compute polynomial and trigonometric minimax approximations of continuous functions. I'll also present Chebfun's capabilities to compute best approximations on compact subsets of an interval and how these methods can be used to design digital filters.
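As a rough analogue of the filter-design application, using SciPy's Parks-McClellan (Remez) routine rather than Chebfun's algorithms, one can design an equiripple FIR lowpass filter and inspect its minimax ripple; the band edges and filter length below are arbitrary.

```python
import numpy as np
from scipy import signal

# Equiripple (minimax) FIR lowpass filter via the Parks-McClellan / Remez exchange algorithm.
fs = 1000.0
taps = signal.remez(numtaps=101, bands=[0, 100, 150, 500], desired=[1, 0], fs=fs)
w, h = signal.freqz(taps, worN=4096, fs=fs)
passband, stopband = np.abs(h)[w <= 100], np.abs(h)[w >= 150]
print("passband ripple :", passband.max() - passband.min())
print("stopband peak   :", stopband.max())
```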
Krylov methods for operators
Abstract
In this talk we will explore the convergence of Krylov methods when used to solve $Lu = f$ where $L$ is an unbounded linear operator. We will show that for certain problems, methods like Conjugate Gradients and GMRES still converge even though the spectrum of $L$ is unbounded. A theoretical justification for this behavior is given in terms of polynomial approximation on unbounded domains.
Sparse matrix orderings: it's child's play! Or is it?
Abstract
Sparse matrices occur in numerical simulations throughout science and engineering. In particular, it is often desirable to solve systems of the form Ax=b, where A is a sparse matrix with 100,000+ rows and columns. The order in which the rows and columns occur can have a dramatic effect on the viability of a direct solver, e.g., the time taken to find x, the amount of memory needed, the quality of x, ... We shall consider symmetric matrices and, with the help of playdough, explore how best to order the rows/columns using a nested dissection strategy. Starting with a straightforward strategy, we will discover the pitfalls and develop an adaptive strategy with the aim of coping with a large variety of sparse matrix structures.
Some of the talk will involve the audience playing with playdough, so bring your inner child along with you!
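For readers who prefer code to playdough, the sketch below orders a model grid Laplacian by a simple recursive nested dissection and compares the fill-in of a sparse LU factorization against the natural ordering; it uses grid-based separators only, whereas a real implementation would use graph partitioning.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def grid_laplacian(k):
    """Standard 5-point Laplacian on a k x k grid."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(k, k))
    return (sp.kron(sp.eye(k), T) + sp.kron(T, sp.eye(k))).tocsc()

def nested_dissection(rows, cols, k):
    """Recursive nested dissection ordering of a k x k grid (separator nodes ordered last)."""
    if len(rows) <= 2 or len(cols) <= 2:
        return [i * k + j for i in rows for j in cols]
    if len(rows) >= len(cols):                       # separate with the middle row
        h = len(rows) // 2
        left = nested_dissection(rows[:h], cols, k)
        right = nested_dissection(rows[h + 1:], cols, k)
        sep = [rows[h] * k + j for j in cols]
    else:                                            # separate with the middle column
        h = len(cols) // 2
        left = nested_dissection(rows, cols[:h], k)
        right = nested_dissection(rows, cols[h + 1:], k)
        sep = [i * k + cols[h] for i in rows]
    return left + right + sep

k = 63
A = grid_laplacian(k)
p = np.array(nested_dissection(range(k), range(k), k))
for name, M in [("natural order", A), ("nested dissection", A[p, :][:, p].tocsc())]:
    lu = splu(M, permc_spec="NATURAL")               # disable SuperLU's own column ordering
    print(f"{name:18s}: fill, nnz(L) + nnz(U) = {lu.L.nnz + lu.U.nnz}")
```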
Continuum Modelling and Numerical Approaches for Diblock Copolymers
Abstract
We review a class of systems of non-linear PDEs, derived from the Cahn--Hilliard and Ohta--Kawasaki functionals, that describe the energy evolution of diblock copolymers. These are long chain molecules that can self-assemble into repeating patterns as they cool. We are particularly interested in finite element numerical methods that approximate these PDEs in the two-phase case (in which we model the polymer only) and the three-phase case (in which we imagine the polymer surrounded by, and interacting with, a void).
We present a brief derivation of the underlying models, review a class of numerical methods to approximate them, and showcase some early results from our codes.
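A minimal numerical sketch of the two-phase setting: plain periodic Cahn-Hilliard with a Fourier spectral IMEX scheme, rather than the finite element methods of the talk, and with arbitrary parameters; the Ohta-Kawasaki model would add a nonlocal term.

```python
import numpy as np

# Periodic Cahn-Hilliard equation  u_t = Lap(u^3 - u) - eps^2 Lap^2 u  on [0, 2*pi)^2,
# Fourier spectral in space, IMEX in time (stiff biharmonic term implicit, nonlinearity explicit).
n, eps, dt, nsteps = 128, 0.1, 1e-4, 2000
k = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

rng = np.random.default_rng(0)
u = 0.05 * rng.standard_normal((n, n))           # small perturbation of the mixed state
mass0 = u.mean()

u_hat = np.fft.fft2(u)
for _ in range(nsteps):
    nonlin_hat = np.fft.fft2(u**3 - u)
    u_hat = (u_hat - dt * K2 * nonlin_hat) / (1.0 + dt * eps**2 * K2**2)
    u = np.real(np.fft.ifft2(u_hat))

print("mass before/after (conserved):", mass0, u.mean())
print("u range after phase separation:", u.min(), u.max())
```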
Image Reconstruction from X-Ray Scanning
Abstract
The talk will present ongoing work on medical image reconstruction from x-ray scanners. A suitable method for reconstructing these undersampled systems is compressed sensing. The presentation will show the corresponding reconstruction methods and their analysis. Furthermore, work in progress on extensions of the standard approach will be shown.
Early volumes of MC, SIREV, NM, BIT, SINUM, IMANA
Abstract
When the Computing Laboratory discarded its hardcopy journals around 2008, I kept the first ten years or so of each of six classic numerical analysis journals, starting from volume 1, number 1. This will not be a seminar in the usual sense but a mutual exploration. Come prepared to look through a few of these old volumes yourself and perhaps to tell the group of something interesting you find. Bring a pen and paper. All are welcome.
Mathematics of Computation, from 1943
SIAM Journal, from 1953
Numerische Mathematik, from 1959
BIT, from 1961
SIAM Journal on Numerical Analysis, from 1964
Journal of the IMA, from 1965
Preconditioning for boundary control problems in fluid dynamics
Abstract
In recent years, several preconditioning strategies have been proposed for control problems in fluid dynamics. These are a special case of the general saddle point problem in optimisation. Here, we will extend a preconditioner for distributed Stokes control problems, developed by Rees and Wathen, to the case of boundary control. We will show the usefulness of low-rank structures in constructing a good approximation of the Schur complement of the saddle point matrix. Finally, we will discuss the applicability of this strategy to Navier-Stokes control.
A fast and almost-banded spectral method for solving singular integral equations
Abstract
We develop a spectral method for solving univariate singular integral equations over unions of intervals and circles, by utilizing Chebyshev, ultraspherical and Laurent polynomials to reformulate the equations as banded infinite-dimensional systems. Low rank approximations are used to obtain compressed representations of the bivariate kernels. The resulting system can be solved in linear time using an adaptive QR factorization, determining an optimal number of unknowns needed to resolve the solution to any pre-determined accuracy. Applications considered include fracture mechanics, the Faraday cage, and acoustic scattering. The Julia software package https://github.com/ApproxFun/SIE.jl implements our method with a convenient, user-friendly interface.
An algorithm for optimizing nonconvex quadratic functions subject to simple bound constraints
Abstract
I present a new method for optimizing quadratic functions subject to simple bound constraints. If the problem happens to be strictly convex, the algorithm reduces to the highly efficient method of Dostal and Schoberl. Our algorithm, however, is also able to efficiently solve nonconvex problems. During this talk I will present the algorithm, a sketch of the convergence theory, and numerical results for convex and nonconvex problems.
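For orientation, the sketch below runs a plain projected-gradient baseline on a bound-constrained convex quadratic in Python; it is not the Dostal-Schoberl method nor the new algorithm of the talk, just the simplest method one would compare against, on a synthetic problem.

```python
import numpy as np

# Projected gradient for  min 0.5 x'Qx - b'x  subject to  l <= x <= u  (a plain baseline).
rng = np.random.default_rng(0)
n = 200
G = rng.standard_normal((n, n))
Q = G.T @ G / n + 0.1 * np.eye(n)                # strictly convex here; the talk also covers nonconvex Q
b = rng.standard_normal(n)
l, u = -np.ones(n), np.ones(n)

L = np.linalg.eigvalsh(Q).max()                  # Lipschitz constant of the gradient
x = np.zeros(n)
for k in range(5000):
    x_new = np.clip(x - (Q @ x - b) / L, l, u)   # gradient step, then projection onto the box
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
print("iterations:", k, " objective:", 0.5 * x @ Q @ x - b @ x)
```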
Alternating direction methods for structured nonconvex optimization with applications in risk parity portfolio selection
Abstract
We will begin by discussing the risk parity portfolio selection problem, which aims to find portfolios for which the contributions of risk from all assets are equally weighted. The risk parity condition may be imposed over either individual assets or groups of assets. We show how convex optimization techniques can find a risk parity solution in the nonnegative orthant; however, in the general case the number of such solutions can be anywhere from zero to exponential in the dimension. We then propose a nonconvex least-squares formulation which allows us to consider, and possibly solve, the general case.
Motivated by this problem, we present several alternating direction schemes for specially structured nonlinear nonconvex problems. The problem structure allows a convenient two-block variable splitting. Our methods rely on solving convex subproblems at each iteration and converge to a local stationary point. Specifically, we discuss the alternating direction method of multipliers approach and the alternating linearization method, and we provide convergence rate results for both classes of methods. Moreover, global optimization techniques from the polynomial optimization literature are applied to complement our local methods and to provide lower bounds.
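A sketch of one possible nonconvex least-squares formulation of risk parity: equalise the risk contributions w_i (Sigma w)_i with the budget constraint sum(w) = 1 imposed as a soft residual, solved here with a generic bound-constrained least-squares routine on synthetic data. This is not necessarily the exact formulation or the alternating direction schemes of the talk.

```python
import numpy as np
from scipy.optimize import least_squares

# Equalise the risk contributions w_i * (Sigma w)_i, with sum(w) = 1 as a soft constraint.
rng = np.random.default_rng(0)
n = 10
G = rng.standard_normal((n, n))
Sigma = G @ G.T / n + 0.05 * np.eye(n)           # a synthetic covariance matrix

def residuals(z):
    w, theta = z[:n], z[n]
    return np.concatenate([w * (Sigma @ w) - theta,    # risk-contribution mismatches
                           [w.sum() - 1.0]])           # budget constraint
z0 = np.concatenate([np.full(n, 1.0 / n), [1.0 / n**2]])
lb = np.concatenate([np.zeros(n), [-np.inf]])          # long-only: w >= 0
sol = least_squares(residuals, z0, bounds=(lb, np.full(n + 1, np.inf)))

w = sol.x[:n]
rc = w * (Sigma @ w)
print("normalised risk contributions (should be nearly equal):", np.round(rc / rc.sum(), 4))
```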