Forthcoming events in this series


Tue, 01 Feb 2022
14:00
L5

Numerical quadrature for singular integrals on fractals

Dave Hewett
(University College London)
Abstract

How can one integrate singular functions over fractals? And why would one want to do this? In this talk I will present a general approach to numerical quadrature on the compact attractor of an iterated function system of contracting similarities, where integration is with respect to the relevant Hausdorff measure. For certain singular integrands of logarithmic or algebraic type the self-similarity of the integration domain can be exploited to express the singular integral exactly in terms of regular integrals that can be approximated using standard techniques. As an application we show how this approach, combined with a singularity-subtraction technique, can be used to accurately evaluate the singular double integrals that arise in Hausdorff-measure Galerkin boundary element methods for acoustic wave scattering by fractal screens. This is joint work with Andrew Gibbs (UCL) and Andrea Moiola (Pavia).
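As a toy illustration of the self-similarity idea (not the authors' method, which treats general IFS attractors and singular integrands), consider the middle-third Cantor set: its invariant measure satisfies $\int f\,d\mu = \tfrac12\int f(S_0x)\,d\mu + \tfrac12\int f(S_1x)\,d\mu$ with $S_0(x)=x/3$, $S_1(x)=(x+2)/3$, so composing the maps to depth $m$ gives a $2^m$-point equal-weight rule:

```python
import itertools

import numpy as np

def cantor_quadrature(f, depth):
    """Equal-weight rule for int f d(mu) over the middle-third Cantor set,
    mu the natural self-similar (Hausdorff) measure.  Nodes are the images
    of an arbitrary point under all depth-fold compositions of
    S0(x) = x/3 and S1(x) = (x + 2)/3; all 2**depth weights are equal."""
    nodes = []
    for word in itertools.product([0.0, 2.0 / 3.0], repeat=depth):
        x = 0.5                      # arbitrary starting point in [0, 1]
        for shift in word:
            x = x / 3.0 + shift
        nodes.append(x)
    return float(np.mean(f(np.array(nodes))))

# int x d(mu) = 1/2 by the symmetry of the Cantor measure
approx = cantor_quadrature(lambda x: x, 10)
```

For regular integrands such a rule converges quickly; the point of the talk is precisely how to handle integrands that are singular on the attractor, where a plain rule of this kind breaks down.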

Tue, 18 Jan 2022
14:30
Virtual

Constrained optimization on Riemannian manifolds

Melanie Weber
(Mathematical Institute (University of Oxford))
Abstract

Many applications involve non-Euclidean data, where exploiting Riemannian geometry can deliver algorithms that are computationally superior to standard nonlinear programming approaches. This observation has resulted in an increasing interest in Riemannian methods in the optimization and machine learning community. In this talk, we consider the problem of optimizing a function on a Riemannian manifold subject to convex constraints. We introduce Riemannian Frank-Wolfe (RFW) methods, a class of projection-free algorithms for constrained geodesically convex optimization. To understand the algorithm’s efficiency, we discuss (1) its iteration complexity, (2) the complexity of computing the Riemannian gradient and (3) the complexity of the Riemannian “linear” oracle (RLO), a crucial subroutine at the heart of the algorithm. We complement our theoretical results with an empirical comparison of RFW against state-of-the-art Riemannian optimization methods. Joint work with Suvrit Sra (MIT).
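As a minimal Euclidean sketch of the algorithmic template (the talk's Riemannian setting replaces line segments and the linear oracle with geodesics and the RLO), here is Frank-Wolfe over the probability simplex, where the linear oracle has a closed-form vertex solution:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=2000):
    """Frank-Wolfe (conditional gradient) for min f(x) over the probability
    simplex.  The 'linear oracle' argmin_s <grad f(x), s> over the simplex
    is attained at a vertex: the coordinate with the smallest gradient
    entry.  (The RLO of the talk plays this role on a manifold.)"""
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # vertex returned by the oracle
        gamma = 2.0 / (k + 2.0)          # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# minimise f(x) = 0.5*||x - c||^2; since c lies on the simplex, x* = c
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: x - c, np.array([1.0, 0.0, 0.0]))
```

Note the method is projection-free: every iterate is a convex combination of vertices, so feasibility is maintained without any projection step.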

Tue, 18 Jan 2022
14:00
Virtual

Is everything a rational function?

Nick Trefethen
(Mathematical Institute (University of Oxford))
Abstract


There's an idea going back at least to Kirchberger in 1902 that since the only operations we can ultimately compute are +, -, *, and /, all of numerical computation must reduce to rational functions.  I've been looking into this idea and it has led in some interesting directions.

Tue, 23 Nov 2021
14:30
L3

A scalable and robust vertex-star relaxation for high-order FEM

Pablo Brubeck
(University of Oxford)
Abstract

The additive Schwarz method with vertex-centered patches and a low-order coarse space gives a p-robust solver for FEM discretizations of symmetric and coercive problems. However, for very high polynomial degree it is not feasible to assemble or factorize the matrices for each patch. In this work we introduce a direct solver for separable patch problems that scales to very high polynomial degree on tensor product cells. The solver constructs a tensor product basis that diagonalizes the blocks in the stiffness matrix for the internal degrees of freedom of each individual cell. As a result, the non-zero structure of the cell matrices is that of the graph connecting internal degrees of freedom to their projection onto the facets. In the new basis, the patch problem is as sparse as a low-order finite difference discretization, while having a sparser Cholesky factorization. We can thus afford to assemble and factorize the matrices for the vertex-patch problems, even for very high polynomial degree. In the non-separable case, the method can be applied as a preconditioner by approximating the problem with a separable surrogate. We apply this approach as a relaxation for the displacement block of mixed formulations of incompressible linear elasticity.
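A simplified sketch of the separable building block (the talk's solver works patch-wise with interior/facet splittings; the setup below is illustrative): for a 2D tensor-product operator $A\otimes M + M\otimes A$ built from a 1D stiffness matrix $A$ and mass matrix $M$, a generalised eigendecomposition of the 1D pair diagonalises the whole problem:

```python
import numpy as np
from scipy.linalg import eigh

def fast_diagonalisation_solve(A, M, B):
    """Solve (A kron M + M kron A) vec(U) = vec(B), i.e. A U M + M U A = B,
    via the generalised eigendecomposition A V = M V diag(lam) of the 1D
    operators, normalised so that V^T M V = I."""
    lam, V = eigh(A, M)
    Btil = V.T @ B @ V                            # rhs in the eigenbasis
    Util = Btil / (lam[:, None] + lam[None, :])   # diagonal solve
    return V @ Util @ V.T

n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)           # 1D stiffness
M = (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 6.0   # 1D mass
B = np.arange(n * n, dtype=float).reshape(n, n)
U = fast_diagonalisation_solve(A, M, B)
```

The cost is dominated by small 1D eigensolves plus dense matrix products, rather than factorising the full 2D operator; this is the classical "fast diagonalisation" idea the separable patch solver builds on.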

Tue, 23 Nov 2021
14:00
L3

Numerical approximation of viscous contact problems in glaciology

Gonzalo Gonzalez
(University of Oxford)
Abstract

Viscous contact problems describe the time evolution of fluid flows in contact with a surface from which they can detach. These types of problems arise in glaciology when, for example, modelling the evolution of the grounding line of a marine ice sheet or the formation of a subglacial cavity. Such problems are generally modelled as a time-dependent viscous Stokes flow with a free boundary and contact boundary conditions. Although these applications are of great importance in glaciology, a systematic study of the numerical approximation of viscous contact problems has not yet been carried out. In this talk, I will present some of the challenges that arise when approximating these problems and some of the ideas we have come up with for overcoming them.


Tue, 09 Nov 2021
14:30
L3

TBA

Fede Danieli
(University of Oxford)
Abstract

TBA

Tue, 09 Nov 2021
14:00
L3

TBA

Giuseppe Ughi
(University of Oxford)
Abstract

TBA

Tue, 26 Oct 2021
14:30 - 15:00
L3

Fast & Accurate Randomized Algorithms for Linear Systems and Eigenvalue Problems

Yuji Nakatsukasa
(University of Oxford)
Abstract

We develop a new class of algorithms for general linear systems and a wide range of eigenvalue problems. These algorithms apply fast randomized sketching to accelerate subspace projection methods.  This approach offers great flexibility in designing the basis for the approximation subspace, which can improve scalability in many computational environments. The resulting algorithms outperform the classic methods with minimal loss of accuracy. For model problems, numerical experiments show large advantages over MATLAB’s optimized routines, including a 100x speedup. 

Joint work with Joel Tropp (Caltech). 
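The talk's algorithms embed sketching inside subspace projection methods; as a stripped-down illustration of the underlying mechanism, here is sketch-and-solve for an overdetermined least-squares problem with a Gaussian sketch (structured sketches with fast matvecs are what make this pay off in practice; the sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 2000, 50, 400                  # tall LLS problem, sketch size s
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-3 * rng.standard_normal(m)

# sketch-and-solve: compress the m-dimensional residual to s dimensions,
# then solve the small s x n problem instead of the full m x n one
S = rng.standard_normal((s, m)) / np.sqrt(s)
x_sk = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]   # full solve, for comparison
```

Because S is (with high probability) a subspace embedding for the range of [A, b], the sketched solution's residual is within a modest factor of the optimal one.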

Tue, 26 Oct 2021
14:00 - 14:30
L3

Randomized algorithms for trace estimation

Alice Cortinovis
(EPFL)
Abstract

Hutchinson's trace estimator approximates the trace of a large-scale matrix A by averaging quadratic forms involving A and random vectors. Hutch++ is a more efficient trace estimation algorithm that combines this with the randomized singular value decomposition, which obtains a low-rank approximation of A by multiplying the matrix with random vectors. In this talk, we present an improved version of Hutch++ which aims to minimize the computational cost, that is, the number of matrix-vector multiplications with A, needed to achieve a trace estimate with a target accuracy. This is joint work with David Persson and Daniel Kressner.
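A minimal sketch of the two estimators being improved upon (plain Hutchinson and the basic Hutch++ of Meyer, Musco, Musco and Woodruff; the query splits below are illustrative):

```python
import numpy as np

def hutchinson(matvec, n, n_queries, rng):
    """tr(A) ~ average of z^T A z over random sign vectors z
    (unbiased, since E[z z^T] = I)."""
    total = 0.0
    for _ in range(n_queries):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)
    return total / n_queries

def hutch_pp(matvec, n, n_queries, rng):
    """Hutch++: spend a third of the matvecs sketching the range of A,
    take the trace of the dominant part Q^T A Q exactly, and run
    Hutchinson only on the deflated remainder (I - QQ^T) A (I - QQ^T)."""
    k = n_queries // 3
    Omega = rng.choice([-1.0, 1.0], size=(n, k))
    Y = np.column_stack([matvec(Omega[:, j]) for j in range(k)])
    Q, _ = np.linalg.qr(Y)
    t_exact = sum(Q[:, j] @ matvec(Q[:, j]) for j in range(Q.shape[1]))
    total = 0.0
    for _ in range(k):
        g = rng.choice([-1.0, 1.0], size=n)
        g = g - Q @ (Q.T @ g)        # deflate against the captured range
        total += g @ matvec(g)
    return t_exact + total / k
```

When A has rapidly decaying spectrum the deflation removes most of the variance, which is why Hutch++ needs far fewer matvecs for the same accuracy.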

Tue, 12 Oct 2021
14:30
L3

A proposal for the convergence analysis of parallel-in-time algorithms on nonlinear problems

Gian Antonucci
(University of Oxford)
Abstract

Over the last few decades, scientists have conducted extensive research on parallelisation in time, which appears to be a promising way to provide additional parallelism when parallelisation in space saturates before all parallel resources have been used. For the simulations of interest to the Culham Centre for Fusion Energy (CCFE), however, time parallelisation is highly non-trivial, because the exponential divergence of nearby trajectories makes it hard for time-parallel numerical integration to achieve convergence. In this talk we present our results for the convergence analysis of parallel-in-time algorithms on nonlinear problems, focussing on what is widely accepted to be the prototypical parallel-in-time method, the Parareal algorithm. Next, we introduce a new error function to measure convergence based on the maximal Lyapunov exponents, and show how it improves the overall parallel speedup when compared to the traditional check used in the literature. We conclude by mentioning how the above tools can help us design and analyse a novel algorithm for the long-time integration of chaotic systems that uses time-parallel algorithms as a sub-procedure.
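For reference, a sketch of the textbook Parareal iteration that the talk analyses, on a benign (non-chaotic) test problem; the propagator choices and parameters below are illustrative:

```python
import numpy as np

def parareal(f, y0, t0, t1, n_slices, n_iters, fine_steps=50):
    """Textbook Parareal for y' = f(y): coarse propagator G = one forward
    Euler step per slice, fine propagator F = fine_steps RK4 steps per
    slice.  Returns the iterates at the slice boundaries."""
    dt = (t1 - t0) / n_slices

    def G(y):                                  # cheap, applied serially
        return y + dt * f(y)

    def F(y):                                  # expensive; parallel in practice
        h = dt / fine_steps
        for _ in range(fine_steps):
            k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
            y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        return y

    U = [y0]
    for _ in range(n_slices):                  # iteration 0: coarse sweep
        U.append(G(U[-1]))
    for _ in range(n_iters):
        Fvals = [F(u) for u in U[:-1]]         # independent fine solves
        Unew = [y0]
        for j in range(n_slices):              # predictor-corrector update
            Unew.append(G(Unew[-1]) + Fvals[j] - G(U[j]))
        U = Unew
    return np.array(U)

U = parareal(lambda y: -y, np.array([1.0]), 0.0, 2.0, n_slices=10, n_iters=5)
```

On dissipative problems like this the iteration converges in a handful of sweeps; for chaotic systems the exponential divergence of trajectories is exactly what destroys this behaviour, motivating the Lyapunov-based error measure of the talk.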

Tue, 12 Oct 2021
14:00
L3

Preconditioning for normal equations and least squares

Andy Wathen
(University of Oxford)
Abstract

The solution of systems of linear(ized) equations lies at the heart of many problems in Scientific Computing. In particular for large systems, iterative methods are a primary approach. For many symmetric (or self-adjoint) systems, there are effective solution methods based on the Conjugate Gradient method (for definite problems) or minres (for indefinite problems) in combination with an appropriate preconditioner, which is required in almost all cases. For nonsymmetric systems there are two principal lines of attack: the use of a nonsymmetric iterative method such as gmres, or transformation into a symmetric problem via the normal equations. In either case, an appropriate preconditioner is generally required. We consider the possibilities here, particularly the idea of preconditioning the normal equations via approximations to the original nonsymmetric matrix. We highlight dangers that readily arise in this approach. Our comments also apply in the context of linear least squares problems as we will explain.
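One of the dangers alluded to can be seen in a few lines: forming the normal equations squares the condition number, $\kappa(A^TA)=\kappa(A)^2$, so an unpreconditioned iteration on them converges far more slowly (this toy construction is for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
# build a 200 x 50 matrix with prescribed singular values 1, ..., 1e-4
U, _ = np.linalg.qr(rng.standard_normal((200, 50)))
V, _ = np.linalg.qr(rng.standard_normal((50, 50)))
sigma = np.logspace(0, -4, 50)
A = (U * sigma) @ V.T

kappa_A = np.linalg.cond(A)        # ~1e4 by construction
kappa_N = np.linalg.cond(A.T @ A)  # ~1e8: squared by the normal equations
```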

Tue, 15 Jun 2021
14:30
Virtual

Numerical Relativity

Katy Clough
(Department of Physics)
Abstract

Numerical relativity allows us to simulate the behaviour of regions of space and time where gravity is strong and dynamical. For example, it allows us to calculate precisely the gravitational waveform that should be generated by the merger of two inspiralling black holes. Since the first detection of gravitational waves from such an event in 2015, banks of numerical relativity “templates” have been used to extract further information from noisy data streams. In this talk I will give an overview of the field - what are we simulating, why, and what are the main challenges, past and future.

-

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Tue, 01 Jun 2021
14:30
Virtual

Order-preserving mixed-precision Runge-Kutta methods

Matteo Croci
(Mathematical Institute (University of Oxford))
Abstract

Mixed-precision algorithms combine low- and high-precision computations in order to benefit from the performance gains of reduced-precision arithmetic while retaining good accuracy. In this talk we focus on explicit stabilised Runge-Kutta (ESRK) methods for parabolic PDEs, as they are especially amenable to a mixed-precision treatment. However, some of the concepts we present extend to Runge-Kutta (RK) methods more generally.

Consider the problem $y' = f(t,y)$ and let $u$ be the roundoff unit of the low precision used. Standard mixed-precision schemes perform all evaluations of $f$ in reduced precision to improve efficiency. We show that while this approach has many benefits, it harms the convergence order of the method, leading to a limiting accuracy of $O(u)$.
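A toy experiment (not the ESRK schemes of the talk) showing this limiting accuracy: Heun's method on $y'=-y$ with $f$ evaluated in half precision stalls at an error floor, while the same method evaluated in double precision retains its second-order accuracy:

```python
import numpy as np

def heun(f, y0, t_end, n_steps, eval_dtype=np.float64):
    """Heun's method (explicit RK2).  Evaluations of f are rounded to
    eval_dtype, mimicking a standard mixed-precision implementation in
    which only the accumulation is kept in high precision."""
    y = np.float64(y0)
    h = t_end / n_steps
    for _ in range(n_steps):
        k1 = np.float64(eval_dtype(f(y)))
        k2 = np.float64(eval_dtype(f(y + h * k1)))
        y = y + (h / 2.0) * (k1 + k2)
    return y

exact = np.exp(-1.0)                      # y' = -y, y(0) = 1, at t = 1
err_hi = abs(heun(lambda y: -y, 1.0, 1.0, 4000) - exact)
err_lo = abs(heun(lambda y: -y, 1.0, 1.0, 4000, eval_dtype=np.float16) - exact)
# err_hi shrinks like O(h^2); err_lo stalls near the fp16 roundoff level
```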

In this talk we present a more accurate alternative: a scheme, which we call $q$-order-preserving, that is unaffected by this limiting behaviour. The idea is simple: by using $q$ high-precision evaluations of $f$ we can hope to retain a limiting convergence order of $O(\Delta t^{q})$. However, the practical design of these order-preserving schemes is less straightforward.

We specifically focus on ESRK schemes as these are low-order schemes that employ a much larger number of stages than dictated by their convergence order so as to maximise stability. As such, these methods require most of the computational effort to be spent on stability rather than on accuracy. We present new $s$-stage order $1$ and $2$ RK-Chebyshev and RK-Legendre methods that are provably full-order preserving. These methods are essentially as cheap as their fully low-precision equivalent and they are as accurate and (almost) as stable as their high-precision counterpart.


Tue, 01 Jun 2021
14:00
Virtual

Why are numerical algorithms accurate at large scale and low precisions?

Theo Mary
(Sorbonne Université)
Abstract

Standard worst-case rounding error bounds of most numerical linear algebra algorithms grow linearly with the problem size and the machine precision. These bounds suggest that numerical algorithms could be inaccurate at large scale and/or at low precisions, but fortunately they are pessimistic. We will review recent advances in probabilistic rounding error analyses, which have attracted renewed interest due to the emergence of low precisions on modern hardware as well as the rise of stochastic rounding.
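The gap between worst-case and typical behaviour is easy to observe: for recursive summation of $n$ numbers the worst-case relative error bound is about $(n-1)u$, while probabilistic analyses predict $O(\sqrt{n}\,u)$. A single-precision experiment sits near the latter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x32 = rng.random(n).astype(np.float32)

s = np.float32(0.0)                 # recursive summation in binary32
for v in x32:
    s = s + v                       # float32 + float32 stays float32

exact = float(np.sum(x32, dtype=np.float64))   # reference sum in double
rel_err = abs(float(s) - exact) / exact
u = 2.0 ** -24                      # unit roundoff of binary32

# worst case allows rel_err up to about (n-1)*u ~ 6e-3; the observed
# error is on the probabilistic sqrt(n)*u ~ 2e-5 scale
```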


Tue, 18 May 2021
14:30
Virtual

Numerical analysis of a topology optimization problem for Stokes flow

John Papadopoulos
(Mathematical Institute)
Abstract

A topology optimization problem for Stokes flow finds the optimal material distribution of a Stokes fluid that minimizes the fluid’s power dissipation under a volume constraint. In 2003, T. Borrvall and J. Petersson [1] formulated a nonconvex optimization problem for this objective. They proved the existence of minimizers in the infinite-dimensional setting and showed that a suitably chosen finite element method will converge in a weak(-*) sense to an unspecified solution. In this talk, we will extend and refine their numerical analysis. We will show that there exist finite element functions, satisfying the necessary first-order conditions of optimality, that converge strongly to each isolated local minimizer of the problem.

[1] T. Borrvall, J. Petersson, Topology optimization of fluids in Stokes flow, International Journal for Numerical Methods in Fluids 41 (1) (2003) 77–107. doi:10.1002/fld.426.


Tue, 18 May 2021
14:00
Virtual

Hashing embeddings of optimal dimension, with applications to linear least squares

Zhen Shao
(Mathematical Institute (University of Oxford))
Abstract

We investigate theoretical and numerical properties of sparse sketching for both dense and sparse Linear Least Squares (LLS) problems. We show that sketching with hashing matrices --- with one nonzero entry per column and of size proportional to the rank of the data matrix --- generates a subspace embedding with high probability, provided the given data matrix has low coherence; thus optimal residual values are approximately preserved when the LLS matrix has similarly important rows. We then show that $s$-hashing matrices, with $s>1$ nonzero entries per column, satisfy similarly good sketching properties for a larger class of low-coherence data matrices. Numerically, we introduce our solver Ski-LLS for solving generic dense or sparse LLS problems. Ski-LLS builds upon the successful strategies employed in the Blendenpik and LSRN solvers, which use sketching to calculate a preconditioner before applying the iterative LLS solver LSQR. Ski-LLS significantly improves upon these sketching solvers by judiciously using sparse hashing sketching while also allowing rank-deficient inputs; furthermore, when the data matrix is sparse, Ski-LLS also applies a sparse factorization to the sketched input. Extensive numerical experiments show that Ski-LLS is competitive with other state-of-the-art direct and preconditioned iterative solvers for sparse LLS, and outperforms them in the significantly over-determined regime.
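A sketch of the hashing matrices in question (the dimensions and $s$ below are illustrative): each column has $s$ nonzeros of value $\pm 1/\sqrt{s}$ in random rows, and the subspace-embedding property is precisely that the singular values of the sketched matrix stay close to 1 when applied to a matrix with orthonormal columns:

```python
import numpy as np
import scipy.sparse as sp

def s_hashing(sketch_rows, n_cols, s, rng):
    """s-hashing sketching matrix: each column has exactly s nonzeros,
    of value +-1/sqrt(s), in uniformly random rows (s = 1 recovers the
    plain hashing / CountSketch matrix)."""
    rows = np.concatenate(
        [rng.choice(sketch_rows, size=s, replace=False) for _ in range(n_cols)])
    cols = np.repeat(np.arange(n_cols), s)
    vals = rng.choice([-1.0, 1.0], size=n_cols * s) / np.sqrt(s)
    return sp.csr_matrix((vals, (rows, cols)), shape=(sketch_rows, n_cols))

rng = np.random.default_rng(0)
m, d, m_sk = 5000, 20, 500
S = s_hashing(m_sk, m, 3, rng)
Q, _ = np.linalg.qr(rng.standard_normal((m, d)))   # orthonormal columns
# subspace embedding <=> singular values of S @ Q stay close to 1
sv = np.linalg.svd(np.asarray(S @ Q), compute_uv=False)
```

Applying S to a vector costs only $s$ operations per entry, which is what makes hashing sketches attractive for sparse inputs.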


Tue, 04 May 2021
14:30
Virtual

Global Riemannian acceleration in hyperbolic and spherical spaces

David Martinez
(Dept of Computer Science - University of Oxford)
Abstract

Riemannian optimization is a powerful and active area of research that studies the optimization of functions defined on manifolds with structure. A class of functions of interest is the set of geodesically convex functions, which are functions that are convex when restricted to every geodesic. In this talk, we will present an accelerated first-order method, nearly achieving the same rates as accelerated gradient descent in the Euclidean space, for the optimization of smooth and g-convex or strongly g-convex functions defined on the hyperbolic space or a subset of the sphere. We will talk about accelerated optimization of another non-convex problem, defined in the Euclidean space, that we solve as a proxy. Additionally, for any Riemannian manifold of bounded sectional curvature, we will present reductions from optimization methods for smooth and g-convex functions to methods for smooth and strongly g-convex functions and vice versa.

This talk is based on the paper https://arxiv.org/abs/2012.03618.


Tue, 04 May 2021
14:00
Virtual

Fast randomized linear solver

Yuji Nakatsukasa
(Mathematical Institute (University of Oxford))
Abstract

We propose a randomized algorithm for solving a linear system $Ax = b$ with a highly numerically rank-deficient coefficient matrix $A$ and a nearly consistent right-hand side possessing a small-norm solution. Our algorithm finds a small-norm solution with small residual in $O(N_r + nr\log n + r^3)$ operations, where $r$ is the numerical rank of $A$ and $N_r$ is the cost of multiplying $A$ by an $n\times r$ matrix.

Joint work with Marcus Webb (Manchester). 


Tue, 09 Mar 2021
14:30
Virtual

Broadband recursive skeletonization

Abi Gopal
(Mathematical Institute)
Abstract

Often in scattering applications it is advantageous to reformulate the problem as an integral equation, discretize, and then solve the resulting linear system using a fast direct solver. The computational cost of this approach is typically dominated by the work needed to compress the coefficient matrix into a rank-structured format. In this talk, we present a novel technique which exploits the bandlimited nature of solutions to the Helmholtz equation in order to accelerate this procedure in environments where multiple frequencies are of interest.

This talk is based on joint work with Gunnar Martinsson (UT Austin).


Tue, 09 Mar 2021
14:00
Virtual

Finite element approximation of a strain-limiting elastic model

Endre Süli
(Mathematical Institute)
Abstract

Motivated by the work of K.R. Rajagopal, the objective of the talk is to discuss the construction and analysis of numerical approximations to a class of models that fall outside the realm of classical Cauchy elasticity. The models under consideration are implicit and nonlinear, and are referred to as strain-limiting, because the linearised strain remains bounded even when the stress is very large, a property that cannot be guaranteed within the framework of classical elastic or nonlinear elastic models. Strain-limiting models can be used to describe, for example, the behavior of brittle materials in the vicinity of fracture tips, or elastic materials in the neighborhood of concentrated loads where there is concentration of stress even though the magnitude of the strain tensor is limited.

We construct a finite element approximation of a strain-limiting elastic model and discuss the theoretical difficulties that arise in proving the convergence of the numerical method. The analytical results are illustrated by numerical experiments.

The talk is based on joint work with Andrea Bonito (Texas A&M University) and Vivette Girault (Sorbonne Université, Paris).


Tue, 23 Feb 2021
14:30
Virtual

Well conditioned representation for high order finite elements

Kaibo Hu
(Mathematical Institute)
Abstract

For high-order finite elements (continuous piecewise polynomials) the conditioning of the basis is important. However, so far there seems to be no generally accepted concept of "a well-conditioned basis", or a general strategy for how to obtain such representations. In this presentation, we use the $L^2$ condition number as a measure of the conditioning, and construct representations by frames such that the associated $L^2$ condition number is bounded independently of the polynomial degree. The main tools include the bubble transform, which is a stable decomposition of functions into local modes, and orthogonal polynomials on simplexes. We also include a brief discussion of potential applications in preconditioning. This is joint work with Ragnar Winther.
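The phenomenon is easy to reproduce in 1D (a simplified analogue of the talk's simplex setting): the $L^2$ Gram matrix of the monomial basis on $[0,1]$ is the Hilbert matrix, whose condition number explodes with the degree, while an $L^2$-orthonormal (Legendre) basis has condition number 1 at every degree:

```python
import numpy as np

p = 15  # polynomial degree

# Gram (mass) matrix of the monomial basis {1, x, ..., x^p} on [0, 1]:
# the (p+1) x (p+1) Hilbert matrix, notoriously ill-conditioned
H = np.array([[1.0 / (i + j + 1) for j in range(p + 1)]
              for i in range(p + 1)])
cond_monomial = np.linalg.cond(H)

# Gram matrix of L^2-normalised Legendre polynomials on [-1, 1],
# assembled with (p+1)-point Gauss-Legendre quadrature (exact up to
# degree 2p+1): it is the identity, so its condition number is 1
nodes, weights = np.polynomial.legendre.leggauss(p + 1)
V = np.polynomial.legendre.legvander(nodes, p)
V = V * np.sqrt((2 * np.arange(p + 1) + 1) / 2.0)   # normalise columns
G = V.T @ (weights[:, None] * V)
cond_legendre = np.linalg.cond(G)
```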


Tue, 23 Feb 2021
14:00
Virtual

Dense for the price of sparse: Initialising deep nets with efficient sparse affine transforms

Ilan Price
(Mathematical Institute)
Abstract

That neural networks may be pruned to high sparsities and retain high accuracy is well established. Recent research efforts focus on pruning immediately after initialization so as to allow the computational savings afforded by sparsity to extend to the training process. In this work, we introduce a new 'DCT plus Sparse' layer architecture, which maintains information propagation and trainability even with as little as 0.01% trainable kernel parameters remaining. We show that standard training of networks built with these layers, and pruned at initialization, achieves state-of-the-art accuracy for extreme sparsities on a variety of benchmark network architectures and datasets. Moreover, these results are achieved using only simple heuristics to determine the locations of the trainable parameters in the network, and thus without having to initially store or compute with the full, unpruned network, as is required by competing prune-at-initialization algorithms. Switching from standard sparse layers to DCT plus Sparse layers does not increase the storage footprint of a network and incurs only a small additional computational overhead.


Tue, 09 Feb 2021
14:30
Virtual

A unified iteration scheme for strongly monotone problems

Pascal Heid
(Mathematical Institute)
Abstract

A wide variety of fixed-point iterative methods for the solution of nonlinear operator equations in Hilbert spaces exists. In many cases, such schemes can be interpreted as iterative local linearisation methods, which can be obtained by applying a suitable preconditioning operator to the original (nonlinear) equation. Based on this observation, we will derive a unified abstract framework which recovers some prominent iterative methods. It will be shown that for strongly monotone operators this unified iteration scheme satisfies an energy contraction property. Consequently, the generated sequence converges to a solution of the original problem.
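One concrete instance of such a preconditioned fixed-point scheme is the Zarantonello iteration, in which the preconditioner is a scaled identity; for a strongly monotone, Lipschitz operator it is a contraction. A small sketch (the operator and the constants nu, L below are illustrative, chosen to be valid on a ball containing the iterates):

```python
import numpy as np

def zarantonello(F, u0, nu, L, n_iters):
    """Fixed-point iteration u <- u - (nu/L^2) F(u) for an operator F that
    is strongly monotone (constant nu) and Lipschitz (constant L) on a set
    containing the iterates; the 'preconditioner' of the unified framework
    is here simply delta^{-1} times the identity."""
    delta = nu / L**2
    u = u0.astype(float).copy()
    for _ in range(n_iters):
        u = u - delta * F(u)
    return u

# example: F(u) = A u + u^3 - b with A symmetric positive definite is
# strongly monotone (componentwise cube is monotone)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
F = lambda u: A @ u + u**3 - b
u = zarantonello(F, np.zeros(2), nu=1.3, L=12.0, n_iters=5000)
```

With this step size the iteration contracts with factor $\sqrt{1-\nu^2/L^2}$ per step, which is the energy contraction property in its simplest guise.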


Tue, 09 Feb 2021
14:00
Virtual

Point cloud registration under algebraic variety model

Florentin Goyens
(Mathematical Institute)
Abstract

Point cloud registration is the task of finding the transformation that aligns two data sets. We make the assumption that the data lies on a low-dimensional algebraic variety.  The task is phrased as an optimization problem over the special orthogonal group of rotations. We solve this problem using Riemannian optimization algorithms and show numerical examples that illustrate the efficiency of this approach for point cloud registration. 
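For the classical special case with known point correspondences, the optimization over rotations has a closed-form solution (orthogonal Procrustes / Kabsch, via an SVD), which serves as a baseline for the variety-based formulation; a sketch:

```python
import numpy as np

def best_rotation(X, Y):
    """Orthogonal Procrustes / Kabsch: the rotation R minimising
    ||R X - Y||_F over SO(3), for corresponding 3 x n point clouds,
    obtained in closed form from the SVD of Y X^T."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # enforce det(R) = +1
    return U @ D @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 40))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))     # random orthogonal
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0                                  # make it a rotation
R = best_rotation(X, Q @ X)                          # recovers Q
```

The Riemannian formulation of the talk is needed precisely when no such closed form applies, e.g. when correspondences are unknown and the data is only assumed to lie on a low-dimensional variety.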
