Past Numerical Analysis Group Internal Seminar

12 October 2021
14:30
Gian Antonucci
Abstract

Over the last few decades, scientists have conducted extensive research on parallelisation in time, which appears to be a promising way to provide additional parallelism when parallelisation in space saturates before all parallel resources have been used. For the simulations of interest to the Culham Centre for Fusion Energy (CCFE), however, time parallelisation is highly non-trivial, because the exponential divergence of nearby trajectories makes it hard for time-parallel numerical integration to achieve convergence. In this talk we present our results for the convergence analysis of parallel-in-time algorithms on nonlinear problems, focussing on what is widely accepted to be the prototypical parallel-in-time method, the Parareal algorithm. Next, we introduce a new error function, based on the maximal Lyapunov exponents, for measuring convergence, and show how it improves the overall parallel speedup when compared to the traditional check used in the literature. We conclude by mentioning how the above tools can help us design and analyse a novel algorithm for the long-time integration of chaotic systems that uses time-parallel algorithms as a sub-procedure.
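For readers unfamiliar with Parareal, the following is a minimal Python sketch of its coarse/fine correction iteration on a toy scalar ODE. The forward-Euler propagators and the test problem are placeholders, not the speaker's implementation, and the Lyapunov-based convergence check discussed in the talk is not shown.

```python
import numpy as np

def euler(f, y, t0, t1, n=1):
    """Forward-Euler propagator over [t0, t1] with n substeps (placeholder integrator)."""
    dt = (t1 - t0) / n
    t = t0
    for _ in range(n):
        y = y + dt * f(t, y)
        t += dt
    return y

def parareal(f, y0, t0, T, n_slices, coarse, fine, n_iters):
    """Minimal Parareal iteration: cheap serial coarse sweeps corrected by fine solves
    that, in a real implementation, run in parallel across the time slices."""
    ts = np.linspace(t0, T, n_slices + 1)
    U = [np.asarray(y0, dtype=float)]
    for k in range(n_slices):                          # initial guess: serial coarse sweep
        U.append(coarse(f, U[k], ts[k], ts[k + 1]))
    for _ in range(n_iters):
        F = [fine(f, U[k], ts[k], ts[k + 1]) for k in range(n_slices)]    # parallel in practice
        G = [coarse(f, U[k], ts[k], ts[k + 1]) for k in range(n_slices)]
        U_new = [U[0]]
        for k in range(n_slices):                      # serial correction sweep
            U_new.append(coarse(f, U_new[k], ts[k], ts[k + 1]) + F[k] - G[k])
        U = U_new
    return ts, np.array(U)

# Toy usage on y' = -y: coarse = 1 Euler substep per slice, fine = 100 substeps.
f = lambda t, y: -y
ts, U = parareal(f, 1.0, 0.0, 5.0, n_slices=10, n_iters=3,
                 coarse=lambda f, y, a, b: euler(f, y, a, b, 1),
                 fine=lambda f, y, a, b: euler(f, y, a, b, 100))
print(np.max(np.abs(U - np.exp(-ts))))
```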

  • Numerical Analysis Group Internal Seminar
12 October 2021
14:00
Andy Wathen
Abstract

The solution of systems of linear(ized) equations lies at the heart of many problems in Scientific Computing. For large systems in particular, iterative methods are a primary approach. For many symmetric (or self-adjoint) systems, there are effective solution methods based on the Conjugate Gradient method (for definite problems) or minres (for indefinite problems) in combination with an appropriate preconditioner, which is required in almost all cases. For nonsymmetric systems there are two principal lines of attack: the use of a nonsymmetric iterative method such as gmres, or transformation into a symmetric problem via the normal equations. In either case, an appropriate preconditioner is generally required. We consider the possibilities here, particularly the idea of preconditioning the normal equations via approximations to the original nonsymmetric matrix. We highlight dangers that readily arise in this approach. Our comments also apply in the context of linear least squares problems, as we will explain.
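As a toy illustration of the two routes (not of the preconditioners discussed in the talk), the Python sketch below applies GMRES to $Ax = b$ directly, and CG to the normal equations $A^TAx = A^Tb$ preconditioned by $(M^TM)^{-1}$ for an approximation $M$ of $A$; the diagonal choice of $M$ is a deliberately crude placeholder.

```python
import numpy as np
from scipy.sparse.linalg import gmres, cg, LinearOperator

rng = np.random.default_rng(1)
n = 500
A = np.eye(n) + rng.standard_normal((n, n)) / (4 * np.sqrt(n))   # mildly nonsymmetric test matrix
b = rng.standard_normal(n)

# Route 1: a nonsymmetric Krylov method applied to A x = b directly.
x1, _ = gmres(A, b)

# Route 2: CG on the normal equations A^T A x = A^T b, preconditioned by (M^T M)^{-1},
# where M approximates A.  Here M = diag(A), purely for illustration.
d = np.diag(A)
P = LinearOperator((n, n), matvec=lambda v: v / d**2)            # applies (M^T M)^{-1}
x2, _ = cg(A.T @ A, A.T @ b, M=P)

for x in (x1, x2):
    print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))         # relative residuals
```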

  • Numerical Analysis Group Internal Seminar
15 June 2021
14:30
Katy Clough
Abstract

Numerical relativity allows us to simulate the behaviour of regions of space and time where gravity is strong and dynamical. For example, it allows us to calculate precisely the gravitational waveform that should be generated by the merger of two inspiralling black holes. Since the first detection of gravitational waves from such an event in 2015, banks of numerical relativity “templates” have been used to extract further information from noisy data streams. In this talk I will give an overview of the field: what are we simulating, why, and what are the main challenges, past and future.


A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact trefethen@maths.ox.ac.uk.

  • Numerical Analysis Group Internal Seminar
1 June 2021
14:30
Abstract

Mixed-precision algorithms combine low- and high-precision computations in order to benefit from the performance gains of reduced precision while retaining good accuracy. In this talk we focus on explicit stabilised Runge-Kutta (ESRK) methods for parabolic PDEs, as they are especially amenable to a mixed-precision treatment. However, some of the concepts we present extend to Runge-Kutta (RK) methods in general.

Consider the problem $y' = f(t,y)$ and let $u$ be the roundoff unit of the low-precision format used. Standard mixed-precision schemes perform all evaluations of $f$ in reduced precision to improve efficiency. We show that while this approach has many benefits, it harms the convergence order of the method, leading to a limiting accuracy of $O(u)$.
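The limiting-accuracy effect can be seen in a toy experiment (illustrative only: Heun's method, the emulated fp16 evaluations of $f$, and the test problem below are placeholder choices, not the ESRK schemes of the talk). The double-precision errors decay like $O(\Delta t^2)$, while the errors with low-precision $f$-evaluations stagnate near the fp16 roundoff level.

```python
import numpy as np

def heun_step(f, t, y, dt, evalf):
    """One step of Heun's method (RK2), with f evaluated through `evalf`."""
    k1 = evalf(f, t, y)
    k2 = evalf(f, t + dt, y + dt * k1)
    return y + 0.5 * dt * (k1 + k2)

full = lambda f, t, y: f(t, y)                          # all evaluations in double precision
half = lambda f, t, y: np.float64(np.float16(f(t, y)))  # emulate fp16 evaluations of f

f = lambda t, y: -y                                     # test problem y' = -y, y(0) = 1, on [0, 1]
exact = np.exp(-1.0)

for dt in [0.1, 0.01, 0.001, 0.0001]:
    errs = []
    for evalf in (full, half):
        t, y = 0.0, 1.0
        for _ in range(round(1.0 / dt)):
            y = heun_step(f, t, y, dt, evalf)
            t += dt
        errs.append(abs(y - exact))
    print(f"dt = {dt:7.4f}   double f-evals: {errs[0]:.2e}   fp16 f-evals: {errs[1]:.2e}")
```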

In this talk we present a more accurate alternative: a scheme, which we call $q$-order-preserving, that is unaffected by this limiting behaviour. The idea is simple: by using $q$ high-precision evaluations of $f$ we can hope to retain a limiting convergence order of $O(\Delta t^{q})$. However, the practical design of these order-preserving schemes is less straightforward.

We specifically focus on ESRK schemes as these are low-order schemes that employ a much larger number of stages than dictated by their convergence order so as to maximise stability. As such, these methods spend most of their computational effort on stability rather than on accuracy. We present new $s$-stage order-$1$ and order-$2$ RK-Chebyshev and RK-Legendre methods that are provably full-order-preserving. These methods are essentially as cheap as their fully low-precision equivalents, and they are as accurate and (almost) as stable as their high-precision counterparts.


A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact trefethen@maths.ox.ac.uk.

  • Numerical Analysis Group Internal Seminar
1 June 2021
14:00
Abstract

Standard worst-case rounding error bounds of most numerical linear algebra algorithms grow linearly with the problem size and the machine precision. These bounds suggest that numerical algorithms could be inaccurate at large scale and/or at low precisions, but fortunately they are pessimistic. We will review recent advances in probabilistic rounding error analyses, which have attracted renewed interest due to the emergence of low precisions on modern hardware as well as the rise of stochastic rounding.
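As a toy illustration (a standard recursive-summation example, not taken from the talk), one can compare the observed error of single-precision summation with the worst-case bound of roughly $nu$ and the probabilistic bound of roughly $\sqrt{n}\,u$, where $u$ is the unit roundoff.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.finfo(np.float32).eps / 2                 # unit roundoff of single precision

for n in [10**3, 10**4, 10**5, 10**6]:
    x = rng.random(n)
    s = np.float32(0.0)
    for xi in x.astype(np.float32):              # recursive (left-to-right) summation in fp32
        s = s + xi
    rel_err = abs(float(s) - x.sum()) / x.sum()  # double-precision sum as the reference
    print(f"n = {n:>8}   observed {rel_err:.1e}   "
          f"worst case ~ n*u {n * u:.1e}   probabilistic ~ sqrt(n)*u {np.sqrt(n) * u:.1e}")
```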


A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact trefethen@maths.ox.ac.uk.

  • Numerical Analysis Group Internal Seminar
18 May 2021
14:30
John Papadopoulos
Abstract

A topology optimization problem for Stokes flow finds the optimal material distribution of a Stokes fluid that minimizes the fluid’s power dissipation under a volume constraint. In 2003, T. Borrvall and J. Petersson [1] formulated a nonconvex optimization problem for this objective. They proved the existence of minimizers in the infinite-dimensional setting and showed that a suitably chosen finite element method will converge in a weak(-*) sense to an unspecified solution. In this talk, we will extend and refine their numerical analysis. We will show that there exist finite element functions, satisfying the necessary first-order conditions of optimality, that converge strongly to each isolated local minimizer of the problem.
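For orientation, the problem is roughly of the following form (the notation and scalings here are indicative and may differ from [1]):

$$\min_{(\rho,\,u)}\ J(\rho,u) = \frac{1}{2}\int_\Omega \alpha(\rho)\,|u|^2\,\mathrm{d}x + \frac{\nu}{2}\int_\Omega |\nabla u|^2\,\mathrm{d}x - \int_\Omega f\cdot u\,\mathrm{d}x,$$

subject to $\nabla\cdot u = 0$ in $\Omega$, the prescribed boundary conditions on $u$, the box constraint $0\le\rho\le 1$, and the volume constraint $\int_\Omega \rho\,\mathrm{d}x \le \gamma\,|\Omega|$, where $\nu$ is the viscosity, $f$ a body force, $\gamma\in(0,1)$ the volume fraction, and $\alpha$ a decreasing inverse-permeability interpolation (large where $\rho=0$, i.e. solid; small where $\rho=1$, i.e. fluid).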

[1] T. Borrvall, J. Petersson, Topology optimization of fluids in Stokes flow, International Journal for Numerical Methods in Fluids 41 (1) (2003) 77–107. doi:10.1002/fld.426.

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact trefethen@maths.ox.ac.uk.

  • Numerical Analysis Group Internal Seminar
18 May 2021
14:00
Abstract

We investigate theoretical and numerical properties of sparse sketching for both dense and sparse Linear Least Squares (LLS) problems. We show that sketching with hashing matrices (one nonzero entry per column, and of size proportional to the rank of the data matrix) generates a subspace embedding with high probability, provided the given data matrix has low coherence; thus optimal residual values are approximately preserved when the LLS matrix has rows of similar importance. We then show that $s$-hashing matrices, with $s>1$ nonzero entries per column, satisfy similarly good sketching properties for a larger class of low-coherence data matrices.

Numerically, we introduce our solver Ski-LLS for solving generic dense or sparse LLS problems. Ski-LLS builds upon the successful strategies employed in the Blendenpik and LSRN solvers, which use sketching to compute a preconditioner before applying the iterative LLS solver LSQR. Ski-LLS significantly improves upon these sketching solvers by judiciously using sparse hashing sketches and by allowing rank-deficient inputs; furthermore, when the data matrix is sparse, Ski-LLS also applies a sparse factorization to the sketched input. Extensive numerical experiments show that Ski-LLS is competitive with other state-of-the-art direct and preconditioned iterative solvers for sparse LLS, and outperforms them in the significantly over-determined regime.
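A minimal Python sketch of the underlying strategy, in the Blendenpik/LSRN spirit, is shown below: a sparse hashing sketch is applied to the data matrix, the R-factor of the sketched matrix is used as a right preconditioner, and LSQR is run on the preconditioned operator. This illustrates the idea only; it is not Ski-LLS, and the sketch dimension, scaling, and full-rank test matrix are placeholder choices.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator, lsqr
from scipy.linalg import solve_triangular

def s_hashing(m, n, s, rng):
    """m x n hashing sketch with s entries of +-1/sqrt(s) per column, in random rows."""
    rows = np.concatenate([rng.choice(m, size=s, replace=False) for _ in range(n)])
    cols = np.repeat(np.arange(n), s)
    vals = rng.choice([-1.0, 1.0], size=s * n) / np.sqrt(s)
    return csc_matrix((vals, (rows, cols)), shape=(m, n))

rng = np.random.default_rng(0)
n, d, s = 20000, 200, 3
A = rng.standard_normal((n, d))                  # a low-coherence (dense Gaussian) test matrix
b = rng.standard_normal(n)

S = s_hashing(4 * d, n, s, rng)                  # sketch dimension: a small multiple of d
_, R = np.linalg.qr(np.asarray(S @ A))           # R-factor of the sketched matrix

# Right-preconditioned operator A R^{-1}: run LSQR on it, then recover x = R^{-1} y.
M = LinearOperator((n, d),
                   matvec=lambda v: A @ solve_triangular(R, v),
                   rmatvec=lambda w: solve_triangular(R, A.T @ w, trans='T'))
y = lsqr(M, b, atol=1e-12, btol=1e-12)[0]
x = solve_triangular(R, y)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))
```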

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact trefethen@maths.ox.ac.uk.

  • Numerical Analysis Group Internal Seminar
4 May 2021
14:30
Abstract

Riemannian optimization is a powerful and active area of research that studies the optimization of functions defined on manifolds with structure. A class of functions of interest is the set of geodesically convex functions, which are functions that are convex when restricted to every geodesic. In this talk, we will present an accelerated first-order method, nearly achieving the same rates as accelerated gradient descent in the Euclidean space, for the optimization of smooth and g-convex or strongly g-convex functions defined on the hyperbolic space or a subset of the sphere. We will talk about accelerated optimization of another non-convex problem, defined in the Euclidean space, that we solve as a proxy. Additionally, for any Riemannian manifold of bounded sectional curvature, we will present reductions from optimization methods for smooth and g-convex functions to methods for smooth and strongly g-convex functions and vice versa.
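For reference, a function $f$ on a Riemannian manifold is geodesically convex (g-convex) if $f(\gamma(t)) \le (1-t)\,f(\gamma(0)) + t\,f(\gamma(1))$ for every geodesic $\gamma$ and all $t\in[0,1]$; it is $\mu$-strongly g-convex if, in addition, the right-hand side can be decreased by $\tfrac{\mu}{2}\,t(1-t)\,d(\gamma(0),\gamma(1))^2$, where $d$ is the geodesic distance.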

This talk is based on the paper https://arxiv.org/abs/2012.03618.


A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact trefethen@maths.ox.ac.uk.

  • Numerical Analysis Group Internal Seminar
4 May 2021
14:00
Abstract

We propose a randomized algorithm for solving a linear system $Ax = b$ whose coefficient matrix $A$ is highly numerically rank-deficient and whose right-hand side is nearly consistent, so that a small-norm solution exists. Our algorithm finds a small-norm solution with small residual in $O(N_r + nr\log n + r^3)$ operations, where $r$ is the numerical rank of $A$ and $N_r$ is the cost of multiplying $A$ by an $n\times r$ matrix.

Joint work with Marcus Webb (Manchester). 
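For context, the flavour of randomized methods with costs of this type can be sketched as follows: a generic randomized truncated-SVD solve (illustrative only, not the algorithm presented in the talk; the oversampling parameter p and the synthetic test problem are placeholders). The dominant cost is the product of $A$ with an $n\times(r+p)$ test matrix.

```python
import numpy as np

def rand_minnorm_solve(A, b, r, p=10, rng=None):
    """Min-norm solve of a numerically rank-r system via a randomized truncated SVD
    (a generic sketch, not the speaker's algorithm); p is an oversampling parameter."""
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((A.shape[1], r + p))     # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)                       # approximate basis for range(A)
    U, sig, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U
    return Vt[:r].T @ ((U[:, :r].T @ b) / sig[:r])       # keep only the leading r modes

rng = np.random.default_rng(0)
n, r = 2000, 50
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # numerical rank r
b = A @ rng.standard_normal(n)                                  # consistent right-hand side
x = rand_minnorm_solve(A, b, r, rng=rng)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```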

 

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact trefethen@maths.ox.ac.uk.
