Thu, 15 May 2025
14:00

Quick on the draw: high-frequency trading in the Wild West of cryptocurrency limit order-book markets

Sam Howison
(Mathematical Institute (University of Oxford))
Abstract

Cryptocurrencies such as Bitcoin have only recently become a significant part of the financial landscape. Many billions of dollars are now traded daily on limit order-book markets such as Binance, and these are probably among the most open, liquid and transparent markets there are. They therefore make an interesting platform from which to investigate myriad questions to do with market microstructure. I shall talk about a few of these, including live-trading experiments to investigate the difference between on-paper strategy analysis (typical in the academic literature) and actual trading outcomes. I shall also mention very recent work on the new Hyperliquid exchange, which runs on a blockchain basis, showing how to use this architecture to obtain datasets of an unprecedented level of granularity. This is joint work with Jakob Albers, Mihai Cucuringu and Alex Shestopaloff.

Thu, 08 May 2025
14:00
(This talk is hosted by Rutherford Appleton Laboratory)

Multilevel Monte Carlo Methods with Smoothing

Aretha Teckentrup
(University of Edinburgh)
Abstract

Parameters in mathematical models are often impossible to determine fully or accurately, and are hence subject to uncertainty. By modelling the input parameters as stochastic processes, it is possible to quantify the uncertainty in the model outputs. 

In this talk, we employ the multilevel Monte Carlo (MLMC) method to compute expected values of quantities of interest related to partial differential equations with random coefficients. We make use of the circulant embedding method for sampling from the coefficient, and, to further improve the computational complexity of the MLMC estimator, we devise and implement a smoothing technique integrated into the circulant embedding method. This allows us to choose the coarsest mesh on the first level of MLMC independently of the correlation length of the covariance function of the random field, leading to considerable savings in computational cost.
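
For readers less familiar with MLMC, the estimator rests on the standard telescoping identity below; the notation (P_ℓ for the quantity of interest on level ℓ, N_ℓ samples per level) is generic rather than taken from the talk:

\[
\mathbb{E}[P_L] \;=\; \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}\bigl[P_\ell - P_{\ell-1}\bigr],
\qquad
\widehat{P}^{\mathrm{ML}} \;=\; \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{n=1}^{N_\ell} \bigl(P_\ell^{(n)} - P_{\ell-1}^{(n)}\bigr), \quad P_{-1} := 0,
\]

where each correction P_ℓ - P_{ℓ-1} is computed from the same coefficient sample on both levels, so its variance, and hence the number of samples N_ℓ needed, decreases as ℓ grows.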

 

 

Please note: this talk is hosted by Rutherford Appleton Laboratory, Harwell Campus, Didcot, OX11 0QX.

Thu, 20 Mar 2025
14:00
(This talk is hosted by Rutherford Appleton Laboratory)

Firedrake: a differentiable programming framework for finite element simulation

David Ham
(Imperial College London)
Abstract

Differentiable programming is the underpinning technology for the AI revolution. It allows neural networks to be programmed in very high-level user code while still achieving very high performance for both the evaluation of the network and, crucially, its derivatives. The Firedrake project applies exactly the same concepts to the simulation of physical phenomena modelled with partial differential equations (PDEs). By exploiting the high-level mathematical abstraction offered by the finite element method, users are able to write the mathematical operators for the problem they wish to solve in Python. High-performance parallel implementations of these operators are then automatically generated and composed with the PETSc solver framework to solve the resulting PDE. However, because the symbolic differential operators are available as code, it is possible to reason symbolically about them before numerical evaluation. In particular, the operators can be differentiated with respect to their inputs, and the resulting derivative operators composed in forward or reverse order. This creates a differentiable programming paradigm congruent with, and compatible with, machine learning frameworks such as PyTorch and JAX.
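
As a flavour of the high-level interface described above, here is a minimal sketch of a Poisson solve in Firedrake's notation; the mesh, element degree, forcing and boundary condition are illustrative choices rather than details from the talk.

    # Minimal Firedrake sketch: assemble and solve a Poisson problem.
    # Mesh size, element degree and forcing are illustrative assumptions.
    from firedrake import (UnitSquareMesh, FunctionSpace, TrialFunction,
                           TestFunction, Function, DirichletBC, Constant,
                           inner, grad, dx, solve)

    mesh = UnitSquareMesh(32, 32)              # structured mesh of the unit square
    V = FunctionSpace(mesh, "CG", 1)           # continuous piecewise-linear elements

    u = TrialFunction(V)
    v = TestFunction(V)
    a = inner(grad(u), grad(v)) * dx           # weak form of -div(grad u)
    L = Constant(1.0) * v * dx                 # constant forcing term
    bc = DirichletBC(V, 0.0, "on_boundary")    # homogeneous Dirichlet boundary

    uh = Function(V)
    solve(a == L, uh, bcs=[bc])                # code generation + PETSc solve happen here

Because the forms a and L are symbolic objects, code of this kind is also the starting point for the symbolic differentiation of operators described above.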

 

In this presentation, I will present Firedrake in the context of differentiable programming, and show how this enables productivity, capability and performance to be combined in a unique way. I will also touch on the mechanism that enables Firedrake to be coupled with PyTorch and JAX.

  

Please note: this talk will take place at Rutherford Appleton Laboratory, Harwell Campus, Didcot.

Thu, 19 Jun 2025
14:00
Lecture Room 3

Hilbert’s 19th problem and discrete De Giorgi-Nash-Moser theory: analysis and applications

Endre Süli
(Mathematical Institute (University of Oxford))
Abstract
This talk is concerned with the construction and mathematical analysis of a numerical method for a system of nonlinear partial differential equations featuring in a model of an incompressible non-Newtonian fluid, the synovial fluid, contained in the cavities of human joints. To prove the convergence of the numerical method, one has to develop a discrete counterpart of the De Giorgi-Nash-Moser theorem, which guarantees a uniform bound, in a Hölder norm, on the sequence of continuous piecewise linear finite element approximations, for divergence-form uniformly elliptic partial differential equations with measurable coefficients.
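
Schematically, and with the assumptions simplified, the discrete result referred to above concerns divergence-form uniformly elliptic problems

\[
-\nabla\cdot\bigl(A(x)\nabla u\bigr) = f \ \text{ in } \Omega, \qquad
A \in L^\infty(\Omega;\mathbb{R}^{d\times d}), \qquad
\lambda|\xi|^2 \le \xi^{\top} A(x)\,\xi \le \Lambda|\xi|^2,
\]

and asserts the existence of α ∈ (0,1) and C > 0, independent of the mesh size h, such that the continuous piecewise linear finite element approximations satisfy \|u_h\|_{C^{0,\alpha}(\overline{\Omega})} \le C.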
Thu, 01 May 2025

14:00 - 15:00
Lecture Room 3

Adventures in structured matrix computations

Gunnar Martinsson
(UT Austin)
Abstract

Many matrices that arise in scientific computing and in data science have internal structure that can be exploited to accelerate computations. The focus in this talk will be on matrices that are either of low rank, or can be tessellated into a collection of subblocks that are either of low rank or of small size. We will describe how matrices of this nature arise in the context of fast algorithms for solving PDEs and integral equations, and also in handling "kernel matrices" from computational statistics. A particular focus will be on randomized algorithms for obtaining data-sparse representations of such matrices.
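
As a concrete illustration of the randomized flavour mentioned above, the following is a minimal sketch of the standard randomized range-finder factorization A ≈ QB; the matrix, rank and oversampling are illustrative assumptions, not details from the talk.

    # Minimal sketch of a randomized low-rank factorization A ~= Q @ B.
    import numpy as np

    def randomized_low_rank(A, rank, oversample=10):
        m, n = A.shape
        # Probe the range of A with a Gaussian test matrix.
        Omega = np.random.standard_normal((n, rank + oversample))
        Y = A @ Omega
        # Orthonormal basis for the sampled range.
        Q, _ = np.linalg.qr(Y)
        # Project A onto that basis, giving A ~= Q @ (Q.T @ A).
        return Q, Q.T @ A

    # Demo on a matrix of (numerical) rank 40.
    G = np.random.standard_normal((500, 40))
    A = G @ G.T
    Q, B = randomized_low_rank(A, rank=40)
    print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))   # ~ machine precision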

 

At the end of the talk, we will explore an unorthodox technique for discretizing elliptic PDEs that was designed specifically to play well with fast algorithms for dense structured matrices.

Thu, 05 Jun 2025
14:00
Lecture Room 3

Solving sparse linear systems using quantum computing algorithms

Leigh Lapworth
(Rolls-Royce)
Abstract

The currently available quantum computers fall into the NISQ (Noisy Intermediate-Scale Quantum) regime. These enable variational algorithms with a relatively small number of free parameters. We are now entering the FTQC (Fault-Tolerant Quantum Computer) regime, where gate fidelities are high enough that error-correction schemes are effective. The UK Quantum Missions include the target of an FTQC device that can perform a million operations by 2028, and a trillion operations by 2035.

 

This talk will present the outcomes from assessments of two quantum linear equation solvers for FTQCs: the Harrow–Hassidim–Lloyd (HHL) and the Quantum Singular Value Transform (QSVT) algorithms. These assessments used sample matrices from a Computational Fluid Dynamics (CFD) test case. The quantum solvers have also been embedded within an outer non-linear solver to judge their impact on convergence. The analysis uses circuit emulation to judge the FTQC requirements for delivering quantum utility.
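
For orientation, both algorithms address the quantum linear system problem, stated schematically below in its textbook form (not the specifics of the CFD study):

\[
A x = b, \qquad A \in \mathbb{C}^{N\times N} \ \text{(suitably encoded)}, \qquad \text{prepare } |x\rangle \propto A^{-1}|b\rangle,
\]

with a cost that is polylogarithmic in N but grows with the condition number κ(A) of the matrix.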

Thu, 29 May 2025

14:00 - 15:00
Lecture Room 3

On the data-sparsity of the solution of Riccati equations with quasiseparable coefficients

Stefano Massei
(Università di Pisa)
Abstract

Solving large-scale continuous-time algebraic Riccati equations is a significant challenge in various control theory applications. 

This work demonstrates that when the matrix coefficients of the equation are quasiseparable, the solution also exhibits numerical quasiseparability. This property enables us to develop two efficient Riccati solvers. The first solver is applicable to the general quasiseparable case, while the second is tailored to the particular case of banded coefficients. Numerical experiments confirm the effectiveness of the proposed algorithms on both synthetic examples and case studies from the control of partial differential equations and agent-based models. 
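
For reference, the continuous-time algebraic Riccati equation takes the standard form below (generic control-theoretic notation, not necessarily that of the talk); quasiseparability means that every submatrix contained in the strictly upper or strictly lower triangular part of a coefficient has low rank.

\[
A^{*} X + X A - X B B^{*} X + C^{*} C = 0 .
\]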

Thu, 22 May 2025

14:00 - 15:00
Lecture Room 3

When you truncate an infinite equation, what happens to the leftovers?

Geoff Vasil
(University of Edinburgh)
Abstract

Numerically solving PDEs typically requires compressing infinite information into a finite system of algebraic equations. Pragmatically, we usually follow a recipe: “Assume solutions of form X; substitute into PDE Y; discard terms by rule Z.” In contrast, Lanczos’s pioneering “tau method” prescribes modifying the PDE to form an exact finite system. Crucially, any recipe-based method can be viewed as adding a small equation correction, enabling us to compare multiple schemes independently of the solver. 
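
A small worked example of the tau idea, in its textbook form rather than as presented in the talk: for the model problem u' = λu on [-1, 1] with u(-1) = 1, a degree-N Chebyshev truncation cannot satisfy the equation exactly, so Lanczos prescribes solving the modified, exactly solvable system

\[
u_N'(x) - \lambda\, u_N(x) = \tau\, T_N(x), \qquad u_N(-1) = 1, \qquad u_N(x) = \sum_{k=0}^{N} a_k T_k(x),
\]

where the extra unknown τ measures the "leftover": the explicit correction added to the equation so that the finite system is exact, and which a recipe-based discretisation leaves implicit.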

This talk also addresses a paradox: PDEs often admit infinitely many solutions, but finite systems produce only a finite set. When we include a “small” correction, the missing solutions are effectively hidden. I will discuss how tau methods frame this perspective and outline proposals for systematically studying and optimising various residuals.

Thu, 12 Dec 2024
14:00
(This talk is hosted by Rutherford Appleton Laboratory)

A Subspace-conjugate Gradient Method for Linear Matrix Equations

Davide Palitta
(Università di Bologna)
Abstract

 
The solution of multiterm linear matrix equations is still a very challenging task in numerical linear algebra.
While many different solid methods exist in the literature for equations with (at most) two terms, having more than two terms makes the numerical treatment of these equations much trickier. Very few options are available in the literature. In particular, to the best of our knowledge, no decomposition-based method for multiterm equations has ever been proposed; only iterative procedures exist.
An incomplete list of contributions in this direction includes a greedy procedure designed by Sirković and Kressner, projection methods tailored to the equation at hand, Riemannian optimization schemes, and matrix-oriented Krylov methods with low-rank truncations. The last class of solvers is probably the most commonly used. These schemes adapt standard Krylov schemes for linear systems to matrix equations by leveraging the equivalence between matrix equations and their Kronecker form.
As long as no truncations are performed, this equivalence means that the algorithm itself does not exploit the structure of the problem: it cannot see that we are actually solving a matrix equation rather than a linear system. The low-rank truncations we may perform in the matrix-equation framework can be viewed as a simple computational tool needed to make the solution process affordable in terms of storage, not as an algorithmic advance.
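
In generic notation, the multiterm equation and the Kronecker-form equivalence mentioned above read

\[
\sum_{i=1}^{p} A_i\, X\, B_i^{\top} = C
\qquad\Longleftrightarrow\qquad
\Bigl(\sum_{i=1}^{p} B_i \otimes A_i\Bigr)\operatorname{vec}(X) = \operatorname{vec}(C),
\]

with p > 2 in the multiterm case; a Krylov method applied to the right-hand formulation sees only a large linear system, which is precisely the loss of structure described above.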

 
Taking inspiration from the matrix-oriented cg method, our goal is to design a novel iterative scheme for the solution of multiterm matrix equations. We name this procedure the subspace-conjugate gradient method (Ss-cg) because of the particular orthogonality conditions imposed to compute some of the quantities involved in the scheme. As we will show in this talk, the main difference between Ss-cg and cg is the ability of the former to capitalize on the matrix-equation structure of the underlying problem. In particular, we make full use of the (low-rank) matrix format of the iterates to define appropriate "step sizes": whereas these quantities are the scalars alpha_k and beta_k in cg, they become small matrices in our approach.
This novel point of view leads to remarkable computational gains, making Ss-cg a very competitive option for the solution of multiterm linear matrix equations.

 
This is joint work with Martina Iannacito and Valeria Simoncini, both from the Department of Mathematics, University of Bologna.
 