Forthcoming events in this series


Tue, 20 Jun 2023
13:30
L3

CDT in Mathematics of Random Systems June Workshop 2023

Milena Vuletic, Nicola Muca Cirone & Renyuan Xu
Abstract

1:30 Milena Vuletic

Simulation of Arbitrage-Free Implied Volatility Surfaces

We present a computationally tractable method for simulating arbitrage-free implied volatility surfaces. We illustrate how our method may be combined with a factor model based on historical SPX implied volatility data to generate dynamic scenarios for arbitrage-free implied volatility surfaces. Our approach reconciles static arbitrage constraints with a realistic representation of statistical properties of implied volatility co-movements.
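For illustration only, the sketch below shows the generic shape of a factor-model simulator for implied volatility surfaces: a reference surface perturbed multiplicatively by a few factor surfaces whose scores follow AR(1) dynamics. The grids, factor shapes and dynamics are invented, and the sketch does not enforce the static no-arbitrage constraints that are central to the talk.

```python
# Hypothetical sketch of a factor-model simulator for implied volatility (IV)
# surfaces: a reference surface perturbed multiplicatively by factor surfaces
# whose scores follow AR(1) dynamics. Grids, factor shapes and dynamics are
# invented, and no static-arbitrage constraints are enforced here.
import numpy as np

rng = np.random.default_rng(0)

moneyness = np.linspace(-0.3, 0.3, 11)           # log-moneyness grid
maturity = np.array([0.1, 0.25, 0.5, 1.0, 2.0])  # maturities in years
K, T = np.meshgrid(moneyness, maturity)

iv_ref = 0.2 + 0.1 * K**2 + 0.02 * np.sqrt(T)    # hypothetical reference surface

# Hypothetical factor loadings: level, skew and term-structure modes.
factors = np.stack([np.ones_like(K), K, np.sqrt(T)])

def simulate_surfaces(n_steps=250, phi=0.98, vols=(0.02, 0.01, 0.005)):
    """Simulate IV surfaces by driving the factor scores with AR(1) dynamics."""
    scores = np.zeros(len(factors))
    surfaces = []
    for _ in range(n_steps):
        scores = phi * scores + np.array(vols) * rng.standard_normal(len(factors))
        # Log-additive (multiplicative) factor model for the surface.
        surfaces.append(iv_ref * np.exp(np.tensordot(scores, factors, axes=1)))
    return np.array(surfaces)

paths = simulate_surfaces()
print(paths.shape)   # (250, 5, 11): time x maturity x moneyness
```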


2:00 Nicola Muca Cirone

Neural Signature Kernels

Motivated by the paradigm of reservoir computing, we consider randomly initialized controlled ResNets defined as Euler-discretizations of neural controlled differential equations (Neural CDEs), a unified architecture which encompasses both RNNs and ResNets. We show that in the infinite-width-depth limit and under proper scaling, these architectures converge weakly to Gaussian processes indexed on some spaces of continuous paths and with kernels satisfying certain partial differential equations (PDEs) varying according to the choice of activation function, extending the results of Hayou (2022); Hayou & Yang (2023) to the controlled and homogeneous case. In the special, homogeneous, case where the activation is the identity, we show that the equation reduces to a linear PDE and the limiting kernel agrees with the signature kernel of Salvi et al. (2021a). We name this new family of limiting kernels neural signature kernels. Finally, we show that in the infinite-depth regime, finite-width controlled ResNets converge in distribution to Neural CDEs with random vector fields which, depending on whether the weights are shared across layers, are either time-independent and Gaussian or behave like a matrix-valued Brownian motion.
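As a concrete illustration of the architecture being analysed, the sketch below implements a randomly initialised controlled ResNet, i.e. an Euler discretisation of a neural CDE driven by a discretised control path. The width, depth, scaling and activation are illustrative assumptions rather than the exact setup of the talk.

```python
# Minimal sketch of a randomly initialised controlled ResNet: an Euler
# discretisation of a neural CDE  dX_t = sum_i sigma(A_i X_t) d omega^i_t
# driven by a discretised control path omega. Width, depth, scaling and
# activation are illustrative assumptions, not the talk's exact setup.
import numpy as np

rng = np.random.default_rng(1)
width, control_dim, depth = 64, 3, 100

# Random, fixed weight matrices, one per control channel, scaled by 1/sqrt(width).
A = rng.standard_normal((control_dim, width, width)) / np.sqrt(width)

def controlled_resnet(omega, x0, activation=np.tanh):
    """omega: (depth + 1, control_dim) discretised control path; x0: initial state."""
    x = x0.copy()
    for k in range(depth):
        d_omega = omega[k + 1] - omega[k]   # control increment for this layer/step
        x = x + sum(activation(A[i] @ x) * d_omega[i] for i in range(control_dim))
    return x

# A crude random-walk control path and a random initial state.
omega = np.cumsum(rng.standard_normal((depth + 1, control_dim)), axis=0) / np.sqrt(depth)
x0 = rng.standard_normal(width) / np.sqrt(width)
print(controlled_resnet(omega, x0)[:5])
```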


2:30 Break


2:50-3:50 Renyuan Xu, Assistant Professor, University of Southern California

Reversible and Irreversible Decisions under Costly Information Acquisition 

Many real-world analytics problems involve two significant challenges: estimation and optimization. Due to the typically complex nature of each challenge, the standard paradigm is estimate-then-optimize. By and large, machine learning or human learning tools are intended to minimize estimation error and do not account for how the estimations will be used in the downstream optimization problem (such as decision-making problems). In contrast, there is a line of literature in economics focusing on exploring the optimal way to acquire information and learn dynamically to facilitate decision-making. However, most of the decision-making problems considered in this line of work are static (i.e., one-shot) problems which over-simplify the structures of many real-world problems that require dynamic or sequential decisions.

As a preliminary attempt to introduce more complex downstream decision-making problems after learning and to investigate how downstream tasks affect the learning behavior, we consider a simple example where a decision maker (DM) chooses between two products, an established product A with known return and a newly introduced product B with an unknown return. The DM will make an initial choice between A and B after learning about product B for some time. Importantly, our framework allows the DM to switch to Product A later on at a cost if Product B is selected as the initial choice. We establish the general theory and investigate the analytical structure of the problem through the lens of the Hamilton-Jacobi-Bellman equation and viscosity solutions. We then discuss how model parameters and the opportunity to reverse affect the learning behavior of the DM.

This is based on joint work with Thaleia Zariphopoulou and Luhao Zhang from UT Austin.
 

Mon, 27 Feb 2023
13:30
L5

CDT in Mathematics of Random Systems February Workshop 2023

Deborah Miori, Žan Žurič
Abstract

1:30-2:15 Deborah Miori, CDT student, University of Oxford

DeFi: Data-Driven Characterisation of Uniswap v3 Ecosystem & an Ideal Crypto Law for Liquidity Pools

Uniswap is a Constant Product Market Maker built around liquidity pools, where pairs of tokens are exchanged subject to a fee that is proportional to the size of transactions. At the time of writing, there exist more than 6,000 pools associated with Uniswap v3, implying that empirical investigations on the full ecosystem can easily become computationally expensive. Thus, we propose a systematic workflow to extract and analyse a meaningful but computationally tractable sub-universe of liquidity pools.

Leveraging the 34 pools found relevant for the six-month time window January-June 2022, we then investigate the related liquidity consumption behaviour of market participants. We propose to represent each liquidity taker by a suitably constructed transaction graph, which is a fully connected network where nodes are the liquidity taker’s executed transactions, and edges contain weights encoding the time elapsed between any two transactions. We extend the NLP-inspired graph2vec algorithm to the weighted undirected setting, and employ it to obtain an embedding of the set of graphs. This embedding allows us to extract seven clusters of liquidity takers, with equivalent behavioural patterns and interpretable trading preferences.

We conclude our work by testing for relationships between the characteristic mechanisms of each pool, i.e. liquidity provision, consumption, and price variation. We introduce a related ideal crypto law, inspired by the ideal gas law of thermodynamics, and demonstrate that pools adhering to this law are healthier trading venues in terms of sensitivity of liquidity and agents’ activity. Regulators and practitioners could benefit from our model by developing related pool health monitoring tools.
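For concreteness, here is a minimal sketch of the transaction-graph representation described above: each liquidity taker is mapped to a fully connected weighted graph whose nodes are their transactions and whose edge weights record the time elapsed between any two transactions. The timestamps are invented, and the graph2vec-style embedding and clustering steps are not reproduced.

```python
# Minimal sketch of the transaction-graph representation described above: a
# liquidity taker is mapped to a fully connected weighted graph whose nodes are
# their transactions and whose edge weights record the elapsed time between any
# two transactions. Timestamps are invented; the graph2vec-style embedding and
# the clustering step are not reproduced.
import numpy as np

def transaction_graph(timestamps):
    """Return the weighted adjacency matrix of the fully connected transaction graph."""
    t = np.sort(np.asarray(timestamps, dtype=float))
    return np.abs(t[:, None] - t[None, :])   # weight(i, j) = |t_i - t_j|

# Hypothetical liquidity taker with five executed transactions (Unix timestamps).
taker = [1_650_000_000, 1_650_003_600, 1_650_010_000, 1_650_090_000, 1_650_100_000]
print(transaction_graph(taker).astype(int))
```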

2:15-3:00 Žan Žurič, CDT student, Imperial College London

A Random Neural Network Approach to Pricing SPDEs for Rough Volatility

We propose a novel machine learning-based scheme for solving partial differential equations (PDEs) and backward stochastic partial differential equations (BSPDEs) stemming from the option pricing equations of Markovian and non-Markovian models respectively. The use of so-called random weighted neural networks (RWNNs) allows us to formulate the optimisation problem as a linear regression, thus immensely speeding up the training process. Furthermore, we analyse the convergence of the RWNN scheme and are able to specify error estimates in terms of the number of hidden nodes. The performance of the scheme is tested on the Black-Scholes and rBergomi models and shown to have superior training times, with accuracy comparable to existing deep learning approaches.
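A minimal sketch of the RWNN idea follows: hidden weights are drawn at random and frozen, so training the output layer reduces to a regularised linear regression. The toy regression target is illustrative and unrelated to the pricing (B)SPDEs of the talk.

```python
# Minimal sketch of the random weighted neural network (RWNN) idea: hidden
# weights are drawn at random and frozen, so training the output layer reduces
# to a regularised linear regression. The toy target below is illustrative and
# unrelated to the pricing (B)SPDEs of the talk.
import numpy as np

rng = np.random.default_rng(2)

def rwnn_fit(X, y, width=200, reg=1e-6):
    """Fit an RWNN by ridge regression on random tanh features."""
    W = rng.standard_normal((X.shape[1], width))   # random, fixed hidden weights
    b = rng.standard_normal(width)
    H = np.tanh(X @ W + b)                         # random hidden features
    # Output weights: the only trained parameters, obtained by linear regression.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(width), H.T @ y)
    return W, b, beta

def rwnn_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])      # toy regression target
W, b, beta = rwnn_fit(X, y)
print(np.abs(rwnn_predict(X, W, b, beta) - y).max())
```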

Tue, 06 Dec 2022
14:00
Large Lecture Theatre, Department of Statistics, University of Oxford

CDT in Mathematics of Random Systems December Workshop 2022

Thomas Tendron (Oxford Statistics), Julian Sieber (Imperial Mathematics)
Abstract

2:00 Julian Sieber

On the (Non-)stationary density of fractional SDEs

I will present a novel approach for studying the density of SDEs driven by additive fractional Brownian motion. It allows us to establish smoothness and Gaussian-type upper and lower bounds for both the non-stationary and the stationary density. While the stationary density has not been studied in any previous works, the non-stationary density was the subject of multiple articles by Baudoin, Hairer, Nualart, Ouyang, Pillai, Tindel, among others. The common theme of all of these works is to obtain the results through bounds on the Malliavin derivative. The main disadvantage of this approach lies in the non-optimal regularity conditions on the SDE's coefficients. In the case of additive noise, the equation is known to be well-posed if the drift is merely sublinear and measurable (resp. Hölder continuous). Relying entirely on classical methods of stochastic analysis (avoiding any Malliavin calculus), we prove the aforementioned Gaussian-type bounds under optimal regularity conditions.

The talk is based on a joint work with Xue-Mei Li and Fabien Panloup.

 

2:45 Thomas Tendron

A central limit theorem for a spatial logistic branching process in the slow coalescence regime

We study the scaling limits of a spatial population dynamics model which describes the sizes of colonies located on the integer lattice, and allows for branching, coalescence in the form of local pairwise competition, and migration. When started near the local equilibrium, the rates of branching and coalescence in the particle system are both linear in the local population size - we say that the coalescence is slow. We identify a rescaling of the equilibrium fluctuations process under which it converges to an infinite dimensional Ornstein-Uhlenbeck process with alpha-stable driving noise if the offspring distribution lies in the domain of attraction of an alpha-stable law with alpha between one and two.

3:30 Break

4:00-5:30 Careers Discussion

Dr Katia Babbar

Immersive Finance, Founder, and Oxford Mathematics, Visiting Lecturer in Mathematical Finance

Professor Coralia Cartis

Oxford Mathematics, Professor of Numerical Optimisation

Dr Robert Leese

Smith Institute, Chief Technical Officer

Dr Alisdair Wallis

Tesco, Data Science Manager

Fri, 28 Oct 2022
14:30
Imperial College

CDT in Mathematics of Random Systems October Workshop 2022

Dr Cris Salvi, Will Turner & Yihuang (Ross) Zhang
(University of Oxford and Imperial College London)
Abstract

2:30-3:00 Will Turner (CDT Student, Imperial College London)

Topologies on unparameterised path space

The signature of a path is a non-commutative exponential introduced by K.T. Chen in the 1950s, and appears as a central object in the theory of rough paths developed by T. Lyons in the 1990s. For continuous paths of bounded variation, the signature may be realised as a sequence of iterated integrals, which provides a succinct summary for multimodal, irregularly sampled, time-ordered data. The terms in the signature act as an analogue to monomials for finite dimensional data: linear functionals on the signature uniformly approximate any compactly supported continuous function on unparameterised path space (Levin, Lyons, Ni 2013). Selection of a suitable topology on the space of unparameterised paths is then key to the practical use of this approximation theory. We present new results on the properties of several candidate topologies for this space. If time permits, we will relate these results to two classical models: the fixed-time solution of a controlled differential equation, and the expected signature model of Levin, Lyons, and Ni. This is joint work with Thomas Cass.
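As a small illustration of signatures as iterated integrals, the following sketch computes the first two signature levels of a piecewise-linear path, for which the iterated-integral formulas reduce to finite sums over segment increments (exact by Chen's relation). The sample path is arbitrary.

```python
# Minimal sketch of signatures as iterated integrals: the first two signature
# levels of a piecewise-linear path reduce to finite sums over segment
# increments (exact by Chen's relation). The sample path is arbitrary.
import numpy as np

def signature_levels_1_2(path):
    """path: (n_points, d) array. Returns the level-1 and level-2 signature terms."""
    inc = np.diff(path, axis=0)              # increment of each linear segment
    level1 = inc.sum(axis=0)                 # S^i = total increment
    before = np.cumsum(inc, axis=0) - inc    # sum of increments strictly before each segment
    # S^{ij} = sum_{k<l} inc_k^i inc_l^j + (1/2) sum_k inc_k^i inc_k^j
    level2 = before.T @ inc + 0.5 * inc.T @ inc
    return level1, level2

path = np.array([[0.0, 0.0], [1.0, 0.5], [1.5, 2.0], [0.5, 2.5]])
S1, S2 = signature_levels_1_2(path)
print(S1)
print(S2)
```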


3:05-3:35 Ross Zhang (CDT Student, University of Oxford)

Random vortex dynamics via functional stochastic differential equations

The talk focuses on the representation of the three-dimensional (3D) Navier-Stokes equations by a random vortex system. This new system could give us new numerical schemes to efficiently approximate the 3D incompressible fluid flows by Monte Carlo simulations. Compared with the 2D Navier-Stokes equation, the difficulty of the 3D Navier-Stokes equation lies in the stretching of vorticity. To handle the stretching term, a system of stochastic differential equations is coupled with a functional ordinary differential equation in the 3D random vortex system. Two main tools are developed to derive the new system: the first is the investigation of the pinned diffusion measure, which describes the conditional distribution of a time-reversed diffusion, and the second is a forward-type Feynman-Kac formula for nonlinear PDEs, which utilizes the pinned diffusion measure to delicately overcome the time reversal issue in the PDE. Although the main focus of the research is the Navier-Stokes equation, the tools developed in this research are quite general. They could be applied to other nonlinear PDEs as well, thereby providing corresponding numerical schemes.


3:40-4:25 Dr Cris Salvi (Imperial College London)

Signature kernel methods

Kernel methods provide a rich and elegant framework for a variety of learning tasks including supervised learning, hypothesis testing, Bayesian inference, generative modelling and scientific computing. Sequentially ordered information often arrives in the form of complex streams taking values in non-trivial ambient spaces (e.g. a video is a sequence of images). In these situations, the design of appropriate kernels is a notably challenging task. In this talk, I will outline how rough path theory, a modern mathematical framework for describing complex evolving systems, allows one to construct a family of characteristic kernels on path space known as signature kernels. I will then present how signature kernels can be used to develop a variety of algorithms such as two-sample hypothesis tests and (conditional) independence tests for stochastic processes, generative models for time series, and numerical methods for path-dependent PDEs.
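To give a flavour of how signature kernels can be evaluated in practice, the sketch below discretises the Goursat-type PDE satisfied by the signature kernel of two paths using a simple first-order scheme. This is an illustrative discretisation, not the refined scheme of Salvi et al. (2021), and the example paths are arbitrary.

```python
# Illustrative computation of a signature kernel: the kernel k(s, t) between two
# paths solves a Goursat-type PDE  d^2 k / ds dt = <dx_s, dy_t> k  with boundary
# value 1, discretised here with a simple first-order scheme (not the refined
# scheme of Salvi et al. 2021). The example paths are arbitrary.
import numpy as np

def signature_kernel(x, y):
    """x: (m, d) path, y: (n, d) path; returns an approximation of the signature kernel."""
    dx, dy = np.diff(x, axis=0), np.diff(y, axis=0)
    inner = dx @ dy.T                      # <dx_i, dy_j> over every grid cell
    K = np.ones((inner.shape[0] + 1, inner.shape[1] + 1))   # boundary condition k = 1
    for i in range(inner.shape[0]):
        for j in range(inner.shape[1]):
            K[i + 1, j + 1] = K[i + 1, j] + K[i, j + 1] + (inner[i, j] - 1.0) * K[i, j]
    return K[-1, -1]

t = np.linspace(0.0, 1.0, 50)[:, None]
x = np.hstack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])   # a circle
y = np.hstack([t, t**2])                                        # a parabola
print(signature_kernel(x, y))
```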


4:30 Refreshments

 

Fri, 17 Jun 2022

14:00 - 17:00
Large Lecture Theatre, Department of Statistics, University of Oxford

CDT in Mathematics of Random Systems June Workshop 2022

Ziheng Wang, Professor Ian Melbourne, Dr Sara Franceschelli
Further Information

Please contact @email for remote viewing details

Abstract

2:00 Ziheng Wang, EPSRC CDT in Mathematics of Random Systems Student

Continuous-time stochastic gradient descent for optimizing over the stationary distribution of stochastic differential equations

Abstract: We develop a new continuous-time stochastic gradient descent method for optimizing over the stationary distribution of stochastic differential equation (SDE) models. The algorithm continuously updates the SDE model's parameters using a stochastic estimate for the gradient of the stationary distribution. The gradient estimate satisfies an SDE and is simultaneously updated, asymptotically converging to the direction of steepest descent. We rigorously prove convergence of our online algorithm for dissipative SDE models and present numerical results for other nonlinear examples. The proof requires analysis of the fluctuations of the parameter evolution around the direction of steepest descent. Bounds on the fluctuations are challenging to obtain due to the online nature of the algorithm (e.g., the stationary distribution will continuously change as the parameters change). We prove bounds for the solutions of a new class of Poisson partial differential equations, which are then used to analyze the parameter fluctuations in the algorithm.
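To convey the flavour of such an online scheme, here is a toy illustration, not the algorithm of the talk: for a one-dimensional Ornstein-Uhlenbeck-type model, the state, a forward-sensitivity (tangent) process serving as the gradient estimate, and the parameter are all updated simultaneously along a single trajectory. The model, objective and step sizes are invented.

```python
# Toy illustration of online (continuous-time) stochastic gradient descent for
# an SDE parameter: state, tangent process (gradient estimate) and parameter
# are updated simultaneously along one trajectory. Model, objective and step
# sizes are invented; this is not the algorithm of the talk.
import numpy as np

rng = np.random.default_rng(3)

target = 2.0          # we want the stationary mean of X to match this value
theta = -1.0          # model parameter: dX_t = -(X_t - theta) dt + sigma dW_t
sigma, dt, alpha = 0.5, 1e-3, 0.1
n_steps = 1_000_000

x, y = 0.0, 0.0       # state x and its sensitivity y ~ dx/dtheta (tangent process)
dW = np.sqrt(dt) * rng.standard_normal(n_steps)
for k in range(n_steps):
    grad = 2.0 * (x - target) * y            # stochastic gradient of the running cost (x - target)^2
    theta -= alpha * grad * dt               # continuous-time gradient descent, discretised
    x += -(x - theta) * dt + sigma * dW[k]   # Euler-Maruyama step for the SDE
    y += (-y + 1.0) * dt                     # tangent: dY = (d_x f) Y dt + (d_theta f) dt

print(round(theta, 3))   # approaches the target stationary mean, here 2.0
```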

2:45 Ian Melbourne,  Professor of Mathematics, University of Warwick

Interpretation of stochastic integrals, and the Lévy area

Abstract: An important question in stochastic analysis is the appropriate interpretation of stochastic integrals. The classical Wong-Zakai theorem gives sufficient conditions under which smooth integrals converge to Stratonovich stochastic integrals. The conditions are automatic in one dimension, but in higher dimensions it is necessary to take account of corrections stemming from the Lévy area. The first part of the talk covers work with Kelly (2016), where we justified the Lévy area correction for large classes of smooth systems, bypassing any stochastic modelling assumptions. The second part of the talk addresses a much less studied question: is the Lévy area zero or nonzero for systems of physical interest, e.g. Hamiltonian time-reversible systems? In recent work with Gottwald, we classify (and clarify) the situations where such structure forces the Lévy area to vanish. The conclusion of our work is that typically the Lévy area correction is nonzero.
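For reference, the Lévy area of a two-dimensional path is the antisymmetric part of its second-level iterated integrals; a standard definition (not the talk's precise statement) is:

```latex
% Standard definition (not the talk's precise statement): the Lévy area of a
% two-dimensional path (X^1, X^2) is the antisymmetric part of its second-level
% iterated integrals.
\[
  \mathbb{A}_{0,t}
  \;=\; \frac{1}{2}\int_0^t \big( X^1_s - X^1_0 \big)\,\mathrm{d}X^2_s
  \;-\; \frac{1}{2}\int_0^t \big( X^2_s - X^2_0 \big)\,\mathrm{d}X^1_s
  \;=\; \frac{1}{2}\big( S^{12}_{0,t} - S^{21}_{0,t} \big),
  \qquad
  S^{ij}_{0,t} = \int_0^t\!\!\int_0^s \mathrm{d}X^i_r\,\mathrm{d}X^j_s .
\]
```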

3:45 Break

4:15 Sara Franceschelli, Associate Professor,  École Normale Supérieure de Lyon

When is a model a good model? Epistemological perspectives on mathematical modelling

When is a model a good model? Must it represent a specific target system? Allow us to make predictions? Provide an explanation for observed behaviors? After a brief survey of general epistemological questions on modelling, I will consider examples of mathematical modelling in physics and biology from the perspective of dynamical systems theory. I will first show that, even if it has been little noticed by philosophers, dynamical systems theory itself as a mathematical theory has been a source of questions and criteria for assessing the goodness of a model (notions of stability, genericity, structural stability). I will then discuss the theoretical fruitfulness of arguments of (in)stability in the mathematical modelling of morphogenesis.

 

Tue, 26 Apr 2022

13:30 - 15:00
Imperial College

CDT in Mathematics of Random Systems April Workshop 2022

Julian Meier, Omer Karin
(University of Oxford/Imperial College London)
Further Information

Please contact @email for remote viewing details

Abstract

1:30pm Julian Meier, University of Oxford

Interacting-Particle Systems with Elastic Boundaries and Nonlinear SPDEs

We study interacting particle systems on the positive half-line. When we impose an elastic boundary at zero, the particle systems give rise to nonlinear SPDEs with irregular boundaries. We show existence and uniqueness of solutions to these equations. To deal with the nonlinearity we establish a probabilistic representation of solutions and regularity in L2.

2:15pm Dr Omer Karin, Imperial College London

Mathematical Principles of Biological Regulation

Modern research in the life sciences has developed remarkable methods to measure and manipulate biological systems. We now have detailed knowledge of the molecular interactions inside cells and the way cells communicate with each other. Yet many of the most fundamental questions (such as how do cells choose and maintain their identities? how is development coordinated? why do homeostatic processes fail in disease?) remain elusive, as addressing them requires a good understanding of complex dynamical processes. In this talk, I will present a mathematical approach for tackling these questions, which emphasises the role of control and of emergent properties. We will explore the application of this approach to various questions in biology and biomedicine, and highlight important future directions.

 

Wed, 02 Mar 2022

13:00 - 16:00
L4

March 2022 CDT in Maths of Random Systems Workshop

Jonathan Tam, Remy Messadene, Julien Berestycki
(University of Oxford and Imperial College London)
Further Information

Please contact @email for remote link

Abstract

1pm Jonathan Tam: Markov decision processes with observation costs

We present a framework for a controlled Markov chain where the state of the chain is only revealed at chosen observation times, and at a cost. Optimal strategies therefore involve the choice of observation times as well as the subsequent control values. We show that the corresponding value function satisfies a dynamic programming principle, which leads to a system of quasi-variational inequalities (QVIs). Next, we give an extension where the model parameters are not known a priori but are inferred from the costly observations by Bayesian updates. We then prove a comparison principle for a larger class of QVIs, which implies uniqueness of solutions to our proposed problem. We utilise penalty methods to obtain arbitrarily accurate solutions. Finally, we perform numerical experiments on three applications which illustrate our framework.

Preprint at https://arxiv.org/abs/2201.07908
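For orientation, quasi-variational inequalities of the kind mentioned in the abstract typically take the following schematic form in impulse- or observation-control problems; this is an illustrative template, not the specific system derived in the paper.

```latex
% Schematic quasi-variational inequality for a problem with costly interventions
% (here, costly observations); an illustrative template only, not the specific
% system of QVIs derived in the paper.
\[
  \min\Big\{\, -\partial_t v(t,x) \;-\; \mathcal{L}\, v(t,x) \;-\; f(t,x),\;\;
               v(t,x) \;-\; \mathcal{M}\, v(t,x) \,\Big\} \;=\; 0,
\]
where $\mathcal{L}$ is the generator of the state dynamics between observations,
$f$ is a running cost, and $\mathcal{M}$ is an intervention operator that charges
the observation cost and restarts the problem from the newly observed state.
```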

 

1.45pm Remy Messadene: signature asymptotics, empirical processes, and optimal transport

Rough path theory provides one with the notion of signature, a graded family of tensors which characterise, up to a negligible equivalence class, an ordered stream of vector-valued data. In the last few years, use of the signature has gained traction in time-series analysis, machine learning, deep learning and more recently in kernel methods. In this work, we lay down the theoretical foundations for a connection between signature asymptotics, the theory of empirical processes, and Wasserstein distances, opening up the landscape and toolkit of the second and third in the study of the first. Our main contribution is to show that the Hambly-Lyons limit can be reinterpreted as a statement about the asymptotic behaviour of Wasserstein distances between two independent empirical measures of samples from the same underlying distribution. In the setting studied here, these measures are derived from samples from a probability distribution which is determined by geometrical properties of the underlying path.

 

2.30-3.00 Tea & coffee in the mezzanine

 

3-4pm Julien Berestycki: Extremal point process of the branching Brownian motion

 

 

 

Wed, 02 Feb 2022

13:15 - 15:15
Imperial College

CDT in Mathematics of Random Systems February Workshop

Alessandro Micheli, Terence Tsui, Dr Barbara Bravi
(Imperial College London and University of Oxford)
Further Information

For remote access please contact lydia.noa@imperial.ac.uk

13:20-13:50 Alessandro Micheli (CDT Student, Imperial College London)
Closed-loop Nash competition for liquidity

 

13:50-14:20 Terence Tsui (CDT Student, University of Oxford)

Uncovering Genealogies of Populations with Local Density Regulation

 

14:25-15:10 Dr Barbara Bravi (Lecturer in Biomathematics, Department of Mathematics, Imperial College London)

Path integral approaches to model reduction in biochemical networks

Wed, 08 Dec 2021

13:45 - 16:30
L2

December CDT in Mathematics of Random Systems Seminars

Lancelot Da Costa, Zheneng Xie, Professor Terry Lyons
(Imperial College London and University of Oxford)
Further Information

Please email @email for the link to view talks remotely.

1:45-2:30 Lancelot Da Costa - Adaptive agents through active inference
2:30-3:15 Zheneng Xie - Scaling Limits of Random Graphs
3:15-3:30 Break
3:30-4:30 Professor Terry Lyons - From Mathematics to Data Science and Back

Abstract

Adaptive agents through active inference: The main fields of research that are used to model and realise adaptive agents are optimal control, reinforcement learning and active inference. Active inference is a probabilistic description of adaptive agents that is relatively less known to mathematicians, as it originated from neuroscience in the last decade. This talk presents the mathematical underpinnings of active inference, starting from fundamental considerations about agents that maintain their structural integrity in the face of environmental perturbations. Through this, we derive a probability distribution over actions that describes decision-making under uncertainty in adaptive agents. Interestingly, this distribution has an information-geometric structure, combining, for instance, drives for exploration and exploitation, which may yield a principled answer to the exploration-exploitation trade-off. Preserving this geometric structure makes it possible to realise adaptive agents in practice. We illustrate their behaviour with simulation examples and empirical comparisons with reinforcement learning.

Scaling Limits of Random Graphs: The scaling limit of directed random graphs remains relatively unexplored compared to their undirected counterparts. In contrast, many real-world networks, such as links on the world wide web, financial transactions and “follows” on Twitter, are inherently directed. Previous work by Goldschmidt and Stephenson established the scaling limit for the strongly connected components (SCCs) of the Erdős-Rényi model in the critical window when appropriately rescaled. In this talk, we present a result showing that the SCCs of another class of critical random directed graphs converge, when rescaled, to the same limit. Central to the proof is an exploration of the directed graph and subsequent encodings of the exploration as real-valued random processes. We aim to present this exploration algorithm and other key components of the proof.

From Mathematics to Data Science and Back: We give an overview of the interaction between rough path theory and data science at the current time.
 

 

Fri, 19 Nov 2021

15:00 - 17:00
Imperial College

November CDT in Maths of Random Systems Seminars

Felix Prenzel, Benedikt Petko & Dante Kalise
(Imperial College London and University of Oxford)
Further Information

Please email @email for the link to view talks remotely.

Abstract

High-dimensional approximation of Hamilton-Jacobi-Bellman PDEs – architectures, algorithms and applications

Hamilton-Jacobi Partial Differential Equations (HJ PDEs) are a central object in optimal control and differential games, enabling the computation of robust controls in feedback form. High-dimensional HJ PDEs naturally arise in the feedback synthesis for high-dimensional control systems, and their numerical solution must be sought outside the framework provided by standard grid-based discretizations. In this talk, I will discuss the construction of novel computational methods for approximating high-dimensional HJ PDEs, based on tensor decompositions, polynomial approximation, and deep neural networks.
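As a rough illustration of the deep-learning route, the sketch below trains a neural-network value function by minimising the HJB residual at sampled states for a toy infinite-horizon linear-quadratic problem. The architecture, sampling and loss weights are illustrative choices, not the methods presented in the talk.

```python
# Illustrative sketch: approximate the value function of a toy infinite-horizon
# linear-quadratic problem with a neural network by minimising the HJB residual
# at randomly sampled states. Architecture, sampling and loss weights are
# illustrative assumptions, not the methods of the talk.
import torch

torch.manual_seed(0)
d, m = 6, 2
A = 0.1 * torch.randn(d, d)
B = torch.randn(d, m)
Q, R = torch.eye(d), torch.eye(m)
R_inv = torch.inverse(R)

V = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for step in range(2000):
    x = 4.0 * torch.rand(256, d) - 2.0                            # collocation points in [-2, 2]^d
    x.requires_grad_(True)
    grad_v = torch.autograd.grad(V(x).sum(), x, create_graph=True)[0]   # grad V(x)
    u = -0.5 * grad_v @ B @ R_inv                                 # pointwise minimiser of the Hamiltonian
    drift = x @ A.T + u @ B.T
    residual = (grad_v * drift).sum(1) + (x @ Q * x).sum(1) + (u @ R * u).sum(1)
    loss = residual.pow(2).mean() + V(torch.zeros(1, d)).pow(2).mean()  # HJB residual + V(0) = 0
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))
```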