The Brownian loop measure on Riemann surfaces and applications to length spectra
Planar random growth processes occur widely in the physical world. Examples include diffusion-limited aggregation (DLA) for mineral deposition and the Eden model for biological cell growth. One approach to mathematically modelling such processes is to represent the randomly growing clusters as compositions of conformal mappings. In 1998, Hastings and Levitov proposed one such family of models, which includes versions of the physical processes described above. An intriguing property of their model is a conjectured phase transition between models that converge to growing disks, and 'turbulent' non-disk-like models. In this talk I will describe a natural generalisation of the Hastings-Levitov family in which the location of each successive particle is distributed according to the density of harmonic measure on the cluster boundary, raised to some power. In recent joint work with Norris and Silvestri, we show that when this power lies within a particular range, the macroscopic shape of the cluster converges to a disk, but that as the power approaches the edge of this range the fluctuations approach a critical point, which is a limit of stability. This phase transition in fluctuations can be interpreted as the beginnings of a macroscopic phase transition from disks to non-disks analogous to that present in the Hastings-Levitov family.
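As a toy illustration of the reweighted attachment mechanism (this is a crude lattice caricature, not the conformal-mapping construction of the talk; the walker counts, sizes, and the exponent eta are all illustrative), one can estimate harmonic measure on a cluster's perimeter with random walkers launched from far away, raise it to a power, and attach the next particle accordingly:

```python
import math
import random

random.seed(0)

def perimeter(cluster):
    """Empty lattice sites adjacent to the cluster."""
    out = set()
    for (x, y) in cluster:
        for s in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if s not in cluster:
                out.add(s)
    return out

def hit_counts(cluster, radius, n_walkers=200, max_steps=4000):
    """Estimate harmonic measure on the perimeter: launch walkers from a
    distant circle and record where they first touch the cluster."""
    counts = {s: 0 for s in perimeter(cluster)}
    for _ in range(n_walkers):
        a = random.uniform(0.0, 2.0 * math.pi)
        x, y = round(radius * math.cos(a)), round(radius * math.sin(a))
        for _ in range(max_steps):
            if (x, y) in counts:
                counts[(x, y)] += 1
                break
            if x * x + y * y > 4 * radius * radius:
                break                       # walker escaped; discard it
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x, y = x + dx, y + dy
    return counts

def grow(cluster, n_particles, eta):
    """Attach sites with probability ~ (estimated harmonic measure)^eta."""
    for _ in range(n_particles):
        ext = max(max(abs(x), abs(y)) for (x, y) in cluster)
        counts = hit_counts(cluster, radius=2 * ext + 8)
        sites = list(counts)
        w = [counts[s] ** eta for s in sites]
        if sum(w) == 0:
            w = [1.0] * len(sites)          # no walker hit: fall back to uniform
        cluster.add(random.choices(sites, weights=w)[0])
    return cluster

cluster = grow({(0, 0)}, n_particles=15, eta=1.0)
print(len(cluster))   # 16: the seed plus 15 attached sites
```

Here eta = 1 mimics DLA-like growth; eta = 0 recovers Eden-like uniform attachment.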
Optimal transport theory is a natural way to define both a distance and a geometry on the space of probability measures. In settings like graphical causal models (also called Bayes networks or belief networks), the space of probability measures is enriched by an information structure modeled by a directed graph. This talk introduces a variant of optimal transport including such a graphical information structure. The goal is to provide a concept of optimal transport whose topological and geometric properties are well suited for structural causal models. In this regard, we show that the resulting concept of Wasserstein distance can be used to control the difference between average treatment effects under different distributions, and is geometrically suitable to interpolate between different structural causal models.
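The way a Wasserstein distance controls differences of expectations can already be seen in the plain one-dimensional case (this sketch uses only the classical, non-graphical W1 on made-up Gaussian "outcome" samples, not the adapted causal variant of the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

def w1_empirical(x, y):
    """1-Wasserstein distance between two equal-size empirical measures on
    the line: the optimal coupling matches the sorted samples."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

# toy "outcome" samples under two different models
p = rng.normal(0.0, 1.0, 5000)
q = rng.normal(0.3, 1.0, 5000)

w1 = w1_empirical(p, q)
gap = abs(float(p.mean() - q.mean()))
print(w1, gap)   # the difference of means is bounded by W1
```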
This is a joint seminar with OxPDE.
In this talk we study wave propagation in random media using multiscale analysis.
We show that the wavefield can be described by a stochastic partial differential equation.
We can then address the following physical conjecture: for large propagation distances, the wavefield has Gaussian statistics, mean zero, and second-order moments determined by radiative transfer theory.
The results for the first two moments can be proved under general circumstances.
The Gaussian conjecture for the statistical distribution of the wavefield can be proved in some propagation regimes, but it turns out to be wrong in other regimes.
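A minimal caricature of the Gaussian conjecture (not the multiscale analysis itself): model the wavefield at one point as a normalised sum of many independent random phasors and check that it is approximately a mean-zero complex Gaussian with unit second moment:

```python
import numpy as np

rng = np.random.default_rng(0)

# the field at one point as a normalised sum of many scattered
# contributions with uniformly random phases
n_terms, n_realisations = 500, 5000
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_realisations, n_terms))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_terms)

mean = field.mean()
second = np.mean(np.abs(field)**2)
fourth_ratio = np.mean(np.abs(field)**4) / second**2
print(abs(mean), second, fourth_ratio)   # ~0, ~1, ~2 (complex Gaussian)
```

The fourth-moment ratio of 2 is the complex-Gaussian signature; in the regimes where the conjecture fails, this ratio would deviate from 2.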
We consider the general framework of distributionally robust optimization under a martingale restriction. We provide explicit expressions for model risk sensitivities in this context by considering deviations in the Wasserstein distance and the corresponding adapted one. We also extend the dual formulation to this context.
We consider the sharp interface limit problem for the 1D stochastic Allen-Cahn equation, and extend a classic result by Funaki to the full small noise regime. One interesting point is that the notion of "small noise" turns out to depend on the topology one uses. The main new idea in the proof is the construction of a series of functional correctors, which are designed to recursively cancel out potential divergences. At a technical level, in order to show these correctors are well behaved, we also develop a systematic decomposition of functional derivatives of the deterministic Allen-Cahn flow of all orders, which might be of independent interest.
Based on a joint work with Wenhao Zhao (EPFL) and Shuhan Zhou (PKU).
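A crude finite-difference sketch of the 1D stochastic Allen-Cahn equation with a single sharp interface (all parameter values here are illustrative; this shows only the basic equation, not the corrector construction of the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# u_t = u_xx + (u - u^3)/eps + sigma * (space-time white noise),
# explicit finite differences with Neumann boundary conditions
L_dom, n = 1.0, 200
dx = L_dom / n
dt = 0.2 * dx**2                 # stable for the explicit heat step
eps, sigma = 5e-3, 1e-3          # illustrative values
x = np.linspace(0.0, L_dom, n)
u = np.tanh((x - 0.5) / np.sqrt(2.0 * eps))   # one interface at x = 0.5

for _ in range(2000):
    up = np.concatenate(([u[0]], u, [u[-1]]))
    lap = (up[2:] - 2.0 * u + up[:-2]) / dx**2
    noise = rng.standard_normal(n) / np.sqrt(dx)   # grid white noise
    u = u + dt * (lap + (u - u**3) / eps) + sigma * np.sqrt(dt) * noise

print(u.min(), u.max())   # the profile stays near the two phases -1 and +1
```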
We will introduce the Quintic Ornstein-Uhlenbeck model that jointly calibrates SPX-VIX options, with a particular focus on its mathematical tractability, namely fast pricing of SPX options using Fourier techniques. Then, we will consider the more general class of stochastic volatility models where the dynamics of the volatility are given by a possibly infinite linear combination of the elements of the time-extended signature of a Brownian motion. First, we show that the model is remarkably universal, as it includes, but is not limited to, the celebrated Stein-Stein, Bergomi, and Heston models, together with some path-dependent variants. Second, we derive the joint characteristic functional of the log-price and integrated variance provided that some infinite-dimensional extended tensor algebra valued Riccati equation admits a solution. This allows us to price and (quadratically) hedge certain European and path-dependent options using Fourier inversion techniques. We highlight the efficiency and accuracy of these Fourier techniques in a comprehensive numerical study.
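Fourier pricing of the kind alluded to can be sketched in the simplest possible setting, the Black-Scholes model, where the characteristic function of the log-price is Gaussian; Gil-Pelaez inversion then recovers the closed-form call price (none of the Riccati machinery of the talk is needed for this toy case, and all parameter values are illustrative):

```python
import numpy as np
from math import log, sqrt, exp, erf, pi

S0, K, r, vol, T = 100.0, 105.0, 0.02, 0.2, 1.0   # illustrative parameters

# characteristic function of log S_T in the Black-Scholes model
m, s2 = log(S0) + (r - 0.5 * vol**2) * T, vol**2 * T
phi = lambda u: np.exp(1j * u * m - 0.5 * s2 * u**2)

def trap(f_vals, u):
    """Trapezoidal rule on the grid u."""
    return float(np.sum((f_vals[1:] + f_vals[:-1]) * np.diff(u)) / 2.0)

# Gil-Pelaez inversion for the two exercise probabilities
u, k = np.linspace(1e-6, 200.0, 200_000), log(K)
Pi2 = 0.5 + trap(np.real(np.exp(-1j * u * k) * phi(u) / (1j * u)), u) / pi
Pi1 = 0.5 + trap(np.real(np.exp(-1j * u * k) * phi(u - 1j) / (1j * u * phi(-1j))), u) / pi
call_fourier = S0 * Pi1 - K * exp(-r * T) * Pi2

# closed-form Black-Scholes price for comparison
N = lambda y: 0.5 * (1.0 + erf(y / sqrt(2.0)))
d1 = (log(S0 / K) + (r + 0.5 * vol**2) * T) / (vol * sqrt(T))
call_bs = S0 * N(d1) - K * exp(-r * T) * N(d1 - vol * sqrt(T))

print(call_fourier, call_bs)   # the two prices agree
```

In richer models one only swaps in the model's characteristic function; the inversion step is unchanged.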
Inspired by questions concerning the evolution of phase fields, we study the Allen-Cahn equation in dimension 2 with white noise initial datum. In a weak coupling regime, where the nonlinearity is damped in relation to the smoothing of the initial condition, we prove Gaussian fluctuations. The effective variance that appears can be described as the solution to an ODE. Our proof builds on a Wild expansion of the solution, which is controlled through precise combinatorial estimates. Joint works with Simon Gabriel, Martin Hairer, Khoa Lê and Nikos Zygouras.
Many systems in the applied sciences are made of a large number of particles. One is often not interested in the detailed behaviour of each particle but rather in the collective behaviour of the group. An established methodology in statistical mechanics and kinetic theory allows one to study the limit as the number of particles in the system N tends to infinity and to obtain a (low dimensional) PDE for the evolution of the density of the particles. The limiting PDE is a non-linear equation, where the non-linearity has a specific structure and is called a McKean-Vlasov nonlinearity. Even if the particles evolve according to a stochastic differential equation, the limiting equation is deterministic, as long as the particles are subject to independent sources of noise. If the particles are subject to the same noise (common noise) then the limit is given by a Stochastic Partial Differential Equation (SPDE). In the latter case the limiting SPDE is essentially the McKean-Vlasov PDE plus noise; the noise is moreover multiplicative and has gradient structure. One may then ask whether it is possible to obtain McKean-Vlasov SPDEs with additive noise from particle systems. We will explain how to address this question, by studying limits of weighted particle systems.
This is a joint work with L. Angeli, J. Barre, D. Crisan, M. Kolodziejzik.
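The independent-noise case can be illustrated with the simplest mean-field toy model, particles attracted to their empirical mean (this is an unweighted, made-up example, not the weighted systems of the talk):

```python
import numpy as np

rng = np.random.default_rng(2)

# N particles attracted to their empirical mean, each with its own noise:
#   dX^i_t = -(X^i_t - mean_N(X_t)) dt + dB^i_t
N, dt, steps = 2000, 0.01, 2000
X = 2.0 * rng.standard_normal(N)
for _ in range(steps):
    X += -(X - X.mean()) * dt + np.sqrt(dt) * rng.standard_normal(N)

# with independent noises the empirical law approaches the deterministic
# McKean-Vlasov limit, here Gaussian with stationary variance 1/2
print(X.mean(), X.var())
```

Replacing the independent increments by one common Brownian increment shared by all particles would instead leave a randomly fluctuating limit, the SPDE regime described above.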
Rough path theory provides a framework for the study of nonlinear systems driven by highly oscillatory (deterministic) signals. The corresponding analysis is inherently distinct from that of classical stochastic calculus, and neither theory alone is able to satisfactorily handle hybrid systems driven by both rough and stochastic noise. The introduction of the stochastic sewing lemma (Khoa Lê, 2020) has paved the way for a theory which can efficiently handle such hybrid systems. In this talk, we will discuss how this can be done in a general setting which allows for jump discontinuities in both sources of noise.
This presentation focuses on the study of the regularity of random wavelet series. We first study their membership in certain function spaces, and we compare these results with long-established results on random Fourier series. Next, we show how the study of random wavelet series leads to precise pointwise regularity properties of processes such as fractional Brownian motion. Additionally, we explore how these series help construct Gaussian processes with random Hölder exponents.
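A minimal sketch of a random wavelet series, assuming the simplest choice of wavelet (Haar) and Gaussian coefficients whose decay rate 2^{-jH} encodes the prescribed regularity; the truncation depth and the value of H are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def haar(t):
    """Haar mother wavelet supported on [0, 1)."""
    return (np.where((t >= 0.0) & (t < 0.5), 1.0, 0.0)
            - np.where((t >= 0.5) & (t < 1.0), 1.0, 0.0))

H = 0.7                      # prescribed regularity exponent, 0 < H < 1
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
f = np.zeros_like(t)
for j in range(10):
    xi = rng.standard_normal(2**j)     # Gaussian wavelet coefficients
    for k in range(2**j):
        # coefficient decay 2^{-jH} corresponds to Hoelder regularity ~ H
        f += 2.0**(-j * H) * xi[k] * haar(2.0**j * t - k)

print(f.shape)
```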
One way to capture both the elastic and stochastic reaction of purchases to price is through a model where sellers control the intensity of a counting process, representing the number of sales thus far. The intensity describes the probabilistic likelihood of a sale, and is a decreasing function of the price a seller sets. A classical model for ticket pricing, which assumes a single seller and an infinite time horizon, is due to Gallego and van Ryzin (1994), and it has been widely used by airlines, for instance. Extending to more realistic settings where there are multiple sellers, with finite inventories, in competition over a finite time horizon is more complicated both mathematically and computationally. We discuss some dynamic games of this type, from static to two-player to the associated mean field game, with some numerical and existence-uniqueness results.
Based on works with Andrew Ledvina and with Emre Parmaksiz.
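The single-seller setup can be sketched directly: sales arrive as a Poisson process whose intensity decreases with the posted price, stopped at the horizon or when inventory runs out (the exponential demand curve and all numbers here are illustrative choices, not the talk's calibrated model):

```python
import numpy as np

rng = np.random.default_rng(4)

# a single seller posts a price p; sales form a counting process whose
# intensity decreases with price: lam(p) = lam0 * exp(-alpha * p)
lam0, alpha, T, inventory = 10.0, 0.5, 1.0, 5   # illustrative values

def simulate_revenue(price):
    """One season: Poisson sales at intensity lam(price), stopped at the
    horizon T or when the finite inventory runs out."""
    lam = lam0 * np.exp(-alpha * price)
    t, sold, revenue = 0.0, 0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)      # next inter-sale time
        if t > T or sold >= inventory:
            break
        sold += 1
        revenue += price
    return revenue

# Monte Carlo comparison of two static price policies
revs = {p: float(np.mean([simulate_revenue(p) for _ in range(5000)]))
        for p in (1.0, 3.0)}
print(revs)
```

A dynamic policy would let the price depend on time and remaining inventory; the game versions of the talk additionally couple several such sellers through demand.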
Fluctuating hydrodynamics provides a framework for approximating density fluctuations in interacting particle systems by suitable SPDEs. The Dean-Kawasaki equation - a strongly singular SPDE - is perhaps the most basic equation of fluctuating hydrodynamics; it has been proposed in the physics literature to describe the fluctuations of the density of N diffusing weakly interacting particles in the regime of large particle numbers N. The strongly singular nature of the Dean-Kawasaki equation presents a substantial challenge for both its analysis and its rigorous mathematical justification: Besides being non-renormalizable by approaches like regularity structures, it has recently been shown to not even admit nontrivial martingale solutions.
In this talk, we give an overview of recent quantitative results for the justification of fluctuating hydrodynamics models. In particular, we give an interpretation of the Dean-Kawasaki equation as a "recipe" for accurate and efficient numerical simulations of the density fluctuations for weakly interacting diffusing particles, allowing for an error that is of arbitrarily high order in the inverse particle number.
Based on joint works with Federico Cornalba, Jonas Ingmanns, and Claudia Raithel.
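The basic object, density fluctuations of order N^{-1/2}, can be seen in the simplest case of independent Brownian particles (the test function and all sizes are illustrative; this shows only the scaling, not the Dean-Kawasaki equation itself):

```python
import numpy as np

rng = np.random.default_rng(5)

def fluct_std(N, T=1.0, reps=400):
    """Std over independent runs of <mu_N, phi> with phi = sin, where mu_N
    is the empirical measure of N independent Brownian motions at time T."""
    vals = [np.mean(np.sin(np.sqrt(T) * rng.standard_normal(N)))
            for _ in range(reps)]
    return float(np.std(vals))

stds = {N: fluct_std(N) for N in (100, 400, 1600)}
print(stds)   # shrinks roughly like N^{-1/2}
```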
We propose a novel generative model for time series based on the Schrödinger bridge (SB) approach. It consists of the entropic interpolation, via optimal transport, between a reference probability measure on path space and a target measure consistent with the joint data distribution of the time series. The solution is characterized by a stochastic differential equation on a finite horizon with a path-dependent drift function, hence respecting the temporal dynamics of the time series distribution. We estimate the drift function from data samples by nonparametric methods, e.g. kernel regression, and the simulation of the SB diffusion yields new synthetic data samples of the time series. The performance of our generative model is evaluated through a series of numerical experiments. First, we test it on autoregressive models, a GARCH model, and the example of fractional Brownian motion, and measure the accuracy of our algorithm with marginal and temporal-dependence metrics, and predictive scores. Next, we use our SB-generated synthetic samples for an application to deep hedging on real data sets.
We consider stochastic differential equations (SDEs) driven by fractional Brownian motion with Hurst parameter less than 1/2. The drift is a measurable function of time and space which belongs to a certain Lebesgue space. In the subcritical regime, we show that a strong solution exists and is unique in the path-by-path sense. When the noise is formally replaced by a Brownian motion, our results correspond to the strong uniqueness result of Krylov and Roeckner (2005). Our methods forgo standard approaches in Markovian settings and utilize Lyons' rough path theory in conjunction with recently developed tools. Joint work with Toyomu Matsuda and Oleg Butkovsky.
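Such equations are easy to simulate even when they are hard to analyse; a minimal sketch with exact (Cholesky) fBm sampling and an Euler step, where the Hölder drift is an illustrative stand-in for the Lebesgue-space drifts of the talk:

```python
import numpy as np

rng = np.random.default_rng(7)

def fbm(n, H, T=1.0):
    """Fractional Brownian motion on a grid of n steps, simulated exactly
    via a Cholesky factorisation of its covariance (fine for small n)."""
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
                 - np.abs(t[:, None] - t[None, :])**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

# Euler scheme for dX_t = b(X_t) dt + dB^H_t with a non-Lipschitz drift
H, n = 0.3, 500                              # Hurst parameter < 1/2
b = lambda x: np.sign(x) * np.abs(x)**0.3    # merely Hoelder, illustrative
B, dt = fbm(n, H), 1.0 / n
X = np.zeros(n + 1)
for k in range(n):
    X[k + 1] = X[k] + b(X[k]) * dt + (B[k + 1] - B[k])

print(X[-1])
```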
Standard symmetric α-stable cylindrical processes in Hilbert spaces are the natural generalisation of the analogue processes in Euclidean spaces. However, like standard Brownian motions, standard symmetric α-stable processes in finite dimensions can only be generalised to infinite dimensional Hilbert spaces as cylindrical processes, i.e. processes in a generalised sense (of Gel’fand and Vilenkin (1964) or Segal (1954)) not attaining values in the underlying Hilbert space.
In this talk, we briefly introduce the theory of stochastic integrals with respect to standard symmetric α-stable cylindrical processes. As these processes exist only in the generalised sense, introducing a stochastic integral requires an approach different to the classical one by semi-martingale decomposition. The main result presented in this talk is the existence of a solution to an abstract evolution equation driven by a standard symmetric α-stable cylindrical process. The main tool for establishing this result is a Yosida approximation and an Itô formula for Hilbert space-valued semi-martingales where the martingale part is represented as an integral driven by cylindrical α-stable noise. While these tools are standard in stochastic analysis, due to the cylindrical nature of our noise, their application requires completely novel arguments and techniques.
In this talk, we will present a loop expansion for lattice gauge theories and its application to prove ultraviolet stability in the Abelian Higgs model. We will first describe this loop expansion and how it relates to earlier works of Brydges-Fröhlich-Seiler. We will then show how the expansion leads to a quantitative diamagnetic inequality, which in turn implies moment estimates, uniform in the lattice spacing, on the Hölder-Besov norm of the gauge field marginal of the Abelian Higgs lattice model. Based on "Gauge field marginal of an Abelian Higgs model", which is joint work with Ajay Chandra.
We consider a model of a network of interacting neurons based on jump processes. Briefly, the membrane potential $V^i_t$ of each individual neuron evolves according to a one-dimensional ODE. Neuron $i$ spikes at a rate which depends only on its membrane potential, $f(V^i_t)$. After a spike, $V^i_t$ is reset to a fixed value $V^{\mathrm{rest}}$. Simultaneously, the membrane potential of any (post-synaptic) neuron $j$ connected to neuron $i$ receives a kick of value $J^{i,j}$.
We study the limit (mean-field) equation obtained when the number of neurons goes to infinity. In this talk, we describe the long-time behaviour of the solution. Depending on the intensity of the interactions, we observe convergence of the distribution to a unique invariant measure (small interactions), or we characterize the occurrence of spontaneous oscillations for interactions in the neighbourhood of critical values.
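The finite-N dynamics can be sketched with a time-discretised thinning scheme (the leak ODE dV/dt = -V, the rate f(v) = 0.1 + v^2, the uniform kicks J/N, and all sizes are illustrative choices, not the talk's model specification):

```python
import numpy as np

rng = np.random.default_rng(8)

# time-discretised scheme: leak ODE between spikes, neuron i fires on
# [t, t+dt] with probability f(V_i) dt, is reset to V_rest, and every
# neuron receives a kick J/N per spike (mean-field scaling)
N, J, V_rest, dt, T = 500, 2.0, 0.0, 1e-3, 10.0
f = lambda v: 0.1 + v**2        # illustrative rate function
V = rng.uniform(0.5, 1.5, N)
spike_count = 0
for _ in range(int(T / dt)):
    V += -V * dt                             # membrane leak
    spiking = rng.random(N) < f(V) * dt      # which neurons fire
    k = int(spiking.sum())
    V += J * k / N                           # post-synaptic kicks
    V[spiking] = V_rest                      # resets
    spike_count += k

print(spike_count)
```

Tracking the population spike rate over time in such a simulation is one way to see the transition between a steady regime and self-sustained oscillations as J varies.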
We prove convergence results of the simulation of the density solution to the McKean-Vlasov equation, when the measure variable is in the drift. Our method builds upon adaptive nonparametric results in statistics that enable us to obtain a data-driven selection of the smoothing parameter in a kernel-type estimator. In particular, we give a generalised Bernstein inequality for Euler schemes with interacting particles and obtain sharp deviation inequalities for the estimated classical solution. We complete our theoretical results with a systematic numerical study and gather empirical evidence of the benefit of using high-order kernels and data-driven smoothing parameters. This is a joint work with M. Hoffmann.
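A minimal sketch of the two ingredients, an Euler scheme for interacting particles and a Gaussian kernel estimator of the density (with the classical fixed-rate bandwidth; the talk's data-driven bandwidth selection is not implemented here, and the linear interaction is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(9)

# Euler scheme for N interacting particles approximating the McKean-Vlasov
# dynamics dX_t = -(X_t - E[X_t]) dt + dB_t
N, dt, steps = 5000, 0.01, 500
X = rng.standard_normal(N)
for _ in range(steps):
    X += -(X - X.mean()) * dt + np.sqrt(dt) * rng.standard_normal(N)

def kde(grid, sample, h):
    """Gaussian kernel estimator of the density of the empirical measure."""
    K = np.exp(-(grid[:, None] - sample[None, :])**2 / (2.0 * h * h))
    return (K / (np.sqrt(2.0 * np.pi) * h)).mean(axis=1)

grid = np.linspace(-3.0, 3.0, 61)
h = N**(-1.0 / 5.0)      # classical fixed bandwidth rate (not data-driven)
dens = kde(grid, X, h)
mass = float(dens.sum() * (grid[1] - grid[0]))
print(mass)   # approximately 1
```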
Please join us from 1500-1530 for tea and coffee outside the lecture theatre before the talk.
Generating high-fidelity time series data using generative adversarial networks (GANs) remains a challenging task, as it is difficult to capture the temporal dependence of joint probability distributions induced by time-series data. To this end, a key step is the development of an effective discriminator to distinguish between time series distributions. In this talk, I will introduce the so-called PCF-GAN, a novel GAN that incorporates the path characteristic function (PCF) as the principled representation of time series distribution into the discriminator to enhance its generative performance. On the one hand, we establish theoretical foundations of the PCF distance by proving its characteristicity, boundedness, differentiability with respect to generator parameters, and weak continuity, which ensure the stability and feasibility of training the PCF-GAN. On the other hand, we design efficient initialisation and optimisation schemes for PCFs to strengthen the discriminative power and accelerate training efficiency. To further boost the capabilities of complex time series generation, we integrate the auto-encoder structure via sequential embedding into the PCF-GAN, which provides additional reconstruction functionality. Extensive numerical experiments on various datasets demonstrate the consistently superior performance of PCF-GAN over state-of-the-art baselines, in both generation and reconstruction quality. Joint work with Dr. Siran Li (Shanghai Jiao Tong University) and Hang Lou (UCL). Paper: [https://arxiv.org/pdf/2305.12511.pdf].
Neural SDEs are continuous-time generative models for sequential data. State-of-the-art performance for irregular time series generation has been previously obtained by training these models adversarially as GANs. However, as typical for GAN architectures, training is notoriously unstable, often suffers from mode collapse, and requires specialised techniques such as weight clipping and gradient penalty to mitigate these issues. In this talk, I will introduce a novel class of scoring rules on path space based on signature kernels and use them as an objective for training Neural SDEs non-adversarially. The strict properness of such kernel scores and the consistency of the corresponding estimators provide existence and uniqueness guarantees for the minimiser. With this formulation, evaluating the generator-discriminator pair amounts to solving a system of linear path-dependent PDEs which allows for memory-efficient adjoint-based backpropagation. Moreover, because the proposed kernel scores are well-defined for paths with values in infinite-dimensional spaces of functions, this framework can be easily extended to generate spatiotemporal data. This procedure permits conditioning on a rich variety of market conditions and significantly outperforms alternative ways of training Neural SDEs on a variety of tasks including the simulation of rough volatility models, the conditional probabilistic forecasts of real-world forex pairs where the conditioning variable is an observed past trajectory, and the mesh-free generation of limit order book dynamics.
We describe the compact scaling limits of uniformly random quadrangulations with boundaries on a surface of arbitrary fixed genus. These limits, called Brownian surfaces, are homeomorphic to the surface of the given genus with or without boundaries depending on the scaling regime of the boundary perimeters of the quadrangulation. They are constructed by appropriate gluings of pieces derived from Brownian geometrical objects (the Brownian plane and half-plane). In this talk, I will review their definition and discuss possible alternative constructions. This is based on joint work with Jérémie Bettinelli.
Recently, Linares-Otto-Tempelmayr have unveiled a very interesting algebraic structure which allows one to define a new class of rough paths/regularity structures, with associated applications to stochastic PDEs or ODEs. This approach does not consider trees as combinatorial tools but rather their fertility, namely the function which associates to each integer k the number of vertices in the tree with exactly k children. In a joint work with J-D Jacques, we have studied this algebraic structure and shown that it is related to a general and simple class of so-called post-Lie algebras. The construction has remarkable properties and I will try to present them in the simplest possible way.
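The fertility of a tree is a purely combinatorial object and can be computed in a few lines (the example tree below is an arbitrary illustration):

```python
from collections import Counter

# a rooted tree given by its adjacency: vertex -> list of children
tree = {0: [1, 2, 3], 1: [4, 5], 2: [], 3: [6], 4: [], 5: [], 6: []}

def fertility(tree):
    """k -> number of vertices of the tree with exactly k children."""
    return Counter(len(children) for children in tree.values())

print(sorted(fertility(tree).items()))   # [(0, 4), (1, 1), (2, 1), (3, 1)]
```

Note that distinct trees can share the same fertility, which is exactly why indexing the structure by fertilities rather than by trees is a genuine coarsening.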