From Lévy's stochastic area formula to universality of affine and polynomial processes via signature SDEs
Abstract
A plethora of stochastic models used in mathematical finance, but also in population genetics and physics, stems from the class of affine and polynomial processes. The history of these processes is closely connected, on the one hand, with the important concept of tractability, that is, a substantial reduction of computational effort due to special structural features, and, on the other hand, with a unifying framework for a large number of probabilistic models. One early instance in the literature where this unifying affine and polynomial point of view applies is Lévy's stochastic area formula. Starting from this example, we present a guided tour through the main properties and recent results, leading to signature stochastic differential equations (SDEs). These constitute a large class of stochastic processes, here driven by Brownian motions, whose characteristics are entire or real-analytic functions of their own signature, i.e. of the iterated integrals of the process with itself, and which therefore allow for generic path dependence. We show that their prolongation with the corresponding signature is an affine and polynomial process taking values in subsets of group-like elements of the extended tensor algebra. Signature SDEs are thus a class of stochastic processes that is universal within Itô processes with path-dependent characteristics and that, thanks to the affine theory, admits a relatively explicit characterization of the Fourier-Laplace transform and hence of the full law on path space.
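The central objects of the abstract can be made concrete on a discretized path. The sketch below (purely illustrative, not taken from the talk) computes the first two signature levels of a sampled two-dimensional Brownian path via iterated Riemann sums and reads off Lévy's stochastic area as the antisymmetric part of the second level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a 2-d Brownian path on [0, 1] with n steps.
n, T = 10_000, 1.0
dW = rng.normal(scale=np.sqrt(T / n), size=(n, 2))
W = np.vstack([np.zeros(2), np.cumsum(dW, axis=0)])

# Level-1 signature: the total increment W_T - W_0.
sig1 = W[-1] - W[0]

# Level-2 signature: iterated integrals S^{ij}_T = int_0^T (W^i_s - W^i_0) dW^j_s,
# approximated by left-point (Ito-type) Riemann sums.
sig2 = (W[:-1] - W[0]).T @ dW  # shape (2, 2); entry [i, j] = sum_k W^i_k dW^j_k

# Lévy's stochastic area is the antisymmetric part of the second level:
levy_area = 0.5 * (sig2[0, 1] - sig2[1, 0])

# For the discrete sums, the symmetric part satisfies a Chen-type identity
# exactly: sig2 + sig2.T + sum_k dW_k (x) dW_k  ==  sig1 (x) sig1.
```

The exact discrete identity in the last comment is the finite-sum analogue of the shuffle relation between the first two signature levels and is a convenient sanity check for any signature implementation.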
An Approximation Theory for Metric Space-Valued Functions With A View Towards Deep Learning
Abstract
We build universal approximators of continuous maps between arbitrary Polish metric spaces X and Y using universal approximators between Euclidean spaces as building blocks. Earlier results assume that the output space Y is a topological vector space. We overcome this limitation by "randomization": our approximators output discrete probability measures over Y. When X and Y are Polish without additional structure, we prove very general qualitative guarantees; when they have suitable combinatorial structure, we prove quantitative guarantees for Hölder-like maps, including maps between finite graphs, solution operators to rough differential equations between certain Carnot groups, and continuous non-linear operators between Banach spaces arising in inverse problems. In particular, we show that the required number of Dirac measures is determined by the combinatorial structure of X and Y. For barycentric Y, including Banach spaces, R-trees, Hadamard manifolds, and Wasserstein spaces on Polish metric spaces, our approximators reduce to Y-valued functions. When the Euclidean approximators are neural networks, our constructions generalize transformer networks, providing a new probabilistic viewpoint on geometric deep learning.
As an application, we show that the solution operator to an RDE can be approximated within our framework.
Based on the following articles:
• An Approximation Theory for Metric Space-Valued Functions With A View Towards Deep Learning (2023) - C. Liu, M. Lassas, M. V. de Hoop, and I. Dokmanić, arXiv:2304.12231
• Designing Universal Causal Deep Learning Models: The Geometric (Hyper)transformer (2023) - B. Acciaio, A. Kratsios, and G. Pammer, Math. Fin. https://onlinelibrary.wiley.com/doi/full/10.1111/mafi.12389
• Universal Approximation Under Constraints is Possible with Transformers (2022) - ICLR Spotlight - A. Kratsios, B. Zamanlooy, T. Liu, and I. Dokmanić
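The "randomization" idea of the abstract can be illustrated with a hypothetical toy model (the names, target map, and setup below are illustrative assumptions, not the authors' construction): the model outputs mixture weights over a fixed dictionary of K atoms y_1, ..., y_K in Y, i.e. a discrete measure sum_k w_k δ_{y_k}; when Y is barycentric (here a Euclidean stand-in for a Banach space), the measure collapses to its barycenter, recovering a Y-valued function.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax producing mixture weights.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Target: a continuous map f : R -> R^2, a stand-in for a map X -> Y.
f = lambda x: np.array([np.sin(x), np.cos(x)])

# Dictionary of K atoms in Y, here samples of f on a grid over the input space.
xs = np.linspace(0.0, 2 * np.pi, 32)
atoms = np.stack([f(x) for x in xs])  # shape (K, 2)

def approximator(x, temperature=0.05):
    """Toy 'network': output a mixture of Diracs sum_k w_k δ_{atoms[k]},
    with weights concentrating on atoms whose grid point is near x."""
    w = softmax(-(xs - x) ** 2 / temperature)  # mixture weights over Diracs
    barycenter = w @ atoms                     # collapse: weighted mean in Y
    return w, barycenter

w, y_hat = approximator(1.0)  # y_hat approximates f(1.0)
```

Since the stand-in Y is a vector space, the barycenter is just a weighted average; in a general barycentric space one would replace `w @ atoms` by the corresponding (e.g. Fréchet) barycenter.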
11:00
Wilson-Itô diffusions
Abstract
In a recent preprint with Bailleul and Chevyrev, we introduced a class of random fields that aim to model the basic properties of quantum fields. I will explain the basic ideas and some of the many open problems.