Forthcoming events in this series


Mon, 08 Feb 2021

16:00 - 17:00

Finance and Statistics: Trading Analogies for Sequential Learning

MARTIN LARSSON
(Carnegie Mellon University)
Abstract


The goal of sequential learning is to draw inference from data that is gathered gradually through time. This is a typical situation in many applications, including finance. A sequential inference procedure is 'anytime-valid' if the decision to stop or continue an experiment can depend on anything that has been observed so far, without compromising statistical error guarantees. A recent approach to anytime-valid inference views a test statistic as a bet against the null hypothesis. These bets are constrained to be supermartingales, hence unprofitable, under the null, but designed to be profitable under the relevant alternative hypotheses. This perspective opens the door to tools from financial mathematics. In this talk I will discuss how notions such as supermartingale measures, log-optimality, and the optional decomposition theorem shed new light on anytime-valid sequential learning. (This talk is based on joint work with Wouter Koolen (CWI), Aaditya Ramdas (CMU) and Johannes Ruf (LSE).)
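For orientation, the betting guarantee can be stated schematically as follows (a standard formulation, not specific to this talk). A test supermartingale is a nonnegative process $(M_t)_{t \ge 0}$ with $M_0 = 1$ that is a supermartingale under every law in the null hypothesis $H_0$; Ville's inequality then yields the time-uniform bound $\sup_{P \in H_0} P(\exists t : M_t \ge 1/\alpha) \le \alpha$, so rejecting $H_0$ as soon as the bettor's wealth $M_t$ reaches $1/\alpha$ is valid at any data-dependent stopping time.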
 

Mon, 01 Feb 2021

16:00 - 17:00

Extremal distance and conformal radius of a CLE_4 loop.

TITUS LUPU
(Sorbonne Université)
Abstract

The CLE_4 Conformal Loop Ensemble in a 2D simply connected domain is a random countable collection of fractal Jordan curves that satisfies a statistical conformal invariance and appears, or is conjectured to appear, as a scaling limit of interfaces in various statistical physics models in 2D, for instance in the double dimer model. The CLE_4 is also related to the 2D Gaussian free field. Given a simply connected domain D and a point z in D, we consider the CLE_4 loop that surrounds z and study the extremal distance between the loop and the boundary of the domain, and the conformal radius of the interior surrounded by the loop seen from z. Because of the conformal invariance, the joint law of these two quantities does not depend (up to a scale factor) on the choice of the domain D and the point z in D. The law of the conformal radius alone has been known since the works of Schramm, Sheffield and Wilson. We complement their result by deriving the joint law of (extremal distance, conformal radius). Both quantities can be read off the same 1D Brownian path, by taking a last passage time and a first hitting time. This joint law, together with some distortion bounds, provides some exponents related to the CLE_4. This is a joint work with Juhan Aru and Avelio Sepulveda.

 

Mon, 25 Jan 2021

16:00 - 17:00

Open markets

DONGHAN KIM
(Columbia University)
Abstract

An open market is a subset of a larger equity market, composed of a certain fixed number of top‐capitalization stocks. Though the number of stocks in the open market is fixed, their composition changes over time, as each company's rank by market capitalization fluctuates. When one is allowed to invest also in a money market, an open market resembles the entire “closed” equity market in the sense that the market viability (lack of arbitrage) is equivalent to the existence of a numéraire portfolio (which cannot be outperformed). When access to the money market is prohibited, the class of portfolios shrinks significantly in open markets; in such a setting, we discuss how to construct functionally generated stock portfolios and the concept of the universal portfolio.
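For readers unfamiliar with functional generation, the classical (closed-market) formula may help fix ideas; it is standard in stochastic portfolio theory and not specific to open markets. A smooth positive function $G$ of the market weights $\mu(t) = (\mu_1(t), \dots, \mu_n(t))$ generates the portfolio $\pi_i(t) = \mu_i(t) \big( D_i \log G(\mu(t)) + 1 - \sum_{j} \mu_j(t)\, D_j \log G(\mu(t)) \big)$; the open-market question is how such constructions must be adapted when only the top-capitalization stocks are investable and no money market is available.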

This talk is based on joint work with Ioannis Karatzas.


Mon, 18 Jan 2021

16:00 - 17:00

Machine Learning for Mean Field Games

MATHIEU LAURIERE
(Princeton University)
Abstract

Mean field games (MFG) and mean field control problems (MFC) are frameworks to study Nash equilibria or social optima in games with a continuum of agents. These problems can be used to approximate competitive or cooperative situations with a large finite number of agents. They have found a broad range of applications, from economics to crowd motion, energy production and risk management. Scalable numerical methods are a key step towards concrete applications. In this talk, we propose several numerical methods for MFG and MFC. These methods are based on machine learning tools such as function approximation via neural networks and stochastic optimization. We provide numerical results and we investigate the numerical analysis of these methods by proving bounds on the approximation scheme. If time permits, we will also discuss model-free methods based on extensions of the traditional reinforcement learning setting to the mean-field regime.  
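As a rough illustration of how neural-network function approximation and stochastic optimization combine in this context, here is a generic toy sketch (my own, with a made-up linear-quadratic cost and Euler rollout; it is not the algorithm of the talk): a feedback control is parametrized by a small network, the mean field is replaced by the empirical mean of simulated particles, and the cost is minimized by stochastic gradient descent.

    # Sketch: mean-field control with a neural-network feedback policy trained by SGD.
    # Toy dynamics dX = alpha dt + sigma dW; the cost penalizes control energy and the
    # distance of each particle from the empirical mean (a crude stand-in for the mean field).
    import torch

    torch.manual_seed(0)
    N, T, dt, sigma = 512, 20, 0.05, 0.3
    policy = torch.nn.Sequential(               # feedback control alpha(t, x, mean)
        torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

    for it in range(200):
        x = torch.randn(N, 1)                   # initial particle positions
        cost = torch.zeros(())
        for k in range(T):
            m = x.mean() * torch.ones(N, 1)     # empirical approximation of the mean field
            t = torch.full((N, 1), k * dt)
            a = policy(torch.cat([t, x, m], dim=1))
            cost = cost + dt * (0.5 * a.pow(2) + 0.5 * (x - m).pow(2)).mean()
            x = x + a * dt + sigma * dt ** 0.5 * torch.randn(N, 1)
        opt.zero_grad()
        cost.backward()
        opt.step()

The methods discussed in the talk refine both the mean-field approximation and the optimization scheme, and come with error bounds; the sketch above only shows the overall structure.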

 

 

Mon, 07 Dec 2020

16:00 - 17:00

"Efficient approximation of high-dimensional functions with neural networks”

PATRICK CHERIDITO
(ETH Zurich)
Abstract

We develop a framework for showing that neural networks can overcome the curse of dimensionality in different high-dimensional approximation problems. Our approach is based on the notion of a catalog network, which is a generalization of a standard neural network in which the nonlinear activation functions can vary from layer to layer as long as they are chosen from a predefined catalog of functions. As such, catalog networks constitute a rich family of continuous functions. We show that under appropriate conditions on the catalog, catalog networks can efficiently be approximated with ReLU-type networks and provide precise estimates on the number of parameters needed for a given approximation accuracy. As special cases of the general results, we obtain different classes of functions that can be approximated with ReLU networks without the curse of dimensionality. 
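A minimal code sketch of the catalog-network idea (my own illustration with a hypothetical two-element catalog; the actual construction and the parameter estimates are in the preprint linked below): a feed-forward network in which each hidden layer draws its nonlinearity from a predefined catalog of activation functions.

    # Sketch: a feed-forward "catalog network" whose activations vary from layer to layer,
    # each chosen from a predefined catalog (here tanh and softplus, purely as examples).
    import torch

    catalog = {"tanh": torch.tanh, "softplus": torch.nn.functional.softplus}

    class CatalogNetwork(torch.nn.Module):
        def __init__(self, widths, activations):
            super().__init__()
            assert len(activations) == len(widths) - 2          # one activation per hidden layer
            self.layers = torch.nn.ModuleList(
                [torch.nn.Linear(widths[i], widths[i + 1]) for i in range(len(widths) - 1)])
            self.activations = [catalog[name] for name in activations]

        def forward(self, x):
            for layer, act in zip(self.layers[:-1], self.activations):
                x = act(layer(x))
            return self.layers[-1](x)                           # affine output layer

    net = CatalogNetwork([10, 64, 64, 1], ["tanh", "softplus"])
    y = net(torch.randn(5, 10))                                 # output of shape (5, 1)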

 

A preprint is here: https://arxiv.org/abs/1912.04310

Mon, 30 Nov 2020

16:00 - 17:00

Model-independence in a fixed-income market and weak optimal transport

BEATRICE ACCIAIO
(ETH Zurich)
Abstract

 

In this talk I will consider model-independent pricing problems in a stochastic interest rates framework. In this case the usual tools from Optimal Transport and Skorokhod embedding cannot be applied. I will show how some pricing problems in a fixed-income market can be reformulated as Weak Optimal Transport (WOT) problems as introduced by Gozlan et al. I will present a super-replication theorem that follows from an extension of WOT results to the case of non-convex cost functions.
This talk is based on joint work with M. Beiglboeck and G. Pammer.
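For convenience, recall the weak formulation of transport introduced by Gozlan et al.: for marginals $\mu, \nu$ and a cost $C(x, p)$ depending on a point $x$ and a probability measure $p$, the weak optimal transport cost is $V(\mu, \nu) = \inf_{\pi \in \Pi(\mu, \nu)} \int C(x, \pi_x)\, \mu(dx)$, where $\pi_x$ denotes the conditional law (disintegration) of $\pi$ given the first coordinate; classical optimal transport is recovered with the linear cost $C(x, p) = \int c(x, y)\, p(dy)$.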

Mon, 23 Nov 2020

16:00 - 17:00

Excursion Risk

RENYUAN XU
(University of Oxford)
Abstract

The risk and return profiles of a broad class of dynamic trading strategies, including pairs trading and other statistical arbitrage strategies, may be characterized in terms of excursions of the market price of a portfolio away from a reference level. We propose a mathematical framework for the risk analysis of such strategies, based on a description in terms of price excursions, first in a pathwise setting, without probabilistic assumptions, then in a Markovian setting.

 

We introduce the notion of δ-excursion, defined as a path which deviates by δ from a reference level before returning to this level. We show that every continuous path has a unique decomposition into δ-excursions, which is useful for scenario analysis of dynamic trading strategies, leading to simple expressions for the number of trades, realized profit, maximum loss and drawdown. As δ is decreased to zero, properties of this decomposition relate to the local time of the path. When the underlying asset follows a Markov process, we combine these results with Ito's excursion theory to obtain a tractable decomposition of the process as a concatenation of independent δ-excursions, whose distribution is described in terms of Ito's excursion measure. We provide analytical results for linear diffusions and give new examples of stochastic processes for flexible and tractable modeling of excursions. Finally, we describe a non-parametric scenario simulation method for generating paths whose excursion properties match those observed in empirical data.
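The following rough sketch (my own illustration, not code from the paper) shows how such a decomposition can be read off a discretely sampled path: wait until the path has deviated by at least δ from the reference level, then wait until it returns to that level; each such piece is one δ-excursion, and their count is a proxy for the number of round-trip trades.

    # Sketch: decompose a sampled path into delta-excursions around a reference level.
    import numpy as np

    def delta_excursions(path, level=0.0, delta=1.0):
        """Return (start, end) index pairs of the successive delta-excursions."""
        excursions, start, i, n = [], 0, 0, len(path)
        while i < n:
            while i < n and abs(path[i] - level) < delta:
                i += 1                                   # wait for a deviation of size >= delta
            if i >= n:
                break
            sign = 1.0 if path[i] > level else -1.0
            while i < n and sign * (path[i] - level) > 0:
                i += 1                                   # wait for the return to the level
            if i >= n:
                break
            excursions.append((start, i))
            start = i
        return excursions

    rng = np.random.default_rng(0)
    path = 0.1 * np.cumsum(rng.standard_normal(10_000))  # toy price path
    print(len(delta_excursions(path, level=0.0, delta=1.0)), "delta-excursions")

As δ is decreased toward zero, the number of such excursions relates to the local time of the path, as described above.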

Joint work with Anna Ananova and Rama Cont: https://ssrn.com/abstract=3723980

 

 

Mon, 16 Nov 2020

16:00 - 17:00

Elliptic stochastic quantisation and supersymmetry

MASSIMILIANO GUBINELLI
(Bonn University)
Abstract

Stochastic quantisation is, broadly speaking, the use of a stochastic differential equation to construct a given probability distribution. Usually this refers to a Markovian Langevin evolution with a given invariant measure. However, we will show that it is possible to construct other kinds of equations (elliptic stochastic partial differential equations) whose solutions have prescribed marginals. This connection was discovered in the 1980s by Parisi and Sourlas in the context of dimensional reduction of statistical field theories in random external fields. This purely probabilistic result has a proof which depends on a supersymmetric formulation of the problem, i.e. a formulation involving a non-commutative random field defined on a non-commutative space. This talk is based on joint work with S. Albeverio and F. C. de Vecchi.
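For orientation (the standard Langevin example mentioned above, recalled schematically): to produce a measure proportional to $e^{-S(\phi)}\, d\phi$ one runs the parabolic stochastic quantisation equation $\partial_t \phi = -\frac{\delta S}{\delta \phi}(\phi) + \sqrt{2}\, \xi$, where $\xi$ is space-time white noise, and the target measure is invariant for this Markovian evolution. The elliptic equations discussed in the talk replace this time evolution by an elliptic SPDE whose solution has the prescribed marginals, rather than merely admitting them as an invariant measure.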

 

Mon, 09 Nov 2020

16:00 - 17:00

Space-time deep neural network approximations for high-dimensional partial differential equations

DIYORA SALIMOVA
(ETH Zurich)
Abstract


Approximately solving high-dimensional partial differential equations (PDEs) is one of the most challenging issues in applied mathematics, and most of the numerical approximation methods for PDEs in the scientific literature suffer from the so-called curse of dimensionality (CoD), in the sense that the number of computational operations employed in the corresponding approximation scheme to obtain an approximation precision $\varepsilon >0$ grows exponentially in the PDE dimension and/or the reciprocal of $\varepsilon$. Recently, certain deep learning based approximation methods for PDEs have been proposed, and various numerical simulations for such methods suggest that deep neural network (DNN) approximations might have the capacity to indeed overcome the CoD, in the sense that the number of real parameters used to describe the approximating DNNs grows at most polynomially in both the PDE dimension $d \in \N$ and the reciprocal of the prescribed approximation accuracy $\varepsilon >0$. There are now also a few rigorous mathematical results in the scientific literature which substantiate this conjecture by proving that DNNs overcome the CoD in approximating solutions of PDEs. Each of these results establishes that DNNs overcome the CoD in approximating suitable PDE solutions at a fixed time point $T >0$ and on a compact cube $[a, b]^d$, but none of these results provides an answer to the question whether the entire PDE solution on $[0, T] \times [a, b]^d$ can be approximated by DNNs without the CoD.
In this talk we show that for every $a \in \R$, $ b \in (a, \infty)$ solutions of  suitable  Kolmogorov PDEs can be approximated by DNNs on the space-time region $[0, T] \times [a, b]^d$ without the CoD. 
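Schematically, "without the CoD" means bounds of the following form (an illustrative template, not the precise statement of the theorem): there exist constants $c, p, q > 0$ such that for every dimension $d$ and accuracy $\varepsilon \in (0, 1]$ there is a DNN $\Phi_{d, \varepsilon}$ with at most $c\, d^p \varepsilon^{-q}$ real parameters satisfying $\sup_{(t, x) \in [0, T] \times [a, b]^d} |u_d(t, x) - \Phi_{d, \varepsilon}(t, x)| \le \varepsilon$, where $u_d$ denotes the solution of the $d$-dimensional Kolmogorov PDE.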

 

Mon, 02 Nov 2020

16:00 - 17:00

Stochastic Ricci flow on surfaces

JULIEN DUBEDAT
(Columbia University)
Abstract

The Ricci flow on a surface is an intrinsic evolution of the metric converging to a constant curvature metric within the conformal class. It can be seen as an infinite-dimensional gradient flow. We introduce a natural 'Langevin' version of that flow, thus constructing an SPDE with invariant measure expressed in terms of Liouville Conformal Field Theory.
Joint work with Hao Shen (Wisconsin).

 

Mon, 26 Oct 2020

16:00 - 17:00

Diffusion Limit of Poisson Limit-Order Book Models

STEVE SHREVE
(Carnegie Mellon University)
Abstract

Trading of financial instruments has largely moved away from floor trading and onto electronic exchanges. Orders to buy and sell are queued at these exchanges in a limit-order book. While a full analysis of the dynamics of a limit-order book requires an understanding of strategic play among multiple agents, and is thus extremely complex, so-called zero-intelligence Poisson models have been shown to capture many of the statistical features of limit-order book evolution. These models can be addressed by traditional queueing theory techniques, including Laplace transform analysis. In this work, we demonstrate in a simple setting that another queueing theory technique, approximating the Poisson model by a diffusion model identified as the limit of a sequence of scaled Poisson models, can also be implemented. We identify the diffusion limit, find an embedded semi-Markov model in the limit, and determine the statistics of the embedded semi-Markov model. Along the way, we introduce and study a new type of process, a generalization of skew Brownian motion that we call two-speed Brownian motion.
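For intuition about the zero-intelligence setup, here is a deliberately crude toy simulation (my own sketch, not the model analyzed in the talk): the best bid and ask queues receive limit orders and lose orders to cancellations or market orders at Poisson rates, and an emptied queue triggers a price move.

    # Sketch: toy zero-intelligence queues at the best bid and best ask.
    # Orders arrive at rate lam and are removed (cancellation or market order) at rate mu
    # per queue; an emptied queue is refilled at a random size, a crude stand-in for the
    # price moving to the next level.
    import numpy as np

    rng = np.random.default_rng(1)
    lam, mu, horizon = 1.0, 0.9, 10_000.0
    queues, t, price_moves = np.array([5, 5]), 0.0, 0

    while t < horizon:
        rates = np.array([lam, lam, mu, mu])       # add bid, add ask, remove bid, remove ask
        t += rng.exponential(1.0 / rates.sum())
        event = rng.choice(4, p=rates / rates.sum())
        side = event % 2                           # 0 = bid queue, 1 = ask queue
        queues[side] += 1 if event < 2 else -1
        if queues[side] == 0:                      # queue depleted: the price moves one tick
            price_moves += 1
            queues[side] = rng.integers(1, 6)
    print(price_moves, "price moves over the horizon")

The talk concerns the diffusion limit of such Poisson dynamics (suitably scaled) rather than direct simulation.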

Mon, 19 Oct 2020

16:00 - 17:00

Deep neural networks, generic universal interpolation and controlled ODEs

CHRISTA CUCHIERO
(University of Vienna)
Abstract

A recent paradigm views deep neural networks as discretizations of certain controlled ordinary differential equations, sometimes called neural ordinary differential equations. We make use of this perspective to link the expressiveness of deep networks to the notion of controllability of dynamical systems. Using this connection, we study an expressiveness property that we call universal interpolation, and show that it is generic in a certain sense. The universal interpolation property is slightly weaker than universal approximation, and disentangles supervised learning on finite training sets from generalization properties. We also show that universal interpolation holds for certain deep neural networks even if large numbers of parameters are left untrained and are instead chosen randomly. This lends theoretical support to the observation that training with random initialization can be successful even when most parameters remain largely unchanged through the training. Our results also explore how small the number of trainable parameters in neural ordinary differential equations can be without giving up on expressiveness.

Joint work with Martin Larsson, Josef Teichmann.

Mon, 12 Oct 2020

16:00 - 17:00

A trajectorial approach to the gradient flow properties of Langevin–Smoluchowski diffusions

IOANNIS KARATZAS
(Columbia University)
Abstract

We revisit the variational characterization of conservative diffusion as entropic gradient flow and provide for it a probabilistic interpretation based on stochastic calculus. It was shown by Jordan, Kinderlehrer, and Otto that, for diffusions of Langevin–Smoluchowski type, the Fokker–Planck probability density flow maximizes the rate of relative entropy dissipation, as measured by the distance traveled in the ambient space of probability measures with finite second moments, in terms of the quadratic Wasserstein metric. We obtain novel, stochastic-process versions of these features, valid along almost every trajectory of the diffusive motion in the backward direction of time, using a very direct perturbation analysis. By averaging our trajectorial results with respect to the underlying measure on path space, we establish the maximal rate of entropy dissipation along the Fokker–Planck flow and measure exactly the deviation from this maximum that corresponds to any given perturbation. As a bonus of our trajectorial approach we derive the HWI inequality relating relative entropy (H), Wasserstein distance (W) and relative Fisher information (I).
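For context, the Jordan–Kinderlehrer–Otto characterization recalled above can be summarized schematically as follows: the Fokker–Planck flow $(\rho_t)$ of a Langevin–Smoluchowski diffusion is the gradient flow of the relative entropy $H(\,\cdot\,|\,\pi)$ with respect to the quadratic Wasserstein metric, i.e. it arises as the small-step limit of the variational scheme $\rho_{k+1} \in \arg\min_{\rho} \{ H(\rho\,|\,\pi) + \frac{1}{2\tau} W_2^2(\rho, \rho_k) \}$, where $\pi$ is the invariant Gibbs measure and $\tau > 0$ the time step; the talk makes the corresponding rate-of-dissipation statements pathwise.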

 

Mon, 22 Jun 2020

16:00 - 17:00

Controlled and constrained martingale problems

Thomas Kurtz
(University of Wisconsin)
Abstract

Most of the basic results on martingale problems extend to the setting in which the generator depends on a control.  The “control” could represent a random environment, or the generator could specify a classical stochastic control problem.  The equivalence between the martingale problem and forward equation (obtained by taking expectations of the martingales) provides the tools for extending linear programming methods introduced by Manne in the context of controlled finite Markov chains to general Markov stochastic control problems.  The controlled martingale problem can also be applied to the study of constrained Markov processes (e.g., reflecting diffusions), the boundary process being treated as a control.  The talk includes joint work with Richard Stockbridge and with Cristina Costantini. 

Mon, 15 Jun 2020

16:00 - 17:00

Local stochastic volatility and the inverse of the Markovian projection

Mykhaylo Shkolnikov
(Princeton University)
Abstract

 

The calibration problem for local stochastic volatility models leads to two-dimensional stochastic differential equations of McKean-Vlasov type. In these equations, the conditional distribution of the second component of the solution given the first enters the equation for the first component of the solution. While such equations enjoy frequent application in the financial industry, their mathematical analysis poses a major challenge. I will explain how to prove the strong existence of stationary solutions for these equations, as well as the strong uniqueness in an important special case. Based on joint work with Daniel Lacker and Jiacheng Zhang.
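To fix ideas, the calibration condition behind these equations can be stated as follows (the standard formulation, recalled for orientation): in a local stochastic volatility model $dS_t = L(t, S_t)\, \sqrt{V_t}\, S_t\, dW_t$, reproducing the market-implied (Dupire) local volatility $\sigma_{\mathrm{Dup}}$ forces the leverage function to satisfy $L(t, s)^2\, \mathbb{E}[V_t \mid S_t = s] = \sigma_{\mathrm{Dup}}(t, s)^2$, so the conditional law of the volatility factor given the price level feeds back into the dynamics of the price; this is precisely the McKean-Vlasov structure mentioned above.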
 

Mon, 01 Jun 2020

16:00 - 17:00

A martingale approach for fractional Brownian motions and related path dependent PDEs

Frederi Viens
(Michigan State University)
Abstract


We study dynamic backward problems, with the computation of conditional expectations as a special objective, in a framework where the (forward) state process satisfies a Volterra type SDE, with fractional Brownian motion as a typical example. Such processes are neither Markov processes nor semimartingales, and most notably, they feature a certain time inconsistency which makes any direct application of Markovian ideas, such as flow properties, impossible without passing to a path-dependent framework. Our main result is a functional Itô formula, extending the functional Itô calculus to our more general framework. In particular, unlike in the functional Itô calculus, where one needs only to consider stopped paths, here we need to concatenate the observed path up to the current time with a certain smooth observable curve derived from the distribution of the future paths. We then derive the path dependent PDEs for the backward problems. Finally, an application to option pricing and hedging in a financial market with rough volatility is presented.
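For orientation, the forward dynamics referred to above are of Volterra type, schematically $X_t = x_0 + \int_0^t K(t, s)\, b(s, X_s)\, ds + \int_0^t K(t, s)\, \sigma(s, X_s)\, dW_s$, with fractional Brownian motion corresponding to the linear Gaussian case $B^H_t = \int_0^t K_H(t, s)\, dW_s$ for a suitable kernel $K_H$; the dependence of the integrands on the terminal time $t$ is what destroys the semimartingale and flow properties and forces the path-dependent framework.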

Joint work with JianFeng Zhang (USC).

Mon, 25 May 2020

16:00 - 17:00

Infinitely regularizing paths, and regularization by noise.

Fabian Harang
(University of Oslo)
Abstract

 


In this talk I will discuss regularization by noise from a pathwise perspective using non-linear Young integration, and discuss the relations with occupation measures and local times. This methodology of pathwise regularization by noise was originally proposed by Gubinelli and Catellier (2016), who use the concept of averaging operators and non-linear Young integration to give meaning to certain ill-posed SDEs.

In a recent work together with Nicolas Perkowski we show that there exists a class of paths with exceptional regularizing effects on ODEs, using the framework of Gubinelli and Catellier. In particular, we prove existence and uniqueness of ODEs perturbed by such a path, even when the drift is given as a Schwartz distribution. Moreover, the flow associated with such ODEs is proven to be infinitely differentiable. Our analysis is purely pathwise and depends only on the existence of a sufficiently regular occupation measure associated with the path added to the ODE.

As an example, we show that a certain class of Gaussian processes has infinitely differentiable local times; the paths of such processes can then be used to obtain the infinitely regularizing effect on ODEs. This gives insight into the powerful effect that noise may have on certain equations. I will also discuss an ongoing extension of these results towards the regularization of certain PDEs/SPDEs by noise.
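Schematically, the framework referred to above treats the perturbed equation $x_t = x_0 + \int_0^t b(x_s + w_s)\, ds$ by rewriting the drift through the averaging operator $T^w_t b(x) := \int_0^t b(x + w_s)\, ds = \int b(x + y)\, \mu_t(dy)$, where $\mu_t$ is the occupation measure of $w$ on $[0, t]$; if this occupation measure (equivalently, the local time of $w$) is sufficiently regular, then $T^w b$ is a smooth function of $x$ even for distributional $b$, and the equation can be solved as a nonlinear Young equation (a schematic summary of the Gubinelli–Catellier construction, recalled for orientation).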

Mon, 18 May 2020

16:00 - 17:00

The functional Breuer-Major theorem

Ivan Nourdin
(University of Luxembourg)
Abstract


Let $X = \{X_k\}_{k \in \mathbb{Z}}$ be a zero-mean stationary Gaussian sequence of random variables with covariance function $\rho$ satisfying $\rho(0) = 1$. Let $\varphi: \mathbb{R} \to \mathbb{R}$ be a function such that $E[\varphi(X_0)^2] < \infty$ and assume that $\varphi$ has Hermite rank $d \ge 1$. The celebrated Breuer–Major theorem asserts that, if $\sum_k |\rho(k)|^d < \infty$, then the finite-dimensional distributions of the normalized partial sums of $\varphi(X_k)$ converge to those of $\sigma W$, where $W$ is a standard Brownian motion and $\sigma$ is some (explicit) constant. Surprisingly, and despite the fact that this theorem has become over the years a prominent tool in a number of different areas, a necessary and sufficient condition implying weak convergence in the space $D([0,1])$ of càdlàg functions endowed with the Skorohod topology is still missing. Our main goal in this paper is to fill this gap. More precisely, by using suitable boundedness properties satisfied by the generator of the Ornstein–Uhlenbeck semigroup, we show that tightness holds under the sufficient (and almost necessary) natural condition that $E[|\varphi(X_0)|^p] < \infty$ for some $p > 2$.

Joint work with D. Nualart.
 

Mon, 11 May 2020

16:00 - 17:00

Weierstrass bridges

Alexander Schied
(University of Waterloo, Canada)
Abstract


Many classical fractal functions, such as the Weierstrass and Takagi-van der Waerden functions, admit a finite p-th variation along a natural sequence of partitions. They can thus serve as integrators in pathwise Itô calculus. Motivated by this observation, we introduce a new class of stochastic processes, which we call Weierstrass bridges. They have continuous sample paths and arbitrarily low regularity and so provide a new example class of “rough” stochastic processes. We study some of their sample path properties including p-th variation and moduli of continuity. This talk includes joint work with Xiyue Han and Zhenyuan Zhang.

 

Mon, 04 May 2020

16:00 - 17:00

Connecting Generative Adversarial Networks with Mean Field Games

Xin Guo
(Berkeley, USA)
Abstract


Generative Adversarial Networks (GANs) have celebrated great empirical success, especially in image generation and processing. Meanwhile, Mean-Field Games (MFGs), as analytically feasible approximations for N-player games, have experienced rapid growth in the theory of controls. In this talk, we will discuss new theoretical connections between GANs and MFGs. Interpreting MFGs as GANs, on one hand, allows us to devise GANs-based algorithms to solve MFGs. Interpreting GANs as MFGs, on the other hand, provides a new and probabilistic foundation for GANs. Moreover, this interpretation helps establish an analytical connection between GANs and Optimal Transport (OT) problems, a connection previously understood mostly from the geometric perspective. We will illustrate these ideas with numerical examples of using GANs to solve high-dimensional MFGs, demonstrating the superior performance of this approach over existing methodology.

Mon, 16 Mar 2020

15:45 - 16:45
Virtual

On the asymptotic optimality of the comb strategy for prediction with expert advice (cancelled)

ERHAN BAYRAKTAR
(University of Michigan)
Abstract

For the problem of prediction with expert advice in the adversarial setting with geometric stopping, we compute the exact leading-order expansion of the long-time behavior of the value function using techniques from stochastic analysis and PDEs. We then use this expansion to prove that, as conjectured by Gravin, Peres and Sivan, the comb strategies are indeed asymptotically optimal for the adversary in the case of 4 experts.
 

Mon, 16 Mar 2020

14:15 - 15:15
Virtual

Conservative diffusion as entropic gradient flux (cancelled)

IOANNIS KARATZAS
(Columbia University)
Abstract

We provide a detailed, probabilistic interpretation, based on stochastic calculus, for the variational characterization of conservative diffusion as entropic gradient flux. Jordan, Kinderlehrer, and Otto showed in 1998 that, for diffusions of Langevin-Smoluchowski type, the Fokker-Planck probability density flow minimizes the rate of relative entropy dissipation, as measured by the distance traveled in terms of the quadratic Wasserstein metric in the ambient space of configurations. Using a very direct perturbation analysis we obtain novel, stochastic-process versions of such features. These are valid along almost every trajectory of the diffusive motion in both the forward and, most transparently, the backward, directions of time. The original results follow then simply by taking expectations. As a bonus, we obtain the HWI inequality of Otto and Villani relating relative entropy, Fisher information and Wasserstein distance; and from it the celebrated log-Sobolev, Talagrand and Poincare inequalities of functional analysis. (Joint work with W. Schachermayer and B. Tschiderer, from the University of Vienna.)

 

Mon, 09 Mar 2020

15:45 - 16:45
L3

Infinite limit of (fully connected) neural networks: Gaussian processes and kernel methods.

FRANCK GABRIEL
(École Polytechnique Fédérale de Lausanne (EPFL))
Abstract

In practice, it is standard to initialize Artificial Neural Networks (ANN) with random parameters. We will see that this allows one to describe, in function space, the limit of the evolution of (fully connected) ANNs as their width tends to infinity. In this limit, an ANN is initially a Gaussian process and follows, during learning, a kernel gradient descent governed by a kernel called the Neural Tangent Kernel.

This description allows a better understanding of the convergence properties of neural networks and of how they generalize to examples during learning, and has practical implications for the training of wide ANNs.
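For reference, the kernel in question is the standard one: for a parametric function $f_\theta$, the Neural Tangent Kernel is $\Theta(x, x') = \langle \nabla_\theta f_\theta(x), \nabla_\theta f_\theta(x') \rangle$; in the infinite-width limit it remains (essentially) constant during training, so that under gradient descent on a squared loss the network output evolves by kernel gradient descent, $\partial_t f_t(x) = -\sum_i \Theta(x, x_i)\, (f_t(x_i) - y_i)$ over the training points $(x_i, y_i)$.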

Mon, 09 Mar 2020

14:15 - 15:15
L3

Hydrodynamic limit for a facilitated exclusion process

MARIELLE SIMON
(Inria Lille)
Abstract


During this talk we will be interested in a one-dimensional exclusion process subject to strong kinetic constraints, which belongs to the class of cooperative kinetically constrained lattice gases. More precisely, its stochastic short range interaction exhibits a continuous phase transition to an absorbing state at a critical value of the particle density. We will see that the macroscopic behavior of this microscopic dynamics, under periodic boundary conditions and diffusive time scaling, is ruled by a non-linear PDE belonging to free boundary problems (or Stefan problems). One of the ingredients is to show that the system typically reaches an ergodic component in subdiffusive time.

Based on joint works with O. Blondel, C. Erignoux and M. Sasada

Mon, 02 Mar 2020

15:45 - 16:45
L3

Mean-field Langevin dynamics and neural networks

ZHENJIE REN
(Université Paris Dauphine)
Abstract

Deep neural networks have achieved impressive results in various applications and are involved in more and more branches of science. However, there are still few theories supporting their empirical success. In particular, we lack the mathematical tools to explain the advantage of certain structures of the network and to obtain quantitative error bounds. In our recent work, we used a regularised relaxed control problem to model the deep neural network. We managed to characterise its optimal control by the invariant measure of a mean-field Langevin system, which can be approximated by the marginal laws. Through this study we understand the importance of pooling for deep nets, and are able to compute an exponential convergence rate for the (stochastic) gradient descent algorithm.
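Schematically (the generic form of such dynamics, recalled for orientation): for an entropically regularized objective $F$ over probability measures, the mean-field Langevin dynamics reads $dX_t = -\nabla_x \frac{\delta F}{\delta m}(m_t, X_t)\, dt + \sigma\, dW_t$ with $m_t = \mathrm{Law}(X_t)$; its invariant measure characterizes the minimizer of the regularized problem, and convergence of the marginal laws $m_t$ to this invariant measure is what underlies the convergence rate for the (stochastic) gradient descent algorithm mentioned above.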