Forthcoming events in this series


Thu, 19 Jan 2023

16:00 - 17:00
L6

Model Calibration with Optimal Transport

Benjamin Joseph
Abstract

For a model to produce reasonable predictions, it must be calibrated to reproduce observations in the market. We use the semimartingale optimal transport methodology to formulate this calibration problem as a constrained optimisation problem, with the model calibrated using a finite number of European options observed in the market as constraints. From the resulting PDE formulation we derive a dual formulation involving an HJB equation, which we can solve numerically. We focus on two cases: (1) the stochastic interest rate is known and perfectly matches the observed term structure in the market, but the asset's local volatility and the correlation are unknown and must be calibrated; (2) the dynamics of both the stochastic interest rate and the underlying asset are unknown, and we must jointly calibrate both to European options on the interest rate and on the asset.
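Schematically, and in generic notation rather than that of the talk, the calibration problem takes the form of a constrained optimisation over the model's unknown coefficients, with the observed option prices entering as constraints:
$$\inf_{(\sigma,\rho)}\ \mathbb{E}\Big[\int_0^T F(\sigma_t,\rho_t)\,dt\Big]\quad\text{subject to}\quad \mathbb{E}\Big[e^{-\int_0^{T_i} r_s\,ds}\,(X_{T_i}-K_i)^+\Big]=C_i,\qquad i=1,\dots,N,$$
where $F$ is a convex cost penalising deviation from a reference model and the $C_i$ are the observed European option prices; dualising the constraints is what produces the HJB equation mentioned above.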

Thu, 01 Dec 2022

16:00 - 17:00
L3

Convergence of policy gradient methods for finite-horizon stochastic linear-quadratic control problems

Michael Giegrich
Abstract

We study the global linear convergence of policy gradient (PG) methods for finite-horizon exploratory linear-quadratic control (LQC) problems. The setting includes stochastic LQC problems with indefinite costs and allows additional entropy regularisers in the objective. We consider a continuous-time Gaussian policy whose mean is linear in the state variable and whose covariance is state-independent. In contrast to discrete-time problems, the cost is noncoercive in the policy and not all descent directions lead to bounded iterates. We propose geometry-aware gradient descents for the mean and covariance of the policy, using the Fisher geometry and the Bures-Wasserstein geometry, respectively. The policy iterates are shown to satisfy an a priori bound and to converge globally to the optimal policy at a linear rate. We further propose a novel PG method with discrete-time policies; the algorithm leverages the continuous-time analysis and achieves robust linear convergence across different action frequencies. A numerical experiment confirms the convergence and robustness of the proposed algorithm.

This is joint work with Yufei Zhang and Christoph Reisinger.
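In generic notation (the talk's precise formulation may differ), the policies considered are Gaussian with a state-affine mean and a state-independent covariance, and the exploratory objective is the quadratic cost plus an entropy regulariser:
$$\pi_t(\cdot\mid x)=\mathcal{N}\big(K_t x+b_t,\ \Sigma_t\big),\qquad J(\pi)=\mathbb{E}\Big[\int_0^T\big(x_t^\top Q_t x_t+a_t^\top R_t a_t-\gamma\,\mathcal{H}\big(\pi_t(\cdot\mid x_t)\big)\big)\,dt+x_T^\top G\,x_T\Big],$$
with $\gamma\ge 0$ the regularisation weight. The geometry-aware updates precondition the gradient in the mean parameters $(K_t,b_t)$ with the Fisher information and move $\Sigma_t$ along the Bures-Wasserstein geometry.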

Thu, 24 Nov 2022

16:00 - 17:00
L3

Graph-based Methods for Forecasting Realized Covariances

Chao Zhang
Abstract

We forecast the realized covariance matrix of asset returns in the U.S. equity market by exploiting the predictive information of graphs in volatility and correlation. Specifically, we augment the Heterogeneous Autoregressive (HAR) model via neighborhood aggregation on these graphs. Our proposed method allows for the modeling of interdependence in volatility (also known as the spillover effect) and correlation, while maintaining parsimony and interpretability. We explore various graph construction methods, including sector membership and the graphical LASSO (for modeling volatility), and the line graph (for modeling correlation). The results suggest that the augmented model incorporating graph information yields statistically and economically significant improvements in out-of-sample performance over traditional models. These improvements remain significant at horizons of up to one month ahead, but decay over time. Robustness tests demonstrate that the forecast improvements are obtained consistently across the different out-of-sample sub-periods and are insensitive to measurement errors in volatilities.
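A minimal sketch of the kind of augmentation described above: a pooled HAR regression in which each asset's daily, weekly and monthly lags are supplemented by the same lags averaged over its graph neighbours. The adjacency matrix, the pooling across assets and the plain least-squares fit are illustrative assumptions; the paper's exact specification, and its treatment of correlations via the line graph, may differ.

import numpy as np
from numpy.linalg import lstsq

# rv: (T, N) matrix of daily realized variances; A: (N, N) adjacency matrix of a
# volatility graph (e.g. sector membership). Both are hypothetical inputs.
def har_graph_features(rv, A, t):
    """HAR lags (daily/weekly/monthly) plus neighbourhood-averaged lags at day t."""
    own = [rv[t - 1], rv[t - 5:t].mean(axis=0), rv[t - 22:t].mean(axis=0)]
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    nbr = [(A @ f.reshape(-1, 1) / deg).ravel() for f in own]
    return np.column_stack([np.ones(rv.shape[1])] + own + nbr)  # (N, 7)

def fit_graph_har(rv, A):
    """Pool the cross-section and time to estimate one set of HAR + spillover coefficients."""
    X = np.vstack([har_graph_features(rv, A, t) for t in range(22, rv.shape[0] - 1)])
    y = np.concatenate([rv[t + 1] for t in range(22, rv.shape[0] - 1)])
    beta, *_ = lstsq(X, y, rcond=None)
    return beta  # one-step-ahead forecast: har_graph_features(rv, A, rv.shape[0]) @ beta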

Thu, 17 Nov 2022

16:00 - 17:00
L3

Simulating Arbitrage-Free Implied Volatility Surfaces

Milena Vuletic
Abstract

We present a computationally tractable method for simulating arbitrage-free implied volatility surfaces. Our approach reconciles static arbitrage constraints with a realistic representation of the statistical properties of implied volatility co-movements.
We illustrate our method with two examples. First, we propose a dynamic factor model for the implied volatility surface and show how our method may be used to remove static arbitrage from model scenarios. Second, we propose a nonparametric generative model for implied volatility surfaces based on a Generative Adversarial Network (GAN).
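For intuition, the static arbitrage constraints in question can be checked directly on a grid of call prices: monotonicity and convexity in strike (no vertical-spread or butterfly arbitrage) and monotonicity in maturity (no calendar arbitrage). A minimal sketch, assuming a rectangular grid C[i, j] of undiscounted call prices at maturities T[i] and strikes K[j] (hypothetical inputs):

import numpy as np

def static_arbitrage_violations(C, K, tol=1e-10):
    """Count violations of basic static no-arbitrage conditions on a call-price grid.
    C: (n_maturities, n_strikes) undiscounted call prices; K: (n_strikes,) strikes."""
    dK = np.diff(K)
    slopes = np.diff(C, axis=1) / dK
    vertical = int((slopes > tol).sum())                      # calls must be non-increasing in strike
    butterfly = int((np.diff(slopes, axis=1) < -tol).sum())   # calls must be convex in strike
    calendar = int((np.diff(C, axis=0) < -tol).sum())         # non-decreasing in maturity (zero rates/dividends assumed)
    return {"vertical": vertical, "butterfly": butterfly, "calendar": calendar}

A simulated surface whose prices pass these checks carries none of the corresponding static arbitrages; the method in the talk enforces such constraints while preserving realistic co-movement statistics.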

Thu, 10 Nov 2022

16:00 - 17:00
L3

Sensitivity of robust optimization over an adapted Wasserstein ambiguity set

Yifan Jiang
Abstract

In this talk, we consider the sensitivity of an optimization problem to model uncertainty. By introducing an adapted Wasserstein perturbation, we extend classical results from the static setting to the dynamic multi-period setting. Under mild conditions, we give an explicit formula for the first-order approximation to the value function. An optimization problem with a cost of weak type will also be discussed.

Thu, 03 Nov 2022

16:00 - 17:00
L3

Decentralised Finance and Automated Market Making: Optimal Execution and Liquidity Provision

Fayçal Drissi
Abstract

Automated Market Makers (AMMs) are a new type of trading venue that is revolutionising the way market participants interact. At present, the majority of AMMs are Constant Function Market Makers (CFMMs), where a deterministic trading function determines how markets are cleared. A distinctive characteristic of CFMMs is that execution costs for liquidity takers, and revenue for liquidity providers, are given by closed-form functions of price, liquidity, and transaction size. This gives rise to a new class of trading problems. We focus on Constant Product Market Makers with Concentrated Liquidity and show how to optimally take and make liquidity. We use Uniswap v3 data to study price and liquidity dynamics and to motivate the models.

For liquidity taking, we describe how to optimally trade a large position in an asset and how to execute statistical arbitrages based on market signals. For liquidity provision, we show how the wealth decomposes into a fee and an asset component. Finally, we perform consecutive runs of in-sample estimation of model parameters and out-of-sample trading to showcase the performance of the strategies.
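The closed-form execution cost for a liquidity taker is easiest to see in the plain constant product case, without concentrated liquidity. A minimal sketch, with hypothetical reserves and fee, of how the amount received and the effective execution price follow directly from the trading function x * y = k:

def constant_product_swap(x_reserve, y_reserve, dx, fee=0.003):
    """Swap dx units of asset X into asset Y on a constant product pool.
    Returns Y received, the average price paid per unit of Y, and the execution cost."""
    dx_net = dx * (1.0 - fee)                      # the fee stays with liquidity providers
    k = x_reserve * y_reserve                      # pool invariant before the trade
    dy = y_reserve - k / (x_reserve + dx_net)      # amount of Y released by the pool
    mid_price = x_reserve / y_reserve              # marginal price of Y before the trade
    exec_price = dx / dy                           # average price actually paid
    return dy, exec_price, exec_price / mid_price - 1.0   # relative cost: slippage plus fee

# Example: a pool holding 1,000 X and 10 Y; buy Y by paying in 50 X.
dy, px, cost = constant_product_swap(1_000.0, 10.0, 50.0)

Concentrated liquidity (as in Uniswap v3) replaces the global reserves with liquidity posted on price ranges, but execution costs and fee revenue remain explicit functions of price, liquidity and trade size, which is what makes the optimal liquidity taking and provision problems above tractable.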

Thu, 20 Jun 2019

16:00 - 17:30
L2

A generic construction for high order approximation schemes of semigroups using random grids

Aurélien Alfonsi
(Ecole des Ponts ParisTech)
Abstract

Our aim is to construct high order approximation schemes for general semigroups of linear operators $P_{t}, t \ge 0$. To do so, we fix a time horizon $T$ and the discretization steps $h_{l}=\frac{T}{n^{l}}, l\in\mathbb{N}$, and we suppose that we have at hand some short-time approximation operators $Q_{l}$ such that $P_{h_{l}}=Q_{l}+O(h_{l}^{1+\alpha })$ for some $\alpha >0$. Then, we consider random time grids $\Pi (\omega )=\{t_0(\omega )=0<t_{1}(\omega )<\dots<t_{m}(\omega )=T\}$ such that for all $1\le k\le m$, $t_{k}(\omega )-t_{k-1}(\omega )=h_{l_{k}}$ for some $l_{k}\in \mathbb{N}$, and we associate the approximation discrete semigroup $P_{T}^{\Pi (\omega )}=Q_{l_{m}}\dots Q_{l_{1}}$. Our main result is the following: for any approximation order $\nu$, we can construct random grids $\Pi_{i}(\omega )$ and coefficients $c_{i}$, $i=1,\dots,r$, such that
$$P_{T}f(x)=\sum_{i=1}^{r}c_{i}\,\mathbb{E}\big(P_{T}^{\Pi _{i}(\omega )}f(x)\big)+O(n^{-\nu}),$$
with the expectation taken over the random grids $\Pi _{i}(\omega )$. Moreover, $\mathrm{Card}(\Pi _{i}(\omega ))=O(n)$ and the complexity of the algorithm is of order $n$, for any order of approximation $\nu$. The standard example concerns diffusion processes, using the Euler approximation for $Q_l$; in this particular case, and under suitable conditions, we are able to gather the terms in order to produce an estimator of $P_Tf$ with finite variance. However, an important feature of our approach is its universality, in the sense that it works for any general semigroup $P_{t}$ and any approximation scheme. Moreover, approximation schemes sharing the same $\alpha$ lead to the same random grids $\Pi_{i}$ and coefficients $c_{i}$. Numerical illustrations are given for ordinary differential equations, piecewise deterministic Markov processes and diffusions.
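As a concrete and standard illustration of a short-time operator $Q_l$ (not specific to the talk): for a diffusion $dX_t=b(X_t)\,dt+\sigma(X_t)\,dW_t$ with semigroup $P_tf(x)=\mathbb{E}[f(X_t)\mid X_0=x]$, one may take the one-step Euler scheme
$$Q_l f(x)=\mathbb{E}\big[f\big(x+b(x)\,h_l+\sigma(x)\sqrt{h_l}\,Z\big)\big],\qquad Z\sim\mathcal{N}(0,1),$$
which, for sufficiently smooth coefficients and test functions, satisfies $P_{h_l}=Q_l+O(h_l^{2})$, i.e. $\alpha=1$ in the notation above.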

Thu, 06 Jun 2019

16:00 - 17:30
L4

tba

tba

Thu, 30 May 2019

16:00 - 17:30
L4

Adapted Wasserstein distances and their role in mathematical finance

Julio Backhoff
(University of Vienna)
Abstract

The problem of model uncertainty in financial mathematics has received considerable attention in recent years. In this talk I will follow a non-parametric point of view and argue that an insightful approach to model uncertainty should not be based on the familiar Wasserstein distances. I will then provide evidence supporting the better suitability of the recent notion of adapted Wasserstein distances (also known as nested distances in the literature). Unlike their more familiar counterparts, these transport metrics explicitly take the role of information/filtrations into account. Based on joint work with M. Beiglböck, D. Bartl and M. Eder.
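For reference (standard notation, not necessarily that of the talk): for laws $\mu,\nu$ of a discrete-time process $(X_1,\dots,X_T)$, the adapted Wasserstein distance restricts the usual optimal transport problem to bicausal couplings, i.e. couplings whose conditional kernels respect the filtrations in both directions,
$$\mathcal{AW}_p(\mu,\nu)=\inf_{\pi\in\mathrm{Cpl}_{\mathrm{bc}}(\mu,\nu)}\Big(\int d(x,y)^p\,\pi(dx,dy)\Big)^{1/p}.$$
Unlike the ordinary Wasserstein distance, closeness in $\mathcal{AW}_p$ also forces the conditional laws to be close, which is why values of optimal stopping, hedging and utility maximisation problems behave continuously with respect to it.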

Thu, 09 May 2019

16:00 - 17:30
L4

Deep Learning Volatility

Blanka Horvath
(King's College London)
Abstract

We present a consistent neural network based calibration method for a number of volatility models, including the rough volatility family, that performs the calibration task within a few milliseconds for the full implied volatility surface.
The aim of the neural networks in this work is an offline approximation of complex pricing functions, which are difficult to represent or time-consuming to evaluate by other means. We highlight how this perspective opens new horizons for quantitative modelling: the calibration bottleneck posed by slow pricing of derivative contracts is lifted, bringing several model families (such as rough volatility models) within the scope of applicability in industry practice. As is customary in machine learning, the form in which information from available data is extracted and stored is crucial for network performance. With this in mind, we discuss how our approach addresses the usual challenges of machine learning solutions in a financial context (availability of training data, interpretability of results for regulators, control over generalisation errors). We present specific architectures for price approximation and calibration, and optimize these with respect to different objectives regarding accuracy, speed and robustness. We also find that including the intermediate step of learning the pricing functions of (classical or rough) models before calibration significantly improves network performance compared to direct calibration to data.
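A minimal sketch of the two-step idea described above (learn the pricing map offline, then calibrate against it): a small feed-forward network maps model parameters to an implied volatility grid, and calibration becomes an optimisation over the network's inputs. The layer sizes, grid shape and training loop are illustrative assumptions, not the architectures of the talk.

import torch
import torch.nn as nn

N_PARAMS, N_GRID = 4, 8 * 11   # e.g. 4 model parameters, 8 maturities x 11 strikes (illustrative)

# Surrogate pricer: model parameters -> implied volatility surface on a fixed grid.
pricer = nn.Sequential(
    nn.Linear(N_PARAMS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_GRID),
)

def train(pricer, params, surfaces, epochs=200, lr=1e-3):
    """params: (n, N_PARAMS) tensor sampled from the model's parameter space;
    surfaces: (n, N_GRID) tensor of vols computed offline with a slow reference pricer."""
    opt = torch.optim.Adam(pricer.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(pricer(params), surfaces)
        loss.backward()
        opt.step()
    return pricer

def calibrate(pricer, market_surface, steps=500, lr=1e-2):
    """Calibration step: optimise the network inputs to match an observed surface."""
    theta = torch.zeros(1, N_PARAMS, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((pricer(theta) - market_surface) ** 2).mean()
        loss.backward()
        opt.step()
    return theta.detach()

Because evaluating the trained network is essentially instantaneous, the calibration loop runs in milliseconds, which is the bottleneck removal referred to above.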

Thu, 02 May 2019

16:00 - 17:30
L4

Equilibrium asset pricing with transaction costs

Johannes Muhle-Karbe
(Imperial College London)
Abstract

In the first part of the talk, we study risk-sharing equilibria where heterogeneous agents trade subject to quadratic transaction costs. The corresponding equilibrium asset prices and trading strategies are characterised by a system of nonlinear, fully coupled forward-backward stochastic differential equations. We show that a unique solution generally exists provided that the agents' preferences are sufficiently similar. In a benchmark specification, the illiquidity discounts and liquidity premia observed empirically correspond to a positive relationship between transaction costs and volatility.
In the second part of the talk, we discuss how the model can be calibrated to time series of prices and the corresponding trading volume, and explain how extensions of the model with general transaction costs, for example, can be solved numerically using the deep learning approach of Han, Jentzen, and E (2018).
(Based on joint works with Martin Herdegen and Dylan Possamai, as well as with Lukas Gonon and Xiaofei Shi.)

 
Thu, 07 Mar 2019

16:00 - 17:30
L4

Strategic Fire-Sales and Price-Mediated Contagion in the Banking System

Dr Lakshithe Wagalath
(IESEG France)
Abstract

We consider a price-mediated contagion framework in which each bank, after an exogenous shock, may have to sell assets in order to comply with regulatory constraints. Interaction between banks takes place only through price impact. We characterize the equilibrium of the strategic deleveraging problem and calibrate our model to publicly available data on the US banks that were part of the 2015 regulatory stress tests. We then consider a more sophisticated model in which each bank is exposed to two risky assets (one marketable and one non-marketable) and is only able to sell the marketable asset. We calibrate this model using the six banks with significant trading operations and show that, depending on the price impact, the contagion of failures may be significant. Our results may be used to refine current stress-testing frameworks by incorporating potential contagion mechanisms between banks. This is joint work with Yann Braouezec.
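For intuition only, and explicitly not the strategic equilibrium analysed in the talk: a deliberately simplified, non-strategic sketch of price-mediated deleveraging with a single marketable asset, a leverage cap and linear price impact. The variable names and the proportional selling rule are illustrative assumptions.

import numpy as np

def fire_sale_rounds(equity, shares, other_assets, impact, max_leverage, n_rounds=50):
    """Stylised price-mediated contagion. equity, shares, other_assets: per-bank arrays;
    impact and max_leverage: scalars. Each round, banks above the leverage cap sell shares
    (proceeds repay debt, so the sale itself leaves equity unchanged); aggregate sales
    depress the price linearly and impose mark-to-market losses on remaining holdings."""
    price = 1.0
    for _ in range(n_rounds):
        assets = shares * price + other_assets
        target = max_leverage * np.maximum(equity, 0.0)           # failed banks must sell everything
        sales = np.clip((assets - target) / price, 0.0, shares)   # units of the asset to sell
        if sales.sum() < 1e-12:
            break
        shares = shares - sales
        new_price = price * max(1.0 - impact * sales.sum(), 0.0)  # linear price impact
        equity = equity - shares * (price - new_price)            # loss on remaining holdings
        price = new_price
    return equity, price, int((equity <= 0).sum())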

 
Thu, 28 Feb 2019

16:00 - 17:30
L4

Mean-Field Games with Differing Beliefs for Algorithmic Trading

Sebastian Jaimungal
(University of Toronto)
Abstract

Even when confronted with the same data, agents often disagree on a model of the real world. Here, we address the question of how interacting heterogeneous agents, who disagree on what model the real world follows, optimize their trading actions. The market has latent factors that drive prices, and agents account for the permanent impact they have on prices. This leads to a large stochastic game, where each agent's performance criterion is computed under a different probability measure. We analyse the mean-field game (MFG) limit of the stochastic game and show that the Nash equilibrium is given by the solution to a non-standard vector-valued forward-backward stochastic differential equation. Under some mild assumptions, we construct the solution in terms of expectations of the filtered states. We prove that the MFG strategy forms an $\epsilon$-Nash equilibrium for the finite player game. Lastly, we present a least-squares Monte Carlo based algorithm for computing the optimal control and illustrate the results through simulations in a market where agents disagree on the model.
[Joint work with Philippe Casgrain, University of Toronto.]

Thu, 21 Feb 2019

16:00 - 17:30
L4

Zero-sum stopping games with asymmetric information

Jan Palczewski
(Leeds University)
Abstract

We study the value of a zero-sum stopping game in which the terminal payoff function depends on the underlying process and on an additional source of randomness (with finitely many states) which is known to one player but unknown to the other. Such asymmetry of information arises naturally in insider trading, when one of the counterparties knows an announcement before it is publicly released, e.g., a central bank's interest rate decision or company earnings/business plans. In the context of game options, this splits the pricing problem into the phase before the announcement (asymmetric information) and after the announcement (full information); the value of the latter exists and forms the terminal payoff of the asymmetric phase.

The above game does not have a value if both players use pure stopping times, as the informed player's actions would reveal too much of his excess knowledge. The informed player manages the trade-off between releasing information and stopping optimally by employing randomised stopping times. We reformulate the stopping game as a zero-sum game between a stopper (the uninformed player) and a singular controller (the informed player). We prove existence of the value of the latter game for a large class of underlying strong Markov processes, including multivariate diffusions and Feller processes. The main tools are approximations by smooth singular controls and by discrete-time games.

Thu, 14 Feb 2019

16:00 - 17:30
L4

Static vs Adaptive Strategies for Optimal Execution with Signals

Eyal Neumann
(Imperial College London)
Abstract

We consider an optimal execution problem in which a trader is looking at a short-term price-predictive signal while trading. In the case where the trader creates an instantaneous market impact, we show that the transaction costs resulting from the optimal adaptive strategy are substantially lower than the corresponding costs of the optimal static strategy. We then investigate the case where the trader creates transient market impact. We show that strategies in which the trader observes the signal a number of times during the trading period can dramatically reduce transaction costs and improve on the performance of the optimal static strategy. These results answer a question raised by Brigo and Piat [1] by analysing two cases where adaptive strategies can improve the performance of the execution. This is joint work with Claudio Bellani, Damiano Brigo and Alex Done.

Thu, 31 Jan 2019

16:00 - 17:30
L4

Machine learning for volatility

Dr Martin Tegner
(Department of Engineering and Oxford Man Institute)
Abstract

The main focus of this talk will be a nonparametric approach to local volatility. We look at the calibration problem in a probabilistic framework based on Gaussian process priors. This gives a way of encoding prior beliefs about the local volatility function and a model which is flexible yet not prone to overfitting. Besides providing a method for calibrating a (range of) point estimate(s), we draw posterior inference from the distribution over local volatility. This leads to a principled understanding of the uncertainty attached to the calibration. Further, we seek to infer dynamical properties of local volatility by augmenting the input space with a time dimension. Ideally, this provides predictive distributions not only locally, but also for entire surfaces forward in time. We apply our approach to S&P 500 market data.

In the final part of the talk we will give a short account of a nonparametric approach to modelling realised volatility. Again we take a probabilistic view and formulate a hypothesis space of stationary processes for volatility based on Gaussian processes. We demonstrate the approach on the S&P 500 index.
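A minimal sketch of the underlying machinery: placing a Gaussian process prior over a function of strike and maturity and drawing the posterior mean and pointwise uncertainty from noisy observations. The calibration in the talk goes through a pricing map rather than observing local volatility directly, and the kernel, data and length-scales below are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative noisy "observations" of a surface at (strike, maturity) points.
X_obs = np.array([[80, 0.25], [100, 0.25], [120, 0.25],
                  [80, 1.00], [100, 1.00], [120, 1.00]], dtype=float)
y_obs = np.array([0.30, 0.22, 0.25, 0.27, 0.21, 0.23])

kernel = RBF(length_scale=[20.0, 0.5]) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_obs, y_obs)

# Posterior mean and pointwise standard deviation on new (strike, maturity) points.
X_new = np.array([[90, 0.5], [110, 0.5]], dtype=float)
mean, std = gp.predict(X_new, return_std=True)

Posterior draws over whole surfaces, rather than pointwise standard deviations, are what provide the uncertainty quantification and the forward-in-time predictive distributions mentioned above.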

Thu, 24 Jan 2019

16:00 - 17:30
L4

Contagion and Systemic Risk in Heterogeneous Financial Networks

Dr Thilo Meyer-Brandis
(University of Munich)
Abstract

One of the defining features of modern financial networks is their inherently complex and intertwined structure. In particular, the often observed core-periphery structure plays a prominent role. Here we study and quantify the impact that the complexity of networks has on contagion effects and system stability, with a focus on the channel of default contagion, which describes the spread of initial distress via direct balance sheet exposures. We present a general approach describing the financial network by a random graph, where we distinguish vertices (institutions) of different types - for example core/periphery - and let edge probabilities and weights (exposures) depend on the types of both the receiving and the sending vertex. Our main result allows us to compute explicitly the systemic damage caused by some initial local shock event, and we derive a complete characterization of resilient and non-resilient financial systems in terms of their global statistical characteristics. Due to the random graph approach, these results are considerably robust to local uncertainties and small changes of the network structure over time. Applications of our theory demonstrate that the features captured by our model can indeed have a significant impact on system stability; we derive resilience conditions for the global network based on subnetwork conditions only.

Thu, 17 Jan 2019

16:00 - 17:30
L4

When does portfolio compression reduce systemic risk?

Dr Luitgard Veraart
(London School of Economics)
Abstract

We analyse the consequences of conservative portfolio compression, i.e., netting cycles in financial networks, on systemic risk. We show that the recovery rate in case of default plays a significant role in determining whether portfolio compression is potentially beneficial. If recovery rates of defaulting nodes are zero, then compression weakly reduces systemic risk. We also provide a necessary condition under which compression strongly reduces systemic risk. If recovery rates are positive, we show that whether compression is potentially beneficial or harmful for individual institutions does not depend on the network alone but also on quantities outside the network. In particular, we show that portfolio compression can have negative effects both for institutions that are part of the compression cycle and for those that are not. Furthermore, we show that while a given conservative compression might be beneficial for some shocks, it might be detrimental for others. In particular, the distribution of the shock over the network matters, not just its size.
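To fix ideas, conservative compression of a single cycle reduces every obligation along the cycle by the smallest obligation in it, leaving each institution's net position unchanged. A minimal sketch on a liability matrix; finding cycles and choosing which ones to compress is a separate problem, and the recovery-rate effects discussed above are not modelled here.

import numpy as np

def compress_cycle(L, cycle):
    """Conservatively compress the directed cycle (e.g. [0, 1, 2] means 0->1->2->0)
    in the liability matrix L, where L[i, j] is what institution i owes institution j.
    Every edge on the cycle is reduced by the minimum edge weight, so each
    institution's net position (total owed minus total due) is unchanged."""
    L = L.astype(float).copy()
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    delta = min(L[i, j] for i, j in edges)
    for i, j in edges:
        L[i, j] -= delta
    return L

# Example: 0 owes 1 five, 1 owes 2 three, 2 owes 0 four; compression removes 3 from each edge.
L = np.array([[0, 5, 0], [0, 0, 3], [4, 0, 0]])
L_compressed = compress_cycle(L, [0, 1, 2])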

Tue, 04 Dec 2018

16:00 - 17:30
L4

Quantifying Ambiguity Bounds Through Hypothetical Statistical Testing

Anne Balter
Abstract

Authors: Anne Balter and Antoon Pelsser

Models can be wrong and recognising their limitations is important in financial and economic decision making under uncertainty. Robust strategies, which are least sensitive to perturbations of the underlying model, take uncertainty into account. Interpreting the explicit set of alternative models surrounding the baseline model has been difficult so far. We specify alternative models by a stochastic change of probability measure and derive a quantitative bound on the uncertainty set. We find an explicit ex ante relation between the choice parameter $k$, which is the radius of the uncertainty set, and the Type I and II error probabilities of the statistical test that is hypothetically performed to investigate whether the model specification could be rejected at the future test horizon. The hypothetical test is constructed to obtain all alternative models that cannot be distinguished from the baseline model with sufficient power. Moreover, we link the ambiguity bound, which is now a function of interpretable variables, to numerical values of several divergence measures. Finally, we illustrate the methodology on a robust investment problem and identify how the robustness multiplier can be numerically interpreted by ascribing meaning to the amount of ambiguity.

Thu, 29 Nov 2018

16:00 - 17:30
L4

tba

tba