Forthcoming events in this series


Thu, 09 Feb 2023

16:00 - 17:00
L6

Short-term predictability of returns in limit order markets: a deep learning perspective

Lorenzo Lucchese
Abstract

We conduct a systematic large-scale analysis of order book-driven predictability in high-frequency returns by leveraging deep learning techniques. First, we introduce a new and robust representation of the order book, the volume representation. Next, we carry out an extensive empirical experiment to address various questions regarding predictability. We investigate if and how far ahead there is predictability, the importance of a robust data representation, the advantages of multi-horizon modeling, and the presence of universal trading patterns. We use model confidence sets, which provide a formalized statistical inference framework particularly well suited to answer these questions. Our findings show that at high frequencies predictability in mid-price returns is not just present, but ubiquitous. The performance of the deep learning models is strongly dependent on the choice of order book representation, and in this respect, the volume representation appears to have multiple practical advantages.

Thu, 02 Feb 2023

16:00 - 17:00
L6

Energy transition under scenario uncertainty: a mean-field game approach

Roxana Dumitrescu
Abstract

We study the impact of transition scenario uncertainty, and in particular, the uncertainty about future carbon price and electricity demand, on the pace of decarbonization of the electricity industry. To this end, we build a discrete time mean-field game model for the long-term dynamics of the electricity market subject to common random shocks affecting the carbon price and the electricity demand. These shocks depend on a macroeconomic scenario, which is not observed by the agents, but can be partially deduced from the frequency of the shocks. Due to this partial observation feature, the common noise is non-Markovian. We consider two classes of agents: conventional producers and renewable producers. The former choose an optimal moment to exit the market and the latter choose an optimal moment to enter the market by investing into renewable generation. The agents interact through the market price determined by a merit order mechanism with an exogenous stochastic demand. We prove the existence of Nash equilibria in the resulting mean-field game of optimal stopping with common noise, developing a novel linear programming approach for these problems. We illustrate our model by an example inspired by the UK electricity market, and show that scenario uncertainty leads to significant changes in the speed of replacement of conventional generators by renewable production.

Thu, 19 Jan 2023

16:00 - 17:00
L6

Model Calibration with Optimal Transport

Benjamin Joseph
Abstract

In order for one to infer reasonable predictions from a model, it must be calibrated to reproduce observations in the market. We use the semimartingale optimal transport methodology to formulate this calibration problem as a constrained optimisation problem, with the model calibrated using a finite number of European options observed in the market as constraints. From the resulting PDE formulation, we are then able to derive a dual formulation involving an HJB equation, which we can solve numerically. We focus on two cases: (1) the stochastic interest rate is known and perfectly matches the observed term structure in the market, but the asset local volatility and correlation are not known and must be calibrated; (2) the dynamics of both the stochastic interest rate and the underlying asset are unknown, and we must jointly calibrate both to European options on the interest rate and on the asset.

Thu, 01 Dec 2022

16:00 - 17:00
L3

Convergence of policy gradient methods for finite-horizon stochastic linear-quadratic control problems

Michael Giegrich
Abstract

We study the global linear convergence of policy gradient (PG) methods for finite-horizon exploratory linear-quadratic control (LQC) problems. The setting includes stochastic LQC problems with indefinite costs and allows additional entropy regularisers in the objective. We consider a continuous-time Gaussian policy whose mean is linear in the state variable and whose covariance is state-independent. Contrary to discrete-time problems, the cost is noncoercive in the policy and not all descent directions lead to bounded iterates. We propose geometry-aware gradient descents for the mean and covariance of the policy using the Fisher geometry and the Bures-Wasserstein geometry, respectively. The policy iterates are shown to obey an a-priori bound, and converge globally to the optimal policy with a linear rate. We further propose a novel PG method with discrete-time policies. The algorithm leverages the continuous-time analysis, and achieves a robust linear convergence across different action frequencies. A numerical experiment confirms the convergence and robustness of the proposed algorithm.

This is joint work with Yufei Zhang and Christoph Reisinger.

Thu, 24 Nov 2022

16:00 - 17:00
L3

Graph-based Methods for Forecasting Realized Covariances

Chao Zhang
Abstract

We forecast the realized covariance matrix of asset returns in the U.S. equity market by exploiting the predictive information of graphs in volatility and correlation. Specifically, we augment the Heterogeneous Autoregressive (HAR) model via neighborhood aggregation on these graphs. Our proposed method allows for the modeling of interdependence in volatility (also known as spillover effect) and correlation, while maintaining parsimony and interpretability. We explore various graph construction methods, including sector membership and graphical LASSO (for modeling volatility), and line graph (for modeling correlation). The results generally suggest that the augmented model incorporating graph information yields both statistically and economically significant improvements for out-of-sample performance over the traditional models. Such improvements remain significant over horizons up to one month ahead, but decay in time. The robustness tests demonstrate that the forecast improvements are obtained consistently over the different out-of-sample sub-periods, and are insensitive to measurement errors of volatilities.
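The HAR baseline and a graph neighborhood-aggregation term can be sketched as follows. This is a minimal illustration on synthetic data: the row-normalised adjacency average and the single augmented regressor are our assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 4                      # days, assets
rv = np.abs(rng.standard_normal((T, N))) + 1.0   # synthetic realized variances
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1],
              [1, 0, 0, 1], [0, 1, 1, 0]], float)  # hypothetical asset graph
A /= A.sum(1, keepdims=True)       # row-normalised adjacency for averaging

def har_features(rv, A, t, i):
    """HAR lags (daily, weekly, monthly) plus a neighbour-averaged daily lag."""
    d = rv[t - 1, i]
    w = rv[t - 5:t, i].mean()
    m = rv[t - 22:t, i].mean()
    nbr = A[i] @ rv[t - 1]         # neighborhood aggregation on the graph
    return [1.0, d, w, m, nbr]

X, y = [], []
for t in range(22, T):
    for i in range(N):
        X.append(har_features(rv, A, t, i))
        y.append(rv[t, i])
X, y = np.array(X), np.array(y)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit of the augmented HAR
pred = X @ beta
print("coefficients:", np.round(beta, 3))
```

On real data the neighbour term would carry the spillover information; here, with i.i.d. synthetic variances, the fit simply recovers the unconditional mean.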

Thu, 17 Nov 2022

16:00 - 17:00
L3

Simulating Arbitrage-Free Implied Volatility Surfaces

Milena Vuletic
Abstract

We present a computationally tractable method for simulating arbitrage-free implied volatility surfaces. Our approach reconciles static arbitrage constraints with a realistic representation of the statistical properties of implied volatility co-movements.
We illustrate our method with two examples. First, we propose a dynamic factor model for the implied volatility surface, and show how our method may be used to remove static arbitrage from model scenarios. As a second example, we propose a nonparametric generative model for implied volatility surfaces based on a Generative Adversarial Network (GAN).
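In the standard formulation, the static arbitrage constraints referred to above are convexity of call prices in strike (no butterfly arbitrage) and monotonicity in maturity (no calendar arbitrage). A minimal checker sketch of these conditions, not the authors' method:

```python
import numpy as np

def butterfly_arbitrage(strikes, calls):
    """Butterfly check: call prices must be convex in strike.
    Returns True if any discrete second difference is negative."""
    k, c = np.asarray(strikes, float), np.asarray(calls, float)
    # second divided differences on a possibly non-uniform strike grid
    d2 = np.diff(np.diff(c) / np.diff(k)) / (k[2:] - k[:-2])
    return bool((d2 < -1e-12).any())

def calendar_arbitrage(calls_by_maturity):
    """Calendar check: at a fixed strike, call prices must be
    non-decreasing in maturity (zero rates/dividends assumed)."""
    c = np.asarray(calls_by_maturity, float)   # shape (n_maturities, n_strikes)
    return bool((np.diff(c, axis=0) < -1e-12).any())

strikes = [90, 100, 110]
good = [13.0, 7.0, 3.5]   # convex in strike: no butterfly arbitrage
bad = [13.0, 9.0, 3.5]    # concave kink at 100: butterfly arbitrage
print(butterfly_arbitrage(strikes, good), butterfly_arbitrage(strikes, bad))
```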

Thu, 10 Nov 2022

16:00 - 17:00
L3

Sensitivity of robust optimization over an adapted Wasserstein ambiguity set

Yifan Jiang
Abstract

In this talk, we consider the sensitivity of an optimization problem to model uncertainty. By introducing an adapted Wasserstein perturbation, we extend the classical results in the static setting to the dynamic multi-period setting. Under mild conditions, we give an explicit formula for the first-order approximation of the value function. An optimization problem with a cost of weak type will also be discussed.

Thu, 03 Nov 2022

16:00 - 17:00
L3

Decentralised Finance and Automated Market Making: Optimal Execution and Liquidity Provision

Fayçal Drissi
Abstract

Automated Market Makers (AMMs) are a new type of trading venue which is revolutionising the way market participants interact. At present, the majority of AMMs are Constant Function Market Makers (CFMMs), where a deterministic trading function determines how markets are cleared. A distinctive characteristic of CFMMs is that execution costs for liquidity takers, and revenue for liquidity providers, are given by closed-form functions of price, liquidity, and transaction size. This gives rise to a new class of trading problems. We focus on Constant Product Market Makers with Concentrated Liquidity and show how to optimally take and make liquidity. We use Uniswap v3 data to study price and liquidity dynamics and to motivate the models.

For liquidity taking, we describe how to optimally trade a large position in an asset and how to execute statistical arbitrages based on market signals. For liquidity provision, we show how the wealth decomposes into a fee and an asset component. Finally, we perform consecutive runs of in-sample estimation of model parameters and out-of-sample trading to showcase the performance of the strategies.
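For a plain constant product pool (ignoring fees and the concentrated-liquidity ranges the talk treats), the closed-form execution quantities are elementary; a minimal sketch:

```python
def constant_product_swap(x, y, dx):
    """Sell dx of asset X into a constant product pool with reserves (x, y).
    The invariant x * y = k pins down the output dy in closed form."""
    k = x * y
    dy = y - k / (x + dx)          # amount of Y received
    exec_price = dy / dx           # average execution price
    marginal_before = y / x        # instantaneous (marginal) pool price
    return dy, exec_price, marginal_before

dy, p_exec, p_mid = constant_product_swap(x=1000.0, y=2000.0, dx=50.0)
print(f"received {dy:.2f} Y at average price {p_exec:.4f} vs mid {p_mid:.4f}")
```

The gap between `p_exec` and `p_mid` is exactly the closed-form execution cost mentioned in the abstract: it grows with trade size and shrinks with pool liquidity.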

Thu, 20 Jun 2019

16:00 - 17:30
L2

A generic construction for high order approximation schemes of semigroups using random grids

Aurélien Alfonsi
(Ecole des Ponts ParisTech)
Abstract

Our aim is to construct high order approximation schemes for general semigroups of linear operators $P_{t}, t \ge 0$. To do so, we fix a time horizon $T$ and the discretization steps $h_{l}=\frac{T}{n^{l}}, l\in \mathbb{N}$, and we suppose that we have at hand some short-time approximation operators $Q_{l}$ such that $P_{h_{l}}=Q_{l}+O(h_{l}^{1+\alpha })$ for some $\alpha >0$. Then, we consider random time grids $\Pi (\omega )=\{t_0(\omega )=0<t_{1}(\omega )<\dots<t_{m}(\omega )=T\}$ such that for all $1\le k\le m$, $t_{k}(\omega )-t_{k-1}(\omega )=h_{l_{k}}$ for some $l_{k}\in \mathbb{N}$, and we associate the approximation discrete semigroup $P_{T}^{\Pi (\omega )}=Q_{l_{m}}\cdots Q_{l_{1}}$. Our main result is the following: for any approximation order $\nu$, we can construct random grids $\Pi_{i}(\omega )$ and coefficients $c_{i}$, with $i=1,\dots,r$, such that $P_{T}f(x)=\sum_{i=1}^{r}c_{i}\, E\big(P_{T}^{\Pi _{i}(\omega )}f(x)\big)+O(n^{-\nu})$, with the expectation taken over the random grids $\Pi _{i}(\omega )$. Besides, $\mathrm{Card}(\Pi _{i}(\omega ))=O(n)$ and the complexity of the algorithm is of order $n$, for any order of approximation $\nu$. The standard example concerns diffusion processes, using the Euler approximation for $Q_l$. In this particular case and under suitable conditions, we are able to gather the terms in order to produce an estimator of $P_T f$ with finite variance. However, an important feature of our approach is its universality, in the sense that it works for every general semigroup $P_{t}$ and its approximations. Besides, approximation schemes sharing the same $\alpha$ lead to the same random grids $\Pi_{i}$ and coefficients $c_{i}$. Numerical illustrations are given for ordinary differential equations, piecewise deterministic Markov processes and diffusions.
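A toy illustration of the building block: composing short-time Euler operators $Q_l$ along a random grid for a scalar ODE semigroup. The paper's coefficients $c_i$ and the distribution of the random grids are its main content and are not reproduced here; the grid-drawing rule below is purely illustrative.

```python
import random

# Illustrative sketch: compose short-time Euler operators Q_l along a random
# time grid for the ODE semigroup P_T f(x) = f(X_T), dX_t = a X_t dt,
# so that the exact value is x * exp(a T).
a, T, n = 0.7, 1.0, 4              # drift, horizon, base discretisation parameter

def euler_step(x, h):
    """One Euler step: the short-time approximation Q with step h."""
    return x + a * x * h

def random_grid(T, n, levels=(1, 2)):
    """Draw step sizes h_l = T / n**l until the horizon T is filled
    (illustrative rule, not the paper's grid distribution)."""
    steps, t = [], 0.0
    while t < T - 1e-12:
        h = min(T / n ** random.choice(levels), T - t)  # last step hits T exactly
        steps.append(h)
        t += h
    return steps

def apply_grid(x, steps):
    """Discrete semigroup P_T^{Pi(omega)}: compose Q along the grid."""
    for h in steps:
        x = euler_step(x, h)
    return x

random.seed(0)
x0 = 1.0
estimates = [apply_grid(x0, random_grid(T, n)) for _ in range(2000)]
mc = sum(estimates) / len(estimates)
exact = x0 * 2.718281828459045 ** (a * T)
print(f"Monte Carlo over random grids: {mc:.4f}, exact: {exact:.4f}")
```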

Thu, 06 Jun 2019

16:00 - 17:30
L4

tba

tba
Thu, 30 May 2019

16:00 - 17:30
L4

Adapted Wasserstein distances and their role in mathematical finance

Julio Backhoff
(University of Vienna)
Abstract

The problem of model uncertainty in financial mathematics has received considerable attention in recent years. In this talk I will follow a non-parametric point of view and argue that an insightful approach to model uncertainty should not be based on the familiar Wasserstein distances. I will then provide evidence supporting the better suitability of the recent notion of adapted Wasserstein distances (also known as nested distances in the literature). Unlike their more familiar counterparts, these transport metrics take the role of information/filtrations explicitly into account. Based on joint work with M. Beiglböck, D. Bartl and M. Eder.

Thu, 09 May 2019

16:00 - 17:30
L4

Deep Learning Volatility

Blanka Horvath
(King's College London)
Abstract

We present a consistent neural-network-based calibration method for a number of volatility models, including the rough volatility family, that performs the calibration task within a few milliseconds for the full implied volatility surface.
The aim of neural networks in this work is an off-line approximation of complex pricing functions, which are difficult to represent or time-consuming to evaluate by other means. We highlight how this perspective opens new horizons for quantitative modelling: the calibration bottleneck posed by slow pricing of derivative contracts is lifted. This brings several model families (such as rough volatility models) within the scope of applicability in industry practice. As is customary for machine learning, the form in which information from available data is extracted and stored is crucial for network performance. With this in mind, we discuss how our approach addresses the usual challenges of machine learning solutions in a financial context (availability of training data, interpretability of results for regulators, control over generalisation errors). We present specific architectures for price approximation and calibration and optimize these with respect to different objectives regarding accuracy, speed and robustness. We also find that including the intermediate step of learning pricing functions of (classical or rough) models before calibration significantly improves network performance compared to direct calibration to data.

Thu, 02 May 2019

16:00 - 17:30
L4

Equilibrium asset pricing with transaction costs

Johannes Muhle-Karbe
(Imperial College London)
Abstract


In the first part of the talk, we study risk-sharing equilibria where heterogeneous agents trade subject to quadratic transaction costs. The corresponding equilibrium asset prices and trading strategies are characterised by a system of nonlinear, fully-coupled forward-backward stochastic differential equations. We show that a unique solution generally exists provided that the agents’ preferences are sufficiently similar. In a benchmark specification, the illiquidity discounts and liquidity premia observed empirically correspond to a positive relationship between transaction costs and volatility.
In the second part of the talk, we discuss how the model can be calibrated to time series of prices and the corresponding trading volume, and explain how extensions of the model with general transaction costs, for example, can be solved numerically using the deep learning approach of Han, Jentzen, and E (2018).
 (Based on joint works with Martin Herdegen and Dylan Possamai, as well as with Lukas Gonon and Xiaofei Shi)

 
Thu, 07 Mar 2019

16:00 - 17:30
L4

Strategic Fire-Sales and Price-Mediated Contagion in the Banking System

Dr Lakshithe Wagalath
(IESEG France)
Abstract

We consider a price-mediated contagion framework in which each bank, after an exogenous shock, may have to sell assets in order to comply with regulatory constraints. Interaction between banks takes place only through price impact. We characterize the equilibrium of the strategic deleveraging problem and we calibrate our model to publicly-available data, the US banks that were part of the 2015 regulatory stress-tests. We then consider a more sophisticated model in which each bank is exposed to two risky assets (marketable and not marketable) and is only able to sell the marketable asset. We calibrate our model using the six banks with significant trading operations and we show that, depending on the price impact, the contagion of failures may be significant. Our results may be used to refine current stress testing frameworks by incorporating potential contagion mechanisms between banks. This is joint work with Yann Braouezec.

 
Thu, 28 Feb 2019

16:00 - 17:30
L4

Mean-Field Games with Differing Beliefs for Algorithmic Trading

Sebastian Jaimungal
(University of Toronto)
Abstract

Even when confronted with the same data, agents often disagree on a model of the real world. Here, we address the question of how interacting heterogeneous agents, who disagree on what model the real world follows, optimize their trading actions. The market has latent factors that drive prices, and agents account for the permanent impact they have on prices. This leads to a large stochastic game, where each agent's performance criterion is computed under a different probability measure. We analyse the mean-field game (MFG) limit of the stochastic game and show that the Nash equilibrium is given by the solution to a non-standard vector-valued forward-backward stochastic differential equation. Under some mild assumptions, we construct the solution in terms of expectations of the filtered states. We prove that the MFG strategy forms an \epsilon-Nash equilibrium for the finite player game. Lastly, we present a least-squares Monte Carlo based algorithm for computing the optimal control and illustrate the results through simulation in a market where agents disagree on the model.
[ joint work with Philippe Casgrain, U. Toronto ]
 

Thu, 21 Feb 2019

16:00 - 17:30
L4

Zero-sum stopping games with asymmetric information

Jan Palczewski
(Leeds University)
Abstract

We study the value of a zero-sum stopping game in which the terminal payoff function depends on the underlying process and on an additional randomness (with finitely many states) which is known to one player but unknown to the other. Such asymmetry of information arises naturally in insider trading, when one of the counterparties knows an announcement before it is publicly released, e.g., a central bank's interest rate decision or company earnings/business plans. In the context of game options, this splits the pricing problem into the phase before the announcement (asymmetric information) and after the announcement (full information); the value of the latter exists and forms the terminal payoff of the asymmetric phase.

The above game does not have a value if both players use pure stopping times as the informed player's actions would reveal too much of his excess knowledge. The informed player manages the trade-off between releasing information and stopping optimally employing randomised stopping times. We reformulate the stopping game as a zero-sum game between a stopper (the uninformed player) and a singular controller (the informed player). We prove existence of the value of the latter game for a large class of underlying strong Markov processes including multi-variate diffusions and Feller processes. The main tools are approximations by smooth singular controls and by discrete-time games.

Thu, 14 Feb 2019

16:00 - 17:30
L4

Static vs Adaptive Strategies for Optimal Execution with Signals

Eyal Neumann
(Imperial College London)
Abstract

We consider an optimal execution problem in which a trader is looking at a short-term price-predictive signal while trading. In the case where the trader creates an instantaneous market impact, we show that the transaction costs resulting from the optimal adaptive strategy are substantially lower than the corresponding costs of the optimal static strategy. Later, we investigate the case where the trader creates a transient market impact. We show that strategies in which the trader observes the signal a number of times during the trading period can dramatically reduce the transaction costs and improve on the performance of the optimal static strategy. These results answer a question raised by Brigo and Piat [1] by analyzing two cases where adaptive strategies can improve the performance of the execution. This is joint work with Claudio Bellani, Damiano Brigo and Alex Done.

Thu, 31 Jan 2019

16:00 - 17:30
L4

Machine learning for volatility

Dr Martin Tegner
(Department of Engineering and Oxford Man Institute)
Abstract

The main focus of this talk will be a nonparametric approach to local volatility. We look at the calibration problem in a probabilistic framework based on Gaussian process priors. This gives a way of encoding prior beliefs about the local volatility function and a model which is flexible yet not prone to overfitting. Besides providing a method for calibrating a (range of) point-estimate(s), we draw posterior inference from the distribution over local volatility. This leads to a principled understanding of the uncertainty attached to the calibration. Further, we seek to infer dynamical properties of local volatility by augmenting the input space with a time dimension. Ideally, this provides predictive distributions not only locally, but also for entire surfaces forward in time. We apply our approach to S&P 500 market data.

 

In the final part of the talk we will give a short account of a nonparametric approach to modelling realised volatility. Again we take a probabilistic view and formulate a hypothesis space of stationary processes for volatility based on Gaussian processes. We demonstrate on the S&P 500 index.
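As a rough illustration of the Gaussian process viewpoint, the sketch below fits a GP to a synthetic volatility smile and reads off a posterior mean with an uncertainty band. The kernel choice and the data are our assumptions, and the talk's actual approach calibrates local volatility to option prices rather than fitting observed vols directly.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
K = np.linspace(80, 120, 15)[:, None]                   # strikes
true_vol = 0.2 + 0.001 * (K.ravel() - 100) ** 2 / 40    # synthetic smile
obs = true_vol + 0.005 * rng.standard_normal(len(K))    # noisy "market" vols

# GP prior over the vol curve: smooth RBF component plus observation noise
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=2.5e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(K, obs)

K_test = np.linspace(80, 120, 5)[:, None]
mean, std = gp.predict(K_test, return_std=True)         # posterior mean and std
for k, m, s in zip(K_test.ravel(), mean, std):
    print(f"K={k:5.1f}: vol {m:.3f} +/- {2 * s:.3f}")
```

The posterior standard deviation is the "principled understanding of uncertainty" in miniature: it widens wherever the data constrain the curve less.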

Thu, 24 Jan 2019

16:00 - 17:30
L4

Contagion and Systemic Risk in Heterogeneous Financial Networks

Dr Thilo Meyer-Brandis
(University of Munich)
Abstract

One of the most defining features of modern financial networks is their inherently complex and intertwined structure. In particular, the often observed core-periphery structure plays a prominent role. Here we study and quantify the impact that the complexity of networks has on contagion effects and system stability; our focus is on the channel of default contagion that describes the spread of initial distress via direct balance sheet exposures. We present a general approach describing the financial network by a random graph, where we distinguish vertices (institutions) of different types (for example, core/periphery) and let edge probabilities and weights (exposures) depend on the types of both the receiving and the sending vertex. Our main result allows us to compute explicitly the systemic damage caused by some initial local shock event, and we derive a complete characterization of resilient and non-resilient financial systems in terms of their global statistical characteristics. Due to the random graphs approach, these results bear considerable robustness to local uncertainties and small changes of the network structure over time. Applications of our theory demonstrate that the features captured by our model can indeed have significant impact on system stability; we derive resilience conditions for the global network based on subnetwork conditions only.

Thu, 17 Jan 2019

16:00 - 17:30
L4

When does portfolio compression reduce systemic risk?

Dr Luitgard Veraart
(London School of Economics)
Abstract

We analyse the consequences of conservative portfolio compression, i.e., netting cycles in financial networks, on systemic risk.  We show that the recovery rate in case of default plays a significant role in determining whether portfolio compression is potentially beneficial.  If recovery rates of defaulting nodes are zero then compression weakly reduces systemic risk. We also provide a necessary condition under which compression strongly reduces systemic risk.  If recovery rates are positive we show that whether compression is potentially beneficial or harmful for individual institutions does not just depend on the network itself but on quantities outside the network as well. In particular we show that  portfolio compression can have negative effects both for institutions that are part of the compression cycle and for those that are not. Furthermore, we show that while a given conservative compression might be beneficial for some shocks it might be detrimental for others. In particular, the distribution of the shock over the network matters and not just its size.  
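Conservative compression of a single liability cycle amounts to subtracting the smallest exposure along the cycle, which preserves every bank's net position while reducing gross exposures. A minimal sketch of this netting step (illustrative definition only, not the paper's analysis):

```python
def compress_cycle(liabilities, cycle):
    """Net one cycle in a liability network.
    liabilities: dict (debtor, creditor) -> amount.
    cycle: list of nodes [a, b, c] meaning a owes b, b owes c, c owes a."""
    edges = [(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))]
    nu = min(liabilities[e] for e in edges)   # maximal conservative netting
    out = dict(liabilities)
    for e in edges:
        out[e] -= nu                          # shrink every edge on the cycle
    return out

# Hypothetical network: cycle A -> B -> C -> A plus one off-cycle claim A -> C.
L = {("A", "B"): 10.0, ("B", "C"): 6.0, ("C", "A"): 8.0, ("A", "C"): 2.0}
compressed = compress_cycle(L, ["A", "B", "C"])
print(compressed)   # the smallest edge (B -> C) is netted to zero
```

Net positions are unchanged (each node loses the same amount of assets and liabilities), which is what makes the compression "conservative"; the paper's point is that this balance-sheet-neutral operation can still change default contagion either way.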

Tue, 04 Dec 2018

16:00 - 17:30
L4

Quantifying Ambiguity Bounds Through Hypothetical Statistical Testing

Anne Balter
Abstract

Authors: Anne Balter and Antoon Pelsser

Models can be wrong, and recognising their limitations is important in financial and economic decision making under uncertainty. Robust strategies, which are least sensitive to perturbations of the underlying model, take uncertainty into account. Interpreting the explicit set of alternative models surrounding the baseline model has been difficult so far. We specify alternative models by a stochastic change of probability measure and derive a quantitative bound on the uncertainty set. We find an explicit ex ante relation between the choice parameter k, which is the radius of the uncertainty set, and the Type I and II error probabilities of the statistical test that is hypothetically performed to investigate whether the model specification could be rejected at the future test horizon. The hypothetical test is constructed to obtain all alternative models that cannot be distinguished from the baseline model with sufficient power. Moreover, we also link the ambiguity bound, which is now a function of interpretable variables, to numerical values of several divergence measures. Finally, we illustrate the methodology on a robust investment problem and identify how the robustness multiplier can be numerically interpreted by ascribing meaning to the amount of ambiguity.

Thu, 29 Nov 2018

16:00 - 17:30
L4

tba

tba
Tue, 13 Nov 2018
16:00
C5

On some applications of excursion theory

Dr Marcin Wisniewolski
(University of Warsaw)
Abstract

During the talk I will present a new computational technique based on excursion theory for Markov processes. Some new results for classical processes such as Bessel processes and reflected Brownian motion will be shown. The most important point of the presented applications will be a new insight into the Hartman-Watson (HW) distributions. It turns out that excursion theory enables us to deduce a simple connection between HW distributions and the hyperbolic cosine of Brownian motion.

Thu, 08 Nov 2018

16:00 - 17:30
L4

On fully-dynamic risk-indifference pricing: time-consistency and other properties

Giulia Di Nunno
Abstract

Risk-indifference pricing is proposed as an alternative to utility indifference pricing, where a risk measure is used instead of a utility-based preference. Here, we propose to include the possibility of changing the attitude to risk evaluation as time progresses. This is particularly reasonable for long-term investments and strategies.

Then we introduce a fully-dynamic risk-indifference criteria, in which a whole family of risk measures is considered. The risk-indifference pricing system is studied from the point of view of its properties as a convex price system. We tackle questions of time-consistency in the risk evaluation and the corresponding prices. This analysis provides a new insight also to time-consistency for ordinary dynamic risk-measures.

Our techniques and results are set in the representation and extension theorems for convex operators. We shall argue and finally provide a setting in which fully-dynamic risk-indifference pricing is a well set convex price system.

The presentation is based on joint works with Jocelyne Bion-Nadal.

Thu, 25 Oct 2018

16:00 - 17:30
L4

Double auctions in welfare economics

Prof Teemu Pennanen
(King's College London)
Abstract

Welfare economics argues that competitive markets lead to efficient allocation of resources. The classical theorems are based on the Walrasian market model which assumes the existence of market clearing prices. The emergence of such prices remains debatable. We replace the Walrasian market model by double auctions and show that the conclusions of welfare economics remain largely the same. Double auctions are not only a more realistic description of real markets but they explain how equilibrium prices and efficient allocations emerge in practice. 

Thu, 18 Oct 2018

16:00 - 17:30
L4

Incomplete Equilibrium with a Stochastic Annuity

Kim Weston
(Rutgers University)
Abstract

In this talk, I will present an incomplete equilibrium model to determine the price of an annuity.  A finite number of agents receive stochastic income streams and choose between consumption and investment in the traded annuity.  The novelty of this model is its ability to handle running consumption and general income streams.  In particular, the model incorporates mean reverting income, which is empirically relevant but historically too intractable in equilibrium.  The model is set in a Brownian framework, and equilibrium is characterized and proven to exist using a system of fully coupled quadratic BSDEs.  This work is joint with Gordan Zitkovic.

Thu, 11 Oct 2018

16:00 - 17:30
L4

Model-free version of the BDG inequality and its applications

Rafal Lochowski
(Warsaw School of Economics)
Abstract

In my talk I will briefly introduce the model-free approach to mathematical finance, which uses Vovk's outer measure. Then, using the pathwise BDG inequality obtained by Beiglböck and Siorpaes and a modification of Vovk's measure, I will present and prove a model-free version of this inequality for continuous price paths. Finally, I will discuss possible applications, such as the existence and uniqueness of solutions of SDEs driven by continuous, model-free price paths. The talk is based on joint work with Farai Mhlanga and Lesiba Galane (University of Limpopo, South Africa).

Thu, 14 Jun 2018

16:00 - 17:30
L4

Machine Learning in Finance

Josef Teichmann
(ETH Zuerich)
Abstract

We present several instances of applications of machine learning technologies in mathematical finance, including pricing, hedging, calibration and filtering problems. We try to show that regularity theory of the involved equations plays a crucial role in designing such algorithms.

(based on joint works with Hans Buehler, Christa Cuchiero, Lukas Gonon, Wahid Khosrawi-Sardroudi, Ben Wood)

Thu, 07 Jun 2018

16:00 - 17:30
L4

Large Deviations for McKean Vlasov Equations and Importance Sampling

Goncalo dos Reis
(University of Edinburgh)
Abstract


We discuss two Freidlin-Wentzell large deviation principles for McKean-Vlasov equations (MV-SDEs) in certain path space topologies. The equations have a drift of polynomial growth, and an existence/uniqueness result is provided. We apply Monte Carlo methods for evaluating expectations of functionals of solutions to MV-SDEs with drifts of super-linear growth. We assume that the MV-SDE is approximated in the standard manner by means of an interacting particle system and propose two importance sampling (IS) techniques to reduce the variance of the resulting Monte Carlo estimator. In the "complete measure change" approach, the IS measure change is applied simultaneously in the coefficients and in the expectation to be evaluated. In the "decoupling" approach we first estimate the law of the solution in a first set of simulations without measure change and then perform a second set of simulations under the importance sampling measure using the approximate solution law computed in the first step.

Thu, 24 May 2018

16:00 - 17:30
L4

Computation of optimal transport and related hedging problems via penalization and neural networks

Michael Kupper
(University of Konstanz)
Abstract

We present a widely applicable approach to solving (multi-marginal, martingale) optimal transport and related problems via neural networks. The core idea is to penalize the optimization problem in its dual formulation and reduce it to a finite dimensional one which corresponds to optimizing a neural network with smooth objective function. We present numerical examples from optimal transport, and bounds on the distribution of a sum of dependent random variables. As an application we focus on the problem of risk aggregation under model uncertainty. The talk is based on joint work with Stephan Eckstein and Mathias Pohl.

Thu, 17 May 2018

16:00 - 17:30
L4

Accounting for the Epps Effect: Realized Covariation, Cointegration and Common Factors

Jeremy Large
(Economics (Oxford University))
Abstract

High-frequency realized variance approaches offer great promise for estimating asset prices’ covariation, but encounter difficulties connected to the Epps effect. This paper models the Epps effect in a stochastic volatility setting. It adds dependent noise to a factor representation of prices. The noise both offsets covariation and describes plausible lags in information transmission. Non-synchronous trading, another recognized source of the effect, is not required. A resulting estimator of correlations and betas performs well on LSE mid-quote data, lending empirical credence to the approach.

Thu, 10 May 2018

16:00 - 17:30
L3

From maps to apps: the power of machine learning and artificial intelligence for regulators

Stefan Hunt
(Financial Conduct Authority)
Abstract

Highlights:

• We increasingly live in a digital world and commercial companies are not the only beneficiaries. The public sector can also use data to tackle pressing issues.
• Machine learning is starting to make an impact on the tools regulators use, for spotting the bad guys, for estimating demand, and for tackling many other problems.
• The speech uses an array of examples to argue that much regulation is ultimately about recognising patterns in data. Machine learning helps us find those patterns.
 
Just as moving from paper maps to smartphone apps can make us better navigators, Stefan’s speech explains how the move from using traditional analysis to using machine learning can make us better regulators.
 
Mini Biography:
 
Stefan Hunt is the founder and Head of the Behavioural Economics and Data Science Unit. He has led the FCA’s use of these two fields and designed several pioneering economic analyses. He is an Honorary Professor at the University of Nottingham and has a PhD in economics from Harvard University.
 

Thu, 03 May 2018

16:00 - 17:30
L4

Generalized McKean-Vlasov stochastic control problems

Beatrice Acciaio
(LSE)
Abstract

I will consider McKean-Vlasov stochastic control problems
where the cost functions and the state dynamics depend upon the joint 
distribution of the controlled state and the control process. First, I 
will provide a suitable version of the Pontryagin stochastic maximum 
principle, showing that, in the present general framework, pointwise 
minimization of the Hamiltonian with respect to the control is not a 
necessary optimality condition. Then I will take a different 
perspective, and present a variational approach to study a weak 
formulation of such control problems, thereby establishing a new 
connection between those and optimal transport problems on path space.

The talk is based on a joint project with J. Backhoff-Veraguas and R. Carmona.

Thu, 26 Apr 2018

16:00 - 17:30
L4

Lévy forward price approach for multiple yield curves in presence of persistently low and negative interest rates

Zorana Grbac
(Paris)
Abstract

In this talk we present a framework for discretely compounding
interest rates which is based on the forward price process approach.
This approach has a number of advantages, in particular in the current
market environment. Compared to the classical Libor market models, it
allows in a natural way for negative interest rates and has superb
calibration properties even in the presence of persistently low rates.
Moreover, the measure changes along the tenor structure are simplified
significantly. This property makes it an excellent base for a
post-crisis multiple curve setup. Two variants for multiple curve
constructions will be discussed.

As driving processes we use time-inhomogeneous Lévy processes, which
lead to explicit valuation formulas for various interest rate products
using well-known Fourier transform techniques. Based on these formulas
we present calibration results for the two model variants using market
data for caps with Bachelier implied volatilities.

Thu, 08 Mar 2018

16:00 - 17:00
L4

Statistical Learning for Portfolio Tail Risk Measurement

Mike Ludkovski
(University of California Santa Barbara)
Abstract


We consider calculation of VaR/TVaR capital requirements when the underlying economic scenarios are determined by simulatable risk factors. This problem involves computationally expensive nested simulation, since evaluating expected portfolio losses of an outer scenario (aka computing a conditional expectation) requires inner-level Monte Carlo. We introduce several inter-related machine learning techniques to speed up this computation, in particular by properly accounting for the simulation noise. Our main workhorse is an advanced Gaussian Process (GP) regression approach which uses nonparametric spatial modeling to efficiently learn the relationship between the stochastic factors defining scenarios and corresponding portfolio value. Leveraging this emulator, we develop sequential algorithms that adaptively allocate inner simulation budgets to target the quantile region. The GP framework also yields better uncertainty quantification for the resulting VaR/TVaR estimators that reduces bias and variance compared to existing methods. Time permitting, I will highlight further related applications of statistical emulation in risk management.
This is joint work with Jimmy Risk (Cal Poly Pomona). 
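A numpy-only caricature of the emulation step (the talk's GP machinery is far richer; the quadratic loss surface, RBF kernel, and all parameter values below are assumptions for illustration):

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, length=0.5, noise=0.2):
    """Posterior mean of a 1-d GP regression with RBF kernel and known noise."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(1)
outer = rng.normal(size=400)                     # outer economic scenarios
true_loss = outer**2                             # assumed "true" portfolio loss
inner = true_loss + 0.2 * rng.normal(size=400)   # noisy inner Monte Carlo estimates
smoothed = gp_posterior_mean(outer, inner, outer)
var_99 = np.quantile(smoothed, 0.99)             # VaR read off the emulated losses
```

The point of the emulator is visible even in this toy: quantiles of the raw inner estimates are inflated by simulation noise, whereas quantiles of the GP-smoothed surface are not. The sequential budget-allocation algorithms of the talk then concentrate inner simulations near the quantile region.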
 

Thu, 01 Mar 2018

16:00 - 16:30
L4

Optimum thresholding using mean and conditional mean squared error

Cecilia Mancini
(Florence)
Abstract

Joint work with José E. Figueroa-López, Washington University in St. Louis

We consider a univariate semimartingale model for (the logarithm of) an asset price, containing jumps of possibly infinite activity. The nonparametric threshold estimator \hat{IV}_n of the integrated variance IV := \int_0^T \sigma^2_s ds proposed in Mancini (2009) is constructed from observations on a discrete time grid: it sums the squared increments of the process that fall below a threshold, a deterministic function of the observation step and possibly of the coefficients of X. All threshold functions satisfying given conditions yield asymptotically consistent estimates of IV; however, the finite-sample properties of \hat{IV}_n can depend on the specific choice of the threshold.
We aim here at optimally selecting the threshold by minimizing either the estimation mean squared error (MSE) or the conditional mean squared error (cMSE). The latter criterion yields a threshold which is optimal not in mean but for the specific volatility and jump paths at hand.

A parsimonious characterization of the optimum is established, which turns out to be asymptotically proportional to the Lévy modulus of continuity of the underlying Brownian motion. Moreover, minimizing the cMSE enables us to propose a novel implementation scheme for approximating the optimal threshold. Monte Carlo simulations illustrate the superior performance of the proposed method.
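A toy numerical illustration of thresholding (a fixed power-law threshold, not the optimized one that is the subject of the talk; all parameter values and the compound-Poisson jump specification are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, sigma = 1.0, 10_000, 0.2
dt = T / n
# Brownian increments plus rare large jumps in the log-price
incr = sigma * np.sqrt(dt) * rng.normal(size=n)
jumps = rng.binomial(1, 0.001, size=n) * rng.normal(0.0, 1.0, size=n)
incr_obs = incr + jumps

# threshold r(dt) = c * dt^0.49: vanishes slower than sqrt(dt), so it keeps
# (almost all) Brownian increments while discarding the large jumps
threshold = 3 * sigma * dt**0.49
iv_hat = np.sum(incr_obs[np.abs(incr_obs) <= threshold] ** 2)

# plain realized variance, inflated by the squared jumps
rv = np.sum(incr_obs**2)
```

Here the true integrated variance is sigma**2 * T = 0.04; the thresholded sum recovers it closely, while the untruncated realized variance overshoots. Choosing the constant and exponent in r(dt) optimally, per path, is exactly the MSE/cMSE question addressed above.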

Thu, 22 Feb 2018

16:00 - 17:00
L4

Multivariate fatal shock models in large dimensions

Matthias Scherer
(TU Munich)
Abstract

A classical construction principle for dependent failure times is to consider shocks that destroy components within a system. The arrival times of shocks can destroy arbitrary subsets of the system, thus introducing dependence. The seminal model – based on independent and exponentially distributed shocks - was presented by Marshall and Olkin in 1967, various generalizations have been proposed in the literature since then. Such models have applications in non-life insurance, e.g. insurance claims caused by floods, hurricanes, or other natural catastrophes. The simple interpretation of multivariate fatal shock models is clearly appealing, but the number of possible shocks makes them challenging to work with, recall that there are 2^d subsets of a set with d components. In a series of papers we have identified mixture models based on suitable stochastic processes that give rise to a different - and numerically more convenient - stochastic interpretation. This representation is particularly useful for the development of efficient simulation algorithms. Moreover, it helps to define parametric families with a reasonable number of parameters. We review the recent literature on multivariate fatal shock models, extreme-value copulas, and related dependence structures. We also discuss applications and hierarchical structures. Finally, we provide a new characterization of the Marshall-Olkin distribution.

Authors: Mai, J.-F.; Scherer, M.
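For small d the naive construction is directly simulable; a sketch (unit shock rates and d = 3 are assumptions, and this brute-force enumeration over all 2^d - 1 subsets is precisely what becomes infeasible for large d, motivating the mixture representations discussed in the talk):

```python
import itertools
import numpy as np

def marshall_olkin(d, rates, rng):
    """One Marshall-Olkin sample: an independent exponential shock per
    nonempty subset; component i fails at the first shock hitting a
    subset that contains i."""
    subsets = [s for r in range(1, d + 1)
               for s in itertools.combinations(range(d), r)]
    failure = np.full(d, np.inf)
    for s, lam in zip(subsets, rates):
        t = rng.exponential(1.0 / lam)  # arrival time of the shock killing subset s
        for i in s:
            failure[i] = min(failure[i], t)
    return failure

rng = np.random.default_rng(3)
rates = np.ones(7)  # 2**3 - 1 = 7 subsets for d = 3
samples = np.array([marshall_olkin(3, rates, rng) for _ in range(2000)])
```

With unit rates each marginal is the minimum of the 2^(d-1) = 4 shocks hitting that component, i.e. Exp(4), and the shared shocks induce the positive dependence characteristic of the Marshall-Olkin distribution.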

Thu, 15 Feb 2018

16:00 - 17:00
L4

The General Aggregation Property and its Application to Regime-Dependent Determinants of Variance, Skew and Jump Risk Premia

Carol Alexander
(Sussex)
Abstract

Our general theory, which encompasses two different aggregation properties (Neuberger, 2012; Bondarenko, 2014) establishes a wide variety of new, unbiased and efficient risk premia estimators. Empirical results on meticulously-constructed daily, investable, constant-maturity  S&P500 higher-moment premia reveal significant, previously-undocumented, regime-dependent behavior. The variance premium is fully priced by Fama and French (2015) factors during the volatile regime, but has significant negative alpha in stable markets.  Also only during stable periods, a small, positive but significant third-moment premium is not fully priced by the variance and equity premia. There is no evidence for a separate fourth-moment premium.

Thu, 08 Feb 2018

16:00 - 17:00
L4

Computational Aspects of Robust Optimized Certainty Equivalent

Samuel Drapeau
(Shanghai Advanced Institute of Finance)
Abstract

An extension of the expected shortfall, as well as of the value at risk, to model uncertainty has been proposed by Shige Peng. In this talk we present a systematic extension of the general class of optimized certainty equivalents, which includes the expected shortfall. We show that its representation can be simplified in many cases for efficient computation. In particular, we present results for a probability model uncertainty derived from a Wasserstein metric and provide explicit solutions. We further study the duality and representation of these functionals.

This talk is based on joint work with Daniel Bartl and Ludovic Tangpi.

Thu, 01 Feb 2018

16:00 - 17:00
L4

Cost efficient strategies under model ambiguity

Carole Bernard
(Grenoble)
Abstract

The solution to the standard cost-efficiency problem depends crucially on the fact that a single real-world measure P is available to the investor pursuing a cost-efficient approach. In most applications of interest, however, a historical measure is neither given nor can it be estimated with accuracy from available data. To incorporate the uncertainty about the measure P into the cost-efficient approach, we assume that, instead of a single measure, a class of plausible prior models is available. We define the notion of robust cost-efficiency and highlight its link with the maxmin expected utility setting of Gilboa and Schmeidler (1989) and, more generally, with robust preferences in a possibly non-expected-utility setting.

This is joint work with Thibaut Lux and Steven Vanduffel (VUB).

Thu, 25 Jan 2018

16:00 - 17:00
L4

Martingale optimal transport - discrete to continuous

Martin Huesmann
(Bonn)
Abstract

In classical optimal transport, the contributions of Benamou–Brenier and 
McCann regarding the time-dependent version of the problem are 
cornerstones of the field and form the basis for a variety of 
applications in other mathematical areas.

Based on a weak length relaxation we suggest a Benamou-Brenier type 
formulation of martingale optimal transport. We give an explicit 
probabilistic representation of the optimizer for a specific cost 
function leading to a continuous Markov-martingale M with several 
notable properties: In a specific sense it mimics the movement of a 
Brownian particle as closely as possible subject to the marginal 
conditions at times 0 and 1. Similar to McCann’s 
displacement-interpolation, M provides a time-consistent interpolation 
between $\mu$ and $\nu$. For particular choices of the initial and 
terminal law, M recovers archetypical martingales such as Brownian 
motion, geometric Brownian motion, and the Bass martingale. Furthermore, 
it yields a new approach to Kellerer’s theorem.

(based on joint work with J. Backhoff, M. Beiglböck, S. Källblad, and D. 
Trevisan)

Thu, 18 Jan 2018

16:00 - 17:30
L4

Information and Derivatives

Jerome Detemple
(Boston University)
Abstract

We study a dynamic multi-asset economy with private information, a stock and a derivative. There are informed and uninformed investors as well as bounded rational investors trading on noise. The noisy rational expectations equilibrium is obtained in closed form. The equilibrium stock price follows a non-Markovian process, is positive and has stochastic volatility. The derivative cannot be replicated, except at rare endogenous times. At any point in time, the derivative price adds information relative to the stock price, but the pair of prices is less informative than volatility, the residual demand or the history of prices. The rank of the asset span drops at endogenous times causing turbulent trading activity. The effects of financial innovation are discussed. The equilibrium is fully revealing if the derivative is not traded: financial innovation destroys information.

Thu, 30 Nov 2017

16:00 - 17:30
L4

Short-term contingent claims on non-tradable assets: static hedging and pricing

Olivier Gueant
(Université Paris 1)
Abstract

In this talk, I consider the problem of pricing and (statically)
hedging short-term contingent claims written on illiquid or
non-tradable assets.
In a first part, I show how to find the best European payoff written
on a given set of underlying assets for hedging (under several
metrics) a given European payoff written on another set of underlying
assets -- some of them being illiquid or non-tradable. In particular,
I present new results in the case of the Expected Shortfall risk
measure. I also address the associated pricing problem by using
indifference pricing and its link with entropy.
In a second part, I consider the more classic case of hedging with a
finite set of simple payoffs/instruments and I address the associated
pricing problem. In particular, I show how entropic methods (Davis
pricing and indifference pricing à la Rouge-El Karoui) can be used in
conjunction with recent results of extreme value theory (in dimension
higher than 1) for pricing and hedging short-term out-of-the-money
options such as those involved in the definition of Daily Cliquet
Crash Puts.