Forthcoming events in this series
On some applications of excursion theory
Abstract
During the talk I will present a new computational technique based on excursion theory for Markov processes. Some new results for classical processes such as Bessel processes and reflected Brownian motion will be shown. The most important point of the presented applications is a new insight into the Hartman-Watson (HW) distributions. It turns out that excursion theory enables us to deduce a simple connection between HW distributions and the hyperbolic cosine of Brownian motion.
On fully-dynamic risk-indifference pricing: time-consistency and other properties
Abstract
Risk-indifference pricing is proposed as an alternative to utility indifference pricing, where a risk measure is used instead of a utility-based preference. In this work, we propose to include the possibility of changing the attitude to risk evaluation as time progresses. This is particularly reasonable for long-term investments and strategies.
We then introduce a fully-dynamic risk-indifference criterion, in which a whole family of risk measures is considered. The risk-indifference pricing system is studied from the point of view of its properties as a convex price system. We tackle questions of time-consistency in the risk evaluation and the corresponding prices. This analysis also provides new insight into time-consistency for ordinary dynamic risk measures.
Our techniques and results are set within the representation and extension theorems for convex operators. We shall argue for, and finally provide, a setting in which fully-dynamic risk-indifference pricing is a well-defined convex price system.
The presentation is based on joint works with Jocelyne Bion-Nadal.
Double auctions in welfare economics
Abstract
Welfare economics argues that competitive markets lead to an efficient allocation of resources. The classical theorems are based on the Walrasian market model, which assumes the existence of market-clearing prices. The emergence of such prices remains debatable. We replace the Walrasian market model by double auctions and show that the conclusions of welfare economics remain largely the same. Double auctions are not only a more realistic description of real markets but they also explain how equilibrium prices and efficient allocations emerge in practice.
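As a minimal illustration of the mechanism (not the speaker's model), a call-style double auction can be cleared by matching sorted bids and asks; all numbers and the midpoint pricing rule below are illustrative assumptions:

```python
# Minimal call-auction sketch: buyers' bids and sellers' asks are
# matched to find a market-clearing price without assuming a
# Walrasian auctioneer.  Illustrative only.

def clear_double_auction(bids, asks):
    """Match highest bids with lowest asks; return (#trades, clearing price)."""
    bids = sorted(bids, reverse=True)   # highest willingness to pay first
    asks = sorted(asks)                 # lowest willingness to sell first
    trades = 0
    while trades < min(len(bids), len(asks)) and bids[trades] >= asks[trades]:
        trades += 1
    if trades == 0:
        return 0, None
    # Any price between the marginal matched bid and ask clears the market;
    # we take the midpoint as one conventional choice.
    price = 0.5 * (bids[trades - 1] + asks[trades - 1])
    return trades, price

n_trades, price = clear_double_auction([10, 9, 7, 4], [3, 5, 8, 11])
```

Here two trades clear and the price emerges from the submitted orders themselves, which is the point of the double-auction view.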
Incomplete Equilibrium with a Stochastic Annuity
Abstract
In this talk, I will present an incomplete equilibrium model to determine the price of an annuity. A finite number of agents receive stochastic income streams and choose between consumption and investment in the traded annuity. The novelty of this model is its ability to handle running consumption and general income streams. In particular, the model incorporates mean reverting income, which is empirically relevant but historically too intractable in equilibrium. The model is set in a Brownian framework, and equilibrium is characterized and proven to exist using a system of fully coupled quadratic BSDEs. This work is joint with Gordan Zitkovic.
Model-free version of the BDG inequality and its applications
Abstract
In my talk I will briefly introduce the model-free approach to mathematical finance, which uses Vovk's outer measure. Then, using the pathwise BDG inequality obtained by Beiglböck and Siorpaes and a modification of Vovk's measure, I will present and prove a model-free version of this inequality for continuous price paths. Finally, I will discuss possible applications, such as the existence and uniqueness of solutions of SDEs driven by continuous, model-free price paths. The talk is based on joint work with Farai Mhlanga and Lesiba Galane (University of Limpopo, South Africa).
Machine Learning in Finance
Abstract
We present several instances of applications of machine learning technologies in mathematical finance, including pricing, hedging, calibration and filtering problems. We try to show that regularity theory of the involved equations plays a crucial role in designing such algorithms.
(Based on joint works with Hans Buehler, Christa Cuchiero, Lukas Gonon, Wahid Khosrawi-Sardroudi, and Ben Wood.)
Large Deviations for McKean Vlasov Equations and Importance Sampling
Abstract
We discuss two Freidlin-Wentzell large deviation principles for McKean-Vlasov equations (MV-SDEs) in certain path-space topologies. The equations have a drift of polynomial growth, and an existence/uniqueness result is provided. We then apply Monte Carlo methods for evaluating expectations of functionals of solutions to MV-SDEs with drifts of super-linear growth. We assume that the MV-SDE is approximated in the standard manner by means of an interacting particle system and propose two importance sampling (IS) techniques to reduce the variance of the resulting Monte Carlo estimator. In the "complete measure change" approach, the IS measure change is applied simultaneously in the coefficients and in the expectation to be evaluated. In the "decoupling" approach, we first estimate the law of the solution in a first set of simulations without measure change, and then perform a second set of simulations under the importance sampling measure using the approximate solution law computed in the first step.
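The standard interacting-particle approximation mentioned above can be sketched as follows; the mean-reverting drift b(x, mu) = -(x - mean(mu)) and all parameters are toy choices for illustration (the talk treats drifts of super-linear growth):

```python
import numpy as np

# Interacting-particle (Euler) approximation of an MV-SDE
#   dX_t = b(X_t, Law(X_t)) dt + sigma dW_t,
# with Law(X_t) replaced by the empirical measure of N particles.
# Drift and parameters are illustrative, not from the talk.

def simulate_particles(N=1000, T=1.0, steps=100, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = rng.normal(loc=2.0, scale=1.0, size=N)   # initial particle cloud
    for _ in range(steps):
        drift = -(x - x.mean())                  # interaction via the empirical mean
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
    return x

particles = simulate_particles()
```

Each particle feels the empirical law of the whole system through its drift, which is exactly the coupling that importance sampling for MV-SDEs has to contend with.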
Computation of optimal transport and related hedging problems via penalization and neural networks
Abstract
We present a widely applicable approach to solving (multi-marginal, martingale) optimal transport and related problems via neural networks. The core idea is to penalize the optimization problem in its dual formulation and reduce it to a finite dimensional one which corresponds to optimizing a neural network with smooth objective function. We present numerical examples from optimal transport, and bounds on the distribution of a sum of dependent random variables. As an application we focus on the problem of risk aggregation under model uncertainty. The talk is based on joint work with Stephan Eckstein and Mathias Pohl.
Accounting for the Epps Effect: Realized Covariation, Cointegration and Common Factors
Abstract
High-frequency realized variance approaches offer great promise for
estimating asset prices’ covariation, but encounter difficulties
connected to the Epps effect. This paper models the Epps effect in a
stochastic volatility setting. It adds dependent noise to a factor
representation of prices. The noise both offsets covariation and
describes plausible lags in information transmission. Non-synchronous
trading, another recognized source of the effect, is not required. A
resulting estimator of correlations and betas performs well on LSE
mid-quote data, lending empirical credence to the approach.
From maps to apps: the power of machine learning and artificial intelligence for regulators
Abstract
Highlights:
• We increasingly live in a digital world and commercial companies are not the only beneficiaries. The public sector can also use data to tackle pressing issues.
• Machine learning is starting to make an impact on the tools regulators use, for spotting the bad guys, for estimating demand, and for tackling many other problems.
• The speech uses an array of examples to argue that much regulation is ultimately about recognising patterns in data. Machine learning helps us find those patterns.
Just as moving from paper maps to smartphone apps can make us better navigators, Stefan’s speech explains how the move from using traditional analysis to using machine learning can make us better regulators.
Mini Biography:
Stefan Hunt is the founder and Head of the Behavioural Economics and Data Science Unit. He has led the FCA’s use of these two fields and designed several pioneering economic analyses. He is an Honorary Professor at the University of Nottingham and has a PhD in economics from Harvard University.
Generalized McKean-Vlasov stochastic control problems
Abstract
I will consider McKean-Vlasov stochastic control problems
where the cost functions and the state dynamics depend upon the joint
distribution of the controlled state and the control process. First, I
will provide a suitable version of the Pontryagin stochastic maximum
principle, showing that, in the present general framework, pointwise
minimization of the Hamiltonian with respect to the control is not a
necessary optimality condition. Then I will take a different
perspective, and present a variational approach to study a weak
formulation of such control problems, thereby establishing a new
connection between those and optimal transport problems on path space.
The talk is based on a joint project with J. Backhoff-Veraguas and R. Carmona.
Lévy forward price approach for multiple yield curves in presence of persistently low and negative interest rates
Abstract
In this talk we present a framework for discretely compounding
interest rates which is based on the forward price process approach.
This approach has a number of advantages, in particular in the current
market environment. Compared to the classical Libor market models, it
allows in a natural way for negative interest rates and has superb
calibration properties even in the presence of persistently low rates.
Moreover, the measure changes along the tenor structure are simplified
significantly. This property makes it an excellent base for a
post-crisis multiple curve setup. Two variants for multiple curve
constructions will be discussed.
As driving processes we use time-inhomogeneous Lévy processes, which
lead to explicit valuation formulas for various interest rate products
using well-known Fourier transform techniques. Based on these formulas
we present calibration results for the two model variants using market
data for caps with Bachelier implied volatilities.
Statistical Learning for Portfolio Tail Risk Measurement
Abstract
We consider calculation of VaR/TVaR capital requirements when the underlying economic scenarios are determined by simulatable risk factors. This problem involves computationally expensive nested simulation, since evaluating expected portfolio losses of an outer scenario (aka computing a conditional expectation) requires inner-level Monte Carlo. We introduce several inter-related machine learning techniques to speed up this computation, in particular by properly accounting for the simulation noise. Our main workhorse is an advanced Gaussian Process (GP) regression approach which uses nonparametric spatial modeling to efficiently learn the relationship between the stochastic factors defining scenarios and corresponding portfolio value. Leveraging this emulator, we develop sequential algorithms that adaptively allocate inner simulation budgets to target the quantile region. The GP framework also yields better uncertainty quantification for the resulting VaR/TVaR estimators that reduces bias and variance compared to existing methods. Time permitting, I will highlight further related applications of statistical emulation in risk management.
This is joint work with Jimmy Risk (Cal Poly Pomona).
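As a toy illustration of the emulation idea (not the speakers' actual implementation), a GP can be fit to noisy inner-simulation averages and then used to read off a VaR estimate over fresh outer scenarios; the quadratic loss surface and simulation budgets are assumptions of the sketch:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# GP emulation for nested simulation: the expected portfolio loss f(z)
# is observed only through noisy inner Monte Carlo averages; a GP
# regression smooths the noise, and the fitted surface replaces the
# inner simulations when estimating VaR.  Loss function, noise level
# and budgets are illustrative assumptions.

rng = np.random.default_rng(1)
f = lambda z: z**2                       # stand-in for the expected loss surface

z_design = np.linspace(-3, 3, 25)        # outer scenarios given inner simulations
inner = 50                               # inner Monte Carlo budget per scenario
y_noisy = f(z_design) + rng.normal(scale=1.0 / np.sqrt(inner), size=z_design.size)

# alpha is the (known) variance of the inner-simulation noise
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1.0 / inner)
gp.fit(z_design.reshape(-1, 1), y_noisy)

z_outer = rng.normal(size=10_000)        # fresh outer scenarios
losses = gp.predict(z_outer.reshape(-1, 1))
var_99 = np.quantile(losses, 0.99)       # emulated 99% VaR
```

The key saving is that only 25 scenarios receive inner simulations, while the VaR is computed from 10,000 emulated losses.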
Optimum thresholding using mean and conditional mean squared error
Abstract
Joint work with José E. Figueroa-López, Washington University in St. Louis
We consider a univariate semimartingale model for (the logarithm of) an asset price, containing jumps having possibly infinite activity. The nonparametric threshold estimator \hat{IV}_n of the integrated variance IV := \int_0^T \sigma^2_s ds proposed in Mancini (2009) is constructed using observations on a discrete time grid; precisely, it sums up the squared increments of the process when they are below a threshold, a deterministic function of the observation step and possibly of the coefficients of X. All the threshold functions satisfying given conditions allow asymptotically consistent estimates of IV; however, the finite-sample properties of \hat{IV}_n can depend on the specific choice of the threshold.
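A minimal numerical sketch of the truncated sum just described, using simulated data and an illustrative power-law threshold (one of many choices satisfying the consistency conditions):

```python
import numpy as np

# Threshold estimator \hat{IV}_n: sum the squared increments of the
# log-price, keeping only those below a deterministic threshold
# r(h) = c * h^0.49.  The constant c and the jump mechanism are
# illustrative assumptions, not the optimal choice from the talk.

rng = np.random.default_rng(42)
T, n = 1.0, 10_000
h = T / n
sigma = 0.3                                        # constant volatility for the toy example

dX = sigma * np.sqrt(h) * rng.normal(size=n)       # Brownian increments
jumps = rng.binomial(1, 0.001, size=n) * 0.5       # a few large jumps
dX_jump = dX + jumps

threshold = 3 * sigma * h**0.49                    # deterministic threshold r(h)
iv_hat = np.sum(dX_jump[dX_jump**2 <= threshold**2] ** 2)

# iv_hat estimates IV = sigma^2 * T = 0.09, whereas the plain realized
# variance np.sum(dX_jump**2) is inflated by the jump contributions.
```

The finite-sample dependence on the constant in r(h) is precisely what the MSE/cMSE optimization in the talk addresses.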
We aim here at optimally selecting the threshold by minimizing either the estimation mean squared error (MSE) or the conditional mean squared error (cMSE). The latter criterion allows us to reach a threshold which is optimal not in mean but for the specific volatility and jump paths at hand.
A parsimonious characterization of the optimum is established, which turns out to be asymptotically proportional to Lévy's modulus of continuity of the underlying Brownian motion. Moreover, minimizing the cMSE enables us to propose a novel implementation scheme for approximating the optimal threshold. Monte Carlo simulations illustrate the superior performance of the proposed method.
Multivariate fatal shock models in large dimensions
Abstract
A classical construction principle for dependent failure times is to consider shocks that destroy components within a system. Shocks arrive at random times and may destroy arbitrary subsets of the system, thus introducing dependence. The seminal model, based on independent and exponentially distributed shocks, was presented by Marshall and Olkin in 1967; various generalizations have been proposed in the literature since then. Such models have applications in non-life insurance, e.g. insurance claims caused by floods, hurricanes, or other natural catastrophes. The simple interpretation of multivariate fatal shock models is clearly appealing, but the number of possible shocks makes them challenging to work with: recall that there are 2^d subsets of a set with d components. In a series of papers we have identified mixture models based on suitable stochastic processes that give rise to a different, and numerically more convenient, stochastic interpretation. This representation is particularly useful for the development of efficient simulation algorithms. Moreover, it helps to define parametric families with a reasonable number of parameters. We review the recent literature on multivariate fatal shock models, extreme-value copulas, and related dependence structures. We also discuss applications and hierarchical structures. Finally, we provide a new characterization of the Marshall-Olkin distribution.
Authors: Mai, J.-F.; Scherer, M.
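The bivariate case of the Marshall-Olkin (1967) fatal-shock construction can be sketched directly: three independent exponential shock arrivals (one per component plus one common shock), each component failing at its first shock. The rates below are illustrative:

```python
import numpy as np

# Bivariate Marshall-Olkin fatal-shock model: component i fails at the
# first arrival among the shocks that hit it.  The common shock e12
# hits both components, creating dependence and a positive probability
# of simultaneous failure.  Rates are illustrative.

def marshall_olkin_bivariate(n, lam1=1.0, lam2=1.0, lam12=0.5, seed=0):
    rng = np.random.default_rng(seed)
    e1 = rng.exponential(1 / lam1, n)     # shock hitting component 1 only
    e2 = rng.exponential(1 / lam2, n)     # shock hitting component 2 only
    e12 = rng.exponential(1 / lam12, n)   # common shock hitting both
    return np.minimum(e1, e12), np.minimum(e2, e12)

x, y = marshall_olkin_bivariate(100_000)
# Marginals stay exponential: X ~ Exp(lam1 + lam12), so E[X] = 1/1.5.
```

With d components this direct approach needs one exponential clock per nonempty subset, i.e. 2^d - 1 clocks, which illustrates why the mixture representations discussed in the abstract become attractive in large dimensions.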
The General Aggregation Property and its Application to Regime-Dependent Determinants of Variance, Skew and Jump Risk Premia
Abstract
Our general theory, which encompasses two different aggregation properties (Neuberger, 2012; Bondarenko, 2014), establishes a wide variety of new, unbiased and efficient risk premia estimators. Empirical results on meticulously-constructed daily, investable, constant-maturity S&P500 higher-moment premia reveal significant, previously-undocumented, regime-dependent behavior. The variance premium is fully priced by the Fama and French (2015) factors during the volatile regime, but has significant negative alpha in stable markets. Also, only during stable periods, a small, positive but significant third-moment premium is not fully priced by the variance and equity premia. There is no evidence for a separate fourth-moment premium.
Computational Aspects of Robust Optimized Certainty Equivalent
Abstract
An extension of the expected shortfall as well as the value at risk to model uncertainty has been proposed by Shige Peng.
In this talk we will present a systematic extension of the general class of optimized certainty equivalents that includes the expected shortfall. We show that its representation can be simplified in many cases for efficient computation. In particular, we present some results based on probability model uncertainty derived from a Wasserstein metric and provide explicit solutions for it. We further study their duality and representation.
This talk is based on joint work with Daniel Bartl and Ludovic Tangpi.
Cost efficient strategies under model ambiguity
Abstract
The solution to the standard cost-efficiency problem depends crucially on the fact that a single real-world measure P is available to the investor pursuing a cost-efficient approach. In most applications of interest, however, a historical measure is neither given nor can it be estimated with accuracy from available data. To incorporate the uncertainty about the measure P into the cost-efficient approach, we assume that, instead of a single measure, a class of plausible prior models is available. We define the notion of robust cost-efficiency and highlight its link with the maxmin expected utility setting of Gilboa and Schmeidler (1989) and, more generally, with robust preferences in a possibly non-expected utility setting.
This is joint work with Thibaut Lux and Steven Vanduffel (VUB)
Martingale optimal transport - discrete to continuous
Abstract
In classical optimal transport, the contributions of Benamou-Brenier and
McCann regarding the time-dependent version of the problem are
cornerstones of the field and form the basis for a variety of
applications in other mathematical areas.
Based on a weak length relaxation we suggest a Benamou-Brenier type
formulation of martingale optimal transport. We give an explicit
probabilistic representation of the optimizer for a specific cost
function leading to a continuous Markov-martingale M with several
notable properties: In a specific sense it mimics the movement of a
Brownian particle as closely as possible subject to the marginal
conditions at times 0 and 1. Similar to McCann's
displacement-interpolation, M provides a time-consistent interpolation
between $\mu$ and $\nu$. For particular choices of the initial and
terminal law, M recovers archetypical martingales such as Brownian
motion, geometric Brownian motion, and the Bass martingale. Furthermore,
it yields a new approach to Kellerer’s theorem.
(based on joint work with J. Backhoff, M. Beiglböck, S. Källblad, and D.
Trevisan)
Information and Derivatives
Abstract
We study a dynamic multi-asset economy with private information, a stock and a derivative. There are informed and uninformed investors as well as boundedly rational investors trading on noise. The noisy rational expectations equilibrium is obtained in closed form. The equilibrium stock price follows a non-Markovian process, is positive and has stochastic volatility. The derivative cannot be replicated, except at rare endogenous times. At any point in time, the derivative price adds information relative to the stock price, but the pair of prices is less informative than volatility, the residual demand or the history of prices. The rank of the asset span drops at endogenous times, causing turbulent trading activity. The effects of financial innovation are discussed. The equilibrium is fully revealing if the derivative is not traded: financial innovation destroys information.
Short-term contingent claims on non-tradable assets: static hedging and pricing
Abstract
In this talk, I consider the problem of pricing and (statically)
hedging short-term contingent claims written on illiquid or
non-tradable assets.
In the first part, I show how to find the best European payoff written
on a given set of underlying assets for hedging (under several
metrics) a given European payoff written on another set of underlying
assets -- some of them being illiquid or non-tradable. In particular,
I present new results in the case of the Expected Shortfall risk
measure. I also address the associated pricing problem by using
indifference pricing and its link with entropy.
In the second part, I consider the more classical case of hedging with a
finite set of simple payoffs/instruments and I address the associated
pricing problem. In particular, I show how entropic methods (Davis
pricing and indifference pricing à la Rouge-El Karoui) can be used in
conjunction with recent results of extreme value theory (in dimension
higher than 1) for pricing and hedging short-term out-of-the-money
options such as those involved in the definition of Daily Cliquet
Crash Puts.
Numerical approximation of quantile hedging problem
Abstract
In this talk, I consider the problem of hedging European and Bermudan
options with a given probability. This question is more generally linked
to portfolio optimisation problems under weak stochastic target
constraints.
I will recall, in a Markovian framework, the characterisation of the
solution by
non-linear PDEs. I will then discuss various numerical algorithms
to compute in practice the quantile hedging price.
This presentation is based on joint works with B. Bouchard (Université
Paris Dauphine), G. Bouveret (University of Oxford) and ongoing work
with C. Benezet (Université Paris Diderot).