Past Mathematical and Computational Finance Seminar

8 March 2018

We consider calculation of VaR/TVaR capital requirements when the underlying economic scenarios are determined by simulatable risk factors. This problem involves computationally expensive nested simulation, since evaluating expected portfolio losses in an outer scenario (i.e., computing a conditional expectation) requires inner-level Monte Carlo. We introduce several inter-related machine learning techniques to speed up this computation, in particular by properly accounting for the simulation noise. Our main workhorse is an advanced Gaussian Process (GP) regression approach that uses nonparametric spatial modeling to efficiently learn the relationship between the stochastic factors defining scenarios and the corresponding portfolio value. Leveraging this emulator, we develop sequential algorithms that adaptively allocate inner simulation budgets to target the quantile region. The GP framework also yields better uncertainty quantification for the resulting VaR/TVaR estimators, reducing bias and variance compared to existing methods. Time permitting, I will highlight further related applications of statistical emulation in risk management.
This is joint work with Jimmy Risk (Cal Poly Pomona). 
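The workflow described in the abstract can be illustrated in a few lines. This is a minimal sketch, assuming a one-dimensional risk factor, a toy quadratic conditional loss, and scikit-learn's GaussianProcessRegressor; it is not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Outer scenarios: a single risk factor Z with toy conditional loss E[L|Z] = Z^2.
n_outer, n_inner = 200, 50
Z = rng.normal(size=n_outer)

# Inner-level Monte Carlo: each scenario gets a noisy estimate of E[L|Z]
# based on n_inner inner samples (noise std ~ 1/sqrt(n_inner)).
noisy_loss = Z**2 + rng.normal(scale=1.0 / np.sqrt(n_inner), size=n_outer)

# GP emulator: the WhiteKernel term explicitly accounts for the simulation
# noise, so the fitted surface de-noises the inner-level estimates.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(Z.reshape(-1, 1), noisy_loss)

# Smoothed portfolio losses across scenarios give a de-noised 99% VaR estimate.
smoothed = gp.predict(Z.reshape(-1, 1))
var_99 = np.quantile(smoothed, 0.99)
```

A sequential variant would then use the GP's posterior variance to decide which scenarios near the estimated quantile deserve additional inner simulation budget.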

  • Mathematical and Computational Finance Seminar
1 March 2018
Cecilia Mancini

Joint work with José E. Figueroa-López, Washington University in St. Louis

Abstract: We consider a univariate semimartingale model for (the logarithm
of) an asset price, containing jumps of possibly infinite activity. The
nonparametric threshold estimator $\hat{IV}_n$ of the integrated variance
$IV := \int_0^T \sigma^2_s\,ds$ proposed in Mancini (2009) is constructed
from observations on a discrete time grid: it sums the squared increments
of the process that fall below a threshold, a deterministic function of the
observation step and possibly of the coefficients of $X$. All threshold
functions satisfying given conditions yield asymptotically consistent
estimates of $IV$; however, the finite-sample properties of $\hat{IV}_n$
can depend on the specific choice of the threshold.
We aim here at optimally selecting the threshold by minimizing either the
estimation mean squared error (MSE) or the conditional mean squared error
(cMSE). The latter criterion allows one to reach a threshold that is optimal
not in the mean but for the specific volatility and jump paths at hand.

A parsimonious characterization of the optimum is established, which turns
out to be asymptotically proportional to Lévy's modulus of continuity of
the underlying Brownian motion. Moreover, minimizing the cMSE enables us
to propose a novel implementation scheme for approximating the optimal
threshold. Monte Carlo simulations illustrate the superior performance of
the proposed method.
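The estimator can be illustrated numerically. The jump-diffusion below, the constant c, and the use of the Lévy modulus as threshold are illustrative assumptions chosen to match the asymptotics mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized toy model: dX = sigma dW plus a few large jumps (constant sigma).
T, n = 1.0, 10_000
dt = T / n
sigma = 0.3
increments = sigma * np.sqrt(dt) * rng.normal(size=n)
jump_idx = rng.choice(n, size=5, replace=False)
increments[jump_idx] += rng.normal(scale=0.5, size=5)

# Threshold proportional to the Levy modulus of continuity sqrt(2 h log(1/h)).
c = 3.0 * sigma  # illustrative constant
threshold = c * np.sqrt(2 * dt * np.log(1 / dt))

# Threshold estimator: sum squared increments whose magnitude is below threshold,
# discarding increments dominated by jumps.
iv_hat = np.sum(increments**2 * (np.abs(increments) <= threshold))
true_iv = sigma**2 * T  # 0.09 for this simulation
```

The plain realized variance (summing all squared increments) overshoots true_iv by roughly the total squared jump size, while the thresholded sum stays close to it.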

  • Mathematical and Computational Finance Seminar
22 February 2018
Matthias Scherer

A classical construction principle for dependent failure times is to consider shocks that destroy components within a system. Shocks arrive over time and can destroy arbitrary subsets of the system, thus introducing dependence. The seminal model, based on independent and exponentially distributed shocks, was presented by Marshall and Olkin in 1967; various generalizations have been proposed in the literature since then. Such models have applications in non-life insurance, e.g. insurance claims caused by floods, hurricanes, or other natural catastrophes. The simple interpretation of multivariate fatal shock models is clearly appealing, but the number of possible shocks makes them challenging to work with: recall that there are 2^d subsets of a set with d components. In a series of papers we have identified mixture models based on suitable stochastic processes that give rise to a different, and numerically more convenient, stochastic interpretation. This representation is particularly useful for the development of efficient simulation algorithms. Moreover, it helps to define parametric families with a reasonable number of parameters. We review the recent literature on multivariate fatal shock models, extreme-value copulas, and related dependence structures. We also discuss applications and hierarchical structures. Finally, we provide a new characterization of the Marshall-Olkin distribution.
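To make the 2^d-shock structure concrete, here is a direct simulation of the classical Marshall-Olkin model for d = 3 (the equal shock rates are an illustrative choice; for large d, this exhaustive enumeration over subsets is precisely what the mixture representations avoid):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

d = 3
components = range(d)

# One exponential shock per nonempty subset of components: 2^d - 1 shocks.
subsets = [s for r in range(1, d + 1) for s in itertools.combinations(components, r)]
rates = {s: 0.5 for s in subsets}  # illustrative: all shock rates equal

def sample_failure_times():
    """One draw of the Marshall-Olkin vector: each component fails at the
    first arrival of any shock that destroys a subset containing it."""
    arrivals = {s: rng.exponential(1.0 / rates[s]) for s in subsets}
    return [min(t for s, t in arrivals.items() if k in s) for k in components]

tau = sample_failure_times()
```

Joint shocks make simultaneous failures possible with positive probability, which is the hallmark of the Marshall-Olkin distribution.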

Authors: Mai, J.-F.; Scherer, M.

  • Mathematical and Computational Finance Seminar
15 February 2018
Carol Alexander

Our general theory, which encompasses two different aggregation properties (Neuberger, 2012; Bondarenko, 2014), establishes a wide variety of new, unbiased and efficient risk-premia estimators. Empirical results on meticulously constructed daily, investable, constant-maturity S&P 500 higher-moment premia reveal significant, previously undocumented, regime-dependent behavior. The variance premium is fully priced by the Fama and French (2015) factors during the volatile regime, but has significant negative alpha in stable markets. Also, only during stable periods, a small, positive but significant third-moment premium is not fully priced by the variance and equity premia. There is no evidence for a separate fourth-moment premium.

  • Mathematical and Computational Finance Seminar
8 February 2018

An extension of the expected shortfall, as well as the value at risk, to
model uncertainty has been proposed by Shige Peng.
In this talk we will present a systematic extension of the general
class of optimized certainty equivalents, which includes the expected
shortfall. We show that their representation can be simplified in many
cases for efficient computation.
In particular, we present results based on probability-model
uncertainty derived from a Wasserstein metric and provide explicit
solutions.
We further study their duality and representation.
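For context, the expected shortfall itself is an optimized certainty equivalent: minimizing eta + E[(X - eta)^+]/(1 - alpha) over eta recovers ES at level alpha (the Rockafellar-Uryasev representation). A minimal Monte Carlo sketch, with an illustrative standard normal loss:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

alpha = 0.95
losses = rng.normal(size=100_000)  # illustrative loss distribution

# OCE objective with loss function l(x) = x^+ / (1 - alpha); its minimum
# over the scalar eta is the expected shortfall at level alpha.
def objective(eta):
    return eta + np.mean(np.maximum(losses - eta, 0.0)) / (1 - alpha)

es_oce = minimize_scalar(objective).fun

# Cross-check against the direct tail-average definition of ES.
var_alpha = np.quantile(losses, alpha)
es_direct = losses[losses >= var_alpha].mean()
```

The robust versions discussed in the talk replace the single expectation in the objective by a supremum over a Wasserstein ball of models.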

This talk is based on joint work with Daniel Bartl and Ludovic

  • Mathematical and Computational Finance Seminar
1 February 2018
Carole Bernard

The solution to the standard cost-efficiency problem depends crucially on the fact that a single real-world measure P is available to the investor pursuing a cost-efficient approach. In most applications of interest, however, a historical measure is neither given nor can it be estimated with accuracy from available data. To incorporate the uncertainty about the measure P in the cost-efficient approach, we assume that, instead of a single measure, a class of plausible prior models is available. We define the notion of robust cost-efficiency and highlight its link with the maxmin expected utility setting of Gilboa and Schmeidler (1989) and, more generally, with robust preferences in a possibly non-expected-utility setting.

This is joint work with Thibaut Lux and Steven Vanduffel (VUB)

  • Mathematical and Computational Finance Seminar
25 January 2018
Martin Huesmann

In classical optimal transport, the contributions of Benamou–Brenier and
McCann regarding the time-dependent version of the problem are
cornerstones of the field and form the basis for a variety of
applications in other mathematical areas.
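For reference, the classical Benamou–Brenier result recasts the squared quadratic Wasserstein distance as a kinetic-energy minimization over curves of densities; the martingale version discussed in the talk modifies this admissible class accordingly:

```latex
W_2^2(\mu,\nu) = \inf_{(\rho,v)} \int_0^1 \int_{\mathbb{R}^d} |v_t(x)|^2 \,\rho_t(x)\,\mathrm{d}x\,\mathrm{d}t,
\qquad \text{subject to} \quad \partial_t \rho_t + \nabla\cdot(\rho_t v_t) = 0,
\quad \rho_0 = \mu, \quad \rho_1 = \nu.
```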

Based on a weak length relaxation we suggest a Benamou-Brenier type
formulation of martingale optimal transport. We give an explicit
probabilistic representation of the optimizer for a specific cost
function, leading to a continuous Markov martingale M with several
notable properties: in a specific sense it mimics the movement of a
Brownian particle as closely as possible subject to the marginal
conditions at times 0 and 1. Similar to McCann's
displacement interpolation, M provides a time-consistent interpolation
between $\mu$ and $\nu$. For particular choices of the initial and
terminal law, M recovers archetypical martingales such as Brownian
motion, geometric Brownian motion, and the Bass martingale. Furthermore,
it yields a new approach to Kellerer's theorem.

(based on joint work with J. Backhoff, M. Beiglböck, S. Källblad, and D. 

  • Mathematical and Computational Finance Seminar
18 January 2018
Jerome Detemple

We study a dynamic multi-asset economy with private information, a stock and a derivative. There are informed and uninformed investors as well as boundedly rational investors trading on noise. The noisy rational expectations equilibrium is obtained in closed form. The equilibrium stock price follows a non-Markovian process, is positive and has stochastic volatility. The derivative cannot be replicated, except at rare endogenous times. At any point in time, the derivative price adds information relative to the stock price, but the pair of prices is less informative than volatility, the residual demand, or the history of prices. The rank of the asset span drops at endogenous times, causing turbulent trading activity. The effects of financial innovation are discussed. The equilibrium is fully revealing if the derivative is not traded: financial innovation destroys information.

  • Mathematical and Computational Finance Seminar
30 November 2017
Olivier Guéant

In this talk, I consider the problem of pricing and (statically)
hedging short-term contingent claims written on illiquid or
non-tradable assets.
In the first part, I show how to find the best European payoff written
on a given set of underlying assets for hedging (under several
metrics) a given European payoff written on another set of underlying
assets -- some of them being illiquid or non-tradable. In particular,
I present new results in the case of the Expected Shortfall risk
measure. I also address the associated pricing problem by using
indifference pricing and its link with entropy.
In the second part, I consider the more classical case of hedging with
a finite set of simple payoffs/instruments, and I address the
associated pricing problem. In particular, I show how entropic methods
(Davis pricing and indifference pricing à la Rouge and El Karoui) can
be used in conjunction with recent results of extreme value theory (in
dimension higher than 1) for pricing and hedging short-term
out-of-the-money options such as those involved in the definition of
Daily Cliquet Crash Puts.
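As a toy illustration of the entropic machinery: with exponential utility and no hedging at all, the seller's indifference price reduces to the entropic risk measure of the payoff. The payoff, distribution, and risk aversion below are illustrative assumptions, not the speaker's setting.

```python
import numpy as np

rng = np.random.default_rng(4)

gamma = 0.1  # risk aversion (illustrative)

# Illustrative short-term out-of-the-money put on a lognormal underlying.
S = 100.0 * np.exp(0.2 * rng.normal(size=100_000))
payoff = np.maximum(80.0 - S, 0.0)

# Entropic (exponential-utility) indifference price with no hedging:
# p = (1/gamma) * log E[exp(gamma * payoff)].
p_entropic = np.log(np.mean(np.exp(gamma * payoff))) / gamma
p_expected = payoff.mean()  # plain expectation, for comparison
```

By Jensen's inequality the entropic price always dominates the plain expectation, with the gap growing in gamma; this gap is the risk loading charged for the unhedgeable tail.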

  • Mathematical and Computational Finance Seminar
23 November 2017
Jean-François Chassagneux

In this talk, I consider the problem of
hedging European and Bermudan options with a given probability. This
question is
more generally linked to portfolio optimisation problems under weak
stochastic target constraints.
I will recall, in a Markovian framework, the characterisation of the
solution by
non-linear PDEs. I will then discuss various numerical algorithms
to compute the quantile hedging price in practice.

This presentation is based on joint works with B. Bouchard (Université 
Paris Dauphine), G. Bouveret (University of Oxford) and ongoing work 
with C. Benezet (Université Paris Diderot).
