Statistical Learning for Portfolio Tail Risk Measurement
Abstract
We consider the calculation of VaR/TVaR capital requirements when the underlying economic scenarios are determined by simulatable risk factors. This problem involves computationally expensive nested simulation, since evaluating the expected portfolio loss of an outer scenario (i.e., computing a conditional expectation) requires inner-level Monte Carlo. We introduce several inter-related machine learning techniques to speed up this computation, in particular by properly accounting for the simulation noise. Our main workhorse is an advanced Gaussian Process (GP) regression approach that uses nonparametric spatial modeling to efficiently learn the relationship between the stochastic factors defining scenarios and the corresponding portfolio value. Leveraging this emulator, we develop sequential algorithms that adaptively allocate inner simulation budgets to target the quantile region. The GP framework also yields improved uncertainty quantification for the resulting VaR/TVaR estimators, reducing bias and variance compared to existing methods. Time permitting, I will highlight further related applications of statistical emulation in risk management.
This is joint work with Jimmy Risk (Cal Poly Pomona).
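As a purely illustrative sketch of the pipeline described above (not the speakers' implementation; the toy loss surface, sample sizes, and all identifiers are our own assumptions), one could fit a GP to noisy inner-simulation outputs in Python, where a WhiteKernel term lets the regression account for simulation noise instead of interpolating it:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def inner_mc_loss(scenario, n_inner=100):
    # Stand-in for inner-level Monte Carlo: a noisy estimate of the
    # conditional expected loss given the outer scenario (toy surface).
    return scenario**2 + rng.normal(0.0, 1.0, n_inner).mean()

scenarios = rng.normal(size=(200, 1))            # outer scenarios (1-d factor)
noisy_losses = np.array([inner_mc_loss(s[0]) for s in scenarios])

# RBF captures the smooth scenario-to-loss map; WhiteKernel absorbs the
# inner-simulation noise so the GP de-noises rather than interpolates.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(scenarios, noisy_losses)

pred_losses = gp.predict(scenarios)              # de-noised loss estimates
print("Estimated 99.5% VaR:", np.quantile(pred_losses, 0.995))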
Optimum thresholding using mean and conditional mean squared error
Abstract
Joint work with José E. Figueroa-López, Washington University in St. Louis
We consider a univariate semimartingale model $X$ for (the logarithm of) an asset price, containing jumps of possibly infinite activity. The nonparametric threshold estimator $\hat{IV}_n$ of the integrated variance $IV := \int_0^T \sigma^2_s\,ds$ proposed in Mancini (2009) is constructed from observations on a discrete time grid: it sums the squared increments of the process whose absolute values lie below a threshold, a deterministic function of the observation step and possibly of the coefficients of $X$. All threshold functions satisfying given conditions yield asymptotically consistent estimates of $IV$; however, the finite-sample properties of $\hat{IV}_n$ can depend on the specific choice of the threshold.
We aim here at optimally selecting the threshold by minimizing either the estimation mean squared error (MSE) or the conditional mean squared error (cMSE). The latter criterion yields a threshold that is optimal not in mean but for the specific volatility and jump paths at hand.
A parsimonious characterization of the optimum is established, which turns
out to be asymptotically proportional to Lévy's modulus of continuity of
the underlying Brownian motion. Moreover, minimizing the cMSE enables us
to propose a novel implementation scheme for approximating the optimal
threshold. Monte Carlo simulations illustrate the superior performance of the
proposed method.
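As a purely numerical illustration (our toy example, not the paper's scheme), the estimator with a threshold proportional to Lévy's modulus of continuity $\sqrt{2\Delta_n \log(1/\Delta_n)}$ can be coded as follows, with a hypothetical tuning constant c:

import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 10_000
dt = T / n
sigma = 0.2

# Toy jump-diffusion increments: Brownian part plus rare Gaussian jumps.
incr = sigma * np.sqrt(dt) * rng.normal(size=n)
incr += (rng.random(n) < 5 * dt) * rng.normal(0.0, 0.5, n)   # ~5 jumps on [0,T]

# Threshold proportional to the Levy modulus sqrt(2 dt log(1/dt));
# c is a tuning constant, and choosing it optimally is the subject of the talk.
c = 3.0 * sigma
r = c * np.sqrt(2 * dt * np.log(1 / dt))

iv_hat = np.sum(incr**2 * (np.abs(incr) <= r))
print(f"threshold estimate {iv_hat:.4f} vs true IV {sigma**2 * T:.4f}")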
Multivariate fatal shock models in large dimensions
Abstract
A classical construction principle for dependent failure times is to consider shocks that destroy components within a system. Shocks arrive at random times and can destroy arbitrary subsets of the system, thus introducing dependence. The seminal model, based on independent and exponentially distributed shocks, was presented by Marshall and Olkin in 1967; various generalizations have been proposed in the literature since then. Such models have applications in non-life insurance, e.g. insurance claims caused by floods, hurricanes, or other natural catastrophes. The simple interpretation of multivariate fatal shock models is clearly appealing, but the number of possible shocks makes them challenging to work with: recall that there are $2^d$ subsets of a set with $d$ components. In a series of papers we have identified mixture models based on suitable stochastic processes that give rise to a different, and numerically more convenient, stochastic interpretation. This representation is particularly useful for the development of efficient simulation algorithms. Moreover, it helps to define parametric families with a reasonable number of parameters. We review the recent literature on multivariate fatal shock models, extreme-value copulas, and related dependence structures. We also discuss applications and hierarchical structures. Finally, we provide a new characterization of the Marshall-Olkin distribution.
Authors: Mai, J.-F.; Scherer, M.
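To make the combinatorial burden concrete, here is a naive Python simulation of the classical Marshall-Olkin construction for small d, with hypothetical unit shock rates; it enumerates all $2^d - 1$ nonempty subsets, which is exactly what the mixture representations discussed in the talk avoid:

import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(2)
d = 3

# One independent exponential shock per nonempty subset of components.
subsets = list(chain.from_iterable(
    combinations(range(d), k) for k in range(1, d + 1)))
rates = {S: 1.0 for S in subsets}        # hypothetical shock intensities

def sample_failure_times():
    # Component i fails at the first arrival among shocks that contain i.
    arrival = {S: rng.exponential(1.0 / rates[S]) for S in subsets}
    return [min(t for S, t in arrival.items() if i in S) for i in range(d)]

print(sample_failure_times())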
The General Aggregation Property and its Application to Regime-Dependent Determinants of Variance, Skew and Jump Risk Premia
Abstract
Our general theory, which encompasses two different aggregation properties (Neuberger, 2012; Bondarenko, 2014), establishes a wide variety of new, unbiased and efficient risk-premia estimators. Empirical results on meticulously constructed daily, investable, constant-maturity S&P 500 higher-moment premia reveal significant, previously undocumented, regime-dependent behavior. The variance premium is fully priced by the Fama and French (2015) factors during the volatile regime, but has significant negative alpha in stable markets. Also, only during stable periods, a small but significant positive third-moment premium is not fully priced by the variance and equity premia. There is no evidence for a separate fourth-moment premium.
Computational Aspects of Robust Optimized Certainty Equivalent
Abstract
An extension of the expected shortfall, as well as of the value at risk, to model uncertainty has been proposed by Shige Peng.
In this talk we present a systematic extension to model uncertainty of the general class of optimized certainty equivalents, which includes the expected shortfall.
We show that its representation can be simplified in many cases for efficient computation.
In particular, we present results based on probability-model uncertainty specified through a Wasserstein metric and provide explicit solutions in that setting.
We further study the duality and representation of these robust risk measures.
This talk is based on joint work with Daniel Bartl and Ludovic Tangpi.
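For orientation, the optimized certainty equivalent of Ben-Tal and Teboulle, written in its loss-function form, and its robust counterpart read (our notation, a sketch rather than the talk's exact setup):

\[
\rho(X) = \inf_{\eta \in \mathbb{R}} \Big\{ \eta + \mathbb{E}\big[l(-X-\eta)\big] \Big\},
\qquad
\rho_{\mathrm{rob}}(X) = \inf_{\eta \in \mathbb{R}} \Big\{ \eta + \sup_{Q \in \mathcal{Q}} \mathbb{E}_Q\big[l(-X-\eta)\big] \Big\},
\]

where $l$ is a convex loss function and $\mathcal{Q}$ a set of plausible models, e.g. a Wasserstein ball around a reference measure; the choice $l(t) = t^+/\alpha$ recovers the expected shortfall at level $\alpha$.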
Cost efficient strategies under model ambiguity
Abstract
The solution to the standard cost-efficiency problem depends crucially on the fact that a single real-world measure P is available to the investor pursuing a cost-efficient approach. In most applications of interest, however, a historical measure is neither given nor can it be estimated accurately from the available data. To incorporate the uncertainty about the measure P into the cost-efficient approach, we assume that, instead of a single measure, a class of plausible prior models is available. We define the notion of robust cost-efficiency and highlight its link with the maxmin expected utility setting of Gilboa and Schmeidler (1989) and, more generally, with robust preferences in a possibly non-expected-utility setting.
This is joint work with Thibaut Lux and Steven Vanduffel (VUB)
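For readers new to the topic, the classical cost-efficiency problem of Dybvig (1988) seeks, given a state-price density $\xi_T$ and a target payoff distribution $F$ under the real-world measure $P$, the cheapest payoff with that distribution (our notation):

\[
\min_{X_T} \; \mathbb{E}[\xi_T X_T] \quad \text{subject to} \quad X_T \sim_P F.
\]

The robust notion discussed in the talk replaces the single measure $P$ by a class $\mathcal{P}$ of plausible priors.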
Martingale optimal transport - discrete to continuous
Abstract
In classical optimal transport, the contributions of Benamou–Brenier and
McCann regarding the time-dependent version of the problem are
cornerstones of the field and form the basis for a variety of
applications in other mathematical areas.
Based on a weak length relaxation, we suggest a Benamou-Brenier type
formulation of martingale optimal transport. We give an explicit
probabilistic representation of the optimizer for a specific cost
function, leading to a continuous Markov martingale M with several
notable properties: In a specific sense it mimics the movement of a
Brownian particle as closely as possible subject to the marginal
conditions at times 0 and 1. Similar to McCann's
displacement interpolation, M provides a time-consistent interpolation
between the marginal laws $\mu$ and $\nu$. For particular choices of the initial and
terminal law, M recovers archetypical martingales such as Brownian
motion, geometric Brownian motion, and the Bass martingale. Furthermore,
it yields a new approach to Kellerer’s theorem.
(based on joint work with J. Backhoff, M. Beiglböck, S. Källblad, and D.
Trevisan)
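For comparison, the classical Benamou-Brenier problem reads (our notation):

\[
\inf_{(\rho, v)} \int_0^1 \!\! \int |v_t(x)|^2 \, \rho_t(dx)\, dt
\quad \text{s.t.} \quad
\partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0, \quad \rho_0 = \mu, \;\; \rho_1 = \nu.
\]

A martingale analogue, roughly, optimizes over Itô martingales $dM_t = \sigma_t\, dB_t$ with $M_0 \sim \mu$ and $M_1 \sim \nu$, penalizing deviations of $\sigma_t$ from 1 so that $M$ stays as close to Brownian motion as the prescribed marginals allow.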
Information and Derivatives
Abstract
We study a dynamic multi-asset economy with private information, comprising a stock and a derivative. There are informed and uninformed investors, as well as boundedly rational investors trading on noise. The noisy rational expectations equilibrium is obtained in closed form. The equilibrium stock price follows a non-Markovian process, is positive, and has stochastic volatility. The derivative cannot be replicated, except at rare endogenous times. At any point in time, the derivative price adds information relative to the stock price, but the pair of prices is less informative than volatility, the residual demand, or the history of prices. The rank of the asset span drops at endogenous times, causing turbulent trading activity. The effects of financial innovation are discussed. The equilibrium is fully revealing if the derivative is not traded: financial innovation destroys information.