Rainbow Matchings in Properly Edge-Coloured Multigraphs
Abstract
Aharoni and Berger conjectured that in any bipartite multigraph that is properly edge-coloured by n colours with at least n+1 edges of each colour, there must be a matching that uses each colour exactly once (such a matching is called rainbow). This conjecture was recently proved asymptotically by Pokrovskiy. In this talk, I will consider the same question without the bipartiteness assumption. It turns out that in any multigraph with bounded edge multiplicities that is properly edge-coloured by n colours with at least n+o(n) edges of each colour, there must be a matching of size n-O(1) that uses each colour at most once. This is joint work with Peter Keevash.
Estimating internal furnace phenomena and changes in operating conditions by using data analysis (main topic) & Modelling injection and melting of metal fine particles in liquid metal reactor
Statistical Learning for Portfolio Tail Risk Measurement
Abstract
We consider calculation of VaR/TVaR capital requirements when the underlying economic scenarios are determined by simulatable risk factors. This problem involves computationally expensive nested simulation, since evaluating expected portfolio losses of an outer scenario (i.e., computing a conditional expectation) requires inner-level Monte Carlo. We introduce several inter-related machine learning techniques to speed up this computation, in particular by properly accounting for the simulation noise. Our main workhorse is an advanced Gaussian Process (GP) regression approach which uses nonparametric spatial modeling to efficiently learn the relationship between the stochastic factors defining scenarios and corresponding portfolio value. Leveraging this emulator, we develop sequential algorithms that adaptively allocate inner simulation budgets to target the quantile region. The GP framework also yields better uncertainty quantification for the resulting VaR/TVaR estimators that reduces bias and variance compared to existing methods. Time permitting, I will highlight further related applications of statistical emulation in risk management.
This is joint work with Jimmy Risk (Cal Poly Pomona).
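The following is a minimal sketch, in Python with scikit-learn, of the basic emulation workflow the abstract describes: noisy inner Monte Carlo estimates on a modest set of outer scenarios, a GP fit whose white-noise kernel accounts for the simulation noise, and a VaR read off the emulated loss surface. The one-dimensional risk factor, toy quadratic portfolio, and budgets are assumptions for illustration, not the authors' model or their sequential budget-allocation scheme.

```python
# Minimal sketch of GP emulation for nested simulation (illustrative assumptions only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def inner_loss(scenario, n_inner):
    """Noisy Monte Carlo estimate of the expected portfolio loss in one outer scenario.
    Toy quadratic portfolio driven by a single risk factor (assumption)."""
    shocks = rng.normal(size=n_inner)
    losses = (scenario + 0.3 * shocks) ** 2 - 1.0
    return losses.mean()

# Outer scenarios (simulatable risk factor) with cheap, noisy inner estimates.
n_outer, n_inner = 200, 50
scenarios = rng.normal(size=(n_outer, 1))
noisy_means = np.array([inner_loss(s[0], n_inner) for s in scenarios])

# GP regression with a WhiteKernel so the fit accounts for inner-simulation noise.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(scenarios, noisy_means)

# Emulated (de-noised) losses on many fresh scenarios give the quantile estimate.
eval_scenarios = rng.normal(size=(100_000, 1))
pred_losses = gp.predict(eval_scenarios)
var_99 = np.quantile(pred_losses, 0.99)
print(f"Emulated 99% VaR: {var_99:.3f}")
```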
Optimum thresholding using mean and conditional mean squared error
Abstract
Joint work with José E. Figueroa-López, Washington University in St. Louis
We consider a univariate semimartingale model X for (the logarithm of) an asset price, containing jumps having possibly infinite activity. The nonparametric threshold estimator \hat{IV}_n of the integrated variance IV := \int_0^T \sigma^2_s ds proposed in Mancini (2009) is constructed from observations on a discrete time grid: it sums the squared increments of the process when they are below a threshold, a deterministic function of the observation step and possibly of the coefficients of X. All threshold functions satisfying given conditions yield asymptotically consistent estimates of IV; however, the finite-sample properties of \hat{IV}_n can depend on the specific choice of the threshold.
We aim here at optimally selecting the threshold by minimizing either the estimation mean squared error (MSE) or the conditional mean squared error (cMSE). The latter criterion allows one to reach a threshold which is optimal not in mean but for the specific volatility and jump paths at hand.
A parsimonious characterization of the optimum is established, which turns out to be asymptotically proportional to the Lévy modulus of continuity of the underlying Brownian motion. Moreover, minimizing the cMSE enables us to propose a novel implementation scheme for approximating the optimal threshold. Monte Carlo simulations illustrate the superior performance of the proposed method.
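To make the estimator concrete, here is a minimal Python sketch (my own illustration, not the paper's implementation) of the truncated realized-variance estimator \hat{IV}_n: sum the squared increments of the observed log-price that fall below a threshold. The simulated jump-diffusion path, the power-law threshold, and all constants are assumptions chosen only for illustration; the paper is about choosing this threshold optimally.

```python
# Sketch of the threshold (truncated) realized-variance estimator on a toy path.
import numpy as np

rng = np.random.default_rng(1)

# Simulate a toy log-price: Brownian part (sigma = 0.3) plus compound Poisson jumps.
T, n = 1.0, 23_400
dt = T / n
sigma, jump_rate, jump_scale = 0.3, 5.0, 0.05
dW = sigma * np.sqrt(dt) * rng.normal(size=n)
jumps = rng.normal(scale=jump_scale, size=n) * (rng.random(n) < jump_rate * dt)
increments = dW + jumps

def threshold_iv(dx, dt, c=5.0, power=0.98):
    """Sum squared increments whose square lies below the threshold r(dt) = c * dt**power.
    The threshold is a deterministic function of the observation step (constants assumed)."""
    r = c * dt ** power
    return np.sum(dx ** 2 * (dx ** 2 <= r))

iv_hat = threshold_iv(increments, dt)
iv_true = sigma ** 2 * T  # integrated variance of the continuous part
print(f"threshold estimate: {iv_hat:.4f}   true IV: {iv_true:.4f}")
```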
Multivariate fatal shock models in large dimensions
Abstract
A classical construction principle for dependent failure times is to consider shocks that destroy components within a system. Shocks arrive over time and can destroy arbitrary subsets of the system, thus introducing dependence. The seminal model, based on independent and exponentially distributed shock arrival times, was presented by Marshall and Olkin in 1967; various generalizations have been proposed in the literature since then. Such models have applications in non-life insurance, e.g. insurance claims caused by floods, hurricanes, or other natural catastrophes. The simple interpretation of multivariate fatal shock models is clearly appealing, but the number of possible shocks makes them challenging to work with: recall that there are 2^d subsets of a set with d components. In a series of papers we have identified mixture models based on suitable stochastic processes that give rise to a different, and numerically more convenient, stochastic interpretation. This representation is particularly useful for the development of efficient simulation algorithms. Moreover, it helps to define parametric families with a reasonable number of parameters. We review the recent literature on multivariate fatal shock models, extreme-value copulas, and related dependence structures. We also discuss applications and hierarchical structures. Finally, we provide a new characterization of the Marshall-Olkin distribution.
Authors: Mai, J-F.; Scherer, M.;
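As a point of reference for the combinatorial blow-up mentioned above, here is a minimal Python sketch (my own illustration, not the authors' algorithms) of the classical Marshall-Olkin construction for small d: one independent exponential shock per nonempty subset of components, each component failing at the first shock that hits it. The shock rates are assumptions chosen for illustration.

```python
# Exhaustive Marshall-Olkin simulation over all 2^d - 1 shocks (only feasible for small d;
# the mixture representations discussed in the abstract avoid exactly this enumeration).
import itertools
import numpy as np

rng = np.random.default_rng(2)

def simulate_marshall_olkin(d, rates, n_samples):
    """Simulate n_samples failure-time vectors (T_1, ..., T_d).

    rates: dict mapping each nonempty subset (as a frozenset of component indices)
    to the rate of the exponential shock that destroys exactly that subset.
    """
    subsets = [frozenset(s) for r in range(1, d + 1)
               for s in itertools.combinations(range(d), r)]
    times = np.full((n_samples, d), np.inf)
    for s in subsets:
        lam = rates.get(s, 0.0)
        if lam <= 0.0:
            continue
        shock = rng.exponential(1.0 / lam, size=n_samples)  # arrival time of this shock
        for i in s:
            times[:, i] = np.minimum(times[:, i], shock)    # component dies at first hit
    return times

# Example with d = 3: singleton shocks at rate 1 and a global shock {0,1,2} at rate 0.2.
d = 3
rates = {frozenset({i}): 1.0 for i in range(d)}
rates[frozenset({0, 1, 2})] = 0.2
sample = simulate_marshall_olkin(d, rates, n_samples=10_000)
print("empirical mean failure times:", sample.mean(axis=0).round(3))
```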
The General Aggregation Property and its Application to Regime-Dependent Determinants of Variance, Skew and Jump Risk Premia
Abstract
Our general theory, which encompasses two different aggregation properties (Neuberger, 2012; Bondarenko, 2014), establishes a wide variety of new, unbiased and efficient risk premia estimators. Empirical results on meticulously constructed daily, investable, constant-maturity S&P500 higher-moment premia reveal significant, previously undocumented, regime-dependent behavior. The variance premium is fully priced by Fama and French (2015) factors during the volatile regime, but has significant negative alpha in stable markets. Also, only during stable periods, a small but significant positive third-moment premium is not fully priced by the variance and equity premia. There is no evidence for a separate fourth-moment premium.