Structure Constants and Integrable Bootstrap in Planar N=4 supersymmetric Yang-Mills theory
Abstract
We propose a non-perturbative formulation of structure constants of single trace operators in planar N=4 SYM. We match our results with both weak and strong coupling data available in the literature. Based on work with Benjamin Basso and Pedro Vieira.
Biological modelling: How to cope with always being wrong
15:45
Volatility is rough
Abstract
Estimating volatility from recent high frequency data, we revisit the question of the smoothness of the volatility process. Our main result is that log-volatility behaves essentially as a fractional Brownian motion with Hurst exponent H of order 0.1, at any reasonable time scale.
This leads us to adopt the fractional stochastic volatility (FSV) model of Comte and Renault.
We call our model Rough FSV (RFSV) to underline that, in contrast to FSV, H<1/2.
We demonstrate that our RFSV model is remarkably consistent with financial time series data; one application is that it enables us to obtain improved forecasts of realized volatility.
Furthermore, we find that although volatility does not have long memory in the RFSV model, classical statistical procedures aimed at detecting volatility persistence tend to conclude that long memory is present in data generated from it.
This sheds light on why long memory of volatility has been widely accepted as a stylized fact.
Finally, we provide a quantitative market microstructure-based foundation for our findings, relating the roughness of volatility to high frequency trading and order splitting.
This is joint work with Jim Gatheral and Thibault Jaisson.
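The roughness measurement described in the abstract can be illustrated with a small simulation. The sketch below (not the authors' code) generates a fractional Brownian motion path with H = 0.1 via the exact covariance and a Cholesky factor, then recovers H from the scaling of second moments of increments, the slope of log E|X_{t+Δ} − X_t|² against log Δ being 2H; all parameter values are illustrative.

```python
import numpy as np

# Simulate fBm with a small Hurst exponent H and recover H from the
# power-law scaling of second moments of increments.
rng = np.random.default_rng(0)
n, H = 1000, 0.1
t = np.arange(1, n + 1) / n
# Exact fBm covariance: 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})
s, u = np.meshgrid(t, t)
cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
fbm = np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)

# Regress log E|X_{t+lag} - X_t|^2 against log(lag): slope ~ 2H
lags = np.array([1, 2, 4, 8, 16, 32])
m2 = np.array([np.mean((fbm[lag:] - fbm[:-lag]) ** 2) for lag in lags])
H_hat = np.polyfit(np.log(lags), np.log(m2), 1)[0] / 2
print(f"estimated H = {H_hat:.2f}")  # should be near the input H = 0.1
```

Applied to realized-volatility proxies instead of a simulated path, the same regression is what yields the empirical estimate H ≈ 0.1 reported in the talk.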
Examples of 2d incompressible flows and certain model equations
Abstract
We will discuss 2d Euler and Boussinesq (incompressible) flows related to a possible boundary blow-up scenario for the 3d axi-symmetric case suggested by G. Luo and T. Hou, together with some easier model problems relevant for that situation.
15:45
Tail Estimates for Markovian Rough Paths
Abstract
We work in the context of Markovian rough paths associated to a class of uniformly subelliptic Dirichlet forms and prove an almost-Gaussian tail estimate for the accumulated local p-variation functional, which was introduced and studied by Cass, Litterer and Lyons. We comment on the significance of these estimates for a range of currently studied problems, including the recent results of Ni Hao, and of Chevyrev and Lyons.
14:15
Likelihood construction for discretely observed RDEs
Abstract
The main goal of the talk is to set up a framework for constructing the likelihood for discretely observed RDEs. The main idea is to construct a function mapping the discretely observed data to the corresponding increments of the driving noise. Once this is known, the likelihood of the observations can be written as the likelihood of the increments of the corresponding noise times the Jacobian correction.
Constructing a function mapping data to noise is equivalent to solving the inverse problem of recovering the input of the Itô map corresponding to the RDE, given its output. First, I will simplify the problem by assuming that the driving noise is linear between observations. Then, I will introduce an iterative process and show that it converges in p-variation to the piecewise linear path X corresponding to the observations. Finally, I will show that the total error in the likelihood construction is bounded in p-variation.
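The change-of-variables idea behind the likelihood construction can be sketched in the simplest setting: a one-dimensional SDE dY = a(Y) dt + b(Y) dX observed on a grid, with the Euler map inverted exactly between observations. The drift and diffusion functions below are hypothetical placeholders, and this toy inversion stands in for the iterative p-variation construction of the talk.

```python
import numpy as np

# Toy illustration of "likelihood of observations = likelihood of noise
# increments x Jacobian correction", via an invertible Euler map.
def a(y): return -y            # hypothetical drift
def b(y): return 1.0 + y ** 2  # hypothetical diffusion coefficient

def log_likelihood(y, dt):
    dy = np.diff(y)
    dx = (dy - a(y[:-1]) * dt) / b(y[:-1])  # map data -> noise increments
    # Gaussian log-density of Brownian increments, dx ~ N(0, dt)
    log_noise = -0.5 * np.sum(dx ** 2 / dt + np.log(2 * np.pi * dt))
    # Jacobian of the inverse map: |d(dx)/d(dy)| = 1 / b(y)
    log_jac = -np.sum(np.log(b(y[:-1])))
    return log_noise + log_jac

# Generate a synthetic observed path with the same Euler scheme.
rng = np.random.default_rng(2)
dt, m = 0.01, 500
y = np.empty(m); y[0] = 0.0
for k in range(m - 1):
    y[k + 1] = y[k] + a(y[k]) * dt + b(y[k]) * np.sqrt(dt) * rng.standard_normal()
print(log_likelihood(y, dt))
```

For an RDE driven by rougher noise the inverse map is no longer available in closed form, which is precisely what the iterative construction in the talk addresses.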
15:45
Multiplicative chaos theory and its applications
Abstract
Multiplicative chaos theory originated from the study of turbulence by Kolmogorov in the 1940s and was given a mathematical foundation by Kahane in the 1980s. Recently the theory has drawn much attention due to its connections to SLEs and statistical physics. In this talk I shall present some recent developments in multiplicative chaos theory, as well as its applications to Liouville quantum gravity.
14:15
Min-wise hashing for large-scale regression
Abstract
We consider the problem of large-scale regression where both the number of predictors, p, and the number of observations, n, may be on the order of millions or more. Computing a simple OLS or ridge regression estimator for such data, though potentially sensible from a purely statistical perspective (if n is large enough), can be a real computational challenge. One recent approach to tackling this problem in the common situation where the matrix of predictors is sparse is to first compress the data by mapping it to an n by L matrix with L << p, using a scheme called b-bit min-wise hashing (Li and König, 2011). We study this technique from a theoretical perspective and obtain finite-sample bounds on the prediction error of regression following such data compression, showing how it exploits the sparsity of the data matrix to achieve good statistical performance. Surprisingly, we also find that a main effects model in the compressed data is able to approximate an interaction model in the original data. Fitting interactions requires no modification of the compression scheme, only a higher-dimensional mapping with a larger L.
This is joint work with Nicolai Meinshausen (ETH Zürich).
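The compression step can be sketched as follows. This is a minimal illustration of b-bit min-wise hashing on a sparse binary design matrix, not the authors' implementation: for each of L independent random permutations, each row is summarized by the minimum permuted index of its nonzero coordinates, of which only the lowest b bits are kept. All sizes are illustrative.

```python
import numpy as np

# b-bit min-wise hashing: compress an n x p sparse binary matrix
# to an n x L matrix of b-bit hash values.
rng = np.random.default_rng(1)
n, p, L, b = 200, 10_000, 50, 2

# Sparse binary design matrix; each row has a handful of active predictors.
X = (rng.random((n, p)) < 0.002).astype(int)
X[:, 0] = 1  # guard against empty rows

perms = [rng.permutation(p) for _ in range(L)]
Z = np.empty((n, L), dtype=int)
for i in range(n):
    nz = np.flatnonzero(X[i])
    for l, perm in enumerate(perms):
        Z[i, l] = perm[nz].min() % (2 ** b)  # keep the lowest b bits

print(Z.shape)  # (200, 50)
```

A downstream OLS or ridge regression is then fit on the n x L matrix Z (commonly after one-hot coding the 2^b possible values of each column), which is where the finite-sample prediction-error bounds in the talk apply.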