Forthcoming events in this series


Thu, 04 Jun 2015

16:00 - 17:00
L4

Time-consistent stopping under decreasing impatience

Yu-Jui Huang
(Dublin City University)
Abstract

We present a dynamic theory for time-inconsistent stopping problems. The theory is developed under the paradigm of expected discounted payoff, where the process to stop is continuous and Markovian. We introduce equilibrium stopping policies, which are implementable stopping rules that take into account the change of preferences over time. When the discount function induces decreasing impatience, we establish a constructive method to find equilibrium policies. This yields a new class of stopping problems, solved via equilibrium policies, as opposed to classical optimal stopping. By studying the stopping of a one-dimensional Bessel process under hyperbolic discounting, we illustrate our theory explicitly.
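
To make decreasing impatience concrete, here is a minimal sketch using the standard hyperbolic discount function $\delta(t) = 1/(1+\beta t)$, one example of a discount function inducing decreasing impatience (the talk's framework is more general):

```python
import numpy as np

# Hyperbolic discount function, a standard example inducing decreasing
# impatience; the talk covers more general discount functions.
def delta(t, beta=1.0):
    return 1.0 / (1.0 + beta * t)

# Decreasing impatience: the one-period discount ratio delta(t+1)/delta(t)
# increases in t, i.e. a fixed delay matters less the further away it is.
for t in [0.0, 1.0, 5.0, 20.0]:
    print(t, delta(t + 1.0) / delta(t))
# The rising ratio is what drives time inconsistency: preferences between
# stopping now and stopping later can reverse as time passes.
```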

Thu, 28 May 2015

16:00 - 17:00
L4

Counterparty credit risk measurement: dependence effects, mitigating clauses and gap risk

Gianluca Fusai
(City University)
Abstract

In this talk, we aim to provide a valuation framework for counterparty credit risk based on a structural default model which incorporates jumps and dependence between the assets of interest. In this framework default is caused by the firm value falling below a prespecified threshold following unforeseeable shocks, which deteriorate its liquidity and ability to meet its liabilities. The presence of dependence between names captures wrong-way and right-way risk effects. The structural model traces back to Merton (1974), who considered only the possibility of default at the maturity of the contract; first-passage-time models, starting from the seminal contribution of Black and Cox (1976), extend the original framework to allow default at any time during the lifetime of the contract. However, as the driving risk process is Brownian motion, all these models suffer from vanishing credit spreads over short horizons, a feature not observed in reality. As a consequence, the Credit Value Adjustment (CVA) would be underestimated for short-term deals, as would the so-called gap risk, i.e. the unpredictable loss due to a jump event in the market. Improvements aimed at resolving this issue include, for example, random default barriers, time-dependent volatilities, and jumps. In this contribution, we adopt Lévy processes and capture dependence via a linear combination of two independent Lévy processes representing, respectively, the systematic risk factor and the idiosyncratic shock. We then apply this framework to the valuation of CVA and DVA for equity contracts such as forwards and swaps. The main focus is on the impact of correlation between entities on the value of CVA and DVA, with particular attention to wrong-way and right-way risk, the inclusion of mitigating clauses such as netting and collateral, and finally the impact of gap risk. Particular attention is also devoted to model calibration to market data and to the development of numerical methods adequate for the complexity of the model considered.

 
This is joint work with Laura Ballotta (Cass Business School, City University of London) and Daniele Marazzina (Department of Mathematics, Politecnico di Milano).
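
A minimal Monte Carlo sketch of the dependence construction above, assuming jump-diffusion (Brownian plus compound Poisson) building blocks, a flat default barrier, and illustrative parameters throughout; it estimates first-passage default probabilities rather than CVA itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_increments(n_paths, n_steps, dt, sigma, jump_rate, jump_std):
    """Increments of a jump-diffusion Levy process: Brownian part plus
    compound Poisson jumps with centred Gaussian sizes (conditional on
    n jumps in a step, their sum is N(0, n * jump_std**2))."""
    diff = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    n_jumps = rng.poisson(jump_rate * dt, (n_paths, n_steps))
    jumps = jump_std * np.sqrt(n_jumps) * rng.standard_normal((n_paths, n_steps))
    return diff + jumps

T, n_steps, n_paths = 1.0, 250, 20000
dt = T / n_steps

# Systematic factor Z and idiosyncratic shocks Y1, Y2: three independent
# Levy processes, combined linearly as in the construction described above.
dZ  = levy_increments(n_paths, n_steps, dt, 0.15, 3.0, 0.02)
dY1 = levy_increments(n_paths, n_steps, dt, 0.20, 5.0, 0.03)
dY2 = levy_increments(n_paths, n_steps, dt, 0.20, 5.0, 0.03)

a1, a2 = 0.8, 0.5                        # factor loadings: the dependence knob
X1 = np.cumsum(a1 * dZ + dY1, axis=1)    # log firm value, counterparty
X2 = np.cumsum(a2 * dZ + dY2, axis=1)    # log firm value, reference entity

barrier = np.log(0.7)                    # default if value falls to 70% of initial
d1 = X1.min(axis=1) <= barrier
d2 = X2.min(axis=1) <= barrier
print("P(default 1):", d1.mean(), " P(joint default):", (d1 & d2).mean())
```

Because the increments carry jumps, default can arrive by a sudden crossing of the barrier, which is exactly the gap risk the Brownian first-passage models miss.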
Thu, 21 May 2015

16:00 - 17:00
L4

Machine learning using Hawkes processes and concentration for matrix martingales

Prof. Stéphane Gaïffas
(CMAP, École Polytechnique)
Abstract

We consider the problem of unveiling the implicit network structure of user interactions in a social network, based only on high-frequency timestamps. Our inference is based on the minimization of the least-squares loss associated with a multivariate Hawkes model, penalized by $\ell_1$ and trace norms. We provide a first theoretical analysis of the generalization error for this problem, which includes sparsity and low-rank inducing priors. This result involves a new data-driven concentration inequality for matrix martingales in continuous time with observable variance, a result of independent interest. The analysis is based on a new supermartingale property of the trace exponential, proved using tools from stochastic calculus. A consequence of our analysis is the construction of sharply tuned $\ell_1$ and trace-norm penalizations that lead to a data-driven scaling of the variability of the information available for each user. Numerical experiments illustrate the strong improvements achieved by the use of such data-driven penalizations.
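
For concreteness, a minimal simulation of the model class, a bivariate Hawkes process with exponential kernels, via Ogata's thinning algorithm; the penalized least-squares estimation itself is beyond a short sketch, and all parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([0.5, 0.3])            # baseline intensities
A  = np.array([[0.4, 0.2],           # A[i, j]: total excitation of node i by an
               [0.1, 0.3]])          # event of node j; this is the matrix the
beta = 2.0                           # l1/trace-norm penalized estimator targets
T = 1000.0

events = [[], []]
g = np.zeros(2)                      # excitation part of the intensity vector
t = 0.0
while True:
    lam_bar = (mu + g).sum()         # with exponential kernels the intensity
    w = rng.exponential(1.0 / lam_bar)  # only decays between events, so the
    t += w                              # current value bounds it from above
    if t > T:
        break
    g *= np.exp(-beta * w)           # decay the excitation to the candidate time
    lam = mu + g
    u = rng.uniform(0.0, lam_bar)
    if u < lam.sum():                # accept the candidate (Ogata thinning)
        i = 0 if u < lam[0] else 1   # attribute the event to a component
        events[i].append(t)
        g += beta * A[:, i]          # kernel phi_ij(s) = A[i, j]*beta*exp(-beta*s)

print("event counts:", len(events[0]), len(events[1]))
```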

Thu, 14 May 2015

16:00 - 17:00
L2

Clearing the Jungle of Stochastic Optimization

Professor Warren Powell
(Princeton University)
Abstract

Stochastic optimization for sequential decision problems under uncertainty arises in many settings, and as a result has evolved under several canonical frameworks with names such as dynamic programming, stochastic programming, optimal control, robust optimization, and simulation optimization (to name a few).  This is in sharp contrast with the universally accepted canonical frameworks for deterministic math programming (or deterministic optimal control).  We have found that these competing frameworks are actually hiding different classes of policies to solve a single problem which encompasses all of these fields.  In this talk, I provide a canonical framework which, while familiar to some, is not universally used, but should be.  The framework involves optimizing an objective function over a class of policies, a step that can seem like mathematical hand waving.  We then identify four fundamental classes of policies, called policy function approximations (PFAs), cost function approximations (CFAs), policies based on value function approximations (VFAs), and lookahead policies (which themselves come in different flavors).  With the exception of CFAs, these policies have been widely studied under names that make it seem as if they are fundamentally different approaches (policy search, approximate dynamic programming or reinforcement learning, model predictive control, stochastic programming and robust optimization).  We use a simple energy storage problem to demonstrate that minor changes in the nature of the data can produce problems where each of the four classes might work best, or a hybrid.  This exercise supports our claim that any formulation of a sequential decision problem should start with a recognition that we need to search over a space of policies.

Thu, 07 May 2015

16:00 - 17:00
L4

The Robust Merton Problem of an Ambiguity Averse Investor

Sara Biagini
(Pisa University)
Abstract

We derive a closed form portfolio optimization rule for an investor who is diffident about mean return and volatility estimates, and has a CRRA utility. The novelty is that confidence is here represented using ellipsoidal uncertainty sets for the drift, given a volatility realization. This specification affords a simple and concise analysis, as the optimal portfolio allocation policy is shaped by a rescaled market Sharpe ratio, computed under the worst case volatility. The result is based on a max-min Hamilton-Jacobi-Bellman-Isaacs PDE, which extends the classical Merton problem and reverts to it for an ambiguity-neutral investor.
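
For orientation, the classical Merton fraction and a schematic version of the rescaling described above (the exact expression in the paper depends on how the ellipsoidal uncertainty set is parametrized; $\kappa$ and $\theta_{\bar\sigma}$ below are illustrative notation, not the paper's):

```latex
% Classical Merton fraction for CRRA utility with risk aversion \gamma:
\pi^{*} = \frac{\mu - r}{\gamma\,\sigma^{2}} = \frac{\theta}{\gamma\,\sigma},
\qquad
\theta := \frac{\mu - r}{\sigma}\quad\text{(market Sharpe ratio)}.
% Schematic robust version: the Sharpe ratio, evaluated under the worst-case
% volatility \bar{\sigma}, is shrunk by the radius \kappa of the drift ellipsoid:
\pi^{*}_{\mathrm{rob}} = \frac{\left(\theta_{\bar{\sigma}} - \kappa\right)^{+}}{\gamma\,\bar{\sigma}}.
```

An ambiguity-neutral investor corresponds to $\kappa = 0$ and $\bar\sigma = \sigma$, recovering the classical rule.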

Thu, 30 Apr 2015

16:00 - 17:00
L4

Utility-Risk Portfolio Selection

Dr Harry Zheng
(Imperial College)
Abstract

In this talk we discuss a utility-risk portfolio selection problem. By considering the first-order condition for the objective function, we derive a primitive static problem, called the Nonlinear Moment Problem, subject to a set of constraints involving nonlinear functions of "mean-field terms", which completely characterizes the optimal terminal wealth. Under a mild assumption on the utility, we establish the existence of optimal solutions for both the utility-downside-risk and the utility-strictly-convex-risk problems; these existence questions had long been open in the literature. In particular, the existence result for the utility-downside-risk problem contrasts with the mean-downside-risk problem considered in Jin-Yan-Zhou (2005), where the non-existence of an optimal solution is proved; we can recover the same non-existence result via the corresponding Nonlinear Moment Problem. This is joint work with K.C. Wong (University of Hong Kong) and S.C.P. Yam (Chinese University of Hong Kong).

Thu, 12 Mar 2015
16:00
L4

Implied Volatility of Leveraged ETF Options: Consistency and Scaling

Tim Siu-Tang Leung
(Columbia University)
Abstract

The growth of the exchange-traded fund (ETF) industry has given rise to the trading of options written on ETFs and their leveraged counterparts (LETFs). Motivated by a number of empirical market observations, we study the relationship between the ETF and LETF implied volatility surfaces under general stochastic volatility models. Analytic approximations for prices and implied volatilities are derived for LETF options, along with rigorous error bounds. These expressions make explicit the non-trivial dependence of prices and IVs on the leverage ratio. Moreover, we introduce a "moneyness scaling" procedure to enhance the comparison of implied volatilities across leverage ratios, and test it with empirical price data.
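
The structural relation underlying the moneyness scaling: if the LETF replicates $\beta$ times the ETF return, then in a continuous model with instantaneous volatility $\sigma_t$ (fees and financing ignored),

```latex
\frac{dL_t}{L_t} = \beta\,\frac{dS_t}{S_t}
\quad\Longrightarrow\quad
\frac{L_T}{L_0} = \left(\frac{S_T}{S_0}\right)^{\beta}
\exp\!\left(\frac{\beta - \beta^{2}}{2}\int_{0}^{T}\sigma_{t}^{2}\,dt\right),
```

so LETF log-moneyness corresponds to $\beta$ times ETF log-moneyness minus a realized-variance correction; this is why IVs can only be compared across leverage ratios after a rescaling of this kind.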

Thu, 05 Mar 2015
16:00
L4

Measures of Systemic Risk

Stefan Weber
(Leibniz Universität Hannover)
Abstract
Systemic risk refers to the risk that the financial system is susceptible to failures due to the characteristics of the system itself. The tremendous cost of this type of risk requires the design and implementation of tools for the efficient macroprudential regulation of financial institutions. We propose a novel approach to measuring systemic risk.

Key to our construction is a rigorous derivation of systemic risk measures from the structure of the underlying system and the objectives of a financial regulator. The suggested systemic risk measures express systemic risk in terms of capital endowments of the financial firms. Their definition requires two ingredients: first, a random field that assigns to the capital allocations of the entities in the system a relevant stochastic outcome. The second ingredient is an acceptability criterion, i.e. a set of random variables that identifies those outcomes that are acceptable from the point of view of a regulatory authority. Systemic risk is measured by the set of allocations of additional capital that lead to acceptable outcomes. The resulting systemic risk measures are set-valued and can be studied using methods from set-valued convex analysis. At the same time, they can easily be applied to the regulation of financial institutions in practice.
 
We explain the conceptual framework and the definition of systemic risk measures, provide an algorithm for their computation, and illustrate their application in numerical case studies. We apply our methodology to systemic risk aggregation as described in Chen, Iyengar & Moallemi (2013) and to network models as suggested in the seminal paper of Eisenberg & Noe (2001), see also Cifuentes, Shin & Ferrucci (2005), Rogers & Veraart (2013), and Awiszus & Weber (2015). This is joint work with Zachary G. Feinstein and Birgit Rudloff.
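
As one concrete ingredient, the Eisenberg & Noe (2001) network model cited above determines a clearing payment vector as a fixed point; a minimal sketch via Picard iteration with an illustrative three-bank system (the set-valued systemic risk measures are built on top of such maps):

```python
import numpy as np

def clearing_vector(L, e, tol=1e-12, max_iter=10000):
    """Eisenberg-Noe clearing payments. L[i, j] = nominal liability of bank i
    to bank j, e[i] = outside assets of bank i."""
    p_bar = L.sum(axis=1)                           # total nominal obligations
    with np.errstate(invalid="ignore", divide="ignore"):
        Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)
    p = p_bar.copy()                                # start from full payment
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, e + Pi.T @ p)     # pay in full, or all you have
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

L = np.array([[0., 8., 2.],                         # illustrative liabilities
              [4., 0., 4.],
              [2., 2., 0.]])
e = np.array([3., 1., 5.])                          # illustrative outside assets
p = clearing_vector(L, e)
print("clearing payments:", p, " defaults:", p < L.sum(axis=1) - 1e-9)
```

In the systemic risk measure, one asks which additional capital vectors, added to e, make the resulting system outcome acceptable; the set of all such vectors is the (set-valued) risk measure.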
Tue, 24 Feb 2015
12:30
Oxford-Man Institute

Measuring and predicting human behaviour using online data

Tobias Preis
(University of Warwick)
Abstract

In this talk, I will outline some recent highlights of our research, addressing two questions. Firstly, can big data resources provide insights into crises in financial markets? By analysing Google query volumes for search terms related to finance and views of Wikipedia articles, we find patterns which may be interpreted as early warning signs of stock market moves. Secondly, can we provide insight into international differences in economic wellbeing by comparing patterns of interaction with the Internet? To answer this question, we introduce a future-orientation index to quantify the degree to which Internet users seek more information about years in the future than years in the past. We analyse Google logs and find a striking correlation between a country's GDP and the predisposition of its inhabitants to look forward. Our results illustrate the potential that combining extensive behavioural data sets offers for a better understanding of large-scale human economic behaviour.
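
Schematically, the future-orientation index for a calendar year is just a ratio of search volumes; a toy version with hypothetical counts (not Google data):

```python
# Future-orientation index for year y: search volume for "y+1" divided by
# search volume for "y-1", computed during year y (counts here are made up).
def future_orientation(volumes, y):
    return volumes[y + 1] / volumes[y - 1]

volumes = {2012: 980, 2013: 1200, 2014: 1130}   # hypothetical query counts
print(future_orientation(volumes, 2013))        # > 1: users look forward more
```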

Thu, 19 Feb 2015
16:00
L1

Optimal casino betting: why lucky coins and good memory are important

Sang Hu
(National University of Singapore)
Abstract

We consider the dynamic casino gambling model initially proposed by Barberis (2012) and study the optimal stopping strategy of a pre-committing gambler with cumulative prospect theory (CPT) preferences. We illustrate how the strategies computed in Barberis (2012) can be strictly improved by reviewing the entire betting history or by tossing random coins, and explain that such improvement is possible because CPT preferences are not quasi-convex. Finally, we develop a systematic and analytical approach to finding the optimal strategy of the gambler. This is joint work with Prof. Xue Dong He (Columbia University), Prof. Jan Obloj, and Prof. Xun Yu Zhou.
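
For reference, a minimal CPT valuation of a binary gamble with power utility and the Tversky-Kahneman probability weighting (a single weighting parameter is used for gains and losses here, a simplification; parameter values are the usual illustrative ones). The non-linearity of the weighting is what breaks quasi-convexity and makes randomized strategies potentially valuable:

```python
# CPT value of a gamble paying x_gain with probability p and x_loss (< 0)
# otherwise, with power utility and Tversky-Kahneman probability weighting.
def w(p, delta=0.65):
    """TK probability weighting: overweights small probabilities."""
    return p**delta / (p**delta + (1 - p)**delta) ** (1 / delta)

def cpt_value(p, x_gain, x_loss, alpha=0.88, lam=2.25):
    gain = w(p) * x_gain**alpha                  # weighted utility of the gain
    loss = w(1 - p) * lam * (-x_loss)**alpha     # losses loom larger (lam > 1)
    return gain - loss

print(cpt_value(0.5, 200.0, -100.0))   # 50/50 bet: win 200 or lose 100
```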

Thu, 12 Feb 2015
16:00
L4

Discrete time approximation of HJB equations via BSDEs with nonpositive jumps

Idris Kharroubi
(Université Paris Dauphine)
Abstract
We propose a new probabilistic numerical scheme for fully nonlinear equations of Hamilton-Jacobi-Bellman (HJB) type associated to stochastic control problems, based on a recent Feynman-Kac representation by means of control randomization and backward stochastic differential equations (BSDEs) with nonpositive jumps. We study a discrete-time approximation for the minimal solution to this class of BSDEs when the time step goes to zero, which provides both an approximation for the value function and for an optimal control in feedback form. We obtain a convergence rate without any ellipticity condition on the controlled diffusion coefficient.

Thu, 05 Feb 2015
16:00
L1

Bridge Simulation and Estimation for Multivariate Stochastic Differential Equations

Michael Sørensen
(University of Copenhagen)
Abstract

New simple methods of simulating multivariate diffusion bridges, approximately and exactly, are presented. Diffusion bridge simulation plays a fundamental role in simulation-based likelihood inference for stochastic differential equations. By a novel application of classical coupling methods, the new approach generalizes the one-dimensional bridge-simulation method proposed by Bladt and Sørensen (2014) to the multivariate setting. A method of simulating approximate, but often very accurate, diffusion bridges is proposed. These approximate bridges are used as proposals in easily implementable MCMC algorithms that produce exact diffusion bridges. The new method is more generally applicable than previous methods because it does not require the existence of a Lamperti transformation, which rarely exists for multivariate diffusions. Another advantage is that the new method works well for diffusion bridges in long intervals because the computational complexity of the method is linear in the length of the interval. The usefulness of the new method is illustrated by an application to Bayesian estimation for the multivariate hyperbolic diffusion model.
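
A rough one-dimensional illustration of the coupling idea (the construction of Bladt and Sørensen (2014) uses the time-reversed diffusion; this sketch assumes a reversible one-dimensional ergodic diffusion, for which the time reversal has the same dynamics, and takes an Ornstein-Uhlenbeck process as example):

```python
import numpy as np

rng = np.random.default_rng(2)

def euler(x0, drift, sigma, T, n):
    """Euler-Maruyama path on [0, T]."""
    dt = T / n
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + drift(x[k]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def approx_bridge(a, b, drift, sigma, T=1.0, n=500, max_tries=10000):
    """Approximate (a -> b) bridge: run one path forward from a, another
    forward from b, time-reverse the second (valid here by reversibility),
    and splice the two at their first crossing. Retry until they cross."""
    for _ in range(max_tries):
        x = euler(a, drift, sigma, T, n)
        y = euler(b, drift, sigma, T, n)[::-1]          # reversed path, ends at b
        cross = np.nonzero(np.diff(np.sign(x - y)) != 0)[0]
        if cross.size:
            k = cross[0]
            return np.concatenate([x[:k + 1], y[k + 1:]])  # goes from a to b
    raise RuntimeError("no crossing found")

ou_drift = lambda x: -x                                  # Ornstein-Uhlenbeck drift
bridge = approx_bridge(-1.0, 1.5, ou_drift, sigma=1.0)
print(bridge[0], bridge[-1])                             # starts at a, ends at b
```

In the exact method, such approximate bridges serve as MCMC proposals, which is what makes the overall scheme produce exact bridges.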

 

The lecture is based on joint work presented in Bladt, Finch and Sørensen (2014).

References:

Bladt, M. and Sørensen, M. (2014): Simple simulation of diffusion bridges with application to likelihood inference for diffusions. Bernoulli, 20, 645-675.

Bladt, M., Finch, S. and Sørensen, M. (2014): Simulation of multivariate diffusion bridges. arXiv:1405.7728, pp. 1-30.

Thu, 29 Jan 2015
16:00
L4

Robust evaluation of risks under model uncertainty

Jocelyne Bion-Nadal
(CMAP, École Polytechnique)
Abstract

Dynamic risk measuring has been developed in recent years in the setting of a filtered probability space $(\Omega, (\mathcal{F}_t)_{t \geq 0}, P)$. In this setting the risk at time $t$ is given by an $\mathcal{F}_t$-measurable function defined as an "ess-sup" of conditional expectations. The property of time consistency has been characterized in this setting. Model uncertainty means that instead of a reference probability measure one considers a whole set of probability measures, which is furthermore non-dominated. For example, one needs this framework to evaluate risks robustly for derivative products when the underlying model is assumed to be a diffusion process with uncertain volatility. In this case every possible law for the underlying model is a probability measure solving the associated martingale problem, and the set of possible laws is non-dominated.

In the framework of model uncertainty we face two kinds of problems: first, the $Q$-conditional expectation is defined only up to a $Q$-null set; second, the supremum of an uncountable family of measurable maps need not be measurable. To overcome these problems we develop a new approach [1, 2] based on the martingale problem.

The martingale problem associated with a diffusion process with continuous coefficients was introduced and studied by Stroock and Varadhan [4]. It was extended by Stroock to the case of diffusion processes with Lévy generators [3]. We study [1] the martingale problem associated with jump diffusions whose coefficients are path-dependent. Under certain conditions on the path-dependent coefficients, we prove existence and uniqueness of a probability measure solution to the path-dependent martingale problem. Making use of the uniqueness of the solution, we prove a "Feller property". This allows us to construct a time-consistent robust evaluation of risks in the framework of model uncertainty [2].

References

[1] Bion-Nadal J., Martingale problem approach to path dependent diffusion processes with jumps, in preparation.

[2] Bion-Nadal J., Robust evaluation of risks from Martingale problem, in preparation.

[3] Stroock D., Diffusion processes associated with Lévy generators, Z. Wahrscheinlichkeitstheorie verw. Gebiete 32, pp. 209-244 (1975).

[4] Stroock D. and Varadhan S., Diffusion processes with continuous coefficients, I and II, Communications on Pure and Applied Mathematics, 22, pp. 345-400 (1969).

 

Thu, 22 Jan 2015
16:00
L4

A Mean-Field Game Approach to Optimal Execution

Sebastian Jaimungal
(University of Toronto)
Abstract

This paper introduces a mean field game framework for optimal execution with continuous trading. We generalize the classical optimal liquidation problem to a setting where, in addition to the major agent who is liquidating a large portion of shares, there are a number of minor agents (high-frequency traders (HFTs)) who detect and trade along with the liquidator. Cross interaction between the minor and major agents occurs through the impact that each trader has on the drift of the fundamental price. As in the classical approach, here, each agent is exposed to both temporary and permanent price impact and they attempt to balance their impact against price uncertainty. In all, this gives rise to a stochastic dynamic game with mean field couplings in the fundamental price. We obtain a set of decentralized strategies using a mean field stochastic control approach and explicitly solve for an epsilon-optimal control up to the solution of a deterministic fixed point problem. As well, we present some numerical results which illustrate how the liquidating agent's trading strategy is altered in the presence of the HFTs, and how the HFTs trade to profit from the liquidating agent's trading.
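
For context, the classical single-agent Almgren-Chriss schedule that the mean-field framework generalizes: with running inventory penalty $\phi$ and temporary impact $\eta$, the optimal inventory is $x_t = x_0 \sinh(\kappa(T-t))/\sinh(\kappa T)$ with urgency $\kappa = \sqrt{\phi/\eta}$. A sketch (no minor agents, illustrative parameters):

```python
import numpy as np

# Classical Almgren-Chriss liquidation schedule: the single-agent baseline
# that the mean-field model extends. Inventory penalty phi, temporary impact
# eta; urgency kappa = sqrt(phi / eta). All parameter values illustrative.
def ac_inventory(t, X0, T, phi, eta):
    kappa = np.sqrt(phi / eta)
    return X0 * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

T, X0 = 1.0, 1e5                        # one trading day, 100k shares
t = np.linspace(0.0, T, 11)
for phi in (0.01, 1.0, 100.0):          # more risk aversion: faster selling
    x = ac_inventory(t, X0, T, phi, eta=0.1)
    print(f"phi={phi:>6}: " + " ".join(f"{v:8.0f}" for v in x))
```

In the mean-field version, the HFTs' aggregate trading feeds back into the price drift, tilting this schedule away from the classical curve.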

[This is joint work with Mojtaba Nourian, Department of Statistical Sciences, U. Toronto]

Thu, 27 Nov 2014

16:00 - 17:30
L4

SDEs with weighted local times and discontinuous coefficients, transmission boundary conditions for semilinear PDEs, and related BSDEs

Professor Denis Talay
(INRIA)
Abstract

(Denis Talay, Inria; joint work with N. Champagnat, N. Perrin, S. Niklitschek Soto)

In this lecture we present recent results on SDEs with weighted local times and discontinuous coefficients. Their solutions allow one to construct probabilistic interpretations of semilinear PDEs with discontinuous coefficients and transmission boundary conditions in terms of BSDEs which do not satisfy classical conditions.

Tue, 18 Nov 2014

12:30 - 13:30
Oxford-Man Institute

tba

Dr. Joseph Engelberg
(UC San Diego)
Thu, 13 Nov 2014

16:00 - 17:30
L4

Optimal Stopping under Coherent Risk Measures

Professor Dr. Denis Belomestny
(Duisburg-Essen University)
Abstract

In this talk we consider optimal stopping problems under a class of coherent risk measures which includes such well-known examples as weighted AV@R and absolute semi-deviation risk measures. As a matter of fact, the dynamic versions of these risk measures do not have the so-called time-consistency property necessary for the dynamic programming approach, so the standard approaches are not applicable to optimal stopping problems under coherent risk measures. We prove a novel representation which relates the solution of an optimal stopping problem under a coherent risk measure to a sequence of standard optimal stopping problems, and hence makes the application of standard dynamic-programming-based approaches possible. In particular, we derive the analogue of the dual representations of Rogers and of Haugh and Kogan. Several numerical examples showing the usefulness of the new representation in applications are presented as well.

Thu, 06 Nov 2014

16:00 - 17:30
L4

Securitization and equilibrium pricing under relative performance concerns

Dr. Gonçalo dos Reis
(University of Edinburgh)
Abstract

We investigate the effects of a finite set of agents interacting socially in an equilibrium pricing mechanism. A derivative written on non-tradable underlyings is introduced to the market and priced in an equilibrium framework by agents who assess risk using convex dynamic risk measures expressed by Backward Stochastic Differential Equations (BSDE). An agent is not only exposed to financial and non-financial risk factors, but he also faces performance concerns with respect to the other agents. The equilibrium analysis leads to systems of fully coupled multi-dimensional quadratic BSDEs.

Within our proposed models we prove the existence and uniqueness of an equilibrium. We show that aggregation of risk measures is possible and that a representative agent exists. We analyze the impact of the problem's parameters on the pricing mechanism, in particular how the agents' concern rates affect prices and risk perception.

Fri, 31 Oct 2014

16:00 - 17:30
L4

Optimal Execution Strategies: The Special Case of Accelerated Share Repurchase (ASR) Contracts

Dr. Olivier Guéant
(Université Paris-Diderot)
Abstract

When firms want to buy back their own shares, they often use the services of investment banks through ASR contracts. ASR contracts are execution contracts including exotic option characteristics (an Asian-type payoff and Bermudan/American exercise dates). In this talk, I will present the different types of ASR contracts usually encountered, and I will present a model in order to (i) price ASR contracts and (ii) find the optimal execution strategy for each type of contract. This model is inspired by the classical (Almgren-Chriss) literature on optimal execution and uses classical ideas from option pricing. It can also be used to price options on illiquid assets. Original numerical methods will be presented.

Thu, 23 Oct 2014

16:00 - 17:30
L4

(Joint Nomura-OMI Seminar) The Use of Randomness in Time Series Analysis

Professor Piotr Fryzlewicz
(LSE)
Abstract
This is an exploratory talk in which we describe different potential uses of randomness in time series analysis.

In the first part, we talk about Wild Binary Segmentation for change-point detection, where randomness is used as a device for sampling from the space of all possible contrasts (change-point detection statistics) in order to reduce the computational complexity from cubic to just over linear in the number of observations, without compromising on the accuracy of change-point estimates. We also discuss an interesting related measure of change-point certainty/importance, and extensions to more general nonparametric problems.
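
A compact sketch of the random-interval device in Wild Binary Segmentation: draw many random subintervals, compute the CUSUM contrast on each, and return the strongest candidate (the full method thresholds this statistic and recurses on the split point; data here are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def cusum(x):
    """Max absolute CUSUM contrast over split points of x, and its argmax."""
    n = len(x)
    best, best_b = -np.inf, None
    csum, total = np.cumsum(x), np.sum(x)
    for b in range(1, n):                       # first segment is x[:b]
        m1, m2 = csum[b - 1] / b, (total - csum[b - 1]) / (n - b)
        stat = abs(np.sqrt(b * (n - b) / n) * (m1 - m2))
        if stat > best:
            best, best_b = stat, b
    return best, best_b

def wbs_candidate(x, n_intervals=500, min_len=10):
    """One WBS step: the best change-point candidate over random intervals."""
    n = len(x)
    best = (-np.inf, None)
    for _ in range(n_intervals):
        s, e = sorted(rng.integers(0, n, size=2))
        if e - s < min_len:
            continue
        stat, b = cusum(x[s:e])
        if stat > best[0]:
            best = (stat, s + b)                # map back to global index
    return best

x = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.5, 1, 200)])
print(wbs_candidate(x))          # should locate the change near index 300
```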

In the second part, we use random contemporaneous linear combinations of time series panel data coming from high-dimensional factor models and argue that this gives the effect of "compressively sensing" the components of the multivariate time series, often with not much loss of information but with reduction in the dimensionality of the model.

In the final part, we speculate on the use of random filtering in time series analysis. As an illustration, we show how the appropriate use of this device can reduce the problem of estimating changes in the autocovariance structure of the process to the problem of estimating changes in variance, the latter typically being an easier task.
 
Thu, 16 Oct 2014

16:00 - 17:30
L2

Theta in FX Volatility Modelling and Risk Management

David Shelton
(Merrill Lynch)
Abstract

From a theoretical point of view, theta is a relatively simple quantity: the rate of change in value of a financial derivative with respect to time. In a Black-Scholes world, the theta of a delta-hedged option can be viewed as 'rent' paid in exchange for gamma. This relationship is fundamental to the risk management of a derivatives portfolio. However, in the real world, the situation becomes significantly more complicated. In practice the model is continually being recalibrated, and whereas in the Black-Scholes world volatility is not a risk factor, in the real world it is stochastic and carries an associated risk premium. With the heightened interest in automation and electronic trading, we increasingly need to attempt to capture trading, marking and risk management practice algorithmically, and this requires careful consideration of the relationship between the risk-neutral and historical measures. In particular these effects need to be incorporated in order to make sense of theta and the time evolution of a derivatives portfolio in the historical measure.
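
The 'rent for gamma' relation referred to above: for a delta-hedged Black-Scholes position with zero rates, the pricing PDE reduces to

```latex
\Theta = -\tfrac{1}{2}\,\sigma^{2} S^{2}\,\Gamma ,
```

so over a short interval the hedged book's P&L is approximately $\tfrac{1}{2}\Gamma S^{2}\big(\sigma_{\text{realized}}^{2} - \sigma_{\text{implied}}^{2}\big)\,dt$: theta is the premium paid for long-gamma exposure to realized variance. Once volatility is stochastic and the model is continually recalibrated, as described above, this clean decomposition acquires additional terms (vega P&L, recalibration effects, volatility risk premia).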

Thu, 19 Jun 2014

16:00 - 17:30
L4

Multilevel Richardson-Romberg extrapolation for Monte Carlo simulation

Gilles Pages
(UPMC)
Abstract

This is joint work with V. Lemaire (LPMA-UPMC). We propose and analyze a Multilevel Richardson-Romberg (MLRR) estimator which combines the higher-order bias cancellation of the Multistep Richardson-Romberg (MSRR) method introduced in [Pagès 07] and the variance control resulting from the stratification in the Multilevel Monte Carlo (MLMC) method (see e.g. [Heinrich 01, M. Giles 08]). We show that in standard frameworks, like discretization schemes of diffusion processes, an assigned quadratic error $\varepsilon$ can be obtained with our MLRR estimator with a global complexity of $\log(1/\varepsilon)/\varepsilon^2$, instead of $(\log(1/\varepsilon))^2/\varepsilon^2$ with the standard MLMC method, at least when the weak error $\mathbb{E}[Y_h]-\mathbb{E}[Y_0]$ induced by the biased implemented estimator $Y_h$ can be expanded at any order in $h$. We analyze and compare these estimators on several numerical problems: option pricing (vanilla or exotic) using MC simulation and the less classical Nested Monte Carlo simulation (see [Gordy & Juneja 2010]).
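
For comparison with the complexity claim, a minimal standard MLMC estimator in the spirit of [M. Giles 08] for an Euler-discretized geometric Brownian motion (this is the plain MLMC baseline, not the MLRR estimator; parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def coupled_payoffs(n_paths, n_steps, S0=1.0, r=0.05, sig=0.2, K=1.0, T=1.0):
    """Euler GBM paths on a fine grid (n_steps) and the coupled coarse grid
    (n_steps/2, same Brownian increments). Returns discounted call payoffs."""
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    Sf = np.full(n_paths, S0)
    Sc = np.full(n_paths, S0)
    for k in range(n_steps):
        Sf = Sf * (1.0 + r * dt + sig * dW[:, k])
        if k % 2 == 1:               # one coarse step = two fine steps combined
            Sc = Sc * (1.0 + 2.0 * r * dt + sig * (dW[:, k - 1] + dW[:, k]))
    disc = np.exp(-r * T)
    return disc * np.maximum(Sf - K, 0.0), disc * np.maximum(Sc - K, 0.0)

# Telescoping sum E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}]:
# level l uses 2**l Euler steps, with fewer paths on the costlier fine levels.
levels, N = 6, 400000
est = 0.0
for l in range(levels + 1):
    n_l = max(2000, N // 4**l)
    pf, pc = coupled_payoffs(n_l, 2**l)
    est += pf.mean() if l == 0 else (pf - pc).mean()
print("MLMC estimate:", est)         # Black-Scholes value is about 0.1045
```

The MLRR estimator discussed in the talk additionally cancels higher-order bias terms across levels, which is what removes one $\log(1/\varepsilon)$ factor from the complexity.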

Thu, 12 Jun 2014

16:00 - 17:30
L4

CAPM, Stochastic Dominance, and Prospect Theory

Haim Levy
(Hebrew University of Jerusalem)
Abstract

Despite the theoretical and empirical criticisms of mean-variance (M-V) analysis and the CAPM, they are found in virtually all curricula. Why?