Forthcoming events in this series


Thu, 16 Jun 2016

16:00 - 17:30
L5

Mathematical Aspects of Systemic Risk

Hans Föllmer
(Humboldt Universität zu Berlin)
Abstract

We focus on the mathematical structure of systemic risk measures as proposed by Chen, Iyengar, and Moallemi (2013). In order to clarify the interplay between local and global risk assessment, we study the local specification of a systemic risk measure by a consistent family of conditional risk measures for smaller subsystems, and we discuss the appearance of phase transitions at the global level. This extends the analysis of spatial risk measures in Föllmer and Klüppelberg (2015).

Tue, 07 Jun 2016

12:30 - 13:30
Oxford-Man Institute

Complete-market stochastic volatility models (Joint seminar with OMI)

Mark Davis
(Imperial College, London)
Abstract
It is an old idea that incomplete markets should be completed by adding traded options as non-redundant
securities. While this is easy to show in a finite-state setting, getting a satisfactory theory in
continuous time has proved highly problematic. The goal is however worth pursuing since it would
provide arbitrage-free dynamic models for the whole volatility surface. In this talk we describe an
approach in which all prices in the market are functions of some underlying Markov factor process.
In this setting general conditions for market completeness were given in earlier work with J.Obloj,
but checking them in specific instances is not easy. We argue that Wishart processes are good
candidates for modelling the factor process, combining efficient computational methods with an
adequate correlation structure.

Thu, 02 Jun 2016

16:00 - 17:30
L4

CANCELLED

Nizar Touzi
(Ecole Polytechnique Paris)
Abstract

CANCELLED

Thu, 26 May 2016

16:00 - 17:30
L4

Dividends, capital injections and discrete observation effects in risk theory

Hansjoerg Albrecher
(Université de Lausanne)
Abstract

In the context of surplus models of insurance risk theory, 
some rather surprising and simple identities are presented. This 
includes an
identity relating level crossing probabilities of continuous-time models 
under (randomized) discrete and continuous observations, as well as
reflection identities relating dividend payments and capital injections. 
Applications as well as extensions to more general underlying processes are
discussed.

 

Thu, 19 May 2016

16:00 - 17:30
L4

Mathematical modelling of limit order books

Frédéric Abergel
(Ecole Centrale Paris)
Abstract

The limit order book is at the core of every modern, electronic financial market. In this talk, I will present some results pertaining to its statistical properties, mathematical modelling and numerical simulation. Questions such as ergodicity, dependencies and the relation between time scales will be addressed and, in some cases, answered. Some ongoing research projects, with applications to optimal trading and market making, will also be mentioned.

Thu, 12 May 2016

16:00 - 17:30
L4

Dynamic Mean Variance Asset Allocation: Numerics and Backtests

Peter Forsyth
(University of Waterloo Canada)
Abstract

This seminar is run jointly with OMI.

 

Throughout the Western world, defined benefit pension plans are disappearing, replaced by defined contribution (DC) plans. Retail investors are thus faced with managing investments over a thirty-year accumulation period followed by a twenty-year decumulation phase. Holders of DC plans are therefore truly long-term investors. We consider dynamic mean variance asset allocation strategies for long-term investors. We derive the "embedding result" which converts the mean variance objective into a form suitable for dynamic programming using an intuitive approach. We then discuss a semi-Lagrangian technique for numerical solution of the optimal control problem via a Hamilton-Jacobi-Bellman PDE. Parameters for the inflation adjusted return of a stock index and a risk-free bond are determined by examining 89 years of US data. Extensive synthetic market tests, and resampled backtests of historical data, indicate that the multi-period mean variance strategy achieves approximately the same expected terminal wealth as a constant weight strategy, while reducing the probability of shortfall by a factor of two to three.
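
The embedding result and the resulting strategy can be illustrated with a small Monte Carlo sketch. The code below is not the semi-Lagrangian PDE scheme of the talk: it uses the known closed-form solution of the embedded quadratic target problem in a frictionless Black-Scholes market (Zhou and Li, 2000), and all parameter values are illustrative rather than the calibrated ones from the talk.

```python
import numpy as np

# Illustrative market parameters (not the calibrated values from the talk)
mu, r, sigma, T = 0.08, 0.02, 0.20, 10.0
w0, gamma = 1.0, 2.0                 # initial wealth and embedded quadratic target
steps, paths = 400, 50_000
dt = T / steps
theta2 = ((mu - r) / sigma) ** 2     # squared Sharpe ratio

rng = np.random.default_rng(42)
w = np.full(paths, w0)
for k in range(steps):
    t = k * dt
    # Optimal dollar amount in stock for the embedded problem
    # min E[(W_T - gamma)^2]:  pi_t = ((mu-r)/sigma^2)(gamma e^{-r(T-t)} - W_t)
    amount = (mu - r) / sigma**2 * (gamma * np.exp(-r * (T - t)) - w)
    dB = rng.standard_normal(paths) * np.sqrt(dt)
    w = w + r * w * dt + amount * ((mu - r) * dt + sigma * dB)

# Closed-form mean of terminal wealth under this strategy
m_closed = gamma - (gamma - w0 * np.exp(r * T)) * np.exp(-theta2 * T)
print(f"MC mean terminal wealth: {w.mean():.4f}  (closed form {m_closed:.4f})")
print(f"shortfall prob P(W_T < w0 e^(rT)): {(w < w0 * np.exp(r * T)).mean():.3f}")
```

The Monte Carlo mean should match the closed-form mean up to discretisation and sampling error; the shortfall probability is the quantity compared against constant-weight strategies in the talk.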

Thu, 05 May 2016

16:00 - 17:30
L4

Quadratic BSDE systems and applications

Hao Xing
(London School of Economics)
Abstract

In this talk, we will establish existence and uniqueness for a wide class of Markovian systems of backward stochastic differential equations (BSDE) with quadratic nonlinearities. This class is characterized by an abstract structural assumption on the generator, an a priori local-boundedness property, and a locally Hölder-continuous terminal condition. We present easily verifiable sufficient conditions for these assumptions and treat several applications, including stochastic equilibria in incomplete financial markets, stochastic differential games, and martingales on Riemannian manifolds. This is joint work with Gordan Žitković.

Thu, 28 Apr 2016

16:00 - 17:30
L4

Branching diffusion representation of semilinear PDEs and Monte Carlo approximation

Xiaolu Tan
(Paris Dauphine University)
Abstract

We provide a representation result of parabolic semi-linear PDEs, with polynomial nonlinearity, by branching diffusion processes. We extend the classical representation for KPP equations, introduced by Skorokhod (1964), Watanabe (1965) and McKean (1975), by allowing for polynomial nonlinearity in the pair (u,Du), where u is the solution of the PDE with space gradient Du. As in the previous literature, our result requires a non-explosion condition which restricts it to "small maturity" or "small nonlinearity" of the PDE. Our main ingredient is the automatic differentiation technique as in Henry-Labordère, Tan and Touzi (2015), based on the Malliavin integration by parts, which allows us to account for the nonlinearities in the gradient. As a consequence, the particles of our branching diffusion are marked by the nature of the nonlinearity. This new representation has very important numerical implications as it is suitable for Monte Carlo simulation.
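
For the special case of the KPP nonlinearity u² and a constant initial condition g ≡ p, the spatial part drops out: the PDE reduces to the ODE u' = u² - u with a closed-form solution, and McKean's representation reads u(t) = E[p^{N_t}], where N_t is the particle count of a rate-1 binary branching (Yule) process. A minimal sketch of the Monte Carlo estimator in this special case (the gradient-dependent extension of the talk is not implemented here):

```python
import numpy as np

def yule_population(t, rng):
    """Number of particles at time t of a binary branching process,
    unit branching rate, starting from one particle."""
    n, s = 1, 0.0
    while True:
        s += rng.exponential(1.0 / n)   # next branching time: Exp(rate n)
        if s > t:
            return n
        n += 1                          # one particle splits into two

t, p, samples = 1.0, 0.5, 100_000
rng = np.random.default_rng(0)
est = np.mean([p ** yule_population(t, rng) for _ in range(samples)])

# Closed form of u' = u^2 - u, u(0) = p
exact = p * np.exp(-t) / (1.0 - p * (1.0 - np.exp(-t)))
print(f"branching MC estimate: {est:.4f}   exact: {exact:.4f}")
```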

Thu, 10 Mar 2016

16:00 - 17:30
L4

The eigenvalues and eigenvectors of the sample covariance matrix of heavy-tailed multivariate time series

Thomas Mikosch
(Dept of Mathematical Sciences, University of Copenhagen)
Abstract

This is joint work with Richard A. Davis (Columbia Statistics) and Johannes Heiny (Copenhagen). In recent years the sample covariance matrix of high-dimensional vectors with iid entries has attracted a lot of attention. A deep theory exists if the entries of the vectors are iid light-tailed; the Tracy-Widom distribution typically appears as weak limit of the largest eigenvalue of the sample covariance matrix. In the heavy-tailed case (assuming infinite 4th moments) the situation changes dramatically. Work by Soshnikov, Auffinger, Ben Arous and Péché shows that the largest eigenvalues are approximated by the points of a suitable nonhomogeneous Poisson process. We follow this line of research. First, we consider a p-dimensional time series with iid heavy-tailed entries where p is any power of the sample size n. The point process of the scaled eigenvalues of the sample covariance matrix converges weakly to a Poisson process. Next, we consider p-dimensional heavy-tailed time series with dependence through time and across the rows. In particular, we consider entries with a linear dependence or a stochastic volatility structure. In this case, the limiting point process is typically a Poisson cluster process. We discuss the suitability of the aforementioned models for large portfolios of return series.
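
The dominance of the largest entries in the infinite-variance case shows up in a quick simulation. The sketch below (illustrative parameters; tail index 1.5, so even second moments are infinite) compares the largest eigenvalue of XXᵀ with the largest squared entry, which the cited theory predicts to be asymptotically equivalent for tail indices in (0,2):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, alpha = 2000, 200, 1.5           # sample size, dimension, tail index

# iid symmetric Pareto-type entries: P(|X| > x) ~ x^{-alpha}
X = rng.pareto(alpha, size=(p, n)) * rng.choice([-1.0, 1.0], size=(p, n))

eig = np.linalg.eigvalsh(X @ X.T)      # unnormalised sample covariance spectrum
lam_max = eig[-1]
max_sq = (X ** 2).max()                # largest squared entry

print(f"largest eigenvalue:    {lam_max:.3e}")
print(f"largest squared entry: {max_sq:.3e}   ratio: {lam_max / max_sq:.3f}")
```

Note that lam_max is always at least max_sq (it dominates every diagonal entry of XXᵀ); the content of the limit theorem is that the ratio tends to one.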

Thu, 03 Mar 2016

16:00 - 17:30
L4

Stochastic Dependence, Extremal Risks and Optimal Payoffs

Ludger Rüschendorf
(Mathematische Stochastik, Albert-Ludwigs University of Freiburg)
Abstract

We describe the possible influence of stochastic dependence on the evaluation of the risk of joint portfolios and establish relevant risk bounds. Some basic tools for this purpose are the distributional transform, the rearrangement method and extensions of the classical Hoeffding-Fréchet bounds based on duality theory. On the other hand, these tools also find essential applications to various problems of optimal investment, to the construction of cost-efficient payoffs, as well as to various optimal hedging problems. We discuss in detail the case of optimal payoffs in Lévy market models as well as utility-optimal payoffs and hedging with state-dependent utilities.

Thu, 25 Feb 2016

16:00 - 17:30
L4

On data-based optimal stopping under stationarity and ergodicity

Michael Kohler
(Technische Universität Darmstadt)
Abstract

The problem of optimal stopping with finite horizon in discrete time
is considered in view of maximizing the expected gain. The algorithm
presented in this talk is completely nonparametric in the sense that it
uses observed data from the past of the process up to time -n+1 (n being
a natural number), not relying on any specific model assumption. Kernel
regression estimation of conditional expectations and prediction theory
of individual sequences are used as tools.
The main result is that the algorithm is universally consistent: the
achieved expected gain converges to the optimal value for n tending to
infinity, whenever the underlying process is stationary and ergodic.
An application to exercising American options is given.
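
Regression-based approximate dynamic programming for optimal stopping can be sketched in a few lines. The code below is a Longstaff-Schwartz-style scheme with polynomial regression standing in for the kernel estimators of the talk, applied to a Bermudan put under a model-based geometric Brownian motion; the talk's algorithm, in contrast, works directly on observed data without any model assumption.

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_dates, n_paths = 50, 50_000
dt = T / n_dates
rng = np.random.default_rng(7)

# Simulate GBM paths at the exercise dates
z = rng.standard_normal((n_paths, n_dates))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)       # value if held to maturity
for k in range(n_dates - 2, -1, -1):
    payoff *= np.exp(-r * dt)                # discount continuation cashflows
    itm = K - S[:, k] > 0
    if itm.sum() > 10:
        # Regress realised continuation value on a polynomial in the current price
        coef = np.polyfit(S[itm, k], payoff[itm], 3)
        cont = np.polyval(coef, S[itm, k])
        exercise = (K - S[itm, k]) > cont    # stop where immediate gain beats continuation
        idx = np.where(itm)[0][exercise]
        payoff[idx] = K - S[idx, k]
price = np.exp(-r * dt) * payoff.mean()      # discount from the first exercise date
print(f"Bermudan put (regression-based estimate): {price:.3f}")
```

With these parameters the estimate should sit between the European put value (about 5.57) and any reasonable upper bound for the American put.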

Thu, 18 Feb 2016

16:00 - 17:30
L4

A pathwise dynamic programming approach to nonlinear option pricing

Christian Bender
(Department of Mathematics, Saarland University)
Abstract

In this talk, we present a pathwise method to construct confidence 
intervals on the value of some discrete time stochastic dynamic 
programming equations, which arise, e.g., in nonlinear option pricing 
problems such as credit value adjustment and pricing under model 
uncertainty. Our method generalizes the primal-dual approach, which is 
popular and well-studied for Bermudan option pricing problems. In a 
nutshell, the idea is to derive a maximization problem and a 
minimization problem such that the value processes of both problems 
coincide with the solution of the dynamic program and such that 
optimizers can be represented in terms of the solution of the dynamic 
program. Applying an approximate solution to the dynamic program, which 
can be precomputed by any algorithm, then leads to `close-to-optimal' 
controls for these optimization problems and to `tight' lower and upper 
bounds for the value of the dynamic program, provided that the algorithm 
for constructing the approximate solution was `successful'. We 
illustrate the method numerically in the context of credit value 
adjustment and pricing under uncertain volatility.
The talk is based on joint work with C. Gärtner, N. Schweizer, and J. 
Zhuo.

Thu, 04 Feb 2016

16:00 - 17:30
L4

Optimal stopping/switching with delivery lags and delayed information

Gechun Liang
(King's College London)
Abstract

With few exceptions, optimal stopping models assume that the underlying system is stopped immediately after the decision is made. In practice, however, most stopping actions take time. This has been variously referred to as "time-to-build", "investment lag" and "gestation period", and is often non-negligible. In this talk, we consider a class of optimal stopping/switching problems with delivery lags, or equivalently, delayed information, using a reflected BSDE method. As an example, we study the American put option with delayed exercise, and show that it can be decomposed into a European put option and a premium, the latter of which involves a new optimal stopping problem where the investor decides when to stop to collect the Greek theta of such a European option. We also give a complete characterization of the optimal exercise boundary by resorting to free boundary analysis.

Joint work with Zhou Yang and Mihail Zervos. 

Thu, 28 Jan 2016

16:00 - 17:30
L4

Equilibrium in risk-sharing games

Kostas Kardaras
(Dept of Statistics, London School of Economics)
Abstract

The large majority of risk-sharing transactions involve few agents, each of whom can heavily influence the structure and the prices of securities. This paper proposes a game where agents' strategic sets consist of all possible sharing securities and pricing kernels that are consistent with Arrow-Debreu sharing rules. First, it is shown that agents' best response problems have unique solutions, even when the underlying probability space is infinite. The risk-sharing Nash equilibrium admits a finite-dimensional characterisation and it is proved to exist for a general number of agents and to be unique in the two-agent game. In equilibrium, agents choose to declare beliefs on future random outcomes different from their actual probability assessments, and the risk-sharing securities are endogenously bounded, implying (amongst other things) loss of efficiency. In addition, an analysis regarding extremely risk tolerant agents indicates that they profit more from the Nash risk-sharing equilibrium as compared to the Arrow-Debreu one.
(Joint work with Michail Anthropelos)

Thu, 21 Jan 2016

16:00 - 17:30
L4

Modelling sovereign risks: from a hybrid model to the generalized density approach

Ying Jiao
(Université Claude Bernard Lyon 1)
Abstract

Motivated by the European sovereign debt crisis, we propose a hybrid sovereign default model which combines an accessible part, which takes into account the movement of the sovereign solvency and the impact of critical political events, with a totally inaccessible part for the idiosyncratic credit risk. We obtain closed-form formulas for the probability that the default occurs at critical political dates in a Markovian CEV process setting. Moreover, we introduce a generalized density framework for the hybrid default times and deduce the compensator process of default. Finally we apply the hybrid model and the generalized density to the valuation of sovereign bonds and explain the significant jumps in the long-term government bond yield during the sovereign crisis.

Thu, 03 Dec 2015

16:00 - 17:30
L4

Predictable Forward Performance Processes (joint work with B. Angoshtari and X.Y. Zhou)

Thaleia Zariphopoulou
(University of Texas)
Abstract

In this talk, I will present a family of forward performance processes in
discrete time. These processes are predictable with regard to the market
information. Examples from a binomial setting will be given which include
the time-monotone exponential forward process and the completely monotonic
family.

Thu, 26 Nov 2015

16:00 - 17:30
L4

Nonlinear valuation under credit gap risk, collateral margins, funding costs and multiple curves

Damiano Brigo
(Imperial College London)
Abstract

Following a quick introduction to derivatives markets and the classic theory of valuation, we describe the changes triggered by post-2007 events. We re-discuss the valuation theory assumptions and introduce valuation under counterparty credit risk, collateral posting, initial and variation margins, and funding costs. A number of these aspects had been investigated well before 2007. We explain model dependence induced by credit effects, hybrid features, contagion, payout uncertainty, and nonlinear effects due to replacement closeout at default and possibly asymmetric borrowing and lending rates in the margin interest and in the funding strategy for the hedge of the relevant portfolio. Nonlinearity manifests itself in the valuation equations taking the form of semi-linear PDEs or Backward SDEs. We discuss existence and uniqueness of solutions for these equations. We present an invariance theorem showing that the final valuation equations do not depend on unobservable risk free rates, which become purely instrumental variables. Valuation is thus based only on real market rates and processes. We also present a high level analysis of the consequences of nonlinearities, both from the point of view of methodology and from an operational angle, including deal/entity/aggregation dependent valuation probability measures and the role of banks' treasuries. Finally, we hint at how one may connect these developments to interest rate theory under multiple discount curves, thus building a consistent valuation framework encompassing most post-2007 effects.

Damiano Brigo, Joint work with Andrea Pallavicini, Daniele Perini, Marco Francischello. 

Thu, 12 Nov 2015

16:00 - 17:30
L4

Safe-Haven CDS Premia

David Lando
(Copenhagen Business School)
Abstract

We argue that Credit Default Swap (CDS) premia for safe-haven sovereigns, like Germany and the United States, are driven to a large extent by regulatory requirements under which derivatives-dealing banks have an incentive to buy CDS to hedge the counterparty credit risk of their counterparties.
We explain the mechanics of the regulatory requirements and develop a model in which derivatives dealers, who have a derivatives exposure with sovereigns, need CDS for capital relief. End users without exposure to the sovereigns sell the CDS and require a positive premium equivalent to the capital requirement. The model's predictions are confirmed using data on several sovereigns.

 

Joint with OMI

Thu, 05 Nov 2015

16:00 - 17:30
L4

On multi-dimensional risk sharing problems

Guillaume Carlier
(Université Paris Dauphine)
Abstract

A well-known result of Landsberger and Meilijson says that efficient risk-sharing rules for univariate risks are characterized by a so-called comonotonicity condition. In this talk, I'll first discuss a multivariate extension of this result (joint work with R.-A. Dana and A. Galichon). Then I will discuss the restrictions (in the form of systems of nonlinear PDEs) efficient risk sharing imposes on individual consumption as a function of aggregate consumption. I'll finally give an identification result on how to recover preferences from the knowledge of the risk sharing (joint work with M. Aloqeili and I. Ekeland).

Thu, 29 Oct 2015

16:00 - 17:30
L4

Multi-Dimensional Backward Stochastic Differential Equations of Diagonally Quadratic generators

Ying Hu
(Université de Rennes 1 France)
Abstract

The talk is concerned with the adapted solution of a multi-dimensional BSDE with a "diagonally" quadratic generator, in which the quadratic part of the i-th component depends only on the i-th row of the second unknown variable. Local and global solutions are given. In our proofs, it is natural and crucial to apply both the John-Nirenberg and reverse Hölder inequalities for BMO martingales.

Tue, 20 Oct 2015

12:30 - 13:30
Oxford-Man Institute

On prospect theory in a dynamic context

Sebastian Ebert
(Tilburg University)
Abstract

We provide a result on prospect theory decision makers who are naïve about the time inconsistency induced by probability weighting. If a market offers a sufficiently rich set of investment strategies, investors postpone their trading decisions indefinitely due to a strong preference for skewness. We conclude that probability weighting in combination with naïveté leads to unrealistic predictions for a wide range of dynamic setups. Finally, I discuss recent work on the topic that invokes different assumptions on the dynamic modeling of prospect theory.

Thu, 15 Oct 2015

16:00 - 17:30
L4

Numerical approximation of irregular SDEs via Skorokhod embeddings

Stefan Ankirchner
(Friedrich-Schiller-Universität Jena)
Abstract

We provide a new algorithm for approximating the law of a one-dimensional diffusion M solving a stochastic differential equation with possibly irregular coefficients.
The algorithm is based on the construction of Markov chains whose laws can be embedded into the diffusion M with a sequence of stopping times. The algorithm does not require any regularity or growth assumption; in particular it applies to SDEs with coefficients that are nowhere continuous and that grow superlinearly. We show that if the diffusion coefficient is bounded and bounded away from 0, then our algorithm has a weak convergence rate of order 1/4. Finally, we illustrate the algorithm's performance with several examples.
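
A stripped-down version of the idea (my simplification, not the paper's exact construction): for dX = σ(X) dW one can run a coin-flip chain whose step from x has size chosen so that the chain embeds into the diffusion via exit times of intervals; taking each step to represent a time increment h, the simplest choice of step size is σ(x)√h. The point is that nothing beyond measurability is required of σ, so it is deliberately chosen discontinuous below:

```python
import numpy as np

def sigma(x):
    # Deliberately discontinuous diffusion coefficient
    return np.where(x > 0.0, 2.0, 1.0)

h, T, n_paths = 1e-3, 1.0, 50_000
rng = np.random.default_rng(3)

x = np.zeros(n_paths)
for _ in range(int(T / h)):
    # Coin-flip chain: from x, jump to x +/- sigma(x)*sqrt(h); each step
    # can be embedded into the diffusion via the exit time of the interval
    # (x - a, x + a) with a = sigma(x)*sqrt(h)
    step = sigma(x) * np.sqrt(h)
    x += step * rng.choice([-1.0, 1.0], size=n_paths)

print(f"mean of X_T: {x.mean():+.4f} (the chain is a martingale, so close to 0)")
print(f"std  of X_T: {x.std():.4f}")
```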

Thu, 18 Jun 2015

16:00 - 17:00
L1

Nomura-OMI Seminar: Optimal exit under moral hazard

Prof. Stephane Villeneuve
(University of Toulouse)
Abstract

We revisit the optimal exit problem by adding a moral hazard problem where a firm owner contracts out with an agent to run a project. We analyse the optimal contracting problem between the owner and the agent in a Brownian framework, when the latter modifies the project cash-flows with a hidden action. The analysis leads to the resolution of a constrained optimal stopping problem that we solve explicitly.

Tue, 09 Jun 2015

12:30 - 13:30
Oxford-Man Institute

Markets are Efficient if and only if P=NP

Philip Maymin
(NYU)
Abstract

I prove that if markets are weak-form efficient, meaning current prices fully reflect all information available in past prices, then P = NP, meaning every computational problem whose solution can be verified in polynomial time can also be solved in polynomial time. I also prove the converse by showing how we can "program" the market to solve NP-complete problems. Since P probably does not equal NP, markets are probably not efficient. Specifically, markets become increasingly inefficient as the time series lengthens or becomes more frequent. An illustration by way of partitioning the excess returns to momentum strategies based on data availability confirms this prediction.

For more info please visit: http://philipmaymin.com/academic-papers#pnp

Thu, 04 Jun 2015

16:00 - 17:00
L4

Time-consistent stopping under decreasing impatience

Yu-Jui Huang
(Dublin City University)
Abstract

We present a dynamic theory for time-inconsistent stopping problems. The theory is developed under the paradigm of expected discounted
payoff, where the process to stop is continuous and Markovian. We introduce equilibrium stopping policies, which are implementable
stopping rules that take into account the change of preferences over time. When the discount function induces decreasing impatience, we
establish a constructive method to find equilibrium policies. A new class of stopping problems, involving equilibrium policies, is
introduced, as opposed to classical optimal stopping. By studying the stopping of a one-dimensional Bessel process under hyperbolic discounting, we illustrate our theory in an explicit manner.
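
Decreasing impatience has a simple quantitative meaning: the discount ratio D(t+s)/D(t) over a fixed delay s increases with t. A quick numerical check (hyperbolic discounting as in the talk, exponential discounting for contrast; parameter values are arbitrary):

```python
import numpy as np

k, rho, s = 1.0, 0.5, 1.0                 # hyperbolic rate, exponential rate, fixed delay
t = np.linspace(0.0, 10.0, 101)

hyper = lambda u: 1.0 / (1.0 + k * u)     # hyperbolic discount function
expo = lambda u: np.exp(-rho * u)         # exponential discount function

ratio_h = hyper(t + s) / hyper(t)         # patience over delay s, starting at t
ratio_e = expo(t + s) / expo(t)

# Hyperbolic: ratio strictly increases in t -> decreasing impatience
print("hyperbolic ratio increasing:", bool(np.all(np.diff(ratio_h) > 0)))
# Exponential: ratio constant in t -> time-consistent preferences
print("exponential ratio constant: ", bool(np.allclose(ratio_e, ratio_e[0])))
```

It is exactly this drift in the ratio that makes naive hyperbolic discounters time-inconsistent and motivates the equilibrium stopping policies of the talk.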

Thu, 28 May 2015

16:00 - 17:00
L4

Counterparty credit risk measurement: dependence effects, mitigating clauses and gap risk

Gianluca Fusai
(City University)
Abstract

In this talk, we aim to provide a valuation framework for counterparty credit risk based on a structural default model which incorporates jumps and dependence between the assets of interest. In this framework, default is caused by the firm value falling below a prespecified threshold following unforeseeable shocks, which deteriorate its liquidity and ability to meet its liabilities. The presence of dependence between names captures wrong-way risk and right-way risk effects. The structural model traces back to Merton (1974), who considered only the possibility of default occurring at the maturity of the contract; first passage time models starting from the seminal contribution of Black and Cox (1976) extend the original framework to incorporate default events at any time during the lifetime of the contract. However, as the driving risk process used is Brownian motion, all these models suffer from vanishing credit spreads at short maturities - a feature not observed in reality. As a consequence, the Credit Value Adjustment (CVA) would be underestimated for short-term deals, as would the so-called gap risk, i.e. the unpredictable loss due to a jump event in the market. Improvements aimed at resolving this issue include, for example, random default barriers, time-dependent volatilities, and jumps. In this contribution, we adopt Lévy processes and capture dependence via a linear combination of two independent Lévy processes representing respectively the systematic risk factor and the idiosyncratic shock. We then apply this framework to the valuation of CVA and DVA related to equity contracts such as forwards and swaps. The main focus is on the impact of correlation between entities on the value of CVA and DVA, with particular attention to wrong-way risk and right-way risk, the inclusion of mitigating clauses such as netting and collateral, and finally the impact of gap risk.
Particular attention is also devoted to model calibration to market data, and development of adequate numerical methods for the complexity of the model considered.

 
This is joint work with 
Laura Ballotta (Cass Business School, City University of London) and 
Daniele Marazzina (Department of Mathematics, Politecnico of Milan).

Thu, 21 May 2015

16:00 - 17:00
L4

Machine learning using Hawkes processes and concentration for matrix martingales

Prof Stephane Gaiffas
(CMAP, École Polytechnique)
Abstract

We consider the problem of unveiling the implicit network structure of user interactions in a social network, based only on high-frequency timestamps. Our inference is based on the minimization of the least-squares loss associated with a multivariate Hawkes model, penalized by $\ell_1$ and trace norms. We provide a first theoretical analysis of the generalization error for this problem, that includes sparsity and low-rank inducing priors. This result involves a new data-driven concentration inequality for matrix martingales in continuous time with observable variance, which is a result of independent interest. The analysis is based on a new supermartingale property of the trace exponential, based on tools from stochastic calculus. A consequence of our analysis is the construction of sharply tuned $\ell_1$ and trace-norm penalizations, that leads to a data-driven scaling of the variability of information available for each user. Numerical experiments illustrate the strong improvements achieved by the use of such data-driven penalizations.
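
As background, a univariate Hawkes process with exponential kernel can be simulated by Ogata's thinning algorithm in a few lines; the multivariate, penalised-estimation setting of the talk starts from event data of exactly this type. Parameter values below are illustrative:

```python
import numpy as np

# Intensity: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)),
# stationary if the branching ratio alpha/beta < 1
mu, alpha, beta, T = 0.5, 0.8, 2.0, 2000.0
rng = np.random.default_rng(2)

t, S, events = 0.0, 0.0, []          # S tracks the excitation sum recursively
while True:
    M = mu + S                       # valid upper bound: intensity decays between events
    w = rng.exponential(1.0 / M)
    t += w
    S *= np.exp(-beta * w)           # excitation decays over the waiting time
    if t > T:
        break
    if rng.uniform() * M <= mu + S:  # thinning: accept with prob lambda(t)/M
        events.append(t)
        S += alpha                   # accepted event adds a jump to the intensity

rate = len(events) / T
print(f"{len(events)} events; empirical rate {rate:.3f} "
      f"vs stationary rate mu/(1 - alpha/beta) = {mu / (1 - alpha / beta):.3f}")
```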

Thu, 14 May 2015

16:00 - 17:00
L2

Clearing the Jungle of Stochastic Optimization

Professor Warren Powell
(Princeton University)
Abstract

Stochastic optimization for sequential decision problems under uncertainty arises in many settings, and as a result has evolved under several canonical frameworks with names such as dynamic programming, stochastic programming, optimal control, robust optimization, and simulation optimization (to name a few). This is in sharp contrast with the universally accepted canonical frameworks for deterministic math programming (or deterministic optimal control). We have found that these competing frameworks are actually hiding different classes of policies to solve a single problem which encompasses all of these fields. In this talk, I provide a canonical framework which, while familiar to some, is not universally used, but should be. The framework involves solving an objective function which requires searching over a class of policies, a step that can seem like mathematical hand waving. We then identify four fundamental classes of policies, called policy function approximations (PFAs), cost function approximations (CFAs), policies based on value function approximations (VFAs), and lookahead policies (which themselves come in different flavors). With the exception of CFAs, these policies have been widely studied under names that make it seem as if they are fundamentally different approaches (policy search, approximate dynamic programming or reinforcement learning, model predictive control, stochastic programming and robust optimization). We use a simple energy storage problem to demonstrate that minor changes in the nature of the data can produce problems where each of the four classes might work best, or a hybrid. This exercise supports our claim that any formulation of a sequential decision problem should start with a recognition that we need to search over a space of policies.
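
The "search over a space of policies" point can be made concrete with a toy storage problem and the simplest policy class, a policy function approximation (PFA): buy when the price is below b, sell when above s, and tune (b, s) by simulation. Everything below is an illustrative toy of my own, not the talk's benchmark problem:

```python
import numpy as np

rng = np.random.default_rng(11)
prices = rng.uniform(0.0, 1.0, size=(200, 500))   # paths x time, iid toy prices
CAP, RATE = 1.0, 0.25                             # storage capacity and charge rate

def evaluate(buy_below, sell_above, prices):
    """Average profit of the threshold PFA over simulated price paths."""
    profit = np.zeros(prices.shape[0])
    level = np.zeros(prices.shape[0])
    for t in range(prices.shape[1]):
        p = prices[:, t]
        buy = (p < buy_below) & (level < CAP)
        sell = (p > sell_above) & (level > 0.0)
        qb = np.where(buy, np.minimum(RATE, CAP - level), 0.0)
        qs = np.where(sell, np.minimum(RATE, level), 0.0)
        level += qb - qs
        profit += qs * p - qb * p
    return profit.mean()                          # stranded inventory valued at zero

# Policy search: grid over the two thresholds (b < s), including "never trade"
grid = np.linspace(0.0, 1.0, 11)
best = max(((evaluate(b, s, prices), b, s) for b in grid for s in grid if b < s))
print(f"best profit {best[0]:.3f} at buy<{best[1]:.2f}, sell>{best[2]:.2f}")
```

The grid search over (b, s) is the policy-search step; the other three policy classes in the talk replace the threshold rule with cost, value, or lookahead approximations while keeping the same simulation-based evaluation.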

Thu, 07 May 2015

16:00 - 17:00
L4

The Robust Merton Problem of an Ambiguity Averse Investor

Sara Biagini
(Pisa University)
Abstract

We derive a closed-form portfolio optimization rule for an investor who is diffident about mean return and volatility estimates, and has a CRRA utility. The novelty is that confidence is here represented using ellipsoidal uncertainty sets for the drift, given a volatility realization. This specification affords a simple and concise analysis, as the optimal portfolio allocation policy is shaped by a rescaled market Sharpe ratio, computed under the worst case volatility. The result is based on a max-min Hamilton-Jacobi-Bellman-Isaacs PDE, which extends the classical Merton problem and reverts to it for an ambiguity-neutral investor.
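
In one dimension the ellipsoidal drift uncertainty reduces to an interval, and the max-min logic can be carried out by hand: the adversary pushes the drift against the sign of the position, so the Merton fraction is computed with a Sharpe ratio shrunk toward zero. The sketch below works out this one-dimensional special case (notation mine); it is not the paper's multivariate formula.

```python
def robust_merton_fraction(mu_hat, r, eps, sigma, gamma):
    """Max-min optimal risky fraction for CRRA risk aversion gamma when the
    drift is only known to lie in [mu_hat - eps, mu_hat + eps]."""
    excess = mu_hat - r
    if excess > eps:
        worst = excess - eps       # long position: adversary lowers the drift
    elif excess < -eps:
        worst = excess + eps       # short position: adversary raises the drift
    else:
        worst = 0.0                # ambiguity swamps the signal: stay out
    return worst / (gamma * sigma ** 2)

print(robust_merton_fraction(0.08, 0.02, 0.00, 0.2, 2.0))  # classical Merton fraction
print(robust_merton_fraction(0.08, 0.02, 0.04, 0.2, 2.0))  # shrunk position
print(robust_merton_fraction(0.08, 0.02, 0.10, 0.2, 2.0))  # no investment
```

Setting eps = 0 recovers the classical Merton rule (mu - r)/(gamma sigma²), while large ambiguity forces the position to zero, mirroring the paper's rescaled worst-case Sharpe ratio.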

Thu, 30 Apr 2015

16:00 - 17:00
L4

Utility-Risk Portfolio Selection

Dr Harry Zheng
(Imperial College)
Abstract

In this talk we discuss a utility-risk portfolio selection problem. By considering the first order condition for the objective function, we derive a primitive static problem, called the Nonlinear Moment Problem, subject to a set of constraints involving nonlinear functions of "mean-field terms", to completely characterize the optimal terminal wealth. Under a mild assumption on utility, we establish the existence of optimal solutions for both the utility-downside-risk and utility-strictly-convex-risk problems; positive answers to both questions have long been missing in the literature. In particular, the existence result for the utility-downside-risk problem is in contrast with that for the mean-downside-risk problem considered in Jin-Yan-Zhou (2005), in which the non-existence of an optimal solution is proved instead; we can show the same non-existence result via the corresponding Nonlinear Moment Problem. This is joint work with K.C. Wong (University of Hong Kong) and S.C.P. Yam (Chinese University of Hong Kong).

Thu, 12 Mar 2015
16:00
L4

Implied Volatility of Leveraged ETF Options: Consistency and Scaling

Tim Siu-Tang Leung
(Columbia University)
Abstract

The growth of the exchange-traded fund (ETF) industry has given rise to the trading of options written on ETFs and their leveraged counterparts (LETFs). Motivated by a number of empirical market observations, we study the relationship between the ETF and LETF implied volatility surfaces under general stochastic volatility models. Analytic approximations for prices and implied volatilities are derived for LETF options, along with rigorous error bounds. In these price and IV expressions, we identify their non-trivial dependence on the leverage ratio. Moreover, we introduce a "moneyness scaling" procedure to enhance the comparison of implied volatilities across leverage ratios, and test it with empirical price data.
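
The baseline fact behind comparing IV surfaces across leverage ratios is easy to verify numerically: if the underlying ETF follows a geometric Brownian motion with volatility σ, then (ignoring fees and discrete rebalancing) a β-leveraged ETF is again a geometric Brownian motion with volatility |β|σ, so LETF implied volatilities are |β| times the ETF ones under Black-Scholes. A Monte Carlo check with illustrative parameters:

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call(S, K, r, vol, T):
    """Black-Scholes call price."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(S / K) + (r + 0.5 * vol**2) * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

L0, K, r, sigma, T, beta = 100.0, 100.0, 0.01, 0.2, 0.5, -2.0
rng = np.random.default_rng(5)
z = rng.standard_normal(400_000)

# Risk-neutral terminal value of the beta-LETF: GBM with volatility |beta|*sigma
LT = L0 * np.exp((r - 0.5 * (beta * sigma)**2) * T + beta * sigma * np.sqrt(T) * z)
mc = np.exp(-r * T) * np.maximum(LT - K, 0.0).mean()
bs = bs_call(L0, K, r, abs(beta) * sigma, T)
print(f"MC LETF call {mc:.3f}  vs BS price with vol |beta|*sigma: {bs:.3f}")
```

Under stochastic volatility this proportionality breaks down, which is precisely what the talk's moneyness-scaling procedure is designed to handle.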

Thu, 05 Mar 2015
16:00
L4

Measures of Systemic Risk

Stefan Weber
(Leibniz Universität Hannover)
Abstract
Systemic risk refers to the risk that the financial system is susceptible to failures due to the characteristics of the system itself. The tremendous cost of this type of risk requires the design and implementation of tools for the efficient macroprudential regulation of financial institutions. We propose a novel approach to measuring systemic risk.

Key to our construction is a rigorous derivation of systemic risk measures from the structure of the underlying system and the objectives of a financial regulator. The suggested systemic risk measures express systemic risk in terms of capital endowments of the financial firms. Their definition requires two ingredients: first, a random field that assigns to the capital allocations of the entities in the system a relevant stochastic outcome. The second ingredient is an acceptability criterion, i.e. a set of random variables that identifies those outcomes that are acceptable from the point of view of a regulatory authority. Systemic risk is measured by the set of allocations of additional capital that lead to acceptable outcomes. The resulting systemic risk measures are set-valued and can be studied using methods from set-valued convex analysis. At the same time, they can easily be applied to the regulation of financial institutions in practice.
 
We explain the conceptual framework and the definition of systemic risk measures, provide an algorithm for their computation, and illustrate their application in numerical case studies. We apply our methodology to systemic risk aggregation as described in Chen, Iyengar & Moallemi (2013) and to network models as suggested in the seminal paper of Eisenberg & Noe (2001), see also Cifuentes, Shin & Ferrucci (2005), Rogers & Veraart (2013), and Awiszus & Weber (2015). This is joint work with Zachary G. Feinstein and Birgit Rudloff.

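
As a concrete instance of the network models mentioned above, the Eisenberg & Noe (2001) clearing payment vector is the greatest fixed point p = min(p̄, e + Πᵀp), computable by the fictitious-default iteration. A minimal sketch on a hypothetical three-bank network:

```python
def clearing_vector(liabilities, external_assets, tol=1e-12):
    """Eisenberg-Noe clearing payments via fictitious-default iteration.
    liabilities[i][j] = nominal amount bank i owes bank j."""
    n = len(liabilities)
    p_bar = [sum(row) for row in liabilities]      # total nominal obligations
    # relative liability matrix: share of i's payments going to j
    pi = [[(liabilities[i][j] / p_bar[i]) if p_bar[i] > 0 else 0.0
           for j in range(n)] for i in range(n)]
    p = p_bar[:]                                   # start from full payment
    while True:
        # assets of i = external assets + payments received from others
        assets = [external_assets[i] + sum(pi[j][i] * p[j] for j in range(n))
                  for i in range(n)]
        p_new = [min(p_bar[i], assets[i]) for i in range(n)]
        if max(abs(a - b) for a, b in zip(p, p_new)) < tol:
            return p_new
        p = p_new

# hypothetical network: 0 owes 10 to 1, 1 owes 10 to 2, 2 owes 5 to 0
L = [[0, 10, 0],
     [0, 0, 10],
     [5, 0, 0]]
e = [2, 1, 12]
p = clearing_vector(L, e)
# p satisfies p = min(p_bar, e + Pi^T p) componentwise
```

The iteration starts from full payment and decreases monotonically, so it terminates at the greatest clearing vector; here banks 0 and 1 default partially while bank 2 pays in full.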
Tue, 24 Feb 2015
12:30
Oxford-Man Institute

Measuring and predicting human behaviour using online data

Tobias Preis
(University of Warwick)
Abstract

In this talk, I will outline some recent highlights of our research, addressing two questions. Firstly, can big data resources provide insights into crises in financial markets? By analysing Google query volumes for search terms related to finance and views of Wikipedia articles, we find patterns which may be interpreted as early warning signs of stock market moves. Secondly, can we provide insight into international differences in economic wellbeing by comparing patterns of interaction with the Internet? To answer this question, we introduce a future-orientation index to quantify the degree to which Internet users seek more information about years in the future than years in the past. We analyse Google logs and find a striking correlation between a country's GDP and the predisposition of its inhabitants to look forward. Our results illustrate the potential that combining extensive behavioural data sets offers for a better understanding of large scale human economic behaviour.
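
The published index aggregates query volumes over many weeks and countries; the core ratio can be sketched as follows, with a hypothetical search-volume table:

```python
def future_orientation_index(volumes, current_year):
    """Toy future-orientation index: ratio of search volume for the
    next calendar year to that for the previous one.
    volumes: dict mapping a year (int, as a search term) to its volume."""
    return volumes[current_year + 1] / volumes[current_year - 1]

# hypothetical volumes for the query terms "2013" and "2015" during 2014
volumes = {2013: 80_000, 2015: 120_000}
foi = future_orientation_index(volumes, 2014)
# foi > 1 suggests users look forward more than back
```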

Thu, 19 Feb 2015
16:00
L1

Optimal casino betting: why lucky coins and good memory are important

Sang Hu
(National University of Singapore)
Abstract

We consider the dynamic casino gambling model initially proposed by Barberis (2012) and study the optimal stopping strategy of a pre-committing gambler with cumulative prospect theory (CPT) preferences. We illustrate how the strategies computed in Barberis (2012) can be strictly improved by reviewing the entire betting history or by tossing random coins, and explain that such improvement is possible because CPT preferences are not quasi-convex. Finally, we develop a systematic and analytical approach to finding the optimal strategy of the gambler. This is joint work with Prof. Xue Dong He (Columbia University), Prof. Jan Obloj, and Prof. Xun Yu Zhou.
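
A minimal CPT evaluation of a finite gamble, using a Tversky-Kahneman power value function and Prelec probability weighting (generic textbook parameters, not those calibrated in Barberis 2012):

```python
import math

def cpt_value(gamble, alpha=0.88, lam=2.25, gamma=0.65):
    """CPT value of a finite gamble [(outcome, prob), ...] (toy sketch).
    Power value function, Prelec weighting, rank-dependent decision
    weights applied separately to gains and losses."""
    def v(x):  # S-shaped value function with loss aversion lam
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha
    def w(p):  # Prelec probability weighting
        return math.exp(-(-math.log(p)) ** gamma) if p > 0 else 0.0
    gains = sorted([(x, p) for x, p in gamble if x >= 0], reverse=True)
    losses = sorted([(x, p) for x, p in gamble if x < 0])
    total = 0.0
    for ordered in (gains, losses):   # rank outcomes from most extreme
        cum = 0.0
        for x, p in ordered:
            total += v(x) * (w(cum + p) - w(cum))   # decision weight
            cum += p
    return total

# fair coin paying +1 or -1: loss aversion makes it unattractive
val = cpt_value([(1, 0.5), (-1, 0.5)])
```

Because such a functional is not quasi-convex in the distribution, a randomized (coin-tossing) stopping rule can strictly dominate every deterministic one, which is the phenomenon the talk exploits.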

Thu, 12 Feb 2015
16:00
L4

Discrete time approximation of HJB equations via BSDEs with nonpositive jumps

Idris Kharroubi
(Université Paris Dauphine)
Abstract
We propose a new probabilistic numerical scheme for fully nonlinear equations of Hamilton-Jacobi-Bellman (HJB) type associated to stochastic control problems, based on a recent Feynman-Kac representation by means of control randomization and a backward stochastic differential equation (BSDE) with nonpositive jumps. We study a discrete time approximation for the minimal solution to this class of BSDE as the time step goes to zero, which provides an approximation both for the value function and for an optimal control in feedback form. We obtain a convergence rate without any ellipticity condition on the controlled diffusion coefficient.

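
The backbone of any such scheme is the backward induction Y_i = E_i[Y_{i+1}] + f(·) Δt. The talk's scheme handles fully nonlinear HJB equations via control randomization and regression; the sketch below only illustrates the backward induction for a toy linear driver f(y) = -r y, with the conditional expectations computed exactly on a recombining random-walk tree instead of by regression:

```python
import math

def bsde_backward_tree(terminal, r=0.05, T=1.0, n=200):
    """Explicit backward scheme Y_i = E_i[Y_{i+1}] * (1 - r*dt) for the
    linear BSDE with driver f(y) = -r*y, on a recombining random-walk
    tree approximating Brownian motion W (toy sketch).
    terminal: function of W_T giving the terminal condition xi."""
    dt = T / n
    dw = math.sqrt(dt)
    # values at the n+1 terminal nodes W_T = (2k - n) * dw
    y = [terminal((2 * k - n) * dw) for k in range(n + 1)]
    for step in range(n, 0, -1):
        # conditional expectation = average over the two children nodes
        y = [0.5 * (y[k] + y[k + 1]) * (1 - r * dt) for k in range(step)]
    return y[0]

y0 = bsde_backward_tree(lambda w: max(w, 0.0))
# closed form for this linear BSDE: exp(-r*T) * E[max(W_T, 0)]
#                                 = exp(-r) / sqrt(2*pi) for T = 1
```

In the genuinely nonlinear case the tree is replaced by simulated paths and the conditional expectations by least-squares regression, which is where the convergence analysis becomes delicate.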
Thu, 05 Feb 2015
16:00
L1

Bridge Simulation and Estimation for Multivariate Stochastic Differential Equations

Michael Sørensen
(University of Copenhagen)
Abstract

New simple methods of simulating multivariate diffusion bridges, approximately and exactly, are presented. Diffusion bridge simulation plays a fundamental role in simulation-based likelihood inference for stochastic differential equations. By a novel application of classical coupling methods, the new approach generalizes the one-dimensional bridge-simulation method proposed by Bladt and Sørensen (2014) to the multivariate setting. A method of simulating approximate, but often very accurate, diffusion bridges is proposed. These approximate bridges are used as proposal for easily implementable MCMC algorithms that produce exact diffusion bridges. The new method is more generally applicable than previous methods because it does not require the existence of a Lamperti transformation, which rarely exists for multivariate diffusions. Another advantage is that the new method works well for diffusion bridges in long intervals because the computational complexity of the method is linear in the length of the interval. The usefulness of the new method is illustrated by an application to Bayesian estimation for the multivariate hyperbolic diffusion model.

 

The lecture is based on joint work presented in Bladt, Finch and Sørensen (2014).

References:

Bladt, M. and Sørensen, M. (2014): Simple simulation of diffusion bridges with application to likelihood inference for diffusions. Bernoulli, 20, 645-675.

Bladt, M., Finch, S. and Sørensen, M. (2014): Simulation of multivariate diffusion bridges. arXiv:1405.7728, pp. 1-30.
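
The rough idea of the coupling construction can be sketched in the simplest possible setting, Brownian motion (where exact bridges are of course available in closed form): run one path forward from a, time-reverse an independent path started from b, and splice the two at their first crossing. This is only an illustration of the splicing idea; the papers treat general (multivariate) diffusions via time reversal of the dynamics and add an MCMC correction to obtain exact bridges.

```python
import math
import random

random.seed(1)

def approx_brownian_bridge(a, b, T=1.0, n=500, max_tries=1000):
    """Approximate (a -> b) bridge on [0, T] by the splicing idea:
    forward path from a, time-reversed independent path from b,
    concatenated at their first crossing (toy sketch)."""
    dt = T / n
    for _ in range(max_tries):
        x, y = [a], [b]
        for _ in range(n):
            x.append(x[-1] + math.sqrt(dt) * random.gauss(0, 1))
            y.append(y[-1] + math.sqrt(dt) * random.gauss(0, 1))
        y_rev = y[::-1]                 # time-reversed path: ends at b
        s0 = x[0] - y_rev[0]
        for i in range(n + 1):
            if (x[i] - y_rev[i]) * s0 <= 0:   # first crossing
                return x[:i] + y_rev[i:]
        # no crossing on this attempt: resample both paths
    raise RuntimeError("no crossing found")

bridge = approx_brownian_bridge(0.0, 1.0)
# bridge[0] == 0.0 and bridge[-1] == 1.0 by construction
```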

Thu, 29 Jan 2015
16:00
L4

Robust evaluation of risks under model uncertainty

Jocelyne Bion-Nadal
(CMAP ecole polytechnique)
Abstract

Dynamic risk measuring has been developed in recent years in the setting of a filtered probability space (Ω, (F_t)_{t≥0}, P). In this setting the risk at time t is given by an F_t-measurable function defined as an "ess-sup" of conditional expectations. The property of time consistency has been characterized in this setting. Model uncertainty means that instead of a reference probability measure one considers a whole set of probability measures which is furthermore non-dominated. For example, one needs to work in this framework to make a robust evaluation of risks for derivative products when one assumes that the underlying model is a diffusion process with uncertain volatility. In this case every possible law for the underlying model is a probability measure solving the associated martingale problem, and the set of possible laws is non-dominated.

In the framework of model uncertainty we face two kinds of problems. First, the Q-conditional expectation is defined only up to a Q-null set; second, the supremum of an uncountable family of measurable maps need not be measurable. To overcome these problems we develop a new approach [1, 2] based on the "Martingale Problem".

The martingale problem associated with a diffusion process with continuous coefficients was introduced and studied by Stroock and Varadhan [4]. It was extended by Stroock to the case of diffusion processes with Lévy generators [3]. We study [1] the martingale problem associated with jump diffusions whose coefficients are path-dependent. Under certain conditions on the path-dependent coefficients, we prove existence and uniqueness of a probability measure solution to the path-dependent martingale problem. Making use of the uniqueness of the solution, we prove a "Feller property". This allows us to construct a time-consistent robust evaluation of risks in the framework of model uncertainty [2].

References

[1] Bion-Nadal J., Martingale problem approach to path dependent diffusion processes with jumps, in preparation.

[2] Bion-Nadal J., Robust evaluation of risks from Martingale problem, in preparation.

[3] Stroock D., Diffusion processes associated with Lévy generators, Z. Wahrscheinlichkeitstheorie verw. Gebiete, 32, pp. 209-244 (1975).

[4] Stroock D. and Varadhan S., Diffusion processes with continuous coefficients, I and II, Communications on Pure and Applied Mathematics, 22, pp. 345-400 (1969).

 

Thu, 22 Jan 2015
16:00
L4

A Mean-Field Game Approach to Optimal Execution

Sebastian Jaimungal
(University of Toronto)
Abstract

This paper introduces a mean field game framework for optimal execution with continuous trading. We generalize the classical optimal liquidation problem to a setting where, in addition to the major agent who is liquidating a large portion of shares, there are a number of minor agents (high-frequency traders, HFTs) who detect and trade along with the liquidator. Cross interaction between the minor and major agents occurs through the impact that each trader has on the drift of the fundamental price. As in the classical approach, each agent is exposed to both temporary and permanent price impact and attempts to balance this impact against price uncertainty. In all, this gives rise to a stochastic dynamic game with mean field couplings in the fundamental price. We obtain a set of decentralized strategies using a mean field stochastic control approach and explicitly solve for an epsilon-optimal control up to the solution of a deterministic fixed point problem. We also present numerical results which illustrate how the liquidating agent's trading strategy is altered in the presence of the HFTs, and how the HFTs trade to profit from the liquidating agent's trading.
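
For reference, the single-agent benchmark being generalized has a well-known closed form: in the continuous-time Almgren-Chriss model with risk aversion λ, volatility σ and temporary impact η, the optimal inventory is x(t) = X sinh(κ(T-t))/sinh(κT) with κ = sqrt(λσ²/η). A sketch with hypothetical parameters:

```python
import math

def almgren_chriss_inventory(X, T, lam, sigma, eta, n=100):
    """Optimal liquidation inventory x(t) = X*sinh(k*(T-t))/sinh(k*T),
    k = sqrt(lam*sigma^2/eta), from the classical single-agent
    Almgren-Chriss model (no HFT interaction; toy parameters)."""
    k = math.sqrt(lam * sigma ** 2 / eta)
    ts = [T * i / n for i in range(n + 1)]
    return ts, [X * math.sinh(k * (T - t)) / math.sinh(k * T) for t in ts]

ts, inv = almgren_chriss_inventory(X=1e6, T=1.0, lam=1e-6, sigma=0.3, eta=1e-6)
# inventory starts at X and decays monotonically to 0 at time T
```

Higher risk aversion (larger κ) front-loads the selling; the mean-field extension in the talk perturbs this schedule through the HFTs' effect on the price drift.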

[ This is joint work with Mojtaba Nourin, Department of Statistical Sciences, U. Toronto ]

Thu, 27 Nov 2014

16:00 - 17:30
L4

SDEs with weighted local times and discontinuous coefficients, transmission boundary conditions for semilinear PDEs, and related BSDEs

Professor Denis Talay
(INRIA)
Abstract

(Denis Talay, Inria — joint works with N. Champagnat, N. Perrin, S. Niklitschek Soto)

In this lecture we present recent results on SDEs with weighted local times and discontinuous coefficients. Their solutions allow one to construct probabilistic interpretations of  semilinear PDEs with discontinuous coefficients and transmission boundary conditions in terms of BSDEs which do not satisfy classical conditions.

Tue, 18 Nov 2014

12:30 - 13:30
Oxford-Man Institute

tba

Dr. Joseph Engelberg
(UC San Diego)
Thu, 13 Nov 2014

16:00 - 17:30
L4

Optimal Stopping under Coherent Risk Measures

Professor Dr. Denis Belomestny
(Duisburg-Essen University)
Abstract

In this talk we consider optimal stopping problems under a class of coherent risk measures which includes such well known risk measures as weighted AV@R or absolute semi-deviation risk measures. As a matter of fact, the dynamic versions of these risk measures do not have the so-called time-consistency property necessary for the dynamic programming approach, so the standard approaches are not applicable to optimal stopping problems under coherent risk measures. In this paper, we prove a novel representation which relates the solution of an optimal stopping problem under a coherent risk measure to a sequence of standard optimal stopping problems, and hence makes the application of the standard dynamic-programming-based approaches possible. In particular, we derive the analogue of the dual representations of Rogers and of Haugh and Kogan. Several numerical examples showing the usefulness of the new representation in applications are presented as well.

Thu, 06 Nov 2014

16:00 - 17:30
L4

Securitization and equilibrium pricing under relative performance concerns

Dr. Gonçalo dos Reis
(University of Edinburgh)
Abstract

We investigate the effects of a finite set of agents interacting socially in an equilibrium pricing mechanism. A derivative written on non-tradable underlyings is introduced to the market and priced in an equilibrium framework by agents who assess risk using convex dynamic risk measures expressed by Backward Stochastic Differential Equations (BSDE). An agent is not only exposed to financial and non-financial risk factors, but he also faces performance concerns with respect to the other agents. The equilibrium analysis leads to systems of fully coupled multi-dimensional quadratic BSDEs.

Within our proposed models we prove the existence and uniqueness of an equilibrium. We show that aggregation of risk measures is possible and that a representative agent exists. We analyze the impact of the problem's parameters in the pricing mechanism, in particular how the agent's concern rates affect prices and risk perception.

Fri, 31 Oct 2014

16:00 - 17:30
L4

Optimal Execution Strategies: The Special Case of Accelerated Share Repurchase (ASR) Contracts

Dr. Olivier Guéant
(Université Paris-Diderot)
Abstract

When firms want to buy back their own shares, they often use the services of investment banks through ASR contracts. ASR contracts are execution contracts including exotic option characteristics (an Asian-type payoff and Bermudan/American exercise dates). In this talk, I will present the different types of ASR contracts usually encountered, and I will present a model in order to (i) price ASR contracts and (ii) find the optimal execution strategy for each type of contract. This model is inspired by the classical (Almgren-Chriss) literature on optimal execution and uses classical ideas from option pricing. It can also be used to price options on illiquid assets. Original numerical methods will be presented.

Thu, 23 Oct 2014

16:00 - 17:30
L4

4pm (Joint Nomura-OMI Seminar) - The Use of Randomness in Time Series Analysis

Professor Piotr Fryzlewicz
(LSE)
Abstract
This is an exploratory talk in which we describe different potential uses of randomness in time series analysis.

In the first part, we talk about Wild Binary Segmentation for change-point detection, where randomness is used as a device for sampling from the space of all possible contrasts (change-point detection statistics) in order to reduce the computational complexity from cubic to just over linear in the number of observations, without compromising on the accuracy of change-point estimates. We also discuss an interesting related measure of change-point certainty/importance, and extensions to more general nonparametric problems.
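
The core ingredient is the CUSUM contrast, maximized over candidate split points within many randomly drawn intervals rather than over the full sample. A minimal single-change sketch (one WBS step, without the recursion or the stopping rule):

```python
import math
import random

random.seed(2)

def cusum(x, s, e, b):
    """CUSUM contrast for a mean change after index b within x[s..e]."""
    n = e - s + 1
    left = sum(x[s:b + 1])
    right = sum(x[b + 1:e + 1])
    nl, nr = b - s + 1, e - b
    return abs(math.sqrt(nr / (n * nl)) * left
               - math.sqrt(nl / (n * nr)) * right)

def wbs_step(x, m=100):
    """One Wild Binary Segmentation step (toy sketch): draw m random
    intervals and return the split with the largest CUSUM overall.
    The full algorithm then recurses on the resulting sub-segments."""
    n = len(x)
    best, best_b = -1.0, None
    for _ in range(m):
        s = random.randrange(n - 1)
        e = random.randrange(s + 1, n)
        for b in range(s, e):
            c = cusum(x, s, e, b)
            if c > best:
                best, best_b = c, b
    return best_b

# mean shifts from 0 to 3 after index 99
x = [random.gauss(0, 1) for _ in range(100)] + \
    [random.gauss(3, 1) for _ in range(100)]
bhat = wbs_step(x)
# bhat should land close to the true change at index 99
```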

In the second part, we use random contemporaneous linear combinations of time series panel data coming from high-dimensional factor models and argue that this gives the effect of "compressively sensing" the components of the multivariate time series, often with not much loss of information but with reduction in the dimensionality of the model.

In the final part, we speculate on the use of random filtering in time series analysis. As an illustration, we show how the appropriate use of this device can reduce the problem of estimating changes in the autocovariance structure of the process to the problem of estimating changes in variance, the latter typically being an easier task.
 
Thu, 16 Oct 2014

16:00 - 17:30
L2

Theta in FX Volatility Modelling and Risk Management

David Shelton
(Merrill Lynch)
Abstract

From a theoretical point of view, theta is a relatively simple quantity: the rate of change in value of a financial derivative with respect to time. In a Black-Scholes world, the theta of a delta-hedged option can be viewed as 'rent' paid in exchange for gamma. This relationship is fundamental to the risk management of a derivatives portfolio. However, in the real world, the situation becomes significantly more complicated. In practice the model is continually being recalibrated, and whereas in the Black-Scholes world volatility is not a risk factor, in the real world it is stochastic and carries an associated risk premium. With the heightened interest in automation and electronic trading, we increasingly need to attempt to capture trading, marking and risk management practice algorithmically, and this requires careful consideration of the relationship between the risk neutral and historical measures. In particular these effects need to be incorporated in order to make sense of theta and the time evolution of a derivatives portfolio in the historical measure.
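
The 'rent for gamma' relation can be made precise from the Black-Scholes PDE: writing Θ, Δ, Γ for the time, first and second spot derivatives of the option value V, a delta-hedged book satisfies the standard identity

```latex
\Theta + \tfrac{1}{2}\sigma^2 S^2 \Gamma = r\left(V - S\,\Delta\right),
\qquad\text{so for } r = 0:\qquad \Theta = -\tfrac{1}{2}\sigma^2 S^2 \Gamma .
```

Theta is thus paid at a rate proportional to the gamma held, with σ² as the exchange rate; recalibration and stochastic volatility are exactly what break this clean identity in practice.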

Thu, 19 Jun 2014

16:00 - 17:30
L4

Multilevel Richardson-Romberg extrapolation for Monte Carlo simulation

Gilles Pages
(UPMC)
Abstract

This is a joint work with V. Lemaire (LPMA-UPMC). We propose and analyze a Multilevel Richardson-Romberg (MLRR) estimator which combines the higher-order bias cancellation of the Multistep Richardson-Romberg (MSRR) method introduced in [Pagès 07] and the variance control resulting from the stratification in the Multilevel Monte Carlo (MLMC) method (see e.g. [Heinrich 01, M. Giles 08]). Thus we show that in standard frameworks like discretization schemes of diffusion processes, an assigned quadratic error $\varepsilon$ can be obtained with our (MLRR) estimator with a global complexity of $\log(1/\varepsilon)/\varepsilon^2$ instead of $(\log(1/\varepsilon))^2/\varepsilon^2$ with the standard (MLMC) method, at least when the weak error $\mathbb{E}[Y_h]-\mathbb{E}[Y_0]$ induced by the biased implemented estimator $Y_h$ can be expanded at any order in $h$. We analyze and compare these estimators on several numerical problems: option pricing (vanilla or exotic) using MC simulation and the less classical nested Monte Carlo simulation (see [Gordy & Juneja 2010]).
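
The bias-cancellation idea behind Richardson-Romberg extrapolation can be seen on a deterministic toy: if an approximation admits the expansion $Y_h = Y_0 + c_1 h + O(h^2)$, then $2Y_{h/2} - Y_h$ kills the $O(h)$ term. Here we use the Euler scheme for $y' = y$ as a stand-in for the biased estimator (a sketch of the principle only, not of the stochastic MLRR construction):

```python
import math

def euler_exp(h):
    """Euler scheme for y' = y, y(0) = 1 on [0, 1]: returns (1+h)^(1/h),
    which approximates e with an O(h) bias (toy stand-in for E[Y_h])."""
    n = round(1 / h)
    y = 1.0
    for _ in range(n):
        y *= 1 + h
    return y

h = 0.01
plain = euler_exp(h)
romberg = 2 * euler_exp(h / 2) - euler_exp(h)   # cancels the c1*h term
err_plain = abs(plain - math.e)
err_romberg = abs(romberg - math.e)
# err_romberg is orders of magnitude smaller than err_plain
```

Iterating the cancellation to higher orders is the MSRR idea; combining it with MLMC-style stratification across levels is what yields the improved complexity quoted above.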

Thu, 12 Jun 2014

16:00 - 17:30
L4

CAPM, Stochastic Dominance, and prospect theory

Haim Levy
(Hebrew University of Jerusalem)
Abstract

Despite the theoretical and empirical criticisms of the mean-variance (M-V) framework and the CAPM, they appear in virtually all curricula. Why?