Forthcoming events in this series


Fri, 25 May 2012

14:15 - 15:00
DH 1st floor SR

General theory of geometric Lévy models for dynamic asset pricing

Prof Dorje Brody
(Brunel University)
Abstract

The geometric Lévy model (GLM) is a natural generalisation of the geometric Brownian motion (GBM) model. The theory of such models simplifies considerably if one takes a pricing kernel approach. In one dimension, once the underlying Lévy process has been specified, the GLM has four parameters: the initial price, the interest rate, the volatility and the risk aversion. The pricing kernel is the product of a discount factor and a risk aversion martingale. For GBM, the risk aversion parameter is the market price of risk. In this talk I show that for a GLM, this interpretation is not valid: the excess rate of return above the interest rate is a nonlinear function of the volatility and the risk aversion such that it is positive, and is increasing with respect to these variables. In the case of foreign exchange, Siegel’s paradox implies that one can construct foreign exchange models for which the excess rate of return is positive for both the exchange rate and the inverse exchange rate. Examples are worked out for a range of Lévy processes. (The talk is based on a recent paper: Brody, Hughston & Mackie, Proceedings of the Royal Society London, to appear in May 2012).  
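
For orientation, here is the geometric Brownian motion special case written in the pricing-kernel form referred to above (a standard sketch in our own notation, not taken verbatim from the talk): with risk aversion $\lambda$ and volatility $\sigma$,

$dS_t = S_t(\mu\,dt + \sigma\,dW_t), \qquad \pi_t = e^{-rt}\exp\!\big(-\lambda W_t - \tfrac{1}{2}\lambda^2 t\big),$

and requiring $\pi_t S_t$ to be a martingale forces the excess rate of return to be linear, $\mu - r = \lambda\sigma$, which is why $\lambda$ coincides with the market price of risk in the GBM case; the talk's point is that this linearity fails for general Lévy drivers.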

Fri, 18 May 2012

14:15 - 15:00
DH 1st floor SR

Absence of arbitrage and changes of measure

Prof Martin Schweizer
(ETH Zurich)
Abstract

Absence of arbitrage is a highly desirable feature in mathematical models of financial markets. In its pure form (whether as NFLVR or as the existence of a variant of an equivalent martingale measure R), it is qualitative and therefore robust towards equivalent changes of the underlying reference probability (the "real-world" measure P). But what happens if we look at more quantitative versions of absence of arbitrage, where we impose for instance some integrability on the density dR/dP? To what extent is such a property robust towards changes of P? We discuss these questions and present some recent results.

The talk is based on joint work with Tahir Choulli (University of Alberta, Edmonton).

Fri, 11 May 2012

12:30 - 15:00
Oxford-Man Institute

Commodity Storage Valuation

Prof Kumar Muthuraman
(University of Texas at Austin)
Abstract

We present a general valuation framework for commodity storage facilities for non-perishable commodities. Modeling commodity prices with a mean-reverting process, we provide analytical expressions for the value obtainable from the storage for any admissible injection/withdrawal policy. We then present an iterative numerical algorithm to find the optimal injection and withdrawal policies, along with the necessary theoretical guarantees of convergence. Together, the analytical expressions and the numerical algorithm provide an extremely efficient way of solving not only commodity storage problems but, more generally, the problem of optimally controlling a mean-reverting process with transaction costs.
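
As a schematic of the control problem involved (our notation; the speaker's model may differ in its details), one can think of a mean-reverting spot price and an inventory $q_t$ controlled by an injection/withdrawal rate $a_t$:

$dX_t = \kappa(\theta - X_t)\,dt + \sigma\,dW_t, \qquad dq_t = -a_t\,dt, \qquad 0 \le q_t \le q_{\max},$

$V(x,q) = \sup_{a}\; \mathbb{E}\!\left[\int_0^\infty e^{-\rho t}\big(X_t\,a_t - c\,|a_t|\big)\,dt \,\middle|\, X_0 = x,\; q_0 = q\right],$

where $c$ is a proportional transaction cost; an iterative algorithm of the kind mentioned above searches for the buy/sell boundaries of such a problem.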

Fri, 04 May 2012

14:00 - 15:00
DH 1st floor SR

A guide through market viability for frictionless markets

Prof Kostas Kardaras
(Boston University)
Abstract

In this talk, we elaborate on the notions of no-free-lunch that have proved essential in the theory of financial mathematics---most notably, arbitrage of the first kind. Focus will be given to the most recent developments. The precise connections with the existence of deflators, numeraires and pricing measures are explained, as well as the consequences that these notions have for the existence of bubbles and the valuation of illiquid assets in the market.

Fri, 09 Mar 2012
14:15
DH 1st floor SR

G-Expectation for General Random Variables

Marcel Nutz
(Columbia)
Abstract

We provide a general construction of time-consistent sublinear expectations on the space of continuous paths. In particular, we construct the conditional G-expectation of a Borel-measurable (rather than quasi-continuous) random variable.
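
Schematically (our notation, not necessarily the paper's), the object constructed is a time-consistent family of sublinear conditional expectations of the form

$\mathcal{E}_t(X) = \operatorname*{ess\,sup}_{P' \in \mathcal{P}(t,P)} E^{P'}[X \mid \mathcal{F}_t],$

where $\mathcal{P}(t,P)$ collects the measures in the underlying (non-dominated) family that agree with $P$ up to time $t$; time consistency is the tower property $\mathcal{E}_s(\mathcal{E}_t(X)) = \mathcal{E}_s(X)$ for $s \le t$, and the contribution is that this makes sense for Borel-measurable $X$.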

Fri, 02 Mar 2012
14:15
DH 1st floor SR

Best Gain Loss Ratio in Continuous Time

Sara Biagini
(Unipi)
Abstract

The use of the gain-loss ratio as a measure of attractiveness was introduced by Bernardo and Ledoit. In their well-known paper, they show that gain-loss ratio restrictions have a dual representation in terms of restricted pricing kernels.

In spite of its clear financial significance, the gain-loss ratio has been largely ignored in the mathematical finance literature, with few exceptions (Cherny and Madan, Pinar). The main reason is its intrinsic lack of good mathematical properties. This paper aims to be a rigorous study of the gain-loss ratio and its dual representations in a continuous-time market setting, placing it in the context of risk measures and acceptability indexes. We also point out (and correctly reformulate) an erroneous statement made by Bernardo and Ledoit in their main result. This is joint work with M. Pinar.
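
For reference, the attractiveness measure in question is, in its simplest form (our notation),

$\alpha(X) = \frac{E_P[X^+]}{E_P[X^-]},$

the ratio of expected gains to expected losses of a zero-price payoff $X$ under the reference measure $P$; Bernardo and Ledoit's dual representation relates a cap on $\alpha$ over all marketed payoffs to a corresponding boundedness restriction on the admissible pricing kernels.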

Fri, 24 Feb 2012
14:15
DH 1st floor SR

Comparison between the Mean Variance Optimal and the Mean Quadratic Variation Optimal Trading Strategies

Peter Forsyth
(Waterloo)
Abstract

Algorithmic trade execution has become a standard technique for institutional market players in recent years, particularly in the equity market where electronic trading is most prevalent. A trade execution algorithm typically seeks to execute a trade decision optimally upon receiving inputs from a human trader. A common form of optimality criterion seeks to strike a balance between minimizing price impact and minimizing timing risk. For example, in the case of selling a large number of shares, a fast liquidation will cause the share price to drop, whereas a slow liquidation will expose the seller to timing risk due to the stochastic nature of the share price.

We compare optimal liquidation policies in continuous time in the presence of trading impact using numerical solutions of Hamilton-Jacobi-Bellman (HJB) partial differential equations (PDEs). In particular, we compare the time-consistent mean-quadratic-variation strategy (Almgren and Chriss) with the time-inconsistent (pre-commitment) mean-variance strategy. The Almgren and Chriss strategy should be viewed as the industry standard.

We show that the two different risk measures lead to very different strategies and liquidation profiles. In terms of the mean-variance efficient frontier, the original Almgren/Chriss strategy is significantly sub-optimal compared to the (pre-commitment) mean-variance strategy.

This is joint work with Stephen Tse, Heath Windcliff and Shannon Kennedy.
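
Schematically (our notation, not necessarily that of the paper), writing $B_T$ for the cash generated by the liquidation and $\alpha_t$ for the number of shares still held, the pre-commitment mean-variance strategy maximizes $E[B_T] - \lambda\,\mathrm{Var}[B_T]$, whereas the mean-quadratic-variation strategy maximizes $E\big[B_T - \lambda \int_0^T \sigma^2 \alpha_t^2 S_t^2\,dt\big]$; replacing the variance by the accumulated quadratic variation is what restores time consistency and yields a standard HJB PDE.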

Fri, 17 Feb 2012

14:15 - 15:15
DH 1st floor SR

Implicit vs explicit schemes for non-linear PDEs and illustrations in Finance and optimal control.

Olivier Bokanowski
(UMA)
Abstract

We will first motivate and review some implicit schemes that arise from the discretization of nonlinear PDEs in finance or in optimal control problems, when using finite difference or finite element methods.

For the American option problem, we are led to compute the solution of a discrete obstacle problem, and we will give some results on the convergence of the nonsmooth Newton method for solving such problems.

Implicit schemes are interesting for their stability properties; however, they can be too costly in practice.

We will then present some novel schemes and ideas, based on the semi-Lagrangian approach and on discontinuous Galerkin methods, trying to be as explicit as possible in order to gain practical efficiency.
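
As an illustration of the discrete obstacle problem mentioned above, the following is a minimal sketch (our own, not the speaker's code) of a semismooth Newton iteration for the complementarity system min(Ax - b, x - psi) = 0 that an implicit time step for an American option produces; the matrix A, right-hand side b and obstacle psi are assumed to come from the user's finite-difference or finite-element discretization.

import numpy as np

def semismooth_newton(A, b, psi, tol=1e-10, max_iter=50):
    # Solve min(A x - b, x - psi) = 0 by a semismooth Newton
    # (primal-dual active set) iteration.
    n = len(b)
    x = np.maximum(np.linalg.solve(A, b), psi)       # unconstrained solve, floored at the obstacle
    for _ in range(max_iter):
        F = np.minimum(A @ x - b, x - psi)           # complementarity residual
        if np.linalg.norm(F, np.inf) < tol:
            break
        active = (A @ x - b) <= (x - psi)            # rows where the PDE part attains the minimum
        J = np.where(active[:, None], A, np.eye(n))  # generalized Jacobian: A-rows or identity-rows
        x = x - np.linalg.solve(J, F)
    return x

Under the usual M-matrix property of A, this iteration typically terminates in a handful of steps, which is one reason implicit treatments of the American constraint remain competitive despite their cost per step.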

Fri, 10 Feb 2012
14:15
DH 1st floor SR

Good-deal bounds in a regime-switching diffusion market

Catherine Donnelly (Heriot-Watt)
Abstract

We consider the pricing of a maturity guarantee, which is equivalent to the pricing of a European put option, in a regime-switching market model. Regime-switching market models have been empirically shown to fit long-term stock market data better than many other models. However, since a regime-switching market is incomplete, there is no unique price for the maturity guarantee. We extend the good-deal pricing bounds idea to the regime-switching market model. This allows us to obtain a reasonable range of prices for the maturity guarantee, by excluding those prices that imply a Sharpe ratio which is too high. The range of prices can be used as a plausibility check on the chosen price of a maturity guarantee.
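
As a schematic of the good-deal idea (our notation): among the equivalent martingale measures $Q$ of the incomplete regime-switching market, one keeps only those whose market-price-of-risk process $\theta$ obeys an instantaneous Sharpe ratio constraint $|\theta_t| \le h$, and the guarantee with payoff $H$ is then assigned the price interval

$\Big[\inf_{Q:\,|\theta_t|\le h} E_Q\big[e^{-rT} H\big],\ \sup_{Q:\,|\theta_t|\le h} E_Q\big[e^{-rT} H\big]\Big],$

which shrinks as the admissible Sharpe ratio bound $h$ is tightened.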

Fri, 03 Feb 2012
14:15
DH 1st floor SR

Transaction Costs, Trading Volume, and the Liquidity Premium

Stefan Gerhold
(TU Wien)
Abstract

In a market with one safe and one risky asset, an investor with a long horizon and constant relative risk aversion trades with constant investment opportunities and proportional transaction costs. We derive the optimal investment policy, its welfare, and the resulting trading volume, explicitly as functions of the market and preference parameters, and of the implied liquidity premium, which is identified as the solution of a scalar equation. For small transaction costs, all these quantities admit asymptotic expansions of arbitrary order. The results exploit the equivalence of the transaction cost market to another, frictionless market, with a shadow risky asset, in which investment opportunities are stochastic. The shadow price is also derived explicitly. (Joint work with Paolo Guasoni, Johannes Muhle-Karbe, and Walter Schachermayer.)
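
For context (our notation, a standard sketch rather than the paper's statements): in the frictionless limit the optimal policy holds the constant Merton fraction of wealth in the risky asset,

$\pi^{*} = \frac{\mu - r}{\gamma\sigma^{2}},$

while with proportional costs the optimal policy only trades to keep the risky fraction inside a no-trade interval around $\pi^{*}$; the liquidity premium and the trading volume quoted above quantify, respectively, the welfare cost of this friction and the activity needed to maintain the interval.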

Fri, 27 Jan 2012
14:15
DH 1st floor SR

Modeling and Efficient Rare Event Simulation of Systemic Risk in Insurance-Reinsurance Networks (joint work with Yixi Shi).

Jose Blanchet
(Columbia)
Abstract

We propose a dynamic insurance network model that allows us to deal with reinsurance counterparty default risks, with a particular aim of capturing cascading effects at the time of defaults. We capture these effects by finding an equilibrium allocation of settlements, which can be found as the unique optimal solution of a linear programming problem. This equilibrium allocation recognizes 1) the correlation among the risk factors, which are assumed to be heavy-tailed, 2) the contractual obligations, which are assumed to follow popular contracts in the insurance industry (such as stop-loss and retrocession), and 3) the interconnections of the insurance-reinsurance network. We are able to obtain an asymptotic description of the most likely ways in which the default of a specific group of insurers can occur, by means of solving a multidimensional Knapsack integer programming problem. Finally, we propose a class of provably strongly efficient estimators for computing the expected loss of the network conditional on the failure of a specific set of companies. Strong efficiency means that the complexity of computing the large deviations probability or the conditional expectation remains bounded as the event of interest becomes more and more rare.
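
As a rough indication of the kind of clearing problem involved (this is the classical Eisenberg-Noe-style formulation, stated here only for orientation and not necessarily the authors' exact model): with nominal obligations $L_{ij}$ from firm $i$ to firm $j$, totals $\bar p_i = \sum_j L_{ij}$, relative liabilities $\Pi_{ij} = L_{ij}/\bar p_i$ and external assets $e_i$, the settlement payments solve the linear program

$\max_{p}\ \sum_i p_i \quad \text{subject to} \quad 0 \le p_i \le \bar p_i, \qquad p_i \le e_i + \sum_j \Pi_{ji}\,p_j,$

whose optimal solution captures how a shortfall at one node propagates through the network.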

Fri, 20 Jan 2012
14:15
DH 1st floor SR

Monte Carlo Portfolio Optimization

William Shaw
(UCL)
Abstract

We develop the idea of using Monte Carlo sampling of random portfolios to solve portfolio investment problems. We explore the need for more general optimization tools, and consider the means by which constrained random portfolios may be generated. Devroye's approach to sampling the interior of a simplex (a collection of non-negative random variables adding to unity) is already available for interior solutions of simple fully-invested long-only systems, and we extend this to treat lower-bound constraints and bounded short positions, and to sample non-interior points by the method of Face-Edge-Vertex-biased sampling. A practical scheme for long-only and bounded-short problems is developed and tested. Non-convex and disconnected regions can be treated by applying rejection for other constraints. The advantage of Monte Carlo methods is that they may be extended to risk functions that are more complicated functions of the return distribution, without explicit gradients, and that the underlying return distribution may be modeled parametrically or empirically based on general distributions. The optimization of expected utility, Omega and Sortino ratios may be handled in a similar manner to quadratic risk, VaR and CVaR, irrespective of whether a reduction to LP or QP form is available. Robustification is also possible, and a Monte Carlo approach allows the possibility of relaxing the general maxi-min approach to one of varying degrees of conservatism. Grid computing technology is an excellent platform for the development of such computations due to the intrinsically parallel nature of the computation. Good comparisons with established results in mean-variance and CVaR optimization are obtained, and we give some applications to Omega and expected utility optimization. Extensions to deploy Sobol and Niederreiter quasi-random methods for random weights are also proposed.

The method proposed is a two-stage process. First we have an initial global search which produces a good feasible solution for any number of assets with any risk function and return distribution. This solution is already close to optimal in lower dimensions, based on an investigation of several test problems. Further precision, and solutions in 10-100 dimensions, are obtained by invoking a second stage in which the solution is iterated based on Monte Carlo simulation over a series of contracting hypercubes.
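
The simplex-sampling step referred to above is easy to sketch. The following is a minimal illustration (our own code, with made-up scenario numbers, not the speaker's implementation): candidate long-only, fully invested portfolios are drawn uniformly from the simplex via exponential spacings, and each is scored by an empirical CVaR of its scenario losses.

import numpy as np

rng = np.random.default_rng(0)

def sample_simplex(n_assets, n_samples):
    # Uniform sampling of the long-only, fully invested simplex (exponential spacings).
    e = rng.exponential(size=(n_samples, n_assets))
    return e / e.sum(axis=1, keepdims=True)

def cvar(losses, alpha=0.95):
    # Empirical conditional value-at-risk: mean loss beyond the alpha-quantile.
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

returns = rng.normal(0.0005, 0.01, size=(1000, 5))   # illustrative scenario matrix: 1000 scenarios, 5 assets
weights = sample_simplex(5, 5000)                    # 5000 random candidate portfolios
losses = -returns @ weights.T                        # scenario losses, one column per portfolio
scores = np.array([cvar(losses[:, j]) for j in range(weights.shape[0])])
best = weights[scores.argmin()]
print("best weights:", best.round(3), "CVaR:", round(float(scores.min()), 5))

Lower-bound constraints, bounded shorts and the second-stage contracting-hypercube refinement described above would be layered on top of exactly this kind of loop.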

Fri, 02 Dec 2011

14:15 - 15:15
L3

Multilevel dual approach for pricing American style derivatives

John Schoenmakers
(Berlin)
Abstract

In this article we propose a novel approach to reduce the computational complexity of the dual method for pricing American options. We consider a sequence of martingales that converges to a given target martingale and decompose the original dual representation into a sum of representations that correspond to different levels of approximation to the target martingale. By next replacing in each representation true conditional expectations with their Monte Carlo estimates, we arrive at what one may call a multilevel dual Monte Carlo algorithm. The analysis of this algorithm reveals that the computational complexity of getting the corresponding target upper bound, due to the target martingale, can be significantly reduced. In particular, it turns out that using our new approach, we may construct a multilevel version of the well-known nested Monte Carlo algorithm of Andersen and Broadie (2004) that is, regarding complexity, virtually equivalent to a non-nested algorithm. The performance of this multilevel algorithm is illustrated by a numerical example. (Joint work with Denis Belomestny.)
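
Schematically (our notation): for a discounted exercise value process $Z$, the dual upper bound is $Y_0 \le \inf_{M} E\big[\max_t (Z_t - M_t)\big]$ over martingales $M$ starting at 0, and the multilevel construction telescopes this over a sequence of approximating martingales $M^{(0)}, \dots, M^{(L)}$,

$E\big[\max_t (Z_t - M^{(L)}_t)\big] = E\big[\max_t (Z_t - M^{(0)}_t)\big] + \sum_{l=1}^{L} E\Big[\max_t (Z_t - M^{(l)}_t) - \max_t (Z_t - M^{(l-1)}_t)\Big],$

so that the coarse level is estimated with many cheap samples and the corrections with progressively fewer accurate ones, exactly as in multilevel Monte Carlo.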

Fri, 25 Nov 2011
14:15
DH 1st floor SR

Optimal discretization of hedging strategies with jumps

Mathieu Rosenbaum
(University Paris 6)
Abstract

In this work, we consider the hedging error due to discrete trading in models with jumps. We propose a framework enabling us to (asymptotically) optimize the discretization times. More precisely, a strategy is said to be optimal if, for a given cost function, no strategy has (asymptotically) a lower mean square error for a smaller cost. We focus on strategies based on hitting times and give explicit expressions for the optimal strategies. This is joint work with Peter Tankov.
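
As an illustration of the hitting-time strategies considered (our notation, with one barrier level per step): rebalancing dates are defined recursively by

$\tau_0 = 0, \qquad \tau_{i+1} = \inf\{t > \tau_i : |S_t - S_{\tau_i}| \ge \varepsilon_i\},$

and optimality is asymptotic as the barriers $\varepsilon_i$ shrink: no admissible rule attains a smaller mean square hedging error while incurring a smaller value of the chosen cost functional (for example, the expected number of rebalancing dates).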

Tue, 15 Nov 2011
14:15
Oxford-Man Institute

Market Selection: Hungry Misers and Happy Bankrupts

Chris Rogers
(Cambridge)
Abstract

The Market Selection Hypothesis is a principle which (informally) proposes that `less knowledgeable' agents are eventually eliminated from the market. This elimination may take the form of starvation (the proportion of output consumed drops to zero), or may take the form of going broke (the proportion of assets held drops to zero), and these are not the same thing. Starvation may result from several causes, diverse beliefs being only one. We first identify and exclude these other possible causes, and then prove that starvation is equivalent to inferior belief, under suitable technical conditions. On the other hand, going broke cannot be characterized solely in terms of beliefs, as we show. We next present a remarkable example with two agents with different beliefs, in which one agent starves yet amasses all the capital, and the other goes broke yet consumes all the output -- the hungry miser and the happy bankrupt.

This example also serves to show that although an agent may starve, he may have long-term impact on the prices. This relates to the notion of price impact introduced by Kogan et al (2009), which we correct in the final section, and then use to characterize situations where asymptotically equivalent pricing holds.

Fri, 11 Nov 2011
14:15
DH 1st floor SR

An Efficient Implementation of Stochastic Volatility by the method of Conditional Integration

William McGhee
(Royal Bank of Scotland)
Abstract

In the SABR model of Hagan et al. [2002] a perturbative expansion approach yields a tractable approximation to the implied volatility smile. This approximation formula has been adopted across the financial markets as a means of interpolating market volatility surfaces. All too frequently - in stressed markets, in the long-dated FX regime - the limitations of this approximation are pronounced. In this talk a highly efficient conditional integration approach, motivated by the work of Stein and Stein [1991] and Willard [1997], will be presented that when applied to the SABR model not only produces a volatility smile consistent with the underlying SABR process but gives access to the joint distribution of the asset and its volatility. The latter is particularly important in understanding the dynamics of the volatility smile as it evolves through time and the subsequent effect on the pricing of exotic options.
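
For orientation, the conditional integration idea in the zero-correlation case (a standard sketch in our notation) prices a European call as a Black-Scholes value averaged over the realized variance of the volatility path,

$C = E\Big[C_{BS}\big(S_0, K, \bar\sigma^2 T\big)\Big], \qquad \bar\sigma^2 = \frac{1}{T}\int_0^T \sigma_t^2\,dt,$

and Willard's (1997) extension handles non-zero correlation by conditioning on the volatility path, adjusting the effective spot and using the residual variance $(1-\rho^2)\bar\sigma^2$ in the Black-Scholes formula; applied to SABR this reduces the two-factor pricing problem to a one-dimensional integration.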

William McGhee is Head of Hybrid Quantitative Analytics at The Royal Bank of Scotland. Within the context of this presentation he will also discuss the interplay between mathematical modelling and the technology infrastructure required to run a complex hybrids trading business, and the benefits of highly efficient numerical algorithms.

Fri, 04 Nov 2011
14:15
DH 1st floor SR

Forward-backward systems for expected utility maximization

Ulrich Horst
(Berlin)
Abstract

In this paper we deal with the utility maximization problem with a preference functional of expected utility type. We derive a new approach in which we reduce the utility maximization problem with general utility to the study of a fully-coupled Forward-Backward Stochastic Differential Equation (FBSDE).

The talk is based on joint work with Ying Hu, Peter Imkeller, Anthony Reveillac and Jianing Zhang.
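
For reference, a fully-coupled FBSDE of the type referred to has the generic form (the specific coefficients produced by the utility problem are derived in the paper)

$dX_t = b(t, X_t, Y_t, Z_t)\,dt + \sigma(t, X_t, Y_t, Z_t)\,dW_t, \qquad X_0 = x,$
$dY_t = -f(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, \qquad Y_T = g(X_T),$

the coupling being that the forward coefficients depend on the backward components $(Y, Z)$ and vice versa.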

Fri, 28 Oct 2011
14:15
DH 1st floor SR

The emergence of probability-type properties of price paths

Vladimir Vovk
(Royal Holloway University of London)
Abstract

The standard approach to continuous-time finance starts from postulating a statistical model for the prices of securities (such as the Black-Scholes model). Since such models are often difficult to justify, it is interesting to explore what can be done without any stochastic assumptions. There are quite a few results of this kind (starting from Cover 1991 and Hobson 1998), but in this talk I will discuss probability-type properties emerging without a statistical model. I will only consider the simplest case of one security, and instead of stochastic assumptions will make some analytic assumptions. If the price path is known to be cadlag without huge jumps, its quadratic variation exists unless a predefined trading strategy earns infinite capital without risking more than one monetary unit. This makes it possible to apply the known results of Ito calculus without probability (Follmer 1981, Norvaisa) in the context of idealized financial markets. If, moreover, the price path is known to be continuous, it becomes Brownian motion when physical time is replaced by quadratic variation; this is a probability-free version of the Dubins-Schwarz theorem.

Fri, 21 Oct 2011
14:15
DH 1st floor SR

Multivariate utility maximization with proportional transaction costs and random endowment

Luciano Campi
(Paris 13)
Abstract

In this paper we deal with a utility maximization problem at finite horizon on a continuous-time market with conical (and time-varying) constraints (particularly suited to model a currency market with proportional transaction costs). In particular, we extend the results in [CO] to the situation where the agent is initially endowed with a random and possibly unbounded quantity of assets. We start by studying some basic properties of the value function (which is now defined on a space of random variables), then we dualize the problem following some convex analysis techniques which have proven very useful in this field of research. We finally prove the existence of a solution to the dual and (under an additional boundedness assumption on the endowment) to the primal problem. The last section of the paper is devoted to an application of our results to utility indifference pricing. This is joint work with G. Benedetti (CREST).

Fri, 24 Jun 2011
14:15
DH 1st floor SR

A Multi-Period Bank Run Model for Liquidity Risk

Dr Eva Lütkebohmert
(University of Freiburg)
Abstract

We present a dynamic bank run model for liquidity risk where a financial institution finances its risky assets by a mixture of short- and long-term debt. The financial institution is exposed to liquidity risk as its short-term creditors have the possibility not to renew their funding at a finite number of rollover dates. Besides, the financial institution can default due to insolvency at any time until maturity. We compute both insolvency and illiquidity default probabilities in this multi-period setting. We show that liquidity risk is increasing in the volatility of the risky assets and in the ratio of the return that can be earned on the outside market over the return for short-term debt promised by the financial institution. Moreover, we study the influence of the capital structure on the illiquidity probability and derive that illiquidity risk is increasing with the ratio of short-term funding.

Fri, 17 Jun 2011
14:15
DH 1st floor SR

Explicit Construction of a Dynamic Bessel Bridge of Dimension 3

Dr Albina Danilova
(London School of Economics)
Abstract

Given a deterministically time-changed Brownian motion $Z$ starting from 1, whose time-change $V(t)$ satisfies $V(t) > t$ for all $t \geq 0$, we perform an explicit construction of a process $X$ which is Brownian motion in its own filtration and that hits zero for the first time at $V(s)$, where $s := \inf\{t > 0 : Z_t = 0\}$. We also provide the semimartingale decomposition of $X$ under the filtration jointly generated by $X$ and $Z$. Our construction relies on a combination of enlargement of filtration and filtering techniques. The resulting process $X$ may be viewed as the analogue of a 3-dimensional Bessel bridge starting from 1 at time 0 and ending at 0 at the random time $V(s)$. We call this a dynamic Bessel bridge since $V(s)$ is not known in advance. Our study is motivated by insider trading models with default risk. (This is joint work with Luciano Campi and Umut Cetin.)

Fri, 03 Jun 2011
14:15
DH 1st floor SR

Cross hedging with futures in a continuous-time model with a stationary spread

Prof Stefan Ankirchner
(University of Bonn)
Abstract

When managing risk, frequently only imperfect hedging instruments are at hand. We show how to optimally cross-hedge risk when the spread between the hedging instrument and the risk is stationary. At the short end, the optimal hedge ratio is close to the cross-correlation of the log returns, whereas at the long end it is optimal to fully hedge the position. For linear risk positions we derive explicit formulas for the hedge error, and for non-linear positions we show how to obtain numerically efficient estimates. Finally, we demonstrate that even in cases with no clear-cut decision concerning the stationarity of the spread it is better to allow for mean reversion of the spread rather than to neglect it.

The talk is based on joint work with Georgi Dimitroff, Gregor Heyne and Christian Pigorsch.
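
For a linear risk position the short-horizon statement can be read off the one-period minimum-variance hedge ratio (our notation),

$h^{*} = \frac{\mathrm{Cov}(\Delta S, \Delta F)}{\mathrm{Var}(\Delta F)} = \rho\,\frac{\sigma_S}{\sigma_F},$

which is governed by the cross-correlation $\rho$ of the returns, whereas the stationarity of the spread is what makes a full hedge optimal over long horizons.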

Fri, 27 May 2011
14:15
DH 1st floor SR

Regularity of Value Functions for Nonsmooth Utility Maximization Problems

Dr Harry Zheng
(Imperial College London)
Abstract

In this talk we show that there exists a smooth classical solution to the HJB equation for a large class of constrained problems with utility functions that are not necessarily differentiable or strictly concave.

The value function is smooth if admissible controls satisfy an integrability condition or if it is continuous on the closure of its domain.

The key idea is to work on the dual control problem and the dual HJB equation. We construct a smooth, strictly convex solution to the dual HJB equation and show that its conjugate function is a smooth, strictly concave solution to the primal HJB equation satisfying the terminal and boundary conditions.
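
Schematically (our notation), the duality used is the Legendre transform $\tilde U(y) = \sup_{x>0}\{U(x) - xy\}$ of the utility: the dual value function $\tilde v$ satisfies its own HJB equation, for which a smooth strictly convex solution can be constructed even when $U$ is neither differentiable nor strictly concave, and the primal value function is recovered by conjugation,

$v(t,x) = \inf_{y>0}\big\{\tilde v(t,y) + xy\big\}.$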