Bottleneck Option
Abstract
Relative entropy weighted optimization is a convex optimization problem over the space of probability measures. Many convex optimization problems can be rephrased as such a problem. This is particularly useful since this problem type admits a quasi-explicit solution (i.e. as the expectation of a random variable), which immediately provides a Monte Carlo method for numerically computing the solution of the optimization problem.
In this talk we discuss the background and application of this approach to stochastic optimal control problems, which may be considered as relative entropy weighted problems with Wiener space as probability space, and its connection with the theory of large deviations for Brownian functionals. As a particular application we discuss the minimization of the local time at a given point of Brownian motion with drift.
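The quasi-explicit solution can be illustrated with a toy computation (the cost function and reference measure below are our own choices, not from the talk): the minimizer of nu -> E_nu[f] + KL(nu || mu) is the Gibbs measure d nu* proportional to exp(-f) d mu, so expectations under nu* become weighted Monte Carlo averages under mu.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # illustrative cost function (our choice)
    return (x - 1.0) ** 2

# reference measure mu: standard normal, sampled once
x = rng.standard_normal(200_000)

# The minimizer of  nu -> E_nu[f] + KL(nu || mu)  is the Gibbs measure
# d nu* ~ exp(-f) d mu, so expectations under nu* are weighted Monte
# Carlo averages under mu:
w = np.exp(-f(x))
mean_opt = float(np.sum(w * x) / np.sum(w))
# for this Gaussian example nu* = N(2/3, 1/3), so mean_opt is close to 2/3
```

The same weighted-average trick applies to any statistic of the optimizer, which is what makes the quasi-explicit solution numerically useful.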
The theory of risk measurement has been extensively developed over the past ten years or so, but there has been comparatively little effort devoted to using this theory to inform portfolio choice. One theme of this paper is to study how an investor in a conventional log-Brownian market would invest to optimize expected utility of terminal wealth, when subjected to a bound on his risk, as measured by a coherent law-invariant risk measure. Results of Kusuoka lead to remarkably complete expressions for the solution to this problem.
The second theme of the paper is to discuss how one would actually manage (not just measure) risk. We study a principal/agent problem, where the principal is required to satisfy some risk constraint. The principal proposes a compensation package to the agent, who then optimises selfishly ignoring the risk constraint. The principal can pick a compensation package that induces the agent to select the principal's optimal choice.
The paper shows that financial market equilibria need not exist if agents possess cumulative prospect theory preferences with piecewise-power value functions. The reason is an infinite short-selling problem. But even when a short-selling constraint is added, non-existence can occur due to discontinuities in agents' demand functions. Existence of equilibria is established when short-sales constraints are imposed and there is also a continuum of agents in the market.
In this talk we present work done with M. Di Giacinto (Università di Cassino, Italy) and Salvatore Federico (Scuola Normale, Pisa, Italy). The subject of the work is a continuous-time stochastic model of optimal allocation for a defined contribution pension fund with a minimum guarantee. We adopt the point of view of a fund manager maximizing the expected utility from the fund wealth over an infinite horizon.
The level of wealth is constrained to stay above a "solvency level".
The model is naturally formulated as an optimal control problem of a stochastic delay equation with state constraints and is treated by the dynamic programming approach.
We first present the study in the simplified case of no delay, where a satisfactory theory can be built proving the existence of regular feedback control strategies, and then turn to the more general case, showing some first results on the value function and its properties.
We begin with the study of the exponential utility maximization problem. As opposed to most papers dealing with this subject, the trading strategies we allow are subject to constraints described by closed, but not necessarily convex, sets. Instead of the well-known convex duality approach, we apply a backward stochastic differential equation (BSDE) approach, which leads to the study of quadratic BSDEs. The second part gives recent results on the existence and uniqueness of solutions to quadratic BSDEs. We also give the connection between these BSDEs and quadratic PDEs. The last part shows that the quadratic case is critical: in the superquadratic case there always exists some BSDE without a solution, and there are infinitely many solutions whenever there is one. This phenomenon does not exist for quadratic and superquadratic PDEs.
Stress levels embedded in S&P 500 options are constructed and reported. The stress function used is MINMAXVAR. Seven joint laws for the top 50 stocks in the index are considered. The first time-changes a Gaussian one-factor copula. The remaining six employ correlated Brownian motions independently time-changed in each coordinate. Four models use daily returns, either run as Lévy processes or scaled to the option maturity. The last two employ risk-neutral marginals from the VGSSD and CGMYSSD Sato processes. The smallest stress function uses CGMYSSD risk-neutral marginals and Lévy correlation. Running the Lévy process yields a lower stress surface than scaling to the option maturity. Static hedging of basket options to a particular level of acceptability is shown to substantially lower the price at which the basket option may be offered.
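For reference, the MINMAXVAR stress (distortion) function of Cherny and Madan at level gamma is Psi(u) = 1 - (1 - u^(1/(1+gamma)))^(1+gamma), reducing to the identity at gamma = 0. A minimal sketch, on toy data of our own choosing, of the distorted expectation that underlies such acceptability computations:

```python
import numpy as np

def minmaxvar(u, gamma):
    """MINMAXVAR concave distortion at stress level gamma (identity at gamma = 0)."""
    return 1.0 - (1.0 - u ** (1.0 / (1.0 + gamma))) ** (1.0 + gamma)

def distorted_expectation(samples, gamma):
    """Distorted expectation  int x dPsi(F(x))  on the empirical distribution;
    its nonnegativity is acceptability at level gamma."""
    x = np.sort(samples)
    n = len(x)
    u = np.arange(n + 1) / n                 # empirical cdf grid
    dpsi = np.diff(minmaxvar(u, gamma))      # distorted probability weights
    return float(np.sum(x * dpsi))

rng = np.random.default_rng(1)
cash_flow = rng.standard_normal(100_000)     # toy zero-mean cash flow (ours)
m0 = distorted_expectation(cash_flow, 0.0)   # plain sample mean, near 0
m1 = distorted_expectation(cash_flow, 0.25)  # stressed: losses weighted up, so negative
```

Raising gamma shifts probability weight toward the worst outcomes, which is why the stressed value m1 falls below the plain mean m0.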
Starting from the problem of perfect hedging under market illiquidity, as introduced by Cetin, Jarrow and Protter, we introduce a class of second order target problems. A dual formulation in the general non-Markov case is obtained by formulating the problem under a convenient reference measure. In contrast with previous works, the controls lie in the classical H2 spaces associated to the reference measure. A dual formulation of the problem in terms of a standard stochastic control problem is derived, and involves control of the diffusion component.
We prove existence of equilibrium in a continuous-time securities market in which the securities are potentially dynamically complete: the number of securities is at least one more than the number of independent sources of uncertainty. We prove that dynamic completeness of the candidate equilibrium price process follows from mild exogenous assumptions on the economic primitives of the model. Our result is universal, rather than generic: dynamic completeness of the candidate equilibrium price process and existence of equilibrium follow from the way information is revealed in a Brownian filtration, and from a mild exogenous nondegeneracy condition on the terminal security dividends. The nondegeneracy condition, which requires finding one point at which a determinant of a Jacobian matrix of dividends is nonzero, is very easy to check. We find that the equilibrium prices, consumptions, and trading strategies are well-behaved functions of the stochastic process describing the evolution of information.
We prove that equilibria of discrete approximations converge to equilibria of the continuous-time economy.
We discuss the valuation problem for a broad spectrum of derivatives, especially in Lévy driven models. The key idea in this approach is to separate from the computational point of view the role of the two ingredients which are the payoff function and the driving process for the underlying quantity. Conditions under which valuation formulae based on Fourier and Laplace transforms hold in a general framework are analyzed. An interesting interplay between the properties of the payoff function and the driving process arises. We also derive the analytically extended characteristic function of the supremum and the infimum processes derived from a Lévy process. Putting the different pieces together, we can price lookback and one-touch options in Lévy driven models, as well as options on the minimum and maximum of several assets.
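A minimal sketch of the transform approach in the simplest Lévy-driven case, Brownian motion (i.e. Black-Scholes), where the Fourier price can be checked against the closed form; all parameter values below are our own:

```python
import numpy as np
from math import erf, exp, log, sqrt

# illustrative parameters (ours); Brownian case chosen so the answer is checkable
S0, K, r, sigma, T = 100.0, 95.0, 0.02, 0.3, 1.0

def phi(u):
    """Characteristic function of ln S_T in the Black-Scholes model."""
    m = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * m - 0.5 * sigma**2 * u**2 * T)

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def call_fourier():
    """European call price via Gil-Pelaez inversion of phi."""
    k = np.log(K)
    u = np.linspace(1e-6, 60.0, 20001)
    # P2 = Q(S_T > K); P1 = exercise probability under the share measure
    i2 = (np.exp(-1j * u * k) * phi(u) / (1j * u)).real
    i1 = (np.exp(-1j * u * k) * phi(u - 1j) / (1j * u * phi(-1j))).real
    P1 = 0.5 + trapezoid(i1, u) / np.pi
    P2 = 0.5 + trapezoid(i2, u) / np.pi
    return S0 * P1 - K * np.exp(-r * T) * P2

def call_closed_form():
    """Black-Scholes benchmark for the same parameters."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * N(d1) - K * exp(-r * T) * N(d2)
```

The same inversion runs unchanged for any Lévy model once phi is replaced by that model's characteristic function, which is exactly the separation of payoff and driving process mentioned above.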
A modelling framework is introduced in which there is a small agent who is more susceptible to the flow of information in the market than the general market participants. In this framework market participants have access to a stream of noisy information concerning the future returns of the asset, whereas the informed trader has access to an additional information source, itself obscured by further noise, which may be correlated with the market noise. The informed trader exploits the extraneous information source to seek statistical arbitrage opportunities, in exchange for accommodating the additional risk. The information content of the market concerning the value of the impending cash flow is represented by the mutual information of the asset price and the associated cash flow. The worth of the additional information source is then measured by the difference in mutual information between the market participants and the informed trader. This difference is shown to be strictly nonnegative for all parameter values in the model when the signal-to-noise ratio is known in advance. Trading strategies making use of the additional information are considered. (Talk is based on joint work with M.H.A. Davis (Imperial) & R.L. Friedman (Imperial & Royal Bank of Scotland).)
This talk will give a survey of results in continuous-time
contract theory, and discuss open problems and plans for further
research on this topic.
The general question is how a ``principal" (a company, investors ...)
should design a payoff for compensating an ``agent" (an executive, a
portfolio manager, ...) in order to induce the best possible
performance.
The following frameworks are standard in contract theory:
(i) the principal and the agent have the same, full information;
(ii) the principal cannot monitor the agent's actions;
(iii) the principal does not know the agent's type.
We will discuss all three of these problems.
The mathematical tools used are those of stochastic control theory,
stochastic maximum principle and Forward Backward Stochastic
Differential Equations.
We consider a financial contract that delivers a single cash flow given by the terminal value of a cumulative gains process.
The problem of modelling such an asset and associated derivatives is important, for example, in the determination of optimal insurance claims reserve policies, and in the pricing of reinsurance contracts. In the insurance setting, aggregate claims play the role of cumulative gains, and the terminal cash flow represents the totality of the claims payable for the given accounting period. A similar example arises when we consider the accumulation of losses in a credit portfolio, and value a contract that pays an amount equal to the totality of the losses over a given time interval. An expression for the value process of such an asset is derived as follows. We fix a probability space, together with a pricing measure, and model the terminal cash flow by a random variable; next, we model the cumulative gains process by the product of the terminal cash flow and an independent gamma bridge; finally, we take the filtration to be that generated by the cumulative gains process.
An explicit expression for the value process is obtained by taking the discounted expectation of the future cash flow, conditional on the relevant market information. The price of an Arrow–Debreu security on the cumulative gains process is determined, and is used to obtain a closed-form expression for the price of a European-style option on the value of the asset at the given intermediate time. The results obtained make use of remarkable properties of the gamma bridge process, and are applicable to a wide variety of financial products based on cumulative gains processes such as aggregate claims, credit portfolio losses, defined benefit pension schemes, emissions, and rainfall. (Co-authors: D. C. Brody, Imperial College London, and A. Macrina, King's College London and ETH Zurich.)
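The gamma-bridge construction described above can be sketched numerically using the property that the gamma bridge at time t, i.e. gamma_t / gamma_T, is Beta(mt, m(T - t))-distributed and independent of gamma_T; all parameter choices below are our own:

```python
import numpy as np

rng = np.random.default_rng(2)
m, T, n_paths = 2.0, 1.0, 100_000   # activity parameter and horizon (our choices)

# terminal cash flow X_T: any positive random variable; lognormal here (our choice)
X_T = rng.lognormal(mean=0.0, sigma=0.5, size=n_paths)

def gains_at(t):
    """Cumulative gains xi_t = X_T * (gamma_t / gamma_T): the terminal cash
    flow multiplied by an independent gamma bridge, whose time-t value
    gamma_t / gamma_T is Beta(m*t, m*(T - t)) distributed."""
    bridge = rng.beta(m * t, m * (T - t), size=n_paths)
    return X_T * bridge

xi_half = gains_at(0.5 * T)
# since the Beta(m*t, m*(T - t)) mean is t/T, we have E[xi_t] = E[X_T] * t / T
```

Sampling the bridge directly from the Beta law, rather than simulating a full gamma process path, is what makes the valuation formulas in this framework tractable.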
Trading a financial asset involves a sequence of decisions to buy or sell the asset over time. A traditional trading strategy is to buy low and sell high. However, in practice, identifying these low and high levels is extremely challenging. In this talk, I will present our ongoing research on the characterization of these key levels when the underlying asset price is dictated by a mean-reversion model. Our objective is to buy and sell the asset sequentially in order to maximize the overall profit. Mathematically, this amounts to determining a sequence of stopping times. We establish the associated dynamic programming equations (quasi-variational
inequalities) and show that these differential equations can be converted to algebraic-like equations under certain conditions.
The two threshold (buy and sell) levels can be found by solving these algebraic-like equations. We provide sufficient conditions that guarantee the optimality of our trading strategy.
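A minimal simulation sketch of the buy-low/sell-high rule under a mean-reversion model; the threshold levels and all parameters below are our own illustrative choices, not the optimal ones characterized in the talk:

```python
import numpy as np

rng = np.random.default_rng(3)
# Ornstein-Uhlenbeck price dynamics:  dX = kappa*(theta - X) dt + sigma dW
kappa, theta, sigma = 5.0, 1.0, 0.2
dt, n_steps = 1e-3, 200_000
buy_level, sell_level = 0.9, 1.1     # illustrative thresholds, not the optimal ones

x, holding, entry = theta, False, 0.0
profit, trades = 0.0, 0
for _ in range(n_steps):
    # Euler step of the mean-reverting diffusion
    x += kappa * (theta - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if not holding and x <= buy_level:       # buy low
        holding, entry = True, x
    elif holding and x >= sell_level:        # sell high
        profit += x - entry
        holding, trades = False, trades + 1
```

Each completed round trip earns at least sell_level - buy_level; the talk's contribution is to identify the thresholds that make the expected profit maximal, rather than fixing them ad hoc as here.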
Many portfolio optimization problems are directly or indirectly concerned with the current maximum of the underlying: directly for lookback or Russian options and for optimization under a max-drawdown constraint, and indirectly for American put options and optimization with floor constraints.
The Azéma-Yor martingales, or max-martingales, introduced in 1979 to solve the Skorokhod embedding problem, turn out to be remarkably efficient for providing simple solutions to some of these problems, written on semimartingales with continuous running supremum.
14.15 - 15.00 Part I
Marc Yor : The infinite horizon case.
15.00 - 15.15 A short break for questions and answers
15.15 - 16.00 Part II
Amel Bentata : The finite horizon case.
Roughly, the Black-Scholes formula is a distribution function of the maturity. This may be explained in terms of the last passage times at a given level of the underlying Brownian motion with drift.
Conversely, starting with last passage times up to finite horizon, we obtain a 2-parameter variant of the Black-Scholes formula.
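To make the first statement concrete in the simplest case (zero interest rate and at-the-money strike K = S_0; our own specialization, not the speakers' general result), the Black-Scholes call price reads

```latex
C(T) = E\big[(S_T - K)^+\big]
     = S_0\Big(\Phi\big(\tfrac{1}{2}\sigma\sqrt{T}\big)
       - \Phi\big(-\tfrac{1}{2}\sigma\sqrt{T}\big)\Big)
     = S_0\big(2\,\Phi\big(\tfrac{1}{2}\sigma\sqrt{T}\big) - 1\big),
```

which increases from 0 at T = 0 to S_0 as T grows, so C(T)/S_0 is the distribution function of a positive random variable; the talk identifies such a random variable with a last passage time of the underlying Brownian motion with drift.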
Efficient numerical solution of several important partial-differential-equation-based models in mathematical finance is impeded by the fact that they contain operators which are Lipschitz continuous but not continuously differentiable. As a consequence, Newton methods are not directly applicable and, more importantly, do not provide their typical fast convergence properties.
In this talk, semi-smooth Newton methods are presented as a remedy to the above-mentioned difficulties. We also discuss algorithmic issues, including the primal-dual active set strategy and path-following techniques.
We discuss model-free pricing of digital options, which pay out depending on whether the underlying asset has crossed upper and lower levels. We make only weak assumptions about the underlying process (typically continuity), but assume that the initial prices of call options with the same maturity and all strikes are known. Treating this market data as input, we are able to give upper and lower bounds on the arbitrage-free prices of the relevant options, and further, using techniques from the theory of Skorokhod embeddings, to show that these bounds are tight. Additionally, martingale inequalities are derived, which provide the trading strategies with which we are able to realise any potential arbitrages.
Joint work with Alexander Cox (University of Bath)
The Mutual Fund Theorem (MFT) is considered in a general semimartingale financial market S with a finite time horizon T, where agents maximize expected utility of terminal wealth. The main results are:
(i) Let N be the wealth process of the numéraire portfolio (i.e. the optimal portfolio for the log utility). If any path-independent option with maturity T written on the numéraire portfolio can be replicated by trading only in N and the risk-free asset, then the (MFT) holds true for general utility functions, and the numéraire portfolio may serve as mutual fund. This generalizes Merton’s classical result on Black-Merton-Scholes markets.
Conversely, under a supplementary weak completeness assumption, we show that the validity of the (MFT) for general utility functions implies the replicability property for options on the numéraire portfolio described above.
(ii) If for a given class of utility functions (i.e. investors) the
(MFT) holds true in all complete Brownian financial markets S, then all investors use the same utility function U, which must be of HARA type.
This is a result in the spirit of the classical work by Cass and Stiglitz.
When liquidating large portfolios of securities, one faces a trade-off between the adverse market impact of sell orders and the impatience to generate proceeds. We present a Black-Scholes model with an impact factor describing the market's distress arising from previous transactions, and show how to solve the ensuing optimization problem via classical calculus of variations. (Joint work with Dirk Becherer, Humboldt Universität zu
Berlin)
We consider impulse control problems in finite horizon for diffusions with decision lag and execution delay. The new feature is that our general framework deals with the important case when several consecutive orders may be decided before the effective execution of the first one.
This is motivated by financial applications in the trading of illiquid assets such as hedge funds.
We show that the value functions for such control problems satisfy a suitable version of the dynamic programming principle in finite dimension, which takes into account the dependence of the state process on the past through the pending orders. The corresponding Bellman partial differential equation (PDE) system is derived, and exhibits some peculiarities in the coupled equations, domains and boundary conditions. We characterize the value functions as the unique viscosity solutions of this nonstandard PDE system. We then provide an algorithm to find the value functions and the optimal control. This implementable algorithm involves backward and forward iterations on the domains and the value functions, which appear in turn as original arguments in the proofs of the boundary conditions and uniqueness results. Finally, we give several numerical experiments illustrating the impact of execution delay on trading strategies and on option pricing.
In this talk we will investigate the properties of stochastic volatility models, discussing to what extent, and for which models, properties of the classical exponential Brownian motion model carry over to a stochastic volatility setting.
The properties of the classical model of interest include the fact that the discounted stock price is positive for all $t$ but converges to zero almost surely, the fact that it is a martingale but not a uniformly integrable martingale, and the fact that European option prices (with convex payoff functions) are convex in the initial stock price and increasing in volatility. We give examples of stochastic volatility models where these properties continue to hold, and other examples where they fail.
The main tool is a construction of a time-homogeneous autonomous volatility model via a time change.
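Two of the classical-model properties listed above can be checked by direct simulation (parameters are our own): the discounted price exp(sigma*W_t - sigma^2*t/2) has expectation 1 at every t, yet almost every path tends to zero.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, n_paths = 1.0, 200_000   # illustrative volatility (ours)

def discounted_price(t):
    """Sample M_t = exp(sigma*W_t - sigma^2*t/2), the discounted stock price
    in the classical exponential Brownian motion model (a martingale, M_0 = 1)."""
    W = np.sqrt(t) * rng.standard_normal(n_paths)
    return np.exp(sigma * W - 0.5 * sigma**2 * t)

M_short, M_long = discounted_price(1.0), discounted_price(25.0)
# E[M_t] = 1 for every t (martingale property), while the median
# exp(-sigma^2 * t / 2) collapses: almost every long-horizon path is tiny.
```

The gap between the constant mean and the vanishing median is exactly why the discounted price is a martingale but not a uniformly integrable one; the talk asks which stochastic volatility models preserve or destroy this behaviour.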