Fri, 20/01/2012
14:15
William Shaw (UCL), Nomura Seminar, DH 1st floor SR
We develop the idea of using Monte Carlo sampling of random portfolios to solve portfolio investment problems. We explore the need for more general optimization tools and consider the means by which constrained random portfolios may be generated. Devroye's approach to sampling the interior of a simplex (a collection of non-negative random variables adding to unity) is already available for interior solutions of simple fully-invested long-only systems, and we extend this to treat lower-bound constraints and bounded short positions, and to sample non-interior points by the method of Face-Edge-Vertex-biased sampling. A practical scheme for long-only and bounded-short problems is developed and tested. Non-convex and disconnected regions can be treated by applying rejection for other constraints. The advantage of Monte Carlo methods is that they may be extended to risk functions that are more complicated functions of the return distribution, without explicit gradients, and that the underlying return distribution may be modeled parametrically or empirically based on general distributions. The optimization of expected utility, Omega and Sortino ratios may be handled in a similar manner to quadratic risk, VaR and CVaR, irrespective of whether a reduction to LP or QP form is available. Robustification is also possible, and a Monte Carlo approach allows the possibility of relaxing the general maxi-min approach to one of varying degrees of conservatism. Grid computing technology is an excellent platform for such computations due to their intrinsically parallel nature. Good comparisons with established results in mean-variance and CVaR optimization are obtained, and we give some applications to Omega and expected utility optimization. Extensions to deploy Sobol and Niederreiter quasi-random methods for random weights are also proposed. The method proposed is a two-stage process.
First, an initial global search produces a good feasible solution for any number of assets with any risk function and return distribution. This solution is already close to optimal in lower dimensions, based on an investigation of several test problems. Further precision, and solutions in 10-100 dimensions, are obtained by invoking a second stage in which the solution is iterated via Monte Carlo simulation on a series of contracting hypercubes.
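The interior-sampling step described above can be sketched in a few lines: uniform sampling on the simplex via normalized exponential draws (Devroye's method), plus a shift-and-rescale step for lower bounds. This is a minimal illustration under function names of our own choosing, not the authors' full Face-Edge-Vertex-biased scheme.

```python
import random

def uniform_simplex(n, rng=random):
    """Uniform sample from {w >= 0, sum(w) = 1}: draw n iid Exp(1)
    variables and normalize (Devroye's method)."""
    e = [rng.expovariate(1.0) for _ in range(n)]
    s = sum(e)
    return [x / s for x in e]

def simplex_with_lower_bounds(lo, rng=random):
    """Uniform sample from {w >= lo, sum(w) = 1}; requires sum(lo) < 1.
    Shift by the bounds and rescale a uniform simplex draw into the slack."""
    slack = 1.0 - sum(lo)
    if slack <= 0:
        raise ValueError("lower bounds are infeasible")
    u = uniform_simplex(len(lo), rng)
    return [l + slack * x for l, x in zip(lo, u)]
```

Bounded short positions fit the same pattern with negative lower bounds, and non-convex constraint regions can then be handled by rejection, as in the abstract.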
Fri, 27/01/2012
14:15
Jose Blanchet (Columbia), Nomura Seminar, DH 1st floor SR
We propose a dynamic insurance network model that allows one to deal with reinsurance counterparty default risks, with the particular aim of capturing cascading effects at the time of defaults. We capture these effects by finding an equilibrium allocation of settlements, which can be found as the unique optimal solution of a linear programming problem. This equilibrium allocation recognizes 1) the correlation among the risk factors, which are assumed to be heavy-tailed, 2) the contractual obligations, which are assumed to follow popular contracts in the insurance industry (such as stop-loss and retrocession), and 3) the interconnections of the insurance-reinsurance network. We are able to obtain an asymptotic description of the most likely ways in which the default of a specific group of insurers can occur, by means of solving a multidimensional knapsack integer programming problem. Finally, we propose a class of provably strongly efficient estimators for computing the expected loss of the network conditional on the failure of a specific set of companies. Strong efficiency means that the complexity of computing the large deviations probability or conditional expectation remains bounded as the event of interest becomes more and more rare.
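The flavour of such equilibrium settlement computations can be illustrated with a minimal fixed-point sketch in the spirit of Eisenberg-Noe clearing vectors (whose LP characterization is the classical analogue of the one mentioned above). This is a generic simplification, not the authors' insurance-specific model, and all names are ours.

```python
def clearing_vector(pbar, assets, pi, n_iter=200):
    """Fixed-point iteration p <- min(pbar, assets + Pi^T p).

    pbar[i]  : total nominal obligation of firm i
    assets[i]: outside assets of firm i
    pi[i][j] : fraction of firm i's payments owed to firm j
    Starting from full payment, the monotone iteration converges to the
    greatest clearing vector.
    """
    n = len(pbar)
    p = list(pbar)
    for _ in range(n_iter):
        inflow = [assets[j] + sum(pi[i][j] * p[i] for i in range(n))
                  for j in range(n)]
        p = [min(pbar[j], inflow[j]) for j in range(n)]
    return p
```

In a two-firm example where each firm owes everything to the other, a shortfall at one firm propagates: with obligations [10, 5] and outside assets [2, 0], firm 0 can only pay 7, illustrating the cascading effect the talk's model captures at network scale.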
Fri, 03/02/2012
14:15
Stefan Gerold (TU Wien), Nomura Seminar, DH 1st floor SR
In a market with one safe and one risky asset, an investor with a long horizon and constant relative risk aversion trades with constant investment opportunities and proportional transaction costs. We derive the optimal investment policy, its welfare, and the resulting trading volume, explicitly as functions of the market and preference parameters, and of the implied liquidity premium, which is identified as the solution of a scalar equation. For small transaction costs, all these quantities admit asymptotic expansions of arbitrary order. The results exploit the equivalence of the transaction cost market to another frictionless market, with a shadow risky asset, in which investment opportunities are stochastic. The shadow price is also derived explicitly. (Joint work with Paolo Guasoni, Johannes Muhle-Karbe, and Walter Schachermayer)
Fri, 10/02/2012
14:15
Catherine Donnelly (Heriot-Watt), Nomura Seminar, DH 1st floor SR
We consider the pricing of a maturity guarantee, which is equivalent to the pricing of a European put option, in a regime-switching market model. Regime-switching market models have been empirically shown to fit long-term stock market data better than many other models. However, since a regime-switching market is incomplete, there is no unique price for the maturity guarantee. We extend the good-deal pricing bounds idea to the regime-switching market model. This allows us to obtain a reasonable range of prices for the maturity guarantee, by excluding those prices which imply a Sharpe ratio that is too high. The range of prices can be used as a plausibility check on the chosen price of a maturity guarantee.
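For background, in the classical one-period formulation of good-deal bounds due to Cochrane and Saá-Requejo (stated here as context; the talk's contribution is the extension to regime-switching models), the bounds on the price of a payoff $X$ take the form

```latex
\underline{C} = \min_{m}\; \mathbb{E}[mX],
\qquad
\overline{C} = \max_{m}\; \mathbb{E}[mX],
```

where the optimization runs over pricing kernels $m \ge 0$ that correctly price the traded assets, $\mathbb{E}[mR] = 1$, and additionally satisfy

```latex
\mathbb{E}[m^2] \;\le\; \frac{1+h^2}{R_f^2}.
```

Since $\mathbb{E}[m] = 1/R_f$, the last constraint is equivalent to $\sigma(m)/\mathbb{E}[m] \le h$, which by the Hansen-Jagannathan bound caps the Sharpe ratio of any priced payoff at $h$ and thereby rules out "too good" deals.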
Fri, 17/02/2012
14:15
Olivier Bokanowski (UMA), Nomura Seminar, DH 1st floor SR
We will first motivate and review some implicit schemes that arise from the discretization of nonlinear PDEs in finance or in optimal control problems, when using finite difference or finite element methods. For the American option problem, we are led to compute the solution of a discrete obstacle problem, and we will give some results on the convergence of nonsmooth Newton's method for solving such problems. Implicit schemes are interesting for their stability properties; however, they can be too costly in practice. We will then present some novel schemes and ideas, based on the semi-Lagrangian approach and on discontinuous Galerkin methods, which aim to be as explicit as possible in order to gain practical efficiency.
Fri, 24/02/2012
14:15
Peter Forsyth (Waterloo), Nomura Seminar, DH 1st floor SR
Algorithmic trade execution has become a standard technique for institutional market players in recent years, particularly in the equity market where electronic trading is most prevalent. A trade execution algorithm typically seeks to execute a trade decision optimally upon receiving inputs from a human trader. A common form of optimality criterion seeks to strike a balance between minimizing price impact and minimizing timing risk. For example, in the case of selling a large number of shares, a fast liquidation will cause the share price to drop, whereas a slow liquidation will expose the seller to timing risk due to the stochastic nature of the share price. We compare optimal liquidation policies in continuous time in the presence of trading impact using numerical solutions of Hamilton-Jacobi-Bellman (HJB) partial differential equations (PDEs). In particular, we compare the time-consistent mean-quadratic-variation strategy (Almgren and Chriss) with the time-inconsistent (pre-commitment) mean-variance strategy. The Almgren and Chriss strategy should be viewed as the industry standard. We show that the two different risk measures lead to very different strategies and liquidation profiles. In terms of the mean-variance efficient frontier, the original Almgren/Chriss strategy is significantly sub-optimal compared to the (pre-commitment) mean-variance strategy. This is joint work with Stephen Tse, Heath Windcliff and Shannon Kennedy.
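For context, the static Almgren-Chriss strategy referred to above has a well-known closed form: holdings decay along a sinh profile whose urgency parameter combines risk aversion, volatility and temporary impact. A sketch, with illustrative parameter values of our own choosing:

```python
import math

def almgren_chriss_holdings(X0, T, sigma, eta, lam, n_steps):
    """Holdings along the classical static Almgren-Chriss liquidation
    trajectory x(t) = X0 sinh(kappa (T - t)) / sinh(kappa T), where
    kappa = sqrt(lam sigma^2 / eta): lam is the risk aversion, sigma
    the volatility, eta the temporary-impact coefficient. (Linear
    permanent impact adds a trajectory-independent cost, so it does
    not appear in the trajectory.)"""
    kappa = math.sqrt(lam * sigma ** 2 / eta)
    times = [T * k / n_steps for k in range(n_steps + 1)]
    return [X0 * math.sinh(kappa * (T - t)) / math.sinh(kappa * T)
            for t in times]

# A more risk-averse trader (larger lam) sells faster early on.
slow = almgren_chriss_holdings(1e6, 1.0, 0.3, 1e-6, 1e-7, 10)
fast = almgren_chriss_holdings(1e6, 1.0, 0.3, 1e-6, 1e-5, 10)
```

The mean-quadratic-variation and pre-commitment mean-variance strategies compared in the talk require the HJB-PDE machinery precisely because such closed forms are no longer available.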
Fri, 02/03/2012
14:15
Sara Biagini (Unipi), Nomura Seminar, DH 1st floor SR
The use of the gain-loss ratio as a measure of attractiveness was introduced by Bernardo and Ledoit. In their well-known paper, they show that gain-loss ratio restrictions have a dual representation in terms of restricted pricing kernels. In spite of its clear financial significance, the gain-loss ratio has been largely ignored in the mathematical finance literature, with few exceptions (Cherny and Madan, Pinar). The main reason is its intrinsic lack of good mathematical properties. This paper aims to provide a rigorous study of the gain-loss ratio and its dual representations in a continuous-time market setting, placing it in the context of risk measures and acceptability indexes. We also point out (and correctly reformulate) an erroneous statement made by Bernardo and Ledoit in their main result. This is joint work with M. Pinar.
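For reference, on a finite outcome space the Bernardo-Ledoit gain-loss ratio of a zero-cost excess payoff X is E[X+]/E[X-], with an infinite ratio signalling an arbitrage. A minimal sketch (function name ours):

```python
def gain_loss_ratio(excess_payoffs, probs=None):
    """Gain-loss ratio E[X^+] / E[X^-] of a zero-cost excess payoff X,
    given its discrete distribution (uniform probabilities by default)."""
    n = len(excess_payoffs)
    if probs is None:
        probs = [1.0 / n] * n
    gain = sum(p * max(x, 0.0) for p, x in zip(probs, excess_payoffs))
    loss = sum(p * max(-x, 0.0) for p, x in zip(probs, excess_payoffs))
    if loss == 0.0:
        return float("inf")  # gains with no possible loss: an arbitrage
    return gain / loss
```

The degenerate infinite case is one symptom of the poor mathematical properties mentioned above: the index is not finite-valued, and in continuous time its dual representation requires the careful treatment the talk provides.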
Fri, 09/03/2012
14:15
Marcel Nutz (Columbia), Nomura Seminar, DH 1st floor SR
We provide a general construction of time-consistent sublinear expectations on the space of continuous paths. In particular, we construct the conditional G-expectation of a Borel-measurable (rather than quasi-continuous) random variable.
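On a finite outcome space, a sublinear expectation is simply a supremum of linear expectations over a family of probability models; the construction in the talk is the far-reaching analogue of this on the space of continuous paths, with conditioning done time-consistently. A toy sketch (names ours):

```python
def sublinear_expectation(payoff, models):
    """E(X) = sup over a family of probability models of E_P[X].
    Each model is a probability vector over a common finite outcome
    space; payoff maps an outcome index to a value."""
    return max(sum(p * payoff(i) for i, p in enumerate(model))
               for model in models)

# Two models over three outcomes: model uncertainty about the middle mass.
models = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]]
X = [1.0, 0.0, 2.0]
Y = [0.0, 2.0, 1.0]
EX = sublinear_expectation(lambda i: X[i], models)
EY = sublinear_expectation(lambda i: Y[i], models)
EXY = sublinear_expectation(lambda i: X[i] + Y[i], models)
```

The defining property is sublinearity, E(X + Y) <= E(X) + E(Y), since a single model cannot simultaneously be the worst case for both payoffs.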