Nomura Seminar
|
Fri, 16/10/2009 14:15 |
Michael Kohlmann (Konstanz) |
Nomura Seminar |
DH 1st floor SR |
| We construct a market of bonds with jumps driven by a general marked point process as well as by an R^n-valued Wiener process, in which there exists at least one equivalent martingale measure Q0. In this market we consider the mean-variance hedging of a contingent claim H ∈ L²(F_{T0}) based on self-financing portfolios on the given maturities T1, ..., Tn with T0 < T1 < ... < Tn ≤ T. We introduce the concept of the variance-optimal martingale measure (VOM) and describe the VOM by a backward semimartingale equation (BSE). We derive an explicit solution for the optimal strategy and the optimal cost of the mean-variance hedging from the solutions of two BSEs. The setting of this problem is somewhat unrealistic, as we restrict the available bonds to those with a pregiven finite number of maturities. We therefore extend the model to a bond market with jumps and a continuum of maturities, with strategies that are Radon-measure-valued processes. To describe the market we consider the cylindrical and normalized martingales introduced by Mikulevicius et al. In this market we then consider the exp-utility problem and derive some results on dynamic indifference valuation. The talk is based on recent joint work with Dewen Xiong. |
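For orientation, a standard formulation of the mean-variance hedging criterion named above (the generic textbook form, not the talk's specific bond-market version): given a claim H, one minimizes the expected squared hedging error over initial capital v and self-financing strategies ϑ,

```latex
\min_{v,\,\vartheta}\ \mathbb{E}\Big[\big(H - v - \int_0^{T_0} \vartheta_u\, dS_u\big)^2\Big],
```

and the variance-optimal martingale measure is, heuristically, the martingale measure Q whose density dQ/dP has minimal L²(P)-norm.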
|
Fri, 23/10/2009 14:15 |
Albert Shiryaev (Steklov) |
Nomura Seminar |
DH 1st floor SR |
| For a logarithmic utility function we extend our result with Xu and Zhou to the case of geometric Brownian motion whose drift term depends on a hidden parameter. |
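As a hedged aside (this is the standard partial-observation result for log utility, not a statement of the talk's extension): for logarithmic utility, certainty equivalence holds, so the optimal fraction of wealth invested in a geometric Brownian motion with unobserved drift μ is the Merton fraction with the drift replaced by its filter estimate,

```latex
\pi^*_t \;=\; \frac{\hat{\mu}_t - r}{\sigma^2}, \qquad \hat{\mu}_t \;=\; \mathbb{E}\big[\mu \mid \mathcal{F}^S_t\big],
```

where F^S denotes the filtration generated by the observed prices.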
|
Fri, 30/10/2009 14:15 |
Mark Davis (Imperial) |
Nomura Seminar |
DH 1st floor SR |
| This paper considers a portfolio optimization problem in which asset prices are represented by SDEs driven by Brownian motion and a Poisson random measure, with drifts that are functions of an auxiliary diffusion 'factor' process. The criterion, following earlier work by Bielecki, Pliska, Nagai and others, is risk-sensitive optimization (equivalent to maximizing the expected growth rate subject to a constraint on variance). By using a change-of-measure technique introduced by Kuroda and Nagai we show that the problem reduces to solving a certain stochastic control problem in the factor process, which has no jumps. The main result of the paper is that the Hamilton-Jacobi-Bellman equation for this problem has a classical solution. The proof uses Bellman's "policy improvement" method together with results on linear parabolic PDEs due to Ladyzhenskaya et al. This is joint work with Sebastien Lleo. |
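The risk-sensitive criterion mentioned above is commonly written as follows (the generic Bielecki-Pliska-style infinite-horizon form; the talk's setup may differ in detail): with V_T the portfolio value and θ > 0 the risk-sensitivity parameter,

```latex
J(\theta) \;=\; \liminf_{T\to\infty}\; -\frac{2}{\theta T}\, \ln \mathbb{E}\Big[ e^{-\frac{\theta}{2}\, \ln V_T} \Big],
```

and a formal Taylor expansion in θ gives J ≈ E[growth] − (θ/4) Var[growth], which is the sense in which the criterion trades expected growth rate against variance.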
|
Fri, 13/11/2009 14:15 |
Jin-Chuan Duan (National University of Singapore) |
Nomura Seminar |
DH 1st floor SR |
| Defaults in a credit portfolio of many obligors or in an economy populated with firms tend to occur in waves. This may simply reflect their sharing of common risk factors and/or manifest their systemic linkages via credit chains. One popular approach to characterizing defaults in a large pool of obligors is the Poisson intensity model coupled with stochastic covariates, or the Cox process for short. A constraining feature of such models is that defaults of different obligors are independent events after conditioning on the covariates, which makes them ill-suited for modeling clustered defaults. Although individual default intensities under such models can be high and correlated via the stochastic covariates, joint default rates will always be zero, because the joint default probabilities are in the order of the length of time squared or higher. In this paper, we develop a hierarchical intensity model with three layers of shocks: common, group-specific and individual. When a common (or group-specific) shock occurs, all obligors (or group members) face individual default probabilities, determining whether they actually default. The joint default rates under this hierarchical structure can be high, and thus the model better captures clustered defaults. This hierarchical intensity model can be estimated using the maximum likelihood principle. A default signature plot is invented to complement the typical power curve analysis in default prediction. We implement the new model on the US corporate bankruptcy data and find it far superior to the standard intensity model both in terms of the likelihood ratio test and default signature plot. |
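The "order of the length of time squared" remark can be made precise with a short generic computation (standard Cox-process reasoning, not the paper's own notation): if defaults of obligors i and j are conditionally independent given the covariates, with intensities λ_i and λ_j, then

```latex
\mathbb{P}\big(\tau_i \le t+\Delta t,\ \tau_j \le t+\Delta t \,\big|\, \mathcal{F}_t,\ \tau_i, \tau_j > t\big)
\;=\; \lambda_{i,t}\,\lambda_{j,t}\,\Delta t^2 + o(\Delta t^2),
```

so dividing by Δt and letting Δt → 0 gives a joint default *rate* of zero, however large the intensities are. A common shock, by contrast, contributes a term of order Δt and hence a strictly positive joint rate.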
|
Fri, 20/11/2009 14:15 |
Jan Kallsen (Kiel) |
Nomura Seminar |
DH 1st floor SR |
| We reconsider Merton's problem under proportional transaction costs. Beginning with Davis and Norman (1990), such utility maximization problems are usually solved using stochastic control theory. Martingale methods, on the other hand, have so far only been used to derive general structural results. These apply the duality theory for frictionless markets typically to a fictitious shadow price process lying within the bid-ask bounds of the real price process. In this study we show that this dual approach can actually be used for both deriving a candidate solution and verification. In particular, the shadow price process is determined explicitly. |
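The shadow-price idea can be stated abstractly as follows (a standard formulation for context, not the talk's specific construction): with proportional costs λ, so that the bid and ask prices are (1−λ)S and S, a shadow price is a process S̃ with

```latex
(1-\lambda)\, S_t \;\le\; \tilde{S}_t \;\le\; S_t \quad \text{for all } t,
```

such that the frictionless optimizer for S̃ trades only when S̃ sits at the bid or the ask; its optimal strategy and value then coincide with those of the original problem with transaction costs.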
|
Fri, 27/11/2009 14:15 |
Wolfgang Runggaldier (Padova) |
Nomura Seminar |
DH 1st floor SR |
| Traditional arbitrage pricing theory is based on martingale measures. Recent studies show that some form of arbitrage may exist in real markets, implying that no equivalent martingale measure exists, and so the question arises: what can one do about pricing and hedging in this situation? We mention two approaches to this effect that have appeared in the literature, namely the "Fernholz-Karatzas" approach and Platen's "benchmark approach", and discuss their relationship, both in models where all relevant quantities are fully observable and in models where this is not the case and, furthermore, not all observables are investment instruments. [The talk is based on joint work with former student Giorgia Galesso] |
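For context, the pricing rule at the heart of Platen's benchmark approach is the real-world pricing formula (stated here in its standard form, not as given in the talk): with V* the growth-optimal (benchmark) portfolio, a claim H paid at T is valued under the physical measure P as

```latex
P_t \;=\; V^*_t\, \mathbb{E}\Big[ \frac{H}{V^*_T} \,\Big|\, \mathcal{F}_t \Big],
```

which requires no equivalent martingale measure: benchmarked nonnegative price processes are P-supermartingales even when classical risk-neutral pricing fails.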
|
Fri, 04/12/2009 14:15 |
Anis Matoussi (Le Mans) |
Nomura Seminar |
Eagle House |
| We study a stochastic control problem in the context of utility maximization under model uncertainty. The problem is formulated as a max-min problem: max over strategies and consumption, and min over the set of models (measures). For the minimization problem, we showed in Bordigoni G., Matoussi, A., Schweizer, M. (2007) that there exists a unique optimal measure equivalent to the reference measure. Moreover, in the context of a continuous filtration, we characterize the dynamic value process of our stochastic control problem as the unique solution of a generalized backward stochastic differential equation with a quadratic driver. We first extend this result to a discontinuous filtration. Moreover, we obtain a comparison theorem and regularity properties for the associated generalized BSDE with jumps, which are the key points in our approach to solving the utility maximization problem over terminal wealth and consumption. The talk is based on joint work with M. Jeanblanc and A. Ngoupeyou (2009). |
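For context, the indifference-valuation notion mentioned above is usually defined as follows for exponential utility (the standard definition, not the talk's discontinuous-filtration version): the indifference value p_t(H) of a claim H is the amount that makes the investor indifferent between trading with and without the claim,

```latex
\sup_{\pi}\ \mathbb{E}\Big[ -e^{-\gamma\,\big(X_T^{x + p_t(H),\,\pi} - H\big)} \Big]
\;=\;
\sup_{\pi}\ \mathbb{E}\Big[ -e^{-\gamma\, X_T^{x,\,\pi}} \Big],
```

where γ > 0 is the risk aversion and X^{x,π} the wealth process started from x under strategy π.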
