Date
Tue, 12 Oct 2010
Time
14:15 - 16:15
Location
Eagle House
Speaker
Tomas Bjork
Organisation
Columbia University/Stockholm School of Economics

"We present a theory for stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We attach these problems by viewing them within a game theoretic framework, and we look for subgame perfect Nash equilibrium points.

For a general controlled Markov process and a fairly general objective functional we derive an extension of the standard Hamilton-Jacobi-Bellman equation, in the form of a system of non-linear equations. We give some concrete examples, and in particular we study the case of mean-variance optimal portfolios with wealth-dependent risk aversion."
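
To make the concrete example at the end of the abstract more tangible, here is a minimal LaTeX sketch of a mean-variance reward functional with wealth-dependent risk aversion; the specific form \gamma(x) = \gamma / x is an illustrative assumption and need not match the specification used in the talk.

% Reward functional for initial state (t, x) and control law u, where
% X^u is the controlled wealth process and gamma(x) > 0 is a
% risk-aversion coefficient that depends on the current wealth x.
\[
J(t, x, u) \;=\; \mathbb{E}_{t,x}\bigl[X_T^{u}\bigr]
\;-\; \frac{\gamma(x)}{2}\,\operatorname{Var}_{t,x}\bigl(X_T^{u}\bigr),
\qquad \text{for example } \gamma(x) = \frac{\gamma}{x}.
\]
% The variance term involves the square of a conditional expectation,
% so J(t, x, u) does not satisfy the law of iterated expectations in
% (t, x): Bellman's optimality principle fails and the family of
% problems indexed by (t, x) is time inconsistent.

Roughly speaking, the game-theoretic formulation then looks for a control law \hat{u} such that, at every state (t, x), deviating from \hat{u} on a small time interval while all "later selves" continue to use \hat{u} cannot improve J; such a \hat{u} is the subgame perfect Nash equilibrium referred to above, and it is characterised by the extended Hamilton-Jacobi-Bellman system mentioned in the abstract.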
