Date: Fri, 23 Jan 2009, 14:15
Location: DH 1st floor SR
Speaker: Tomas Björk
Organisation: Stockholm School of Economics
We present a theory for stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game-theoretic framework, and we look for subgame perfect Nash equilibrium points.
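As a pointer to what "equilibrium" means here in continuous time, the following is a minimal sketch of the spike-variation definition used in this line of work; the notation $J$, $\hat u$, $u_h$ is supplied here for illustration and is not quoted from the talk. An admissible control law $\hat u$ is an equilibrium control if, for every $(t,x)$ and every admissible $u$,

\[
\liminf_{h \downarrow 0} \frac{J(t,x,\hat u) - J(t,x,u_h)}{h} \;\ge\; 0,
\qquad
u_h(s,y) =
\begin{cases}
u(s,y), & t \le s < t+h,\\[2pt]
\hat u(s,y), & t+h \le s \le T,
\end{cases}
\]

i.e. the player acting at time $t$ cannot gain, to first order in $h$, by deviating on the small interval $[t, t+h)$ and then reverting to $\hat u$.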
For a general controlled Markov process and a fairly general objective functional, we derive an extension of the standard Hamilton-Jacobi-Bellman equation, in the form of a system of non-linear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. All known examples of time inconsistency in the literature are easily seen to be special cases of the present theory. We also prove that for every time inconsistent problem there exists an associated time consistent problem such that the optimal control and the optimal value function for the consistent problem coincide with the equilibrium control and value function, respectively, for the time inconsistent problem. We also study some concrete examples.
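As an indication of the shape of such a system, here is a hedged sketch for the simplest time-inconsistent case, a reward of the form $E_{t,x}[F(x, X^u_T)]$ whose terminal payoff depends on the initial state; the notation below follows the Björk-Murgoci formulation and is an assumption, not quoted from the abstract:

\[
\begin{aligned}
&\sup_{u}\Big\{ (\mathcal{A}^{u} V)(t,x) - (\mathcal{A}^{u} f)(t,x,x) + (\mathcal{A}^{u} f^{x})(t,x) \Big\} = 0, \\
&(\mathcal{A}^{\hat u} f^{y})(t,x) = 0, \qquad f^{y}(T,x) = F(y,x), \\
&V(t,x) = f(t,x,x), \qquad V(T,x) = F(x,x),
\end{aligned}
\]

where $\mathcal{A}^u$ is the controlled infinitesimal generator, $\hat u$ is the equilibrium control attaining the supremum, $f(t,x,y) = f^y(t,x)$, and $(\mathcal{A}^u f)(t,x,x)$ means that $\mathcal{A}^u$ acts on $(t,x)$ with the third argument frozen at $y = x$. Note that when $F$ does not depend on its first argument, the two $f$-terms cancel and the system collapses to the standard HJB equation, recovering the time consistent case.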