This work deals with a class of stochastic optimal control problems in the presence of state constraints. It is well known that for such problems the value function is, in general, discontinuous, and its characterisation by a Hamilton-Jacobi equation requires additional assumptions involving an interplay between the boundary of the set of constraints and the dynamics
of the controlled system. Here, we give a characterisation of the epigraph of the value function without assuming the usual controllability conditions. To this end, the stochastic optimal control problem is first translated into a state-constrained stochastic target problem. A level-set approach is then used to describe the backward reachable sets of the new target problem, and it turns out that these sets characterise the epigraph of the value function. The main advantage of our approach is that the state constraints can be handled easily by an exact penalisation. The price to pay is that the target problem involves an additional state variable and an additional control variable that is unbounded.
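To fix ideas, a schematic version of such a level-set characterisation might look as follows; all notation here ($v$, $w$, $g$, $K$, $X$, $Z$) is hypothetical and not taken from the talk, and the precise form of the auxiliary problem in the stochastic setting may differ:

```latex
% Hypothetical sketch: v is the state-constrained value function, K the
% constraint set, and g an exact penalisation of K, i.e. g(x) <= 0 iff x in K.
\[
  v(t,x) \;=\; \inf_{u}\Big\{\, \mathbb{E}\big[f\big(X^{t,x,u}_T\big)\big]
      \;:\; X^{t,x,u}_s \in K \ \text{ for all } s\in[t,T] \,\Big\}.
\]
% An auxiliary value function w, built from a target problem with an extra
% controlled state Z (control alpha, possibly unbounded), is designed so that
% its zero sublevel sets -- the backward reachable sets -- recover the epigraph:
\[
  \operatorname{epi} v(t,\cdot)
  \;=\; \big\{ (x,z) \,:\, w(t,x,z) \le 0 \big\},
\]
% where, schematically, the constraint is absorbed by the penalisation
\[
  w(t,x,z) \;=\; \inf_{u,\alpha}\,
  \mathbb{E}\Big[\max\Big(f\big(X^{t,x,u}_T\big)-Z^{t,z,\alpha}_T,\;
      \max_{s\in[t,T]} g\big(X^{t,x,u}_s\big)\Big)\Big].
\]
```

The point of the sketch is only to illustrate how an exact penalisation turns a hard state constraint into a term inside the cost, at the price of the extra state $Z$ and its unbounded control $\alpha$.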
- Mathematical Finance Internal Seminar