Seminar series
Date
Mon, 30 May 2022
Time
15:00 - 16:00
Location
Virtual
Speaker
Guido Montúfar
Organisation
UCLA

We consider the problem of finding the best memoryless stochastic policy for an infinite-horizon partially observable Markov decision process (POMDP) with finite state and action spaces, with respect to either the discounted or the mean reward criterion. We show that the (discounted) state-action frequencies and the expected cumulative reward are rational functions of the policy, whose degree is determined by the degree of partial observability. We then describe reward maximization as a linear optimization problem over the set of feasible state-action frequencies, subject to polynomial constraints that we characterize explicitly. This allows us to address the combinatorial and geometric complexity of the optimization problem using tools from polynomial optimization. In particular, we estimate the number of critical points and use the polynomial programming description of reward maximization to solve a navigation problem in a grid world. The talk is based on recent work with Johannes Müller.
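
To make the objects in the abstract concrete, here is a minimal numerical sketch in Python/NumPy (not code from the talk; the sizes, kernels, and reward below are made-up toy data). It computes the discounted state-action frequencies η(s, a) induced by a memoryless stochastic policy π(a | o) in a small POMDP and evaluates the normalized expected discounted reward as the linear functional ⟨r, η⟩ of those frequencies.

```python
import numpy as np

# Illustrative sketch only: toy POMDP with made-up kernels and reward.
rng = np.random.default_rng(0)
nS, nO, nA = 4, 2, 2      # states, observations, actions
gamma = 0.9               # discount factor

P = rng.random((nA, nS, nS))                 # P[a, s, s'] = P(s' | s, a)
P /= P.sum(axis=2, keepdims=True)
beta = rng.random((nS, nO))                  # beta[s, o] = P(o | s)
beta /= beta.sum(axis=1, keepdims=True)
mu0 = np.full(nS, 1.0 / nS)                  # initial state distribution
r = rng.random((nS, nA))                     # instantaneous reward r(s, a)

def discounted_frequencies(pi):
    """pi[o, a] = pi(a | o): memoryless stochastic policy.
    Returns eta[s, a] = (1 - gamma) * sum_t gamma^t P(s_t = s, a_t = a)."""
    tau = beta @ pi                          # induced state policy tau[s, a]
    T = np.einsum('sa,ast->st', tau, P)      # T[s, s'] = P(s' | s) under pi
    # Discounted state occupancy: d = (1 - gamma) (I - gamma T^T)^{-1} mu0.
    d = (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * T.T, mu0)
    return d[:, None] * tau                  # eta(s, a) = d(s) tau(a | s)

pi = rng.random((nO, nA))
pi /= pi.sum(axis=1, keepdims=True)          # a random memoryless policy
eta = discounted_frequencies(pi)
print("normalized discounted reward:", (eta * r).sum())  # linear in eta
```

The sketch mirrors the two claims in the abstract: η depends on the policy only through a linear solve against I − γT(π)ᵀ, and T(π) is itself linear in π, so by Cramer's rule each entry of η, and hence the reward, is a rational function of the policy entries; at the same time the objective ⟨r, η⟩ is linear in η, which is the linear-optimization-over-frequencies view, with the partial observability entering through the constraints on which frequencies η are feasible.
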
