Date: Thu, 13 Nov 2025
Time: 16:00 - 17:00
Location: L5
Speaker: Prof. Xunyu Zhou
Organisation: Columbia University (New York)
We study optimal stopping for diffusion processes with unknown model primitives within the continuous-time reinforcement learning (RL) framework developed by Wang et al. (2020), and present applications to option pricing and portfolio choice. By penalizing the corresponding variational inequality formulation, we transform the stopping problem into a stochastic optimal control problem with two actions. We then randomize controls into Bernoulli distributions and add an entropy regularizer to encourage exploration. We derive a semi-analytical optimal Bernoulli distribution, based on which we devise RL algorithms using the martingale approach established in Jia and Zhou (2022a). We establish a policy improvement theorem and prove the fast convergence of the resulting policy iterations. We demonstrate the effectiveness of the algorithms in pricing finite-horizon American put options, solving Merton’s problem with transaction costs, and scaling to high-dimensional optimal stopping problems. In particular, we show that both the offline and online algorithms achieve high accuracy in learning the value functions and characterizing the associated free boundaries.
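The two-action reduction described above admits a closed form that is worth recording. The following is a generic single-step illustration, not the paper's diffusion-setting formula: q_0 and q_1 are placeholder values of the two actions (continue, stop) and \lambda > 0 is the entropy temperature. Choosing the Bernoulli stopping probability p to maximize the entropy-regularized objective

\[
\max_{p \in [0,1]} \; p\,q_1 + (1-p)\,q_0 + \lambda\bigl(-p\ln p - (1-p)\ln(1-p)\bigr)
\]

yields the first-order condition q_1 - q_0 + \lambda \ln\frac{1-p}{p} = 0, hence

\[
p^* = \frac{1}{1 + e^{-(q_1 - q_0)/\lambda}}.
\]

As \lambda \to 0 this recovers the hard stop/continue decision, while larger \lambda flattens the policy toward uniform exploration; the semi-analytical optimal Bernoulli distribution of the paper is the analogue of this Gibbs form in the penalized continuous-time setting.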
 
Joint work with Min Dai, Yu Sun, and Zuo Quan Xu; forthcoming in Management Science.
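For a concrete feel of the randomized-stopping idea in the American put application, the sketch below prices a put by Monte Carlo under a sigmoid Bernoulli stopping policy. It is a minimal illustration under stated assumptions, not the authors' algorithm: the advantage proxy (exercise value minus a Black-Scholes European put as a stand-in continuation value), the temperature lam, and the function names are hypothetical placeholders, whereas in the paper the value function and free boundary are learned via the martingale approach of Jia and Zhou (2022a).

```python
import numpy as np
from scipy.stats import norm  # assumes SciPy is available

def bs_put(S, K, r, sigma, tau):
    """Black-Scholes European put, used only as a crude continuation proxy."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - S * norm.cdf(-d1)

def randomized_stopping_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                            n_steps=50, n_paths=100_000, lam=0.25, seed=0):
    """Monte Carlo value of an American put under a Bernoulli stopping
    policy p_stop = sigmoid(advantage / lam). Illustrative only: the
    advantage below is a hand-made placeholder, not a learned value."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    alive = np.ones(n_paths, dtype=bool)   # paths that have not yet stopped
    payoff = np.zeros(n_paths)
    for k in range(n_steps):
        t, tau = k * dt, T - k * dt
        exercise = np.maximum(K - S, 0.0)
        adv = exercise - bs_put(S, K, r, sigma, tau)  # stop-vs-continue proxy
        p_stop = 1.0 / (1.0 + np.exp(-adv / lam))     # Bernoulli stop prob.
        stop = alive & (rng.random(n_paths) < p_stop)
        payoff[stop] = np.exp(-r * t) * exercise[stop]
        alive &= ~stop
        z = rng.standard_normal(n_paths)              # exact GBM step
        S = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    # paths that never stopped exercise (if in the money) at maturity
    payoff[alive] = np.exp(-r * T) * np.maximum(K - S[alive], 0.0)
    return payoff.mean()

print(f"randomized-stopping estimate: {randomized_stopping_put():.3f}")
```

With a learned advantage in place of the hand-made proxy, driving lam toward zero concentrates the Bernoulli policy on the exercise region and recovers a hard free-boundary stopping rule.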