
 

Past events in this series


Thu, 13 Nov 2025

16:00 - 17:00
L5

Learning to Optimally Stop Diffusion Processes, with Financial Applications

Prof. Xunyu Zhou
(Columbia University, New York)
Abstract
We study optimal stopping for diffusion processes with unknown model primitives within the continuous-time reinforcement learning (RL) framework developed by Wang et al. (2020), and present applications to option pricing and portfolio choice. By penalizing the corresponding variational inequality formulation, we transform the stopping problem into a stochastic optimal control problem with two actions. We then randomize controls into Bernoulli distributions and add an entropy regularizer to encourage exploration. We derive a semi-analytical optimal Bernoulli distribution, based on which we devise RL algorithms using the martingale approach established in Jia and Zhou (2022a). We establish a policy improvement theorem and prove the fast convergence of the resulting policy iterations. We demonstrate the effectiveness of the algorithms in pricing finite-horizon American put options, solving Merton’s problem with transaction costs, and scaling to high-dimensional optimal stopping problems. In particular, we show that both the offline and online algorithms achieve high accuracy in learning the value functions and characterizing the associated free boundaries.
 
Joint work with Min Dai, Yu Sun, and Zuo Quan Xu; forthcoming in Management Science.
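As a small illustration of one step in the abstract (names and toy values are my own, not from the talk): when the stopping decision is randomized into a Bernoulli distribution and an entropy regularizer is added, the two-action problem has a closed-form maximizer of Gibbs (sigmoid) form in the gap between the stop and continue values.

```python
import math

def bernoulli_stop_prob(stop_value, continue_value, temperature):
    """Entropy-regularized optimal Bernoulli stopping probability (toy sketch).

    Maximizes  p*stop_value + (1-p)*continue_value + temperature*H(p)
    over p in (0, 1), where H(p) = -p*ln(p) - (1-p)*ln(1-p).
    Setting the derivative to zero gives a sigmoid in the scaled value gap.
    """
    gap = (stop_value - continue_value) / temperature
    return 1.0 / (1.0 + math.exp(-gap))

# When stopping and continuing are equally good, the policy is indifferent:
print(bernoulli_stop_prob(1.0, 1.0, 0.1))   # 0.5

# As the temperature shrinks, the policy concentrates on the better action,
# recovering a near-deterministic stopping rule:
print(bernoulli_stop_prob(2.0, 1.0, 0.01))  # close to 1
```

The temperature here plays the role of the entropy-regularization weight: larger values encourage exploration by keeping the stopping probability away from 0 and 1.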


 

Mon, 02 Feb 2026
15:30
L3

Mean field games without rational expectations

Benjamin Moll
(LSE)
Abstract
Mean Field Game (MFG) models implicitly assume “rational expectations”, meaning that the heterogeneous agents being modeled correctly know all relevant transition probabilities for the complex system they inhabit. When there is common noise, it becomes necessary to solve the “Master equation” (a.k.a. “Monster equation”), a Hamilton-Jacobi-Bellman equation in which the infinite-dimensional density of agents is a state variable. The rational expectations assumption and the implication that agents solve Master equations is unrealistic in many applications. We show how to instead formulate MFGs with non-rational expectations. Departing from rational expectations is particularly relevant in “MFGs with a low-dimensional coupling”, i.e. MFGs in which agents’ running reward function depends on the density only through low-dimensional functionals of this density. This happens, for example, in most macroeconomic MFGs in which these low-dimensional functionals have the interpretation of “equilibrium prices.” In MFGs with a low-dimensional coupling, departing from rational expectations allows for completely sidestepping the Master equation and for instead solving much simpler finite-dimensional HJB equations. We introduce an adaptive learning model as a particular example of non-rational expectations and discuss its properties.
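A rough sketch of the reduction described in the abstract (the notation below is assumed for illustration and is not taken from the talk): if the running reward couples to the population density only through a low-dimensional functional, an agent who holds a perceived path of that functional, rather than rational expectations over the full density, faces an ordinary finite-dimensional HJB equation.

```latex
% Sketch (assumed notation): the reward depends on the density m_t only
% through a low-dimensional functional p_t = P(m_t), e.g. an equilibrium
% price. Under a perceived (non-rational) path \hat{p}_t, each agent solves
% a finite-dimensional HJB instead of the infinite-dimensional Master
% equation:
\[
  \partial_t V(t,x)
  + \max_{a}\Big\{ r\big(x, a, \hat{p}_t\big)
  + \mathcal{L}^{a} V(t,x) \Big\} = 0,
\]
% where \mathcal{L}^{a} is the generator of the agent's controlled state
% process. One adaptive-learning example of non-rational expectations:
% beliefs relax toward realized values,
\[
  \dot{\hat{p}}_t = \eta\,\big(p_t - \hat{p}_t\big), \qquad \eta > 0.
\]
```

The key point is that the state of this problem is the agent's own position plus a perceived low-dimensional quantity, so the infinite-dimensional density never appears as a state variable.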