Date
Thu, 19 Nov 2020
Time
16:00 - 17:00
Speaker
Sumitra Ganesh
Organisation
J.P. Morgan

Agent-based models are an intuitive, interpretable way to model markets and give us a powerful mechanism to analyze counterfactual scenarios that may rarely occur in historical market data. However, building realistic agent-based models is challenging and requires that we (a) ensure that agent behaviors are realistic, and (b) calibrate the agent composition using real data. In this talk, we will present our work on building realistic agent-based models using a multi-agent reinforcement learning approach. First, we show that a shared policy conditioned on agent parameters can learn a range of realistic behaviors for heterogeneous agents, and we analyze the game-theoretic implications of this approach. Second, we propose a new calibration algorithm (CALSHEQ) that estimates the agent composition for which the calibration targets are approximately matched, while simultaneously learning the shared policy for the agents. Together, these contributions make building realistic agent-based models more efficient and scalable.
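To make the first idea concrete, the sketch below shows one way a single policy network could be conditioned on per-agent parameters so that heterogeneous agents share weights while still behaving differently. This is a minimal illustration only: the PyTorch architecture, the two-dimensional agent-parameter vector, and the discrete action space are assumptions for the example, not the model presented in the talk.

    # Illustrative sketch: a shared policy conditioned on agent parameters
    # (e.g. risk aversion, order-size preference). All details here are
    # assumptions made for the example, not the talk's implementation.
    import torch
    import torch.nn as nn

    class SharedConditionedPolicy(nn.Module):
        def __init__(self, obs_dim, agent_param_dim, n_actions, hidden=64):
            super().__init__()
            # Observation and agent parameters are concatenated, so every
            # agent type uses the same weights but can act differently.
            self.net = nn.Sequential(
                nn.Linear(obs_dim + agent_param_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_actions),
            )

        def forward(self, obs, agent_params):
            logits = self.net(torch.cat([obs, agent_params], dim=-1))
            return torch.distributions.Categorical(logits=logits)

    # Usage: two agents with different (hypothetical) parameter vectors
    # query the same policy and receive different action distributions.
    policy = SharedConditionedPolicy(obs_dim=10, agent_param_dim=2, n_actions=5)
    obs = torch.randn(2, 10)                          # batch of two observations
    params = torch.tensor([[0.1, 0.9], [0.8, 0.2]])   # per-agent parameters
    actions = policy(obs, params).sample()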

