Date
Mon, 09 May 2022
Time
15:30 - 16:30
Location
L3
Speaker
LUKASZ SZPRUCH
Organisation
University of Edinburgh

We develop a probabilistic framework for analysing model-based reinforcement learning in the episodic setting. We then apply it to study finite-time-horizon stochastic control problems with linear dynamics but unknown coefficients and a convex, but possibly irregular, objective function. Using probabilistic representations, we study the regularity of the associated cost functions and establish precise estimates for the performance gap between applying the optimal feedback control derived from estimated model parameters and that derived from the true ones. We identify conditions under which this performance gap is quadratic, improving on the linear performance gap in recent work [X. Guo, A. Hu, and Y. Zhang, arXiv preprint, arXiv:2104.09311, (2021)], and matching the results obtained for stochastic linear-quadratic problems. Next, we propose a phase-based learning algorithm, for which we show how to optimise the exploration-exploitation trade-off and achieve sublinear regret in high probability and in expectation. When the assumptions needed for the quadratic performance gap hold, the algorithm achieves an O(√N ln N) high-probability regret in the general case, and an O((ln N)²) expected regret in the self-exploration case, over N episodes, matching the best results from the literature. The analysis requires novel concentration inequalities for correlated continuous-time observations, which we derive.
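
The abstract does not give implementation details, so the following is only a minimal illustrative sketch of the kind of phase-based, model-based episodic scheme described above, assuming a scalar linear-quadratic instance: linear dynamics with unknown coefficients, least-squares parameter estimation from episodic trajectories, and an explore-then-exploit schedule with feedback gains computed from a Riccati solve under the estimated parameters. All names, numerical choices, and the exploration schedule are hypothetical and not taken from the talk.

```python
# Illustrative sketch only: the algorithm in the talk is not specified here, so the
# explore-then-exploit schedule, cost, and all constants below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Unknown scalar dynamics dX_t = (a X_t + b u_t) dt + sigma dW_t on [0, T]
a_true, b_true, sigma = -0.5, 1.0, 0.3
T, n_steps = 1.0, 100
dt = T / n_steps
q, r = 1.0, 0.1          # running cost q*x^2 + r*u^2 (a simple convex objective)

def riccati_gain(a, b):
    """Backward Euler solve of the scalar Riccati ODE; returns a feedback-gain path."""
    p = 0.0                               # zero terminal cost (assumed)
    gains = np.zeros(n_steps)
    for k in reversed(range(n_steps)):
        gains[k] = -(b * p) / r
        p += dt * (2 * a * p - (b * p) ** 2 / r + q)
    return gains

def run_episode(gains, explore_std):
    """Simulate one episode with feedback control plus exploration noise."""
    x, cost, data = 0.0, 0.0, []
    for k in range(n_steps):
        u = gains[k] * x + explore_std * rng.standard_normal()
        dx = (a_true * x + b_true * u) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        data.append((x, u, dx))
        cost += (q * x**2 + r * u**2) * dt
        x += dx
    return data, cost

def least_squares_estimate(data):
    """Estimate (a, b) by regressing increments dX on (X dt, u dt)."""
    X = np.array([[x * dt, u * dt] for x, u, _ in data])
    y = np.array([dx for _, _, dx in data])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta[0], theta[1]

# Phase-based loop: an exploration phase with injected noise, then exploitation with
# feedback gains computed from the current parameter estimates.
data, costs = [], []
a_hat, b_hat = 0.0, 0.5   # crude initial guesses (assumed)
N, N_explore = 50, 10
for episode in range(N):
    explore_std = 1.0 if episode < N_explore else 0.0
    gains = riccati_gain(a_hat, b_hat)
    ep_data, ep_cost = run_episode(gains, explore_std)
    data += ep_data
    costs.append(ep_cost)
    a_hat, b_hat = least_squares_estimate(data)

print(f"estimated (a, b) = ({a_hat:.3f}, {b_hat:.3f}); true = ({a_true}, {b_true})")
print(f"mean cost, first {N_explore} episodes: {np.mean(costs[:N_explore]):.3f}; "
      f"last 10 episodes: {np.mean(costs[-10:]):.3f}")
```

In this toy setting the episode cost drops once the exploitation phase begins and the parameter estimates have converged; the regret bounds quoted in the abstract quantify this trade-off precisely for the general linear-convex model.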


-----------------------------------------------------------------------
Dr Lukasz Szpruch
