Seminar series
Date
Mon, 10 Nov 2025
Time
14:00 - 15:00
Location
Lecture Room 3
Speaker
Prof Xin Guo
Organisation
University of California, Berkeley, USA
Transfer learning is a machine learning technique that leverages knowledge acquired in one domain to improve learning on a related task. It is a foundational method underlying the success of large language models (LLMs) such as GPT and BERT, which are first pre-trained on broad data and then adapted to specific tasks. In this talk, I will demonstrate how reinforcement learning (RL), particularly continuous-time RL, can benefit from incorporating transfer learning techniques, especially with respect to convergence analysis. I will also show how this analysis naturally yields a simple corollary concerning the stability of score-based generative diffusion models.
Based on joint work with Zijiu Lyu of UC Berkeley.
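The snippet below is a minimal sketch of the generic transfer-learning idea described in the abstract (reusing a pre-trained network and fine-tuning only a new head on a related task). It is not taken from the speaker's work; the choice of model, optimiser, and the 10-class target task are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on a source task (ImageNet classification).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor: its weights carry the
# knowledge acquired in the source domain.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer so the model can be adapted to a new,
# related target task (here, a hypothetical 10-class problem).
num_target_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```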