Thu, 27 Nov 2025
12:00–13:00
L3
Transfer learning is a machine learning technique that leverages knowledge acquired on one task or domain to improve learning on a related one. It is a foundational method underlying the success of large language models (LLMs) such as GPT and BERT, which are first pretrained on broad corpora and then adapted to specific downstream tasks. In this talk, I will demonstrate how reinforcement learning (RL), in particular continuous-time RL, can benefit from incorporating transfer learning techniques, especially in its convergence analysis. I will also show how this analysis yields, as a simple corollary, a stability result for score-based generative diffusion models.
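To make the definition of transfer learning concrete, here is a minimal, purely illustrative sketch (not from the talk): a linear feature map is fit on a data-rich source task, then frozen and reused on a related target task where only a small head is trained. All names and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source task: plenty of data; learn a 10-d -> 3-d linear feature map W.
X_src = rng.normal(size=(200, 10))
W_true = rng.normal(size=(10, 3))
Y_src = X_src @ W_true + 0.01 * rng.normal(size=(200, 3))
W, *_ = np.linalg.lstsq(X_src, Y_src, rcond=None)  # "pretrained" features

# Target task: related (shares W_true) but with far less data.
X_tgt = rng.normal(size=(20, 10))
y_tgt = X_tgt @ W_true @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=20)

# Transfer: freeze the pretrained features, train only a 3-parameter head.
feats = X_tgt @ W
head, *_ = np.linalg.lstsq(feats, y_tgt, rcond=None)

# Relative residual on the target task with only 3 trained parameters.
resid = np.linalg.norm(feats @ head - y_tgt) / np.linalg.norm(y_tgt)
print(resid)
```

The point of the sketch is the division of labor: the expensive representation is learned once on the source task, and the target task only fits a small number of new parameters on top of it, which is the same pattern LLM pretraining and fine-tuning follow at scale.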
Based on joint work with Zijiu Lyu of UC Berkeley.