Date
Mon, 11 Nov 2024
Time
14:00 - 15:00
Location
Lecture Room 3
Speaker
Yunhao Tang
Organisation
Google DeepMind

Self-predictive learning (also known as non-contrastive learning) has become an increasingly important paradigm for representation learning. Self-predictive learning is simple yet effective: it learns without contrastive examples, yet extracts useful representations through a self-predictive objective. A seeming paradox of self-predictive learning is that the optimization objective itself admits trivial (collapsed) representations as globally optimal solutions, yet practical implementations reliably produce meaningful ones.
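To make the paradox concrete, below is a minimal sketch of a BYOL-style self-predictive update in a linear toy model; the dimensions, learning rate, and EMA coefficient are illustrative choices, not the talk's exact setup. It shows the two standard ingredients practical implementations rely on: a stop-gradient on the target branch and a slowly moving (EMA) target network.

```python
# Minimal sketch of a self-predictive (non-contrastive) update with linear
# online/target encoders and an EMA target -- illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3                                 # input dim, representation dim
W_online = 0.1 * rng.normal(size=(k, d))    # online encoder
W_target = W_online.copy()                  # target encoder (EMA copy)
P = 0.1 * rng.normal(size=(k, k))           # predictor head
lr, tau = 0.05, 0.99                        # learning rate, EMA coefficient

for step in range(1000):
    x = rng.normal(size=d)                  # one observation (and its "view")
    z_target = W_target @ x                 # target branch: no gradient here
    z_online = W_online @ x                 # online representation
    err = P @ z_online - z_target           # self-predictive error
    # Gradient of 0.5 * ||err||^2 w.r.t. P and W_online only: the target
    # branch is held fixed (the stop-gradient "trick").
    P -= lr * np.outer(err, z_online)
    W_online -= lr * np.outer(P.T @ err, x)
    # Slow EMA update of the target network.
    W_target = tau * W_target + (1 - tau) * W_online
```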


We reconcile this theory-practice gap by studying the learning dynamics of self-predictive learning. Our analysis of a non-linear ODE system sheds light on why, despite a seemingly problematic optimization objective, self-predictive learning does not collapse, and echoes important implementation "tricks" used in practice. Our results also show that in a linear setting, self-predictive learning can be understood as gradient-based PCA or SVD on the data matrix, suggesting that meaningful representations are indeed captured through the learning process.
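As a hedged illustration of the linear-case claim, the sketch below simulates a two-timescale view (predictor solved in closed form, representation updated by semi-gradient) under simplifying assumptions of my own: whitened inputs, a symmetric positive semi-definite data matrix T, and a QR step to keep the representation orthonormal against discretization drift. Under these assumptions the learned row space aligns with the top-k eigenvectors of T, i.e., gradient-based PCA.

```python
# Numerical sketch: linear self-predictive dynamics as gradient-based PCA.
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 2                           # observation dim, representation dim
A = rng.normal(size=(d, d))
T = A @ A.T                            # symmetric PSD "data matrix"
T /= np.linalg.norm(T, 2)              # normalize spectral norm to 1

Phi = np.linalg.qr(rng.normal(size=(d, k)))[0].T   # k x d, orthonormal rows
lr = 0.1

for step in range(3000):
    P = Phi @ T @ Phi.T                # optimal predictor (fast timescale)
    # Semi-gradient step on 0.5 * ||P @ Phi - Phi @ T||_F^2, with the
    # target branch (Phi @ T) treated as a constant.
    Phi = Phi + lr * P.T @ (Phi @ T - P @ Phi)
    Phi = np.linalg.qr(Phi.T)[0].T     # retract back to orthonormal rows

# Compare the learned row space with the top-k principal subspace of T.
top = np.linalg.eigh(T)[1][:, -k:]     # top-k eigenvectors of T
overlap = np.linalg.svd(top.T @ Phi.T, compute_uv=False)
print(overlap)                         # values near 1 => the subspaces align
```

Despite the seemingly problematic objective, the representation here does not collapse: with the predictor at its optimum, the orthonormality of Phi is preserved by the continuous-time dynamics, and the alignment objective tr(Phi T Phi^T) increases monotonically.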


This talk is based on our ICML 2023 paper "Understanding Self-Predictive Learning for Reinforcement Learning".
