Thu, 01 Feb 2024
Dr Renyuan Xu
University of Southern California

Diffusion models, which transform noise into new data instances by reversing a Markov diffusion process, have become a cornerstone of modern generative modeling. A key component of these models is learning the score function through score matching. While the practical power of diffusion models is now widely recognized, the theoretical developments remain far from mature. Notably, it remains unclear whether gradient-based algorithms can learn the score function with provable accuracy. In this talk, we develop a suite of non-asymptotic theory for understanding the data generation process of diffusion models and the accuracy of score estimation. Our analysis covers both the optimization and the generalization aspects of the learning procedure, and builds a novel connection to supervised learning and neural tangent kernels.
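To fix ideas, score matching regresses a model onto the (gradient of the log) density of noised data. The following is a minimal illustrative sketch, not the speaker's method: it uses denoising score matching on a 1D Gaussian, where the optimal score is known in closed form, and a hypothetical one-parameter linear score model trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1D Gaussian N(0, s^2); its true score is d/dx log p(x) = -x / s^2.
s = 2.0
x = rng.normal(0.0, s, size=10_000)

# Denoising score matching: perturb the data at noise level sigma and regress
# the score model onto the conditional target -(x_tilde - x) / sigma^2.
sigma = 0.5
eps = rng.normal(size=x.shape)
x_tilde = x + sigma * eps
target = -(x_tilde - x) / sigma**2

# Hypothetical score model: a single linear coefficient, s_theta(x) = a * x.
# Minimize the empirical loss E[(a * x_tilde - target)^2] by gradient descent.
a = 0.0
lr = 0.01
for _ in range(500):
    grad = 2.0 * np.mean((a * x_tilde - target) * x_tilde)
    a -= lr * grad

# For Gaussian data the optimum is known in closed form: a* = -1 / (s^2 + sigma^2),
# i.e. the score of the sigma-smoothed marginal N(0, s^2 + sigma^2).
print(a, -1.0 / (s**2 + sigma**2))
```

Here gradient descent recovers the score of the noise-smoothed distribution to within sampling error, which is the kind of optimization-plus-generalization guarantee the talk makes rigorous for neural network score models.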

This is based on joint work with Yinbin Han and Meisam Razaviyayn (USC).

Further Information

Join us for refreshments from 3.30pm outside L3.

Last updated on 24 Jan 2024 07:49.