Authors
Le, T
Kosiorek, A
Siddharth, N
Teh, Y
Wood, F
Journal title
Proceedings of the Conference on Uncertainty in Artificial Intelligence
Last updated
2021-08-30T04:10:01.067+01:00
Abstract
Stochastic control-flow models (SCFMs) are a class of generative models that involve branching on choices from discrete random variables. Amortized gradient-based learning of SCFMs is challenging as most approaches targeting discrete variables rely on their continuous relaxations—which can be intractable in SCFMs, as branching on relaxations requires evaluating all (exponentially many) branching paths. Tractable alternatives mainly combine REINFORCE with complex control-variate schemes to improve the variance of naïve estimators. Here, we revisit the reweighted wake-sleep (RWS) [5] algorithm, and through extensive evaluations, show that it outperforms current state-of-the-art methods in learning SCFMs. Further, in contrast to the importance weighted autoencoder, we observe that RWS learns better models and inference networks with increasing numbers of particles. Our results suggest that RWS is a competitive, often preferable, alternative for learning SCFMs.
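For context, a minimal sketch (in notation of our own choosing, not taken from the paper) of the two estimators the abstract refers to: the REINFORCE (score-function) gradient estimator that the control-variate schemes build on, and the self-normalized importance weights used in RWS's wake-phase update of the inference network q_phi. Here K plays the role of the number of particles mentioned above.

\[
\nabla_{\phi}\,\mathbb{E}_{q_{\phi}(z \mid x)}\big[f(z)\big]
  = \mathbb{E}_{q_{\phi}(z \mid x)}\!\big[f(z)\,\nabla_{\phi}\log q_{\phi}(z \mid x)\big]
  \qquad \text{(REINFORCE)}
\]

\[
z_k \sim q_{\phi}(z \mid x),\qquad
w_k = \frac{p_{\theta}(z_k, x)}{q_{\phi}(z_k \mid x)},\qquad
\bar{w}_k = \frac{w_k}{\sum_{j=1}^{K} w_j},\qquad
\Delta\phi \;\propto\; \sum_{k=1}^{K} \bar{w}_k\,\nabla_{\phi}\log q_{\phi}(z_k \mid x)
\quad \text{(RWS wake-phase update)}
\]

In the wake-phase update the normalized weights are treated as constants, so no relaxation of the discrete variables is needed—only forward evaluations along the sampled branching paths.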
Symplectic ID
1019833
Publication type
Conference Paper
Publication date
25 July 2019