Author
Welling, M
Teh, Y
Journal title
Proceedings of the 28th International Conference on Machine Learning, ICML 2011
Last updated
19 October 2021
Pages
681-688
Abstract
In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesian posterior sampling provides an inbuilt protection against overfitting. We also propose a practical method for Monte Carlo estimates of posterior statistics which monitors a "sampling threshold" and collects samples after it has been surpassed. We apply the method to three models: a mixture of Gaussians, logistic regression and ICA with natural gradients.
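The update the abstract describes, a stochastic gradient step on a mini-batch with injected Gaussian noise whose variance matches the step size, can be sketched as below. This is a minimal NumPy illustration under stated assumptions, not the authors' reference implementation; the function names, arguments, and the polynomial step-size decay shown in the comments are illustrative choices rather than details taken from this record.

```python
import numpy as np

def sgld_step(theta, minibatch, grad_log_prior, grad_log_lik, step_size, data_size):
    """One stochastic gradient Langevin dynamics update (illustrative sketch).

    theta          : current parameter vector (NumPy array)
    minibatch      : n data points drawn from the full dataset of size data_size
    grad_log_prior : function theta -> gradient of log p(theta)
    grad_log_lik   : function (x, theta) -> gradient of log p(x | theta)
    step_size      : epsilon_t, annealed toward zero over iterations,
                     e.g. a * (b + t) ** (-gamma)  (assumed schedule)
    """
    n = len(minibatch)
    # Unbiased estimate of the gradient of the log posterior: the mini-batch
    # likelihood gradient is rescaled by N / n before adding the prior term.
    grad = grad_log_prior(theta) + (data_size / n) * sum(
        grad_log_lik(x, theta) for x in minibatch
    )
    # Langevin step: half the step size times the gradient estimate, plus
    # Gaussian noise with variance equal to the step size.
    noise = np.random.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad + noise
```

With a large step size the noise is negligible and the recursion behaves like plain stochastic gradient optimization; as the step size is annealed the injected noise dominates the gradient noise and the iterates behave like posterior samples, which is the transition the abstract refers to.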
Symplectic ID
353219
Publication type
Journal Article
ISBN-13
9781450306195
Publication date
7 October 2011