Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget

Open Access
Authors Anoop Korattikara, Yutian Chen, Max Welling
Publication date 2014
Journal JMLR Workshop and Conference Proceedings
Event International Conference on Machine Learning (ICML 2014)
Volume 32
Pages (from-to) 181-189
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Can we make Bayesian posterior MCMC sampling more efficient when faced with very large datasets? We argue that computing the likelihood for N datapoints in the Metropolis-Hastings (MH) test to reach a single binary decision is computationally inefficient. We introduce an approximate MH rule based on a sequential hypothesis test that allows us to accept or reject samples with high confidence using only a fraction of the data required for the exact MH rule. While this method introduces an asymptotic bias, we show that this bias can be controlled and is more than offset by a decrease in variance due to our ability to draw more samples per unit of time.
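To make the sequential test described in the abstract concrete, the following is a minimal Python sketch, not the authors' reference implementation: for a proposed move θ → θ', the exact MH test accepts iff the mean log-likelihood difference μ exceeds a threshold μ0 computed from a uniform draw, the prior, and the proposal; the sketch grows a mini-batch and stops as soon as a Student-t test is confident at level 1 − ε about the sign of μ − μ0. The callables log_lik, log_prior, and log_q, the batch-growth schedule, and the default parameters are illustrative assumptions.

```python
import numpy as np
from scipy import stats


def approx_mh_accept(x, log_lik, log_prior, log_q,
                     theta, theta_prime,
                     batch_size=50, eps=0.05, rng=None):
    """Sequential approximate MH test (sketch).

    Accept theta_prime iff the mean log-likelihood difference
    mu = (1/N) sum_i [log p(x_i|theta') - log p(x_i|theta)]
    exceeds the threshold mu0; decide early with a Student-t
    test once the mini-batch estimate is confident at 1 - eps.
    Convention (assumed here): log_q(a, b) = log q(a | b).
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(x)
    u = rng.uniform()
    # Exact MH accepts iff mu > mu0 (log of the MH test, divided by N).
    mu0 = (np.log(u)
           + log_prior(theta) - log_prior(theta_prime)
           + log_q(theta_prime, theta) - log_q(theta, theta_prime)) / N

    order = rng.permutation(N)          # draw datapoints without replacement
    diffs = []
    while True:
        n = len(diffs)
        for i in order[n:min(n + batch_size, N)]:
            diffs.append(log_lik(x[i], theta_prime) - log_lik(x[i], theta))
        n = len(diffs)
        d = np.asarray(diffs)
        mu_hat = d.mean()
        if n == N:                      # used all data: decision is exact
            return mu_hat > mu0
        # Standard error of mu_hat with finite-population correction.
        se = d.std(ddof=1) / np.sqrt(n) * np.sqrt(1.0 - (n - 1) / (N - 1))
        if se > 0:
            t_stat = (mu_hat - mu0) / se
            # Estimated probability of getting the sign of mu - mu0 wrong.
            delta = 1.0 - stats.t.cdf(abs(t_stat), df=n - 1)
            if delta < eps:
                return mu_hat > mu0
```

The parameter eps controls the trade-off the abstract describes: a smaller eps means fewer wrong accept/reject decisions (less bias) but more datapoints touched per test, while a larger eps lets the sampler draw more samples per unit of time at the cost of a controlled asymptotic bias.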
Document type Article
Note International Conference on Machine Learning, 22-24 June 2014, Beijing, China. Editors: Eric P. Xing, Tony Jebara.
Language English
Published at http://jmlr.org/proceedings/papers/v32/korattikara14.html
Downloads
  • korattikara14 (Final published version)
  • Supplementary materials