Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

Authors
  • J.M. Hyman
  • D. Higdon
Publication date 2009
Journal International Journal of Nonlinear Sciences and Numerical Simulation
Volume 10 | Issue 3
Pages (from-to) 273-290
Number of pages 18
Organisations
  • Faculty of Economics and Business (FEB) - Amsterdam School of Economics Research Institute (ASE-RI)
  • Faculty of Science (FNWI) - Institute for Biodiversity and Ecosystem Dynamics (IBED)
Abstract
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields
of study to estimate the average properties of complex systems, and for posterior
inference in a Bayesian framework. Existing theory and experiments prove convergence
of well-constructed MCMC schemes to the appropriate limiting distribution under a
variety of different conditions. In practice, however, this convergence is often observed to
be disturbingly slow. This is frequently caused by an inappropriate selection of the
proposal distribution used to generate trial moves in the Markov chain. Here we show
that significant improvements to the efficiency of MCMC simulation can be made by
using a self-adaptive Differential Evolution learning strategy within a population-based
evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive
Metropolis or DREAM, runs multiple different chains simultaneously for global
exploration, and automatically tunes the scale and orientation of the proposal distribution
in randomized subspaces during the search. Ergodicity of the algorithm is proved, and
various examples involving nonlinearity, high-dimensionality, and multimodality show
that DREAM is generally superior to other adaptive MCMC sampling approaches. The
DREAM scheme significantly enhances the applicability of MCMC simulation to
complex, multi-modal search problems.
Document type Article
Published at https://doi.org/10.1515/IJNSNS.2009.10.3.273
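The abstract describes the core mechanism of DREAM: multiple chains run in parallel, and each chain proposes a move from the scaled difference of two other chains' states, updating only a randomized subspace of coordinates. The sketch below illustrates that idea in simplified form. It is not the published DREAM algorithm: the self-adaptive crossover tuning, outlier-chain correction, and convergence diagnostics of the paper are omitted, and the function name, fixed crossover probability `cr`, and the standard-normal test target are illustrative choices, not taken from the source.

```python
import numpy as np

def dream_like_sampler(log_post, d, n_chains=10, n_iter=2000, cr=0.9, seed=0):
    """Minimal DE-MC-style sketch with randomized subspace (crossover) sampling.

    A simplification of DREAM for illustration only: no self-adaptive
    crossover probabilities, no outlier-chain handling, no diagnostics.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_chains, d))            # current state of each chain
    logp = np.array([log_post(x) for x in X])
    kept = []
    for _ in range(n_iter):
        for i in range(n_chains):
            # Difference vector built from two other randomly chosen chains.
            r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                                size=2, replace=False)
            # Randomized subspace: each coordinate updated with probability cr.
            mask = rng.random(d) < cr
            if not mask.any():
                mask[rng.integers(d)] = True      # update at least one coordinate
            d_eff = mask.sum()
            gamma = 2.38 / np.sqrt(2 * d_eff)     # common near-optimal jump scale
            prop = X[i].copy()
            prop[mask] = (X[i] + gamma * (X[r1] - X[r2])
                          + 1e-6 * rng.normal(size=d))[mask]
            lp = log_post(prop)
            if np.log(rng.random()) < lp - logp[i]:   # Metropolis acceptance
                X[i], logp[i] = prop, lp
        kept.append(X.copy())
    # Discard the first half as burn-in and pool the remaining chain states.
    return np.concatenate(kept[n_iter // 2:])

# Usage: sample a 2-D standard normal target (an assumed toy example).
if __name__ == "__main__":
    draws = dream_like_sampler(lambda x: -0.5 * np.dot(x, x), d=2)
    print(draws.mean(axis=0), draws.std(axis=0))
```

Because the difference vector is computed from the states of other chains, the proposal automatically adapts its scale and orientation to the posterior as the population converges, which is the intuition behind the efficiency gains the abstract reports.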