  • Publication
    Exploiting Multi-Core Architectures for Reduced-Variance Estimation with Intractable Likelihoods
    (International Society for Bayesian Analysis (ISBA), 2015)
    Many popular statistical models for complex phenomena are intractable, in the sense that the likelihood function cannot easily be evaluated. Bayesian estimation in this setting remains challenging, with a lack of computational methodology to fully exploit modern processing capabilities. In this paper we introduce novel control variates for intractable likelihoods that can dramatically reduce the Monte Carlo variance of Bayesian estimators. We prove that our control variates are well-defined and provide a positive variance reduction. Furthermore, we show how to optimise these control variates for variance reduction. The methodology is highly parallel and offers a route to exploit multi-core processing architectures that complements recent research in this direction. Indeed, our work shows that it may not be necessary to parallelise the sampling process itself in order to harness the potential of massively multi-core architectures. Simulation results presented on the Ising model, exponential random graph models and non-linear stochastic differential equation models support our theoretical findings.
      Scopus© Citations: 13
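    The core idea of a control variate — subtracting a correlated quantity with known expectation to reduce Monte Carlo variance without changing the estimator's mean — can be illustrated in a toy conjugate setting. The target, the control variate, and the optimal-coefficient formula below are a generic textbook sketch, not the paper's construction for intractable likelihoods:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setting: theta ~ N(1, 2^2); estimate E[theta^2] = 1 + 4 = 5.
    theta = rng.normal(1.0, 2.0, size=200_000)
    f = theta ** 2

    # Control variate: h = theta, whose mean E[h] = 1 is known exactly.
    h = theta

    # Optimal coefficient a* = Cov(f, h) / Var(h) minimises Var(f - a*h).
    C = np.cov(f, h)
    a = C[0, 1] / C[1, 1]

    plain = f.mean()                       # plain Monte Carlo estimate
    cv = (f - a * (h - 1.0)).mean()        # control-variate estimate

    # Same expectation, strictly smaller sample variance here.
    assert np.var(f - a * (h - 1.0)) < np.var(f)
    ```

    Both estimators are unbiased for E[f]; the variance reduction factor is 1 - ρ², where ρ is the correlation between f and h, so the technique pays off exactly when a strongly correlated quantity with tractable mean is available.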
  • Publication
    Adaptive Incremental Mixture Markov chain Monte Carlo
    We propose Adaptive Incremental Mixture Markov chain Monte Carlo (AIMM), a novel approach to sample from challenging probability distributions defined on a general state-space. While adaptive MCMC methods usually update a parametric proposal kernel with a global rule, AIMM locally adapts a semiparametric kernel. AIMM is based on an independent Metropolis-Hastings proposal distribution which takes the form of a finite mixture of Gaussian distributions. Central to this approach is the idea that the proposal distribution adapts to the target by locally adding a mixture component when the discrepancy between the proposal mixture and the target is deemed to be too large. As a result, the number of components in the mixture proposal is not fixed in advance. Theoretically, we prove that there exists a process that can be made arbitrarily close to AIMM and that converges to the correct target distribution. We also illustrate that it performs well in practice in a variety of challenging situations, including high-dimensional and multimodal target distributions.
      Scopus© Citations: 6
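    The mechanism described in the abstract — an independent Metropolis-Hastings sampler whose Gaussian-mixture proposal gains a local component wherever it underweights the target — can be sketched in one dimension. Every concrete choice below (the bimodal toy target, the discrepancy threshold, the new component's weight and scale) is an assumption for illustration, not the paper's adaptation rule:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def norm_logpdf(x, m, s):
        return -0.5 * ((x - m) / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi)

    # Hypothetical bimodal target, known up to a normalising constant.
    def log_target(x):
        return np.logaddexp(norm_logpdf(x, -3.0, 1.0), norm_logpdf(x, 3.0, 1.0))

    # Mixture proposal state: start with a single broad component.
    means, sds, weights = [0.0], [5.0], [1.0]

    def log_proposal(x):
        comps = [np.log(w) + norm_logpdf(x, m, s)
                 for w, m, s in zip(weights, means, sds)]
        return np.logaddexp.reduce(comps) - np.log(sum(weights))

    def sample_proposal():
        p = np.array(weights) / sum(weights)
        k = rng.choice(len(means), p=p)
        return rng.normal(means[k], sds[k])

    THRESHOLD = 1.0  # assumed discrepancy threshold on the log scale
    x = 0.0
    chain = []
    for _ in range(5000):
        y = sample_proposal()
        # Independent MH: ratio of target-over-proposal weights.
        log_alpha = ((log_target(y) - log_proposal(y))
                     - (log_target(x) - log_proposal(x)))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        # Incremental adaptation: add a local component where the mixture
        # badly underweights the target (weight/scale here are arbitrary).
        if log_target(x) - log_proposal(x) > THRESHOLD:
            means.append(float(x))
            sds.append(1.0)
            weights.append(0.5)
        chain.append(float(x))

    chain = np.array(chain)
    ```

    In AIMM proper, the new component's weight and covariance follow specific rules, and convergence to the target relies on the conditions established in the paper; this sketch omits those safeguards and only conveys the add-a-component-on-large-discrepancy idea.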