Noisy Monte Carlo: Convergence of Markov chains with approximate transition kernels
Title: Noisy Monte Carlo: Convergence of Markov chains with approximate transition kernels
Authors: Alquier, Pierre; Everitt, R. G.
Permanent link: http://hdl.handle.net/10197/6340
Date: 2014
Abstract: Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, in many situations it is impractical or impossible to draw from the transition kernel P. This is the case, for instance, with massive datasets, where it is prohibitively expensive to calculate the likelihood, and with intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P with an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
Funding details: Science Foundation Ireland
Type of material: Journal Article
Publisher: Springer
Copyright (published version): 2014 Springer Science+Business Media New York
Keywords: Markov chain Monte Carlo; Intractable likelihoods; Machine learning; Statistics; Pseudo-marginal Monte Carlo
DOI: 10.1007/s11222-014-9521-x
Language: en
Status of item: Peer reviewed
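The abstract describes replacing an intractable transition kernel P with an approximation P̂, for example by using a noisy evaluation of the target density inside a Metropolis step. The following is a minimal illustrative sketch of that idea, not code from the paper: the target, the noise model, and all function names (`log_target`, `noisy_log_target`, `noisy_metropolis`) are assumptions chosen for a self-contained toy example in which the exact log-density of a standard normal is perturbed by Gaussian noise, standing in for an estimated or approximated likelihood.

```python
import math
import random


def log_target(x):
    # Toy exact target: standard normal log-density, up to a constant.
    return -0.5 * x * x


def noisy_log_target(x, rng, noise_sd=0.1):
    # Hypothetical approximate evaluation: the exact log-density plus
    # Gaussian noise, mimicking an estimated/intractable likelihood.
    return log_target(x) + rng.gauss(0.0, noise_sd)


def noisy_metropolis(n_iters=5000, step=1.0, seed=0):
    """Random-walk Metropolis chain whose acceptance ratio uses the noisy
    evaluations, i.e. it simulates from an approximate kernel P-hat
    rather than the exact kernel P."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_iters):
        proposal = x + rng.gauss(0.0, step)
        # Both points are re-evaluated noisily at every iteration
        # ("Monte Carlo within Metropolis"-style approximation).
        log_ratio = noisy_log_target(proposal, rng) - noisy_log_target(x, rng)
        if math.log(rng.random()) < log_ratio:
            x = proposal
        samples.append(x)
    return samples
```

With small evaluation noise, the chain's samples should have mean and variance close to those of the exact standard-normal target; how close, as a function of the discrepancy between P and P̂, is the kind of question the paper's stability results address.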
Appears in collections: Mathematics and Statistics Research Collection; Insight Research Collection
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland licence. No item may be reproduced for commercial purposes. For other possible restrictions on use, please refer to the publisher's URL where this item is made available, or to notes contained in the item itself. Other terms may apply.