Noisy Monte Carlo: Convergence of Markov chains with approximate transition kernels
Date Issued
2014
Date Available
2015-12-10T04:00:09Z
Abstract
Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations for which it is impractical or impossible to draw from the transition kernel P. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and it is also the case for intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
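The setting described in the abstract can be illustrated with a minimal sketch: a Metropolis-Hastings sampler in which the log-target density is only available through a noisy estimate, so each acceptance step uses an approximate kernel P̂ rather than the exact P. This is not the paper's algorithm; the function names and the Gaussian toy target are illustrative assumptions.

```python
import math
import random

def noisy_metropolis(log_target_est, proposal, x0, n_steps, seed=0):
    """Metropolis-Hastings where the log-target can only be estimated.

    Because log_target_est is re-evaluated with fresh noise at every step,
    the simulated chain has an approximate kernel P-hat, not the exact P.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = proposal(x, rng)
        # Noisy log acceptance ratio: both terms are independent estimates.
        log_alpha = log_target_est(y, rng) - log_target_est(x, rng)
        if math.log(rng.random()) < log_alpha:
            x = y
        samples.append(x)
    return samples

# Toy example (assumed for illustration): target N(0, 1), with the exact
# log-density perturbed by small Gaussian noise, mimicking a setting where
# the likelihood is too expensive to compute exactly.
def log_target_est(x, rng):
    return -0.5 * x * x + rng.gauss(0.0, 0.1)

def proposal(x, rng):
    return x + rng.gauss(0.0, 1.0)

samples = noisy_metropolis(log_target_est, proposal, 0.0, 20000)
post_burn = samples[5000:]
print(sum(post_burn) / len(post_burn))  # should be near 0 for the N(0,1) target
```

When the estimation noise is small, the chain's long-run behaviour stays close to that of the exact sampler; quantifying that closeness is what the paper's stability results address.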
Sponsorship
Science Foundation Ireland
Type of Material
Journal Article
Publisher
Springer
Journal
Statistics and Computing
Volume
26
Issue
1
Start Page
29
End Page
47
Copyright (Published Version)
2014 Springer Science+Business Media New York
Language
English
Status of Item
Peer reviewed
This item is made available under a Creative Commons License
Scopus© citations
64
Acquisition Date
Apr 17, 2024
Views
1402
Acquisition Date
Apr 17, 2024
Downloads
318
Last Month
5
Acquisition Date
Apr 17, 2024