Easily Computed Marginal Likelihoods from Posterior Simulation Using the THAMES Estimator
The marginal likelihood is essential for Bayesian model selection, in particular for defining Bayes factors, and is a critical quantity in Bayesian model averaging (BMA). However, it is often intractable, necessitating estimation.
We propose an easily computed estimator of the marginal likelihood from posterior simulation output, via reciprocal importance sampling, combining earlier proposals of DiCiccio et al. (1997) and Robert and Wraith (2009). It requires only the unnormalized posterior densities at the sampled parameter values; no simulations beyond the main posterior run and no complicated additional calculations are needed. The estimator is unbiased for the reciprocal of the marginal likelihood, consistent, has finite variance, and is asymptotically normal. It involves a single user-specified control parameter, for which we derive an optimal choice. We illustrate the method with several numerical examples.
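The core idea can be sketched in a few lines. The following is a minimal illustrative example of reciprocal importance sampling with a truncated (uniform-on-a-region) instrumental density, in the spirit of the estimator described above; it is not the authors' implementation, and the toy model, region, and variable names are assumptions made for illustration. In one dimension, a uniform density on an interval around the posterior mean plays the role of the high-posterior-density region; averaging the ratio of this density to the unnormalized posterior over the posterior draws gives an unbiased estimate of the reciprocal of the marginal likelihood.

```python
# Illustrative sketch (not the authors' code): reciprocal importance
# sampling for 1/Z with a uniform instrumental density on a region A.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: unnormalized posterior q(theta) = exp(-theta^2 / 2),
# whose true normalizing constant is Z = sqrt(2*pi).
def q_unnorm(theta):
    return np.exp(-0.5 * theta**2)

true_Z = np.sqrt(2.0 * np.pi)

# Posterior draws (sampled exactly here; in practice, MCMC output).
T = 200_000
theta = rng.standard_normal(T)

# Instrumental density: uniform on a region A = [-c, c] around the
# posterior mean (a 1-D stand-in for a high-posterior-density region).
# The half-width c plays the role of a user-specified control parameter.
c = 1.5
vol_A = 2.0 * c
inside = np.abs(theta) <= c

# Reciprocal importance sampling estimate of 1/Z:
#   (1/T) * sum_t  1_A(theta_t) / (vol(A) * q(theta_t))
# E[1_A(theta)/(vol(A)*q(theta))] under the posterior equals 1/Z.
recip_Z_hat = np.mean(inside / (vol_A * q_unnorm(theta)))
Z_hat = 1.0 / recip_Z_hat

print(Z_hat, true_Z)
```

Truncating to a region where the posterior density is bounded away from zero is what keeps the ratio, and hence the estimator's variance, finite, in contrast to the classical harmonic mean estimator.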
In this talk, I will illustrate and explain the ideas behind our estimator and give an overview of our key results, both theoretical and numerical. On the theoretical side, I will highlight some key techniques used in the proofs. On the numerical side, I will discuss how to implement the THAMES in practice, explaining each simulation setting and the corresponding results.