Approximate Bayesian computation via classification. Wang, Y., Kaji, T., & Rockova, V. J. Mach. Learn. Res., 23(1), article 350, JMLR.org, January 2022.

Abstract: Approximate Bayesian Computation (ABC) enables statistical inference in simulator-based models whose likelihoods are difficult to calculate but easy to simulate from. ABC constructs a kernel-type approximation to the posterior distribution through an accept/reject mechanism which compares summary statistics of real and simulated data. To obviate the need for summary statistics, we directly compare empirical distributions with a Kullback-Leibler (KL) divergence estimator obtained via contrastive learning. In particular, we blend flexible machine learning classifiers within ABC to automate fake/real data comparisons. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. Our theoretical results show that the rate at which our ABC posterior distributions concentrate around the true parameter depends on the estimation error of the classifier. We derive limiting posterior shape results and find that, with a properly scaled exponential kernel, asymptotic normality holds. We demonstrate the usefulness of our approach on simulated examples as well as real data in the context of stock volatility estimation.
@article{10.5555/3586589.3586939,
author = {Wang, Yuexi and Kaji, Tetsuya and Rockova, Veronika},
title = {Approximate Bayesian computation via classification},
year = {2022},
issue_date = {January 2022},
publisher = {JMLR.org},
volume = {23},
number = {1},
issn = {1532-4435},
abstract = {Approximate Bayesian Computation (ABC) enables statistical inference in simulator-based models whose likelihoods are difficult to calculate but easy to simulate from. ABC constructs a kernel-type approximation to the posterior distribution through an accept/reject mechanism which compares summary statistics of real and simulated data. To obviate the need for summary statistics, we directly compare empirical distributions with a Kullback-Leibler (KL) divergence estimator obtained via contrastive learning. In particular, we blend flexible machine learning classifiers within ABC to automate fake/real data comparisons. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. Our theoretical results show that the rate at which our ABC posterior distributions concentrate around the true parameter depends on the estimation error of the classifier. We derive limiting posterior shape results and find that, with a properly scaled exponential kernel, asymptotic normality holds. We demonstrate the usefulness of our approach on simulated examples as well as real data in the context of stock volatility estimation.},
journal = {J. Mach. Learn. Res.},
month = {jan},
articleno = {350},
numpages = {49},
keywords = {approximate Bayesian computation, classification, likelihood-free inference, Kullback-Leibler divergence, posterior concentration}
}
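To make the idea in the abstract concrete, here is a minimal sketch of classification-based ABC on a toy Gaussian location model. The simulator, the choice of logistic regression as the real/fake classifier, and the prior range are illustrative assumptions and not the paper's exact construction; the sketch only shows the general recipe of estimating a KL divergence from a classifier's log-odds and turning it into an exponential weight exp(-n * KL), as described in the abstract.

```python
# Sketch of classification-based ABC with an exponential kernel.
# Toy model and classifier choice are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulator(theta, n):
    """Toy simulator: N(theta, 1) samples, standing in for an intractable model."""
    return rng.normal(theta, 1.0, size=(n, 1))

def kl_estimate(x_real, x_fake):
    """Plug-in KL(real || fake) estimate from a real-vs-fake classifier.

    With balanced classes, the Bayes-optimal classifier's log-odds equal the
    log density ratio, so averaging the fitted log-odds over the real sample
    approximates E_real[log(p_real / p_fake)] (the contrastive idea)."""
    X = np.vstack([x_real, x_fake])
    y = np.concatenate([np.ones(len(x_real)), np.zeros(len(x_fake))])
    clf = LogisticRegression().fit(X, y)
    log_odds = clf.decision_function(x_real)  # log D/(1-D) on real data
    return max(log_odds.mean(), 0.0)          # KL divergence is nonnegative

# "Observed" data generated at theta0 = 1.5
n = 200
x_obs = simulator(1.5, n)

# Draw parameters from a flat prior and weight them with the exponential
# kernel exp(-n * KL_hat); no acceptance threshold is needed.
thetas = rng.uniform(-5.0, 5.0, size=500)
weights = np.array([np.exp(-n * kl_estimate(x_obs, simulator(t, n)))
                    for t in thetas])
weights /= weights.sum()

post_mean = np.sum(weights * thetas)
print(f"ABC posterior mean ~= {post_mean:.2f} (true theta = 1.5)")
```

The accept/reject variant discussed in the abstract would instead keep only those draws with kl_estimate below a threshold; the exponential weighting shown above avoids choosing that threshold at the cost of computing a weight for every draw.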
{"_id":"zmx7MSfSHoG7HToaD","bibbaseid":"wang-kaji-rockova-approximatebayesiancomputationviaclassification-2022","author_short":["Wang, Y.","Kaji, T.","Rockova, V."],"bibdata":{"bibtype":"article","type":"article","author":[{"propositions":[],"lastnames":["Wang"],"firstnames":["Yuexi"],"suffixes":[]},{"propositions":[],"lastnames":["Kaji"],"firstnames":["Tetsuya"],"suffixes":[]},{"propositions":[],"lastnames":["Rockova"],"firstnames":["Veronika"],"suffixes":[]}],"title":"Approximate Bayesian computation via classification","year":"2022","issue_date":"January 2022","publisher":"JMLR.org","volume":"23","number":"1","issn":"1532-4435","abstract":"Approximate Bayesian Computation (ABC) enables statistical inference in simulator-based models whose likelihoods are difficult to calculate but easy to simulate from. ABC constructs a kernel-type approximation to the posterior distribution through an accept/reject mechanism which compares summary statistics of real and simulated data. To obviate the need for summary statistics, we directly compare empirical distributions with a Kullback-Leibler (KL) divergence estimator obtained via contrastive learning. In particular, we blend flexible machine learning classifiers within ABC to automate fake/real data comparisons. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. Our theoretical results show that the rate at which our ABC posterior distributions concentrate around the true parameter depends on the estimation error of the classifier. We derive limiting posterior shape results and find that, with a properly scaled exponential kernel, asymptotic normality holds. We demonstrate the usefulness of our approach on simulated examples as well as real data in the context of stock volatility estimation.","journal":"J. Mach. Learn. Res.","month":"jan","articleno":"350","numpages":"49","keywords":"approximate Bayesian computation, classification, likelihood-free inference, Kullback-Leibler divergence, posterior concentration","bibtex":"@article{10.5555/3586589.3586939,\nauthor = {Wang, Yuexi and Kaji, Tetsuya and Rockova, Veronika},\ntitle = {Approximate Bayesian computation via classification},\nyear = {2022},\nissue_date = {January 2022},\npublisher = {JMLR.org},\nvolume = {23},\nnumber = {1},\nissn = {1532-4435},\nabstract = {Approximate Bayesian Computation (ABC) enables statistical inference in simulator-based models whose likelihoods are difficult to calculate but easy to simulate from. ABC constructs a kernel-type approximation to the posterior distribution through an accept/reject mechanism which compares summary statistics of real and simulated data. To obviate the need for summary statistics, we directly compare empirical distributions with a Kullback-Leibler (KL) divergence estimator obtained via contrastive learning. In particular, we blend flexible machine learning classifiers within ABC to automate fake/real data comparisons. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. Our theoretical results show that the rate at which our ABC posterior distributions concentrate around the true parameter depends on the estimation error of the classifier. We derive limiting posterior shape results and find that, with a properly scaled exponential kernel, asymptotic normality holds. 
We demonstrate the usefulness of our approach on simulated examples as well as real data in the context of stock volatility estimation.},\njournal = {J. Mach. Learn. Res.},\nmonth = {jan},\narticleno = {350},\nnumpages = {49},\nkeywords = {approximate Bayesian computation, classification, likelihood-free inference, Kullback-Leibler divergence, posterior concentration}\n}\n\n","author_short":["Wang, Y.","Kaji, T.","Rockova, V."],"key":"10.5555/3586589.3586939","id":"10.5555/3586589.3586939","bibbaseid":"wang-kaji-rockova-approximatebayesiancomputationviaclassification-2022","role":"author","urls":{},"keyword":["approximate Bayesian computation","classification","likelihood-free inference","Kullback-Leibler divergence","posterior concentration"],"metadata":{"authorlinks":{}},"downloads":0,"html":""},"bibtype":"article","biburl":"https://bibbase.org/network/files/QDBhwkwh8hDye6xL8","dataSources":["4x87iNQYFcnH8n6Cm"],"keywords":["approximate bayesian computation","classification","likelihood-free inference","kullback-leibler divergence","posterior concentration"],"search_terms":["approximate","bayesian","computation","via","classification","wang","kaji","rockova"],"title":"Approximate Bayesian computation via classification","year":2022}