Quantifying the probable approximation error of probabilistic inference programs. Cusumano-Towner, M. & Mansinghka, V. K. arXiv preprint arXiv:1606.00068, 2016.
Paper: https://arxiv.org/pdf/1606.00068.pdf
Link: https://arxiv.org/abs/1606.00068

Abstract: This paper introduces a new technique for quantifying the approximation error of a broad class of probabilistic inference programs, including ones based on both variational and Monte Carlo approaches. The key idea is to derive a subjective bound on the symmetrized KL divergence between the distribution achieved by an approximate inference program and its true target distribution. The bound’s validity (and subjectivity) rests on the accuracy of two auxiliary probabilistic programs: (i) a “reference” inference program that defines a gold standard of accuracy and (ii) a “meta-inference” program that answers the question “what internal random choices did the original approximate inference program probably make given that it produced a particular result?” The paper includes empirical results on inference problems drawn from linear regression, Dirichlet process mixture modeling, HMMs, and Bayesian networks. The experiments show that the technique is robust to the quality of the reference inference program and that it can detect implementation bugs that are not apparent from predictive performance.
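For orientation, the quantity the paper bounds is the symmetrized KL divergence KL(p‖q) + KL(q‖p) between the target distribution p and the inference program's output distribution q. The sketch below is not the paper's estimator (which, per the abstract, uses reference and meta-inference programs precisely because q's density is intractable for real inference programs); it is a minimal Python illustration, under the toy assumption that p and q are one-dimensional Gaussians with known densities, of what a Monte Carlo estimate of that symmetrized divergence looks like. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: in the paper's setting these densities are intractable and
# must be bounded via reference- and meta-inference programs; here we use two
# Gaussians so the symmetrized KL can be checked against its closed form.
mu_p, sig_p = 0.0, 1.0   # "target" distribution p
mu_q, sig_q = 0.5, 1.2   # "approximate inference" output distribution q

def log_p(x):
    return -0.5 * ((x - mu_p) / sig_p) ** 2 - np.log(sig_p * np.sqrt(2 * np.pi))

def log_q(x):
    return -0.5 * ((x - mu_q) / sig_q) ** 2 - np.log(sig_q * np.sqrt(2 * np.pi))

N = 100_000
xp = rng.normal(mu_p, sig_p, N)   # samples from p
xq = rng.normal(mu_q, sig_q, N)   # samples from q

# Monte Carlo estimate of KL(p||q) + KL(q||p)
sym_kl_mc = np.mean(log_p(xp) - log_q(xp)) + np.mean(log_q(xq) - log_p(xq))

# Closed form for Gaussians, as a sanity check
def kl_gauss(m1, s1, m2, s2):
    return np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

sym_kl_exact = (kl_gauss(mu_p, sig_p, mu_q, sig_q)
                + kl_gauss(mu_q, sig_q, mu_p, sig_p))

print(f"MC estimate: {sym_kl_mc:.4f}")
print(f"closed form: {sym_kl_exact:.4f}")
```

For an actual sampler with internal random choices u and intractable output density q(x), the abstract's meta-inference program m(u | x) supplies an unbiased density estimate, since E over u ~ m(· | x) of q(u, x) / m(u | x) equals q(x); plugging such estimates into log-ratio terms like those above is, roughly, how bounds of this kind are assembled (the precise construction and its validity conditions are in the paper). The symmetrized divergence is a natural choice here because it penalizes both over-dispersed and under-dispersed approximations.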
@article{towner-arxiv-2016,
author = {Marco Cusumano-Towner and Vikash K. Mansinghka},
title = {Quantifying the probable approximation error of probabilistic inference programs},
year = {2016},
journal = {arXiv preprint},
volume = {arXiv:1606.00068},
url_paper = {https://arxiv.org/pdf/1606.00068.pdf},
url_link = {https://arxiv.org/abs/1606.00068},
abstract = {This paper introduces a new technique for quantifying the approximation error of a broad class of probabilistic inference programs, including ones based on both variational and Monte Carlo approaches. The key idea is to derive a subjective bound on the symmetrized KL divergence between the distribution achieved by an approximate inference program and its true target distribution. The bound’s validity (and subjectivity) rests on the accuracy of two auxiliary probabilistic programs: (i) a “reference” inference program that defines a gold standard of accuracy and (ii) a “meta-inference” program that answers the question “what internal random choices did the original approximate inference program probably make given that it produced a particular result?” The paper includes empirical results on inference problems drawn from linear regression, Dirichlet process mixture modeling, HMMs, and Bayesian networks. The experiments show that the technique is robust to the quality of the reference inference program and that it can detect implementation bugs that are not apparent from predictive performance.},
}