The prior can generally only be understood in the context of the likelihood. Gelman, A., Simpson, D., & Betancourt, M. Entropy, 19(10):555, 2017.

Abstract: A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys' priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.
@Article{Gelman2017,
author = {Gelman, Andrew and Simpson, Daniel and Betancourt, Michael},
title = {The prior can generally only be understood in the context of the likelihood},
journal = {Entropy},
volume = {19},
number = {10},
pages = {555},
year = {2017},
abstract = {A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys' priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.},
}