Bayes Factors. Kass, R. E. & Raftery, A. E. J. Am. Stat. Assoc., 90(430):773–795, 1995.
In a 1935 paper and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this article we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology, and psychology. We emphasize the following points: From Jeffreys's Bayesian viewpoint, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory. Bayes factors offer a way of evaluating evidence in favor of a null hypothesis. Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis. Bayes factors are very general and do not require alternative models to be nested. Several techniques are available for computing Bayes factors, including asymptotic approximations that are easy to compute using the output from standard packages that maximize likelihoods. The Schwarz criterion (or BIC) gives a rough approximation to the logarithm of the Bayes factor, which is easy to use and does not require evaluation of prior distributions. When one is interested in estimation or prediction, Bayes factors may be converted to weights to be attached to various models so that a composite estimate or prediction may be obtained that takes account of structural or model uncertainty.
Algorithms have been proposed that allow model uncertainty to be taken into account when the class of models initially considered is very large. Bayes factors are useful for guiding an evolutionary model-building process. It is important, and feasible, to assess the sensitivity of conclusions to the prior distributions used.
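Two of the abstract's points lend themselves to a small numerical sketch: the Schwarz criterion (BIC) as a rough approximation to the log Bayes factor, and the conversion of Bayes factors into model weights for composite estimation. The log-likelihoods, parameter counts, and sample size below are made-up illustrative values, not data from the paper.

```python
import math

def bic(log_lik, k, n):
    # Schwarz criterion: BIC = k*ln(n) - 2*log_lik (smaller is better)
    return k * math.log(n) - 2.0 * log_lik

n = 100                  # sample size (illustrative)
ll0, k0 = -520.0, 2      # null model: maximized log-likelihood, parameter count
ll1, k1 = -512.0, 4      # alternative model

# 2*log(B10) is roughly BIC_null - BIC_alt; the error does not vanish,
# which is why the abstract calls it a "rough approximation".
log_bf_10 = 0.5 * (bic(ll0, k0, n) - bic(ll1, k1, n))
bf_10 = math.exp(log_bf_10)

# Bayes factors converted to model weights (assuming equal prior model
# probabilities), usable for a composite, model-averaged estimate:
bfs = [1.0, bf_10]       # factors relative to the null; B for the null is 1
total = sum(bfs)
weights = [b / total for b in bfs]
```

Here the weights are the posterior model probabilities implied by the Bayes factors; attaching them to each model's estimate yields a prediction that accounts for model uncertainty, as the abstract describes.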
@Article{Kass1995,
author = {Kass, Robert E. and Raftery, Adrian E.},
journal = {J. Am. Stat. Assoc.},
title = {Bayes Factors},
year = {1995},
number = {430},
pages = {773--795},
volume = {90},
abstract = {In a 1935 paper and in his book Theory of Probability, Jeffreys developed
a methodology for quantifying the evidence in favor of a scientific
theory. The centerpiece was a number, now called the Bayes factor,
which is the posterior odds of the null hypothesis when the prior
probability on the null is one-half. Although there has been much
discussion of Bayesian hypothesis testing in the context of criticism
of P-values, less attention has been given to the Bayes factor as a practical
tool of applied statistics. In this article we review and discuss
the uses of Bayes factors in the context of five scientific applications
in genetics, sports, ecology, sociology, and psychology. We emphasize
the following points: From Jeffreys's Bayesian viewpoint, the purpose
of hypothesis testing is to evaluate the evidence in favor of a scientific
theory. Bayes factors offer a way of evaluating evidence in favor
of a null hypothesis. Bayes factors provide a way of incorporating
external information into the evaluation of evidence about a hypothesis.
Bayes factors are very general and do not require alternative models
to be nested. Several techniques are available for computing Bayes
factors, including asymptotic approximations that are easy to compute
using the output from standard packages that maximize likelihoods.
The Schwarz criterion (or BIC)
gives a rough approximation to the logarithm of the Bayes factor,
which is easy to use and does not require evaluation of prior distributions.
When one is interested in estimation or prediction, Bayes factors
may be converted to weights to be attached to various models so that
a composite estimate or prediction may be obtained that takes account
of structural or model uncertainty. Algorithms have been proposed
that allow model uncertainty to be taken into account when the class
of models initially considered is very large. Bayes factors are useful
for guiding an evolutionary model-building process. It is important,
and feasible, to assess the sensitivity of conclusions to the prior
distributions used.},
}