

@article{efron_bayesians_2005,
	title = {Bayesians, {Frequentists}, and {Scientists}},
	volume = {100},
	number = {469},
	issn = {0162-1459},
	url = {http://dx.doi.org/10.1198/016214505000000033},
	doi = {10.1198/016214505000000033},
	abstract = {Broadly speaking, nineteenth century statistics was Bayesian, while the twentieth century was frequentist, at least from the point of view of most scientific practitioners. Here in the twenty-first century scientists are bringing statisticians much bigger problems to solve, often comprising millions of data points and thousands of parameters. Which statistical philosophy will dominate practice? My guess, backed up with some recent examples, is that a combination of Bayesian and frequentist ideas will be needed to deal with our increasingly intense scientific environment. This will be a challenging period for statisticians, both applied and theoretical, but it also opens the opportunity for a new golden age, rivaling that of Fisher, Neyman, and the other giants of the early 1900s. What follows is the text of the 164th ASA presidential address, delivered at the awards ceremony in Toronto on August 10, 2004. Two Septembers ago, there was a conference of particle physicists and statisticians at Stanford, called phystat2003. I gave a talk at phystat2003 titled "Bayesians, Frequentists, and Physicists." Earlier that year I'd spoken to a meeting of bio-medical researchers at the "Hutch" in Seattle, under the title "Bayesians, Frequentists, and Microbiologists." These weren't the same lectures, and both were different than tonight's talk, but you can see that I've gotten stuck on a naming scheme. You might worry that this has gotten out of hand, and next week it might be almost anything else that ends in "ists." But no, there is a plan here: The common theme I've been stuck on, and what I want to talk about tonight, is the impact of statistics on modern science and also the impact of modern science on statistics. Statisticians, by the nature of our profession, tend to be critical thinkers, and that includes a big dose of self-criticism. It is easy to think of statistics as a small struggling field, but that's not at all what the historical record shows. Starting from just about zero in 1900, statistics has grown steadily in numbers and, more importantly, in intellectual influence. The growth process has accelerated in the past few decades as science has moved into areas where random noise is endemic and efficient inference is crucial. It's hard to imagine phystat1903, back when physicists scorned statistical methods as appropriate only for soft noisy fields like the social sciences. But physicists have their own problems with noise these days, as they try to answer questions where data are really thin on the ground. The example of greatest interest at phystat2003 concerned the mass of the neutrino, a famously elusive particle that is much lighter than an electron and may weigh almost nothing at all. The physicists' trouble was that the best unbiased estimate of the neutrino mass was negative, about −1 on a scale with unit standard error. The mass itself can't be negative of course, and these days they're pretty sure it's not zero. They wished to establish an upper bound for the mass, the smaller the better from the point of view of further experimentation. As a result, the particle physics literature now contains a healthy debate on Bayesian versus frequentist ways of setting the bound. The current favorite is a likelihood ratio-based system of one-sided confidence intervals. The physicists I talked with were really bothered by our 250-year-old Bayesian–frequentist argument. Basically, there's only one way of doing physics, but there seems to be at least two ways to do statistics, and they don't always give the same answer.},
	journal = {Journal of the American Statistical Association},
	author = {Efron, Bradley},
	year = {2005},
	pmid = {19626167},
	keywords = {Bootstrap, Empirical Bayes, False discovery rate, Simultaneous testing},
	pages = {1--5}
}
