Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors. Gelman, A. & Carlin, J. Perspectives on Psychological Science, 9(6):641–651, November 2014.
Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that a narrow emphasis on statistical significance is placed as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem in the context of their own studies, we recommend design calculations in which (a) the probability of an estimate being in the wrong direction (Type S [sign] error) and (b) the factor by which the magnitude of an effect might be overestimated (Type M [magnitude] error or exaggeration ratio) are estimated. We illustrate with examples from recent published research and discuss the largest challenge in a design calculation: coming up with reasonable estimates of plausible effect sizes based on external information.
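
The design calculation the abstract describes can be carried out directly once two inputs are fixed: a hypothesized true effect size and the standard error of its estimate. Below is a minimal Python sketch of that logic under a normal approximation (the paper itself supplies an analogous R function, retrodesign, so this translation is illustrative rather than the authors' code). The example inputs follow the paper's beauty-and-sex-ratio illustration, pairing a plausible true effect of 0.1 percentage points with a standard error of 3.3.

import numpy as np
from scipy import stats

def retrodesign(true_effect, se, alpha=0.05, n_sims=100_000, seed=0):
    """Power, Type S error rate, and exaggeration ratio (Type M error)
    for an estimate assumed normally distributed around a hypothesized
    true effect with the given standard error."""
    z = stats.norm.ppf(1 - alpha / 2)   # significance threshold on the z scale
    lam = true_effect / se              # true effect in standard-error units
    # Power: probability the estimate is statistically significant
    power = 1 - stats.norm.cdf(z - lam) + stats.norm.cdf(-z - lam)
    # Type S: probability a significant estimate has the wrong sign
    type_s = stats.norm.cdf(-z - lam) / power
    # Type M: expected |significant estimate| relative to the true effect,
    # computed here by simulation (a closed form exists, but simulation
    # keeps the sketch short and mirrors the paper's approach)
    rng = np.random.default_rng(seed)
    estimates = rng.normal(true_effect, se, n_sims)
    significant = np.abs(estimates) > z * se
    exaggeration = np.abs(estimates[significant]).mean() / true_effect
    return power, type_s, exaggeration

# Beauty-and-sex-ratio example: true effect 0.1 percentage points, se 3.3
power, type_s, type_m = retrodesign(0.1, 3.3)
print(f"power={power:.2f}, Type S={type_s:.2f}, exaggeration={type_m:.0f}")

With these inputs the calculation makes the abstract's point concrete: power is barely above the significance level (about 0.05), a statistically significant estimate has roughly a 46% chance of having the wrong sign, and on average it overstates the assumed true effect by a factor of about 77.
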
@article{Gelman2014-qk,
	title = {Beyond {Power} {Calculations}: {Assessing} {Type} {S} ({Sign}) and {Type} {M} ({Magnitude}) {Errors}},
	volume = {9},
	issn = {1745-6916, 1745-6924},
	url = {https://doi.org/10.1177/1745691614551642},
	doi = {10.1177/1745691614551642},
	abstract = {Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that a narrow emphasis on statistical significance is placed as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem in the context of their own studies, we recommend design calculations in which (a) the probability of an estimate being in the wrong direction (Type S [sign] error) and (b) the factor by which the magnitude of an effect might be overestimated (Type M [magnitude] error or exaggeration ratio) are estimated. We illustrate with examples from recent published research and discuss the largest challenge in a design calculation: coming up with reasonable estimates of plausible effect sizes based on external information.},
	language = {English},
	number = {6},
	journal = {Perspectives on Psychological Science},
	author = {Gelman, Andrew and Carlin, John},
	month = nov,
	year = {2014},
	pmid = {26186114},
	publisher = {SAGE Publications Inc},
	keywords = {Beauty, Data Interpretation, Statistical, Female, Humans, Male, Menstrual Cycle, Politics, Power, Psychology, Research Design, Sex Ratio, Type M error, Type S error, design calculation, exaggeration ratio, power analysis, replication crisis, statistical significance},
	pages = {641--651},
}
