Ingre, M. (2013). Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012). NeuroImage.
Abstract: It is sometimes argued that small studies provide better evidence for reported effects because they are less likely to report findings with small and trivial effect sizes (Friston, 2012). But larger studies are actually better at protecting against inferences from trivial effect sizes, if researchers just make use of effect sizes and confidence intervals. Poor statistical power also comes at the cost of an inflated proportion of false positive findings, less power to “confirm” true effects, and bias in reported (inflated) effect sizes. Small studies (n=16) lack the precision to reliably distinguish small and medium-to-large effect sizes (r<.50) from random noise (α=.05), which larger studies (n=100) do with a high level of confidence (r=.50, p=.00000012). The present paper presents the arguments needed for researchers to refute the claim that small low-powered studies have a higher degree of scientific evidence than large high-powered studies.
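The p-value quoted in the abstract follows from the standard t-test of a Pearson correlation, t = r·√(n−2)/√(1−r²) on n−2 degrees of freedom, and the precision argument from a Fisher z confidence interval for r. A minimal sketch in Python reproducing both numbers (assuming SciPy is available; the helper names r_pvalue and r_confint are illustrative, not from the paper):

import numpy as np
from scipy import stats

def r_pvalue(r, n):
    # Two-sided p-value for H0: rho = 0; t is distributed with n-2 df
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
    return 2 * stats.t.sf(t, df=n - 2)

def r_confint(r, n, level=0.95):
    # Fisher z CI: arctanh(r) +/- z_crit / sqrt(n-3), back-transformed to r
    z = np.arctanh(r)
    se = 1 / np.sqrt(n - 3)
    zcrit = stats.norm.ppf(1 - (1 - level) / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

for n in (16, 100):
    p = r_pvalue(0.50, n)
    lo, hi = r_confint(0.50, n)
    print(f"n={n:3d}: r=.50, p={p:.2g}, 95% CI [{lo:.2f}, {hi:.2f}]")

At r=.50 this gives p≈.049 with a 95% CI of roughly [.01, .80] for n=16, versus p≈1.2e-7 with [.34, .63] for n=100: the same p=.00000012 quoted above, and a concrete illustration of why the larger study pins down the effect size while the smaller one barely clears α=.05.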
@article{ ingre_why_????,
  title = {Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012)},
  issn = {1053-8119},
  shorttitle = {Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research},
  url = {http://www.sciencedirect.com/science/article/pii/S1053811913002723},
  doi = {10.1016/j.neuroimage.2013.03.030},
  abstract = {It is sometimes argued that small studies provide better evidence for reported effects because they are less likely to report findings with small and trivial effect sizes (Friston, 2012). But larger studies are actually better at protecting against inferences from trivial effect sizes, if researchers just make use of effect sizes and confidence intervals. Poor statistical power also comes at the cost of an inflated proportion of false positive findings, less power to “confirm” true effects, and bias in reported (inflated) effect sizes. Small studies (n=16) lack the precision to reliably distinguish small and medium-to-large effect sizes (r<.50) from random noise (α=.05), which larger studies (n=100) do with a high level of confidence (r=.50, p=.00000012). The present paper presents the arguments needed for researchers to refute the claim that small low-powered studies have a higher degree of scientific evidence than large high-powered studies.},
  urldate = {2013-04-19},
  journal = {{NeuroImage}},
  author = {Ingre, Michael},
  keywords = {statistical power, false positive findings, inflated effect sizes}
}
