Leek, J. T. & Peng, R. D. (2015). Reproducible Research Can Still Be Wrong: Adopting a Prevention Approach. Proceedings of the National Academy of Sciences, 112(6):1645–1646.
@article{leekReproducibleResearchCan2015,
  title = {Reproducible Research Can Still Be Wrong: Adopting a Prevention Approach},
  author = {Leek, Jeffrey T. and Peng, Roger D.},
  date = {2015-02},
  journaltitle = {Proceedings of the National Academy of Sciences},
  volume = {112},
  pages = {1645--1646},
  issn = {0027-8424},
  doi = {10.1073/pnas.1421412111},
  url = {https://doi.org/10.1073/pnas.1421412111},
  abstract = {[Excerpt] Reproducibility -- the ability to recompute results -- and replicability -- the chances other experimenters will achieve a consistent result -- are two foundational characteristics of successful scientific research. Consistent findings from independent investigators are the primary means by which scientific evidence accumulates for or against a hypothesis. Yet, of late, there has been a crisis of confidence among researchers worried about the rate at which studies are either reproducible or replicable. To maintain the integrity of science research and the public's trust in science, the scientific community must ensure reproducibility and replicability by engaging in a more preventative approach that greatly expands data analysis education and routinely uses software tools. 

[...] We suggest that the replication crisis needs to be considered from the perspective of primary prevention. If we can prevent problematic data analyses from being conducted, we can substantially reduce the burden on the community of having to evaluate an increasingly heterogeneous and complex population of studies and research findings. The best way to prevent poor data analysis in the scientific literature is to (i) increase the number of trained data analysts in the scientific community and (ii) identify statistical software and tools that can be shown to improve reproducibility and replicability of studies. 

[...] To improve the global robustness of scientific data analysis, we must couple education efforts with the identification of data analytic strategies that are most reproducible and replicable in the hands of basic or intermediate data analysts. Statisticians must bring to bear their history of developing rigorous methods to the area of data science.},
  archivePrefix = {arXiv},
  eprint = {1502.03169},
  eprinttype = {arxiv},
  keywords = {definition,education,epistemology,featured-publication,free-scientific-knowledge,hidden-knowledge,knowledge-freedom,open-science,replicability,reproducibility,reproducible-research,science-ethics,scientific-knowledge-sharing},
  number = {6}
}