The Hard Road to Reproducibility. Barba, L. A. Science, 354(6308):142, October 2016.
[Excerpt] [...] A couple years ago, we published a paper applying computational fluid dynamics to the aerodynamics of flying snakes. More recently, I asked a new student to replicate the findings of that paper, both as a training opportunity and to help us choose which code to use in future research. Replicating a published study is always difficult – there are just so many conditions that need to be matched and details that can't be overlooked – but I thought this case was relatively straightforward. The data were available. The whole analysis was open for inspection. The additional details were documented in the supplementary materials. It was the very definition of reproducible research.

Three years of work and hundreds of runs with four different codes taught us just how many ways there are to go wrong! Failing to record the version of any piece of software or hardware, overlooking a single parameter, or glossing over a restriction on how to use another researcher's code can lead you astray.

We've found that we can only achieve the necessary level of reliability and transparency by automating every step. Manual actions are replaced by scripts or logged into files. Plots are made only via code, not with a graphical user interface. Every result, including those from failed experiments, is documented. Every step of the way, we want to anticipate what another researcher might need to either reproduce our results (run our code with our data) or replicate them (independently arrive at the same findings). [...]
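As a rough illustration of the practice described in the last paragraph (not code from the paper), a scripted workflow might record software versions and run parameters to a log file and generate every figure from code with a non-interactive backend. The file names, parameters, and placeholder data below are hypothetical, a minimal sketch only:

```python
# Hypothetical sketch of a scripted, logged workflow: the figure is produced
# by code alone, and the software versions and parameters behind it are
# written to a provenance file next to the result. Names and values are
# illustrative, not taken from the flying-snake study.
import json
import platform
import sys

import matplotlib
matplotlib.use("Agg")          # no GUI: plots are made only via code
import matplotlib.pyplot as plt
import numpy

# Parameters of the (hypothetical) run, recorded rather than typed by hand.
params = {"Reynolds_number": 1000, "angle_of_attack_deg": 25, "mesh": "coarse"}

# Record the environment so another researcher can match versions later.
provenance = {
    "python": sys.version,
    "platform": platform.platform(),
    "numpy": numpy.__version__,
    "matplotlib": matplotlib.__version__,
    "parameters": params,
}
with open("run_provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)

# Placeholder result: a synthetic lift-coefficient history.
t = numpy.linspace(0.0, 10.0, 200)
lift = 1.0 - numpy.exp(-t / 2.0)

fig, ax = plt.subplots()
ax.plot(t, lift)
ax.set_xlabel("time")
ax.set_ylabel("lift coefficient")
fig.savefig("lift_history.png", dpi=300)   # saved by the script, not via a GUI
```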
@article{barbaHardRoadReproducibility2016,
  title = {The Hard Road to Reproducibility},
  author = {Barba, Lorena A.},
  date = {2016-10},
  journaltitle = {Science},
  volume = {354},
  pages = {142},
  issn = {1095-9203},
  doi = {10.1126/science.354.6308.142},
  url = {http://mfkp.org/INRMM/article/14153052},
  abstract = {[Excerpt] [...] A couple years ago, we published a paper applying computational fluid dynamics to the aerodynamics of flying snakes. More recently, I asked a new student to replicate the findings of that paper, both as a training opportunity and to help us choose which code to use in future research. Replicating a published study is always difficult -- there are just so many conditions that need to be matched and details that can't be overlooked -- but I thought this case was relatively straightforward. The data were available. The whole analysis was open for inspection. The additional details were documented in the supplementary materials. It was the very definition of reproducible research.

[] Three years of work and hundreds of runs with four different codes taught us just how many ways there are to go wrong! Failing to record the version of any piece of software or hardware, overlooking a single parameter, or glossing over a restriction on how to use another researcher's code can lead you astray.

[] We've found that we can only achieve the necessary level of reliability and transparency by automating every step. Manual actions are replaced by scripts or logged into files. Plots are made only via code, not with a graphical user interface. Every result, including those from failed experiments, is documented. Every step of the way, we want to anticipate what another researcher might need to either reproduce our results (run our code with our data) or replicate them (independently arrive at the same findings). [...]},
  keywords = {*imported-from-citeulike-INRMM,~INRMM-MiD:c-14153052,~to-add-doi-URL,computational-science,open-science,reproducibility,reproducible-research,research-management,rewarding-best-research-practices},
  number = {6308}
}
