Can cancer researchers accurately judge whether preclinical reports will reproduce? Benjamin, D., Mandel, D. R., & Kimmelman, J. PLOS Biology, 15(6):e2002212, 2017.
@article{benjamin_can_2017,
	title = {Can cancer researchers accurately judge whether preclinical reports will reproduce?},
	volume = {15},
	issn = {1545-7885},
	url = {https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2002212},
	doi = {10.1371/journal.pbio.2002212},
	abstract = {There is vigorous debate about the reproducibility of research findings in cancer biology. Whether scientists can accurately assess which experiments will reproduce original findings is important to determining the pace at which science self-corrects. We collected forecasts from basic and preclinical cancer researchers on the first 6 replication studies conducted by the Reproducibility Project: Cancer Biology ({RP}:{CB}) to assess the accuracy of expert judgments on specific replication outcomes. On average, researchers forecasted a 75\% probability of replicating the statistical significance and a 50\% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported). Accuracy was related to expertise: experts with higher h-indices were more accurate, whereas experts with more topic-specific expertise were less accurate. Our findings suggest that experts, especially those with specialized knowledge, were overconfident about the {RP}:{CB} replicating individual experiments within published reports; researcher optimism likely reflects a combination of overestimating the validity of original studies and underestimating the difficulties of repeating their methodologies.},
	pages = {e2002212},
	number = {6},
	journaltitle = {{PLOS} Biology},
	shortjournal = {{PLOS} Biology},
	author = {Benjamin, Daniel and Mandel, David R. and Kimmelman, Jonathan},
	urldate = {2020-01-08},
	date = {2017-06-29},
	year = {2017},
	langid = {english},
	keywords = {Forecasting, Drug discovery, Basic cancer research, Replication studies, Reproducibility, Scientists, Surveys, Trainees}
}