@article{eyre-walkerAssessmentScienceRelative2013,
  title = {The {{Assessment}} of {{Science}}: {{The Relative Merits}} of {{Post}}-{{Publication Review}}, the {{Impact Factor}}, and the {{Number}} of {{Citations}}},
  author = {Eyre-Walker, Adam and Stoletzki, Nina},
  date = {2013-10},
  journaltitle = {PLoS Biology},
  volume = {11},
  pages = {e1001675},
  issn = {1545-7885},
  doi = {10.1371/journal.pbio.1001675},
  url = {https://doi.org/10.1371/journal.pbio.1001675},
  abstract = {The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.},
  keywords = {citation-metrics,impact-factor,open-science,research-metrics},
  number = {10}
}