Reviewers Are Blinkered by Bibliometrics. Stephan, P., Veugelers, R., & Wang, J. Nature, 544(7651):411–412, April 2017.
@article{stephanReviewersAreBlinkered2017,
  title = {Reviewers Are Blinkered by Bibliometrics},
  author = {Stephan, Paula and Veugelers, Reinhilde and Wang, Jian},
  year = {2017},
  month = apr,
  volume = {544},
  pages = {411--412},
  issn = {0028-0836},
  doi = {10.1038/544411a},
  abstract = {[Excerpt] [...] Although journal impact factors (JIFs) were developed to assess journals and say little about any individual paper, reviewers routinely justify their evaluations on the basis of where candidates have published. [...] As economists who study science and innovation, we see engrained processes working against cherished goals. Scientists we interview routinely say that they dare not propose bold projects for funding in part because of expectations that they will produce a steady stream of papers in journals with high impact scores. The situation may be worse than assumed. Our analysis of 15 years' worth of citation data suggests that common bibliometric measures relying on short-term windows undervalue risky research. [...] At our own universities, it is standard for deans and department chairs to summarize candidates' citations and JIFs when committees discuss the merits of their work. Colleagues and external reviewers routinely refer to the bibliometric indicators of junior faculty members who are up for promotion. Hiring committees may also emphasize these indicators to select colleagues who are likely to attract funding for their institutions. 

[] It is easy to see why. Public funders use these measures to allocate resources to universities. In turn, universities use them to distribute resources to departments. [...]

[Research impact]

Much has been written about how the pursuit of flashy papers can push scientists to crowd into similar, competitive projects, cut corners or exaggerate the significance of their findings. We think that the problem is more profound: popular short-term bibliometric measures discourage the kind of risky research that is likely to shift the knowledge frontier. [...] In a nutshell, our findings suggest that the more we bind ourselves to quantitative short-term measures, the less likely we are to reward research with a high potential to shift the frontier -- and those who do it. [...]},
  journal = {Nature},
  keywords = {bibliometrics,check-list,cognitive-biases,innovation,peer-review,psychology,publication-bias,research-funding,research-management,research-metrics,rewarding-best-research-practices,scientific-creativity,short-term-vs-long-term},
  number = {7651}
}
