What Ranking Journals Has in Common with Astrology. Brembs, B.
@article{brembsWhatRankingJournals2013,
  title = {What Ranking Journals Has in Common with Astrology},
  author = {Brembs, Björn},
  date = {2013},
  journaltitle = {Roars Transactions, a Journal on Research Policy and Evaluation},
  volume = {1},
  issn = {2282-5398},
  doi = {10.13130/2282-5398/3378},
  url = {https://doi.org/10.13130/2282-5398/3378},
  abstract = {[excerpt] Introduction. As scientists, we all send our best work to Science or Nature - or at least we dream of one day making a discovery we deem worthy of sending there. So obvious does this hierarchy in our journal landscape appear to our intuition, that when erroneous or fraudulent work is published in 'high-ranking' journals, we immediately wonder how this could have happened. Isn't work published there the best there is? Vetted by professional editors before being sent out to the most critical and most competent experts this planet has to offer? How could our system fail us so badly? We are used to boring, ill-designed, even flawed research in the 'low-ranking' journals where we publish. Surely, these incidents in the 'top' journals are few and far between? It may come as a surprise to many scientists that the data speak a different language. They indicate that perhaps erroneous and fraudulent work is more common in 'top' journals than anywhere else (Brembs et al., 2013). There is direct evidence that the methodology of the research published in these journals is at least not superior, perhaps even inferior to work published elsewhere (Brembs et al., 2013). There is some indirect evidence that the error-detection rate may be slightly higher in 'top' journals, compared to other journals (Brembs et al., 2013). Neither data alone are sufficient to explain why high-ranking journals retract so many more studies than lower-ranking journals, but together they raise a disturbing suspicion: attention to top journals shapes the content of our journals more than scientific rigor.
The attention being paid to publications in high-ranking journals not only entices scientists to send their best work to these journals, it also attracts fraudsters as well as unexpected and eye-catching results, which all too often prove literally too good to be true (Steen, 2011a, 2011b; Fang and Casadevall, 2011; Cokol et al., 2007; Hamilton, 2011; Fang et al., 2012; Wager and Williams, 2011). A conservative interpretation of the currently available data suggests that the attraction for truly groundbreaking, solid research just barely cancels out the attraction for unreliable or fraudulent work. A less conservative approach suggests that the solid research is losing.},
  keywords = {citation-metrics,journal-ranking,publication-bias,publication-errors,research-metrics,science-ethics},
  number = {1}
}