Roger Watson, Editor-in-Chief, JAN
The world of editing seems to be divided into two camps: those who agree with the use of impact factors and those who don't; I get the impression that the latter is larger and growing. However, I defy any editor of a journal on the list produced annually by Thomson Reuters to deny that they take a look to see where their journal is on the list. I am in the camp which accepts that the impact factor exists (on that, at least, I am 100% correct), and I will probably make the most of it whatever the score, but rejoice loudly if my journal is at the top of the list. After all, whatever your view, a great many authors, universities and research awarding bodies use it to judge where to send a paper, whom to promote and to whom research funds should be awarded.
'So you do try to improve the cite-ability of your papers, do you?' I hear people say. Yes. Of course we do. If you publish in a journal we assume that, at the least, it will be read; at best, cited; and, at the very best, cited many times. Lots of citations to an individual paper – even if, on a few occasions, for the wrong reasons – must tell us something about the paper and about the author, and it must be related, to some extent, to the 'quality' of the journal. The impact factor is one such measure of quality. There are others.
Then there is the San Francisco Declaration on Research Assessment. I am very tempted to say 'good luck with that one, lads', as I am sure that impact factor addiction will be with us for many years. However, it is a sincere and authoritative effort to gather all the arguments against the impact factor into one place and propose that 'enough is enough'. Nevertheless, I would be more sympathetic to the cause of the declaration if it had suggested a credible – or, indeed, any – alternative to the impact factor.
Alternatives there are many, including the journal h-index, the SCImago index and the Eigenfactor. However, these are all citation dependent. While – with the exception of the Scopus impact factor – they use citations in different ways from the Thomson Reuters impact factor, they are all highly correlated with it. Plus ça change!
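For readers unfamiliar with the arithmetic behind these metrics, here is a minimal sketch of two of them: the standard two-year impact factor (citations received in a year to items published in the previous two years, divided by the number of citable items published in those two years) and the journal h-index (the largest h such that the journal has h papers each cited at least h times). The citation counts below are entirely hypothetical, for illustration only.

```python
def two_year_impact_factor(citations_received, citable_items):
    """Two-year impact factor for year Y.

    citations_received: citations received in Y to items
    published in Y-1 and Y-2.
    citable_items: number of citable items published in Y-1 and Y-2.
    """
    return citations_received / citable_items

def journal_h_index(citation_counts):
    """Largest h such that h papers each have at least h citations.

    citation_counts: citations per paper, in any order.
    """
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical journal: 120 citable items over two years,
# receiving 210 citations this year.
print(two_year_impact_factor(210, 120))          # 1.75
print(journal_h_index([10, 8, 5, 4, 3, 3, 1, 0]))  # 4
```

Both measures are built from the same raw citation counts, which is why, as noted above, the various alternatives end up highly correlated.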
Finally, that old chestnut about how easy it is for editors to manipulate the impact factor. Most editors, including me, will joke that they have been trying to do this for years...and failed. Frankly, the penalty is too great: journals can be removed from the Thomson Reuters list for excessive self-citation; 37 were removed in 2013, and Nursing Science Quarterly has been off the list since 2009. No editor or publisher wants that.