Counting citations adds up to improved science

Jonathan R. Goodman is wrong to blame bibliometrics for stifling academic debate, says Craig Aaen-Stockdale

December 2, 2019

Bibliometrics is unquestionably a research policy minefield, but is citation counting really “killing” academic dissent?

According to an opinion piece published in Times Higher Education last week, the use of citations as a measure of scholarly impact is having a variety of negative effects, potentially including the stifling of academic debate. But I would argue that the position taken by Jonathan Goodman manages to both put the cart before the horse and suggest that we throw the baby out with the bathwater.

Goodman discusses coercive citation practices – the trading of citations for publication – as if they were a symptom of peer-reviewers or editors trying to maximise their own citations. There is no doubt that some of this goes on, but the most systematic and damaging examples of coercive citation are the result of editors and publishers attempting to increase the impact factor of their journals.

This points to the central problem in the academic incentive system: the relentless focus on publishing in “prestigious” journals – identified according to dubious journal-level metrics such as the ubiquitous impact factor. Even anti-metrics initiatives like the San Francisco Declaration on Research Assessment limit themselves to criticism of journal-level metrics; they do not discount out of hand the responsible use of article-level bibliometrics.

Goodman is concerned about self-citation, but this is a perfectly normal part of science, especially if you are a leader in your field or work in a niche area. It would be ridiculous not to cite your previous work if it is relevant to your current work. The difficult question is where to draw the line. At what point does self-citation become pathological? Thanks to the very study of self-citations to which Goodman refers, we now have a much better idea about this. And if you are particularly concerned about distortions caused by self-citation, you can usually exclude them from your analysis.
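For the curious, here is a minimal sketch of what "excluding self-citations from your analysis" amounts to, assuming a toy data model rather than any real citation database's API: a citation is treated as a self-citation when the citing and cited papers share at least one author, and is simply not counted.

```python
# A minimal sketch, assuming a hypothetical data model (not any real
# citation database's API): a citation counts as a self-citation if the
# citing and cited papers share at least one author.

from dataclasses import dataclass, field


@dataclass
class Paper:
    title: str
    authors: set[str]
    cited_by: list["Paper"] = field(default_factory=list)  # papers citing this one


def citation_count(paper: Paper, exclude_self: bool = False) -> int:
    """Count citations to `paper`, optionally dropping self-citations."""
    if not exclude_self:
        return len(paper.cited_by)
    return sum(
        1 for citing in paper.cited_by
        if citing.authors.isdisjoint(paper.authors)
    )


# Toy example: two of three citing papers share an author with the original.
original = Paper("Niche result", {"A. Researcher"})
original.cited_by = [
    Paper("Follow-up I", {"A. Researcher"}),
    Paper("Follow-up II", {"A. Researcher", "B. Coauthor"}),
    Paper("Independent replication", {"C. Critic"}),
]

print(citation_count(original))                     # 3
print(citation_count(original, exclude_self=True))  # 1
```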

In my adopted homeland of Norway, a recent proposal to use bibliometrics in evaluation was criticised because men are cited more than women, and self-cite more than women do, introducing the potential for bias. But how do we know this? By counting citations. If bibliometricians hadn’t crunched the numbers, we wouldn’t know that there was a problem to fix.

Goodman’s chief objection to citation counting is that it may discourage junior scholars from criticising their seniors, out of concern for their careers. Having criticised a paper published in Nature early in my own career, I share that concern. However, the potential for torpedoing your career by criticising the wrong person would still exist even if citations were not counted at all. Thanks to the humble bibliography, a senior researcher will find out eventually that some young upstart has criticised their work, regardless of whether a bibliometrician or automated citation database has been performing mathematical acrobatics on the citation counts of their articles in the meantime.

Article-level citation indicators are obviously not perfect, for some of the reasons that Goodman outlines. But the perfect should never be the enemy of the good, and when scholarly output is increasing exponentially, they are as good a method as any by which to filter the juicy plankton out of the tsunami. As indicators of an article’s influence, they are a lot better than the impact factor of the journal in which it was published, or any number of derivatives or proxies of that.

More widespread use of article-level citation metrics may even help to break the oligopoly of certain journals and publishers – which would correspondingly help to reduce the very coercive editorial practices that Goodman mentions. It would also mean a lot less time wasted in the rinse-and-repeat cycle of submission and rejection as ambitious researchers work their way down the hierarchy of “top-tier” journals.

Being published in certain journals has understandably become an absolute raison d'être for some, fetishised to the point that a work’s readership, citation count and societal impact have become entirely secondary. But while publications in highly ranked journals hold the potential for impact, citation counts actually demonstrate it. And I like to think that we scientists care about evidence.

Craig Aaen-Stockdale is a senior adviser in research administration at BI Norwegian Business School.

Postscript

Print headline: Citation counting adds up


Reader's comments (2)

Agree with this. Citations have become a proxy for research impact. Albeit imperfect, they are a more useful measure than journal IF, which says nothing about impact on its own. Indeed, a low citation count in a high-IF journal is a bad sign in terms of impact, implying as it does that even with high exposure few people have found the results to be of much use.

Quality is more difficult to pin down with citations or IF. The first paper to do X is not necessarily of high quality. It may actually be very crude, but it can kickstart a new way of thinking or the application of a new technique to a certain problem. This will garner citations, so the work has a big impact even though the quality can be average or even low.

Getting published in high-IF journals is a lottery for most people, unless they are a member of a certain club. The strategy employed by many is to start at the top and work down until the paper is accepted. Inevitably, with enough grad students and postdocs pumping out papers, some will get into the so-called top journals by sheer chance (maybe sometimes they're genuinely good). Nature and Science, in my opinion, are closer to science fiction magazines than to scientific publications, and I would never send a paper to either.

Re self-citations, it's undeniable that we have colleagues who use them shamelessly to boost their profiles. This is self-defeating really, because it's easy to spot and leaves a bad impression. Obviously, a certain level of self-citation is inevitable, but when it reaches a level of over 25% (you could argue lower) you have to question whether citations are being used appropriately. In some systems, e.g. in China, self-citations are excluded from any analysis, as are papers in which you are not the lead or corresponding author. This may sound draconian, but it discourages some of the game-playing.
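As an illustration of the rule of thumb in the comment above, here is a minimal sketch of flagging a profile whose self-citation share crosses a threshold. The 25 per cent cut-off is the commenter's own figure, not an established standard, and the function names are hypothetical.

```python
# A minimal sketch of the commenter's rule of thumb. The 25 per cent
# cut-off is the commenter's own figure, not an established standard.

def self_citation_share(total_citations: int, self_citations: int) -> float:
    """Fraction of citations that are self-citations (0.0 if uncited)."""
    return self_citations / total_citations if total_citations else 0.0


def looks_excessive(total_citations: int, self_citations: int,
                    threshold: float = 0.25) -> bool:
    """Flag a profile whose self-citation share exceeds the threshold."""
    return self_citation_share(total_citations, self_citations) > threshold


print(looks_excessive(200, 30))  # False: 15% self-citation share
print(looks_excessive(200, 70))  # True:  35% self-citation share
```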
Oh dear, this really is perfect nonsense. The biggest determinant of the number of citations that you get is the number of people in the field. It has little to do with the quality of the work. One of the best ways to get few citations is to have a lot of equations in a paper. And the best way to get a lot is to include "penis" in the title of your paper (and plug it on Twitter).

Take some examples. In 1990, we published a paper that solved the mathematical problem of how to fit mechanisms to single ion channel data: it allowed an exact treatment of events that are too short to resolve. In 29 years it has 126 citations (Google Scholar). It is mathematically quite difficult and not very many people analyse single-molecule data quantitatively. In contrast, in 2014, after I retired from lab work, I wrote a simple-minded simulation of a test of statistical significance. In 5 years it's had 409 citations (and over a quarter of a million PDF downloads). The paper doesn't come close to the 1990 one for originality or intellectual level.

The reasons for all the citations are obvious. 1. Null hypothesis significance testing is used in a vast range of different areas of science. 2. The topic of what's wrong with such tests is in the news at the moment. 3. The paper has little mathematical content, so it is easy to understand.

If you want to promote good scientists, read their papers (especially the methods section). If you want to promote someone who writes simple-minded non-mathematical papers in popular areas, count citations. Field-specific citation counts have been touted as a solution to this problem. It won't work in general: who decides what field every paper is in?