Research Intelligence - Citing to win as journals 'game' system

Impact factor manipulation in spotlight as Thomson Reuters targets offenders. Paul Jump reports

August 9, 2012

When Thomson Reuters announced at the end of June that a record 26 journals had been consigned to its naughty corner this year for "anomalous citation patterns", defenders of research ethics were quick to raise an eyebrow.

"Anomalous citation patterns" is a euphemism for excessive citation of other articles published in the same journal. It is generally assumed to be a ruse to boost a journal's impact factor, which measures the average number of citations received in a given year by the articles the journal published over the previous two years.
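The two-year impact factor described above boils down to a simple ratio, which a short sketch makes concrete. The figures below are invented for illustration, and the actual JCR computation applies additional rules (for example, over what counts as a "citable item"):

```python
# Minimal sketch of the two-year impact factor: citations received in
# year Y to items the journal published in years Y-1 and Y-2, divided
# by the number of citable items published in those two years.
# All figures below are invented for illustration.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Return citations per citable item over the two-year window."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 1,240 citations in 2011 to articles published in 2009-10,
# across 200 citable items published in those years:
print(impact_factor(1240, 200))  # 6.2
```

Because the numerator is raw citation counts, any extra citations a journal can steer towards its own recent articles feed straight into the figure.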

Impact factors are often used, controversially, as a proxy for journal quality and, even more contentiously, for the quality of individual papers published in the journal and even of the people who write them.

When Thomson Reuters discovers that anomalous citation has had a significant effect on a journal's impact factor, it bans the journal for two years from its annual Journal Citation Reports (JCR), which publishes up-to-date impact factors.

"Impact factor is hugely important for academics in choosing where to publish because [it is] often used to measure [their] research productivity," according to Liz Wager, former chair of the Committee on Publication Ethics.

"So a journal with a falsely inflated impact factor will get more submissions, which could lead to the true impact factor rising, so it's a positive spiral."

One trick employed by editors is to require submitting authors to include superfluous references to other papers in the same journal.

A large-scale survey by researchers at the University of Alabama in Huntsville's College of Business Administration published in the 3 February edition of Science found that such demands had been made of one in five authors in various social science and business fields.

Xavier Bosch, associate professor of medicine at the University of Barcelona and a commentator on research ethics, said he was aware of several medical journals that "suggested" that authors include references to some of the publications' own papers.

"Although editors won't tell authors that the article will not be accepted unless they cite the suggested papers, authors will always agree," he said.

He noted that while "serious" medical journals had self-citation rates of less than 3 per cent, some journals had rates of more than 50 per cent.
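Bosch's benchmark is a journal's self-citation rate: the share of citations to the journal that come from its own pages. A minimal sketch, with invented counts, shows how the "less than 3 per cent" and "more than 50 per cent" figures arise:

```python
def self_citation_rate(citations_from_self, citations_total):
    """Share of citations to a journal that come from the journal itself.
    Counts here are invented for illustration."""
    return citations_from_self / citations_total

# A journal cited 2,000 times, 60 of them from its own articles:
print(f"{self_citation_rate(60, 2000):.1%}")    # 3.0% - Bosch's "serious" range
# The same journal with 1,100 self-citations:
print(f"{self_citation_rate(1100, 2000):.1%}")  # 55.0% - past the 50 per cent mark
```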

Richard Irwin, editor-in-chief of the journal Chest, said he knew of journals that had commissioned reviews whose authors were instructed to reference all the papers on the topic published in that journal during a specific year of the impact factor calculation period.

Thomson Reuters also began this year to screen for what it calls "citation stacking". Marie McVeigh, director of JCR, defined it as "a specific, anomalous pattern of citations exchanged between two journals", such that "Journal A shows an inordinate, sometimes exclusive concentration of citations to Journal B" during the impact calculation period.

Three journals - Cell Transplantation, Medical Science Monitor and The Scientific World Journal - were excluded from the 2011 JCR, published in June, because of such behaviour.

As first revealed on The Scholarly Kitchen blog by independent researcher and consultant Phil Davis, members of the editorial board of Cell Transplantation wrote several articles for the other two journals containing hundreds of citations to recent Cell Transplantation articles.

Dr Davis calculated that without these citations, Cell Transplantation's 2010 impact factor would have been 4.082. With them, it was 6.204.

As I was saying...

Individual authors are also sometimes accused of citing their own papers excessively, or entering into informal arrangements with colleagues for quid pro quo citations, in the hope of boosting their "H-indexes": a putative measure of an individual's research prowess that takes into account both publication and citation volumes.
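The h-index mentioned above has a precise definition - the largest h such that the author has h papers cited at least h times each - which a short sketch (with invented citation counts) makes concrete, along with why a few traded citations can move it:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# An author with papers cited [10, 8, 5, 4, 3] times has h = 4:
# four papers have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4

# Three quid pro quo citations on the two weakest papers lift it to 5:
print(h_index([10, 8, 5, 5, 5]))  # 5
```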

One problem with countering such behaviour is the difficulty of defining excessive self-citation.

As pointed out by Stevan Harnad, affiliate professor of electronics and computer science at the University of Southampton, self-citation by individuals can be entirely legitimate, "drawing attention to the author's work that is relevant to (and may have been ignored by) the target research field".

According to Dr Wager, there were no guidelines on journal self-citation until she managed to have a clause inserted into international standards developed at the second World Conference on Research Integrity last summer.

The clause bars editors from trying to "inappropriately influence their journal's ranking by artificially increasing any journal metric".

Of the nearly 6,700 respondents to the Alabama survey, 86 per cent considered citation coercion "inappropriate" and 64 per cent were less likely to submit to a coercive journal. However, 57 per cent said they would be willing to add superfluous references before submitting to a journal known to coerce.

This may suggest that many agree with Ferric Fang, editor-in-chief of Infection and Immunity and professor of laboratory medicine at the University of Washington, that although "coercive citation and citation cartels represent unethical practices", the case has not been made that "excessive self-citation is a common or important problem".

According to Professor Fang, a "more insidious and widespread" problem is under-citation, in which authors fail to cite the real source of a discovery, either out of ignorance or "to inflate the apparent novelty of one's own work".

Ms McVeigh of Thomson Reuters said there had been no "shocking growth" in the number of journals "suppressed" by the JCR.

She noted that the 51 currently suppressed represented only a tiny fraction of the 10,675 listed. And while the number added to the suppression list in 2011 - 26 - was much higher than the seven added in 2006, for example, the JCR had then encompassed only 6,166 titles.

As for citation stacking, she had not yet analysed historical data for the phenomenon, but "the infrequent occurrence of the specific stacking behaviour we saw this year suggests it is neither a common nor a long-standing practice".

Professor Irwin said that most of the unethical practice he had encountered was perpetrated by researchers who thought they were doing nothing wrong. In his view, this was because of repeated instances where researchers had done "dishonourable or unethical things", creating a prevailing sense that "it was OK to cut corners or do even worse".

"With respect to inappropriate self-citing, we are contaminating the literature and are teaching the wrong message to our trainees and young investigators by implying that it is OK to 'game' the system," he said.

Meanwhile, Dr Bosch said the role that impact factors played in appointments and promotions - in both universities and hospitals he knew of - meant that manipulation had to be fought "seriously".

So what can be done? According to Professor Harnad, it is easy to cull self-citation from personal statistics. Automated analysis of self-citation statistics based on norms for different fields could also potentially be used to flag up possible anomalies.
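Harnad's suggestion of automated screening against field norms could be sketched as a simple filter: compare each journal's self-citation rate with a typical rate for its field and flag large deviations. The field norms, threshold multiple and journal figures below are all invented for illustration:

```python
# Illustrative screen for anomalous self-citation. Norms, threshold
# and journal figures are invented; a real system would derive norms
# from field-wide citation data.

FIELD_NORMS = {"medicine": 0.03, "business": 0.08}

def flag_anomalies(journals, multiple=3.0):
    """Return names of journals whose self-citation rate exceeds
    `multiple` times the typical rate for their field."""
    flagged = []
    for name, field, rate in journals:
        if rate > multiple * FIELD_NORMS[field]:
            flagged.append(name)
    return flagged

sample = [
    ("Journal A", "medicine", 0.02),  # within the field norm
    ("Journal B", "medicine", 0.55),  # the >50 per cent case
    ("Journal C", "business", 0.10),  # above the norm, but under 3x
]
print(flag_anomalies(sample))  # ['Journal B']
```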

Meanwhile, he added, the rise of open-access publishing would make it easier for "clever graduate students in computer science and scientometric departments" to develop techniques to detect citation stacking.

For his part, Dr Bosch suggested that journals with self-citation rates exceeding 5 per cent of their total counts should first be warned and then banned from the JCR.

But ideally he would like to see all self-citation removed from calculations of impact factor. Only this, he said, would "provide a true and pristine picture of how well a journal's quality stands".
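Bosch's preferred fix - stripping self-citation out of the calculation entirely - amounts to filtering the numerator by citing journal before dividing. A minimal sketch, with invented citation records:

```python
def impact_factor_no_self(citing_journals, journal, citable_items):
    """Impact factor with citations from the journal itself excluded.
    `citing_journals` holds one citing-journal name per citation
    received in the window; all counts are invented for illustration."""
    external = [c for c in citing_journals if c != journal]
    return len(external) / citable_items

# 300 citations to "Journal X", of which 120 are self-citations,
# across 60 citable items:
cites = ["Other Journal"] * 180 + ["Journal X"] * 120
print(impact_factor_no_self(cites, "Journal X", 60))  # 3.0, not 5.0
```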
