F1000 journals: rank hypocrisy?

Begun as a counterweight to journal impact tables, Faculty of 1000 starts its own. Paul Jump writes

October 13, 2011

The founder of a biomedical article-rating organisation has denied that its new journal ranking is a betrayal of its values.

The Faculty of 1000 (F1000) was launched in 2002, and was billed as a corrective to the practice of judging papers and researchers by the prestige and impact factor of the journals in which they are published.

The organisation's members, all of whom are senior, peer-nominated researchers in biomedicine, now number more than 10,000 worldwide. They are asked to review and rate papers they deem significant in their field.

The organisation has now begun to turn those ratings into a ranking of journals.

But the founder and chair of F1000, Vitek Tracz, told Times Higher Education that he would be dismayed if research managers used the rankings to judge individual researchers.

He said the ratings were primarily aimed at helping researchers choose where to publish.

F1000's overall ordering of journals - headed by Nature, Cell and Science - is similar to that derived from the citation-based impact factors calculated by Thomson Reuters.

But Mr Tracz said researchers were often surprised by the rankings for their specific subfields. He disagreed with the view that authors were interested only in publishing in the journal with the highest impact factor to share in its "reflected glory".

"They don't necessarily want to publish everything in Nature: they want to publish in the best possible journal that is appropriate and that they have a reasonable chance of getting into," he said.

"We are [producing journal rankings] with open eyes and minds. We realise the potential for misuse, and if we find that doing something like this is not helpful, we will stop.

"But rankings are there, they are useful to some extent, and the more types there are, the better. The whole world rails against impact factors, but every single person uses them."

He added that the F1000 rankings had the advantage of being completely transparent.

"You can see what the papers were that created that ranking, who said they were good and why."

Mr Tracz admitted that there was "a certain amount of doubt and worry" among F1000 members about the rankings, but he expected much more controversy over his plans to produce additional rankings of individual institutions, laboratories and even researchers.

"Our guide will be whether scientists can use them, in terms of where to publish, where to work or with whom to collaborate," he said. "If you move outside territory you are familiar with, they will give you some imperfect but useful help."

