“Publish or perish” is a mantra that, depending on your point of view, is either responsible for many of the perceived modern ills of academia or a positive driving force.
But among the major research nations, where has a push for academics to publish more been felt the most in recent years? And has there been any noticeable contrast with changes in quality?
One of the 13 metrics behind Times Higher Education’s World University Rankings attempts to directly measure the productivity of researchers in each institution, and examining those data at the national level from the past few years gives some interesting results.
For instance, the data suggest that Australia’s universities have seen some of the biggest productivity increases relative to other nations. Of countries with at least 10 universities in the 2016 and 2019 editions of the ranking, Australia had one of the largest leaps in the average score for papers per staff and is now second only to the Netherlands on the metric.
Accounting for the fact that the Netherlands’ universities are almost all in the top 200 provides even better news for Australia: its top 10 universities now achieve the best score for average papers per staff among leading research nations, overtaking the UK as well as the Netherlands since 2016.
Looking at the figures in the context of overall research output also suggests that in some countries, such as China, a rapid increase in research publication has not yet been accompanied by large productivity gains.
So are national policies behind some of these trends?
In Australia, there have been clear policy incentives in the past decade to boost productivity. The most obvious is that until 2017, block grant funding to support research in Australian universities was determined in part by the amount of research published.
However, a review published in 2016 led to this element of the funding calculation being removed and – alongside the evolving Excellence in Research for Australia assessment – there now appears to be a drive directed more towards quality than quantity.
“Tying funds initially to research income and to publications while largely holding the funding steady put universities in the position of having to improve to maintain funding levels – or risk other universities doing better and attracting a higher proportion of funding,” said Conor King, executive director of Innovative Research Universities, which highlighted Australia’s productivity surge in a recent submission to a parliamentary inquiry on research funding.
“The publication factor was the easiest for academics to influence and [it] quickly rose – hence it has now been removed from the funding formula, its purpose achieved.”
Mr King added that the ERA’s focus on research quality had now “helped balance sheer output with consideration for its value”, but a current squeeze on block funding raised the question of whether increases in research output would now stall.
“It is a live question how much the government can squeeze the base resources [needed] to employ researchers…while looking for greater output, and in particular targeting all new funds to specific projects, expecting the base university capability to provide half or more of the actual expenditure required.”
To judge by rankings data on the citation impact of research, it is in quality, rather than productivity, that Australia still has a little ground to make up on other nations.
However, its push to increase research volume does not appear to have done citation impact any harm, whereas in other countries, such as France, quality gains do not appear to have kept pace with rising productivity.
And in some nations – most notably Russia – quality appears to have declined on average as productivity has increased (although the effect of the rankings expanding from 2016 to 2019 may be a factor here and in some other countries).
By and large, however, in the most developed research nations, productivity gains appear to go hand in hand with improvements in citation impact. So does this mean that national policies and evaluations such as the ERA or the research excellence framework in the UK are becoming better at influencing both?
Sergey Kolesnikov, a postdoctoral researcher at Arizona State University’s Center for Organization Research and Design – who has co-authored research on the relationship between productivity and research impact – said that in his view, the better assessment programmes sought not to concentrate too much on one over the other.
“I think that excessive focus either on productivity or quality is equally harmful, especially if the evaluation system is based on a small number of simplistic quantitative indicators, because any indicator is just a poor proxy for a real-world complex phenomenon it measures,” he said.
While the problems of measuring productivity “were well known”, he said, basing decisions on “simple measures” of quality, “such as journal rankings or journal impact factors”, had “all sorts of negative impacts, too”.
“So, the movement in contemporary evaluation systems that recognises these problems is not just a shift of emphasis from productivity to quality, but rather a move away from simplistic metrics of one or the other towards more systematic assessment that combines various context-based quantitative measures with qualitative assessment and peer review.”
He said that the evolution of policy in Australia over the past decade might be an example of this, and he also highlighted recent updates to the Wellcome Trust’s open-access policy that emphasised assessing research on the “intrinsic merit of the work, not the title of the journal or publisher”.
This “strong push towards more responsible research evaluation practices within higher education institutions” was hopefully also “a sign of future changes on a nationwide level”, Dr Kolesnikov said.
So the hope among researchers themselves might be that future gains in both the productivity and the quality of research will be a by-product of sophisticated approaches to research assessment rather than direct attempts to influence them.