Hypercompetition reshapes research and academic publishing

Analysis of more than 120 million academic publications since the beginning of the 19th century shows more is more as academics compete for attention

June 20, 2019

It is a familiar trope: academics are under increasing pressure to publish, and are producing more papers in record time as a result. Now, the findings of an extensive new study offer a trunkload of evidence to suggest that more really is more when it comes to contemporary publishing trends.

Prompted by the notion that academics chase citation targets over personal research interests in order to be viewed as successful, two computer science researchers set out to explore the extent to which the state and scale of academic publishing have changed in the past century.

To do so, they developed their own program to analyse large-scale datasets covering more than 120 million publications, featuring 35 million authors and 520 million references since the beginning of the 19th century.

Like the datasets, the findings are vast – but many of them, published in the journal GigaScience this month, point to a stark increase in output. The authors found that while researchers who started their careers in 1950 published an average of 1.55 papers over a 10-year period, the generation who started their careers in 2000 published more than two-and-a-half times that (4.05 papers) within the same time frame.

And “hyperauthorship” is increasingly on trend. The analysis found “more and more papers are written by hundreds or even thousands of authors”, something that was found across all physical science research fields. Mass authorship may not be all it seems, however, since “honorary and ghost authors” were found to be increasingly prevalent.

On top of this, the mean title length of papers has increased dramatically – rising from 8.71 words in 1900 to 11.83 words in 2014 – as has the percentage of titles that include colons, exclamation marks and other special characters.

Michael Fire, a senior lecturer at Ben-Gurion University of the Negev, who co-authored the report, explained that there was good reason for this. “Academia is a hypercompetitive research environment, so many researchers do their best to get attention and more citations,” he said.

“This includes publishing more papers, using long and eye-catching titles, using more keywords, longer abstracts. It’s something that is also common in other competitive domains.”

The number of references found in papers has significantly increased over time, and is positively correlated with papers being cited more frequently. But at the same time, Dr Fire and co-author Carlos Guestrin, of the University of Washington, warn that “papers may contain hundreds of self-citations” – with figures showing a steady rise in the average number of self-citations per paper from 0.35 in 1950 to more than 2.2 in 2014 – highlighting an appetite for gaming the system.

Dr Fire told Times Higher Education that the study had confirmed his opinion that “nothing makes sense in the world of academic publishing”.

“I really hope our study will help researchers look at the system in a different light…[and that] this will [lead to] better or new measures that will actually encourage researchers to target impactful research,” he said.



Print headline: It’s true: more means more!

Reader's comments (2)

Given that more recent metrics seem to discount self-cites, it would seem more likely that they are signs of salami-slicing (‘... building on X and Y 2017, we [X and Y] show that...’) than attempts to inflate actual impact measures so as to “game the system”. Citation circles would seem to be the way to game that: A cites B cites C cites A.
One needs to think about the difference between productivity and competition. The desire to be noticed may be more constant than the influence of productivity. For example, I wrote my first published paper in the 1980s, before email. I wrote it with a colleague in Germany, and all of our edits and correspondence had to travel by mail. As international phone calls were restricted by my university, even that was not a real option. It took several months to revise the paper, several months to get it reviewed and several months to get it revised again for publication.

This year, I wrote a paper with four authors on three continents in four different cities. On any given day, we revised the paper several times, did additional analyses and had the paper in constant movement, as we could work across time zones. Like a DHL package constantly in motion, the paper evolved quickly. Imagine if it were the 1980s again. First, I would not have four co-authors, as this would just increase the time and cost of any effort. Second, the revisions would not iterate quickly but in large blocks. Finally, any revisions, even after review, would slow to a crawl.

If you look at the role of editor, again this has changed. I was an associate editor of a major journal before the switch to online systems, and an associate editor and editor of another after the switch. Again, it was night and day. Requests to review were sent via letters along with physical copies of the manuscripts. Reviews were typed, sent back, copied and, where there was identifying information, "whited out". The process was slow and time-consuming. Now a paper comes in, can be desk reviewed in a few hours, and can be out for review almost immediately.

Similarly, the use of databases allows an expansion of citations and of the number of available journals. Before the internet came into use, the only journals you had in your office were the "major" ones, as having more was expensive. Collecting more references was a slog in the library. Now there is no reason to bother. When I moved jobs, I literally left 20 years of journals sitting in my old office. When they asked what to do with them, I said I didn't care, as I didn't need them. Now you can read journals you would most likely never open, or never even knew existed.

Hence, what you have is a massive productivity shock. This leads to more papers being accessible, hence more papers being cited. It leads to a reduction in the cost of co-authoring, hence you now have more co-authors. This leads to more interaction and, hopefully, better ideas from that interaction. Overall, more productive academics working with other more productive academics leads to more outputs. Hence, the argument that "nothing makes sense in the world of academic publishing" is wrong, and not supported by even a casual examination of the reality. It makes a lot of sense. And the fact that more papers are being produced by more authors is good for society. Consumers want more choice in products. The public is made better off with more contestable science. Sure, academics might feel that they are under pressure, but the competition for the predominance of ideas, and faster and cheaper discovery, is good. Indeed, it is very good.

