Late blooming in STEM careers ‘almost never happens’

Early career publishing success is a reliable guide to whether scientists will succeed long term, concludes international study

Published on April 15, 2026
Last updated April 15, 2026

Only a tiny number of low-performing early career scientists ever rise to the top of their field, according to a longitudinal survey which suggests researchers’ lifetime productivity can be predicted at an early stage.

After tracking the output of 320,564 scientists with at least 25 years of publishing experience, researchers from Adam Mickiewicz University, Poznań, examined whether a researcher’s productivity changed from early career (five to 14 years after first publication) to mid-career (15 to 24 years) and late career (25 or more years).

Using a journal prestige measure as a signal of research quality, the study found that slightly more than half of the researchers identified in the early career stage as being in the top 50 per cent for research productivity remained top-half researchers in the next stage of their careers. In turn, most remained in the top half for productivity for the rest of their careers.

However, only a tiny number of early career researchers in the bottom decile of productivity (0.5 per cent, or 162 individuals in total) exhibited “extreme upward mobility” by reaching the top 10 per cent for research productivity by late career. In addition, only a “small fraction” moved up from the bottom three deciles to the top 10 per cent, with only 0.7 per cent in the second-lowest decile making this leap.
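By way of illustration, mobility figures of this kind can be derived mechanically once each researcher has a productivity score per career stage. The sketch below is a minimal reconstruction in Python using synthetic data, not the authors’ actual Scopus pipeline: it assigns researchers to deciles at the early and late career stages, computes the share of bottom-decile starters who reach the top decile, and builds the full decile-to-decile transition matrix.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the study's data: one row per researcher,
# with a prestige-weighted productivity score for the early career
# stage (years 5-14 after first publication) and the late stage (25+).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "early": rng.lognormal(mean=0.0, sigma=1.0, size=10_000),
    "late": rng.lognormal(mean=0.0, sigma=1.0, size=10_000),
})

# Decile 1 = least productive, decile 10 = most productive,
# computed separately within each career stage.
for stage in ("early", "late"):
    df[f"{stage}_decile"] = pd.qcut(df[stage], q=10, labels=False) + 1

# "Extreme upward mobility": bottom-decile starters who finish
# in the top decile by late career.
bottom = df[df["early_decile"] == 1]
share = (bottom["late_decile"] == 10).mean()
print(f"Bottom-to-top movers: {share:.1%}")

# Full transition matrix: rows = early-career decile,
# columns = late-career decile, cells = row shares.
transition = pd.crosstab(df["early_decile"], df["late_decile"],
                         normalize="index")
print(transition.round(2))
```

With independent synthetic draws, every cell of the matrix hovers around 10 per cent; on the study’s real data, the diagonal would dominate, which is the lock-in pattern Kwiek describes.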


In the case of immunology, just one researcher across the 38 OECD countries studied made that jump between early career and late career, with just one economist achieving the same feat, says the study.

Similarly, those early career researchers identified as being in the top 10 per cent for productivity almost never fell to the bottom 10 per cent, suggesting “scientists are heavily locked-in early on [in] their careers in productivity classes”.


Presenting the findings at a Centre for Global Higher Education seminar, the study’s lead author, Marek Kwiek, Unesco chair in institutional research and higher education policy, said “how you start and finish [your career] is more or less similar” in the majority of cases, and that publishing productivity is “largely settled in the first five to 10 years” of a career.

That is likely because those in postdoctoral research positions who contributed to highly cited or influential papers could use this success to acquire positions and resources that led to further success. “Papers are [turned] into grants, grants led to [access to] people [such as PhD students] and equipment, which leads to more grants,” he said.

Asked by Times Higher Education if the study’s conclusion that scientific success is fixed very early was a gloomy message for researchers, Kwiek, who conducted the study with data scientist Lukasz Szymula, said the “optimistic part of this data-driven story is that only slightly more than half of top performers remain top performers in the next stage of their careers”.

“That means that slightly less than half of top performers come from lower productivity deciles,” he continued, stating: “There is some mobility, not closure or total immobility.”


Furthermore, the study focused mainly on STEM science (although the cohort did include 12,585 social scientists), so the patterns of productivity might not apply to other disciplines, added Kwiek, who is director of the Institute for Advanced Studies in Social Sciences and Humanities.

“Late blooming in humanities is pretty possible, perhaps widespread, but this is impossible to measure on a global scale,” he said, citing difficulties obtaining data.

“But this is the pattern: late blooming understood as high publishing productivity in STEM science almost does not exist,” he said.

Studying those few individuals who “jumped up” from the bottom decile to the top 10 per cent could provide clues for individual researchers on how to succeed, Kwiek added.


“Our data show that the best way to increase productivity so rapidly is to increase internationalisation in team formation, increase prestige of journals in which we routinely publish and, sometimes, change affiliation to more affluent countries. The combination of internationalisation, journal prestige and country change…should help,” he said.

That said, it was important not to overinterpret the study’s findings, for instance by forcing low-performing researchers into teaching-only roles based on their early track record in science. “There is no reason to give up research early,” said Kwiek, who noted that while “publication indicators work well for huge numbers of observations, [they work] much worse at the individual level”.


jack.grove@timeshighereducation.com



Reader's comments (3)

It is somewhat depressing that research like this is still being carried out using journal-based metrics (e.g. impact factors / journal rankings) to assess the contributions of individual scientists. To determine whether a scientist is a "top performer" - you need to evaluate whether they are actually doing good science! I suggest the authors read the San Francisco Declaration on Research Assessment (DORA) which has been signed by thousands of academics and many institutions across the world. Its #1 recommendation is "Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”
Thank you very much for this comment regarding scientists’ contributions and DORA. In our study, we focused on large-scale science. We had access to the Scopus bibliometric database and several available metrics. When choosing the method for productivity counting, we considered that, in the case of the most prestigious journals, the process of developing a scientific study, writing it up as an article, and going through revisions and peer review often takes a considerable amount of time. Such journals also tend to require research to be of a high standard and, in many cases, innovative. For journals ranked lower, this process is usually much shorter, and the research does not always have to meet the same level of expectations - which of course does not mean that good science cannot be published there, because it certainly can.

It is also worth noting the current situation in many institutions, where evaluation, bonuses, funding and grant applications are, unfortunately, still often influenced by the journals included in a researcher’s publication portfolio.

Regarding DORA, its recommendations are only suggestions. The group is very small, considering how long it has been active and how many millions of scientists there are worldwide. One may also question the fact that both the chairs and the organisations from various countries that signed it still visibly publish a substantial share of their work in top-ranked journals, or promote and rely on them.

My co-author has used and discussed other methods of calculating productivity in previous studies, but here we considered this version to better capture “good” science. I am very grateful for this comment and would be very happy to discuss this further - especially on how to search for good science. Thank you, Lukasz
As you have explained in your pre-print and as you have reiterated here – you have used journal-based metrics as a surrogate (proxy) measure of scientific quality – and the “performance” of individual scientists. The precise source of the metric is not really important – it’s journal-based – so a deeply flawed measure when it comes to assessing individual scientists and their contributions.

DORA has been signed by the National Science Centre in Poland (a major funder) and by the Foundation for Polish Science. So it is far from “very small” in terms of its reach. It is irrelevant that some individual signatories to DORA themselves continue to publish work in journals that could be deemed to be “top-ranked”. DORA makes recommendations about how we should assess the value of science and the contribution of individuals. It is not concerned with where individual scientists might want to publish their work. But by your reasoning – if DORA signatories publish a lot of work in “top-ranked” journals – we should pay particular attention to their views! They must be “top scientists”!

When it comes to using journal rankings as a proxy of quality – were you aware of the finding in some disciplines that there is a negative association between journal impact factor and the chances that the reported findings are reproducible? And more specifically the association between journal impact factor and the incidence of retractions due to scientific fraud? These associations arise because of the perverse incentives that the use of journal-based metrics can create. Initiatives like DORA are not merely “suggestions” - they are fundamental to saving science from this.
