More than 11 million research papers, published between 2009 and 2014 and drawn from Elsevier’s Scopus citation database, have been analysed as part of the global research project that underpins the Times Higher Education World University Rankings, to be published on 30 September.
But around 600 papers published during that period will be excluded from the calculations. Why? Because we consider them to be so freakish that they have the potential to distort the global scientific landscape.
One such paper, in physics, is snappily titled “Charged-particle multiplicities in pp interactions at sqrt(s) = 900 GeV measured with the ATLAS detector at the LHC”. It has clearly made a significant contribution to scholarship, based on groundbreaking research at the Large Hadron Collider, and that is reflected in its high number of citations. But the problem arises from the fact that it has 3,222 authors (another LHC paper published this year reached 5,154 authors, meaning that only nine pages of the 33-page paper were actually concerned with the science; the rest were dedicated to the list of authors).
A similarly unusual paper, this time from biology, appeared this year in the journal G3 Genes, Genomes, Genetics and examined the genomics of the fruit fly.
“Drosophila Muller F Elements Maintain a Distinct Set of Genomic Properties Over 40 Million Years of Evolution” has a more modest 1,014 authors, but it includes 900 undergraduates who helped edit draft genome sequences as part of a training exercise.
In the ensuing debate about how to properly credit academic research, neuroethologist Zen Faulkes, from the University of Texas Rio Grande Valley, wrote on his blog, Neurodojo: “I was curious what you had to have done to be listed as an author. With that many, it seemed like the criteria for authorship might have been, ‘Have you ever seen a fruit fly?’… Papers like this render the concept of authorship of a scientific paper meaningless.”
Under THE’s previous rankings methodology, which used data and analysis provided by Thomson Reuters, each and every one of the authors of these papers and others like them (which also tend to attract unusually high volumes of citations) would be given equal credit for the work when calculating a university’s research impact, which counts citations per paper, normalised against global citation levels for each discipline.
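The field-normalised measure described above can be sketched roughly as follows. This is an illustrative simplification, not THE’s or Elsevier’s actual formula: the data, baseline figures and function name are all hypothetical.

```python
def normalised_impact(papers, world_baselines):
    # papers: list of {"citations": int, "field": str}
    # world_baselines: hypothetical world-average citations per paper
    # for each discipline (in practice also matched by publication year)
    scores = [p["citations"] / world_baselines[p["field"]] for p in papers]
    # A score of 1.0 means citations exactly in line with the world
    # average for the relevant fields; 2.0 means twice the average.
    return sum(scores) / len(scores)

papers = [
    {"citations": 20, "field": "physics"},  # twice the physics baseline
    {"citations": 2, "field": "history"},   # equal to the history baseline
]
world_baselines = {"physics": 10.0, "history": 2.0}
print(normalised_impact(papers, world_baselines))  # 1.5
```

The normalisation matters because raw citation counts vary enormously by discipline: an average paper in particle physics collects far more citations than an average paper in history, so comparing raw counts would penalise institutions strong in low-citation fields.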
While this approach may not have had a statistically significant effect on large, comprehensive institutions such as Harvard University, which typically publishes around 25,000 papers a year, it could have a distorting effect on smaller institutions with much lower overall volumes of research (our threshold for inclusion in the rankings is 200 papers a year over five years). Not only could it artificially inflate a university’s research impact score but, given that research impact is worth 30 per cent of the overall ranking score, it could also unfairly push a small institution up the overall ranking table.
After extensive discussion with external experts, our new bibliometric data supplier, Elsevier, and among our burgeoning internal team of data experts (THE’s data and analytics director, Duncan Ross, blogs on the subject here), we have agreed that this approach is not appropriate.
So for the 2015-16 World University Rankings, we have decided to exclude from the analysis all papers with more than 1,000 authors. This amounts to 649 papers from a total of 11,260,961 papers – or 0.006 per cent of the total. It also adds up to 19,627 citations excluded from a total pool of 51,404,506 citations used to calculate the rankings – or 0.04 per cent of the total.
This might not be a perfect solution to a nuanced challenge, and it will cause some unwelcome volatility in the rankings this year.
It will no doubt frustrate a small number of institutions that have benefited from the previous practice and that will see themselves ranked lower this year than last.
But until the global higher education sector can agree a fair and robust way to attribute author credit properly in such freak circumstances - and while THE’s data team takes the time to examine proposed alternatives such as fractional counting for all authors on all papers - we believe we have taken a transparent and fair approach.
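Fractional counting, one of the alternatives under consideration, divides each paper’s citations equally among its authors rather than crediting every author in full. A minimal sketch of the idea, using entirely hypothetical institutions and citation figures:

```python
from collections import defaultdict

def fractional_counts(papers):
    # papers: list of {"citations": int, "authors": [institution, ...]},
    # with one entry in "authors" per author (institutions can repeat).
    # Each author receives citations / n_authors; an institution's total
    # is the sum of its authors' shares.
    totals = defaultdict(float)
    for p in papers:
        share = p["citations"] / len(p["authors"])
        for inst in p["authors"]:
            totals[inst] += share
    return dict(totals)

papers = [
    # hypothetical 1,000-author collaboration paper: 10 authors at Uni A
    {"citations": 2000, "authors": ["Uni A"] * 10 + ["Other"] * 990},
    # conventional two-author paper
    {"citations": 40, "authors": ["Uni A", "Uni B"]},
]
print(fractional_counts(papers))
# Uni A: 10 * (2000/1000) + 40/2 = 40.0
```

Under full counting, Uni A would be credited with all 2,000 citations from the collaboration paper; under fractional counting its share shrinks to 20, so a single hyper-authored paper can no longer dominate a small institution’s citation record.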
Phil Baty is editor of the THE World University Rankings.