Stevan Harnad, professor of cognitive science, Department of Electronics and Computer Science, Southampton University.
Unlike journalists or book authors, researchers receive no royalties or fees for their writings. They write for "research impact": the sum of all the effects of their work on the work of others and on the society that funds it. So the extent to which research is read, used and built on in further work needs to be measured.
One natural way to measure research impact would be to adopt the approach of the web search engine Google. Google measures a website's importance by rank-ordering search results according to how many other websites link to them: the more links, the higher the rank. This works amazingly well, but it is far too crude for measuring research impact, which is about how much a paper is being used by other researchers. There is, however, a cousin of web links that researchers have been using for decades as a measure of impact: citations.
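Link counting of this kind is simple enough to sketch in a few lines of Python. This is a toy illustration only, not Google's actual algorithm, and the site names and links are invented:

    # Rank websites by how many other sites link to them.
    # A crude sketch of link-based ranking; all names are invented.
    links = {
        "site-a": ["site-b", "site-c"],   # site-a links out to b and c
        "site-b": ["site-c"],
        "site-c": ["site-a"],
        "site-d": ["site-c", "site-a"],
    }

    inbound = {}
    for source, targets in links.items():
        for target in targets:
            inbound[target] = inbound.get(target, 0) + 1

    # The most-linked-to site ranks first.
    for site, count in sorted(inbound.items(), key=lambda x: -x[1]):
        print(site, count)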
Citations reference the building blocks that a piece of research uses to make its own contribution to knowledge. The more often a paper is used as a building block, the higher its research impact. Citation counts are powerful measures of impact. One study has shown that in the field of psychology, citation counts predict the outcome of the research assessment exercise (RAE) with an accuracy of more than 80 per cent.
The RAE ranks all departments in all universities by their research impact and then funds them accordingly. Yet it does not count citations. Instead, it requires universities to spend vast amounts of time compiling dossiers of all sorts of performance indicators. Then still more time and effort is expended by teams of assessors reviewing and ranking all the dossiers.
In many cases, citation counts alone would save at least 80 per cent of all that time and effort. But the Google-like approach also suggests ways to do even better, enriching citation counts with another measure of impact: how often a paper is read. Web "hits" (downloads) predict the citations that follow later: to be used and cited, a paper first has to be accessed and read. And downloads are usage, and hence impact, measures in their own right.
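The download-citation relationship can be tested directly: take early download counts and later citation counts for the same papers and compute their correlation. A minimal sketch, with figures invented purely for illustration (real studies use large samples of journal papers):

    # Correlate early downloads with later citations for the same papers.
    # The figures below are invented for illustration only.
    downloads = [120, 45, 300, 80, 15]   # downloads in the first six months
    citations = [10, 3, 25, 7, 1]        # citations two years later

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(round(pearson(downloads, citations), 2))  # near 1: downloads predict citations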
Google also uses "hubs" and "authorities" to weight link counts. Not all links are equal: a link from a highly linked site means more than one from a little-linked site. The citation counterpart is to weight citations by the standing of the citer: it matters more to be cited by a Nobel laureate than by a new postdoc.
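The hubs-and-authorities idea (Kleinberg's HITS algorithm) carries over directly to citation graphs. A minimal sketch, on an invented citation graph: a paper's "authority" grows with the hub scores of the papers citing it, and a paper's "hub" score grows with the authority of the papers it cites:

    # Hubs-and-authorities (HITS) sketch on an invented citation graph.
    cites = {
        "paper-a": ["paper-c"],
        "paper-b": ["paper-c", "paper-d"],
        "paper-c": ["paper-d"],
        "paper-d": [],
    }

    papers = list(cites)
    hub = {p: 1.0 for p in papers}
    auth = {p: 1.0 for p in papers}

    for _ in range(20):  # iterate until the scores settle
        auth = {p: sum(hub[q] for q in papers if p in cites[q]) for p in papers}
        hub = {p: sum(auth[q] for q in cites[p]) for p in papers}
        na = sum(v * v for v in auth.values()) ** 0.5
        nh = sum(v * v for v in hub.values()) ** 0.5
        auth = {p: v / na for p, v in auth.items()}
        hub = {p: v / nh for p, v in hub.items()}

    # The most heavily cited-by-strong-citers papers emerge as "authorities".
    print(sorted(auth.items(), key=lambda x: -x[1]))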
What this new world of webmetrics needs, if it is to be mined and used to encourage and reward research, is not a four-yearly exercise in paperwork. All university research output should be continuously accessible, and hence assessable, online: not only the references cited but the full text. Computer programs can then extract a whole spectrum of impact indicators, adjustable for any differences between disciplines, as the sketch below illustrates.
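One such discipline-adjusted indicator, sketched here with invented figures, is a field-normalised citation score: each paper's citation count divided by the average for its discipline, so that fields with very different citation habits can be compared on a common scale:

    # Field-normalised citation score: a paper's citations divided by the
    # average citation count in its discipline. All figures are invented.
    papers = [
        {"id": "p1", "field": "psychology",  "citations": 40},
        {"id": "p2", "field": "psychology",  "citations": 10},
        {"id": "p3", "field": "mathematics", "citations": 8},
        {"id": "p4", "field": "mathematics", "citations": 2},
    ]

    by_field = {}
    for p in papers:
        by_field.setdefault(p["field"], []).append(p["citations"])
    field_means = {f: sum(cs) / len(cs) for f, cs in by_field.items()}

    for p in papers:
        score = p["citations"] / field_means[p["field"]]
        print(p["id"], round(score, 2))  # 1.0 = average for the discipline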
The efficiency, power and richness of these webmetric impact indicators are not their only, or even their principal, benefit. The citation counts of papers whose full texts are freely accessible on the web are more than 300 per cent higher than those of papers that are not. All UK research stands to increase its impact dramatically by being put online. Every researcher should have a standard electronic CV, continuously updated, with all the RAE performance indicators listed and every journal paper linked to its full text in that university's online "e-print" archive. Webmetric assessment engines can do the rest.
At Southampton University, we have designed free software for creating the RAE CVs and e-print archives, along with Citebase, a webmetric engine that analyses citations and downloads. The only thing still needed is a national policy of self-archiving all research output to enhance and assess its impact.