As the UK braces for the full impact of the 2014 research excellence framework, its seventh country-wide research assessment exercise, Europe’s other research giant has just given the green light to the introduction of its own assessment system.
Eight years in the making, Germany’s Research Rating (or Forschungsrating) has so far consisted of four subject-level pilots, with evaluations carried out in chemistry, sociology, electrical engineering and British and American studies. On 25 January the German Council of Science and Humanities (the Wissenschaftsrat) approved plans to roll out the system across all fields.
A working group has been tasked with developing a detailed proposal on how the system will be organised. Although a number of questions remain, within a couple of years the system will be implemented across the board, Rainer Lange, head of research at the council, told Times Higher Education.
The subcommittee that designed the Research Rating looked at both the UK and the Dutch assessment systems, and it opted for a model closer to that of the Netherlands.
As with the Dutch system, results of the Research Rating will not be linked to the distribution of research budgets, said Dr Lange.
Although other nationwide programmes in Germany have focused funding on the best institutions, most recently through the €2.4 billion (£2.08 billion) Excellence Initiative, Dr Lange said that the council did not think it was a good idea “to just identify institutions that have been relatively strong in research in the past few years and put all your money into those”.
He added: “Given that research is also very much a question of luck and diversity, that might not be a very clever strategy.”
Instead, the council hopes that university leaders will use the ratings, which will be published at criteria level, to better assess the standing of their research in each area and to develop improved institutional strategies, much as has happened with the Excellence in Research for Australia initiative.
So far the German media have shown little interest in following their Australian peers’ lead in constructing league tables from the results of the ratings exercise, but Dr Lange said he was aware that it could happen when the system is rolled out fully.
In the pilots, the criteria that were used to assess both universities and research institutes included research quality, promotion of young researchers and transfer of research into society.
Another measure that the council hopes to develop, as proposed during the last pilot, is of researchers’ long-term contributions to the research base as a whole, for example through the building of infrastructure or archives.
The impact of research remains the most difficult criterion to capture, said Dr Lange.
Rather than opt for a case-study approach such as that used in the UK’s REF (which, although seen as admirably “liberal”, has led to uncertainty and increased workloads), the German system will develop detailed criteria for the kinds of activity that are relevant in specific subject areas, he added.
One issue that the Research Rating pilots had to grapple with was the use of bibliometric data, a thorny issue for researchers worldwide. In Italy, the first evaluation process led by the National Agency for the Evaluation of Universities and Research Institutes (Anvur), which began in 2011, has faced criticism for its use of publication and citation data in assessing researchers.
After protests made via the online platform Return on Academic ReSearch (Roars) and a number of learned societies, on 11 January Francesco Profumo, Italy’s research minister, issued a circular clarifying guidelines for the next wave of assessment.
The minister said that henceforth in assessing researchers, bibliometric indicators would be neither required nor sufficient in themselves for full evaluation.
Germany’s Research Rating will use both quantitative and qualitative data from the start, said Dr Lange. But exact criteria will be defined for each subject by a committee of domestic and international experts, who also decide when it is appropriate to use metrics.
All the information gathered from universities is fed back into the council to be standardised and analysed before being passed to the original committee for assessment. “It’s a rather flexible process,” said Dr Lange.
Although researchers may well judge such a system to be fairer than one based on crude, easy-to-measure criteria, the burden that comes with flexible evaluation can also be a point of controversy, said Dr Lange.
In France, the bureaucratic burden has even led to the scrapping of the country’s Evaluation Agency for Research and Higher Education, AERES.
“Researchers in France have the feeling they are just here to provide different papers to different administrations and organisations,” said Bernard Meunier, vice-president of the French Academy of Sciences. In September the academy called for AERES to be replaced by a less burdensome system in which guidelines for assessment would be set at the national level but carried out at the local level by committees of experts.
Assessment should be based on “hard facts” and should demand fewer reports, especially from departments that are already known to be good because they publish and recruit at international level, said Professor Meunier.
He said that another problem for AERES was that it was too keen to rate every researcher highly. This is useless if the goal is to focus money on the best people and places, he added.
“If you think everybody’s good, what does that mean? I’m not sure people are ready to accept rankings in France, but at least we don’t need to take the time and energy of people for an inefficient evaluation,” he said.
France’s minister for higher education and research, Geneviève Fioraso, announced in December that AERES would close and would be replaced by a national agency defined by the principles of “independence, simplicity of operation and procedures as well as scientific legitimacy and transparency”, as part of revisions to the law designed to make the French university system more competitive.
According to Katrien Maes, chief policy officer at the League of European Research Universities, increasing competition in research is the driver behind most evaluation systems. The UK’s success in raising research quality is particularly envied, she said.
Britain’s research assessment exercise “has meant using more objective methods to distribute government research money, rather than going on university size, reputation or influence…It isn’t a perfect system, but many countries would like to see something like that, but they aren’t quite there yet,” said Dr Maes.
In any country it is essential to make a system that is fit for purpose, she added. “To quote Albert Einstein: not everything that counts can be counted, and not everything that can be counted counts. It’s about smart measuring, and knowing where to stop,” she said.
Method meets with approval
In Germany, the response of researchers to the Research Rating has been divided, for many of the same reasons as in France.
“I think almost everyone in contact with us has applauded the method…However, the [amount of administrative effort] has also been criticised, so we have to find a good balance between, on the one hand, flexibility, adaptability to the subject area, detail and quality and, on the other hand, effort,” said Dr Lange.
There is also debate as to how much use will be made of the results once they cover all subjects. But there is very little chance that Germany will head down the same route as the UK and start to distribute funding according to Research Rating results, not least because that would be difficult under the country’s federal funding system.
Dr Lange acknowledged that the incentive to engage with the evaluation that funding offers is an important part of the success of the UK system. But of Germany’s choice not to link funding to its Research Rating assessment, he said: “I think we will nevertheless stick to that decision.”