A valuable Dutch lesson in research assessment

July 7, 1995

I recently spent a week chairing one of the discipline committees set up by the Association of Universities in the Netherlands (VSNU) for the assessment of research quality in Dutch universities. This highlighted a number of differences between the approaches adopted for research assessment in the Netherlands and those that will be used in Britain for the forthcoming research assessment exercise, and it raised some important questions about the purpose of evaluation itself.

There are a number of differences in the context within which research evaluation takes place. The Dutch Association of Universities is an advisory body rather than a grant-allocating body like the Higher Education Funding Councils in Britain. Consequently there is no direct funding attached to the five-point grading system used in the Netherlands. Nevertheless, there is no question that the universities take the advice given by the association very seriously in the development of strategic plans. In the Netherlands it is specific research programmes, rather than departments or groups of departments, that are evaluated. Consequently it is possible to focus on substantive themes within large departments and also to evaluate joint ventures involving staff from several different disciplines.

More important, however, is the process of evaluation itself. Every programme director has an opportunity to meet the committee to explore issues raised by its submission. My committee found this extremely valuable in clarifying points of detail. It enabled fine-tuning of the initial grades allocated to programmes based on the submissions themselves. This generally involved only changes of one notch in the ranking scale, usually in favour of the groups.

The committee also meets all the faculty boards within which the programmes are located. This provides a very important opportunity to explore the wider context within which the work of each group takes place. It helps the committee to probe faculties about their future plans and to explore resource issues relating to the group being evaluated.

I found this vital especially where groups were experiencing pressures to increase their teaching loads or having to deal with declining student numbers while maintaining an active research programme. This dialogue also enabled the faculties to explore some of their ideas with the committee.

Most important of all in my opinion is the provision that is made in the protocol of the Association of Universities in the Netherlands for the feedback of information to everybody involved. This involves the publication of a report for each discipline which contains the grades for each of the programmes and half to a whole page of commentary explaining the rationale behind them.

In addition the committee is required to prepare an evaluation of the state of research in each of the sub-disciplines being evaluated. This highlights the strengths of each sub-discipline and draws attention to the quality of the field as a whole as well as to that of the programmes themselves.

The committee is also required to prepare a short overview of research in each faculty on the basis of its discussions. Once again this gives it the opportunity to draw attention to strategic issues and to problems needing attention that arise out of the evaluation.

Last but not least, faculty boards and programme directors are given an opportunity to see the report in draft and make factual corrections to it before its eventual publication by the association. This gives them a further opportunity to correct any misunderstandings.

Obviously no assessment can be perfect. Nor can it be made less than painful for some people. But what impressed me about the Dutch system is that it is an open process which seeks to develop a real exchange of views between the evaluators and the evaluated. This seems to have considerable advantages over the league table approach adopted in Britain which avoids dialogue either during the evaluation process itself or in the form of published feedback.

This limits the usefulness of the whole process of assessment by emphasising the grading aspects at the expense of the advice, and sometimes even encouragement, that should be given to the programme directors and faculties who have to live with the outcomes of this process.

IAN MASSER, Faculty of Architectural Studies, University of Sheffield
