Dutch institutions are taking a positive attitude to their quality control, as Ton Vroeijenstijn finds out. The past decade has brought a breakthrough in thinking about quality assessment and quality assurance in Dutch higher education.
Until 1985, evaluation took place within the institutions themselves, often on an ad hoc basis and in an unstructured way. Then the government promised more autonomy and less interference, but only if quality could be assured.
Originally the institutions were to carry out the internal evaluation alongside an outside body, the Inspectorate for Higher Education, which is responsible for external assessment. However, the Association of Universities in the Netherlands (VSNU) and the Association of Dutch Polytechnics and Colleges (HBO-raad) argued that quality assessment and assurance were primarily the responsibility of the institutions themselves.
The institutions came to an agreement with the minister of education and sciences to make themselves responsible for designing and operating a system of external quality assessment. The inspectorate would be a watchdog ensuring that it was done openly.
In 1988 a system of external quality assessment, coordinated by the VSNU, was introduced for the universities; since 1990 the HBO-raad has coordinated a parallel system for the non-university sector. Both are based on self-evaluation and peer review.
The department offering the course to be assessed carries out a critical self-evaluation, following a guide provided by the VSNU or the HBO-raad; the guide's detailed checklist serves as a tool for self-analysis.
A group of experts visits the department for two days, interviewing staff, students, administrators and committees. The committee then writes a report with conclusions and recommendations, which is sent to the department; the department may respond if it wishes. The report is then published.
The assessment is carried out on a nationwide comparative basis. The same committee visits all departments in the Netherlands (and some in the Flemish-speaking community of Belgium) offering the same programme, and the departments are compared in the published report.
The aim of the exercise is not ranking or rating but accountability and improvement. This puts a heavy load on the committee and demands thorough assessment and reporting.
There is no direct link between the outcome of the assessment and funding. In practice the government may stop funding a programme only when a prolonged lack of quality has been established. If a programme has been assessed as unsatisfactory, the department is given time for improvement, and after six years or less the next committee will review what has been done.
If the situation remains unchanged, the minister may warn the department and decide to stop the programme. In some cases the inspectorate declares the situation in a department alarming; the department must then inform the minister what will be done. If nothing has improved, action may be taken.
After eight years the Netherlands has a stable and broadly accepted system, although individual departments still show some resistance. It is neither an inspection, nor external control, nor a cheap instrument for the government. In a recent report the inspectorate concluded that the system is functioning well, although it could be improved in some respects. It has contributed to greater quality awareness, and the inspectorate has found sufficient signs that departments take the assessments seriously.
A system owned by the institutions themselves and directed towards improvement is bound to last longer than one run by an external body and aimed at inspection and control.
Ton Vroeijenstijn is head of the quality assessment division of the Association of Universities in the Netherlands.