Europe's quality wave hits Euro

September 15, 1995

Recently, I was privileged to attend, on behalf of the Committee of Vice-Chancellors and Principals, a meeting in Brussels. This involved an "expert" working group of the Liaison Committee of the European Rectors' Conference (CRE), whose collective expertise is in the field of quality assurance.

Sitting in the Grand Place, en route for the airport, I reflected with the Finnish delegate on how Britain has successfully exported not only its home-grown versions of the quality industry but also the arguments about their respective merits. At the same time, a cultural revolution is sweeping over continental Europe as the democratic and collegial traditions of its universities begin to weaken in the face of the managerialist imperative engendered by the need to respond to the accountability challenge.

Many of those who have followed or been directly involved in the great quality debate in the UK may be surprised to know that controversy does not stop at the English Channel. The background to this is the decision by the European Commission to launch a pilot project on quality assessment. The implicit aims are, in typical Commission fashion, a mix of the practical and the ideological. It is hoped that the project will assist the process of strengthening institutional accountability to key stakeholder interests. More ambitiously, it is expected that this will lead to the Europe-wide recognition of academic and professional qualifications which is the prerequisite of a single and open labour market.

The first phase of the project involves the testing of subject-based quality assessment in a number of specified areas in university and "non-university" institutions in member states. The emphasis is on testing the method rather than looking at outcomes. Diplomatically, rather than impose a common method, guidelines were produced which allowed for flexibility of interpretation as well as differing cultural traditions and political circumstances. Similarly, each country was encouraged to select a mix of "hard" and "soft" cognate subject areas. The national committees are currently drawing up summary reports of their experience which will feed into the Commission's analysis and proposals for phase two.

Early indications suggest that participating institutions and their national committees are divided as to the costs and benefits of this approach to quality assurance. Echoing their British counterparts, most are supportive of a subject-based approach which is geared to quality improvement, provided this does not lead to ranking and that there is no direct link to funding. For those countries which previously had little or no experience of either internal or external quality assurance, the pilot scheme has proved to be a useful developmental tool. However, concerns about the cost and the burden of increasing accountability are commonplace.

At a more general level, a major objective of the Commission - to reinforce the "European dimension" of quality assessment - has proved to be problematic. Thus, the attempt to include "foreign" experts in the visiting teams has in many cases foundered on practicalities, for example, finding a mechanical engineer fluent in Portuguese. More seriously, differences of opinion are beginning to emerge about whether the Anglo-Dutch model of quality assessment which underpins the European pilot scheme is the most appropriate method for external assessment. Moves are afoot either to strengthen the subject basis in phase two by insisting on the evaluation of the same subjects in all member states (despite or because of the potential for ranking?) or to abandon subject-based assessment in favour of institutional audit. The parallels are clear: quality assessment is Commission-driven; institutional audit is an initiative of the permanent committee of the European Rectors' Conference and is offered to participating institutions on a voluntary basis.

The pilot scheme, launched in 1994, was developed jointly by the CRE with the Universities of Goteborg, Oporto and Utrecht. Despite initial difficulties (only one report has been published and rumour has it the other two will never see the light of day), a further 14 institutions have signed up for the next phase. Perusal of the audit guidelines document shows clearly that at a practical level the method draws heavily on that developed by the British HEQC. Interestingly, there are clearly some features which have made European audit much more attractive to institutions than the HEQC model, not least the voluntary nature of the exercise and the potential mileage to be gained from free consultancy by a small team of academics of international repute.

While the CRE audit model looks to the HEQC for its method, the philosophy and focus are Dutch. Indeed, in the battle for academic leadership of the European quality industry, the Dutch are probably currently the winners by a short head. The differences are subtle but important. Thus, the CRE states that its primary purpose in using audit is to stimulate quality improvement. However, the underpinning philosophy, as articulated by Frans van Vught and Don Westerheijden, of the University of Twente in The Netherlands, is that improvement results from focusing on quality management: "The CRE Institutional Quality Audits are instruments to help institutions focus on their strategic, future-oriented choices, as well as the processes and organisational structures related to these choices."

Moreover, it is claimed that where the HEQC audits focus on the past, the CRE audits look to the future. Indeed, a careful reading of the guidelines and the published report reveals the potential for prescriptive solutions likely to be anathema to many institutions. Thus, the guidelines suggest that the self-evaluation prepared by the institution should include a "SWOT" (Strengths, Weaknesses, Opportunities, Threats) analysis which will provide the strategic context for quality management.

The Oporto University audit criticises the lack of institutional authority and recommends a restatement of the university's mission, detailed changes to the internal committee structure and a fundamental reorganisation of the administration. The language used here smacks more of management consultancy for organisational development than of quality assurance. In the wider European context, this fans a tension already hotly debated in the UK. For many here, the development of the quality industry and its associated bureaucracy is seen as challenging the traditional collegial norms of academe and threatening academic autonomy.

In continental Europe, democratic structures are clearly ill-equipped to handle the strategic management model being promoted by those seeking to keep control of the quality bandwagon. Where the rector is elected, he or she often lacks the power - and in some cases the necessary skills - to implement change.

Despite national differences, the fundamental problems are the same in continental Europe as in the UK. Universities are being challenged to justify their purpose and demonstrate they are providing value for money in a language comprehensible to their stakeholders. Arguments about methodological effectiveness are little more than a diversion for those outside the system. How shall we make progress? There is no easy solution.

One thing is clear: the success of quality assurance depends critically on winning the hearts and minds of those charged with delivering a quality educational experience at the grass roots. Would it be too cynical to suggest that academics, like others, are more responsive to carrots than sticks? Perhaps the best use of management tools in this area would be to make a positive link between quality, evidence of its improvement and financial rewards.

Diana Green is pro vice chancellor at the University of Central England in Birmingham.
