One would expect the professions to be able to justify their services by showing that clients benefit from them. In fact, all professions resist gathering the necessary evidence.
Progress towards quality control goes through three distinct phases. In the first, professions resist evaluation on the grounds that it is impossible or inappropriate. In the second, procedures are developed which look at everything except long-term consequences. Internationally, many studies have shown that this is more likely to depress than enhance the quality of services. Nevertheless, systems of this kind are now being imposed on British universities.
Only as a last resort, and as a way of avoiding alternative evils, do professions move to the third stage, in which they seek, and learn from, unambiguous information about their successes and failures. No profession is yet fully into this phase, but medicine is closest to it. The history of quality control in medicine helps us understand what is happening in education.
The need to keep records of outcomes, and to use controlled trials to extract the maximum information from them, was clearly stated by Percival at the beginning of the 19th century and by Florence Nightingale about 1850. Both were largely ignored. In 1912, against much opposition, Codman recorded outcomes at his own hospital. Shortly afterwards, the American College of Surgeons (and others) set up quality control systems for hospitals which recorded organisational structure, qualifications of staff, rules governing their work, accessibility of medical records, and therapeutic facilities, ie almost everything except the relationship between treatment and its success or failure. There is a striking similarity between what the medical profession did then, and what is being done now in education.
Following the second world war, there was a big increase in controlled clinical trials but, as our minister of health has recently pointed out, doctors still do not make full use of them to improve the quality of care. Doctors, like teachers, place greater reliance on their unevaluated experience. But there is now a Centre for Evidence-Based Medicine in Oxford which hopes to change this by collating, summarising and circulating the results of medical research and encouraging the use of long-term monitoring to evaluate treatments.
In a new book, Monitoring Education: indicators, quality and effectiveness, to be published this year by Cassell, Carol Fitz-Gibbon of the University of Newcastle upon Tyne suggests that education should follow the lead given by medicine, and she is working towards the creation of a Centre for Evidence-Based Education. More than 1,000 schools are already collaborating with her and Peter Tymms in combining grassroots research with long-term monitoring of educational progress. The new centre will continue this work and also take responsibility for disseminating the results of the most useful educational research.
On a smaller scale, methods are being developed for monitoring the long-term consequences of university education. But the Higher Education Funding Council for England and the Committee of Vice Chancellors and Principals seem determined to impose time-consuming methods of "assurance" (based on short-term observations of practice) and "audit" (based on the organisational paperwork). Must universities remain stuck in stage two?
It is not too late to adopt a more rational approach and set up a quality control system based on outcomes. We could:

* Collect available measures of social and academic attainment before, during and, particularly, after university. In selecting measures to be used, many people should be consulted, since there are many different consequences of education
* Estimate the reliability of these measures and make the estimates public
* Look for patterns among the "before", "during" and "after" measures. This will improve our understanding of what is happening
* Compare "before" and "after" measures to estimate "value added"
* Devise simpler and more comprehensible ways of measuring the most important variables and their interrelationships
* Develop hypotheses to explain these relationships and to suggest effective forms of intervention
* Test these hypotheses in pilot studies (ie controlled trials)
* Use the results of the pilot studies to inform teachers, but monitor what happens if this leads to a change in practice. Less adequate routes to innovation are not only inefficient, they are immoral, because they impose unjustified practices on students
* Use the results of pilot studies to convince students, government and the electorate of the value of education.
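The "value added" step above can be illustrated with a small sketch. One common approach (an assumption here, not a method the article specifies) is to regress "after" scores on "before" scores: each student's residual estimates the progress made beyond what their intake score predicts. All names and figures below are hypothetical.

```python
# Minimal sketch of a "value added" estimate: residuals from a simple
# least-squares fit of "after" measures on "before" measures.
# The data are invented for illustration only.
from statistics import mean

def value_added(before, after):
    """Residuals from a least-squares line fitted to (before, after) pairs."""
    mb, ma = mean(before), mean(after)
    # slope = covariance(before, after) / variance(before)
    num = sum((b - mb) * (a - ma) for b, a in zip(before, after))
    den = sum((b - mb) ** 2 for b in before)
    slope = num / den
    intercept = ma - slope * mb
    # Each residual: actual "after" score minus the score predicted from intake.
    return [a - (intercept + slope * b) for b, a in zip(before, after)]

# Hypothetical entry and final scores for five students.
entry = [45.0, 60.0, 55.0, 70.0, 50.0]
final = [55.0, 64.0, 66.0, 72.0, 58.0]
residuals = value_added(entry, final)
# Positive residuals suggest more progress than intake scores alone predict.
```

This is only the crudest version of the idea: a serious system would also adjust for the reliability of the measures, as the second step in the list recommends.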
The usual objections to such proposals are:

* They are too expensive. But they need to be no more expensive than our present labour-intensive procedures. Many of the measures are already available. The greatest innovation is the suggestion that we should look at the relationships between them
* Measurements oversimplify things, and it is better to rely on the intuitions and subjective judgements of "experts". This is a hypothesis which has been tested and found wanting many times. Only the ignorant continue to believe it. The "combinatorial explosion" which follows when we look at the interrelationships between many measures produces a greater richness of understanding than is possible in less systematic judgement
* They will limit academic freedom and innovation. This is only the case when too limited a set of measures is used, such as the results of university examinations or the subjective judgements of old men and women. Learning the detailed consequences of one's actions actually increases innovation, and quickly sorts the wheat from the chaff.
The damaging effects of a restricted range of measures are increased when the measures have financial implications. This leads to cheating and to neglect of the primary purpose. It is inevitable that our imposed systems of quality control will have these, as well as other, damaging effects. They should be replaced by evaluations based on outcomes.
Ian Howarth is emeritus professor of psychology at the University of Nottingham.