Public policy must be based on evidence. That seems obvious enough, and especially so in higher education. However, it appears that we have our share of prejudice and preconceived opinions.
The initial euphoria following the research assessment exercise results ("we are the best in the world") was replaced in last week's Times Higher Education by scepticism ("it was too easy to get a four-star rating"). The concern is justified and the euphoria, although understandable, was overdone. Last year, Evidence Ltd showed that while UK research was globally outstanding, its reputation was built on a small number of very high achievers. The bulk of our research is fairly ordinary. The Evidence analysis was based on citations, whereas the RAE results were based on peer review.
If, as ministers and the Higher Education Funding Council for England say, the RAE shows that we lead the world and that world-class research is widely distributed, then the citation evidence must be discounted. Where does that leave the rush to replace the RAE with "metrics", and citations in particular? Where is the evidence base for that?
There will almost certainly be a shake-up of funding forced on Hefce as a result of the new RAE scoring system. It will be impossible to avoid providing funds to hotspots of outstanding research that were previously lost in low-scoring departments. And, if some get more, others must get less, even if they remain top-quality. It will be a sure sign that Hefce is fiddling the figures if there is not a substantial rise in the funding of new universities at the expense of the research-intensive ones. That's probably no bad thing - but who knows?
After the 2001 RAE, Charles Clarke, then the Education Secretary, said funding should be concentrated in ten universities. Hefce, by contrast, still says that excellence should be funded wherever it occurs. That decision, too, will be taken without an evidence base.
Where there is no evidence, policymakers can at least be forgiven for doing the best they can. What is unforgivable is to maintain policies in the face of evidence that they are wrong. We have long known that month of birth greatly affects a child's chances of going to university. The reason is obvious - the inflexibility of a school system that requires children to start school, and then to move up each year, whether or not they are ready. Other countries do not suffer the same problem. The evidence is clear and the effects are serious - every year some 10,000 young people fail to go on to higher education solely for the sake of administrative convenience. Yet there has been no inclination to address the issue. A Government serious about widening participation would act on the evidence. The answer is simple - but it would cost money if large numbers of pupils repeated a year. There's the rub.
Another subject now illuminated by evidence is the student academic experience. The Higher Education Policy Institute has shown that the time students spend studying varies enormously between subjects and, within subjects, between universities. The response of some in the sector has been extraordinary - in effect, an assertion that study time is irrelevant to educational outcomes. Such defensiveness may be understandable, but it is unacceptable. It is welcome that the Innovation, Universities, Science and Skills Committee is to investigate the student experience. Let us hope it gets to the bottom of these variations.
Evidence can be inconvenient, but policymaking without it is poor policymaking.