Risk-based quality assessment ‘cannot work’, study concludes

King’s College London research finds that the ability of metrics to predict problems in higher education providers is ‘extremely limited’

26 November 2015
[Image: group playing pin the tail on the donkey. Source: Rex]
Guessing games: the PhD student who carried out the research says a risk-based approach ‘cannot be achieved in practice’

Plans to monitor standards in English higher education providers using metrics “cannot be achieved in practice”, a major study has concluded.

The Higher Education Funding Council for England proposes to abolish regular inspections of established providers and to instead rely on student outcomes data for quality assessment, but research conducted by King’s College London warns that this sort of approach could lead to serious problems going unchecked.

The report also says that a risk-based system could wrongly raise questions about standards at some of England’s most prestigious universities and, as a result, could still lead to most providers undergoing a review of the type currently undertaken by the Quality Assurance Agency.

The study, presented at the European Quality Assurance Forum in London on 20 November, compares thousands of pieces of data on the performance of providers with the results of hundreds of institutional reviews conducted by the QAA between 2007 and 2014.

Alex Griffiths, a PhD student at King’s, found that the ability of metrics-based models to predict the results of QAA reviews, and hence to prioritise them, was “extremely limited”.

Hefce plans to use data on student outcomes, and Mr Griffiths tested the effectiveness of these indicators, but he found that the most successful predictive model actually drew on information about increases in student numbers, the financing of staff and overspends on research budgets.

Even so, when asked to prioritise the 13 institutional reviews that included “unsatisfactory” judgements out of the 184 considered, the model had an error rate of 92.5 per cent, the paper says.

Put another way, if reviews had been conducted in order of the risk level produced by the model, 174 reviews would have had to be carried out before all 13 that resulted in unsatisfactory judgements were reached. In other words, 161 institutions with satisfactory standards would have been prioritised unnecessarily.
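The arithmetic behind that figure can be illustrated with a minimal sketch, assuming the error rate is simply the share of prioritised reviews that turned out to be satisfactory (the function name and this definition are illustrative assumptions, not details taken from the paper):

    def prioritisation_error_rate(reviews_needed, unsatisfactory):
        # Satisfactory reviews conducted unnecessarily, as a share of
        # all reviews needed to reach every unsatisfactory judgement
        # (assumed definition, for illustration only).
        unnecessary = reviews_needed - unsatisfactory
        return unnecessary / reviews_needed

    # Figures from the study: 174 model-ranked reviews were needed to
    # capture all 13 unsatisfactory judgements among 184 institutions.
    print(f"{prioritisation_error_rate(174, 13):.1%}")  # prints 92.5%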

For alternative providers, the most effective predictive model was found to draw on data about the size of the institution, its financial position, and past review performance. But the results were still disappointing, with an error rate of 83.5 per cent.

The error rate for further education colleges was 80.4 per cent.

Mr Griffiths said that his findings demonstrated that, while a risk-based approach was “attractive”, it “cannot be achieved in practice”.

“I’m sure this data is very valid, and useful in some senses, but what you can’t use it for is trying to prioritise quality assessment reviews, because even the best model performs pretty poorly and you end up reviewing most institutions anyway,” he said.

“If Hefce do this, it will put a lot of providers who are performing perfectly satisfactorily through review, and will not prioritise other providers who are not doing as well, letting them get away with it.”

The study received funding from the QAA and the EQAF paper was co-authored by Elizabeth Halford, the QAA’s head of research and intelligence.

Ian Kimber, the QAA’s director of quality development, said that the results demonstrated that a system of assessment which relied on metrics “is not going to work”.

“Any risk-based approach needs to use its metrics in a contextual setting and not rely on them because they won’t give you the correlation that you are after,” he said.

Mr Griffiths will reveal more of his findings at the King’s International Centre for University Policy Research on 30 November.

chris.havergal@tesglobal.com

Postscript

Print headline: Red light on metrics model for standards


Reader's comments (1)

There is one small problem here. The QAA assessments ignore entirely the quality of what's taught. They give top ratings to courses in pure quackery as long as the right bits of paper can be produced, while ignoring the fact that the unfortunate students are being taught pure nonsense. That fact alone makes QAA reports almost useless, and sometimes actually harmful to quality.