It is no longer enough to rely on research status and the prestige derived from an institution’s heritage to attract students
A total of 86 per cent of students are satisfied with the teaching of their courses, according to the 2013 National Student Survey (NSS) - the same figure as in 2012. That sounds pretty good, but looked at another way, it also means that 14 per cent are less than satisfied - a score that would make no business proud. As a number of politicians and educationalists have argued recently, there is still plenty of room for improvement when it comes to how students in the UK are taught.
How might this be achieved? One measure of teaching performance used by some universities is student evaluation of modules (SEM), whereby students assess the value of the experience and the effectiveness of each module they take. The summarised results for those modules are usually made available to prospective students on university intranets.
Another is student evaluation of teaching (SET). Although results for individual teachers remain largely confidential, the metric is taken into account when assessing an individual's teaching performance.
Unsurprisingly, SET is not universally popular with academics, some of whom yearn for the days when there was no objective way to assess their teaching effectiveness. Some staff are less than cooperative about the process and not all teaching is assessed, but one could argue that SET gives universities access to a tool that could be more fully exploited.
The introduction of such practices has already changed the teaching landscape. Universities are actively attempting to improve their teaching as they seek to attract better students, and their strategies have clearly met with some success as surveyed teaching scores are generally rising.
The trouble is that most universities have little in their armouries to help them deal with poor teachers. Formal performance management or disciplinary action is rarely taken, and universities are reluctant to confront good researchers about low teaching scores. In fact, bad teaching is often “rewarded” by programme directors reducing the lecturer’s teaching load. Consequently, poor teaching generally gives the individual more time to pursue their research interests. Ironically, this is likely to improve rather than reduce their promotion prospects.
But could the higher education sector take a leaf out of the book of US health practitioners? US surgeons now publish their patient survival rates on websites so that prospective patients are better informed when selecting a surgeon; this practice is also being introduced in the UK in the NHS. In a similar manner, universities might publish the SET data on academics to inform student choice and drive teaching standards.
Such a move might have unintended consequences, of course. Staff might attempt to minimise their teaching or avoid large-class teaching (which often correlates with lower scores), for example. It would also penalise those teaching subjects that are more difficult to present in an interesting and entertaining manner. But these problems could be addressed, and the overall outcome would be that staff devote more effort to improving the quality of their teaching.
My view is that publishing SET scores would improve teaching and, as some universities are drifting down the overall rankings owing to poor scores in the NSS, doing so would have obvious benefits. Both SET and the NSS highlight how increased transparency and access to university performance data can drive competition between universities.
It is no longer enough to rely on research status and the prestige derived from an institution’s heritage to attract students. Now is the time for universities to change their tune when it comes to how they reward and motivate good teachers.