This is the season of soon-to-be graduates filling in the National Student Survey. Across the sector, the NSS is being watched with greater anxiety than ever before because the results will be a key contributor to the teaching excellence framework, performance in which may have serious financial implications. Of the 27 questions students are asked to respond to, those in the category “assessment and feedback” produce the largest variation across institutions. Although these correlate least with students’ overall satisfaction, they are used extensively in constructing league tables and dominate our daily discourse.
Some months ago, I shared with readers a secret by which all of us could improve our NSS performance (“A satisfyingly simple solution”, Letters, 13 October 2016). The idea was to use staff satisfaction as a tool for achieving student satisfaction, because it is axiomatic that the most effective way to reach out to students is via those who teach them. However, implementing that innovation is something over which an individual lecturer has very little control. Hence I write to share a second secret.
The correct answer to the question “Why do students express dissatisfaction with assessment?” is this: “Because we overdo it, stupid!”
While over-assessment can be discussed at various levels, permit me to explain, using an example, one part of it over which an individual lecturer has control. Suppose you teach a module in which 20 per cent of the marks are awarded for a practical assignment or an essay that students complete mid-semester, the marks from which are added to their scores in an end-of-semester examination contributing the remaining 80 per cent. Here are three different ways you could grade this part of the assessment: (a) design a marking scheme in the range zero to 20, mark on this scale and report the score; (b) design and use a marking scheme in the range zero to 100 and divide the score by five before reporting; or (c) design a marking scheme in the range zero to 10,000, mark on that grand scale and divide the score by 500 to scale down.
While (a) is the correct thing to do, my guess is that (b) – which I suggest is “overdoing it” – is common practice. The folly in this approach can be easily understood by considering (c). More specifically, in scheme (b), the marker has the freedom to award 62 and 58 to two students with roughly similar work. This difference serves no purpose and will be lost when the scores are scaled down to the required range: 62/5 = 12.4 and 58/5 = 11.6, both of which round to 12. Unnecessary dissatisfaction sets in when marked work is returned and students compare notes.
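The arithmetic behind the example can be checked in a few lines. This is a hypothetical sketch, not part of the letter: the function name is invented, and it assumes the scaled scores are rounded to the nearest whole mark.

```python
def scale_to_component(raw_mark: int, divisor: int = 5) -> int:
    """Scale a raw mark down to the component's range.

    Scheme (b): mark out of 100, then divide by 5 to fit the
    20-mark component, rounding to the nearest integer.
    """
    return round(raw_mark / divisor)

# Two students with roughly similar work, marked on the 0-100 scale:
print(scale_to_component(62))  # 12  (62 / 5 = 12.4)
print(scale_to_component(58))  # 12  (58 / 5 = 11.6)
# The four-mark distinction the marker laboured over vanishes on the
# 0-20 scale: both students receive 12.
```

The same collapse happens, only more dramatically, under scheme (c): a 2,000-point spread on the 10,000-mark scale survives as just four marks after dividing by 500.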
Anyone who has not got it yet should consider an easier thought experiment: to work out travel time, you would not want to measure the distance between Southampton and Winchester using a 12-inch ruler, would you?
University of Southampton