What’s in a grade?

Students need to understand assessment criteria in order to spot weaknesses in their own work, says David Carless

April 9, 2015

Undergraduate students in the 1970s and 1980s often had little idea about the criteria on which they were being assessed. At best, through repeated interactions with their teachers, they gradually came to understand what kind of work was expected of them.

By contrast, contemporary students generally receive lists of criteria or grade descriptors and specifications of expected outcomes. These are meant to clarify expectations and bring much-needed transparency to assessment processes.

But how useful are lists of assessment criteria for students in reality? How effective are they in communicating tacit knowledge about quality and standards? And what might be done to make them more accessible?

My recent research at the University of Hong Kong reveals that students find criteria vague, couched in an academic discourse that is hard to penetrate. Students told us that the terminological similarity between criteria for different courses and subjects reduced their meaningfulness, and that terms such as “excellent”, “good” and “satisfactory” failed to help them understand what good-quality work would actually look like.

Many students did not study grade descriptors at all seriously, and misunderstandings were frequent. They often made inaccurate statements about what was required or how they were being assessed. Furthermore, a number of students did not believe that the stated criteria represented how they would actually be assessed. They felt criteria would be outweighed, in reality, by teachers’ personal feelings regarding “hidden criteria” such as student effort and the general impressions they made in class.

Good practices were evident in some of the classes I observed, where students themselves were involved in generating or analysing assessment criteria. This helped them to engage with what good performance involves. Even more effective in this regard was the use of concrete examples of previous student work, which can help students to understand what teachers are looking for in specific assignments. Analysis of exemplars can also be effectively linked to criteria, allowing students to judge the samples on the basis of specific qualities rather than relying on personal reactions. Linkages between samples and grade descriptors can help to make criteria more meaningful.

Seeking students’ suggestions about how a specific exemplar could be improved is also useful in that it can help students to see the difference between their present level of performance and the target level. This can help to develop what Royce Sadler, emeritus professor of higher education at Griffith University in Australia, refers to as “evaluative expertise”: the evolving ability of students to make informed judgements about their own work and that of others.

Using exemplars is, of course, not a panacea. It runs the risk that students view the exemplars as model answers to be imitated, putting a brake on their creativity and discouraging innovative approaches. A useful strategy is to share exemplars which are parallel to, but not the same as, the assignment being attempted. This encourages students to take ownership of insights and transfer them to their own work.

But the potential for exemplars to demystify assessment is fully realised only if they are used as a springboard for further discussion. While this can sometimes be difficult to handle if students lack competence in evaluating exemplars accurately, one useful suggestion is to begin with peer group discussions and then move into teacher commentaries that build on and add to students’ thoughts about the samples. Finally, students need to identify how emerging insights can inform their own work.

The bigger picture is that assessment faces three competing priorities: judging student achievement, promoting student learning and satisfying the demands of quality assurance. Criteria loom large in relation to all three, but my experience suggests that those criteria will be truly meaningful to students only if they are supplemented by dialogue around exemplars.
