
A step-by-step guide to designing marking rubrics that will save hours of time

Designing marking rubrics that provide guidance while leaving students enough flexibility to demonstrate knowledge and skills in multiple ways is a difficult balancing act. Paul Moss explains how it can be done

19 Apr 2022

Created in partnership with The University of Adelaide

Designing marking rubrics that provide enough flexibility for both students and the marker is no easy feat. Fortunately, it can be done.

The biggest criticism of rubrics is that they can be too reductive, limiting the answers a student can and will give. A student who focuses solely on the criteria can produce a response that lacks cohesion, while a student who produces a unique and interesting answer cannot be awarded the mark it deserves because the response doesn’t fit anywhere in the criteria descriptions.

Attention to the design of marking rubrics can remove this limitation, save you time and help students understand their own strengths and weaknesses. Here, I explain my choices in designing the attached rubric example (downloadable above).

Key components of good rubric design 

1. Create a consistent range. Apply your university’s recommended grade boundaries to every marking rubric, and include a range within each boundary so that quality can be differentiated within it. For example, high distinction equals 100% to 85%, distinction equals 84.99% to 75%, and so on.
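
For illustration only, a full set of bands on a typical Australian scale might run: high distinction, 100% to 85%; distinction, 84.99% to 75%; credit, 74.99% to 65%; pass, 64.99% to 50%; and fail, below 50%. Substitute whatever boundaries your own institution recommends.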

2. Consider synonyms for the quality represented in the ranges. Synonyms help the teacher articulate what each grade boundary represents: they make concrete the traditional boundaries, which can seem abstract, and give the learner a tangible explanation. A glossary of terms in the general assessment area of the learning management system, clarifying what you think constitutes the various levels, would also be useful to students. Explain what you mean by “insightful”, for example.

3. Design and clarify your criteria. Consider the notion of constructive alignment when designing criteria. In principle, criteria should be subsets of the broader domain of knowledge represented by the course learning outcomes, acting as signposts to that domain’s main themes, key ideas, knowledge or skills. More than one learning outcome may be assessed, but be careful that the number of criteria in a marking rubric does not climb too high: too many criteria make the rubric cumbersome, especially when marking. Don’t be overly descriptive about what you are looking for; save that detail for the descriptions of quality in the ranges.

Each criterion should also be linked to a specific learning outcome and show the total number of marks allocated to it. This guides students on how much energy to spend on each criterion. I have marked countless assessments in which students spread their energy disproportionately across the criteria, making it impossible for them to achieve the higher scores.

4. Keep descriptions flexible. One key strategy for ensuring flexibility in the descriptors is to use the phrase “such as”. By suggesting elements that could be included in a response to a criterion, you provide sufficient guidance while leaving open the possibility that other elements might be included. This gives you the flexibility to award a mark in a given range even when a student responds in a manner not covered by the “such as” list.

The descriptor could also include the synonym for the range, such as “insightful”, as an indicator of the quality needed to satisfy the criterion.  
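
To make this concrete, here is a hypothetical descriptor for the high-distinction range of a critical-analysis criterion: “Provides an insightful evaluation of the evidence, considering elements such as methodological limitations, competing interpretations and implications for practice.” The synonym signals the quality required, while the “such as” list guides students without ruling out an original response built on different elements.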

I have provided an explanation of the design of the descriptor, in red (see rubric in downloadable documents). This is for your design purposes and would not be included in the rubric. The idea here is to play devil’s advocate – is there a student answer that can’t be assigned a range?  

5. Use responses to create generic feedback. Because the descriptors set out what could be included in a response, feedback is made easy. Comments for improvement would direct a student to what they missed from the range immediately above the one in which they scored. Providing feedback in this way is far more directive and actionable. Once all the marking is done and you have a picture of the types of responses generated across the ranges, you can create generic feedback for each range. This saves you writing extensive comments for most students, because you can train students to look at the whole-cohort feedback and identify for themselves what they missed.
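
For instance, an invented whole-cohort comment for the distinction range might read: “Most responses in this range identified the key themes accurately but did not weigh competing interpretations against each other; doing so is what lifts a response into the insightful, high-distinction band.” Your own comments would, of course, draw on the actual patterns you see across the cohort.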

6. Use whole-cohort feedback to reduce marking time. As you mark the assessment and apply the rubric, look out for patterns of answers for each criterion that deviate from what the rubric anticipated. Noting these down and then supplying them to students serves two purposes: first, if many students answered in a certain way, it saves you writing the same feedback to each student; second, students get a glimpse of what others are writing, which can expand their thinking about how they approached the criterion.

To make this work effectively, before the assessment is given, you will have to train students and explain how the rubric and whole-cohort comments will be applied. It essentially becomes a mapping activity; students analyse the descriptions in the awarded range to evaluate why they were given the grade they received. If they can’t see how their respective answers fit, then they either consult the whole-cohort feedback or are given an individual comment explaining the marker’s choice.

Some students may feel that not getting individualised feedback is “poor service”. However, this process makes them more attuned to the marking rubric and provides an opportunity for greater self-regulation in understanding their areas of strength and weakness. There will always be responses that require individualised comments, but whole-cohort feedback will reduce the number of these, and in large classes, this can save hours.

Paul Moss is a learning design and capability manager at the University of Adelaide.
