Practical assessment of skills requires scrupulous fairness and clarity. Not only that, says Harriet Swain, students must be able to learn from the exercise, even when they fail.
What's it to be? Eight out of ten? Seven? Let's plump for seven, since that particular student's performances are usually nothing special. Next.
Well, no, actually, that won't do.
Not many marks for your assessment skills there. If you are assessing practical skills, you have to be scrupulously clear about the criteria you are using and make sure they are reliable and fair: any other assessor should come to the same evaluation given the same evidence, according to Sally Brown, author, with Ruth Pickford, of Assessing Skills and Practice.
You also have to be especially vigilant about potential bias because you can rarely mark practical assignments anonymously.
Do not mark up students who usually do better, or mark down those who have performed uncharacteristically well. Devise marking criteria that can assess both individual and group work, and be clear about when students are expected to collaborate and when they are not.
Brown and Pickford's advice is to think about how you are going to assess these skills as soon as you begin designing a course so that you can build in practice opportunities. Write learning outcomes that lend themselves easily to practical assessment so that both students and fellow markers understand what is required, although you must be careful not to be so overspecific that you inhibit creativity. You must also make sure you are measuring exactly what you intend to measure.
"Practical assessments should link clearly to the most sensible evidence of practical skill students could produce to demonstrate their achievement of the intended outcome," Brown and Pickford write.
Paul Kleiman, deputy director of Palatine, the Higher Education Academy's subject centre for dance, drama and music, says that if you are simply measuring competence then the assessment should focus on whether the student can do what they are supposed to do without worrying how they got there. But if you are measuring learning, you must find a way "not only to get at and assess the process but also to enable students to articulate their understanding of what they have achieved and how".
Jude Carroll, deputy director of the Assessment Standards Knowledge Exchange Centre for Excellence in Teaching and Learning, says a useful way of measuring practical skills is to break them down into individual elements. The disadvantage of this approach is that skills are not used in small, isolated bits in practice, so students also need the chance to practise them in realistic situations, such as placements.
David Nicol, director of the project Re-Engineering Assessment Practices in Scottish Higher Education, argues that practical skills should always be assessed in a context as close as possible to that in which they will be used.
Peter Klappa, senior lecturer in biochemistry at Kent University, has tried to reconcile these two conflicting needs - while cutting down the time needed for assessment - by getting students to carry out under exam conditions a practical that they have already tried before. At the end of the assessment, each student produces a numerical result, which they give to the examiners along with their name. Students are given slightly different samples to prevent cheating and can resit if they fail. The intention is to test their ability to carry out a practical assignment accurately.
Pickford stresses that sound assessment involves giving students the chance to practise. Feedback is therefore essential and needs to be built into the assignment.
Pickford and Brown's book stresses the importance of how you deliver this feedback to maximise the chance of students taking it on board, advising that they be given opportunities to respond and to take notes.
Liz McDowell, director of the Centre for Excellence in Teaching and Learning in Assessment for Learning at Northumbria University, says it is useful to focus feedback on specific elements rather than to praise or criticise everything at once. She also suggests getting students to monitor their own skills development through self-review and peer-review checklists.
Nicol says engaging students in identifying their own assessment criteria is helpful because it gives them ownership, and it is particularly useful in group work. He says a common mistake is to ask students to set criteria for assessing the contributions of others in their group only after they have finished a task, which inevitably leads to disputes. Establishing the criteria early on means that everyone is clear about what is being judged. But while he argues that students can be valuable assessors of others' work, he recommends giving them some training in constructive criticism before they start.
Pickford suggests that stakeholders such as clients may also be able to help design an assessment and carry it out.
Brown and Pickford suggest observing and reviewing performance on several occasions to ensure consistency. They also advise keeping good systematic records for quality assurance and clarifying benchmarks and standards with fellow assessors. Make sure your standards do not slip or climb as you become exhausted or exhilarated by the assessment process. And make sure you are taking account of differences in a diverse student body.
McDowell says students should have plenty of chances to practise before being summatively assessed. She also recommends resisting the temptation to develop students' skills first and to let them apply them in a "real" situation only later. She says: "Students are more motivated and more likely genuinely to take skills on board if they learn them as part of a project, experiment or performance rather than in isolation."
Assessing Skills and Practice, by Sally Brown and Ruth Pickford, Routledge, 2006
Palatine, the Higher Education Academy subject centre for dance, drama and music: www.palatine.heacademy.ac.uk
Re-engineering Assessment Practices in Scottish Higher Education: www.reap.ac.uk
Centre for Excellence in Teaching and Learning in Assessment for Learning: http://northumbria.ac.uk/cetl_afl/