Call for debate as lack of consistency in assessment attracts warning of student litigation. Rebecca Attwood reports.

Lecturers' marking of student work is "inherently frail" and assessment procedures would struggle to stand up to legal challenge, academics warned this week.
A new book argues that there is little evidence of reliability when it comes to marking, and highlights "considerable" marking discrepancies between tutors.
One academic, who runs assessment workshops in universities, told The Times Higher that when a group of tutors was given the same piece of undergraduate work to mark, the resulting grades could vary from a borderline first to a bare pass.
Sue Bloxham, co-author of Developing Effective Assessment in Higher Education, said: "My hunch is that students will become increasingly litigious about marking in the future and our procedures will struggle to stand up to this kind of onslaught if we persist in claiming that marks given are completely accurate."
She said that research on marking consistency is "depressing", with some studies showing that differences of eight or nine marks out of 25 are common. One study found that the same piece of work received higher marks when it was submitted using a larger typeface.
A spokesman for the Quality Assurance Agency said that the book was "welcome", as it was "crucial to maintain a system that is transparent and fair", and the debate must be "sustained and spread widely".
The Burgess group's final report on recording students' achievement, published last week, calls for "greater clarity in assessment practice", and recommends a review by the Higher Education Academy of marking and assessment, to stimulate "robust debate".
The group said evidence received suggested assessment "could be more fit for purpose".
Robert Burgess, head of the group and vice-chancellor of Leicester University, told The Times Higher: "I think the time is right to look in detail at assessment practice. As soon as you talk about degree class ... you automatically raise questions about the way you assess and different styles."
Professor Bloxham, who co-wrote the book with Peter Boyd, said that when unreliable marking occurs repeatedly, a student's final degree classification may depend as much on the particular examiners as on academic competence.
She said: "There is limited research on marking given the amount that takes place each year, but what is there is consistently depressing in relation to issues of reliability, and this is particularly the case for assignments such as essays and in disciplines such as the arts, social sciences and humanities."
A letter signed by ten academics, sent to The Times Higher this week in response to the Burgess group's report, describes the grading of student work as often "more a matter of judgment than of measurement". It describes the grading of student work as "inherently and inevitably rough," and argues for an end to the degree classification system.
Mantz Yorke, a signatory of the letter and author of another new book, Grading Student Achievement in Higher Education, said part of the difficulty with marking is the complexity of knowledge at university level.
"Complex achievements are difficult to grade. While one can judge performances with reasonable reliability, measuring them presents all sorts of problems how do you combine the assessments for various components; do different assessors combine them in the same way; do they even interpret the assessment criteria in the same way?"
He added that an overall mark for an assignment in effect often "adds together apples and pears".
"Then there are all the personal things that bear on assessment like the number of items to be marked, in which order they are marked, developing tiredness of the marker, and so on," he said.
"If we expect students to achieve complex outcomes, and I believe higher education should aim to do this, then my argument is that assessment methodology has to reflect this."
Margaret Price, director of ASKe, a centre for excellence in teaching and learning on assessment based at Oxford Brookes University, said: "Many academics seem to have a belief in the absolute accuracy of numbers, but it is a myth that you can mark so precisely."
And while quality assurance measures have greatly increased, Professor Yorke said studies had shown that double-marking added less to reliability than might be supposed. Others see external examiners as a guarantee of quality, but Developing Effective Assessment argues there is a lack of research evidence on their effectiveness.
According to Professor Price, what is needed is a focus on sharing understanding of assessment standards among staff and students. The most effective way of shoring up standards is through cultivating a much closer learning community, she said.
Professor Bloxham agreed and said the involvement of students could guard against potential litigation. "It would be better for us to engage students from the beginning in assessment of their own work against standards, helping them understand that part of being a professional in any field is being able to recognise good quality," she said. "We need to express our marks as a reflection of our professional judgment, not an absolute."