An unglamorous stint on an exam committee led Steve Newstead to undertake research into cheating and marking
It was every researcher's nightmare: being asked to chair a working party to revise my university's examination regulations. What could be worse than spending hours deciding on invigilation arrangements or what type of calculators should be allowed into exams? I reluctantly accepted. We kicked off with a heated debate on whether to introduce anonymous marking. As a psychologist, I am well aware of the danger of bias and supported the proposal. I also knew that students, backed by their national union, were campaigning for anonymous marking. But other members of the working party were vehemently opposed.
The challenge, in the 1980s, was to get evidence that bias existed. The principal study was by Clare Bradley, then at the University of Sheffield.
She had explored the historical records of local universities and found evidence that markers graded males more extremely than females; in other words, males received more firsts but also more thirds.
This was not enough to persuade my colleagues, so I felt obliged to replicate the study on data from my own institution. I did not obtain the same results until years later, which delayed the introduction of anonymous marking at Plymouth by a decade.
Student cheating was another burning issue. There was a vast research literature on cheating, although almost none of it was British.
I started studying the frequency and causes of academic dishonesty.
Cheating was more common than expected. Some students claimed to have used bribery, corruption or seduction to improve their marks. Students on professional training programmes reported the least cheating, those in science and technology the most. Mature students reported cheating less than younger students. Females reported cheating less than males. I wondered if social expectations came into it. Is it macho to cheat? If the average male reports having heterosexual sex about twice as often as the average female, they can't both be telling the truth.
I was particularly intrigued by a strategy that US students used when taking multiple-choice tests. When there were four possible responses, students placed their pen in one of the four corners of the desk to communicate with each other about what response they thought was correct.
Fifteen minutes of fame followed this work: newspaper articles, radio and television interviews, press conferences. I was even invited on to the chat show Kilroy. I declined. One journalist asked what kinds of cheating we had looked at. We said plagiarism, that is to say, quoting from another source without acknowledgement. "That's not plagiarism but journalism," the journalist quipped.
Plagiarism is evidently an important transferable skill. The government even admitted to plagiarising the "dodgy dossier" on Iraqi weapons of mass destruction. No one seemed to care about this misdemeanour. It is the other dossier, the one that was allegedly "sexed up", that has attracted all the criticism.
Cheating is a question of interpretation. If staff cannot agree on what it is, how can students be expected to act appropriately? There is a fine line between appropriate referencing of material and plagiarism. Institutions need to ensure that this distinction is adequately defined.
As a result of my experiences on the working party, I realised that there was a dearth of good research on student assessment. Psychologists should know a thing or two about assessment. After all, we have spent more than a century developing psychometric assessment into a fine art. Research on the reliability of marking that I conducted with my colleague Ian Dennis revealed that experienced external examiners did not agree on marks for exam essays and were no more consistent than less experienced internal markers. A subsequent study showed that they were no more reliable than third-year students who were asked to mark the essays.
Students seem to appreciate knowing the basis of the marks they are awarded and believe that it helps them to improve. There is a danger, however, that they will produce somewhat stylised and predictable answers if they stick too rigidly to the criteria. I am sceptical as to whether the criteria that have been developed necessarily lead to an increase in reliability.
I continue to carry out research on student assessment but have never abandoned my initial research area - the psychology of language and thinking, which I suspect carries more academic prestige. However, I have a strong suspicion that my research into assessment and learning has been read by rather more people and has had significantly greater impact. I certainly have no regrets about that fateful day when I was landed with the job of chairing an unglamorous university committee.
Steve Newstead is a professor of psychology at the University of Plymouth.