In the 18th century, the Royal Navy decided to experiment with a new type of gun. The plan was to combine the short-range, ship-smashing carronade with the more common and longer-range long cannon to create a new weapon called a cannonade. But the result was a failure. The cannonade possessed neither the power of the carronade nor the range of a cannon, and it was swiftly withdrawn from service.
We initially suspected that seen exams – exams that students are shown several weeks in advance – might be the cannonade of student evaluation: an unhappy hybrid of the assessed essay and the standard exam, testing neither depth of knowledge and analysis nor ability to marshal and express a well-supported argument from memory. There was also the danger that students would view it as a soft option.
However, despite our misgivings, we decided to give it a try. The plan was to create a short exam paper consisting of a small number of questions that embraced the course as a whole, rather than its individual constituent topics. This, we hoped, would encourage the students out of their comfort zone by testing their overall knowledge, rather than allowing them to rely solely on the content of a couple of seminars.
The first thing that we needed to do was to set some ground rules about how we should rework the examination assessment criteria. Marking needed to be tougher. Fewer allowances would be made for common mistakes, such as misremembered names, dates and the odd howler (“many people in the Middle Ages died of plaque”). Similarly, while all essays need to provide a lucid argument, the seen exam offers the same opportunity to plan, structure and hone a line of reasoning as a coursework essay does – so, we reasoned, it should be similarly refined.
We worried about collusion, so we emphasised the value of originality. We anticipated that some students would prepare too much material, so we encouraged them to practise their answers to time. But the worst-case scenario, we surmised, was the opposite one: that students might not prepare adequately because knowing the questions in advance had instilled false confidence.
Yet, as the exam scripts landed on our desks with a heavy thud (always a happy moment!), it rapidly became clear that the experiment had been a success – for several unforeseen reasons. First – astonishingly – the exam scripts were overwhelmingly better written than conventional essays. While marking standard essays routinely obliges us to ask students to explain their ideas with greater clarity, we rarely had to do so with the seen exam. We concluded that the pressure of a timed environment had caused the students to write as they thought, in a stream of consciousness that greatly benefited their arguments. There were very few examples of the convoluted 10-line sentences and stilted quasi-academic-speak that litter conventional undergraduate coursework. This is great news! Many students lack confidence in their ability to write well, yet our seen exam demonstrated that they can do it.
Another virtue lay in the depth of evidence the students presented. Seemingly, the knowledge that they would ultimately be put on the spot in a bleak exam hall led them to conduct their research very thoroughly, obviating the need for us to ask them – as we frequently do when marking conventional exams or essays – to supply more detail, context or case studies.
Our concerns about collusion also proved unfounded. While some students made reference to the same evidence, the exam format meant that they were forced to evaluate and reproduce it in different ways – and, crucially, in their own words. The opportunities for plagiarism were therefore greatly reduced in comparison with those offered by coursework, and, overall, the seen exam encouraged originality.
The benefits we discovered were so significant that we are considering using seen exams again, in preference to a conventional exam. And it is worth underlining that many of those benefits were not ones we had predicted. This emphasises the importance of trying new things even when you are sceptical about their efficacy.
In other words, however useless the cannonade turned out to be, those 18th-century naval engineers should have been given 10 out of 10 for effort.
Nicholas Morton and Natasha Hodgson are lecturers in history, languages and international studies at Nottingham Trent University.