Bespoke robot-written exams to curb student cheating

Creating unique datasets for online exams preferable to ‘naive’ honour codes or faulty online surveillance, experts say

January 5, 2022
Image: Robots charge by the play field at the 2013 RoboCup German Open tournament (Source: Getty)

Bespoke exams containing unique datasets for each student being trialled at a UK university could significantly curtail cheating in online assessments, researchers believe.

With some studies suggesting that cheating has massively increased following the switch to remote assessments, scholars have begun examining how they can design exams that make it impossible for students to collude or plagiarise their classmates’ work.

In a novel approach, chemists at the University of Exeter are using computer coding to generate 60 different datasets for a single class – one for each student – sitting a data analytics test that is worth 20 per cent of the entire module grade.


The script – which models lab equipment to produce realistic data but introduces some randomness so that each dataset is different – could with very little work be used to generate unique datasets for thousands of exams in different disciplines, says a study published in the Journal of Chemical Education.
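The paper's own scripts are not reproduced in this article, but the idea can be sketched in a few lines of Python. In this hypothetical example, each student's ID seeds a random number generator, which randomises the "true" parameters of a simulated instrument (here, Michaelis–Menten enzyme kinetics, chosen purely for illustration) and adds measurement noise. Because the seed is deterministic, the examiner can regenerate both the dataset and its answer key at marking time.

```python
import csv
import random

def make_dataset(student_id: str, n_points: int = 10):
    """Generate a unique but reproducible dataset for one student.

    Seeding the RNG with the student ID means the same data (and
    hence the answer key) can be regenerated when marking.
    """
    rng = random.Random(student_id)  # deterministic per-student seed
    # Hypothetical lab model: Michaelis-Menten kinetics with
    # randomised 'true' parameters plus instrument noise.
    vmax = rng.uniform(80, 120)
    km = rng.uniform(5, 15)
    rows = []
    for i in range(1, n_points + 1):
        s = i * 2.0                  # substrate concentration
        v = vmax * s / (km + s)      # ideal instrument reading
        v += rng.gauss(0, 1.0)       # realistic measurement noise
        rows.append((s, round(v, 2)))
    return {"vmax": round(vmax, 2), "km": round(km, 2)}, rows

def write_exam_files(student_ids):
    """Write one CSV per student; the answer key is never stored,
    only regenerated from the same seed at marking time."""
    for sid in student_ids:
        _answers, rows = make_dataset(sid)
        with open(f"dataset_{sid}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["substrate_mM", "rate"])
            writer.writerows(rows)
```

Sixty calls to `write_exam_files` (one ID per student) would yield sixty distinct-but-comparable papers, each with a recoverable answer sheet — the same pattern, the study suggests, could scale to thousands of exams across disciplines.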

“If you have an exam which counts a lot towards a degree and it moves online, it presents an opportunity to cheat and even incentivises it,” explained Alison Hill, a senior lecturer in biosciences who co-authored the paper with her Exeter colleague Nicholas Harmer.

“This [online cheating] isn’t just happening in academia – my husband is a Devon chess champion, and when chess moved online in the pandemic, there were reports of people using computers to help them win.”

One method to prevent collusion on data analysis questions is to limit exams to one hour. However, such a brief exam window unfairly penalises students with poor internet connections, or those based in different time zones who often had to begin their tests at 3am, explained Dr Hill.

But the students’ preferred option of a 24-hour exam window was an open invitation to share answers, she argued.

“We’ve seen in other territories how once the paper goes live, a WhatsApp group is set up immediately – people simply see this kind of sharing as a good investment of their time,” said Dr Hill, who argued that relying on university honour codes to halt cheating would be “completely naive”.

“We can’t entirely stop cheating, but if every student has their own dataset – with the same question – the cost-benefit balance of cheating is no longer in the student’s favour as they will need to do the work again [for a classmate] with a different information set,” said Dr Hill.

Designing out cheating by creating different datasets could be applied to most data-heavy exams – with automatically generated answer sheets for each paper – claims the journal article.

This kind of test design would be far more effective than some exam proctoring techniques piloted in the pandemic, such as the webcam surveillance of those sitting exams, which some students easily circumvented, said Dr Hill.

“Lockdown students will find a way to get around these types of rules,” she said.


Print headline: Computer code tops honour code in thwarting cheats



Reader's comments (9)

I really rate and value these kinds of approaches. I've been doing something similar for around eight years with coursework on a large physiology module. There is still cheating on one of the small tasks, where students have several days to complete it (with on-screen timers once they start): they share answers and ask questions in a WhatsApp group. My exams for the same module have also been online for several years now, but cheating happens far less there, as I'm able to package and constrain them far better. A colleague in my department is doing a similar randomised-dataset approach to this article on his chemistry and maths module using "Numbas". And a buddy in my office ran his genetics exam last year using a series of unique gene sequences: each student had a different one that changed the outcome of the question, so there was no point sharing information. We need more of this. Great stuff, Alison!
Thank you, Chris. I use NUMBAS for my Medicinal Chemistry module and it is brilliant. Good luck with maintaining standards!
This certainly looks like a feasible solution that will get rid of the benefits of cheating. The most depressing thing is that students cheat at all. They do not seem to see the value in actually doing the work rather than just chasing the marks. The chess anecdote is really sad as players are only cheating themselves since they will one day play face-to-face again and find it more difficult without their computer assistance.
This seems like a great idea but it will only work for a very limited range of courses which require analysis on numerical data. I suspect it is also hugely time consuming to prepare such tests and to ensure that they are bug-free.
We also included images, so it is not restricted to numerical data. There is some initial investment, but we have provided annotated scripts of the program files we used, and anyone interested can get in touch with us. Once the initial file is written, it can be reused the following year with new parameters.
It's your classic "a little effort up front saves infinite time later" situation. Yes, the first year is a bit of a faff. But after that? You have more time to dedicate to other innovations, research, and better quality and quantity of feedback on those assessments that cannot be so easily automated and protected. And if you start building different question formats, as well as things like branched scenarios that change the outcomes based on single decision points, you can really get creative with how these assessments run in different, non-numerical areas.
Nothing new here (apart from the robot). To avoid cheating or rote learning back in the 1970s (before all the cheating mills and online learning), we in Sociology at Birmingham Poly 'personalised' assessment by, for example, asking for a comparison between two case studies from a list of 50 or so. Once a pairing had been selected, no one else could offer the same pairing. It seemed an obvious way to deal with the problem but, of course, it requires more work on the part of the assessor/examiner!