
Online exams are growing in popularity: how can they be fair and robust?

Nicholas Harmer and Alison Hill share advice on using unique datasets to deter student collusion in online exams


28 Jan 2022

Created in partnership with

University of Exeter


University exams are high-stakes assessments for students: incremental gains give access to higher grades, and with them better future opportunities. Students have shown a preference for most examinations being online, but there are concerns that this environment is ripe for misconduct. Cheating in online exams has been widely reported, with students colluding through online chat groups and accessing third-party “helper” sites; the problem is particularly acute for problem-based exams where there is a single correct answer. There is a clear need to set fair and robust exams that protect honest students from being scored lower than those who cheat. Parallels can be drawn with doping in sport, where honest competitors lose out to cheats.

Using individualised datasets in exams to reduce student collusion

One solution to this challenge is to set problem-based questions in which every student works with a unique set of data. The challenge and questions are the same for all students (allowing a single paper to be set), but each student downloads a separate file containing their individualised dataset. This reduces the risk of collusion: every student must work through a separate but related problem, and the outcomes they are expected to reach may be very different.
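A minimal sketch in R of how such a script might look (the enzyme-kinetics setting, parameter ranges and the set_for_student helper are illustrative assumptions rather than our published code). Seeding the random number generator with an anonymised student ID makes each dataset unique yet reproducible:

```r
# Generate a unique, reproducible dataset per (anonymised) student ID.
# Seeding with the ID means the same student always gets the same file.
set_for_student <- function(student_id) {
  set.seed(student_id)
  substrate <- seq(0, 10, by = 0.5)          # shared design: identical x-values
  Vmax <- runif(1, 80, 120)                  # per-student "true" parameters
  Km   <- runif(1, 1, 4)
  rate <- Vmax * substrate / (Km + substrate) + rnorm(length(substrate), sd = 2)
  data.frame(substrate = substrate, rate = round(rate, 2))
}

for (id in c(1001, 1002, 1003)) {            # hypothetical anonymised IDs
  write.csv(set_for_student(id), sprintf("dataset_%d.csv", id), row.names = FALSE)
}
```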

We have found that students are surprised that we deliberately arrange the data to lead to different outcomes. Some dislike not being able to “check their answers” with classmates, but others praise the approach as an effective way to cut out misconduct.

“The use of individual data sets for assessments was an excellent way of ensuring the work was fair for all students as collaboration was not possible.” (University of Exeter student, 2021).

By using individualised data, we observed no statistically significant difference in student performance between time-limited invigilated exams and online, 24-hour exams.

Considerations in preparing individualised datasets for online exams

Redesigning an assignment or exam to use individualised datasets for the first time can be daunting. We identified key actions that helped to improve our chances of success:

Working in a team: All our datasets were developed by a team of two. This helped to identify errors and incorrect assumptions early in the process, and having more people involved added creativity in the types of data included (numerical values, images, the possible solutions that might be explored). Doing this saved a lot of time that might otherwise have been spent correcting issues.

Coordinating with exam officers: We benefited greatly from discussions early in the process with our exam officers, who were extremely helpful in explaining the limits of what our university regulations would allow. They also checked example datasets to verify that the presentation and method of release was acceptable.

Designing marker sheets: As each student dataset is unique, the answers will also be unique, and markers need to know what the “correct” answers are. We built the production of marker crib sheets, with expected answers, diagrams and any intermediate working, into our code. Coordinating with markers to make sure that the answer sheets met their needs was very important for success, and writing the code with these answer sheets in mind from the outset saves time (see the sketch below).
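A sketch of what this can look like (again using hypothetical enzyme-kinetics parameters and file names): the crib sheet is built in the same loop that writes the datasets, so the expected answers can never drift out of sync with the data the students receive.

```r
ids <- c(1001, 1002, 1003)                       # hypothetical anonymised IDs
crib <- data.frame(student = ids, expected_Vmax = NA, expected_Km = NA)
for (i in seq_along(ids)) {
  set.seed(ids[i])                               # same seed as the dataset script
  Vmax <- runif(1, 80, 120)
  Km   <- runif(1, 1, 4)
  s    <- seq(0, 10, by = 0.5)
  rate <- Vmax * s / (Km + s) + rnorm(length(s), sd = 2)
  write.csv(data.frame(substrate = s, rate = round(rate, 2)),
            sprintf("dataset_%d.csv", ids[i]), row.names = FALSE)
  crib$expected_Vmax[i] <- round(Vmax, 1)        # record the "correct" answers
  crib$expected_Km[i]   <- round(Km, 2)
}
write.csv(crib, "marker_crib_sheet.csv", row.names = FALSE)  # one row per student
```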

Using realistic data: Problem setting is easier, and more reflective of the student experience, if the data are based on real experiments. We therefore used historical data from (good) students to provide a range of realistic experimental outcomes.

Include reasonable variability: Ideally, individualised datasets are sufficiently different that students gain little or no benefit from collusion. However, the data must remain readily interpretable by a good student, and the question-setting model must deliver this automatically, without the setter needing to check every dataset. We found it easiest to achieve this by considering reasonable sources of variability in the supplied parameters and modelling each source with a different level of randomness to give a robust outcome. A “variability parameter” is often helpful for testing the effect of randomness before committing to a final plan (see the sketch below).
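For example, a single variability knob might scale every random component, letting the setter re-run the model at different noise levels and check that the data remain interpretable (the function, parameter names and values here are illustrative assumptions):

```r
make_readings <- function(seed, variability = 1) {
  set.seed(seed)
  base  <- runif(1, 80, 120)                 # centre drawn from a historical range
  drift <- runif(1, -5, 5) * variability     # systematic per-student offset
  noise <- rnorm(21, sd = 2 * variability)   # point-to-point measurement scatter
  base + drift + noise
}
range(make_readings(42, variability = 0.5))  # gentle: data easy for all to read
range(make_readings(42, variability = 2))    # harsh: would a good student still cope?
```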

Start in Excel: In every case, we found that building the model in a spreadsheet tool first saved time. The main advantage of such tools is that their graphs update in real time as parameters are modified, allowing the question setter to explore the effect of each parameter rapidly. An Excel model is also easier to share with colleagues, especially those who are less confident with code.

Use the software that works best for you: Once all colleagues have agreed on the paper, the mode of delivery to students and markers, and the data-generating model, the model can be committed to code. Whoever does this should use the language they feel most comfortable with. We found R convenient for a moderately complicated model; more complex models, or those requiring specific modules, are likely to be better served by other languages.

It is unlikely that we can eliminate cheating from online exams entirely, but redesigning assessments around unique datasets is one way to improve their integrity, and one that examiners and honest students alike appreciate.

This advice is based on Nicholas Harmer and Alison Hill’s paper “Unique Data Sets and Bespoke Laboratory Videos: Teaching and Assessing of Experimental Methods and Data Analysis in a Pandemic” in the Journal of Chemical Education.

Nicholas Harmer is an associate professor in biochemistry, and Alison Hill is a senior lecturer, both at the University of Exeter.

If you found this interesting and want advice and insight from academics and university staff delivered directly to your inbox each week, sign up for the THE Campus newsletter.
