We read with interest your article on the problems with resit examinations (“Resits may not improve academic performance, says study”, News, 15 November).
At our own medical school, we realised a few years ago that resits in undergraduate medicine were unfit for purpose. One had only to sit on exam boards and see the thick files of issues that had accumulated for individual failing candidates to know this. Such candidates generally had a long history of poor performance but had progressed via resits.
Realising that resits did not seem to promote longer-term learning, we undertook a longitudinal analysis of borderline and failing candidates and found that their performance typically worsened over time. Since then, we have developed a sequential model of assessment. Candidates who fail the full assessment (the aggregate of a screening test and an additional assessment for the weakest performers) must repeat the year, rather than resitting and, usually, progressing as they would have done under the old model. We have subsequently found that not only do these repeating candidates improve, but so too do those who were called back for the additional assessment but passed.
We believe that assessment practices in medical education are among the best in higher education, in part because of the strong research base motivated by the naturally high stakes involved in training junior doctors. We would recommend that other subject areas consider these issues in greater depth when developing their own assessment practices.
Godfrey Pell and Matt Homer
Assessment Research Group
University of Leeds Medical School