In "How to help students who fail exams" (Teaching, THES, July 9) one important factor was overlooked - failures due to faults with examinations.
Although examiners are usually subject experts, few have received formal training in examining. Consequently, many examinations contain errors in construction, content, selection of testing methods, establishing the pass mark and combining marks from different examination components. Marks are often taken at face value and defended robustly in examiners' meetings, with no account taken of measurement error or attempt made to establish confidence intervals. When considering borderline candidates, marginal "passes" are usually ignored, and the lower limit of the borderline is set arbitrarily rather than by reference to the psychometric characteristics of the examination. Furthermore, examiners often subsequently rely on viva voce examinations (one of the least reliable of all examination methods) to make overriding decisions.
As an indication of the magnitude of the problem of inaccuracy in high-stakes examinations, the 95 per cent confidence interval for a typical university final examination is likely to be in the region of at least 6 per cent. In other words, the examination cannot discriminate with appropriate accuracy between candidates whose marks fall within 6 per cent of any cutting point. Where there is no rigorous scrutiny of marks within these "zones of uncertainty", there could be a substantial number of students who undeservedly pass, and others who undeservedly fail. But to make properly informed decisions about such borderline students, it is first necessary to calculate the measurement errors in the examination. Few universities and colleges do this - yet many are protected by regulations that prohibit appeals against academic decisions, even though those decisions might be statistically very suspect. If statistics students handled data in the same way, they would deservedly fail.
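The calculation behind such a confidence interval is standard classical test theory: the standard error of measurement is the mark standard deviation multiplied by the square root of one minus the examination's reliability, and the 95 per cent interval extends roughly 1.96 standard errors either side of the observed mark. A minimal sketch, with purely hypothetical figures (a standard deviation of 10 marks and a reliability of 0.90, values broadly typical of written finals), gives a zone of uncertainty of about 6 per cent either side of a cutting point, consistent with the estimate above:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def ci95(score, sd, reliability):
    """95% confidence interval around an observed mark (about +/- 1.96 SEM)."""
    half_width = 1.96 * sem(sd, reliability)
    return (score - half_width, score + half_width)

# Hypothetical exam: marks out of 100, SD = 10, reliability = 0.90
low, high = ci95(50, sd=10, reliability=0.90)
print(f"SEM = {sem(10, 0.90):.2f} marks")            # about 3.16
print(f"95% CI for a mark of 50: {low:.1f} to {high:.1f}")  # about 43.8 to 56.2
```

On these assumed figures, a candidate recorded at 50 per cent could plausibly have a "true" mark anywhere from roughly 44 to 56 - exactly the kind of range within which pass/fail decisions are routinely made without scrutiny.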
We must apply proper quality assurance measures to examinations. Until we do, neither tutors nor students will know whether candidates with marks close to any cutting point were placed in their pass/fail or honours category, rather than an adjacent one, as a true reflection of their performance or through some inaccuracy in the examination system.
Gareth Holsgrove, Cambridge Medical Education Consultants