Exam problems

May 14, 2015

I was interested in the study of inconsistent marking, in particular the findings that examiners awarded “hugely inconsistent marks” and that “a large element of unreliability” would always remain in assessment (“Low marks on an essay? Try a different examiner”, News, 30 April). Both the findings and the conclusion are unsurprising: because of examiner variance, the essay format is one of the least accurate formal assessment methods. Moderation simply adds one unreliable practice on top of another, and Sue Bloxham is correct in saying that we should stop using it.

However, the wider truth is that many assessments in higher education are simply not fit for purpose. Among the common problems are lack of proper training for examiners, poorly designed exams and inadequate statistical evidence about the performance of the examination or its component parts. Standards such as pass marks and degree grade boundaries are seldom set using a proper method. Instead, they are often predetermined and set out in regulations without regard to the content or difficulty of the questions, which is ludicrous.

Undergraduate and postgraduate medical exams tend to be reliable and valid because of the requirements of the General Medical Council. Even in medicine, however, examiners can encounter problems, as Kevin Fong described a few weeks ago (“Campus Hunger Games”, Opinion, 5 March). He identified questions that were considered too easy, and the fact that candidates can get the correct answer to a multiple choice question through a blind guess.

These two articles illustrate an important but often ignored principle of examinations. There are three different kinds of expertise necessary to design and quality-assure examinations. The first is subject expertise. The second is expertise in examination design, which includes expertise in assessment theory, methodology, number of items required, testing time and standard setting. The third is expertise in psychometrics, which includes the measurement and interpretation of statistics such as reliability, item difficulty and ability to discriminate between passing and failing candidates.
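To make the third kind of expertise concrete, here is a minimal sketch of two of the item statistics mentioned above, difficulty and discrimination, as they are commonly computed in classical test theory. The candidate data and the 27 per cent upper/lower grouping are illustrative assumptions, not figures from the letter.

```python
# Illustrative sketch of two item statistics named in the letter:
# difficulty (proportion of candidates answering an item correctly) and
# a simple upper-lower discrimination index. The cohort data below is
# entirely hypothetical.

def item_difficulty(item_scores):
    """Facility value: proportion of candidates who answered correctly."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """Difficulty among the top-scoring candidates minus difficulty among
    the bottom-scoring candidates; a common rule of thumb uses the top
    and bottom 27 per cent of the cohort."""
    n = max(1, round(fraction * len(total_scores)))
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    low = [item_scores[i] for i in order[:n]]    # weakest candidates
    high = [item_scores[i] for i in order[-n:]]  # strongest candidates
    return item_difficulty(high) - item_difficulty(low)

# Hypothetical cohort of ten candidates: per-item correctness (1/0)
# alongside each candidate's total test score.
item = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
totals = [78, 85, 40, 90, 35, 70, 88, 45, 60, 95]

print(item_difficulty(item))               # → 0.7
print(discrimination_index(item, totals))  # → 1.0
```

An item answered correctly only by strong candidates discriminates well (index near 1), while one that too-easy or guessable items answer alike for strong and weak candidates drags the index towards zero; these are precisely the figures an examination board needs, and rarely inspects.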

Many examiners in higher education might not have much more than subject expertise, and until this changes, unreliable and invalid examinations will probably continue.

Gareth Holsgrove
Consultant in medical and dental education
