Why I... think teachers know best

September 27, 2002

Even if the inquiries into this year's A-level results succeed in identifying blame, there are underlying problems that will not go away. Chief among them is that the system that has emerged from endless debates is a new and untidy compromise, one that has left those who must set grade boundaries with an almost insoluble problem, because the collection of marks for each candidate is of a different nature from that of previous years.

One solution could be a norm-based approach, that is, to set the boundaries so that the overall percentages of candidates in each grade were the same as in previous years. This would involve assuming that the quality of the work could not have improved. How can that be justified when teachers have been reporting that over the past two years the system has made students work harder than before?

A criterion-based solution would require examiners to judge that a given mark this year represents work of comparable quality to work that earned a different mark the year before, the two marks being outcomes produced under very different conditions. This is not quite as hard as judging between wine and beer, but it cannot be done by rule and must involve trust in examiners' judgements. Indeed, the setting of grade boundaries has always required such trust, given the inevitable differences in marks that arise from one year to the next. But if the public is not prepared either to trust examiners' judgements that pass rates should change or to accept that pass rates can never change from year to year, there is no solution.

The irony of the present "crisis" is that it is teachers who have complained that their students deserve better grades. If the public takes them seriously, it is in effect saying that it places more trust in teachers' judgements than in the results of external examinations. In that case, why have external examinations at all?

Indeed, to determine anyone's life chances on the strength of a few hours of stressful work in the artificial environment of an exam room seems strange. Examiners do their best in setting a variety of written tasks to reflect as wide a range of types of knowledge and understanding as possible within the test constraints, but the limitations are inescapable.

Such tests are also unreliable. Even when all marking is of the same high quality, there is a finite probability that a grade is "wrong", for on a different day or with a different set of questions, the student might have achieved a higher or a lower grade. But no measure of this probability exists because the research has not been done, although research into other tests shows it could be as high as 30 per cent.

Teachers' judgements of their students can be informed by evaluation of many pieces of work, across a variety of contexts. In terms of validity and reliability, external testing cannot compete - and perhaps survives only because the public is not aware of its limitations.

Of course, the skill and objectivity of teachers' judgements will be called into question. So rigorous procedures will need to be developed to ensure that teachers work to comparable standards and that there is protection against bias, with much more effort than has hitherto been given to coursework assessment. But it can be done. In Australia, the state of Queensland abandoned external tests for end-of-school certification in 1982; its system, which has earned public and government support, uses clusters of schools that work together to ensure fairness and comparability.

So teachers can do it - there is indeed no alternative to trusting teachers. It just seems strange to trust them only when they are examiners in the straitjacket of an externally set system.

Paul Black is a former chief examiner for A-level physics.
