Magic markers

March 3, 1995

Marking by computer is proving a boon to academic staff. John Davies reports

Essential equipment if you are studying subjects such as chemistry or biochemical sciences at Kingston University: a 2B pencil, preferably with an eraser on the end.

This is because you will regularly use that simple tool to answer sets of multiple-choice questions (MCQs), at least in your first two years of study. Faced with a choice of five answers to each of (usually) 20 questions, you will shade in whichever circle you think appropriate on a "bubble form". (Apparently 2B pencils "give a blacker mark", but HBs may be acceptable.) An optical mark reader (OMR) will then scan the forms as soon as they are completed. Indeed, it is Michael Pittilo's proud boast that they "can be marked during a coffee break".
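
In outline, the marking step really is that simple. Here is a minimal sketch in Python of what the reader's software does with each form, assuming the shaded circles have already been read off as a string of letters; the answer key and form data are invented for illustration:

```python
# Hypothetical sketch: marking a batch of 20-question "bubble forms"
# against an answer key, one mark per correct answer. The key and the
# responses below are invented examples.

ANSWER_KEY = "BADCEABDCEBACDEABCDE"  # 20 questions, options A-E

def score_form(responses: str) -> int:
    """Count the questions where the shaded option matches the key."""
    return sum(1 for got, want in zip(responses, ANSWER_KEY) if got == want)

batch = {
    "form_001": "BADCEABDCEBACDEABCDA",
    "form_002": "AADCEABDCEBACDEABCDE",
}
for form_id, responses in batch.items():
    print(f"{form_id}: {score_form(responses)}/{len(ANSWER_KEY)}")
```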

Pittilo, professor of biochemical sciences and head of the life sciences department at Kingston, is an enthusiastic proponent of automated assessment.

"Selected use of multiple choice tests has been very popular with students and has acted as a fair discriminator," Pittilo says. By automating such tests "the saving in staff time is considerable and has allowed us to provide greater individual feedback on other types of assessment such as essays". In other words, factual knowledge can be tested with automated methods; other skills continue to be evaluated in more traditional ways.

"The teaching and learning experience has not deteriorated here in any way since the introduction of the optical mark reader," Pittilo claims. "It has actually enhanced it in many ways."

There is no question that it is the growth in student numbers and high student:staff ratios (SSRs) that has spurred the increased use of automated assessment. "It's a resource-driven thing. Academic staff here would not be able to cope with the load of conventional marking," says Will Bland, a principal lecturer in applied chemistry at Kingston. (In his department, a typical SSR would be 18:1.) "But if high SSRs were to disappear tomorrow we'd still be using the optical mark reader as one of the range of things that we use in assessing students."

Pittilo describes the impact of automated assessment as "freeing time positively". "Far too little emphasis is put on keeping up to date," he explains. "At conferences, you meet people from a wide range of institutions, and you find most of them have their backs against the wall - they don't have time to keep up their subject. There is a danger that not just five years but 20 years later they're still teaching what they taught when they first came in."

What has been the attitude of external examiners? "Initially they reacted with some horror, but to a person they've been convinced," says Pittilo.

David Rolls, a lecturer in geography at Kingston, agrees that traditional essay-question examinations still have their place. "We're doing students no favour at all if we turn them out into the world unable to write their thoughts on a piece of paper." He instances the first year of his earth science course, where, as well as sitting OMR-scanned tests, students take an end-of-module examination in which "they do two or three mini-essays to demonstrate they can make a story, show some understanding. Most geographers that we send out from here will need to produce a verbal report to the boss from time to time, and write down a coherent report about a topic."

In the same department, however, Ken Lynch is more sceptical. He has not - as yet - used multiple choice tests. "We're not just teaching scientific techniques," he says. "We're teaching analytical skills and ways of putting together an argument and writing good English, hopefully . . . An essay is still a good diagnostic tool for assessing what a student has learned." Lynch is, however, "looking at computer packages that could be applicable at the first-year level".

Of course, Kingston is not alone. At Bradford University, Terry Baker, professor of biochemical sciences, says that "most of us are going down a similar road - but with some reservations". MCQ tests are restricted to the first two years at Bradford. "Most of the students like them - they don't like to write essays - but you've got to reduce their use in year two to give them opportunities to write at length. It's largely for their benefit." The SSR in his department is around 17:1.

John Partington, director of the Sheffield-based project Alter (Assessment of Learning through Technology for Efficiency and Rigour), describes Kingston as "one of the leaders in that particular field". He emphasises that "it is important to settle what automated assessment can and can't do. By definition it's all about candidates recognising things". With multiple-choice questions "you can't do anything that's creative or productive".

In December 1993 Alter issued a report, "Using technology to assess student learning". Multiple-choice questions, it found, had been used in universities in a variety of disciplines, although "the factual content of subjects such as medicine, biochemistry and pharmacology means the construction of MCQs is somewhat easier than it would be in, for example, English literature". To which Partington adds that automated assessment "isn't an area you can get into without an enormous initial heave" in switching to the appropriate technology, and that banks of test questions are required in the relevant subjects.

Indeed, while such technology can save time normally spent marking, teachers may need extra time - initially, at any rate - to compose their new-style questions. As a colleague of Pittilo's at Kingston says: "Essay questions are easy to set and a bugger to mark; with MCQs it's the other way round."

At Kingston, Pittilo had help from across the Atlantic. The university has an exchange programme with Michigan's Grand Valley State University, "where they are too reliant on objective testing" but were able to supply a bank of questions that could be adapted for British use. Here, too, he notes, "many of the medical colleges rely very heavily on multiple-choice questions for their higher exams. There are banks of questions available in some areas like physiology that we find very useful."

In his department, adds Rolls, "we have produced a template of question types for staff so they can try to fit their questions to it. But one of the beauties of this system is that once you've set a question, it can go into a bank of material, and with a small manipulation it can be used in a subsequent year."

How many alternatives should be offered in a multiple-choice test? At Bradford, Baker recalls that his department initially had four possible answers, only one of which was right, for each question "and no negative marking" - that is, no penalties for a wrong answer. Now, he says, he and his colleagues are thinking of occasionally having more than one right answer among five alternatives. Meanwhile, Kingston's Rolls is of the opinion that "we may have to reprogramme the optical reader so that it can give a range of marks: for example 10 for a correct answer, and five for a nearly right one".
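
Each of these schemes is a small variation on the same scoring rule. A hedged sketch of the three variants mentioned - plain marking, negative marking, and Rolls's graded 10/5 idea - where the penalty fraction and the notion of a designated "nearly right" option are invented assumptions:

```python
# Hypothetical sketch of the scoring schemes discussed above. The
# 10/5 split comes from Rolls's example; the penalty fraction and the
# idea of a designated "nearly right" option are assumptions.

def plain_mark(chosen: str, correct: str) -> int:
    """One mark for a correct answer, nothing for anything else."""
    return 1 if chosen == correct else 0

def negative_mark(chosen: str, correct: str, penalty: float = 0.25) -> float:
    """Negative marking: deduct a fraction of a mark for a wrong answer."""
    return 1.0 if chosen == correct else -penalty

def graded_mark(chosen: str, correct: str, near_miss: str) -> int:
    """Rolls's suggestion: 10 for a correct answer, 5 for a nearly
    right one, nothing otherwise."""
    if chosen == correct:
        return 10
    if chosen == near_miss:
        return 5
    return 0

print(plain_mark("B", "B"), negative_mark("C", "B"), graded_mark("D", "B", "D"))
# -> 1 -0.25 5
```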

As for what is currently happening, Bland produces a recently scored MCQ test, together with a computer analysis of the results. It is for a "materials resources" module, with 43 students whose marks range from 85 to 30 per cent (in other words, from 17 to six correct answers out of 20). The analysis also reveals which questions proved least or most difficult. Number three elicited 41 correct answers ("I always make the first questions relatively simple, then hit them with the hard ones later on," says Bland).

A question that nobody or everybody gets right is one that could be discounted in a final assessment, but this is not such a case, thinks Bland: "I thought that would sort them out." It is more of a problem, he says, when the teacher is taken by surprise. "If you thought there was a relatively easy question you were hoping most students would get right, and they don't, then either you haven't taught them properly or the question wasn't as good as you thought."
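
The analysis Bland describes boils down to a per-question tally of correct answers, with questions that nearly everyone or nearly no one gets right flagged for a second look. A sketch, assuming results have been collected as a students-by-questions matrix of right/wrong values; the flagging thresholds and demo data are invented:

```python
# Hypothetical sketch of per-question item analysis: what fraction of
# students answered each question correctly, flagging items that
# almost nobody or almost everybody got right.

def item_facility(correct_matrix: list[list[bool]]) -> list[float]:
    """Fraction of students answering each question correctly."""
    n_students = len(correct_matrix)
    n_questions = len(correct_matrix[0])
    return [
        sum(row[q] for row in correct_matrix) / n_students
        for q in range(n_questions)
    ]

def flag_items(facilities: list[float], lo: float = 0.1, hi: float = 0.95):
    """Questions nearly no one or nearly everyone got right are
    candidates for review or discounting in a final assessment."""
    return [q for q, f in enumerate(facilities, start=1) if f < lo or f > hi]

if __name__ == "__main__":
    # Tiny invented demo: 4 students, 3 questions.
    demo = [
        [True, True, False],
        [True, False, False],
        [True, True, True],
        [True, False, False],
    ]
    print(item_facility(demo))             # [1.0, 0.5, 0.25]
    print(flag_items(item_facility(demo))) # [1]: everyone got Q1 right
```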

We watch another batch of "bubble forms" go through the optical mark reader. The machine's lightning pace is interrupted now and again: it has been programmed "to examine multiple marks" - that is, to stop if it appears that more than one circle has been shaded in for a question. When that happens, the operator, a postgraduate student, examines the offending form. Usually it turns out that one of the two shaded circles has been rubbed out, but not well enough to avoid detection by the OMR. The necessary correction is made.
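
The multiple-marks check itself amounts to counting how many circles on a question row register above the reader's darkness threshold. A sketch of that logic, with the darkness scale and threshold value invented for illustration:

```python
# Hypothetical sketch of the "multiple marks" check: the reader sees
# a darkness value for each of the five circles on a question row and
# halts for manual inspection unless exactly one exceeds the threshold.

DARKNESS_THRESHOLD = 0.5  # assumed scale: 0.0 (blank) to 1.0 (solid 2B)

def read_row(darkness: list[float]) -> str | None:
    """Return the chosen option A-E, or None to halt for inspection.

    An incompletely erased circle can still read above threshold,
    which is exactly the case the operator resolves by hand."""
    marked = [i for i, d in enumerate(darkness) if d > DARKNESS_THRESHOLD]
    if len(marked) != 1:
        return None  # zero or multiple marks: operator inspects the form
    return "ABCDE"[marked[0]]

print(read_row([0.1, 0.9, 0.1, 0.0, 0.1]))  # -> 'B'
print(read_row([0.1, 0.9, 0.6, 0.0, 0.1]))  # -> None (badly erased mark?)
```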

So it is still worth having an eraser on the end of that 2B pencil.
