New software is giving computer-based assessment a new lease of life, by enabling lecturers (and students) to post coursework on the internet. By Kim McCaffery
Once upon a time, a student could write a lab report and the lecturer would read it thoroughly, correcting it and adding any necessary comments. Even face-to-face meetings to give feedback were not uncommon. But the expansion of higher education, the trend towards coursework-based assessment and the sheer workload this generates all too often leave the lecturer with no alternative but to add a single note - a grade. Hence the once-valuable feedback that was given to students to improve their learning seems to have become a fond memory of what we did "when we did things properly".
With this in mind, many lecturers have worked hard at introducing new assessment methods that give the feedback necessary to promote effective learning rather than surface learning. Many different strategies are being employed to this end, and techniques such as peer marking and self-marking have become familiar.
In engineering we have been grappling with this problem and, as many subjects lend themselves to objective testing, we have introduced the use of computer-based assessment (CBA) into a number of areas.
This has achieved only moderate success, as tests have not been very accessible. Students need easy access to good material, yet CBA is often limited to a single room that is open only at specific times on certain days of the week. Also, the software is often either very limited, producing only a few question types, or requires a high level of expertise. We believe these difficulties have been holding back the technique's wider use.
Nearly two years ago, however, a commercial software package became available that enabled assessment via the internet. This software, called Perception, is sufficiently user-friendly for the average lecturer to be able to employ it, writing tests using an ordinary PC and then publishing them on the internet. The software is versatile enough to produce a variety of objective-based tests.
The greatest benefit of this new generation of software, however, is that not only does it give the lecturers the freedom to publish their tests to the world, but it also enables students to access them from anywhere in the world - well, from anywhere that has internet access.
In a nutshell, the major drawback of CBA - namely, accessibility - has been removed. Now a student can do a coursework assessment from his or her Battersea digs or from a yacht in the Bahamas (at midnight on a Sunday). The massive technological infrastructure that is normally needed is eliminated and is replaced by the existing internet. These developments have given CBA a new lease of life.
Consequently, we are able to reconsider the original problem: how can we improve our ability to give feedback to students and so help improve learning? We feel that we can use CBA as a vehicle for improving feedback, but we want to know whether giving minimal feedback to students is beneficial to them. We have set up an educational research project with the aim of implementing CBA as a method of assessment and alleviating the workloads that come with mass education, but also with the objective of trying to improve learning by giving students some feedback on their performance (this being one of 18 "Taskforce" projects aimed at exploiting technology to improve teaching, learning and assessment).
We have introduced CBA into a first-year undergraduate subject, replacing all of the coursework with computer-based tests. This involved developing a test for each week of study, each delivered to about 250 students. Every test contains questions randomly selected from a bank of questions, so each student receives a different test every time. All students are given two attempts at each test. After each attempt, they are given their overall mark and are shown which questions they got wrong, but are not given any answers.
Thus the aim of the first attempt is to enable the student to identify any topics needing further study. They are able to revise any misunderstood topics and then sit another test on the same material. Although this way of presenting assessment is not novel, today it would be virtually impossible to administer using non-CBA methods because of sheer logistics and workloads.
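The scheme described above - drawing a random test from a question bank, marking it, and reporting the mark plus which questions were wrong while withholding the answers - can be sketched in a few lines. This is an illustrative model only, not the Perception software itself; the question-bank structure, scoring scale and function names are all assumptions.

```python
import random

# Hypothetical question bank: each entry has an id, a prompt and a
# correct answer index (an assumption for illustration).
QUESTION_BANK = [
    {"id": i, "prompt": f"Question {i}", "answer": i % 4}
    for i in range(1, 41)
]

def generate_test(bank, n_questions, rng=random):
    """Draw a random subset of the bank, so each student
    receives a different test every time."""
    return rng.sample(bank, n_questions)

def mark_test(test, responses):
    """Return an overall percentage mark and the ids of the questions
    answered wrongly - but not the correct answers, so the student
    must revise and retry, as in the two-attempt scheme above."""
    wrong = [q["id"] for q, r in zip(test, responses) if r != q["answer"]]
    mark = 100 * (len(test) - len(wrong)) // len(test)
    return {"mark": mark, "wrong_question_ids": wrong}

# Example: a student answers 7 of 10 questions correctly
# (99 is never a valid answer index in this toy bank).
test = generate_test(QUESTION_BANK, 10)
responses = [q["answer"] for q in test[:7]] + [99, 99, 99]
feedback = mark_test(test, responses)
print(feedback["mark"])                      # 70
print(len(feedback["wrong_question_ids"]))   # 3
```

Between attempts, the student would use `wrong_question_ids` to identify the topics needing further study; the second attempt simply calls `generate_test` again, drawing a fresh random selection on the same material.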
The results of the tests show that, with the exception of those students who scored an A grade at the first attempt, almost all the students took both attempts at the assessments. This is encouraging, as we did not expect all students to put this amount of effort into the assessment. Also, it is clear that the marks generally improve at the second attempt, leading us to conclude that the students have learnt something from the first attempt. Our research shows that even a minimal use of feedback greatly improves results.
At the end of the course, we asked the students to comment on the CBA. Their responses indicate that on the whole they were happy with the assessment method. Most notably, 44 per cent also stated that the first attempt helped improve their learning. These students commented that the feedback showed them their strengths and weaknesses on a given topic, as well as the overall level of their understanding (often lower than they had appreciated). From the many comments received, it is clear that the feedback given to students on their first attempt helps them improve their learning.
We have come to the conclusion that using CBA with just minimal feedback leads to improved learning, so it is something worth exploiting to the full.
Kim McCaffery is lecturer, School of Engineering Systems, Coventry University.