When red ink marks show that the lecturer is at fault

February 9, 2001

Project and exam results tell different stories, so they should be kept separate, argues Howard Allen.

Teaching structural engineering over a period of years involves a lot of time spent marking students' projects. Is it worth all the effort and does it do any good? Do the marks have any real value in the end-of-year assessments?

A typical project can involve two to ten weeks of planning, discussing, sketching, calculating, drawing and writing, with or without the help of computers. Sixty students working in this way can produce a lot of paper for the supervisor to digest.

My early attempts to cope with the problem soon revealed that vague remarks such as "not very clearly explained" or "better diagrams needed" were not helpful to the student. So I began to explain more precisely what needed to be done, perhaps by using a more logical structure or drawing better diagrams. It was important (I thought) not only to point out mistakes, but also to show how they could be avoided. I soon discovered that it took an hour or so to go through a typical submission, so marking a single project exercise for the whole class swallowed a working week or more. Hardly a practical proposition.

There was another problem with my mini-essays in red ink. I found that only the exceptional student digested my comments and acted on them in the next exercise. Most students stuffed the marked work into a drawer to await the grand assembly of work at the end of session, ready for scrutiny by the external examiner. Students who did this had, in effect, learnt only bad habits from the work that they had just completed. This was not at all what I wanted, and it happened despite all that time spent on marking.

Yet another problem presented itself. It often happened that several (sometimes many) students made the same mistake and I found myself writing the same corrections and observations on one submission after another. A waste of effort.

What was to be done? It was only too clear that if ten students made similar mistakes, it was probably because I had not explained sufficiently what was expected of them. Or perhaps those ten students happened to be looking out of the window when that particular point had been dealt with. But then, why had they been looking out of the window?

I concluded that it was better to devote my main effort to getting the teaching right in the first place, making sure that the student knew what he or she was expected to do, rather than spending lots of time making corrections to imperfectly completed submissions (bolting the stable door after the horse had gone).

I still think that is the right approach, but it can be carried to excess. If you tell students exactly what to do, you might expect all submissions to be perfect, although even then there will be some students who cannot or will not follow advice. It will be pretty dull work too, with no initiative left to the student. Yet, the student needs to learn how to do the next job - whether a project, design, laboratory work or essay - more effectively. Assessment should encourage students to achieve the best possible result, thus reinforcing learning, maximising satisfaction and avoiding the learning of wrong procedures.

It follows from this that, if the preparation - the teaching - has been done properly and if the students have been correctly motivated (again a function of the teaching), then most of the work done should be of a high standard and should deserve a high mark. The average for the class will tend to be fairly high and the marks will be bunched towards the 100 per cent end of the scale.

Contrast this with the typical written examination. Here, the examiners usually aim to discriminate between good and bad students. A "good" examination is often taken to be one that produces a good spread of marks. An examination that regularly produced an average mark of 75 per cent might well be regarded with suspicion and many would feel uncomfortable with an average mark too far removed from 50 per cent. So a written examination is intended to produce a good spread of marks, while coursework (if it is good coursework) ought to produce marks bunched towards the top of the range.

Unfortunately, this distinction is not widely recognised. In a given subject, marks from coursework are mixed with marks from examinations. The fact that they are (or ought to be) distributed differently is rarely considered. If the mix varies from subject to subject, it is not possible to make a sensible comparison between the mixed mark in one subject and that in another.
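To make the arithmetic concrete, suppose (the figures here are purely illustrative, not drawn from any real cohort) that examination marks in a class are spread around 50 per cent while coursework marks bunch around 85 per cent. Then the blended class average swings by some 17 percentage points according to nothing more than the weighting a department happens to choose, as this short Python sketch shows:

    import random

    random.seed(1)

    # Illustrative assumption: exam marks spread widely around 50 per cent,
    # coursework marks bunched towards the top of the scale.
    exam = [min(100, max(0, random.gauss(50, 15))) for _ in range(60)]
    coursework = [min(100, max(0, random.gauss(85, 7))) for _ in range(60)]

    def blended_average(exam_weight):
        """Class mean when each student's mark mixes exam and coursework."""
        mixed = [exam_weight * e + (1 - exam_weight) * c
                 for e, c in zip(exam, coursework)]
        return sum(mixed) / len(mixed)

    # The same cohort looks roughly 17 points 'better' in a subject that
    # weights coursework at 70 per cent than in one weighting it at 20.
    print(f"80/20 exam/coursework: {blended_average(0.8):.1f}")
    print(f"30/70 exam/coursework: {blended_average(0.3):.1f}")

Two students of identical ability could thus emerge with quite different averages in different subjects, for reasons that have nothing to do with their performance.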

Many years ago, this difficulty did not arise because it was customary to treat coursework entirely separately from written examinations. Students had to complete their coursework to a satisfactory standard before they were even allowed to sit the examinations. This system had its own problems, but the degree classification was based unequivocally on the written examination (with the possible exception of the final-year dissertation if there was one).

While sifting applications for places on an MSc course, I found that some overseas universities recorded marks for written examinations separately from marks for coursework. This seemed to be a very good idea, because it allowed the two quite different sets of marks to be evaluated independently. Perhaps we should be thinking of a similar system for the United Kingdom.

Howard G. Allen is emeritus professor of structural engineering at the University of Southampton.
