Judging teaching in FE

March 15, 2002

Peer observation of teaching is being held up as the next big quality tool. But will staff want to score one another? The THES reports.

Susan Orr, teaching and learning coordinator at the London College of Fashion, and Marion Wilks, head of academic services, Surrey Institute of Art and Design, University College.

In higher education, it is unlikely that you will ever be asked to quantify your teaching on a seven-point scale. Equally, if you take part in peer observation, it is unlikely that you will be asked to grade your peer's teaching.

But in further education you might well already be grading colleagues' teaching sessions.

Colleges have to write a self-assessment report for the inspectors and for yearly funding. In it, they score themselves for aspects of provision on a scale of one to five, where one is excellent. When inspectors call, they independently rate teaching and learning.

The colleges are also judged on how well they judge themselves. For example, if a college awards an area a grade one when the inspectors regard it as a three, the college is penalised.

Colleges have looked at their peer-observation schemes to see if they can be used as an evidence base from which to grade curriculum areas. Unsurprisingly, this has led to a debate about how to measure teaching and learning.

Some colleges are adapting their peer-observation arrangements to include grading against the inspectors' seven-point scale. But there are problems associated with linking peer observation to inspection requirements.

Peer observation is a developmental process. If colleagues are to score one another it becomes judgemental. Participation is usually optional, but in this context it becomes compulsory.

Participants are asked what they have learnt from the process, but do not usually have to send details of the observation itself to line managers. This would not be possible if reports were to be used to grade provision. Grading peer observation suggests simple numerical scores can capture what is happening in a complex teaching environment.

There is the added danger that lecturers who get low grades may be demoralised and those who get high grades may become complacent. This divisive atmosphere could engender a competitive approach that threatens the many benefits of peer observation. As a result, some colleges are opting out of grading peer observation.

Hazards of feedback
Student feedback forms have long been used to gauge reaction to teaching performance. The proposed introduction of student satisfaction surveys has, however, added a further dimension.

The Cooke report is expected to recommend that universities make public graduates' views on their university experience. This survey would provide "up-to-date, consistent information about quality and standards".

But a survey is only as strong as the questions asked. These questions, and the reliability of the data they provide, are the subject of much debate.

Lee Harvey, director of the Centre for Research into Quality at the University of Central England, argues that questions about teacher performance should not be included.

David Baker, an undergraduate programme director at Warwick University, says that publishing student feedback forms would undermine the balance between fair comment and inbuilt bias.

Zazie Todd, a social psychology lecturer at Leeds University, warns that student bias on grounds of gender, age, ethnicity and teaching style can all come to the fore in feedback.

Pat Leon
