Why I believe an annual student satisfaction survey is needed

July 4, 2003

What course should I study, and where? This is a decision that affects the life of every prospective student each year. The decision-making process is difficult and not undertaken lightly.

Those going through it like to think they have acquired all the facts and information they need: there are prospectuses, open days, Quality Assurance Agency audit reports, and Universities and Colleges Admissions Service points. One of their peers may even be studying a course they are contemplating.

But prospectuses and open days will never be objective, as both are designed to promote the university; neither will provide a "warts-and-all" guide. A friend doing the same or a similar course may be having a fantastic time, but equally they may hate every minute of it. Who or what should a prospective student believe? The hype or the friend?

What prospective students need is good-quality advice from a credible source - not a booklet trying to promote the course and university, and not first-hand information from one person or second- or even third-hand accounts from others. Providing that advice would be the role of the student satisfaction survey proposed by the government and backed by the National Union of Students.

The NUS is committed to the development of a survey that will gauge student satisfaction at subject level. Union representatives are on the steering committee helping to develop this tool so that it is capable of providing useful information to prospective students. It will not be a weapon with which students can hit out at lecturers over poor essay marks.

One of the criticisms of the Australian model is that a student disgruntled with his or her degree mark could exact revenge through the survey. To avoid this pitfall, I believe the survey should be completed when students have finished their exams but before they receive their degree classification. Away from exam stresses, students would be more inclined to complete the questionnaire, and their comments would not be influenced by final marks.

There have been complaints (THES, June) that just ten students could damage the reputation of a course and its university by marking it down in the survey. Clearly, ten students out of a couple of thousand would not be a fair representation and should not be treated as such. However, if there were 12 students on a particular branch of science and ten responded, that would represent a sizeable proportion (83.3 per cent) of the course.

If 83.3 per cent of the 3,000 students studying a different course responded, no eyebrows would be raised. So, to avoid a scenario in which ten students could damage a university's reputation, more general subject headings could be used. This broader categorisation would pool responses from specialist courses into groups large enough to be taken seriously. A wider catchment of students from, say, English or plant sciences would not be small enough for the results to be dismissed as invalid.

To further counter arguments about poor response rates and unreliable evidence, I would like to see a benchmark response rate established. To prevent this being abused by institutions that do not want their courses properly audited, percentage response rates should be published alongside the results. Prospective students could then assess the thoroughness of the survey for themselves.
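To make the arithmetic concrete, here is a minimal sketch in Python of how response rates might be calculated and published alongside results. The benchmark figure, course names and student numbers are hypothetical illustrations, not figures proposed for the survey itself.

    def response_rate(respondents: int, enrolled: int) -> float:
        """Return the response rate as a percentage of enrolled students."""
        return 100.0 * respondents / enrolled

    # Hypothetical benchmark: treat results below this rate with caution.
    BENCHMARK_RATE = 60.0

    courses = [
        # (course name, respondents, enrolled) - all numbers illustrative
        ("Specialist science course", 10, 12),    # 10 of 12 = 83.3 per cent
        ("Large course", 10, 2000),               # 10 of 2,000 = 0.5 per cent
        ("English (broad heading)", 2500, 3000),  # 83.3 per cent of 3,000
    ]

    for name, respondents, enrolled in courses:
        rate = response_rate(respondents, enrolled)
        caution = "" if rate >= BENCHMARK_RATE else " (below benchmark)"
        print(f"{name}: {respondents}/{enrolled} = {rate:.1f} per cent{caution}")

Publishing the rate alongside each result, rather than the result alone, would let a reader weigh ten respondents out of 12 quite differently from ten out of 2,000.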

As things stand, there is no way for prospective students to hear directly from their predecessors exactly what their course is like, which is shocking when we consider the long-term implications of the decision they are making. Promotional prospectuses, open days and the experiences of perhaps one contemporary are no basis on which to make that decision. The results of a thorough and independent student satisfaction survey would be of far more help.

Chris Weavers is National Union of Students vice-president, education
