Student poll is 'not valid'

October 14, 2005

The Government's flagship national poll of student satisfaction was declared invalid this week by leading members of the steering group that helped design the survey.

Harvey Goldstein, professor of statistics at Bristol University, and Ivor Goddard, the director-general of the Royal Statistical Society, said this week that the national poll of more than 170,000 students - published for the first time last month - did not include the "essential requirements" to safeguard against low response rates.

Professor Goldstein said that the Higher Education Funding Council for England had reneged on an agreement to include statistical "uncertainty intervals" or "confidence intervals" to be used where response rates were low. Such intervals would prevent the publication of a spuriously precise figure where low response rates made precision unobtainable, and thus ensure that valid statistical comparisons could be made.

In a letter to The Times Higher, Professor Goldstein says: "It was generally agreed that a condition for the survey's validity was the provision of statistical uncertainty intervals for the scores... Since these intervals now seem to have been dropped, it is not possible to make scientifically valid comparisons between institutions."

Mr Goddard said: "I too was a member of the steering committee for the pilot version of the National Student Satisfaction Survey, and I share the concerns expressed by Harvey Goldstein in the letter."

Under the NSS, students rated various aspects of their university experience on a scale from one to five. The categories were awarded an average score by Hefce, which managed the survey. It created the first national league table of students' assessment of the quality of their experiences.

Including the uncertainty intervals would have meant presenting each department's average not as a single precise figure but as a small range of possible values, reflecting the uncertainty introduced by small response rates.

Professor Goldstein told The Times Higher: "It looks like Hefce was under pressure to put out the quick and dirty survey results without the safeguards agreed."

He added that Hefce had acknowledged that there was an issue with low response rates by excluding the results for any department where fewer than 30 students replied to the survey.

"However, even with responses of 40 or 50 you have still got large intervals," he said. "So the problem is still there. You need extremely large numbers to really tell departments apart - it is not good enough to say you have excluded those under 30. You need to describe everything as a range not just a number, which gives a spurious accuracy."
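Goldstein's point can be illustrated with a simple calculation. The sketch below, using entirely hypothetical figures (the means, spread, and sample sizes are assumptions, not survey data), shows how the approximate 95% confidence interval around a department's average score on the one-to-five scale narrows only slowly as responses increase:

```python
import math

def confidence_interval(mean, std_dev, n, z=1.96):
    """Approximate 95% confidence interval (low, high) for a mean score,
    using the normal approximation: mean +/- z * std_dev / sqrt(n)."""
    half_width = z * std_dev / math.sqrt(n)
    return (mean - half_width, mean + half_width)

# Hypothetical example: a department averaging 4.1 with a spread of
# 0.9 points. With only 30 respondents the interval spans over half a
# point, so it cannot be reliably distinguished from a rival on 3.9:
print(confidence_interval(4.1, 0.9, 30))   # roughly (3.78, 4.42)
print(confidence_interval(4.1, 0.9, 500))  # roughly (4.02, 4.18)
```

Even at 50 responses the half-width is still about 0.25 points, which is why excluding departments below 30 responses does not remove the problem.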

Hefce said: "We agree with Professor Goldstein that the provision of statistical uncertainty intervals could help users in making decisions, and it is one of the features we are keen to introduce as part of the process for developing the site. One of the challenges is to provide the more sophisticated users with all the additional information they need without confusing others. We shall be carrying out further tests with a range of users, including prospective students, to see how this can be achieved."
