While readers should be encouraged by some of the changes made to the Times Higher Education Student Experience Survey (23 March), it remains deeply flawed. It is honest of you to admit (even if only in the small print) that a difference of about 3.7 points would be needed before two institutions’ scores could be claimed to be statistically significantly different (so a university ranked 61st might not have a significantly different score from one ranked 22nd or one ranked 94th). However, the survey suffers from the problems common to all such surveys: arbitrary weightings (which do not, in fact, match each item’s influence on the final score), an arbitrary choice of questions (why ask about sports facilities but not art, music or theatre facilities?) and so on.
The decision to report subsets of responses as separate tables is also to be applauded, but there is little evidence that the factors chosen make statistical or theoretical sense. There is evidence that some questions operate at the institution level (such as “good community atmosphere”) while others operate at the department level (such as “well-structured courses”), but no evidence of any structure beyond these two factors.
The key problem, however, is that there is a strong correlation between the response rate and the rating (even after accounting for institution size): that is, institutions with proportionately more students participating in the survey get higher ratings. Until there is a clear explanation of why this should be the case, and of why a curiously high number of institutions have exactly 100 respondents, we can neither make sense of the results nor trust the rankings.
Principal, Josephine Butler College