A number of methodological issues are cause for concern in the national student satisfaction survey ("Student poll puts staff under pressure", September 23).
First, the subject categories bear little relationship to degree programmes and thus to the undergraduate experience - for example, my discipline of civil engineering is grouped with chemical engineering and other engineering subjects. At my institution these subjects are largely taught separately, so the student experience will differ. The methodology therefore averages across fundamentally different student samples.
Second, when the results for my subject are analysed, it can be shown that for each question the average responses are very similar, with little spread between institutions. Any "differences" are almost certainly not statistically significant.
Finally, the threshold for results to be included in the analysis is at least 30 responses, or more than half the students surveyed. Such a small sample can easily be skewed by a handful of dissatisfied students - who perhaps have just received a rigorously marked piece of coursework that was not to their liking.
While some of these problems may be less significant for institution-wide assessments, where the sample sizes are larger, the subject-level scores should be presented with at least some indication of their reliability in statistical terms.
The results at subject level are neither reliable nor statistically significant, and certainly not robust enough to be used for the formulation of league tables. I can think of many ways in which Higher Education Funding Council for England money could be much better spent.
Chris Baker, Birmingham University