Student Experience Survey 2017: methodology

How we judged the student experience

March 23 2017

The rankings for Times Higher Education's Student Experience Survey (SES) are now in their 10th annual release. As in every previous year of the survey, the research has been conducted by YouthSight.

Here we reveal the answers to some of the key methodological questions.

What are the questions behind the SES?

The SES questions were originally derived from the responses of 1,000 undergraduates, who were asked to describe how their university contributed to a positive or negative student experience. Those responses were used to formulate 21 measures of student experience, across eight areas, which are now asked in the SES as follows:

Based on your experience, how strongly do you agree that your university offers the following... (seven-point scale from ‘Strongly agree’ to ‘Strongly disagree’)

Academic experience
- High quality staff / lectures
- Helpful / interested staff
- Well-structured courses
- Good personal relationship with teaching staff
- Tuition in small groups
- Fair workload

University facilities
- High quality facilities
- Centralised / convenient facilities
- Cheap shop / bar / amenities
- Good library and library opening hours
- Good sports facilities

Societal experience
- Good social life
- Good community atmosphere
- Good extra-curricular activities / societies
- Good environment on campus / around university

Student welfare
- Personal requirements catered for
- Good support / welfare

Accommodation
- Accommodation

Industry connections
- Industry connections

Security
- Security

Student union
- Good Student Union

In addition, respondents are asked to rate a 22nd measure, ‘I would recommend my university to a friend’, on the same scale.

The questions have remained unchanged since they were devised in 2005 to allow stability and comparability in analysis across years.

When and how is the SES conducted?

SES responses are collected between October and June each year from full-time undergraduates studying in the UK. There is a one-year time lag between data collection and publication: the data shown in the SES 2017 were collected between October 2015 and June 2016.

The survey is appended to YouthSight’s Student Omnibus Surveys. Each respondent can participate only once per year. Neither the purpose nor the timing of the survey is divulged to respondents, and HEIs are unaware of which of their undergraduates have been selected to participate.

Who takes part in the SES?

Only members of YouthSight’s OpinionPanel Community (OPC) take part in the survey. The OPC is the UK’s largest youth (16-30) research panel and includes 80,000 full-time undergraduate students studying at UK universities. Most undergraduate students are given an opportunity to join the OPC via partnership programmes and advertising. In return, members receive incentives each time they complete an online survey or take part in qualitative research.

How many respondents take part in the SES each year?

For each of the past four years around 15,000 full-time undergraduates have participated in the SES. The exact number for the SES 2017 was 15,876.

How are the SES rankings determined?

For each of the 22 survey measures, a mean "agreement score" is calculated per institution. Each mean score for the 21 formulated measures is then weighted according to how strongly the measure correlates with the 22nd measure, ‘I would recommend my university to a friend’. The weights are as follows:

Weight 2
- High quality staff / lectures
- Helpful / interested staff
- Well-structured courses
- Good social life
- Good community atmosphere
- Good extra-curricular activities / societies
- Good environment on campus / around university
- High quality facilities
- Personal requirements catered for

Weight 1.5
- Good Student Union
- Good support / welfare
- Good personal relationship with teaching staff
- Centralised / convenient facilities
- Good industry connections
- Good accommodation
- Good security

Weight 1
- Cheap shop / bar / amenities
- Tuition in small groups
- Good library and library opening hours
- Fair workload
- Good sports facilities
For each institution, the sum of the weighted mean scores is divided by the sum of the weights (33.5) to give a weighted mean. This is then indexed to produce an Overall Score out of 100, by which institutions are ranked.
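To make the calculation concrete, here is a minimal Python sketch of the scheme described above. The weights reproduce the table; the final indexing step is an assumption, because the article does not specify how the weighted mean is converted to a score out of 100 (the sketch simply rescales the 1-7 agreement scale linearly).

```python
# Minimal sketch of the SES Overall Score calculation. The weights reproduce
# the table above; the indexing step is an assumption (a linear rescaling of
# the 1-7 agreement scale onto 0-100), as the article does not specify it.

WEIGHTS = {
    # Weight 2
    "High quality staff / lectures": 2.0,
    "Helpful / interested staff": 2.0,
    "Well-structured courses": 2.0,
    "Good social life": 2.0,
    "Good community atmosphere": 2.0,
    "Good extra-curricular activities / societies": 2.0,
    "Good environment on campus / around university": 2.0,
    "High quality facilities": 2.0,
    "Personal requirements catered for": 2.0,
    # Weight 1.5
    "Good Student Union": 1.5,
    "Good support / welfare": 1.5,
    "Good personal relationship with teaching staff": 1.5,
    "Centralised / convenient facilities": 1.5,
    "Good industry connections": 1.5,
    "Good accommodation": 1.5,
    "Good security": 1.5,
    # Weight 1
    "Cheap shop / bar / amenities": 1.0,
    "Tuition in small groups": 1.0,
    "Good library and library opening hours": 1.0,
    "Fair workload": 1.0,
    "Good sports facilities": 1.0,
}

assert sum(WEIGHTS.values()) == 33.5  # matches the sum of weights in the text


def overall_score(mean_scores: dict[str, float]) -> float:
    """Weighted mean of an institution's 21 mean agreement scores
    (assumed coded 1 = strongly disagree ... 7 = strongly agree),
    indexed to 0-100."""
    weighted_sum = sum(WEIGHTS[m] * mean_scores[m] for m in WEIGHTS)
    weighted_mean = weighted_sum / sum(WEIGHTS.values())
    return (weighted_mean - 1.0) / 6.0 * 100.0  # assumed linear indexing
```

On that assumed indexing, an institution whose weighted mean agreement was 5.5 on the seven-point scale would receive an Overall Score of 75.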

Why are some HE institutions missing from the SES ranking?

The survey team acknowledges that maximising the number of institutions in the SES makes the ranking more useful. However, because the number of achievable responses per institution broadly reflects institution size, it is not always possible to achieve a large enough sample to provide statistically robust data at every HEI (logically, it tends to be the smaller HEIs that do not appear).

The compromise that balances the size of the listing against statistical validity is a minimum threshold of 50 responses per HEI for inclusion. Despite this fixed threshold, the survey team aims higher: each year it seeks to maximise the number of institutions achieving 100 responses or more.
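In data terms, the inclusion rule is a simple filter. A minimal sketch, assuming a hypothetical one-row-per-respondent file (the file and column names are assumptions, not the survey team's actual pipeline):

```python
# Hypothetical sketch of the inclusion threshold; the file and column names
# are assumptions, not the survey team's actual pipeline.
import pandas as pd

responses = pd.read_csv("ses_responses.csv")      # one row per respondent
counts = responses.groupby("institution").size()  # responses per institution

included = counts[counts >= 50].index    # meets the publication threshold
preferred = counts[counts >= 100].index  # the larger samples the team aims for
```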

This year, for the first time, we reached 100 institutions with 100 or more responses (up from 94 last year). In total, 122 institutions were included in the SES 2017, with an average sample size of 130 per institution.

Are base sizes large enough to make comparisons between institutions?

As with any respondent survey, there is an "error bar" around each institution's result. It is linked to the institution's sample size and variance (how consistent the scores were between respondents at a given institution).

It should be noted that, on average, a difference of 3.7 points in the Overall Score is required to produce a significant difference between institutions at the 95 per cent confidence level. This means that at the top and bottom of the rankings, where scores are more spread out, HEIs only have to move a few places up or down to see a significant difference. In the middle of the rankings, however, where the scores are bunched together, an HEI's ranking needs to shift by many more places – typically 30-40 places for a middle-ranked institution. This explains why the institutions at the top and bottom are ranked fairly consistently year on year, while the ranking of institutions in the middle can fluctuate.
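The survey team's exact test is not described, but the comparison implied is of the standard two-sample kind. The sketch below is a minimal illustration: a two-sample z-test on two institutions' Overall Scores, with purely illustrative means, standard deviations and sample sizes.

```python
# Hedged sketch: is the gap between two institutions' Overall Scores
# significant at the 95 per cent confidence level? This is a standard
# two-sample z-test, not necessarily the survey team's method, and the
# example numbers below are illustrative only.
import math


def significantly_different(mean_a: float, sd_a: float, n_a: int,
                            mean_b: float, sd_b: float, n_b: int) -> bool:
    se_diff = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)  # SE of the difference
    z = abs(mean_a - mean_b) / se_diff
    return z > 1.96  # two-tailed critical value at 95 per cent


# A 3.7-point gap with illustrative spreads at the average SES sample size:
print(significantly_different(72.0, 13.0, 130, 68.3, 13.0, 130))  # True
```

With these illustrative figures, a 3.7-point gap at the average sample size of about 130 respondents per institution clears the 1.96 threshold, consistent with the average difference quoted above.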

Is it fair to collapse student experience into a single measure ranking?

The SES has been designed to be multidimensional, that is, to cover all the important elements of the student experience, whether academic, societal or pastoral.

As described above, each of the 21 formulated measures that feed into the Overall Score is weighted to reflect its relative importance at a sector level. However, the survey team appreciates that groups of students within the undergraduate population will attach differing importance to each of the measures compared with the sector as a whole. For example, some undergraduates may place more importance on academic measures, while others place more importance on societal measures. This may then affect the Overall Score for some institutions. Therefore, for the first time this year, seven composite scores have been created to allow institutions to see how they are performing in different areas of student experience.
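The article does not list the seven composites or how they are constructed. Purely as an assumption, the sketch below reuses the survey's eight question areas and takes an unweighted mean within each; any merge from eight areas down to seven composites is likewise hypothetical.

```python
# Hedged sketch of area-level composite scores. The article does not list the
# seven composites or their construction; this grouping simply reuses the
# eight question areas and takes an unweighted mean per area, so it is an
# assumption rather than the published method.
from statistics import mean

AREAS = {
    "Academic experience": [
        "High quality staff / lectures",
        "Helpful / interested staff",
        "Well-structured courses",
        "Good personal relationship with teaching staff",
        "Tuition in small groups",
        "Fair workload",
    ],
    "University facilities": [
        "High quality facilities",
        "Centralised / convenient facilities",
        "Cheap shop / bar / amenities",
        "Good library and library opening hours",
        "Good sports facilities",
    ],
    "Societal experience": [
        "Good social life",
        "Good community atmosphere",
        "Good extra-curricular activities / societies",
        "Good environment on campus / around university",
    ],
    "Student welfare": [
        "Personal requirements catered for",
        "Good support / welfare",
    ],
    "Accommodation": ["Good accommodation"],
    "Industry connections": ["Good industry connections"],
    "Security": ["Good security"],
    "Student union": ["Good Student Union"],
}


def composite_scores(mean_scores: dict[str, float]) -> dict[str, float]:
    """Unweighted mean of each area's 1-7 mean agreement scores."""
    return {area: mean(mean_scores[m] for m in measures)
            for area, measures in AREAS.items()}
```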

