The phones have been ringing for the past month and, across the country, four-fifths of last summer’s graduates have answered the Destinations of Leavers from Higher Education survey, as they have for the past decade. And, across the campus, almost three-quarters of this year’s finalists are answering at least 23 questions about their academic experience in the National Student Survey. A dozen years, almost two dozen questions, five choices and more than 250,000 respondents every spring.
So we now know what the vast majority of graduates are doing six months into their post-graduation lives, and we will soon know what the cohort behind them think of their university experience. Yet we decry both studies, and we are in grave danger of dumping a decade or more of time-series data. Why?
As academics we’re critics, bred to focus on limitations and shortcomings. An evening with us is like a date with an actuary. But where else will you find such a welter of information provided by those with real experiential insight? Of course, the DLHE isn’t perfect; not everyone’s long-term career is going to be determined by what they are doing in the first winter of discontent. But it gives us a vital indicator of the diminishing number affording postgraduate study, of the initial earnings of this April’s taxpayers, and of the first steps in the labour market. Yes, it should be supplemented by time-series data that follows the individual, not just the sector, but this is not an argument for its disappearance.
The future of the NSS has also been subject to review; it will be different and it may well also disappear, at least in its present form. We already know that question 23 on students’ unions – a vital measure of student engagement – has been consigned to the dustbin of history. The Higher Education Funding Council for England is warning institutions and departments against exerting undue influence on this year’s NSS respondents, having identified abuses in the past. But reported instances have been rare and are an argument for a different approach to data collection focused on the individual, not the institution – not data dismissal.
Basic questionnaire techniques, such as asking some questions in the negative, would help to eliminate the column-tickers: those who return 22 unthinking or uncritical “fives” or, more rarely, 22 unthinking or malevolent “ones”. Alternatively, eliminate the outliers from the calculation of institutional means so beloved of some league table compilers. And persuade other compilers to stop taking two categories (scores of four and five) from a single question (overall satisfaction) and presenting this as a holistic analysis of anything more than one-50th of what the survey has to say.
We already operate in an evidence-rich environment, but one that is data-engagement weak. This is not necessarily a bad thing; choosing a university is, and should be, as much a personal, even an emotive, decision as a rational one. But, if the state heads in the direction it is signposting, starting anew, ditching data sources and losing time-series evidence, we may be heading towards a darker shade of pale. And, when we bemoan this loss in a half-decade’s time, there will be no way to backfill the gap.
John Cater is vice-chancellor of Edge Hill University.