With more than 1.4 million undergraduates, thousands of courses and hundreds of providers, English higher education is anything but uniform.
So would it really be possible to design a standardised test that could be taken by students throughout the country, regardless of the subject that they study and the university they are enrolled at?
This is the question being asked by the Higher Education Funding Council for England as it looks for ways to measure “learning gain” – the improvement in skills and competencies made by students during their time at university.
Standardised testing has been pioneered in the US, and could be useful in the UK because the high proportion of students who get a 2:1 or a first makes it difficult to compare standards between institutions.
Hefce is seeking to trial nationally administered exams that could be delivered electronically at two points during a student’s time at university, and would be separate from degree-related assessment.
The funding council is also providing £4 million for 12 projects to test methods of measuring learning gain, seven of which will include standardised testing in their methodology.
But this is not the only tool in the box. Methods that use students’ grades, survey data and qualitative methods such as personal development portfolios compiled by students will also be explored in the institutional pilots.
Chris Millward, Hefce’s director of policy, said that a system that mixed several of these approaches was likely to be the one that was adopted.
“It will be very important, as we do this [pilot of standardised testing], to understand the quite different types of students that are involved in higher education currently and the different dynamics in different disciplines,” Mr Millward said. “We don’t think this will lead to simplistic outcomes that will be applicable system-wide but, given the practice that is prevalent in other countries, it would be one part of the equation.”
Comparing art and astrophysics
A study of learning gain, conducted by consultants Rand Europe and published by Hefce last month, says that a number of leading British universities already make use of standardised testing.
Students on medical sciences courses at institutions such as the universities of Manchester and Cardiff take a test several times a year based on a bank of questions, in the expectation that they will get low marks at first but will improve over time.
The report says that this approach, called Progress Testing, may help to encourage students to take greater responsibility for their learning.
But Hefce’s preference is believed to be for generic, non-subject-specific exams that would allow comparisons to be made across disciplinary boundaries.
One of the most widely used tests, the Collegiate Learning Assessment, is used by more than 700 institutions in the US and worldwide and aims to measure students’ critical thinking and written communication skills.
The exam, which can be used to measure learning gain if it is taken at different points in time, sets students one assignment of 60 minutes requiring them to analyse a library of documents and to write a response to a scenario-based problem, and a further 30-minute task in which they must answer a series of multiple-choice questions.
Chris Rust, emeritus professor of higher education at Oxford Brookes University, argued that a before-and-after test “certainly seems to be better” than some of the metrics for measuring learning gain that have been suggested, such as the number of firsts – a measure that could lead to grade inflation – or employability, which is linked to a number of non-academic factors.
“If you could get a good test looking at things that a degree is trying to do, that might be interesting,” he said. “If you think [that] 60 per cent of graduate jobs don’t care what discipline a graduate sat, this kind of measure, if it really was a good valid test, might be better than what happens now, when there are prejudices about what university you went to and no one takes into account if you got much higher marks in maths.”
But Professor Rust said that he was “sceptical” about whether a meaningful test could actually be developed, and the Rand Europe report acknowledges that there are concerns about whether such tests are “too general” to provide a useful measure of learning gain, particularly given that degrees in England are relatively specialised.
Sally Brown, emeritus professor of higher education diversity in teaching and learning at Leeds Beckett University, questioned whether the range of skills that students develop, and that employers want, could be reliably measured by a standardised test.
“A standardised test for universities could only concentrate on generic skills such as interpersonal communication, problem solving, use of digital information and dealing with complex situations,” Professor Brown said. “If you look at these measures and look at how you are going to test them, an exam isn’t going to be the best way to do it.”
While the results may have some value for students, and Hefce also hopes that institutions could use them to identify which teaching methods work best, the results of a generic nationwide test would inevitably be used to compare institutional performance. They could be an important metric for the government’s proposed teaching excellence framework.
Mr Millward said that any use of learning gain measures in a cross-institutional context would require “very careful piloting”, and Hefce’s emphasis on using testing as part of a mixed methods approach may offer some reassurance.
The funding council will be running a multi-institutional trial of an existing learning gain assessment programme – likely to be the Wabash National Study, developed at Wabash College in the US state of Indiana – which uses a mix of grades, surveys and standardised tests.
But the possibility of standardised testing being used to compare university performance has been criticised by Baroness Wolf of Dulwich, Sir Roy Griffiths professor of public sector management at King’s College London, who said that the exams would be “completely unable” to measure university performance “in any reliable or valid sort of way”. The content of such tests would be “far more closely related to the subject matter of some subjects and degrees than they are to the subject matter of others”, she said.
“Universities vary hugely in their degree ‘mix’, so a university whose degree mix is heavily weighted towards the content of a test will appear to be doing a much better job than those whose mix is not so closely linked, even though that isn’t necessarily the case at all,” Baroness Wolf said.
“If there were a single test, universities would be tempted to distort and maul about the syllabuses of subjects that are not naturally aligned with it, in order to prepare students for the test, whether or not this is a sensible use of their time.”