Analysis - Global benchmarks for student standards

How do you compare graduate quality internationally? David Matthews examines an OECD pilot project

March 28, 2013

There is an elephant in the room of global higher education. Given the endless debate about university prestige, reputation, exclusivity and so on, you might assume we actually know how capable students are when they graduate.

But we don’t. Not in any rigorous or scientific way, at least.

This is because universities set and mark their own exams, and there is no firm national standard, let alone an international one, for deciding who deserves a first, a 2:1 or a particular grade point average.

We do know that some universities have higher entry grades than others. We also know that graduates from certain types of universities are favoured by employers. But there is no objective way to know how talented final-year students really are and therefore no precise method to measure what, if anything, they have learned at university.

This could all be about to change, however. In the first half of 2012, the Organisation for Economic Co-operation and Development piloted internationally comparable skills tests on 23,000 students in 17 countries, from the US to Egypt, and is now considering a full roll-out.

The Assessment of Higher Education Learning Outcomes (Ahelo) project involved students taking computerised exams in “generic skills” and, for those who had studied the subjects, in either economics or engineering. They were scored using a mixture of qualitative and quantitative marking.

At present, the global league tables that have become hugely influential are based mainly on research power. The OECD claims that current ranking criteria are “narrow” and create a “distorted vision of educational success and fail to capture the essential elements of an education: teaching and learning”.

The OECD already runs global standardised tests in reading, mathematics and science for 15-year-olds - the Programme for International Student Assessment (Pisa). Sixty-four countries participated in the 2012 round, the results of which have been used to create an international league table comparing school systems.

If it were possible to run Pisa-like tests for higher education, who knows what shocks might be in store for supposedly elite institutions and lauded national university systems.

“This is potentially hugely significant,” said Bahram Bekhradnia, director of the Higher Education Policy Institute. “I suspect that governments in particular would be watching very closely.”

This kind of project has been attempted before at a national level, with unsettling results. The Collegiate Learning Assessment (CLA) tests students’ abilities to “think critically, reason analytically, solve problems and communicate clearly and cogently” across institutions in the US.

The data collected were used as the basis for a book, Academically Adrift: Limited Learning on College Campuses (2011), by Richard Arum, a sociology professor at New York University, and Josipa Roksa, an assistant professor of sociology at the University of Virginia. Their conclusions were damning: 36 per cent of students showed no improvement in the CLA skills over four years of higher education.

Ahelo builds on the CLA framework, but is the first attempt to make such comparisons internationally. Perhaps the most contentious element is what questions are asked. “I do suspect it will be enormously difficult to find culturally neutral things to measure,” Mr Bekhradnia said.

In the generic skills test, for example, students are presented with a situation that is fairly clearly set in the US: they are put in the role of a city administrator investigating the emergence of a three-eyed catfish in the local lake. Students are provided with a sheaf of evidence - including a map, news report, graphs of pollution levels and an email exchange with a biologist - and asked to figure out the cause.

Their analytical reasoning and evaluation, problem-solving and writing effectiveness are then graded on a scale of zero to six.

However, students from Kuwait (which participated in the pilot) “don’t have a lake and don’t have catfish”, acknowledged Diane Lalancette, an analyst in the OECD’s directorate of education. “But this doesn’t mean they don’t know what they are.”

She stressed that the participating countries had all agreed to the questions, and said she had been “surprised” at how much consensus there had been on setting a core of questions for the economics exam.

Although the discipline contains competing theoretical models, it was possible to formulate questions that tested a student’s ability to “think like an economist”, Ms Lalancette said. And this was only the pilot study - they did not set out to write perfect questions, she pointed out.

Even if culturally neutral questions can be devised, there is still the danger that Ahelo will pressure universities to teach to the test, reducing the diversity of the global curriculum.

Homogenising effect

A worldwide Ahelo exam would be deeply “homogenising”, said Alison Wolf, director of the International Centre for University Policy Research at King’s College London.

“It would be an incredibly conservative force” and would create conformity in higher education in a manner similar to what has happened in the UK school system, she argued.

Partly for this reason, universities will refuse to sign up to the Ahelo system, Professor Wolf predicted, because they would be on a “hiding to nothing”.

At best, the results will confirm that an institution teaches complex skills - something already assumed by academics, students and the government. But at worst, Ahelo could show that a university or an entire higher education system was actually teaching students far less than anyone thought.

Ms Lalancette stressed that most OECD members do not want a ranking, either by country or institution. Universities might be given some sort of comparative information without making the full results public, she said.

“Pull the other one,” said Professor Wolf, who is sceptical that anyone would be interested in anything other than league tables. And of course, Ahelo results could always be turned into an unofficial ranking, Ms Lalancette acknowledged, even if the OECD refused to create a league table itself.

The Ahelo project is also working out how to measure “value added” - that is, what students learn during their degree courses. They could be tested at the beginning and the end of their degree programmes, or the first- and final-year cohorts could be examined at the same time. This latter option would save time, but it assumes that a university’s entry standards remain constant, Ms Lalancette explained.
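In outline, the difference between the two designs can be sketched with simple notation (illustrative only, not Ahelo’s own). Testing the same cohort at entry and again at graduation gives

\[ \Delta_{\text{longitudinal}} = \bar{s}_{\text{final}} - \bar{s}_{\text{entry}}, \]

whereas testing this year’s first- and final-year cohorts at the same time gives

\[ \Delta_{\text{cross-sectional}} = \bar{s}_{\text{final-year cohort}} - \bar{s}_{\text{first-year cohort}}. \]

The two estimates coincide only if the final-year cohort entered with the same average baseline that the current first-years show now - which is precisely the constant-entry-standards assumption.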

OECD member countries will now discuss whether to roll out Ahelo properly, possibly at the Education Policy Committee meeting in April. Even if they give it the go-ahead, the first batch of results will not be ready until 2017-18, according to Ms Lalancette.

Everyone agrees on the need for better, comparable data on student ability. “We actually have incredibly little information on whether people learn anything in university,” Professor Wolf said. But whether Ahelo is the best way to correct this remains to be seen.

david.matthews@tsleducation.com
