A leading US school has embarked on a quest to find the holy grail of university league tables - how to measure accurately the academic impact an institution makes on its students.
Columbia University's five-year pilot project to identify the value-added component of higher education, now in its second year, could shed light on the dispute between academics and the media about the worth of university guides.
Academic critics claim university league tables lack the rigour the sector deserves. Media proponents counter that would-be students are entitled to the best information they can track down on choices of subject, department and institution.
The Columbia experiment, which is supported by the Association of American Universities (AAU), applies a standard measure at entry and exit to gauge changes independently of the discipline studied.
The pilot is likely to be extended to a small number of other schools in the autumn. This, according to John Vaughn, executive vice-president of the AAU (broadly the equivalent of the UK's Russell Group), will give a better indication of its value.
A further project on the wider issue of statistical indicators has also been launched by Unesco and the European Centre for Higher Education (Cepes). This three-year project was conceived at the 1998 World Conference on Higher Education, which called for a quantitative system of fact-reporting.
Commercial rankings largely split into two camps: those, like most published by national newspapers in the UK, that mine existing data; and those, pioneered by US News and World Report, that are based on attitudinal or reputational surveys of academics, employers and students.
A further division is between those that rank in numerical order and those that lump together similarly performing universities in clusters.
In the US, universities are ambivalent: in private they set considerable store by their comparative rankings, while in public they decry the rankings' lack of objectivity.
One American speaker at a Unesco-Cepes round table in Warsaw earlier this month complained that "the competition between universities for students is so sharp that these rankings become crucial - universities falsify data, distort data.
"There are very corrosive effects where resource allocation decisions... are made strictly in terms of moving up in the rankings. Rather than saying how should we invest our financial resources to achieve our mission, it is 'how do we invest our resources to move up in the rankings?'" In particular, there was a "corrosive" effect on the readiness of top schools to take a risk by admitting students from disadvantaged backgrounds.
Central issues that divide the commercial league-table compilers from the academics include:
- Whether data should or should not be weighted. Weighting was regarded as a subjective intervention likely to distort outcomes;
- Whether to rank institutions numerically or assign them to "clusters" of universities sharing broadly similar profiles;
- Whether small variations in data that is already open to error should be permitted to alter rankings;
- Whether the position of one institution in a ranking should be affected by the recalculation of the score for another.
HOW THE WORLD RANKS ITS UNIVERSITIES
- Asiaweek
Magazine based. Published rankings across 18 countries. Ceased publication in 2001.
- Good Universities Guide
Web/book based, 21 indicators but no overall ranking.
- Maclean's
Magazine based, annual. Ranks universities numerically in three groupings: primarily undergraduate; universities with a significant amount of research activity; and medical-doctoral universities. It includes a reputational rating based on alumni support over five years and a survey of 7,255 individuals across the country who were asked to rate schools.
- Der Spiegel
Magazine based. The magazine questioned 12,000 students and 1,600 professors. The data were augmented by government statistics.
- Stern
Magazine based. Since 1999, produced in cooperation with the CHE Centre for Higher Education Development. Based on data such as the number of students per professor and the number of PCs per student. It also takes into account the opinions of students and staff. Does not produce an overall ranking but offers recommendations by subject.
- Perspektywy/Rzeczpospolita
Collaboration between a newspaper and an educational publishing house. Covers 75 top schools in Poland, using 16 criteria drawing on official data and survey material.
- The Times/The THES
Annual. Ranks numerically 101 mainstream universities offering a full range of disciplines. It does not list specialist institutions with a limited range of subjects. The THES also includes university colleges with degree-awarding powers, and ranks by nine indicators derived from official data.
- The Financial Times
Annual. Ranks 93 universities using 18 indicators derived from the same official data and uses broadly the same statistical processes. Indicators selected with employment factors in mind.
- Daily Telegraph
Annual. Ranks 99 universities according to the outcomes of teaching quality assessments, in football league style.
- The Sunday Times
Annual. Uses seven key performance indicators derived from public sources.
- The Guardian
Annual. Compares universities across 49 subjects focusing on teaching, based on published data. Has no unified table. It has attempted to factor in an added-value measure that gauges the ability of departments to take on students with poor A-level grades and produce graduates with first-class degrees.
- US News and World Report
Magazine-based, annual. Drawing on survey returns, it ranks 1,400 accredited colleges and universities on academic reputation, retention of students, faculty resources, student selectivity, financial resources, alumni-giving and "graduation rate performance" - the difference between the proportion of students expected to graduate and the proportion who do.
- Kiplinger's
Magazine-based. Ranks separately the top 100 public and private universities on a combination of quality and cost measures, including student debt. Greater weight is given to quality (71 per cent) than to cost (29 per cent).