Crude recipe for a 'chicken-run' sector?

June 1, 2007

Performance targets are spreading despite the criticism that they destroy freedom of inquiry, Phil Baty writes.

"A university is not a corporation making razor blades for a price in a particular market for a given delivery time."

This was, arguably, the line that clinched a heated debate at Oxford University's Congregation of dons and persuaded academics to throw out plans for new "mandatory" reviews of staff performance.

The line was delivered by Don Fraser, professor of earth sciences, two years ago this month. But in an uncompromising statement to The Times Higher this week, Professor Fraser, who is, in his own words, no "old-school conservative die-hard", warned that the issue of target-setting and performance management in higher education remains a serious threat.

He attacked the rise in universities imposing "inappropriate corporate models intended for profit-oriented industrial production on a completely different animal".

Oxford, he said, "will not benefit from the introduction of management techniques that measure false productivity via pseudo-performance metrics or that offer promotion. This is a simple, brutal and crude recipe for a chicken-run university.

"If you want to destroy an international premier-league player and substitute a spreadsheet-managed piece of mediocrity, this is exactly how to do it. Destroy thinking; destroy collegiality and teamwork; substitute short-term goals."

Performance management in higher education has well and truly arrived. It follows the investment of tens of millions of pounds in human resources facilities in universities under the Government's Rewarding and Developing Staff fund, and the introduction of the 2004 pay framework reforms.

A survey of 192 institutions by the Universities and Colleges Employers Association last year found that 81 per cent of institutions were introducing new, improved systems of "performance management" or were considering doing so.

A draft human resources strategy from Birmingham University, seen by The Times Higher, says that staff "will participate in a rigorous and sustained performance dialogue with their managers" and that staff must understand that "effort alone will not dictate my performance rating". Similar initiatives are under way across the sector.

Few would argue against the need for employees to be held to account, in some way, for the contribution they make at work.

But academics argue that their output is not easily quantifiable. The research assessment exercise has placed great emphasis on how many papers academics publish and on the journals in which the work appears. And competition for research funding means that academics are often judged by the size of their grants.

Such short-term goals can narrow the scope of research, force academics to play safe and remove scope for brilliance, Professor Fraser argues.

In his Oxford speech, Professor Fraser highlighted the Laboratory of Molecular Biology at Cambridge, with its staggering record of producing 12 Nobel prizewinners.

One alumnus, Sidney Brenner, received the Nobel Prize in Physiology or Medicine in 2002, yet he was "far from prolific", Professor Fraser said. Another laureate from the laboratory, Max Perutz, said himself that "discoveries cannot be planned; they pop up, like Puck, in unexpected corners."

Oxford academic Peter Matthews took up the theme in an article at the time, quoting Professor Perutz as saying that he "never directed the laboratory's research but tried to attract talented people and give them a free hand". Among the young researchers he offered a "free hand" were Francis Crick and James Watson, who discovered the structure of DNA.

Concerns about the negative effect of target-setting are to be raised at the University and College Union annual conference in Bournemouth this week.

Queen's University Belfast has put forward a motion that warns that managers no longer see academic freedom as "the ability to pursue research questions of one's choosing". Instead, what an academic researches is "driven by issues related to cost".

Renee Prendergast, of Queen's School of Management and Economics, said:

"Academics are being required to set annual targets in line with school and university policies, and their performance against these targets is being reviewed at six-monthly intervals.

"University managements somehow imagine that if only they could get more control over what staff do, their high-level plans would be fulfilled. They forget that research is a creative activity and that the results of scientific inquiry cannot be predicted. If the outcome of research were predictable, there would be no need to engage in much of what we do."

A motion from University College London states: "Academic freedom of inquiry is under attack. Disciplines or activities cease on cost grounds without debate, research becomes dominated by external interests."

Such is the strength of feeling at UCL that academic staff there have set up a dedicated blog on the subject. One of the first contributions, written last month, is an essay by Kalvis Janson, a mathematician at UCL, lamenting the tendency to judge academics on the basis of simple, mechanical measurements of their outputs.

"We are in a world controlled by people who are increasingly using simple metrics to measure our performance, and many of us are somewhat understandably providing work that fits these metrics, rather than doing what we know has real value," he writes. "In, say, a hundred years only the good ideas from our time will matter, and I wish I had more time to find them."

Another contributor highlighted the flaws of automated measurements of research output: one citation system listed "W. Building" and "P. Road" as the most prolific writers in his laboratory - which is based in the Wolfson Building on Parks Road.

But many parts of academic life remain relatively untouched by recent trends in performance review, according to Matthew Pateman, director of studies in media, culture and society at Hull University. Hull's promotions round uses a review system called Rate, under which academics are assessed on their performance in research, administration, teaching and external activity, but it is transparent and fair, he insisted. "The system works in such a way that every academic is aware that he or she needs to be working in a particular way if they want to be promoted."

For Sheila Gupta, human resources director at Edinburgh University, performance review systems have many benefits, such as assisting staff development and supporting their claims for more pay.

But she said schemes "vary enormously" from the bureaucratic to those that focus on "meaningful review" - "where the quality of the conversation is more important than form-filling".

phil.baty@thes.co.uk



Imperial College London's performance measures

Imperial's "productivity" target for publications is that staff should "publish three papers per annum, including one in a prestigious journal" with a high impact factor.

Staff are also given a publication "score", which is calculated by multiplying the impact factor of the journal they have published in by a weighting given for where their name appears on the list of authors, and then dividing this by the number of authors. The author position weight is 5 for the first and last author, 3 for the second author, 2 for the third author and 1 for any other position.
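As described, the scheme reduces to a one-line formula: impact factor multiplied by the author-position weight, divided by the number of authors. The short Python sketch below restates it for illustration; the impact factor used in the worked check is not a figure published by Imperial but an inference from the scores Professor Colquhoun quotes later in this article.

# A minimal sketch of Imperial's publication score as described above.
# The weights come from the article; the example impact factor (29.28)
# is inferred from the scores quoted below, not an Imperial figure.

def author_weight(position, n_authors):
    # 5 for the first and last author, 3 for the second,
    # 2 for the third, 1 for any other byline position.
    if position == 1 or position == n_authors:
        return 5
    if position == 2:
        return 3
    if position == 3:
        return 2
    return 1

def publication_score(impact_factor, position, n_authors):
    # Impact factor times position weight, divided by the author count.
    return impact_factor * author_weight(position, n_authors) / n_authors

# Worked check against the four-author 1981 Nature paper discussed in
# the analysis below, assuming an impact factor of about 29.28:
print(f"{publication_score(29.28, 1, 4):.1f}")  # 36.6 - first author
print(f"{publication_score(29.28, 3, 4):.1f}")  # 14.6 - third author

On these weights, a sole author scores five times the journal's impact factor, while a middle author on a large collaboration scores almost nothing.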

Every year staff are sent a spreadsheet that shows not only their publication score and financial viability, but also those of their colleagues.

TARGETS MISS MARK

The case against

David Colquhoun, professor of pharmacology at UCL, says Imperial's productivity criteria fail to capture the true value of research. This is an edited version of his analysis.

This "publication score" is clearly the invention of a boneheaded bean-counter. For a start, it is well known that there is no discernable correlation between actual citations for papers and the journal's impact factor. That is certainly true for me.

My highest publication score, under Imperial's metrics, is 77.3. This is for a two-page perspective in Science, with a mere 41 citations. According to Imperial's criterion this was 7.2 times more valuable than my best-ever paper (on which I was recently asked to write a classical perspective), which has 565 citations but a publication score of only 10.7.

The dimwitted nature of the publication score can be illustrated in another way. Consider some of the background to a couple of examples; these are the real-life facts that are ignored by bean-counters.

A 1981 Nature paper, co-written with the German physiologist Bert Sakmann, got a score of 73.2 and 8 citations. It was a three-page Nature letter.

It wasn't bad, but Nature papers are so short they can't be thought of as real papers, and four years later we published the work properly, over 57 pages, in the Journal of Physiology. It was the result of six years' work and has 565 citations.

For this, Imperial would have awarded me a publication score of a mere 10.7.

My most highly cited journal paper (Colquhoun, Neher, Reuter and Stevens, in Nature in 1981) has 630 citations and a publication score of 36.6 for me, though only 14.6 for my co-author Harald Reuter.

This paper came from a vacation job in Reuter's laboratory at the University of Bern. We hadn't expected to get a paper out of the job, but our findings seemed novel enough to write up, so we sent it to Nature.

Because all the authors had contributed much the same amount of work, we put the authors in alphabetical order.

The paper just happened to be the first of its type and so has been cited a lot, despite being scientifically trivial. This example shows not only the iniquitous uselessness of the publication score used by Imperial but also, dramatically, the almost equal uselessness of counting citations.

Two scientists who command universal respect in my field are Erwin Neher and Bert Sakmann. They received the Nobel Prize in Physiology or Medicine in 1991. In the ten years from 1976 to 1985, Sakmann published an average of 2.6 papers a year. In six of those ten years, he failed to meet Imperial's "productivity target". In two, he had no publications whatsoever. One of his papers in 1981 has more than 15,000 citations: his publication score for this paper would have been a miserable 0.71.

Universities will have to decide what sort of science they want. They can bend their policies to every whim of the research assessment exercise and bow to the funding council's pressure towards corporatisation.

Or they can have creative scientists who win the real honours. They cannot have both.

A full version of this paper is at www.goodscience.org.uk
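The formula can also be run in reverse as a rough sanity check on the figures above. The byline position and author count below are illustrative assumptions rather than details given in the article; the point is simply that a low score implies a modest journal impact factor, however influential the paper later proves.

# Inverting the score formula to estimate the journal impact factor a
# quoted score implies. The byline position and author count here are
# assumptions for illustration, not details given in the article.

def implied_impact_factor(score, weight, n_authors):
    # Solve score = impact_factor * weight / n_authors for the impact factor.
    return score * n_authors / weight

# Sakmann's 0.71 for the 15,000-citation 1981 paper: if he held a
# weight-1 byline position on a five-author paper, the journal's
# impact factor would have been about 3.6.
print(f"{implied_impact_factor(0.71, 1, 5):.2f}")  # 3.55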


METRICS ARE FAIR

The case for

Steve Bloom, head of the division of investigative science at Imperial, replied:

"We specifically state that numerical indices are there only to get the discussion going about an individual scientist's performance. They are, after all, based on a whole series of independent international reviews of the scientist's output, both in research and teaching. They act as a protection - if the divisional reviewer doesn't rate the science but the scientists have achieved good papers and grants they can refute the criticism.

"One needs to take in several factors, including teaching, community activities and research, and produce some sort of summation of the overall value to society, and brilliance in any area caps everything."

A spokesman for Imperial College added:

"Imperial is committed to ensuring its international competitiveness, and it assesses the activities of its staff using objective and rigorous standards. Metrics is one tool we use to ensure that our processes are fair and transparent, but we also have room to be flexible so that we can recognise achievements that fall outside these criteria.

"The college provides guidelines for academic performance review. Departments and divisions are given a degree of discretion within these guidelines so that they can devise the most appropriate criteria for assessing performance, depending on the type of research they are carrying out.

"Our high international standing is proof of the fact that we continue to attract the best people to come and work here.

"The college stands above all else for encouraging research excellence and blue-skies thinking. This year, for example, we ran our inaugural Research Excellence Awards, an internal college competition that provides funding of up to £150,000 to some of our best researchers, specifically so that they can carry out blue-skies research."


Vox pop: Opinions

'The whole point of working in a university is being able to explore your own intellectual agenda. Our vice-chancellor has said he wants our university to become the Princeton of Europe. If you look at what they measure at Princeton in terms of the performance of academics, you will find the answer is nothing. They just recruit good people and let them get on with it'

David Campbell, professor of geography at Durham University

'The problem is, you can't have a performance assessment system that is a one-size-fits-all. Academics working in different disciplines work and progress research in different ways. One of the problems for us, for instance, is that it is hard to achieve a critical mass of work that is measurable. We do get published in peer review journals, but there are not very many of them. Any system should be flexible enough to recognise different kinds of success'

Anne Bacon, head of conservation of fine art at Northumbria University

'In our department we have several lecturers who are widely published authors of children's literature, but their work does not count towards the RAE's performance measures'

John Turner, principal lecturer in English studies at Sheffield Hallam University

'As soon as you start work on a particular funded project you start thinking about developing something that is going to be publishable, because if you don't get a paper out of it in three years you will find it hard to get any more funding'

Benjamin Abell, senior lecturer in plant biochemistry at Sheffield Hallam

'My general strategy is to ignore all these distractions that come in from on high. If I wasn't able to do that I'd kick up a big fuss, because academics should not allow their career paths to be distorted'

Stuart Derbyshire, senior lecturer in psychology at Birmingham University

'Noam Chomsky began his career with a book review. If he had been working under a metrics-based performance system, maybe he would never have got his career started'

Sue Blackwell, English lecturer at Birmingham University

'Metrics just continue what the research assessment exercise started and make it a continuous part of academic life. The RAE forced many academics away from following lifelong projects in their disciplines. Manchester University has not yet formalised a metric system, even though schemes operate in schools whereby, for example, the amount of teaching allocated may depend on publications and grants'

Ivan Leudar, professor of analytical and historical psychology, Manchester University
