Productive head drives others to raise their game

Quality of papers rises in units led by strong researchers, study claims

January 30, 2014

Heads of department who demand four stellar papers from their juniors for the research excellence framework without having any of their own to offer are often the subject of bitter whispering around departmental water coolers.

Research now suggests that this demoralisation may follow researchers back to their offices and show up in the quality of their departments' output.

A working paper published earlier this month claims that appointing a departmental head with a strong research record is the best predictor of an improved departmental output of high-quality papers.

The paper, Leadership and the Research Productivity of University Departments, examines the citation count for the five most highly cited past papers of 169 newly appointed chairs of economics departments at 58 US universities between 1995 and 2010. It also looks at the departments’ output of papers in 11 highly selective journals, beginning a year after the appointment.

After adjusting for individual characteristics, such as gender and work experience, and institutional factors, such as grant income and department size, it concludes that the chair’s citations are the best predictor of an increase in publication success.

Co-author Amanda Goodall, senior lecturer in management at City University London, said that others would have to check whether the correlation held for non-economics departments, but she was confident that it would because she had observed a similar effect for vice-chancellors in her 2009 book Socrates in the Boardroom: Why Research Universities Should Be Led by Top Scholars.

She said the results could not be explained merely by the power of better departments to attract high-calibre researchers as leaders because the study tracked changes to departmental output over time, rather than merely examining correlations at one point in time.

She defended the study of citations, describing them as a “beacon showing work is viewed as important”. However, the length of time it took to accrue citations made it impractical to use them as a measure of departmental output quality in the study.

Dr Goodall plans further research into why highly cited researchers have such a positive effect. But she suggested that one reason might be that their presence sent a message to current and potential department members that good research would be valued and rewarded: “They understand [the researchers’] world so they can create the right kind of environment [to facilitate it].”

The findings suggested that universities may be misguided to allow academics without strong research records to lead departments, and that greater incentives might be needed to encourage top researchers, who are often reluctant to take on management positions, to apply.

She described Sir Paul Nurse, the Nobel prizewinning Royal Society president, chief executive of the Francis Crick Institute and former president of Rockefeller University in the US, as “the perfect example of someone who has maintained his interest in research while doing a lot for the scientific community, the commercial effect of which has been substantial”.

“I would argue that is because he is an outstanding researcher,” she said. “The question is how we encourage other Paul Nurses into management roles.”

paul.jump@tsleducation.com

Reader's comments (1)

One solution is to appoint heads of department that the academics approve of. It is unlikely that a set of colleagues who are doing good research will ask for a mediocre achiever to play the role - what would he or she know about research? The other real issue is to start moving away from any type of assessment that depends on putting numbers on the uncountable. To use a different example, many academic editors of PLOS ONE received a friendly email a couple of days ago informing them how many papers they edited last year, how many of the reviewers they contacted agreed to complete their review, how promptly they managed the process, what percentage of articles they accepted, and so on, and they could see how they "rank" against the average academic editor and against the journal's "optimal policy", based on "best practice". I suppose a natural reaction is to start comparing one's own numbers to what is happening, and to reflect upon what the numbers mean. What struck me just a few moments later in this thought process is that the most difficult aspects of editorial work - the agonising over treating the papers received with due care, the finding of the best reviewers, the evaluation of the authors' responses and, on controversial occasions the most difficult aspect of all, the final decision - were not actually being evaluated (it is not possible to do so with numbers, or perhaps I am just not good enough at mathematics).
