In the private sector, scarcely a day passes without some company announcing the steps it is taking to be more friendly to its employees.
Even the most demanding city employers use the vocabulary of staff empowerment and talk about promoting work-life balance as a way of building staff commitment.
But in higher education it seems that employers are taking an altogether tougher approach to those at the coalface. At Imperial College London and elsewhere they are assessing staff not as members of a scholarly community but based on a numerical analysis of their publications and their ability to bring in money.
This may seem like a strange way to assess clever, creative people. But managers are not entirely to blame for asking these questions. The criteria used in the research assessment exercise (RAE) drive institutions to look at how much research their staff are publishing and how much cash they raise. But while the RAE has become more subtle in its appreciation of how research works, the people running universities seem to be getting less subtle about the questions they ask of their academic colleagues.
On their own, data on how many papers researchers write, where they publish them and how much cash they bring in will be almost impossible to convert into useful management information for universities. Even within the sciences, engineers publish less often than neuroscientists, mathematicians need less money than chemists, and physicists find it easier than metallurgists to get their papers into the top journals. If the comparison is extended to the arts and humanities, an output-based judgement of quality becomes even less feasible.
Imperial is clear that it uses the data it gathers only as a "starting point" for discussion of academics' performance, and it allows departments discretion over how to apply the criteria. Even so, the danger for all institutions adopting the numerical approach is that people working productively within their own subject's culture could end up looking like failures.
Institutions should be making sure that their staff are asking the deepest questions about the research areas they work in, and that they have the people, equipment and funds to seek out the answers. Management could then stand back and wait for great papers to appear. Any other approach suggests that universities have little confidence in their staff.
Exclusively mechanical performance measures are based on an outdated understanding of what academics are supposed to be doing. Research funders expect them to work across disciplines and international boundaries, to engage the public in their research and to think about its commercial potential. Anybody who is told that their job security depends on a few narrow measures is unlikely to work innovatively or to think about the wider implications of what they are doing.
In a world where every research award is won against severe competition, funders want to see something original in an application for funds, not the promise of a large number of papers reporting incremental changes in knowledge.
In practice, universities may discover that telling the cleverest and most driven people how to run their professional lives is not likely to be a success. They will find ways of looking as if they are enthusiastic about change while continuing to work as they want to. And although talented academics like to work at top institutions, they also like to feel well treated. No university gets the best staff purely by offering good salaries. It tempts them with interesting work, good colleagues, the right facilities and the feeling that they are valued. Even world-famous institutions will become less attractive in the job market if they measure staff success in inappropriate ways.