Reading a comment posted in response to an article in Times Higher Education left me with some unpleasant feelings, even as I was enjoying my Christmas break.
The article, by Chris Havergal, reported on a new study by Angela Brew, David Boud, Karin Crawford and Lisa Lucas, “Absent research: academic artisans in the research university”. The study suggests that academics who are not necessarily high-performing researchers nevertheless play an important role in the functioning of the university.
I had never thought of academics who devote a lot of time and energy to what one might call academic citizenship as artisans, but that is a good label for them. People who recognise talent, skill, passion and commitment in others, and who use their skill, experience and wisdom to labour, nurture, shape, carve and create… not a functional work of art but something far more extraordinary: people working together towards a shared goal.
The academic trinity of research, teaching and service fails to capture the artisanal trait. The dominant discourse privileges individual research excellence but, it seems to me, marginalises the art, craft, creativity and teamwork required to foster productive academic environments.
So, it was with a heavy heart that I read the comment, posted in response to the article: “This is straw manism… this all seems like an exercise in making non-research ‘academics’ feel better about themselves.”
The full comment is here and you can decide for yourself about the merits of their argument. Here are my reflections:
- I’m not sure just who or what the straw man is in this instance. The study sought to describe and illuminate the role played by “those people who have not developed a recognised or ‘accepted’ research profile for research assessment purposes”. Such people exist – they are not made of straw, nor, on the basis of the study findings, are the roles they play inconsequential. The straw man, if he is there at all, seems to have been raised by the person who posted the comment.
- There are some fairly compelling reasons to ensure people do feel good about themselves and their role in the workplace. For example, how hard would it be to perform at a high level if you felt undervalued and invisible? Are you likely to operate effectively as part of a team if you feel that what you do is worthless?
- I assume that the scare quotes around “academics” are intended to emphasise that a non-research academic is an oxymoron. Yet, there do exist teaching-only academic positions in many institutions including my own, so that can’t be right. Or, maybe this is a cunning case of Grice’s conversational implicature and we all just know that non-research academics aren’t real academics.
But, what I found really disturbing was that I could almost hear the closing words from this comment echoed closer to home: “This is the game we’re in, we need to play it (counting publications); stop whinging, start publishing; if you’re not likely to make at least a B in the next Performance-Based Research Fund round your job could be at risk…”
There is no shortage of research that describes, evaluates and contests the impact of national research performance assessments (e.g. the PBRF in New Zealand, Excellence in Research for Australia and the Research Excellence Framework in the UK). The problem is not that there is not enough research. The problem is that, in practice, the detail seems to be largely ignored by the institutions that claim to prize research. Funny, that.
Personally, I think some form of accounting for institutional effectiveness is important. The humble taxpayer funds our work and deserves reassurance that their hard-earned cash is not gurgling down the toilet. So, I have far less issue with the external measures per se than I do with the internal measures of individual performance, and the gaming, that have spawned as a result.
Assessment, including research performance assessment, can have unintended consequences. Professor Jonathan Boston’s comments relating to the design of PBRF reported by Annabel McGilvray in 2014 are apposite: “We simply failed to fully realize the implications of the Privacy Act and Official Information Act…If I had known we would end up with a regime in which individuals had their scores reported to them, and that other people could potentially know what they were, I would not have supported it.”
Where university departments operate as largely independent fiefdoms, the ground is fertile for manipulation. Not all performance indicators are equal, decisions about what to count are subjective and, within departmental and institutional cloisters, these decisions are almost impossible to contest. Furthermore, it is far from easy to gather data relating to academics’ experiences of research performance management. Comments made by Fiona Edgar and Alan Geare in 2013 regarding the ticklish business of collecting data for their study into factors affecting research performance at universities in New Zealand are telling in this regard.
In the hands of thoughtful, even-handed and artisanal heads of department there may be little to fear from internal performance measures. In the hands of bumbling but well-meaning or distant and distracted heads of department, there could be cause for concern.
In the hands of self-serving, manage-up and boot-down career climbers, alarms should sound: through a solitary interpretive act, something as simple as a publication count can be transformed into an instrument of torture.
The only outcomes of torture are fear, defensiveness, toxicity and despair. On every level, institutional, departmental and individual, this serves no one well.
So, good on Brew et al. for their study. I hope that it does get noticed, and I really hope that it informs thoughtful and critical action. Blindly counting publications is the road to hell.