The rewards of rule-bending are high, the chances of getting caught are low, and prosecutions are virtually unheard of.
As Malcolm Gillies points out in one of our features this week, that common complaint about the City of London was endorsed even by the governor of the Bank of England, Mark Carney, in his annual Mansion House speech. The harsh punishment handed down earlier this week to former City trader Tom Hayes suggests that the courts have reached a similar conclusion. Gillies also notes that City values have long been spreading beyond the Square Mile into areas of university culture such as governance, educational aims and the purposes of research.
You could argue that they have also seeped deeply into the conduct of research itself. Indeed, there are those who consider the above criticism of the City to be equally applicable to modern scientific research. For an individual, rule-bending can be a shortcut to getting papers into top journals, which, in turn, can lead to tenure, promotion and lucrative chairs.
But aren’t the chances of getting caught committing research misconduct relatively high? After all, scientists have to run the gauntlet of peer review before they can publish any ill-gotten results. Not everyone thinks the system is sufficiently robust. Former BMJ editor Richard Smith has argued forcefully in Times Higher Education that peer review “doesn’t guard against fraud because it works on trust: if a study says that there were 200 patients involved, reviewers and editors assume that there were” (“Ineffective at any dose? Why peer review simply doesn’t work”, Opinion, 28 May).
Unease about peer review is also borne out by the six contributors who respond to Smith’s article in our main feature this week. All have horror stories to recount. None, however, can quite conceive of a system that would work any better. This is probably because sorting the wheat from the chaff in the thousands of manuscripts submitted to journals every week is about far more than just assessing whether their results were honestly obtained. But when it comes to addressing the latter point, one measure that might help is better policing of research integrity by institutions.
Just as banks stand accused of being reluctant to investigate and punish wrongdoing by their staff, there is a sense that many universities, no matter what their value statements proclaim, likewise prioritise image management over the naming and shaming of wrongdoers. As someone who has often passed allegations of misconduct on to universities, and has attempted to follow the investigation of many more, I don’t always get the impression that my attentions are welcomed.
Of course, I am not qualified to take a view on the merits of such allegations. And I sympathise with universities’ nervousness about libel challenges if they criticise individuals publicly. Moreover, while Carney may have reached the view that, in banking, the whole barrel of apples is rotten, the extent of research misconduct is a matter of wide contention.
But given the frequently trumpeted connection between scientific advance and economic growth, research misconduct and fraud could pose a significant risk to future prosperity, just as financial misconduct and fraud do. If it is important that oversight of banking be as rigorous as it can be, then the same must be no less true of research.