Detect fraud earlier to avoid red faces later

Universities should vet research outputs before they get to the publishing (and scandal) stage, say Roger Watson and Mark Hayter

April 4, 2013

Universities operate rigorous systems for vetting the quality and cost of research conducted under their auspices. Likewise, they rightly insist that research adhere to the highest ethical standards, and they establish committees to review research proposals. But universities do not operate systems for vetting the probity of research outputs. Why not, when publishing malpractice remains a problem?

One need only look at the web pages of the Committee on Publication Ethics, an organisation supporting ethical practice in scientific publishing, to find examples of inappropriate authorship, academic fraud and similarity between manuscripts. A study by Mounir Errami and colleagues published in the journal Bioinformatics in 2008 showed that the Medline database probably contained 3,500 plagiarised papers and 117,500 duplicated (“self-plagiarised”) papers. We have worked as editors of two leading UK-based academic nursing journals, and there is rarely a time when we are not dealing with cases of unethical practice, including many from the UK.

Of course, some of this is due to better means of detection. But although many journals now employ sophisticated similarity detection software as part of the submission process, such problems need to be addressed much earlier.

When misconduct is detected, the consequences are potentially disastrous for an author’s career; they can also reflect badly on the author’s institution. In today’s world of trial by media, universities may want to think about protecting their reputations.

Higher education institutions already operate similarity detection systems for students’ work. And there is no shortage of guidance advising academics about publishing malpractice or warning them of its serious consequences. It is a safe assumption that academics understand the issues at stake.

The main responsibility for avoiding malpractice rests with authors, and the vast majority of them are good citizens. The main responsibility for detection lies with publishers, who should inform authors about good practice, administer systems for similarity detection and report the consequences of malpractice, including the retraction of published papers. But this does not mean that universities - and other public and commercial bodies from which research publications emanate - should not take a more active role.

Currently, universities virtually ignore research outputs at the point of submission for publication. This is hard to understand in an age of research assessment, and at a time when universities are so rigorous about the inputs to research: proposals, funding and ethics.

Dissenters will complain, no doubt, about the idea of adding another tier of scrutiny. Publications are already peer-reviewed, so why add a pre-review process? But peer review cannot usually address issues such as authorship, fraud or even similarity. And there is precedent for internal checks: while research proposals are refereed externally, most universities also examine them internally as an obligatory step towards accepting and administering the funding.

So what steps could be instituted? At the very least, universities should insist that papers by their staff are scrutinised for similarity prior to submission to a journal. If manuscripts were run through similarity detection software and reports filed, this would help to avoid plagiarism and duplication.

All research outputs to be submitted for publication should also be read by a cognate colleague, not only to help improve quality but also to gauge whether the paper is necessary and original. This might also uncover some aspects of academic fraud, such as data fabrication. Where co-authors inside and outside the university are involved, statements of agreement to the contents of the final submitted version should be obtained and filed.

Given that, to our knowledge, none of this currently happens in any university, these steps would represent a start; they might also deter potential wrongdoers. They would also help people who have little experience of publishing to avoid genuine mistakes.

What else might be done? A growing area of concern centres on the issue of authorship. This may, in part, be due to better definition of authorship (by, for example, the International Committee of Medical Journal Editors) and a greater willingness among junior staff and PhD students to question what constitutes co-authorship of their papers. Certainly this is an issue over which journal editors are frequently asked to advise or intervene.

The area is also becoming more complex: researchers are increasingly expected to collaborate internationally, and conceptions of authorship vary across cultures. Universities could therefore play a useful role in checking that everyone named on a paper merits authorship. Journals publish clear criteria for what constitutes authorship; institutions should check submissions against them.

Universities could also ensure that data management processes - including the depositing of data in databanks for scrutiny by referees and future researchers - have been followed. This should help to reduce fraud and the fabrication of data. Such measures are not new, but they are becoming more common, and soon they will be obligatory.

When things go wrong, the result can be negative headlines across the world. If universities played their part in ensuring honesty in academic publishing, it would help to keep researchers in the public eye for the right reasons.
