Brussels, 03 Jun 2004
Scientists and editors of scientific journals are being urged to pay particular attention to the quality of statistics used in research papers, after a new study revealed widespread mathematical errors in two leading scientific publications.
Two biostatisticians, Emili García-Berthou and Carles Alcaraz from the University of Girona in Spain, set out to measure the prevalence of statistical errors in four 2001 issues of Nature and two volumes of the BMJ (British Medical Journal) from the same year.
The pair decided to recalculate the 'P values' reported in selected published results. The P value is the means by which researchers measure whether their results are statistically significant: by convention, a P value of less than 0.05 is considered significant, meaning the result is unlikely to have arisen by chance.
Mathematical software packages were used to recalculate the P values based on relevant figures also contained within the research papers. The two scientists found that their results differed from the published P value in more than 11 per cent of cases, and that minor mistakes, such as rounding errors, were present in 38 per cent of the Nature papers and 25 per cent of the BMJ ones.
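The kind of check described above can be sketched in a few lines. The study itself used statistical software packages; the following is only a hypothetical illustration, assuming a paper reports a standard-normal test statistic z alongside a rounded P value, so the two-sided P value can be recomputed with the standard-library error function:

```python
from math import erfc, sqrt

def p_from_z(z: float) -> float:
    """Two-sided P value for a standard-normal test statistic z."""
    return erfc(abs(z) / sqrt(2.0))

# Hypothetical example: a paper reports z = 2.1 and "P = 0.04".
# Recomputing from z gives a more precise value.
recomputed = p_from_z(2.1)
print(round(recomputed, 4))  # ≈ 0.0357, versus the published 0.04
```

A discrepancy like this one (0.0357 reported as 0.04) is the sort of minor rounding inconsistency the study counted; only discrepancies that cross the 0.05 threshold would change a result's claimed significance.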
Of the P value errors that the researchers found, only one turned a significant result into a non-significant one. However, even though most of the errors were too minor to affect the overall conclusions of the research, some believe the findings reveal a general sloppiness towards statistics in science.
The editor in chief of Nature, Philip Campbell, said that the journal would take a closer look at the figures in the critical study before deciding what action to take. Meanwhile, the editor of the BMJ, Richard Smith, suggested that researchers or journals could publish more raw data on the Internet, where others could check it.