I read with interest the recent article by Darrel Ince, "Systems failure" (5 May). Unfortunately, I do not think the problems he discusses are uncommon in new analyses; it is merely rare for them to have such an impact.
Ince suggests several lessons to be learned from the Duke University case. However, I can see one problem with which I have had direct experience.
Having been asked to do a simple conversion of raw data into usable numbers, I was passed the equations to use. On checking them, I noticed a mistake. When I traced the error back to its source, it turned out to be a genuine accident, but the conversion had already been applied to data that appeared in peer-reviewed publications in some of the top journals and had clearly affected the conclusions. The mistake slipped through because only the converted values had been presented.
Journals will never be able to go through all the raw data, so unless the data conversion is itself the subject of the paper, such mistakes are unlikely to be discovered.
Ince's suggestions touch on each part of the research and publication process. Perhaps this shows that the system works, but only if every component functions correctly.
Name and address withheld