Trials with too much error

April 17, 1998

Julia Hinde reports on growing concerns that patients are suffering because of the inflexible methodology of clinical trials

Thousands of British patients and volunteers were last year involved in around 800 clinical trials designed to bring new drug treatments onto the market.

Such trials have been used in medicine for 50 years, but now there is a rising groundswell of anxiety about the way they are conducted. Some academics believe that randomised clinical trials - whereby two groups of patients are offered different treatments in a bid to assess the effectiveness of a new procedure - are unethical. Treatments are withheld from one group of patients for the months or even years the trial lasts - even when there is a possibility that they could save lives.

An example is the drug Tamoxifen, which is being tested as a preventive for breast cancer in women deemed to be susceptible. The drug hit the headlines last week after US trialists reported success, but in the UK it remains firmly off limits until clinical trials are complete.

Randomised clinical trials have changed remarkably little since they were introduced into medicine in 1948. For the most part, they maintain fixed numbers of participants in two groups for the length of the trial: one group takes the new drug, dose or procedure, while the other keeps to the traditional treatment or to a placebo. Results are assessed statistically only once the trial is complete, although patients are monitored throughout for safety. No use is made of the accumulating results during the trial.
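The fixed design described above can be sketched as a short simulation. This is an illustrative sketch only, not any real trial: the response rates, sample size and the `run_fixed_trial` helper are invented for the example, and the end-of-trial analysis is a simple two-proportion z statistic, computed once, after the last patient is enrolled.

```python
import math
import random

def run_fixed_trial(p_new, p_old, n_per_arm, seed=0):
    """Simulate a fixed-sample two-arm trial: allocation stays 1:1
    throughout, and the data are analysed only once, at the end."""
    rng = random.Random(seed)
    s_new = sum(rng.random() < p_new for _ in range(n_per_arm))
    s_old = sum(rng.random() < p_old for _ in range(n_per_arm))
    # End-of-trial analysis: two-proportion z statistic
    r_new, r_old = s_new / n_per_arm, s_old / n_per_arm
    pooled = (s_new + s_old) / (2 * n_per_arm)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
    z = (r_new - r_old) / se if se > 0 else 0.0
    return r_new, r_old, z

# Hypothetical true response rates: 60% on the new drug, 40% on the old
rates = run_fixed_trial(p_new=0.6, p_old=0.4, n_per_arm=500)
print(rates)
```

Note that nothing in the loop looks at the accumulating results: however lopsided the interim data become, every patient is still allocated 1:1 until the pre-set sample size is reached.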

Yet, according to Chris Palmer, a statistician at Cambridge University's Centre for Applied Medical Statistics, statistical mechanisms are now available that allow the initial results of trials to be acted on while the trials are still ongoing. He believes it is unnecessary to wait the months, and sometimes years, needed for trials to finish. In fact, in cases of life-threatening diseases, such as Aids, Dr Palmer believes it could be unethical not to start acting on the results of clinical trials as early as possible. In these cases, he says, we should be putting the ethical interests of individual patients above collective interests, while not compromising good science.

"I am not dismissing traditional clinical trials, rather just looking at ways of improving how trials are done," says Palmer. "People think the traditional double blind trial is the only way science can prove that a new drug is better than an old one. But in certain cases we could and should be making use of the information as it arises. We could be talking life and death."

Last month Palmer told the American Association for the Advancement of Science conference in Philadelphia that 1998, the 50th anniversary of the first randomised trial in medicine, should act as a focus to encourage change in the way trials are undertaken. A conference in London this October, organised by the British Medical Association to review the last 50 years of progress, may do just that.

The first randomised trial was introduced in agriculture in 1926 as a means of comparing fertilisers. It was more than 20 years before randomisation was introduced into medicine. Since then refinements have been introduced, culminating in the four-phase double blind trial that is commonly used today to bring new drugs to market.

Yet, despite concerns such as Palmer's being raised, those who run trials have almost unanimously resisted the introduction of new methods. Palmer highlights four kinds of alternative trial design - adaptive, Bayesian, decision theoretic and sequential - each of which he thinks could be used. All of them make use of data as it accumulates: by skewing treatment allocation probabilities towards the favoured treatment, by updating prior beliefs about the efficacy of different treatments, or by running a trial only until enough data has been collected to prove a treatment successful, rather than for a pre-determined number of subjects or period of time.
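The first of those mechanisms - skewing allocation probabilities towards the treatment that is currently doing better - can be illustrated with a small Bayesian sketch. Everything here is an assumption made for illustration, not one of Palmer's designs: the `adaptive_trial` helper, the response rates, and the allocation rule (a Thompson-sampling-style draw from each arm's Beta posterior) are invented for the example.

```python
import random

def adaptive_trial(p_true, n_patients, seed=1):
    """Sketch of an adaptive design: each arm keeps a
    Beta(successes + 1, failures + 1) posterior, and each new patient
    is allocated to the arm whose posterior draw is highest, so
    allocation drifts towards the apparently better treatment."""
    rng = random.Random(seed)
    stats = {arm: {"s": 0, "f": 0} for arm in p_true}
    allocation = {arm: 0 for arm in p_true}
    for _ in range(n_patients):
        # Draw one sample from each arm's posterior; treat the
        # patient with the arm whose draw is highest.
        draws = {arm: rng.betavariate(d["s"] + 1, d["f"] + 1)
                 for arm, d in stats.items()}
        arm = max(draws, key=draws.get)
        allocation[arm] += 1
        if rng.random() < p_true[arm]:
            stats[arm]["s"] += 1
        else:
            stats[arm]["f"] += 1
    return allocation, stats

# Hypothetical true response rates for the two arms
alloc, posteriors = adaptive_trial({"new": 0.7, "old": 0.4}, n_patients=400)
print(alloc)
```

Because the accumulating data are used at every allocation, most patients end up on the better-performing arm - the individual-ethics gain Palmer describes - at the cost of the more complex, continuous statistical support noted below.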

There are limitations on these "data-dependent" designs. Each has limited applications, and is statistically more complex than traditional methods. They require intensive statistical support so that data can be processed quickly and acted on as it accrues. But Palmer adds: "If someone has a life-threatening illness, they have intensive medical help, so why not intensive statistical support as well?

"Clinical trials are a delicate balance between doing what is right for those in the trial and for future patients. I would argue that each trial needs to be considered individually - there is a lot of difference between a potential treatment for a lethal disease such as Aids, and a late-phase flu trial. In one case we can afford to wait until all the data is in. In the other case, waiting costs lives.

"Traditions and the deep culture of clinical trials mean change is hard to introduce," Palmer adds. "Statisticians have been discussing new trial designs for years, but we have been telling the wrong people - other statisticians. It is not statisticians who drive the trials." He says it is imperative for statisticians to increase awareness of new methods by spreading the message to clinicians who carry out the trials and to those who fund them.

"It is a poor justification to argue this is how it has always been done," he says. "Why are we using trial designs which date from 1920s medicine? Would there not be an outcry if today we only used medicine dating from the 1920s?"


