Citation diversity tied to medical research success

US NIH finding raises hopes of focused funding, but concerns of bandwagon outlook

October 11, 2019

The US National Institutes of Health has developed a computer program to help it guide future research investments by predicting which of its published studies are most likely to reach clinical trial stage.

The program, which applies artificial intelligence-style analysis to the text and citations of published articles, has generally found that articles whose citation lists span a greater number of fields and scientific approaches have the greatest chance of advancing to human testing.
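As an illustration only — the NIH paper's actual scoring method is not described here — one simple way to quantify how many fields a paper's citation list spans is the Shannon entropy of the field distribution across its references. The function and field labels below are hypothetical, a minimal sketch of the general idea rather than the NIH tool itself:

```python
from collections import Counter
from math import log2

def field_diversity(cited_fields):
    """Shannon entropy (in bits) of the distribution of fields in a
    paper's reference list; higher values mean more diverse citations.
    `cited_fields` holds one field label per cited reference."""
    counts = Counter(cited_fields)
    total = sum(counts.values())
    # log2(total / n) == -log2(n / total), so each term is non-negative
    return sum((n / total) * log2(total / n) for n in counts.values())

# A paper citing four distinct fields scores higher than one that
# draws all of its references from a single field.
narrow = ["oncology"] * 8
broad = ["oncology", "genomics", "immunology", "biostatistics"] * 2
print(field_diversity(narrow))  # 0.0
print(field_diversity(broad))   # 2.0
```

Under this toy measure, a reference list concentrated in one field scores zero, while one spread evenly across four fields scores two bits — the kind of "diversity of interest" signal the article describes.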

“It’s basically a measure of the diversity of interest in the paper,” said George Santangelo, director of the NIH’s Office of Portfolio Analysis and co-author of a paper published in PLOS Biology describing the new tool.

The effort is nevertheless raising concern that AI-driven decision-making might reinforce any existing biases in US medical science and funding, such as those that favour drugs and devices over preventive strategies.

The new NIH tool does appear to make an important contribution to the goal of better understanding how research translates into clinical benefit, said John Ioannidis, a professor of medicine and of disease prevention at Stanford University. But, he continued, “the proposed method, or any method used in the same premises, may be promoting a normative view of what types of papers get embedded into clinical trials citation networks”.

Another expert, Sandro Galea, the dean of public health at Boston University, questioned the long-term reliability of such an approach. “To suggest that an algorithm can detect particular success, or not, of specific papers seems to me a stretch,” he said.

Dr Santangelo readily conceded the shortcomings of the NIH data tool, including the fact that for its key measuring point it uses a discovery’s coming to a Phase 1 clinical trial – an early evaluation in human subjects of a proposed intervention’s safety and toxicity.

Most drugs reaching a Phase 1 trial never progress to become a government-approved medication, he acknowledged.

But the tool is valuable nevertheless, Dr Santangelo continued, because published findings can take a decade or more to reach a clinical trial, whereas the new NIH analysis method can offer its prediction after just two years of accumulated article citation history.

“Even though it’s a lagging indicator,” he said, “it’s well ahead of the actual event of the papers being cited by clinical trial guidelines.”

In addition, Dr Santangelo said, the NIH did not envisage the tool’s being used by drug manufacturers or others to accelerate the movement of promising lab discoveries into clinical testing.

Instead, the NIH – the largest provider of basic research funding to US universities, with a $31 billion (£25 billion) annual budget – hopes to use the data analysis method primarily to help determine which types of research to favour in its future grant awards, he said.

“The value for us is that we can look across the entire landscape and ask what areas look like they’re showing promise of moving into the clinic,” Dr Santangelo said.

That still raises concern, Professor Ioannidis said. “This is a useful contribution,” he said, “but we probably need to explore more [fully] the space of what truly useful translation means.”

Dr Santangelo affirmed that the tool’s development was an ongoing process, and that its recommendations would not override the judgement of NIH officials in the grant awarding process.

“The computer is not making the decision,” he said. “The human being is.”


