Last year’s major assessment of academic quality in the UK, which sought to rate the wider benefits of research, may have captured only short-term gains that had little impact on patients, a new paper has argued.
The Research Excellence Framework controversially asked for case studies in order to judge academia’s impact on the wider world.
In medicine, the panel that looked at research impact was “extremely impressed” with the results, according to “Research impact in the community-based health sciences: an analysis of 162 case studies from the 2014 UK Research Excellence Framework”.
But the analysis of case studies submitted to the public health, health services and primary care subpanel concludes that universities generally touted changes to guidelines and policies in their case studies, rather than an impact on patients. Only a few case studies actually showed that research had changed patient mortality, morbidity or quality of life.
“There was a striking mismatch between institutions’ claims to have engaged with ‘patients and the public’ (universally made in impact templates) and the limited range and depth of activities oriented to achieving this described in the individual case studies,” it says.
Underlying this problem, the paper argues, is the mistaken idea that medical research can easily have a direct, beneficial impact.
“Policymakers may assume that they can commission targeted research to solve policy problems. In reality, these ‘knowledge-driven’ and ‘problem-solving’ mechanisms of impact are uncommon,” it says. “Clinicians rarely read published research or consciously follow guidelines; and policymakers ask different questions and operate to very different logics, timescales and value systems from researchers.”
The paper argues that it is far more common for research to have an impact indirectly, for example by increasing “tacit knowledge”, or allowing clinicians and policymakers to better understand each other’s work.
This in part might have been due to the design of the assessment itself, the paper notes. “The format of the REF impact case study (strong emphasis on measurable impacts that could be tracked back to the study reported in the ‘research’ section) allowed direct but not indirect flows of influence to be demonstrated,” it says.
The high scores awarded to the impact of medical research in the REF are therefore “no cause for complacency”, the paper warns.
The authors are Trisha Greenhalgh, professor of primary care health sciences at the University of Oxford, and Nick Fahy, a health policy consultant.