When, just over a year ago, Vince Cable misquoted the results of the 2008 research assessment exercise in making the erroneous claim that 45 per cent of UK science was not "excellent", it did not go down well. Coming at the end of a long summer of dark murmurings about likely cuts to the science budget, Cable's slight on the scientific community ignited a reaction that flared rapidly. The Science is Vital campaign sprang into existence, raising about 36,000 signatures within a few weeks and drawing more than 2,000 scientists to a noisy protest outside the Treasury.
The message from the protest was sharply focused on the productivity of UK science and the economic value of public investment in it. This stance was backed by the timely publication earlier in the year of The Scientific Century: securing our future prosperity, a report by the Royal Society that showed the UK's research base was the most productive in the world and an important driver of growth. The report had no shortage of case histories to choose from in demonstrating the impact and value of UK research: penicillin, monoclonal antibodies, radar, computed tomography and graphene are just a few examples of the UK's rich scientific pickings.
The budget announced in the 2010 Comprehensive Spending Review - flat cash for recurrent expenditure combined with severe cuts in capital expenditure - was hardly generous compared with the spending of some of our major economic competitors, but it was far better than the scientific community had expected, and far better than the settlements imposed on other parts of the public sector.
Having unashamedly made the economic case for science, the community seems well placed to prepare submissions for the research excellence framework, which challenges scientists to report on their research output, including evidence of impact.
Yet there are longstanding and widespread misgivings about the measurement of science. Many would side with the sentiment, often attributed to Einstein, that not everything that counts can be counted. The developmental biologist Peter Lawrence has written eloquently, even elegiacally, about how the mismeasurement of science in the UK has degraded scientific life and productivity.
Mistrust in the process of assessment is perhaps in part due to confusion that goes all the way to the top. Last December in these pages, Sir Mark Walport, director of the Wellcome Trust, exhorted scientists to knuckle down and embrace the impact agenda. But more recently, the former science minister Lord Sainsbury, albeit while campaigning to become chancellor of the University of Cambridge, recommended that the REF's impact component be scrapped since it risked the "misallocation" of science funding.
Acknowledging that impact remains a misunderstood topic, David Willetts, the universities and science minister, used the Gareth Roberts Science Policy Lecture last month to reiterate the coalition's commitment to blue-skies research as the necessary fount of discovery, and to emphasise that a broader definition of impact has been adopted for the REF and by the research councils. It includes (but is not limited to) "an effect on the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding of an audience, beneficiary, community, constituency, organisation or individuals in any geographic location whether locally, regionally, nationally or internationally". In other words, it's not just about money. That said, the appetite for returns on public investment is a reflex that politicians who are strapped to a five-year electoral cycle may find hard to suppress.
The problems of measuring impact - however widely it is defined - are real, particularly with respect to evidence gathering. As a community, we should be wary of the process becoming a tail that wags the scientific dog. But I think there is no alternative but to engage as constructively as we can with the REF. It will not suffice to bleat at its inevitable flaws. We need to be more sophisticated and realistic when dealing with our political masters, who are the representatives of the public into whose purses we are permitted to dip.
We cannot afford simply to point to past successes and ask to be trusted to get on with it. We must be held to account, for that is the price of money in an open society, especially when there are so many competing demands for access to the public purse. If the scientific community holds to the line that lobbying is not our style, we will lose out to interest groups that have fewer misgivings about bending the ears of politicians.
This is not to discount longstanding tensions over the Haldane principle, but the situation is not as grim as some may fear. One way to release the tension is for scientists to become more involved in public engagement, an activity that holds no terrors. The Science is Vital campaign, as well as demonstrating the value of taking our case directly to the government, helped to reveal the high level of public backing for science. I also experienced that warm approval as a participant in the public engagement competition I'm a Scientist, Get Me Out of Here!, in which schoolchildren across the UK were given free rein to quiz scientists on any and every topic. The competition revealed to me not only the high regard in which science and scientists are held, but also that the public have relevant questions to ask of us. That contact was both testing and affirming.
To look positively at the REF's impact component, one could do worse than to see it as an opportunity to continue that conversation. If nothing else, it will help us to assemble a fresh pile of stories about the success of UK science. Rightly, it will also expose us to the national problems and priorities that the country expects scientists to address.