In 1997, when New Labour came to power, it promised that its policies would be "evidence-based".
But according to Hugh Bochel, co-editor of Making Policy in Theory and Practice and professor of public policy at the University of Lincoln, many researchers misinterpreted this phrase.
"Everyone thought, 'Oh, evidence-based policy - that means research, that means our findings, that means the Government will listen to us.' But ... policymakers have a much broader view of what constitutes evidence."
Just how broad, not to say nebulous, the definition of "evidence" can be is apparent from a government website. According to the Department for Environment, Food and Rural Affairs' site: "Evidence is any information that Defra can use to turn its policy goals into something concrete, achievable and manageable (its italics). It can take many forms: research, analysis of stakeholder opinion, economic and statistical modelling, public perceptions and beliefs, anecdotal evidence and cost-benefit analyses."
Many voices are competing for the ear of policymakers. Amid the clamour it can be difficult for academics to be heard.
Sir Roger Jowell, director of the European Social Survey and a research professor at City University London, warns that the voice of researchers must not get lost amid the hubbub.
"To some extent, chats in the pub are quite influential in politics. So are MPs' surgeries. But because they are self-selected and unrepresentative, they are an extremely dangerous way of making policy.
"The value of research is that it counters the influence of individuals or powerful subgroups. You want the influence of those subgroups, but you want to be able to look at the whole and they often don't represent that."
According to one US analysis, when senior officials want evidence, they look in rank order to: special advisers; "experts"; professional associations; think-tanks and opinion formers; lobbyists and pressure groups; the media; constituents and consumers; and, finally, academics.
What explains the lowly position of academics? Of course, there can be strong incentives for policymakers to ignore academic research. It can come to conclusions that are at odds with existing policies, and politicians concerned about staying in power may not want to heed such messages if it requires them to own up to their errors. They also worry about whether academic verdicts will be acceptable to the media and public opinion.
Nevertheless, during the 20th century, political parties became keener on providing some sort of research evidence to back up their claims, says Sir Roger - even if it was not carried out in universities.
"They began to look at what had gone wrong in the past, and to try to see what might go right by looking at different countries' examples. This is not research in the scientific sense of the word, but it is desk research of a very important kind."
However, all this can go out of the window very easily. "As you get closer to an election, new ministers have got to make their mark quickly, and they tend to do so by introducing pieces of legislation, well thought out or not."
Nine years ago, David Blunkett, who was then Secretary of State for Education and Employment, told social scientists what he wanted from them. In a speech to the Economic and Social Research Council titled "Influence or irrelevance: can social science improve government?", he said: "We need to be able to rely on ... social scientists to tell us what works and why, and what types of policy initiatives are likely to be most effective. And we need better ways of ensuring that those who want this information can get it easily and quickly."
Policymakers want certainty, but research often cannot provide it. Researchers can reach different conclusions on the same subjects, and their work contains caveats and qualifications.
In 1970, the American politician Walter Mondale, then a member of the US Senate, summed up these difficulties in a speech to the American Psychological Association.
"I had hoped to find research to support or to conclusively oppose my belief that quality-integrated education is the most promising approach. But I have found very little conclusive evidence. For every study, statistical or theoretical, that contains a proposed solution or recommendation, there is always another, equally well documented, challenging the assumptions or conclusions of the first.
"No one seems to agree with anyone else's approach. But more distressing, no one seems to know what works. As a result, I must confess, I stand with my colleagues confused and often disheartened."
Bochel says that social scientists - and he includes himself in this group - are generally better at identifying problems with policy than at offering solutions. "That is part of the reason why governments like think-tanks. Think-tanks say: 'What you need to do is A, B and C.' If someone comes to me, I will say: 'Well, you could do A, B and C, but there are the following problems associated with them.' I think that is a very good way of looking at things - but someone who makes policy doesn't want ifs and buts."
Even if policymakers do make the effort to consider academic research, they may indulge in "cherry-picking", he adds.
"I think politicians feel happier with solutions that fit with their values," Bochel says. "We can produce tons of research and lots of findings, but a lot of it comes down to what is on the agenda and who chooses to act on what."
For these reasons, it is easier to find examples of government policies informed by research evidence than based on it, Bochel believes.
Judy Sebba, professor of education at the University of Sussex, spent six years working as a government adviser on research strategy after Labour came to power.
She entered the Department for Education and Employment on the strength of the case she had made for the use of research in policymaking. Despite having previously worked with the DFEE on specific policies and having experience outside academia, she still found it a culture shock.
Some policy teams thought that research was "just a nuisance that got in the way", she says.
"Policymakers want hard evidence by 4 o'clock this afternoon in three bullet points, and researchers want to pontificate. That was a very hard lesson to learn and I made some terrible mistakes at the beginning, such as presenting them with a 20-page paper on the theoretical aspects of school improvement. I had bad run-ins ... with one or two people."
Academics can be too "precious" about sharing the initial findings of ongoing research in case those findings change later, she thinks.
"But ministers and senior policymakers say: 'We are making a decision about this on Friday, involving the allocation of millions of pounds, whether or not you give us the evidence.' I'd rather they made the decision on the best evidence to date. I think it is very problematic to be too puritanical."
Sebba was involved with commissioning the 1998 Hillage Review, which concluded that educational research was following, not leading, policy. It recommended the greater use of systematic reviews, where research to date in a particular area is evaluated systematically and appraised critically, an approach that Sebba champions. The review also called for research findings to be presented in more accessible forms.
She also believes that systematic reviews are a way of stopping individual researchers or studies gaining too much influence.
Historically, they have been used most in science, where they inform decision-making bodies such as the National Institute for Clinical Excellence.
During her spell with the DFEE, Sebba argued that systematic reviews should be carried out in social science, too. She helped set up the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre), based at the Institute of Education, University of London. It has developed review methods in social science and public policy.
Systematic reviews have come in for criticism, and concerns have been voiced about their methodology, cost and the time they take. But Sebba believes they are vital, citing what happened during the foot-and-mouth outbreak in 2001 as a paradigm case.
"During the outbreak, the Government kept bringing in individual 'gurus' to quote their work on one study. Every day, the policy changed. Whitehall just could not deal with the research evidence systematically."
Sebba believes that some of the clearest examples of research influencing education policy can be found when academics and policymakers worked together from an early stage while maintaining the integrity of the analysis.
In her role at the DFEE, she was involved in the establishment of research centres including the Centre for the Economics of Education and the Centre for Research on the Wider Benefits of Learning. They were designed to "drip-feed" research evidence to policymakers.
Inviting academics into government departments to give policy seminars to select groups of ministers can also be successful, she says.
"People argue that you lose research integrity if you do that, that the researcher tells ministers what they want to hear. I have to say that wasn't the case (in my experience). We heard some quite difficult conversations where people came in and told ministers things they didn't want to hear. It was done in a context of extreme trust."
Finally, Sebba makes the point that academics may not even realise when their research has made a difference. She has seen that findings can influence things behind the scenes, or can be taken up years after they are submitted.
Speaking to the Science and Technology Select Committee in 2006, Alistair Darling, then Secretary of State for Trade and Industry, explained how he balanced the information he received when making decisions.
"You want to take into account all the available evidence, but a minister's job, Parliament's job, is to reach a judgment as to whether or not a particular policy ought to be pursued ... I strongly defend my right, as the Secretary of State, a member of an elected Government, to form a judgment as to what I think is the right thing to do, and the House of Commons and Lords will decide."
Bochel thinks this is only fair and has a certain amount of sympathy, given the difficult decisions politicians have to make. But that, he says, is what we elect them for.
ON CRIME AND PUNISHMENT, DECISION-MAKERS ARE REPEAT OFFENDERS
David Wilson says he has not seen much "evidence-based" policy.
"Frankly, if that were the case, we would not have our highest-ever prison population," says the professor of criminology at Birmingham City University.
Wilson refers to the "veritable mountain" of criminological evidence demonstrating that large-scale imprisonment has very little impact on crime levels and can even make matters worse, given the rates of recidivism among released offenders.
"Our prison population is the result of political decisions, not a consequence of the Government listening to academic criminologists," says Wilson.
He also argues that there is a lack of data to support the big prisons being planned to house 3,000-5,000 inmates.
Instead, the evidence suggests that small prisons are safer and better run and may be able to help offenders overcome some of the problems that they entered prison with, such as drug addiction and poor mental health.
"However, there are economies of scale here, and so 'bigger' means 'best' if you are worried about costs - and that is what will go down best in the constituencies. It is a 'pile it high, sell it cheap' mentality."
Another issue is whether government-sponsored crime research is sufficiently independent.
Reece Walters, a professor at The Open University, has called on academics to boycott Home Office research grants, arguing that the Government "manipulates and cherry-picks" criminological data.
Wilson agrees, and says that for this reason, many criminologists refuse to touch government-funded research. "You might conclude that research undertaken by criminologists on behalf of the Government is not seen as academic criminology at all."
Martin Innes, director of the Universities' Police Science Institute at Cardiff University, is frustrated by the persistent use of antisocial behaviour orders. He says that research shows that labelling people as antisocial can exacerbate that tendency in them. But he adds that there are areas where the Government has taken notice of academic research.
"In early 2000, community policing models were out of favour. But research showed what they could achieve if properly delivered, and this resulted in the roll-out of neighbourhood policing teams across the country," he says.