It should be easy to find out about science. After all, open publication of results is an article of faith for most researchers. But in practice it is almost impossible. There is so much published research - mountains of the stuff. How on earth do you know where to look? And how do you know if what you find is any good?
One answer is to leave the choice to the staff of the few key journals that, by general consent, publish the really significant new work across the sciences. So those journals - the two weeklies Nature and Science for all the natural sciences, and a few more in medicine - now wield extraordinary influence. There is a paradox here. There is so much new, though mostly trivial, science pouring from the labs that these journals can only cover an ever-diminishing fraction of it. But it is just because the volume of research is so overwhelming that readers rely on them to filter gold from grit.
Researchers know this, of course. It is not just that these few titles have what the citation buffs call a "high impact factor". They are also the way to catch the eye of policy-makers and funding agencies. And helping the process along is one group of readers with the power to increase that impact still further: the journalists.
Ever noticed how good science stories always seem to break in the second half of the week? That's because the weekly journals all publish then. If a story appears on Sunday or Monday, chances are someone has broken the embargo on the weekly press release from Nature or The Lancet, or there has been a leak of results about to appear in one of those journals. Dolly the sheep was a notable example of such a leak. Even if an early announcement is deliberate, access to the real results often has to wait for journal review.
How do the editors of these journals handle the more everyday pressures of gatekeeping between science and the wider world? And how should we view the use of the papers they publish by journalists, who regard publication in Nature as news in itself?
More than four-fifths of the papers submitted to Nature, for example, are rejected, so how do they choose? The editor, Philip Campbell, naturally maintains that the prime criterion is quality. But if pressed he will admit that a "unique result" may also appeal, even if it is not especially profound science, "like the person who counted, I think it was a million drops of water from a tap...".
Social impact comes some way behind, Campbell suggests, but is definitely a factor: "There's no question that if it is good science, and is going to play a key role in some public issue of the time, we will take that into account." But, he insists, Nature will not pander to journalists: social impact is not the same as news value, and news value alone will not suffice if the science is merely ordinary. If a paper is submitted about a genetic link with a particular condition that is obviously going to be news, Nature would not take it unless the science is novel, we are told.
The editor also has to be the final arbiter when personal animosities cloud the supposedly objective evaluations of papers from an author's scientific peers. Would-be authors should try to steer clear of such controversies, it appears, because Nature errs on the side of caution, preferring to reject good papers rather than publish bad ones.
So much for how to get your paper into Nature. What happens next to the select few papers that finally appear? An even smaller selection is featured in Nature's weekly press release. This is the real influence on the wider reporting of science. For hard-pressed hacks, the system works perfectly. Publication of a paper in a peer-reviewed journal provides a news peg. And journalists assume that the peer review process guarantees the results, so no further checks are needed. The precious press release then answers the next two most important questions about a piece of science news: is it understandable, and can it be written fast?
There is little role here for the independence of mind or critical judgement that the press is supposed to show. The present symbiosis suits the science journals and the popular media very well. But how well does it serve the wider public? One consequence is that rather few independent voices comment on science at all.
The often slavish reliance on a few journals implies taking science as a given, simply reporting on work that is already done. This means that broader commentary on science is in short supply. With a supply of easy stories guaranteed, there is little incentive to ask about issues like the motivations underlying funding or who creates the agenda for doing the research. The journals select stringently, but they do not pose new questions. Perhaps science correspondents should receive a monthly bonus, or be eligible for a journalistic enterprise award, if they can come up with stories that do not originate in one of the weekly science journals.
Jon Turney teaches science communication at University College London.