John Reed, chief executive officer at the Sanford-Burnham Medical Research Institute in California, once heard a story about the principal investigator of a “very large” genomics group, who was approached by a younger scientist at a conference.
“The younger guy said, ‘Hey, Dr So-and-so, I’ve thought of a great idea for an experiment.’
“The PI said: ‘Why don’t you come to my lab and do the work?’
“The younger guy replied: ‘I am in your lab.’”
Reed himself claims never to have had such a problem, despite running a lab whose headcount has, at times, reached 55.
“Not everyone knows how to run a large lab, but there are plenty of PIs who also don’t know how to run a small lab, for that matter,” he says.
He was not best pleased, therefore, when in August the National Institutes of Health in the US announced that it would apply extra scrutiny to grant applications from PIs who already have more than $1 million (£620,000) a year in direct grants from the NIH, to ensure that the proposed project did not overlap with the PI’s other NIH-funded work.
The agency said the move would “assist in the most efficient management of NIH resources” in an era of flat budgets, a historically low success rate of 18 per cent for grant applications and funding levels for some large labs that, according to an analysis of 2007 funding figures by the journal Nature, reach up to $25 million a year.
The policy, however, bears more than a passing resemblance to one that has been in place for two decades at one of the agency’s biggest institutes, the National Institute of General Medical Sciences (NIGMS). There, applications from investigators with more than $750,000 in funding from all sources are subject to a similar extra level of scrutiny. And, according to Jeremy Berg, who was director of NIGMS between 2003 and 2011, this policy - which can result in the denial or reduction of funding, or provision of grants on condition that others are not renewed - was always motivated by a sense that larger labs became difficult to manage and, therefore, less productive.
That sense was firmed up in 2010 when Berg, who is now associate senior vice-chancellor for science strategy and planning in health sciences at the University of Pittsburgh, carried out an analysis of the productivity of nearly 3,000 researchers funded by NIGMS, measured by the number of papers produced and the average impact factor of the journals in which they were published. He found that productivity reached a plateau at a funding level of around $750,000 a year and, beyond that, began to fall off slightly.
Berg’s use of journal impact factor as a proxy for paper quality has been criticised. He defends it for analyses of large aggregates of people but accepts that “some well-funded laboratories are impressively productive both in terms of numbers of publications and their impact” - which is why a hard cap on funding would “not be wise at all”. But Berg believes his analysis informed and “helped provide an empirical basis” for the NIH policy - which, in an opinion article published following its announcement, he described as a “step in the right direction”.
“Special consideration should be given to investigators with strong proposals who have few or no other sources of funding, such as those at the beginning of their careers or established, productive investigators,” he suggested in the article in Nature. “Funding these applicants would probably have a bigger impact… than providing incremental support to an investigator who already has substantial other support.”
Concerns about large labs draining the funding system have also been raised in the UK. A particular flashpoint was the 2009 decision of the Wellcome Trust to replace its project grants with fewer, more generous “investigator awards”. Critics were concerned that these would all be gobbled up by people whose strong track records had already allowed them to assemble large labs.
Those fears have been only partially realised. One of those with a relatively small lab to have received an investigator award is Peter Lawrence, an MRC emeritus scientist at the University of Cambridge. Lawrence, who keeps his lab’s headcount to three or four, is an outspoken critic of large labs. He warns that “in the worst - and not infrequent - cases, (large labs) become inefficient, lacking in drive and originality and riven by internal fights over authorship and jealousy between the younger people”. They are also prone to “groupthink”, such that “the whole group, lemming-like, backs the wrong view and ignores inconvenient truths”. He insists it is right to hold larger groups to a “higher standard of evidence and expectation” than smaller groups, and to require their leaders to “demonstrate both efficiency and effectiveness and show they have enough time to run each of their grants and care for each of their people”.
Lawrence welcomes both Berg’s study and a forthcoming paper by bibliometrician Peter van den Besselaar, professor in VU University Amsterdam’s department of organisation sciences, which reveals that levels of citations per publication, which he takes as a measure of “quality or creativity”, bear no relation to group size, while output per researcher actually declines as groups grow.
The paper, which is currently under review, also reveals that the most productive labs are those with the highest proportion of doctoral students, while the groups with the highest-quality publications tend to have a wider variety of funding sources and leaders who spend more time on research.
Another critic of large labs is David Colquhoun, former A.J. Clark chair of pharmacology at University College London. He never had more than six people in his lab, and usually only three. “Even then I found it hard to check all their data while continuing to do something myself,” he says.
Sceptical of bibliometric analyses, he prefers to back his argument with examples of “the early careers of people who absolutely everyone agrees are outstanding”: namely, the Nobel laureates Andrew Huxley, Bernard Katz and Bert Sakmann, all of whom he knows or knew personally.
“In every case, during the time when they were rising to fame, they were doing experiments and analysing data themselves. They took responsibility for what went into their papers,” he says.
By contrast, the PIs in large, modern labs are “barely ever seen” since they are too busy attending conferences, writing grant applications or “wrestling with bureaucracy”, Colquhoun claims, and “that means little input into the ideas and little control of quality”. This lack of scrutiny, in his view, also makes scientific misconduct more likely, and is a strong argument for keeping headcounts to three or four people whose expertise does not stray too far from that of the PI.
Meanwhile, a Nature editorial published earlier this year suggested that smaller lab sizes might be one way to stem what the journal believes is a rising tide of “sloppy” mistakes requiring subsequent corrections to papers. “It is unacceptable for lab heads - who are happy to take the credit for good work - to look at raw data for the first time only when problems in published studies are reported,” it said.
Berg agrees that more corrections and retractions appear to come from large labs, but neither he nor VU University Amsterdam’s van den Besselaar is aware of any formal studies into the issue. For his part, van den Besselaar doubts that misconduct correlates with lab size, since misconduct can be the work of PIs themselves, and it is a mistake to assume that all of it is carried out by mischievous mice when the cat is away. “A large group often has more group leaders, and they may control each other,” he explains.
Sanford-Burnham’s Reed also denies that large labs are more prone to misconduct. Displayed on the walls of his lab - whose NIH funding touched nearly $11 million a year in 2007, according to the Nature analysis - is a code of scientific conduct, and no papers are submitted for publication before they have been scrutinised against a checklist, which includes checking that the presented data match the primary data. Reed also likes to see at least some of the data reproduced by another researcher in the lab, although he does not require this systematically.
In addition, Reed insists on seeing raw data well before the point of publication: each one of his staff - who currently number around 35 - is required to bring the data to the monthly meetings they have with him. He also holds less formal weekly meetings and uses project management software to track goals set and coordinate access to technical resources.
Meanwhile, a team of permanent postdocs, known as “senior leaders”, provides daily technical support to his technicians and graduate students. In the past he has also charged senior leaders with overseeing formally defined sub-modules of around 10 researchers each.
Reed believes the efficacy of his methods is apparent in his productivity: his lab averages at least one paper per person per year, totalling about 850 since he established his lab in the late 1980s. He has also registered more than 100 patents. He says he feels as on top of each project as he needs to be - but this does not include “driving everybody crazy” by “micromanaging” their experiments. “You don’t need 30 years of experience to be spending your time offering technical advice on how to run a western blot. It is more important for you to be setting the overall scientific vision and helping bring value to the science,” he contends.
Sean Eddy, a group leader at the Howard Hughes Medical Institute’s Janelia Farm Research Campus in Ashburn, Virginia, agrees that large labs are “not necessarily a bad thing”. He points out that the average lab size of university PIs funded by the Howard Hughes Medical Institute is around 15.
Nonetheless, Janelia Farm takes a different approach. Lab sizes are typically restricted to six people. It is not alone: the European Molecular Biology Laboratory in Germany and the Medical Research Council’s Laboratory of Molecular Biology in Cambridge impose size limits of 10 and eight, respectively.
All three are core-funded institutes, which enables them to provide ample central facilities. This removes the need for individual groups to hire technicians with expertise in those particular areas. According to Eddy, this means the level of expertise in his lab of six is similar to the level found when he ran a lab of 15 people at Washington University in St Louis.
He is happy to be doing some of his own experiments again - not least because it minimises the chances of “honest error”. He fears this is rife in papers produced by more junior researchers in his field of genomics, owing to the complexity of the data.
But the official rationale espoused by core-funded institutes for group-size limits relates to their belief that scientific breakthroughs are most likely when PIs collaborate and - as in Colquhoun’s cited cases - conduct at least some of their own experiments.
Reed’s institute also has numerous core facilities and encourages collaboration; but Reed still believes that a large lab, and the even greater range of expertise it permits him to acquire, facilitates the most effective and efficient science. “Typically a small lab has a limited repertoire of technical approaches it is able to bring to a problem,” he argues.
He agrees that small labs can overcome those limits through multiple collaborations. But the advantage of having all of the expertise in-house, he says, is that it avoids the problem of having to find partners willing to give the envisioned project the same priority as he would.
Collaboration, he adds, is particularly difficult for university-based labs, because their faculty’s need for a broad range of teaching expertise means they often lack an in-house concentration of people “who all care about a common area of biology and are willing to pitch in and work on it”.
Nor, he says, is he ever short of ideas for projects to hand out; in his view, duplication is more likely in small labs that “submit basically the same project and tweak it a little differently” for each grant application.
He points out that NIH programme officers already check for overlap before issuing new grants, and he is suspicious of the agency’s real motives in subjecting one class of labs to extra scrutiny, fearing the policy may become a de facto cap on funding levels.
Hans Clevers, professor in molecular genetics at the Hubrecht Institute in Utrecht, the Netherlands, has a little more sympathy for the agency’s move.
His 30-member lab also includes three permanent senior postdocs charged with providing technical support and training to younger members, as well as reporting any “social problems” to him. He guards against misconduct by having more than one person working on each project and by being on hand when the results of key experiments are coming through.
But his time as a postdoc in the US exposed him to PIs who “sit in their office” and content themselves with weekly digests of data produced by their labs, or who succumb to the “natural tendency” to concentrate their limited time and attention on those projects and lab members on track to produce the best papers, essentially neglecting everyone else.
He also acknowledges that it is possible for successful labs to use their track record to attract “more grants than they really deserve”, allowing them to recruit a superfluous number of staff.
He says such behaviour does not show up in a system that judges people only on their volume of publications, rather than the number of papers they produce per grant. Any restriction on NIH funding should relate to this latter figure, he believes.
“In my case, we are trying to do experiments with many different techniques all centring around one question, so my productivity per capita would go down a lot if I halved my lab,” he says. “But I couldn’t handle anything larger than 30: that would just be too much work.”
Big is bountiful
Until recently, Fiona Watt was in charge of 29 researchers split between two laboratories in Cambridge.
The arrangement made sense as the labs were focused on two distinct scientific interests - cancer and stem cells - and she was able to set up fruitful collaborations relating to both.
But although she has colleagues who enjoy working across two sites, Watt found it stressful. “Some people run two labs in different countries, which can work well if you hive off specific pieces of work and visit the lab once every six months or so. But I was visiting both of my labs every day,” she explains.
However, she thinks that her stress levels would not have improved much had her staff all been on one site. “I don’t delegate supervision and I felt that as the lab got beyond 30, some of the less able postdocs left with no publications because I didn’t have time to produce something (with) them.
“That is really bad, because I want them to leave in a good position to start their own lab if that is what they want to do,” Watt says. “You could say it is their lookout - and I have plenty of colleagues who wouldn’t take time to publish the less-impressive papers - but, for me, the individuals and their experiences are important and I was perennially feeling guilty.”
Earlier this year, Watt moved to King’s College London to take up the role of director of the Centre for Stem Cells and Regenerative Medicine. She took the opportunity to merge her labs and downsize the headcount to 20. She finds this more manageable.
“I am so relieved at not having to be in two offices in one day, and having a smaller number of people has been really good.”
Small is beautiful
The Medical Research Council’s Laboratory of Molecular Biology in Cambridge is the poster child for those who believe that small group size is beautiful, having hosted nine Nobel prizewinners since 1958.
But the lab’s current director, Sir Hugh Pelham, says the idea that the prizes were all won through the work of small groups is “a bit of a myth”. Some laureates, such as 1962 winner Max Perutz, received “a lot” of help - albeit from people who were not officially members of his group, he says.
Sir Hugh also admits that average group size at the lab has crept up from around five to nearer eight in recent years, owing to the greater range of techniques required to publish a modern paper.
Nevertheless, the LMB continues to impose that limit of eight fairly strictly because, in Sir Hugh’s view, “there is no question that the closer you supervise people, the more you get out of them” and “simple maths” suggests that the best science is likely to stem from the highest possible density of driven, independently thinking people - namely PIs. He agrees that large labs can be run by a network of postdoctoral “henchmen” but argues that “on the whole, those people are not as good as the people you would hire as a true group leader”.
Sir Hugh believes that misconduct is less likely when PIs are “very closely in touch with what is going on in their lab” because “you see raw data minutes after they are produced and people don’t have a chance to hide things from you”.
“If the PI is travelling all the time or wasting their intelligence on grant (application) writing, there may be people in their lab who get a little careless,” he adds.
He accepts, however, that there might be a good case for an isolated and “really good” group in a university to expand beyond eight members.
He says there is no reason in principle why a group needs more than one person. But “the problem is the funders inevitably judge people by their output and don’t really make allowances” for small groups. The LMB gets judged every five years by the MRC and “that tends to require (that its labs have) a certain critical mass so you can churn (a high number of) papers out. We have always been rather ambivalent about that.”