Science funding is a gamble so let’s give out money by lottery

Perhaps your life, like that of many of my friends and relatives, has been improved by propranolol – a beta-blocker that reduces the effects of stress hormones, and that’s used to treat conditions such as high blood pressure, chest pain, an uneven heartbeat and migraines. It’s considered one of the most important pharmaceutical breakthroughs of the 20th century.

Thank goodness, then, that the United States in the 1940s didn’t have the same attitude to science funding that it does today. If it had, you could expect to see seven experts sitting around a table, trying to assign a score to an unorthodox grant proposal to study the function of adrenaline in the body. ‘If I have properly understood the author’s intent, then this mechanism has already been settled, surely,’ a senior physician might say. A lone physiologist mounts a defence, but the pharmacologists in the room are dismissive, with one remarking that the mathematics ‘look cumbrous and inconvenient’. So the pathbreaking research of the late Raymond Ahlquist, a professor at the Medical College of Georgia who laid the foundations for the discovery of propranolol, could easily have ended up with low marks, and his theories might never have seen the light of day.

Science is expensive, and since we can’t fund every scientist, we need some way of deciding whose research deserves a chance. So, how do we pick? At the moment, expert reviewers spend a lot of time allocating grant money by trying to identify the best work. But the truth is that they’re not very good at it, and that the process is a huge waste of time. It would be better to do away with the search for excellence, and to fund science by lottery.

Superficially, the grant-giving process seems rational. Following an application deadline, academics assess and rank the proposals they’ve received. For example, members of a molecular biology review panel might find themselves weighing up a proposal to investigate a new biochemical pathway that’s potentially relevant to Alzheimer’s disease against a request to screen large protein datasets that could give rise to new treatments for diabetes. Each reviewer gives the proposal a score, and the scores are averaged across reviewers. Grants are awarded from the highest average mark downwards, stopping at the point at which the money runs out.
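
To make the mechanics concrete, here is a minimal sketch of that process in Python; the proposals, costs and scores are invented purely for illustration:

```python
import statistics

# Toy model of the current process: each proposal is scored by several
# reviewers, the scores are averaged, and grants are awarded from the
# top of the ranking downwards until the budget runs out.
proposals = {
    "alzheimers_pathway":  {"cost": 500_000, "scores": [6.5, 7.0, 8.0]},
    "diabetes_screening":  {"cost": 750_000, "scores": [7.5, 8.0, 6.5]},
    "adrenaline_function": {"cost": 300_000, "scores": [9.0, 4.0, 3.5]},
}

budget = 1_000_000
ranked = sorted(proposals.items(),
                key=lambda item: statistics.mean(item[1]["scores"]),
                reverse=True)

funded = []
for name, info in ranked:
    if info["cost"] > budget:
        break                      # stop at the point where the money runs out
    funded.append(name)
    budget -= info["cost"]

print(funded)  # note how the divisive, unorthodox proposal sinks to the bottom
```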

One big problem with this approach is that the monetary cut-off point still tends to be way above the quality cut-off point. Even though money for research has been generally increasing, the number of researchers is growing even faster. As a consequence, success rates for applicants have been falling, and adventurous proposals rarely get funded. A review panel in the 1970s might have been able to fund 40 per cent of applications, which meant it could support all of the excellent, solid proposals and still take a few risky bets. Today, a review panel can often fund 20 per cent or less of proposals submitted, leaving little chance for the likes of Ahlquist to secure funding.

Peer review adds another layer of irrationality. Sir Mark Walport, the UK government’s chief scientific adviser and the former director of the Wellcome Trust, the UK’s largest philanthropic funder, has labelled peer review a folie à deux because it relies on the researcher and the reviewer sharing a delusional belief in their capacity to make accurate predictions.

The applicant is forced to commit to a plan of action and a set of objectives or ‘deliverables’, most of which are probably quite hazy at the outset. Research, after all, is about finding out what you don’t know, so it’s a pretty messy and unscriptable process. The systems biologist Uri Alon, in a TED talk, has likened science to improvisational theatre. You might think you’re going from A to B, but halfway there you get lost, stumble around, completely forget what you’re even doing there – yet, if you manage to hold on for a while, you might find C, which is valuable in its own right. But if you promised your funder to go from A to B, then finding C becomes much harder, and you aren’t likely to find B anyway.

Reviewers suffer from their own version of precision-madness. When ranking proposals, panellists are making conjectures: which of these projects, given enough time, will contribute most to society? But the path from initial funding to wider social impact is poorly understood, and can take 30 to 50 years to unfold. It’s ludicrous to think that you can specify, to several decimal places, which ideas are most likely to succeed. This obsession with ranking means that we also demand excessive amounts of information from applicants, and waste a colossal amount of their time. In Australia, during a recent annual funding round for medical research, scientists spent the equivalent of 400 years writing applications that were eventually rejected.

Finally, ‘expert reviewers’ are not fungible commodities. One reviewer is not the same as another, and their judgements tend to be highly personal. Of the nearly 3,000 medical research proposals submitted for public funding in Australia in 2009, nearly half would have received the opposite decision if the review panel had been different, according to one notable study. As a result, the process isn’t just ineffective – it’s systematically biased. There’s evidence that women and minorities have lower chances of securing grants than people who are male or white, respectively.

Fortunately, there’s a simple solution to many of these problems. We should tell the experts to stop trying to pick the best research. Instead, they should focus on filtering out the worst ideas, and admit the rest to a lottery. That way, we can make do with shorter proposals, because the decision to accept or reject a ticket to a random draw requires less information – and highly specific proposals are unrealistic anyway. So instead of asking reviewers to make unreasonable predictions, they can turn their minds to weeding out cranks and frauds. Bias will still occur in the filtering stage, of course, but many more proposals will make it through to a lottery, which is inherently unbiased. The New Zealand Health Research Council is experimenting with such a programme, although with funding extended only to about four researchers per year, its sample size is too small to convince larger funders.
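
In code, the alternative is even simpler than the ranking it replaces. A minimal sketch, in which `passes_screening` is a stand-in for the panel’s only remaining task of weeding out unsound proposals:

```python
import random

def fund_by_lottery(proposals, budget, passes_screening):
    """Fund a random subset of screened proposals until the budget is spent."""
    eligible = [p for p in proposals if passes_screening(p)]
    random.shuffle(eligible)              # the lottery: order is pure chance
    funded = []
    for proposal in eligible:
        if proposal["cost"] <= budget:    # skip anything the remaining budget can't cover
            funded.append(proposal)
            budget -= proposal["cost"]
    return funded
```

Because reviewers return only a yes/no judgement here, the applications they read can be correspondingly shorter.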

A lottery might sound like an extreme, baby-and-bathwater kind of solution. Not all scientific enquiry takes decades to play out, and sometimes there’s genuine agreement that a certain strand of research is important and timely. So perhaps we could keep a small proportion of the grant money for ideas where there’s a consensus among the expert panellists. Then we pluck out the bad ones and throw everything else into a pot. The trick with this triage would be to keep the bulk of the funds for the higher-risk, randomly selected proposals. My own view isn’t settled – I’ve run computer simulations for both scenarios, and while each one comes out looking better than the current system, the comparison between them is inconclusive. Other experts who study science funding, and accept the need for a lottery, still disagree about the best model (appropriately enough). More experiments are needed.
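
A minimal sketch of that triage, reusing the `fund_by_lottery` function above; the 20 per cent consensus share is an invented parameter, since the right split is exactly what remains unsettled:

```python
def fund_with_triage(proposals, budget, is_consensus_pick, passes_screening,
                     consensus_share=0.2):
    """Two-tier allocation: a small consensus pot, then a lottery for the rest."""
    consensus_pot = budget * consensus_share
    funded = []
    for p in (q for q in proposals if is_consensus_pick(q)):
        if p["cost"] <= consensus_pot:
            funded.append(p)
            consensus_pot -= p["cost"]
    # The bulk of the funds (plus any consensus money left over) goes to a
    # lottery over every remaining proposal that survives screening.
    remainder = [p for p in proposals if p not in funded]
    lottery_pot = budget * (1 - consensus_share) + consensus_pot
    return funded + fund_by_lottery(remainder, lottery_pot, passes_screening)
```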

The late Sir James Black, the Nobel prizewinning inventor of propranolol, said that the peer review system was the enemy of scientific creativity, and that his own work would have been impossible without Ahlquist’s theory. Scientific thinking can often lead to progress, but the institutions of science can also create a major regress. Let’s face it: getting a grant is a lottery anyway. We should at least make it official, so the whole process can be cheaper, fairer and more efficient.

This article was originally published at Aeon and has been republished under Creative Commons.


13 Responses

  1. How about a system where each researcher is allocated a sum and they pool their budgets together around projects that they want to collaborate on?


  2. The numbers suggest that the problem is a hard minimum on project cost, C, around 5 to 10 times larger than the amount each eligible scientist would receive if the total money M were doled out equally among N scientists (i.e. C > M/N by a factor of 5 to 10), so every researcher who fails to find a collaboration would represent a net loss of M/N.

    Maybe that’s not a major worry because a) almost everyone will find a collaboration, or b) researchers will find meaningful uses for partial funding, but the cost of equipment, the fragmentation of disciplines, and inherent competition pressures within science might make that hard.

    A very close alternative that has slightly fewer problems is pre-lottery sharing agreements: a group decides that if any of them wins the lottery (thus guaranteeing the C funds required for a viable project), they will all join in on that project (perhaps the lottery winner gets to be the PI).


  3. It seems that there would be great pressure to collaborate, but if this becomes an issue, commitment to collaboration can be made a condition for funding (i.e., 5 or 10 researchers must agree to collaborate for any of them to be granted their allotted funding).

    In general, it seems that a lottery should only be used if there is no way to split the goods equally. This is the case when it comes to large scale decision-making power, but it may not be the case in many other situations, including this one.

    A split-the-resources solution may also be appropriate for democratizing mass media – anyone who is interested is allocated some funding, and can collaborate with others to produce media content: news, non-fiction work, documentaries, opinion pieces, interviews, opinion polls, etc., and possibly also fiction and works of art.


  4. I agree with you about the principle, but suspect that in practice a split-the-resources solution in scientific research would be less-than-optimal (because of things like cost of equipment, cost of university overheads per staff member, and similar).

    As an extreme example, I’m pretty sure that the Large Hadron Collider costs significantly more than the pooled budgets of 1,000 or so experimental physicists would cover.

    In less extreme cases, such as equipment-the-size-of-a-room instead of a mega project like the LHC, maybe time-sharing solutions are possible, but I wouldn’t count on them always being viable/efficient. It might just be that one person with an expensive piece of kit delivers better results than 5 persons sharing that piece of kit.


  5. > Large Hadron Collider

    Something as big as the LHC would seem to require a more formal organization – maybe with a decision making body whose members are selected through sortition.

    > It might just be that one person with an expensive piece of kit delivers better results than 5 persons sharing that piece of kit.

    Theoretically, yes, but that sounds like the standard authoritarian argument against collaboration on an equal-footing basis: “someone has to be in charge for things to run efficiently”.


  6. Re LHC, I agree (just sketching out limits to equal distribution in science).

    Re the smaller scale, I’m not arguing for distribution by authority, but making a point about the nature of the good: I think it might be the case that *any* of the five researchers + 1 piece of kit would be more efficient at producing meaningful results than all five trying to use that piece of equipment at the same time (under some time-sharing agreement or similar).


  7. I agree with Yoram.

    Doling out the money evenly would mean a considerable boost to the budgets of most PIs. Any large project typically requires some degree of collaboration anyway, if only for want of equipment. That’s nothing new. PIs share equipment and spend their money on whatever would prove most useful to have in-house. When I was a grad student there were usually students from other labs waiting to borrow our $100k peptide synthesizer.

    A lottery would create winners and losers at random, which is a dubious improvement on the partly random, partly merit-based, reputation-centric arrangement we have at present. The proposed system would still force researchers to share their good ideas with reviewers who are better equipped and well-motivated to scoop the ideas for themselves. Huge amounts of time and effort are wasted writing grants that are little more than wishful thinking. Better to keep/fire PIs based on a review of their activities and give everyone on the payroll the same budget. Or have a tiered system with more experienced PIs receiving more funds. Like a pay grade. That way the most efficient and clever upstarts are the ones who stick around and end up with the biggest budgets down the road.

    Special megaprojects like the LHC aren’t exactly funded by conventional grants at present. If a project is big enough to merit its own breakdown in the government’s official budget… it’s not going to be funded by lottery. It really doesn’t make sense to try to roll something like that in with all other science funding.

    In any case, the real issue is that funding levels have not kept pace with the growth of the rest of the budget. That’s why the grant approval rate has fallen off. Get funding levels up and the problem mostly goes away.


  8. … or just do your ‘risky’ research without funding. Not necessarily easy, but if you’re game.


  9. The “fund everyone” versus “give everyone an equal chance” is an important debate that I’d like to get right (thanks Yoram and Naomi for engaging in this!) so I’d like to set the scene a little first and then see where we might get to.

    Re the problem going away if we throw more money at it: I agree, though I also see it as a somewhat separate issue (we should both have optimal levels of public funding for R&D, and also have a good mechanism for distributing those funds). At the same time it’s true that low funding levels make the pains on the distribution side more acute, and, conversely, high levels of funding mean a suboptimal allocation mechanism is not so painful.

    Two general issues to keep in the back of one’s mind are:
    1. It would be nice to actually cause some change to happen. There seems to be growing unease with the way things are currently done, and it may be more politically expedient to change the allocation mechanism (a potentially inter-agency policy decision) than to double the science budget. This also means we should evaluate the political/policy appeal of suggestions given the current decision makers.
    2. Scientists are smart and adaptive. If it suddenly becomes a lot easier to have a well-paid job in academia, it is reasonable to assume the number of people going into PhDs or staying in academia after a PhD will go up significantly (the converse seems to also be true, though to a lesser extent, e.g. the exodus of machine learning academics into high-paying research jobs in industry).

    So, numbers. Here is what I can find from a quick survey of US statistics (all from https://www.nsf.gov/statistics/2016/nsb20161/#/report/chapter-5/highlights):
    “The academic workforce with research doctorates in science, engineering, and health (SEH, hereafter referred to as S&E) numbered just under 370,000 in 2013, the latest year for which data are available.”
    “In 2014, U.S. academic institutions spent $63.7 billion on research and development in all S&E fields.”
    “the federal government provided well over half of academic R&D funds in 2014 (58%)”

    This gives us 58% × $63.7B / 370K ≈ $99.9K per researcher per year (a small script for varying these assumptions is at the end of this comment). This assumes we divide up ALL the federal support, including DoD, DoE and NASA funds, as well as the NIH and NSF money earmarked for specific mega-projects. I reckon this makes the “fund everyone” option just about viable, especially if pooling of resources is actively encouraged, but that would depend on what percentage of the total pool could be diverted to this effort. If earmarking and department-specific budgets cannot be easily touched, we may get down to $50K per researcher per year, which to me starts to look difficult, but perhaps you think this could be made to work?

    As to specific points made above: while I don’t think scooping by reviewers is a major problem (there is anecdotal evidence that it happens, but not that it’s common), a lottery would allow the same reduction/elimination of written proposals that a fund-everyone scheme would, so I’m not sure that’s a differentiating factor.
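
    A tiny script for rerunning the arithmetic under different assumptions (the 50% divertible fraction in the pessimistic scenario is my guess, not an NSF figure):

    ```python
    # Federal academic R&D dollars per researcher, from the NSF figures quoted above.
    total_academic_rd = 63.7e9   # US academic R&D spending, 2014
    federal_share = 0.58         # fraction provided by the federal government
    researchers = 370_000        # academic S&E research-doctorate workforce, 2013

    per_head = federal_share * total_academic_rd / researchers
    print(f"All federal funds split equally: ${per_head:,.0f}/researcher/year")      # ~$99,900

    divertible = 0.5             # guess: only half the pool escapes earmarks
    print(f"Unearmarked funds only: ${per_head * divertible:,.0f}/researcher/year")  # ~$49,900
    ```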


  10. Hi Shahar,

    The question of the size of the budget seems to me to be independent of the question of the distribution. Whatever the average per-researcher sum is, the distributional question is whether this sum would be better managed on a collaborative-egalitarian basis than on a central-authority basis (that of the award-winner, whether chosen by lot or by supposed merit).

    I’d also like to bring up an issue that to me seems related: the so-called peer-review process. The situation is somewhat similar to that of grants: the set of researchers is competing for a limited resource. In this case it is publication space (tiered by publication prestige). The “peer review” system is legitimized based on the claim that it promotes better research. In fact, just like in the context of grants, it is far from clear that it does. It is likely that it promotes the same negative phenomena that occur in the context of grants.


  11. Hi Yoram,

    I guess what I’m trying to get at is the question of whether a “research position” is a scarce good or not. If it takes, say, $200,000 per year before anyone gets to a point where they can carry out meaningful research, and there are not enough $200k pots of cash to give out to all the people who want one (and are qualified to have one), then I think allocating the available pots at random is the egalitarian solution. I think you pick up an important point though, that PIs often get several multiples of the minimal pot and dish it out in a central-authority manner to collaborators and other researchers in their lab. Maybe the experience advantage of PIs justifies this, or maybe grants should be parcelled out at the minimal viable pot size. Even if the latter is the solution chosen, though, there would still not be enough pots to go around to all the people who want research careers – hence the analysis of the size of the budget.

    Re publication peer review, while that system is plagued by many similar problems (bias, slow response time, scarcity that may well be artificial), there is a big difference I think in terms of level of uncertainty: my argument is that during grant peer review experts are not actually in a very good position to judge which proposal is best, because the information about the quality of the research that will emerge from the project is just not there to be had. In the publication peer review case the results of research are available, and so the uncertainty is much lower and the role of expertise becomes more justified.


  12. But suppose that indeed the total sum available is M, the number of researchers is n, and the minimum usable amount is x = kM/n, where k > 1, say k ~ 5 or 10. Since the budget then funds only M/x = n/k projects, each researcher wins with probability 1/k. So the options are drawing lots and giving money to one out of every 5 or 10 researchers, or giving out money to groups of 5 or 10 (or more) researchers who manage to agree beforehand to pool their resources.

    The difference is that in the first case the lucky winner calls the shots, while in the other case the groups have to collaborate on an equal-footing basis. It seems that unless we believe that a hierarchical structure is inherently superior to an egalitarian one, then there is no reason to prefer the former to the latter. No? The argument that the single winner will be able to better manage the resources than the committee of 10 is an argument in favor of hierarchy.

    As for “peer-review”: my feeling is that the essential issue is not about uncertainty, but simply about the subjectivity of judgment of quality – just like in the case of grants. The ability of “the community” (in fact, of established researchers) to measure the quality of research according to some objective, useful characteristics is minimal and to the extent that it exists, it is more than offset by the negative effects of the system.


  13. […] visits a school in the US and finds that the students are receptive to the idea. He also mentions the idea of using a lottery to allocate research […]

