Jul 19, 2022

Why We Decided to Pick $2.5 Million of Grants Out of a Hat

Simple alterations to grant making processes can make them more equitable and innovative.

By Dawid Potgeiter and Virginia Cooper

At Templeton World Charity Foundation, we are interested not only in funding innovative research on human flourishing, but also in improving how the research grantmaking process itself is conducted. As part of this, we have experimented with how we carry out requests for proposals (RFPs) for basic research in psychology and neuroscience under our Accelerating Research on Consciousness initiative. The goal of these experiments is to make the outcomes of the process more equitable while also making the Foundation less risk-averse in its selection of proposals.

Business As Usual

Altering the RFP and grantmaking process required that we discard preconceived notions about the ability of experts to select projects. Typically, proposals are evaluated by teams of experts who look at both the content of the research proposal itself and the background of the researcher and institution behind the proposal. While this has historically been the way things were done, it’s not necessarily the best way to select research projects to fund.

The business-as-usual approach to grantmaking has the effect of privileging research proposals from brand-name institutions in Western, educated, industrialized, rich, and democratic (WEIRD) countries, and of discouraging researchers from submitting riskier but more innovative proposals. Under traditional RFP processes, reviewers may favor researchers from well-known institutions, and applicants are more likely to submit mainstream ideas that they believe will be funded.

As grant makers, we recognize that external expert reviews are not a completely accurate predictor of how successful or innovative a proposal will be. This is not to say that such reviews provide no useful information, but a degree of uncertainty comes with every review as a result of implicit biases or other limitations on the part of reviewers. We also try to account for the uncertainty that arises when submissions are ranked, given that riskier but more innovative proposals tend to be ranked lower. Put differently, bias enters the ranking process because, by definition, the proposals with the top scores are the ones that drew the least criticism. Truly innovative ideas, however, are often more likely to draw criticism, which can hurt their ranking. Instead of chasing rainbows in pursuit of the "best" ideas, grant makers should focus on supporting the most suitable, innovative ideas available to them.


Changing the RFP Process

Our challenge was to figure out how to make the RFP process more equitable and, at the same time, increase the probability that more innovative ideas would receive grants. This required that we alter both how proposals were initially reviewed and how they were ranked and ultimately selected.

We began by instituting a double-blind external review, which meant that reviewers did not know whose proposals they were evaluating. By concealing the identities of the researchers and their institutions, the double-blind review helped reduce the risk of unintended bias toward brand-name institutions in WEIRD countries. However, while double-blind review can lessen the effects of implicit bias, it does not decrease risk aversion on the part of selection committees or increase the chance that innovative ideas are ultimately selected.

Consequently, the second change we instituted was an element of randomization. We reviewed 50 proposals overall and scored each of them. We then discarded every proposal that scored below the median, which left us with 27 proposals. From this pool of 27, we picked at random by assigning each proposal a number and then, quite literally, drawing them from a hat. An important element of this is that we told the applicants up front that the final selection would be random, so they knew that if they landed within the top 25–27 proposals, they would have an equal chance of being selected. This randomization meant that applicants did not feel pressure to submit only "safe" ideas instead of more innovative ones, and it eliminated the tendency of reviewers to choose proposals perceived to be less risky.
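The selection procedure described above, scoring proposals, discarding those below the median, then drawing winners at random from the remaining pool, can be sketched in a few lines of code. The proposal IDs, scores, and number of awards below are hypothetical placeholders, not the Foundation's actual data; this is a minimal illustration of the median-cutoff-plus-lottery idea, not the Foundation's implementation.

```python
import random
import statistics

def select_grants(scores, num_awards, seed=None):
    """Median-cutoff lottery: keep proposals at or above the median
    score, then draw winners uniformly at random from that pool.

    scores: dict mapping proposal ID -> review score (hypothetical).
    num_awards: how many grants to draw from the hat.
    seed: optional seed, so a draw can be reproduced for auditing.
    """
    median = statistics.median(scores.values())
    # Every proposal scoring below the median is discarded.
    finalists = [pid for pid, s in scores.items() if s >= median]
    rng = random.Random(seed)
    # Each remaining proposal has an equal chance of selection.
    return rng.sample(finalists, min(num_awards, len(finalists)))

# Hypothetical example: 10 proposals scored 1 through 10.
scores = {f"P{i:02d}": i for i in range(1, 11)}
winners = select_grants(scores, num_awards=3, seed=42)
```

Note that ties at the median can leave more than half the proposals in the pool, which is consistent with 27 of 50 proposals surviving the cutoff in the process described above.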

Future Experiments

These two changes to our RFP process seemed to be effective, and we are already initiating a second RFP using a similar process as part of our Listening and Learning in a Polarized World initiative. Naturally, methods that require radically altering the role of expert review in grantmaking will not take hold overnight. However, we believe that as double-blind and random selection processes begin to yield scientific breakthroughs, they will become increasingly common. The goal of scientific inquiry is to test ideas so that we understand our world better and improve human flourishing. The way we fund those activities should be just as rigorous as the scientific process itself.