We have all heard the various statistical reasons for giving experimental evidence special consideration in policy research. JPAL, for example, has nice resources covering these points (link), including how randomization balances unobserved confounders.
But when speaking to those involved in designing and implementing policies, I also point to two considerations that are not really statistical so much as sociological:
1. Putting manipulability to the test
As it happens, a randomized experiment is not necessarily the most efficient way to obtain a consistent estimate of a causal effect. See, e.g., Kasy’s research (link) or this recent discussion on the Development Impact blog (link). But the non-randomized alternatives share with RCTs the fact that treatments are manipulated and therefore not the product of endogenous selection. It is this manipulation, not whether it is assigned by randomization, that is the essence of an “experiment.”
We have the famous quote from Box (1966, link):
To find out what happens to a system when you interfere with it you have to interfere with it (not just passively observe it).
This I would say is the essential argument in favor of experimentation for policy research.
Whether one intervention or another is likely to be more effective depends both on the relevant mechanisms driving outcomes and, crucially, on whether those mechanisms can be meaningfully affected through intervention. It is in addressing the second question that experimental studies are especially useful. Various approaches, both qualitative and quantitative, help to identify the important mechanisms that drive outcomes. But experiments can provide especially direct evidence on whether we can actually do anything to affect these mechanisms: that is, experiments put “manipulability” to the test.
The successful use of experiments in policy research typically requires drawing on insights from other research on relevant mechanisms. This other research defines debates about what policy makers should do and how they should do it. Experiments have a distinct role in such debates by clarifying what is materially possible.
Related to this is replicability. What is nice about an experiment is that, in principle, you have before you a recipe for recreating an effect. Context-dependence means that replicability may sometimes be elusive in practice. One could measure scientific success by the ability to fashion complete recipes (including the contextual conditions) for replicating effects. Observational studies are often deficient in this regard because we cannot control where and when we get the variation in the treatments of interest. We are left to wonder whether we have really mastered what the observational data imply about causal effects. It is possible that we have not mastered it at all and have merely tricked ourselves (or others!) into believing that certain causal effects are evident. With a complete experimental recipe, we can put it to the test.
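To make the worry about being "tricked" by observational data concrete, here is a toy simulation (all numbers and variable names are illustrative, not drawn from any study). An unobserved confounder drives both selection into treatment and the outcome, so a naive observational comparison badly overstates the effect, while a manipulated (randomized) treatment recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved confounder: affects both treatment take-up and the outcome.
u = rng.normal(size=n)

# Endogenous selection: units with high u tend to take the treatment.
t_selected = (u + rng.normal(size=n) > 0).astype(float)

# Manipulated treatment: assignment ignores u entirely (a coin flip).
t_random = rng.integers(0, 2, size=n).astype(float)

TRUE_EFFECT = 1.0

def outcome(t):
    # Outcome depends on treatment, the confounder, and noise.
    return TRUE_EFFECT * t + 2.0 * u + rng.normal(size=n)

y_obs = outcome(t_selected)
y_exp = outcome(t_random)

def diff_in_means(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

naive = diff_in_means(y_obs, t_selected)        # inflated by confounding
experimental = diff_in_means(y_exp, t_random)   # close to TRUE_EFFECT
print(f"naive: {naive:.2f}, experimental: {experimental:.2f}")
```

The naive difference in means bundles the treatment effect together with the fact that treated units had high values of the confounder to begin with; only the manipulated assignment isolates the effect itself.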
2. Deep engagement
Experimental evaluations of policies or programs are prospective. As such, they typically require deep engagement between researchers and implementers in policy formulation, beneficiary selection, and site selection. Compare this to an ex post analysis, in which such details are often lost. It is for good reason, then, that you often hear practitioners complain that ex post evaluators did not understand “what really went on” in the program: the evaluators weren’t there from the beginning. In my experience, this is much less the case for experimental studies. Working prospectively, the researcher operates alongside implementation; indeed, the experimental method typically defines how beneficiaries are selected.
Finally, constructing the experiment requires that programmatic goals be made concrete. Such concreteness is needed to define interventions crisply and to devise outcome measures. In my experience, implementing partners have found it useful to go through this process of making interventions and outcomes concrete; often, doing so for an experimental evaluation was the first time they had to think so precisely about either. It is a good disciplining device, helping to make clear what is really at stake.
Of course, these two factors should be weighed against some of the limitations of experiments, which mix statistical and sociological considerations. Experiments face timescale challenges: differences in treatment across groups can often be sustained for only so long, whether for ethical reasons or because of program cycles. They also face spatial-scale challenges: it can be impractical to develop well-powered experiments for macro-level institutions or for programs that cover large areas. Finally, there is the logistical complexity of experiments, given all the up-front decisions they require. (I do not count external validity as a distinct challenge for experiments, since it is an issue that all research typically faces.)
Nonetheless, it is useful to have these ideas articulated, as they show how experimentation is about a lot more than balance on unobservables.