Some thoughts on blinding and permutation methods

Here is an interesting Twitter thread on blinding and permutation methods:

What Adam is proposing here is related to the “mock analysis” that Humphreys et al. discuss in their 2013 paper on fishing: link to preprint

I have also recently discussed this idea with Pieter Serneels and Andrew Zeitlin, who are writing on this topic (look out for their work; I will update this post with a link when it is available).

Generally I think simulation and “mock” analysis are great for checking power and other inferential characteristics. The DeclareDesign project is an attempt to systematize this approach: link.
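
For concreteness, here is a bare-bones Python sketch of what such a mock power analysis can look like. The sample size, effect size, outcome model, and choice of a t-test are all illustrative assumptions of mine, not anything taken from the projects linked above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_power(n=100, effect=0.5, sims=2000, alpha=0.05):
    """Estimate power by repeatedly simulating the design and testing."""
    rejections = 0
    for _ in range(sims):
        # Randomly assign half the units to treatment.
        treat = rng.permutation(np.repeat([0, 1], n // 2))
        # Assumed outcome model: standard normal noise plus a constant effect.
        y = rng.normal(size=n) + effect * treat
        _, p = stats.ttest_ind(y[treat == 1], y[treat == 0])
        rejections += p < alpha
    return rejections / sims

print(simulate_power())  # share of simulations in which the null is rejected
```

The same loop can be pointed at any estimator or test statistic, which is the sense in which simulation lets you check power and other inferential characteristics before touching real data.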

That said, I think the statistics behind the blinding + permutation approach are a bit more subtle than what Adam’s post suggests. My concerns can be illustrated with a toy example. Consider the following research design:

  • We have only two units.
  • We run an experiment that randomly (fair coin flip) assigns one unit to treatment and the other to control.
  • Control potential outcomes are (0, 5) (that is, for the first unit the outcome is 0 under control and for the second unit the outcome is 5 under control).
  • There are two possible treatments that could be assigned. Treatment A has no effect, and so the potential outcomes under treatment A are (0, 5). Treatment B generates an effect such that under treatment B, potential outcomes are (5, 7) (so for the first unit, the effect is 5, and for the second it is 2).
  • Given this, if treatment A were applied, the experiment would always generate the data (0, 5). If treatment B were applied, the experiment would generate either (0, 7) or (5, 5) as data.
  • Now, let us suppose that we, the analysts, do not know which of treatment A or B was applied, nor do we know all of the potential outcomes. All we know is that there are two units, that one of them was assigned to treatment, and the outcome data. We blind ourselves to which of the two units was assigned to treatment. Ultimately we are interested in learning whether treatment A or B was applied, but at the moment we want to operate in a manner that is blind to treatment assignment so as to figure out a good way to test. (The sketch after this list enumerates the data this design can produce.)
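
To make the design concrete, here is a small Python sketch that simply enumerates the bullet points above in code form (the variable names are mine):

```python
# Control potential outcomes, and treated potential outcomes under A and B.
y_control = (0, 5)
y_treated = {"A": (0, 5), "B": (5, 7)}

for treatment in ("A", "B"):
    datasets = set()
    for treated_unit in (0, 1):  # fair coin flip: either unit can be treated
        data = tuple(
            y_treated[treatment][i] if i == treated_unit else y_control[i]
            for i in (0, 1)
        )
        datasets.add(data)
    print(f"Treatment {treatment}: possible data {sorted(datasets)}")

# Treatment A: possible data [(0, 5)]
# Treatment B: possible data [(0, 7), (5, 5)]
```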

This toy example captures the situation that is relevant in Adam’s illustration of the blinding + permute method, and it makes the problem straightforward to see. If treatment B was in fact applied, the resulting data will not allow us to characterize the null distribution (that is, the distribution that would have arisen had A been applied): permuting the labels on (0, 7) or (5, 5) tells us nothing about the data (0, 5) that treatment A generates. Moreover, the resulting data could either over- or under-state the variance of outcomes under the null. That being the case, it seems problematic to adhere too closely to what one learns under blinding + permutation. Rather, I would propose using it only to get “ballpark” ideas of how different estimation strategies perform; for more refinement, I think you’d have to either use analytical results or simulate data under different assumptions about the potential outcomes.
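
To see the variance problem numerically, take the difference between the two outcomes under the two possible labelings as an illustrative test statistic (my choice, purely for exposition), and compare its spread across the datasets each treatment can produce:

```python
import numpy as np

def permutation_diffs(data):
    """All values of (treated minus control) over the two possible labelings."""
    a, b = data
    return [a - b, b - a]

# The true null distribution comes from the data A would generate: (0, 5).
null = permutation_diffs((0, 5))
print("null:", null, "variance:", np.var(null))

# But if B was in fact applied, we observe one of these blinded datasets:
for data in [(0, 7), (5, 5)]:
    diffs = permutation_diffs(data)
    print(f"observed {data}: permuted diffs {diffs}, variance {np.var(diffs)}")
```

Under the true null data (0, 5), the permuted statistic has variance 25; but if B was applied, the blinded permutation yields variance 49 (from data (0, 7)) or 0 (from data (5, 5)), landing on either side of the truth.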
