With Stephanie Zonszein, Dean Eckles, and Peter Aronow, we have a new review article on estimating spillover effects with experimental data, with accompanying R package:
At seminars one often hears "what about SUTVA violations?" Don't just wave your hands; instead:
Learn what is identified even with SUTVA violations of unspecified form (e.g., https://arxiv.org/abs/1711.06399).
Estimate the spillover effects; that is what this review piece and accompanying R package are about.
As per Athey et al. (2019, Annals of Statistics), random forests can be interpreted as kernel estimators. If you haven't seen this before, here is a toy example: link.
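To see the algebra behind the kernel interpretation, here is a minimal numpy sketch. It uses a deliberately crude "forest" of random depth-one stumps rather than the honest forests Athey et al. analyze (the data, thresholds, and function names are all illustrative), but the identity it demonstrates is the same: the forest prediction at a point is a weighted average of all training outcomes, with weights determined by how often each training point shares a leaf with the query point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: one covariate, noisy linear outcome
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + rng.normal(0, 1, size=50)

# A crude "forest": B depth-1 trees, each splitting at a random threshold
B = 200
thresholds = rng.uniform(1, 9, size=B)

def leaf_mask(x0, t):
    """Training points falling in the same leaf as x0 under threshold t."""
    return (x <= t) if x0 <= t else (x > t)

def forest_predict(x0):
    """Usual view: average the B per-tree (leaf-mean) predictions."""
    return np.mean([y[leaf_mask(x0, t)].mean() for t in thresholds])

def kernel_predict(x0):
    """Kernel view: ONE weighted average of all training outcomes.
    Point i's weight counts how often i shares a leaf with x0,
    normalized by leaf size; the weights sum to 1."""
    w = np.zeros_like(y)
    for t in thresholds:
        m = leaf_mask(x0, t)
        w[m] += 1.0 / (B * m.sum())
    return np.sum(w * y)
```

The two functions are algebraically identical: averaging leaf means over trees is the same as averaging outcomes with leaf-frequency kernel weights.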
I am doing some work on conformal prediction methods, which allow predictive, regression-based inference under minimal assumptions. Mostly to help myself understand the methods in algorithmic terms, I created the following tutorial: link.
An accessible introduction is the paper by Lei et al. (2017, arXiv), which accompanies the R package conformalInference (GitHub). They demonstrate conformal inference methods for high-dimensional regression and covariate selection.
In the causal inference literature, Chernozhukov et al. (2017, arXiv) use conformal methods for robust inference with synthetic control and related panel methods. Coauthors and I are doing more work in this area.
Chernozhukov et al. (2018, arXiv) also have new work extending conformal inference to time series and other dependent-data settings.
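To convey how simple the basic algorithm is, here is a minimal sketch of split conformal prediction in Python (the tutorial and conformalInference cover more refined variants; the simulated data, OLS model, and 90% level here are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: linear signal plus N(0, 1) noise
n = 1000
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, size=n)

# 1. Split into a training half and a calibration half
x_tr, y_tr = x[: n // 2], y[: n // 2]
x_cal, y_cal = x[n // 2 :], y[n // 2 :]

# 2. Fit any predictive model on the training half (here: OLS)
slope, intercept = np.polyfit(x_tr, y_tr, deg=1)

def predict(z):
    return intercept + slope * z

# 3. Nonconformity scores: absolute residuals on the calibration half
scores = np.abs(y_cal - predict(x_cal))

# 4. Finite-sample conformal quantile for 90% coverage:
#    the ceil((n_cal + 1) * (1 - alpha))-th smallest score
alpha = 0.10
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

def interval(z):
    """90% prediction interval at new point(s) z."""
    return predict(z) - q, predict(z) + q
```

The coverage guarantee requires only exchangeability of the calibration and test points, not that the fitted model is correct; a badly misspecified model simply yields wider intervals.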
Here is an interesting twitter thread on blinding and permutation methods:
What Adam is proposing here is related to the “mock analysis” that Humphreys et al. discuss in their 2013 paper on fishing: link to preprint
I have also recently discussed this idea with Pieter Serneels and Andrew Zeitlin, who are writing on the topic (look out for their work; I will update with a link when it is available).
Generally I think simulation and “mock” analysis are great for checking power and other inferential characteristics. The DeclareDesign project is an attempt to systematize this approach: link.
That said, I think the statistics behind the blinding + permutation approach are a bit more subtle than what Adam's post suggests. My concerns can be expressed with a toy example. Consider the following research design:
- We have only two units.
- We run an experiment that randomly (fair coin flip) assigns one unit to treatment and the other to control.
- Control potential outcomes are (0, 5) (that is, for the first unit the outcome is 0 under control and for the second unit the outcome is 5 under control).
- There are two possible treatments that could be assigned. Treatment A has no effect, and so the potential outcomes under treatment A are (0, 5). Treatment B generates an effect such that under treatment B, potential outcomes are (5, 7) (so for the first unit, the effect is 5, and for the second it is 2).
- That being the case, if treatment A were being applied, the experiment would always generate data (0, 5). If treatment B were being applied, then the experiment would generate either (0, 7) as data, or (5, 5) as data.
- Now, suppose that we, the analysts, do not know whether treatment A or B was applied, nor do we know all of the potential outcomes. All we know is that there are two units, that one was assigned to treatment, and the outcome data. We blind ourselves to which of the two units was assigned to treatment. Ultimately we want to learn whether treatment A or B was applied, but for the moment we want to operate blind to treatment assignment so as to figure out a good way to test.
This toy example captures the situation in Adam's illustration of the blinding + permute method, and it makes the problem easy to see. If treatment B was in fact applied, the resulting data do not allow us to characterize the null distribution (that is, the distribution that would have arisen had A been applied). Indeed, the data can either overstate or understate the variance of outcomes under the null: if the realized data are (5, 5), the blinded permutation distribution is degenerate and understates the null spread, while if they are (0, 7), it overstates it. That being the case, it seems problematic to adhere too closely to what one learns under blinding + permutation. I would propose using it only to get "ballpark" ideas of how different estimation strategies perform; for more refinement, you would have to either use analytical results or simulate data under different assumptions on the potential outcomes.
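The enumeration in the toy example can be checked mechanically. A short Python sketch (the unit indexing and helper name are mine):

```python
import itertools

# Potential outcomes from the toy example
y_control = (0, 5)
y_treat_A = (0, 5)   # treatment A: no effect
y_treat_B = (5, 7)   # treatment B: effects of 5 and 2

def realized_data(y_treat, treated_unit):
    """Observed outcomes when `treated_unit` (0 or 1) is treated."""
    return tuple(y_treat[i] if i == treated_unit else y_control[i]
                 for i in range(2))

# Under treatment A the data are always (0, 5), so the blinded
# permutation differences under the true null are -5 and +5.
# Under treatment B the data are (5, 5) or (0, 7); permuting the
# blinded data gives differences of 0 (degenerate, understating the
# null spread) or +/-7 (overstating it).
for u in (0, 1):
    d = realized_data(y_treat_B, u)
    perm_diffs = sorted(a - b for a, b in itertools.permutations(d, 2))
    print(d, perm_diffs)
```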
(Note: some typos in the notes corrected now.)
Below, I have posted some notes on matrix completion, inspired by this great Twitter thread by Scott Cunningham:
Have a look at Scott's thread first, along with the material he posted. Then the following may be helpful for further deciphering the methods (in formats friendly for online and offline reading):
Update: I had a very useful twitter discussion with @analisereal about the identification conditions behind matrix completion for estimating the ATT. Here is the thread, and I am updating the notes to incorporate these points:
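For readers who want a concrete anchor before diving into the notes, here is a minimal numpy sketch (in Python rather than R, for brevity) of soft-impute, one standard algorithm for nuclear-norm matrix completion; the panel dimensions, penalty `lam`, and missingness pattern are all illustrative choices of mine, not anything from Scott's thread or the notes.

```python
import numpy as np

rng = np.random.default_rng(2)

# A low-rank "panel": N units by T periods, rank 2, with some cells
# missing (e.g., treated unit-periods whose untreated outcomes we
# would want to impute for an ATT estimate)
N, T, rank = 20, 15, 2
L_true = rng.normal(size=(N, rank)) @ rng.normal(size=(rank, T))
observed = rng.uniform(size=(N, T)) < 0.8      # ~80% of cells observed
Y = np.where(observed, L_true, 0.0)

def soft_impute(Y, observed, lam=0.1, n_iter=500):
    """Soft-impute: alternate between (i) filling missing cells with
    the current low-rank estimate and (ii) soft-thresholding the
    singular values, which implements nuclear-norm shrinkage."""
    L = np.zeros_like(Y)
    for _ in range(n_iter):
        filled = np.where(observed, Y, L)      # keep data where observed
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        L = (U * np.maximum(s - lam, 0.0)) @ Vt
    return L

L_hat = soft_impute(Y, observed)
# Average absolute error on the imputed (missing) cells
err = np.abs(L_hat - L_true)[~observed].mean()
```

With a noiseless, exactly low-rank panel like this one, the imputed cells recover the truth closely; with noise and richer missingness patterns (e.g., the block structure created by staggered adoption), the identification conditions from the twitter discussion become the substantive issue.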