Here is a link to the slides that Peter presented yesterday at the NYU-CESS conference on experiments in political science: PDF. (The link to the conference site is here: link.) Here is a very slightly updated version of the paper that includes a minor correction: PDF. Comments or additional corrections welcome.
A few interesting issues came up during the Q&A that are worth some more discussion and consideration:
- Uncertainty over exposure models: The exposure model is formalized as $latex f(\mathbf{z}, \theta_i)$. As discussed, the properties of the $latex \mathbf{z}$'s are known by design. This leaves the potential for uncertainty about either $latex \theta_i$ or $latex f(.)$. Uncertainty about $latex \theta_i$ is basically a measurement problem, and so we can put a probability distribution on values of $latex \theta_i$ based on all the available data and then integrate over it. E.g., if $latex \theta_i$ is a row in a network adjacency matrix, then we could use the available data to create a set of imputed adjacency matrices, and then integrate over those imputations. Uncertainty over $latex f(.)$ is different: when you change $latex f(.)$ you are changing the set of causal estimands. In this case, the issue is one of model selection. Because our framework allows for arbitrarily complex exposure models, one could apply the usual model selection principles to work from a complex model to a more parsimonious nested model.
- Reciprocal effects and dynamics in indirect exposure: A question came up as to how we would handle the possibility that effects could initially transmit from A to B, but then transmit back from B to A, and so forth in a dynamic and reciprocal way. A thought that comes immediately to mind is that this too may be best characterized as a measurement problem — e.g., are you measuring outcomes after all such dynamics have reached a steady state, or are you measuring outcomes at some point midway toward steady state? Causal effects could reasonably be defined in either of these terms and estimated using the methods proposed in the paper; one should simply be careful that the outcomes being measured are the outcomes one thinks one is measuring.
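To make the first point concrete, here is a minimal sketch of integrating over imputed adjacency matrices. Everything here is hypothetical and not from the paper: the exposure model (indirect exposure = any treated neighbor), the toy difference-in-means estimator, and the symmetric edge-flip measurement-error model are all placeholder assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def exposure(z, theta_row):
    # Hypothetical f(z, theta_i): unit i is indirectly exposed (1)
    # if it has at least one treated neighbor, else 0.
    return float((z * theta_row).sum() > 0)

def estimate_effect(z, y, adjacency):
    # Toy estimator: among control units, difference in mean outcomes
    # between indirectly exposed and unexposed units.
    n = len(z)
    exposed = np.array([exposure(z, adjacency[i]) for i in range(n)])
    ctrl = z == 0
    if exposed[ctrl].sum() == 0 or (1 - exposed[ctrl]).sum() == 0:
        return np.nan  # no contrast available in this imputation
    return (y[ctrl][exposed[ctrl] == 1].mean()
            - y[ctrl][exposed[ctrl] == 0].mean())

# Simulated "true" network, randomized treatment, and outcomes
n = 200
true_adj = (rng.random((n, n)) < 0.01).astype(int)
np.fill_diagonal(true_adj, 0)
z = rng.integers(0, 2, n)
y = np.array([exposure(z, true_adj[i]) for i in range(n)]) + rng.normal(0, 1, n)

# Multiple imputation: draw adjacency matrices from a (hypothetical)
# measurement-error model, estimate on each, then average.
imputations = []
for _ in range(20):
    flips = rng.random((n, n)) < 0.002  # assumed edge misclassification rate
    imputations.append(np.where(flips, 1 - true_adj, true_adj))

estimates = [estimate_effect(z, y, A) for A in imputations]
point = np.nanmean(estimates)  # point estimate integrated over imputations
```

In a real application one would also combine the within- and between-imputation variances (Rubin-style) rather than just averaging point estimates, but the structure — estimate per imputed network, then pool — is the same.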
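The steady-state-versus-midway distinction in the second point can also be sketched. The linear-in-means feedback rule below is an assumed toy dynamic (not the paper's model): each period, a unit's outcome is its direct treatment effect plus a fraction of its neighbors' average outcome, so effects bounce back and forth until the process converges.

```python
import numpy as np

def simulate(adjacency, z, tau=1.0, alpha=0.3, steps=100):
    # Hypothetical reciprocal dynamic: y_t = tau*z + alpha * W @ y_{t-1},
    # where W is the row-normalized adjacency matrix. With |alpha| < 1
    # this contracts to a steady state; stopping early gives a
    # "midway" outcome instead.
    deg = adjacency.sum(axis=1, keepdims=True)
    W = adjacency / np.where(deg == 0, 1, deg)  # row-normalize, avoid 0/0
    y = tau * z.astype(float)                   # period 0: direct effects only
    path = [y.copy()]
    for _ in range(steps):
        y = tau * z + alpha * (W @ y)           # one round of feedback
        path.append(y.copy())
    return path

rng = np.random.default_rng(1)
n = 50
A = (rng.random((n, n)) < 0.1).astype(int)
np.fill_diagonal(A, 0)
z = rng.integers(0, 2, n)

path = simulate(A, z)
midway, steady = path[1], path[-1]
# "midway" reflects a single round of spillover; "steady" reflects the
# full reciprocal feedback loop. Both define legitimate estimands — the
# point is that the measurement timing determines which one you get.
```

The contrast between `midway` and `steady` is exactly the measurement-timing issue raised in the Q&A: the estimation methods apply either way, as long as you know which outcome you measured.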
Happy to hear more questions or comments. Upcoming presentations of the work will be at the Princeton methodology seminar and the EGAP conference in Vancouver.