Randomized field experiment on electoral security in Liberia: slides from UBC talk

Here is a link to slides from my talk yesterday at the University of British Columbia (UBC) political science department: link. The talk was on a randomized field experiment that Eric Mvukiyehe and I recently completed in Liberia. The experiment tested the effectiveness of curriculum-based and security-institution-based strategies for preventing intimidation and violence during the 2011 elections. The endline data are still coming in, so the presentation focused on the theoretical motivation and design, with only a light discussion of preliminary results. We hope to have a project report out in the coming weeks, and then papers over the next year or so. Updates will be posted when those are out.

The talk was part of an excellent series that the UBC political science department is hosting on “Experiments in Development.” Here is the full roster of speakers: link. In a few weeks, the department will also host the semi-annual Experiments in Governance and Politics (EGAP) meeting: link.


Slides and update from NYU-CESS talk on causal effects under interference (spillovers, externalities, etc.)

Here is a link to the slides that Peter presented yesterday at the NYU-CESS conference on experiments in political science: PDF. (The conference site is here: link.) Here is a slightly updated version of the paper that includes a minor correction: PDF. Comments or additional corrections are welcome.

A few interesting issues came up during the Q&A that are worth some more discussion and consideration:

  • Uncertainty over exposure models: The exposure model is formalized as $latex f(\mathbf{z}, \theta_i)$. As discussed, the properties of the $latex \mathbf{z}$’s are known by design. This leaves the potential for uncertainty about either $latex \theta_i$ or $latex f(.)$. Uncertainty about $latex \theta_i$ is basically a measurement problem, and so we can put a probability distribution on values of $latex \theta_i$ based on all the available data and then integrate over it. E.g., if $latex \theta_i$ is a row in a network adjacency matrix, then we could use the available data to create a set of imputed adjacency matrices, and then integrate over those imputations (a sketch of this appears after the list). Uncertainty over $latex f(.)$ is different: when you change $latex f(.)$, you change the set of causal estimands. In this case, the issue is one of model selection. Because our framework allows for arbitrarily complex exposure models, one could apply the usual model selection principles to work from a complex model toward a more parsimonious nested model.
  • Reciprocal effects and dynamics in indirect exposure: A question came up as to how we would handle the possibility that effects could initially transmit from A to B, then transmit back from B to A, and so forth in a dynamic, reciprocal way. A thought that comes immediately to mind is that this too may be best characterized as a measurement problem: for example, are you measuring outcomes after all such dynamics have reached a steady state, or at some point mid-way toward steady state? Causal effects could reasonably be defined in terms of either, and estimated using the methods proposed in the paper; one should simply be careful that the outcomes being measured are in fact the outcomes one thinks one is measuring. (The second sketch below illustrates the distinction.)
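
To make the imputation idea in the first bullet concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the “any treated neighbor” exposure mapping, the Bernoulli assignment design, the tie-recording and imputation probabilities, and the outcome model are stand-ins, not the specifications in the paper. The sketch draws a set of imputed adjacency matrices, computes a Horvitz-Thompson estimate of the mean outcome under indirect exposure for each, and integrates over the imputations by averaging:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200             # units
p = 0.5             # Bernoulli treatment assignment probability
n_imputations = 50  # imputed adjacency matrices to integrate over

# Hypothetical directed network; each true tie is recorded with
# probability 0.9, so the observed matrix understates theta_i.
true_A = (rng.random((n, n)) < 0.05).astype(int)
np.fill_diagonal(true_A, 0)
observed_A = true_A * (rng.random((n, n)) < 0.9)

z = rng.binomial(1, p, size=n)  # realized treatment assignment

def exposure(z, A):
    """Illustrative exposure mapping f(z, theta_i): 1 if any of unit i's
    neighbors (nonzero entries in row i of A) is treated."""
    return (A @ z > 0).astype(int)

# Simulated outcomes with a +1 effect of indirect exposure.
y = rng.normal(size=n) + 1.0 * exposure(z, true_A)

def ht_mean(y, z, A):
    """Horvitz-Thompson estimate of the mean outcome under indirect
    exposure. Under Bernoulli(p) assignment, pi_i = 1 - (1 - p)^deg_i
    is the probability that unit i has at least one treated neighbor,
    and it is known exactly once theta_i (row i of A) is fixed."""
    expo = exposure(z, A)
    deg = A.sum(axis=1)
    pi = 1.0 - (1.0 - p) ** deg
    ok = pi > 0  # isolates can never be indirectly exposed
    return np.sum(y[ok] * expo[ok] / pi[ok]) / ok.sum()

# Integrate over uncertainty in theta_i: draw imputed adjacency
# matrices (each unrecorded tie imputed as present with an assumed
# probability), estimate on each, and average across imputations.
estimates = []
for _ in range(n_imputations):
    missed = (observed_A == 0) & (rng.random((n, n)) < 0.005)
    imputed = np.maximum(observed_A, missed.astype(int))
    np.fill_diagonal(imputed, 0)
    estimates.append(ht_mean(y, z, imputed))

print("estimate averaged over imputations:", np.mean(estimates))
print("between-imputation spread:", np.std(estimates, ddof=1))
```

In a real application the imputation model for missing ties would itself be fit to the measured network data, and the between-imputation spread would enter the combined variance in the usual multiple-imputation (Rubin's rules) fashion.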

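To illustrate the second bullet, here is a similarly hypothetical sketch of reciprocal dynamics: a linear-in-means feedback process in which each unit's outcome responds to its own treatment and to its neighbors' previous-period outcomes, passing effects back and forth until the process settles at a fixed point. The network, assignment design, and feedback parameter are again invented; the point is only that the outcome recorded under indirect exposure depends on when it is measured:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_tie, p_treat = 200, 0.03, 0.5

# Hypothetical directed network and Bernoulli treatment assignment.
A = (rng.random((n, n)) < p_tie).astype(float)
np.fill_diagonal(A, 0)
deg = A.sum(axis=1, keepdims=True)
W = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)  # row-normalized
z = rng.binomial(1, p_treat, size=n)

beta, rho = 1.0, 0.4  # direct effect; feedback strength (rho < 1 converges)

# Reciprocal dynamics: each period, a unit's outcome responds to its own
# treatment and to its neighbors' previous-period outcomes, so effects
# pass from A to B, back to A, and so on until reaching a fixed point.
y = np.zeros(n)
exposed_controls = (z == 0) & (A @ z > 0)
for t in range(1, 51):
    y = beta * z + rho * (W @ y)
    if t in (1, 3, 50):
        print(f"t={t:2d}: mean outcome among indirectly exposed controls"
              f" = {y[exposed_controls].mean():.3f}")

# The same quantity is 0 right after the first round and strictly larger
# once the back-and-forth has settled: which one you estimate depends on
# when outcomes are measured.
```

Either measurement timing defines a legitimate estimand; the sketch just makes vivid that they are different estimands.
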
Happy to hear more questions or comments. Upcoming presentations of the work will be at the Princeton methodology seminar and the EGAP conference in Vancouver.
