Philosophical foundations for design-based inference

“[Here are] two questions about ravens:

  • The general raven question: What is the proportion of blackness among ravens?
  • The specific raven question: Is it the case that 100 percent of ravens are black?

Consider a particular observation of a white shoe. Does it tell us anything about the raven color? It depends on what procedure the observation was part of. If the white shoe was encountered as part of a random sample of nonblack things, then it is evidence. It is just one data point, but it is a nonblack thing that turned out not to be a raven. It is part of a sample that we can use to answer the specific question (though not the general question), and work out whether there are nonblack ravens. But if the very same white shoe is encountered in a sample of nonravens, it tells us nothing. The observation is now part of a procedure that cannot answer either question.

The same is true with observations of black ravens. If we see a black raven in a random sample of ravens, it is informative. It is just one data point, but it is part of a sample that can answer our questions. But the same black raven tells us nothing about our two raven questions if it is encountered in a sample of black things; there is no way to use such a sample to answer either question. The role of procedures is fundamental; an observation is only evidence if it is embedded in the right kind of procedure.”
(Godfrey-Smith 2003, pp. 215-216).

This passage on the “procedural naturalism” view of science is from Theory and Reality, Peter Godfrey-Smith’s book-length survey of current debates in the philosophy of science: amazon.

When you write a pre-analysis plan, this is how you should be thinking: spell out the procedure in which your observations will be embedded, and relate that procedure to the theoretical propositions you want to test (the propositions are what the two raven questions are standing in for).
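
To make the procedural point concrete, here is a minimal Python sketch (my own illustration, not Godfrey-Smith’s; the population sizes and blackness rates are made-up assumptions) of how the same observation is or is not evidence depending on the procedure it is embedded in:

```python
import random

random.seed(42)

# A made-up world: 1,000 ravens (98% black, so the answer to the specific
# question is "no") and 99,000 non-ravens (10% black: crows, shoes, ...).
world = ([("raven", random.random() < 0.98) for _ in range(1_000)] +
         [("other", random.random() < 0.10) for _ in range(99_000)])

# Procedure A: a random sample of ravens directly estimates
# P(black | raven), answering the general raven question.
ravens = [obj for obj in world if obj[0] == "raven"]
sample_a = random.sample(ravens, 200)
print("P(black | raven):",
      sum(is_black for _, is_black in sample_a) / len(sample_a))

# Procedure B: a random sample of black things estimates P(raven | black),
# which answers neither raven question -- the very same black raven is
# uninformative when encountered through this procedure.
black_things = [obj for obj in world if obj[1]]
sample_b = random.sample(black_things, 200)
print("P(raven | black):",
      sum(kind == "raven" for kind, _ in sample_b) / len(sample_b))

# Procedure C: a random sample of nonblack things bears on the specific
# question: any raven found here is a counterexample to "all ravens are
# black." One white shoe is just one (weakly informative) data point from
# this procedure; many draws are needed to find the rare nonblack ravens.
nonblack = [obj for obj in world if not obj[1]]
sample_c = random.sample(nonblack, 200)
print("ravens among nonblack sample:",
      sum(kind == "raven" for kind, _ in sample_c))
```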


Thoughts on “structure” and identification

See this post at A Fine Theorem and the discussion in the comments: link.

Structural modeling and identification can be combined in many ways. Randomization (and its analogues) can non-parametrically identify ATEs, LATEs, and other parameters that can be constructed using only the marginal potential outcome distributions. But as, e.g., Heckman et al. (1997; link) have shown, there are pretty strict limits to what randomization can do to identify parameters of the joint counterfactual distribution. Behavioral assumptions, the basis of structural models, “fill in” the information needed to proceed with estimation tasks that require more than the marginal potential outcome distributions. Along similar lines, Chetty (2009; link) has shown how behavioral assumptions can motivate interpreting non-parametrically identified parameters as “sufficient statistics” for judging welfare effects (or, at least, for bounding such effects). The general principle behind all these combinations is that models (“structure”) fill in for what randomization cannot identify non-parametrically (that is, “on its own”). An issue in the discussion linked above (especially in the comments) is whether and when it is okay to work only with what is non-parametrically identified.
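
Here is a minimal sketch of the Heckman et al. point (my own illustration with made-up normal outcomes, not their example): two “worlds” share identical marginal potential-outcome distributions, so any experiment sees the same data and the same ATE, yet they disagree about joint quantities like P(Y1 > Y0):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

y0 = rng.normal(0.0, 1.0, n)         # control potential outcomes
y1_a = y0 + rng.normal(1.0, 1.0, n)  # world A: gains independent of baseline

# World B: rank invariance. Reassign the same Y1 values so that each unit's
# Y1 rank matches its Y0 rank. This changes the joint distribution of
# (Y0, Y1) while leaving both marginals exactly as in world A.
y1_b = np.sort(y1_a)[np.argsort(np.argsort(y0))]

for label, y1 in [("A", y1_a), ("B", y1_b)]:
    print(f"world {label}: ATE = {(y1 - y0).mean():.3f}, "
          f"P(Y1 > Y0) = {(y1 > y0).mean():.3f}")

# Both worlds report the same ATE (a function of the marginals alone), but
# the share who benefit differs; no experiment distinguishes A from B
# without a behavioral assumption to "fill in" the joint distribution.
```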

Perhaps a key source of the tension in “randomistas versus structuralists” debates is a difference of opinion over where we should draw the line between acceptable and unacceptable uses of structure to “fill in.” Even randomista papers sometimes apply bits of structure to decompose (L)ATEs and link results to theoretical claims about behavioral mechanisms. Here is a very bare-bones example from Duflo and Saez (2003): link. So the debates are not black versus white. There is probably less controversy over the suggestion that we shouldn’t use structure to identify parameters that could in principle be identified with an experiment or natural experiment. E.g., introducing structure merely to identify a LATE (selection models, anyone…) probably rubs a lot of people on both sides of the “debate” the wrong way these days. (And even this would be seen as a step above completely hand-wavy identification strategies like plopping an ad hoc array of covariates into a regression or matching algorithm…)


Summer reading: “Reinventing the Bazaar” by McMillan

For a market to function well, [1] you must be able to trust most of the people most of the time [to live up to contractual obligations]; [2] you must be secure from having your property expropriated; [3] information about what is available where at what quality must flow smoothly; [4] any side effects on third parties must be curtailed; and [5] competition must be at work.

So concludes John McMillan in his magisterial and highly engaging 2002 book on institutions and markets, Reinventing the Bazaar: A Natural History of Markets (amazon). McMillan provides fantastic examples from across time and around the world of how formal and informal institutions have served to meet these five conditions. Examples range from produce vendors in the Makola market in Accra to bidders for public construction contracts in Tokyo.

I was reading this while traveling through the DR Congo over the past two weeks. It helped open my eyes to the various third-party roles that state, armed-group, and traditional elites play in market exchange there.


“Random routes” and other methods for sampling households in the field

Himelein et al. have a draft working paper (link) on methods for household sampling in the field when administrative lists of households are unavailable and full on-site enumeration is not feasible. The paper covers various “random route”/“random walk” methods as well as methods that use satellite data. Some choice tidbits:

  • On using satellite maps to construct a frame: “Based on the experience mapping the three PSUs used in the paper, it takes about one minute per household to construct an outline. If the PSUs contain approximately 250 structures (the ones used here contain 68, 309, and 353 structures, respectively), mapping the 106 PSUs selected for the full Mogadishu High Frequency Survey would have required more than 50 work days.” Yikes! (That is 106 PSUs × 250 structures × 1 minute ≈ 26,500 minutes, or roughly 55 eight-hour days.) Of course they probably could have cut this time down by sampling subclusters within the PSUs and enumerating only those. Nonetheless, the one-minute-per-household estimate is a useful rule of thumb.
  • They define the “Mecca method” as choosing a random set of GPS locations in an area and then walking in a fixed direction (e.g., the direction of Mecca, which almost everyone in Mogadishu knows) until you hit an eligible structure. The method amounts to a form of probability proportional to size (PPS) sampling, where “size” here is the area on the ground that allows an unobstructed path to the structure. That area may not be easy to measure, although the authors propose approximating the PPS weights using the distance between the selected household and the next household along the traveled line. It is also possible that some random points induce paths that never come upon an eligible structure, which would create field complications, particularly in non-urban settings where domicile layouts may be sparse. (A toy sketch of the PPS logic follows this list.)
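
To fix ideas, here is a minimal one-dimensional sketch of the Mecca method’s PPS logic. This is my own simplification, not the authors’ simulation: the positions, the made-up income–gap gradient, and the function names are all illustrative assumptions.

```python
import random

random.seed(7)

# Toy "village": 60 structures on a line, with uneven gaps between them.
positions = sorted(random.uniform(0, 100) for _ in range(60))
gaps = [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]
gap_of = dict(zip(positions, gaps))

# Made-up assumption: households in sparse areas (big gaps) are poorer,
# so unweighted Mecca-style sampling will be biased.
income_of = {x: 50 - g for x, g in gap_of.items()}

def mecca_draw():
    """Drop a random point and 'walk' right to the first structure hit.

    A structure is hit iff the point lands in the gap behind it, so its
    inclusion probability is proportional to that gap (the PPS "size").
    Points past the last structure hit nothing -- the field complication
    noted above.
    """
    point = random.uniform(0, 100)
    hits = [x for x in positions if x >= point]
    return hits[0] if hits else None

draws = [x for x in (mecca_draw() for _ in range(20_000)) if x is not None]

# Unweighted mean: over-represents big-gap (poorer) structures, so biased.
naive = sum(income_of[x] for x in draws) / len(draws)

# Approximate PPS correction: weight each draw by 1/gap, i.e., the inverse
# of its (approximate) inclusion probability -- the gap plays the role of
# the authors' proxy, distance to the next household along the travel line.
weighted = (sum(income_of[x] / gap_of[x] for x in draws) /
            sum(1 / gap_of[x] for x in draws))

true_mean = sum(income_of.values()) / len(income_of)
print(f"true {true_mean:.2f}  unweighted {naive:.2f}  weighted {weighted:.2f}")
```

In this toy world the unweighted mean understates income while the inverse-gap weights recover it, which is the sense in which the approximate PPS weights “fix” the method.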

The authors take images of domicile patterns in Mogadishu, plus some information on the distributions of consumption variables, to construct simulations. They use the simulations to evaluate satellite-based full enumeration, field listing within PSU segments, interviewing within GPS-defined grid squares, the Mecca method, and the Afrobarometer “random walk” approach. No surprise that satellite-based full enumeration was the least biased, segmentation next, and the Mecca method with true and approximate PPS weights third and fourth; all four of these were essentially unbiased, though. The grid, random walk, and unweighted Mecca methods were quite biased. Such bias needs to be weighed against costs and the ability to validate. Satellite full enumeration is costly, but one can validate it. The segment method is also costly and rather hard to enumerate. The grid method fares poorly on both counts. The Mecca method with true PPS weights is somewhat costly, but with approximate PPS weights it does quite well on both counts. The random walk is cheap but hard to validate. Again, I would say that some of these results may be particular to the setting (relatively dense settlement in an urban area), but the insights are certainly useful.

I found this paper via David Evans’s fantastic summary of the recently concluded Annual Bank Conference on Confronting Fragility and Conflict in Africa: link.
