Thoughts on “structure” and identification

See this post at A Fine Theorem and the discussion in the comments: link.

Structural modeling and design-based identification can be combined in lots of ways. Randomization (and its analogues) can non-parametrically identify ATEs, LATEs, and other quantities that can be constructed using only the marginal potential outcome distributions. But as, e.g., Heckman et al. (1997; link) have shown, there are pretty strict limits to what randomization can do to identify parameters of the joint counterfactual distribution. Behavioral assumptions, the basis of structural models, “fill in” the information needed to proceed with estimation tasks that require more than the marginal potential outcome distributions. Along similar lines, Chetty (2009; link) has shown how behavioral assumptions can motivate interpreting non-parametrically identified parameters as “sufficient statistics” for judging welfare effects (or, at least, for bounding such effects). The general principle behind all these combinations is that models (“structure”) fill in for what randomization cannot identify non-parametrically (that is, “on its own”). An issue in the discussion linked above (especially in the comments) is whether and when it is okay to work only with what is non-parametrically identified.
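
To make the marginal-versus-joint point concrete, here is a minimal simulation sketch (it is not taken from any of the papers cited; the normal distributions and parameters are made up purely for illustration). Two data generating processes with identical marginal potential outcome distributions imply the same ATE but very different joint quantities:

```python
import numpy as np

# Illustrative only: same marginals for Y(1) ~ N(1, 1) and Y(0) ~ N(0, 1),
# but two different (unidentified) dependence structures between them.
rng = np.random.default_rng(0)
n = 1_000_000
u = rng.normal(size=n)

# DGP A: potential outcomes perfectly positively dependent.
y1_a, y0_a = 1 + u, u
# DGP B: potential outcomes independent.
y1_b, y0_b = 1 + rng.normal(size=n), rng.normal(size=n)

for label, y1, y0 in [("dependent", y1_a, y0_a), ("independent", y1_b, y0_b)]:
    ate = (y1 - y0).mean()             # identified by randomization (marginals only)
    var_te = (y1 - y0).var()           # NOT identified from the marginals
    share_benefit = (y1 > y0).mean()   # NOT identified from the marginals
    print(f"{label}: ATE={ate:.2f}, Var(TE)={var_te:.2f}, P(Y1>Y0)={share_benefit:.2f}")
```

Both processes give an ATE of about 1, but the variance of the unit-level treatment effect and the share of units who benefit differ sharply across them; that is exactly the kind of information behavioral assumptions have to supply.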

Perhaps a key source of the tension in “randomistas versus structuralists” debates is a difference of opinion over where we should draw the line between acceptable and unacceptable uses of structure to “fill in.” Even randomista papers sometimes apply bits of structure to decompose (L)ATEs and link results to theoretical claims about behavioral mechanisms. Here is a very barebones example from Duflo and Saez (2003): link (and see the sketch below for a stripped-down illustration of this kind of light structure). So the debates are not black versus white. There is probably less controversy over the suggestion that we shouldn’t use structure to identify parameters that could in principle be identified with an experiment or natural experiment. E.g., introducing structure merely to identify a LATE (selection models, anyone…) probably rubs a lot of people on both sides of the “debate” the wrong way these days. (And even this would be seen as a step above completely hand-wavy identification strategies like plopping an ad hoc array of covariates into a regression or matching algorithm…)
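
Here is a minimal sketch of the lightest kind of structure alluded to above: rescaling an intention-to-treat (ITT) effect by take-up to get a LATE via the Wald estimator. This is a generic illustration, not Duflo and Saez’s actual decomposition, and the compliance rate and effect size are invented for the example:

```python
import numpy as np

# Illustrative only: randomized encouragement Z, endogenous take-up D,
# outcome Y. The rescaling from ITT to LATE is where the behavioral
# assumptions (exclusion and monotonicity) do their work.
rng = np.random.default_rng(1)
n = 200_000

z = rng.integers(0, 2, size=n)          # randomized encouragement
complier = rng.random(size=n) < 0.4     # 40% compliers (made-up rate)
d = np.where(complier, z, 0)            # take-up: only compliers respond to Z
y = 2.0 * d + rng.normal(size=n)        # treatment effect of 2.0 for the treated

itt = y[z == 1].mean() - y[z == 0].mean()            # reduced-form effect of Z
first_stage = d[z == 1].mean() - d[z == 0].mean()    # effect of Z on take-up
late = itt / first_stage                             # Wald estimator, roughly 2.0
print(f"ITT={itt:.2f}, first stage={first_stage:.2f}, LATE={late:.2f}")
```

Randomization alone delivers the ITT and the first stage; dividing one by the other to recover a LATE is already a (modest) behavioral claim.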


5 Replies to “Thoughts on “structure” and identification”

  1. This was very interesting. Do you have any more suggestions on things to read on this topic? I have Wolpin’s book on my list but haven’t read it yet.

    How do you think prediction relates to this as far as model validation is concerned? My thinking has been that, at least in parts of our field, theory is not reliable enough (in terms of predictive validity specifically, though in other respects too) to support the claim that the estimation of any structural parameter is credible. I have not read enough economics to know whether this is the case there as well, but from my exposure to statisticians who have worked with economists, it would seem that it is.

    What the linked post and its comments say about the necessity of theory, both for interpreting treatment effects and for identifying particular parameters of substantive interest, seems quite right. It just seems less useful in areas without theory that people believe is valid (and, you know, evidence). It also seems strange that they do not define a model evaluation criterion. Implicitly, the criterion for the AIDS paper (which of course I have not read) seems to be that some parts of the fit did not comport with intuitions.

    Of course this model validation stuff brings up what sorts of things should be predictable and how predictable they should be, which seems a rather interesting question. I have not encountered much writing on this topic, though perhaps it follows from some basic theory I have missed.

  2. As for readings, yes, Wolpin is a great reference, and the 2000 JEL piece by Rosenzweig and Wolpin mentioned in the comments to the A Fine Theorem post is also a good start. JEL 48(2) also contains an important series of papers, each with useful references.

    I am not quite sure how methodologists on either side have engaged issues of prediction or model evaluation, to be honest. For structurally minded methodologists, I think part of the appeal is the ability to extrapolate to as-yet-unobserved policy interventions on the basis of the model. Whether the model is any good depends in part on whether the data give rise to anomalies that run against the model’s assumptions, but then, ultimately, on how it compares to other models out there (as discussed in the comments to the A Fine Theorem post: “it takes a model to beat a model”). For empiricists, model evaluation is sort of irrelevant because the “model” (or rather, the estimator) is typically implied by the “design.” There might be efficiency gains to be had, in which case the evaluation criterion might be along the lines of “among the class of consistent estimators, which has the most favorable bias-efficiency tradeoff?” Extrapolation is also a goal for empiricists, but it is done in a manner that is more “agnostic” about the data generating process (see Imbens’s paper in JEL 48(2) and also my paper with Dehejia and Pop-Eleches: http://www.nber.org/papers/w21459).

    I suppose in cases where theory “doesn’t serve as a reliable guide,” one could go in either of two directions (replicating the structuralist/empiricist divide): start building models and hope you can get to a point where they do serve as a guide, or pursue a more “model free,” reduced-form analysis.

  3. Thanks for the references! Added to my infinite to-do list.

    Certainly I agree about the design implying the model, and that the appeal of the structural approach is model-based extrapolation. I am not sure precisely what you mean with respect to structuralist model evaluation. Are you saying that the models are (implicitly or explicitly) calibrated (i.e., so that the observed statistic(s) of interest are likely under the model) in a way that minimizes some quantity other than excess risk? And that this is used to compare against competing models? I see this going on implicitly in our field, where models seem to be compared on their “plausibility.” I have trouble believing model-based inferences that rely neither on a design nor on a (in all senses) valid theoretical model when they do not have small excess risk in absolute terms (where what constitutes “small” depends on the costs in the problem at hand). Am I missing something here?

  4. Well, this is me trying to infer what structuralist model evaluation might involve without having a deep understanding myself. It would be good to ask a bona fide structuralist about it! Nonetheless, to try to answer your question based on what I understand:

    First, I am not sure what a structuralist would say when presented with two models, one with higher risk than the other (based, say, on some function of prediction error), where the higher-risk model operates on assumptions that are more consistent with the data (perhaps assumptions evaluated with data from outside the current problem). Perhaps there is no consensus on this particular point.

    Second, what I do think is a common perception among structuralists is that “black box” methods designed specifically to minimize empirical risk, but that do not build up from behavioral rudiments, are generally not okay. This kind of “predicting without knowing,” I would think, would rub most structuralists the wrong way. But again, this is what I infer from a pretty casual engagement (at this point) with the structural literature.

  5. Interesting. My brother just started an Econ PhD, so perhaps he will know in a few years. An interesting (epistemological?) question nonetheless. I am glad you give that reason against black box models! Interpreting them is an interesting topic on which not enough work has been done, and software for doing so is either non-existent or of limited usefulness.
