Monthly Archives: January 2011

Statistical significance goes before the Supreme Court

The Economist View blog posts a letter from economist Steve Ziliak (link) describing a case due to be argued before the Supreme Court next week on whether “drug manufacturers and other companies [should] be required to report the adverse effect of a product on users, if the effect is not statistically significantly different from zero at the 5% level.” Briefs presented before the court commenting on the case are here (link). The blog post also links to this page, as well as to some of Ziliak’s writing on significance testing.
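The legal question turns on what it means for an adverse effect to be “not statistically significantly different from zero at the 5% level.” As a purely illustrative sketch (the numbers below are hypothetical and not drawn from the case), a two-proportion z-test shows how a sizeable raw difference in adverse-event rates can still fail the conventional 5% threshold:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Standard normal CDF via the error function (standard library only)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical trial: adverse events nearly double in the treated group
# (18/1000 vs. 10/1000), yet the difference is not significant at 5%.
z, p = two_proportion_z_test(18, 1000, 10, 1000)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

Under a strict 5%-significance reporting rule, an effect like this one (p ≈ 0.13) would go unreported despite the near-doubling of the raw event rate, which is the crux of Ziliak’s objection.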


Reading: 7 Properties of Good Models (Gabaix & Laibson, 2008)

This short essay argues that the following criteria should be used to judge whether an analytical economic model is good or not:

  1. parsimony, viz., minimal assumptions and parameters, to reduce risk of overfitting. This would seem to be the essence of modeling, right?
  2. tractability.
  3. conceptual insightfulness, which in the authors’ characterization bears some resemblance to Lakatos’s criterion that a scientific theory should produce “novel facts”.
  4. generalizability.
  5. falsifiability.
  6. empirical consistency.
  7. predictive precision, which is a necessary complement to falsifiability and empirical consistency: a model that makes vague predictions may hold up against the data, but a more useful model might be one that makes sharp predictions that are only slightly off from the data.

The authors acknowledge that these criteria may conflict, forcing trade-offs. Special tensions would seem to arise between parsimony/tractability and falsifiability/empirical consistency/predictive precision.

In their discussion, the authors claim that economic models should not be judged on whether they satisfy optimization axioms. They wish to create space for models that allow a separation between the normative preferences of agents and the actions that they ultimately take—the separation may be due to non-voluntary errors, biases, or emotions. Abandoning optimization axioms means that behavior does not immediately reveal preferences, which complicates normative analysis. The authors accept this, claiming that instead, we should specify models that incorporate parameters capturing non-voluntary processes, and then use data to identify “latent” preferences after conditioning on estimates of these parameters.

Full reference: Gabaix, Xavier, and David I. Laibson. 2008. “The Seven Properties of Good Models.” In The Foundations of Positive and Normative Economics, ed. Andrew Caplin and Andrew Schotter, 292–99. New York: Oxford University Press.

Ungated link: