Monthly Archives: January 2016

Inverse covariance weighting versus factor analysis

These are two ways to take a bunch of variables that are supposed to measure common latent factors and reduce them to one or a few indices. What is the difference? I get the question fairly often, so I thought I’d put this post up.

The two approaches do different things. Inverse covariance weighting assumes that there is a single latent trait of interest and constructs an optimally weighted average under that assumption. Factor analysis instead tries to partial out an array of orthogonal latent factors.

An intuitive way to think of it is like this:

Suppose you have data that consists of three variables: College Math Grade, Math GRE, and Verbal GRE. The two math variables will be highly correlated, and the verbal variable will be somewhat correlated with the math scores.

The inverse covariance weighted average of these three variables would result in an index that gives about 25% weight to each math score and about 50% weight to the verbal score. It “rewards” the verbal score for providing new information that the math scores don’t. The resulting index could be interpreted as a “general scholastic aptitude” index.
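To make the arithmetic concrete, here is a minimal Python sketch of this weighting scheme. The correlation values are hypothetical, chosen only to mimic the example (the linked R code is the post’s own companion material); the weights are the normalized row sums of the inverted correlation matrix:

```python
import numpy as np

# Hypothetical correlation matrix for (College Math Grade, Math GRE,
# Verbal GRE): the two math scores are highly correlated (0.8) and the
# verbal score is moderately correlated with each (0.3).
S = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.3],
              [0.3, 0.3, 1.0]])

# Inverse covariance weights: w proportional to S^{-1} * 1,
# normalized so the weights sum to one.
w = np.linalg.solve(S, np.ones(3))
w = w / w.sum()
print(np.round(w, 2))  # roughly [0.27 0.27 0.46]
```

With these illustrative correlations, each math score gets about 27% of the weight and the verbal score about 46%, close to the 25/25/50 split described above.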

A factor analysis of these three variables would yield two orthogonal factors, the first factor of which would give almost 50% weight to each math variable and almost zero weight to the verbal variable, and the second would give almost zero weight to each math variable and almost 100% weight to the verbal variable. So you would get a “pure math” factor and a “pure verbal” factor.
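The factor solution can be sketched in Python as well, again with hypothetical correlation values (the linked R code is the post’s own material). This version extracts two factors from the correlation matrix by eigendecomposition and then applies a standard varimax rotation, one common way to obtain the kind of simple structure described above:

```python
import numpy as np

# Same hypothetical correlation matrix as in the example: two math
# scores correlated at 0.8, verbal at 0.3 with each math score.
S = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.3],
              [0.3, 0.3, 1.0]])

# Extract the top two factors by eigendecomposition of the correlation
# matrix: loadings = eigenvector * sqrt(eigenvalue).
vals, vecs = np.linalg.eigh(S)           # eigenvalues in ascending order
idx = np.argsort(vals)[::-1][:2]         # indices of the two largest
L = vecs[:, idx] * np.sqrt(vals[idx])    # 3 x 2 unrotated loading matrix

def varimax(Phi, gamma=1.0, max_iter=50, tol=1e-6):
    """Standard varimax rotation computed via SVD."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lam = Phi @ R
        u, s, vh = np.linalg.svd(
            Phi.T @ (Lam ** 3
                     - (gamma / p) * Lam @ np.diag(np.sum(Lam ** 2, axis=0))))
        R = u @ vh
        if d != 0 and s.sum() / d < 1 + tol:
            break
        d = s.sum()
    return Phi @ R

Lr = varimax(L)
print(np.round(Lr, 2))
# After rotation, the two math variables load mainly on one factor and
# the verbal variable mainly on the other (column signs are arbitrary).
```

With these illustrative numbers the separation is approximate rather than exact: the math variables load predominantly on one rotated factor and the verbal variable predominantly on the other, with a modest cross-loading left over.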

Which one is better? It depends on the goals of your analysis.

I discuss this a bit more in my lecture on “measurement” in the quant field methods class (see these notes: [PDF]). Here is some R code to play around with these concepts too: [link].


Reasons for experiments in policy research that have little to do with statistics

We have all heard the various statistical reasons for giving experimental evidence special consideration in policy research. For example, JPAL has nice resources covering such points (link), including the need to balance unobserved confounders.

But when speaking to those involved in designing and implementing policies, I also point to two considerations that are not really statistical so much as sociological:

1. Putting manipulability to the test

As it happens, a randomized experiment is not necessarily the most efficient way to obtain a consistent estimate of a causal effect. See, e.g., Kasy’s research (link) or this recent discussion on the Development Impact blog (link). Of course, the non-randomized alternatives share with RCTs the fact that treatments are manipulated and therefore are not the products of endogenous selection. It is such manipulation, not whether it is assigned by randomization, that is the essence of an “experiment.”

We have the famous quote from Box (1966, link):

To find out what happens to a system when you interfere with it you have to interfere with it (not just passively observe it).

This I would say is the essential argument in favor of experimentation for policy research.

Whether one or another intervention is likely to be more effective depends both on the relevant mechanisms driving outcomes and, crucially, on whether those mechanisms can be meaningfully affected through intervention. It is in addressing the second question that experimental studies are especially useful. Various approaches, qualitative and quantitative alike, are helpful in identifying the important mechanisms that drive outcomes. But experiments can provide especially direct evidence on whether we can actually do anything to affect these mechanisms; that is, experiments put “manipulability” to the test.

The successful use of experiments in policy research typically requires drawing on insights from other research on relevant mechanisms. This other research defines debates about what policy makers should do and how they should do it. Experiments have a distinct role in such debates by clarifying what is materially possible.

Related to this is replicability. What is nice about an experiment is that, in principle, you have before you a recipe for recreating an effect. Context-dependence means that replicability may sometimes be elusive in practice. Indeed, one could measure scientific success by the ability to fashion complete recipes (including the contextual conditions) for replicating effects. Observational studies are often deficient in this regard because we cannot control where and when we get the variation in the treatments of interest. We are left to wonder whether we have really mastered what the observational data imply about causal effects. It is possible that we have not mastered it at all and have merely tricked ourselves (or others!) into believing that certain causal effects are evident. With a complete experimental recipe, we can put it to the test.

2. Deep engagement

Experimental evaluations of policies or programs are prospective. As such, they typically require deep engagement between researchers and implementers in the processes of policy formulation, beneficiary selection, and site selection. Compare this to an ex post analysis, where such details are often lost. It is for good reason, then, that you often hear from practitioners that ex post evaluators did not understand “what really went on” in the program: they weren’t there from the beginning. In my experience, this is much less the case for experimental studies. Working prospectively, the researcher operates alongside implementation, and the experimental method typically defines beneficiary selection.

Finally, constructing the experiment requires that programmatic goals be made concrete. Such concreteness is needed for defining interventions crisply and devising outcome measures. In my experience, implementing partners have found it useful to go through this process of making interventions and outcomes concrete; often it was the first time they had to think so precisely about either. It is a good disciplining device, helping to make clear what is really at stake.

Of course, these two factors should be weighed against some of the limitations of experiments, which mix statistical and sociological considerations. Experiments face timescale challenges: we can often sustain experimental variation in treatments for only so long, whether for ethical or program-cycle reasons. They also face spatial-scale challenges: it can be impractical to develop well-powered experiments for macro-level institutions or programs that cover large areas. Finally, there is the logistical complexity of experiments, given all the up-front decisions they require. (I do not count external validity as a distinct challenge for experiments, since it is an issue that all research faces.)

Nonetheless, it is useful to have these ideas articulated to see how experimentation is about a lot more than balance on unobservables.