EGAP’s funding round on taxation, publicly financed goods, and development

EGAP has just announced a new round of funding for research on taxation and publicly financed goods. You can view the call for expressions of interest here: link. Expressions of interest are due by September 15 (!).

The funding round will be an EGAP “metaketa.” This means that the projects will be aligned in terms of the interventions and outcomes that they study, so as to allow for meta-analysis. A recent issue of the American Economic Journal: Applied Economics featured studies from a similar initiative on microcredit: link. Here is a link to EGAP’s explanation of the metaketa approach: link.

Having been involved in the drafting of the request for proposals (RFP), I want to emphasize a few points. The “Focus” section of the RFP indicates,

We aim to fund research on strategies to move citizen-government relations toward responsiveness on the part of government and corresponding tax compliance on the part of citizens. Interventions of particular interest are: the provision of government-funded public goods; the empowerment of citizens vis a vis predatory tax collectors; and/or the strengthening of civil society initiatives that help citizens to comply with tax regulations, while demanding effective and responsive public action. Projects implemented in collaboration with governments and/or civil society organizations are strongly encouraged to apply.

In considering whether to apply, it is okay to use a broad definition of “taxation.” That is, the study does not necessarily have to be about property or income taxes, say. Usage fees for publicly provided services, for example, could fall within the parameters of the RFP, so long as the proposed research looks into the reciprocal exchange between citizens, who have fee obligations, and public agencies, which have service obligations. The primary interest is in strategies to nudge society-state relations in the virtuous direction of reciprocal exchange on the basis of such obligations.

The RFP also emphasizes research in developing countries, meaning essentially countries that are not high-income by World Bank standards, although this is not a formally specified parameter.

The timeline is rather tight on this, so those applying should have a clear idea of exactly which government agencies or civil society organizations they would be able to work with.


Toward a norm of results-free peer review and “ex ante science”

Vox recently posted an article on “problems facing science” (link). A panel of 270 scientists from across a range of disciplines chimed in. A major theme, and arguably the biggest problem identified after issues related to accessing grants, was that “bad incentives” undermine scientific integrity. Specifically, these bad incentives arise because publication and grant decisions tend overwhelmingly to be based on assessments of whether research results are “exciting.” Vox also reported that the “fix” for this problem, as suggested by many of the panelists, was for editors and reviewers to “put a greater emphasis on rigorous methods and processes rather than splashy results.”

Recently, Comparative Political Studies ran a special issue dedicated to applying a results-free review process (link). The editors of the special issue concluded that the process promoted attention to “theoretical consistency and substantive importance.” It introduced some complications too, such as questions about how to handle statistically insignificant results and how to accommodate research designs other than experiments or certain types of observational templates. But overall, they concluded that the process “exceeded our expectations.”

These two articles reference other detailed arguments promoting the idea of review based on whether hypotheses are well motivated and methods rigorously applied. I have also elaborated on why I think this kind of “ex ante science” is a good idea (link1 link2). The principles of “ex ante science” are to evaluate the value of applied empirical research contributions on the basis of whether the empirical analyses are well motivated in substantive or theoretical terms, whether the empirical methods are tightly derived from the substantive motivation, and whether the proposed empirical methods are robust. One avoids referencing results in judging the value of the contribution.

Here I want to suggest something that we can start doing immediately to promote this goal: voluntary commitment by journal reviewers to evaluate manuscripts on the basis of principles of ex ante science. Journal editors give reviewers discretion to apply their judgment in evaluating a manuscript. This grants a license to those interested in promoting the principles of ex ante science to do just that.

Here are some operational guidelines. As a reviewer, you could begin by masking the results before reading a manuscript. Then you could structure your review around the questions raised by the principles stated above.

Let’s take it even further, in the interest of promoting a norm of reviews based on principles of ex ante science: to resolve any ambiguity about one’s commitment to these principles, make that commitment explicit. Reviews could begin with a declaration along the lines of, “This review is based on an assessment of whether the empirical analyses are well motivated and the empirical methods robust. Results were masked in judging the merits of the manuscript.”


Inverse covariance weighting versus factor analysis

These are two ways to take a bunch of variables that are supposed to measure common latent factors and reduce them to a single index or a few indices. What is the difference? I get the question fairly often, so I thought I’d put up this post.

The two approaches do different things. Inverse covariance weighting applies an assumption that there is one latent trait of interest, and constructs an optimal weighted average on the basis of that assumption. Factor analysis tries to partial out an array of orthogonal latent factors.

An intuitive way to think of it is like this:

Suppose you have data consisting of three variables: College Math Grade, Math GRE, and Verbal GRE. The two math variables will be highly correlated with each other, and the verbal variable will be somewhat correlated with the math scores.

The inverse covariance weighted average of these three variables would result in an index that gives about 25% weight to each math score and about 50% weight to the verbal score. It “rewards” the verbal score for providing new information that the math scores don’t. The resulting index could be interpreted as a “general scholastic aptitude” index.

A factor analysis of these three variables would yield two orthogonal factors, the first factor of which would give almost 50% weight to each math variable and almost zero weight to the verbal variable, and the second would give almost zero weight to each math variable and almost 100% weight to the verbal variable. So you would get a “pure math” factor and a “pure verbal” factor.
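The contrast above can be sketched numerically. Below is a Python sketch (the post’s own materials use R); the simulated data, variable names, and correlation values are illustrative assumptions, not from the post. The ICW index weights each standardized variable by the row sums of the inverse covariance matrix; for the factor side, the sketch extracts two components of the correlation matrix and applies a varimax rotation to approximate the “pure math” and “pure verbal” factors described above.

```python
# Illustrative comparison of inverse covariance weighting (ICW) with a
# rotated two-factor solution, on simulated test-score data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent math and verbal abilities, moderately correlated.
math_ability = rng.standard_normal(n)
verbal_ability = 0.5 * math_ability + np.sqrt(0.75) * rng.standard_normal(n)

# Three observed scores: two noisy math measures, one verbal measure.
X = np.column_stack([
    math_ability + 0.5 * rng.standard_normal(n),    # College Math Grade
    math_ability + 0.5 * rng.standard_normal(n),    # Math GRE
    verbal_ability + 0.5 * rng.standard_normal(n),  # Verbal GRE
])

# --- Inverse covariance weighting ---
Z = (X - X.mean(axis=0)) / X.std(axis=0)            # standardize
w = np.linalg.inv(np.cov(Z, rowvar=False)).sum(axis=1)
w = w / w.sum()                                     # normalized weights
icw_index = Z @ w
print("ICW weights (math, math, verbal):", np.round(w, 2))

# --- Two-factor solution with varimax rotation ---
evals, evecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(evals)[::-1]
L = evecs[:, order[:2]] * np.sqrt(evals[order[:2]])  # unrotated loadings

def varimax(L, n_iter=100, tol=1e-8):
    """Kaiser varimax rotation of a loading matrix, via SVD updates."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < d * (1 + tol):
            break
        d = s.sum()
    return L @ R

L_rot = varimax(L)
print("Rotated loadings:\n", np.round(L_rot, 2))
```

With these assumed correlations, the ICW weights should come out near 25/25/50, and the rotated loadings should separate into one factor dominated by the two math scores and another dominated by the verbal score, matching the intuition above.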

Which one is better? It depends on the goals of your analysis.

I discuss this a bit more in my lecture on “measurement” in the quant field methods class (see these notes: [PDF]). Here is some R code to play around with these concepts too: [link].


Reasons for experiments in policy research that have little to do with statistics

We have all heard the various statistical reasons for giving experimental evidence special consideration in policy research. For example, JPAL has nice resources covering such points (link), such as the ability of randomization to balance unobserved confounders.

But when speaking to those involved in designing and implementing policies, I also point to two considerations that are not really statistical so much as sociological:

1. Putting manipulability to the test

As it happens, a randomized experiment is not necessarily the most efficient way to obtain a consistent estimate of a causal effect. See, e.g., Kasy’s research (link) or this recent discussion on the Development Impact blog (link). Of course, the non-randomized alternatives share with RCTs the feature that treatments are manipulated and therefore not the products of endogenous selection. It is such manipulation, not whether it is applied according to randomization, that is the essence of an “experiment.”

We have the famous quote from Box (1966, link):

To find out what happens to a system when you interfere with it you have to interfere with it (not just passively observe it).

This I would say is the essential argument in favor of experimentation for policy research.

Whether one or another intervention is likely to be more effective depends both on the relevant mechanisms driving outcomes and, crucially, on whether those mechanisms can be meaningfully affected through intervention. It is in addressing the second question that experimental studies are especially useful. Various approaches, both qualitative and quantitative, are helpful in identifying important mechanisms that drive outcomes. But experiments can provide especially direct evidence on whether we can actually do anything to affect these mechanisms — that is, experiments put “manipulability” to the test.

The successful use of experiments in policy research typically requires drawing on insights from other research on relevant mechanisms. This other research defines debates about what policy makers should do and how they should do it. Experiments have a distinct role in such debates by clarifying what is materially possible.

Related to this is replicability. What is nice about an experiment is that, in principle, you have before you a recipe for recreating an effect. Context dependence means that replicability may sometimes be elusive in practice. One could measure scientific success by the ability to fashion complete recipes (including the contextual conditions) for replicating effects. Observational studies are often deficient in this regard because we cannot control where and when we get the variation in treatments of interest. We are left to wonder whether we have really mastered what the observational data imply about causal effects. It’s possible that we have not mastered it at all and have merely tricked ourselves (or others!) into believing that certain causal effects are evident. With a complete experimental recipe, we can test it.

2. Deep engagement

Experimental evaluations of policies or programs are prospective. As such, they typically require deep engagement between researchers and implementers in processes of policy formulation, beneficiary selection, and site selection. Compare this to an ex post analysis, where such details are often lost. It is for good reason, then, that you often hear from practitioners that ex post evaluators did not understand “what really went on” in the program: they weren’t there from the beginning. In my experience, this is much less the case for experimental studies. Working prospectively, the researcher operates alongside implementation, and the experimental method typically defines beneficiary selection.

Finally, constructing the experiment requires that programmatic goals be made concrete. Such concreteness is needed for defining interventions crisply and devising outcome measures. In my experience, implementing partners have found it useful to go through the process of making interventions and outcomes concrete. Often the process of doing so for the purposes of an experimental evaluation is the first time they have had to think so precisely about interventions and outcomes. It is a good disciplining device, and it helps to make clear what is really at stake.

Of course these two factors should be taken alongside some of the limitations of experiments, which mix statistical and sociological considerations. Experiments face timescale challenges, since we can often only sustain experimental variation in treatments for so long, whether for ethical or program-cycle reasons. They also face spatial-scale challenges: it can be impractical to develop well-powered experiments for macro-level institutions or programs that cover large areas. Finally, there is the logistical complexity of experiments, given all the up-front decisions that they require. (I do not count external validity as a distinct challenge for experiments, since external validity is an issue that all research faces.)

Nonetheless, it is useful to have these ideas articulated, to make clear that experimentation is about a lot more than balance on unobservables.
