Monitoring versus Feedback

Think for a moment about service providers and beneficiaries. The issue is how to motivate service providers to do a good job of providing services to beneficiaries. I am thinking about this in the context of development research, where the focus is often on public service providers. But I think the concerns here could apply to private actors (whether non-profit or for-profit) as well.

Would-be beneficiaries are sometimes called upon to rate the quality of services provided. At least in development research, beneficiary ratings are often interpreted as “monitoring” in the service of holding service providers accountable. Examples include Olken’s study on community monitoring of infrastructure spending, scorecard programs, and other “social accountability” arrangements. (There are also examples closer to home, like student evaluations of professors.)

When ratings are used for “monitoring,” they are tied to threats meant to keep service providers honest. Maybe the threat is that the ratings will be passed on to higher authorities with some kind of sanctioning power. Maybe it is just some more diffuse social sanction.

But I want to propose that there is another way to view beneficiary ratings: as feedback rather than monitoring. To see what I mean, step outside the realm of development and think instead of things like Amazon seller ratings and Yelp reviews. In these cases, the reviews are not tied to any real sanctioning. Rather, the feedback serves different purposes.

First, it may help the service providers to know what they are doing well and what they are doing poorly. This information can in itself help to improve service delivery.

Second, the ratings can function as a tool that service providers use to win new clients. (E.g., restaurants may like Yelp reviews because good reviews give them a tool for winning the trust of new patrons.) Of course, the importance of this function depends on the extent to which a service provider benefits from winning the confidence of new people. Not all services fall into this category, but many may. (Indeed, these thoughts came about during a discussion of strategies for extending the reach of basic health services via community health workers, where it was important to win the trust of new potential clients.) An institution that (i) ensures that good deeds are recognized and (ii) allows the resulting ratings to be used to gain new clients would thus also induce higher-quality service provision.

This “monitoring” versus “feedback” distinction can have higher-order “selection” effects too. You could imagine that the introduction of a punitive monitoring approach may disincline some people from taking up jobs as service providers. By contrast, the feedback approach may provide assurance and induce some to take up such jobs. The point is that the manner in which the ratings system is presented and used may affect the types of people who become service providers. (This is a pretty basic adverse selection argument; a sketch follows.)
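
To make the selection logic slightly more concrete, here is a bare-bones sketch of my own (the wage w, outside option u(q), expected penalty π(q), and client-winning benefit b(q) are all assumptions introduced purely for illustration):

```latex
% A provider of quality q takes the job iff the payoff beats the outside
% option u(q). With noisy ratings, a punitive regime imposes an expected
% penalty \pi(q) > 0 even on good types, while a feedback regime instead
% adds an expected benefit b(q) from winning new clients:
\text{punitive: } w - \pi(q) \ge u(q)
\qquad\qquad
\text{feedback: } w + b(q) \ge u(q)
% Some types satisfy only the second inequality, so the pool of people
% willing to take the job can differ across the two regimes.
```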


Pre-analysis plans

Pre-analysis plans are good in theory, but problems in implementation currently make them less useful than they should be. Note that the point of a plan is to show a voluntary commitment to transparency as a way to distinguish oneself as credible (cf. separating equilibria). Features of plans that improve this “separating” function are therefore preferred by those who want credible science.

I am going to focus on two issues: challenges to checking fidelity and lack of public vetting.

First, it is currently too cumbersome to check papers for fidelity to the plan. This is partly because so many hypothesis tests are often proposed, partly because plans are formatted so poorly that one cannot quickly take in what is being proposed, and partly because results are presented separately from what is specified in the plan. There are some exceptions to this dim assessment (e.g., Beath et al. did a stellar job in the final report of their NSP study), but even in that case the sheer volume of tests was quite dizzying. The same goes for Casey et al. in their GoBifo study. In both cases, though, it would have been nice to have formatting that permitted fidelity-checking in the main texts of the published papers.
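
To illustrate how low the bar could be: if plans and papers both carried structured hypothesis identifiers, fidelity-checking would be nearly mechanical. A toy sketch in Python, with identifiers made up for illustration:

```python
# Toy fidelity check: compare hypothesis IDs registered in a plan against
# those reported in the paper. The IDs here are hypothetical.
registered = {"H1", "H2", "H3a", "H3b", "H4"}  # tests specified in the plan
reported = {"H1", "H3a", "H4", "H5"}           # tests reported in the paper

print("Registered but unreported:", sorted(registered - reported))
print("Reported but unregistered:", sorted(reported - registered))
```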

Second is the lack of public vetting of plans. The standard now is to register publicly. But what is being registered? Mostly specifications and tests that the author thinks are persuasive. But the point isn’t for authors to signal back to themselves; the point is to signal out to the academic community. This function would be enhanced if the academic community weighed in on the plan before it was finalized. The Comparative Political Studies pilot of results-free review was an awesome move toward improvement on this front, as are registered reports (à la the journal Cortex). Let’s do more of this.

Improving on both of these fronts implies higher costs for those preparing plans, but of course that is quite the point (cf. separating equilibria).
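
For those who want the signaling logic spelled out, here is a minimal Spence-style sketch (my own illustration; the payoff v and costs k_c < k_n are assumed for exposition):

```latex
% Let v be the credibility payoff from being believed, and let k_c < k_n
% be the costs of producing a rigorous plan for careful (c) and careless
% (n) researcher types. Separation, in which only careful types write
% plans, requires
v - k_c \ge 0 \quad \text{and} \quad v - k_n < 0,
% i.e., k_c \le v < k_n. Demanding more rigor plausibly raises k_n more
% than k_c, which is precisely what sustains the separating function.
```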

(I will also continue here to push for plans to be just as much about specifying how to interpret results as about how to generate them. That is, plans are more meaningful when they present a theoretical model and map the statistical estimates back to parameters in that model. But these are separate issues.)
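
Here is a toy example of the kind of mapping I have in mind (the functional form and parameters are mine, purely for illustration):

```latex
% Suppose the model says outcomes depend on provider effort e with slope
% \beta, so y = \alpha + \beta e + \varepsilon, and the intervention
% shifts effort by \delta. Then the experimental contrast identifies
\mathbb{E}[y \mid \text{treated}] - \mathbb{E}[y \mid \text{control}]
  = \beta\,\delta,
% i.e., the product \beta\delta, not \beta alone. A plan that states this
% mapping up front makes clear which further assumptions are needed to
% interpret the estimate in terms of the model's parameters.
```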


Let’s be clear about what we mean by (substantive) theory

I’ve had a few discussions recently about how to think about substantive theory. What should we be looking for?

A proposal I like comes from a passing remark by Dixit in Lawlessness and Economics (2004, p. 22; link):

The aim of theory should be to construct a collection of models that is sufficiently small to be remembered and used, and covers a sufficiently large portion of the spectrum of facts.

This is not so different from Clark and Primo’s proposal of theory as map-like working approximations that we use for guidance in addressing particular problems (link). I like their view, and it’s one that I endorse when discussing how theory and empirics interact in the recent JOP piece (link; ungated).

Personally, I don’t use the word “model” lightly, and I suspect that Dixit doesn’t either. When I use it, I do in fact mean a formal model. An important benefit of a formal model, to me, is its low semantic ambiguity, at least when compared to verbally stated theories. There is nothing more frustrating than debating the internal consistency of a theory when everyone has a different interpretation of the terms. Of course, formalization does not solve the problem of relating the theory back to reality, but that issue of operationalization is separate.
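
To illustrate the point about semantic ambiguity with a standard example (mine, not Dixit’s): compare the verbal claim “repeated interaction sustains cooperation” with its formal counterpart, which states exactly when the claim holds:

```latex
% In an infinitely repeated prisoner's dilemma with stage payoffs
% T > R > P (temptation, mutual cooperation, mutual defection) and
% discount factor \delta, grim-trigger strategies sustain cooperation iff
T - R \le \frac{\delta}{1-\delta}\,(R - P)
\quad\Longleftrightarrow\quad
\delta \ge \frac{T - R}{T - P}.
% Every term is pinned down, so disputes about internal consistency
% cannot turn on differing interpretations of the words.
```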


EGAP’s funding round on taxation, publicly financed goods, and development

EGAP has just announced a new round of funding for research on taxation and publicly financed goods. You can view the call for expressions of interest here: link. Expressions of interest are due by September 15 (!).

The funding round will be an EGAP “metaketa.” This means that the projects will be aligned in terms of the interventions and outcomes that they study so as to allow for meta-analysis. A recent issue of the American Economic Journal: Applied Economics featured studies from a similar initiative on microcredit: link. Here is a link to EGAP’s explanation of the metaketa approach: link.
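
For readers unfamiliar with the mechanics, the payoff of aligned interventions and outcomes is that study-level estimates can be pooled directly. A minimal sketch of a fixed-effect (inverse-variance weighted) meta-analysis, using made-up numbers:

```python
import numpy as np

# Hypothetical treatment-effect estimates and standard errors from a set
# of coordinated studies measuring the same intervention-outcome pair.
estimates = np.array([0.12, 0.05, 0.20, -0.02])
std_errors = np.array([0.06, 0.04, 0.10, 0.05])

# Fixed-effect meta-analysis: weight each study by the precision
# (inverse variance) of its estimate.
weights = 1.0 / std_errors**2
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE: {pooled_se:.3f})")
```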

Having been involved in the drafting of the request for proposals (RFP), I want to emphasize a few points. The “Focus” section of the RFP indicates,

We aim to fund research on strategies to move citizen-government relations toward responsiveness on the part of government and corresponding tax compliance on the part of citizens. Interventions of particular interest are: the provision of government-funded public goods; the empowerment of citizens vis a vis predatory tax collectors; and/or the strengthening of civil society initiatives that help citizens to comply with tax regulations, while demanding effective and responsive public action. Projects implemented in collaboration with governments and/or civil society organizations are strongly encouraged to apply.

In considering whether to apply, it is okay to use a broad definition of “taxation.” That is, the study does not necessarily have to be about property or income taxes, say. Usage fees for publicly provided services, for example, could fall within the parameters of the RFP, so long as the proposed research looks into the reciprocal exchange between citizens, who have fee obligations, and public agencies, which have service obligations. The primary interest is in strategies to nudge society-state relations in the virtuous direction of reciprocal exchange on the basis of such obligations.

The RFP also emphasizes research in developing countries, meaning essentially countries that are not high-income by World Bank standards, although this is not a formally specified parameter.

The timeline is rather tight on this, so those applying should have a clear idea of exactly which government agencies or civil society organizations they would be able to work with.


Toward a norm of results-free peer review and “ex ante science”

Vox recently posted an article on “problems facing science” (link). A panel of 270 scientists from across a range of disciplines chimed in. A major theme, and arguably the biggest problem identified after issues related to accessing grants, was that “bad incentives” undermine scientific integrity. Specifically, these bad incentives arise because publication and grant decisions tend overwhelmingly to be based on assessments of whether research results are “exciting.” Vox also reported that the “fix” for this problem, as suggested by many of the panelists, was for editors and reviewers to “put a greater emphasis on rigorous methods and processes rather than splashy results.”

Recently, Comparative Political Studies hosted a special issue dedicated to applying a results-free review process (link). The editors of this special issue concluded that the process promoted attention to “theoretical consistency and substantive importance.” It introduced some complications too, such as questions about how to handle statistically insignificant results and how to accommodate research designs other than experiments or certain types of observational templates. But generally, they concluded that the process “exceeded our expectations.”

These two articles reference other detailed arguments promoting the idea of review based on whether hypotheses are well motivated and methods rigorously applied. I have also elaborated on why I think this kind of “ex ante science” is a good idea (link1 link2). The principles of “ex ante science” are to judge the value of applied empirical research contributions on the basis of whether the empirical analyses are well motivated in substantive or theoretical terms, whether the empirical methods are tightly derived from that substantive motivation, and whether the proposed methods are robust. One avoids referencing results in judging the value of the contribution.

Here I want to suggest something that we can start doing immediately to promote this goal: voluntary commitment by journal reviewers to evaluate manuscripts on the basis of principles of ex ante science. Journal editors give reviewers discretion to apply their judgment in evaluating a manuscript. This grants a license to those interested in promoting the principles of ex ante science to do just that.

Here are some operational guidelines. As a reviewer, you could begin by masking results before starting to read a manuscript. Then you could structure your review so that it addresses the questions pertaining to the principles stated above.
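
One way to operationalize the masking step, sketched in code (my own rough suggestion, not an existing tool):

```python
import re

def mask_results(text: str) -> str:
    """Blank out numbers (decimals, percentages, significance stars) so
    that estimates and p-values cannot color the assessment. Crude: it
    will also hit section numbers, dates, and the like."""
    return re.sub(r"-?\d+(?:\.\d+)?%?\**", "[masked]", text)

sample = "The treatment increased compliance by 12.4 percent (p = 0.03)."
print(mask_results(sample))
# -> "The treatment increased compliance by [masked] percent (p = [masked])."
```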

Let’s take it even further, in the interest of promoting a norm of reviews based on principles of ex ante science: to resolve any ambiguity about your commitment to these principles as a reviewer, make it explicit. A review could begin with a declaration along the lines of “This review is based on assessments of whether or not the empirical analyses are well motivated and the empirical methods robust. Results were masked in judging the merits of the manuscript.”
