Monitoring versus Feedback

Think for a moment about service providers and beneficiaries. The issue is how to motivate service providers to do a good job of providing services for beneficiaries. I am thinking about this in the context of development research, so the focus is often on public service providers. But I think the concerns here could apply to private actors (whether non-profit or for-profit) as well.

Would-be beneficiaries are sometimes called upon to rate the quality of services provided.
At least in development research, beneficiary ratings are often interpreted as “monitoring” in the service of holding service providers accountable. Examples of this include Olken’s study on community monitoring of infrastructure spending, scorecard programs, and other “social accountability” arrangements. (Of course there are also examples that are closer to home, like student evaluations for professors.)

When ratings are used for “monitoring,” they are tied to threats that are meant to keep service providers honest. Maybe the threat is for the ratings to be passed on to higher authorities who have some kind of sanctioning power. Maybe the threat is just some kind of more diffuse social sanction.

But I want to propose that there is another way to view beneficiary ratings: as feedback rather than monitoring. To see what I mean, step outside the realm of development and think instead of things like Amazon seller ratings and Yelp reviews. In these cases, the reviews are not tied to any real sanctioning. Rather, the feedback serves different purposes.

First, it may help the service providers to know what they are doing well and what they are doing poorly. This information can in itself help to improve service delivery.

Second, the ratings can function as a tool that service providers use to win new clients. (E.g., restaurants may like Yelp reviews because good reviews give them a tool for winning the trust of new patrons.) Of course, the importance of this function will depend on the extent to which a service provider benefits from winning the confidence of new people. Not all services fall into this category, but many may. (Indeed, these thoughts came about during a discussion of strategies for extending the reach of basic health services via community health workers, where it was important to win the trust of new potential clients.) An institution that (i) ensures that good deeds are recognized and (ii) allows the resulting ratings to be used to win new clients would induce higher-quality service provision as well.
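
To see why a visible rating can win a stranger’s trust, here is a minimal Bayesian sketch (the prior π and the rating probabilities q_G and q_B are my own illustrative notation, not taken from any of the studies above). A prospective client starts with prior belief π that the provider is good; good providers earn a high rating with probability q_G, bad ones with probability q_B < q_G. After seeing a high rating, the client’s posterior is

$$\Pr(\text{good} \mid \text{high rating}) = \frac{\pi\, q_G}{\pi\, q_G + (1 - \pi)\, q_B} > \pi \quad \text{whenever } q_G > q_B.$$

So as long as ratings are informative, a good rating moves a stranger’s belief upward, which is precisely the trust-winning function of the feedback.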

This “monitoring” versus “feedback” distinction can have higher-order “selection” effects too. You could imagine that the introduction of a punitive monitoring approach may disincline some people from taking up jobs as service providers. By contrast, the feedback approach may provide assurance and induce some to take up such jobs. The point is that the manner in which the ratings system is presented and used may affect the types of people who become service providers. (This is a pretty basic adverse selection argument.)
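
To make the selection argument a bit more concrete, here is a minimal sketch in my own notation (not from any particular model). Suppose candidates of ability a earn wage w from the job, face sanction s under monitoring, and gain r from good ratings under feedback:

$$U_{\text{mon}}(a) = w - s \Pr(\text{sanction} \mid a), \qquad U_{\text{fb}}(a) = w + r \Pr(\text{good rating} \mid a).$$

A candidate takes the job only if U(a) beats her outside option. If sanctioning is noisy, Pr(sanction | a) stays positive even for high-ability candidates, so monitoring depresses payoffs across the board and pushes marginal candidates out. Feedback raises payoffs, and raises them most for high-a candidates who expect good ratings, so the pool that selects in tilts toward higher ability.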


Pre-analysis plans

Good in theory, but problems in implementation currently make these plans less useful than they should be. Note that the point of a plan is to show voluntary commitment to transparency as a way to distinguish oneself as credible (cf. separating equilibria). Features of plans that improve this “separating” function are preferred by those who want credible science.

I am going to focus on two issues: challenges to checking fidelity and lack of public vetting.

First, it is too cumbersome at the moment to check papers for fidelity to the plan. This is partly because there are often so many hypothesis tests proposed, partly because plans are formatted so poorly that we cannot quickly take in what is being proposed, and partly because results are presented separately from what was specified in the plan. There are some exceptions to this dim assessment. E.g., Beath et al. did a stellar job in the final report of their NSP study, but even in that case the sheer volume of tests was quite dizzying. Similarly for Casey et al. in their GoBifo study. In both cases, though, it would have been nice to have formatting that permitted fidelity-checking in the main texts of the published papers.
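
As an illustration of formatting that would permit quick fidelity-checking, the main text could carry a compact crosswalk from the plan to the results (the entries below are hypothetical, not drawn from either study):

Registered hypothesis    Specification in plan   Where reported    Deviations
H1 (primary outcome)     Plan sec. 2, test 1     Table 2, col. 1   none
H2 (secondary outcome)   Plan sec. 2, test 4     Table 3, col. 2   added covariates (noted in appendix)

A reader could then verify fidelity row by row instead of reverse-engineering it from two documents.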

Second is the lack of public vetting of plans. The standard now is to publicly register. But what is being registered? Mostly specifications and tests that the author thinks are persuasive. But the point isn’t for authors to signal back to themselves. The point is to signal out to the academic community. This function would be enhanced if the academic community weighed in on the plan before it was finalized. The Comparative Political Studies pilot of results-free review was an awesome move toward improvement on this front. So are registered reports (à la the journal Cortex). Let’s do more of this.

Improving on both of these fronts implies higher costs to plans, but of course that is quite the point (cf. separating equilibria).
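
To spell out the separating-equilibrium logic, a minimal sketch in my own notation: let c_C be the cost of producing a demanding, checkable plan for a careful author, c_N the (higher) expected cost for an author who does not intend to follow it (forgone specification searching, detected deviations), and v the credibility premium the community attaches to a registered plan. Separation requires

$$c_C \le v < c_N,$$

so that only careful authors find the plan worth producing. Cheap, unvetted plans make c_N ≈ c_C; registration then pools the two types and conveys little. Raising the effective cost of a plan, through fidelity-checkable formatting and public vetting, is what restores the separation.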

(I will also continue to push here for plans to be just as much about specifying how to interpret results as about how to generate them. That is, plans are more meaningful when they present a theoretical model and map the statistical estimates back to parameters in that model. But these are separate issues.)
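
As a stylized illustration of that mapping (the model and symbols are my own, not from any particular plan): suppose the theory says take-up is y = α + βq, where q is service quality, and the intervention raises quality by a known amount Δq. Then the experimental average treatment effect identifies the structural slope:

$$\hat{\tau} = \mathbb{E}[y \mid T = 1] - \mathbb{E}[y \mid T = 0] = \beta\, \Delta q, \qquad \hat{\beta} = \hat{\tau} / \Delta q.$$

A plan written this way can say in advance which values of β would support or undercut the theory, rather than leaving the interpretation of the treatment effect to post hoc judgment.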