Practitioners, academics, and development impact evaluation

The Micro-Links forum is hosting a three-day online discussion on “Strengthening Evaluation of Poverty and Conflict/Fragility Interventions” (link). Based on my own experience trying to pull off these kinds of evaluations, I wanted to chime in with some points about the nature of academics’ involvement in impact evaluations. A common approach is for implementing organizations to team up with academics, and the nature of the relationship between practitioners and academics is a crucial part of the evaluation strategy. Let me propose this: for impact evaluations to work, everyone in the program should understand that they are involved in a scientific project that aims to test ideas about “what works.”

One way I have seen numerous impact evaluations fail is through a conflict that emerges between practitioners, who understand their role as helping people regardless of whether the way they do it is good for an evaluation, and academics, who see themselves as tasked with evaluating what the program staff are doing while having little stake in the program itself. This “division of labor” might seem rational, but I have found that it introduces tension and other obstacles.

An arrangement that seems to work better is for practitioners and academics to see themselves as engaged together in (i) the conceptualization of the program itself; (ii) the elaboration of the details of how it should be implemented; and (iii) the methods for measuring impact. All three steps should harness both practitioners’ and academics’ technical and substantive knowledge. What does not seem to work well is to force a division between tasks (i) and (ii) on the one hand, which are often “reserved” for practitioners to apply their substantive knowledge, and task (iii) on the other, which is reserved for academics to apply their technical knowledge. Oftentimes steps (i) and (ii) will have been worked out among practitioners from the commissioning and implementing agencies, and academics will then be consulted to carry out step (iii). In my experience, this process rarely works out. There is a logic that needs to flow from the conceptualization of the program to the measurement of its impacts, and this requires that academics and practitioners work together from square one, from conceptualizing “what needs to be done” by the program all the way to designing a means for determining “whether it worked” to produce the desired change. To put it another way, impact evaluations are better when they are seen less as technical exercises tacked onto existing programs and more as rich, substantive exercises in designing programs to test ideas and discover how to bring about the desired change.

For the academics, a fair way to reflect this might be to consider key actors involved in implementing the program as co-authors of at least some of the studies that emerge from it; indeed, the assumptions and hypotheses on which programs are based are often drawn largely from the experiences and thoughts of program staff, and their input should be acknowledged. Along similar lines, practitioners should appreciate that when academics are engaged, their substantive expertise is a resource to be tapped in developing the program itself, and that there needs to be a logic connecting the design of the program to the evaluation. For everyone involved, this is a deeper kind of interaction between academics and practitioners than is often the case, though I should note that, e.g., Poverty Action Lab/IPA projects often operate with such richness and nuance.
