Consultancy opportunity: developing methods to measure governments’ commitment to evaluation

The International Initiative for Impact Evaluation (3ie) is seeking a consultant to research methods for measuring governments’ commitment to evaluation and evidence-based policy. The consultant will review existing methods for measuring such commitment and also study whether and how performance indices of this sort actually have an impact. Sounds like interesting work. Full terms of reference are here: link.


Draft syllabus for NYU graduate course on designing surveys & field experiments

As mentioned in a previous post (link), I will teach a graduate course this Spring at NYU called “Quantitative Field Methods.” The course description is as follows:

POL-GA 3200 Quantitative Field Methods (4 points) Instructor: Cyrus Samii (GSAS/Politics), Spring 2012, Thu 4-6
This is a graduate course on statistical methods for designing quantitative social science field research, including sample surveys, field experiments, and observational (quasi-experimental) studies. The purpose of this course is to train graduate students in the social sciences to design rigorous quantitative micro-level fieldwork for their research. The learning goals are (i) to understand why some sampling, experimental, or measurement techniques are to be preferred over others, (ii) to be able to analyze design alternatives and implement sampling, treatment assignment, and measurement algorithms in the R statistical computing environment, and (iii) to develop an ability to take meaningful social science questions and translate them into hypotheses and research designs that can address the questions in a compelling manner.


A working draft of the syllabus is here: quant field methods 120103. I welcome comments or suggestions.

For NYU students or NYC-area students interested in the course:

  • First, I recently noticed that in the registration system this class had been mislabeled (something about “design of institutions”). As far as I know, this has been corrected. In any case, you can be sure that POL-GA 3200 refers to the class described in this post and not the institutions class.
  • Second, the course is open to PhD students as well as master’s students from across the social sciences, subject to availability of slots. (See the syllabus for details.) However, the prerequisite is at least a year of graduate-level social science statistics training (or equivalent, subject to my assessment), as the presentation will be technical and the assignments will involve fairly intensive programming in R. No auditing will be allowed.

Job with SI on leading impact evaluations

An interesting job opportunity announcement landed in my inbox from our friends at Social Impact (link):

Social Impact’s impact evaluation work has grown quite rapidly, with 15 IEs now in the planning or implementation stage. Given the rapid growth, we are looking for additional technical staff. Essentially, recent post-docs who have IE/field research experience and who could technically lead a portfolio of IEs. The specific qualifications and sector of expertise are somewhat flexible if we find a good candidate, and I think this is a great opportunity for someone who wants to get more hands-on development research experience with regular opportunities for publication.


The full recruitment notice with instructions on how to apply is linked here: Posting_Impact-evaluation-advisor.


“, robust” for experimenters

Forthcoming in Statistics and Probability Letters are results by Peter Aronow and me on how heteroskedasticity-robust and homoskedastic variance estimators for regression coefficients relate to the exact randomization variance of the difference-in-means estimator in a randomized experiment.

The gated link is here (link). If you don’t have access to the journal, contact me for a copy.

The main results are that the ol’ White heteroskedasticity-robust variance estimator (a.k.a. “, robust”) yields a conservative (in expectation) approximation to the exact randomization variance of the difference-in-means for a given sample, and estimates precisely the randomization variance of the difference-in-means when we assume that the experimental sample is a random sample from some larger population (the “super-population” interpretation). There are two slight tweaks, though: (1) the exact equivalence holds for the “leverage corrected” version of White’s estimator, but the difference between this version and White’s original version is negligible in all but very small samples; (2) because of Jensen’s inequality, these nice results for the variance don’t necessarily carry over to its square root (a.k.a. your standard errors), but the consequences shouldn’t be too horrible. The take-away, then, is that experimenters can feel okay about using “, robust” to obtain standard errors for their average treatment effect estimates in randomized experiments (assuming no cluster randomization, which would require other kinds of adjustment).
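For intuition, here is a minimal R sketch, my own illustration rather than code from the paper, showing that for a two-arm experiment the leverage-corrected White estimator (“HC2” in the sandwich package) reproduces the familiar Neyman variance estimator s1^2/n1 + s0^2/n0 for the difference-in-means; the data are simulated just for the demo:

```r
# Sketch: in a two-arm randomized experiment, the leverage-corrected White
# ("HC2") variance estimator for the treatment coefficient equals the
# Neyman estimator s1^2/n1 + s0^2/n0. Data are simulated for illustration.
library(sandwich)  # provides vcovHC()

set.seed(1)
n1 <- 60; n0 <- 40
y <- c(rnorm(n1, mean = 1), rnorm(n0, mean = 0))
d <- c(rep(1, n1), rep(0, n0))

fit <- lm(y ~ d)
v_hc2 <- vcovHC(fit, type = "HC2")["d", "d"]
v_neyman <- var(y[d == 1]) / n1 + var(y[d == 0]) / n0

all.equal(v_hc2, v_neyman)  # TRUE: the two estimates coincide
```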

We also show that in the special case of a randomized experiment with a balanced design (equal numbers of treated and control units), all sorts of estimators, including the heteroskedasticity-robust estimator, the homoskedastic estimator, and the “constant effects” permutation estimator, are actually algebraically equivalent! So balance between treatment and control group sizes is a nice feature because it eases variance estimation.
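A quick numerical way to see the balanced-design equivalence, again a sketch of my own under simulated data: with n1 = n0, the classical homoskedastic variance estimate and the HC2 robust estimate of the treatment coefficient match exactly, even when the two arms have very different outcome variances:

```r
# Sketch: with a balanced design (n1 = n0), the classical homoskedastic and
# the HC2 heteroskedasticity-robust variance estimates of the treatment
# coefficient are algebraically identical, even under heteroskedasticity.
library(sandwich)

set.seed(2)
m <- 50
y <- c(rnorm(m, mean = 1, sd = 2), rnorm(m, mean = 0, sd = 0.5))
d <- c(rep(1, m), rep(0, m))

fit <- lm(y ~ d)
v_classical <- vcov(fit)["d", "d"]               # homoskedasticity-based
v_robust <- vcovHC(fit, type = "HC2")["d", "d"]  # leverage-corrected White

all.equal(v_classical, v_robust)  # TRUE under balance
```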
