Agonizing over peer review is a perennial theme in conversations among scholars. I have given this some thought, and in this attached document, I propose an alternative “proceedings” model for publication in political science, my home discipline: [PDF]
A point that I make in the document is that, in other disciplines like computer science, proceedings-type publications are the highest-prestige outlets, and conventional journals are considered second-tier. So, there is nothing essential about conventional journals for granting prestige.
Something that is implicit in this model, though I do not make it explicit, is that there is ample scope for scholars to be entrepreneurial in organizing new events, perhaps even one-off events or short-lived series, that generate new proceedings outlets. An overarching governing body (like an APSA section) could serve to “certify” such proceedings. This would be an alternative to the “special issues” of journals that are sometimes arranged to serve a similar purpose, but that again tend to be bogged down unnecessarily by the hurdles of conventional publication processes.
P.S.: For those interested in models of publication alternative to the conventional closed-review, closed-access formats, here are two to consider:
- NIPS Proceedings (link) are the peer-reviewed proceedings of the annual Neural Information Processing Systems conference, a major forum for advances in machine learning. Note that papers are posted along with their reviews.
- Theoretical Economics (link) is an open-access journal focusing on economic theory and published by the Econometric Society. Note that they publish using Open Journal Systems, software developed by the Public Knowledge Project (link).
In my research, I typically try to inform decisions on the design of policies. Sometimes this amounts to a binary “adopt” / “do not adopt” decision, but usually it is more complicated than that. To the extent that it is, I would like to have an experiment that sets me up to inform the more complicated decision. This often requires that I pose some kind of theoretical model that relates a range of options for policy inputs to outcomes of interest.
To the furthest extent possible, I would like my experimental design to deliver estimates of the key parameters in the model with minimal additional assumptions needed at the analysis stage. That is, I want as many as possible of the identifying assumptions that I need to be guaranteed by my design. This is, in essence, the approach developed by Chassang et al. in the context of treatments that work only if recipients put in some effort to make them work. Here is a link to the paper: [published] [ungated PDF]. In this approach, the model informs the design ex ante. Ex post, after the experimental data are in, we can just estimate some simple conditional means to get the parameters of interest. A la Rubin (link), design trumps analysis; the revision is that it is model-informed design.
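To make the "simple conditional means" idea concrete, here is a minimal sketch with entirely made-up data. The two-arm setup and the effect size are my own illustration, not the actual design in Chassang et al.; the point is only that, when the design guarantees the identifying assumptions, the ex post analysis reduces to comparing group means.

```python
import random

random.seed(0)

# Hypothetical experiment: units are randomly assigned to control or
# treatment. Because assignment is randomized by design, the analysis
# below needs no further identifying assumptions.
n = 1000
data = []
for _ in range(n):
    treated = random.random() < 0.5
    # Made-up outcome: noise around a baseline, plus a true treatment
    # effect of 2.0 for treated units.
    outcome = random.gauss(0.0, 1.0) + (2.0 if treated else 0.0)
    data.append((treated, outcome))

def conditional_mean(rows, treated):
    ys = [y for t, y in rows if t == treated]
    return sum(ys) / len(ys)

# The parameter of interest is just a difference of conditional means.
effect = conditional_mean(data, True) - conditional_mean(data, False)
print(f"estimated treatment effect: {effect:.2f}")  # close to 2.0
```

The "analysis" here is two averages and a subtraction; all of the heavy lifting happened ex ante, in the design.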
Now, sometimes I cannot do everything that I want in my design. For example, suppose my theoretical model suggests a potentially nonlinear relationship between inputs and outcomes. Suppose as well that I can only assign treatment at a few points on the potential support of the inputs (maybe even just two points). Then, I may need to do more with the analysis to get a sense of what outcomes might look like in areas of the support of the inputs where I have no direct evidence. This would be important if we want to propose optimal policies over the full support of input levels. As an example, take this approach by Banerjee et al., who try to estimate the optimal allocation of police posts to reduce drunk driving: [ungated PDF].
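As a stylized illustration of the problem (my own toy example, not the Banerjee et al. setup): suppose outcomes are observed at only two input levels, while the true response is nonlinear. Two points identify only a line, so any prediction at unobserved input levels rides entirely on the functional form assumed in the analysis.

```python
# Toy example: the experiment assigns only two input levels, x = 0 and
# x = 1, but we want to evaluate policies over a wider range of x.

def true_response(x):
    # Nonlinear relationship, unknown to the analyst (diminishing returns).
    return 4 * x - x ** 2

# Outcomes observed at the two experimental points only.
observed = {0.0: true_response(0.0), 1.0: true_response(1.0)}

# Linear extrapolation from the two experimental points.
x0, x1 = sorted(observed)
slope = (observed[x1] - observed[x0]) / (x1 - x0)

def linear_prediction(x):
    return observed[x0] + slope * (x - x0)

for x in (2.0, 3.0):
    print(x, linear_prediction(x), true_response(x))
# At x = 2 the linear model predicts 6.0 while the true value is 4.0.
# The design alone cannot adjudicate between these; this is where the
# theoretical model must carry the extrapolation.
```

The gap between the predicted and true values at unassigned input levels is exactly the part of the inference that the design does not cover, and that model-based analysis has to fill.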
(These issues are central to a working group that Leonard Wantchekon and I are now running for NYC-area economists and political scientists. We had our first event last week at Princeton and it was great! This post is inspired by the thought-provoking talks given by Erik Snowberg, Brendan Kline, Pierre Nguimpeu, and Ethan Bueno de Mesquita at that event.)