In the current issue of PS, Luke Keele (link) makes a great point about the need for journals to let researchers take existing theories and subject them to multiple rounds of empirical scrutiny:
Of course, there is nothing wrong with new theories; however, I believe that an overemphasis on theory can impede the ability of the discipline to establish causal relationships. One point of emphasis in the identification revolution is that causal inference is difficult. A series of regression models is hardly the last word in whether a causal theory holds. Within causal inference, it is generally understood that only a series of different studies using disparate research designs can provide solid evidence for a causal relationship. This understanding implies that, as opposed to a single test, a theory requires a number of tests from different research designs. We might assert that whereas theory is critically necessary, too many theories might be harmful. It is probably the case that the gain from an incomplete test of a new theory is less than the gain from a new test of an existing theory.
Economics provides a useful example. The question of whether attending college increases income is one of long-standing interest to economists. The theory behind the question is relatively simple. Although I hesitate to state that the causal hypothesis that college leads to higher incomes is definitively settled, a review of the many different research designs used makes a convincing case for a causal effect. If all of the researchers conducting these studies had been told by reviewers that new theory was needed, little progress would have been made.
When we overvalue novel theories, we tend to dismiss attempts to answer old questions with new research designs. We can value papers with new theories and little or no empirics. We also can value papers with little in the way of new theory but that present novel research designs that provide new empirical evidence about causal relationships. We could argue convincingly that both topics require such attention that it may be difficult to present both novel theory and empirics in the same paper. Currently, I would state that, in general, a paper that does not engage in new theory development will have a difficult time being published in a top journal. I don’t think that is healthy. If we really want researchers to clearly establish causal relationships, that is often worth doing alone in a single paper.
In my (and Luke’s) home discipline of political science, I have frequently seen reviewers at top journals recommend rejection of papers because they don’t think the theoretical contribution is novel enough, even when the research design is compelling. (I am not talking about reviews of my own work, either, but of work by my much more capable colleagues, who have discussed their reviews with me.) But journals seem to be okay with papers that propose new theories with no empirical tests whatsoever. Weird, isn’t it?
I am not saying pure theory papers shouldn’t be published—quite the contrary in fact. What makes sense is some division of labor in the discipline. I recognize the importance of pure theory papers. I also recognize the importance of compelling empirical work that offers a new and credible test of an existing theoretical claim. The usual refrain that I hear when I say this is that “top economics journals don’t seem to have this problem,” and with that I agree. I wonder why there is such a difference.
I agree with pretty much everything else in Luke’s paper too. It is part of a symposium on whether “big data,” “causal inference,” and “formal theory” are conflicting trends, an obviously ridiculous proposition, but one that triggered some nice essays by the contributors.