As per Athey et al. (2019, Annals of Statistics), random forests can be interpreted as kernel estimators. If you haven’t seen this before, here is a toy example: link.
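Since links rot, here is a minimal sketch of the kernel view in Python (my own illustrative example, not from the paper; I set bootstrap=False so that the kernel weights reproduce the forest prediction exactly):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=300)

# bootstrap=False so each tree sees all the data and the kernel
# representation below matches rf.predict exactly
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                           bootstrap=False, random_state=0)
rf.fit(X, y)

x0 = np.array([[0.5]])
train_leaves = rf.apply(X)    # (n, n_trees): leaf index of each training point
query_leaves = rf.apply(x0)   # (1, n_trees): leaf index of the query point

# Forest weight on training point i: average over trees of
# 1{same leaf as x0} / (size of that leaf)
same_leaf = train_leaves == query_leaves
weights = (same_leaf / same_leaf.sum(axis=0)).mean(axis=1)

print(weights @ y)            # kernel-weighted average of the y_i ...
print(rf.predict(x0)[0])      # ... equals the random forest prediction
```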
Conformal inference tutorial
I am doing some work on conformal prediction methods, which allow one to do predictive, regression-based inference under minimal assumptions. Mostly to help myself understand the methods in algorithmic terms, I created the following tutorial: link.
An accessible introduction is offered in this paper by Lei et al. (2017, arxiv), which accompanies the R package conformalInference (github). They demonstrate conformal inference methods in connection with high-dimensional regression and covariate selection.
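To convey the algorithmic flavor, here is a bare-bones sketch of the split-conformal procedure (my own illustrative code, not from the conformalInference package; the linear model is a placeholder for any regression method):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal(X, y, X_new, alpha=0.1, seed=0):
    """Split-conformal prediction intervals: marginal coverage of at
    least 1 - alpha under exchangeability, whatever model is fit."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    train, calib = idx[: len(y) // 2], idx[len(y) // 2 :]

    # 1. Fit any regression method on the training half
    model = LinearRegression().fit(X[train], y[train])

    # 2. Absolute residuals on the held-out calibration half
    resid = np.abs(y[calib] - model.predict(X[calib]))

    # 3. Conformal cutoff: the ceil((m+1)(1-alpha))-th smallest residual
    m = len(calib)
    k = min(int(np.ceil((m + 1) * (1 - alpha))), m)
    q = np.sort(resid)[k - 1]

    pred = model.predict(X_new)
    return pred - q, pred + q

# Toy usage: check empirical coverage on held-out points
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)
lo, hi = split_conformal(X[:150], y[:150], X[150:])
print(np.mean((lo <= y[150:]) & (y[150:] <= hi)))  # roughly 0.9
```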
In the causal inference literature, Chernozhukov et al. (2017, arxiv) use conformal methods for robust inference with synthetic control and related panel methods. Coauthors and I are doing some more work in this area.
Chernozhukov et al. (2018, arxiv) also have new work extending conformal inference to time series and other dependent-data settings.
Some thoughts on blinding and permutation methods
Here is an interesting Twitter thread on blinding and permutation methods:
a lil thread about the magic of randomization inference and blinded data analysis.@ml_barnett @andrewolenski & i are doing a study where we already know our confidence intervals but have no idea what our point estimates are! and here’s why i think that’s so cool. pic.twitter.com/o5TJ9UUoUH
— Adam Sacarny (@asacarny) November 30, 2018
What Adam is proposing here is related to the “mock analysis” that Humphreys et al. discuss in their 2013 paper on fishing: link to preprint
I have also had recent discussions about this idea with Pieter Serneels and Andrew Zeitlin, who are writing on this topic (look out for their work; I will update with a link to it when it is available).
Generally I think simulation and “mock” analysis are great for checking power and other inferential characteristics. The DeclareDesign project is an attempt to systematize this approach: link.
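As a small illustration of what I mean by simulation for checking power (a sketch of my own, not DeclareDesign code; the design and effect size are made up):

```python
import numpy as np

def simulated_power(n, tau, n_sims=2000, seed=0):
    """Monte Carlo power of a difference-in-means test under a
    constant additive effect tau (normal outcomes, 50/50 assignment)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        y0 = rng.normal(size=n)               # control potential outcomes
        z = rng.permutation(n) < n // 2       # complete randomization
        y = y0 + tau * z                      # observed outcomes
        diff = y[z].mean() - y[~z].mean()
        se = np.sqrt(y[z].var(ddof=1) / z.sum()
                     + y[~z].var(ddof=1) / (~z).sum())
        rejections += abs(diff / se) > 1.96   # two-sided 5% test
    return rejections / n_sims

print(simulated_power(n=100, tau=0.5))  # roughly 0.70 for this design
```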
That said, I think the statistics behind the blinding + permutation approach are a bit more subtle than what Adam’s post suggests. My concerns can be expressed using a toy example. Suppose the following research design:
- We have only two units.
- We run an experiment that randomly (fair coin flip) assigns one unit to treatment and the other to control.
- Control potential outcomes are (0, 5) (that is, for the first unit the outcome is 0 under control and for the second unit the outcome is 5 under control).
- There are two possible treatments that could be assigned. Treatment A has no effect, and so the potential outcomes under treatment A are (0, 5). Treatment B generates an effect such that under treatment B, potential outcomes are (5, 7) (so for the first unit, the effect is 5, and for the second it is 2).
- That being the case, if treatment A were being applied, the experiment would always generate the data (0, 5). If treatment B were being applied, then the experiment would generate either (0, 7) or (5, 5) as data.
- Now, let us suppose that we, the analysts, do not know which of treatment A or B was applied, nor do we know all of the potential outcomes. Rather, all we know is that there are two units, that one was assigned to treatment, and the outcome data. We blind ourselves to which of the two units was assigned to treatment. Ultimately we are interested in learning whether treatment A or B was applied, but for the moment we want to operate in a manner that is blind to treatment assignment so as to figure out a good way to test.
This toy example captures the situation that is relevant in Adam’s illustration of the blinding + permute method. However, it is straightforward to see the problem in this case. If it is indeed the case that treatment B was applied, then the resulting data will not allow us to characterize the null distribution (that is, the distribution that would have arisen had A been applied). Moreover, the resulting data could either over- or under-state the variance of outcomes under the null. That being the case, it seems problematic to adhere too closely to what one learns under blinding + permutation. Rather, I would propose that one use it only to get “ballpark” ideas of how different estimation strategies perform, but then for more refinement, I think you’d have to either use analytical results or try simulating data under different assumptions on the potential outcomes.
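To make this concrete, here is the toy example in code, enumerating the two possible assignments under treatment B:

```python
import numpy as np

# Toy design from above: two units, one randomly assigned to treatment.
y0 = np.array([0, 5])   # control potential outcomes
yB = np.array([5, 7])   # potential outcomes under treatment B (effects 5 and 2)

def observed(treated, y_treated):
    """Observed data when unit `treated` receives the treatment."""
    y = y0.copy()
    y[treated] = y_treated[treated]
    return y

# Under the null (treatment A, no effect) the data are always (0, 5),
# so the permutation distribution of the difference in means is {-5, +5}.
# Now suppose treatment B was in fact applied:
for treated in (0, 1):
    y = observed(treated, yB)
    perm_diffs = [y[0] - y[1], y[1] - y[0]]  # both blinded labelings
    print(f"unit {treated} treated: data {y}, permuted diffs {perm_diffs}")
# unit 0 treated: data [5 5], diffs [0, 0]   -> understates the null spread
# unit 1 treated: data [0 7], diffs [-7, 7]  -> overstates the null spread
```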
Notes on matrix completion methods
(Note: some typos in the notes corrected now.)
Below, I have posted some notes on matrix completion, inspired by this great Twitter thread by Scott Cunningham:
I've been working on a matrix completion project for a while; ever since I saw Athey, Bayati, Doudchenko, Imbens, and Khosravi 2017 paper (now updated at NBER 2018). I thought I'd share what I've learned, which is still very primitive. https://t.co/Z8eZPXEefM
— scott cunningham (@causalinf) November 26, 2018
Have a look at Scott’s thread first. Also, have a look at the material that he posted. Then, the following may be helpful for further deciphering the methods (in formats friendly for online and offline reading):
- HTML: matrix-completion
- PDF: matrix-completion
Update: I had a very useful Twitter discussion with @analisereal on the identification conditions behind matrix completion for estimating the ATT. Here is the thread; I am updating the notes to incorporate these points:
What are the identification conditions for these methods to work?
— Análise Real (@analisereal) November 28, 2018
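For concreteness, here is a bare-bones sketch of the nuclear-norm/soft-impute idea underlying these estimators (my own illustrative code in the spirit of Mazumder et al.'s 2010 soft-impute algorithm, not the Athey et al. implementation; the toy panel and penalty value are made up):

```python
import numpy as np

def soft_impute(Y, observed, lam, n_iter=200):
    """Nuclear-norm-regularized matrix completion via iterative
    SVD soft-thresholding. Y: outcome matrix; observed: boolean mask."""
    L = np.zeros_like(Y)
    for _ in range(n_iter):
        # Keep observed entries, fill missing ones with current estimate
        filled = np.where(observed, Y, L)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)         # soft-threshold singular values
        L = (U * s) @ Vt
    return L

# Toy panel: N units x T periods, rank-2 signal, with a block of
# "treated" unit-periods held out, as in the panel causal setting.
rng = np.random.default_rng(0)
N, T = 40, 30
F = rng.normal(size=(N, 2)) @ rng.normal(size=(2, T))
Y = F + rng.normal(scale=0.1, size=(N, T))
observed = np.ones((N, T), dtype=bool)
observed[:10, 20:] = False                   # treated block: last periods

L_hat = soft_impute(Y, observed, lam=1.0)
# Imputed L_hat[:10, 20:] estimates the untreated counterfactual Y(0)
print(np.abs(L_hat[:10, 20:] - F[:10, 20:]).mean())
```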
Descriptive quantitative work in political science
Here is a roundup of replies to a question I posted on Twitter regarding descriptive quantitative research in political science:
What’s your favorite example of a descriptive quantitative paper in political science—not trying to estimate a causal effect or fit a model, but rather use good measurement to challenge conventional wisdom about state of the world?
— Cyrus Samii (@cdsamii) May 11, 2018
Outside political science, I can think of a number of examples, although I was interested in political examples per se, and particularly ones that are published as papers:
i like this one quite a bit: https://t.co/9avMy2y6kB
— Josh McCrain (@joshmccrain) May 11, 2018
I can think of many in econ too—eg, “Economic Lives of the Poor”, “Financial Diaries”, or Bloom/Van Reenan management stuff. I am specifically wondering about poli sci though.
— Cyrus Samii (@cdsamii) May 11, 2018
i did a political science version of the AER lobbying papers and is definitely descriptive: https://t.co/GUdFh8cQR8
— Josh McCrain (@joshmccrain) May 11, 2018
Hands down: Blattman, C., & Miguel, E. (2010). Civil war. Journal of Economic literature, 48(1), 3-57.
— Mark Shadden (@MarkShadden1) May 11, 2018
In economics, I would say the classic "Law and Finance". In poli sci, definitely Wand et al. (2001) on butterfly ballots (is this a descriptive paper?)
— Ye Wang (@yezhehuzhi) May 11, 2018
One thing that distinguishes poli sci from, say, econ is that poli sci has lots of books, many of which contain important descriptive work, as in this:
No a paper, but I think the descriptive sections of Unequal Democracy by Bartels are amazing.
— Tiago Ventura (@_Tiagoventura) May 11, 2018
Nonetheless, I was mostly interested in work published in paper form.
An important class of measurement contributions in poli sci include dimension reduction, scaling, and latent variable estimation methods. This includes things like ideal point estimation as well as analyses of text:
- Example 1:
Wouldn’t a lot of the text as data stuff fit here?
— Claire Adida (@ClaireAdida) May 11, 2018
Yes for sure. Do you have a favorite?
— Cyrus Samii (@cdsamii) May 11, 2018
clearly, anything written by @mollyeroberts.
— Claire Adida (@ClaireAdida) May 11, 2018
King, Pan, Roberts (2013) on Chinese censorship. There are causal interpretations in there, but it's mostly just a beautiful descriptive paper.
— Daniel de Kadt (@dandekadt) May 11, 2018
- Example 2:
Poole and Rosenthal (1985)
— Ryan D. Enos (@RyanDEnos) May 11, 2018
- Example 3:
"Democracy as a Latent Variable" by Treier and @SimonJackman is a very nice (dare I say 'important'?) piece in this area https://t.co/aftw9uolb1
— Arthur Spirling (@arthur_spirling) May 11, 2018
- Example 4:
Farris 2014 in the APSR
— Tore Wig (@torewig) May 11, 2018
Fariss (2014) https://t.co/R48tzm6OW4
— Yonatan Lupu (@yonatanlupu) May 11, 2018
(Chris’s last name is spelled Fariss, by the way.)
Poli sci scholars have also done a lot to elaborate small area estimation techniques and use them in analyzing survey data, as with the “MRP” papers, e.g.:
public opinion papers using MRP, especially Broockman and Skovron https://t.co/Hyok7uJpDy
— Alexander Sahn (@sahnicboom) May 11, 2018
Taxonomy, that is, organizing cases on the basis of conceptual categories, is another class of measurement-related work:
This is a nice example of simple classification. It's straightforward and improves on other classifications. We don't do too much taxonomy these days. https://t.co/fXQcSo9Myc
— Peter Loewen (@PeejLoewen) May 11, 2018
Sometimes descriptive work can indirectly inform causal questions:
Ansolabehere & Snyder paper showing overtime trends in inc. adv. for all statewide offices are identical to those for Congress. Purely descriptive, but suggests it's unlikely change in congressional inc. adv. is due to gerrymandering since the other offices have fixed districts.
— Ethan BdM (@ethanbdm) May 11, 2018
What I was most interested in were creative contributions that don’t apply especially new statistical methods, but are the result of shoe-leather effort that allows us to view important dynamics more clearly. Examples:
Gelman/Margalit's penumbras paper has considerable amounts of interesting descriptive stats. Not exactly challenging conv. wisdom, but has interesting implications. https://t.co/Y0tsdvaBrs.
— Michael Aklin (@MichaelAklin) May 11, 2018
Converse 64
— Yphtach Lelkes (@ylelkes) May 11, 2018
“Why is there so little money in politics?” is another great example
— Andy Hall (@andrewbhall) May 11, 2018
McDonald, M.P. and Popkin, S.L., 2001. The myth of the vanishing voter. American Political Science Review, 95(4), pp.963-974. @ElectProject
— Eric D. Lawrence (@eric_d_lawrence) May 11, 2018
I’d suggest “the rational public” and other work on ‘mood’ that demonstrates (not causally) that collective public opinion seems to be thermostatic, eg wlezien 1995
— Tom O'Grady (@DrTomD_OG) May 11, 2018
Not published yet but https://t.co/YRCcAItVcT
— Kevin Munger (@kmmunger) May 11, 2018
Here’s a “hard copy” of this post (which I will update again after all edits are in), for archival purposes, in anticipation of potential Twitter link instability: [PDF]