Methods for situating a scholar in their field

I am putting these notes here to remind myself of the steps, and also in case others are curious about doing something similar.

Suppose we want to situate a scholar in their field, for example as part of a tenure review case. One way to do that is to look at the scholar’s papers and see who they are citing:

  1. Go to their Google Scholar profile and pull up their papers. Choose some of their most cited papers (reflecting how others see the scholar’s contributions) and some of their most recent papers (reflecting their current thinking).

  2. Construct the network of people that the scholar references in their most prominent work.

A low-tech way to do this is to copy/paste bibliographies from the papers into https://anystyle.io/ to convert them into a machine-readable format (e.g., BibTeX). I like to tag the entries from each paper’s bibliography with the date of that paper’s publication (e.g., by adding a custom field to the BibTeX file) so that I can sort and see how the scholar’s reference base has changed over time. Compile the different bibliographies into a library in a reference manager. If you keep duplicate entries, you get a sense of the scholar’s key points of reference.
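For example, a tagged entry in the compiled BibTeX file might look like the following (the entry is made up, and the citingyear field name is just my own convention; any nonstandard field that your reference manager preserves will do):

    @article{smith2005example,
      author      = {Smith, Jane},
      title       = {An Example Reference},
      journal     = {Journal of Examples},
      year        = {2005},
      citingyear  = {2021}
    }

Here citingyear records the publication date of the scholar’s paper whose bibliography the entry came from, not the date of the referenced work.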

A higher-tech way to do this is to use the Connected Papers app. You can look at the graph to find well-cited work that the scholar tends to reference.

UPDATE (12/14/23): The “InfluenceMap” project allows for creating an influence diagram (people that the scholar draws on, and then people who cite the scholar): [link]

  3. Pare down the list to seminal contributions. E.g., keep only entries from relevant general-interest and field journals that are highly cited.

Now some analyses:

First, who appears most often in the library? What does the work of these primary referents represent in the literature, and how does the scholar’s own work relate to it?

Second, whose work is being referenced at different times over the course of the scholar’s career (I do this using the custom field described above)? What does this say about how the scholar’s work has evolved alongside the reference literature?
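A minimal R sketch of both counts, assuming the compiled library sits in a file called scholar_library.bib (a made-up name) and that the custom citingyear field from above survives parsing (the bib2df package is one way to read a .bib file into a data frame; adjust to whatever your reference manager exports):

    library(bib2df)   # parses a .bib file into a data frame (one row per entry)
    library(dplyr)
    library(tidyr)

    refs <- bib2df("scholar_library.bib")   # made-up file name

    # First analysis: who appears most often across all of the bibliographies?
    # AUTHOR is a list-column, with one character vector of names per entry.
    refs %>%
      select(AUTHOR) %>%
      unnest(cols = AUTHOR) %>%
      count(AUTHOR, sort = TRUE) %>%
      head(20)

    # Second analysis: top referents by the date of the citing paper,
    # assuming bib2df exposes the custom field as a CITINGYEAR column.
    refs %>%
      select(CITINGYEAR, AUTHOR) %>%
      unnest(cols = AUTHOR) %>%
      count(CITINGYEAR, AUTHOR, sort = TRUE)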

As far as I know, the steps above are not as well-automated as methods to see who else is citing the scholar’s work (there are numerous tools to do that, like the “scholar” package in R). Would love to see someone do it (and welcome any suggestions below).
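For comparison, here is the kind of query those existing tools automate for the reverse direction (a minimal sketch with the scholar R package; the profile ID below is a placeholder, and the column names are from memory):

    library(scholar)   # CRAN package for querying Google Scholar profiles

    id <- "XXXXXXXXXXXX"              # placeholder Google Scholar profile ID
    pubs <- get_publications(id)      # titles, venues, years, and citation counts

    # Most-cited papers (also useful for step 1 above)
    head(pubs[order(-pubs$cites), c("title", "journal", "year", "cites")], 10)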


Readings on statistical discrimination and inefficiency

A tweet by Sarah Jacobson prompted a few discussion threads on current perspectives on statistical discrimination and efficiency/inefficiency. Here is the original tweet:

I have collected references to some of the papers that discussants mentioned as providing more refined takes on the original analyses by Arrow and by Aigner and Cain:

  • Lundberg, Shelly J., and Richard Startz. “Private Discrimination and Social Intervention in Competitive Labor Markets.” The American Economic Review 73.3 (1983): 340-347.
  • Schwab, Stewart. “Is Statistical Discrimination Efficient?” The American Economic Review 76.1 (1986): 228-234.
  • Coate, Stephen, and Glenn C. Loury. “Will Affirmative-Action Policies Eliminate Negative Stereotypes?” The American Economic Review 83.5 (1993): 1220-1240.
  • Bohren, J. Aislinn, et al. “Inaccurate Statistical Discrimination.” NBER Working Paper No. w25935, 2019.
  • Lang, Kevin, and Ariella Kahn-Lang Spitzer. “Race Discrimination: An Economic Perspective.” Journal of Economic Perspectives 34.2 (2020): 68-89.
  • Komiyama, Junpei, and Shunya Noda. “On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach.” arXiv preprint arXiv:2010.01079 (2020).
  • Fosgerau, Mogens, Rajiv Sethi, and Jorgen W. Weibull. “Costly Screening and Categorical Inequality.” April 21, 2021. Available at SSRN: https://ssrn.com/abstract=3533952 or http://dx.doi.org/10.2139/ssrn.3533952

Design-Based Inference for Spatial Experiments with Interference

Excited to share “Design-Based Inference for Spatial Experiments with Interference”, joint with Peter M. Aronow and Ye Wang: arxiv

In settings with complex spatial effects and interference, the paper defines a type of marginal effect, the “average marginalized response” (AMR), that has a clear interpretation and can be identified with a spatial experiment and a simple contrast.
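Roughly speaking (this glosses over the paper’s setup and uses my own shorthand rather than the paper’s notation), let Y_i(d; z) denote the average outcome observed at distance d from intervention point i under the assignment vector z. The AMR at distance d contrasts point i’s treated and control conditions while marginalizing over the assignments of all the other points, averaged over the N intervention points:

    \mathrm{AMR}(d) \;=\; \frac{1}{N} \sum_{i=1}^{N}
      \mathbb{E}_{\mathbf{z}_{-i}}\!\left[
        Y_i(d;\, z_i = 1, \mathbf{z}_{-i}) \;-\; Y_i(d;\, z_i = 0, \mathbf{z}_{-i})
      \right]

A randomized spatial experiment then identifies this quantity with a simple treated-versus-control contrast at each distance d.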

It took time to work out the details for robust inference; we finally got there when Ye worked out reasonable conditions that justify the spatial HAC variance estimator, and by connecting to a breakthrough CLT result from Ogburn et al. (2020; arxiv link).

We are working on the public release of the R package and also a more didactic paper that walks through applications. Stay tuned for those.


Using pre-analysis plans to learn better and to learn together

Below is a Twitter thread in which I offer a perspective, based on my experience with EGAP (egap.org), on how to make effective use of pre-analysis plans and research designs. The basic idea is that your research design and pre-analysis plan should serve as the basis for a discussion in which you can refine your design and analysis and gain buy-in from skeptics. A research design or pre-analysis plan that is never discussed publicly before it is implemented is a huge missed opportunity.

The thread was in response to a paper by Duflo et al. (linked in the thread), who focus mostly on pre-analysis plans as ways to bind yourself, without giving much consideration to using them as the basis for an ex ante conversation about the research.

The thread is here:


Open source environments for structural estimation

If you click on the tweet below, you will get a conversation on open source options (essentially Python, Julia, and R) for students interested in getting started with structural estimation:

Among other things, people pointed to the following resources to get you started:
