Unification of the social sciences? Gintis’s “The Bounds of Reason”

I recently finished Herbert Gintis’s The Bounds of Reason (link), in which the author tries to bring the foundations of game theory into better alignment with actual social behavior and make a case for the unification of the social and behavioral sciences (economics, psychology, sociology, and political science). Gintis suggests that certain dividing lines drawn between human behavior and that of other living beings are indefensible: traits commonly construed as distinctively human, such as language and property rights, in fact appear among other animals as well.

Gintis introduces behavioral postulates that go beyond canonical game-theoretic analyses, such as allowing rational agents to pursue goals beyond narrow self-interest. The conflation of rationality with narrow self-interest is a major peeve for anyone with more than a cursory knowledge of rational choice theory, myself included, and Gintis does a good job of exposing this fallacy. He also discards improbable or illogical constructs such as deep backward induction or actual randomization in mixed strategies. He shows that by reconfiguring the rudiments in this way, one can carry out even more compelling and true-to-life analyses of strategic problems like coordination and commitment without losing tractability. Gintis thus responds to Ariel Rubinstein’s famous statement that “models in economic theory … are not meant to be testable” (quoted on p. 129), calling Rubinstein “dead wrong: the value of a model is its contribution to explaining reality, not its contribution to society’s stock of pithy aphorisms” (p. 129).

Gintis also provides a spirited critique of methodological individualism, another totem to which many assume rational choice theory is tethered. Methodological individualism cannot properly account for the emergence of social norms, social cues, or frames, yet norms, cues, and frames are fundamental to social behavior. The repeated-games literature has been the primary venue for methodological individualists to explain cooperative behavior. But Gintis demonstrates that attempts to loosen the strict knowledge assumptions that underpin the classical folk theorem have failed to produce compelling explanations for cooperative equilibria. This impels one to consider other sources of cooperative behavior, such as the hard-wiring of other-regarding preferences, perhaps through evolutionary processes, and “social choreography” through prescriptive norms. Gintis proposes that Robert Aumann’s concept of correlated equilibrium provides an analytical foundation for understanding the operation of social norms.
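
To make the correlated-equilibrium idea concrete, here is a minimal sketch of my own (the game and numbers are illustrative, not taken from the book): in a chicken-style coordination game, a public signal (think of it as the social norm) recommends an action to each player, and the norm is self-enforcing when no player gains by disobeying their recommendation.

```python
import numpy as np

# Hypothetical illustration (not from the book): a "chicken"-style game in which
# a public signal (the norm, or Aumann's "choreographer") recommends actions.
# Row player payoffs u1[a1, a2], column player payoffs u2[a1, a2];
# actions are 0 = Dare, 1 = Chicken.
u1 = np.array([[0.0, 7.0],
               [2.0, 6.0]])
u2 = np.array([[0.0, 2.0],
               [7.0, 6.0]])

# Joint distribution over recommended action profiles (a1, a2).
# Putting 1/3 each on (Dare, Chicken), (Chicken, Dare), and (Chicken, Chicken)
# is the textbook correlated equilibrium of this game.
sigma = np.array([[0.0, 1/3],
                  [1/3, 1/3]])

def is_correlated_equilibrium(sigma, u1, u2, tol=1e-9):
    """Check the correlated-equilibrium incentive constraints: no player gains
    by deviating from a recommended action, given the conditional distribution
    over the other player's recommendation."""
    n1, n2 = sigma.shape
    # Player 1: for each recommended action with positive probability...
    for a1 in range(n1):
        prob = sigma[a1, :].sum()
        if prob <= tol:
            continue
        cond = sigma[a1, :] / prob          # beliefs about player 2's action
        follow = cond @ u1[a1, :]           # expected payoff from obeying
        for d in range(n1):                 # ...no deviation should do better
            if cond @ u1[d, :] > follow + tol:
                return False
    # Player 2: the symmetric check over columns.
    for a2 in range(n2):
        prob = sigma[:, a2].sum()
        if prob <= tol:
            continue
        cond = sigma[:, a2] / prob
        follow = cond @ u2[:, a2]
        for d in range(n2):
            if cond @ u2[:, d] > follow + tol:
                return False
    return True

print(is_correlated_equilibrium(sigma, u1, u2))  # True: the "norm" is self-enforcing
```

Under this particular distribution each player expects a payoff of 5, which (if I have the algebra right) beats the 14/3 each would get in the symmetric mixed-strategy Nash equilibrium of the same game; that gap is one way of seeing why a norm-as-choreographer can do coordination work that individual randomization cannot.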

The book is technically demanding at times, and the rather frequent typos don’t help to make it digestible. But it is well worth the attention of researchers across the social sciences.

In fact, I’d be most interested to hear what social psychologists have to say about the book. Social psychology is a branch of the social sciences where formal decision- and game-theoretic modeling has not caught on. But it is also a branch from which economists and political scientists seem to be drawing a lot of inspiration these days. By Gintis’s logic, the translation of social psychological insights into formal decision- and game-theoretic models represents a crucial scientific step forward. I wonder whether any of this work of synthesis and formalization will be done by social psychologists themselves.


What a solid theoretical framework does for you

Paul Krugman had a superb paragraph in a column last week [link], explaining how we can’t get away from theoretical models:

[W]henever somebody claims to have a deeper understanding of economics (or actually anything) that transcends the insights of simple models, my reaction is that this is self-delusion. Any time you make any kind of causal statement about economics, you are at least implicitly using a model of how the economy works. And when you refuse to be explicit about that model, you almost always end up – whether you know it or not – de facto using models that are much more simplistic than the crossing curves or whatever your intellectual opponents are using.


This came to mind today as I was commenting on some student work and felt the need to explain how important it is to have a strong theoretical foundation even if you are working with a well-identified experiment or natural experiment. Here’s what I wrote (with specific references to the paper removed):

Developing a theoretical framework is important for lots of reasons. First, it provides a basis both for deriving hypotheses coherently and for setting us up to draw out implications from the results of the empirical analysis. Right now the hypotheses sort of come from nowhere, based on some intuitions. But this is inadequate motivation. How would evidence in favor of (or against) these hypotheses affect the implicit or explicit models that we rely on to form expectations about the phenomenon you are studying? Second, it helps people who do not care about the specific application you are studying to take an interest in your research nonetheless. Identifying the relevant theoretical framework is a way of addressing the all-important question, “what more general thing is this a case of?” (Sorry for the hanging preposition.) We want to reduce the specific, applied problem to something that can be analyzed in a general way, such that the results of this particular study have implications for other types of actors in other types of situations.

Well that’s how I see it at least.


“Assumptions are self destructive in their honesty”

Here’s a great quote from Pearl and Bareinboim (2014, p. 2) [link] in their analysis of “external validity” and the conditions that allow one to transport the results of a causal analysis from one context to another:

[The literature on external validity] consists primarily of threats, namely, explanations of what may go wrong when we try to transport results from one study to another while ignoring their differences. Rarely do we find an analysis of “licensing assumptions,” namely, formal conditions under which the transport of results across differing environments or populations is licensed from first principles.

The reasons for this asymmetry are several. First, threats are safer to cite than assumptions. He who cites “threats” appears prudent, cautious and thoughtful, whereas he who seeks licensing assumptions risks suspicions of attempting to endorse those assumptions.

Second, assumptions are self destructive in their honesty. The more explicit the assumption, the more criticism it invites, for it tends to trigger a richer space of alternative scenarios in which the assumption may fail. Researchers prefer therefore to declare threats in public and make assumptions in private.

Third, whereas threats can be communicated in plain English, supported by anecdotal pointers to familiar experiences, assumptions require a formal language within which the notion “environment” (or “population”) is given precise characterization, and differences among environments can be encoded and analyzed.


There are so many truths in there that extend beyond research on external validity.
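
To give a flavor of what such a formal licensing condition looks like (this is my gloss on the transportability framework, with illustrative notation, not a quote from the paper): in the simplest case, if a set of covariates Z captures all the relevant differences between the source and target populations, the causal effect transports by reweighting.

```latex
% A sketch of the simplest transport formula (my paraphrase, notation is illustrative).
% P is the study (source) population, P^* the target population, X a treatment,
% Y an outcome, and Z covariates assumed to account for the relevant differences
% between the two populations -- that assumption is the "license."
P^{*}(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P(y \mid \mathrm{do}(x), z)\, P^{*}(z)
% The claim that Z is admissible is exactly the kind of explicit, criticizable
% assumption the authors argue researchers prefer to make in private.
```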


When teaching people that risks are greater than they thought leads to more risk taking

A working paper by UMich grad student Jason Kerwin considers how “fatalistic” thinking can lead even rational individuals to increase risky behavior when they learn that risks are higher than they thought. It sounds crazy, but the investigation is motivated by some interesting empirical patterns, including the fact that in a recent survey in Malawi, overestimation of the HIV risk from unprotected sex was associated with higher rates of engagement in unprotected sex. How can this be? Kerwin crystallizes the logic as follows. Suppose that you tell a potential risk taker that the true risk of contracting HIV is higher than they thought. Well,

a change in the per-act risk affects not only the marginal cost of the acts the agent is deciding over, but also a stock of previously-chosen acts over which one no longer has any control. If an agent’s perceived per-sex-act risk of contracting HIV rises, this has a direct effect of increasing the marginal cost (in expected utility) of having more risky sex. But it also increases the probability that the agent already has HIV, which decreases the marginal cost of more risky sex. When the second effect dominates, increases in perceived risks will lead to more risk-taking rather than less.


Kerwin develops the logic formally by modeling the perceived risk of HIV infection in terms of a cumulative distribution function that incorporates not only the next act in question but all past acts. Such CDFs typically have inflection points and become concave in their upper reaches. So an upward shock to someone’s belief about where they currently stand can shrink the added risk from the next act relative to benefits whose value is unaffected by the belief shock. When this occurs, the effect of increasing perceived risk is to increase the attractiveness of the risky behavior.
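
A toy calculation (my own illustration with made-up numbers, not Kerwin’s actual model) shows the mechanism. If each act independently transmits the virus with perceived per-act probability p, then after n past acts the cumulative risk is 1 - (1 - p)^n, and the incremental risk from one more act is p(1 - p)^n, which eventually falls as p rises once the stock of past acts is large.

```python
# Toy illustration of the fatalism mechanism (my numbers, not Kerwin's model).
# Assume each act independently transmits HIV with perceived probability p,
# and the agent has a stock of n past acts that can no longer be undone.
# Cumulative infection risk after n acts: 1 - (1 - p)**n.
# Incremental risk from one *additional* act: p * (1 - p)**n.

def incremental_risk(p, n):
    """Added infection probability from one more act, given n past acts."""
    return p * (1.0 - p) ** n

n_past_acts = 50
for p in [0.005, 0.01, 0.02, 0.05, 0.10]:
    print(f"perceived per-act risk {p:.3f} -> "
          f"cumulative risk {1 - (1 - p) ** n_past_acts:.3f}, "
          f"marginal risk of one more act {incremental_risk(p, n_past_acts):.4f}")

# With n = 50, raising perceived p from 0.02 to 0.10 *lowers* the marginal risk
# of the next act (the agent is very likely already infected), so if the act's
# benefit is unchanged, a higher perceived risk can make the act more attractive.
```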

Paper here: link


Conference and workshop formats that work

I’ve found myself explaining these things to colleagues a few times in the past month, with the response always being “wow, that makes a lot of sense — I never considered that” — so I thought I’d try to share more broadly here.

After attending lots of conferences and workshops, I have learned which formats actually work to produce meaningful discussions of papers and thus useful feedback for presenters. I am not alone in this — everyone, and I mean everyone, with whom I have used these formats agrees. The ideas aren’t my own but rather inherited from colleagues who participate in EGAP (link) and CAPERS (link) and who also come together on occasion at the big conferences like APSA and MPSA.

In my home discipline, political science, conferences and workshops are usually organized around the following two “traditional” formats:

  1. Short-format panel presentations with a discussant: typically a 1-2 hour session in which the authors of 3-5 papers are given about 15 minutes each to present, followed by a discussant or two providing summary comments on all of the papers, followed by floor Q&A.
  2. Long-format presentation with a discussant: typically about a one-hour session in which the author of a paper is given about 30 minutes to present, followed by discussant comments and then floor Q&A.

I don’t know anyone who likes format 1, mostly because of the ridiculous way that the discussant role is defined and the fact that the Q&As tend to jump around between papers rather than following any intellectual progression. Format 2 makes sense for really big plenary-type talks, but when the group is smaller, it’s a highly inefficient way to use an hour if the goal is actually to engage with the material.

Here are some alternative formats that tend to generate much deeper discussions in the same amount of time:

  1. Short-format panel redux: Take the session and divide the time evenly into blocks for each of the papers. So a two-hour session with five papers has 24 minutes per paper. Then, in each block, the presenter starts by presenting for a bit less than half the time, followed immediately by the discussant for that paper and then immediately by floor Q&A for that paper. Also, instead of one discussant for all papers, a nice thing to do is to have a “discussant round robin”: each paper presenter serves as discussant for someone else’s paper. You can use the round robin to assign discussants in a way that emphasizes overlapping interests. We used this on all of my MPSA panels this year and it was SO MUCH BETTER! If you are really organized, an even better thing to do is to coordinate in advance with both the panel participants and those who will attend in the audience. Among that group, you commit to read the papers before the panel. Then you can skip the paper presentation altogether and instead lead with the discussant, who provides a short summary of the paper followed by comments to get a conversation going, all in less than 10 minutes. Then you have a good chunk of time for an open discussion of the paper. This is the way to really get a lot out of 24 minutes. It is also a miniature version of the “no presentation” long format, to which I now turn.
  2. The “no presentation” long format: This is the best way that I have experienced to have a deep discussion of new academic work. EGAP, CAPERS, and NEWEPS (link) are organized around this format. It requires that all those attending the workshop/conference do some homework before arriving. The format is simple: there is no author presentation; rather, an entire hour is devoted to a discussion with the author about the paper, which everyone has read in advance of the meeting! You can have a discussant who “gets the ball rolling” by providing a really short summary of the paper and offering some starting comments or questions. That’s how CAPERS and NEWEPS work, although EGAP doesn’t even do that. You might think that this format will tend to result in a bunch of people sitting in silence for an hour. But I can tell you that has never happened. Sometimes it takes a little time for the discussion “momentum” to build, but when it does it is always energetic, and there is always the feeling that we wish we had more time to discuss (that’s a sign of a good discussion!). The format is fueled by a strong ethic among the group of reading and critically engaging with papers prior to arriving at the meeting.

Either of these formats benefits greatly from the following:

  • Session chairs who are dynamic in promoting the discussion and managing time. Whereas the standard formats privilege presentations and leave floor discussion as an afterthought, these revised formats do the opposite. For that reason, the role of the chair is really important. The chair needs to scan the room actively and maintain a list of people wishing to raise a question or comment. The chair can also help to clarify questions or comments that paper authors misunderstand or address inadequately.
  • Rules for managing the discussion. It is very useful to adopt what are known as the “one finger” and “two finger” rules. (I’m not sure where these rules originate, but I’ve seen them used in settings ranging from academic workshops to formal conferences at the United Nations.) The session chair manages a list of people who want to ask a question or make a comment to the author. To indicate to the chair that you want to be added to the list, you show one finger. The session proceeds with the chair going down the list, allowing each person to ask their question or make their comment, and then allowing the author to respond. But if you want to contribute to the discussion at that moment (rather than waiting for your turn on the list), you signal to the chair with two fingers. The chair then has the option to suspend the list for the moment and take two-finger comments or questions. This is useful when people want to dig deeper on a point that is being discussed at the moment. When the session is nearing the time limit, the chair has the option to declare “no more two fingers” and even to tell the author to withhold any responses so that the list of one-finger questions and comments can be cleared. It might sound a little rigid, but the rules work really well in keeping the discussion lively and on track.
  • Keeping it manageable and fun. For NEWEPS and CAPERS, we’ve established that we are going to limit things to four papers per meeting. That is the maximum that members of the working groups think that they can really commit to read, and read deeply, in advance of the meeting. So, NEWEPS and CAPERS are organized as semi-annual (Fall and Spring), four-paper meetings that kick off with lunch, followed by the four sessions (with a short break in the middle), and then end with a group dinner. That makes it a manageable, engaging, and fun format.

I find these revised formats to be so much better than the traditional ones that I actually feel sorry whenever the traditional formats are still used.
