Book Notes: Moral Tribes


  • Josh Greene has had a really strong influence on my thinking, both through interactions with Harvard EA and through his books, papers and lectures.
  • A good summary talk for the book is on YouTube; there is also a longer one.
  • Some related things
    • The Turing Test with Josh Greene
    • Baron’s Thinking and Deciding
    • Consequentialism FAQ
    • 80,000 Hours podcast with Will MacAskill
    • Nick Beckstead’s thesis
    • The Righteous Mind
    • Selim Berker
    • Expanding Circle
    • Scope Insensitivity

SlateStarCodex says of Josh (from the Asilomar conference on AI):

“My own table was moderated by a Harvard philosophy professor [in fact, Josh transitioned to psychology] who obviously deserved it. He clearly and fluidly expressed the moral principles behind each of the suggestions, encouraged us to debate them reasonably, and then led us to what seemed, in hindsight, the obviously correct answer. I’d already read one of his books, but I’m planning to order more. Meanwhile, according to my girlfriend, other tables did everything short of come to blows.”



Introduction: The Tragedy of Commonsense Morality


Moral Problems

  1. The Tragedy of the Commons
  2. Moral Machinery
  3. Strife on the New Pastures


Morality Fast and Slow

  4. Trolleyology
  5. Efficiency, Flexibility, and the Dual-Process Brain


Common Currency

  6. A Splendid Idea
  7. In Search of Common Currency
  8. Common Currency Found


Moral Convictions

  9. Alarming Acts
  10. Justice and Fairness


Moral Solutions

  11. Deep Pragmatism
  12. Beyond Point-and-Shoot Morality: Six Rules for Modern Herders



The Moral Brain as a Camera

Core metaphor: the moral brain is like a camera with two modes, automatic (System 1) and manual (System 2).

Intuitive Cooperation

Greene cites an interesting study finding that the more time people were given to think in public goods games, the more likely they were to defect. In other words, the fast gut reaction was more cooperative than the considered System 2 response.
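The incentive structure behind the defection finding can be made concrete. Below is a minimal sketch of a public goods game’s payoffs (not from the book; the endowment and multiplier values are illustrative assumptions): each player keeps whatever they don’t contribute, the pot is multiplied and split equally, so defecting dominates individually even though full cooperation maximizes the group total.

```python
def public_goods_payoffs(contributions, endowment=10, multiplier=2.0):
    """Payoffs in a one-shot public goods game.

    Each player starts with `endowment` and contributes some amount to a
    common pot. The pot is multiplied by `multiplier` and split equally.
    Contributing 0 is individually optimal (each contributed unit returns
    only multiplier/n to the contributor), but universal cooperation
    maximizes the group total -- the tragedy of the commons in miniature.
    """
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Four players all cooperate fully: each ends up with 20.
print(public_goods_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]

# A single defector free-rides and out-earns the cooperators.
print(public_goods_payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
```

The defector’s edge (25 vs. 15) is exactly the tension the time-pressure study probes: deliberation gives System 2 the chance to notice it.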

A taxonomy of moral emotions

Effects of physical distance on moral judgment

Greene cites a really interesting experiment on the role of various factors in Drowning Child-like scenarios. I think this is a really neat way of illustrating that what’s going on is a rationalization of a dumb reflex to physical distance, rather than some deep philosophical objection. (Or perhaps there are good reasons for considering distance that I’m not seeing? E.g., as a heuristic, longer distance goes with worse information. That was controlled for in the experiment, but reality might not bother with these controls.)

In our experiments, the factor that had the biggest effect by far was physical distance. For example, in one of our scenarios you’re vacationing in a developing country that has been hit by a devastating typhoon. Fortunately for you, you’ve not been affected by the storm. You’ve got a cozy little cottage in the hills overlooking the coast, stocked with everything you need. But you can help by giving money to the relief effort, which is already under way. In a different version of this scenario, everything is the same except that, instead of being there yourself, it’s your friend who is there. You’re at home, sitting in front of your computer. Your friend describes the situation in detail and, using the camera and microphone in his smartphone, gives you a live audiovisual tour of the devastated area, re-creating the experience of being there. You can help by donating online.

In response to the versions of this scenario in which you are physically present, 68 percent of our subjects said that you have a moral obligation to help. By contrast, when responding to the versions in which you’re far away, only 34 percent of our respondents said that you have a moral obligation to help. We observed this big difference despite the fact that, in the faraway versions, you have all of the same information and you are just as capable of helping.

It’s worth emphasizing that this study controlled for many of the factors that people typically cite when they resist Singer’s utilitarian conclusion. In none of our scenarios does one have a unique ability to help, as in the case of Singer’s drowning child. In all of our scenarios, the aid is delivered in the same way: through a reputable organization that accepts donations. Our experiments control for whether the aid is sought in response to a specific emergency (drowning child) or to an ongoing problem (poverty). In all of our scenarios, the victims are citizens of foreign countries, thus removing a patriotic reason for helping some more than others. Finally, our experiments control for whether the unfortunate circumstances were brought about by accident rather than by the actions of other people who might bear more responsibility for helping, thus relieving you of yours. In short, there are very few differences between the near and far versions of our helping dilemmas, suggesting that our sense of moral obligation is heavily influenced by mere physical distance or other factors along these lines.*

Consider, once again, a friend who calls for moral advice: “Should I help those poor typhoon victims or not?” It would be rather strange to respond, “Well, it depends. How many feet away are they?”

Jeremy Bentham on treating gays…

I’d like to see a systematic study of the beliefs of utilitarians vs. people with other moral views. I’m somewhat afraid that the examples are cherry-picked. Reflectiveness and self-skepticism were probably much stronger factors: Franklin, Condorcet, and the abolitionists weren’t utilitarians, but they did share the ability to second-guess intuitions rather than rationalize them.

[…] begins with a willingness to question one’s tribal beliefs. And here, being a little autistic might help. This is Bentham writing circa 1785, when gay sex was punishable by death:

I have been tormenting myself for years to find if possible a sufficient ground for treating [gays] with the severity with which they are treated at this time of day by all European nations: but upon the principle of utility I can find none.


…And Kant on masturbation

Whatever the case with utilitarians, it’s hard to escape the conclusion that Kant was a massive rationalizer. Here he’s trying to justify his beliefs about masturbation:

That such an unnatural use (and so misuse) of one’s sexual attributes is a violation of one’s duty to himself and is certainly in the highest degree opposed to morality strikes everyone upon his thinking of it… However, it is not so easy to produce a rational demonstration of the inadmissibility of that unnatural use, and even the mere unpurposive use, of one’s sexual attributes as being a violation of one’s duty to himself (and indeed in the highest degree where the unnatural use is concerned). The ground of proof surely lies in the fact that a man gives up his personality (throws it away) when he uses himself merely as a means for the gratification of an animal drive.


Confabulation and Rationalization

Korsakoff’s amnesia and damage to the corpus callosum nicely expose how much confabulation is going on in our thinking:

Patients with Korsakoff’s amnesia, for example, will often attempt to paper over their memory deficits with elaborate stories, typically delivered with great confidence and no awareness that they are making stuff up. Neurologists call this “confabulation.” In one study, for example, an amnesic patient seated near an air conditioner was asked if he knew where he was. He replied that he was in an air-conditioning plant. When it was pointed out that he was wearing pajamas, he said, “I keep them in my car and will soon change into my work clothes.” One sees similar effects in “split-brain” patients, people whose cerebral hemispheres have been surgically disconnected to prevent the spread of seizures. With the two cerebral hemispheres disconnected, each half of the brain is denied its usual inside information about what the other half is up to. In one study, a patient’s right hemisphere was shown a snow scene and instructed to select a matching picture. Using his left hand, the hand controlled by the right hemisphere, he selected a picture of a shovel. At the same time, the patient’s left hemisphere, the hemisphere that controls language, was shown a picture of a chicken claw. The patient was asked verbally why he had chosen the shovel with his left hand. The patient (i.e., the patient’s left hemisphere, seeing the chicken claw but not the snow scene) answered, “I saw a claw and picked a shovel, and you have to clean out the chicken shed with a shovel.”


The Speed of Our Unconscious Processing…

Pretty fast:

As Paul Whalen and colleagues have shown, the amygdala can respond to a fearful facial expression after being exposed to it for only 1.7 hundredths of a second. To do this, it uses a neat trick. Instead of analyzing the entire face in detail, it simply picks up on a telltale sign of fear: enlarged eye whites.

…And the limited capacity of our System 2

The people who had memorized seven-digit numbers—the ones under higher cognitive load—were about 50 percent more likely to choose the chocolate cake than the ones who had memorized two-digit numbers.
