Turing Test #4: Mapping the Moral Brain with Josh Greene

[These are old notes on The Turing Test interview with Josh Greene. Not everything here is still endorsed.]

The best blog on the internet, SlateStarCodex, says of Josh:

“My own table was moderated by a Harvard philosophy professor [in fact, Josh transitioned to psychology] who obviously deserved it. He clearly and fluidly expressed the moral principles behind each of the suggestions, encouraged us to debate them reasonably, and then led us to what seemed in hindsight, the obviously correct answer. I’d already read one of his books, but I’m planning to order more. Meanwhile, according to my girlfriend, other tables did everything short of come to blows.”

Recording this episode felt similar. To see Josh’s training in action, jump to the end, where he does a brilliant job of passing the ideological Turing Test.

Below are some notes to set the context of our debate.

Starry Night, Moral Law Within, and Wanton Self-Abuse

Josh opens his early paper, The Secret Joke of Kant’s Soul, with a quote from Immanuel Kant:

“Two things fill the mind with ever new and increasing wonder and awe, the oftener and more steadily we reflect on them: the starry heavens above me and the moral law within me.”

In many cases, this moral law served Kant quite well. As Will MacAskill mentions in an interview, he was “one of the earliest proponents for universal democracy and international cooperation”.

But this inner moral compass was also far from perfect at predicting which views future generations would come to hold as less wrong. Kant’s ethical principles fail the test of time in his now-hilarious essay Concerning Wanton Self-Abuse, where he argues (well, he mostly just asserts) that masturbation is a violation of this innate Moral Law:

“That such an unnatural use (and so misuse) of one’s sexual attributes is a violation of one’s duty to himself and is certainly in the highest degree opposed to morality strikes everyone upon his thinking of it… However, it is not so easy to produce a rational demonstration of the inadmissability of that unnatural use, and even the mere unpurposive use, of one’s sexual attributes as being a violation of one’s duty to himself (and indeed in the highest degree where the unnatural use is concerned). The ground of proof surely lies in the fact that a man gives up his personality (throws it away) when he uses himself merely as a means for the gratification of an animal drive.”

More importantly, Kant also thought that women had no place in civil society, that illegitimate children should receive fewer legal protections, and that there was a ranking in the moral worth of different races.

So it seems quite important to learn when to trust this inner moral compass if you don’t want to be today’s equivalent of a misogynist, a slave trader, or a Nazi. But how?

The historical track record of moral theories is certainly one way to get a sense of the likely direction (hint: it seems hard to err on the side of expanding the moral circle too much. Though listen to our episode with Brian Tomasik and ask yourself whether future generations will actually treat insects as moral patients).

Another approach is to get a science-based manual for our moral brain – to get a sense of where it’s reliable and where we should trust it less. This quest lies at the heart of Josh Greene’s research.

Mapping our Moral Software

For a shorter introduction to Greene’s framework, watch his EA Global talk from 2015.

The basic idea is that the dual-process view of the mind popularized by Kahneman applies to moral cognition as well. In Moral Tribes, Greene uses the metaphor of a camera:

The moral brain is like a dual-mode camera with both automatic settings (such as “portrait” or “landscape”) and a manual mode. Automatic settings are efficient but inflexible. Manual mode is flexible but inefficient. The moral brain’s automatic settings are the moral emotions […], the gut-level instincts that enable cooperation within personal relationships and small groups. Manual mode, in contrast, is a general capacity for practical reasoning that can be used to solve moral problems, as well as other practical problems.

We have fast, automatic gut reactions such as guilt, compassion, gratitude or anger. Or disgust felt towards masturbation, if you share it with Kant. These are handled by our System 1.


But we can also second-guess these reactions with considered judgment. In the “manual mode”, we can think through the data we’re getting from our fast moral machinery.

If you’re not familiar with the dual-process theory, a good place to start is Thinking, Fast and Slow or Rationality: From AI to Zombies. There is also an excellent short (though a bit more technical) description by Paul Christiano, the Monkey and the Machine.

Bugs in the Software

Both modes, automatic and manual, have their bugs.

We like to think of the “rational decision maker”, System 2, as the president in the land of our brain. But as the economist Robin Hanson puts it, System 2 is better thought of as the president’s press secretary – someone who only learns of his boss’s decisions after the fact, and whose job is to come up with a good explanation for them.

Studies of confabulation in split-brain patients illustrate this well:

Some of the most famous examples of confabulation come from “split-brain” patients, whose left and right brain hemispheres have been surgically disconnected for medical treatment. Neuroscientists have devised clever experiments in which information is provided to the right hemisphere (for instance, pictures of naked people), causing a change in behavior (embarrassed giggling). Split-brain individuals are then asked to explain their behavior verbally, which relies on the left hemisphere. Realizing that their body is laughing, but unaware of the nude images, the left hemisphere will confabulate an excuse for the body’s behavior (“I keep laughing because you ask such funny questions, Doc!”).

It’s useful to hold this image in mind when we’re finding reasons why our political tribe of choice (or birth) happens to be right on questions of economic policy, pre-natal development, sex and theology.

But the snap intuitive decisions made in the automatic mode are often laughably (or cryably, depending on your mental predispositions) wrong.

Greene started studying this topic early in his undergraduate years, as an assistant to Jonathan Baron – in my opinion, a hugely underappreciated psychologist. Baron’s book Thinking and Deciding is an excellent introduction to decision making informed by cognitive biases (for LessWrong readers, think of it as the academic equivalent of the Sequences). It also covers inconsistencies in moral reasoning.

In particular, Baron and Greene studied a phenomenon called scope insensitivity. Josh describes it in our interview:

You can ask people things like, “How much would you pay to save 10,000 birds?” But the problem was, people are wildly insensitive to quantity, which is something that is highly relevant to effective altruism. If you ask people how much to save 10,000 birds, they go, oh, this much. And if you say, “How much to save 20,000 birds?” they say the same thing.

In fact, they say the same thing regardless of the number of birds: people pay the same amount for saving 1,000 birds as for 100,000. (For a compressed description, see Eliezer Yudkowsky’s essays Scope Insensitivity or One Life Against the World.)

This is a serious problem. Consider the case of Viktor Zhdanov, who lobbied the WHO to start the smallpox eradication campaign. In the twentieth century alone, smallpox killed 300–500 million people – a far larger toll than all of that century’s wars combined.

Zhdanov’s intervention clearly saved at least tens of millions of lives. But he would probably have gotten the same (perhaps even bigger) emotional reward if he had worked as a doctor in a local Ukrainian hospital. The same goes for Bill Gates: adopting an African child would emotionally register roughly as much as saving millions of African children from a preventable disease. (For the origins of Gates’s focus on global health, listen to our episode with Larry Summers.)

To quote from Yudkowsky’s essay One Life Against the World:

Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.

But if you ever have a choice, dear reader, between saving a single life and saving the whole world – then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.

I would like to join Yudkowsky here: if you’re ever facing the choice, please prevent a global pandemic of a deadly disease rather than a local school outbreak of chickenpox.

Finding a Middle Ground

The data supplied by our moral sentiments is often inconsistent or simply fails to hold up on reflection. In his series No-Nonsense Metaethics, Luke Muehlhauser provides a summary of surprising findings from moral psychology and neuroscience:

  • Whether we judge an action as ‘intentional’ or not often depends on the judged goodness or badness of the action, not the internal states of the agent.
  • Our moral judgments are significantly affected by whether we are in the presence of freshly baked bread or a low concentration of fart spray that only the subconscious mind can detect.
  • Our moral judgments are greatly affected by pointing magnets at the point in our brain that processes theory of mind.
  • People tend to insist that certain things are right or wrong even when a hypothetical situation is constructed such that they admit they can give no reason for their judgment.
  • We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from evolutionarily older parts of our brains.
  • People give harsher moral judgments when they feel clean.

Because of this evolutionary baggage, we shouldn’t trust our moral intuitions very much when it comes to big global problems. Aiding our intuitions with consequentialist means-ends analysis (e.g. by comparing interventions in terms of their impact on Quality-Adjusted Life Years) will often feel wrong, but it provides a consistency that our brains cannot achieve on their own.
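To make the means-ends framing concrete, here is a minimal sketch of a cost-per-QALY comparison. The interventions and all numbers below are made up purely for illustration; they are not real cost-effectiveness estimates.

```python
# Hypothetical cost-per-QALY comparison; every figure here is invented for illustration.
interventions = {
    "bed-net distribution": {"cost_usd": 100_000, "qalys_gained": 900},
    "hospital ward upgrade": {"cost_usd": 100_000, "qalys_gained": 40},
}

for name, data in interventions.items():
    cost_per_qaly = data["cost_usd"] / data["qalys_gained"]
    print(f"{name}: ${cost_per_qaly:,.0f} per QALY gained")

# Ranking options by cost per QALY gives a consistent yardstick,
# even when the higher-impact option feels emotionally flat.
```

The arithmetic is trivial; the point is that the yardstick stays the same no matter how the options feel.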

With all that said, we probably don’t want to throw the data from our moral sentiments out of the window. In the now-archived Consequentialism FAQ, Scott Alexander invites you to imagine a magical amulet that can exempt you from the laws of Morality.

The stories say that whoever wears the Heartstone is immune from the moral law, and may commit any actions he desires without them being even the mildest of venial sins[…]

Upon returning home, you decide to test its powers, so you adopt a kitten from the local shelter, then kill it.

You feel absolutely awful. You just want to curl up in a ball and never show your face again. “Well, what did you expect?” asks the ghost of Hrogmorph, who has decided to haunt you. “The power of the Heartstone isn’t to prevent you from feeling guilty. Guilt comes from chemicals in the brain, chemicals that live in the world like everything else – not from the metaphysical essence of morality. Look, if it makes you feel better, you didn’t actually do anything wrong, since you do have the amulet. You just feel like you did.”

The point of this story is that there is a limit to the usefulness of abstract arguments in moral philosophy. If we have a moral theory that completely fails to cohere with our moral intuitions – well, so much the worse for the theory.

Thinking like a Chess Grandmaster

A common novice mistake among people newly exposed to the literature on cognitive biases is to become extremely distrustful of their intuitions, both prudential and moral. This is often a change in the right direction, but it is also quite easy to start over-relying on System 2 to solve every problem.

This is because explicit System 2 thinking is often not your most efficient choice. System 1 can integrate much more information.

From Thinking, Fast and Slow:

When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do. Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps.

For all its impressive analytical abilities, System 2 is severely limited in both speed and capacity. Try to break down every decision you face during the day with explicit analysis and you will quickly see that this is unsustainable.

In fact, if you observe the best thinkers in action, you will often notice that much of their thinking relies on implicit models and intuitions. In other words, they develop powerful System 1 heuristics that replace the less efficient ones in their everyday decisions. Greene mentions one of his many famous mentors, the philosopher David Lewis (see his work on counterfactuals):

You’d say something and he would just stop for a moment and then he’d come out and say, “There are seven things you might mean.” And then list the seven things you might have meant in perfect complete paragraphs, and you have to sit there and decide, I knew it wasn’t two and four, but maybe it was one and five, I have to think more about it….

It’s interesting, he’s probably more like a chess expert, which is to say that he benefits from having certain very strong habits of thought that are then well regulated under a conscious reasoning system. I don’t think you could have the efficiency that he has if he was working it all out from first principles at a moment’s notice.

This process of gradually folding the slow, mathematical approach to problem-solving into our fast, intuitive judgments is described by an actual chess expert, Joshua Waitzkin, in his book The Art of Learning. He calls it “numbers to leave numbers”: a chess player first learns how to calculate the mathematical value of each piece:

We will start with day one. The first thing I have to do is to internalize how the pieces move. I have to learn their values. I have to learn how to coordinate them with one another.

Then, over time, I learn about bishops in isolation, then knights, rooks, and queens. Soon enough, the movements and values of the chess pieces are natural to me. I don’t have to think about them consciously, but see their potential simultaneously with the figurine itself.

Let’s begin the plunge into this issue with chess serving as a metaphor for all disciplines. The clearest way to approach this discussion is with the imagery of chunking and carved neural pathways.

Chunking relates to the mind’s ability to assimilate large amounts of information into a cluster that is bound together by certain patterns or principles particular to a given discipline. The initial studies on this topic were, conveniently, performed on chess players who were considered to be the clearest example of sophisticated unconscious pattern integration.

The Dutch psychologist Adriaan de Groot (1965) and years later the team of William Chase and Herbert Simon (1973) put chess players of varying skill levels in front of chess positions and then asked them to re-create those positions on an adjacent empty board. The psychologists taped and studied the eye patterns and timing of the players while they performed the tasks. The relevant conclusions were that stronger players had better memories when the positions were taken out of the games of other strong players, because they re-created the positions by taking parts of the board (say five or six pieces) and chunking (merging) them in the mind by their interrelationships. The stronger the player, the more sophisticated was his or her ability to quickly discover connecting logical patterns between the pieces (attack, defense, tension, pawn chains, etc.) and thus they had better chess memories.

On the other hand, when presented with random chess positions, with no logical cohesiveness, the memories of the players seemed to level off. In some cases the weaker players performed more effectively, because they were accustomed to random situations while the stronger players were a bit lost without “logic to the position.” So, in a nutshell, chunking relates to the mind’s ability to take lots of information, find a harmonizing/logically consistent strain, and put it together into one mental file that can be accessed as if it were a single piece of information.


Working with Our Moral Intuitions

We discuss Nick Beckstead’s excellent PhD thesis On the Overwhelming Importance of Shaping the Far Future.

Beckstead argues that the trajectory of humanity’s long-term future dominates all other moral concerns. This conclusion is rather counterintuitive, so Beckstead also lays the groundwork for thinking about how much weight to give our intuitions.

Beckstead draws an analogy (apparently developed by Robin Hanson and independently arrived at by Josh) to curve fitting in statistics. A predictive model that tries to fit every data point exactly (i.e. to accord with every one of our intuitions) overfits and predicts poorly.

If Amazon’s machine learning algorithms tried to fit a curve that matches your every click and purchase, their predictions would be junk. A simpler model is likely to predict your preferences much better.
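As a toy illustration of that point, here is a minimal Python sketch using synthetic data (nothing here comes from Beckstead or Greene; it is just the standard overfitting demonstration): a high-degree polynomial chases every noisy observation and fits the seen data almost perfectly, while a simple line typically predicts fresh data better.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: a roughly linear trend plus noise (a stand-in for our intuitions).
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.3, size=x_train.size)

# Fresh data the models have never seen (a stand-in for new moral cases).
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test + rng.normal(scale=0.3, size=x_test.size)

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, deg=degree)           # fit the model
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")

# Typically the degree-7 fit "explains" the training points almost exactly
# but generalizes worse than the straight line - the sense in which a theory
# with an epicycle for every intuition can be worse than a simple one.
```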

Beckstead argues that in these edge cases, where our intuitions are uncertain, a simple consequentialist calculus is likely to lead to better decisions than a more complicated theory riddled with exceptions.

Josh Greene:

… it’s exactly what Beckstead said – that you can have a very complicated theory that’s designed to sort of fit all these cases, and that makes sense if you think that your intuitions are sort of rays of sun from God. But if you have a more naturalistic view of the moral mind, and you think that these intuitions are just products of rough emotional heuristics that are products of our learning on biological and cultural and personal time scales, then you’re not going to feel like the moral theory has to fit every last nook and cranny of our intuitions. Instead, you can say okay, presumably our moral thinking is in some ways on the right track, but some of our intuitions are probably just wrong and not worth chasing. In one of my papers I call this bullet biting versus intuition chasing, and I think that one way of putting what Beckstead …

This academic debate is actually quite relevant. At some point, we will probably develop powerful AI systems that will give advice on complicated decisions.

[…]
