Joshua Greene's Moral Tribes: Emotion, Reason, and the Gap Between Us and Them is a mixture of political philosophy, evolutionary psychology, and ethics. The cool thing is that Greene is a clear thinker and a superb writer, so the book is accessible to the educated layperson.
The central question of the book is: In a modern, pluralistic society, how can diverse groups who disagree on fundamental issues come to agreement on matters of morality and public policy?
Pro-life versus pro-choice. Small government versus Big Government. Capitalism versus quasi-socialism. Individualism versus collectivism. Theism versus secularism. People have strong sentiments about these and other topics, and it often seems that reasoned argument is of no avail.
Greene uses the analogy of multiple tribes living near each other. Within each tribe there is broad agreement about morality and expectations of behavior. Indeed, our inborn morality was designed to deal with life in a tribe or village. Humans have evolved to cooperate, to some extent, especially with people from the same family or tribe. But for outsiders, our gut reactions often say: fight! When tribes interact and need to work together, is there any way to resolve differences other than "might makes right"? (I hope democracy isn't just another form of "might makes right.")
Greene's answer to the question "How can we come to agreement?" is: we should use utilitarian reasoning. That is, we should aim to maximize happiness and reduce suffering for the greatest number of people. He thinks utilitarianism, properly understood, is a "common currency" that lets us transcend our tribal differences. He does not believe that utilitarianism is a moral absolute or that it amounts to a foundational principle or axiom for ethics. He just says that, pragmatically, utilitarianism works, and it's something almost everybody understands and uses in daily life.
He spends much of the book defending utilitarianism against supposed counter-cases in which it seems that maximizing happiness violates individuals' rights.
For example, is it correct to push a person into the path of a train if doing so will stop the train from killing five other people? That is, can we sacrifice one life for five by shoving someone? (Experiments show that the answer to such questions depends on whether you personally have to touch the person, in contrast to flipping a switch, for example.)
Is it correct to torture one person to increase the happiness of 1000 other people?
He argues that such cases never or rarely apply in the real world. I'm not sure he really establishes that. He also argues that although our gut reaction says that sacrificing one life to save five violates fundamental rights, it may in fact be the rational, moral thing to do, from the point of view of utilitarianism. But in the real world, such counter-examples rarely if ever matter. For example, it's hard to imagine that a utilitarian would support enslaving one person to increase the happiness of others, since the loss of happiness for the slave would far outweigh the gain in happiness of the others.
A far greater problem for utilitarianism, I think, is: whose happiness matters? All humans? Including criminals and the insane? Fetuses? Greene says we needn't be able to calculate firm numbers for such questions in order to use utilitarian arguments. But it seems that one's opponent could always say, "Those people's happiness simply doesn't matter." And how about gorillas, whales, and other animals? Does their happiness count, but not as much?
Greene compares human morality to the operation of a camera. Modern cameras have two modes: an automatic point-and-click mode, plus a manual mode in which we consciously and carefully adjust settings. Similarly, human morality has two modes.
Our auto mode morality is like the operation of a point-and-click camera. We use it without thinking and it's largely inborn. Humans evolved to cooperate with others in a tightly knit group (our "tribe"). Our inborn, natural emotions related to in-group cooperation help us overcome the Tragedy of the Commons: situations where everybody suffers because nobody is willing to sacrifice their selfish needs for the greater good. Auto mode includes intuitive reactions such as shame, compassion, loyalty, disgust, fear, and vengefulness. Auto mode was made for overcoming selfishness and for getting along with our group/family/tribe.
Manual mode morality involves conscious reasoning about options and outcomes. Its conclusions about what's right and just may differ from our gut reactions, which we tend to rationalize with arguments. That is, we have auto mode moral intuitions which feel right, based on tradition, prejudice, and religion. And we try to justify them and raise them to foundational principles or "rights." Often such arguments are question-begging and ad hoc. For example, Kant tried to show from first principles that masturbation was inherently evil. When we read his arguments now, they seem silly.
Greene makes a case that most arguments on both sides of the abortion debate are shallow rationalizations that don't handle the hard cases. (Does life become precious right after an egg becomes fertilized? Does a woman have the right to abort a nine-month-old fetus?)
Though Greene thinks that much moral reasoning is rationalization of prejudices or auto mode feelings, he thinks that manual mode moral reasoning can be a basis for living together in a pluralistic society.
The prototypical example of the Tragedy of the Commons is shared grazing land, which each farmer can use to graze cattle. If everyone grazes the maximum number of cattle, then the grass dies and nobody's cow gets fed. So there has to be a way to ration use of the shared resource and prevent cheaters from taking more than their fair share. It's to each person's individual benefit to graze as much as possible, and to cheat by violating any rules, but a society (and a species) needs a way to overcome such selfishness. Auto mode morality evolved/developed for that purpose.
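The incentive structure described above can be made concrete with a toy calculation. This is my illustration, not Greene's; the farmer counts, grass capacity, and payoff formula are all made-up numbers chosen only to show why cheating pays individually while ruining everyone collectively:

```python
# Toy model of the Tragedy of the Commons (illustrative numbers, not from the book).
# Five farmers share a pasture; the value of grazing each cow falls as the
# total number of cows rises, hitting zero when the grass is exhausted.

def per_cow_value(total_cows, capacity=10):
    """Value of grazing one cow; zero once the commons is overgrazed."""
    return max(0, capacity - total_cows)

def payoff(my_cows, others_cows):
    """One farmer's total payoff, given everyone else's grazing."""
    return my_cows * per_cow_value(my_cows + others_cows)

# If all five farmers show restraint and graze 1 cow each:
restrained = payoff(1, 4)   # 1 * (10 - 5) = 5 per farmer

# One farmer cheats with 3 cows while the other four hold at 1 each:
cheater = payoff(3, 4)      # 3 * (10 - 7) = 9  -> cheating pays for the cheater
honest = payoff(1, 6)       # 1 * (10 - 7) = 3  -> the honest farmers lose

# But if all five farmers graze 3 cows, the grass dies:
collapse = payoff(3, 12)    # 3 * max(0, 10 - 15) = 0 for everybody

print(restrained, cheater, honest, collapse)  # 5 9 3 0
```

Each farmer's best individual move is to add cattle, yet when everyone does it the payoff drops to zero for all; that gap between individual and collective interest is exactly what auto mode morality (shame, loyalty, vengefulness toward cheaters) evolved to close.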