As a graduate student in philosophy, Joshua Greene had the idea of asking people to undertake some well-known philosophical thought experiments while in a brain scanner, so that he could study the neural processes involved in making moral judgements. Now head of the Moral Cognition Lab in Harvard University’s department of psychology, Greene draws on results from psychology and neuroscience to argue that we should live our lives according to the precepts of utilitarianism. Many philosophers will immediately be wary – it is widely accepted that you cannot derive an “ought” from an “is” – but the argument Greene makes in Moral Tribes is subtle and deserves careful attention.
Our big problem, as Greene sees it, is the challenge of adjudicating between the different values of opposing “tribes”. On his account, we evolved a set of psychological capacities and dispositions that promote within-group cooperation: for instance, we have an aversion to committing acts of violence; we have emotions such as guilt and shame that encourage cooperative behaviour; and we have others such as anger that commit us to punish non-cooperators. These dispositions are triggered automatically.
However, the flip side of this story is that these automatic settings evolved to favour our own tribe over others. With globalisation the world has become smaller, so we need to cooperate with other tribes. Since tribes tend to coalesce around shared values, when they encounter each other there are value conflicts to which we must find solutions. But evolution did not furnish us with automatic mechanisms to promote cooperation between groups. If anything, we are programmed to be biased in favour of our own tribe, interpreting events, evidence and even moral principles such as fairness in a way that favours our group. We are designed for local and not global cooperation.
Luckily, we have another setting that can be harnessed to solve this problem. Greene proposes a “dual process” theory of moral judgement: as well as an “automatic mode”, we have a “manual mode”, responsible for conscious moral reasoning. From the results of his own research programme, Greene concludes that utilitarian judgements, which favour the action with the best overall consequences, are made in the manual mode.
So far, so much psychology. But Greene argues that we can use the resources of the manual mode in order to resolve value disputes between tribes. Whatever values we endorse, we all have a capacity for happiness, defined as good experiences, and we can recognise that everyone’s happiness is equally valuable. Therefore, utilitarianism is supposed to provide a common currency for resolving disagreements. The moral psychology of the manual mode also happens to be the philosophical solution.
Many of Greene’s arguments are controversial, and the parts about philosophy most of all. For starters, there is his identification of manual mode reasoning with utilitarian judgements. Scores of philosophers, who are presumably doing moral reasoning, advocate theories other than utilitarianism. Greene’s response is that they are rationalising their intuitive judgements. This is a genuine concern about any philosophy that relies heavily on thought experiments. But when it is done properly, moral philosophy takes intuitively plausible premises, such as equal respect for people, and teases out exactly what they mean and what they imply for moral behaviour. This is no different to what Greene has done with his premise about the equal value of experience. Of course, he would argue that the resulting ethical theories differ in their coherence and plausibility. But both intuition and reasoning enter into making those judgements.
Greene also assumes that the manual mode leads to better moral judgements than the automatic mode, but the relative merit of reasoning versus intuition is a live question. Other proponents of dual process theories, such as Daniel Kahneman, acknowledge that we are not always good at doing the calculations (as indeed does Greene at some points in this book). Sometimes our intuitive judgement has evolved to get us to the correct result when conscious reasoning would lead us astray. We shouldn’t fetishise conscious reasoning, which can also lead us to immoral conclusions.
These debates about individual reasoning will be familiar to those who know Greene’s work, but the really novel part of this book is their application to tribal conflict.
It is not clear that different tribes will agree to use utilitarianism to solve disputes. Greene says that because he is not arguing that utilitarianism is the moral truth, he prefers to call it “deep pragmatism”. Utilitarianism is just a way for people with competing values to get along together. But utilitarianism is a theory of value. It claims that what is valuable is experience and that all people’s experiences count equally. Why would someone who thinks that experiences are not the ultimate bearers of value buy into utilitarianism as a way of resolving value-conflict? Contrast Greene’s idea with John Rawls’ proposal for a theory of justice within which people with different values could coexist. Rawls’ method was to seek the agreements that people would make from behind a “veil of ignorance”, not knowing what position in society they would occupy or what their values would be. Given this stipulation, they would agree on a schema that does not privilege any one conception of the good, imposing a separation between “the right” and “the good”. But Greene’s proposal dissolves the boundary between the right and the good, as assenting to his proposal involves accepting a utilitarian conception of value.
Nor is Greene’s idea quite as far-reaching as he hopes. He gives several examples drawn from the contemporary political scene in the US, where the tribes are conservatives versus liberals. Naturally, Greene is concerned about these apparently irresolvable political differences, but he is more convincing when talking in the abstract about the demands of global justice. (Is it really more imperative to sacrifice a $500 suit to save a child who is drowning in a pond next to me than to send a cheque for $500 to a charity to save the life of a child in the developing world?) Debates between Right and Left about the welfare state simply do not boil down to a dispute about whether the boundary of the tribe is the local community or the whole state. The disagreement is about whether there is a fundamental right to the fruit of one’s labour, which means that taxation is illegitimate and poor relief should be left to voluntary organisations. But Greene gives rights short shrift, arguing that they are simply rationalisations of our intuitions.
Moral Tribes is a hugely ambitious book, and a work of this scope is easy to criticise because it can never be possible to anticipate every counterargument – although Greene does a pretty good job via some lengthy footnotes. He is particularly strong on the psychology of moral judgement. There is a wealth of books in this area, but Greene has something new to bring to the debate. The philosophical application of the psychology, too, is both thoughtful and thought-provoking. Readers who were already well disposed to utilitarianism before reading this book will find plenty in it to support their views, while those who come to it with the view that utilitarianism is wrong will not be persuaded otherwise. However, on the basis of Greene’s theory of tribes, that is exactly what we would predict.
Joshua Greene, the John and Ruth Hazel Associate Professor of the Social Sciences in the Department of Psychology at Harvard University, was raised in Florida. But, he says, “I never felt very Floridian, and people often assume I’m from New York or Boston, presumably because the Sunshine State is not known for producing philosophy-science nerds.”
“I now live in Cambridge, Massachusetts with my wife, Andrea Heberlein, and our two kids. Cambridge is known for its philosophy-science nerds, and we feel very much at home there.”
Cambridge, observes Greene, “is extremely diverse, but has a strong community spirit. It’s as close as one gets to a post-tribal town.”
A “natural sceptic” as a child, Greene is glad to have trained as a philosopher before becoming a scientist. “But the downside is that I picked up most of my quantitative and technical skills as I went along. I collaborate with people whose techie skills outstrip my own, and I’m grateful for that. But having those skills myself would not only allow me to do more with my hands, it would extend my thinking.”
Working across the boundaries of psychology, neuroscience and philosophy is a challenge, he says, “but we’ve made enormous progress bringing these disciplines together in recent years. The biggest challenge comes from traditional moral philosophers who, when confronted with the new science of morality, simply repeat the old saw that you can’t derive an ‘ought’ from an ‘is’ and then plug their ears.”
Speaking of the project at the heart of his work, Greene contends: “Studying the causes of moral problems is, in a sense, no different from studying the causes of cancer. In both fields there’s a moral purpose. And in both fields one must respect the evidence regardless of what one hopes to find.
“Unlike cancer research, the science of morality is not aimed at producing a useful material technology, such as a pill. It is about producing useful self-knowledge, what you might call ‘social technology’, ideas that can improve our lives by changing collective behaviour. It’s a much riskier endeavour. Cancer researchers will solve their problems sooner or later. Whether the science of morality will pay off remains to be seen. We’re just getting started.”
Moral Tribes: Emotion, Reason and the Gap Between Us and Them
By Joshua Greene
Atlantic, 432pp, £22.00
Published 2 January 2014