Answer to Sam Harris's Moral Landscape Challenge

By sulthan on Thursday, February 27, 2014

At the start of his website’s FAQ for the Challenge, Sam Harris summarizes what he calls his book’s central argument. That summary is clearly invalid: he slides from the assumption that moral values “depend on” facts having to do with conscious creatures, to the conclusion that morality itself has scientific answers. This is like saying that because land-dwelling animals depend on the ground beneath their feet, biology reduces to geology.

As for the implicit argument in The Moral Landscape, as I interpret it, that argument is also flawed. Harris thinks that because morality has to do with the facts of how to make conscious creatures fare well, and these facts are empirical, there’s a possible science of morality. Putting aside the question of what exactly counts as science, let’s consider whether any kind of reasoning tells us what’s moral. Take, for example, instrumental reason, the efficient tailoring of means to ends. If we want to maximize well-being and we think carefully about how to achieve that goal, we can, of course, help to achieve it. Is that all there is to morality? No, because instrumental reason, as it’s posited in economics, for example, is neutral about the preferences themselves. This kind of rationality takes our goals for granted and evaluates only the means of achieving them. So we can be as rational as we like in this sense, and the question will remain whether our goals are morally best.

Russell Blackford makes the same point, and Harris replies that a utopia in which well-being is maximized is possible, so that if a bad person’s preferences stand in the way of realizing that perfect society, we might as well change that person’s way of thinking, even by rewiring his brain. Presumably, we could do that, just as we could turn an altruist into a psychopath. Reason alone doesn’t tell us which would be the superior person, so Harris’s response here merely begs the question.

For another example of reasoning, take the basic scientific aim of telling us the naturally probable facts, through observation and testing of hypotheses. Conceivably, scientific methods could uncover facts of how Harris’s utopia would work and they might even lay out a roadmap for how to perfect our current societies. As Harris says, some present societies might be better than others, given the ideal of maximizing well-being. If empirical reasoning could tell us that much, would that answer the central moral questions?

No, because as Harris admits, science wouldn’t thereby show that the maximization of well-being is factually the best ideal. Rather, a science of morality would presuppose that utilitarian ideal as self-evident, just as medicine presupposes the goal of making people healthy, as Harris says. This analogy is flawed, though, because doctors can fall back on the biological functions of our organs, whereas moral aims needn’t be the same as what we’re naturally selected to do. Medical doctors try to make our bodies function in the way that maximizes our species’ fitness to carry our genes. Note how much harder it is to explain what counts as mental health. This is because the question of which mind is ideal is partly a normative one, and science apparently doesn’t address it.

So is our well-being self-evidently what we all ought to pursue? No, for at least two reasons. First, we may not deserve to be happy, or we may be obligated to suffer, because too much well-being would be unseemly in the indifferent universe that we have no hope of altering. This is roughly the point of the Christian doctrine of original sin, which may be neither here nor there for secularists, but a similar pessimism about natural life runs through the ancient Eastern religions. Instead of trying to be happy, says the Hindu or Buddhist, we should ideally resign ourselves to having a detached and alienated perspective until we can escape the prison of nature with honour. Robert Nozick makes a similar point with his Experience Machine thought experiment: living in a computer simulation might maximize well-being in terms of our conscious states, but people tend to feel that such narrow flourishing would be undignified, under the circumstances.

Second, “well-being” is a vacuous placeholder that must be filled by our personal choice of a more specific ideal. Harris says otherwise, because he thinks his Good Life and Bad Life illustrations point us in the direction of the relevant facts. But notice that his heroine leading the good life is on a slippery slope to suffering in the way that Oskar Schindler suffers at the end of Spielberg’s movie. “I could have done more,” Schindler says in horror. The fact is that the more empathetic we feel, the more we personally suffer, because we then suffer on behalf of many others. So a world in which we prefer to maximize collective well-being is simultaneously (and ironically) one that maximizes individual suffering, and that’s so even though many people would come to our aid in such a world. A selfless person can’t accept aid or even compliments that could just as well go to other, more needy folks. Indeed, those with altruistic motives intentionally sacrifice their personal well-being, because they care more about others than themselves.

Thus, in so far as the ideal of well-being includes the goal of personal contentment, this ideal is opposed to the moral one of altruism, of maximizing (other) people’s happiness. Which goal is preferable isn’t up to pure reason of any kind. Rather, as with all our core values, we must ultimately take a leap of faith that our personal stamp is worth putting on the world. And in so far as so-called rationalists would presuppose Harris’s utilitarian ideal, they’d clash with pessimists, existentialists, world-weary misanthropes, melancholy artists, esoteric Hindus, Buddhists, and the like. Moreover, given how extroverted Western norms have tended to spread across the globe, not reason but force and a crass lowering of standards would likely settle the matter.