[content warning: description of fictional rape and torture.]
Phil Robertson is being criticized for a thought experiment in which an atheist’s family is raped and murdered. On a talk show, he accused atheists of believing that there was no such thing as objective right or wrong, then continued:
I’ll make a bet with you. Two guys break into an atheist’s home. He has a little atheist wife and two little atheist daughters. Two guys break into his home and tie him up in a chair and gag him.
Then they take his two daughters in front of him and rape both of them and then shoot them, and they take his wife and then decapitate her head off in front of him, and then they can look at him and say, ‘Isn’t it great that I don’t have to worry about being judged? Isn’t it great that there’s nothing wrong with this? There’s no right or wrong, now, is it dude?’
Then you take a sharp knife and take his manhood and hold it in front of him and say, ‘Wouldn’t it be something if [there] was something wrong with this? But you’re the one who says there is no God, there’s no right, there’s no wrong, so we’re just having fun. We’re sick in the head, have a nice day.’
If it happened to them, they probably would say, ‘Something about this just ain’t right’.
The media has completely proportionally described this as Robertson “fantasizing about” raping atheists, and there are the usual calls for him to apologize/get fired/be beheaded.
So let me use whatever credibility I have as a guy with a philosophy degree to confirm that Phil Robertson is doing moral philosophy exactly right.
There’s a tradition at least as old as Kant of investigating philosophical dilemmas by appealing to our intuitions about extreme cases. Kant, remember, proposed that it was always wrong to lie. A contemporary of his, Benjamin Constant, made the following objection: suppose a murderer is at the door and wants to know where your friend is so he can murder her. If you say nothing, the murderer will get angry and kill you; if you tell the truth he will find and kill your friend; if you lie, he will go on a wild goose chase and give you time to call the police. Lying doesn’t sound so immoral now, does it?
The brilliance of Constant’s thought experiment lies in its extreme nature. If a person says they think lying is always wrong, we have two competing hypotheses: they’re accurately describing their own thought processes, which will indeed always output that lying is wrong; or they’re misjudging their own thought processes and actually there are some situations in which they will judge lying to be ethical. In order to distinguish between the two, we need to come up with a story that presents the strongest possible case for lying, so that even the tiniest shred of sympathy for lying can be dragged up to the surface.
So Constant says “It’s a murderer trying to kill your best friend”. And even this is suboptimal. It should be a mad scientist trying to kill everyone on Earth. Or an ancient demon, whose victory would doom everyone on Earth, man, woman, and child, to an eternity of the most terrible torture. If some people’s hidden algorithm is “lie when the stakes are high enough”, then we can be sure that the stakes are high enough to tease it out into the light of day.
Churchill: Madam, would you sleep with me for five million pounds?
Lady: Well, for five million pounds…well…that’s a lot of money.
Churchill: Would you sleep with me for five pounds?
Lady: (enraged) What kind of a woman do you think I am‽
Churchill: We’ve already established what kind of a woman you are. Now we’re just haggling over the price.
The woman thinks she has a principle, “Never sleep with a man for money”. In fact, deep down, she believes it’s okay to sleep with a man for enough money. If Churchill had merely stuck to the five pounds question, she would have continued to believe she held the “never…” principle. By coming up with an extreme case (5 million Churchill-era pounds is about £250 million today) he was able to reveal that her apparent principle was actually a contingent effect of her real principle plus the situation.
In fact, compare physics. Physicists are always doing things like cooling stuff down to a millionth of a degree above absolute zero, or making clocks so precise they’ll be less than a second off by the time the sun goes out, or accelerating things to 99.99% of the speed of light. And one of the main reasons they do this is to magnify small effects to the point where they can measure them. All movement causes a little bit of time dilation, but if you want to detect it you need the world’s most accurate clock on the Space Shuttle when it’s traveling 25,000 miles per hour. In order to figure out how things really work, you need to turn things up to 11 so that the effect you want is impossible to miss. Everything in the universe has been exerting a gravitational effect on light all the time, but if you want to see it clearly you need to use the Sun during a solar eclipse, and if you really want to see it clearly your best bet is a black hole.
Great physicists and great philosophers share a certain perversity. The perversity is “Sure, this principle works in all remotely plausible real-world situations, but WHAT IF THERE’S A COMPLETELY RIDICULOUS SCENARIO WHERE IT DOESN’T HOLD??!?!” Newton’s theory of gravity explained everything from falling apples to the orbits of the planets impeccably for centuries, and then Einstein asked “Okay, but what if, when you get objects thousands of times larger than the Earth, there are tiny discrepancies? Then we’d have to throw the whole thing out” – and instead of running him out of town on a rail, scientists celebrated his genius. Likewise, moral philosophers are as happy as anyone else not to lie in the real world. But they wonder whether everyday moral principles might be revealed to be only simplifications of more fundamental ones, principles that can only be discovered by placing them in a cyclotron and accelerating them to 99.99% of the speed of light.
Sometimes this is even clearer than in the Kant example. Many people, if they think about it at all, believe that value aggregates linearly. That is, two murders are twice as much of a tragedy as one murder; a hundred people losing their homes is ten times as bad as ten people losing their homes.
Torture vs. Dust Specks is beautiful in its simplicity; it just takes this assumption and creates the most extreme case imaginable. Take a tiny harm and aggregate it an unimaginably high number of times; then compare it against a big harm which is nowhere near the aggregated sum of the tiny ones. So which is worse: 3^^^3 (read: a number higher than you can imagine) people getting a single dust speck in their eye for a fraction of a second, or one person being tortured for fifty years?
Almost everybody thinks their principle is “things aggregate linearly”, but when you put it into relief like this, almost everybody’s intuition tells them the torture is worse. You can “bite the bullet” and admit that the dust specks are worse than the torture. Or you can throw out your previous principle saying that things aggregate linearly and try to find another principle about how to aggregate things (good luck).
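The linear-aggregation assumption is easy to state in code. Here is a minimal sketch; the harm values and the stand-in for 3^^^3 are invented placeholders (the thought experiment deliberately leaves them unquantified), but any remotely plausible numbers give the same verdict:

```python
# Illustrative sketch of linear aggregation of harms.
# The specific disutility values below are assumptions, not part of
# the original thought experiment.

DUST_SPECK_HARM = 1e-9   # assumed disutility of one momentary dust speck
TORTURE_HARM = 1e9       # assumed disutility of fifty years of torture
NUM_SPECKS = 10 ** 30    # stand-in for 3^^^3, which dwarfs any writable number

def total_harm_linear(per_person_harm: float, num_people: float) -> float:
    """Linear aggregation: total harm scales directly with headcount."""
    return per_person_harm * num_people

specks_total = total_harm_linear(DUST_SPECK_HARM, NUM_SPECKS)
torture_total = total_harm_linear(TORTURE_HARM, 1)

# Under linear aggregation the specks dominate by many orders of
# magnitude, even though intuition says the torture is worse.
print(specks_total > torture_total)  # True
```

The point of the extreme numbers is exactly the point of the essay: any finite per-speck harm, aggregated over a large enough population, eventually exceeds the torture under this rule, so the principle and the intuition must collide somewhere.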
Moral dilemmas are extreme and disgusting precisely because those are the only cases in which we can make our intuitions strong enough to be clearly detectable. If the question were just “Which is worse, a thousand people stubbing their toes or one person breaking their leg?”, neither side would be obviously worse than the other and our true intuition wouldn’t come into sharp relief. So a good moral philosopher will always be talking about things like murder, torture, organ-stealing, Hitler, incest, drowning children, the death of four billion humans, et cetera.
Worse, a good moral philosopher should be constantly agreeing – or tempted to agree – to do horrible things in these cases. The whole point of these experiments is to collide two of your intuitions against each other and force you to violate at least one of them. In Kant’s example, either you’re lying, or you’re dooming your friend to die. In Judith Jarvis Thomson’s Transplant Surgeon scenario, you’re either killing somebody to harvest their organs or letting a whole hospital full of people die.
I once had someone call the torture vs. dust specks question “contrived moral dilemma porn” and say it proved that moral philosophers were kind of crappy people for even considering it. That bothered me. To look at moral philosophers and conclude “THESE PEOPLE LOVE TO TALK ABOUT INCEST AND ORGAN HARVESTING, AND BRAG ABOUT ALL THE CASES WHEN THEY’D BE OKAY DOING THAT STUFF. THEY ARE GROSS EDGELORDS AND PROBABLY FANTASIZE ABOUT HAVING SEX WITH THEIR SISTER ON THE HOSPITAL BED OF A PATIENT DYING OF END-STAGE KIDNEY DISEASE,” is to utterly miss the point.
So let’s talk about Phil Robertson.
Phil Robertson believes atheists are moral nihilists, or moral relativists, or something. He’s not quite right – plenty of atheists are committed moral realists – Objectivists, as their name implies, believe that morality, and everything else up to and including the best flavor of ice cream, is Objective – and even the atheists who aren’t quite moral realists usually hold some sort of compromise position where it’s meaningful to talk about right and wrong even if it’s not cosmically meaningful.
On the other hand – and I say this as the former secretary of a college atheist club who got to meet all sorts – there are a bunch of atheists who very much claim not to believe in morality. Less Wrong probably has fewer of them than the average atheist hangout, because we skew so heavily utilitarian, but our survey records 4% error theorists and 9% non-cognitivists. When Friendly Atheist says he “doesn’t know a single atheist or agnostic who thinks that terrorizing, raping, torturing, mutilating, and killing people is remotely OK”, I can believe that he doesn’t know one who would say so in those exact words. But I’m not sure how, for example, the error theorists could consistently argue against that position.
And what Phil Robertson does is exactly what I would do if I were debating an error theorist. I’d take the most gratuitously horrible thing I could think of, describe it in the most graphic detail I could, and say “But don’t you think there’s something wrong with this?” If the error theorist says “no”, then I congratulate her for definitely being a real honest-to-goodness error theorist, and unless I can suddenly think up a way to bridge the is-ought dichotomy we’re finished. But if she says “Yes, it does seem like there should be something wrong there,” then we can start exploring what that means and whether error theory is the best framework in which to capture that intuition.
On the other hand, if I were debating Phil Robertson, I would ask him where he thinks morality comes from. And if he suggested some version of divine command theory, I could use an example of the graphic-horrifying-extreme-thought-experiment genre even older than Kant – namely, Abraham’s near-sacrifice of Isaac. If God commands you to kill your innocent child, is that the right thing to do? What if God commands you to rape and torture and mutilate your family? And it wouldn’t work if it were anything less extreme – if I just said “What if God told you to shoplift?” it would be easy to bite that bullet and he wouldn’t have to face the full implication of his views. But if I went with the extreme version? Maybe Robertson would find he’s not as big on divine command theory as he thought.
But this sort of discussion would only be possible if we could trust each other to take graphic thought experiments in the spirit in which they were conceived, and not as an opportunity to score cheap points.
[EDIT: This post was previously titled “High Energy Ethics”, but I changed it after realizing it was unintentionally lifted from elsewhere]