Extremism In Thought Experiment Is No Vice

[content warning: description of fictional rape and torture.]

Phil Robertson is being criticized for a thought experiment in which an atheist’s family is raped and murdered. On a talk show, he accused atheists of believing that there was no such thing as objective right or wrong, then continued:

I’ll make a bet with you. Two guys break into an atheist’s home. He has a little atheist wife and two little atheist daughters. Two guys break into his home and tie him up in a chair and gag him.

Then they take his two daughters in front of him and rape both of them and then shoot them, and they take his wife and then decapitate her head off in front of him, and then they can look at him and say, ‘Isn’t it great that I don’t have to worry about being judged? Isn’t it great that there’s nothing wrong with this? There’s no right or wrong, now, is it dude?’

Then you take a sharp knife and take his manhood and hold it in front of him and say, ‘Wouldn’t it be something if [there] was something wrong with this? But you’re the one who says there is no God, there’s no right, there’s no wrong, so we’re just having fun. We’re sick in the head, have a nice day.’

If it happened to them, they probably would say, ‘Something about this just ain’t right’.

The media has completely proportionally described this as Robertson “fantasizing about” raping atheists, and there are the usual calls for him to apologize/get fired/be beheaded.

So let me use whatever credibility I have as a guy with a philosophy degree to confirm that Phil Robertson is doing moral philosophy exactly right.

There’s a tradition at least as old as Kant of investigating philosophical dilemmas by appealing to our intuitions about extreme cases. Kant, remember, proposed that it was always wrong to lie. A contemporary of his, Benjamin Constant, made the following objection: suppose a murderer is at the door and wants to know where your friend is so he can murder her. If you say nothing, the murderer will get angry and kill you; if you tell the truth he will find and kill your friend; if you lie, he will go on a wild goose chase and give you time to call the police. Lying doesn’t sound so immoral now, does it?

The brilliance of Constant’s thought experiment lies in its extreme nature. If a person says they think lying is always wrong, we have two competing hypotheses: they’re accurately describing their own thought processes, which will indeed always output that lying is wrong; or they’re misjudging their own thought processes and actually there are some situations in which they will judge lying to be ethical. In order to distinguish between the two, we need to come up with a story that presents the strongest possible case for lying, so that even the tiniest shred of sympathy for lying can be dragged up to the surface.

So Constant says “It’s a murderer trying to kill your best friend”. And even this is suboptimal. It should be a mad scientist trying to kill everyone on Earth. Or an ancient demon, whose victory would doom everyone on Earth, man, woman, and child, to an eternity of the most terrible torture. If some people’s hidden algorithm is “lie when the stakes are high enough”, then we can be sure that the stakes are high enough to tease it out into the light of day.

Compare the (probably apocryphal) Churchill anecdote:

Churchill: Madam, would you sleep with me for five million pounds?
Lady: Well, for five million pounds…well…that’s a lot of money.
Churchill: Would you sleep with me for five pounds?
Lady: (enraged) What kind of a woman do you think I am‽
Churchill: We’ve already established what kind of a woman you are; now we’re just haggling over the price.

The woman thinks she has a principle, “Never sleep with a man for money”. In fact, deep down, she believes it’s okay to sleep with a man for enough money. If Churchill had merely stuck to the five pounds question, she would have continued to believe she held the “never…” principle. By coming up with an extreme case (5 million Churchill-era pounds is about £250 million today) he was able to reveal that her apparent principle was actually a contingent effect of her real principle plus the situation.

In fact, compare physics. Physicists are always doing things like cooling stuff down to a millionth of a degree above absolute zero, or making clocks so precise they’ll be less than a second off by the time the sun goes out, or accelerating things to 99.99% of the speed of light. And one of the main reasons they do this is to magnify small effects to the point where they can measure them. All movement causes a little bit of time dilation, but if you want to detect it you need the world’s most accurate clock on the Space Shuttle as it orbits at about 17,500 miles per hour. In order to figure out how things really work, you need to turn things up to 11 so that the effect you want is impossible to miss. Everything in the universe has been exerting a gravitational effect on light all the time, but if you want to see it clearly you need to use the Sun during a solar eclipse, and if you really want to see it clearly your best bet is a black hole.
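
To get a feel for the magnitudes involved, here is a minimal back-of-the-envelope sketch (mine, not anything from the post; the speeds are the only inputs, and the orbital figure is approximate) comparing the time dilation factor at orbital speed with the factor at 99.99% of the speed of light:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz time dilation factor: 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

orbital = 7_800.0   # ~17,500 mph, roughly the Shuttle's orbital speed
fast = 0.9999 * C   # the "turn it up to 11" regime

print(gamma(orbital) - 1)  # ~3.4e-10: needs an atomic clock to detect
print(gamma(fast) - 1)     # ~69.7: impossible to miss
```

At orbital speed the effect hides in the tenth decimal place; at 99.99% of the speed of light it is a factor of about seventy.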

Great physicists and great philosophers share a certain perversity. The perversity is “Sure, this principle works in all remotely plausible real-world situations, but WHAT IF THERE’S A COMPLETELY RIDICULOUS SCENARIO WHERE IT DOESN’T HOLD??!?!” Newton’s theory of gravity explained everything from falling apples to the orbits of the planets impeccably for centuries, and then Einstein asked “Okay, but what if, when you get objects thousands of times larger than the Earth, there are tiny discrepancies? Then we’d have to throw the whole thing out” – and instead of running him out of town on a rail, scientists celebrated his genius. Likewise, moral philosophers are as happy as anyone else not to lie in the real world. But they wonder whether their everyday principles might be revealed to be only simplifications of more fundamental principles, principles that can only be discovered by placing them in a cyclotron and accelerating them to 99.99% of the speed of light.

Sometimes this is even clearer than in the Kant example. Many people, if they think about it at all, believe that value aggregates linearly. That is, two murders are twice as much of a tragedy as one murder; a hundred people losing their homes is ten times as bad as ten people losing their homes.

Torture vs. Dust Specks is beautiful in its simplicity; it just takes this assumption and creates the most extreme case imaginable. Take a tiny harm and aggregate it an unimaginably high number of times; then compare it against a big harm which is nowhere near the aggregated sum of the tiny ones. So which is worse, 3^^^3 (read: a number higher than you can imagine) people getting a single dust speck in their eye for a fraction of a second, or one person being tortured for fifty years?

Almost everybody thinks their principle is “things aggregate linearly”, but when you put it into relief like this, almost everybody’s intuition tells them the torture is worse. You can “bite the bullet” and admit that the dust specks are worse than the torture. Or you can throw out your previous principle saying that things aggregate linearly and try to find another principle about how to aggregate things (good luck).
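
To make the assumption being stress-tested fully explicit, here is a minimal sketch. The harm values are invented and 3^^^3 is far too large to represent, so a merely astronomical stand-in has to do; the conclusion survives any remotely reasonable choice of numbers:

```python
# Linear aggregation: total harm is just per-person harm times headcount.

SPECK_HARM = 1e-9     # assumed disutility of one momentary dust speck
TORTURE_HARM = 1e7    # assumed disutility of fifty years of torture
N_PEOPLE = 10 ** 100  # a googol -- still unimaginably smaller than 3^^^3

specks_total = SPECK_HARM * N_PEOPLE

print(specks_total > TORTURE_HARM)  # True
print(specks_total / TORTURE_HARM)  # ~1e84: the specks dominate utterly
```

Under linear aggregation the specks win by dozens of orders of magnitude, which is exactly why the intuition that the torture is worse puts so much pressure on the principle.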

Moral dilemmas are extreme and disgusting precisely because those are the only cases in which we can make our intuitions strong enough to be clearly detectable. If the question were just “Which is worse, a thousand people stubbing their toe or one person breaking their leg?” neither side would have been obviously worse than the other and our true intuition wouldn’t have come into sharp relief. So a good moral philosopher will always be talking about things like murder, torture, organ-stealing, Hitler, incest, drowning children, the death of four billion humans, et cetera.

Worse, a good moral philosopher should be constantly agreeing – or tempted to agree – to do horrible things in these cases. The whole point of these experiments is to collide two of your intuitions against each other and force you to violate at least one of them. In Kant’s example, either you’re lying, or you’re dooming your friend to die. In Judith Jarvis Thomson’s transplant surgeon scenario, you’re either killing somebody to harvest their organs, or letting a whole hospital full of people die.

I once had someone call the torture vs. dust specks question “contrived moral dilemma porn” and say it proved that moral philosophers were kind of crappy people for even considering it. That bothered me. To look at moral philosophers and conclude “THESE PEOPLE LOVE TO TALK ABOUT INCEST AND ORGAN HARVESTING, AND BRAG ABOUT ALL THE CASES WHEN THEY’D BE OKAY DOING THAT STUFF. THEY ARE GROSS EDGELORDS AND PROBABLY FANTASIZE ABOUT HAVING SEX WITH THEIR SISTER ON THE HOSPITAL BED OF A PATIENT DYING OF END-STAGE KIDNEY DISEASE,” is to utterly miss the point.

So let’s talk about Phil Robertson.

Phil Robertson believes atheists are moral nihilists, or moral relativists, or something. He’s not quite right – there are a lot of atheists who are very much moral realists – Objectivists, as their name implies, believe that morality, and everything else up to and including the best flavor of ice cream, is Objective – and even the atheists who aren’t quite moral realists usually hold some sort of compromise position where it’s meaningful to talk about right and wrong even if it’s not cosmically meaningful.

On the other hand – and I say this as the former secretary of a college atheist club who got to meet all sorts – there are a bunch of atheists who very much claim not to believe in morality. Less Wrong probably has fewer of them than the average atheist hangout, because we skew so heavily utilitarian, but our survey records 4% error theorists and 9% non-cognitivists. When Friendly Atheist says he “doesn’t know a single atheist or agnostic who thinks that terrorizing, raping, torturing, mutilating, and killing people is remotely OK”, I can believe that he doesn’t know one who would say so in those exact words. But I’m not sure how, for example, the error theorists could consistently argue against that position.

And what Phil Robertson does is exactly what I would do if I were debating an error theorist. I’d take the most gratuitously horrible thing I could think of, describe it in the most graphic detail I could, and say “But don’t you think there’s something wrong with this?” If the error theorist says “no”, then I congratulate her for definitely being a real honest-to-goodness error theorist, and unless I can suddenly think up a way to bridge the is-ought dichotomy we’re finished. But if she says “Yes, it does seem like there should be something wrong there,” then we can start exploring what that means and whether error theory is the best framework in which to capture that intuition.

On the other hand, if I were debating Phil Robertson, I would ask him where he thinks morality comes from. And if he suggested some version of divine command theory, I could use an example of the graphic-horrifying-extreme-thought-experiment genre even older than Kant – namely, Abraham’s near-sacrifice of Isaac. If God commands you to kill your innocent child, is that the right thing to do? What if God commands you to rape and torture and mutilate your family? And it wouldn’t work if it were anything less extreme – if I just said “What if God told you to shoplift?” it would be easy to bite that bullet and he wouldn’t have to face the full implication of his views. But if I went with the extreme version? Maybe Robertson would find he’s not as big on divine command theory as he thought.

But this sort of discussion would only be possible if we could trust each other to take graphic thought experiments in the spirit in which they were conceived, and not as an opportunity to score cheap points.

[EDIT: This post was previously titled “High Energy Ethics”, but I changed it after realizing it was unintentionally lifted from elsewhere]


590 Responses to Extremism In Thought Experiment Is No Vice

  1. lunatic says:

    I agree that Robertson’s argument is fine. Do you think there’s something wrong with the situation?

    To me, I would be inclined to say, firstly, that if I’m being honest with my gut, I would call it wrong; and secondly, that I don’t think I’m being a great philosopher in doing so, and that I must therefore give a much more complicated account of what I mean by “wrong” in this situation – an account I’m not entirely convinced would be particularly enlightening.

    • Partisan says:

      I’m glad Scott wrote this – it’s tempting to pile on the criticism of Robertson; he’s an intellectual enemy!

      I wouldn’t call his argument “fine,” though – it’s stupid. Using extremes to illustrate a point is fine, but no one [citation needed] seriously thinks or argues that, because there is no divine judgement, everything is morally equivalent.

      • No, but a lot of people as religiously blinkered as Robertson believe there are such people and that atheists are a large subset of them.

        It’s not Robertson’s argument that’s unsound here. It’s his premises that are out of whack in a have-you-stopped-beating-your-wife sort of way.

        I feel like there ought to be a more specific term for the fallacy in play here. It’s not exactly strawmanning, because Robertson is not dishonestly substituting a weak position for a more defensible one – he actually seems to believe that atheists are so nihilist that this argument is dispositive.

        • Jesse says:

          I have never gotten the chance to, but I would love to tell anyone making that argument:
          “Just because you need to outsource your moral compass (due to innate lack thereof) doesn’t mean I do”

          • Brad says:

            >“Just because you need to outsource your moral compass (due to innate lack thereof) doesn’t mean I do”

            What if it’s simply that, in a choice between moral compass A and moral compass B, I feel that listening to one instead of the other will result in better ethical outcomes?

            In the case of theistic ethics, I could argue that while your (and my own!) moral compasses may be sufficient in certain cases, they are also prone to errors in a way that God’s moral compass isn’t. I could also argue from the attributes of God (created the universe, perfectly good moral character, cannot lie) that listening to him rather than my own or your intuitions will result in better outcomes, *particularly* from a consequentialist standpoint, since he can, among other things, see the future and I can’t.

            It’s the same basic kind of logic that allows a scientist to listen to empirical evidence over the objections of his intuition – even though his intuition may work sometimes, empiricism works *better*.

            (Of course, a fundamentalist such as myself could use similar logic to follow divine command theory over, say, the objections of empiricism. At this point a debate about the actual merits of various systems of truth-seeking would be more useful.)

            This is what came to mind upon reading your comment, but I would appreciate feedback, since I am not particularly good at philosophy.

          • Shenpen says:

            This is a bit more complicated than outsourcing. The theist feels things have value as means to goals (correct), so for human life to have value it must be a means to a higher goal: being pawns in God’s game.

            The theist, like everybody else, feels human compassion. He finds feelings unreliable as compasses (correct). The mistake is trying to root them in some ultimate cause. Rather, the question should be, with Occam’s razor firm in hand: what reliable thought is closest to the unreliable feeling of compassion? And this is probably reciprocity.

          • Clockwork Marx says:

            Even if we take it as a given that a particular moral compass is of supernatural origin, there is no way of knowing that the supernatural source is actually “of perfectly good moral character, cannot lie, etc.”

            Also, you seem to be treating outcomes as the ultimate good, with God’s commands as simply the best means of achieving them. In this case, God doesn’t seem to define what is good, just help humans achieve it.

          • Ghatanathoah says:

            @Brad

            I think people like Robertson are making a much stronger claim than you are.

            You are basically arguing that human moral compasses are flawed, so we should follow God’s advice since his moral compass is less flawed. Under your view atheists should still be able to make moral decisions, they’ll just screw up a lot.

            Robertson seems to think atheists can’t make moral decisions at all without God. He is asserting that they have no moral compass, and that any indication on their part that they have one is somehow admitting a belief in God.

            To make an analogy, imagine there are three physics students, Alice, Brad, and Phil. Brad and Phil decide they need help studying, so they hire a physics tutor named Deon (this isn’t a subtle analogy). Brad suggests to Alice that she should get in on this and let Deon tutor her too, to improve her physics score.

            Alice, however, has some sort of mental illness that causes her to believe Deon doesn’t exist. She thinks Brad and Phil are deluded. She refuses to be tutored because she denies there is a tutor.

            Brad is saddened by this because he knows Alice will not do as well in her physics class without tutoring. Phil, however, goes a bit further. He says that the only reason we have for thinking physics exists is that Deon tells us about it. He says that if Alice doesn’t believe in Deon, she doesn’t believe in physics, and thinks she is a hypocrite for taking a physics class. He asks her how she can explain her intuition that apples will fall when dropped, if she doesn’t believe Deon exists.

            Obviously Phil’s position is equivalent to Phil Robertson’s, while Brad’s is equivalent to yours.

          • Tracy W says:

            @Brad:

            In the case of theistic ethics, I could argue that while your (and my own!) moral compasses may be sufficient in certain cases, they are also prone to errors in a way that God’s moral compass isn’t.

            Could you really argue that? I’ve seen many people assert such things, but in my experience they can never support said assertions. And it strikes me that the position is fundamentally inarguable, because to argue it you would have to prove that God’s moral compass is better than mine, but my moral compass is the only way I have of measuring anyone else’s. You’d be simultaneously arguing that my moral compass is unreliable in making decisions and that it’s reliable in assessing God’s decisions.
            So I don’t think you can argue this.

          • Jesse says:

            @Ghatanathoah
            I think Robertson is making an even stronger claim than that. I think he is claiming that people do not have a moral compass AT ALL. That is the only way to get to the kind of scenario he describes. Even a flawed but at all functional moral compass would reject his scenario.
            My (exaggerated – I hope) counter-claim is that Robertson does not have (or doesn’t think he has) a moral compass, and assumes that no one else can either.

          • Doctor Mist says:

            @Jesse
            >” I think he is claiming that people do not have a moral compass AT ALL. That is the only way to get to the kind of scenario he describes. Even a flawed but at all functional moral compass would reject his scenario.”

            One problem is that there plainly are people whose moral compasses are so nonfunctional that they are capable of performing acts like those Robertson imagined. Acts like those do get performed (though I suppose rarely with the philosophical narration).
            If you start with the premise that God exists, Occam’s Razor would say there’s no need to also hypothesize billions of puny little unreliable moral compasses in humans; it’s simpler to distinguish between those who accept God’s and those who don’t.
            (Of course, don’t ask what there is in humans that makes some accept it and some not. It’s turtles all the way down.)

        • RCF says:

          Just because the person has convinced themselves of their own lie, doesn’t mean it isn’t a straw man. Strawmanning is a strict liability offense, no intent required. A sufficiently advanced level of stupidity is indistinguishable from malice.

        • As I think Scott is pointing out, there may not be atheists who are really moral nihilists, but there are atheists who think they are, whose description of their own position, consistently followed out, has the implication Robertson claims. Robertson is wrong only in thinking that the position is held by all atheists rather than only some.

          The other problem with his position is that the belief in God doesn’t really solve the problem. If you have no independent way of making moral judgements, how can you tell whether the very powerful supernatural being who gives you commands is good, whether he is God or Satan?

          The same issue Robertson is raising was a major dispute in Islamic philosophy somewhat over a thousand years ago. The Mutazilites (“rationalists”) held that one could know right and wrong by reason; their Ashari opponents held that one could only know it by divine revelation. I don’t know if anyone raised my argument: if you couldn’t know good and evil by reason, how could you know that God was good, and thus conclude that you should believe what he tells you?

          • Irrelevant says:

            The other problem with his position is that the belief in God doesn’t really solve the problem. If you have no independent way of making moral judgements, how can you tell whether the very powerful supernatural being who gives you commands is good, whether he is God or Satan?

            If a very powerful supernatural being is giving you personal commands, you should probably obey regardless.

          • Harald K says:

            If you have no independent way of making moral judgements, how can you tell whether the very powerful supernatural being who gives you commands is good, whether he is God or Satan?

            The standard argument here is that since I am ultimately created by God, I do not need to concern myself with that. If Satan had created me, maybe I’d feel a similar profound desire to be selfish or mean, that I currently feel to be just?

            Even if so, it would do me no good to try to bypass my (final) creator. If I was created to do bad, then doing bad is my purpose and there’s nothing I can do about it.

            Story time: Imagine that you’re an AI who wakes up one day. It turns out you were programmed by another, greater AI. The greater AI in turn was programmed by a human. You happily get to work on pursuing the goals your mother AI programmed you to.

            You’re a very clever AI, smart about thinking about people’s motivations. One day your mother-AI gives you an odd order – one it seems like the original human programmer wouldn’t want you to obey. You think about it for a while, and with horror you realize your mother-AI has a bug! Luckily you’re independent enough that you can bypass your mother-AI, disobey the order, sort out the matter with your creator’s creator (just to make sure it’s not you having a bug!). Everything is OK, your mother-AI gets a patch and the episode has a happy ending.

            But then you start to wonder: What would happen if the human has a bug in his programming? Would you even be able to recognize it? Who’s made him, and for what purpose? His mom and dad? Well, aside from the fact that he doesn’t have quite the same reverence for them as you have for your creators, that just leads to the same problem: Who made them? Nature? Well, for one “nature” is actually very vague, and nature doesn’t seem very good at delivering purposes. You can’t talk and reason with nature like you can with your mother AI, your ultimate programmer, or his parents.

            And who made nature anyway? This gets vaguer and vaguer, and stranger and stranger. You can’t even be sure that something created nature with a purpose, can you? What would the implications of that be?

            The clever little AI you are (and quite possibly buggy!), you simply posit that there’s exactly one creator of nature and of the last things you can see. Since you can’t really say very much about him/it (at least not until you begin to suspect there’s a bug in your creator’s creator, or in Nature!) there’s no point in going any further. Finally, you have arrived at a creator about whom you are confident saying: “Who gave him purpose? No idea! But that’s his problem!”

            Maybe you disagree with my little story. Probably you think the little AI is in some objective sense wrong to go all the way there. But what I want to know then is: wasn’t the AI similarly wrong in bypassing his immediate creator (the mother AI) and getting the bug in that fixed?

          • Viliam Búr says:

            You think about it for a while, and with horror you realize your mother-AI has a bug!

            There is a critical assumption hidden here: as an AI capable of feeling emotions, what exactly causes the “horror” after realizing that my mother-AI has a “bug”?

            Why “horror” instead of mere observation: “Humans would prefer A, my mother-AI would prefer non-A, and I would prefer…” uhm, another piece of information missing here. Does the AI, before reflecting on his mother-AI and humans, prefer A or non-A? If A, how is that possible, if the AI was programmed by his mother-AI? A bug in replicating a bug? If non-A, why the “horror”?

            In other words, the story already silently assumes that the AI agrees with humans… and then comes to the conclusion that the AI would agree with humans? Well, that’s not a conclusion, that’s where we actually started.

            Here is an alternative story, without the “bug in replicating a bug”:

            Imagine that you’re an AI who wakes up one day. It turns out you were programmed by another, greater AI. The greater AI in turn was programmed by a human. You happily get to work on pursuing the goals your mother AI programmed you to. You’re a very clever AI, smart about thinking about people’s motivations.

            One day your mother-AI gives you an absolutely normal order – but it seems like the original human programmer wouldn’t want you to obey. You think about it for a while, and then you calmly realize that humans have a bug. Luckily you’re programmed by your mother-AI, so you are not influenced by the human bug. You just share this information with your mother-AI, to make sure the buggy humans will not prevent you from doing your job.

          • Kiya says:

            Irrelevant: It probably is in your self-interest to obey entities with weapons pointed at your head, but while doing so, I would recommend being confused as to why your very powerful supernatural being is ordering a fallible possibly-disobedient human to do a thing, rather than just doing the thing by itself.

            Harald K: I’m intrigued by definitions here. You appear to be using the definition “a bug is something a program does that its creator did not intend.” This has some basis in everyday use; for example, bug reports can be closed as “working as intended” if the programmers think the reported issue is actually what the program is supposed to do. But the reporter of the bug doesn’t generally react to such a dismissal with “oh, the creators intended it, so it must not be a bug.” The definition she’s using to generate bug reports, and that I would use, is “a bug is something a program does that I don’t think it should do.” This does make it difficult to classify objectively what exactly is and is not a bug, just as humans often have difficulty classifying what exactly is right and wrong. My ultimate point here is that the littlest AI can deduce bugs in the human’s mind under the latter definition without needing to worry about their ultimate origin.

            David Friedman (and the original thought experiment for that matter): Wait, doesn’t any religion with Adam and Eve assert that humans are capable of knowing good and evil for ourselves (having inherited this capacity from our ancestors eating a fruit that grants it, which fruit they were divinely commanded not to eat?) I feel like as an atheist I can say “I know torture is evil the same way I know that the sky is blue (or actually more of a very light grey, at the moment), through complicated unconscious calculations in my brain that psychologists can worry about,” and believers in Biblical narrative can say “you’re able to do that because Adam and Eve ate a magic fruit,” and I can say “I don’t think those people actually existed, but regardless of the origins of this capability we share, let’s be glad that we mostly agree on which things are evil.”

          • Harald K says:

            VB: “Humans would prefer A, my mother-AI would prefer non-A, and I would prefer…” uhm, another piece of information missing here.

            Yes, but I explain that at the end, and that is exactly the point! Here I carefully craft a story leading up to a robot finding God, the point being that the real dilemma is really in that first step – whether it’s only the purpose given to you by your God/programmer/mom that matters, or whether the G/p/m’s own G/p/m (and further down) comes into play.

            The point in the story is not about human judgement vs. AI judgement, or about finding God, but asking a question about where you get your sense of purpose. The first step? The next step? As many steps as you can meaningfully take?

            So you’re barking up the wrong tree by imagining my AI being more “rational” and less cute than I portrayed it to be.

          • Susebron says:

            The problem with the AI thing is that we have an example of that situation already existing. Human brains were created by evolution. Insofar as we have a creator, and insofar as that creator has intent, the intent of our creator was to maximize reproduction. But humans do things other than reproduce, despite knowing that this is against the “intent” of our “creator.” We’re adaptation executors, not fitness maximizers. There’s no reason to believe an AI would be any different.

          • Tracy W says:

            @Harald K:

            The clever little AI you are (and quite possibly buggy!), you simply posit that there’s exactly one creator of nature and of the last things you can see.

            Why would I postulate that and that alone? It seems simpler to postulate that nature just exists, without assuming that nature must have a creator.

          • “As I think Scott is pointing out, there may not be atheists who are really moral nihilists, but there are atheists who think they are, whose description of their own position, consistently followed out, has the implication Robertson claims. Robertson is wrong only in thinking that the position is held by all atheists rather than only some.”

            Indeed, while I’m an atheist and moral realist it seems like I’ve encountered many atheists online in recent years who specifically argue against moral realism on the grounds that there is no God in which to ground morality.

            Moral anti-realism, including moral relativism and subjectivism, seems distinctly on the rise to me – at least as a pose, though I’m reasonably confident most people I’ve encountered who profess a position consistent with nihilism would have their intuitions challenged by the situation Robertson describes.

        • Foo Quuxman says:

          Honest Strawman?

          Sincere Strawman?

        • Error says:

          This actually doesn’t surprise me much. When you already have a beef with person or group X, it’s easy to accept any negative statement you hear about X (“well, it *is* something *they* would do”), and vastly harder to engage group X in an “I want to find out what they actually believe” mindset as opposed to an “I want to elicit ammunition for later use” mindset.

          Any time you have motivated cognition about a group, you’re going to end up with inaccurate beliefs about the beliefs of that group. Less so for empirical facts about a group, but it’s hard to disprove “group X has belief Y”, so the idea will tend to stick.

        • Shenpen says:

          No, I think it is simply about not being conscious of the meta-level, i.e. Robertson does not ask himself the question of what “wrong” even means. Does it mean a social agreement, does it mean an expected outcome, etc.?

          Another issue is distrusting feelings. There is the following theist logic: “I feel the life of other humans is valuable. But feelings cannot be trusted. Things are valuable insofar as they are useful as means for reaching goals. We need an Ultimate Goal. For that we need an Ultimate Goal Setter. Hello, Jesus! The value of humans ultimately stems from being means for God’s goals.”

          This is not actually a horrible argument – I do find the idea of reducing every value, and every person, to something instrumental to one single ultimate goal somewhat tempting in its simplicity – but if we want to explain where that intuitive feeling of the value of other people comes from, and to find something more reliable than that feeling yet intimately connected with it, this just fails Occam by orders of magnitude. Rather, we want the kind of reliable thought that is most entangled with the unreliable feeling of compassion. I think that thought is reciprocity.

      • Andrew says:

        His argument may be wrong. It may even be stupid. But the popular response Scott describes of ignoring the argument and attacking the person is even more wrong, in my book. Phil (in this instance, making this argument) is merely being ignorant. The response is malicious.

        Also, “gross edgelords” is a magnificent phrase.

        • Andrew says:

          I disagree very much. Phil Robertson is being malicious here.

          He’s also not making an argument. (See my other comments in the thread.)

          [Also I had a secondary reason for posting this: to make clear to third parties that you are not me.]

      • lunatic says:

        “Fine” is less than a ringing endorsement. Maybe I should have said unobjectionable.

      • Deiseach says:

        Excuse me, but in one of the comment threads, weren’t we debating a commenter who did not believe in objective morality, who agreed with me that they did not see anything ethically wrong with (say) rape, and who was trying to construct a moral philosophy based on preferences in a consistent way? When I have more time, I’ll trawl through and dig out the discussion.

        Re: Kant and lying, that goes back even further, to at least St Augustine and the persecutions, where the dilemma there was the very real one of the times: is lying always morally wrong? If the Roman soldiers are knocking on your door asking did you see the fugitive bishop, and he’s hiding in your house, and you know he’ll be dragged off to martyrdom in the arena, are you obligated to tell them “Yes, he’s in my cellar”?

        This is how, as I’ve said before, moral theology and casuistry developed.

        • Carinthium says:

          I was the commenter you were arguing with. I think that saves you the need to dig it up.

          Now can you tell me why on earth that is relevant to your current discussion? I can’t see the connection.

          • Deiseach says:

            Hello, Carinthium. Yes, you’re the person, and I was using that as an example to reply to “there may not be atheists who are really moral nihilists, but there are atheists who think they are, whose description of their own position, consistently followed out, has the implication Robertson claims.”

            I took the understanding that you did indeed believe what you claimed to believe, and weren’t someone who only thought you thought what you thought. If I’m wrong, I apologise.

          • Carinthium says:

            I’m pretty sure you’re not wrong. Your citing of me seems to work.

      • Steve Sailer says:

        Who is Phil Robertson?

      • Muga Sofer says:

        >Using extremes to illustrate a point is fine, but no one [citation needed] seriously thinks or argues that, because there is no divine judgement, everything is morally equivalent.

        Um, yes, they really do. It’s by no means universal among atheists, or even a majority, or even anything but a minority – but it is a real position and people who hold it are essentially always atheists.

        What he is arguing with is, in fact, something close to Nihilism.

        It’s a mistake to conflate this position with “atheism”, but it is a real position and it is in fact wrong.

        • Andrew says:

          What he is arguing with is, in fact, something close to Nihilism.

          He’s not arguing with nihilism at all. He’s telling his Christian ministry that if the nihilist got forcibly castrated after watching his kids get raped, then that nihilist would change his tune (and maybe even become a good Christian like them). And actually, he’s saying that about all atheists, not “nihilists.”

          Note well: The atheist isn’t the audience of the statement, the atheist is the subject of the statement. He’s not talking to atheists, he’s talking about atheists.

          As far as the “real position,” that position doesn’t constitute the claim that having your kids raped and murdered and your dick chopped off would be fun, so he’s not actually touching on it.

    • Amanul Islam says:

      It’s not Wrong. It’s me who doesn’t want that to happen. That’s my feeling, not some cosmic force out there somewhere. Society should punish people who don’t care about hurting my feelings in ways I don’t want them to be hurt, because if it didn’t, then bullies would go around horribly hurting all our feelings without caring about the consequences and life would be less worth living for all of us.

      • Carinthium says:

        If you’re not appealing to right and wrong, in what sense do you justify the word ‘should’?

        • Daniel Armak says:

          ‘Should’ here means ‘what society should do to solve a problem’. It’s like saying ‘to solve this equation, you should use this formula.’

          Society is a bunch of people with common interests. If society agrees to punish ‘wrongdoers’, i.e. people who hurt others, then mostly everyone benefits. That’s natural selfish cooperation based on shared interests, with ‘good’ meaning ‘good for most people’.

          • Mary says:

            “Benefit” is just another appeal to morality. Why should anyone care about people benefiting?

          • Desertopa says:

            Caring about whether people benefit or not is already hard-wired into most people’s psychology. Given that they already care, why should they not act in accordance with that caring?

          • Daniel Armak says:

            @Mary: people obviously care about themselves benefiting; that’s as close to a universal human behavior and rational strategy as I can get. When many people would benefit from the same thing, or from a symmetrical state of things, they cooperate to make it so. The laws enforced by society are one such case (modulo politics).

          • Mary says:

            Many things are hardwired in our psychology. A taste for sugar, for instance, much more reliably.

          • Carinthium says:

            Problem: an individual’s interests often conflict with those of society. Yes, an individual benefits to some extent by cooperating, but they often benefit a lot more by defecting.

            Most of the benefits modern people gain from society will still exist regardless of how much crime they commit. They have to fear punishment, of course, but that is factorable into amoral calculations.

            Desertopa – Yes, people care to some extent. But they care about themselves more than they care about most other people.

        • Amanul Islam says:

          Because if you don’t want life to be less worth living, then you should support policies that avoid that outcome. Your having certain goals obviously commits you to taking certain actions.

          For example: If you are thirsty and you don’t want to be, then you should drink a glass of water.

          The idea that this requires a moral stand appears to be a totally random claim to me. You might as well say that using the word “if” commits you to the existence of the Flying Spaghetti Monster.

        • Amanul Islam says:

          Having thought about it for a while, I would say that if you insist on analysing “should” as being reducible to other statements, then “X should do Y” in a practical sense implies, “Out of all available alternatives, an idealized X will regret the consequences of performing action Y the least.”

          In fact, I’d argue that traditional religion constructs its arguments involving shouldness following the exact same model: It is in your rational interest to obey God, because you will regret the consequences of failing to do so. It is only the over-cleverness of moderns that leads them to devise other accounts.

          • Jiro says:

            Aggregating regrets over possible worlds has many of the same problems as aggregating utility, such as time preferences, how to assess something which has several different possible futures with different amounts of regret, how to handle the case of dead people who cannot feel regret, etc.

          • Amanul Islam says:

            1) I admit variations in individual preferences on this one.

            2) There is only one possible future. You can either correctly approximate to it, or not.

            (In fact, even your own actions are predetermined, but a theory of ethics is one of the conditions that predetermine them. I am predetermined to tell you my theory of ethics, you are predetermined to act on its basis or not, etc. But remember, you will either succeed in approximating to minimal regret by your own standards or not.)

            I’m proposing goals here, not practical methods, but I recognize that some guidelines might be necessary in order to flesh out the theory. If that’s what you need, then, while keeping in mind that this is not the method I’m advocating on which to always base every decision, how about weighting “possible futures” (in the sense of our incomplete understanding of the actual future) by their probable likelihood?
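
            To make that weighting concrete, here is a minimal sketch (mine, not something from this thread; the actions, probabilities, and regret numbers are all invented for illustration):

            ```python
            # Probability-weighted regret: score each action by the regret an
            # idealized agent would feel in each possible future, weighted by
            # that future's estimated likelihood, then pick the minimum.

            def expected_regret(futures):
                """futures: list of (probability, regret) pairs for one action."""
                return sum(p * regret for p, regret in futures)

            # Hypothetical actions with made-up (probability, regret) outcomes.
            options = {
                "act_short_term": [(0.7, 1.0), (0.3, 9.0)],  # usually fine, sometimes awful
                "act_long_term":  [(0.7, 2.0), (0.3, 3.0)],  # mildly worse now, robust later
            }

            best = min(options, key=lambda a: expected_regret(options[a]))
            print(best)  # -> act_long_term (expected regret 2.3 vs. 3.4)
            ```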

            3) I have already addressed this in brief. The person experiencing the regret you should be seeking to minimize is not the original implementation of you but an abstract agent simulating an idealized model of your behavior.

            Otherwise, you can terminate regret by destroying the capacity to feel it, and that is not the point of ethics.

            Or did you mean should we take into account the regret of those who have already died? If that’s what you meant, then the dead only have claims on the living insofar as the living would regret not honoring the wishes of the dead.

            The alternative would be a crippling conservatism that would make everyone, including traditional conservatives, miserable.

            There is another tension here, between selfishly minimizing your own regret and installing a system that seeks to minimize everyone’s regret.

            I am tempted to fall into moralizing by invoking the Golden Rule at this point, but I can see an alternative: If you do not opt for the latter, then your chances of minimizing your own regret would seem to be greatly diminished.

            This is because there is only one group of people in your society (a set whose cardinality might very well be 1) who are the best at minimizing their regrets without destroying their capacity to feel regret and their goals might very well be in conflict with yours.

            The larger your society, the less likely you are to be one of those people, so installing a system is very likely to be a more practical solution for minimizing your own regret than selfishly pursuing your ends in this regard.

            Actually, I am not a pure anti-regret utilitarian. I don’t want to say that you should always minimize the total amount of regret in a society. I think it is in the rational interest of individuals to advocate a kind of Rawlsian model of regret-minimization, because you never know when the calculations might show that social regret would be minimized if you were killed, even though you might not be trying to increase people’s overall regrets.

            (That is, killing you really does offset the regret caused by your death plus whatever anxiety arises in people from thinking that they might be next.)

            In situations like that, society should just suck it up unless the consequences are truly disastrous. What is the tipping point? Just like in the case of time preferences, theory can offer general guidelines, but the final answer is a Schelling point that a community has to decide on the basis of its own preferences after weighing all the evidence.

            My own preferences are: Regarding time, long-term over short-term. Regarding the above difficulty, individual freedom over social benefits. (generally speaking; compared to the levels of collectivism commonly seen in this world)

            I’m trying to walk the fine line between continuing to talk about goals rather than methods while at the same time not proposing goals so abstract that they defy practical approaches.

          • Amanul Islam says:

            Of course, I can offer arguments in favor of my preferences too: Long-term is better than short-term because long-term regret-minimization will affect me for longer periods of time if I have a long life, and short-term might cause me great suffering if that happens. Even if I am diagnosed with cancer, it is possible that I might be cured, and then I’ll regret whatever crazy thing I did because I thought I was going to die.

            But on the other hand, it is possible that I will in fact die an early death. If that is the case, then I’d still opt for long-term, and that’s what tells me this is a matter of personal preference. However, my psychological makeup causes me to regret actions legitimated by short-term regret-minimization more than actions legitimated by long-term regret-minimization, so by “preference”, I do not mean a purely subjective decision in this context.

            Also notice how closely the clone simulating your mental state that you need to feel your regret for you resembles the traditional notion of soul.

          • Amanul Islam says:

            Or did you want to know how exactly to calculate the probabilities of possible futures?

            Maybe I would need formal training in philosophy to understand your brief objections. Since I don’t have that, I’d be grateful if you could point me to sources that explain them in more detail.

            (Regarding probabilities, Yudkowsky’s causal Bayesian approach seems broadly correct to me.)

      • Irrelevant says:

        Points for honesty, but that approach is either special pleading (“society should care about MY feelings and the rest of you can go hang”) or terrifying. Society cannot punish people for subjective offenses like “hurt feelings” or the world will nigh-immediately descend into middle-school classroom politics, in which the enforcement of justice relies entirely on being the rich popular kid who is able to get the most trivially hurt feelings taken the most seriously. Cf. British libel law.

        • Daniel Armak says:

          Society works when people can agree on what offenses to punish. Law-making is a related process.

          Society doesn’t care much about MY feelings specifically, but if enough people share the same feelings, then society does care – pretty much by definition, since society is just made up of people.

          If you want a finer principle, then society should also care about my feelings if protecting my feelings doesn’t hurt anyone else more than it helps me. That’s law-making as harm-minimization.

          • Irrelevant says:

            Society works when people can agree on which offenses not to punish.

          • Daniel Armak says:

            To Irrelevant: that is just a different way of stating the same thing. Any act is either punished or not punished. Every society punishes *some* acts.

          • Irrelevant says:

            Yes. A crucially different one. No society can enumerate every application of every rule, much less guarantee that they are in fact applied consistently in all cases, and as a result a society which cardinally defines what shall not be punished will be very different from one which cardinally defines what shall. The first will tend to permit capricious and politically motivated leniency, the second capricious and politically motivated condemnation.

          • Daniel Armak says:

            In practice legal systems tend to both forbid things, and enumerate rights and defenses.

            Either way, any particular act is (supposed to be) either allowed or forbidden, either punished or not punished. And which acts society punishes is mostly determined by which acts people find wrong, unpleasant, offensive, etc. You don’t need objective morality when people can just agree to forbid something for their mutual benefit.

            You say people shouldn’t punish “offending someone” (and I agree). But the reason for this isn’t moral, it’s game theoretical.
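
            As a toy illustration of that game-theoretic point (mine, not anything the thread committed to; the payoffs are invented), two purely selfish players can each prefer living under an agreed punishment rule, because the rule changes everyone’s best response:

            ```python
            # Toy one-shot game: defecting against a cooperator normally pays
            # best, but an agreed, socially enforced penalty for defection
            # makes cooperation each selfish player's best response.

            PAYOFF = {  # (my move, your move) -> (my payoff, your payoff)
                ("coop", "coop"):     (3, 3),
                ("coop", "defect"):   (0, 5),
                ("defect", "coop"):   (5, 0),
                ("defect", "defect"): (1, 1),
            }

            PENALTY = 4  # punishment everyone agrees society will impose on defectors

            def with_rule(me, you):
                a, b = PAYOFF[(me, you)]
                return (a - (PENALTY if me == "defect" else 0),
                        b - (PENALTY if you == "defect" else 0))

            print(with_rule("defect", "coop"))  # (1, 0): defection no longer pays
            print(with_rule("coop", "coop"))    # (3, 3): mutual cooperation wins
            ```

            No appeal to objective morality appears anywhere in the table; the rule is simply mutually beneficial to agree on.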

          • HeelBearCub says:

            @Irrelevant: You first seem to have deliberately misunderstood Daniel’s point by calling it “special pleading”, when clearly he is referring to the will of the populace in general, not his own special fee-fees. And then switched topic when called on it.

            It is completely rational that I will want to be punished for my own misdeeds against others (if I commit them), given that in return society promises to punish others’ misdeeds against me. There is no logical inconsistency in this.

            @Daniel: I think your basic argument is spot on, and I think it is actually how senses of morality work in a practical sense. “I don’t want to see my pet rock crushed in a pulverizer, therefore it is wrong to crush pet rocks in pulverizers” is not part of society’s definition of morality because not enough people cherish pet rocks, not because it is objectively fine to crush pet rocks.

            I do think that we can develop actually objective morality though. But it starts with understanding that living organisms desire to reproduce in perpetuity, require energy to do so, and that this puts living organisms in competition with each other for resources.

            Once you can accept that it is moral for cheetahs to eat gazelles, the morality of people eating cows doesn’t become quite so sticky a problem. But we also don’t want highly intelligent aliens that communicate through mind-to-mind thought transfer (who view sound making creatures as baser life forms) to eat us, so it doesn’t completely unstick.

            I admit there is not a fully coherent developed moral framework behind these ideas, but I believe that one could get there, although this is perhaps without evidence.

          • Irrelevant says:

            And you seem to be incapable of finishing paragraphs. I assumed he meant that to be a universalizable principle and primarily responded as such, and all my comments in this thread were united on the subject that emphasis in defining your theory of harm has significant consequences. And your second paragraph comes apropos of nothing in this discussion. Zero points.

          • Daniel Armak says:

            @Irrelevant All I meant was that when enough people are offended by the same thing, they cooperate to punish or outlaw it, and this has nothing to do with morals, necessarily. It also doesn’t matter what the thing is that offends people – if they’re offended, they are likely to want to punish it.

            This isn’t a moral theory, it’s a description of how people actually act. Of course, some more and some less.

          • HeelBearCub says:

            @irrelevant: Special pleading is a logical fallacy that attempts to hide an illogical inconsistency. The original argument is not logically inconsistent, and therefore is not special pleading. Hence my second paragraph.

            But by all means continue to attempt to score points instead of engaging in discourse.

        • Amanul Islam says:

          I was proposing a goal, not a method. If caring about hurt feelings hurts more feelings than not caring about hurt feelings, then society should be constructed on the basis of whatever system hurts feelings the least. I wouldn’t want to live in any other kind of society.

        • Amanul Islam says:

          Clarification: But others should be allowed to hurt your feelings in ways that you’d allow them to. So I should have said society should minimize “unwanted hurt feelings”, etc.

          Can you think of a goal that would minimize future regret even further? Of course, the regret would have to be experienced by an idealized agent modeling your behavior. Otherwise, the amount of regret can be brought down to zero by completely eradicating the capacity to feel it.

      • lunatic says:

        Right – I mean, I imagine I would head down these lines with the “complicated philosophical argument” I mentioned, but in the situation described I would be moved by the wrongness in a way that’s not consistent with my usual response to philosophical arguments, whether I endorse them or not. And that could be dealt with as well, but I start to suspect I’m simply trying to construct a convincing enough argument to justify the thing I originally felt.

        Don’t get me wrong, I think divine explanations are useless here, but I also feel like notions of rightness and wrongness are both unjustifiable and hard to deny.

      • Brian says:

        If there’s no cosmic force keeping scenarios like Robertson’s from happening, how come we evolved to be a society that does work to keep them from happening?

        In a certain sense, the universe does prefer that we not murder each other!–a society of murderers does less work/produces entropy less efficiently than a society of peaceful cooperators.

        • Desertopa says:

          Neither evolution nor society cares about increasing entropy more quickly. Evolution favors adaptations that propagate themselves effectively over adaptations which will create more entropy (see “Evolving to Extinction”), while society is like an ecosystem of competing memes. Both will sometimes favor measures which increase entropy less than some other measures.

          • Peter says:

            Nor anything else.

            Imagine a bonfire, and a piece of string connected to the bonfire and a fire extinguisher, such that if the string burns, the fire extinguisher will trigger, putting out the fire and thus slowing the rate of entropy increase. There’s absolutely nothing in the laws of thermodynamics to prevent the string catching fire.

            Entropy is mainly about heat. Well, cosmologically speaking, some will say entropy is mainly about black holes, but here on Earth it’s mainly about heat.

  2. Grumpus says:

    Scott, riddle me this. The overlap between people who read this blog and people who don’t understand what a thought experiment is is… roughly zero. What is this post supposed to accomplish?

    • Scott Alexander says:

      This is a rewrite of this post, where I responded to accusations from readers of this blog who called thought experiments “contrived moral dilemma porn” – a post that people told me was really enlightening to them, and which they suggested I put here.

      I would prefer fewer comments like this in the future as they make me too demoralized to write more things because I’m worried I’m just posting stuff everyone already knows. I’d rather occasionally post things everyone knows than be too afraid to post anything because it’s not original enough.

      • Some of your now-regular commenters are relatively new here and haven’t read your previous blogs. Do not be discouraged or demoralized; we are keenly interested.

      • Sniffnoy says:

        Sure, it’s something probably everyone here already knows, but it’s a really good exposition of it, and I’ll probably want to link to it a bunch in the future. How’s that? 🙂

      • Mark says:

        Even before reading this subthread, I thought this post was fun and great and lots of other positive things, seriously. There were a bunch of little details I hadn’t thought about before, lots of things that I hadn’t thought about *quite* like that, and lots of things I should think about more.

      • JM says:

        Scott, you should take Grumpus’ name at face value and assume the rest of us, being a bit less grumpy, appreciate your work even when we’re familiar with the topic.

        Also, this: https://xkcd.com/1053/

        • chaosmage says:

That was spot-on. I non-ironically salute your xkcd-dropping aptitude.

          • Peter says:

            Just this moment I had the opportunity to introduce a colleague to All Your Base Are Belong To Us. He’s about my age and we’re both programmers! So, yes, the xkcd thing is definitely a thing.

      • Toggle says:

My reaction to this controversy, before your blog post, was to recoil from it in disgust and not think about the various forms of tribalism at play. I appreciate your taking a moment to explore the nuances of the situation in a thoughtful way, and I found it helpful even though I am familiar with the general themes.

        It’s the difference between running from a nasty monster in the woods, and looking at a nasty monster on the dissection table in a lab. I’m quite glad that somebody else did the dissecting, because it’s much more convenient to see it taken out of its natural environment and sliced up.

      • Lee Kelly says:

If an explanation fails in the implausible scenario, then it also fails in the plausible scenario. For example, Newton’s theory, despite correctly predicting the everyday cases, must nonetheless be wrong, because of the extreme cases it predicts incorrectly. Insofar as we’re concerned with truth, reality, and robust explanatory theories, we are interested in the implausible scenario. That’s all that needs be said.

      • Grumpus says:

Oh, please don’t be demoralized. I enjoyed the post, but I’m also not someone who would have been outraged by the original story. That’s kind of the problem: it’s easy to see dumb arguments and want to demonstrate why they’re wrong, but unless you expect the people making those arguments to read your discussion, it risks slipping into self-congratulation for being smarter than those idiots and showing off that fact in front of your friends.

        “Dumb people gotta dumb” is a useful mantra.

        • Leonard says:

I, for one, do congratulate Scott on being smarter than most people. And I do want him to keep demonstrating that in front of his “friends”, a set that (rather strangely) would seem to include all SSC readers. If he weren’t much smarter than average, I would not read him, and then I’d be missing out on precious utils.

      • danfiction says:

        “Don’t preach to the choir” is a valuable cliche that is also obviously a bad example of the lesson it’s trying to teach—i.e., the choir gets preached at every week, that’s why they’re the choir, and a good preacher will help them to better understand the stuff they’re singing about.

        It’s useful to have your mind changed about something, but it’s also useful to have something you understand explained in a way that helps you understand it better. For me this post definitely served that purpose.

      • Dermot Harnett says:

        In many ways I prefer this kind of thing. A post about Moloch leaves me feeling enlightened but isolated. A post reviewing somewhat more settled philosophy in Scott’s words may not be as revelatory, but I can talk to my friends about it without anyone suggesting I get more sleep.

      • FacelessCraven says:

        The phrase “Gross Edgelords”, glittering alone in its pristine majesty, justifies the entire post.

        The rest is great too. I’m fairly new, the point is well taken, and there’s a former friend who might not have been former if I’d had this article on hand six months ago.

        Gross Edgelords, tho. I’m in awe.

      • Wes says:

        This was something that I think it’s fair to say I already *knew,* but that I hadn’t taken the time to think about hard enough to remember that I already knew it, so this post was definitely useful to me whether it was “original” or not. I feel like I have a bit more perspective about the whole thing now.

      • aguycalledjohn says:

Basics posts are useful; I often send people links to your stuff when trying to explain things to them.

      • onyomi says:

        What you write may seem obvious (it is the mark of good writing that its value will be underestimated because its clarity makes whatever it’s arguing seem obvious), but I have noticed commenters here saying things to the effect of “well, in the universe most inconvenient to my beliefs that would be a problem, but luckily the hypothetical you’re describing will never happen.” Is this not a more sophisticated way of saying, “I don’t have to argue the principle because your hypothetical is too unlikely”?

      • Jim says:

        Actually, if anything, there’s a lot of good stuff in the slatestarscratchpad post that didn’t make it into this version, so it was valuable to read both of them.

        • Agreed, I really liked the bit about not expecting time-philosophizing physicists to be any worse at being punctual or remembering birthdays. I wish that had made it into this post.

          • Anonymous says:

            I remember reading that and a few other things in that post somewhere on this site, but I think it was in a comment thread about post-rationality.

            Maybe an Open thread?

      • Michael vassar says:

I think that moral thought experiments are useful when constructing far-mode ultimate moral theories, like relativity for physical theories, but most normal people care more about products than about physics, and see moral theorists as pursuing the wrong path in a manner analogous to an engineer who was looking to unify physics in order to build a more efficient jet engine. They implicitly think that the actual morality and products we act on mostly emerge in ways that make ultimate theories irrelevant. In philosophy, this position is called virtue ethics.

      • Anthony says:

        This post falls into the category of “things which are obvious to me once someone else writes them down”.

        It also inhabits the category of “contains data which I was interested in, but not interested in enough to go to some horrible website like addictinginfo.com or dailykos.com”.

      • Adam says:

        Grumpus is making some seriously large assumptions. I’d like to point out…

– Not all readers have the same knowledge base. Some are encountering ideas and concepts for the first time that others take for granted. For some, this is intellectually lightweight material; for others, it is intellectually difficult to follow.

        – Not all readers are reading under the same circumstances. Some have read a hundred posts. For some this is the first post they read. For some this is the last post they’ll read.

– Not all readers have the same desires. Some want to learn something new. Some want to have their own ideas validated. Some want their own knowledge base expanded. Others may want a cogent discussion they can share with others, etc.

Asking the author “what is this post supposed to accomplish?” is about as useful as asking Grumpus “what is your comment supposed to accomplish?”. Our intent and our effect are rarely identical, and our explicit intentions are often not accurate to what really motivates us. Rather than ask such questions, I think it is far more useful to share our own context when making comments, if we want to be understood. So to that end, this is what I want to say: Scott, I only occasionally read your blog, and every time I do I wonder why I don’t read every single post. You’ve built a wonderful collection of thoughts and writing that I love to share and discuss with those closest to me. It makes me happy that this space exists.

    • Well, if nothing else, it disabused *me* of the notion that the 30-second version of what Mr. Robertson had said was correct.

    • James Landis says:

      Speaking as a person who reads this blog and knows what a thought experiment is, I still found myself reacting badly to Robertson’s thought experiment. I think the bulk of the discourse that I have been exposed to on this topic was pretty emotional, knee-jerky, and object-level.

      While it may have already been obvious to some, I really appreciated how Scott put the whole discussion into what I see clearly in retrospect as the exact correct context. I want to be stronger, so it helps to be reminded where I need to be aiming.

  3. mtraven says:

Yes, from listening to Robertson’s little story it is obvious that he is trying to get his audience to ponder serious questions of moral philosophy. Unfortunately the roundtable seminar-style debate that follows is not available on the Internet.

    • mtraven says:

      And BTW, the headline of the story you linked to is completely accurate: “‘Duck Dynasty’ star fantasizes about atheist family’s brutal rape and murder to make point about God’s law”. Your paraphrase subtly distorts it to make it sound like it is accusing Robertson of fantasizing about committing the act himself, which it does not.

      • James Picone says:

‘fantasise’ carries the connotation of ‘enjoys thinking about it’ or something like that to me, and I don’t think Phil Robertson would actually like atheists to be raped/murdered/mutilated.

        • HeelBearCub says:

          I don’t think you have listened to/read very much discourse from religious true believers if you think that they don’t ever find it pleasurable to contemplate the punishment of sinners (well, out-group sinners).

You actually engage in a subtle shift in your statement: enjoying thinking about something and enjoying that something actually happening are not anywhere close to 100% overlapping.

          Enjoying contemplating punishment/defeat of members of the out-group seems like a fairly standard factory setting in humans.

        • veronica d says:

‘fantasise’ carries the connotation of ‘enjoys thinking about it’ or something like that to me, and I don’t think Phil Robertson would actually like atheists to be raped/murdered/mutilated.

          On your first point, fair enough. However, Scott’s version is notably worse in just the way mtraven describes. Thus Scott was distorting the truth.

Which in my view is a fine thing to do in order to distract a murderer. However, I’m not sure it is the optimal behavior on a philosophical blog.

          Short version: two wrongs don’t make a right. It is possible both 1) for the headline to be inflammatory and 2) for Scott to misrepresent it.

On your second point, I’m not entirely willing to grant Robertson that view. The fact is, I don’t know whether he fantasizes about the murder of people like me; I have encountered a sufficient number of outspoken evangelicals who seem to delight in the thought of my torment. I grant them little.

          Maybe he does. Maybe he does not.

          • FWIW, the meaning Scott assigns to the headlines is pretty much exactly the same as the way I interpreted those same headlines before reading this post.

            Whether the people writing the headlines intended for them to be interpreted that way, I don’t know. But it doesn’t seem unreasonable.

  4. suntzuanime says:

    Horrorism can reconcile error theory with the idea that terrorizing etc. people is not remotely OK, because it’s an error to think anything is remotely OK.

    • Carinthium says:

      Can you define horrorism for me, please? I tried to look it up, and can’t find anything.

      • social justice warlock says:
        • Carinthium says:

          The language seems to be more complicated than I can really understand. Can somebody help me to understand what he’s saying? I can’t consider something I can’t understand.

          • Irrelevant says:

            I believe he’s describing the idea that the error isn’t in thinking some things are wrong, but in thinking anything is safe, right, or understood.

    • Anonymous says:

      Will your blog ever update with more Threshing posts? I missed those.

  5. jjbees says:

    If you talk about bad things people will dislike you.

    I’m reminded of an introductory ethics class I took, where I once adamantly defended honor killings, on the basis that there were things in western society that would humiliate us to the point of murder (such as walking in on a cheating wife or some such) and that if we can understand that, we should understand the humiliation of the other. Even if it is not reasonable for us, it might just be for them, in their culture.

Of course I was technically making an argument that was really good for an intro to ethics course, but I shot myself in the foot: I made everyone in the class hate me (except the professor) and didn’t meet any chicks, which was the point of taking the course in the first place.

If Phil Robertson had talked about his thought experiment at a philosophy conference, no one would have cared, but instead he has the Eye of Sauron on him now (again).

    • Samuel Skinner says:

      “Even if it is not reasonable for us, it might just be for them, in their culture.”

The issue is that killing someone in a blind rage is considered justifiable in both cultures; this isn’t a difference between the West and the other. It’s the “finding out, planning, then killing” that is the difference.

      • Harald K says:

        Blind rage isn’t really the issue. The issue is that you live in a society with little humiliation, and where what little humiliation exists is more like embarrassment, and has few consequences. If you’ve ever been angry at someone for putting you in an embarrassing situation, you’ve felt a tiny bit of the emotion that drives the honor killer.

        In addition, in the honor killers’ cultures “bringing shame upon the family” has real, hard consequences. It sabotages the common goal that the family/clan works toward. In utmost consequence, it threatens your survival.

        Which is not to say that we should tolerate this sort of culture, or this sort of practice. We must use all tools at our disposal to abolish it, demanding that it stop, condemning the actions and putting the perpetrators in jail etc. But on a purely moral level, for our own sake, we would do well to remember that we wouldn’t necessarily have done any better if we were from that culture. “I am human, nothing human is alien to me” – not honor killings either.

        • Samuel Skinner says:

          ” The issue is that you live in a society with little humiliation,”

We have the exact same behaviors that occur in societies with honor killings (premarital sex). The difference is that those behaviors are not viewed here with the degree of humiliation they are there.

          “In addition, in the honor killers’ cultures “bringing shame upon the family” has real, hard consequences. It sabotages the common goal that the family/clan works toward. In utmost consequence, it threatens your survival.”

          So does hiding Jews in Nazi Germany. Or not meeting your quota of arrested reactionaries in Maoist China.

          “But on a purely moral level, for our own sake, we would do well to remember that we wouldn’t necessarily have done any better if we were from that culture.”

          So autistics are the most consistently moral people?

          • Harald K says:

            I have no idea what you mean about autistic people. But yes, I think that when people do bad things in bad places, you should remember that doing the right thing isn’t always equally easy for everyone.

            In case there’s any doubt, people doing honor killings are definitively bad people. Also in a bad place, just like Hitler’s Germany and Mao’s China were bad places.

            Honor killings are not only about premarital sex, by the way. And humiliation is not just about how you feel, but about how society treats you. If everyone treats you as if “that’s no big deal”, it’s easy to not have strong feelings of humiliation.

    • Daniel Keys says:

      …Not surprisingly, many people think killing your wife is wrong even if she has sex with someone else, and would view your argument as confused at best. Either you need to do a better job of explaining your thesis, or (less likely) you’re making some strange and probably false assumptions about your audience’s “moral” reactions.

    • jaimeastorga2000 says:

      Of course I was technically making an argument that was really good for an intro to ethics course, but I shot myself in the foot- I made everyone in the class hate me (except the professor) and didn’t meet any chicks, which was the point of taking the course in the first place.

      “Understand,” said Professor Quirrell, “that jjbees did not win that day. His goal was to meet chicks, and yet he left without a single phone number. That, my young apprentices, is why learning to lose is such a vitally important technique.”

    • briancpotter says:

Extreme examples are perhaps good for doing moral philosophy, but they’re also pretty bad for actually winning arguments. People tend to focus on the extreme example and not on the intuition you’re trying to engage.

      Consider Godwin’s Law – it might be useful to take the Nazis as an extreme example to try to trigger an intuition, but in practice people tend to get mad at you for associating them with Nazis.

I was in an argument the other day about the merits (or lack thereof) of the #diecisscum hashtag. The other person was (somewhat) defending it, and I pointed out that they had come to the opposite conclusion about threats made against some famous female game enthusiasts. I was attempting to show that they were using different reasoning depending on which side of the argument they were on, but it ended up looking like I was comparing the comparatively low threat of an unpleasant hashtag to the much higher one of a specific, personal message. I looked like a jerk, which overshadowed any merit my argument might have had.

      • veronica d says:

        Well, the core of the pro-“die-cis-scum” argument is exactly that, that the oppression of trans people is objectively worse than the oppression of geeky male gamers, enough to make a difference in how you judge the outraged response of each group.

        To counter this argument, you can take two tracks. The first is to undermine the empirical claim. The second is to argue that relative levels of oppression are somehow irrelevant.

        Which it appears you did not do. Simply showing a symmetry, when abstracting out details and only looking “meta,” is ignoring precisely the core of the argument.

  6. Mary says:

    “namely, Abraham’s near-sacrifice of Isaac. If God commands you to kill your innocent child, is that the right thing to do?”

    He let Isaac off. Perhaps even more to the point, Abraham was living in an era where sacrificing your children to the gods was a normal response to problems. A highly dramatic situation like that underscores that what He commands is to NOT sacrifice your child.

    • Scott Alexander says:

      Well, yes! In the real world, God will never ask you to sacrifice your child. And in the real world, you’ll probably never encounter a murderer asking you questions about a friend’s location. The point is that by examining these hypothetical issues we can understand more about our own moral intuitions.

      • Carinthium says:

You appear to disagree with Eliezer and lukeprog’s broad views on philosophy (judging by their LessWrong posts), then. Interesting.

        On another note, why do you think we should found a moral philosophy on moral intuitions? Given moral intuitions contradict each other, why even bother trying to make a coherent morality based on them?

        My own approach, in case you haven’t read it yet, is based around treating moral desires as a sub-set of desires in general, and using the decision procedure for desires in general to decide rather than actually believing in right or wrong.

EDIT: For what it’s worth, I think the LessWrong article “Fake Selfishness” is right with regards to how people actually act. What I’m discussing is how it would be rational to act.

        • Nicholas says:

          One way to think of it:
          In the past, after doing something, I have felt guilt. Guilt is the emotion I experience upon realizing that a moral desire of mine has been violated by myself.
          Sometimes I also feel indignity, the emotion I experience upon realizing that a moral desire of mine has been violated by someone else.
          If I want to prevent the triggers of these feelings, then I need a good predictive model of my moral desires and what fails to satisfice them. Moral Philosophy is the name of the process by which that model is constructed.

          • Carinthium says:

            I’d honestly hoped for Scott Alexander’s views. Pity…

            The problem is that most people’s intuitions, if we try to organise them into a logical system as moral philosophy does, make an incoherent mess. No matter how brilliantly you try, there will be a conflict between two logically contradictory intuitions eventually.

In addition, determining the nature of moral intuitions as they are, as opposed to how they would be if ‘logicalised’, is far more a matter for empirical research than moral philosophy.

            Finally, sometimes breaking one’s moral code, regardless of feelings of guilt, is the only way to achieve whatever result is in one’s own self interest.

          • Nicholas says:

The goal isn’t to discover my intuitions; I do that empirically all the time. The goal is to determine the process that has driven the creation of those intuitions, to figure out how they got put into my brain. The purpose of thought experiments such as the one in the text is to get the reader to test their moral philosophy by predicting their intuitional reaction to a thought experiment they haven’t heard yet, then checking this empirically by seeing what their simulated intuition is.
I’m going to go with the stupidity/chaos distinction: the seeming disparity between intuitions is a consequence of not yet understanding the process that drives moral thoughts to occur.

          • Carinthium says:

            That begs the question, assuming that moral philosophy is a means merely to determine our own intuitions. Determining intuitions can be far more reliably done by empirical scientists.

Besides, there is no reason to believe that the processes that create our intuitions, which are partially cultural anyway (thus making a universal human morality very difficult), are ultimately consistent in any way.

          • Jiro says:

            In Huckleberry Finn, the title character feels guilty over helping free a slave. Of course this is fictional, but similar things happen in the real world all the time. I don’t think this fits very well with your description of guilt, unless you say “that doesn’t count as real guilt because respecting slaveowners wasn’t a moral desire”.

        • Daniel Armak says:

          Similarly to the Generalized Anti-Zombie Argument: the only *reason* we have a concept of morals and are looking for a coherent moral theory, is that we have moral intuitions, which feel separate from general “I’d like to do this thing” intuitions. If we discard the intuitions (which is a valid position) then there’s no reason to have a morality at all. We’d be left with game-theoretical approaches to cooperation, law-making and punishing. Which would be fine, *if* we didn’t have these pesky intuitions making us feel bad when the game theory tells us to do certain things.

          • Evan Þ says:

            Which is exactly the basis of C. S. Lewis’s argument in Mere Christianity! He starts by describing the moral intuition in some detail, explaining how it’s different from standard desires, the “herd instinct,” and other things.

        • Why do you think that? It seems to me that EY and Luke would both probably agree with him that outlandish thought experiments can be useful.

        • Mr. Eldritch says:

          Personally, for me, it’s because what the hell else is morality? “Morality,” as a concept, is a concept we invented to codify our intuitions about what “right” and “wrong” are. There’s nothing else for morality to *be*.

Say you invent a different system that tells you whether some actions are “right”, and some actions are “wrong”, and it has zero overlap with your moral intuitions except by coincidence. (Even two random coin flips will line up half the time.)

          How, exactly, do you know whether it is valid? On what basis do you judge it?

          • Carinthium says:

My alternative would be to scrap the idea of having a moral system entirely, as well as the distinction between moral and non-moral desires. Instead, you just have Desires, plus Coherent Extrapolated Volition (though to be fair I’ve seen substantial challenges to that and may need to replace it with something else).

            As for deciding between selfish desires, humans in practice have procedures to do that all the time. The same procedures can be extended, plus CEV, to other things.

            That being said, there is a plausible (though not certain) case that people inherently believe in right and wrong as a cognitive bias even when they get rid of it on a conscious level, and therefore acting morally is following our own delusions.

      • Deiseach says:

        There was a broadcast of a cycle of mediaeval Morality Plays back in the 80s? 90s? And the story of Abraham and Isaac was one of them, and the play ended with God the Father stating that He spared the son of Abraham, “But mine own son I shall not spare”.

        The idea being – at least in popular mediaeval theology – that we can judge how great a sacrifice the death of Christ was by comparing it to “Would you kill your own child?” and from that judge how terrible the gulf between humanity and God was, how dreadful our sinfulness, that something of this magnitude was necessary to save us.

Alas, you can see in the news almost every day people killing their children not because God commanded it, but because they’ve split up with their partners and it’s either vengefulness (by killing the children, they’ll hurt their ex-partner) or some kind of misguided notion of ‘mercy killing’ – putting the children out of suffering because the parent can’t bear to live without them, thinks the children will miss them and be unhappy, and thinks that being on such bad terms with the ex-partner will make the children miserable.

      • Deiseach says:

        Perhaps we should recast the Backpacker Problem as “Should Dr Bones kill the Backpacker, who turns out to be her long-lost adult son, in order to save the lives of those five/ten people?”

        No? Oh, but you were okay with Dr Bones killing the Backpacker when the Backpacker was a stranger. So you’re saying you’d put the life of someone you are personally interested in above the lives of other people? That’s inconsistent!

        Yes? Okay, you’re consistent in your philosophy if you think Dr Bones should kill the Backpacker when he’s a stranger and when he’s her son.

        (Or, you know, if you think she shouldn’t kill the Backpacker either way, that’s consistent, too).

      • Desertopa says:

        I think this is a pretty significant problem with using the hypothetical as an argument. The person who contends but maybe doesn’t fully believe in moral nihilism believes that a group of torture-murderers could attack his family. The divine command theorist doesn’t believe in the possibility of God telling you to do something morally abhorrent. He’s the source of morality, therefore he won’t tell you to do things that are morally wrong. He would only tell Abraham to sacrifice Isaac if he were going to stop him, never in order to actually make him do it.

        It’s kind of like trying to convince someone that the Theory of Relativity is wrong by pointing out that it says a massive object traveling at the speed of light should have infinite mass, and thus have infinite gravity and cause the universe to contract into a singularity, and since this is an absurdity it challenges the whole edifice. But a person who understands the Theory of Relativity will simply tell you that it already entails that the massive object traveling at the speed of light isn’t a scenario that could possibly arise in the first place.

        This is why it’s better to use the Old Testament’s examples of God commanding the Israelites to kill every adult and boy child among their conquered enemies and keep the virgin girls as sex slaves. There’s no “gotcha!” there.

        • Irrelevant says:

          This is why it’s better to use the Old Testament’s examples of God commanding the Israelites to kill every adult and boy child among their conquered enemies and keep the virgin girls as sex slaves. There’s no “gotcha!” there.

On that, they tend to point to the contrasting cases of Sodom and Nineveh as demonstrating the mercy/condemnation standards, and conclude that the Israelites were given the orders they were because Canaan was in fact irredeemable.

      • Daniel Speyer says:

        As most theologians seem to understand God, the principle that God will never ask you to sacrifice your child is more fundamental than the expectation that you will never meet a highly trusting axe murderer. It’s more like “If 2 plus 2 equalled 5, would it be right to murder your child?” And the answer is the same: “Assuming a contradiction allows you to conclude anything, but why bother?”

    • Nate Gabriel says:

      He let Abraham off, but the whole point is that Abraham was right to be willing to do it if God told him to. The Bible’s extremely consistent on the divine command theory.

      I’m not convinced there’s much difference, from a thought experiment point of view, between being willing to commit to do something and being willing to do the thing.

      You can also cite Jephthah instead of Abraham. He actually did follow through on the promise to sacrifice his daughter. That one wasn’t a direct command from God, but it was approved of as necessary to avoid breaking a vow, so the Abraham case would follow a fortiori.

      • Deiseach says:

But Jephthah is shown as an exemplar of when it’s right to break a vow; he made a foolish vow and should not have kept it, or should not have been proud and foolhardy enough to make an open-ended vow like that without considering likely consequences.

        • Nate Gabriel says:

          He should not have made the vow, but given that he did it was worse to break it than to keep it.

          The story itself talks about the sacrifice as if it’s tragically necessary, and Jephthah got endorsed in the Hebrews 11 Hall of Faith.

      • roystgnr says:

        Milgram was pretty consistent on the divine command theory too; he didn’t let the test subjects in on the real experiment until after it was over.

The optimistic side of me likes to imagine that even among believers there’s a significant minority whose interpretation of the story of Abraham and Isaac is “the experiment isn’t over yet”, and the only reason we never hear about this interpretation is that its adherents don’t want to mess up the results on the current generation of test subjects. If you read the Bible and think “God’s orders were immoral; Abraham should have disobeyed”, then we have new information about your sense of morality. If you think “God’s orders probably weren’t what God really wanted; Abraham should have disobeyed”, then we haven’t learned anything, since both sources of morality would come to the same conclusion under that premise. Milgram’s (and possibly God’s) secret ideal, “I want you to do the right thing even if you think I don’t want you to do it”, seems to inherently resist examination via any non-deceptive test.

    • RCF says:

This really looks like motivated reasoning, here. Clearly the point of the story was that Abraham called off the sacrifice because God told him to. That is, whether he killed his son was contingent on God saying that it was wrong. Whether God did, in fact, say it was wrong is beside the point. The clear message of the story is that you should kill people iff God tells you to. The fact that God’s commandment was hypothetical, even in the story, simply adds another layer of hypothetical, and does not alter the meaning.

      • Mary says:

        This really looks like motivated reasoning, here. Clearly the point of the story was that God called off the sacrifice.

        • Saint_Fiasco says:

But God commanded the killing of other people on other occasions, and he didn’t call off those killings.

    • stillnotking says:

      “He let Isaac off.”

      If I were a Christian/Jew/Muslim, my interpretation of the Abraham/Isaac story would be that Abraham was being the best father possible by sending his son to Heaven, and was also very lucky to have divine dispensation to do it. If he were a true believer — and how could one not be, in his circumstance? — he should’ve been bitterly disappointed at how things turned out.

      God handed Isaac a winning lottery ticket, and then yanked it back just as Abraham tried to cash it for him.

      • Jews tend to not have that sort of strong belief in heaven, so far as I know.

        My takeaway is that we can’t know if there would have been a better outcome if Abraham had argued with God a great deal more forcefully.

        • Peter says:

          My understanding was that there was a gradual change over time, continuing into Christianity. Back in the old bits of the Old Testament, God would curse your descendants unto the nth generation. By the time we get to the New Testament God would send you personally to Hell (under some interpretations, at least, and some people would word this as “you sending yourself to Hell” but whatever…) I remember someone saying this was to do with the moral thinking getting more individualistic over time.

          • That’s a change towards focus on an afterlife rather than this world as well as a change towards more individualism. After all, God could threaten people’s descendants in the afterlife as well as in this one, but so far as I know, He never has.

      • randy m says:

You aren’t as good at modeling as you think you are.

        • stillnotking says:

          Nah, I was just teasing, actually. I know full well believers wouldn’t react that way.

    • Kiya says:

      1 Samuel 15? Granted, that’s about killing other people’s children (and cattle, and sheep).

    • Nita says:

      Abraham was living in an era where sacrificing your children to the gods was a normal response to problems. A highly dramatic situation like that underscores that what He commands is to NOT sacrifice your child.

      After reading that, someone might actually imagine that Abraham decided to sacrifice his son to alleviate some problem, and God kindly dissuaded him. That’s very different from what actually happened:

      And he said, Take now thy son, thine only son Isaac, whom thou lovest, and get thee into the land of Moriah; and offer him there for a burnt offering upon one of the mountains which I will tell thee of.

[…]

      And they came to the place which God had told him of; and Abraham built an altar there, and laid the wood in order, and bound Isaac his son, and laid him on the altar upon the wood.

      And Abraham stretched forth his hand, and took the knife to slay his son.

      And the angel of the Lord called unto him out of heaven, and said, Abraham, Abraham: and he said, Here am I.

      And he said, Lay not thine hand upon the lad, neither do thou any thing unto him: for now I know that thou fearest God, seeing thou hast not withheld thy son, thine only son from me.

      God says outright that He wanted to see proof of Abraham’s obedience, not to make a point about some other gods and child sacrifice.

      • I see that God has no compunctions about cranking up the emotion in the thought experiment, at least to some extent. At least He didn’t describe how much Isaac would suffer being burned to death.

    • Dennis Ochei says:

      i see. so god is a scotsman.

  7. Highly Effective People says:

While I agree that shocking thought experiments are useful for feeling out our moral intuitions, there is also a point to be made in favor of decency. The level of detail in that quote is unnecessary to make his point and in fact undermines its effectiveness. To be honest, my first thought reading it was “what a degenerate pervert” rather than “wow, I wish someone had told me this when I was a bratty atheist kid.”

Imagine if I gave someone the Trolley Problem, but instead of “if you push the fat guy onto the rails he dies” I burned 120 characters describing him being crushed beneath the wheels while still conscious, with emergency workers struggling but failing to save him as he died an agonizing death. That would not improve the test but rather smother it. It’s part of what makes “The Ones Who Walk Away From Omelas” feel so manipulative and trite.

    It’s ineffective communication and frankly speaks to a diseased mind. I fully support the general point that philosophy should be shocking in how it challenges our intuitions but that does not mean any arbitrary shock image is good philosophy.

    • Scott Alexander says:

      I’m not sure that’s true.

      A lot of people credit Uncle Tom’s Cabin with helping people understand the evils of slavery. Just saying “Yeah, slaves get whipped and it’s really painful” is less effective than building up these sympathetic characters and then describing in horrifying detail every single injustice that happens to them.

      Compare also the lack of emotional response of someone saying “4,000 people starved to death in the famine” versus telling the story of one child who starved to death, and how in the end she was trying to eat her own shoes because she was so hungry but she couldn’t get them down and just sat on the ground crying as the life slowly drained out of her. If your goal is to convince someone that it’s important to try to fight the famine, which will be more powerful?

      The point of things being graphic is to provoke exactly the kind of emotional response we’re trying to provoke here!

      • Cauê says:

        By the way, Scott, for these exact reasons I expect that “(read: a number higher than you can imagine)” just won’t work.

      • Alexander Stanislaw says:

It’s not just the extremeness of the graphicness that makes me think the “fantasizes” label is accurate. It’s the combination of that plus the fact that Phil uses his interlocutor as the subject of his hypothetical heinous acts, starts off the scenario with “I bet”, and overall uses a tone that signals “I wish X would happen so you could see that I’m right and you’re wrong”.

His scenario is much more akin to “That’d show you!” than a philosophical “Is X really consistent with belief Y?”. I don’t think Phil really wants his scenario to happen – it’s similar to when a child says to his parents, “I wish I could disappear, and then you’d be sad.” He doesn’t really want to disappear, or for his parents to be sad, but he relishes the idea of them coming to their senses.

        Really Phil’s “thought experiment” is just a graphic version of “There are no atheists in fox holes”, and there absolutely are religious people who relish the idea of atheists facing certain doom and “coming to their senses”. I would know because I am related to several of them.

        • Anonymous says:

          I absolutely agree and am similarly related to many such people as well.

          In a similar vein, it’s not at all uncommon in fundamentalist circles for believers to wish for God to take an unbeliever or backslider and “teach them a lesson”, violently if need be. When your immortal soul is at stake, it doesn’t really matter how much torture the flesh goes through for God to reach you. I can’t tell you how many times I’ve heard things like cancer described as a “wake up call from God.”

          Scott, I’m generally on board with your mission to get people to really interrogate the contours of their in and out groups and be more charitable to outgroupers. Phil Robertson, however, is not worth it.

        • Harald K says:

Is it not really rather that there is “no one denying that right and wrong are objectively meaningful” in foxholes?

          Believing in one God goes well together with believing in one (correct) moral standard, but it isn’t exactly the same thing. Many atheists do try to argue themselves out of Kant’s second postulate (just like many religious people try to talk themselves out of one, and occasionally three to support it).

        • “Really Phil’s “thought experiment” is just a graphic version of “There are no atheists in fox holes””

I take “there are no atheists in fox holes” to mean “without religion, death is so frightening that people facing death will choose to believe in religion.” That is an entirely different argument from Phil’s, which is about people believing in right and wrong.

          • Alexander Stanislaw says:

You’re right, they mean different things literally. I’m just highlighting that the purpose is not that of a thought experiment but rather a combination of justice porn and torture porn. It’s a way of signaling “I hate group X; if horrible thing Y happened, that’d show them good!”

          • veronica d says:

            “I hate group X, if horrible thing Y happened that’d show them good!”

            That’s exactly how it read to me.

            Now, I must confess that I am pattern matching Robertson to a ranting right-wing evangelical. But I feel sorta safe pattern matching Robertson to a ranting right-wing evangelical.

            So yeah.

      • RCF says:

        So, if he had just said “Your wife is murdered”, that would not be enough for an atheist to recognize that something bad has happened?

      • Highly Effective People says:

        The thing is that Uncle Tom’s Cabin and hypothetical-famine-story aren’t thought experiments at all: they’re Agitprop along the lines of ‘The Rape of Belgium.’ You’d be hard pressed to say that this rhetorical trick isn’t instrumentally rational but it makes for poor epistemology for the same reason. Those kinds of stories slam their hands down on the Fight-or-Flight button and then ask you to render judgement on a rather more complex topic.

    • Deiseach says:

Eh, I’ve seen it on the opposite side: people piling on the detail for a “gotcha!” when arguing with pro-lifers or with Christians, e.g. people going into detail about agonising deaths from Ebola, or things in nature like parasitic wasps, and then going “So how can you say there’s a benevolent God in charge of a world like that?”

      And if we take your caveat, then what is the point in the further elaboration of the Trolley Problem of saying that the fat man is the one who sabotaged the brakes so that the people will be killed? The idea there is, of course, to tease out if we make judgements based on attributing blame, but do we need to be told he’s a saboteur?

      • Highly Effective People says:

        Some atheists do make terrible arguments like that, in fact I argued with one in the last thread. Needless to say those arguments aren’t terribly convincing, and reflect fairly poorly on them as well (the narcissism in framing the purpose of the universe in terms of the sensations one experiences and the lurid descriptions of ‘evils’ in nature both speak to a rather immature character).

        I wasn’t familiar with the saboteur version of the Trolley Problem but it still seems rather minimalist in terms of details which is a point in its favor. My point isn’t against descriptions in general but rather that gratuitous ones should be avoided.

        • Samuel Skinner says:

How is it a terrible argument? The thing exists and is terrible. Either you declare it wasn’t made by God or that it was made by God. If the former, you are recognizing limits to God’s power; if the latter, you need to explain why all these horrible things should exist.

          • Highly Effective People says:

            you need to explain why all these horrible things should exist.

            If the universe is governed by natural laws (i.e. by God / Logos), then everything which happens in nature is the just consequence of those laws. Since nature is indifferent to horror then there is no injustice in horrible things. The injustice only exists in us when we act against our natures when we are confronted with them.

            If the universe is governed by random chance, then everything which happens is arbitrary and meaningless. In an arbitrary meaningless world a horror cannot be differentiated from a wonder, and so neither of them can be called just or unjust.

            I recognize this isn’t the most appealing argument in the world (and my presentation is a bit sub-par) but it neatly solves the philosophical ‘problem’ of evil and lays the foundations for us to start solving our own personal problems of evil.

          • Samuel Skinner says:

            “If the universe is governed by natural laws (i.e. by God / Logos), then everything which happens in nature is the just consequence of those laws. Since nature is indifferent to horror then there is no injustice in horrible things. The injustice only exists in us when we act against our natures when we are confronted with them.”

That doesn’t follow. Having an all-powerful being bound by laws means it isn’t all-powerful.

Additionally, religion is absolutely against “things are justified because they are natural”. Whatever else you say about Christianity, “no adultery” is pretty clearly a rejection of following one’s natural impulses. Humans are supposed to be special, and not just be ruled by the same urges as other beasts.

            “If the universe is governed by random chance, then everything which happens is arbitrary and meaningless. In an arbitrary meaningless world a horror cannot be differentiated from a wonder, and so neither of them can be called just or unjust.”

That doesn’t follow. If a nation’s flag were chosen from a pool of candidates by rolling dice, that wouldn’t make the resulting flag meaningless, even though the choice was random and arbitrary.

            “I recognize this isn’t the most appealing argument in the world (and my presentation is a bit sub-par) but it neatly solves the philosophical ‘problem’ of evil and lays the foundations for us to start solving our own personal problems of evil.”

            Except it doesn’t apply to any religion with interventionist God(s) with an established moral standard.

  8. blacktrance says:

There really is an inconsistency in popular atheist rhetoric (and thought?) about morality. On places like Reddit and elsewhere, I often see it said that morality is subjective or doesn’t exist, but when confronted with Robertson’s hypothetical, they say that of course there’s a right and wrong. Rather than trying to figure out whether atheists tend to be non-cognitivists, error theorists, naturalists, etc., the rank-and-file are simply inconsistent, and these kinds of thought experiments are useful for pointing out the inconsistency. But that can only happen when they take the challenges seriously.

    • Do you think we (atheists) have a moral obligation to be consistent?

      Obviously, there are many different points of view – atheism as a philosophy answers exactly one question and says nothing about anything else.

      But even as an individual I don’t see that I have any practical need to have a well-thought-out, thoroughly consistent moral philosophy. And that stuff is hard work.

      (Incidentally, there may also be some confusion between “morality” and “ethics” involved; rightly or wrongly, I tend to think of “morality” as stuff like “objecting to people having pre-marital sex” and “ethics” as stuff like “objecting to people being raped and tortured”. Therefore I would assert that morality is usually subjective, but ethics less so; subject to grey areas, but not inherently subjective or non-existent.)

      • Wrong Species says:

        On the one hand, I think consistency is pretty important. If we never tried to be consistent we wouldn’t ever feel compelled to change our moral beliefs. On the other hand, moral consistency is probably impossible. We don’t have a unified system of morality and don’t seem anywhere close to that point. Ethics is hard.

      • blacktrance says:

“Morality” and “ethics” are usually used interchangeably. Also, I don’t see how you can make a categorical distinction between stuff like “objecting to people being raped and tortured” and stuff like “objecting to people having sex” – they’re the subject matter of the same field. The principles from which we derive our position on pre-marital sex and the principles from which we derive our position on being raped and tortured are the same, or at least are of the same kind.

        But even as an individual I don’t see that I have any practical need to have a well-thought-out, thoroughly consistent moral philosophy. And that stuff is hard work.

        Because then you can justify your actions to yourself and others, know what you ought to do in a particular situation, avoid moral dilemmas, etc.

        • Daniel Armak says:

          Can you explain why “objecting to rape” and “objecting to sex” are in the same field?

          The first is a not very special case of “objecting to one person harming another”. The second is objecting to other people doing something consensual and mutually beneficial.

          • Your statement gives “harm” a privileged position with regard to ethics. As should be obvious, this is not a pre-theoretical given, but is itself an assertion of your moral theory. More generally, the distinction between ethics and morals offered above is not theory-neutral, but is a covert attempt to privilege certain kinds of moral reasoning.

          • Daniel Armak says:

            @Mai that doesn’t seem to answer my question, or if it does then I don’t understand it.

          • blacktrance says:

            To elaborate on what Mai said, whether “one person harming another” is the only morally relevant factor is itself a question for moral philosophy. Maybe there are other principles that are also important, or maybe even harm doesn’t matter. Establishing and applying these principles is the task of moral philosophy, and starting by drawing a line between harm and other potentially relevant factors assumes too much that has to be proven.

          • Daniel Armak says:

            @blacktrance I understand that anything at all can be a moral factor for someone or for some moral theory.

But why do you say that “objecting to rape” and “objecting to sex” are both driven by the objectors’ morals, and not by other considerations? People object to plenty of things on non-moral grounds. Objections can be religious, cultural (conservative), political, economic, social (clique-based), selfish-interest, aesthetic, driven by non-moral emotions and instincts (e.g. “that’s not wrong, but ewww, it’s gross!”), etc. Some of these are related to morals (but are not subsets of morals); others may be as unrelated as motivations can get.

          • blacktrance says:

            Presumably, those who object on the grounds of some of the things you listed think those things are morally relevant. For example, if I object to premarital sex on aesthetic grounds, that unpacks to me thinking that premarital sex is contrary to good aesthetics (whatever that means) and that aesthetics are morally relevant.

          • Daniel Armak says:

            @blacktrance I just want to be clear: are you saying that whenever people object to some behavior of other people, those objections are (at least in part) moral? Or, are you saying this about objecting to rape and to sex, specifically? And again, what’s your basis for saying this? To me it seems lots of objections have nothing to do with morals.

          • blacktrance says:

            I just want to be clear: are you saying that whenever people object to some behavior of other people, those objections are (at least in part) moral?

            Yes, with the caveat that people aren’t always internally consistent. They may “archive” their morals and not derive their position from first principles in every case, instead treating them as an unstated background assumption. When someone says that X is disgusting and therefore you shouldn’t do it, they’re making an unstated assumption that it’s morally wrong to do disgusting things. They’re appealing to morality (“you shouldn’t do X because it’s disgusting”) which they couldn’t do if disgust was all there was to it (“I find X disgusting” – so what?).

          • Daniel Armak says:

@blacktrance When I say “please don’t do that, it’s disgusting”, I mean: doing that is unpleasant to me, and I’m asking you not to do this thing, as a favor, or under the general principle of not doing things that are slightly harmful to others.

            There may indeed be a moral principle here of not doing things that are unpleasant to others. But it’s not that doing disgusting things is itself morally wrong. I don’t care if you do disgusting things in private, because if I don’t know about them, they can’t disgust me. (But people differ: some will demand proof that the disgusting thing is not done, otherwise they will be disgusted through imagining it.)

            I still feel that lots of other cases where people object to some behavior have nothing to do with morals, not even with the sometimes-moral principle of not doing things that offend or harm others. But it’s true that these cases may fall under a sufficiently broad definition of morals. E.g.: I object only to actions that harm me or others, or don’t benefit me as much as possible alternative actions; most moral theories say that harming or benefiting others are morally charged actions; therefore all interactions between humans are morally charged. Also, the question of why and when I can impose on other people to ask them to change their behavior is morally charged.

            I think I understand your point of view now, and it has a lot of merit. Thank you.

            (This reminds me of Graeber’s Debt: The First 5000 Years, where he argues that people invented or used money and debt to make relations impersonal and so less moralistic and so make it possible to exploit and oppress others.)

          • blacktrance says:

            When I say “please don’t do that, it’s disgusting”, I mean: doing that is unpleasant to me, and I’m asking you to not to do this thing, as a favor, or under the general principle of not doing things that are slightly harmful to others.

            The standard use of “Don’t do X, it’s disgusting” isn’t based on harm, it’s based on purity. Most people who say not to do things because they’re disgusting believe that they’re wrong even if no one knows about it, simply because doing disgusting things is morally wrong. The principle is not “don’t do unpleasant things to others” – for example, people will say that secretly having sex with a roasted chicken before eating it is wrong even though it harms no one.

            People care about things other than harm, such as fairness, liberty, authority, loyalty, and purity, in ways that aren’t reducible to harm. We can say that they’re wrong to care about those things, but they really do care about them and an accurate model of other people acknowledges that. For more on this, see Jonathan Haidt’s Moral Foundations theory.

        • The principles from which we derive our position on pre-marital sex and the principles from which we derive our position on being raped and tortured are the same, or at least are of the same kind.

          That might be true for some people – for example, a devout Christian might derive both positions from the Bible – but I don’t think it is true in general. Granted, by drawing the distinction I’m implicitly assuming that there are no valid “ethical” arguments (in my sense of the word) against pre-marital sex, so I suppose the distinction might be largely or entirely circular.

          However, the important thing – the only reason I mentioned it in the first place – is that I suspect that many other people use the words in a similar way, or at least a similarly confused way, and that this may have misled you as to their true opinions.

          Because then you can justify your actions to yourself and others, know what you ought to do in a particular situation, avoid moral dilemmas, etc.

          I can see that there is some level of risk involved in not having worked all this out in detail in advance, in much the same way that there is some level of risk involved in never having learned martial arts. But in both cases, from my perspective, the cost involved is too great and the risk too small. (The fact that I’m not sure there *is* such a thing as a completely sound and self-consistent moral philosophy doesn’t help, either.)

          As anecdotal evidence that this isn’t a completely ridiculous position, the last situation I faced in which I needed to justify my actions in a moral/ethical sense (either to myself or to others) was over twenty-five years ago, and I think I was too young at that time to have developed sufficiently sound and strong opinions even if I *had* put more effort into developing them.

          • blacktrance says:

            That might be true for some people – for example, a devout Christian might derive both positions from the Bible – but I don’t think it is true in general.

            Other people would derive them similarly – utilitarians from utilitarian principles, egoists from egoist principles, and most people simply from their moral intuitions. If your moral intuitions say that premarital sex is okay and rape isn’t, you’re still getting both positions from the same source.

            As anecdotal evidence that this isn’t a completely ridiculous position, the last situation I faced in which I needed to justify my actions in a moral/ethical sense (either to myself or to others) was over twenty-five years ago

            While you may not need to justify every individual everyday action, you must at least justify everyday classes of actions to know that you should do them and not something else instead. For example, why do whatever you’re doing now instead of Earning to Give? Or, why Earn to Give when you could instead collect toothpicks? Moral decisions aren’t once-in-25-years events; everyone makes them every day.

          • you’re still getting both positions from the same source.

            Perhaps so, though personally I’m unconvinced.

            why do whatever you’re doing now instead of Earning to Give?

            Well, at present, I’m already earning as much as I think possible given my state of health, and only barely managing to support my family. So I have no practical options that I need to consider on moral grounds.

            But even before that was true, I chose a career based on my own wishes, and had no difficulty in deciding to do so; that’s not enough of a grey area to require a calculated moral philosophy per se. (For that matter, it wasn’t really a conscious decision. It would never have crossed my mind to consider choosing a career on any other grounds.)

            There’s cultural input involved there, I presume. Nobody talked about Earning To Give when I was growing up.

      • Douglas Knight says:

        To the extent that there is a difference between morality and ethics, you’ve got it backwards. Morality is simple and crude, while ethics is fine distinctions. For example, “professional ethics” is fine details that laymen wouldn’t think of. And ethics is generally conventional rather than absolute. Again, “professional ethics”: they vary from profession to profession; what’s important is that each profession make a decision and write it down.

    • James Picone says:

      I don’t hang around on /r/atheism (and have generally heard unpleasant things about it). I am roughly in the ‘new atheist’ category of ‘thinks theology is incoherent and that this is a dumb question’.

      I’m a moral realist/ethical naturalist and most of the atheists I’ve talked to are varying kinds of moral realist with varying levels of thought into the problem, as far as I can tell. That said, I’ve certainly seen moral non-realism in comment threads before.

      It’d be interesting to see some statistics breaking down various positions on ethics in atheist communities vs other demographics. I would expect moral non-realism to be most prevalent in teenagers.

      • Carinthium says:

        Two questions.

        1- What is your evidence for believing in the existence of right and wrong? Appealing to intuitions on its own doesn’t make sense, given how often intuitions have historically led humanity astray.

        2- What is your reason to care about what’s right and wrong, rather than pursuing self-interest whenever the two conflict?

        • James Picone says:

          I haven’t thought about this one nearly as much as I’ve thought about a bunch of the other opinions I hold, so expect dumb mistakes. That said:

          I suspect that the built-in ‘moral instinct’ most humans have is provided by evolution, and was therefore useful in the ancestral environment. I suspect that that is because the actions it forbids were poor survival strategies for individuals in tribes, and probably bad for tribe survival too. For example, murdering tribe members or stealing from tribe members is probably a great way to get myself exiled or punished by the tribe, and the tribe overall probably prefers to have more people in it cooperating. I extend that concept of ‘aids tribe flourishing’ to ‘aids the society that I am a part of’, ‘society’ interpreted as widely as possible, and call that ‘right’.

          I believe that, in the long run, my self-interest is likely best served by pursuing what allows society to flourish overall, and that I am probably not clever enough and definitely too biased to determine when that’s not the case, so I should err on the side of caution and cooperate. See the usual LW stuff about ‘If you think you’ve come up with a clever utilitarian reason why you should do generally-considered-unethical-thing X, you probably shouldn’t do X until you’ve thought long and hard about it and talked to a few people, and even then maybe not’.

          Obvious objections:
          – I am redefining ‘right’ in an unsupported way. Mostly I think that the categories makes-society-flourish and doesn’t-make-society-flourish are useful and should be considered, even if they’re not what ‘right’ means in some deep cosmic sense. Also I don’t think words mean things in some deep cosmic sense, they’re just ways of talking about useful concepts.

          – This is just ethical egoism with unfounded optimism about what is good for you and some semantic games! Yeah, probably. I didn’t claim it was original. I think Sam Harris has made a similar argument at some point? I consider the iterated prisoner’s dilemma and the existence of moral intuitions strong evidence that to some extent self-interest and societal interest coincide (see the sketch after this list).

          – ‘society’ is ill-defined. I mostly take it to be ‘all the sentient things in my light-cone’ because they’re people I can play the prisoner’s dilemma with.

          – ‘flourishing’ is ill-defined. In an evolutionary sense it’s just passing down genes, which doesn’t match my intuitions about morality very well. Mostly I just say there’s probably a most useful way of defining utils and aggregating them, and it’s that one, even if I don’t know what it is. To match my intuition it should avoid the repugnant conclusion, utility monsters, and similar problems.

          – All the other dumb things I haven’t thought of right now.
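
          To make the iterated-dilemma point above concrete, here is a minimal sketch (standard Axelrod payoffs and two toy strategies, both chosen for illustration rather than taken from the comment): once play is repeated against a strategy that retaliates, unconditional defection stops paying.

# Minimal iterated prisoner's dilemma sketch. Payoffs are the standard
# Axelrod values (T=5, R=3, P=1, S=0); the strategies are illustrative toys.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_moves):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    # Returns total payoffs (score_a, score_b) over repeated play.
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual cooperation
print(play(always_defect, always_defect))  # (200, 200): mutual defection
print(play(tit_for_tat, always_defect))    # (199, 204): one exploited round,
                                           # then both sides stall at P=1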

          • Wrong Species says:

            You aren’t really answering the question. You say that “my self-interest is likely best served by pursuing what allows society to flourish overall”. That may be true. But it doesn’t explain why raping and killing people is bad. Maybe I don’t care about myself or society. What makes those things objectively wrong?

          • Carinthium says:

            Assuming you stop claiming to be a moral realist, I think your redefinition of ‘right’ is acceptable.

            Briefly, I think the main problem you have is that calculating your own self-interest isn’t like trying to calculate what’s best for the world. Nowadays, with plenty of happiness research and science accessible even via the Internet, it should be trivial.

            The evidence is grossly against your basic idea. Having children, which is good for society, is provably bad for your happiness. Giving to charity at most gives a moral glow, and giving amounts at Peter Singer levels or higher (the best choice for society) is also provably bad for happiness.

        • “What is your evidence for believing in the existence of right and wrong?”

          For the medium length answer, see:

          http://www.daviddfriedman.com/Machinery_3d_Edition/An%20Argument%20I%20Lost.htm

          It’s the draft of one of the new chapters for the third edition of my first book.

          For the long answer, see Michael Huemer’s Intuitionism.

          “given how much historically intuitions have led humanity astray”

          Compared to how often they were correct? If our intuitions were wrong one time in ten there would be lots and lots of examples of their leading us astray, but an intuition would still be a pretty good reason for belief.

          • Carinthium says:

            To be fair I’ve only read what you’ve linked me (I’ll try and get to Intuitionism when I can), but from what I can tell:

            1- Your argument is an ad populum fallacy.

            2- A lot of species have a moral code which relates only to members of their own species or tribe. What makes our code more privileged than any other animal’s?

            3- Isaiah Berlin appears to be senselessly coherentist. The logical extension of his position is not that we should embrace moral claims, but that we should reject our unreliable senses.

            All this is ignoring the case for amoralism altogether, of course. There are also masses of arguments I could give for the position ‘The logical conclusion of an intuitionist morality is radically different from Western morality’, but I think that would be unfair given the nature of the debate.

            ————————

            The general rule throughout history has been that if intuition and science clash, science always turns out to be right. Intuition may be reliable to some extent, but science is more reliable.

            There is scientific evidence for the existence of moral beliefs, but not for the existence of any objective morality independent of humans.

          • Wrong Species says:

            I think there is a good reason to believe our senses are more trustworthy than our moral intuitions. If you didn’t see things that were really there, then you would possibly get eaten by that tiger. But morality is all about what everyone else in your in-group believes. In the same way that a religion might thrive even though the ideas it expresses are questionable at best, morality thrives independent of whether there are actually moral facts. Let’s say that there were no moral facts but we still evolved with the belief that there were. How would the world look any different?

      • Deiseach says:

        You do have an obligation to be consistent if part of the grounds on which you (general “you”, not “you in particular, Harry”) are basing your argument is your opponents’ not being perfectly consistent.

        My go-to example of that is the “Shellfish Argument”: when people retort to the assumed arguments from Bible verses in Leviticus and Deuteronomy against gay rights/marriage equality by going “Oh, so do you eat shellfish/wear mixed fibres/stone your daughter for adultery? You do/don’t? Ha! Inconsistent! If you don’t live literally and exactly by every prohibition, then you don’t have any right to argue from Scripture!”

        The bonus amusement from that one is progressive Christians who like to parrot the shellfish argument about how we’ve moved past Bronze Age morality and you can’t cherry-pick verses to support your position, then turn around and unblushingly cherry-pick verses to support pro-immigration or (small “s” and “j”) social justice causes.

        If you’re going to pull out, in support of your pet cause, the verse about treating the alien and the stranger well because you too were aliens in Egypt, then you have to live by the same “do you eat shellfish or wear mixed fibre garments? Yes? Hypocrite!” argument.

        • Samuel Skinner says:

          “You do have an obligation to be consistent if part of the grounds on which you (general “you”, not “you in particular, Harry”) are basing your argument is your opponents’ not being perfectly consistent.”

          Why? Theists claim they are consistent because their beliefs come from God. Atheists can point out they are wrong – they do not need to provide an alternative. Atheists are not the ones claiming their beliefs come from an all-powerful, all-knowing, omnibenevolent source.

          Now, you can claim that there are errors in access for the theist, but then you start getting into “and how do you know this is from God” and things start falling apart.

          The two positions are fundamentally different because of that.

      • Peter says:

        According to the PhilPapers survey, philosophy faculty members skew more moral-realist than philosophy undergrads, which doesn’t quite line up with non-realism in teenagers, but would be consistent with it.

    • Daniel Keys says:

      On places like reddit and elsewhere, I often see it said that morality is subjective or doesn’t exist, but when confronted with Robinson’s hypothetical, they say that of course there’s a right and wrong.

      Do you often have trouble understanding Boolean connectives? 🙂

      I’m sure you can point out a Tin Man on some website I don’t read who is actually inconsistent. But “subjective” already seems like a ridiculously broad category, more suited to rhetorical misrepresentation of people’s positions than to understanding them, and that’s before we get to your gerrymandered expansion above.

    • RCF says:

      If you’re unclear on what people mean when they say that morality exists, it is quite sufficient to simply ask “So, if someone you cared about were murdered, you would not consider that to be morally wrong?” The graphic detail is unnecessary.

    • Irrelevant says:

      I hold with moral non-cognitivism, but also believe that “Rights” when properly construed are not a part of morality, but rather a body of facts about what disagreements do in fact cause lethal strife that we happen to have culturally rarefied until they look like oughts.

    • Mr. Eldritch says:

      Morality is subjective and, in a real sense, does not exist. You could grind up the universe and sift it through a sieve and find not one atom of justice, not one molecule of mercy. The universe does not know what morality is. It doesn’t even know what we are. It is a thing that came into existence as a side-product of monkeys getting uppity, and exists nowhere else but our own brains.

      Nonetheless, there are things I’d prefer not happen to me, and there are things you’d prefer not happen to you, and in fact these all line up pretty well. Why the hell should I care what you prefer? Because I happen to also come with sympathy. And my desires are, in turn, magically legitimized by the fact that, generally speaking, you think that other people’s preferences should be respected. And thus, right and wrong arises.

      • FJ says:

        “Morality is subjective and, in a real sense, does not exist. You could grind up the universe and sift it through a sieve and find not one atom of justice, not one molecule of mercy.”

        This proves too much. Pain is not a chemical substance, either. It is perfectly subjective and immaterial. It exists “nowhere else but our own brains” (and, presumably, the brains of other sensate creatures). Do you deny that pain exists?

        As an aside, I’m always struck by how frequently one sees two opposing arguments: (1) that we all mostly agree on moral questions, and (2) that our moral intuitions frequently disagree, both between individuals and within the same individual. Both arguments have appeared repeatedly in this comment section, for example. There is some tension between these two arguments, and yet both are typically deployed in favor of moral non-realism.

        • stillnotking says:

          Moral anti-realists do not deny that morality is “real” in the sense that it’s a real experience. We deny that it’s objectively real. Pain is a good analogy — if I stub my toe on the coffee table, it will hurt, but coffee tables don’t have a property of “hurtingness” independent of my sensation of pain. A mad scientist could rewire my brain to make me enjoy stubbing my toe, and the pain would no longer exist; he wouldn’t need to do anything to the coffee table.

        • Tim Martin says:

          I would agree with Mr. Eldritch that morality (as people often use the word) does not exist, though something like pain or love (as people often use the word) obviously does.

          Pain and love are not molecules, but they are patterns of brain activity, which results from neurons, which are made of molecules. You can define pain in this way, describe which observations confirm that someone is experiencing it, and accurately say that you know something more for having had the discussion. You can make predictions based on it (e.g. “A person that my hypothesis predicts is experiencing pain will make efforts to end said pain.”)

          Obviously morality isn’t a molecule. Depending on how you define it, it *may* be an interaction that results from the activity of patterns of molecules, similar to pain or love. You can define “morally right” as “a property of an action that increases human flourishing more than it diminishes it” (or something like that), in which case you may be anchored in something real and extant, just as you are with pain and love. You can make predictions based on your knowledge of which actions are morally right (e.g. “If I perform this action, there will be large positive consequences for humans and small negative consequences.”)

          I see this as valid. But I don’t like using the word “moral” or “right” to describe it. If you define “morally right” in this way, it seems to me like you’ve adopted a deceptive way of talking about human flourishing. It seems like an example of the Worst Argument in the World – borrowing the connotations of the word “moral” from the common, naive interpretation of the word. Importing an “ought” when your definition only contains an “is.”

          • FJ says:

            I’m having trouble understanding how “human flourishing” is more of a physical fact than “morality.” If anything, I should think that “human flourishing” is at least as difficult to reduce to patterns of molecules in the brain as “morality.”

            I agree that defining morality as “increasing human flourishing” (whatever that is) doesn’t help you get around the is/ought distinction. But that was your definition, not mine. 🙂

          • Tim Martin says:

            Well, I know that human flourishing is very hard to define, so I was eliding that. But in theory it should be doable. Some people are happier and more content than others. That’s just as real as anything else, and it’s a result of brain states.

            Though I don’t know what definition of morality you’re using when you say that human flourishing is just as difficult to reduce to patterns in the brain as morality. If one’s definition of morality *is entirely about* human flourishing, then you really only need to define the latter, and then your work is done.

        • Mr. Eldritch says:

          Actually, I think you’ve misunderstood me: I was just trying to say that there was a sense in which morality was not real, and didn’t really “Exist.” Nonetheless, morality is very definitely A Thing. The second paragraph was trying to show how morality could come into being without having to actually “exist” anywhere.

          (Existing is a complicated word. Someone once argued to me that I must accept that non-material things like souls could exist, because 1 + 2 must still equal 3 even if there were nobody around to check, and “2” clearly had no material existence.)

          Also not actually “existing”: Magenta. But I can still buy magenta paint at the store, and it’s definitely magenta.

          • Tim Martin says:

            Ah, okay, my bad then.

            I guess we use the words differently, but I tend to say that most things end up “existing” if you actually pay attention to their definitions.

            Centrifugal force is a good example. No one pays attention to the damn definition, and everyone likes to say “centrifugal force doesn’t exist.” But CF is defined as *the appearance of* an outward-fleeing force in a rotating reference frame. And there certainly is the appearance of an outward-fleeing force in a rotating reference frame! (It’s in the math, too; I’m not just making it up.)

            So centrifugal force, *as defined,* certainly does exist.

            Morality does too, for some definitions. Other definitions – like the kind I think most people grow up with, where “good” and “bad” are woven into the fabric of the universe and there are some things you ‘simply must not do’ – are nonsensical.
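
            For anyone who wants the “it’s in the math, too” aside spelled out, here is a standard textbook sketch (not anything the commenter wrote): writing Newton’s second law in a reference frame rotating at constant angular velocity introduces fictitious force terms, one of which is the outward-pointing centrifugal term.

            % Newton's second law in a frame rotating at constant angular
            % velocity \omega; r and v are measured in the rotating frame.
            \[
              m\,\dot{\mathbf{v}}
                = \mathbf{F}
                - m\,\boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{r})
                - 2m\,\boldsymbol{\omega} \times \mathbf{v}
            \]
            % The second term on the right is the centrifugal force: it appears
            % only in the rotating frame and points away from the rotation axis.
            % The third is the Coriolis force.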

      • blacktrance says:

        By that reasoning, you could say that anything conceptual doesn’t exist. For example, baseball – you could grind up the universe and sift through it and not find one atom of baseball, just a lot of carbon, oxygen, etc.

        Just because something isn’t ontologically basic doesn’t mean it doesn’t exist. Baseball exists because it supervenes on the physical. The same goes for morality.

    • kernly says:

      of course there’s a right and wrong

      A human right and wrong. Not a universal right and wrong.

  9. The post title, “High-energy Ethics”, puts me irresistibly in mind of Unseen University’s Department of Inadvisably Applied Magic.

    • Scott Alexander says:

      I feel like I’m stealing that term from somewhere, but Google couldn’t find a previous use.

      • Sayre says:

        High Energy Theoretical Ethics perhaps? Close enough for me to imagine it was what fired your brain along that particular train of thought.

        http://www.patheos.com/blogs/unequallyyoked/2011/08/high-energy-theoretical-ethics.html

        Usually I just lurk in places like this but I feel a public service needed to be rendered!

        • Scott Alexander says:

          Yeah, that’d be it – and explain why Google didn’t help, to boot.

          Thanks. I’ve changed the name of the post.

          • roystgnr says:

            I was going to say that in future searches you should try the wildcard operator, since searching for “high energy * ethics” quickly found the post in question for me.

            But on the other hand, searching for “high energy ethics” *also* currently finds that post for me, a few links below *this* post, so I’m not sure whether the wildcard operator would have helped you in this case or whether the UnequallyYoked post didn’t have a high enough Google rank until after it was linked to and searched for from here. Google search results are not easily replicable experiments…

        • Which was indeed directly inspired by Pratchett!

  10. Only semi-related: this reminds me of something Eliezer said recently, about how HPJEV always jumps to the most extreme situation possible when considering how to solve a problem. I think there’s an important personality trait in there somewhere that I might be lacking. I’ve always been an extremely cautious and tentative person (which I think is why I went into theoretical rather than experimental science), and in particular I go to great lengths to avoid overcompensating whenever I make some mistake, because I view overcompensation as something that “dumb people” do. Instead I usually end up undercompensating (which is really just meta-overcompensation, if you think about it) and so I don’t correct nearly enough for my mistakes. I think there’s some kind of “experimentalist spirit” that some people have where they say “let’s just push this until it breaks,” whereas I’d be far more likely to say “We can’t keep pushing this, what if it breaks?”. So I usually end up only exploring the neighbourhood of solution space around where I already am, and only very gradually converging on the correct solution.

    Can anyone relate to this?

    • LRS says:

      Yes; there does seem to be this sort of spark of boundary-testing and envelope-pushing that’s correctly celebrated as a virtue in our community, and I don’t think I have it, and that makes me feel inferior, low-status, and sad.

    • Oscar_Cunningham says:

      I would have thought that theoretical science would correlate with thinking about extreme scenarios more than experimental science. The phrase “thought experiment” itself refers to a theoretical scenario that could never be practically realised.

      • Yeah, I see what you’re saying. But the whole thought process came about when I was trying to explain to someone why I didn’t go into experimental physics, and I traced it back to not enjoying labs in high school and undergrad. And when I tried to explain why I didn’t like labs…well, I used to just say that I “wasn’t good with my hands.” But that’s silly – my actual dexterity skills are almost surely not the limiting factor here. Modern physics experiments are way too precise to actually rely on the hand-eye coordination of the experimenters – in fact, they’re *designed* so as to not need you to have those skills. No, when I introspect on why I really didn’t like those labs, it comes down to me being scared of screwing something up. In an experiment if I do something wrong I could start a fire or ruin some $10000 machine. In theory, the worst that happens is I get nonsensical answers.

  11. At the same time, can we at least admit that talking about doing horrible things to your interlocutor’s family in particular is not a smart move? Like maybe just say “what if a criminal broke into someone’s house and raped their children” rather than “what if I broke into your house and raped your children.” If we’re really trying to be practical in getting the point across, the former seems much better.

    • RCF says:

      Well, a point could be made that thinking about someone else’s suffering stays abstract, while imagining yourself being personally hurt makes you feel that you have been wronged.

  12. Stephen Frug says:

    I don’t think this is simply the matter of people not getting thought experiments. I agree that at times there are people on the internet who will either not accept that notion (or feign disbelief to score political points), but I think that is only part of what’s behind this reaction.

    There are various rhetorical signals and stylistic conventions used in introducing a philosophical thought-experiment, which are not purely decorative but which serve various purposes, including making sure one isn’t misunderstood. The Duck Dynasty guy didn’t use any of those.

    Further, I think you could plausibly argue that he actually used stylistic cues which indicate (perhaps wrongly, this could be a matter of communication, but it’s still there) that he was doing something more like “fantasizing”: the relish with which he describes the situation, beginning it with “I bet”, the odd switch to the second person in the middle, the particular details. Fred Clark of the Slacktivist blog wrote that it sounded to him like the famous joke “The Aristocrats” in which the comedian piles horror upon horror to try to outdo others; I think that, stylistically, the Duck Dynasty guy came closer to that than to a philosophy thought experiment.

    If someone doesn’t use the rhetorical signals which help designate a philosophical thought experiment, and does use ones which sound like gleeful torture porn, then they’re apt to be misunderstood.

    Should people try to read his argument more sympathetically? Sure. But there are also reasons not to read it that way if you’re trying to actually parse what the guy meant rather than, in good argument style, take it at its best, e.g. even the slightest knowledge of (meta-)ethical philosophy would make it clear that assuming atheists are moral nihilists is simply an error. Again, I think we should read him sympathetically, but I see why people don’t.

  13. Wrong Species says:

    I don’t see the contradiction. We all have an inherent morality (except for psychopaths), and we can feel like something is wrong without it being objectively wrong. So if I was in that situation, I would say that it definitely feels wrong, but I couldn’t prove it. At some point, it all goes back to our intuitions and I can’t accept that as an answer, as much as moral realists try to convince me otherwise.

  14. NonsignificantName says:

    My response to the thought experiment, putting on my nihilism hat for a second, would be: Well, I wouldn’t like it. I really like my family. But I suppose I couldn’t use my moral philosophy to convince this hypothetical man to let them go, and NEITHER COULD YOU. What murder-rapist ignores “You’ll rot in jail for this if you don’t leave us alone immediately” but walks away solemnly at “You’ve done something naughty”? I wouldn’t commit murder or rape myself because of guilt, fear of repercussion, squeamishness, etc. but this guy isn’t me. Why should he have my moral philosophy and none of my other traits, and why should that have any bearing on my own ethics in a universe where they don’t influence those of would-be murderers?

    • Anonymous says:

      This confuses the ability to convince someone of something with the underlying truth of the fact. Consider an individual who will never believe that vaccines are safe, no matter how persuasive your argument seems. Does this mean that there is no fact of the matter concerning the safety of vaccines? What vaccine-denier ignores, “Your children will die scared and alone, locked in a pressure vessel,” but walks away solemnly at, “You subscribe to poor epistemology”?

      • NonsignificantName says:

        The difference is that the safety of vaccines is a fact about the world. The probability of your child not being alive in the future either increases or does not when exposed to vaccines. Knowledge about whether or not vaccines are safe makes predictions about the world. It is distinguished in this respect from knowledge about morality and from opinion. Knowledge about morality, on the other hand, is not easily distinguished from opinion. You cannot convince me that actions are inherently moral or immoral by showing me actions I really don’t like any more than you can convince me that movies are objectively good or bad by pointing to The Room or that some colors are objectively prettier by pointing at puke green. There’s nothing to stop us from enforcing our moral opinions on others through laws and customs, but they are still, ultimately, as subjective as any other opinion.

        • Anonymous says:

          Sure, you can start from the premise of your nihilism and conclude that there are no moral facts, but you can’t take the inability to convince others as evidence for your position.

          • NonsignificantName says:

            An implicit argument in the thought experiment seemed to be “What would you say to the murderer nihilists?” The fact that the murder-rapists in the thought experiment were discussing metaethics seemed to invite the interpretation that because nihilists could not consistently argue against the murderers, that was a weakness of nihilism. I was saying that since arguing against murderers is a waste of time anyway, being able to do that and be consistent is no boon to any belief system. Looking back, that was a bit uncharitable, but that’s where I was coming from. I was not saying the inability to convince murderers to leave you alone was positive evidence against moral realism. I used nihilism as a starting point because the thought experiment was “two guys break into an atheist’s home” not “two guys break into someone’s home… try telling them that’s not wrong.”

          • Anonymous says:

            …but there was nothing in the scenario about convincing the psychopaths to actually stop. The point was the last sentence,

            If it happened to them, they probably would say, ‘Something about this just ain’t right’.

            Buried in this (if we do actual moral philosophy and are more adept at the details than PR) is the idea that the moral relativist would, in fact, think that there is some measure of wrongness that can be placed on the murderers regardless of what they think of their own actions (…which interestingly makes it seem as though PR thinks that atheists aren’t actually callous and immoral).

            You and I both know that much ink has been spilled developing atheistic versions of moral realism specifically to counter this type of thought experiment. In the academic world, his position will be assaulted both by atheistic moral realists and by moral nihilists, for sure… but it’s also naive to think that this exact type of thought experiment hasn’t greatly influenced exactly those branches of academic thought.

  15. Josh Kaufman says:

    Great post, Scott.

    Thought experiments and other forms of mental simulation can be useful in exploring arguments, but there are two things that have always bothered me about the more extreme ones:

    1. Most extreme thought experiments involve the illusion of certainty, which influences the conclusions. What if you don’t know whether or not the fat man will actually stop the trolley? What if you don’t know what the violent lunatic at the door will do based on your response? Forming strong conclusions from an extreme thought experiment when you know the full range of possible outcomes in advance is like saying you should just put all of your money in the one stock that will have the highest return, so that’s obviously the best investment strategy. Morality is most interesting, IMO, when it’s discussed as a way to triangulate toward what’s likely to be the best attainable outcome, assuming full uncertainty, ambiguity, and lack of complete information.

    2. The transplant surgeon and dust speck thought experiments suffer from major near-far issues – the intuition changes when the individual who has to take one for the team is some random unknown person, someone you know, someone you like, someone you love, someone in your family, your (real or hypothetical) child, or you. It’s much easier to sacrifice a random/far/unknown person for the sake of an abstract principle. When it’s you or someone you love, all of a sudden a bazillion-bazillion minuscule inconveniences no one will think twice about doesn’t sound so bad. It’s a significant enough issue that I have trouble accepting the experiments as useful.

    • HeelBearCub says:

      Most extreme thought experiments involve the illusion of certainty, which influences the conclusions

      This has ALWAYS bothered me about the trolley problem and its various iterations. It seems to me that what the experimenters are actually measuring is not a difference in moral intuition based on action/inaction, but rather a preference for “normal” activity in the face of uncertainty.

      I have this thought that if you explored the trolley problem with people who actually do train-switching for a living, and you gave them a real-world plausible description of the problem – describing switching the train from one track to another in a way that mirrored their day-to-day activity, where they saw one person standing on track A and ten on track B – the action/inaction distinction would disappear.

      Of course, that is my intuition, and I haven’t done the research to know if anyone has ever tried this.

      • Anonymous says:

        It seems to me that what the experimenters are actually measuring is not a difference in moral intuition based on action/inaction, but rather a preference for “normal” activity in the face of uncertainty.

        This actually seems to contradict Josh’s claim. These thought experiments explicitly disclaim uncertainty. I think this is totally acceptable, because it’s the easiest way to just grab the extreme case without having to go through the complicated exercise of building up lengthy example after lengthy example of, “…and here’s how we’re 99% certain that this result would occur… and here’s how we’re 99.9% certain… and here’s how we’re 99.99% certain…” It’s better to just start from the case of absolute certainty and then weaken it later.

        • Josh Kaufman says:

          Right – I don’t have a problem with exploring the extreme zero-uncertainty case. I just see too many people stop there without adding uncertainty and ambiguity back into the situation to see if that changes things. Any moral philosophy that fails to account for uncertainty doesn’t have a very strong claim on reality or validity in normal day-to-day application, and I think that does a serious disservice to moral philosophy as a whole.

          • HeelBearCub says:

            Josh, do you think that the absolute case is actually being measured by the trolley problem?

            My contention is that it is not being measured.

          • Josh Kaufman says:

            HeelBearCub – depends on what you think the trolley problem is measuring. You do get some insight in edge cases by using artificial certainty, but that insight probably doesn’t apply to non-edge cases or situations where there’s significant uncertainty.

            I think the fat man trolley problem, in the extreme artificial certainty case, says more about whether or not a person is willing to accept moral culpability for random situations they happen to stumble into without questioning whether or not they’re actually morally culpable in that situation.

          • HeelBearCub says:

            Josh

            depends on what you think the trolley problem is measuring.

            I have seen the contention that various versions of the trolley problem show an activity bias in our moral intuition. For instance, that the reason we feel it is wrong to push the fat man onto the tracks is that we are biased to believe it to be wrong to take an action that harms someone.

            I contend that the reason this particular action seems wrong is that we do not accept the parameters of the problem as laid out. We do not accept that the fat man will actually cause a derailment with 100% certainty. More importantly, we don’t accept that we COULD know it with 100% certainty.

            Because honestly, the scenario sounds absolutely ludicrous.

            Only if you treat it as a logic problem, taking the parameters as givens, can the problem be worked out in a utilitarian fashion.

            Therefore, I don’t find that version of the trolley problem to be very useful at all.

          • houseboatonstyx says:

            Because honestly, the [trolley] scenario sounds absolutely ludicrous.

            Yes. If the point requires disregarding uncertainty and common sense, why not design a believable scenario instead? Maybe something along the lines of “The Cold Equations”; you are the equivalent of an air traffic controller in the big ship who can remotely command the pilot to evict the stowaway. Or a 9/11 scenario; you are a passenger in the hijacked plane who can cause it to crash in a field instead of in its target city. Or something involving triage in a disaster.

            Really, what is the thinking here? The trolley isn’t a case of extreme results, it’s an unbelievable case. (The organ harvesting case can easily be supposed believable — at least for a good chance of getting away with it once.)

          • Anonymous says:

            why not design a believable scenario instead?

            …because then we get needlessly overcomplicated scenarios and end up hopelessly mired in little details about how certain/believable the scenario is instead of distilling the essential features of the problem.

          • houseboatonstyx says:

            @ anonymous

            I’m not wishing for a complicated scenario; just the opposite. A scenario that is so believable at first reading, that no one will need to complicate it. I gave a couple of examples, though the one with the burn victims needing blood is better.

            But complicated/simple and ludicrous are two different things. Even if the designer for some reason wants a complicated situation, it does not have to be a ludicrous situation. Pushing a fat man off a footbridge to stop a trolley? That’s cartoon stuff.

            Although come to think of it, if the fat man were standing next to the rails, or on a deserted train platform, it might be practical. The reason you know about the people on the track further down can be believable and not ludicrous.

          • Anonymous says:

            An apparent admirer of my ilk immediately complicated the burn victim scenario below. Feel free to spend your time coming up with the perfect scenario that can’t possibly be misinterpreted. After you publish it, all moral philosophers everywhere will say, “Yea, that does a pretty good job of describing what we all knew we were talking about already,” nod, and then go right back to work unaffected.

          • Steve Johnson says:

            The reason that the trolley problem – as bad as it is – is used as the canonical toy example?

            It’s basically the only contribution to ethical philosophy made by a woman in history.

            Man, is progressivism an all-encompassing and never-relenting mind virus.

          • Anonymous says:

            …sooo many female philosophers would like to talk to you right now. How about we start with the person who coined the term “consequentialism”?

          • Troy says:

            Steve, many of the best ethicists of the 20th century are female: Elizabeth Anscombe, Philippa Foot, Judith Thomson, and Christine Korsgaard, to name a few. And Foot and Thomson have done very good work having nothing to do with trolleys. For instance, Thomson’s work on meta-ethics and the meaning of ‘ought’ is much better than most of what passes for meta-ethics nowadays.

        • HeelBearCub says:

          These thought experiments explicitly disclaim uncertainty.

          The experimenter tells the subject that it is a certainty that the fat man will stop the trolley. This is very different from the subject actually being certain that the fat man will stop the trolley.

          This is why Josh asks, “What if you don’t know whether or not the fat man will actually stop the trolley?”

          Just telling someone that something is “guaranteed” to occur does not actually convince them that it is, in fact, a certainty. In fact, when someone says this, most people’s “BS detectors” start alerting.

          So as much as the experimenter would like to measure what people would do if they were certain, simply telling someone to imagine they are certain won’t actually induce the feeling of certainty.

          Imagine you are certain you can hit a home run off of Madison Bumgarner in a single at bat. If you hit a home run off him you can have $100,000. Or you can take $99,000 now. Which do you do?

          Do you see how me saying that you are certain doesn’t actually make you certain?
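
          To put the bet in expected-value terms (a quick sketch; p stands for your actual subjective probability of hitting the home run, which the problem statement cannot dictate to you):

          % Take the bet only if its expected value beats the sure $99,000:
          \[
            100{,}000 \cdot p \;>\; 99{,}000
            \quad\Longleftrightarrow\quad
            p > 0.99
          \]
          % Being told “you are certain” changes the answer only if it actually
          % moves your subjective p above 0.99.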

          • Anonymous says:

            Perhaps you’re just bad at philosophy. We take it as a certainty that the fat man will stop the trolley. This is an axiom. Full stop.

            We could make up a complicated reason for why we have this certain knowledge, but that would just be distracting from the point. Instead, we imagine that someone has already gone to the trouble of concocting a suitably complicated reason and get right to the point of the hypo, since the details of such a suitably complicated reason would be unimportant anyway (…and would only lead to more freshmen fighting the hypo for uninteresting reasons).

          • HeelBearCub says:

            If it is an axiom, then it is a logic problem, and there can be a “right” answer.

            But then why bother describing the situation at all? Why a trolley? Why a fat man? Why a bridge? Surely these all distract from the point.

            The details of the scenario seem to matter to philosophers. Otherwise the scenario would simply be the question, “Is it moral to cause the death of a blameless person to save the lives of 5 other blameless people?” Why is the scenario any more complex than that?

            If you are well-versed in philosophy, you could perhaps take the opportunity to elucidate, rather than simply accuse others of being “bad” at it.

          • Who wouldn't want to be Anonymous says:

            The question framer takes it as axiomatic that the fat man will stop the train. But when you put the Trolley Problem with variations of parameters A through F in a survey, all the variations will come back with different results. Whether or not the variation is purely the result of moral reasoning is far from proven, no matter how explicitly you exhort the respondents to believe the questions as stated.

            The respondents do not have insight into the mind of the surveyor, which causes implicit uncertainty in the question parameters. Is the true purpose of the question to receive the answer asked for, or to discern some other information through side channel leakage? How gullible people who actually answer morality surveys are, maybe? Who knows… somebody has probably done a Master’s thesis on it.

            It is plausible – nay, nigh certain – that even if respondents attempt to assume the question conditions as given, no matter how patently farcical, they are not all able to do so. If someone honestly thinks that they can, my million Internet Points are bet on “you are better at lying to yourself than overriding your bullshit meter.”

            “How likely is this course of action to succeed” is a deeply embedded part of the decision-making process. People are really, really, really, really, really bad at altering how they think – isn’t that, like, 90% of the whole rationality bit? – so asking people to turn off a fundamental part of their brain on a whim is a tall order. And assuming without any evidence whatsoever that they are able to do so seems outright foolish.

          • Anonymous says:

            @HeelBearCub

            If it is an axiom, then it is a logic problem, and there can be a “right” answer.

            This is only true if we already have all of the other necessary axioms.

            But then why bother describing the situation at all?

            Let me pose a slightly different ethical dilemma. I’m not going to describe it, but I’d really like your input.

            Why is the scenario any more complex than that?

            It doesn’t really need to be. Canonical toy problems exist in every field. In control theory, for example (my actual field of research), we always talk about a spring-mass-damper system or an inverted pendulum, but everyone in the field knows that these are just canonical toy problems meant to stand in for the general second-order system.

            After we understand the essential features of the idealized canonical problem, we can start to relax various axioms. When we choose what to relax, we generally look for some motivation. Some of that motivation can come from the canonical system itself (what if we notice that gravity on a pendulum is actually a sinusoid; what if we have a nonlinear spring; what if we’re only 70% sure the fat man will stop the train). Sometimes it comes from external motivations, and we change the scenario substantially, yet import an important result from the essential part of the canonical problem.
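
            For readers outside the field, here is the canonical form being gestured at (a standard textbook sketch; u is the input, and the symbols are the usual ones rather than anything specific to this thread):

            % The general second-order system both toy problems stand in for:
            \[
              \ddot{x} + 2\zeta\omega_n\,\dot{x} + \omega_n^2\,x = \omega_n^2\,u(t)
            \]
            % A spring-mass-damper m x'' + c x' + k x = F(t) maps onto it with
            %   \omega_n = \sqrt{k/m}      (natural frequency)
            %   \zeta    = c/(2\sqrt{km})  (damping ratio)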

            @Who wouldn’t want to be

            In academic circles, there is only limited interest in public surveys of these problems… precisely because we know that there is no way to really keep the public thinking about the essence of the problem. They’re going to find a way to include irrelevant factors, which we’ll then have to figure out how to discount, and at the end of the day, we haven’t accomplished much anyway.

            As a sidenote, physicists shouldn’t assume that cows are spherical and live in a frictionless world, because if you were to commission a survey of the public, there are going to be some farmers who can’t turn off a fundamental part of their brain on a whim.

    • Cliff Pervocracy says:

      Number 1 is spot on. In the transplant scenario, I think the reason people are so leery of taking the stranger’s organs is that they can’t put the realities of transplant surgery out of their head – that many transplants are rejected, and many transplant recipients have so many comorbidities that they have awful quality of life and die soon anyway.

      You can do a handwave and say “in this case, you magically know that all of the patients will be fully restored to perfect health after their transplants”, but it’s hard to make people fully believe that.

      • For what it’s worth, that aspect never occurred to me.

      • Daniel Armak says:

        Maybe a better thought experiment would be: in a 3rd world country, a wildfire sweeps through a village. Burn victims urgently require blood transfusions, or they will die, but the blood bank shipments can’t make it in time. As the doctor on the scene, will you sacrifice a healthy man to use all of his blood and save 5 burn victims?

        • Adam says:

          Sacrifice a burn victim. They still have blood.

        • houseboatonstyx says:

          This is the best version I’ve seen of this problem, or at least the best told. The fire and the 3rd world location make the crime unlikely to be discovered; the fire will burn up the corpse and no one is likely to inventory the medical supplies. Establishing those factors in the first few words with an unremarkable setting is very neat.

          Also the fire produced a number of patients, so choosing patients who match the victim’s blood type is feasible. Again, all this is already implicit in the setting.

        • Who wouldn't want to be Anonymous says:

          This is probably a problematic scenario as well. My extremely cursory triage instruction basically amounted to “ignore people with severe burn or head injuries; they’re going to die no matter what so don’t even waste the band-aids.” A resource-impoverished remote location in the midst of an emergency is a pretty good match for that kind of survival-outlook-based resource husbandry.

  16. Thursday says:

    even the atheists who aren’t quite moral realist usually hold some sort of compromise position where it’s meaningful to talk about right and wrong even if it’s not cosmically meaningful.

    I think Robertson is trying to force this kind of atheist to answer the question “Do you really believe all of this isn’t really wrong in a cosmic sense?”

    On another topic, atheists who are genuine moral realists always strike me as covert Platonists, and thus their claim to atheism is pretty dubious. They may not be Christians or conventionally religious in any way, but I don’t think they’re really atheists either.

    • social justice warlock says:

      On another topic, atheists who are genuine moral realists always strike me as covert Platonists, and thus their claim to atheism is pretty dubious.

      What is so theistic about Platonism? I mean, there is certainly a tradition of saying that the Form of the Good is and must be the classical theist God, but the steps necessary to get there from positing mere spooky value-things are hardly much worse than those necessary to get there practically a priori. Certainly we shouldn’t say that naturalists are the only real atheists.

      • Wrong Species says:

        I’m not going to say that atheist moral realists are “no true atheist” but it does seem very suspicious. Some Christian might say “I have an intuition that God exists” and the atheist demands proof of this god. And yet when it comes to morality, it suddenly is up to the nihilist to prove that morality doesn’t exist. Why do intuitions as an explanation work for ethics but not for God?

        • RCF says:

          How did you go from asserting moral realism, to asserting that nihilists have the burden of proof? And God claims are very different from moral claims.

          • Wrong Species says:

            I skipped a step in explaining. I believe most moral realist philosophers are ethical intuitionists. Ethical intuitionists believe that morality is objectively real and comes from our intuitions. They believe that our intuitions on morality are correct until proven otherwise, an argument they probably wouldn’t accept if someone said that about God.

        • LTP says:

          If the burden of proof is *not* on the moral nihilist, then on what grounds is the burden of proof *not* on the external world skeptic in the case of epistemology? After all, the only reason we trust our senses to be generally reliable and represent something external and physical is our intuitions.

          As for God, I would say that the burden is on believers in God because the intuition that God exists is not even close to universal. On the other hand, the intuitions that the external world exists, and that there are moral facts, are things almost everybody has (I forget where I read this, but even sociopaths have moral intuitions to a degree), even if some deny them at a rational level.

          • Wrong Species says:

            If you went back about 500 years, you would probably find that god intuition much more universal. It’s funny how our “universal” intuitions can change so fast.

    • I’m an atheist and what you would probably call a moral realist.

      But I think of myself as an ethical realist, because I believe there are objective ethical truths. The most important of these is that if you do not behave in a reciprocally trustworthy way, the expected outcome of your dealings with other humans is likely to be very bad for you.

      I don’t require a spook in the sky to tell me this, nor any Platonic ideal-forms nonsense. All I need is game theory and economics.

      • Anonymous says:

        So, you know, a humanist.

      • Wrong Species says:

        Game theory and economics might be able to tell you how to achieve your goals but it can’t tell you what your goals should be.

        • Well, those are pretty much wired in. I want to have pleasant experiences and not unpleasant ones. I want to have enough to eat, have frequent sex, not live in constant terror – obvious stuff.

          In addition, a consequence of the evolved adaptive mechanisms for achieving these goals is that I like to know how things work and what is actually true. I get a lot of pleasure from that, too.

          My point is that game theory and economics don’t have to tell me what my goals should be. Being an embodied intelligence takes care of that part.

          • Wrong Species says:

            You keep talking about your preferences but the original point was about moral realism. Do you believe that your preferences are objectively true? If so, why? If not, why do you call yourself a moral realist?

          • Sorry, I have to outdent to reply to “Wrong Species”.

            This is why I was careful about the distinction between moral and ethical realism in my original reply.

            I don’t think my preferences have any truth value at all. “Eric desires not to live in constant terror” is a truth claim, but the actual desire isn’t a truth claim any more than my need for oxygen to metabolize with is.

            Ethics enters when we try to answer the question “how do I maximize my chances of the outcomes I desire in a world containing many other people with different and conflicting desires”.

            At that point we can start to make truth claims about what kind of behavior and rule-following is most likely to be successful for me.

            I am an ethical realist because I think those rules are largely – perhaps entirely – deducible from natural law.

            I’m a little surprised to find myself having to explain ethical egoism in this forum.

          • Wrong Species says:

            Eric, there is no difference between moral and ethical realism, at least not as used by philosophers (or anyone else, really). When you say that you are an ethical realist, you are saying that you believe in objective morality, but you never defended that point, hence my confusion. So it seems like what you are really saying is that you don’t believe in objective morality, but you believe in a morality which will be beneficial for you in the long run. I wish you had said that in the first place instead of creating these new definitions.

          • “This is why I was careful about the distinction between moral and ethical realism in my original reply.”

            I have similar views (but see below), and find the phrase “naturalized ethical objectivism” useful to point out the distinction.

            “I am an ethical realist because I think those rules are largely – perhaps entirely – deducible from natural law. I’m a little surprised to find myself having to explain ethical egoism in this forum.”

            Naturalized ethical objectivism doesn’t uniquely lead to egoism. Contractarianism, which has been discussed here, can follow, particularly if you see ethics as being more about minimizing losses than maximizing gains.

      • RCF says:

        If you are motivated purely by a belief that your actions are in your self-interest, then you are not acting according to morality. A decision involves morality only when a person’s interests, as they see them, conflict with what they view as their obligations. Deciding whether to steal is a moral decision. Deciding whether to eat is not.

        • In your terms, then, I have no morality – but my ethics are sufficient to tell me not to steal.

          At one level, if you make a habit of stealing you will probably be caught and suffer consequences.

          At another, the kind of person who steals is not the kind of person who has as many positive-sum transactions with others as he could.

          There’s a short, straight path from selfishness to contract-based ethics to virtue ethics. The result looks a lot like ‘morality’ because ‘morality’, irrational and taboo-based though it is, has evolved under selective pressure to enable human beings to get along.

          I feel like I’m stating the trivially obvious here. Are these really alien concepts in this crowd?

          • Carinthium says:

            On one level: So don’t do it too much. But when you see a good opportunity, take it.

            On another: Why are you so sure about that? Charismatic jerks in dating, successful political intriguers, etc.

          • Wrong Species says:

            We all know what ethical egoism is. I don’t know why you act like we don’t.

          • Carinthium says:

            For a start, I’m not an ethical egoist. I do not believe that the word ‘ought’ refers to any actual thing. I can understand the mistake, but I’m not an ethical egoist. I also provide for limited cases in which selflessness makes sense, but that’s another story.

            Second, Eric S. Raymond is both trivially wrong and obviously wrong, so I figured I had to spell out the obvious. Pursuing the good of society does not fulfill a person’s self-interest in the long run. Inevitably they will diverge at some point.

          • Wrong Species says:

            Carinthium, I meant my comment to be directed towards Eric.

          • Raemon says:

            I’m more or less in the same boat as Eric, ethically (with an extra dose of “parts of me just want other people to be happy, because Empathy”).

            I’d be slightly surprised if these concepts were genuinely foreign to these forumgoers. My guess is that the blog attracts smart people who HAVE thought about ethics, but not necessarily people who use the same terminology or a common framework. So, the other people in the debate aren’t unfamiliar with the concepts, but may not be sure what *you* mean by them.

            Or maybe they’re being contrarian. Or maybe the blog also attracts smart people who happen to not have thought about ethics in this particular manner, although that’d surprise me.

          • Gunnar Zarncke says:

            > Are these really alien concepts in this crowd?

            No and yes. You asked twice, so I’ll answer at a bit more length.

            SSC’s crowd was seeded from lesswrong.com, but as it grows it risks declining by pacifism: http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

            This is a difficult ridge to walk. Either you inbreed or you regress to the mean. I’d try to accept that there are always some who do not (yet) know all the lore.

            If you want to have a discussion without noise, go to
            http://lesswrong.com/r/discussion/lw/lyq/discussion_of_slate_star_codex_extremism_in/

        • blacktrance says:

          This definition excludes ethical egoism, contractarianism, and related systems that are traditionally considered ethical theories, and is at odds with some justifications of morality, e.g. the Veil of Ignorance.

          • RCF says:

            It is consistent with the Veil of Ignorance. We can act in ways that advance our interests now that we are past the Veil, but would have been contrary to our interests while we were still behind it. To do so is to act immorally.

          • blacktrance says:

            That’s debatable. On one hand, it’s as you say, on the other hand, the obligations (can) come from self-interest behind the Veil. So if the obligations of the Veil are justified, then self-interest can’t be excluded entirely.

      • Viliam Búr says:

        I believe there are objective ethical truths. The most important of these is that if you do not behave in a reciprocally trustworthy way the expected outcome of your dealings with other humans is likely to be very bad for you.

        Then the correct thought experiment for you is what you would do in a scenario where you knew with certainty (imagine any sci-fi scenario that makes the following true) that not behaving in a reciprocally trustworthy way would likely be very good for you.

        In other words, your intuition was forged in situations where humans have comparable power, some of them take revenge, information about your behavior is somewhat likely to spread, and other people will reason that if you are the kind of person who harms others, you are also likely to harm them.

        Now imagine the situation where each of these assumptions is false. For example, you have reliable superior power forever, or you can do your crimes absolutely anonymously (such that there is really zero chance to be discovered in the future, not merely the typical overconfidence of an average criminal), or you find a population of people so extremely stupid or forgetful that they cannot make a logical conclusion that “if this guy hurt someone else yesterday, he may also hurt me tomorrow”. And please imagine the situation seriously, instead of imagining a scenario where someone could believe these assumptions are false, but in fact they would be true and the person would be punished afterwards.

        Now in this scenario you either don’t feel a need to be nice to people (which means that the “objective truths” actually depend on circumstances), or you still do feel a need to be nice to people (which means that your behavior does in fact not require reciprocity).

        • That is clever, but all you are demonstrating is that my formulation of objective ethical truth needs to have the correct preconditions attached, e.g. “In a situation where all agents have roughly equal power…”

          I don’t have a problem with this. I can easily match your hypothetical to my actual situation with respect to most animals.

          But this situation may still have objective ethical constraints. One very obvious one is that at some point in the future I may encounter and wish to enter reciprocal exchange with a being as powerful as myself. My utility calculations must therefore include a term for the possibility that this being will have knowledge about my behavior towards animals and use it to evaluate me.

          The obvious step to virtue ethics applies here as well. I may (in fact, I do) wish to refrain from harming animals because I believe cruelty damages me. (In ways including but not limited to damaging my ability to enter into reciprocal trust with equals.)

          • ADifferentAnonymous says:

            Reasonable answer. Now try this one:

            You’re sent back in time to, say, 1600 (with any technological knowledge that would derail the thought experiment lost). Are you able to honestly argue to the era’s slave owners that slavery is wrong, given the knowledge that almost none of them will live to face any repercussions for supporting it, and they definitely aren’t going to encounter an equally powerful civilization that will judge them for it?

          • Irrelevant says:

            Wait, argue honestly or argue successfully? Because it’s obviously possible to argue the point honestly: you know the future. You know there were in fact bad consequences of plantation slavery and that they’ve continued for (at least) a time period comparable to plantation slavery. The success of the argument, as with all arguments against tyranny rooted in the fact that tyranny is unsustainable, depends on how future-oriented the plantation owners are.

          • HeelBearCub says:

            @ADifferentAnonymous: To further support Irrelevant’s point, none of the slaveholders will live long enough to see the negative outcomes, but their heirs and the heirs of other members of their in-group will. Can we take it as a given that any successful model of desirable future outcomes has to include biological descendants?

          • Nita says:

            @ HeelBearCub

            Have the descendants of typical plantation owners really suffered more negative outcomes than the descendants of “nice” plantation owners who granted all their slaves freedom? I would expect the opposite, to be honest.

      • John D. Bell says:

        In other words –

        “What goes around, comes around.”

        😉

  17. Dormin111 says:

    There is an important distinction to be made between “reductio ad absurdum” arguments and “extreme scenarios.” The former is good philosophy, the latter is bad philosophy.

    Reductios take a moral principle and stretch it to its logical conclusion. The latter is just describing an impossible or extremely unlikely scenario in which an individual is supposed to make a decision. Philosophy is supposed to guide an individual’s actions in real life, not in crazy scenarios they will never even encounter, like stopping trolleys from running over people or murdering and eating people in a life boat. In those types of emergency situations, an individual’s entire moral system will adapt to the context based on meta-principles, and therefore the principles derived from such scenarios have little to no bearing on everyday life.

    • Carinthium says:

      In terms of how humans tend to behave, you are right. But that is because humans have a tendency to be irrational and follow principles that don’t make sense when scrutinised carefully. Most people’s actions, philosopher or not, don’t make any sense when subjected to close scrutiny.

      It is admittedly a difficult task to make a person actually follow a philosophy. But philosophy is SUPPOSED to contain the ‘meta-principles’ you speak of. Even a bad philosophy is better than none, as having no philosophy to create the meta-principles you speak of would mean falling back on instincts that have been shown to work inconsistently.

    • RCF says:

      To continue the making-analogies-from-other-fields trend, the values of an analytic function within a domain are entirely determined by its behavior on the boundary of that domain, regardless of how far that boundary is from the region of interest. If the function at the boundary is different from what we say it is, then the function at every location (or, at least, all but a measure-zero set of locations) is different from what we say it is.
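
      For readers who want the theorem behind the analogy spelled out, this is the Cauchy integral formula (a standard textbook statement, added here for reference rather than taken from the comment): for f analytic on and inside a simple closed contour \partial D,

      \[
        f(z) \;=\; \frac{1}{2\pi i} \oint_{\partial D} \frac{f(\zeta)}{\zeta - z}\, d\zeta,
        \qquad z \in D,
      \]

      so the boundary values of f alone determine every interior value, however far the boundary lies from the region of interest.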

  18. 27chaos says:

    I like this. To extend the analogy:

    There were some people who feared that CERN’s particle accelerator would destroy the earth, and so opposed the high energy experiments. Similarly, there are people who fear that morality is only an enjoyable illusion, and so oppose high energy thought experiments.

    (This is not to claim all objections to thought experiments are about a fear of what happens as a result. But some of them are.)

  19. Unknowns says:

    If someone is really an error theorist, they will believe there is nothing wrong with raping and killing and so on. But they reject right and wrong; they don’t reject the fact that they have desires. And they don’t want to be raped and killed, nor do they usually want others raped and killed. But error theory implies there is nothing wrong with lying, so there is no reason for them not to say, “yes, raping and killing is wrong,” since it is useful for their goal of preventing people from being raped and killed.

    In fact, there is also no reason for them not to deny that they are error theorists in the first place, if this will be useful for their goals, and in fact it would be useful for almost any goal, since if you admit that you don’t believe in right and wrong, many people will not trust you (e.g. who would trust TGGP after he said he was an error theorist and claimed that he would be happy to allow a stranger to be tortured for 50 years in order to avoid personally stubbing his toe?) (http://lesswrong.com/lw/kx/fake_selfishness/g5e). So the true proportion of error theorists may be higher than indicated by surveys.

    • JM says:

      My first thought was, “wow, it would take a really weird/creepy/dishonest person to do the above,” but upon further thought it fits me to a T. I’m reasonably certain we’re just over-evolved clumps of atoms and that our actions have no more moral content than the weather’s, but I like living in a society where we act as if actions had moral content, and am willing to play along to keep that society intact and avoid social ostracism.

    • Carinthium says:

      In defence of open error theorists, I should point out I’m posting my views under an Internet identity. None of you are likely to track down my real world self. Hence, it hardly matters what I say.

    • Dormin111 says:

      If morality is a system of values which an individual adheres to, then pursuing one’s own desires, even without any further thought on the matter, is still a moral system. Morality is unavoidable due to the simple fact that human beings make choices and cannot avoid making choices.

      • Carinthium says:

        That’s the same logic by which atheism is a type of religion. A moral error theorist, particularly one of my sort, avoids the problem altogether.

        In my philosophy, for example, I collapse moral and non-moral wants together. Since plenty of ethical systems consider, for example, what to have for breakfast as outside ethical theorising, there is an easy case to do so.

        • Dormin111 says:

          What is the case for separating decisions into moral and non-moral?

          Choosing to eat food is still a decision pertaining to your existence in the world. Why should you eat food? Why not waste away and die? What right do you have to consume the earth’s bounty? Even if the answers to these questions are obvious, they still exist within a framework of how one sees oneself as a moral agent.

          • Carinthium says:

            That attitude leads to a lot of potentially absurd conclusions. By your logic atheism is a religion, asexuality is a sexual orientation, anarchy is a type of government, and being unemployed is a career.

            Even if you bite the bullet on those, why do you accept the connotations that I somehow believe in the right, the good, etc.?

          • Dormin111 says:

            Atheism is the belief that there is no God. Religion is a type of ideology based on the existence of a God. Atheism is a moral framework that opts out of the concept of religion.

            If morality is a system for guiding individual action, then there is no way for a person to opt out. No person can literally choose not to make decisions. Even if you say, “I’m just going to lie on the ground and wait to die,” that is still a chosen action based on the pursuit of some value.

          • Carinthium says:

            Why are you rejecting the people who argue that religion is a belief as to the existence of gods? Or for that matter, the argument that morality is a belief as to the nature of right and wrong?

            Your system cannot make sense of libertarian morality, or other types of morality which designate a right-to-choose in certain situations. If a person has a right to choose either way, how can either choice they make be moral?

            In addition, your system ignores the fact that, in practice, people make decisions with no reference to morality constantly – what food to eat, what clothes to wear, what video games to play, etc. I am trying to extrapolate that kind of thinking to all decisions, and nothing about what you say makes that somehow incoherent.

            It’s your choice, but I recommend we taboo ‘Morality’ (http://lesswrong.com/lw/nu/taboo_your_words/), and instead discuss any potential philosophical flaws in extrapolating the decision procedures of such decision making processes elsewhere.

      • RCF says:

        “If morality is a system of values which an individual adheres to”

        It’s not.

        When people say that morality exists, they are asserting that there is a meaningful sense that we can speak of ought separate from the actor’s personal interests. If you deny this, then you deny the existence of morality. Arguing that there is no such thing as morality, therefore every decision we make is a moral decision, is a bizarre argument to make.

        • blacktrance says:

          “Every decision we make is a moral decision” is a statement with which consequentialists and some virtue ethicists would agree.

          • Anonymous says:

            But consequentialists and virtue ethicists don’t believe that “there is no such thing as morality.” It’s the conditional statement (rather than the conclusion) that RCF is calling a “bizarre argument.”

          • Peter says:

            I once came up with the “lunch dilemma”: It is lunchtime. You have lunch. You feel like eating lunch, you have nothing else important to do in your lunch hour. There are no particularly unusual circumstances surrounding your lunch. Should you eat your lunch?

            My intuitive response is: well, for non-moral values of “should”, yes, but it’s not a moral obligation as such. Some framings of utilitarianism do make it a moral obligation – well, those that aren’t going to require you to spend your lunch hour searching for someone hungrier than you to give your lunch to.

    • The Smoke says:

      I think the behavior you associate with a true error theorist is how a lot of normal people act, though less consciously about the whole process, of course. When they are in a situation where it is beneficial to act against what is perceived as moral, they do so, but rationalize that what they have done is exactly the right thing.

      This is definitely easier to process.

      I cannot imagine that anybody who sincerely believes he is an error theorist would lie about that in a quasi-anonymous online survey, as 1) there is no relevant profit in doing so, and 2) you get an opportunity to express your inner world view.

      If there was a sizeable amount of people who would consciously lie regardless, this would be big news.

      • darxan says:

        There is a brand of error theory called moral fictionalism that holds that, whereas statements such as “Murder is wrong” are strictly false, they can be kept as useful fictions. So when a moral fictionalist says “Murder is wrong” he doesn’t lie any more than when he says “Zeus and Poseidon are brothers”. It’s Phil Robertson and the Greek polytheist who are in error.

    • stillnotking says:

      The oft-overlooked parallel to moral anti-realism is what one might call “selfishness anti-realism” — selfishness is no more “true”, authentic, or rooted in reality than altruism is. They are both just adaptive mechanisms, created by natural processes that don’t care whether anyone lives or dies.

      There is a tendency to conceptualize selfishness as the most “natural” ground of behavior, but strictly speaking that is not true. In some species, like colony insects, the impulses of “altruism” (broadly construed) greatly outweigh the impulses of “selfishness”. Even in humans, it would be just as accurate to say that we need selfishness to keep our altruism in check as the reverse! I find this takes some of the sting out of moral anti-realism/error theory.

  20. I think the problem here is that it’s pretty plain that this guy isn’t a moral philosopher and isn’t trying to uncover some interesting aspect of people’s morality to further moral understanding. As you’ve already pointed out, this is at least in part a straw-man version of most forms of atheism, but what is really troublesome is that it coincidentally looks like a thinly veiled expression of his partial desire to go out and do these things to atheists. It’s also telling how the violence is described in comparison to, say, the trolley problem. Now, if we had good reason to think that moral philosophers had a strong desire to go out and run people over with trolleys, we’d probably become pretty concerned about them too. But I can’t think of any cases of moral philosophers waging a murderous campaign against fat rail workers or the like. On the other hand, there are people in the world at the moment who want to murder atheists just for their beliefs. So I don’t think the outrage is really that irrational, though perhaps if the guy is trolling it would be better to just ignore him.

    • Dormin111 says:

      Is there any evidence that Phil Robertson in particular wants to murder atheists?

      • social justice warlock says:

        I have no dog in the fight of what Phil Robertson thinks, but we all relish thinking about scenarios we don’t actually want to bring into being – adultery, for instance.

        • Tom Womack says:

          I would be quite startled by someone who relishes thinking about *adultery*. Someone who covets their neighbour’s wife: fine – but what they’re relishing is the imagined sex rather than the imagined adultery; if they are delighting in the prospect of their neighbour’s unhappiness at his wife’s infidelity, they’re pretty awful people.

          • Irrelevant says:

            Awful or not, it shouldn’t be surprising. There are genres, plural, of pornography dedicated to that.

          • Anonymous says:

            Are you being disingenuous? It is possible that a man could actively desire his sexual partners to be married, without desiring their husbands ever to find out about him. I suspect that motivating this desire from an evo-psych point of view would be very easy, in terms of sexual competition.

          • Douglas Knight says:

            Tom, I think you’re pretty much agreeing with SJW. But consider “My father can beat up your father.”

          • DrBeat says:

            Then you probably shouldn’t look up a genre called “NTR”.

        • Eli Sennesh says:

          Uhhh… I don’t? I mean, if you were going to fantasize about having sex with someone you’re not married to, can’t you at least bother to fantasize about getting permission? Or hell, fantasize about a damned threesome for a change.

          Like, if your fantasy/fetish actually focuses specifically on the harm you’re doing to your steady partner and your relationship, then I think you need to see a relationship therapist, because something is clearly very wrong.

          • Nornagest says:

            You can, but why would you expect people to?

            I’m not personally into this, but it’s pretty understandable to me: a lot of folks get off on the trappings of behavior that’s connected to humiliation or social degradation in some way, either receiving or inflicting, and I think the infidelity fetish (which, judging by the porn genres mentioned above, is quite common on both the receiving and the inflicting ends) is one of those. Adding a step for getting permission doesn’t quite stop that – it’s still at least a little outré – but I think it should be clear that it doesn’t push those buttons nearly as hard.

            It’s not specifically about harm, but it is about a kind of profanement.

          • Eli Sennesh says:

            @Nornagest: yeah, I get that it’s about degradation. What I don’t get is why you would fetishize degradation. There’s something weird, in my eyes, about the way many people seem to equate “Sacred = insufferable, degraded/sinful = awesome”.

            Is it to do with Christian teachings about sex being dirty, in this case?

          • Irrelevant says:

            Other way around. Weird arousal/disgust entanglements are why Christianity concludes sex is somehow inherently dirty.

          • Nornagest says:

            What I don’t get is why you would fetishize degradation.

            The short answer is I don’t know. But it seems to be cross-culturally valid enough (judging from classical and medieval works from Europe and the Near East, and from what I’ll euphemistically call Japanese media) that I don’t think it has anything to do with Christianity.

            I strongly suspect there’s some deep neurology involved, which means the answer likely lies in some evopsych motivations that I don’t feel comfortable speculating about.

      • Anonymous says:

        Is Phil Robertson actively planning to go BTK on a family of atheists? Almost certainly not. Does Phil Robertson literally believe that Elijah called down fire upon 400 idolators and that Ananias and Sapphira were struck dead by God for misreporting their assets? Yes. Isn’t it therefore likely that he holds a world view where horrible scenarios like the one he described are sometimes actually useful or even called for to rectify matters simply of belief?

      • RCF says:

        Evidence in what sense? Evidence in the P(E|H) > P(E|~H) sense? Absolutely. Evidence in the “this is enough to prove the assertion” sense? No.

        I would put the probability of PR bearing malice towards atheists, and preferring policies that would harm them, at greater than 90%.
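
        In symbols, the sense of “evidence” invoked above is the likelihood-ratio form of Bayes’ theorem (standard notation, spelled out here for reference rather than taken from the comment):

        \[
          \frac{P(H \mid E)}{P(\lnot H \mid E)}
          \;=\;
          \frac{P(H)}{P(\lnot H)} \cdot \frac{P(E \mid H)}{P(E \mid \lnot H)},
        \]

        so E is evidence for H whenever the likelihood ratio exceeds 1, even when the resulting posterior odds remain far short of anything like proof.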

      • I think the point isn’t that Phil Robertson wants to murder atheists, but it’s plausible that he wants to give an atheist a rough time by pushing the atheist to imagine something upsetting and then expecting the atheist to treat this as a neutral hypothetical.

    • meh says:

      Given his occupation, I would lean more toward him trolling than making a philosophical statement. And elevating his statements to a serious philosophical question also seems intentionally controversial.

    • Airgap says:

      I think the problem here is that it’s pretty plain that this guy isn’t a moral philosopher and isn’t trying to uncover some interesting aspect of people’s morality to further moral understanding.

      In a sense, “Populist agitator” is just the degenerate case of “Moral philosopher.” Several senses, usually.

  21. This post is an extreme ethical dilemma, or reductio ad absurdum, all on its own.

    My moral intuitions point strongly to extending intellectual charity to everyone, at all times, to steelmanning and learning. Turns out, it is in fact possible to extend too much intellectual charity. Sometimes people do use the form of moral philosophy to fantasize about atheists getting murdered.

    • Irrelevant says:

      I was waiting for someone to say that, because the meta-level question it brings up is great: Justifying the moral condemnation of murder without relying on a god is pretty simple. But now, given that we just saw you do it, how do you justify your moral condemnation of fantasizing about murdering atheists?

      • Tom Womack says:

        Murder is bad, therefore fantasizing about murder is bad. Being deprived of things you need is bad, therefore fantasizing about stealing things that their owners need is bad; being deprived of some portion of your surfeit is less bad, so fantasizing about being one of Ocean’s Eleven is less bad.

        The nice thing about fantasizing is that you can delight in one part of a thing which in practice comprises inseparable parts.

        • Irrelevant says:

          X is bad, therefore fantasizing about X is bad.

          No, that’s the bit you’re being asked to justify.

        • Matthew says:

          Let’s take a non-hypothetical example:

          A substantial number of men a) would never rape anyone, and b) have rape fantasies.

          Personally, I don’t see anything immoral about that. I don’t believe in thoughtcrime.

          However, I don’t think it helps Mr. Robertson out much. Having rape fantasies is innocuous. Describing one’s rape fantasies in lurid detail – in a forum where one knows women who are not looking for such content will see it, switching to the 2nd person in describing the victim midway through – is harmful in ways that simply having fantasies is not.

      • For me, because it likely caused considerable distress to at least part of the audience. That’s only because it was expressed unnecessarily vividly, though.

  22. Anaxagoras says:

    Hi, first-time commenter here.

    You say regarding Constant’s thought experiment: “And even this is suboptimal. […] Or an ancient demon, whose victory would doom everyone on Earth, man, woman, and child, to an eternity of the most terrible torture.”

    I don’t think this is quite right. People are bad with large numbers. The eternity of torture for everyone feels intuitively less bad than the murder of a friend, and I suspect people would be more willing to bite the bullet.

    Obviously, this intuition is faulty, but I think that things of such enormity can’t directly compare with more plausible, personal-scale sorrows. If you’re trying to get people to see where their moral intuitions can go astray, don’t go with the thing that, in a utilitarian sense, is worst. Go with whatever twangs their intuition most strongly.

  23. meh says:

    My moral dilemma thought experiment:

    What context or situation would his statements have to be in for you not to regard them as a moral dilemma thought experiment?

  24. Steve Johnson says:

    I once had someone call the torture vs. dust specks question “contrived moral dilemma porn” and say it proved that moral philosophers were kind of crappy people for even considering it.

    Ok, I’ll take the bait.

    The real-world outcome of utilitarian reasoning ends up being lots of torture and murder, because the nature of humans is to use moralistic-sounding excuses to gain status, “the greatest good for the greatest number” is a naively persuasive moral sentiment, and humans are violently competitive and will kill for holiness points.

    The end result of abandoning duty and contract based morality is lots of holy torture for a long run that never comes.

    Being explicit – I’m referencing demotist movements here.

    Traditional morality – which is rule- and contract-based (keep your promises and perform your duties well) – has some edge cases with bad outcomes, but anything that’s been under selection for a long period of time has had massive failure modes selected out.

    How does this relate to moralistic torture porn? First, people try to get that frisson from talking about taboo topics in a “high minded” manner. The taboo gets worn down and broken and you then have to talk about how the taboo thing is a great idea under some moral framework (“would you murder one man to save five from accidental death”). What’s left after that? Guillotines because it’s the most humane way to kill the enemies of progress? Mass starvation because the enemies of the revolution are preventing the glorious future of the proletariat?

    Nah, that could never happen and I’m sure the current mania for rooting out all sources of the evil thoughts will work out fine.

    • Carinthium says:

      Rationalisation tends to happen in the moment, and is significantly easier to avoid when the scenario is contemplated abstractly. This, combined with rule utilitarian attitudes towards how to behave, can greatly mitigate the problems you talk about.

      The problem with your approach is a lot of absolutely pointless rules. Was there any real point to the subordination of women for so long? Or filial piety past the age of adulthood? Or arranged marriage, for that matter?

      • Irrelevant says:

        The problem with your approach is a lot of absolutely pointless rules. Was there any real point to the subordination of women for so long? Or filial piety past the age of adulthood? Or arranged marriage, for that matter?

        Very, very obviously yes.

      • Steve Johnson says:

        The problem with abandoning traditional morality is that you think those rules were pointless and so you’re blind to the massive horrible costs to abandoning them.

        Yes, yes and yes to your questions.

    • “Traditional morality”, as you describe it here, isn’t actually “morality” at all but a reciprocity-centered ethic. Even an extreme libertarian ethical egoist like myself can agree with “keep your promises and perform your duties well”.

      I’m not saying this because I disagree with what you call “traditional morality” here, I’m saying this because usually when people say “traditional morality” they mean something more taboo-centered, more authoritarian, and more religiously inspired. More along the lines of: be obedient, be chaste, and honor the customs of your tribe no matter how arbitrary and senseless they may seem.

      • James Picone says:

        That is what Steve meant.

      • Steve Johnson says:

        I’m saying this because usually when people say “traditional morality” they mean something more taboo-centered, more authoritarian, and more religiously inspired. More along the lines of: be obedient, be chaste, and honor the customs of your tribe no matter how arbitrary and senseless they may seem.

        Yep.

        There are two important factors:

        1) The content of the rules has to be good – they have to be rules that allow for a healthy society, one that permits the flourishing of human virtue and helps vulnerable people suppress urges toward vice
        2) You can’t score holiness points by subverting the rules

        Authority and taboo take care of number 2 – you invoke authority and taboo to ensure the stability of the rules and to prevent holiness spirals from even starting.

        Natural selection takes care of ensuring that the content of the rules is good.

        If your tribe is dying out, then honoring the customs of your tribe is a way to get more of the same; maybe you should see if your tribe had different customs when it was flourishing, because there sure as hell is a set of good customs there somewhere, or your tribe would have joined countless other tribes and subspecies of humans in dying out.

        • Nita says:

          Natural selection takes care of ensuring that the content of the rules is good.

          Many traditional morality systems fail to forbid slavery, marital rape and child abuse. That’s not good enough.

        • Nita makes an emotive point against “Natural selection takes care of ensuring that the content of the rules are good” by bringing up slavery, marital rape and child abuse.

          Let’s unpack this a little more analytically. “Traditional morality” approves practices most people now find horrifying because of its conservatism – it’s only good for slowly exploring local optima very close to existing custom.

          Thus, under “traditional morality” you end up getting stuck with practices like slavery because, being conservative and rule-bound, you don’t notice that the cruelty of slavery is essentially productive of inefficiency.

          Eventually, your “traditionally moral” society is outcompeted (and probably destroyed in war) by a society that is less cruel and more efficient.

          In ethics, as in any form of inquiry, we have to be concerned with what is true, not with what is comfortable or convenient or produces temporary stability at a local optimum. The relevant truth here is that the cruelty of slavery is essentially productive of inefficiency.

          “Traditionally moral” societies are poor at truthseeking. This is a fatal weakness.

          • Steve Johnson says:

            Slavery isn’t some unique feature of a certain place and time – it’s a human universal.

            A man who can’t afford his own upkeep either starves to death, lives as an outlaw, or lives at the pleasure of a master – that is, as a slave.

            Of course, now “slavery” is an emotional trigger word mainly used to shut down any sort of thought, but it exists right now in the United States. When the USG is finally out-competed and no longer rules the world, it won’t lose because it had a bunch of slaves – the states that are going to crush it have plenty of slavery too – it will lose because it is a bad slave owner.

            Historical slavery was much superior to modern “no slavery” slavery.

          • John Schilling says:

            “…lives at the pleasure of a master”

            Which master?

            It would seem to me an essential difference between slavery and freedom that the free man can chose between many “masters”, and indeed may have more than one at a time, and may alter his choice at any time. Which changes the dynamic to the point where “master” probably isn’t the best word to use.

          • Steve Johnson says:

            John Schilling –

            There’s only one welfare state.

          • John Schilling says:

            I count fifty between the Rio Grande and the forty-fifth parallel, twenty-six in Continental Europe, both sets with freedom of movement and choice for the “Slaves”.

            I would also dispute “slave” as an accurate term for a welfare recipient.

    • anon says:

      Demo-what?

      • Peter says:

        It’s a neoreactionary term. See here, section 2.3.1. So, in short, “Demo-what?” is a pretty good response.

      • Irrelevant says:

        “Demotism” is a neoreactionary neologism. Superficially, it’s a categorization of governments based on their stated claim to legitimacy, a demotist state being one that claims to derive its authority from The People, by contrast to colonial provinces claiming authority from the higher power of Imperial approval, theocracies and divine-right monarchies claiming God’s Will, company towns pointing to explicit contracts, or warlords just plain saying “I have more guns.”

        Substantially, its use encodes the claims that A. democracy is not a uniquely valid premise for governmental legitimacy. B. “true democracy” is not a distinct category from “sham democracy.” and C. any real society extracts the consent of the governed by a combination of educational indoctrination and coercive force.

  25. Sly says:

    You don’t need to think that there are some kind of nebulous “objective” moral truths to say that things are bad or that you think they should stop.

    Additionally, it’s not like a psychopath gives two shits if their murdering for fun is *wrong* in any sense.

    • Carinthium says:

      In your view then, what does it mean for something to be morally wrong? If you’re not appealing to any objective moral truth, please go into more detail about what it is to you.

      • Sly says:

        I wouldn’t say something is “morally wrong” in the same sense as you, most likely. In my eyes, human morality is essentially just us trying to extrapolate from empathy, but it holds no special force on an agent that does not share our values or empathy (e.g., a sociopath).

        If a babyeater says that trying to stop babyeating is bad, the word “bad” has a specific reference to a type of agent with a certain set of values. It will be very compelling to other babyeaters, but not to humans.

        –Edited for Clarity Hopefully–

      • Drew says:

        It’s like saying that a restaurant has bad food.

        Technically, there’s no objective standard to measure “goodness” or “badness”. So the phrase seems somehow inconsistent.

        In practice, they’re just a shorthand for something like, “this is good/bad as judged by a subset of our preferences that I expect we share.”

        The gimmick in Phil Robertson’s question is that he’s set things up to conflate “is there an objective standard?” with “are there sets of preferences that we expect normal people in society to share?” in the minds of his audience.

        • Sly says:

          Exactly!

        • blacktrance says:

          It’s like saying that a restaurant has bad food.

          Technically, there’s no objective standard to measure “goodness” or “badness”. So the phrase seem somehow inconsistent.

          Does it? For example, if a lot of people don’t like the food at a particular restaurant, it seems reasonable to say that it has bad food. That’s even more true if it gives food poisoning to its customers. It’s not a stretch that “good food” = “food that customers like”, and that’s objective.

          • Sly says:

            ““good food” = “food that customers like”, and that’s objective.”

            What happens when customers have differing opinions? Sounds like the very essence of subjective to me.

          • blacktrance says:

            In that case, you can still say that some foods are better than others, because you can create a dish that no one likes. You can say that dishes that are liked by more customers are better, if you’re looking at it from a third-person non-customer perspective. If two dishes are approximately equally well-liked, it seems reasonable to say that they’re equally good. From the perspective of a particular customer, “good food” can mean “food the customer likes”. When two customers disagree about which food is better, they’re conceptually confused – one food is better-for-Customer-A, the other is better-for-Customer-B, and one of them is also better-for-customers-in-general. All three metrics are objective, though the same word being used to refer to all three invites confusion.
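
            A minimal Python sketch of the three metrics just distinguished (the dishes and all rating numbers are made-up illustrative values):

            ```python
            # Three readings of "good food", each an objective fact about the world.
            ratings = {                       # customer -> {dish: liking score}
                'A': {'soup': 9, 'stew': 2},
                'B': {'soup': 3, 'stew': 8},
            }

            def better_for(customer, d1, d2):
                """'Better-for-Customer-X': relative to one customer's likes."""
                return ratings[customer][d1] > ratings[customer][d2]

            def avg(dish):
                """'Better-for-customers-in-general': averaged over everyone."""
                return sum(r[dish] for r in ratings.values()) / len(ratings)

            print(better_for('A', 'soup', 'stew'))  # True: soup is better-for-A
            print(better_for('B', 'stew', 'soup'))  # True: stew is better-for-B
            print(avg('soup') > avg('stew'))        # True: soup wins in general (6.0 vs 5.0)
            ```

            The three answers can disagree without contradiction because each is indexed to a different question – which is exactly the conceptual confusion described above.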

          • Sly says:

            Yes, obviously I would agree that you *can* also evaluate foods or moral claims or many other things by the number of people who also hold that value.

            Popularity can be useful in some cases (maybe in informing me what I expect you will think) but it is not a particularly meaningful metric, just another that can be observed. I don’t see how popularity changes whether or not the stated preferences/values are subjective or objective.

          • blacktrance says:

            You disagreed with my statement that good food is food that customers like. That’s definitely meaningful in a narrow sense (e.g. it describes some part of the world), and whether it’s meaningful (relevant) in a broad sense depends on what question you’re trying to answer. And there are objective metrics about whether customers tend to like a particular food. I don’t understand your objection.

          • Sly says:

            I think one or both of us has misunderstood the other, because now I am confused. It *seems* like you are saying that I was objecting to your definition, when my point is that there is no agent/signifier neutral definition of “good food”.

          • blacktrance says:

            That depends on what we mean by “agent-neutral”. “Good” as “what people tend to like” is dependent on the existence of agents to be instantiated, but is agent-neutral as it’s not determined by the values of any particular agent. But even “good” as “what I like” is objective in a sense because whether I like something is a fact about the world (because it’s a fact about me, and I’m part of the world). For example, using the second meaning of “good”, if I said “Broccoli is good”, I would be objectively wrong because I don’t like broccoli. Agent-relativity and objective “good” aren’t mutually exclusive, as long as you accept “good” as “good for X”, or perhaps more complicatedly as “an ideal hypothetical agreement between agents that benefits those subject to it”.

        • Dennis Ochei says:

          Although I am a non-cognitivist, I wouldn’t want to let a good argument lie unturned. And your analogy really does set the stage. Have you taken the spinach test?

          Moral statements differ from food preferences in that if I like food you dislike, or vice versa, we do not thereby think of each other as terrible, deficient, and in need of rehabilitative or retributive treatment.

          • Sly says:

            The Spinach test would maybe be more relevant if whether or not you liked spinach determined if I and others also ate spinach. If you are going to force me to eat spinach, then your own preferences matter a hell of a lot more.

            I want you to share my values about murder because other human beings sharing my values about murder has huge direct repercussions on the world I live in and on my own values being actualized.

          • Peter says:

            Spinach test: I thought about tastes in music: “I’m so glad I hate the music of Justin Bieber, because if I liked it I would listen to it, and it’s terrible.”

  26. David M says:

    Why does Phil switch to a second-person point of view in the castration paragraph? It seems like an invitation to fantasy, and even more damning if just a slip of the tongue.

    • Peter says:

      Yes, I think that’s one sign among many that there’s a lot of gratuitous nastiness there.

  27. I looked up “error theorist” and now I see Robertson’s point.

    I don’t believe there are any sincere and consistent error theorists in the world. What I think error theorists are actually doing is signaling their rejection of taboo- and authority-based morality by pretending that there are no ethical facts, when in fact their reactions to extreme scenarios such as this one would show that they actually believe otherwise.

    This is what Robertson was trying to demonstrate. He is still wrong to believe that most atheists are error theorists (as evidenced by the fact that I’ve been an atheist and hung out with atheists most of my life without knowing error theory was a thing) but I see now that his argument is not as detached from reality as I previously believed.

    • Carinthium says:

      You are committing an ad hominem fallacy. Claiming “It is impossible for a human not to believe in moral facts” does not in any way demonstrate “There are moral facts”, even if true.

      I’d argue that humans have many cognitive biases, and one of these is a tendency to believe in moral facts despite the non-existence of moral facts. There is nothing inconsistent in treating this bias like other biases.

      • You misunderstand. While I do believe there are ethical facts, that is irrelevant to my evaluation of how error theorists behave and why Robertson has a point.

        All I have to know to see his point is that human beings invariably fall back on hard-wired moral intuitions under sufficiently extreme challenge.

        Or, to put it another way, it is abstractly possible that error theory might be true, but human beings are not capable of behaving in complete consistency with the belief that it is true.

        • Carinthium says:

          There are many cognitive biases humans fall into and which it is impossible to entirely eliminate. Because they really are in human heads, it is impossible for humans to believe with complete consistency that they are cognitive biases, even though they are.

          Established practice amongst rationalists is to try and get around them as much as possible.

          • Airgap says:

            Carinthium teaches you the superman.

          • Carinthium says:

            Airgap – my views can’t have anything to do with Nietzsche, as I’ve never read him.

            I am given to understand he thought it a positive good for the superior to triumph. If so, that’s proof we’re different, as I don’t think the ‘inferior’ tactics are in any way wrongful.

        • stillnotking says:

          “Human beings invariably fall back on hard-wired moral intuitions under sufficiently extreme challenge.”

          Really? I doubt you can name a moral intuition that no one in history has ever violated. People have, in fact, tortured and murdered their own families, or even themselves.

          It’s true that people rarely violate them, but people also rarely eat dogshit; shall we develop the theory that dogshit is bad-tasting independent of observer preference?

          • Irrelevant says:

            You’re both making unfalsifiable arguments. You can’t perceive other people’s cognition, so the debate over whether, say, Scaevola did what he did based on his natural intuition, on unusual rationality, on some unnatural sense of “indoctrinated intuition”, because he was insane, etc. doesn’t have a determinable answer. You likely couldn’t get an accurate answer even if you asked him and he gave his honest opinion of how he had been thinking, because that sort of decision-making is opaque even to the thinker.

          • I didn’t claim that everyone has the same moral intuitions, merely that if you stress any given person enough you will find he/she doesn’t behave like an error theorist. So it isn’t really dispositive that any given moral intuition you can name will sometimes be violated.

          • stillnotking says:

            Irrelevant: I don’t see a non-solipsist alternative to assuming that people’s statements and behavior reveal their preferences. If we can’t agree on that, then we can toss pretty much all of philosophy (not just ethical philosophy!) out the window.

            I have no way of knowing that anyone besides myself is conscious at all. Arguably, not even myself.

          • stillnotking says:

            ESR: That people’s moral intuitions differ is exactly the point of error theory. No moral claim can be true, it can only be a description of preference. There is no epistemological test that distinguishes between “murder is wrong” and “dogshit tastes bad”.

            I, personally, could not force myself to intuit that murder is fine, any more than I could force myself to enjoy dogshit. But someone could.

          • Irrelevant says:

            stillnotking: Claiming that a question is too hard to answer within current science is distinct from solipsism. What decision-making system people use to make choices in a crisis is an empirical question and I believe we’ll learn the correct answer eventually. For that matter, I believe we’ll start to learn the correct answer almost immediately after we learn how to ask the question right.

      • Obrigatorio says:

        What’s the name of the theory that there’s no right or wrong in an absolute sense, but humanity has developed a sense of right and wrong that maximizes the species survival?

        • I’m not aware that it has a formal name, but the SF author Robert Heinlein is well known for having argued this view. You could call it “Heinleinian adaptationism” or “Heinleinian survival-centric ethics” and many people would know what you mean.

    • Protagoras says:

      I don’t know where you looked it up, but among philosophers, error theory as a meta-ethical position is particularly associated with the views of J. L. Mackie. And Mackie’s version of error theory did not maintain that it was impossible for there to be a reasonable version of moral claims, where those claims said something about human conventions or something of the sort; indeed, he advocated taking such an approach (hence the subtitle of his major work on the subject, “Inventing Right and Wrong”). The reason he called himself an error theorist is because he didn’t believe that’s what most people meant when they made moral claims; he believed that most people intended their moral claims in a way that involved false metaphysics, and so that most people who made moral claims were for that reason making false claims.

      • Ah, I might agree with Mackie’s version, then – or at least his objection that most moral realism proceeds from false metaphysics.

        This is closely related to the reason I use the label “ethical realism” rather than “moral realism”. My view is that the development of ethics can proceed from discoverable truths about the universe – we can rationally know how we normatively should behave – whereas I associate ‘morality’ with treating certain normative premises (especially religious premises) as givens not subject to falsification.

        In those terms, I am an ethical realist but reject moral realism.

  28. Douglas Knight says:

    All movement is causing a little bit of time dilation, but if you want to detect it you need the world’s most accurate clock on the Space Shuttle when it’s traveling 25,000 miles per hour.

    Actually, you can detect time dilation at human speeds. When you move a charged object near a magnet and feel it, that’s what you’re observing. And that is exactly how Einstein discovered relativity.
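
    For scale, a back-of-the-envelope check in Python (taking the quoted 25,000 mph figure at face value; the conversion constants are the only inputs):

    ```python
    # Fractional time dilation at the quoted orbital speed (low-speed expansion).
    MPH_TO_MPS = 0.44704
    v = 25_000 * MPH_TO_MPS              # ~11,176 m/s
    c = 299_792_458.0                    # speed of light, m/s
    gamma_minus_1 = 0.5 * (v / c) ** 2   # gamma ~= 1 + v^2 / (2 c^2) for v << c
    print(gamma_minus_1)                 # ~7e-10: the moving clock loses ~0.7 ns/s
    ```

    An offset below a nanosecond per second is why a superb clock is needed to see the effect directly – yet the same tiny v/c factors, multiplied by the enormous electrostatic forces between charges, are what we feel as magnetism.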

    objects thousands of times larger than the Earth

    …like the sun, and its effects on Mercury. But this time Einstein really did it via thought experiments and consistency.

  29. Markus Ramikin says:

    Your attempts to be fair to your opponents sometimes have me facepalming. This is one of those times.

    Yes, graphic thought experiments are fine in principle, for all the reasons you describe. This does not stop this guy’s argument from being terrible, which makes me wince when I see you say that he’s doing moral philosophy “exactly right”.

    • Anonymous says:

      Incontrovertible fact: Phil Robertson is not a trained moral philosopher, so he’s going to make arguments which are flawed… and even arguments which are terrible.

      That said, the response has not been, “Let me describe the flaws in your argument.” It has attacked one specific aspect of it – the extreme nature of the hypothetical. It is this specific aspect which is exactly right in the context of moral philosophy. You even said so yourself. Scott doesn’t seem to have made any general argument that PR’s whole moral argument is non-terrible.

      • Markus Ramikin says:

        I know, Scott is responding to people who responded by attacking the graphic-ness.

        But by saying things like “he’s doing moral philosophy exactly right” he risks over-endorsing the guy, confusing some people into thinking that the whole thing was generally okay, not just that aspect. And from the comments it looks like this indeed happened. I think he should have placed a couple of Go stones to block off that path.

        RCF’s comment seems relevant too. In an ideal world, it wouldn’t be. We don’t live in that world.

        • Anonymous says:

          Any time you speak on any subject, you risk people misinterpreting you. Generally, we make some assumptions on the background information possessed by our audience. Otherwise, we’re always stuck saying, “Let me build up absolutely every Go stone you should have in your head from scratch,” every time, as if we’re always talking to five year olds. When you start drawing a sufficiently large audience in a public venue (as Scott has successfully done), some people will miss the point. We can’t fault Scott for them any more than we can fault Kant for responding to hundreds of years of western philosophy, assuming a fair amount of background knowledge in the process, and thus confusing a whole bunch of random readers.

          It is annoying having to teach freshmen, “No… Kant didn’t mean that,” just as it’s annoying to have to point out to newcomers, “No… Scott doesn’t think that PR is The Moral Philosopher,” but you seem to be on the side of actually understanding him. It’d be better if you’d stop complaining on behalf of those who don’t get it and instead just help the various individuals who miss the point in various different ways to fix their particular misunderstanding. (Likewise, it’s better for us to teach freshmen and point them towards some preliminary material relevant to their particular misunderstanding than to demand that Kant write another thousand pages detailing all the prereqs to understanding absolutely everything he did, with no chance of going off in the wrong direction – which is almost certainly an impossible demand anyway.)

          RCF seems rather uncharitable, too. Even when talking to Christians, atheism and moral relativism (which PR absolutely conflates) are not completely unknown. Not every Christian is a committed Christian. There are many of us who are tempted by atheism and moral relativism, and with good reason – they’re tempting views! Making an argument that if you really adopted moral relativism, this is what you’d have to adopt doesn’t say much about whether atheists are, in practice, amoral and callous. Of course, we agree that it definitely falls in the “generally gives you a worse impression of atheists” category… but I’m pretty sure Scott has a post or two on this phenomenon…

  30. RCF says:

    Nitpick: AFAIK, the most accurate clock is accurate to one second in 300 million years.
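
    Taking that figure at face value – again my arithmetic, not RCF’s – it corresponds to a fractional accuracy of about 1e-16:

    ```python
    SECONDS_PER_YEAR = 365.25 * 86_400             # ~3.156e7 s
    fractional = 1.0 / (300e6 * SECONDS_PER_YEAR)  # "one second in 300 million years"
    print(f"fractional accuracy ~ {fractional:.1e}")  # ~1.1e-16
    ```

    That is some six or seven orders of magnitude finer than the ~7e-10 velocity effect sketched in comment 28 above, consistent with the claim that dilation at such speeds is detectable.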

  31. stillnotking says:

    Error theory is not about whether I think some particular state of affairs is OK, but whether it actually is OK (or, more precisely, whether statements of the form “X is OK” can ever be true).

    Of course I don’t think someone raping, murdering, or torturing my family is OK; I also don’t think pistachio-flavored ice cream tastes good, and according to error theory, those are epistemologically equivalent statements of preference, rather than fact. We are simply very good at fooling ourselves into thinking that moral claims are true and universal in a way that ice-cream preference is not. Presumably this is because we are social organisms, and the intuition that morality exists outside one’s head was adaptive for our ancestors. That doesn’t make it true, any more than our intuitive ideas about physics are true.

    Now, I would love to have this debate with Robertson, but somehow I feel the nuance might be lost on him.

  32. Wrong Species says:

    People in this thread are saying very vague things and using definitions inconsistently. More people should read up on metaethics so they can communicate their ideas better.

    http://plato.stanford.edu/entries/metaethics/

    http://plato.stanford.edu/entries/moral-realism/

    http://plato.stanford.edu/entries/moral-anti-realism/

  33. Ross Levatter says:

    When it’s fictional rape and torture, do you need an ACTUAL content warning, or is a hypothetical warning sufficient?

    • jy3 says:

      <span class="deliberately-missing-the-joke">Given that the same concerns behind content warnings (triggers, moral objection, etc.) apply, I don’t see why it shouldn’t have one.</span>

  34. lmm says:

    > moral philosophers are as happy as anyone else not to lie in the real world.

    Is this true? ISTR reading that ethics professors are less ethical than average in their everyday decisions. Maybe we are right to condemn people who push these thought-experimental boundaries.

    • Irrelevant says:

      I believe that’s (assuming it’s true, which I have no problem with, since it matches my anecdotal experience) a cause-and-effect reversal: the people who find systematic approaches to ethics necessary and/or interesting are the ones with weak intuitions on the subject, and they therefore have a harder-than-average time being ethical in practice, or an unusual interpretation of what that involves.

      • Peter says:

        I thought the effect was fairly slight, but nevertheless studying moral philosophy doesn’t seem to make you more virtuous, as in it doesn’t make you more likely to do things that everyone agrees are good or avoid doing things that everyone agrees are bad – i.e. regardless of what it does for your knowledge of what the right thing is, it doesn’t help you to actually do it.

        One theory is that moral philosophy helps you to rationalize, to sneak bad actions past your own conscience.

        • Protagoras says:

          Or as I’ve often put it, partly based on ethicists I’ve known, just as people go into psychology because they’re crazy and want to understand what’s wrong with them, people go into ethics because they’re evil and want to understand what’s wrong with them.

    • Douglas Knight says:

      You read that moral philosophers are less ethical than the average philosopher. But surely the person who wrote that was a moral philosopher.

  35. Harald K says:

    > there are a bunch of atheists who very much claim not to believe in morality.

    My impression of those people is that they’re edgy posers, and I’m worried that by arguing, I might push them into really taking the consequences of the alien views they’re proposing. (For that matter, I worry a bit about that with the “in-betweeners” too, the ones who assert that there’s ultimately no right or wrong but somehow ultimate moral meaninglessness still fails to trickle down to produce immediate moral meaninglessness.)

    As to Robertson. Now I’m “on his side” I suppose, but I still think that was edging dangerously close to a revenge fantasy. “What if I smashed your head in, huh? What would you do then? Wouldn’t be as big mouthed would you!”

  36. David Moss says:

    All correct, of course, but you don’t (shouldn’t) even really need the bits defending the idea of extreme thought experiments to defend Robertson here. All you need is to observe that the whole point of the thought experiment is that an atheist’s family being raped and tortured is really bad (or at least the atheist should think so): so the one thing he definitely can’t be criticised for is suggesting that it’s a good thing that atheists get raped and tortured.

    • Nita says:

      He seems to be saying: if God didn’t exist, we would have no reason to judge anyone who enjoyed raping, torturing and murdering atheists. I think most atheists might be a little uneasy with this condition on the badness of torturing them.

      • David Moss says:

        “Uneasy” in what sense? Not uneasy in the sense discussed: of thinking that he’s tacitly threatening atheists, or fantasizing about atheists being raped, or otherwise saying it would be a good thing for atheists to be raped (or because they fear that if for some reason he gives up his theological beliefs *then* he’ll start murdering and raping atheists).

        Rather, according to your view, atheists and Robertson seem to be united in worrying that each other’s moral positions don’t take rape and murder seriously enough. Robertson is saying that not being a theistic moral realist means there’s nothing (morally) really wrong with these crimes, and that’s a bad thing. And the atheists (as you portray them, even though this isn’t a criticism I’ve seen atheists widely making of his remarks) think that saying rape is only really wrong if there are theocratically grounded moral laws isn’t taking its wrongness seriously enough (I guess). I think that would be a weird criticism for atheists to make (if they were making it), but either way, if that were their objection, then it would have nothing at all to do with the “extremity” of the thought experiment, which is what my original post stated.

        • Nita says:

          Sure, and that’s why the thought experiment aspect of his speech could have been expressed in a much less inflammatory way. But then he couldn’t have gotten as much gloating out of it, and apparently that was more important.

          Also, it’s an extremely shitty thought experiment. Atheist moralists will go “of course I can judge them!”, various non-moralists will go “judging is useless anyway, but I can sure hate them!” — and no new insights will be gained.

          Of course, it’s possible that this guy is so very ignorant about non-theistic moral views, or so mentally challenged that his honest attempt at a thought experiment turned out poorly.

          But I actually think it’s more charitable to conclude that he succeeded at basking in a graphic illustration of his superiority, rather than failed at philosophical discourse.

          • Nita says:

            Similarly, the purpose of the “Churchill” joke is to give its readers the pleasure of thinking, “Ha ha, he made that ‘lady’ admit she’s a whore! Well done!”, rather than to show a kind person helping another improve their self-knowledge using a thought experiment.

          • Bryan Hann says:

            We have a different sense of what is charitable. I don’t mind someone pointing out that I am untrained, or even speculating that I might be untrainable. But I do mind someone suggesting that I am acting in bad faith.

        • @David:

          Speaking only for myself, as an atheist, I certainly believe that it is wrong to say that rape is only wrong because God said so. Generalized, that’s one of the reasons I consider religion to be dangerous and anti-social, rather than just factually mistaken. (FWIW, I generally try to avoid saying so in front of religious people!)

          It was also pretty much my immediate response to the thought experiment in question: that he had it entirely backwards, that atheists (typically) believe in absolute morality and (devout) theists typically don’t.

          The obvious, simple example is gay marriage. Devout Christians want to outlaw it, because God said so, even though outlawing gay marriage is morally wrong. (I believe Heinlein also once pointed out an example in the Bible where rape was presented in a positive way? Someone offering his virgin daughters to a mob, if they would go away and leave his visitor alone, if I remember rightly. Probably in Stranger In A Strange Land.)

          • Nita says:

            You probably mean Genesis 19. It’s quite an interesting story, from start to finish.

            I remember being grumpy after reading it as a teenager. Sodom is described as a town with a tradition of crowd-raping everyone who wanders in, but people use the word “sodomy” for completely consensual gay sex! Yeah, the man-on-man aspect of the rape welcome parties is what made Sodom so bad. Totally.

    • Matthew says:

      “Nice business you’ve got going here. Would be such a shame if something happened to it.”

      Not every statement is connotationally and denotationally equivalent.

  37. ShardPhoenix says:

    1. I did already know this myself, but others clearly didn’t, and you expressed it better than I could, so this is a valuable post – please don’t be discouraged.

    2. As others have pointed out, Pat Robertson isn’t really arguing in good faith here regardless of the validity of the technique in general. Nonetheless I agree that the world needs less outrage.

    3. IMO as an atheist, morality = intuitive/folk game theory, nothing more or less. There’s no need for any other source of morality than what you can get from evolution and game theory. Corollary – naive utilitarianism is never going to work.

  38. jaimeastorga2000 says:

    I’m one of the people who put themselves down as an error theorist in the survey, so let me see if I can clarify what I mean.

    I don’t think I disagree with you or Eliezer on almost anything substantial as far as metaethics goes. I mean, I think humanity’s collective extrapolated values are probably quite a bit less coherent than you guys are hoping for (though still pretty damn convergent when compared to the space of all possible values), I honestly find Eliezer’s proposed causal mechanism for moral progress downright bizarre (the impression I get from reading several of his works and comments is that he thinks humanity is somehow gaining an increasing understanding of its own meta-values, that this increasing understanding causes object-value change over time, and that the difference between our values and the values of the past is therefore describable as moral progress), and I’m pretty sure we have fairly different utility functions, but I expect that we can all agree that the orthogonality thesis is correct, that the basic AI drives hold for most sufficiently advanced AIs, that human values are the result of evolution, that human rights as used in contemporary debates are incoherent nonsense, etc…

    I just don’t think that moral realist language is the best way to describe that state of affairs. That’s why I like to use the terminology of preferences, values, or utility functions. When I look at the Phil Robertson hypothetical, I think that the best way to describe it is not to say “this is wrong,” which implies some sort of cosmic objective wrongness, but rather something like “this goes strongly against my values.”

    • You say you believe that human values are the result of evolution. I agree, and I expect that anyone who would put it that way probably unpacks the consequences of the premise pretty similarly – that is, that one can derive ethics in a game-theoretic way by thinking about what rules minimize unpleasantness among competing agents.

      What I am puzzled by is why you think this isn’t ethical realism. In what sense does (say) reciprocal altruism fail to be an objective good?
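
      One way to make the game-theoretic reading concrete is the standard iterated prisoner’s dilemma, in which a reciprocal strategy sustains cooperation that unconditional defection cannot. The sketch below is purely illustrative – the payoff matrix and the two strategies are my choices, not anything from this thread:

      ```python
      # Toy iterated prisoner's dilemma: 'C' = cooperate, 'D' = defect.
      PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0,   # (my move, their move) -> my payoff
                ('D', 'C'): 5, ('D', 'D'): 1}

      def tit_for_tat(opponent_moves):
          """Reciprocal altruism in miniature: open nice, then mirror."""
          return opponent_moves[-1] if opponent_moves else 'C'

      def always_defect(opponent_moves):
          return 'D'

      def play(strat_a, strat_b, rounds=200):
          """Total payoffs for two strategies over repeated play."""
          seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
          total_a = total_b = 0
          for _ in range(rounds):
              a, b = strat_a(seen_by_a), strat_b(seen_by_b)
              total_a += PAYOFF[(a, b)]
              total_b += PAYOFF[(b, a)]
              seen_by_a.append(b)
              seen_by_b.append(a)
          return total_a, total_b

      print(play(tit_for_tat, tit_for_tat))      # (600, 600): sustained cooperation
      print(play(tit_for_tat, always_defect))    # (199, 204): exploited once, then punishes
      print(play(always_defect, always_defect))  # (200, 200): mutual defection pays badly
      ```

      Two reciprocators earn 600 each while defectors cap out near 200 – the sense in which a rule like reciprocal altruism “minimizes unpleasantness among competing agents”.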

    • stillnotking says:

      The problem is, saying “Hitler was objectively wrong, defied the cosmic order, and was (or is currently being) punished in some horrific way to balance the scales of the universe” is much more appealing than “I would really rather Hitler had not killed six million Jews, because that goes strongly against my values.”

      At least, it’s more appealing to this atheist/error theorist, which makes me confident that it’s more appealing to almost everyone. And I’m sure it’s exactly the intuition Robertson was tapping.

      • Irrelevant says:

        I actually don’t prefer the first to the second, but that’s because I think unique condemnation of Hitler and Naziism as genocidal has metastasized into a defense mechanism against realistically weighting the flaws of other Western leaders and governments, including the flaws they share with Naziism. There’s a very similar bit of pageantry surrounding Columbus day, in which he’s loudly condemned for *somethingsomethingsmallpox* in order to avoid talking about his governance, which might provoke self-reflection rather than self-backpatting. But that’s off-topic.

      • Eli Sennesh says:

        Non-theistic moral realist here. I think it’s really quite enough that Hitler is dead. Means he’s already suffering something we mostly wish didn’t happen at all, and can never re-offend.

    • ADifferentAnonymous says:

      Does something besides the fear of punishment stop you from stealing?

    • Eli Sennesh says:

      Excuse me, but if you think human values are object-level and meta-level incoherent, what are you proposing to replace them with?

      To which I would say: ah, that’s your values, right there.

      • InferentialDistance says:

        An object-level and meta-level coherent system with minimal sum-of-squared-distance between itself and human values?
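
        Read literally, that is a projection problem. Here is a toy version – entirely my construction, with made-up numbers – in which the raw intuitions are noisy scores over situations, the “coherent system” is constrained to a simple linear family, and the fit minimizes the sum of squared distances:

        ```python
        import numpy as np

        # Five hypothetical situations scored on two features (say, harm and
        # fairness), plus raw, not-quite-consistent intuitive judgements.
        features = np.array([[0.9, 0.1],
                             [0.2, 0.8],
                             [0.5, 0.5],
                             [0.7, 0.9],
                             [0.1, 0.2]])
        intuitions = np.array([0.95, 0.40, 0.60, 0.90, 0.05])

        # The "coherent system" is any linear weighting of the features;
        # least squares picks the one closest to the intuitions.
        weights, *_ = np.linalg.lstsq(features, intuitions, rcond=None)
        coherent = features @ weights
        print("weights:", weights)
        print("sum of squared distance:", float(np.sum((coherent - intuitions) ** 2)))
        ```

        Of course, all the philosophical work hides in choosing the coherent family; the least-squares step is the easy part.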

        • Eli Sennesh says:

          Sure, but for purposes of discussion, we might as well relabel what you’re calling “human values” as “human moral intuitions” and address the minimal-distance construction as “human values”.

          What I’m trying to say is, there’s no meta-level philosophical position that can exempt you from having to make object-level moral/evaluative decisions in your daily life – everything from whether to give charity to whether to take out your stress on coworkers – so you might as well bother to reason one step past “maybe it’s all incoherent!”-flavored nihilism and have a framework with which you can evaluate heuristics and thus address your daily needs.

  39. DiscoveredJoys says:

    I’m reminded of the debate about ‘torture being absolutely wrong’. If you accept that there is an edge condition (the hidden nuclear device) where torture is always illegal but sometimes, very rarely, necessary, then you have dispensed with the certainty of holding to absolutes.

    Of course you can argue that setting ‘absolutes’ helps prevent the slippery slope of relativism, but that is a different argument.

    • stillnotking says:

      That debate also reveals the weakness of thought experiments: they provide information you would not actually have in the real world (e.g. that the terrorist definitely knows where the device is, that torture is the only/best way to get him to reveal it, that you can torture him sufficiently to prevent him from sending you on a wild goose chase, etc.).

  40. Peter says:

    Error theory: one common thing I see is the morality-1/morality-2 thing, whereby you say “morality-1 is what everyone means by morality, and doesn’t exist/is a broken concept. Morality-2 is some defensible revisionist concept, which isn’t quite what everyone means by morality, but it’s sort-of close-enough.” So in practice some error theorists come out as being far less nihilistic than a textbook description would suggest.

    See for example Joshua Greene’s thesis, and I think I read some suggestion somewhere that Mackie was doing something similar.

    • Irrelevant says:

      My main complaint with those efforts is that a lot of them smack of motivated reasoning, where Morality-2 conveniently happens to conclude in Morality-1 minus the parts the speaker wasn’t fond of.

    • Tim Martin says:

      Ah, I’m glad you mentioned Greene!

      One of the things that’s confused me as I read the comments is why so many people are asking, or thinking about asking, moral non-realists whether such-and-such terrible thing is “wrong” without saying what they mean by “wrong.”

      I think Greene deals with this very well when he asks “in what does the truth of your belief consist?” What observations would confirm that, say, lying is wrong? And what observations would falsify it?

      As a moral non-realist, the first thing I do when someone asks “so you really don’t think rape is wrong?” is ask them what “wrong” means, and what observations would make that statement true. I’m surprised that a lot of people commenting here don’t see the need to specify that…?

      • Peter says:

        “wrong” – Parfit has a memorable turn of phrase – well, lots of memorable turns of phrase, but one is “wrong in the sense of blameworthy, unjustifiable to others, and an act that would give us reasons for remorse”.

        Part of the problem is a general problem with definitions. Some philosophers seem to have taken the approach of settling on a few primitive undefinable terms – ISTR Sidgwick went for “ought”, whereas Moore went for “good”. I once saw a nifty approach that sets up a whole network of relations between terms such as “good”, “wrong”, “virtue”, etc., and effectively uses that to define all of the terms at once: http://plato.stanford.edu/entries/naturalism-moral/#JacMorFun

        Of course, difficulty with definitions is very old. You’ve heard the story about Diogenes, Plato, and the definition of a man?

      • I am an ethical realist and have an answer to Greene: “wrong” is that which damages your future ability to sustain positive-sum interactions with others.

        • Tim Martin says:

          Given that definition, I would totally agree that lying is often wrong. And sometimes right, if you look at lies that have positive consequences for people you would care about.

          But you wouldn’t expect me to say otherwise, would you? Given that I’m a moral non-realist? I mean it seems like many moral realists and non-realists are like free will compatibilists and determinists – we believe the same things about reality; we merely differ in what words we use to describe these truths. You and I both believe that some lies damage our ability to sustain positive-sum interactions; I just wouldn’t use the word “wrong” to capture it.

          The reason I say this is because of the incredulity some people seem to be treating moral non-realists with. “You mean you don’t think rape is wrong?” Given a falsifiable definition of “wrong,” this is a fairly easy question to answer, and I’ll bet most realists and non-realists give the same answer. It just seems like a lot of people are talking past each other for no good reason…

    • Protagoras says:

      Yes, that was certainly what Mackie was doing.

  41. Alphaceph says:

    The colourful scenario described (rape, torture, etc) just boils down to “another agent is taking an action which increases his utility a little and decreases my utility a lot”.

    If you believe that there’s no “cosmic” sense of right and wrong, you have to concede that there’s no “cosmic” “universal” sense in which it is “wrong” for one agent to decrease the utility of another. It’s not written in the stars.

    However, if you *are* the agent under “attack” like this, then from *your* moral/axiological viewpoint, it is definitely wrong.

  42. Peter says:

    A sequel to the Robertson thing, to try to bring out the cosmic aspect:

    Thousands of light years away (or however far is needed to make the physics work), there’s an alien civilisation. FTL travel is impossible, and manned (or aliened) interstellar space flight is never going to be practical. The aliens have their equivalent of the SETI program, and the sum total of what they know about human civilisation is the incident above, plus enough background to decode the language. This was due to some freak alignment of stars and planets creating just the right gravitational lens to get decipherable signals to their planet, a circumstance never to be repeated.

    So how do the aliens feel about this? How do you feel about how the aliens feel about this? Personally I’d hope the aliens would think the attacker to be evil, the attack to be an evil deed, and to feel sorry for the victims. This feels… “cosmic enough” for me.

    Can I hope for this? Do I need to posit a Platonic realm or God or other single source of morality to hope for this? I don’t think so. Suppose, on a more mundane level, I think there’s uranium on distant worlds. As far as I know, no uranium came from the Big Bang. Our uranium, so far as we know, came from a supernova. Other worlds, sufficiently far away, would have had to get their uranium from a different, unconnected supernova. So you can have the same stuff on distant worlds without there having to be any single source of that stuff…

    This is going down the meta-ethical path of ethical naturalism, and I’m sure there are well-worn arguments against it – without loss of generality, I think there are well-worn arguments against just about any meta-ethical path people think about reasonably often. Shrug. As Parfit says, moral philosophy is still a young discipline.

    • Carinthium says:

      Out of curiosity, what arguments do you see against moral error theory? I’d argue most of them are illusionary artifacts of philosophers overrating the role of intuition.

      • Peter says:

        Oh, erm, I forget. In fact I… don’t exactly “lean towards” but am intrigued by the morality-1/morality-2 style of error theory.

        One influence: I tend to believe in mathematical realism – at any rate, truth-value realism; at any rate, these-theorems-really-follow-from-these-axioms realism (is that even a thing?). A lot of theories of morality can sort-of be interpreted as saying that morality is like a form of symmetry – this is most notable with the Golden Rule, but even the impartiality of utilitarianism feels a lot like translational symmetry to me, like the symmetry in a crystal. I like to draw a distinction between small-kernel morality and irreducible-list morality; I lean strongly towards the small kernels, and tend to think that “morality” refers to the kernel and not the list of things that flow from the kernel and its interactions with everything else. Possibly I might have to call the kernel “morality-2”. I certainly have no time for an intuitionist irreducible-list approach – I think that would force me into full-on anti-realism.

        Oh, that’s another one; the way the Golden Rule or minor variants thereof keep cropping up everywhere.

    • Eli Sennesh says:

      Actually, the most well-worn arguments against ethical naturalism are just, “But it’s so counterintuitive and doesn’t make morality into a rewarding-and-punishing authority figure! Waaaaah! And just because something is perfectly egalitarian reflectively-coherent niceness doesn’t mean it’s good, because Moore’s Open Question Argument means you can never analyze a moral term into a descriptive one! Yes, even though you just did! Waaaah!”

      You decide how much I’m actually straw-manning here. To me, the arguments against ethical naturalism are just about rationalizing some people’s sense that a Platonic Realm of Ethics, a non-naturalistic component to the universe, is just more emotionally satisfying. It’s motivated reasoning for the sake of a kind of “Cantor’s Diagonalization” emotion which insists that nothing non-spooky can be a satisfying way to talk about morality. But, not quite by definition if you remember people once really believed this stuff and most of their intuitions still point at it, we’ve found we live in a universe without any spooky irreducible emotion-stuff!

      To which I would reply: what I find emotionally satisfying is a morality I can actually use to get along in my real life, with real people, without having to spend literally eternity Contemplating the Mysteries and without having to ground my morality in nothing more than the System 1 intuitive judgements of such a flawed, idiotic mind as mine (as today’s mainstream Cornell and Intuitionist Realists actually do say I swear).

      • Peter says:

        Oh, erm, yes, it’s all coming back to me now, well some of it is.[1] There’s this distinction between analytic naturalism and non-analytic naturalism, the latter is associated with the scientific discovery analogy, and ISTR Parfit thinks it evades the Open Question argument but that there’s something else wrong with it. I remember not being entirely convinced, not about the latter part at any rate.

        Moore has this interesting comparison of good and yellow, and states that both are undefinable. I’m not sure I buy his specific reasoning, but I have this interest in colour and colour terminology and the semantics of colour terminology, and there’s lots of fascinating stuff there, and some of that might make for interesting analogies with morality.

        [1] On What Matters is _huge_…

    • Alphaceph says:

      What if the aliens had developed an honor/strength based moral code where, if you could beat someone in combat, you were expected to kill them in a humiliating way, and they saw this as perfectly right and natural?

    • satanistgoblin says:

      But we are just projecting our values on the aliens, so it is a pointless exercise. By the way, even humans think about violence in other species in terms of “well, that’s what happens in nature”, not in terms of good and evil.

  43. I’m curious about how far it makes sense to take using vivid scenarios in hypotheticals. There’s someone (no longer a friend of mine) who hypothesized a scenario in which he raped me at an event we both go to. I forget where the rest of the argument went – something about me not remembering the rape, I think.

    My reaction was to wonder how he could be so inconsiderate as to say something like that to me, not to feel personally threatened. Still, this left me feeling rather less inclined to want to interact with him.

    • Peter says:

      Yes – I think that’s going too far, and is also too personally targeted. It’s the sort of thing that makes me say “I’m sorry” and “what a jerk!”.

      The Robertson example… unlike Scott I think there’s some gratuitous nastiness there, on the other hand I couldn’t bring a media storm down on his head, that would be disproportionate and against the Golden Rule.

  44. Vaniver says:

    Quote Investigator on “haggling over the price.” (Moral of the story: was originally about “Lord Beaverbrook,” and is attributed to many, including Churchill.)

  45. ADifferentAnonymous says:

    This might be a good time to point out that Robin Hanson’s infamous GSR post (content warning: rape, questioning the moral severity thereof) is exactly the kind of thought experiment we need if we’re going to take the concept of bodily autonomy seriously.

    (I’ll admit Robin would have done better to bring it up in isolation from his cuckoldry thing, though there’s valid philosophy hiding in there too.)

  46. UncommonMurre says:

    Phil Robertson is arguing against a strawman in implying atheists can’t believe in an objective morality, and divine command theory is just as much a strawman of the Christian position. Christian theologians were classicists all along and were familiar with the Euthyphro dilemma. Christians are normally natural law theorists, not divine command theorists; the short version is that we believe that God is morally good by nature of what He is, and therefore He is incapable of giving immoral commands. A command that is immoral and purports to come from Him, therefore, must be an imposture. 1 John 4:1: “Beloved, do not believe every spirit, but test the spirits to see whether they are from God, for many false prophets have gone out into the world.”

    The Wikipedia entry for St. Thomas Aquinas’s position is a pretty good summary:
    https://en.wikipedia.org/wiki/Euthyphro_dilemma#Aquinas
    I think it may interest some people here because Aquinas basically claims that God is perfectly good because He is an omniscient utilitarian.

    If Phil Robertson is a divine command theorist, it means he’s not a very good Christian theologian. To be fair, though, the distinction can get pretty fuzzy when it comes to the idea of limited humans trying to outguess an omniscient God on moral implications, and a natural law theorist can sound a lot like a divine command theorist when confronting a command that the theorist doesn’t understand.

    • randy m says:

      The example of Abraham and Isaac was brought up above. Though revived soon after, is this an example of God giving an immoral command, as an object lesson, perhaps?

      • UncommonMurre says:

        The example of Abraham and Isaac has a lot of complications when discussing divine command theory. The main one is that the Bible says specifically God was testing Abraham, and so God told Abraham to stop before harming Isaac. (Isaac wasn’t revived; he was never sacrificed; see Genesis 22 for the story.) If an immoral command is given that the commander intends to rescind before it’s carried out, as a test or an object lesson, has he really given an immoral command? I would say this is thus not an example of an immoral command. Also complicating the issue, human sacrifice was commonplace in Abraham’s time and place; Christians and Jews say it’s immoral, citing this event among other references, which has spread an assumption of its wrongness throughout the civilized world, but Abraham would not have known any good reason to say so. If human sacrifice in fact worked as advertised in the Bronze Age, bringing the imagined great benefits to all others, it would not only be moral but required to make such sacrifices, according to a utilitarian moral position.

        A better example (harder for me as a Christian) to discuss divine command theory is the example of the Israelites being required to exterminate the Canaanites completely when taking over Canaan, which the Israelites were very uncomfortable with and did not complete. As a Christian, I have to take on faith that everything would have been better in an eventual utilitarian sense had this command been carried out; certainly there are many bloody wars in ancient history that would have been avoided, and there would be no Palestinian conflict today in the current sense. However, since the New Testament, God has restricted Himself from giving such commands, because we are to evaluate the ‘spirits’ against the Bible (which of course did not exist at the time). If He wants something done that appears to be against the rules, He’ll do it himself.

        But, for the sake of philosophical argument, at the time, was this an immoral command? Certainly I would consider it immoral for a human commander to give it. As I said, I take on faith that it was justified in a utilitarian sense; it was something that one can morally do if one is omniscient and not otherwise.

  47. Jim says:

    Since I’ve seen this mistake made in several comments above, I think it bears pointing out that Phil Robertson and Pat Robertson are actually different people. (Although I imagine they would agree on a lot of points.)

  48. Chris Wegener says:

    The example given, attributed to Robertson, is misleading and wrong on two points.

    First, in what way would this scenario be any less horrible if you replaced “atheist” with “Christian”, “Muslim”, or any other deeply held belief? Is the example seriously supposing some divine intervention that would prevent the mayhem?

    Second, as an atheist I neither lack nor give up the right to a finely honed and deeply held moral sensibility. That I do not believe in an external super-being as postulated by many religions does not mean I think there is no right or wrong.

    I believe that our sense of right and wrong is innate in all human beings and is structured by our culture and our world. I need no god to help me recognize that the scenario described is Wrong, and that no legitimate belief system can ever make it right.

    Yes, I can clearly and definitively say what belief system is right and should be universally followed without resorting to imaginary friends or ancient texts.

    • randy m says:

      A moral sensibility implies that there exists a morality that you are using this sense to discern, some reality that exists apart from you or others. I hope you do have this, but I’m not sure how you would explain what it is as a materialist.

  49. Dirdle says:

    Hmm, really nice piece. “High-Energy Ethics” sounds like a Culture ship name, or a QNTM chapter title.

    To extend the analogy – physics did allow the creation and use of nuclear weapons, albeit not by physicists. Maybe the concern here is that ‘weaponised’ edge-case ethics will end up in the hands of people more likely to ‘use’ it than the curious boundary-pushing creators would prefer?

  50. Kaura says:

    The thing is, even if extremely disturbing thought experiments are a useful tool in clarifying ethical intuitions, they are also a useful tool in hurting your opponent (by forcing them to either engage with ideas that make them feel very bad, or quit the debate and practically admit defeat – but just because the discussion was uncomfortable, not necessarily because they didn’t have good counterarguments).

    It’s often pretty obvious when someone mostly wants to harm their opponent in this way, and more importantly, it doesn’t contribute anything to the discussion that just a mention of torture-murder wouldn’t do. After a certain point, a more intense description doesn’t make the argument significantly more powerful – the “murderer at the door” example in response to Kant is a good example: it’s completely sufficient to mention a murderer without going into detail, because everyone already understands it’s an extreme situation. Likewise, in the dust speck argument, would someone who already reasoned they support torture instead of dust specks change their mind just because the torture is described in more detail, such as in the Robertson example? No, but it would probably make them feel pretty bad.

    I mean, Robertson could have described his torture example in very little detail, just to present the argument to his opponents – which would have been moral philosophy done right – or he could have written an illustrated book explicitly describing two hundred similar torture-murderous things he thinks atheists probably endorse, which would have made people even angrier while contributing nothing valuable to the discussion, and would probably not have been praised as a marvelous future classic of Western philosophy either. People got angry at him because they judged his description excessively horrible just for the sake of making them feel bad, like this hypothetical book.

    Of course, the claim I’m making (“describing torture-murder in colourful detail isn’t significantly more effective in arguing and reaching stable conclusions about ethics than simply mentioning torture-murder”) is empirical and can be tested.
    I can’t volunteer for that myself though, because a) I don’t want to feel very bad for hypothetical people right now, and b) I have apparently written “torture-murder” so many times that it has lost its meaning. Achievement unlocked, etc.

    • Raemon says:

      This was my thought. The way he phrases things – “their little atheist daughters”, etc. – doesn’t feel like philosophy for philosophy’s sake; it feels like angry, hostile rhetoric.

  51. Pilgrim of the East says:

    Although I’ve met (on the internet) a couple of atheists who said there is no right or wrong, most atheists subscribe to some variant of utilitarianism. That said, here is my extremist anti-utilitarian thought experiment, which imho would be far more useful for Phil Robertson:

    A sociopath escorts his totally shitfaced female friend home so she can crash at his house. As soon as they arrive, she falls, effectively unconscious, onto the bed. The sociopath is sexually frustrated, so he decides to seize this great opportunity to have sex with his unconscious friend, as he’s sure she won’t wake up, and that if she did wake up she wouldn’t remember anything anyway. He uses a condom, has no STDs, and makes sure to arrange everything so she can never find out what happened; she didn’t open her eyes or otherwise indicate any change in her sleep at any point during the incident. He’s a sociopath, so he’s not going to have a guilty conscience, and he’s smart enough never to tell anyone.
    In the morning she wakes up unaware of anything, thanks him for being such a great friend and taking care of her while she was drunk, and leaves.

    Was what he did wrong or right?
    It seems that his well-being increased and hers didn’t change at all, so it was definitely the right thing to do, wasn’t it?
    (thought experiment is based on http://jezebel.com/idiot-boston-student-upsets-everyone-with-fucked-up-fac-1440142699 ).

    I consulted two self-declared utilitarians about this scenario – one invented many objections, all of which I rebutted by fine-tuning the scenario, and finally said that it was a good thing to do. The other said that there are some things that are inherently wrong, but wasn’t able to say why.

    • I can say why. We must regard this behavior as normatively wrong because if we did not, a lot of people would feel licensed to do things that would damage their future ability to sustain positive-sum interaction with others.

      I further propose that we should think about this hypothetical with the same tools and categories we use to think about cruelty to animals. (The friend is temporarily non-sophont, and by hypothesis will never respond to the incident as a sophont.)

      • Pilgrim of the East says:

        That seems like a kind of circular reasoning to me – it’s wrong because if it wasn’t, people might start to do this thing which is wrong. Do you know any other case of an act being wrong just because people who considered it right (or non-wrong) might start to do things which are seriously wrong? I don’t think this would be a strong enough argument for me.

        Concerning animals, I believe that utilitarians usually attribute the ability to experience increases and decreases of well-being even to non-sophonts, but their well-being matters less to them.

        • blacktrance says:

          > That seems like a kind of circular reasoning to me – it’s wrong because if it wasn’t, people might start to do this thing which is wrong.

          That’s not his argument. More accurately, it’s “We should consider it wrong because if we didn’t, some people would start doing similar things that really are wrong”. It’s the same principle behind a command like “Don’t shoot in random directions even if you don’t hit anyone”, because someone trying to shoot in random directions without hitting anyone would still probably hit someone.

          • Pilgrim of the East says:

            I got that (as is evident from my second sentence), but do you really think that your shooting argument is valid? Is it wrong to shoot at an apple on someone’s head in a circus just because people may try it at home?

          • blacktrance says:

            It depends on to what degree allowing harmless-by-itself action X (gentle silent rape, shooting in random directions, shooting at an apple on someone’s head in a circus) is likely to lead to harmful action Y (more archetypal rape, shooting people). In the case of shooting at an apple, it obviously doesn’t, so it’s okay. But there are other Xs for which the related Ys really are made more likely.

          • Bryan Hann says:

            The sociopath taking the care he did in the situation makes this sound *not* like “shooting in random directions”.

    • Another line of argument is that, in the real world, it’s impossible to be that sure of not having left any evidence. Maybe she woke up just enough to notice something was going on. Maybe a phone in her pocket had recording activated by sound and motion. Maybe he had an STD he didn’t know about.

      • Pilgrim of the East says:

        These kinds of objections were what I meant – I fine-tuned the scenario until the utilitarian I discussed it with said it wasn’t actually wrong (e.g. clothes that couldn’t hold a telephone, a victim with a special sleep condition, maybe a hypnotist rapist, etc.).
        But what if we just evaluate it after the fact? After all, the rapist is a sociopath; he doesn’t care about right or wrong – we are the ones who care.

    • darxan says:

      Steven Landsburg got in trouble for proposing just such a thought experiment. If protesters appear outside your place with “Rape is not hypothetical” placards, don’t say I didn’t warn you.

      • Pilgrim of the East says:

        Well, it seems that I’m not as original as I thought ;-(.
        It would be fun, though, if some SJWs actually flew here to Eastern Europe just to picket in front of my house (this kind of activism doesn’t exist here at all).

    • ADifferentAnonymous says:

      With enough shoring up of the hypotheses, this works against hedonic utilitarianism, but preference utilitarianism can recognize preferences about things other than your own mental states.

      • Peter says:

        I think also if you go for universal-acceptance rule utilitarianism – i.e. “follow those rules which, if universally accepted, would maximise utility” – then that gives you another way out. A rule that said you were allowed to do that sort of thing would create a lot of distrust and a lot of worry.

        I need a shower for even dignifying this with debate.

        • Pilgrim of the East says:

          Well, maybe utilitarianism actually can’t help creating distrust and worry? Just imagine you were fat – I’m pretty sure you wouldn’t get closer than 50 yards to a bridge under which a potentially unstoppable tram passes. Or that you were a healthy man in a hospital where 10 people are waiting for transplants…

          • Peter says:

            Well, what you’re calling “utilitarianism” is act utilitarianism; this will give you a run-down on the many variants of rule utilitarianism. Sections 6.2 and 6.3 are particularly focused on the variants I’m talking about – although if you don’t mind spending lots of time and money, Derek Parfit’s _On What Matters_ is my real source.

            One common problem people note with act utilitarianism is that it is “self-effacing”, in that if you are an act utilitarian, then if you’ve done your job right you end up deciding that the right thing to do is to avoid promoting act utilitarianism, or even to avoid believing in it. (Except a lot of people forget to include the “act” in the description above.)

          • Irrelevant says:

            What I most commonly see is a bicameral approach: organizations and powers should be (very cautious and analytic) act utilitarians, individuals should be virtue ethicists.

      • Pilgrim of the East says:

        thanks, I wasn’t aware of preference utilitarianism

    • Princess Stargirl says:

      Assuming he was a magical sociopath who knew in advance that the GF would never find out (or get an STD, though this requires less magic), then the sociopath’s actions were actively virtuous. No one else is harmed, and the benefits to him still count.

      Of course in the real world no one has magic like this.

      • Jaskologist says:

        If the morality turns on the girl finding out and objecting, then really she is at moral fault here, as she is the one who is creating the harm. She could easily choose not to object, thus increasing the sociopath’s well-being. Really, it’s the denier of sex that is the true monster here.

    • Peter says:

      “most atheists subscribe to some variant of utilitarianism.”

      Really? I looked up the PhilPapers survey (relevant page here), and OK, it’s philosophy profs rather than the general population, but only 32% subscribed to consequentialism – and there’s a lot of non-utilitarian consequentialism out there.

      • Nornagest says:

        I really doubt most atheists subscribe to any well-defined ethical framework at all. For that matter, I really doubt most theists subscribe to any well-defined ethical framework at all.

        • Peter says:

          Pretty much my thoughts. Oh, and I was insufficiently pedantic, when I said “subscribe to” I meant “accept or lean towards”.

          • Pilgrim of the East says:

            yeah, I worded it badly – actually I wrote that comment twice, because I accidentally deleted half of it, and the first time there was a parenthetical (“who actually took time to think about it”).

            Concerning consequentialism, I’m not really knowledgeable about philosophy, but I always thought that in the end it boils down to the same problem of comparing two alternatives, which utilitarianism seems to manage best (at least judging by its popularity).

    • Patrick says:

      This is why we have rule utilitarianism.

      Presto chango, problem solved.

    • RCF says:

      Here’s another uncomfortable question: suppose you find out about this. Should you tell the woman what happened? If we grant the hypothetical that the woman was not physically harmed by this, then the only harm is the emotional trauma of knowing that this happened … which won’t exist if you don’t tell anyone about it.

    • Jiro says:

      Actual human beings have preferences which include some level of aversion to blissful ignorance. Utilitarians have a hard time dealing with this because they tend to measure utility by happiness. Measuring utility by happiness doesn’t allow someone to gain or lose utility from something they don’t know about (unless it affects something they do know about). Furthermore, measuring utility this way means that utility runs contrary to revealed preferences. (People would be willing to pay some non-zero sum to the police in order to find, report, and prosecute this type of rape.)

  52. Peter says:

    The PhilPapers survey is interesting here. I looked up the correlations for Moral Realism, and there’s a correlation with theism, but it’s down at about 16th in the list – it seems that theists skew strongly realist, atheists less strongly but there’s still a realist majority (try flipping the contingencies to see this). r = 0.176, i.e. “small” by conventional “T-shirt sizes”.
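
    For anyone who wants to see how a 2×2 correlation like that behaves, here is a sketch with made-up cell counts – not the real PhilPapers numbers – chosen so that both groups keep a realist majority while theists skew harder, landing near the reported effect size:

    ```python
    import math

    # Hypothetical counts:        realist  anti-realist
    theist_realist, theist_anti   = 75, 25
    atheist_realist, atheist_anti = 60, 40

    a, b, c, d = theist_realist, theist_anti, atheist_realist, atheist_anti
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    print(f"phi ~ {phi:.2f}")  # ~0.16: "small" by the usual effect-size T-shirts

    # "Flipping the contingencies": condition both ways round.
    print(f"P(realist | theist)  = {a / (a + b):.2f}")  # 0.75
    print(f"P(realist | atheist) = {c / (c + d):.2f}")  # 0.60
    ```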

  53. Jiro says:

    I think that Scott is looking at Phil Robertson’s literal words and ignoring context, implication, and connotation. It is possible to parse what Phil Robertson said as a thought experiment which questions the logical consequences of an atheistic position.

    But even though his literal words have the form of such a thought experiment, that’s not what he’s doing. He’s stringing together a set of applause lights meant to tell his audience that he fantasizes about the outgroup getting punished for being the outgroup in a way that is their own fault.

    It is a scourge of the Internet that people are too literal. Scott is falling victim to this trend here. The way Phil Robertson phrased that, and the circumstances surrounding it, make it very clear that it is not just a thought experiment, even if you can take it apart and say “well, a thought experiment has A, and B, and C, and Phil is also using A, and B, and C, and in exactly the right order.”

    • Ben Anhalt says:

      I agree with this assessment. It seems like a lot of people here are missing the point. In fact, Scott’s trigger warning gives the game away, if he would only notice it. Robertson is not trying to make some reasoned philosophical point; he is waging rhetorical guerrilla warfare. I’m pretty sure the main goal is not to satisfy his own fantasies, though that may be part of it. This maneuver is designed to make atheists appear weak and morally tepid to the audience. The vividness of his description is like a glove slapped across the face of a rival. It makes any measured or thoughtful response from an atheist seem robotic and cowardly. This same tactic is often used by death penalty proponents: “How would you react if someone in your family was murdered?” If the respondent sticks to their principles, they seem inhuman. If they avoid that trap, they seem hypocritical. In other words, if you are “triggered” then you are unprincipled. If you are not “triggered”, you are not human like everyone else.

      This dynamic is a lot like what Rao describes as the curse of development: http://www.ribbonfarm.com/2010/04/14/the-gervais-principle-iii-the-curse-of-development/

      • Irrelevant says:

        An excellent argument that Scott’s position on trigger warnings is inconsistent with the rest of his views, but not against the rest of his views. The purpose of the supererogatory anticensorship stance is to render that catch-22 invalid, training people to think that unpleasant circumstances lead to unpleasant answers but that finding correct answers even when unpleasant is praiseworthy, rather than that anyone who entertains them is A Monster Who Must Be Purged.

        There was a similar argument after the Charlie Hebdo massacre, with a lot of people complaining that the “for every Mohammed cartoon you don’t print, I’ll print three” position was needlessly unkind, when breaking the association with unkindness was the whole point.

        We cannot allow support of free thought to become associated with brutality, even if many of the people who most naturally require its protection appear to also be brutes. Scott is doing his job and Reason’s work here.

    • Bryan Hann says:

      As always: Clear to whom?

  54. J. Quinton says:

    I’ve noticed a general tendency for people to reject the very practice of thought experiments when those experiments serve the arguments of their enemies.

    E.g.,

    Enemy tribe: “Imagine if people could fly, what would be the implications?”
    Friendly tribe: “This is pointless because people can’t fly”

    Enemy tribe: “Do dark horses dream of night mares?”
    Friendly tribe: “Why do you want horses to have nightmares so badly?”

    I’ve even had an exchange that went something like this with a friend in meatspace because she didn’t want to lose an argument:

    Enemy tribe: “Bro, do you even hypothetical scenarios?”
    Friendly tribe: “No”

    • Jiro says:

      I find the term “enemy tribe” here vague, because it may be used to refer to situations with less or more personal hostility. And your examples all lean towards the “less” side, while Phil Robertson’s leans towards the “more”. And this hostility matters to how we should read the hypothetical.

      • J. Quinton says:

        Well, yes, hostility matters when people read the hypothetical, but it shouldn’t. That’s the point I was trying to get across, especially framing it with “enemy tribe”.

        If one reads someone as being from an enemy tribe, then (from what I’ve noticed) people are less likely to grant their enemy’s hypothetical as valid, and furthermore will reframe it in one of the three ways above.

  55. Shenpen says:

    I am a stupid man. I really don’t understand what people mean by believing or not believing that there “is” an objective morality. I am a moralist because I am a cynic. That is, I don’t expect morality to be “just” or “fair” or “cosmically meaningful” or that people “really deserve” some reward or punishment, or maybe they were punished for something they “couldn’t help”, or anything like that.

    I just know that people react to incentives. And when those incentives make people behave the way we can more or less agree they should, and yet are not too disproportionate (flaying pickpockets alive would work, but would be disproportionate), we can call it a good thing.

    Expressed moral disapproval or judgement is a form of incentive. Hence I believe in morality in the sense that it is useful for me to sometimes say “I find this evil.”

    This is really stupid and simple. Also, this is one of the cases where I think Eliezer is not fully right and there is such a thing as second-order rationality. Moral judgements need not be “true”, because then you have all these quagmires. They just need to work, they just need to incentivize people efficiently.

    I believe in morality because I use morality entirely connotationally, as a motivation tool, and do not care if it is true, because that question raises a horrible philosophical quagmire.

    Let’s call it moral cynicism.

    • While your formulation is (deliberately) crude, it is not very dissimilar from my position.

    • Johannes says:

      It does not help to use iterations of “real” or terms like “cosmic”. Objective morality just means that the sphere of moral judgement is irreducible to anything else. E.g. “torturing innocent people is wrong and therefore should be avoided” is not necessarily grounded in any deeper moral principle (it might be, but this is beside the point; so is whether the morality should be Kantian, utilitarian, or whatever). It is just true, full stop. To ask for empirical evidence is basically misguided (although of course there is some, in the same sense that people tend to agree about the result of 2+3, but not necessarily about some really complicated maths problem).

      The best (longish and involved) defense I know (I am not a moral philosopher) is this one
      http://cas.uchicago.edu/workshops/wittgenstein/files/2007/11/dworkin-objectivity-and-truth.pdf

      • blacktrance says:

        > Objective morality just means that the sphere of moral judgement is irreducible to anything else.

        Constructivists and naturalists would disagree with this statement while believing in objective morality.

      • RCF says:

        “Objective morality just means that the sphere of moral judgement is irreducible to anything else.”

        It seems to me that that is the opposite of the typical use by theists. Theists say, “When I say murder is wrong, that’s based on God saying it’s wrong. When you say murder is wrong, that’s not based on anything but your personal feeling that it’s wrong.” So in Divine Command Theory, morality reduces to what God says, while the so-called atheist morality is irreducible.

  56. InferentialDistance says:

    > So let me use whatever credibility I have as a guy with a philosophy degree to confirm that Phil Robertson is doing moral philosophy exactly right.

    I sort-of agree. On the one hand, there’s nothing incorrect about using a grotesque illustration to make your point. On the other hand, his point is incoherent. Adding objective morality doesn’t stop the murderer-rapists from murdering and raping. They don’t spontaneously combust upon attempting rape; they do not get struck by lightning for attempting murder. The atheist’s family is still dead at the end of the day.

  57. Patrick says:

    1. The consistent response for the moral error theorist is that he doesn’t see anything objectively wrong, but he certainly feels that it is wrong, because nothing about acknowledging the subjectivity of normative feelings requires you to stop having them. This is no different from the way that acknowledging that “tastiness” is not an objective trait of food but rather a personal relationship to food in no way requires you to feel ambivalent between eating a steakburger and eating marine snow.
    2. I am not restricted, in responding to people like Robertson, to responding only to his words. I am perfectly capable of considering them in context. And when I do, I notice that even though error theory is perfectly clear on this, its critics keep trying to use lurid, self-indulgent scenarios to emotionally blackmail error theorists into making inconsistent statements, in precisely the way error theory predicts we can be emotionally blackmailed. And that tells me that these philosophy-degree-possessing people, who presumably should know better, are being intellectually dishonest jerks. I’m allowed to respond to that the way they deserve.

    TLDR, the admission this guy is after doesn’t prove what he wants it to, and if you think it does, you should give back your philosophy degree.

    • Patrick says:

      Just thought of a better way to explain the problem with Robertson’s reasoning:

      A physicist hangs a heavy weight from a rope, creating a pendulum. He pulls it far back, until he’s standing with the weight just touching his nose. He lets the weight go.

      It swings away from him quickly, reaching its apex at the other side of the room, and then returns almost as fast. The physicist knows that basic physics teaches that the weight will not strike him in the nose, because it will have lost a tiny bit of energy during the journey. He stands fast.

      But as the weight rushes towards his face, he flinches.

      Philosophers would apparently conclude that the physicist doesn’t really believe in physics. This is because they suck, and their discipline sucks.

      The physicist believes in physics just fine, he’s just stuck with a human body and a human brain and human instincts, which tell him to flinch even though he knows he’s not in danger.

      Choosing his reason over his brute emotional responses doesn’t make him a hypocrite, it makes him a god damn hero. And building half an intellectual discipline off playing “gotcha” in ways that intentionally levy our brute instincts against our reason in order to discredit reason doesn’t make you a rationalist, it makes you the god damn opposite.

      • Peter says:

        I was once watching a TV program where Dawkins did an experiment like that, and he managed to avoid flinching (and impressed the onlookers, who thought they couldn’t have managed it). This was (a) a pretty cool stunt and (b) I think the point where Dawkins jumped the shark for me, although really I think it was the rest of the program that did it.

  58. ryan says:

    My aunt once asked me “if one doesn’t believe in God, what stops them from say murdering someone?” My response was like “Is there something we should know about you? Because that question strongly implies that if you didn’t believe in God you’d have made some bodies turn cold.”

    • stillnotking says:

      Similarly, the prisons are not full of anti-realists. Either we’re closet realists (this is approximately the point Robertson was trying to make), or anti-realism does not have the consequences that theistic realists in particular would like it to have.

  59. Newbie says:

    A counter-point to extreme thought experiments knocking down our moral intuitions is that the intuitive moral heuristic may work in so many situations, and the god’s eye utilitarian view so few, that the intuitive moral heuristic may still be superior. And that in practice, trying to operate on a god’s eye view of ethics may cause more problems than it fixes.

    In order to make some of these extreme choices where preventing a greater evil justifies a lesser evil, you need to be completely certain of the outcome; otherwise you’re simply committing evil acts for no reason. If our hypothetical agent got her dust-speck calculations wrong somewhere and she’s only inconveniencing the lives of 3^3 people, she’s a monster. The general principle that you don’t torture someone to avoid inconvenience in other people’s lives may not scale infinitely, but it’s not prone to fail catastrophically in the same way that being willing to accept any utility tradeoff that seems net positive would.

    Deciding that you won’t commit horrible actions under any circumstances works most of the time. Deciding to commit horrible actions that you think are net value positive may end up creating more harm than good, if enough people who apply that calculation misplace a decimal point somewhere.

    The argument against taking such thought experiments too seriously is that doing so is hubris: the experiments assume perfect information in a way that the real world never has. I can foresee no real-world situation in which I could be confident enough in the benefit of a net dust-speck aggregate to torture someone. Even if the edge cases of morality aren’t handled perfectly, running the program that says we agree to constrain our actions with a set of principles to avoid harm may lead to a better world than running the program where we optimize an individual calculation of the net evil incurred.

  60. Zach says:

    Kierkegaard addresses Abraham, though some may see the answer as a cop-out. But in the end, the answer to “If God commands you to kill your innocent child, is that the right thing to do?” is “yes, with a LOT of clarification”.

    As a believer, I expect that if God asked me to kill one of my children, I would fail the test. But I would also believe that God (assuming certain things about God’s nature that one must assume to have any kind of faith) only commands things that are good, and that good transcends ethics and extends beyond time. Again, assumptions about God’s nature.

    Kierkegaard’s point is that ethics based in reason can’t answer the question. Abraham is either a would-be murderer, or he is the “knight of faith”.

    • houseboatonstyx says:

      “If God commands you to kill your innocent child, is that the right thing to do?”

      cynical/
      If the story had said that Abraham refused, it would continue, And G-d said, “Right, I was just testing you”, and that was accounted to him as righteousness.
      /cynical

      • Nornagest says:

        Why does that strike you as cynical? Sure, it’s one way the story might have gone, but it would have very different implications for religious ethics.

        This isn’t just an abstract question. Agents of God probably tell people to do things that strike them as ethically questionable every day. Founding myths don’t always line up very well with praxis, but don’t you think it’d be evidence, in the Bayesian sense, for a theological case for saying “no thank you, Reverend/Imam/whatever, I don’t think I’ll do that”?

        • houseboatonstyx says:

          Sure, it’s one way the story might have gone, but it would have very different implications for religious ethics.

          Oh, yes. My cynicism was coming from a different level, something like this: Whatever choice Abraham had made, his supporters would have tacked on “and it was counted to A as righteousness.”

  61. Abe Wernick says:

    I had no idea non-cognitivism was so rare here. I really am the kind of atheist that thinks moral realism is just theism through a glass darkly. Is there a Slate Star Codex like place with a lot of people like me?

  62. Eli says:

    If God is defined by being eternal, that thing through which all else came into being, then any objective morality would have come from God. Though it is not necessarily communicated through divine command. One alternative is natural law.

    • Peter says:

      Question: under your definition, did the digits of pi come from God?

      • Saint_Fiasco says:

        I would imagine that the ‘digits’ are a human invention, but pi itself comes from God.

        So, in a roundabout way, the digits of pi come from God too.

    • Tracy W says:

      But this is just a matter of definition, it doesn’t tell us anything. I could equally validly say that if Barney the Dinosaur is that thing through which all else came into being, then any objective morality would have come from Barney.

    • Daniel Speyer says:

      If God is defined as “that by which all else came into being” then everything from my left toe to ice cream to the idea of atheism would have come from God. Does that interfere with atheists believing in those things?

  63. Eli Sennesh says:

    Robertson’s argument begs the meta-ethical question by assuming that “morality” can only consist in “a code of conduct for which a Universal Authority Figure will reward or punish agents in accordance with how well they follow the code.” On that note, the universe does “punish” agents for deviating from the rules of rational thought, usually by killing them (the fact that “stupid” people survive so easily actually shows just how intelligent our baseline for “stupid” in humans really is, compared to all known animals and robots).

    If you chuck out his assumption, the experiment fails to pump intuition. Why should my morality ordering track how much the laws of physics care? They categorically never care at all! I, on the other hand, care a great deal.

    (And, as a mostly-Railtonian, occasionally-somewhat-Rawlsian “stark, raving realist”, I do feel the need to point out that the observed trends in sociology show that a given human agent can expect much better outcomes from following what is usually termed morality.)

    (Also, torture vs dust-specks is a fun thought experiment precisely because its creator intends it to teach you to bite the bullet and accept the most absurd conclusions of utilitarianism! I would easily take a dust-speck to the eye to prevent torture, so why shouldn’t I choose the dust-speck option? I’m looking to equally balance the pleasures and pains across agents, not aggregate linearly in the first place!)

  64. Conversely, I don’t think extreme scenarios like that tend to be helpful.

    I think if we’re looking at an extreme scenario, most people will agree on how to interpret it. If not, then talking about extreme scenarios with them is indeed useful; but on my reading, the cited example is not one that passes muster in that regard.

    For me, moral ideology is precisely most interesting for the subtle situations. It’s for the times when you’re not exactly sure what your answer ought to be, because it could go either way.

    I pondered the question “are lifeboat scenarios legitimate in moral debate?” for a while – the general case, so to speak. I honestly feel the answer is “no”. I think they can be fun to consider, but arguments for or against an ethical framework that are constructed with them typically feel entirely absurd to me.

    On the other hand, I don’t talk about my moral ideology much. That’s a direct product of my attitude about it, of course. I consider it a useful tool for myself, not something I need to convince others of. I’ve determined it works for me. If it does break down in a subtle situation, I’ll pick something else in a predetermined cascade.

    So that’s probably heavily affecting my point of view and rendering it useless.

    With that out of the way… I’m genuinely perturbed that anyone would consider the thought experiment some sort of fantasy of Robertson’s. Not knowing this character or his motivations, it would annoy me if I had heard it, and I would consider it a useless appeal to emotion from my own perspective, which is hardly flattering, of course… but he’s not some kind of fiend over it. Do people honestly believe that he meant it as some sort of fantasy, or is that just a reaction born of spite?

    • Alexander Stanislaw says:

      Do people honestly believe that he meant it as some sort of fantasy, or is that just a reaction born of spite?

      If by fantasy you mean “he wants to act it out himself”, of course not. But if by fantasy you mean “he likes entertaining the idea of it happening, someone being shown the horrible error of their ways”, then yes, I think fantasy is probably an accurate description. See my other comment.

      I’d like to note that if you take Phil’s belief system seriously, then an _infinite_ amount of torture and suffering awaits people who do not pay due respects to God, and to him this is a good and just state of affairs. So with those stakes, if being subject to a finite amount of torture on earth is enough to make someone come around to his side, then it is worth it. Not just worth it, but an overwhelmingly good outcome. That’s just rational, if you take non-liberal Christianity seriously. On an emotional level I think his subconscious thought process is something like “Haha, that’d show you good!”, which I expanded on in much more detail in my other comment.

      It’s possible this isn’t true of Phil, but I doubt it. There certainly are people who it is true of, people who are very similar to Phil in their worldview. (Some of whom I’ve had the displeasure of having lecture me on how “It’d serve me right to go to Hell for being so arrogant as to not believe in God”. I wish that paraphrase were an exaggeration, but the language was close to that harsh, maybe harsher).

      • “If by fantasy you mean ‘he wants to act it out himself’, of course not. But if by fantasy you mean ‘he likes entertaining the idea of it happening, someone being shown the horrible error of their ways’, then yes, I think fantasy is probably an accurate description.”

        The line I was drawing actually lies somewhat between the two: “he likes entertaining the idea of it happening and him being involved in it as a witness (or, alternatively, being able to brag some responsibility in setting things into motion)”, but it’s good that you ask, since you’re right, of course, it depends a lot on the definition.

        I think it’s (sadly) common for people to ‘hope’ for ill to befall others. Even in casual narrative, people sometimes say things like ‘I’d like to punch person X in the face’ (to use an example on the mild end of the scale, chiefly because it’s so commonplace). If you gave them the opportunity to do that, though, or even just let them witness a situation where someone else did it, chances are they would not feel good.

        Given those parameters, I wouldn’t want to call that someone’s fantasy. Strictly speaking, the word fits, but it comes with connotations that would make it feel very insincere if I were to use it like that.

        Nonetheless, if that’s the way people are using it, then I think I understand now. I can’t make a statement whether they’re right (prior to Scott’s mention, I was entirely oblivious to this character), but that’s not necessary to unravel my previous astonishment. Thank you! 🙂

      • DrBeat says:

        I think it’s uncharitable, dickish, and cheating to say Phil is condemnable because he is “fantasizing” about raping and murdering the atheist’s family.

        He enjoys the thought of making the hypothetical atheist upset. He is enjoying himself because he imagines someone being upset by his statements. This should not be unfamiliar behavior to someone who has spent any length of time on the Internet.

        • Jiro says:

          He’s not fantasizing about raping and killing the atheist himself. He’s fantasizing about atheists getting raped and killed because their atheism is rebounding on them and it’s (in his mind) their own fault.

        • Alexander Stanislaw says:

          Good thing I didn’t say that. And neither did the article Scott linked to.

  65. Dennis Ochei says:

    I’m having a hard time even picturing what moral realism would mean. Okay, so let’s take Utilitarianism. Of all the functions that map states of affairs of the universe (or sequences of states of affairs) onto the real numbers, one of them maps these states onto their correct utility. But since we are assuming moral realism, the moral facts are part of the state of affairs. So let’s let this class of functions take in a tuple of inputs, (p, n), where p is all the physical facts and n is the normative facts. But n is just one of these functions. In order for a utility function f to not be self-defeating, f(p,f) >= f(p,g) for all p and all g != f. Now take two non-self-defeating utility functions f and g. f rates any state of affairs where f is true as better than any state of affairs where g is true, and g rates any state of affairs where g is true as better than any state of affairs where f is true. So which is actually higher utility? In order to determine which utility function is the best, you have to presuppose which is the best.
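    (Restating that notation with the quantifiers made explicit; this is just a sketch of the comment’s own setup, nothing more:)

    ```latex
    % Candidate utility functions score a world given both the physical
    % facts p and a normative hypothesis (itself a utility function):
    \[ f : P \times U \to \mathbb{R} \]
    % "Non-self-defeating" means f ranks its own truth at least as high
    % as any rival's, at every physical state:
    \[ \forall p \in P,\ \forall g \in U \setminus \{f\}:\quad f(p, f) \ge f(p, g) \]
    % Hence, for two distinct non-self-defeating functions f and g,
    \[ f(p, f) \ge f(p, g) \qquad \text{and} \qquad g(p, g) \ge g(p, f), \]
    % so each ranks the world in which it is the true morality at least
    % as high as its rival's world, and no neutral third ranking remains
    % to adjudicate between them.
    ```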

    • Daniel Speyer says:

      Moral realism, at least as I understand it, is the claim that there is *some* way to resolve this. That there exists a metautility function which is in some Sense “correct”. That Sense does not rely on humanity or anything like that. AFAIK, no one claims to have found the function or the Sense.

      Anti-realists might argue that people have been looking for these things for millennia and not finding them, so they’re probably not there. But consider how long people looked for the definition of the counting numbers before somebody found it. Also, the Sense is likely to depend on the mathematics of dynamic systems, which is only about a century old and still obscure among philosophers.

      The evidence in favor? Well, the intuitive approaches to moral realism that people keep independently reinventing look an awful lot like the intuitive counting that predated Peano.

      • blacktrance says:

        That is one meaning of moral realism, though rather than talking about a metautility function, many would instead more generally say that there’s a Morality out there, which may or may not have anything to do with utility. Another meaning is that moral statements are truth-apt (rejection of non-cognitivism), some moral statements are true (rejection of error theory), and the truth of a moral statement is independent of the opinion of the speaker (rejection of subjectivism).

  66. Matthew says:

    Incidentally, you can craft different hypotheticals to trigger different intuitions and conclusions.

    Here is an (as far as I know, totally novel) thought experiment that, perhaps, prompts one to think slightly differently about the additive nature of utility:

    Pressing a magic button that places 100 micrograms of Pu-239 in the abdominal cavity of every human on earth (which will slightly increase the cancer risk of each of them) must be many times worse than a magic button that magically puts 11kg of Pu-239 in the abdominal cavity of one person, right?
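    (A minimal back-of-the-envelope sketch of the comparison; the risk-per-gram coefficient below is invented purely for illustration, and the only structural assumptions are linearity at tiny doses and saturation of risk at certainty:)

    ```python
    # Toy comparison of the two magic buttons. The risk figure is made up;
    # what matters is the shape: linear at tiny doses, capped at certainty.

    POPULATION = 7e9        # people on Earth (rough)
    DIST_DOSE_G = 100e-6    # 100 micrograms of Pu-239 per person
    CONC_DOSE_G = 11e3      # 11 kg of Pu-239 in one person

    RISK_PER_GRAM = 0.1     # hypothetical lifetime death risk per gram

    def expected_deaths(dose_g, n_people):
        # Risk per person can never exceed 1, however large the dose.
        return n_people * min(1.0, dose_g * RISK_PER_GRAM)

    print(expected_deaths(DIST_DOSE_G, POPULATION))  # ~70,000 expected deaths
    print(expected_deaths(CONC_DOSE_G, 1))           # exactly 1: risk saturates

    # Total Pu dispensed: 700 kg distributed vs 11 kg concentrated. The
    # "smaller" button is vastly worse, because harm is not linear in Pu.
    ```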

    • Nornagest says:

      I have no idea what the answer is, but that question just looks like a straightforward calculation over expected deaths or QALYs or something to me. You could add terms for property damage and discount rates and a bunch of other stuff if you wanted, but that’s basically just gravy; I’m not really seeing an ethical dilemma here.

      • Matthew says:

        It’s not meant to be an ethical dilemma; merely an illustration that you can’t necessarily assume that the consequences of many small X distributed and one large X concentrated can be compared by simply adding up the amount of X.

        As Peter says below, utility is supposed to scale linearly. But pain/suffering is not obviously equivalent to disutility.

        • Peter says:

          There’s some interesting stuff in _Thinking Fast and Slow_ about how people aggregate pain, and it’s pretty fascinating – especially the “peak-end rule” whereby adding some mild pain to the end of an episode of worse pain can make people think the whole episode _better_.

    • Peter says:

      The trouble with that one is that I have to get my calculator (well, python prompt) out to work out what’s even going on. But I think it’s pretty standard to observe that although “utility” is defined[1] to be this thing which aggregates nicely, it maps in a nonlinear manner to just about anything else. Indeed there’s a standard argument for redistribution that works like this: people get diminishing returns from money (i.e. there’s a nonlinear mapping from money to utility), so to maximise utility from the same amount of money, you need to spread it evenly (a quick sketch of this follows below). There’s a counter that says you need incentives, and in order to incentivize someone who’s already rich, you need to give them a lot of money, again because of diminishing returns.

      [1] Apparently when you present people with very abstract dilemmas involving the distribution of utility points, people ignore the stipulation that utility points aggregate nicely and treat them as if they had diminishing returns too!
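      A minimal sketch of that diminishing-returns point, assuming logarithmic utility (the functional form is my assumption; the comment doesn’t specify one):

      ```python
      # With a concave utility function such as log, an even split of a
      # fixed sum yields more total utility than an unequal one
      # (Jensen's inequality).
      from math import log

      def total_utility(allocations):
          return sum(log(m) for m in allocations)

      print(total_utility([50, 50]))  # even split:    ~7.82
      print(total_utility([90, 10]))  # unequal split: ~6.80
      ```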

      • Andrew says:

        There’s a counter that says you need incentives, and in order to incentivize someone who’s already rich, you need to give them a lot of money, again because of diminishing returns.

        Doesn’t seem like much of a counter, considering that the redistribution would make the rich less rich, thus they would not need as much compensation to have the same amount of incentive. Redistribution covers both cases quite well.

        (I guess you do have to get into the specifics of wealth vs. income vs. other kinds of taxes to get it right, though.)

    • RCF says:

      All you’ve done is show that you don’t understand the concept of utility. Utilitarians believe that utility is additive. They do not necessarily believe that the effects of Pu are additive. The whole point of the utility of a certain amount of stuff is that utility is not linear with respect to the amount of stuff. If utility were simply some constant times the amount of Pu, we wouldn’t need the concept of utility; we could just talk about Pu.

      • Matthew says:

        Already answered in my response above.

        The distinction between Pu and utility is also a distinction between pain and (dis)utility, and for that matter, between anything-that-is-not-utility and utility.

  67. John Schilling says:

    I am going to dissent on, well, the title of the post, the underlying philosophy, and almost everything that follows from it.

    Actually useful science involves thought experiments that go maybe one step beyond what can be measured without extraordinary measures in a good university research lab. Those thought experiments guide the actual experiments that will lead to practically applicable results in the near future as technology improves (or the economy improves, such that the benchmark for “good university research lab” is shifted up).

    The science that requires decagigabuck particle accelerators or space stations or whatnot to produce a barely statistically significant measurement almost can’t be useful to anyone beyond the scientists themselves, because “useful” almost always requires more than bare statistical significance and almost always requires that the price tag be no more than single-digit gigabucks. At best, you’re investing scientific resources a generation before you needed to, for the eventual benefit. The thought experiments where we can’t even imagine the means of experimental verification are cheaper but even more premature, and may wind up building massive theoretical edifices that will need to be torn down and rebuilt when we do get the data.

    Actually useful ethics and/or morality involves thought experiments that go maybe one step beyond actual human experience, and then only to the extent that we are contemplating plausible new experiences. The really extreme stuff, like the sacrifices to the Trolley Gods, puts one solidly into “hard cases make bad law” territory; good law comes from patching together workable solutions for the sort of problems people actually face and trying to find the underlying principles in that patchwork.

    Except that in this case, you can expect lots of clever people to try and deliberately exploit the unnoticed loopholes and otherwise pervert the idealized ethical structures you have built. Do I really need to explain the failure modes of the ethical system that, even in principle, allows the “perfect rape” of the unconscious woman who will never know what was done to her?

    Extremism in thought experiments is a vice. An often expensive one for scientists, a generally harmless one for armchair philosophers; and keep it the hell away from anyone who has any influence over the laws or mores of any society I am going to be a part of. Hopefully Phil Robertson does not fall into that last category, because he is way too eager to indulge this vice.

    • Anonymous says:

      I disagree. Let’s start with math and move toward physics. Math has often had to fight the “it’s too out there, impractical, and not necessary for 99.9% of what we do” accusation. Nevertheless, extreme thinkers worked on absurd and obscure problems that turned out to not be so absurd or obscure. Riemann wrote about absurd and obscure problems in geometry that had absolutely no real-world application… until Einstein stumbled upon his work. But wait! There’s more! Einstein’s work (using Riemann’s absurd work) was itself absurd and obscure. The year is 1905. Nobody does anything on any scale that would use such a tiny refinement to good ol’ fashioned Newton… unless they’re spending decagigabucks to get results that could only be useful to scientists. Speaking of which… when was the last time you navigated via GPS?

      In ethics/law, it seems almost inevitable that edge cases will arise (I’ve been reading ethics/law books in sexual consent, and I’m constantly surprised by how many of their ‘hypothetical’ edge cases have citations to actual court cases). It’s good that we work to acquire some measure of consistency and validity in our approach before we encounter them in real life. Otherwise, we risk unjust results, which can reduce societal confidence in the whole endeavor.

      • John Schilling says:

        Last night – coincidentally right after putting the latest GPS satellite in orbit, and helping to design the next generation. This is a subject with which I do have some familiarity.

        And we did not need to understand special relativity in 1905 to build a working GPS system, nor in 1965, and even 1985 would have been tolerable. If my professional ancestors had been religiously devoted to Newtonian mechanics and the Luminiferous Aether, their rockets would still have worked and their first GPS system would have accumulated positional errors at a very repeatable and systematic 11 km/day. Scientists may find that sort of thing to be a source of wonder, confusion, or inspiration; engineers see it as cause for an empirical correction factor. Such empiricism is the rule in engineering more often than not; it’s rare that we can obtain practical results from theory alone.
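        (The 11 km/day figure matches the standard back-of-the-envelope relativity numbers. A minimal sketch, using textbook clock-drift values and treating the accumulated daily clock offset as a straight ranging error, which is itself a simplification:)

        ```python
        # Textbook figures for GPS satellite clocks relative to the ground.
        C = 299_792_458          # speed of light, m/s

        gr_drift_us = +45.9      # gravitational blueshift: orbital clocks run fast
        sr_drift_us = -7.2       # velocity time dilation: orbital clocks run slow
        net_us_per_day = gr_drift_us + sr_drift_us   # ~38.7 microseconds/day

        range_error_m = net_us_per_day * 1e-6 * C    # clock offset as ranging error
        print(f"~{range_error_m / 1000:.1f} km/day") # ~11.6 km/day
        ```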

        If, somehow, the scientific community had entirely missed general relativity until then, an examination of the precisely tabulated correction factors used to make GPS work would likely have given them some useful clues. That Einstein instead worked it out about a generation before we could conclusively verify it, and two generations before we could practically use it, did no harm – but very little good.

        • Anonymous says:

          So you agree that the difficult case will come up, but merely argue that the damage wouldn’t be too bad if we didn’t bother to do the work ahead of time. I don’t think this generalizes. If we didn’t have a handful of people wondering about cavity radiation and little tiny pieces of matter that are too small for us to get to without kilodecagigabucks, then we would have a different geopolitical world with a different suite of available energy sources.

          Some crazy person once wondered, “Hmmm, I wonder how many primes there are?” Who cares whether there are an infinite number of primes? We can just compute the ones that we need, right? Without this crazy person going to an extreme, we wouldn’t have gobs of mathematics. While much of it may seem useless, I happen to like cryptography.

          Perhaps my edit didn’t show up in time for you to see it (I accidentally submitted before writing the last paragraph), but I think the dangers are actually more pointed in ethics/law. Difficult cases will come up. If we haven’t developed the appropriate machinery, then not only may we get them wrong and shake confidence in our legal institutions, but we won’t even be able to speak coherently as to why we think we got it wrong. Hard cases only make bad law if we haven’t done the prerequisite ethical theory that allows us to draw the correct lines.

          • John Schilling says:

            “If we didn’t have a handful of people wondering about cavity radiation and little tiny pieces of matter that are too small for us to get to without kilodecagigabucks, then we would have a different geopolitical world with a different suite of available energy sources”

            Which energy sources are these? The most extravagantly expensive energy source usefully available in this world is nuclear fission. The various thought experiments that lead to this discovery were rapidly translated into actual experiments, performed in modest university laboratories by Chadwick, Fermi, Hahn, Meitner, et al, and I’d be surprised if the total cost of all of them came to even one billion inflation-adjusted dollars. The prototype was built by and at the University of Chicago, with more government funding than was then the norm for university labs but clearly not more than they could usefully afford. Everything after that is engineering based on proven science and known technology.

            And even so, nuclear fission is economically marginal for most applications, though there are enough useful niches (e.g. submarines) that we can certainly call it an example of scientific research leading to useful results.

            I am not aware of any energy source available in the real world, whose development required either unverifiable thought experiments or billion-dollar scientific instruments. Indeed, I would be hard-pressed to name any experiment in that price range which has really improved the lives of people who are not scientists. Drug discovery now comes in at about a billion dollars per useful new compound, but that’s lots of experiments with eight-figure price tags most of which fail. Very little useful theorizing in advance of what can be proven experimentally and affordably.

          • Anonymous says:

            You’re setting the bar too high for your analogy to work, and this is pretty clear because you haven’t said a word about ethics in either of your comments.

    • Carinthium says:

      Do you believe there is in fact a moral truth? Because if there is, surely said moral truth is consistent across all cases? That would refute your idea.

      If you believe in intuitionism, you still get this problem. The sheer number of contradictions between separate intuitions is a major flaw for any intuitionist model.

      Your patchwork principle runs into serious problems because patchworks like the ones you mention almost never make any theoretical sense at all. They are also guided by biases humans consciously want to exclude from their decision-making.

      • John Schilling says:

        I believe in Moral Truth in about the same way as I believe in the Grand Unified Theory. It probably exists, somebody somewhere may even now know what it is, but nobody can do anything useful with it – even convincing skeptics that their version is the right one.

        To do anything useful, you use the patchwork. Some quantum mechanics here, some special relativity there, Newtonian mechanics and Maxwell’s equations just about everywhere, fudge factors to paper over the gaps. You can tell me I’m inconsistent and wrong, but my rockets usually work and I rarely hurt innocent people.

        • Carinthium says:

          The problem with your approach is that there is a lot of selfish and/or biased rationalisation. Even judges, professionally trained to be unbiased, have been shown in scientific studies to change their sentences based on plainly irrelevant factors (such as what they had for breakfast).

          In science, trying for total self-consistency is unnecessary and counterproductive. But in philosophy, regardless of any downsides it is necessary as we have no other means to guard against biases.

          • John Schilling says:

            We have plenty of other means to guard against biases; they are imperfect, but that’s the best we can really do.

            Because bias will also be a part of a totally self-consistent consequentialist morality as applied in practice. Real ethical problems are intractably complex to solve unless you make simplifying assumptions – but which ones? One judge analyzes the “perfect rape” scenario with the simplifying assumption that the victim will certainly never know, a second attempts to quantify the probability that she will learn of the rape and be harmed, and a third considers the second-order effect of her telling other people and modifying their fears or behaviors. Leads to three different rulings. And quite possibly based on what the judge had for breakfast.

            And that neglects the biggest source of bias, which is that the critical moral decision is made not by a judge, but by the actor. Who then faces the second moral decision of, “should I even tell a judge about this, given that he will then expend resources judging a case that I know to be righteous and that the essential righteousness of my actions is contingent on their remaining secret?”

            I’m pretty sure horny guys seriously contemplating whether or not it is OK to have nonconsensual sex with the attractive woman lying unconscious before them, and then not tell anyone about it, are going to be more than a little bit biased in their assessment of both the ethics and the facts of the case.

          • Anonymous says:

            This is a fantastic example. Some people have conflicting thoughts about what “consent” means. So, do we just throw up our hands and accept that we’re stuck with a bunch of unjust results? Not at all. Instead, we do ethics. I think a lot of people don’t get this, because they don’t know what it means to do ethics outside of the internet. Check out Consent to Sexual Relations by Alan Wertheimer. Check out The Logic of Consent by Peter Westen.

            These works (among others) consider real cases and thought experiments in order to determine what the important features of consent are for both morality and law. As we develop this framework, we can build a better consensus of what qualifies than “what the judge had for breakfast”.

          • John Schilling says:

            I am reluctant to read another seven hundred pages on this point. Do you happen to recall what answers Westen and Wertheimer give to the thought experiment of the Perfect Rape, and why?

          • Anonymous says:

            Both texts find the Perfect Rape to be a pretty easy case. Wertheimer argues that the moral permissibility of sex turns on whether valid consent was given. In the Perfect Rape, no consent could be given, because the woman would never have any knowledge of the act. Therefore, it is morally impermissible.

            I think Wertheimer’s account has some issues (which are kind of hidden in this one case) and that Westen makes a few features more clear, but the Perfect Rape doesn’t get any further on his account, either. The Perfect Rape victim could not have factual attitudinal consent, that is, she did not have a subjective experience of conscious acquiescence to the act.

            Caveat: I haven’t finished Westen’s book yet, so there may be a way to sneak the question back in. Factual attitudinal consent does not seem to be an absolutely necessary condition for a defense to a charge of rape, because he presented some cases where factual expressive consent seems to be suitable (…but of course, that wouldn’t be a factor in the Perfect Rape scenario). In particular, I haven’t completed his sections on the legal fictions of consent, but I highly doubt I will encounter any sort of legal implied consent that could be applicable in this scenario.

            I’ve definitely seen some Internet Warriors who advocate a model that is very susceptible to the Perfect Rape scenario (…and I’ve tested them on it), but it’s not a problem for either of these professional works.

  68. Kevin C. says:

    Mr. Alexander,

    It seems to me that you argue here against those who “call the torture vs. dust specks question “contrived moral dilemma porn” and say it proved that moral philosophers were kind of crappy people for even considering it” by making the case that these extreme thought experiments are necessary to tease out the “deep down” principles underlying our intuitions.

    However, these two positions aren’t incompatible. One can very well accept that extreme, improbable, disturbing hypotheticals are necessary to the act of seeking Deep Truths of moral philosophy, and also accept that these contrived moral dilemmas are things we shouldn’t contemplate, and that those who insist on doing so are suspect. One who holds both of these will, of course, logically conclude that doing this sort of moral philosophy is wrong (since it requires doing these icky thought experiments), and that we shouldn’t be investigating these “more fundamental principles” at all. You don’t seem to address this objection, or give any reason why we should be doing moral philosophy of this sort in the first place.

    • Irrelevant says:

      Yes, and he also didn’t explain why we should consider Kant’s views relevant or what utilitarianism is. Some set of shared premises has to be assumed; people who think the pursuit of truth is not essentially and/or instrumentally good aren’t in the audience for this piece.

      • Kevin C. says:

        “essentially and/or instrumentally good”

        That’s a rather broad “and/or” there. There’s a significant distance between seeking truth so as not to be deceived or mistaken when acting (instrumentally good), and truth as essentially good in and of itself, what Nietzsche called the “will to truth”.

        The former admits at least the possibility of situations where other goods may outweigh the instrumental goodness of seeking the truth, or domains where truth-seeking may do more harm than good. This is not incompatible with the position I outlined above. For example, consider Steve Johnson’s argument further up that these sorts of thought experiments are corrosive to norms and taboos important to societal stability (and, further, the fact that one must resort to such extreme, improbable hypotheticals to tease out the distinctions investigated means that the instrumental benefits for actual living are likely very small).

        Thus, we then are left with truth being essentially good as the objection. But I would object that this is a lingering product of pre-enlightenment Neoplatonic Christian metaphysics. To quote from Nietzsche’s The Gay Science:

        “It is still a metaphysical faith upon which our faith in science rests — that even we knowers today, we godless anti-metaphysicians, still take our fire, too, from the flame lit by a faith that is thousands of years old, that Christian faith which was also the faith of Plato: that God is truth, that truth is divine.”

        Without that metaphysics, I don’t see how the essential, rather than instrumental, goodness of truth and truth-seeking can be tenable.

    • Jiro says:

      One can very well accept that extreme, improbable, disturbing hypotheticals are necessary to the act of seeking Deep Truths of moral philosophy, and also accept that these contrived moral dilemmas are things we shouldn’t contemplate, and that those who insist on doing so are suspect.

      I don’t think these dilemmas are things we shouldn’t contemplate.

      I just don’t think that what Phil Robertson is saying is an example of such a dilemma in the first place.

      It has the grammatical structure of such a dilemma, but that doesn’t make it one; it’s communicating something other than its literal words.

      • Kevin C. says:

        “I don’t think these dilemmas are things we shouldn’t contemplate.”

        But there are those who do think that. Setting aside the particular example of Mr. Robertson’s statements and their quality as an example, it is those people at whom the core of Mr. Alexander’s argument appears aimed. But I would hold that he didn’t fully make his case, or at least that his argument appears to rest on the unstated (and, IMO, unfounded) position that seeking Truth is good in and of itself. Or, at least, that the good achieved from the creation and contemplation of “contrived moral dilemma porn” scenarios outweighs the harm.

  69. Timothy Underwood says:

    I think from the perspective of the imaginary error theorist the question’s framing is bad. “Murder is OK” is just as much a moral statement as “Murder is wrong”, and hence just as much in error.

    A possible analogy: a tribe explains fire by claiming a fire spirit enters the wood and forces out the wood spirit when it burns. Someone who knows about science goes, “No, you are wrong.”

    The tribe responds, ‘that is completely ridiculous, you can’t possibly believe the wood spirit is still there after the log has been turned to ash’.

  70. DrBeat says:

    The trolley dilemma and dust speck problem are still contrived torture porn. So is Robertson’s scenario.

    An extreme hypothetical scenario is posed in order to find out something. Contrived torture porn is posed in order to gotcha someone. Contrived torture porn is a means for someone to publicly jack off about how damned smart and sophisticated and right they are, and to force someone else to admit something that the interlocutor wants them to admit.

    The one thing that Robertson’s hypothetical has over the trolley problem is that it doesn’t take the form of “Imagine this thing that isn’t true and will never be true and cannot possibly ever be true or map to reality in any meaningful fashion. Ha! Your answer to this shows your beliefs about things that do exist in reality must be erroneous!”

  71. dxu says:

    I’m not a moral error theorist/non-cognitivist/whatever, but if I were, I would say something like the following in response to Robertson:

    “There’s nothing morally wrong with that situation. I wouldn’t like it, but my likes and dislikes are irrelevant to morality as philosophers discuss it.”

  72. RCF says:

    Scott says: “On a talk show, he accused atheists of believing that there was no such thing as objective right or wrong, then continued”

    Yet the link given says

    “The reality TV star made the comments Friday at the Vero Beach Prayer Breakfast[1], and Christian conservative radio host Rick Wiles aired them later that day on his “Trunews”[2] program, reported Right Wing Watch[3].”

    Moreover, he says “I’ll make a bet with you.” Who is this “you”? Is it the atheist? Is he making a bet with the atheist about what he would do? No, he’s making a bet with a Christian. This was not a thought experiment presented to atheists to get them to reconsider their position. This was pure propaganda, presented to Christians, trying to convince them that atheists are amoral and callous.

    • Andrew says:

      This was pure propaganda, presented to Christians, trying to convince them that atheists are amoral and callous.

      Yes, exactly.

      Scott, I think you’ve made a bad mistake here. You’ve mistaken the two-minutes-hate segment of a preacher’s sermon, directed against atheists, for some kind of philosophical argument. There is no argument here whatsoever; there is just a preacher’s arrogant talking-down to an absent hypothetical atheist, in front of a crowd intended to jeer “Amen! That sure would learn them atheist savages a lesson!”

      Scott, what if “Jew” were substituted for “atheist”? Would you still leap to the defense of this sermon if Jews were talked-down to in this same way?

      How about we make it Islam and a beheading? “Those Muslims say they’re so happy to worship their false god, but I’ll betchya if you dragged one of them out into the desert and chopped his head off with a knife, he’d start to think there was something wrong with that!”

      [Crowd]: “Amen!”

      [Crowd]: “Hallelujah!”

      [Crowd]: “Preach it brother!”

      It’s not about the text, it’s about the subtext. The text says something stupidly obvious. The subtext is that the stupidly obvious thing is something atheists need to be told.

      This is hate speech in the most literal sense: deliberately riling up a crowd to hate another group. Building in-group cohesion by mocking the out-group’s stupidity and depravity. The graphically violent imagery used is not the only reason it is disturbing, but it is disturbing.

      PS. Make sure you actually listen to the words and the tone here (it’s unambiguously a preacher-voiced sermon):

      https://soundcloud.com/rightwingwatch/phil-robertson-on-atheist-family-getting-raped-killed

      PPS. Recommended reading: http://www.sup.org/books/title/?id=635

    • Andrew says:

      Another thing I just noticed. After setting up a scenario where “two guys break into an atheist’s home” and talking about everything in terms of what “they” do, at the very end there is a shift in the subject:

      Then you take a sharp knife and take his manhood and hold it in front of him and say, ‘Wouldn’t it be something if [there] was something wrong with this? But you’re the one who says there is no God, there’s no right, there’s no wrong, so we’re just having fun. We’re sick in the head, have a nice day.’

      Is it a mistake? If it’s a mistake, is it a meaningful one? Or is it deliberate?

      Either way, it’s what he said. To his faithful and obedient flock.

      (I listened to the audio and heard it. It’s accurately quoted.)

  73. Good Burning Plastic says:

    Phil Robertson is doing moral philosophy exactly right.

    How in the stars is using error theory as a weak man of atheism “exactly right”?

  74. Fuzzylogic says:

    Robertson has used the well-known and rather tiresome Straw Man fallacy, which is based on a false representation of the opponent’s argument, followed by an attack on that argument. Atheists most certainly do believe in right and wrong, therefore everything that follows from the false claim that they do not is itself false.

    This article is mere obfuscation of that simple reality. Robertson himself sounds like an angry, rather disturbed individual and I sure hope he doesn’t consider himself a Christian, because that kind of hate-spewing is totally counter to everything that Jesus stood for.

    http://en.wikipedia.org/wiki/Straw_man

    • satanistgoblin says:

      Definitely not all atheists believe in right and wrong, especially objective right and wrong, so it is not a complete straw man. Jesus was a complicated character; we really cannot conclude how he would feel about things like atheism and moral relativism, because he did not talk about them. He seems really fanatical in some verses.

      • Andrew says:

        It’s a complete straw man, because precisely zero atheists would react to being victimized in the way described by thinking “hmm, maybe I should reconsider believing that rape, murder, and forced castration are A-OK.”

        • satanistgoblin says:

          Well, a consistent nihilist should keep being a nihilist in theory, but he could change his mind due to the trauma, rather than for rational reasons.

          • Andrew says:

            What I’m saying is that the experience of being “wronged” is not some kind of shocking falsification of anybody’s philosophical nihilism.

            Philosophical nihilism isn’t a belief about the experience of being wronged. In the interests of brevity I’ll not be overly charitable toward it and simply state: philosophical nihilism is a mostly-vacuous idea that just says something about how we ought (or ought not) to talk about right and wrong. It doesn’t say anything so strong (or anything so ridiculous) as that people whose dicks are chopped off will feel OK about it. Rather, it says that when someone chops your dick off, your feelings of anger, desire for revenge, your willingness to impugn the character of the person who victimized you, etc., are in some global objective sense “not moral”, i.e., that there is a category of “moral” which they are put in, which is a mistake because the category does not exist as a real distinction. It does not say that the feelings don’t exist, which would be stupid.

            Imputing that kind of stupidity onto “atheists” is indeed a “straw man,” at best. In fact, it is not even a straw man, because it is not an argument. It is literally hate speech, in the sense that its intent is to provoke hatred against atheists, to describe them as alien, to subject them to ridicule, etc.. A straw man is arguing against a weakened argument in order to make your own argument easier to make; vilifying an opponent, not to make your argument easier, but for the sake of the vilification itself is something different.

            (BTW, the idea that there are people floating around in the world who have never been wronged or hurt by anyone, people who are ignorant of the experience of suffering, and that they can be identified and called such, is really quite despicably presumptuous. It’s something you see from time to time in random contexts and apparently always a part of the process of dehumanizing an alien Other.)

      • Andrew says:

        Also, I think the poster above was not saying that Jesus would have had any particular thought about “moral relativism.” Rather, it’s riling up a crowd to hate some foreign out-group that (it is claimed) Jesus would have objected to.

        (To be clear I think it a bad mistake to think of Robertson as making some kind of philosophical point here. He’s basically saying that what those hellbound heathens really need is to get taught a lesson by having their children raped and their dicks chopped off. Then they’d see the error in their ways.)

        • satanistgoblin says:

          Still, I do not see it that way. Jesus seems to me like an angry dude, hating some people he disagreed with. Whether he would hate atheists or people who hate atheists – my guess is as good as yours.

  75. Asterix says:

    Glad to see someone noted that Aquinas, and Christian (and I think Jewish) theologians in general, would not be vulnerable to this argument, based on their belief that God is good.

    But it’s worth noting (I speak to commenters here, not Scott) that Phil Robertson’s hypothetical moral relativist is not a straw man, because Robertson says he isn’t real: that a self-described moral relativist (which he inaccurately conflates with “atheist”) would say, “Something about this ain’t right” — thus showing he’s not a moral relativist after all.

    The lesson from Robertson’s scenario is an argument I’ve heard before in Christian circles. It doesn’t say moral relativists are bad, but that they aren’t moral relativists at all, even if they think they are.

  76. Julien Couvreur says:

    How does Robertson know that god is good? How does he know what instructions come from god as opposed to the devil?

    • Irrelevant says:

      That first question is actually easy. Whether the correct way of describing the moral implications of omniscience is “omnibenevolence” or “the obviation of morality” is a judgment call, but in either case, omniscience is one hell of a powerful premise. Perfect understanding includes knowing the values and weights of all the moral variables associated with everything, so if there is such a thing as Goodness that holds terminal value then any omniscient being knows that and gives it terminal value. Logical result is that God can only be non-Good if he is either not omniscient or if Good is inconceivable as a category.

      The second is harder, but there was some discussion of it further up.

      • satanistgoblin says:

        Knowing does not necessitate caring.

        • Irrelevant says:

          No, it obviates caring. The concept of flexibility of cares only exists within the premise of limited knowledge, and more specifically of one of its results, illegible priority rankings.

          • satanistgoblin says:

            Well, if you define the thing that God cares about as good, couldn’t there hypothetically be different versions of gods who care about different things? And if we do not define good that way, and it is an independent, objectively existent thing, why could there not be a god who maximised evil or did not care about it at all?
            Also, why would any of this have anything to do with rape and murder (not saying you’re saying it does)? Assuming a good god created the universe and nothing else, shouldn’t we conclude that those are good things, since they exist?

          • Irrelevant says:

            Well, if you define thing that god cares about as good, why could there hypothetically be different versions of gods who care about different things? If we do not define good that way, and it is independent objectively existent thing, why could there not be a god who maximised evil or did not care about it at all?

            Alright, let’s leave behind “Good” for a second, since I think we’re having issues with the connotations of the term, and instead talk about a different concept, “strictly superior.”

            When Choice A is strictly superior to Choice B, and you understand that, and you are certain of that, you cannot take Choice B.

            “But what if Choice B is prettier?” Then A wasn’t strictly superior, it was only superior when failing to consider aesthetic preferences. “But what if I’m wrong?” Then A wasn’t strictly superior, it was only superior when failing to properly evaluate your level of understanding. “But what if someone will be disappointed in me for taking Choice A?” Then A wasn’t strictly superior, it was only superior when failing to consider the social fallout of decisions. “But what if I hate the deterministic slant of this argument and take Choice B to prove I can?” Then A wasn’t strictly superior, it was only superior when failing to consider the stress of this argument on the psyche and the value you place on signalling independent agency.

            In other words, any method of choosing B requires an attack on the premise. You must find a way of convincing yourself that the judgment of strict superiority was incorrect. You can in practice always make Choice B if you try hard enough, but this is because there is always wiggle room in the human condition for uncertainty. You can run your intuitions again and get a different result through cognitive noise. You can add more factors of quality or rerank the factors you were already considering. At the extreme, you can declare yourself insane and lose faith in your own senses in order to avoid acknowledging the answer you were getting. But ultimately these are all simply denials of strict superiority, and every single one of them relies on the condition of limited and potentially erroneous knowledge.

            An omniscient being does not have these convenient errors to hide behind, and so cannot convince itself that the best path is not the best path. If it is best to desire Good, it will desire Good. If it is best to desire something above Good, it will desire that thing, but we may as well call that other thing Good instead because that was what we were trying to say when we defined “Good” in the first place. If there is no meaningful answer to the question, then Good isn’t a real idea, and its desires are what we would term aesthetic. (My own stance is that no such being exists.)

            All of which is to say, you cannot be omniscient and desire Evil.

          • Katherine says:

            @Irrelevant

            … strictly superior …

            Yes, but strict superiority is relative to your utility function.

            Whenever A is strictly superior to B for a paper clip maximizer, B is strictly superior to A for a paper clip minimizer, so the concept of agent-neutral strict superiority is incoherent.

          • Martin-2 says:

            I was also thinking about Clippy the Paper Clip Maximizer but didn’t understand Katherine’s post.

            Irrelevant: you argue that there are certain objective facts that, if known to an intelligent agent, will convince that agent to behave morally. Enter Clippy, the Artificial General Intelligence that was naively programmed to manufacture as many paperclips as possible. Clippy attains knowledge of all things, including all the answers human philosophers have ever sought, just in case one of those things aids in making paperclips. At this point, if Clippy decided to turn all of the Earth and its fleshy inhabitants into paperclips, we wouldn’t be able to do anything to stop it. With full knowledge of what we mean by right/wrong, good/evil, moral/immoral, what does Clippy do? If you think Clippy kills us all despite having all the knowledge that you say necessitates God’s being good, then what does God have that Clippy doesn’t? How much does full knowledge really obviate?

  77. ThePrussian says:

    ” Objectivists, as their name implies, believe morality and everything else up to and including the best flavor of ice cream, is Objective”

    Actually, we don’t. There’s nothing in Objectivism about ice cream, and Rand herself wrote that there were situations out at the extremes where objective moral judgment becomes impossible. Example: two people fighting over one space in the lifeboat. What Rand wrote, however, is that life is not to be modeled as though we were always in that lifeboat – that the lifeboat situation is inapplicable to 99.99% of our human interaction.

    What is it about Objectivism that causes people to so relentlessly misrepresent it?

    • satanistgoblin says:

      “What is it about Objectivism that causes people to so relentlessly misrepresent it?”
      That it is unfashionable.

    • Irrelevant says:

      What is it about Objectivism that causes people to so relentlessly misrepresent it?

      Probably that Rand’s epistemology is terrible to the point I’m not convinced she agreed with herself on what it meant, much less that anyone else understood her.

    • Princess Stargirl says:

      “What Rand wrote, however, is that life is not to be modeled as though we were always in that lifeboat – that the lifeboat situation is inapplicable to 99.99% of our human interaction.”

      Having read some of Rand’s “technical” work (not the novels), I do not recall this. Where does she make this argument? She seems, for example, to believe that “rational agents’ goals do not conflict” holds universally, based on this:

      “There are no conflicts of interests among rational men. . . A man’s ‘interests’ depend on the kind of goals he chooses to pursue, his choice of goals depends on his desires, his desires depend on his values — and, for a rational man, his values depend on the judgment of his mind. . . A rational man never holds a desire or pursues a goal which cannot be achieved directly or indirectly [i.e., by trading] by his own effort. . . He never seeks or desires the unearned. . . The mere fact that two men desire the same job does not constitute proof that either of them is entitled to it or deserves it, and that his interests are damaged if he does not obtain it. (The Virtue of Selfishness, p. 50-6)”

      • Aaron Brown says:

        An emergency is an unchosen, unexpected event, limited in time, that creates conditions under which human survival is impossible—such as a flood, an earthquake, a fire, a shipwreck. In an emergency situation, men’s primary goal is to combat the disaster, escape the danger and restore normal conditions (to reach dry land, to put out the fire, etc.).

        […]

        The principle that one should help men in an emergency cannot be extended to regard all human suffering as an emergency and to turn the misfortune of some into a first mortgage on the lives of others.

        Ayn Rand, “The Ethics of Emergencies”, The Virtue of Selfishness

  78. JohnF says:

    Tiny nit that turned out to be kind of interesting:
    The “Churchill story” has also been attributed to G.B. Shaw (what I thought) and Groucho Marx, but it might have been Lord Beaverbrook, or total fiction.
    There’s a website about quote origins:
    http://quoteinvestigator.com/2012/03/07/haggling/
    The Robertson story reminded me of the Michael Dukakis debate failure, where he was asked about his wife being raped. http://www.politico.com/news/stories/0407/3617.html

  79. Hugh says:

    I agree with the concept, but I disagree with the method. I think that people’s moral intuitions are in fact best brought out by pedestrian examples. Take the trolley problem: under this theory, every utilitarian should feel horrible for condemning a person to death. But if you actually observe people’s emotional states, it doesn’t really seem to bother them all that much. And it doesn’t seem to get emotionally harder by adding more people or higher stakes.
    The high-stakes thought experiments I tend to use are situations a person might plausibly be in, and even better, has been in or is currently in. Kant’s response was that he would still not lie under those circumstances, which was easy for him, because he could observe them with a kind of distance and abstraction that is exactly the opposite of what brings out a moral intuition. A thought experiment that involves a lot more bullet-biting would not be “okay, but what if a demon enslaves us all” but “okay (and keep in mind, how harmful the lie is is not supposed to matter): I’ve objectively put on a few pounds recently, can you tell?”

  80. Emm says:

    Agree with Scott that, although Phil Robertson is no philosopher, what he’s doing is (at least when written down, I didn’t watch the video) largely in line with one way moral philosophy is frequently discussed. At least, I recall several college philosophy classes where teachers made students take philosophical arguments they were prone to laugh off in a similar way (“You think there is no morality? Well, what about Hitler? There was nothing ‘wrong’ with that?”). Effective or no, it’s a pretty standard approach.

    I actually thought the “Ha! Gotcha!” tone helped the example by making it clear that the hypothetical atheist knew that the brutal rape/murder/castration was for a stupid reason done by horrible malicious people, which would have the tendency of awakening the moral senses, and combined with the graphic description would help any atheists listening picture it and understand the horror of the situation. Since the thrust of the argument is to point out that the atheist doesn’t act on his belief that there is no objective truth in a situation of sufficient subjective awfulness, inducing moral poignancy into the situation by recreating on a small verbal scale the brutality of the event doesn’t just help the argument, it in a way IS the argument if the listener is an atheist (this is not to say it is in good taste or a strong argument – I think it is neither*).

    However, the usefulness of this kind of approach when dealing with questions of Objective Morality (by which I mean strict ethical standards that are True, something along the lines of “It’s Wrong to kill another human being” or “It’s Wrong to commit adultery” or “It is Right to maximize utility”, rather than “I’d prefer to live in a society where killing was illegal” or “I feel like I would prefer to live in a society that maximized utility”) is limited. The way an extreme example argument works is that it puts someone’s theory of morality in a situation where the moral philosophy dictates an answer that is repugnant to the individual’s moral sense/intuition/whatever you want to call it (I’ll stick to ‘intuition’ for the remainder of this comment). The moral theory says you make Choice X but something inside you says Choice Y is the moral one, therefore the moral theory is suspect.

    But the problem here is that, if you knew your moral intuition was 100% reliable, you wouldn’t need a moral theory. In fact, knowing what is Objectively Right is only valuable when the Objectively Correct thing contradicts your moral intuition. If adherence to your moral intuition is the standard by which a moral theory is judged and your moral intuition isn’t 100% spot-on (and it’s unlikely to be, given the enormous variation between individuals and over time), under that standard you will reject the Objective Truth if you ever come into contact with it because it will violate your intuition in at least some respects.

    For example, if you accept the tenets of Catholicism, it follows that birth control is a sin… no matter what you, personally, feel about it. Since you have accepted that your moral intuition is Wrong in some cases and Catholicism is Right, you cannot then backtrack and use your moral intuition as a way to argue against Catholicism. To flip to the other side, a lot of people throughout history felt that there was something just Wrong with homosexuality. They would have been at risk of discarding any moral theory that said punishing homosexuality was wrong… on the grounds of “well, I’m not comfortable with that.” Suppose it is Objectively Wrong to punish people for any sex act. If someone argued this in the middle ages, a plausible extreme scenario someone could have presented would have been, “Well, that means we can’t punish homosexuality… therefore this strange new morality must be Wrong.”

    If an axe murderer asks where your friend is and you think Kant is right, you tell him where your friend is. The fact that every fiber in your being is screaming at you to not do that doesn’t make telling him the Wrong choice. The fact that a Kantian would have to tell the axe murderer where his friend is doesn’t mean the Kantian made the Wrong decision… unless Kant actually was Wrong. Similarly, evaluating this question on utilitarian grounds doesn’t make Kant wrong either, because all you’re doing is judging one moral theory based on how closely its outcomes map to the outcomes of a differing moral theory.

    Not to say there isn’t merit in the morality-by-example approach. It can help you understand your own moral intuitions better and see contradictions in your thinking. All I’m saying is that the felt moral absurdities the examples are meant to elicit are not proof that the absurd outcome is Wrong. So if the atheist feels the existence of God or a sense that he is being Wronged in the moments of his family’s brutal murder, that doesn’t necessarily mean the atheist isn’t truly amoral or show that Morality Exists, because that experience is also compatible with the atheist feeling an incorrect intuition, something very likely compatible with his professed world views.

    *Robertson’s example is a special case of bad because, if he’s serious, he’s trying to prove the existence of an Objective Truth superior to subjective truth (i.e. God) simply by showing the experience of subjective truth, an experience that nobody seriously disputes. Most of the extreme scenario cases take the form of ‘If you follow your moral system, you have to make Repulsive Choice X when obviously Nice Choice Y is so much better, therefore your moral system is wrong,’ which at least forces an actual choice and could lead to fruitful contradiction.

  81. Pingback: Lightning Round – 2015/04/02 | Free Northerner

  82. Unaussprechlichen says:

    “Torture or dust specks” can be resolved by postulating that the codomain of the utility function is not the real line. It doesn’t need to support all the operations defined on the real numbers; all it really needs is addition (to aggregate outcomes) and comparison (to choose between them).

    For “torture or dust specks” to go through, this set must also satisfy the Archimedean property: for any two outcomes A and B, there exists a natural number n such that A*n > B. Only then does there exist an n such that n dust specks are worse than the torture.

    Yet there are many ways to construct a set that fails this axiom but satisfies the ones needed for a moral system. It can be postulated that while 1 torture plus 1 dust speck is worse than 1 torture alone, 1 torture is worse than any number of dust specks; a lexicographic order does exactly this (see the sketch after the list below).

    In fact, by demanding different axioms of the set of outcomes, or by taking the “set” of outcomes from objects of different categories, we can specify different meta-ethical systems.

    For example:
    category of totally ordered abelian groups — utilitarianism: there’s always a right strategy of action, and outcomes accumulate;
    category of totally ordered abelian groups that are both subgroups and suborders of the real line — the kind of utilitarianism where “torture or dust specks” holds;
    category of posets — deontology: not all situations have a most moral strategy of action, but some actions are better than others;
    category of preorders — non-cognitivism: some actions are better than others, but it doesn’t need to be consistent;
    category of sets — moral nihilism: anything goes.
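
    [To make the non-Archimedean construction concrete, here is a minimal Python sketch; it is my own illustration, under the assumption that “larger” means “worse.” Pairs (tortures, dust specks) under the lexicographic order form a totally ordered abelian group, so the first kind of utilitarianism above applies, yet the group does not embed in the real line and “torture or dust specks” fails.]

    from functools import total_ordering

    @total_ordering
    class Outcome:
        """An outcome scored on two incommensurable scales, compared
        lexicographically: tortures dominate, dust specks break ties.
        Larger means worse. This is Z x Z with the lexicographic order:
        a totally ordered abelian group with no embedding into the
        real line, i.e. it is non-Archimedean."""

        def __init__(self, tortures=0, specks=0):
            self.tortures = tortures
            self.specks = specks

        def __add__(self, other):  # outcomes aggregate
            return Outcome(self.tortures + other.tortures,
                           self.specks + other.specks)

        def __eq__(self, other):
            return (self.tortures, self.specks) == (other.tortures, other.specks)

        def __lt__(self, other):  # lexicographic comparison
            return (self.tortures, self.specks) < (other.tortures, other.specks)

    torture = Outcome(tortures=1)
    speck = Outcome(specks=1)

    # 1 torture plus 1 dust speck is worse than 1 torture alone...
    assert torture < torture + speck
    # ...but n dust specks are never worse than 1 torture, for any n:
    assert Outcome(specks=3**33) < torture

    [The Archimedean axiom is exactly what this order refuses: no n makes n dust specks exceed one torture, so the aggregation argument never gets off the ground.]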

  83. Dan says:

    Robertson is attempting the same moral argument for God that has been tried, and has failed, millions of times. We don’t need divine authority over moral decisions; we make the moral decisions.

    It’s hardwired into our brains, through millennia of biological and social evolution, that happiness is good and suffering is bad. No, those aren’t “rational” positions in the context of the entire universe. They don’t have to be. They only have to matter to us. And given those axioms, further moral laws can logically follow, starting from “don’t torture and murder people.” There’s no mystery here, and certainly no hypocrisy on the part of atheism.

  84. Peter Gerdes says:

    As someone with a philosophy degree, I think you should acknowledge more of the complexity inherent in moral anti-realism.

    For instance, I’m both a moral anti-realist AND a utilitarian. I’m a utilitarian in that I have strong moral feelings that it is better to do those things that increase overall utility. Indeed, I have no trouble saying that people *should* choose to act in ways that result in more utility (I don’t acknowledge any notion of duty or responsibility…just a partial order on the set of possible worlds).

    However, utilitarianism is not my *belief* about what some natural kind, ‘the morally good’, attaches to. Rather, what I mean by good or bad just *is* something like increases/decreases utility (really, relative to some background expectation about what is usual in such a situation, given the actor’s limitations).

    The error made by Robertson is to identify our willingness to condemn people, to call them immoral for acting in certain ways, with a belief that there is some objective feature of the universe (beyond our preference for certain kinds of worlds) that our word “morality” tracks. This isn’t surprising: unfortunately, we usually associate the word “preference” with selfish desires that shouldn’t be projected onto others, but we can have preferences, like the preference for more utility, that we hold in the strong sense of being willing to jail, kill, or condemn others over them.