You Kant Dismiss Universalizability

I.

Like most right-thinking people, I’d always found Immanuel Kant kind of silly. He was the standard-bearer for naive deontology, the “rules are rules, so follow them even if they ruin everything” of moral philosophy.

But lately, I’ve been starting to pick up a different view. There may have been some subtleties I was missing, almost as if one of the most universally revered thinkers of the western philosophical tradition wasn’t a total moron.

I was delighted to see nydwracu say something similar in the comments to my recent post:

I [now] realize that Kant is not actually completely ridiculous like I once thought he was

I don’t know if it’s just that nydwracu and I have been thinking about some of the same problems lately, but he took the words right out of my mouth.

I’m not a Kant scholar. I’m not qualified to explain what Kant thought, and it’s possible the arguments I express as Kantian here are going to be arguments of a totally different person who merely reminds me of Kant in some ways. James Donald’s objections to steelmanning are well taken, so I will not call this a steel man of a guy who is too dead to correct me if I am wrong. At best I will call this post Kant-aligned.

First, I want to take another look at one of Kant’s most-reviled arguments: that you should truthfully tell a murderer who wants to kill your friend where she is hiding.

Second, I want to talk about how I find myself using Kantian principles in my own morality.

And third, I want to talk about big unanswered questions and the reason this still isn’t technical enough for me to be comfortable with.

II.

Kant gives the following dilemma. Suppose that an axe murderer comes to your door and demands you tell him where your friend is, so that he can kill her. Your friend in fact is in your basement. You lie and tell the murderer your friend is in the next town over. He heads off to the next town, and while he’s gone you call the police and bring your friend to safety.

Most people would say that the lie is justified. Kant says it isn’t, because lying.

I think most people understand his argument as follows: you think “I should lie”. But suppose everyone thought that all the time. Then everyone would lie to everyone else, and that would be horrible.

But Kant’s categorical imperative doesn’t urge us to reject actions which, if universalized, would be horrible. That’s rule utilitarianism, sort of. Kant urges us to reject actions which, if universalized, would be self-defeating or contradictory.

Suppose it was everyone’s policy to lie to axe murderers who asked them where their friends were. Well, then axe murderers wouldn’t even bother asking.

Which doesn’t sound like a sufficiently terrible dystopia to move us very much. So let me reframe Kant’s example.

Suppose you are a prisoner of war. Your captors tell you they want to kill your general, a brilliant leader who has led your side to victory after victory. They have two options. First, a surgical strike against her secret headquarters, killing her and no one else. Second, nuking your capital city. They would prefer to do the first, because they’re not monsters. But if they have to nuke your capital, they’ll nuke your capital. So they show you a map of your capital city and say “Please point out your general’s headquarters and we’ll surgical-strike it. But if you don’t, we’ll nuke the whole city.”

You decide to lie. You point to a warehouse you know to be abandoned. Your captors send a cruise missile that blows up the warehouse, killing nobody. Then they hold a huge party to celebrate the death of the general. Meanwhile, the real general realizes she’s in danger and flees to an underground shelter. With her brilliant tactics, your side wins the war and you are eventually rescued.

So what about now? Was your lie ethical?

Kant would point out that if it was known to be everyone’s policy to lie about generals’ locations, your captors wouldn’t even ask. They’d just nuke the city, killing everyone.

Your captors are offering you a positive-sum bargain: “Normally, we would nuke your capital. But you don’t want that and we don’t want that. So let’s make a deal where you tell us where your general is and we only kill that one person. That leaves both of us better off.”

If it were known to everyone that prisoners of war always lie in this situation, it would be impossible to offer the positive-sum bargain, and your enemies would resort to nuking the whole city, which is worse for both of you.

So when Kant says not to act on maxims that would be self-defeating if universalized, what he means is “Don’t do things that undermine the possibility of offering positive-sum bargains.”
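
To make this concrete, here is a minimal toy model in Python. Every payoff number in it is invented for illustration; only the ordering matters.

    # Toy model of the prisoner-of-war bargain. Format: (your payoff, captors' payoff).
    TRUTH_AND_STRIKE = (-10, 10)   # general dies, city is spared
    NUKE_THE_CITY = (-100, -5)     # city destroyed; costly even for the captors

    def outcome(prisoners_always_lie):
        # The bargain exists only while an answer carries information. If the
        # lying policy is universal and known, the captors skip asking and nuke.
        return NUKE_THE_CITY if prisoners_always_lie else TRUTH_AND_STRIKE

    print(outcome(False))  # (-10, 10): the positive-sum bargain
    print(outcome(True))   # (-100, -5): worse for BOTH sides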

This is very reminiscent of Parfit’s Hitchhiker. Remember that one? You are lost in the desert, about to die. A very selfish man drives by in his dune buggy, sees you, and offers to take you back to civilization for $100. You don’t have any money on you, but you promise to pay him $100 once you’re back to civilization and its many ATMs. The very selfish man agrees and drives you to safety. Once you’re safe, you say “See you later, sucker!” and run off.

The selfish man’s “I’ll bring you back to civilization for $100” offer is a positive-sum bargain. You would rather lose $100 than die. He would rather gain $100 and lose a few hours bringing you to the city than continue on his way. So you both gain.

But if everyone were omniscient and knew that people who promise $100 will never really pay, or if your decision not to pay could somehow affect his willingness to make you the offer in the first place, the ability to make the positive-sum bargain disappears.
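
The same point as a toy decision problem, again with invented utilities (only their ordering matters), and with the driver able to predict your disposition, as in the omniscience case just mentioned:

    # Parfit's Hitchhiker. Utilities are invented; only their ordering matters.
    def your_utility(disposed_to_pay, driver_predicts=True):
        if driver_predicts and not disposed_to_pay:
            return -1000  # he foresees the welching and leaves you in the desert
        return -1 if disposed_to_pay else 0  # rescued; paying $100 costs a little

    print(your_utility(True))   # -1: rescued and out $100
    print(your_utility(False))  # -1000: never picked up at all
    # Against a predicting driver, being the sort of person who pays dominates.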

On this model, Kant isn’t being a weird super-anal stickler for meaningless rules at all. He’s being the most practical person around: don’t do things that spoil people’s ability to make a profit.

(and sort of pre-inventing decision theory)

(man, it’s a good thing everyone is omniscient and the future can cause the past, or else we’d never be able to ground morality at all)

III.

A while back I suggested it is wrong to fire someone for being anti-gay, because if every boss said “I will fire my employees whom I disagree with politically”, or every mob of angry people said “We will boycott companies until they fire the people we disagree with politically” then no one who’s not independently wealthy could express any political opinions or dare challenge the status quo, and the world would be a much sadder place.

This is not strictly Kantian. “The world would be a much sadder place” is not self-defeating or a contradiction.

But it could still be framed as a positive-sum bargain. In a world where all the leftists refused to hire rightists, and all the rightists refused to hire leftists, everything would be about the same except that everyone’s job opportunities would be cut in half. If the people in such a world were halfway rational, they would make a deal that rightists agree to hire leftists if leftists agree to hire rightists. This would clearly be positive-sum.

This is easy to say in natural language like this. But when you try to make it more formal it gets really sketchy real quick.

Let’s say Paula the Policewoman is arresting Robby the Robber (she caught him by noticing his name was Robby in a world where everyone’s name sounds like their most salient characteristic). No doubt she thinks she is following the maxim “Police officers should arrest robbers”. But what about other maxims that lead to the same action?

1. Police officers should arrest people
2. Everyone should arrest robbers
3. Paula should arrest Robby
4. Paula should arrest other people
5. Everyone should arrest Robby
6. Everyone should arrest EVERYONE ELSE IN THE WORLD

This sounds kind of silly in this context, but in more complicated situations the entire point hinges upon it.

Levi the Leftist, who owns a restaurant called Levi’s Lentils, finds out that his head waiter, Riley the Rightist, is a homophobe (in Levi’s defense, he thought he was safe to hire him because his name wasn’t Homer). He fires Riley, who ends up on the street.

Candice the Kantian condemns him, saying “What if that were to become a general rule? Then nothing would change except everyone only has half as many job opportunities.”

Levi says “Oh, I see your problem. You think my maxim is ‘fire people with different politics than me’. But that’s not my maxim at all. My maxim is ‘fire people who are homophobic’. If that becomes universalized, it will be a great victory for gay people everywhere, but no one whose politics I agree with will suffer at all.”

In fact, Levi might claim his maxim is any one of the following:

1. Everyone should fire people they disagree with politically
2. Everyone should fire people who are politically on the right
3. Everyone should fire people who discriminate against minority groups
4. Everyone should fire people who are homophobic
5. Everyone should fire people who are mean and hateful
6. Everyone should fire people who hold positions that are totally beyond the pale and can’t possibly be supported rationally

(before I get yelled at in the comment section, I’m not necessarily claiming all these maxims accurately describe Riley, just that Levi might think they do)

(5) runs into this problem where you can never say “fire people who are mean and hateful” without it in fact meaning “fire people whom you think are mean and hateful”. Presumably all the rightist bosses will find good reasons to think their leftist employees are mean and hateful.

There seems to be some sense in which we also want to protest (2), and say that if Levi is allowed to use (2), then that instantly morphs into rightist bosses being allowed to say “everyone should fire people who are politically on the left”. But just saying “universalizability!” doesn’t automatically let us do that.

(3) seems even sneakier. It is in fact the maxim promoted by the people who are actually doing the firing, since they seem to have some inkling that universalizability and “fairness” are important. And it sounds totally value-neutral and universalizable. And yet I feel like if we allow Levi to say this, then some rightist will say actually his maxim is “everyone should fire people who want to undermine traditional cultural institutions”, and the end result will be the same old “job opportunities halved for everyone”.

IV.

This is a hard problem. The best solution I can think of right now is to go up a meta-level, to say “universalize as if the process you use to universalize would itself become universal”.

Suppose I am very greedy, and I lie and steal and cheat to get money. I say “Well, my principle is to always do whatever gets Scott the most money”. This sooooooorta checks out. If it were universalized, and everyone acted on the principle “Always do whatever gets Scott the most money”, well, I wouldn’t mind that at all.

But if we say “universalize as if the process you use to universalize would itself become universal”, then we assume that if I try to universalize to “do what gets Scott the most money”, then Paula will universalize to “do what gets Paula the most money” and Levi will universalize to “do what gets Levi the most money” and we’ll all be lying and cheating and stealing from one another and no one will be very happy at all.
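
Here is that double universalization as a sketch, with made-up names and payoff numbers: universalizing my maxim's content keeps me as the sole beneficiary, while universalizing the process that generated it substitutes each agent into the beneficiary slot.

    # Universalizing my maxim's CONTENT vs. the PROCESS that generated it.
    agents = ["Scott", "Paula", "Levi"]

    def universalize_content(beneficiary):
        # Everyone acts on "do whatever gets <beneficiary> the most money".
        return {a: (10 if a == beneficiary else -10) for a in agents}

    def universalize_process():
        # Everyone runs the same generating process on themselves:
        # "do whatever gets ME the most money" -- universal lying and stealing.
        return {a: -10 for a in agents}

    print(universalize_content("Scott"))  # {'Scott': 10, 'Paula': -10, 'Levi': -10}
    print(universalize_process())         # {'Scott': -10, 'Paula': -10, 'Levi': -10}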

(Kant notes that this also satisfies his original, stricter “self-defeating contradiction” criterion. If we all try to steal from each other, then private property becomes impossible, the economy collapses, and the stuff we want isn’t there to steal. I don’t know if I like this; it seems a little forced. But even if contradictoriness is forced, badness seems incontrovertible)

As for Levi, he knows that if he universalizes to “everyone should fire people who discriminate against minority groups”, his process is “pick out a political value that’s important to me and excludes a lot of potential employees, then say everyone should fire people who disagree with it”. This is sufficient to assume rightists will do the same and we’ll be back at half-as-many-jobs.

Next problem. Suppose I am a very rich and very selfish tycoon. I say “No one should worry about helping the needy”. I am perfectly happy with this being universalized, because it saves me from having to waste my time helping the needy. Although other people also won’t help the needy, I’m a super-rich tycoon and that’s no skin off my back.

We can climb part of the way out of this pit with meta-universalizability. We say “If I say things like this, everyone will only act on maxims that benefit them personally and appeal to their own idiosyncratic characteristics, rather than the ones that most benefit everyone.”

But I worry that this isn’t enough. Suppose I’m not just a tycoon, I’m a super-rich and powerful tyrannical king. I come up with maxims like “Everyone do what the tyrant says or be killed!” Candice the Kantian warns “If you do that, everyone will come up with maxims that benefit them personally, and the moral law will be weakened.”

And so I kill Candice for disagreeing with me.

If you are so much stronger than other people that you are immune to their counter-threats, you can get away with doing pretty much anything under this perversion of not-at-all-like-Kant we’ve wandered into.

We might have gotten so far from Kant at this point that we’ve stumbled into Rawls. Put up a veil of ignorance and the problem vanishes.

V.

What about utilitarianism?

I would love to universalize the maxim “Do whatever most increases Scott’s utility”.

Given the concerns about meta-universalizability above, I might end up instead wanting to universalize “Do whatever most increases global utility”.

This seems certain, maybe even provable, if you throw in the veil of ignorance accessory.

Utilitarianism has a lot of the same problems universalizability does. A very stupid utilitarian would automatically condemn Levi for firing Riley since now Riley is unemployed and this lowers his utility. More sophisticated utilitarians would have to take into account the various society-wide effects of Levi setting a precedent here. I think that’s what Mill’s rule utilitarianism tries to do and what precedent utilitarianism tries to do as well. The problem is that it’s really hard to figure out what rules and precedents have how much weight. Universalizability kind of plows through some of those objections like a giant steamroller. It probably prevents a couple of little incidents where you could steal something or kill someone to gain a little extra utility, but it more than makes up for it by vastly increasing social trust and the ability to make positive-sum deals.

I’m not sure whether consequentialism is prior to universalizability (“universalize maxims because if you don’t you’ll end up losing out on possible positive-sum games and cutting your job offers in half”), whether universalizability is prior to consequentialism (“be a consequentialist, because that is a maxim everyone could agree on”), or whether they’re like a weird ouroboros constantly eating itself.

I think maybe the idea I like best is that consequentialism is prior to universalizability is prior to any particular version of utilitarianism.

Because if universalizability is prior, that would be an interesting way to explore some of the problems with utilitarianism. For example, should we count pleasure or preferences? I don’t know. Let’s see what everyone would agree on.

Does everyone have to donate all of their money to the most efficient charity all the time? Well, if you were behind the veil of ignorance helping frame the moral law, would you put that in?

Does everyone have to prefer torture to dust specks? You’re behind the veil of ignorance, you don’t know if you’ll be a dust speck person or a torture person, what do you think?

I think this is a good point to remember the blog tagline and admit I am still confused, but on a higher level and about more important things.


194 Responses to You Kant Dismiss Universalizability

  1. Protagoras says:

    I do fancy myself something of a Kant scholar,* and what you say in sections I and II does not sound like a totally unreasonable interpretation to me. The issues you raise in III are more complicated; I think it’s impossible to address without bringing up issues of retribution. Kant thought that the correct way to police people’s behavior, to enforce willingness to go along with properly universal rules, was to treat anti-social acts as if they were intended to be universal by the person engaged in them, and treat the person according to their own (implicit) principles; e.g. the murderer is following some principle that justifies murder when it is sufficiently advantageous, it is pretty darn advantageous to get rid of murderers, so by the murderer’s principle, he should be executed, and out of respect for the murderer’s rationality, we should apply his own principles to his own case. Obviously, that’s wildly oversimplified, but hopefully it’s enough to establish the general idea for purposes of discussing your example.

    This account of retribution points in a couple of different directions on the kind of example you cite. First, it does seem that for Kant it is acceptable (indeed mandatory) to be intolerant of intolerance; that is the only time it is allowed, and required. But on the other hand, and probably more importantly in the specific case you cite, there don’t seem to have been any intolerant acts, and Kant definitely only permits retribution for things you actually do; no thought police for him.

    * Not a Kantian, though. Still, like you, I used to think Kant was completely silly, and have over time come to think that there’s actually a lot of insight there.

  2. gunlord500 says:

    “The world would be a much sadder place” is not self-defeating or a contradiction.

    Well, it is pretty undesirable, which is close enough, I’d say.

    It’s not (entirely) a droll nitpick on my part either, though, since I think the “self-defeating” part of Kant’s deontology could use some examination. Is it not possible some dystopian outcomes which seem terrible to everyone else might not be “self-defeating” for others, even a small minority of disturbed people? Let’s take your example of not lying to a selfish person who saves your life in return for cash. Perhaps a radical Spencerian Social Darwinist would encourage you to lie, because he would like to live in a world where no-one bothers to help people trapped in a desert even for personal gain, because anyone dumb enough to get trapped in a desert deserves to die so their stupidity won’t be passed on to the next generation.

    Or, an even more extreme example, the Kantian order not to kill. It’s wrong to murder, supposedly, because if murdering was a universal law, everyone would be dead, including you. This would, at first glance, be self-defeating. But what if that’s exactly what you want? There are many spree killers who want to kill themselves as much as they want to kill everyone else (I’m thinking of the “suicide by cop” people and the school shooters who kill as many as they can before turning their gun on themselves), and a world in which every single person was dead would be their greatest wish. I suppose this is my skepticism of Kant’s ethics–it works well enough for people who have a good, mostly-held-in-common view of how they’d like the world to be, but not so much so when it comes to deviants.

    • Zakharov says:

      I’ve got a fairly similar objection to Kant. The argument against stealing works just as well as an argument against finding a cure for a disease. If we found a cure for all the diseases, there’d be no diseases left to cure, making disease-curing self-defeating.

  3. jaimeastorga2000 says:

    I think most people understand his argument as follows: you think “I should lie”. But suppose everyone thought that all the time. Then everyone would lie to everyone else, and that would be horrible.

    For what it’s worth, the few times Kant’s first formulation of the categorical imperative was mentioned in my university classes the professors explained pretty clearly that Kant’s objection was to self-contradictory universal laws, not horrible consequences. But I suppose it’s possible that most people who hear about the imperative hear a badly mangled pop-version of it, much like with Christopher Columbus. May I ask how you formed this impression?

    Suppose you are a prisoner of war. Your captors tell you they want to kill your general, a brilliant leader who has led your side to victory after victory. They have two options. First, a surgical strike against her secret headquarters

    Must you do this? You’re one of the best non-fiction writers I know, but it completely breaks my immersion when you insert such a jarring example of progressive signalling in your text. Just like when you use Spivak pronouns.

    • Onanymous says:

      such a jarring example of progressive signalling

      To people used to communities further left, it’s pretty usual and not jarring at all. Seems like a semi-arbitrary convention thing to me… your side prefers “similarity to current real life”-style in hypothetical pronoun gender and the other side prefers “more ideal future which we can maybe increase the chances of a bit by using pronouns this way”-style. I’m fine with either.

      • Amanda L. says:

        “more ideal future which we can maybe increase the chances of a bit by using pronouns this way”-style.

        I love it when Scott uses female pronouns for exactly this reason. It was jarring at first, but now it feels natural, in part because I’ve also been reading other fiction with female leaders as well. And it’s amazing to me that it feels natural now, that I have this mental archetype of a female commander to be drawn upon easily the way I do with my mental archetype of a male commander. Because as humans we can’t help but categorize people into tropes, and so the mental availability of a trope makes it so that real-life people can step into those roles without it being jarring (including to themselves). Even though I don’t even want to be a military general the fact that I now have a mental archetype of one that is my gender makes me feel a bit more free.

        • Andy says:

          Also, female commanders have existed and done pretty darn well in history – Boudica (though her campaign didn’t end that well) or Trieu Thi Trinh or the Trung Sisters, or St. Olga of Kiev (though she was more “backstabbing genius” than “tactical genius,” you have to admire her ability to lure her enemies into traps) and probably others I’m forgetting. Though I’m not sure Kant would admire Olga’s predilection for lying to her enemies, it worked out pretty well for her.
          (I’m not sure on that last point, because Kant makes my head hurt.)
          Trieu Thi Trinh:
          http://www.badassoftheweek.com/index.cgi?id=72264235774
          Trung Sisters:
          http://en.wikipedia.org/wiki/Tr%C6%B0ng_Sisters
          St Olga of Kiev:
          http://en.wikipedia.org/wiki/Olga_of_Kiev
          Those are just off the top of my head.

        • jaimeastorga2000 says:

          @Andy: “The marvel is not that the bear dances well, but that the bear dances at all.”

    • ozymandias says:

      I am mostly self-educated in ethical philosophy and I definitely heard the badly mangled pop cultural version of it.

    • Kaj Sotala says:

      Must you do this? You’re one of the best non-fiction writers I know, but it completely breaks my immersion when you insert such a jarring example of progressive signalling in your text.

      Q: So, why do you write these strong female characters?
      A: Because you’re still asking me that question.
      (Joss Whedon)

      If someone finds it this jarring to see a text briefly imply that a brilliant military leader could be female, that sounds to me like a pretty strong indication that it is indeed necessary to continue implying that.

      • Berna says:

        +1, Like, Upvote!

      • nydwracu says:

        I write a novel where every single character won the lottery; one won it twice. When people react with confusion, I respond that it’s not impossible. Does my response make any sense?

        No, it doesn’t. There’s a difference between impossibility and statistical unlikelihood.

        I write a novel where every single character won the lottery; one won it twice. When people react with confusion, I respond that it’s not impossible. Also, I work for the state lottery, in marketing.

        The point isn’t that it’s impossible; it’s that something that’s possible but statistically unlikely (what percentage of brilliant military leaders in the real world have been female?) is being intentionally inserted — in a manner that signals elthede membership.

        (Yes, everything is political; no, having an all-male cast of brilliant military leaders isn’t political, unless you’re in a thede that makes doing such a thing elthedish. That’s just how signaling works.)

        That is: the objection is to the signal. From the point of view of someone opposed to progressivism, it’s a flashing neon light saying “the author is aligned with a faction that pushes an agenda antithetical to the maintenance of a functioning civilization.”

        • MugaSofer says:

          That sounds like an interesting novel. I’m not sure how it’s a bad thing.

        • peterdjones says:

          Scott is opposed to Reaction. You know that. What’s objectionable about someone sending an accurate signal of their beliefs? You feel excluded? You want warm progressive fuzzies from being included…? Awww!

        • Randy M says:

          Peterdjones: Maybe it comes off as descriptive when it is really prescriptive? I.e., propaganda. Wish masquerading as tendency.

        • peterdjones says:

          @Randy

          An advertisement for any untried theory must be wish masquerading as tendency, e.g. “reaction fixes everything.”

        • Randy M says:

          I’m sure there’s some irony to unpack with a progressive attacking reaction for its novelty; that aside, I thought Scott was primarily interested in uncovering truth, rather than marketing.

        • Multiheaded says:

          Of course all Reaction is fundamentally novel! To succeed, the counter-revolution must by definition be more innovative and future-oriented than the triumphant revolution; it cannot be so suicidal as to wish to roll things back exactly to the moment where the old order began to rot away – since the old order did rot away, it failed, it cannot and should not be mechanically recreated! If the past is dead, and the present is Jacobinism, then Reaction needs to place itself as the future. You get this from the ideologues of the Confederacy, from Nietzsche, from M.M…

          When such a complete convulsion has shaken the State, and hardly left any thing whatsoever, either in civil arrangements, or in the Characters and disposition of men’s minds, exactly where it was, whatever shall be settled although in the former persons and upon old forms, will be in some measure a new thing and will labour under something of the weakness as well as other inconveniences of a Change. My poor opinion is that you mean to establish what you call ‘L’ancien Regime,’ If any one means that system of Court Intrigue miscalled a Government as it stood, at Versailles before the present confusions as the thing to be established, that I believe will be found absolutely impossible; and if you consider the Nature, as well of persons, as of affairs, I flatter myself you must be of my opinion. That was tho’ not so violent a State of Anarchy as well as the present. If it were even possible to lay things down exactly as they stood, before the series of experimental politicks began, I am quite sure that they could not long continue in that situation. In one Sense of L’Ancien Regime I am clear that nothing else can reasonably be done. – Edmund Burke

          Folks, you absolutely need to read some Corey Robin.

        • Multiheaded says:

          @Nydwracu

          “Signals elthede membership” is a meta-recursive turn of phrase.

        • peterdjones says:

          @randy

          The tried theory of reaction is a failed one… mediaeval is not a compliment… Neoreaction is untried.

        • Randy M says:

          That’s an easy assertion, with some truth.
          Mostly that the social mores of the past combined with current technology would be novel enough to be described as untried; though the same could be said for the (already outdated) social mores of the 00’s with the current technology of today vs ten years ago.

          But I think there’s more non-truth to that, which I don’t have the time, data, or rhetorical skill to really unpack (which was why I was happier to let it pass with the throwaway line you both called me on), involving the metrics by which the past would be said to fail, who judged such things, just how different neo-reaction is compared with reaction, whether the old order rotted away or was amputated overzealously, and so forth.

          I mean, certainly, the past (reaction) failed compared to the present (‘progress’) inasmuch as they were in a contest for dominance (rather tautologically), but whether it failed as a guiding paradigm compared to the present – well, that takes us back to the “Anti-reaction FAQ” and to just how much technological progress masks the problems of “social progress” (or, to be fair, is dependent upon it).

        • Zathille says:

          I’d just like to say the most frustrating part of these discussions is the tendency of ‘haha, gotcha!’ arguments to be employed like this. While it can be said that to criticize another’s tone is an indication of not having an argument, I think most people would much prefer a counter-argument that is not coated in snark or smugness.

          “Of course all Reaction is fundamentally novel! To succeed, the counter-revolution must by definition be more innovative and future-oriented than the triumphant revolution; it cannot be so suicidal as to wish to roll things back exactly to the moment where the old order began to rot away – since the old order did rot away, it failed, it cannot and should not be mechanically recreated! If the past is dead, and the present is Jacobinism, then Reaction needs to place itself as the future. You get this from the ideologues of the Confederacy, from Nietzsche, from M.M…”

          I think this will do as an example, no offense intended, but couldn’t this argument be framed in such a way as to lower the signal-to-content ratio? I’d wager it’d make for better communication.

        • Multiheaded says:

          I basically seem to alieve that being a jerk at people magically opens up their perspectives and expands the Overton window. That, and I also kind of wanted to say something cool.

          Because I’ve absorbed too much of this horrible, awful online culture of “intellectual provocation” and “insight porn”. I belong in a re-education camp.

          P.S.: Oligopsony is guilty of worse, but his comments are also more content-heavy and intellectually rigorous than mine. He actually refers people to serious academic sources and shit. Perhaps I shouldn’t delude myself into thinking I can get away with it like he does.

        • Zathille says:

          Well, yes, the more content in a post, the harder it is to attack its rhetoric, as it makes up a lesser share of its substance. But I should not pretend my posting now was the result of rigorous timing. Indeed, much like you, I wanted to get something off my chest; your post was merely the one that reminded me of the issue. I did not intend it as a personal attack.

        • Multiheaded says:

          VALIS knows I of all people am not going to complain about personal attacks…

        • Oligopsony says:

          That was a very flattering way to put it, but I agree with your criticism (and self-criticism) here, and hadn’t put it to much conscious consideration. Thanks!

        • Multiheaded says:

          @Oligopsony:

          That was really not intended as even slightly negative. I genuinely do envy you, having the learning and moxie to unapologetically go around knocking some sense into people from a really locally unpopular standpoint and be perceived as sensible and valuable.

          (DAE euphorically enlightened by comrades’ critical praise?)

        • Mark says:

          It’s pretty much standard nowadays for articles in analytic philosophy to use “she” as much as “he.” Since IIRC Scott was a philosophy major, I took his pronoun usage as a signal of his academic background more than anything else.

      • peterdjones says:

        Plus another one for the Joss Whedon quote.

      • Randy M says:

        And here I thought the world would be peaceful if the leaders were women. Now I know that women are also just as good at war as men.

        • ozymandias says:

          “The world would be peaceful if the leaders were women” is sexist garbage. Unfortunately, feminists are not free from peddling sexist garbage.

        • So far as getting a peaceful world is concerned, the question isn’t whether women are good at war, it’s whether women are as warlike as men. It’s hard to have a sensible opinion on that question because there have never been enough women leaders to see whether they’re less likely to go to war with each other than men are.

        • peterdjones says:

          One can, thankfully, be good at the bagpipes without being compelled to play the bagpipes.

        • Randy M says:

          Thanks! I guess I should have realized that women are both less prone to war, and better at it when compelled to partake (by non-gender specific parental instinct, perhaps). I guess it’s common sense, when you put it that way!

    • MugaSofer says:

      Spivak pronouns are pretty jarring, yeah, but I don’t really find female pronouns jarring. Everyone has different irritants, though.

      On the other hand, it might just be that, y’know, some leaders are female. A world-model thing.

      Now, if Scott used “she” ALL the time … I can see how that would be odd.

    • David says:

      This seems to imply that you can only immerse yourself in fiction if it does not stray very far from the majority position of current reality. I hope that is not actually true. In any case, remember that there were times not so very long ago when a fictional female doctor, for instance, or politician, or bagpiper, would have been just as anomalous, and note that no one reading a piece of fiction set in the industrialised West would bat an eyelid now at the idea that women can practice medicine, participate in government or play the bagpipes.

      I will agree that Spivak pronouns sound a bit inelegant. I am a big fan of gender-neutral singular ‘they’ for that reason; you can divest your language of structural sexism in a way that doesn’t draw attention to itself, except in occasional odd constructions like ‘themself’. Would you be okay if it had been phrased ‘a surgical strike against their secret headquarters’?

      • Anonymous says:

        I think Scott has abandoned Spivak for “they,” but that isn’t the end of the story. This essay may be helpful in identifying other issues. In particular, it would be very difficult to use “they” in the later examples with named characters. The unnamed general is a middle case.

        • David says:

          Yeah, I don’t have any disagreements with that link. Keep on shifting the Overton window 🙂

    • JBay says:

      It depresses me that the mere suggestion of a gifted female military commander is so outside the boundary of your imagination as to be ‘immersion-breaking’. I wonder what the world must look like through your eyes.

      But thinking it over, the fact that you ask “must you do this?” is, in fact, strong evidence that doing this remains necessary.

  4. blacktrance says:

    You’re still interpreting Kant as a consequentialist of some kind, which is tempting because, when you’re a firm consequentialist, you can’t understand why anyone would be a deontologist. Nevertheless, Kant is a deontologist, and as Alicorn wrote here, deontology is not consequentialism. “Don’t do things that undermine the possibility of offering positive-sum bargains” is a rule that cares about utility, and is thus non-deontological.

    As for Kant vs Rawls vs utilitarianism, I recommend giving up on all of them and going with simple Hobbesian contractarianism instead.

    • Protagoras says:

      The phrasing may be unfortunate, but I don’t see that Scott has really committed to any more than Kant already does. Kant rejects principles when they are self-undermining, and they’re self-undermining when their universal form would involve them obstructing rather than furthering the purposes for which they were intended. If appealing to the purposes or goals of the principles there is covert consequentialism, then Kant himself is a consequentialist, and if it isn’t, then I don’t think Scott’s version is notably more troublesome.

      • blacktrance says:

        “Self-undermining” and “undermining the possibility to offer positive-sum bargains” are not the same thing. The former is objected to because it can’t be universalized, the latter may be possible to universalize, just with bad consequences.

        To illustrate a related difference, in the Parfit’s Hitchhiker scenario, the faux-Kantian consequentialist would say that you should endorse a moral law that forbids lying because such a law would create the possibility of positive-sum bargains, and that the law is ultimately justified by those bargains. The Kantian would say that you shouldn’t lie because you’d be using the selfish man as mere means, and people should be treated as ends, not means – Kant would say that the faux-Kantian is not motivated morally.

    • Matthew says:

      Your link isn’t pointing where you meant it to go.

    • suntzuanime says:

      I feel like the disclaimer sort of covers that. A sane person steelmanning a for-real deontologist is going to make him out to be a deontologically-flavored consequentialist, because deontology is crazy. If you want to get something useful out of steelmanning Kant, you can’t leave him deontological.

      • gunlord500 says:

        That may be true, but if it is…man, I’m starting to wonder if steelmanning, as well-intentioned as it may be, might be a fool’s errand.

        • AR+ says:

          It could still be useful if you consider steelmanning to be like the Principle of Charity, in that it is for YOUR benefit, not your opponent’s, however much it might superficially seem otherwise given how politics normally works. In both cases, you can interpret them as rules designed to prevent you from missing good arguments due to your own prejudices, not necessarily to reflect anybody’s actual opinion.

          Although it isn’t really the intent of the idea, trying to justify a conclusion in terms of world-views that would never have produced it, as James Donald objects to, is exactly what can make it a useful sort of thought-puzzle for producing arguments that no actual world-view would ever think to make from within its own context, some of which might be very good, or very interesting.

          But for steelmanning in its original form, I think you have to already be very close to the other person’s position on some basic values, or you will have nothing to argue about, or will have to instead try to first find other things (which probably won’t be “arguments” as such) that would potentially change your values to theirs, and see if you still keep the ones you started with.

        • Perhaps it would work for steelmanning to be clearly labelled, especially when it makes a major shift in the argument.

          Note: I have a high incentive to comment because it’s the only way for me to get email notifications. For ethical reasons (consequentialist, I think), I limit myself to relevant comments, but I’d rather have the option of switching the comment page between threaded and newest first.

        • peterdjones says:

          The need for steelmanning arises because of a communication problem, where the listener can’t make sense of the speaker’s comment in the listener’s terms and so has to start making guesses about the speaker’s assumptions. But the listener is likely to have very imperfect knowledge of the speaker’s assumptions for the same reasons that the communication gap arose in the first place.

          Since that is kinda logically inevitable, it is kinda unfair to put all the blame on the steelmanner.

          The best efforts on the part of the speaker to guess the listener’s terms and assumptions are likely to fail as well, but if both make an effort, they might meet in the middle.

      • blacktrance says:

        Perhaps I’m misunderstanding how steelmanning works, but I thought that in a successful steelmanning, the person whose argument is being steelmanned should agree with the steelmanned argument. As I understand it, a steelmanned argument is the strongest formulation of an argument, not merely something that looks kind of similar and reaches the same conclusion.

        • suntzuanime says:

          I don’t think that works. Much of the value of steelmanning is about taking the crazy out of an argument and seeing what you can construct from what’s left. Crazy people will naturally object to this.

          I don’t know that we disagree on any actual point: it just seems like we need two different terms to describe these two different things. I propose we call the type of opposing-argument improvement where you use only the elements of the original argument “ironmanning”, and the type where you mix carbon into the argument to make it stronger “steelmanning”.

        • AR+ says:

          So, if I’ve been following this place’s comments closely enough, we now have proposals for effigy metaphors that form a spectrum, from weakest to strongest, consisting of:

          Strawman: Argument constructed to make an extremely poor case for its position, so as to easily destroy the actual positions in effigy.

          Tinman: Actual arguments or examples from one’s opponents specifically selected for their weakness or non-representativeness of its class, so as to be easily destroyed without actually making stuff up.

          Ironman: Actual arguments or examples from one’s opponents specifically selected for their strength, so as to ensure that they will either persuade you, or be something worth defeating.

          Steelman: Arguments based on the positions of one’s opponents except altered so as to make them more convincing to your own side, even if it means adding arguments or appealing to values your opponents do not actually hold. Hard Mode for your convictions. If using this does not occasionally change your mind then you’re almost certainly not doing it sincerely.

        • Alrenous says:

          @ AR+

          +1

          Minor addendum: the ironman should be selected so that they pass the ideological Turing test as well as possible.

      • Josh Haas says:

        Since deontology seems to be the underdog in this discussion, I wrote a brief post explaining why I disagree with consequentialist, specifically utilitarian, thinking: utilitarianism tries too hard to be right.

        I feel pretty strongly that deontology isn’t crazy, or if it is, it’s not quite as crazy as consequentialism. I’d be interested in the opinions of smart consequentialists to whom it does seem obvious that you’d have to be nuts to be a deontologist.

        • kappa says:

          That post is amazing. It expresses a lot of things about utilitarianism that make sense to me as though I have been thinking them all along even though I never quite put them into concepts so clear.

  5. Cyan says:

    Levi might think Homer was a homersexual instead of a homerphobe.

  6. Matthew says:

    [I am not even remotely a Kant scholar, so based on my poor understanding of history, science, and ethics] It seems like Kant chose a spectacularly poor example to make his point, assuming you are correct — murderers are already refusing to follow the closest thing we have to a truly universal principle of behavior, so refusing to extend rules that would apply everywhere else to dealings with murderers doesn’t seem like a very long slippery slope.

    • Protagoras says:

      That is a wrinkle in the murderer scenario; Kant advocates retributive punishment. If the murderer is already guilty of some wrongdoing, is it so obvious that misleading him isn’t part of the appropriate response to his current wrongdoer status (the rules on retribution are incredibly confusing)? And if the murderer is not guilty of any wrongdoing, by what means have we become so certain that he’s going to murder? But it is true that supposing we somehow have such perfect psychic knowledge, Kant is not at all willing to punish people in advance for what we believe they are going to do, which is perhaps the point of his response to the example (not an example he came up with; someone wrote to him and asked him about the case).

  7. Matt says:

    I’m not entirely clear on your position. As an individual, at the moment of decision and with control over only my own actions, do you think I have any good reason to act on a universalisable maxim, rather than simply consequentially maximising? (Understanding that the latter of course requires full consideration of e.g. precedent-setting and trust-erosion.)

    I’ve never seen an argument for this approach that doesn’t rely on one of the following: weird metaphysical premises; the assumption that I am in fact transparent and incapable of deception; the assumption that for some reason I should pretend that I am choosing for everyone in similar circumstances, not just myself; or a non-standard decision theory that (implicitly) assumes either backward causation or a paradoxical form of compatibilist free will.

    (Obviously there are good reasons to preach moral codes other than act-consequentialism, and to set up systems to enable and encourage positive-sum co-operation, but this seems to me an entirely separate point.)

    • Elissa says:

      I think the argument Scott favors is the one involving non-standard decision theories?

      There may be another answer: The question of How to Be Good is deeply compelling, and Scott, like Aristotle, intuits that this is what ethics is about. Any system that gives an individual a reason for acting ethically other than “because it is right” isn’t even really ethics at all.

      • Xycho says:

        A system which presents ‘because it is right’ as the reason to ‘act ethically’ is basically circular, and somewhat creepy to boot. I’m a moral relativist (approximately) for the same reason I’m an atheist: the only coherent understanding of the alternative involves the existence of some sort of supernatural element – the Platonic ideal applied to moral frameworks, effectively. Ethics are as real as deities and contracts, not as real as gravity; i.e. they’re all invisible, but you can ignore ethics and gods and nothing will happen to you unless someone else forces it to. If you walk off a cliff it hurts whether you believe in it or not.

        On that basis, I could understand an ethical system based on the physiological ‘ugh’; there’s an unpleasant sensation attached to doing some things (For example, I’ve been told by a vegetarian that they experienced literal, physical pain when they came rabbit hunting. I had an acquaintance who never lied if he could help it because it made him feel ill.) which I’m sure is an excellent proxy for a personal moral compass.

        • blacktrance says:

          Ethics are as real as deities and contracts

          But deities and contracts are real in very different ways! That is, deities aren’t real at all, and contracts are real even though they’re not physical entities.

        • Xycho says:

          Not at all; they’re both utterly imaginary constructs which affect the behaviour of those who agree to pretend they’re real. The same goes for concepts like honour, and even ‘rights’ and ‘laws’. We pretend these things have substance, and attempt to construct a world in which there are social consequences for not believing in them, but ultimately they’re all the same amount of imaginary.

        • blacktrance says:

          Nothing external will automatically happen to you if you act unethically – neither Zeus nor moral law will smite you with a lightning bolt if you commit murder. But lumping deities and ethics together in the category “imaginary” is misleading. Claims about deities are claims about the external world – “Zeus will smite you with a lightning bolt” is a similar claim to “This brick will fall when I drop it”, it’s just that the claim about Zeus happens to be false because Zeus is imaginary. So, claims about deities are false claims about the external world. Claims about morality are more complicated, partially because people rarely mean exactly the same thing when they talk about it. But perhaps I can shed some light with an example.

          Suppose I make the moral claim “You shouldn’t murder”. What does that mean? One thing it could mean is something similar to what “You should defect in a one-shot prisoner’s dilemma” means in game theory. Lightning won’t rain down from the heavens if you cooperate, but that doesn’t change the fact that you should defect. It’s not a physical fact except in the sense that you will experience more utility if you defect and would thus be contradicting your own desires if you were to cooperate – you can’t say “Defecting would get me a higher payoff so I should defect, but ‘should’s are imaginary so I’ll cooperate” because that would contradict the fact that defecting has the higher payoff. In effect, it would be saying “I don’t want what I want”.
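
          (A minimal sketch of that game-theoretic “should”, using the standard textbook payoffs; the numbers are illustrative only:)

              # One-shot prisoner's dilemma. payoffs[(my_move, their_move)] = my payoff.
              payoffs = {
                  ("C", "C"): 3, ("C", "D"): 0,
                  ("D", "C"): 5, ("D", "D"): 1,
              }
              # Whatever the other player does, defecting pays strictly more;
              # that dominance is the sense of "should" above.
              for theirs in ("C", "D"):
                  assert payoffs[("D", theirs)] > payoffs[("C", theirs)]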

          Returning to the murder example, if I say that you shouldn’t murder, it could unpack to “If you commit murder, that is the equivalent of defecting in the prisoner’s dilemma, which is suboptimal according to your own preferences. You have the option to not commit murder in exchange for not being murdered, by making an agreement that murderers are to be punished, and taking this option would be in your interest.” If I am correct that your payoff in Murder-Is-Punished-Land is higher than your payoff in Murder-Is-Not-Punished-Land, you would be in contradiction if you didn’t endorse Murder-Is-Punished-Land. Lightning would not strike you down if you failed to endorse it, but endorsing it follows from your already existing desires – which are real.

          Though morality is a social construct, it’s not imaginary in the same sense that a deity is.

        • peterdjones says:

          You need to justify the claim that ethical objectivism always requires a supernatural metaphysics.

          You also need to justify the claim that moral relativism works at all. “Relativism works” doesn’t follow from “realism doesn’t work”, since nihilism is an option.

        • peterdjones says:

          Discussing the relative realness of deities and contracts misses the point. The question is whether they can have completely arbitrary characteristics, or whether, like a bridge, they are the kind of construction that has to be built within certain parameters to function at all.

        • ozymandias says:

          I am very puzzled about why things only existing because humans say they do means that they don’t exist. If everybody woke up one day and decided the US government wasn’t real it would stop being real, but the US government has much more ability to affect my life than Zeus.

          Also morality is a human universal, so in that sense it is real like gravity or at least real like emotions and vision and rationalization: it is a thing which brains are born to do.

        • Xycho says:

          My general use of ‘real’ is ‘would this exist as a possibility without people’? I.e. if we all woke up tomorrow and had forgotten about gravity, someone would notice and re-formalise it pretty fast. If we had instead forgotten about English, and all the books (which themselves are real) had been miraculously translated into Japanese, nobody would come up with English again. I prefer to think of ethical systems as more like languages than anything else; English isn’t a real thing, and neither is Japanese, but they’re both valid ways of accomplishing the intended function, despite the fact that they’re mutually unintelligible, because people have agreed to use them that way.

          @BlackTrance: That is an ethical system that is internally consistent; effectively ‘maximise your own utility’, with terms included for the fact that irritating people generally results in them inflicting negative utility on you. No supernatural elements needed there. You’ll also get very different answers depending on your preferences and culture – in a society in which murder didn’t carry much consequence, killing people to further another goal would be much less wrong than in one where it generally resulted in application of the electric chair.

          @peterdjones: Ethical objectivism (robust moral realism) implies that the ‘right thing’ can be discovered, and that there is some framework wherein a universal ethical system exists and could be applied if it could be found. From this it appears to follow that there must be some place where this would be stored. Morals change across time and culture, so that place appears to be external to people yet not physical, which means that it must in some manner be supernatural.

          I’m not sure what you mean by ‘moral relativism works’. Works in what sense? Moral systems themselves ‘work’, if they’re at least somewhat coherent, in that they allow the individual who holds them to interact with the world in a way they’re comfortable with, but moral relativism itself doesn’t pay rent in quite the same way. It does, however, helpfully account for the broad variation in what people consider to be ‘right’. I have an ethical system which isn’t the same as yours, which isn’t the same as the Pope’s, though they probably all share some characteristics.

          Moral nihilism is an option, but practically applied ends up looking a lot like moral relativism; either things are neither right nor wrong, in which case the only way to function is to do only the things that other people have mistakenly decided are right, or things are moral according to the attitudes and beliefs of the people involved, in which case ditto sans the ‘mistakenly’. Either way if you stab someone you get arrested, and in the absence of Hell it really doesn’t matter which is the case. It is therefore mostly useless as a tool for decision making.

          @Ozymandias: I agree that morality is (nearly) a human universal, but that doesn’t imply that the moral statements themselves are. We all have fingerprints, and they all share some characteristics, but they’re not the same nor is there a template. The US government thing is an interesting one; the government itself is definitely not real, but the people who believe in it and act as if it did are. Consequently it’s pretty much irrelevant that it doesn’t exist, because if you’re outside it the world looks exactly the same whether you believe in it or not. God is not real, but the Catholic church is, so if one is going to spend time in church anyway it really doesn’t matter whether you’re praying to something that doesn’t answer or something that isn’t there – either way the result is identical.

          Morality, on the other hand, is internal. The realness of other people’s moral systems is neither here nor there, since the problem is the existence of other people, not whatever’s inside their heads. On the other hand it does matter whether your own moral system is a copy of something real or purely a construct, since in the former case it oughtn’t be changed and people who disagree with you are actually wrong, while in the latter case it can be anything that satisfies you and people disagreeing with you is morally irrelevant, though practically important (see lynch mobs).

        • blacktrance says:

          That is a very strange definition of “real”, especially since it includes things like languages.

          You can’t have a society in which murder doesn’t have much consequence. It’s killing someone, which is a huge impact, and therefore people have a reason to agree to ban it.

          You also seem to be assuming that moral facts are not reducible to anything else, which need not be the case. If moral facts are similar to and/or a subset of game-theoretic facts, they can be discovered. I don’t know if you’d call that “external to humans” or not.

        • Zathille says:

          Maybe we’re not distinguishing between the reality of something and its concreteness? Could be a matter of definitions.

        • Xycho says:

          @blacktrance: Moral facts could be reduced to game-theoretic facts, iff ‘right’ is chosen to be ‘things which maximise my preferences’. To take the single-shot Prisoners’ Dilemma as an example: the ‘right’ thing to do is cooperate (be nice/kind/honour among thieves, whatever), while the ‘correct’ thing to do is defect (see the payoff sketch below). A moral system where the two came out the same way in all such cases would be comprehensible and pleasantly external, but it wouldn’t be the only valid moral system out there. It would probably look a lot like utilitarianism, actually.

          ETA: Having read your most recent blog post, it would most resemble contractarianism, which I had never heard of before.

          We also differ on the ‘people dying is a big deal’ thing, but that’s a whole different discussion, I think.
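
          To make that single-shot case concrete, here is a minimal Python sketch with the standard textbook payoffs (illustrative numbers, not anything from the thread): defecting is the dominant, ‘correct’ move against either response, yet both players prefer mutual cooperation to mutual defection.

          ```python
          # Standard one-shot Prisoner's Dilemma payoffs (made up for illustration).
          # payoffs[(my_move, their_move)] = my payoff
          payoffs = {
              ("C", "C"): 3, ("C", "D"): 0,
              ("D", "C"): 5, ("D", "D"): 1,
          }

          # The "correct" move dominates regardless of what the other player does...
          for their_move in ("C", "D"):
              best = max(("C", "D"), key=lambda my: payoffs[(my, their_move)])
              print(f"against {their_move}, the dominant move is {best}")  # D both times

          # ...yet the "right" outcome (mutual cooperation) beats mutual defection.
          print(payoffs[("C", "C")] > payoffs[("D", "D")])  # True
          ```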

      • Matt says:

        Thanks. I suspect you’re right on the first point — given the Less Wrong connection, and the orthodoxy there of one-boxing on Newcomb’s Problem — and I would be interested to see Scott explicitly defend that approach.

    • Caspian says:

      Yes, I think it was reading a compatibilist view of free will on Less Wrong, and finding that reasonable, that made Updateless Decision Theory or Timeless Decision Theory seem somewhat plausible to me. And that implies something like Kant’s categorical imperative, which I’d previously considered to be silly.

  8. Onanymous says:

    …did you miss when this came up on Less Wrong?

    …oh…

    • Scott Alexander says:

      I had forgotten about that, yes.

      But if you’re not a strict Kantian, it’s possible to combine universalization and not blowing up the world.

      All you need to do is think “Suppose everyone disobeyed their oath when they thought following it would destroy the world. Then where would we be?”

      And the answer is – in a world where a couple of people make the military less efficient, but the world isn’t destroyed.

      One could make a stronger counterargument based on self-defeatingness – if the higher-ups knew that lower-downs might disobey their oath when it came time for mutually assured destruction, they would automate the whole thing. To that I might counterargue that higher-ups also want there to be a little bit of leeway in when you do or don’t destroy the world.

      If it were a matter of him being *absolutely certain* that American missiles were heading his way, and a choice between firing back or sacrificing Russia to prevent the whole world from being destroyed, I think there’s a good decision theoretic argument for going ahead and firing back.

      • blacktrance says:

        That sounds like adding epicycles upon epicycles to the argument until you make it reach the conclusion you want. I could easily say that the higher-ups wouldn’t want the leeway to be in the hands of some relatively low-ranking soldier. I’m sure you could come up with a counterargument to that, and I a counterargument to the counterargument, and so on, but it doesn’t sound like something that would get us closer to the truth.

      • MugaSofer says:

        “If it were a matter of him being *absolutely certain* that American missiles were heading his way, and a choice between firing back or sacrificing Russia to prevent the whole world from being destroyed, I think there’s a good decision theoretic argument for going ahead and firing back.”

        There are a few problems with this line of argument, the main one being that you simply can’t be certain whether America has actually launched.

        So in practice, you either precommit to launching when your sensors say they have (and doom the world), or precommit not to launch at all.

        • Or you could instead precommit to flipping a coin to choose between the two, at the time that the sensors tell you that missiles have launched. That might still be an effective deterrent, if the aggressors judge that the positive utility from the hypothetical attack is less than half the negative utility of the hypothetical counterattack. And it would give a 50% chance of saving the world if a sensor mistake happens.
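
        To make the coin-flip precommitment concrete, here is a minimal sketch with hypothetical utilities (none of these numbers are from the comment): deterrence holds whenever the aggressor’s gain from attacking is less than half the cost of the counterstrike, and a sensor error only dooms the world half the time.

        ```python
        # Hypothetical utilities from the aggressor's point of view.
        gain_from_attack = 10.0       # assumed payoff of a successful first strike
        cost_of_counterstrike = 30.0  # assumed disutility if the coin says "fire back"
        p_retaliate = 0.5             # the precommitted coin flip

        ev_attack = gain_from_attack - p_retaliate * cost_of_counterstrike
        print("attack deterred:", ev_attack < 0)  # True: 10 - 0.5*30 = -5

        # And on a false alarm, the counterstrike (and the end of the world)
        # happens with probability 0.5 instead of 1.
        ```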

  9. Oligopsony says:

    I like this, and I think it helps articulate where exactly we disagree: a further universalization of “everybody fires people who disagree with them” is “everybody acts on moral principles they find compelling, even when those are not generally accepted.” And I would will that – job opportunities getting cut in half (and world wars and the Holocaust or whatever) seems like a small price to pay.

    Like, someone eloquently expressed agreement with you on the firing thing once, I forget who it was, saying, we have a bunch of people who think that abortion is murder, they literally think the other half of the country is involved in the biggest genocide in human history, and yet we all basically get along, isn’t that beautiful? No it isn’t! It’s awful! I mean, I hope that abortion clinic bombers fail, but will that the maxim upon which they do their act, yes.

    Mostly this is not how we operate (certainly not I; I am as cowardly as they come), and so consensus morality in any given place and time tracks amoral interests (and not in an unweighted fashion, either). The deviations from this are not, I believe, random, and tend to push things in a better direction, although of course there are frictional costs along the way.

    • Matthew says:

      we have a bunch of people who think that abortion is murder, they literally think the other half of the country is involved in the biggest genocide in human history, and yet we all basically get along, isn’t that beautiful? No it isn’t! It’s awful!

      Alternatively, it’s evidence that lots of people don’t actually alieve what they claim to believe.

      • ozymandias says:

        Or they’re deontologists who believe murder is wrong even to save a life. Or the human race tends to be low on heroes.

        That said, I tend to model the average pro-lifer as believing that a fetus is a life, but a less morally relevant life than an adult human’s, much the way I (a vegetarian) feel about killing a pig or a dog.

        • Matthew says:

          >That said, I tend to model the average pro-lifer as believing that a fetus is a life, but a less morally relevant life than an adult human’s, much the way I (a vegetarian) feel about killing a pig or a dog.

          Possibly. But since that’s also a fair description of how the average pro-choicer views a fetus, it makes me a bit wary of typical mind fallacy.

      • Oligopsony says:

        The particular example isn’t important here – I know that cowardice and hypocrisy are genuine things as I myself am a coward and a hypocrite.

    • suntzuanime says:

      It would in fact be terrible if people acted on their moral principles. Moral principles are airy signalling unmoored from real-world concerns; if you actually try to implement them, they get you the Thirty Years War and abortion clinic bombings and North Korea and Prohibition and so on and so on.

      People were not designed to take what they say seriously – don’t wish for sincerity until after you wish for wiser things to be sincere about.

      • Zorgon says:

        This x100.

        People moderate because they know their “moral principles” are around 90% social status games of identity. It’s that other 10% that causes change, and that change is not always good from an external-to-context perspective and can often be horrific.

        I don’t want to encourage apocalyptic cultists to infiltrate the CDC to gain access to biological weapons and release them simply because they have the courage of their convictions, even if I may be rather impressed that they possess sufficient said courage to do it rather than just signalling to one another about how everyone else should die.

        If arguments are soldiers, we live in a world with 10,000 armies, and the only thing keeping the world from a million Sarajevos every day is the rules of war combined with the fact that most of the vectors don’t actually believe in their soldiers most of the time. I’m against anything which gives any given side more ammunition than others, because while “emboldening the terrorists” was always bullshit, “emboldening the nutters” is a very real problem.

      • Anonymous says:

        Signaling is not something divorced from real-world concerns; particular signals are signal-worthy for reasons. Signaling intelligence and class status leads to a world with much more cultural output than one where such things are obvious, for instance. A perfect simulation of a thing is that thing, and an imperfect simulation of that thing is that thing imperfectly.

        The neoreactionaries are basically right, for instance, when they call out telescopic compassion as the result of a sort of bullshit signaling arms race where people try to display their propensity to cooperate – a good thing to have in allies that correlates with the telescopic kind, even though the telescopic kind’s marginal contribution to alliability itself is nil. Where their own signaling matrix deviates from reality (as such things, and especially their own, are wont to do) is in considering this arms race globally immiserating.

        edit: this is Oligopsony. Don’t want to miss out on the sweet signaling value of having my pseudo-identity linked to my dumb thoughts!

        • Zathille says:

          I think the NRx point is not that it is necessarily or inevitably immiserating, but that it tends to be so to the extent that it is more a signal, and that the signal is proportional to the amount of resources sent rather than to their actual effectiveness, in effect being a non-efficient charity, one making use of public resources.

          There’s also the angle of such practice fostering dependency and potentially limiting the ability of the aided countries to develop infrastructure on their own and in a way that makes them increase their own productive capacity.

          And also that such signalling is used as a costly sledgehammer against any opposition to such aid: ‘if you don’t approve of foreign aid, you are closed-minded and callous and therefore less moral than us paragons of altruism’, which obfuscates the debate so that there’s rarely, if ever, any incentive or possibility of reducing aid politically. A supporting argument is that goodwill and fellowship extend first to those of a community near someone (the whole tribe question which to an extent divides Ethnonationalists, Traditionalists and Techno-commercialists; forgive me if the denominations are incorrect or outdated), so an extension of ‘generosity’ far beyond that becomes automatically suspect.

          In short, I don’t think the argument is that it’s DIVORCED from real-world concerns, just that these concerns are not as pure as claimed and the pursuit of them is counterproductive.

          I am aware this is only my perception of what the NRx argument is. If anyone could point out missing or related points I’ve failed to cover, it’d be appreciated.

          This also reminds me of a book entitled Humanitarianism Contested which I’ve heard about, it touches on related subjects:

          http://www.amazon.com/Humanitarianism-Contested-Angels-Routledge-Instiutions/dp/0415496640

          And a Trotskyist perspective on the book:

          http://www.wsws.org/en/articles/2013/12/30/huma-d30.html

        • ozymandias says:

          Is reducing or eliminating foreign aid really that far out of the Overton window? Admittedly I tend to avoid foreign policy stuff, but I was under the impression that “we should reduce foreign aid” was a fairly thinkable position.

      • MugaSofer says:

        I’m sorry, no. You are vastly underestimating the catastrophic loss involved here.

        To pick a low-hanging fruit: how many people die slow, agonizing deaths by starvation every day because we don’t take our moral principles seriously enough?

        EDIT: Also … you claim we would have horrible regimes and wars. But real wars are not usually started for moral reasons; and real corrupt regimes are, well, corrupt. What makes you think this “courage of our convictions” wouldn’t lead to more anti-war protests? More dissension in the literal ranks, as it were?

      • Error says:

        “People were not designed to take what they say seriously – don’t wish for sincerity until after you wish for wiser things to be sincere about.”

        Objection: The proper approach to sincerity is the other way around. If people claim principles that they don’t actually hold for signaling purposes, and those principles would be bad to realize, and if we further care about sincerity, then there’s a problem. But the solution isn’t to suddenly start sticking to your claimed, flawed principles; it’s to *stop claiming principles you don’t have*.

        That is, change the signal to match reality, rather than trying to change reality to match the signal.

        (how do I get a proper blockquote in here, by the way?)

        • suntzuanime says:

          Objection: What makes you think we care about sincerity? Society is much more rigorous in demanding that you claim to hold certain principles than that you live up to the principles you claim to hold. That’s what makes hypocrisy a better choice than amoralism.

        • Error says:

          *I* do, or at least I believe that I do. I acknowledge that society as a whole probably doesn’t, though.

        • Sniffnoy says:

          Use <blockquote> tags.

    • MugaSofer says:

      I see what you mean, but I’m really uncertain that terrorism would actually reduce abortions long-term.

    • ADifferentAnonymous says:

      How do you respond to the following toy model:
      There are two decision-making groups, rightist employers (REs) and leftist employers (LEs). Each group decides whether or not they’ll tolerate employees of the opposing politics. All rightist employers think alike, and all leftist employers think alike, so there are four possible outcomes:
      1) REs fire leftists, LEs fire rightists
      2) REs don’t fire leftists, LEs don’t fire rightists
      3) REs fire leftists, LEs don’t fire rightists
      4) REs don’t fire leftists, LEs fire rightists

      If one side fires the other, the nation’s politics will move in favor of those who fire; if both sides do the same thing, politics will stay put. Firing employees hurts everyone economically, but this is lexically less important than any political gains. So to an RE, 3 > 2 > 1 > 4, and to an LE, 4 > 2 > 1 > 3. Notice how 2 is a Pareto optimum and in particular a strong Pareto improvement over 1.

      Is your position that sincerity of action is valuable to you and so you’d prefer 1 to 2? Or do you agree that we should prefer 2 to 1 in this scenario but argue that this toy example is not reflective of reality?

      If your view is the latter, I don’t have much to add to https://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/ in disagreeing with you. btw this is the source of the abortion thing in your second paragraph.
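
      The toy model is small enough to write down directly. In the sketch below, the ordinal payoffs just encode the stated rankings (3 > 2 > 1 > 4 for REs, 4 > 2 > 1 > 3 for LEs; higher is better), and the check confirms that outcome 2 is a strict Pareto improvement over outcome 1.

      ```python
      # outcome number: (RE payoff, LE payoff), ordinal, from the stated rankings
      outcomes = {
          1: (1, 1),  # both sides fire
          2: (2, 2),  # neither side fires
          3: (3, 0),  # only REs fire
          4: (0, 3),  # only LEs fire
      }

      def pareto_improvement(a, b):
          """True if outcome a is at least as good for both groups and better for one."""
          (ra, la), (rb, lb) = outcomes[a], outcomes[b]
          return ra >= rb and la >= lb and (ra, la) != (rb, lb)

      print(pareto_improvement(2, 1))  # True: mutual tolerance beats mutual firing
      ```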

      • It isn’t obvious that the side which fires will have an economic advantage– they’re shrinking their pool of potential employees and they’re probably damaging morale in their organizations, assuming there are neutrals as well as partisans.

  10. Anonymous says:

    (For me to think about later) – Do some of these get solved when you resolve to punish defectors? (Axe murderers are considered defectors, for example)

  11. ozymandias says:

    “Reject actions which, if universalized, would be self-defeating or contradictory” gets you a really weird morality. Like, there’s nothing particularly self-defeating about “kill people whenever you want to.” Everyone kills people when they want to, and presumably all of society ends up armed to the teeth and in a Hobbesian state of nature, and that sucks pretty hard, but it’s not contradictory.

    It seems a bit weird to have a single moral rule that doesn’t produce “it is generally bad to kill people.”

    • Scott Alexander says:

      See Protagoras at the top. But there are also arguments like “But you would have been killed, so you can’t kill people” and “If everyone killed everyone else, that would be bad.”

    • Protagoras says:

      Well, it is self-defeating, in the strange Kantian sense. Ultimately, a principle is self-defeating for Kant if it ends up hindering values rather than furthering them, and people getting killed get their pursuit of their values hindered pretty hard. I’m not sure how this interacts with taking risks of accidental death, or duties to bring about life extension (such practical details are, I think, where Kant really gets into trouble), but killing people is definitely not OK on Kant’s principles.

    • MugaSofer says:

      “Kill people whenever you want to” results in you being killed, doesn’t it?

      That makes it self-defeating, unless you have “as many people die as possible, including me” as your terminal goal here.

  12. Douglas Knight says:

    Kant says that you may not lie to the murderer, but he also says that you may lie to the thief.

    • Berry says:

      Just want to pop in and say thank you for making me aware of the latter’s existence. 🙂

  13. Alrenous says:

    Kant says it isn’t, because lying.

    There is a real point here. A couple, actually. First, if something is truly wrong, it means it trumps all other considerations. That’s what being morally wrong means. Second, there aren’t really gradations of wrong. There’s right, there’s catastrophically wrong, not much in between.

    He screwed up when he said lying is wrong, though. Are novels evil? Absurd. Deception is closer. A novel lacks the usual implicit statement, “I believe this is true.”

    Empowering evil is truly evil, though. The thing is, one wrong can’t really trump another. There has to be some reason lying here is just not-wrong. Ergo, lying to a murderer to disempower them must be correct. If we can’t see how, then don’t bite the bullet, be humble. Accept your Ignorance. (I admit this is some awfully well-tuned humility I’m proposing.)

    Well, then axe murderers wouldn’t even bother asking.

    Walking your shots back away from idealism defeats this, though. Due to errors and information imperfection, it’s still worth lying to axe murderers. They might think you’re dumb, or they might be dumb; in either case, it will work. Costs are small. (City-nuking games’ costs are large.)

    If you are so much stronger than other people that you are immune to their counter-threats, you can get away with doing pretty much anything under this perversion of not-at-all-like-Kant we’ve wandered into.

    Perversion is irrelevant. No moral system whatsoever can restrain Fnargl.
    In broad strokes there are two kinds of people: generally decent ones who don’t want to hurt anyone, and indecent predators. The first kind don’t need formal morality; a little communication is fine. No morality works on the second. We can try to put up Rawls’ veil, but as a matter of fact we’re not behind it, so…

    There’s a maximum level of meta-universalizability.

    Do with your property whatever you want.
    Property is defined as anything you can reasonably expect to control.
    ‘Reasonable’ is a weasel-word in philosophy, but in practice societies haven’t found it difficult to agree on who can control what. It gets badly recursive in that everyone agreeing that I control my wallet makes my wallet more controllable by me, but again from practice we know it doesn’t diverge. I basically mean that we should exclude argument from the clinically insane and liars, and not disqualify on honest mistakes.

    It cashes out to mean that English common law is fully Kantian, modulo some understandable errors. (Not to exclude alternatives like Xeer; however I can’t rule them in either.)

    As an example, a competent pickpocket has a greater claim to being able to control my wallet than I do. But if I could reasonably know the pickpocket exists, I would have defended myself. By not taking my wallet with me that day, for example. Which means the pickpocket is deliberately and unavoidably attempting to subvert reasonable expectations by falsifying mine while confirming their own.

    If there’s no defence against pickpockets, I would never have bought a wallet in the first place. So a pickpocket is deliberately attempting to subvert reasonable security. We don’t worry about stealing air, because nobody can secure air in the first place.

    I should probably bring up explicitly (again) that you can’t refute Kantian logic on utilitarian harm/care grounds. For Kant, moral harms trump material/subjective harms, and his ethics are merely an attempt to work out which harms are moral. You don’t agree, fine, but it’s not science; it’s not falsifiable and that’s okay as long as it isn’t self-contradictory.

    I don’t agree. I think if morals have no physical manifestation, it’s likely they don’t exist and even if they do it will be impossible to access them epistemically; you can’t check your work. Which cashes out to mean they don’t exist. I have no idea what Kant would have said to this objection.

    • ozymandias says:

      There’s right, there’s catastrophically wrong, not much in between.

      That seems not true? Giving one percent of your income to charity is better than giving none of it, but worse than selling all you have to give to the poor. Rationally arguing with someone is better than insulting them, which is better than sending them a death threat, which is better than beating them up.

    • suntzuanime says:

      There is a real point here. A couple, actually. First, if something is truly wrong, it means it trumps all other considerations. That’s what being morally wrong means. Second, there aren’t really gradations of wrong. There’s right, there’s catastrophically wrong, not much in between.

      Nah.

      • Anonymous says:

        that’s what happens when you use binary words like Wrong/Right to talk about a probabilistic grey mess like morality and consequences.

    • blacktrance says:

      I think if morals have no physical manifestation, it’s likely they don’t exist and even if they do it will be impossible to access them epistemically; you can’t check your work. Which cashes out to mean they don’t exist.

      This assumes the dichotomy that either morality is some physical thing outside of us, or it doesn’t exist. However, there are other alternatives, such as that morality is a construct. That would mean that it exists, but not as some external physical thing.

      • Alrenous says:

        I can’t agree I make that assumption. If the construct is real, it will have some causal tendrils reaching into the physical world.

        For example, you might say mathematics is a construct. That is, there’s no such physical thing as ‘three.’ But there are physical three-like things which obey three’s relationship with other numbers.

        • blacktrance says:

          “Three-ness” is a property of sets of objects, and when we imagine or encounter such a set, we can derive the abstraction “three” from them. But morality is different: while it’s dependent on states of the world, it’s not something derived purely from the external world – it’s in part internal, the rules we choose because we believe them to be correct.

        • Alrenous says:

          Correct according to what standard?

        • blacktrance says:

          That is the topic of much of moral philosophy.

          As it’s 1 AM, I don’t think I’d give the best summary of my views right now, and anyway this isn’t the best place to do it. But to partially answer your question, morality is somewhere in between empirical investigation and pure rule-making – it’s directed rule-making informed and partially determined by knowledge of the outside world.

        • Alrenous says:

          You seem to accept the existence of a framework, even if nobody knows what it is. Would you say it’s all internal, then?

          The rules must be instantiated by physical actions. Actions that conform to the framework will have different kinds of effects compared to those that don’t. It should be possible to tell the difference without going circular. If it isn’t possible, even in principle, to tell a moral society apart from an immoral one after tabooing your moral theory, then your theory has issues.

          (Circularity:
          “The Indians are far more moral than us.”
          “Oh?”
          “They follow all our rules!”
          “Err…”)

        • blacktrance says:

          The rules are instantiated by physical actions, but they can’t be derived from looking at physical actions alone. It’s sort of similar to the rules of chess: they are instantiated as physical actions, but the position when the king is in check is not discovered purely by empirical investigation, but is determined by the rules, and empirical investigation merely determines what part of the rules are relevant to the situation and how they apply.

        • Berry says:

          @Blacktrance

          Did you intentionally bring up chess here to defend Moral Realism as a reference to Mackie? Either way, quite ironic 🙂

        • peterdjones says:

          If you act on morality, then it has effects. That makes it at least as real as maths.

        • peterdjones says:

          Moral rules need to be discoverable, to avoid circularity, but they need not be empirically discoverable. Kant’s theory is a theory of how that might work.

  14. Padoson says:

    This seems like a terrible idea. So if everyone becomes a psychiatrist, there’s no one to farm and everybody will die. So no one should become a psychiatrist?

    By the way, what’s up with the gender bias? The good people are women and bad ones are men and there’s not a single bad woman?

    • suntzuanime says:

      It is not a moral imperative to become a psychiatrist. This doesn’t mean no one should become a psychiatrist. This is a pretty trivial misreading of Kant.

      And for the love of all things holy let’s not start keeping a hypothetical-character-gender quota. What the hell.

      • Padoson says:

        Sure it is. Rephrase it as “if everyone tries to improve the mental health of people, no one will be there to feed them, so everyone will die, and no one will be left to improve mental health anymore.” The problem isn’t even that if everyone tried to improve mental health the world would go to hell. The problem is no one would be left to keep on improving mental health.

        The original version of the first thought experiment didn’t have a woman in it (I don’t know this for sure, but I’m pretty confident). So he inserted her. Then he inserted more. Nothing’s wrong with this. But couldn’t he make at least one of them bad, instead of making all the bad characters male? What would the reaction be if all the bad characters were female instead? I don’t want to dwell on this, but I think I have a valid point.

        • Where character gender is concerned, I suspect you’re mistaking the results of a random number generator for signal.

        • Padoson says:

          @Matt S Trout (mst)

          It’s hardly a random number generator. People here are far too old to naturally use the pronoun she instead of he when referring to hypothetical people like those in thought experiments. So he’s making a political point. If so, he might as well do it right.

          Also, it’s not about the numbers. It’s about the fact that all the bad characters are men, and all female characters are good. No one would think it’s random if all the female characters were bad and male characters were good instead.

        • Noumenon72 says:

          I agree that the bias is something Scott would do, but there are only two female and three male characters in total (Paula, Kandice, Robby, Riley, and Levi), so it doesn’t seem a large enough sample size.

        • Padoson says:

          @Noumenon72

          There’s also the unnamed female friend who’s in the closet, and the unnamed female army general. So there’s at least four women.

        • ozymandias says:

          And here I thought one of the advantages of not being a feminist is that you didn’t have to have boring wanky conversations about how Your Fave Is Problematic.

        • ozy, there’s something contagious about SJism. It gives a tremendous number of tools for hassling people. I’m expecting things to get worse.

        • ozymandias says:

          IDK I feel like leftist groups have been prone to boring, wanky conversations about how Your Fave Is Problematic since at least the 19th century. (Remember “if there’s no dancing at the revolution I’m not coming”?) I am less familiar with non-leftist ideological history and if the disease is spreading that’s worrisome, but there’s a reasonable chance this is just a constant trait of ideologies. In which case there’s nothing to worry about (although plenty to mock).

        • nydwracu says:

          All acts change society in some way, everything is political, and so on and so on.

          This is, of course, true, but, for obvious reasons, it is probably best ignored — though writers should take it into consideration. As he may have, since he seems to be worried about feminists declaring SSC haram.

          Either way, I didn’t notice until I saw the comment. And maybe it’s a random number generator.

          (The whole thing strikes me as a bit silly anyway, especially since there are languages where the default gender is feminine and I’ve never heard of someone doing a study on whether they’re more or less ‘sexist’ than other cultures in the area.)

    • MugaSofer says:

      Everyone should choose a job according to their comparative advantage. (I’m simplifying, of course.)

      Just as the employer is using “fire people who disagree with my political views” – not just “fire people who disagree with [specific political view].”

  15. endoself says:

    I think that these ideas are much harder to apply in practice than you make them seem. If you want prosperity for yourself, friends and family, and larger groups that you identify with, then yes, there’s an obvious way to universalize that, and it’s Rawls. However, if what you want is much more complicated (example) and none of the people who negotiated the Schelling maxims in the first place even considered that someone might want these things, it’s not obvious whether it is actually acausally beneficial to go along with their maxims. I don’t even know if marginal possibility of what people think of as positive-sum trades actually benefits the far future. Then there’s the meta question of how much effort it’s worth to determine the answers to this stuff.

    Anyway, I think meta-universalizability and Rawls are tempting because they work in the simple cases and they’re arguments that our society actually uses in practice to cooperate, but they don’t help in the general case. If you’re already a utilitarian on the object level before universalizing, and almost everyone is confused and sort of utilitarian, but it’s not clear whether that’s logically prior to universalizing or not, then the simple cases are not enough.

  16. DB says:

    People from [insert equivalence class of your choice] only need to give useful information with some small positive probability p in order for it to be rational for the axe murderer/captors/etc. to bother to ask. So, formally, if you precommitted to telling the truth in a set of world states with combined probability p (and the antagonist had no way of distinguishing these world states from the rest), there’s no philosophical problem. This type of mixed strategy extracts value from situations where all pure strategies fail, and it’s a better model of actual human behavior (even if most humans don’t explicitly work out the math) than “always lie” or “always tell the truth”.
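
    As a minimal sketch of that condition, with hypothetical payoffs for the antagonist (the numbers are made up): asking remains rational as long as the probability p of a true answer, times the value of a true answer, exceeds the cost of asking.

    ```python
    value_of_truth = 100.0  # assumed gain to the antagonist from a true answer
    cost_of_asking = 1.0    # assumed cost of bothering to ask

    def asking_is_rational(p):
        """The antagonist asks iff the expected value of asking is positive."""
        return p * value_of_truth > cost_of_asking

    print(asking_is_rational(0.05))   # True: a 5% chance of truth keeps him asking
    print(asking_is_rational(0.001))  # False: near-certain lies make asking pointless
    ```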

  17. Army1987 says:

    A while back I suggested it is wrong to fire someone for being anti-gay, because if every boss said “I will fire my employees whom I disagree with politically”, or every mob of angry people said “We will boycott companies until they fire the people we disagree with politically” then no one who’s not independently wealthy could express any political opinions or dare challenge the status quo, and the world would be a much sadder place.

    The best solution to that would be implementing a basic income guarantee so that everyone is independently wealthy in the relevant sense. 🙂

    If it were universalized – and everyone acted on the principle “Always do whatever gets Scott the most money”, well, I wouldn’t mind that at all.

    As soon as people stopped selling you stuff, you sure would mind that!

    • Deiseach says:

      Also, if I know that I’m going to be fired from my place of work on the grounds that I’m homophobic, then of course I’m going to lie about it when given the HR Department Diversity Checksheet to fill out.

      So can Levi the Leftist be absolutely sure I’m not a secret homophobe? Well, he can either believe that every single person fills out forms with complete 100% limpid honesty, or he can hire the services of Felix the Ferret who snoops around to check up on people’s real attitudes by going through their bins, talking to ex-lovers, tapping their phones, hacking their email, and more nefarious methods.

      Does that really make for a better society, where Big Brother (or his privatised son) is out there monitoring your compliance, citizen, and sifting through every detail of your private life as distinct from how you behave at work?

      And Levi runs the risk of being accused of discrimination against minorities himself, because there’s surely some group or sub-group out there that he offends or can be perceived to be offensive towards, even unwittingly.

      Levi calls his place “Levi’s Lentils”? How hurtful and unjust, not to mention dangerous, to those with allergies to legumes such as lentils!

      16) People who have a food allergy are more likely to develop other food allergies TRUE: This is known as being “atopic” and refers to a tendency to develop allergies. Being atopic can mean you react to a number of unrelated allergens, for example peanuts and cats. Other people can react to different foods that contain either the same allergen or an allergen with a very similar structure, which means they can cause similar allergic reactions. This is known as allergic cross-reactivity. This means that if someone is allergic to peanuts, they might react to other foods in the legume family such as soya, peas, lentils, lupin and beans.

      I protest Levi’s discriminatory practice of lentil and legume-based cookery, whereby he ensures a policy that no-one suffering such an allergy can eat at his restaurant, and therefore he deliberately excludes these persons and treats them as second-class citizens and refuses them the same level of service that is provided to the general public! I demand that Levi fire himself!

  18. Deiseach says:

    Your example of “which is better – to lie or tell the truth when faced with the threat of murder?” is examined in St Thomas Aquinas’ “Summa Theologica”:

    Article 3. Whether every lie is a sin?

    Objection 4. Further, one ought to choose the lesser evil in order to avoid the greater: even so a physician cuts off a limb, lest the whole body perish. Yet less harm is done by raising a false opinion in a person’s mind, than by someone slaying or being slain. Therefore a man may lawfully lie, to save another from committing murder, or another from being killed.

    Reply to Objection 4. A lie is sinful not only because it injures one’s neighbour, but also on account of its inordinateness, as stated above in this Article. Now it is not allowed to make use of anything inordinate in order to ward off injury or defects from another: as neither is it lawful to steal in order to give an alms, except perhaps in a case of necessity when all things are common. Therefore it is not lawful to tell a lie in order to deliver another from any danger whatever. Nevertheless it is lawful to hide the truth prudently, by keeping it back, as Augustine says (Contra Mend. x).

    And earlier, St. Augustine wrote “De Mendacio” (“On Lying”) and “Contra Mendacio” (“Against Lying”); the first is a general examination on ‘what is a lie?’:

    I have also written a Book on Lying, which though it takes some pains to understand, contains much that is useful for the exercise of the mind, and more that is profitable to morals, in inculcating the love of speaking the truth. This also I was minded to remove from my works, because it seemed to me obscure, and intricate, and altogether troublesome; for which reason I had not sent it abroad. And when I had afterwards written another book, under this title, Against Lying, much more had I determined and ordered that the former should cease to exist; which however was not done. Therefore in this retractation of my works, as I have found this still in being, I have ordered that it should remain; chiefly because therein are to be found some necessary things which in the other are not. Why the other has for its title, Against Lying, but this, Of Lying, the reason is this, that throughout the one is an open assault upon lying, whereas great part of this is taken up with the discussion of the question for and against. Both, however, are directed to the same object.

    The second holds the position that it is not right to lie even to liars, in the case of certain Catholics who, in order to hunt out the Priscillianist heretics, were pretending to be Priscillianists themselves:

    Then also I wrote a Book against Lying, the occasion of which work was this. In order to discover the Priscillianist heretics, who think it right to conceal their heresy not only by denial and lies, but even by perjury, it seemed to certain Catholics that they ought to pretend themselves Priscillianists, in order that they might penetrate their lurking places. In prohibition of which thing, I composed this book.

    • nydwracu says:

      Oh, cool, I don’t have to defend my claim that Kant is secularizing a wacky form of Christianity; it’s already been done to some extent for me.

      (If Catholicism is wacky, that is; to a secularized Calvinist, just about everything is wacky.)

  19. Peng says:

    Universalize as if the process you use to universalize would itself become universal.

    If you’re proposing that, in practice among humans, there’s likely to be more agreement about “the process you use to universalize” than about object-level policy… Fine.

    But if you’re trying to ground morality, then you’ve only passed the buck, not eliminated any of the degrees of freedom. Suppose Alice’s universalization process is “pick whichever rule would give Alice the most money if everyone followed it”. If everyone universalized that way, Alice would win. And adding yet more levels of meta wouldn’t help either.

    If you are so much stronger than other people that you are immune to their counter-threats, you can get away with doing pretty much anything under this perversion of not-at-all-like-Kant we’ve wandered into.

    That sounds to me like the expected conclusion for a correct theory of universalization. After all, universalization is just a kind of acausal bargaining, and the outcome of a bargain depends on who has how much bargaining power. Knowing how to bargain well will help anyone achieve any goal they happen to have, including evil goals.

  20. Zorgon says:

    So, given you’ve gone from “Kant? Bollocks!” to “Kant? Not entirely useless…” in a relatively short space of time, what do you think is the likelihood of you returning through “Kant? Bollocks!” again?

    I’m definitely interested in this process of self-reinvention in terms of shifting position in relation to cultural and academic standpoints (ideally dead ones, it’s a real pain when they move on you instead). I’ve done a lot of moving in these last few years, mostly towards something resembling Old!Yvain, and watching you shift to whatever New!Yvain is going to be like is quite intriguing.

  21. Ken Arromdee says:

    My greatest objection to Kantianism has usually been your section III. Exactly what constitutes a universalization of a statement? (And I’m not convinced by your own solution; see Peng’s reply.)

    But even ignoring that, I would think there’s a reply to the case of lying about where the general is. The enemy would only give you the choice to tell, rather than nuking the city, if nuking the city is less valuable to them than just killing the general. So to avoid the lie being self-defeating, you should lie with probability X, chosen so that the enemy’s expectation from offering the deal (an X chance of a lie, which gains them less than nuking, plus a 1-X chance of the truth, which gains them more than nuking) still beats their expectation from just nuking. (A numeric sketch follows below.)

    Depending on exactly how much the enemy stands to lose from nuking the city compared to killing just the general, it may even be that just the chance of you breaking under pressure and telling the truth serves the purposes of X and it’s fine to lie if you can bring yourself to do it. For that matter, the enemy can’t distinguish between you and a non-Kantian ahead of time. If a large enough percentage of the captives are non-Kantians, for all the Kantians to decide to lie might not become self-defeating since just the percentage of non-Kantians who would tell the truth is enough to have the same effect as telling the truth with probability 1-X.

    Needless to say this is far beyond steelmanning; the idea of a mixed strategy would be utterly foreign to Kant.
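
    For what it’s worth, the threshold is easy to compute under made-up payoffs (none of these numbers are from the comment): the deal stays on the table for any lying probability X below the point where the enemy’s expected gain from asking drops to the value of just nuking.

    ```python
    # Hypothetical payoffs from the enemy's point of view.
    payoff_truth = 100.0  # the surgical strike kills the general
    payoff_lie = 0.0      # they blow up an empty warehouse
    payoff_nuke = 60.0    # the nuke works too, minus the political cost

    # The enemy offers the deal while
    #   X * payoff_lie + (1 - X) * payoff_truth > payoff_nuke,
    # so the captives can lie with any probability X below this threshold:
    x_max = (payoff_truth - payoff_nuke) / (payoff_truth - payoff_lie)
    print(f"lying stays viable up to X = {x_max:.2f}")  # 0.40 with these numbers
    ```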

    • Creutzer says:

      If a large enough percentage of the captives are non-Kantians, for all the Kantians to decide to lie might not become self-defeating since just the percentage of non-Kantians who would tell the truth is enough to have the same effect as telling the truth with probability 1-X.

      You’re bringing up a very important point here: it matters greatly for the outcome of the universalisation procedure whether you view other people as agents who would act according to the universalisation or as part of the environment. For example, you could justify never helping the victim of a crime because your policy is not to be nasty to others, but also not care when someone else is nasty to them – universalised to all humans, this is perfectly self-consistent and it would even be a wonderful world because nobody ever commits a crime.

    • Paul Torek says:

      The idea of a mixed strategy would be foreign to Kant, but truer to the spirit of universalization. To wit, behind every maxim is a motive. The motive indicates the Real Maxim. The Real Maxim is then tested. I think this is what Scott’s trying to get at.

      For example, if I offer the maxim “everyone give all their money to someone named Paul Torek” the Real Maxim is self-love. As Scott discussed, this leads to Levi seeking money for Levi and Paula for Paula.

      The Real Maxim in the nuclear terrorism case is “save my city, extremely important, and if possible save my general too.” The mixed strategy captures that perfectly.

      • Protagoras says:

        Actually, the purpose that the maxim is directed at is canonically supposed to be part of the maxim for Kant. He just leaves it unstated in nearly all of his examples, at massive cost to clarity.

  22. This is really a case of reference class tennis.

    I have a notion that people are choosing their universalizations according to some standard, but I’m not sure what standard. Deontologist? Consequentialist? Ease of analysis?

    Would the world be a better place if everyone said what they thought would have the best easily predictable effects?

    This solves the question of whether you lie to people who are asking about someone you’re concealing, since it’s much more likely that person will get killed if you tell the truth, but not particularly likely that everyone will become completely consistent about lying.

    It doesn’t give a clear answer about the general vs. the city, but fortunately, that sort of deal doesn’t get offered. I think I’d go with protecting the general because the people who offer that deal are probably untrustworthy and have just shown how much they care about killing the general.

    • Caspian says:

      The right universalization would need to be one people will agree with, so it should be a Schelling point, and sufficiently favourable to all parties.

  23. Troy says:

    My sense is that most contemporary philosophers aren’t too enamored of Kant’s Formula of Universal Law, in part because of these problems concerning what maxim is being universalized. His Formula of Humanity, however, which says to never use others as mere means, is more popular nowadays.

    Of course, interpreting and applying the FoH is another difficult task. But I think this is a more promising route to go myself too — basically, I think we should understand the FoH as the claim that we should treat people with respect because they are rational agents. Applying it to the salient cases at hand, you’re disrespecting those who disagree with you politically when you think the way to respond to them is to fire or protest them rather than talk to them.

  24. Troy says:

    Ironically, the Kantian advice in the general case might be to just refuse to answer — you shouldn’t lie because that would be lying, but you shouldn’t tell them the truth because that would be cooperating with evil. Of course, refusing to answer will (we’re supposing) get the city blown up, but the hard-nosed deontologist will say that that’s solely on the militants’ heads, and you do nothing wrong in not cooperating with them even if you know this will lead them to act wrongly.

  25. Pingback: A Semi-Practical Approach to Kant

  26. PhilosopherBob says:

    “If it is known to everyone that prisoners of war always lie in this situation, it would be impossible to offer the positive-sum bargain, and your enemies would resort to nuking the whole city, which is worse for both of you.”

    Your lying won’t cause it to be known that prisoners of war always lie in this situation, however. You’re just one POW of many. This actually reminds me of a multiplayer PD (with truth=cooperate and lie=defect), because each POW is better off lying if they know the others will tell the truth, but each player prefers C,C,C….,C to D,D,D….,D*.

    In this model, we might expect that the utility of lying is inversely related to the frequency of situations in the correct reference class. If POWs are only offered this deal once every 80 years, it might as well be a 1-shot PD, and you should definitely lie. If POWs get offered this choice once a day, and each player thought the other players were rational but not to the extent that superrationality would work, we’d expect some sort of tit-for-tat analogue to emerge as the dominant strategy. In the multiplayer case, I think this comes out to “lie with roughly the probability you think other people in your situation lie (which is presumably pretty close to the probability your captors think you will lie).”

    Of course, you might not suppose the other POWs are rational. Maybe you think some are rational, and some are cooperate bots (or at least cooperate more than they should, and will not change this behavior in further iterations). The more cooperate bots you think are present, the more you should lie, because it will be less likely to cause the captors to always assume that POWs are lying.

    *This assumes that each POW in consideration is being held by the same captor, or more weakly that all the captors can at least gain information on the other POWs’ behavior at a low enough cost that they will do so.
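
    A rough simulation of that frequency dependence, under entirely hypothetical payoffs and an assumed fixed 50% lying rate: the captors keep offering the deal only while the observed lie rate among past POWs leaves asking with a better expected payoff than nuking outright.

    ```python
    import random

    random.seed(0)
    payoff_truth, payoff_lie, payoff_nuke = 100.0, 0.0, 60.0  # hypothetical

    def captors_still_ask(lie_rate):
        """Asking beats nuking while the expected value of an answer is high enough."""
        return (1 - lie_rate) * payoff_truth + lie_rate * payoff_lie > payoff_nuke

    history = []  # 1 = that POW lied, 0 = told the truth
    for n in range(50):
        lie_rate = sum(history) / len(history) if history else 0.0
        if not captors_still_ask(lie_rate):
            print(f"after {n} POWs, the captors switch to nuking")
            break
        history.append(1 if random.random() < 0.5 else 0)  # each POW lies half the time
    else:
        print("the deal survived all 50 POWs")
    ```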

    • MugaSofer says:

      Scott is using Timeless Decision Theory, which works by taking into account the fact that other prisoners reason much the same way you do, and thus will hopefully reach whatever conclusion you do. (Well, roughly.)

      The upshot is that TDT reasoners cooperate in the PD against other TDT-like reasoners; so they all receive the C,C[insert other C’s here if you like] result they prefer.

  27. Vadim Kosoy says:

    It seems to me you are mixing two different things: one is using the correct decision theory and the other is using the correct utility function. The correct decision theory (UDT) ensures that all players using it arrive at a Pareto optimal outcome. Of course there are situations in which you can unilaterally screw everyone else (e.g. if you’re a powerful dictator and there are no other universes in which you aren’t with which you might want to do acausal trading). In those situations the correct reason *not* to do it is not decision theory / cooperative game theory, it is that your own utility function* already includes caring about everyone else. The evolutionary cause behind it has to do with cooperative games, but we are adaptation-executors, not fitness-maximizers. That is, there is no general game theoretic reason why you should be nice to other people. It’s just that you want to be nice to other people.

    *Yes, morality is subjective in the sense of referring to the utility function of a given person (“you”).

  28. Eli says:

    I’m not sure whether consequentialism is prior to universalizability (“universalize maxims because if you don’t you’ll end up losing out on possible positive-sum games and cutting your job offers in half”), whether universalizability is prior to consequentialism (“be a consequentialist, because that is a maxim everyone could agree on”)

    In the absence of Universally Compelling Arguments, consequentialism is prior to universalizability: you can only universalize maxims when you’re entangled with other decision-makers who are sufficiently like yourself to also follow the doctrine of universalizing maxims.

    Because if universalizability is prior, that would be an interesting way to explore some of the problems with utilitarianism. For example, should we count pleasure or preferences? I don’t know. Let’s see what everyone would agree on.

    Since preference for pleasure is only one preference, we should obviously use preferences.

    Does everyone have to prefer torture to dust specks? You’re behind the veil of ignorance, you don’t know if you’ll be a dust speck person or a torture person, what do you think?

    Behind the veil of ignorance, I definitely prefer dust specks to torture. Noise-level misery rounds down to zero before being added up to OHBLOODYHELLHUGENUMBER.

    But then again, I’m the kind of evil bastard who supports what you called post-hoc consequentialism (“ethical calculations about people take place only with respect to actually existing people, roughly speaking, given my degree of entanglement with used-to-be people and will-someday-be people”), so to a proper torture-picking utilitarian I’m just some kind of warped monster who’s refusing the obvious moral imperative to tile the universe with people.

    • MugaSofer says:

      “Behind the veil of ignorance, I definitely prefer dust specks to torture. Noise-level misery rounds down to zero before being added up to OHBLOODYHELLHUGENUMBER.”

      … how does the Veil of Ignorance in any way imply rounding?

  29. peppermint says:

    My favorite examples of problems with utilitarianism are Bentham’s preserved corpse on display and books bound in human leather. I find these artifacts disgusting and I do not want to live in a world where people are okay with them being created.

    The problem with utilitarianism is that it is not a theory, but a justification for not having a theory, and doing whatever sounds like a good idea at the time.

    It is individual pride pretending to be philosophy. No one needs to study. They just need to recite this slogan and then go forth and do what they will, that is the whole of the law.

    • ozymandias says:

      Do people… want books bound in human leather? I mean, displaying preserved corpses, sure, Bodies: The Exhibition is a thing. But I am pretty sure that most people’s reaction to human leather books is “Ew! Gross!” so that is hardly utility-maximizing.

      The rest of your complaints seem to be the exact opposite of everyone else’s (much better grounded, imo) complaints about utilitarianism. Consistent utilitarianism says you must give everything beyond basic necessities to charity. I have never met a person who wills to give 90% of their income to charity, but if there were such a person they would surely be a saint.

      • peterdjones says:

          What’s more, peppermint’s own morality seems to be based on peppermint’s own yeuch reactions, so peppermint can hardly blame others for having moralities that rubber-stamp what they wanted to do anyway.

        • Hainish says:

          I’m wondering if it’s the artifacts themselves that are objectionable, or the implied lack of consent needed to create them. And, if it’s the latter, is the problem with utilitarianism different from the one peppermint has in mind?

      • Princess_Stargirl says:

        It’s an old article, but I want to point out that books bound in human flesh would not bother me at all. I am not sure I would outright purchase one, but I would vaguely consider it. Assuming I was sure this would not lead in any way toward people getting killed.

        Though I think I have an extremely unusually low level of disgust toward most things (except bugs).

    • Vertebrat says:

      What if a book-lover wants their skin used for bindings when they die? Or if a musician wants their bones made into flutes? Are these still objectionable?

    • Xycho says:

      “I find these artifacts disgusting…”
      That’s fine, everyone has preferences.

      “…and I do not want to live in a world where people are okay with them being created.”
      That, to me, is where you fall into ‘very not OK’ territory. Not wanting such things to be created is fine, but wishing to influence the mindset of every other human being such that they don’t want it either is a pretty close approximation to outright evil. (This does result in a curiously looped mindset where for example both institutional slavery AND political opposition to it are unconscionable, for almost exactly the same reason. Also it means that it would be wrong of me to hope that my expressing this belief would lead to anyone changing their attitude or behaviour.)

      Do people want books bound in human leather? I have no data (probably yes; people want nearly everything), but as long as nobody was actually killed specifically to procure said leather, I can’t really see why anyone would object.

      On an unrelated note, “do what thou wilt shall be the whole of the law” is Aleister Crowley, and as far as I know he wasn’t a utilitarian.

      • ozymandias says:

        I’m confused. Surely there is a difference between “it is morally wrong to completely eliminate a particular preference” and “it is morally wrong to persuade someone of a thing.” We may wish to tolerate a certain number of slavery-supporters, if for no other reason than that completely eliminating a belief would probably require very unethical tactics, but that’s no reason to think we can’t try to keep them from being the majority of the population.

        • Xycho says:

          It’s not quite that; it is morally wrong to attempt to nonconsensually adapt or control someone else’s behaviour or personality to satisfy your own preferences.

          Of course, society doesn’t function as society if people aren’t evil, by that standard.

        • Oligopsony says:

          Of course, society doesn’t function as society if people aren’t evil, by that standard.

          Extending even further, I doubt anyone would have gotten the preference “don’t attempt to nonconsensually adapt or control someone else’s behaviour or personality to satisfy your own preferences” except after a long process of exactly that. So perhaps Kant would disapprove!

        • Xycho says:

          You happen to be correct; I don’t like it done to me, which flags it as suspect (though it doesn’t preclude doing unto others), and attempting it generally results in negative consequences.

          The only coherent position on the issue I could develop was that the contents of people’s heads are sacred and nobody gets to mess with them even if they’re utterly horrific and broken to start with. This has resulted in a great deal of antipathy towards journalists, politicians, advertising agents, mental health professionals and salesmen, to name but a few. It’s also circular, in that I have to tolerate all forms of intolerance, including intolerance of tolerating intolerance.

        • peterdjones says:

          There’s an important difference between “argue against” and “eliminate”.

      • Nornagest says:

        On an unrelated note, “do what thou wilt shall be the whole of the law” is Aleister Crowley, and as far as I know he wasn’t a utilitarian.

        Crowley’s ethics in general and that quote in particular are a lot more complicated than they seem at first glance. “Will”, to Crowley, didn’t mean whim or even naive preference, but something closer to what one’s preferences would be if one were perfectly enlightened and free from akrasia: part fully informed desire, part destiny, part divine law. Uncovering one’s will is a major project within Crowley’s religious system, involving a great deal of introspection, ritual, and meditative practice.

        It’s not entirely unlike Eliezer’s concept of extrapolated volition, actually, though it’s wrapped up in a lot more mystical gingerbread.

    • MugaSofer says:

      What on earth do any of these things have to do with utilitarianism?

      “The problem with utilitarianism is that it is not a theory, but a justification for not having a theory, and doing whatever sounds like a good idea at the time.

      “It is individual pride pretending to be philosophy. No one needs to study. They just need to recite this slogan and then go forth and do what they will, that is the whole of the law.”

      Correct me if I’m wrong, but isn’t kind of the only detail of utilitarianism … utility calculations? Hardly “doing whatever sounds like a good idea at the time… No one needs to study.”

  30. Josh Haas says:

    If you’re interested in going down the “Kant isn’t crazy” rabbit hole further, I strongly recommend Christine Korsgaard’s writing, especially The Sources of Normativity. She interprets / modernizes / steel-mans Kant’s ethics in a way that I find very readable and convincing.

    To me, the most interesting takeaway from her / Kant’s work (it’s a little hard to tell how much is her vs. him) isn’t the ethical conclusions so much as the theory of why morality even exists in the first place. I think consequentialist answers to this question — why care at all about this — tend to be weak. Korsgaard’s argument is that normativity arises from the condition of having to make choices, which seems to me like a prima facie sane place to start looking, since making choices is what normative questions are about. Anyway, I can’t do justice to the whole argument in a comment, but I highly recommend reading her book.

    • Protagoras says:

      I second the Korsgaard recommendation. Korsgaard convinced me that Kant was a compatibilist, which I should have figured out much sooner. Like far too many people, I was misled by Kant claiming not to be a compatibilist, but Kant was obviously confused and trying to criticize some other aspect of Hume, or some detail of Hume’s version of the story (which he probably misunderstood) when he said that. Since I knew Kant was a determinist, I don’t know how I failed to connect the dots before encountering Korsgaard’s account. And that’s far from the only place where I’m quite sure Korsgaard has interpreted Kant correctly, where I formerly held a completely silly interpretation.

  31. It seems to me that the public discussion of the movie Avatar was very SJish — the discussion wasn’t limited to the fringe left.

  32. Platypus says:

    I don’t think it’s fair to describe your prisoner-of-war scenario as a “positive-sum bargain”.

    The captors in your example have a huge range of options. Two of their options are nuking the enemy city or firing a missile at the enemy general, but they also have options like “negotiate a peace treaty” and “let the prisoner go free, perhaps in a prisoner exchange”. But the captors turn to the prisoner and say: “Well, we know you have a policy of accepting positive-sum bargains, so we’ve made a precommitment to nuking your city if you don’t tell us where your general is hiding. This allows us to reframe your situation as choosing whether to accept a positive-sum bargain.”

    I’d like to propose a policy of only accepting positive-sum bargains when the utility generated for both sides is actually a positive number, and not “oh, we could have done something much worse to you, so you should consider it a favor that we didn’t”. Under this policy, it’s actually totally reasonable to lie to an ax-murderer or to give false information to one’s captors.

    You might argue: “But if it became common knowledge that we lie to people who are already breaking the social contract, then we lose the ability to make positive-sum bargains with people who have broken the social contract”. I think the counter-argument is: “If it became common knowledge that we lie to people who are already breaking the social contract, people would be less willing to break the social contract in the first place.”

    See also: “We do not negotiate with terrorists.”

    • Douglas Knight says:

      Don’t fight the hypothetical. It’s true that there are difficult issues about precommitments, but that is a separate issue. What if those really are the choices?

      • Platypus says:

        I would like to fight the hypothetical for a moment. 🙂

        The prisoner is asked to accept a huge loss of utility here, by being honest rather than lying. To justify this loss of utility, I would want to be really certain about several things.

        (1) I would want to be really certain about how the war would have gone, had the captors believed that prisoners would lie. Would they have tried harder to reach a peace treaty, given their obvious reluctance to use nuclear weapons?

        (2) I would want to be really certain that the captors actually were capable of nuking the capital. Clearly the prisoner’s nation’s brilliant general thinks this is impossible — else she would never have situated her base in the capital. The captors have assured us that they can do this, but captors are not known for their honesty to prisoners of war.

        (3) I would want to be really certain that this actually was an iterated prisoner’s dilemma, of the sort that we can universalize as Scott describes. How often is the prisoner’s nation going to be at war with another nation that can nuke their capital? (Or other events of comparable scale.)

        Clearly, we can make these things part of the hypothetical situation. We can say: “Well, the prisoner’s nation unilaterally declared the war, and they’ve been completely unwilling to accept any sort of peace treaty, so the captors have absolutely no choice other than to nuke the capital. That’s part of the hypothetical. And the captors have provided the prisoner with absolute unquestionable proof that they can nuke the capital — they’ve kept the ability very secret, rather than announce it and use it as a deterrent, and we don’t have to think about why because it’s part of the hypothetical. And this totally is an iterated prisoner’s dilemma, because the prisoner’s nation declares war on a technologically superior nation every year, it’s Declare War On A Potential Nuclear Superpower Day, May 10 of every year, that’s part of the hypothetical too.”

        If you say that, then I have to agree with you that cooperating with the captors is the right thing to do.

        But I don’t think it’s a very good hypothetical situation. I think that, in practice, many situations are going to be like the bank robber who took the teller hostage: locally there’s benefit from being honest, but globally there’s more benefit from being dishonest to discourage people from breaking the social contract in the first place.

        • suntzuanime says:

          Acausality means that the prisoner’s dilemma doesn’t have to be iterated for you to want to universalize it.

        • Platypus says:

          It doesn’t? Hmm.

          My first guess is that I’ve caused a problem by using the phrase “acausal trade” without a clear understanding of what that phrase means. When I said “acausal trade”, what I meant was “a thing like Parfit’s Hitchhiker where someone does something and trusts that you’ll hold up your end of the implied bargain.”

          I’ve edited my post to remove the phrase “acausal trade”, in the hopes that this will remove the contradiction you’re pointing out. ^_^; Apologies if it doesn’t.

        • suntzuanime says:

          Well… yes… Parfit’s Hitchhiker isn’t iterated. Cooperating on the one-shot PD is more or less analogous to Parfit’s Hitchhiker, just in Parfit’s Hitchhiker you already know your interlocutor chose to cooperate.

    • MugaSofer says:

      Except they would have nuked the city anyway, had they not had a prisoner at all.

      They did not precommit to it for the purpose of persuading the prisoner, and so there is no point in rejecting their bargain to discourage such precommitments.

    • Platypus says:

      Let’s back up for a second.

      Scott describes the problem of Parfit’s Hitchhiker above: “The person being rescued should assume that, if he does not pay the $100, it will eventually be common knowledge that people being rescued will not pay the $100, and no more people will be rescued.” So in this case, under the universalizability assumption, it makes sense to pay the $100.
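
      For concreteness, the comparison can be spelled out; the $100 comes from the hypothetical, while the survival utility is a placeholder of my own:

      \[
      EU(\text{pay}) = U(\text{rescued}) - 100,
      \qquad
      EU(\text{refuse}) = U(\text{stranded})
      \]

      Any plausible $U(\text{rescued})$ exceeds $U(\text{stranded})$ by far more than 100, so the policy of paying dominates once non-payment would become common knowledge.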

      Here’s another hypothetical. A bank robbery has gone bad, and the robber has taken a teller as hostage. “This teller insulted me, so I slightly prefer to kill him,” says the bank robber, “but I haven’t, because I thought you would prefer to give me a million dollars and guaranteed passage to Cuba if I spare his life.” At this point we have a sniper who could safely kill the robber, but the robber has a good point that he could have killed the teller earlier. Assume that we would in fact prefer to pay the money rather than have the teller die.

      We reason: “If we behave dishonestly in this situation by sniping the bank robber, it will become common knowledge that we snipe bank robbers who take hostages, so bank robbers will stop taking hostages. This might lead to bank robbers killing tellers, but it’s more likely to lead to people deciding not to become bank robbers in the first place.”

      So I think the right thing to do is to snipe the bank robber.

      And, more generally: I think you can’t just blindly accept any positive-sum Parfit’s-Hitchhiker-thing somebody offers you. You have to think about what the other person would have done globally had they believed you would defect.
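
      To make the structure of that argument concrete, here is a minimal sketch in Python — every utility and probability below is a number I am inventing purely for illustration, not anything from the scenario:

          # A sketch of the policy comparison above (all numbers hypothetical).

          def expected_utility(policy: str, p_deterred: float) -> float:
              """Expected utility of a standing policy toward hostage-takers.

              policy: "pay" (honor the implied bargain) or "snipe" (defect).
              p_deterred: chance that common knowledge of the "snipe" policy
                  deters the hostage-taking (or the robbery) in the first place.
              """
              U_TELLER_DIES = -100.0  # hostage killed
              U_PAY_RANSOM = -10.0    # a million dollars and passage to Cuba
              U_NO_ROBBERY = 0.0      # robber deterred entirely

              if policy == "pay":
                  # Robbers expect payment, so hostages are taken and ransoms paid.
                  return U_PAY_RANSOM
              # We always snipe: with probability p_deterred the robbery never
              # happens; otherwise the robber kills the teller first.
              return p_deterred * U_NO_ROBBERY + (1 - p_deterred) * U_TELLER_DIES

          for p in (0.5, 0.9, 0.95):
              print(p, expected_utility("snipe", p) > expected_utility("pay", p))

      With these made-up numbers, sniping becomes the better policy only once deterrence exceeds 0.9; the point is the shape of the comparison, not the figures.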

  33. naath says:

    Hum.

    I do think it is bad to simply fire someone for being homophobic in an abstract way. Because that leads to firing people for being all sorts of other things.

    But:

    I think it is good to fire someone for being homophobic *in a way that costs you customers* (because that is a sound business decision) or *in a way that harms your other employees* (because otherwise they might all quit, and that would be bad).

    When it comes to people boycotting things… I think that’s a lot harder, actually. I think that “I will boycott you because BAD THING” (you are homophobic, so I won’t buy your stuff) shades into “I will not buy your stuff because the experience of buying it sucks” (for instance, if all your waitstaff are homophobic then I don’t want to eat in your cafe, because the experience of being there is unpleasant to me!), which shades into “your stuff sucks” (your food is crap, I’m not buying it). I think it’s hard to say “you should never decide not to buy from someone who is homophobic” without sorta incidentally implying “you should never decide not to buy from someone *for any reason*”.

    • nydwracu says:

      Sure, but those maxims universalize differently — and that’s the thing I like about Kant: while my years-old assessment that he was an obscurantist whose ethical writings are long-winded, incoherent (to the extent that I’ve met committed Kantians who didn’t know how to get from one form of the Categorical Imperative to the other) secularizations of a particularly bizarre form of Christianity still stands, I’ve come to understand that, despite that, he came pretty close to discovering a certain sort of applied political decision theory (there’s probably a more specific name for it but I don’t know what it is), and probably would have if not for the wacky deontology stuff.

      (I forget if I said this before, but when I was writing the comment that prompted this post, I realized that there were good strategic reasons to frame it as deontological even without actually believing in deontology. I can’t remember my exact reasoning, but my impression is that deontology is more common in ruling formulas than consequentialism; and I don’t know how it was back then, but today, the main formalist function of philosophy departments is the generation of potential ruling formulas. I recall reading something about — I think it was Derrida’s deconstruction, but I know this happened to object-oriented ontology — being attacked as insufficiently communist.)

      When you carry out an action, especially when you give a public justification for it, you’re not just carrying out an action; you’re shifting the norms of the society, changing the payoff matrix, and so on — so you have to make sure that you’re acting according to maxims that would universalize well. (This is totally consequentialist, of course. Maybe you could get a deontological version that makes the same point, but I am not a deontologist.) When Brendan Eich or Scott Eckern got fired for donating to Prop 8, those weren’t isolated incidents with no effects on anyone besides Eich and Eckern; they were displays of power by a particular political faction — actions based on the maxim “people who aren’t progressive should be fired”, which is generated by the meta-maxim “people who signal different political thede-affiliation shouldn’t make money”. So you had all these people going around boycotting Firefox because they fired Eich, and I opposed that at the time: it further entrenched the same meta-maxim that got Eich fired, except that the people uninstalling Firefox were on the side with less power, so reinforcing that meta-maxim will only screw them further.

      On another level: meta-maxims against centralization of power and punishment of political opponents tend to be adopted by people who don’t think they have sufficient power to make use of them, and meta-maxims toward it tend to be adopted by people who think they do. (Obvious example: the Skokie case. Another obvious example: the ACLU itself.) Of course, whether or not they think it is one thing, and whether or not they’re right is another — conservatives think they have the power to do this stuff, but if they do, they haven’t displayed it anywhere outside getting some North Carolina sheriff’s aide or whatever fired.

      (I wonder: if you gathered up a list of all those incidents and sorted them by side, what effect would show up from which side holds the White House?)

      Anyway: firing someone for being homophobic in a way that costs you customers is justified by the meta-maxim “fire people who cost you customers”, and firing someone for being homophobic in a way that harms all your other employees (NB: the degree of opposition to Eich from within Mozilla was vastly amplified by the media, and conflated with opposition to Eich from people outside Mozilla, and there were many pro-Eich statements that went totally unreported-on) is justified by the meta-maxim “fire people who harm all your other employees”. So that makes sense, and it’s slightly more difficult for a powerful faction to organize serious boycotts (see: Chick-Fil-A) or get enough entryists hired to make a case for firing the person the entryists want out than it is for them to go to the media and start a two-minutes hate about how this person should be fired for signaling against that powerful faction. Not that it can’t be done, of course.

      I did end up switching from Firefox, but because they made their UI almost identical to Chrome’s, and I hate Chrome and its UI. I suspect that there’s something interesting down this road (how can maxims and meta-maxims deal with preferences, especially when they’re of the sort to have political consequences?) but I haven’t had enough coffee yet to figure out what it might be.

      • This is why “everyone else is doing it” is a somewhat reasonable response from someone who’s accused of wrong-doing. They’re making an argument that they haven’t significantly shifted social norms.

        Someone who’d done something *unusually* bad would have done considerably more damage.

      • MugaSofer says:

        “a certain sort of applied political decision theory (there’s probably a more specific name for it but I don’t know what it is)”

        Timeless decision theory, perhaps?

    • MugaSofer says:

      The trouble is, some people will threaten to boycott anyone who doesn’t fire you for having (say) homophobic beliefs. Thus reducing *any* use of free speech to “costing them customers”.

      This is not a hypothetical; this is a real, current problem that needs to be addressed or there will be an increasing chilling effect on free speech.

      (Personally, I’m in favour of restricting which things you can fire someone for, so there’s no incentive to boycott. But you could argue the boycotts are what’s actually immoral, not the firings.)

  34. naath says:

    And as a separate point to my last.

    An interesting historical fact: once the Allies had cracked Enigma, they did not act to save every target the decrypts revealed, because saving every target would have told the Germans that Enigma was cracked. A choice that, whilst probably the right one, seems like it would have been very hard to actually make.

  35. jd_k says:

    (Kant notes that this also satisfies his original, stricter “self-defeating contradiction” criterion. If we all try to steal from each other, then private property becomes impossible, the economy collapses, and the stuff we want isn’t there to steal. I don’t know if I like this; it seems a little forced. But even if contradictoriness is forced, badness seems incontrovertible)

    I read “self-defeating contradiction” more literally — more “contradiction” and less “leads to bad stuff.”
    (a) To steal something is to take another person’s private property. If we all do this, then private property becomes impossible. But if private property is impossible, then so is stealing. Thus stealing, when universalized, is self-contradictory in that it denies the terms of its own possibility.
    (b) To lie is to deceive someone by making a falsehood appear to be a truth. If we all do this, then no one would trust what anyone says. But if we do not trust what anyone says because we know them to be lying all the time, then it is impossible to pass a falsehood off as a truth. Thus lying, when universalized, is self-contradictory in a similar way.
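
    Schematically (my notation, not Kant’s): let $M$ be a maxim, $U(M)$ the world in which everyone acts on $M$, and $\mathrm{Poss}(M)$ the precondition that the practice named in $M$ relies on. Then

    \[
    M \text{ fails the test} \iff \big( U(M) \rightarrow \neg \mathrm{Poss}(M) \big)
    \]

    For (a), $\mathrm{Poss}(M)$ is “private property exists”; for (b), it is “assertions are generally believed”. In both cases universalization destroys the very precondition.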

    I like your post quite a bit; I just wanted to point out this additional sense in which an ethical action could be considered self-contradictory when universalized.

    • Paul Torek says:

      I agree with your interpretation of Kant. However, complications quickly arise over who gets to define the action. Is lying, for example, necessarily about deception? Maybe the liar claims that he just likes saying the opposite of the truth sometimes, and he doesn’t care what the listener believes. If we define lying via deception, the falsehood-teller denies that he is “lying”. Etc.

      If Kant has a satisfactory answer, I’m not aware of it.

  36. Patrick says:

    Regarding applying the ‘veil of ignorance’ to interpersonal ethics, have you read What We Owe to Each Other by T.M. Scanlon? He develops this idea in great detail. I found it to be a very interesting read and highly recommend it.

  37. Pingback: Variations on a theme: heroic protectiveness | Carcinisation