Whose Utilitarianism?

[Trigger warning: attempt to ground morality]

God help me, I’m starting to have doubts about utilitarianism.

Whose Superstructure?

The first doubt is something like this. Utilitarianism requires a complicated superstructure – a set of meta-rules about how to determine utilitarian rules. You need to figure out which of people’s many conflicting types of desires are their true “preferences”, make some rules on how we’re going to aggregate utilities, come up with tricks to avoid the Repugnant Conclusion and Pascal’s Mugging, et cetera.

I have never been too bothered by this in a practical sense. I agree there’s probably no perfect Platonic way to derive this superstructure from first principles, but we can come up with hacks for it that produce good results. That is, given enough mathematical ingenuity, I could probably come up with a utilitarian superstructure that exactly satisfied my moral intuitions.

And if that’s what I want, great. But part of the promise of utilitarianism was that it was going to give me something more objective than just my moral intuitions. Don’t get me wrong; formalizing and consistency-ifying my moral intuitions would still be pretty cool. But that seems like a much less ambitious project. It is also a very personal project; other people’s moral intuitions may differ and this offers no means of judging the dispute.

Whose Preferences?

Suppose you go into cryosleep and wake up in the far future. The humans of this future spend all their time wireheading. And because for a while they felt sort of unsatisfied with wireheading, they took a break from their drug-induced stupors to genetically engineer all desires beyond wireheading out of themselves. They have neither the inclination nor even the ability to appreciate art, science, poetry, nature, love, etc. In fact, they have a second-order desire in favor of continuing to wirehead rather than having to deal with all of those things.

You happen to be a brilliant scientist, much smarter than all the drugged-up zombies around you. You can use your genius for one of two ends. First, you can build a better wireheading machine that increases the current run through people’s pleasure centers. Or you can come up with a form of reverse genetic engineering that makes people stop their wireheading and appreciate art, science, poetry, nature, love, etc again.

Utilitarianism says very strongly that the correct answer is the first one. My moral intuitions say very strongly that the correct answer is the second one. Once again, I notice that I don’t really care what utilitarianism says when it goes against my moral intuitions.

In fact, the entire power of utilitarianism seems to be that I like other people being happy and getting what they want. This allows me to pretend that my moral system is “do what makes other people happy and gives them what they want” even though it is actually “do what I like”. As soon as we come up with a situation where I no longer like other people getting what they want, utilitarianism no longer seems very attractive.

Whose Consequentialism?

It seems to boil down to something like this: I am only willing to accept utilitarianism when it matches my moral intuitions, or when I can hack it to conform to my moral intuitions. It usually does a good job of this, but sometimes it doesn’t, in which case I go with my moral intuitions over utilitarianism. This both means utilitarianism can’t ground my moral intuitions, and it means that if I’m honest I might as well just admit I’m following my own moral intuitions. Since I’m not claiming my moral intuitions are intuitions about anything, I am basically just following my own desires. What looked like it was a universal consequentialism is basically just my consequentialism with the agreement of the rest of the universe assumed.

Another way to put this is to say I am following a consequentialist maxim of “Maximize the world’s resemblance to W”, where W is the particular state of the world I think is best and most desirable.

This formulation makes “follow your own desires” actually not quite as bad as it sounds. Because I have a desire for reflective equilibrium, I can at least be smart about it. Instead of doing what I first-level-want, like spending money on a shiny new car for myself, I can say “What I seem to really want is other people being happy” and then go investigate efficient charity. This means I’m not quite an emotivist and I can still (for example) be wrong about what I want or engage in moral argumentation.
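
To make the maxim concrete, here is a toy sketch in Python (my own illustration, with invented features and numbers, not something the argument depends on): it represents world-states as small feature vectors, takes W as the desired state, and scores candidate actions by how closely their predicted outcomes resemble W. The second-order reflection above only changes which W gets plugged in, not the decision rule.

```python
# Toy sketch of "maximize the world's resemblance to W".
# The features, the distance metric, and the outcome predictions are all
# invented placeholders, not a claim about how this would really be computed.

def resemblance(state, W):
    """Negative squared distance between a world-state and the desired state W."""
    return -sum((state[k] - W[k]) ** 2 for k in W)

def choose_action(actions, predict_outcome, W):
    """Pick the action whose predicted outcome most resembles W."""
    return max(actions, key=lambda a: resemblance(predict_outcome(a), W))

# Reflective equilibrium enters by changing W: "what I really want is other
# people being happy", so that feature dominates the desired world-state.
W = {"others_happiness": 1.0, "own_shiny_car": 0.1}
outcomes = {
    "buy_car":            {"others_happiness": 0.2, "own_shiny_car": 1.0},
    "donate_efficiently": {"others_happiness": 0.9, "own_shiny_car": 0.0},
}
print(choose_action(list(outcomes), outcomes.get, W))  # -> donate_efficiently
```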

And it manages to (very technically) escape the charge of moral relativism too. I think of a relativist as saying “Well, I like a world of freedom and prosperity for all, but Hitler likes a world of genocide and hatred, and that’s okay too, so he can do that in Germany and I’ll do my thing over here.” But in fact if I’m trying to maximize the world’s resemblance to my desired world-state, I can say “Yeah, that’s a world without Hitler” and declare myself better than him, and try to fight him.

But what it’s obviously missing is objectivity. From an outside observer’s perspective, Hitler and I are following the same maxim and there’s no way she can pronounce one of us better than the other without having some desires herself. This is obviously a really undesirable feature in a moral system.

Whose Objectivity?

I’ve started reading proofs of an objective binding morality about the same way I read diagrams of perpetual motion machines: not with an attitude of “I wonder if this will work or not” but with one of “it will be a fun intellectual exercise to spot the mistake here”. So far I have yet to fail. But if there’s no objective binding morality, then the sort of intuitionism above is a good description of what moral actors are doing.

Can we cover it with any kind of veneer of objectivity more compelling than this? I think the answer is going to be “no”, but let’s at least try.

One idea is a post hoc consequentialism. Instead of taking everyone’s desires about everything, adding them up, and turning that into a verdict about what the state of the world should be, we take everyone’s desires about states of the world, then add all of those up. If you want the pie and I want the pie, we both get half of the pie, and we don’t feel a need to create an arbitrary number of people and give them each a tiny slice of the pie for complicated mathematical reasons.

This would “solve” the Repugnant Conclusion and Pascal’s Mugging, and at least change the nature of the problems around “preference” and “aggregation”. But it wouldn’t get rid of the main problem.
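
A minimal sketch of the contrast, with invented numbers for the pie case: classical total utilitarianism sums welfare over whoever would exist in each candidate world, while the “post hoc” rule has the people who exist now score whole worlds and sums those scores, so piling in extra people with tiny slices stops looking attractive.

```python
# Toy contrast between the two aggregation rules.  All numbers are invented.

# Per-capita welfare in each candidate world (the Repugnant Conclusion setup).
worlds = {
    "two_people_with_pie":   {"population": 2,         "welfare_per_person": 0.5},
    "a_million_with_crumbs": {"population": 1_000_000, "welfare_per_person": 0.000002},
}

# Classical total utilitarianism: sum welfare over whoever exists in each world.
def total_util(w):
    return worlds[w]["population"] * worlds[w]["welfare_per_person"]

# "Post hoc" aggregation: the people who exist *now* score whole worlds,
# and we just sum those scores.  Both current people mildly dislike crumb-world.
current_scores = {
    "Alice": {"two_people_with_pie": 0.5, "a_million_with_crumbs": -0.1},
    "Bob":   {"two_people_with_pie": 0.5, "a_million_with_crumbs": -0.1},
}
def post_hoc(w):
    return sum(person[w] for person in current_scores.values())

print(max(worlds, key=total_util))  # -> a_million_with_crumbs (total utility 2.0 vs 1.0)
print(max(worlds, key=post_hoc))    # -> two_people_with_pie   (score 1.0 vs -0.2)
```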

The other idea is a sort of morals as Platonic politics. Hobbes has this thing where we start in a state of nature, and then everybody signs a social contract to create a State because everyone benefits from the State’s existence. But because coordination is hard, the State is likely to be something simple like a monarchy or democracy, and the State might not necessarily do what any of the signatories to the contract want. And also no one actually signs the contract, they just sort of pretend that they did.

Suppose that Alice and Bob both have exactly the same moral intuitions/desires, except that they both want a certain pie. Every time the pie appears, they fight over it. If the fights are sufficiently bloody, and their preference for personal safety outweighs their preference for pie, it probably wouldn’t take too long for them to sign a contract agreeing to split the pie 50-50 (if one of them was a better fighter, the split might be different, but in the abstract let’s say 50-50).

Now suppose Alice is very pro-choice and slightly anti-religion, and Bob is slightly pro-life and very pro-religion. With rudimentary intuitionist morality, Alice goes around building abortion clinics and Bob burns them down, and Bob goes around building churches and Alice burns them down. If they can both trust each other, it probably won’t take long before they sign a contract where Alice agrees not to burn down any churches if Bob agrees not to burn down any abortion clinics.

Now abstract this to a civilization of a billion people, who happen to be divided into two equal (and well-mixed) groups, Alicians and Bobbites. These groups have no leadership, and no coordination, and they’re not made up of lawyers who can create ironclad contracts without any loopholes at all. If they had to actually come up with a contract (in this case maybe more of a treaty) they would fail miserably. But if they all had this internal drive that they should imagine the contract that would be signed among them if they could coordinate perfectly and come up with a perfect loophole-free contract, and then follow that, they would do pretty well.

Because most people’s intuitive morality is basically utilitarian [citation needed], most of these Platonic contracts will contain a term for people being equal even if everyone does not have an equal position in the contract. That is, even if 60% of the Alicians have guns but only 40% of the Bobbites do, if enough members of both sides believe that respecting people’s preferences is important, the contract won’t give the Alicians more concessions on that basis alone (that is, we’re imagining the contract real hypothetical people would sign, not the contract hypothetical hypothetical people from Economicsland who are utterly selfish would sign).

Whose Communion?

So what about the wireheading example from before?

Jennifer RM has been studying ecclesiology lately, which seems like an odd thing for an agnostic to study. I took a brief look at it just to see how crazy she was, and one of the things that stuck with me was the concept of communion. It seems (and I know no ecclesiology, so correct me if I’m wrong) motivated by the need to balance the desire to unite as many people as possible under a certain banner against the conflicting desire to have everyone united under the banner believe mostly the same things and not be at one another’s throats. So you say “This range of beliefs is acceptable and still in communion with us, but if you go outside that range, you’re out of our church.”

Moral contractualism offers a similar solution. The Alicians and Bobbites would sign a contract because the advantages of coordination are greater than the disadvantages of conflict. But there are certain cases in which you would sign a much weaker contract, maybe one to just not kill each other. And there are other cases still when you would just never sign a contract. My Platonic contract with the wireheaders is “no contract”. Given the difference in our moral beliefs, whatever advantages I can gain by cooperating with them about morality are outweighed by the fact that I want to destroy their entire society and rebuild it in my own image.
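
A toy rendering of that decision rule, with invented numbers on an arbitrary scale: for each group, weigh what coordination would buy against how far you would have to move your own morality, and sign the strongest contract whose gains still beat its costs.

```python
# Toy sketch of the "which contract, if any?" decision described above.
# Gains, costs, and thresholds are invented numbers on an arbitrary scale.

def platonic_contract(gain_from_coordination, cost_of_adjusting_my_morality):
    """Return the strongest contract whose coordination gains outweigh its costs."""
    net = gain_from_coordination - cost_of_adjusting_my_morality
    if net > 5:
        return "strong moral communion"
    elif net > 0:
        return "weak contract (e.g. just don't kill each other)"
    else:
        return "no contract"

print(platonic_contract(gain_from_coordination=10, cost_of_adjusting_my_morality=2))  # fellow Westerners
print(platonic_contract(gain_from_coordination=3,  cost_of_adjusting_my_morality=1))  # a more distant culture
print(platonic_contract(gain_from_coordination=1,  cost_of_adjusting_my_morality=9))  # the wireheaders
```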

I think it’s possible that all of humanity except psychopaths are in some form of weak moral communion with each other, at least of the “I won’t kill you if you don’t kill me” variety. I think certain other groups, maybe at the culture level (where culture = “the West”, “the Middle East”, “Christendom”), may be in some stronger form of moral communion with each other.

(note that “not in moral communion with” does not mean “have no obligations toward”. It may be that my moral communion with other Westerners contains an injunction not to oppress non-Westerners. It’s just that when adjusting my personal intuitive morality toward a morality I intend to actually practice, I only acausally adjust to those people whom I agree with enough already that the gain of having them acausally adjust toward me is greater than the cost of having me acausally adjust to them.)

In this system, an outside observer might be able to make a few more observations about the me-Hitler dispute. She might notice Hitler or his followers were in violation of Platonic contracts it would have been in their own interests to sign. Or she might notice that the moral communions of humanity split neatly into two groups: Nazis and everybody else.

I’m pretty sure that I am rehashing territory covered by other people; contractualism seems to be a thing, and a lot of people I’ve talked to have tried to ground morality in timeless something-or-other.

Still, this appeals to me as an attempt to ground morality which successfully replaces obvious logical errors with completely outlandish incomputability. That seems like maybe a step forward, or something?

EDIT: Clarification in my response to Kaj here.


86 Responses to Whose Utilitarianism?

  1. Qiaochu Yuan says:

    “Utilitarianism says very strongly that the correct answer is the first one. My moral intuitions say very strongly that the correct answer is the second one. Once again, I notice that I don’t really care what utilitarianism says when it goes against my moral intuitions.”

    The agents that inhabit this far future are not humans. You should regard them as having killed the humanity that you knew and loved and behave accordingly. I don’t see this as being inconsistent with utilitarianism; when I encounter strange agents with very different preferences from myself, I don’t automatically feel a need to satisfy them.

    • Scott Alexander says:

      Can you justify your human/nonhuman distinction in some way other than your own moral intuitions? And didn’t you support extending utilitarianism to bugs the last time we talked?

      Do you think your claim is a natural result of utilitarianism, or something you have to add on as a hack to continue making utilitarianism give only results that you like?

      • Kutta says:

        (Why on earth should any current human not rely on moral intuitions when thinking about personhood? Is there something else?)

      • Qiaochu Yuan says:

        Nope.

        I suggested that I was coming around to assigning moral value to dolphins, whales, and possibly pigs. No bugs, though. Definitely no bugs.

        I’m also never entirely sure what people mean by utilitarianism, so it’s possible that I’m just using the term incorrectly. Is it inaccurate to use it as a rough synonym for consequentialism? I use it to mean “behave to maximize my expected utility” without trying to be too specific about exactly what my utility function is (which seems hard). I don’t necessarily endorse total utilitarianism, average utilitarianism, or preference utilitarianism (which all seem to be making claims about my utility function that break down in extreme cases).

        • Sniffnoy says:

          Yes, that is using the term “utilitarianism” incorrectly. Utility functions in the VNM or Savage or etc. sense are not related to utilitarianism. Maximizing one of those is just what a rational consequentialist does (well, if we agree that the VNM axioms or Savage axioms or whatever are rational).

          Utilitarianism is specifically the cloud of ideas such as average utilitarianism, total utilitarianism, etc. — it involves some sort of computation over all people. “The greatest good for the greatest number.”

          So, utilitarianism is a particular variety of consequentialism. It is not defined by using VNM utility functions, which are just something any (VNM-)rational consequentialist uses. And be careful, because when utilitarians speak of “utility”, they may mean the thing they’re averaging/totalling/whatever — they may not mean VNM-utility.

          Yes, this terminology is annoying.

        • Qiaochu Yuan says:

          Oh, gotcha. In that case, let’s toss out utilitarianism.

    • Lucidian says:

      Then you get into the very difficult question of who counts as human. Your attitude towards the wireheaders seems analogous to the attitude of the Catholic Spanish conquistadors towards the natives they conquered. Except, instead of post-humans lacking all humanity, these were perceived as savage pre-humans lacking humanity.

      Scott’s question is “do we leave these people alone, or change their fundamental nature to be something we consider good?” The conquistadors were faced with the same question, and they chose to convert all the immoral savages into Christians so those people could be saved by Jesus.

      For the record, I’m with you and Scott and the conquistadors on this one. If you believe in something strongly enough, you have to fight for it, no matter what.

      • Qiaochu Yuan says:

        It’s debatable whether conquistadors were actually doing what they did for what they regarded as good reasons or whether they were just rationalizing their behavior, though. But in the sense that you mean, yes, I’m with the conquistadors too. Retrospective judgment is hard, but I don’t see a reason to retrospectively judge people who thought they were doing the right thing but whom we now think were doing the wrong thing (except signaling, of course).

    • jaimeastorga2000 says:

      One of the most important parts of the superstructure of Utilitarianism is who or what exactly counts as an entity with moral worth and how much moral worth they have. Whether a utilitarian counts fetuses, chimpanzees, other animals, women, people in comas, people with brain damage, Nazis, Hispanics, heavy metal listeners, possible people, future people, past people, wireheads, twins, a thousand identical emulations, non-neuromorphic AIs, and so on as having moral worth, individually or as a group, and to what degree relative to each other, is in my experience mostly arbitrarily chosen by the utilitarian in question based on his moral intuitions.

      • Qiaochu Yuan says:

        Sure. Is there any reason to expect something better than this?

        • jaimeastorga2000 says:

          Not that I know of, but things like that tend to annoy people who are trying to build some kind of objective morality from first principles. Then again, as far as I can tell, objective morality/moral realism is an inherently confused concept, just like magic.

    • Army1987 says:

      That’s pretty much what I was going to comment after reading those paragraphs.

  2. im says:

    Oh crud.

    Taking communion boundaries seriously might even out some of the ‘culture war’ style disputes that one has in society from time to time. I don’t know. One problem I see from time to time is an attempt to grow communion boundaries as large as possible without making their content explicit. This causes cataclysm when the conclusions are revealed.

    My response to the wireheaders is somewhere between prolonged disbelief that their preferences matter any more, and planning to branch off a non-wireheaded race from them while leaving the wireheaders alone. Of course nosy preferences might end up being a problem.

  3. Intrism says:

    But what it’s obviously missing is objectivity. […] This is obviously a really undesirable feature in a moral system.

    Why is it undesirable? Why do you want objectivity? What makes it better (for you, or for anyone else) than purely subjective formulations of morality?

    (As a note, the way I’d solve the problem in “Whose preferences?” is by assuming for the purposes of utilitarian calculation that everyone shares my utility function, even if they don’t.)

    • Scott Alexander says:

      “In my defense, Your Honor, I work on a moral system that permits me to assume the defendant also would have preferred I have his wallet.”

      • Berry says:

        That’s not how universal prescriptivism works, but it would make for a novel legal theory.

      • Intrism says:

        Then I guess the judge will just have to be objectively evil according to the defendant’s moral code, won’t he?

      • Deiseach says:

        G.K. Chesterton, “The Man Who Was Thursday”:

        Thieves respect property; they merely wish the property to become their property that they may more perfectly respect it.

        🙂

  4. Berry says:

    To me, it seems like you’ve misidentified the problem. You say that “[O]bjectivity…is obviously a really [desirable] feature in a moral system.” but I’d say that at that moment you should have gone meta, and you didn’t.

    What does it mean for a moral system to have desirable or undesirable features? Desirable for whom, is the question I see being begged. Do you mean that your intuitions feel like things that should be objective? Do you mean that you have a second-order desire for your desires to be a certain way, and for the desires of others to be a certain way? Do you mean that we should only accept as truthful moral systems that have this feature? As effective means of guiding our behavior, perhaps? (Then again, this formulation is also question-begging.)

    A few weeks ago you endorsed a moderate version of Verificationism. What possible test could you run that would discover whether or not objectivity is a “good” feature for a moral system to have or not? If your metric for success is how much the moral system expresses your moral intuition, then why talk about meta-ethics at all? Why not cut out the middleman, do what Ross did and say “Hey, these things are good, don’t argue with me, let’s talk about how to achieve them.” (Which is what you more or less did in your Consequentialism FAQ, IIRC)

    Have I asked enough questions? 🙂

    • Max says:

      Hope you don’t mind me jumping in. I have thought about precisely this question (What does it mean for a moral system to have desirable or undesirable features? Desirable for whom?). My take was that for me a goal of a moral system is to take fairly easy/moderate cases of my moral intuitions (learning data), formulate a system that has not too much complexity (in whatever sense; I’d guess Kolmogorov works); check that it works in other easy and maybe moderate cases (test data) and then guide me in the more difficult cases (prediction). I would even say this is how mathematical or scientific theories work as well.

      There is a trade-off between complexity and fit. “Cutting out the middleman” is opting for perfect fit but huge complexity. Some simple moral code would be opting for low complexity. That’s why every time a new element of meta-structure is added to, say, utilitarianism, it’s such a sad thing – the overall “complexity-fit” optimum takes a hit. At some point utilitarian theories may lose their lead – or maybe the “contractualism” will just outperform them all right away. Of course what is leading will depend on one’s moral intuitions. If your moral intuitions perfectly line up with average utilitarianism with humans as the only moral agents – lucky you.
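
      To make that trade-off concrete, here is a toy sketch with entirely invented fit and complexity numbers: score each candidate moral system by how well it fits the “training” intuitions minus a penalty proportional to its complexity, and pick the best-scoring one.

```python
# Toy sketch of the complexity-vs-fit trade-off described above.
# Candidate systems, their fits, and their complexities are all invented.

candidates = {
    # name: (fit_to_easy_intuitions in [0, 1], complexity on an arbitrary scale)
    "raw list of all my intuitions": (1.00, 100),  # perfect fit, huge complexity
    "simple moral code":             (0.60, 2),
    "utilitarianism plus patches":   (0.85, 20),
    "contractualism":                (0.80, 10),
}

def score(fit, complexity, penalty_per_unit=0.01):
    """Higher is better: reward fit, penalize complexity (a crude MDL-style trade-off)."""
    return fit - penalty_per_unit * complexity

best = max(candidates, key=lambda name: score(*candidates[name]))
print(best)  # -> "contractualism", given these made-up numbers
```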

    • Scott Alexander says:

      It’s a desirable feature for me, or at least one of the moral intuitions I have is that a moral system ought to be objective, and this is sufficiently important that I’d be willing to trade off a lot of other ways in which a moral system could match my desires in order to satisfy objectivity.

      This is part of why I’m saying I should be willing to compromise my morals in order to establish a communion of people who all agree on the same moral system and can act as if it’s objective.

      • novalis says:

        Wait, do you want it to be actually objective, or do you just want to act like it’s objective? Those are very different desires!

    • peterdjones says:

      Moral systems have a practical job to do, which is to ground and justify the apportioning of resources, awards and sanctions. Since these are objective…you are either in jail or out, you either get the whole cake or half…it is desirable for moral systems to be objective.

  5. jaimeastorga2000 says:

    “In this system, an outside observer might be able to make a few more observations about the me-Hitler dispute. She might notice Hitler or his followers were in violation of Platonic contracts it would have been in their own interests to sign. Or she might notice that the moral communions of humanity split neatly into two groups: Nazis and everybody else.”

    In the first case, the most the observer could say is that Hitler is irrational under a certain strain of decision theory which involves platonic contracts as a method of deciding which action to take. In the second, the observer might notice two different clusters of people in platonic contract space, but is still left with the problem of deciding which one she likes better. Neither of these seems to match at all with the intuitive idea of an objective morality.

    • endoself says:

      Agreed. If you’re going to derive something from first principles, you have to do a lot less anthropomorphising. Preferences that other people be happy and the sort of timeless acausal contracts you discuss here are both somewhat similar to usual concepts of morality, but they are not the same thing and, in particular, they are not the same as each other. You can’t carry anything over intuitively without checking to see that it’s still justified.

    • Deiseach says:

      There is also the problem that, from Hitler’s point of view, he is not merely interested in “genocide and hatred”; he too (according to his lights) is trying to achieve a world of freedom and prosperity for all, and he is trying to maximize the world to his preferred state.

      How or on what basis, then, does the objective observer decide that Hitler’s methods are inferior to Scott’s methods? Maybe the world would be better without these kinds of/groups of/numbers of people inhabiting it; how can you tell that the Thousand-Year Reich would be a worse state for the surviving populations (once all the naturally inferior and criminal types have been eliminated, of course)?

      You need something to judge between Scott and Hitler other than “Okay, Hitler is gassing the undesirables but Scott* would just put a drug into the water so they couldn’t reproduce and would die off naturally, so this is a kinder and less painful method and so Scott wins”.

      *’Scott’ not intended to represent or attribute any views to the real Scott, just using the name as the opposite side in the dispute

  6. Sniffnoy says:

    I’m a bit confused about your comment regarding what you call “post hoc consequentialism”. (Actually, I’m not too clear on why you call it that either, but I guess that’s not the point. I don’t think that’s a good name anyway because it makes it sound like it’s some general thing that can be applied to consequentialism in general, when really it’s a version of utilitarianism in particular; people get those mixed up often enough already.)

    So first let me say I like this idea, it does seem to avert some of the problems of utilitarianism. Though I don’t think that really has much to do with utilitarianism in particular; I suspect sticking to state-of-the-world-based preferences produces more sensible results in general. (Although perhaps that should be history-and-trajectory-of-the-world-based?) Uhhh. I think I have seen stuff about this before on LW but am failing to find it right now.

    That said I don’t see how it averts the particular problems you say it does. Changing “everybody’s preferences about everything” to “everybody’s preferences about the state of the world” doesn’t change who’s considered in “everybody”, so it seems like if you accept that you were obligated to make more people and divide up the pie more ways before, you still are afterward. Well, OK, not necessarily. I can see a way or two it could affect it. But it’s certainly not obvious; it’s something that needs to be explicitly argued, I think.

    Meanwhile Pascal’s Mugging isn’t even really related to utilitarianism in particular.

    (And, I’m not a utilitarian, but I don’t think the Repugnant Conclusion is actually that much of a problem because — well, see Eliezer Yudkowsky’s argument here.)

    What this change does seem good for is preventing things like “I should wirehead everyone” or “I should build an Experience Machine and force everyone into it”, etc. But then doesn’t preference utilitarianism already take care of that sort of thing? Now I’m wondering why I thought this change was a good one in the first place. Crap. Wish I could find the thing I mentioned earlier.

    I guess this comment ended up a bit incoherent. Oh well.

  7. Fnord says:

    I don’t think any of your issues are unique to utilitarianism, rather than morality (or at least moral realism) in general.

    • Scott Alexander says:

      Well yes, everything else is obviously wrong. It’s only when something that looked right turns out to be wrong that it’s worrying.

  8. Charlie says:

    But, of course, after you have done all this, an outside observer without any standards of her own still couldn’t say you were better than Hitler, and if that troubled you before why shouldn’t it keep troubling you? It’s the ol’ Open Question problem, which reflectively improving your own moral system will not solve.

    One simply has to accept that the “objective moral observer with no desires of her own” is not some perfect judge in white robes and a halo, but is instead someone whose moral bits have been cut out and replaced with Jell-O.

    • Scott Alexander says:

      Yeah, but we do want to have something that allows us to say things like “Hitler shouldn’t kill all those people”, and it would be really nice if it was something stronger than “I don’t like Hitler killing people.”

      • Charlie says:

        Hm, I dunno, I’ve gotten kind of attached to not having morality be somewhere outside me. After all, what if we discover the True Source Of Morality and it tells us to do bad stuff?

        (This is a restatement of the Open Question argument)

        • MugaSofer says:

          Pretty sure that the True Source of Morality telling us “to do bad stuff” is a contradiction in terms. Equivalently, you want morality to be “inside you”, but what if you wanted to do bad stuff?

        • Steve says:

          Most people’s morality isn’t contained solely inside themselves. If you could gain 10 utils by depriving others of 100 utils, without suffering any consequences–even acausal ones–would you do so?

          It’s impossible to answer “no” unless you have an externally located morality, at least to the degree that Scott’s described here.

        • peterdjones says:

          What if we discover the True Physics, and it goes against our intuitions? We did, and it did, and we accepted our intuitions are wrong. You should be worrying about the rightness of your intuitions, not about the intuitiveness of the truth.

  9. novalis says:

    Because most people’s intuitive morality is basically utilitarian [citation needed]

    This article contains both a citation for that claim, and a serious challenge to the idea that intuitive morality is anything like consistent enough to be taken seriously:
    http://philosopherinthemirror.wordpress.com/2013/03/01/moral-mindsets-how-remembering-our-past-actions-drives-our-future-behaviour/

  10. asdf says:

    Your moral intuitions contain information that you can’t consciously classify. Thus it is sometimes a more accurate view of the world than your conscious utilitarian brain can come up with.

    • Scott Alexander says:

      ACCURATE BY WHAT STANDARD??!

      • asdf says:

        If you could easily quantify and classify it then it would probably make its way into your conscious brain.

        When I used to play poker professionally I would occasionally get strong impressions about a hand that I couldn’t explain. Often these impressions would be counter to my usual logical decision making process. However, they were also usually correct.

        I had acquired, through a great deal of experience, some intuition. And my intuition was sometimes more accurate than my logical brain (if making money on poker is to be “the standard”). Similarly your moral intuitions often have much wisdom your conscious brain doesn’t.

  11. DB says:

    I’d frame the overarching problem as “build a stronger Schelling point”.

    • jaimeastorga2000 says:

      If you really want a Schelling point, figuring out some way of formalizing value-space, plotting people in it, and looking for clusters seems like a better solution. Something like Coherent Volition without Extrapolation.

      But why would you even want a Schelling point anyway? Who are you cooperating with, towards what goal, what power do they hold that makes cooperating with them a good idea, and is a Schelling point the best way to coordinate cooperation with them in the first place?
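
      A rough sketch of what “plotting people in value-space and looking for clusters” might look like, with invented value dimensions and coordinates, and a bare-bones clustering routine standing in for whatever formalization you would actually want:

```python
# Toy sketch of "plot people in value-space and look for clusters".
# The value dimensions, the people, and their coordinates are all invented.

import random

people = {
    # (freedom, tradition, wireheading) -- made-up value dimensions
    "Alice":     (0.9, 0.2, 0.1),
    "Bob":       (0.8, 0.8, 0.1),
    "Carol":     (0.9, 0.3, 0.0),
    "Wirehead1": (0.1, 0.0, 1.0),
    "Wirehead2": (0.2, 0.1, 0.9),
}

def dist2(a, b):
    """Squared Euclidean distance between two value vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k=2, steps=20, seed=0):
    """Bare-bones k-means: assign each person to the nearest centroid, recompute, repeat."""
    random.seed(seed)
    centroids = random.sample(list(points.values()), k)
    clusters = {}
    for _ in range(steps):
        clusters = {i: [] for i in range(k)}
        for name, p in points.items():
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(name)
        for i, members in clusters.items():
            if members:
                pts = [points[m] for m in members]
                centroids[i] = tuple(sum(dim) / len(pts) for dim in zip(*pts))
    return clusters

print(kmeans(people))  # e.g. one cluster of humans, one cluster of wireheads
```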

  12. Jed Harris says:

    Good stuff! Working upward from the bottom:

    I’m not worried about incomputability. Approximations will often be good enough. Plus, we can already see there are tractable subcases — moral communions that are fairly stable — and that our toolkit for building better communions has been improving.

    Why objectivity? (a number of comments ask.) Because it is an important tool in building and extending moral communions. Objectivity should be read more as “reliably reproducible intersubjectivity” — which is very helpful if we want to bring others into our communion or create a communion that embraces some others.

    More generally, I like this because it is a stepping stone to an engineering theory of morality — how to create more effective, more general, and more good conducive moral communions.

    The last attribute is recursive in an interesting way — of course “good” is defined within the communion itself. But none the less communions can be more or less conducive to the goods they define in various ways. All other things being equal, the easier a communion is to extend, the more good it creates, since new members (by definition) try to achieve the same goods. Also, all other things being equal, the more effective members of a communion are in achieving its goods, the more good conducive it is.

    Of course this way of handling morality also has pathologies – religious wars being one. Interestingly the results of pathologies are typically regarded as bads even from within the communions that contribute to them – though as lesser bads than the alternatives. So communions might evolve, and sometimes have evolved, to be more good conducive by reducing their pathologies.

  13. Kaj Sotala says:

    Wait, what?

    You’re doubting utilitarianism because it isn’t as objective as you thought it was? I’m seriously confused now. I was certain that you’d be in the “objective morality is an oxymoron” camp, which I’d taken as the LW consensus (with some prominent dissenters).

    But upon reflection, I realize that I should have seen the other signs of that not necessarily being the full consensus a long time ago. Which I guess should make me reduce my confidence in “objective morality is an oxymoron”, which I had so far assigned something like a >99% certainty. Huh.

    • Scott Alexander says:

      Uh, a different kind of objectivity is the problem here. Like I thought you couldn’t prove utilitarianism was right, but once you accepted it, you could use the single word “utilitarianism” to instantly derive an elegant moral system out of thin air, which was the obvious Schelling point for anyone and would correspond to all my moral intuitions.

      Instead, it turns out I basically have to enter in all my moral intuitions by hand, and what I’m doing is *obviously* just doing what I want in a way that doesn’t make a convenient Schelling point at all.

      It’s like…if we found a giant glowing tablet at the center of the galaxy that corresponded to all my moral beliefs, I couldn’t *prove* that tablet was right, but it would sure be nice. On the other hand, if I learned I had written that tablet myself and covered it in glow-in-the-dark paint and blasted it to the center of the galaxy without anyone noticing and then taken drugs to forget the whole affair, that would be really disappointing and diminish the tablet’s authority.

      • Kaj Sotala says:

        Huh, okay. Still, I thought that “there isn’t any single elegant ethical system like that and that’s why we probably need a superintelligence to do something complicated like CEV and even then it might still be impossible to get a consistent morality that would incorporate all of our moral intuitions” was the very point of all of Eliezer’s “value is complex” posts. Did you disagree with them all along, or did you just read them differently?

        • Scott Alexander says:

          I associated CEV with the type of utilitarianism I’m now more doubtful of. It gives a lot of wiggle room to the people deciding exactly how the volitions should cohere, and if it takes too much input from people I’m not in “moral communion” with I might hate the result.

          I also think there’s somewhat different criteria for the political problem of “How to make something govern well” and the moral problem of “What is right?”, though I’m becoming less certain.

        • Kaj Sotala says:

          I associated CEV with the type of utilitarianism I’m now more doubtful of.

          Ah, so you figured that CEV could be something like an objective solution for the problem of aggregating human values, and thought of that as “utilitarianism”? Okay, that makes more sense. I used to think that way too, but eventually realized that you have to make a bunch of value-laden decisions while defining CEV in any case, so it doesn’t really work very objectively either. (E.g. the notion that our values should be extrapolated in the first place is already a very value-laden notion, to say nothing about picking the criteria according to which they should be extrapolated…)

      • Misha says:

        So basically you want “utilitarianism” or whatever moral system you end up on to have a bible you can point to that corresponds to the ad hoc one in your brain without requiring anyone to actually scan your brain.

        Put this way: a good moral system should enable hyper rationality or at least allow entities to approximate it.

      • Jack says:

        I thought you couldn’t prove utilitarianism was right, but once you accepted it, you could use the single word “utilitarianism” to instantly derive an elegant moral system out of thin air, which was the obvious Schelling point for anyone and would correspond to all my moral intuitions.

        Instead, it turns out I basically have to enter in all my moral intuitions by hand…

        That’s an awesome description, which should probably be used to lead off these debates.

        My impression is that if you start sharing some assumptions (e.g. a fixed number of people, agreement about what’s a person, and agreement about what’s utility-positive and what’s utility-negative), then utilitarianism helpfully says that what’s important about a decision is what utility it produces.

        E.g. as opposed to having a fixed set of commandments that everyone obeys, which works quite well, but only as long as society never changes.

        However, it seems like most explanations of utilitarianism ignore questions like:

        * Should we have more people or fewer
        * Who/what counts as a “person”
        * Can people have different amounts of total utility?
        * What happens if we disagree about how much utility something is?

        And some people have good ideas for answering those questions, but there’s certainly not been any answers sufficiently conclusive they’ve filtered down to my wikipedia-reading level of knowledge about moral philosophy.

        And I rather think those questions are difficult and complicated in their own right.

        However, I think the good thing is that in the real world, we _usually_ do in fact agree about utility. Something like abortion is controversial because people supposedly disagree about what counts as a “person”, and debate whether an unformed person’s existence is inherently utility-ful. But for a lot of questions we generally agree what a good outcome would be; we just disagree about who should get it.

      • endoself says:

        Even classical utilitarianism isn’t this nice; you need to look at a blob of neurons and neurotransmitters and pull out a number representing the high-level concept of happiness. It feels like you know how happy you are from the inside, but lots of things feel simple from the inside.

  14. St. Rev says:

    I’d suggest interrogating your intuitions about the morality of Wirehead World, before using those intuitions to reject utilitarianism. My own intuition says it’s bad because it’s unstable; it has no conscious, aware defenders and maintainers. OK, let’s pursue that intuition further: take that objection away. What if Wirehead World is perfectly safe and stable, world without end? Is it still wrong?

    Well, maybe, but it’s also isomorphic to many concepts of heaven. Hmm.

  15. Tom Hunt says:

    It strikes me that any attempt to codify moral systems at all runs into an unavoidable metaproblem, i.e. that which just asks “Why?” over and over again. The argument:

    1. It’s quite obvious (unless you’re a theist, I suppose) that there’s no actual measurable objectivity to morality; no quality of the universe differentiates you from Hitler, or a pile of bodies from a crowd of people; the only distinction that can be drawn between these that’s useful on a moral level is how various minds react to them.

    2. There are no universally compelling arguments; for any moral argument you make, which must per the above boil down to an argument about the preferences of minds, you can always come up with a hypothetical mind that has a different preference, and there’s no objective reason to prioritize your own reaction above that of the other mind.

    3. One-level-meta arguments such as utilitarianism, which says (roughly and with variance) “we should strive to satisfy the most people’s preferences”, are arguing circularly. They define the satisfaction of people’s preferences as “good”, then use the satisfaction of people’s preferences in their definition of what “good” is. It essentially boils down to another case of “because I said so”, or just following desires. (The crippling question, as always, is just “Why?” Why is satisfying people’s preferences a good?)

    From here I basically end up punting on the question of systematizing morality, because from what I can see it’s impossible. “Good” and “evil” are shorthands for “utility in my utility function” and “disutility in my utility function”. Individual utility functions can, in fact really should, have terms about the happiness of others, which prevents it from being entirely selfish; however, there’s nothing there about the happiness of others being an inherent good, only it leading to utility in your utility function. Of course, humans not actually being AIs that rigorously compute anything, the terms in your utility function unrelated to your (immediate or time-horizoned) personal utility end up simply being your moral intuitions. This whole argument hasn’t particularly changed the way I feel about anything morally (being, essentially, just an excuse to condone what I already believe), but it has enabled me to shut down a lot of essentially manipulative arguments that try to make me accept some Grand Unified Moral System, then force me to endorse its edge cases. (Also a lot of plain whining that doesn’t actually rise to the level of “argument”, though this may make you sound like an uncaring bastard in conversation.)

    • Berry says:

      This sounds remarkably like Eliezer’s metaethics sequence (or at least, what I got out of it.)

    • Paul Torek says:

      Your whole argument rests on an unacceptable premise tightly linking morality to preference. Basically a strong form of meta-ethical motivational internalism.

      It doesn’t matter that there are no universally compelling arguments. We already knew that. The fact that psychopaths aren’t convinced by a moral argument doesn’t mean that the moral argument was defective. The fact that they would prefer the social rules to be more tolerant of exploitation, doesn’t mean those rules are morally defective. Nor does it mean that their exploitative acts are morally right. Those acts might be perfectly rational, but that’s a different kettle of fish.

    • peterdjones says:

      You say morality isn’t objective because it isn’t an empirically detectable property.
      But it could be objective logically.

      You say there are no universally compelling arguments. But an argument only has to compel sane and rational minds, and rationality is a value.

      You say good is a shorthand for what you want, but it isn’t, because you can want what is not good.

  16. komponisto says:

    As I alluded to recently in a comment on Overcoming Bias, I’m coming to see utilitarianism as a collection of theories that say that the utility function is a simple counter of some kind, such as “number of non-suicidal people”, “number of dust-specks prevented”, or (for that matter) “number of paperclips”. So, if you’re having problems with it, well, you should be.

    The tension between EY’s complexity-of-value thesis and his apparent endorsement of utilitarianism in some places is probably the biggest problem in the Sequences. He has noticed the contradiction (and has given it the name “Pascal’s Mugging”), but I’m not sure that he has noticed that he has noticed, if you see what I mean. (Pascal’s Mugging is mostly treated on LW as an epistemic problem, which in my view kind of misses the point.)

    • endoself says:

      Eliezer uses ‘utilitarianism’ to mean ‘consequentialism’, though he does endorse some of the counting ideas that you are talking about.

  17. Pingback: Whose Utilitarianism? « Random Ramblings of Rude Reality

  18. Patrick (orthonormal) says:

    Yep, there’s no rescuing the project of Benthamite utilitarianism in a future where self-modification into various kinds of posthumans is possible. But it’s OK, because my personal consequentialist priorities for the world include a preference for people to be happy, achieve meaningful things, and have agency, and I can cooperate quite well with people who have compatible preferences.

    Your various ways of approaching this all seem to be dancing around the idea of decision theory: agents with individual utility functions (which refer to the welfare of other agents as well as themselves) finding the most effective ways to compromise with the other agents whose utility functions agree to various degrees.

  19. Sam Burnstein says:

    Yes, yes! I can feel you slipping to the dark side! Embrace Error Theory! You know it to be true.

    All moral statements are false. It’s like arguing about the proper classification of fairies.

  20. spandrell says:

    If an objective moral system were easy to come up with, we wouldn’t be where we are at, right?

    Utilitarianism is bankrupt as is everything else.

    • Berry says:

      If you don’t have an objective system of morality, from what perspective could anything be bankrupt other than purely your opinion?

  21. Romeo Stevens says:

    >This allows me to pretend

    Probably, morality seems to be about signalling what sort of ally you would be.

  22. Gilbert says:

    I think the “term for people being equal” hides an ambiguity. Most people probably have a term for people being equal in the agreed moral rules but not for having equal influence on those rules. So basically if the Alicians have more guns than the Bobbites the Platonic contract will look a lot more like Alicianism than Bobbism. In more practical terms, very few people think what they regard as rights issues should be left to the democratic process. And the wirehead example is skewed by them having no chance of winning. If they had such a chance you might be a lot more interested in a contract.

  23. bilbo says:

    The fundamental issue here seems to be that the wireheads problem doesn’t have a stable solution: if you became wired up, you’d probably agree afterward that your utility increased. Likewise if you freed some wireheads and modified their brains to appreciate art, they’d probably agree that you increased their utility. But then if you wired them up again they’d once again say it increased, rather than decreasing to its original value.

    The problem is that changes in both directions (wirehead to non-wirehead and back) appear to increase utility, as judged by the person after the fact rather than before it (which is usually the measure I use; it’s kinda related to the “Coherent extrapolated volition utilitarianism” you mention in the FAQ). Change in utility is now not simply a function of where you are in societal parameter space, it’s also a function of how you got there, like an integral over the complex numbers or a non-conservative vector field.
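
    (Here is a toy illustration of that path dependence, with invented numbers: each transition is scored by the person you are after it, so a round trip sums to a positive total instead of zero.)

```python
# Toy numbers for the path dependence described above: each transition is
# scored by the person you are *after* it, so a full loop can sum to more than zero.

after_the_fact_gain = {
    ("art_lover", "wirehead"): +3,  # the new wirehead calls this an improvement
    ("wirehead", "art_lover"): +2,  # the restored art lover says the same about the reverse
}
loop = [("art_lover", "wirehead"), ("wirehead", "art_lover")]
print(sum(after_the_fact_gain[step] for step in loop))  # +5, not 0: a "non-conservative" loop
```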

    Maybe the most moral thing to do is to rapidly oscillate between the two types of society, building up a never ending history of increasing utility.

    Or maybe you should just measure the subjective change in utilities for each direction, and stick on whichever side has the greatest change in utility from the previous one. And if they’re equal, then hey, both societies are equally moral.

    However because the two societies find each other abhorrent (presumably), the smaller society (you, in the example where it’s just you) should make the switch, since homogeneity is in both utility functions. And if the two societies are of equal size, flip a coin.

  24. Randy M says:

    I’m not sure if you are being tongue-in-cheek with the whole ‘trigger warning’ thing, or you believe you have the most delicate of readerships imaginable.

    What exactly is an attempt to ground morality likely to trigger that must be warned against?

  25. Sam Rosen says:

    I always thought Hofstadter’s superrationality was a pretty objective way to ground certain aspects of our moral intuitions. What are your thoughts on that?

    But, preference-utilitarianism is a closer map to my moral instincts. I mean I can’t imagine a more lovely system for organizing humans than one that tries to get people as much of what they want as it can.

  26. Bruno Coelho says:

    The point of (rule) utilitarianism is: you’re not allowed to be an egoist if this will lead to a set of immoral actions. Contrasting your preferences with the utilitarian theories will surely cause some trouble.

    One point most moral theories have in common is that people have to sacrifice something (normally, hedonistic preferences) for the greater good.

  27. Andrew Rettek says:

    I was thinking about this post recently; you said that you would take your moral intuitions over things like what utilitarianism would tell you. My question is, as you’ve studied moral systems, have your moral intuitions changed? I noticed mine have at a deep level. My guess is that studying something like utilitarianism doesn’t give you a command above your moral intuitions, but shapes your intuitions to conform with its own frame.

  28. Paul Torek says:

    Scott, you may be reinventing the wheel of contractualism, but you’re doing a fine job of it so far. Maybe read some giants, then try standing on their shoulders?

  29. I think this post is an important step on your journey towards expressivist moral nihilism. Nothing except atoms and the void, baby.

  30. Pingback: The disappointing rightness of Scott Alexander | The Last Conformer

  31. Doug S. says:

    Where is it written that if a reasoned argument reaches a conclusion that an individual does not like, that this proves that the reasoned argument must be flawed? People have an annoying tendency of asserting that our “moral intuitions” are so flawless that if any reasoned argument comes into conflict with a moral intuition that the moral intuition must be preserved.

    I hold that moral intuitions are nothing but learned prejudices. Historic examples from slavery to the divine right of kings to tortured confessions of witchcraft or Judaism to the subjugation of women to genocide all point to the fallibility of these ‘moral intuitions’. There is absolutely no sense to the claim that their conclusions are to be adopted before those of a reasoned argument.

    In fact, the prejudice that we have ‘moral intuitions’ that are superior to any type of reasoned argument is a groundless conceit – something children should be warned against the instant they can understand the warning.

    Alonzo Fyfe

  32. peterdjones says:

    Re uncomputability: do you realise that in tackling uncomputability by approximations and rules of thumb you are heading at least halfway towards deontology?

  33. James says:

    I wonder whether you’d get anything out of reading Richard Rorty. In particular, his book Contingency, Irony and Solidarity deals in large part with this kind of issue.

    He makes the case that we ought to acknowledge that our moral intuitions aren’t amenable to grounding in irrefutable, logical arguments (he calls this antifoundationalism). Having done so, we can instead focus on honing and broadening those intuitions — an exercise more likely to benefit from things like novels or documentaries and more in the realm of ‘judgement’ (even ‘taste’) than logical argument.

    He doesn’t seem to get much mention in the rationalist/LW part of town, perhaps because he’s coming from more of a humanities-ish, ‘theory’-ish background – more continental than analytic. I think he’s sound, though.

  34. blacktrance says:

    For more on this, I recommend reading David Gauthier’s “Morals by Agreement”. It gets a little mathy at times, and I don’t agree with all of its conclusions, but it does something like adapting Hobbesianism to the present day. You may find it of interest.

  35. blacktrance says:

    It’s worth noting that while there are some similarities between utilitarianism and contractarianism (what you’re describing sounds more like contractarianism than contractualism, but I may be wrong) in that they’re both consequentialist, they sometimes lead to wildly different conclusions. In your example of the Alicians and the Bobbites, they agree to mutual non-aggression because they both gain from it. A utilitarian would perhaps come to the same conclusion. However, when there aren’t benefits from cooperation, the contractarian and utilitarian conclusions may be strongly at odds. In particular, when it comes to animal suffering – unlike the Alicians and the Bobbites, humans don’t have anything to gain from trying to cooperate with cows/pigs/etc, so they can go on eating them if they want, while a utilitarian would say that even if humans have to give up some utility, eating meat would be wrong because it reduces net utility.

    In short, contractarianism is about maximizing your utility, specifically about when you should restrict yourself in exchange for others restricting themselves. Utilitarianism is about maximizing world utility.

  36. Lorxus says:

    You know… this whole idea of an abstract Platonic pretend contract sounds a hell of a lot like coherent extrapolated volition. Also, there is nothing wrong with wanting to become an immortal god-king(/queen/Grand High Poobah/non-/other-gendered term of royalty of choice) and remake large parts of your local universe in your image. I myself want to do so when I grow up.