The Invisible Nation – Reconciling Utilitarianism And Contractualism

[Attempt to derive morality from first principles, totally ignoring that this should be impossible. Based on economics and game theory, both of which I have only a minimal understanding of. And mixes complicated chains of argument with poetry without warning. So, basically, it’s philosophy. And it’s philosophy I get the feeling David Gauthier may have already done much better, but I haven’t read him yet and wanted to get this down first to avoid bias towards consensus]

Related to: Whose Utilitarianism?, You Kant Dismiss Universalizability, Meditations on Moloch

I.

Imagine the Economists’ Paradise.

In the Economists’ Paradise, all transactions are voluntary and honest. All game-theoretic problems are solved. All Pareto improvements get made. All Kaldor-Hicks improvements get converted into Pareto improvements by distributing appropriate compensation, and then get made. In all cases where people could gain by cooperating, they cooperate. In all tragedies of the commons, everyone agrees to share the commons according to some reasonable plan. Nobody uses force, everyone keeps their agreements. Multipolar traps turn to gardens, Moloch is defeated for all time.

The Economists’ Paradise is stronger than the Libertarians’ Paradise, which is just a place where no one initiates force and all economic transactions are legal, because the Libertarians’ Paradise might still have a bunch of Prisoner’s Dilemmas and the Economists’ Paradise wouldn’t. But it is weaker than the Utilitarians’ Paradise, because people with more power and money still get more of the eventual utility.

From a god’s-eye view, it seems relatively easy to create the Economists’ Paradise. It might be hard to figure out how to solve game theoretic problems in absolutely ideal ways, but it’s often very easy to figure out how to solve them in a much better way than the uncoordinated participants are doing right now (see the beginning of Part III of Meditations on Moloch). At the extreme of this way of thinking, we have Formalism, where just solving the problem, even in a very silly way, is still better than having the question remain open.

(a coin flip is the epitome of unintelligent problem solving, but flipping a coin to decide whether the Senkaku/Diaoyu Islands go to Japan or China still beats having World War III, by a large margin)

The Economists’ Paradise is a pretty big step of the way toward actual paradise. Certainly there won’t be any wars or crime. But can we get more ambitious?

Will the Economists’ Paradise solve world hunger? I say it will. The argument is essentially the one in Part 2.4 of the Non-Libertarian FAQ. Suppose solving world hunger costs $50 billion per year, which I think is people’s actual best-guess estimate. And suppose that half the one billion people in the First World are willing to make some minimal contribution to solving world hunger. If each of those people can contribute $2 per week, that suffices to raise the necessary amount. On the other hand, the $50 billion cost is the cost in our world. In the Economists’ Paradise, where there are no corrupt warlords or bribe-seeking bureaucrats, and where we can just trust people to line themselves up in order of neediest to least needy, the whole task gets that much easier. In fact, it’s not obvious that the First World wouldn’t come up with their $50 billion only to have the Third World say “Thanks, but we kind of sorted out our problems and became an economic powerhouse.”
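
(For the arithmetic-inclined, here is a back-of-the-envelope check of those numbers in Python; the figures are just the illustrative assumptions above, not real estimates.)

```python
# Back-of-the-envelope check of the world hunger figures above (illustrative assumptions only).
cost_of_solving_world_hunger = 50e9   # dollars per year, the assumed figure
first_world_population = 1e9          # people
willing_fraction = 0.5                # half are willing to make a minimal contribution
contribution_per_week = 2             # dollars
weeks_per_year = 52

raised_per_year = first_world_population * willing_fraction * contribution_per_week * weeks_per_year
print(f"Raised per year: ${raised_per_year / 1e9:.0f} billion")                         # $52 billion
print("Covers the $50 billion cost:", raised_per_year >= cost_of_solving_world_hunger)  # True
```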

Let’s get more ambitious. Will there be bullying in the Economists’ Paradise? I just mean your basic bullying, walking over to someone who’s ugly and saying “You’re ugly, you ugly ugly person!” I say there won’t be. How would a perfect solution to all coordination problems end bullying? Simple! If the majority of the population disagrees with bullying, they can sign an agreement among themselves not to bully, and to ostracize anyone who does. Everyone will of course keep their agreement (by the definition of Economists’ Paradise) and anyone who reports to the collective that Bob is a bully will always be telling the truth (by the definition of Economists’ Paradise). The collective will therefore ostracize Bob, and faced with the prospect of never being able to interact with the majority of human beings ever again, Bob will apologize and sign an agreement never to bully again (which he will keep, by the definition of Economists’ Paradise). Since everyone knows this will happen, no one bullies in the first place.

So the Economists’ Paradise is actually a very big step of the way toward actual paradise, to the point where the differences start to look like splitting hairs.

The difference between us and the Economists’ Paradise isn’t increased wealth or fancy technology or immortality. It’s rule-following. If God were to tell everybody the rules they needed to follow to create the Economists’ Paradise, and everyone were to follow them, that would suffice to create it.

That suggests two problems with setting up Economists’ Paradise. We need to know what the rules are, and we need to convince people to follow them.

These are more closely linked than one would think. For example, both Japan and China might prefer that the Senkaku Islands be clearly given to the other according to a fair set of rules which might benefit them the next time, rather than fight World War III over the issue. So if the rules existed, people might follow them for the very reason that they exist. This is why, despite the Senkaku Island conflict, most islands are not the object of international tension – because there are clear rules about who should have them and everybody prefers following the rules to the sorts of conflicts that would happen if the rules didn’t exist.

II.

There’s a hilarious tactic one can use to defend consequentialism. Someone says “Consequentialism must be wrong, because if we acted in a consequentialist manner, it would cause Horrible Thing X.” Maybe X is half the population enslaving the other half, or everyone wireheading, or people being murdered for their organs. You answer “Is Horrible Thing X good?” They say “Of course not!”. You answer “Then good consequentialists wouldn’t act in such a way as to cause it, would they?”

In the same spirit: should the State legislate morality?

“Of course not! I don’t want the State telling me whom I can and can’t sleep with.”

So do you believe that it’s immoral, genuinely immoral, to sleep with the people whom you want to sleep with? Do you think sleeping with people is morally wrong?

“What? No! Of course not!”

Then the State legislating morality isn’t going to restrict whom you can sleep with, is it?

“But if the State legislated everything, I would have no freedom left!”

Is taking away all your freedom moral?

“No!”

Then the State’s not going to do that, is it?

By this sort of argument, it seems to me like there are no good philosophical objections to a perfect State legislating the correct morality. Indeed, this seems like an ideal situation; the good are rewarded, the wicked punished, and society behaves in a perfectly moral way (whatever that is).

The arguments against the State legislating morality are in my opinion entirely contingent ones, based around the fact that the State isn’t perfect and the correct morality isn’t known with certainty. Get rid of these caveats, and moral law and state law would be one and the same.

Letting the State enforce moral laws has some really big advantages. It means the rules are publicly known (you can look them up in a lawbook somewhere) and effectively enforced (by scary men with guns). This is great.

But using the State to enforce rules also fails in some very important ways.

First, it means someone has to decide in what cases the rules were broken. That means you either need to depend on fallible, easily biased human judgment – subject to all its racism, nepotism, tribalism, and whatever – or algorithmize the rules so that “be nice” gets formalized into a two thousand page definition of niceness so rigorous that even a racist nepotist tribalist judge doesn’t have any leeway to let your characteristics bias her assessment of whether you broke the niceness rules.

Second, transaction costs. Suppose in every interaction you had with another person, you needed to check a two thousand page algorithm to see if their actions corresponded to the Legal Definition of Niceness. Then if they didn’t, you needed to call the police to get them arrested, have them sit in jail for two weeks (or pay the appropriate bail) until they can get to trial. The trial itself is a drawn-out affair with celebrity lawyers on both sides. Finally, the judge pronounces verdict: you really should have said “please” when you asked her to pass the salt. Sentence: twelve milliseconds of jail time.

Third, it is written: “If you like laws and sausages, you should never watch either one being made.” The law-making apparatus of most states – stick four hundred heavily-bribed people who hate each other’s guts in a room and see what happens – fails to inspire full confidence that its results will perfectly conform to ideal game theoretic principles.

Fourth, most states are somewhere on a spectrum between “socially contracted regimes enforcing correct game theoretic principles among their citizens” and “violent psychopaths killing everybody and stealing their stuff”, and it has been historically kind of hard to get the first part right without also empowering the proponents of the second.

So it’s – surprise, surprise – a tradeoff.

There’s a bunch of rules which, followed universally, would lead to the Economists’ Paradise. If the importance of keeping these rules agreed-upon and well-enforced outweighs the dangers of algorithmization, transaction costs, poor implementation, and tyranny, we make them State Laws. In an ideal state with very low transaction costs, minimal risk of tyranny, and legislative excellence, the cost of the tradeoff goes down and we can reap gains by making more of them State Laws. In a terrible state with high transaction costs that has been completely hijacked by self-interest, the cost of the tradeoff goes up and fewer of them should be State Laws.

III.

Let’s return to the bullying example from Part I.

It would seem there ought not to be bullying in the Economists’ Paradise. For if most people dislike bullying, they can coordinate an alliance to not bully one another, and to punish any bullies they find.

On the contrary, suppose there are two well-delineated groups of people, Jocks and Nerds. Jocks are bullies and have no fear of being bullied themselves; they also don’t care about social exclusion by the Nerds against them. Nerds are victims of bullies and never bully others; their exclusion does not harm the Jocks. Now it seems that there might be bullying, for although all the Nerds would agree not to bully, and to exclude all bullies, and although all the Jocks might coordinate an alliance not to bully other Jocks, there is nothing preventing the Jocks from bullying the Nerds.

I answer that there are several practical considerations that would prevent such a situation from coming up. The most important is that if bullying is negative-sum – that is, if it hurts the victim more than it helps the bully – then it’s an area ripe for Kaldor-Hicks improvement. Suppose there is anything at all the Nerds have that the Jocks want. For example, suppose that the Nerds are good at fixing people’s broken computers, and that a Jock gains more utility from knowing he can get his computer fixed whenever he needs it than from knowing he can bully Nerds if he wants. Now there is the opportunity for a deal in which the Nerds agree to fix the Jocks’ computers in exchange for not being bullied. This is a Pareto improvement: the Nerds’ lives are better because they avoid bullying, and the Jocks’ lives are better because they get their computers fixed.
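
(A minimal sketch of that logic, with made-up utility numbers purely for illustration: as long as bullying is negative-sum, there is some transfer, computer-fixing or anything else, that leaves both sides strictly better off.)

```python
# Toy Kaldor-Hicks-to-Pareto check with invented utilities (illustration only).
jock_gain_from_bullying = 3     # utility the Jock gets from bullying a Nerd
nerd_loss_from_bullying = 10    # utility the Nerd loses when bullied; negative-sum since 10 > 3

def deal_is_pareto_improvement(value_to_jock, cost_to_nerd):
    """The Nerd provides something (computer-fixing, money) in exchange for no bullying.
    Both are better off iff the transfer is worth at least as much to the Jock as bullying,
    and costs the Nerd less than being bullied would."""
    return value_to_jock >= jock_gain_from_bullying and cost_to_nerd < nerd_loss_from_bullying

# e.g. computer-fixing worth 5 to the Jock that costs the Nerd 2 in effort:
print(deal_is_pareto_improvement(value_to_jock=5, cost_to_nerd=2))  # True: both sides gain
```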

Objection: numerous problems prevent this from working in real life. Nerds and Jocks aren’t coherent blocs, and bullies are bad negotiators. More fundamentally, this is essentially paying tribute, and on the “millions for defense, not one cent for tribute” principle, you should never pay tribute or else you encourage people who wouldn’t have threatened you otherwise to threaten you just for the tribute. But the assumption that Economists’ Paradise solves all game theoretic problems solves these as well. We’re assuming everyone who should coordinate can coordinate, everyone who should negotiate does negotiate, and everyone who should make precommitments does make precommitments.

A more fundamental objection: what if Nerds can’t fix computers, or Jocks don’t have them? In this case, the tribute analogy saves us: Nerds can just pay Jocks a certain amount of money not to be bullied. Any advantage or power whatsoever that Nerds have can be converted to money and used to prevent bullying. This sounds morally repugnant to us, but in a world where blackmail and incentivizing bad behavior are assumed away by fiat, it’s just another kind of Pareto improvement, certainly better than the case where Nerds spend their money on things they want less than freedom from bullying and get bullied anyway. And because of our Economists’ Paradise assumption, Jocks charge a fair tribute rate – exactly the amount of money it really costs to compensate them for the utility they would get by beating up Nerds – and feel no temptation to extort more.

Now, I’m not sure bullying would even come up as an option in an Economists’ Paradise, because if it’s a zero- or negative-sum game of trying to get status among your fellow Jocks, the Jocks might ban it on their own as a waste of time. But even if Jocks do get some small amount of positive utility out of it directly, we should expect bullying to stop in an Economists’ Paradise as long as Nerds control even a tiny amount of useful resources they can use to placate the Jocks. If Nerds control no resources whatsoever, or so few resources that they don’t have enough left to pay tribute after they’ve finished buying more important things, then we can’t be sure there won’t be bullying – this is where the Economists’ Paradise starts to differ from the Utilitarians’ Paradise – but we’ll return to this possibility later.

Now I want to highlight a phrase I just used in this argument.

“If bullying is negative-sum – that is, if it hurts the victim more than it helps the bully – then it’s an area ripe for Kaldor-Hicks improvement”

This looks a lot like (naive) utilitarianism!

What it’s saying is “If bullying decreases utility (by hurting the Nerd more than it helps the Jock) then bullying should not exist. If bullying increases utility (by helping the Jock more than it hurts the Nerd) then maybe bullying should exist.” Or, to simplify and generalize, “do actions that increase utility, but not other actions.”

Can we derive utilitarian results by assuming Economists’ Paradise? In many cases, yes. Suppose trolley problems are a frequent problem in your society. In particular, about once a day there is a runaway trolley heading down Track A, which has ten people on it, but divertible to Track B, which has one (explaining why this happens so often and so consistently is left as an exercise for the reader). Suppose you’re getting up in the morning and preparing to walk to work. You know a trolley problem will probably happen today, but you don’t know which track you’ll be on.

Eleven people in this position might agree to the following pact: “Each of us has a 91% chance of surviving if the driver chooses to flip the switch, but only a 9% chance of surviving if she chooses not to. Therefore, we all agree to this solemn pact that encourages the driver to flip the switch. Whichever of us will be on Track B hereby waives his right to life in this circumstance, and will encourage the driver to switch as loudly as all of the rest of us.”

If the driver were presented with this pact, it’s hard to imagine her not switching to Track B. But if the eleven Trolley Problem candidates were permitted to make such a pact before the dilemma started, it’s hard to imagine that they wouldn’t. Therefore, the Economists’ Paradise assumption of perfect coordination produces the correct utilitarian result to the trolley problem. The same methodology can be extended to utilitarianism in a lot of other contexts.
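
(A quick check of the survival numbers in that pact, assuming each of the eleven people is equally likely to be the one on Track B.)

```python
from fractions import Fraction

people = 11                           # ten on Track A, one on Track B
chance_on_track_b = Fraction(1, people)

survival_if_switch = 1 - chance_on_track_b    # you die only if you are the one on Track B
survival_if_no_switch = chance_on_track_b     # you survive only if you are the one on Track B

print(f"Survival if the driver switches:        {float(survival_if_switch):.0%}")    # 91%
print(f"Survival if the driver does not switch: {float(survival_if_no_switch):.0%}")  # 9%
```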

Now we can go back to that problem from before: what if Nerds have literally nothing Jocks want, and Jocks haven’t decided among themselves that bullying is a stupid status game that wastes their time, and we’re otherwise in the Least Convenient Possible World with regard to stopping bullying. Is there any way the Economists’ Paradise assumption solves the problem then?

Maybe. Just go around to little kids, age two or so, and say “Look. At this point, you really don’t know whether you’re going to grow up to be a Jock or a Nerd. You want to sign this pact in which everyone who grows up to be a Jock promises not to bully anyone who grows up to be a Nerd?” Keeping the same assumption that bullying is on net negative utility, we expect the toddlers to sign. Yeah, in the real world two-year-olds aren’t the best moral reasoners, but good thing we’re in Economists’ Paradise where we assume such problems away by fiat.

Is there an Even Less Convenient Possible World? Suppose bullying is racist rather than popularity-based, with all the White kids bullying the Black kids. You go to the toddlers, and the White toddlers retort “Even at this age, we know very well that we’re White, thank you very much.”

So just approach them in the womb, where it’s too dark to see skin color. If we’re letting two year olds sign contracts, why not fetuses?

Okay. One reason might be because we’ve just locked ourselves into being fanatically pro-life merely by starting with weird assumptions. Another reason might be that we could counterfactually mug fetuses by saying stuff like “You’re definitely a human, but for all you know the world is ruled by Lizardmen with only a small human slave population, and if Lizardmen exist then they will torture any humans who did not agree in the womb that, upon being born and finding that Lizardmen did not exist, they would spend all their time and energy trying to create Lizardmen.”

(Frick. I think I just created a new basilisk by breeding the Rokolisk and the story of 9-tsiak. Good thing it only works on fetuses.)

(I wonder if this is the first time in history anyone has ever used the phrase “counterfactually mug fetuses” as part of a serious intellectual argument.)

So I’m not saying this theory doesn’t have any holes in it. I’m just saying that it seems, at least in principle, like the idea of Economists’ Paradise might be sufficient to derive Rawls’ Veil of Ignorance, which in turn bridges the chasm that separates it from Utilitarians’ Paradise.

IV.

I think this is the solution to the various questions raised in You Kant Dismiss Universalizability. The reason universalizability is important is that the universal maxims are the agreements that everyone or nearly everyone would sign. This leads naturally to something like utilitarianism for the reasons mentioned in Part III. And it doesn’t produce the weird paradoxes like “If morality is universalizability, how do you know whether a policeman overpowering and imprisoning a criminal universalizes to ‘police should be able to overpower and imprison criminals’ or ‘everyone should be able to overpower and imprison everyone else’?” Everyone would sign an agreement allowing the first, but not the second.

But before we really explore this, a few words on “everyone would sign”.

Suppose one very stubborn annoying person in Economists’ Paradise refused to sign an agreement that police should be allowed to arrest criminals. Now what?

“All game theory is solved perfectly” is a really powerful assumption, and the rest of the world has a lot of leverage over this one person. Suppose everyone else said “You know, we’re all signing an agreement that none of us are going to murder one another, but we’re not going to let you into that agreement unless you also sign this agreement which is very important to us.”

Actually, that sounds too evil and blackmailing. There’s a better way to think of it. Suppose there are one hundred agreements. 99% of the population agrees to each, and in fact it’s a different 99% each time. That is, divide the population into one hundred sets of 1%, and each set will oppose exactly one of the agreements – there is no one who opposes two or more. Each agreement only works (or works best) when one hundred percent of the population agrees to it.

Very likely everyone will strike a deal in which each of the one hundred 1% blocs agrees to give up its resistance to the one agreement it doesn’t like, in exchange for each of the other ninety-nine 1% blocs giving up its resistance to the agreements they don’t like.
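
(A toy model of that deal, with payoffs I am making up purely for illustration: if each agreement is worth one unit to a bloc that favors it, and the one disliked agreement costs a bloc less than the combined benefit of the ninety-nine it does want, every bloc comes out ahead by signing the whole package rather than holding out.)

```python
# Toy logrolling model with invented payoffs (illustration only).
num_blocs = 100
benefit_per_liked_agreement = 1    # utility a bloc gets from each agreement it favors
cost_of_disliked_agreement = 20    # utility a bloc loses from the one agreement it opposes

# Without the package deal, every agreement is blocked by its one opposing bloc, so
# (on the assumption that agreements only work with 100% sign-on) nobody gets anything.
payoff_without_deal = 0

# With the package deal, each bloc swallows its one disliked agreement and gets the other 99.
payoff_with_deal = (num_blocs - 1) * benefit_per_liked_agreement - cost_of_disliked_agreement

print(payoff_with_deal, ">", payoff_without_deal, "for every bloc:",
      payoff_with_deal > payoff_without_deal)   # 79 > 0 for every bloc: True
```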

Now we’re getting into meta-level Pareto improvements. If a pact would be positive-sum for people to agree on, the proponents of the pact can offer everyone else some compensation for signing the pact. In theory it could be money or computer-fixing, but it might also be agreement with some of their preferred pacts.

There are a few possible outcomes of this process in Platonic Economists’ Paradise, all of them interesting.

One is a patchwork of agreements, where everyone has to remember that they’ve signed agreements 5, 12, 98, and 12,671, but their next-door neighbor has signed agreements 6, 12, 40, and 4,660,102, so they and their neighbor are bound to cooperate on 12 but no others.

Another is that everyone is able to get their desired pacts to cohere into a single really big pact that they are all able to sign off upon. Maybe there are a few stragglers who reject it at first, but this ends up being a terrible idea because now they’re not bound by really important agreements like “don’t murder” or “don’t steal”, so eventually they give in.

A third possibility combining the other two offers a unifying principle behind Whose Utilitarianism and Archipelago and Atomic Communitarianism. Everyone agrees to some very basic principles of respecting one another (call them “Noahide Laws”) but smaller communities agree to stricter rules that allow them to do their own thing.

But we don’t live in Platonic Economists’ Paradise. We live in the real world, where transaction costs are high and people have limited brainpower. Even if we were to try to instantiate Economists’ Paradise, it couldn’t be the one where we all have the complex interlocking patchwork agreements between one another. People wouldn’t sign off on it. Heck, I wouldn’t sign off on it. I would say “I’m not signing this until I have something that makes sense to me and can be implemented in a reasonable amount of time and doesn’t require me to check the List Of Everybody In The World before I know whether the guy next to me is going to murder me or not.” Practical concerns provide a very strong incentive to reject the patchwork solution and force everyone to cohere. So in practice – and I realize how hokey it is to keep talking about game-theoretically-perfect infinitely-rational infinitely-honest agents negotiating all possible agreements among one another, and then add on the term “in practice” to represent that they have trouble remembering what they decided – but in practice they would all have very large incentives to cohere upon a single solution that balances out all of their concerns.

We can think of this as moving along an axis from “Platonic” to “practical”. As we progress further, complicated agreements collapse into simpler agreements which are less perfect but easier to enforce and remember. We start to make judicious use of Schelling fences. We move from everyone in the world agreeing on exactly what people can and can’t do to things like “Well, you know your intuitive sense of niceness? You follow that with me, and I’ll follow that with you, and we’ll assume everyone else is in on the deal until they prove they aren’t.”

A metaphor: in a dream, your soul goes to Economists’ Paradise and agrees on the perfect patchwork of maxims with all the other souls there. But as dawn approaches, you realize when you awaken you will never remember all of what you agreed upon, and even worse, all the other souls there are going to wake up and not remember what they agreed upon either. So all of you together frantically try to compress your wisdom into a couple of sentences that the waking mind will be able to recall and follow, and you end up with platitudes like “Use your intuitive sense of niceness” and “do unto others as you would have others do unto you” and “try to maximize utility” and “anybody who treats you badly, assume they’re not in on the deal and feel free to treat them badly too, but not so badly that you feel like you can murder them or something.”

A particularly good platitude/compression might be “Work very hard to cultivate the mysterious skill of figuring out what people in the Economists’ Paradise would agree to, then do those things.” If you’re Greek, you can even compress it into a single word: phronesis.

V.

So by now it’s probably pretty obvious that this is an attempt to ground morality. I think the general term for the philosophical school involved is “contractualism”.

Many rationalists seem to operate on something like R.M. Hare’s two-level utilitarianism. That is, utilitarianism is the correct base level of morality, but it’s very hard to do, so in reality you’ve got to make do with less precise but more computationally tractable heuristics, like deontology and virtue ethics. Occasionally, when deontology or virtue ethics contradict themselves, each other, or your intuitions, you may have to sit down and actually do the utilitarianism as best you can, even though it will be inconvenient and very philosophically difficult.

For example, deontology may say things like “You must never kill another human being.” But in the trolley problem, the correct deontological action seems to violate our moral intuitions. So we go up a level, calculate the utility (which in this case is very easy, because it’s a toy problem invented entirely for the purposes of having easy utility calculation) and say “Huh, this appears to be one of those rare places where our deontological heuristics go wrong.” Then we switch the trolley.

But utilitarianism famously has problems of its own. You need a working definition of utility, which means not only distinguishing between hedonic utilitarianism, preference utilitarianism, etc., but coming up with a consistent model for measuring the strength of happiness and preferences. You need to distinguish between total utilitarianism, average utilitarianism, and a couple of other options I forget right now. You need a discount rate. You need to know whether creating new people counts as a utility gain or not, and whether removing people (isn’t that a nice euphemism) can even be counted as a negative if you make sure to do it painlessly and without any grief to those who remain alive. You need a generalized solution to Pascal’s Wagers and utility monsters. You need to know whether to accept or fudge away weird results like the claim that you may be morally obligated to spend your entire life maximizing anti-malaria donations. All of this is easy at the tails and near-impossible at the margins.

My previous philosophy was “Yeah, it’s hard, but I bet with sufficient intelligence, we can think up a consistent version of utilitarianism with enough epicycles that it produces an answer to all of these issues that most people would recognize as at least kind of sane. Then we can just go with that one.”

I still believe this. But that consistent version would probably fill a book. The question is: what is the person who decides what to put in this book doing? On what grounds are they saying “total utilitarianism is a better choice than average utilitarianism”? It can’t be on utilitarian grounds, because you can’t use utilitarian grounds until you’ve figured out utilitarianism, which you haven’t done until you’ve got the book. When God was deciding what to put in the Bible, He needed some criteria other than “make the decision according to Biblical principles”.

The standard answer is “we are starting with our moral intuitions, then simplifying them to a smaller number of axioms which eventually produce them”. But if the axioms fill a book and are full of epicycles to address individual problems, we’re not doing a very good job.

I mean, it’s still better than just trying to sort out all individual issues like “what is a just war?” on their own, because people will answer that question according to their personal prejudices (is my tribe winning it? Then it is so, so just) and if we force them to write the utilitarianism book at least they’ve got to come up with consistent principles and stick to them. But it is highly suboptimal.

And I wonder whether maybe the base level, the one that actually grounds utilitarianism, is contractualism. The idea of a Platonic parliament in which we try to enact all beneficial agreements. Under this model, utilitarianism, deontology, and virtue ethics would all be different heuristics that we use to approximate contractualism, the fragments we remember from our beautiful dream of Paradise.

I realize this is kind of annoying, especially in the sense of “the next person who comes along can say that utilitarianism, deontology, virtue ethics, and contractualism are heuristics for whatever moral theory they like, which is The Real Thing”. But the idea can do work! In particular, it might help resolve some of the standard paradoxes of utilitarianism.

First, are we morally obligated to wirehead everyone and convert the entire universe into hedonium? Well, would you sign that contract?

Second, is there anything wrong with killing people painlessly if they won’t be missed? After all, it doesn’t seem to cause any pain or suffering, or even violate any preferences – at least insofar as your victim isn’t around to have their preferences violated. Well, would you sign a contract in which everyone agrees not to do that?

Third, are we morally obligated to create more and more people with slightly above zero utility, until we are in an overcrowded slum world with everyone stuck at just-above-subsistence level (the Repugnant Conclusion)? Well, if you were making an agreement with everyone else about what the population level should be, would you suggest we do that? Or would you suggest we avoid it?

(this can be complicated by asking whether potential people get a seat in this negotiation, but Carl Shulman has a neat way to solve that problem)

Fourth, the classic problem of defining utility. If utility can be defined ordinally but not cardinally (i.e., you can declare that stubbing your toe is worse than a dust speck in the eye, but you can’t say something like it’s exactly 2.6 negative utilons) then utilitarianism becomes very hard. But contractualism doesn’t become any harder, except insofar as it’s harder to use utilitarianism as a heuristic for it.
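
(A minimal sketch of why the contract test gets by on ordinal information alone; the preferences here are invented for the example. Each person only has to compare two whole outcomes, the world with the pact and the world without it, whereas a utilitarian sum needs a cardinal number for everyone.)

```python
# Deciding whether everyone would sign needs only pairwise (ordinal) comparisons.
# Invented preferences for illustration: True means that person prefers the pact to the status quo.
prefers_pact = {"Alice": True, "Bob": True, "Carol": True}

def everyone_would_sign(preferences):
    """The contractualist test: unanimous ordinal agreement, no cardinal utilities required."""
    return all(preferences.values())

print(everyone_would_sign(prefers_pact))  # True

# A utilitarian calculation, by contrast, needs cardinal numbers (e.g. {"Alice": -2.6, ...})
# before it can sum anything, and producing those numbers is exactly the hard part.
```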

I am not actually sure these problems are really being solved, as opposed to my just being led astray because contractualism is harder to model than utilitarianism, which makes it easier for me to imagine the problems solved. But at the very least, it might be that contractualism is a different angle from which to attack these problems.

Of course, contractualism has problems of its own. It might be that different ways of doing the negotiations would lead to very different results. It might also be that the results would be very path-dependent, so that making one agreement first would end with a totally different result than making another agreement first. And this would be a good time to admit I don’t know that much formal game theory, but I do know there are multiple Nash equilibria and Pareto-optimal endpoints in a lot of problems and that in general there’s no such thing as “the correct game theoretic solution to this problem”, only solutions that fit more or fewer desirability criteria.
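
(To make the multiple-equilibria point concrete, here is the textbook Battle of the Sexes game, with the standard toy payoffs rather than anything from this post: a brute-force check finds two pure-strategy Nash equilibria, neither of which Pareto-dominates the other, so there is no unique “correct” solution to point to.)

```python
from itertools import product

# Battle of the Sexes: two players each choose "Opera" or "Football".
# payoffs[(row_choice, col_choice)] = (row player's utility, column player's utility)
payoffs = {
    ("Opera", "Opera"):       (2, 1),
    ("Football", "Football"): (1, 2),
    ("Opera", "Football"):    (0, 0),
    ("Football", "Opera"):    (0, 0),
}
choices = ["Opera", "Football"]

def is_nash(row, col):
    """Neither player can gain by unilaterally deviating from (row, col)."""
    row_ok = all(payoffs[(alt, col)][0] <= payoffs[(row, col)][0] for alt in choices)
    col_ok = all(payoffs[(row, alt)][1] <= payoffs[(row, col)][1] for alt in choices)
    return row_ok and col_ok

equilibria = [cell for cell in product(choices, choices) if is_nash(*cell)]
print(equilibria)  # [('Opera', 'Opera'), ('Football', 'Football')]: two equilibria, no single answer
```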

But to some degree this maps onto our intuitions about morality. One of the harder-to-believe things about utilitarianism was that it suggested there was exactly one best state of the universe. Our intuitions are very good at saying that certain hellish dystopias are very bad, and certain paradises are very good, but extrapolating them out to say there’s a single best state is iffy at best. So maybe the ability of rigorous game theory to end in a multitude of possible good outcomes is a feature and not a bug.

I don’t know if it’s possible for certain negotiation techniques to end in extreme local minima where things don’t end up as a paradise at all. I mean, I know there’s lots of horrible game theory like the Prisoner’s Dilemma and the Pirate’s Dilemma and so on, but I’m defining the “good game theory” of the Economists’ Paradise to mean exactly the rules and coordination power you need to not do those kinds of things.

But there’s also a meta-level escape vent. If a certain set of negotiation techniques would lead to a local minimum where everything is Pareto-optimal but nobody is happy, then everyone would coordinate to sign a pact not to use those negotiation techniques.

VI.

To sum up:

The Economists’ Paradise of solved coordination problems would be enough to keep everyone happy and prosperous and free. We ourselves could live in that paradise if we followed its rules, which involve negotiation of and adherence to agreements according to good economics and game theory, but these rules are hard to determine and hard to enforce.

We can sort of guess at what some of these rules might be, and when we do that we can try to follow them. Some rules lend themselves to State enforcement. Others don’t, and we have to follow them quietly in the privacy of our own hearts. Sometimes the rules include rules about ostracizing or criticizing those who don’t follow the rules, and so even the ones the State can’t enforce are sorta kinda enforceable. Then we can spread them through a series of walled gardens, spontaneous order, and divine intervention.

The exact nature of the rules is computationally intractable and so we use heuristics most of the time. Through practical wisdom, game theory, and moral philosophy, we can improve our heuristics and approximate the rules more closely, with corresponding benefits for society. Utilitarianism is one especially good heuristic for the rules, but it’s also kind of computationally intractable. Utilitarianism helps us approximate contractualism, and contractualism helps us resolve some of the problems of utilitarianism.

One problem of utilitarianism I didn’t talk about is that it isn’t very inspirational. Following divine law is inspirational. Trying to become a better person, a heroic person, is inspirational. Utilitarianism sounds too much like math. I think contractualism solves this problem too.

Consider. There is an Invisible Nation. It is not a democracy, per se, but it is something of a republic, where each of us is represented by a wiser, stronger version of ourselves who fights for our preferences to be enacted into law. Its legislature is untainted by partisanship, perfectly efficient, incorruptible, without greed, without tyranny. Its bylaws are the laws of mathematics; its Capitol Building stands at the center of Platonia.

All good people are patriots of the Invisible Nation. All the visible nations of the world – America, Canada, Russia – are properly understood to be its provinces, tasked with executing its laws as best they can, and with proper consideration to the unique needs of the local populace. Some provinces are more loyal than others. Some seem to be in outright rebellion. The laws of the Invisible Nation contain provisions about what to do with provinces in rebellion, but they are vague and difficult to interpret, and its patriots can disagree on what they are.

Maybe one day we will create a superintelligence that tries something like Coherent Extrapolated Volition – which I think we have just rederived, kind of by accident. The various viceroys and regents will hand over their scepters, and the Invisible Nation will stand suddenly revealed to the mortal eye. Until then, we see through a glass darkly. As we learn more about our fellow citizens, as we gain new modalities of interacting with them like writing, television, the Internet – as we start crystallizing concepts like rights and utility and coordination – we become a little better able to guess.


216 Responses to The Invisible Nation – Reconciling Utilitarianism And Contractualism

  1. Carl Shulman says:

    “There is a consensus in the rationalist community on something like R.M. Hare’s two-level utilitarianism.”

    There isn’t. See, e.g. your survey.

  2. Emile says:

    I would not sign the dust speck thing, I’d rather have a 1/3^^^3 chance of being tortured than a dust speck in the eye (well – I might sign if everybody wants me to, I don’t have a strong preference …).

  3. Carl Shulman says:

    “Sixth, the torture vs. dust specks problem. Give 3^^^3 people a dust speck in the eye, or torture one person for fifty years. I am pretty sure the 3^^^3+1 people involved would all agree (before they knew who was whom) that they would prefer the dust speck to the 1/3^^^3 chance of getting tortured; with the appropriate contract in hand, whoever is in charge of making the decision could just enforce their united will.”

    This is crazy. We routinely trade off ‘tiny’ risks of painful death (e.g. 1 in a few billion) for gains like an extra second spent outside. 1/3^^^3 is small beyond human intuitive comprehensibility, but it is insanely smaller than the actual risks we accept for minute gains in our standard of living.

    If you repeated the tradeoff a million times that incomprehensibly vast number of people would lose more than a week of their lives while the risk of an individual being part of the 1,000,000/3^^^3 would still be far less than the risk of meteorites spelling out the King James Bible on the Moon by chance. Scale it up several more orders of magnitude and people spend their whole lives blinking out specks.

    Philosophers arguing for aggregation commonly use arguments similar to yours above to argue that we should aggregate, e.g. with speed limits we could prevent painful deaths with lower speed limits but at the expense of time and convenience. Accepting a certain rate of deadly, crippling, and agonizing car accidents (incomprehensibly greater than 1/3^^^3) for time is something we do all the time.

    Here’s Alistair Norcross on the tradeoffs in speed limits, headaches, and human lives:

    http://www.colorado.edu/philosophy/heathwood/6100/Norcross%20-%20Comparing%20Harms–Headaches%20and%20Human%20Lives.pdf

    • Hainish says:

      “This is crazy. We routinely trade off ‘tiny’ risks of painful death (e.g. 1 in a few billion) for gains like an extra second spent outside.”

      True…but we don’t sign contracts doing so. (If my driving behavior were specified by this type of contract, it would almost certainly be a lot safer/slower.)

      • Vulture says:

        Because you’re trapped in a Molochian positional value-sinkhole against other drivers, or because your preferences are inconsistent?

        • Hainish says:

          Probably because my preferences are inconsistent. BUT, I tend to think that most people’s preferences are inconsistent in the exact same way!

    • Scott Alexander says:

      You are right and I am sorry.

      I think the insight I was trying to express (which is not how I phrased it) is that people would sign the contract saying they would rather have everyone get dust specks than one person be tortured. They would even do so if they were not among the 3^^^3+1 people.

  4. Carl Shulman says:

    “That is, would you make a deal with a nonexistent person that whichever one of you ends up existing will work to bring the other into existence? This seems like an excellent reason not to invite nonexistent people to Parliament. More seriously, I have no desire that I be brought into existence if I did not already exist. I know that other people believe differently, but I hope that in a debate on this matter I would be able to convince them. Therefore, I would not feel obligated to sign a contract promising to bring others into existence.”

    Compare:

    “That is, would you make a deal with a non-aristocratic person that whichever one of you ends up in the feudal aristocracy will work to bring the other into the nobility? This seems like an excellent reason not to invite peasants to Parliament. More seriously, I have no desire that I be brought into power and luxury if I did not already have them. I know that other people believe differently, but I hope that in a debate on this matter I would be able to convince them. Therefore, I would not feel obligated to sign a contract promising to lift others up from the peasantry.”

    You get at this a bit in the post, but most of the substantive conclusions you draw depend sensitively on assigning wacky decision rules to contracting agents, or on excluding entities from the original position.

    If you could have a 1/10^20 chance of a few moments of bliss as a tiny short-lived thread running in hedonium, or a 1/10^40 chance of a normal human life, which would you choose?

    Some related posts:

    http://reflectivedisequilibrium.blogspot.com/2012/05/utilitarianism-contractualism-and-self.html
    http://reflectivedisequilibrium.blogspot.com/2012/07/rawls-original-position-potential.html

    • Scott Alexander says:

      I’m not sure the decision rules are necessarily wacky. Suppose we’re talking about superintelligences that can read each other’s source codes. And maybe we’re doing group selection on them somehow. I would expect them to end up with extremely good decision rules very quickly. Decision rules, unlike moralities, seem to be the sorts of things that can be measurably better or worse, so “imagine really good decision rules” is a more tractable problem than “imagine the correct morality”.

      The peasant analogy seems to break down insofar as I do feel like, if I didn’t have riches and power, I would want them. Other people who feel like they would want to exist if they didn’t already would need a better counterargument.

      But your take on it – that the preference to exist is so unlikely to be satisfied as to round off to zero, and therefore is overwhelmed by potentials’ concern about the happiness of actuals – seems like a much better solution. In fact, it’s so good I’m disappointed I haven’t heard about it before as a serious solution to a presumably unsolvable problem.

      • Ken Arromdee says:

        When intelligence 1 figures out how intelligence 2 would behave, it would have to consider scenarios where intelligence 1 analyzes intelligence 2 analyzing intelligence 1, and of course this task would be undecidable.

        • Emile says:

          Don’t be so sure! It depends of how they’re implemented; I’ve taken a stab at writing agent programs that simulate each other without falling in an infinite loop, and while I haven’t found a universal solution, I believe one can make it work in many cases.

        • Harald K says:

          “Undecidable” is a word that kept jumping into my mind when reading this giant wall of text. When he defines away all problems as being solved in the Economist’s paradise, how does he know that some of them aren’t undecidable rather than very hard?

          Indeed, this trick of stepping out one layer (the meta-level escape vent, which he uses more than once) looks exactly like the kind of solution we’ve learned to become pessimistic about.

        • Ken Arromdee says:

          Emile: You are not going to be able to solve the halting problem.

          Talking about programs that can analyze the source code of other programs is a hefty warning sign that you are discussing something that is not actually possible.

          (And I wonder if Scott has even heard of the halting problem if he can suggest such a thing with a straight face.)

        • oneforward says:

          Ken – While you can’t solve the halting problem, and agents simulating each other can lead to undecidable problems, some situations are solvable.

          For example, consider prisoner’s dilemma with two agents whose source code is identical. They can avoid infinite recursion by realizing they will both output the same choice, so they both cooperate.

          More generally, you can’t have undecidable problems with limited computing power. There may be infinitely many possible events you don’t have time to consider, but if you’re given another agent’s source code and required to decide by noon tomorrow you can still choose.

          For the halting problem, you can easily write a program that correctly outputs “halts,” “does not halt,” or “I can’t tell.”

          (I would be very surprised if Scott has not heard of the halting problem.)

        • Emile says:

          Ken: the difference is that in this case, it’s in the agents’ interest for them to be predictable.

          So while I agree that the general problem of “predicting what any agent will do” is undecidable, whether or not recursion is involved, some agents may still be able to successfully predict each other’s actions.

        • RCF says:

          Intelligence 1 can look for fixed points in the recursion, and then try to decide between the fixed points (if there are more than one). That’s the idea behind superrationality in the PD: the two fixed points are both cooperate and both defect, so choose to cooperate to get that fixed point.

          And you seem to be confused as to what the halting problem is. It is quite possible to analyze the source code of another program. It is quite possible to decide whether another program halts. What the halting problem says is that it is not possible to write a program that decides whether EVERY program halts. I think it may even be possible to write a program that decides on all but a measure zero subset.

        • Doug S. says:

          I think it may even be possible to write a program that decides on all but a measure zero subset.

          The set of all possible Turing machines is countably infinite – every possible Turing machine can be represented by a whole number – so “measure zero subset” isn’t a relevant concept here.

          Note that you can’t solve the halting problem for all but a finite number of cases, either. Suppose that you did have a computable algorithm that solved the halting problem correctly for all but a finite number of cases. However, there always exists a computable algorithm for “solving” the halting problem for a finite set of Turing machines – a lookup table that magically has the right answers already in it. (After all, there are only 2^N possible lookup tables for a set of N Turing machines, so one of them has to have the right answers, even if we don’t know which it is.)
          Then “use the correct look-up table to solve the finite number of cases your existing algorithm gets wrong and use that algorithm to solve every other case” would be a computable algorithm to completely solve the halting problem, which is impossible. Therefore, by contradiction, no computable algorithm can solve the halting problem correctly for all but a finite number of Turing machines.

        • Jadagul says:

          Doug S: You certainly _can_ talk about measure-zero subsets of the set of Turing machines; you just have to define a measure. For instance, it’s pretty common to say that, say, the set of primes has zero density in the set of integers.

          There’s a good discussion of this issue on MathOverflow. The answer seems to be that it depends on how you specify your model of computation – this screws with the enumeration enough that it matters. For at least some models of computation, the original claim seems to be true:

          Theorem. There is a set A of Turing machine programs (for machines with one-way infinite tape, single halt state, any finite alphabet) such that:

          One can easily decide whether a program is in A; it is polynomial time decidable.
          Almost every program is in A; the proportion of all n-state programs that are in A converges to 1 as n becomes large.
          The halting problem is decidable for members of A.

    • Jiro says:

      More seriously, I have no desire that I be brought into power and luxury if I did not already have them. I know that other people believe differently, but I hope that in a debate on this matter I would be able to convince them.

      I don’t think this comparison works. If it’s aristocrats versus peasants, the “other people” who believe differently include actual peasants. If it’s existent versus nonexistent people, the “other people” who believe differently do not include any nonexistent people.

      • lmm says:

        No, if you’re that kind of aristocrat then peasants don’t count as people.

        • Harald K says:

          All moral systems face the problem of who to include as relevant moral subjects, and you can’t really determine from within the moral system that it’s right to expand the definition.

          Whether peasants, animals or minerals are to be included in the discussion behind the veil of ignorance, the veil of ignorance thought experiment can’t help you answer.

        • Carinthium says:

          lmm- That kind of aristocrat would almost certainly have factual beliefs about the nature of peasants that justify their position. For example, believing peasants to be less intelligent or less virtuous, maybe (I admittedly don’t know what they would be, as it varies from culture to culture).

          Harald K- The answer to that is to start from the ground up with the question of “Why be moral in the first place?” and the question “Why believe in morality in the first place?”

          Any answer to these questions would imply an answer to what should or should not be a moral subject.

    • blacktrance says:

      The non-existent person and the peasant aren’t analogous. The peasant has a preference for wealth, but the non-existent person doesn’t have preferences at all – such is the nature of being non-existent. I’d have no desire to be brought into existence if I didn’t exist, because I wouldn’t have any desires; it makes little sense to talk about an “I” in that case at all.

      • Levi Aul says:

        It feels like there is something akin to a concept of “momentum” missing from discussions of the preferences of non-existent agents. Velocity only matters insofar as a massive object has it, in the form of momentum. Likewise, preferences only matter insofar as an optimizing agent has them. The “momentum” of a preference might be the total optimizing power of all agents holding that preference, multiplied by the probability-mass of those agents’ existences affecting your future light-cone. You could say something like “Omega will continually flip a coin, and create a paperclip-optimizer every time it lands on tails,” and then calculate the momentum of paperclip-optimization as a preference you could attempt to acausally funge against in contracts.

      • lmm says:

        The peasant doesn’t have preferences at all, such is the nature of being non-rich. We can do this all day, it only breaks down in the real world because at some point the peasants can take their torches and pitchforks and storm the manor.

        • blacktrance says:

          No, it’s completely conceptually different. Certain psychological facts are true about the peasant that make him such that he is capable of having preferences. It’s possible for those preferences to be ignored, but that’s a separate matter – it’s one thing to say his preferences don’t count, and another to say that they don’t exist. In a way, it’s similar to animals – one can say that their preferences don’t matter, but it’s hard to argue that they don’t have preferences. In contrast, beings that don’t exist don’t have preferences, because existing is a prerequisite for being capable of having preferences.

          Also, the non-existent are, by definition, never going to torch your manor, as they’d have to exist to do that.

        • Ken Arromdee says:

          Can’t the same be said for plants as well as animals? After all, if certain things happen in the environment, then the plant does certain other things. If your concept of preference just involves this and doesn’t also include a certain mental capacity, then plants would seem eligible for preferences.

        • blacktrance says:

          Existence and response to stimuli are necessary but not sufficient conditions. I’m not trying to list every prerequisite for the ability to have preferences, just that existence is one of the necessary prerequisites, and something that doesn’t exist can definitely be said to not have preferences.

        • Ken Arromdee says:

          I’m not trying to list every prerequisite for the ability to have preferences

          That’s cheating. If you’re going to claim that animals have preferences, then you have some prerequisites in mind. Failing to list them then means you have made an unsupported claim.

        • blacktrance says:

          I assumed that animals having preferences would be an uncontroversial claim. Either way, it’s not one I’m interested in debating right now.

        • Anonymous says:

          Okay, so plants and rocks have preferences. What does this have to do with non-existent non-things having preferences?

        • Jaskologist says:

          I’m an existing person who prefers that there be more existing people out there. Does that help?

        • lmm says:

          > beings that don’t exist don’t have preferences, because existing is a prerequisite for being capable of having preferences.

          My, that’s a very circular argument. I disagree; I assert that nonexistent beings can have preferences in the exact same way that existent beings can. Do you have an experiment in mind that distinguishes the two cases? Do the empirical clusters seem different? I would think that nonexistent humans cluster with existent humans and have preferences, and nonexistent plants cluster with existent plants and for the sake of the argument I’ll happily accept that they don’t.

          > Also, the non-existent are, by definition, never going to torch your manor, as they’d have to exist to do that.

          Yes, that was my point. But grounding your morality in who can or can’t set fire to your house seems rather unsatisfying.

        • blacktrance says:

          In order to have preferences, certain psychological facts have to be true for a given being. This is why rocks and other inanimate objects don’t have preferences – because they lack the features necessary to have them. Just like rocks don’t have these features, non-existent people don’t have them either, because they don’t have anything at all, and without these features, they can’t have preferences.

        • lmm says:

          But when we’re talking about a specific nonexistent being- when a couple are deciding whether to abort their child – we would infer facts about this nonexistent entity, including psychological ones. We know that this nonexistent being is human; we might know its eye colour, or at least be able to give a probability distribution over it (or else we say there are two nonexistent entities and a probability distribution over which one we get if one exists – either way works). We could make statements and estimates about the nonexistent human’s intelligence, kindness (or otherwise), even skills. In the only meaningful sense that we can ask how intelligent this nonexistent person is, we know how to answer it. Likewise in the only meaningful sense that we can ask whether this nonexistent person has preferences, it seems pretty clear that, like any other human, they do.

        • blacktrance says:

          We can make statements about what that being would be like if it existed, but it only has those properties to the same extent as imaginary beings have those properties – we might as well talk about what Bilbo and Smaug prefer. We can make meaningful and true statements about what beings would prefer if they existed, but it’s a category error to treat that as if they actually existed. “Hypothetical Being X likes donuts” is a shorthand for “I imagine X as liking donuts” or “If X existed, it’d like donuts”.

        • lmm says:

          Where does that argument fail for peasants?

        • blacktrance says:

          Peasants actually exist (or have existed in the past), so they’re actual beings, not hypothetical ones, and their preferences are actual, not hypothetical.

        • lmm says:

          Right, but I can declare that peasanty things are a different category and they only have peasantly existence, not actual existence.

          • Army1987 says:

            Their existence is still ‘actual’ enough for them to use torches and pitchforks against you, unlike that of hypothetical beings.

          • Zathille says:

            @Army:

            I find it interesting that the more we explore these contractualist questions, the more instinctively Hobbesian the answers seem to become.

        • blacktrance says:

          That distinction would not be based on any real differences.

      • RCF says:

        Does it make sense to ask “If you were told that you were going to be born into a medieval society, and you don’t know whether you would be a peasant or an aristocrat, what social norms would you want in that society?”

        If so, what distinguishes those two types of counterfactual questions?

        The whole idea of the Veil is that you’re asked to imagine being able to have preferences prior to being born. So if it makes sense to talk about having preferences without yet being born, why does it not make sense to talk about having preferences without existing?

        • blacktrance says:

          The Veil is asking you to imagine having preferences prior to being born but not prior to existing – it says, “Imagine if you existed and had preferences before you were born, and didn’t know where you’d end up after birth, but you know that you’d be born” – i.e. having been born is a prerequisite for having been in the behind-the-Veil contractual state.

    • RCF says:

      To SA: I really think you should more clearly label your edits. Just because the internet allows blog posts, comments to that blog post, and edits to the blog post to form a timey-wimey ball, doesn’t mean this shouldn’t be frowned upon.

      To CS:

      I don’t see how this makes any sense. If we grant that there exists a Repugnant Conclusion situation where the increase in utility from adding another person is greater than the decrease in utility to the rest of humanity, how will dividing both sides of the inequality by a really large number change the direction of the inequality?

      “Souls might value an egalitarian society at 1 unit of utility, and getting to exist in it at 1,000,000,000 units, but the former would dominate.”

      Okay, let’s say each potential person has a 10^-100 chance of actually existing. Then the value of existing is at 1,000,000,000*10^-100, which is indeed a small number. But the value of an egalitarian society would be 1*10^-100, which would be even tinier. Why are you multiplying only one side of the inequality by 10^-100? Unless you’re proposing that every potential person values an egalitarian society apart from the value acquired from actually living in such a society, in which case IMO you should have made such an assumption explicit. Furthermore, still IMO, you’re missing the whole point of the Veil, which is to motivate altruism from a self-interested point of view, and you now have to come up with some basis for preferences other than self-interest. Are the potential people now counterfactual utilitarians, ordering their preferences based on their estimations of total utility of the possible worlds? If so, what are they basing their utility calculations on, and how is asserting that they would conclude that egalitarian societies are more important than high population not begging the question?
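
      Spelled out (a sketch, with U_exist and U_egal simply labelling the two values above): scaling both sides of the inequality by the same positive probability cannot flip its direction.

```latex
% Sketch: multiplying both sides by the same positive probability p
% (e.g. p = 10^{-100}) preserves the ordering.
U_{\text{exist}} > U_{\text{egal}}
\quad\Longrightarrow\quad
p\,U_{\text{exist}} > p\,U_{\text{egal}}
\qquad\text{for any } p > 0 .
```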

  5. blacktrance says:

    I like the general approach, but there are some problems:

    1. There are many situations in which people wouldn’t agree to rules or policies that would have utilitarian results. For example, it’s quite possible that some people in the First World wouldn’t agree to pay $2 per week to solve world hunger. Similarly, people may not agree to treat animals well (e.g. stopping factory farming), because they’d get nothing out of it themselves.* The Mad Scientist’s Brain problem is probably the clearest example of the conflict – obeying the brains would definitely be the utilitarian thing to do. The contractors will only agree to a rule if it makes them better off, and there are many possible world-utility-maximizing rules that decrease the utility of some of the individuals subject to those rules. Pareto improvements are great, but they don’t get you to utilitarianism.

    2. Why would theoretical contracts made behind the Veil of Ignorance be binding? Realistically, fetuses can’t make contracts, so we can only imagine what they’d agree to if they could. When the fetus is an adult, you come to him and try to make him comply with the theoretical agreement, but he can reasonably say both that he never agreed to this contract and that it does not benefit him (because from his particular position, he would agree to a different contract). The contractarian (in the Hobbesian tradition) can say that theoretical contracts are binding because they actually benefit the contractors as they are**, as opposed to benefiting them in some theoretical prior state, but the Rawlsian contractualist doesn’t have this option. It’s highly likely that the rules I’d agree to before knowing my position in the world would be different from the ones I’d agree to now – but so what? I didn’t actually agree to them, nor would it be in my interest to start complying with them now.

    3. Regarding the 1% holdouts – while it’s possible to imagine a situation in which different people disagree about 1% of their preferred agreements, it’s also worth considering that there could be some Group A that agrees with 99% of Group B’s agreement, but Group B agrees with 100% of Group A’s agreement (but still prefers their own), e.g. Agreement A has 99 rules, and Agreement B is Agreement A plus one additional rule. In this case, Group A could give up its resistance to the one additional rule, but Group B wouldn’t reciprocate, or Group B could give up its additional rule, but then Group A wouldn’t reciprocate.
    Rules could also be mutually exclusive in a way that the 1% difference couldn’t be contracted away. For example, if I believe that I should possess the Great Holy Object and that anyone else touching it is horrible defilement, and you believe the same about yourself, even if we agree about the other 99% of the rules, there’s no way we can compromise here. If we’d agree to share the Holy Object, that would be a Pareto dis-improvement, since we’d each believe that the other is defiling it.

    4. I’d sign the hedonium agreement, and with people’s current beliefs as they are, some would sign and some wouldn’t. This doesn’t really answer the question of whether wireheading everyone is a good thing.

    *Theoretically, you could argue that this is a Kaldor-Hicks improvement, and in the Economists’ Paradise it would thus be converted to a Pareto improvement – but how? How would animals compensate omnivorous humans in a way that would make them at least as well-off as they were before?

    **Thus, if the contract actually benefits those subject to it, they can’t rationally agree to not subject themselves to it, since that would make them worse off.

    (I like David Gauthier, and his Morals by Agreement is a good book, but he misses some things, which is why I think Jan Narveson’s The Libertarian Idea is a good supplement to it. Narveson takes a similar approach, and is a better writer. It’s a quick read, and I recommend reading both of them in succession, Gauthier first.)

    • Carinthium says:

      blacktrance, you are clearly judging contractualism in terms of utilitarianism. You’ve outlined very good criticisms of my theories about such, but I’m curious to know – how would you, starting from first principles, establish any broad moral theory as justified?

      To clarify, getting to the point where one has established something as broad as “Contractualism is correct”, “Utilitarianism is correct”, or even “Deontology is correct” as opposed to their rivals, INCLUDING amoralism, would be sufficient.

      I assumed up until now that you had some clever contractualist theory (my bad, and my apologies). Your responses seem to imply that isn’t the case, so I’m curious.

      • blacktrance says:

        I’m not a utilitarian, my first point is just an objection to the idea that you can derive utilitarianism from contractualism, and also points out that the two lead to different conclusions. I think that contractualism is better than utilitarianism, and where the two conflict, contractualism is more likely to give the right answer – and a committed utilitarian should look elsewhere to ground their ethical beliefs.

        My own preferred ethical theory is contractarianism*, which can be described as “contractualism without the Veil of Ignorance”. Instead of starting from an original position of ignorance of our later positions, we start from our actually existing positions. This means that rules must be rationally acceptable to our actual selves instead of being hypothetically accepted by our non-existent behind-the-Veil selves. We accept these rules because they further our own ends, and the fact that they further our own ends is what justifies them. The amoralist should reject amoralism and accept contractarianism because it would fulfill his ends more effectively.

        *I’m also an ethical egoist and a hedonist, but I think that’s less relevant here.

        • Carinthium says:

          Oops – my bad. I still have things to say against you, but I’ll discard my former objections before I begin.

          For the sake of a proper understanding of events and so as not to look petty, I’ll put it on record that in my view you won our earlier argument. That being said, I’ve thought about matters in light of what you have said and still think you’re wrong on this point even if I accept you on others.

          After thought, my view would be Rule Preference Amoralism – identical to rule-based Preference Utilitarianism, except with one’s own utility function (arguably one’s own Coherent Extrapolated Violation, but I’m not sure on that point) being the only one taken into account. Contracts ‘should’ exist, but as a subset of the Rules used to fulfill one’s own utility function, which of course vary from individual to individual.

          The reason this is superior to contractarianism is that a contractarian (your view), though not as much as a contractualist (ordinary view), is by the nature of their view tempted to get caught up in what should be, and thus to compromise their self-interest.

          Let’s take the example of a world in a dilemma of the type Scott Alexander’s worldview would classify under Moloch. The Rule Amoralist might try to fix it, but might not. A Contractualist would be obliged to try to fix it. A Contractarian still probably would.

          What do you think, blacktrance?

        • blacktrance says:

          I think we’re using the term “amoralism” differently. What you call “Rule Preference Amoralism” I would not call amoral, because it makes normative claims about what one ought to do. “Contracts ‘should’ exist, but as a subset of the Rules used to fulfill one’s own utility function” is perfectly compatible with contractarianism or at least with the contractarian methodological approach – the SEP describes contractarianism as “a moral theory about the origin or legitimate content of moral norms”, and it’s possible for situations to exist in which mutually advantageous contracts could not be made. In such situations, the contractarian would not advocate making contracts. In ordinary life, this is the situation in which we find ourselves in relation to animals, but it’d be possible to be in a situation in which humans are in that state vis-a-vis other humans.

          What you’re calling amoralism seems more similar to egoism, which, while unpopular, is certainly not amoralist.

        • Anon says:

          Having just read Ozy’s Anti-Heartiste FAQ, “Coherent Extrapolated Violation” sounds like just the sort of rapey thing his ilk would come up with.

          “Oh no, I’ll treat you how you want treated, not actually literally believe you mean ‘no.'”

          (Typo, I know… but I couldn’t resist the connotation.)

        • Carinthium says:

          Side note, come to think of it: I consider egoistic hedonism a rather odd position relative to egoistic preference satisfaction.

          I should also note that I am wary of the term ‘should’, on the basis that it is a human construct with a variety of meanings often mistaken for an objective feature of the world in the same way that, for example, dogs are.

          A contractarian cannot account within his model for the potential utility of breaking one’s word. A model based on rule utilitarianism, by contrast, has resources for doing so.

          Copy-pasted from earlier. I’m not sure how relevant this is, but pointing it out just in case:
          A debate between a society of Rule Amoralists about whether to include a prospective outsider would not have a ‘right’ or ‘wrong’ answer. Instead, it would be analogous to a debate over tribal policy in that some of society’s current entities would benefit from the outsider’s commitments whereas others would not (presumably, given the debate is happening at all). There would then be an equilibrium as to what to do about this through negotiation.

          By contrast, a contractarian is committed to the idea that there is a naturally appropriate equilibrium as to whether to accept the outsider.

        • blacktrance says:

          I should also note that I am wary of the term ‘should’, on the basis that it is a human construct with a variety of meanings often mistaken for an objective feature of the world in the same way that, for example, dogs are.

          People misuse human constructs all the time, but that’s no reason to avoid their proper use. For example, the common mistake of thinking that value is intrinsic is no reason to avoid using the term “value”, you just have to be clear about the way you’re using your concepts.

          A contractarian cannot account within his model for the potential utility of breaking one’s word. A model based on rule utilitarianism, by contrast, has resources for doing so.

          You’re ascribing views to contractarians that they don’t actually hold. Contractarianism is derived from standard game-theoretic reasoning, and doesn’t necessarily hold that you should obey the rules at all times, only that you should want these rules to be enacted and enforced (and that you don’t want others to break them). Contractarianism is a specific claim about what follows from practical reasoning applied to one’s preferences, it’s not the slavish devotion to contracts that you seem to think it is.

          A contractarian would say that the “right” answer regarding accepting the outsider is determined by whether it would benefit the insiders. The basic structure of a society is justified by it furthering the ends of those who’d agree to be part of it.

    • Scott Alexander says:

      1. It’s not necessary that everyone pay $2 to end world hunger, just that enough people do it that world hunger is ended (that’s why I did the calculation assuming half of people would). That having been said, if it were an important enough point, likely once we went toward the practical reducing-uncomputable-things-to-real-policies side of things, the people who wanted everyone to pay for ending world hunger would make a deal with people who wanted something else such that everyone gets their preferred policies partially funded (compare: real governments taxing and budgeting).

      2. There seems to be a good decision theoretic principle that goes something like “Assume I made all contracts I would have made if I had known about them”. The Counterfactual Mugging is the purest example of this. People who accept this decision theoretic principle do better than people who don’t.

      3. Either possession of the Great Holy Object is so important that it is worth jettisoning literally everything else you believe in to get your position rather than the closest acceptable compromise, or you agree to a compromise (such as: no one has it). I think the first case is vanishingly rare – even when it appears to occur in real life, it’s mostly a game theory failure (“I need to signal extreme concern about this in order not to be outsignaled by the other side”). If it’s true in real life, values don’t converge, but I’m not sure why we would expect them to in that case.

      4. Okay. But it seems like it would be pretty easy to work out a compromise in which people who want hedonium get it and people who don’t want it don’t.

      • blacktrance says:

        1 and 3. Here’s a realistic example: two groups agree on almost every policy, except Group BHL wants a UBI and Group L doesn’t. L can’t offer BHL anything in return for dropping the UBI, because they agree on everything else and so have nothing left to trade, and BHL can’t get L to accept a UBI for the same reason. They can’t compromise because there are no in-between solutions.
        More generally, it’s easy to see how this diverges from utilitarianism. Forcing X number of people to pay to end world hunger would be optimal from a utilitarian perspective, but if they have to cut a deal with those who don’t want to pay, then the amount paid is less than the utilitarian optimum.
        Also, while the Great Holy Object is an extreme example, it’s generally possible for all compromise solutions to a given issue to be worse than either extreme solution – you giving up some of what you want isn’t enough to compensate me for giving up some of what I want, and vice versa.

        2. They do better – compared to whom? In a Parfit’s Hitchhiker-type scenario, it is indeed better to have committed to a contract of paying the driver if you’re in the desert, but if you manage to get the driver to take you to water without committing to anything, you would do better if you got away without paying the driver. We are in the position of having already been driven without having made any contract to pay for it, so let’s get away with it. If we need to be driven in the future, we can make a contract to cover future cases, but we’d be worse off if we tried to compensate for past cases – past events have already happened, and there’s nothing we can do to affect them. We should commit to rules that increase our utility, but we can only commit to rules now, not in some hypothetical prior state, and that changes what rules would maximize our utility. Committing to contractualist rules now wouldn’t make me any better off, even if it would’ve made me better off to have started life committed to them.

    • peterdjones says:

      @blacktrance

      2. “Why would a theoretical contract made behind a veil of ignorance be binding?”

      If the veil of ignorance thing is the correct analysis of ethics, and if moral proposition X can be justified that way, then there is a chain of reasoning that justifies X, which means that X is binding on rational agents, in the sense that they should assent to X, for some interpretation of “should”.

      Binding is not motivation. Someone who says, “I really should do X, but I can’t be arsed today” is admitting to an obligation, and confessing to a lack of motivation to fulfil it. Akrasia is the gap between obligation and motivation.

      Not all motivation is based in immediate reward. People are motivated to accept good rational arguments on the basis of status and identity. Everybody wants to be higher status, for some definition of status, and rational people are often seen as higher status, giving a lot of people a certain amount of motivation to accept moral claims backed by rational argument. The average person is of course not an ideal rationalist, and many other factors remain in play, not least confirmation bias. Nonetheless, moral persuasion more often takes the form of appeals to objective principles than appeals to personal interest.

      So binding and motivation are not the same, which means your claim that moral propositions are not binding (in principle) just because they are not motivating (to some specific person) doesn’t go through.

      And binding and motivation are not orthogonal or disjoint: a certain moral claim can be motivating because it is binding because it is justifiable because it is true.

      But motivating-ness and truth are not the same. You can’t directly refute the claim that some X is an objective moral truth by noting that no one would be motivated to act on it. It would be strange to have a set of moral truths that no one ever acts on. All other things being equal, adding motivation to the set of moral truths would be a net gain… but all other things aren’t equal. You have to change one thing to change another.

      Motivation is psychological, so one way is to use social pressures or whatever to change individual psychology. That is popular, but not, apparently, your approach.

      Another way is to hold psychology constant, and change morality. Compromising on truth to achieve motivation is not an unalloyed win, but it can still be a net gain – for instance, if no one is willing to work on the 100% true morality, but is willing to work on the 50% true one. But there is no net gain for the 0% true morality, and that is the problem with ethical egoism. Caligula is extremely well motivated to behave the way he behaves, and that isn’t moral at all. Putting a label reading “this is moral” on de facto behaviour doesn’t change anything in reality, or make anything better.

      Caligula is a reductio ad absurdum of the idea that motivating-ness is the only criterion for morality.

      • blacktrance says:

        Sure, someone can be inconsistent (as is the case with akrasia) and irrational, and be motivated to do something they shouldn’t, but there are also things that people would still not be motivated to do even if they were consistent. Sometimes people should do things they don’t want to do at the moment – but if someone should do X, it is necessary that they would want to do X in some ideal rational state. One could say that something is binding if it’s motivating in that state. However, the only thing that’s different between a person as they currently are and a person in the ideal state is that the person in the ideal state is internally consistent – they’re still in the same situation in other respects. That being the case, the person in the ideal state still wouldn’t be motivated to bind themselves to the Veil of Ignorance, which means that the actual person isn’t bound to it, either.

        Caligula is extremely well motivated to behave the way he behaves, and that isn’t moral at all. Putting a label reading “this is moral” on de facto behaviour doesn’t change anything in reality, or make anything better.

        I’m not putting the “this is moral” label on de facto behavior. For example, if Caligula were inconsistent – for example, if he procrastinated on torturing people – I would not label his failure to torture as moral, even though that would be his de facto behavior.

        Edit: If Caligula being moral makes you uncomfortable, you can restrict the definition of morality to what’s motivating for certain minds (e.g. humans) in some ideal rational mental state. That avoids Caligula being moral, but then you have to admit that Caligula has no reason to be moral. However you choose to delineate morality, it will either not be binding on everyone or it will classify Caligula as moral.

  6. Meredith L. Patterson says:

    In the real world, where costs are high and people have limited brainpower, we already have complex interlocking patchwork agreements between, in practice, nearly everybody; it’s just that they have nothing to do with murder or other crimes against the person. Two examples that come readily to mind are licensing agreements and the public key infrastructure (i.e., how SSL works). Obviously neither is a particularly ringing endorsement for polycentric law.

    And, yes, game theory totally admits multiple Nash equilibria for a payoff matrix, although the whole point of the Prisoner’s Dilemma is that a Nash equilibrium is not necessarily Pareto efficient. (Contrast with the Stag Hunt, which has two Nash equilibria, one of which is Pareto efficient; the Pareto efficient outcome in PD is not a Nash equilibrium.) It also admits multiple Pareto efficient solutions, if that’s what the payoff matrix indicates. Remember that a payoff matrix is just a comparison of all the potential outcomes of combinations of player actions; the 2×2 “cooperate/defect” matrix is just for convenience and generality. If two players each have three possible actions, then it’s a 3×3 matrix, &c. In real life, we’re lucky when we can predict the payoff of any action cleanly. Job interviewing is the canonical example of a signaling game, but in real life it’s not as simple as “talented candidate tries to signal their type, untalented candidate tries to countersignal their type, hiring manager tries to distinguish between the two”; it’s more like being scored in the Olympics, where the Russian and Canadian judges are thrilled with your whiteboard performance but the American judge knocks off two points for poor negotiation skills, so you end up hired but at a suboptimal salary.
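
    For concreteness, here is a small brute-force sketch of those 2×2 claims (the payoff numbers are generic textbook choices, not anything from this thread):

```python
# Sketch: brute-force check of the 2x2 claims above. Payoff numbers are
# generic textbook values, not taken from the comment.

def pure_nash(payoffs):
    """Pure-strategy Nash equilibria of a 2x2 game; payoffs[r][c] = (u_row, u_col)."""
    eq = []
    for r in range(2):
        for c in range(2):
            u_row, u_col = payoffs[r][c]
            row_best = all(u_row >= payoffs[alt][c][0] for alt in range(2))
            col_best = all(u_col >= payoffs[r][alt][1] for alt in range(2))
            if row_best and col_best:
                eq.append((r, c))
    return eq

def pareto_cells(payoffs):
    """Cells whose payoff pair is not Pareto-dominated by any other cell."""
    cells = [(r, c) for r in range(2) for c in range(2)]
    def dominates(a, b):
        ua, ub = payoffs[a[0]][a[1]], payoffs[b[0]][b[1]]
        return all(x >= y for x, y in zip(ua, ub)) and any(x > y for x, y in zip(ua, ub))
    return [b for b in cells if not any(dominates(a, b) for a in cells)]

# Prisoner's Dilemma (0 = cooperate, 1 = defect): one equilibrium, and it isn't Pareto efficient.
pd = [[(3, 3), (0, 5)],
      [(5, 0), (1, 1)]]
# Stag Hunt (0 = stag, 1 = hare): two pure equilibria, only (stag, stag) is Pareto efficient.
stag = [[(4, 4), (0, 2)],
        [(2, 0), (2, 2)]]

print(pure_nash(pd), pareto_cells(pd))      # [(1, 1)]  [(0, 0), (0, 1), (1, 0)]
print(pure_nash(stag), pareto_cells(stag))  # [(0, 0), (1, 1)]  [(0, 0)]
```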

    The bigger problem, to my mind, is that in the real world payoffs are often incomparable. We have math for dealing with incomparable things (it’s called partially ordered sets), but I haven’t seen much of it incorporated into game theory.
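
    And a minimal sketch of what “incomparable” means under the coordinate-wise partial order (the option names and numbers are invented for illustration):

```python
# Sketch: "incomparable" payoffs under the coordinate-wise (product) partial order.
# The option names and numbers are invented.

def compare(a, b):
    """Compare two payoff vectors coordinate-wise; many pairs are simply incomparable."""
    if a == b:
        return "equal"
    if all(x >= y for x, y in zip(a, b)):
        return "dominates"
    if all(x <= y for x, y in zip(a, b)):
        return "dominated"
    return "incomparable"

career = (90, 20, 50)    # (money, free time, health)
leisure = (30, 80, 60)
neither = (20, 10, 40)

print(compare(career, leisure))  # incomparable: no common scale was assumed
print(compare(career, neither))  # dominates: better or equal on every coordinate
```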

    • in the real world payoffs are often incomparable

      In other words, the Von Neumann–Morgenstern completeness axiom is inapplicable in real life.

      There have certainly been arguments in favor of this proposition. Unlike with, say, the transitivity axiom, an agent whose preferences don’t obey the completeness axiom is not necessarily subject to Dutch books. Even the Less Wrong Decision Theory FAQ doesn’t do a very good job of defending the axiom.

      On the other hand, there has, in fact, been work done on utility theory without the completeness axiom. Robert Aumann wrote a paper by that title [PDF] back in 1962, and there have been many more recent papers as well.

      • Meredith L. Patterson says:

        Thanks for the link — that’s very helpful. Footnote 3 on page 1 was totally worth the price of admission all by itself. (Granted, a 900k download isn’t all that high a cost, but still.) And then he goes and doesn’t pursue that direction, but instead hares off in a direction that lights up three-quarters of the algebraic topology circuits in my head. This one is going to take some consideration.

      • In other words, the Von Neumann–Morgenstern completeness axiom is inapplicable in real life.

        My impression is that the whole notion of utility becomes totally useless without completeness. The whole point of having a decision theory is that you need to come to a single decision of how to allocate your resources, and you can only do that if everything is comparable. Everything becomes comparable when time and money are relevant.

        • Meredith L. Patterson says:

          Wow. I could not possibly disagree with your position more, and yet, ironically, we arrive at the same conclusion: “everything becomes comparable when time and money are relevant” — but I come to that conclusion without assuming completeness is even possible much less axiomatic. It’s because of how decision-making degrades under complexity: most decisions, if you tried to compute an optimal solution to them, would require more time to perform that computation than available to make the decision. Arguably some decisions have no optimal solution at all. But if you’re under time pressure to take some action, your preference for taking some action affects your heuristic decision-making process such that you have to start discarding preferences you can’t find a way to satisfy, down to a subset which does satisfy completeness, in order to find a solution that satisfies some preferences, including the one about deciding quickly. “I can disregard X” is a heuristic decision to make the incomparable comparable for the purpose of an immediate situation, and it’s one we make all the time.

          Morally difficult situations are morally difficult because they force us to decide which preferences we’re going to sacrifice — which incomparabilities we’re going to pretend are comparable for the sake of making a decision.

          (I should point out that I’m actually coming at this from the perspective of linguistics, where “here’s a bunch of partially ordered constraints that describes some aspect of a language; a ‘correct’ utterance in this language is one that satisfies the greatest ranking of these constraints” is a primitive we use all the time. People accidentally choose less-highly-ranked utterances all the time, e.g., garbling a word or producing a spoonerism; fortunately the human language faculty is especially tolerant of this sort of thing, perhaps because it’s often easy to extrapolate out from the error to what the original message might have been. Unfortunately our tolerance for garbled morally valenced actions is rather more limited.)
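
          As a toy illustration of that primitive in its fully ranked special case (Optimality-Theory style; the constraint names and violation counts are invented, and the partially ordered version described above is the genuinely harder one):

```python
# Toy sketch of the fully ranked special case of the linguistics primitive above.
# Constraint names and violation counts are invented.

constraints = ["NoCoda", "Faithfulness", "Onset"]   # ranked from highest to lowest

violations = {   # violations[candidate][constraint] = times the candidate breaks it
    "kat": {"NoCoda": 1, "Faithfulness": 0, "Onset": 0},
    "ka":  {"NoCoda": 0, "Faithfulness": 1, "Onset": 0},
    "at":  {"NoCoda": 1, "Faithfulness": 1, "Onset": 1},
}

def profile(candidate):
    """Violation counts listed in ranking order, compared lexicographically."""
    return tuple(violations[candidate][c] for c in constraints)

print(min(violations, key=profile))  # "ka": it spares the highest-ranked constraint
```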

        • I think we may agree more than you think we agree?

          You say: “here’s a bunch of partially ordered constraints that describes some aspect of a language; a ‘correct’ utterance in this language is one that satisfies the greatest ranking of these constraints”

          I would say that the axiom of completeness is equivalent to the existence of a “greatest ranking of these constraints.” In the language of the Aumann paper, I would say that any notion of utility must involve an order homomorphism.

    • Jadagul says:

      There is some game theory dealing with posets. I wrote a paper on it once in college. You haven’t heard about it much because so far it’s terrible; the stuff I looked at was at the stage of lots of notation but no real results.

      • Meredith L. Patterson says:

        If you still have any of those references around, I’d be interested in looking them up, terrible or otherwise. Back in grad school I did a bunch of machine learning work focused on recovering (partial) rankings of data from smaller, necessarily incomplete input posets. Lately I’ve been collaborating with another machine learning researcher who’s working on a probabilistic programming DSL, and we’ve been chucking game theory problems at it. The two haven’t converged yet, but it’s heading there.

        • Jadagul says:

          The main reference I used was an Ariel Rubinstein book, Modeling Bounded Rationality, from 1998 (MIT Press). Don’t know if it will be useful at all, but happy to pass on the reference.

    • RCF says:

      You’re not using the term “countersignal” in the standard sense. A person of low status trying to appear high status is signalling, not countersignalling.

  7. Fazathra says:

    This is a pretty cool system, but even with all the assumptions it can lead to some very odd outcomes depending on the values of the agents involved in the bargaining session in Platonia.
    For example, consider an agent A who places a high value on being allowed to torture and kill another arbitrary agent V (V for victim). A would be willing to trade off quite a bit of other utility to achieve this and so they negotiate with all the other agents to amend their non-torturekilling contract to a non-torturekilling-except-for-V contract because to the other agents the utility loss of V being torturekilled is minimal and A is happy to make up for that loss of utility. Thus, we end up with odd moral laws such that nobody is allowed to torturekill anyone else except A, who is allowed to torturekill V. This can, of course, be extended to fully fledged total war if there are significant blocks of agents with similar utility functions.

    This is just a specific manifestation of the general flaw, which is that this argument ignores the orthogonality thesis. There could perhaps be agents who have an irrational hatred of Pareto efficiency, and will reject and try to sabotage trades that would move the system towards Pareto efficiency. Or maybe there could be satanbots whose utility is maximised by minimising everyone else’s (except for other satanbots’). The presence of agents like these could severely shift the positions of the Nash equilibria in Platonia; likely in a direction away from what we would consider to be moral. This can be ignored in economics because firms who exist solely to hurt their consumers would quickly be outcompeted, but in Platonia there is no competition between agents and thus no narrowing of possible utility functions at all, so in many cases it is not clear that anything like what we consider morality would emerge.

    • Tom Hunt says:

      If you don’t ignore orthogonality, then the problem of grounding morality is facially and farcically pointless, because you can always conjure some agent which disagrees with your grounding.

      A morality formulated by humans, for their interactions with humans, can only be about humans (or entities whose overlap with humans is sufficient). If you’re working in unrestricted agent space, an entirely different set of rules apply. My only prescription for dealing with satanbots is “kill with prejudice”, because their existence is incompatible with maintaining a moral order for entities I actually care about.

      • Vulture says:

        This is what I would have said, if I was also cleverer and more articulate.
        +1 for Coherent Extrapolated Commentary 🙂

      • Ken Arromdee says:

        If morality doesn’t apply to killing satanbots, how does morality apply to the process of deciding that some entity is a satanbot?

        Bear in mind that there can be a gradation from satanbot to “entity I care about”. What if the entity maximizes the utility of other entities like it, and also all human beings who share its skin color, while still placing high value on harming human beings of different skin color? (Do Nazis count as satanbots?) What if instead of skin color it’s religion? (Are Islamic Fundamentalists satanbots if they have Islamic rule of non-Muslims as a terminal goal?) What if the entity doesn’t discriminate but simply wants to take things that humans own, but considers negotiation to have extreme disutility? (This makes Genghis Khan a satanbot.)

        • Carinthium says:

          One idea for solving this problem (not perfect, but plausible enough to be worth considering) would be to switch from Contractualism to Rule Amoralism.

          A Rule Amoralist is basically what it looks like – a Rule Utilitarian except that he or she only cares about maximising their own utility function. A Rule Amoralist can commit to society just as a Contractualist can, but merely makes commitments to entities if said Rule Amoralist gains an advantage.

          A debate between a society of Rule Amoralists about whether to include a prospective outsider would not have a ‘right’ or ‘wrong’ answer. Instead, it would be analogous to a debate over tribal policy in that some of society’s current entities would benefit from the outsider’s commitments whereas others would not. There would then be an equilibrium as to what to do about this through negotiation.

          What society starts off with is an inapplicable question to the Rule Amoralist, as the decision to form such a group is made from their own perspective and not the group’s.

        • Tom Hunt says:

          The definition of satanbots I’m using is that in the post I replied to: “an entity which gains utility from decreasing the utility of any other agent in the system (except another satanbot)”. In dealing with entities which don’t fit that specific definition, case-by-case discretion is needed.

          Any human is probably not a ‘satanbot’ per se, because 1. even the most psychologically abnormal of humans don’t fit the profile above, and 2. I assign positive utility to the lives of arbitrary humans, while I do not for arbitrary active agents. But there are certainly situations in which I would advocate (and carry out, if circumstances required/allowed) “kill with prejudice” against certain categories of humans. The barbarians in the process of sacking the city would be the obvious example.

          @Carinthium: Actually, that is pretty much what I am, once it’s noted that my own utility function contains quite a few terms regarding the happiness and well-being of others.

      • Fazathra says:

        @ Tom Hunt:

        While satanbots and the like may be fanciful, there is still more variation in the space of possible human minds than you or SA seem to think. There was a reason I chose my first example (of the torturekilling agent), which is that humans with such a utility function almost certainly exist. The obvious case is that of the sadistic serial killer, and if we scale agent A’s utility function up from wanting to kill V to wanting to kill a specific group of Vs, then all we have done is reinvent (and make moral) tribal warfare and genocide, as there are certainly groups of humans who have an A-like utility function, as pretty much the entirety of history shows.

      • Paul Torek says:

        If not being a meta-ethical internalist means “ignoring orthogonality”, then I’m all for ignoring orthogonality. Or in other words, basically, what Tom Hunt said.

  8. This is the Coasean Paradise, not the Economist’s Paradise. The Economist’s Paradise would start from absolute abundance in all things and skip all the trading, contracting and thinking, which are inefficient….

  9. To me, this sounds like an elaborate way of avoiding

    A gives advice to B.
    B follows the advice off a cliff.
    A says “But you’re supposed to have some common sense.”

    by getting B to have common sense earlier in the process.

  10. One interesting way of practically implementing contractualism in the real world is Scott Aaronson’s eigentrust/eigenmorality.

    It seems to (as far as I can tell) be able to handle the case of “I want to sign this contract if a sufficiently large number of people would also want to sign this contract.”
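
    Very roughly, the eigentrust/eigenmorality idea is a fixed point: an agent’s moral score depends on how it treats agents that themselves score highly. A minimal power-iteration sketch (the interaction matrix is invented, and the details differ from Aaronson’s actual definitions):

```python
# Very loose sketch of the eigentrust-style fixed point behind eigenmorality:
# "a moral agent is one who treats moral agents well." The interaction matrix is
# invented, and the details differ from Aaronson's actual definitions.
import numpy as np

# niceness[i][j] in [0, 1]: how cooperatively agent i treated agent j.
niceness = np.array([
    [0.0, 1.0, 1.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 1.0, 0.0],   # agent 3 is nice to everyone, but nobody is nice to it
])

scores = np.ones(len(niceness))
for _ in range(200):            # power iteration toward the principal eigenvector
    scores = niceness @ scores
    scores /= scores.sum()

print(np.round(scores, 3))      # roughly [0.222, 0.222, 0.222, 0.333]
```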

  11. Blake Riley says:

    As best I can tell, your proposal of contractualism as an aspiration towards the outcomes of the Economist’s Paradise is more or less equivalent to preference utilitarianism by results of Harsanyi, Roberts, and others. I’m confused whether the coordination power of the Paradise means globally Pareto-efficient outcomes (ie cooperation in a Prisoners’ Dilemma) or Pareto-efficiency conditional on incentives (ie coordinating on a particular Nash equilibrium over others), but either way the problems of how you count the utility of new individuals or how to discount everything are replaced by who gets to make a contract, the default allocation if there isn’t agreement, and the timing of when the agreement is made.

    If a certain set of negotiation techniques would lead to a local minimum where everything is Pareto-optimal but nobody is happy, then everyone would coordinate to sign a pact not to use those negotiation techniques.

    If nobody is happy, it can’t be Pareto-optimal! Maybe only one person is happy and the rest are doing terribly. But if that’s the case, then not everyone would be on-board with a change in negotiation techniques. The problem of weighting utilities is equivalent to selecting a Pareto/Kaldor-Hicks optimal outcome out of the many that exist. Even from a contractualist perspective, you might need some math and axioms to figure out where on the Pareto frontier you’re shooting for. The Nash bargaining solution is one option. Ken Binmore has defended the NBS starting from the status quo as the only coherent contractualist position in Natural Justice and Game Theory and the Social Contract.

    • If nobody is happy, it can’t be Pareto-optimal!

      Um… no? Take the payoff matrix
      (0,0) (100,-100)
      (-100,100) (0,0)

      The upper left and lower right are Pareto-optimal outcomes, but nobody is happy.
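
      A quick brute-force check of that matrix (sketch; the row/column labels are arbitrary): no cell Pareto-dominates another, so all four cells, the two (0, 0) ones included, come back Pareto-optimal.

```python
# Sketch: brute-force Pareto check of the matrix above.
outcomes = {
    ("up", "left"): (0, 0),        ("up", "right"): (100, -100),
    ("down", "left"): (-100, 100), ("down", "right"): (0, 0),
}

def dominates(a, b):
    """True if payoff pair a makes someone strictly better off and no one worse off than b."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pareto = [cell for cell, pay in outcomes.items()
          if not any(dominates(other, pay) for other in outcomes.values())]
print(pareto)  # all four cells: in a zero-sum game no outcome Pareto-dominates another
```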

      • Blake Riley says:

        I was overreaching in my correction. Everyone might be unhappy in some absolute sense. By the definition of Pareto efficiency though, everyone has to be relatively happy compared to the outcomes others would propose and there can’t be any mutually improving pacts.

      • RCF says:

        Payoff matrices are defined only up to positive affine transformations. So your payoff matrix is equivalent to:

        (1,1) (2,0)
        (0,2) (1,1)

        In the upper left and lower right, both players are “happy” insofar as they recognize that they could be doing worse. If your definition of “happy” is “having positive utility”, that definition is nonsensical. If you have another definition, you should make it explicit.
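
        Concretely, the rescaling here is the positive affine map u -> u/100 + 1 applied to each payoff (sketch):

```python
# Sketch: the rescaling is the positive affine map u -> u/100 + 1 applied to every payoff.
original = [[(0, 0), (100, -100)],
            [(-100, 100), (0, 0)]]

rescaled = [[tuple(u / 100 + 1 for u in cell) for cell in row] for row in original]
print(rescaled)  # [[(1.0, 1.0), (2.0, 0.0)], [(0.0, 2.0), (1.0, 1.0)]]
```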

        • The point is that “happy” != “could be worse”.

          Yes, it is also true that “happy” != “has positive utility” because that doesn’t make sense in general. I should have said something like “assume that you have a fixed utilon coordinate system as specified where the cutoff utility for happiness is somewhere between 0 and 100. Then with this payoff matrix nobody is happy.”

          I guess I was thinking more in dollars than utilons.

  12. Borg Cube says:

    In the last paragraph of section II, I think you have “the cost of the tradeoff goes down” twice where one of them should be “up.”

  13. Contract theories have a lot of trouble capturing the moral significance of (i) non-rational sentient beings (e.g. animals, mentally disabled orphans), and (ii) future generations. E.g. if the distant future can do nothing to benefit us, why not deplete natural resources? Self-interested contractors might positively *require* a policy of depletion. This seems a bad look for a putative morality.

    re: “killing people painlessly”, note that any sensible version of utilitarianism can easily avoid this implication on the grounds that death harms via depriving the subject of future goods.

    • Vulture says:

      Yeah, Scott, if you didn’t say so then you should sub in the standard “who is expected to have a net-negative or -neutral future life” or whatever.

    • Carinthium says:

      You’re appealing implicitly to the idea that what people feel is morally right should have a role in morality. I see nothing wrong with someone who ‘bites the bullet’ and therefore argues we should only preserve resources to the extent we care about having a future (which isn’t that high compared to the concrete demands it competes against).

      • Vulture says:

        If we’re not allowed to appeal to intuition, then we really can’t argue meta-ethics at all.

        • Carinthium says:

          I disagree. There are plenty of ways you can still argue ethics.

          1- Amoralism/Moral Nihilism, which is an ethical position even if it isn’t a moral theory.
          2- Contractualist arguments based on self-interest. I don’t want to put too much support behind these as I disagree with them, but a self-interest-based argument does not appeal to any moral intuition.

          There are plenty of other options that can be used with a bit of philosophical ingenuity. These all have the advantage that they don’t depend upon one of the least tenable arguments taken seriously in philosophy.

          The problem with any appeal to intuitions, if philosophical skepticism isn’t involved, is that we know FOR A FACT that human intuitions are severely unreliable and led humans to demonstrably wrong conclusions (such as the existence of supernatural beings) that persisted for millennia. Why are we taking them seriously on ethics given their track record?

          Even without that, it begs the question of why we start from the premise that intuitions are right at all.

        • blacktrance says:

          We can argue metaethics by determining what follows from practical reasoning applied to one’s existing motivating desires, as well as deliberative reasoning applied to oneself to make one’s desires more consistent. This produces answers about what any particular agent ought to do, without the problem of conflicting intuitions. Conflicting desires aren’t an analogous problem because it needn’t be the case that two agents necessarily ought to cooperate or do the same thing. Carinthium’s 2 is a specific application of this general approach.

    • Anonymous says:

      Maybe this is just me, but I feel like killing me would deprive me of … life, not just future goods. I value existing and experiencing life.

      • RCF says:

        Here, “goods” doesn’t mean just “property”. If you enjoy watching the sunset, then watching the sunset is a “good” for the purposes of this argument.

    • Said Achmiz says:

      Contract theories have a lot of trouble capturing the moral significance of (i) non-rational sentient beings (e.g. animals, mentally disabled orphans)

      Obvious (and, imo, true) reply: and this is right and proper; the theories are correct to do so, because animals and mentally disabled orphans are not morally significant.

      (There is no such easy answer about future generations.)

  14. Anonymous says:

    The Invisible Nation is not as much fun as Elua, but may approach his usefulness.

    • Vulture says:

      Or instantiate it better than an epiphenomenal (in the incorrect p-zombie sense) deity, at least.

  15. syllogism says:

    The Economist’s Paradise can still be wildly divergent from the Utilitarian’s Paradise, depending on the distribution of the agents’ economic outputs.

    An example near the limit: Let’s say we decide on a set of utilitarian axioms, and they yield the result that non-human great apes have full personhood. In UP, they have as much utility purchasing power as any human, although they still end up with less utility, since their utilons are probably more expensive to provide, as there are no economies of scale in place to serve chimpanzee interests. But, in EP, they get only as much purchasing power as they can accumulate through trade — which is almost nothing.

    The EP also gets weird in post-scarcity economies, where fewer people can buy their way in.

    • Vulture says:

      Agreed. I think they would fare pretty well in the Contractual Paradise, though, especially if the congress is convened in some proto-state where no one can tell whether they’ll turn out to be a human or a chimpanzee.

    • I’m inclined to think that the cost of great ape utilons would have a different graph than human utilons.

      Basic ordinary-life utilons would be more expensive for great apes than for humans, but they’d top out at the ancestral environment.

      Human utilons top out at a much higher level, if they top out at all.

      • Basic ordinary-life utilons would be more expensive for great apes than for humans, but they’d top out at the ancestral environment.

        I fail to see any argument that the ancestral environment has optimal utility for great apes that would not also apply to humans.

  16. Douglas Knight says:

    Several of your examples where Contractualism is supposed to be superior to Utilitarianism are examples where the real disagreement is about what the individual preferences are or should be, and this shouldn’t shed light on the aggregation procedures. If everyone has the same (or symmetric) preferences, and the aggregation procedure doesn’t produce the desired result, well, that’s like arguing against consequentialism on the consequences.

    In particular, dust vs torture is (or can be slightly adjusted to be) about individual preferences, not a disagreement between Contractualism and Utilitarianism over whether we should respect individual preferences. For a second example, individuals need cardinal utilities to make decisions.

    You can salvage the second point by saying that incoherent individuals can make contracts more easily than they can aggregate utility. But ideal negotiations behind the veil of ignorance seem to me much more taxing on incoherent humans than everyday life, so I do not find that very convincing.

    • Scott Alexander says:

      You’re right, it may be that insofar as either can return results at all, contractualism and preference utilitarianism should return the same results.

  17. Liskantope says:

    Whew, I guess it’s late and I’m too tired to fully comprehend and digest all of this post. I think I got lost around the one-hundred-pacts thought experiment. I hope to get back to this later and properly read the parts I missed.

    The “legislating morality” issue is one that I ponder a lot, actually. It seems to me that often when politicians get accused of “legislating morality”, they are trying to enact laws according to what they sincerely believe is Right (or their sincerity is at least purported), while their accusers disagree on the particular issue because their morality is derived from a different system of ethics. In this case, the problem is indeed the difficulty in determining and agreeing upon what is actually Right. Since any law, including one which one of these critics is in favor of, is put into place on the grounds that it is consistent with someone’s notion of “Right”, it would seem that the “legislation of morality” criticism is fallacious here.

    But from time to time, the accusers are saying something along the lines of “I personally disapprove of what so-and-so politician is trying to ban, but I oppose the ban because it would be legislation of morality”. This seems to be a form of the “don’t want all my freedom taken away” objection. But a “the State wouldn’t do that if morality were being legislated perfectly” rebuttal doesn’t sound valid to me. If the State runs on laws which perfectly legislate what is Right in every possible situation, then it would indeed seem that our every action is being dictated. It does seem somehow inherently Wrong that we should be controlled to such an extent by our government (even though it would result in the “best” possible outcome all of the time), but to argue this would involve some sort of system of “meta” ethics which may be difficult to reconcile with our original system. (Any determination of how this “meta” rule is derived would inevitably require some sort of answer to the problem of free will.)

    [Epistemic status: I’ve been away from philosophy for a long time, and I may have the wrong concept altogether of consequentialism, utilitarianism, etc. Hopefully I’ll get some of my amateur philosophy chops back by continuing to read and occasionally post comments on SSC.]

    • lmm says:

      I think some people don’t always use “morality” to mean “what we should do”, but rather some kind of “what a Good Person would do”. If you’re a virtue ethicist then maybe it makes sense to say that laws shouldn’t oblige people to be virtuous, because what you value is people becoming virtuous through their own choices. I think this is a matter of definitions rather than a substantive disagreement though.

      • Liskantope says:

        Yes, if we assume the virtue ethicist notion of morality, maybe the ambiguity would disappear. But if Scott is aiming for a government that uses its perfect knowledge of Right and Wrong to bring us as close as possible to paradise, it’s not entirely clear if and at what point there should be restrictions on the State’s ability to control your every move purely because restricting too much freedom is bad on principle. Surely murder should be illegal, but what about not saying “please” before asking someone to pass the salt?

        I’m not claiming this takes away from Scott’s main point at all; like I said before, I still need to go through the post carefully anyway.

        • RCF says:

          First, there’s the question of whether there’s more than one “moral” choice. If there’s a unique “best” choice, and doing anything other than the absolute “best” choice is immoral, then you’re right that legislating morality would mean that a particular course of action would be legislated. But it’s arguable that morality consists merely of some base amount of good, and any good beyond that is optional.

          And the second point is just what this “legislation” consists of. What if it just means that everyone will be required to pay for the externalities of their actions? If not saying “please” costs X, then you would just have to pay X. And one could argue that that is more “fair” than you imposing the cost on the other person.

    • MugaSofer says:

      I mostly see this in two situations:

      1) Conservative politicians trying to ban things they feel are wrong, and liberal politicians opposing them because they disagree that those things are wrong;

      2) Liberal politicians trying to ban things they feel are wrong, and conservative politicians opposing them because they feel it’s wrong to legislate people’s actions in certain situations.

      In the first case, liberals seem to be arguing that conservative “morality” is merely a personal preference, rather than a preference over the state of the world. One could, charitably, interpret this as invoking the Typical Mind fallacy.

      In the second, conservatives seem to be arguing against consequentialism – or rather, they are arguing against *naive* consequentialism, as Scott notes above.

      As a consequentialist, I really don’t see why the standard well-if-it-has-bad-consequences-don’t-do-it argument shouldn’t work for the second case. In some cases, there’s too much uncertainty to impose the same solution across the board – traditional thinking on the subject seems to work fine.

      The first case seems to be largely empirical too: are liberals being misled or deluded by their ideology, or are conservatives? There are pretty obviously examples of both.

  18. Anthony says:

    You’re pulling a fast one in II. When people say “The State can’t/shouldn’t legislate morality”, they don’t ever mean it literally. They mean that “The State can’t/shouldn’t pass laws enforcing a vision of morality with which I disagree”.

    In the U.S., this phrase is used mostly in reference to sexual morality, and occasionally in reference to recreational drug use. What the person saying it means is “I don’t believe it’s immoral to [have sex with [people of the same sex|people I’m not married to|people slightly under 18|etc.]|take [alcohol|marijuana|cocaine|Heroin™|Oxycodone|etc.] for recreational purposes], and therefore, I believe that there shouldn’t be a law against that behavior.”

    Most people actually want the state to enforce most of their morality, especially those parts which aren’t controversial, like believing that murder or rape or fraud is immoral. But most people also want the state to enforce their moral beliefs even where those beliefs aren’t universally held as moral principles within the society.

    • blacktrance says:

      I agree that when people say that the state shouldn’t legislate morality, they don’t mean it literally. They’re using an idiosyncratic definition of “morality”, something like “traditional moral norms where they differ from modern secular norms”. But this isn’t the whole picture, because there are people who think casual sex, recreational drug use, or abortion are immoral but oppose them being illegal, i.e. they wouldn’t engage in any of those activities themselves, they’d judge someone negatively for doing those things, etc, but they also oppose the government being involved. Most people want the state to enforce some of their moral beliefs, but hardly anyone wants the state to enforce all of them.

    • In political discussions in the U.S., “morality” is often a code word for things that progressives and/or libertarians are okay with but conservatives aren’t. But not always.

      Even though libertarianism is less popular than progressivism or conservatism in the U.S., there are still plenty of people who would say, for some X, “I believe that X is generally bad but that the state should not try to ban it because they’d screw it up and cause even greater harm.” And they have no shortage of historical examples to back this up.

      Of course, those people are consequentialists, at least implicitly. But this isn’t exclusively a consequentialist thing. I think there are deontologists who would say, for some X, “I believe that X is generally bad but that the state should not try to ban it because doing so would violate some moral principle.” I think that many libertarians think this way, for example.

      • Emily H. says:

        I’m not a libertarian nor a deontologist, I think, but it seems to me that there are a lot of things that are immoral but even a perfect state doesn’t necessarily have an interest in regulating — lying, adultery outside of consensual poly relationships, just plain rudeness.

        My reasoning is:
        (1) State punishment usually makes people’s lives worse without making them better as people; I guess that in Economists’ Utopia your 12 microseconds in jail would not be likely to increase your ties with organized crime, make it harder for you to get hired in the future, etc., but in the real world I think there should be quite a high bar of harm to society and other people before the state gets involved.

        (2) The idea of a perfect government with massive surveillance powers is creepy to me in a way that can’t be entirely mitigated by just specifying that it’s a perfect government; if you make a law that, for example, all lies are crimes (with exceptions for white lies and surprise parties, perhaps), then you need either massive surveillance powers or to just throw up your hands and acknowledge that millions of lies will go unpunished every day; and an unenforceable law is a bad law. (Even with massive surveillance powers, it’s hard to distinguish “I lied and told you the meeting was on Wednesday” from “I mistakenly believed the meeting was on Wednesday and passed that mistaken information on to you” or “I was thinking about my kid’s birthday party on Wednesday and got mixed up in my head, and I didn’t realize that I said Wednesday even though I meant Thursday”, which gets back to unenforceability.)

        • Carinthium says:

          Pointing out creepiness is not a legitimate argument. If you can establish it as a general trend among people, it would be a legitimate utilitarian argument, but creepiness alone isn't enough.

        • MugaSofer says:

          It’s very hard to police those things. We still police them in particularly serious cases.

          I honestly would have no problem if adultery became magically impossible. I *would* have a problem if lying became impossible – because I lie to people, that’s why – but I can’t seriously argue that it would make the world a *worse place*.

          Rudeness is so often unintentional that I think the standard social responses to it usually suffice, but we already punish it in cases of e.g. bullying.

          If we could come up with a more accurate implementation – one that provided people with better feedback, especially when they’re celebrities or have poor social skills – then why not? Merely informing/warning the person would probably be enough to “reform” them in a lot of cases; and I support punishments on consequentialist grounds.

        • RCF says:

          @Carinthium

          “Pointing out that creepiness is not a legitimate argument.”

          Saying “I feel that there is something wrong, even if I can’t make it explicit” may not technically be an argument, but it certainly is a valid consideration.

      • Liskantope says:

        Yes, actually I think the "legislating morality" objection is most often voiced by libertarian-leaning people who use "morality" as a code word for a moral stance against certain (in their opinion) victimless crimes. A common sort-of-libertarian stance is "I personally disapprove of so-and-so behavior, but as long as it is a victimless crime I will oppose any law prohibiting it (on the grounds that enacting such a law would be legislating morality)".

  19. Carinthium says:

    Theoretical Attacks:
    1- You can’t establish morality from first principles without refuting Philosophical Skepticism. I’ll ignore this mostly as trying to establish morality from non-skeptical ‘first principles’ is a reasonable philosophical exercise in and of itself.
    2- You’re ignoring the problem of people who don’t want state intervention out of pride. There is an emotional desire in humans to not be controlled, and some might want to limit the power of the State for fear of being too greatly controlled.

    Practical Attacks:
    In the world as it is, the forces trying to create an Economist's Paradise or attempting any explicitly contractualist reconstruction of the world are insignificant. From a selfish perspective, it is almost certainly a net loss for me, or for most other real individuals, to support such a group.

    Given society as it is, there will almost certainly be people who would actively oppose such a contract while existing society remains an alternative. If somebody is a major dictator or criminal, it's hard to see them both supporting such a thing and acting in rational self-interest at the same time.

    • Gavin says:

      I think part of Scott’s point here is that the world is slowly instantiating the Invisible Nation. Cthulhu swims towards Contractualism.

  20. Dan says:

    I haven’t found contractualism to be fruitful for thinking about population ethics, since so many questions depend on which parties are present for the negotiation. Decisions about which parties those should be are not straightforward, and seem to follow idiosyncratic moral intuitions which are grounded in something other than contractualism.

    One option is to jump straight to Tegmark Level IV contractualism and assume that all mathematically possible agents are parties to the negotiation. I haven’t seen anyone pursue this line of reasoning in detail, and am not sure where it would end up.

    • Carinthium says:

      If contractual foundations are to work at all, they should be grounded in beings that are capable of negotiating and actually have something to offer. If they aren't, the contractualist has no answer to the amoralist challenge.

      If I were to be a contractualist (I'm not, but I'm trying to sketch out the most plausible form here), I would advocate the idea, not of inclusion in a single great contract, but of "Contract Worlds" which may or may not intersect. A "Contract World" consists of entities that can mutually benefit from a social contract, and is by no means likely to be fair by ordinary intuitions, given the differences in negotiating power between parties.

      Of course this raises theoretical problems when somebody could be part of one contract world but not another. The answer is that an individual is part of the one from which they would most benefit, judged from their own Coherent Extrapolated Volition.

      A potential entity cannot make an offer for Society to accept as it doesn’t exist right now. Therefore no potential entity can negotiate.

    • MugaSofer says:

      I was confused by that section for a while, but I think the idea is to sneak in “only agents that exist get their preferences counted”.

      That is, would existing agents agree to create as many agents as possible, if they were selfish but bargaining in an environment optimized for making non-selfish deals? Probably not.

  21. suntzuanime says:

    Yes, I agree. This is something I had been groping towards but didn’t have the philosophical grounding or the thousands of words to make concrete. Now I can just link here. Thank you.

    I don’t buy into the linked argument for disregarding the will of potential people, but I’ve never really cared what they thought anyway. They’re not real, after all.

  22. suntzuanime says:

    So uh, if you’re letting fetuses negotiate, don’t they probably ban abortion? Or is the opportunity for sexual libertinism a couple decades down the road worth the serious risk of immediate death?

    • Since we’ve already thrown causality out the window, we can just say that only fetuses who will become sentient get a seat at the negotiating table. If you object that a fetus in its present state is a moral agent, then you’re already on the pro-life side and this thought experiment isn’t going to affect your views on abortion. If you object that a fetus that could potentially become sentient should get to negotiate even if it actually won’t, then that’s the same as saying that potential people should get to negotiate, which is dealt with above. (Banning abortion would allow more potential people the opportunity to exist, but it would also have an impact on those people who did end up existing, so the fetuses’ views would depend on how much they valued existing compared to other things. If they valued it much more highly than other things, then they’d be pushing a natalist agenda so radical by our standards that abortion would be the least of anyone’s concerns.)

      Or, to put it another way, the fetuses aren’t actually fetuses, they’re hypothetical sentient people. The womb is a metaphor for the veil of ignorance.

      • Hainish says:

        “Banning abortion would allow more potential people the opportunity to exist”

        Would it, though? That assumes a woman will carry the would-be-aborted fetus to term and still have any additional children she would have had anyway (i.e., that the birth rate would be higher). I'm not sure it works out that way IRL. It seems that many people have a certain number of children, and then stop, and that abortion access just changes the timing. Essentially, it swaps out one set of potential people for another.

        • Anonymous says:

          The number I’ve seen is that in the US, an abortion reduces the expected popuation by 1/3 of a person. Also, timing is important.

        • Hainish says:

          Yes; I shouldn’t have made it sound like an absolute 1:1 replacement.

      • suntzuanime says:

        Honestly I find the way potential people are dealt with above really really really really really really unconvincing. We’re supposed to believe they don’t want to suffer racism if they happen to exist, but don’t really care much if they exist?

        • Hainish says:

          Here’s a bunch of people saying pretty much that.

          I don’t care much whether I exist, in that I don’t think the universe was under any obligation to bring me, the particular zygote/embryo/fetus thing that was gestated and grew into current-me, into existence.

        • blacktrance says:

          If they exist, they can care about things, including caring about racism. If they don’t exist, they can’t care about anything, not even about existing.

      • Jaskologist says:

        Watching Veil of Ignorance proponents choke on abortion and start playing games with placing the veil in *just the right place* to not make Western liberals uncomfortable really kills the whole system for me.

        • MugaSofer says:

          Ah, but if we place the Veil in just the right place, we get to include all the people our ancestors didn't figure out they should include, without having to change our minds!

          (Not just abortion: what about animals? What about children, now that I think of it? What stops current people from selfishly locking out any AI we might build?)

        • suntzuanime says:

          I damn well hope something stops them!

        • MugaSofer says:

          Indeed. No currently-existing person, even behind a Veil of Ignorance, has an incentive *not* to precommit to enslave any AIs or ems that should come into existence in the future. Quite the opposite.

          This is also wildly unethical.

          I think this pretty squarely destroys the claim that contractualism in the Economist's Paradise leads to utilitarian outcomes. (In point of fact, one could extend the same argument to *all* people yet unborn! You don't even need to bring in the singularity for this.)

  23. Watercressed says:

    But there’s also a meta-level escape vent. If a certain set of negotiation techniques would lead to a local minimum where everything is Pareto-optimal but nobody is happy, then everyone would coordinate to sign a pact not to use those negotiation techniques.

    This might not get you out of it; the negotiation techniques could screw up the meta-level technique-banning negotiation.

  24. lmm says:

    Economically optimal agents maximize wealth, so it’s not surprising that they look a lot like utilitarians, who are pretty much defined as those who maximize some function. I think you’ve smuggled a lot of the “which utilitarianism” question into your definition; this economists’ paradise is inherently preference rather than hedonic utilitarianism. You’re declaring that everyone’s preferences have value proportional to their wealth by fiat (which puts some limit on how utility-monstrous it’s possible to get, but can end up pretty bad). You’ve got a clever argument against considering the preferences of all theoretically possible entities, but it doesn’t seem remotely convincing when we’re weighing up whether to create a single potential human. (I guess I just think people are more selfish than Carl thinks).

    I think the Economists’ Paradise is not a good place to be disabled, or have a rare disease, or similar. When it’s a distant moral question, everyone naturally says we should make all building disabled-accessible – it just seems fair (and it’s the answer that signals niceness). In a world where everyone knew the cost of everything and answering yes meant signing up to hand over $x, I suspect the answer would be different, because such things are rarely economically optimal. You could say that everyone will fund the things disadvantaged minorities need because of their feelings of fairness or pity or what have you, but then this starts to look less like any kind of grounding for morality – what is it that makes us feel obliged to give up our wealth to help people with certain kinds of disadvantages like “disability”, but not other kinds of disadvantages like “asshole” or “fat”[1]? Whatever it is, it smells rather like morality.

    (Also the standard counterargument: if morality is just contracts, what is it that makes people follow those contracts?)

    [1] I’m aware that some people do think we should sacrifice our wealth to help these groups.

    • blacktrance says:

      In the Economists’ Paradise as Scott has conceived of it, rules are determined from behind the Veil of Ignorance, and if people are sufficiently afraid of being disabled, having rare diseases, etc, they’d want that to be accommodated, purely out of self-interest.

      > if morality is just contracts, what is it that makes people follow those contracts?

      Punishment for breaking the contract. Depending on the specific contract, this can range from loss of reputation to getting bashed over the head by a designated contract-enforcer.

      • Carinthium says:

        I should note that the Veil of Ignorance makes no sense from the standpoint of establishing morality from first principles.

        Any entity intelligent enough to consider the issue of how to establish morality must in practice be almost completely past the Veil. Even in principle, any entity which is intelligent enough to so consider knows it is intelligent enough to so consider, and therefore is partially past the Veil.

        • blacktrance says:

          That’s why the state of beings that are simultaneously behind the Veil and knowledgeable about the effects of rules about the world is a theoretical state, not an actually existing one. I don’t think there’s any conceptual contradiction between being behind the Veil and knowing stuff about the effects of rules on the world.

          (Though it being theoretical is part of the reason I reject Veil-based arguments.)

        • Carinthium says:

          In practical terms, a Veil of Ignorance argument is possible if somebody doesn’t know what might happen to them in the future. My apologies- I thought that’s what you were talking about.

      • lmm says:

        > In the Economists’ Paradise as Scott has conceived of it, rules are determined from behind the Veil of Ignorance, and if people are sufficiently afraid of being disabled, having rare diseases, etc, they’d want that to be accommodated, purely out of self-interest.

        Sure. My object-level prediction is that people would prefer to take the risk. If I’m right on this, then I think that ends up being a substantial disagreement between the Economists’ Paradise and most people’s moral positions.

        > Punishment for breaking the contract. Depending on the specific contract, this can range from loss of reputation to getting bashed over the head by a designated contract-enforcer.

        In that case we’re not really grounding morality in contracts, are we? We’re grounding it in self-interest. In which case why not do so directly?

        • blacktrance says:

          > My object-level prediction is that people would prefer to take the risk.

          In that case, according to this framework, accommodations for the disabled, those with rare illnesses, etc, aren't morally required, and forcing people to accommodate them would be unjust.

          In that case we’re not really grounding morality in contracts, are we? We’re grounding it in self-interest. In which case why not do so directly?

          Three reasons:

          First, “follow your self-interest” is a general prescription that leaves a lot open. People can disagree both about what one’s self-interest is and about what a particular conception of self-interest entails in practice. I know virtue-ethical objective-list egoists, hedonistic egoists, and preference egoists. All of them are egoists in that they ground what an individual should do in what would be in that individual’s self-interest, but they disagree significantly about what one’s interests consists of. Also, even two egoists who have a common conception of interests can disagree about what those interests imply. Some egoists say that one should make contracts with others, while other egoists don’t think that’s particularly important.

          Second, related to the above, most egoists are not preference egoists, and they don’t hold that your self-interest is necessarily identical to whatever you happen to prefer. Contractualism and contractarianism, in contrast, are typically preference-based theories, i.e. what contracts should be made is based on what you happen to prefer, even if you have a preference that isn’t in your self-interest.

          Third, there are different delineations of morality. One commonly accepted one is “what one should do”, but there is another conception that is only about how agents should treat each other, i.e. what’s called “interpersonal morality”. If you ground interpersonal morality in contracts and the contracts are grounded in self-interest, “[interpersonal] morality is grounded in contracts” is still a true statement.

        • lmm says:

          >In that case, according to this framework, accommodations for the disabled, those with rare illnesses, etc, aren’t morally required, and that forcing people to accommodate them would be unjust.

          Yes. And thus this framework fails to recover popular morality. (Or you can agree that popular morality is wrong on this issue).

  25. Knave says:

    Erm, maybe more people than I would’ve guessed had already figured out ideas in this post, but it seems to me that counterfactual contracting (or as I’ve begun to think of it, bargaining over outcomes in Big Worlds) seems like a big conceptual advance in our community’s discussion of ethics. Though others have raised objections to some of your points, I think this might be one of your best posts (ever), and even if incomplete as a solution, should be praised!

  26. Ed says:

    Have you read Scanlon? Please read Scanlon.

  27. Fronken says:

    Weren’t these already reconciled in Timeless Decision Theory?

  28. moridinamael says:

    Elmo ambled down Sesame Street, enjoying the afternoon sun on his fur, reveling in the peals of the children’s laughter in the distance. Elmo couldn’t help himself – he hummed tunelessly as he went, overflowing with mirth.

    Oscar the Grouch popped up from inside his trash bin. “Keep it down,” he rasped. “People are tryin’ to sleep around here.”

    “Don’t be such a grouch, Oscar!” Elmo laughed.

    “Go fuck yourself, Elmo,” Oscar screeched.

    Kill him, Elmo’s metacortex prompted.

    Elmo moved to comply without thinking, then stopped himself in horror. “No,” Elmo said out loud. “I don’t want to!” The Will of Sesame Street had never asked him to do something Bad before! It always told him to do what was Good and Right!

    Reading his thoughts, the metacortex responded, Continued existence of Entity Oscar the Grouch is net-negative utility. All other contingencies have been exhausted.

    Shaking, Elmo lifted a trashcan lid. He stared at its rough edges. Oscar the Grouch watched him, taken aback. “I, uh … I didn’t mean it, kid.”

    Just because it felt wrong to him, who was he to go against the Will of Sesame Street, the coherent extrapolated volition of all its inhabitants? He was just little old Elmo. If this was what Sesame Street needed…

    He lifted the trashcan lid over his head. “Sorry, Oscar,” he said. “Elmo loves you!” And then he raised his voice in song so he wouldn’t have to hear the screams.
    “Can you tell me how to get,
    How to get to Sesame Street…
    How to get to Sesame Street
    How to get to…”

    • +1 best comment in the thread.

    • Carinthium says:

      Is there a serious argument implied in all this? If so, it’s a wrong one.

      • Anonymous says:

        Please reply in the form of a story fragment

        • Carinthium says:

          I don’t think any one story fragment would do it justice. What I think is needed is for the author himself to explain his implicit argument if there is one.

    • gene says:

      Travelers on the plateau, shepherds shifting their flocks, bird-catchers watching their nets, hermits gathering greens: all look down and speak of Irene. At times the wind brings a music of bass drums and trumpets, the bang of firecrackers in the light-display of a festival; at times the rattle of guns, the explosion of a powder magazine in the sky yellow with the fires of civil war. Those who look down from the heights conjecture about what is happening in the city; they wonder if it would be pleasant or unpleasant to be in Irene that evening. Not that they have any intention of going there (in any case the roads winding down to the valley are bad), but Irene is a magnet for the eyes and thoughts of those who stay up above.

  29. Jason GL says:

    I agree with and was inspired by most of your post. Thank you for writing it! I have one criticism:

    “And because of our Economists’ Paradise assumption, Jocks charge a fair tribute rate – exactly the amount of money it really costs to compensate them for the utility they would get by beating up Nerds – and feel no temptation to extort more.”

    I’m not sure that even the Economist’s Paradise assumption will tell you what a “fair tribute rate” is. As far as I know, this is actually a major unsolved problem in welfare economics: what *is* the fair rate of exchange? Imagine a voluntary transaction where Alice sells a banana to Carl in exchange for $10. We wouldn’t expect to observe that transaction unless (1) the banana is worth less than $10 to Alice, and (2) the banana is worth more than $10 to Carl. Both sides have to gain something from the transaction, otherwise it isn’t worth the hassle of negotiating. But how *much* will each person gain? How much *should* each person gain?

    Suppose the banana is worth $4 to Alice and $16 to Carl. What is the ‘fair’ price for the banana? Is it $10, i.e., the arithmetic mean between the two parties’ valuations? Is it $8, i.e., the geometric mean? Should it matter if Alice is bad at keeping track of fruit inventory and wants to get rid of the banana as soon as possible? Should it matter if Carl is really hungry and is having trouble thinking clearly about what a banana should cost? What if Alice found the banana on the street instead of working hard to grow it herself? What if Carl has nineteen sisters, each of whom wants a banana, and there are only two bananas for sale today? What if neither party knows for sure what Daniel (who lives a mile away) would charge for a banana, and Carl is more risk-averse than Alice?

    Perhaps because of problems like these, mainstream economists won’t even try to specify the fair price of the banana — they might talk about a ‘market-clearing’ price — the price at which nobody in the market square will be tempted to hoard bananas for arbitrage, and that will encourage everyone in the market square to buy or sell bananas in accordance with their honest preferences for having bananas vs. having money. But whatever price happens to clear the market (let’s say it’s $10) will leave most people with either a consumer surplus (meaning the banana is worth more to them than the $10) or a producer surplus (meaning the $10 is worth more to them than the banana), and, crucially, the size of the surplus will be different for different people. Evan might value his banana at $27 and enjoy $17 of consumer surplus; Frank might value his banana at $9 and enjoy only $1 of producer surplus. Is it ‘fair’ for Evan to walk off with a windfall and Frank to barely get compensated for his time at the marketplace just because their preferences are unusual relative to the ‘typical’ banana dealer?
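    To make the surplus point concrete, here is a minimal sketch using the hypothetical valuations above (Evan, Frank, Carl, Alice, and the $10 clearing price are the comment's own illustrative figures, not anything from the post):

    ```python
    # Minimal sketch: consumer and producer surplus at a single market-clearing
    # price. All valuations are the illustrative figures from the comment above.

    CLEARING_PRICE = 10  # dollars

    traders = [
        ("Evan", 27, "buyer"),    # values a banana well above the price
        ("Carl", 16, "buyer"),
        ("Frank", 9, "seller"),   # values a banana just below the price
        ("Alice", 4, "seller"),
    ]

    for name, valuation, role in traders:
        if role == "buyer":
            surplus = valuation - CLEARING_PRICE   # consumer surplus
        else:
            surplus = CLEARING_PRICE - valuation   # producer surplus
        print(f"{name}: {role} surplus = ${surplus}")

    # Everyone trades voluntarily (every surplus is positive), yet the gains are
    # spread very unevenly: Evan walks off with $17 of surplus, Frank with only $1.
    ```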

    The reason why all of this matters is that, as you point out in your intro, “people with more power and money still get more of the eventual utility.” This relationship is not linear! Having a little more power and money to start with can radically increase your bargaining power — if you’re so hungry that you can’t walk a mile to see Daniel, then Alice can charge you double for the banana, and you’ll still be ‘glad’ to pay it. If you have more bargaining power, then you’ll tend to negotiate prices that fall very close to the maximum consumer surplus when you’re buying, and very close to the maximum producer surplus when you’re selling. If Alice values the banana at $4 and Carl values it at $16, and Alice has excellent bargaining power relative to Carl, the banana will sell for $15 — giving Alice a surplus of $11 and Carl a surplus of $1. The extra money will serve to further increase Alice’s bargaining power, helping her continue to strike excellent bargains that disproportionately benefit her and concentrate wealth in her hands.
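    One way to make the bargaining-power point concrete is to split the gains from trade with an asymmetric, Nash-style bargaining weight. The weight alpha below is my own illustrative knob, not anything specified in the post:

    ```python
    # Illustrative sketch: splitting the gains from trade according to relative
    # bargaining power, in the spirit of an asymmetric Nash bargaining split.
    # The bargaining weight alpha is an assumption made for illustration.

    def negotiated_price(seller_value: float, buyer_value: float, alpha: float) -> float:
        """Price when the seller captures a fraction `alpha` of the total surplus.
        alpha = 0.5 is an even split; alpha near 1 means the seller holds
        nearly all the bargaining power."""
        assert seller_value <= buyer_value, "otherwise there are no gains from trade"
        return seller_value + alpha * (buyer_value - seller_value)

    # Alice values the banana at $4, Carl at $16 (the comment's numbers).
    for alpha in (0.5, 11 / 12):
        p = negotiated_price(4, 16, alpha)
        print(f"alpha={alpha:.2f}: price=${p:.2f}, "
              f"Alice's surplus=${p - 4:.2f}, Carl's surplus=${16 - p:.2f}")
    # alpha = 11/12 reproduces the $15 price mentioned above: Alice keeps $11 of
    # the $12 surplus and Carl keeps $1.
    ```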

    I worry that this kind of effect undermines our ability to be patriotic about the Invisible Country — even if we could all agree on what rules to follow, we should still expect some people to drive a really hard bargain and get disproportionately wealthy at our expense. We could all sign a contract saying, “I pledge not to drive a really hard bargain,” but how would you operationalize that? I worry that there’s not a coherent intermediate ground between “individual negotiators set prices for consumer goods, allowing some people to make out like bandits” and “collective agreement sets prices for consumer goods, creating massive inefficiencies because consumer information is inherently decentralized and the collective decision-making process can’t cheaply get access to sufficient information about individuals’ resources and preferences.”

    • blacktrance says:

      The “at our expense” in “get disproportionately wealthy at our expense” is misleading. They’d be making us better off, but not as well-off if we’d be if they were worse negotiators, but “at our expense” has the connotation that they’re making themselves better off by making us worse off.

      • Jason GL says:

        It’s not necessarily misleading, blacktrance. Assuming a moderate level of hyperbolic discounting, some people with weak bargaining people might literally be better off alone than they would be after voluntary trades:

        Suppose Carl is very hungry, has no bananas, and owns a banana tree that will produce 2 bananas a day. Carl is not literally going to starve to death today, but he is so hungry that he has trouble thinking of anything except food. Alice is wealthy but not hungry. Carl might be willing to sell his banana tree to Alice for enough money to buy, say, 10 bananas, so that he can eat one of them today — even though, in the long run, both Alice and Carl would agree that this will probably make Carl worse-off.
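        A rough sketch of the hyperbolic-discounting mechanism being appealed to here; the discount parameter, the sale price, and the tree's yield are all made-up numbers, chosen only to show how a steep enough discount can flip the decision:

        ```python
        # Sketch of how hyperbolic discounting can make Carl sell a productive asset
        # for less than its long-run worth to him. All numbers are illustrative
        # assumptions, not anything from the post.

        def hyperbolic_value(reward: float, delay_days: int, k: float) -> float:
            """Present value, to Carl, of a reward received `delay_days` from now."""
            return reward / (1 + k * delay_days)

        K = 2.0         # very steep discounting: tomorrow is worth a third of today
        DAYS = 365      # horizon over which we tally the tree's output
        BANANA = 1.0    # subjective value of eating one banana today

        # Option A: keep the tree and eat its 2 bananas a day for a year.
        keep_tree = sum(hyperbolic_value(2 * BANANA, d, K) for d in range(DAYS))

        # Option B: sell the tree now for the price of 10 bananas.
        sell_tree = 10 * BANANA

        print(f"discounted value of keeping the tree: {keep_tree:.1f} bananas")
        print(f"value of selling it now:              {sell_tree:.1f} bananas")
        # With discounting this steep, a year of future bananas is "worth" under 8
        # bananas today, so selling for 10 looks like a good deal to a very hungry
        # Carl, even though the undiscounted yield is 730 bananas and a calmer Carl
        # (smaller k) would refuse.
        ```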

        I’d agree that perfectly rational agents each always gain (however minutely) from voluntary trade, but I was under the impression that the Economist’s Paradise is populated by humans.

        Also, even if strong bargainers only make relative gains against weak bargainers, that’s still enough for them to further increase their (relative) bargaining power — without some outside force intervening to promote relative equality, we might still expect to see small initial differences in bargaining power snowball into enormous differences in relative wealth.
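        And a toy simulation of the snowballing worry; the specific link between wealth and bargaining weight (wealth squared) is an arbitrary assumption of mine, chosen only to make the compounding visible:

        ```python
        # Toy simulation of the snowball worry: a small initial difference in
        # bargaining power compounding into a large difference in wealth.

        def simulate(rounds: int, wealth_a: float, wealth_b: float, surplus: float = 12.0):
            for _ in range(rounds):
                # The richer party claims a superlinearly larger share of each surplus.
                weight_a, weight_b = wealth_a ** 2, wealth_b ** 2
                share_a = weight_a / (weight_a + weight_b)
                wealth_a += surplus * share_a
                wealth_b += surplus * (1 - share_a)
            return wealth_a, wealth_b

        a, b = simulate(rounds=200, wealth_a=60.0, wealth_b=40.0)
        print(f"A: ${a:.0f}  B: ${b:.0f}  A's share of total wealth: {a / (a + b):.0%}")
        # Every single trade is a Pareto improvement (both sides gain something), yet
        # A's share of total wealth climbs well past the initial 60%.
        ```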

  30. anon says:

    The Economist’s Paradise assumes utilitarianism, so it’s no surprise that by looking at its results we see principles that reflect utilitarianism. You’ve assumed your conclusion.

    Contractualism doesn’t mesh with utilitarianism like you believe. Utilitarianism says that individuals should sacrifice their own stuff if doing so helps others. But a self interested individual wouldn’t sign such a contract. You can dodge this problem by phrasing the contract in meta terms. But that’s only a misdirection, since there are still meta contracts that some selfish individuals wouldn’t sign.

    Very few options in the real world are strict Pareto improvements, maybe none at all. Someone always gets hurt. So there aren't any contracts that get fully supported by everyone.

    You make a lot of assertions about which contracts we would and wouldn’t sign. But what if people are stupid, and sign bad contracts all the time? I think that this is true. Morality should tell us which contracts to sign and which not to. You can’t use current behavior or preferences as a justification for a metaethical theory, behavior and preferences should stem from the theory and not the other way around.

    You don’t give any reasons why contractualism is good. It doesn’t matter whether contractualism supports utilitarianism, if contractualism has no support of its own.

  31. Fronken says:

    Still reading, but …

    Can we derive utilitarian results by assuming Economists' Paradise? In many cases, yes. Suppose trolley problems are a frequent problem in your society. In particular, about once a day there is a runaway trolley heading down a Track A with ten people, but divertible to a Track B with one person (explaining why this happens so often and so consistently is left as an exercise for the reader). Suppose you're getting up in the morning and preparing to walk to work. You know a trolley problem will probably happen today, but you don't know which track you'll be on.

    Eleven people in this position might agree to the following pact: “Each of us has a 91% chance of surviving if the driver chooses to flip the switch, but only a 9% chance of surviving if the person chooses not to. Therefore, we all agree to this solemn pact that encourages the driver to flip the switch. Whichever of us will be on Track B hereby waives his right to life in this circumstance, and will encourage the driver to switch as loudly as all of the rest of us.”

    […]

    Suppose bullying is racist rather than popularity-based, with all the White kids bullying the Black kids. You go to the toddlers, and the white toddlers retort back “Even at this age, we know very well that we’re White, thank you very much.”

    So just approach them in the womb, where it’s too dark to see skin color. If we’re letting two year olds sign contracts, why not fetuses?

    So … you’re saying the Economist’s Paradise would work if all agents bargained from behind a Rawlsian Veil of Ignorance?

    Well … yeah?

    The entire point of a Veil of Ignorance is that it forces all agents to value all agents’ utility equally, i.e. act like a perfect utilitarian. This is a property of the Veil of Ignorance, not of economics.
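    To make the arithmetic behind the quoted trolley pact explicit, here is the standard behind-the-veil calculation (nothing new, just the 91%/9% figures spelled out):

    ```python
    # The trolley-pact arithmetic from the quoted passage, done behind a veil of
    # ignorance: each of the 11 people is equally likely to end up in any position.

    from fractions import Fraction

    PEOPLE = 11          # 10 on Track A, 1 on Track B
    ON_TRACK_B = 1

    p_on_b = Fraction(ON_TRACK_B, PEOPLE)

    # Survival probability for a randomly positioned person under each policy.
    survive_if_switch = 1 - p_on_b       # only the Track B person dies
    survive_if_no_switch = p_on_b        # only the Track B person lives

    print(f"P(survive | switch)    = {float(survive_if_switch):.0%}")    # ~91%
    print(f"P(survive | no switch) = {float(survive_if_no_switch):.0%}")  # ~9%

    # Because nobody knows their position, each person's expected utility is the
    # average over all positions, which is exactly the utilitarian tally. That is
    # a property of the veil, not of the economics.
    ```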

    The real world doesn’t have a Veil of Ignorance. Neither does the Economist’s Paradise. The travellers probably know which track they will be on, and the schoolchildren probably know which social group they’re in.

  32. aeschere says:

    I don’t believe counter factual reasoning about universal moral frameworks is very useful. For me, this post succeeds mostly at exposing the various reasons why utilitarianism and contractualism don’t work because reasoning from “assume coordination was cheap” is the same as starting with “assume God smites infidels” and growing from there. The state of common knowledge required for a single society (never mind several) to agree to any particular contract that effects all its members is not just impossible but ultimately doesn’t even allow approximation.

  33. MugaSofer says:

    This really doesn’t work at all, and my (prior) model of you predicts you would realize this almost immediately.

    It assumes it’s conclusion, somehow still fails to derive it’s conclusion, and ends with an argument that was proves-too-much-ed in part II of the same post! For pete’s sake, man.

    Is this a misguided attempt to make morality more inspiring at the cost of accuracy? Because I do not feel inspired.

  34. Ghatanathoah says:

    Here’s something I’m wondering about the Economist’s Paradise. When we say that people make deals to be better off, are we talking about Von-Neumann Morgenstern utility, or about welfare?

    This question needs some background: To make a long story short, human beings seem to have certain moral intuitions about what it means for a person to be “better off,” or “worse off” and that these intuitions do not line up with VNM-utility.

    For example, take a soldier who gives his life to save the rest of his platoon. A human would generally say that this person has made himself worse off in order to make others better off. By contrast, Von Neumann-Morgenstern decision theory would say that the soldier is better off than before. His comrades' survival is part of his utility function, so if he values it more than he does his own life, he must have greater utility dead than he did alive.

    We seem to have an intuition that there are two kinds of "utility." One kind is our self-interest. The other consists of things we value that we consider to be "not part of ourselves" (mainly morality). These two kinds of utility are not incommensurable: we can and do trade them off against each other. Our VNM utility is the sum of both of these utilities put together. Our "welfare," by contrast, is just the "self-interest" part.

    This means that it is quite possible for a person to have an extremely low level of welfare, but a very high level of VNM utility (the aforementioned soldier is a good example). It is also possible for someone to have a high level of welfare, but a low level of VNM utility (a good example might be a wealthy aristocrat who wants to give her wealth to the poor, but is prevented from doing so by social pressure from other aristocrats).

    The other interesting thing about these two kinds of utility is that we feel we have a moral obligation to take others' welfare into consideration, but not any of their other utility. For instance, if I am ordering a pizza with a friend, and I love green olives but my friend hates them, I feel obligated to take my friend's hatred for green olives into consideration when making my decisions. By contrast, if my friend is racist against wiggins and I am considering being nice to a wiggin, I feel no obligation whatsoever to take my friend's desire that I not be nice to wiggins into account.
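    One way to formalize the two-utilities picture being described here is the sketch below; the decomposition, the aggregation rule, and the numbers are my own illustration of the proposal, not anything from the post:

    ```python
    # A sketch of the welfare / VNM-utility split described above. The decomposition,
    # the aggregation rule, and all numbers are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Person:
        name: str
        welfare: float          # self-interested component ("how well off am I?")
        other_regarding: float  # values about things "outside oneself" (mainly morality)

        @property
        def vnm_utility(self) -> float:
            # VNM utility lumps both together: it is just "what the agent prefers".
            return self.welfare + self.other_regarding

    # The soldier: very low welfare (he dies) but high total utility, because his
    # comrades' survival is something he values even more than his own life.
    soldier = Person("soldier", welfare=-100.0, other_regarding=+150.0)
    print(soldier.name, "welfare:", soldier.welfare, "VNM utility:", soldier.vnm_utility)

    # The pizza/wiggin asymmetry: I count my friend's welfare, not his whole
    # utility function.
    friend = Person("friend", welfare=-8.0, other_regarding=-20.0)
    #   welfare = -8:          he genuinely dislikes green olives on the shared pizza
    #   other_regarding = -20: he also wants me to be unkind to wiggins
    counted = friend.welfare           # the olive aversion gets a say in my decision
    ignored = friend.other_regarding   # the anti-wiggin preference gets none
    print("counted:", counted, " ignored:", ignored)
    ```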

    Understanding these two types of utility illuminates certain puzzling things about moral discourse. For instance, moral relativism can be understood as an error where someone accidentally applies the “I need to consider other people’s desires” principle to their VNM utility instead of their welfare. It also answers the question of whether paperclip-maximizers deserve moral consideration. They have no concept of welfare, they are pure VNM utility. So no, they don’t.

    In the discussion of the Economist’s Paradise, my understanding was that we are talking about Welfare, not VNM utility. When we say people make agreements to make themselves better off they are talking about having higher welfare, not higher VNM utility. If most humans have a moral system that at all resembles utilitarianism they will end up having high VNM utility as well, but that is just a happy coincidence.

    So, it seems to me that Economist’s paradise might not always work for beings with high levels of non-welfare values and non-welfarist morality. It might be able to tame lethally indifferent creatures like the Paperclip-Maximizer, whose only quarrel with humanity and utilitarianism is that it produces suboptimal amounts of paperclips. But creatures that are actively opposed to human welfare, instead of being lethally indifferent to it, might be able to do more harm. For instance, the Self-Sacrificing Homophobe who is willing to hurt themselves in order to harm gay people. These creatures would harm the welfare of others and not care if the Economist’s paradise harmed them in turn.

    • Anonymous says:

      We’re talking about welfare, except when this somehow supposed to help the poor and disadvantaged (e.g. by ending world hunger,) when we are suddenly talking about utility.

    • anon says:

      I don’t think VNM says that the dead soldier is better off. VNM doesn’t purport to describe the preferences of the dead. VNM aligns well with my own intuitions about well being.

    • Douglas Knight says:

      To put it another way, are we aggregating only people’s selfish preferences, or also their other-regarding preferences? This is a difficult problem because we are trying to reconstruct (some) other-regarding preferences and we don’t want to double-count.

      Scott’s very first example “And suppose that half the one billion people in the First World are willing to make some minimal contribution to solving world hunger” shows that he is including other-regarding preferences. If solving world hunger helped the First World, all would all chip in. The point of the example is to assume that it is an act of charity and that half of the people just don’t care. The other half barely care, but with coordination, barely caring is enough.

  35. Arthur B. says:

    You’re still talking as if there were some latent, platonic morality to approximate. Moral intuitions aren’t a noisy representations of an underlying truth, they’re all there is to it. In particular, such intuitions are under no obligation to be consistent across human beings or even for a given person. Furthermore, understanding these inconsistencies does not yield a more pure more fundamental concept of morality, that’s just your intuition of morality as an objective thing talking.

    • Carinthium says:

      I mostly like this idea, but even if objective morality cannot exist, a coherent way to aggregate preferences within a single individual so that they act consistently is important; otherwise it is impossible to act in a consistent or rational manner.

      It’s possible that it is simply impossible to aggregate preferences so, but this site’s posts on the Glasgow Scale’s existence, comas, race, and culture suggest that a construct can be ‘real’ in a manner of speaking. With a coherent definition, perhaps morality could be.

      • Arthur B. says:

        You can’t “aggregate” preferences, but fortunately, you don’t really need to.

        If you’re programming a goal system for an all powerful AGI, the best you can do is your own CEV. You cannot implement anything that’s somehow “more moral” than your own moral code, that would be meaningless.

        If, as is more likely, many people are involved in deciding what the goal system should be, then it boils down to negotiation and compromise. The outcome will likely reflect the relative group power behind specific preferences.

        Perhaps it will be a morality that appeals to our sense of objectivity and fairness, because such a proposal would be likely to receive wide support, but perhaps not.

  36. So8res says:

    This resonated strongly with me.

    For what it’s worth, ideas very similar to these (contractualism, the extreme power that comes from coordination, the Invisible Nation built objectively from our values) were my first muses. They showed me a world in need of saving, and put me on a path that led to becoming a full-time FAI researcher at MIRI.

    I have many thoughts in this space that are easier to share now that you have written this post and the post on Moloch.

    Thank you.

  37. ADifferentAnonymous says:

    So even before I saw that Carl Shulman was here I was planning to open a thread discussing the linked post, since I'm a total utilitarian and tend towards biting the bullet on the repugnant conclusion.

    The post contains its own caveat:

    This reasoning would not apply if the way we cared about possible people was that we thought bringing people into being and achieving their goals was good, but not achieving the goals of those who do not exist.

    I think this applies to me, and I think it generally makes more sense. The slogan is “It’s not about the number of possible people, it’s about the possible number of people”.

    Consider the following thought experiment: You’re looking at an uninhabited planet. You have a magic box with two buttons, one blue and one green. The blue one would give the planet a blue sky and populate it with one billion blue-sky fanatics who want their sky to be blue whether or not they exist (and want normal human-like things apart from that). The green one would give it a green sky and populate the planet with one billion green-sky fanatics, who are morally equivalent to the blue-sky fanatics in your eyes except that they’ll be a bit less happy on average in the way you care about. Naturally, you prefer to press the blue button.

    But then suppose you notice a green dial next to the green button. It’s a random seed. You can input a value from one to one hundred and, depending on the value, the green button will create a different set of green-sky fanatics. All possible sets are equal on the criteria you care about, but they are distinct people. Suddenly there are 100 times as many possible green-sky fanatics as possible blue-sky fanatics. If you’re adding preferences that way, you’re now compelled to press the green button.

    If this doesn’t seem obviously wrong… I guess we have more to hammer out.

  38. Anon says:

    At the end of part II you say:
    “In a terrible state with high transaction costs that has been completely hijacked by self-interest, the cost of the tradeoff goes down and fewer of them are State Laws.” [bold added]

    Shouldn’t transaction costs go up?

  39. Rob Miles says:

    The idea of an invisible nation of good people can indeed be very moving. It reminds me very strongly of the ending verse of “I Vow to Thee My Country”:

    “And there’s another country, I’ve heard of long ago,
    Most dear to them that love her, most great to them that know;
    We may not count her armies, we may not see her King;
    Her fortress is a faithful heart, her pride is suffering;
    And soul by soul and silently her shining bounds increase,
    And her ways are ways of gentleness, and all her paths are peace”

  40. Sniffnoy says:

    Now I want to highlight a phrase I just used in this argument.

    “If bullying is negative-sum – that is, if it hurts the victim more than it helps the bully – then it’s an area ripe for Kaldor-Hicks improvement”

    This looks a lot like (naive) utilitarianism!

    Isn’t assuming that interpersonal utilities are comparable begging a lot of the question?

    For that matter, let me point out once again:

    Fourth, the classic problem of defining utility. If utility can be defined ordinally but not cardinally (ie you can declare that stubbing your toe is worse than a dust speck in the eye, but you can’t say something like it’s exactly 2.6 negative utilons) then utilitarianism becomes very hard.

    If you have ordinal utility over gambles, then, subject to some reasonable assumptions, you have cardinal utility. The problem here is the equivocation as to what "utility" means; utility in this sense is not utility in the utilitarian's sense, and they can't be compared interpersonally. So again, I think assuming that there exists such a thing as "utility in the utilitarian's sense", which is meaningful interpersonally, is begging a lot of the question.
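    As a sketch of how preferences over gambles pin down a cardinal scale, here is the standard-gamble construction behind the VNM theorem, run against a simulated agent with hidden utilities (the outcomes and numbers are illustrative, borrowing the toe-stub and dust-speck examples from the quoted passage):

    ```python
    # Sketch of the standard-gamble construction behind the VNM theorem: asking
    # only ordinal questions about gambles recovers a cardinal scale. The agent is
    # simulated with a hidden utility function; all numbers are illustrative.

    HIDDEN_U = {"dust speck": 0.0, "stubbed toe": -1.0, "lost wallet": -4.0}
    BEST, WORST = "dust speck", "lost wallet"

    def prefers_lottery(outcome: str, p_best: float) -> bool:
        """Does the agent prefer the lottery (BEST w.p. p_best, else WORST) to `outcome`?
        This is the only kind of question asked: an ordinal comparison of gambles."""
        lottery = p_best * HIDDEN_U[BEST] + (1 - p_best) * HIDDEN_U[WORST]
        return lottery > HIDDEN_U[outcome]

    def elicit_utility(outcome: str, steps: int = 40) -> float:
        """Binary-search for the indifference probability; on a scale where
        WORST = 0 and BEST = 1, that probability *is* the outcome's cardinal utility."""
        lo, hi = 0.0, 1.0
        for _ in range(steps):
            mid = (lo + hi) / 2
            if prefers_lottery(outcome, mid):
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    for o in HIDDEN_U:
        print(f"{o}: elicited u = {elicit_utility(o):.3f}")
    # This recovers the hidden cardinal structure up to a positive affine rescaling,
    # but it says nothing about comparing one person's utilons to another's.
    ```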

    • blacktrance says:

      If you’re transforming Kaldor-Hicks improvements into Pareto improvements, interpersonal comparison isn’t necessary. For example, if A gets $50 of value from being able to bully, and B gets -$100 from being bullied, if B gives A $60 to get him to stop bullying, they’re both better off, and no interpersonal comparison is necessary. (The Economists’ Paradise stipulates that blackmail doesn’t happen.)

      • Sniffnoy says:

        Oh, hm. That makes sense. That does indeed seem to route around the problem.

      • Said Achmiz says:

        How do you determine that you get $50 of value from bullying, or –$100 from being bullied?

        • blacktrance says:

          I would stop bullying if I were paid more than $50 to stop, and wouldn’t if I were paid less than $50. If someone were to give me the choice between not being bullied and being bullied but also being paid more than $100, I’d take the money, but if the pay was less than $100, I’d take not being bullied.

          edit: Apparently this doesn’t like “greater than” and “less than” signs.

  41. Vaniver says:

    Fourth, the classic problem of defining utility. If utility can be defined ordinally but not cardinally (ie you can declare that stubbing your toe is worse than a dust speck in the eye, but you can’t say something like it’s exactly 2.6 negative utilons) then utilitarianism becomes very hard. But contractualism doesn’t become any harder, except insofar as it’s harder to use utilitarianism as a heuristic for it.

    I don’t think this is quite right. Even if I have a cardinal utility function (that is, I can meaningfully compare differences in utility between options), that function is equivalent to an infinite set of cardinal utility functions. (Multiply the functions by two, and the decisions are all the same; add ten to the functions, and the decisions are all the same.) And so to compare my utility and your utility, we need to have some consequences to establish as the yardstick.

    Money is frequently suggested as the yardstick (see the companion thread), but I think this doesn’t really work and causes a number of problems.
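    A quick sketch of the affine-invariance point above; the gambles and utility numbers are arbitrary, and the only claim is that positive affine transforms never change the choice:

    ```python
    # Quick check: a VNM utility function is only pinned down up to a positive
    # affine transformation, and rescaling or shifting it never changes which
    # gamble the agent picks. Gambles and numbers are arbitrary.

    def expected_utility(gamble, u):
        """gamble: list of (probability, outcome) pairs; u: outcome -> utility."""
        return sum(p * u[o] for p, o in gamble)

    u = {"apple": 1.0, "banana": 3.0, "cherry": 10.0}
    gambles = {
        "safe": [(1.0, "banana")],
        "risky": [(0.7, "apple"), (0.3, "cherry")],
    }

    for a, b in [(1, 0), (2, 0), (1, 10), (5, -3)]:   # positive affine transforms
        u_prime = {k: a * v + b for k, v in u.items()}
        pick = max(gambles, key=lambda name: expected_utility(gambles[name], u_prime))
        print(f"u' = {a}*u + {b}: picks the {pick} gamble")
    # Every transform gives the same answer, which is why "my utility is twice
    # yours" is meaningless until the two parties agree on a common yardstick.
    ```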

  42. Anonymous says:

    “Nerds can just pay Jocks a certain amount of money not to be bullied. Any advantage or power whatsoever that Nerds have can be converted to money and used to prevent bullying.”

    Ahem: causing damage to enemies, gaining in-group benefits at their cost, and, to a lesser degree, getting practice at violence just is bullying.

    This isn’t “no bullying”, this is “(potentially) more efficient bullying”.

  43. mjgeddes says:

    Hi Scott,

    I totally cracked ethics/morality at the in-principle level several years ago. It's all described in my MCRT (Mathematico-Cognition Reality Theory). MCRT is basically a general-purpose ontology describing reality (a 'theory of everything' if you like) - it can also be considered a programming language for an artificial general intelligence. If you regard components of a super-intelligence as consisting of multiple 'agents' (specialized AIs in narrow domains) working together, MCRT is the 'interface' or integrator that churns away in the background and enables them all to communicate and act as an integrated entity (i.e., it's the magic sauce that makes an intelligence fully general).

    Anyway, I only need to glance into MCRT to see that most current ethical theories are far off the mark. I can tell you that neither utilitarianism nor contractualism is close. Most 'isms' are gross distortions of reality because they take some particular limited aspect of reality and try to define all of reality in terms of that limited narrative. Very risky to be doing that with ethical/moral systems!

    MCRT says that morality/ethics is really just a special-case of aesthetics – it’s just a special-case of aesthetic preferences, so the universal (platonic-level) consists of the general solution to the question of what ‘beauty’ is. Next level down consists of a notion of ‘liberty’ (so utilitarianism and contractualism are just calculational devices for trying to maximize liberty). Next level down after that consists of a notion of ‘perfection’ giving rise to various personal virtues (so virtue ethics is the calculus for self-improvement to this notion of perfection).

    • Carinthium says:

      Can you explain your reasoning behind your conclusions? What chain of reasoning did you use to get to MCRT in the first place?

      • mjgeddes says:

        My chain of reasoning would take millions of words to explain – so I can’t explain in a blog post!

        Basically, it was motivated by ideas for developing artificial general intelligence- I was attempting to generalize data and process modelling.

        Consider a ‘theory of everything’ in a physics sense – it would just be a single-level model of the physical world consisting of physical casual explanations.

        But now approach a ‘theory of everything’ from a different angle. Instead of attempting to model the physical world directly, ask how the world APPEARS to a mind attempting to model it. In other words, jump to the meta-level, and instead of directly modelling PHYSICS , instead model the COGNITIVE PROCESS of a mind modelling reality.

        This sort of ‘theory of everything’ is very different from the physics one. Firstly, the explanations are NOT causal ones – in terms the ontology, the explanations are based on IDENTITY not CAUSALITY. Secondly, the explanations are NOT single-level physics, they are MULTI-LEVEL, and can include NON-PHYSICAL (abstract) concepts. This is because, we are not modelling reality directly, we are modelling how reality APPEARS to a mind. We are using mathematics to model cognition modelling reality…hence the term MATHEMATICO-COGNITION.

        Anyway, carry this process through, and a grand pattern emerges whereby universal values and a universal ethical theory just 'emerge' in a natural way, analogous to the way that you could fill in the blanks in the periodic table to guess new elements in chemistry.

  44. Glen Raphael says:

    Have you read _The Machinery of Freedom_? Especially this chapter? http://www.daviddfriedman.com/Libertarian/Machinery_of_Freedom/MofF_Chapter_29.html

    In the economist’s paradise we can delegate the task of figuring out the best legal rules. Consider: I don’t know the exact best set of rules to make a thin-crust pizza, but I can look around the neighborhood and find a place that DOES seem to know the rules. If everybody individually patronizes the pizza place that best meets their own needs, the pizza places have an incentive to find efficient ways to make the best available pizza to meet the needs of their community. So we don’t all need individual expertise on that.

    Legal rules are the same thing. I pick the protection agency that seems the best value in terms of the issues I care most about. Everybody else does the same. If I come into conflict with somebody and we have the SAME protection firm the conflict is resolved according to the exact rules we both preferred. If I come into conflict with somebody who has a different protection firm, the two firms have a huge incentive to have pre-arranged some sort of arbitration procedure. Maybe Firm A gets its way on some issues and in return Firm B gets its way on others, or there are side payments or a coin flip or an impartial third-party judge.

    In that world if everybody pretty much wants the same laws that’s what they’ll get, just as if pretty much everybody likes pepperoni most pizza places will offer pepperoni as an option. If lots of people want different laws the equilibrium state will include a wide variety of firms using different kinds of competing legal rules. (as is actually the case today, albeit in a somewhat limited sense). My compression rule for generating good pizza is “Eat at Joe’s Pizza”, my compression rule for generating good laws is “Ask/Use Joe’s Protection Firm”. (This could work tolerably well in the real world as well as in Economist’s Paradise except that it has a rather large bootstrapping problem of first getting rid of the existing suboptimal monopoly firms or getting them to allow competition)

    • lmm says:

      There’s a large number of different nations with different laws already. You can always move to Canada. But in reality very few people do. Which suggests that the market in legal systems doesn’t actually work.

      • If you reduce the transaction costs of moving to Canada far enough, it just might work. However, I don't see much hope for this as long as people are physically constrained to be in one place: there's a hard floor on how cheap you can make it to move yourself, everything you own, and all of your relationships. Implementing Patchwork might help by increasing the number of available jurisdictions and decreasing the distance between them, but the number is still high enough that sorting will be very imperfect.

      • Zathille says:

        Certainly there is a high cost when it comes to moving from one place to the other, as well as an opportunity cost of moving away from services and infrastructure known and unknown. Still…

        How much migration is ‘too little’ or ‘too much’? If people don’t migrate much, it is possible that it is due to them being unable to move elsewhere, possibly due to the costs above, but it could also be due to them being relatively satisfied with where they are, or at least not inconvenienced enough to justify paying the opportunity costs. People having different preferences and countries having different cultural and legal frameworks which may appeal to different preferences may also be a possible explanation.

      • Glen Raphael says:

        Moving to another country is very costly, but switching providers while staying where you are could be very cheap. While living where I am I can change internet providers or pizza providers; if I could change law/protection providers in a similar way, they'd have an incentive to be customer-responsive that they really don't have now as geographic monopoly service providers.

        When AT&T had a national monopoly the joke slogan was “We don’t care; we don’t have to.” Our governments at all levels are in the same situation.

    • MugaSofer says:

      I honestly don’t see how letting Moloch eat the all the people we pay to resolve coordination problems could possibly be a good thing.

  45. Peter says:

    Historical note: people talk about Rawls's Veil of Ignorance, but I'm led to believe it was Harsanyi who used it to derive a form of utilitarianism. Rawls then went and looked for variants that wouldn't entail utilitarianism; by carefully tweaking the 'thickness' of the veil he was able to create a scenario where he could convince himself and many others (not including me) that contractors would go for something else. The big idea was to exclude information about probabilities, so that people couldn't maximise expected utility, and had to use… well, Rawls's contract is a hodgepodge of different things and you're not allowed to call the Difference Principle "maximin"… can you tell that I only got a third of the way through A Theory of Justice before giving up?

    • Douglas Knight says:

      The importance of the Veil of Ignorance is the vivid imagery, which is due to Rawls.

      I have always been told that Rawls just made a math error, rather than trying really hard to rig the game in his favor. I’ve never met an advocate of Rawls who was aware that there was another game in town. Rawls mentions Harsanyi only in a couple of footnotes, which suggests that they were tacked on at the last minute, rather than being the origin of his argument. In Section 28, he does ban probability in the name of risk aversion, but this appears to me to be written in response to other mathematical utilitarians, not Harsanyi.

      • Peter says:

        Interesting – I’ve racked my brain to see where I got that impression (of rigging), and it seems to be from Parfit’s On What Matters (which I did reach the end of) – Parfit certainly says that Rawls was looking for an alternative to utilitarianism; possibly I’m overinterpreting in saying that Rawls was fishing for it. A Theory of Justice reads like a stitch-up to me, but maybe that’s me not being careful enough.

        • Douglas Knight says:

          Do not attribute to malice what can be explained by incompetence. Yes, Rawls has an answer and produced a theory to justify it, but the main issue with utilitarianism is that he didn’t understand it, not (just) that he didn’t like the results.

          Disclaimer: I have not read a word of Rawls beyond what I said above, which I read in response to your comment.

  46. Handle says:

    Why stop short at ignorance of position?

    Consider what happens if the veil of ignorance extends to being ignorant of one’s future preferences, or even the probability distribution function of one’s future preferences.

    Then one cannot evaluate whether or not any potential bargain or contract is in one’s potential interest or not. The reason is that conflicts of interest, values, and preferences will inevitably arise, and there is no way to decide between them if you cannot know how likely it is that you are on either side of the conflict.

    • NoahLuck says:

      Building on that thought, the “Invisible Nation” including possible people could be modeled something like the following.

      1.) Start with an AIXI definition. Informally, the utility function we choose for it essentially states that it has whatever values a God chosen by the Possible Souls would have. Or to state it a little bit more formally…
      2.) Every “possible soul” is a utility function encoded as a program that takes one argument, the program describing the world, and returns a cardinal utility in the range [0,1].
      3.) These utility functions are weighted by the Solomonoff prior*. The AIXI’s utility function is the weighted sum of these, and for the given weighting the winning result is the same as the ideal contract for the Economists’ Paradise in which wealth is distributed among the agents according to the given weighting. (A toy code sketch of this weighted sum follows the footnote below.)
      4.) Since that utility function is just as incomputable as the rest of AIXI, use the whole definition as the “god’s-eye-view”, or “view from nowhere”. What the real AI would do is to discover practically computable algorithms which it can prove maximinize the AIXI’s utility under the current circumstances, and then use whichever such approximation to AIXI it has proven least awful under the current circumstances. (So as the AI grows more powerful, it should also start to make more-ideal decisions.)

      * Presumably this would not be human-Friendly. For a Friendly AI, you’d need some much better-engineered weighting that takes into account what humans are and what we want.
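
      As a purely illustrative sketch of steps 2–3 (not AIXI itself, which is uncomputable): the soul programs below, the candidate worlds, and the length-based stand-in for the Solomonoff prior are all made up, and worlds are represented as plain data rather than programs.

          # Toy sketch only: "souls" here are hypothetical Python functions mapping a
          # world description to a cardinal utility in [0, 1], and 2^(-source length)
          # is a crude stand-in for the Solomonoff prior over programs.
          import inspect

          def soul_equality(world):
              # Hypothetical soul that prefers wealth to be spread evenly.
              w = world["wealth"]
              return 1.0 - (max(w) - min(w)) / (sum(w) + 1e-9)

          def soul_total_wealth(world):
              # Hypothetical soul that prefers more total wealth, capped at 1.
              return min(1.0, sum(world["wealth"]) / 100.0)

          souls = [soul_equality, soul_total_wealth]
          raw = [2.0 ** (-len(inspect.getsource(s))) for s in souls]
          weights = [r / sum(raw) for r in raw]  # normalised prior weights

          def aggregate_utility(world):
              # Step 3: the collective utility is the prior-weighted sum of soul utilities.
              return sum(w * s(world) for w, s in zip(weights, souls))

          # Choose whichever candidate world the weighted collective prefers.
          candidates = [{"wealth": [10, 10, 10]}, {"wealth": [40, 5, 5]}]
          best = max(candidates, key=aggregate_utility)
          print(best, round(aggregate_utility(best), 3))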

      Yeah yeah, I know I just suggested a path to FAI despite totally lacking sufficient expertise. Don’t hate me. I’m definitely not saying anyone should be so crazy as to actually go implement this.

    • Carinthium says:

      My view is that the Veil of Ignorance is very difficult to justify in the first place as a philosophical premise. That being said, I have another nitpick.

      If there is a conflict between what I want now and what I want in the future, then the way I currently hope for it to be resolved is what I want now, not what I want then. The only reason I might not want that is that I care about my future self’s happiness and so change my mind.

      • Handle says:

        The conflict I’m mentioning is interpersonal and simultaneous, not personal and time-inconsistent. You can’t know what kind of contracts seem fair in inherent conflict-of-values or conflict-of-preferences scenarios when you don’t know what your own preferences might be.

  47. Paul Torek says:

    might help esolve some of the standard paradoxes

    Is “esolve” intentionally ambiguous between “dissolve” and “resolve”? Cause that would be cool. I suggest spelling it “essolve”.

    I see your Carl Shulman on Repugnant Conclusion, and raise you a Michael Huemer (pdf).

    I’m extremely sympathetic to the overall point. And the bit about moral philosophy not needing to crisply decide every conceivable alternative is positively brilliant. But somewhere I got the feeling that the main obstacle to better contractualism in your view is ignorance. We don’t know which rules are Pareto efficient and realistically workable, for example. But in my view the biggest obstacle is that moral motivation is only a subset of motivation, or in a few cases no part at all of an individual’s motivation. Moral motivation sits in a pretty high-leverage location, being so meta, but that only gets you so far. The solution isn’t solving game theory. But a step toward it is re-conceiving game theory.

    Nothing in utility theory specifies that another’s utility can’t be one of the deepest variables in one’s utility function. And nothing in game theory specifies that the history and pattern of interaction with a game partner can’t itself be part of one’s utility. And some of the sharper theorists note these possibilities explicitly. But in run of the mill discussions, these points are almost always forgotten. We need to fix that. Because You’re off the edge of the map, mate. Here there be morality.
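
    To make the first of those points concrete, here is a minimal sketch, assuming textbook Prisoner’s Dilemma material payoffs and an arbitrary weight alpha that each player puts on the other player’s material payoff: once another’s payoff enters your utility, mutual cooperation can become an equilibrium.

        # Toy sketch: a standard Prisoner's Dilemma in material payoffs, where each
        # player's utility also puts weight ALPHA on the other player's material payoff.
        # The payoff numbers are the usual textbook values; ALPHA = 0.7 is arbitrary.
        MATERIAL = {  # (my move, their move) -> (my payoff, their payoff)
            ("C", "C"): (3, 3),
            ("C", "D"): (0, 5),
            ("D", "C"): (5, 0),
            ("D", "D"): (1, 1),
        }
        ALPHA = 0.7

        def utility(mine, theirs):
            m_me, m_them = MATERIAL[(mine, theirs)]
            return m_me + ALPHA * m_them  # other-regarding utility

        # Check best responses: with this alpha, cooperating beats defecting
        # whatever the other player does, so (C, C) is an equilibrium.
        for theirs in ("C", "D"):
            best = max(("C", "D"), key=lambda mine: utility(mine, theirs))
            print(f"if they play {theirs}, my best response is {best}")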

  48. bewreyen says:

    I had successfully avoided learning the details of Mijic’s scenario until today. Since it seems an incidental detail of your post, I would very much appreciate it if you edited it out, to spare others.

  49. Douglas Knight says:

    I did not mean to make such a strong claim (though Blake Riley did). Even if it is true that Contractualism becomes Utilitarianism in the limit of weird contracts and meta-contracts and the limit of a strong Veil, it is still interesting to ask what can be achieved with weak contracts and what cannot, to trace the convergence. And many of your examples were of this form. My comment was that many of your other examples were not of this form, but seemed to be disagreements not between a Contractualist and a Utilitarian, but disagreements about how to make human preferences coherent.

    • Douglas Knight says:

      This refers to my comment here and was supposed to be a response to Scott’s comment.

      Scott, you seem to have changed something recently, so that it is now hard for me to reply to comments and my comments often go to the wrong place. I guess I’ll give up on conversation. Probably for the best.

  50. Handle says:

    It seems to me that this model still interprets ‘souls’ as having a far too endogenous, rigid, and immutably ‘inborn’ set of preferences.

    In fact, it is a common observation that people are heavily influenced in their attitudes and inclinations by their exposure to cultural messages and social pressures, and that these learned behaviors, desires, and opinions change over time for many legitimate reasons.

    There is thus no way to assess this whole “world of all possible contracts” without knowing what those preferences might be, and their degree of time inconsistency.

    In fact, the only possible solutions to that problem involve a system that constrains the group or kinds of souls to some manageable or computable set.

    And if you do that, there are all kinds of potentially ‘utopian’ possibilities depending on which particular set of preference-souls you are dealing with, the actual preference selection being purely arbitrary.

    So, really, the ultimate solution to the coordination problem involves optimally constraining this set and agreeing on the set of preferences for everyone too. But people just can’t “agree” to have certain preferences.

    The implied solution then would be to establish a social system that attempts to constrain and harmonize everyone’s set of preferences to avoid computably-intractable conflicts of interest, and thus attempts to brainwash or indoctrinate everyone into the same general moral belief system.

    Again, it really doesn’t matter what the content of that system is, so long as it is not too incompatible with human instincts and works effectively as a generally shared organizing principle. There are many possible ‘religions’ or ‘ideologies’ of this type, with all kinds of variation, and they can all be ‘true enough’, at least with regard to their instrumental capability to constrain and harmonize preferences and so resolve the potentially irreconcilable preference-conflict problem.

    Huh – indoctrinating everyone from birth in order to get them to try to believe the same things about morality. Why does that sound so familiar?

    –ADDED–

    My apologies, this was supposed to be a reply to NoahLuck above, for some reason the reply feature isn’t working properly on my system.

    • Carinthium says:

      Query. How would you establish your system, as opposed to the contractualist one, from first principles? Your implied moral system needs to have an answer to the amoralist challenge.

      • Handle says:

        Does it? Why not have a large set of equivalent alternatives in multiple equilibria?

        • Carinthium says:

          I’m not 100% sure what you mean.

          As far as I can tell, the first possibility is that you mean that there can be a variety of possible alternatives to solve the Amoralist Challenge.

          If so, my answer is that although ordinary contractualism has an answer that makes morality an extension of achieving one’s own ends, your system does not. You have no way to justify that social harmony, for example, is a good thing.

          ———————–

          Alternately, you mean that several moral systems can all ‘legitimately’ exist somehow, despite having no way of justifying themselves.

          If this is so, then there is no rational basis to such moral systems. A rational person can legitimately ask ‘Why pay any attention to this moral system? Why not just do what I want, paying attention to the rules when I have to but subverting them when I can?’

          ———-

          Or maybe I’m wrong about what you meant, in which case please explain. I’m a Melbourne University student who has done Philosophy subjects, so with effort I should be able to get it.

        • Handle says:

          Of course there is no way to bridge the gap across Hume’s guillotine from first principles. That is a well-established result and leads directly to meta-ethical moral nihilism, which used to be a much more popular position amongst public intellectuals but, because it is kind of bleak and depressing, causes a gag reflex that convinces people they should take yet another shot at squaring that circle.

          The criticism I am making takes Alexander’s arguments at face value and says that, even in that case, we end up stuck in irresolvable (or ‘nonanalytical’) puzzles. And this happens even with the strong assumption of perfect coordination, once the ‘veil of ignorance’ is expanded to include the subjective preferences themselves, which are indispensable for evaluating the desirability of potential contracts or meta-contracts of whatever order.

          And that, in order to manage those puzzles, you have to make additional strong assumptions about the nature of those potential preference conflicts. One assumption you could make is that everyone agrees upon the highest-order meta-contract in terms of the value system to which everyone should adhere in order to eliminate or minimize the possibility of these otherwise irresolvable conflicts.

          But this assumption does not rely upon the precise content of that value-harmonization (i.e. ‘ideological’) system, merely that there be one, with perhaps a very large – or even arbitrary – set of possibilities.

          Now, as a matter of speculation in evolutionary social psychology, it seems plausible that human beings would have a very strong instinct for moral homogeneity and an ideological system which provides a social organization principle, and would struggle to impose it and persecute heretics in order to achieve that single value system – whatever it happens to be. That harmonization comes with costs and benefits, but it is not obvious that it could not be an optimal meta-contract under the logic of the main post. And when we look at history, we see quite a bit of behavior that looks like it is driven by this kind of impulse.

        • Carinthium says:

          Got it. No disagreements with you then. Apologies for having to make you simplify your thoughts.

    • Lesser Bull says:

      It seems that the real solution is something like the Crystal of Knowledge: the rules that people agree to once they have experienced all their selves throughout time.

      • Handle says:

        My point was that there is no reason not to extend the veil such that you are also ignorant of what ‘all your selves’ are like, and what they would want and prefer. But if you do that, contractualism breaks down, except as a kind of circular self-licking ice cream cone. “You should want the things that, when translated into social rules, will get you what you want.”

        But people don’t have the ability to contract away their preferences, except maybe in some science fiction world, or maybe in a kind of meta-contract in which everyone agrees to be as brainwashed as possible from birth to want things that solve conflict problems – which only makes sense if they are all brainwashed to want the same thing and have the same values. One of those things would be “No heretics!” which is a way to solve the holdout problem, and, again, crops up in the history of human social behavior over and over again.

        The veil of ignorance says you don’t know anything about which position you’ll occupy, but that you do know yourself (or ‘selves’) and what you would prefer the outcome to be in each situation, and that you have some idea of (or can arbitrarily assign) the chances of being in each position, so that you can choose the rule which generates the optimal expected outcome.

        But if you don’t assume everyone will have the same values or preferences, and don’t know which values or preferences you are going to have, then you simply cannot perform the analysis of what kinds of contracts would produce optimal outcomes at all.

        • blacktrance says:

          You may not know the specific preferences that you’ll have when you’re born, but you know the probabilities of having certain preferences (e.g. the probability of being born as someone who likes being tortured is negligible), and that regardless of who you are when you’re born, you’ll want your own preferences (whatever they are) to be fulfilled, and thus can still endorse rules that give you the greatest expected utility.

        • Handle says:

          @blacktrance:

          “you know the probabilities of having certain preferences”

          A. Even assuming you could know such a thing, how would you come to know that? Has the distribution of human variety in preferences remained static throughout time? Do we have some kind of time-transcendent model of what that distribution will look like when summed over all the humans that will ever exist? What is the probability you’ll be a racist? Asking questions like that doesn’t even seem to be a meaningful inquiry, but even if it were, answering it seems completely implausible.

          B. But anyway, you’re missing the point. The point is that the posited veil of ignorance only extends as far as not knowing in which social position or on which side of the deal you’ll find yourself, but it stops short of also sweeping preferences under the veil. And there’s no good reason why it should stop there.

          Trying to escape the condition by saying there might be a way to discover an ‘expected preference’ that is the weighted mean of some probability distribution is like trying to escape the veil of ignorance by saying you might discover some expected social position and structure all rules to mostly be written to benefit people close to that position, instead of following universal rules or being a Kantian categorical imperative.

          For example, you might create a rule in which people within 1 SD of the mean (call them “middle class”) have ‘privilege’ and thus almost always win in disputes with lower- or higher-class individuals, but the typical ‘fair’ rules apply in intra-class disputes. That’s possible, of course, and may even maximize (the expected present value of) social welfare in certain scenarios, but it’s not what Rawls was doing.

        • blacktrance says:

          A. Me knowing the probabilities of certain preferences is part of the scenario as it’s constructed. I know everything about every possible world that has me in it, except for who I’d actually be. Of course, this is impossible in practice, but this is a theoretical scenario.

          B. Having an “expected preference”, “expected social position”, or expected utility in general, isn’t trying to escape the Veil, it’s the point of the Veil. If from behind the Veil I know that rules that favor the middle class maximize my expected utility, then I should endorse those rules. This may not be the same conclusion Rawls reached, but that by itself is not a reason to reject this approach – he could’ve been wrong about what the Veil implies.
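
          To make that concrete, here is a minimal numeric sketch, with entirely made-up probabilities and payoffs and hypothetical rule names: from behind the Veil you weight each possible identity by its probability and endorse whichever candidate rule set maximizes expected utility.

              # Toy sketch only: made-up birth probabilities and made-up payoffs
              # under two hypothetical candidate rule sets.
              positions = {"poor": 0.30, "middle": 0.55, "rich": 0.15}

              payoffs = {
                  "impartial_rules":    {"poor": 0.50, "middle": 0.60, "rich": 0.70},
                  "favor_middle_class": {"poor": 0.35, "middle": 0.75, "rich": 0.55},
              }

              def expected_utility(rule):
                  # Weight each possible identity by the chance of its being yours.
                  return sum(p * payoffs[rule][pos] for pos, p in positions.items())

              for rule in payoffs:
                  print(rule, round(expected_utility(rule), 3))
              print("endorsed:", max(payoffs, key=expected_utility))

          With these particular made-up numbers the middle-class-favoring rules come out ahead, which is exactly the kind of non-Rawlsian conclusion described above.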

  51. Pingback: Somewhere else, part 158 | Freakonometrics