Cooperation Un-Veiled

Related to: The Invisible Nation – Reconciling Utilitarianism And Contractualism

Contractualism tries to derive morality from an agreement that even selfish agents would willingly sign if they knew about it. In theory, you would gain from such an agreement, since the costs of not being able to behave unethically towards others would be at least balanced by the benefits of other people not behaving unethically to you.

Such attempts crash into the brick wall that not everybody would, in fact, sign such an agreement. For example, the King might reasonably argue that he is able to reap the benefits of oppressing lots of people, but almost nobody can oppress him. To give another example, rich people might feel no need to give to charity, since they don’t need anyone else to give charity to them.

One classic solution to the problem is Rawls’ “veil of ignorance”. Rawls asks: what if we have to make the agreement before we know who exactly we’re going to be? The future King, not knowing he will be born a King, will agree oppression is bad along with everyone else; the future rich, not knowing they will be rich, will want to create a strong social safety net and tradition of charitable giving.

The great thing about this thought experiment is that it works pretty well to get us what we want – assuming a veil at just the right spot, we end up with something like utilitarianism being in everyone’s best interests.

The bad thing about the thought experiment is that there is not, in fact, a veil of ignorance. There’s just a King, who when asked will tell you he knows perfectly well he’s a King and would like to keep on oppressing people. So what can we do with the universe we actually have?

Here’s a model I have been playing around with recently.

Suppose there is a society of one hundred men, conveniently named Mr. 1, Mr. 2, and so on to Mr. 100. Higher-numbered people are stronger than lower-numbered people, such that a higher-numbered person can always win fights against a lower-numbered person at no danger to themselves. Further, suppose this society has a god who enforces all oaths and agreements, but who otherwise stays out of the picture.

(in order to avoid finicky math distinctions between choosing with replacement and choosing without replacement, it might help to think of these as arbitrarily large clans of people with specified strengths instead. Whatever.)

This society is marked by interactions where two randomly selected people meet each other. Sometimes the people nod at each other and pass each other by. Other times, the stronger of the two people overpowers the weaker one and oppresses them in some way, where an oppression is an interaction where the stronger person gains and the weaker person loses some utility.

One person proposes a rule: “no oppressing anyone else.” How much support does the rule get?

Well, that depends on the character of the oppression. Some oppression can give the oppressor exactly as much utility as it costs the victim – for example, I steal $10 from you, making me $10 richer and you $10 poorer. Other oppression can cost the victim more than it benefits the oppressor – for example, I steal your wallet, which gives me only whatever small change you have in there, but you have to replace all your credit cards and licenses and so on. Still other oppression could help the oppressor more than it hurts the victim – for example, starving Jean Valjean steals a loaf of bread from a rich man.

So let’s be more specific. One person proposes a rule: “No zero-sum oppression.” Who agrees?

Naively – and I’ll challenge this later – Mr. 1 through Mr. 50 agree, but Mr. 51 through Mr. 100 refuse. Analyzing Mr. 25’s thought process should explain: “In 25% of interactions, I will be the oppressor. In 75%, I will be oppressed. Assuming one of my utils for one of their utils, that means in a hundred interactions I will on average lose fifty utils. Therefore, I should ban this type of interaction.”

Mr. 99, on the other hand, likes this kind of oppression. He thinks “In 99% of interactions, I will gain. In 1%, I will lose. So in a hundred zero-sum interactions, I will on average gain 98 utils. Therefore, I like this type of interaction.”
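
(For anyone who wants to poke at the arithmetic, here is a rough Python sketch of it. Nothing in it is load-bearing: the function name is made up, and it just encodes the approximation used above – Mr. k is the stronger party in roughly k% of interactions – with whatever gain and loss numbers you plug in.)

    # Expected utility change per interaction for Mr. k if a given kind of
    # oppression is allowed, using the approximation that Mr. k is the
    # stronger party in k% of interactions.
    def expected_gain(k, oppressor_gain=1.0, victim_loss=1.0, population=100):
        p_oppressor = k / population
        p_victim = 1 - p_oppressor
        return p_oppressor * oppressor_gain - p_victim * victim_loss

    # Zero-sum oppression (steal $10): Mr. 25 loses, Mr. 99 gains.
    print(round(expected_gain(25) * 100))   # -50 utils per hundred interactions
    print(round(expected_gain(99) * 100))   # +98 utils per hundred interactions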

But Mr. 99 might have a different rule he would agree to. He might say “No oppression so bad that it hurts the victim >100x as much as it helps the oppressor.”

It’s easy to think of examples of this kind of oppression. For example, if I’m having a really bad day and just want to beat someone up, breaking your ribs might make me feel a little bit better, but probably not even one percent as much as it makes you feel worse.

Mr. 99 thinks “In 99% of interactions I will be the oppressor; in 1% I will be the victim. Each time I am the oppressor, I gain one util; each time I am the victim, I lose 100. Therefore, in 100 interactions I will lose on average one util. Therefore, I don’t like this kind of oppression.”

And it’s easy to see that Mr. 1 through Mr. 98 will agree with him and be able to sign this contract.

The logical conclusion is a hierarchy of agreements. Mr. 1 signs an agreement banning all oppression, Mr. 1 and 2 together sign an agreement banning oppression that helps the oppressor less than 50 times as much as it hurts the victim, Mr. 1 and 2 and 3 together sign an agreement banning oppression that helps the oppressor less than 33 times as much as it hurts the victim, and so on all the way to everyone except Mr. 100 signing an agreement banning oppression that helps the oppressor less than 1/100 as much as it hurts the victim. Mr. 100 signs no agreements – why would he?
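
(Same sketch, one step further: the break-even gain-to-loss ratio for the marginal signer Mr. k falls straight out of the percentages, and the 50x, 33x, and 1/100 figures above are just round-number versions of what it spits out.)

    # Break-even (oppressor gain)/(victim loss) ratio for Mr. k: he is
    # indifferent about banning oppression at exactly this ratio, and wants
    # anything below it banned.
    def break_even_ratio(k, population=100):
        p_oppressor = k / population
        p_victim = 1 - p_oppressor
        return p_victim / p_oppressor

    for k in (1, 2, 3, 50, 99):
        print(k, round(break_even_ratio(k), 2))
    # 1  99.0   -> Mr. 1 would ban essentially everything
    # 2  49.0   -> roughly the "50 times" agreement
    # 3  32.33  -> roughly the "33 times" agreement
    # 50 1.0    -> the zero-sum rule is Mr. 50's break-even point
    # 99 0.01   -> roughly the 1:100 rib-breaking rule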

Before I explain why this doesn’t work, I want to think about what it means in real world terms.

It would replace the one-size-fits-all principle of utilitarianism with the idea of power-based utility ratios. This seems to kind of map on to real life experience. For example, the King may order his servant to spend hours getting the floor polished absolutely spotlessly. Having a perfectly spotless floor (rather than a very clean floor with exactly one spot) gives the King only a tiny utility gain, but may require many more hours of the servant’s time and labor. That the King can command a large amount of the servant’s utility to improve his own utility only a tiny bit seems a lot like what it means to say there’s a power differential between the King and the servant. If the servant tried to reduce the King’s utility by a large amount in order to improve his own utility by a tiny amount, he would be in big trouble.

I notice this in my own life as well. Last year I worked under a doctor who was consistently late. The way it would work was that he would say “I have a meeting at 8 AM every morning, so you should be in by 9 so we can start work together.” Then his meeting would invariably run to 10, and I would be left sitting around for an hour doing nothing. It might seem that the smart choice would have been for me to just sleep late and arrive at 10 anyway, but suppose one day a week, my boss’ meeting finishes exactly on time. Then if I’m not there, he has to wait for me, and he considers this unacceptable. So if my boss and I value an hour of our time the same amount, it would seem this arrangement implies my boss’ utility is worth at least seven times as much as my own.

There are some features of this power-ratio utilitarianism that are repugnant: the rich seem to be held to a very low standard, whereas the poorer you are, the more exacting a moral standard you’ve got to live up to. That seems like if anything the opposite of how it should be. But other features actually seem better than our current morality – if giving charity to the poor improves their utility 100x as much as it decreases yours, then the 1% have to donate, probably quite a lot.

Enough of that. The reason this doesn’t work is simple. Mr. 1 through Mr. 50 would want to sign the zero-sum agreement. But if he knows the rules of the thought experiment, Mr. 50 can predict that Mr. 51 through Mr. 100 won’t sign the agreement. None of the people who could conceivably oppress him will consider themselves bound by the rule. So he wouldn’t be trading away his right to oppress others in exchange for others giving up their right to oppress him; he would be giving up his right to oppress others while still expecting exactly the same amount of oppression as before. Therefore, he does not sign.

But now Mr. 49 is in the same position. He knows nobody stronger than he is, including Mr. 50, will sign the agreement. Thus the agreement is useless to him.

And so on by induction all the way to Mr. 2 refusing to sign (it doesn’t matter much for poor Mr. 1 either way).

This produces some weird results. Mr. 99 is no longer willing to accept his “No breaking people’s ribs just to let out some stress” agreement that banned utility exchanges worse than 1:100, because the only person whose restraint he cares about, Mr. 100, isn’t going to sign. That means Mr. 98 won’t sign, Mr. 97 won’t sign, and again, so on all the way down to Mr. 2.
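
(The unraveling is easy to mechanize, if you would rather watch it happen than rerun the argument in your head. The rule in this sketch – a deliberate simplification – is: keep your signature only if at least one person stronger than you is still signing, since otherwise the agreement buys you no protection; repeat until nothing changes. Mr. 1, who can’t oppress anybody anyway, is genuinely indifferent, per the aside above.)

    # Who is left once everyone reasons "I only benefit from the restraint of
    # people stronger than me; if none of them sign, neither will I"?
    def stable_signers(initial_signers):
        signers = set(initial_signers)
        changed = True
        while changed:
            changed = False
            for k in sorted(signers, reverse=True):
                if not any(j > k for j in signers):
                    signers.discard(k)   # strongest remaining signer drops out
                    changed = True
        return signers

    print(stable_signers(range(1, 51)))    # zero-sum coalition: set(), nobody left
    print(stable_signers(range(1, 100)))   # rib-breaking coalition: set(), same collapse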

In other words, even the second weakest person in a society has no interest in signing an agreement not to punch people weaker than himself when he’s having a bad day.

But this is a stupid result!

It reminds me of a problem noticed in the Iterated Prisoner’s Dilemma. Conventional wisdom says the best thing to do is to cooperate on a tit-for-tat basis – that is, we both keep cooperating, because if we don’t, the other person will punish us next turn by defecting.

But it has been pointed out there’s a flaw here. Suppose we are iterating for one hundred games. On Turn 100, you might as well defect, because there’s no way your opponent can punish you later. But that means both sides should always play (D,D) on Turn 100. But since you know on Turn 99 that your opponent must defect next turn, they can’t punish you any worse if you defect now. So both sides should always play (D,D) on turn 99. And so on by induction to everyone defecting the entire game. I don’t know of any good way to solve this problem, although it often doesn’t turn up in the real world because no one knows exactly how many interactions they will have with another person. Which suggests one possible solution to the original problem is for nobody to know the exact number of people.
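
(A way to watch this unraveling numerically, rather than just assert it: restrict attention to strategies of the form “play tit-for-tat, but defect unconditionally in the last k rounds.” Against an opponent with a given k, the best reply within that family is k+1 – defect one round earlier and catch them cooperating once – and iterating best replies walks all the way to defecting from round one. The payoff numbers below are the usual illustrative ones, and this is only a sketch over that restricted family, not a full solution of the game.)

    # Iterated best replies among "tit-for-tat, but defect in the last k rounds"
    # strategies, for a 100-round game with the usual payoffs T > R > P > S.
    T, R, P, S = 5, 3, 1, 0
    PAYOFF = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

    def my_total(my_k, their_k, rounds=100):
        my_move = their_move = "C"        # stands in for the "previous move" before round 1
        total = 0
        for r in range(1, rounds + 1):
            my_prev, their_prev = my_move, their_move
            my_move = "D" if r > rounds - my_k else their_prev
            their_move = "D" if r > rounds - their_k else my_prev
            total += PAYOFF[(my_move, their_move)]
        return total

    k = 0                                  # start from pure tit-for-tat
    while k < 100:
        best = max(range(101), key=lambda my_k: my_total(my_k, k))
        if best == k:
            break
        k = best                           # the best reply is always k + 1
    print(k)                               # 100: defect from the very first round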

(now I want to write a science fiction novel about a planet full of aliens who are perfect game theorists, but who always behave kindly and respectfully to one another. Then some idiot performs a census, and the whole place collapses into apocalyptic total war.)

It seems like there ought to be some kind of superrational basis on which the two sides in the iterated-100 prisoners dilemma can cooperate. And along the same lines there ought to be some kind of superrational basis upon which everyone in the society of 100 people should stick to some basic utility-ratio principles. But I’m not sure what it would be.

Some other variations of this problem might be more interesting, but I don’t think I’ve got the math ability or the time to think about them as carefully as they deserve:

1. What if all fights contained a random element? For example, suppose your chance of overpowering someone else (and thus being able to oppress them) was your_strength/(your_strength + opponent_strength)? In societies of this type, agreements to ban strongly negative-sum interactions would be more salient for everyone, since even Mr. 100 would have some chance of being beaten in a typical interaction. (A quick numerical check of this variation appears after this list.)

2. How about a meta-agreement, in which people say “I agree to sign the agreements requested by people weaker than myself if and only if the people above me agree to sign the agreements benefitting people weaker than themselves?” Such an agreement wouldn’t make sense for Mr. 100, and so Mr. 99 would not sign, and so on down, but is there a superrational solution?

3. What if one type of agreement people were allowed to make was a coalition to gang up against opponents? This seems one of the most important real-world considerations – one of the things that does make Kings behave at least somewhat morally is the knowledge that they will be overthrown if they do not; likewise, some countries implement social welfare systems with the explicit goal of decreasing the poor’s incentive to overthrow the rich (I think Bismarck tried this). On the other hand, it also gives the powerful an incentive to band together to better oppress the weak. I’m pretty sure the effects of this would be impossible to really calculate, but might we lump them together into saying “This is so nondeterministic that no one can ever be sure they’ll end up in the winning as opposed to the losing coalition, therefore they are less certain of victory, therefore they should be more likely to agree to rules against oppression”?
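
(The first variation is easy to check numerically. In this sketch, with the win-probability rule as stated and gain/loss numbers picked purely for illustration, even Mr. 100 comes out against the one-gained-for-a-hundred-lost kind of oppression once he can lose fights, though he still likes the zero-sum kind.)

    # Variation 1: the stronger party no longer always wins; you beat your
    # opponent with probability strength / (strength + opponent_strength).
    def expected_gain_random_fights(k, oppressor_gain=1.0, victim_loss=1.0,
                                    population=100):
        others = [j for j in range(1, population + 1) if j != k]
        total = 0.0
        for j in others:
            p_win = k / (k + j)
            total += p_win * oppressor_gain - (1 - p_win) * victim_loss
        return total / len(others)

    print(expected_gain_random_fights(100, 1, 100))   # roughly -30: even Mr. 100 wants this banned
    print(expected_gain_random_fights(100, 1, 1))     # roughly +0.39: but he keeps zero-sum oppression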


246 Responses to Cooperation Un-Veiled

  1. Ialdabaoth says:

    I notice this in my own life as well. Last year I worked under a doctor who was consistently late. The way it would work was that he would say “I have a meeting at 8 AM every morning, so you should be in by 9 so we can start work together.” Then his meeting would invariably run to 10, and I would be left sitting around for an hour doing nothing. It might seem that the smart choice would have been for me to just sleep late and arrive at 10 anyway, but suppose one day a week, my boss’ meeting finishes exactly on time. Then if I’m not there, he has to wait for me, and he considers this unacceptable. So if my boss and I value an hour of our time the same amount, it would seem this arrangement implies my boss’ utility is worth at least seven times as much as my own.

    When I was much younger (like, age 8-13), this sort of inequity would send me into paroxysms, and I would regularly point it out to my superiors.

    I learned not to do so, but I never quite got over the resentment.

    An interesting question: how do you anticipate you would act in this specific scenario, when you’re the one with the power?

    • Scott Alexander says:

      Possibly the same, depending on how I get power. It might be I have power because my time is worth ten times as much (in terms of monetary compensation) as other people’s (and we assume this difference to be a fair reflection of the value of our work). In that case, it seems fairest to whatever institution is paying us, or to whoever is gaining from our labor, that I make the other person wait (thus wasting on average seven hours of their time) instead of making me wait (thus wasting one hour of my time).

      There’s also an altruism angle. In a lot of these cases, these are doctors who have volunteered time out of their busy and lucrative schedule to teach others. I don’t want to disincentivize that by making it inconvenient for them.

      Overall I think I’d rather try to arrange things so the issue doesn’t come up, like by not scheduling the thing where I have to work with another person directly after my meeting.

    • Zorgon says:

      For my part (having had that power previously) I wouldn’t be so ridiculously insensitive and stupid as to connect my employee’s commencement of work to my own meeting times. I can’t see any good reason to do that, when a set commencement would mean my employee could embark on other work while waiting for me to finish my meeting, rather than sitting about doing nothing, and thus would feel valued.

      I mean, it’s almost like the doctor in Scott’s example wanted Scott to hate him. One possible explanation is that he was deliberately creating that situation to reinforce his superiority over Scott.

    • Anonymous says:

      The solution to this, and what often ends up happening, is to come in at 9:20. That way you can balance whose time is worth more.

  2. Tom Hunt says:

    It seems to me that the solution to the 100-iteration-Prisoner’s-Dilemma game isn’t superrational, but subrational. That is, even in this precise situation, the vast majority of people don’t have the game theory to figure out that they really should be defecting all the way, and those who do know that most of the people they’re playing with, not being perfect game theorists, won’t follow their rules, and hence the reduction doesn’t work anyway. See also: why the 100-blue-eyed-men-on-the-island question is so hard, because most people will model the islanders as at least something like people, whereas the problem states explicitly that they’re all actually perfect game theorists.

    I suppose my takeaway is that perfect game theorists aren’t actually much like people at all, and if you’re trying to come up with results that are useful for actual people, reasoning using perfect game theorists is kind of silly.

    • Carinthium says:

      In that case, how can you justify moral behaviour as rational behaviour for an individual? Doesn’t this mean you have no answer to the amoralist challenge?

      • Tom Hunt says:

        I don’t have any answer to the amoralist challenge. I pursue moral behavior because it is aesthetically pleasing to me. Given the power, I would enforce moral behavior in others, similarly because it is aesthetically pleasing to me.

        I don’t think the amoralist challenge has any consistent answer. Whatever your utility function is, it has axioms you can’t justify.

        • Skeptical Excitement says:

          I never thought of morality as an aesthetic preference before. Is there a name for this kind of philosophy?

          • Well-Manicured-Bug says:

            Emotivism

          • Creutzer says:

            Not to my knowledge. “Emotivism” is not it, although emotivists are likely to hold the view that moral and aesthetic preferences are the same kind of thing.

            I haven’t seen this position actually defended in academic moral philosophy, but I may well have just missed it.

            Someone I once discussed this with pointed out that, if you look at it, aesthetic and moral judgments behave very much alike in many people; they will condemn you for liking the wrong kinds of things in the same way that they condemn you for doing something they consider immoral. People will similarly argue about objective aesthetic worth in the same way as they will talk about objective morality.

      • peterdjones says:

        Instrumental or epistemic rationality?

        • Carinthium says:

          I should note that I use a fairly standard version of the distinction, though many philosophers draw it along other lines.

          But either, if you can pull it off.

        • peterdjones says:

          An epistemic rationalist values truth, valid argumentation, good epistemology, lack of bias, etc.

          Therefore, an epistemic rationalist will be motivated to accept the truth of well justified moral claims.

          Therefore, morality is not incompatible with rationality.

          That’s about all I need to answer the challenge.

          To answer some further questions:

          An epistemic rationalist won’t necessarily act on a moral claim she accepts. They can be akratic. Altruistic morality requires them to lose utility in some areas, which may or may not be balanced out in other areas. That doesn’t mean they are irrationally losing net utility every time they act morally.

          • knyazmyshkin says:

            I think Hume is facepalming in his grave. That line of reasoning ducks the entire is-ought problem.

            EDIT: To clarify, by using truth, valid argumentation, etc, one could potentially come up with a beautifully designed social contract that Mr. 1 through Mr. 100 could agree to, but that has everything to do with utility and nothing to do with morality.

          • Anonymous says:

            >An epistemic rationalist values truth, valid argumentation, good epistemology, lack of bias, etc.
            >Therefore, an epistemic rationalist will be motivated to accept the truth of well justified moral claims.

            I don’t understand.

          • peterdjones says:

            @knyazmyshkin

            The is-ought divide is really a fact-normativity divide…it isn’t specific to ethical normativity. And the point of basing ethical normativity on some other normativity is that you don’t have to leverage it from non-normativity.

          • Creutzer says:

            Yes, precisely. That’s why people try to get morality from game theory, because the normativity of game theory is about something that by definition you care about already.

            I read knyazmyshkin as objecting to the idea that epistemic rationalists, seeing the truth of moral statements, would immediately be motivated to follow them. And rightly so, because that assumes a whole lot, not only that moral statements are truth-apt, but also motivational internalism.

          • peterdjones says:

            The fact that democratic politicians lie is a symptom of the fact that they need consent. Those who have absolute power don’t lie because they are too good and noble, it’s because they don’t need to.

      • Ghatanathoah says:

        My answer to the amoralist challenge is that I am a motivational externalist. The amoralist challenge doesn’t work on non-internalist theories of morality.

        http://en.wikipedia.org/wiki/Internalism_and_externalism#Moral_philosophy

        • Paul Torek says:

          This. The motivational internalism just keeps popping up around here, without justification or even acknowledgement.

        • blacktrance says:

          The amoralist challenge is especially applicable to externalist theories. If morality really is entirely external, then the amoralist can coherently and rationally reject morality and not have it bind him.

          • Paul Torek says:

            Um … duh? The problem with the hardened sociopath is not irrationality, it’s not giving a crap about the rest of us. That’s what externalism says, and the diagnosis seems to agree better with common sense than does the contrary view.

          • blacktrance says:

            But for someone to have no reason to be moral conflicts with common sense, and more importantly makes morality itself a questionable enterprise. If it’s possible for someone to have no reason to be moral, why should I bother being moral? If I reject morality in the same way that the amoralist does, perhaps the things I’d do would coincide with morality more often than the things the sociopath would do, but why should I be concerned with whether I’m being moral?

          • Paul Torek says:

            You’re not someone though, you’re you, and people like you (who take morality at least somewhat seriously) are in the majority, and can talk freely about morality without confusing anyone. I don’t understand what the problem is supposed to be. I mean, for example, I wouldn’t say “why should I care what happens to my wife?” even though I recognize that not everybody cares about their spouse (or about mine!) or even has a spouse, and lots of that not-caring is perfectly rational.

          • blacktrance says:

            The world isn’t divided into sociopaths and people who care about morality*, there’s a spectrum of psychological constitutions, preferences, etc, that have different degrees to which they have reasons to care about morality (or aspects of it). Sociopaths are at one end of the spectrum, and they presumably have no reason to be moral – let’s say their optimal rational behavior is called Optimal_Sociopath, and it rarely recommends the same behaviors as morality. Now suppose that I’m somewhere in the middle of the spectrum, and my optimal rational behavior is Optimal_Me. Because I’m in the middle of the spectrum, Optimal_Me’s behavior recommendations coincide with morality much more often than Optimal_Sociopath’s do. But Optimal_Me doesn’t always agree with the prescriptions of morality. If the sociopath has no reason to be moral – if there’s no reason he should abandon Optimal_Sociopath behavior for moral behavior – then why should I abandon Optimal_Me behavior for moral behavior?

            *By “morality” I mean “morality as it’s popularly conceived of”.

          • peterdjones says:

            @blacktrance

            There’s a difference between being obligated and being motivated.

            Sane adults are obliged to follow the law. Not all are motivated to do so, including criminals. Criminals are punished for their lack of motivation because they are still considered to be under the obligation.

            You should do moral things because that’s what moral means. That’s the obligation.

            The motivation you have to supply yourself, if you can.

          • blacktrance says:

            The law is assumed to be designed such that the penalties for violating it are sufficient to motivate people to follow it. There may be additional reasons to follow some particular laws, but they don’t apply to laws in general. If there’s a law I disagree with, it’s likely that I should follow it anyway, not because of some obligation but because something bad will happen to me if I don’t. I’m not obliged to follow the law, but it’s the prudent thing to do most of the time.

            In contrast, there’s no external punishment for acting immorally as such. There are sometimes punishments for some immoral acts, but at most that gives you a reason to avoid acting immorally in cases where it would get you punished. “If I’m not going to be punished for it, why should I act morally?” is a significant problem for external conceptions of morality. If you say that I should do moral things because that’s what “moral” means, then that just shifts the question to “Why should I do the things that are in the set labeled ‘moral’?” (e.g. give to charity, not murder, etc.) The problem is connecting “What I should do” to specific moral acts. It’s a dilemma, and there are two options:
            1. You can conclude that not everyone should be moral, because for some agent, “What I should do” isn’t any act that’s commonly labeled “moral”. You then preserve the intuitive content of morality (maybe), but you lose moral universality.
            2. You can widen the set of possible moral acts to anything that “What I should do” can be, and concede that a moral act for Agent A may not be a moral act for Agent B. You then have a morality that applies to everyone, but then you lose the intuitive content of morality.

            You can say that morality is what you should do, or that morality is stuff like helping old ladies cross the street, but it cannot necessarily be both.

          • peterdjones says:

            There are an endless variety of norms, each of which generates its own set of shoulds, mays and shalt-nots.

            Some are optional and/or localized, such as medical ethics or chess rules. The distinguishing feature of moral normativity, of morally-should, is that it is binding on all sane adults.

            So if there are true moral claims, and you are a sane adult, there are things you morally-should do.

            That establishes, conditional on there being any morality, why you should be moral.

            Your argument that you should not be moral is based on collapsing various different shoulds into one. For instance, saying that you should not do X because it is not in your self interest.

            But if there were only one “should”, there would be no moral dilemmas or conflicts.

          • blacktrance says:

            If you restrict “morality” to what (presumably human) “sane adults should do”, that’s fine. That’s taking Option 1 in the dilemma. But then that means that neither Caligula nor sociopaths have a reason to be moral. Also, even if you restrict morality to what sane adults should do, there is still significant variance in what one should do depending on who one is, and it’s perfectly possible to be a sane human adult who shouldn’t donate to charity, for example. There are moral claims because there are true statements about what sane human adults should do, and these statements don’t prescribe behaviors like paperclip maximization or torturing everyone, but they may still not prescribe behaviors that are commonly considered to be moral.

          • peterdjones says:

            @blacktrance

            I have no problem giving up on “universal” motivation, since it only ever existed in a qualified sense. I have never heard of a moralism which required all humans to be morally motivated. Asserting that not everyone is morally motivated is fairly banal, especially if the unmotivated coincide with the people we don’t consider moral agents anyway.

            I don’t know why you say there is variance in what people should do. I don’t know which meaning of should you are using, or whether you agree that there is more than one.

            I could guess, based on past experience, that you mean “should in order to satisfy their preferences”. If so, I would not default to the assumption that that is any kind of morality.

          • blacktrance says:

            If “what they should do” (i.e. morality) is something other than the satisfaction of their preferences, where would this “should” come from?

          • Paul Torek says:

            The identification of a “should” based on satisfying preferences, and treating that as a master or ultimate “should”, seems cart-before-horse-ish. People can change some of their preferences on a whim, so to speak: they can embrace a certain point of view. And that changes the balance of preferences. So it’s useless to ask “what favors the final balance of my preferences?” when the latter is a moving target. The very act of discussing morality, which involves trying to justify your actions to your interlocutors and get them to justify theirs, tends to make you more inclined to prefer justifiable actions.

          • blacktrance says:

            People aren’t always consistent, and they can change points of view if the new view is more consistent with their deeper-seated preferences. For example, if I believe that pleasure is the only good, and yet I oppose wireheading, and then am asked to justify my opposition, I change my view, having discovered it to be inconsistent, but I change it because of a more fundamental foundation that doesn’t change. Justifiable actions are justifiable with respect to a consistent framework, and the act of discussing morality imposes consistency.

          • peterdjones says:

            @blacktrance

            There is more than one theory of morality, each suggesting a different source; for instance, utilitarianism says you should maximize aggregate utility.

            If you don’t like external morality, you don’t get to assume egoism as a default. The idea that satisfying your own preferences is moral at all needs justification.

          • blacktrance says:

            A successful theory needs some at least ideally motivating foundation – that is, a rational being must have sufficiently motivating reasons to act morally. So, whatever morality is, it must be motivating for the consistent rational being. Preferences are motivating for normal people, and they’re even more motivating for the ideally rational person, because they don’t suffer from akrasia and other forms of inconsistency. So we start with preferences, because they give us reasons to act, and if morality is anything beyond individual preference-satisfaction, there must be something that is simultaneously motivating to the rational person and overrides their motive for preference-satisfaction. But I don’t think there is any such thing, and the burden of proof is on whomever is arguing for the existence of such a thing.

          • peterdjones says:

            @blacktrance

            Motivating moral theories don’t involve giving up on all your preferences. It’s not all-or-nothing.

            Altruistic morality requires you to go against some of your preferences, but not all of them.

            You shouldn’t start with preferences, because they might not be moral at all. Caligula morality is a worse failure mode than unmotivating morality.

            If you can argue that X is moral, the argument itself is a motivation to an epistemic rationalist.

          • blacktrance says:

            External morality means giving up on at least some of your preferences some of the time, so there must be something that overrides own-preference-satisfaction at least in some cases.

            You should start with preferences because they give us reasons to act. If a preference can be immoral, that means that the consistent rational person would have reasons not to act in accordance with that preference. The burden of proof for the existence of such reasons lies on whomever is arguing for their existence.

            If you can argue that X is moral, the argument itself is a motivation to an epistemic rationalist.

            Yes, but that’s tautological. That pushes aside the problem of what it means for X to be moral. You could just as well say that if the argument isn’t a motivation to the epistemic rationalist, that means that X isn’t moral.

          • Paul Torek says:

            but I change it [preference] because of a more fundamental foundation that doesn’t change.

            Maybe, but you haven’t shown that this “more fundamental foundation” must itself be a further preference. Presumably, a zygote has no preferences, and an adult human does, so at least sometimes preferences can evolve out of non-preferences. If sometimes, why not often?

          • blacktrance says:

            It’s certainly possible to bring one’s beliefs about how one should act into consistency with more fundamental beliefs about how one should act, and to have none of those beliefs be grounded in preferences. But your point was that preferences change “on a whim”, which isn’t what’s happening here – it’s merely a case of beliefs being made coherent with each other.

            The case of the zygote and the adult human is completely unrelated, because it’s not a case of preferences being derived from non-preferences, but the psychological constitution of a being changing from being incapable of having preferences to being capable of having them.

          • Paul Torek says:

            By “on a whim, so to speak” I just meant preferences can change relatively suddenly (as opposed to gradual change with age, say) and not based on any apparent pre-existing preference.

        • peterdjones says:

          @blacktrance

          “External morality means giving up on at least some of your preferences some of the time, so there must be something that overrides own-preference-satisfaction at least in some cases.”

          …overrides your self-centered preferences.

          Yes, I largely agree, since that is largely what I have been saying. But so what? You think it’s irrational to sacrifice some preferences to others? I think it’s inevitable, if you have complex sets of preferences, and not restricted to ethical issues.

          “You should start with preferences because they give us reasons to act.”

          Not in the sense that if you start anywhere else, you necessarily don’t get motivation.

          “If a preference can be immoral, that means that the consistent rational person would have reasons not to act in accordance with that preference. ”

          If there are reasons for morality.

          “The burden of proof for the existence of such reasons lies on whomever is arguing for their existence.”

          Up to a point. And there are many attempted justifications. However, if you’re going to assert that amoralism, rather than “dunno”, is the correct answer, there’s a burden on you.

          “Yes, but that’s tautological. That pushes aside the problem of what it means for X to be moral.”

          I am doing that quite deliberately because so far, this discussion has been about motivation, not truth. Moral truth has its problems, to be sure, but they’re not the same as the problems of motivation.

          • blacktrance says:

            Yes, I largely agree, since that is largely what I have been saying. But so what? You think it’s irrational to sacrifice some preferences to others? I think it’s inevitable, if you have complex sets of preferences, and not restricted to ethical issues.

            If your preferences are internally consistent, then sacrificing the net fulfillment of your preferences would be a failure to be instrumentally rational. It’s certainly possible and rational to follow one preference rather than another (when they’re mutually exclusive) when the fulfillment of the first would give more utility than the fulfillment of the second, but then your action would still be fully in line with your preferences, so it wouldn’t be like morality overriding your preferences.

            Not in the sense that if you start anywhere else, you necessarily don’t get motivation.

            Maybe (though I don’t know what you’re suggesting to start with instead), but if you start with preferences, you get behavior that is justified by instrumental rationality. You don’t get that with anything else.

            And there are many attempted justifications. However, if you’re going to assert that amoralism, rather than “dunno”, is the correct answer, there’s a burden on you.

            Someone arguing for the existence of a morality bears the burden of proof. Until they’ve proved it, you assume that morality doesn’t exist, which means amoralism.

          • peterdjones says:

            @blacktrance

            If your preferences are internally consistent, then sacrificing the net fulfillment of your preferences would be a failure to be instrumentally rational.

            If your preferences are internally consistent, such that you can always satisfy all of them under all circumstances, you are probably quite unusual.

            It’s certainly possible and rational to follow one preference rather than another (when they’re mutually exclusive) when the fulfillment of the first would give more utility than the fulfillment of the second, but then your action would still be fully in line with your preferences, so it wouldn’t be like morality overriding your preferences.

            It wouldn’t be overriding all your preferences, but it would be overriding some of them. If you give money to charity, you are losing utility in a limited sense. But you can still have an overall gain in utility even if some of the quantities involved in the calculation are negative. So it can be rational to perform actions which are altruistic in the sense of involving a moment of sacrifice.

            “Maybe (though I don’t know what you’re suggesting to start with instead), but if you start with preferences, you get behavior that is justified by instrumental rationality. You don’t get that with anything else.”

            If you don’t get morality out of that, what’s the point?

            “Someone arguing for the existence of a morality bears the burden of proof. Until they’ve proved it, you assume that morality doesn’t exist, which means amoralism.”

            Morality undoubtedly exists in some sense. The question is how it is justified. Some justifications multiply entities, some do not.

          • blacktrance says:

            It wouldn’t be overriding all your preferences, but it would be overriding some of them. If you give money to charity, you are losing utility in a limited sense. But you can still have an overall gain in utility even if some of the quantities involved in the calculation are negative. So it can be rational to perform actions which are altruistic in the sense of involving a moment of sacrifice.

            An overall gain in utility compared to what? Compared to not giving to charity? That’s certainly possible, but that’s also contingent on an individual’s preferences. If I can rationally reject me giving to charity, i.e. if me giving to charity would be a net loss of my utility, the externalist moralist would say that I should give to charity regardless – that’s why this view is called “external”. If I should give to charity if and only if it would maximize my utility, then external morality (as commonly conceived of) doesn’t exist.

            If you dont get morality out of that, what’s the point?

            You get morality in the sense of there being truths about what you should do. But the content of these truths may not be the same as the content of popular morality or utilitarianism. It generates morality, but not external morality.

          • peterdjones says:

            > An overall gain in utility compared to what? Compared to not giving to charity? That’s certainly possible, but that’s also contingent on an individual’s preferences.

            Assuming instrumental rationality, yes. Assuming epistemic rationality, it is more contingent on good arguments.

            > If I can rationally reject me giving to charity, i.e. if me giving to charity would be a net loss of my utility, the externalist moralist would say that I should give to charity regardless – that’s why this view is called “external”. If I should give to charity if and only if it would maximize my utility, then external morality (as commonly conceived of) doesn’t exist.

            Then it is false, rather.

            But you don’t have an argument that “external morality” — moral shoulds that don’t amount to self interest — is false.

            Nor do you have an argument that self interest is moral…all you have done is exploit the ambiguity of “should”.

            Motivation is not relevant.

            > You get morality in the sense of there being truths about what you should do.

            Only if you can establish that they are moral shoulds.
            There are things you should do to be a good chess player, or a notorious criminal, but they are not moral shoulds.

            There are burdens on all proponents of all moral theories, including egoism. You need to show that doing what you want is moral. Showing that it is motivating is not the same thing.

            > But the content of these truths may not be the same as the content of popular morality or utilitarianism. It generates morality, but not external morality.

            Show that it generates morality.

          • blacktrance says:

            But you don’t have an argument that “external morality” – moral shoulds that don’t amount to self interest – is false.

            Nor do you have an argument that self interest is moral…all you have done is exploit the ambiguity of “should”.

            If external morality is true, then it’s possible for me to have reasons to donate to charity regardless of whether it would be instrumentally rational for me to do so. However, if morality would cause me to act in a way that’s contrary to my instrumental rationality, I can rationally reject it. That means that it’s not something that I should do, and because “morality” means “what I should do”, it’s not morality. Therefore external morality is false.

            There are burdens on all proponents of all moral theories, including egoism. You need to show that doing what you want is moral. Showing that it is motivating is not the same thing.

            As you yourself said, “morality” means “what you should do”. If I show that it’s what you should do, that means I’ve shown that it’s morality.

          • peterdjones says:

            > If external morality is true, then it’s possible for me to have reasons to donate to charity regardless of whether it would be instrumentally rational for me to do so. However, if morality would cause me to act in a way that’s contrary to my instrumental rationality, I can rationally reject it.

            If there are reasons to donate to charity, then you can’t epistemically rationally reject it.

            > That means that it’s not something that I should do,

            The default meaning of “should” is not the actual behaviour of a realistic version of an entity. In fact, another word, “would”, labels realistic predictions. “Should” relates to ideals and optimization. There are different things that can be optimised, so there are different shoulds.

            Giving to charity optimizes human happiness, or something, so it is what you morally-should do.

            If it is not what you would do, that just means you are an imperfect moral agent, not that there is something wrong with morality itself. You are kind of blaming ideals for being ideals.

            > and because “morality” means “what I should do”, it’s not morality. Therefore external morality is false.

            Morality means what you morally-should do and not what you would do, or even instrumentality-should do.

          • blacktrance says:

            If there are reasons to donate to charity, then you can’t epistemically rationally reject it.

            Yes, but whether there are epistemically rationally non-rejectable reasons to donate to charity is determined by whether there are instrumentally rationally non-rejectable reasons to donate to charity.

            The default meaning of “should” is not the actual behaviour of a realistic version of an entity. In fact, another word, “would”, labels realistic predictions. “Should” relates to ideals and optimization. There are different things that can be optimised, so there are different shoulds.

            That’s true, but most shoulds are themselves dependent on other shoulds, so they’re not fundamental. For example, if I should do X to be a good chess player, whether I should do X is dependent on whether I should be a good chess player. Whether I should do X and be moral (if doing X would make me moral) is dependent on whether I should be moral. Indeed, if you want to be “moral” by some commonly accepted content of morality, there are things that you should do, but that doesn’t mean you should be moral. Whether you should do or be any particular thing ultimately comes down to instrumental rationality, because that’s the only thing you can’t rationally reject.

      • JME says:

        I’d be happy to concede not having an answer to an amoralist challenge (which I understand to be “there is no rational basis for morality”) so long as the amoralist is willing to not use the little sleight-of-hand where you say “morality has no rational basis, therefore some conception of self-interest is the only rational basis for behavior.” I’m not allowing the assumption that pursuit of self-interest (however defined), or any other potential motive, has some innate rationality without subjecting it to the same scrutiny as morality.

        • Carinthium says:

          If you define “self-interest” in a very conventional sense, I agree with this.

          But if you define “self-interest” as “maximise what you want in the world, to the extent you want it”, then I don’t. Naturally this sometimes leads to moral behaviour and sometimes to non-moral.

        • peterdjones says:

          Some conception of self interest is the basis of all rational behaviour, since one conception of self interest is satisfying your utility function.

          However, that notion of self interest is so broadly defined as to be compatible with altruism.

          • Carinthium says:

            That much is true, I agree.

          • JME says:

            To me, this comes close to a tautologically meaningless “if rationality is defined as doing whatever you want, then doing whatever you want is rational” type argument.

          • peterdjones says:

            Doing what you want effectively is instrumental rationality, and isn’t vacuous, since people can make bad moves that decrease their utility.

          • Carinthium says:

            JME- It’s more that the traditional conception of morality is either false or incoherent depending on how you look at it (see amoralist challenge), therefore “do what you want” replaces it.

          • peterdjones says:

            And see my replies.

            No one has shown that altruistic morality is false or incoherent, because no one has shown how lack of universal motivatingness adds up to falsity or incoherence.

            Moreover, someone, Scott F, has shown that maximally unmotivating moral truth is a coherent idea.

            Furthermore, the truth of ethical egoism doesn’t follow from any falsehood of altruism. See the nihilistic challenge.

          • Carinthium says:

            Moral right and wrong as we know it at the moment is an absolutely pointless concept to have due to the amoralist challenge, and an incoherent concept in the same sense that grue is if defined by human moral beliefs as they differ.

            Key to the argument, however, is the amoralist challenge. If morality isn’t motivated, why bother trying to figure out a vaguely defined concept of right and wrong?

            Let me give you an analogy here. Say I create a concept of “Sumerness” – similarity to ancient Sumerian civilisation. In a very broad sense this is objective, even if there are many details in which parts of Sumerian civilisation differ and the borders of what is and is not Sumerian are vaguely defined.

            But there is no point in the existence of the concept. It may be coherent, but it is so useless as to not be worth discussing and has only a vague connection to reality as we know it.

          • peterdjones says:

            “Moral right and wrong as we know it at the moment is an absolutely pointless concept to have due to the amoralist challenge”

            Challenge doesn’t mean ‘successful challenge’

            “If morality isn’t motivated, why bother trying to figure out a vaguely defined concept of right and wrong?”

            Show that morality isn’t motivated.

            Note that I have already answered both challenges: it is pointful to research morality, because any good arguments that are uncovered will motivate epistemic rationalists.

    • Scott Alexander says:

      How can it be less rational if it consistently works better?

      • Tom Hunt says:

        It is “less rational” in the sense that it works specifically by throwing away the first-level rational solution, and most of the people doing this do so not because they’ve parsed that solution and decided they don’t like it, but because they can’t come up with that solution in the first place.

        If you define “rational” as “wins”, then yes, it’s more rational. But this implies that an RNG which has a long lucky streak is also being “rational”, which seems counterintuitive.

    • Sniffnoy says:

      “Superrational” is being used here in Hofstadter’s sense, not in a general “more than rational” sense.

    • Perfect Game Theorist says:

      Oh, so perfect game theorists aren’t people now? You know who else singled out groups and said they weren’t people…

  3. hamiltonianurst says:

    Most IPD tournaments do have a somewhat randomized number of rounds, to prevent exactly this issue (D,D forever).

    • Scott says:

      IPDs with evolution (tit for tat, defecting in the last x rounds, where x changes over generations) settle on x = 2n/3 (where n is the number of rounds). Can’t remember if this is due to the standard payoff matrix or true for all matrices.

  4. anon says:

    I am not sure why you see this as a problem with anything other than the veil of ignorance and/or utilitarianism. I find it interesting because IMO both of these theories suffer from what I can only call a measurement problem. In the Veil’s case, the problem of accurately appraising pain and distress of others. Actually, stated that way, in both cases, since it’s really a consequence of “de gustibus non est disputandum” – not only do I not know whether I will inhabit such-and-such a position, I also don’t know how much I’ll care (e.g., about the socially just distribution of pork rinds). This axis is really hard to analyze, I think.

    It seems to me that you can artificially create almost any scenario you like simply by tweaking your util function, in what I’ll sloppily call “Arrow’s Impossibility Analogy”. Though in a pretty specific sense you are talking about something like voting on an outcome from behind the veil. Now suppose that there is something like an objective measure of utils in some things, and then we go to vote, and the method we choose for aggregating our preferences is…

    Ah, I always get sidetracked in these thoughts.

    Mostly it reminds me that I need to unpack my Rawls because I don’t know how far you can really extend Rawlsian veils to the things it is often applied to. Also, it always seems to me to be something that some clever behavioral economist somewhere should have already written three papers on and I’m vaguely saddened that your post doesn’t have two of them with seven commentaries each linked.

    • Douglas Knight says:

      The whole point of the example was that it takes place after the veil of ignorance, so it cannot possibly be a problem with the veil of ignorance. It is harder to remove interpersonal comparison from the analysis. But the core of the analysis applies to zero-sum muggings for cash. The purpose of the range of positive sum and negative sum interactions is really to fill in a range of intuitions, and not because the whole range is needed.

      In short, I have never seen someone bring up Arrow in a utilitarian context who seems to have understood a word he claims to have read.

      • Anonymous says:

        Well aren’t you a contrary fellow. If I apologize for having an independent thought spurred on by the post, will you not hit me again?

      • Anonymous says:

        Well my response was less than charitable. Please consider the following.

        The whole point of the example was that it takes place after the veil of ignorance, so it cannot possibly be a problem with the veil of ignorance.

        Scott’s hypothetical fails to update the participants’ util functions to account for “results Scott thinks are stupid.” From behind that veil they’ve found a stable result. He thinks he’s removed the veil, but he forgot that he was judging the outcome, not the hypothetical participants. His preferences were unrevealed.

        The Rawlsian veil is there so that we cannot gain advantage from either our preferences or our place in society. Scott’s example removes that veil for the participants judging their own situation, but these 100 hypothetical souls don’t know their place isn’t in the society their util functions are determining, but in the meta-context of Scott’s estimation of that society. They picked the correct results through flawless reasoning. If anyone says otherwise, then their prior preferences were not revealed to the participants. And if the participants picked the “not stupid” result, it would be a coincidence.

        It is analogous to Arrow’s Impossibility Theorem because AIT is about problems of voting systems, not the outcomes of voting. It’s a “stupid result” that someone is a dictator or etc. But it doesn’t say that the outcome selected by the social choice function is “stupid.”

        There is no prisoner’s dilemma. The participants make their decisions based on the payoff matrix. The payoff matrix does not include the post-analysis of “if you did anything but coop/coop then you failed to choose ‘optimally.'” The “dilemma” exists only to outsiders; to insiders the stable strategy is determined by the payoff matrix and possibly the form of the game (in e.g. iterated cases).

        • Douglas Knight says:

          Yes, Scott doesn’t like the results only because of interpersonal utility comparisons. It took you an awful lot of words to say that.

    • Andy says:

      It seems to me that you can artificially create almost any scenario you like simply by tweaking your util function,

      This does work in reality. Pro-slavery apologists would create theories by which their slaves didn’t actually feel suffering, and were instead best suited to a life of constant labor and torture. Some of the more toxic SJWs occasionally argue that privileged people can’t actually be bullied. These arguments seem to hinge on arguing that:
      (in the pro-slavery case) making someone pick cotton all day, whipping them to work harder, not letting them marry without permission, and selling their children on the auction block did not decrease their utility,
      (in the toxic-SJW case) any time privileged people claim to feel pain they should be ignored, as these are simply attempts to undermine the march of justice.
      Both arguments are really, really hard to argue against.

      • Mary says:

        Torture? The pro-slavery advocates portrayed slaves as happy-go-lucky slaves who would willingly do the sort of drudge-work they would be stuck with in any scenario because of their lack of ability. Which lack also, happily, scuttled their imagination, thus preventing their being unhappy from ambition.

  5. social justice warlock says:

    1. What if all fights contained a random element? For example, suppose your chance of overpowering someone else (and thus being able to oppress them) was your_strength/(your_strength + opponent_strength)? In societies of this type, agreements to ban strongly negative-sum interactions would be more salient for everyone, since even Mr. 100 would have some chance of being beaten in a typical interaction.

    This is almost precisely Hobbes’ argument – he notes that almost anybody can bash almost anybody else’s head in, assuming the latter is asleep or something.
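
    (For anyone who wants to poke at the quoted random-fight variation numerically, here is a minimal sketch – the win-probability formula is the one quoted above, the strength range 1–100 follows the post, and everything else, like the number of encounters, is made up for illustration.)

    import random

    # Random-fight variation: win probability = my_strength / (my_strength + opp_strength).
    def wins_fight(my_strength, opp_strength, rng):
        """Return True if I overpower the opponent in this encounter."""
        return rng.random() < my_strength / (my_strength + opp_strength)

    def loss_rate(strength, encounters=10000, seed=0):
        """Estimate how often a person of the given strength gets overpowered
        when repeatedly meeting uniformly random other people."""
        rng = random.Random(seed)
        losses = 0
        for _ in range(encounters):
            opp = rng.randint(1, 100)
            if opp != strength and not wins_fight(strength, opp, rng):
                losses += 1
        return losses / encounters

    for s in (1, 50, 100):
        print("Mr.", s, "is overpowered in about", round(100 * loss_rate(s)), "% of encounters")

    Even Mr. 100 loses a noticeable fraction of these fights, which is the point of the variation: nobody is safe enough to shrug off a ban on strongly negative-sum interactions.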

  6. Sniffnoy says:

    I don’t know of any good way to solve this problem

    The usual way is not having a fixed number of rounds. Maybe each round there’s a fixed probability of there being another round. Or you could use some other distribution, I don’t know. (Note that the conventional “tit-for-tat” wisdom is not for the fixed-rounds case!) I mean, you kind of mention this, but I think it is worth making explicit how this is actually handled normally.
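
    To make the usual handling concrete, here is a minimal sketch of the grim-trigger check for a game with a fixed continuation probability. The payoff numbers (T=5, R=3, P=1) are the textbook ones, not anything from the post – the only point is that once there is no known last round, cooperation stops unravelling as long as the continuation probability is high enough.

    # With probability `delta` of another round, defecting gains T - R once but
    # costs R - P on every expected future round. Grim trigger sustains cooperation
    # iff cooperating forever beats defecting once and being punished thereafter.
    T, R, P = 5, 3, 1  # assumed textbook payoffs: temptation, reward, punishment

    def cooperation_sustainable(delta):
        coop_value = R / (1 - delta)                # mutual cooperation forever
        defect_value = T + delta * P / (1 - delta)  # one-shot gain, then punishment
        return coop_value >= defect_value

    for delta in (0.3, 0.5, 0.7, 0.9):
        print(delta, cooperation_sustainable(delta))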

    • Scott Alexander says:

      Yeah, I know that, but if you do get stuck in a tournament with a fixed number of rounds, it seems really dumb that you can’t get two perfectly rational agents to cooperate.

      EDIT: Also, Liskantope brings up just below that this is similar to the Unexpected Hanging problem. And that does seem to have a solution, in that it is indeed possible to hang the prisoner unexpectedly.

      • Arguably, Breaking Bad’s premise is that a rational actor learns that his Iterated Prisoner’s Dilemma is coming to an end, and starts defecting.

        This makes me really glad that most people don’t know the date of their death.

        • Scott F says:

          (breaking bad spoilers ahead)

          Oh boy. The growth/transformation of Walter into Heisenberg looks like climbing from Mr 1 (loses to schoolkids and car wash managers) up the ranks to Mr. 100. First defeating drug dealers, then motorcycle gangs, then large-scale distributors, then… Nazis?

      • Carinthium says:

        Rationality is a human construct, not an objective feature of the world. I know nowhere near enough to categorically state there is no solution, but it sounding ‘really dumb’ isn’t a good enough reason to rule one out.

        Theoretically speaking, perhaps rationality itself as rationalists understand it is ultimately incoherent because at some level there are unsolvable paradoxes?

      • RCF says:

        Is there any “real dumbness” that exists in the iterated case, that isn’t in the one-off case?

        • Zakharov says:

          Yes – if you consider only two strategies for fixed-iteration, always-cooperate (C) and always-defect (D), C-C works out better for both players than D-C. In a regular PD, D-C works out better for the defector than C-C.

          • RCF says:

            Huh? First of all, the hyphen is generally understood to indicate time-differentiated strategies, and an ordered pair is used to indicate player-differentiated strategies, so your notation is rather confusing. As for the content of your post, (D,C) is better for the defector than (C,C) in all versions of PD.

      • suntzuanime says:

        It’s possible to hang the prisoner unexpectedly. It’s not possible to promise to. Similar to the iterated prisoner’s dilemma, perfect common knowledge screws everything up, whereas a slight relaxation of that assumption makes everything work out like we would expect.

        • Except for the part where the judge does promise to hang the prisoner unexpectedly, and then successfully makes good on that promise.

          • Doug S. says:

            What happens in the really degenerate case of the unexpected hanging paradox?

            Judge: Tomorrow I will hang you, but only if it will be a surprise. Which it will be, when I hang you.

            Prisoner: I know the judge is going to hang me tomorrow, so I’ll expect it. But he won’t hang me if I expect it, so there won’t be a hanging. Which means that I should anticipate no hanging, which means that I will be hanged unexpectedly, which means that I should anticipate it, which means that I won’t be hanged unexpectedly, which means I won’t be hanged… AAUGH!

            At this point, the prisoner dies of spontaneous head explosion.

          • RCF says:

            Also, what happens if the judge says to the defense attorney “Your client will be hanged tomorrow, but won’t expect it”? The defense attorney knows that the prisoner will be hanged tomorrow, so the defense attorney expects it, but that’s not paradoxical. But even if the prisoner overhears the statement, the prisoner will not be able to consistently conclude that the statement is true.

            It’s basically a Gödel statement: “This statement is true, but you will not know that it’s true”.

          • suntzuanime says:

            He “promises” to hang the prisoner unexpectedly, but it is not an infinitely iron-clad and infinitely credible promise, which is why he is able to fulfill it. The “paradox” arises because philosophers see the judge saying words, and then the words coming true, and they confuse that with a successful promise in the hyper-idealized ultra-rigorous philosophical sense.

        • peppermint says:

          it’s also possible to promise to hang the prisoner, then never quite get around to doing it for 30 years; but only in a democracy.

      • Luke Somers says:

        > Yeah, I know that, but if you do get stuck in a tournament with a fixed number of rounds, it seems really dumb that you can’t get two perfectly rational agents to cooperate.

        Their rationality is clearly flawed.

  7. Liskantope says:

    It just so happens that a similar type of thought experiment was brought up in my department earlier today. The scenario is that a professor tells the students on the first day of class that there will be exactly one pop quiz this semester. Now suppose the professor were to set this pop quiz on the last day of the semester. Then the students would come in on the last day knowing this quiz would take place, since there was no pop quiz any of the other days of the semester. That would take away the element of surprise, so it would not be a pop quiz; thus, one concludes that the pop quiz couldn’t fall on the last day. But then, by the same argument (combined with “reverse induction”), the pop quiz couldn’t fall on the day before, or the day before that, etc. so that the professor couldn’t give the quiz at all.

    Right now I’m scratching my head over this; maybe I’ll be able to think of the “right” way to resolve this type of paradox when I’m feeling more clear-headed.

    • Scott Alexander says:

      The way I’ve always heard this joke ends with “The students walk out, satisfied that the pop quiz is impossible. The next day, the professor gives a pop quiz. None of the students were expecting it.”

      See unexpected hanging paradox

      • Zakharov says:

        Let’s say that I owe you $1000. I promise you that I’ll have your $1000 tomorrow. I only have $500. I head to the roulette tables, put it all on red, and win. I pay you your $1000. Did I lie to you when I promised the $1000, a promise I wasn’t sure I could keep?

        Let’s call this a pseudo-lie.

        In the unexpected hanging problem, we could say that the judge pseudo-lied to the prisoner. He promised that the hanging would be a surprise, knowing that there was the possibility the prisoner could be hanged on Friday unsurprised.

        • Liskantope says:

          This seems to boil down to judging truthfulness based on intentionality/belief versus action. That is, you promised me the $1000 tomorrow, not actually intending to pay me all of it tomorrow, because you didn’t anticipate that you’d have it by then. So if your definition of “promise” involves some kind of intention, then your promise was a lie. If, on the other hand, the truthfulness of a promise is reflected solely by one’s later actions, then in this case the promise turned out not to be a lie. I lean towards the former position: the truthfulness of a promise depends on one’s intentions. So in your scenario, I would consider the judge to be lying.

          (Then again, maybe “intentionality” isn’t the best word to use here. I’m reminded of the time I witnessed two philosophy professors get into an argument about whether or not, if I pay for a lottery ticket and beat the million-to-one odds by winning, I intentionally won the lottery.)

          • Army1987 says:

            I’m reminded of the time I witnessed two philosophy professors get into an argument about whether or not, if I pay for a lottery ticket and beat the million-to-one odds by winning, I intentionally won the lottery.

            BTW, see the Knobe effect.

      • Liskantope says:

        I’m not sure I see how this is a solution, though. Wouldn’t the students go on to say, “If the correct conclusion is that the pop quiz is impossible, then the professor will anticipate us concluding that the pop quiz is impossible. Then the professor will go ahead and give us the quiz, knowing that we will be surprised. So we’re back at square one.”

        • Nick says:

          If I understand it correctly, yes, that’s how the students ought to reason, hence why it’s a paradox. Scott’s “solution” is just the typical telling of the joke. It’s funny because most people wouldn’t immediately think to continue the students’ reasoning the way you did, and so they laugh at the result for the poor students; your continuation isn’t as funny, so no one tells the joke the “right” way, with students who reason better.

          • Liskantope says:

            Yes, Scott did present it as the punchline of a joke rather than as a solution to the paradox. But in a comment in an above thread, he refers to the Unexpected Hanging Paradox and says, “that does seem to have a solution, in that it is indeed possible to hang the prisoner unexpectedly”. So I’m a little confused as to what the purported solution is.

    • Bugmaster says:

      Wait a minute, I don’t see the problem.

      Let’s say that there are 100 days in a semester. Before classes start, the professor secretly writes down a random number from 1 to 100; he will give the surprise quiz on that day.

      On the first day of class, the probability that the professor will give the quiz is 1/100; so, encountering the quiz on this day would be quite surprising. On the second day, this probability is 1/99; still surprising, but not as surprising as before. On day 100, it’s 1/1, which is not surprising at all.

      Where’s the paradox? The professor never promised to give the quiz on the maximally surprising day, did he?
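
      (To spell out the arithmetic in this version – purely illustrative, assuming the professor picks a uniformly random day out of 100:)

      days = 100
      for d in (1, 2, 50, 99, 100):
          # Given no quiz so far, the chance that today is the day is 1 / (days remaining).
          print("day", d, ": P(quiz today | none yet) =", round(1 / (days - d + 1), 3))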

      • Scott Alexander says:

        No, the problem is the professor promises it will be at least a little surprising.

        So we can’t just say “The probability on the last day would be 1/1” because that wouldn’t be surprising, so the professor would have broken his promise by giving it on a non-surprising day. He can’t give it on that day.

        And so on by induction forward.

        • David says:

          Perhaps I’m missing something, but couldn’t the professor randomise over 100 to select the number of days that will be excluded from the end of the semester (eg, if they roll 30, the test won’t happen on any of the last 30 days), and then randomise over whatever number is left to decide which day the test will fall on? All they have to do is tell the students that they’re doing this (so they know that there is at least one day at the end of the semester that the test won’t fall on) but not tell the students what number they rolled on the first roll (so they don’t know when to start inducting back from), and then whatever number they get on the second must be a surprise.

          • Alejandro says:

            The first randomization cannot include the number 0, because if 0 comes up then the second one could give the number 100, and the test would be on the last day and not be surprising. Could the first randomization include the number 1? No, because then the second one could give the number 99, and on reaching day 99 the students would know (having already replicated the first stage of reasoning) that the test must be today. And so on…

            If the professor must be 100% certain of fulfilling the promise of surprise, then the two-step randomization doesn’t work.

          • 27chaos says:

            The hanging paradox is ultimately a consequence of the false assumption that if one knows an agent’s goal and part of the agent’s reasoning process, one can predict that agent’s actions.

            Alternatively considered, it’s a consequence of the implicit premise that the professor is constrained by/to inductive reasoning alone.

            If the professor was a computer program that could use no reasoning other than inductive reasoning, then I think the program would return an error message. I don’t think it is actually possible to build a program limited to this logic that would output an action it expected to be surprising. If actually limited to the described sequence of logic, the program or professor would simply be incapable of action.

            There is no true paradox in showing that some reasoning processes are insufficient to guarantee some types of promises. The apparent paradox is only a consequence of the hidden premise that is first assumed and then violated.

          • Paul Torek says:

            27chaos, your comment is especially refreshing, given that Scott’s whole post is struggling with a few other false assumptions about how un-veiled reasoning must proceed.

          • Creutzer says:

            But isn’t the problem that the students are reasoning in this fashion and that their reasoning outputs no coherent expectation, thus nothing to be violated, no matter how the professor reasons?

            I’m confused.

    • Charlie says:

      In this problem, because of how the students gain information, surprised students have to be purchased in the coin of unsurprised students. You can only surprise the students on some day if there is some chance it would be the next day – therefore in a bounded set there is some day that will be unsurprising. This is just a cost of doing business, like a compost pile or a sewage treatment plant.

      • 27chaos says:

        You are saying, in other words, it is correct that the professor not give the test on the final day, but it would still be acceptable for him to give it any other day?

        • Charlie says:

          Almost the opposite 🙂

          Suppose the professor decided which day to give the quiz by using some random process like die-rolling or coin-flipping. I’m saying that if this random process outputs “last day of class”, the professor realio trulio should give the quiz on the last day. The random process’s surprisingness is bought with very occasional unsurprisingness.

        • Charlie says:

          Let me put it another way. There is no strategy that is always surprising to a good reasoner in this game. That is what the induction proves.

          But this does not mean that the students can never be surprised. The situation is not surprise-proof, it is only certain-surprise proof.

  8. Ken Arromdee says:

    It seems like the standard veil of ignorance argument also concludes that we should have a society which taxes everyone to supply the world’s poor with whatever intervention maximizes their utility. Yet nobody does this.

    • Scott Alexander says:

      Since our intuitions tell us that not everyone is perfectly moral all the time, it doesn’t seem any discredit to a theory of morality if people fail to follow some of its recommendations.

      • Ken Arromdee says:

        There’s a difference between a theory of morality that some people don’t follow, and one which pretty much nobody follows, ever. I would conclude that a theory in the latter category fails to capture something important about what we really consider moral.

        • dublin says:

          Morality isn’t about what you do, it’s about what you’re willing to endorse.

        • Paul Torek says:

          As a meta-ethical motivational externalist, I strongly endorse Ken’s message. A morality needn’t motivate everyone, but if it motivates no one, the moral thinking that led to it has gone awry.

        • Scott F says:

          I don’t think there’s any difference between a theory of morality that some people don’t follow and a theory of morality that nobody follows at all, except for contingent environmental factors that affect adoption and persistence of the theory.

          For example: You put five chimps in a room, but this time instead of putting a banana on the ladder, you put a baby chimp being endlessly tortured. (Rescuing a baby chimp from endless torture is something all chimps agree is morally right.) Whenever a chimp tries to rescue the poor baby chimp, we blast all five chimps with ice cold water.

          Pretty soon, the chimps learn that if they see another chimp trying to climb the ladder, they better drag that guy down before we all get hurt.

          Swap one chimp out for a new one, who immediately tries to climb the ladder to rescue the poor baby chimp, and who immediately gets thrashed by the four chimps (who ‘wisely’ administer the beating to the ‘naive’ chimp to stop bad things from happening). Wait until the new chimp has internalised this lesson, then swap out one of the original four ‘wise’ chimps for a new one.

          After a while, you have a room with a baby chimp being endlessly tortured, with five chimps sitting around not helping it and beating up anybody who tries to. Even if one of the chimps inspected the cold water system and discovered that years of disuse had rendered it nonfunctional, this wouldn’t have any relevance – chimp beatings are now reinforced by chimp beatings, not cold water.

          If one of these chimps told you about their moral theory (that considers saving baby chimps from endless torture a moral thing to do), would you respond that their theory fails to capture something important about what they really consider to be moral?

          I think it would be more accurate to say their moral theory fails to direct their actions in certain hostile environments. From the outside, changing the environment to allow their moral theory to start directing their actions seems obviously the right thing to do, and adopting a different moral theory to reflect the environment seems like the obviously wrong thing to do.

    • RCF says:

      Would you prefer such a society to all others?

      • Ken Arromdee says:

        I would prefer it to all others if I was behind the veil of ignorance and didn’t know whether I was going to be one of the people paying the taxes or one of the people receiving them, since by supposition the gain to those receiving them is maximal.

        • RCF says:

          “by supposition the gain to those receiving them is maximal.”

          No, that was not the supposition. The supposition is that the utility is maximal given that this distribution scheme is in place, not that it’s maximal over all possible worlds. Second order effects could easily leave everyone worse off.

  9. suntzuanime says:

    These iterated prisoner dilemma type problems require that everyone’s perfect rationality be common knowledge. This is an even more unrealistic condition than everyone being perfectly rational! Even if everyone happens to be perfectly rational, they can’t necessarily be sure that the other guy isn’t going to be an irrational tit-for-tat player or an irrational anti-oppressionist. If you’re playing 100-round IPD, a >1% chance of your opponent being tit-for-tat instead of rational makes tit-for-tat the rational thing to play. And so a >1% chance of your opponent thinking there’s a >1% chance of his opponent being tit-for-tat instead of rational makes tit-for-tat the rational thing to play. And so on and so on, for as many levels of meta as you need to go to convince you to cooperate.

    The trick with these game-theoretical constructs is that they’re infinitely fragile. An infinitesimal probability of deviation from ideal conditions can lead to a large change in outcomes. If you’re not thinking probabilistically, you’re not really thinking.
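
    To put rough numbers on the >1% claim (a back-of-the-envelope only – the payoffs T=5, R=3, P=1, S=0 and the restriction to just two candidate strategies are my simplifications, not part of the argument above):

    T, R, P, S = 5, 3, 1, 0   # assumed textbook payoffs
    ROUNDS = 100

    def expected_payoff(strategy, p_tft):
        """Expected score against an opponent who is a tit-for-tat bot with
        probability p_tft and an always-defector otherwise."""
        if strategy == "tit-for-tat":
            vs_tft = ROUNDS * R              # mutual cooperation all game
            vs_alld = S + (ROUNDS - 1) * P   # exploited once, then mutual defection
        else:  # always defect
            vs_tft = T + (ROUNDS - 1) * P    # exploit once, then mutual defection
            vs_alld = ROUNDS * P
        return p_tft * vs_tft + (1 - p_tft) * vs_alld

    for p in (0.001, 0.005, 0.01, 0.05):
        print(p, expected_payoff("tit-for-tat", p), expected_payoff("always defect", p))

    With these numbers the crossover sits around half a percent, so a >1% chance of facing a tit-for-tat player is already enough to make playing tit-for-tat the better bet – which is the fragility being described.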

    • RCF says:

      A condition generally is that there is no secret knowledge (other than the players’ strategies themselves). So if one player knows they are perfectly rational, then everyone else knows that, too. Stating this rigorously takes a bit of work, though, since it involves self-reference and infinite recursion.

      • Carinthium says:

        Which begs the question- how on earth can this be used to found a moral theory in the real world, where entities are mostly highly irrational and rationality levels are impossible to guess?

        • RCF says:

          Most disciplines have an implicit premise that the best way to study real cows is to posit spherical cows and then add attributes until the result is reasonably similar to actual cows.

          Also, if I were more of a prescriptivist, I would accuse you of misusing the phrase “beg the question”.

          • suntzuanime says:

            But this doesn’t work for problems where perfect spheres have special properties which are wildly different from every other solid, including spheres that have been deformed to an infinitesimal degree. Which is basically the situation with perfect common knowledge game theory.

          • Scott F says:

            Suntzu – the issue might be that game theory’s perfect spheres actually do approximate our real spheres in most situations (e.g. ducks in a pond) but we are specifically fixating on just the questions where real spheres do not behave like perfect spheres at all.

            Some objections to these game theory questions take the form of “of course your ‘if-the-sphere-is-perfectly-round-then-flash-green-otherwise-flash-red’ machine gives different results when applied to real life spheres!”.

      • suntzuanime says:

        Yes but that’s a dumb condition which leads to results that are infinitely fragile. That’s the point I was making.

        • Carinthium says:

          I know that- I was clarifying because I thought somebody would make a counterargument and I wanted to make sure that they covered that point.

  10. Whateverfor says:

    Interestingly enough, if you assume that pro-social behavior is due to hard-coded moral urges instead of conscious game theory, then there is a real-world veil! Your genes do not necessarily know what your eventual ranking is; the source code is behind the veil.

    • RCF says:

      Your genes “know” that you’ll be a being with those genes, which can be a substantial bit of information. Furthermore, there are no coordination mechanisms; precommitting to an altruistic strategy doesn’t make others altruistic as well.

      • Zakharov says:

        There’s a bit of a superrational element in that the genes “know” they’ll be interacting with other copies of themselves (people will be interacting with their relatives).

        • RCF says:

          For every gene, there is some first mutation, and that first mutation does not have any other copies of the gene to take advantage of. And it’s not clear how it benefits from other copies of the gene even if they do exist; reproductive fitness is entirely relative, so the gene would have to benefit from other copies of the gene more than other genes do. If you’re appealing to the fact that organisms will be interacting with relatives, then it’s not clear how this is adding anything to pure kin selection.

  11. somnicule says:

    Fixed-length IPD can, in a sense, reduce to one-shot IPD. Does your system reduce to one with two agents, a more powerful one and a less powerful one? How might you look for solutions to that? Are there any?

  12. drethelin says:

    If oppression is usually negative sum rather than positive or neutral sum, and there are other activities that are positive sum but interfered with by “oppression”, then everyone, including 100, has an incentive to agree to SOME oppression-lowering contract to claim a smaller slice of a bigger pie. This is basically how serfdom works as opposed to outright theft. “If I let you farm land on your own and pay me 10 percent of all your food every year, I will end up with more wealth than if I kill you and take all the food you have now.”

    Once anyone is in a partial oppression relationship, they have incentives to push out total oppressors.

    • drethelin says:

      Or to put it another way: Cooperate and defect don’t map very well to real moves like Kill, Enslave, Trade, Ally With, and last but certainly not least: Leave.

    • drethelin says:

      Also this whole thing reminds me of Fnargle: It feels like 100 should be able to bully everyone else into some sort of arrangement that is best for 100 which ends up with net less oppression.

      • Scott F says:

        It should be possible, yes, and if there is no cost to 100 for beating someone else up, something like that will happen. You can see that if you add in a random element or allow coalitions, the “I’ll take you all on!” approach quickly becomes disincentivised.

  13. Princess_Stargirl says:

    I am usually pretty skeptical of “privilege” but reading the stuff about the doctor actually made me feel privileged. I have never had a job where I had to interact with a boss in this way. I have always been able to just ignore “stupid” rules and have never shown even moderate deference to a boss/professor/etc.

    I try to treat “superiors” the same way I treat everyone. I intend to be as nice as I can, and don’t want to set anyone back if I can avoid it. But I would find it pretty funny if someone actually expected me to always be on time, never mind early. At least for activities that do not critically depend on being on time (I have taught before and I always show up on time and prepared).

    I think I actually have the opposite ethic of most people. I feel terrible if I even plausibly mistreat someone below me. Of course I can only do so much, but I try. And I certainly don’t expect any deference. But with respect to my “bosses” I am pretty quick to say to myself “she is being unreasonable and hence I’m ignoring her, probably she will find someone else to bother.”

    Financially I am doing fine btw.

    • Skeptical Excitement says:

      What is your job and industry?

      • Princess_Stargirl says:

        Computer programmer right now. I have also taught and tutored, both privately and as someone’s employee. In high school I worked in a pet shop, lol.

        My theory is this is a mix of things. One is that the ideology in most organizations is intentionally made to seem more repressive than it is. Most people will obey rules even if no real punishment occurs for ignoring them. So organizations best benefit by pretending to care a lot about many things they don’t care about. Though some things are obviously serious (stealing, for example).

        The other reason is I always try to avoid direct conflict. If chastised I say I will follow the rule from now on, then just keep ignoring it. Getting into an open fight does not leave your “superior” a line of retreat (stop bothering you).

        I only break dumb rules. My quality of work is pretty high imo. I actually think I am a significant benefit to my employers. Relevantly, in high school I cut over 1/2 of my classes and in college more like 3/4s.

        The other humorous possibility is that the redpill people are right and ignoring rules signals high status (whether you have it or not). So it’s actually beneficial or neutral to your standing in an organization. I really hope this one is true! And I don’t usually root for the f-ing redpill.

        • Ialdabaoth says:

          It’s actually a blend. For example, people like me can’t break rules and get away with it, because people like me are useful tools for organizations to signal that they care about arbitrary rules.

        • Emile says:

          I like the saying “It’s better to ask for forgiveness than for permission”; I don’t really see myself as a rule breaker though (mostly because I can’t think of many explicit rules I’ve been told to follow at work).

          (this probably does make me somewhat privileged, thanks for pointing that out by the way)

  14. Toggle says:

    Intuitively, it seems like the best case for your model is an unstable equilibrium – and that’s unlikely. You posit only two types of activity (rule-building and oppression), within which utility can only flow up. With that kind of unidirectional utility transfer, there’s no balancing force to motivate rule-making. The lowest energy state looks an awful lot like Mr. 100 taking all the utils from everybody.

  15. RCF says:

    The original hypothetical is highly unrealistic, not only in that it simplifies things, but in that it ignores salient characteristics of the real world, to wit that even the most powerless can hurt the most powerful, even if it costs them much more utility than it costs others. For instance, a serf committing suicide would hurt their lord a minuscule amount, but it would still be a cost. A lord who oppressed his serfs so much that their lives are not worth living would soon find himself with no one to work his fields. So we should move on to the random case as more legitimately modeling the real world. And in this case, the utility multiple doesn’t have to be especially high for constant oppression to not be Kaldor–Hicks efficient. For instance, Mr. 100 could agree to reduce oppression of Mr. 1 by 1% in exchange for Mr. 1 not oppressing Mr. 100 at all. I’m not going to work out the math, but it’s likely that Mr. 1 could get an even better deal by “buying” the retaliatory power of others; for instance,

    Mr. 1 could promise to not oppress Mr. 2 at all if
    Mr. 2 will not oppress Mr. 100 iff
    Mr. 100 promises to reduce his oppression of Mr. 1 by 2%.

  16. Pat55word says:

    I think your variation 1 covered it, but people don’t normally hold fixed positions in society. So Mr 50 has a chance to be demoted to Mr 49. Which means that he should accept agreements that would positively affect people of 49 strength. People might hold out on agreements because they might be promoted as well, though, which might make an equilibrium.

    • Scott F says:

      Mr 82 has good reason to go around to all the Mr 70s and insinuate that they will very shortly be promoted to the 80s. If convinced, the 70s will be complicit in making themselves easier targets for the 80s.

  17. Jack V says:

    This makes me think of the United Nations. Everyone thinks the United Nations is _supposed_ to be fair, even while blatantly scheming to take advantage of other countries. But from one angle of view, what it actually does (very imperfectly) is to rearrange inter-state conflict so that everyone gets what they want in proportion to how likely they’d be to win a war if there was one. Which is still equally unfair, but a lot better simply by not wasting a lot of resources on _having_ wars.

  18. Stuart Armstrong says:

    A big problem remains: inter-person utility comparisons… http://lesswrong.com/lw/d8z/a_small_critique_of_total_utilitarianism/

  19. James James says:

    “If the servant tried to reduce the King’s utility by a large amount in order to improve his own utility by a tiny amount, he would be in big trouble.”

    Do you mean, “If the servant tried to reduce the King’s utility by a tiny amount in order to improve his own utility by a large amount, he would be in big trouble.”?

  20. wentnuts says:

    Assuming that it’s not raw utility that is exchanged in these interactions, but something that is limited or can otherwise affect the payoff (like money), it makes sense for Mr. 100 to sign an agreement banning extreme negative-sum interactions; otherwise there would be nothing left for him to steal. Or perhaps he will be even more agreeable to the proposal that others do not engage in any negative-sum “trades”, and he in return will refrain from unreasonably brutal beatings. If necessary, it can be stated in a technically uniform way, where everybody signs the same document: “you agree that if you happen to be Mr. 100, you will do this, and if you are a mere mortal, you will do that”.

    In reality the agreement can of course be multilayered, where the more powerful one is, the more one favors altruism (at least among one’s inferiors), because one wants a healthy substrate to feed on.

    Specifics depend on wealth stealability rules.

  21. Brett says:

    Fun fact – the prisoner’s dilemma was completely solved a few years ago by Freeman Dyson and William Press, who identified a class of zero-determinant strategies, of which tit-for-tat is one. These are strategies in which each player completely controls the score of the other player, meaning that it turns into an instance of the ultimatum game.

    • Brett says:

      Seriously, if you’re talking about IPD without including these strategies, you’re wrong. That’s how big a deal this was. http://m.pnas.org/content/109/26/10409.abstract

      • Army1987 says:

        That’s about what happens at equilibrium and seems to ignore how long it takes to reach the equilibrium. Colour me unimpressed.

        • Blake Riley says:

          The weird thing is that it’s not even an equilibrium result. Or if they think of it that way, they don’t spell out what solution concept they’re considering.

    • Blake Riley says:

      Press and Dyson have an interesting model, but their claim about reduction to an ultimatum game is off. If a player has the power to commit to one strategy and the other is forced to react, they’re implicitly assuming an ultimatum game in the background. Their contribution is to show the extent of power someone might have with commitment.

      If they talk about repeated games without at least mentioning the extensive literature on the folk theorem, something is missing. I don’t see how this is a big deal.

    • dublin says:

      I remember reading that paper and thinking they were overselling their results. I forget what my issues with it were though.

      edit @ Blake Riley: that sounds right. not meaningless, but far from “completely solved”.

  22. roystgnr says:

    it also gives the powerful an incentive to band together to better oppress the weak

    I’ve played games before where I was offered the agreement “Let’s team up against him because he’s in position X” and noticed “but after we do that, I’ll be in position X”. Such an offer is only superficially attractive.

    On the other hand, it’s stereotypically tragic that people do fall for “I did not speak out, because I was not an X” in real life…

    • RCF says:

      It seems to me that there are some games that are extremely amenable to such reasoning, primary examples being Risk and Illuminati. And of course it’s a major part of gameplay in such reality shows as Survivor. So a lot of the game comes down to figuring out how much of Position X you can be in without everyone ganging up on you.

  23. Blake Riley says:

    The unique Nash equilibrium of a finitely repeated prisoners’ dilemma is due to the unique Nash equilibrium in the stage game.

    Games with multiple Nash equilibria can support more interesting strategies with finite repetitions because credible punishments exist. For instance, consider something like a prisoners’ dilemma plus a seemingly useless action E, played over two rounds:

    ... C / D / E
    C (15,15) ( 0,18) ( 0, 2)
    D (18, 0) ( 5, 5) ( 0, 0)
    E ( 2, 0) ( 0, 0) ( 1, 1)

    The two-stage game has lots of subgame-perfect equilibria, including unconditionally playing D each round and unconditionally playing E each round. The most promising equilibrium strategy would be “On the first round, play C. If (C,C) happened on the first round, play D on the second, and otherwise play E,” which is available only because of the threat of E.
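
    A quick check of the claim, assuming I’ve read the matrix above correctly (payoffs are (row, column); the only thing verified here is that no first-round deviation pays, given that both players then play the prescribed stage-game equilibrium in the second round):

    payoff = {
        ("C", "C"): (15, 15), ("C", "D"): (0, 18), ("C", "E"): (0, 2),
        ("D", "C"): (18, 0),  ("D", "D"): (5, 5),  ("D", "E"): (0, 0),
        ("E", "C"): (2, 0),   ("E", "D"): (0, 0),  ("E", "E"): (1, 1),
    }

    def continuation(first_round):
        """Prescribed second-round action: D after (C,C), E after anything else."""
        return "D" if first_round == ("C", "C") else "E"

    def two_round_payoff(my_first_move, their_first_move="C"):
        """Row player's total when the column player follows the strategy."""
        r1 = payoff[(my_first_move, their_first_move)][0]
        c = continuation((my_first_move, their_first_move))
        r2 = payoff[(c, c)][0]   # both follow the same prescribed continuation
        return r1 + r2

    for move in ("C", "D", "E"):
        print(move, two_round_payoff(move))
    # C: 15 + 5 = 20, D: 18 + 1 = 19, E: 2 + 1 = 3 -> cooperating first is the best response.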

    • RCF says:

      That’s not a Nash equilibrium. It is strictly dominated by C on first round, D on second round.

      • Blake Riley says:

        Do you mean the strategy of “C on the first round and D in every subgame of the second round”? That’s one best response to my proposed strategy, but that doesn’t stop my strategy from being a mutual best response. It’s also not an equilibrium strategy on its own since “D in the first and every subgame of the second” is the only best response to it.

        • RCF says:

          I’m not clear what you mean by “mutual best response”. And of course this strategy isn’t a Nash equilibrium. The term “Nash equilibrium” doesn’t refer to a strategy, it refers to the set of strategies that the players have.

          “I still maintain the above is an equilibrium strategy,”

          What do you think “equilibrium strategy” means?

          • Blake Riley says:

            By “equilibrium strategy”, I meant a strategy that appears in some Nash equilibrium. Since there are multiple equilibria, I could have clarified that I was referring to the symmetric equilibrium where both players use the same strategy. I also should have said the strategy is a best response to itself rather than say the implicit pair of strategies are mutual best responses.

            Do you still think the strategy is strictly dominated? If the other player uses that strategy, which action would you choose in each subgame?

          • RCF says:

            If the other player had that strategy, I would be performing the same actions (C on first round, D on second), but from a different strategy (C on first round, D on second round no matter what).

      • Blake Riley says:

        I still maintain the above is an equilibrium strategy, but on second thought the “best” equilibrium strategy would be “Play C on the first round. Play D on the second, unless (D,C) or (C,D) happened on the first round. In those two subgames, play E”. That gives the same payoffs on the equilibrium path, but is more forgiving off the equilibrium path.

  24. 27chaos says:

    Has my rtsoc [rest of name omitted] email been banned for some reason? I don’t recall posting anything that would have earned me a ban, but I cannot comment using my normal account or when changing the account’s name.

    It would be nice if I were notified when banned. I spent a long time trying to figure out whether something was wrong with my computer. It would also be nice if whatever comment(s) that earned me the ban were pointed out to me.

    Did I simply comment too often? I was worried about that at one point. If that was so, I’ll try to be quiet more often in the future, if you confirm.

    Was it my concur comment that irritated you? I didn’t know what the rules were regarding comments like that. I understood brief comments might be irritating, but I also thought it was important that agreement be expressed so that your impression of the community’s opinion would be more accurate. But if you say so, then I won’t make such comments.

    Please take into consideration that social norms are difficult for me to recognize as I have Asperger’s. This is another reason feedback would be nice. 🙂

    I did expect to receive a warning message of some kind before getting banned. Was the policy changed?

  25. James Miller says:

    Everyone agrees to the following oath “whenever I encounter someone stronger than me I will truthfully state the most I would pay in dollars to not be oppressed by him (X), and whenever I encounter someone weaker than me I will truthfully state the most I would pay in dollars for the opportunity to oppress him (Y). If Y>X the oppression will happen, otherwise the weaker party will pay the stronger party [X+Y]/2 and no oppression will occur.”
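
    (A toy version of the oath, to make the rule concrete – this is my reading of it, with made-up dollar figures:)

    def resolve_encounter(X, Y):
        """X = most the weaker party would pay to avoid oppression,
        Y = most the stronger party would pay to oppress.
        Returns (oppression_happens, payment_from_weak_to_strong)."""
        if Y > X:
            return True, 0.0            # oppressor values it more: oppression happens
        return False, (X + Y) / 2.0     # otherwise the weaker party buys them off

    print(resolve_encounter(X=100, Y=30))   # (False, 65.0): pay $65, no oppression
    print(resolve_encounter(X=20, Y=50))    # (True, 0.0): oppression happens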

    • David Simon says:

      “[…] the weaker party will pay the stronger party [X+Y]/2 and no oppression will occur.”

      I don’t think that’s quite correct. In the scenario oppression is defined as “an interaction where the stronger person gains and the weaker person loses some utility”, so making the reasonable assumption that dollars have utility, there is still oppression going on.

      However, I agree that your system is better than a scenario with no pacts.

  26. Charlie says:

    But it has been pointed out there’s a flaw here. Suppose we are iterating for one hundred games. On Turn 100, you might as well defect, because there’s no way your opponent can punish you later. But that means both sides should always play (D,D) on Turn 100. But since you know on Turn 99 that your opponent must defect next turn, they can’t punish you any worse if you defect now. So both sides should always play (D,D) on turn 99. And so on by induction to everyone defecting the entire game.

    One common way to preserve the effectiveness of tit-for-tat here is to have an ecosystem of iterated prisoner’s dilemma games. Many players are in the ecosystem, and they are trying to accumulate points – one common game type even has the players reproduce, with success in reproduction governed by how many points you have.

    How does this favor tit-for-tat players? Well, consider a pool containing both tit-for-tat players and defect-rocks. When the defect rocks play each other they gain little. When two players from different sub-populations play, the tit-for-tat loses one turn of exploitation and the defect rock gains one turn of exploitation, but then after that they play like defect rocks and gain little.

    But when two tit-for-tat players play, they gain the entire game length of cooperation. As long as tit-for-tat players are common enough in the ecosystem that they can run into each other and reap the rewards, they will gain more points than defect-rocks.

    One might ask “what about defecting on the last turn?” And indeed, if tit-for-tats have taken over the population, a mutant who defects on the last turn has a competitive advantage. If tit-for-tat is a plant, defecting on the last turn makes you an herbivore. So the population of last-turn-defectors will grow and grow.

    But induction does not lead us back to defect-rocks. We have already seen that tit-for-tat can take over an ecosystem of mostly defect-rocks, because defect rocks gain so little payoff when they play against each other. The trend to start defecting on earlier and earlier turns will continue until the players in the ecosystem are paying so much in defection-costs that a small band of tit-for-tat players could gain a foothold again.

    In the eventual equilibrium, there’s a diverse mixture of levels of defection, but everyone in the ecosystem actually gets the same average payoff. So even though it looks like the more defecting strategies are “exploiting suckers,” they get the same average payoff as those suckers, because the suckers can cooperate with each other and the defectors can’t. Both defectors and cooperators are just filling a niche in the ecosystem.
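
    A stripped-down version of this ecosystem, for anyone who wants to watch it happen (the payoffs T=5, R=3, P=1, S=0, the game length, and the population sizes are all made-up illustration, not anything canonical):

    import random

    T, R, P, S = 5, 3, 1, 0
    ROUNDS = 20
    PAY = {("C", "C"): (R, R), ("C", "D"): (S, T), ("D", "C"): (T, S), ("D", "D"): (P, P)}

    def play(a, b):
        """One iterated game between strategies 'tft' and 'rock' (always defect)."""
        score_a = score_b = 0
        last_a = last_b = "C"   # tit-for-tat opens with cooperation
        for _ in range(ROUNDS):
            move_a = "D" if a == "rock" else last_b
            move_b = "D" if b == "rock" else last_a
            da, db = PAY[(move_a, move_b)]
            score_a, score_b = score_a + da, score_b + db
            last_a, last_b = move_a, move_b
        return score_a, score_b

    def generation(pop, rng):
        """Random pairing, then reproduction proportional to score."""
        rng.shuffle(pop)
        scored = []
        for i in range(0, len(pop) - 1, 2):
            sa, sb = play(pop[i], pop[i + 1])
            scored += [(pop[i], sa), (pop[i + 1], sb)]
        return rng.choices([s for s, _ in scored], weights=[w for _, w in scored], k=len(pop))

    rng = random.Random(0)
    pop = ["tft"] * 20 + ["rock"] * 80
    for _ in range(30):
        pop = generation(pop, rng)
    print("tit-for-tat share after 30 generations:", pop.count("tft") / len(pop))

    Starting from a 20% foothold, the tit-for-tat players outscore the defect-rocks whenever they find each other, so their share of the population tends to grow, as described above. Adding a last-turn-defector strategy to the same loop shows the later stages of the story.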

    • Matthew O says:

      What if you took the “Public Goods Experiment” (where pairs of agents are randomly assigned to each other each round, with $10 up for grabs that round; whoever gets picked as Person A gets to propose how to divide the amount between the two, and Person B can either accept the division or refuse, in which case both people get $0 that round), and you had both humans and computer programs playing in the game, but you don’t know which is which except for being given information about that agent’s past deals with others (this has been done, I think), AND (this is the novel part) you made it a real competition, a real professional sport, centered around seeing who could end up with the most money after a random number of rounds?

      I wonder who would do better: humans or AI scripts?

      • Scott F says:

        I would find it hard to believe that a) the best method for maximising dollars in this game is complicated enough that something humans can do and computers can’t is relevant (computers are better than humans at guessing an agent’s computer-or-human likelihood based on past actions, etc.), and b) humans can carry out this method using their privileged knowledge consistently enough that they average a better return than a computer perfectly carrying out a simpler method.

  27. MugaSofer says:

    Why would Moloch spontaneously decide to be nice?

    I don’t think markets between selfish agents are likely to spontaneously mimic the decisions of perfectly altruistic agents, any more than I expect them to mimic the decisions of paperclip optimizers.

    • David Simon says:

      They should all spontaneously decide to be nice, because it would end up benefiting almost everyone.

      Suppose they all gang up on Mr. 100, kill him, and replace him with an equally strong robot that is programmed to agree to all pacts which everyone else agrees to. This benefits everybody but Mr. 100. Therefore, if it were possible, Mr. 100 should agree to act as the robot would act, because he prefers losing some oppression opportunities over dying.

      • MugaSofer says:

        Well, yeah.

        If they could get everyone to co-operate, it would benefit everyone; but they can’t, and the most rational option for the individual screws everyone.

        That’s kind of the essence of Moloch.

        [In the actual example, they can’t co-operate because the strongest person is always tempted to defect, even though acausally this will have caused the person above them to defect.

        In your example … IIUC the strongest coalition would be 2-100 ganging up on 1 to enforce their preferences on him. But regardless, if 1-99 ganged up on 100, then 1-98 would gang up on 99, and so on by induction until 2 rules a kingdom of robots. They should refuse to gang up in the first place, not agree to serve 2 – or act like utilitarians, as you suggest.]

  28. dustydog says:

    From a story point of view, the competitions between Misters 1-100 shouldn’t be one on one. They should be team on team.

    Mister 1 has value, in that he can help Mister 99 tie Mister 100.

    Mister 10 has to spend his time intimidating 1-9 (because he can, if he catches them alone), which gets his team power up to 55. So maybe his team can intimidate 11 through 14 to join. Now he has a team with 105 power, enough to take on Mister 100 alone.

    Who ends up winning and losing depends on energy, confidence, leadership, and luck, as well as unequal advantages starting out. Just like real life.

  29. Kiboh says:

    Thoughts on all of Scott’s alternate scenarios:

    1. Depending on what opportunities for oppression exist with what frequency, one of the key tactics for stronger participants in this scenario could be to say things like “If you don’t resist my mild oppression, I promise to pass up opportunities to severely oppress you later; and if you do, I promise to take them”. They’d probably end up publicly signing contracts to that effect. This has complicated tactical implications, but I’m not going to try and puzzle them out.

    2. I don’t think superrationality could apply here. Mr 100 has no reason to sign anything, ever, because his share of the pie is already at its theoretical maximum. No-one can offer him anything he can’t take. The only reason TDT etc. could make him want to ease up is if he’s worried about the possibility of Mr 101 moving in next door.

    3. In this scenario, I think it would become a race to see who can get more than half the world’s power on their side. For the top quarter of the population, the ‘universal harmony’ solution is dominated by the ‘form an unbreakable, undefeatable alliance composed of everyone from Mr 75 to Mr 100, then freely oppress everyone else forever’ solution.

  30. AJD says:

    (now I want to write a science fiction novel about a planet full of aliens who are perfect game theorists, but who always behave kindly and respectfully to one another. Then some idiot performs a census, and the whole place collapses into apocalyptic total war.)

    That’s basically the premise of the blue-eyes logic puzzle.

  31. Paul Torek says:

    Tom Hunt comes kinda close to what I want to say, but I don’t think cooperation is sub-rational. Rather, it’s part non-rational, and part superrational, in that order of importance. The non-rational part is that humans typically come with basic preferences for empathy and fairness. Not all humans, but most. As a direct consequence, crudely applied utility theory and game theory fail to describe human beings, completely aside from human irrationality. When we pretend that outcomes can be evaluated for one human without taking into consideration both the outcomes for others, and the relationships/actions that led there, we get crude utility and game theory. Sometimes, crude theory works well enough for practical purposes; usually, not so much.

    The superrational part is that rationality itself stacks the moral deck. Rational discussion requires openness, honesty, consideration of interlocutors’ points, and other factors that are definitely not morally neutral. Rationality doesn’t require morality, but it is pro-moral. (David Velleman puts this nicely, so I stole his phrase.) Rational people prize rationality – perhaps not by definition, but that is the only way it works in actual fact; those who do not prize it will not achieve it. This puts a thumb on the scale – one that can be outweighed, for sure, but that doesn’t make it nonexistent – in favor of cooperation.

    So indeed, you Kant dismiss universalizability, and cooperation needs to be un-veiled. But please let’s leave the toy (Homo economicus, atomistic) game theory problems behind, and remember what real humans value.

    Of course, even within the atomistic paradigm, there are good responses available, as social justice warlock points out about Hobbes. Not trashing that. But without the two points I’ve emphasized, I think you only get a feeble shadow of morality as we know it.

  32. Emile says:

    A minor tweak to the hundred-people scenario: say everybody can spend some resources to increase their strength, but if everybody does so, they end up in pretty much the same place. In this case, even Mr 100 may have an incentive to sign some agreements so that he can stop spending those resources, provided that cost is greater than the gain he gets from fights (but he pays it anyway because if he didn’t he’d lose fights and lose even more).

    That constitutes one more mechanism (along with uncertainty and teaming up on each other) by which I believe people are inclined to get into such agreements in the real world. An arms race can be expensive!
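
    The arms-race point can be put as a tiny payoff table (two sides only, with a made-up spending cost c and prize v; this is a simplification of the hundred-person version, not something from the post):

    def payoffs(a_arms, b_arms, c=10, v=25):
        """Each side pays c if it arms; whoever out-arms the other wins fights worth v."""
        a = -c if a_arms else 0
        b = -c if b_arms else 0
        if a_arms and not b_arms:
            a, b = a + v, b - v
        elif b_arms and not a_arms:
            a, b = a - v, b + v
        return a, b

    for a_arms in (True, False):
        for b_arms in (True, False):
            print(a_arms, b_arms, payoffs(a_arms, b_arms))
    # With v > c, arming is each side's dominant move, but mutual arming leaves both
    # at (-10, -10), worse than the (0, 0) a mutual agreement would give them.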

    Another mechanism is powerful people who care about the welfare of non-powerful people (typically, their children).

  33. Matthew says:

    As long as people are adding epicycles…

    What sorts of agreements do you get when you care about your offspring as well as yourself, and the people on top can mold the system so that social mobility is low but nonzero (say +/- 10)?

  34. Anonymous says:

    Another simple solution to this is to make knowledge of each person’s number uncertain, such as if it’s impossible to tell Mr. 80 from Mr. 75-85 until you fight him. (You also have to introduce a range of possible maximum numbers, so that Mr. 100 doesn’t know whether a perceived Mr. 102 is impossible because there are only 100 people.) This should make everyone agree to the contract.

  35. peppermint says:

    one of the things that does make Kings behave at least somewhat morally is the knowledge that they will be overthrown if they do not;

    No. The reason kings behave much, much better than presidents and prime ministers is that they have a stake in the future, beyond the next election, or even the next few decades.

    The other thing that’s wrong with this entire article is the insistence that people should behave “rationally”, instead of according to a combination of the incentive structure and their conscience. We are not robots, we are not angels, we are humans.

    Think about conscience and private property. Then, as an added bonus, think about some people being born into ethnic groups, and some ethnic groups having different types of consciences. I.e. think like a neoreactionary of the religious, commercialist, or nationalist strains.

    • Carinthium says:

      1- Even ignoring points Scott Alexander has already made, I cite the behaviour of the Great Powers of Europe from 1648 to 1815 since I actually know enough about them to make a reasonable argument there.

      a- England/Britain was considered the best governed (as French Revolutionary demands eventually admitted) but was pretty far from the neoreactionary model of how things should be.
      b- Far less of the budget was spent on ordinary people and far more attention was spent on war with other powers comparatively. Yes democracies do plot against each other, but it doesn’t take up as much time and money.
      c- Large numbers of wars were started for no reason that modern people (I’m an amoralist so I don’t really count) would accept. Proportionate to population, far more people died from their wars than die in modern wars, because democracies are held back from wars by their own people.

      2- It’s far more reasonable to say that rationality is the ideal against which actual behaviour is measured. People really are capable of considerable improvement in their rationality even if they can never perfect it. Understanding what perfectly rational behaviour is is a useful step towards improvement.
      3- What’s the evidence that different ethnic groups behave differently regardless of cultural raising? Don’t studies of adoptions contradict you?

      • suntzuanime says:

        Is 1c actually true? I mean sure it is if your definition of modern is “post WW2”, but that would be really disingenuous. My understanding is that the European death rate due to war was pretty similar for the 18th and 20th centuries.

        1b is certainly true but kind of a pointless claim, since you can achieve a huge fraction of your budget being spent on ordinary people by taxing ordinary people their whole salaries and then redistributing it back to them, without making their lives better at all on net. A more reasonable way to measure the costliness of military spending is as a fraction of GDP, not as a fraction of government budget, to give governments that tax less fair credit.

        • peppermint says:

          for a response to 1c see http://www.moreright.net/on-the-absence-of-war/

          It’s an amazingly audacious lie. Pick up a book from the time of the War between the States – democracy had been grinding away at Americans for a hundred years – to find out how much more civilized that was than WWII or the post-WWII conflicts. Much less the American Rebellion.

          The French Revolution was exceptionally bloody, with the city of Nantes in particular being harrowed by mass executions – of less than 10,000 people. One form of execution was called “Republican Marriage”, but France was at the time a dictatorship by Robespierre and later Napoleon, certainly not a democracy. The French Revolution was definitely not a war fought between democracies or even by a democracy, so democracy has her hands still clean.

          • Creutzer says:

            That response to 1c feels fishy, insofar as lots of civil wars in undeveloped countries are of questionable relevance to the issue of which forms of government are more or less conducive to peace.

            And isn’t the biggest selling point the absence of wars between democracies, anyway?

          • peppermint says:

            lots of civil wars in undeveloped countries are of questionable relevance to the issue of which forms of government are more or less conducive to peace.

            Yes, let’s ignore the effect of the US and EU, so we can pretend that third-world violence isn’t our problem. The US is the arsenal of democracy; when the US decided to no longer sell weapons to Egypt, the government fell.

            absence of wars between democracies,

            What if I told you [morpheus.jpg] that in this context democracy means nothing other than US puppet?

            Is Venezuela a democracy? Is Russia? Was South Vietnam? Is North Vietnam? Is Pakistan a democracy? How about Afghanistan?

            So yes, there have been few wars between democracies, with the notable exception of Argentina and the UK. The UK is of course not a democracy but a monarchy, and Argentina at the time was 100% democratic; so predictably the bumbling US chose the less-progressive side in that conflict, like in every other conflict since WWII.

          • peterdjones says:

            The UK has a democratic government with a monarch as head of state. Argentina at the time was a dictatorship.

          • Creutzer says:

            Yes, let’s ignore the effect of the US and EU, so we can pretend that third-world violence isn’t our problem.

            By implicature, you’re claiming that democracies are especially conducive to war in the rest of the world, just not among themselves. Is the idea that democracy, and the consequent leftward shift in policy, caused colonies to be abandoned, which caused them to descend into a mess?

            It’s still hard to see how such effects, which depend on the behaviour of a large number of political players on the global scene, bear on the question of which system of government is best for an individual state. And if we’re not talking “best for an individual state”, then what is the criterion we want to evaluate systems of government on, anyway?

          • suntzuanime says:

            World War II should have been enough to stop the silly notion that democracy is enough to prevent war from ever arising. Hitler was elected.

            It’s a very “No True Scotsman” thing; Hitler did nasty things, therefore his government is categorized as “fascist” instead of “democratic” and we talk about how wonderful “democratic” regimes are as though we did not artificially define that wonderfulness into existence.

          • peppermint says:

            The US and the EU give money, weapons, and various kinds of random NGO crap to the third world.

            Mugabe, who was supported by the UK back when he took over Rhodesia and took away the civil rights of Whites, recently said that he prefers to deal with China instead of the West, because China isn’t trying to impose current Western moral standards like homosexuality.

            Does the US have any responsibility for the effects of what the US imposes?

          • Creutzer says:

            @suntzuanime: Are you saying that Nazi Germany was a democracy, as opposed to a dictatorship that had arisen from a democracy? Of course the potential to morph into a harmful dictatorship is a relevant consideration, but it seems to me that it is reasonable to separate this from the question of how belligerent states with a certain form of government are.

            @peppermint: Nothing to reply to since you aren’t addressing my points.

          • peterdjones says:

            @Peppermint

            You have a gift for telescoping historic events to give a misleading impression. The UK backed a democratic election which was won – as it turned out, with typical chicanery – by Mugabe, rather than by their favoured candidate, Bishop Muzorewa.

            And Mugabe’s policies towards the white farmers weren’t exactly in his manifesto.

            And what would you have done? Recolonised Zim/Rhod? Sent in troops on the ground? Backed Ian Smith to create another South Africa?

          • peterdjones says:

            @SunTzu

            Hitler was elected “Kinda Sorta”

          • peterdjones says:

            @peppermint

            I’m aware of the idea that Democratic Peace is actually Pax Americana. It’s on Wikipedia, so it’s not exactly Red Pill.

          • peppermint says:

            And Mugabe’s policies towards the white farmers weren’t exactly in his manifesto.

            I am shocked that a democratic politician would lie. Democracy is based on people telling the truth so that the best ideas can get implemented.

          • peterdjones says:

            You are not going to prove that democracy is worse than something else by repeating that democracy is imperfect.

          • Matthew says:

            @peppermint

            There is an actual political science literature on campaign promises. Politicians generally try to keep them.

          • suntzuanime says:

            @peterdjones: Bush was elected “Kinda Sorta”, let’s not split hairs. The point is that people abuse the fuzzy definition of what counts as a “true” democracy to engage in all sorts of hypocrisy to make their preferred political system look better or worse, to the point where “no wars between democracies” isn’t a meaningful thing to say unless you pin down your definition pretty explicitly.

          • peterdjones says:

            @SunTzu

            I never said that, though.

            And the problem remains that pointing out the shortcomings of democracies that might not be democracies does nothing to counteract the shortcomings of monarchies that are definitely monarchies.

      • peppermint says:

        far more people died from their wars than die in modern wars

        are you serious

        the ghosts of WWII, South America and Southeast Asia, and a thousand African bush wars say what?

        Oh, but maybe all the constant, occasionally genocidal warfare over the past 50 years wasn’t against democracies. That means it’s not democracy’s fault.

        What you said for (2) is completely unrelated to what I said.

        As to (3), no, adoption studies do not contradict genetic differences in behavioral traits. I can’t imagine how anyone can take evolution seriously and at the same time believe that behavioral traits should be the same in all populations, or that anyone who has had serious contact with other groups could believe that behavioral traits are the same in all populations.

        • Nornagest says:

          How about instead of signaling disapproval, we look at some actual numbers?

          Judging from this list, WWII blows everything else out of the water in absolute numbers (unless you take the high estimate for the European colonization of the Americas, an option I’m not happy with for various reasons beyond the scope of this post). As a percentage of world population, though, the Mongol conquests seem likely to take that dubious crown, and most of the other high slots would go to Chinese civil wars.

          The Napoleonic Wars and the Thirty Years’ War — by most measures the most destructive European wars prior to WWI — are no more than middling by comparison. The worst Cold War conflicts are more than an order of magnitude less significant.

          • peppermint says:

            Fortunately, the Napoleonic Wars were between the dictatorship of Napoleon, who was not democratically elected, and the Ancien Régime of old reactionary Catholics who really needed to codify and update their laws anyway.

            What if I told you [morpheus.jpg] that jingoistic nationalism with a simplistic, irrevocably idealistic foreign policy is the result of democratic tendencies?

            Hear me out – who was singing “we don’t want to fight, but by Jingo, if we do / we’ve got the guns, we’ve got the ships, we’ve got the money too / we’ve beat the Bear before and while we’re Britons true / no Russian will set foot in Constantinople”?

            Why were they singing that?

            What, exactly, is a “Briton” anyway?

            But really, invading Iraq and setting up a farcical government there is the least of democracy’s crimes.

          • Nornagest says:

            What if I told you [morpheus.jpg] that jingoistic nationalism with a simplistic, irrevocably idealistic foreign policy is the result of democratic tendencies?

            If you told me that, I’d tell you that you’d neglected to support that statement with anything resembling an argument.

            Nationalism has a complex history but I don’t see a case for tying it closely to democracy, unless you’re defining “democratic tendencies” as something closer to “modernity” (and I’d still disagree; there’s plenty of pugnacious nationalism in Herodotus and Tacitus, among other classical authors). The usual type specimen for a nationalistic war — WWI — was fought mainly between aristocratic powers, though the transition to constitutional monarchy had begun in several and was largely complete in a few.

          • peppermint says:

            I mention a war that the English were convinced to fight, under the name Britons, to prevent the Russians from taking a city from the Turks.

            You mention wars fought by Romans and Greeks for the territorial aggrandizement of Romans and Greeks.

            Just as long as you can find some evidence for something you can call nationalism, though. I mean, I could have defined it better, but Pericles is pretty persuasive:

            We secure our friends not by accepting favours but by doing them…. We are alone among mankind in doing men benefits, not on calculations of self-interest, but in the fearless confidence of freedom. In a word I claim that our city as a whole is an education to Hellas.

            Do you know what the actual English plan for Constantinople was?

        • Carinthium says:

          On Point 1, okay okay I screwed up!

          Point 2 was because you attacked the idea that people should behave rationally.

          Your reply on Point 3 does demonstrate differences, but I never said there weren’t any. I was talking about differences amongst ethnic groups that cannot be explained by cultural upbringing.

          If you have studies to cite, why don’t you show them?

          • peppermint says:

            Sure. On point (2) I was attacking the narrow game-theoretic rationality that this article talks about, which I believe few in the Rationalist community actually subscribe to. I think that if I were to play an iterated prisoner’s dilemma with Scott Alexander, we would end up cooperating every time, including the last time.
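
            (To spell out what that narrow rationality prescribes – a toy sketch of my own, assuming standard prisoner’s-dilemma payoffs, not anything from the article: in the final round there is no future to protect, so defection dominates; and since nothing later depends on the final round’s play, the same argument peels back through every earlier round.)

                # Stage-game payoffs to the row player (a standard prisoner's dilemma):
                PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

                def dominates(a, b):
                    # a strictly dominates b if it pays more against every opponent move
                    return all(PAYOFF[(a, o)] > PAYOFF[(b, o)] for o in 'CD')

                def backward_induction(rounds):
                    # Unravelling: in the last round there is no future, so the dominant
                    # move (defect) gets played; given that, nothing later depends on the
                    # current round, so the same reasoning applies one round earlier, and
                    # so on back to round 1.
                    assert dominates('D', 'C')
                    return [(r, 'D') for r in range(1, rounds + 1)]

                print(backward_induction(5))  # defect in every round, even the first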

            As to point (3), it’s tricky, because it’s probably the most emotionally charged political issue on this patch of the globe today. From 30,000 feet, there’s a pretty extensive literature on both sides making their respective claims.

            There are these two species of baboons, the Hamadryas baboon of the highlands and the Yellow baboon from the lowlands. If they interbreed, they get a hybrid with mixed characteristics.

            The Hamadryas prefer to live in harems, and the Yellows don’t. There’s a video on YouTube of some researchers taking a Yellow female into Hamadryas territory, but leaving her in a cage: https://www.youtube.com/watch?v=4LTWi13_jjk (starts at 17:30).

            Are their behaviors cultural, or instinctual, or both? Are our behaviors cultural, or instinctual, or both? How about groups of humans who have had particular types of families for thousands of years – does that affect their instinctual behavior? If it doesn’t, there needs to be a compelling reason.

            I think the idea of anti-racism as a scientific principle is ultimately going to sound as silly as Aristotle’s four humors theory of psychology. Biology has evolved and psychology is increasingly grounded in it.

          • peppermint says:

            Anyway… the reason I come here is that arguing with reasonable people is a great way to discover things. For point 1, episodes that could be embarrassing to reactionaries include the constant wars against heretics that left part of central Europe depopulated, and the constant Arab slave raids that left parts of the European coast depopulated. The relevant question isn’t so much why the heretics ruined everything they touched (see also Shafarevich’s The Socialist Phenomenon), or why the Arabs wanted slaves, but why the kings of the Middle Ages weren’t able to protect their peoples.

          • Carinthium says:

            Looking at this stuff in more detail, I think you’re being quite reasonable on Point 2, and I don’t know enough about Point 3, so I’ll concede the argument.

          • peterdjones says:

            @peppermint

            “Are their behaviors cultural, or instinctual, or both?”

            Can complex behaviours be stored in DNA?

          • peppermint says:

            can complex behaviors be stored in DNA

            …yes, they can. If you disagree, please create a computer program capable of checking the syntax of human language and tagging articles based on its semantics, or identifying human faces.

            Who tells a goat to butt heads with the other goats to determine their social ranking, and thus the order that they choose mates when they meet the female herd? Who tells them to stick their hoohah in the wazoo?

            Of course complicated behavior is partly in the DNA. It could not be otherwise.

          • peterdjones says:

            ” Who tells a goat to butt heads with the other goats to determine their social ranking, and thus the order that they choose mates when they meet the female herd? Who tells them to stick their hoohah in the wazoo?”

            Not complex behaviour.

          • peppermint says:

            Sure it isn’t. Anything you can do isn’t complex behavior.

            Can you define complex behavior? I would go with anything that a computer would have trouble with, like walking around, recognizing faces, speaking fluently and picking up chicks. I probably wouldn’t include the logic puzzles I spend a lot of time on.

            Of course, humanity spent quite a long time evolving certain behaviors – and different ethnic groups spent quite a long time using various cultural norms to determine who mates with whom, thereby exerting an influence on human evolution.

    • peterdjones says:

      The reason kings don’t behave much better than elected politicians is that the only qualification is popping out of the right womb, and that they can’t be deposed peacefully.

      • peppermint says:

        What does being able to be deposed peacefully mean? It means incentives such that the time horizon is the next election cycle, and policies are judged on winning the next election and/or on how to push something through disingenuously.

        For example, the Bush tax cuts, which the CBO dutifully scored according to the written text saying they would be phased in over the next ten years and then abruptly cancelled.

        Or Obamacare: no one really knew what was going to be in it when it was passed, and its major effects were intended to be delayed for long enough to make it difficult to repeal.

        Or Social Security – why is it paid for by the most regressive tax possible?

        Why is fracking in 100 places possible, but nuclear reactors in 1 place impossible?

        What is Corexit and who approved of its use at Deepwater Horizon, and why?

        Democracy has a punitive cost in terms of the things that are possible. But that’s okay, because at least you get to pretend that there’s no difference between you and the people in charge.

        • peterdjones says:

          You can add longer time preferences to democracy by having career civil servants, second chambers, etc. Most democracies have something like that.

          Your examples of the alleged failures of democracy are nothing compared to Ivan the Terrible or whatever.

          • peppermint says:

            career civil servants

            yes, like the CBO. Or the IRS. Or the CIA.

            second chambers

            Ah yes, the US Senate. Or did you mean the House of Lords – which, being an “aristocracy” with titles but no actual ownership of anything, is reliably progressive.

            Ivan the Terrible

            Meanwhile, Joseph Stalin, who was not a dictator, was also not the democratically elected leader of the USSR. The Chairmen of the Presidium of the Supreme Soviet of the USSR, while Stalin was a secretary of a political party, were Mikhail Kalinin and Nikolay Shvernik.

            The USSR also had a bicameral legislature.

          • peterdjones says:

            @peppermint

            What is your actual point? The not-so-great branches of the civil service are still nothing compared to Ivan the Terrible or whatever. Could you not predict that response?

            Stalin…what??? I can’t even guess your argument.

          • peppermint says:

            You say you like career civil servants and bicameralism. I will of course note that career civil servants mean less democracy, not more, and are therefore a good idea. However, bicameralism doesn’t distinguish the USSR from a democracy.

            Maybe you can find a principle that distinguishes the USSR from a democracy. Is it that the USSR didn’t have very many political parties? If so, China and Venezuela are democracies, since they have multiple parties. Well, then, we need to let the other parties win. And then we have the West as it stands; and the few wars between Western countries prove that democracy is okay.

            There are of course books about why multiple parties with a chance at winning caused the great technological advances and social progress of the 20th century. There has indeed been sublime social progress – where else would the mainstream press go to bat for female game developer Zoe Quinn, or the robber who punched the cop?

            But when you look at the actual machinations of a political system with multiple parties that could win, what you see is that they try to sneak their ideology in through lying, try to steal everything that is not nailed down and burn everything else so the other party can’t have it, and the time horizon is at most two years.

            You may not have a problem being lied to. After all, it’s for the best – Western countries haven’t fought any wars recently, and ‘citizen’ sounds so much better than ‘peasant’.

          • peterdjones says:

          The fact that democratic politicians lie is a symptom of the fact that they need consent. Those who have absolute power don’t lie – not because they are too good and noble, but because they don’t need to.

          • peppermint says:

            How about that: democracy creates incentives for lying, vote-buying, two-year time horizons, slash-and-burn politics, whipping up the public into a murderous fury, and other kinds of media manipulation; and all you get out of it is the ability to tell yourself that there’s no difference between you and the people in charge.

            That pretty much sums it up.

            I don’t care whether the king is noble or not. Metternich was prime minister to a Habsburg monarch whose only coherent order was “I am the Emperor, and I want dumplings”. In 1848, there was a revolution; the Emperor asked Metternich, “But, are they allowed to do that?”

            Why are they allowed to do that?

          • peterdjones says:

            You keep saying the same thing to me, so I’ll keep saying the same thing to you.

            Democracy has all those problems, at its worst, and monarchism is nonetheless worse still, because it’s handing absolute power to someone who could be a psychopath.

            Metternich: so monarchy is a great system, providing you bolt something else onto it to fix the problems. Hmmm.

          • peppermint says:

            you’re missing the point. It’s about private property.

            Metternich was a manager, hired by the owner to manage.

            Now, we can say that the President of the US is also a manager, hired by the owners to manage. That’s the etymology of republic.

            Of course, today’s democratic republics suffer from control fraud and other moral hazards.

    • RCF says:

      “No. The reason kings behave much, much better than presidents and prime ministers is that they have a stake in the future, beyond the next election, or even the next few decades.”

      I think that it’s rather bad form to quote someone’s statement, claim disagreement, and then present a supporting point that doesn’t actually contradict the statement, and in fact is a corollary. Not wanting to be deposed is a subset of “stake in the future”.

      “Then, as an added bonus, think about some people being born into ethnic groups, and some ethnic groups having different types of consciences.”

      Ethnic groups do not have consciences, people do. You are posting nonsense, and the most apparent reason for doing so is racist motives.

      “for a response to 1c see http://www.moreright.net/on-the-absence-of-war/”

      That is not at all a response to 1c. You have now lost any assumption of good faith I may have had that when you post a link in a manner implying that it establishes a claim, it does in fact establish that claim.

      • peppermint says:

        I think that it’s rather bad form to quote someone’s statement, claim disagreement, and then present a supporting point that doesn’t actually contradict the statement, and in fact is a corollary. Not wanting to be deposed is a subset of “stake in the future”.

        Yes, elected politicians have a stake in the next two, four, or six years, depending on what office they are elected to.

        Ethnic groups do not have consciences, people do.

        When did I say ethnic groups have consciences? I said that people of different ethnic groups have different behavioral traits. You even recognized what I actually said, though you still had to pretend I said something retarded.

        You are posting nonsense, and the most apparent reason for doing so is racist motives.

        You are posting nonsense, and the most apparent reason for doing so is anti-racist motives.

        this is not at all a response to 1c

        Point 1c was that today’s world is less violent than the pre-democracy times.

        • RCF says:

          “When did I say ethnic groups have consciences?”

          What the hell do you mean, when did you say that? I clearly quoted you saying that. What, you want me to include the timestamp? Here you go:

          September 6, 2014 at 10:28 pm:

          and some ethnic groups having different types of consciences.

          “I said that people of different ethnic groups have different behavioral traits.”

          So, there exist pairs of a trait T and an ethnic group E such that T is unique to E? That’s really not much less idiotic than your original claim. I could steelman your claim into something reasonable such as “There are substantial variations in the prevalence of some traits among ethnic groups”, but I shouldn’t have to write your argument for you. You either hold a manifestly absurd position, or you are wording your claim in an outrageously sloppy manner that displays a deep indifference to clarity and precision, and looks for all the world like a precursor to a bunch of racist equivocation.

          “You even recognized what I actually said, though you still had to pretend I said something retarded.”

          Reported for managing to be dishonest, rude, and ableist in the same sentence.

          “Point 1c was that today’s world is less violent than the pre-democracy times.”

          Yes, the operative word being “less”, not “not at all”.

  36. social justice warlock says:

    You might find this interesting. Like all simulation studies it’s an argument from fictional evidence, but it’s an interesting proof of concept: a world of game-interacting agents where equity norms are the ultimate destination, but where it can take an arbitrary amount of time being non-equitable and class-based to get there.
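
    A minimal sketch of that family of models (not the linked study itself – the divide-a-pie-of-10 game, the demand values 4/5/6, and the imitate-whoever-does-better rule are all illustrative assumptions): agents meet at random, make demands, and get paid only if the two demands are compatible; occasionally an agent copies someone with a higher average payoff. Depending on the run, the population either settles on the equal split or spends a long time in a class-like mixture of greedy 6-demanders living off modest 4-demanders.

        import random

        DEMANDS = [4, 5, 6]   # possible claims on a pie of 10 (illustrative values)
        N_AGENTS = 100
        ROUNDS = 50000

        def payoff(mine, theirs):
            # Nash demand game: you get your claim only if both claims fit in the pie.
            return mine if mine + theirs <= 10 else 0

        def simulate(seed=0):
            rng = random.Random(seed)
            strategy = [rng.choice(DEMANDS) for _ in range(N_AGENTS)]
            total = [0.0] * N_AGENTS  # cumulative payoff per agent
            plays = [1] * N_AGENTS    # interactions played (start at 1 to avoid division by zero)
            for _ in range(ROUNDS):
                # two randomly selected agents meet and make their demands
                i, j = rng.sample(range(N_AGENTS), 2)
                total[i] += payoff(strategy[i], strategy[j]); plays[i] += 1
                total[j] += payoff(strategy[j], strategy[i]); plays[j] += 1
                # crude imitation step: a random agent copies a random role model
                # whose average payoff so far is higher
                k, m = rng.sample(range(N_AGENTS), 2)
                if total[m] / plays[m] > total[k] / plays[k]:
                    strategy[k] = strategy[m]
            return {d: strategy.count(d) for d in DEMANDS}

        if __name__ == "__main__":
            # end states vary by seed: mostly 5s (the equal split), or a lingering 4-and-6 mixture
            print(simulate())

    The 4-and-6 mixture is the class-like state: the greedy demand only pays while there are modest demanders left to exploit, and no one ever profits at a 5-demander’s expense.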

    More generally, I think the fact that humans exhibit bounded rationality means that, instead of a transcendental deduction of the categories of moral thought, the empirical process will be one of distributed computation over history, with flashes of transcendence here and there.

  37. Eli says:

    Scott, it seems to me as if you’ve been poisoned by philosophy classes, and are thus attempting to reason morality out from first principles and thought experiments rather than by looking at the cognition, sociology, and history of actual people. Why not start from reality and then try to work back to a theory? You’re enough of a rationalist that you ought to dismiss the Open Question Argument with a simple wave of “Morals/values are complex — but reducible!”

    For reference, I am a realist in the sense that I do think something genuine is going on in our moral cognition, that moral statements have truth-values and some of those are “true!”, but I also think that phrasing these things in the language used by contemporary normative ethics and meta-ethics researchers biases the search space of hypotheses to make the truth extremely difficult to locate.

    • blacktrance says:

      The most you can get from cognition, sociology, and history is what people do, not what they should do.

      • social justice warlock says:

        That depends on how strong a realist you are. From weak realist standpoints, desirism or the like, oughts are at least in principle derivable from ises about what we would do under meta-meta-reflexiveness or something.

        (Of course getting meta-reflexive necessarily involves “philosophy,” though not necessarily as a starting point.)

      • Eli says:

        Hence my remark on the Open Question Argument. Either “should” means something, or it doesn’t: we can examine its actual semantic content. The Open Question Argument basically says, “Yes, you’ve examined the real structure that’s generating ought-propositions, but that doesn’t mean I have to endorse that structure as any more normatively compelling than paperclip maximization! And don’t try to get to me by talking about what’s normatively compelling to my mind-design, or even for truth-optimizing minds in general, as Real Ethics is from the point of view of the universe!”

        At this point, you are implicitly anthropomorphizing the universe (or rather, demanding that the universe must have a mind in order for compelling normativity to exist), acting as an amoralist (by putting your own norms on a level with Clippy), and also denying that ethical intuitions have any normative grounding whatsoever (which runs against the vast majority of theoretical and applied ethicists). Yes, error theorists come out and endorse this exact position, but then they’re left with the work of explaining why this vast collective delusion called “right and wrong” actually has an observable effect, through its ability to compel human actions, on the real world.

        By then you’re admitting that morality exists and are most likely at least a bit compelled by it yourself, but you’re kvetching at it like a wannabe-rebellious teenager who refuses to recognize normative obligations that result in having to walk the dog.

        • Carinthium says:

          (NOTE: In case you’re asking, I intend to get back to other posts here – I’m just dealing with this first because it’s way easier.)

          Moore obviously has a different definition of moral right and wrong from you. Yours appeals to human instincts, whilst Moore’s doesn’t. I don’t know if he would, but Moore could easily say that what we see in human brains is an inbuilt perception of right and wrong, not proof of right and wrong itself. Error theorists most certainly would.

          Appealing to what most ethicists think is an argument ad populum, and a silly one given that you don’t like philosophy.

          Not all moral realists think of morality as being from the point of view of the universe. There are other ways, such as the mind of God (admittedly bunk, but not incoherent).

          Morality from the point of view of intuitions starts to look a lot less credible when you start to see conflicts between moral intuitions. And that’s only the start of it – there’s even worse:

          1- A lot of intuitions are culturally produced. If you get rid of these you get a result we really won’t like. If you keep them, your morality is culturally subjective.
          2- Even ignoring cultural upbringing, personality alone can lead to drastic differences in moral intuitions. How does an ethicist account for these?

          Ethicists try to solve these problems, but none of them have a decent answer.

          The normative thrust of all this? That normatively speaking it is best to view moral intuitions as simply another kind of want, and weigh them just as you do other wants. If the moral intuitions come up short, so much worse for the moral intuitions.

          To be compelled by a sense of right and wrong as more than a mere desire is a cognitive bias, and should be treated like all the others.

    • no one special says:

      reason morality out from first principles and thought experiments rather than by looking at the cognition, sociology, and history of actual people.

      This exercise is particularly interesting to the Friendly AI/LessWrong believers, because it is necessary to have a grounded theory of morality to think about the morality of a supergenius AI, which is not an actual person.

      My model of Scott suggests that it’s part this and part obsession. (In a friendly, non-attacking way.)

    • Paul Torek says:

      Eli’s call for a naturalistic approach (in both senses) encourages me.

      A recurring theme in philosophy: someone proclaims a heavyweight Principle (metaphysical and/or epistemic) which is supposedly absolutely necessary for X. For example, X might be free will and the Principle might be “being the sole ultimate uncaused cause of an action.” X might be ethical value, and the Principle might be “a fundamental non-natural property sensed by intuition.” X might be knowledge, and the Principle “absolute certainty.”

      Skeptics point to the thin evidence for the Principle and a large body of evidence against, and declare that Science has discovered there’s no such thing as X! Meanwhile, people go about their everyday lives making choices and holding each other responsible, evaluating characters and acts and results, and knowing or doubting, just the same, without any obvious contact with the mysterious Principle. People have erotic love affairs without believing in Eros, and biologists study life without regard for elan vital. Why, it’s as if the Principle was beside the point all along!

      • Carinthium says:

        There is a difference between saying that morality (in the sense of humans having moral beliefs) exists, saying that morality (in the sense of objective morality as believed in by non-philosophers) exists, and saying that rational morality (as in it being rational for humans to consult moral codes to decide how to behave) exists.

        1 clearly exists. 2 clearly doesn’t. 3 is a point of legitimate dispute; I hold that it doesn’t exist.

        • peterdjones says:

          Meaning that it is irrational to follow moral codes even if they successfully maximize human wellbeing and you are motivated to do that… or that there are no such principles… or that there is no such motivation?

          • Carinthium says:

            On Version 1- If you don’t want to do something but do so because you believe you have a moral duty, your action is irrational. If you do it because you care, and that caring is truly separated from a belief in a moral duty to act, it’s a different matter.

            On Version 2- Such principles are an incoherent conception – they don’t exist objectively in the universe. Humans have varying beliefs about them, but see 1.

            On Version 3- People clearly are motivated, but in most cases they are motivated delusionally by their belief in moral norms.

            —————————-
            That’s not very helpful, so let me clarify. I’ve been thinking about my ideas a bit, so this is a partial revision. I’m using standard Sequences ideas of CEV as I understand them, but I may have got them wrong.

            -Moral truths do not exist.
            -Therefore, to the extent a person is motivated to act by perceived moral truths, they are breaking their own CEV.

            Things get more complicated when dealing with somebody who attempts to define moral truth in terms of human intuitions. However,

            -First, such a person needs to deal with how to resolve issues with contradicting intuitions.

            -Second, they need to have, at an absolute minimum, a rational response to a person who contradicts their moral code based on that person’s own conscience, even if such a person is merely hypothetical.

            -Third, such a person needs to account for the problem of Culturally Created Intuitions. If they allow those to factor into their system, morality becomes subjective. If they don’t, it becomes abhorrent to their target audience.

            Even if they succeed in all three, without an answer to the amoralist challenge there is no rational reason to be moral besides ‘Because I want to’. If a person genuinely doesn’t want to follow the system despite seeing its logic, there is no rational answer to give them.

          • peterdjones says:

            > On Version 1- If you don’t want to do something but do so because you believe you have a moral duty, your action is irrational.

            Rationality is more than one thing. If there is a good rational argument for behaving morally, then it would be irrational not to acknowledge it, and hypocritical not to act on it. But that’s epistemic rationality.

            > Such principles are an incoherent conception- they don’t exist objectively in the universe. Humans have varying beliefs about them, but see 1.

            Needs justification. Also, you seem to have confused an ontological argument with an epistemological one. It is possible to argue for the truth of an ethical claim without introducing novel entities: there’s an example in the OP.

            > On Version 3- People clearly are motivated, but in most cases they are motivated delusionally by their belief in moral norms.

            You appear to be appealing to an assumption that morality is nonsense in order to conclude that it is nonsense.

          • Carinthium says:

            Part of my contention is that such an argument does not exist. Try one on me if you like, but I have seen none that actually work.

          • peterdjones says:

            “That’s not very helpful, so let me clarify. I’ve been thinking about my ideas a bit, so this is a partial revision. I’m using standard Sequences ideas of CEV as I understand them, but I may have got them wrong.”

            In order to argue for the conclusion that objective moral truths don’t exist, you ideally need an impossibility proof, one that shows that there CANNOT be any.

            Failing that, it would be good to have a comprehensive refutation of all known theories. You are not going to get that from Mr Yudkowsky, who is proudly unacquainted with the philosophical tradition.

            “Moral truths do not exist.”

            Truths don’t exist. When I assert the truth of “there is no pegasus”, I am denying the existence of a purported entity, namely Pegasus. What would be the point of doing that if the act of making a claim involved asserting the existence of some true-claim-entity?

          • peterdjones says:

            > Part of my contention is that such an argument does not exist. Try one on me if you like, but I have seen none that actually work.

            Don’t work because…? You have repeatedly attacked ontological arguments, arguments that assert the existence of some kind of entity that pins down morality. Not all arguments work that way: Scott’s argument in the OP is an example. So what’s wrong with it? Well, apparently, what’s wrong is the Amoralist Challenge. Which says there are no moral truths. So Scott’s theory isn’t a moral truth because there are no moral truths, and there are no moral truths because you haven’t seen a good theory, and Scott’s theory isn’t a good theory because there are none…

            See the problem?

          • Carinthium says:

            1- I’ve seen two ways to try to argue for morality without introducing a novel entity.

            i- Appeal to intuitions.
            ii- Appeal to self-interest.

            If you appeal to intuitions, you get severe problems with conflicting intuitions. If you appeal to self-interest, you forget that your argument can at most be correct some of the time. Sometimes it is in a person’s interests to double-cross another person.

            If a theory is fundamentally based on self-interest, exists from the perspective of an individual rather than society (people’s interests clash, so a societal contractualist theory would be pointless), and sometimes advocates double-crossing society or undermining it from within as the ‘moral’ course of action, it is so far from the ordinary understanding of ‘moral’ as not to merit the term.

            2- The understanding most people have of a moral norm is utter nonsense – see moral realism. I’ve just explained why intuitions and self-interest don’t work either.
            ——————

            4- Absence of evidence is evidence of absence. The burden of proof is on somebody who postulates the existence of something.

            5- I don’t get what you’re saying with ‘there is no pegasus’.

  38. peterdjones says:

    1.i Intuition has its problems. So does every approach, including amoralism. Skepticism about the real world has the problem that no one really believes it, in the sense of acting on it: skeptics cross the road with their eyes open. Likewise, amoralists still get aggrieved about things they don’t like. You have been treating amoralism as an unproblematic default, in that you never mention its problems, but it isn’t one. If amoralism has its own problems, the correct answer is a noncommittal “dunno”.

    1.ii There is a difference between short-term and long-term self-interest. If everyone defects by choosing short-term interest, then Moloch wins and everyone ends up with less utility. So if co-operation is the winning strategy, why would defection be rational?

    Note that your objection to contractarian morality isn’t particularly specific to morality. Why not cheat in games or break business contracts for short-term gain?

    2 The understanding most people have of everything is utter nonsense.

    4. Assertions of altruism or ethical objectivism are not per se assertions about the existence of entities, so Occam’s razor is not applicable. Moral realism asserts entities; other stances don’t.

    5. Are you not arguing for a default to amoralism on the basis of something like Occam’s razor – on the basis that moralism (all moralism, not just moral realism) asserts the existence of some extra entity, apparently a truth?