The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible

I.

The lizard people of Alpha Draconis 1 decided to build an ansible.

The transmitter was a colossal tower of silksteel, doorless and windowless. Inside were millions of modular silksteel cubes, each filled with beetles, a different species in every cube. Big beetles, small beetles, red beetles, blue beetles, friendly beetles, venomous beetles. There hadn’t been a million beetle species on Alpha Draconis 1 before the ansible. The lizard people had genetically engineered them, carefully, lovingly, making each one just different enough from all the others. Atop each beetle colony was a heat lamp. When the heat lamp was on, the beetles crawled up to the top of the cage, sunning themselves, basking in the glorious glow. When it turned off, they huddled together for warmth, chittering out their anger in little infrasonic groans only they could hear.

The receiver stood on 11845 Nochtli, eighty-five light years from Alpha Draconis, toward the galactic rim. It was also made of beetles, a million beetle colonies of the same million species that made up the transmitter. In each beetle colony was a pheromone dispenser. When it was on, the beetles would multiply until the whole cage was covered in them. When it was off, they would gradually die out until only a few were left.

Atop each beetle cage was a mouse cage, filled with a mix of white and grey mice. The white mice had been genetically engineered to want all levers in the “up” position, a desire beyond even food or sex in its intensity. The grey mice had been engineered to want levers in the “down” position, with equal ferocity. The lizard people had uplifted both strains to full sapience. In each of a million cages, the grey and white mice would argue whether levers should be up or down – sometimes through philosophical debate, sometimes through outright wars of extermination.

There was one lever in each mouse cage. It controlled the pheromone dispenser in the beetle cage just below.

This was all the lizard people of Alpha Draconis 1 needed to construct their ansible.

They had mastered every field of science. Physics, mathematics, astronomy, cosmology. It had been for nothing. There was no way to communicate faster-than-light. Tachyons didn’t exist. Hyperspace didn’t exist. Wormholes didn’t exist. The light speed barrier was absolute – if you limited yourself to physics, mathematics, astronomy, and cosmology.

The lizard people of Alpha Draconis 1 weren’t going to use any of those things. They were going to build their ansible out of negative average preference utilitarianism.

II.

Utilitarianism is a moral theory claiming that an action is moral if it makes the world a better place. But what do we mean by “a better place”?

Suppose you decide (as Jeremy Bentham did) that it means increasing the total amount of happiness in the universe as much as possible – the greatest good for the greatest number. Then you run into a so-called “repugnant conclusion”. The philosophers quantify happiness into “utils”, some arbitrary small unit of happiness. Suppose your current happiness level is 100 utils. And suppose you could sacrifice one util of happiness to create another person whose total happiness is two utils: they are only 1/50th as happy as you are. This person seems quite unhappy by our standards. But crucially, their total happiness is positive; they would (weakly) prefer living to dying. Maybe we can imagine this as a very poor person in a war-torn Third World country who is (for now) not actively suicidal.

It would seem morally correct to make this sacrifice. After all, you are losing one unit of happiness to create two units, increasing the total happiness in the universe. In fact, it would seem morally correct to keep making the sacrifice as many times as you get the opportunity. The end result is that you end up with a happiness of 1 util – barely above suicidality – and also there are 99 extra barely-not-suicidal people in war-torn Third World countries.

And the same moral principles that lead you to make the sacrifice apply to everyone else as well. So in the end, everyone in the world ends up with the lowest possible positive amount of happiness, plus there are billions of extra near-suicidal people in war-torn Third World countries.
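
For concreteness, here is a toy sketch of the arithmetic above (a plain illustration using the thought experiment’s arbitrary numbers, nothing more): every sacrifice raises the total by one util, which is why total utilitarianism keeps endorsing it, even as the average collapses toward bare subsistence.

```python
# Illustrative sketch only: the arbitrary numbers from the thought experiment
# above. One person starts at 100 utils; each sacrifice costs them 1 util and
# creates a new person whose total happiness is 2 utils.
population = [100]

for _ in range(99):          # keep sacrificing until the donor is down to 1 util
    population[0] -= 1       # the donor gives up one util...
    population.append(2)     # ...to create one more barely-not-suicidal person

total = sum(population)                  # rises by +1 per sacrifice: 100 -> 199
average = total / len(population)        # collapses: 100 -> 1.99
print(len(population), total, average)   # 100 people, 199 utils, average 1.99
```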

This seems abstract, but in some sense it might be the choice on offer if we have to decide whether to control population growth (thus preserving enough resources to give everyone a good standard of living), or continue explosive growth so that there are many more people but not enough resources for any of them to live comfortably.

The so-called “repugnant conclusion” led many philosophers away from “total utilitarianism” to “average utilitarianism”. Here the goal is still to make the world a better place, but it gets operationalized as “increase the average happiness level per person”. The repugnant conclusion clearly fails at this, so we avoid that particular trap.

But here we fall into another ambush: wouldn’t it be morally correct to kill unhappy people? This raises average happiness very effectively!

So we make another amendment. We’re not in the business of raising happiness, per se. We’re in the business of satisfying preferences. People strongly prefer not to die, so you can’t just kill them. Killing them actively lowers the average number of satisfied preferences.

Philosopher Roger Chao combines these and other refinements of the utilitarian method into a moral theory he calls negative average preference utilitarianism, which he considers the first system of ethics to avoid all the various traps and pitfalls. It says: an act is good if it decreases the average number of frustrated preferences per person.
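
As a toy paraphrase of that rule (my own wording, not Chao’s formalism): score a world by the average number of frustrated preferences per person, and call an act good if it lowers that score.

```python
# Toy paraphrase of the rule quoted above, not Chao's own formalism: score a
# world by the average number of frustrated preferences per person, and call
# an act good if it lowers that score.

def avg_frustration(world):
    """world: a list of per-person counts of frustrated preferences."""
    return sum(world) / len(world)

def act_is_good(world_before, world_after):
    return avg_frustration(world_after) < avg_frustration(world_before)

# Adding a relatively content person (1 frustrated preference) to a world
# averaging 3 frustrated preferences per person lowers the average, so the
# act counts as good even though it adds frustration in absolute terms --
# the verdict depends on the state of everyone else, which is exactly the
# dependence the story exploits.
print(act_is_good([3, 3, 3], [3, 3, 3, 1]))  # True: 2.5 < 3.0
```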

This doesn’t imply we should create miserable people ad nauseam until the whole world is a Third World slum. It doesn’t imply that we should kill everyone who cracks a frown. It doesn’t imply we should murder people for their organs, or never have children again, or replace everybody with identical copies of themselves, or anything like that.

It just implies faster-than-light transmission of moral information.

III.

The ansible worked like this:

Each colony of beetles represented a bit of information. In the transmitter on Alpha Draconis 1, the sender would turn the various colonies’ heat lamps on or off, increasing or decreasing the average utility of the beetles.

In the receiver on 11845 Nochtli, the beetles would be in a constant state of half-light: warmer than the Draconis beetles whenever the transmitter’s heat lamp was off, but colder than them whenever it was on. So increasing the population of a certain beetle species on 11845 Nochtli would be morally good if the heat lamp for that species on Alpha Draconis were off, but morally evil otherwise.

The philosophers among the lizard people of Alpha Draconis 1 had realized that this was true regardless of intervening distance; morality was the only force that transcended the speed of light. Yes, a change in the heat lamps on their homeworld would instantly change the moral valence of pulling a lever on a colony eighty-five light-years away – but how to detect the morality of an action?

The answer was: the arc of the moral universe is long, but it bends toward justice. Over time, as the great debates of history ebb and flow, evil may not be conquered completely, but it will lessen. Our own generation isn’t perfect, but we have left behind much of the slavery, bigotry, war, and torture of the past; perhaps our descendants will be wiser still. And how could this be, if not for some benevolent general rule, some principle that tomorrow must be brighter than today, and the march of moral progress slow but inevitable?

Thus the white and grey mice. They would debate, they would argue, they would even fight – but in the end, moral progress would have its way. If raising the lever and causing an increase in the beetle population was the right thing to do, then the white mice would eventually triumph; if lowering the lever and causing the beetle population to fall was right, then the victory would eventually go to the grey. All of this would be recorded by a camera watching the mouse colony, and – lo – a bit of information would have been transmitted.

The ansible of the lizard people of Alpha Draconis 1 was a flop.

They spent a century working on it: ninety years on near-light-speed starships just transporting the materials, and a decade constructing the receiver according to meticulous plans. With great fanfare, the Lizard Emperor himself sent the first message from Alpha Draconis 1. And it was a total flop.

The arc of the moral universe is long, but it bends toward justice. But nobody had ever thought to ask how long, and why. When everyone alike ought to love the good, why does it take so many years of debate and strife for virtue to triumph over wickedness? Why do war and slavery and torture persist for century after century, so that only endless grinding of the wheels of progress can do them any damage at all?

After eighty-five years of civilizational debate, the grey and white mice in each cage finally overcame their differences and agreed on the right position for the lever, just as the mundane lightspeed version of the message from Alpha Draconis reached 11845 Nochtli’s radio telescopes. And the lizard people of Alpha Draconis 1 realized that one can be more precise than simply defining the arc of moral progress as “long”. It’s exactly as long as it needs to be to prevent faster-than-light transmission of moral information.

Fundamental physical limits are a harsh master.

169 Responses to The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible

  1. cvxxcvcxbxvcbx says:

    Very nice!

  2. Machina ex Deus says:

    Like all great psychiatrists, Scott is clearly insane.

    This post will be footnoted in a Ph.D. thesis in fifty years.

  3. meltedcheesefondue says:

    Excellent.

    Now I’m off to imagine a world in which this ansible works…

  4. There are many problems with the story, including the fact that negative average preference utilitarianism is false, as well as the claim that the same things will be moral for different kinds of creatures. But if you overlook the falsehoods, something like this would actually happen.

    • hlynkacg says:

      I think it’s more probable that negative average preference utilitarianism is true and that we live in an unjust and uncaring universe.

      • Nabil ad Dajjal says:

        What would it mean for a moral system to be objectively correct and for the universe to be unjust and uncaring?

        I’m having a lot of trouble wrapping my head around this concept.

        • Nancy Lebovitz says:

          That doesn’t seem too hard. Morality applies to how people behave, but the universe isn’t a person, so all sorts of good and bad things happen to people that aren’t the responsibility of a moral agent.

          • random832 says:

            If the universe is unjust and uncaring, a moral system being “objectively correct” makes no testable predictions. There’s no-one being sent to hell for immoral actions, there’s no force acting to ensure ‘karmic balance’ – an unjust or uncaring universe where utilitarianism is true looks absolutely identical to an unjust or uncaring universe where ‘human sacrifice is morally good’ is true – you won’t actually be rewarded for killing people in the latter, so who cares?

            (And if it’s possible for a moral system to be “true” without that truth having any consequences, then it raises the question of why a system has to be self-consistent or rigorously defined to be “true”)

          • susanneah says:

            From a scientific perspective, I doubt you can verify any of those claims – how can you possibly know all those ‘facts’ you pronounce? Where is your proof? Every day we have new shreds and threads of extraordinary information provided to us, yet I have, so far, never, ever, heard verified with hard, real data – any of those ‘facts’ you here describe. Onwards and upwards, maybe someday … peace, Susanne

        • hlynkacg says:

          What Nancy said ^

        • marobane says:

          I think the problem with “objective” is that unless you have strict criteria for the meaning of the term, it is very difficult to talk about what is and isn’t objective. If you are a big fan of Immanuel Kant (which I am), then you have a useful toolkit – what counts as objective is commonly available phenomena presented through human language and logic in a consistent and intelligible fashion. What is objectively true or false, then, are those propositions which are universally true to all humans who can grasp the reasoning which produces it.

          The universe is the universe. We only get what our faculties allow, and within that, there are subjective elements beyond our capacity to communicate (qualia, direct sensation), and objective elements, which are constructed from universal faculties which, practically speaking, relate to faculties of language and logic and can be represented using these.

          So the universe is whatever, but if an ethical principle were “objectively correct”, it would be something which would be accepted as true by any that could grasp the necessary conceptual structure. I use the term ethical principle rather than “moral system”, since morality is merely what is considered right or wrong within a society, and ethics is about what morality ought to be (meta-ethics then being what methods or rules we ought to employ in discussing ethics).

    • Rick Hull says:

      This comment frustrates my preference for reasoned argumentation. Can you enumerate the problems you have detected and explain why NAPU is false?

      • the verbiage ecstatic says:

        Or better yet, could someone explain what it even means for a theory like NAPU to be false?

        It seems pretty unfalsifiable, if it’s meant as a claim about what is morally true, since moral truths don’t appear to be empirically-discoverable facts about the universe.

        If it is meant as a claim about human moral intuition, I think Scott has just presented a counterexample. I’m also extremely skeptical that human intuition about morality is consistent enough and culturally invariant enough that any theory could capture it, and I’m skeptical that a utilitarianism-derived theory could ever capture the intuition of even one person fully accurately, because of the ease of constructing counterexamples like this one.

      • the verbiage ecstatic says:

        To avoid critiquing without making any positive claims, I’ll say that imo the point of moral discourse is to establish community or society or familial contracts that allow people to cooperate around agreed norms.

        From that perspective, morality isn’t a right / wrong question, it’s a design-and-engineering question. There might be multiple working solutions, some might perform better or worse in different circumstances, and they can be evaluated on both aesthetic and functional grounds.

        I personally prefer virtue-ethics-based and contractual / deontological theories over consequentialist, because they tend to be more stable in the sense that the range of actions that an agent who follows them can be expected to do are more predictable. I think they are also easier to learn, teach, and cohere around.

        • Wency says:

          Hear, hear!

          Excellent comments — a lot of these debates about whose utilitarianism is the true utilitarianism fail to grasp this fact.

          I would only add that the one philosophy that would align with morality being something like “an empirically discoverable fact about the universe” is divine command theory, or at least some form of theism (for a very soft definition of “empirical”).

          The trouble is with rejecting theism but then arguing morality as a theist would.

          Or as Pascal wrote, “It is certain that the mortality or immortality of the soul must make an entire difference to morality. And yet the philosophers have constructed their ethics independently of this: they talk to pass an hour.”

        • Yosarian2 says:

          I personally prefer virtue-ethics-based and contractual / deontological theories over consequentialist, because they tend to be more stable in the sense that the range of actions that an agent who follows them can be expected to do are more predictable.

          Both utilitarian and deontological systems have problems; personally I think utilitarianism is more useful, because while utilitarian systems do break down in some cases, they’re mostly situations you’d never encounter in the real world (I’ve never met a utility monster) while deontological ethics seems to lead people wrong in the real world all the time (the euthanasia debate is a good example imho).

          • Jaskologist says:

            Utilitarianism generally breaks down on questions like “should we have children?” and “should we just kill everybody?” These are not edge cases.

          • Schmendrick says:

            I actually have met a utility monster, or at least what I understand as a small one. My friend, let’s call him “Jack,” is so frightened of dying that he physically cannot get into an airplane. His estimated value of his own life is so high as to approach infinity, and even though he consciously knows that terrorism or catastrophic malfunction of the airplane are vanishingly small possibilities, even the .0001% chance that something could happen which he could not possibly respond to and which would lead to his death results in crippling fear. He has a lesser degree of trouble getting into cars and trains for similar reasons – he can still get on, but experiences severe anxiety. He has told me that the difference is because if something catastrophic happens on a train or in a car, at least he is on the ground and could conceivably escape. It’s already led to him missing his best friend’s wedding in England, reneging on several vacations, being unable to visit a woman he deeply loves and who loves him but lives across a major ocean, and may lead to adverse consequences at his place of employment, which expects him to travel for work.

        • random832 says:

          From that perspective, morality isn’t a right / wrong question, it’s a design-and-engineering question.

          Engineering for what goal? What are you solving for? Structural engineering can tell you how to build a bridge, but it can’t tell you that bridges that don’t collapse are better than bridges that do.

          • the verbiage ecstatic says:

            I don’t think anything can tell you what to engineer your moral system for: you have to decide for yourself, against the existentialist backdrop of the materialism of the universe and your impending death.

            Personally, I would like to live in a society with norms that lead to peace, prosperity, and a relatively high degree of individual freedom.

          • susanneah says:

            Yes, sounds entirely appropriate. Thus I adopt the hackneyed, but still valid, approach of “Think global, act local”, linked to the work and some of the ideas of Victor Papanek. peace, S

      • NAPU is false because every kind of utilitarianism is false, false because it answers the wrong question. It does not answer the question that is being asked. Utilitarianism answers the question, “Which result will be best?” But that is not what people want to know; it is not the moral question. People want to know the answer to the question, “What should I do?”, and that question is the moral question.

        The word “should” there, however, is hypothetical. That is, “I should make my bed,” only makes sense in view of a certain goal. I should make my bed if I want a clean house, for example. And precisely for this reason, utilitarians think that they are answering the right question. People are asking what they should do. What do they have in mind, then? Surely they mean to ask what they should do in order to accomplish the best results.

        But that is not what people have in mind, and it is not the moral question. When people ask, “What should I do?” they mean, “What should I do, in order to do something good, and avoid doing evil things?” This is a very different question, because one can do evil in order to bring about good results. And this is why every utilitarian system recommends sometimes doing evil, namely in situations where doing evil has the best results. But it is still doing evil, and so it ignores the question of what people should actually do.

        The second problem that I noted is that, precisely because it is judging based on outcomes rather than actions, the story assumes that some outcome or other, rather than some action or other, is moral. This is wrong. The moral question for the white mice is “what should I do”, and the answer is the answer that applies to the white mice. There is no reason why it has to match the answer to “what should I do” asked by the grey mice, since these are different questions about different individuals and even different groups. They are not asking “is the world better with the lever up”, but what each individual should do, and there is no reason for the answers for the various individuals to match.

        Third, the story doesn’t understand why things get better. Basically it is a process like entropy. Everything is trying to make things better, and sooner or later they will succeed. But this depends on acquiring information: you find out that one possibility won’t make things better because you try it out, find out it doesn’t work, and then try something else. So things won’t get better in a way which is independent of acquiring information.

        When I said that something like this would actually happen, ignoring the false suppositions, I meant e.g. the transmission of any sort of information, including moral information, will always be consistent with the laws of physics, even in situations where you would have imagined something else.

        • susanneah says:

          Perhaps the positive result actually is entropy-caused. As in entropy happens everywhere – to the bad, as well as to the good? Which is certainly an encouraging thought! As, generally in my experiences, those working towards good are far more enterprising in terms of productive work than those who embrace the opposite place. Those being humans; I’m not familiar with how entropy might work in the universe of mice and beetles. peace, Susanne

        • kitpeddler says:

          But that is not what people have in mind, and it is not the moral question. When people ask, “What should I do?” they mean, “What should I do, in order to do something good, and avoid doing evil things?” This is a very different question, because one can do evil in order to bring about good results. And this is why every utilitarian system recommends sometimes doing evil, namely in situations where doing evil has the best results. But it is still doing evil, and so it ignores the question of what people should actually do.

          How do you define what things are “good” and what things are “evil” apart from “things that have good results” and “things that have bad results?”

          • susanneah says:

            Great questions – which self-answer? Because it seems quite clear we are not precise future-readers – especially those today who nominate themselves our ‘leaders’. So, we can only be guided by our internal impulses – and the actions and words of those we choose as our personal role models? peace, Susanne

    • Shion Arita says:

      I think the problem with it is ultimately that the fact that morality improves over time (things bend toward justice) is only true in an aggregate, averaged-out way, and that the mice’s preferences for the lever don’t apply because they are a single thing and arbitrarily imposed. So it doesn’t follow that the mice will be able to find the ‘correct’ position of the lever.

      • tjohnson314 says:

        If it works in an averaged-out way, doesn’t that still imply that you could use an error-correcting code to transmit some amount of information?

        • Shion Arita says:

          I still don’t think so because morality is only improving because humanity is getting older, wiser, more experienced, and more knowledgeable. It’s not just happening by itself for no reason.

          If they don’t know what’s happening to the beetles, which is the thing that determines the right position of the switches, they have no way of becoming more knowledgeable about that matter. So it’s limited by the speed of light not for mysterious cosmological reasons, but because ‘normal’ communication is the only way to know what’s happening far away.

          To explain further, let me give a different version of the story:

          Suppose that the lever positions on 11845 Nochtli are connected to an actual ansible that we assume to work through ‘normal physics’ means (whatever that would be), but the communication is strictly one-way; the lever position data is transmitted back to Alpha Draconis, but no information can go from Alpha Draconis to 11845 Nochtli faster than light.

          The switches control a nuclear bomb placed under a major city on Alpha Draconis. At a certain set time in the future, long enough for the arc of justice to take its course, the bomb will either go off or not, depending on what the position of the switches is.

          The beetles are removed in this version, but the mice remain. The mice do not know which switch position corresponds to ‘armed’ or ‘disarmed’. They have to come to a consensus as to what position to put the switches in.

          Does anyone think that the mice could get it right at a higher rate than chance?

          Or even to take it further, suppose the transmission goes back in time, like through a wormhole whose mouth was flown from Alpha Draconis to 11845 Nochtli, so the explosion will have already occurred centuries in the mice’s past. I still maintain that if they don’t know about the outcome they have no way of formulating an optimal strategy.

      • Machine Interface says:

        A different perspective is that morality doesn’t exist at all, and what improves over time is *cooperation*, because there is an evolutionary pressure for it to do so: cooperation is the most efficient competitive strategy, and so structures that cooperate are more likely to outcompete structures that don’t — this is true regardless of the zoom level you’re at, from free-floating single-gene self-replicating RNA strands to human superstructures.

        Then we humans come along, become self-aware, invent the disciplines of history, anthropology and sociology, notice that we are more cooperative than our ancestors, and we say “Well hm gee! The past sure does look terrible! It sure is a nice thing that we are more moral than our ancestors and can now see the errors of their ways! Truly, irrepressible moral progress must be a law of the universe!”

        *Machine Interface waves a tiny “Moral Antirealism” flag*

    • andagain says:

      I’d like to see a proof that any moral principle is false. Or true.

  5. bean says:

    I knew it was going to be good when I got to ‘an ansible made of negative preference utilitarianism’.

    • hlynkacg says:

      Likewise, but now we need a write-up on the naval applications of such a device. Seems like it would revolutionize submarine warfare until the enemy determines which frequency species of beetles you’re using and starts spoofing/jamming the signal.

      • bean says:

        Note that the arc of justice is long. I believe it’s too long for tactical communication to be practical.
        Also, I have hobbies that are not naval things. Not many, but a few.

      • Nornagest says:

        I feel like there’s a Gravity’s Rainbow style joke here relating the moral universe to the trajectory of submarine-launched weapons, but I can’t quite make it work.

        The arc of the KN-11 is long, but it bends toward San Diego? No, that’s lame. Maybe I should stick to dirty limericks about rocket equipment.

        • J says:

          Like dirty manufacturing processes?

          A contractor known as Nantucket
          Cast nozzles inside an old bucket
          They’d fill it and spin
          Wet ceramics within
          And FEA? They’d say nah, f*ck it

      • Yosarian2 says:

        If humans have higher moral value than beetles, you’d just always get the signal that prevents you from sinking anyone. Probably not optimal for a military submarine.

  6. beleester says:

    I read part I, and thought it was nice that Scott was writing fiction again. Then I got to part II and realized it was going to be one of those articles where it’s a metaphor for something. Then I got to the end, and now I don’t know. I think it’s both.

    • gwern says:

      As we know from Unsong, everything is simultaneously itself, a metaphor for itself, and a metaphor for indefinitely large numbers of other things in the universe.

    • the verbiage ecstatic says:

      So my read is that this piece is:

      -An argument demonstrating that for all the engineering effort put into developing negative average preference utilitarianism as a breed of utilitarianism immune to weird edge cases, you can still derive crazy things from it, namely, that an act’s moral value can depend on events outside of its light cone

      -An opportunity to make a thematically-, if not logically-, related aside about how fundamental limits can doom intellectual flights of fancy

      -A shaggy-dog joke based on an extremely literal interpretation of a famous MLK quote

  7. InferentialDistance says:

    Bravo.

  8. tane says:

    Long time lurker. This is the one that finally got me to register. 🙂

    Does negative average preference utilitarianism not mandate killing dissatisfied people (or rather, people who want a large amount of unreasonable / unfulfillable things?)

    • Rick Hull says:

      No, because of their bedrock preference for existence. Also, nonexistence would frustrate any other preference, satisfiable or not.

      • robirahman says:

        The fact that killing them prevents the satisfaction of their other preferences is irrelevant because it’s negative preference utilitarianism.

        • moridinamael says:

          The most concise summary from the paper is this:

          NAPU … is about reducing the average level of preference frustrations.

          If killing them prevents the satisfaction of their other preferences, isn’t that exactly failing to reduce the level of preference frustrations?

          Also this gets me wondering if it might be useful to consider a preference (or a preference-ordering) as an ontologically fundamental thing, distinct from the entity holding the preference.

    • J says:

      It does seem to me that it’d be susceptible to utility monsters: say, murderous creatures with strong individual preferences that are frustrated for each other creature that hasn’t been killed yet.

    • Koken says:

      My first thought was to wonder about whether you could lobotomise people so that they had fewer (unsatisfied) preferences.

  9. Nancy Lebovitz says:

    Is there something immoral about building the ansibles?

    It takes a lot of misery for the beetles and the mice (rats?– there’s an inconsistency) to do the experiment, and the rodents are sapient.

    Anyway, thanks for the story.

    • tane says:

      Are the lives of the ansible-component animals better or worse than those of similar creatures in the wild? Maybe it’s immoral to /not/ build ansibles…

      • Nancy Lebovitz says:

        “The lizard people had uplifted both strains [white and gray rodents] to full sapience.”

        Presumably, they should be counted as equivalent to lizards.

    • the verbiage ecstatic says:

      Yes, it is immoral, but we can safely presume that the reason they care about FTL communication in the first place is high-frequency trading, so…

  10. Aylok says:

    It’s “ad nauseAm”.

    I’m sorry. I’m a small, petty man.

    Great story though.

    • JShots says:

      Seems a little odd to capitalize the “A”, but I’ll go with it! *Pettiness serve, returned*

  11. NotDarkLord says:

    Shortly thereafter, the lizard people designed another ansible based on the same principle, but using themselves instead of the rats. There were some engineering details to smooth out in order to transition from a lever position to their own more complex issues, but these were resolved in due time.

    Their transmitter they put on their moon, a light-minute away.

    The lizard people of Alpha Draconis 1 have resolved all their moral dilemmas and live idyllically, with the original ansible preserved mainly as a museum, though kept ready in case the philosophers come up with new moral issues.

    (The physicists are exploring what happens when they put receivers and transmitters closer and closer together, to study the exotic particles formed when the long arc of moral progress is forced into being unusually short. A morality collider is under construction – fears that it might create a destructive mini black hole were calmed by determining that creating the collider would be a morally positive event.)

    Anyways, great story!

  12. The Nybbler says:

    Next post is going to about psychedelics, right?

    (Why do you say that? Oh, no reason.)

  13. ricg says:

    Both Scotts, almost on the same day, have posted great articles. What an unexpected and lovely coincidence.
    http://www.scottaaronson.com/blog/?p=3376

    • eyeballfrog says:

      If you’re going to ask your commenters not to talk about the glaringly obvious modern example of your essay, maybe you should do the same and not throw in a Trump = Nazis jab.

      • VivaLaPanda says:

        I’m not seeing the glaringly obvious modern example, nor where he asks commenters not to talk about it?

    • spinystellate says:

      You say “Both Scotts”, as if there are only two! Clearly, Mount Scottmore has 4 heads: Alexander, Aaronson, Sumner, and Adams [n.b. this not an endorsement of all 4].

      • Nabil ad Dajjal says:

        Can we count James C. Scott?

        He’s not a proper Scott, since that’s his surname rather than given name. But it’s amusing and our Scott has reviewed one of his books before.

      • Aapje says:

        See the slippery slope in action. We must demand ideological name purity, sheeples!

  14. Squirrel of Doom says:

    If negative average preference utilitarianism can produce an Ansible, I’m all for it!

  15. Philosophisticat says:

    Cute stuff.

    Some points on the negative average utilitarian stuff in the spirit of humorless pedantry:

    Standard average utilitarianism doesn’t actually tell you to kill below-average people. On any plausible picture (as plausible as this kind of picture gets, anyway) you maximize the average utility, over the entire life, of all the people who do, did, or will exist. Killing a less-than-averagely-happy person doesn’t increase the average – if they had more happiness coming in the future than suffering, it lowers the average, since they still go in the denominator but the numerator is smaller.

    Also, the negative average preference view doesn’t solve Parfit’s problems with the average view. It still implies that if the universe is full of people who do nothing but suffer agonizing torture for a thousand years, you ought, if you can, create more people who do nothing but suffer agonizing torture for five hundred years, since it improves the average.

    • Philosophisticat says:

      There are quite a few other howlers in Chao’s paper:

      In response to the objection that his view implies that the world would have been better if nobody had existed:

      “However, this argument is false as it is based upon an impossible assumption. This argument contains an implied premise (which is necessary, given that someone is giving this argument) that there is a conscious being evaluating existence without existing themselves, which is necessarily impossible.”

      This is wrong on so many levels it hurts.

      • Rick Hull says:

        I don’t see that it’s wrong. Rather it’s a cheap way to dismiss an argument in lieu of a better answer. He can’t dismiss a more sophisticated version of the question, such as allowing the sole evaluator to exist, or simply considering a hypothetical utility calculator which is not a conscious being.

        • Philosophisticat says:

          You don’t have to make the question more sophisticated. The response is just straightforwardly nonsensical. It takes the fact that someone must exist to give an argument to mean that someone’s existence is an implicit premise in the argument itself. And then, as best I can put together, to get the supposed impossible commitment, it conjoins that with the antecedent of a counterfactual embedded in a conditional in one of the premises of the argument being criticized, as though that were also an assumption of the argument. Words cannot adequately describe the crimes against logic being committed.

          And that’s not even mentioning the absolute mess being made of “possible” and “impossible”. It’s like a Guinness book attempt at the most basic philosophical errors made in a single sentence.

          • Rick Hull says:

            What’s the most charitable reading of his dismissal? He’s saying that it is nonsensical to even apply utility theory to a world in which conscious beings never existed. In my opinion, this idea is not without merit, but it is easily answerable and neutralized. Your deconstruction of his dismissal falls into the same trap as his dismissal, looking for syntactic nitpicks to dismiss the semantics of the argument.

            I’d rather see interesting ideas and viewpoints get argued on their intrinsic merits and not dismissed on the details of their exposition.

  16. Tracy W says:

    Brilliant story. Asimov would have been jealous.

    I happened a few months ago to come across an essay of Friedrich Hayek’s arguing that our morality is evolved (in both the biological and social senses). Namely, we morally believe things not from some grand theory but because they’re what works for the survival of ourselves and our cultures. And there’s no particular reason why evolution would have arrived at a theory that is explicable in our terms.

    So, we can argue around the edges, over particular cases, but it’s quite possible we never will have a worked out theory that doesn’t in some cases strike our evolved morality as repugnant.

  17. balzacq says:

    I haven’t seen the movie Interstellar, but I have heard this quote:

    Love is the one thing we’re capable of perceiving that transcends dimensions of time and space.

    This would make an ansible absurdly simple: as the transmitter, put a psychopath capable of deliberately manipulating their own emotional state such that they truly believe it, and put a highly empathic person as the receiver.

    Now, the transmitter-psychopath tells himself “I love her. I love her not.” The empath-receiver notes down these feelings.

    And we have just transmitted the number “2”.

    • James says:

      Not the dumbest line I’ve ever heard in a movie, but probably the dumbest I’ve ever heard a movie’s ostensibly skeptical, rational, scientist character deliver.

      • John Schilling says:

        There’s an interesting story to be told about a skeptical, rationalist scientist discovering (now with a new and improved p<0.005) that this is actually literally true, and dealing with it. I'm still not sure whether "Interstellar" was trying to tell that story, but it didn't quite pull it off if they were.

        But that's just a special case of me not being able to understand what they were going for at all. Like, the central act of villainy was a "Dr. Mann" faking climate data to make the planet seem warmer than it really was. There's an obvious message in that, but it seems contrary to the spirit of the rest of the movie.

    • andagain says:

      I think it is worth pointing out that the character who says that line is a) trying to come up with an argument to save someone she is in love with, and b) subsequently treated as being slightly irrational about the wisdom of an act that would incidentally save her lover’s life.

      It is an argument that is meant to come off as desperate, not convincing.

  18. Nornagest says:

    Never change, Scott.

  19. J says:

    I have to confess that as much as I dislike puns, after all the setup about lizards and mice and pheromones and beetles, I was disappointed not to see something punny at the end. I mean, you’re not going to be kraken the record you set with Unsong anytime soon, but you do have a reputation to uphold.

  20. tmk says:

    I know obfuscated writing is a grand old Rationalist tradition, but posts like this really need to end with a straightforward explanation of what the author means. The only reason not to, is to let people signal their in-group knowledge.

    • Creutzer says:

      Why would you assume that the author means anything at all?

    • beleester says:

      I agree that it’s more of a joke than anything, but if I had to give it a straightforward meaning, it’s this: Negative average preference utilitarianism might be the perfect moral system in terms of avoiding the repugnant conclusion and other pitfalls, but it’s impossible to use for decision-making, because it’s based on the number of unsatisfied preferences in the entire universe.

      Bringing new life into the world could be moral, if there are billions of other people suffering and a happy, healthy kid raises the average a little bit, or immoral, if most other people are really happy and your screaming kid will bring down the average a little bit, and you can’t know which one unless you’ve gone around the world and checked how many people are happy or unhappy. Scott’s story takes this to an absurd extreme – the happiness or unhappiness of a colony of beetles on one side of the galaxy instantly changes the morality of birthing new beetles on the other side of the galaxy.

      In short, NAPU’s pitfall is that it’s possible for the same act to be moral or immoral depending on the existence of people you’ve never met and can never interact with.

      • aNeopuritan says:

        If you decide to be maximally pedantic (I don’t think the hypothesis is unfair 😛 ), doesn’t the same go for any utilitarianism (Do you *know* that you living doesn’t, or won’t 100 years from now, hurt non-baryonic entities in the Andromeda Galaxy?) ? If you decline to care about that, you’ll do what normally is done, decide based on the limited information you have – obviously you can’t claim to build faster communications based on that!

    • Deiseach says:

      The only reason not to, is to let people signal their in-group knowledge.

      So then you will understand why this joke is funnier in a Kemetic context.

      (If you are going to say “The only point of this is for the people on here to show off their big brains and knowledge of obscure trivia”, you are going to have to put up with my “I just saw this earlier and I understand that reference!” geeking out).

  21. Speaker To Animals says:

    I, for one, would walk away from Omelas.

    https://en.m.wikipedia.org/wiki/The_Ones_Who_Walk_Away_from_Omelas

    There was a Doctor Who story, ‘The Beast Below’, based on a similar theme. The survival of a human colony depended on the enslavement and suffering of a giant space whale that carried the colony on its back.

    Every year the truth would be revealed to the citizens and they would face a choice: free the whale and face extinction, or wipe away the memory of the whale’s existence until the next election.

    Time after time they chose the latter.

    • Ninmesara says:

      The concept of forgetting and debating again each year is pretty cool.

      • Speaker To Animals says:

        Alas the internet never forgets.

        • Ninmesara says:

          That’s not the point I wanted to make. The idea is that you have a choice between two moral wrongs. Whatever you choose, it will make you unhappy, so it makes some sense to erase your memories of the situation.

          You’ve made your choice, now you want to live in blissful ignorance.

          On the other hand, you want to give every generation the opportunity to decide, and that requires holding constant elections. So you reveal the truth every year and erase it afterwards. That way, people only suffer for the brief duration of time during which they know the truth.

          The idea was to avoid the annoyance of constant debate (yes, that is helped by the internet) and the suffering caused by knowing the truth. I don’t think this situation has a real world counterpart…

    • Jaskologist says:

      They also killed anybody who voted to free the whale, which is definitely hacking the election.

      • Ninmesara says:

        Yes. The premise of the episode looked amazing as told in the parent comment, but meanwhile I’ve read the synopsis and it’s pretty dumb (haven’t seen the episode, though)… The whole situation is extremely artificial. The moral dilemma is fake (the whale is actually our friend!). The election is hacked by killing people who choose to free the whale. I was expecting something way better.

      • Randy M says:

        This fall… the election won’t be the only thing hacked!

  22. Chalid says:

    In fact, it would seem morally correct to keep making the sacrifice as many times as you get the opportunity. The end result is that you end up with a happiness of 1 util – barely above suicidality – and also there are 99 extra barely-not-suicidal people in war-torn Third World countries.

    As you’ve set it up, there’s no reason to stop at 1 util, or for that matter at zero or -1 or any other number.

    • Chalid says:

      and as long as I’m doing pointless nitpicks of a joke story, I don’t get why you need each box to have a different beetle species? You’re looking at the average utility within species, and then doing some sort of aggregation-of-averages?

      • beleester says:

        Each beetle species is transmitting a separate bit of information. You need to have a separate beetle species for each bit, because if you had one box of beetles turned on and another box of the same species turned off, the changes in utility would cancel out. The mice can only detect the average utility for a species.

        • 4gravitons says:

          That’s the part that confused me. It seems to imply that the version of utilitarianism here is species-centric, but in that case, wouldn’t the mice have no reason to care about the beetles at all? Or is the idea that for some reason you only average over a species and not over all relevant moral entities?

      • TheEternallyPerplexed says:

        Parallel transmission. Like multiple colours in a fibre-optic cable.

  23. sustrik says:

    In short: with NAPU you can’t do a local moral decision (in physicist’s meaning of “locality”). But doesn’t that apply to most moral systems?

    • beleester says:

      No. For virtue ethics or deontology, you only have to decide if your action violates your personal moral code. And for standard utilitarianism, you only have to know whether the people you’re interacting with increase or decrease in utility. You do have to be able to predict the future – saving someone’s life might turn out to be immoral if that person becomes the next Hitler – but you don’t need to know about the utility of someone on the other side of the galaxy that you never interact with.

      • sustrik says:

        True wrt. deontology.

        However, wrt. utilitarianism, if your action is going to affect someone on the other side of the galaxy you should at the very least know that they exist to make a moral decision. Which can take, say, 100,000 years.

  24. alchemy29 says:

    Alright that final conclusion was incredible. Bravo.

  25. GrishaTigger says:

    This may well be the nerdiest joke I have seen in my life.
    And brilliant! Thank you.

  26. beleester says:

    There’s a fairly simple solution to this problem: measure the rate of change of the moral arc of the universe rather than its value. You would probably need a more sensitive ansible to detect it – perhaps involving multiple populations of rats for each bit so that you can see how many of them have flipped one way or the other – but in principle it should work.

  27. Walter says:

    Ahahahaha! Love it!

  28. Scott says:

    Why was the ansible a flop? It seems to me that it succeeded brilliantly at uncovering a new aspect of the physics of morality, and teaching the lizard people something they didn’t already know. I assume, in particular, that the rats at the receiving station had no direct sensitivity to the radio waves arriving there—and hence, there really is a moral force that has causal effects on the physical world, and that’s totally unknown to early-21st-century human physics. Even if that force doesn’t propagate faster than light, still worth multiple Nobel Prizes.

    • Jaskologist says:

      Yeah, as a way to objectively measure morality, this is a yuge advance.

      Is meat eating bad? We can construct anscombles to find out. I don’t think they even need to pair up. Just engineer half your beetles to eat the other half and let the mice decide whether or not they get to continue existing.

      Is abortion moral? Make an anscomble with a bunch of beetles which abort their young at different rates, and see how quickly the mice add to or subtract from the groups.

      Are anscomble experiments moral? Put an anscomble in your anscomble so you can anscomble while you anscomble.

    • Chevron says:

      It was a flop because it was supposed to be an ansible, not a lightspeed-limited ethical receiver.

  29. Raistlin117 says:

    It should be Roger Chao not Richard Chao FYI.

  30. Iain says:

    This was satisfying.

  31. Dan says:

    The problem is that if 11845 Nochtli and Alpha Draconis I aren’t in the same rest frame, then a moral arc that appears to be bending towards justice in 11845 Nochtli’s coordinate system may be bending away from justice in Alpha Draconis I’s coordinate system. C’mon, that’s just basic relativity.

  32. The Element of Surprise says:

    What conclusions does “negative” average preference utilitarianism draw that a “positive” average preference utilitarianism wouldn’t? If I say that world W is morally preferable to world W* whenever I would prefer to be a randomly chosen individual in W than W*, am I a negative or positive average preference utilitarian, or am I just confused?

  33. Ninmesara says:

    The story is pretty cool. I’m proud to have deduced the conclusion at: The arc of the moral universe is long, but it bends to justice. But nobody had ever thought to ask how long, and why. Maybe a bit too late… But very interesting concept.

  34. Mark Lu says:

    replace everybody with identical copies of themselves

    I’ll take this; I think destroying the Earth and replacing it with an identical copy is a morally neutral act.

  35. tentor says:

    Doesn’t “average utilitarianism” even in its basic form already suffer from this problem? If you can fork a 6 util individual into two 3 util individuals, the morality of the act depends on whether the average utility in the universe is above or below 3.

    Is there a version of utilitarianism that tries to optimize util-seconds over time instead of scalar utility at an instant?

    • moridinamael says:

      I don’t think utils work that way. Utils are best understood as a measure of relative preference. A new car is worth 100 utils to you, a new house is worth 1000 utils to you, a broken leg is worth -50 utils to you. It’s a way of numerically handling complex preference orderings, and is especially useful in probabilistic situations where you need to consider tradeoffs of a 50% chance of a car versus a 10% chance of a house, or whatever. In this framework there’s no such thing as a “6 util individual”.

      I get that people tend to talk about utility in ways that abstract it from this framework of quantification over preference ordering, but I’m pretty sure that’s a mistake.

      • tentor says:

        I was just going by what Scott wrote:

        Suppose your current happiness level is 100 utils. And suppose you could sacrifice one util of happiness to create another person whose total happiness is two utils

        What you write makes a lot of sense. Saying that someone has a happiness of X implies some absolute reference. Maybe, just like the universe can have no absolute coordinate system because of relativity, there is no such thing as an absolute level of happiness.

        • moridinamael says:

          I hear people use utils that way all the time but I don’t think it’s a very useful way of looking at it. From your frame of reference as the decision-making agent, another person doesn’t “have” utils, but a person’s life can be worth a certain number of utils to you. If you could make your full preference ordering over all possible states of the universe explicit, you would assign utility values to every person’s life, and every person’s health, and every person’s preferences, etc.

          If you were trying to adhere to some kind of average utilitarianism, you would try to impose an equal weighting of some kind across people. Perhaps you would say every person’s life is worth the same amount of utility to you regardless of who that person is. This would probably not be true, because all psychologically normal people value those closer to them more than strangers, and would prefer the deaths of strangers over the deaths of siblings, but if you’re okay with using a preference ordering that contains such idealizations (aka lies) then that’s okay I suppose.

          I think you can get back to something like happiness as a proxy for utility if you consider that you prefer other people to be happy rather than sad, you want to minimize the suffering in others, and you prefer other people’s preferences to be satisfied, generally speaking. Then your personal net utility is increased by increasing the happiness of others. But this is not the same thing as “increasing the objective absolute utility in the universe”, because such a concept is incoherent.

          I almost wish somebody would challenge me on this, because I see “utility” abused so often in the rationalsphere that I almost suspect I’m the one who’s wrong.

          • Chrysophylax says:

            You’re confusing multiple different meanings of the word “utility”. The thing that economists use to solve mechanism design problems is almost always not the thing that moral philosophers care about. Both meanings of “utility” are sensible in context (and the philosophers have been using the word for a lot longer than the economists).

            The von Neumann-Morgenstern utility theorem says that any preference scheme that satisfies certain assumptions is equivalent to maximising expected vNM-utility. That is, “a [vNM-rational] decision-maker faced with risky (probabilistic) outcomes of different choices will behave as if he is maximizing the expected value of some function defined over the potential outcomes at some specified point in the future”. (They assume causal decision theory, which is horribly broken, but never mind that.)

            This is a very useful tool if you are an economist, because it gives you a sensible way to talk about individual preferences and what a rational agent would do in different situations. But economists refuse to compare utilities. They’re mathematical constructs that describe what people prefer, but they aren’t the preferences themselves. You can transform all the utility values and get exactly the same predicted behaviour. So it’s meaningless to compare vNM-utility across people.

            Philosophers are (for once!) interested in what people are actually feeling: their experiential utilities, the true preferences that are actually encoded in their brains, or some other measure of pleasure, happiness or satisfaction. Comparing these is entirely reasonable.

            However my brain causes my experience of happiness, it does it through some lawful physical process. Such a process is, in principle, measurable and reproducible. So it makes physical sense to talk about a person who is “just as happy as I am right now” – that’s just a hand-wavy way to point towards some particular pattern of neural activation spikes.

            It’s also measurable in practice – people who object tend to only object when debating philosophy, not when making real-life decisions. Similarly, nobody seriously doubts that heroin feels really good and is much more pleasurable than a broken arm, even if they’ve never personally experienced either alternative.

  36. carvenvisage says:

    an act is good if it decreases the average number of frustrated preferences per person.

    Therefore it is always right to stifle potential before it has the chance to recognise itself.

    And I guess it’s also better to be miserable without hope or desire than happy but ambitious for more.

  37. carvenvisage says:

    Unsong spoilers warning

    __
    __
    __
    __
    __
    __
    __
    __

    Unsong’s solution to the problem of evil is similarly based on taking a single bizarro-world assumption as an axiom (something about identity, and how a duplicate isn’t really a separate/new thing (which it totally is)), and building carefully up from that chink in the foundations.

    Is this a new genre? I hope so

    • SEE says:

      C.S. Lewis (discussing why the Father did not beget multiple Sons) pointed out that when we try to visualize duplicates, we visualize them as having a different position in space or time or both. But spacetime is a property of the universe which does not necessarily exist outside of it. If “two things” are genuinely identical in all properties, including location in spacetime (or lack of location in spacetime), how can they be different things?

      • carvenvisage says:

        If two things are ‘identical in all properties including location’ they are one thing which is itself, not two things which are identical.

        -‘Identical’ implies at least two distinct objects. It doesn’t make sense in the singular, because we already have a term for ‘is itself’, and that’s a pretty strong default assumption anyway.

        Maybe god’s realm has only singular forms (or something), but if so, ‘duplicate’ and ‘identical’ become inapplicable concepts there; they don’t acquire a special meaning that can be translated back to our spatial one.

  38. b_jonas says:

    This reminds me of other crazy theories proposed for faster-than-light communication.

    There’s Pratchett’s kingon and queenon particle: when the king of a country dies, his heir instantly becomes the king, because there has to be exactly one rightful king at any one time. (This is even more important in Discworld, where the speed of light is much lower than in our world.)

    Then there’s Douglas Adams’s bad news drives. It is well-known that bad news travels faster than anything, so you could build faster-than-light ships powered by bad news. A few people tried that, but it soon got such a bad reputation that nobody dared to use such ships anymore.

  39. Deiseach says:

    This is very, very good.

    I really need to think about this; I’m getting those little prickles of “yes, but look closer” from it that mean my immediate first and fast impressions are not just probably not what is going on, but that it’s structured so that the immediate first and fast impressions go one way, and then the brain prickles kick in.

    Excellent work 🙂

    • Rick Hull says:

      I find it interesting for the discussions it spawns, but the work itself is sort of vacuous. Why would we believe that moral judgements necessarily improve over time, and why would this even be expected to be transmissible and furthermore FTL? The mouse deliberations have no logical connection to the moral situation outside their cage. It’s a very tall tale that seems to illustrate only incoherent ideas.

      I don’t understand why Scott concludes that NAPU “implies faster-than-light transmission of moral information” in section II. Only the proposed ansible implies such. We can still view NAPU as a framework for moral judgement within our light bubble.

      • Rick Hull says:

        Hm, light bubble is the wrong term. Distance does affect (the likelihood and amount of) moral impact though. Sphere of morality may be a better term. Perhaps there should be a moral impact discount rate for both distance and time. Spheres of morality could be nearly entirely disjoint, particularly if it takes more than a human lifetime for a message to be acknowledged.

      • Scott Alexander says:

        Suppose that the average human had utility 100. It would be good to bring babies into existence if we expected them to have utility of 101.

        But suppose there’s another planet a million light years away, with equivalent population, with average utility 200. Now the average human utility (across the universe) is 150, so bringing babies into existence with utility 101 is immoral.

        So the moral act here on Earth depends on the state of a planet a million light years away. Therefore, moral information must travel faster than light.
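
        A minimal sketch of the arithmetic, with hypothetical population sizes (only the averages matter): judged against Earth alone, the 101-util baby raises the average; judged against the whole universe, it lowers it.

        ```python
        # Hypothetical, equal-sized populations; only the averages matter.
        n = 1_000_000
        earth   = [100] * n      # average utility 100
        distant = [200] * n      # average utility 200, a million light years away

        def avg(xs):
            return sum(xs) / len(xs)

        baby = [101]
        print(avg(earth + baby) > avg(earth))                      # True: raises Earth's average
        print(avg(earth + distant + baby) > avg(earth + distant))  # False: lowers the universal average
        ```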

        • Naclador says:

          This is only true if you assume that the morality of an act must be measurable instantaneously. The morality of the act could just be indeterminate as long as you only have limited information about the universe. Which would mean that you would need to develop an omniscient pan-universal superintelligence to make definitive moral judgements about actions.

          I suggest that we limit our moral standard so that utility scales with distance, as Rick suggested: scale utility by an exponential decay function with a decay constant on the order of 1000 km.
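
          A toy sketch of that weighting (the functional form and the 1000 km constant are just the ones suggested above; everything else is made up):

          ```python
          import math

          L_KM = 1000.0                    # decay constant, order of magnitude only

          def moral_weight(distance_km):
              """Discount factor applied to someone's utility change at a given distance."""
              return math.exp(-distance_km / L_KM)

          print(moral_weight(0.0))       # 1.0     : people beside you count fully
          print(moral_weight(5_000.0))   # ~0.0067 : another continent barely registers
          print(moral_weight(9.46e18))   # ~0.0    : a planet a million light years away
          ```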

          But if the morality ansible worked, quantum entanglement should do the same trick with much less experimental effort.

        • Jaskologist says:

          Couldn’t it also imply that morality is relative, in both the “moral relativism” and “special relativity” senses of the word?

          If I understand Einstein correctly (this is a big If), whether two events take place “at the same time” depends on your frame of reference.

          I might bring a new beetle with util 100 into the world while observing the beetles on Alpha Draconis 1 having an average util of 98. A clearly moral act!

          But a third-party observer might see me doing that while the Draconis beetles have an average util of 102. From where he’s standing, this is an immoral act.

          In fact, I think that the ansible experiment, by proving that moral information moves no faster than light, also proves that morality is Specially Relative.
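
          A back-of-the-envelope check of the simultaneity point, in units where c = 1 (years and light-years), with arbitrary numbers: for two spacelike-separated events, the sign of their time separation depends on the observer’s velocity.

          ```python
          import math

          def delta_t_prime(dt, dx, v):
              """Time separation of two events for an observer moving at speed v (c = 1)."""
              gamma = 1.0 / math.sqrt(1.0 - v * v)
              return gamma * (dt - v * dx)

          # Event A: I act here and now. Event B: the distant average is read off,
          # 1 year later and 85 light-years away (spacelike separated, since 85 > 1).
          dt, dx = 1.0, 85.0

          print(delta_t_prime(dt, dx, 0.0))   # +1.0 : B happens after A in the rest frame
          print(delta_t_prime(dt, dx, 0.1))   # < 0  : B happens before A for this observer
          ```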

        • thepenforests says:

          To me the obvious answer is that we should judge people based on the decisions they made with the information available to them at the time, precisely to avoid paradoxes of this sort. I favour consequentialism as a system for determining which acts are ultimately the right thing to do, and expected consequentialism (which is an inherently local phenomenon) as a system for determining which acts one should take, and determining the standard by which one should judge another person’s acts.

        • @Jaskologist

          This is great. Beyond utilitarianism, doesn’t this also apply to deontological natural rights morality? A lot of moral claims about who is committing a violation and who is responding with legitimate self-defense depend on the apparent timing of events. Could we use special relativity to set up a scenario where it’s not clear who broke the NAP?

  40. ldsgsm says:

    The reason that the arc of the moral universe bends towards justice at precisely the rate described above is fairly obvious.

    Building the ansible was clearly an immoral act, since it resulted in a massive number of mice whose most important preferences were perpetually frustrated. Of course, the beetles would also have their preferences frustrated until the experiment stabilized. If it had been a successful experiment, then the lizard-people would have replicated it repeatedly, decreasing the net good in the universe under the terms of negative average preference utilitarianism. It is therefore (from the perspective of timeless decision theory) a moral imperative of the universe not to transmit moral information faster than the speed of light.

    On the other hand, as Scott correctly points out, if the levers are in morally incorrect positions then the universe will be less good than it could be. Therefore, to maximize the universe’s goodness, the mice should come to the proper conclusion as soon as possible–subject to the constraints discussed above.

    Thus the arc of moral history bends towards justice at a rate precisely commensurate with the speed of light.

  41. Subb4k says:

    Many people have already pointed out that the assumptions in the statement “the moral arc of the universe is long, but it bends towards justice” are unsatisfied here since the mice have no access to the information they need to make the morally better decision.

    However, there is also the issue of the very arbitrary separation of beetles. Suppose beetle species A has the heat lamp and beetle species B doesn’t. Why would the morally correct course of action be to increase the population of species B but not of species A?

    I think this part can be salvaged, though: the lizard people just need to engineer exactly one of the millions of beetle species to be a utility monster, so that its preferences matter more than those of all other beetles combined, and then not tell anyone which one it is. Then the mice need to act on each beetle species independently: if it’s not the utility monster it does not matter, but if it is, they need to make the right decision regardless of the others.

    Of course then you run into the problem already mentioned, but compounded. The mice need to learn through some magical effect about the position of the heat lamp (they need to learn that information because at the very least they would be able to deduce it from the position of the lever), but not learn which species of beetle is a utility monster (otherwise, the optimal choice is to put the lever for all beetles in the same position as for the utility monsters, since the transmitter utility monsters basically set the “base state” we’re starting from). For some reason, specific magical transmission of morally-relevant information is even harder to believe than if it were universal.

  42. Null Hypothesis says:

    Your kids are going to grow up knowing the most awesome parables.

    But you might be getting regular visits from CPS if they ever repeat them in school.

  43. Arpan Saha says:

    A while ago, the string theorist Ashoke Sen wrote an essay arguing that we need to invest in interstellar travel beyond cosmological horizons, because we don’t know whether the vacuum we’re living in is metastable or not, and if it is, a more stable vacuum bubble could spontaneously form at any moment and wipe us all out without any warning (since such a bubble would have to expand at the speed of light).

    But given that FTL transmission of moral information is forbidden, venturing beyond the cosmological horizon would make the arc of moral progress infinitely long. So now I don’t know if that would be such a good idea.

  44. Sam Reuben says:

    The moral I get from this, at least, is that utilitarianism is an excellent mental heuristic for solving certain weighed-choice problems on a large, social scale, but falls completely apart if you try and pretend that it’s remotely capable of describing the universe. The short version is: if the utile is a numerical object, what instrument measures it? Theoretically, the beetle-mouse-monolith, but realistically, nothing. Thus, the utile isn’t properly considered as a part of objective reality.

  45. susanneah says:

    and what of the beetles…. WIF them? Enjoyed, peace, Susanne

  46. hnau says:

    a very poor person in a war-torn Third World country who is (for now) not actively suicidal.

    First clue that your ethical model is wrong: equating suicide with terrible material conditions, when if anything the opposite is the case. Even “not having one’s preferences satisfied” is a pretty awful understanding of what might drive people to suicide.

    There’s an article about this in the most recent links post, for goodness sake!

  47. balzacq says:

    The arc of the moral universe is long, but it bends towards justice.

    This has always struck me as a dogmatic statement of faith, not susceptible to proof in the real world. Like Zhou Enlai is supposed to have said about the French Revolution, “it’s too early to say.”

    As a cynic (and an atheist), as far as I know humanity is only one cataclysm away from universally knifing each other for scraps of bread, and putting each other in chains to work the moisture farms, or what have you. This is why I’m suspicious of moral systems that require me to worry about the effect on or opinions of others, except as far as that worry is inversely proportional to their distance from me in terms of relation, similarity, or time.

    As I put it to a FB friend, who was complaining about some variety of “why can’t we all just be good to each other”, people care about their children, themselves, their other family, their neighbors, their countrymen or co-religionists, and everyone else, in that order. Getting someone to prioritize the well-being of a lower-ranked person when it would conflict with that of a higher-ranked person is well-nigh impossible, or at least unlikely and unsustainable.

    • susanneah says:

      Although I sympathise with your views, now still, at 70, and being what is called ‘white’, I ‘work’ for community, free/pro bono, for some time each year. In various of what might be termed “The Other” types of countries. So I cannot agree. I find the minute contributions I make to enrich me (in mind & spirit) enormously, and I see positive impact in those places where I make my tiny contributions. I put it to you that communication has much to do with mind-set, and firmly say I have seen this happen, time and time again. Even – sometimes – some with your type of mind-set may change if they meet “the Outsider” person, or persons, who seem real and true to them… peace, Susanne

      • balzacq says:

        That’s great and all, and good for you. But if your time spent working pro bono somehow harmed your grandchildren…? I suspect you’d find a good reason why your community could wait. That’s just human nature.

        I probably should have split my comment into two: one saying that “bends toward justice” is an aspirational statement that we can’t prove and literally have to take on faith, predictions being hard to make especially about the future and all; and another saying that all this stuff about adding up utils is tantamount to so many angels dancing on the head of a pin, when it’s plain to see what really matters to human beings and therefore how they generally make moral calculations in the real world.

  48. susanneah says:

    Well, thanks for the thoughtful reply, though that’s a lot of words to say something pretty simple – and, although I totally respect your views relative to your personal beliefs, I’m still not convinced – relative to my views. As you say, we cannot (we the masses, that is) know of the future: so, are we to be totally skewered, frozen, from making any efforts to assist others less fortunate, because of our to-date inability/ies there? BTW – I don’t have any grandchildren – and I believe, though it’s not really my business in any case – that it’s by thoughtful decision. peace, S

  49. Baeraad says:

    I had to read through that three times before I understood the logic at work. But yes, then I laughed.

    I very much doubt that a coherent model for maximising “goodness” is even possible, by the way. Insofar as I think morals can be codified, I expect it to look something more like:

    “Values inherent to the human organism are X0, X1, … , Xn.
    “The weight an individual can ascribe to X0 while being considered mentally healthy is in a range between X0low and X0high.
    “The weight an individual can ascribe to X1 while being considered mentally healthy is in a range between X1low and X1high.
    “…
    “The weight an individual can ascribe to Xn while being considered mentally healthy is in a range between Xnlow and Xnhigh.
    “An action is moral if it is genuinely based on those of the acting individual’s values that are relevant to the situation, in accordance with the proportional weight the individual places on those values.
    “An individual with one or more values weighted higher or lower than the mentally healthy range should be regarded as incapable of making moral decisions in matters affected by one of their ‘impaired’ values.”

    Something like that. And I’m not even sure that that’s possible to make work. It leaves it very fuzzy, for instance, just how you’re supposed to determine to what extent a given value is involved in a particular choice compared to other values, and whether or not it should be relevant to that choice in the first place, and then there’s that weasel word “genuinely”… But insofar as morality is an objective thing, this is as close as I can see to a way of evaluating it.
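
    A rough sketch of how such a check might be coded up, with invented value names, weights and “healthy” ranges, and with the fuzzy parts (relevance, “genuinely”) deliberately left out:

    ```python
    # Invented values and "mentally healthy" weight ranges, for illustration only.
    HEALTHY_RANGES = {
        "compassion": (0.2, 0.9),
        "honesty":    (0.3, 1.0),
        "loyalty":    (0.1, 0.8),
    }

    def impaired_values(weights):
        """Values whose weight falls outside the healthy range."""
        return {
            value for value, w in weights.items()
            if not (HEALTHY_RANGES[value][0] <= w <= HEALTHY_RANGES[value][1])
        }

    def can_judge(weights, relevant_values):
        """Someone can make a moral decision on a matter only if none of the
        values relevant to it are 'impaired'."""
        return impaired_values(weights).isdisjoint(relevant_values)

    someone = {"compassion": 0.95, "honesty": 0.6, "loyalty": 0.4}  # compassion out of range
    print(can_judge(someone, {"honesty"}))                  # True
    print(can_judge(someone, {"compassion", "honesty"}))    # False
    ```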

    • susanneah says:

      So, are ‘elevated beings’, in the first instance, then to be the entities who ascribe your ‘values’ of goodness (however that may be decided)? As personally I am not in favour of ‘elevated beings’ of any type, and don’t think they validly exist, I’m not sure I’d listen or take note of their nominated values. And as for ‘genuinely’ – that’s indeed a weasel word. peace, Susanne

      • Baeraad says:

        Hmm? I don’t think I used the term “elevated beings.”

        Do you mean, who gets to decide what is to be considered a normal and healthy amount of concern for any given value? If so, what I had in mind was that it would be determined in the same way that we determine the difference between a mental illness and a personal oddity today. Which I’m sure is a lot more complicated than I can even imagine, but they still do it somehow. You don’t have anger issues because you’re not always calm and serene, and you don’t have a phobia just because you’re not completely one hundred percent okay with heights, but somewhere a fuzzy line is crossed and it’s considered abnormal in comparison to the majority of roughly “healthy” people.

        So the most morally perfect being under this model would be the world’s most thoroughly average and unremarkable person, one with a medium level of care for each and every value that human beings are capable of treating as a value. Though even that person wouldn’t have any particular advantage over anyone else whose feelings were still within the accepted range. So someone might be, say, more compassionate than someone else, but the other person would not be either better or worse than the first as long as their compassion was not below the critical threshold – or, conversely, so exaggerated as to be considered excessive.

        • susanneah says:

          Hmnn, it’s complicated. That’s not in doubt (and sorry if I took some liberty in my written form – no offence intended). Everything we humans consider is, I guess, to some extent or another, complicated. Important matters like this are perhaps really demonstrated by the white and grey mice? As they show how much time – 50 years at least – it takes for humans to get around to agreeing, more or less peacefully, on anything. Especially on when, or whether, a thing has value, or not. I remember reading the story of how, in earlier times, the river Arno completely flooded Florence. So much so, and with such ferocity, that quite a number of beautiful, and already ancient, marble statues, much-adored by the Fiorentino public, toppled off a major bridge, fell into the raging river and were lost. There was huge public sorrow. Several hundred years or so later some, most, were found in the mud during a particularly dry spell, far away down-river, and badly mangled, but nonetheless still beautiful, still recognisable and still in good enough order to be re-mounted. Then came the supreme problem. It took more than another 100 years for that good public to decide which statue should be re-mounted where. Nonetheless, in the end they did it. So I think we can have faith, although disquiet is good, too! peace, Susanne

  50. jasonhise64 says:

    If the moral preferences of a society evolve over time, then when observed from the perspective of someone in the present won’t it necessarily appear that the morals of past societies were worse? Any point in moral-preference-space is going to measure itself as the peak – the set of moral preferences that best maximize that point’s set of moral preferences. Since each point in the space considers itself to be the most moral set of preferences to have, any path toward that point looks like an optimization problem being solved… a hill climbing operation culminating in society finally learning the ‘right’ moral values. But wouldn’t it look like that even if the evolution of societal moral preferences was a random walk?
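
    A toy simulation of that suspicion (nothing here but a pure random walk): score each past position by how close it is to wherever the walk happens to finish, and the recent past usually looks like steady progress toward the present, even though nothing is being optimised.

    ```python
    import random

    random.seed(0)
    steps = 1000
    position = 0.0
    history = [position]
    for _ in range(steps):
        position += random.gauss(0, 1)   # unbiased step: no "moral progress" built in
        history.append(position)

    final = history[-1]
    distances = [abs(x - final) for x in history]   # "wrongness", judged from the endpoint

    quarter = steps // 4
    early = sum(distances[:quarter]) / quarter      # distant past
    late  = sum(distances[-quarter:]) / quarter     # recent past

    print(early > late)   # usually True: history looks like a climb toward "now"
    ```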

    • susanneah says:

      Perhaps we ‘hope’ we have advanced? As we know we, the common lot, now do have a great deal of new (it seems) assistance, tools and freedoms in our efforts to do so. Also perhaps we have empathy and compassion for those before us, which they cannot now have for us? As we can see they did not, most likely, have all the advantages we, the commoners, today have? And yet, conversely, still we do not achieve freedom – for all – from the most basic need, hunger? Really, how can we compare ourselves to any other type of entity in any era, at this stage? peace, Susanne

  51. Doesn’t negative average utilitarianism imply we should never have kids – because there will always be *some* preference they have that isn’t met?

    • Never mind – the linked paper addresses this.

      What it doesn’t address is that average preference utilitarianism is equivalent to negative average preference utilitarianism.

      Its argument against APU is that it violates the mere addition principle, but so too does NAPU.

      NAPU just minimizes C-Utility; this is trivially equivalent to maximizing utility.

  53. TheEternallyPerplexed says:

    @Scott
    The tone reminds me of Stanisław Lem. Sooooooo great! (Both of you!)

  54. susanneah says:

    One question, please: Why is it that you constructed: “There was one lever in each mouse cage. It controlled the pheromone dispenser in the beetle cage just below.”? Or please direct me to more information about what it is these particular pheromones do? peace, S

  55. romeostevens says:

    Every utilitarianism works by pushing the hard parts into a corner where it thinks you won’t notice. The aether variable in NAPU is the tradeoff between the number and the intensity of frustrated preferences.