Stop Adding Zeroes

Dylan Matthews writes a critique of effective altruism. There is much to challenge in it, and some has already been challenged by people like Ryan Carey. Perhaps I will go into it at more length later. But for now I want to discuss a specific argument of Matthews’. He writes – and I am editing liberally to keep it short, so be sure to read the whole thing:

Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers.

Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”

Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people. That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a “rounding error.”

[…]

These arguments give a false sense of statistical precision by slapping probability values on beliefs. But those probability values are literally just made up. Maybe giving $1,000 to the Machine Intelligence Research Institute will reduce the probability of AI killing us all by 0.00000000000000001. Or maybe it’ll make it only cut the odds by 0.00000000000000000000000000000000000000000000000000000000000000001. If the latter’s true, it’s not a smart donation; if you multiply the odds by 10^52, you’ve saved an expected 0.0000000000001 lives, which is pretty miserable. But if the former’s true, it’s a brilliant donation, and you’ve saved an expected 100,000,000,000,000,000,000,000,000,000,000,000 lives.

I don’t have any faith that we understand these risks with enough precision to tell if an AI risk charity can cut our odds of doom by 0.00000000000000001 or by only 0.00000000000000000000000000000000000000000000000000000000000000001. And yet for the argument to work, you need to be able to make those kinds of distinctions.

Matthews correctly notes that this argument – often called “Pascal’s Wager” or “Pascal’s Mugging” – is on very shaky philosophical ground. The AI risk movement generally agrees, and neither depends on it nor uses it very often. Nevertheless, this is what Matthews wants to discuss. So let’s discuss it.

His argument is that sure, it looks like fighting existential risk and saving 10^54 people is important. But that depends on exactly how small the chance of your anti-x-risk plan working is. He gives two different possibilities which, if you count the zeroes, turn out to be 10^-17 and 10^-66. Then he asks: which one is it, 10^-17 or 10^-66? We just don’t know.
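
To make the stakes of that question concrete, here is a minimal sketch of the expected-value arithmetic in Python, taking Bostrom’s 10^52 future lives at face value and using the two exponents as counted above. Nothing in it is more than multiplication; everything hangs on which probability you plug in.

```python
# Minimal sketch: expected lives saved = (risk reduction) x (future lives at stake).
# The 10^52 figure and the two candidate probabilities are the ones discussed above.
FUTURE_LIVES = 10**52

def expected_lives_saved(risk_reduction):
    """Expected lives saved if a donation cuts extinction risk by risk_reduction."""
    return risk_reduction * FUTURE_LIVES

for p in (1e-17, 1e-66):
    print(f"risk reduction {p:.0e} -> expected lives saved {expected_lives_saved(p):.0e}")
# risk reduction 1e-17 -> expected lives saved 1e+35
# risk reduction 1e-66 -> expected lives saved 1e-14
```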

Well, actually, we do know. It’s probably not the 10^-66 one, because nothing is ever 10^-66 and you should never use that number.

Let me try to justify this.

Consider which of the following seems intuitively more likely:

First, that a well-meaning person donates $1000 to MIRI or FLI or FHI, this aids their research and lobbying efforts, and as a result they are successfully able to avert an unfriendly superintelligence.

Or second, that despite our best efforts, a research institute completes an unfriendly superintelligence. They are seconds away from running the program for the first time when, just as the lead researcher’s finger hovers over the ENTER key, a tornado roars into the laboratory. The researcher is sucked high into the air. There he is struck by a meteorite hurtling through the upper atmosphere, which knocks him onto the rooftop of a nearby building. He survives the landing, but unfortunately at precisely that moment the building is blown up by Al Qaeda. His charred corpse is flung into the street nearby. As the rubble settles, his face is covered by a stray sheet of newspaper; the headline reads 2016 PRESIDENTIAL ELECTION ENDS WITH TRUMP AND SANDERS IN PERFECT TIE. In small print near the bottom it also lists the winning Powerball numbers, which perfectly match those on a lottery ticket in the researcher’s pocket. Which is actually kind of funny, because he just won the same lottery last week.

Well, the per-second probability of getting sucked into the air by a tornado is 10^-12; that of being struck by a meteorite 10^-16; that of being blown up by a terrorist 10^-15. The chance of the next election being Sanders vs. Trump is 10^-4, and the chance of an election ending in an electoral tie about 10^-2. The chance of winning the Powerball is 10^-8 so winning it twice in a row is 10^-16. Chain all of those together, and you get 10^-65. On the other hand, Matthews thinks it’s perfectly reasonable to throw out numbers like 10^-66 when talking about the effect of x-risk donations. To take that number seriously is to assert that the second scenario is ten times more likely than the first!
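
That figure is nothing more than the product of the per-event probabilities just listed; a quick sketch of the multiplication:

```python
# Chain the per-event probabilities from the scenario above.
probabilities = {
    "sucked up by a tornado (per second)":  1e-12,
    "struck by a meteorite (per second)":   1e-16,
    "blown up by a terrorist (per second)": 1e-15,
    "next election is Sanders vs. Trump":   1e-4,
    "election ends in an electoral tie":    1e-2,
    "second straight Powerball win":        1e-16,
}

joint = 1.0
for p in probabilities.values():
    joint *= p

print(f"joint probability ~ {joint:.0e}")        # ~ 1e-65
print(f"ratio to 10^-66: {joint / 1e-66:.0f}x")  # ~ 10x
```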

In Made Up Statistics, I discuss how sometimes our System 1 intuitive reasoning and System 2 mathematical reasoning can act as useful checks on each other. A commenter described this as “sometimes it’s better to pull numbers out of your ass and use them to get an answer, than to pull an answer out of your ass.”

A good example of this is 80,000 Hours’ page on why people shouldn’t get too excited about medicine as an altruistic career (oops). They argue that the good a doctor does by treating illnesses is minimal compared to the good she can do by earning to give. Their reasoning goes like this: the average doctor saves 4 QALYs a year through medical interventions. The average doctor’s salary is $150,000 or so; if she donates 10% to charity, that’s $15,000. As per GiveWell, that kind of money could save 300 QALYs per year. The value of the earning to give is so much higher than the value of the actual doctoring that you might as well skip the doctoring entirely and go into whatever earns you the most money.

Intuitively, people’s System 1s think “Doctor? That’s something where you’re saving lots of lives, so it must be a good altruistic career choice.” But then when you pull numbers out of your ass, it turns out not to be. Crucially, exactly which numbers you pull out of your ass doesn’t matter much as long as they’re remotely believable. 80,000 Hours tried their best to figure out how many QALYs doctors save per year, but this is obviously a really difficult question and for all we know they could be off by an order of magnitude. The point is, it doesn’t matter. They could be off by a factor of ten, twenty, even fifty and it wouldn’t affect their argument. I’ve gone over their numbers with them and it’s really, really, really hard to remotely believably make the “number of QALYs saved per doctor” figure come out high enough to challenge the earning-to-give route. Sure, you’re pulling numbers out of your ass, but even your ass has some standards.
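
The whole 80,000 Hours comparison, sensitivity check included, fits in a few lines. This is a rough sketch using only the figures quoted above; the cost-per-QALY is simply what those figures imply ($15,000 buying about 300 QALYs), not an independently sourced estimate.

```python
# Back-of-the-envelope version of the 80,000 Hours comparison described above.
DIRECT_QALYS_PER_YEAR = 4        # average doctor's direct clinical impact (figure quoted above)
DONATION_PER_YEAR     = 15_000   # 10% of a $150,000 salary
COST_PER_QALY         = 50       # implied by the figures above: 15,000 / 300

donated_qalys = DONATION_PER_YEAR / COST_PER_QALY   # ~300
print(f"donated QALYs: {donated_qalys:.0f}, "
      f"ratio over direct doctoring: {donated_qalys / DIRECT_QALYS_PER_YEAR:.0f}x")

# Sensitivity check: even if the direct-impact figure is off by a large factor,
# the comparison barely moves.
for factor in (10, 20, 50):
    adjusted = DIRECT_QALYS_PER_YEAR * factor
    verdict = "still below" if adjusted < donated_qalys else "exceeds"
    print(f"direct impact x{factor}: {adjusted} QALYs ({verdict} the ~300 from donating)")
```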

It’s the same with Matthews’ estimates about x-risk. He intuitively thinks that x-risk charities can’t be that great compared to fighting global poverty or whatever other good cause. He (very virtuously) decides to double-check that assumption with numbers, even if he has to make up the numbers himself. The problem is, he doesn’t have a very good feel for numbers of that size, so he thinks he can literally make up whatever numbers he wants, instead of doing something that we jokingly call “making up whatever number you want” but which in fact involves some sanity checks to make sure they’re remotely believable proxies for our intuitions. He thinks “I don’t expect x-risk charities to work very well, so what the heck, I might as well call that 10^-66”, whereas he should be thinking something like “10^-66 means about ten times less likely than my chance of getting tornado-meteor-terrorist-double-lottery-Trumped in any particular second, is that a remotely believable approximation to how unlikely I think existential risk is?”

Just as it is very hard to come up with a remotely believable number that spoils 80,000 Hours’ anti-doctor argument, so you have to really really stretch your own credulity to come up with numbers where Bostrom’s x-risk argument doesn’t work.

(some people argue that LW-style rationality is a bad idea, because you can’t really think with probabilities. I would argue that even if that’s true, there is at least a small role for rationality in avoiding being bamboozled by other people trying to think with probabilities and doing it wrong. This is a modest claim, but no more modest than Wittgenstein’s view of philosophy, which was that it was a useful thing to know in order to protect yourself from taking philosophers too seriously.)

But one more point. Suppose Matthews’ intuition is indeed that the chance of AI risk charities working out is precisely ten times less than his per-second chance of getting tornado-meteor-terrorist-double-lottery-Trumped. In that case, I offer him the following whatever-the-opposite-of-a-gift-is: we can predict pretty precisely the yearly chance of a giant asteroid hitting our planet, it’s way more than 10^-66, and the whole x-risk argument applies to it just as well as to AI or anything else. What now?
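
To spell the un-gift out: plug any defensible asteroid probability into the same one-line expected-value formula and the result dwarfs anything in the 10^-66 regime. The per-year rate below is only a rough, commonly cited order of magnitude for a civilization-scale impact, assumed here purely for illustration rather than taken from anything in this post.

```python
# Illustration only: the asteroid version of the same expected-value arithmetic.
FUTURE_LIVES = 10**52
P_IMPACT_PER_YEAR = 1e-8   # assumed rough order of magnitude for a civilization-scale impact

print(f"expected future lives at stake per year: {P_IMPACT_PER_YEAR * FUTURE_LIVES:.0e}")
# ~ 1e+44, which is nowhere near the 10^-66 territory under discussion
```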

Because this isn’t just about defending the particular proposition of AI. It’s a more general principle of staring into the darkness. If you try to be good, if you don’t let yourself fiddle with your statistical intuitions until they give you the results you want, sometimes you end up with weird or scary results.

Like that a person who wants to cure as much disease as possible would be better off becoming a hedge fund manager than a doctor.

Or that your charity dollar would be better sent off to sub-Saharan Africa to purchase something called “praziquantel” than given to the sad-looking man with the cardboard sign you see on the way to work.

Or that a person who wants to reduce suffering in the world should focus almost obsessively on chickens.

One of the founding beliefs of effective altruism is that when math tells you something weird, you at least consider trusting the math. If you’re allowed to just add on as many zeroes as it takes to justify your original intuition, you miss out on the entire movement.

Everyone has their own idea of what trusting the math entails and how far they want to go with it. Some people go further than I do. Other people go less far. But anybody who makes a good-faith effort to trust it even a little is, in my opinion, an acceptable ally worth including in the effective altruist tent. They have abandoned a nice safe chance to donate to the local symphony and feel good about themselves, in favor of a life of feeling constantly uncomfortable with their decisions, looking extremely silly to normal people, and having Dylan Matthews write articles in Vox calling them “white male autistic nerds”.

Matthews is firmly underneath the effective-altruist tent. He writes that he’s worried that a focus on existential risk will detract from the causes he really cares about, like animal rights. He gets very, very excited about animal rights, and in his work for Vox he’s done some incredible work promoting them. Good! I also donate to animal rights charities and I think we need far more people who do that.

And yet, the same arguments he deploys against existential risk could be leveled against him also – “how can you worry about chickens when there are millions of families trying to get by on minimum wage? Effective altruists need to stop talking about animals if they ever want to attract anybody besides white males into the movement.” What then?

Malcolm Muggeridge describes a vision he once had, of everyone in the world riding together on a giant train toward realms unknown. Each person wants to get off at their own stop, but when the train comes to their station, the engineer speeds right by. All the other passengers laugh and hoot and sing the praises of the engineer, because this means the train will get to their own stations faster. But of course each one finds that when the train comes to their station, why, it speeds past that one too, and they are left to rage impotently at the unfairness.

And I worry that Matthews is urging us to shoot past the “existential risk” station in order to get to the “animal rights” station a little faster, without reflecting on the likely consequences.

This certainly isn’t to say we all need to get off at the first station. I myself am very interested in existential risk, but I give less than a third of my donations to x-risk related charities (no, I can’t justify this, it’s a sanity-preserving exception). I respect those who give more. I also respect those who give less. Existential risk isn’t the most useful public face for effective altruism – everyone including Eliezer Yudkowsky agrees about that. But at least allowing people interested in x-risk into the tent and treating them respectfully seems like an inescapable consequence of the focus on reason and calculation that started effective altruism in the first place.


427 Responses to Stop Adding Zeroes

  1. Geoff Greer says:

    “…we can predict pretty precisely the yearly chance of a giant asteroid hitting our planet, it’s way less than 10^-67…”

    I think you mean way more? (or way less unlikely)

    If there’s a way to offer corrections besides commenting, I apologize for not noticing it.

    Edit: Feel free to delete this comment so others will think you are a flawless writer. My ego doesn’t mind.

  2. Bugmaster says:

    Like that a person who wants to cure as much disease as possible would be better off becoming a hedge fund manager than a doctor.

    Aren’t there diminishing returns due to this type of reasoning ? If everyone chooses to become a hedge fund manager, and no one chooses to become a doctor, then it doesn’t matter how much money you could’ve potentially donated to medicine — because no one is doing medicine at all.

    Furthermore, if you happen to be really amazing at doctoring, and only moderately good at hedge funding, then the right move for you is still to become a doctor. In fact, if you totally suck at finance, then becoming a hedge fund manager is probably something you should avoid like the plague (which, incidentally, you could’ve been curing instead of losing all that money).

    Don’t get me wrong, I’m all for Consequentialism, but IMO this philosophy makes it way too easy to come up with totally wrong answers. It’s not as bad as Divine Command, but it’s no panacea, either.

    • Scott Alexander says:

      Yes, if everyone becomes a hedge fund manager and no one a doctor, then there will be no one to accept all the wonderful donations.

      (except I guess nurses, who are totally people who exist and who I secretly suspect would just keep treating people perfectly well if all the doctors suddenly disappeared).

      On the other hand, if everyone became a doctor, then we would all starve to death because there are no farmers to grow food.

      Since we are allowed to give the advice “become a doctor” without worrying about everyone starving, we are allowed to give the advice “become a hedge fund manager” without worrying about a total collapse of medical care.

      (I guess the short version is that becoming a hedge fund manager is the right decision on the margin, and until effective altruism is WAY bigger than it is right now, the margin is the correct way to look at this)

      (also, some effective altruist organizations are moving away from earning to give in favor of becoming things like economists doing development aid for the World Bank, but I think the instructive point between “doctor” and “manager” still holds).

      Obviously this is just for the average person. If you are Kanye West, the appropriate thing to do is become a rapper and make millions of dollars that way. Once again, all you can do is transmit statistical facts and tell people to adjust for their own situation.

      • Bugmaster says:

        Right, my point was that the advice should be not, “become hedge fund manager” or “become a doctor”, but rather, “become whatever it is that you have a good chance of being great at, then donate your money to whatever cause you want to support”. More specifically, I think that the effective altruism model — “pick a career that generates as much money as possible, then donate that money to charity” — is flawed, because it does not account for the fact that different people have different aptitudes; and also because it does not account for diminishing returns.

        I do agree with you that, at present, diminishing returns are not an issue; but, being a programmer, I’m always wary of spawning processes with no termination condition.

        But I think that variations in aptitudes are a bigger problem. If you are great at something like art, and not much else, should you still become a hedge fund manager ? In terms of sheer income potential, it’s possible that the answer is “yes”, because you will likely still make more money with finance than you’d make with art. However, I personally believe that a person who made such a choice would be committing a mistake.

        • ton says:

          because it does not account for the fact that different people have different aptitudes

          Have you ever gone through 80000 hours website? They have more than one option there, at least partly split by what you might be good at.

          and also because it does not account for diminishing returns.

          At the scale we’re talking about, diminishing returns aren’t coming into play yet, I believe. Besides, such returns would automatically find their way into the models as salaries go down.

        • Tom Womack says:

          “Become a hedge fund manager” is not the kind of goal that you can achieve simply by willing it: there are at least three very narrow gates to get through (get recruited by one of the big banks; get recruited by a hedge fund out of the big bank; get enough personal recognition among hedge-fund clients to be able to set up your own).

          If you are great at art and not much else, you won’t become a hedge fund manager – you probably won’t be spectacular enough while working the internship at Morgan Stanley to get into the bank in the first place. You might be spectacular enough while working an internship at McCann Erickson to end up with a reasonable career in advertising.

        • thedufer says:

          I’ve always read “become a hedge fund manager” as essentially shorthand for “become something that optimizes your money-obtaining” rather than actually actionable advice. I’m not sure if Scott has ever said that explicitly, though, or even if that’s what he means.

      • Carl Shulman says:

        You might want to clarify that it’s specifically something like “work as a typical doctor at NHS/in a rich country.” A doctor who goes into drug development for neglected diseases, or works for the WHO on malaria or Ebola, or otherwise targets their career at the lower-hanging fruit of medical QALYs does better. I know this is accounted for by 80k, and that you know it, it’s just not crystal clear in the post.

      • Deiseach says:

        The “everyone should become a hedge fund manager rather than a doctor” argument is one of the ones that makes me mistrust… dislike… not be inclined to the ‘make your donation choices by consulting GiveWell as recommended by Ethical Altruism’ argument.

        Sure, everyone becomes a hedge fund manager and earns loads of money and gives loads of money to charitable causes like sending anti-malaria bednets to African nations. Then all the malaria free people die because no-one is around to treat their sickle cell anaemia, flu, diabetes, or they cut themselves when fishing and contracted sepsis. (I’m pretty sure nurses would be included, since if a doctor’s or surgeon’s salary is inferior when compared to that of a hedge fund manager, a nurse’s salary is even more inferior and they have the opportunity to save even fewer lives).

        The same way everyone should give every spare cent to the top-ranked GiveWell charities (one of which is the same malaria netting). Great, that takes care of malaria. Now what about all the lesser-ranked diseases and conditions that don’t make the top three, like cleft lip and palate or post-partum fissures?

        The theory is wonderful, the figures are impressive, and I’m sure that all the sums add up. But it’s such a perfect wonderful model that I think the messiness of reality is forgotten. Yes, it’s vitally important that we think about future generations – but if we’re going to ignore real present suffering for hypothetical as yet unborn billions, then I think we’re going wrong. And yes, it gives me grim amusement to see the argument of the future-unborn outweighing the present-day born, as does it not yield to the same argument as pro-abortion rights? Yes, that’s a potential person but it’s not a person yet, and where its rights to exist conflict with the rights of the already existing person or persons, it has to yield up its life?

        • AngryDrake says:

          Yeah, I find it incredibly inconsistent to say that giving to charity to save poor third worlders is important because those that die could become humanity-saving geniuses, while simultaneously insisting that contraception and abortion are entirely fine.

          • James Picone says:

            Doesn’t the same logic lead you to condemning the spectacle of women who aren’t pregnant or attempting to get pregnant every moment they possibly can be?

          • AngryDrake says:

            Pretty much, actually.

            With correction taken for infrastructure to support those women and their kids (getting pregnant without a support structure will likely lead to overall less successful reproduction than more restrained approaches), realities of reproductive health (not beginning too early, as that’ll probably lead to miscarriages and other issues that lower expected output), and similar issues, of course.

          • Deiseach says:

            AngryDrake, I was objecting more to the line of thought that “preventing AI risk is super vital important because we could have hypothetical millions of billions of persons saved and flourishing, so ignore the millions alive today living in poverty and bad conditions since it’s more efficient, by the calculus, to give to AI risk research than to a fund to help feed the hungry”.

          • AngryDrake says:

            That is objectionable, too, on the “abstract helping of strangers far away” grounds that me and Jaskologist mentioned.

          • ChristianKl says:

            Giving money to poor third worlders isn’t likely to increase the numbers of them. Richer people generally get fewer kids.

            Also who made an argument about Give Directly that is about those people getting humanity saving geniuses?

          • ton says:

            “who made an argument about Give Directly that is about those people getting humanity saving geniuses?”

            https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/#comment-226685

          • houseboatonstyx says:

            Yeah, I find it incredibly inconsistent to say that giving to charity to save poor third worlders is important because those that die could become humanity-saving geniuses, while simultaneously insisting that contraception and abortion are entirely fine.

            The principle of a US child being more productive, extends to a US wanted child being more productive than a US un-wanted one.

          • RCF says:

            “Giving money to poor third worlders isn’t likely to increase the numbers of them. Richer people generally get fewer kids.”

            I think that it’s rather clear that the number of kids versus income graph must have some region with positive slope.

        • Murphy says:

          Keep in mind that the “become a hedge fund manager” advice is based on the current situation where many charities have volunteers lining up to help while being strapped for cash.

          In a world that already has many hedge fund managers channeling their money to the most effective charities but a lack of volunteers the most effective choice might be to volunteer.

          If you imagine it as a market where the options all have some expected return, the rational thing is to invest in the cheapest item. This moves the price and the guy walking in the door behind you might look at the same market and see that the most effective choice is to invest in the next option which is now the cheapest.

          I’m going to agree that I don’t buy arguments based on numbers of future people. I don’t believe in there being a moral incentive to maximize the number of future people, though I would attach some weight to the future suffering of people who have yet to exist. I don’t care if there could be a trillion people on earth; that’s not a goal in my mind. But I do care if the grandkids of the current generation might live in misery.

        • Jordan D. says:

          I very much enjoy reading your posts, Deiseach, but you seem to be very set against the EA movement in every particular and I’m not precisely clear on why.

          Like, this post here is an argument against relying on GiveWell. (Sort of- the first argument against literally everyone in the world becoming a hedge fund manager doesn’t have anything to do with GiveWell. Also, the consequence of your argument is less ‘nobody can treat the sickle-cell anemia’ and more ‘oh whoops we converted all industry and agriculture to hedge funds and now everyone shall perish from this earth.’) You’ve made the argument against a heavy focus on malaria nets before, and maybe against the deworming and direct-giving initiatives they also recommend. That’s all fine- if you think that GiveWell’s criteria for ‘best charities’ are ill-formed, it is sensible to reject them and make your own.

          But then that isn’t a rejection of effective altruism at all! It’s an object-level disagreement, not a meta-level one. Is the object-level where you disagree with the EA people?

          The last part of your post seems to dispute that. I find a resistance to being pushed around by simple and elegant models commendable- as Scott likes to say, System 1 trades against System 2 all the time, and some people are way too willing to indulge in untested hypotheticals. On the other hand, System 2 trades against System 1. X-risk seems to me as though it’s controversial even in EA circles (which include, for example, a lot of people from the Bay Area who are unusually willing to take X-risk seriously), and MIRI somewhat controversial even in X-risk circles. The argument against Bostrom is, even if wholly correct, weak evidence against the EA movement.

          (This takes us back to GiveWell, which famously does not recommend X-risk charities- at least not on the front page- for what I consider very good strategic reasons.)

          So I’m left stuck. I’m not sure if you’re just saying that GiveWell doesn’t do a very good job at what it tries to do or if you’re saying that there’s a deeper fault in the entire idea of trying to do as much good with your money as you can. I would love to hear an elaboration.

        • RCF says:

          “Then all the malaria free people die because no-one is around to treat their sickle cell anaemia, flu, diabetes, or they cut themselves when fishing and contracted sepsis.”

          At the risk of being accused of making the Courtier’s reply, I think that that is addressed by the most basic of explanations of EA, and making that argument shows that you haven’t done much research into EA.

          “And yes, it gives me grim amusement to see the argument of the future-unborn outweighing the present-day born, as does it not yield to the same argument as pro-abortion rights?”

          Not at all. First of all, EA consists of giving money, not of demanding the government give money. A closer analogy would be to a person making a personal choice to not get an abortion. And even then, there are all sorts of further differences, such as that one might believe that having an additional person now will reduce existing people’s utility, but in the future the carrying capacity will be increased.

      • Adam says:

        The average person has a far greater chance of convincing an existing hedge fund manager to donate to their pet cause than of becoming a hedge fund manager.

    • William O. B'Livion says:

      Linear projection usually has limits.

      https://www.youtube.com/watch?v=nEI19kJ5GfU

  3. QuiglyQork420blazeit says:

    But will the existential risk station serve hot dogs?

  4. Rick Hull says:

    > nothing is ever 10^-67 and you should never use that number.

    Really? What are the chances of rolling a 10-sided die 68 times and having them all result in the same value?

    A chain of 67 events, each with a 10% chance of occurring — are you really saying this sort of thing should be off the table for discussion? That seems like a reasonable model for arriving at interstellar travel and full brain emulation to me.

    • Scott Alexander says:

      I didn’t mean contrived scenarios where you use a bunch of conjunctions to get 10^-67. In fact, that’s why I gave the tornado-meteor-terrorism-double-lottery-Trump example – to show how wacky you would have to be in order to get there.

      I’m not sure what you mean by the last sentence. If you mean you think there’s a 10^-67 likelihood of interstellar travel, I’m going to have to disagree.

      (I was going to add “unless you mean next week”, but I think even the probability of interstellar travel next week is higher than that – someone could invent the teleporter)

      • Fnord says:

        It’s not just that we get interstellar travel and brain emulation eventually. It’s that causes we donate to now affect our chances of getting to that point. Which does require a conjunction of many specific events.

        • Nyx says:

          This argument seems too easy to apply to other things. Such that many events with clearly decent probabilities “should”, by this argument, basically never happen (yet they do).

          For lack of a better example, space travel requires a conjunction of many specific events, particularly a “we go to the moon” type thing where we expect money to directly translate into discoveries, etc. that let us get to the moon. Yet it happened (and could probably have been expected to happen with decent probability).

          Maybe it’s not specific events, but “any number of workable events” that makes the probability more likely, with many different potential paths to reach the goal. Which would also apply to FAI donations.

          • Fnord says:

            This argument has nothing to do with saying things “‘should’…never happen” in the future. That’s specifically the difference I’m trying to draw; the difference between “space travel will happen eventually” and “we should give money to this specific inventor in 1860 to make space travel more likely”.

          • Nyx says:

            what do you make of my example though… there are clearly many specific examples of particular sums of money having predictable outcomes, despite seemingly requiring a conjunction of many events. E.g. “we choose to go to the moon”.

            This seems like a very general counterargument, as many things require a conjunction of specific events.

            E.g. when I say “should… never happen”, I mean that people make plans that seemingly require many conjunctions of events, yet these things happen with far higher probability than your argument would predict.

          • RomeoStevens says:

            You’re ignoring that many things did go wrong with the plan…and then humans reached into the plan and altered it to once again maximize the probability of getting to the moon. This was done over and over again until we got to the moon.

          • Tom Womack says:

            Space travel requires only one special event: the will not to give up when things get difficult. So it doesn’t require a whole series of unlikely events the absence of any one of which terminates the project; it requires the will not to terminate the project when one thing breaks, so you keep building better rockets even while the bad rockets are blowing up on the pad.

            The probability of seeing the same number 68 times in a row on a die is probably no less than 10^-6; any time that the circumstance comes up, you can assume that the die being used is biased, and once you’re counting to 68 you can assume that the die being used is incredibly biased. So it’s the probability that the person you’re rolling dice with happens to be Feynman and he has a joke to play.

          • Nita says:

            @ Tom Womack

            If by “space travel” you mean “throwing some people into space”, then you’re right. Otherwise, we’re going to need a few amazing new technologies. (I don’t know if you’ve noticed, but most of space is rather inhospitable, and the hospitable places are rather far away.)

          • Anonymous says:

            Space travel requires only one special event: the will not to give up when things get difficult.

            It also needs magic energy or really lucky new long-range physics. If we’re talking about carrying our propellant, the theoretical limits for specific impulse of fusion-based propulsion isn’t that much higher than that of fission… and the theoretical limits for something really exotic like anti-matter isn’t much higher than that. The only really conceivable solution at this point would be something like a Bussard ramjet, so that we *didn’t* have to carry our propellant.

            The other solution would be if we got really lucky and found a way to poke wormholes through folded M-branes (…which we aren’t even sure are things). It’s very plausible that physics *just doesn’t work like that*, so it wouldn’t matter if we had the will to not give up.

          • houseboatonstyx says:

            If we’re talking about carrying our propellant, the theoretical limits for specific impulse of fusion-based propulsion isn’t that much higher than that of fission…

            If the limit were food rather than propellant, that was a parallel argument against Columbus: The distance is too far, you couldn’t carry enough food for such a long voyage.

          • “If the limit were food rather than propellant, that was a parallel argument against Columbus: The distance is too far, you couldn’t carry enough food for such a long voyage.”

            A correct argument. He was trying to get to the east end of Asia, or at least that was what he said he was doing. He was running low on food and water by the time he had covered a fraction of the distance.

            Columbus was either a con man who somehow guessed the existence of the Americas and used the riches of India story to raise money or a lucky idiot who fudged the known numbers (circumference of Earth and width of Asia) in a way that should have gotten him killed.

          • houseboatonstyx says:

            Yes, the food argument was correct but missing a qualifier: “… unless you find places to restock along the way.”

            We know some of the stuff that’s out there. Now to design an engine that can use it. 😉

        • Isaac says:

          This. Knowing the effect of any donation now requires many conjectures about specific future events. Each particular event, such as the AI hypothetical, may have a reasonably large likelihood. However, putting money towards any method to reduce that risk involves speculating about many specific details of the risk and how to prevent those details from occurring. Further, you have to take into account new risks that your actions may create (e.g., maybe slowing AI research will reduce our ability to prevent an asteroid from hitting the earth or…). As Matthews suggests, you end up being able to show whatever you want to show when you distort math in this way.

        • Eli says:

          It also assumes that after we donate, nobody gets hit by a truck on the way to work tomorrow. Oh, and that these organizations are actually competent at their jobs.

          Personally, I think it’s far more rational to deal with the highest probabilities first. Arguments are samples, not distributions. You make correct decisions by considering a whole distribution, not by considering isolated possible events.

      • Eric says:

        10^-67 is a hard number to work with… One famous Bayesianist argument is the Dutch Book. We play a game: You estimate the probability of some event and I give you odds on it.

        But I don’t know how you could construct an odds game on a teleporter being invented tomorrow. I can’t give you 10^67 to 1 odds because I don’t have that much money. No one does. Is there a way of reformulating the game so we can deal with such absurd odds?

        • Nornagest says:

          Hmm. Well, to a first approximation, utility scales as log-money. So to get the same effect, you’d need to find something that’s linear in utility and make a bet denominated in it at 1:67 odds.

          The first thing that comes to mind is favors, but although I think those are closer to linear in utility than money is, I don’t think they get there.

          • Luke Somers says:

            Utility may be log-money, but that doesn’t make that math work right. There are several problems, actually, but the biggest is that you need something that’s linear in utility and then bet 10^64 of them. You’re right back where you started in terms of practicability.

      • Rick Hull says:

        I’m not sure what you mean by the last sentence. If you mean you think there’s a 10^-67 likelihood of interstellar travel, I’m going to have to disagree.

        Interstellar travel for a few dudes isn’t really meaningful to the Earth population. And it’s a one way trip. They can never come back to the world they left. I thought we were snapshotting the human race on a glorified SSD and shooting that into a nearby star. So that means both interstellar travel and full brain emulation. I suppose that leaves aside the question of whether we ever take life forms again and have love and sex and kids and stuff, or just remain as threads on a JVM in a rock orbiting Sol 2.0

      • Deiseach says:

        So why should I trust Bostrom’s “10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers” figures?

        He very plainly has a cause he believes in and wants to make as appealing as possible. Why isn’t he simply shoving on as many zeroes as he thinks he can get away with in order to make a point? After all, both interstellar travel and emulation of human brains are large, difficult problems; linking them together in order to do some juggling with “If these two highly difficult problems are simultaneously solved, we might be able to do some extrapolation about how many more humans could live if we find habitable – by stretching that definition – planets to set up working colonies and humans keep reproducing at optimum rates or else humans are content to be ghosts in the machine so we can have zillions of copies! Zillions, I tell you!” is not particularly convincing to me.

        Tiny, tiny amounts (billionth of a billionth of one percent) wouldn’t convince me to buy a lottery ticket, much less “Hey, at some undetermined time a descendent in my family line can become an immortal computer emulation in a computer system farmed out on an exoplanet otherwise uninhabitable by organic lifeforms fifty or more lightyears away!”

        • Gbdub says:

          I have the same objection. Scott laughs at 10^-67, but swallows 10^54 without reservation. Which seems odd, especially when reaching 10^54 more or less requires violating some of the major known laws of physics.

          I think Scott’s being uncharitable to Matthews – to me it reads like he’s intentionally adding a ridiculous number of zeros as a way of mocking what he thinks the x-risk people are doing (writing out the zeros instead of using scientific notation seems like a big flag). I don’t think he thinks that donating to an x-risk charity literally has a 10^-67 chance of being beneficial.

        • Luke Somers says:

          … because the 10^54 was derived from numbers we have considerably more solidly than a number specifically pulled out of nowhere?

          We can see a basic minimum of people per star (which in turn is a basic minimum of energy extraction), times a reasonable estimate of years, times some number of stars.

          Zapping the last factor down to 1 still yields an enormous number, btw. I would consider X-risk very significant even if there WERE no other stars.

      • vV_Vv says:

        In fact, that’s why I gave the tornado-meteor-terrorism-double-lottery-Trump example – to show how wacky you would have to be in order to get there.

        Most improbable scenarios, even those with a probability equal or lower than 10^-67, are not wacky at all. Your tornado-meteor-terrorism-double-lottery-Trump scenario is not a proper central example of an improbable event. Using it as such will lead your intuition astray.

        • Luke Somers says:

          Please give two. I suspect their lack of wackiness would only be apparent because of lack of familiarity with the system described.

          Like… how likely do you think it is that an NFL team would have a final score of 4 in a regular game? Only getting two safeties. Around one in a hundred million, say? More than that? It’s happened before, but not with modern rules or strategy. Let’s call it one in a hundred billion.

          So, this would be the on-season-weekly probability that at least 5 American NFL teams will have a final score of 4.

      • Njordsier says:

        I’m not sure whether I believe this myself, but it did occur to me that the proposition “we will invent interstellar travel” is implicitly a product of a bunch of conjunctions that we simply can’t know yet.

        Even given what we do know, interstellar travel has a lot of things going against it. There’s the lightspeed barrier, for one thing. And there’s the incredible, unfathomable amount of energy it would take to accelerate any macroscopic amount of mass to such a speed. And the huge amount of mass that you’d have to pack in order to just get a foothold on a colony. And the huge amount of energy it would take just to get that much mass into orbit.

        I can think of a logical progression of technology that could solve all of these things, and science fiction writers have been doing likewise for decades. But each of the assumptions I might make in such a logical progression themselves imply a huge conjunction of events, some of which may really be showstoppers.

        The tornado-meteor-lottery-Trump scenario is obviously absurd because the assumptions are well-defined. But the notion of interstellar travel contains within it a lot of assumptions too.

        After all, we got the Fermi Paradox thing going on. A non-negligible explanation for that is that interstellar travel is nigh impossible, at least weighed against the product of the expected rate of growth of an interstellar civilization, the density of interstellar civilizations in a galaxy, and the amount of time they have had to colonize everything.

        • Luke Somers says:

          It’s only that conjunctive if you think there is one way to try.

          I haven’t seen laser launch and sail braking mentioned. That drops the propellant-carrying limitation by a very large margin.

          Even if you don’t think WBE will work, there are other ways to spread humans all over the place – send robots who build an environment, and once they’re ready, they can vat-grow humans and raise them with whatever instructional/caring/etc environment you can program in. Note that the program does not need to be included in the initial mission, since it can be transmitted later with only a few years of delay.

          And so it goes. It’s not nearly as conjunctive as it looks.

        • houseboatonstyx says:

          Yes, interstellar travel is too far out for me. But there’s plenty of solar orbit space, and the problems of building orbital colonies are the sort of problems we can solve.

    • John Schilling says:

      What are the chances of rolling a 10-sided die 68 times and having them all result in the same value?

      Almost exactly the same as the odds of rolling a 10-sided die 20 times and having them all result in the same value. There comes a point where the question isn’t, “what are the odds of a theoretical fair die acting this way”, but “how badly rigged is this die and how did I not catch that beforehand?”

      Some guy named Yvain had something to say about this a while back.
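
      A minimal Bayesian odds sketch of that point; the prior on “rigged” and the behaviour of a rigged die are illustrative assumptions, not measured numbers:

      ```python
      from math import log10

      # How 68 identical d10 rolls shift the odds toward "the die is rigged".
      P_RIGGED_PRIOR      = 1e-9     # generously tiny prior that the die is rigged (assumed)
      P_SAME_GIVEN_FAIR   = 0.1**67  # 68 rolls of a fair d10 all matching the first
      P_SAME_GIVEN_RIGGED = 0.9      # a die rigged to show one face usually does (assumed)

      posterior_odds = (P_RIGGED_PRIOR * P_SAME_GIVEN_RIGGED) / (
          (1 - P_RIGGED_PRIOR) * P_SAME_GIVEN_FAIR)
      print(f"odds rigged:fair after the rolls ~ 10^{log10(posterior_odds):.0f} to 1")
      ```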

      • Fnord says:

        I played a game involving lots of d10s the other night. We must have rolled the d10 at least 68 times. If you’d asked me before the game what the odds of the specific ordered sequence rolled was, I would have correctly said it was 10^-67.

        • ton says:

          If you’d asked me before the game what the odds of the specific ordered sequence rolled was

          If someone asked you before the game a specific sequence and that actually happened, I’d bet at >99:1 odds that they rigged the die.

          • John Schilling says:

            More to the point, if someone asks “what are the odds of this specific sequence of sixty-eight die rolls occurring”, even before you know how it turns out you ought to know that the odds are >>>1E-67 that they are a very talented sleight of hand magician setting you up. Even if they are a gaming buddy that you’ve known for years as a klutz with utter disdain for theatrical magic.

            If you’re using math to solve a real-world problem, the odds are >>>1E-67 that you do not understand the real problem and are not using the right equation. And feel free to put about sixty more ‘>’ symbols in there.

  5. brad says:

    The other number in this piece that deserves similar treatment to 10^-67 is 10^54. That’s a huge number.

    Also, I’m sure Bostrom addresses this but the argument needs to explain why hypothetical non-existent future people shouldn’t be heavily discounted, or even disregarded, as compared to the living.

    • Caleb says:

      the argument needs to explain why hypothetical non-existent future people shouldn’t be heavily discounted, or even disregarded, as compared to the living.

      And/Or, why Effective Altruists shouldn’t become radically anti-abortion.

      • For the record, I’ve observed this coming up once or twice in discussion before. I’ve observed it publicly brought up in discussions on moral uncertainty. I don’t recall the discussions in question to provide you with a link. Anyway, those discussions didn’t go anywhere, because they were taking place in a very public Facebook group with 5,000+ members and could be mined by any reporter trying to dig up dirt or whatever on effective altruism. It’s not my impression that there was a deliberate behind-the-scenes effort to hush such discussion, based on P.R. considerations. I suspect everyone quietly concluded on their own it was a very difficult and challenging conversation to have in public, and just ignored such comments. However, I don’t have enough confidence to assign any sort of comfortable probability to either hypothesis, so take what you will from them.

        It wouldn’t surprise me if one or more philosopher working on moral uncertainty issues and associated with effective altruism were privately researching or discussing the problem posed on their own time. Obviously, they’d want to not only be very confident in their conclusion on the problem before they came out with it, but frame it as best they could to minimize outrage stirred. This would be or will be no mean feat.

        The discussions closest I can now specifically recall on this issue are two questions I asked Will MacAskill during his recent Reddit AMA. I took the challenge of asking him anything to heart, and posed what I thought were some challenging questions. Of interest, I asked

        What would you think of decreased moral uncertainty leading to conclusion effective altruists should advocate for causes outside of the Overton window? For example, decreasing legal access to abortion, or ending aid to a particular country merely because you don’t want to support its aversive values?

        Dr. MacAskill didn’t respond to that. When I asked him which of Peter Singer’s positions he disagreed with, he replied “infanticide is the obvious one!!” and left it at that. This is really just more evidence, though, that the faces of effective altruism and contemporary utilitarianism are reticent to air their positions on abortion outside their element of academia. I can’t blame them for that when I catch them off guard on the Internet, though.

        • Caleb says:

          Interesting insights, thanks for sharing.

          Your comment leads me to think that there is space for more refined concepts within the concept of something being ‘outside the Overton Window.’

          Here, both AIx risk and anti-abortion are, in some sense, outside the Overton Window. (At least in San Fran and similar areas.) But not in the same way.

          Addressing AIx risk is seen as weird; but in a quirky, fun, gosh-that’s-different sort of way. There are a few very high status people who subscribe to the view. There is some push-back (as seen in Scott’s OP), but it’s mostly of the “let’s check your math” variety. The Effective Altruists have no qualms about being seen as weird or as standing out and questioning the status quo in very public discussions. In fact, they seem to love it.

          On the other hand it seems that, from your comments, even considering public discussion on the implications of effective altruism on the moral status of abortion is simply out of the question. I’m just speculating, but I’d anticipate the push-back surrounding the discussion to be less of the ‘let’s check your math’ variety and more of the ‘burn the heretic’ kind. If there are any high-status folk who question the status quo on this matter, they hide it well. If some member of the EA community came out with a rigorous, knock-down argument for opposing abortion, I’m guessing the community’s acceptance (if any) would be somewhat less than enthusiastic.

          So, two ethical positions, both outside the Overton Window. One relatively inert, one pure dynamite. And the most consciously “rational” community we have still avoids discussion of the latter like the plague. There needs to be some way we can distinguish between these two kinds of “outside,” and maybe find a way to render the 2nd type inert for the purposes of rational ethical analysis.

      • Adam says:

        I’d imagine a far greater return to forcing pregnancy on every woman capable of child-bearing than on stopping abortion. The number of women not having children because they abort them has to be smaller than the number of women not having children for all other reasons.

        • Vulture says:

          Mass forcible insemination of humans is a little further outside the Overton Window than banning abortion, though. Abortion would be the low-hanging fruit.

          • Adam says:

            Divert all medical research money to test-tube cloning and artificial wombs, then. Perfect that, and no matter how much utility loss you get from bad health, you can always make it up in aggregate by just creating more people.

    • OldCrow says:

      Exactly my thought when I read this article. Well, I also had a few choice words, but if a nerd rants in his armchair and no one is around to hear them…

      To put this number in some perspective – Bostrom argues here that Earth will be habitable for around a billion years. This gets him to an estimate of a mere 10^16 future lives, nowhere even remotely close to the 10^52 cited in the article. The problem is that he talks about human lives, which is pretty optimistic considering that a billion years ago the most interesting critters on Earth were multicellular organisms.
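
      A rough reconstruction of where a figure on the order of 10^16 can come from; the sustained-population and lifespan inputs are illustrative assumptions, not Bostrom’s exact numbers:

      ```python
      # Earth-bound estimate: (habitable years) x (people alive at a time) / (years per life).
      habitable_years      = 1e9   # Earth habitable for roughly another billion years
      sustained_population = 1e9   # assumed sustained population of about a billion
      years_per_life       = 100

      future_lives = habitable_years * sustained_population / years_per_life
      print(f"Earth-bound future lives ~ {future_lives:.0e}")   # ~ 1e+16
      ```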

      Somehow I don’t think I’ll care too much about whatever life inhabits Earth a billion years from now. Of course they’ll feel the same way about me, and I’ll be dead.

      • LCL says:

        You surprised me by framing it like that and then going to “so I won’t care about them.” The way you framed it had my intuition going the other way – if [billion-years-from-now critter] is to humans as humans are to simple multicellulars, shouldn’t we care much more about saving each potential BYFNC than each potential human?

        • Anonymous says:

          By that kind of standard shouldn’t the chickens be happy to die for us? We at least treat them a lot better than how we typically treat single-celled organisms…

        • OldCrow says:

          Fair point. My ethics (which are basically just personal priorities and really shouldn’t count) are shamelessly speciesist. No reason to expect other people to agree with me there! Taking it in the other direction is more interesting anyway.

          What’s a BYFNC?

        • brad says:

          If the BYFNC “count” then why not the AI that everyone is always maligning as unfriendly? Maybe it’ll have 10^54 human-equivalent years of life.

          • Luke Somers says:

            Because it would be working to one unified goal that we think is wrong, and it will never allow that goal to change. A large number of distinct individuals with their own desires and preferences etc. seems to me to be a great deal more attractive a future.

    • I believe this is a question of population ethics. I’d guess Nick Bostrom, as well as Toby Ord and maybe Will MacAskill try working on this issue. Check their C.V.’s for relevant citations.

    • I believe your latter point falls under the umbrella of population ethics. I suspect Nick Bostrom, as well as Toby Ord and Will MacAskill, have done work on this problem. Check their C.V.’s for relevant citations.

    • Murphy says:

      As far as I can tell this is sort of assuming that close to all 1.661×10^56 moles of atoms in the observable universe are converted into people running on computronium or into energy for running the computronium.
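
      For scale, that moles figure is just the familiar ~10^80 atoms of the observable universe re-expressed; a one-line check, with Avogadro’s number as the only added input:

      ```python
      # 1.661e56 moles of atoms, converted back to a raw atom count.
      AVOGADRO = 6.022e23
      moles_of_atoms = 1.661e56
      print(f"atoms in the observable universe ~ {moles_of_atoms * AVOGADRO:.1e}")  # ~ 1.0e+80
      ```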

      • gattsuru says:

        At least from reading his published work, it looks like it’s based around colonizing and converting the Virgo supercluster, rather than observable universe, and the calculation rests on some rough average of the number of computations the average star can support in energy, and the computational processing necessary to support a human-like brain, rather than explicit points of matter.

        These are likely wrong, but it’s hard to believe any of them are off by more than three orders of magnitude, and there’s only three or four places to make such a mistake.

    • AlphaCeph says:

      To avoid Pascal’s mugging you must have a bounded utility function, even if you try to be as close to a perfect humanitarian utilitarian as possible.

      To avoid the repugnant conclusion you need to do something more sophisticated than either average or total utilitarianism.

      So, yes, there probably is going to be rather a lot of scaling applied to that number in a reasonable system of consequentialist ethics.

  6. Pku says:

    I don’t think I trust the doctor calculation: If you calculate things from a “total effort invested” point of view, a doctor puts effort directly into increasing total utility while the hedge fundy (arguably) doesn’t. As you said, investing the same amount of effort in third-world countries gives better results, but I think it’s fairly uncontroversial that being a doctor and volunteering in Haiti does more net good than being a doctor in New York. (Besides, I feel like this post https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/ makes some pretty good arguments against this approach).
    Also, about existential risk, it still seems like donating to an effort to fight global warming does more good than to the friendly AI thing (or at least if donating to friendly AI research, I have enough doubts about Eliezer’s approach that I’d look for someone else to donate to).

    • Scott Alexander says:

      “I think it’s fairly uncontroversial that being a doctor and volunteering in Haiti does more net good than being a doctor in New York.”

      My father and I both used to volunteer in Haiti. We talked to a lot of people there, and a lot of the smartest ones were worried that American doctors volunteering there did more harm than good, because we would see patients for free, which meant the actual Haitian doctors couldn’t compete and so they mostly moved abroad to places they’d get paid better. Although American doctors can do some good, you need a real native medical industry to have consistent care. After hearing a lot of people say this, we both agreed to stop volunteering there.

      Also, we are less able to do good in Haiti than you might expect. Also also, even if it was exactly as good as you might expect, it still probably wouldn’t be 300 QALYs a year.

      • Pku says:

        Interesting. In terms of direct action (rather than donating money), do you think there would be a way to use your first-world medical training to help the situation there (e.g. by helping teach Haitian doctors, or going somewhere else where the local doctors actually are overwhelmed)?
        Also, about the 300 QALYs- my argument was that your donation calculation should also work out if you calculate by “effort invested to do good” instead of by money. Investment bankers don’t directly increase total utility (I guess you could argue that they help redistribute resources more efficiently overall by predicting the market, but I’m not sure if that argument works out). In this case, they’d promote more effort being directed where it’s needed most, but someone still has to invest that effort somewhere down the line.
        (The 300 QALY calculation doesn’t seem to work out for other reasons though – it’s hard to make a lot of money. Most people have a really hard time making as much as a doctor, so if you’re a doctor you’re probably making more money than you would otherwise.)

        • anodognosic says:

          >someone still has to invest that effort somewhere down the line.

          Yes, but you can pay someone else to invest that effort, usually at an enormous comparative advantage, because their labor is cheaper (e.g. paying local builders to build a well) and/or because they’re better at it than you are (thus college students should not attempt to build houses for impoverished communities). Right now, the balance is lopsided – enough people are willing to do the work (including low-wage workers who may not care about the cause but would make the effort for pay), but there is too little money. For most people, choosing direct effort simply does less good – donating instead produces more direct effort invested overall, even if it’s not *their* direct effort.

          The 300 QALY number was brought up specifically in comparison to what a doctor could do through his or her direct effort. Someone who isn’t a doctor may be making less money, but the value of their direct effort at helping is also lower – they might donate enough to produce, say, 50 QALYs, but their direct effort as, say, a computer engineer or human resources specialist, would produce less than one.

      • chaosmage says:

        Holy shit, I didn’t know that. I’m your fan already, and then you come out and mention another totally awesome thing you did. Guess I’ll need to read that previous blog in its entirety.

        Enjoy your awesomeness, dude!

      • Tom Womack says:

        Scott, would you mind deleting the two pages of two-year-old Viagra-spam comments from the end of your LJ article http://squid314.livejournal.com/297579.html on Haiti?

        • Scott Alexander says:

          *Every* article on that old journal has spam on it. LiveJournal is just very bad at anti-spam control. I don’t want to remove spam from a thousand old blog posts, so I let them be.

      • Deiseach says:

        Volunteering works best when it’s in conjunction with training up local replacements. Short-term stints probably don’t hurt too much, but they usually benefit the volunteers more than the locals. Going to donate time and skills that don’t exist or are not readily available in the locality is good, but what’s needed there is very long-term commitment, rather than the usual six-month to one-year stints.

        I’m coming at this, though, from a religiously-based viewpoint (e.g. our parish partnered with a parish in Nigeria, same order of nuns, etc. so donations and organisations here working with organisations on the ground out there) – feel free to ignore the above if it’s not relevant to your interests 🙂

      • vV_Vv says:

        My father and I both used to volunteer in Haiti. We talked to a lot of people there, and a lot of the smartest ones were worried that American doctors volunteering there did more harm than good, because we would see patients for free, which meant the actual Haitian doctors couldn’t compete and so they mostly moved abroad to places they’d get paid better.

        Market distortions are a general consequence of charitable work or donations, and they are probably more significant the more efficient a charity is.

        For instance, giving away malaria nets to people in malaria-infested areas may put local malaria net manufacturers and retailers out of business. This isn’t just a theoretical concern; if I recall correctly, it actually happened in some countries (I can’t remember where).

        Similarly, giving away cash to anybody who makes less than X dollars a month may put out of business any employer who paid less than X dollars.

        • Luke Somers says:

          Isn’t that greatly mitigated if you prefer to obtain supplies locally?

          • vV_Vv says:

            To some extent, but then prices in the supply chain of the stuff you buy increase, and these businesses may become less efficient since they may be effectively living off subsidies.

            I’m not saying that the market should be worshiped; in many cases it may be acceptable to cause market distortions in order to achieve a socially desirable outcome (all welfare programs do it), but it must be understood that this essentially always occurs.

            Hence Scott’s argument that charitable activity X is bad because it causes market distortions is overly general since all of them do.

    • Nyx says:

      Global warming isn’t an existential risk though, at least not as commonly discussed – worst case scenario is that it kills a couple billion people (very very bad!). But the future – where the vast majority of lives are – still happens.

      Also, it is controversial to claim that a doctor in Haiti does more good than the doctor in New York because, if the doctor in New York donates the money he makes, he can potentially fund multiple cheaper doctors (or nurse/doctor training programs) in Haiti. This appears to do more good than just volunteering in Haiti, as it results in multiple doctors’ worth of work in Haiti rather than one.

      • walpolo says:

        The social conflict from 2 billion people dying off seems decently likely to cause a nuclear war, though. So there’s your x-risk.

        • John Schilling says:

          We went through this a couple of topics ago. Nuclear war is not an X-risk. Not even with global fallout and nuclear winter thrown in. These are kill-billions-in-this-generation risks that, if manifested, will be obscure trivia questions in ten thousand years.

          • satanistgoblin says:

            How sure are you though?

          • Anthony says:

            Are we even sure that AI is an x-risk? Assuming that a superintelligent AI that is not intrinsically friendly is developed does not automatically lead to the conclusion that it’s gonna kill us all. I would think it much more likely that a conscious superintelligent computer entity would have its own goals and values which do not coincide with ours, but it seems much less likely that those goals and values would *require* exterminating humanity.

            Given the current state of, and existing stock of, remotely-controllable machines which can do useful work, I think it much more likely that a UFAI would want to enslave humanity to ensure that the power stays on, that the computers the AI resides in remain operational, and that whatever else the AI wants gets done. So the x-risk of unfriendly AI is much, much lower than advertised, though the possible negative consequences are pretty large, and moderately likely given UFAI. But that puts it on the order of asteroid strike or nuclear war hazard, both of which have their own small level of x-risk.

          • John Schilling says:

            Hence, X-risk rather than X-certainty. I’m betting on no extinction even if AI turns out to be generally unfriendly.

            However, if the AI’s sole concern is keeping the electricity on, robots are probably a more reliable long-term solution than human slaves. And if a primate hedge against general robotics failure is desired, genetically engineered post-human slaves are preferable to baseline human slaves. So I don’t see that particular issue as doing more than postponing the extinction of humanity by a century or two.

          • Adam says:

            Assuming AI is sufficiently smart, it can just engineer better humans who derive tremendous pleasure from slavery and everyone wins. Or it might replace 10 billion humans with 80 quadrillion chickens and everyone still wins.

        • walpolo says:

          I think there’s a semantic question here… it’s really hard to imagine a big nuclear war not *hastening* human extinction by a lot. I think it’s pretty likely humanity will be able to eventually settle space if nuclear war doesn’t happen, and pretty unlikely if it does happen.

          By your standards, I feel like there are maybe two actual x-risks, evil AI and gamma-ray bursts. Maybe asteroids, although I doubt an asteroid would mean total extinction either.

        • John Schilling says:

          There are no asteroids big enough to cause human extinction and on plausibly Earth-impacting trajectories in this millennium. Long-period comets are an outside chance, on the order of one in a million per millennium.

          AI risk, yes, small as it is that’s a big part of the human extinction risk for the foreseeable future. Really, all of the plausible extinction scenarios are either human mishaps or outside-context problems like Haruhi Suzumiya getting bored.

          Since humans would have a hard time extinctifying themselves without technological civilization, anything that knocks back technological civilization for a decade or a century buys us that much more time. Well, that’s the crude first-order effect. The second-order effect is, would a post-apocalyptic civilization be more or less likely than ours to flirt with X-risks?

          • Pku says:

            I think the earth has had about 5 mass extinction events over the last half-billion years, none of which I think modern technology could solve, so that’s about 10^-8 per year odds of a surprise mass extinction (assuming those things are equally likely in any given year; I’m not sure how any of them except the dinosaurs happened, though).
            (Also, props on referencing my favourite anime).
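
            For what it’s worth, a quick version of that base-rate arithmetic, using the conventional “Big Five” extinctions, all of which fall within roughly the last 540 million years:

            # Rough base rate for a Big-Five-scale mass extinction, treating any
            # given year in that window as equally likely to contain one.
            big_five = 5
            window_years = 540e6        # the Phanerozoic, where all five are recorded

            rate_per_year = big_five / window_years
            print(rate_per_year)        # ~9e-9, i.e. on the order of 10^-8 per year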

          • Nornagest says:

            We know just this side of nothing about all the mass extinction events before the K-T (where we do actually have a good theory, though still one leaving questions to be answered). It follows that we can’t say whether or not modern technology could do anything about them.

            I’ll admit it’s unlikely — the Permian-Triassic event killed damn near everything, and it’s hard to imagine a scenario that did that and was feasibly stoppable — but we can’t conclude anything positive from our lack of knowledge.

          • Pku says:

            Thanks for the information, I was curious why I hadn’t heard more about those before.
            It raises an interesting question: would it be worth donating to research investigating how those happened, on the off chance it’s both repeatable and preventable through more research? Unlikely, but the marginal benefits might be higher (I don’t know how much funding this research gets right now, but I’m guessing a lot less than, say, NASA).

          • Tom Womack says:

            Why do you think mass extinction events are not things modern technology can help with? Ten years into the massive fissure eruptions or into the oceans becoming anoxic, let alone a thousand, modern technology will be quite vigorously devoted to the effort of living and farming underground.

          • Depending on how bad the mass extinction event is, I’m not super confident that civilization would be able to reestablish our current level of technology. The easiest-to-mine minerals and fossil fuels have all been extracted already, and you need an advanced civilization for the remaining stuff. And now that fungi know how to digest the lignin in bark, we’re never going to get fossil fuel deposits this big again.

          • Nornagest says:

            @Pku — From an X-risk perspective I’d say it’s worthwhile, yeah, although I’d still prioritize it behind, say, impact event prevention.

            The only really solid data we have on the P-Tr event is the Siberian Traps, a large igneous province (read: solidified basalt flood originating from massive volcanism) dating from the very end of the Permian. These don’t necessarily imply mass extinction events; there’s one covering much of Oregon and Washington, for example, that dates from the Miocene. But there’s some suggestion that the largest ones may be associated with impact events; the evidence isn’t strong by any means, but the Deccan Traps in India, for example, appeared at about the time of the Chicxulub impact.

      • Pku says:

        It’s probably not, but there’s a low-percent chance that it is, and a higher-percent chance that it destroys modern civilization (and even if we could rebuild in a couple thousand years, it lowers the base of our exponential growth, which severely reduces that 10^52 term overall).
        I had a bunch of arguments here about why AI research seems less likely to be effective, but after running through the calculation it doesn’t seem like they outweigh the lower marginal effect of climate change donation. The reason I still think climate change donation is more effective is that, even if you decide to disregard the Pascal’s mugging effect, climate change research has short-term benefits like reducing pollution. (In practice I just donate to malaria instead of either of those, but I vaguely feel like maybe I should get on global warming.)

        • Nyx says:

          I’m surprised you hit a lot of the things I was going to say in response: that it’s a low chance of x-risk (still big enough to be considered); that a potentially more relevant concern, if it destroys civilization but leaves people to eventually rebuild, is how much it reduces (rather than eliminates, as with x-risk) the potential astronomical gains; and that climate change seems pretty intractable given how much money is already going to fight it (although, if you were donating to a geo-engineering initiative, then it might be quite tractable, albeit more dangerous).

          I’m just left wondering why you think climate change is more important than AI, as you seemed to counter-argue yourself into acknowledging that, at least in terms of x-risk, AI dominates (i.e. I saw you pre-empt counterarguments well, but didn’t see much of the actual argument for global warming vs AI). It seems you just don’t think AI is tractable with donations and/or aren’t convinced AI is an existential threat (or don’t buy into expected value calculations that deal with somewhat small numbers? I’d agree if we’re talking really, really small / arbitrarily close to zero, but I think there’s good reason to put these in the billionths-to-trillionths range).

          Which is a plausible position I disagree with, but worthwhile to ask, to focus on the real issue (i.e. we can just discuss tractability & likelihood of A.I., rather than other cause areas, since based on the outline you’ve given it seems that if you thought AI was tractable and likely you’d agree that you *should* donate here/acknowledge FAI as the most effective cause.)

          • Pku says:

            Mostly that FAI seems a bit too much like Pascal’s mugging. Also, the 10^52 there seems high – the 10^-18 seems more reasonable than the 10^-63 (it’s a bit lower than what I got when I tried doing the calculation myself), but if the 10^52 thing is much lower, it might still be less effective than donating to someone directly (though probably more effective in terms of X-risk prevention than the other X-risks I can think of).
            Edit: now that I think of it, there might be more marginal utility in donating to research on past extinction events – apparently we don’t know much about them, the chance of one happening in a given year is about 10^-8, there’s probably not much money going into researching them compared to the others, and if they turn out to be preventable with research, it might be worth the investment. (Doing an off-the-top-of-my-head calculation, it comes to within ten orders of magnitude or so of the FAI research, which is within my error margin.)
            (That said, I do have some of the math skills that MIRI say they’re looking for. It might be plausible that I’d do more good by trying to read through their research myself in my off time and seeing if I have anything to add than if I donated money, especially since I don’t make that much. Plus, it would be interesting).

      • JB says:

        In saying “worst case scenario is that it kills a couple billion people”, is your worst-case scenario as bad as my worst-case scenario?

      • Eli says:

        Global warming isn’t an existential risk though, at least not as commonly discussed – worst case scenario is that it kills a couple billion people (very very bad!). But the future – where the vast majority of lives are – still happens.

        Oh really? The worst-case I’ve heard is that it indeed drives us extinct. The total destruction of non-subsistence, technological civilization is also very much on the table.

        • James Picone says:

          Depends what you mean by ‘worst case’. The absolute worst case that could actually happen is apparently on the order of 20 °C to 30 °C above preindustrial and the ocean boils off into space, but it requires literally burning all the fossil fuels in a very short space of time, and I don’t think it’s even remotely credible.

          Clathrate-gun-style mass methane release sucks, but methane doesn’t last long enough in the atmosphere for a sudden spike to cause warming to equilibrium, and even charitable estimates for the amount of methane available in clathrates only get you to about the equivalent of 750 ppm CO2. That’s not a pleasant amount, but the methane degrades rapidly (into CO2, though less of it). Slow releases of methane have far less impact, because of its short atmospheric lifetime.

          The only extinction scenarios I’m aware of involving climate change require humanity continuing to burn fossil fuels well past the point where it’s economic, and well past the point where fossil-fuels-make-it-warmer is the-sky-is-blue levels of obviously true. I don’t think that’s remotely probable.

          • HeelBearCub says:

            But, is it probable at 10^-18 or greater?

            If it is, you should worry about AGW as much as you happily contemplate Friendly AI (by Bostrom’s analogy).

            Put another way: even if it is remotely improbable, it is still an X-risk, and an X-risk with known pathways, so the only rational thing to do is be firmly in the camp of “let’s stop AGW now”, not in the camp of “meh, whatever.” And definitely not in the camp of “No, it’s not a problem.”

    • Anonymous says:

      a doctor puts effort directly into increasing total utility while the hedge fundy (arguably) doesn’t

      Sure, but by giving charities money, the hedge fundy causes other people to redirect their effort in ways that lead to more total utility.

      • Pku says:

        This assumes that as an EA person you’re more likely to donate money effectively (OK, this actually seems totally fair).

  7. walpolo says:

    Couldn’t the 10^-67 number still be a good approximation, if we are uncertain about whether x-risk charities will help or hurt?

    As Matthews points out, funding computer science research into AI risk could potentially speed up the development of dangerous AI through the indirect effects of having more money out there for CS research. So perhaps 10^-67 is a reasonable guess at the number we’d get if we subtracted the probability of an extinction-causing side effect from the probability of the charity preventing extinction.

    • Anaxagoras says:

      I was going to bring up a similar point; glad to see other people thinking along similar paths.

      While I certainly agree that there’s more than a 10^-67 chance that a $1000 donation prevents some existential catastrophe, there also seems to be a well over 10^-67 chance that it’ll be part of what precipitates one. Hey, if nothing else, there’s probably a comparatively decent chance the check gets accidentally mailed to the HAL Institute For the Production of Superintelligent Meteors.

      And at that point, I think it comes down in large part to how relatively easy you think it is to prevent an existential threat vs. immanentize one. I wasn’t huge on Bostrom’s book, but I recall one of the chapters dealt with this, about whether we should even try to advance the state of computer science research. I don’t remember his conclusion, but it seems clear that there’s certainly a chance.

      Possibly it could even be the case that the harder a problem you think unfriendly AI is to solve, the less you should donate to its prevention! After all, if each marginal dollar does little good, and has a real chance of pushing unfriendly AI closer by a greater amount than it would prevent it…

    • brad says:

      I don’t have a proof at hand, but intuitively I have the notion that adding up a whole bunch of probabilities and ending with something that close to zero is unlikely if all the summands have much larger magnitudes.

      • FullMetaRationalist says:

        This is called Error Propagation. Propagation across “multiplication and division” differs from propagation across “addition and subtraction”.
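
        A quick sketch of what that looks like for a difference of two probabilities (toy numbers of my own, not anyone’s actual estimates):

        # Suppose a donation has some chance of averting doom and some chance of
        # hastening it, each known only to within an uncertainty as large as the
        # estimate itself. For a difference, absolute uncertainties add in quadrature.
        p_help, sigma_help = 1e-9, 1e-9
        p_hurt, sigma_hurt = 9e-10, 9e-10

        net = p_help - p_hurt
        sigma_net = (sigma_help**2 + sigma_hurt**2) ** 0.5

        print(net)        # ~1e-10 nominal net effect
        print(sigma_net)  # ~1.3e-9, larger than the net effect itself

        # You can get a *nominal* value much smaller than the inputs, but the error
        # bar stays at the scale of the inputs -- nothing here licenses a claim as
        # precise (or as tiny) as 10^-67.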

    • DavidS says:

      Well, you wouldn’t reach that precise figure so much as ‘I have no idea’

      I think this is the key point, and it’s an obvious place to expect the argument to be, because that’s the clearest argument against Pascal’s wager: it’s not just the (low) chance that Pascal is right about God, it’s the other (low) chance that he’s wrong in a way that makes his advice radically counter-productive. There are plenty of ways that donating to AI Risk charities could make things worse. Off the top of my head, these include:
      – driving interest in AI that makes creation of an AI more likely
      – leading to an apparent solution to the Unfriendliness problem which doesn’t work but is falsely reassuring
      – having a successful AI risk charity receiving a high profile and cash before we’re advanced enough for it to be useful, so it ends up being useless and damages the brand of AI risk
      – more money going into AI Risk leading to people going into it for money rather than genuinely taking the risk seriously

    • AlphaCeph says:

      If you are genuinely unsure whether they’ll help or hurt, then I have a much better approximation for you: just say 0.

  8. Chalid says:

    Holden Karnofsky (GiveWell co-founder) has argued that giving to the Against Malaria Foundation and similar charities was actually the best way to reduce x-risk; by increasing the world’s present and future human capital, such donations improve humanity’s ability to respond to x-risks that may emerge.

    For me it seems that saving lives in this way increases some x-risks and decreases others and I have no idea how it all works out.

    • Wrong Species says:

      I’m not convinced that some poor starving kid in Africa with a significant chance of dying from malaria is going to be the important factor in preventing an x-risk.

      • Froolow says:

        I don’t think I agree with you. I know of at least two Nobel prize winners who had no formal education as children, and one of them (Capecchi) lived ‘wild on the street’ and almost died of malnutrition during the Second World War – that’s as close as you could possibly wish for to the sort of people Against Malaria are targeting.

        I guess the heuristic is, “No matter how much less likely you are to win a Nobel Prize (as a proxy for making a contribution to x-risk) given that you are starving / at risk of malaria, if we save enough of those people eventually the probability that one of them makes a significant contribution approaches one”. I think it approaches one pretty rapidly too – I believe Phillips and Capecchi were able to make their contributions *because of* their tough upbringings, not in spite of them.

        https://en.wikipedia.org/wiki/Mario_Capecchi

        https://en.wikipedia.org/wiki/William_Phillips_(economist)

        (Phillips is also really cool – he spent most of his early life as a crocodile hunter and Nazi-botherer before settling down and inventing the first mechanical model of the economy)

      • Froolow says:

        My comment doesn’t seem to be showing up, and I wonder if it is because I included a link in it.

        The gist was that I know of at least two Nobel Prize winners who – by any reasonable definition – were even *less* likely to make a contribution to their field. The economist Phillips had no education as a child, hunted crocodiles for a living as a young man, then was placed in a POW camp during the war. The biologist Capecchi had no education as a child and nearly died of malnutrition ‘running wild in the streets’ during WWII. Apparently he’d even forgotten how to talk during this period.

        My point was that no matter how much less likely the kind of person GiveWell saves is to make a contribution to x-risk than we are, if you save enough of those people eventually the probability approaches one. I think it approaches one pretty quickly too – I believe Phillips and Capecchi were uniquely placed to make the contributions they did because of their tough early years.

      • Deiseach says:

        You don’t know they aren’t, either. They might be hugely intelligent and if they survive, grow up to have a stunning new insight on the problem.

      • Preventing malaria doesn’t just help the current people who don’t get malaria; it increases prosperity, because people aren’t bearing the cost of malaria and so can build up their society and take better care of their children.

        Someone who dies very young represents a cost, not just to themselves, but because (at least) of the cost of pregnancy, labor, and childcare, even if those costs are mostly not monetary.

    • AngryDrake says:

      Does GiveWell also contribute to anti-contraception advocacy?

    • Dust says:

      Does Holden care to offer a probability estimate that a marginal $1000 for bed nets will end up saving the world from some totally unrelated x-risk? My naive intuition is that this argument falls prey to Pascal’s Mugging type considerations pretty badly.

      • ton says:

        The argument would be that it’s more likely than the marginal $1000 on AI risk (also, as he’s directly influencing donations, we don’t really need to talk about marginal).

  9. Jack3056 says:

    Isn’t the p=10^-67 for ‘difference in existential threat due to donation’, rather than the threat level itself? If so, asteroid risk is not contained within the number. But nice post!

    • Scott Alexander says:

      Yes, you would have to add on some stuff to go from “probability of asteroid” to “probability that thing you do prevents asteroid”, but this seems much more straightforward than AIs and much harder to bring up to tornado-meteor-terrorist-double-lottery-Trump levels.

  10. H.E. Pennypacker says:

    Some of those odds of unlikely things look wrong to me – particularly the likelihood of meteor death.

    The article you cite claims the lifetime odds are 1/700,000, which is ridiculous given that there’s only one recorded incident of someone being hit by a meteorite, and she survived. http://news.nationalgeographic.com/news/2013/02/130220-russia-meteorite-ann-hodges-science-space-hit/

    • Nyx says:

      Maybe that’s because, if anyone dies from a meteorite, a lot do? Like a city gets hit or something?

    • Anonymous says:

      I think most of the probability comes from catastrophic impacts that are too rare to have ever occurred in recorded history (but whose frequency we can estimate by counting impact craters or counting near-Earth asteroids). Suppose an everything-killer asteroid hits the earth once every 100 million years on average. Then the probability of one of those hitting during your lifetime is almost one in a million. When we include smaller asteroids, which are more common but less deadly, the probability will only go up. Here’s a discussion.

      Here’s a 150-page NASA report on asteroid risk. Page 20 suggests that impacts large enough to cause billions of deaths could occur on the order of every 1 million years.
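
      A quick sanity check of the lifetime-odds arithmetic above, using the comment’s own round numbers rather than anything from the report:

      # Lifetime probability of an "everything-killer" impact, given one every
      # ~100 million years on average and an ~80-year lifetime.
      rate_per_year = 1 / 100_000_000
      lifetime_years = 80

      p_lifetime = 1 - (1 - rate_per_year) ** lifetime_years
      print(p_lifetime)   # ~8e-7, a bit under one in a million

      # For rare events this is essentially rate * lifetime; adding smaller, more
      # frequent impacts only pushes the overall odds higher.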

  11. haishan says:

    Trump and Sanders end in a perfect tie with whom?

  12. Ed says:

    Could be rehashing an old argument, but the simplest line here for me is:

    While from an EV-maximizing perspective something that pays off $x in value with a probability 1/x leaves us indifferent to the value of x, in reality we’re risk-averse, and moreover we’re allowed to have an incredibly wide range of valuations when x is very big while still being reasonable humans.
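
    The indifference claim in miniature (a toy gamble of my own, not anything from the post):

    # A bet that pays x with probability 1/x has the same expected value for
    # every x, even though the experience of taking it differs enormously.
    from fractions import Fraction

    for x in (10, 10**6, 10**52):
        ev = x * Fraction(1, x)   # exactly 1 for every x
        print(x, ev)

    # EV-maximization can't tell these apart; a risk-averse (or bounded-utility)
    # agent can, which is where the disagreement over tiny-probability,
    # huge-payoff bets lives.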

    • ton says:

      I think the definition of utility is supposed to make risk aversion not apply in EV calculations of utility.

  13. Ever An Anon says:

    The old maxim “garbage in, garbage out” applies here.

    What are the error bars on Bostrom’s estimate of the number of future humans barring extinction? How about MIRI’s effectiveness, what makes us so confident about the sign of their effect on AI risk even assuming a non-zero magnitude? What reason is there to assume that donations to X-Risk have a linear effect on the probability of an extinction event so that we can actually compare them with donations to things like the AMF?

    For instance, if we plug in Leslie’s Doomsday Argument number for total remaining human population of 1.14e12 more people instead of Bostrom’s 1e54, then we’ve shaved just under 42 orders of magnitude off the “effectiveness” of X-Risk with the stroke of a pen. If that sounds like cheating, it’s because it is: both numbers are of rather dubious plausibility and nobody with any sense would use either of them to make financial decisions.
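
    For the record, that “just under 42” checks out with the two numbers as quoted:

    # Orders of magnitude between the two figures as given in this thread.
    import math

    bostrom  = 1e54     # future lives, Bostrom-style estimate as quoted above
    doomsday = 1.14e12  # remaining humans under Leslie's Doomsday Argument, as quoted above

    print(math.log10(bostrom / doomsday))   # ~41.94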

    Like Newcomb’s Problem this seems like a clear case of being too clever by half: you’ve reasoned yourself into a choice any idiot would immediately realize is a losing bet. I have a really hard time understanding this way of thinking.

    • Vaniver says:

      Like Newcomb’s Problem this seems like a clear case of being too clever by half: you’ve reasoned yourself into a choice any idiot would immediately realize is a losing bet. I have a really hard time understanding this way of thinking.

      I feel like this is an argument against believing that nuclear power is possible after the math is worked out but before it’s tested. And if you can understand why someone would bet on nuclear power working before it’s been proved to, then I think you can understand why someone might think x-risk is the most important non-profit cause.

      • lunatic says:

        I’m not sure it is; you wouldn’t try to develop a nuclear reactor without first demonstrating a nuclear reaction.

      • Ever An Anon says:

        Except the only way to “work out” the relevant math, that is how much usable energy you can get out of a fission reaction, is by getting a reasonably accurate model of nuclear physics. And that requires quite a bit of experimentation.

        If you want, you can think of this as a question of knowledge according to Plato’s definition. Anyone could, theoretically, come up with an accurate proof of some physical principle in a fever dream but even if that belief is true it hasn’t been in any way justified. It’s not enough to have a mathematical model, you need to show that it actually corresponds to something in the real world.

        That’s a big part of the issue here: taking two numbers that came from the deepest crevices of some guy’s ass, multiplying them together, and then acting as though you’ve done a proper expected value calculation is absurd. As I showed, you can find equally implausible “expert” numbers which change the final result by 1e42! The rational response is to say “this is ridiculous” and walk away.

    • Soumynona says:

      Which choice in the Newcomb’s Problem do you think is an idiot’s bet?

      • Robert Liguori says:

        The part where the Agents claim to have predicted in advance what box I will pick. How I spoke to my (hypothetical) wife that morning would change whether I weighed risk vs. reward. Therefore, the Agents would need to predict her behavior. Her behavior is determined by the actions of others, which are determined by the actions of others, and so on.

        “Reliably predict the micro-scale decisions of people” is “a fair coin that always comes up heads when flipped”: it’s not impossible, but it’s the point where the reasonable thing to do is to stop the speaker and say “Your hypothetical is bad and you should feel bad.”

        • Soumynona says:

          The agent is an AI and it has surveillance microdrones everywhere. If you don’t like AI then instead he’s a super-psychologist working for the Illuminati World Government or something.

          But you admit that the scenario is not impossible. (So, presumably, not at all like a fair coin that always comes up heads because that’s a blatant logical contradiction.) So why is the hypothetical bad? It’s from a philosophical thought experiment, not from the book of practical things that you’ll get to use in your everyday life.

          • Robert Liguori says:

            The AI has microdrones covering the surface of the sun, detecting every emitted cosmic ray that will bit-flip a microcontroller that will momentarily glitch an electronic device, thereby altering the behavior of the people interacting with that device?

            Yeah, no. A world in which my behavior can be predicted a day in advance or more when I am motivated to make my behavior unpredictable is not a world in which the concepts of I, Prediction, or Dollar have the same meanings as in our world.

            And a fair coin that always comes up heads is not a logical impossibility. It’s just that it was flipped continually until the heat death of the universe, and every one of those flips came up heads. That situation is not impossible, no? (Accounting for brief pauses to reforge the coin into perfect balance every few million flips as the continual impacts deform it, of course.)

          • Jiro says:

            I still don’t see how you get away from the problem of the AI being subject to the halting problem. It just *can’t* predict an arbitrary person’s actions.

          • James Picone says:

            Very few humans have infinite memory and potentially infinite lifetime.

            (also Omega could just only offer the deal to people it knows it can predict. Halting problem says you can’t do it for all programs, not that you can’t do it for any programs).

      • Ever An Anon says:

        The one where you only end up with a grand instead of a cool million. It’s not even a hard choice: “do you want one million dollars? Great, then resist the temptation to touch that plexiglass box for a few minutes while I preemptively write you a check.”

        Maybe the issue is a free-will screw-destiny thing, where two-boxers just reject the premise of an infallible predictor out of hand. But if you actually play by the rules of the thought experiment, one-boxing is the obvious choice.

        • Soumynona says:

          Ah, so you’re a one-boxer. I agree with that. It was hard to tell from your post, because half of the world is two-boxers and they consider us to be the idiots.

      • Eli says:

        The two-boxing, you dumb bastards.

  14. Jaskologist says:

    What’s the probability that one of the minds now freed up because we ended world poverty fixes x-risk? Higher than 10^-18?

    • Scott Alexander says:

      What’s the probability that the superintelligence we build solves poverty?

      • Nyx says:

        I was going to say that it depends on whether or not the AI was friendly, but then I realized everyone dies with ~1 probability if it’s unfriendly, so damn near 1 either way. Either it solves poverty, or there are no people to be poor 🙂 😛 … 🙁

      • Eli says:

        That depends. Do its idiotic human operators try to convince it that for Very Sophisticated Virtue-Theoretic Reasons the poor deserve their condition? Some of the people who purport to support the creation of a fun-theoretic eutopia in the future don’t support dealing with poverty today. From their mouths, we will have poverty, even extreme poverty, in our future utopia, because poverty is better (for others) than prosperity, Because Sophisticated Arguments.

      • Mary says:

        Zero.

        Whatever it does, we will just redefine poverty to be the bottom quintile or something.

        Witness that no one in the United States lives at less than the 70th percentile world-wide — in a world where the standards are enormously higher than they have ever been — and yet people claim that Americans live in poverty.

        Take that back. If it wanted to cure poverty, it would genetically engineer us (or what-have-you) to cure us of greed, sloth, envy, and whatever other faults contribute to our viewing actual prosperity as poverty. That’s more than zero.

    • Nyx says:

      Almost certainly higher, as you imply (though risks are increased in other ways, such that it’s not perfectly clear whether it’s a net gain [I think it probably is a net gain but don’t have an airtight defense]).

      But most people in FAI circles think that working directly on AI safety does even more (e.g. if we spent the same money on AI safety as on global poverty, we would get a >>10^-18 reduction in x-risk).

  15. haishan says:

    I don’t think Matthews’ criticism was all that well-done. However, I think there are absolutely good mathematical reasons to question the effectiveness of AI risk research. Here are the big ones.

    1. Let’s say that we assign a tiny effect on the probability of unfriendly AI to donating to MIRI (or FLI or FHI). Doesn’t have to be 10^-67; 10^-9 should work. Then you shouldn’t be very confident that the effect is, in fact, even in the direction you want it to be. Nobody on earth is well-calibrated at that level of precision; the difference between .49999999 and .50000001 is not noticeable, to anybody. Unless you have very convincing arguments that giving to MIRI won’t increase the probability of UFAI, you shouldn’t trust your ability to reason about probability shifts on that level.

    2. More troubling, for a community that talks a lot about how the map is not the territory, an awful lot of LW folks seem to believe that utilitarianism is the territory of ethics. You essentially have to believe this in order to think that the process of multiplying (incredibly huge payoff) by (incredibly tiny probability) and seeing how big the product is will give you the right ethical answer. I think su3su2u1 on Tumblr put it better than I can:

    I deal with mathematical models for a living. I always validate the model by checking them against reality where I know what reality looks like. And I never trust them far outside the regions they’ve been validated. The whole idea of “look, I made a simple model and when I put in hugely unrealistic numbers I get this weird conclusion! Guess that weird conclusion must be right!” is repugnant to me on a THAT IS NOT HOW YOU USE MODELS level.

    Classical physics works very, very well for predicting things on human scales — for instance, what happens if two objects of a few kilograms moving at like 1-100 m/s collide. But if you tried to use it to predict the behavior of objects moving at 100 million m/s, or objects the size of an electron, you’d get much worse answers. Are we so sure ethics is more scale-invariant than physics?

    • LTP says:

      Not only are they confusing the map for the territory, but *by their own beliefs*, aren’t most LWers moral anti-realists, and so they’re obsessing over a map when there is no territory?

      • FullMetaRationalist says:

        Consider bridges. I believe particular bridges exist (e.g. the Golden Gate Bridge exists). I also believe there exist objective principles which factor into the civil engineering used to build successful bridges (e.g. stress, strain, degrees of freedom, thermal coefficients, etc). I do not believe in the existence of an archetypal bridge, of which all other bridges are mere imperfect reflections.

        Similarly, I believe particular cultural norms called “morals” exist. I believe there exist objective principles which increase the fitness of a community in a given environment (e.g. the golden rule, cleanliness, loyalty, etc). I do not believe in an archetypal morality, of which all other moralities are imperfect reflections.

        Saying an archetypal morality exists is like saying humans are objectively the ultimate species to which all other species aspire, rather than saying humans are fit given their environment.

        The above is my working model of morality. I don’t know how well this generalizes to the LW community. But it satisfies the definition of anti-realism. Simultaneously, I’d like to know how morality works just like how a civil engineer wants to know how bridge-building works. If there’s a “territory” I’m trying to map, then it’s the space of all possible moral systems rather than a single objective moral system.

        • Nita says:

          That’s a lovely way to put it — thanks!

        • haishan says:

          I think this is a good analogy. (Even though I’m a moral realist.) If I may piggyback off of it:

          If you want to cross a river or a bay, a bridge is a very good way of doing that. But if someone proposes building a bridge across the Atlantic Ocean, or to the moon, we won’t take them very seriously; if someone claims that airplanes and the Saturn V are just different sorts of bridges, we’d think them a crackpot. Bridges are useful for solving specific problems. Similarly with moral theories.

        • Eli says:

          That sounds fine until we ask you what to actually do. A nominalist about bridges still has a fine idea that a bridge has to provide transit from one point to another: he still knows damn well what the causal role of a bridge is. You’ve not actually specified a suitable causal role for morality that isn’t filled by some other object, like “adaptation”.

          • FullMetaRationalist says:

            In the Parable of the Raft, the Buddha draws an analogy between the Eightfold Path and a raft. A raft is meant to be used to cross a river. After it no longer serves its purpose, it may be discarded. Similarly, the Eightfold Path is designed to ferry neophytes across moral hazards. After a neophyte fully attains enlightenment, he or she no longer must rely on a “Buzzfeed-style list of n things” as a crutch.

            People can do what they want. If they want to cross the bridge in order to attend a clam bake, they can. If they want to swim in the lake of fire, they can do that too. The laws of physics won’t stop them. The bridge doesn’t decide the final destination, it just lowers the activation energy.

            I think the causal role of morality is to nudge citizens through instrumentally-valuable choke points. In this view, morality doesn’t provide a script of terminal values; it helps people successfully attain what they personally consider terminal values (or at least the popular ones). Remember how Bostrom talked about how AIs will likely converge on instrumentally valuable goals such as power? It’s a similar idea.

            (What prevents me from calling morality a “cultural adaptation”?)

            p.s. I think I should clarify my position on meta ethics. I think all normative expressions are contingent upon satisfying a goal. E.g. the utterance “You should save for retirement” reduces to “You should save for retirement if you terminally-value financial-stability”. “You shouldn’t kill people” reduces to “You shouldn’t kill people if you aren’t 100% certain: you can handle the societal/emotional repercussions; this results in positive marginal utility; etc”. The universe doesn’t prescribe morals to humans, it just contains humans which do things they like to do. I don’t know what name philosophers have come up with for this position yet.

          • FullMetaRationalist says:

            I want to emphasize that I picked the bridge metaphor because it’s a path rather than a destination. If I thought of morality as prescribing terminal values, I might have used a building metaphor instead.

        • LTP says:

          Fitness by what standard? The term “fitness” is itself normative in this case.

          That standard is either subjective, which is anti-realism but not the kind rationalists seem to believe, or the standard is objective, in which case you’ve just moved the level of moral realism down a level. You can have objective ways to maximize fitness, but the definition of fitness is also normative, and so you actually haven’t answered the question.

          • FullMetaRationalist says:

            “Fitness” by the standard of whether the citizens of a particular moral system live their lives in a prosperous, fulfilling manner. What a particular society decides is terminally “prosperous or fulfilling” is up to its citizens.

            For example. Pigs are not kosher. To eat pork is considered immoral. Imagine the reason is that in the biblical Middle East, pigs (and other cloven-hoofed beasts) all had trichinella. Maybe the Jews agreed pork indeed tastes excellently, but decided that maybe the risk of trichinosis outweighs the benefits of consumption. In such a case, Kosher laws helped Jewish citizenry literally stay alive.

            Meanwhile, imagine that ancient China had long mastered the art of cooking pork. If pork consumption doesn’t mysteriously risk death, there’s no reason to label it immoral. Azathoth judges moral fitness, just like it judges biological fitness. Different environments beget different moral systems.

            (Does this answer your question? I’m afraid I find your comment nearly too abstract to interpret. Maybe look at my reply to Eli.)

            (Whenever I see the word “fit”, I mentally reduce it to “barely competent enough to not spectacularly fail”. And whenever I see the word “fitness”, I mentally reduce it to “likelihood that it’s barely competent enough to not spectacularly fail”.)

      • jaimeastorga2000 says:

        I, too, don’t really get the rationalist obsession with utilitarianism. My best guess is that it is partly a founder effect (Eliezer Yudkowsky is a utilitarian, obviously) and partly a predilection that a certain kind of mind (which is overrepresented at LessWrong) has towards “elegant” systems which purport to derive vast swathes of conclusions from a small number of axioms (see deontological libertarians who try to derive all morality from the non-aggression principle and property rights for another good example).

        • Pku says:

          I’m not a fan of Eliezer, but I do like utilitarianism (and have from before I got into this community, though I didn’t have a name for it). It just seems so natural, I can’t imagine any arguments against the principles (I can easily imagine arguments against using it as an everyday tool for decision making, many of which I agree with, but not against it as a fundamental principle).

          • CJB says:

            I’ve invented a variation of Utilitarianism called “Utilitarianism with common sense”. Basically, if a given hypothetical situation requires huge contortions in reality, probability, or mathematics, the expected benefit from me solving this conundrum is lower than the benefits of exercise I get from rolling my eyes.

            It lets me donate money effectively AND not deal with people who use exponents the way teenage girls use exclamation points.

          • Eli says:

            The Repugnant Conclusion and Very Repugnant Conclusion don’t make you doubt utilitarianism at all?

          • ton says:

            @Daniel That’s not an argument against utilitarianism, it’s an argument against altruism. You may be selfish, but your rational preferences will still correspond to a utility function.

            Plenty of people empirically do care about others.

          • ton says:

            “And utilitarianism is altruism in a consequentialist framework.”

            There are different forms, I’ll give you that. What you seem to be missing is that the people they are trying to convince are already those who identify as altruistic. If you’re a selfish psychopath, they don’t want you (or at least don’t expect their arguments to convince you).

            ” But many people confuse the (very sound) arguments for consequentialism with the (nonexistent) arguments for altruism.”

            I can’t recall seeing such instances, could you give an example?

            “I don’t dispute that people can value something outside of their own lives and happiness. But there is no intrinsic reason why they should. ”

            Define intrinsic. I claim that many people do put intrinsic value on other people’s “lives and happiness”.

            If you mean “no ethical reason, because I don’t believe in ethics”, then the same goes for any other preference. How is your sentence true in a way that “people like to not be tortured, but there is no reason why they should” isn’t, for someone who says “I want others to be happy” the same way that everyone says “I don’t want to be tortured”?

            “Yet effective altruism seems to involve a lot of guilt-tripping of people to convince them to care about chicken welfare or people in the year 50,000 AD. These things have no relevance to their interests, and they don’t naturally care about them. They have to be convinced to care by specious arguments.”

            You seem to be jumping from “I don’t naturally care about X” to “any argument that causes me to care about X is specious”. Imagine you naturally care about not being tortured, but don’t naturally care about praying. An argument that convinces you of certain religious claims would change your attitude towards prayer, but wouldn’t be specious (assuming we’re in a world where it actually is true). In the same way, someone who naturally cares about “things that can suffer shouldn’t”, can be factually convinced that chickens are a thing that can suffer, and therefore deserve no suffering.

            (Full disclaimer: I’m an error theorist myself and don’t consider myself altruistic. I just recognize that EA arguments aren’t intended for me, but for people with different premises).

          • ton says:

            @Daniel

            >First of all, the term “psychopath” is a highly misleading strawman when applied to rational egoism. Psychopaths are irrational people who can’t (seem to) help but follow their short-term impulses. They don’t achieve their own interests; they systematically alienate people and frustrate their own goals.

            I’m using psychopath in the sense of “lacks emotions”. There are certainly some distinguishing features of psychopathy that seem to match up with what we’re referring to. If you prefer not to use that word, fine; we seem to agree on what we’re referring to in any event.

            >Sure, read Yudkowsky’s list of the “intuitions behind utilitarianism”: http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/

            I don’t see him advocating altruism there. He assumes the reader is already non-selfish. See http://lesswrong.com/lw/v5/inner_goodness/

            >But suppose the one even says: “Actually, I actively dislike most people I meet and want to hit them with a sockfull of spare change. Only my knowledge that it would be wrong keeps me from acting on my desire.”

            >Then I would say to look in the mirror and ask yourself who it is that prefers to do the right thing, rather than the wrong thing. And again if the answer is “Me”, then it is pointless to externalize your righteousness.

            >Albeit if the one says: “I hate everyone else in the world and want to hurt them before they die, and also I have no interest in right or wrong; I am restrained from being a serial killer only out of a cold, calculated fear of punishment” – then, I admit, I have very little to say to them.

            He does make a false dichotomy there (someone can simply not care about other people either way, which is presumably what you’re referring to), but he’s very clearly saying that he’s only trying to convince those who are already inherently altruistic.

            >But taking morality as stemming from the objective requirements of acting in accordance with the choice to live, then there are reasons to avoid torture.

            There’s no such objective requirement. You’re committing the same fallacy you accuse altruists of committing. Whatever reason you have to avoid torture, someone else with different thought processes than yours has the same kind of reason to avoid making others sad. There is no *objective* difference, and those people are making the same claim that you make with regard to your own happiness. I see you’ve linked some arguments on this which I’ll get to later.

            >However, if (as they often do) such people purport to be moral intrinsicists who assert the existence of a duty to make yourself care about chicken welfare whether you like it or not, they can be challenged. Because there is no such duty.

            It sounds like you’re referring to religious arguments now, which would have that form. The standard secular argument for caring about chickens is not “you have a duty to make yourself care about chickens”, but “you care about things that suffer in general, I’ve convinced you that chickens suffer, so care about chickens”.

            >What is necessary is to point out to people the Sisyphean nature of “effective altruism”, of the idea that they owe an infinite debt to society

            Those are two different ideas. Incidentally, have you seen https://slatestarcodex.com/2014/05/10/infinite-debt/?

            >I think that this is, psychologically, a much healthier option than assuming that debt and then cheating on it

            Except of course the people you’d be trying to convince are trying to do the right thing, not the healthiest.

          • ton says:

            >The “lacks emotions” sense is just as much a mischaracterization. The rational egoist feels emotions, including the emotion of empathy with others’ suffering.

            There may be multiple different feelings we’re inadvertently conflating. I think I’m referring to something real, but you might be as well.

            When you say “The rational egoist feels emotions”, why don’t you think they should then act so as to feel happy emotions and not feel sad ones (which maps onto altruism in some ways)?

            I would disagree with “But no one actually feels empathy with the potential people in 50,000 AD.”

            >I care very deeply about other people. But when I say I care about them, I mean I care, I value. It means that they have some quality which is valuable to me, that there is some mutual exchange of spiritual-emotional benefit.

            I think (and correct me if I’m wrong), that this is compatible with “don’t care about other people one way or the other”. What I meant by that was don’t care as a *terminal goal*. Of course, caring instrumentally is still selfish, and that’s what you seem to be describing. Such a person would answer Eliezer “no, I don’t care if someone gets tortured insofar as it has nothing to do with me”.

            >Perhaps I was not sufficiently clear about what I mean by “objective”: I mean that there is a real relation between what you desire and external reality. That is, if you want a certain thing, reality tells you how to get it. If you want happiness, you will not achieve it by being tortured.

            Now you’re making the claim that we should do what makes us happy. Why? Why not do what we want (which for some people is altruism)? How is pursuing “happiness” objective in a way “what I want to do” isn’t?

            Why is one qualified to be personal value while the other isn’t?

            >But there are plenty of secular advocates of “duty”, moral “realists” (they claim to be realists, but they are inverted subjectivists) of the type who claim that there is some intrinsic good we must seek regardless of whether we personally value it.

            They are wrong.

            (Also, I’m not quite saying that duty can only be supernatural, but that it can only be externally imposed; government works just as well, any system of punishment will. This is subject to caveats about other possible minds, though.)

            >Scott himself admits that his resistance to it is only an “unprincipled exception”.

            It’s more like “we can see the answer is somewhere in this direction, but we don’t know it fully, just enough to know that action X is reasonably correct”.

            >But if they are the kind of egoists I mentioned above

            But they aren’t! They view other people’s happiness as terminal values, not personal ones.

            Empirically, altruism *is* what people want, even if it doesn’t make them happy directly (e.g., they would want someone elsewhere to be saved from torture even if the memory of it was removed from their mind). What explanation do you have for that, other than “it’s irrational”?

        • Deiseach says:

          To tie in with the Muggeridge post: the pro-Communism Westerners he interacted with still believed so much in the beautiful perfect system that would improve the lot of all humanity by solving all problems that they denied the reality on the ground, or were willing to let themselves be fooled by the carefully limited tours they were given and the trained guides who spoke to them. That’s the same problem I have with the AI risk proselytization (and indeed with the notion that as soon as we lick the problem of creating AI it will immediately bootstrap itself to god-level and solve all our problems), and with the broader EA movement, where we are supposed to donate the most efficiently by letting somewhere like GiveWell select out (on objective criteria that are measurable and calculable) the most efficient usage of time and money, then give give give till we can’t give an extra cent without risking dying of exhaustion and starvation.

          Too much starry-eyed wonder at the beautiful model, too little compromise with the messy reality. Maybe it’s because I’m older and I’ve had my corners a bit rubbed off by life, but I think we need to take into account and even be prepared to welcome the messiness of the world outside our heads. And yes, that includes letting people decide to donate to whatever the hell cause they like based on whim and sentiment rather than perfectly calibrated metrics of efficiency.

          I don’t mind people saying “I donate to research against AI risk because it’s something that interests me”. I do mind when they try to convince me it’s all based on strict mathematical principles of probability involving multiplying gazillions of zillions of possible future utility units times teeny tiny weeny percents of percents of chance it won’t happen.

        • Jaskologist says:

          Science uses math. Utilitarianism uses math. Therefore, utilitarianism is scientific.

    • Dust says:

      If I think a modeling method wouldn’t be appropriate for relatively simple problems I do at work, why should I allow that methodology for making ethical choices?

      This is a good argument for having a high level of moral uncertainty, which I espouse. But it doesn’t sound like he has a good suggestion for what we should replace utilitarianism with. Switching to e.g. virtue ethics because seeing the world accurately is hard seems like sticking our heads in the sand.

      • Eli says:

        There are lots of consequentialist systems besides utilitarianism.

        • Dust says:

          Do you have any links?

        • Eli says:

          @Daniel: just noting, but Scott wrote an entire article about problems with utilitarianism. In blatant pimping for my own position, quote:

          One idea is a post hoc consequentialism. Instead of taking everyone’s desires about everything, adding them up, and turning that into a belief about the state of the world, we take everyone’s desires about states of the world, then add all of those up. If you want the pie and I want the pie, we both get half of the pie, and we don’t feel a need to create an arbitrary number of people and give them each a tiny slice of the pie for complicated mathematical reasons.

        • Dust says:

          @Daniel: I was hoping to hear about your preferred consequentialism variant or problems with utilitarianism; I already buy that people are sometimes lazy & equivocate between utilitarianism and consequentialism.

  16. onyomi says:

    I’m always a bit perplexed by arguments about potential human lives, such as the one Eliezer gives about either letting all but 1,000 of the humans who currently exist die, or else allowing everyone currently alive to keep living but preventing any new humans from ever coming into existence. It seemed obvious to him that the answer was to let almost everyone now alive die, because of the much greater number of people who will probably live after us.

    But this hardly seems obvious to me. It seems we place a much greater, if not an almost infinitely greater, value on the lives of people who currently exist than on the lives of people who could potentially exist. If this were not the case, then it would seem to be immoral for everyone not to be having unprotected heterosexual sex at every conceivable opportunity (unintentional pun…).

    That said, it does seem as if future people’s lives are worth *something*, even if it’s hard to calculate. It’s not that I’m indifferent as to whether the human race goes on for many prosperous millennia after my death or gets wiped out by a meteor the day after the last person now alive dies. It’s just that, weighed against the life of any currently existing person, the potential lives of people who might one day exist seem not only less valuable, but almost on a different order of value.

    It’s not that I can’t understand why someone would take Eliezer’s view, but I think it’s much less obviously correct than he thinks.

    • walpolo says:

      Good, I just simultaneously posted a similar point.

      It might also be that the values here are incommensurable. So perhaps it’s worth something to bring about more future lives, but no amount of bringing about future lives could equal the moral importance of a single present-day person. This is sort of how I feel about animal lives. It’s a better world if fewer cats die, but I would never sacrifice a human life to save any number of cats.

      • onyomi says:

        Yes, this sort of incommensurability, assuming it makes sense logically/ethically (it seems intuitively correct to me, but I haven’t thought about/looked into it that much), could also help us get around the conclusion that all our efforts should be devoted to helping chickens.

    • Deiseach says:

      Letting only 1,000 of the humans currently alive survive to (presumably) propagate the future millions and billions through their descendants seems like too much of a bottleneck. I think you’d want to let a much larger number, possibly a million, live, in order to be sure you’re not putting all your eggs in one basket (a really bad flu pandemic, for example, would wreak havoc with your human breeding stock of 1,000).

      Plus, to keep a bare minimum level of industrial/technological civilisation on the go (unless we’re assuming we’ve created AI which will run all the economy and machinery for us), 1,000 people simply isn’t enough. With that low a number, we’re looking at going rapidly backwards into subsistence-level farming in no time at all.

    • Eli says:

      >I’m always a bit perplexed by arguments about potential human lives, such as the one Eliezer gives about either letting all but 1,000 of the humans who currently exist die, or else allowing everyone currently alive to keep living but preventing any new humans from ever coming into existence. It seemed obvious to him that the answer was to let almost everyone now alive die, because of the much greater number of people who will probably live after us.

      Why do people keep inviting the Dungeon Dimensions into our reality? Like, an ocean cannot warm itself around a candle-flame, the counterfactual will always outnumber the factual, no matter how large the numbers involved in the factual ever, ever get. Either you value actual people over imaginary potential people, or you bite the bullet and tile the universe in the maximum number of people and you still won’t have helped even a little bit because all the counterfactual people still won’t exist.

      • onyomi says:

        But one cannot be held morally culpable for not doing more than one can physically do.

        • Eli says:

          Yes, but one can easily mess things up quite a bit by trying to maximize some variable whose conceptual content contains “portion of reality given over to embodying otherwise-irrelevant counterfactuals.”

          • onyomi says:

            In case it’s not clear, I’m not arguing that “allow all theoretical people to come into existence insofar as it is humanly possible,” is a good guideline to follow; I’m saying that it seems to be implied by the consequentialist logic which weighs the utils of currently existing people against those of purely theoretical future people.

  17. ton says:

    1. Yes, they’re wrong to answer Pascal’s Mugging by pulling a couple zeros out of their posterior. No, that doesn’t mean you can ignore the Pascal’s Mugging aspect of the situation. The fact of the matter is, we *don’t* have any good solution to the general problem, except that our intuition strongly suggests *not* giving in to being Mugged. Just because Dylan appears to have partly missed the point of PM doesn’t mean the PM claim isn’t valid. PM is a problem even *given* the higher number, because intuition says to discount anything with such low probability, or at least not make it your highest priority.

    2. HOW DOES A DOCTOR UNIRONICALLY WRITE THAT BECOMING A DOCTOR IS A BAD IDEA

    • Thomas says:

      2. I’m a grad student who regularly complains about grad school being a bad idea and tells undergrads to avoid it if at all possible. I can’t tell if staying because I’m almost done is sunk cost fallacy or not, but I’m here because I didn’t notice it was a bad idea until too late. Maybe Scott is too.

      • ton says:

        It’s not sunk cost fallacy if the gain from continuing is still greater than the cost of continuing, even if the gain overall is less than the cost overall.

        I haven’t read all of Scott’s posts, but I’ve read a lot of them, and I don’t think he’s written that he regrets becoming a doctor.

    • brad says:

      It isn’t like becoming a successful hedgie is a sure thing even if you are really smart.

      Doctors get paid pretty well and there’s virtually no such thing as an unemployed doctor. It may well be one of the better EV paths a smart 18 year old can plan to take.

      • ton says:

        As someone about to start college and planning towards a hedgie-oriented education, what’s “really smart”?

        • brad says:

          I’m not sure I understand the question. Is it: “is there some level of ‘really smart’ such that becoming a successful hedgie is a sure thing”?

          I suppose there is, if we take the LWish approach to intelligence where it’s unbounded and eventually lets you predict the future; but IME among actual bankers it also involves issues of personality and some luck.

          • ton says:

            So what kind of numbers are you talking about?

          • brad says:

            Sorry to be obtuse, numbers for what?

          • ton says:

            P(X becomes a hedge fundie | X tries to become a hedge fundie & X is smart), for various values of smart.

            (I don’t have much to go on in terms of IQ tests besides a 2340 SAT, including 800 in math and reading, so you can use that to define smart here.)

          • Tom Womack says:

            If you’re smart and competent and actually interested in the field and able to afford unpaid internships in New York, I suspect you’ve an 80% chance of getting a job at one of the big investment banks.

            Looking over my smart, competent, interested university colleagues who’ve gone to investment banks, you’ve probably a 30% chance of getting into the hedge-fund world.

            But I don’t know what the route from hedge-fund quant to hedge-fund manager looks like, and (given the size of the populations, and the fact that you’re very much getting into a retired-man’s-shoes situation) I’d not bet on better than 5%.

            Of course, talking about hedge funds is a very 2005-ish way to talk about prodigious wealth accumulation; the last few years have mostly suggested that hedge funds are a way for large investors to transfer money to hedge fund managers in exchange for moderate returns.

          • brad says:

            It depends on how you plan on getting there. The people I know who work in hedge funds (N=3, for managers N=0) got there via Wall Street banks, and have MBAs. There’s an alternate path that involves getting a PhD in math, physics, or computer science.

            For the former, Tom Womack’s numbers look plausible, though my recollection is that Wall Street summer internships pay, and pay well. The fallback there is generally staying on Wall Street or, failing that, going into management consulting.

            My sense is that if you plan from age 18 through getting your PhD in one of the aforementioned subjects to eventually work for a hedge fund, and you actually make it to your PhD (preferably at a well regarded school), you have much better odds of actually making it there (but at much higher opportunity cost). I don’t think it would much improve your chances of going from a well paid employee to a fund manager, though. This whole paragraph I have lower confidence in, as I’m not friends with anyone who has taken this route.

            As far as overall EV goes, I’d probably have to say something like computer science with a concentration in machine learning. Your fallback option (programmer / data scientist) would be very well compensated, though perhaps not quite as highly as the median specialist MD, and you have a decent chance at either a hedge fund or the startup lottery. Of course it’s hard to say what is going to be true in ten years, when you would actually have your PhD.

          • Chalid says:

            For the quant Phd route you’d need very high math ability. It’s a lot more discriminating than just an 800 in the math SAT, which after all 1% of the population can get. I’d guess a typical quant is 1 in several thousand?

            For the analyst/trader/MBA route the math threshold is lower – 800 SAT probably means you have sufficient math ability – but you have a much higher threshold in other things (including people skills and tolerance for extreme amounts of work). Importantly, it’s hard to get your foot in the door if you’re not coming from an elite college (Ivy League or equivalent).

            Very few people get to be hedge fund managers, but this isn’t a dualized field. If you shoot for hedge fund manager and fail, you’re still likely to be making lots of money.

    • gwern says:

      2. HOW DOES A DOCTOR UNIRONICALLY WRITE THAT BECOMING A DOCTOR IS A BAD IDEA

      And if a non doctor wrote this, you would just accuse him of ignorance, sour grapes, bias, being an autistic ivory tower white intellectual nerd, or something else. I know this because that exact EA link yvain used about doctors not being that great was recently posted to HN and the commentators there scoffed that way right up until someone pointed out to them that, actually, the author of that essay was a practicing doctor.

      Damned if you do…

    • Scott Alexander says:

      I’ve argued with 80000 Hours about this, but the short version is:

      1. People are allowed to choose careers for reasons other than maximal expected altruism value.

      2. Given that doctor is on average the highest-earning career in the United States, it’s a pretty darned good earning-to-give choice.

      • ton says:

        If you disagree with them, why are you quoting them as if it were true? I’m reminded of that whole “protected belief” thing on tumblr.

        Also, “doctor” is way too broad a category here, and I doubt those numbers include hedge fund workers. My five minute research suggests people working at hedge funds do make more on average than many doctor specialties, but the relevant question here is comparing like-potential abilities specialties and hedge funds, which I have no idea how to do.

        • Scott Alexander says:

          I disagree because doctors can earn to give, and because of other peripheral considerations. I don’t disagree with their core argument that doctors don’t save as many lives as earning-to-givers.

          • ton says:

            Fair enough, I guess. I also noticed the “oops” while rereading, so I can’t really accuse you of doing it unironically anymore.

          • chaosmage says:

            Maybe it’s because I’m not an earning-to-giver, but I can really easily imagine someone giving up earning-to-give because, say, they have a couple of kids. I find it much harder to imagine that a doctor would give up being a doctor.

            So while an earning-to-giver might save more lives than a doctor per year, I expect that over a lifetime, the difference is less clear-cut in relative terms, if maybe more in absolute ones.

            Also, doctors have extra credibility that might help in development aid or other altruistic work. If you try to be credible by saying “I’ve donated $200000”, people will just assume you were rich in the first place and that didn’t cost you much, and not value your opinion as highly as if you say you’re a doctor with experience working in fucking Haiti.

          • Deiseach says:

            Yes, but the earning-to-give people’s donations eventually go to someone who isn’t an earning-to-giver; the mosquito nets, for example, have to be made by people, the research into what best kills mosquitoes and doesn’t kill people had to be done by scientists, doctors and other medical personnel had to treat people with malaria and advise on what was needed, truck drivers have to deliver the loads of mosquito nets to the villages, etc.

            An earning-to-give donor may, on the numbers, save more lives per annum than a firefighter, but when an apartment building is burning down, you don’t save people by having a circle of earning-to-give donors standing outside writing cheques 🙂

            It’s the same basis as the argument that we can’t make a living by all taking in one another’s washing.

        • TrivialGravitas says:

          ‘Hedge fund workers’ aren’t really a career category the same way doctors are. What are the actual odds that somebody with the relevant education is able to get any job at all as a hedge fund manager? I know a lot of people in finance; none of them work at hedge funds. Whereas most of the risk of failing to become a doctor is passed by the time you’ve graduated medical school.

          • ton says:

            I’m talking about workers, not managers. The first is obviously easier.

            Anyway, “What are the actual odds” is what I was trying to get at above with “the relevant question here is comparing like-potential abilities specialties and hedge funds”. Basically, if someone is smart enough to have a good chance of making it into a hedge fund, what specialty doctor would that compare to?

            “I know a lot of people in finance, none of them work hedge funds.”

            How many want to?

      • Bugmaster says:

        >People are allowed to choose careers for reasons other than maximal expected altruism value.

        I would argue that not only are people “allowed” to do so, but that in fact choosing your career solely (or even merely primarily) based on its expected altruism value is a mistake.

      • Froolow says:

        An absolutely critical point which I think 80000 hours miss is that:

        3) Doctors are the only people with the skills and opportunity to do cutting edge medical research, the results of which benefit everybody forever.

        For example, it is very unlikely anyone but a doctor like Semmelweis could have made the discovery he did about germ theory, and that one discovery alone probably ‘pays’ for all subsequent doctors in EA terms (a drop in hospital-related mortality from 20% to 1% must be worth at least 10 QALY per life saved, so at least 2 QALY per patient on average; so as long as each doctor sees at least 150 patients, Semmelweis has earned 300 QALYs for every future doctor… ish)

        Granted, nurses could almost certainly take some courses in medical statistics and research methods, but at that point you may as well just give them another couple of years of education and make them full MDs.
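
        (A rough sketch of that back-of-envelope calculation; every figure in it, the 20%→1% mortality drop, the 10 QALY per life saved, and the 150 patients per doctor, is an assumption carried over from the paragraph above rather than real data:)

        ```python
        # Back-of-envelope for the Semmelweis example above.
        # Every number here is an assumed input, not a real estimate.
        mortality_before = 0.20    # assumed hospital mortality before handwashing
        mortality_after = 0.01     # assumed mortality after
        qaly_per_life_saved = 10   # assumed QALYs per death averted
        patients_per_doctor = 150  # assumed caseload per doctor

        qaly_per_patient = (mortality_before - mortality_after) * qaly_per_life_saved
        qaly_per_doctor = qaly_per_patient * patients_per_doctor

        print(round(qaly_per_patient, 2), round(qaly_per_doctor))  # 1.9 and 285: roughly the "2 and 300" above
        ```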

        • To a first approximation everyone who does cutting edge medical research went to graduate school rather than medical school. You can occasionally have a doctor in a position to notice something that isn’t being addressed by mainstream medical research but those are more or less Black Swan events and not things you can plan for.

          • Froolow says:

            That’s not my experience here (I work for the NHS). Almost all (non-pharma) medical research is done by practicing NHS docs. Is this a US / UK thing?

            Admittedly, this is gradually shifting as new advances rely more and more on ‘mining massive datasets on supercomputers’ and less on ‘noticing something interesting in practice’, but it has almost certainly been true from the dawn of modern medicine until now. Or is it maybe just that the US is ahead of us on the curve on this one?

            For example probably the biggest advancement in recent modern medicine (in terms of direct QALYs) was Atul Gawande’s safe surgery checklist – nobody but a doc would be in a position to make something like that happen.

  18. walpolo says:

    I feel like the problem with Bostrom’s argument is that it rests on an ethical mistake: it’s clearly much worse to kill someone or allow them to die than to prevent someone from being born. So most of those 10^52 possible lives should not be seen as having the same moral weight as the lives that could be saved with poverty aid (for example).

    Consequentialism as an overall moral framework is very plausible, but simple add-up-the-pleasurable-man-years utilitarianism seems obviously wrong.

    • Pku says:

      When you say it’s worse to kill someone than to prevent someone from being born, though, what’s the ratio? How many birth preventions are worth one murder? I’d say at most a thousand, but even if it’s 10^12, the argument still holds in principle (assuming you accept the 10^52 figure in the first place).
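
      (Just to spell out why the exact exchange rate barely matters on those numbers; the 10^12 figure below is the hypothetical ceiling from the paragraph above, not a real estimate:)

      ```python
      # Toy comparison using the figures above: even a very steep discount on
      # "prevented births" relative to actual deaths barely dents Bostrom's total.
      future_lives = 10**52        # Bostrom's potential-lives figure
      births_per_death = 10**12    # hypothetical: 10^12 prevented births = 1 death
      death_equivalents = future_lives // births_per_death

      print(f"{death_equivalents:.1e}")  # 1.0e+40, still astronomically large
      ```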

      • walpolo says:

        As I said above in response to onyomi, I think it could very well be that no amount of birth prevention is as bad as killing one person.

        I’m not even convinced it’s bad to prevent births. I mean, I’m pro-choice, so…

        • houseboatonstyx says:

          Even if we granted that in the long run, the more humans the better (which I don’t), aiding a woman to get contraceptives now may just be postponing the baby she will eventually want to have. People wanting contraceptives or abortion usually have a good reason; why bring those babies into families that are not ready for them?

          Most US women choose to have 2.4 babies total. If she starts at age 30 rather than age 20, those babies will come into circumstances where they will be healthier, get better education, etc, than those born to women of age 20. The age 30 babies will have more productive lives, making more of the inventions that onyomi expects. It’s not a question of her inventions coming 5 years later, but of them coming at all.

          • onyomi says:

            But no two potential babies are the same. If the theoretical baby I could have had when I was an irresponsible teenager could talk, he or she would probably say they’d rather exist, even if their life prospects would have been worse under such circumstances. The child I have as a responsible, well-off 35-year-old will come from a different sperm, a different egg, and probably a different mother.

            As Monty Python said, “every sperm is sacred”…

        • AlphaCeph says:

          > no amount of birth prevention is as bad as killing one person.

          so you would cause the extinction of the entire human race via infertility just to save one person from death? Seems pretty unreasonable to me.

          • walpolo says:

            Perhaps what’s wrong with that is that humans don’t want to go extinct. If we want more descendants, it is bad to prevent those descendants from existing. But the descendants themselves aren’t important intrinsically.

            Still, good example, and I feel the force of it somewhat.

      • TrivialGravitas says:

        I don’t see it as a ratio question. Beyond a possibly irrational and sentimental desire to see the human race continue, the exact number of people alive wouldn’t seem relevant. Assuming it’s done by preventing births rather than by people dying of preventable or violent causes, depopulation would even seem to have net utility: it’d be MUCH easier to give a planet of a billion people a good quality of life than the 9 billion it looks like we’ll stabilize at.

    • FrogOfWar says:

      I don’t see how you’re pinning this on utilitarianism as opposed to consequentialism. Consequentialism in general won’t support an act/omission distinction because it takes the rightness of actions to be a function of the goodness of the world that results. This evaluation is independent of which parts of that world your act caused to happen and which parts your act merely allowed.

      You’d have to have a very peculiar definition of the good to get something allowing an act/omission distinction out, and the resulting view probably wouldn’t be thought of as consequentialist.

      [You might want to make the good include caused-by-human-deaths at a stronger weight than allowed-deaths, but this won’t give you the intuitive deontological conclusion, because you’re still on the hook for preventing other people’s causing of deaths were possible. The outcomes would be at least as unintuitive as ordinary consequentialist ones.]

      • LTP says:

        Wait, if this is true, then shouldn’t all utilitarians be pro-life and have as many children as possible?

        • FrogOfWar says:

          Well, you’d obviously need to do a little more legwork to get that argument going, but it’s certainly the type of thing that could fall out of utilitarianism.

          That said, many consequentialists try to relax the stringency of the moral duties they prescribe. The host of this site is at least largely consequentialist and yet advocates giving 10% of your income to charity as opposed to nearly all of it. Whether the view can be plausibly relaxed in this manner is another issue.

          Also, the second to last sentence in OP should say *where.

      • Soumynona says:

        Going from a world history in which a person lived happily for 80 years and then died to a world history in which that person lived happily as long as physically possible is a greater improvement than going from a world history in which X people lived happily for ~80 years each to a world history in which X+1 lived happily for ~80 years.

        How does that involve the act/omission distinction?

        • FrogOfWar says:

          I don’t see exactly how you’re defining ‘good’ so as to get those results. Let’s continue assuming for simplicity that none of the ordinary things affect the value of a person-year.

          Is the proposal that the value of a person-year is a decreasing function of the total number of people in the universe, past and future? Or is the proposal that each person has a constant value per person-year which is determined by how many people were born before them, with the later people having less value per person-year? Something else?

          There look to be some unusual consequences either way you go. In any case, looking back at the first comment I don’t know how I read into it an attempt to build act/omission into consequentialism. But I still doubt you can get a much more intuitive consequentialism that satisfies the desiderata.

    • FullMetaRationalist says:

      My intuition says the reason utilitarianism tends to draw obviously wrong conclusions is that the calculations, as usually exemplified, leave externalities unaccounted for.

      E.g. is there a difference between uprooting a mature tree and preventing a sapling from germinating in the first place? At first glance, there is no difference. But I would argue there is a difference, because uprooting a tree leaves a hole in the ground. Is there a difference between driving to a friend’s house via the road and driving to a friend’s house through a tree? The destinations are the same, but one path will leave your car inoperable.

      I think theoretically, simple add-em-up style utilitarianism might still be salvageable if we were to factor in all the consequences, rather than just the obvious ones.

  19. Alex Zavoluk says:

    I think it’s clear from context he’s saying 10^-67 is the marginal change in probability from the specified donation. That’s the only thing that makes sense.

  20. Jaskologist says:

    Screwtape advised Wormwood as much as possible, “to thrust [the patient’s] benevolence out to the remote circumference, to people he does not know.” I feel like the same basic error is being committed here, where focus and resources are being thrust out in a theoretical realm that we’ll never be able to check on.

    Spend your resources solving the near problems, not the far ones. Those are the ones where you can get actual information about whether or not what you’re doing is working. Waving around numbers like 10^-65 is just using math to delude ourselves into thinking we know what’s going on.

    • Pku says:

      Reminds me a bit of playing Go: For weak to mid-level players, the general advice is to make simple moves in the important part of the board, rather than going for complicated moves (More advanced players make complicated moves, but I don’t think anyone can predict the future IRL nearly as well as those guys can predict the game).

    • AngryDrake says:

      This.

      Humans are optimized for handling what they know, what is near them, not unknown things far away. To promote long-distance money-giving as charity is foolish at best, sabotage at worst.

    • Scott Alexander says:

      Is Appeal To Fictional Demons a logical fallacy? Maybe it should be.

      Like, this is all nice and well, but in cases like malaria where you can check and you find that working with the far people saves hundreds of lives and working with the near people doesn’t, I feel like Appeal To Fictional Demons is on thin ice.

      Something more speculative like AI is on shakier ground, but note that I am advocating making it one thing in a large basket, whereas everyone else is arguing for totally ignoring it. I feel like my way is the way with the modesty appropriate for shaky ground.

      • Jaskologist says:

        We could call it the “Moloch Fallacy.”

        I don’t dispute the value of malaria prevention in principle, but I do have a nit to pick with your claim that “you can check and you find that working with the far people saves hundreds of lives” because this hits on a pitfall I think Rationalists are extremely prone to. You can check that it saves hundreds of people; did you actually go out and check that? Do you know that your $100 for malaria nets actually saved a life? When I let my brother crash at my place, I know exactly how much this helped him. When I aid some local refugee family (you probably have them, too; we have a steady influx and I do not live in a major city), I can see directly whether I’m accomplishing anything. You may well have something similar with malaria nets. I know you don’t for AI research donations.

        Some will indeed be called to go to Africa and distribute nets. 90% of people should be focused on their local community, though, even if they give a little bit to the net guy. I think you are advocating something similar when you say AI research should just be a small part of a larger basket, but I don’t think the numbers you are using support that. If we go by the numbers you give, AI research should dominate. If we are ignoring those numbers, why bring them up at all?

        • Scott Alexander says:

          “You can check that it saves hundreds of people; did you actually go out and check that? Do you know that your $100 for malaria nets actually saved a life?”

          Are you familiar with stuff like this, and with the idea of GiveWell more generally?

          “Some will indeed be called to go to Africa and distribute nets. 90% of people should be focused on their local community, though, even if they give a little bit to the net guy. ”

          How did you calculate this? What argument tells you it’s like this rather than the reverse?

          • Jaskologist says:

            Yeah, I did some quick poking around so I wouldn’t accuse EA unjustly of not investigating, which is why I’m slightly pro-malaria-nets, although I didn’t get around to making that clear in my comment. My main complaint is that their metrics all seem net-oriented; I want actual empirical data on lives saved. But malaria is at least well-understood, so this is less of a problem for it than most interventions. (But remember again that your OP was about AI. We have *no* empirical numbers on that.)

            Let me try to break my objection down into discrete points:

            1. I did not calculate the 90% number. I pulled it out of my ass, and do not mean for it to be taken literally. I am actually advocating against using numbers in any sort of precise way here, because I do not believe we have the knowledge to accomplish this. Trying would be like Archimedes trying to determine the Planck length.

            2. Illustrative story time! A while back, a friend of mine went on a mission trip to New Orleans a few years after the hurricane to help rebuild. She was going on and on about the wisdom of this old lady living in a dilapidated FEMA trailer. My only thought (not expressed) was “if she’s so wise, why is she still sitting around years after the fact waiting for others to rebuild her living quarters?”

            With malaria nets, my thought is “why do these people still need malaria nets from foreigners?” There’s probably (a|many) much deeper problem(s) underlying all of this, seeing as how we were able to fix the malaria problem for ourselves a long time ago. That’s the problem that needs fixing. Nets are a bandaid.

            Now, I don’t know what the root problems are. I don’t know how to solve them. I wouldn’t be surprised if we (as opposed to they) can’t solve them at all.

            Focusing on the local problems alleviates that issue. I probably have a pretty good idea why my brother needs to crash on my couch, and I will be trying to address that in way deeper than throwing money at him. When a woman comes to the battered woman shelter for the 5th time because she’s being beaten by yet another guy, we know there is a problem to be addressed beyond bad luck. When we deal locally, we have this information, and we have feedback.

          • Peter says:

            >we were able to fix the malaria problem for ourselves a long time ago

            The sources on malaria in England suggest that solutions to the problem came fairly late, and that it was concentrated in certain areas such as the Fenlands. Given how rare sickle-cell trait is among white English, it can’t have been a huge problem, not compared to the problem in places where sickle-cell trait is far more common anyway. Most of the problem does seem to be latitude; I vaguely recall there’s a debate somewhere where anti-malaria people talk about draining wetlands and the biodiversity people hate the idea, but I don’t think that resistance to, or lack of resources for, draining wetlands is the main difference between sub-Saharan Africa and England.

            Also, what’s with the hate for bandaids (and also, in general, crutches, and possibly other medical stuff that gets used in metaphors)? They’re wonderful medical supplies, much less intrusive than other things; that’s why they’re first in terms of aid. With quite a lot of medical problems the right thing to do is to prevent secondary problems, apply symptomatic relief, etc., and let the body’s natural healing processes take care of the underlying problem. Transferring the analogy from people with sicknesses to countries with endemic problems, you might want to translate “healing” to “growth”.

          • Scott Alexander says:

            >> With malaria nets, my thought is “why do these people still need malaria nets from foreigners?” There’s probably (a|many) much deeper problem(s) underlying all of this, seeing as how we were able to fix the malaria problem for ourselves a long time ago. That’s the problem that needs fixing. Nets are a bandaid.

            Seems to me that either Africa’s problem is some sort of extremely fixed unshakeable thing like HBD, in which case they will be dependent on strangers through no fault of their own and so strangers ought to help them…

            Or it’s a combination of getting the short ends of a bunch of different sticks, in which case maybe taking away one of those sticks will push them a little way toward climbing out of the hole. There are some interesting studies showing that African poverty by area is very closely related to burden of disease-carrying insect. If that holds up, then getting rid of some disease-carrying insects isn’t ignoring the problem, it’s solving it.

            (deworming is an even better example, since it can actively raise education and IQ)

          • Jaskologist says:

            The HBD angle hadn’t even occurred to me. I was thinking more in terms of bad societal institutions. I could waggle my eyebrows suggestively, but really my whole point is that we don’t really know what their problems are and are ill-suited to help much with them, so I’d better not.

            Even if we were the ones who gave them the short stick ends, it doesn’t follow that we can help. If I run somebody over with my truck, they still need to go through the physical therapy themselves to get better.

      • Peter says:

        The trouble with one thing in a large basket is it ends up in danger of pushing everything else out, like a cuckoo chick in a nest. Even if this isn’t actually a danger, it’s something people are going to be afraid of.

    • Michael Vassar says:

      I think you are correct here, but I wish that the argument could be had as a whole blog post. I don’t think that blog threads are sufficient to convey the intuitions behind this sort of claim to those who lack them. In fact, it’s very very hard to convey said intuitions without almost throwing out the baby with the bathwater and destroying the idea of rationality in a person entirely.

  21. “If you try to be good, if you don’t let yourself fiddle with your statistical intuitions until they give you the results you want, sometimes you end up with weird or scary results.”

    We can tie this into yesterday’s discussion. A few decades ago, the weird or scary results included defending a dictatorship that promised to usher in a utopia or supporting eugenics policies to keep inferior people from breeding.

  22. Josh F says:

    I will cause over ackerman(10^10000000) person-years’ worth of the happiest person alive, with probability greater than 1/ackerman(10^500000); 1/ackerman(10^200) of these person-years’ worth will be given directly to you, Scott.
    This will only occur if you get me at least 10,000,000 dollars in the next month; otherwise the opportunity will be gone with probability .000000001%. If I get at least $1000, the above will occur with at least 10^-5000000 times the probability. You have my email, so you can probably get it to me.

    I’m very excited to have created the world’s best effective altruism opportunity in the world.

    You have an email for me, so if need be you can get me some money. I’m curious why you fail to donate. I mean, sure, you doubt the numbers heavily, but that doesn’t apply. More seriously: is the difference in certainty between your belief in me and your belief in MIRI so great that it overrides the much higher potential value here? I will achieve this happiness by creating an organization to bury 3 small elephants, thus invoking the happiness.

    Why would you think I am a Pascal’s Mugging and MIRI isn’t? Remember, you can’t add zeros willy-nilly, and if nothing ever has a probability of 10^-67 then certainly nothing has a probability of 10^-(10^(10^67)).

    • MicaiahC says:

      Why should your stated probability be the amount that people update their estimates by?

      • Josh F says:

        Obviously, obviously, obviously it shouldn’t. But even if you think the odds I’m telling the truth are 10^-googolplex, you should still donate money.

        (I’m not trying to be dismissive with the use of ‘obviously’; I chose the numbers specifically so that for any number which is a plausible probability in the known universe that I am telling the truth, minus the probability of the opposite, you should donate.)

        • MicaiahC says:

          What raises the probability of that above the the opposite truth (terrible things happening if I do donate to you) happening?

          • Josh F says:

            Pretty little; it could go either way. If you think it’s more likely that the opposite would happen, though, that should overwhelm your choice, and the “effective altruist” thing to do would be to completely orient your life around stopping the minuscule chance that I get my requested donation.

            Unless you have unheard-of, insane, impossible precision, you should be compelled to do something: either work as hard as you can to earn me money, or work as hard as you can to stop Scott from giving me money. Either way, without unheard-of, insane precision, you should be compelled to do something.

          • MicaiahC says:

            You’re hiding an important assumption in your decision algorithm, which is that worldstates are only constrained by “true” and “not true” and then deciding to arbitrarily eliminate alternate, plausible future worldstates.

            At those tails, it doesn’t matter how precise I am, because it all gets washed out in the noise of:

            I’m a boltzmann brain, big and confused, here is my aposgiyalkopgpuaklpjoimaou
            I have been insane all of my life and all of you are really the imaginary remnants of my friends that got lost in a parallel universe with me in a hiking trip.
            My values will precisely reverse the instant I hand you the money, considering that the number of possible states contained within my skull is much smaller than the tail ends of the probability

            etc.

            etc.

            etc.

            Considering that the number of worldstates *is* well-nigh infinite, it’s actually appropriate to assign astoundingly low confidence to ones that have no reason to be brought to your attention.

          • Josh F says:

            I mean, I wouldn’t give money to me after having made this claim. But I think it’s more of a problem than you do. It’s plausible, though not obvious to me, that the odds you are a Boltzmann brain are greater than this. Fine, so ignore all possible universes where this statement is meaningless, or better yet compute the average; it should still sum very, very far away from 0 in expectation, even if your prior is Solomonoff induction or whatever. I personally think one should discount in expectation things involving very large numbers in correspondence to a superlinear function of their size somehow and then you’re fine. But in that case the argument for MIRI as an EA choice falls flat, as do the arguments for most X-risk organizations.

            For all intents and purposes a 10^-googolplex chance is 0. There are no “real world” events (such as the earth exploding and turning into 29 dancing purple teacups) where I think of it as representing a useful real-world probability. Yet that much of a deviation should let me dominate.

            If you don’t bound global utility or use a much harsher prior than Solomonoff induction, you’re in a bad place.

          • ton says:

            >I personally think one should discount in expectation things involving very large numbers in correspondence to a superlinear function of their size somehow and then you’re fine.

            >If you don’t bound global utility or use a much harsher prior than Solomonoff induction, you’re in a bad place.

            You’d have to have one hell of a superlinear discounting function to get out of Pascal’s Mugging. You used ackerman(10^10000000) above. Do you quite comprehend what that is? You’d basically need to claim that all utility is bounded by some finite number to pull that off.

          • MicaiahC says:

            Computing the average, or doing naive things like summing, seems like an ill-formed way to update a prior. Now, I don’t *really* have the precision to know what my prior is, but whatever update I have from you saying so is lost in the noise of (see what I said above), which is why I feel satisfied with my earlier responses to you: I have appropriately low priors that also move appropriately low amounts.

            Is there anyone who doesn’t think utility is bounded? I mean, it clearly has to be computed somehow via VNM axioms, so as an absolute physical upper bound you can take a black hole the size of the (visible, doesn’t matter) universe.

          • Josh F says:

            Yeah, I’m rather fond of bounded utility as a model. What I meant by a superlinear discount on size is something along the lines of “if a claim involves the number googol, divide its probability by googol*ln(googol)”, but describing this formally runs into all sorts of hairiness involving what a “large number” really means. I’m not sure there is a solution to that. I’ve studied some amount of computability, so the point of the numbers is that I never worry I wrote them too low for my point. 🙂

            My point was that while the noise vastly outweighs the signal, as long as there is some sort of signal you ought to be compelled to act on it, in an unlimited-utility world. Things get weird when you bet over epistemologies, though; I’m not sure that, against an omnipotent deity, I would bet very strongly (say my life vs $1) that a bounded prior is better than Solomonoff induction, but I do like it as a model and don’t really care what naive utilitarianism says about my choice of models.
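
            (To make the shape of that rule concrete, here is a toy version of the “divide the probability by n*ln(n)” idea sketched above; it is purely illustrative, not a worked-out decision theory:)

            ```python
            import math

            # Toy version of the superlinear penalty described above: a claim
            # invoking a payoff of size n has its prior divided by n * ln(n).
            def penalized_ev(claimed_payoff, base_probability):
                penalty = claimed_payoff * math.log(claimed_payoff)
                # After the penalty, the EV reduces to base_probability / ln(payoff).
                return base_probability * claimed_payoff / penalty

            for payoff in (1e6, 1e52, 1e100):
                print(f"{payoff:.0e} -> {penalized_ev(payoff, base_probability=1e-3):.2e}")
            ```

            Under a rule like that, quoting a bigger number makes the mugging weaker rather than stronger; whether the rule can be stated formally is exactly the hairiness mentioned above.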

          • ton says:

            Re: utility bounded: Yudkowsky believes utility is not bounded (e.g. in http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/).

            >I mean, it clearly has to be computed somehow via VNM axioms, so as an absolute physical upper bound you can take a black hole the size of the (visible, doesn’t matter) universe.

            Why? Who says that the visible universe is all that I can care about? What if the promise is that you take me out of the visible universe and into a universe with much larger utility?

            The real problem comes down to the non-convergence of the Solomonoff induction/universal prior, of course. You can easily construct relatively simple theories that, if true, yield utility disproportionate to their probability, and therefore prove that they raise EV without counting out the entire number.

          • MicaiahC says:

            I don’t think me and Eliezer disagree on anything substantial; if he also believed in a bounded universe, both in space and time, he would also take a theoretically upper bounded utility function. If I believed in an unbounded universe, either in space or time, I would also believe in an unbounded utility function.

            Actually, hm, the bounded utility thing doesn’t seem clear to me any more. Lemme rethink

          • ton says:

            It may help to remember that even if there is a bound (e.g. you’re an infinite set atheist like EY), if you *do not know* the bound, you can’t include that in your own model.

            And that if there’s a finite probability of a greater than X bound for any X, then that’s unbounded in real terms.

          • Alex Zavoluk says:

            “Re: utility bounded: Yudkowsky believes utility is not bounded (e.g. in http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/).”

            That seems like clear and utter nonsense; so long as the laws of thermodynamics hold, there is no such thing as living forever.

            “Why? Who says that the visible universe is all that I can care about? What if the promise is that you take me out of the visible universe and into a universe with much larger utility?”

            “Much larger” is still finite. Allowing unbounded utility this way requires making assumptions of quite bizarre things to exist without any evidence they are even possible.

            Suppose we put a “practical” lower bound on probabilities that we can assign. Then, Pascal’s mugging applies in full: we can make increasingly absurd decisions the “obvious” choice by making assertions about the utility of events. Thus “Utilitarianism with probability bounded by something above 0 or below 1” is a useless, inconsistent decision system. One way to maybe help would be to try to assign actual distributions, so that each range of outcomes has a probability. But that can be gamed just as easily, if we’re not allowed to put nontrivial fractions of the probability mass close enough to 0. Another thing that would help is being more specific about events; “UFAI will be developed at some point” is not the same as “AI will destroy everything if we don’t do X amount of research right now.”

            In particular, I think there must be some bound on expected utility. Such a bound would seem to easily follow from entropy being positive (or, again, we could wander into “imagine if there are other universes!” territory, which is no more convincing than pascal’s original wager).

            Moreover, I think that the original post is basically correct in asserting that we don’t have the ability to guess accurately about such small probabilities (or such large outcomes as “10^54 future lives”), since we have no data. Yes, sometimes pulling numbers out of your ass works, if you have some actual experience or basic knowledge of the numbers involved. If you don’t, comparing it to this litany of less improbable things is just an appeal to ridicule. For all you know, the probability that UFAI destroys the world *is* 10^-67.
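
            (A toy illustration of the probability-floor problem described above; the floor and the claimed payoff are made-up numbers, chosen only to show the mechanism:)

            ```python
            # If we refuse to assign any claim a probability below `floor`, then
            # any claimed payoff bigger than cost / floor "wins" the expected-value
            # test by construction, no matter how absurd the claim is.
            floor = 1e-10             # hypothetical smallest allowed probability
            cost = 1000               # what the mugger asks for, in the same units
            breakeven_payoff = cost / floor   # 1e13: any claim above this dominates

            claimed_payoff = 1e52     # the mugger just quotes a big enough number
            print(floor * claimed_payoff > cost)   # True, by fiat
            ```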

          • Deiseach says:

            Well, MicaiahC, Bostrom is hiding important assumptions as well: that those theoretical 10^52 lives of 100 years each are all going to be lives of healthy, reasonably well-off, educated people doing interesting and productive work with no significant mental, psychological or physical impairments to cause them distress, inconvenience, and lack of happiness.

            He is presumably not claiming it makes no difference if those potential hundreds of billions of lives are all living in the equivalent of a favela in Rio de Janeiro rather than in leafy Oxford or the sunny, progressive Bay Area?

          • ton says:

            “That seems like clear and utter nonsense; so long as the laws of thermodynamics hold, there is no such thing as living forever.”

            Of course, but do you concede a non-zero probability of them being wrong?

            ‘“Much larger” is still finite.’

            I addressed this above. If there is a bound to the universe that you do not have an upper bound for, your model cannot be bounded. We’re not talking about infinite utility, we’re talking about unbounded utility, i.e. for every X there is some possible utility greater than X. Knowing that the universe is finite doesn’t help us if we can’t rule out a finite universe greater than X. We’d need an actual upper bound on the largest possible universe, not just the knowledge that no universe can be infinite.

            “Allowing unbounded utility this way requires making assumptions of quite bizarre things to exist without any evidence they are even possible.”

            We need to assume that they *can* exist, not that they do.

            If you want to put a practical bound on expected utility, go ahead, but I don’t think it’s been justified on philosophical grounds yet.

            @Deiseach Wouldn’t that be a rounding error? Even if a good life is worth 100 times as much as a bad one, the argument is basically unchanged.

          • Alex Trouble says:

            “Of course, but do you concede a non-zero probability of them being wrong?”

            Sure, but I assign a sufficiently low probability to rule out fantastic imagination-driven judgement of morality as the pointless, basically religious speculation they are.

            “We’d need an actual upper bound on the largest possible universe, not just the knowledge that no universe can be infinite.”
            No, we just need to assign a sufficiently low probability to each increasingly large universe that the expected utility drops very quickly.

            “We need to assume that they *can* exist *and they have sufficiently large probability of existing*, not that they do.”

            FTFY

            “If you want to put a practical bound on expected utility, go ahead, but I don’t think it’s been justified on philosophical grounds yet.”

            Seems like the whole point of Pascal’s mugging. If you can assign essentially arbitrary expected utility to things, utilitarianism is useless.

            “@Deiseach Wouldn’t that be a rounding error? Even if a good life is worth 100 times as much as a bad one, the argument is basically unchanged.”

            Unless negative utility is possible. It seems to me that 10^50 lives of being tortured endlessly is worse than no lives at all.

          • ton says:

            “No, we just need to assign a sufficiently low probability to each increasingly large universe that the expected utility drops very quickly.”

            You seem to concede that utility wouldn’t be bounded, only expected utility. This, however, is incompatible with http://arxiv.org/abs/0712.4318, which finds that any unbounded utility system will lead to undefined expected utility.

            ““We need to assume that they *can* exist *and they have sufficiently large probability of existing*, not that they do.””

            My argument was just to show that utility is unbounded. If you want to show unbounded expected utility, that suffices as per the paper linked above.

            “Seems like the whole point of Pascal’s mugging. If you can assign essentially arbitrary expected utility to things, utilitarianism is useless.”

            Exactly. There’s no accepted resolution to Pascal’s Mugging yet.

    • Carl Shulman says:

      If one’s credences and values allow the possibility of big payoffs, then there are (varying) chances of big payoffs for every action. For example gathering more knowledge, developing technology, etc. Even if we were living in a Matrix run on a hypercomputer with boundless computational resources, and you were a fanatical total utilitarian, etc, etc, it would still make sense not to pay since the money could be used in other ways with more credible routes to big payoffs.

      http://www.nickbostrom.com/ethics/infinite.pdf
      See “empirical stabilizing assumptions” in this paper.

      E.g. the point that improving health or education for 1 in 7 billion people might facilitate our successful navigation of various future risks gives one of many bounds on how improbable a risk (or intervention) can be and remain non-negligible. Well-characterized interventions like asteroid tracking, and murkier ones like nuclear non-proliferation or disarmament advocacy offer other benchmarks.

    • Nornagest says:

      What you said, but with bigger numbers.

      I await your donation.

  23. John says:

    What if donating to FAI research has some nonzero probability of *increasing* the odds of the development of unfriendly AI? That’s a very easy way to get arbitrarily small–down to zero, if you want–EV from your donation.

    More generally, when estimating the probability that X increases Y, our null hypothesis should be that X does not affect Y, or in this case, that doing X has *no* effect on the probability of Y. Evidence and intuition move this value, and it’s reasonable to say something like “you moved your probability estimate by 10^-67? What possible evidence could you have that is *so* weak that it only moves your estimate by that amount?”

    But it is not reasonable to say “you estimate that this coin flip will increase the number of heads flipped in the history of the universe by *exactly* zero?!?! That’s incredibly implausible!”

    • Pku says:

      Seems unlikely that the balance would fall that precisely near zero, though: if the net effect is negative, definitely don’t donate to FAI, but if it’s positive, then even if the positive and negative effects are really close, the cancellation would be unlikely to shift your estimate by more than an order of magnitude or so.

  24. Shmi Nux says:

    10^54 human life-years? It’s too big, you should never use this number.

    Also, we don’t even know whether MIRI’s work/Bostrom’s book/donations to AI risk would have a net positive or negative effect. Maybe all the publicity will hasten the development of a rogue AGI.

    AI research is completely unlike other x-risk work. You do not risk more asteroids hitting Earth just by looking for them in space. You do not hasten climate change by doing climate research. Grey goo is more iffy, but still not as bad as AI research.

    • GMHowe says:

      I don’t actually know how the 10^54 number was arrived at, but if it is based on the computational capacity of all the reachable matter in the universe, then surely it just reflects the fact that the computational potential really is that large.

      If you were just lampooning Scott for saying “…nothing is ever 10^-67 and you should never use that number,” well, fair enough. That statement is overly strong (though a decent heuristic), but Scott is right that 10^-67 does not pass a sanity check in this particular case.

      There is a possibility that research into AI x-risk will turn out to be counterproductive, but that’s a bit of a catch-22 since not doing the research leaves us in perpetual ignorance and forestalls the (in my view) much greater possibility that such research will turn out to be useful.

  25. Douglas Knight says:

    Intuitively, people’s system 1s think “Doctor? That’s something where you’re saving lots of lives, so it must be a good altruistic career choice.”

    The problem is not the factual belief, but the inference. Doctors really do save lots of lives, not 4 QALY/Y. The problem is asking the wrong question. The crazy, counter-intuitive thing is not what doctors do, but that the right question is about the decision to become a doctor. Doctors really do save lots of lives, but those doctors already exist and are saving those lives right now. An applicant to medical school is just fighting over who gets to be a doctor and save those lives that are going to be saved regardless. Even a new doctor is mainly taking from other doctors the easy cases that were going to be saved anyhow; the 4 QALY/Y contribution comes from the few hard cases that benefit from the extra attention. You misquote your source. It doesn’t say that the average doctor contributes 4 QALY per year, but that is what it predicts about an extra doctor.

    (An amusing coincidence is that the NHS thinks one should pay no more than £80,000 for those 4 QALY, just over the salary of the median doctor. Thus, the NHS should endorse more doctors.)

  26. Sniffnoy says:

    Matthews correctly notes that this argument – often called “Pascal’s Wager” or “Pascal’s Mugging” – is on very shaky philosophical ground.

    Surely that should read “often incorrectly called ‘Pascal’s Wager’ or ‘Pascal’s Mugging'”? 🙂

  27. R Flaum says:

    I think there’s a bigger issue with the existential risk argument, though, which is that there’s an enormous difference between killing someone and preventing them from being born in the first place. There’s nothing inherently undesirable about the extinction of humanity — what is undesirable (except in certain rare cases) is the death of an actual human. If all seven billion-odd humans on Earth died right now, that would be seven billion times as bad as one human death, but the fact that it would mean no more births is irrelevant to this calculation. If we waited until we had a population twice as large as the current one and then killed half of that, it would be just as bad. If humanity died out for reasons that do not involve high death tolls, such as a loss of interest in breeding, there’d be nothing wrong with that (well, okay, in that specific example there’d be economic effects that would lead to the quality of life dropping tremendously. But you get what I mean).

    • Wrong Species says:

      What do you mean by “inherently undesirable”? I would consider the extinction of humanity through lack of breeding to be far more horrible than the death of one individual. The people trying to prevent extinction risks probably feel the same way.

      • R Flaum says:

        Why? Whom does it hurt?

        • Wrong Species says:

          Believe it or not, not everyone bases all moral decisions on utilitarian calculations. You are using the word “inherently” as if your values are automatically more right than ours. If you can prove utilitarianism is “inherently” right, then you just solved one of the most important problems in philosophy. Until then, you can’t just assume it’s true.

          • R Flaum says:

            Sure, but if you reject utilitarianism, then you also have to reject the entire approach used to justify this argument for worrying about existential threats in the first place, since it is based upon utilitarianism. (There are other arguments for it that are not based upon utilitarianism, but this specific one is)

  28. LTP says:

    Okay, so here’s my problem with the x-risk reasoning (and really, utilitarianism in general, as it facilitates this). Let’s say you’re right, Scott, that assigning such a low probability to something like Matthews does is irrational. So, okay, let’s say somebody comes to me soliciting donations for the Anti-Lizard People Institute. This person, let’s call him Schmeliezer Schmudkowsky, tells me that he believes that lizard people have hijacked all the world’s governments and are plotting to wipe out humanity in the near future. I tell him that seems extremely implausible, and he retorts that if I had read Nick Bostrom, I would know that even extremely low probability existential risk is worth fighting, so instead of donating my money to malaria nets in the third world, I should donate to his institute. That’s clearly absurd. (Or, another absurdity is Pascal’s Wager itself, as I’m sure you realize, Scott). Perhaps this is what you meant by it being philosophically suspect.

    This strikes me as a subset of instances where utilitarianism/consequentialism is just a language and a framework for people to rationalize whatever moral beliefs they want with a thin veneer of specious quantification, without any real method for resolving disagreement (and so, really, it’s no better than any other moral system).

    • Scott Alexander says:

      The lizard person scenario genuinely does seem to me more likely than the tornado-meteor-terrorist-double-lottery-Trump scenario, and therefore genuinely more likely than 10^-67.

      In order for your scenario to work, you would need to have some absurdly huge number to compensate, but you can easily get that by taking Bostrom’s and saying that once the lizards wipe out humanity, we will lose that awesome 10^54 future.

      At that point, all I can say is that fighting the lizard people is not the most effective way to save that future, and probably insofar as you care about the future, you should care about more likely ways of protecting it more (weighted by how much a marginal dollar matters to those things).

      Of course, then you could rightly argue that we could never spend money on anything except the far future – yes, More Effective Way Of Saving Future is better than Lizard People, but if I have any money I’m not spending on More Effective Way, Lizard People is the next option. To that I can only say that I limit how much I give and I do it by bins, so I would end up filling my far-future bin, moving on to some other bin, and spending the rest of the money on myself.

      I realize this is an unprincipled way to do things, but the unprincipled part is ever spending any money on myself, not ignoring the lizard people – and I’ve already admitted that particular unprincipledness.

      • LTP says:

        What makes you think AI risk is the most effective way to help future humans? If EAs are just reasoning with numbers pulled out of their asses anyway on x-risk stuff, what makes you think humans would be at all able to reason about issues that involve both really really big numbers (future humans) and really really small probabilities? You could make a reasonable case for many things, things that probably sound about as absurd to you as thinking about AI risk with our modern level of technology seems to me.

      • Bugmaster says:

        Wait, can you explain why you think that “never spending any money on the Far Future” is the wrong answer? I personally think this is a rational choice, assuming that by “Far Future” you mean “10^54-landia”.

        The reason I say this is because the Far Future is very far, and thus highly uncertain. This means that, no matter what you spend your money on — lizard people, gamma-ray bursts, AI, etc. — your contributions are very likely to be rendered totally irrelevant. Because of this, you’d need to donate truly titanic amounts of money to compensate.

        But, alternatively, you could donate modest amounts of money to Near Future projects, and still effect some positive change — which will also contribute to the Far Future, since its effects will be compounded over time. In addition, you will be able to observe the effects of your donations in a reasonable time frame, which means that you’ll be able to adjust your spending dynamically. So, for example, if you observe that anti-mosquito laser sentry guns do little to prevent malaria, you can stop investing in laser sentry guns, and switch to some other charity. You can’t do that nearly as easily for projects whose payoff is thousands of years from now.

      • ton says:

        Uh, it’s trivial for the anti-lizard people to inflate their claimed impact by more than enough to compensate for the lower probability, which is the whole point of Pascal’s Mugging.

      • But, but what if the lizard future is better? Maybe lizards are naturally happier than people, or suffer pain less, or are kinder to each other? Or have better sense, and are less prone to existential risks?

        What are you loyal to, and why?

      • Michael Vassar says:

        Or, maybe people are so systematically bad at using utilitarianism as anything other than a justification of specific tribal identification factors that it’s de-facto not usable in public or as part of a tribe. That would be my best guess. Of course, part of the problem is that yes, it’s very sloppy philosophy, but so is almost all philosophy, and I don’t see a principled alternative to using principles (e.g. to using philosophy).

        FWIW, I believe my life to be coherent from a consequentialist perspective, but think consequentialism points away from the advocacy of consequentialism.

  29. BR says:

    I find this debate completely fascinating and it gives me a massive headache. I have a bunch of questions that I’d be incredibly grateful if anyone could give me a hand with:

    1A. How strong is the argument for potential unintended consequences, i.e. the path of effect for a donation to MIRI is so chaotic that you need to sum up both potential positive and negative effects on AI safety?
    1B. If the potential for unintended consequences is strong, is there a good argument for coming out on the positive side of the ledger? How likely is it that even if you did end up with a positive effect that it might actually be super tiny as per Matthews, once you are done summing up positive and negative effects?
    2. Is there any good historical analogue for something like AI safety research in advance working? I don’t think nukes work, because I think the science was MUCH further along when people started worrying about safety, but maybe I am wrong there.
    3. Are there well-established ways of actually doing this kind of far-future probabilistic reasoning? I keep worrying that this approach (where you multiply out probabilities for a far-future event) literally does not work, and that we can’t figure out it doesn’t work because it falls outside of standard scientific practice (there are no lab or natural experiments to use to try to establish replicability of results), but again I could just be confusing myself. I would REALLY appreciate help on this question, especially from math/science people.
    4. Has anyone steelmanned the argument against focusing on X-risk vs. focusing on current humanitarian concerns? I would LOVE to read that.

    Any help on the above is massively appreciated. I rarely comment but I love this blog and those who comment here.

    • Dust says:

      Why is it that the reflective curious uncertain people always stay lurkers? The internet selects for signal boosting overconfident loudmouths like me.

      • chaosmage says:

        It’s a simple consequence of differences in comprehension skills. Readers with good comprehension have much more material to choose from, so it’s harder to get their attention. So most writers compete for readers with weaker comprehension skills, such as the majority who speak English as a second or third language.

        Writers very often overestimate the comprehensibility of their writing, because their minds contain extra information that is not available to other readers. Compared to simple thoughts, complicated thoughts are much harder to put into words that do not require good comprehension skills. But what isn’t comprehended will not be discussed (fruitfully) or shared.

        Case in point: BR’s questions would be much easier to comprehend if s/he had 1) put them into separate posts, 2) described each at more length and 3) imposed on him/herself a hard limit of no more than 20 words per sentence.

        Of course comprehension skills will be higher among readers here than in most other places. But even here, comprehending hard-to-comprehend material is work. Even Scott himself would get way less of our attention if he wasn’t such a spectacularly clear writer.

    • Scott Alexander says:

      1. I tend to be skeptical of unintended consequence arguments. Yes, if you save the life of a drowning child that child may turn out to be Hitler, but I feel like absent some *particular* reason to think that something will make a problem worse, you should expect that all of the weird unintended things cancel out (the child might also invent the cure for cancer!) leaving the intended effect. I feel like if we take a randomly chosen thing, it’s got no more chance of hurting the Friendly AI cause than of helping it. If we apply X amount of human intelligence to the problem to try to optimize for helping the Friendly AI cause, that should increase rather than decrease the chance that it helps, moving it firmly into the “more likely to help” territory. I am certainly not saying there’s zero chance that it hurts, just that on balance we can still apply a net positive probability to it helping.

      2. The best (though far from perfect) analogy might be global warming. We got worried about global warming before it was obviously happening, calculated what we had to do to stop that, and made a (feeble) worldwide effort to stop doing that thing.

      3. Not really.

      4. It’s way too easy to run up against Pascalian reasoning here, which kind of throws a spanner in the whole thing. I make the sketch of an argument for why AI research is worthwhile here, but I don’t compare it to humanitarian causes either, and I don’t want to – I think both are valuable and we should pursue both. That having been said, humanitarian causes get about 50,000x the budget of AI research right now, and I would be thrilled if the yearly budget for AI research ever got above the cost to buy a nice apartment in Manhattan.

      • John Schilling says:

        Is it OK for me to let children drown, or push them in myself, if I intend to prevent Hitlers?

        Because I think that’s what’s going on in this whole class of “problems”. You’ve got a decision with a whole bunch of boring, mundane, relatively small consequences – maybe a few lives get saved, no big. Then you introduce exactly one massively improbable outcome with a massively important consequence, out of all the massively improbable but consequential things that could happen, and frame the question entirely in terms of the one improbable-but-consequential thing you’ve chosen because that’s the “intended” consequence and everything else is “unintended”.

        I think the better rule would be skepticism of massively-improbable-consequence arguments regardless of whether the consequence is intended or not.

      • Irenist says:

        I would be thrilled if the yearly budget for AI research ever got above the cost to buy a nice apartment in Manhattan.

        Fair enough. I assume you mean “FAI research” specifically since AFAICT, lots of big corporations and defense ministries and universities and whatnot are pretty interested in various machine-learning projects and such. But if it’s true that non-FAI AI research is actually pretty well funded, that makes me wonder: Does anyone in the LW/MIRI/EA social circles have any sense of what % of AI research not specifically targeted toward FAI (like whatever Google or DARPA are up to, say) is making UFAI more likely, making FAI more likely, or just sort of neutral? Or is it just impossible to tell? I don’t know that this would necessarily affect the EA argument about FAI/MIRI in particular, I’m just sort of curious.

      • BR says:

        Thanks! That’s really helpful. I wonder if a lot of the argument around Matthews’ piece is caused by the fact that he seems to be responding to affect/positioning – the EA people he speaks to REALLY want to talk about X risk, to the exclusion of more near-term humanitarian causes – whereas the concrete proposals people like Scott (and I think Scott is representative here) want to put forward are actually minuscule in terms of funding compared to near-term causes. I don’t actually think Matthews’ approach is wrong – sometimes you should listen to what people spend all their time focused on, not their conscious ranking of topics by importance – but I do think that maybe he and MIRI et al have no substantive disagreement.

      • 27chaos says:

        I don’t think random tiny effects cancel out, I think they push events closer to the mean. That is not necessarily the same thing, depending on how far from the mean your status quo is, and so the difference can be important.

      • Anaxagoras says:

        I feel the unknown-consequences argument is stronger here than in most cases, in large part because we don’t know if we even can improve matters. Suppose we take the super-pessimistic view (note: I do not hold this view) that friendly AI is impossible, that there are many solutions that look like they will produce friendly AI but will in fact produce an unfriendly one, and that any amount of human intelligence applied to AI research will only bring the advent of unfriendly AI closer. Clearly, your donations to MIRI are just hastening the end of the world. Conversely, we could be in a world where if MIRI gets a billion dollars, we get ridiculous buckets of utility.

        Using your analogy of saving the lives of drowning children, the problem is that we don’t know that we aren’t standing outside the Junior Hitler Summer Camp. While it does seem good to rescue children from drowning/grow our scientific knowledge, there are whole rafts of drowning children over in Africa we can save way more efficiently, with an almost absolute guarantee that none of them will grow up to be Hitler.

  30. Carl Shulman says:

    ” In that case, I offer him the following whatever-the-opposite-of-a-gift is: we can predict pretty precisely the yearly chance of a giant asteroid hitting our planet, it’s way more than 10^-67, and the whole x-risk argument applies to it just as well as to AI or anything else. What now?”

    https://en.wikipedia.org/wiki/Impact_event#Frequency_and_risk
    http://www.givewell.org/labs/causes/asteroid-detection

    On the order of 1 in 100 million per year for dinosaur-killers, more frequently for smaller ones. But spending on the order of $100MM was enough to track the biggest asteroids and confirm none happen to be on a near-term collision course. If we had found one, our whole civilization would have mobilized to send out spacecraft to deflect it.

    “no, I can’t justify this, it’s a sanity-preserving exception”

    What about moral pluralism or normative uncertainty?

    http://www.philosophy.utoronto.ca/directory/andrew-sepielli/
    http://commonsenseatheism.com/wp-content/uploads/2014/03/MacAskill-Normative-Uncertainty.pdf
    http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html
    http://plato.stanford.edu/entries/value-pluralism/

  31. Nathan says:

    The issue with sticking together a bunch of different events is that the probabilities of the events may be correlated. For example, the tornado-meteor-terrorist-lottery example is bad, because the terrorist attack should be negatively correlated with the tornado. The terrorist probability is mostly concentrated in high population density major cities, while tornadoes mostly happen in relatively lightly populated parts of central US. Also, what terrorist blows up a building in the middle of a tornado? They want their bombing to be the main news story. I think the tornado-meteor-terrorist-lottery example is actually less likely than 10^-67. (But see [1].)

    Some comments brought up the idea that you can just multiply the probabilities of a bunch of normal events to show that whatever happened yesterday was extremely improbable. This happens because people include a lot of extraneous details as if they are additional independent events, but for a serious computation you want to identify the key elements of the sequence of events that make it interesting, and compute the probability of any sequence containing those key elements. Then you will get reasonable probabilities. This logic probably applies to sequences relating to the development of strong AI. (Sometimes people also neglect the fact that items in their sequences are strongly positively correlated, but that’s usually a smaller effect.)

    That said, I strongly agree with the spirit of the post. Another way to think about such estimates is to instead ask: how much money needs to be invested in a solution to X in order to cut the probability of X down by a factor of 10?

    When I Google “Value of Earth” I see estimates ranging from 10^14 to 10^18 present US dollars. Let’s use 10^18 to be conservative[2]. If you believe that your $1000 donation only decreases the probability of AI extinction by 10^-67, then one of the following must be true:

    (a) The cost of decreasing AI extinction risk by a factor of 10 is reasonable, say less than $10^16. So your donation should decrease AI risk by a factor of 10^-13*10^-1 = 10^-14. Therefore the total probability of AI extinction was only about 10^-53. (A rough version of this arithmetic is sketched in code below.)

    (b) The total probability of AI extinction is nontrivial, say 10^-2. But as your 10^3 donation only took it down by a factor of 10^-65, spending all of the world’s wealth on the problem will still only take the probability down by a factor of 10^-50. So it’s impossible to affect the probability appreciably no matter what we do.

    (c) The marginal return on investment for AI risk is increasing, so the marginal dollar now (after perhaps $10^7 to $10^8 have been invested so far) accomplishes much less than the marginal dollar after, say, $10^10 has been invested. This sounds bizarre. Typically you only have increasing marginal returns at the early stages of an investment.

    All of these are absurd, so we can safely reject the 10^-67.
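
    For concreteness, here is a rough Python sketch of the arithmetic behind option (a). It assumes a purely linear model in which $10^16 buys the full factor-of-10 risk reduction and a $1000 donation buys a proportional share of it; the dollar figures and the 10^-67 effect size are the assumed round numbers from above, not measured quantities.

      # Back-of-envelope for option (a): if cutting AI extinction risk by a factor
      # of 10 costs about $1e16, what total risk is implied by a $1000 donation
      # that only shifts the absolute extinction probability by 1e-67?
      donation = 1e3                 # dollars donated (assumed)
      cost_for_10x_reduction = 1e16  # assumed cost of a full factor-of-10 reduction
      absolute_effect = 1e-67        # claimed absolute change in extinction probability

      # Linear model: the donation buys this fraction of the full reduction.
      share_of_reduction = donation / cost_for_10x_reduction   # = 1e-13

      # A factor-of-10 reduction removes ~90% of the risk, so the donation removes
      # roughly 0.9 * share_of_reduction of whatever the total risk is.
      fraction_of_risk_removed = 0.9 * share_of_reduction      # ~1e-13

      # If that tiny relative change equals 1e-67 in absolute terms, the total
      # risk must have been about:
      implied_total_risk = absolute_effect / fraction_of_risk_removed
      print(f"implied total AI extinction risk ~ {implied_total_risk:.0e}")
      # -> ~1e-54, within an order of magnitude of the 10^-53 above (the gap is
      #    just how the factor of 10 is accounted for); either way, absurdly small.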

    To me it feels like estimating the total cost (in present dollars) of achieving a particular goal and then valuing your donation based on your contribution to the total cost is a better approach than trying to directly guess the effect of your donation on the outcome of the event.

    Also note that this approach lends itself to more detailed analysis, e.g. you can consider the probability of bad things happening as a function of time, and then estimate how much earlier we will accomplish the goal with your donation than without it.

    [1] On the other hand, if that many people vote for Trump, surely this would incite the wrath of God, so the probability of tornadoes and meteors would be much higher.

    [2] Disclaimer: I’m not sure even the high estimate of a present value of 10^18 USD is consistent with Bostrom-type estimates of 10^54 human lives over the rest of existence. But I’ve already spent enough time on this post and won’t address that issue.

  32. Ilana says:

    What I find creepy is not so much the conclusion that X-risk is a super-good use of EA dollars, but the premise of timeless utilitarianism. (Of course, the former follows from the latter.) I think it is reasonable to assign way more value to existing lives (and avoidance of possible suffering) than to potential lives (which, if they never come to be, cannot suffer).

    It is less reasonable but sanity-preserving to value avoidance of possible human suffering over avoidance of possible animal suffering, so I do that while acknowledging its unreasonableness.

    (I fully expect the EY crowd to start yelling at me about why timeless decision theory is the one true path.)

    • jaimeastorga2000 says:

      (I fully expect the EY crowd to start yelling at me about why timeless decision theory is the one true path.)

      Doesn’t apply here, as far as I can tell. The actual argument I have heard is that valuing lives more which are closer to you in time is as arbitrary as valuing lives more which are closer to you in space (the latter, of course, being how the effective altruist crowd frames donating to people in your country when you can help more foreigners for the same amount of cash).

      • MicaiahC says:

        This seems absurd; you have much less certainty regarding your influence on someone separated from you far in time. Of course, ideally you really should have time and space invariant morality, but the interventions and decisions made should be based on certainty of impact as well as the size.

        Was this brought up, if so, what was the response?

        • Dust says:

          Was this brought up, if so, what was the response?

          Well, are we talking about beliefs or values? If we’re talking about values – trying to figure out what we would like to achieve – then it seems wrong to change my values just because they don’t seem very achievable. That would have you making mistakes like deciding that slavery is morally OK back in the 1700s when abolition didn’t seem very achievable.

          If we’re talking about beliefs – of course it’s harder to be confident about beliefs regarding the impact of interventions targeting the far future, and I doubt many would deny this. The problem is that if our values suggest that the vast majority of what we value is in the far future, we are forced to confront head-on the difficult question of how to reliably have a positive impact on the far future.

    • Dust says:

      My understanding of “timeless decision theory” is that it’s a concept totally unrelated to the question of accounting for future value… I think you have your terminology mixed up.

      I think it is reasonable to assign way more value to existing lives (and avoidance of possible suffering) than to potential lives (which, if they never come to be, cannot suffer).

      This sounds like a negative utilitarian leaning position? I’m somewhat sympathetic to negative utilitarianism, but you can read a critique here if you want. Anyway, my impression is that a good number of EAs have the position that preventing future suffering is more important than bringing in future happiness, and some (Brian Tomasik?) have donated money to x-risk organizations like MIRI after thinking about this.

    • Scott Alexander says:

      1. First, I’m no expert on this myself, but I don’t think this is how people are using the word “timeless decision theory” – although I see where the idea might come from since this clearly involves ignoring time.

      2. You might be interested in Eliezer and Robin Hanson’s debate Against Discount Rates vs. For Discount Rates.

      3. It’s really really hard to come up with a discount rate such that 10^whatever people in the far future don’t outweigh 8 billion people now, without accidentally encoding an assumption like “if an asteroid will hit the earth next week we shouldn’t care, because next week is so heavily discounted from the present”.

      4. In fact, it is sufficiently hard that I try not to reason with this kind of population ethics at all. I was first of all just following Matthews’ lead, and second of all I do think there’s some kind of intuitive sense of “having this incredibly glorious future is something really worth protecting”. To put it another way, I would much rather the human race not suddenly go extinct 200 years from now, even though nobody now alive would be harmed, and that intuition seems sufficient to reintroduce concern for all these far future things.

      • Froolow says:

        With respect to number 3, I’m not sure that’s true. A commonly accepted discount rate is 3.5%, which means (if the assumptions behind discounting costs apply to population utility) that I would be indifferent between saving 100 lives now or 103.5 lives in one year’s time. It’s also a very weak discount rate – some people think it should be as high as 6%, and some people *act* as though they have a much stronger time preference than even 6%.

        Using this 3.5% model, I would be indifferent between saving 10^52 lives in 3361 years vs a single life this year, or 10^52 lives in 2715 years vs the whole population of the Earth now (10^10 people). Anything that happens in – say – 5000 years is completely irrelevant from the point of view of my (discounted) utility function.
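
        As a quick check of that arithmetic, here is a minimal Python sketch assuming annual compounding at exactly 3.5%. It lands in the same few-thousand-year ballpark as the figures above; the exact year counts shift a little with different compounding or rounding conventions, but the qualitative point is unchanged.

          import math

          # At a 3.5% annual discount rate, how far out does a payoff of 10^52 lives
          # discount down to the present value of 1 life, or of the roughly 10^10
          # people alive today? Solve (1 + r)^t = future / present for t.
          r = 0.035  # assumed annual discount rate

          def years_to_discount(future_value, present_value):
              return math.log(future_value / present_value) / math.log(1 + r)

          print(years_to_discount(1e52, 1))     # ~3,480 years vs. a single life now
          print(years_to_discount(1e52, 1e10))  # ~2,810 years vs. everyone alive today
          # With this discount rate, anything more than a few thousand years out
          # contributes essentially nothing to (discounted) expected utility.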

        • Matthew O says:

          This.

          Think about sleazy pawn shops or title loan places that will, say, give people a $1000 loan that comes with a 20% weekly interest rate or something ridiculous so that they can go buy a flatscreen TV.

          And the crazy thing is, there are poor people who gladly purchase these services. People who would rather be a $200/week debt slave for life to this title loan company than postpone their enjoyment of the flatscreen TV for five weeks to save up the $1000 to purchase it outright. What kind of annual discount rate is that?
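
          To put a rough number on that rhetorical question, here is the compounding worked out in Python, taking the 20%-per-week figure above at face value (a sketch that ignores fees and repayment schedules):

            # Effective annual rate implied by 20% interest compounded weekly.
            weekly_rate = 0.20
            annual_factor = (1 + weekly_rate) ** 52
            print(f"{annual_factor:,.0f}x growth per year")        # ~13,100x
            print(f"~{annual_factor - 1:,.0%} effective annual rate")
            # -> an effective annual rate on the order of 1,300,000%.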

          I mean, that’s not my discount rate. I’m just saying, there are many people out there who have steeper discount rates than you would suspect. I’d imagine 6%/year is lowballing it.

          So yes, if someone found that an asteroid was going to kill a billion people a thousand years from now, I don’t doubt for a minute that the vast majority would feign concern for about a week of news-cycle stories about it, and then put it on the back burner. And people would rationalize their discounting with FUD (fear, uncertainty, doubt) about the reality of the threat.

          Think about how people already do that with climate change, which could make millions of people miserable within this century.

          Only when the asteroid was like 50 years away would people really start to do something about it.

      • Ilana says:

        Let this be my blanket apology to everyone who pointed out my misuse of the term; you’re right.

        I think I totally lack the sense you describe of “having this incredibly glorious future is something really worth protecting”. I mean, I’d also rather not have the human race suddenly go extinct if it means a lot of pain for whoever’s alive then, but I don’t really care if we just fade away.

    • Peter says:

      I’m not the EY crowd, I don’t follow Less Wrong, but something like TDT may indeed be the way forward here, for suitably broad values of “like”. As far as I can tell, people are correct in saying that the “timeless” in TDT is a different thing.

      Basically TDT is roughly in the same headspace as Kantianism; you imagine making decisions based on principles existing in some timeless Platonic realm, on the assumption that whatever principles you use, other people (or your future or even past self) will also use those principles for both making their own decisions, and for understanding, predicting and reacting to yours. If you squint quite a lot, it kinda looks like the Golden Rule.

      I’m a big Golden Rule fan; despite all sorts of philosophical quibbles it seems to have quite a lot of psychological oomph, is reasonably easy to understand and apply to a range of different situations, and in general is broad enough to make me say, “yeah, morality isn’t just some random jumble of arbitrary stipulations, there’s an underlying order to it, possibly a gloriously simple one – also, actually a thing and not just a pile of stuff society arbitrarily threw together”. My inner pedantic philosopher doesn’t think the Golden Rule is the final answer, but it points towards answers, and as a tool in your moral toolbox it should have a pretty big place.

      So how do we cash out something like the Golden Rule in terms of charity?

  33. Anonymous says:

    This is just the same bullshit as the Drake equation. When you start with a big enough number, you can multiply it by very small numbers and still get a big number at the end; provided that you are pulling the coefficients out of your ass and disqualifying the ones that are too small, everything will always work the way you want.
    In fact, I wonder why effective altruists aren’t worried about an alien invasion. I mean, just look at how many intelligent life forms the Drake equation predicts: if only 0.01% of those have mastered wormhole space travel and only 1% of those are natural murderers, it should be just a matter of days until 10^54 potential future people are slaughtered by evil aliens.

    • chaosmage says:

      The two main differences between managing extraterrestrial threat and AI threat are that astronomy already gets significant funding and the information it can gather isn’t very actionable.

  34. AngryDrake says:

    Or that your charity dollar would be better sent off to sub-Saharan Africa to purchase something called “praziquantel” than given to the sad-looking man with the cardboard sign you see on the way to work.

    Or that a person who wants to reduce suffering in the world should focus almost obsessively on chickens.

    I have a rule of thumb for charity.

    Imagine two situations.

    One: You donate to some far-off charity, on the assumption that they’ll do some good, for someone, somewhere, somehow. Neither you nor anybody you know personally will benefit from it.

    Two: You give some money to a bum you pass on the street every day. Not exactly an acquaintance, but someone you recognize by sight, and interact with.

    Now imagine that something goes horribly wrong as a result of your efforts. Perhaps your donation actually went to a local warlord, through a series of bureaucratic fuck-ups and corruption, and he used it to genocide his enemies. Perhaps the bum you gave money to used it to buy heroin, and overdosed, expiring in his usual spot.

    Which of those two situations yields greater guilt over having, unwittingly, done wrong? For the vast majority of people, it would be the latter. The first situation would likely generate a shrug and mutter along the lines of “at least I tried”. We, by and large, are not equipped to feel anything much about far-off people and events with no relation to ourselves.

    If you aren’t the sort of person to feel as much remorse for your charity having accomplished evil as in the second situation, DON’T DO IT, because your help is likely not charity, it’s status signalling and/or conscience-salving, and you have no investment in following up and making sure that you’re not actually making things worse.

    • Pku says:

      In my case it’s definitely at least partly conscience-salving, but why does this matter? If a kid in Africa gets a deworming pill because I donated the money for it, then finds out I don’t even know he exists, will he really be bummed out enough to go look for a new worm infection?
      (Or, on a more sincere note: Charity might be done because of your own less-noble feelings, but if, to the best of your judgement, it’ll have positive effects, why is that a problem? In the end, it’s not really about you, even if you’re just thinking of yourself when you do it).

      • CJB says:

        But the problem is that charity requires more than money, or research. Charity requires other actions, that are often uncharitable.

        I call it the Hungry Mr. Kim problem: Mr. Kim is starving. Wouldn’t you like to feed him? Yes! Me too!

        Why, then, is he still hungry? Because he lives in Pyongyang. His problem isn’t “hunger,” it’s “regime.” Trying to attack the direct problem just makes the real problem worse – sending more food to Pyongyang makes the regime stronger.

        Want to help Haiti? Our doctors can’t do shit. Their doctors haven’t really done shit in the past. What has helped Haiti? Being controlled by the US Marines – the place was a friggin’ tropical paradise. Evidence suggests that “helping Haiti” starts with “conquering Haiti.”

        I would not be surprised at all to find in five years that places where effective altruists have been are considered prime territory for kidnapping child soldiers and slaves. You can guarantee a stronger, healthier stock. Because the problem that children in the DRC have isn’t “worms”. It’s “living in the DRC”. Anything short of changing the DRC is a small bandaid on a big wound.

        Colonialism: The most effective altruism of all.

      • AngryDrake says:

        Because if you do it for reasons other than charity, then you have no reason not to optimize for the amount of goodfeels/status that it brings you, as opposed to actually doing good. Without a visceral, close relation between your input and the output, there’s no natural process to disincentivize bad results and incentivize the good.

        (Deworming is, BTW, a bad example of doing good. The solution to worms is not pharmaceuticals, which breed hardier worms, but breaking the worms’ lifecycle through provision and imposition of hygiene – indoor plumbing and shoes.)

    • Scott Alexander says:

      Are you allowed to feel guilt for the couple hundred people who died in Africa while you gave the money to the bum, or does only a very specific form of guilt count?

      • AngryDrake says:

        You are, of course, allowed to feel guilty… but I don’t think such a guilt is sane, if it becomes such an issue that you would forego your greater obligations to your friends, neighbours and family, just to salve it. Aiding complete strangers is good, but it is also supererogatory. Focusing one’s efforts on complete strangers rather than those nearby is a failure of charity.

        • houseboatonstyx says:

          @ AngryDrake
          You are, of course, allowed to feel guilty… but I don’t think such a guilt is sane, if it becomes such an issue that you would forego your greater obligations to your friends, neighbours and family, just to salve it. Aiding complete strangers is good, but it is also supererogatory. Focusing one’s efforts on complete strangers rather than those nearby is a failure of charity.

          Yes. Taking care of one’s own children is not just a tribe-survival thing, it’s a mammal and an animal thing.*

          * Though perhaps not all the way back to the first primordial globule.

  35. Mark says:

    Arguing at the fringe of probability is problematic. Let’s say that if some mathematical fact were true, it would be useful in the same way stated above – like you could save 10^54 lives or whatever. You can assign a probability to it being true. But if the statement is false, you will be giving it infinitely higher probability than you should.

    Not sure if that makes sense.

    • ton says:

      Why only mathematical facts? Any judgement using probability is infinitely wrong, were you to have infinitely more information.

  36. CJB says:

    The problem with utilitarianism and super-large numbers is that people only ever use it to justify cute little things. Also, I hate the use of scientific notation for massive numbers to make ass-pulled numbers sound More Sciencey.

    So over the next quinjillion years, an expected beliebertibillion humans will live, of whom seventy-five fuckagoos will have 75 QALYs per annum.

    Ergo, send me 100 bucks.

    Like, c’mon man. How about “I’m an astronomer searching for civilization-ending asteroids. I’m better at searching when I’m content. Since a 1% more effective astronomer with a telescope and a dream raises the chances of Saving the Future by a chrisfarleyth of a percent, I should be allowed to beat hookers to death, as long as none of the hookers own telescopes.”

    If you’re going to go out on the Limb of Big Numbers, go all the way. How about this real world situation-

    One of the most important materials in just about any Tech is tantalum, extracted from coltan. Lowering the (high) price of coltan would, presumably, drive research into New Tech. The quickest way to do that is to increase the number of workers extracting coltan, as the largest deposits are worked by low-cost slaves.

    So therefore, if you want to save the future, one of the more morally effective options is to sell slave children to coltan miners in the Congo. It costs fewer lives and less production than a war. They have a low expected number of QALYs, being already in the Congo. Increased Tech research will benefit billions now and shitjillions in the future.

    Ergo slavetrading is more moral than not donating money to astronomy research.

    Essentially, we’ve created a Utility Monster of future ghosts, without even the benefit of knowing what use this will actually be. At least while we’re eating 2,000 calories of NutriGruel before going off to the lead mines, we can look over and see that yes, the Utility monster is enjoying his resources 100 scintillion times more than all of us combined.

    Which is, as noted above, the problem with utilitarianism – it breaks down with Really Big Numbers. Which is fine, actually. No one ever is going to have to solve the problem of whether to torture 1 person to death or allow 3^3^3^3^33^33^43-2 people to get a dust speck in their eyes. If your model only breaks down under numbers that are essentially incoherent, then unless you’re doing quantum physics, your model is perfect for human uses.

    • Tom Womack says:

      Why does everyone keep mentioning tantalum? If we managed to stop the warlords in the Congo from selling tantalum, there would not be the slightest obstruction to technology development: people making capacitors would buy the tantalum from Western Australia instead, the capacitors might be a few cents more expensive and the phones a few dollars more expensive but the awesomeness of having a phone lets you ignore the few dollars.

      • CJB says:

        Last time I checked, the vast majority of easy-to-extract tantalum was in the Congo, which is why we keep getting it from the Congo.

        But fine. Pick any one of a number of other minerals. My point is that when you pick “Giant exponent” as your basis of morality, it doesn’t just justify giving ten bucks to SETI as opposed to the Red Cross.

        It justifies literally ANY action, no matter how seemingly immoral, that might have a tiny chance of Saving The Future.

        That’s why the argument is meaningless. An argument that equally justifies a small donation and mass enslavement is an argument that has NO MORAL USE.

  37. Implicit in the argument we are discussing is the idea that the maximand is total utility. It’s worth thinking a little about what that means and what its implications are.

    You need cardinal utility, which means VN (von Neumann–Morgenstern) utility. You are taking the utility of non-existence as zero. So the statement “life A has twice the utility of life B” means “I would be indifferent between a certainty of life B and a coin flip that gives me life A if the coin comes up heads, painless death if it comes up tails.”

    Now imagine you could somehow convert the U.S. population into the Indian population–four times as many people, living at Indian standards of living. Unless you believe that the average Indian would be willing to accept a .75 chance of death in order to get a .25 chance of the life of an average American, you should conclude that the conversion increases total utility.
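
    Spelling the comparison out as a toy calculation (the utilities and the 0.30 indifference probability below are made-up illustrative numbers; the only thing doing any work is whether that probability is above or below 0.25):

      # Total-utility arithmetic behind the US -> India thought experiment.
      # VN utilities with non-existence normalized to 0; all numbers illustrative.
      U_american = 1.0       # utility of an average American life, arbitrary scale
      p_indifference = 0.30  # assumed: the average Indian is indifferent between
                             # their life for sure and a 30% shot at an American
                             # life (i.e. a 70% chance of painless death)
      U_indian = p_indifference * U_american  # what VN cardinal utility means here

      N = 320e6  # rough US population
      total_before = N * U_american   # current population at American utility
      total_after = 4 * N * U_indian  # four times as many people at Indian utility

      print(total_after > total_before)  # True whenever p_indifference > 0.25
      # So unless the average Indian would accept a 75% chance of death for a 25%
      # shot at an American life, the conversion increases total utility.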

    If that conclusion seems wrong to you, you might want to rethink total utility as a maximand.

    If you’re sufficiently curious about where this line of argument might lead, you might be interested in an old chapter of mine. Unfortunately it’s not webbed:

    “What Does Optimum Population Mean?” Research in Population Economics, Vol. III (1981), Eds. Simon and Lindert.

    • Jon Gunnarsson says:

      I think it’s plausible that quite a number of Indians would be willing to accept a 75% chance of death for a 25% chance at an American standard of living. Whether the average Indian (whatever that means, exactly) would accept that gamble I don’t know. What I do know is that millions of people from poor countries accept a significant risk of death to illegally emigrate to rich countries. Of course their chance of dying in this endeavour is less than 75%, but then there is also the chance of being sent back, and even if they are successful, their standard of living will still be much lower than that of the average European or American.

  38. Peter says:

    I’ve mentioned before that I ID as a “not-quite EA”. I also “lean towards” utilitarianism, but with lots of the indirect stuff.

    J. S. Mill says “Duty is a thing which may be exacted from a person, as one exacts a debt” – the broader idea of utilitarian good also covers Expediency and Worthiness, but Duty is a special region of it with a character of its own.

    The EA “deal”, as I see it, is not as demanding as naive versions of utilitarianism – see the oft-cited “demandingness objection” – which people who like the utilitarian idea but can see the problem have various ways around. The EA deal – or a version of it – says, roughly, “expose 10% of your income to the cold winds of unfettered utilitarianism and you can use the other 90% (and also your non-financial resources) as you please”, where “you” includes the rest of your conscience. The challenging-but-doable nature of the 10% deal I think is one of the reasons for the popularity of the movement.

    So conscience – and something that might be a part of my conscience, or something similar, something that doesn’t punish with guilt, but with existential despair, “what the hell am I doing here? is my existence, my existence in particular, pointless?”[1] – seem to have some demands on the rest of me. From the point of view of the rest of me, fulfilling the EA deal, or some bodged-together mutation of it that lets me spend some of my 10% on closer-to-home stuff – seems to satisfy the conscience, and lets me spend some of the 90% on consumer electronics, optical goods, expensive coffee, etc.. At least it does when the cold winds seem to be blowing in the direction of sending money to Africa.

    I don’t actually think the cold winds are blowing in the direction of long shots to do with x-risk, but if they were, it feels like the deal would stop being satisfying.

    My “spend some of my 10% closer to home” covers a variety of concerns – some of it to autism charities, some to Cambridge homeless charities, for instance. That last one is something where my conscience seems to be really quite potent at exacting something from the rest of me. So what feels like my duty – well, my duties – seems to be a collection of… not exactly spheres of concern so much as ellipsoids, of greater and lesser sizes, pointing in various directions, often with me not being at the centre of them (maybe at one of the foci?). And it sort-of feels like all of those duties need to be protected from each other, as well as from the optical goods and expensive coffee.

    I’m not sure how to formalise this. Written out like this it all seems like a blatant exercise in rationalisation, but then again a lot of moral philosophy seems a lot like that.

    [1] This latter one – yes, you can consider philosophical problems in the abstract, and it feels like we’re a long way off from having neat answers that you can write in a textbook – well, neat ones that are actually right. But sometimes those questions feel like they have bite, and sometimes they don’t. I think that finding some meaning in life, some bigger picture than my day-to-day concerns, not necessarily the biggest picture imaginable but something nice and large, makes the bite more-or-less go away, and the big questions become things that can be safely left for future generations to come up with the neat answers to.

  39. Dust says:

    BTW you snuck a few more probabilities into your argument chain: the chance that the newspaper would drift onto his face, with the relevant info, and the chance that he would actually play the lottery twice. Just sayin’. Sorry, couldn’t help myself.

  40. vV_Vv says:

    Well, actually, we do know. It’s probably not the 10^-67 one, because nothing is ever 10^-67 and you should never use that number.

    Roll 67 d10 dice: what is the probability of getting 67 ones?

    Well, the per-second probability of getting sucked into the air by a tornado is 10^-12; that of being struck by a meteorite 10^-16; that of being blown up by a terrorist 10^-15. The chance of the next election being Sanders vs. Trump is 10^-4, and the chance of an election ending in an electoral tie about 10^-2. The chance of winning the Powerball is 10^-8 so winning it twice in a row is 10^-16. Chain all of those together, and you get 10^-65. On the other hand, Matthews thinks it’s perfectly reasonable to throw out numbers like 10^-67 when talking about the effect of x-risk donations. To take that number seriously is to assert that the second scenario is one hundred times more likely than the first!

    What is your point? Yes, some weird and flashy scenarios have a probability of 10^-67; however, not all scenarios that have a probability of 10^-67 are weird and flashy.

    Consider:
    “A tornado roars into Alice’s house. She is sucked high into the air. Bob is struck by a meteorite. Carol’s house is blown up by Al Qaeda. An electoral tie between Dan and Erin happens in Whateverland. Frank wins the Powerball lottery, while Mark just won the same lottery last week.”

    The probability is the same, but this scenario doesn’t look so weird, does it? Improbable events happen all the time, we just can’t predict them in advance.
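
    Both numbers are easy to verify; this is just the straight multiplication of the per-event figures quoted above, with no new assumptions:

      import math

      # Probability of rolling 67 ones on 67 ten-sided dice.
      print(f"{(1 / 10) ** 67:.0e}")     # 1e-67

      # Chaining the per-event probabilities quoted from the post: tornado,
      # meteorite, terrorist bombing, Sanders-vs-Trump election, electoral tie,
      # and winning the Powerball twice in a row.
      events = [1e-12, 1e-16, 1e-15, 1e-4, 1e-2, 1e-8, 1e-8]
      print(f"{math.prod(events):.0e}")  # 1e-65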

    Anyway, I don’t get the gist of your argument.

    Matthews argues that since we can’t accurately estimate small probabilities, we should round them off to zero when doing expected utility computations.
    You seem to argue that we should round small probabilities off to some not-so-small positive value, and then multiply them by some arbitrarily high estimated utility. This obviously results in falling for Pascal’s muggings.

    For a boundedly rational agent (such as a human) Matthews’ position seems much more reasonable than yours.

    • Yeah, this argument was used to justify not turning on the LHC because of the tiny probability of a black hole forming multiplied by the expected utility of the entire human race… or something like that.

    • Martin-2 says:

      “The probability is the same, but this scenario doesn’t look so weird, does it?”

      Actually yes it does. Remember, it’s only the same probability if you’re predicting that those things happen to those specific people. What would really look less weird is, “A tornado roars into someone’s house; someone else is hit by a meteorite; someone else is blown up…” but this scenario is billions of billions of billions of billions of billions of billions of times more likely. Consider this Feynman quote:

      “You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won’t believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!”

      You’re making a similar error.

      • vV_Vv says:

        Remember, it’s only the same probability if you’re predicting that those things happen to those specific people.

        Draw seven random people with replacement. Substitute their (uniquely identifying) names for the placeholder names that I used in my statement. The resulting statement has the same probability as Scott’s.

        You’re making a similar error.

        What error?

        Scott claims that events with small probability essentially never happen, and he came up with a rhetorical example of a wacky low-probability scenario to attempt an argument from ridicule.

        I’ve shown that low-probability events happen all the time and most of them are not wacky at all.

        • Martin-2 says:

          I agree with your math. My objection to your scenario is this: even if you tell us to imagine the probability of those events befalling that particular unspecial set of people, what we intuitively calculate is the probability of those events befalling any set of unspecial people. That’s why it doesn’t feel so improbable.

          You’re saying that Scott’s scenario is deceptive because it’s too wacky? Like, the imagery of a guy being flung from a tornado into a meteorite into a terrorist attack is too silly and screws up our calculation? I don’t buy it.

  41. Rick Hull says:

    .

  42. David Moss says:

    “He writes that he’s worried that a focus on existential risk will detract from the causes he really cares about, like animal rights…
    And yet, the same arguments he deploys against existential risk could be leveled against him also – “how can you worry about chickens when there are millions of families trying to get by on minimum wage? Effective altruists need to stop talking about animals if they ever want to attract anybody besides white males into the movement.” What then?”

    I think this is one of those cases where they don’t mind the tactic/policy because even though it could equally justly be turned against them, they are confident it won’t be. (cf. the serving only vegan food at EAG controversy)

  43. It’s sort of ironic that you should say nothing is ever 10^-67 and you should never use that number, since in the original Pascal’s Mugging post Eliezer said much the same thing. I had a pretty good reply to that: the unlikeliness of someone extorting you tends to grow with the negative consequences past a certain point. But I don’t have a good probability response here. It’s more that as time goes on I expect humanity to diverge from what I am now, and so to weigh less on me morally vis-à-vis unfriendly AIs.

  44. Dominik Peters says:

    Or that a person who wants to reduce suffering in the world should focus almost obsessively on chickens.

    Does anyone have a link to someone making that argument?

  45. Anonymous says:

    It’s not effectiveness I disagree with, it’s (utilitarian universalist) altruism. AFAICT effective altruists are not even maximizing their own utility functions – or rather the actual utility value they can account for is from the social and self-signalling benefits, not from having X more Africans alive next year or whatever. Or perhaps the EA line of thinking is due to the media giving people a distorted model of the world where distant people and events that will rarely or never intersect with one’s own community loom larger in the mind than they otherwise would.

    I’d guess that to start with the concept of utility and get actual coherent ethics that is reasonably in line with intuitions, you need to at least discount utility by social distance from oneself (that distance being some kind of metric combining distance in time, space, and similarity). Due to the internet this is a bit more of a global concept than it used to be, but poor Africans are still pretty far from rich Americans.

    edit: FWIW I suspect the only truly long-run consistent systems are either robust-wireheading-as-long-term-goal or destroy-the-universe-to-prevent-suffering, though I haven’t embraced either in practice just yet…

    • Deiseach says:

      I think we can reasonably make plans about things we do that will have effects in ten years or twenty years time. I think planning about “if we do X, Y will happen in fifty years” is pushing it. I think “Donate NOW to make sure in one hundred years’ time 10^52 human life-years will come into being!” is being incredibly optimistic about the capacity to dictate and control action that long in the future.

      “The best laid plans of mice and men gang aft a-gley” and there are lots of law cases to break trusts and the like where testators thought they were setting up really long-term conditions for what they wanted done.

      Think of all the people who established chantries to have Masses said after their deaths for their souls in Purgatory, and expected those to last pretty nearly in perpetuity. Then along came Henry VIII, disestablished the monasteries, and grabbed all the monies for his spending on continental wars 🙂

  46. DavidS says:

    Somewhat off-topic, but this debate reminds me of a fantasy or sci-fi short story I think I once read when I was a small kid which involved people from different cultures meeting for some sort of tournament. One of the competitors was from a culture which worked on the premise that the long-term, unpredictable or otherwise ‘far’ effects of what you did were so mired in speculation as to be worth ignoring, and as a result the people of this culture were incredibly conscientious/moral in terms of helping those immediately around them in direct, concrete ways but never did anything to hopefully, ultimately contribute to some sort of long-term goal.

    My memories of this story are simultaneously incredibly vague and massively vivid, which makes me think it might have actually been a rather odd dream. Anyone recognise the concept/plot at all?

  47. Jordan D. says:

    Two things:

    1) This is a good point. I propose a general rule: if your exponent is so large that it treats all the joys and sorrows in the history of the universe as a rounding error, nobody should be comfortable using it for anything.*

    2) The Foundation to Stop The Bernie-Trump Al-Qaeda Double-Lottery Tornado Meteor is seeking donations now! With your support, we pledge to develop an effective deterrent to at least some parts of the impending apocalypse!

    *Which isn’t to say that nobody should ever use such numbers, but I’d appreciate it if they’d wince a bit.

  48. WT says:

    I’ll throw out a number that is even smaller than 10^-67: Zero. What if donating to AI x-risk projects right now has ZERO chance of affecting these 10^52 future lives? Or worse, what if donating to AI x-risk projects right now actually has a risk of inadvertently making things *worse* for the future? It’s not as if every project is a sure thing at having the exact effect that is intended.

    Your reasoning doesn’t make sense. By your reasoning, even if we see a meteor that is literally one mile away and about to make impact with 100% probability, we can’t say there’s a zero chance that shifting my rear end in my chair will alter the Earth’s course enough to avoid the meteor, because zero is a really small number or something.

    Anyway, Dylan’s point seems unanswerable: if the argument begins by totally stacking the deck with 10^52 (as yet imaginary) future lives, all of whose very existence we are supposed to care about preserving for some unexplained reason, then that would justify doing literally millions upon millions of the most ridiculous and harebrained things in the present that hypothetically have a greater than 1/10^52 chance of helping.

  49. Max says:

    If you want to do something long term, plant a tree. Don’t try to predict the evolution of human society. That only works in fiction.

  50. Ralph Hartley says:

    “nothing is ever 10^-67 and you should never use that number”

    By the same reasoning, nothing is ever 10^54 and you should never use that number either. It is by no means reasonable to give an estimate like that “a mere 1% chance of being correct.” 1% is many orders of magnitude too high. Numbers like that are never close to that certain.

    My estimate of the probability that humans will not go extinct in the next 10^9 years is so close to 0 it is hard to measure. More likely than me living that long, but not by much. The improbable chain of events ending with the lottery tickets seems much more likely. The chances of there ever being 10^52 people is much smaller still. Much less than 10^-52.

    Given that, even if MIRI can reduce the probability of extinction by half, that wouldn’t really be worth much. It would still be valuable to postpone extinction.

    When I do it I’m making a ballpark estimate, when you do it you’re making numbers up.

  51. SUT says:

    There’s also the critique of pulling-numbers-out-yo-ass in Michael Crichton’s “Aliens Cause Global Warming”, which deals with the guesstimating of the Drake Equation – arguably even less ad hoc than this, since modern science has put billions of dollars into estimating some of its parameters (e.g. ne = number of planets around each star that can support life). My takeaway is basically that you can soundly model and parameterize a system with deep uncertainty, but you’re only fooling yourself if you think you’re anywhere closer to the answer after that exercise.

    And on the subject, there is one good time to use 10^-67: as a probability that life develops on a planet (in the case where we keep looking and find ourselves alone). In other words, the probability of existential *fortune*.

  52. Not That Scott says:

    Very off-topic, but this article being approvingly cited by many effective altruists as a great and correct critique of Effective Altruism, despite being a mishmash of weak arguments and whinging about diversity, has shifted my probability distribution away from “EA is going to need to deal with the usual goal drift / goal drag issue that all progressive causes deal with: namely, why isn’t it about feminism and racism and diversity and equality?”.

    Unfortunately, most of that shift is into “EA has failed to protect itself, it’s now too late”.

    • jaimeastorga2000 says:

      EA is going to need to deal with the usual goal drift / goal drag issue that all progressive causes deal with: namely, why isn’t it about feminism and racism and diversity and equality?

      This is not just a problem with progressive causes; every institution which is not explicitly right-wing has a mysterious propensity to magically drift leftward. Proposed explanations include entropy, entryism, and Phariseeism. See “Conquest’s Second Law” and “Social Justice, Ideological Hijackings, and Ideological Security”.

      • John Schilling says:

        But, but, but…

        Slate Star Codex is not explicitly right-wing, and I am repeatedly assured it is drifting in a direction that is Not Left.

        1/2 🙂

  53. TheAncientGeek says:

    Has anyone done an actual calculation of UFAI risk?

  54. Troy says:

    It’s probably not the 10^-67 one, because nothing is ever 10^-67 and you should never use that number.

    I only really see probabilities that low come up in fine-tuning in physics (I cited Roger Penrose’s 1/10^(10^123) for the probability of the low entropy of the initial universe by chance in the last thread). And when these numbers are used to support theism, I have frequently seen atheists assign even lower priors to theism. This strikes me as about as (un)reasonable as Matthews’ assigning a probability of 10^-67 to an anti-existential risk plan making a difference.

    I do like the combination of unlikely events illustration of how low this probability is; I will keep it in mind when discussing fine-tuning in the future.

    • Peter says:

      Oh, they come up all the time in Natural Language Processing; probabilities so small you have to tell the computer to keep track of probabilities in logs because IEEE doubles can’t get that tiny. Take this comment for example. 381 characters – the information content of English is estimated at 1 bit per character, so it works out as a probability of 2^-381 – about 10^-115 or so.
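      (A minimal Python sketch of the log-space bookkeeping described above, assuming a made-up one-bit-per-character model: the raw product underflows an IEEE double for long enough texts, while the log stays easily representable.)

        import math

        # Toy model: ~1 bit of information per character, i.e. each character
        # has probability 0.5.
        p_char = 0.5

        print(p_char ** 381)     # ~1e-115: tiny, but still fits in a double
        print(p_char ** 2000)    # underflows the double range (~1e-308) to exactly 0.0

        # Tracking the probability in log space sidesteps the underflow.
        log10_prob = 2000 * math.log10(p_char)
        print(log10_prob)        # ~ -602, i.e. the probability is about 10^-602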

      • Troy says:

        Fair point — I can see how you could easily get numbers this low in applications of information theory.

    • James Picone says:

      The principled argument is that God is an extraordinarily complicated hypothesis by any sensible measure of complexity. Much more complicated than ‘the universe happened to have that set of parameters’. The argument is that you’re fine-tuning even harder. It’s not quite as arbitrary as you’re implying.

      • ton says:

        That’s precisely the assigning of low priors that they’re talking about. If you refuse on principle to assign a sufficiently low prior to any non-contrived statement, then you shouldn’t for God either.

      • Troy says:

        This is the usual rationale given. I don’t find it persuasive. All you’re “fine-tuning” for is something fairly simple, namely the classical attributes of God (omnipotence, omniscience, perfect goodness, etc.). It’s not clear that these are independent of each other — Richard Swinburne for example has argued that some of these attributes imply the others — and so the hypothesis may be even simpler than that. And these attributes are much simpler than other attributes a deity might have — it is much simpler that God would know everything, for example, than that he would know everything except what hospital Scott works at.

        Moreover, the fine-tuning likelihoods are not measures of complexity. Neither the universe nor our conception of it becomes more complex when we discover that another parameter needed to be fine tuned in order for life to exist. So it seems very ad hoc to suppose that whatever measure of complexity you’re using is just going to happen to have God always having a lower prior than the likelihood ratio implied by the fine-tuning facts, when that ratio just keeps getting smaller and smaller the more of the physics we discover.

        Finally, I think examples of incredibly improbable events illustrate the absurdity of a prior for theism that low on an intuitive level. Even if you’ve got some philosophical rationale for assigning an outrageously low prior to theism, if that prior implies that were you to go outside at night and see the stars rearrange themselves in the heavens to form the words of the Nicene Creed in Koine Greek, you should still not have a very high posterior in theism, that prior is wrong.

        • DavidS says:

          I sympathise with a lot of this – I don’t think my actual implicit prior for a consistent-type God is that low, as opposed to my prior for an arbitrarily complex one. I think the issue here is whether there’s a neutral definition of ‘simplicity’ here that can become a prior. I think saying ‘I’ve defined God’s attributes to be completely united, so he’s the simplest thing ever’ is cheating, but then I also found the God Delusion’s approach (which seemed to be ‘what’s the chance of God’s powers emerging from a random combination of atoms floating around’) to basically beg the question. I’m not sure there’s a ‘metaphysics-neutral’ way of addressing this. I don’t think people disbelieve in God because ‘there isn’t any evidence’; rather, they are implicitly or explicitly convinced of a basically reductionist and/or materialist view of things. From the perspective of e.g. Ancient Greece, I don’t think that it would be remotely as clear, and it’s not because they were swamped with overwhelming positive evidence that turned out to be fake.

          What I do know that back when I paid more attention to these things I read Swinburne’s books on the case for God and they were not at all convincing. He makes what seemed to me some really horrible errors (e.g. his argument about whether religious experience justified belief ended up saying that if God exists and people have an experience ‘of God’ then that experience can’t be hallucinatory.)

          • Troy says:

            I’m not sure there’s a ‘metaphysics-neutral’ way of addressing this.

            I doubt that it’s possible to give a knock-down justification for a particular prior, but I think that it’s still worth doing the Bayesian calculation with “made up statistics,” to borrow Scott’s phrase. e.g.: A prior of .5 is obviously too high, a prior of 1 in 10^10 is obviously too low. Let’s see what happens when we start with one of those numbers and plug in reasonable-sounding likelihoods for evidences for and against theism. I think this exercise gives us a high posterior for theism even starting with the low prior.

            What I do know that back when I paid more attention to these things I read Swinburne’s books on the case for God and they were not at all convincing. He makes what seemed to me some really horrible errors (e.g. his argument about whether religious experience justified belief ended up saying that if God exists and people have an experience ‘of God’ then that experience can’t be hallucinatory.)

            Swinburne is definitely a mixed bag. I’m on board with his overall project and think he does a better job of defending theism than most other philosophers of religion, but I think he makes plenty of bad arguments too (I agree with you about his discussion of religious experience). For my money Robin Collins and Tim McGrew generally do a better job in presenting their arguments for theism, though neither of them are as systematic as Swinburne (in their published work).

          • ton says:

            “I think saying ‘I’ve defined God’s attributes to be completely united, so he’s the simplest thing ever’ is cheating”

            It’s not simple, but the prior isn’t as low as the comment above tried to make out.

            Consider conditioning on our world, plus the knowledge that that “simple” God exists. It seems fairly obvious that you’d put a high probability on one of the established religions being true; certainly your probability for at least one religion being true in that case should be at least 1%, say. So that puts a lower bound on how unlikely God can be: no lower than whatever probability the simple version of God gets, times 0.01.

        • James Picone says:

          ‘omnibenevolence’, ‘omnipotence’ and ‘omniscience’ carry huge amounts of hidden complexity with them. ‘is a mind’ carries even more, and I think something resembling being a mind in that sense is going to have to be around for anything resembling the Christian deity. I know it’s not actually computable, but the general principle of “If I had to write a program to simulate this, how long would it be?” is a good model here. Also conveniently sidesteps the ‘perfectly simple’ dodge.

          The philosophical rationale is the Problem of Evil – this does not look like the kind of universe an omnimax entity would create. Scott’s created the best theodicy I’ve seen, incidentally, in his Answer to Job, but it was not exactly a simple world-state.

          And indeed, if I went outside and saw the stars spelling out the Nicene Creed in Koine Greek, my best explanation would be that I was dreaming or hallucinating. It’s pretty strong evidence, mind. If it kept up I would definitely have to conclude that some unimaginably powerful entity existed, and was aware of Earth, humans, and Christianity. But the priors on ‘simulation’ are higher than ‘god’ (simulators lack omnimax), and the priors on ‘Kardashev-3 civilisation having a laugh and also physics is way different to what we think’ are also higher.

          (I actually disagree that the fine-tuning probabilities are anything like you claim, incidentally, mostly for anthropic principle reasons. But that’s a different issue – I think that even conceding that, the complexity penalty a ‘perfectly simple’ mental entity that is also omnimax in a universe like this would accrue is sufficiently large that it just doesn’t matter).

          tl;dr standard atheist reply.

          • ton says:

            ‘omnibenevolence’, ‘omnipotence’ and ‘omniscience’ carry huge amounts of hidden complexity with them. ‘is a mind’ carries even more, and I think something resembling being a mind in that sense is going to have to be around for anything resembling the Christian deity.

            By itself, they’re pretty complex, but complexity isn’t measured in a vacuum, it’s measured relative to whatever else you know. Given that humans exist, there’s not much additional complexity to define a “mind”. You need to calculate marginal complexity, not total complexity.

            How much more complicated is a world that looks like ours + God exists, versus our world?

            If you don’t do it this way, then you run into all sorts of paradoxes.

          • Troy says:

            I actually disagree that the fine-tuning probabilities are anything like you claim, incidentally, mostly for anthropic principle reasons.

            Yeah, there are naturally plenty of other objections to the fine-tuning argument I’m passing over. I don’t find anthropic objections persuasive, inasmuch as their force seems to me to rely on subjectivist interpretations of probability that focus on updating through time, rather than an interpretation of probability (which I take to be the correct one) as a logical relation between propositions, where what matters is the order of explanation and not the order of learning.

            (That is, unless you’re combining an observation selection effect with a multiverse theory. Then the observation selection effect move is perfectly legitimate, and the debate shifts to the plausibility of the multiverse theory.)

            ‘omnibenevolence’, ‘omnipotence’ and ‘omniscience’ carry huge amounts of hidden complexity with them. ‘is a mind’ carries even more, and I think something resembling being a mind in that sense is going to have to be around for anything resembling the Christian deity. I know it’s not actually computable, but the general principle of “If I had to write a program to simulate this, how long would it be?” is a good model here. Also conveniently sidesteps the ‘perfectly simple’ dodge.

            I think that “write a program to simulate this” can’t be right as a general model of simplicity, and one reason is because it leads to the absurd result that the probability of an omni-God is 0. If you have to separately represent, say, each item of God’s knowledge, then since God’s knowledge is infinite (this follows from God’s omniscience and the fact that there are infinitely many facts), God is infinitely complex on this model, and so has a probability of 0.

            Many atheists would, of course, welcome this result, but as long as you agree that some set of experiences would lead you to take theism seriously (even if you disagree with me about whether seeing the Nicene Creed in the sky would do it all on its own), you’re implicitly agreeing that this can’t be the correct result.

            This method would also seem to imply that it’s more likely (modulo the fact that, given the above, both hypotheses have probability 0) that God- exists than God, where God- is just like the traditional God except he doesn’t know the name of Scott’s hospital. But it seems pretty obvious to me that the God hypothesis is a simpler, and more probable, hypothesis than the God- hypothesis.

            The philosophical rationale is the Problem of Evil – this does not look like the kind of universe an omnimax entity would create.

            I think the Problem of Evil is orthogonal to the issue of the correct prior for theism. Evil is, like fine-tuning, evidence that we need to take into account in calculating the posterior probability of theism. (I don’t think the evidence against theism from evil is as strong as the evidence for theism from fine-tuning. Basically I think there are some semi-decent theodicies, such as John Hick’s, and that that combined with our ignorance of God’s motives makes P(evil | theism), while still low, high enough that the Bayes’ factor P(evil | theism) / P(evil | atheism) is many times less bottom-heavy than the Bayes’ factor implied by the fine-tuning evidence P(life | theism) / P(life | atheism) is top-heavy. But this position naturally takes a lot of argumentation.)

            Scott’s created the best theodicy I’ve seen, incidentally, in his Answer to Job, but it was not exactly a simple world-state.

            This is actually another good example where I don’t think the program simulation model gives us the right model of simplicity. I think a world in which every good universe exists is a fairly simple world — simpler than one in which every good universe except an entirely arbitrary one exists. But it is harder to simulate.

            Indeed, I think one of the best responses to the fine-tuning argument is to endorse some kind of Lewisian modal realism, on which everything possible exists. I think this is a very simple, elegant hypothesis, and that its prior is not outrageously low — it may even be higher than theism’s.

          • Protagoras says:

            @ton, I don’t think that the existence of material minds within the universe does much of anything to lessen the complexity cost of adding the hypothesis of a non-material mind outside the universe.

          • ton says:

            @Protagoras But the knowledge that these minds have conceived of the idea of God should reduce the complexity. Instead of describing it from scratch, you can refer to parts already existing.

          • James Picone says:

            @ton:
            Our minds aren’t basic elements of the universe. Changes the complexity.

            @troy:
            If you think it’s impossible to write a program that simulates god, that has huge implications for your worldview. For starters, I think that commits you to god-is-above-logic positions (because logic is computable). It commits you to believing that humans also cannot be simulated, or the position that you can’t possibly understand or predict god, which I don’t think has good implications for Christianity as a position.

            I don’t think your reasoning for thinking that it’s impossible to write a program to simulate god holds, though. I often write programs that do operations on integers, and I very rarely have to specify every possible integer that it could handle. Given infinite memory and time, it could do some operation on every integer quite easily, and I’m okay with assuming those things for the purposes of program-that-simulates-x-as-measure-of-complexity.

            Similarly I think your argument about God-minus-one vs God hides some relevant detail. If it goes through, and you can conceive of no shorter description of omniscience than a list of everything known, then I have no idea what definition of ‘complexity’ you’re using that makes that not infinitely complex. Also we can’t even talk about it, because we can’t mentally represent the concept of omniscience, so every time we say the word we’re just getting some related concept that our minds can handle.

            From Wikipedia’s description, Hick’s theodicy just seems like the pretty standard evil-builds-virtue narrative, which doesn’t impress me.

            Keep in mind that the all-positive-sum-universes-exist theodicy implies that the repugnant conclusion is correct.

          • Troy says:

            If you think it’s impossible to write a program that simulates god, that has huge implications for your worldview. For starters, I think that commits you to god-is-above-logic positions (because logic is computable).

            If I understand Godel’s Incompleteness Theorem correctly, it shows that first-order logic combined with arithmetic is not computable. I believe the same is true of second-order logic. So it seems that God is only “above logic” in the same sense these systems are above logic.

            It commits you to believing that humans also cannot be simulated,

            This seems to me like a plausible result, personally.

            or the position that you can’t possibly understand or predict god, which I don’t think has good implications for Christianity as a position.

            I don’t see why that follows. Perhaps it implies that we can’t predict God’s actions with certainty (not clear to me that it implies that, but let’s grant it for the sake of argument), but I don’t see why we can’t assign probabilities to God acting in certain ways.

            I don’t think your reasoning for thinking that it’s impossible to write a program to simulate god holds, though. I often write programs that do operations on integers, and I very rarely have to specify every possible integer that it could handle. Given infinite memory and time, it could do some operation on every integer quite easily, and I’m okay with assuming those things for the purposes of program-that-simulates-x-as-measure-of-complexity.

            Similarly I think your argument about God-minus-one vs God hides some relevant detail. If it goes through, and you can conceive of no shorter description of omniscience than a list of everything known, then I have no idea what definition of ‘complexity’ you’re using that makes that not infinitely complex. Also we can’t even talk about it, because we can’t mentally represent the concept of omniscience, so every time we say the word we’re just getting some related concept that our minds can handle.

            Let me see if I understand what you’re suggesting here correctly. Is the idea that God might be computable because we could, in essence, input simpler descriptions of God’s attributes than just delineating their implications one-by-one?

            If so, this seems reasonable to me, and indeed, I don’t think my concept of omniscience is very complex at all: it just means something like, “knows all true propositions” or “knows all propositions that are knowable” (this may need further revision in light of potential counterexamples having to do with indexical propositions and the like).

            It seems to me that the difficulty in programming this (and you can correct me on this if I’m not understanding how this works correctly; I am not a programmer) is that to simulate this the computer would need to have some method to determine which propositions are true. Perhaps God could write such a program, programming whatever method he uses, but you or I certainly couldn’t.

            If the relevant measure of complexity is something like “what God could program (if he existed),” then I might be open to your suggestion to use that as a measure of complexity, and that God would not himself be infinitely complex on that definition. But then it seems to me that God won’t be very complex at all, and the prior of theism will not be prohibitively low. For the list of divine attributes is not very long, and although conceptual analyses of the attributes is difficult, I don’t think that carrying out those analyses to completion will ultimately give us a prohibitively long description.

            Keep in mind that the all-positive-sum-universes-exist theodicy implies that the repugnant conclusion is correct.

            I actually have a paper under review right now defending a theodicy similar to Scott’s that I think avoids some of its problems, and that (as far as I can tell) does not imply the repugnant conclusion. I would be happy to make it available to anyone interested.

          • TheAncientGeek says:

            Large finite amounts of complexity are much more of a problem than infinite amounts.

          • Mark says:

            It seems to me that the difficulty in programming this (and you can correct me on this if I’m not understanding how this works correctly; I am not a programmer) is that to simulate this the computer would need to have some method to determine which propositions are true. Perhaps God could write such a program, programming whatever method he uses, but you or I certainly couldn’t.

            Right. In other words, God’s “algorithm” requires positing a much more exotic model of computation than we use anywhere else (with the possible exception of computational complexity theorists who study relative computability). This should incur major complexity costs, no?

          • James Picone says:

            If I understand Godel’s Incompleteness Theorem correctly, it shows that first-order logic combined with arithmetic is not computable. I believe the same is true of second-order logic. So it seems that God is only “above logic” in the same sense these systems are above logic.

            I’m more programmer than computer scientist, but AFAIK Godel’s Incompleteness Theorem demonstrates that you can’t write a program that lists all true statements in first-order logic, and also that the statement ‘first order logic is consistent’ is one of the ones that such a program won’t list. First-order logic is computable, in the sense that you can write a program that uses axioms to make conclusions in first-order logic, it’s just that you can’t write such a program that proves every conclusion about relationships between natural numbers that is true.

            I don’t see why that follows. Perhaps it implies that we can’t predict God’s actions with certainty (not clear to me that it implies that, but let’s grant it for the sake of argument), but I don’t see why we can’t assign probabilities to God acting in certain ways.

            ’tis an either-or. If humans can be simulated and god can’t, humans can’t simulate god, and so are always limited to imperfect understanding. There are probably some theorems of computation that limit exactly how closely we can understand it.

            Let me see if I understand what you’re suggesting here correctly. Is the idea that God might be computable because we could, in essence, input simpler descriptions of God’s attributes than just delineating their implications one-by-one?

            If so, this seems reasonable to me, and indeed, I don’t think my concept of omniscience is very complex at all: it just means something like, “knows all true propositions” or “knows all propositions that are knowable” (this may need further revision in light of potential counterexamples having to do with indexical propositions and the like).

            It seems to me that the difficulty in programming this (and you can correct me on this if I’m not understanding how this works correctly; I am not a programmer) is that to simulate this the computer would need to have some method to determine which propositions are true. Perhaps God could write such a program, programming whatever method he uses, but you or I certainly couldn’t.

            If the relevant measure of complexity is something like “what God could program (if he existed),” then I might be open to your suggestion to use that as a measure of complexity, and that God would not himself be infinitely complex on that definition. But then it seems to me that God won’t be very complex at all, and the prior of theism will not be prohibitively low. For the list of divine attributes is not very long, and although conceptual analyses of the attributes is difficult, I don’t think that carrying out those analyses to completion will ultimately give us a prohibitively long description.

            To be honest I hadn’t actually quite gotten this far into the weeds. There’s some dangers here, which I think reflect that ‘god’ is exceptionally difficult to define. But let’s see:
            – I think that any program God could write is, in principle, a program a human could write. I limit ‘omnipotence’ to ‘everything logically doable’ and omniscience to ‘everything knowable’. I don’t think the argument-from-complexity works on just-plain-above-logic gods.

            Writing a program to simulate X is writing an exact definition for X, basically. I think that an exact definition of god should not be infinitely long, I don’t think exact definitions of anything should be infinitely long. It breaks everything. A list of every true fact is obviously infinitely long, and Godel says you can’t write a program that ‘compresses’ that to finitely long, so I don’t think “a list of every true fact” is a thing that can exist. Definitions of omniscience that don’t break Godel do allow a program that lists every knowable fact. I don’t know enough computability theory to know whether “knows everything knowable” is a unique set (there might be programs that list A, and programs that list B, with equal sizes of output so there’s no reason to choose one over the other), but assuming it is then “knows everything knowable” would be the definition I’d assume.

            Yes, you would need some kind of method to determine whether a given proposition is true or not knowable / list every true knowable proposition. That’s the point. Such a program doesn’t have to be infinitely long, I think, if you include the “I do not know that” output. It will not be a short program, however, and that’s the core of the argument here.

            A regular universe implies that there’s a shorter description of every knowable fact than a list of every knowable fact.

            The point is that an exact description of “every knowable fact” is very, very long.

            I actually have a paper under review right now defending a theodicy similar to Scott’s that I think avoids some of its problems, and that (as far as I can tell) does not imply the repugnant conclusion. I would be happy to make it available to anyone interested.

            Sounds interesting, but I’m not sure I have sufficient philosophical background to follow an actual philosophy-of-religion paper. You/Irenist/Samuel Skinner’s discussion last thread went over my head to an extent.

            @Mark:
            It’d be an Oracle machine with an oracle for first-order logic.

          • ton says:

            @James >Our minds aren’t basic elements of the universe. Changes the complexity.

            If we had actually done the computations and shown that our minds naturally arise out of the basic laws of physics, that would be right; but as it is, we can’t prove that is the case. Our logical uncertainty requires us to account for that as complexity.

            Or do you think Occam’s razor only applies to “basic elements of the universe”?

          • Troy says:

            @James Picone: Thanks; this exchange has been helpful. I won’t belabor the Godel stuff, but I do have one further question on “simulating God.” I think I follow you up until this point:

            Yes, you would need some kind of method to determine whether a given proposition is true or not knowable / list every true knowable proposition. That’s the point. Such a program doesn’t have to be infinitely long, I think, if you include the “I do not know that” output. It will not be a short program, however, and that’s the core of the argument here.

            If I understand you correctly, the complexity of theism (i.e., how difficult it is to “simulate God”) will depend on the method God uses to know everything. If this is a complex method (e.g., consult a very long list) theism will be complex, if it is a simple method (that is, a method which can be described simply) theism will be not so complex.

            Here are two possible “methods.” Leibniz thought that from God’s perspective, all propositions are analytic. Roughly speaking, his idea is that all the properties of a thing are essential to it, so that in even grasping the concept “James Picone” God knows everything there is to know about James Picone. This sounds to me like a fairly simple way of knowing all things knowable. Would it come out that way on the system you’re sketching?

            A second way that God might know things is through something analogous to perception. For example, some libertarians about freedom who think that God knows future contingents but doesn’t himself make them true think that God just “looks” into the future in some way. The rub here is that this “perception” can’t be cashed out in physical terms; it’s just a brute ability of God. Is there any way to sensibly talk about “programming” something like this?

          • James Picone says:

            @ton:
            Minds that arise out of a set of basic laws + several bajillion particles following those laws are much much simpler than exact high-level descriptions of a mind in the program-is-complexity model, so I have a very low prior for ‘human minds are basic’. I don’t see how that changes things – my position is that there are no basic minds, so I don’t think I’ve already accepted the large complexity involved.

            @troy:
            I’m not sure how the ‘analytical’ method you’ve described there could be modelled in a programming sense – the closest analogue I can think of is “take proposition; do the first-order logic to reduce it to other known propositions; get result”, which maybe isn’t exactly what you’re describing. I’m not sure a totally general logical inference engine with sufficient axioms to generate all knowable true statements is ‘simple’ in any sense.

            The ‘perception’ mechanism seems to me like “a Bayesian inference engine + all the data”, so at least as complex as Friendly AGI. 😛

            If there’s a mechanism that god ‘uses’ to be omniscient, and that mechanism isn’t available to a computer program, in some sense, then I kind of feel like that’s a special-pleading, god-above-logic situation again.

            Probably the easiest way to estimate is to just consider omniscience – how complex must a program be to be able to say ‘true’, ‘false’ or ‘don’t know’ to any given first-order logical predicate, guaranteeing that the ‘don’t know’ outputs are the smallest measure possible? I don’t know, and it might not even be uniquely defined, but I don’t think it’s going to be simpler than Schroedinger’s Equation + whatever twist makes gravity work + description of the initial state of the universe.

          • ton says:

            @James “Minds that arise out of a set of basic laws + several bajillion particles following those laws are much much simpler than exact high-level descriptions of a mind in the program-is-complexity model, so I have a very low prior for ‘human minds are basic’. I don’t see how that changes things – my position is that there are no basic minds, so I don’t think I’ve already accepted the large complexity involved.”

            The claim isn’t that “human minds are basic”. It’s that “the additional complexity of a world with omni* + human minds is only marginal over a world with human minds, relative to our knowledge”.

            The fact that human minds arise out of simple physical laws is itself a *belief*. You may have good reasons to believe that, but I don’t think you should shrink complexity based on such beliefs. Like I said before, Occam’s razor applies even to non-basic things. Even if we knew for certain that the laws of physics were completely true, Occam’s razor would still be useful, because we don’t have the unlimited computing power to calculate what the laws of physics predict. Logical uncertainty and all that. If we followed your lead of attributing anything that’s not basic as only having the complexity of the laws of physics, then any hypothesis not shown to be incompatible with the laws of physics would be equally likely under Occam.

            If your model of Occam works differently, could you elaborate on it? I don’t see offhand how one could justify a different model.

          • James Picone says:

            @ton:
            Consider the hypotheses HumanBasic and GodExists.

            I assign a very low prior to HumanBasic on the basis that basic minds are very complex.

            I assign a very low prior to GodExists on the basis that basic minds are very complex.

            I don’t necessarily assign (very-low-prior)**2 to the conjunction HumanBasic && GodExists, because they’re not necessarily independent, as you’ve pointed out. But it’s still going to be at least very-low-prior.

            What you seem to be getting at is that I should be considering P(GodExists|HumanBasic). I agree that that is larger than very-low-prior, but because I evaluate P(HumanBasic) as very low as well, I don’t think that changes much.

            Is your argument that I should consider P(HumanBasic) more likely than very-low-prior because I know humans exist, or something like that?

          • ton says:

            @James

            >Is your argument that I should consider P(HumanBasic) more likely than very-low-prior because I know humans exist, or something like that?

            Not quite. I argue that you should consider P(GodExists|HumansExist) as higher than P(GodExists), and not very low. I agree that P(HumanBasic) is fairly low (although not as low as the raw complexity would imply, because you know humans exist).

            I think our disagreement is on the statement

            >I assign a very low prior to GodExists on the basis that basic minds are very complex.

            GodExists doesn’t imply anything about minds being basic. Even if minds aren’t basic, we can still get lower complexity for GodExists by recursing on those non-basic minds, which are not very complex by hypothesis.

  55. Some Troll's Legitimate Discussion Alt says:

    Genuinely disagreeing about what is good seems like a really serious problem for the “let’s set aside our feelings and do good as well as possible” community.

    I mean, the optimization half of the equation ought to look pretty much the same no matter what you’re actually optimizing for, and people who disagree about weighting x-risk v. human suffering v. animal suffering v. other thing should be able to work together on that. In reality though, that means putting a bunch of autists who have really really strong opinions about what is good in a room together where they can see each other being wrong.

    And this isn’t just any person being wrong either. These are other people committed to effective altruism who are wrong. Convincing just one of them that they are wrong could double the amount of good you are doing, since you can surmise that they are probably about as effective at giving as you. It’s superior to persuading the public even. You have to sell a member of the public on 2 big things, effective altruism and your cause. EAs only need to be swayed on one and are already at least superficially committed to changing their minds based on good evidence (which you know you have because you are right).

    I expect all truces to be temporary and that EAs will continue BLOODMOUTH CARNISTing, adding zeros, and accusing each other of being white until the whole thing ends with schism and excommunications all around.

    Hopefully they’ll all carry their effectiveness principles back to the charity niches to which they scatter.

  56. Max says:

    I have a question: “Has anything great ever been accomplished by charity?” I like reading stories about great people, great inventions, great discoveries, explorations. Stories of achievement, ambition and accomplishment. And not once have I read a story about a “poor underprivileged underclass member thrust into greatness by charity.”
    There are many rags-to-greatness stories, but they were all accomplished by the will, intellect and hard work of an individual. Not by handouts.

    • Nita says:

      I like reading stories about great people, great inventions, great discoveries, explorations.

      Cool. I like cats, ice-cream and sci-fi. However, the people here seem to be discussing ethics instead of awesome fun stuff, for some reason.

      • Max says:

        Discussion is about “effective altruism”. I am asking whether it’s ever been “effective”. If you want a postulate to argue, here it is: “Charity is a waste of resources with a proven track record of uselessness, therefore it is evil.”

        • Nita says:

          Well, I did consider writing a serious reply at first, but decided against it, as apparently you believe that:

          1) it’s realistic to categorize every life history as either “100% self-made” or “due to handouts”;

          2) anything that doesn’t provide you with stories of awesomeness is a waste of resources (hint: could other people possibly value other things?).

          Under these circumstances, a productive discussion seems unlikely.

        • FullMetaRationalist says:

          Most people donate according to how the donation makes them feel, rather than whether the donation makes the world a better place. The entire point behind EA is to maximize actual altruism, rather than to maximize for what’s known as “warm fuzzies”. “Greatness” sounds suspiciously like “warm fuzzies”. If you haven’t already, see “scope insensitivity”.

          * If you disagree with EA because you think “maximizing greatness” is more important than “maximizing altruism”, that is a coherent argument.

          * If you think EA has failed its mission to “maximize altruism”, that is also a coherent argument.

          * If you think EA has failed its mission to “maximize greatness”, you may not have understood EA’s mission to begin with.

      • SUT says:

        > Cool. I like cats, ice-cream and sci-fi. However, the people here seem to be discussing ethics instead of awesome fun stuff, for some reason.

        An ironic suppression of the principle of charity. Guess you’re gonna have to work harder, Max.

  57. (apologies if this point was made somewhere before)

    I think your counterargument to Dylan Matthews is valid: you shouldn’t just keep adding zeroes because you consider something intuitively unlikely. But I would’ve made a very different argument. I would’ve said: in situations of such massive uncertainty, you can’t even predict the direction in which your intervention will affect things, conditioned on it affecting them at all. Thus, the famous response to the original Pascal’s Wager is, “but what if you picked the wrong god? or what if there’s an atheist-god who smites all the believers and admits all the unbelievers to heaven?” Unless you have strong theological presuppositions (which would need to be argued independently), that seems just as likely as the opposite case—meaning your expected-utility calculation will be dominated by tiny probabilities times huge positive and negative utilities, mostly cancelling each other out, and you have no principled way even to estimate the residue left over, if any. So, much like in quantum field theory, where you also get sums of positive and negative infinities, you need some way to “regularize” the calculation (i.e., just cut off the parts that you don’t understand at all, and concentrate on parts you do understand that yield a finite value that can be checked in some independent way).

    Similarly, with AI risk, I personally would say: my uncertainty is so immense that, conditioned on it being a problem at all, I have no idea whether donating to MIRI (or any other concrete step I can think of) is more likely to help or hurt. E.g., maybe donating to MIRI will increase the overall interest in / level of AGI, and that will in turn contribute to someone creating an unfriendly AGI.

    This is not a general argument for paralysis in the face of uncertainty. In “normal” situations—or even in cases like climate change, or preventing nuclear war—we have many ways of getting feedback from the external world as we go along, and thereby correcting our intuitions. Whereas with AI risk, it really does seem (at present) like one person’s scenario unlike anything in past human experience against another person’s contrary scenario also unlike anything in past human experience. And that’s what leads to the problem of unregularized calculations, where you need to evaluate infinity minus infinity (or, say, ~10^50 – ~10^50 to within an accuracy of ~10^5).
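    (A toy numerical version of the “unregularized calculation” above, in Python, with made-up probabilities: the point estimate of the net effect looks enormous, but the error bars on either tail dwarf it, so even its sign is unknown.)

      U = 1e52              # hypothetical future lives at stake (Bostrom's figure)

      # Made-up point estimates: chance a donation helps avert doom vs. backfires.
      p_help = 1.0e-17
      p_hurt = 0.9e-17
      print((p_help - p_hurt) * U)          # ~1e34 expected lives "saved"

      # If the uncertainty on each probability exceeds their difference,
      # the same calculation could just as easily come out hugely negative.
      err = 5e-17
      print((p_help - err - p_hurt) * U)    # ~ -4.9e35
      print((p_help + err - p_hurt) * U)    # ~ +5.1e35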

    I actually do think there are valid reasons to support MIRI, but not the ones usually discussed. You should consider supporting them if you like the blog posts or research papers or workshops that they produce today, or if you feel they’re advancing the state of science or philosophy or rationalist community-building in a valuable way. Crucially, this is not a case that depends on averaging utilities over far-future hypothetical worlds — just evaluating a scientific research program in the usual ways, in terms of how it’s enriching our understanding right now. (FWIW, I’ve long said exactly the same about, say, quantum computing or string theory research: if you support them at all, do so because of the ways they’re already improving our understanding of reality, rather than because of the possibility of some enormous future payoff.)

      • Thanks! Maybe not surprisingly, my reply would be: with global warming, the case seems clear that reducing GHG emissions has a direct, “first-order” effect to slow down undesirable changes to the climate, as well as many unknowable secondary effects that we can probably treat as more-or-less canceling out. Whereas with AI risk, I’m not convinced that the existence of a first-order effect has been established: I treat the entire thing as (at best) unknowable second-order effects.

        (With the one caveat that I’m almost always in favor of a better understanding of reality, regardless of the unknowable far-future secondary effects of that understanding. So, to whatever extent AI-risk research leads right now to better scientific understanding, I can support it on that basis.)

        • ton says:

          I think that’s the argument he’s making as well. He’s not referring to global warming now; he’s referring to the state of global warming research several decades ago, and crediting research since then with us now having the knowledge to slow it down. That’s what’s being compared with AI risk research now. This is perhaps clearer in context; the question was:

          2. Is there any good historical analogue for something like AI safety research in advance working? I don’t think nukes works because i think the science was MUCH further along when people started worrying about safety, but maybe I am wrong there.

    • 27chaos says:

      You don’t even have a guess about whether negative or positive effects are more likely? It feels unlikely to me that the evidence would happen to balance so precisely for you. I would rather pick a position intentionally on a highly inadequate basis, like “my subconscious apparently favors this option”, than refuse to try to apply my knowledge at all. Worst case scenario, you do just as badly as the RNG anyway.

      • ton says:

        The point is not that the evidence balances perfectly, but that the uncertainty is large enough that they can’t have confidence in the sign.

        • Martin-2 says:

          Actually we are talking about the evidence balancing perfectly. If we accept Bostrom’s assumptions then any probability above 50.00…001% of the sign being positive gives your donation enormous expected value. “Confidence” in the usual sense is not required.

          • Ever An Anon says:

            Except that the same reasoning says that a 49.99…9% probability would be disastrous on a similar order of magnitude. And we can’t actually evaluate arbitrarily small probabilities with that kind of precision.

            And, of course, there’s the fact that Bostrom’s numbers are utterly ridiculous and no sensible person should use them, for reasons I point out upthread (control-F “Leslie” sans quotes).
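            (Writing out the symmetry both of these comments rely on, assuming for illustration a symmetric stake of ±U with U ≈ 10^52:)

            \[
            \mathbb{E}[V] \;=\; p\,U - (1-p)\,U \;=\; (2p-1)\,U,
            \]

            so at p = 1/2 + 10^-17 the expected value is roughly +2×10^35 lives, and at p = 1/2 − 10^-17 it is roughly −2×10^35; the sign of the whole calculation hinges on a digit far beyond anyone’s precision.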

    • AlphaCeph says:

      > my uncertainty is so immense that, conditioned on it being a problem at all, I have no idea whether donating to MIRI (or any other concrete step I can think of) is more likely to help or hurt.

      That’s a massive indictment of MIRI. They’re specifically *trying* to help and yet a smart, reasonable person’s probability distribution over their effectiveness is centered on zero.

      And it’s not as if they’re operating in a world where they’re the only ones interested in AI; plenty of people in the space are doing AI research and have specifically said they don’t care about safety, or that they don’t think it’s an issue.

      So MIRI are such muppets, in your view, that they will screw up (create negative outcomes) more often than people who don’t care about safety, whilst also having almost no chance of creating positive outcomes.

      It’s like telling someone not to call the fire brigade when their house is burning down and surrounded by pyromaniacs and arsonists, because you think the fire brigade are such total idiots that they’ll exacerbate the fire more than a bunch of arsonists.

      • ton says:

        Firefighters have a much better track record than MIRI.

        • Alphaceph says:

          My point is that the fire brigade would have to be severely bad to the point of slapstick comedy before you’d rather leave the arsonists to it.

          • John Schilling says:

            We don’t know how bad the fire brigade is, because they’ve never ever put out an actual fire.

            We do know that in most other human endeavors, self-taught amateurs on their very first go at a difficult task, usually fail. E.g. Wilbur and Orville each crashed their first two airplane flights in under a minute, despite being experienced glider pilots.

          • AlphaCeph says:

            I think people are misunderstanding the analogy I’m making.

            To put it plainly, it is very hard to argue that someone who’s trying to help (e.g. an incompetent firefighter, incompetent MIRI researchers) does literally no more good in expectation than someone who’s actively trying to make the problem worse (arsonist, mainstream AI researchers who just press ahead without friendliness).

            You can’t argue that the probability of the firefighter putting out the fire is literally zero, because you’ve just admitted that you’re extremely uncertain about the whole thing. But if there’s, say, a 5% chance of the firefighter putting the fire out, then there has to be 5% more chance that the firefighter worsens the fire than the arsonists, in order for them to come out the same in expectation.

          • John Schilling says:

            Who are the arsonists in this analogy? Is there somebody trying to build an unfriendly AI that MIRI is struggling against?

          • vV_Vv says:

            Pre-modern doctors probably often did more harm than good: bloodletting, drugs based on arsenic, mercury and sulfur, and so on, not to mention the quasi-modern doctors who dissected corpses just before examining puerperal women without washing their hands.

            Modern “alternative medicine” quack doctors often also do more harm than good. Governments try to ban the most dangerous practices, but many things slip through the cracks of regulation. Even when the treatments are harmless per se, such as homeopathy, they do more harm than good to the extent that those who practice them convince their patients not to seek proper treatment.

            Moving away from medicine, in the fields of politics, economics and welfare there have been plenty of social-engineering programs that ended up making the problems they were intended to solve worse:
            Communism is the most glaring example, followed by alcohol prohibition and the “War on Drugs”, but there are also many smaller-scale programs that utterly failed, from crime-reduction programs that increase crime to the European PIIGS austerity that sank Greece’s economy.

            I would say that there is enough historical precedent not to automatically trust that anybody who proposes to solve a problem will not actually make it worse, especially when they don’t have a good track record at solving that kind of problem.

          • AlphaCeph says:

            > I would say that there is enough historical precedent not to automatically trust that anybody who proposes to solve a problem will not actually make it worse

            But that’s the wrong question to ask. The question to ask is whether someone who sincerely proposes to solve a problem will, *on average*, do more harm than good. Of course there is *some* chance that MIRI will make the AI risk problem worse, but that’s the wrong criterion to base your decision-making on.

            It would have been the correct move for people to abstain from using pre-modern doctors, but given the information they actually had, it was probably the rational move to use those doctors.

            Rational != correct.

          • vV_Vv says:

            > The question to ask is whether someone who sincerely proposes to solve a problem will, *on average*, do more harm than good.

            Since in general there are more ways to screw something up than to fix it, and since people who ask for money or status have an incentive to exaggerate their competence, I think that the game-theoretically correct move is to assume a prior on the net utility of their intervention with a negative expectation.

          • AlphaCeph says:

            > “I think that the game-theoretically correct move is to assume a prior … ”

            Well, that’s checkmate; if you’re going to irrationally assume a priori that big-picture interventions won’t work (because that’s what this amounts to – big-picture interventions usually aren’t repeatable frequentist events that can produce tons of evidence to overcome such a prior), there is simply nothing I can say; it’s like trying to argue with a theist.

            Sure, it might create good game-theoretic incentives, I can see that myself, and I actually wish that this kind of reasoning were used more. The problem is that the utility lost by the destruction of the entire human race more than outweighs the utility gained by marginally increasing the incentive for incompetent do-gooders to disappear or up their game.

          • vV_Vv says:

            Then according to your logic you should fall for any Pascal’s mugging ever.

            If you apply this argument only to AI risk or MIRI specifically, then you are special pleading in order to rationalize a preference that you hold on irrational grounds.
            That sounds worse than “irrationally” choosing a prior under which inexperienced do-gooders often fail to achieve what they promise and even make the problem they are trying to solve worse.

      • Your analogy helps to pinpoint exactly where I part ways with the reasoning “we should all donate to MIRI because of expected-utility calculations about AI risk.” I don’t see that the world’s house is burning down (or if it is, then not because of unfriendly AIs). And I don’t see MIRI as a fire brigade. I see them as an outfit doing fundamental research about a hypothetical new kind of fire that might become an issue at some point in the remote future. And as someone who spends his career doing fundamental research about hypothetical future technologies, it’s very far from an indictment for me to describe someone else as doing the same! I’ve already thrown in my lot with the effort to improve human understanding as much and as quickly as possible, even if the long-term consequences of that improved understanding are unknowable.

        • Alphaceph says:

          I think this is exactly what a world that’s burning down because of a poorly managed transition to the AI era looks like, and I definitely don’t think it’s reasonable to push AI risk into the remote far future – unless by “remote” you mean like 60-120 years. I mean you and I will probably both be dead by then, but your child will probably be alive, and your grandchildren definitely will be.

          I would be very surprised if we don’t have superintelligent AI in 120 years’ time (assuming no global scale disaster), and if we don’t at least have human level AGI in say, 200 years I would probably consider my entire naturalistic worldview refuted.

          It just seems ridiculous to imagine that all of the different routes to understanding and manipulating intelligence would be thwarted for 200 years despite a growing global economy, a growing robotics industry, better sensors, at least another 6 orders of magnitude of FLOPS/$ of computing power, more machine learning, AI, genetics, etc. researchers in more countries, biological approaches to intelligence enhancement, whole brain emulation technology getting better, old researchers dying off and making way for new ideas like 6 times over… It’s hard to maintain that all of that will happen, and that intelligence is a naturalistic phenomenon that occurs in our brains in a non-magical way, and that we still won’t figure it out or just brute-force copy it à la whole brain emulation or the techno-eugenics enabled by genome-wide association studies and embryo selection.

  58. Albatross says:

    In Risk Management we like to say that risks can be reduced, not eliminated. A charity for preventing hostile AI might accidentally release a test case and cause hostile AI.

    I also think anti-poverty charity is a great way to increase animal rights research money and AI risk money. Sure, we should donate some money to animal rights now, but if we never lift up the poor, then poor people will keep using terrible farming practices because they are the cheapest option. A thriving middle class creates demand for high-quality food and a clean environment.

    A diverse portfolio of donations decreases risk because it gives the train more stops. One of my coworkers makes dresses for the poor. I don’t. But I have supported her cause in minor ways over the years, mostly just directing her to company programs that match and offer time off for volunteer work. And when I was raising money for a different cause she donated big.

    Too many people think that charity is zero-sum. In reality, generosity is contagious, especially with anti-poverty charities. They are an “increase donations” button.

  59. UnlikelyToBeEaten says:

    “In that case, I offer him the following whatever-the-opposite-of-a-gift is”

    I believe the opposite of a gift is a curse?

  60. Tarrou says:

    I’ve always hated the argument from unlikely-but-let’s-make-up-numbers-about-how-bad-it-would-be. If you have the evidence, present it. I’m not even against ballparking some rough figures to gain perspective of scale on a problem, but humans have a very real psychological inability to deal with very large and very small numbers.

    Do note that if EA is concerned with forestalled human lives, they should probably all become pro-life rather than worry about x-events. The consequences are immediate rather than remote, but then that is the point of this argument, isn’t it?

    There is a .0000000000000000000000000000000000000001% chance I become a supervillain driven by my lack of blowjobs to annihilate all life on earth. The risk is small, but since I’ve made the consequences so bad that anything is justified, all EA types should be lining up at the shaft as we speak. The argument by who can dream up the worst scenario quickly degenerates into silliness.

    • Adam says:

      This is so much better than Pascal’s mugging. It suggests an interesting pseudo-rape strategy of selecting random LW readers and telling them you’re a super AI simulating the entire world and you’ll torture everyone for eternity if they don’t give you a blow job. No matter how small the probability that you’re telling the truth, multiplied by negative infinity disutility the expected value is such that the only possible moral choice is to blow you.

  61. 27chaos says:

    That train vision is the best thing ever and it describes my feelings about technology in general quite well. The rush of the ride is amazingly fun, but will it remain so indefinitely? We’ll likely need to slow eventually – will we brake safely over a long period of time, or suddenly all at once in a very dangerous way? I like Muggeridge even better now, thank you for sharing that one.

  62. ozymandias says:

    Of course you’re allowed to use probabilities like 10^-67 in everyday life.

    What’s the probability that homeopathy works for reasons other than placebo effect, given that homeopathic preparations are chemically identical to tap water?

    • Troy says:

      ≫ 10^-67? If 10 impeccably run RCTs run by independent reputable scientists working in different locations with different populations of 1000 people each found large effects with p = .0001 each time, I think my posterior odds for homeopathy would be above 1/100.

      By my calculations, that scenario has a maximum Bayes’ Factor of about (10^3/1)^10 = 10^30/1 in favor of homeopathy, taking roughly 10^3/1 as the largest Bayes’ Factor a single study can deliver at p = .0001. To get posterior odds for homeopathy > 1/10^2 we’d need prior odds > 1/10^32.

      So, I think my current probability for homeopathy is at least 1/10^32, and probably several orders of magnitude higher.
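
      Written out in odds form, a minimal sketch of that arithmetic (the ~10^3-per-study maximum and the 1/10^32 prior odds are the numbers above; everything else is just bookkeeping):

```python
# Odds-form Bayesian update for the ten-RCT scenario described above.
# The per-study maximum Bayes factor (~10^3 at p = .0001) and the 1/10^32
# prior odds are taken from the comment; they are rough figures, not data.
prior_odds = 1e-32
per_study_bf = 1e3
n_studies = 10

combined_bf = per_study_bf ** n_studies     # 10^30
posterior_odds = prior_odds * combined_bf   # 10^-2, i.e. about 1/100

print(f"combined Bayes factor: {combined_bf:.0e}")
print(f"posterior odds: {posterior_odds:.0e}")
```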

    • So what you’re saying is that, in a controlled, double-blind experiment regarding homeopathy, if we had a group of 45 people 100% of whom were cured of a condition vs. 5% in a control group, the most likely explanation you would offer is chance?

      Based on my understanding of homeopathy, the theory is that the molecules of the solute somehow affect the molecules of the water, which is why homeopathy “works.” Now, I don’t believe it because, as far as I know, there is no evidence for it. But a prior of 10^-67 would prevent me from believing in it even in the face of strong evidence. Which is, I think, the whole point of this post: if you keep putting more zeroes on the end of your priors, they don’t help you determine anything about the world.
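
      To put a rough number on “strong evidence”: here is a minimal sketch of the update for the 45-out-of-45 scenario above (the cure probabilities under each hypothesis are illustrative assumptions, not real figures):

```python
import math

# Hypothetical trial from the comment above: 45 of 45 treated patients cured,
# vs. a 5% cure rate in the control group. Cure probabilities are assumed.
p_cure_if_works = 0.99    # assumed cure probability if homeopathy really works
p_cure_if_chance = 0.05   # assumed cure probability under "it was just chance"
n_cured = 45

log10_lr = n_cured * (math.log10(p_cure_if_works) - math.log10(p_cure_if_chance))
log10_posterior_odds = -67 + log10_lr  # start from prior odds of 10^-67

print(f"likelihood ratio ~ 10^{log10_lr:.0f}")              # ~10^58
print(f"posterior odds   ~ 10^{log10_posterior_odds:.0f}")  # ~10^-9
```

      Even evidence that extreme leaves the posterior around 10^-9, which is exactly the point: with a 10^-67 prior, the prior, not the data, does all the work.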

  63. Matt Schreiber says:

    Hey Scott — long(ish)time follower, first-time commenter. Also, apologies in advance if I’m just recapitulating points already made in the motherlode of previous comments.

    While I agree with you that 10^-67 is an ungenerously and implausibly lowball estimate of the probability at which an AI risk charity can cut our odds of doom, I think that the grounds on which you criticize Dylan Matthews’ argument suffer from conflating the probability of an event occurring with the probability of some action contributing to a particular outcome. It’s the latter issue that Matthews is concerned with — e.g., “Maybe giving $1,000 to the Machine Intelligence Research Institute will reduce the probability of AI killing us all by 0.00000000000000001” (emphasis added) — whereas your parable of the astoundingly unlucky AI researcher addresses only the former.

    With some fairly modest assumptions it’s possible to demonstrate that, pace you, it’s mistaken to think that nothing is ever 10^-67 and you should never use that number. For instance, given prevailing beliefs about the spacetime continuum, the probability that snapping my fingers right now will increase the chances that a tuk tuk driver in Bangkok will run off the road, yesterday, is astronomically lower than 10^-67. In fact it’s essentially infinitely lower than 10^-67, because it’s just 0.

    While I’m not so pessimistic as to think that a contribution to MIRI has only a 10^-67 chance of fending off a future AI holocaust, I do take Matthews’ point that there’s very little solid ground on which to draw conclusions about the efficacy of any present actions with respect to achieving future goals in areas where the paucity of our understanding is so very great. In fact it’s possible to be even more pessimistic: to the extent that coming up with means of preventing an AI disaster depends on the advancement of AI-related knowledge in general, investigation of deterrent measures may perversely speed the onslaught of a nightmare future.

    As I’ve said, I’m not that gloomy, but that’s just an intuition. You’re right that Matthews isn’t entitled to anally extract any particular figure, but that same dictum applies to us optimists as well. I for one would certainly like to see an attempt at rigorously estimating the probabilities involved; perhaps such an attempt has already been made?

    (Incidentally, I think that 10^-65 is probably much too high an estimate of the probability of your series of unfortunate occurrences. I suspect, for instance, that the 10^-16 probability of being struck by a meteorite is the probability of being struck by a meteorite in one’s lifetime (well, presumably shortly before the end of one’s lifetime), rather than the probability of being struck by a meteorite at any given instant. The second figure is necessarily smaller than the first, and since you’ve got to have what I’ll call the synchronic, rather than diachronic, probability in order to get the cumulative probability of a whole bunch of improbable things occurring at once, the likelihood of your scenario has to be much lower — like, I don’t know, 10^-67 ;).
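
    A minimal sketch of that synchronic-vs-diachronic conversion (treating “an instant” as one second and assuming an ~80-year lifetime, both purely illustrative):

```python
# Spread a per-lifetime probability of a rare event evenly over a lifetime
# to get a per-second ("synchronic") probability. Numbers are illustrative.
lifetime_probability = 1e-16                    # the meteorite figure, read as "per lifetime"
seconds_per_lifetime = 80 * 365.25 * 24 * 3600  # ~2.5e9 seconds in ~80 years

per_second_probability = lifetime_probability / seconds_per_lifetime
print(f"{per_second_probability:.1e}")  # ~4.0e-26, many orders of magnitude below 1e-16
```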

    • ton says:

      >For instance, given prevailing beliefs about the spacetime continuum, the probability that snapping my fingers right now will increase the chances that a tuk tuk driver in Bangkok will run off the road, yesterday, is astronomically lower than 10^-67.

      Now that you’ve written that, isn’t the simulation hypothesis plus “simulator has a sense of humor” plus “this comment in particular strikes that sense of humor” greater than that? To make up numbers, 1/10000 is a reasonable lower bound on SH, “sense of humor” for possible simulators might be 1 in a million, and the total number of comments on the internet is less than (humans * 1 million)<10^16, which multiplies to 10^-26, a far cry from 10^-67.
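
      The multiplication, written out as a minimal sketch (all three factors are the made-up lower bounds above, nothing more):

```python
# Lower-bound estimate from the comment: P(simulation) * P(simulator with this
# sense of humor) * P(this particular comment is the one it finds funny).
p_simulation = 1e-4       # stated lower bound on the simulation hypothesis
p_sense_of_humor = 1e-6   # stated bound on a suitably humored simulator
p_this_comment = 1e-16    # 1 / (total internet comments, bounded above by 10^16)

lower_bound = p_simulation * p_sense_of_humor * p_this_comment
print(f"{lower_bound:.0e}")  # 1e-26, a far cry from 1e-67
```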

      If “given prevailing beliefs about the spacetime continuum” means you’re actually conditioning on those beliefs being true, then you’re conditioning on something whose probability is well below 1 - 10^-67. Of course, conditioning on X makes the probability of X equal to 1; that’s not interesting.

      • Matt Schreiber says:

        I can’t speak to the probability of a Simulator with a particular sense of humor, but the whole point is that the “gin up some numbers” approach to probability is a bit silly, so I’m not inclined to give your estimates much credence.

        As to your second point, it’s correct that the directionality of time can’t be taken for granted, and so figuring out the probabilities involved in my hypothetical scenario involves reckoning the chances that prevailing belief is correct as to that issue. But since my aim was to show that conclusions in the area of probabilities of events occurring don’t necessarily extend to the area of probabilities of causal connections between events, I’m comfortable with making some restrictive assumptions in my thought-experiment space and don’t think that those assumptions render it uninteresting.

        Really, though, the problem is that the proscription of 10^-67 proves too much. Suppose that causal connections of 10^-67 really were verboten — suppose, in other words, that for any (action, putative outcome) pair, there must be an above-10^-67 likelihood that the action contributes to the occurrence of the outcome. Then, for instance, doing something that is absolutely, 100% guaranteed to bring about this or that x-risk (say, that Scott’s unfortunate researcher had in fact pressed the ENTER key, onlining a nasty AI) also contributes, with 10^-67 or better odds, to preventing that x-risk. This is a contradiction.

        • ton says:

          >I can’t speak to the probability of a Simulator with a particular sense of humor, but the whole point is that the “gin up some numbers” approach to probability is a bit silly, so I’m not inclined to give your estimates much credence.

          The point isn’t to assert their accuracy, it’s to get a lower bound. Even if all of my numbers are off by 10 orders of magnitude, we couldn’t get to 10^-67.

          >As to your second point, it’s correct that the directionality of time can’t be taken for granted, and so figuring out the probabilities involved in my hypothetical scenario involves reckoning the chances that prevailing belief is correct as to that issue. But since my aim was to show that conclusions in the area of probabilities of events occurring don’t necessarily extend to the area of probabilities of causal connections between events, I’m comfortable with making some restrictive assumptions in my thought-experiment space and don’t think that those assumptions render it uninteresting.

          It’s uninteresting in the sense that if I say “the probability of our laws of physics being correct isn’t 1”, then your reply “but if we assume our laws of physics, then they are correct with probability 1” is uninteresting. Since our laws of physics preclude your tuk tuk driver scenario above, the two exchanges (ours, and the one I just gave) seem sort of equivalent.

          There’s a big difference between conditioning on a belief of not-near-certain credence and using causal reasoning that doesn’t involve conditioning on any beliefs.

          >Really, though, the problem is that the proscription of 10^-67 proves too much. Suppose that causal connections of 10^-67 really were verboten — suppose, in other words, that for any (action, putative outcome) pair, there must be an above-10^-67 likelihood that the action contributes to the occurrence of the outcome. Then, for instance, doing something that is absolutely, 100% guaranteed to bring about this or that x-risk (say, that Scott’s unfortunate researcher had in fact pressed the ENTER key, onlining a nasty AI) also contributes, with 10^-67 or better odds, to preventing that x-risk. This is a contradiction.

          Of course, if a probability of less than 10^-67 isn’t allowed, then a probability of greater than 1 - 10^-67 isn’t allowed either. In your example, the chance of some mischievous soul rewiring the keyboard so that the ENTER key acts as a delete key seems far higher than 10^-67. I don’t see any contradiction.