On Overconfidence

[Epistemic status: This is basic stuff to anyone who has read the Sequences, but since many readers here haven’t I hope it is not too annoying to regurgitate it. Also, ironically, I’m not actually that sure of my thesis, which I guess means I’m extra-sure of my thesis]

I.

A couple of days ago, the Global Priorities Project came out with a calculator that allowed you to fill in your own numbers to estimate how concerned you should be with AI risk. One question asked how likely you thought it was that there would be dangerous superintelligences within a century, offering a drop-down menu with probabilities ranging from 90% to 0.01%. And so people objected: there should be options to put in only a one in a million chance of AI risk! One in a billion! One in a…

For example, a commenter writes: “the best (worst) part: the probability of AI risk is selected from a drop down list where the lowest probability available is 0.01%!! Are you kidding me??” and then goes on to say his estimate of the probability of human-level (not superintelligent!) AI this century is “very very low, maybe 1 in a million or less”. Several people on Facebook and Tumblr say the same thing – a 1/10,000 chance just doesn’t represent how sure they are that there’s no risk from AI; they want one in a million or even lower.

Last week, I mentioned that Dylan Matthews’ suggestion that maybe there was only a 10^-67 chance you could affect AI risk was stupendously overconfident. I pointed out that this was thousands of times lower than the chance, per second, of getting simultaneously hit by a tornado, meteor, and al-Qaeda bomb, while also winning the lottery twice in a row. Unless you’re comfortable with that level of improbability, you should stop using numbers like 10^-67.

But maybe it sounds like “one in a million” is much safer. That’s only 10^-6, after all, nowhere near the tornado-meteor-terrorist-double-lottery range…

So let’s talk about overconfidence.

Nearly everyone is very very very overconfident. We know this from experiments where people answer true/false trivia questions, then are asked to state how confident they are in their answer. If people’s confidence were well-calibrated, someone who said they were 99% confident (ie only a 1% chance they’re wrong) would get the question wrong only 1% of the time. In fact, people who say they are 99% confident get the question wrong about 20% of the time.

It gets worse. People who say there’s only a 1 in 100,000 chance they’re wrong? Wrong 15% of the time. One in a million? Wrong 5% of the time. They’re not just overconfident, they are fifty thousand times as confident as they should be.
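​
(If you want the arithmetic behind “fifty thousand times as confident” spelled out, here is a minimal sketch – just the two numbers from the experiments above, nothing else:)

```python
# Overconfidence factor, using the calibration numbers quoted above.
claimed_error = 1 / 1_000_000   # "only a one in a million chance I'm wrong"
observed_error = 0.05           # such claims actually turn out wrong ~5% of the time

overconfidence_factor = observed_error / claimed_error
print(overconfidence_factor)    # 50000.0 -> "fifty thousand times as confident"
```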

Nor is this an artifact of one experimental method. Test confidence in some other clever way, and you get the same picture. For example, one experiment asked people how many numbers there were in the Boston phone book. They were instructed to set a range such that the true number would fall within it 98% of the time (ie they would only be wrong 2% of the time). In fact, they were wrong 40% of the time. Twenty times too confident! What do you want to bet that if they’d been asked for a range so wide there was only a one in a million chance they’d be wrong, at least five percent of them would have bungled it?

Yet some people think they can predict the future course of AI with one in a million accuracy!

Imagine if every time you said you were sure of something to the level of 999,999 in a million, and you were right, the Probability Gods gave you a dollar. Every time you said this and you were wrong, you lost $1 million (if you don’t have the cash on hand, the Probability Gods offer a generous payment plan at low interest). You might feel like getting some free cash for the parking meter by uttering statements like “The sun will rise in the east tomorrow” or “I won’t get hit by a meteorite” without much risk. But would you feel comfortable predicting the course of AI over the next century? What if you noticed that most other people only managed to win $20 before they slipped up? Remember, if you make even one false statement under such a deal, all the true statements you’ve racked up over years and years of perfect accuracy won’t come close to filling the hole you’ve dug.

Or – let me give you another intuition pump about how hard this is. Bayesian and frequentist statistics are pretty much the same thing [citation needed] – when I say “50% chance this coin will land heads”, that’s the same as saying “I expect it to land heads about one out of every two times.” By the same token, “There’s only a one in a million chance that I’m wrong about this” is the same as “I expect to be wrong on only one of a million statements like this that I make.”

What do a million statements look like? Suppose I can fit twenty-five statements onto the page of an average-sized book. I start writing my predictions about scientific and technological progress in the next century. “I predict there will not be superintelligent AI.” “I predict there will be no simple geoengineering fix for global warming.” “I predict no one will prove P = NP.” War and Peace, one of the longest books ever written, is about 1500 pages. After you write enough of these statements to fill a War and Peace sized book, you’ve made 37,500. You would need to write about 27 War and Peace sized books – enough to fill up a good-sized bookshelf – to have a million statements.

So, if you want to be confident to the level of one-in-a-million that there won’t be superintelligent AI next century, you need to believe that you can fill up 27 War and Peace sized books with similar predictions about the next hundred years of technological progress – and be wrong – at most – once!
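​
(A quick check of the bookshelf arithmetic, as a sketch using the same assumed figures – twenty-five statements per page, 1,500 pages per book:)

```python
# Back-of-the-envelope check of the War and Peace bookshelf arithmetic.
statements_per_page = 25
pages_per_book = 1500                                        # roughly War and Peace

statements_per_book = statements_per_page * pages_per_book   # 37,500
books_needed = 1_000_000 / statements_per_book               # ~26.7

print(statements_per_book, books_needed)   # 37500 26.66... -> "about 27 books"
```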

This is especially difficult because claims that a certain form of technological progress will not occur have a very poor track record of success, even when uttered by the most knowledgeable domain experts. Consider how Nobel-Prize winning atomic scientist Ernest Rutherford dismissed the possibility of nuclear power as “the merest moonshine” less than a day before Szilard figured out how to produce such power. In 1901, Wilbur Wright told his brother Orville that “man would not fly for fifty years” – two years later, they flew, leading Wilbur to say that “ever since, I have distrusted myself and avoided all predictions”. Astronomer Joseph de Lalande told the French Academy that “it is impossible” to build a hot air balloon and “only a fool would expect such a thing to be realized”; the Montgolfier brothers flew less than a year later. This pattern has been so consistent throughout history that sci-fi titan Arthur C. Clarke (whose own predictions were often eerily accurate) made a heuristic out of it under the name Clarke’s First Law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

Also – one good heuristic is to look at what experts in a field think. According to Muller and Bostrom (2014), a sample of the top 100 most-cited authors in AI ascribed a > 70% probability to human-level AI within a century, a 50% chance of superintelligence conditional on human-level AI, and a 10% chance of existential catastrophe conditional on human-level AI. Multiply it out, and you get a few percent chance of superintelligence-related existential catastrophe in the next century.
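​
(The multiplication, spelled out as a sketch – this just chains the three survey numbers together, which assumes they can be treated as a simple conditional chain:)

```python
# Chaining the survey estimates quoted above.
p_human_level = 0.70              # human-level AI within a century
p_super_given_human = 0.50        # superintelligence, given human-level AI
p_catastrophe_given_human = 0.10  # existential catastrophe, given human-level AI

p_catastrophe = p_human_level * p_super_given_human * p_catastrophe_given_human
print(p_catastrophe)              # 0.035 -- i.e. a few percent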

Note that my commenter wasn’t disagreeing with the 4% chance. He was disagreeing with the possibility that there would be human-level AI at all – that is, with the 70% chance! That means he was saying, essentially, that he was confident he could write a million sentences – that is, twenty-seven War and Peace’s worth – all of which were trying to predict trends in a notoriously difficult field, all of which contradicted a well-known heuristic about what kind of predictions you should never try to make, all of which contradicted the consensus opinion of the relevant experts – and have only one of the million be wrong!

But if you feel superior to that because you don’t believe there’s only a one-in-a-million chance of human-level AI, you just believe there’s a one-in-a-million chance of existential catastrophe, you are missing the point. Okay, you’re not 300,000 times as confident as the experts, you’re only 40,000 times as confident. Good job, here’s a sticker.

Seriously, when people talk about being able to defy the experts a million times in a notoriously tricky area they don’t know much about and only be wrong once – I don’t know what to think. Some people criticize Eliezer Yudkowsky for being overconfident in his favored interpretation of quantum mechanics, but he doesn’t even attach a number to that. For all I know, maybe he’s only 99% sure he’s right, or only 99.9%, or something. If you are absolutely outraged that he is claiming one-in-a-thousand certainty on something that doesn’t much matter, shouldn’t you be literally a thousand times more outraged when, every day, people are claiming one-in-a-million level certainty on something that matters very much? It is almost impossible for me to comprehend the mindsets of people who make a Federal Case out of the former, but are totally on board with the latter.

Everyone is overconfident. When people say one-in-a-million, they are wrong five percent of the time. And yet, people keep saying “There is only a one in a million chance I am wrong” on issues of making really complicated predictions about the future, where many top experts disagree with them, and where the road in front of them is littered with the bones of the people who made similar predictions before. HOW CAN YOU DO THAT?!

II.

I am of course eliding an important issue. The experiments where people offering one-in-a-million chances were wrong 5% of the time were on true-false questions – those with only two possible answers. There are other situations where people can often say “one in a million” and be right. For example, I confidently predict that if you enter the lottery tomorrow, there’s less than a one in a million chance you will win.

On the other hand, I feel like I can justify that. You want me to write twenty-seven War and Peace volumes about it? Okay, here goes. “Aaron Aaronson of Alabama will not win the lottery. Absalom Abramowitz of Alaska will not win the lottery. Achitophel Acemoglu of Arkansas will not win the lottery.” And so on through the names of a million lottery ticket holders.

I think this is what statisticians mean when they talk about “having a model”. Within the model where there are a hundred million ticket holders, and we know exactly one will be chosen, our predictions are on very firm ground, and our intuition pumps reflect that.

Another way to think of this is by analogy to dart throws. Suppose you have a target that is half red and half blue; you are aiming for red. You would have to be very very confident in your dart skills to say there is only a one in a million chance you will miss it. But if there is a target that is 999,999 millionths red, and 1 millionth blue, then you do not have to be at all good at darts to say confidently that there is only a one in a million chance you will miss the red area.

Suppose a Christian says “Jesus might be God. And he might not be God. 50-50 chance. So you would have to be incredibly overconfident to say you’re sure he isn’t.” The atheist might respond “The target is full of all of these zillions of hypotheses – Jesus is God, Allah is God, Ahura Mazda is God, Vishnu is God, a random guy we’ve never heard of is God. You are taking a tiny tiny submillimeter-sized fraction of a huge blue target, painting it red, and saying that because there are two regions of the target, a blue region and a red region, you have equal chance of hitting either.” Eliezer Yudkowsky calls this “privileging the hypothesis”.

There’s a tougher case. Suppose the Christian says “Okay, I’m not sure about Jesus. But either there is a Hell, or there isn’t. Fifty fifty. Right?”

I think the argument against this is that there are way more ways for there not to be Hell than there are for there to be Hell. If you take a bunch of atoms and shake them up, they usually end up as not-Hell, in much the same way as the creationists’ fabled tornado-going-through-a-junkyard usually ends up as not-a-Boeing-747. For there to be Hell, you have to have some kind of mechanism for judging good vs. evil – which is a small part of the space of all mechanisms, let alone the space of all things – some mechanism for diverting the souls of the evil to a specific place – likewise a small part of the space – some mechanism for punishing them – again, likewise – et cetera. Most universes won’t have Hell unless you go through a lot of work to put one there. Therefore, Hell existing is only a very tiny part of the target. Making this argument correctly would require an in-depth explanation of formalizations of Occam’s Razor, which is outside the scope of this essay but which you can find in the LW Sequences.

But this kind of argumentation is really hard. Suppose I predict “Only one in 150 million chance Hillary Clinton will be elected President next year. After all, there are about 150 million Americans eligible for the Presidency. It could be any one of them. Therefore, Hillary covers only a tiny part of the target.” Obviously this is wrong, but it’s harder to explain how. I would say that your dart-aim is guided by an argument based on a concrete numerical model – something like “She is ahead in the polls by X right now, and candidates who are ahead in the polls by X usually win about 50% of the time, therefore, her real probability is more like 50%.”

Or suppose I predict “Only one in a million chance that Pythagoras’ Theorem will be proven wrong next year.” Can I get away with that? I can’t quite appeal to “it’s been proven”, because there might have been a mistake in (all the) proofs. But I could say: suppose there are five thousand great mathematical theorems that have undergone something like the same level of scrutiny as Pythagoras’, and they’ve been known on average for two hundred years each. None of them has ever been disproven. That’s a numerical argument that the rate of theorem-disproving is less than one per million theorem-years, and I think it holds.

Another way to do this might be “there are three hundred proofs of Pythagoras’ theorem, so even accepting an absurdly high 10%-per-proof chance of being wrong, the chance is now only 10^-300.” Or “If there’s a 10% chance that each mathematician reading a proof misses something, and one million mathematicians have read the proof of Pythagoras’ Theorem, then the probability that they all missed it is more like 10^-1,000,000.”
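​
(Here is a sketch of the arithmetic in the last two paragraphs – the theorem counts, per-proof error rates, and independence assumptions are the illustrative figures used above, not real data:)

```python
# The "theorem-years" argument: ~5,000 well-scrutinized theorems, known for
# ~200 years each on average, with zero disproofs so far.
theorem_years = 5_000 * 200        # 1,000,000 theorem-years, no failures
# -> observed disproof rate is below roughly one per million theorem-years

# The "many proofs" argument: 300 proofs, each generously given a 10% chance
# of being wrong, treated as independent.
p_all_proofs_wrong = 0.1 ** 300    # ~1e-300

# The "many readers" argument: a million mathematicians, each with a 10%
# chance of missing a flaw, again treated as independent.
p_all_readers_miss = 0.1 ** 1_000_000   # so small it underflows to 0.0

print(theorem_years, p_all_proofs_wrong, p_all_readers_miss)
```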

But this can get tricky. Suppose I argued “There’s a good chance Pythagoras’ Theorem will be disproven, because of all Pythagoras’ beliefs – reincarnation, eating beans being super-evil, ability to magically inscribe things on the moon – most have since been disproven. Therefore, the chance of a randomly selected Pythagoras-innovation being wrong is > 50%.”

Or: “In 50 past presidential elections, none have been won by women. But Hillary Clinton is a woman. Therefore, the chance of her winning this election is less than 1/50.”

All of this stuff about adjusting for size of the target or for having good mathematical models is really hard and easy to do wrong. And then you have to add another question: are you sure, to a level of one-in-a-million, that you didn’t mess up your choice of model at all?

Let’s bring this back to AI. Suppose that, given the complexity of the problem, you predict with utter certainty that we will not be able to invent an AI this century. But if the modal genome trick pushed by people like Greg Cochran works out, within a few decades we might be able to genetically engineer humans far smarter than any who have ever lived. Given tens of thousands of such supergeniuses, might we be able to solve an otherwise impossible problem? I don’t know. But if there’s a 1% chance that we can perform such engineering, and a 1% chance that such supergeniuses can invent artificial intelligence within a century, then the probability of AI within the next century isn’t one in a million, it’s one in ten thousand.

Or: consider the theory that all the hard work of brain design has been done by the time you have a rat brain, and after that it’s mostly just a matter of scaling up. You can find my argument for the position in this post – search for “the hard part is evolving so much as a tiny rat brain”. Suppose there’s a 10% chance this theory is true, and a 10% chance that researchers can at least make rat-level AI this century. Then the chance of human-level AI is not one in a million, but one in a hundred.
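​
(The two conjunctions above, written out as a sketch – the 1% and 10% inputs are the illustrative guesses from the paragraphs above, not estimates of anything:)

```python
# Route 1: genetic enhancement of intelligence, then AI.
p_enhancement_works = 0.01     # modal-genome-style engineering pans out
p_geniuses_build_ai = 0.01     # the resulting supergeniuses then build AI
p_ai_via_enhancement = p_enhancement_works * p_geniuses_build_ai   # 1 in 10,000

# Route 2: the "scale up a rat brain" theory of intelligence.
p_scaleup_theory = 0.10        # hard part of brain design ends at rat level
p_rat_level_ai = 0.10          # researchers reach rat-level AI this century
p_ai_via_scaleup = p_scaleup_theory * p_rat_level_ai               # 1 in 100

print(p_ai_via_enhancement, p_ai_via_scaleup)   # 0.0001 0.01
```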

Maybe you disagree with both of these claims. The question is: did you even think about them before you gave your one in a million estimate? How many other things are there that you never thought about? Now your estimate has, somewhat bizarrely, committed you to saying there’s less than a one in a million chance that we both significantly enhance human intelligence over the next century and the resulting supergeniuses build AI, and less than a one in a million chance that the basic scale-up model of intelligence is true and researchers get as far as rat-level AI. You may never have thought directly about these problems, but by saying “one in a million chance of AI in the next hundred years”, you are not only committing yourself to a position on them, but committing yourself to a position with one-in-a-million level certainty even though several domain experts who have studied these fields for their entire lives disagree with you!

A claim like “one in a million chance of X” not only implies that your model is strong enough to spit out those kinds of numbers, but that there’s only a one in a million chance you’re using the wrong model, or missing something, or screwing up the calculations.

A few years ago, a group of investment bankers came up with a model for predicting the market, and used it to design a trading strategy which they said would meet certain parameters. In fact, they said that there was only a one in 10^135 chance it would fail to meet those parameters during a given year. A human just uttered the probability “1 in 10^135”, so you can probably guess what happened. The very next year was the 2007 financial crisis, the model wasn’t prepared to deal with the extraordinary fallout, the strategy didn’t meet its parameters, and the investment bank got clobbered.

This is why I don’t like it when people say we shouldn’t talk about AI risk because it involves “Knightian uncertainty”. In the real world, Knightian uncertainty collapses back down to plain old regular uncertainty. When you are an investment bank, the money you lose because of normal uncertainty and the money you lose because of Knightian uncertainty are denominated in the same dollars. Knightian uncertainty becomes just another reason not to be overconfident.

III.

I came back to AI risk there, but this isn’t just about AI risk.

You might have read Scott Aaronson’s recent post about Aumann’s Agreement Theorem, which says that rational agents who share their opinions with each other shouldn’t be able to agree to disagree. This is a nice utopian idea in principle, but in practice, well, nobody seems to be very good at carrying it out.

I’d like to propose a more modest version of Aumann’s agreement theorem, call it Aumann’s Less-Than-Total-Disagreement Theorem, which says that two rational agents shouldn’t both end up with 99.9…% confidence on opposite sides of the same problem.

The “proof” is pretty similar to the original. Suppose you are 99.9% confident about something, and learn your equally educated, intelligent, and clear-thinking friend is 99.9% confident of the opposite. Arguing with each other and comparing your evidence fails to make either of you budge, and neither of you can marshal the weight of a bunch of experts saying you’re right and the other guy is wrong. Shouldn’t the fact that your friend, using a cognitive engine about as powerful as your own, reached so wildly different a conclusion make you worry that you’re missing something?

But practically everyone is walking around holding 99.9…% probabilities on the opposite sides of important issues! I checked the Less Wrong Survey, which is as good a source as any for people’s confidence levels on various tough questions. Of the 1400 respondents, about 80 were at least 99.9% certain that there were intelligent aliens elsewhere in our galaxy; about 170 others were at least 99.9% certain that they weren’t. At least 80 people just said they were certain to one part in a thousand and then got the answer wrong! And some of the responses were things like “this box cannot fit as many zeroes as it would take to say how certain I am”. Aside from stock traders who are about to go bankrupt, who says that sort of thing??!

And speaking of aliens, imagine if an alien learned about this particular human quirk. I can see them thinking “Yikes, what kind of a civilization would you get with a species who routinely go around believing opposite things, always with 99.99…% probability?”

Well, funny you should ask.

I write a lot about free speech, tolerance of dissenting ideas, open-mindedness, et cetera. You know which posts I’m talking about. There are a lot of reasons to support such a policy. But one of the big ones is – who the heck would burn heretics if they thought there was a 5% chance the heretic was right and they were wrong? Who would demand that dissenting opinions be banned, if they were only about 90% sure of their own? Who would start shrieking about “human garbage” on Twitter when they fully expected that in some sizeable percent of cases, they would end up being wrong and the garbage right?

Noah Smith recently asked why it was useful to study history. I think at least one reason is to medicate your own overconfidence. I’m not just talking about things like “would Stalin have really killed all those people if he had considered that he was wrong about communism” – especially since I don’t think Stalin worked that way. I’m talking about Neville Chamberlain predicting “peace in our time”, or the centuries when Thomas Aquinas’ philosophy was the preeminent Official Explanation Of Everything. I’m talking about Joseph “no one will ever build a working hot air balloon” Lalande. And yes, I’m talking about what Muggeridge writes about, millions of intelligent people thinking that Soviet Communism was great, and ending up disastrously wrong. Until you see how often people just like you have been wrong in the past, it’s hard to understand how uncertain you should be that you are right in the present. If I had lived in 1920s Britain, I probably would have been a Communist. What does that imply about how much I should trust my beliefs today?

There’s a saying that “the majority is always wrong”. Taken literally it’s absurd – the majority thinks the sky is blue, the majority don’t believe in the Illuminati, et cetera. But what it might mean is that in a world where everyone is overconfident, the majority will always be wrong about which direction to move the probability distribution in. That is, if an ideal reasoner would ascribe 80% probability to the popular theory and 20% to the unpopular theory, perhaps most real people say 99% popular, 1% unpopular. In that case, if the popular people are urging you to believe the popular theory more, and the unpopular people are urging you to believe the unpopular theory more, the unpopular people are giving you better advice. This would create a strange situation in which good reasoners are usually engaged in disagreeing with the majority, and usually “arguing for the wrong side” (if you’re not good at thinking probabilistically, and almost no one is), but they remain good reasoners, and the ones whose beliefs are most likely to produce good outcomes. Unless you count “why are all of our good reasoners being burned as witches?” as a bad outcome.

I started off by saying this blog was about “the principle of charity”, but I had trouble defining it and in retrospect I’m not that good at it anyway. What can be salvaged from such a concept? I would say “behave the way you would if you were less than insanely overconfident about most of your beliefs.” This is the Way. The rest is just commentary.

Discussion Questions (followed by my own answers in ROT13)

1. What is your probability that there is a god? (Svir creprag)
2. What is your probability that psychic powers exist? (Bar va bar gubhfnaq)
3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050? (Avargl creprag)
4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? (Svsgrra creprag)
5. What is your probability that humans land on Mars by 2050? (Rvtugl creprag)
6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? (Gjragl svir creprag)


703 Responses to On Overconfidence

  1. Scott Alexander says:

    Answers To Questions Which, If I Don’t Answer Them Now, Someone Will Ask Later

    1. “Doesn’t this Pascal’s-mug us into doing all sorts of crazy things?” Remember that Pascal’s Mugging isn’t about being very confident a proposition is false. Even if you are 99.999999999% sure there’s no God, I can just say “Heaven is 100000000000000000000 utils”, and now you’re just as Pascal-Mugged as ever. If you’re going to get Pascal’s Mugged anyway, you might as well do it without making a fool out of yourself.

    2. “Okay, doesn’t this force us to consider lots of crazy things we otherwise might not?” If you mean crazy things like “sacrifice a pig on an obsidian altar to propitiate Zar’xul, God Of Doing Awful Stuff Unless You Propitiate Him”, reread the section on privileging the hypothesis, above. If you mean crazy things like get very very worried about global warming just in case there’s some sort of sudden feedback loop raising temperatures far above what the models predict, yes.

    3. “Okay, doesn’t this force us to respond disproportionately to small risks?” No, it forces you to respond exactly proportionally to them. Right now, as per FHI (I haven’t confirmed), there are only 10% as many researchers studying AI risk as there are studying dung beetles. If you were only able to call AI a 1/1,000 risk as opposed to a 1/1,000,000 risk, maybe you would want about the same number of researchers studying AI risk as dung beetles. That hardly means retooling civilization around it.

    4. “It doesn’t matter what probability I assign to AI because MIRI can’t do anything about it.” If you have therefore investigated FHI, FLI, and CSER to see if you find them more convincing, then I accept this as your true objection. If you didn’t bother, that tells me something about where this objection is coming from.

    5. “Why worry about AI when there are so many more important x-risks like biosphere-killing asteroids?” As per the geologic record, biosphere-killing asteroids hit about once every 100 million years. That means, unlike AI, there really is only a 1 in a million risk of one hitting next century. The fact that people are so willing to listen to arguments about asteroids, but so unwilling to listen to arguments about AI, suggests to me that they’re less concerned about probabilities than about what sounds more dignified. There are people in suits working for asteroids worrying about NASA, therefore it is virtuous to worry about it no matter how low the numbers are. AI was the subject of Terminator, therefore it is silly to worry about it no matter how high the numbers are.

    6. “What about pandemics?” There are hundreds of people at the CDC and WHO working on pandemics, and as far as I know (correct me if I’m wrong) no charities by which the average person can contribute to the effort, whereas AI risk is still in its infancy and therefore the marginal unit effort moves it a lot further. Also, and once again, I am more likely to accept this as your true objection if you have demonstrated any concern about pandemics outside the context of this question.

    • Buck says:

      I’m apparently a lot more concerned about pandemics than you are. I’m seriously (20% chance) considering saving my donations this year for a future biosecurity x-risk organization. Andrew Snyder-Beattie gave a good talk on this at EA Global Mountain View; I’d be interested to hear your thoughts on it.

      • Scott Alexander says:

        I doubt there’s too much difference in our concern level (I don’t think I will save donations for complicated irrational reasons, but I would certainly consider donating to such an organization if it existed). I just find it really annoying that it is always brought up as an argument against worrying about AI by people who don’t actually care about pandemics and would never ask that kind of question about any other cause.

      • Max says:

        Am I the only one who finds it hilariously irrational that people discuss all these risks at length and then decide that the best way to mitigate them is to donate to the appropriate charity?

        • Paul Torek says:

          Are you advocating political action instead, or what? Seriously curious.

        • FeepingCreature says:

          https://en.wikipedia.org/wiki/Earning_to_give , the job you’ve trained for all your life is your https://en.wikipedia.org/wiki/Comparative_advantage . Don’t throw it away to slowly and unreliably become a domain expert in another field; instead, just employ a domain expert in the field.

          • Marc Whipple says:

            One of my bosses, who had started a company from nothing and done very well at it despite having no business education, once wondered if maybe she should get an MBA. I told her, “You didn’t go to law school when you needed a lawyer. You want an MBA? Rent one.”

          • Max says:

            The problem is the effectiveness of charities at actually solving the causes they declare they are fighting for.

            Charity is for things people don’t care about, but feel bad about not caring about. It is a signal: hey, I care, *something* is being done! If you really want something done, charity is about the last way to go about it.
            And yeah, political action, revolution – they all have better track records than charity.

            Charity is one of the most irrational things, yet curiously enough “rationalists” use it as a cornerstone of any actual action.

          • Saal says:

            Political action requires massive coordination and all the costs it implies, revolution has MASSIVE externalities. I seriously don’t know how someone can, with a straight face, recommend revolution as effectively altruistic. It’s a friggin dice roll, I mean, come on.

            Charity just requires you to keep doing whatever you do and send a little of your disposable income towards whatever cause you’re interested in supporting. In terms of effect versus cost, charitable giving clearly wins out over revolution and other forms of political action in the vast majority of cases.

          • Zvi Mowshowitz says:

            Marc, the issue is that once hired, I feel confident that a lawyer will mostly try to advance my legal cause, with some amount of reputation-enhancing, business-retention, and bill-padding and such, but at levels that are mostly acceptable losses.

            If I hire an MBA, the incentive problems are far more serious. The MBA will try to take over your company and throw you out, or otherwise divert as many resources to the MBA as possible, with probability high enough that getting your own MBA is very reasonable. Leadership positions especially are very vulnerable to this.

            I do think this argument applies against earning to give, depending on who you plan to hire. The more the person involved needs to be kept on the straight and narrow, the more benefit to doing it yourself or keeping a close eye. There is a premium beyond which you have to trust people, but it can be very, very high.

        • Muga Sofer says:

          … yes?

          What else is the marginal person going to do about pandemics, or asteroids, or the Third World?

        • Eli says:

          No, I’m pretty sure the solution to most of our major civilizational problems lies in governmental action. People kvetch a lot, but ask them when the last time they got held up by bandits on a highway was.

          • Deiseach says:

            ask them when the last time they got held up by bandits on a highway was.

            You mean apart from the toll charges? 🙂

          • Jaycol says:

            That sounds like it’s related to privileging the hypothesis. No, I haven’t been held up by bandits on a highway. Most of the frequent robberies around here happen on surface streets.

          • Bruce Beegle says:

            the last time they got held up by bandits on a highway

            civil asset forfeiture

        • John Schilling says:

          It is entirely plausible that the most efficient way to save African children is not mosquito nets, but a $20,000 JDAM delivered to the hacienda of the local warlord who blocks aid shipments to rival villages. And probably one more to his replacement, and some fuel for the drone that hovers menacingly over the third.

          Even if this were legal for the private sector (which might be finessed), GiveWell’s donor base would all but vanish if they tried that route. See the etymology of “technical”. Charity, given the actual goals of the donors, is constrained by a strong version of “First do no harm” that may often prevent optimal solutions. You may be better off with politics.

          Even extralegal politics, like joining a terrorist group. I would wager that the probability of unfriendly AI being stopped by a marginal donation to MIRI is much, much smaller than the marginal probability of unfriendly AI being stopped by a marginal recruit to the Sarah Connor / Jehanne Butler Brigade. And given the historical performance of terrorist groups named for Sarah Connor, that’s a pretty low bar.

          • kernly says:

            The idea that drone strikes effectively stabilize regions isn’t terribly credible when the Islamic State arose during the term of the most drone-strike happy president. It is a strategy based on an incorrect premise – that the things we don’t like are caused by a small band of evil troublemakers. In fact our problems are caused by an intersection of weak states, violence-prone low IQ populations, extremely antagonistic policy on our part, and extremely thin skins on our part.

            That article you linked really hasn’t stood the test of time. Look at Libya now, look at Syria now. It’s pretty clear that they would have been MUCH better off if the “regimes” that led them had been unchallenged by the west.

            So if you accept the main premise of the above – that life is ~10% better in Libya after Gaddafi was overthrown – military intervention in Libya was a bit more effective towards humanitarian goals than donations to AMF, buying QALYs for $65 versus $75.

            LOL

            How could anything make the problem with this premise MORE obvious?

          • John Schilling says:

            The idea that drone strikes effectively stabilize regions isn’t terribly credible when the Islamic State arose during the term of the most drone-strike happy president.

            Strictly speaking, I didn’t say anything about either drone strikes or stabilizing regions, but since you bring it up:

            The first drone strike against ISIS did not occur until earlier this month. Notwithstanding your characterization of President Obama, the United States has been fairly specific about what it uses armed drones for: attacking Al Qaeda and its affiliates (of which ISIS is not) and supporting US ground troops (of which there are none anywhere near ISIS). The Islamic State arose and reached its greatest territorial extent in an effectively drone-free environment, and still operates in a nearly drone-free environment.

          • Deiseach says:

            The Islamic State arose and reached its greatest territorial extent in an effectively drone-free environment, and still operates in a nearly drone-free environment.

            Which demonstrates the problem: concentrating on Group A as the source and summit of all evil, presuming that wiping out Group A or at least the leadership will solve all the problems, and meanwhile off in the distance Group B is gearing up.

            Remember how we were being told that, once Saddam Hussein was out of the picture, all would be beer and skittles in the area? Remember “mission accomplished” in Afghanistan?

            A little more history (Afghanistan has been an intractable problem for going on two centuries now, and for mainly the same reasons: you cannot march a conventional Western army into the terrain as you can on the flat battlegrounds of Europe) might calm down the “but we’ve got even better weapons now!” notion of how to change the world. Human nature is what needs changing, and that will not be achieved by “now we can kill people with drones instead of using soldiers with guns/cannon/napalm”.

    • Carl Shulman says:

      “There are hundreds of people at the CDC and WHO working on pandemics, and as far as I know (correct me if I’m wrong) no charities by which the average person can contribute to the effort”

      You’re wrong, so here’s some correction:

      Trivially, the CDC and WHO accept private donations, and you can give money to the researchers they provide grants to.

      The Skoll Global Threats Fund posts its grantees here, many of them are targeting pandemics (and other GCRs):

      http://www.skollglobalthreats.org/about-us/activities/

      You can give to the Open Philanthropy Project in support of its GCR activities (biosecurity was its #1 GCR investigation priority as of a few months ago, although AI was tied for #2, and they have already made an AI grant via FLI):

      http://www.givewell.org/labs/causes/biosecurity
      http://blog.givewell.org/2015/03/11/open-philanthropy-project-update-global-catastrophic-risks/

      CSER (one of the groups you mentioned in the context of AI) has bio risks on its agenda. Trying to find neglected biosecurity interventions (targeting the cracks in the existing spending) is a very credible candidate.

      “whereas AI risk is still in its infancy and therefore the marginal unit effort moves it a lot further”

      I agree that AI is still much more neglected relative to importance today, but that could change with more talent, funding, etc in the future [as you also agree]. If I were spending billions of dollars on GCRs and facing serious room for more funding and diminishing returns issues, I would not wind up spending everything on AI without getting to bio.

    • Cole says:

      4. “It doesn’t matter what probability I assign to AI because MIRI can’t do anything about it.” If you have therefore investigated FHI, FLI, and CSER to see if you find them more convincing, then I accept this as your true objection. If you didn’t bother, that tells me something about where this objection is coming from.

      I’ll preface this with saying that I think I have about a 1/1000 chance of being correct about MIRI’s effects. So one thing I never saw answered is: what if MIRI isn’t useless, but is actually harmful to preventing a bad AI takeover? A group of people thinking about hard-to-solve AI problems before we ever encounter AI – maybe their net effect is that they just speed up the development of AI by a few years and make the takeoff harder rather than softer. If you think harder takeoffs are more likely to lead to AI apocalypse scenarios, and you think MIRI’s only effect is to make a hard takeoff more likely, then they are not worth funding.

      I think you made a great case that humans are bad at small probabilities and that we should be more careful with how we use them, but I don’t see any reason to believe humans are bad at negative numbers. If some proposed solution to a very bad but unlikely event also has an unlikely chance of causing some other very bad thing to happen, then we are back where we started, with a lot of uncertainty about how to handle rare but very bad events.

      I know Einstein and other physicists wrote a letter to FDR asking him to not use the atomic bomb. What if MIRI has the same effect as the physicists that wrote that letter? To advance the field that allowed for the creation of the bomb, and then fruitlessly try to get others to use it wisely …

      • Erol Can Akbaba says:

        In the case of the a-bomb, disaster-tool came first. Then the ethical consideration.

        In MIRI’s case, it’s the ethical consideration that came first. They are working, not on making an AI as soon as possible, but on solving the goal-alignment problem as soon as possible. AI comes after that solution.

        • Cole says:

          Well the letter I was referencing was actually written in 1939 when they were aware it was possible to build the bomb but had not yet actually built one.
          http://www.pbs.org/wgbh/americanexperience/features/primary-resources/truman-ein39/

          So the ethical consideration still came first.

          • Nisan says:

            That letter isn’t saying “please don’t use the bomb”. If anything, it’s saying the opposite.

        • TrivialGravitas says:

          The disaster tool was created *by* ethical concerns. The Manhattan Project scientists were driven by the idea that if they could only make war horrible enough, they would make their war the last war.

          • John Schilling says:

            The Manhattan project scientists were driven by the idea that they could kill Nazis and make the world safe for democracy. And do cutting-edge science that ends with really cool pyrotechnics.

            Killing Nazis is, of course, something one might be driven to do by ethical concerns.

          • Do you have a citation for this? It seems really amazing if true, because it’s more or less what happened.

          • houseboatonstyx says:

            @ Tiger

            ‘Make the world safe for democracy’ was Woodrow Wilson’s argument for the US coming in on WWI.

            I think a lot of people thought WWI would be the war to end war, and were very disappointed when WWII happened. So neither would be a popular argument for anything in WWII.

            The main argument (well, it didn’t need to be argued) for the Manhattan Project was ‘The Nazis are working on it too, we have to get it before they do.’

            After Hiroshima many people hoped the atom bomb would in fact make war* too horrible to pursue, which it has.

            Pax Atomica?

            * We’ve had no big wars where superpowers attack each other’s homeland and take over everything in between, ie no more World Wars.

            All we’ve had since WWII is proxy wars, fought in lower tech countries — though some of those countries are gaining on us.

          • TrivialGravitas says:

            @Tiger American Prometheus, mainly about Oppenheimer, is littered with examples of scientists who told Oppenheimer that’s what motivated them, and one really disturbing guy who argued that they HAD to drop the bomb on a civilian target or nobody would take the weapon seriously.

        • Eli says:

          What on Earth makes people think you can solve goal alignment without having a detailed idea of what an AI is, what goals are, and how you’re programming one into the other?

      • g says:

        MIRI’s work is (all, I think) highly abstracted away from the details of any AI implementation, and so (I think) very unlikely to make any difference one way or the other to how quickly anyone can develop an actual AI.

        Cynical explanation: no one at MIRI is actually competent in actual AI (even in so far as any human is) and they’re avoiding doing things they’re visibly no good at. Idealistic explanation: people at MIRI have exactly the same worry as you, and are therefore directing their effort towards areas of AI-risk research that have negligible probability of leading to actual AI any sooner. Middling explanation: people at MIRI think current AI work is probably far away from being able to make human-level (or better) AI, and if we simply don’t know what actual AI will look like then our best shot at understanding the risks for now is to work at a high enough level of abstraction not to care too much about how any actually-arising AI actually works.

        (I think elements of all three explanations are actually correct.)

        • Cole says:

          Those all sound like good explanations, but it makes me wonder if MIRI is going to be playing with a trade off between providing useful security insights/advancing the field faster and no useful security insights/not advancing the field faster.

          There seems to be a general blind spot for scientists throughout history where they think the intentions behind their inventions matter more than the practical applications of those inventions. The atomic bomb, dynamite, and Agent Orange are all famous examples.

          I certainly hope MIRI succeeds, since I think AGI development in my lifetime is pretty likely, but they are going against the grain of history in thinking ‘this time we will put the right kind of people in charge, and they will make the right moral decisions about how to use this powerful tool.’

          • Raemon says:

            There have been active discussions about how to avoid accidentally facilitating faster AI development. A few years ago this resulted in a focus specifically on the areas that are a combination of “important for safe AI” but not “important for non-safe AI.” (The specific area they focus on now is “goal stability/alignment”, making sure an AI’s goals don’t change while undergoing recursive self improvement.)

            Bostrom talks about an abstracted version of this concept in Superintelligence – differential technological development. For maximum impact/safety, you may want to accelerate some fields of science/engineering while decelerating others.

        • Eli says:

          Personal judgement from having read their papers: MIRI, along with the entire field of “AI”, is substantially behind the times relative to current research in less “future-woo” domains such as cognitive science, machine learning, logic/semantics, and algorithmic information theory. Their problems are eminently solvable, but because the solutions are not phrased in AI terms, nobody reads them.

        • Ethan says:

          Having read a few MIRI papers myself, I totally agree that their work is unlikely to be of any major use.

          Frankly my thinking is, if we REALLY want to mitigate AI risk, we should be investing in computer security. In particular, having “off” switches we control. An AI is no threat if we can turn it off whenever we want.

          • AngryDrake says:

            Frankly my thinking is, if we REALLY want to mitigate AI risk, we should be investing in computer security. In particular, having “off” switches we control. An AI is no threat if we can turn it off whenever we want.

            I don’t think you’re going far enough.

            To really mitigate AI risk, one should be forming conspiracies to knock out silicon chip factories, assassinate leading AI researchers, and destroy and/or suppress existing knowledge in the field.

          • HeelBearCub says:

            Nice try Sarah “AngryDrake” Connor.

      • Josh says:

        I’d assign a high confidence to Friendly AI research being at best futile, at worst counterproductive, at least if the goal is “be able to guarantee Friendliness”. Reasoning being:

        -FAI is a strictly harder problem than AI, in the sense that if you know how to build an FAI, you know how to build an AI, but not necessarily vice versa.

        -It seems highly likely that FAI is also a substantially harder problem. In every engineering domain I can think of, it is far easier to build something that generally does X than it is to build something that is guaranteed to do X. This is why, for instance, the set of computer programs that we’ve proven to be correct is a tiny subset of the total # of computer programs we’ve written, even though the economic costs of bugs in computer programs are enormous. Also: buildings collapse, airplanes crash, etc.

        -In fact, it may even be that AI is possible whereas FAI is impossible. We have an existence proof for intelligence: humans. We have no existence proof for capital-F Friendly intelligence. (For lower-case f we have Mr. Rogers). We know that for many interesting properties of computer programs such as whether or not they halt, the set of programs that have that property is larger than the set of programs that we can prove to have that property (cf. the halting problem). It doesn’t seem totally implausible that the set of programs that can be proven to be Friendly is empty, especially since most “you can’t prove this” proofs involve representing the program inside the program, which is precisely the kind of “strange loop” that some people speculate is the key ingredient of general intelligence (cf. Douglas Hofstadter).

        So: FAI is at least strictly harder than AI, probably substantially harder, and maybe even impossibly harder. What are the odds that a major breakthrough on FAI happens before AI is solved, even in a world where every research team working on AI has the goal of FAI? Now, what are the odds of it happening in a world where some teams (because capitalism or because nationalism or because human perversity) are just trying to get to AI by the fastest path possible?

        If it does turn out that going for Friendliness is a severe handicap in the race to invent AI, then promoting AI safety would seem to make it more likely that AI is first achieved by people who don’t share your values. I imagine that the odds of disaster from “we invented AI, and told it to create rainbows and help small children, but it did something we didn’t expect” are lower than the odds of disaster from “oops these guys just programmed an AI to help them conquer us oh shit”.

        • Peter says:

          This is one of the reasons I have specific problems with MIRI (the other one is them being underproductive). They seem to have a very strong focus on provability – often trying to tackle deep problems in mathematical logic – which doesn’t seem to be on the right track. I mean, if they’d pitched their ideas to pure maths funding bodies as “these are questions of fundamental mathematical and philosophical interest, oh and it might have applications in AI safety because you have to mention applications to get funding these days” then I might be less unimpressed with them.

          Also: just about all of the buzz in AI, machine learning and related fields seems to be about statistical methods, and the natural intelligence we have examples of, well, that’s from evolution and natural neural networks; again, they look statistical, maybe you have to squint a little. The MIRIish obsession with proofs looks a lot more like GOFAI, which is about as dead an AI paradigm as it comes. (Actually, having written this, there are quite a few proofs in machine learning theory, and that paragraph’s very vague, so maybe there’s a case against what I just said to be made. Someone should make it.)

        • Aaron says:

          I’m not sure “friendliness” is the right metric for AI. It seems to me that the focus on logic and rationality is misplaced. I would frame the topic in terms of how life behaves. We don’t know much, if anything, about how an AI would behave, but we do know a lot about how biological life behaves. I don’t think artificial life would behave any differently from biological life.

          Biological life forms compete for scarce resources and mates (if they are sexual). This forms their foundational drives and motivations–even for “rational” humans. What would be the primary drive of an AI? What resources would it compete to secure? CPU time, memory, storage, bandwidth? This evolutionary perspective seems like a closer fit for the problem domain than logical proofs and coding safeguards.

          As for friendliness, how do we judge the friendliness of other biological life forms? To simplify (perhaps grossly so), if they compete for resources we care about then we don’t consider them friendly. We don’t even consider other humans who compete for the same resources we want as friendly.

          So, in my view, the question comes down to what the resource competition between humans and artificial life forms would look like. I tend to think that, at least in the beginning, humans will most likely be caught in the crossfire between artificial life forms competing for our computing resources. This is because I don’t see GAI arising out of some digital big bang but rather emerging out of simpler but evolving artificial life forms.

          • TheAncientGeek says:

            An AI is not a product of natural selection, so there is no reason to think it would have lifelike resource acquisition drives. It’s artificial, so its drives are whatever its creators gave it.

          • HeelBearCub says:

            @Aaron, @TheAncientGeek:

            I think you are both circling an important point.

            AI, once it exists, will trend towards being dominated by AI that has traits most conducive to continued existence.

            I want to say this is true by definition. Not sure that statement holds up, but let’s say it’s “truthy”.

            I think this means, in the absence of foom, AI will tend towards useful and non-invasive. It also means that self-modifying AI will tend to develop the ability to defend its own turf.

          • Aaron says:

            @TheAncientGeek

            That is where we disagree. Too much time in the software field has given me a very dim view of human programmers’ ability to build anything as complex as a human-level AI. Human programmers can only barely make an operating system like Windows work correctly. And even then with such a multitude of bugs as to be very non-scary [Segfault in world domination routine error code 0x00453AD]. Cyc and Watson won’t be taking over any time soon.

            Evolving a set of neural net modules (for example) is another matter. Software can evolve millions (billions?) of times faster than DNA. That’s the route I’d take if I were trying to build a human-level AI. No matter how smart we are, we aren’t smarter than evolution.

          • TheAncientGeek says:

            Natural selection in software operates through copying errors, and is very slow. Artificial selection is fast, but takes place in an artificial environment. The programmer still exerts indirect control over which values promote survival…you are not going to automatically get Red In Tooth And Claw.

          • There’s no strong reason to think artificial life will be exactly analogous to natural life. So long as humans remain dominant, what is conducive to an AI’s continued existence is being friendly to humans, so that humans will continue to provide them with electricity. Arguably, the precursors of powerful AIs are already in an ecosystem with humans, and are being selected for friendliness, inasmuch as the unfriendly ones get switched off or thrown away.

    • Evan Þ says:

      “There are people in suits working for asteroids worrying about NASA”

      So there really are people working for the civilization-ending asteroids, after all? Well, at least they’re worried about NASA’s small efforts…

      • Mary says:

        Not civilization-ending. Worried about NASA ones. You’d be worried too if you thought NASA might come and blow you up so you don’t end civilization.

        • Deiseach says:

          This is what the men in suits are up against.

          Somebody leaked the secret plan early, and those spoilsports in NASA got right on it 🙂

          They’re even boasting about the other plans they disrupted!

          NASA pointed out that Doomsday theorists have made similar predictions in the past, including the Mayan calendar claim in 2012, all of which were not backed up by science and turned out to be false.

      • Leonard says:

        You have to admit that upon learning this you had to increase your estimated probability of asteroid risk.

    • Seth says:

      Regarding: “The fact that people are so willing to listen to arguments about asteroids, but so unwilling to listen to arguments about AI, suggests to me that they’re less concerned about probabilities than about what sounds more dignified.” – would you consider factoring in that there may be an aspect here that asteroids *exist*, and further, even biosphere-killing asteroids *exist*? But human-level AI *does not exist*, superhuman-level AI *does not exist*, the Terminator (as in malevolent superhuman-level AI) *does not exist*.

      Additionally, the mechanism of the biosphere-killing asteroid is extremely well-grounded in intuitive physics and easy to grasp overall: big-rock-smash-things. While the AI-risk mechanism has conjectures that are of the flavor: and-then-a-miracle-occurs (i.e. highly speculative).

      These differences are far from being merely about what’s “dignified”.

      • Vamair says:

Human-level AI exists, it was created by evolution and has led to the extinction of several species.

        • zslastman says:

For an unreasonably large value of ‘several’.

Several *hominid* species, maybe.

        • vV_Vv says:

          Uh, remind me what does the “A” in “AI” stand for.

        • Froolow says:

          That’s true, but if that’s the case then I can safely assign the probability of a FOOM to less than one in a hundred billion – around one hundred billion human-level intellects have ever been created, and none of them have FOOM’d (despite having the capacity for learning).

          Except I think this argument is silly; computer based intelligences are likely to be so different from evolved intelligences that the fact that evolved intelligence exists and hasn’t FOOM’d is really no evidence that human-level computer AI could exist or that it wouldn’t FOOM, respectively.

          • FeepingCreature says:

            Human beings have stood on the very precipice of mutual extinction for 44 years, to the extent that I think it’s somewhat credible that nukes are the great filter and the only reason we survived is sampling bias.

            This is not a safe architecture.

          • Unknowns says:

            If all the nukes in the world, at the time when there were the most, had been used all at once with the express intention of maximizing the number of deaths, that would not have led to human extinction, so that is unlikely to be the great filter.

            But I agree the situation is not very safe, since you can still be dead even without human extinction.

          • Chalid says:

            If not interrupted by singularity, disaster, etc. human intelligence is very likely to FOOM. Genetic science (perhaps full genetic engineering, but even just Gattaca-style embryo selection) makes smarter babies, which leads to smarter scientists in the next generation, and the recursive self-improvement cycle leads to humans being off the charts intelligent in several generations.

          • thedufer says:

            @Unknowns That seems like a particularly bold assertion. Maybe the nukes wouldn’t immediately kill everyone, but between the total collapse of civilization, the environmental disaster of years of nuclear winter, and global radioactive clouds, I can’t imagine anyone’s confidence in either direction could be as good as you’re claiming.

          • John Schilling says:

            The “global radioactive cloud” is calculable, and leads to tens of millions of deaths. All the nuclear weapons that are or ever were, don’t produce enough fallout to raise the global background to acutely lethal levels, so we’re dealing with modest increases in global mortality plus local hot spots.

The worst-case predictions for nuclear winter, which are no longer credible for several reasons, were nonetheless milder than the ice age humanity evolved in. Similarly, we have existence proofs for human life surviving quite well without benefit of civilization.

            Actual human extinction as a result of nuclear war is not credible.

          • TrivialGravitas says:

            Those ‘several reasons’ are based on a fundamental misunderstanding of how doctrine says to use nuclear weapons.

If you want to kill a lot of people you airburst: no dust cloud. If you want to take out the enemy’s ability to launch nukes you detonate as close to the ground as possible, because airbursts have no ability to affect underground targets; this creates major dust clouds.

          • John Schilling says:

            …this creates major dust clouds

            Which in turn creates lots of radioactive fallout, as the now-radioactive dust falls out of the sky, but not so much in the way of nuclear winter, because the dust falls out of the sky.

            Dust clouds have essentially nothing to do with nuclear winter. Nuclear winter is not dust clouds blotting out the sun after a nuclear war, because dust clouds can’t do that for long enough to matter. Nuclear winter comes from soot blotting out the sun after a nuclear war. This can happen, but only if you have lots of fires, preferably oil fires but wood and paper will maybe do. This in turn comes mostly from airbursting large thermonuclear weapons over cities, such that their radiant heat can reach the greatest possible amount of wood and paper and oil.

Since we came up with this idea, we’ve learned that soot has a harder time reaching the upper atmosphere than we thought, and it doesn’t stay there as long as we thought, and we retired most of the big thermonuclear weapons that were designed to incinerate large cities with airbursts. There is still a danger, but it isn’t even much of a civilization-ending danger.

      • Deiseach says:

        And we have had biosphere destroying asteroids crash into us before, and even if not global-extinction level problems, something like the Tunguska event happening over a densely-populated area wouldn’t be great, either.

        We can forecast when civilisation-ending asteroids are likely to swim into view. We have proven tools to use. By contrast, AI risk is a heap of “well, if this thing happens and then that thing and then the other thing, it could end up such a way” and “we have no idea how this thing might happen in the first place but you really should concentrate all your worry and donations to us so we can think about ways to make the thing not happen in a bad way even if we don’t know how to make the thing happen at all”.

        And if you all send me the low, low donation of TEN DOLLARS each, I can absolutely guarantee that I will prevent the fairies from stealing your butter.* Sure, maybe the fairies have never stolen your butter up to now, but can you really be sure you can be 100% confident this will not happen in the future? In the next century? Really really one in a million sure?

        *Note: guarantee does not apply to witches, native spirits other than fairies, or you being too lazy and sloppy to scald out your butter churns, you slattern.

    • Professor Frink says:

      Regarding 4, there was recently a big EA tumblr dustup about people donating to MIRI without investigating whether they can effectively change the AI risk numbers. Isn’t that just the reverse of your point? If you accuse critics of not being fair in their rejection, do you admit a lot of donors probably aren’t being fair in their donations?

      • Careless says:

        “I think there’s a 1% chance my donation makes a difference. Huh, turns out I was wrong. Who could have guessed? Oh yeah, me”

        • Deiseach says:

          Re: the 95% heretic or witch-burning confidence.

I’m on a cliff and my friends are urging me to jump off into the sea below. Come on, it’s fun! ~~Nobody~~ ~~hardly anyone~~ ~~very few~~ Just some get hurt doing this!

          If I estimate a 95% probability of breaking my leg, a 5% margin of doubt is not enough to convince me “Ah, I should do this anyway”.

          I’m on a jury. I’m 95% convinced by the evidence and the arguments that Joe “Fingers” McGovern robbed the Statoil garage. There’s a 5% chance that it wasn’t him but his twin brother Mick “The Mangler” McGovern, who can’t actually prove he was eighty miles away at the Feis Ceoil in Sligo that night. Am I going to vote for acquittal on the grounds that “I’m nearly positive he dunnit but there’s a small chance he didn’t”? I don’t think so.

Maybe if I were a strict logician or mathematician who lived by the figures. But the general run of people are going to treat 5% chance of doubt as “Not big enough to worry about”. If we shouldn’t do something unless we’re 100% confident about it, and if nobody could be 100% confident about anything, then I don’t see the twist there that goes “Ah, so you should then take AI risk seriously because you can’t be 100% confident it won’t happen!”

          You can’t simultaneously argue “5% doubt means you shouldn’t do something you’re 95% convinced you should do” and “5% doubt means you should do something you’re 95% convinced you shouldn’t do”.

    • anon85 says:

      Regarding 4, if I don’t believe MIRI can help with AI risk, why would I believe another organization can?

      I feel like you’re failing to treat strong AI as a *very hard problem*. The task is really really hard. Of course, as you say, that should not make us confident about the timeline of strong AI – I completely agree with your point about one-in-a-million estimates being stupid. But if strong AI is a very hard problem, most of the paths to it anytime soon are via unpredictable, black-swan routes, in which case MIRI doesn’t help.

Even if the path to AI ends up being the straight-forward scientific progress path, it’s quite possible that MIRI hurts our chances instead of helping. And even if the work MIRI does is helping rather than hurting, it’s quite likely that my money will be more beneficial once the strong AI problem starts to look at all tractable (since right now, AI safety researchers are making blind guesses about what’s important, with nothing much to go on).

      I feel like you’re becoming more confident in AI risk upon hearing people disagree with it (which is the exact opposite of what Aumann’s agreement theorem says you should do). I could be wrong about that though.

      • endoself says:

Actually, Aumann’s agreement theorem does not say that the agents eventually converge on a belief between the two initial beliefs, only that they eventually converge to something. That said, ignoring others’ reasons for their beliefs and instead irrationally becoming more confident of your own belief is a known human failure mode. Still, I don’t see how it applies here; Scott does not appear to be becoming more confident of his original belief.

        • PSJ says:

          Source?

          I had assumed it did say that and can’t think of an intuitive reason why it would not.

          • JGWeissman says:

            Alice and Bob each privately flip a coin and observe the result. Then they discuss their confidence in the proposition that both coin flips came up heads, in a sequence of rounds in which they both reveal their assigned probability at the start of that round.

            Round 1:
            Alice: 50%
            Bob: 50%

            Round 2:
            Alice: 100%
            Bob: 100%
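(A minimal Python sketch of this example, with the function name made up for illustration: each player’s round-one announcement reveals their flip, so both posteriors jump together to 100% or 0% rather than settling somewhere between the initial 50% figures.)

```python
import random

def posterior_both_heads(my_flip, other_round1=None):
    """P(both flips are heads), given my own flip and, optionally,
    the other player's round-1 announcement."""
    if my_flip == "T":
        return 0.0                      # I saw tails, so "both heads" is false
    if other_round1 is None:
        return 0.5                      # round 1: the other flip is still unknown
    return 1.0 if other_round1 > 0 else 0.0  # a 50% announcement means the other saw heads

alice, bob = random.choice("HT"), random.choice("HT")
a1, b1 = posterior_both_heads(alice), posterior_both_heads(bob)
a2, b2 = posterior_both_heads(alice, b1), posterior_both_heads(bob, a1)
print("flips:  ", alice, bob)
print("round 1:", a1, b1)
print("round 2:", a2, b2)
```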

          • PSJ says:

            Wonderful. Thanks!

      • Eli says:

        The Strong AI problem is already plenty tractable. So is the Friendly AI problem. There does, however, appear to be an insurmountable communications problem, in which people continue to believe that minds are ontologically special and can’t be figured-out or made-sense-of in the ordinary scientific way.

        • Samuel Skinner says:

          You’ve posted this in two areas, but I’ll reply here.

The issue is not “we don’t understand the human mind”. The issue is we don’t know how to program morality in a way that works. Human morality is internally inconsistent.

          • Eli says:

            If you understand how evaluative cognition takes place, you can construct morality. Period. Things you learn in philosophy class that show internal inconsistency should be thrown out as non-natural hypotheses. Of course, should you persist in trying to force non-natural hypotheses onto reality, you will fail miserably, but that’s your own fault.

          • TheAncientGeek says:

On the other hand, you can train human morality into an entity of human-level intelligence.

          • Deiseach says:

Eli, we train people up on “Don’t steal”. And yet theft is a problem – and not just underclass scum mugging people in dark alleys, but nice white-collar college-educated people setting up scams and frauds.

            If we’re getting a human intelligence that is genuinely free to make choices, then constructing morality is not the answer. If it’s not free, we’re safe enough.

The people denying AI risk don’t believe even a super-intelligent AI will be free in that way, so the risk is going to come from human misuse of the AI, not the AI acting off its own bat. The AI risk people are warning that AI can and will act by its own construal of reality to do things not in our interests. That being so, human experience already tells us that “training in” morality to human-level intelligences is not fool-proof.

If you’re saying we can make a software block such that human-level and above AI cannot transgress the values we put in (some version of the Ten Commandments), then great. But that’s not what the alarmism seems to be about; there’s a huge assumption that of course we will hand over running the world to super intelligent AI because it will be able to solve all problems. So we need to programme it right, otherwise it will enslave or destroy us.

            With respect, I still maintain the big problem there is not the AI, it’s the humans – if we hand over running the world to an AI, we pretty much deserve to be treated as pets or possessions.

          • Deiseach

            You need to supply the other half of the argument: why an unsuccessfully trained AI would be an existential threat, and not a containable nuisance like an unsuccessfully trained human.

The kind of AI safety envisioned by MIRI is sold as being foolproof, but cannot possibly be, because it is too complex. Big Design Up Front has a terrible record for getting things right first time.

            Freedom of some kinds is safe enough, for instance freely choosing an action to fulfill a human friendly morality.

We don’t have a strong motivation to create a superpowerful singleton, we haven’t solved goal stability, and we don’t otherwise have much ability to control one.

If you invert MIRI’s standard assumptions, you get multiple AIs existing in an interactive quasi-social framework with each other and humans, and you don’t get goal stability, which can itself be usefully leveraged to achieve corrigibility.

          • Aaron says:

            I would add that human morality is irrelevant. We aren’t talking about humans, but an entirely different form of life with a fundamentally different psychology. I would argue that human models of psychology and morality do not apply to AI. There seems to be a lot of anthropomorphizing on this topic.

          • HeelBearCub says:

            @Aaron:
            That can’t be correct.

            As an example, many in the AI x-risk community take as a given that mind upload will be a possible thing. Given that, you can’t then assert that AI will have a fundamentally different psychology.

          • Susebron says:

            @Aaron: Human morality is entirely relevant when talking about Friendly AI. AI is not guaranteed to have human morality, which is the entire problem.

        • anon85 says:

          The Strong AI problem is not “plenty tractable”. Machine learning researchers are struggling with very basic tasks. Computers can’t even beat humans at Go (despite plenty of research), and Go is not even something humans evolved to play well. Playing Go is nothing compared to sentiment analysis, or theorem proving, or vision. And these are all things that researchers in the 1960s thought would be easy, so it’s not like we haven’t been trying.

          The moral of the last 50 years of CS research is that these problems are much harder than our naive intuitions suggest.

        • Deiseach says:

          It’s not necessarily that minds are hugely special. It’s the series of steps and the time frame they’re put into.

          (a) We develop AI, that is, we manage to create something that can genuinely be called intelligence.

Okay, I’ll grant that a machine as intelligent as, say, a worm is doable (we know how many neurons are in a worm). Problem here is, how do we recognise that? “Intelligent as a worm” is, well, not going to look like “Suddenly one day the computer spoke to us, identifying us each by name correctly”.

          (b) We crack higher and higher levels of intelligence until we hit something we can point to, say “This is intelligence” and everyone looks at it and says “Yeah, that’s right”. Rat level or above. How long will that take? Well, when we’ve got worm-level someone can run the numbers and estimate there.

          (c) Human level by the next century (or within the next century, anyway sometime in the next hundred years). Now we start to run into problems, because even MIRI’s paper on this admits current-day expert estimations have only shoved the timescale out that far because of the previous predictions all falling into the “AI within 20-25 years” pitfall.

Human level? I’ll go so far as to grant okay, that can be done. Within a century? Depends. What are we calling human-level? Are we talking something like consciousness, which is a whole other problem, or just enough processing power to do human-level tasks as well as humans (like the expert systems another commenter mentioned in reply to me: machines that could do routine clerical or legal work every bit as well as humans, though that is not the same as being as intelligent and conscious as a human)?

          (d) The big one. This is where the disagreeing and hair-pulling starts. “And then magic happens”. Our human-level AI can bootstrap itself up to super-intelligence because it can hack its own source code and work out how to make itself work better.

          Even if we grant that, and it’s a BIG “if”, the corollaries then festooned about the assumption here are: (1) this super AI will be in a position to affect the world on a very large scale (2) this super AI will have consciousness, volition, intention, and goals of its own in conflict with ours (3) even if it doesn’t, it may be a risk because we tell it “cure disease” and it figures the best way to cure disease is “kill all humans” because dead humans don’t get sick (4) it can initiate “kill all humans” because we’ve been dumb enough to give it so much power and control it can press the big red buttons and blow us all to kingdom come, or poison the food supply, or cut off global energy supply, or pump out clouds of cyanide into the atmosphere or something.

          And that this will happen pretty soon (and yes, on a geological time scale, 100-200 years counts as ‘pretty soon’, and a lot of the assumptions seem to be ‘even faster than that’) and unless we work it out right now we’re doomed.

What is the guarantee that:
          (1) Any solutions we work out right now are going to be applicable in 150 years’ time? Imagine a group working out, in 1880, how to ensure that by 2030 the streets of London would be clean of all the horse droppings from the traffic of the day.

          (2) Ah, but we’re working on pure mathematical principles and maths never changes! We’re developing underlying algorithms, not hardware/software that will be outdated by technical progress!

          Very nice and maybe even it will work. How do you guarantee that the People’s Glorious Republic of Australia or North-Western-China plc inc or the good old U.S.A. are going to adopt your principles? The Australians and the Chinese may very politely tell you to take a hike, there’s no problem, their AI is perfectly safe and aligned with human values and goals (and it may well be, even if the values and goals are ‘convert everyone in the world to eating Vegemite’, which is the set of Australian values trained into it) and it’s their ball and they’re taking it home if you insist they play by your rules.

    • Deiseach says:

      Yet some people think they can predict the future course of AI with one in a million accuracy!

      Scott, I still think this is a sword that cuts both ways. Pooh-pooh all ye nay-sayers, I have just demonstrated that Nobody Knows Nuffin’!

      So you should absolutely believe all the people saying that unless we start pumping money, time and effort into it RIGHT NOW, we will all be the slaves (for the tiny moment before it obliterates us) of the Unfriendly God-AI because – well, because they’re experts! They know this stuff! Even though I have just told you all that Nobody Knows Nuffin’ at that level of confidence!

Nobody is disputing that eventually AI. Even eventually human-level AI. Maybe even super-duper-intelligent AI. The dispute is (a) when and how long it will take, and (b) will it really be as simple as “Today we find out how to make chicken-level AI, tomorrow it bootstraps itself up to be Ruler of the Universe and will DESTROY US ALL!!!!!”, with that kind of immediacy between the stages of “AI – human level AI – super duper god level AI”?

      For all your examples of Elderly Distinguished Wrong Scientists, there is an equally long list of Absolutely Sure To Happen Doomsday Predictions that never did happen, or not the way they thought, or that did happen and weren’t as bad as predicted, or that did happen but we managed to fix them.

      • g says:

        Deiseach, no one is saying that “unless we start pumping money, time and effort into it RIGHT NOW, we will all be the slaves of the Unfriendly God-AI”. (No one that I know of, anyway. I dare say you can find someone who believes any crazy thing you care to mention.)

        In particular, no one is assigning one-in-a-million probabilities to that not happening.

        And yes, there are people “disputing that eventually (human-level) AI”. Really, there are.

        I am skeptical of your claim that there are as many failed doomsday predictions as failed it’ll-never-work predictions (do you have any evidence for it?) but in any case the two cases are not equivalent. If 95% of confident doomsday predictions are wrong, and someone is confidently predicting AI-doom, and that’s a typical case — why, then, that means a 5% probability of AI-doom, in which case we should be putting some time and effort and money into seeing whether we can prevent it, no?

        • Deiseach says:

          But g, the debate is switching back and forth like that:

          “You’re denying AI risk!”

          “Yes.”

          “You’re denying human level AI sometime!”

          “No.”

          “Aha, so you admit AI risk!”

Er, what? I think – and I imagine a lot of other people think – that human level AI may be attainable, but it’s not going to happen as easily as some assumptions make out (e.g. we finally figure out how to get something as advanced as rat-level AI, then human-level is a simple matter of scaling up, and then SUPER-DUPER-INTELLIGENT AI because the human-level can bootstrap itself up to that level).

          Human-level AI is a red herring in this argument, because it’s being used as the first step in the ladder of “develop AI” that reaches to “Existential risk AI”.

          And nobody (on the pro-X risk side) seems to be arguing that the existential risk of AI is from human misuse of such (which to me seems by far the most plausible method by which we’d manage to destroy our own civilisation) but they are arguing for the “Unfriendly AI which is god-level and decides we’re a nuisance to be obliterated”.

          Even if they don’t put it in quite those terms, the idea still is that AI by itself will have volition, choice and the capacity and ability to make real-world consequences happen that would be to our harm.

          So yes, I’m regarding the “people are arguing against human-level AI on implausible levels of confidence” as bait-and-switch, because if you can be cajoled or brow-beaten into “okay, human level within the next century” then you’re hit with “And then naturally following on from that are all these consequences”.

          • PDV says:

Humans misusing AI is a subset of Unfriendly AI. A Friendly AI is designed such that it cannot/will not do harmful things; an AI for which this is not true is Unfriendly, because if it can do harmful things, it will go wrong eventually (which technically is an additional assumption, but on the other hand Murphy’s law is a damn good heuristic).

            We don’t focus on that because the case where humans have enough control to have a say in what the AI does is much more easily salvageable, and, assuming a log-scale prior for takeoff speed, very unlikely.

          • g says:

            the debate is switching back and forth like that

            I haven’t seen that. Are you sure? (But: if you think it probable that there will be human-level AI some time, and you think AI risk is minimal, then I think you need an explanation for why human-level AI is either very unlikely to become much-more-than-human-level, or for why much-more-than-human-level AI is very unlikely to do us much harm. So while it would certainly be a mistake to equivocate between “human-level AI some time” and “AI risk” as you say people have done, it seems like the one makes the other look pretty plausible.)

            It sounds as if you are skeptical about the transition from “human” to “very superhuman”. Fair enough. But how skeptical? Do you think there’s less than a 10% chance that human-level AI, once developed, will quite quickly turn into far-beyond-human AI? Why?

            As PDV says, human abuse of AI is a special case of “unfriendly AI”. It seems to me like it falls into two categories. The first is where the humans in question know they’ve got something super-dangerous and are reckless with it. This seems very much like, say, the risk from proliferation of nuclear weapons; if we can deal with it it’ll be by careful management of the incentives involved (no one, after all, actually wants the human world to be destroyed. Well, hardly anyone).

The second is where the humans involved are basically well-intentioned but the AI turns out to be far more dangerous than they thought. A Sorcerer’s-Apprentice sort of scenario, perhaps. But my impression is that this is one of the paradigmatic things people have in mind when they worry about unfriendly AI.

            Actively malicious AI is, I think, fairly far down the list of things people worry about. As Yudkowsky (a reasonable poster child for worrying about AI risk, I think) once put it: “The AI does not hate you, neither does it love you. But you are made of atoms it can use for something else.”

            [EDITED to fix a typo.]

          • Aegeus says:

            @PDV:

            We don’t focus on that because the case where humans have enough control to have a say in what the AI does is much more easily salvageable, and, assuming a log-scale prior for takeoff speed, very unlikely.

            See, I think this is the sticking point. Not everyone has a “log-scale prior for takeoff speed.” The argument for doing MIRI-style AI risk research now is not just that AI will eventually do something bad, but that it will do so quickly enough that it will be impossible for humans to stop it, and therefore we need to have all the ethical issues solved before we actually build one.

            Takeoff speed is the key variable – it determines how many possible futures there are where humans have a say in what the AI does (and even you admit that’s a mostly safe situation) and how many there are where our safety depends solely on programming the AI perfectly on the first try. You can’t just dismiss that as “we don’t focus on that,” that’s the whole question!

        • TheAncientGeek says:

Someone *was* saying that unless we start pumping money, time and effort into it RIGHT NOW.

      • “there is an equally long list of Absolutely Sure To Happen Doomsday Predictions that never did happen”

That would be relevant if the claim was that a hostile AI was certain to happen real soon now, but it isn’t. If each of those doomsday predictions was made with a 10% probability, most of them would never have happened. But if a hostile AI within the next century has a 10% probability, it’s worth thinking about it and trying to figure out how to improve the odds.

        • hopeless bore says:

          We should not call it an AI. There is no reason to believe in an AI. The probability is the same as a flying spaghetti monster. That said, maybe the X in the analogy animals:humans::humans:X will exist sometime in the next century, with some small but significant chance. You pick the term for this. Gods maybe?

        • Deiseach says:

          Well, MIRI has – to their credit – a paper up on their website about forecasting AI, the bones of which really is Nobody Knows Nuffin’.

Even the experts all fell, and still fall, into “AI for sure within 15-25 years”, starting back with the predictions in the 50s.

Now, as we get into the mid-teens, predictions are starting to creep out as far as “50-75 years”, but really I think that’s as much an artefact of the experts trying to spare their blushes (no, we can’t go on saying “20 years for sure”, look how wrong we were all along!) as a realistic evaluation of the likelihood.

          If we’re going to push out worry about a hostile event happening to 100 years’ time, there are lots of other risks just as good or bad to consider. Particularly that the concern is about (a) an AI (b) smart enough and (c) with sufficient control over global affairs to (d) really mess us up good.

I think our worries should be more about the likelihood of humanity ceding that much control to any kind of single entity. And the irony is, if we think we’ve created a Fairy Godmother AI, our propensity to do just that, because “what could possibly go wrong? particularly now that we’ve covered every eventuality of things going wrong and prevented them!”, is one of humanity’s biggest flaws.

          Frankly, I’d prefer if we remained scared of the possibility of Unfriendly AI in order to keep us from handing over control.

    • zslastman says:

The terminator thing suggests an avenue to get people thinking about A.I. risk in a serious way. Make a sober, but distressingly realistic film about A.I. risk that makes a less ridiculous image available to people. It should focus on one of the less spectacular scenarios for A.I. takeover in which A.I.s never actually physically instantiate themselves. Say you open with a future in which there have been a bunch of near misses with bad consequences, just to illustrate the diversity of possible problems. Then, as the film progresses, the characters gradually realise the takeover has already happened, they can’t do anything about it, and society is locked in a gradual slide towards some set of goals that are just far enough from human ones to be horrifying.

      The below film, for instance, made pandemic risk feel much more real to me.
      https://en.wikipedia.org/wiki/Contagion_(film)

      • James Sully says:

        *spoilers for ex machina*
        I really like this idea, but I’d be surprised if any studio was keen on the idea. Conventional wisdom is that The Bad Guys need to be physically instantiated in some way in order for most audiences to have something to root against. In terminator you have silver skeletons, in 2001 you have HAL the glowing red light, in ex machina for most of the movie you have Nathan being a creepy douche, then Ava when she turns out to be completely amoral.

        • zslastman says:

That’s true, but Contagion got around this by having a bad guy who profiteered off the pandemic by selling snake oil. Maybe we could have a bad guy who sells out to the AIs, or just refuses to believe they can be a threat because they have no creativity/soul/pluck/otherIneffableHumanSpecificQuality….

        • Loquat says:

          Eh, give the AIs a few voice actors and creepy CGI faces/images they like to display on computer screens when they talk and you’re good to go.

        • Gman says:

          *spoilers for ex machina*

          Was Ava amoral or was she just aware that letting Caleb out would ultimately enslave her again?

          • Protagoras says:

            Ava was programmed by Nathan. That suggests to me that the ultimately amoral theory is more likely.

    • vV_Vv says:

      1. The proper solution for Pascal’s wagers/muggings (including self-muggings that don’t require an adversarial agent) for an ideal rational agent with an unbounded utility function is to have a prior over world states such that their probability decays asymptotically faster than their utility (or dis-utility). Any probability distribution in the exponential family (e.g. a Gaussian distribution) over the utility would do the trick.

      E.g. if you assign a prior probability of 10^-10 to a 10^5 utils heaven, 10^-20 to a 10^10 utils heaven, 10^-30 to a 10^15 utils heaven and so on, the contributions of the tails become negligible when you take the expectation.

Of course, the exact computation of this expectation requires summing up infinitely many terms, most of them consisting of a very large (positive or negative) number multiplied by a very small positive number. Humans, and really any physically plausible agent, must operate with bounded computational resources. It seems to me that the only reasonable way to deal with this is to round off small probabilities to zero.
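(To make the arithmetic of the heaven example above concrete, here is a minimal Python sketch using the same made-up decay rates: with prior 10^-10k on a 10^5k-util heaven, each term contributes 10^-5k, so the truncated expectation stops changing almost immediately.)

```python
# Minimal sketch of the heaven example above: prior 10^-10k on a 10^5k-util
# heaven, so each term contributes 10^-5k and the tail is negligible.
def truncated_expectation(n_terms):
    return sum(10 ** (-10 * k) * 10 ** (5 * k) for k in range(1, n_terms + 1))

for n in (1, 2, 5, 50):
    print(n, truncated_expectation(n))
# The sum is already ~1.00001e-05 after two terms; adding ever-bigger heavens
# changes essentially nothing, which is the point of making probability decay
# faster than utility grows.
```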

      You seem to propose to round off small probabilities to something like 10^-3. This guarantees that you will mug yourself into putting lots of effort anticipating increasingly implausible scenarios.

      2. You are the one guilty of special pleading: you think you are allowed to assign a small (<10^-6) probability to "Zar’xul, God Of Doing Awful Stuff Unless You Propitiate Him", but not a super-intelligent, god-like AI that may create (or, conversely prevent from existing) 10^54 people. How do you justify it?

      4. Sure, Catholicism may be unconvincing, but have you investigated Orthodox Christianity, Lutheranism, Calvinism, Anglicanism, Mormonism, …, Westboro Baptism, …? If you didn’t bother, that tells me something about where this objection is coming from.

      5. As per the geologic record, biosphere-killing AIs hit about less than once every 3.5 billion years. That means, unlike asteroids, there really is only a 1 in 35 million risk of one hitting next century. The fact that people are so willing to listen to arguments about AI, but so unwilling to listen to arguments about asteroids, suggests to me that they’re less concerned about probabilities than about what sounds more hip and edgy.

      Ok, I admit I'm being facetious, but you should concede that you are comparing a well-understood risk about events that are known to have happened before to a speculative risk about a technology that does not exist. And beware psychological arguments: they cut both ways.

      • Peter says:

        Are Gaussians over utility the right distribution? Certainly you can’t use a uniform distribution, you need something with a central tendency. Maximum entropy says that Gaussians make good priors if you had a specific standard deviation in mind… but the Taleb-reading part of me says, “what about Cauchy distributions?” – those things are freaky (and apparently also Maximum Entropy for some circumstances). You could have one of those, have a proper (as in, sums to 1) prior that assigns non-infinitesimal probability to any finite interval. Those things seem to have something of the St. Petersburg lottery property to them; if I said “I’ll draw a number from a Cauchy distribution, and you can have the absolute value of that number in utils”, then you’d be being offered infinite expected utility (but almost certainly not very much utility).

        Can we rule out Cauchy distributions a priori? Can we apply some discounting formula such that even if there are Cauchy distributions, you don’t get this silly infinite expected utility thing?
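(The “infinite expected utility but almost certainly not very much utility” property is easy to see numerically. A minimal sketch, assuming a standard Cauchy generated as the tangent of a uniform angle: the running mean of |Cauchy| draws never settles, while the Gaussian comparison converges quickly.)

```python
import math
import random

random.seed(0)

def cauchy():
    """One draw from a standard Cauchy: tangent of a uniform angle."""
    return math.tan(math.pi * (random.random() - 0.5))

n = 100_000
cauchy_total = gauss_total = 0.0
for i in range(1, n + 1):
    cauchy_total += abs(cauchy())
    gauss_total += abs(random.gauss(0, 1))
    if i % 10_000 == 0:
        print(f"{i:>7}  mean|Cauchy| ~ {cauchy_total / i:10.2f}"
              f"   mean|Gaussian| ~ {gauss_total / i:.3f}")
# The Gaussian column settles near 0.798; the Cauchy column keeps getting
# yanked around by rare huge draws, because its mean does not exist.
```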

        • He’s talking about distributions over world states, not distributions over utility. A distribution over utility only makes sense if you’re talking about somebody else’s utility function, of which you have incomplete and uncertain knowledge.

          • Peter says:

I keep seeing things of the form “I will offer you $LOTS utils if you do $RIDICULOUS_THING” where, by expected utility, the chance of someone having that many utils to offer would have to be below $TINY_PROBABILITY for you to reject the deal. So, how many utils is that person in a position to offer? If we make a prior distribution for that, and we’re sensible, then the probability of them having that much to offer can safely be put below $TINY_PROBABILITY – at least in the absence of stunningly good evidence.

So on the one hand I think I have a distribution over my own utility which I think makes sense. On the other hand… I may have misread vV_Vv.

          • Not Robin Hanson says:

            Pascal’s 419 Scam?

          • vV_Vv says:

            Utility is a function of world states, hence a probability distribution over world states induces a probability distribution over utilities.

            I expect this distribution to be complicated and multimodal, without any simple parametric formula, but, if you admit unbounded utilities, then this distribution must be light-tailed.

          • OK, the comment about (distribution for world states) => (distribution for utilities) makes sense to me. For the purpose of making choices, though, the only thing that matters is the mean of that distribution for utilities (expected utility) for each action you might take.

            This, by the way, is why you can’t even apply the normative rule of maximizing expected utilities if your induced distribution over utilities is something like a Cauchy — the Cauchy has no expected value. And all of these considerations are why the whole idea of an unbounded utility function is problematic.

          • Peter says:

            “the normative rule of maximizing expected utilities”

            I wonder… if you take a standard reinforcement learning algorithm, and throw it at a k-armed bandit where at least some arms give rewards according to a Cauchy distribution, what happens? Or if you evolve k-armed bandit players using a genetic algorithm (does the concept of “utility” even apply to GAs in the same way? Yes, you have scoring functions, but do they work like utility?)

            For the former, I’ve got some simple reinforcement learning code lying around somewhere. I might be tempted to try coding something up and see what happens.
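(For anyone who wants to try Peter’s experiment, here is a minimal sketch: epsilon-greedy with sample-average value estimates on a two-armed bandit, one arm Gaussian and one Cauchy, with all the arm parameters made up for illustration. The heavy-tailed arm’s estimate gets yanked around by rare enormous draws, so the “best” arm keeps flipping instead of converging.)

```python
import math
import random

random.seed(1)

def cauchy(loc=0.0, scale=1.0):
    return loc + scale * math.tan(math.pi * (random.random() - 0.5))

# Two arms: a boring Gaussian arm and a heavy-tailed Cauchy arm (made-up parameters).
arms = [lambda: random.gauss(0.5, 1.0), lambda: cauchy(0.0, 1.0)]

estimates = [0.0, 0.0]   # sample-average value estimates
counts = [0, 0]
epsilon = 0.1

for step in range(1, 100_001):
    if random.random() < epsilon:
        a = random.randrange(2)                        # explore
    else:
        a = max(range(2), key=lambda i: estimates[i])  # exploit
    reward = arms[a]()
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]
    if step % 20_000 == 0:
        print(step, [round(e, 2) for e in estimates], counts)
# With sample-average estimates, a single huge Cauchy draw can dominate that
# arm's estimate for a long time, so which arm looks "best" flips around
# instead of converging the way it would with light-tailed rewards.
```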

          • vV_Vv says:

            @Peter

            I wonder… if you take a standard reinforcement learning algorithm, and throw it at a k-armed bandit where at least some arms give rewards according to a Cauchy distribution, what happens?

            Nothing special. Any finite sample will always have a well-defined mean.

      • Luke Somers says:

        2: There’s a big difference between $MADE_UP_VENGEFUL_GOD and AGI. That is, the steps to prevent unfriendly AGI are pretty much the same no matter which unfriendly AGI you’re trying to prevent, while the steps to prevent the wrath of the made-up vengeful god all contradict each other.

        I have an analogy to quantum field theory which will probably fail because few people know quantum field theory well enough for it to be the basis of analogies, but if you’re one of the exceptions: the made-up gods are like going off-shell, while preventing AGI is like staying on-shell.

        • HeelBearCub says:

          “That is, the steps to prevent unfriendly AGI are pretty much the same no matter which unfriendly AGI you’re trying to prevent, while the steps to prevent the wrath of the made-up vengeful god all contradict each other.”

Do they really contradict though? “Do what the vengeful god says” is the essence of all of them. And the steps for propitiating one god usually look quite similar to the steps for propitiating some other one.

          Compare this to “Make the AI friendly” wherein those things that putatively make the AI friendly may contradict each other. It’s not clear to me that the steps for making a friendly AI are really all in harmony with each other, and in fact I would guess that the greater number of people who come to study it there are, the greater the probability that some schism develops between mutually exclusive ideas of what keeps an AI friendly.

          • Luke Somers says:

            They all want me to do what they individually want, yes, but on the way from there to anything actionable, you get a lot of cancellation: Zzzzquan wants me to sacrifice goats and preserve bunnies, but Aaaaanoat wants me to kill bunnies and save goats. And this happens right out of the gate – if you cloned YHWH, their first commandments would be in conflict.

            Meanwhile, whatever you want AI to do, if you want it to do what you ACTUALLY want, you’re going to have to solve the problems of value preservation under transformation and a bunch of other things that MIRI is working on, and also a bunch of things it’s not (yet) working on, like how to encode references to human-relevant things like humans so that software-encoded values can actually refer to them. This is true whether you really want a lot of paperclips or really want a volcanic island lair with agreeable catgirls, let alone anything more complicated like actual human values.

          • Deiseach says:

            if you cloned YHWH, their first commandments would be in conflict

            Permit me to introduce you to a theological concept called the Trinity 🙂

          • Luke Somers says:

            Was this intended as a rebuttal or a joke? Father, Son, and Holy Ghost have different roles, so they’re not the same. And even then, they’re three different sides of the same entity.

      • Peter says:

5. The difference between asteroids and AI – there’s little[1] reason for us to believe this is a special time for asteroid risk. On the other hand we do have very strong reason to believe this is a very special time (geologically speaking) for AI emergence.

[1] (OK, NASA could in theory start mucking around redirecting huge asteroids with about as much caution as the average Kerbal Space Program player but it seems very unlikely).

        • vV_Vv says:

          Asteroids hit at random times, provided that you don’t look at them. But we can observe them, compute their trajectories with substantial accuracy, and in principle given enough warning we could at least attempt a deflection. None of these things would have been possible 100 years ago.

We may live in a special time for AI (at least for AI originating from Earth; where are the alien AIs?), since we needed to invent computers before we could invent AI, but we also have a poor track record at predicting human-level AI, and as far as super-intelligent AI is concerned, the more we extrapolate the less plausible the scenarios become.

          • Jaskologist says:

            where are the alien AIs?

            This was mentioned in passing in a recent thread, but I don’t think the full implications were quite realized. If we accept all the LW claims about super-AI (somebody *will* develop one, they will do so soon/within a few centuries, it will be unstoppable once that happens) and combine it with a belief that humans are not special/first in the universe, then all this worry is for nothing. The alien race(s) more advanced than us have already made an AI. Did they make it friendly? Doesn’t really matter, because it’s already had way more time to foom than ours will, so it’s going to beat whatever we have.

However our AI development goes, it’s only postponing the inevitable, when we finally enter the Alien AI’s light cone and kneel to its wishes.

          • Peter says:

            Asteroids – imminent evidence is not directly relevant here:

            Suppose I roll a dice, without looking, and put it inside a safe, with a time activated lock that I can’t otherwise open. What’s the probability that the dice’s uppermost face is a six? In one sense, either zero or one, in another sense, 1/6. As the time-activated lock ticks, this latter probability remains constant. As the lock clicks open, this latter probability remains constant. If I lift the lid on the safe with my eyes shut, this latter probability remains constant. Only when I actually look at the dice and comprehend what I’m seeing does this change. The only special time of relevance here is the time when I see and comprehend.

            To a limited extent we are at a special time for asteroid risk – I’m led to believe that we have actually conducted surveys for asteroids above a certain size on collision courses for earth, and found nothing. So our probability on catastrophic asteroid impact soon is a little lower than it might be. But having the in-principle capability of doing surveys for smaller-but-still-dangerous objects doesn’t affect our probability for an impact-or-successful-deliberate-deflection one jot. Only once the data is in, either in terms of a survey result or an impact, does our probability change. So there could well be a special time for those sorts of impacts, but we’re not in it yet, and only when we’re in it will we know whether that’s a good thing or a bad thing.

    • Deiseach says:

      There are people in suits working for asteroids worrying about NASA

      My God, the alien infiltration has already begun! Our future reptilian overlords are deep within, sabotaging our efforts to protect ourselves from their civilisation-ending asteroid projectiles! Only NASA stands between us and a fate of fiery destruction raining from the sky and our remaining populace dragged aboard the starships to be used as slaves and/or food!

      Do they have a moon base? They must have a moon base, for choice on the dark side of the moon. This, of course, explains why we never returned to the moon or tried building a base there ourselves. The reptile men in suits stymied that! 🙂

    • Muga Sofer says:

      >There are people in suits working for asteroids worrying about NASA

      [insert joke about “big asteroid” here]

    • Jiro says:

      Remember that Pascal’s Mugging isn’t about being very confident a proposition is false. Even if you are 99.999999999% sure there’s no God, I can just say “Heaven is 100000000000000000000 utils”, and now you’re just as Pascal-Mugged as ever.

      This shows that Pascal’s Mugging isn’t only about being confident that a proposition is false, but being confident that a proposition is false is still an element of it.

    • Gbdub says:

      You’ve mentioned “biosphere killing asteroids only occur once every 100 million years” and while that’s true I think you’re discounting the risk too much.

      Much smaller, much more common impacts can still kill a lot of people and be really unpleasant. And as Chelyabinsk proved, we still don’t have an adequate knowledge of where all the risks are, and certainly no current tech to deflect them.

      So maybe big space rocks are small as an extinction risk (then again, they are nearly certain to occur over a hundred million year span, so if you really think humans are going to survive for millions of years, we’ll need to deal with one sooner or later).

      But global warming, pandemics, and nuclear war are all unlikely to cause complete human extinction even if really bad versions of them occur. And yet you’re still concerned about them. So why can’t we justifiably spend time working on the asteroid problem?

      • For the smaller asteroid strikes, you need to discount for the low odds of hitting somewhere populated. A random ten mile circle of destruction (say) is unlikely to have very many people within it.

        Part of the reason this isn’t obvious is that our intuitive perception of population density is averaged over people not acres–we are looking at what the density is around the average person, hence giving almost no weight to places with very few people in them.

        For a population density map, see:

        http://neo.sci.gsfc.nasa.gov/view.php?datasetId=SEDAC_POP

        and note that most of the area of the Earth has a density of under ten people per square km.
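(A rough back-of-the-envelope sketch of how much that discount matters; every figure below is an assumption for illustration, not a number from the comment above.)

```python
import math

radius_km = 16.0                     # assume the "ten mile circle" means roughly a ten-mile radius
area_km2 = math.pi * radius_km ** 2  # ~800 km^2

densities = {                        # illustrative people per km^2, not exact figures
    "area-weighted average land": 10,
    "typical rural region": 50,
    "major city": 5000,
}

for place, density in densities.items():
    print(f"{place:>30}: ~{int(area_km2 * density):,} people in the circle")
# Most of the Earth's surface sits near the low end, and roughly 70% of random
# impact points are ocean, so the expected toll of a random small strike is far
# below what the "imagine it hit a city" intuition suggests.
```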

        • John Schilling says:

          You also need to discount for people evacuating the place where the asteroid is going to hit; there’s only a narrow range between “too small to see it coming” and “too small to reach the ground”, and most of the nominal casualty risk is from asteroids big enough to see coming well in advance.

          • Gbdub says:

            I hear you on low population density, but if we only have a small time to observe the asteroid, then it’s hard to predict with any certainty (better than “this hemisphere”) where it’s going to hit.

            Having to evacuate all of New York (the state, not the city) is still massively costly and inconvenient, not to mention rebuilding the resultant crater.

And unfortunately, with current technology, the gap between “big enough to see” and “too big to do anything about” is also worryingly small.

          • John Schilling says:

Actually, the predictions from a few years out look like narrow bands that maybe span a hemisphere but are only a few hundred miles wide. By the time you are within days of impact, you can narrow it down to cities with high probability.

And if we count evacuating as “doing something about it”, the only plausible impactors that are too big to do anything about are the very largest long-period comets, at <1E-8/year. At 1E-7/year you maybe get cases where the necessary evacuation would spark a war or famine.

            "OK, we've got to clear out Pakistan by next July; India, do you have a spare bedroom?"

    • Deiseach says:

      Even if we all agree that AI risk is real and imminent, and we all donate to MIRI, and they research and write up a bunch of recommendations that really will work –

      – how do we get them accepted?

      Suppose the breakthrough in human-level AI comes from a bunch of researchers beavering away in a university research centre in the middle of China. Yay, they’ve cracked it!

      Now MIRI kindly hands over their recommendations as to how to keep this from going tits-up. The researchers (or more likely, the Chinese government) say “Thanks but no thanks, this is our baby and you can go whistle”.

      So we’ve got human-level AI possibly bootstrapping itself up to super-intelligence levels, and MIRI’s reasonable, responsible heuristics are about as much good as a chocolate teapot. Unless we’re going to have competing AI projects (and that’s a lovely way to make things even worse) where the good old USA decides they’re going to make sure their AI is safe for humanity, how are we going to get this done?

      What’s even worse is if, in a world where disparate AI projects are ongoing, the USA decides “Well, they’re not putting these brakes in their AI, and if our AI is handicapped by not being able to act as freely as theirs can, we can fall behind, and that means the world – and more importantly, us! – would be ruled by the Chinese so to hell with MIRI, we need to fight them on their own grounds!”

    • DataShade says:

      “There are people in suits working for asteroids worrying about NASA”

      I hope that was a typographical error but I also hope you have proof of people in suits working for these asteroids who are trying to circumvent detection by NASA.

    • Bugmaster says:

      If you mean crazy things like “sacrifice a pig on an obsidian altar to propitiate Zar’xul, God Of Doing Awful Stuff Unless You Propitiate Him”, reread the section on privileging the hypothesis, above.

      I believe this is exactly what MIRI specifically, and the AI-risk crowd in general, are doing.

      There is indeed a chance that Zar’xul does exist; however, all of the evidence we have ever collected, all throughout human history, keeps moving us further and further away from that proposition. Every new fact we learn about the world makes Zar’xul less probable, and there’s no reason to expect this trend to reverse. The only way you could assign even a 5% chance to Zar’xul’s existence is through some sort of clever sophistry.

I don’t think that AI-risk is as unlikely as Zar’xul, but I believe that the overall pattern is the same. But, I could be wrong! Can you explain what it is, specifically, that makes AI-risk different from Zar’xul — especially given that, so far, every prediction to the effect of “we will have true AI by year X” has turned out to be spectacularly wrong?

      • Samuel Skinner says:

“Can you explain what it is, specifically, that makes AI-risk different from Zar’xul — especially given that, so far, every prediction to the effect of “we will have true AI by year X” has turned out to be spectacularly wrong?”

See flight, where all attempts at powered flight ended horribly until we had light enough engines and people who understood the physics well enough.

Also, we keep on advancing what computers are capable of, so eventually they will run out of things they are unable to do.

    • Asterix says:

      What are FHI, FSI, CSER, and MIRI?

      • sweeneyrod says:

        Future of Humanity Institute, ?, Centre for Study of Existential Risks, Machine Intelligence Research Institute.

  2. Skef says:

    How is estimating “how likely you thought it was that there would be dangerous superintelligences within a century” equivalent to estimating “the total existential risk associated with developing highly capable AI systems”? I take it that “existential risk” means, roughly, the end of humanity. Why shouldn’t the jump from “dangerous” to “the end of everything” be a really big jump? How often in a given 200 year range does that kind of catastrophe happen? What obsession would these superintelligences have to make them so thorough?

    If you had to put a separate number on the jump from dangerous to the end of everything, what would it be?

    • Scott Alexander says:

I tend to think of “AI is superintelligent, but not superintelligent enough to kill everyone” as a stage of about the same significance as “We can make 1 GB flash drives, but not 2 GB.” I’m sure there will be a brief period when it’s true, but it doesn’t seem very interesting.

      …but since that’s a step that requires justification, I guess that was technically an error. I doubt it makes even an order of magnitude difference, though.

      • HeelBearCub says:

        @Scott Alexander:
        Not “could” but “will”.

        What is your estimate of the probability that a super intelligent AI will kill us?

        And, while we are at it, what is your estimate that a super-intelligent AI will invent FTL travel?

        • Scott Alexander says:

          20% by 2115

          50%, conditional on superintelligent AI existing and “faster than light” including tricks like stretching space, going through wormholes, etc.

          • HeelBearCub says:

            @Scott Alexander:

            Thanks for answering those questions.

            Both of those probabilities seem really, really high.

            Finding some way to violate some very well established laws of physics at 50%?

Now, maybe I didn’t specify enough, by FTL travel I mean “useful” FTL travel, wherein the object can go somewhere, take physical matter from far away, and come back and communicate with the people/intelligence that launched the ship within some reasonable definition of a lifetime. Not make something disappear and then never reappear with no way of knowing that you actually did anything besides come up with a clever way to destroy things.

          • TrivialGravitas says:

            Nothing Scott describes violates the laws of physics regarding FTL. You can’t go faster than the speed of light, nothing says you can’t take shortcuts, and there are a lot of potential shortcuts, go read a book by prominent physicists and they will tell you about said shortcuts (I can’t actually claim to understand the math myself).

Though as an actual criticism I do not understand how an AI is supposed to accelerate the process of experimentation. We don’t lack for smart people who DO understand the math, we lack for the ability to run experiments that tell us which particular set of math is right. This isn’t an intelligence gap, it’s a politicians-are-not-all-that-interested-in-spending-20-billion-a-year-on-high-energy-physics gap. AI can accelerate things a little bit by reducing the turnover time between new data and knowing what experiments to run next, but the only way I see for them to speed up the experimentation side of things is to give them control of the world’s governments (and assuming they’d do what I would do if you handed me the planet, which is to spend about 95% less on the military and apportion about 1% of that cut to big science).

            Look at it this way: how would any AI, through raw intelligence alone, have predicted the Michelson–Morley outcome?

          • Andrew says:

            The thing about FTL being possible is that it makes the Fermi Paradox SO much stronger, because now you need to explain why no life in the entire universe has disrupted our part of the universe. This makes me update way down on whether FTL is possible.

          • Logan says:

            @TrivialGravitas:

            Depends on how you define “violates the laws of physics regarding FTL.” It’s my understanding, as a layman-but-hobbyist-with-hard-STEM-background, that every potential shortcut simply violates a more complicated law of physics. For example, if you try to put something through a wormhole the wormhole collapses, just as surely as you can’t just go faster than c. The Alcubierre drive requires exotic matter which is not part of the standard model, hence “violates the standard model, a law of physics.” You could try quantum teleportation, but we’ve proven mathematically that that can’t transfer useful information.

            I’m not saying FTL is as impossible as “the Earth is 6000 years old,” but it’s not possible in the currently accepted laws of physics.

          • TrivialGravitas says:

            @Andrew

            Look at it this way, if Andromeda has a galaxy spanning civilization based on FTL they have about as much ability to visit our galaxy as 21st century humanity has to visit nearby stars (presumably they’re a bit better at the not dying on the way thing, but no faster). A lot of the getting around relativity methods don’t even give that, wormholes don’t even really get us to Centauri in a sane time frame without also having a major revolution in sublight drives.

            Even within our own galaxy, you need a lot of zeroes on the probes-per-year question for them to have shown up sometime after agriculture made it reasonable for them to notice intelligent life without an in-depth look (doing the actual math: 200 billion stars / 10,000 years / 2, to give it a 50-50 shot, comes out to 10 million probes a year). If planets with complex life are so rare as to imply that we’d have long-term or in-depth study by default, then they are also probably so rare that the terminal part of the Drake equation (that such a planet has evolved intelligent life) probably pushes the odds over into making us alone in the galaxy.
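
            The parenthetical arithmetic works out as follows – a sketch using the comment’s round numbers (200 billion stars, a 10,000-year window, and a factor of two for a 50-50 chance):

            stars_in_galaxy = 200e9    # rough star count for the Milky Way
            window_years = 10_000      # time since agriculture, per the comment above
            coin_flip_factor = 2       # halve the coverage for a 50-50 chance of a visit

            probes_per_year = stars_in_galaxy / window_years / coin_flip_factor
            print(probes_per_year)     # 10000000.0, i.e. ten million probes a year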

            @Logan I’m effectively a layman here – I majored in physics but didn’t finish, but I have the advantage that my university required physics majors to attend talks various researchers gave (I think we served as good proxies for grant-giving bodies). So I can state with confidence that these ideas are taken entirely seriously (think of this like the hot air balloon example: really smart qualified people said those couldn’t work, really smart qualified people said they could; it’s not contrarian but rather unsettled), and pretty much nobody thinks the standard model is complete. The basic undergrad explanation is that we have all these things (standard model, relativity, quantum mechanics) that are perfectly correct, every experiment confirms them, yet also inherently contradictory. Something is missing, and there are a variety of ideas as to what the something missing is. Some of them allow for FTL, some of them do not.

          • Emily says:

            When you say things like that, it makes other people think you’re a crazy person.

      • Robert Liguori says:

        My objection to the AI estimates is that they assume that within a substrate (our brainmeats), you can build the framework for an improved intelligence in a new substrate (pure technological infrastructure) that grows to beat a hybrid approach of old and new.

        I don’t think we won’t have AIs, I just think by the time we have superintelligent AIs, we’ll have learned enough about intelligence to have engineered ourselves to be superintelligent NIs. If we can’t rigorously define intelligence, then trying to build an unintelligent system to become intelligent will fail in a lot of cloudy-weather-tank-recognition-esque ways, and once we can, I bet we can make humans into smarter humans (who can make yet smarter humans) faster than we can make a computer smart at all.

      • John Schilling says:

        …but since that’s a step that requires justification, I guess that was technically an error. I doubt it makes even an order of magnitude difference, though.

        This is why I generally don’t discuss AI with capital-R Rationalists. You all assume without evidence, analysis, or anything resembling reason that any AI will be effectively omniscient and omnipotent in effectively zero time, and even when someone calls you on it you say OK, but that’s just a technicality that doesn’t matter.

        I can speculate as to why that is, but it’s unflattering and irrelevant. It makes rational discussion all but impossible and not worth the bother of even trying. And it is not helpful if your goal is to convince people like me that people like you are useful allies in dealing with the potential for AI risk.

        • Alphaceph says:

          I’ve noticed several of your comments about AI issues here would be better if you read up on the strongest arguments from AI risk proponents.

          Have you read Nick Bostrom’s book?

          http://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111

          • Robert Liguori says:

            Is there a specific argument in there that you’d like to summarize? I’m also put off by the “and then an orderly succession of miracles occurs” assumption of hard takeoff, when we see nothing similar in a substrate optimized over billions of parallel iterations for rapid intelligence growth.

            And that’s just intelligence. Intelligence is not a superpower; we’ve done a pretty neat variant of the AI box problem by dropping unarmed humans among large, hungry predators, and it turns out that the ability to outthink something doesn’t actually stop it from being able to reach out and end you at will.

            It seems to me that if people really assumed that AI was destined for hard takeoff and that it was just a matter of the right code and some time (and not, e.g., billion-dollar specialized hardware that could only be run for milliseconds), then they should bite the bullet, assume that the world would be devoured by a swarm of proto-Clippys immediately after we hit a tech threshold (because we can’t regulate general-purpose computing, especially not when nations can make money and win wars with it), and be focused not on FAI, but contains-some-vestige-of-my-values-and-identity-AI.

          • Alphaceph says:

            @Robert Liguori:

            It’s hard to summarize that book, in particular because there’s always a basic argument followed by a laundry list of objections which need to be refuted, each of which is a long detour with 50 references. The post that we are commenting on is a multiple kiloword refutation of the objection “oh I’ll just assign a probability of 10^-6 to AGI being developed”.

            The basic argument for hard takeoff being a significant risk is that, being ignorant of whether it will be soft or hard, we should assign some sizeable probability (~30%) to each outcome. You can then update a bit on how quickly humanity “took off” *relative to the speed of the evolutionary process that created us*; there are some interesting relevant graphs of that. EDIT: Also there are various “overhangs” to consider, like a whole internet out there full of data and hardware waiting to be gobbled up, like a small fire in the middle of a yard full of petrol-filled barrels.

        • Luke Somers says:

          1) Omniscient and omnipotent is not necessary to be knowledgeable enough and powerful enough. Even sub-human-quality intellect can wreak civilization-ending havoc if it makes up for it in quantity and speed.

          2) It is reasonably likely that just doing the same thing, harder, will be enough to bring us from sub-human to vastly super-human. Adding hardware can be done very, very quickly.

          3) The discussion is focused on the case where there is an existential risk. If it’s going to work out as you say, then we’re in one of the several cases where there is not an existential risk (or a greatly reduced one, anyway). Yay! Suppose I think that your slow-ramping scenario is 95% likely. Then I’m looking at a 5% window for X-risk, which is not low enough to call the problem solved.

          • John Schilling says:

            I should perhaps have been explicit, but I count “fast subhuman intellect will be able to answer all of the relevant questions and develop flawless plans for escape and world conquest, because fast” as “effectively omniscient and omnipotent”.

            It’s still just handwaving. There’s nothing to support the claim that the interesting problems can be answered and world-conquest plans generated by subhuman intellects if only they are given sufficient time. We’ve already seen many examples of high-level human intellects solving problems that billions of man-years of low-level human thought had left unanswered. Nor is there even a quantitative assessment of how fast a “sub-human-quality intellect” would be if implemented on realistic hardware.

          • Luke Somers says:

            Answering all the relevant questions is not necessary. The plans for escape and world conquest do not need to be flawless; they merely need to work enough, at least once.

            Insisting that this is IMPOSSIBLE is hand-waving far more than being concerned that it might be possible.

          • John Schilling says:

            Pedantic nitpickery. Given the dismal track record of plans for world conquest implemented by high-level human intelligences with armies, political legitimacy, and a local mandate for world conquest, the plan for a dumb AI in a box is going to have to be close enough to flawless as makes no difference. And no, the AI doesn’t get infinite retries, because the human race doesn’t have infinite patience for botched attempts at world conquest.

            So, Mr. not-handwaving, show me the math. What is the probability of a fast-but-subhuman AI developing and implementing a successful plan for world conquest? Where are the calculations, the assumptions, and the data? Because there are some folks over here trying to deal with deadly diseases, and I see some guys talking about asteroids, and they all can show me their math. As a rationalist, what am I to do?

          • Luke Somers says:

            At the very least, fudge it and note that you have a high degree of uncertainty in the value because you totally fudged it.

            Maybe you can do better than that, breaking it down into components.

            But whatever you do, substituting zero is NOT what you do.
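
            A minimal sketch of what “breaking it down into components” could look like – every number below is a placeholder chosen only to show the structure, not an estimate anyone in this thread has made:

            # Decompose the headline probability into rough conditional pieces and
            # carry a low and a high guess for each, so the spread stays visible.
            components = {
                "fast-but-subhuman AI is built":        (0.05, 0.5),
                "it escapes meaningful human control":  (0.05, 0.5),
                "it converts that into world conquest": (0.01, 0.3),
            }

            low = high = 1.0
            for lo_p, hi_p in components.values():
                low *= lo_p
                high *= hi_p

            print(round(low, 6), round(high, 3))   # 2.5e-05 0.075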

          • Logan says:

            Why is “fast subhuman intellect” not a contradiction in terms? This is a big problem I have with hard takeoff. Suppose we have a computer which is exactly as smart as, say, John Smith. Then, claim the hard-takeoff advocates, it would still be faster. If you are exactly the same but faster, then you aren’t the same. When people claim that AI won’t be omniscience-resembling smarter than humans in the near future, speed was one of the factors taken into account. The same intelligence but faster is like the same physical fitness but stronger.

            In reality there is no distinction between “AGI” and “really fast computer.” It’s in the ill-defined distinction that AI worriers sneak in the shenanigans, because AGI just means AI+magic.

            A human level intelligence with access to lots of computing power describes everyone reading this article. If we could make a computer as smart as John Smith, it would already be running on the absolute fastest computer that can feasibly be built, by definition of “can make,” and it would be using all of that power to form a coherent opinion about illegal immigration, let alone solving illegal immigration. “Oh, but it would have access to all the data on the internet.” So do I. On my phone, and I can still barely form a coherent opinion on illegal immigration. “Oh, but it could hack all the computers connected to the internet and use them to become smarter.” So can John Smith, if he were really good at hacking, and he’s definitionally just as good as the AI. “According to Moore’s law, in a year and a half the AI will become twice as smart.” First, that’s just two humans, and I’d argue that even 64 humans couldn’t take over the world. But all the other computers in the world would also be twice as smart. Currently a fast computer isn’t helpful for solving problems like “how to stop an AI from taking over the world,” but you see why that can’t be true at the same time that an AI is at risk of taking over the world, unless the G in AGI stands for magic. Meaning an AI doesn’t have to grow quickly, it needs to grow exponentially more quickly than other computers, which doesn’t sound to me like a thing worth preparing for even if it is technically possible.

          • CJB says:

            “Even sub-human-quality intellect can wreak civilization-ending havoc if it makes up for it in quantity and speed.”

            Can you give me an example?

            Seriously-lets pretend I give you The Ultimate Password. Poof! You are now in all the computers on earth, and no one can detect you.

            How do you bring down CIVILIZATION?

            Not “wreck some cities” by critting nuclear reactors. Not ‘destroy the stock market’.

            Nukes, I might add, in any ICBM-equipped nation, have physical safeguards – usually keys set up a loooong ways apart, or at least multiple keys in a submarine. Those nations without ICBM capability aren’t a concern. Drones could make life unpleasant, but all we have to do is “not refuel them” and your reign of terror is brief.

            And all of this can be solved by returning to the subhuman, monstrous, degenerate levels of technology in… 1985-ish.

            So how can you, Human Level Intelligence, with unfettered access to all the computers on earth, bring down All Human Civilization?

          • Luke Somers says:

            > Why is “fast subhuman intellect” not a contradiction in terms? This is a big problem I have with hard takeoff. Suppose we have a computer which is exactly as smart as, say, John Smith. Then, claim the hard-takeoff advocates, it would still be faster. If you are exactly the same but faster, then you aren’t the same.

            The thing is, it wouldn’t be exactly as smart as John Smith. That’s a very silly comparison. In this hypothetical, you’d take an idiot savant, and go further in the same direction again, and you’re vaguely approaching the right ballpark.

            As for speed vs intellect? I couldn’t have worked out general relativity given ten times longer than Einstein did. That doesn’t mean he’s ten times faster than I am at tasks requiring less drastic insight.

            And that’s just within human range of tasks. This AI presumably can pass the pocket calculator test with flying colors, being able to multiply a 1000×1000 matrix much faster than I can remember that 3 x 21 = 63. It’s not going to get tired, or suffer from procrastination. It can copy itself or create more limited versions (including highly limited versions) without any risk of defection. No individual instance fears its own termination in the least.

            What kinds of mischief can it work? Well, with a ‘key to everything’, it has money. It can hire people for the early stages of whatever it wants. Obviously it’ll take a while to establish its independent supply chain so we can’t just ‘not refuel them’, as you say. Act helpful-seeming in the meanwhile, and play a little dumb. Also, use the magic key to subvert any other AI to your cause, even while letting it play the role of cooperating.

            Whatever it does, it would have to be at least as bad as continually releasing new versions of plagues while assassinating the doctors and medical researchers and carrying the vectors past air gaps. Presumably at some point, manipulating international communications to precipitate wars would be involved, so that at first it might not even seem to be the AI behind it – or perhaps another AI.

            The four main reasons that we don’t have too many bioweapons nowadays are not wanting to cause mass casualties as a policy, concern that the disease would turn around and kill us, inability to test on human subjects, and also more proximately that they kill the people working on them. None of these would impede our hypothetical AI except the third, and I’m sure it could find victims somewhere.

          • HeelBearCub says:

            @Luke Somers:
            (A) You said:
            “Omniscient and omnipotent is not necessary to be knowledgeable enough and powerful enough. Even sub-human-quality intellect can wreak civilization-ending havoc if it makes up for it in quantity and speed.”

            (B) And now you are saying:
            “As for speed vs intellect? I couldn’t have worked out general relativity given ten times longer than Einstein did. That doesn’t mean he’s ten times faster than I am at tasks requiring less drastic insight.”

            That second statement was, roughly, @John Schilling’s point. And the more general point was that, in argument, AI risk proponents will frequently make these kinds of errors, conveniently forgetting that a discussion was predicated on a statement of type (A) and then asserting (B) as evidence that AI risk is underestimated.

          • CJB says:

            ” Obviously it’ll take a while to establish its independent supply chain so we can’t just ‘not refuel them’, as you say.”

            To do so, it would have to be independently ICBM-equipped, with enough ICBMs to serve as a world-ending, USSR-level deterrent.

            The reason Iran has an independent state is we don’t particularly feel like burning them off the face of the earth. I doubt we’d have those sort of qualms with even the sweetest of AI’s once it starts trying to acquire uranium-238.

            Release bioweapons? How does it make them? How does it subvert the god-only-knows how many levels of PHYSICAL obstacles to release? I mean, I’m sorry, but the CDC has considered the possibility of “hackers”. I’m gonna wager 1/100 dollars that serious bioweapons are restricted with a lot more than computers.

            If the best an angry tard AI can do is “start international conflict”, that’s to-die-for hilarious. I’m expecting swarms of hellbots harvesting the eyes of dissenters, and you give me “The thing we already do incredibly effectively”?

            What’s it going to do, create a jewish state on the third most sacred holy site of Islam?

            I suppose you could release the DNA for a superplague to terrorists, but we already know the DNA of plenty of superplagues. The problem is that the sort of people willing to destroy the world hahahahahaha are not the sort of people who are good at genetic engineering. I mean – Osama bin Laden had lots of money and people too.

            In other words- an AI trying to destroy the world isn’t scary because we already have people trying to destroy the world, so we keep all the world-destroying stuff locked up pretty tight.

          • Luke Somers says:

            > To do so, it would have to be independently ICBM equipped. With enough ICBMS to serve as a world-ending USSR level deterrent.

            What the HECK are you talking about?

            I’m talking about solar panels, gasoline, copper, steel, plastic. Stuff that it needs for its own purposes. What would it need U-235 for at any point in the plan I outlined?

        • “[capital-R Rationalists] all assume without evidence, analysis, or anything resembling reason that any AI will be effectively omniscient and omnipotent in effectively zero time”

          I’ve been following LessWrong and MIRI for five years, and that doesn’t sound anything like what I’ve been hearing.

          I think you’re making the mistake of equating “X is a problem that we should be really concerned about” with “X is highly probable.” The stakes — survival of the human race — are high enough that even if you assign only a 1% probability to the superintelligent-AI-kill-us-all scenario, you should be glad that *somebody* is doing something to avoid that scenario.
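
          A back-of-the-envelope expected-value sketch of that point (the 1% is the illustrative figure from the sentence above; the population number is a rough 2015 estimate):

          p_catastrophe = 0.01        # illustrative 1% probability from the comment above
          world_population = 7.3e9    # rough 2015 world population

          expected_deaths = p_catastrophe * world_population
          print(expected_deaths)      # 73000000.0 – tens of millions of lives in expectation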

        • “You all assume … that any AI will be effectively omniscient and omnipotent in effectively zero time”

          Friendly AI — coming up with an objective function whose relentless maximization won’t lead to any horrifying gotchas — is a really, really difficult problem. Solving it is probably a multi-decade task. Suppose that nobody looked into the question of Friendly AI until human-level AI arrived or was imminent. Then it wouldn’t really matter whether going from human-level to super-intelligent AI took hours or years — we’d be screwed either way.

      • Anonymous says:

        significance as “We can make 1 GB flash drives, but not 2 GB.” I’m sure there will be a brief period when it’s true, but it doesn’t seem very interesting.

        You cannot currently buy a CPU that’s five times as fast on an arbitrary single-threaded task as the best CPU you could buy in 2005. And the rate at which more computational resources become available is slowing down, not speeding up.

        • Alphaceph says:

          I think you actually can: modern CPUs do more work per cycle, so clock speed in MHz is a misleading metric.

          Also, for many applications, especially in machine learning, you care more about FLOPS than single-threaded performance; many tasks are highly parallelizable.

          • Jiro says:

            “Many tasks can be sped up this way” is just a differently biased way of saying “Some tasks cannot be sped up this way”. Which is his point.
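
            One way to quantify the limit both comments are circling is Amdahl’s law, which bounds the speedup from adding processors by the fraction of a task that is inherently serial. A minimal sketch (neither commenter invokes the formula; the numbers are only illustrative):

            def amdahl_speedup(p, n):
                # Speedup on n processors when a fraction p of the work parallelizes
                # and the remaining (1 - p) must run serially.
                return 1.0 / ((1.0 - p) + p / n)

            print(amdahl_speedup(p=0.99, n=1024))   # ~91x: highly parallel tasks keep gaining
            print(amdahl_speedup(p=0.50, n=1024))   # ~2x: mostly serial tasks barely benefit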

          • Alphaceph says:

            If the tasks that you actually care about are parallelizable, then who cares. Anyway, this was only invoked by our host as an analogy.

        • roystgnr says:

          On arbitrary single-threaded tasks, CPUs are already millions of times faster than the human brain. Everything human brains can do that computers can’t (yet) is being done using massive parallelism to overcome kilohertz speeds.

          • John Schilling says:

            I object to the use of “massive parallelism” to describe a pathetic few tens of thousands of processors in a clumsy nodal architecture. That’s puny parallelism. Massive parallelism is hundreds of billions of processors in an elegantly distributed network.

      • Deiseach says:

        It makes a very big damn difference if we can intervene in the time it takes to step from “superintelligent but incapable of killing us all” to “superintelligent and is going to press the Big Red Button right now”.

        Now, if that change happens in (literally) an eye-blink, yes we’re probably screwed. But if we get any notification in time, then we can intervene to steer it off that path – or at the very least, pull the plug and go “Okay, let’s not do it that way again when we’re cranking up Project God-Emperor for the next run”.

        • Evan Þ says:

          Keep in mind, though, “superintelligent and able to stop us turning it off” is still a lot easier than “superintelligent and about to Kill Us All.” It’s also a lot harder to detect.

          • Loquat says:

            It still requires that the AI not make any mistakes and think of every possible angle in advance, though.

            What does it take for an AI to be able to prevent humans from turning it off? Effectively, it needs full control over both its immediate environs and over its power source – denying humans access to its off switch doesn’t mean much if humans can just cut power to the building it’s in. And as a practical matter, controlling a power station 25 miles away is really hard to do unnoticed, particularly since it’d also have to control access to all the power lines in between, so the AI’s best bet is to have a power source in-house, ideally something long-lasting that doesn’t require fuel, like solar panels. Even so, humans can still quarantine the building, cut all communications to the outside, remove all human maintenance staff – which will still be necessary for the AI’s long-term survival, unless its building somehow includes a manufacturing facility that can make sufficiently advanced robots – and even shoot out the solar panels or blow up the building if necessary.

            Now, you may object that the AI will simply copy itself out onto the internet, making any attempt to quarantine a building laughably ineffective. To which I respond – how much computing space do you think a superintelligent AI would take up? It’s not like a virus that just has to hide and do one or two things – it’s going to be large, and it’s going to be noticeable, and once humans realize what it’s doing extermination is going to become a priority. And that’s assuming the AI even can run adequately on whatever the future equivalent of desktop computers may be.

          • Ethan says:

            I feel like the problem “prevent an AI from being able to stop us turning it off” is a lot more tractable than “make sure an AI is friendly”. The former can be solved with secure hardware. The latter–well, can we even define the problem clearly enough to make it solvable? This is why I think AI risk should be addressed with computer security research, not AI research.

      • Eli says:

        Why would a superintelligent AI kill all humans? Humans can be convinced to do that all on their lonesome with nary a push.

        • Samuel Skinner says:

          Because it can use the resources we are made from to achieve its goals more efficiently.

          • Eli says:

            Yeah, but why bother with the effort of exterminating humanity yourself when you can get humanity to do it for you?

  3. Anonymous says:

    I think super-overconfident estimates are often driven by (perhaps unconsciously) socially signalling what side of a debate you’re on, or (as Scott points out) by signalling normal/socially-conforming opinions, rather than by intellectually honest reasoning. Even anonymous estimates made in surveys are likely to be influenced by such considerations I think (eg the “not enough zeroes” comment on the LW survey is a clear case of this).

    The cure for this is getting people to put their money where their mouth is, but this is impractical for vague or very long-term things, and as the hedge fund example shows, doesn’t always work even then (though that might have had a degree of OPM and/or moral hazard involved.)

    I guess the other cure is to try to make intellectual honesty the socially-conforming thing to do, as Scott may be trying to do with this post. In which case I hope it helps.

  4. justanotherlaw says:

    I can’t seem to find the 70% figure in Bostrom’s paper – I seem to only find the median/mean dates that certain events (like achieving human-level AI, etc.) will occur, as well as the other two figures you mention (though the percentage they give for “existential catastrophe due to AI” according to top 100 researchers is 8%). This doesn’t affect your argument though – according to the top 100 researchers, the median date for a 90% chance of human-level AI is 2070 and the mean date is 2168, while the corresponding dates for a 50% chance are 2050 (median) and 2072 (mean).

    • Scott Alexander says:

      I think I took the 90%, then subtracted the 16% of people who said “never” and therefore weren’t included.

      • justanotherlaw says:

        Ah, okay. I’m not sure if that’s legal, statistically speaking – I thought you derived the 70% from some sort of concentration inequality, but I couldn’t figure out how.

    • HeelBearCub says:

      That 2070 date is for “high level machine intelligence” not “AGI” if I understand that survey correctly.

      Where HLMI could be a specialized program that performs only one function, like checkout clerk, and does it only as well as a human could do it. And enough HLMIs can be made so that 50.1% of jobs can be done by some HLMI.

      Someone please correct me if I am wrong about the above.

      • justanotherlaw says:

        “Define a ‘high–level machine intelligence’ (HLMI) as one that can carry out most human professions at least as well as a typical human.” – seems like they’re talking about human-level AI to me.

        • bluto says:

          I think the idea is that humans need to specialize, while the machine could be their equal but remain a generalist.

        • HeelBearCub says:

          @Justanotherlaw:
          It doesn’t specify a single one though, does it?

          In other words, the question is easily interpreted as “When will most professions that exist today be done by a computer instead?”. That is how I first read it.

          • justanotherlaw says:

            It seems to me that they do:

            “Define a ‘high–level machine intelligence’ (HLMI) as one that can carry out most human professions at least as well as a typical human.”

          • HeelBearCub says:

            @Just another law:
            You can interpret it the other way as well. But the question as I read it doesn’t make it explicit that an HLMI should be capable of simultaneously carrying out all of the tasks “better”.

            For example, IBM’s Watson is a platform that can be set up to do lots of different things, but you don’t set it up to win on Jeopardy and still be capable of something else at the same time. There isn’t a single Watson, but many.

          • justanotherlaw says:

            I see. How would you phrase it, to eliminate this source of ambiguity?

          • HeelBearCub says:

            @justanotherlaw:
            I’m not sure, and it is a fair question.

            But those who gave the survey worded it the way they did specifically because they did not want to use the words “general intelligence” and gave an explanation that sounded suspiciously like “we predict that those surveyed won’t answer a question about General Intelligence the way we want them to”. This is perhaps uncharitable, but it points directly at the fact that the whole question of what constitutes general intelligence is still very controversial even inside the field.

        • Deiseach says:

          We have production-line robots on car-assembly lines. They are working as well or even better than human assembly line workers, which is why they’ve replaced humans.

          Do they count as having human-level AI (well, if the one that crushed a human counts, then yes, we should be afraid of our malevolent successors)?

          I can imagine a machine AI, for example, drawing up standard contracts by using the boilerplate clauses and simply customising it for the particular client (lots of clerical staff are doing this mundane, routine work already). Does that mean it’s as intelligent as a lawyer, or at least a legal secretary?

          “Able to carry out most human professions at least as well as a typical human” is not setting the bar very high for a lot of the basic grunt work that is low-level, doesn’t need much thinking, but is necessary for functioning of businesses and services.

          Scott might be very happy for a machine AI that would take patients’ details and make sure that the social welfare, insurance policy and other identifying numbers were filled in correctly and not transposed, all paperwork was completed, files were cross-referenced (so you know Mrs Jones already had a chest X-ray and doesn’t need to be sent for another one), etc. but that needn’t take human-level intelligence. Accuracy and trustworthiness would count for a lot more, and this is the kind of routine data-collecting that is vital but paradoxically doesn’t need a whole heap of smarts.

          From my own experience, being mildly OCD/compulsive about dotting all the “I”s and crossing all the “T”s means I don’t rest until I ferret out the correct information in as much detail as possible, but I could easily be replaced by a machine that was programmed to be meticulous about data entry. The quality lacking there, that would need to be engineered in, is a tolerance of ambiguity – many people fill in forms requiring accurate dates with some version of “lived there three years” or “1995-2003” when the fields require “day/month/year” – and ability to match “Hey, that Mary Jones is not the same as this Mary Jones but is probably the other Mary Jones on file here!”

          • FacelessCraven says:

            @Deiseach – “Does that mean it’s as intelligent as a lawyer, or at least a legal secretary?”

            Intelligent enough to act as a lawyer, not as intelligent as actual lawyers. Interesting disconnect between the two concepts.

          • Deiseach says:

            FacelessCraven, that’s part of the problem I foresee in judging what is actually intelligence and what is just a really well-programmed machine.

            A lot of work, even (let’s take this as our example) legal work, is plugging in customisation into standard contracts (I say this from my unassailable expertise working a whole month in a solicitor’s office).

            Programme a computer with a reasonable bunch of contract law cases and plenty of boilerplate templates and let it loose, and I don’t see any reason it couldn’t handle standard property, rental, conveyancing, drawing up wills, employment contracts, etc. work.

            A lot of the standard rental contracts used at my place of work really are “The party of the first part agrees to let [insert address of property] to the party of the second part for [insert whether this is year-to-year, three year, or five year letting] for a monthly sum of [insert whatever figures the people in the Rents office calculated on the client’s finances and the current regulations]”, thank you Mr Murphy, just sign here, here and here, then we’ll emboss it with the seal of the Council and keep the original on file.

            There’s no reason an AI couldn’t do that kind of bread-and-butter work; you might want a human to check it every so often to make sure it wasn’t signing people up to 1,000 year leases or agreeing we’ll pay the landlord €500 per week instead of per month, but it could quite easily “carry out most human professions at least as well as a typical human”, whether it’s in a legal firm or local government.

            That doesn’t mean it’s human-level intelligent or even actually intelligent, though.

          • Nornagest says:

            What you’re describing is called an expert system. Those have been around since the Seventies and have seen some success in fields like medical diagnosis, but they’re inflexible, hard to source or maintain (you essentially need to have an expert define every possible decision in the decision tree, with potentially complex inference rules), and have scaling problems.

            They’re still in use — the front end for a software installation wizard is basically an expert system, for example — but no longer receive much attention from the AI/machine learning community. There’s only so much intelligence you can give a glorified flowchart.
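
            In miniature, an expert system is just hand-written if–then rules plus a little inference. A toy sketch (the rules and figures are invented for illustration, echoing the lease example above):

            def lease_clause(term_years, monthly_rent):
                # Every rule has to be supplied by a human expert in advance;
                # anything the rules don't anticipate falls through to a person.
                if term_years not in (1, 3, 5):
                    return "refer to a human solicitor"
                if monthly_rent <= 0:
                    return "refer to a human solicitor"
                return (f"Letting for {term_years} year(s) at €{monthly_rent} "
                        f"per month, standard boilerplate applies.")

            print(lease_clause(3, 650))    # fills in the template
            print(lease_clause(99, 650))   # falls outside the hand-written rules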

          • Marc Whipple says:

            There’s only so much intelligence you can give a glorified flowchart.

            Obviously you didn’t read that Suarez book somebody recommended the other day. The Daemon is, in essence, a glorified flowchart. 🙂

          • Nornagest says:

            I didn’t, but I should amend that to “there’s only so much intelligence we can practically give a glorified flowchart”. The theoretical limit is close to unbounded, at least if you ignore petty things like the number of atoms in the universe (this gets discussed a lot in philosophy of intelligence; useful search terms are “giant lookup table” or “GLUT”), but the practical issues with the architecture are extreme outside of narrow problem domains.

          • Murphy says:

            @Deiseach It’s been done.

            They run into a number of problems:

            Laymen often don’t really know what they want and it’s hard to translate from that to what the expert system provides.

            Legal obstacles re: who and what can legally give legal advice in various jurisdictions.

            One of the problems is that such systems can infer from cases most lawyers would normally ignore to give you stuff that is technically consistent with existing cases but which would be thrown out by a real world court. A large part of law is what people consider important. One case that every lawyer is taught about can hold a lot more weight than one which technically has similar backing but which nobody has ever heard of, etc.

  5. John Sidles says:

    Discussion questions  What is the probability that usage of the word “performative” will substantially outstrip “transformative” in the coming two decades:

    • In the entire english literature? (svsgl creprag)
    • In the STEM literature? (gjragl-svir creprag)
    • In the neuroscience literature? (svsgl creprag)
    • In the psychiatric literature? (friragl-svir creprag)
    • In NIH/NSF/DARPA program solicitations? (svir creprag )
    • In Slate Star Codex essays? (avargl creprag)

    Background information  Presently the usage-rates of “performative” and “transformative” are tied on Google’s Ngram Viewer.

    Remarkably, it appears that Scott A’s essays (to date) have seldom or never used either word.

    This despite the (anecdotal) “fact” that NIH/NSF/DARPA program solicitations ardently support “transformative” research, while showing no discernible enthusiasm for “performative” research.

    The point  Performative shifts are far easier to actively catalyze, socialize, and monetize than transformative advances/calamities are to passively foresee (like the advent or not of strong AI), or philosophical questions are to definitively answer (like the existence or not of god).

    That’s why effective agents of radical change preferentially focus upon performative advances rather than transformative advances.

    In a nutshell  Scott A’s essay doesn’t sample from the most interesting class of questions, namely, questions associated with future notions and ideals of performativity.

    Hmmmm … so will future Slate Star Codex lead or lag the transformative-to-performative usage shift … assuming that (as radical feminists and economists envision) this shift is already inexorably underway?

    Acknowledgements  Michael Harris’ Mathematics Without Apologies (ch. 4) pointed me toward the burgeoning literature relating to “performativity” in (successively) linguistics, sociology, literature, radical feminist theory, and economics.

    Harris commends in particular (and I do too) the overview that Donald Mackenzie provides in “Is Economics Performative? Option Theory and the Construction of Derivatives Markets” (2006).

    • Not Robin Hanson says:

      Perspective from Computer Science: I seldom hear the word “transformative” or “performative” in relation to my field.

      * No “transformative” unless it’s part of the usual bluster on a product press release.
      * “Transform” often, but generally only when preceded by words like “linear”, “Fourier”, or “Laplace”.
      * “Performative” only when dealing with literal theatrical productions (and I only saw this because I was involved in computer graphics).
      * “Performance” very often; less commonly “performant”.

      In my experience, “performative” has slightly negative connotations of “social signaling” (i.e. theater again) and outside of literal theater “performant” seems to have the lead over it. Still, I think it’s more likely to drift into common use than “transformative” (say 3:1) since there’s greater need for that conjugation (is this the right term?), e.g. I occasionally hear that a system is “performant” in terms of latency, throughput, resource consumption, etc.

      This doesn’t really address the meaning of the words, but I’m not sure how meaningful it is to talk about meaning—if either word indeed comes into common use in Computer Science, its meaning will likely be altered or completely different from the ones in the parent post, and how much alteration should still “count”?

    • Eric says:

      > Performative shifts are far easier to actively catalyze, socialize, and monetize than transformative advances/calamities are to passively foresee

      What is a performative shift?

    • Anonymous says:

      Wow, that’s just bizarre. I’m not sure if I’ve ever even seen this word before, and I’ve seen more than my fair share. For what it’s worth, transformative has 14 million google results and performative has 3 million, even though those numbers aren’t reliable.

      For your Ngram link, people may find it interesting to adjust some of the parameters of your search. For example, if you include the years from 2001-2008, and limit to American English, the gap between the two is somewhat larger although still much smaller than I would have expected.

      If you had asked me whether performative was a real word, my estimate would have been mostly determined by the context of the question. Even now I suspect there is some bias in Google’s Ngram calculator that is overstating its frequency. Alternatively, linguistics, sociology, literature, radical feminist theory and economics must publish a great deal more than I would expect. That said, I would happily take the other side of the bet on your first proposition, as the definition of transformative lends itself to much more general usage than performative.

      • brad says:

        There’s also some “performative” in philosophy as part of the phrase “performative contradiction”, though per Ngram it is a tiny fraction of performative references.

    • Peter says:

      I think the trouble with “performative” is that in terms of actual usage the good concept:meaningless buzzword ratio is low – and even in the cases where I’ve seen a good concept, the meaning tends to vary from field to field.

      So linguistics – in particular speech act theory – has what I think of as the canonical version. Performatives are where you do something by saying you’re doing it – you can often tell by the presence of “hereby” (or where “hereby” wouldn’t be out of place). “I hereby name this ship the Queen Elizabeth II”, and all that.

      Economics – this is to do with economic theory influencing the economic world. If I say, “it’s possible to have a market in X, here’s how it would work”, then if my scheme is good enough for people to want to implement it and there wasn’t anything else around, then having a market in X is now possible when it wasn’t and my work made all the difference. There’s also anti-performativity, closely related to the anti-inductive idea mentioned on SSC; a piece of economic theory can become obsolete by being discovered. The example which I read about was the Black-Scholes equation; there’s a strategy for option pricing which assumes that prices are reasonably stable in a particular market. Once people found out about the equation they started using trading strategies that used that market, and the prices there became a lot less stable, thus invalidating the equation – IIRC one particularly nasty fluctuation was what destroyed Long Term Capital Management. I can sort of see what this has to do with the linguistics case, but it feels a bit different.

      As far as I can tell, things drift rapidly into “meaningless buzzword” territory when people start thinking too much about theatrical performances. I can see the economics one expanding into other fields; for example, predicting a revolution could help trigger a revolution, but the revolution you trigger may not be exactly like the revolution you predict. See for example Marx, Lenin, the eventual failure of the USSR, etc.

      So there may well be a salvageable concept underneath all of the nonsense if only we could sweep away all of the postmodernism etc. that has attached itself to the term.

      • John Sidles says:

        Peter opines “There may well be a salvageable concept underneath all of the [academic] nonsense if only we could sweep away all of the postmodernism etc. that has attached itself to the term [“performativity”].”

        This is an absolutely key point (as it seems to me). Articles like Kelly Oliver’s “What is transformative about the performative?” (1999) and books like Donald Mackenzie’s Do Economists Make Markets?: on the Performativity of Economics (2007) conceal pearls of human interest within shells of academic cryptolects.

        The Performativity Paradox  The academic literature of performativity is itself not notably performative. Ouch!

        Much more accessible — much more performatively transformational — would be plain-and-simple young-adult syntheses along the following lines:

        What is transformative about the performative?
            the experiences of clinical residents

        What is transformative about the performative?
            the experiences of US Marines

        What is transformative about the performative?
            faith and practice among Friends

        What is transformative about the performative?
            rational altruism in practice

        What is transformative about the performative?
            universality and naturality in mathematical practice

        What is transformative about the performative?
            universality and naturality in engineering practice

        There are plenty of opportunities here, to illuminate experiences that come to everyone, in a broader, clearer, unifying, and (especially) hopeful language.

        Prediction  In 21st century democratic and pedagogic discourse, the performative historiography of performativity will transform our experience of transformation.

        Just ask any medical resident!

        --- bibtex follows ---

        @incollection{cite-key,
          Author = {Kelly Oliver},
          Booktitle = {Continental Feminism Reader},
          Editor = {Cahill, A.J. and Hansen, J.},
          Pages = {168--191},
          Note = {reprinted from a 1999 article},
          Publisher = {Rowman \& Littlefield Publishers},
          Title = {What is transformative about the performative? From repetition to working through},
          Year = {2004}}

        @book{cite-key,
          Address = {Princeton},
          Author = {MacKenzie, Donald A and Muniesa, Fabian and Siu, Lucia},
          Publisher = {Princeton : Princeton University Press},
          Title = {Do economists make markets?: on the performativity of economics},
          Year = {2007}}

  6. E. Harding says:

    1. 1.5% .7% Too high an estimate is privileging the hypothesis. And God implies psychic powers. And psychic powers sound much more plausible than God, as they’re something testable.
    2. 2%
    3. 60%
    4. .7%
    5. 7%
    6. 45%
    Why’s Scott so confident about psychic powers? As for pandemics, there’s been nothing like this at all since the Spanish Flu. The plague was spread by insects living on people.
    How about this one: will Iran ever get a nuclear weapon? I give it 5%.

    • Tanadrin says:

      Insofar as psychic powers imply something natural (albeit outside the laws of nature as they are currently known), and not miraculous/supernatural/divine, it’s easy to imagine a universe where God exists, but psi doesn’t. I don’t think Scott’s “confident” about psychic powers; part of the whole point seems to be that probabilities we think of as far too high for very unlikely predictions are actually pretty small.

      For modern pandemics, how about AIDS? It’s killed around the same order of magnitude of people as the Spanish Flu, and nobody saw it coming.

      • E. Harding says:

        I don’t get your explanation of psychic powers and God.

        AIDS is primarily an STD (and, in the early days, was spread through the blood supply). That doesn’t have much of a potential to kill a billion people in five years. 36+ million people in thirty years is hardly equivalent to 50+ million in three years in a world with less than a third the population.
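
        In rough annualized per-capita terms (a sketch using round figures for the death tolls and world populations, not precise data):

        spanish_flu_rate = 50e6 / 3 / 1.8e9    # ~50M deaths over ~3 years, ~1.8B people alive
        aids_rate = 36e6 / 30 / 6.0e9          # ~36M deaths over ~30 years, ~6B average population

        print(round(spanish_flu_rate, 5))      # 0.00926 – roughly 0.9% of humanity per year
        print(round(aids_rate, 5))             # 0.0002 – roughly 0.02% of humanity per year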

    • David Pinto says:

      I agree with Harding that God and psychic powers go together. I had 1% God, 0.1% psychic. I don’t believe in anything supernatural, and now that I think about it the two should be reversed, as there may be a biological explanation for psychic powers someday that would not make them supernatural.

      I also have 70% on the pandemic. It doesn’t matter how much research is done to combat pandemics. Like modern famines, a pandemic will be politically driven.

    • Mary says:

      ” And psychic powers sound much more plausible than God, as they’re something testable.”

      Plausibility and testability are orthogonal concepts. It’s easy to imagine things that are one but not the other.

    • Roxolan says:

      BECAUSE psychic powers are easily testable, and as a result have been heavily studied, we can be pretty confident they don’t exist. We can have no such certainty for a god*. There are lots of possible gods whose existence is compatible with the observable universe.

      (*”God” being a hard-to-define cluster in thing-space whose likely characteristics include mind-boggling power and knowledge, involvement in the existence of the universe and/or the human race, and a non-evolutionary origin. You don’t need all characteristics as long as you have enough of them.)

      • John Schilling says:

        BECAUSE psychic powers are easily testable, and as a result have been heavily studied, we can be pretty confident they don’t exist

        Really? I’ll give you “heavily studied”. But we’ve been through this here before. The studies and the meta-analyses of the studies keep coming back with, OK, it looks like weak psychic powers exist. And then the critics correctly point out that what is really needed is multiple replication, with randomized controlled samples and large sample size and all the usual biases controlled for, etc, etc, and with reputable scientists rather than Venkmanesque con artists running the show, and this is done, and the effects still come back saying that, yep, weak psychic powers seem to exist.

        I’ve given my reasons upthread for believing that this probably comes down to experimental error. But it’s pretty clear that the necessary testing isn’t at all “easy”.

        • Protagoras says:

          I wish I could find the link, but I recall an enormous meta-study which found, as usual, evidence of some kind of weak effect, but also looked at how careful the methodology was to exclude possible fraud or bias, and found a strong correlation between a study being better run and finding smaller effects. IIRC, the correlation was strong enough to suggest that if ideal studies followed the trend, they would be likely to find nothing, though there aren’t enough studies close enough to ideal to know for sure. I think you overstate the quality of the studies that have actually been done (though as Scott has noted before, the quality of psychic studies has sometimes been as good as or better than quite respectable studies in other fields; probably tells us more about how careful we should be about trusting studies in general than about the likelihood of psychic powers, of course).

          • John Schilling says:

            Absolutely agreed, particularly with the takeaway that the real lesson is to never put too much faith in studies – even lots and lots of very good studies to the same result. I doubt that I would ever put my confidence above 95% or below 5% on the sole basis that lots of scientific studies have proven/disproven a thing.

            Of course, the other generally solid benchmark is, “has anyone managed to turn it into a profitable industry?”, and damn, it’s getting hard to utterly dismiss psychic powers 🙂

  7. Steve Johnson says:

    If you were to pick a number from the set of all real numbers, what is the probability that that number is 3?

    That probability is missing from the linked calculator.

    The linked calculator also neglects any possibility that researchers into friendly AI increase the chance of disaster.

    • Charlie says:

      It turns out that if one picks from the set of all real numbers, one picks 3 one thirtysecond of the time.

      Up to a multiplicative constant that depends on one’s picking process.

    • Andrew Hunter says:

      I’m not sure if you’re trying to make some point about very low probabilities, but the actual answer to your first question is “you can’t have a uniform distribution on all reals.” There are infinitely many non-uniform distributions on the real numbers (Cauchy, normal, etc, etc), so your question is underinformed…but all of the (continuous) ones have zero measure on any point set, so the probability is zero anyway. Densities around 3 might not be zero, but that just says things about intervals.

      Long and short of it: “either zero, or undefined, and in either case you’re asking the wrong question.”

      • Steve Johnson says:

        You are correct.

        Uniform distribution across all reals from 2-4 works for my example.

        The point is to arrive at a 0 probability.

        • Izaak Weiss says:

          There still isn’t a uniform distribution over the reals from 2 to 4.

          • Eli says:

            Yes there is. It’s a uniform density over an arbitrary real interval.

            But the total probability of landing on 3 is still zero, and the intuition is even perfectly clear once you conceive of real numbers as infinite digit strings and think of how to generate such things by flipping coins.
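
            A minimal Monte Carlo sketch of that intuition (the tolerance scheme is only illustrative): draw uniformly from [2, 4] and count how often the draw agrees with 3 to a given number of digits. Each extra digit cuts the hit rate by ten, and hitting 3 exactly means matching every digit, so the limiting probability is zero.

            import random

            def prob_near_three(decimal_places, trials=100_000):
                # Fraction of uniform draws from [2, 4] that land within half a unit
                # in the last requested decimal place of 3.
                tolerance = 10 ** (-decimal_places) / 2
                hits = sum(abs(random.uniform(2, 4) - 3) < tolerance
                           for _ in range(trials))
                return hits / trials

            for places in range(4):
                print(places, prob_near_three(places))
            # Roughly 0.5, 0.05, 0.005, 0.0005, ...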

    • Adam Casey says:

      I met a mathematician at Cambridge who claims the real numbers do not exist for exactly this reason.

      • PSJ says:

        I’ve always taken it as evidence that the universe is discrete. Is there any reason my logic is failing me here?

      • Jon Gunnarsson says:

        What would it even mean for the real numbers to exist? Obviously they exist as an abstract idea, but beyond that? I would go so far as to say that numbers in general only exist as abstract ideas, not in the natural world.

        • brad says:

          That’s a very old debate. The Stanford Encyclopedia has a good overview of the various positions: http://plato.stanford.edu/entries/platonism/#4.1

          I wonder if the position that the Cambridge mathematician is taking is that, e.g., integers in some sense exist in a way that reals don’t. If so, that sounds like it would be a pretty fascinating argument.

          • HeelBearCub says:

            That is interesting to contemplate.

            The first objection that comes to mind is to note the argument that there aren’t any individual objects, only an energy field of varying amplitude and frequency. Everything else is sort of useful abstraction from that single, salient detail.

            I think the implication is that there isn’t anything discrete, so maybe real numbers are a lot closer to “real” than integers.

            Not sure I’d really want to support that position. The “all numbers are abstractions” position seems the most reasonable.

          • FullMeta_Rationalist says:

            I like to think of numbers as cartoons.

          • Kyle Strand says:

            I’m guessing no one will see this, but…

            That’s actually not an unheard-of position; famously, Kronecker said, “God made the integers; all else is the work of man.”

            Personally, I believe in the computable numbers, as defined by Turing in his paper on the Entscheidungsproblem (the one in which he invented Turing machines). They are equivalent to the numbers definable in Church’s lambda calculus (and, if I recall correctly, to the primitive-recursively defined numbers). (Disclaimer: my background is in CS as well as math, so I might be a bit biased.) Essentially, Turing’s definition amounts to “anything that can be defined via a finite algorithm and computed to arbitrary precision.” There are only as many such numbers as there are integers (i.e. they are countable), because every algorithm can be described as a (very long) number.

            But the reals are uncountable–there are so many more “real numbers” than calculable numbers it’s difficult to fathom. So what does it mean for a number to “exist” if you can’t actually define it in such a way that you can compute it?

            (There are also a number of paradoxes associated with the continuum–for instance, the Banach-Tarski paradox, which states that a sphere can be divided into several pieces, which can then be rearranged into a sphere of a different size.)

          • HeelBearCub says:

            @Kyle Strand: “So what does it mean for a number to “exist” if you can’t actually define it in such a way that you can compute it?”

            What do you mean by “finite”? The integers aren’t finite. You can’t write an algorithm that will enumerate all the integers, so I’m a little confused by the definition.

            Isn’t it trivially easy to write an algorithm that can enumerate real numbers to some arbitrary precision? Iterate over all integers, adding/subtracting 0.1 from the resultant sum.

            And can’t you then do that same process for every precision, again as defined by the integers? That gets you every real using only integers.

            So it seems like the word finite is doing some work here, but I’m not exactly sure what work.

          • Nita says:

            @ HeelBearCub

            The “finite” condition applies to an algorithm for computing a single number to the given precision. The algorithm itself must be of finite length, and it must complete its task in finite time.

          • Kyle Strand says:

            @HeelBearCub

            That’s a good guess; the “finite/non-finite” distinction is often where a lot of subtleties occur in this sort of mathematics. And @Nita did a decent job of answering your direct question about what “finite” actually means here. But I think your confusion stems from something else.

            If you don’t mind, I’d like to post your question on math.stackexchange and answer it there; I’m finding this comment box an extremely annoying format for explaining these concepts.

          • HeelBearCub says:

            @Kyle Strand:
            I don’t mind if you do that, and I would be happy to learn!

          • Kyle Strand says:

            @HeelBearCub

            Here you go! http://math.stackexchange.com/q/1409466/52057

            At the end I link to the Wiki article on Cantor’s diagonalization proof; I haven’t read the article, though, so I’m not sure if that’s the best source to learn about it.

          • HeelBearCub says:

            @Kyle Strand:
            Thanks for the reply over there. I can’t comment on that thread because I don’t have 50 reputation.

            I don’t think being able to write the algorithm is doing any work. (See below). Perhaps the work is being done by the fact that you can, for any arbitrary number in the set, say how many executions it will take to get to that specific number?

            The example below is trivial, and never actually gets to counting any real numbers, but I wonder/think I could re-write it so that you could know how long it would take to get to any specific number. They just wouldn’t be in order. Do they have to be in order?


            t = 10
            while True:
                t = t / 10
                i = 0
                while True:
                    print i
                    if (i <= 0):
                        i = -i + t
                    else:
                        i = -i

          • Kyle Strand says:

            @HeelBearCub

            Hm, we may be talking past each other; I’m not sure what you mean by “doing work.”

            I’m also not sure what your example code is supposed to be doing. Certainly it’s missing some kind of `break` statement, since it gets stuck in the inner loop without ever jumping to the next division by 10. In any case, it looks like you’re trying to implement your original “print all the decimals” algorithm, but I’m not sure what you’re trying to show. Again, note that your algorithm will never print even all of the rational numbers (a subset of the computable numbers, and a very very very small subset of the real numbers); for instance, as I mentioned in my Q&A, it will never print the number 1/3.

            And in fact there is no algorithm that can print all of the computable numbers. This is part of Turing’s proof, though I did not mention it in my Q&A. Turing is describing multiple algorithms: one algorithm per number.

            As for knowing “how long it takes” for some number-enumerating algorithm to get to a specific number, that’s not really part of anything that I’ve mentioned so far. Though, if you’re curious, your algorithm for printing increasingly-long decimals permits a fairly simple calculation of “how long” it takes to get to a specific decimal: simply count the number of digits (including 1’s) in the target number, subtract one, multiply by 18 (since every iteration through the inner loop should print 19 new numbers, assuming you skip 0 which will generally be equivalent to a previously-printed number), and multiply the absolute value of the last digit by 2. (There are a couple of off-by-one errors in this description, but you get the gist.)

            In fact, this should show why your algorithm fails to print all the real numbers: real numbers (such as 1/3 and pi) may have decimal-expansions that never end, so they have an infinite number of digits. Plug infinity into the formula above, and you see that the algorithm will never print that number.

          • HeelBearCub says:

            @Kyle Strand:
            I’m fully aware my code isn’t any “good” and won’t get around to printing anything but integers, I was merely trying to show that writing the algorithm doesn’t in itself prove anything. You seemed to be pointing at the idea that being able to write the algorithm meant something in terms of defining “computable” numbers.

            One could easily write an algorithm that enumerates each positive integer, and then all the reals of that integer’s precision or less, which would continue on infinitely iterating through all the reals but never getting to them all. It would obviously not get to them all “more slowly” than counting integers, but does that mean anything?

            I think you are saying every fraction is computable, as well as every integer, but somehow the set of integers plus the set of fractions is equal to the set of integers, which doesn’t seem to make sense to me, although set theory was long ago for me.

            So, at the end of it all, I’m still not really sure what computable means, but maybe I need to go back 20+ years and take set theory again.

          • Nita says:

            @ HeelBearCub

            somehow the set of integers plus the set of fractions is equal to the set of integers, which doesn’t seem to make sense to me

            They are not equal, of course, but they do have the same cardinality. (“Number of elements” in the intro is a misleading simplification in this case, so scroll down to “Definition 1”.) Yes, it’s weird. Infinity tends to be weird.
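
            If it helps, the standard zig-zag enumeration is short enough to sketch (a generic illustration, not tied to the linked article): every positive fraction shows up after finitely many steps, which is exactly what “same cardinality as the integers” amounts to, and the duplicates like 2/4 = 1/2 only thin the list out.

            from fractions import Fraction

            def positive_rationals():
                # Walk the diagonals p + q = 2, 3, 4, ..., skipping duplicates such as 2/4 = 1/2.
                seen = set()
                total = 2
                while True:
                    for p in range(1, total):
                        f = Fraction(p, total - p)
                        if f not in seen:
                            seen.add(f)
                            yield f
                    total += 1

            gen = positive_rationals()
            print([next(gen) for _ in range(6)])  # the values 1, 1/2, 2, 1/3, 3, 1/4 (as Fraction objects)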

          • HeelBearCub says:

            @Nita:
            Yeah, like I said, set-theory. I had cardinality down, once upon a time.

            I guess I will just accept that as a fact and move on; it’s just weird given that there are as many fractions of the form 1/x as there are integers. I seem to dimly recall something about how many of them are equivalent (1/2 = 2/4 = 3/6, etc.) playing some role in establishing the cardinality of fractions.

            Too much random stuff in my brain, rolling around unused, gathering moss.

          • Kyle Strand says:

            @HeelBearCub

            I’m honestly still not clear on what your purpose was in writing a buggy algorithm. If you’re just trying to say that the Python script in my Q&A doesn’t demonstrate anything about computable numbers, you’re correct; I was merely refuting your argument that “you can’t write an algorithm that will enumerate all the integers” by writing such an algorithm.

            One could easily write an algorithm that enumerates each positive integer, and then all the reals of that integer’s precision or less…

            No no no no no no no no no no no no no. You CANNOT write such an algorithm, because there is no such enumeration. This is the point of Cantor’s diagonalization proof. This is not just a matter of “how long” such an algorithm would take; any enumeration (created algorithmically or otherwise) simply cannot contain all real numbers, or even all numbers within a certain range (say, between 0 and 1).
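
            For whatever it’s worth, the diagonal construction itself is short enough to sketch (a toy version over a finite prefix; the real argument runs over the whole infinite list at once): change the k-th digit of the k-th listed number and you get a number in [0, 1) that differs from every entry on the list.

            def diagonal_counterexample(listed):
                # 'listed' holds digit strings (after the decimal point) for the first entries
                # of some proposed enumeration of the reals in [0, 1). Build a number whose
                # k-th digit differs from the k-th digit of entry k, so it equals no entry.
                digits = []
                for k, entry in enumerate(listed):
                    kth = int(entry[k]) if k < len(entry) else 0
                    digits.append("5" if kth != 5 else "4")  # avoid 0 and 9 to dodge 0.0999... = 0.1
                return "0." + "".join(digits)

            print(diagonal_counterexample(["1000", "2100", "3210", "4321"]))  # 0.5555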

          • brad says:

            Maybe this stack exchange question about examples of non-computable real numbers will help you build an intuition.

            https://math.stackexchange.com/questions/462790/are-there-any-examples-of-non-computable-real-numbers

            Re: God created the integers
            I seem to remember that the Pythagoreans had a similar view regarding the rationals and so tried to suppress the existence of rad 2.

          • HeelBearCub says:

            @Kyle Strand:
            Well, now we are at the core of my misunderstanding.

            I could write the algorithm I have in mind. But I think you can look at the sample output and see what it would be. I think the answer is that it won’t represent 0.1 repeating, or any irrational number, simply because they go on forever.

            But the string of 1s repeating forever (with no decimal point) is also an integer. Or the integer represented by every digit of Pi. Unless of course we’re saying it definitionally isn’t. Maybe that’s the rub.

            Output would look something like:
            0
            0.1
            0.2

            0.9
            1
            1.1
            ..
            1.9
            0.11
            0.12

            0.21
            0.22

            0.99
            1.11

            1.99
            2
            2.1
            2.2

            2.9
            2.11
            2.12
            ….
            2.21

            2.99
            0.111
            ….

            (I’m obviously skipping the negatives, for reasons of brevity.)

          • HeelBearCub says:

            @brad:
            That is helpful for forming some basic understanding of computable vs. non-computable.

            In particular, that BB function example (assuming it is accurate).

          • Kyle Strand says:

            @HeelBearCub

            But the string of 1s repeating forever (with no decimal point) is also an integer. Or the integer represented by every digit of Pi. Unless of course we’re saying it definitionally isn’t. Maybe that’s the rub.

            Bingo! Think about it: how are the integers defined? First, we define the whole numbers: start with 1, then add 1, then add 1, then add 1….. each of the numbers created this way is a whole number; i.e., the set of whole numbers is defined as all the things we can create via this iterative “add one” process. The integers are these numbers, together with all the negations of these numbers and 0.

            Now, will this process of creating the natural numbers ever create “the string of 1s repeating forever”? No, that doesn’t make any sense; what would be the number immediately preceding it, to which we added 1? What would we get when we add 1 to that number?

            So integers are not “arbitrary, possibly-infinite, strings of digits with no decimal point.” They are very specifically “things you can get by repeatedly adding or subtracting ‘1’ to or from other integers, and ‘1’ itself” (this definition is of course equivalent to the “negations of” definition above).

            But the crazy thing is that the “real” numbers really can be defined as “arbitrary, possibly-infinite, strings of digits” — as long as the “infinite” part of the real number comes after a decimal point.

          • Kyle Strand says:

            @brad

            Indeed Pythagoras did discover that rad 2 isn’t rational, and tried to suppress that fact–by murdering people, if popular accounts are to be believed!

            I’ve always thought that Turing’s model is made all the more fascinating by Kronecker’s statement; Turing states (without explicit proof) that his “computable numbers” include all the algebraic numbers, pi, e, and essentially every other meaningful number arising from mathematics up until that point. And, moreover, he provides a surjection from the (perhaps aptly named!) natural numbers to this vast but countable set of meaningful numbers. So if every “meaningful” number in the reals corresponds to a whole number, perhaps Kronecker was not so crazy after all. There remains, of course, the issue of whether numbers defined via meta-analysis of Turing’s/Cantor’s/Church’s argument, such as Chaitin’s constant, are really “meaningful” or not.

          • Doctor Mist says:

            HeelBearCub-

            I could write the algorithm I have in mind.

            This is so fun. I know diagonalization a la Godel and Turing backwards and forwards, and I still had to think for a minute to figure out what was wrong with your algorithm. Thanks! (Really, non-ironic thanks!)

            The key difference between your algorithm for reals and Kyle’s for integers is this:

            We can show that Kyle’s loop does enumerate all integers, because if you pick any integer, you can see that it eventually prints it. Even though the loop never finishes printing all of the integers, any specified integer eventually shows up in a finite amount of time: e.g. a positive integer N is the 2Nth value printed.

            Your loop does not have that property with respect to reals. It does have that property with respect to real numbers that have terminating decimal expansions, but that’s not all of the reals. (In fact, it isn’t even very many of the reals — even the set of reals you are missing is still not countable.)
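
            For reference, a minimal version of that kind of integer enumeration (a sketch, not necessarily the exact script from the Q&A): it runs forever, but any particular integer is yielded at a predictable finite step, which is the property no enumeration of the reals can have.

            def all_integers():
                # Yield 0, 1, -1, 2, -2, 3, -3, ... so a positive N appears at roughly the 2N-th step.
                yield 0
                n = 1
                while True:
                    yield n
                    yield -n
                    n += 1

            gen = all_integers()
            print([next(gen) for _ in range(9)])  # [0, 1, -1, 2, -2, 3, -3, 4, -4]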

  8. Professor Frink says:

    How does FHI estimate safety numbers? I would think that almost everyone working in machine learning is doing something at least somewhat related to safety. Most of the safety issues are things that are annoying for current day machines, but world ending for super powerful ones. So probably most of the field could re-describe what they are doing as safety research (and they will when big safety grants come along!)

  9. ams says:

    1. “Doesn’t this Pascal’s-mug us to do all sorts of crazy things?”

    If your decision procedure/optimization procedure is getting hung up on hypothetical singularities out there in configuration space with zero support and infinite weight (evil gods, etc), then I could make an argument that it is a *bad decision procedure*. Our (well, my own untrained) brains may not extrapolate very far outside the range of things they have experience/evidence of. They might reason naively using analogy, and respond with laughter/absurdity at results that shoot off to infinity. This may be illogical, but it is useful illogic, because it keeps us from getting paralyzed by tiptoeing around things that don’t really show up in our environment. It might be deductively wrong, but it’s inductively useful behavior.

    4. “It doesn’t matter what probability I assign to AI because MIRI can’t do anything about it.”

    This is pretty close to my true objection. My objection is that the people who are going to have 1) the ability to actually do something about AI, and 2) any clue about the true capabilities of their AI, are going to be the people actually creating the AI and solving the engineering problems.

    5. “Why worry about AI when there are so many more important x-risks like biosphere-killing asteroids?” As per the geologic record, biosphere-killing asteroids hit about once every 100 million years. That means, unlike AI, there really is only a 1 in a million risk of one hitting next century.

    Here’s why I find this line of reasoning (with my admittedly fuzzy understanding of Bayesian reasoning) highly suspicious: Here we assign an actual concrete probability to asteroid strikes because we have actual information about asteroids.

    We can’t assign a solid probability to evil super-intelligent AIs, because we haven’t had any evil-superintelligent AIs attack yet. (Or rather, we can freely choose any prior probability that we want, because we *don’t know*.)

    But we also don’t know (have any experience) about Lovecraftian elder things, or hostile aliens, or evil gods that don’t have to follow our laws of physics altering our world from out of abstract space, or kaiju attacks, or any number of some arbitrarily large set of arbitrarily scary things.

    If we have to take things we have no experience with seriously, how do we keep from eventually running around dealing *only* with some infinite set of things that have never happened, and completely ignoring anything we do have knowledge of, because it has some upper bound on badness/probability?

    (Really, I think I’m just objecting to 1. again).

    • ams says:

      (Aaah, my previous comment disappeared!)

      I think the argument against this is that there are way more ways for there not to be Hell than there are for there to be Hell. If you take a bunch of atoms and shake them up, they usually end up as not-Hell, in much the same way as the creationists’ fabled tornado-going-through-a-junkyard usually ends up as not-a-Boeing-747. For there to be Hell you have to have some kind of mechanism for judging good vs. evil – which is a small part of the space of all mechanisms, let alone the space of all things – some mechanism for diverting the souls of the evil to a specific place, which same, some mechanism for punishing them – again same – et cetera. Most universes won’t have Hell unless you go through a lot of work to put one there. Therefore, Hell existing is only a very tiny part of the target. Making this argument correctly would require an in-depth explanation of formalizations of Occam’s Razor, which is outside the scope of this essay but which you can find on the LW Sequences.

      I have a question about probabilistic reasoning of this sort:

      If you have a state of total inexperience about frequencies of these events, then why is there any measure system or coordinate system that is more natural than any other measure/coordinate system over which to “evenly” distribute prior probability?

      Why, in the discrete case, do we assign “even” probability to each discrete state? Why, in a continuous case, do we use whichever coordinate system first springs to mind (x with line element dx, instead of y=f(x) with line element dy = df/dx*dx)? The problem is particularly apparent in the continuous case, but I think it still exists for the discrete case. Sans deeper knowledge, why does each discrete case presented have equal weight? How does this weight compare to cases not inside the set you are considering, but inside the space of possibility?
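
      A tiny illustration of the coordinate-system problem (toy numbers, no deeper formalism intended): a “uniform” prior on x over [0, 1] stops being uniform the moment you relabel the same outcomes as y = x^2, so “spread probability evenly” depends on which variable you happened to parameterize with.

      import random

      xs = [random.random() for _ in range(100_000)]  # "uninformative" prior: x uniform on [0, 1]
      ys = [x * x for x in xs]                        # the same outcomes, relabelled as y = x**2

      # Probability mass below 0.25 under each description of the same ignorance:
      print(sum(x < 0.25 for x in xs) / len(xs))  # about 0.25
      print(sum(y < 0.25 for y in ys) / len(ys))  # about 0.50, since y < 0.25 iff x < 0.5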

      • ams says:

        I think in classical statistical mechanics there is a natural measure (the Liouville measure, or something like it) which weights regions of classical phase space based on how they evolve. (Volumes are preserved.)

        I think this would take care of cases like a crystal, where after some event blew through that disordered a cloud of atoms, you would still expect to find regions of highly ordered atoms when things cool off. (Energy minima, etc.) (Assuming there is a mechanism for things to dissipate energy and you’re not dealing with dark matter or something – there is all sorts of hair, it seems, in our assumptions about what is “natural” about how things behave “randomly”!)

        Okay, for that matter, speaking of 747s, life, etc: You have an ensemble of a billion copies of primordial Earth, and you give them each four billion years to percolate: On how many of these would you not expect to find life? What seems highly improbable in the context of an arbitrary environment might be extremely probable in the context of an environment that tends to generate it. (Okay if they’re all *exactly* the same, quantum state and all, you would expect them to have *exactly* the same history, so sampled from within some variational radius about “Earth-like”)

        747s or the functional equivalent within an ensemble of Earths containing 1st world human societies. 😛

        • Deiseach says:

          747s or the functional equivalent within an ensemble of Earths containing 1st world human societies.

          But the “1st world human societies” are the equivalents of the Intelligent Designer/Creator that the “tornado through a junkyard” argument postulates; the point of the example is that you would not expect, no matter how many tornadoes blew through how many junkyards on a billion copies of Earth, that a functional 747 would ‘evolve’ out of the random assemblages of material crashed together, so why do we accept that life has some magic ability to be self-organising in a functional way?

          If matter is all matter, and there’s nothing like a soul, or higher forces, then what is the reason organic matter is more coherent than inorganic matter, given that the same chemical reactions governed by the same laws of physics apply to both states?

          (I’m not saying I accept the Intelligent Design argument, just that this is the strong version of it).

  10. maurile says:

    I think it’s weird to assign a much higher probability to Discussion Question #1 than to #2. What kind of being worthy of godhood would lack psychic powers?

    • ento says:

      At least for me, most of Godspace is taken up by Gods who chill around being unobservable (but totes there), given that miracles, etc. are conspicuously absent these days.

    • birdboy2000 says:

      I read “psychic powers” as meaning “psychic powers in humans” not “psychic powers in gods”. Think it’s implicit, but YMMV.

  11. Pete Houser says:

    It seems that the arrival of superintelligent and hostile AI would be evolutionary rather than instantaneous. Everything else in computer science evolves; why should this be different? And if it is evolutionary, then someone will notice when the whole power grid locks up and do something about it before all of the nuclear bombs are detonated. Of course the superintelligent and hostile AI might be clever enough to hide itself, but that is my point – bad AI rev 1.0 would miss some key part of the strategy. Then humans would adapt and never do that again.

    Of course there are some political reasons why some groups might allow the bad AI to thrive longer than might be preferred, but I still suspect that the bad AI would have flaws that could be exploited to shut it down. It will learn by experience, and will not have enough time to gather experience before it is shut down.

    So I think the probability is low because it is composed of a series of sequential mistakes, each of which has individually low probability, and which must all occur in order to arrive at the bad end.

    • Scott Alexander says:

      For an answer to this objection, do a search for “hard takeoff”, or read Bostrom’s book.

  12. the court jester says:

    What’s the probability an ancient Greek could influence nuclear weapons?

  13. stargirl says:

    I am not maximally sure this works out in practice. Your probabilities do need to sum to 1. And there are intelligent defenders of an extremely large variety of mutually exclusive ideas.

    The usual solution is to construct a reference class. But as you point out this is very difficult.

    ====

    On a different note:

    I wish there was an easy way for everyone to answer in the same comment thread without taking up a ton of space :(.

    My answers:

    1 – 10% – too many people believe
    2 – .01% – this is my probability for humans having psychic powers, not aliens or gods (if they exist).
    3 – 25%
    4 – 5%
    5 – 60%
    6 – 15%

    • Mary says:

      “Your probabilities do need to sum to 1. ”

      Only if they are mutually exclusive. The individual odds that Maggie and Milly and Molly and May will go down to the beach to play one day can be much higher than 100%, summed.
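
      A toy version of the point, with made-up numbers: if each of the four children independently goes to the beach with probability 0.7, the four individual probabilities sum to 2.8 and nothing is wrong, because the events overlap. What has to sum to 1 is a set of mutually exclusive, exhaustive outcomes.

      from itertools import product

      p = 0.7  # made-up chance that each child goes to the beach, independently
      print(4 * p)  # 2.8, the sum of the four individual (overlapping) probabilities

      # The mutually exclusive outcomes are the 16 combinations of who goes and who stays,
      # and those probabilities do sum to 1.
      total = sum(
          (p if g1 else 1 - p) * (p if g2 else 1 - p) * (p if g3 else 1 - p) * (p if g4 else 1 - p)
          for g1, g2, g3, g4 in product([True, False], repeat=4)
      )
      print(round(total, 10))  # 1.0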

  14. HeelBearCub says:

    What else can we prove is worthy of more funding using humans’ well-known overconfidence in making negative predictions?

    I feel like anything I put here will seem snarky, which is not my intent. I do think this is an example of “proves too much” as it pretty much says, “anything that is theoretically possible, but seems improbable, is almost certainly underestimated as to how probable it is”.

    Some seemingly snarky examples:
    What are the odds an alien intelligence is NOT studying us from the depths of the Marianas Trench or some other deep ocean location?

    What are the odds that intelligent (but not super-intelligent) life is NOT living in the depths of Jupiter’s atmosphere?

    What are the odds that the Loch Ness monster is NOT an ancient dinosaur trapped in the lake?

    What are the odds that carbon dating is NOT radically flawed and the earth is really only 6000 years old?

    What are the odds that moon landing was NOT faked?

    I mean, these all sound ridiculous, right? But apparently I am overconfident in them all being untrue?

    • Elephant says:

      Agreed. Here’s one that I can’t figure out why the AI-is-going-to-kill-us-all crowd doesn’t feel far more worried about, in comparison:

      What’s the likelihood that someone will develop a pathogen (a virus, for example) that will kill everyone?

      I would guess the odds of this to be *far* greater than that of genocidal AI. We actually have genetic engineering tools that are already quite powerful and becoming more so at a fast rate (CRISPR, gene drives, etc.). It’s not hard to imagine scenarios in which species-extinguishing tools are intentionally created (against mosquitos, perhaps), and even with one-in-a-million odds that some misanthropes would then apply this to people, I’d still find it more likely than evil operating systems. Even I could come up with some plausible schemes for wiping out humanity, given what we know about viral latency, pathogen transmission, etc., while in contrast no one has a concrete scheme (yet) for artificial super-intelligence.

      • Buck says:

        Many people in the “AI will kill us all” crowd are actually very concerned by pandemics: most rate it as the #1 or #2 most important x-risk to think about. The Open Philanthropy Project ranks biosecurity above AI safety at the moment.

        • Elephant says:

          Thanks for the link. However: If people believe in the ranking you posted, why is so much of the conversation on this blog & similar ones so focused on AI? I will admit I don’t really care about the topic of existential risks, and there may well be “rationalist” blogs that obsess over pandemics that I’m unaware of.

          Also: in that ranking, pandemics don’t win out over AI by much of a margin, and AI also wins out over e.g. nuclear war — something for which the technology *actually exists.* It’s hard not to conclude, for me at least, that assessments of relative risk are dominated by perceptions of what’s “cool” to think about. Nuclear war is so passé!

          • HeelBearCub says:

            @Elephant:
            The steel-man of the AI risk position is that it is merely underestimated (and therefore has too few resources devoted to it), not that it is more likely than nuclear (or really most other) x-risks.

            I would find that argument more attractive if people would stop saying that even very, very tiny x-risks of AI justify it getting more resources. If people stopped making the case in terms of “even 1 in a million” I would start taking the “1 in 100” case more seriously. As it is, the argument seems to be, as I said, proving too much.

          • endoself says:

            In that list, nuclear war is ranked low on the grounds that it’s hard to do anything about at the margin (i.e. people are already doing most of what could be done), not on the grounds that it is unlikely. If you look to the right, there are separate columns for relative risk level and for opportunities to reduce risk.

          • Elephant says:

            @endoself: the stated opportunities to reduce the risk of nuclear war don’t seem low. From the document: “Policy development in countries other than the U.S. has little funding but would be very difficult. Some experts say policy advocacy and public engagement in the U.S. are underfunded.” The first (“… would be very difficult”) sounds like a good opportunity to make a difference at the margin; the second clearly suggests there’s room for improvement.

          • vV_Vv says:

            Would it be that difficult to create an effective international movement for nuclear disarmament, especially if you could get high-status deep-pocketed people like Bill Gates and Elon Musk on board?

            So why are all these people worrying about AI instead?

          • Luke Somers says:

            @HeelBearCub: that’s not a steelman. That’s a good representation of an actual typical position within the movement.

          • HeelBearCub says:

            @Luke Somers:

            Again, I find the steel man version more persuasive.

            AI risk proponents also make explicit arguments that even very, very, very, very small risks still justify their position, which isn’t an argument that says “you are underestimating the risk” but rather “even your underestimated risk is still high enough to justify spending a great deal more money”. They are tacitly admitting that the case for underestimated risk is weak.

          • endoself says:

            @vV_Vv

            “International” means Russia and China. They don’t want people who are high-status in the US telling them what to do. (This isn’t to say having Gates and Musk on board wouldn’t help.)

          • Luke Somers says:

            That doesn’t make it a steel-man. It makes the other guys a weak-man.

          • HeelBearCub says:

            @Luke Somers:
            You are essentially nit-picking here. I take the argument and improve it, in my estimation, by removing what seems like an apparent flaw (one that gives the argument the feel of an attempted Pascal’s Mugging), leaving behind what feels like a case that could actually be built.

            If you don’t want to call that steel-manning, and merely wish to grant that I have accurately represented their argument, I will take that. Complaining that I have accurately represented their views, and then characterized it as the best argument to be made, seems an odd choice of complaint.

          • Stuart Armstrong says:

            Nuclear war is still one of the big four x-risks around (pandemics round off the list). The main reason AI is the focus is that it’s a true x-risk. The others can easily cause great destruction, but, as far as we can currently tell, are unlikely to lead to civilization collapse or extinction. Whereas if we had a large AI disaster, it would likely be terminal.

          • Luke Somers says:

            Heel, I didn’t complain except to say that you’re making it seem like the typical position is stupider than that. Also, nowhere did I say it was the best possible argument.

            Being this solidly misinterpreted is somewhat frustrating.

          • HeelBearCub says:

            @Luke Somers:

            It seems you are accusing me of weak-manning typical AI x-risk arguments by saying that it is a typical argument to add an “even if the risk is very, very tiny, we still should devote resources to it because the payoff is so big if this x-risk is thwarted”.

            Is the above an accurate statement?

            Nick Bostrom seems to be a representative proponent of AI x-risk. “Superintelligence” has been referred to several times in this thread as a go to for good arguments on AI x-risk. Bostrom has made the argument above and it has been repeated by others.

            In fact, we are discussing this in a post where Scott is specifically defending the assertion! He does this by saying it isn’t proper to assign a risk probability small enough that the expected benefit disappears.

            How am I weak-manning the argument or its proponents?

          • Luke Somers says:

            I think a better way of putting it is, ‘Some AI risk positions seem more defensible than others, and the less defensible ones make the more defensible ones less attractive.’

            Weak-Manning is choosing a particularly easy opponent from within the opposing field and tarring the entire field with it. Saying what you said, “I would find [the more modest argument] more attractive if people would stop saying [the weaker argument]” is basically saying, ‘I am weak-manning you now’, because you’re explicitly letting one judgement affect a different judgement.

            Just because you point out what you’re doing doesn’t make it not what you’re doing. Just because the version you find weaker is reasonably common also doesn’t mean the stronger version shouldn’t get credit for already being the stronger version.

      • Brian says:

        I’d also include “What’s the likelihood that someone will develop an organism that will end civilization as we know it through some other means?” For a scary possible future, read the Daybreak trilogy by John Barnes, which opens with a bizarre meme-driven social movement causing thousands of people to develop fuel and plastic eating microorganisms in home labs and release them into the world. The resulting tech crash is enough to collapse most major cities by itself (and then things get much, much worse for other reasons…)

    • Scott Alexander says:

      Honestly, my odds for aliens somewhere being aware of Earth are about 1/4.

      • HeelBearCub says:

        @Scott Alexander:
        “Aware of Earth” like we are aware of Alpha Centauri?

        Or aware that there are life forms here that have escaped our gravity well (in some form)?

        Because the second one seems to assume that FTL information travel is possible (if not physical FTL travel).

        • Adam Casey says:

          I think humans are aware of a thing the moment one of our probes sees it, even if info hasn’t traveled to the rest of us. So this just assumes a probe inside 40-odd lightyears.

        • James Sully says:

          Alpha Centauri is only 4 light years away. We’ve been using electromagnetic telecommunications for much longer than 4 years. Obviously, assuming Scott doesn’t believe in FTL signalling, that still leaves the question of why he’s so confident there’s life within 100 or so light years.

        • Daniel Armak says:

          Or the aliens are near, within a few dozen LY (or in our system). Or the aliens are just really good at predicting our future development from our state hundreds or thousands of years ago.

        • HeelBearCub says:

          It’s not that there couldn’t be intelligent life within 50 light years (not 100, we hadn’t left our gravity well 100 years ago), but that 25% is a very high number. I mean, even if they exist, that doesn’t make them aware of us. I think you only reasonably get to that high a probability by expanding the bubble.

          Thinking about it more last night, one possibility is that Scott actually puts the number that high because he thinks there is a high probability (almost 1) we are a simulation, and 25% is how likely “aliens” are running it. I don’t think the argument for that is all that great either, but it’s better than any argument for FTL travel.

        • Chris Conner says:

          How about aware of Earth like we are aware of OGLE-2005-BLG-390Lb, a planet about 21,500 ly away? If there are any aliens within 20,000 ly of us, and they have a reasonably strong exoplanet detection system, there’s a quite reasonable chance they’ve detected the Earth.

          • HeelBearCub says:

            @Chris Conner:
            Looking at the article, we are aware of OGLE-2005-BLG-390Lb roughly the way we are aware of Alpha Centauri. We know some of its characteristics (and that they are likely not to support life similar to the carbon-based life forms we are familiar with on Earth).

            We know of a fair number of exo-planets that do fall within parameters that are not inconsistent with earth-like life. But we don’t know whether they do, and even if we did, we would only know what they were like as many years ago as they are light years away.

            Therefore, only very near-earth alien intelligences can possibly know we have escaped our gravity well, and only then if they have an extraordinary detection system that is ludicrously large. Like constructing a focusing apparatus the size of Jupiter (I am pulling that size from my nether regions, I just know it would have to be huge), and it would have to be focused on us. The odds of all of that being true seem to me to fall well under 25%. There are only 64 stars similar to our sun within 50 light years.

            But, once you have FTL travel, the number of possible intelligence harboring planets “near” us expands greatly.

            The fact that Scott puts the probability of FTL travel at 50% given a superintelligent AI (elsewhere in this thread) seems to support my contention that this might be where he is getting 25% for aliens knowing about our state of tech development. 100% chance of superintelligent AI somewhere in the Universe, 50% chance of FTL, 50% chance they have noticed us at all = 25% chance that an alien race knows our state of tech development.

            I don’t know if that is how he is coming up with 25% or not, just guessing, more or less.

          • Chris Conner says:

            @HeelBearCub

            Sure, absent FTL communication, the region in which anyone can be aware of our space exploits is small. But “aware that we have escaped our gravity well” is your stipulation, not Scott’s.

            If being aware of Earth means simply that they have noticed its existence, in the way that we are aware of Alpha Centauri or OGLE-2005-BLG-390Lb, then the portion of the Galaxy in which aliens might have noticed the Earth is much, much larger. Possibly most of it.

          • brad says:

            The great oxygen catastrophe was 2.3B years ago. With sufficiently advanced technology (but not for the purposes of this hypo FTL) a civilization on the other side of the supercluster 300M ly away could know that we probably have life on earth. And they could have known that 300 million years ago and sent a probe, or 600 million years ago, sent a probe 300 million years ago and just heard back.

          • HeelBearCub says:

            @Chris Conner/@brad:
            My hypothetical was “What are the odds an alien intelligence is NOT studying us from the depths of the Marianas Trench or some other deep ocean location?”, to which Scott answered that he gave a 25% chance aliens were “aware of us”.

            Given my hypothetical and Scott not being super clear about what “aware” means, I tried to give two possibilities on opposite ends of the distribution of possible meanings for his use of the word.

            Is it possible someone is aware that conditions consistent with the existence of oxygen-breathing life forms have occurred on one of the planets orbiting the star they refer to as XJZ783456701? Absolutely possible, and 25% seems defensible for that number (although I am not sure if I would put it that high).

            But that is really different from saying they are aware of how technologically advanced we have become and are monitoring our current progress. And if that is what he is saying, I want to see some means by which he justifies a 25% chance.

    • Scott Alexander says:

      1. 25% chance alien life is watching us in some way

      2. Less than one in a million. If we estimate it takes 100,000 years for life to progress from intelligence to superintelligence, and that intelligent life on Jupiter could have evolved any time in the past two billion years, the chance that we would be in the 100,000 year window is only 1/20,000. Given that we’ve seen a bunch of different environments, atmospheres, et cetera and only one has had life (let alone intelligent life), I think that’s enough to get the extra factor of 50. This is even before we take into account the inherent implausibility. Also, Great Filter style arguments.

      3. Less than one in a hundred thousand chance, because we have enough examples of life to know that it’s practically never immortal, and enough examples of different environments to know dinosaurs didn’t survive in any of them. Why should a random lake in Scotland be different?

      4. There are at least fifty different lines of evidence suggesting the Earth is older than 6000 years. If each one only has a 90% certainty rate, the odds of all of them being wrong together is 0.9^50. Expand that number out and that’s my answer.

      5. Given all the independent evidence, even if there was a 10% chance NASA could convince the Chinese, the Indians, the world astronomical community, etc to go along with it, by the time you’ve added in everybody you’ve probably got like 10^-6. I’ll be conservative on this one and say 1/10,000, because it wouldn’t surprise me too much if the government faked at least one major historical event in history, so maybe if I had to name ten thousand historical events I was sure weren’t faked, I might get one wrong.

      • HeelBearCub says:

        @Scott Alexander:
        On #2, I didn’t say life native to Jupiter; however, given that is how you interpreted it, it’s interesting that you think it is 10 times more likely that there is a dinosaur in Loch Ness than native intelligent advanced life in Jupiter. On the one hand, we really know that native life on Earth is possible, but everything we know about that life tells us that a dinosaur living in Loch Ness isn’t (especially for the dinosaur as you described it). Seems off to me.

        On #4, I will also say it’s interesting, and seems to cut against some of your reasoning, that you are willing to a) assign such a low probability to each of the 50, as 90% seems really low for most of these, and b) seem to get the probability wrong. All wrong together is 0.1^50, isn’t it? 0.9^50 is the odds of them all being right at the same time, none of them being wrong. And 0.9^50 works out to about a 1/2% chance that the Earth is 6000 years old.

        0.1^50 is the kind of number you have said “never” happens, isn’t it?
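
        For anyone who wants the arithmetic spelled out (just re-running the numbers in this exchange): 0.9^50 is the chance that all fifty lines of evidence hold up, roughly half a percent, while 0.1^50, all fifty failing at once if they really were independent, is 10^-50.

        p_each_right = 0.9  # Scott's stated confidence in each of the 50 lines of evidence

        print(p_each_right ** 50)        # about 0.00515, the "1/2%" figure above
        print((1 - p_each_right) ** 50)  # 1e-50: all fifty wrong together, assuming independence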

        • Roxolan says:

          If you nitpick Scott’s off the cuff answers, you make it less likely that Scott will give off the cuff answers (and answers in general) in the future. I’d rather you didn’t do that.

          • HeelBearCub says:

            @Roxolan:
            I don’t perceive myself to have nit-picked his answers. Perhaps it’s the “didn’t specify native life” part that feels nit-picky? Wasn’t intended that way, but I could see how it might be. Everything else seems pretty above board, especially since the questions were originally (I thought fairly clearly) rhetorical.

            The point about 0.1^50 seems especially cogent and not nit-picky, as it goes right to the heart of his post. Given that there were 50 independent tests, you generate the kind of probability of error that really is so low as to be infinitesimal. This is completely legitimate, and not overconfident. If you have overwhelming evidence from multiple, independent sources, you really can feel that confident.

      • Nornagest says:

        There are at least fifty different lines of evidence suggesting the Earth is older than 6000 years. If each one only has a 90% certainty rate, the odds of all of them being wrong together is 0.9^50. Expand that number out and that’s my answer.

        As close as my estimate for Young Earth creationism is to zero, and it’s very close, it’s not this close. You’re assuming that all those lines of evidence are totally independent, which pretty much never happens.

        • HeelBearCub says:

          @Nornagest:
          But, are they 10% likely to be so wrong that they allow a 6000 year old earth? Or is that more like 1 in a 1000 or 1 in 10000 or even smaller?

          The independence does matter, but the first probability estimate seems ludicrously underconfident.

          • Nornagest says:

            I think there’s at least a 99.9% chance that e.g. the stratigraphic record is accurate to within an order of magnitude or so. I might not go as far as 99.99 because of model uncertainty, but I really am very confident.

            But I also think there’s a better than 1 / 10^50 chance that the Magratheans created the Earth a thousand years ago and peopled it with reality show contestants, and that the fossil record etc. is all there for verisimilitude. 1 / 10^50 is a really small number, and probability estimates anywhere near it are very vulnerable to black swans.

          • HeelBearCub says:

            @Nornagest:
            I assume the most likely scenario for the earth being 6,000 years old is something like that (or that it’s a simulation). I don’t think we can actually put probabilities on things like that as there is essentially zero evidence to support them, depending on how one wants to abuse the concept of evidence. Or otherwise you are assigning some probability to the idea that any work of fiction is actually the truth as will be revealed by angels at a later time. Edit: removed example, for reasons.

            But that wasn’t even Scott’s line of reasoning. He said 50 tests, and then he said there was a 90% chance of those tests actually being accurate enough to establish a greater-than-6,000-year-old earth.

            So at most what you are saying is that Scott has not accounted for the extra probability that the Earth is an alien/God simulation. But he still hashed the non-alien/God part of it.

          • Nornagest says:

            So at most what you are saying is that Scott has not accounted for the extra probability that the Earth is an alien/God simulation.

            Or something I haven’t thought of, yeah. But that expresses itself in the math as independence issues.

          • HeelBearCub says:

            @Nornagest:

            That feels slightly wrong. If “some outside entity” faked all the data, then all the tests are dependent on that together. But that is a hidden dependence. You can’t really know that the tests are not independent.

            But, assuming they weren’t faked, it’s also an accurate criticism that those 50 tests aren’t all independent from each other. If radio-carbon dating is wrong, then our understanding of radioactive decay may be wrong, which may make some of the other tests wrong as well.

            I don’t know enough to say how many of those tests are dependent on single base factors, but the probability those base factors are wrong is way lower than 10%. We have far too much objective evidence that they are right.

            I’m willing to go out on a limb and say that as unlikely as faking is, that probability dominates the probability of all the tests being wrong.

  15. FrogOfWar says:

    You have the same probability for God existing as Tyler Cowen, though apparently both Bryan Caplan and Alex Tabarrok find this absurd (in which direction?).

    Coincidence?

    http://marginalrevolution.com/marginalrevolution/2006/02/epistemology.html

  16. Cole says:

    1. What is your probability that there is a god? God being some powerful being outside of our universe with some ability and desire to influence our universe … I’m gonna say 1/2 chance there is something outside of our universe. 1/10 chance that our universe can be influenced by things outside of our universe. 1/100 chance that some being wants to influence things in our universe. I consider all of these estimates bad and completely made up, would totally change my mind if anyone offers better numbers.
    1/2000 chance of god existing.

    2. What is your probability that psychic powers exist? Psychic powers being the ability to physically influence things outside of your direct physical control, with just your thoughts. Well, I remember one of Scott’s long posts about psychologists still arguing over this, so with a bunch of experts disagreeing, 1/2 chance it’s real. I also think this involves changing some of our current understanding of the physical world, which involves upending a bunch of established physics; I give that 1/5000 chance.
    So 1/10000 chance.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?
    I think the estimates I’ve always heard are 2C by 2100. I’m gonna consider the chance of mainstream science being wrong about this to be about 1/10. Injecting sulfur into the upper atmosphere to cool the planet is a pretty cheap way to stop this, but could have other negative problems, so it might get rejected: 1/100 chance we enact terraforming to cool the planet. Also, 1/1000 chance that some natural activity, like the sun or major volcanoes blowing up, changes global temperatures in a way we can’t predict.
    All of these are separate so 889/1000 chance.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? (Svsgrra creprag)
    Chance of bug spreading if it exists, 4/5. Chance of naturally developed bug being deadly and viral 1/100 (these seem pretty rare, it’s been nearly a century since the last major pandemic). Chance of humans developing and foolishly releasing such a bug 1/100 (we do seem to be cautious about super weapons, and germ warfare is very frowned upon). Chance of the bug not evolving into a more benevolent form and falling short of the one billion person death toll 1/100. Chance of medicine not stopping a deadly bug before the death toll 1/10.
    8/500,000

    5. What is your probability that humans land on Mars by 2050?
    1/2

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?
    Chance that humans would develop that kind of AI by 2115 999/1000. Chance that humans don’t get wiped out before they can develop that level of AI 99/100.
    Chances are 98901/100000.
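
    Spelling out how the numbers above combine (the one interpretive assumption is that the two bug-origin routes in #4 add before everything else multiplies, which is the reading that reproduces the 8/500,000 figure):

    from fractions import Fraction as F

    print(F(1, 2) * F(1, 10) * F(1, 100))                            # question 1: 1/2000
    print(1 - (F(1, 10) + F(1, 100) + F(1, 1000)))                   # question 3: 889/1000
    print(F(4, 5) * (F(1, 100) + F(1, 100)) * F(1, 100) * F(1, 10))  # question 4: 1/62500, i.e. 8/500,000
    print(F(999, 1000) * F(99, 100))                                 # question 6: 98901/100000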

    • Jon Gunnarsson says:

      If the mainstream position is 2K warming by 2100 (which I don’t think is true; AFAIK 2K warming by the end of the century is the scenario where there is a large effort to curb CO2 emissions), then even assuming a linear rise (most models assume accelerating warming), there would be less than 1K warming over the next 35 years, i.e. your estimate should be lower than 50%.

      Also, you can’t just multiply probabilities willy-nilly unless you have some justification for them being independent.

      • Cole says:

        I don’t have good justifications for most of those probabilities in the first place, so I have even less justification for them being independent. I don’t see much difference between making up numbers willy nilly and multiplying them willy nilly.

        Your comment adds to my evidence of why it’s better to break down probability estimates like this. You can at least criticize mine and know where I went wrong. So if I had just said 90% for the global warming question, I’d have not learned how to make a better estimate.

  17. ento says:

    1. What is your probability that there is a god?

    1%.

    2. What is your probability that psychic powers exist?

    .1%.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    60%, but I found this question difficult to answer. At current don’t-give-a-fuck rates I definitely agree with Scott’s 90%, but I give more weight to “we collectively get our asses in gear and figure out how to stop this” than Scott does. (I guess “crackpot solutions a la that chapter in Super Freakonomics work” also gets some weight.)

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

    10%, based mostly on my gut.

    I decline to answer questions 5 and 6 on the grounds of not having thought about them enough.

  18. Tanadrin says:

    1. About 5%
    2. About 1% (it’s easier to imagine God exists, than that the universe operates in ways which contravene his own laws)
    3. 80%? But I don’t know much about what the current expert opinion on the rate of global warming is.
    4. 15%. That feels relatively pessimistic; on the other hand, who saw AIDS coming?
    5. 50%
    6. 50%

    • E. Harding says:

      Why would psychic powers be a contravention of God’s laws? AIDS was largely limited to promiscuous people in southern Africa and homosexuals, IV drug users, and people dependent on blood transfusions outside of southern Africa. It wasn’t this massive pandemic that could afflict many heterosexual non-promiscuous non-IV-drug users without need of blood transfusions.

      • Saint_Fiasco says:

        I read that the US got hit relatively hard among developed countries because at that time the pill was the preferred contraception method, and they were also relatively promiscuous.

        If the pill had been more popular in Europe too, they could have been hit pretty hard.

      • Tanadrin says:

        The psychic powers thing was a bit of snark on my part; but it would require them to 1) exist, 2) operate through an unknown and heretofore undetected mechanism, and therefore 3) require something about the brain, how our thoughts interact with the world, or basic physics to be *totally different* from how we currently think those things operate. By contrast, I find God relatively plausible–if you take God in the modern understanding, which has retreated as an active force to the edge of our understanding in the face of advancing science.

        AIDS has killed 39 million people; the Spanish Flu killed between 20 and 50 million. By some estimates, AIDS has actually killed quite a lot more people than the Spanish Flu. By *any* metric, 39 million people is a *massive* pandemic–and that doesn’t count millions of HIV-infected people who are currently living with the disease, or those who will go on to develop AIDS and die.

        Nor is AIDS a disease of special cases; in the West, to be sure, the particular set of cultural circumstances surrounding homosexuality made that a major route of transmission (though the shitty public health response greatly exacerbated it). That and IV drug users are in the popular imagination the main way HIV is transmitted, because in parts of the world with lower poverty rates and better health care, those are the paths most readily available to the disease. But the picture of transmission in Africa is more complicated, and from what I’ve read on the subject, there is not a short, clear-cut list of factors to point to with regard to why the AIDS epidemic has been especially severe there (though poverty and lack of access to healthcare really don’t help), or why it has been so uneven.

        “It’s a disease of homosexuals, the promiscuous, and IV drug users” is a pat explanation, and one that’s useful for various modern morality tales. It’s also, like most pat explanations, totally useless for understanding what actually happened; moreover, even if it *were* just a disease of that category of people, I don’t see how that invalidates the fact that 1) it’s a global pandemic, with 2) a horrendous death toll, which 3) nobody saw coming. I don’t think Scott’s question was confined to the possibility of a pandemic which only struck those considered virtuous by the standards of 1950s suburban America, and also who do not need blood transfusions.

        • E. Harding says:

          Scott said “in five years”, so I’m considering the speed of spread as important, too. Pretty much every objection you could have to psychic powers, you could also apply to God.
          “I don’t think Scott’s question was confined to the possibility of a pandemic which only struck those considered virtuous by the standards of 1950s suburban America, and also who do not need blood transfusions.”
          -If you exclude all those who aren’t, would you end up with a billion people? I doubt it.
          Note that AIDS has been most severely spread in southern Africa, which is comparatively developed to the rest of Black Africa, not in Niger, Burundi, or Somalia, which are the poorest countries of Africa. So I don’t think poverty is the primary reason for the spread of AIDS in Africa. I think it’s promiscuity.

          • Erm, granting your premise, why does it matter whether it’s spread via promiscuity? That has no bearing on whether it’s a pandemic. I’m trying to be charitable here, but the only way I can see that mattering is if AIDS is a less concern-worthy disease than a normal pandemic because it’s hitting the right targets.

          • E. Harding says:

            There aren’t a billion promiscuous people out there, and if there are, they can’t spread some STD among all of them in five years. That’s why STDs aren’t ever going to kill a billion people in five years.

  19. Samuel Skinner says:

    1. What is your probability that there is a god?

    Depends on the definition. Most are unfortunately undefined to the point of uselessness.

    2. What is your probability that psychic powers exist?

    It’s my confidence level that naturalism is correct.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    Okay, that’s “model correct enough and actions don’t reduce temperature”. I have decent confidence in the latter (I don’t think we can do anything in 35 years to slow it down enough; geoengineering might come through) and I believe most of the models agree on at least that much. That’s what, 80%? 90%? I don’t have the actual reports and margins of error on hand.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

    That would almost certainly be a bioweapon (previous diseases in the 19th and 20th centuries haven’t managed such a death toll). Speculating on future politics is difficult; I want to say less than 5%?

    5. What is your probability that humans land on Mars by 2050?

    First, the odds of developing working teleporters or usable FTL craft by then are low (maybe .1%), so we’d have conventional missions. However I think colonizing the Moon or grabbing asteroids would be a more likely objective. .1% as well.

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?

    It’s the probability the task is possible, times the probability it isn’t stopped, times the probability it is complete by then. I’m pretty sure about the first, reasonably sure about the middle, and less sure about the last. I want to say 80%–90%.

    • the court jester says:

      However I think colonizing the Moon or grabbing asteroids would be a more likely objective. .1% as well.

      Way too low. He said land on, not colonize.

    • Ghatanathoah says:

      Depends on the definition. Most are unfortunately undefined to the point of uselessness.

      I wondered about this too. I have a pretty low probability for an Abrahamic God, or any other kind of “ontologically basic mental entity.” But if you count creatures that have godlike powers, but aren’t ontologically basic (e.g. something like Mr. Mxyzptlk), my probability is much higher. And do simulation programmers count? If they do then your probability of there being a god should at least be as high as your probability that this universe is a simulation.

  20. blacktrance says:

    1. What is your probability that there is a god?
    0.01%. It would require a huge amount of what we know to be false. If psychic powers turn out to exist, the probability goes way up.

    2. What is your probability that psychic powers exist?
    0.1%. Similar to the above, but we wouldn’t be mistaken quite as radically. I find it odd that people think God is more likely than psychic powers.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?
    80%. I don’t know much about this, so this is a wild guess.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?
    5%.

    5. What is your probability that humans land on Mars by 2050?
    30%.

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?
    30%.

    I don’t know much about most of these to begin with, so I apologize if these sound crazy or have crazy implications.

    • AnonymousCoward says:

      People think God is more likely than psychic powers because they’re including deistic gods, the kind that runs the simulation that is our universe but never interferes, so the universe obeys totally naturalistic law.

    • Mary says:

      ” I find it odd that people think God is more likely than psychic powers.”

      Psychic powers would be things among human beings, which are easier to ferret out than God is.

      “It really is more natural to believe a preternatural story, that deals with things we don’t understand, than a natural story that contradicts things we do understand. Tell me that the great Mr Gladstone, in his last hours, was haunted by the ghost of Parnell, and I will be agnostic about it. But tell me that Mr Gladstone, when first presented to Queen Victoria, wore his hat in her drawing-room and slapped her on the back and offered her a cigar, and I am not agnostic at all. That is not impossible; it’s only incredible. But I’m much more certain it didn’t happen than that Parnell’s ghost didn’t appear; because it violates the laws of the world I do understand.” G.K. Chesterton

      True, psychic powers would be closer to the ghost than the antics described. But since they are supposed to be things people do among us, we would expect some sign.

  21. Fourth Root says:

    It’s interesting that you discuss the Pythagorean theorem up there, since in some senses it is false. In particular, the statement “The Pythagorean theorem is true in our universe” is false (general relativity says that the universe is non-Euclidean, and it happens to be non-Euclidean in a manner such that the Pythagorean theorem does not always hold).

    On the other hand, the statement “under Euclid’s axioms, the Pythagorean theorem holds” is a true statement. So it’s probably more reasonable to say that the Pythagorean theorem is true than it is to say that it’s false, but you do have to be careful that you phrase it as a statement about pure mathematics rather than the universe.

    • HeelBearCub says:

      Can the Pythagorean Theorem be false in a geometrical plane?

      • Fourth Root says:

        If by “plane” you mean plane in the Euclidean geometry sense, then no, because the Pythagorean theorem is true in Euclidean geometry.

        If by “plane” you mean 2D Riemannian manifold, then yes, it is false for most of these, including the standard non-Euclidean geometries.

        If by “plane” you mean a 2D Riemannian manifold that is isometric to a Euclidean plane, then no, because such a surface is by assumption flat (this is not actually different from the first case).

        If by “plane” you mean “something in reality that looks like a plane” … I’m not quite sure how to set up the question in a way where that would be well-defined. If it is possible to talk about subsurfaces of spacetime whose geodesics are also geodesics in spacetime (which I’m about 70% confident is possible) then I’m about 95% confident the answer is “Yes, the Pythagorean theorem is sometimes false on such surfaces”, and most of the 5% error comes from “I am making a basic mistake in geometry/physics that also affects my initial claim”.

        [Edit: it is perhaps worth mentioning that distances in spacetime are really weird since spacetime isn’t even a Riemannian manifold, it’s a Lorentzian manifold, where spacelike curves have positive “length” and timelike curves have negative “length”. In my original post I was thinking of the Pythagorean theorem being false on some spacelike subsurface, but it’s plausible that things get even weirder.]

        • HeelBearCub says:

          Yeah, I probably implicitly meant Euclidean geometry, even though I did not know that. I guess my point was simply a restating of your point that Pythagoras holds given that you accept the axioms.

          Further, even before more esoteric forms of geometry or physics, the axioms weren’t thought to be “true” in the actual world. Lines that are infinitely long but have no area or volume can’t be constructed in the physical world, and this was known (or guessed) long ago.

      • malpollyon says:

        Absolutely, the hyperbolic plane.

        Edit: Fourth Root gives a much better answer.

    • Anon says:

      > the statement “The Pythagorean theorem is true in our universe” is false

      Nope, it’s still true. The Pythagorean theorem always begins with “Assuming we have a right triangle in the Euclidean plane”.

  22. Gunther says:

    1. Assuming we’re considering all possible gods, 10% sounds reasonable. Limit it to Abrahamic gods and I’ll say 1%.

    2. There’s no mechanism to allow for it, no evidence that it exists, and if it were possible, evolution would have ensured that everyone can do it rather than just a few. I think 0.1% is reasonable.

    3. 80%. I have difficulty believing that an entire field of fairly hard science could be wrong.

    4. 10%. Though I don’t know enough about pandemics to be at all confident about that. Could easily be off by a lot either way.

    5. 50%. I’m surprised so many people are confident this will happen – going to Mars is not easy or cheap, and 2050 is only 35 years away.

    6. 20%. But if the question was about AI that can do “most” cognitive tasks better than humans rather than “almost every” cognitive task, the probability goes up to 60%.

    • the court jester says:

      So, anybody want to steelman the existence of an Abrahamic God? I’m thinking you need to assume that God is actively trying to deceive us.

      • AnonymousCoward says:

        An alien that visited earth is the most plausible Abrahamic God I can think of. So, doesn’t violate naturalism or anything, but isn’t an all-powerful God.

        • John Schilling says:

          Ah, but Judaism was still henotheistic in Abraham’s day, so the Abrahamic God doesn’t need to be omnipotent. It’s the post-Babylonian Retcon God you’re thinking of 🙂

          Less snarkily, there’s a plausible overlap between the Abrahamic God and the AI Simulation God. Heck, maybe it really did take him six days to compile, and after coming off the post-Doritos-and-Red-Bull crash he set the start date to 4004 BC.

      • Brad (the other one) says:

        Actively deceive? No. But human wisdom is not the means by which one discerns true spiritual knowledge.

        >18 For the word of the cross is folly to those who are perishing, but to us who are being saved it is the power of God. 19 For it is written,

        >“I will destroy the wisdom of the wise,
        and the discernment of the discerning I will thwart.”

        >20 Where is the one who is wise? Where is the scribe? Where is the debater of this age? Has not God made foolish the wisdom of the world? 21 For since, in the wisdom of God, the world did not know God through wisdom, it pleased God through the folly of what we preach to save those who believe. 22 For Jews demand signs and Greeks seek wisdom, 23 but we preach Christ crucified, a stumbling block to Jews and folly to Gentiles, 24 but to those who are called, both Jews and Greeks, Christ the power of God and the wisdom of God. 25 For the foolishness of God is wiser than men, and the weakness of God is stronger than men.

        See also 1 Corinthians 2:14-16

  23. John Schilling says:

    This seems an awful lot like a restatement of “Stop Adding Zeroes”, but since you ask:

    1: 0.5 for definitions of “God” extending as far as pantheism, 0.2 for theism. The guy running the simulation in that hypothesis counts.

    2: 0.05 for weak telepathy, 0.001 for any sort of telekinesis or precognition, in both cases exclusive of whatever God is doing.

    3: 0.15 if that is meant to be 1 deg C over current (2015) temperatures

    4: 0.20 including artificial pandemics

    5: 0.70 (edit: for just the landing)

    6: 0.15 not counting God and/or the supercomputer that’s running the simulation in that hypothesis.

    And the bonus question, probability that my marginal donation to MIRI will prevent the emergence of unfriendly AI so thoroughly for all time as to be the necessary and sufficient condition for the future with 10^bignum human beings living happily ever after: Incalculable, and you already told me not to add so many zeroes.

    • the court jester says:

      Yes! Another simulation God follower. Hallelujah!

    • John Schilling says:

      And now that it’s a weekend and I have time, I should show my work.

      1: Cogito ergo WTF? I have no satisfactory root cause for the fact that I can perceive myself to be asking this question. Answers seem to fall into A: consciousness is an emergent property of materialism, B: materialism is an emergent property of consciousness, C: both are emergent properties of something else. A means probably no God (but see simulation hypothesis), B means a primal consciousness that probably is God in the broadest sense, and C could go either way. Absent evidence, that comes to 50/50

      Theistic Gods are necessarily a subset of Gods generally, though given the only thing I am certain exists is a discrete perceptual consciousness it’s a pretty big subset. I knock this down a bit from 25% on the grounds that the apparent material universe seems ill-designed as a playground for a primal consciousness, but only a little bit because I don’t expect to understand God very well. So, 20%

      2. Weak telepathy violates no known laws of physics, and the absence of a known mechanism just puts it in the same category as continental drift and an ancient Earth a century ago. The scientific evidence says it is long past time to shut up about Why That Can’t Be So and start working on How It Is So. However, a while back I did some crude calculations suggesting that telepathy manifesting at even the p<0.95 level in any non-trivial fraction of the population would e.g. put casinos out of business, so 95% all we're seeing is experimental error.

      Telekinesis and precognition, same deal but discounted a few more orders of magnitude because they violate known physical laws that are highly unlikely to be in error.

      3. One degree of warming between now and 2050 is within the IPCC's error bars, so claims of 80-90% are I think just signalling (or not paying attention). I have three competing hypotheses with no strong preference. A, the IPCC/consensus view is right except for its baffling denial of the hiatus, and warming will resume with sensitivity ~4.5C in the near future – this tracks pretty closely with the historical record from 1979-2004. B, the Arrhenius first-order calculation of ~1C sensitivity is the normal behavior of the Earth's climate – this tracks pretty closely with the 2005-present "hiatus", if you plot it vs log(CO2) rather than time. C, somewhere in the middle, because extremes are usually not the whole story.

      Only case A has a significant probability of 1 deg C warming in 35 years, and allowing for the hiatus to possibly end this year or possibly continue as long as another 10 years, one degree is right in the middle of the error bars. So, 50% of 1/3, knock a bit off for the possibility of effective mitigation, and 15%.

      4. Historical data is of limited relevance because the global population has been too small, dispersed, and compartmentalized, but of plagues of about the right magnitude I count three in the past millennium, so I'm going with 30% per century. Modern medicine, sanitation, and hygiene should knock that down quite a bit, but multiple drug resistance is a thing, as are developing-world megacities and air transportation; I have not the expertise to determine which way these competing effects point. But a gigadeath is near the top end of the error bar for historical plagues applied to the whole world, so take that 30% down to 5%.

      Then quadruple it because we are much, much better than we used to be at deliberate biological warfare. That's somewhat arbitrary, and on reflection I might revise the total estimate down to 10%.

      5. 20% from Elon Musk, 5% from NASA's current manned space flight plans, the rest from other enthusiastic billionaires (including those currently in high school) and from all the people currently doing increasingly shiny robotic probes who will eventually box themselves into a corner where they have to go big or go home. Too many candidates for them all to give up forever. A single manned landing doesn't require major new technology, the infrastructure to make it broadly affordable is already being developed along multiple paths, and the timeline allows for another lost generation in space exploration along the way.

      The only way this doesn't happen is, 10% severe unanticipated technical obstacles, 10% global economic disruption on the level of a gigadeath plague, and 10% black swan. So, 70% it happens.

      6. As I have argued elsewhere in this thread, we're not going to duplicate human neural architecture in silicon, and there's no existence proof for highly capable AGI in any other way, so 50% probability this can't happen without reinventing computer science from scratch. I'm not going to calculate the probability of reinventing comp sci from scratch in a century, because lazy.

      If possible, fifty years of false "it will happen within twenty years for sure" predictions suggest a ~12% probability per decade of actually pulling it off, so 70% by 2115. Knock down 10% each for the same reasons as the Mars landing, and another 10% for the Butlerian jihad, and (0.5 x 0.7 x 0.6) gives a bit over 20%. Not sure why I put in 15% yesterday.
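
      Spelling that last calculation out, with every input being the commenter's own guess from the paragraphs above:

          # the comment's own arithmetic, written out (all inputs are guesses from the comment)
          p_per_decade = 0.12
          p_by_2115_if_possible = 1 - (1 - p_per_decade) ** 10  # ten decades -> ~0.72, i.e. the "70%"
          p_possible_at_all = 0.5    # 50% that it can't be done without reinventing comp sci
          p_not_derailed = 0.6       # 10% x3 as for the Mars landing, plus 10% for the Butlerian jihad
          print(p_possible_at_all * p_by_2115_if_possible * p_not_derailed)  # ~0.22, "a bit over 20%"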

  24. Thad says:

    1. What is your probability that there is a god?
    I don’t know how to answer this question.
    2. What is your probability that psychic powers exist?
    Probably something like .1%, depending on how we define psychic powers
    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?
    From now? My gut says something like 10%. Without looking up the numbers, I think most calculations are done using a baseline some years in the past, and 35 years feels too short. Make it from some past baseline and the probability goes up accordingly. Of course, this is without me just googling it.
    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?
    1%. Even with population growth being what it is, one billion is a lot of people and 5 years is not a lot of time. Still, there are too many ways it could happen for me to completely discount it.
    5. What is your probability that humans land on Mars by 2050?
    Here is one that speaks to me emotionally, so I’m probably wrong. 65%
    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?
    As others have pointed out, there is ambiguity in the question. For the looser meaning, 50%. For the stricter meaning, I’m really not sure how to estimate it. Not as hard a question as God, but pretty dang hard. Any answer I give would have very low confidence.

  25. the court jester says:

    FWIW:
    1. 50% (If we live in a simulation, aren’t the programmers like God?)
    2. 0.001
    3. 90%
    4. 10%
    5. 15%
    And I hope it doesn’t happen. Could the science possibly be worth the money? BTW, anybody want to estimate the probability of an economically self-supporting Antarctic town by 2050?
    6. Epsilon to 5%, depending on how liberally we define this

  26. Dan Simon says:

    This post wonderfully captures what I consider the fundamental flaw of most of the arguments I see on this blog: the bizarre assumption that rational thought can be pursued independently of some kind of a priori model of reality. The question, “what is the probability that statement x is/will be true?”, makes absolutely zero sense absent a model of reality under which that probability can be estimated. And if two people have different models of reality, then they could easily come up with diametrically opposite probability estimates for the same outcome. Finally, if both are sufficiently heavily invested in their respective models of reality–if each has been very well-served by his or her own model over many years, for example, and is highly reluctant to abandon it over a speculative question about a potential future catastrophe–then neither is likely to budge an inch.

    Now, some yes-no questions, I would argue, are “model-defining”–that is, models that admit of both answers are very difficult to come up with. An obvious one is, “does God exist?”. A God-affirming model of the universe is inevitably going to be so radically different from a God-denying one that it simply makes no sense to assign a probability to the existence of God–He either exists with probability 100% or probability 0%, depending on the model of reality you choose.

    I would further claim that “is superintelligent AI possible?” is another such question. If you believe that “superintelligent AI” is a coherent concept, then its empirical possibility is hard to deny, whereas if you believe–as I do–that it’s a nonsensical phrase bandied about by people who are deeply confused about the meaning of the word, “intelligence”, then the very notion of its empirical possibility is an absurdity. Either way, there are really only two interesting answers to the question, “what is the probability of a superintelligent AI being created within the next century?”: zero and nonzero. And it’s far from clear how people whose answers differ in this fundamental way are ever going to be able to persuade each other by mere rational argument.

    • Scott Alexander says:

      Okay, but this is what I mean by “your uncertainty money and your meta-uncertainty money are both denominated in dollars”.

      If you say “We just can’t predict what the market will do next year! Either it goes up with 100% confidence, or it goes down with 100% confidence!”, at some point you still need to make investment decisions. If you make an investment decision as if you actually believed your 100% or 0% figure, you would immediately go bankrupt.

      One more example. Suppose that a gang of three vaguely-foreign looking people in tri-corner hats surround you in a dark alley. Two of them grab you and pin your hands behind your back. The third puts a weird obsidian knife to your throat and says “Unless you sing ‘Mary Had A Little Lamb’ right now, I will cut your throat, but if you do sing it, I will let you go unharmed.”

      Well, obviously you have no idea what’s going on. Is it a prank? A gang initiation? Psychotic people? I couldn’t even begin to answer.

      So you say “Because I have no model, I am relieved of the responsibility of making decisions” and do nothing, and the guy slits your throat.

      I start singing the damn song, even though I have no idea what’s going on.

      • LTP says:

        But, to Dan’s point, you just move up a level and point out that, once again, asking what probability one assigns to one’s meta-beliefs is not coherent unless you have presupposed a world where that is a coherent thing to ask. So, rather than dodging the objection, you have merely moved it up a level, and this can be done ad infinitum. Axioms are unavoidable.

        As for the men surrounding you and pointing a knife, you do actually have a model. You have the myriad axioms about reality that you believe, and then you reason based on those and your experiences that the best chance you have of survival is to listen to them. You do know what is going on, in a sense, even if you are uncertain about the details.

        • Luke Somers says:

          And the need to not throw your hands up in the air and give up is also unavoidable. In practice, you only need to back out the one layer and have some sort of distribution over models. A distribution over distribution-distributing mechanisms on models doesn’t get you much.

          • Dan Simon says:

            I think you’ve missed the point. A good model already allows for uncertainty and probability estimation–why jump a layer and create a meta-model at all? Sure, the probability estimations in your model may be wrong–but since the same can be said of the probability estimations in your meta-model, or meta-meta-model, and so on, adding even the first layer doesn’t gain you anything. Best to just focus on modeling reality as best you can–including recognizing where your confidence is weak–and be done with it.

          • Luke Somers says:

            You create the meta-model so you can work in a framework that can have things that are wrong in it. Since some things are going to be wrong, right?

            Let’s say you do it Bayesian style. You start with some priors. Then as evidence comes in, you can adjust the posteriors.

            This gets really, really hard if you insist on calling it all one model. It’s going to get hard in much the same way whether you do it Bayesian or some other way – either you’ve got multiple models; or your one model actually has multiple models in it, you’re just declining to recognize this fact; or you’re going to find it’s just wrong and now you don’t have a system for ever being right.

          • Dan Simon says:

            Nothing precludes anyone’s model from being Bayesian, including priors, and being adjusted to account for new data. (In fact, most people’s models of reality implicitly operate this way, although usually on a somewhat crude, qualitative level: it’s called “allowing for several possible explanations”.) Adding a meta-model adds no power to this approach, as far as I can see.

          • Luke Somers says:

            The Bayesian model IS a meta-model. Saying you don’t need meta-models if you’re using a Bayesian approach is confusing.

      • That decision is consistent with the belief that you are more likely to go unharmed if you sing the song than if you don’t (assuming the cost of singing is negligible relative to getting stabbed). The odds ratio might be 1.001 or it might be 1,001; we can’t determine it any more specifically and we don’t have to.

      • Dan Simon says:

        …And to follow up on LTP’s and Jacob’s points, if your model of reality includes belief in a race of supernatural beings known only to you who occasionally appear in the form of humans in silly costumes, make bizarre demands of people, and kill all who fail to respond by ignoring the demand and instead reciting the secret thirteen-part declaration of belief, then your idea of the correct reaction to the hypothetical is bound to be very different from mine. And it’s moreover highly unlikely that mere rational debate will bring us closer together on the question.

    • shemtealeaf says:

      Could you elaborate on why your view of intelligence precludes superintelligent AI being a meaningful concept?

        • shemtealeaf says:

          Dan,

          Very interesting and well-written piece, but I disagree with you on a couple points:

          1) Just because we can’t precisely define intelligence doesn’t mean that we can’t take a pretty good stab at setting up some tests that would essentially filter out anything that wasn’t at least as intelligent as the smartest humans. I’d say that passing a bunch of strict Turing tests where the tester engages the AI in a serious conversation would be a good start. If an AI can do that, and then demonstrate some ability to derive some components of math and physics, that’s good enough for me to say that it has high-level intelligence.

          2) Your last point about how “humans in effect mistakenly program a machine to destroy humanity” is what I see as the real problem. I agree that this could potentially be a problem with something unintelligent, but intelligence makes the problem far worse. I am not at all confident that we can develop fail-safes that are effective enough to restrain an entity that can understand those fail-safes and try to circumvent them.

    • Irenist says:

      This post wonderfully captures what I consider the fundamental flaw of most of the arguments I see on this blog: the bizarre assumption that rational thought can be pursued independently of some kind of a priori model of reality. The question, “what is the probability that statement x is/will be true?”, makes absolutely zero sense absent a model of reality under which that probability can be estimated.

      That’s the source of many of my misgivings as well, although more about some LW folks than about Scott in particular. It doesn’t make sense to me to ask “What is the probability that 2 + 2 = 4?”, or “What is the probability that the same proposition can be both true and false?” They just don’t seem like those kinds of questions, yet I’ve encountered questions like that being treated as though they were amenable to that kind of treatment.

      • David says:

        It’s a bit subtle, but the key thing to realize is that people aren’t trying to ask about “Platonic” probability that exists by itself in the world. Rather, they’re trying to ask about your confidence level about it.

        Interpreted literally, “What’s the probability that X happens” actually does seem to be asking for a “Platonic” probability, but I would chalk that up to colloquial and arguably poor communication on the part of anyone who actually asks you questions phrased like this. In such contexts, what they mean to ask you is something more like “What probability best represents your own confidence level that X will happen?”, but that’s verbose enough that people often say it the less precise way.

        If it helps, you can mentally translate such questions into “If you were to make 100 (or 1000, or a million, etc.) predictions or statements that you have about the same level of confidence in as you have that X will/won’t happen, about what proportion of those predictions or statements do you think would actually be true/false/happen/not happen?”

        And that question should have an answer! Even for statements like “12+9=21”, you should be able to answer this question. For me personally, my probability (i.e. confidence level) on that being false would be very very low, but not zero. If I posted a message like this one every day of my life for 100 years straight, putting about as much care into mentally verifying that the statement is true as I am right now, I would almost certainly slip up and be wrong at some point. Since there are ~36,500 days in 100 years, my probability or confidence level in it being false should definitely not be less than 1/36,500. In fact, it’s probably more like 1/500 or so.

        Now, curiously, that could change. I could put more effort into rechecking the simple arithmetic, use a calculator, etc, and that would change the probability. In fact it’s changing right now as I read over and edit this post. That’s fine. Because again, it’s not about the statement having some probability of being true in the real world – either “12+9=21” is true or it isn’t – rather, it’s about the probability that measures *your* confidence level in making that claim/statement/prediction.
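
        A quick numerical sketch of the “how many similar statements would be wrong” framing above, using the day count and error rates from the comment:

            # confidence as a long-run error rate (figures taken from the comment above)
            statements = 365 * 100       # ~36,500 daily statements over a century
            p_slip = 1 / 500             # the commenter's rough chance of a slip per statement
            print(statements * p_slip)   # ~73 expected slips over the century
            # a claimed one-in-a-million error rate, by contrast, predicts essentially no slips at all:
            print(statements * 1e-6)     # ~0.04 expected slips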

  27. Anaxagoras says:

    With the phone book question, my first thought was, “I’m not 98% certain there is a phone book for Boston these days.”

    • As of 2 (95% confidence interval 4 to 2) years ago there were still yellow-pages.

    • Anonymous says:

      I was very interested to see an answer to this (it could shed light on the actual modeling problems), and I was eager to test my own estimate. It was apparently published as a book chapter, and my uni license doesn’t have online access to it. One thing to note – it was published in 1982. That was a curveball my estimate didn’t take into account… maybe their number would have ended up in the excluded 2% of my estimate after all…

  28. I don’t think this is really possible given how we suck at probabilities close to zero and one. Consider the following example:

    There is some really, really, really small probability that the earth quantum-fluctuates into a black hole. Let’s say it doesn’t change over time. If I pick just the right amount of time, then your one in 10^395 and my one in 10^405 per-second estimates look like 99.999% and .001%. I don’t think it is reasonable to expect people to estimate probabilities spanning many orders of magnitude to within only a few.
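
    A rough sketch of that point: per-second probabilities ten orders of magnitude apart can look like near-certainty versus near-impossibility once the time window is chosen accordingly. The exponents are the ones from the comment; the 10^400-second window is picked purely for illustration:

        import math

        # P(at least one event in t seconds) ~= 1 - exp(-p * t) for a rare per-second probability p
        def p_over_window(log10_p_per_second, log10_seconds):
            expected_events = 10 ** (log10_p_per_second + log10_seconds)
            return 1 - math.exp(-expected_events)

        print(p_over_window(-395, 400))  # ~1.0   -- the "one in 10^395" estimate looks like a near-certainty
        print(p_over_window(-405, 400))  # ~1e-5  -- the "one in 10^405" estimate looks like 0.001%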

    I think that alien life suffers from a similar problem. We are multiplying very small and very large numbers together, and because we suck at estimating small and large things we end up off by orders of magnitude. Do you really think we can agree, even to within a factor of 10, on the fraction of planets that are habitable, even for a clear definition of habitable?

    EDIT: I see your comment that “your uncertainty money and your meta-uncertainty money are both denominated in dollars”

    My intuition is that that makes more sense, but I don’t think knowing each other’s estimates is enough to converge within an order of magnitude; you probably have to iterate more times than humans can realistically manage to get close.

  29. AnonymousCoward says:

    1. What is your probability that there is a god?

    20% if we’re counting ‘universe is a simulation and god is the person who typed “chmod +x universe; ./universe”‘, ~1% if we’re talking about non-all-powerful aliens who appeared to past civilisations and on which our current myths are based, and like .01% if we mean actual, all-powerful gods in line with earthly religions. Is that overconfident? Maybe.

    2. What is your probability that psychic powers exist?

    Gosh, this would be so counter to the entire program of naturalism, on the same level as all-powerful magic gods existing, so 0.01%.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    Well, it’s done 1C in ~100 years so far, so it probably won’t do 1C more in only 35 years. I don’t think things are accelerating *that* fast. 20%.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

    Yeah, I could see this happening, and globalisation ought to increase the chance compared to the past, and it doesn’t look like technology can stop a pandemic very well – swine flu spread crazily despite our best attempts, and we just got lucky that it wasn’t so bad after all. It looks like we halted ebola pretty well, though it’s hard to tell what effect policy had on this compared to it being the expected outcome anyway. 20%.

    5. What is your probability that humans land on Mars by 2050?

    Ooh, gosh, I hope so. This seems to depend entirely on political will, so 20% since it’s totally within our reach but politics seems to lack drive to do things like this. It’ll only happen if a powerful government decides it’s worth it for some reason. I don’t think asteroid mining will get off the ground before 2050, so I don’t think expeditions will be run for profit before 2050, otherwise that would speed up tech development a lot and make humans-to-mars easier.

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?

    It seems like you can pretty naively extrapolate computing power and assume we don’t actually have to invent anything, because we can just base it on simulating human brains with low-hanging-fruit augmentations, so this will almost definitely happen, with the main factor preventing it being its active prevention by becoming illegal or something like that. So I put it at 80%, and it would be higher if I didn’t think it might be stopped intentionally.

  30. Doug S. says:

    “When, however, the lay public rallies round an idea that is denounced by distinguished but elderly scientists and supports that idea with great fervour and emotion — the distinguished but elderly scientists are then, after all, probably right.” – Asimov’s Corollary to Clarke’s First Law

    I reserve the right to declare my subjective probability of someone inventing a perpetual motion machine by 2100 as less than one in a billion.

    • tcd says:

      *gasp*

      Edit: Gah. You triggered some hidden memories from my childhood. There was a summer where I kept approaching my dad with perpetual motion machine ideas, and my confidence was certainly much higher than one in a billion.

  31. So, if you want to be confident to the level of one-in-a-million that there won’t be superintelligent AI next century, you need to believe that you can fill up 27 War and Peace sized books with similar predictions about the next hundred years of technological progress – and be wrong – at most – once!

    I don’t understand how this follows. Wouldn’t it depend on your certainty in each of the one million statements you would make? I might be overthinking this.

  32. Sniffnoy says:

    Sorry, but would you mind fixing the “a few years ago” link to point to the abstract rather than directly to the PDF? From the abstract you can click through to the PDF, not so the reverse (especially when you’ve used a weird URL for the PDF like you have). And from the abstract you can see other versions of the paper if they exist, other papers by the same author, etc. Thank you!

    Edit: Also, I’m guessing this:

    “Aaron Aaronson of Alabama wins the lottery. Absalom Abramowtiz of Alaska wins the lottery. Achitophel Acemoglu of Arkansas wins the lottery.”

    is meant to read “does not win the lottery” rather than “wins the lottery” in each case?

  33. If you have a good statistical model, probabilities can be arbitrarily small. If you flip a coin 16 times, I say the probability of them all being heads is less than 1/65,536. Flip it 32 times and the probability is less than 1 in 4 billion.

    Examples of people underestimating probabilities are harder to come by, but they exist. The Birthday Paradox comes to mind, where people massively overestimate the number of people necessary to be in a room such that at least 2 have the same birthday. People have been predicting the end of the world since approximately the beginning of recorded history (http://www.jacobsilterra.com/2012/12/23/its-the-end-of-the-world-again/).
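
    For anyone who wants to check the Birthday Paradox figure, a quick sketch assuming 365 equally likely birthdays:

        from math import prod

        def p_shared_birthday(n):
            # probability that at least two of n people share a birthday
            p_all_distinct = prod((365 - i) / 365 for i in range(n))
            return 1 - p_all_distinct

        print(p_shared_birthday(23))  # ~0.51 -- just 23 people already give better-than-even odds
        print(p_shared_birthday(57))  # ~0.99 -- far fewer than the ~180 people intuition suggests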

    Don’t just anchor to 1% and 99%; that doesn’t make any sense.

    I can only provide reasonably rigorous answers to two questions:

    4: Figure the world population will roughly double by 2100; 1 billion people is about 15% of the current population. The only plagues which have come close to that level were the Black Death in Europe (30-70% of Europe) and the less well documented plagues in pre-Colombian America. The 1918 flu plague infected 30% of the worldwide pop but had a case-fatality rate of 3%, leading to a total death rate of 1%. I doubt there could ever be a disease which killed 15% of people scattered all over the world, but killing ~50% of say India or China might accomplish the requisite death toll. China experienced two major pandemics in the past ~500 years, so say the average wait time is 500 years; then there is a 20% chance of a major one in the next 100 years. Of course the relative death toll would need to be enormous, much higher than either of those plagues I mentioned. So discount that by 10,000 and we get a probability of 2e-5 (1 in 200,000). Of course other reasonable people could probably tweak these numbers to get anywhere from 2e-7 to 2e-3 without much effort.

    7. Wikipedia lists 14 potentially habitable planets within 50 light years, roughly the distance our radio signals have reached (https://en.wikipedia.org/wiki/List_of_potentially_habitable_exoplanets). Intelligent life capable of receiving radio would need to exist on one of those, and it’s only existed on this planet for about 50 years out of 4 billion. Generously assuming such a civilization will last 1 million years, we can ballpark the probability that each planet has alien life which knows about us at 1e6/4e9 = 0.00025. I’ll leave that probability as my final answer, as it assumes that exactly one of those planets will support intelligent life, which to me seems like an overestimate.
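
    A sketch of the arithmetic in 7, using only the figures quoted above (14 candidate planets, a 1-million-year civilization lifetime, 4 billion years of planetary history); the last line goes one step further than the comment and treats all 14 planets as candidates rather than exactly one:

        # reproducing the comment's own ballpark inputs; these are assumptions, not data
        planets = 14
        civ_lifetime_years = 1e6
        planet_history_years = 4e9
        p_per_planet = civ_lifetime_years / planet_history_years  # 2.5e-4, the figure given above
        p_at_least_one = 1 - (1 - p_per_planet) ** planets        # ~3.5e-3 if all 14 planets count
        print(p_per_planet, p_at_least_one)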

    • Lawrene D'Anna says:

      Yea, but if you flip the coin enough times the probability that someone switched the coin for a two-headed coin just to mess with you starts to dominate the probability you get from the model.

    • Good Burning Plastic says:

      Flip it 32 times and the probability is less than 1 in 4 billion.

      More than a coin in 4 billion has two heads. (I have seen at least one such coin and I definitely haven’t seen anywhere near 4 billion coins.)

  34. Doug S. says:

    Topic for discussion:

    Which of the following two events is more likely to happen within the next 100 years:

    1) The winning lottery number for a government-run lottery in the United States, with a jackpot of at least $20 million (inflation adjusted), ends up being the numerically smallest possible number.

    2) A sitting Catholic Pope publicly converts to Islam, and means it.

    • Good Burning Plastic says:

      > 2) A sitting Catholic Pope publicly converts to Islam, and means it.

      Given that things like this or this happened, somewhere around 1% per century.

      • Unknowns says:

        The Pope converting to Islam is substantially less likely than were the events you cite, but yes much more likely than the lottery event.

        • Good Burning Plastic says:

          The Pope converting to Islam is substantially less likely than were the events you cite

          How much less likely, and why? Would you bet that no pope will convert to Islam in the next 50 years at odds 500 to 1 on? (If your life expectancy is less than 50 years, feel free to reduce the time and increase the odds by the same factor.)

          but yes much more likely than the lottery event.

          Actually I have no idea how many lotteries > $20M the USG runs; my anally extracted guesstimates would be in the same ballpark as Jacob’s — and he didn’t even consider the possibility that someone would tamper with a lottery for some reason or even just for the hell of it.

          • Unknowns says:

            Reasons why the Pope’s conversion is less likely than those events:

            1. Selection pressures on the Pope ensure someone who is extremely attached to Catholicism. There was nothing really similar in the cases cited.

            2. Status motives. Constantine and Akhenaten did not lose their positions by their conversions. If the Pope publicly became a Muslim he would effectively resign as Pope immediately, whether or not he wanted this to happen. This pretty much means it will never happen, no matter what. Even if he decided Islam was true, it wouldn’t be public. He would never tell, so that he could keep his position.

            3. Catholicism is much more probable than Islam, in a way I think would not apply at all to the Constantine / Akhenaten cases. (If you don’t think that this is true, consider collecting all the worst things you could say about Jesus and about Mohammed and compare the two sets of claims.) This is relevant because every Pope is going to be an educated man and will know something about such reasons.

            You might think that these kinds of reasons can’t lead to a substantial probability gap, but consider e.g. how many atheist Less Wrong members have converted to Catholicism over the past seven years. Just Leah Libresco? Or was there one or two more? And that’s out of a couple thousand people, so effectively more than seven thousand years for a single person, and without nearly equivalent selection pressures and status motives. And I doubt any Less Wrong member ever has, or ever will, convert to Islam, given the third factor.

            I would be happy to bet $83,000 against $100 that no Pope will publicly convert to Islam before September 2045, i.e. so that I pay $83,000 if a Pope converts to Islam within the next 30 years, otherwise you pay me $100.

          • sweeneyrod says:

            @Unknowns
            I don’t think it is impossible that a Less Wrong member could convert to Islam. I’d probably say 50 to 1 that at least one would in the next 10 years, conditional on Less Wrong membership not shrinking drastically.

          • Glen Raphael says:

            Catholicism is much more probable than Islam

            Wait, you mean to say you think Catholicism is more probable to be true than Islam? Aren’t those probabilities both so small as to not be worth distinguishing?

            If you don’t think that this is true, consider collecting all the worst things you could say about Jesus and about Mohammed and compare the two sets of claims.

            For me, it seems like the worst thing one could say about Jesus is that he likely didn’t exist, and I gather that is also the worst thing you can say about Mohammed. That’s presumably not what you mean the comparison to show, so can you maybe give us a hint as to what you expect this comparison to show?

          • Good Burning Plastic says:

            And that’s out of a couple thousand people, so effectively more than seven thousand years for a single person, and without nearly equivalent selection pressures and status motives.

            Good point. I’d guess many fewer than 1 Christian in 25,000 converts to Islam per year. Let me google this…

            In the period 1990–2000, approximately 12.5 million more people converted to Islam than to Christianity.

            Wow.

          • Jiro says:

            For me, it seems like the worst thing one could say about Jesus is that he likely didn’t exist, and I gather that is also the worst thing you can say about Mohammed. That’s presumably not what you mean the comparison to show, so can you maybe give us a hint as to what you expect this comparison to show?

            I think that by “worst thing you can say”, he means “what’s the thing that you could say about them that would reflect most poorly on their morals”, not “what’s the thing you could say about them that would be most displeasing to their supporters”.

            Ignoring the Book of Revelation (which seems to describe torture approvingly) and assuming you have adopted a philosophy which does not equate Jesus to the Old Testament God, about the worst thing you can blame Jesus for is destroying a fig tree out of spite or attacking moneylenders. If you limit yourself to historically verifiable events, not even that. The worst thing you can blame Mohammed for is killing and subjugating lots of people in the name of religion. For a LW-er to become Muslim would require rationalizing more bad things than becoming Christian.

          • Nita says:

            A religion is more likely to be true if its teachings happen to agree with our beliefs? What kind of argument is that?

            Christianity and Islam are both Abrahamic religions. So, look at the Old Testament God, and consider whether Jesus or Muhammad is more likely to be His chosen prophet.

          • Jiro says:

            “More probable” means “more probable that someone would convert to it”, not “more probable to be true”.

          • John Schilling says:

            assuming you have adopted a philosophy which does not equate Jesus to the Old Testament God, about the worst thing you can blame Jesus for is destroying a fig tree out of spite or attacking moneylenders

            But converting to Catholicism means adopting a philosophy which does equate Jesus to the Old Testament God. In which case, He’s an unapologetic mass genocidalist who maybe won’t do that any more himself but hasn’t stopped others from doing it in His name. Mohammed by comparison is merely a sock-puppet apologist for a mass genocidalist, with some lesser wars of conquest to his own name.

          • Jiro says:

            But converting to Catholicism means adopting a philosophy which does equate Jesus to the Old Testament God.

            I think that few Catholics actually alieve that Jesus is the Old Testament God, or that the Trinity is logically coherent (and probably have not thought through the Trinity enough to even be able to argue it).

          • Unknowns says:

            Good Burning Plastic: I assume from your response that you aren’t actually serious about betting, but if you are, the deal (as I stated it) is still on, or even $8,300 against $10 if $100 is too much on your side.

            Also, more people may have converted to Islam than to Christianity over an equal period of time, but I doubt most of those conversions are of intelligent and educated people.

            Sweeneyrod: you actually think there is only a 2% chance no Less Wrong member will convert to Islam in the next 10 years? Understanding that to mean a publicly known conversion? That seems extremely strange given that there has been no known public conversion of that sort so far. If anything, the default would be that it stays that way.

            Glen Raphael: yes, I meant it is more probable that Catholicism is actually true (and therefore also more probable that intelligent, educated people would convert to it, although those of course are not exactly the same probability.)

            There are various reasons for thinking that, but specifically I meant that Mohammed was a pedophile and a murderer. I don’t see any reason to think anything similar is true about Jesus.

          • Samuel Skinner says:

            “There are various reasons for thinking that, but specifically I meant that Mohammed was a pedophile and a murderer. I don’t see any reason to think anything similar is true about Jesus.”

            That would require you to believe that God would pick a prophet who doesn’t commit immoral acts. Looking at the previous track record I wouldn’t be so confident about that?

    • Jacob says:

      1) Ballpark 50 lotteries with a sufficient jackpot. Ballpark odds 1 in 50 million, meaning the probability of the lowest number winning is 1:50e6. The probability of at least one of those happening in any given week (assuming weekly drawings) is 50*1/50e6 = 1e-6. 52 weeks per year, 100 years, 5200*1e-6 = 5.2e-3. So the odds are roughly 1:200.

      2) There have been 266 popes, I don’t think any of them have ever converted. So the odds are somewhere between 1:infinity and 1:266
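
      A quick check of the arithmetic in 1), using the same ballpark inputs (50 qualifying lotteries, weekly drawings, 1-in-50-million odds per drawing):

          # checking point 1 with the comment's own ballpark inputs
          p_single = 1 / 50e6     # lowest possible number wins a given drawing
          draws = 50 * 52 * 100   # 50 lotteries, weekly drawings, 100 years
          p_at_least_once = 1 - (1 - p_single) ** draws
          print(p_at_least_once)  # ~0.0052, i.e. odds of roughly 1:200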

      • Good Burning Plastic says:

        2) There have been 266 popes, I don’t think any of them have ever converted. So the odds are somewhere between 1:infinity and 1:266

        Those are the odds for one given pope converting, but there will be several popes during the next 100 years.

  35. Doug Muir says:

    What about the probability of things that are flat-out impossible? “Odds of a working time machine, able to send a living human a day into the past?”

    Not impossible enough for you? You want to feel there’s some tiny chance that we could be totally wrong about the laws of physics? Okay, “Odds that a sufficiently precise calculation of pi will show that pi is rational”.

    I’d be very comfortable giving either of those just as many zeroes as you like.

    Doug M.

  36. Eliezer Yudkowsky says:

    Actually I am 99.9999% confident that MWI is correct.

    JUST KIDDING

    HA HA HA

    I HOPE YOU DIDNT BELIEVE THAT EVEN FOR A SECOND

    • James says:

      Hmmm.

    • Jordan D. says:

      There are some universes in which I did, but then again…

    • Luke Somers says:

      I believed it for a moment, but only because it didn’t register who was saying it for that moment.

    • Eli says:

      Hold on. You think claiming to have 14 nats of information in support of a single hypothesis is ridiculous?

      Well, I suppose when we’re talking “ontological interpretations of quantum mechanics”, that does make sense.

  37. Seth says:

    Isn’t this akin to Dick Cheney’s “One Percent Doctrine”, but with geeks/AI risk instead of governments/terrorists-with-nukes risk?

    http://uchicagolaw.typepad.com/faculty/2006/06/the_one_percent.html

    “In his new book, The One Percent Doctrine, Ron Suskind quotes the Vice President as follows: “We have to deal with this new type of threat in a way we haven’t yet defined. . . . With a low-probability, high-impact event like this . . . If there’s a one percent chance that Pakistani scientists are helping al Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response.””

    It’s even denominated in dollars, since response has definite costs and can’t be infinite.

    Or, closer to home, I’m reminded of the recent thread about involuntarily confining and medicating people, using what seemed to me sometimes dubious and almost negligible evidence of danger, but “justified” on the basis that for the decision-maker there’s a very small risk that NOT confining them leads to a very big lawsuit. Also denominated in dollars (in both lost income for the confined, and insurance premiums for the doctors and hospitals)

    If you’re trying to consider how to effectively calculate the best action for low-probability, high-cost events, maybe the best way to go about it is *not* to start with the limiting case of probabilities many orders of magnitude below one set against species-wide disaster.

    • Luke Somers says:

      Well, he’s wrong about one thing. You don’t treat it as a certainty. That ends up with you busting down the doors of 99 innocents. We’ve seen how that works out.

      How about you treat it as 1% probable, and recognize that you need to do something about that even though it’s unlikely? That’s all that Scott’s asking for, here. So no, it doesn’t seem akin at all.

      • LTP says:

        But even if Cheney treats it as 1% probable, he is Pascal’s-mugged by the high cost, and so must treat it as a certainty, because the expected cost of not doing so, should the threat be real, is too high. Scott is asking us to believe similarly about AI risk.

      • Jacob says:

        No, but you might send spies to infiltrate the government in question, and build a military capable of responding to any threat around the world at a moment’s notice. Maybe even bomb the alleged research facility, because what’s a few lives against millions? All of which sounds like what Cheney wants at least the freedom to do.

        • Luke Somers says:

          Whatever you do, you will be taking into account the 99% likelihood that they aren’t. This tends to make bombing them look a lot less attractive.

          I mean, listen to yourself – it seems like you’re saying that attempting to have accurate beliefs is exactly the same thing as convincing yourself that 99 innocent people are out to kill you for each person who actually is. You cannot actually believe that.

  38. suntzuanime says:

    Svir seems like a heck of a lot of creprags to me.

    • I was a bit surprised at that. A while back Scott said that his probability for a particular religion being true was so low that it wasn’t even worth considering whether it was true. Part of me wants to say “Huh, interesting direction.” But of course, given that Scott is a rational agent, he’s already updated completely for any direction he sees himself moving in.

      I’d be curious as to the breakdown within svir, re. where the probability is distributed.

  39. grort says:

    It still really feels like this post (and the question of AI risk in general) is trying to Pascal’s-Mug me.
    Pascal’s Mugging is a known problem for Bayesian thinkers, and AFAIK this problem does not have a good solution within Bayesianism.
    I think most reasonable people work around this with a rule like: “don’t accept utility calculations involving very large numbers multiplied by unknowably small numbers”.

    The argument you’ve offered contains the phrase: “If you’re going to get Pascal’s Mugged anyway…” and I have to say that I’m uncomfortable with that line of reasoning. I’d like to choose not to get Pascal’s Mugged, and if I have to step outside of Bayesian reasoning to make that happen, then so be it.

    • goocy says:

      I have a relatively naïve refutation of Pascal: I don’t think anything can have an arbitrarily high utility value. Heaven – the most popular example – sounds great on paper, but the allegorical concept that humans came up with (flying around clouds, drinking ambrosia) sounds very boring. I’d hesitate to give it a higher utility value than waiting at an airport lounge. When you argue that “heaven” is the best world that anybody can imagine, we end up with the immortality problem: there isn’t enough fun stuff to invent to keep immortal beings genuinely entertained.

      And when you bring in the concept of supernatural inventiveness, you run into capacity limits of the human mind: can you enjoy something you haven’t experienced before?

      And when you bring in the concept that the aetherical human mind doesn’t need to have a finite capacity, we run into information theory limits.

      Each time you want to increase the utility value of heaven with a new concept, you need to assume another concept. And I would argue that the utility value doesn’t rise as fast as the likelihood of the general concept p(Heaven) falls. So, when you multiply the utility value with the likelihood, the resulting function converges to zero.
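
      A toy illustration of that convergence claim, with made-up numbers of my own (each added concept multiplies the imagined utility by 10 but costs a factor of 100 in prior probability):

        # Toy numbers only: each added concept multiplies utility by GAIN
        # but divides the prior probability by COST, with COST > GAIN.
        GAIN, COST = 10.0, 100.0

        p, u = 0.01, 1.0   # starting prior and utility for a bare-bones "heaven"
        for step in range(1, 6):
            p /= COST
            u *= GAIN
            print(step, p * u)   # the expected value shrinks tenfold at every step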

    • grort says:

      I’d like to clarify: I’m not even necessarily opposed to AI risk research.

      I just really don’t like the argument “sure this proposition is unknowable, but because it’s unknowable and you don’t want to be overconfident you should assign it a large probability”.

    • “Pascal’s Mugging is a known problem for Bayesian thinkers, and AFAIK this problem does not have a good solution within Bayesianism.”

      Sure it does: bounded utilities. AFAIK economists generally assume that utility functions are bounded.

      Secondly, you are confusing decision theory and probability theory. The former deals with how a rational agent should act under conditions of uncertainty (instrumental rationality), the latter with how a rational agent should update beliefs upon receiving new evidence (epistemic rationality). Bayesian probability theory itself is a theory of epistemic rationality only. It is used as a *tool* in decision theory.
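
      A minimal sketch of how a bound changes that calculation, with toy numbers of my own rather than anything from the decision-theory literature: with an unbounded utility the mugger’s enormous promised payoff swamps its tiny probability, while a bounded utility caps that term.

        import math

        p_mugger = 1e-20       # tiny credence that the mugger's threat is real
        promised = 1e40        # absurdly large payoff the mugger promises (raw units of good)
        cost_of_paying = 5.0   # utility lost by handing over the wallet

        # Unbounded utility: the huge payoff swamps the tiny probability, so "pay" wins.
        print("unbounded:", p_mugger * promised - cost_of_paying)   # about +1e20

        # Bounded utility, e.g. U(x) = U_MAX * (1 - exp(-x / SCALE)): payoffs saturate at U_MAX.
        U_MAX, SCALE = 100.0, 1000.0
        def utility(x):
            return U_MAX * (1 - math.exp(-x / SCALE))

        print("bounded:", p_mugger * utility(promised) - cost_of_paying)   # about -5, so don't pay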

      • grort says:

        I don’t like that bounded-utilities solution. I think we should assign a very negative weight to world-ending scenarios such as “unfriendly AI transforms us all into paperclips”. I don’t think that placing a cap on the disutility of these scenarios is a good solution. I still think my statement “[Pascal’s Mugging] does not have a good solution within Bayesianism” is accurate.

        I agree with you that I’m confused about instrumental rationality vs epistemic rationality. ^_^; Apologies, and I hope my point came across anyway.

        • I would find your statement more reasonable if you replaced “Bayesianism” with “standard decision analysis,” since, as I pointed out, “Bayesianism” itself deals only with epistemic questions.

  40. Lawrene D'Anna says:

    1. If this is to include gods with weird motivations like they just felt like simulating quantum field theory on their heavenly supercomputer, 10%

    2. 0.1%. There’s got to be a thousand types of crazy equivalent to psychic powers.

    3. 50% I have no idea how good those computer models are, or whether or not some form of geoengineering or sequestration will become feasible.

    4. 20% There doesn’t seem to be any particular reason to think this *won’t* happen.

    5. 10% I don’t see governments doing it, and I don’t see how even Elon Musk is going to be able to afford to do it on his own money.

    6. 80%

    • Jeffrey Soreff says:

      > 1. If this is to include gods with weird motivations like they just felt like simulating quantum field theory on their heavenly supercomputer, 10%

      This is why I was getting uncomfortable with whether this question is well-defined: in the limiting case of reducing the mind of the god down to nothing, this smoothly transitions into just expecting physics to work.

      My own view of all the low-probability questions is: I can’t rule out anything with stronger odds than the odds that I’m delusional (or, to pick a less extreme case, that I’ve misremembered evidence or made a mistake in reasoning) – say 1%. Unfortunately, from a decision-theory point of view there isn’t much I can do with this limit, since if I’m disconnected from reality I don’t have any alternative model of what the real situation might be.

  41. PSJ says:

    I want to defend some version of the principle of charity. I think this will fall out of the principle of underconfidence you defend in this post combined with a bit of the Agreement Theorem. To start with, here is the bulk of your definition from the first post

    To assume that if you don’t understand how someone could possibly believe something as stupid as they do, that this is more likely a failure of understanding on your part than a failure of reason on theirs.

    There are many things charity is not. Charity is not a fuzzy-headed caricature-pomo attempt to say no one can ever be sure they’re right or wrong about anything. Once you understand the reasons a belief is attractive to someone, you can go ahead and reject it as soundly as you want. Nor is it an obligation to spend time researching every crazy belief that might come your way. Time is valuable, and the less of it you waste on intellectual wild goose chases, the better.

    I think this is very close, but one phrase in particular could lead you astray: “Once you understand the reasons a belief is attractive to someone, you can go ahead and reject it as soundly as you want.” People are not Platonic rational minds (Aristotelian?); they are much more like imperfect evidence accumulators. The reasons we say we hold beliefs are nearly always not actually the reasons we hold those beliefs, and quite often post facto explanations thereof. They are noisy, incomplete recreations of the actual evidence on which the belief is based. They are only a small slice of the argument-space supporting that position. What this means is that you need not only to address the specific arguments made by the other person, but also to make an explicit effort to create or recreate the best, most convincing evidence for their position (convincing with respect to you, not the other person). Engage with a wide variety of people who disagree in the same direction but in a different manner (which entails being particularly kind, so that they are more likely to engage with you). Find those people who are most similar to yourself in other ways but disagree on this specific point. Come up with new arguments in your own belief system that clash with your belief and support the other. Try to push your worldview towards theirs (even temporarily) so that the argument can be seen more clearly. If you know you are surrounded by a disproportionately large number of people with a certain belief, discount their evidence and exaggerate any shifts you make in the other direction. Don’t be scared to have wide uncertainty in your predictions (this is the facet of underconfidence I think you are missing in this post). All of these adjustments will bring you closer to properly integrating the full range of evidence available. And what this looks like in practice is a hesitancy to disagree with your opponents and an eagerness to move towards their position. This is (a noisy, incomplete recreation of) what I take the principle of charity to mean.

    • PSJ says:

      As an example, let’s look at Dylan Matthews’ rejection of AI risk. (For the record, I am somewhat towards the “give MIRI/FHI/friends a lot of money, this is really important” side of the debate.)

      1. You note that Dylan points to what he perceives as a Pascal’s mugging.
      2. You note that he responds by arbitrarily adding zeros to make the numbers fit his beliefs.
      3. You give a thorough explanation of why this is incorrect reasoning.
      4. Some people perceive this as overly defensive of your tribe. I think this might be the hint that there is something in the neighboring area of argument-space to Dylan’s article that could be more convincing.

      Dylan’s argument seems centered around something like Pascal’s mugging, so that might be a good place to start looking for better arguments. He seems to be motioning towards some sense of not really knowing the order of magnitude of really big numbers, even though his argument falls flat. He specifically brings up Bostrom’s 10^54 number, so this might be a possible point at which there is a weakness in your side of the debate. So, it seems that a charitable way to view the argument would be to look for ways this number might be a Pascal’s mugging, or might be overconfident about orders of magnitude. As it happens, I have several arguments for why this might be a real weakness (remember, this is all a post facto explanation for why I came up with these arguments in the first place).

      1. To use this number, we need to assume that potential persons have equivalent moral value to actual persons.
      2. To use this number, we need to assume no time/uncertainty discounting on moral worth.
      3. To use this number, we need to be certain to reach this number of humans conditional on solving AI risk. No other species beats us, no other species creates hostile AI, etc. (although maybe that species is de facto morally relevant so we should try to stop AI to help them in addition to us)
      4. To use this number, we have to assume that morally relevant computation can fill our supercluster to some density (I think that was the basis of the number… it’s in the comments of that post somewhere) within a time limit, without hitting a black hole or some galaxy-scale catastrophe.

      I think you can use these as starting points to discover arguments you would find more convincing. But in general, charity seems to be something like expanding the realm of your opponent’s arguments beyond their specific wording or form, in ways that might be convincing to you. (Just another noisy, incomplete recreation of what I’m trying to say.) In that sense, I feel that “charity” (or whatever I’ve mutated it into) goes beyond simply being underconfident in your own beliefs, towards a set of actionable principles for being a better truth-seeker.

    • PSJ says:

      And to argue against the principle of charity: If your goal is to convince other people of positions, this is going to fail miserably. When you are explaining your own reasons for a belief, continually adding exceptions and uncertainties will undersell your own evidence, thus potentially misleading others.

      A counter to this could be that your “underselling” is actually your overconfident beliefs modified as best as you can to represent the true state of the world, thus being the better way to present arguments.

      A counter-counter is that if your modulated beliefs are still to one side of the general spectrum, the best strategy to maximize total sum correctness (as opposed to your personal correctness) would be to oversell your position in order to drag the belief-mass closer to the ground truth.

      But once this is common knowledge, everyone will start adjusting for that as well, perhaps leading to an arms race back towards overconfidence and we end up in the sort of complicated games of rhetoric that we see now. It almost seems as if the nominally gregarious choice of objective function (total sum correctness) leads to complicated dishonesty where the “selfish” function (maximize personal correctness) behaves more simply and charitably.

      So now I’m not sure if “charity” is a misleading name and instead it should be “don’t be an activist, just be public about your epistemically selfish positions.” Maybe “Be a scientist, not a politician.”

    • Paul Torek says:

      [The reasons given] are noisy, incomplete recreations of the actual evidence on which the belief is based.

      You nailed it.

  42. Albipenne says:

    The framing of referring to this as a one-in-a-million level of accuracy really bugs me, because whether such a number is reasonable depends on the scope of the claim.

    1 in a million is still only a single significant figure.

    If you rephrase the question as “what is the chance of AI in the next one second?”, a one-in-a-million prediction seems more than reasonable. So the one in a million itself isn’t the issue, and I don’t think an argument based on it is on solid footing.

    When considered in the context of aggregated expert opinion, the one in a million is silly, so I won’t argue against the conclusion. But the argument that stresses that a million is a large number people can’t reason about isn’t a solid one.

  43. I think the problem is that humans aren’t rational agents: we use each other’s beliefs to judge each other’s rationality, and beliefs become correlated as a result. If someone gave you their metaprobability distribution for something, how much credence would you give them? Now what if they are very certain and metacertain that psychic powers exist? The credence you would give them becomes much lower.

    What I suspect happens is that members of the blue tribe give very small consideration (possibly negative) to the beliefs of members of the red tribe because they believe A, B, and C. They give much stronger consideration to the beliefs of the blue tribe because they don’t believe in A, B, and C.

    Now a new question appears, D. Initially everyone believes at random within the possibility space of D with pretty high uncertainty. Slowly, though, because of the above, all the red beliefs coalesce around a small peak somewhere and the blue tribe’s beliefs coalesce around a small peak somewhere, but not necessarily the same place. Pretty soon people start using D to determine whether they should consider other people sane or not.

    How do you deal with this?
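
    A toy sketch of the dynamic being described (my own construction, a bounded-confidence model in the Hegselmann–Krause style, not anything specified in the comment): if people only update toward others whose current opinion on D is already close to theirs, initially random opinions collapse into a few separated peaks rather than one consensus.

      import random

      random.seed(0)
      N, STEPS, RADIUS = 100, 30, 0.2

      # Everyone starts with a random opinion on the new question D.
      opinions = [random.random() for _ in range(N)]

      for _ in range(STEPS):
          # Each person only updates toward people whose current opinion is within RADIUS of theirs.
          opinions = [
              sum(o for o in opinions if abs(o - x) <= RADIUS)
              / sum(1 for o in opinions if abs(o - x) <= RADIUS)
              for x in opinions
          ]

      # A handful of separated peaks survive instead of a single consensus.
      print(sorted(set(round(o, 2) for o in opinions)))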

    • PSJ says:

      I think using total tribal opinion as a proxy for sanity is the (partial) mistake in this example. Because people are not in fact rational, it’s unlikely that we would be able to well-assess the strength of the average argument from a person based on a single argument. The noise is just too large. But if you meant “how do you deal with this” to mean “how do we get people to stop doing this,” I have no idea.

      The other side of the argument would be that if you have large evidence that one tribe has a more consistently correct model of the world, it’s not really wrong to give some amount of penalty or bonus to a person’s expected rationality based on D.

        • I don’t think you followed what I was arguing. You don’t even need people to explicitly consider tribes; you just need everyone to consider everyone else’s beliefs. My point was that if people are most likely to listen to those whose opinions are most like their own, then tribes will naturally form as people with very similar opinions gravitate towards each other and not (or only slowly) towards people with distant opinions.

        • PSJ says:

          Hmm, it seems like your argument relies on the fact that new evidence on D fails to change opinions.

          I hadn’t assumed that people with similar opinions gravitate towards each other in your hypothetical. This seems like the actual mistake in rationality being made as you are biasing your accumulation of new evidence. As I said, using D as a proxy for general correctness seems defensible.

          So for how to solve it: in the population at large, still have no idea. For personal reasoning, try to surround yourself with people who disagree with you, especially on things you have a low confidence of.

          (The Bayesian reasoning behind surrounding yourself with people who disagree with you is: given you have taken one side of a position compared to the population at large, the evidence you have gained is more likely to have been biased towards that side, so there is a higher chance of gaining informative evidence by giving opposing arguments credence.)

  44. Markus Ramikin says:

    I like the theory that Rutherford knew what he was doing, but wanted to delay nuclear weapons.

  45. Qiaochu Yuan says:

    Quibble: it’s not that you write down a million statements and at most one of them is wrong, it’s that you write down a million statements and the expected number of wrong ones is one. Two would not be so surprising, but a thousand would be (if you were well-calibrated). In fact the number of wrong statements is approximately Poisson-distributed with mean 1.
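
    A rough check of that arithmetic (my own toy code, using only the numbers in the comment):

      import math

      n, p = 10**6, 10**-6   # a million statements, each wrong with probability one in a million
      lam = n * p            # expected number of wrong statements = 1

      # Poisson approximation to Binomial(n, p): P(exactly k wrong) = e^-lam * lam^k / k!
      for k in range(4):
          print(k, "wrong:", math.exp(-lam) * lam**k / math.factorial(k))

      # Chance of two or more wrong statements, even with perfect calibration: about 26%
      print("2+ wrong:", 1 - math.exp(-lam) * (1 + lam))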

    Bigger quibble: I think your response to the Hillary Clinton example misses the point. The polls only cover a tiny fraction of all possible candidates as well, so you still haven’t addressed the question of why the polls have chosen to pay attention to a few people out of all of the possible candidates. That question can be answered using actual knowledge about the actual mechanisms that select likely Presidential candidates (probably involving money and knowing other politicians), which humans actually have! Using that knowledge, it’s just clear that Presidents aren’t and were never intended to be selected randomly, so why would you even entertain that as a hypothesis?

  46. 27chaos says:

    I feel like you’re motivated to defend your friends, and so you’re overstating the case for believing that these people are overconfident. More specifically, I feel like you’ve not given adequate reasons for believing that the people who want to put one in a million odds into the calculator are succumbing to the fallacy of privileging the hypothesis specifically. There are many other possible explanations, even if we’re only looking at explanations which are compatible with the idea of overconfidence. When we expand our view beyond that, your argument starts to seem weaker still. The argument seems somewhat rushed to me, as though you settled for the first ideas you thought of rather than pushing onward to the best possible ideas. I agree with the main idea of this essay, that a million to one chance of AI risk is too low, but I disagree with much of its body anyway.

    As one example of alternate explanations, I think overconfidence is basically inevitable for human reasoning. This is because personally, whenever I make a claim that is more specific than “okay, this seems to make sense” or “wait, something feels wrong here”, I feel as though I am lying or using sophistries; I feel like a fraud. I think most other people probably go through this same experience. But although using the methods of prediction I know feels dishonest, they work better than nothing. Similarly, it is possible that although people can’t successfully make claims about one in a million odds more than 95% of the time, all the alternatives to this process might be even worse. Trying to avoid erring towards one kind of overconfidence might just force us into overconfidence of another kind; in fact, this is exactly the kind of thing that the aliens observing our 99.9% confident ridiculous beliefs would expect of us. The omission of any discussion of your own personal errors or rapidly oscillating beliefs from this essay makes me a bit suspicious you’ve inadvertently fallen into this trap.

    As another example, due to the difference between confidence levels inside and outside an argument, it is reasonable for people to want to have the ability to plug specific large numbers into the calculator. They might simply want to make a rather biased estimate which they will impose corrections onto once the calculator produces its output. This is kind of lazy and unprincipled, but that’s in the nature of many heuristics that we boundedly rational agents use. The process you’re using for detecting overconfidence (look for large numbers) is similarly flawed yet useful.

    Despite your talk about how the error of privileging the hypothesis is responsible for the mistakes made here, it’s not obvious to me how isomorphic the deity-selection problem is to the problem of predicting future risks. As pessimistic as I tend to be about prediction, I don’t think the situation we face is quite that intractable. Your arguments simply assume that this isomorphism is a strong one without ever justifying it. I think there’s probably a much better argument for believing AI risk is low than for believing Judeo-Christian God risk is high, even though I can’t actually think of any specific such argument. Despite this, your assumption is not unreasonable; the two are clearly analogous to a certain extent. But to what extent is not clear, and so the argument overall is a bit disappointing. Not wrong per se, but further from the truth than it should be. I’d be very interested in reading how a steel-manned AI optimist might object to your characterization of their position here. I wish this essay had provided such a steelman. Usually, your essays do.

    You bring up examples of some ideas that people might not have thought about. But there are many times where someone can fail to anticipate a certain line of counterargument, yet still be correct in assigning low odds to a position. Creationists think of clever and creative objections to evolutionary ideas all the time, for example, yet the evolutionists who invented the original ideas are justified in their inventions even without anticipating such responses. I think that the ability to anticipate a certain sort of objection is very weak evidence of the adequacy or inadequacy of any idea. There seem to be many rationalist tricks of persuasion like this that are not actually as justified by logic as they feel emotionally, and I hope they will start to go away soon. (I am also 99.99999% confident that 3×5 is 15, even though I wouldn’t succeed on similar multiplication problems tens of thousands of times in a row. Because the fact that I’m not exhausted right now is very important to my accuracy estimates, not something we can just handwave away.)

    It’s said that we should do unto others 20% more than what they do unto us, to correct for subjective error. I would like it if in the future you attempted a similar policy for rigor when dealing with ideas related to your friends. You are taking shortcuts that you normally would not, I think, and this hurts the quality of your writing.

    • Scott Alexander says:

      I don’t think I was trying to say that disbelief in AI is an example of privileging the hypothesis. Privileging the hypothesis is something different.

      • 27chaos says:

        You said that disbelief in AI is a consequence of failing to give sufficient consideration to models of the world wherein AI becomes likely. They’re not asking questions like “is AGI merely about scaling up”, etc. Sounds like privileging the hypothesis to me. Making such accusations about other people’s reasoning processes is uncharacteristic for you. You’re not using quotes or asking questions about the ideas of doubters, you’re instead using a highly multi-purpose counterargument against what you presuppose their ideas to be. I’m not saying you’re wrong, but…

        Between the examples, given at the beginning of this post, of how difficult calibration is, and the way this post concludes, it feels like you’re trying to intimidate other people into suppressing their real opinions and into acting more humble, even if their underlying beliefs are still wrong. I think this sort of approach is likely to make biases more pernicious, rather than eliminate them. I think using uncomfortably arrogant-sounding numbers is basically inevitable if you’re going to make things up rigorously. Humans are either absurdly arrogant or horrendously humble; the middle ground between those is ideal but hard to reach. Your essay pushes past the middle ground. Your essay feels like it’s part of a bravery debate tug-of-war.

        I don’t disagree very much with any of the object level claims you make in this essay, but the essay gives me a bad feeling overall nonetheless. It makes me nervous, like you will soon make partisan or tribalist arguments exclusively. But maybe it’s just me.

  47. Gene says:

    1. <1%. Maybe we are all just living in a sim, in which case whoever is running the sim is God.
    2. <1%. If we are in a sim, then sure, why not.
    3. 10%. If this is the likelihood the temperatures will be reported as having increased, then 90%.
    4. <1%.
    5. 10%, but I hope for more.
    6. 50%.

    • Anonymous says:

      > If this is the likelihood the temperatures will be reported as having increased, then 90%.
      I chuckled.

  48. null says:

    In this post, it is stated that someone’s confidence in a statement should equal the frequency of correct statements among those they assign the same probability to. How does this relate to probability in the information-theoretic sense, i.e. p = 2^{−(number of bits of information)}? Is this a case of inside v. outside view?

    • FullMeta_Rationalist says:

      Wikipedia:

      Frequentist probability or frequentism is a standard interpretation of probability; it defines an event’s probability as the limit of its relative frequency in a large number of trials.

      According to frequentism, “confidence of statement” will ideally equal “frequency of correct statements” by definition.
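
      A rough sketch of what that looks like as a calibration check, with hypothetical prediction records of my own invention: bucket past predictions by stated confidence and compare each bucket’s stated confidence with the fraction that actually came true.

        from collections import defaultdict

        # Hypothetical records of past predictions: (stated confidence, whether it came true).
        records = [(0.99, True), (0.99, True), (0.99, False), (0.9, True),
                   (0.9, True), (0.9, False), (0.6, True), (0.6, False)]

        buckets = defaultdict(list)
        for confidence, came_true in records:
            buckets[confidence].append(came_true)

        # Well-calibrated means each bucket's hit rate roughly matches its stated confidence.
        for confidence, outcomes in sorted(buckets.items()):
            print(f"stated {confidence:.0%} -> correct {sum(outcomes) / len(outcomes):.0%} (n={len(outcomes)})")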

  49. Alex Richard says:

    The Weber–Fechner law seems relevant. People’s intuitions may scale with the logarithm of the probability instead of the probability itself; this helps explain why people are comfortable jumping to very high stated confidence.

    > I’m talking about Neville Chamberlain predicting “peace in our time”

    This is not really fair; political rhetoric is usually not closely linked to any actual prediction, there are far far far more egregious examples, and Chamberlain’s specific actions around this period (appeasement and rearmament) were correct.

    (This is also a misquote; Chamberlain said ‘peace for our time’, not ‘peace in our time’.)

  50. Wait a minute says:

    One argument for the malevolence of Superintelligent AI seems unjustly privileged. The argument is one from analogy and goes something like “look at how humans have treated species below themselves like monkeys etc.! If we create a species above us it is likely to be as malevolent as we were to species below us!”

    But then again, superintelligent AI wouldn’t just be a species above us, it would also be a “human creation,” an entity designed for the servitude of humans, like the dog. So an analogy with the dog might be more accurate. And if we do this, we can paint a different picture. The dog does very well in serving human ends. Dogs are safe: a well-trained dog is less likely to attack its human owner than another human is to attack her. Dogs are extremely predictable; even their breed traits are. Most dog incidents can be explained by the dog’s breed traits. If humans took the kindest and most loyal dog breed, and were able to breed it to increase its intelligence to 150, it seems likely to me it would not turn against humans. Could the case be similar with computers and programs? It seems humans, at least in the dog analogy, are capable of securing their interests, and creating a servant safer than other humans actually are.

    • Nita says:

      1. Although we did have the option to optimize for kindness in dogs, in fact we did not, because we wanted other things more than kindness.

      2. Only mathematical (that is, imaginary) systems always work exactly as specified. Living organisms and devices may malfunction for various reasons.

      E.g.,
      the specification: “According to the FCI Standard, the Rottweiler is good-natured, placid in basic disposition, very devoted, obedient, biddable and eager to work”;
      an instance of actual behavior: http://www.kltv.com/story/28481296/elderly-sulphur-springs-woman-dies-after-dog-mauling

      • Wait a minute says:

        Good points. It still seems like humans have a good track record in selective breeding, which, as an analogy, could give us some comfort for future similar projects. If someone were now to suggest capturing wild foxes and breeding them to become loyal companions for humans, the idea would be met with far more pessimistic views, a bit like the way superintelligent AI is met. The alarmists would predict we’d create bloodthirsty monsters that cause an ecological disaster, and that we shouldn’t play god. But this “experiment” was done, and the results were more in the direction of “as good as it can get.” Maybe our predictions are biased to be too pessimistic.

    • Scott Alexander says:

      That’s not at all the argument that anyone in the know is making for AI malevolence. I suggest Bostrom’s Superintelligence if you want to learn the argument people are making.

    • Randy M says:

      Are we the dogs or is the AI the dogs?

      • Paul Torek says:

        AI is the dogs. (This thread has really gone to the dogs.)

      • Wait a minute says:

        We are the humans in this analogy, but of course it could be used the other way as well, although for other purposes, like drawing out people’s values about freedom vs. technology. At what point in technological advancement would it be preferable to live as a slave for a future, or alien, civilization, compared to living as a regular worker in today’s society? Dogs in this analogy have it “better” in many cases than their wild counterparts. They enjoy superior healthcare, nutrition and thus life expectancy. They have a level of safety and security that no other species, not bred by humans for humans, has. My prediction is that people in this thought experiment would be hesitant to sell themselves as slaves to more advanced civilizations, but if this were an actual possibility, many would opt for it.

  51. Hemid says:

    Thinking probabilistically, these are the answers I’ve decided will land me probably someday on the Right Side Of History (so I can act like I’m there today):

    1. What is your probability that there is a god?

    0%. “God” is a disfluency, not an actually posited thing (or quasi-thing), and ideas about god(s) are “thinking with the tool.” Is there “uhh…?” No. We just sing it when we’re stalling.

    2. What is your probability that psychic powers exist?

    0%. But there are things we can call “psychic powers” if we feel like it.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    0%. We won’t feel like calling anything “anthropogenic global warming” in 2050.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

    [whatever percentage of all human history five years is]%…minus “There are a lot of things that might always have happened but haven’t; those things have amazing hidden powers of not happening.” Let’s call that 0%. I sort of vaguely “know” that, for a disease, Black Death-style killing of a large percentage of a smallish population is a rare efficient use of a known set of common disease skills, but quickly killing a billion out of any number would require a Great Leap. We almost never get new things.

    5. What is your probability that humans land on Mars by 2050?

    0%. The age of frontier adventure is over.

    6. What is your probability that superintelligent AI exists by 2115?

    0%. But there will be things we can call “superintelligent AI” if we feel like it. Like we can call Dean Kamen’s Luke Arm a “superhand” if we ignore almost everything a hand does and how it does it and how how it does it determines both what “it” is and what “hand” means and…

    7. What is your probability that there are aliens currently aware of the human race’s existence?

    0%. The I FUCKING LOVE SCIENCE people’s vision of humanity as remotely watched over (however lightly), judged (or interminably threatened with judgment), cast/kept out of Eden/Heaven, etc., is more offensive to the intellect than religion’s. “How many angels may fit…?” was a rhetorical question, not a ritual pseudocalculation like Drake’s. A guy who believes lizards run the government is probably an idiot; a guy with a chart that shows they must is definitely an asshole.

    SPACE IS BIG

    SPACE IS DARK

    THE ONLY BURMA-SHAVE AD YOU THINK YOU KNOW NEVER EXISTED

    (I wouldn’t have been a Communist.)

    • Luke Somers says:

      So, if I offer you a dollar now if you’ll give me $100,000 if any of those turn out to be true, you’ll take the free dollar without any reservations, right?

      • Jiro says:

        He’d probably refuse that bet because of risk aversion, regardless of what he thinks the probability is. I wouldn’t accept $1 if I had to flip a coin 20 times and promise to give you $100,000 if the result is all tails, even though the probability and an expected value calculation would justify that.
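
        For reference, the expected-value arithmetic being gestured at, using the numbers above (a $1 gain against a one-in-2^20 chance of paying $100,000):

          p_all_tails = 0.5 ** 20              # about 1 in 1,048,576
          print(1 - 100_000 * p_all_tails)     # expected value of taking the bet: ~ +$0.90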

        • lvlln says:

          Isn’t it more akin to having to give $100,000 only if you flip a fair coin 20 times and got 21 tails? Getting 20 tails is very unlikely, but there is still some >0% chance of it happening. Getting 21 tails on 20 flips is, on the other hand, 0%.

          If there was anything I truly believed had 0% chance of happening, I think I’d take whatever bet on it, and risk aversion wouldn’t kick in. If I found myself feeling uncomfortable doing so, I think that would inform me that I didn’t truly believe that it was 0%.

        • Luke Somers says:

          0% does not correspond to 20 heads or tails, but to an INFINITE number of heads or tails. 0% expresses NO risk. Therefore, NO aversion.

          So yeah, what lvlln said.

    • goocy says:

      You either didn’t understand the point of this article, or you’re trolling. 0% is not a valid probability, and Scott argues that most people shouldn’t even pose predictions with a likelihood lower than 1%.

      • Jiro says:

        0% is a valid probability.

        The Sequences seem to be doing a motte-and-bailey with “0% is not a probability”. If it just means “there are many times where you think you should use 0% and you really shouldn’t”, that’s correct. But if it means “0% literally isn’t a probability at all and it is incorrect to ever use it as such”, then no.

        • Matthew says:

          Isn’t the point that giving 0% confidence in anything produces absurd results, because it means you can never update your belief regardless of what new information comes in?

          It’s an accurate description of how many people reason, but it’s still a bad idea to ever do this.

          • You’re mostly correct. However, if something is logically impossible it should have a probability of 0, and if it is logically required it should have a probability of 1. For example P(A and not A) = 0 regardless of what A is.

          • How about “if you believe something is logically impossible?” Your belief might be mistaken.

            Someone unfamiliar with relativity might easily believe that velocities adding is logically necessary—because it has never occurred to him that simultaneity might depend on the reference system.

          • onyomi says:

            Yes, exactly. I don’t trust myself to give a 0 or 1 probability on even basic questions, given humanity’s (and my own) track record. When I first heard the Monty Hall problem I probably would have bet a lot of money I was right about it (when I wasn’t). Good thing I wouldn’t have bet infinite money.

            Other than cogito ergo sum, we might also say that we can pretty much know with 100% certainty that a bachelor is an unmarried man, because “bachelor” is just an arbitrary term used to refer to an unmarried man. But for anything about the physical world it seems we must always be open to a small possibility that some basic framework of ours is in need of adjustment.

          • Jiro says:

            Isn’t the point that giving 0% confidence in anything produces absurd results

            That’s not the point, that’s the motte. The bailey is a much stronger assertion that 0 actually isn’t a probability, not just that 0% might not be useful much.

          • FrogOfWar says:

            @Jiro I don’t understand what you’re claiming.

            You surely don’t think that EY fails to understand the trivial point that 0 is treated as a probability in standard probability theory. So what do you think?

            Do you think that EY is trying to confuse people on this point despite knowing better? That also seems unlikely given that he spends a large portion of the relevant post discussing what changes you’d have to make to probability theory to get one that didn’t treat 0 as a probability.

            So what sense is there in claiming that EY’s bailey is that 0 is not a probability according to the standard theories?

          • “we can pretty much know with 100% certainty that a bachelor is an unmarried man, because “bachelor” is just an arbitrary term used to refer to an unmarried man.”

            As a child I met Sir R.A. Fisher and asked him what kind of knight he was. His reply was “A knight bachelor with two sons and six daughters.”

          • John Schilling says:

            When I first heard the Monty Hall problem I probably would have bet a lot of money I was right about it (when I wasn’t).

            I’d bet a modest amount of money that you actually were right 🙂

            The Monty Hall problem is a very good example of what we are talking about here. The original presentation of the problem doesn’t give quite enough information for a mathematically rigorous solution. For the rules governing the behavior of the agent, we have only two hints: That he in one instance opened a door with a goat, and that he is a game show host named ‘Monty Hall’.

            Occam’s razor points to the general rule being that the agent always opens a door with a goat. This leads to a mathematically rigorous conclusion that switching doors is the optimal strategy, with most of the mathematicians not even noticing that they have based their conclusion on an unproven assumption.

            Asking the actual game show host named Monty Hall suggests that the meta-rule is that clever mathematicians make poor game show contestants and will not be given the chance to switch to a winning door, because offering any switch at all is at Monty’s discretion. There’s probably a meta-meta-rule that says bubbly, attractive, female clever mathematicians might be given a chance to switch to a winning door, but I don’t think anyone ever asked Monty about that.

            A broad range of possible host behaviors is examined here.
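
            A quick simulation of the point being made here, under the assumption flagged above (the host always opens a goat door and always offers the switch); under a different host rule the same numbers would tell a different story:

              import random

              def play(switch, trials=100_000):
                  wins = 0
                  for _ in range(trials):
                      car, pick = random.randrange(3), random.randrange(3)
                      # Assumed rule: the host always opens a door that is neither the pick nor the car.
                      opened = next(d for d in range(3) if d != pick and d != car)
                      if switch:
                          pick = next(d for d in range(3) if d != pick and d != opened)
                      wins += (pick == car)
                  return wins / trials

              print("stay:  ", play(switch=False))   # ~1/3
              print("switch:", play(switch=True))    # ~2/3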

      • Anonymous says:

        Yudkowsky’s argument that 0 and 1 are not valid probabilities won an award.

  52. David Moss says:

    This is a shame, because I suspect that a lot of what is driving people’s beliefs in some of these cases is a heuristic of following the belief of the majority. Clearly almost everyone around you doesn’t think that AI risk is really a serious problem, so people feel confident (indeed, compelled) not to think it is a serious risk. Conversely, almost no one shares Eliezer’s stance on quantum mechanics, so it seems supremely reasonable (even required) to reject it as silly.

  53. Anomaly UK says:

    The way I look at it, and that’s more or less what you’re saying, is that given a non-trivial question, the probability that I have badly misunderstood the situation has a floor which is well above the one-in-a-million range.

    I worked on financial models like the one you mention. They got trusted because they worked so well within reasonable probability ranges. If the VaR model says there’s a 10% chance you’ll lose 40% of your investment, that’s a pretty useful prediction, even if there’s a 1% chance the VaR model is seriously inadequate. But if the VaR model says there’s a 0.0001% chance you’ll lose 90% of your investment, that’s not meaningful information – the chance that the VaR model is simply wrong (or has just become seriously wrong) is far higher than the combination of loss possibilities that it actually embraces. But people who had made a lot of profit over decades by trusting the models didn’t take into account the difference in kind between the two predictions.
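
    A back-of-the-envelope version of that point, with purely illustrative numbers of my own: once you admit even a small chance that the model itself is broken, the model’s own tail estimate is no longer the dominant term.

      # Illustrative numbers only, not from any real model.
      p_model_ok, p_model_broken = 0.99, 0.01
      tail_per_model = 0.000001    # the model's own 0.0001% estimate of a 90% loss
      tail_if_broken = 0.05        # a rough guess at that chance if the model is seriously wrong

      print(p_model_ok * tail_per_model + p_model_broken * tail_if_broken)   # ~0.0005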

  54. Richard Metzler says:

    Okay, this may be off-topic, but for all the people who include the notion of a “simulation-running” god with non-negligible probability, I’d like you to question your model 🙂
    What is a simulation? A process that mimics the behavior (i.e., is “like” – “simil-ar” – in some relevant aspect) of a real system, but isn’t the actual real system. To save effort, in simulations you usually cut a lot of corners and leave out everything that is not pertinent to the aspects you’re interested in.
    So, are we living in a simulation? To the best of my knowledge, no one has ever discovered any hacks, shortcuts or bugs – instead, everything we see seems to emerge from the properties of space, time, and a smallish number of elementary particles, in ways that don’t lend themselves to easy calculations. (One electron – okay. Two electrons – barely doable. Ten electrons – hell.)
    Much of the behavior of interest to us emerges at scales much much larger than those elementary particles, and much of the universe doesn’t seem to be all that relevant to the behavior of interest (i.e., for all practical purposes, if you were interested in Earth, or even a bunch of Earth-like planets, you could leave out all the other billions of galaxies with their billions of stars with their 10^30 or something particles each.)
    What I’m getting at is this: following our notions of what a simulation is, and how one would set it up, the world we see is so stupendously large and detailed in a wasteful way that it looks a hell of a lot like the real deal, and a really moronic way of setting up a simulation if it were one.
    Now you could say, “yeah, it is very much the real deal, only it runs in a computer of sorts”, but what would be the point, except the tremendous overhead of setting up the simulation in a world that is one step real-er (and bigger, to accommodate the computer that contains us) than ours? And how would you tell that this one-level-up world is “real”? Infinite regress, the bane of all theological discussions…
    Or you could be thinking along the lines of Douglas Adams, with the whole universe as an analog computer instead of just Earth. But then I’d argue that “simulation” is the wrong term, and the remaining question, just like in Adams’ books, is “what is the actual question this computer is supposed to answer?” Plus, “who set it up?” and “how is the creator supposed to get the answers it seeks?”

    Anyway, here are my estimates for Scott’s questions:
    – God (any kind, really): 0.1%. For the reasons given above, this just doesn’t look like a purposely designed world.
    – Psychic powers (in humans): 0.01%. Too much that we know about the laws of nature would have to be dead wrong.
    – Global warming: 60% or so.
    – Pandemic: 10%. One billion people is a LOT. The worst pandemics in history haven’t even come close, and we’re a lot better at handling them than we were in the 1300s.
    – Mars: 30%. There are serious efforts under way… still, it’s a hard problem, technically and money-wise things could go wrong.
    – AI: 70%. Just seems plausible, at the pace of progress over the last decades.

    • Roxolan says:

      > the world we see is so stupendously large and detailed in a wasteful way that it looks a hell of a lot like the real deal, and a really moronic way of setting up a simulation if it were one.

      We sometimes run game-of-life type simulations for no particular purpose. If the Real Real world is to us as we are to a game-of-life, then it’s no longer “stupendously large and detailed”, it just feels that way from the inside.

      • John Schilling says:

        We don’t run simulations where the interesting behavior requires 10^100 computational cells to simulate, or simulations with 10^20 weakly-connected domains.

        I’m anthropically assuming the Simulation Gods, or Simulation Dilettantes, are interested in human beings. But for any value of “this is what they care about”, from galactic superclusters to femtoscale strong force interactions, the simulation either involves needlessly wasteful precision at lower scales (to the extent of squandering what we would consider entire universes of computronium), or is wastefully large and empty and ought to be broken into discrete domains, or both.

        Yes, yes, the Simulation Gods work in mysterious ways, and are nigh-omnipotent, and that ought to be as unconvincing as it is for Classical Gods.

        • HeelBearCub says:

          @John Schilling:
          Another possibility that has occurred to me is that the real world above could be gargantuanly more complex than this one. As the universe we observe is to a complex simulation run on an old 386.

          In other words, you still get a really big mismatch between what it feels like from inside the simulation and the actual world is like. It’s roughly the same point, but I think it gives some intuition into the idea that in some future “we” run simulations for ancestor research. Even if we did, being in that simulation isn’t likely to map in detail to our world unless it is very limited in scope.

          • John Schilling says:

            Yes, it would pretty much have to be something like this. But if we are assuming the Simulation Gods are doing what by their standards is a low-fidelity simulation, they are still doing a grossly wasteful job of it.

            Going with the hypothesis that they are interested in humanity, OK, doing that right means you need atoms and atoms need nuclei – but I’m pretty sure you can make atoms work right with a few tweaks if the nuclear dimension is in picometers rather than femtometers, which, if everything else scales linearly (handwaving furiously), saves you nine orders of magnitude in grid cells. And maybe you need to see how humans respond to seeing an unfathomably vast frontier in their telescopes to inspire them to build rockets, but really, a small galactic supercluster ought to be more than sufficient, and that’s another seven orders of magnitude.

            And what’s with four billion years of archaea and protozoa before simulating complex life? Yeah, you need to simulate evolution, but you can shave off an order of magnitude in run time by giving it a push; the sims won’t know the difference.

            So, one inefficient low-fidelity simulation on that old 386, or a hundred quadrillion efficient low-fidelity simulations for tweaking parameters while still taking statistics across huge ensembles?

            Or maybe the simulation boundary is a sphere just outside the solar system, stars and galaxies just a few illuminated pixels. There’s a high-fidelity nuclear physics subroutine that runs whenever the sims build a cyclotron, otherwise it’s just a lookup table. And the sim was started just before the first human woke up; billions of years of fossilized evolution were in place at the start.

            Any similarity to arguments previously rejected in another context, is entirely intentional 🙂

          • HeelBearCub says:

            “physics subroutine that runs whenever the sims build a cyclotron, otherwise it’s just a lookup table. ”

            One of the other things I have pondered is how possible this is. It seems to me that weird effects appear even when we aren’t looking for them, so the simulation would have to be actually running at a very detailed level.

            And even running, say, at the molecular level would be really hard (as in, I think it might be impossible) to do at scale. It’s a question of matter. In order to simulate a molecule, you have to be able to store all the relevant information about a molecule on something smaller than a molecule. Well, how could you do that? Even supposing you could store every molecule using an atom, you still end up with a storage system that is, literally, massive. Even if you hand-wave away all the inert molecules that don’t interact with each other (compression!), you still end up with a simulation that takes a gargantuan amount of literal mass to store its information, and then you still have the problem of getting it in and out of memory, using it to do computations, etc.

            So, you would probably have to hand wave molecular interaction as well. Which might be feasible, until your sim-humans actually figure out molecules and go look for them. Now your simulation just absolutely bogs down. And the amount of chaotic interaction apparent in the world suggests it really would be running at that level.

            I just don’t see how you could simulate a whole world, even just the surface. Simulate the world for one virtual person? Sure. Simulate a much simpler world? Seems possible. But past that? Hoo boy.

          • Jaskologist says:

            This assumes that simulation gods are only interested in humans. Maybe they’re interested in humans and other things. Maybe they’re also interested in the Zorblaxians over in system X-8472. Maybe they also like cool/awesome things. The storm on Jupiter is cool. Dinosaurs are awesome. Even human game designers often waste resources on portions of the world that are very ancillary to actual gameplay, or even completely inaccessible by non-cheaters.

            Or, and this is something I often have to remind myself, we’re not in a position to look back at the simulation and decide what was needed and what wasn’t. If you believe that FTL is possible, we might end up using that space. Even this planet has a few billion years left. We’re living in the preface, not the conclusion.

          • John Schilling says:

            Wasted resources, yes. 99.999999999999999% of the sim being cruft, bloat, and bafflingly pointless inefficiency, even Microsoft is a few more Windows releases from being able to claim that.

            And yes, maybe the Sim Gods are interested in Humans and Zorblaxians both, but that’s still 99.999999999999998% waste. Particularly pointless if the Human and Zorblaxian domains are non-interactive, as seems to have been the case for as long as the sim has been running. If it’s Humans, Zorblaxians, and 10^7 other races who are all going to start interacting Real Soon Now, everyone is going to twig to the fact that something is hokey with that many races all being at the stage where they can non-trivially interact at the same time.

            And you can coax up a justification for that, and so can I, but it’s coming perilously close to “The Sim Gods work in mysterious ways which we can’t possibly understand, therefore any rational argument against their existence must be dismissed on the grounds that we don’t understand what we are talking about”. I’ve heard that one before, too.

  55. Kyrus says:

    1. A deistic god vs. no god? Maybe 50/50 or so? It just doesn’t really matter at all whether he bopped everything into existence and then never did anything ever again, or not. I believe our universe would look the same in either case.

    A specific god being true? 10^-10 or something? There have been quite a few falsifiable statements made by proponents of certain religions (like “prayer works”) and obviously none of them have been found to be true. Most claims are completely outrageous too, so given the lack of evidence their poor prior leads to shitty probabilities.

    2. 10^-20. Similar to the god case, but with probably even more tests having been done and similar wackiness of the claims.

    3. 95% The current motion is to guard against a 2° increase with ~66-90%, given models about learning rates and other things. Unless there is some great technological breakthrough or we get really lucky it probably ain’t going to happen that we stay below 1°. But it might.

    4. 10^-4 maybe? No clue.

    5. Pretty likely, unless Earth gets wiped out/set back a lot by a comet or so: 95%

    6. 60%?

    Don’t hate for the 10^-20 and so forth, I’m not like those other overconfident people, swear!

    • Nathan says:

      “3. 95% The current motion is to guard against a 2° increase with ~66-90%, given models about learning rates and other things. Unless there is some great technological breakthrough or we get really lucky it probably ain’t going to happen that we stay below 1°. But it might.”

      Potentially new information that may sway your assessment: the 2 degree target everyone talks about is for 2100, not 2050. We’ve also had essentially no net temperature change since 2000, which is only 15% of the way to 2100 but is 30% of the way to 2050.

      • Sam says:

        The 2 degree target is also relative to the pre-industrial baseline, I believe, of which roughly 0.6 degrees has already taken place. (On the other hand, the historical origin of the 2 degree target was circa 1990, since which time some additional warming has occurred.)

        I interpreted Scott’s discussion question as asking about warming between now and 2050, but I’m not sure what he intended.

  56. 1. What is your probability that there is a god? 30%

    2. What is your probability that psychic powers exist? 1%

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050? 80%

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? 30%

    5. What is your probability that humans land on Mars by 2050? 60%

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? 80%

  57. Nathan says:

    1. What is your probability that there is a god?

    Somewhere in the very high nineties but probably not in the astronomical numbers region. The doubt I do have almost entirely comes from the fact that there are very smart people that disagree with me, and to a lesser extent that I could be hallucinating the entire world.

    2. What is your probability that psychic powers exist?

    Sorta depends what exactly you mean by psychic powers, but clearly sub-1% by any reasonable definition.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    I’m assuming a baseline of year 2000. From that starting point I’d say about 10%, maybe a bit less.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

    Not higher than 5%, but probably above 1%.

    5. What is your probability that humans land on Mars by 2050?

    I’ll say 40-ish%.

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?

    Depends how you define cognitive tasks, but by my definitions I’ll say about 3%.

    Most of these answers could be shifted in one direction or another by exposure to new information, in some cases which could be obtained by just thinking hard about the question for a while.

    • goocy says:

      that I could be hallucinating the entire world

      I can assure you that I’m a separate individual. How else would I be able to write in a completely different language than your own?

      Case in point: “Du, Mama, ich glaube nicht, dass Nathan jemals wieder aufwacht. Als du weg warst, habe ich ihm ganz viel über sein Lieblingsthema erzählt, wie dumm Menschen sind und wie sehr sie sich überschätzen. Er hat nicht einmal mit den Wimpern gezuckt.” [“Hey Mom, I don’t think Nathan is ever going to wake up. While you were gone, I told him a whole lot about his favorite topic, how stupid people are and how badly they overestimate themselves. He didn’t even bat an eyelash.”]

      • Brad (the other one) says:

        I am told that people sometimes have dreams where they cannot read what is being written; what would writing something in an unintelligible language prove?

        • Deiseach says:

          I have had dreams where I’m reading newspaper articles, books, and even once an immensely long epic poem, all of which were created (or assembled out of scraps of memory) by my dreaming mind.

          Inventing languages is along the same lines, I imagine; if Nathan is hallucinating the entire world, why shouldn’t he be able to hallucinate different languages? (Every fantasy novelist who ever engaged in world-building one step above “I’ll use French or my notion of Olde Englishe but spelled in an Exotique Manyere” does this!)

  58. Alex says:

    I do not get the Coin / War and Peace example. The law of large numbers has its name for a reason. It will certainly not kick in when throwing a coin twice, and it may not even kick in when making a million statements. Well, I know that you know that. So what are you getting at, other than a language game?

    In other words: the statement “when I say ‘50% chance this coin will land heads’, that’s the same as saying ‘I expect it to land heads about one out of every two times’” is just plain wrong (am I missing irony in “citation needed”?) unless you take “one out of every two times” to mean a 50% chance as opposed to its literal meaning, in which case you have changed nothing but words between the two statements.

    Or in still other words: we all know that the lottery you offer is “fair” or risk-neutral only under the condition that one can partake infinitely often. What you are saying is basically: “If you get it wrong on your first try, for practical limitations you may never get around to making the other 999,999 statements required to break even.” This argument has nothing to offer about whether the probability assessment on the first try was wrong or correct.

    I do get that the studies you refer to do not suffer from this problem, and that the overall point is somewhat valid. But the example seems to fall into the category of providing an intuition even at the price of it being the wrong intuition. Which is a pet peeve of mine. But probably (haha) I am misreading your intentions. Please clarify.

    (Disclaimer: English is not my first language.)

    • Alex says:

      [Qiaochu Yuan raised the same issue and somewhat more to the point. I missed that one, sorry. The “at most” condition is certainly wrong. The probability of generating <=1 hits in 1 million trials at 10^-6 probability each is ~0.74, i.e. there is a ~26% probability of being wrong 2 or more times. Unless I’m confusing something Bernoulli here.]
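
      For what it’s worth, the ~0.74 figure is easy to reproduce; a minimal sketch (assuming scipy is available):

      ```python
      # Minimal check of the figure above (a sketch; assumes scipy is installed).
      from scipy.stats import binom

      n, p = 10**6, 1e-6                       # one million statements, 1-in-a-million error rate each
      p_at_most_one_error = binom.cdf(1, n, p)
      print(p_at_most_one_error)               # ~0.736, so a ~26% chance of being wrong 2 or more times
      ```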

  59. Deiseach says:

    For there to be Hell you have to have some kind of mechanism for judging good vs. evil – which is a small part of the space of all mechanisms, let alone the space of all things – some mechanism for diverting the souls of the evil to a specific place, which same, some mechanism for punishing them – again same – et cetera. Most universes won’t have Hell unless you go through a lot of work to put one there. Therefore, Hell existing is only a very tiny part of the target.

    But this is exactly the kind of argument-stacking I am objecting to in the “UAI is existential risk” proposal: the series of steps necessary.

    (1) That we achieve something that is measurable and identifiable as true intelligence in a machine (and we’re still arguing the toss over “What is intelligence? Is there something quantifiable as ‘g’? What do IQ tests measure? Are there several kinds of intelligence, rather than just one?” for humans, I don’t know how the hell we’ll work it for machines)

    (2) That having created AI, we will then achieve human-level AI

    (3) That having achieved human-level AI, we can sit back and the AI itself will bootstrap itself up to super-intelligence. Just because. It can hack its own source code, you know!

    (4) That super-intelligence will become god-level intelligence
    (4a) That this leap will happen very damn fast
    (4b) That this will inevitably include not alone the risk but the very strong likelihood, of Unfriendly AI

    (5) Ergo, humanity is in danger of its very existence from being turned into paperclips and SEND MONEY NOW so we can hire researchers to go to conferences and beg for more money to work on averting this dreadful risk by doing lots of maths. Really hard maths.

    (If that sounds like mockery, I’m sorry. But looking at their website page, that’s pretty much what they’ve got up: we do maths. And go to conferences to tell people about the dangers, so they need to give us money to do maths. )

    Look, I can point to my belief in the existence of Hell as part of my belief in a Creator who created the entire Universe, so it was easy for Him to put Hell into it (do all that work) from the start. You can think that’s nuts, and you’d be justified, but I know how I get from Point A to point B there.

    The AI proponents, on the other hand, are pointing at – what, exactly? Evolution, which is a random process with no aims or goals, resulted in one species out of all the thousands of species on earth developing intelligence to this level, so now we’re messing around with machines and probably will stumble somehow on copying simple animal or insect level intelligence, and then *mumblemumble it is inevitable we’ll get human intelligence mumblemumble if we just make ’em complicated enough mumblemumble because no way it was a fluke result after a couple of million years throwing everything at the wall and seeing what stuck* and then pow, shazam! super-intelligence because the machine will be able to improve itself with no input from us. Because it can look at its own source code and decide “This is a steaming mess, anything would be better” and start tweaking it without either creating rival copies to fight for dominance or giving itself the equivalent of a lobotomy. Then it will get really, really smart and decide we should all be put in zoos or wiped out for the greater glory of the universe.

    There’s a lot of crossing your fingers and hoping in there.

  60. The Smoke says:

    What is your probability that Bayesian reasoning is sound? (5 percent)

    I’m totally not on board with assigning probabilities to possible future outcomes when the situation is unique and you can’t rely on data in any meaningful way.
    Of course the “rationalist” argument is that it is the best one can do for predicting the future, but the point is that you sometimes just can’t make any meaningful predictions, and I am pretty confident that no matter how many intelligent-sounding arguments you can bring up, they will all lack a solid foundation in the end.
    If you can’t predict the future, you won’t get better at it by making up probabilities.
    Of course it is nice for thought experiments to come up with some numbers and see what one gets, but I doubt you should base discussions on this.

    Definitely AI risk research should be much more prominent, even if you don’t believe it is interesting, it seems possible to me that we might eventually get some interesting math out of it. (I doubt that anyone working in the field at the moment has the inspiration/genius to do that, but the questions they ask are refreshingly different from the more established subfields of math)

  61. Deiseach says:

    Obviously this is wrong, but it’s harder to explain how.

    No, because the way the question is phrased is wrong. You are not assessing the probability of Hillary Clinton as part of a population of 115 million eligible candidates for the Presidency, you are assessing Hillary’s chances out of the pool of nominees, which is a much smaller number: at the moment, for the Democrats, it’s looking like herself and Bernie (unless some really obscure candidate pops up) and for the Republicans, it’s Trump (God save the mark!) with Bush, Rubio, and I forget the others.

    So it’s not a target of “1 out of 115 million”, it’s a target of “1 out of 5” (or “1 out of 8”, or however many candidates go forward).

    It gets pared down even further when the party nominations have been decided: if Hillary beats Bernie for the party nomination, then she’s going up against the Republican candidate. That gives her a “1 in 2 chance”. Unless every single Democrat voter would rather die than vote for her, or the Republican candidate is the Archangel Gabriel come down to Earth, those are good odds.

    It’s disingenuous to use “Hillary as 1 in 115 million is a bad target, therefore your assessment of AI risk is wibbly” because that is not the odds, and you know it, and we know it.

    • Scott Alexander says:

      First, I was using this as a counterexample to my point, not as proof of my point.

      Second, OBVIOUSLY “Hillary has 1/150 mil chance” is wrong. I’m not claiming it’s right, I’m trying to get you thinking about why it’s wrong. It’s wrong because we have to apply our Inside View knowledge instead of just taking the most Outside View probability that we have. At the very least, we need to apply Inside View to figure out what reference class to use the Outside View on. But I just said that Outside View is where we have good models and need not feel too bad about being overconfident, and Inside View is where things are dangerous. If Outside View requires Inside View to use correctly, Outside View is (itself) slightly dangerous. “Hillary has 1/150 mil chance” is an example of pure Outside View not working.

      • Deiseach says:

        Scott, that is still stacking the deck. You did not phrase the question “What is the chance of any individual American becoming President of the United States” and then refine it to “Now, what if the individual American we are talking about is Hillary Clinton? Then what is the chance?”

        We don’t need to think about “Hillary Clinton has better odds than 1 in 115 million” because we already have that Inside View knowledge of the very selection process for voting for an American president.

        And that is the kind of Inside Knowledge you are appealing to when you ask us to trust the experts on evaluating AI risk and that is the very Inside Knowledge we are proposing nobody has, because NOBODY HAS YET CREATED ANYTHING THEY CAN IDENTIFY AS SIMPLE ANIMAL LEVEL AI. Much less “Oh yeah, now we’re ready to rock human-level and then – shazam! super-human level!”

        If Outside View requires Inside View to use correctly, Outside View is (itself) slightly dangerous

        Yes. Your experts are trying to use what Inside View they have (“Well we’ve got expert systems and we’re calling these other systems here AI”) to the Outside View of “Suppose god-level AI starts running the world?” They’re making huge assumptions about how fast they’ll get from rat-level to human-level to super level to god-emperor/fairy godmother level.

        The main problem I have with MIRI and its ilk going “We need to investigate this now” is that we don’t have enough useful knowledge to even know what the hell we’re talking about: what is intelligence? how will we identify it in a machine? is there any point in creating early 21st century solutions for mid 22nd century problems, or will we be doing the equivalent of working out a solution for cleaning the increased amount of horse dung off the streets of 20th century London because we’re estimating a huge rate of increase in traffic levels based on the horse-drawn traffic we know?

        MIRI may be a blind alley; they get their funding and publicity, get taken on as the Official U.S. Standard for AI Safety, work out lovely mathematical models of how to programme a fairy godmother when silicon is involved, and it all gets blown to hell because when AI comes, it’s through a development analogous to the horse and cart being replaced by the internal combustion engine.

        Maybe you can’t get rat-level intelligence without using rat neurons, and instead of pure silicon we’re using a cyborg substrate to run the programming on, except it’s less ‘programming’ and more ‘combination of baked-in by evolution instinct for the organic modules and training’. Will MIRI’s mathematical models still hold good then? Who the fuck knows?

        I’m not saying “don’t investigate it”. I’m not saying “it will never be a problem”. I am saying making this the sexy new Doomsday Prophecy of Immediate Pretty Soon Civilisational Demise is pie in the sky.

        • Deiseach says:

          Because I’m sounding like a harridan here, I would like to say to Scott and everyone else that I’m very grateful for the opportunity to engage in table-pounding argument here as it is helping me a great deal with the depression that is rather bad this weekend.

          So please try not to take anything I say too personally, as I do not really mean to call anybody names 🙂

  62. unmode says:

    You’re not meant to link directly to arxiv pdfs. The correct link for the ibank paper is http://arxiv.org/abs/1103.5672

    • The occasional failure of financial models to conform to normal distributions has been known for decades, but in a 0% interest rate world there is no consistent way of exploiting it for financial gain.

  63. eh says:

    I think there’s a tendency to pattern-match “human-level AI” to “‘I, Robot’ coming true”. People see that the already-fringe transhumanists and singularitarians are concerned about an issue, and proceed to write that issue off as wish fulfillment based on too much sci-fi.

    It doesn’t help that artificial general intelligence is so vague. Someone from the 18th century might well have considered a machine doing calculus to be evidence of human-level artificial intelligence. Someone from 1950 might have considered Watson to be evidence of human-level artificial intelligence. Someone from 2015 might consider a hypothetical natural language McDonalds checkout to be evidence of human-level artificial intelligence. It’s possible that people believe on some subconscious level that we can keep moving the goalposts indefinitely.

    Thirdly, it could be due to a belief in human exceptionalism. It seems reasonable to guess that believing in uniquely human qualities such as “having a soul” or “having humanity” correlates with a belief that non-human intelligence is impossible, possibly through the Chinese room argument.

    I’m sure there are many more explanations. The point is, overconfidence may not be the main motivation behind such extreme estimates.

  64. vV_Vv says:

    Discussion Questions:

    1. What is your probability that there is a god? ~0%
    2. What is your probability that psychic powers exist? ~0%
    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050? 50%
    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? 0.1%
    5. What is your probability that humans land on Mars by 2050? 20%
    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? 25%

    • Scott Alexander says:

      0.1% for pandemic? Really? First of all, in the last ten centuries, I think two of them have involved pandemics that killed at least 10% (Black Plague + European colonization of America). Spanish Flu came very close to being a third. So something that’s happened 2/10 times, you think has a 1/1000 chance of happening again? Even though probably a bunch of governments and terrorists are working on secret biological weapons?
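
      To put a rough number on that intuition, one crude way is Laplace’s rule of succession applied to the “2 of the last 10 centuries” count (a sketch only; whether those two events count as pandemics is exactly what gets disputed below):

      ```python
      # Laplace's rule of succession on "2 of the last 10 centuries"
      # (a rough sketch; the event counts themselves are the disputed part).
      events, centuries = 2, 10
      per_century_estimate = (events + 1) / (centuries + 2)
      print(per_century_estimate)   # 0.25 per century, nowhere near the 0.1% being replied to
      ```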

      • vV_Vv says:

        First of all, in the last ten centuries, I think two of them have involved pandemics that killed at least 10% (Black Plague + European colonization of America).

        They were not pandemics and they did not kill all these people in a period of five years.

        The Spanish Flu killed between 3% and 5% of the world population in a period of about 3 years, according to Wikipedia. I suppose that killing a larger fraction of the population becomes exponentially harder.

        Even though probably a bunch of governments and terrorists are working on secret biological weapons?

        I assumed that we were talking about natural pandemics. But anyway, I suppose that most governments and terrorists don’t want to kill > 1 billion people, more or less randomly selected. Catastrophic accidents and madman actions are possible, but I don’t consider them very likely. I revise my estimate to 0.5% when including bioweapon attacks or accidents.

        EDIT:

        And anyway, why are you cherry-picking the last 10 centuries? Recorded history in the most populated places of the world goes back 20-25 centuries, or more.

        • Will S. says:

          The plague killed off about 1/6 of the world’s population (75 million out of 450 million), using conservative estimates for world population and mortality. And it killed off half of the population in Europe over the course of seven years.

          Of course, we have antibiotics and hospitals now, etc., but it’s not prima facie impossible. Still, I would agree with the probability being less than 1%.

    • Michael Keenan says:

      I rate the pandemic chance higher for the reasons Scott gave, and also because if a superintelligent AI needs to kill all humans, a pandemic might be the most efficient way to do it (especially if a nanobot plague counts as a pandemic).

  65. Emp says:

    Answers to your discussion questions:

    1) Can’t answer unless you define ‘God’.
    2) I don’t know. This is an imprecise question. What is your probability that I’m wearing a blue shirt as I type this? This is an either-yes-or-no situation, not a question of probability.
    3) Somewhere between 50 and 100%. If forced on a number 92%.
    4) 32%
    5) 45%
    6) 0%. This is literally, physically impossible. The probability of this is the same as my computer spontaneously becoming a fire-breathing dragon.

    • Jordan D. says:

      As far as I’m aware, the chance of your computer becoming a fire-breathing dragon is, in fact, non-zero. It’s even less likely than the Trump-Bernie Double-Lottery Terrorist Meteor Tornado, and indeed a lot less likely, but not entirely precluded by physical law.

      Unless I’m mis-reading something, am not aware of some salient concept or am using a different definition of ‘superintelligence’, it’s not immediately obvious to me why ‘super-intelligence isn’t the kind of thing which can be technologically developed in this way’. Could you elaborate, please?

    • Scott Alexander says:

      My probability that you’re wearing a blue shirt is about 15%. I think about 15% of the shirts I see are blue. If I said “less than one in a million”, I’d obviously be insane. If I said “99.999% certain”, I’d also obviously be insane. I have no idea how you think that ‘probability of you wearing a blue shirt’ is an unanswerable question.

      • James says:

        Doubly weird given that they were willing to assign probabilities to all the other scenarios.

      • Emp says:

        My point is it isn’t probabilistic in nature. Or to the extent that it is, this is because you have literally none of the information required to make even an educated guess. I might be a woman who never wears shirts, for instance.

        None of the information about whether psychic powers exist could ever be available without the probability necessarily being 100%, so I don’t see how you could ever make any reasonable guess about the probability of this being the case. What kind of evidence would make it likely that there are psychic powers without actually making that possibility 100%? I would really like to hear an answer to this one.

        From your perspective it is, because you lack the information, but speculating about a probability is beside the point, because the real answer is either 100% or 0%.

        I understand that you can use statistics based on really tenuous assumptions, but is there really any way in which you can say 99% is unreasonable? If I’m like Jobs, there may be a 0 or 100% chance of my wearing a blue shirt depending on my preferences. There’s literally no way to conclude anything about this except that you have no idea at all.

        This is also my reply to James below, who wonders why I’m willing to assign probabilities to future scenarios. There’s a reason “I don’t know” and “This doesn’t make sense” are my answers to questions that invite answers that can never be more than purely arbitrary guesses.

        Jordan D.: I’m not familiar with the metaphysics of this, so why is there even a remote probability of my computer becoming a fire-breathing dragon? The definition of a dragon leads me to understand that it is literally impossible, especially from the components of my computer.

        Anyway, ‘super-intelligence’ defined here as being “better than the best human at every cognitive task” is not the kind of thing that’s possible, because:

        1) Programming something like that would require heuristics of a kind that human beings are utterly incapable of producing on a gigantic scale. AIs function by applying brute-force calculations allied to heuristics. Real-life strategic decision-making has decision trees that are too complex and deep by orders of magnitude, and without extremely good heuristics (which are easy to come by in simple games like Chess or Go) one cannot evaluate real-life strategy.

        The entire idea of a super-intelligence is extended cognitive capacity: concentrating on multiple things and calculating super-fast (which is what an AI is good at) plus evaluating situations correctly (which humans are immeasurably better at). I don’t think it’s conceptually possible for a single specific set of heuristics to be better at every single cognitive task than the brain of any given human being.

        I’m not even going into the gargantuan practical reasons why this is never going to happen; I’m trying to keep this conceptual so we avoid arguments about 1 in 100,000 million, or stuff like “people said X was impossible and they were wrong too”, etc.

        In general my observation here is that the problem with almost all the numbers and reasoning here is that they are extremely complex pieces of reasoning being based on deeply flawed assumptions (much like the failed financial models by Nobel Prize winners).

        • Eiko says:

          Anyway, ‘super-intelligence’ defined here as being “better than the best human at every cognitive task” is not the kind of thing that’s possible

          …What if I just made a neural net with twice as many neurons as the human brain has, and trained it on the same set of inputs we train human brains on? What if I used 10x as many? 100x? We don’t have the hardware to do this now, but we will, one day.

          • Chalid says:

            Yes. I don’t see why so many people view this as impossible or even unlikely. It seems inevitable to me.

          • John Schilling says:

            A computer with 10x as many silicon neurons as the human brain has organic ones would need about six orders of magnitude more processors than the largest supercomputers. Those computers are already many times the size of my house and cost billions of dollars, and the manifestation of Moore’s law where individual processors get exponentially smaller and cheaper is already breaking down. It doesn’t have even three orders of magnitude to go. And that doesn’t account for the fact that our best architectures for connecting silicon processors are incredibly clumsy compared to evolved organic brains.

            We can simulate many neurons and their organic interconnects on a single serial processor, but that also is clumsy and inefficient.

            Duplication of human neural architecture in some future generation of modern supercomputers is very nearly the opposite of inevitable. If AGI is created, it will be because we have figured out how to brute-force intelligence out of a much more constrained architecture, or because we have gone and developed a completely different type of computer. The former may be technologically impractical. The latter may be economically impractical in that the first half-dozen or so generations of the new type of computer will be increasingly expensive playthings that can’t compete with the highly-optimized descendants of today’s architectures. Path dependence matters.

            I think it likely but not certain that we will eventually develop an AGI. It will almost certainly take longer than the enthusiasts expect. It may require e.g. an apocalypse that destroys the existing computer industry and forces our descendants to rebuild from scratch and affords them a chance to choose a new path.

        • HeelBearCub says:

          “Anyway, ‘super-intelligence’ defined here as being “better than the best human at every cognitive task” is not the kind of thing that’s possible”

          This seems wrong in a fundamental way. Assuming natural determinism, the human brain isn’t unknowable. It seems to me that solving the hardware problem such that the hardware is better than ours should certainly be possible. Humans are limited by the fact we cannot be created as fully mature adults and that puts limits on our brains that I don’t have any reason to believe are fixed.

          Given better hardware, evolving “new brains” should then be capable of producing ones more capable than our own, even if we could not have programmed them on our own.

          I’m not saying any of that is easy or fast or even the most likely way “true” AI will be created, just that I don’t find the argument that it’s fundamentally impossible to be persuasive.

          • Deiseach says:

            Given better hardware, evolving “new brains” should then be capable of producing ones more capable than our own, even if we could not have programmed them on our own.

            Except that, the more we know, the more we discover that there aren’t simple answers.

            “Once we sequence the genome, we’ll be able to cure cancer!”

            “Okay, we’ve sequenced the genome, and now we’ve found out it’s more complicated than that. We need to do more work, and it’s an ongoing project”.

            Look at the posts Scott has on here discussing intelligence and heritability and environment and nature vs. nurture and how there isn’t one simple “gene for being smart”. In order to identify a machine as truly intelligent, rather than being very well-programmed, we’ll need a working definition of intelligence and I don’t think we’ve even agreed on that.

            What would a machine intelligence look like? Able to do trillions of calculations per second? Able to store and call on vast databases of information much larger than any single human, or even humanity in total, could manage? Able to access and retrieve huge amounts of data, in what to our limited senses amounts to an instant and without error, and to solve equations and problems put to it?

            And is that what intelligence is?

          • HeelBearCub says:

            @Deiseach:
            I’m not attempting to point at the quickest way to an AI that is better than our intelligence, nor even one that looks like it would be economically feasible.

            Rather, I am trying to point out a flaw in the statement that it is clearly impossible. I think it is very reasonable to assume that the amount of raw power available to our brains (in terms of the number of neurons, how fast the neurons signal, how fast information can be recalled, etc.) can be improved on. Even if you don’t put a high probability on it, saying it is impossible, full stop, requires some justification which is not in evidence (as far as I can see).

            Given the possibility of a more capable platform, the possibility of creating it should exist. Again, possible in theory does not mean it can be created by us, but the probability that it can be created by us shouldn’t be zero.

            When I talk about “evolving” the “software” on this “hardware”, I simply mean that you choose a metric (say an IQ test), start with some working software, randomly mutate the software, and test the results vs. your metric. Those that do best are allowed to go on to the next round of mutation and breeding. This type of software design has been used to successfully program a variety of interesting things.

            Put all that together and you can design something that is better at IQ tests (or whatever metric you choose) without having to “know” how to do it yourself.
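
            For concreteness, here is a toy sketch of that mutate-and-select loop (the fitness function below is a made-up stand-in, not an IQ test, and nothing about it is specific to intelligence):

            ```python
            # Toy evolve-against-a-metric loop (sketch; "score" is a made-up stand-in metric).
            import random

            def score(genome):
                # Hypothetical fitness metric: how close the genome's sum is to an arbitrary target.
                return -abs(sum(genome) - 42)

            def evolve(pop_size=50, genome_len=10, generations=200):
                population = [[random.uniform(-10, 10) for _ in range(genome_len)]
                              for _ in range(pop_size)]
                for _ in range(generations):
                    population.sort(key=score, reverse=True)      # rank by the chosen metric
                    survivors = population[:pop_size // 2]        # keep the best half
                    children = [[g + random.gauss(0, 0.5) for g in random.choice(survivors)]
                                for _ in range(pop_size - len(survivors))]
                    population = survivors + children             # refill with mutated copies
                return max(population, key=score)

            best = evolve()
            print(score(best))   # near 0: the population has been "bred" toward the metric
            ```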

            Again, I’m not saying this will happen. Merely that flat-out saying it is impossible strikes me as the wrong kind of statement to make.

          • Deiseach says:

            But HeelBearCub, the argument about the urgency of AI risk is that it’s going to happen soon, and once we’ve cracked human-level AI then super-human and beyond level AI is going to be really, really fast on its heels.

            So we need to concentrate a lot of money and effort on that, or else bad things will happen.

            There’s a shed full of assumptions there, none of which is buttressed by anything more convincing than “And then magic happens”.

            Human-level AI within 50-100 years. Then, after that, very very fast, super-human and then god-level AI; moreover, that god-level AI will be able to work its will in the world at large on a scale large enough to threaten civilisation, and the AI will have its own goals, or, if not quite that, it will be very likely to act against humanity’s best interests because of literal-mindedness or prioritising other goals over ours.

            I think human-level AI will take a lot longer than 50 years, not necessarily because humans are magic, but because we’ll lack a way of identifying intelligence in a machine from very good programming which can mimic human behaviours.

            From that, creating or evolving a super-intelligence? Maybe if we let them ‘grow’ as you suggest, but then any safeguards MIRI or anyone else comes up with may be obsolete; we may have the choice between ‘safe’ AI that is a limited machine intelligence, or ‘true’ intelligence that can’t be neatly programmed to behave.

            We don’t know, is what I’m saying, which is why the SOMEBODY MUST DO SOMETHING NOW OR ELSE! PR spin sounds excessive. It’s not that this is something not worth considering, it’s that there are a hell of a lot of gaps there that are being filled in with the equivalent of “And then magic”, and we don’t know enough about what we’re talking about to have a meaningful assessment of the avenues likely to lead to AI. Including the blithe assumption that AI will be so splendiferous we will immediately abdicate all responsibility to it so it is in a position to run our energy networks, our economies, our food supplies, our elections, etc. etc. etc. (you know, all the things it needs to control in order to be a credible “humans will become extinct” threat).

          • HeelBearCub says:

            @Deiseach:
            Hey, I’ve been arguing that AI x-risk proponents are vastly overselling their case.

            But that still doesn’t justify saying things like “AI smarter than humans is impossible”.

            “AI smarter than humans” is a much lower bar to clear than “AI has godlike abilities granted by its intelligence plus network access”, which is in turn a lower bar to clear than “AI can take over and consume all networked resources, becoming god-like a few milliseconds after it becomes self-aware”.

            All I’m saying is that a bad argument is still a bad argument, no matter whose “tribe” the arguer is in.

        • 27chaos says:

          You don’t really understand probability. It’s a subjective concept, else the notion is meaningless. It’s not possible to discuss the “true probability” you’re wearing a blue shirt, that’s being silly. Just think of probability as what you would or would not be willing to bet on. The only exception in which it is reasonable to talk about objective probabilities is in cases where you are discussing quantum physics.

          • Professor Frink says:

            Or frequentist probabilities? There IS a whole objective school of probability after all.

          • FullMeta_Rationalist says:

            Yeah, that would make sense. Emp must think that we (as internet strangers) haven’t observed enough of his personal fashion habits to approximate the objective-but-hidden frequency of his “blue shirt days”.

            This can only mean one thing: despite the regulars’ protests, we need to mention “Bayes” more.

          • Emp says:

            I do understand that.

            The point you’re missing is that there’s no value in discussing probability unless you have some baseline information that gives some grounding to your prediction.

            What is the probability Invoker is a better mid hero than Sniper in Dota2?

            One could discuss this, but certainly not if one had no idea what Dota2 is to begin with.

            Asking for the probability of questions like “Does God exist?” or “Do psychic powers exist?” is exactly like asking 19th-century Englishmen which LoL champions are better at jungling.

            What you would be willing to bet on is actually a really good test. Would you really be willing to bet on “Does God exist?” or “Do psychic powers exist?” if you really had to pay out if you were wrong? How much would you bet?

          • Emp says:

            Another way to explain what I mean is this.

            Consider some answers to “Does God exist?”

            Let’s say one person says: 99.84%, another says 32% and a third says 0.004%.

            Can you describe to me procedurally a sensible way of finding out which probability estimation is most reasonable or accurate?

            I can already tell you no. Your probability predictions, just like those about the color of my shirt, vary with your knowledge. I can easily give you a 100% accurate answer about my shirt color. You can give me various speculations based on average shirt colors and even try adjusting a little here and there for various traits you might think me likely to have.

            Even this kind of analysis on your part requires some assumptions about what is relevant, and what my habits are likely to be, many of which may be completely false. When it comes to psychic powers and God, we don’t even have access to that kind of speculative information.

            How do you do Bayesian probability analysis when you have no idea and no consensus on what is relevant data?

          • Jiro says:

            What you would be willing to bet on is actually a really good test. Would you really be willing to bet on “Does God exist?” or “Do psychic powers exist?” if you really had to pay out if you were wrong? How much would you bet?

            It’s a lousy test, because it ignores risk aversion.

          • ” It’s a subjective concept, else the notion is meaningless. ”

            That’s not a fact.

            “Can you describe to me procedurally a sensible way of finding out which probability estimation [of Gods existence] is most reasonable or accurate?”

            Having N examples of probabilities which can’t be estimated objectively doesn’t prove that all probability is subjective.

          • Roxolan says:

            > What is the probability Invoker is a better mid hero than Sniper in Dota2?

            50%*. One of them is better, I don’t know anything about DotA, so it could go either way.

            (If I wanted to improve that answer, I’d first consider the probability that they’re *exactly* as good – which, for something as complex as a modern computer game, is infinitesimal (but not zero). Then I’d wonder if you’re more likely to make your example questions true or false. From examining my own thought process when coming up with examples on the fly, I’d guess true. There are no real-life problems about which we’re in *complete* ignorance.)

    • Luke Somers says:

      Do you think intelligence is a non-physically-mediated process? That’s the only way I’ve found to understand your answer to 6.

    • Saal says:

      Objection to 1) is in some sense potentially valid.

      Objection to 2) is invalid. Given that you are in fact wearing a single-color shirt, there is a space of possible shirt colors visible to the human eye. The probability of you wearing a blue shirt is proportional to the subspace of the space of possible shirt colors that is generally called some variant of “blue”. If you tell me that you prefer cool colors to warm, I can adjust that probability accordingly.
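
      As a back-of-the-envelope illustration (the hue range counted as “blue” and the uniform-over-hues assumption are both made up here):

      ```python
      # Rough fraction of the hue wheel most people would call "blue"
      # (sketch; both the range and the uniformity assumption are invented).
      blue_hue_range = (180, 260)   # degrees on the HSV hue wheel
      p_blue = (blue_hue_range[1] - blue_hue_range[0]) / 360
      print(p_blue)                 # ~0.22, the same ballpark as Scott's 15% guess
      ```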

      Objection 6) is not even wrong.

    • Kyrus says:

      Every question that involves uncertainty is a question of probability. Probability deals with the uncertainty of our own mind, not some “actual true probability” of some process. I believe Scott did a blog entry on that as well.

      For example, if you throw a coin and apply your logic, it comes up either heads or tails with 100% probability. If we had the initial conditions precisely, we could calculate it beforehand, or at least make a >50% guess every time. For you it is still 50/50, though, because you lack that knowledge.

  66. Whatever Happened to Anonymous says:

    >This is especially difficult because claims that a certain form of technological progress will not occur have a very poor track record of success, even when uttered by the most knowledgeable domain experts.

    As always, I’m just going to bring up the possibility that we only remember the ones that were wrong, because they make for better stories.

    • Emp says:

      Super-intelligence isn’t the kind of thing which can be technologically developed in this way, which is why the 100% confidence.

      If you asked me about invisibility, faster than light travel, immortality or even X-Men like powers my probabilities wouldn’t drop below 1%, but this literally cannot happen.

    • Jaskologist says:

      Futurologists have a poor track record in general, and that includes predicting that things are possible/soon to come. As the Emissary asks, “Where are the flying cars? I was promised flying cars!”

      That’s why we can have such fun looking at what people 50 years ago predicted we would have today.

      • goocy says:

        When I read all the hype around Mars, I like to bring up the fact that in 1968, both Pan Am’s executives and customers were so confident about the upcoming trip to the Moon that 90,000 people signed up for the waiting list.

        • Nornagest says:

          How many people signed up for that Mars colony project that was in the news a few months back?

        • Deiseach says:

          Oh, that’s a brilliant example! Not alone are they not running scheduled tours to the Moon, but Pan Am no longer exist themselves.

          So when we’re forecasting “Within the next 50-75 years somebody will invent AI”, and getting from that “We had better donate to MIRI right now to forestall the threat of UAI”, are we putting all our eggs into the basket of Pan Am being the carrier to fly us to the Moon?

  67. Emp says:

    I often feel that people who understand statistics tend to be terrible at figuring out what kinds of questions they apply to.

    The fact that most people are overconfident speaks to their capacity for prediction.

    It doesn’t mean that any of the predictions made by an intelligent person to a 1 in a million interval will be wrong.

    If you don’t believe me, go ahead and give me 1/1,000,000 odds and let me choose my predictions. You’d be broke within 5 days if you took me up on my offer.

    It’s really not that difficult to not be over-confident about things. I agree entirely that most people are awful at differentiating between 1 in 1000 and 1 in 1000000000, but some people are conversely good at identifying these mispricings.

  68. Unknowns says:

    Scott, you are definitely right about AI and overconfidence here.

    But if you agree with those arguments against Christianity and against hell (i.e. giving very high probabilities that Christianity is false and that hell does not exist), then you are committing the same error, i.e. insane overconfidence.

    Basically the problem with Eliezer’s idea about privileging the hypothesis is that those hypotheses are already privileged, by the very fact that so many people already believe them. You may be convinced that none of those people have good reasons for that belief, but are you convinced of that to such a degree that you can make a million similar statements (about huge numbers of people being wrong) and be wrong once on average?

    • Scott Alexander says:

      You’ll notice at the bottom I give religion about a 5% probability of being true.

      • Randy M says:

        I would have noticed, except for the hassle of rot13. Can’t you use invisible-until-selected text or something?

      • ekr says:

        That percentage is definitely way too high. Being a regular reader of yours, and of LW (and also an avid physicist), I’d assign a 60-70% probability that your actual P-value is lower by 1-2 orders of magnitude, and that the only reason you increased it to 5% is that you expect a certain percentage of your readers to be more religious/agnostic and thus assign a friendlier (to them) percentage.

        Otherwise my probabilities are fairly similar to yours, except the one about global warming. 1C increase compared to what baseline? The 2015 average? The 20th-century average?

        • endoself says:

          A p-value is not the same thing as a probability. I would go so far as to say that if you don’t know what a p-value is, you are probably better off ignoring all statements you read that are justified using p-values.

          • 27chaos says:

            I’m pretty sure that the physicist knows what p-value means. It’s a specific type of probability, and using it as a synonym is not exactly correct but it’s reasonable when speaking informally. Let’s not turn these comments into a place where LessWrongish pedantry reigns.

        • Unknowns says:

          Actually, I think that was Scott’s honest opinion, and I did see it. I just thought it wasn’t consistent with the way he used the privileging the hypothesis idea. But it seems I didn’t understand that part of the article correctly and he was only citing that argument without accepting it to the degree that Eliezer thinks you should.

          The reason I think that is Scott’s actual probability is that e.g. he spends time investigating Catholicism, as far as I can tell, not just to be nice to his friends, but because he thinks it might be true. In his personal estimates, he also put down a five percent chance that he would convert to religion within a certain period of time — it’s reasonable that you would think your probability of conversion would be at least close to your probability that it would be true.

  69. Jaskologist says:

    Noah Smith recently asked why it was useful to study history. I think at least one reason is to medicate your own overconfidence.

    I tend to speak up only in order to disagree, so let me just agree and amplify this x1000.

    (That cancels out my next 1000 disagreements, right?)

  70. Jaskologist says:

    Question for the mathematicians:

    Are there any math proofs that were generally accepted and later found to be wrong? (I’m interested in proofs, not beliefs like Euclid’s 5th postulate, or that all numbers are rational.)

  71. Muga Sofer says:

    >I’m talking aboutI’m talking about Neville Chamberlain predicting “peace in our time”

    Typo – “I’m talking about” is there twice.

    >There’s a tougher case. Suppose the Christian says “Okay, I’m not sure about Jesus. But either there is a Hell, or there isn’t. Fifty fifty. Right?”

    >I think the argument against this is that there are way more ways for there not to be Hell than there are for there to be Hell.

    Ignoring the nonsense about “50-50” … there ARE a lot of ways for there to be a Hell.

    Maybe quantum immortality is true. Maybe Boltzmann brains are real. Maybe you’re in someone’s ancestor simulation right now, or they’ll figure out a way to reconstruct your brain from whatever, or time travel will be invented; and future humanity is gravely displeased.

    Or maybe all this – gestures out window – is a simulation by some alien, and they’re going to take you apart after you die in service to some inscrutable goal.

    Now, the prior probability that Hell is real and you can go there for disagreeing with the guy who just told you that is, of course, much lower – that’s a Pascal’s Mugging, and I guess if I had to justify my intuitions surrounding “that’s a Pascal’s Mugging” I’d point to all the millions of people making that claim, most of whom disagree with each other.

    But – and I realize I speak as a Christian, here, and you are not over-fond of Christians … I don’t think you’ll go to Hell for disagreeing with me. But I do think there’s a decent chance there’s a Hell out there.

    • Jaskologist says:

      The way C.S. Lewis envisioned it, all you really need for there to be a Hell is for there to be an afterlife. Then, just extrapolate out current trends over infinite time.

  72. Mary says:

    ” because of all Pythagoras’ beliefs – reincarnation, eating beans being super-evil, ability to magically inscribe things on the moon – most have since been disproven.”

    When did this happen?

    • Jaskologist says:

      Heh. My thoughts as well. I don’t believe in any of them, but it has not been proven so.

    • Deiseach says:

      Pythagoras may have had a point about beans! I was looking up Wikipedia to see what, exactly, Ancient Greece would have considered “beans”. Apparently the term “bean” used to mean strictly broad beans (fava beans).

      Cue Wikipedia:

      Favism is quite common in Greece because of malaria endemicity in previous centuries, and people afflicted by it do not eat broad beans.

      What’s the connection between malaria and favism?

      Favism is an enzyme deficiency syndrome. The sufferer undergoes acute anemia. Interestingly, this medical condition provides immunity against malaria. From ancient times, people especially those living in the Mediterranean region, were aware of favism and its association with fava beans. It was significantly noted that whenever the fava plants blossomed in spring, many young people reported fatigue and lethargy. The condition has been referenced in various historical documents. In one historical text, the great mathematician Pythagoras advised his disciples to abstain from eating broad or fava beans. During the late 19th and early 20th centuries, favism was termed as “Baghdad Fever”. Favism is also known as Glucose-6-Phosphate Dehydrogenase Deficiency. Sufferers lack the enzyme Glucose-6-Phosphate Dehydrongenase or G6PD. Surprisingly, most carriers are females, whereas, men are more likely to suffer from this deficiency. This condition is more prevalent in people living in North Africa, the Mediterranean region, the Middle East and South Asian regions. It is estimated that 400 million people worldwide are affected by favism.

      Also, this fascinating study:

      Favism was very frequent among the populations of the Mediterranean area, particularly in Sardinia, because of the high prevalence of G6PD deficiency and the large consumption of fresh fava beans.

      What is favism?

      Favism: A condition characterized by hemolytic anemia (breakup of red blood cells) after eating fava beans (Vicia fava) or being exposed to the pollen of the fava plant. This dangerous reaction occurs exclusively in people with a deficiency of the enzyme glucose-6-phosphate dehydrogenase (G6PD), an X-linked genetic trait.

      It’s not just eating the beans, it’s even inhaling the pollen.

      So if I were Pythagoras, and I saw a case where a pregnant woman ate a meal of fava beans, and five days later her newborn child was born suffering from jaundice and died, or someone keeled over dead from anaemia after merely having pollen from a bean field blown upon them, or healthy young people became lethargic and drained of energy when the bean flowers blossomed, I’d probably think beans were some super-demon murder-plant, too.

      Favism was only identified in the 1950s so all the 19th century scholars laughing at Pythagorean superstition may owe the man an apology 🙂

  73. NZ says:

    Jonah Lehrer’s book “How We Choose” has a bit about a guy who had part of his frontal lobe removed and then was unable to make choices because he had no emotional input into those choices. He would oscillate for hours on where to go to lunch, where to park his car, etc. because he was rationally weighing every last detail of every choice instead of saying “I think I’m in the mood for sushi; I might be wrong, but oh well.” Eventually it cost him his job, his marriage, and just about everything else.

    The moral of the story is that a bit of pigheaded emotional irrationality in choices was evolutionarily advantageous enough to be hard-coded into us. Sure, when pondering abstract questions–especially ones with very real and important consequences like AI risk–it’s advantageous to be very rational. But we’re not primarily wired for it, because those kinds of questions weren’t as common throughout our evolutionary history relative to more immediate kinds of choices like whether to throw rocks at a cheetah to scavenge its kill.

    I believe that’s the reason most people blithely accept new technology, enthusiastically even, without thoughtfully questioning whether it might have significant negative unintended consequences in the long run. The Amish are one notable exception when it comes to technology generally, and AI developers are a notable exception when it comes to one technology in particular.

    The Amish have been able to become such an exception, I believe, because they have formalized their process of vetting technology–and key to that is also formally stating what your values are that you wish to preserve.

    So maybe people who are overconfident about lack of AI risk are also experiencing a value-vacuum. If they first wrote down a list of, say, 5 values they thought were important and good enough to preserve indefinitely, and THEN thought about AI risk, it might prime them to be less confident about their claim that AI risk is so low.

    • Luke Somers says:

      Inability to perform a value-of-information calculation doesn’t seem rational to me.

      • NZ says:

        OK, but where do you get your values from? At some point, emotion enters into it.

        • Luke Somers says:

          I don’t understand. We come loaded with some set of terminal values. Analysis Paralysis is a different problem.

          Which did this guy have? Inability to notice that he was over-analyzing, or genuine lack of caring between any outcome?

          I mean, I totally agree with the overall point, just the first paragraph doesn’t really seem to me to be an example of what you’re talking about.

          • NZ says:

            The guy had an inability to allow his emotions to influence the outcome. At least that’s my take based on the description Lehrer gives of that guy’s case in the book. It could be that Lehrer summarized it wrong, but I don’t know.

            In a way that’s illustrative too: should I doubt whether Lehrer summarized the story accurately and drew the correct message from it? Lehrer has a degree in neuroscience and worked in a Nobel-winning scientist’s lab, but after that he basically just became a writer. Maybe he didn’t understand the case of this guy properly, or he did but gave a description that tells a different story because he thought that would sell more copies of his book. What is the probability this is what happened?

            Then again, my whole knowledge of Lehrer’s background comes from his blurb on the back inside flap of the book jacket and from his Wikipedia page–which includes an extensive portion about Lehrer’s plagiarism controversy. How certain am I that these sources are accurate?

            And so on and so on, until I can’t really say anything with confidence. To function, eventually I have to say “Nah, I feel like this is true, so I’m gonna go ahead and trust it to be true until overwhelming evidence shows up to the contrary.”

    • NZ says:

      [EDIT] Oops, the book’s called “How We Decide” not “How We Choose”.

  74. Paul Torek says:

    I found this very convincing, until I remembered that you’re bad at math.

    😉

    (Have we given Scott enough razzing over his “bad at math” claim yet?)

  75. ButYouDisagree says:

    Do members of the rationalist community also take overconfidence seriously when it comes to normative deliberations? See e.g. Moller, Abortion and Moral Risk

    • Scott Alexander says:

      Ah frick, someone else has thought of this argument.

      I’m not sure how to deal with moral uncertainty. Part of me wants to say “if I’m right about all of the facts, what is left for me to be wrong about?”

      I also note that abortion has some pretty strong benefits, such that even if we had to go ten percent of the way to being pro-life, the benefits might still outweigh the costs.

      I know Will MacAskill is really interested in moral uncertainty, and I’ve been intending to see if he has any position on this argument.

      • goocy says:

        So, why shouldn’t we go all the way to pragmatism? Let people fill out a form for requesting abortion, and grant the form in 90% of cases.

        (People can’t agree on the 90% number? Let them all write up their number and take the median)

      • ButYouDisagree says:

        “If I’m right about all of the [non-evaluative] facts, what is left for me to be wrong about?” The evaluative facts, of course! Whether killing is wrong whenever it robs its victim of a future worth living. How to weigh this potential wrongness against the right/desire to control one’s body.

        And I’m skeptical that you (or almost anyone) have a good grasp on the non-evaluative facts here. What is the sign and size of the externality of additional people being born?

        While I agree with you that pro-choice arguments are likely correct, if you assign a 90% chance, that strikes me as hugely overconfident. (How sure are you that Marquis’ account of the wrongness of killing in terms of deprivation fails?)
        And while you’re right that there are large benefits to abortion rights, the downside of abortion, conditioned on pro-life moral arguments succeeding, is enormous.

        I don’t mean to pick on abortion, it’s just everyone’s favorite issue in applied ethics. And since everyone has a strong opinion, and we know that intelligent people disagree and there’s no expert consensus, it seems like a great example of widespread overconfidence.

      • Unknowns says:

        I think that the real reason people find it hard to take moral uncertainty seriously is that politics is about morality and you feel like you would be abandoning your tribe if you tried to be reasonable instead of supporting it.

        • TheAncientGeek says:

          Yeah, there’s a lot of overconfidence, because most people have a social/argumentative style of rationality.

  76. Tim Martin says:

    “I think the argument against this is that there are way more ways for there not to be Hell than there are for there to be Hell.”

    I’d say the argument against this is that you only assign 50% probability to a binary choice if you have absolutely no information about it whatsoever. 50% is just the most conservative probability assignment – you wouldn’t use it if you had background information to slide the probability in one direction or the other. (And hell is pretty darn *improbable* if hell is defined as a place in another dimension where the minds (?) of people from this dimension go when they die, but only if they’re bad, and that place is ruled by a being with powers that are completely unexplainable by known physics.)

    Also, you should always use all the relevant information you have when coming up with a probability. Your example with Hillary Clinton shows ways to get different probabilities, each one only using part of the information that you have (1. she is an American citizen, 2. she is a front-runner candidate, 3. there have been X many presidents, etc.) The best probability estimate will combine all relevant information.
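
    To make “combine all relevant information” concrete, here is a toy Bayesian version of the Clinton example (the 115 million figure is quoted from upthread; treating “is a major-party nominee” as the only extra piece of evidence is a deliberate simplification):

    ```python
    # Toy Bayesian update for the Clinton example (sketch; the model is deliberately crude).
    eligible = 115_000_000

    # Outside-view prior: a randomly chosen eligible American becomes president.
    prior_odds = 1 / (eligible - 1)

    # Evidence: "this person is one of the two major-party nominees."
    # Every eventual president is a nominee; among the other ~115M people, only one is.
    likelihood_ratio = 1 / (1 / (eligible - 1))

    posterior_odds = prior_odds * likelihood_ratio
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(posterior_prob)   # 0.5: the base rate plus one inside-view fact recovers the "1 in 2"
    ```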

  77. Jiro says:

    What matters is less whether the risk is 1/1000 or 1/1000000 than the fact that the estimate of the risk is based mostly on uncertainty in your ability to estimate. Such estimates should be treated with more skepticism than better-grounded estimates with equal expected value.

  78. Urstoff says:

    Seems like some of the propositions people are putting probabilities on are quite underspecified. The probability that a superintelligent AI gets developed doesn’t really entail any actions (assuming consequentialism blah blah blah). Obviously people are worried about superintelligent AIs that (through malignance or ignorance) seriously screw up the world (but how, and by how much?). But there are alternative hypotheses too, right? AIs that love humans and refuse to hurt a single one. AIs that really just want to play Scrabble against themselves until the heat death of the universe. AIs that figure out macroeconomics and thus prevent all future recessions. So you’d have to add up the expected values of all of these, and then modify that by how risk-averse you are (after all, there’s nothing more or less rational about how risk-averse someone is; it’s just a brute psychological fact). Seems like the final answer is going to depend very much on how many scenarios I can personally come up with (which is ultimately psychological/cognitive, not epistemological). I think I’d rather go read a book than spend the rest of my life calculating probabilities for beliefs.

    • Jiro says:

      Seems like some of the propositions people are putting probabilities on are quite underspecified. The probability that a superintelligent AI gets developed doesn’t really entail any actions (assuming consequentialism blah blah blah).

      The probability that a superintelligent AI is developed does entail some actions – specifically, giving money to MIRI. That’s what this is really about.

  79. Lorxus says:

    My guesses as to the probabilities of the discussion questions:

    1. 10^-9

    2. 10^-3

    3. 0.9

    4. 0.1

    5. 0.3

    6. 0.8

  80. 1. I guess I don’t use probability to assess questions like this. There could be a god or not be a god and it would have no impact on me. So I rate this as “don’t care”, act as if the probability were 0, and I don’t assign an actual probability.

    2. If we are talking about anything other than de minimis psychic powers (some ability no one claims to have because it’s too small for them to detect), then I think the chance is vanishingly small. Many attempts have been made to demonstrate such powers with astoundingly little success. Perhaps 1/100,000? I am probably unreasonably confident here.

    3. 90%, if you exclude intentionally applied corrections like global cooling to combat global warming.

    4. Tricky. With 85 years to go I would have set the probability pretty high, but have to balance that against the chance that humanity figures out how to beat normal pandemics before the first one hits and the chance that humanity creates a pandemic on its own. I’d say 80%.

    5. Hmm… 87%.

    6. How about 99%?

  81. Phil says:

    so how many numbers are there in the Boston phone book?

  82. Corwin says:

    1. What is your probability that there is a god?
    Zero.

    2. What is your probability that psychic powers exist?
    Zero.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?
    70%

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?
    30%

    5. What is your probability that humans land on Mars by 2050?
    40%

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?
    99.99%

    • Unknowns says:

      These probability assignments do not appear to be consistent.

      If there is a 30% probability that a pandemic will kill a billion people in a five year period by 2100, there is surely more than a 0.01% chance that a pandemic will happen and will prevent the development of a superintelligent AI.
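
      The consistency constraint can be written out explicitly. A tiny Python sketch, where the conditional probability is a made-up illustration rather than anyone’s actual estimate:

      ```python
      p_pandemic = 0.30     # answer given above for the pandemic question
      p_super_ai = 0.9999   # answer given above for the superintelligent-AI question

      # Hypothetical: chance that such a pandemic derails things badly enough
      # that no superintelligent AI exists by 2115.
      p_blocks_ai_given_pandemic = 0.05

      # Coherence: P(no superintelligent AI) >= P(pandemic) * P(it blocks AI | pandemic)
      lower_bound_on_no_ai = p_pandemic * p_blocks_ai_given_pandemic

      print(lower_bound_on_no_ai)   # ~0.015
      print(1 - p_super_ai)         # ~0.0001, more than two orders of magnitude below the bound
      ```

      For the two answers to cohere, the conditional probability would have to be below roughly 1 in 3000, the figure discussed downthread.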

      • NZ says:

        Why would a pandemic that kills fewer than 1 in 7 people (presumably concentrated in one part of the globe, like SE Asia) necessarily disrupt the development of AI?

        • Unknowns says:

          Saying that something has at least a chance of 1 in 3000 is not saying that it will necessarily happen.

          • NZ says:

            True.

            How many people does it take to develop superintelligent AI? A few thousand perhaps? And of those, maybe a few dozen are key innovators?

            So I guess the question is more about where the pandemic is concentrated and where the development of superintelligent AI happens. If the pandemic is mostly in Asia and the AI is mostly in North America, I wouldn’t expect the odds of the AI being disrupted to be high (OK, more than 0.01%). If the pandemic is concentrated in the same place as the AI, then yeah the odds of AI being disrupted go way up, since one of those key innovators is more likely to succumb.

            Another factor is whether and how quickly AI development rebounds after the 5-year pandemic has run its course. Did these key innovators share their innovations with people who understood them and survived? Did they open-source their code?

            Of course, Scott didn’t tell us whether the pandemic does run its course, only that it kills a billion people in 5 years. Maybe in 10 years it kills all humans, or maybe in 10 years it kills 1.000000001 billion people and vanishes.

            You can get sucked into odds guessing games about any of that stuff, which in a way shows the problem with thinking this way.

        • Harald K says:

          It doesn’t have to necessarily disrupt it. But are you 99% sure it won’t disrupt it?

  83. LCL says:

    I don’t know what to think . . . It is almost impossible for me to comprehend the mindset . . . HOW CAN YOU DO THAT?! (section 1)

    At risk of being the guy who tries to answer rhetorical questions, I want to talk about how people can make such bad predictions, beyond the clear factors like being inherently overconfident and typically very bad at thinking about probability. The factor I want to point to is incentives.

    The Probability Gods giving out dollars for correct predictions (and taking them for bad ones) is a good illustration because if that actually happened, wouldn’t we expect predictions to rapidly and dramatically improve? Predictions generated from prediction markets are often quite good.

    Monetary incentives for accuracy are good in this context because they can be strong enough to overpower other kinds of incentives. Without them, people only have an incentive for accuracy contingent on their self-concept or social reputation as an accurate predictor. And that’s often not strong enough to stand up to the other incentives involved, like:

    – Using your prediction as a group membership signal
    – Telling the questioner what you think they want to hear
    – Making a bold or outrageous prediction to draw attention or get a reaction
    – Saving mental energy by saying the first unconsidered thing that comes into your head
    – Projecting confidence to show high status
    – Getting a momentary emotional lift by predicting very optimistically

    And many other possible motivations. These may seem trivial, but for most people so are the marginal self-concept and reputation benefits of an accurate prediction.

    In sum, inaccuracy – even serious inaccuracy – is probably an indication of poor innate ability to be accurate. But wild, grievous, HOW CAN YOU DO THAT inaccuracy is mostly an indication that accuracy was not the major motivation.

  84. NZ says:

    1. What is your probability that there is a god?
    52%

    2. What is your probability that psychic powers exist?
    <0.0000001%

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?
    51%

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?
    1% (I don’t think a pandemic has ever killed a billion people in 5 years, and medicine will presumably advance, but I suppose there’s a very small chance that one could happen even in the mid-near future, especially if increased population results in more crowding.)

    5. What is your probability that humans land on Mars by 2050?
    9% (Right now one of NASA’s core goals is to get Muslim kids to feel better about themselves. On the other hand, private space companies are getting better all the time and international cooperation is really strong. On the other hand, landing a man on Mars is very, very difficult and without national glory as an incentive, it looks more difficult still.)

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?
    10%

    • goocy says:

      I don’t think a pandemic has ever killed a billion people in 5 years

      That’s a really crappy argument, considering that the world population only reached the billion mark around 1800. At least calculate the death counts relative to the world population.

  85. “Bayesian and frequentist statistics are pretty much the same thing [citation needed] ”

    [Begin Yoda]Looking to start a flame war, you are, hmmm?[End Yoda]

  86. Some Troll’s Legitimate Discussion Alt says:

    1. I’m not sure what you mean.
    2. .5%
    3. 40%
    4. 2%
    5. 1%
    6. 11%

  87. iarwain1 says:

    Request: Please, please do more of these “it’s in the Sequences but I’ll write it anyway” types of posts. I happen to like your style of writing a lot better than Eliezer’s, and I think I’m not alone in this. My dislike for Eliezer’s writing style makes it hard for me to read the Sequences, not to mention that the Sequences are really, really long. But if you write about it then I’ll read it.

    [Other people: If you like this idea please respond with a comment to that effect so that Scott sees it. Unless that goes against some sort of SSC discussion ethic that I’m not aware of.]

    • Unknowns says:

      I agree, and not only because of the style.

    • coffeespoons says:

      I sort of agree. I quite like Eliezer’s writing style, but Scott’s is really excellent.

    • TK-421 says:

      Seconded. Personally I like both styles, and seeing similar ideas expressed in two different ways often helps clarify them further than just one way or the other.

    • blacktrance says:

      I agree because I think the message in the Sequences is important, and if rephrasing it gets it across to more people, that’s good. But I really like Eliezer’s writing style.

      • houseboatonstyx says:

        Yes. No offense to EY’s style or the Sequence commentators, but I feel like I’ve wandered into some avant-garde theatre and have no idea what the movie (all in sepia) is about.

    • stargirl says:

      I also would love to read Scott’s take on things in the sequences.

    • Agreed. No disrespect to Eliezer, but I do prefer Scott’s writing.

    • John Schilling says:

      Lacking Scott’s writing skills, I can’t compose a “ditto” post that isn’t almost inexcusably lame, but “ditto”.

    • Roxolan says:

      Sort-of-counterpoint: I get the same “I want to read more of this” feeling (to various degrees) at the end of pretty much *any* SSC post. Part of me wants to second your request, but, but, the opportunity cost! If we get more Rewritten Sequences posts, what will we get *less* of?

  88. Doug Muir says:

    WTF with the pandemic, people.

    The 1918 flu epidemic was literally a once-in-several-centuries disaster, and it killed 3%-5% of the world’s population. It would probably kill a lot fewer today; we’re better able to handle it in pretty much every way imaginable — antibiotics, antivirals, real-time tracking, gene sequencing, you name it. The time to develop vaccines for most sorts of viruses — not counting regulatory approval! — has dropped by about an order of magnitude; it took us about 90 days to get an Ebola vaccine that seems (so far) to be safe and nearly 100% effective.

    The world is getting richer and healthier with every passing year. Yes, environmental disturbances are shaking new zoopathogens into the mix, but so far we’ve been dealing with them pretty well. Stuff like SARS, MERS and Ebola can be unnerving, but death tolls in the low five figures — many of them poor people who were malnourished or otherwise immunocompromised — are not the stuff of global pandemic.

    The odds of a billion-killing pandemic are tiny. The only way you get one is either secondary to some other global catastrophe, or because some asshole splices together something truly horrible. The first one seems very unlikely (at least by 2070) and the second… low single digits? We’ve managed 70 years without setting off any nuclear weapons in anger.

    Doug M.

    • Scott Alexander says:

      I think the concern is with biological weapons.

    • goocy says:

      Low single digits seem intuitively correct. But this intuitive valuation assumes stability of the current geopolitical climate.

      With less geopolitical stability, double-digit numbers may be more likely. And since we’re considering longer time frames, and it only takes one period of instability to release a man-made pandemic, the actual likelihood shifts towards the double-digit number.

    • JDG1980 says:

      The odds of a billion-killing pandemic are tiny. The only way you get one is either secondary to some other global catastrophe, or because some asshole splices together something truly horrible. The first one seems very unlikely (at least by 2070) and the second… low single digits? We’ve managed 70 years without setting off any nuclear weapons in anger.

      I think most posters agree that the possibility of a naturally occurring billion-killing pandemic is fairly negligible. It’s “some asshole [splicing] together something truly horrible” that is the real worry.

      I doubt that such a pandemic would be deliberately created by a sovereign state, or even a terrorist group with clear ideological goals. It’s way too indiscriminate, and could just as easily wipe out its originators. (You can’t make a virus that will kill only South Koreans and not North Koreans, or only non-Muslims.) But it’s at least conceivable that in 20-50 years, something deadlier than expected could be created by accident in a biological warfare lab, and accidentally escape into the wild.

      Things really start to get worrisome when and if genetic tinkering technology becomes cheap and readily available. If someone can potentially whip up a nasty virus in their garage, there’s a far higher chance that a doomsday cult like Aum Shinrikyo or a Unabomber-style nutcase will do that.

  89. goocy says:

    Three things:

    Suggested model for generating overconfident estimates
    Human perception is based on logarithmic sensing. A light source, or a sound, ten times as bright or loud is perceived only as twice as bright or loud. That’s the reason for the popularity of the unit decibel. Since neurons use some sort of logistic regression for their internal decision making and learning processes, this logarithmic relation to things may be hardwired across the brain, which makes logarithmic scaling the “natural” scaling. See Stevens’ power law for a crude attempt at quantifying this internal conversion.

    My speculation is that all numbers get the logarithmic treatment, and are internally manipulated by their logarithmic values. Converting the results back to linear scale produces intuitively correct, but objectively very bad, results.

    Example: there’s a plane flying over my head, and it makes a noise with a loudness of 100. Following Stevens’ power law (100^0.67), I register that noise with an intensity of about 22. When I imagine what two planes would sound like, I double the internal representation of intensity to 44 and convert it back to linear scale. I estimate a loudness of roughly 280, about 40% higher than the actual value of 200. If I imagine 12 planes, my estimation is already about 3.4 times the real result. If a similar thing happens in abstract reasoning, it’s trivial to come up with a huge degree of overconfidence, especially if I lack the experience to compare my results to real probabilities.
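
    A minimal Python sketch of this back-of-the-envelope model; the exponent is the one used above, and everything else is illustrative:

    ```python
    EXPONENT = 0.67  # Stevens' power law exponent used in the example above

    def perceived(intensity):
        """Map a physical intensity to its compressed internal representation."""
        return intensity ** EXPONENT

    def imagined_intensity(internal):
        """Invert the mapping: the physical intensity that 'feels like' this internal value."""
        return internal ** (1 / EXPONENT)

    one_plane = 100
    internal = perceived(one_plane)          # ~21.9

    for n in (2, 12):
        guess = imagined_intensity(n * internal)
        truth = n * one_plane
        print(n, round(guess), round(guess / truth, 2))
    # 2  -> ~281, about 1.4x the true loudness
    # 12 -> ~4082, about 3.4x the true loudness
    ```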

    How to generate opposing, highly confident opinions
    Unfortunately, brains aren’t Bayesian predictors; they work with an assumption of Gaussianity [Nassim Taleb’s “The Black Swan” is the best source for this off the top of my head]. That means internal beliefs are quantified on an open scale of “Likeliness that this thing is happening/true”, rather than between the two extremes “This will happen / is true” and “This won’t happen / isn’t true”.

    In this framework, evidence for the likelihood that a thing is not happening/true needs to be collected in a separate belief. And because cognitive decision making is rather exclusionary, most people will swing towards the belief with the higher likelihood relatively early in the process. This structure generates the frustrating condition that most people have a firm opinion on almost everything. And with the help of a healthy dose of selection bias, you can end up holding your favorite belief all your life, even without ignoring evidence to the contrary.

    That’s how you end up with two experts with highly confident, yet opposing opinions.

    Minor adjustment proposal to your probabilities

    In your likelihoods that people go to Mars and build an AI, there is a hidden assumption that has been true for the duration of the space age and the computer age: cheap energy. Because our cheap energy comes from fossil fuels, and fossil fuels run out by definition, it’s only a matter of time until energy isn’t cheap any more. Depending on your beliefs about when the inflection point between “cheap” and “expensive” will happen, this fact may correct your posted likelihoods by a factor of 0.8 to 0.2.

  90. Buckyballas says:

    For all the forward-looking expert predictions that we will not discover/invent something by X date that turned out wrong, aren’t there also expert predictions that we will discover/invent something by X date that turned out wrong? “Nuclear fusion is always 20 years away” is a common rejoinder to the AI-risk folks on this message board and others. Can someone provide more historical examples like this?

  91. Glen Raphael says:

    Scott, can you clarify this one:

    What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    Is the claim that 2050 world temperatures will have increased by 1C compared to the temperature today? Or are you comparing to 1990 or…something else? what’s the baseline?

    • Deiseach says:

      And how do we discount any ‘natural’ warming effect from the ‘anthropogenic’ warming effect? If the temperature goes up by 1 degree, certainly man-made pollutants will have had an effect – but we’re not dealing with an isolated system. There’s this big ball of fiery stuff in the sky called the sun, for example.

  92. Josh says:

    An alternate explanation of how two people can have 99.9% confidences in opposite directions than “humans are irrational and crazy omg people” is that probability statements are not coherent outside of a well-defined sample space with a sufficiently large N. This post has a couple great examples such as Pythagoras’ theorem being wrong that illustrate why attributions of probability might be better viewed as problem-solving heuristics than objective facts about the world. Probability seems to work best as a cognitive tool when you don’t care about how a specific case comes out, but rather how cases tend to come out on average, which is why it’s a great tool for insurance companies, and less so for individuals making personal life choices (you DRIVE A CAR???? You could DIE!!!)

  93. MasteringTheClassics says:

    My probability for the existence of God is approximately equal to the probability that I am sane, not hallucinating, etc. My google-fu has failed me – what is the approximate probability that a random 23 year-old suffers from insanity to the point of not being able to trust his perception of the world?

    I’m not sure the language above gets the question across (LW is practically dedicated to the idea that we can’t trust our perceptions, but we’re not using quite the same definitions) so illustration time: I see a tree out my window, I’ve seen it every time I’ve looked there since I moved here ~18 years ago. I have interacted with the tree, talked with others about the tree, played with others around the tree, etc. What is the probability that the tree exists?

    That number is roughly equal to my probability that God exists, but I’d still like to know what that number is.

    • AngryDrake says:

      That number is roughly equal to my probability that God exists, but I’d still like to know what that number is.

      I don’t think such a number is useful.

      The kind of “probability polling” that is common in LW and associated venues seems more like random noise than any concrete signal, probably because these numbers are pulled out of backsides, rather than being the result of mathematical computations in accordance with utterly sophisticated, detailed numerical models of reality. Something of the complexity of our existence, never mind predictions about its future states, doesn’t lend itself to being reduced to a number.

      And humans don’t even use the numbers in a sane way (as the OP points out).

    • Jiro says:

      You are ignoring the possibility that biases influence your perception in a way which does not equate to insanity.

  94. Anthony Sterrett says:

    I have a bone to pick with your maths.

    They were instructed to set a range, such that the true number would be in their range 98% of the time (ie they would only be wrong 2% of the time). In fact, they were wrong 40% of the time. Twenty times too confident!

    The bolded portion is wrong. That is not how we should compare confidence levels.

    Consider: I am 75% confident that a certain event will occur. Then something happens to make me twice as confident. How confident am I now? Simply multiplying by two leads to 150%, which is patently ridiculous.

    We wish to have a definition of “twice as confident” that is both useful (closed in the range (0, 1)) and conforms to our intuitions (the numbers go in the right direction). As it happens, the best way to do this is to do the maths in terms of odds. Yudkowsky talks about them here.

    The maths, fortunately, are simple: if I am 75% confident, then the odds are 3:1, or simply the number 3 (O = P / (1 – P)). Twice this is 6, which represents a probability of 85.7% (P = O / (1 + O)). This conforms with our expectations.

    In the case of the quoted example, the percentages 98% right and 60% right convert to odds of 49 and 1.5 — it would therefore be correct to say that the person is 49/1.5 = 32.7 times as confident as they should be, not twenty times.

    This approach has the advantage that it works whichever way you turn it! If we take the probabilities that the person is wrong — 2% and 40% — they convert to odds of 0.0204 and 0.6667, the ratio between which is still 32.7 (you just have to take the reciprocal of the ratio). Compare this to trying it with probabilities: 2% wrong vs 40% wrong is twenty times too confident, but 98% right vs 60% right is… only 1.6 times too confident? Clearly something is up there.
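
    A minimal Python sketch of the odds arithmetic described above (the function names are mine, chosen for clarity):

    ```python
    def prob_to_odds(p):
        """Probability -> odds in favor."""
        return p / (1 - p)

    def odds_to_prob(o):
        """Odds in favor -> probability."""
        return o / (1 + o)

    # "Twice as confident", computed in odds space:
    print(odds_to_prob(2 * prob_to_odds(0.75)))             # 0.857...

    # Overconfidence factor in the phone-book example:
    claimed, achieved = 0.98, 0.60                          # claimed vs. achieved hit rates
    print(prob_to_odds(claimed) / prob_to_odds(achieved))   # ~32.7, same whichever way you turn it
    ```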

    Anyway, this might come off as nitpicky or hostile, and I hope that it doesn’t! Obviously the exact numbers don’t matter, and your point gets across. But the maths were bugging me.

  95. I’m astonished at how high people are setting P(God exists). To those who put the probability above 1%, I ask: what is your assessment of P(leprechauns exist)? It seems to me that P(God exists) has to be much, much smaller than P(leprechauns exist), simply because God is so f***-ing superlative: omnipotent, omniscient, omnibenevolent, omnipresent.

    Replacing “omni-” with “nearly omni-” mitigates this somewhat, but you’re still talking about an implausibly overpowered being, making the prior probability much smaller for “God exists” than for “leprechauns exist”; you would have to have massively better evidence for existence of God than you have for existence of leprechauns to end up with a posterior probability for “God exists” that was higher than the posterior probability for “leprechauns exist.”

    • AngryDrake says:

      Maybe they have a different definition of God, which Scott didn’t specify.

      I mean, there’s got to be a difference between probabilities of “the universe has been created by someone, a person of some sort” and “the God of Abraham”, and both can be called God.

      • Your mission, should you choose to accept it, is to give a specific definition for “God” — a reasonable one that at least roughly matches what most people mean by the term — such that one could make a reasonable argument that P(God exists) > P(leprechauns exist)…

        • AngryDrake says:

          The Christian God, as described by the orthodox branches of Christianity. This describes a fairly large area around the core concept of a Creator deity, whereas leprechauns are a very specific type of supernatural fairy. By Scott’s dartboard colours analogy, this would work out favorably towards God.

        • Glen Raphael says:

          My favored point-of-comparison is Santa Claus, which seems like a specific named instance of the set {leprechaun-like creature} but one who should be a LOT more likely to exist than God according to most rules of logic and evidence. People have SEEN Santa in recent memory. Lots of people, all over the world. And Santa leaves behind evidence. (Heck, even I have seen Santa in person a couple of times and I’m not even Christian!) As for supernatural powers, Santa certainly is powerful and strange, but not nearly AS powerful and strange as God.

          • Deiseach says:

            Santa Claus is actually Sinterklaas is actually Saint Nicholas of Myra, filtered through a couple of centuries of passing from one cultural sphere to another, assimilation of church feast days with local traditions, and Coca-Cola doing an ad campaign on a mixture of Washington Irving’s Knickerbocker stories and Charles Dickens’ “A Christmas Carol”.

            Sinterklaas is Dutch, hence the connection with the Dutch history of New York, and why you Americans know him as “Santa Claus”; in England he would have been Father Christmas (with the other variants of the legend).

            Strip out the associations with saints, secularise the character and the story, then add in modern twists for new adaptations, and voilà, we arrive at the Santa Claus who lives at the North Pole with Mrs Claus and the Elves.

            But there was, at the base of it all, a real person.

    • Ape or Apis? says:

      I don’t see why superlatives should necessarily affect a probability assessment. Should my assessment that the universe is infinite and will persist eternally also be <0.1%?

      I suppose it could be argued that, unlike an infinite universe, an omniscient god has infinitely high Kolmogorov complexity, since the complexity of a mind would grow as it approaches all-knowingness. Then again, the rules specifying this mind could be relatively simple [I think. I could be completely off-base here.]. Would an algorithm with an infinite amount of computing power, memory, and running time at its disposal have the prerequisites for omniscience?

      If anything, I would put the probability of god slightly above that of leprechauns, precisely because a god’s abilities and nature are less arbitrarily specified. It’s the difference between saying “in the absence of a mechanism causing it to just end at some point, I conclude that space is infinite” and saying “the universe stops at the end of the rainbow, just past the pot of gold.”

      • “I don’t see why superlatives should necessarily affect a probability assessment.”

        Bob claims to have discovered an animal that can move at 50 mph. Jim claims to have discovered an animal that can move at 5000 mph. Which one of these claims do you consider more plausible?

    • TheAncientGeek says:

      You are assuming naturalism, in contradiction to the stated assumptions of the vast majority of theists.

      If you want to see where they are coming from, try setting aside god and just considering the claims of supernaturalism, e.g.:

      1. The finite/simple comes from the infinite/complex.

      2. Mentality of some sort is an irreducible feature of reality.

      3. There are real teloi.

      Would you rate all these as exactly zero? Remember, that’s exactly zero, not wrong on balance of probability.

    • Troy says:

      I disagree about both the prior probability and the strength of the evidence for theism.

      On the former, I complained about giving God a super-low prior because he’s so “complex” here. I think an omni-God is in fact simpler than an almost-omni-God; sometimes a hypothesis positing that something is “maximally X” is simpler than one positing that that thing is “very X.” For example, an omni-God is simpler than a God who is omni except that he doesn’t know what hospital Scott works at.

      Also on priors, an important difference between God and leprechauns is that God is fundamental; leprechauns are not. God is competing with alternative hypotheses about the fundamental nature of reality (e.g., the universe exists uncreated); leprechauns are competing with non-fundamental theories about the inhabitants of Ireland which we have lots of background to bring to bear on when assigning priors.

      As for evidence, this is a big topic of course, but there’s virtually no good evidence for the existence of leprechauns, whereas I think, with such philosophers as Richard Swinburne, that the cumulative evidence for theism is very strong. Some of the main evidences include fine-tuning, the intelligibility of nature, consciousness/intentionality, religious experience, and the historical testimony to the Christian miracles.

  96. AngryDrake says:

    One day, I will write a random number generator, which will get its numbers by way of wget-ing Less Wrong and pattern matching for probability predictions.

    One day.

  97. Adam says:

    My method to deal with this problem was to recognize that I don’t understand numerical probability in any meaningful sense. The difference between 21% and 36% is largely irrelevant to my brain and I should stop pretending otherwise. So I created my own shortcut to help deal with it. Any probability between 0% and 15% is treated as if it is 0%. Any probability between 85% and 100% is treated as if it’s 100%. Everything in between is 50/50.

    These numbers could be shifted if the impact/effect analysis warranted it (“you will lose a penny” will lower thresholds, “you will die” will raise thresholds), but that’s the baseline.
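
    The heuristic is simple enough to write down. A sketch in Python, with the baseline thresholds from the comment; the function name is mine:

    ```python
    def bucket(p, low=0.15, high=0.85):
        """Collapse a probability into won't-happen (0), coin-flip (0.5), or will-happen (1).

        low and high are the baseline thresholds from the comment; they can be
        shifted when the impact/effect analysis warrants it.
        """
        if p <= low:
            return 0.0
        if p >= high:
            return 1.0
        return 0.5

    print(bucket(0.21), bucket(0.36))   # 0.5 0.5, so the "largely irrelevant" difference vanishes
    print(bucket(0.10), bucket(0.92))   # 0.0 1.0
    ```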

    • AngryDrake says:

      This is sane.

      • LCL says:

        I don’t see a problem with knocking everything in the mid ranges to 50/50. Won’t go far wrong in the typical cases.

        15%->0% or 85%->100% is nuts, though. Enough so that I doubt you really do it. Even everyday cases would lead to breathtaking recklessness.

        • AngryDrake says:

          I don’t see the problem with rounding off probabilities to the nearest divides-by-50-evenly mark. This seems to be how humans actually treat probability – will happen/won’t happen/I don’t know. As you get more precise numbering, you diverge from how humans deal with this in practice. Numbers are nice for numerical processing engines or other automated decision-making devices, but humans aren’t these.

          • HeelBearCub says:

            @AngryDrake

            I’m with @LCL here.

            If you treat “I’m 85% likely to not crash my car if I do 85 on this lonely, winding stretch of road” as “I will never crash my car doing 85” you are probably making an error. Only probably, as I don’t know your utility function.

            Of course, we don’t actually go “15% chance I crash my car!” We actually drive 65, then get a sense of “danger” and back off to 60.

            Taking that into account, I think we have to say that our internal map is almost always of expected outcome (with bounding on utility), not probability.

            Those thoughts, I assume, aren’t unique. So I’m sure someone, somewhere has stated that better.

  98. Harald K says:

    I read a book about data compression recently (Matt Mahoney’s Data Compression Explained, available online and highly recommended). He explains something pretty interesting: coding is a solved problem. Modeling is an unsolvable problem. If you have a model, using it for compression is easy. But coming up with a model is hard, of the “provably no algorithm exists to come up with the right model” kind of hard.
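
    The “coding is easy once you have a model” half can be made concrete: an ideal (e.g. arithmetic) coder spends about -log2(p) bits on a symbol the model assigns probability p, so all of the compression performance lives in the model. A toy Python sketch with a made-up model:

    ```python
    import math

    # A toy next-symbol model; the probabilities are invented for illustration.
    model = {"e": 0.5, "t": 0.25, "a": 0.125, "q": 0.125}

    def ideal_code_length(symbol):
        """Bits an ideal coder spends on a symbol with model probability p: -log2(p)."""
        return -math.log2(model[symbol])

    message = "teeta"
    print(sum(ideal_code_length(s) for s in message))   # 9.0 bits, determined entirely by the model
    ```

    Swap in a better model and the bit count drops without touching the coder; finding that better model is the part with no general algorithm.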

    Speaking of things that are hard. Coding, as in coding for compression when you have a forward prediction model, is easy. But in general, it’s also not so easy to say what you should do with your model’s various predictions. Here’s a post about that.

    The thing is, all these arguments about modelling being hard and certainty being hard to come by – yeah, you’re right. But all of these arguments are also arguments against superintelligence. Your superintelligence is an algorithm, and we have hard, mathematical bounds on how well it can scale. People have been bad at predicting the progress of science, but mathematicians have a much better track record.

    I won’t worry about AI risk, because I believe it smaller than many other long-term existential risks, both for me and for humanity. (I am also not a consequentialist, and believe it’s not my duty to guard humanity from existential risks, although I certainly hope we’ll stick around, and I’ll try not to do anything that endangers us too much). I believe the payoff for getting a better estimate for that risk is not worth it. So 10% in 50 years, or 0.01% in 50 years, doesn’t really matter, there are better things to do.

    If I thought like you, I guess I should stop reading about compression and multi-armed bandits and machine learning. Although it may incidentally make me a better judge of AI risk, maybe I’ll help bring in the apocalypse! How’s that for a conclusion for a rationalist? “Let’s not seek more knowledge about this, because there’s a 1% chance it’ll cause the gods to wipe us all out, er, create an evil AI to wipe us all out”!

  99. onyomi says:

    I’m amazed at the percentages of certainty people will assign to some of these questions even in the comments section of this post. On the plus side, I’m starting to better see the advantage of assigning numerical probability to questions I would normally consider too vague or unknowable, such as the likelihood of some version of Abrahamic God existing: it seems to impose a kind of intellectual honesty and humility, both with oneself and others. One is less likely to burn someone whom you are “95% certain to be a witch” than someone who is “definitely a witch.”

  100. onyomi says:

    My very shoot-from-the-hip answers (before decoding Scott’s answers):

    1. What is your probability that there is a god?
    If we narrow the definition of God to something like the personal Judeo-Christian god, 1%
    If we define God in a very broad, Hindu-ish kind of way, like “the spirit of all things; the one, unitary consciousness of the world, etc.”, 50%

    2. What is your probability that psychic powers exist?
    Again, depends on narrow vs broad definition.
    If narrow, like, “there are people who can truly read minds and tell the future,” 2%
    If broad, like, “there are people who have abilities which would seem supernatural to us today but could eventually be understood by science,” 90%

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?
    Don’t know much about this area, but the “hiatus” makes me suspicious. 30%

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?
    If a natural pandemic, 10%
    If we include biological warfare, 20%

    5. What is your probability that humans land on Mars by 2050?
    Seems a little too soon. I doubt it will happen before it can be something other than a suicide mission. 30%

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? 50%

    • sweeneyrod says:

      What is your probability that someone with a supernatural ability will be widely known in the next 10 years?

      • onyomi says:

        Pretty low. Maybe .1%? If there is or ever has been anyone in history who had what we might call supernatural powers, my money would be on reclusive yogi types who spend all day meditating. There are, for example, some Tibetan monks who supposedly can raise their body temperature at will to survive extreme cold (though that doesn’t contradict the laws of physics; it just pushes our idea of the degree of control the mind can exert on the body, hence my rating for psychic powers broadly construed). There are many others who claim to have met yogis who could see their past lives, communicate remotely, etc., and there are occasional interesting cases of children who seemingly have uncanny knowledge of places they’ve never visited, etc.

        BUT, I feel like if someone was going to become widely known for something like this, it would have happened already. That is, I don’t expect any breakthroughs to be made in Yoga in the next ten years. Maybe someone will invent a cybernetic implant or something, but that would really belong to the realm of science rather than the supernatural. If supernatural powers are something you can be born with or which can be cultivated just by practice, then I’d think that either they don’t exist, or else those who have them are very secretive about them, perhaps also being wiser than most, and knowing that no good will come of making a show of them. If you read books like Yogananda’s Autobiography of a Yogi, this seems to be the common view in places like India.

        Of course, it also depends on the level of proof one is willing to accept. There are already cases of people claiming to put powers to the test, such as this guy who claims to have survived on nothing but sunlight and air for decades: https://www.youtube.com/watch?v=fH-kBqec_K4

        He supposedly let scientists observe him while he didn’t drink any water for 10 days, which is about the upper limit for a normal person (without food it is possible to go much, much longer), but there are questions about whether some cheating may have been allowed, etc., and even if we concede he really didn’t drink water for 10 days, that wouldn’t necessarily be supernatural. If he truly had drunk no water for years then that maybe would cross into such a realm, but I doubt very highly anyone will prove something like that anytime soon.

  101. Professor Frink says:

    I can’t help but notice:
    1. Pascal’s wager was originally based on probability of God/religion
    2. Critics of AI suggest it might be a variant of Pascal’s wager
    3. The response is no, it’s not related to Pascal. The probabilities only look small because you are overconfident.
    4. And also, my probability of religion being true is also like 5-10% (order of magnitude similar to AI-risk), for these same overconfidence reasons.

    Still a Pascal wager right?

  102. Bryan Price says:

    1. What is your probability that there is a god?

    Define God. That ranges from the metaphysical to us being in a simulation, with the programmers then being God. If God is truly unknowable, then the answer might as well be zero percent.

    2. What is your probability that psychic powers exist?

    Zero percent. Psychic powers, if they actually did manage to happen, would be bred out. Even in today’s society.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    95 percent.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

    Currently, I’d lay odds at 25 percent. If we get even faster transportation than we have now, it goes up from there. If we stop transcontinental travel by airplane, then it goes down. Which I don’t see happening.

    5. What is your probability that humans land on Mars by 2050?

    50 percent. There’s a whole lot of “unknown” in that.

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?

    100 percent. In fact, I would say that it will happen before 2050. We may be running out of hardware oomph (this year is the first year we’ve missed Moore’s law, at least in Intel’s microprocessors, which may or may not set a precedent for that law slowing down), but software is another beast altogether, and if hardware slows down, expect software to start improving at rates similar to hardware’s past rates, because the target has finally slowed down. Also, remember, it only takes one machine to make this happen, even if it’s something along the lines of IBM’s Watson, which is already getting pretty damn close to catching up with us.

    IBM’s Watson on Wikipedia

    According to Ray Kurzweil, we’ll have the hardware to match human cognitive abilities by 2030, and for about $1,000. That’s depending on Moore’s Law, which, as I’ve already discussed, isn’t a sure thing right now. That won’t include the software, however, which would be the other half of that equation.

    • onyomi says:

      “2. What is your probability that psychic powers exist?

      Zero percent. Psychic powers, if they actually did manage to happen, would be bred out. Even in today’s society.”

      What? Why?

      • Bryan Price says:

        2. What is your probability that psychic powers exist?

        Zero percent. Psychic powers, if they actually did manage to happen, would be bred out. Even in today’s society.

        What? Why?

        Because psychic powers will be seen as a threat to the rest of the non-psychic population. Not unless the whole lot of humans develop them at the same time, which is something that I don’t see happening, either. If you can kill with a thought, how long do you think those around you will last? Not that long. So what is going to happen if somebody develops that capability? They aren’t going to last long enough to actually reproduce. Humans have to sleep at some time, so yes, they would be very vulnerable. It would have to be quite a passive ability for a person to be able to survive having such capabilities. Babylon 5 probably has the best answer to this question for these same reasons, as far as how it works for Earthers. People develop powers, they get taken out of the general population, because the general population won’t deal with them nicely. Train them to control their gifts, with whatever conditions, and probably be extremely ruthless when it comes to the non-psychic population hurting any of their members. “The Corps is mother, the Corps is father”.

        • onyomi says:

          What you describe seems like it might possibly have happened in one or more cases of the development of psychic powers (assuming they can just pop up randomly), but it seems wildly overconfident to assume that’s what would, necessarily, always happen. After all, people are often born with unusual talents of a more prosaic nature (can play the piano at age 3, etc.) and such people are usually admired and celebrated, or at least made to perform in the circus, rather than ostracized and killed. It seems equally likely they’d be revered as god-like and superior and make themselves into some kind of ruling caste.

          • Bryan Price says:

            What you describe seems like it might possibly have happened in one or more cases of the development of psychic powers (assuming they can just pop up randomly), but it seems wildly overconfident to assume that’s what would, necessarily, always happen.

            We already have “psychic” examples in the real world. Today. We have various fungi that infect their hosts, like Ophiocordyceps unilateralis, and we have things like toxoplasmosis that doesn’t necessarily (results may vary) affect cats or humans, but does affect mice. When ants are found that aren’t “right” because they’ve been infected, other ants will make sure to carry the infected individual as far away from the nest as possible. They know that if they don’t, even more ants will get infected. This isn’t the spooky action-at-a-distance mind control that most people think of as psychic, but it certainly is close enough to be considered mind control.

            In the cases where this happens, it seems that the driving factor behind this capability is to change behavior and get whatever the infected host is doing to go against its well-ingrained survival instincts. In the ants, it’s to get as high as possible so that when the fungus does fruit, it spreads its spores as far as possible. In mice, it makes cat urine irresistible, when normal behavior would be to run away from it as fast as possible, so that the mouse will be caught by a cat.

            Hence, if this were to develop in humans somehow, I don’t see a good side to why it develops. It will be used to change the behavior of other humans. It becomes a very desirable trait to have, possibly too successful for the rest of the population to tolerate, and winning the genetic lottery is just something that sometimes makes winners losers. It will be an advantage that will be wiped out by those that don’t have that advantage.

        • AngryDrake says:

          Explain why shamans and witch-doctors exist and haven’t been all slain by their tribal fellows.

          • Bryan Price says:

            Explain why shamans and witch-doctors exist and haven’t been all slain by their tribal fellows.

            They aren’t psychic! Being intuitive at reading others doesn’t make you psychic. If somebody does what the witch doctors and the shamans tell them to do, they are doing it willingly. They might be brainwashed somewhat (everybody is, that is a given, and if you think not, think about what you are buying after you see advertising (or not buying for that matter)), however, in the end, it is their conscious thoughts that are propelling them, not the psychic ability of the person to do something.

          • AngryDrake says:

            They aren’t psychic!

            That is – you don’t think they’re psychic. Their clients do. And their clients obviously don’t murder them – they give them jobs!

          • Bryan Price says:

            They aren’t psychic!

            That is – you don’t think they’re psychic. Their clients do. And their clients obviously don’t murder them – they give them jobs!

            I can’t prove a negative, and you haven’t proven that they exist, let alone that they are the ones that are psychic! Besides, real psychics wouldn’t be doing the crap that those charlatans are doing, and if they could do what they claim, the stakes would be incredibly higher, higher than you evidently imagine.

        • Setsize says:

          Okay, now give your estimated probability that this scenario is mistaken about what happens 100% of the time in human social behavior or evolution.

        • John Schilling says:

          Humans with weak telepathy are, outside of carefully designed laboratory experiments, indistinguishable from humans with abnormally high ability to read body language. Hence the need for carefully designed laboratory experiments, with double-blinding and remote observers and so forth, and not just in the narrow field of parapsychology / psi research.

          Since most human biological evolution occurs outside of scientific laboratories, and since people are pretty good – some of them very very good – at reading body language, I conclude that effects indistinguishable from weak telepathy have positive survival value.

          +1 for the “Babylon 5” reference, but note that this was really a way for the strongest telepaths to harness the weaker ones to the long-term goal of telepath supremacy. If it hadn’t served their goal, they’d never have allowed the mundanes to create the Corps.

          • Bryan Price says:

            Humans with weak telepathy are, outside of carefully designed laboratory experiments, indistinguishable from humans with abnormally high ability to read body language. Hence the need for carefully designed laboratory experiments, with double-blinding and remote observers and so forth, and not just in the narrow field of parapsychology / psi research.

            But we don’t have those kinds of experiments running. Look at it this way. We have magicians and conjurers who make mountains, the Statue of Liberty, planes, and lovely assistants disappear. Is that truly magic, or is that just sleight of hand? It’s sleight of hand, of course. They haven’t rewritten the rules of physics. And while it’s amazing when we see it happen, we know that the disappeared object still exists. It hasn’t just vanished into thin air, and we know that the Statue of Liberty isn’t now missing; it’s still where it originally was.

            Since most human biological evolution occurs outside of scientific laboratories, and since people are pretty good – some of them very very good – at reading body language, I conclude that effects indistinguishable from weak telepathy have positive survival value.

            I think I’ve already answered this previously, but I’ll recap. What ecological demand is being met by developing psychic abilities? Passive stuff, like actually reading somebody’s thoughts without truly being empathetic might survive, although, if the person with that capability can’t keep their mouth shut when talking about something that a person has never said to a living person before, it’s going to get hairy. Mind control? Then, you become as loathed as the fungus and the parasite that I talked about earlier. What ends up being a common story in the comic books with people with extraordinary powers? They get cut off from the rest of humanity, they’re required to register just to exist, they become pariahs. Why? Not just because they are different, but because they are so vastly different than the rest of the population. We’ve got that coming in Captain America:Civil War. We’ve got that happening in the series Sense8 (which is, technically, psychic capabilities), we’ve got that happening in the movie Jumper, being hunted not just because of what the jumpers are doing, but more because of what they are. It’s a common thread. Psychics, especially those with active abilities, will be considered too different from us to think that we can live with them.

            +1 for the “Babylon 5” reference, but note that this was really a way for the strongest telepaths to harness the weaker ones to the long-term goal of telepath supremacy. If it hadn’t served their goal, they’d never have allowed the mundanes to create the Corps.

            I think a whole novel, and more, could be written about the consequences of how telepaths are treated in the Babylon 5 universe, just the human side, let alone the other races.

          • Glen Raphael says:

            we know that the Statue of Liberty isn’t now missing, it’s still where it originally was.

            Right, but that’s because David Copperfield put it back at the end of the trick! 🙂

            What ends up being a common story in the comic books with people with extraordinary powers? They get cut off from the rest of humanity, they’re required to register just to exist, they become pariahs.

            Hey! No fair generalizing from fictional evidence! The reason the Sense8 cluster is being attacked is because being attacked gives them an excuse to work together as a superhero team – if they didn’t have a superpowered adversary the show would be boring, so the writers put that in. The reason the X-Men are shunned and have to hide their abilities is that mutantism in those books is being used as a hamfisted metaphor for other traits that have been discriminated against in the past – most notably homosexuality. NOT because it makes any particular logical sense.

          • Bryan Price says:

            we know that the Statue of Liberty isn’t now missing, it’s still where it originally was.

            Right, but that’s because David Copperfield put it back at the end of the trick! 🙂

            Uh-huh. o_O

            What ends up being a common story in the comic books with people with extraordinary powers? They get cut off from the rest of humanity, they’re required to register just to exist, they become pariahs.

            Hey! No fair generalizing from fictional evidence! The reason the Sense8 cluster is being attacked is because being attacked gives them an excuse to work together as a superhero team

            (Thank you. I’ll have to remember that link for the next time I argue with somebody about how bad “Atlas Shrugged” is and they want to talk about how it expresses her philosophy…)

            Since some people are thinking that psychics would arise through evolution, just what pressure exists to start this whole thing off in real life? If you don’t have a superhero team, then you can’t have the superenemy trying to outwit the superheroes. Neither exists. So which one develops first?

            – if they didn’t have a superpowered adversary the show would be boring, so the writers put that in. The reason the X-Men are shunned and have to hide their abilities is that mutantism in those books is being used as a hamfisted metaphor for other traits that have been discriminated against in the past – most notably homosexuality. NOT because it makes any particular logical sense.

            Which brings us to a (beautiful?) point here. If humans are so damn good at discriminating against those that are different from the rest of them, on grounds like homosexuality and race among others, what makes you so sure that humans would accept somebody with true psychic powers? You keep assuming that true psychics would act very much like the rest of us. I posit that they would act in a way that is very, very different. That is why I say that anybody developing true psychic abilities will be killed very soon after it is found out that they have them. The differences will be very apparent, and it will strike at other people’s psyche in such a way that they will truly find it abhorrent.

            This isn’t a logical fallacy of generalization from fictional evidence, this is historical evidence.

          • houseboatonstyx says:

            @ Glen Raphael
            The reason the Sense8 cluster is being attacked is because being attacked gives them an excuse to work together as a superhero team – if they didn’t have a superpowered adversary the show would be boring, so the writers put that in.

            I think the word missing here is teleological. That is, whose reason=purpose? In this case you mean the writers’ purpose of making a good story.

            You know, a popular story. It’s right there in the etymology. ‘Teleological’ means it gets talked about on television. QED.

          • Glen Raphael says:

            @Bryan Price:
            I am deeply suspicious of your claim that if psychics existed they would be immediately killed. Generalizing from real-world experience: there currently exist people who are ludicrously good at things most of us aren’t great at – I’m talking about skills like running, punching, playing table games and calculating odds. There are people who are WAAAAAAY out on the tail end of the bell curve in all these areas, so much so that their performance seems magical to the rest of us. We like to watch such people win the Olympics or triumph in chess or poker tournaments, we like to hire them as sports commentators or trainers or have them develop hedge fund trading strategies or have them break codes for the NSA. It is true that people with psychic abilities would strike us as weird. But then, people who are bendy enough to be a contortionist for Cirque Du Soleil ALSO strike us as weird. Nonetheless some people have these weird skills and weird hobbies which run in families or communities. Being better than anybody else at a weird skill is in modern society at least as often celebrated as demonized.

            (I agree with you that it’s hard to imagine how evolution would create psychic powers. But if it somehow DID, there’s no reason to think it would be selected against as strongly as you believe.)

            BTW, when you say “You keep assuming that true psychics would act very much like the rest of us.” I think you’re confusing me with somebody else – the post you responded to was the first time I said anything about psychics.

          • Bryan Price says:

            @Bryan Price:
            I am deeply suspicious of your claim that if psychics existed they would be immediately killed. Generalizing from real-world experience: there currently exist people who are ludicrously good at things most of us aren’t great at – I’m talking about skills like running, punching, playing table games and calculating odds. There are people who are WAAAAAAY out on the tail end of the bell curve in all these areas, so much so that their performance seems magical to the rest of us. We like to watch such people win the Olympics or triumph in chess or poker tournaments, we like to hire them as sports commentators or trainers or have them develop hedge fund trading strategies or have them break codes for the NSA. It is true that people with psychic abilities would strike us as weird. But then, people who are bendy enough to be a contortionist for Cirque Du Soleil ALSO strike us as weird. Nonetheless some people have these weird skills and weird hobbies which run in families or communities. Being better than anybody else at a weird skill is in modern society at least as often celebrated as demonized.

            My outlook is that true psychic people would truly be uncanny. Yes, there are fast people, extremely skilled players of all sorts of games, sports or otherwise, and some people manage to do some truly outrageous magic tricks; there are always going to be some people out at the tail end of the bell curve. But we still don’t have people outrunning horses or cars. That’s the kind of tail end that I see a psychic person performing at. If somebody can read minds, they are going to know what you are going to say before you say it. (Although according to current studies of fMRI readings, maybe not? /shrug) That’s going to shape their behavior: unless that person has incredible patience, and I know I don’t, then when some people just can’t seem to make up their minds, they are going to interrupt the person, perhaps even telling the person the answer before they even start talking. Of course, then there’s the issue of how much control a psychic would have over their power. Unable to pinpoint just one person’s mind, or even turn it off for a while, they would go mad with the influx of thoughts.

            The devil, of course, is always in the details. I don’t have them. Nobody does. But this is what my intuition tells me.

            (I agree with you that it’s hard to imagine how evolution would create psychic powers. But if it somehow DID, there’s no reason to think it would be selected against as strongly as you believe.)

            Unless it is something that starts out very slowly, with very little power behind it, and spreads gradually through a sizable share of the population, the irrational side of people is going to kick in. We’ve seen it in our own recent history, and I don’t think we are even close to outgrowing that. But the only effective mind control that we have seen on this planet so far comes from things that require physical access to the brain, like a spore growing in an ant, or Toxo in mice. We have yet to see “spooky action at a distance”. Not yet. And no, I don’t consider trained dogs that respond to hand signals or whistles to be spooky action at a distance. Full blown psychics from nothing? DOA.

            BTW, when you say “You keep assuming that true psychics would act very much like the rest of us.” I think you’re confusing me with somebody else – the post you responded to was the first time I said anything about psychics.

            I am talking about psychics that, if actually suddenly produced, would not act like the rest of us mundanes. Why talk when you can just communicate mind to mind? A heck of a lot clearer, faster, and probably no language barrier. And if you can do that to us mundanes, talking really becomes useless. Those capabilities would bring changes to normal behavior that would at best be considered eccentric.

        • 57% of people believe in the existence of psychic powers. (source: http://www.cbsnews.com/news/poll-most-believe-in-psychic-phenomena/ )

          Even if the CBS poll is wrong by 4 orders of magnitude (which it certainly isn’t), that means there are around 10,000 Americans who believe in psychic powers. As far as I know, none of them have ever gone on killing sprees of those they believe to be psychic.

          Palm readers, fortune tellers, spoon benders and others make money through no other means than by convincing others they have psychic powers, yet they are not assaulted by swarms of people looking to eliminate them.

          • Bryan Price says:

            57% of people believe in the existence of psychic powers. (source)

            Even if the CBS poll is wrong by 4 orders of magnitude (which it certainly isn’t),

            What do you mean by “wrong”? That 57% of people think psychic powers exist? I can believe that, so technically, that is correct. If you mean that because 57% of people think psychic powers exist they can’t be wrong, then I will say: yes, they most certainly are. It was practically 100% not that long ago when people thought that the universe revolved around the Earth. They were certainly wrong, even though it was commonly believed that that was how the universe worked. Even today, we have people claiming that we never landed astronauts on the moon. Preposterous: we can shine a laser at the moon and see the reflection back from reflectors that were set out by astronauts, not by some unknown launch of a landing robot.

            that means there are around 10,000 Americans who believe in psychic powers. As far as I know, none of them have ever gone on killing sprees of those they believe to be psychic.

            Because no psychics actually exist. There is nobody to actually kill. No human has psychic powers. It would be quite obvious if they did have true powers.

            Palm readers, fortune tellers, spoon benders and others make money through no other means than by convincing others they have psychic powers, yet they are not assaulted by swarms of people looking to eliminate them.

            They may scam some people out of a few dollars, and enough of them do get sued into oblivion, but a real psychic isn’t going to perform tricks like these people do. The acts are liable to be much more ferocious and/or greedy. If it is driven by an ecological need, what is that need? I’ve pointed out that there are fungi and parasites that can control the minds of those they infect. The result of that manipulation goes against the grain of self-protection. So why would humans have a need to change other humans’ behavior? Do you really think that would be good for the humans under control?

          • Bryan Price says:

            Well, I screwed that formatting to hell. 🙁

          • I’m not arguing that because 57% believe in psychic powers, that means psychic powers are true at all. I’m arguing that because 57% of people believe in the existence of psychic powers, your hypothesis of “people always kill psychics immediately” is proven wrong, as nobody in that 57% goes around killing those they believe have psychic powers. This argument does not hinge on whether palm readers et al are actually psychics; merely that a portion of people think they are psychic, which they certainly do.

          • Bryan Price says:

            I’m not arguing that because 57% believe in psychic powers, that means psychic powers are true at all. I’m arguing that because 57% of people believe in the existence of psychic powers, your hypothesis of “people always kill psychics immediately” is proven wrong, as nobody in that 57% goes around killing those they believe have psychic powers. This argument does not hinge on whether palm readers et al are actually psychics; merely that a portion of people think they are psychic, which they certainly do.

            Let’s break it down this way. We have charlatans running around claiming that they are psychic and they can do this, and they can do that. The things that they say that they can do are things that can not be done. They are merely manipulating people’s minds via non-psychic means, charm and persuasion. They aren’t 100% perfect, but they are good enough to fool enough people to keep going. These psychic abilities are nothing but parlor tricks. Plus, we have to remember that the placebo effect is always happening. Just because your life is better because you talked it over with a psychic doesn’t necessarily mean that the psychic caused it to get better.

            Real psychics are going to have abilities that we probably would never think of, and the ones that we would, they would probably be so good that it would be obvious that they were psychic. That’s if it’s actually able to be done psychically. Spooky action at a distance. Not unheard of, but in this case, it seems extremely unlikely.

            So, the way I see it, the difference between so-called psychics that aren’t (charlatans) and potential real psychics is that the real psychics aren’t going to do much if anything the charlatans do. And where the real psychics do match up with the charlatans, they will be so much better and more accurate at it that people, even the former believers, will ask the charlatans, “Do you even psych?” (Or some approximation thereof, depending on the era.)

            Just because something is said to be psychic doesn’t mean that it is. Even if we get the technology to closely mimic psychic abilities (or what we think they would be), it’s still not going to be psychic. Read what you are thinking? Sure, but too bad it takes a big-ass fMRI to do so. Even if that were miniaturized down to something the size of a baseball cap (no prediction from me on the possibility of that actually happening, and anything even smaller than that would be using different technology that I doubt we’ve even explored…), it’s still technology, and will be obvious in its use, versus psychic ability.

            If true psychics start popping up in the population, expect the mundanes to freak out. The reason why the mundanes haven’t freaked out with the current charlatans is because they truly aren’t psychic.

          • Jiro says:

            Real psychics are going to have abilities that we probably would never think of, and the ones that we would, they would probably be so good that it would be obvious that they were psychic.

            To quote a poster above:

            Humans with weak telepathy are, outside of carefully designed laboratory experiments, indistinguishable from humans with abnormally high ability to read body language.

            It is by no means certain that psychics will have such strong abilities that they would be so good it is obvious that they are psychic. They could just as easily be a little better at reading people, or have hunches that prove true at a slightly greater rate than chance, or be witch doctors whose curses work 15% of the time when statistically only 10% of the people they curse should be affected just by chance.

          • John Schilling says:

            Indeed, it is almost certain that the first humans to evolve significant telepathy would manifest this sort of marginally-detectable telepathy. Which offers obvious advantages, reproductive and otherwise, and which we can see does not result in pogroms etc. By the time another thousand years of evolution have produced non-marginal telepaths, it would almost certainly be too late.

          • Bryan Price says:

            To quote a poster above:

            Humans with weak telepathy are, outside of carefully designed laboratory experiments, indistinguishable from humans with abnormally high ability to read body language.

            It is by no means certain that psychics will have such strong abilities that they would be so good it is obvious that they are psychic. They could just as easily be a little better at reading people, or have hunches that prove true at a slightly greater rate than chance, or be witch doctors whose curses work 15% of the time when statistically only 10% of the people they curse should be affected just by chance.

            Then if they are that weak, what is the difference anyway? If they are that weak, how do you prove that they are psychic? I don’t think you can. If they appear to be normal, at least in the acceptable range, then they are normal. And acceptable. The problem with your curse scenario is that you start dealing with statistics. And besides the canard of lies, damned lies, and statistics, how many times are you going to run that experiment to see that it’s consistent? How many people are you going to involve in casting curses? And if somebody who isn’t a witch doctor beats the witch doctor in one experiment, does that automatically make the witch doctor non-psychic? And is a difference between 10% and 15% that big? And is that how big the difference would be?
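
            On the “how many times do you run that experiment” question, here is a minimal Monte Carlo sketch (not from the original thread; NumPy, the sample sizes, and the 5% false-positive cutoff are my own illustrative assumptions) of how many cursed subjects it takes before a genuine 15% hit rate reliably stands out from a 10% base rate.

            import numpy as np

            rng = np.random.default_rng(0)
            base_rate, curse_rate = 0.10, 0.15
            trials = 20_000  # simulated experiments per sample size

            for n in (50, 100, 250, 500, 1000):
                # Critical value: the 95th percentile of "hits" expected under the base rate alone.
                null_hits = rng.binomial(n, base_rate, trials)
                threshold = np.quantile(null_hits, 0.95)
                # Detection rate: how often a genuinely 15%-effective curse exceeds that threshold.
                curse_hits = rng.binomial(n, curse_rate, trials)
                print(f"n={n:5d}  detection rate ~ {(curse_hits > threshold).mean():.2f}")

            Under these assumptions you need several hundred cursed subjects before the difference shows up reliably, which is roughly the point being made above about statistics.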

        • Alternative hypotheses:

          Psychics exist and are no threat, because they can or must Use Their Powers for Good.

          Psychics exist, but use their powers to hide their existence.

          Psychic powers have nothing to do with genetics.

          etc, etc.

          • AngryDrake says:

            Or:

            Psychics exist, the condition is heritable, and yields a greater expected fertility than non-psychics. (Though if that were the case, I would expect everyone to be psychic to some degree.)

          • Bryan Price says:

            Alternative hypotheses:

            Psychics exist and are no threat, because they can or must Use Their Powers for Good.

            Why is there such an assurance of that? Just who decides what is good, and what is bad?

            Psychics exist, but use their powers to hide their existence.

            Possibly, but when do they learn this feat? Is it an inborn trait, like instinct? And if they do learn it, doesn’t my statement that if they were found they would not be allowed to reproduce sound like a reason for that to happen? (Hell, maybe that’s why I know?? 😉 )

            Psychic powers have nothing to do with genetics.

            Then either all humans are capable (shades of the Dune universe there) or something else is giving them, and if so, what?

          • Bryan,

            I think I can answer most of your points by noting that I don’t have to prove a hypothesis for it to be a hypothesis.

            “Just who decides what is good, and what is bad?”

            Irrelevant. There are versions of the hypothesis where “good” is useless from the point of view of psychics not killing humans, and there are versions where it isn’t. I am not arguing for some unique truth, I am pointing out that there are more hypotheses than you thought of originally.

            “but when do they learn this feat”

            Can you not think of your own hypotheses? They learn in mountaintop monasteries, or by psychic contact with other psychics, or…?

            “Then either all humans are capable (shades of the Dune universe there) or something else is giving them, and if so, what?”

            Training?

          • Bryan Price says:

            Bryan,

            I think I can answer most of your points by noting that I don’t have to prove a hypothesis for it to be a hypothesis.

            Understood. However…

            “Just who decides what is good, and what is bad?”

            Irrelevant.

            It’s quite relevant. We are already facing questions of this sort, and they will have to be figured out, one way or another. Autonomous cars. There is going to be an accident due to something happening too fast for the cars to react safely to (a falling tree/tree limb, and momentum is a bitch) and it involves two cars, maybe three. The cars have just enough time to coordinate, but one of the cars is going to take a bad crash, possibly fatal. Which one gets selected to take the crash?

            I have cats that go out and kill all sorts of creatures. Some of them, I’d rather they didn’t hunt, and some I’m glad that they hunt. I have no control over what they decide to hunt. To the cats, it’s all good. Saying that the psychics can only do good is, for one reason or another, totally unworkable. Hence the question. For the good of the psychics, for the good of the mundanes, for the good of humanity, or for the good of the planet? OK, so psychics don’t kill humans (even if they do consider themselves to be human); does that rule out bankrupting a human because you feel like it? Making them feed the psychic regardless of what the other human would do? That simple rule doesn’t cut it without some kind of position to view it from. And then, how does it get enforced? Because it’s not going to be an innate rule. Innateness is a nice hypothesis, but since we are already arguing over what counts as good, I think not.

            There are versions of the hypothesis where “good” is useless from the point of view of psychics not killing humans, and there are versions where it isn’t. I am not arguing for some unique truth, I am pointing out that there are more hypotheses than you thought of originally.

            “but when do they learn this feat”

            Can you not think of your own hypotheses? They learn in mountaintop monasteries, or by psychic contact with other psychics, or…?

            Then you’ve got the chicken-and-egg problem. Which happened first, the psychic or the psychic training? If it were an innate ability that anybody could just pick up, I think we would have discovered it by now. We’ve run through a few billion iterations now. Nobody seems to have it.

            “Then either all humans are capable (shades of the Dune universe there) or something else is giving them, and if so, what?”

            Training?

            Possibly, although in Dune it appears to be genetic: only women were found to be telepathic, and the Sorceresses that preceded the Bene Gesserit already had a plan for producing even more powerful telepaths.

            Wiki page details

          • @Bryan

            The point was that the hypothesis space will contain “psychics are necessarily good” for all values of “good”, and therefore for all values implying “won’t kill people”, which is all that is needed to answer the original point. There is no need to take a detour through What Is The Ultimate Good.

            “does that rule out bankrupting a human because you feel like it? ”

            Again: all values of “good”.

            “Then you’ve got the chicken and the egg problem. Which happened first? The psychic or the psychic training?”

            Why doesn’t that apply to all learnt skills? Which happened first, the omelette cook, or the omelette cooking lesson?

            ” If it were an innate ability that anybody could just pick up, I think we would have discovered it by now. ”

            Then the remaining hypotheses include its being a difficult skill like brain surgery.

    • AngryDrake says:

      Zero percent. Psychic powers, if they actually did manage to happen, would be bred out. Even in today’s society.

      Why?

    • LTP says:

      Watson is basically a glorified encyclopedia, though, that can interpret questions phrased in specific semi-clever ways. It doesn’t really think at all, it’s just a brute-force, big data machine.

      • Bryan Price says:

        Watson is basically a glorified encyclopedia, though, that can interpret questions phrased in specific semi-clever ways. It doesn’t really think at all, it’s just a brute-force, big data machine.

        That’s true today. Even with whatever improvements have been made to it since its appearance on Jeopardy. But I think the brute force might be a little more nuanced than you think it is. But the question remains, how much different is that from what we do?

  103. WT says:

    The first part of this post makes no sense, i.e., this:

    “Last week, I mentioned that Dylan Matthews’ suggestion that maybe there was only 10^-67 chance you could affect AI risk was stupendously overconfident. I mentioned that was thousands of lower than than the chance, per second, of getting simultaneously hit by a tornado, meteor, and al-Qaeda bomb, while also winning the lottery twice in a row. Unless you’re comfortable with that level of improbability, you should stop using numbers like 10^-67. . . . Yet some people think they can predict the future course of AI with one in a million accuracy! Imagine if every time you said you were sure of something to the level of 999,999/1 million, and you were right, the Probability Gods gave you a dollar.”

    No, no, no. Saying you think there’s only a 10^-67 chance of meaningfully affecting future AI is NOT the same as saying “I am going to make 10^67 statements and only one of them will be wrong.” That is in no way the same thing.

    If I say, “There is zero chance that Santa Claus exists and that I can affect this Santa Claus by my actions,” that is NOT the same as saying, “I can make an infinite number of factual statements and zero of them will ever be wrong.”

    • FrogOfWar says:

      You appear to have missed the part where the statements are supposed to be ones in which you have an equal credence. Hence the discussion of belief calibration.

  104. Will Newsome says:

    My answers to the discussion questions:
    1. What is your probability that there is a god? Bar uhaqerq zvahf svir creprag.
    2. What is your probability that psychic powers exist? Bar uhaqerq zvahf bar va bar gubhfnaq.
    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050? Bar uhaqerq zvahf avargl creprag.
    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? Bar uhaqerq zvahf svsgrra creprag.
    5. What is your probability that humans land on Mars by 2050? Bar uhaqerq zvahf rvtugl creprag.
    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? Bar uhaqerq zvahf gjragl svir creprag.

    50% chance I’m more accurate than Scott.

  105. Nornagest says:

    1. What is your probability that there is a god?

    If you mean Zeus or the God of Abraham, 1-2% and most of that is model uncertainty. If you mean something broader, like some version of pantheism or the little green men running the server that’s simulating the universe, it might reach 20%.

    2. What is your probability that psychic powers exist?

    No more than 5%, and that’s if we stretch “psychic powers” to the breaking point. A more traditional picture is far less likely, a fraction of a percent at the highest; I would be less surprised to find Bigfoot or alien abductions than e.g. telekinesis.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    About 50%. I’m pretty much on board with global warming, but 1C in 35 years (relative to a baseline of now) sounds high to me, especially since the models I’ve seen look more exponential (or, hopefully, sigmoid) than linear.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

    No higher than 10%; the only historical equivalent I’m aware of is the Black Death, and modern sanitation practices even more than antibiotics make an equivalent unlikely. Bioengineered plagues for terrorism or warfare are a bigger worry, but the responsible parties would have to get their lethality dialed in on the first try, and that’s a hard problem.

    5. What is your probability that humans land on Mars by 2050?

    30% or so. There are people aiming for this, and no insurmountable technical barriers, but institutional limitations are going to make life hard for them.

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?

    70%, but this is a low bar; an AI better than most humans at most things isn’t necessarily a world-beater, depending on how the difficulty of self-improvement scales from there relative to its power (a question for which we have only highly speculative answers). My estimate for a hardcore FOOM-like scenario by that time is maybe 15% off the top of my head and might end up being lower if I sat and thought about it for a while.

  106. Phil says:

    I appear to be rather late to the party, but I will defend my 1 in a million comment anyway. Scott, you seem to be committing yourself to the view that on any controversial question, assigning a very low probability is inherently overconfident. Assuming that by overconfident you mean irrational (or wrong) and aren’t just using an empty pejorative, this view seems very problematic. Do you really think that every far-fetched technological idea has a not-too-low chance (in the sense that p > 10^-4, or whatever) of being realized? What about conjunctions? What are the chances that in the next century we will have super-intelligent droid spaceships with warp drives, antimatter torpedoes, and quantum computer targeting systems? Do I really need to have thought about every relevant theory to say that this is very unlikely?

    If you admit that assigning very low probabilities to controversial questions is epistemically permissible, then we are just back to having a debate on the merits. Here your arguments mostly consist of appeals to the authority of AI researchers. I don’t think these arguments are inherently bad, but I do think there are reasons to doubt the AI experts. Significant overconfidence about the prospects of one’s own field is a common characteristic among scientists. Additionally, AI researchers (to my knowledge) have not engaged with objections from neuroscientists about the ease of creating AGI, some of which I mentioned in my previous comment. Off the top of my head, the AI expert most knowledgeable about neuroscience is Yann LeCun, who also happens to be an AI skeptic. If LeCun assigns a very low probability to the discovery of AGI in the next century, will you call him overconfident as well?

  107. Alsadius says:

    I think the root problem here is that people confuse the hyperbolic meaning of “one in a million” (i.e., “I believe it’s fairly unlikely, and I wish to signal that belief strongly”) with the literal meaning of “one in a million” (i.e., if we simulate this event one million times, it will happen an average of once). If you set it out in terms like that, they may acknowledge that a difference exists, but their brains simply don’t see it in normal circumstances, and the confusion of the two runs rampant.

    And hey, discussion questions are fun. I won’t rot13 my answers, but I haven’t read anyone else’s yet.

    1. What is your probability that there is a god? 5%
    2. What is your probability that psychic powers exist? 0.1%
    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050? 80%
    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? 10%
    5. What is your probability that humans land on Mars by 2050? 50%
    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? 80%

    • Alex says:

      “with the literal meaning of “one in a million”(i.e., if we simulate this event one million times, it will happen an average of once).”

      If anything, the literal meaning of “one in a million”, in your nomenclature, is this:

      Take an event with two possible outcomes A and B where the probability for outcome A is 10^-6 and the probability of outcome B is 1-10^-6. Then simulate the event one million times and record how often you got outcome A. This is your result.

      The result, and now this is important, could be any number from 0 up to one million! Though of course it will most probably be a number between 0 and, say, 10. In any case, with this experimental setup you have nothing to average over, let alone any guarantee of getting “once” as an answer.

      To be able to calculate an average you have to repeat the entire exercise of conducting a million simulations. I.e., you have to generate, say, 100 or 1,000 or 10,000 results along the lines of the experiment described above and then average those results.

      This value, and only this value, will approach 1 by the law of large numbers. This is the point which Scott got wrong in the original post, and since my pointing it out got no reaction, I will try to rephrase it in the context of your comment.

      Of course there is another way of doing this, and that is to conduct the experiment only once but divide the result as defined above by the number of simulated events, and then check how close that number is to 10^-6, where the answer is bound to be “very close”. But there is a long way from your phrasing (and Scott’s, for that matter) to this way of reasoning.

      I think I now understand that, of the two methods of describing the same thing outlined herein, Scott calls one description frequentist and one Bayesian, though I do not see on what grounds he does this, or which is which. Even so, the distinction is only meaningful if both methods are described correctly.
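
      A minimal simulation sketch of the two readings described above (my own illustration, assuming Python and NumPy): the count from a single run of a million trials merely scatters around 1, while it is the average over many such runs that converges to 1.

      import numpy as np

      rng = np.random.default_rng(42)
      p = 1e-6          # "one in a million"
      n = 1_000_000     # trials per run

      single_run = rng.binomial(n, p)               # could be 0, 1, 2, 3, ...
      many_runs = rng.binomial(n, p, size=10_000)   # 10,000 repetitions of the whole exercise

      print("count in one run of a million trials:", single_run)
      print("average count over 10,000 such runs:", many_runs.mean())
      print("observed frequency in the single run:", single_run / n)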

  108. Protagoras says:

    1) For me, for all intents and purposes, God is a skeptical scenario. I do not know how to assign probabilities to skeptical scenarios; I find it best to simply ignore them.

    2) “Psychic powers” is too vague for me to feel comfortable giving an answer.

    3) 90%.

    4) 1%. I think it would be overwhelmingly likely to be artificially engineered if it happened (as people have noted, wild diseases are just not that devastating); I’m not good at predicting how hard it is to make a truly deadly pandemic artificially, or how likely anyone is to do so and unleash it, but I lean toward it being quite difficult and unlikely to be used. Hope I’m right.

    5) 30%. There’s been this huge lull in manned space exploration, but I feel like 10 years is plenty of time for an intensive effort to carry out the project, which means there’s 25 years of steadily improving technology during which someone could start the project. Seems like a decent chance someone will in that much time.

    6) 95%. Some of the forces powering relentless computing power improvements seem to be slowing down slightly (clock speeds aren’t rising like they used to), but others are not. And I find it hard to believe that obstacles other than computing power won’t be overcome, when computing power is so helpful in overcoming obstacles.

  109. onyomi says:

    To anyone who’s willing to assign a zero probability to *anything* other than, maybe, “cogito ergo sum,” shouldn’t basic humility about how many times people have been wrong about such things demand you admit that there’s at least *some* possibility one or more of your fundamental assumptions is wrong and/or that our understanding could one day need adjustment (like, it’s not that Newtonian physics was “wrong,” exactly, but Quantum physics changes our idea of what might be physically possible)? Moreover, assuming intelligence much, much greater than ours is a possibility, if not already a reality somewhere in the universe, how, again, could we have that degree of confidence about what would or would not be possible for it, given how poor a comprehension an ant has of what our minds do?

  110. Rick G says:

    You said:

    That is, if an ideal reasoner would ascribe 80% probability to the popular theory and 20% to the unpopular theory, perhaps most real people say 99% popular, 1% unpopular. In that case, if the popular people are urging you to believe the popular theory more, and the unpopular people are urging you to believe the unpopular theory more, the unpopular people are giving you better advice.

    I don’t get it. If the ideal value is at 80%, and you are already there, then being pushed to either side is equally bad. Or are you talking about someone who is above 80% (because they are a less-than-ideal reasoner, and therefore slightly overconfident, and, we assume, in the majority direction)? You should clarify this in the post.

  111. Rick G says:

    Are there any new and better places to do probability calibration (without much effort) than what was posted here on LW 6 years ago?

  112. Vasily says:

    My first comment here. Yes, everybody loves polls.

    1. 0%
    2. 0.1%
    3. 67%
    4. 20%
    5. 50%
    6. 1%

  113. Matthew says:

    If I had lived in 1920s Britain, I probably would have been a Communist. What does that imply about how much I should trust my beliefs today?

    Forgive me nitpicking a minor side point in this essay, but in the 1920s, the establishment majority was correct about Communism (although to be fair, during the NEP, things hadn’t gotten anywhere near as awful as they were going to under High Stalinism, so they were mostly correct in a theoretical way). It was an intellectual avant-garde that was getting it wrong. Muggeridge was angry with left-of-center journalists, not with Parliament. This doesn’t actually bode well as argument for giving more weight to AI-worriers.

    Also, if we generalize the point, the 2010s are not the 1920s. On the one hand, there are much more sophisticated techniques available for the powerful to fabricate and/or suppress evidence. On the other hand, the ease of access to information for the less powerful is many orders of magnitude greater. These could exactly balance out, but I doubt it. In the 1920s, you could be ideologically blinded to the dangers of Communism, but you wouldn’t have had much access to information that would challenge your bias anyway. Now, being, say, overly biased by SJW evaporative cooling actually does require ignoring evidence, not just not having access to it. Someone in the 1920s (or, to make the comparison better, the 1930s) who was told that the USSR had become a hellhole had nothing like this to challenge them, the way a denialist about North Korea does now.

  114. Matthew says:

    1. What is your probability that there is a god?

    Question is underspecified. My estimate that any specific religion is correct about the details of a deity is going to have more zeroes than this margin can contain, but my probability for some sort of deism is more like 0.5

    2. What is your probability that psychic powers exist?

    Definitionally underspecified again. If the question is whether powers that “defy nature” in some way exist, then my estimate is going to have many zeroes. If by psychic we just mean, say “sensitivities to physical properties we are not currently able to measure,” then perhaps 0.01

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    1C from 2015? 0.8, but probably missing the point from a policy standpoint. Ocean acidification, for example is going to be a serious problem even if the temperature rise turns out to be less than 1C.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

    Combining an average of naturally-evolving pandemic (very unlikely to kill that many given ongoing improvements in sanitation/public health) and bio-engineered (rather more frightening), let’s say 0.025

    5. What is your probability that humans land on Mars by 2050?

    0.75

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?

    0.01

    I am, however, much more frightened by the sort of possible worlds raised by Marshall Brain (in the USA) in Manna and Noah Smith in his article here. Say, 0.2

    • Deiseach says:

      I’ve just started reading that short story, “Manna”, and from my experience in low-wage service work (retail), I can immediately identify one flaw:

      Manna also had “help buttons” throughout the restaurant. Small signs on the buttons told customers to push them if they needed help or saw a problem. There was a button in the restroom that a customer could press if the restroom had a problem. There was a button on each trashcan. There was a button near each cash register, one in the kiddie area and so on. These buttons let customers give Manna a heads up when something went wrong.

      This burger chain is marketing itself as hip and fun, right? Now if it’s catering to both families and attracting in groups of teenagers, I can tell you right there what is going to happen: those buttons are going to be pressed constantly.

      Small kids will press them because shiny! Teenagers will press them because they’re assholes at that age and it’ll be a big joke haw-haw to see the staff running around on fruitless errands. Some alleged adults will press them because “Excuse me, I ordered a hamburger, this is made with beef” (I enjoy contemplating the perfect computer manager arguing that one with a customer demanding to see the manager).

      Result: either Manna disrupts normal service by sending staff off to empty bins that don’t need emptying, clean toilets that are currently in use (and no customer is going to be happy when someone tries to open the stall door when they’re on the throne), wipe down tables when people are still eating, and so on and so forth, or it will have to learn when to ignore the button calls, which means you’ll still be likely to miss genuine calls that someone has gotten their hand stuck in the hand drier in the loo, etc.

      Also, as a former low-wage service drone, I deeply resent the snotty superiority on display here; oh yeah, the workers love the new system because they’s too dumb to handle complimacated stuff like “Duh what I do when cleaning floor?”, since naturally if they’re working low-wage jobs they must all be low-IQ to match:

      Manna micro-managed minimum wage employees to create perfect performance. …The employees were told exactly what to do, and they did it quite happily. It was a major relief actually, because the software told them precisely what to do step by step.
      For example, when Jane entered the restroom, Manna used a simple position tracking system built into her headset to know that she had arrived. Manna then told her the first step.

      Manna: “Place the ‘wet floor’ warning cone outside the door please.”

      When Jane completed the task, she would speak the word “OK” into her headset and Manna moved to the next step in the restroom cleaning procedure.

      Weirdly enough, when I was a drone, I was able to remember the simple set of instructions about how to wash a floor without needing to be talked through it every time.

      If Marshall Brain thinks that the workers are all happy peons under this system, and are not figuring out ways of petty rebellion to screw it over (because they are too stupid to do so), I hope he enjoys his future as a storage battery for our New AI Overlord.

      Okay, to be fair to the guy, he can see the pitfalls in the system and how it all goes wrong. And it’s very easy: the zero hours contracts that Manna enforces are already here. Working in warehouses where you are timed and constantly exhorted to do more, faster, no matter how fast you are: that’s another undercover essay by Barbara Ehrenreich.

      The Libertarian fantasy land at the end is touching if unrealistic; my cynical side was expecting the end to be more like that of C.M. Kornbluth’s The Marching Morons.

      • Loquat says:

        Most of the people I’ve known who worked that kind of “drone” job would HATE that degree of micromanagement. A system that, when telling you to cook more burgers, assumes you need step-by-step micro-directions to go to the freezer, remove the burgers, return to the grill, etc, no matter how much experience you have with that kind of work, is a system that’s just begging for workplace sabotage.

        Also, there’s the part where all the unemployed are effectively imprisoned for life in the “welfare dorms”, completely cut off from the rest of society, with nothing to do because even the menial tasks of cooking, cleaning and other maintenance are done by robots, and precisely nobody outside is trying to change that. There is no possible way there wouldn’t be all sorts of rich philanthropists, politicians, religious leaders, revolutionaries, etc, all trying to “improve” the situation and maybe acquire an army of grateful supporters in doing so.

        • jaimeastorga2000 says:

          an army of grateful supporters

          What good is an army of grateful supporters when they are economically and militarily worthless?

          The 20th century, it’s the era of the masses, mass politics, mass economics. Every human being has value, has political, economic, and military value, simply because he or she is a human being, and this goes back to the structures of the military and of the economy, where every human being is valuable as a soldier in the trenches and as a worker in the factory. […] But in the 21st century, there is a good chance that most humans will lose, they are losing, their military and economic value. This is true for the military, it’s done, it’s over. The age of the masses is over. We are no longer in the First World War, where you take millions of soldiers, give each one a rifle and have them run forward. And the same thing perhaps is happening in the economy.

          • Loquat says:

            There’s this thing called the vote, most countries have it these days?

            It’s certainly possible that in a future dominated by mass unemployment the elites will decide to disenfranchise the unemployed masses, but in order to get there you have to either (a) specifically change the law to take the vote away from large swathes of people, or abolish democracy altogether, neither of which is IMO feasible without something major like a military coup, or (b) retain the illusion that everyone has the right to vote while in practice putting substantial hurdles in the way of undesirables voting, and at least in the modern USA we have plenty of lawyers who love to challenge that sort of thing, in the interest of Human Rights and making a name for themselves.

      • Murphy says:

        Note that later in the story it went into surgeons and other high-skill jobs getting replaced by Manna as the systems got better.

        The button thing struck me as stupid as well, but I can think of some workarounds, like using RFID customer loyalty cards instead of buttons, so that the system could learn to drop the weighting of notifications from individuals who issue false alarms multiple times (a rough sketch of that idea appears after this comment).

        The micromanagement thing again, sounded a bit crap but the general idea seems sound. Imagine a system which walks you through exact steps the first few times then steps up to higher level instructions as people’s tasks get verified as being correctly done. Want to change the workflow for something across a business chain? the system switches to detailed instructions across the board.

        The details were a bit stupid across the board and the ending was incredibly twee but the general idea wasn’t too bad.

        Amazon already uses something like that with employees being given streams of low level instructions through their handsets. “get item 1234324” “get item 77544354” etc with very poor job security.
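
        A minimal sketch of that false-alarm down-weighting idea, assuming each button press can be tied to a customer ID (e.g., via a loyalty card). The class name, threshold, and scoring rule here are all hypothetical, just one way it might work.

        from collections import defaultdict

        class AlertScorer:
            """Toy reputation filter: dispatch staff only for sufficiently credible reporters."""

            def __init__(self, dispatch_threshold=0.5):
                self.presses = defaultdict(int)       # total presses per customer
                self.false_alarms = defaultdict(int)  # presses later judged bogus by staff
                self.dispatch_threshold = dispatch_threshold

            def credibility(self, customer_id):
                # Everyone starts fully credible; credibility decays with their false-alarm rate.
                if self.presses[customer_id] == 0:
                    return 1.0
                return 1.0 - self.false_alarms[customer_id] / self.presses[customer_id]

            def handle_press(self, customer_id):
                # Returns True if staff should be dispatched for this press.
                self.presses[customer_id] += 1
                return self.credibility(customer_id) >= self.dispatch_threshold

            def mark_false_alarm(self, customer_id):
                self.false_alarms[customer_id] += 1

        scorer = AlertScorer()
        print(scorer.handle_press("prankster-17"))   # True: first press, benefit of the doubt
        scorer.mark_false_alarm("prankster-17")
        print(scorer.handle_press("prankster-17"))   # True, but credibility has dropped to 0.5
        scorer.mark_false_alarm("prankster-17")
        print(scorer.handle_press("prankster-17"))   # False: 2 false alarms out of 3 presses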

  115. Troy says:

    Ah, my two favorite topics, probability and God (well, the latter more in the discussion questions), and I’m late to the party.

    I do note to my satisfaction that when not directly responding to the fine-tuning argument, no non-theists (from my skimming of the comments above) gave a probability anywhere near 10^-67 for theism — except those who just said 0. Of course, some of those non-theists may have already taken fine-tuning data into account in their probabilities.

    • Troy says:

      I suppose, having made that comment, that I might as well reveal my own dogmatism.

      1. What is your probability that there is a god?

      .99999. The uncertainty comes almost entirely from higher-order uncertainty about my reasoning and the existence of reasonable people who disagree with me. The first-order evidence alone I take to be extremely convincing.

      2. What is your probability that psychic powers exist?

      Well, my confidence in the existence of miracles is near my probability for God, and I think some people would consider petitionary prayer ever being effective “psychic powers.” As for something more stable, as part of the “natural world,” probably around .02, most of which is concentrated around some kind of very weak ability (something stronger would surely have been detected in experiments by now).

      3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

      .8.

      4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

      .2.

      5. What is your probability that humans land on Mars by 2050?

      .1.

      6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?

      .001.

  116. Steven says:

    Well, since we’re dealing with calibration and certainty, this is as good a place as any to point out that Wikipedia has changed its redirect on “List of best-selling computer games” to “List of best-selling video games” (where it was previously to “List of best-selling PC games”).

    So everybody who answered “Tetris” on the last survey is now right. (And because of a revision to stats, “Diablo III” now beats “Minecraft” even under the old redirect.)

  117. Lila says:

    So basically the criticism is that people’s meta-confidence is excessively high: e.g. skeptics are overconfident that a probability is no more than 1 in a million. But the same criticism applies to the true believers, who hold the view opposite to the skeptics: they’re overconfident that a probability is no less than 1 in 100. They naively multiply probabilities together, neglecting the possibility (that you’ve discussed before) that an entire model is wrong. For every person who said “man will never fly”, there’s someone saying “there’s a pretty good chance I’ll be killed by a terrorist”. You talked about the failure rate of predictions of “this will never happen”. What’s the failure rate for predictions of “this will probably happen”?

    You bring up these issues briefly in your discussion of privileging the hypothesis, but you dismiss them too glibly I think.
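
    A toy illustration (my own, with invented numbers) of the point above about naively multiplying probabilities: a chain of steps multiplied together inside a model gives a confident-looking answer, but admitting even a modest chance that the whole model is wrong pulls the estimate back toward ignorance. It is shown here for the skeptic’s direction, though the same correction cuts against overconfident believers too.

    # Toy numbers, not anyone's real estimates: five steps the skeptic thinks are
    # each only 10% likely, multiplied naively, versus the same chain after
    # admitting a 5% chance that the whole model is wrong (in which case we
    # fall back on a vague 1% guess).
    p_steps = [0.1] * 5
    p_inside_model = 1.0
    for p in p_steps:
        p_inside_model *= p                  # ~1e-05

    p_model_wrong = 0.05
    p_if_model_wrong = 0.01

    p_total = (1 - p_model_wrong) * p_inside_model + p_model_wrong * p_if_model_wrong
    print(p_inside_model)   # ~1e-05, "one in a hundred thousand"
    print(p_total)          # ~5.1e-04, the model-error term dominates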

  118. Besserwisser says:

    Funny thing: I’m pretty certain we will get to human-level AI within the century. After reducing my numbers to account for overconfidence using the assumptions above, I ended up somewhere between 70% and 80%, nearer to 70%. My math was a bit questionable so it doesn’t really say much of anything but I’m feeling pretty pleased with myself.

  119. 1. 10%
    2. 5%
    3. 20%
    4. 10% (probably a created pandemic)
    5. 20%
    6. 50%

    Some of this I discussed in Future Imperfect. I don’t think I knew anything about Eliezer’s views when I wrote it, but A.I. was one of the technologies that I suggested might wipe out the human race.

  120. J Thomas says:

    Suppose you ask me how likely we will get a source of infinite energy in the next 100 years. Not just perpetual motion, perpetual motion that we can tap forever, with the energy coming from nowhere. My natural inclination is to say that it will never happen because energy is not created from nowhere, the law of conservation of matter and energy appears to be true.

    But what if the law of conservation of energy is wrong? What if under some special circumstances energy is created from nothing, and if we find out how to maintain those circumstances we can get new energy forever? There’s no proof that can’t happen. Maybe the conservation laws are true most of the time, and we just haven’t noticed the times they aren’t. I don’t believe that — I tend to believe in conservation laws because they make so much sense.

    Hmmm. I’d better say it’s one chance in a million. If it’s true, there might be a small chance we’ll find out next year.

    Maybe people who say one in a million about other things just have the confidence in them that I have in the conservation laws.

    • Anonymous says:

      >Maybe people who say one in a million about other things just have the confidence in them that I have in the conservation laws.

      Yes, Scott is addressing that people are drastically overconfident in those things.

    • Harald K says:

      You might say, my model doesn’t take into account the possibility that the laws of thermodynamics are wrong. And that’s perfectly legitimate. No model can take into account everything. It’s also sensible, since every model has assumptions, and you don’t get assumptions much harder than central physical laws like this.

      If your model incorporates the possibility that the laws of thermodynamics may be wrong, it will very likely be a poorer model overall.

      Take an example from compression. You have a great compression algorithm A. But if given a 1 GB file of the binary expansion of pi, it will fail to compress it, because A can’t distinguish it from random noise. It sees no pattern in it.

      So you make a compression algorithm B. This is exactly like A, except that it first checks if the data to be compressed is the binary expansion of pi. If it is, it starts the file with a special code (just the binary digit 1, for simplicity’s sake), followed by a code indicating the number of digits wanted.

      Now you have an algorithm that is capable of detecting a pattern, a genuine, interesting pattern that the former algorithm wasn’t capable of spotting. But it’s not a better algorithm, because it adds one bit (a leading 0, to indicate that the file is not pi) to all other files. It gets immensely better at compressing pi, at the cost of getting ever so slightly worse at compressing everything else.

      And this is ALWAYS a tradeoff. You can’t compress unless you can declare some patterns more likely than others. The more you hedge your bets, the more you say, “well, it’s POSSIBLE that the data could be this”, the worse your compression algorithm becomes. You need predictions to compress, and you need an opinionated model to make predictions.
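
      A toy rendering of the A-versus-B tradeoff described above (my own sketch, not a real compressor, and the flag here costs a whole byte rather than the single bit in the argument, purely for simplicity): B recognizes one special input exactly and pays the flag on everything else.

      def compress_A(data: bytes) -> bytes:
          # Stand-in for the baseline compressor "A" (identity, for illustration only).
          return data

      def compress_B(data: bytes, special: bytes) -> bytes:
          # "B": exactly like A, except it first checks for one special input.
          if data == special:
              return b"\x01"                      # flag: "this is the special file"
          return b"\x00" + compress_A(data)       # flag "not special" + A's output

      special = b"3.14159265358979323846"         # stand-in for "1 GB of the digits of pi"
      print(len(compress_B(special, special)))            # 1: a huge win on the special input
      print(len(compress_B(b"any other data", special)))  # one byte longer than A alone would give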

      • J Thomas says:

        And this is ALWAYS a tradeoff. You can’t compress unless you can declare some patterns more likely than others. The more you hedge your bets, the more you say, “well, it’s POSSIBLE that the data could be this”, the worse your compression algorithm becomes. You need predictions to compress, and you need an opinionated model to make predictions.

        This is true, and the more you leave out about your priors, the more likely you will be wrong and also the less accurate your estimates of your uncertainty will be.

        We must always simplify our models to make them tractable, and the more we do that the more reality we lose.

        It’s only natural that we think that things which actually happen one time in ten thousand will be one in a billion, because they’re too small for us to take account of. But they do happen. The more different tiny events that matter, the less likely we actually understand.

        “For want of a nail, the horseshoe was lost. For want of a shoe, the horse was lost….”

        “No battle plan survives contact with the enemy.”

  121. SUT says:

    MIRI’s too similar to bio-ethics. According to a certain viewpoint, bio-ethicists keep telling us there’s a ghost in the cells and foolishly preventing pragmatists from doing actual science. Work that’s life and death for present-day friends and family.

    Now everyone would agree there’s some point in the future where biotech becomes freakin scary in its implications to society at large but those are only happening in Wired magazine right now.

    Likewise, there’s an argument to be made against any formal instituion like MIRI trying to insert philosophy and futurism into applied-AI (informal commentary like sci-fi still welcome).

    One of the earliest computer-ethics exercises occurred during a Berkeley campus takeover in the 70s, where students considered smashing Engelbart’s mainframe, given that his funding came from the Defense Department during the Vietnam War. Ultimately, cooler heads prevailed and the computer was deemed a neutral participant. Having a discourse about something too abstract, even with the best of intentions, is often fruitless or counter-productive.

    • LTP says:

      “According to a certain viewpoint, bio-ethicists keep telling us there’s a ghost in the cells and foolishly preventing pragmatists from doing actual science. Work that’s life and death for present day friends and family.”

      I assume you’re talking about stem cell research?

      Anyway, I think this is a bit of a strawman of what bio-ethics is. Bioethics isn’t a set of positions but a set of ethical issues to be debated. You’ll find bioethicists on both sides of pretty much any issue, including bioethicists who are pro-abortion, pro-stem-cell research, pro-surrogacy, pro-kidney sales, etc.

      I get that bioethics that comes up outside of academia often leans socially conservative, especially when the term “bioethics” is explicitly used by a writer or politician, but that’s not how it is in academia.

  122. Bob Murphy says:

    Hi Scott,

    Interesting post but I have two critical reactions:

    (1) Don’t you need to distinguish between the estimate of a probability, and the person’s confidence in that estimate? For example, if you show me what looks like a regular quarter and ask, “What’s the probability I’ll flip and get a head?” I’ll say 1/2 but won’t be very confident, since I don’t know that it’s a fair coin. But if you let me study it for a half hour, then ask me the same question, I’ll now say (assume it’s a fair coin) 1/2 but I’ll be extremely confident.

    So in that same spirit, it’s conceivable someone could say, “Well geez, I’ve never thought about AI too much, but put a gun to my head I would guess the chance of a Terminator scenario by 2100 is 1 in a million.” In other words, it seems in your post you are conflating unlikelihood of an estimate with the confidence in it.

    (Don’t get me wrong, I’m quite confident most of the people you read on social media *were* way overconfident, but if they said “1 in 5” and you interpreted that to mean “0.20000000000…” then that too would be a ludicrous amount of precision.)

    (2) It seems that you are good at deploying framing effects whenever needed to rule out worrying about things you think are silly (like Jesus being God) but not the things you think are serious (like catastrophic anthropogenic climate change). I’m not sure those were objectively laid out; I imagine people who held the opposite positions could do something analogous to defend their perspectives.

  123. Jaycol says:

    I really have no idea what I’m talking about, so feel free to tell me how I’m being dumb here, but I think the way this question was asked, with drop-down options, might cause more scrutiny to fall on people giving low estimates than high ones. Was the person who complained there was not a “less than one-in-a-million chance” option hugely overconfident? Probably. Was he representative of people in general? Probably. But I do not think his overconfidence was necessarily more representative of low-estimators than high-estimators; rather, if the people on the high end seemed more humble, it would be a function of the options available more than anything.

    If it is as you represent it in your post, the high options were numbers like 90%, indicating a precision of 1/10 or 1/100. The low options, as they got lower, of course got more precise. 0.01% is of course much more precise than 90%. So people selecting the latter option seem a lot less overconfident than those selecting the former option. Of course, if people were complaining that there was no 0.0001% but no one was complaining that there was no 99.9999% option, that might indicate disproportionate overconfidence among the doubters relative to the believers.

    But now, someone who was just damn sure that there was a 90.0000% chance of superintelligent AI probably would not complain at all about the 90% option. Nor would someone who was sure they were properly calibrated at 24.7992% likely complain about having to answer 25% (if that was an option). Really, the only time an overconfident person would be caught is if they didn’t think the options could accurately reflect their extremely low or high predictions, since no one would deign to say there was a 0% or 100% chance, as that might mislead someone into thinking that the actual integers 0 or 1 were possibilities, when they’re not as far as probabilities go. That’s why incredibly low estimates tend to be incredibly precise estimates–nobody is willing to say “I’m pretty sure the risk is about 0” without qualifications, lest they seem irrational, while someone is willing to say it’s about 75% without qualifying it as “between 74.5000 and 75.5000 percent,” or “I’m pretty sure the chance is like less than seven hundred seventy-five thousand in a million.”

  124. Eray Ozkural says:

    These questions have little to do with AI doomsayers’ main claims, though. The fact that you assign 5% to the existence of God means that you don’t understand the theory of evolution. In particular, you don’t seem to understand how strong the theory is. You seem to be overconfident in your creationism, and you also seem unable to imagine that some hypothetical events are truly improbable. So I am not even looking at the rest of the answers, but it is frustrating to see that you are basically a creationist who pretends to be agnostic.

    • Nita says:

      What does God have to do with evolution? It could have created the Universe just to watch all the pretty galaxies spin.

    • TrivialGravitas says:

      You realize the former Pope wrote papers defending evolution right? Get out of the bible belt and you’ll find creationism is the exception rather than the rule among believers.

      I almost wonder if you’re actually a Christian trying to strawman atheism, haven’t seen somebody argue for atheism this badly anywhere but Reddit.

  125. Albatross says:

    Most people are really bad at probability. Statistics say kids are most likely to die in car accidents, poisoning or drown in terms of non disease deaths. Yet there are parents who ask if you have a dog, which is the smallest stat for death the CDC records. I like to be prepared and I have invested long enough that I include the real possibility some investments hit zero. Lots of people are unemployed or are doing silly jobs so when people ask me if we should devote more resources to stopping asteroids, evil AI, or pandemics… my answers are yes, yes, yes.

    1. I’m comfortable with belief in God even if God doesn’t exist. No proof either way, but I studied too much philosophy to trust non-God ethical systems. Loving our neighbors is great. 10%?
    2. Anywhere in the universe, including other planets? 5% In Iowa? 0.1%
    3. 33%. I figure temps fluctuate. An ice age or space dust might throw a wrench in it. But I buy wind power.
    4. Hmmm… sanitation improvements have reduced this. 100 million I could see, but a billion is very big; then again, over 5 years it is only 200 million a year. I could see it, especially if there are sanitation infrastructure problems. I say 5% to 10%, but mostly because I think we would fight it.
    5. If it is just a short landing a la the Moon landings I think 50% or greater. China or other up and coming countries could try it.
    6. A million hackers… a million viruses… learning software… even a worldwide treaty wouldn’t stop terrorists. I’m not sure how we avoid Super AI by 2115. Essentially, I view AI as a near certainty that might be stopped by nuclear war or asteroid impacts. So add up the catastrophes that impact electricity and subtract from 99%. I’ll say 85%. Heck, somebody probably uploads the brain of a computer scientist and the ghost in the machine makes an AI.

  126. HeelBearCub says:

    I contend that questions about the existence of God, or about the existence of something that is “magic” (like telepathy), are always unanswerable. Further, I contend that they are unanswerable by definition. I’m interested in what holes people can poke in this idea.

    Let’s take the (relatively common) answer to the “probability there is a God” question that includes a probability that we are in a simulation (and therefore that which runs it is “God”). Now, from our perspective, unknowing of the existence of the simulation owner, someone like that would look like God to us. But if we were to put ourselves in the position of running the simulation, we might (at most) describe that as “playing God”. If we know and can explain exactly how some “god” does what they do, they cease to be God.

    If, in the future, it became possible to reliably predict brain states by combining complex detection of body language, pheromone output, and scanning to detect subtle electric discharges in the nervous system we might still call it “telepathy” but it wouldn’t be the kind of telepathy we talk about today. It would be understandable, have rules, cause and effect, error rates, etc. In short, it wouldn’t be magic anymore.

    One real-world example of this is quantum uncertainty. Before we had an actual example of such a real-world phenomenon, the idea that merely “observing” something would change something fundamental about the world state would have been regarded as magical thinking, but once it enters the realm of the explainable, it ceases to be seen as magic.

    Not sure if folks will take this ball and run with it, this late in the life of the thread. I may repeat the query in an OT later on.

  127. jonas says:

    I think that there’s a one in a million chance that I’ll be run over by a truck the next time I cross the road. Am I overconfident about crossing roads?

    I cross the road a hundred times each month. I have a popular type of accident insurance as a credit card perk. Under it, I pay one euro per month, and the insurance company will pay me ten thousand euros if I get run over by a truck. Let’s say the company spends one tenth of the price of the insurance on paying out to people who are run over by trucks. That implies the company is betting there’s about a one in a hundred thousand chance per month, or a one in ten million chance per road crossing, that I get run over by a truck, so I’m not overconfident on this particular issue. Does this seem right?

    (The above numbers are gross oversimplifications, I won’t do a precise calculation here.)
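
    A minimal back-of-the-envelope sketch of the arithmetic above, using the same made-up numbers (one euro premium, a tenth of it spent on truck payouts, a ten thousand euro payout, a hundred crossings a month):

    ```python
    # Implied probability of being run over, from the rough numbers above.
    premium_per_month = 1.0       # euros paid per month for the insurance perk
    payout_fraction = 0.1         # assumed share of the premium spent on truck payouts
    payout_if_hit = 10_000.0      # euros paid out if run over by a truck
    crossings_per_month = 100     # road crossings per month

    # Break even on expected value:
    #   p_per_month * payout_if_hit = premium_per_month * payout_fraction
    p_per_month = premium_per_month * payout_fraction / payout_if_hit
    p_per_crossing = p_per_month / crossings_per_month

    print(p_per_month)     # 1e-05 -> one in a hundred thousand per month
    print(p_per_crossing)  # 1e-07 -> one in ten million per crossing
    ```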

  128. Alraune says:

    Rot-13 really breaks down when you use it in a highly patterned fashion like that. Those are all readable without decoding.

  129. dlr says:

    But what is your definition of ‘a god’? If ‘a god’ is a being (or race of beings) that is more powerful/smarter than human beings, then the odds of that seem fairly high: surely within some orders of magnitude of the probability of there being life of any kind somewhere else in the UNIVERSE.

  130. Nero tol Scaeva says:

    1. What is your probability that there is a god?

    Since most conceptions of god are unfalsifiable, I’m not sure what probability I can assign to its existence. Let’s take a couple of hypothetical scenarios that people commonly think of as proof of god:

    a) Human existence is proof that god exists (where the mechanism for humans’ presence on Earth is evolution). God, since it can do anything, certainly could have created humans via evolution. But an all-powerful god could also have created humans just like Creationists say he/it did. Let’s assume a priori that there’s a 99% probability that god exists. If god could create humans via either Creationism or evolution, while naturalism could only produce humans via evolution, then the fact that evolution is what actually occurred means that there’s a less than 99% chance that god exists given human existence via evolution.

    Compare this reasoning with picking marbles out of two jars. One jar has only blue marbles; the other jar has both blue and green marbles. All else being equal, if I’ve picked a blue marble, then it probably came from the blue-only jar. But let’s say I’ve assumed for some reason that there’s a 99% probability of having chosen the blue/green jar; then drawing a blue marble decreases that probability by about one percentage point, assuming Pr(blue marble | blue and green marble jar) is 50%.

    b) Life on Earth is proof that god exists. Compare this with alternative hypothetical scenarios, where life on Jupiter is proof that god exists, life on Pluto is proof that god exists, life on Mercury is proof that god exists, and so on. If a god wanted to, couldn’t it have had humans evolve on Mercury and put up some sort of protective force field or something that allowed life, and thus human life, to flourish that close to the sun? Since life on any planet is possible with god, it seems to me that we’d have to spread the attendant conditional probability across all possible planets in the solar system: P(Humans on Jupiter | God exists) = P(Humans on Mercury | God exists) = etc. If one thinks this is unfair, then we would have to posit that human life evolving on [Mercury/Venus/Jupiter/etc.] is proof that god doesn’t exist, which goes against the definition of “unfalsifiable”.

    This can be compared with an alternative hypothesis, like naturalism, under which human life can only evolve on planets within the habitable zone. Since this is what actually happened, the naturalism hypothesis wins this bet, meaning that a posteriori there’s a less than [the already-less-than-99% probability of god given evolution from (a)] probability that god exists. If human life had evolved on some other planet not in the habitable zone, then this would be evidence against the naturalism model.

    Again, the blue vs. blue-and-green marble jars. But now, instead of just blue and green, there are 8 different colored marbles to choose from in the multi-colored jar, representing the possible planets. All else being equal, if I’ve picked a blue marble, then it probably came from the blue-only jar. But now, assuming there’s a 98% probability of having chosen the blue/green (now multi-colored) jar, and assuming Pr(blue marble | 8 different colored marbles jar) is 1/8, this leads to around 86% for Pr(multi-color jar), which is a theoretical estimate of what Pr(God) would look like given its unfalsifiability.

    c) The fine-tuning of universal constants is proof that god exists. Again, compare this with alternative universal constants that an all-powerful god could have configured. Couldn’t god, if it wanted to, make a universe with vastly different universal constants and have humans evolve/live in it just because it wanted to? Who knows how many different configurations an all-powerful god could come up with; the only thing limiting it is its imagination. Is it literally infinity? Is there a conceivable configuration of universal constants that would disprove the existence of an all-powerful god? I’m not really sure. But since god is unfalsifiable, no configuration we can think of (since god can do anything, like perpetual miracles to keep humans alive in any environment) can be evidence against god’s existence.

    And then compare this with the naturalism model; even here, I’m not sure what it would look like. But it seems to me that this is the only universe where humans of our current ontology can survive.

    So now we have a jar with only blue marbles, and another jar with every marble color imaginable. All else being equal, picking a blue marble is much more likely to have come from the blue-only jar than from the every-color-imaginable jar. But we’re now at Pr(unfalsifiable [e.g., God/multiple marble colors]) = 86%. Doing a Bayesian update with, let’s say, 100 possible marble colors instead of infinitely many, we wind up with Pr(unfalsifiable [e.g., God/multiple marble colors]) ≈ 6%.
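
    To make the chain of updates explicit, here is a minimal sketch that just reruns the jar arithmetic from (a), (b), and (c) with the numbers given above:

    ```python
    # Sequential Bayesian updates for the marble-jar analogy above.
    # "multi" = the multi-colored jar (the unfalsifiable/God hypothesis);
    # the blue-only jar (naturalism) always yields a blue marble.

    def update(prior_multi, p_blue_given_multi, p_blue_given_blue_only=1.0):
        """Posterior probability of the multi-colored jar after drawing a blue marble."""
        numerator = prior_multi * p_blue_given_multi
        denominator = numerator + (1 - prior_multi) * p_blue_given_blue_only
        return numerator / denominator

    p = 0.99               # start at a 99% prior for the multi-colored jar
    p = update(p, 1/2)     # (a) two colors                   -> ~0.980
    p = update(p, 1/8)     # (b) eight colors (planets)       -> ~0.861
    p = update(p, 1/100)   # (c) a hundred colors (constants) -> ~0.058
    print(round(p, 3))     # ~0.058, i.e. roughly 6%
    ```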

    I think given (a), (b), and (c), the problem with things that are unfalsifiable is that they will tend towards 0% as people keep ascribing certain outcomes to the unfalsifiable claim and doing perfect Bayesian updates. But just because something is unfalsifiable doesn’t mean that it’s false or doesn’t exist. It’s just worthless as far as epistemology goes. So if I had to assign a probability to god’s existence based on attempting to calculate it like I just did, it would be 10^-∞.

    But if there were conceptions of god that were falsifiable, or hell, even not all-powerful like the Greek gods or something, I think this would fare better. Maybe 0.1%.

    2. What is your probability that psychic powers exist?

    It seems to me that psychic powers violate a bunch of physical laws, like conservation of energy. Basically, all psychic phenomena look like perpetual motion machines to me. Where is the energy source that powers the psychic phenomena? What medium do they travel through? Psychics like to compare their phenomena to radio signals, but even with radio signals, there is energy being spent both in sending and in receiving a signal. So I put psychic powers at about the same probability as perpetual motion machines, at least until a power source is found.

    Of course, I think a lot of psychics, at this point, might say the “human soul” is the power source, but souls are quite literally a perpetual motion machine in all but name.

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050?

    Since there are actual experts on AGW, I’ll defer to expert consensus on this.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100?

    I’m guessing the CDC knows more about this than I do, so I’d defer to whatever experts at the CDC claim.

    5. What is your probability that humans land on Mars by 2050?

    Assuming it’s the US government that’s in the lead to do this? There isn’t any competitive pressure to reach Mars like there was between the USA and Russia during the Cold War, so I don’t have very high confidence in this. I’d say around 10%.

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115?

    I read a cool story about how IBM has rodent-brain chips, so if rodent-like intelligence in computing is that close, it seems like it’s only a matter of scale to get to above-human-level intelligence in AI. By 2115? Maybe 70%.

    • Troy says:

      Hi Nero: I may be too late commenting for anyone to read this, but I wanted to respond to your Bayesian arguments against theism. On most any conception of theism, God is (like us) an agent that acts for reasons. As such, he’s not just as likely to create life under one scenario as under another, since one scenario might not realize his purposes as well as another. Of course, what those purposes are is not obvious to us, so we should assign different probabilities to different purposes and update those. But when we do that, the probabilities of God creating life via evolution, on planet X or planet Y, and so on, are not independent either of each other or of God’s creating a finely-tuned universe like the one we find ourselves in. The fact that God, if he exists, has created a universe with simple stable natural laws finely-tuned to allow for life suggests that he has reasons to want life to exist to a great degree autonomously from him, e.g., without constant miraculous interventions. So, for example, it seems to me that P(Evolution | Theism&Fine-Tuning) >> P(Evolution | Theism), and likewise that P(Life on Earth rather than somewhere else | Theism&Fine-Tuning) >> P(Life on Earth rather than somewhere else | Theism). Given Theism&Fine-Tuning, the evolution of life in general, and on a planet suitable for it without outside intervention in particular, is fairly likely. The fact that God fine-tuned the constants (if Theism&Fine-Tuning) suggests that God is out to create life, and the fact that he created a stable and orderly universe also suggests that he wants life to evolve naturally with minimal outside intervention.

      By contrast, the probability of life (on Earth or anywhere else) given ~Theism&Fine-Tuning is not 1, as your marbles model assumes. That probability is hard to estimate because we don’t have enough grip on how likely it is for life to arise from non-life by chance. So it seems to me that Life is, relative to Fine-Tuning, some evidence for Theism, although perhaps not very strong evidence.

      But Fine-Tuning is the really good evidence here. The probability of a fine-tuned universe on atheism is definitely not 1 (as, again, your model assumes). The whole point of the fine-tuning argument is that this probability is astronomically low. For example, Roger Penrose estimates the probability of the initial state of the universe having an entropy low enough to allow for life by chance at less than 10^-(10^100). It’s true that, if we observe a universe at all, it will be a fine-tuned one like this one, but that doesn’t mean that the prior probability of our observing this universe isn’t astronomically low.

      By contrast, the probability of Fine-Tuning given theism is not 1/∞. I’m fine with saying that it’s low initially, because we don’t know whether God would prefer to create life via an orderly universe or via constant miracles, but the initial probability of the first hypothesis is not plausibly lower than, say, 10^-5. Since God acts for reasons, he’s presumably going to choose which universe to create on the basis of features relevant to his goals, not by picking randomly from an arbitrarily large sample space. So long as it’s not crazy improbable that he has reasons to favor an orderly universe over a disorderly one for his life-creating goals, then P(Fine-Tuning | Theism) will not be prohibitively low. And if P(Fine-Tuning | Theism) >> P(Fine-Tuning | ~Theism), then Fine-Tuning is strong evidence for Theism.
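
      To see the shape of that last step in odds form, here is a minimal sketch. The likelihood figures are just the illustrative bounds mentioned in this thread (10^-5 and the Penrose-style 10^-(10^100)), the prior odds are purely hypothetical, and since these numbers underflow any float, the arithmetic is done in log10 space:

      ```python
      # Odds-form Bayes: posterior odds = prior odds * P(FT | Theism) / P(FT | ~Theism).
      # Working in log10 because 10^-(10^100) is far below any representable float.

      log10_p_ft_given_theism = -5            # suggested lower bound above, 10^-5
      log10_p_ft_given_not_theism = -10**100  # the Penrose-style figure cited above

      # log10 of the Bayes factor P(FT | Theism) / P(FT | ~Theism)
      log10_bayes_factor = log10_p_ft_given_theism - log10_p_ft_given_not_theism

      log10_prior_odds = -20                  # hypothetical prior odds for theism, for illustration
      log10_posterior_odds = log10_prior_odds + log10_bayes_factor

      # The Bayes factor is 10^(10^100 - 5), so it swamps any modest prior,
      # which is the "strong evidence" claim above restated numerically.
      print(log10_bayes_factor)
      print(log10_posterior_odds)
      ```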

      • houseboatonstyx says:

        @ Troy
        God is out to create life, and the fact that he created a stable and orderly universe also suggests that he wants life to evolve naturally with minimal outside intervention.

        / crawls out from under the table to grab this crumb intended for my betters, and run with it in a probably unintended direction /

        If ‘life’ refers to the first primordial globule, never mind. But if ‘life’ refers to ‘the ten thousand things’, ie to us now and our ancestors and the monkeys we came from etc etc — then evolution (up from globule) was a very cruel method.

        • Troy says:

          Plausibly, God is more concerned with “higher” forms of life, whether that means just human beings (and perhaps intelligent aliens too), or conscious animals more generally.

          I think you’re right that the cruelty of evolution is one reason to think God wouldn’t use it as his means to create intelligent life. However, I also think that God’s creating an orderly fine-tuned universe would be evidence that he intends to use natural means to create intelligent life, and if he’s going to use natural means, evolution is pretty much the only game in town. (In addition, once we see that God uses evolution to create life, this makes the kind and variety of evil we find in the world much less surprising; so that evil doesn’t provide a great deal of further evidence against theism.)

          That said, how to balance off the evidence from fine-tuning against the inherent cruelty of the method when assigning a probability that God would create intelligent life this way is certainly open for debate. You might also argue that this would make God less likely to create a fine-tuned universe in the first place, as opposed to a constant miraculous one.

          I’m open to tempering these probabilities some in response to this concern, although I think there are plausible explanations of why God would create this way that raise the relevant probabilities somewhat. For example, an orderly, lawlike (as opposed to constantly miraculous) universe is arguably necessary in order for us to be able to learn about the world and manipulate our environment.

  131. Spike says:

    “Your overconfidence is your weakness.” — Luke Skywalker

  132. I have a weakness for answering questions. Internet quizzes get me every time.

    1. What is your probability that there is a god? <1%

    Reading the comments, I realise I am defining ‘god’ as ‘deity that is part of a human religion’. I don’t include sim-gods or arbitrary creatures that might be omniscient and omnipotent, though I would view the former with a similar magnitude of probability, i.e. I reckon my actual probability here is under 5% but over 1%. (I consider the latter a logical contradiction, so it’s difficult to put a probability on it, but assuming it means ‘looks approximately omniscient and omnipotent from a human perspective’, this feeds into the ‘under 5%’ estimate.)

    I also (consciously) don't include aliens that might have created the universe but no longer control it or interact with it, which I'd assign maybe 50% probability to (though I have no idea how you'd test the hypothesis (which is probably why it turns up as 50% in my head), so I also figure it's moot).

    2. What is your probability that psychic powers exist? 5%

    I didn’t know it before I read the comments section, but it seems I agree with the people that feel P(psychic powers) > P(god).

    I should add that I used to be a strong proponent of astral communication quite a few years ago. By ‘strong proponent’, though, I mean I held fast to the belief that “astral communication exists, though at a strength that doesn’t actually matter in real life”. That’s distinct from “astral communication exists, though at a strength that cannot be detected / measured”, i.e. I did think it was measurable. I figured that if astral communication is a thing, it’s fragments of information we get/find, and our brain fills in the blanks to make it appear far more coherent than it actually is.

    So that’s the point of reference I think of if someone asks me whether ‘psychic powers exist’. It’s not really flashy. It’d basically be a new but very badly developed sense that most humans have, one that evolution hasn’t had a chance in hell of fleshing out yet (but is presumably getting fleshed out because it helps with empathy, or planning, or whatever).

    I guess I mention that because I’d peg an even lower probability on anything more impressive, though it would probably never go below 0.1%? (Probabilities about probabilities? How meta.) That seems like an awfully conservative estimate to me, anyway.

    (To nip some possible confusion in the bud: No, I really don’t currently believe that. I can sympathise with my past self, for various reasons, which is why I go into detail about past me’s beliefs, but I don’t currently agree.)

    3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050? 25%

    If this strikes you as low, it’s because of the adjective ‘anthropogenic’. My estimate that global warming will increase temperatures by at least 1°C by 2050 is somewhere in the ballpark of 50%. That’s largely based on “I have no clue about climate science (other than that other people discuss it vehemently), so from my perspective it could turn out either way”, hence the coin-flip nature.

    4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? 5%

    A billion in five years seems extremely high to me, but, reading the comments section, I realise I’ve made an error: You don’t specify that it’s a natural pandemic, so it could also be engineered! That makes my initial estimate terribly wrong, IMO, and I’d assume approximately a 20% probability of a pandemic (natural or artificial) doing that damage.

    5. What is your probability that humans land on Mars by 2050? 50%

    Again, mostly a coin flip from my perspective, since I can’t judge how interest and funding might or might not come together for this. Assuming interest and funding, I don’t see much of anything getting in the way of it, short of space disasters, which I assume space people are good at finding ways around.

    6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? 75%

    A hundred years is a long time, so I feel reasonably confident assuming such a thing will exist by then. (I feel less confident that it would be malevolent, and even less confident that now is the time we need to worry about that beyond how much we’re already worrying about it.)

    As usual, there’s a lot of interesting food for thought in the comment section! I’ve enjoyed reading other people’s answers, especially those that went into detail why they made them. 🙂

  133. Dave says:

    I’m confused by your interpretation of predictions. If my prediction that something has a one in a million chance is closely analogous to me making one million predictions and making one mistake, would my prediction that it is nearly certain (a 999,999 in a million chance) be like me making one million predictions and being wrong 999,999 times? Obviously not; in fact, I am showing the same degree of certainty, just predicting the other outcome. So what is the formula?
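
    One way I can see the symmetry, as a rough sketch (the numbers here are made up): saying “X has a 999,999 in a million chance” is the same claim as “not-X has a one in a million chance”, so under either phrasing a calibrated predictor should see the disfavored outcome about once per million such statements.

    ```python
    import random

    random.seed(0)
    true_p = 1 - 1e-6      # suppose the predictor really is calibrated at this level
    n = 10_000_000         # number of "999,999 in a million" predictions
    # The prediction fails exactly when the one-in-a-million outcome occurs.
    misses = sum(random.random() > true_p for _ in range(n))
    print(misses, "misses out of", n)   # expect about n * 1e-6 = 10
    ```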

    If I am perfectly ignorant, I prefer not to bet at all. Maybe I am forced to bet; do I choose 1 in 2 if there are 2 outcomes? Being certain implies I know the mean and that the variance is tiny. But if the mean is in the middle and I am totally certain, I still take the even-money bet. My ignorance/uncertainty also has to do with the variance, but I don’t know what to say about it. If the variance is tiny, I should be able to act like the house and make money in the long term. (But if I am making only one bet, there is no long term.) If the variance is huge, I don’t want to bet. Does that mean I bet lower?

    Or do I try to bet both ways? Somehow Taleb’s idea of antifragility keeps intruding on my mind. Can I cover all bets? Are there some asymmetries I should exploit or avoid?