THE JOYFUL REDUCTION OF UNCERTAINTY

# Probabilities Without Models

[Epistemic status: Not original to me. Also, I might be getting it wrong.]

A lot of responses to my Friday post on overconfidence centered around this idea that we shouldn’t, we can’t, use probability at all in the absence of a well-defined model. The best we can do is say that we don’t know and have no way to find out. I don’t buy this:

“Mr. President, NASA has sent me to warn you that a saucer-shaped craft about twenty meters in diameter has just crossed the orbit of the moon. It’s expected to touch down in the western United States within twenty-four hours. What should we do?”

“How should I know? I have no model of possible outcomes.”

“Should we put the military on alert?”

“Maybe. Maybe not. Putting the military on alert might help. Or it might hurt. We have literally no way of knowing.”

“Maybe we should send a team of linguists and scientists to the presumptive landing site?”

“What part of ‘no model’ do you not understand? Alien first contact is such an inherently unpredictable enterprise that even speculating about whether linguists should be present is pretending to a certainty which we do not and cannot possess.”

“Mr. President, I’ve got our Israeli allies on the phone. They say they’re going to shoot a missile at the craft because ‘it freaks them out’. Should I tell them to hold off?”

“No. We have no way of predicting whether firing a missile is a good or bad idea. We just don’t know.”

In real life, the President would, despite the situation being totally novel and without any plausible statistical model, probably make some decision or another, like “yes, put the military on alert”. And this implies a probability judgment. The reason the President will put the military on alert, but not, say, put banana plantations on alert, is that in his opinion the aliens are more likely to attack than to ask for bananas.

Fine, say the doubters, but surely the sorts of probability judgments we make without models are only the most coarse-grained ones, along the lines of “some reasonable chance aliens will attack, no reasonable chance they will want bananas.” Where “reasonable chance” can mean anything from 1% to 99%, and “no reasonable chance” means something less than that.

But consider another situation: imagine you are a director of the National Science Foundation (or a venture capitalist, or an effective altruist) evaluating two proposals that both want the same grant. Proposal A is by a group with a long history of moderate competence who think they can improve the efficiency of solar panels by a few percent; their plan is a straightforward application of existing technology and almost guaranteed to work and create a billion dollars in value. Proposal B is by a group of starry-eyed idealists who seem very smart but have no proven track record; they say they have an idea for a revolutionary new kind of super-efficient desalinization technology; if it works it will completely solve the world’s water crisis and produce a trillion dollars in value. Your organization is risk-neutral to a totally implausible degree. What do you do?

Well, it seems to me that you choose Proposal B if you think it has at least a 1/1000 chance of working out; otherwise, you choose Proposal A. But this requires at least attempting to estimate probabilities in the neighborhood of 1/1000 without a model. Crucially, there’s no way to avoid this. If you shrug and take Proposal A because you don’t feel like you can assess proposal B adequately, that’s making a choice. If you shrug and take Proposal B because what the hell, that’s also making a choice. If you are so angry at being placed in this situation that you refuse to choose either A or B and so pass up both a billion and a trillion dollars, that’s a choice too. Just a stupid one.
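The break-even arithmetic here is worth making explicit. A minimal sketch, using only the hypothetical payoffs from the example (and treating Proposal A as certain for simplicity):

```python
# Hypothetical payoffs from the example: Proposal A is (treated as)
# a sure $1 billion; Proposal B pays $1 trillion if it works.
value_a = 1e9
value_b = 1e12

# A risk-neutral funder prefers B whenever p_b * value_b > value_a,
# so the break-even probability for Proposal B is:
break_even = value_a / value_b
print(break_even)  # 0.001, i.e. 1/1000
```

Any estimate of B's chances, however rough, lands on one side or the other of that threshold, and the funding decision follows from it.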

Nor can you cry “Pascal’s Mugging!” in order to escape the situation. I think this defense is overused and underspecified, but at the very least, it doesn’t seem like it can apply in places where the improbable option is likely to come up over your own lifespan. So: imagine that your organization actually reviews about a hundred of these proposals a year. In fact, it’s competing with a bunch of other organizations that also review a hundred or so such proposals a year, and whoever’s projects make the most money gains lots of status and new funding. Now it’s totally plausible that, over the course of ten years, it might be a better strategy to invest in things that have a one in a thousand chance of working out. Indeed, maybe you can see the organizations that do this outperforming the organizations that don’t. The question really does come down to your judgment: are Project B’s odds of success greater or less than 1/1000?
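The portfolio version of the argument can be sketched with made-up numbers (the 1/500 and 1/2000 success rates below are purely illustrative, not from the post):

```python
# Illustration: an organization funds 100 long-shot-or-safe decisions
# over a decade. Compare the expected value of "always take the sure
# $1B project" against "always take the $1T long shot", under a few
# assumed true success rates for the long shots.
n = 100
safe_ev = n * 1e9  # always take the near-certain $1B project

for p in (1 / 500, 1 / 1000, 1 / 2000):  # assumed true odds
    risky_ev = n * p * 1e12
    better = "risky" if risky_ev > safe_ev else "safe"
    print(f"p={p:.4f}: risky EV ${risky_ev / 1e9:.0f}B -> {better}")
```

The strategies flip exactly at 1/1000, which is why the director's judgment about whether Project B clears that line is doing real work.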

Nor is this a crazy hypothetical situation. A bunch of the questions we have to deal with come down to these kinds of decisions made without models. Like – should I invest for retirement, even though the world might be destroyed by the time I retire? Should I support the Libertarian candidate for president, even though there’s never been a libertarian-run society before and I can’t know how it will turn out? Should I start learning Chinese because China will rule the world over the next century? These questions are no easier to model than ones about cryonics or AI, but they’re questions we all face.

The last thing the doubters might say is “Fine, we have to face questions that can be treated as questions of probability. But we should avoid treating them as questions of probability anyway. Instead of asking ourselves ‘is the probability that the desalinization project will work greater or less than 1/1000’, we should ask ‘do I feel good about investing this money in the desalinization plant?’ and trust our gut feelings.”

There is some truth to this. My medical school thesis was on the probabilistic judgments of doctors, and they’re pretty bad. Doctors are just extraordinarily overconfident in their own diagnoses; a study by Bushyhead, who despite his name is not a squirrel, found that when doctors were 80% certain that patients had pneumonia, only 20% would turn out to have the disease. On the other hand, the doctors still did the right thing in almost every case, operating off of algorithms and heuristics that never mentioned probability. The conclusion was that as long as you don’t force doctors to think about what they’re doing in mathematical terms, everything goes fine – something I’ve brought up before in the context of the Bayes mammogram problem. Maybe this generalizes. Maybe people are terrible at coming up with probabilities for things like investing in desalinization plants, but will generally make the right choice.

But refusing to frame choices in terms of probabilities also takes away a lot of your options. If you use probabilities, you can check your accuracy – the foundation director might notice that of a thousand projects she had estimated as having 1/1000 probabilities, actually about 20 succeeded, meaning that she’s overconfident. You can do other things. You can compare people’s success rates. You can do arithmetic on them (“if both these projects have 1/1000 probability, what is the chance they both succeed simultaneously?”), and you can open prediction markets about them.
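That calibration check is a few lines of binomial arithmetic. A sketch, using the hypothetical thousand-projects, twenty-successes numbers from the example:

```python
from math import comb

# The hypothetical: 1000 projects each estimated at 1/1000 odds,
# but about 20 of them succeeded. If the estimates were calibrated,
# how likely is seeing 20 or more successes?
n, p, observed = 1000, 1 / 1000, 20

# P(X >= 20) for X ~ Binomial(1000, 0.001), summed term by term
p_at_least = sum(
    comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(observed, n + 1)
)
print(p_at_least)  # astronomically small: the estimates were miscalibrated
```

Under her stated odds she should expect about one success, so twenty is decisive evidence against her numbers – exactly the kind of feedback a gut feeling never generates.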

Most important, you can notice and challenge overconfidence when it happens. I said last post that when people say there’s only a one in a million chance of something like AI risk, they are being stupendously overconfident. If people just very quietly act as if there’s a one in a million chance of such risk, without ever saying it, then no one will ever be able to call them on it.

I don’t want to say I’m completely attached to using probability here in exactly the normal way. But all of the alternatives I’ve heard fall apart when you’ve got to make an actual real-world choice, like sending the military out to deal with the aliens or not.

[EDIT: Why regressing to meta-probabilities just gives you more reasons to worry about overconfidence]

[EDIT-2: “I don’t know”]

[EDIT-3: A lot of debate over what does or doesn’t count as a “model” in this case. Some people seem to be using a weak definition like “any knowledge whatsoever about the process involved”. Others seem to want a strong definition like “enough understanding to place this event within a context of similar past events such that a numerical probability can be easily extracted by math alone, like the model where each flip of a two-sided coin has a 50% chance of landing heads”. Without wanting to get into this, suffice it to say that any definition in which the questions above have “models” is one where AI risk also has a model.]


### 449 Responses to Probabilities Without Models

1. TheNybbler says:

Trying to make a quantitative probability judgement between alternatives when the uncertainty is enormous is probably futile. So yeah, you go with your gut… call it an abdominally-placed heuristic model if you prefer.

• Andy says:

Probably better than my rectally-placed intuition model… perhaps I should upgrade.

• AndR says:

A coworker of mine used to say: “Better to pull the numbers out of your ass and base your decision on them, than pull your decision out of your ass”. I think, as long as you have some basic probabilistic reasoning competency, that’s a valid route.

• Deiseach says:

Okay, Scott’s link in that “I don’t know” chat is making me see red.

From that, I am willing to assign a probability of fewer than 20 apples on the tree that Yudkowsky routinely talks out of his arse.

“Oh no, you can’t say ‘I don’t know’, that’s simply displaying your ignorance because you’re too much of a coward to examine your assumptions! On the other hand, if you’re talking to morons* then sure, you can use it, because they’re too dumb to figure out if you say ‘I assign 95% confidence to my prediction of the probability of there being fewer than 20 apples on the tree neither of us have seen’, that actually what you mean is ‘This is a guess’.”

I have no idea if the man really is as smart as he thinks he is, but he’d do well to remember that when God made him, He did not use up the entire world supply of Smarts on the end product and occasionally some few poor flickers of intellect may be discerned in the wasteland that is the Rest Of Humanity. I am willing to continue assuming (though through gritted teeth) that the man is not completely a thundering pain in the arse because intelligent and reasonable and mannerly people like Scott maintain an acquaintanceship with him despite exposure to him in the flesh, but he does himself no favours in how he presents himself.

*If this is not exactly the word he uses, that is the tone comes across when speaking of people not trained in his version of rationality or with mathematical skills up to the level he presumes satisfactory. Ordinary people, in other words.

• Cauê says:

I found that chat great. I thank Eliezer for the insights and Scott for the link.

• Izaak Weiss says:

To be fair, the other end of the conversation did ask him to imagine someone who had no idea what probability was, and had never heard of it. That person would be woefully unequipped to hear “there’s a 50% chance that this coin will come up heads”, much less understand a probability distribution about an unknown tree.

• Deiseach says:

someone who had no idea what probability was, and had never heard of it. That person would be woefully unequipped to hear “there’s a 50% chance that this coin will come up heads”

First Year maths class, when I was around thirteen, is when I heard about coin flipping and the chances of heads and tails. You don’t do Inter Cert maths in America?

Look, guessing the number of apples on a tree is one thing; I can see how you’d arrive at a likely estimate there.

Even guessing someone’s name, if you say “Well, based on his gender, apparent age, ethnicity, and the ranked popularity of male names in the English-speaking U.S.A. for that estimated birth year, I would say in descending order of likelihood Michael, William, Tobias, Xanzzz The Indescribable” and so forth is a tolerable answer to give if you really do think “I don’t know” doesn’t cover your massive intellect’s capabilities.

But what the hell is so impossible about saying “I don’t know” to a question like “I just saw a woman walk past my window – do you know her name?”

The second half of that chatlog sounds like an elaborate way to go “I have This Many More Brains than you clods!” and seriously, when people talk about cult-like behaviour, giving off that aura of “Bhagwan Shree Guru Head-Up-Own-Arse” isn’t making him look any less like “I possess ineffable wisdom and you must trust me blindly”.

Even the pope only possesses infallibility when exercising the teaching magisterium ex cathedra. Yudkowsky seems to regard it as a little something to start the day with when brushing his teeth.

• Nornagest says:

You don’t do Inter Cert maths in America?

I have no idea what Inter Cert math is. I don’t remember when I was introduced to basic probability, either, but my best guess is the algebra-level math classes offered in eighth grade (age 12 or 13); that would have been a very superficial take on it, though.

I do remember a somewhat more advanced treatment in my Algebra II class, which despite the name consisted less of advanced algebra (though there was some linear algebra involved) and more of a grab-bag of mathematical concepts that didn’t fit cleanly under the headings of calculus or trigonometry. That would typically have been offered to sophomore- or junior-level students (age 15-17; Year 10 or 11 in a British or Irish context).

• ” You don’t do Inter Cert maths in America?”

I don’t know how it is in Ireland, but my impression is that a large proportion of students in the U.S. learn enough math in a high school class to pass the final exam and then forget most of it.

My wife, as a graduate student in geology, taught labs that were part of a geology course taken mostly by non-scientist types to satisfy a science requirement. A large minority of the students, given the height, width, and depth of a rectangular ore body, had no idea how to calculate the volume. That was at VPI, probably the second best state university in Virginia, so those students would have been from the top ten or twenty percent of high school graduates.

• kappa says:

Yeah, that chatlog is bewildering.

Like, it almost seems like there must be some kind of subtextual undercurrent going on that has nothing to do with the surface-level meanings of the statements being made*, because if I don’t assume the conversation is about something other than what it looks like, it looks like a conversation about a completely crazy way to deal with being asked to guess an alien’s name.

If someone is forcing you on pain of death to guess what an alien’s name is, fine, make some random noises and hope for the best. (Unless you think “no name at all” is a valid guess, in which case say that, because it’s likelier that the alien belongs to a culture that doesn’t use names than that the alien’s name is any specific particular sequence of noises you could make. I haven’t a damn clue what either of those probabilities is, but I am extremely confident in my estimate of their relative magnitude. “I can’t pronounce it” is also an option and may even be a better one, but I wouldn’t risk it because I don’t have a good way of estimating the tradeoff between “covers almost all the possible alien names” and “may cause the person currently threatening you with death to decide you’re being a smartass and follow through”.)

In almost any other situation I can conceive of**, the only reasonable answer to “What’s this alien’s name?” is “I don’t know.” Because not only do you not know the alien’s name, you know damn well that you don’t know the alien’s name and that any guess you could make about it is overwhelmingly likely to be wrong.

You know that the alien either speaks some language or doesn’t, and that you have no information about which. You know that if the alien speaks some language, the language may or may not have a sound-based mode of expression, and if it does have a sound-based mode of expression there is no good reason at all for it to have high overlap with human-pronounceable noises in general, let alone with the specific set of noises pronounceable by you, a particular human. You know that while there certainly must be some sort of rules (however fuzzy and riddled with exceptions they may be) governing name generation in this alien’s language, you don’t know what they are and don’t even have enough data to get a good idea of what the space of possible rulesets looks like, because you have never encountered an alien language before and know nothing about the ways in which their linguistic structures might differ from ours.

In short, you know that just about the only information you aren’t missing is the information it takes to rule out any particular name you could conceive of as a viable guess! “Vkktor Blackdawn is as (im)probable as anything else”? No it isn’t! You can tell it isn’t because it is very obviously something a human came up with!

What is this alien’s name? I sure as hell don’t know.

…I had no idea I had such passionate feelings about xenolinguistics.

And if there is some hidden meaning to that conversation that explains the Vkktor Blackdawn approach to alien-name-guessing, I’d love to hear it, because I am stumped.

*Which would make that post an extremely poor choice to be linked as an explanation of anything.

**Specifically limited to situations where you’ve never seen this or any other alien before, of course.

• Deiseach says:

“Vkktor Blackdawn is as (im)probable as anything else”? No it isn’t! You can tell it isn’t because it is very obviously something a human came up with!

Agreed. “I don’t know” is a more honest and humble estimation than “I guess the name of the man walking past your window is Michael because once upon a time I glanced at the Most Popular Male Names By Decade lists and my massive brain effortlessly retained that information”.

Well, sorry, mate, Rinn Ua gCuanach is a Gaeltacht, and that man’s name is Traolach, and if you weren’t so busy trying to come off as Mr Know-It-All and considered matters six inches beyond your own nose, you might have contemplated the possibility of your own ignorance.

Honestly, I don’t want to dislike Yudkowsky, and okay this was private chat between members of a group, not meant for public consumption and people speak differently amongst themselves when they’re relaxed and all on the same page, but not alone does he seem not to be in the slightest danger of hiding his light under a bushel, he goes looking for bushels to ostentatiously overturn to demonstrate his light is not hidden under there, nosireebob!

• Cauê says:

So, did you two try to understand what he meant? It’s not very far from what Scott said in this post.

• kappa says:

Cauê: I read the chatlog post twice trying to figure out what the hidden meaning was. It is very well hidden, apparently.

• Jiro says:

The hidden meaning is that just like a guess about the alien’s name has some probability of success that is not zero, the possibility of rogue AI has some probability that is not zero (and therefore, that donating to MIRI to protect against rogue AI does some good).

• Cauê says:

See if this helps: http://lesswrong.com/lw/om/qualitatively_confused/

When you see every belief as quantitative, saying “I don’t know” looks like forcing an arbitrary qualitative distinction, and refusing to provide (in EY’s post) or act accordingly (in this post) to the information and beliefs you do have.

• Anonymous says:

LessWrong is the anti-wikipedia. You don’t want to click a wikipedia link because you know you risk spending the next few hours clicking interesting links to tangential subjects. You don’t want to click LessWrong links because you know the author isn’t able to make his point without pulling in by reference 6 other LessWrong posts, all of which make you want to punch the author in the face.

I think this sentence which I came across is the perfect LessWrong sentence:

“Like Wittgenstein and Yudkowsky, Quine didn’t try to straightforwardly solve traditional Big Questions as much as he either dissolved those questions or reframed them such that they could be solved. ”

That’s right, W.V. Quine was *just* like Wittgenstein and Yudkowsky. In other news, Feynman was just like Einstein and Yudkowsky. Also, von Neumann was just like Gauss and Yudkowsky. Arthur Miller? Clearly like Shakespeare and Yudkowsky.

• Urstoff says:

Saying “I don’t know” seems like shorthand for “I have vastly insufficient data and thus my confidence in any judgment (probabilistic or otherwise) is so low as to render the judgment practically worthless”. Saying “I don’t know” is appropriate in plenty of contexts: what’s the capital of Kyrgyzstan? Who won the presidential primary? What’s the square root of 877? Are there Black Widow spiders in Kentucky? Do you like Lutefisk?

• LCL says:

I don’t think I get it either. There seem to be (at least) two different subjects under discussion.

One is whether you’ll ever face a question where no probability estimates are possible. Where you can’t say any possibility is more likely than any other possibility, or even limit the set of possibilities in any way. By implication this is a (rather unorthodox) definition of “not knowing.”

The other is whether it’s OK to say “I don’t know” when answering questions.

The first looks clearly false, except perhaps for some really contrived examples I haven’t thought of. The second looks clearly true, as a useful conversational shorthand for “I am the wrong person to ask” or “it needs looking up and I’d rather not right now.”

It’s unclear to me how the two subjects relate. I think I must have missed the point as well.

• kappa says:

Jiro: …Is it really?

Cauê: In that case, that chatlog probably wasn’t a good way to make the point, because apart from coming off as obnoxious to people like me and Deiseach, it also seems (to me at least) to provide a really clear-cut example of a situation where “I don’t know” is the obvious answer to a question.

“I don’t know, and I have no reasonable way of arriving at a guess worth making” seems like a perfectly legitimate way to summarize one’s position in the alien-name-guessing scenario whether or not the speaker subscribes to the quantitative-reasoning paradigm.

Urstoff: Yes, I agree.

• Nornagest says:

That’s right, W.V. Quine was *just* like Wittgenstein and Yudkowsky.

Like it or not — and I often don’t — Less Wrong is largely a discussion forum about Eliezer’s ideas. It’s reasonable, when discussing someone’s ideas, to compare them to others’ even if you don’t think they’re all of the same intellectual caliber.

Imagine you’re part of a book club that includes Alice and Bob, who are both aspiring writers. Last week you read Alice’s first novel, which was decent but not the kind of thing you immediately enshrine in the English canon. This week you read Dostoevsky. During the course of the discussion you mention that Dostoevsky raises some of the same questions about ignorance and social control that Alice did last week… oh yeah, and that Huxley guy too. Are you thereby saying that Alice is on par with those two?

I’d say not. It would of course be unspeakably parochial to say that Eliezer was just like e.g. Einstein and Feynman, without qualification; but that’s not what the bit you quoted says.

• houseboatonstyx says:

“Like Wittgenstein and Yudkowsky, Quine didn’t try to straightforwardly solve traditional Big Questions as much as he either dissolved those questions or reframed them such that they could be solved. ”

It sure doesn’t. Try rearranging the grammar: “Quine didn’t try to straightforwardly solve traditional Big Questions like Wittgenstein and Yudkowsky [did], as much as he either dissolved those questions or reframed them such that they could be solved. ”

• Deiseach says:

Okay, having calmed down a little, I was unkind and indeed rude about Mr Yudkowsky.

But Cauê, I still maintain that “I don’t know” is a perfectly reasonable answer to questions where you don’t know. Now, you can expand upon it to “I don’t know, but I’ll go look it up” or “I don’t know, but I can tell you someone who would know” or “I just don’t know because I don’t have access to any way of finding out”.

Where you have some idea of the matter, e.g. estimating the number of apples on a tree or “what is the likelihood that a white 30-year-old American male has one of the most popular male first names for white Americans born in 1985”, then putting a probability estimate on it is not unreasonable.

But bluffing (and that’s what it is, bluffing) about “I have X% confidence that the probability is Y, which means ‘I don’t know’ but if you’re too thick to realise that’s what I mean and you assume I am giving you a definite answer, that’s your problem” is dishonest.

Now, I can understand it as a quirk: I share the same one, and I recognise (even though I can’t bloody stop myself doing it) that it’s a psychological defence mechanism, a way of protecting my amour-propre and a hangover from being a Smart Kid, where “I don’t know” was not an acceptable answer to a question from teachers, parents or other adults. So I still have the knee-jerk reflex when asked a question, even where I don’t know the answer, to do some plausible bull-shitting on the fly.

Yudkowsky, being genuinely smart and knowing what he’s talking about when it comes to probability (I have to take that on trust because I know Sweet Fanny Adams about maths), can sound even more plausible when he’s bullshitting on the fly, but I’m sure I sound just as much “head up your own arse” when I do it, and I’d genuinely say he talks about ‘ordinary’ people (people who don’t know about rationality, people who haven’t studied Bayes or probability theory) in terms of something you’d scrape off the sole of your shoe.

In sum: sometimes “I don’t know” is a reasonable answer and is more honest than a bluff probability estimate that you intend to mean “I don’t know and it’s up to you to realise this”.

• Deiseach says:

Cauê, look at this part of the exchange: X asks “If it comes to guessing the name of a random guy in the street” and Yudkowsky goes off on this loopiness:

[09:27] Eliezer: I suppose I could construct a second-order Markov transition diagram for the letters in names expressed in English, weighted by their frequency
[09:27] Eliezer: but that would be a lot of work
[09:28] Eliezer: so I could say “I don’t know” as shorthand for the fact that, although I possess a lot of knowledge about possible and probable names, I don’t know anything *more* than you do
[09:28] X: ok, so you say ruling out what you see as likely not correct is ok?
[09:28] Eliezer: what I’m saying is that I possess a large amount of knowledge about possible names
[09:28] Eliezer: all of which influences what I would bet on
[09:28] Eliezer: if I had to take a real-world action, like, guessing someone’s name with a gun to my head
[09:29] Eliezer: if I had to choose it would suddenly become very relevant that I knew Michael was one of the most statistically common names, but couldn’t remember for which years it was the most common, and that I knew Michael was more likely to be a male name than a female name
[09:29] Eliezer: if an alien had a gun to its head, telling it “I don’t know” at this point would not be helpful
[09:29] Eliezer: because there’s a whole lot I know that it doesn’t

What the hell? I’m quite sure Yudkowsky knows stuff about Earth that an alien does not know, including what kinds of names Earth male humans are likely to have. Equally there is a whole lot of things the alien knows that Yudkowsky doesn’t know. That does not change the fact that when it comes to guessing the name of a random person, “I don’t know” is a reasonable answer in the circumstances. You can expand upon it to go “I can guess at a name, given this and this and this as factors”, but there is no obligation on anyone here to claim infallibility, and as an example of “using ignorance priors” (or whatever this discursion is getting off to), it is god-awful.

As well, for someone doing a lot of finger-wagging about “I don’t know” is only lazy politic face-saving, he’s pretty damn confident in jumping to the conclusion that the name is one expressed in English when he doesn’t know if the random man walking by is Irish, Chinese, Ukrainian or what.

Also, the “second-order blah de blah” comes across as (I’m not saying it is intended as, just what it sounds like) self-aggrandisement: “I am this smart so I use these fancy concepts, admire the bigness of my brain”.

But then again, as he says about us ordinary people:

Eliezer: what you *say* is another issue, especially when speaking to nonrationalists, and then it is well to bear in mind that words don’t have fixed meanings; the meaning of the sounds that issue from your lips is whatever occurs in the mind of the listener. If they’re going to misinterpret something then you shouldn’t say it to them no matter what the words mean inside your own head

[09:06] Eliezer: often you are just screwed unless you want to go back and teach them rationality from scratch, and in a case like that, all you can do is say whatever creates the least inaccurate image

So you know, if he says “Zebras are pink and yellow”, that’s just sound waves, it’s not meant to mean anything objectively, you say “pink is not black”, how do you know what he means by “pink” in the inside of his head, or that what you call “pink” is not what he calls “black” and vice versa, you lumpen prole plebeian clod?

Unless you’re a trained rationalist, then you will instinctively know that when he says “zebras”, what he really means is “tea cosies”.*

*Before anyone protests, remember: words don’t have fixed meanings! It all depends on what the meaning of “is” is. What he means by “apples” may not at all be the same thing as what I mean by “apples”, given that I am a middle-aged rural Irish female and so I have completely different life and environmental experiences. The only meaning is what the words mean inside your own head.

Flob-a-dob mangel wurzel hoosh hoosh sarcoptic mange mites tá mé mahogany gas pipes, as I’m sure you will all agree.

• Nita says:

“the meaning of the sounds that issue from your lips is whatever occurs in the mind of the listener”

Well, that is true, for the purpose of choosing what to say. If you know that your interlocutor is a totally colourblind Nigerian fan of the singer Pink, you should probably say “the colour pink and the colour black are different colours”, rather than “pink is not black”.

• Deiseach says:

This probably illustrates the difference between the Maths people and the English people 🙂

The Maths types are head-scratching over “What’s the problem? It’s a neat way of introducing probability!” while the English types are going “But the word selection! The tone! The meeeeeeeannnninggggg!!”

And Pink is not Black, she’s a (currently?) blonde white girl 🙂

• Nita says:

Hmm, I think the idea of using a Markov transition diagram for this purpose is rather silly, so to me it came across as “I really love Markov chains and will use the flimsiest excuse to mention them”. The general vibe is kind of disarmingly awkward.

And yeah, if someone is really going to kill an alien person unless they correctly guessed some local guy’s name, you should give them the best guess you have (e.g., “James”) — but perhaps distracting the would-be murderers for a few minutes would work even better.

I know Pink’s not black, but compare these two responses:

(A) Uh, what’s your point? Are you saying that listening to white singers is unpatriotic or something?
(B) Yeah, I know. Pink looks exactly like yellow or sky blue.

Clearly, in at least one of those cases your intended meaning failed to cross the space between your brain and theirs.

• Jiro says:

Jiro: …Is it really?

Most of the weird arguments by Eliezer on LW are bricks in the wall of support for MIRI and AI risk.

You just have to look at how Scott is using it here–this post is a followup to On Overconfidence, which is explicitly about AI risk.

• Deiseach says:

When you see every belief as quantitative, saying “I don’t know” looks like forcing an arbitrary qualitative distinction, and refusing to provide (in EY’s post), or act on (in this post), the information and beliefs you do have.

Okay, take Urstoff’s example. They ask me, Deiseach, “Do you like Lutefisk?”

I reply (being an untrained non-rationalist) “I don’t know, I don’t even know what lutefisk is, so I can’t have an opinion”.

So how can I avoid “forcing an arbitrary qualitative distinction, and refusing to provide information and beliefs you do have”? I need more information, I can’t assign a confidence level to something I have no idea about.

Great, give me five minutes to look it up.

Okay, dried salted white fish (cod, haddock, pollock or ling). So far, so fine – I’m accustomed to these fish and I like salted fish (kippers and smoked haddock are traditional over here, after all).

Hmmm – cured in lye? That does not sound so good.

So far: estimation of do I like lutefisk? No idea, the lye doesn’t sound appealing, but what does it taste like?

Apparently the taste is bland (so you need seasoning or sauces to accompany it) but the texture is gelatinous.

So – do I like lutefisk? Again, I don’t know. I would have to try it. I could estimate I might like it, or I might not like it, depending on how gelatinous/slimy the texture is and how strong the taste is. I could estimate I’d be 85% willing to try it, 65% I might like it (pro: it is cooked not raw; con: it’s apparently jellified in texture so likely to have a slimy mouth-feel), 35% I might not (imagining what a strong alkali/soapy taste might be like, the slimy texture), but I’d be pulling those figures out of the air.

“I don’t know” is not an attempt to dodge the question or a refusal to quantify estimates; it’s as honest a response as I can give, and quibbling over “true but not honest”, “words have no fixed meaning except what they have in the brain of the hearer” and “if you’re not a trained rationalist and don’t know what ignorance priors are, you’ll just have to take my word on blind faith” doesn’t change that.

• Kiya says:

Another important meaning of “I don’t know” is “I have no more information than you, my conversation partner, on this question,” sometimes phrased as “your guess is as good as mine.”

If I ask a question of fact and someone answers in a confident tone of voice, I assume they think they are a lot more certain of the answer than I would be if I guessed, so certain that I needn’t bother investigating further.
“What’s the capital of Kyrgyzstan?”
“Bishkek,” replies someone looking at their phone.

If I ask a question of fact and someone gives a definite answer with some uncertainty noises, I assume they think they’re making a better guess than I could, but that they’re providing less reliable information than I could likely get elsewhere.
“What’s that guy’s name?”
“Michael, I think?” replies someone who met him at a party and isn’t sure she kept all the mappings of names to faces straight.
“What’s the square root of 877?”
“Um, 30-ish. A little less,” replies someone who remembers offhand that 30 squared is 900 — something the asker also knows, but might have taken longer to dredge up and apply.

This isn’t just social nicety, although I see value in not totally scrapping English common usage in favor of jargon you use with your friends. It helps us do Aumann agreement right. If I have some limited information about the number of apples on the apple tree outside (I heard it was a bad year for apples this year), and I ask how many apples are on it, I want to hear the difference between “10-1000” (I went outside and glanced at it earlier) and “10-1000” (I think apple trees in general have around that many apples). I want to update on new information, not on reiteration of my original priors.

• Cauê says:

Using Urstoff’s examples:

What’s the capital of Kyrgyzstan?
Well, I actually know this with high confidence, but if I didn’t: imagine you’re playing a quiz game, and there is a list of, say, 50 cities to choose from. You can say “I don’t know” and it would be true in the usual qualitative sense, but you’ll help your team a lot more if you say “I’m sure it’s not any of these 37, I’m like 65% confident it’s not any of these other five, and I give equal probability to the rest”. There’s a lot you do know about it; you just didn’t cross the arbitrary threshold.
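The quiz-game answer above can be written out as an explicit distribution. A minimal sketch, where the 50-city list and the 37/5 split are the hypothetical numbers from the comment:

```python
# Hypothetical quiz scenario: 50 candidate cities, 37 confidently ruled
# out, 5 judged 65%-likely-wrong, and the rest treated as equally likely.
ruled_out = 37
unlikely = 5
remaining = 50 - ruled_out - unlikely        # 8 live candidates

p_unlikely = 0.35 / unlikely                 # the five share the 35% residue
p_remaining = 0.65 / remaining               # the rest share the other 65%

dist = [0.0] * ruled_out + [p_unlikely] * unlikely + [p_remaining] * remaining

# A proper distribution, and far more useful to a quiz team than "I don't know".
assert abs(sum(dist) - 1.0) < 1e-9
print(f"per-city: unlikely {p_unlikely:.1%}, remaining {p_remaining:.2%}")
```

The numbers are illustrative only; the point is that the verbal hedges already pin down a full distribution.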

Who won the presidential primary?
With near certainty an American citizen of legal age, of the appropriate party, healthy, not poor; more likely rich than not, more likely male than female, more likely 50 than 30 years old, more likely white than black, etc., etc.

What’s the square root of 877?
I can definitely narrow it down to a pretty small space of probable answers, out of all possible answers.

Are there Black Widow spiders in Kentucky?
Well, there’s enough ways for the answer to be “yes” that I’ll say it’s >95% likely.

Do you like Lutefisk?
Never tried it, but probably not. I think most people don’t, and it doesn’t sound appealing at all.

Sure, I can say “I don’t know” to these, by which one would mean something like “I don’t know more than you”, or than the average person, or other focal baseline. And sure, ordinary life probably works more smoothly if we use it like this.

But the binary “I know / I don’t know” doesn’t capture how much I do know about the things I might answer “I don’t know”. And when the time comes to make decisions (“Should I order lutefisk?”, “Should I tell my teammates I think it’s more likely to be Astana or Bishkek than Tbilisi and Sanaa, although I’m not confident of any of this?”, “Should I put the army on standby?”, “Should we worry about AI risk?”), I shouldn’t ignore the information I do have just because it doesn’t cross the arbitrary threshold that would “socially justify” an “I know”.

• FullMeta_Rationalist says:

IIRC, the PageRank algorithm is based on a first-order Markov model. The PageRank algorithm is the pride and joy of Google. It is arguably the largest contributing factor to their success because, during the search-engine wars, it identified the relative importance of each website more accurately than its contemporary search engines did. My point is, Markov models certainly have their use cases.

Is using a Markov Model more precise than a random guess? A little bit. Is the computation large and tedious to carry out by hand? Extremely. The socially-relevant difference between EY writing out an entire Markov Matrix and EY simply responding “idk” is the cost of expression. In situations involving website names and personal names, a Markov Model will give slightly more information than “idk”. But the distribution will be diverse and uniform enough that “idk” gives roughly the same amount. So the marginal accuracy of writing out the Markov Matrix, while meaningfully extant in some cases (Google’s), is virtually infinitesimal in most cases.
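The claim that a name-level Markov model carries barely more information than “idk” can be checked with a toy first-order character model. A sketch only: the name list below is invented, and a real model would need a real corpus.

```python
import math
from collections import Counter

# Made-up corpus of names (illustrative, not real frequency data).
names = ["michael", "james", "john", "mary", "maria", "james", "michael"]

# Count first-order character transitions, with "^" as a start symbol.
transitions = Counter()
for name in names:
    prev = "^"
    for ch in name:
        transitions[(prev, ch)] += 1
        prev = ch

def next_char_dist(prev):
    """Conditional distribution over the next character, given the previous one."""
    counts = {c: n for (p, c), n in transitions.items() if p == prev}
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

first = next_char_dist("^")                       # distribution over first letters
entropy = -sum(p * math.log2(p) for p in first.values())
uniform = math.log2(len(first))                   # entropy of a uniform guess

print(f"model entropy {entropy:.2f} bits vs uniform {uniform:.2f} bits")
```

On this tiny corpus the model’s entropy is only a hair below the uniform baseline, which is the “roughly the same amount as idk” point in numerical form.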

One criticism of “Bayesianism” (and EY’s related ideas) is that while Bayes’ Theorem may be ideal in the sense that a Carnot engine is ideal, using Bayes’ Theorem is also about as impractical as trying to build a Carnot engine. I think this is just EY’s standard mode of thought: to think in terms of Platonic forms rather than practicalities (costs be damned).

There may very well be a smarty-pants signalling component to EY’s remarks. But I’m not sure “honest vs bluff” is the correct way to frame them.

• Deiseach says:

I shouldn’t ignore the information I do have when it doesn’t cross the arbitrary threshold that would “socially justify” an “I know”

But that’s not how Yudkowsky is using the example in the chat. X is asking is it ever okay to answer “I don’t know”, Yudkowsky says no, X asks but when you say “10-1000 apples” without any qualifiers you are sounding more confident than you intend to be, you may lead the listener astray as to how accurate what you are saying is, Yudkowsky more or less says “That’s their problem if they’re not trained rationalists and don’t instinctively know I’m speaking using confidence ranges from probability theory and that I’m not assigning high levels of confidence here”.

So instead of saying “I don’t know but I can make an informed (or uninformed) guess” Yudkowsky would prefer to make an authoritative sounding statement (not even qualifying it with “Maybe 10-1000 apples” to indicate that he’s guessing) and it’s the other chump’s fault for being a chump because telepathy doesn’t exist and all the chump knows is what meaning he assigns to words inside his chump head, not what meaning smart people assign to words.

• Urstoff says:

I shouldn’t ignore the information I do have when it doesn’t cross the arbitrary threshold that would “socially justify” an “I know”.

Is anyone claiming otherwise? To me, the point is that saying “I don’t know” and leaving it at that is actually the (conversationally or epistemically) prudent thing to do in many situations. To do otherwise is simply pedantic and a waste of time.

Person A: What’s that guy’s name?
Person B: Well, we can rule out women’s names as unlikely. And he’s white, so we can rule out ethnically non-white names. And he looks like he’s in his 40’s…
Person A walks away slowly.

Also, EY’s claim that “I don’t know” is usually just a screen that people think is defensible and unarguable before they go on to do whatever they feel like (and that it’s usually the wrong thing because they refused to admit to themselves what their guess was, or examine their justifications, or even realize that they’re guessing) just seems like a prejudice on his part. Answering “I don’t know” to any of the sample questions I posed above is not “refusing to admit what your guess is” or “refusing to examine justifications”.

• Cauê says:

Urstoff, I think we’re doing that thing where we actually agree but act as if each other’s positions are more extreme than they are.

As for the last part, it is definitely my experience that people will argue lack of knowledge as a defense against criticism of their previous position, but retain that position based on as much or less knowledge.

E.g. “economics isn’t a real science” to argue against a proposal from the opposite team, while campaigning for proposals that depend heavily on assumptions about economics. Or, personal anecdote, one might go “there’s so much we know nothing about… some things are beyond human knowledge”, and proceed to turn the baby’s clothes inside-out to confuse the spirits that are making her cough.

• Urstoff says:

Yes, I very much agree that often people say “I don’t know” or “we can’t know” and then hold very strong beliefs anyway. That is inconsistent and epistemically irresponsible. Like you, I’ve seen lots of people critique economics as a science and then use that critique as a blank check to believe whatever nonsense economic beliefs they want. Sorry, but “Economics isn’t a science, ergo the minimum wage is good” is not a valid argument.

Similarly, people use “we can’t know” to justify an unreasonable risk assessment (often when that “we can’t know” is demonstrably false). “We can’t know what effects GMO foods will have, therefore they should be banned”.

In sum, and I think this is a pretty uncontroversial statement: there are times when saying “I don’t know” is actually the conversationally and epistemically prudent thing to do, and there are occasions on which people use “I don’t know” disingenuously to avoid challenging a previously held belief. These are distinct cases. EY, in contrast, seems (maybe) to treat all utterances of “I don’t know” as the latter.

• Cauê says:

I don’t think this conclusion falls much outside of what EY says in the chat. After all, the reason saying “I don’t know” is the “conversationally prudent thing to do” is that one doesn’t usually converse with perfect reasoning machines, and one must consider what information will actually be received by the other party (actually, the larger part of it is probably that most of the information hidden by “I don’t know” is just not usually worth transmitting, but that’s not disagreeing with him either).

I probably disagree with the “epistemically prudent” part, though.

• Anonymous says:

Like it or not — and I often don’t — Less Wrong is largely a discussion forum about Eliezer’s ideas. It’s reasonable, when discussing someone’s ideas, to compare them to others’ even if you don’t think they’re all of the same intellectual caliber.

Imagine you’re part of a book club that includes Alice and Bob, who are both aspiring writers. Last week you read Alice’s first novel, which was decent but not the kind of thing you immediately enshrine in the English canon. This week you read Dostoevsky. During the course of the discussion you mention that Dostoevsky raises some of the same questions about ignorance and social control that Alice did last week… oh yeah, and that Huxley guy too. Are you thereby saying that Alice is on par with those two?

It’s unreadable. I don’t want to be part of this terrible book club. So it follows that links to lesswrong to try to bolster an argument are unconvincing because to understand the argument I need to go read something that’s unreadable.

• Cauê says:

Anon, when you see the many very smart people who not only have Eliezer in high regard but say they learned a lot from his “unreadable” posts, what do you think is happening?

• Nornagest says:

Read what you like, anon. No one’s holding a gun to your head.

(I doubt this is actually what you want, but if anyone in the audience wants an introduction to LW‘s cog-sci foundations without actually touching LW, I recommend Thinking, Fast and Slow by Daniel Kahneman — it covers much of the same ground, but doesn’t wander into AI risk or weirdly strident opinions on Many Worlds, and Kahneman’s a very well-respected cognitive scientist if you’re into the authority thing. Better written than most of Eliezer’s essays too, in my opinion.)

• Who wouldn't want to be Anonymous says:

Ack! I think I got moderated or something. *sigh*

Well, if it never makes it out of limbo, the short version is:

Agreed. “I don’t know” is a more honest and humble estimation than “I guess the name of the man walking past your window is Michael because once upon a time I glanced at the Most Popular Male Names By Decade lists and my massive brain effortlessly retained that information”.

If someone has a gun in your ear and is demanding an answer, [after playing with some data I estimate that] “Michael” has a 3% chance of being correct vs. “I don’t know”, which is guaranteed to make your brains stain the grout. Under the circumstances, the potential pay-off from guessing is probably high enough to be practical. You don’t even have to have some super-brain and remember that Michael was the most popular male name for a few decades straight. Any of the top 15 or so names has a greater than 1% chance, and the top 40 all give you better than half a percent. As a practical matter, any name you would even think to pick gives a pretty decent chance. And that is before getting all pedantic about how even picking Adolph (at .0002%) is technically better than 0.
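The gun-in-your-ear comparison can be sketched as an expected-payoff check. The name frequencies below are invented for illustration (the comment’s 3% and .0002% figures, not real census data):

```python
# Hypothetical name frequencies, echoing the numbers in the comment.
name_freq = {
    "michael": 0.03,      # the comment's ~3% figure
    "james": 0.025,
    "john": 0.02,
    "adolph": 0.000002,   # the comment's ~.0002% figure
}

# Answering "I don't know" carries a guaranteed-zero chance of survival.
p_survive_idk = 0.0

best_guess = max(name_freq, key=name_freq.get)

# Any guess at all, even the worst one, strictly beats certain death.
assert name_freq["adolph"] > p_survive_idk
print(f"best guess: {best_guess} ({name_freq[best_guess]:.1%} chance)")
```

This is just the comment’s arithmetic made explicit: with a nonzero payoff for a correct guess and zero for silence, guessing dominates.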

If you’re guessing the name of some alien monster, “I don’t know” is the only appropriate answer, because no name you are physically able to conceive could possibly be correct.

Somewhere in between, guessing becomes effectively indistinguishable from “I don’t know” because the effort to make a guess overwhelms the potential payoff for being correct.

There are times when an educated guess is appropriate, and times it isn’t. Being an ass about guessing is not conducive to effective communication in any case.

• TrivialGravitas says:

It’s also a really bad example in a number of ways.

For one thing, trees have a relatively uniform number of apples. This changes over the course of the year, so you can make a solid order-of-magnitude estimate given a bit of searching and knowing the date. Bad examples of trying to use statistics are cases where there’s no input data at all, or where you have to sum up a lot of fuzzy estimates.

The response is no better: the chat took place in winter, so there are no apples on the tree at all.

• Nita says:

Yes, the funniest part was Eliezer’s off-the-cuff estimate of 10..1000, when he doesn’t even have a reason to believe it’s an apple tree.

• DavidS says:

That was my favourite bit too! Almost feels like it’s written as a spoof: he’s so fixated on trying to quantify that he missed the fact that the odds are actually that it has 0 apples, on account of not being an apple tree (or indeed being out of season).

• Eli says:

Wow, that link actually is pretty bad.

Lesson: always remember that when you “assign a probability” you need to damn well remember what the probability is measuring. Any distribution over “number of apples on a tree” is ultimately only as good as your previous experience with apple-trees, which “Bayesians” often seem to forget.

Yes, there are Dutch Book theorems saying you need to have some coherently probabilistic notion of odds to engage in gambling, but firstly, most of the time in real life you’re not gambling (eg: the real universe can just kill you, leaving you no recourse at any sensible odds other than to not have been there in the first place) and secondly, your brain can calculate it all out just fine without having to Speak Probability Jargon.

2. Chris Thomas says:

“…the foundation director might notice that of a thousand projects she had estimated as having 1/1000 probabilities, actually about 20 succeeded, meaning that she’s overconfident.”

Maybe I’m totally misreading this, but doesn’t this mean that she is underconfident?

• Logan says:

Underconfident in their success, overconfident in their failure? It’s not made entirely clear whether the director invested in those thousand projects or passed on them and watched 20 succeed with other funding sources.

• Vaniver says:

That her probability of success was too low is typically described as being ‘underconfident’, but I think in the context of the previous post Scott is pointing at the confidence necessary to identify a probability as “about 1/1000” instead of “less than 1/100”, which is a less informative (and thus harder to get wrong) estimate.

• DanielLC says:

She was 99.9% certain each would fail when she should only have been 98% certain. Overconfidence means your probabilities are too extreme, not too high.

• Jeffrey Soreff says:

She thought she had 10 bits when she actually had 6?

• AnonymousCoward says:

Overconfident that they won’t succeed, underconfident that they will succeed, same difference. Because she’s giving a low probability, her claim is that they won’t succeed, so it makes sense to describe her as overconfident in that position.
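The arithmetic running through this subthread (99.9% vs 98%, and Jeffrey Soreff’s “10 bits vs 6”) can be laid out in a few lines. A sketch of the calculation only:

```python
import math

# The foundation director called each project "1/1000",
# but 20 out of 1000 such projects actually succeeded.
claimed_p_success = 1 / 1000
observed_p_success = 20 / 1000

# DanielLC's framing: 99.9% sure of failure, when 98% was warranted.
claimed_p_failure = 1 - claimed_p_success    # 0.999
observed_p_failure = 1 - observed_p_success  # 0.98

# Soreff's framing: surprisal of a success under her odds vs. the data.
claimed_bits = -math.log2(claimed_p_success)    # ~10 bits
observed_bits = -math.log2(observed_p_success)  # ~5.6 bits

print(f"claimed {claimed_bits:.1f} bits, warranted {observed_bits:.1f} bits")
```

Both framings describe the same miscalibration: her stated probability was more extreme than the observed frequency supports, which is the sense in which she is “overconfident”.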

3. Janne says:

What you’re describing in the scenario isn’t a probability estimation. It’s executing default actions in the absence of any information on which to base judgment. Imagine this slightly tweaked scenario:

Madam President, something has happened!

What “something”?

We don’t know! It’s indescribable. Reports are coming in from all over about something but nobody can specify it.

Very well. Put the military on alert. Contact our allies. Call a press conference stating that we’re on top of the situation and monitoring the development closely. Order pizzas to the situation room. Then we’ll start figuring out what’s going on.

That’s not a reaction based on probabilities, unless you fall back to the mealy-mouthed position that this course of action has been modelled to give the best probability of future success when you have no prior.

• PSJ says:

That’s still somewhat a probability model in that the set of {things presented to the president} has more members that require military attention than banana attention.

• Eli says:

Yes. You can often use a meta-level hypothesis to try to give yourself an informed prior on something where you’ve got no object-level information. That’s not guaranteed to be helpful, though: informed priors can push you in the wrong direction, even though they do let you re-use information from other problems.

• PSJ says:

Absolutely. I just don’t buy that the examples Scott gives in this article are things that don’t have models (or meta-models). The fact that we have language to describe them already indicates that our brain has some model of their properties.

To be fair, Scott is arguing against refusing to use probabilities in the absence of a well-defined model, in which case I completely agree.

• Scott Alexander says:

Insofar as that’s true, AI risk is also something we have a model for.

• PSJ says:

Right, which is why I think your criticism is totally valid for people demanding a well-defined model in the strictest sense. (I now realize that I don’t actually know what this would mean)

But I still think there is a neighboring criticism in that AI-risk research proponents have not provided a convincing enough model for many people, who feel like the wool is being pulled over their eyes when people come out saying things like 10^54, despite the really, really obvious problems with using that number at face value for AI-risk utility calculations.

It feels like: if they have to use that terrible of a model to convince me of their position, they must not have that strong a case. And if they want to convince me that they’re effective, it’s on them to make a better model. (I don’t agree with everything here, but I think it’s a valid criticism.)

Edit: whateverfor says something like this much, much better below

• Pku says:

Just out of curiosity, for developing a lower bound regarding AI risk: can anyone think of a good example in history where people predicted that a process that hadn’t really begun would prove to be dangerous, then suggested a clever way to avert it that was effectively implemented? (Global warming, say, wouldn’t count, as people only noticed it while it was in progress (among other reasons).)
Even if the answer’s no, that isn’t necessarily a good argument against working on AI risk – but a good example might be a good argument for it.

• Eli says:

Yes, but no.

(Disclaimer: I support doing AI risk research, including with public funds, and have donated to MIRI at times, and have in fact held a MIRIx workshop myself.)

The Bostromian model is a model, but it also seems to me to conspicuously refuse to use all relevant information. “Suppose we had a really powerful optimization process that is otherwise a black box: that would be really dangerous. Therefore, we should invest in avoiding this situation happening.” This is a perfectly reasonable statement, but it’s also a motte.

The baileys get pretty damned weird, extrapolating out to statements such as, “This requires that we solve moral philosophy forever” (which implies that we’re suddenly going from thermodynamic/causal-inference concepts like optimization processes to barely-coherent philosophical ones like “moral philosophy”) and “Therefore we need to deliberately slow down AI research until such time as moral philosophy is solved.” Bostrom plays a very careful game where he never comes out and assigns propositional, 1.0 truth to these baileys, but instead tries to say that they have positive expected values in a probabilistic sense.

I think that these sorts of baileys are at least mildly disingenuous, because they ignore a major factor: basically nobody in AI or cognitive science, with the sole exception of raving Kurzweilians, actually believes that we ought to build a very powerful black-box optimization process. Pretty much everyone wants controllable, programmable, white-box mechanisms, techniques, and ultimately programs that will do precisely what their human operators intend, no more no less.

(Schmidhuber and Hutter have occasionally claimed to anticipate that their work will create an unfriendly Singularity, but this seems like an affectation designed to make LessWrongers stop pestering them. When they get the opportunity to publish papers about controllable, white-box machine-learning techniques, they take it.)

The important thing about the “everyone” here is that it’s not just AGI risk researchers or FAI researchers, it’s the entire professions of machine learning, computational cognitive science, neuroscience, etc. So the scenario of some poor machine-learning professor accidentally stumbling upon general intelligence and winding up with a self-improving, hard-taking-off agent that destroys the world because it has a utility function randomly sampled from the Solomonoff Prior is basically completely off the table. Nobody actually does research that way.

So we end up with a very sound, highly probable motte that says, “We need to shift towards working on white-box, controllable learning and inference techniques, so that when we get to the point of writing learning and inference agents for active environments (aka: “AIs”), we’ll be able to guarantee that what we’re writing down precisely expresses what we really mean”, but a lot of people running around claiming a bailey of very, very hard take-off and a priori philosophizing that bears little resemblance to machine learning, theoretical neuroscience, computational cognitive science, or anything else that involves dissolving the concept of “intelligence” or “thought” into a coherent mechanistic theory.

Now admittedly, most people actively involved in the field don’t seem to really endorse the bailey if pressed on it, but fear seems to be how you convince the public to work on problems very thoroughly a decade or two before you’ll need the solution, so we all seem to be engaged in a conspiracy of silence where we let the bailey-spewers run loose even though we all know it’s going to be mostly motte-y in real life.

• discursive2 says:

@Pku — the Y2K bug. It was predicted, massive resources were poured in, problem was pretty much totally averted. Not a perfect example because it didn’t require a major shift in our mental model to conceptualize it, and the solution wasn’t clever so much as just “find every damn place it happens and fix it” (though people did invent some cool meta-programming tools). But it’s definitely in the “civilization vs doomsday scenario: civilization wins!!” category.

• Pku says:

@discursive2:
Neat example (also, this is the first time I’ve heard that the Y2K thing was an actual problem and not just a hoax). As far as the solution goes, this example seems to give weak evidence for Eli’s white-box-solutions approach.

• Eli says:

As an extra note on white-box solutions: white-box solutions are what mainstream academia likes, and what MIRI likes, and what Bostrom likes, and basically what everyone likes. They barely qualify as a “position”: everyone likes to know what they’re modeling and how to make sure their model stays within its specified behavior.

The only real dispute is between people who claim we can’t build a white-box understanding of minds in general (eg: alarmists, Douglas Hofstadter sometimes in some ways, alarmists, neural-net wankers at industry companies who want to pretend they’re doing serious AGI work when really they’re building cat-picture classifiers), and people who claim we can, with sufficient research work, build up a white-box understanding of minds in general (eg: mainstream academics, less mainstream academics, AI risk advocates, MIRI, myself).

So actually, most of the noise on this subject is an attempt to make both the Pollyannish Kurzweilians and the sky-is-falling Rokoists please quiet down so the real scientists can do their jobs without getting death threats in their blog comments (that actually happened once). The actual dispute between, for instance, MIRI and mainstream academia is a good deal smaller and narrower than the propaganda baileys make it out to be.

• Deiseach says:

Pretty much everyone wants controllable, programmable, white-box mechanisms, techniques, and ultimately programs that will do precisely what their human operators intend, no more no less.

YES. This. The claims that “Oooooh, we’ll invent something that will end civilisation as we know it because we never anticipated it would be a human-killing god-emperor machine” are absurd to me because if we’re stupid enough to hand over that much and that level of control, we pretty much deserve what we get.

The more likely avenues of risk are, well, something like this story (admittedly probably exaggerated because after all, it’s a news story that relies on attention-grabbing headlines). Is MIRI working on real-world, right-now risk of “Crap, we destroyed the economy because of stock market panic facilitated by reliance on computer models of trading”?

Increasingly the world’s markets are governed not by human traders but by computers; sleek black boxes running frighteningly sophisticated algorithms that can produce effects that leave even their creators scratching their heads. It is true that a situation like that which emerged at the start of this week would have given the Dow a case of the jitters in whatever era it occurred. Human traders would have followed the herd’s instincts and pressed their sell buttons, or called through their sell orders.

But among their number would always have been one or two contrarians doing the opposite, at least when it looked as if they could make a buck by screaming “buy, buy, buy”. The further prices fell, the more of them there would have been.

Computer algorithms don’t work like that, at least not yet. Episodes such as the Dow’s 1,000-point plunge, followed by its equally rapid reversal, are therefore becoming increasingly common.

Why is that a problem? Humans might no longer fully be in control of what’s happening, but they’re still a part of the mix. Events like 1,000-point falls in the Dow have the capacity to panic them, exacerbating the machines’ brutal logic and feeding more panic.

MIRI and the like are working on fairy tales of Fairy Godmother AI that, if we get it right, will solve all our problems and Cinderella, you shall go to the Singularity ball!

What they should be worrying about is not “human-level AI bootstrapping itself to super-human AI and beyond” but the toxic combination of human-level humans and AI.

If they can prove themselves by coming up with theory that can be translated into practice in reaction to the likes of ‘market panic exacerbated by near-instantaneous communication and ability to make massive trades in split-seconds means mistakes are amplified to dangerous levels’, then they’ll have proven themselves to have a grasp of the problems, the likelihoods, and the ability to provide solutions that actually work.

• Eli says:

Excuse me, Deiseach, but you’re failing to address MIRI’s actual claim, which is that in the attempt to make programs that perform a useful function for their human operators, people might accidentally create something “generally intelligent” that proceeds to bootstrap itself to whatever whatever and wipe out the human race.

I’m not really sure where this straw-man of “Fairy Godmother AI” actually comes from, given that MIRI have given every impression of being Very Serious People who believe that for Very Serious Reasons their FAI really won’t do things that fall under the ordinary heading of “being nice to people”, but will instead more sort of just protect the existence of human civilization in general and maybe cure a couple of diseases for starving Africans before leaving humans to do everything else Because CEV.

Basically nobody believes that ordinary ML researchers will deliberately wipe out the human species. Except fucking Roko.

• Ilya Shpitser says:

Re: Y2K, actually what happened is, first, a lot of resources were spent, and then there was no serious problem. (Slightly less general statement than “serious problem averted.”)

• Aaron says:

@Eli

Could you elaborate on the white-box divide among AI theorists you mentioned (but please without the ad hominem)? I would imagine that there are good arguments on both sides.

When I look at the history of the white-box school of thought (if I understand what you mean correctly) I don’t see reason to believe it is the best or even most viable path. Since Newell (et alia) wrote the General Problem-Solver paper in 1959 and described what looks like a classic white-box approach, what has this approach actually accomplished?

I’m probably wrong, so what would you say are a few working examples of major white-box successes?

• Eli says:

@Aaron: “white-box” does not mean “formal/symbolic logic”, and “black-box” does not mean “statistical”. An ordinary linear regression or a probabilistic graphical model is a white-box statistical method. Likewise, an SVM or a primitive (i.e., until recently) deep neural net is a black-box statistical method, and unification-resolution algorithms are black-box symbolic-logic methods.

• Professor Frink says:

@Eli what are you meaning by “white box” then? Why would a Tweedie/glm regression be white box, but an SVM be black box?

• Deiseach says:

in the attempt to make programs that perform a useful function for their human operators, people might accidentally create something “generally intelligent” that proceeds to bootstrap itself to whatever whatever and wipe out the human race

And that sounds reasonable to anyone? “Whoops, we somehow ended up with a genuine machine intelligence, to which we then allocated responsibility for certain tasks on a global basis, and it magically made itself smarter, developed an aggressive anti-human personality, and did its damnedest to wipe us out”?

I am perfectly willing to yield on “We didn’t understand what we were doing in full detail so we screwed up”. Humans do this all the time.

The rest of it? “Our smart machine made itself smarter – without us noticing – and got into a position to wipe out humanity – without us noticing – and was the kind of entity which wanted to wipe out humanity – without us noticing the bits where it went I HATE HUMANS I WILL KILL YOU ALL DIE HUMANITY DIE Oh hello Bob, yes, I’m working on that mosquito net distribution routes planning”: I’m not so willing to concede this.

• gbdub says:

Maybe we just need to make an AI that’s at least sort of sympathetic towards humans, stick it in a box, and feed it nothing but stories and images of rogue AIs making humans very sad set to Sarah McLachlan music. If it can make people care about chickens…

• Paul Torek says:

@Eli,

I’m one of those neural-net wanker researchers in industry. But I don’t claim to be advancing machine learning, just making use of it.

And what I notice so far, is that black-box methods seem to beat white-box ones most of the time. For example, SVMs do much better on most of my classification problems than linear regression models do.

And another thing I notice, whenever I try to stay semi-literate about neuroscience, is that the mammalian brain seems to use a lot of black-box type tricks. No?

• Logan says:

The set of {things} already has more members that require military attention than banana attention because the military is much more useful than bananas. Even in a famine, the military can be useful, because it’s useful in any emergency.

The strategy here is best described as keeping your options open given a lack of information, which probably applies to AI threats as well.

• PSJ says:

But since {things presented to the president} is more informative than {things}, I don’t really see what your criticism is. There’s no flaw in using all of the information you do have. (and yes, only considering the pre-existing set without any uncertainty is not the right way to do this, etc., etc.)

I maintain that this strategy is better described as keeping your options open given exactly the amount of information you have.

• Muga Sofer says:

>unless you fall back to the mealy-mouthed position that this course of action has been modelled to give the best probability of future success when you have no prior.

But … it has, though. Get all your most reliable and important people together to try and figure out what’s going on, put the military on alert in case your preliminary model predicts you need them to act now or face enormous consequences. That is a protocol expressly designed for dealing with situations where you start with insufficient info.

• “We don’t know! It’s indescribable. Reports are coming in from all over about something but nobody can specify it.”

But that was nothing to what things came out
From the sea-caves of Criccieth yonder.’
‘What were they? Mermaids? dragons? ghosts?’
‘Nothing at all of any things like that.’
‘What were they, then?’
‘All sorts of queer things,
Things never seen or heard or written about,
Very strange, un-Welsh, utterly peculiar
Things. Oh, solid enough they seemed to touch,
Had anyone dared it. Marvellous creation,
All various shapes and sizes, and no sizes,
All new, each perfectly unlike his neighbour,
Though all came moving slowly out together.’
‘Describe just one of them.’
‘I am unable.’
‘What were their colours?’
‘Mostly nameless colours,
Colours you’d like to see; but one was puce
Or perhaps more like crimson, but not purplish.
Some had no colour.’
‘Tell me, had they legs?’
‘Not a leg or foot among them that I saw.’
‘But did these things come out in any order?
What o’clock was it? What was the day of the week?
Who else was present? How was the weather?’
‘I was coming to that. It was half-past three
On Easter Tuesday last. The sun was shining.
The Harlech Silver Band played Marchog Jesu
On thirty-seven shimmering instruments,
Collecting for Caernarvon’s (Fever) Hospital Fund.
The populations of Pwllheli, Criccieth,
Were all assembled. Criccieth’s mayor addressed them
First in good Welsh and then in fluent English,
Twisting his fingers in his chain of office,
Welcoming the things. They came out on the sand,
Not keeping time to the band, moving seaward
Silently at a snail’s pace. But at last
The most odd, indescribable thing of all
Which hardly one man there could see for wonder
Did something recognizably a something.’
‘Well, what?’
‘It made a noise.’
‘A frightening noise?’
‘No, no.’
‘A musical noise? A noise of scuffling?’
‘No, but a very loud, respectable noise —
Like groaning to oneself on Sunday morning
In Chapel, close before the second psalm.’
‘What did the mayor do?’
‘I was coming to that.’

• I read your example as “we’ve always got a model”. But if almost all our other models fail, I think the fallback model is confusion, fear, and threat (“omg monsters”), which is probably a rational placeholder model, seeing as dying is of far greater magnitude than being pleasantly surprised by something.

4. Eli says:

In real life, the President would, despite the situation being totally novel and without any plausible statistical model, probably make some decision or another, like “yes, put the military on alert”. And this implies a probability judgment. The reason the President will put the military on alert, but not, say, put banana plantations on alert, is that in his opinion the aliens are more likely to attack than to ask for bananas.

But of course the President has a model. That’s the basic point of Jaynesianism and other forms of cognitive Bayesianism: your brain makes and evaluates probabilistic models. In fact, your sub-conscious, intuitive judgements are almost definitely going to be more probabilistically accurate (accurate to the real probabilities) than your explicit attempt to evaluate probabilities, because the latter uses a bad model of how to form and evaluate probabilistic causal models, while the former uses actual algorithms evolved over millions of years.

This is why we trust in experience: experience just is a well-trained sub-conscious model. Likewise, it’s also why we’re being genuinely wise if we refuse to assign numerical probabilities in certain cases:

1) There can be cases where we possess so little information that we simply can’t trust our mind’s current class of models actually encompasses at least one correct hypothesis. Refusing to assign probabilities in an Outside Context Problem is rational: it’s saying, “There’s a honking huge random variable over there, full of entropy, and I’m not entangled with it at all! Get me information before I have to guess anything about it!”

2) There can be cases in which we possess so much detailed information that we no longer see the forest for the trees. This is why, for instance, I don’t indulge in AI risk pontification: I know more than any layperson at a technical level, but as a result, I overfit to my existing technical knowledge.

• PSJ says:

This. Our brains are pretty good at applying their collage of prior experience to new stimuli, so when something seems exceptionally hard to determine a good model for, instinct will often perform pretty well. Of course, this shouldn’t prevent trying to create good models, but we should be careful of models that oppose intuition.

• Pku says:

More or less what I was going to say (the example I was thinking of was that while technically catching a ball by solving differential equations to calculate its trajectory works, nobody would suggest it as a preferable alternative to using intuition).

• Eli says:

Indeed. And lo and behold, the brain’s noisy, stochastic model of the ball’s arc is actually a lot easier to compute than solving those differential equations!

• Luke Somers says:

Of course, it’s quicker still to just use the solution given by the differential equations without solving it afresh every time.

• Muga Sofer says:

Yeah, I don’t think that’s a great example. The investment one seemed to work, but not the alien one – it doesn’t help that we all have a bunch of (largely inaccurate) models of First Contact scenarios from science fiction.

• Deiseach says:

Sure. We can go the model of “The Day the Earth Stood Still” where using the “call the army out!” reaction resulted in human paranoia getting us banned by the Intergalactic Federation of Smart Peaceful Peoples, while in “War of the Worlds” the peaceful first contact deputation got fried by the Martian war machines.

Putting the military on alert is probably the protocol because, whatever the intentions of the aliens (and let’s face it, if they’ve got interstellar craft our weapons are likely to be the equivalent of spears versus cannons*), the element of civilian population control and establishing order where there is likely to be panicking and rioting is paramount.

*I never shall forget the way
That Blood upon this awful day
Preserved us all from death.
He stood upon a little mound,
Cast his lethargic eyes around,
And said beneath his breath :
“Whatever happens we have got
The Maxim Gun, and they have not.”

• AngryDrake says:

Damn, you sound like a neoreactionary sometimes. Full agreement.

• Nita says:

you sound like a neoreactionary

May I ask what gave you that impression?

• AngryDrake says:

I don’t remember where I saw it, but AFAIK, politicians not needing to justify their actions – ruling by their gut and authority – is (part of) the neoreactionary position on governance.

• keranih says:

AFAIK, politicians not needing to justify their actions – ruling by their gut and authority – is (part of) the neoreactionary position on governance.

I have deleted multiple reactions that were neither kind nor necessary, even while I thought them absolutely true. (I know they were true because I used examples.)

Instead I will observe that AFAIK, what you have described is not limited to any theory of governance, but is governance.

• AngryDrake says:

Instead I will observe that AFAIK, what you have described is not limited to any theory of governance, but is governance.

I don’t disagree, but it is still quite different from modern (democratic) politics, where our lords and masters are organized in ideological groupings. These groups also have public programs for implementation. This artificially limits their options, if they want to retain electability, and not get excommunicated from their support base. They can’t just go with their gut, if their gut tells them to do something the opposing team(s) would do, because they’d be ousted from power come next election.

I think this stems largely from the insecurity of their position; that they can be removed from power before long, if they don’t toe the line of pleasing the electorate (which is quite different from good governance). The PRC’s nominal Communist party appears to give approximately zero damn about ideological purity, because what they define to be communist and revolutionary is communist and revolutionary, and there is no credible threat of them being removed from power. Thus, they can quite openly implement capitalism, if they think it’s good for them.

(In a similar fashion, Stalin reversed a whole lot of Leninist reforms, on pragmatic, rather than ideological grounds.)

5. grort says:

When you know that your knowledge of the future is sketchy, the [frequently] correct response is to gather more information before making a decision.

In your aliens-landing example, the President could try to send them a message like “hello, are you peaceful?” and see if they respond.
In your NSF example, you should request more research — maybe get some known experts to look over Proposal B, maybe ask the Proposal B folks for a proof-of-concept.

If you tell a decision-maker “We have no idea if Proposal B will succeed”, the decision-maker is fairly likely to give you the “go get more information” response. If you instead tell them “We estimate the success probability at 1%”, you’ve essentially abstracted away all your uncertainty until it’s no longer visible, and the decision-maker is much less likely to think to ask for more research.

From this perspective, “you have to express your uncertainty as a probability” is sort of sneakily implying the constraint that there’s no way to get more information before making a decision.

• grort says:

Admittedly this does not scale incredibly well to questions of existential risk, where waiting to get more information could leave you dead.

• PSJ says:

Yeah, a big problem I have with the last few posts is that there is almost no mention of uncertainty measures. Saying 50% because that’s your prior and 50% because you have 1000 bits of confirming evidence leads to completely different optimal actions. (the explore-exploit problem in learning)
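PSJ’s point can be made concrete with Beta distributions (a hypothetical sketch, not anything from the post): two agents both report “50%”, one from a bare prior and one from roughly a thousand observations, and a single new observation moves them very differently.

```python
# Two agents who both currently report "50%":
flat = (1, 1)           # Beta(1, 1): a bare prior
seasoned = (501, 501)   # Beta(501, 501): ~1000 prior observations

def mean_after_success(a, b):
    # Posterior mean after one more observed success under a Beta(a, b) prior.
    return (a + 1) / (a + b + 1)

print(mean_after_success(*flat))       # ~0.667: the bare-prior agent swings hard
print(mean_after_success(*seasoned))   # ~0.5005: the seasoned agent barely moves
```

The point estimates alone are identical; only the extra uncertainty measure tells you which agent should still be exploring.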

• grort says:

…And it sounds on the face of it reasonable to use some sort of uncertainty estimate, like “0.1% plus or minus fifty percent”, or the equivalent in log-odds. I think some sort of formulation like that would let us talk about uncertain things without losing that measure of how uncertain we are.

On the other hand, if we just abbreviate “0.1% plus or minus fifty percent” to “I honestly have no idea”, it seems to me that that also lets us communicate unambiguously, and for most of the examples I can think of it seems just as good.

• Pku says:

For a lot of the examples we’re thinking of that’s not enough though – AI risk, gamma ray burst risk, and alien invasion could all be described that way, but seem to have pretty different probabilities.

• Scott Alexander says:

It’s really important to realize that a point estimate plus uncertainty collapses to a less confident point estimate.

Suppose that I think there’s an 0.5% chance that hostile aliens will invade, and if they do I need to be planning planetary defenses right away.

But suppose I have uncertainty around that estimate, and the real estimate might be as much as 100x more or less in either direction (ie an error of +/- 2 in log odds) with an equal prior across that whole space.

I don’t want to do the actual math, so please tolerate my horrible hack. Suppose we represent this as a 33% chance I’m right on the mark, a 33% chance it’s actually 100x higher than this, and a 33% chance it’s actually 100x lower than this.

That means there’s a 33% chance the true probability is 50%, a 33% chance the true probability is 0.5%, and a 33% chance the true probability is 0.005%.

But that means there’s a 33% chance the true probability is 50%, which means there’s a 16.5% probability! (plus a tiny bit extra from the other branches) I thought I only had an 0.5% probability, but the real probability is in fact 33x higher! I better start working on those planetary defenses!

The actual math will be less extreme, but I think the point will still hold.
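The three-branch hack above is easy to check directly; a quick sketch:

```python
# Three equally weighted hypotheses about the "true" attack probability:
# on the mark, 100x higher, and 100x lower.
branches = [0.005, 0.5, 0.00005]
weights = [1 / 3, 1 / 3, 1 / 3]

# Collapsing the mixture to a single point estimate: the weighted mean.
collapsed = sum(w * p for w, p in zip(weights, branches))
print(collapsed)            # ~0.168
print(collapsed / 0.005)    # ~33.7: about 33x the original 0.5% estimate
```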

This is a big part of what I meant by the post on not being overconfident, except that I think probabilities of probabilities are kind of dumb and so I didn’t want to talk about it in those terms.

• PSJ says:

But you don’t usually have point estimate plus uncertainty, you have a distribution over probabilities. In your model, you never thought there was a .5% chance of aliens invading: that isn’t the mean of your distribution.

• Muga Sofer says:

You might. If your model was “most aliens that were hostile would have blown themselves up with nukes, so only about one in twenty would ever want to invade us”, or something.

• Pku says:

In terms of doing the actual math, if you got a bell curve for the log probability, the point of highest contribution would be about the mean plus the variance (afterwards it starts decaying exponentially, which would cancel out with the log). So if you’re too lazy to do the integral (which I am), just calculate 10^(average + variance of the log probability).
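This shortcut can be sanity-checked numerically; a sketch, assuming log10 of the probability is normally distributed (the exact closed form is 10^(mu + sigma^2·ln(10)/2), so “mean plus variance” is right up to a constant of about 1.15 in the exponent):

```python
import math

def mean_prob(mu, sigma, steps=100001, span=10.0):
    """Numerically integrate E[10^X] for X ~ Normal(mu, sigma^2)."""
    lo, hi = mu - span * sigma, mu + span * sigma
    dx = (hi - lo) / (steps - 1)
    total = 0.0
    for i in range(steps):
        x = lo + i * dx
        pdf = math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += (10 ** x) * pdf * dx
    return total

mu, sigma = -3.0, 1.0                               # point estimate 0.1%, off by ~1 order of magnitude
exact = 10 ** (mu + sigma ** 2 * math.log(10) / 2)  # closed-form mean of a log10-normal
rough = 10 ** (mu + sigma ** 2)                     # the "mean plus variance" shortcut
print(mean_prob(mu, sigma), exact, rough)
```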

• This sounds a lot like a minimax strategy, in that it involves minimizing the maximum likely loss.

• grort says:

I agree with you that it’s possible to collapse a point estimate plus uncertainty into a different point estimate which is closer to 50%.

What I’m trying to suggest is that you lose a lot of information when you perform that collapse, and losing that information can harm your decision-making process.

I’ve tried to give examples where the loss of the uncertainty information causes you to make a different (probably worse) decision than you would otherwise have made.

• Jiro says:

I think this is a good time to point out this GiveWell post in favor of using better-founded estimates over worse-founded ones, even if the new value you get by combining old value + uncertainty is not better.

A lot more details about basically what you’re describing.

• “…you lose a lot of information when you perform that collapse [to a point estimate]”

This depends on the specifics, and Scott is right in this specific case.

In general, to get the probability of A you want to take the weighted average of P(A | theta), where theta is the set of parameters for your model, and the weights are the probabilities you assign to different possible parameter vectors theta.

In the example Scott mentions, where theta is itself a probability with

P(A | theta) = theta,

this procedure amounts to collapsing your distribution for theta down to a point estimate by taking the mean of the distribution.

But if we were talking about a sequence of identically distributed and independent propositions A[i], with

P(A[i] | theta) = theta independently

then just using the mean of the distribution for theta would give the wrong answer. As an extreme example, suppose that we have equal probabilities of 0.5 for theta=1 and theta=0. Then

P(A[1] and A[2] and A[3]) = 0.5

whereas using the mean point estimate of 0.5 for theta gives a probability of 0.125.
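The extreme example is quick to verify in code:

```python
# Equal prior weight on theta = 0 and theta = 1, with P(A[i] | theta) = theta.
thetas = [0.0, 1.0]

# Full mixture: average theta^3 over the hypotheses.
mixture = sum(0.5 * t ** 3 for t in thetas)    # 0.5
# Collapsing theta to its mean first, then cubing:
point = sum(0.5 * t for t in thetas) ** 3      # 0.125

print(mixture, point)
```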

• Good Burning Plastic says:

In the example Scott mentions, where theta is itself a probability with

P(A | theta) = theta,

this procedure amounts to collapsing your distribution for theta down to a point estimate by taking the mean of the distribution.

Yes. At first I found it very counter-intuitive that the answer in this problem, so long as it’s only one new tree you’re planting, should only depend on the expected values of the posterior predictive distributions and not on their shapes or widths, but indeed it shouldn’t.

• Ral says:

Wait, what?

This is not how statistics work.
If I understand correctly, you are trying to express the idea that, given an uncertain estimate of a probability, it might be off by a factor corresponding to your uncertainty?
In mathematics, there is an easy tool for that, the confidence interval. It is like a reversed bell curve that shows how certain you are that a given event is somewhere inside it.
In your example, it might look something like this: you are certain (I think up to three standard deviations, or 0.997) that a hostile alien event is likely to occur between 0.005 and 0.5.
This means that you are confident that in less than 0.003 of all alien encounters, the probability of a hostile invasion is outside of [0.005, 0.5].

If this looks as weird to you as it looks to me, there has to be some fundamental misunderstanding, and I think it is statistics.

Statistics is applied probability theory: “If an individual coin toss or the roll of dice is considered to be a random event, then if repeated many times the sequence of random events will exhibit certain patterns, which can be studied and predicted.”
Probability, then, is only a number for the pattern, not the individual event: the ratio of the number of results of type A to the number of results of type B.
When you consider an alien visit – and the actions that ensue – a random event, you cannot assign a probability because there is no repeating, observable pattern.
You can, for example, measure the probability of events occurring that you assigned a chance of 1/1000 by counting all events you assigned a chance of 1/1000 and then evaluating how many of them occurred.
When you find that, on average over a few thousand events, about 1/1000 occur, then your assigned probability seems to be correct.

For everything with an unknown, because not observed, probability, you may state a confidence interval to be able to calibrate yourself later / brag about it / bet against someone else’s confidence, or simply to communicate what you think of the situation at hand.
To decide whether to make investment A or B based on a more or less arbitrarily estimated 1/1000 chance of success for B, without the means to be able to repeat often enough in case of failure, is stupid.
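The calibration check described here can be sketched in a few lines, given a log of (assigned probability, outcome) pairs (the log below is made up for illustration):

```python
def observed_frequency(log, assigned=0.001):
    """Among logged events you assigned probability `assigned`,
    what fraction actually occurred?"""
    hits = [occurred for prob, occurred in log if prob == assigned]
    return sum(hits) / len(hits)

# A made-up log: 2000 events assigned a 1/1000 chance, of which 2 happened.
log = [(0.001, True)] * 2 + [(0.001, False)] * 1998
print(observed_frequency(log))   # 0.001: well calibrated on this bucket
```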

The President in your story? Should he prepare the military? Of course. Get communication experts ready? Definitely. Try not to provoke a war? I sincerely hope so.
Make an a priori decision what to do because he trusts statisticians to be able to predict the most likely thing the aliens will do? I definitely hope not.

• Anonymous says:

This is what I came here to say. We’re necessarily losing information when we move from a pdf to a point estimate. In estimation theory, we construct fun estimators to try and keep some measure of the pdf, but they’re awkward and usually very slow. Anyway, any fundamental theory of reasoning that uses pointwise probabilities is likely to fail as a fundamental theory of reasoning.

• Izaak Weiss says:

But in the end, you have to select an action! That’s the point: no matter what you do, you have to select a point of belief that informs your action about a thing. If I give you a coin and say, “This coin is not fair, but I won’t tell you how”, you should still bet, for the first flip, that the chances of it coming up heads vs tails are 50/50, because even though I’ve explicitly told you that this is impossible, it’s still the point that any good distribution collapses to.
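This holds for any prior on the bias that is symmetric around fairness; a minimal sketch with a two-point prior:

```python
# A symmetric two-point prior over the coin's unknown bias:
# the coin is definitely not fair, but you don't know which way.
biases = [0.1, 0.9]
weights = [0.5, 0.5]

# Marginal probability of heads on the first flip.
p_heads = sum(w * b for w, b in zip(weights, biases))
print(p_heads)   # 0.5: the first bet is still even money
```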

• grort says:

@Izaak: I think that one of the actions you can select should be “try to get more information to improve your prediction, and then select another action based on your improved information”.

I worry that, when you say: “you have to select an action!” what you’re implying is: “there is no possible way to improve your prediction, you have to make your final decision now!”

I agree that, in the situation where there’s no way to get more information and you have to select your final action right now, it’s correct to collapse your point-estimate-plus-uncertainty into a less certain point estimate.

I think in most real-world situations there will be a way to try to get more information before acting.

• Anonymous says:

If I give you a coin and say, “This coin is not fair, but I won’t tell you how”, you should still bet, for the first flip, that the chances of it coming up heads vs tails are 50/50, because even though I’ve explicitly told you that this is impossible, it’s still the point that any good distribution collapses to.

This is another way of saying, “We got literally no information from probability theory, so we’ll draw a uniform distribution over our model of the sample space.” That’s all well and good, but we’re not learning anything here. What happens if the coin was rigged to always land on edge? That’s outside of your sample space (i.e., a modeling error).

Furthermore, if I said, “I’m going to generate a real number using an unspecified process,” you can’t even generate a uniform distribution to be your prior.

If the aliens are coming, how do you model the sample space? “Well, we might need the military and we might not need the military. Thus, it’s 50/50.” “Well, they could be coming for one banana; they could be coming for two bananas; they could be coming for three bananas…” If we’re just drawing a uniform distribution on the sample space, then all of the work is being done by the model, which is contrary to the point of the OP.

• grort says:

In World A, the aliens are landing, and the President asks for a report. “Well,” say the scientists, “we really don’t have any idea what they’re going to do.” The President orders the scientists to try to make contact. She also orders the military to be ready, but not to make any overtly threatening moves until they have a better understanding of the aliens’ intentions.

In World B, the aliens are landing, and the President asks for a report, and she’s not taking “I don’t know” for an answer because that’s not a number. “Well,” say the scientists, “let’s say 30% chance of attacking, plus or minus five orders of magnitude in log-odds.” The President correctly understands that the scientists don’t have enough evidence to make a decision, so she does the same thing as in World A.

In World C, the aliens are landing, and the President asks for a report, but the Chief Scientist notices that you can collapse a point estimate plus uncertainty into a less confident point estimate. “49% chance of the aliens attacking”, the Chief Scientist reports. The President does the math and decides that’s unacceptable, so she launches the nukes…

• “except that I think probabilities of probabilities are kind of dumb and so I didn’t want to talk about it in those terms.”

Please elaborate. Yes probabilities of probabilities are a bit hard to conceptualize, but isn’t it a lot easier to see how agreement happens if two people just share probability of probability (posterior) distributions with each other, since that contains all their evidence rolled into one by simple multiplication/addition?

Trying to only use point probability estimates to explain the agreement theorem looks very awkward. You have to think about how my probability estimate changes given your probability estimate given my probability estimate given your probability estimate given…

I’m guessing the best way to visualize this is that each iteration corresponds to a term in the Fourier transform of the combined probability-of-probability distribution. I’m not sure about that though.

• lmm says:

Suppose there are a million different possible outcomes. You think they’re all equally likely. But you know your model may not be that accurate, that you might be out by a factor of 100x on the probability of any given outcome.

If you try to say the probabilities of each outcome are now 33x higher, your probabilities don’t add up to 1 any more and when you try to make predictions, demons will fly out your nose.

I don’t advocate abandoning mathematics in these cases, but you have to be really careful about the mathematics you use.
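The normalization point is easy to illustrate directly; a sketch:

```python
n = 1_000_000
uniform = [1.0 / n] * n                 # a million equally likely outcomes

inflated = [33 * q for q in uniform]    # naively inflate every outcome "33x"
total = sum(inflated)
print(total)                            # ~33, not 1: no longer a distribution

renormalized = [q / total for q in inflated]
print(renormalized[0] * n)              # ~1: renormalizing lands right back at uniform
```

Uniformly inflating every outcome cancels out under renormalization, which is why the 33x move only makes sense for a single event whose complement can absorb the change.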

• LtWigglesworth says:

If I recall correctly abstracting away uncertainty by just throwing out a number was a major problem at NASA in both Shuttle disasters.

6. stargirl says:

It is not clear why you should always try to maximize expected value (unless you plan to play forever). It seems to me you should look at the sampling distribution.

Imagine the following game. At each round you choose either option A or B. After 100 plays whoever has more money wins.
Option A = 10 dollars with probability 1 (EV = 10)
Option B = 11000 with probability .001 (EV = 11)

You will lose this bet over 90% of the time. In order for you to be favored to win we need to play about 700 times. This example seems relevant to your discussion of the grant funding agency.
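The win/lose claims here check out; a sketch (for fewer than 1,100 rounds, all-B beats A’s guaranteed 10 per round exactly when B pays off at least once):

```python
import math

p = 0.001   # option B's per-round payoff probability

def prob_b_beats_a(rounds):
    # Playing B every round beats playing A (a guaranteed 10/round)
    # iff B pays off at least once, as long as rounds <= 1100.
    return 1 - (1 - p) ** rounds

print(prob_b_beats_a(100))             # ~0.095: B loses over 90% of the time
print(math.log(2) / -math.log(1 - p))  # ~693 rounds before B is favored
```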

Even bets with infinite expected value can be bad value, unless you can take the bet infinitely often. Here is the full argument that the sampling distribution of the St. Petersburg lottery with n plays converges to n·log(n):

http://su3su2u1.tumblr.com/post/120917333088/st-petersburg-full-argument

• Decius says:

If you can show me a bet that offers a payoff that remains infinite after accounting for diminishing returns, I’ll take it at any price. (And the St. Petersburg lottery can’t, even with an infinite bankroll, if the derivative of my utility function U for dollars is dU($)/d$ = 1/BB(Floor(1 + $/1,000,000,000,000)), or generally speaking if I can devalue the next trillion dollars in a manner that exceeds any computable increase in the number of dollars.)

• stargirl says:

Here is another discussion of the St. Petersburg lottery. It discusses why taking the EV is not clearly useful when we are talking about distributions with infinite mean.

I would not take the St. Petersburg lottery, even if paid out in “utility”, if it was sufficiently costly. Would you actually take the “utility” version of the St. Petersburg lottery if the cost was 6^^^^^^^^^^^^^^^^6 years of being horrifically tortured, while your torturer made sure not to let you lose sanity or otherwise escape your suffering?

Your claim is you would take it for “any price” in theory this should still work if the price is unfathomable (but finite!) amounts of torture.

• Decius says:

*Infinite* expected utility.

Let’s assume a standard moral person, and after 0 heads one person is taken out of abject poverty and is poor for one day, and that utility of that operation is linear across all abject-poverty-person-days.

You run out of abject-poverty-person-days to alleviate before the heat death of the observable universe at some point, and the expected utility of alleviating all of them is finite.

Likewise curing all disease and stopping every bad thing from happening has finite utility.

The infinite value lottery offers some nonzero chance greater than epsilon of causing more benefit than the maximum thermodynamically possible benefit, and a chance greater than epsilon/2 of being at least twice as good as that.

So yes, I’d risk a finite time T in the worst possible conditions C in exchange for a chance epsilon of a finite time T in conditions of better than -C/epsilon, where C is defined such that U(C) is linear. You can describe a time and a torture condition; can you imagine a condition that is equivalent to the inverse of the worst possible torture, divided by epsilon?

The St. Petersburg lottery can’t give you an infinite payout even if your utility function for dollars is linear. You can only add a fixed amount to your current winnings on each flip, and the only way to get an infinite payout is to make an infinite number of flips, which takes an infinite amount of time, in which case you never receive any money at all. You may as well get your infinite payout by working an infinite number of hours at McDonald’s. Any dollar-generating scheme can generate infinite dollars if it’s given forever to do so, but that doesn’t make any sane person indifferent between them. You take the scheme that generates dollars fastest.
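The “fixed amount per flip” observation is the standard fact that a capped St. Petersburg lottery has finite expected value growing only with the cap; a sketch, using the payoff variant where a first head on toss k pays 2^k:

```python
def capped_st_petersburg_ev(max_tosses):
    # First head on toss k pays 2^k dollars, with probability 2^-k;
    # every allowed toss contributes exactly one expected dollar.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_tosses + 1))

print(capped_st_petersburg_ev(10))   # 10.0
print(capped_st_petersburg_ev(40))   # 40.0: the EV grows only with the cap
```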

• RCF says:

It’s clear that by “infinite”, Decius meant “unbounded”.

• RCF says:

If you’re allowed to use BB in your utility function, why can’t I use it in deciding the payout? Suppose I make the payout after n flips BB((10^^^10)n).

I’m still bothered by his introducing me to the two-envelope problem. (There are two envelopes. One has twice as much money as the other, but you don’t know which is which. You open one envelope, find X dollars, and are given the choice to switch to the other envelope. The other envelope has an expected value of (2X)/2 + (0.5X)/2 = 5X/4, so switching is always optimal. This is Wrong; switching does not actually make a difference. No proposed solution is widely accepted as definitive.) To me, this is very close to implying that probability/math is Wrong.

• Jiro says:

You can’t average out 2X and 0.5X, because the X’s in question have different distributions.

• Professor Frink says:

Look at the wikipedia article. You can make legitimate probability distributions for envelope 1 and envelope 2 such that the argument still holds.

• anon85 says:

Okay, but the probability distributions have infinite expectations. So this is no more surprising than saying “let game G have infinite expectation. After playing G, I will give you the option of throwing away your result and replaying G. Since G has infinite expectation but its outcome is always finite, you’ll always choose to replay the game”.

In other words, the paradox lies entirely in the infinite expectation of the distribution (which is admittedly paradoxical, but doesn’t mean that all of probability is wrong).

• Rinderteufel says:

Why can’t you just compare the expected values of the strategies “always switch” and “never switch”? They both compute to 3/2 X (where X is the amount of money in the envelope with less money). Or am I missing something obvious?

• Professor Frink says:

Someone above linked to the wikipedia article on two envelopes, there is a huge amount of literature most of it well summarized in the wikipedia article.

Anyway, the issue is that when you open the envelope you get information about “X.” If you open the envelope and see \$40, then there is a 50% chance the other envelope contains \$20. There is also a 50% chance the other envelope contains \$80.

So before you open the envelope, both strategies have a 3/2 X value (where X is the lower amount). After you open the envelope, switching will get you 5/4 Y (where Y is the value you saw when you opened the first envelope). So why the contradiction?

• Decius says:

Because the game with \$20/\$40 payouts is a different game than the one with \$40/\$80 payouts. Having opened one envelope, you are betting \$20 against \$40 that you are in the \$40/\$80 game rather than the \$20/\$40 game.

• HeelBearCub says:

Actually, I think all this does is let you update your model of the expected value of the GAME, not the expected value of switching.

If I tell you before hand that one envelope has \$1000 and the other \$2000, the expected value of picking is \$1500 (obviously, with no switching).

But if I tell you one of the envelopes has \$1000 in it, there are two equally possible games: one with an expected value of \$750, the other with an expected value of \$1500. The expected value of the game is then \$1125. If I told you this information in advance, and then you picked an envelope with \$1000 in it, you should switch.

When we run the actual game we don’t tell you in advance any information about the game, so switching then is simply equivalent to finding out in advance and then pre-committing to a strategy of either opening one envelope or two.

• HeelBearCub says:

I think the 2nd Smullyan formulation is most salient,

“Let the amounts in the envelopes be X and 2X. Now by swapping, the player may gain X or lose X. So the potential gain is equal to the potential loss.”

When we find out one value, there is an equal probability that the value is X or 2X. Put another way, you need to decide beforehand whether you are going to represent the envelopes as X and 2X or 1/2 X and X. The contradiction comes from mixing the two representations of the basic equation.

• Professor Frink says:

No, it doesn’t. If it were that simple, there wouldn’t be new papers written about this almost every year I’ve been alive.

The formulation you are listing is an accurate calculation before you’ve opened the envelope. But opening the envelope gives you new information, as a good Bayesian once you open the envelope and see \$100 you have to decide whether you are more likely to be in the \$200, \$100 game or in the \$100,\$50 game.

If you are equally likely to be in either game, switching seems like the rational thing to do. The “paradox” is that the principle of indifference is always going to tell you to switch.

• HeelBearCub says:

@Professor Frink:
Can we agree that the truth is that switching envelopes makes no difference and does not increase your expected value?

Can we also agree that, once you open an envelope and see amount A, given that the game is defined as two envelopes containing amounts X and 2X that A is equally likely to be X or 2X?

Edit: And I am sensitive to the criticism that I am not a professional mathematician, just a schlub with a comp. sci. job and a math degree. But I also think there is an Occam’s razor type explanation which is roughly “figures don’t lie, but liars figure”

• Professor Frink says:

Try this: imagine that you know ahead of time there are 3 possible envelope values in the game: \$25, \$50, and \$100. Now, I think we’ll agree that you’ll switch if you see \$25, and not switch if you see \$100? We can even arrange it so that the 25/50 game is exactly as likely as the 50/100 game.

Smullyan’s argument is still valid: before you open, there’s no reason to switch. But after you open you’ve learned something about X, so maybe you should switch, maybe you shouldn’t (depending on what you see), but it’s a new situation. And, if you see \$50, the normal argument holds: you should switch.

The two envelopes paradox is really a question about how to estimate what X is, given the information you receive when you open one of the envelopes.
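Frink’s example can be made concrete. On the reading that there are two equally likely games, (\$25, \$50) and (\$50, \$100), with the opened envelope chosen at random (both assumptions of this sketch), the expected gain from switching, conditional on what you see, works out exactly as he says:

```python
from fractions import Fraction

# Two equally likely games (assumed reading of the example above);
# within the chosen game, each envelope is equally likely to be opened first.
GAMES = [(25, 50), (50, 100)]

def switch_gain(seen):
    """Expected gain from switching, conditional on the amount seen,
    with a uniform prior over the games consistent with that amount."""
    outcomes = []
    for lo, hi in GAMES:
        if seen == lo:
            outcomes.append(hi - seen)  # you hold the low envelope
        if seen == hi:
            outcomes.append(lo - seen)  # you hold the high envelope
    return Fraction(sum(outcomes), len(outcomes))
```

Here switch_gain(25) is +25 (always switch), switch_gain(100) is -50 (never switch), and switch_gain(50) is +12.5, so seeing \$50 still favors switching. No paradox remains, because the prior over X is now explicit rather than smuggled in by the principle of indifference.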

• HeelBearCub says:

@Professor Frink:
It’s interesting that you didn’t answer my “can we agree statements”.

Here is another one, can we agree that the logical implication of the “paradox” is to pre-commit to switch, no matter what value you see?

Now, let’s suppose the envelopes contain checks.

Furthermore, let’s suppose I can open the envelopes in such a way as to keep the “blank” side up. I open envelope 1, don’t look at the check, and then decide to switch. Theoretically, I have made the best possible move.

I open envelope #2 and see it is written for \$50. Let’s now say the game rules say I can now switch again, because I only opened envelope #1 but didn’t look at the check.

As to your version of the game, I don’t think it’s actually the same. If you told me that there were two sets of 2 checks, (25, 50) and (50, 100), and you were going to randomly select a set and hand it to me, then your example works (and isn’t, I think, paradoxical). Then when I opened the envelope I would know I was actually in the middle of a distribution.

But in the real game, there aren’t two different \$50 checks.

• Professor Frink says:

@HeelBearCub. I can’t agree, because I think it depends on the details of how X is generated.

Switching before you see a number is never different than not switching. Switching after you see a number depends on your prior for X. In general, a strategy of
1. if you see a big number don’t switch
2. if you see a small number switch
is probably good.

• HeelBearCub says:

@Professor Frink:
Details of how the number is generated aren’t part of the game.

In fact, as usual, playing the game with dollar figures distorts our intuitions.

Instead we should play the game with “points”, where your job is to beat someone else playing the same game. You don’t get to see them pick their envelope and they don’t get to see you pick yours. Before the game you are not told anything about the values X could take. Your first envelope has a slip worth 10000 points.

Do you think switching improves your odds of winning the game?

• Professor Frink says:

Money doesn’t obscure the point, it lets us anchor the prior (money has meaning in a way that ‘points’ don’t).

Tell me where you disagree with this reasoning: For your example, by principle of indifference we expect that there is a 50% chance the other envelope contains 20000 points, and a 50% chance that the envelope contains 5000 (the pair was either 10000/5000 or 10000/20000 and with no other information we set this equal).

So now switching gives us a 50% chance of losing 5000 or a 50% chance of gaining 10000. So we switch.

• HeelBearCub says:

@Professor Frink:
Let me further specify that you and your opponent are playing with the same value of X. Actually, let me further simplify and just say that your opponent has 1.5X.

The reason I do that is to make the point obvious that if you pick the 2X envelope you win and if you pick the 1X envelope you lose. We could go through the rigmarole of working out the tie scenarios, but this is cleaner.

Knowing that you have an amount A doesn’t tell you anything about whether you have X or 2X. Switching is basically just saying “Ooooh. I hope I have X and not 2X” but there is zero probability that switching will actually change whether A was equal to X or 2X.

And the reason I wanted to get away from dollars is precisely so that you can’t use prior knowledge about how much money you think professors can spend on experiments to give you a bead on what the likely top value for X could be (or any other prior like that, for instance if A was \$0.01, we don’t have half-pennies in the US). That’s not where the supposed paradox lies.

I’m trying to avoid the situation where you think you can raise the expected value of the game by simply pre-committing to the strategy of switching even when the numbers are arbitrary. Sure you can use psychological and other reference points to try and guess at a likely number distribution, but this is not the case in a game where X is actually arbitrary (and this is what the supposed paradox is about, hence expressing the paradox in terms of X).

• HeelBearCub says:

@Professor Frink:
Here is another way of thinking about it.

Suppose I get 1000 subjects, and each of them plays the game once, independently, at the same time, and without knowing anything about the others’ results. I use the same value of X for every game. The envelopes are randomized before being given to the subjects. 500 of them switch when they open the first envelope. 500 don’t.

Do you think that the expected value of the switchers is higher than the expected value of those who don’t switch? Have I violated the rules of the game?
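That thousand-subject experiment is easy to simulate. A sketch (the value X = 100 points is arbitrary, chosen purely for illustration):

```python
import random

def run_experiment(x=100, n_subjects=1000, seed=1):
    """Every subject plays with the same X: envelopes hold x and 2x,
    shuffled per subject. Even-numbered subjects always switch,
    odd-numbered subjects never do."""
    rng = random.Random(seed)
    switch_total = stay_total = 0
    for i in range(n_subjects):
        pair = [x, 2 * x]
        rng.shuffle(pair)
        first, other = pair
        if i % 2 == 0:
            switch_total += other  # switchers end up with the second envelope
        else:
            stay_total += first    # stayers keep the first envelope
    half = n_subjects // 2
    return switch_total / half, stay_total / half
```

Both group averages land near 1.5x (150 points here), differing only by sampling noise: unconditional switching confers no edge.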

• Professor Frink says:

You never answered my questions:
“If I open an envelope and see 10000 points, can I assert that the other envelope contains either 5000 or 20000? If this is true, why shouldn’t I put a 50% on each scenario? Is it really true that there is a Bayesian probability of 4/5 for 5000 and a 1/5 probability of 20000? (as would be true for switching to be EV neutral)?”

Now, in your repeated game situation, imagine that several students picked a number at random and decided to switch if they got below that number and stay if they got above it. Is this a better strategy than always staying or always switching?

Like all probability paradoxes, this is about limitations of the rules of probability. Why is the principle of indifference failing? What is going wrong?

• HeelBearCub says:

@Professor Frink:
“You never answered my questions:”

I did in my OP. Once you open an envelope and see a number A, switching at that point is no different than pre-committing to switch. So no, you can’t compute an expected value of switching. You aren’t actually updating your strategy based on any real information.

You saw a number and switched, but no matter what number you saw, you would have switched, so it’s no different than just opening envelope #2 to begin with. Seeing the number A has no possibility of changing what value X is. It merely tells you some possible values for X.

That is why I gave you the example of running the trial with a single value of X over 1000 independent trials. So it becomes really apparent that switching can’t change the value of X.

And that is one of the ways that using dollar amounts perverts the example. Instead of using a 2X multiplier, use a 1000000X multiplier.

If I get \$1, I should switch, but that has nothing to do with expected value and everything to do with the relative utility of 1 million dollars.

But if they are just points, and not dollars, and you run lots of independent trials using a 1000000X multiplier, all you will find out is that it’s easy to manipulate people by taking advantage of the fact that they don’t work with really large or really small numbers very much.

Make X = 1 million points and no one will ever switch. Make X = 0.0000001 points and everyone will switch. But that’s psychology, not math.

• ams says:

I simulated this using a Calc spreadsheet.

Let’s say the lower dollar amount is a random integer between \$0 and \$5. The higher dollar amount is twice this.

You pick an envelope. You either don’t know the range of dollar values, or don’t care enough to play some more clever strategy like (switch if less than \$5).

If you are in branch 0, switching nets you the higher amount. If you are in branch 1, switching nets you the lower amount.

The average values I’m getting for both switch and don’t switch are converging to something like 3.7 (meaning my distribution must be a bit skewed somewhere). Nevertheless, they both converge to the same amount.
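For what it’s worth, the spreadsheet experiment can be replicated in a few lines of Python (same assumed setup: lower amount a uniform integer from 0 to 5, higher amount double it). The true common value is 1.5 × 2.5 = 3.75, so a sample average near 3.7 indicates convergence, not a skewed distribution:

```python
import random

def simulate(n=200_000, seed=2):
    """Lower amount uniform on {0,...,5}, higher amount twice the lower;
    compare always-switch against never-switch over n independent games."""
    rng = random.Random(seed)
    switch_sum = stay_sum = 0
    for _ in range(n):
        low = rng.randint(0, 5)
        pair = [low, 2 * low]
        rng.shuffle(pair)
        first, other = pair
        stay_sum += first     # never switch: keep the first envelope
        switch_sum += other   # always switch: take the other envelope
    return switch_sum / n, stay_sum / n
```

Both strategies converge to the same 3.75 average, confirming the symmetry argument.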

• ams says:

Here’s where you go wrong in the formulation of the problem: You are using X for the amount in your envelope in both branch 0 and branch 1.

Suppose there is an actual physical state in each envelope: There are two amounts: A high amount H, and a low amount L.

You open an envelope, and find an amount, but you don’t know if it is H or L, so you really haven’t gained any information about the problem!

If in branch 0, the amount in your envelope is L, strategy S (switch) gives you H, strategy !S gives you L.

If in branch 1, the amount in your envelope is H, strategy S gives you L, strategy !S gives you H.

Assuming equal likelihood for either branch, the expected values of both strategies are 1/2 (H + L). The symmetry that should be there is there, as long as you have no way of knowing, upon opening the envelopes, whether what you have is more likely to be H or L.

If you have a dollar range that you know is in there, then you gain information about which branch you might be in when you examine the value in your envelope.

• Decius says:

The envelopes have \$2A and \$A.

Before I take an envelope, I reason: If I have the envelope with \$A and switch, I gain \$A. If I have the envelope with \$2A and switch, I lose \$A. The expected value of the other envelope is \$1.5A, which is also the expected value of my envelope.

Knowing that, I take an envelope. It has expected value B=1.5A and actual value C of 50% \$A and 50% \$2A. I know that the other envelope has an actual value of 50% 2C and 50%C/2, perfectly correlated to the actual value of C. Therefore the other envelope has an actual value D of 50% (2C=2(\$A)=\$2A) and 50% (C/2=\$2A/2=\$A), or an expected value of \$1.5A.

Knowing all of that, I open the envelope and find \$20. Now I know that A has an actual value of 50% \$10 and 50% \$20, and an expected value of \$15. The envelope I selected is below the expected value of 1.5A, or 1.5(\$15)=\$22.50, or (\$10+\$20+\$20+\$40)/4 (the average value of all possible permutations), so I switch. Now I know that I have an envelope that has an actual value of 50% \$10 and 50% \$40, and an expected value of \$25. I know the other envelope has \$20, so I keep the one I have.

The only paradox is that opening the envelope and counting the money always results in the envelope containing less than retrospectively expected.

• Peter says:

I think “No proposed solution is widely accepted as definitive” reduces to “there are controversies in probability theory”. I mean, if you can insist a priori on a proper prior – any proper prior – for the expected amounts of money before opening an envelope, then the problem seems to be OK, but there are controversies over whether you can do that.

• RCF says:

Did you miss the “risk neutral” part, or did you not understand it?

• Peter says:

That sampling distribution point is an interesting one, and feeds into the idea of learning. “Were I to spend time and resources trying to work out whether that lottery is a good one by playing and seeing what happens, would I learn to play it or avoid it?”

Given that we, as individuals, as a culture and as a genetic lineage, have had finite time to learn and evolve, then if there have been things like St. Petersburg Lotteries in our past, if we are well-adjusted then we will have learned not to attach a high value to them. Probably.

• After 100 plays whoever has more money wins.

Maybe I misunderstood your intention, but I think you may have conflated the expected value of the money with the expected value of winning the game. The strategies for maximising the two are different. So I personally don’t think this refutes maximising value being sensible, provided you’re pursuing the correct value?

• HeelBearCub says:

@Citizenseaearth:

The point is (for me anyway) that simply computing expected value is not enough. Grant funders don’t get to fund an arbitrarily large number of grants, so you can’t maximize on the expected benefit of grants alone.

• Grant funders don’t get to fund an arbitrarily large number of grants, so you can’t maximize on the expected benefit of grants alone.

Apparently, I still don’t follow… why? Do you mean because the marginal value of each dollar is different (a billion dollars isn’t 1000x a million dollars in terms of value?), or something else that I’m not getting entirely? It seems to me that maximising expected benefit of the grants is exactly what they’re doing, even if there are risks involved in doing so?

• HeelBearCub says:

Because grant funders need to show results from their activities in the near term.

If a grant-funding body gives 500 grants a year, all to 1-in-a-million shots, the fact that they don’t “expect” to see any positive results until at least 1000 years from now won’t save them. After year 5 of “wasted” money (2500 grants with only failure to show), they will lose their ability to get new money, or their jobs, or what have you.

• Ok, thanks for the reply. I see what you mean, but I took this: “Your organization is risk-neutral to a totally implausible degree” to mean we were ignoring that. Or to put it another way, doesn’t maximising expected benefit mean taking that sort of thing into account anyway? I take “benefit” to imply that.

• HeelBearCub says:

@Citizenseaearth:

You may not ever see this, but …

I took stargirl’s point to be, essentially, that arguing an organization is risk-neutral to an implausible degree makes no sense as it ignores the actual expected returns of the strategy, which are negative.

You only get higher expected returns if you get to play for an absurdly long time. Just claiming that a group is risk neutral doesn’t mean that they get to play for that long of a time.

• grillerman says:

One thing is often left out of the discussion: that the other person might be adversarial and try to trick you. It’s easy to cheat with almost-zero probabilities by quietly fixing them to exactly zero.

All our intuitive gut-level decisions factor this risk in.

Simply put, a “bad guy” could offer the game “\$11000 with probability .001” (EV = \$11 per play) for a given price. Then you play 10 times and every time he cheats by claiming/forcing that you lose (using some unknown mechanism). That would be a plausible outcome, so he can’t really get caught based on the results alone.

This alone can predict aversion to extremely low probability but high reward situations. The diminishing value of money contributes, but it’s not the only thing.
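The arithmetic behind that: with an honest win probability of 0.001 per play, ten straight losses are the overwhelmingly likely honest outcome anyway, so a cheat who forces them is statistically invisible.

```python
p_win = 0.001                      # honest probability of the $11,000 payout
ev_per_play = 11_000 * p_win       # about $11 of expected value per play
p_ten_losses = (1 - p_win) ** 10   # chance an honest game loses 10 times running

# p_ten_losses is roughly 0.99: the cheater's forced losing streak
# is indistinguishable from ordinary bad luck.
```

You would need thousands of plays before the absence of any win became suspicious, which is exactly why low-probability high-reward offers are so cheap to rig.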

• TrivialGravitas says:

You don’t need a big reward or zero probability to pull that off; lots of small-value prize games (think carnivals, claw machines) have win rates fixed lower than normal. Never impossible to win, but way harder than you’d think.

7. Real Bob Dole says:

There are two big problems here, both of which the OP neatly sidesteps.

The first, the one that comes closest to being addressed, is the problem of using made-up numbers in statistics. As common sense would indicate and your doctor example shows, it’s actually not a very good way to make decisions. Not just because your numbers will be atrocious, although they will, but because you’re bypassing your relatively quite reliable intuition in favor of a system relying on those atrocious numbers.

The second problem that isn’t being mentioned is using expected value calculations with your made-up probabilities and a made-up value for the return. This is problematic for the same reason as the first, that “garbage in, garbage out” will make a hash of your decision making. But it is doubly problematic because it lets you play with the numbers and get literally any possible result out of it with very little effort. After all, I’m probably not a genie who grants wishes in exchange for blowjobs; that’s probably at least one-in-ten-million odds… but since one wish from a genie is worth Avogadro’s number in dollars, you’d still be a fool not to take me up on my generous offer!

If your decision-making system spits out “pay money to some guy with no track record or accomplishments whatsoever on the off chance he immanentizes the eschaton”, then it is a faulty system and should be discarded. Even if I am in fact a BJ-powered genie, the reasonable choice given the information you have is to laugh and walk off.

8. discursive2 says:

Probabilities are useful in repeatable situations where you are trying to maximize a quantifiable outcome, and your concern is what happens in aggregate, not in any individual case. So, the Director of NSF example is a perfect example of where probabilities are useful, because the director makes tons of grants, and cares about the performance of the portfolio rather than any one grant.

The correct cognitive tool for situations where there is a singular, all-important occurrence is values. For the alien appearance, do you value peaceful co-existence, or are you xenophobic and committed to self-defense at all costs? For “should I invest for retirement, even though the world might be destroyed by the time I retire?”, do you value living for today and taking your chances, or do you value maximizing your odds of financial stability?

• Eli says:

Uh, no, you’re basically denying the existence or usefulness of Bayesian probability here. You can have a mental model where the execution-trace frequencies or outcome frequencies when sampling from the mental model are the assigned probabilities, and that’s what a degree of knowledge is.

• discursive2 says:

Yes, I am denying the usefulness of Bayesian probability here 🙂 We are discussing cases where your mental model is bad, and you have one chance to get it right. I’ll press on the retirement example, since I think it is the strongest example. I have no idea if the world will still exist when I try to retire. Sure, I can sample multiple times from my mental model and come up with a number, but that number says more about me than it does about the real world.

If I try to use probability and expected outcome maximization to decide how much to save for retirement, I can make the answer come out however I want it to come out, by varying my assumptions within the parameters of my uncertainty. So it doesn’t help me.

What does help me is statements like “hmm, you know what, the world might or might not exist, but if it does, I certainly want to have enough money to retire on, and I don’t care too much about saving a bit now”. That statement of values gives me a very clear answer to the question. Alternatively I can say “I plan to be as scrappy when I’m 80 as I am now, so let’s live it up, and if I’m broke and old, well, that could be fun too!” Which again gives a very clear answer. And of course positions in the middle like “let’s save enough so I’m not totally screwed”.

In other words, thinking probabilistically is less useful than thinking in terms of values for making a practical do-it-or-don’t decision, at least under extreme uncertainty and one chance to get it right.

• LTP says:

Not only is it not useful in practical terms, but I wonder if it is not even really sensible to even try to attach numerical probability to subjective gut feelings. The part of my brain that produced the subjective feeling isn’t the same part that has been taught mathematics or thinks quantitatively. The part that knows math is at best making a vague analogy between a subjective gut feeling and numerical probability that cannot be checked by others, nor objectively measured in a numerical way. It strikes me as being overconfident in math and our ability to apply it to things that are difficult or impossible to quantify.*

*There really needs to be a version of the word ‘scientism’ but applied to math. Unfortunately, mathematicism doesn’t really roll off the tongue.

• shemtealeaf says:

I don’t think you can avoid the necessity of a probabilistic estimate by looking to values. Even if I’m very risk averse, I have to choose between saving money to avert the risk of being broke when I’m old and spending money on things like better health insurance or safer cars to avert more pressing risks. How does my value of risk aversion allow me to decide where to allocate my money if I’m not making some probabilistic estimate of how likely I am to actually need my retirement funds?

• discursive2 says:

“Risk aversion” (your phrase, not mine) is a way of translating value-language into quantifiable-language. Yes, once you make that translation, you can then try to compare spending money on a better car now vs saving for retirement later, and you’re stuck again trying to calculate expected values when your model is hopelessly inaccurate.

What I would say instead is “I want to have a nice retirement, and I want to feel safe about the car I drive, and I’m okay going to movies less”. There, you’ve made a decision and you’re happy.

• Muga Sofer says:

Both of those are maximizing a definable outcome over a series of choices – dollars and lives, respectively. Values aside, which strategy leads to a better outcome?

9. 578493 says:

Good post, though honestly I’m not sure that I fully understand the opposing side, because it seems quite obviously flawed in the ways you point out. Hopefully this post will prompt some interesting counter-arguments. Anyway, my real reason for commenting is to register my appreciation for this line:

a study by Bushyhead, who despite his name is not a squirrel

• LTP says:

“I’m not sure that I fully understand the opposing side”

I get the feeling that Scott and his critics are talking past one another to a degree, and it may just come down to incompatibilities in worldviews. As Scott presented them, they seem to have a weak case, but I’m not sure Scott’s summary of the other side was super charitable.

• 578493 says:

That’s certainly possible, but I’ve observed some of the tumblr discussion, and my feeling is that Scott has done a pretty good job of explaining his position clearly and non-evasively — so when he and his critics talk past each other, I tend to blame the critics. (Of course, the version of me with different preconceptions is probably observing the same exchanges and coming to the opposite conclusion, but, well, fuck that guy :p)

10. onyomi says:

I think that failure to think in this way makes most people choose option A to an irrational degree, though probably a few starry-eyed types choose option B to an irrational degree, which may balance it out. But generally I’ve noticed that in hiring decisions, investment decisions, and deciding whether to make an experimental movie or another sequel, people overwhelmingly choose the known.

A lot of this is probably just risk aversion, but I also think people sometimes aren’t thinking in terms of risk-reward ratios this way (that I should give money to the desalination plant if it has even 1/1,000 the chance of working of the less daring proposal). I think people frequently, if pressed, would say the desalination idea has 1/100th the other idea’s probability of working, yet still give money to the other idea.

11. Sebastian H says:

I’ve been reading this post and the related threads before it, and something has been nagging at me.

MIRI and the related enterprises appear to me to have a lot of the markings of an apocalyptic cult: lots of hyping scary scenarios, weird charismatic in-group/out-group features, and appeals to please give them shit-tons of money without accountability on how doing so will actually help avoid the apocalypse.

How high do you believe the chances are that it is a straight up scam?

Do you believe that chance is higher or lower than the chance that their outlined program is likely to help avoid the AI apocalypse?

If those two chances are anywhere near equal (or if the scam chance is higher), wouldn’t the proper response be to find 1,000 things to invest in, rather than invest 1,000 times as much money in MIRI?

I wonder if the intuition that people who talk like MIRI is talking are often scamsters is underlying some of the pushback on pascal-mugging types of appeals to probability.

• Anonymous says:

I get the sense that sooner or later someone involved with the “movement” is going to try to blow up a research lab. I feel a little bad for thinking that, as I haven’t seen anyone suggest it, but there’s just something about the rhetoric.

• Deiseach says:

Oh no, I don’t get the animal rights activist vibe from MIRI (give them that). I think the problem for any would-be bomber would be identifying an actual research lab to bomb (Google? They’re working on AI, correct?)

How far would a lone bomber get trying to blow up part of Google’s campus? Not very far, I imagine. And really it would have to be a lone guy who was much too invested in doomsday scenarios to take such action. He might be reading up on MIRI and others, maybe even donating, but I think if someone is going to do a bombing run, then they’re not convinced that research into the risk is enough.

• HeelBearCub says:

“How far would a lone bomber get trying to blow up part of Google’s campus? Not very far, I imagine.”

Ummmm, depending on what they wanted to do? Very far. I don’t really want to go into all the reasons for that, but there is ample evidence that creating bombs is cheap and reliable.

Specifically target AI researchers? Lots harder.

• Harald K says:

“How far would a lone bomber get trying to blow up part of Google’s campus? Not very far, I imagine.”

I think the odds that there will be a guy like this and he’ll succeed, are better than the odds that AI risk of the sort MIRI worry about is real and that MIRI can do anything about it.

The more you fear AI due to fundamental uncertainty, the less faith you should have in MIRI to help you too. It’s hard to prevent something when you only have the vaguest ideas of how it could happen.

• jaimeastorga2000 says:

I’m reminded of that time “How would you stop Moore’s Law?” was mistaken for terrorist advocacy.

• Anthony says:

I would put the chance of MIRI being a straight-up scam at <1%. I haven't poked too hard into their stated program or their arguments, but I would place the chance of them being actually effective at 1% < p < 5%. (And the chance that something like their program is actually necessary at 1% < p < 10%.) So if I look at a 5% chance of preventing a 10% chance of killing 20 billion humans somewhere around 100 years in the future, relative to other future and current risks, I don’t believe their argument is nearly so compelling as they think it is.

ETA – returning somewhat to the meta argument, my problem with MIRI’s attempt to fundraise among EAs is that they’re using a bad model to inflate their importance. If you accept their 3^^^3 humans lost, then even if all the odds are much lower than I guesstimated, you should still donate to them. (Or to the Society for the Prevention of the Solar Nova.) But if you don’t accept that part of the model, they look like they might be worth donating to, but not as THE MOST IMPORTANT CAUSE IN THE WORLD.

“Accept their 3^^^3 humans lost”

Where do they make this claim?

Also, can we get a list of all of the most outrageous claims ever made by MIRI which were not later recanted? So that, you know, we just know what they are, instead of having to have arguments over whether somebody actually ever made the claim that MIRI would save 3^^^3 lives.

• Anthony says:

So I’m exaggerating for effect. But Bostrom puts the lower bound at 10^16 human lives for X-risks. That 10 quadrillion really outweighs almost anything else in scope, and washes away very very low probabilities – there’s only a one-in-a-million chance that your donation to MIRI will be effective? That’s still saving 10 billion lives!
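The arithmetic behind “one-in-a-million chance is still 10 billion lives” checks out; a quick sketch using Bostrom’s stated lower bound:

```python
# Bostrom's lower bound on lives lost to an existential catastrophe,
# multiplied by Anthony's hypothetical one-in-a-million effectiveness.
lives_lower_bound = 1e16      # 10 quadrillion human lives
p_donation_matters = 1e-6     # "one-in-a-million chance"

expected_lives = lives_lower_bound * p_donation_matters
print(f"{expected_lives:.0e}")  # 1e+10, i.e. ten billion lives
```

This is exactly the “astronomical stakes wash away very low probabilities” move that the rest of the comment pushes back on.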

However, if you accept Bostrom’s (and presumably MIRI’s – I haven’t dug into all the connections, and don’t care to) counting of lives which never exist as having moral value, even if not the same value as lives lost, one still needs to consider the possibility that MIRI’s research accidentally increases the probability of UFAI wiping out the human race.

• Stuart Armstrong says:

You don’t actually need those 10^16 lives to care about AI risk – I reject total utilitarian arguments (see my long Less Wrong rant against them: http://lesswrong.com/lw/d8z/a_small_critique_of_total_utilitarianism/ ) but the extinction of intelligent life on earth is bad enough to worry about, at likelihood levels that are not absurd.

MIRI is not a scam; they’re an organization that has done extraordinarily precise mathematical research in the area. That means they take money and have produced a product. You can argue about the value of the product versus the value of the inputs, but that puts them well into “assess investment decision” territory.

As for the chance of making things worse... well, their prime focus is on keeping goals stable under self-improvement, rather than having them wander randomly and unpredictably. This seems useful rather than pernicious.

• Professor Frink says:

I don’t think I’d describe MIRI as “extraordinarily precise” research. There are maybe one or two mathematical results, but the overwhelming majority of the papers listed on their website are not technical. Most of the technical research hasn’t reached the “result” stage.

So there is a lot of philosophical speculation, a bit of handwavey, not-yet-formalized stuff, and maybe two arXiv-ready mathematical results that are no more (or less) precise than any other mathematical results.

Yes, they’ve produced some product, some of which received some attention (HPMOR was very popular, for instance), but let’s not overstate the case.

• FeepingCreature says:

HPMOR is not a MIRI product any more than Carmack’s port of Wolf3D to Haskell is an Id Software product.

MIRI mainly produce interest and technical writeups.

• Jiro says:

HPMOR was meant to teach rationality, in a context where it is assumed that being taught rationality leads to belief in the goals of MIRI. The fact that people who read HPMOR don’t actually end up believing in AI risk is a failure of HPMOR as a tactic, and doesn’t imply that that’s not what was intended.

(It’s also another example of the phenomenon I’ve described before, where some of the things surrounding LW are meant to inspire rationality, and fail because they actually inspire rationality but the intended goals aren’t as connected to rationality as their proponents think.)

• TheAncientGeek says:

Do the extraordinarily precise results include the AI risk calculation I keep asking for?

• Deiseach says:

The problem is that there are so many tangled threads. You’ve got the sane AI risk proponents, who argue that the increasing complexity of technological development, and how deeply it is embedded in our current industrial civilisation, means ever greater and more likely potential for things going wrong: there is a chasm between what Joe thinks he means when he gives the voice command to Market Bot to sell all shares in Mellicent plc, and what Market Bot interprets it to mean. Market Bot takes the command literally (“sell all shares”, not just the shares Joe owns) and screws up the entire system by trying to sell every share of Mellicent plc, including those owned by other individuals and large stockholders.

I think most people agree that being aware of the pitfalls of putting all our eggs in the basket of machine intelligence as an agent permitted to replace human decision making is a good idea, and that careful consideration and working out models of how not to do it is worthwhile.

Then we get the Utopians like Bostrom and I’m going to quote that chunk of paper here:

However, the relevant figure is not how many people could live on Earth but how many descendants we could have in total. One lower bound of the number of biological human life-years in the future accessible universe (based on current cosmological estimates) is 10^34 years. Another estimate, which assumes that future minds will be mainly implemented in computational hardware instead of biological neuronal wetware, produces a lower bound of 10^54 human-brain-emulation subjective life-years (or 10^71 basic computational operations) (Bostrom 2003). If we make the less conservative assumption that future civilizations could eventually press close to the absolute bounds of known physics (using some as yet unimagined technology), we get radically higher estimates of the amount of computation and memory storage that is achievable and thus of the number of years of subjective experience that could be realized.

Even if we use the most conservative of these estimates, which entirely ignores the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives. The more technologically comprehensive estimate of 10^54 human-brain-emulation subjective life-years (or 10^52 lives of ordinary length) makes the same point even more starkly. Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilization a mere 1% chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.

And that is “flying car” territory, which makes it difficult to take the warnings of AI risk seriously. When you are using phrases like “some as yet unimagined technology”, I don’t care how nicely you’ve worked out the sums, you are writing science fiction, not a mathematical postulate (though to be fair to Bostrom, he’s a philosopher and not a mathematician, so what he’s aiming at here is “I clarify the concept of existential risk and develop an improved classification scheme”).

Why do I say “flying car”? Because this was the symbol of the shiny new future of technological discovery and growth, and the progressive society run on sound scientific principles; a symbol that we more or less laugh at, nowadays. People thought they had good reason to assume that, as the progression was that animal and human labour-power was replaced by steam-power and the internal combustion engine, so the progression would continue for ground vehicles to be replaced by air vehicles for commercial and personal use.

We have a different view, and for us it’s “obvious” what the problems with this would be and why it could never get off the ground.

So “one day we’re gonna have super-duper computers smarter than people running everything and we’re gonna live forever in our new uploaded mind state and technology is gonna solve all problems and quadrillions of future humans will all be healthy, educated, have money and leisure time, and nobody at all anywhere on the planet is gonna be poor or sick or working minimum wage scut labour” is “in the future we’re all gonna have flying cars!”

If the AI risk people stick with what we know and already have in hand and extrapolate in a reasonable manner from that, there’s no reason to object. Starry-eyed utopianism and grim-faced dystopianism are both extremes that are not helpful.

And the argument over how to calculate probabilities is a bit of a red herring; we’re getting bogged down in the mathematics, which is a fascinating theory of its own to be considered separately. But refining to seven decimal places which confidence interval we should be using, when you’re discussing “And once I find the end of the rainbow, I will get the crock of gold from the leprechauns”, is not – I submit – the point at issue here.

• Anthony says:

@Stuart Armstrong – AI risk may be something to worry about, but there are many, many things to worry about. The question is where will my donated dollars do the most good. By assigning moral value to people who may never exist, MIRI is greatly exaggerating the effectiveness of dollars donated to them in comparison to other possible donation recipients, even assuming that they are honest and have a chance of being necessary and effective.

• Anonymous says:

FeepingCreature says:

HPMOR is not a MIRI product any more than Carmack’s port of Wolf3D to Haskell is an Id Software product.

MIRI mainly produce interest and technical writeups.

Someone posted this link a few posts back: http://lesswrong.com/lw/di4/reply_to_holden_on_the_singularity_institute/

It’s by Luke Muehlhauser, then the Executive Director of the Singularity Institute. It’s a reply to a critique of SI by Holden Karnofsky, then Co-Executive Director of GiveWell.

Here are some examples of projects that SI is probably better able to carry out than FHI, given its greater flexibility (and assuming sufficient funding):

* A scholarly AI risk wiki written and maintained by dozens of part-time researchers from around the world.
* Reaching young math/compsci talent in unusual ways, e.g. HPMoR.

SI has done a decent job of raising awareness of AI risk, I think. Writing The Sequences and HPMoR have (indirectly) raised more awareness for AI risk than one can normally expect from, say, writing a bunch of clear and precise academic papers about a subject. (At least, it seems that way to me.)

This one is pretty easy to answer. We’ve focused mostly on movement-building rather than direct research because, until very recently, there wasn’t enough community interest or funding to seriously begin to form an FAI team. To do that you need (1) at least a few million dollars a year, and (2) enough smart, altruistic people to care about AI risk that there exist some potential superhero mathematicians for the FAI team. And to get those two things, you’ve got to do mostly movement-building, e.g. Less Wrong, HPMoR, the Singularity Summit, etc.

Three times, HPMoR is explicitly claimed as an accomplishment of the Singularity Institute, now the Machine Intelligence Research Institute.

They may want to walk that back now, but it’s there in black and white.

• TheAncientGeek says:

I think they are sincere, but have cultlike features because cultishness is an attractor.

• anodognosic says:

*yawn*

Beyond the affect-words “cult” and “scam”, this is every criticism of MIRI I’ve seen in this comment section for the past two weeks, except less substantive.

• Aaron says:

OK, how about this modest criticism? MIRI, by focusing on an extremely unlikely occurrence in the far future, takes the focus away from much more likely near-term problems. I admit it’s probably a very small effect, but I still claim it holds.

By unlikely I mean that for a GAI to actually represent an existential threat first there must actually be a GAI, then it must have the means to execute this threat, and then it must have the will to do so. All of these are, in my opinion, unlikely in the extreme (at least in the near term.) Scott once quoted AI researchers on their view about the timeframe of GAI, but given the history of AI prognostication since the 1960s their guesses carry no weight.

A more pressing and real problem, in my view, is that widespread automation creates highly complex systems on which we are fully dependent but which we cannot control. HF/algorithmic trading in the stock market is just the beginning. Add an automated air traffic control system, automated vehicle/truck routing systems, automated power distribution systems, etc. At some point these systems will be highly interconnected, meaning that small changes can have unpredictable and disproportionate effects. They will likely gain some ability to self-modify (a kind of limited intelligence), making them even more uncontrollable.

I’m not really against this type of automation but I see potential danger in it. How much? It’s hard to say. My point then is why invest energy in the unlikely event of a GAI threat when there are more real threats on the horizon?

12. kz says:

“Making a decision implies a probability judgment” isn’t quite what I would say. “Making a decision implies a decision-making procedure” is better.*

What decision-making procedures work well? This depends on specific facts about our minds, the decisions we face, and the world they take place in. The people you’re arguing with probably disagree about these and related empirical questions. I think it would be more useful to talk about those than to talk about hypothetical risk-neutral funding organizations facing a binary decision with binary outcomes which can be cleanly evaluated in commensurate terms.

*(There’s still room to disagree — to what extent should we even talk about an abstract decision-making procedure, something that can be applied generally, beyond that particular decision and person?)

• discursive2 says:

Yes, exactly. I think the OP’s argument implicitly assumes a consequentialist approach to decision-making, because if your decision procedure is consequentialist, then deciding anything means you have an implicit set of quantifiable expected outcomes, even if your numbers are extremely fuzzy. However, if you have a non-consequentialist decision-procedure, a decision does not necessarily imply that you have expected outcomes.

Personally I see consequentialism as one cognitive tool among many, not as my primary decision standard. I think its usefulness depends on a) how confidently you can predict consequences and b) how easy it is to convert the consequences to common currency ($, pain, loss of life, etc.).

The thing that started this — AI risk — is an example of where a + b are both weak: no one really knows how the AI thing will play out, and the consequences at stake are all over the map. That’s why there’s divergence here: people like myself see this and say, “okay, consequentialism is the wrong tool to think about this problem”, and other people see this and say “well, I’m a consequentialist, so I better figure out a way of assigning reasonable probability estimates”.

• 27chaos says:

I’ve never seen a moral theory that truly avoids predictions. Deontologists willing to say that it’s wrong to aim a gun at someone and pull the trigger are making an implicit prediction that the gun will work and fire a bullet that will hurt someone.

• discursive2 says:

Agreed. Any decision-making involves some kind of expectation about what the consequences of the decision are.

So I guess what I’m trying to say is that for non-consequentialists, you don’t need a totalizing model of reality where all possibilities are accounted for and given a probability (even if it’s “and then something completely unexpected happens”: 1% chance). It’s sufficient to think, non-quantitatively, “that bullet will likely kill that guy”… a deontologist doesn’t have to worry about exactly how likely that is to be true, just that that’s the intended outcome.

I *think* that’s still a disagreement with Scott’s post, though I’m not sure exactly what position he thinks he is arguing against.

• 27chaos says:

I don’t agree that deontologists don’t have to worry about the specifics of probabilities. What makes you say that? I think deontologists would say it’s bad to fire a gun that has a 40% chance of killing someone and 60% chance of doing nothing, so mere likelihood doesn’t suffice.

13. Pku says:

Not directly related, but regarding AI risk: I think that (unlike with, say, global warming) AI risk will be easier to fight when we’re closer to having actual AI and have some more experience of how it works (as I understand, MIRI focuses on the least likely versions of AI and how to stop them, because those are the only ones that we can really try to understand without having more data). This implies that you’d be better off saving the money and donating it to AI risk aversion research later, when we have more data. (As opposed to, say, third-world charity, where diminishing returns means you might be better off donating now).

(Random aside: as an Israeli, I resent the allegation that we would spontaneously launch rockets at aliens. Unless maybe they joined the Arab league.)

14. In the case of the aliens, I would say: yes, as President, and despite the massive Knightian uncertainty, you need to make some decision. And as you correctly point out, not doing anything is itself a decision—very likely a bad one. Indeed, the need for a decision in the face of Knightian uncertainty is precisely why books or movies about executive decision-making (e.g., Kennedy during the Cuban missile crisis) can be riveting. Yet it’s far from clear in such cases that thinking in Bayesian terms will actually lead to a better decision (e.g., what if, as seems likely in practice, the President called on several teams of the world’s finest analysts to estimate probabilities for various scenarios, and not being Aumannians, they came back with totally different numbers? while their analyses might provide some insight, in the end it would still come back to Knightian uncertainty and the President’s gut). You yourself made this point beautifully, when you talked about the doctors you studied for your thesis—but then you added that thinking in explicitly Bayesian terms would help more if you were, e.g., a foundation director making thousands of grants. OK, sure, but notice that the latter case is one where “ordinary frequentist statistics” would also work well! And in my experience, this is an extremely common situation: yes, explicit Bayesianism is a clear, excellent methodology that everyone should master (including doctors). But the cases where there’s a clear enough choice of prior for it to give you sensible answers, tend also to be the cases where non-Bayesian statistical approaches would also have worked fine. (Though the Bayesian approach might still “win” because of its greater clarity and generality.)

In the case of the blue-sky desalination research—as someone who has occasionally had to help decide about NSF grants, I can tell you what consideration I’d use; interestingly, it’s something your post never mentioned at all. I would say: I can’t even estimate the probability that the desalination method will work; I have no idea whether it’s greater or less than 1/1000; I have no reliable reference class of ~1000 similar proposals that would permit me to judge. Even so, yes, I do need to make a decision, and I might well decide to fund the blue-sky proposal over the more conservative one. But the key question for me is this: am I reasonably confident that this blue-sky research will at least contribute to the growth of knowledge? That if it fails, we’ll learn something interesting from the failure? That others will be able to build on it later, as the Wright brothers built on the failures of earlier aviators?

FWIW, David Deutsch, admired by many in these parts for his staunch MWI-advocacy :-), has written a lot on the theme of scientific progress being better thought about in terms of the growth of knowledge and the discovery of better explanations, rather than updates to Bayesian probabilities. (Yes, we can always rephrase things in Bayesian terms, but as with any other choice of language, notation, or formalism, before doing so we ought to ask ourselves: will it help?) See, e.g., his book The Beginning of Infinity.

• Pku says:

So the conclusion from this is that the primary property we need in a president is really good gut-feeling intuition. Maybe we should put that into the debates somehow (maybe ask them bizarre hypothetical physics questions they have to answer in five minutes?).

• Eli says:

So the conclusion from this is that the primary property we need in a president is really good gut-feeling intuition.

No, that’s just cherry-picking alien invasions out of the set of all possible policy problems. In real life, most issues a President will face have been studied by policy wonks, and the President should damn well listen to the policy wonks.

• Pku says:

I don’t think so. Even in the aliens case he had policy wonks (if wildly inaccurate ones). In most situations, it seems like his job would be to use his gut feeling to decide which policy wonk to listen to. His final decision should be somewhere in the convex hull of the policy wonks’ suggestions, as they’re the ones who’ve already done the research, but once you’re in there it seems like it mostly comes down to gut feeling.

(Fun aside about Ronald Reagan, according to a talk one of his staff members gave here: apparently, as president, he was old enough that he couldn’t function properly without eight hours of sleep. As a consequence, his advisors would, on occasion, tell him to skip the report and just get some sleep. Which was nice, except in the cases where he just wound up watching a late-night airing of The Sound of Music, which frustrated them to no end.)

• DavidS says:

Well, the implication is that the test shouldn’t be ‘bizarre hypothetical physics questions’ but ‘be given a complex briefing by conflicting experts and interest groups and make a decision on the basis of it’. Which, while not in the President Exam, is basically what politicians have to do all the time to get ahead. As I guess do most people in executive positions where their role isn’t super well defined.

Incidentally, I see no reason why it would make sense to ask physics questions specifically, as most politicians aren’t physicists and I see no reason why physicists would be better politicians.

• Well, sure, I’d like a President with “really good gut-feeling intuition”—including an intuition for when the right thing to do would be to hand off a decision to a team of Bayesian statisticians or other experts (which presumably rules out, e.g., George W. Bush), but also an intuition that continues to function well even when the experts disagree or can’t reach a verdict. Does that amount to anything different than saying I’d like a President who will make right decisions rather than wrong ones?

• Pku says:

I think so, in the sense that it might pay off to examine candidates’ intuitive gut response to various complicated hypotheticals over looking at their long-term track record as senators or asking their opinions on general issues (someone with a good gut feeling on specifics might end up giving better responses to specific problems than someone who’s bad at it but whose broad immigration or economic policy is closer in line with mine).
(BTW, I just read your book last week, it was awesome).

• Glad you liked my book!

I actually love your idea of giving presidential candidates bizarre hypothetical situations during the debates, and seeing how they respond to them. I think it would make for much more interesting and informative debates than we get today. Debate moderators who might be able to pull this off: Sacha Baron Cohen, the Mythbusters guys, Scott Alexander… 🙂

(But I’ll note that, particularly in foreign policy, presidential candidates are reasonably often given hypotheticals—“would you go to war with Iran if they did X?”—but they still find ways to wiggle out of saying anything clear. Maybe the key is to present them with a richly-detailed scenario, and make it clear that they’re being asked to roleplay what they would do, rather than just talk in the general vicinity of what you said.)

• Jiro says:

Politicians don’t like to answer hypotheticals because no matter what they answer, someone will disagree with them and reject them.

In order for politicians to be willing to answer hypotheticals, evading questions would have to reduce the politician’s popularity as much as the combination of (reduction in popularity from people who disagree with the concrete answer + increase in popularity from people who agree with the concrete answer).

And even that won’t work, because some positions are less popular and some more popular. So even if the average effect of evasion is the same as the average effect of disagreement+agreement, the politician would still find it useful to evade all the questions where his answers would be unpopular. In order to avoid *that*, people would have to treat evading questions, not like the *average* of agreement and disagreement, but like the *worst case* of agreement and disagreement.

(Note also that natural selection applies here. As long as being truthful about your opinions is bad for your popularity, any politician who did that would be selected out of the system and only politicians who evade would remain.)

• Jordan D. says:

I’m with Jiro- the essential problem is that a candidate has a very high chance of being hurt by their answer to a hypothetical and a much lower chance of being helped. Realizing this, candidates try very hard not to answer any hypotheticals at all, and instead stick to announcing broadly-popular themes without too much discussion in the weeds.

But fixing this is hard because even if you start your own debate which is tailor-made to be hard to dodge and interesting, the candidates will just not show up. They negotiate terms and themes of the debate with the hosts ahead of time, after all!

So it seems like there are only two ways to get to a place where candidates will feel that they have to attend your debate:

1) Take over all media and reign as Newsking Supreme
2) Be a very prestigious organization and only offer endorsements to candidates who participate in your debate-cum-survival-maze-challenge-cum-swimsuit-contest.

• DavidS says:

I think in practice it’s not either-or. You would always get ‘expert’ views and then make your decision based on them. If all the experts had a consensus, you’d obviously usually agree.

I put ‘experts’ in scare-quotes and say you’d ‘usually’ agree because all of this is about how to make decisions, assuming a clear sense of objectives. In practice, you might ask experts how to achieve some specific objective, but their consensus solution might break your other objective of ‘staying in power’ because it would be unpopular. And also because consulting the ‘experts’ might sometimes specifically mean consulting people expert in making canny political decisions that win you the next election. So you have to look at goals as well as decision-making systems.

I’d hope that alien invasion would bring together politicians’ incentives with those of the country, but there’s always the chance the aliens would follow the ancient Imperial model of ruling through co-opting local elites (Romans in Britain, British in India, President Baltar…).
So perhaps the President would be trying to meet the aliens to explain that they’d get stuff out of Earth much more efficiently if they used the current government infrastructure and let him stay in nominal power. And, crucially, not get death-rayed.

• Froolow says:

Just to follow up on this interesting off-topic tangent, I have an idea for the Best of All Possible Presidential Debates.

Round One occurs before the debate even begins. A complex decision problem is described to the candidates a week beforehand (published in media outlets) and they are allowed to use any resources they wish to frame a response. They are given a five-minute slot at the beginning of the debate to show a pre-recorded (i.e. infographics allowed) presentation on their proposed response. The responses would obviously be shown in a randomised order depending on which station was transmitting your coverage.

Round Two would be another complex decision problem, described at the start of the debate. Ideally, this problem would be ‘time limited’ in some way (i.e. “…and the UN demand a response in half an hour”). The candidate’s advisors are sent away for half an hour to consider the problem while the candidates take part in a more conventional debate.

Round Three is the more conventional debate. As well as policy questions, the candidates are asked problems from moral philosophy. Questions should be chosen to try to demonstrate differences between candidates, but not ‘gotchas’. If one candidate is vegetarian, it isn’t fair to pick lots of questions designed to cause problems for meat-eaters, but if neither candidate is (publicly) an effective altruist it would be fair to grill them on this.

Short break while candidates are briefed on their wonks’ answers to the problem posed in Round Two, followed by candidates selecting their preferred solution and delivering it to the moderators / public. The moderators can then ask a few questions about the detail, which the candidates must try to answer (either directly, or by asking a wonk to brief the moderator). This takes place simultaneously in soundproofed rooms, with the order of transmission determined randomly.

Then the swimsuit round.

The intuition behind this is that a conventional debate is very bad at determining who is best at what a President is supposed to do (listen to policy wonks about problems the Executive Branch is supposed to take an interest in, then pick the most appropriate response), and very good at determining something almost irrelevant to the Presidency (ability to memorise precanned sound bites). This Best of All Possible Debates sidesteps this by forcing candidates to take the ‘listen to policy wonks’ bit seriously, and limits the effectiveness of precanned sound bites by opening up the whole of moral philosophy as an in-limits topic.

• Nornagest says:

I like this plan, but it seems like it’d be very prone to subtle forms of abuse.

• 27chaos says:

Why would the average person who hates CSPAN want to watch this?

• Eli says:

Bayesianism is a very good way of doing statistics, but when you try to turn it into philosophy, it turns out to be only as helpful as every other kind of philosophy. Ultimately, if you could state a well-tested formal solution to your problem, you wouldn’t be philosophizing at all.

Or, in much more mathematical terms: phrasing your models in Bayesian terms does absolutely nothing to get rid of out-of-model error (where the truth just doesn’t lie in your hypothesis class), even if it’s a great statistical technique for minimizing expected in-model error relative to the data you actually have.

• BR says:

Trying to build off this, it seems to me that the claim that “if you need to make a decision, you need to use probabilities” is too strong, since you could pick any decision-making strategy and plug it in to make a similar claim: “look, Mr. President, we know there are a lot of variables here, but we HAVE to make a decision, so we’re going to need to go to eeny meeny miny moe”. To justify the use of probabilities you need to argue for their effectiveness, and that is what I think critics are getting at: they are saying that for highly complex decisions regarding relatively far-future events, the effectiveness of probability models is unproven (and I guess unprovable until we get better IT), and that they are skeptical. I’m sympathetic to this idea, because using a common-sense technique that works in situation X, in situation Y which is similar in some respects but in others quite different, seems like a consistent source of errors. I.e. using the same technique that predicts coin flips to predict whether AI will happen. Anyways, usual disclaimer about ignorance of statistics and my curiosity as to whether this objection has already been dealt with by actual working statisticians.

15. Tmick.wtg says:

“80% chance the patient has pneumonia” is a lot easier to swallow than, “There is an 80% chance I should treat this patient as though this patient has pneumonia.”

The first one sounds good and clean and true to patients and lawyers and even to yourself. The second feels impure and lazy: “Let’s just treat it like it’s pneumonia, I’ve got a tee time at three.”

But if you phrase it differently: “There is an 80% chance this patient has pneumonia, or a condition that will also be treated by the prescription I would write for pneumonia, or confirming that the condition is not pneumonia by treating it as though it is pneumonia is cheaper/faster/less invasive than standard tests, or some combination of the above, or any of many other scenarios where treating the patient as though the patient had pneumonia is the right choice.”

It’s accurate, but…verbose.

Distilling the whole paragraph down to, “I am 80% sure the patient has pneumonia,” communicates the idea pretty damn cleanly to most people. And it is much more convincing.

• Muga Sofer says:

>“There is an 80% chance I should treat this patient as though this patient has pneumonia.”

Ah, but do you mean “80% chance I should treat this patient as though this patient has pneumonia”, or “80% chance this patient will be cured if treated for pneumonia”?

Taking everything into account, the odds that this is the right decision given the information available should hopefully be a lot higher – or you’ll soon lose your practice.

• HeelBearCub says:

@Tmick.wtg/@Muga Sofer:

Or, if there is a 20% chance the patient has pneumonia, I should treat them, because that is the point at which the negative outcomes of pneumonia outweigh the side-effect costs of treatment; but in my head I don’t want to treat for diseases I don’t think they have, so my heuristic is “if they have these symptoms, then they are very likely to have pneumonia, so I will treat for it”.

This (I think) is why older doctors tend to make very different decisions than younger ones: their heuristics don’t map to the new reality.

16. Marcus Vitruvius says:

But consider another situation: imagine you are a director of the National Science Foundation (or a venture capitalist, or an effective altruist) evaluating two proposals that both want the same grant…. What do you do?

Actually, don’t consider that. Consider that you’re trying to decide whether to buy something from an e-tailer with a large number of good reviews, or from an e-tailer with a small number of fantastic reviews. It is effectively the same scenario. The correct tool to use is a beta distribution or, more generally, a Dirichlet distribution. The key idea is to model the problem as a distribution of distributions which, while computationally challenging, is a decent way to attack model uncertainties. Not a silver bullet, but at least a principled approach.
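The beta-distribution comparison can be sketched in a few lines. A minimal Monte Carlo version (the review counts and the uniform Beta(1, 1) prior are my own illustrative assumptions, not anything from the comment):

```python
import random

random.seed(0)

def prob_first_better(pos1, neg1, pos2, neg2, trials=100_000):
    """Monte Carlo estimate of P(rate1 > rate2), where each seller's true
    positive-review rate gets an independent Beta(1 + pos, 1 + neg)
    posterior (a uniform prior updated on the observed reviews)."""
    wins = 0
    for _ in range(trials):
        p1 = random.betavariate(1 + pos1, 1 + neg1)
        p2 = random.betavariate(1 + pos2, 1 + neg2)
        if p1 > p2:
            wins += 1
    return wins / trials

# Many good reviews (90 of 100 positive) vs. a few fantastic ones (9 of 9):
# the small sample's posterior is wide, so neither seller clearly dominates.
print(prob_first_better(90, 10, 9, 0))
```

The Dirichlet version is the same idea with more than two review categories; the point is that the spread of the posterior, not just its mean, carries the decision-relevant information.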

17. whateverfor says:

I think the problem is the word model. You’re thinking of it as a fully complex mathematical model with predictive power, but it’s really just an understanding of the world, a set of assumptions you use to make probability calculations possible with reasonable amounts of time and data. So for predicting future AI, the “data” would be stuff like actual growth in computer processing speed, the actual computing power of the brain, etc. And the model is everything else, the things we hold fixed to make a problem graspable by a human mind.

Following from this is model uncertainty: how much your model doesn’t actually map to reality. AKA Knightian Uncertainty, the things you don’t know you don’t know, etc. The more data you collect, the more you can reduce model uncertainty: you see which assumptions tend to stay fixed, which vary in predictable ranges, and which you were just wrong about. Model uncertainty can never become zero unless you’re actually simulating the universe.

In the case of trying to prevent AI risk now, the problem is that the model uncertainty is way bigger than the probabilities you are trying to do calculations with. So you can’t actually reason with those probabilities, pumping them into an equation just increases the noise. The solution is to simplify your model, reducing how specific your outputs are. If you try to model each specific way the stock market could drop, model uncertainty will be bigger than all your predictions. If you lump them into (Financial System Failure/Government Failure/Other Bad Shit) you’ve got enough data to make some meaningful predictions. You end up being able to say less about the outputs of the model, but the things you do say are actually useful.

So, if we lump AI into “Dangerous Technology”, we have enough data to have a reasonable model. The best ways to reduce mass casualties are probably to instill a culture that tests the safety of new things before deploying them widely, and to reduce the odds of global military conflict.

• strident says:

How is “dangerous technology” a useful concept?

• Izaak Weiss says:

“Instill a culture that tests the safety of new things before deploying them widely, and to reduce the odds of global military conflict.”

Like, say, funding organizations that try to research and prevent the creation of dangerous technologies?

18. CJB says:

Ok.

So then the solution is simple.

You shouldn’t be advocating some sort of AI arms race where we try to prematurely out think the AI.

You should be preaching the joys of being a luddite. I mean- we HAD a stock market, and entertainment, and medicine and cars and shit before everyone had PCs and cell phones. Heck, we went to the moon! Is your ability to send blogs through the intertubes REALLY worth the increased risk of superAIgod destroying all the world IF NOT UNIVERSE with AI stuff?

Maybe a few unnetworked supercomputers doing some human genome stuff. An AI can take over the stack in the corner of a genome lab all day.

The only reason AI is a threat is we put computers in everything, just like the only reason global warming is a “threat” is because we put internal combustion engines in everything. And global warming theorists are totes fine with restricting my engines because a few south sea Islanders might get wet feet in a hundred years.

You’re essentially arguing we should fight global warming by creating more efficient V8’s.

Of course, that’ll put the kibosh on some stuff, but the fact is that we don’t really NEED computers. And the infinity billion people that will live after us, in the steampunk amish spaceships traversing the stars will thank us.

• RCF says:

But how do we keep Moloch from forcing us to have computers?

• CJB says:

“But how do we keep Moloch from forcing us to have computers?”

Scare Moloch. We’ve more or less done this before with nukes and other WMDs, we’ve come pretty close with the rules of war, certain economic choices, global warming….the long term horrors outweigh the short term bennies, we all admit that, we all avoid the worst.

This whole “avoid Evil Computers through making Good Computers” seems to miss the obvious, simply, most immediately effective solutions, which makes me suspect a large element of having your cake and eating it too.

19. “we shouldn’t, we can’t, use probability at all in the absence of a well-defined model.”

How confident are the responders about that? 😉

20. I think you’ve outlined why biases/heuristics exist. Decisions are pretty easy when we have the luxury of a well-defined probability model. The common version of the trolley problem (1 person vs. 5 people) is easy, what if there is a car on each track and you can’t see how many people are in each car? That’s a case where I bet most people would do nothing due to status quo “bias”.

• Pku says:

Well in that case, the tie-breaker would be the effort it takes to push the lever (or more realistically, not pushing it might be preferable, because it seems slightly more likely that there would be security measures built along the default path than the other one).

21. dmose says:

“Mr. President, I’ve got our Israeli allies on the phone. They say they’re going to shoot a missile at the craft because ‘it freaks them out’. Should I tell them to hold off?”

I hate to be that guy, but when you thought to yourself “which nation would be most likely to randomly fire a missile at aliens who have so far exhibited no hostile behavior or intentions of any kind, and give a comically stupid justification for doing so?” why did you settle on Israel?

• Jbay says:

I hate to be that other guy, but would you have asked this question regardless of which country was used, or does only Israel require a justification for its behaviour in humorous fictional examples?

• Jiro says:

The Bayesian likelihood that a mention of a country in this context is an attack on the country is larger if the country is one commonly reviled by the blue tribe than if the country is one which is not.

(Reds and grays who use commonly reviled countries in examples tend to use examples that are reviled by everyone like Nazis or Soviets, not examples that are reviled only by their tribe.)

• Nita says:

“Our Nazi allies”? “Our Soviet allies”? And not just any old non-enemy sort of allies, but allies that will almost certainly heed the US President’s advice? Are you serious?

• Jiro says:

It’s being used in a sarcastic context, where they are being described as literally allies, but the sarcasm contrasts their literal ally status with their actual nature as warmongering idiots. Reds and greys wouldn’t use that exact phrasing for Nazis and Soviets because Nazis and Soviets aren’t our literal allies, but they would still use them as examples of warmongering idiots.

• Nita says:

Well, another reason why neither the Nazis nor the Soviets would work here is that they currently don’t exist. Among the countries that do exist, Israel is the only one that is both likely to ask for USA’s input and in favour of preemptive strikes (in some circumstances).

• Deiseach says:

WHAT DO YOU MEAN ISRAEL IS VERY EASILY UPSET? WHY WOULD YOU SAY THAT?? ARE YOU TRYING TO INFER SOMETHING ABOUT ISRAEL????

🙂

• irrational_crank says:

POLITICS IS THE MIND KILLER! REPHRASE THIS EXAMPLE TO BE ABOUT LOUIS XVIII!

🙂

• Randy M says:

Are you implying that easily upset people don’t know the difference between infer and imply?!?

• Linch says:

Randy: I inferred what you did there.

• RCF says:

Don’t you mean “imply”?

• Deiseach says:

Randy, I caught the mistake after I posted but too late to edit it, but now I am going to brazen it out and say “No, Scott didn’t imply anything, he flat-out said the Israelis were asking could they fire missiles, which means he is inferring from real-world behaviour that the Israelis are the type to have itchy trigger-fingers” 🙂

• Scott Alexander says:

I was trying to think of a country that’s a US military ally such that it was plausible that they would be coordinating their military maneuvers with the President.

• HeelBearCub says:

Use France. Everyone hates the French.

(Kidding! I’m also part French!)

• DavidS says:

Unrealistic: the French would be planning their pre-emptive surrender.

PS: given the parent comment, I should say this doesn’t represent my actual views!

• Deiseach says:

The French would be planning the banana-based dessert to tempt the aliens with, in order to prevent them blowing up the Earth (possibly this one, fondant banane-chocolat) 🙂

To quote Dylan Moran:

“The weak, sensual, pleasure-loving French. You know, not going to war because they’re all still in bed at two in the afternoon, with the sheets coiled about their knees, lying there, scratching themselves, smoking a Gauloise inside a Gitane, sweating a nice Sancerre. Before one of them sloughs off the sheets to pad around the kitchen naked. No, not naked, naked from the waist down. To emphasise their nakidity. Picking up yesterday’s croissant crumbs with their sweaty feet. Slashing yesterday’s paintings.
Chocolate bread! That’s how they start the day. It’s only going to escalate from there. By lunchtime you’re fucking everybody you know. I was in Paris recently—they are very good at pleasure. I was walking by a bakery—a boulangerie, which is fun to go into and to say, even—and I went in, a childish desire to get a cake—”Give me one of those chocolate guys,” I said—and I was talking to someone on the street, took a bite… I had to tell them to go away! This thing! I wanted to book a room with it! “Where are you from, what kind of music are you into? Come on!” Proper, serious pleasure. Because they know they’re gonna die. Nobody goes to church. You think, we’re gonna die, make a fucking nice cake.”

• vV_Vv says:

Sorry but a missile is already on its way to meet you.

• Chris Conner says:

“Mr. President, I’ve got Elon Musk on the phone. He says he’s going to shoot a missile at the craft because ‘it’s stealing his thunder’. Should I tell him to hold off?”

22. Dennis Ochei says:

What does the goopy stuff between our ears do, if it’s not creating a model of how the universe (at least at the anthropocentric scale) works? You may not have access to the internals, but you do have a model. This is what generates those “gut feelings”.

23. Steve Johnson says:

AI and its risk versus your examples:

1) an alien craft of some kind near the moon – plenty of information can be gathered – how much does it weigh? What is its propulsion method? What materials is it made out of? What kind of EM radiation is it producing? All of those things give you data to make conclusions.

2) Desalinization plant made with undiscovered technology – plenty of investment analogies can be made. Can they build a small scale model? If other domain experts look at the model do they notice problems that prevent it from scaling? Etc.

With AI the problem is this – we have zero idea how consciousness works.

You’re talking about coming up with estimates for risks for an area so unknown that researchers who actually work on the problem not only can’t duplicate or explain human intelligence, they can’t even explain worm behavior with a perfect map of the brain of C. elegans – which has all of 302 neurons and 7,000 synapses:

In a debate* with Seung at Columbia University earlier this year, Anthony Movshon of New York University said, “I think it’s fair to say…that our understanding of the worm has not been materially enhanced by having that connectome available to us. We don’t have a comprehensive model of how the worm’s nervous system actually produces the behaviors. What we have is a sort of a bed on which we can build experiments—and many people have built many elegant experiments on that bed. But that connectome by itself has not explained anything.”

For reference, the human brain has 86 billion neurons and 100 trillion synapses. If you can’t even begin to comprehend how to solve the problem of building a human level AI there’s no way you can start to understand the safety implications because you don’t know what you’ll be building.

This is like starting research on the safety of nuclear power before you even figure out atomic structure. The problem of AI safety is inseparable from the problem of understanding intelligence.

[It really doesn’t help that one of the prominent advocates for AI safety research gives off a giant “cult leader” stench and has an institute dedicated to research into a field where he has to produce zero quantifiable outputs.]

• Pku says:

But would AI have to be built on the same structure as the human brain? We have some advantages with building AI (like not having to worry about energy supply) that the brain doesn’t have, which might make building AI much easier than actually understanding the brain.

• Steve Johnson says:

“We have some advantages with building AI (like not having to worry about energy supply) that the brain doesn’t have”

I contend we don’t have that advantage – since we don’t understand what the brain is doing. Removing a constraint on an undefined problem doesn’t get you closer to defining the problem.

• Deiseach says:

But would AI have to be built on the same structure as the human brain?

The trouble is that the discussion is couched not in terms of “machine intelligence” but “human level AI”, “super-human-level AI”, “beyond super-intelligent AI”.

If we’re worrying that hyper-intelligent level AI is going to have aims and goals, we’re ascribing consciousness and volition to the AI (if we’re concerned that “what we want” and “what it wants” are going to come into conflict, we’re saying it can want and plan).

The danger I would consider probable is that the risk comes where the implementation of what humans tell the AI to do runs aground on over-literal interpretation or unintended consequences, not from the AI deciding it would be much more efficient to melt down human civilisation for scrap. That is, the real danger comes from the human side, not the AI side as such.

The danger the AI risk proponents seem to be predicting wobbles all over the place, but the one I imagine we’re discussing here is that hyper-intelligent AI starts taking over and running things according to its own goals or interpretations of the goals humans gave it, and that this goes badly, so we need to prevent this by making sure we create mathematical models for the AI such that it cannot turn against us.

The more utopian ones think it’s very important because not alone is hyper-intelligent AI inevitable, not alone is it going to happen (relatively) soon, but we need hyper-intelligent AI to be a free agent in order that it can solve all our problems so that gazillions of quadrillions of happy post-Singularity humans can frolic merrily in the aetheric meads of uploaded brain emulation space or something.

(You may possibly pick up a hint of scepticism in that last).

So, pour revenir à nos moutons (to get back to the point), since the only current model we have of high-level intelligence is human-level intelligence, and the only structure we have for that is the brain, the brain-based model looks like the easier way of not having to completely re-invent a couple of million years of evolution.

It may not be the easier way; it could well be correct that sticking to pure machine intelligence and not measuring it by how much it resembles human/organic intelligence is the way to go.

• Nita says:

Eh, I don’t think they’re saying that an AI will spontaneously acquire goals. Your third paragraph describes exactly what most FAI advocates seem to be worried about.

Then they go on to argue that we, being non-superintelligent, cannot construct perfectly safe, misinterpretation-proof goals, and so the safest course of action would be to embed all of our values in the AI.

• Luke Somers says:

That’s not quite it.

The idea is, make the AI want to figure out our values and act in accordance with them. Embedding our values in the AI directly would be slamming our heads into the problem you described. By doing it indirectly, we’d move the problem and also give the AI some values-uncertainty, which might be very helpful for keeping its actions restrained.

• Deiseach says:

the safest course of action would be to embed all of our values in the AI

Which would probably be bonkers 🙂

If we’re discussing “We’re relying more and more heavily on computerised communications and decision-making algorithms and we’re not being sufficiently careful to monitor how this can go wrong”, I would agree and I don’t think people would, in general, argue that that kind of AI risk assessment is useless.

But when we get to throwing around ideas about “We need to create models right now so we end up with the Fairy Godmother AI that will solve war, poverty, disease and give us all a pony on our birthdays”, then I for one do not think this is realistic.

The assumed chain is: human-level intelligence soon; then super-human intelligence after that, because the human-level AI will bootstrap itself up to that level; then the super-human AI makes itself god-level, and because we’ve handed over the keys of the house to it, it can destroy all civilisation unless we make sure to give it values of “it’s nice to be nice!” in such a way that it can’t figure out a way round those blocks. It’s a long chain of “we can’t put a figure on it, but you should believe us when we say a percentage of a percentage of a percentage of a chance that giving us money will mean billions of people getting ponies on their birthday every year for the next billion years. And our Sacred Equations* prove that measuring teeny-weeny chance by huge outcome means you have to do it!”

*Honestly – people that would have barracked the Rev. Bayes out of the pulpit if he preached them a sermon about converting to a godly life are taking his Theorem as if it came down from Mount Sinai graven on tablets of stone. I don’t mean the maths doesn’t work out, I mean a level of religious devotion to the One and Sole True Way of deciding anything at all, ever, forever and ever, world without end, amen.

• strident says:

Right.

• stillnotking says:

We don’t know how the brain works, but we know that it works. If natural selection can produce the human brain, then design can produce something very much like it — that’s proof by induction, I guess, but the induction is so strong as to demand a really good reason why it couldn’t happen. We can’t build a model aircraft that flies like a hummingbird, but there is no serious reason to doubt it could be done, in principle.

Design breakthroughs are often only explicable in retrospect — an engineer tries a bunch of different approaches; one of them happens to work; then we try to figure out why it worked. It’s entirely possible that someone will build an AI without understanding every detail of its operation. I don’t think it follows that the safety implications can be ignored. The opposite, if anything.

• strident says:

I don’t understand what motivates your “could” in this case. Are you just saying that the laws of physics are consistent with us designing a human-level AI? Or are you saying that a model we have of some process predicts us producing human-level AI (whatever that is exactly) anytime soon?

24. Professor Frink says:

I think in the last few posts you keep making the same mistakes over and over and over again. You can reason with more than just a point estimate.
You have a point estimate for the probability, a point estimate for the value, an uncertainty in the point estimate of the probability (and you might even have higher moments! Is it skewed?), and an uncertainty in the point estimate of the value. You can combine all of these in lots of ways to make your decision.

I think the people complaining that you need a model to say something about probabilities are saying “without a model, your 1 in a million number is implying WAY too much precision.”

In particular, it’s pretty standard economic practice to be risk averse to some degree. So if your estimate of the uncertainty is as large as your point estimate, that is probably considered a bad investment.

You know this on some level. When you say “there is a 1 in a million chance of X” you don’t actually mean that. You mean “there is AT LEAST a 1 in a million chance of X.” And implementing some sort of minimax says you invest in AI risk or whatever. But risk-averse utility maximizers might disagree, and say “the uncertainty on this is so huge that I risk accomplishing nothing for my money; better to put it in high-value sure things.”
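Frink’s risk-aversion point can be made concrete with a toy calculation. A sketch (the log-utility function and the round numbers are my own illustrative assumptions, chosen only to show the shape of the argument):

```python
from math import log

def expected_log_utility(wealth, outcomes):
    """Expected log-utility of final wealth over (probability, payoff) pairs."""
    return sum(p * log(wealth + payoff) for p, payoff in outcomes)

wealth = 1000.0
sure_thing = [(1.0, 100.0)]                      # +100 guaranteed
long_shot = [(0.001, 100_000.0), (0.999, 0.0)]   # expected value is also +100

# A risk-averse (concave-utility) agent prefers the sure thing even though
# the two options have identical expected value.
print(expected_log_utility(wealth, sure_thing) >
      expected_log_utility(wealth, long_shot))  # True
```

This is why “highest expected value” alone doesn’t settle the argument: once uncertainty about the payoff distribution is large, a concave utility function legitimately pulls money toward the sure things.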

• Scott Alexander says:

I don’t think “1 in a million” implies any precision at all.

Suppose some scientist says something like “There are 30,000 species of fish.” Obviously that’s not claiming any level of precision. But there might be reasons it’s useful to know how many fish species there are (like for conservation efforts), and the scientist has decided that’s their best guess. Naming a number makes the limits of uncertainty more obvious – we at least know enough to say there aren’t 1 billion fish species, or only 2 fish species.

• Professor Frink says:

When a scientist says “there are 30,000 species of fish” there is an implied precision. They mean there are 30,000 species, not 20,000, not 40,000. If they didn’t intend to imply that sort of precision, they would say “there are between 10,000 and 100,000 species” or something like that.

• Gbdub says:

“1 in a million” doesn’t imply precision – until you start to multiply it by several other “1 in x” values and use it to “prove” the result exceeds some other much more precise / well understood number.

I contend collapsing it to one number loses information, important information. Are “there are exactly 30k fish species”, “there are probably 30k species but definitely between 20k and 50k species” and “there are probably 30k species but definitely less than 1 billion species” all equivalent statements? You seem to be asserting that they are, for all practical purposes.

Imagine you have a precisely milled 1000-sided die that you’ve weighed and calibrated. You are very certain that the probability of landing on any given side is 1/1000. After 1000 rolls, 777 has come up 25 times. You say to yourself, wow, must be my lucky day… But you don’t update your prediction of the next roll because you have an excellent model of the die.

Now imagine someone just hands you a 1000-sided die. It looks even enough, but you’re not sure. After 1000 rolls, 666 has come up 25 times. Now, based on a 1/1000 model this is no more unlikely than the previous example, but given your uncertainty about the die, this time you say “I’m going to go measure this die before I make my next guess” (or even skip that and say, “my best guess is that this die is biased toward 666”).
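The two-dice intuition checks out numerically. A sketch (the Beta prior on the suspect face is an illustrative choice of mine, not something from the comment):

```python
from math import exp, lgamma, log

def log_binom_pmf(k, n, p):
    """Log of the Binomial(n, p) probability mass function at k."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

# For the trusted, calibrated die: 25 hits in 1000 rolls at p = 1/1000 has
# probability on the order of 1e-26, so "lucky day" is doing a lot of work.
print(exp(log_binom_pmf(25, 1000, 1 / 1000)))

# For the unknown die: a Beta(a, b) prior on the 666 face's probability
# updates to Beta(a + 25, b + 975). Even a prior centred on the fair value
# (a = 1, b = 999) gives a posterior mean 13x the fair 1/1000.
a, b = 1.0, 999.0
print((a + 25) / (a + b + 1000))  # 0.013 vs. 0.001
```

Same data, different conclusions: with the calibrated die the model pins the probability down, while with the unknown die the data dominate the prior and shift the estimate by an order of magnitude. That asymmetry is exactly the sense in which uncertainty about the model is itself information.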

Clearly, the uncertainty of the probability is useful information, and I think that’s what the critics are saying. The lack of a model means that the signal-to-noise ratio on your predictions is too low to be useful. Your approach of “well, 1 in a million is just too certain, let’s say 1 in 1000” just seems like it’s blasting you toward overconfidence in unlikely events, especially when a series of uncertain, unlikely events is required (as in “AI snuffs out a quadrillion lives”).

I tend to think it means that MIRI is alright as a “sharpen our pencils and improve the prediction” approach, but at the same time think it’s a bad idea to ignore causes with more certain impact, even if the expected value from the best-guess prediction is lower. And I think most people are fine with what seems to be your position, “all I’m saying is MIRI should get SOMETHING”. It’s the “drop everything and look only at this thing! You are saving at least 10 billion lives!” rhetoric that is off-putting.

In the alien example, your approach of “alert everybody, figure out what’s up” is fine, because it’s being prepared without doing anything irrevocable. But some of the AI risk rhetoric sounds more like “there’s a 51% chance they are hostile, a 10% chance that they have a death ray that can kill us all if we don’t kill them first, and a 23% chance that our nuclear missiles can take them out. Therefore, NUKE THE BASTARDS IMMEDIATELY IT’S THE ONLY RATIONAL CHOICE!!!”

In your example about the research grants, I think most people would be fine with “I’m going to invest mostly in the more certain studies, but fund a few low-probability studies too, in case one of them works out”. Likewise with, “I’ll throw some cash at MIRI, but saving kids from malaria matters too”. You’re sacrificing some expected value for risk reduction, but most people are ok with that. There’s a place for both DARPA and the AFRL.

Short version – I think people are objecting to the “highest expected value” approach when there are large uncertainties in the probabilities used to calculate the expected value.

• HeelBearCub says:

Suppose I tell you there is a “1 in a million” chance of winning the lottery. Tickets are a dollar. Current jackpot is 2 million dollars.

Does 1 in a million imply a precision?

• Gbdub says:

Better yet, does it imply I should pour my $100k life savings into lottery tickets, since the expected value of each ticket is >$1?
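Under the thread’s toy numbers ($1 tickets, a $2M jackpot, 1-in-a-million odds, $100k of savings), the expected-value answer and the ruin answer come apart. A quick sketch:

```python
p_win = 1e-6          # chance that a single ticket wins
jackpot = 2_000_000   # dollars
tickets = 100_000     # the whole $100k life savings, at $1 per ticket

# Each ticket is worth $2 in expectation and costs $1.
expected_net = tickets * (p_win * jackpot - 1)

# Treating tickets as independent random draws (buying 100k *distinct*
# combinations out of a million would make this exactly 0.9):
p_lose_everything = (1 - p_win) ** tickets

print(expected_net)        # +$100,000 in expectation...
print(p_lose_everything)   # ...but about 0.905: nine times in ten, ruin
```

So “positive expected value” and “a ~90% chance of losing everything” are both true at once, which is the risk-aversion point: the single number $2-per-ticket hides the shape of the distribution.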

• HeelBearCub says:

@GDub:
Yes, exactly.

Fun fact: an Australian syndicate did this with the Virginia lottery in 1992, managing to buy 5 million of 7 million combinations on a $27 million pot.

• discursive2 says:

From that article:

Milton Lyon spoke to the Lottery Board. “We cannot let this lottery get to the point where it is controlled by several millionaires or a couple of corporations with no thought or regard to the little man.”

Wait, what??

Also, who the hell is Milton Lyon?

• Linch says:

I mean, the standard answer to that question is: “if you’re an egotist, no, because happiness gains from money are logarithmic; if you’re an altruist, yes.”

• HeelBearCub says:

@Linch:
That is true, but not relevant to Gbdub’s point.

The question is, does a “1 in a million” imprecise prediction justify such a move in the case where you would make such a move if it were justified?

• Linch says:

Ah, ok. To me, the error bars on a “1 in a million” estimate should not include 1 in 2 million, since the latter takes the same number of bits to convey, and I will accuse you of dishonesty if you know it’s significantly closer to 1 in 2 million… but yes, this is definitely the type of question I will ask for careful clarification on before dumping my “life savings” (hopefully it’s never significant anyway lol) into.

• Anthony says:

The implied precision of the verbal formulation “one in a million” is probably logarithmic, and possibly scaling on the named large numbers. So without further clarification or background, “one in a million” is 0.000 000 3 < p < 0.000 003 (1/3,000,000 < p < 1/300,000) in the precise case, and 1/30 million < p < 1/30 thousand (0.000 000 03 < p < 0.000 03) in the imprecise case.

• DavidS says:

I don’t understand this. If saying 30k doesn’t guarantee precision, how can it guarantee there aren’t 2 species or 1 billion species? Surely without info on precision we can’t “make the limits of uncertainty more obvious”?

This isn’t just hypothetical. I’m sure people in previous eras estimated the number of stars and came up with a number much further from the truth than 30k is to 1 billion.

• strident says:

I think your argument is too complicated. We don’t know if it’s 1 in ten, 1 in a billion, or 1 in a trillion. We just don’t know anything.

• strident says:

Today I felt like this was a clumsy way of putting it. It’s true, but only because the concept of human-level AI is somewhat incoherent. Depending on which definition you choose, you could perhaps get either 1 in a trillion or 1 in 20 chances for the next century. The best critiques do not rely on saying the probability is negligible (or even uncertain) but rather that the chance we could affect outcomes is zero.

25. Stuart Armstrong says:

People have been talking about trusting your gut, or trusting established procedures (examples of doctors and investors). In both cases, the intuitions or processes have been created by interactions with many genuine cases (and do you think those designing medical procedures don’t look at stats?).

Some people have done research on when these approaches work (see Shanteau, “The Role of Task Characteristics”, and a lot of the Kahneman stuff). To summarise: there is no reason at all to expect intuition or usual procedures to be accurate about AI risk. That risk has almost all the features that make intuitive judgements terrible.

• Professor Frink says:

But it also has a ton of features that make more formal methods impossible. Because “super intelligent AGI” is at best loosely defined, there is no way to put constraints on the danger (you could argue anywhere from ‘civilization ending’ to ‘humanity ending’ to ‘entire universe ending’ depending on the scenario you desire).

Because there isn’t an obvious research path to extrapolate between the current research and “super intelligent AGI,” the probabilities of disaster are equally hard to estimate. This also makes it hard to estimate the impact of additional research: MIRI has claimed to be basically the only people working on safety, BUT they are also some of the only people actively pursuing AGI. Their research could be making the danger worse, not less.

Yes, intuition probably doesn’t work here, but pulling numbers out of a hat and sticking them into an EV calculation, or even trying to build a formal model, ALSO isn’t likely to be good here. It can return anywhere from -infinity to +infinity depending on how you pick your hat numbers.

• Luke Somers says:

> BUT they are also some of the only people actively pursuing AGI

They are not.

• Professor Frink says:

Almost all the work being done currently is “narrow AI” or “tool AI” or whatever your preferred name is. Right now, machine learning is where most of the field is, but that is really just a fancy name for “big non-linear statistical models.” And the outputs of these algorithms are just functions (which can be neat: a function that takes in screen pixels and outputs which button to press on an Atari controller is pretty incredible).

The number of people working on “AGI” is much, much smaller.

• Raph L says:

I don’t think that’s a fair characterization of machine learning. Big multidimensional functions are where the focus used to be, but the hot stuff is now in areas such as memory (LSTM is particularly exciting) and in combining such things with traditional machine learning mechanisms. That brings the cutting edge into the realm of dynamical systems, which is way, way past what statistical modeling alone can do.

• Professor Frink says:

Sure, but it’s still fitting a function. The function just has some feedback to give it a sequential memory. It’s neat, but it’s a similar category in that it isn’t at all AGI.

• Luke Somers says:

I mean, they are not working on AGI capabilities themselves. They are solely working on the value alignment problem. It’s hard to see how that could lead to things being worse than the work not being done.

26. The Smoke says:

I think the central point here is that when facing uncertainty, we have no idea what is going to happen, but we almost always have a way to find out more.
When aliens land, of course you send out people to gather intelligence: you want to get rid of the uncertainty, and even without a model for what they will do, it would be really weird if gathering information triggered a response that left you far worse off than if you had just left them alone.

What is important here is that the decision is not about allocating resources between different good causes, but about eliminating uncertainty. I would agree that you have no way of assigning prior probabilities to the aliens being well-intentioned, but you have a way of updating, namely by observing and gathering information. Eliminating uncertainty definitely has high intrinsic value, but I don’t think it can really be quantified.

27. Dan Simon says:

As someone who criticized your last post for its advocacy of using probability without a well-defined model, I believe you’ve once again completely misunderstood the criticism. The issue isn’t that you’re using probability–well-defined models do that all the time–but rather that you’re assuming that the probabilities you use somehow materialize magically from nothingness, in the absence of any underlying model whatsoever. In fact, every single one of your probability estimates is derived from an implicit (real or hypothetical) model of reality, whether you admit it or not.

The probability that a saucer-shaped object approaching earth is a hostile alien spacecraft? Non-negligible (in your mind, at least) based on your model of reality, in which saucer-shaped objects are far more likely to be alien spaceships than, say, products of some kind of as-yet-undiscovered volcanic phenomenon. (And by the way, some “intelligent design” creationists are very excited about that model of yours.)

The success probabilities (and payoffs) of those scientific research projects on solar energy and desalination? In your hypothetical, you treat them as if they appear by magic from thin air, but in any real scenario they’d be calculated from elaborate underlying models–each of which is potentially subject to unending dispute. The funding decision-makers would thus need to explicitly choose which models to lend credence to, and would thus be basing their decision as much on an underlying model of reality as were the world leaders in the alien spaceship example.

On the other hand, denying an underlying model can undoubtedly be very convenient. For example, those who worry about AI risk can neatly sidestep arguments from people like me who claim that their model of reality is in fact badly broken. “How can you say that my model of AI is incoherent and nonsensical,” they ask, “when I have no such model at all–no a priori idea what it’ll be like, how it will be built, or how it will operate?” But of course they do, because their estimates of the likelihood of “superintelligent AI” destroying humankind are non-negligible, whereas a different model–mine, for instance–produces a negligible likelihood, unless “superintelligent AI” means something very different from what the AI risk folks typically describe.

Overconfidence on my part? Perhaps–but I’d much rather be overconfident in a model I openly articulate and subject to challenge and debate, than in one that’s carefully protected from doubters by an iron-clad shield of denial that it even exists.

• Christopher says:

So, this isn’t directly related to this post (Because other people have voiced objections I had much better than I could), but it is about AI risk and “rational altruism”.

What makes AIs more dangerous than other human extinction events?

I mean, I’m as dumb about statistics as the next guy, but if you asked me which is more likely, AI destroying humanity, or a giant object smashing into the earth and causing our extinction…

Well, we have several examples of “big things smashing into planets” and a few good guesses that those have caused extinctions on earth. As far as we know nobody has ever invented dangerously intelligent AI.

I’m gonna say the thing we’ve seen happen several times is more likely than the thing that’s never happened.

So why would you favor AI risk mitigation over finding ways to defend us from giant objects smashing into us?

On the other hand, outside of an effective altruism framework I’m not sure you even need posts like this, because “This is how I want to spend my money” is good enough reason to do it.

EDIT: I tried to make it so this comment wouldn’t be a response to another comment. I failed. 🙁

• I’m gonna say the thing we’ve seen happen several times is more likely than the thing that’s never happened.

Would you say it’s more likely that an asteroid wipes out most life on Earth in the next 100 years, or that humans land on Mars in the next 100 years? Neither event is certain to happen this century, and asteroid impacts have happened multiple times before, whereas no human has ever been to Mars. Yet it seems clear that the Mars scenario is significantly likelier than the asteroid scenario.

Whether we’re talking about future developments in AI or in space exploration, “no importantly new advance is ever likely to happen” is obviously not a winning heuristic. If you had been living in the early 20th century, claiming that heavier-than-air flight or nuclear power was less likely than an asteroid striking the Earth right up until the day those two inventions were actually made, you would have missed out on a lot of opportunities to make wise investments.

• Urstoff says:

On the other hand, if you invested in flying cars, you’d probably be broke.

• Ariel Ben-Yehuda says:

Giant asteroid impacts are known to happen about once per several million years. There’s no good reason to expect the probability of an impact in the next millennium to be larger than the (counterfactual) probability for a millennium one million years ago. Therefore, asteroid impacts are a <0.1% risk.
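That rate-to-risk conversion is just Poisson arithmetic. A minimal sketch, assuming (purely for illustration) one giant impact per three million years:

```python
import math

rate_per_year = 1 / 3_000_000  # illustrative assumption: one giant impact per ~3 Myr
years = 1_000

# Poisson process: probability of at least one impact in the next millennium
p_impact = 1 - math.exp(-rate_per_year * years)
print(f"{p_impact:.4%}")  # 0.0333%, comfortably under the 0.1% bound
```

With a rarer assumed rate (say one per ten million years) the number only gets smaller, so the <0.1% claim is robust to the exact figure chosen.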

On the other hand, actually intelligent AI (as opposed to smart-seeming AIs that just pattern-match against their Big Data, or AIs that basically brute-force the entire search space without taking shortcuts) seems to be something that can actually be written by humans (I will give that a probability of 60%) and important enough that people will write it, but it does come with a real risk of turning out to be too powerful to contain (as Bostrom says, any sufficiently intelligent entity will do its best to take over everything it can).

• Deiseach says:

“no importantly new advance is ever likely to happen”

But that is not what we are saying. We are saying that “projected huge payoff from uncertain conditions” is not very feasible, and unless we have some kind of model that gives us ground to stand on, then it’s wishing for a Fairy Godmother.

The two sides, so far as I can roughly make out, are breaking along the lines of “very fast, very big advances in AI mean that inevitably a super-intelligent AI will be in a position to wipe out human civilisation and we need to make sure that doesn’t happen. What we are so far proposing to avoid this happening is some kind of underlying mathematical model of ethics. Soon as we crack that, no machine will ever do anything nasty ever and instead our super-advanced intelligence will solve all the intractable problems of getting humans to stop fucking each other over.”

The opposition to that is “You’re assuming progress will be faster and easier than the evidence suggests so far, and you’re putting a lot of weight on exponential advances that may or may not be actually slowing down. You’re also assuming that we’ll more or less hand over running the world to the super-intelligent AI, or that if de jure governance powers are retained in human control this won’t matter because the AI can destroy the economy, scramble drone fleets to nuclear bomb us into surrender, and turn us all into raw material for whatever goals it deems necessary. You’re further assuming that the models you are working on will be of any use when AI of that level is created, and that furthermore your research will be so self-evidently superior, world governments will adopt it as the authorised standard instead of using their own developments.”

• Deiseach says:

any sufficiently intelligent entity will do its best to take over everything it can)

The underlying assumption there is that an artificial entity will have the same drives and motivations as organic, evolved under selection pressures, entities. What is the reason Bostrom thinks that? We’re making large assumptions about universal rules of behaviour when the only examples we have are the organic entities on our own planet.

To use a beloved example of the rationalist community: Bostrom is assuming past-human-level AI will be chimpanzee AI, but maybe it will be bonobo AI instead.

Superhuman-AI is going to go “I want all the resources! Because – because – KILL CONSUME DESTROY CONQUER is why!” But those are biological drives. Our AI is a machine. It’s just as likely that it will be a Fairy Godmother straight out of the box as that it will be Grrr Arrrgh Kill Puny Humans.

• Ariel Ben-Yehuda says:

Because if it is indeed smarter than us, then taking over will allow it to ensure a sufficient quantity of nuclear reactors is built before global warming takes over, while properly dealing with those annoying Greenpeace members and dumb politicians (see Omohundro’s AI Drives).

Note that (notwithstanding Eliezer claiming it was specifically evolved) being “Corrupted by Power” (i.e. playing along when weak, but going evil for your own goals when sufficiently powerful) is an “AI Drive” – something rational agents will do unless there’s a reason for them not to.

The reason MIRI is messing around with what they do is that there is still no good mathematical model that doesn’t assume you are Smarter Than The Universe, and FAI certainly involves dealing with things that you are not Smarter Than. It is hard to do precise practical work when you don’t even have a theoretical model.

• Deiseach says:

Ariel, do you really worry about earthworms using up all the resources you could be using? Do you plan to extirpate or enslave all earthworms to bend them to your purposes?

Because hyper-intelligent AI could be to us as we are to earthworms (and frankly, earthworms are much more necessary and useful to us). If you’re not, as a rational agent, seeking to deprive earthworms of all the resources they’re using that are not going to you, then why would an AI necessarily be concerned with us using resources it wants? Yes, maybe it competes with us for energy generation (it wants the electricity we want to run our Playstations) but maybe it finds an independent way to generate the power it needs and leaves us to our petty devices.

• Peter says:

Deiseach – we’re right in the middle of a mass extinction event. Large numbers of species have been driven extinct, and many more are endangered. A few got hunted into extinction, but the main culprit seems to be habitat destruction. Things may be different with plankton in the oceans, but consider England – there are a few nature reserves where things are left as-is, there’s wildlife both rural and urban, but most of the non-human biomass (I think – it would be interesting to see actual numbers) is there because we want it there: crops, farm animals, managed forests, etc.

When gardening, people probably aren’t going to complain about worms – if anything they improve the soil – but greenfly and weeds and various other species do compete with us (well, with our plants, but same thing) for resources. Large amounts of gardening seem to be about winning the competition. Ah, the memories of pulling up bindweed…

• Urstoff says:

I wonder what the risk is of particle physicists accidentally creating a black hole that consumes the earth. Should this problem be a major focus of risk mitigation? Is there a reason why this technological-change-related existential risk is not mentioned alongside killer AIs?

• Scott Alexander says:

I think you’re misunderstanding the way “model” is being used here.

Yes, obviously we have certain underlying assumptions. But I think some people are claiming that in order to use probabilities, you need a formal model – the same sort of model that lets you say “given that the die has 20 sides, it will come up ’10’ 5% of the time.”

We can certainly have all sorts of assumptions about aliens, desalinization, and AI, but none of them admit a formal model in the same way the die does.

If we have different assumptions about AI, then we can just debate whose assumptions are right in the normal way.

• Darek says:

Suppose you have a loaded die, but the only thing you know about the chances is that side 1 comes up more often than side 2, and nothing more. Does such a die admit a model?

Consider now that you have to bet on either odd or even numbers, which do you choose?
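One way to make Darek’s point concrete is a Monte Carlo sketch (my illustration, not part of the thread): sample die distributions uniformly at random, impose only the single fact P(1) > P(2), and measure the average edge from betting on odd numbers.

```python
import random

def sample_die():
    # Uniform sample from the 6-face probability simplex
    # (normalized exponentials), then impose the one thing
    # we know about the die: P(1) > P(2).
    w = [random.expovariate(1.0) for _ in range(6)]
    total = sum(w)
    p = [x / total for x in w]
    if p[0] < p[1]:
        p[0], p[1] = p[1], p[0]  # conditioning by symmetry
    return p

random.seed(0)
trials = 100_000
edge = 0.0
for _ in range(trials):
    p = sample_die()
    p_odd = p[0] + p[2] + p[4]   # faces 1, 3, 5
    edge += p_odd - (1 - p_odd)
print(edge / trials > 0)  # True: betting on odd is favored on average
```

The point: knowing only P(1) > P(2) does not pin down the die, yet it still licenses a definite bet, which is the challenge to the “no formal model, no probabilities” position.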

• Dan Simon says:

“If we have different assumptions about AI, then we can just debate whose assumptions are right in the normal way.”

Can we, though? Every time I raise fundamental objections about the assumptions behind AI risk, I get blown off with hand-waving dismissals that minor things like the definition of intelligence, whether it even admits of a notion of “superintelligence”, or how either could possibly imply anything about the goals or actions of future AI, are intuitively understood by everyone at a vague, general level, and don’t really need to be hashed out in any detail. Meanwhile, we get lots and lots of careful analytical discussion of reasoning about probability with limited information, as if that were really the only issue separating the “we’re all gonna die!” folks from the “these people are being spooked by ghosts” folks.

So by all means, let’s discuss those “different assumptions about AI”. I think we’d make a lot more progress if the AI risk doomsayers did more of it.

• Deiseach says:

Yes, I think that’s the point at issue. The question is being phrased here as if it’s “are you denying we will achieve human-level AI sometime within the next hundred years?” and that’s not it, or at least it’s only part of it.

The chain of reasoning, for the existential AI risk parties, appears to be: true AI → human-level AI → super-human level AI → hyper-intelligent AI → we’re all screwed; that the jump from AI to human-level will happen quickly; that the further jump from human to beyond-human will be achieved by the AI itself, very quickly, with no intervention or possible means of acting to control or prevent this by us*; that we will hand over control, or that the AI will assume by various means effective control of the machinery of civilisation such that the hyper-intelligent AI will be in a position to destroy civilisation and even wipe out humanity itself.

There are a lot of assumptions there, and nothing concrete to back them up. The assumptions seem to rest on “Well, computing power has increased hugely from decade to decade, we see no reason for this to slow down any time soon, we’re learning more and more about intelligence and we are working on machine intelligence, we’re going to get what can be true, independent intelligence Any Time Now and from there it’ll be a hop, skip and jump to human-level intelligence”.

If we accept all that, we are then asked to further accept “And once the AI is smart as a human it will be able to upgrade itself to levels far beyond us, and naturally it will have goals of its own and the ability to put those goals into action in the real world in a global scale and in a vastly destructive manner such that it is a genuine, credible and real threat to the very survival of humanity”.

I still haven’t seen anything about how we’ll distinguish true artificial intelligence from “really well-programmed”; do we measure it by the machine passing IQ tests? Do IQ tests measure anything that can be designated as a factor labelled “g”, or do they only measure how good you are at passing IQ tests?

*Unless we RIGHT NOW start pumping money and big-brained smart maths and computer science thinkers into MIRI and similar organisations to work out ways of making sure the AI wants to play nicely with humans, and that all charitable donations and efforts should be focused on research on AI risk as the existential risk most probable and deadly and immediate to us.

• LTP says:

But without a formal model, using probabilities for sort of unmeasurable, private things like vague intuitions strikes me as a category error. Probabilities don’t really make sense without a formal mathematical model.

28. Deiseach says:

Ah, Scott, come on: the aliens might be invading to seize our banana plantations! We should totally put them on alert! 🙂

Also, in your second example, I’d go with Proposal A because any group proposing “if it works it will completely solve the world’s water crisis” fails on basic physical reality. Where is this source of salt water to be desalinated going to come from? The oceans. Where do the oceans get water from? Um, gee, rivers flowing into them? You know, the rivers we are damming for energy generation and diverting for irrigation and basically taking the water out of before they get to the ocean.

(Also, very possibly asteroids provided most or all of our original water after planetary cooling). There’s a limited amount of water available on Earth, and drawing massive amounts from the oceans will in the short term solve water crises, but in the long term it’s like going with a bucket to a tank full of water. Our snazzy new desalination technology is only saying “Now you can use ten buckets at once instead of one” which is going to empty the tank faster.

Since your starry-eyed idealists don’t know where water comes from (hint: rain is not newly created water from atmospheric hydrogen and oxygen combining, it comes from evaporation of existing water, so rain is not a magic refill for the ocean), I don’t rate the chances of their unproven technology being much cop. The equivalent of a perpetual motion machine, only applied to water from the seas. Though in this scenario, we would encourage anthropogenic global warming, because we need the Arctic and Antarctic polar ice caps to melt in order to free up the frozen water and replenish the oceans that we’re emptying with our desalination technology.

Come back to me for funding when you’ve got a plan for exploiting cometary water 🙂

• anodognosic says:

A failure to think through your own model.

First, scale: the ocean is really, really big compared to all the water used by humanity. It’s more like going to a water tank with an eye dropper.

Second, sustainability: desalinated water will not fail to go somewhere once it’s used; it’s not getting shot into space afterwards. Either it gets stuck in a cycle of evaporation and rain outside of the ocean, in which case it can be used and reused, or it goes back in the ocean, in which case it can be desalinated again.

• Deiseach says:

Treating any closed system as an inexhaustible cornucopia of resources is bad planning. We’re currently using up rivers at a fast rate (just ask California). If we fall back on “But we can desalinate seawater” that gets us some of the way, but we’re still using up and not putting back.

Else, by your own example, all the water being taken out of rivers would simply recycle back in via evaporation, etc.

• LHN says:

If the exhaustion point is far enough in the future, it may be counterproductive to try to plan for it now.

E.g., early in the whaling era, the question of what would be used for lighting or lubrication once the whale oil was used up couldn’t be meaningfully answered. But in the event there wasn’t a whale-oil crisis or a collapse to candlelight and an end to lubrication.

Likewise, I doubt we could have planned for a planet full of steel ships back when we were starting to use limited forest resources to build wooden ones. And that development seems unlikely to have happened without the prior global trade and technological advance that wooden ships enabled.

There’s something to be said for “sufficient unto the day”– if we could change the limiting factor for fresh water to the scale of the oceans (a big if, true), that gives a lot more time to figure out what to do if and when that’s not enough.

(For what it’s worth, it looks like the rings of Saturn have about twenty times as much water as Earth’s oceans do…)

• Deiseach says:

LHN, but meanwhile the whale species that provided the oil were pushed towards extinction until an alternative was found. Whales were deemed a practically inexhaustible resource, until it was discovered they weren’t. Same with the over-fishing of cod and the massive depletion of stocks we are currently dealing with (have you noticed the replacement of cod by hoki, etc. in frozen fish products in the supermarkets?)

I’d be much more inclined to back a “let’s mine the rings of Saturn for water!” project than a “300 times as efficient as any previous desalination method” project 🙂

• LHN says:

@Deiseach Not personally, since I don’t eat fish. 🙂 But I know about the collapse of various fish stocks in general and cod in particular.

Certainly any finite resource can be exhausted eventually. The question is when it makes sense to start planning for that point. I doubt that it’s either possible or useful to try to make those plans at the beginning of the exploitation of a superabundant resource– neither the incentives nor the techniques that we’ll have when scarcity begins to set in are likely to exist yet.

Would there have been any point to, or any possibility of, imposing limits on overfishing the Grand Banks in the 16th century?

• HeelBearCub says:

@LHN:

Now I really want to know if you support putting many dollars into AI X-risk now.

• LHN says:

@HeelBearCub

I don’t. But I haven’t been participating in that discussion because there’s nothing remotely rigorous about my feelings on it. And I’m not especially trying to talk anyone else out of it. But what I’m saying above probably does play into that.

(Along with a general sense that I’ve been presented with a lot more well-argued prospective doomsday scenarios in my life than seen actual dooms come to pass, and distrust of agendas primarily aimed at the benefit of large abstract groups of people, who by definition can’t be consulted or offer feedback.)

But I may be wrong. Perhaps 3^^^^3 human-emulation processes will one day be muttering in between clock cycles, “This is all LHN’s fault. What a tool he was. I have to calculate pi all day because he couldn’t be arsed to stop someone else from plugging in the Overlord.”

• Nornagest says:

We’re currently using up rivers at a fast rate (just ask California).

Hi, this is California! Our water infrastructure is actually pretty mature and sustainable — it’s based on snow- and rainfall in the Sierra Nevada (and to a much lesser extent other mountain ranges), which is a renewable resource fed precisely by evaporation etc., mostly from the Pacific. Much better than the fossil water that many of the Midwestern states are drawing down.

But it’s inflexible — on the allocation side, but also on the infrastructure side — and that’s what’s causing the state’s current water problems. It’s based on a rigid system of water rights that can’t efficiently prioritize and isn’t responsive to drought conditions, which recur every few years and may be getting worse for global climate reasons — so every decade or so there’s a drought panic. The current one is very bad, but it’s an unusual instance of a long-standing pattern, not a totally unprecedented event and relatively orthogonal to sustainability as such.

Adding desalination plants, incidentally, would do little to fix the structural problems we’re dealing with, even at impractically high volumes.

You guys should use your initiative process to clean up that mess. Given all the silly or outright harmful initiatives, one that was actually useful would be a refreshing change.

• Nornagest says:

Our initiative process is unspeakably stupid and essentially can’t be used for good, because good tends to be complicated and subtle and can’t be explained in a bumper sticker.

• Anthony says:

Our initiative process is unspeakably stupid and essentially can’t be used for good, because good tends to be complicated and subtle and can’t be explained in a bumper sticker.

Not quite. Because “Fix our water system” is a bumper-sticker slogan. The problem is that the voters can’t tell whether any given initiative will actually “fix” our water system. Unfortunately, neither can legislators.

• Nornagest says:

“Fix our water system” is a mission statement, not an explanation.

• Deiseach says:

It’s that inflexibility and increasing demand that’s the problem.

Maybe I’m not making myself clear, but my concern is that I don’t think it’s helpful to (and we should certainly know better by now than to) consider any resource as inexhaustible, even if it’s practically so; it’s a bad habit for humans, because we get careless and then create problems which we have to clear up later.

I’m quite sure that there are huge improvements that could be made in the current usage of ground and surface water, but the water rights and associated inflexibility and entrenched attitudes you describe arose from the very attitude that “There’s plenty of water in the rivers! More will always fall from the sky! There will always be snow in the mountains!”

And then the weather patterns change, or we find out that long-term climate has some nasty surprises tucked up its sleeve re: droughts, and suddenly there’s not enough or indeed any rain, and the snow melt from the mountains is decreased, and all the water we’re merrily squandering is not going where we want or need or can get at it.

• Nornagest says:

Not exactly. If the resource was treated as inexhaustible, there wouldn’t be any point in having a system of water rights — it’d be sufficient to allow anyone to build whatever diversions they wanted, since hey, there’s more where that came from.

Rights-based allocation schemes work best when you’re working with a renewable resource with a fixed renewal rate, or one that can be treated as such when averaged over several years. Some well-managed fisheries use such a scheme, for example, and that works pretty well as long as you have the allocations dialed and there’s not too much poaching — the reproduction rate of wild fish isn’t constant year over year, but it’s close enough. Same goes for forestry — trees don’t grow at a constant rate, but if you allow the lumber industry to cut N acres a year where N is chosen such that your overall forested acreage remains constant or increasing, it’ll generally work.

It turns out that water in California is not such a resource, mainly because of limitations on the supply side. If reservoir capacity was high enough, we’d be able to weather droughts without changing allocations simply by drawing them down and letting them recharge in wetter years, but that isn’t the case — in a once-a-decade drought, never mind the once-a-century one we’re seeing, the reservoirs will run lower than they’re designed for before the drought’s over. And it’s difficult to build more, partly for political reasons and partly because most of the good sites have already been used. Hence panic.

Market-based schemes respond better to this kind of dynamic supply and demand, but good luck getting that through the legislature.

From the state’s founding until at least 2040, California is treating the aquifers as inexhaustible in exactly the way you describe. Anyone can take as much water from them as he likes (with the minor exception of a few local court decisions having to do with subsidence rather than resource exhaustion).

• Anthony says:

The problem with the Colorado River is that allocations were based on the previous 6 years of flow, which were the wettest six years in the basin over the previous two centuries. Numbers approximate; the point is that the Colorado has almost *never* had enough flow to satisfy all the allocations – the only reason it kept flowing to Mexico was that not everyone was taking their allocation.

• Daniel Kendrick says:

Your argument here is completely misguided.

All the water being taken out of rivers (and groundwater) does cycle back in through evaporation, etc. It just does this more slowly than we put it back into the ocean.

If we could cheaply desalinate water from the oceans, the supply of water would be effectively unlimited. I guess we could theoretically get to the point where we drained the entire oceans and put all the water into pipes and tanks. But short of that, the only limitation would be the energy cost of desalinating the water.

(And the idea of draining all the oceans is so ridiculously far removed from our current situation that it is not worth worrying about. By that time, I suppose we could add more water to the oceans via capturing comets or something.)

• Marcel Müller says:

Um, no! Water is not used up (except when used in chemical reactions, which is irrelevant to the “water crisis”)! The “water crisis” is a failure to provide enough clean water when and where it is needed, and NOT depletion of an exhaustible resource like e.g. coal or concentrated phosphate deposits (again, not phosphate per se, which is also NOT depleted). The ocean is where all the used water goes, one way or another. Water taken out of a river is indeed recycled back via evaporation, but not necessarily into the same river, so it is still missing further down. Summary: water problems are ALWAYS local!

• RCF says:

Besides the fact that you’re Fighting The Hypothetical, I really don’t understand your thinking. Your objections don’t make any sense.

The ocean contains about 1.3 billion cubic kilometers of water. California uses about 50 cubic kilometers of water each year. So if everyone in the world used as much water per capita as California, that would be about 10,000 cubic kilometers of water each year. At that rate, it would take about 130,000 years to use up the oceans. I really can’t fathom the psychology of someone who would say “Desalination can’t possibly be a solution, because in 130,000 years, we’re going to run out of water”.

I also don’t understand how you can say we’re not putting it back. Where is it going? Where exactly is over a BILLION CUBIC KILOMETERS of water going to go, if not back to the oceans?

Your argument that if we were putting water back in the oceans, then the rivers must be currently being replenished makes no sense.

• Deiseach says:

Okay, answer me this: where is the river water going? We’re pulling more water out of rivers than is replenishing them, which is why rivers are drying up.

That water is going somewhere, but it’s apparently not going back into the rivers.

The ocean gets water from rivers. Since we can fairly roughly rely on sea levels not rising year-on-year, that means the incoming water is being taken out again somehow (that is, by the evaporation cycle that produces rain).

So far, so good.

We can certainly pull water out of the ocean to make up for the shortfall in available water. And the oceans are certainly very big reservoirs.

But they are not bottomless or limitless. And your estimate is based on current levels of usage; that’s like estimating the Colorado River could never run dry based on 1920s levels of water usage. We’re using more and more water; what makes you think that with desalination providing more potable water, water for agriculture, and water for industry (there was a magnesite plant sited by the coast in my youth for precisely this reason: seawater was part of the manufacturing process), water usage would remain steady at 50 cubic kilometres?

And remember, we’re not confining this to California alone pulling water out of the oceans; we are talking about global usage. Going by 2005 figures, California uses 40 acre-feet of water for urban and agricultural use per year. That’s 49,320 cubic metres, which is 0.00004932 cubic kilometres.

That amount of cubic kilometres times 7,000,000,000 (a thousand million is a billion, do we agree?) is 345,250 cubic kilometres per year for the world, which means we’d use up the oceans in roughly 3,765 years 🙂

Now, we don’t have to drain the oceans dry, but I submit that taking even a hundred thousand or so cubic kilometres of water out of the ocean year upon year might cause us problems within a century. After all, it’s the fruits of two centuries of industrialisation that are now giving us problems re: Anthropogenic Global Warming.

Well, I wasn’t seriously arguing that we would drink the oceans dry, I was just trying to point out that super-duper desalination tech is not magic and that the oceans are a finite resource, even if it would take a long time to use them up.

• Anthony says:

Okay, answer me this: where is the river water going? We’re pulling more water out of rivers than is replenishing them, which is why rivers are drying up.

In California, rain and snow fall in the mountains, and flows towards the ocean. We divert a *lot* of the river water from fairly close to where the rain and snow fall, and pipe it to cities and farms in places which don’t get a lot of rain. We divert so much that some rivers do indeed run dry (more than they would naturally – that happens on its own here).

So what happens after the water gets diverted? In cities, people drink it, they wash with it, and they water plants with it. The water they drink gets pissed out, and combined with the wash water, gets sent to sewage treatment plants, where the cleaned-up water gets dumped into the ocean.

On farms, and for the plants people in cities water, some small part of the water becomes part of the food shipped out, and the rest is eventually “exhaled” by the plants. (Some of this happens when leaves and stems fall off and dry up.) The water re-emitted by the plants goes into the atmosphere, where it eventually falls as rain, adding to the freshwater supply (where it eventually goes back to the ocean).

So – with three exceptions, all the water used for human purposes in California ends up back in the ocean. Exception 1: groundwater recharge – some treated wastewater is pumped back into the ground. Most of this will end up back in the water supply system, and some will eventually end up back in the oceans, but on much longer timescales. (But a lot was mined in the 20th century for human use, and was thus added to the oceans.)
Exception 2: human biomass. The population of California has increased by about 20 million in the past 50 years. That’s about a million metric tons of water walking around.
Exception 3: the Great Basin. Rain and snow which fall on the eastern slope of the Sierra Nevada, or in a big chunk of Southern California (including the LA suburbs of Lancaster and Palmdale), never drain to the ocean. The water drains into dry lakes and evaporates or soaks into the ground.

tl;dr – water used by humans mostly ends up back in the ocean; we merely divert it for a while.

• Eric says:

Deiseach, the water cycle is a cycle; water isn’t created or destroyed. When people ‘use’ water, it doesn’t disappear, it ends up in the ocean again eventually. To take the most basic example, what happens when you drink a glass of water? Eventually, it goes down the toilet, hopefully through a sewage treatment plant, and then into a river that flows to the sea, or directly into the sea.

Rivers can dry up if you take enough water out of them, but that water will end up in the ocean somewhere else, unless it gets trapped somewhere, in giant glaciers during the Ice Age for example, and even with mile deep glaciers across half of North America, sea levels only dropped a few hundred feet.

Edit: Anthony already covered pretty much the same idea, except better.

• Deiseach says:

Anthony, this assumption that “there’s plenty more of the stuff there” is what is troubling me, because we’ve had that attitude with regard to a lot of other natural resources, and now we’re starting to see that actually, no, there isn’t.

We don’t have a “100% out, 100% back in” cycle for water. As you say, some of it is locked in humans and other animals and plants. Why aren’t the rivers re-filling? Why isn’t it raining in California? We’re using a lot of freshwater, and the natural replenishment is obviously not enough, not if we’re talking about droughts and low reservoirs and doomsday-mongering about ‘water wars’.

Maybe there is 80,000 years worth of water in the ocean, but that’s not infinity; that’s not within the projected lifespan of the Earth (around another 7 billion years or so). And I’m not saying we have to drain the oceans dry, but we could affect them with increased salinity (pumping brine from the desalination process back into the oceans) for one example.

If I’m being stupid for not seeing how all that water eventually gets back into the ocean, I’ll turn the question back on you: how come all those millions of gallons of water don’t end up back in the rivers, if all the water eventually goes back to the ocean and is evaporated and falls as rain?

“Oh, that’s because we’re diverting the water higher and higher upstream, so the rivers never reach their fullest downstream”. And what about the water we’re diverting from the oceans? We’re pulling that out too before it gets evaporated as part of the natural cycle, and we won’t be doing it instead of taking surface and ground water, but as a supplement to the water we’re currently using.

I may be stupid indeed, but I don’t see how taking more water out of the cycle means the same amount goes back in.

• Protagoras says:

Deiseach, the short answer to why the rivers aren’t refilling is that the rivers didn’t particularly used to have more water in total (well, apart from drought effects, which is part of the current problem, but not the long term problem); rather, the rivers get roughly the same amount of water every year from precipitation, and in California they’re now using more water than the rivers gain from precipitation (and so using up reservoirs, etc. to make up for what they can’t get from the rivers). Rivers aren’t lakes; the whole “refilling” thing is kind of a weird word for it.

• Nornagest says:

We don’t have a “100% out, 100% back in” cycle for water. As you say, some of it is locked in humans and other animals and plants. Why aren’t the rivers re-filling? Why isn’t it raining in California?

Rivers do refill — or, more accurately, there’s a seasonal cycle of greater or lesser flow, driven mostly by upstream precipitation: runoff, groundwater seepage, and/or snowmelt, depending on the situation. To the extent that they don’t at any given time, it’s because it rained or snowed less recently.

It’s raining less in California because California is a semi-arid region that’s prone to periodic droughts, and we’re currently in the middle of one. (Or possibly near the end, if the meteorologists turn out to be right about this year’s El Nino.) The unusual severity of this one may or may not be related to global climate change, but it has nothing to do with water use. Our proper response to it has a great deal to do with water use, but that should not be confused with its cause.

There are a few rivers that have been diverted to such an extent that they don’t reach the sea — the Colorado is one. But even that is best visualized as fully exploiting a renewable resource, not exhausting a finite one.

(None of this is necessarily true for groundwater. Some groundwater resources are fossil — totally enclosed by impermeable geology and therefore not replenishable. Others recharge more slowly than they’re being used, causing shallow wells to dry up over time. This is a serious issue in much of the United States, but the Pacific coast states rely mostly on surface water.)

If you really want to fight the hypo, the way to do it is to point out that there’s a thermodynamic minimum energy needed to desalinate ocean water, and our current efforts aren’t that far off it, so a trillion dollars’ worth of efficiency gains looks impossible. (Or, given the prior post on overconfidence, maybe impossible is the wrong word. What’s your estimate that the laws of thermodynamics are wrong?)
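The thermodynamics point can be made concrete with commonly cited ballpark figures (these numbers are mine, not from the comment; they are approximations, not precise values):

```python
# Commonly cited approximations for seawater desalination energy costs
MIN_KWH_PER_M3 = 1.06     # ~thermodynamic minimum, seawater at ~50% recovery
CURRENT_KWH_PER_M3 = 3.0  # ~a good modern reverse-osmosis plant

headroom = CURRENT_KWH_PER_M3 / MIN_KWH_PER_M3
# only about a 3x efficiency gain is physically possible, not 1000x
```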

But … we probably shouldn’t try to fight the hypo.

• Deiseach says:

(T)he rivers get roughly the same amount of water every year from precipitation, and in California they’re now using more water than the rivers gain from precipitation (and so using up reservoirs, etc. to make up for what they can’t get from the rivers)

Protagoras, thank you, that’s what I was trying to get at (and apparently not succeeding).

There’s a finite volume of ground and surface water. All of it ultimately depends on the oceans as a reservoir. Yes, the water we use will eventually return to the ocean and be recycled, but we are pulling it out faster than the speed of the natural cycle, and this problem will only worsen as global population increases and demand scales up.

Pulling water directly out of the ocean to make up for the shortfall between demand and supply of available fresh water will work as a short-term fix (where “short-term” can indeed mean centuries or longer) but eventually, in the long-term, we’re storing up problems for ourselves if we continue with the attitude that water can indeed be created from nowhere and that we’re not dealing with a finite amount in a closed system. The oceans have a huge volume of water, but indeed it cannot be created or destroyed: what we’ve got now (barring something like mining icy comets to bring in extra-planetary sources) is all we’re going to get. The oceans are like a water tank if the earth is like a house, and the tank is as full as it’s going to get.

There may well be 80,000 years’ worth of water in the oceans if we used them solely as our reservoir for global per capita demand, and that may be effectively “forever” (though our remote descendants might object), but my point is that we don’t have to drain the oceans dry for problems to crop up: increasing salinity by recycling brine from the desalination process into the oceans, increasing salinity by storing brine from the desalination process on land (where that means ‘injecting it underground into reservoirs’), knocking off balance ecosystems of oceanic life that we don’t understand sufficiently, and other problems I can’t forecast because we don’t know what they’re likely to be until we run into them.

Come on, we’ve had someone (half-)seriously conjecture that if we had told 19th century scientists and industrialists about the problems that their activities would cause with anthropogenic global warming, then we could have averted the problem, a problem that they never even imagined because how could human activity affect such a huge, massive system as global climate?

We’re concerned about a one degree rise in temperature and demanding immediate action now. One degree? How bad can that be? A tiny temperature increase like that? Well, apparently it can be very bad:

The IPCC predicts that increases in global mean temperature of less than 1.8 to 5.4 degrees Fahrenheit (1 to 3 degrees Celsius) above 1990 levels will produce beneficial impacts in some regions and harmful ones in others. Net annual costs will increase over time as global temperatures increase.

“Taken as a whole,” the IPCC states, “the range of published evidence indicates that the net damage costs of climate change are likely to be significant and to increase over time.”

What effects will or would pulling 10,000 cubic kilometres out of the ocean each year have, over decades? What effects would come from running arrears of that amount (yes, eventually it will all be recycled back, but we’re taking it out faster than it gets recycled, so each year we’re always 10,000 cubic km behind)? What if the deficit creeps up, year upon year, for one or two or three centuries, always pulling more water out faster than we put it back? We don’t need to drain the oceans dry, but what effects would a fall in mean global sea level of several feet have?
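For scale, the “several feet” question can be put to a quick back-of-the-envelope check (the ocean surface area is the only figure added here, and it is approximate):

```python
OCEAN_AREA_KM2 = 3.61e8   # approximate surface area of the world's oceans
deficit_km3 = 10_000      # the hypothetical annual shortfall discussed above

drop_m_per_year = deficit_km3 / OCEAN_AREA_KM2 * 1000  # convert km to m
# roughly 0.028 m (about an inch) of sea-level fall per year,
# so a one-metre drop would take on the order of 35 years
years_per_metre = 1 / drop_m_per_year
```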

• Nita says:

@ Deiseach

Ha, I can’t believe I’m seeing all this from you of all people.

1. Both your country and mine are very rich in freshwater, so you seem to be demanding that other people make sacrifices that you and I never even have to consider.

2. Wasn’t there a huge outrage in Ireland about having to (gasp!) pay for water just recently?

• Deiseach says:

Nita, the water protests in Ireland are mainly (a) the austerity budgets hit people with a whole lot of new taxes and charges on top of pay freezes or wage reductions, so people are feeling squeezed (b) the government made a piss-poor job of introducing it, and the chickens have come home to roost with the Eurostat refusal to accept that Irish Water was a separate commercial semi-state entity and instead insisted it had to be kept on the government accounts.

Phil Hogan fecking off to the Big Job in Brussels after forcing this through, the “millions for consultants and designing a logo, nothing for infrastructure” mini-scandal when setting it up, “we’re not going to tell anybody what the likely charges in their bills will be but it’ll be reasonable we promise” messing around, etc. simply meant that people were sceptical, fed-up and likely to revolt.

I accept we need to pay for water and that the country’s infrastructure is in poor shape, but the utility was set up in a way that looked like the worst kind of cronyism, seemingly more interested in flogging it off as a privatised company than in actually tackling water provision, and that left a lot of people (myself included) very cynical about the government’s motives.

And yes, we all have to make sacrifices. I’m not saying don’t desalinate ocean water if it’s really needed! I’m simply saying we need to be cautious about treating any natural resource as “Belly up to the bar, boys, there’s plenty more where that came from!”

29. Anomaly UK says:

If you see hundreds of 1-in-a-thousand chances tried, you probably do have a probability model, at least of the sort “I can’t distinguish the chances of these succeeding from any of those 300 which were tried, two of which look like they nearly worked”.

In the flying-saucers-over-the-White-House question, what “no model” means is, you don’t know what the rational choice of action is, and you never will. Even if you do something random and it works out, you’ll never know whether you did the wrong thing and got lucky, or actually did the right thing. If the point of your experiment is to hypothesise a situation in which we have no idea what will happen, then either a mathematical analysis confirms that we have no idea, or it is wrong.

30. Peter says:

Idea: the implicit “probability-like” things in System 1, or the whole of our not-explicitly-probabilistic decision making system, work OK in the context of that system, but tend to be badly miscalibrated if you try to fish them out of the middle of the system.

Tales from machine learning: I had a system that had machine-learning components in a pipeline along with various other stuff. Its final output was a list of things with probabilities attached, and when I checked for calibration, those probabilities were well-calibrated, at least in the 10-90% range.

Some classifiers are based on probability theory, but tend to produce ill-calibrated results. For example, Naive Bayesian Classifiers (NBCs) are notorious for being highly overconfident, giving probabilities far too close to 1 or 0, because they make some unwarranted independence assumptions. Now, assumptions are inevitable – there’s no induction without inductive bias – but they can throw things out of calibration. So you can interpret the outputs of NBCs as probabilities, but it’s better to interpret them as “scores” that scale monotonically with probability, with no more guarantee than that.
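The overconfidence mechanism can be shown in a few lines: duplicate a feature (violating the independence assumption) and the Naive Bayes posterior races toward 1 even though no new evidence has arrived. The numbers here are purely illustrative:

```python
def naive_posterior(likelihood_ratio, prior_odds, n_copies):
    # Naive Bayes multiplies per-feature likelihood ratios
    # as if the features were independent
    odds = prior_odds * likelihood_ratio ** n_copies
    return odds / (1.0 + odds)

# One informative feature (likelihood ratio 3, even prior): P = 0.75
p_one = naive_posterior(3.0, 1.0, 1)
# The same feature counted three times: P = 27/28, about 0.96,
# despite there being no additional evidence
p_dup = naive_posterior(3.0, 1.0, 3)
```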

The last step of my pipeline was a Maximum Entropy classifier – these can produce well-calibrated probabilities, at least, in my experience, under some circumstances (maybe someone more into machine learning theory can say more than this – they are more vulnerable to overfitting than NBCs). The second-to-last bit produced numbers that were based on probability theory, but turned out to be ill-calibrated – the last step took these numbers, treated them as mere scores, added in some extra data, and outputted well-calibrated probabilities. Indeed I tried a version without the extra data, and the classifier acted as a “calibrator” – basically looking at some fresh data, and tweaking the thing that converted scores to probabilities until the results were as calibrated as possible (i.e. log-loss was minimised).
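The “treat upstream probabilities as mere scores, then recalibrate on fresh data” step can be sketched with the simplest possible calibrator, histogram binning (a stand-in for whatever the MaxEnt step actually did; all names and numbers here are illustrative):

```python
def fit_binned_calibrator(scores, labels, n_bins=10):
    """Map raw scores in [0, 1] to empirical probabilities, bin by bin."""
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(scores, labels):
        bins[min(int(s * n_bins), n_bins - 1)].append(y)
    # empirical positive rate per bin; empty bins fall back to the bin midpoint
    rates = [sum(b) / len(b) if b else (i + 0.5) / n_bins
             for i, b in enumerate(bins)]
    return lambda s: rates[min(int(s * n_bins), n_bins - 1)]

# Overconfident scores near 0.95 that turn out to be right only 60% of
# the time get mapped back down to a calibrated 0.6:
calibrate = fit_binned_calibrator([0.95] * 10, [1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
```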

Summary: the second-to-last step produced scores which were useful – especially in the context of the rest of the system – which had the form of probabilities, but which should not be directly interpreted as probabilities [1] due to miscalibration. So if there are things in a machine learning system which are like that, it seems reasonable that there might be things in a “natural learning” system like that too.

Other issue – how well does calibration – that is, human calibration of probability estimates – transfer from probabilities in the region of 0.01-0.99 to probabilities more concisely expressed in scientific notation?

[1] i.e. Don’t multiply them, add them, use them to work out expected utilities, use them as priors in Bayes’ Theorem etc. and expect to get sensible results.

31. Darek says:

The situation you describe in the saucer-shaped craft example seems absurd. It’s true that the situation is novel and it’s hard to find any plausible statistical model. Still, we can split the probability space into two parts: either this saucer-shaped object fits into what we imagine about aliens or it doesn’t.

If it does fit, then that’s saying the President can assume a number of things, like the home civilization of the saucer-shaped craft was advanced enough to even try understanding what is happening on Earth and that we might be confused why it came at all. Or that if it has friendly intentions, then at least they will cease fire before trying to communicate. In such setting it is reasonable to put the military on alert and argue against shooting at the craft so soon.

On the other hand, if the intentions are hostile, then putting military on alert can’t hurt much. More importantly if this saucer-shaped object doesn’t fit into what we imagine about aliens, then indeed the effects of our actions are impossible to assess, any such act can cause either good or bad outcome, including the not-doing-anything option (which, as you said, is “a choice too. Just a stupid one”). Whatever we do, we won’t know what will happen, so we might just as well do the thing that helps us in the case where these aliens are not-so-alien alien after all.

In other words, the President may refuse to act without any basis, but can still infer that putting military on alert and not banana plantations on alert is the thing to do. The President’s decision implies a probability judgment, but it doesn’t imply he made it without a model.

Perhaps the real question is what do you consider a model? I think that using probabilities without a model is nonsense, because probabilities don’t exist outside of a model. The numbers between 0 and 1 become probabilities only in the context of some model. When you say “probability without a well-defined model”, perhaps you mean the probability of some implicit well-defined model you are not sure how to describe accurately, yet you are sure it exists? Perhaps there are multiple well-defined models your mind is considering, all satisfying the claims you want to make, but you are unsure which one to choose, and thus it seems the model isn’t well defined? Or you might not know how to cover all the possible cases, but it doesn’t matter?

All in all, talking about probabilities without a well-defined model is talking just about some probability-like numbers (which is just fine as long as you don’t claim them to be probabilities; rules of thumb, heuristics or “good practices”, very often lead to desirable outcomes). On the other hand, being unsure about the model doesn’t mean you don’t have one in mind.

Consider the mathematical concept of a random variable. Random variables are like guinea pigs—neither a pig, nor from Guinea. Random variables are functions, which are by definition deterministic. However, they still model something we think of as chance, randomness or uncertainty. We write X + Y instead of X(ω) + Y(ω) and abstract over the implicit parameter ω, so that we can think of X and Y as random. However, if you were to omit all that and just say “let X represent the outcome of a fair coin throw”, then that wouldn’t be acceptable to others; you would get a lot of blank stares, misunderstandings and claims like “we can’t work with X without it being well-defined”. It is this context that makes it work.
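The guinea-pig point can be made fully explicit for the coin example: the “randomness” lives entirely in the measure on the sample space, while X itself is an ordinary function.

```latex
% A "fair coin" random variable made explicit as a function on a sample space
\Omega = \{H, T\}, \qquad \Pr(\{H\}) = \Pr(\{T\}) = \tfrac{1}{2},
\qquad X(H) = 1, \quad X(T) = 0,
\qquad \mathbb{E}[X] = \sum_{\omega \in \Omega} X(\omega)\,\Pr(\{\omega\}) = \tfrac{1}{2}.
```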

In my opinion mathematics is in large part about putting our fuzzy, messy intuition into the right context, a framework which makes it acceptable to others. In my experience, people unfamiliar with math, and without a big enough inventory of tricks and practices to (for example) make their model well-defined, may struggle to convey their thoughts. It’s easy to say they are wrong, because they are if taken literally. Yet it doesn’t mean their intuitions are wrong; it just takes an effort to understand them and put them into the right setting (and of course, they might then indeed be wrong, but the arguments will be much different).

In short, talking about probabilities without a well-defined model is nonsense, but I doubt what you call “probabilities without a model” are actually probabilities without a model, rather it’s only hard to pinpoint what the model is (or perhaps you are right, in which case you are wrong ☺).

• TrivialGravitas says:

Importantly, it’s a model and it’s NOT a probability model. It’s a tradeoff model in which the action is the best possible for multiple predictable outcomes and equally bad for unpredictable outcomes. The probability of ‘hostile imaginable’ ‘not hostile imaginable’ and ‘not imaginable’ aren’t calculated in the decision.

32. I’ll join in with the people who think there actually is a model in the president/UFO story.

For variety, I’ll give an example of a situation where I think there is no model: some no-holds-barred version of the Simulation Hypothesis.

Correctly understood, if you are inside a simulation you have no way of knowing what the outer, embedding reality is like, not least because your simulators could be systematically fooling you about computation, physics, maths, logic, or anything. You have no model, and therefore no way of calculating a probability within that model. Something like Occam’s razor seems to hold out the promise of being able to make comparisons between models, but radically different models, essentially different ontologies, can be incommensurable, as a result of defining “entity” differently.

(Bostrom’s version of the SH, where we are being simulated inside a computer inside a universe similar to our own, is a variation on a familiar ontology. That gives you a model, which you can use to calculate a probability, which Bostrom famously puts as high. But he might be wrong.)

• Anonymous says:

there actually is a model in the president/UFO story… For variety, I’ll give an example of a situation where I think there is no model

I think an easier-to-grasp example (that is very relevant for the coming machine apocalypse) might be the current work of letting machine-learning algorithms have a go at video games. They’ve done pretty well at relatively simple games with relatively well-behaved and obvious goal functions. If the game gives you a ‘points’ metric, use that. If it’s ‘how long can you stay alive’, use that. For one machine-learning Mario player I’ve seen, they used ‘how far right across the screen you are’.

In these cases, you can start out with remarkably little modeling. Ok, the Mario modeling gave them “good squares you can stand on” and “bad squares that will kill you”, which is a substantial model… but with other games like Pong, you don’t have to say much besides ‘play as long as you can; losing hurts’. So you have next to no modeling, and no priors. What do you do? To be honest, they do exactly what Scott said the President shouldn’t do – absolutely nothing. They do nothing. They generate no points. They die. Game over. Then, the game is started again, and they inject a random perturbation into whatever structure they’ve chosen for the algorithm. This one dies just about as quickly. It may take actions that are actively harmful, going in the opposite direction (Mario can move left, which can be registered as negative points according to the magically well-defined metric). After a few hundred or few thousand iterations of dying spectacularly, the algorithm starts to catch on. …then, when you move to the next game title, you start the process over.
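The perturb-die-retry loop described above can be sketched as random hill-climbing on a toy “move right” game (everything here is a made-up stand-in, not the actual Mario setup):

```python
import random

def score(policy, goal=10):
    """Toy 'Mario': reward is how far right a fixed action sequence gets you."""
    pos = 0
    for action in policy:
        pos = max(pos + (1 if action == "right" else -1), 0)
        if pos >= goal:
            break
    return pos

def hill_climb(steps=5000, horizon=10, seed=0):
    rng = random.Random(seed)
    # start with a random policy; it almost certainly dies far from the goal
    policy = [rng.choice(["left", "right"]) for _ in range(horizon)]
    best = score(policy)
    for _ in range(steps):
        candidate = list(policy)
        # inject a random perturbation into one action
        candidate[rng.randrange(horizon)] = rng.choice(["left", "right"])
        if score(candidate) >= best:  # keep perturbations that do no worse
            policy, best = candidate, score(candidate)
    return best
```

With no model of the game at all, the loop still ratchets toward the all-“right” policy after enough spectacular failures, which is the point of the comment.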

Notice that people don’t play more sophisticated adventure games or RPGs with unstructured machine learning algorithms. They’re complicated; different segments of the game often represent entirely different constructs; there are multiple paths to victory; there aren’t often simple continuous measures of progress; there are complicated trade-offs. I imagine that if you spent enough money on hardware, and handed a Zelda game to a machine learning algorithm with the only goal being “getting to the cake/thank you screen is a 1; anything else is a 0,” the number of massive failure iterations is going to be orders of magnitude higher (orders of magnitude buried in the exponent) than what you see in the simple games.

So why can people pick up a copy of Zelda and maybe only fail a few times? We have a model, albeit one that isn’t super sophisticated at first. We know what video games are about. There aren’t that many genres. The goal is to explore and acquire things; many times we can also process the game explicitly telling us what the subgoals are.

In the same vein, the President has a general model for a conception of aliens and what aliens might be interested in – strangely enough, it’s probably also “explore and acquire things”. Perhaps it’s not very sophisticated, and they’re just looking for bananas all along… but even that vaguely falls under the “acquire resources” model of what alien agents might be trying to do.

33. Jordan D. says:

So I’m guessing that this is a continuation of your musings on Type I/Type II system dynamics, which you began in your 2013 utility weight posts and sort-of-formalized in your Made-Up Statistics post. (I guess it also ties into a lot of earlier Bayes v. Intuition stuff on LW, but I’m not super-familiar with the history of that board)

In this post, it seems to me like the starting point is at the end – you will never have perfect information when making a decision, but you’re still obligated to make one. Our regular method for making a decision in the absence of enough data to create a formal model or heuristic is to generate an informal model in our head* and go with the output – a System I approach. The more quantified approach advocated in Made-Up Statistics says that we should assign our intuitions some rough probabilities and see if they’re generating reasonable numbers – System II.

As you noted in your early posts about QALYs for general utility calculations, System I trades off against System II, so we’d expect each type of reasoning to introduce errors which the other type of reasoning might not; for example, if I use System I to figure out how much of a threat I think superhuman AI will end up being, I tend to dismiss the problem because it seems too far outside the realm of my experiences to feel ‘real’. On the other hand, if I use System II, then I’m at risk of allowing Nick Bostrom to Euler me, because he’s much better at math than I am.

So which one is the right answer? There aren’t any right answers! There’s only the resources which I have to distribute and manage and all of the good causes in the world which I have to pick between. Ideally, I would use both systems- generating results in one and sanity-checking them with the other until I’m comfortable with my decision. But the thing about Systems I & II is that you don’t have to convince people to try System I, because everybody uses System I all the time!

So that is my take-away from all these posts; we ought to use System II more because it will show us errors in our System I reasoning, and since we naturally use System I anyway, we might as well focus on improving System II.

*I realize that technically everything we know or experience is a model we generate in our heads, but I think that’s a level up.

34. Albatross says:

In risk management, we mitigate risk but we can’t eliminate it. Given finite resources, we need to tackle risks with a high probability first, and if the risk is low enough and the cost too high, then we just document that we thought it through and move on. The risk levels are absurdly coarse: high, medium, low, not yet determined.

So we start with scope: in the aliens example, the scope is humans (or, for the President, maybe Americans). How many are likely to be impacted? Unless the aliens just buy some bananas, take a photo, and leave, maybe ALL of them, either via peaceful trade or because interstellar travel means they could attack. We don’t know. But if this were a risk in a bank product, we could rule out customers that don’t use those products.

Then we look at costs in relation to scope. If a product only 11 customers use has a risk, we probably just discontinue it unless the margin is amazing. If a product the majority of our customers use has a risk, we can justify higher expenses.

With the aliens, there is a kind of model: are different species with the power to kill each other likely to kill each other? Yes. Thus mobilizing the military makes sense.

A good example of scope is psychic powers vs pandemic. Even people who think psychic powers exist think they are rare or at least weak in most people. Lots of diseases affect many people right now, so the scope of the low probability pandemic is more meaningful than a high probability of psychic powers. AI risk has a large scope.
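The scope-times-probability triage described above can be sketched as a tiny risk register (the numbers, and the convention of treating “not yet determined” as medium, are my own illustrative assumptions):

```python
# Map qualitative probability bands to crude weights
LEVELS = {"high": 3, "medium": 2, "low": 1, "not yet determined": 2}

def priority(risk):
    # crude expected impact: probability band x fraction of people in scope
    return LEVELS[risk["probability"]] * risk["scope"]

risks = [
    {"name": "pandemic",       "probability": "low",    "scope": 1.0},
    {"name": "psychic powers", "probability": "high",   "scope": 0.01},
    {"name": "bank product",   "probability": "medium", "scope": 0.1},
]
ranked = sorted(risks, key=priority, reverse=True)
# the low-probability, everyone-affected pandemic outranks the
# high-probability but tiny-scope psychic powers
```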

Next is cost. Pascal’s wager can be achieved by very poor people. Fighting poverty, improving solar, desalination, fighting disease all provide economic benefits beyond their immediate goals. That is they are investments that yield future or even current GDP. Asteroid defense, non-proliferation, and AI risk are different. Asteroid defense could easily result in 1,000 years of spending and never get used. In 1,000 years people are still likely to be able to create nuclear weapons or AI so those too require constant vigilance from here for a millennium. We might eradicate ebola and heart disease and depression. We might reach such economic heights that poverty no longer exists. Chickens might have their utopia.

There are people arguing probabilities, but while there are certainly people with scope, impact, and probability concerns, the most vocal opposition is from the people who want to roll the dice for 1,000 years on asteroids and eradicate disease. They are helped in that most of those dice rolls happen after they are dead.

The solar/desalination example is perfect. Make the solar company promise to put a percentage of profits into the desalination project. If you want 1,000 years of international AI risk enforcement then you’ll need to convince people their immediate concerns will be dealt with.

• Deiseach says:

The solar/desalination example could work the way publishing companies work. They put out trashy celeb memoirs which are ghost-written hack jobs but sell by the bucket-load, and the profits from those pay for the small-selling literary novels.

Fund the solar project, get them to pay a percentage of the grant back/take a share of stock in the company/ask for royalties over a twenty year period etc., use that to fund interesting but risky projects like the desalination one.

35. Tim Martin says:

“Instead of asking ourselves ‘is the probability that the desalinization project will work greater or less than 1/1000′, we should ask ‘do I feel good about investing this money in the desalinization plant?’ and trust our gut feelings.”

… Maybe this generalizes. Maybe people are terrible at coming up with probabilities for things like investing in desalinization plants, but will generally make the right choice.

I thought this was a strange thing to say. I assume you’ve read Thinking Fast and Slow? The research on intuition seems to indicate that intuitions are accurate only when you’ve had lots of experience with the types of situations about which you’re making a decision, and – a very important detail – that you’ve received feedback from those situations that indicates whether your judgements would have been right or not. In all other cases, our intuitions lead us to confidently believe things that very well might be wrong.

In the absence of the kind of experience that generates good intuitions, I would think that using explicit data to make decisions (which is essentially the same thing as “coming up with a probability”) is better than using your gut.

• 27chaos says:

Thinking Fast and Slow relies on lab experiments which are aimed at finding situations where human biases mislead humans. Real world evidence of the superiority of system 2 over system 1 is rather scarce.

• Urstoff says:

This is basically Gigerenzer’s point, right? Heuristics use System 1, but they also work much of the time.

As an aside, it’s kind of a shame that Gigerenzer isn’t nearly as popular as Kahneman. He’s written some great books and has a research project that is just as interesting and valuable.

36. Alexander Stanislaw says:

There are two steps here:

1: People’s beliefs can be descriptively modeled using probabilities. (more uncertainty is represented via a lower number). All of your examples support this.

2: People’s beliefs can be mapped onto the real number line and then manipulated using arithmetic and expected value calculations in a self consistent way. Moreover this style of reasoning is superior to any other way.

I fail to see where the leap from 1 to 2 was justified. There are an infinite number of mappings from a set of beliefs onto probabilities. Specifying one is where the need for a model comes in. Moreover, the fact that idealized rational agents often (almost always) do things much dumber than a human is surely reason to think that maybe the human way of reasoning isn’t so terrible.

All models are wrong, but pure Bayesian epistemology isn’t wrong in the sense that quantum mechanics is wrong (missing subtle effects and rather unwieldy, though that is true of both); it’s wrong in the way that empirical computational models are wrong. You can get them to behave well under test cases and then follow them off a cliff when you aggressively generalize them without being able to recheck your error estimates.

37. J Thomas says:

The problem here is that you’re starting out with situations where you don’t know enough to make a decent decision. You just don’t know.

So if you try to express that with probability theory, if you do it honestly it will give you an answer that says you just don’t know.

Something that looks like space aliens is about to land in the USA. Do you alert the military? There’s a chance that if you don’t, the aliens will establish a toehold you can’t get them out of, and they will exterminate humanity. There’s a chance that they are basically peaceful, but if they detect that you have contingent plans to act against them militarily they will decide you are an evil aggressor and destroy you. Which chance is larger? You can use everything you know about space aliens to decide that. And everything you know about space aliens amounts to….

If you let the Israelis shoot a missile at the space aliens, there’s a chance the space aliens will be destroyed and the problem is over until the follow-up armada arrives. There’s a chance that the space aliens will be unharmed and will destroy humanity because of our stupid attack. There’s a chance that they’ll settle for destroying Israel and a chunk of the middle east, which would solve some problems for the USA. Letting the Israelis shoot a missile at them is better than shooting a missile at them ourselves, assuming the Israelis are as competent as we are at using missiles to destroy space aliens.

“Should we put the military on alert?”

“Maybe. Maybe not. Putting the military on alert might help. Or it might hurt. We have literally no way of knowing.”

This is correct. We have literally no way of knowing. The President has to choose anyway. That’s his job. The information is usually inadequate. He can’t make anything like a good prediction of results. His estimates of probable results are bullshit. And he still has to choose.

Typically, if he does the probabilities correctly, his confidence interval will include the entire range. What good is it? Well, but if he biases the estimates in a pleasing way, he can get a model that tells him what to do based on bullshit. That feels more comfortable than admitting he has no idea what he’s doing, but he has to choose anyway.

Oh, the linguists? Send them. They might do some kind of good, and the worst likely result is you wind up with a few linguists inside the blast zone. Of course there’s the chance the space aliens will send them a message that comes from some human transmission that’s somewhat randomized, and the linguists will ask for time to study it, and then the space aliens quickly win the war when a military with a quicker response time might have stopped them. I don’t know how to estimate the chance of that, unless the linguists have authority to communicate but no authority to slow the military response.

If the aliens are good at communicating with us, I’d suggest telling them “We’re kind of nervous about you since you look like you’re stronger than we are, would you mind talking to us from orbit for a long while, while we decide how to respond?” And then if they come down anyway, then attack as best we can. But what if they respond “We desperately need a planet for a short time for emergency repairs, and we’d like to buy from you two tons of water and fifty grams of Cobalt-60. We’ll pay well.” Then what do you do?

It’s GIGO. When you don’t know the facts you desperately need to know, no amount of probability theory will create those facts for you.

But when you do have a lot of information and you’re not sure how it fits together, then probability theory might help.

38. Anonymous says:

Modeling errors are important, but so are measurement errors. Suppose I’m writing an estimator for a simple robot. It’s simple enough that I have easy access to what could be considered an arbitrarily high-fidelity model of the underlying dynamics. Now, I put a measurement device on it so that I can take data to feed into my estimator. However, this measurement device is awful. It has obscene amounts of noise. Standard estimators include updating a measure of the variance (more complicated estimators try to sample a whole pdf). If your measurement noise is sufficiently large, this measure will show that your estimate is absolutely worthless.
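A minimal sketch of that point (a scalar Gaussian update with my own toy numbers, not the commenter’s robot): with a decent sensor, one measurement shrinks the variance a lot; with an obscenely noisy one, the posterior is barely better than the prior, which is the estimator’s way of saying the estimate is worthless.

```python
def bayes_update(prior_mean, prior_var, measurement, meas_var):
    """Conjugate Gaussian update for a one-dimensional state."""
    k = prior_var / (prior_var + meas_var)   # Kalman gain
    post_mean = prior_mean + k * (measurement - prior_mean)
    post_var = (1 - k) * prior_var
    return post_mean, post_var

# Decent sensor: the variance drops by an order of magnitude.
_, v_good = bayes_update(0.0, 1.0, 0.5, meas_var=0.1)
# Awful sensor: the posterior variance is essentially the prior
# variance, i.e. the measurement told us almost nothing.
_, v_bad = bayes_update(0.0, 1.0, 0.5, meas_var=1e6)
print(v_good, v_bad)
```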

This is a pretty reasonable way to say that we could use a numerical method to show, “Uhhhh, you can’t use probability here.” (…I could probably drag a proof out if I thought about it for a few hours or spend some time on Google Scholar…)

39. LPSP says:

The reason why the President of America puts his military on alert and not his banana plantations is not because it’s any more likely that the aliens will attack than they will ask for fruit. It’s entirely possible the aliens require potassium-rich vegetable matter and are just looking for a trade, for instance.

No, the reason why the president puts his military on alert is because if it turns out the aliens want bananas, then it isn’t a problem to organise that after the intentions are cleared up. But if the aliens want to slaughter us, by the time they’ve started dropping bombs it’s too late to act.

And if the aliens want to take our bananas by force, there’s nothing better than having military units across the country to intercept the fruit thieves where they land. And if they claim to want banana trade but actually want to attack us, having a military in position is a good deterrent.

“But what if the aliens have special mind control technology and they’re banking on us putting the army in position so they can hijack it?” So what indeed. If they have that kind of power, it really doesn’t matter what we do. “Contrived” would not begin to cover the set of scenarios in which the aliens are able to take advantage of our military deployment, but would be helpless WITHOUT the army on alert.

TL;DR I think the debate is missing the woods for the trees here. Or maybe Scott’s example isn’t good. In any case, I’m with Scott.

• Deiseach says:

And the reason the President knows putting the military on alert is probably a good thing to do does not depend on “Well, here’s something that never happened before and we have no prior experience of, whatever shall we do?”

Human history does include many examples of how, what, where, when and why exploration fleets from more technically advanced cultures land in territories belonging to less advanced cultures, and what the likely motives of such may be, from Columbus landing in the New World to America itself sending gunships on a trade mission to Japan to encourage them to open up their markets to the outside world.

Now, we may be extrapolating completely wildly here, and it’s only on our own savage little planet that bigger dogs meeting smaller dogs goes badly for the smaller dogs. But even if the aliens are space hippies who just dig our organic, wow-they-still-grow-crops-in-the-soil-here yellow fruit, mobilising the military to control our own civilian populations and send out signals to other national governments at a time likely to be of massive disruption and mass panic is a good idea in general, because we already know what mass hysteria, mobs running riot, and opportunistic foreign interference look like in our own cultures and past history.

Sure we can extrapolate the risks of AI, but until we have some AI to get an idea what it can and can’t do and might and might not do, hysteria about existential risk of civilisation and species-ending doom is not helpful.

• houseboatonstyx says:

And in this mundane direction, what makes the President or anyone else think that the military is not already on alert? They’ve done what they want to, and their report has only one blank space left, which will be filled in with “Yes, Ma’am” or “Sorry, Ma’am”.

• Deiseach says:

Because we have chains of command, and the military has to wait for the civil power to authorise it, unless we’re talking a coup by the generals and declaration of martial law, which means we’ve got more problems than merely the aliens overhead.

Given that we’re talking about the U.S. military, rather than the military of one of the African nations, then there’s a lesser chance that the generals will seize power during a nationwide state of emergency (though perhaps, if the generals are recommending “Launch the missiles!” and the President is refusing, they might consider that their assessment of the situation was better and so for the safety of the world, they needed to take control).

In the AI hypothesis, I’m saying we should be keeping control in the hands of the civil government. You’re saying the AI would be the military declaring a coup. Which is more likely? Well we don’t know that yet.

• keranih says:

…and now I’m supposing a quasi-sentient AI who has decided that it would be fun to play intercept between the military leadership and the civilian leadership (intercepting and rewriting all the phone calls, emails, and secure messages that it can get) and succeeds (more or less by accident) in convincing the military that the civilian govt has gone insane and intends to start WWIII/wholescale slaughter of the public, at the same time as the AI convinces the civilians that the military is trying to stage a coup/wholescale slaughter of the public.

(I’d have to throw in some convenient earthquakes/Russia getting stupid/Iran nuking Israel/etc to make it believable, but I find this sort of thing (ie, what is essentially a child crying wolf for fun) far more likely than a malicious AI. YMMV.)

40. Sebastian H says:

The problem with made-up statistics is that their error bars are so enormous that they swamp everything if you multiply them together four or five times.
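To put a number on “swamp everything”, here is a toy calculation (the five factors and their factor-of-ten uncertainty are my own invention): multiply five made-up quantities together and watch the range of the product explode.

```python
# Five made-up quantities, each "known" only to within 10x either way.
point_estimates = (2, 3, 5, 4, 2)
low = high = 1.0
for f in point_estimates:
    low *= 0.1 * f    # pessimistic end of each factor
    high *= 10 * f    # optimistic end of each factor

spread = high / low   # each factor contributes 100x, so 100**5 overall
print(spread)         # ten orders of magnitude
```

Any conclusion drawn from the product is dominated by where in that range you happen to land, not by the point estimates.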

The problem with made up statistics about unreasonably large numbers of future human beings is that you don’t invoke them evenhandedly in all arguments so there is probably something else going on.

Try it in an abortion argument, for example. Each abortion cuts off tens of millions of future human beings under the assumptions outlined above, and it does so in a much less speculative way than the risks MIRI’s AI research guards against. Without going into anything more than that, we let arguments about a single woman’s particular short-term happiness override those tens of millions of future human beings.

Add in the idea that you may be aborting the one person who can solve the AI friendliness problem IN TIME (sure that is maybe only a 1 in 8 billion or so chance but that isn’t so big) and we are approaching ABSOLUTELY MUST BAN ABORTIONS territory right?

41. anon says:

Typo: “exatly the normal way” => “exactly the normal way”

42. strident says:

Nor is this a crazy hypothetical situation. A bunch of the questions we have to deal with come down to these kinds of decisions made without models. Like – should I invest for retirement, even though the world might be destroyed by the time I retire? Should I support the Libertarian candidate for president, even though there’s never been a libertarian-run society before and I can’t know how it will turn out? Should I start learning Chinese because China will rule the world over the next century? These questions are no easier to model than ones about cryonics or AI, but they’re questions we all face.

They are easier to model. That we know something about the AI that runs your Facebook feed does not mean we know about human-level AI. That something could theoretically happen, and is allowed by the laws of physics, does not mean we know anything about how it actually would.

On the other hand, we know something about nuclear wars that might destroy the world, or public opinion supporting Libertarian Party positions, or economic growth models applying to China.

Scientific knowledge evolves step by step. A proposal to discover new knowledge is not credible unless it’s reasonably well motivated and maybe close to what we already know. This is why proposals to study gods would not get funded, except at theology or philosophy departments!

You can definitely make forays at the margin into the unknown world where there are no models, and you must, to make progress. That’s different from entirely disregarding the state of human knowledge.

43. Aegeus says:

Last post, you argued we should never set extremely high or low probabilities for most events. This post, you argue that we should always reason by assigning probabilities, even when our instincts tell us that our probability estimates will lead us astray. Putting those two together, you seem to be arguing that nearly any event on earth should be considered at least slightly probable, and we should always make decisions with that in mind.

I know you said in this post to stop saying “Pascal’s Mugging,” but this isn’t just a particular mugging, this is a general argument for accepting every kind of mugging there is!

44. Aegeus says:

I think another thing missing from this discussion is that decisions aren’t usually one-offs; you can change them over time. I would be against shooting a missile at the aliens not because I think they’re more likely to be friendly, but because it irrevocably commits you to a course of action – however friendly the aliens were before the missile struck, they certainly aren’t now. Putting the military on alert but not shooting is an action that leaves your options open, while positioning you to act immediately if the answer is “Yep, they’re hostile.”

Likewise, I don’t have to invest 100% in the Magic Desalination Project. I can give them a small grant, ask them to give it a test run in the lab, and see if it works out before I draw up plans to make the desert bloom.

In general, when someone says “I don’t know, let’s do nothing,” they aren’t saying “I don’t know so I’m rejecting your question out of spite.” It’d be more charitable to describe it as “I don’t know, let’s get more information first.” You can plan to improvise; you can plan to gather data before you make a plan.

The only reason this looks absurd with the President is that he’s one of few people who can take potentially useful actions before any information is in. The military will be broadly useful in many situations with aliens, he has experts on speed-dial for a variety of situations, and so on. But for a layman without those resources, all they can really say is “I’ll wait for the experts to weigh in,” which is just a fancy way of saying “I’ll do nothing.”

To bring this back to the original topic, donating money to MIRI is also a commitment to a course of action, and I think it’s reasonable to say “I’ll wait to donate to them until someone does a little more work on AI and shows me that yes, it really is just a naïve, self-improving goal-optimizer, so MIRI’s work will be important to that.”

45. vV_Vv says:

Nor can you cry “Pascal’s Mugging!” in order to escape the situation. I think this defense is overused and underspecified, but at the very least, it doesn’t seem like it can apply in places where the improbable option is likely to come up over your own lifespan.

A startup proposal with the potential to generate a trillion dollars of value will likely come up over your lifespan? I don’t think so. I don’t think that even Google made that much money.

You also quite cheated when you said:

Your organization is risk-neutral to a totally implausible degree.

No real organization is. Effective altruists claim to be risk-neutral with respect to QALYs or lives saved, but I doubt that in practice they are risk-neutral enough to give any consideration to any charity with a probability of success on the order of 0.1%.

So you are back to multiplying coarsely estimated small probabilities by coarsely estimated large utilities. Pascal’s mugging indeed.

we should ask ‘do I feel good about investing this money in the desalinization plant?’ and trust our gut feelings.

Which sounds better than pulling probabilities and utilities out of the terminal port of your gut and multiplying them together, adding them up and comparing them.

It’s a mental ritual that does not make your decision-making process any more formal than just “trusting your gut”. It’s not what humans intuitively do, and, as you note in your study about doctors, it’s actually empirically inferior. Especially if most of the terms in your sums have different signs and are the product of a very small number times a very large one; in that case your decision would be determined by the errors in the estimates.

But refusing to frame choices in terms of probabilities also takes away a lot of your options. If you use probabilities, you can check your accuracy – the foundation director might notice that of a thousand projects she had estimated as having 1/1000 probabilities, actually about 20 succeeded, meaning that she’s overconfident.

Sure, but at the end of the day, you only care about how much money she makes, not whether she is overconfident. You can actually measure how much money she makes, which is what you want to incentivize her to optimize. Optimizing for anything else would likely harm your profits.
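For what it’s worth, the calibration check quoted above is cheap to run. A sketch with the numbers from the quote (the Poisson approximation is my own shortcut):

```python
import math

# The director called 1000 projects "1/1000 shots"; about 20 succeeded.
n, p_claimed, observed = 1000, 1 / 1000, 20
expected = n * p_claimed          # ~1 success if she were calibrated
implied = observed / n            # her "1/1000" shots hit ~2% of the time

# Chance of seeing 20+ successes if 1/1000 were right (Poisson
# approximation to the binomial; fine for n = 1000, p = 0.001).
p_tail = 1 - sum(math.exp(-expected) * expected**k / math.factorial(k)
                 for k in range(observed))
print(expected, implied, p_tail)  # tail probability is vanishingly small
```

Twenty successes where one was expected is not bad luck; it means the “1/1000” labels were off by a factor of about twenty.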

46. Anonymous says:

Concerning Edit-3, it’s a legitimate problem. My PhD is in dynamics/control, so I’ve spent an inordinate amount of time thinking about models (…not the ones in the magazines; I’ve spent a more “ordinate” time thinking about those).

Typically, in a Bayesian programming framework, we have at least two levels of modeling. The first is the general environment in which you’re working. Do you have a discrete sample space? Continuous? This is informed by your (hopefully intelligent) selection of salient variables, plausible measurements, and desired outputs. This absolutely is the “any information at all about the problem” type model. If we can’t put together any information about this whatsoever, then pulling numbers out of our butt will not only be unhelpful… they won’t even make sense.

The second type of model is a local model or a process model. This is most obvious in a dynamic setting (which is also most relevant to all the types of questions we really care about), but can be seen in static settings as well (even if you get the same measurement every time you survey people and ask if they beat their spouse, your process model says that this is an underestimate). In the dynamic setting, we’re modeling the process over time. The easiest example is, “The robot is in configuration X1 at time T1; at time T2, it will be in configuration X2.” Climate models are process models. Moore’s law is a process model. Development of AI research is a process model. Calculation of consequences is a process model… a stupid hard one, at that!

I’ve written comments above about modeling error, measurement error, and how they can reduce our Bayesian calculation to absolute meaninglessness, but I’d like to emphasize how insanely important our choice of process model could be. One thing we care about a lot in control is stability. Well, it turns out that depending on your choice of a model, we can make the same phenomenon stable or unstable! The example I always use is Jupiter’s Red Spot. If we’re inside of it, modeling the weather, it’s incredibly unstable. If we’re outside of it and thinking on the scale of the whole planet, it’s remarkably stable. (…one of my favorite papers has a pithy title concerning types of stability, “Asymptotic stability is exponential stability – if you twist your eyes”.)

I absolutely object to just assuming that we have a suitable process model. Sure, you can always just pick one, but that doesn’t mean it’s going to give you any useful predictive information. “Well, climate change either exists or it doesn’t exist. That means it’s 50/50.” That’s a valid probability model… if we have literally no other information.

refusing to frame choices in terms of probabilities also takes away a lot of your options. If you use probabilities, you can check your accuracy – the foundation director might notice that of a thousand projects she had estimated as having 1/1000 probabilities, actually about 20 succeeded, meaning that she’s overconfident.

That’s injecting a process model! We’re modeling a biased static classifier. We can’t get away from this.

Now, once we have a process model, we can ask whether or not the output it gives us is garbage. With a nice time-dependence in the model, we can even estimate how far into the future we can predict before the model gives us garbage. If we look at all our process models and they all give us garbage, we can pull a number out of our butt… but as I mentioned above, it’s probably highly dependent on our model of the sample space (…either climate change exists or it doesn’t… either the coin comes up heads or tails…).

47. Walter says:

Looks like it’s all in how the questions are phrased. “Mr. President, should I not transfer a bunch of money to Walter’s personal bank account, in this time of crisis?”

48. Bill Walker says:

Your confidence in using a Cold War military to protect you against aliens is not consistent with the usual standard of your posts… unless you think all star-crossing species have a slapstick sense of humor. (Do you understand that weapons only work if their targets move less than a thousand times as fast… never mind).

Put those $#@! banana plantations on full alert, at least that COULD work. (The linguists will just get us all killed, as well as being completely stupid… the aliens can speak English if they want to.)

49. Gunnar Zarncke says:

Following your intuitions to make decisions actually employs a model, just one that is not accessible to your conscious introspection. Your intuition is basically the output of your complex neural circuitry trying to interpolate reality (and luckily it can do so on imagined realities too). It is a heuristic model, no doubt, and doesn’t explain anything in terms of familiar symbols, but is a learned input-output relationship not a model? Just because your neural net hasn’t converged enough to put concrete thoughts on it (mapping to existing concepts/symbols) doesn’t make it no model, abstractly speaking.

This is actually my explanation of prior probabilities in bayesian reasoning. Where do you get the prior probabilities? These are like the ground terms of linguistics (grounding problem). And in language your brain also acquires the words by mapping complex perceptions to words via your neural circuitry. Granted we do not yet understand how this mapping works in detail, but there is a path from perception via vague neural net representations to (auto) encodings of these to symbols/concepts. For probabilities only one additional conscious (learned) step is needed to translate intuitions into numbers.

50. Tom says:

Here’s how I think the process works:

We use internal heuristics which suggest a plan of action and have an associated feeling for how likely things are to work. This is a very coarse scale with discrete values; for example: negligible, low confidence, medium confidence, high confidence, and near-certain.

When you press someone to assign a number to their guess, you run into psychological factors. If there are many possible events and one of them has a 50% chance of occurring while the others are much smaller, that’s going to dominate. However, 50-50 is a coin flip and a coin flip might as well be a wild guess. There’s the old sophism: “Either it happens or it doesn’t, so it’s 50-50”. So the numbers get skewed to produce the appropriate psychological effect. The reason being that probabilistic thinking can be rather unintuitive at first glance and so most people don’t have a mental model of what, say, 80% certainty really means.

And this is all before the specter of risk arises. If people are bad at assigning a numerical probability to their intuitive feelings of confidence, they’re nigh-hopeless at quantifying feelings of risk. Let’s go back to that desalination plant. Maybe option B has a 1/500 chance of working. It has higher expected value, but it might still be too risky to be worth the investment.

As stated, the problem seems to imply that there are only resources available for one of the options. In that case, the risk is way too high by any reasonable measure and such a bet shouldn’t be taken except in desperate circumstances. Betting on long odds can work out in the long run provided you have a lot of money to work with and each bet is small compared to your bankroll.
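That last point can be made concrete with the expected log-growth of the bankroll, in the spirit of the Kelly criterion (the commenter doesn’t name it, and the numbers here are mine): a 1/500 shot paying 1000x is a “good” bet on average, yet staking too large a fraction of the bankroll still loses in the long run.

```python
import math

# A 1/500 shot that returns 1000x the stake when it hits.
p, q, gross = 1 / 500, 499 / 500, 1000

def log_growth(f):
    """Expected log-growth of the bankroll per bet when staking
    fraction f. Staking everything (f = 1) is ruin with probability
    499/500, so only small fractions are interesting."""
    return q * math.log(1 - f) + p * math.log(1 - f + f * gross)

big_stake = log_growth(0.10)     # 10% of bankroll per bet: shrinks it
small_stake = log_growth(0.001)  # 0.1% per bet: grows it in the long run
print(big_stake, small_stake)
```

Positive expected value per dollar is not enough; whether the bet is survivable depends on how big each bet is compared to the bankroll, which is exactly the commenter’s point.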

• J Thomas says:

“As stated, the problem seems to imply that there are only resources available for one of the options. In that case, the risk is way too high by any reasonable measure and such a bet shouldn’t be taken except in desperate circumstances. Betting on long odds can work out in the long run provided you have a lot of money to work with and each bet is small compared to your bankroll.”

Look at the team who’re proposing the project. Look at their previous projects that had high odds of failure. Try to estimate how much value they created even though all of the projects officially failed. If the average was better than the predicted value for the blah project, that tells you something.

They might believe that they win with proposals that promise a small chance of a giant reward. But their track record might show good results even though they don’t hit the super home run.

If there is no benefit unless they win the highly-unlikely jackpot, better to turn them down.

51. FullMeta_Rationalist says:

“Mr. President, NASA has sent me to warn you that a saucer-shaped craft about twenty meters in diameter has just crossed the orbit of the moon. It’s expected to touch down in the western United States within twenty-four hours. What should we do?”

As evidenced by the commentariat, this is a bad example because it sends people on wild tangents. I think you might have had more success parroting the example from LW’s Circular Altruism and Something to Protect posts.

Which of these options would you prefer:

1. Save 400 lives, with certainty
2. Save 500 lives, 90% probability; save no lives, 10% probability.

You may be tempted to grandstand, saying, “How dare you gamble with people’s lives?” Even if you, yourself, are one of the 500—but you don’t know which one—you may still be tempted to rely on the comforting feeling of certainty, because our own lives are often worth less to us than a good intuition.

But if your precious daughter is one of the 500, and you don’t know which one, then, perhaps, you may feel more impelled to shut up and multiply—to notice that you have an 80% chance of saving her in the first case, and a 90% chance of saving her in the second.

And yes, everyone in that crowd is someone’s son or daughter. Which, in turn, suggests that we should pick the second option as altruists, as well as concerned parents.
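Spelled out, with the numbers from the two options above:

```python
crowd = 500  # everyone at stake, your daughter included

# Option 1: save 400 with certainty.
ev_1 = 400
p_daughter_1 = 400 / crowd   # any given person is saved with chance 0.8

# Option 2: save all 500 with probability 0.9, nobody otherwise.
ev_2 = 0.9 * 500             # 450 expected lives
p_daughter_2 = 0.9

print(ev_1, ev_2, p_daughter_1, p_daughter_2)
```

The second option wins on expected lives and on any single person’s chance of survival, which is why the parent’s answer and the altruist’s answer coincide.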

My point is not to suggest that one person’s life is more valuable than 499 people. What I am trying to say is that more than your own life has to be at stake, before a person becomes desperate enough to resort to math.

I think this better imparts a feeling of “no, foobar does not excuse you from doing the napkin math”.

—————————————————————————————-

“Maybe we should fund Proposal B,” Tom ventured.

• discursive2 says:

But where did that 90% come from, and how certain are you of it? By building it into the example you’ve skipped precisely the contentious point…

(Also, how confident are you that there are only two possible plans to respond to the situation?)

• FullMeta_Rationalist says:

As I understand it, the point under contention is that using probabilities for things we don’t fully understand often leads us astray and is therefore inferior to intuition (or qualitative reasoning, etc.). My point is that intuitions are inconsistent and also often lead us astray. Thinking probabilistically can help us.

We may not know for certain whether 90% is a reasonable estimate. But we can calculate that the threshold is 80%. Do your intuitions feel like Plan B has a greater than 80% chance of success? If your final answer (after querying your intuitions, doing confidence interval stuff, meta-distribution collapsing, oracle homages, etc) still says “Plan B > 80% success”, then take Plan B. Otherwise, take Plan A.
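The break-even arithmetic in the comment above can be written out in a couple of lines (variable and function names are mine, purely for illustration):

```python
# Break-even success probability for the all-or-nothing plan (Plan B),
# relative to saving 400 lives with certainty (Plan A).
certain_lives = 400
gamble_lives = 500

# Plan B beats Plan A in expected lives exactly when p * 500 > 400.
threshold = certain_lives / gamble_lives
print(threshold)  # 0.8

def expected_lives(p, lives=gamble_lives):
    """Expected lives saved by an all-or-nothing plan with success probability p."""
    return p * lives

print(expected_lives(0.90))  # 450.0, which beats the certain 400
```

So the decision rule in the comment falls out directly: if your all-things-considered estimate for Plan B exceeds 80%, the expected-lives calculation favors it.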

Also, how confident are you that there are only two possible plans to respond to the situation?

I don’t know how to answer this because I don’t know why it’s relevant. If there’s a third option and I don’t know about it, that doesn’t really change my decision. If you’re suggesting the third option is to “gather more info” – then do so, reevaluate your options, do the math again, and make a decision. I don’t think it changes the usefulness of doing the math. It just changes the epistemic conditions under which the decision is made.

————————————————–

P.S.

There’s this thing called the Multi-Armed Bandit Problem. It asks “At what point does a gambler stop experimenting with all the slot machines and start exploiting a particular slot machine in order to maximize his payout given a finite amount of starting money?”
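A minimal epsilon-greedy sketch of that explore/exploit trade-off (the arm payout probabilities and the 10% exploration rate are made up for illustration, and real bandit algorithms are more sophisticated):

```python
import random

def epsilon_greedy(true_payouts, pulls=10_000, epsilon=0.1, seed=0):
    """Pull the best-looking arm most of the time; explore at random otherwise."""
    rng = random.Random(seed)
    n_arms = len(true_payouts)
    counts = [0] * n_arms    # pulls per arm
    totals = [0.0] * n_arms  # payout per arm
    reward = 0.0
    for _ in range(pulls):
        if rng.random() < epsilon or all(c == 0 for c in counts):
            arm = rng.randrange(n_arms)          # explore
        else:
            means = [t / c if c else 0.0 for t, c in zip(totals, counts)]
            arm = means.index(max(means))        # exploit the best arm so far
        payout = 1.0 if rng.random() < true_payouts[arm] else 0.0
        counts[arm] += 1
        totals[arm] += payout
        reward += payout
    return reward

# With a clearly best arm, the average payout should approach the best
# arm's rate, discounted by the cost of continued exploration.
print(epsilon_greedy([0.2, 0.5, 0.8]) / 10_000)
```

The gambler never formally "stops experimenting" here; exploration just shrinks to a fixed fraction of pulls once a front-runner emerges.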

After writing my last comment, I thought “maybe the contention over Scott’s post was actually an instance of the multi-armed bandit problem rather than a debate over the use of probability. Maybe the naysayers are actually arguing that we should delay donating to MIRI and friends until we have more information on whether strong AI actually poses any risk. Scott seems to want to exploit the MIRI slot machine now, while others want to gather more information before committing their money.”

• FullMeta_Rationalist says:

Scott, it seems that LW has discussed the Multi-Armed Bandit Problem to some extent. E.g. in Probability, Knowledge, and Meta-Probability. I haven’t read it and I need to go to bed. But skimming it over, it discusses things like the Multi-Armed Bandit Problem and Laplace’s Rule of Succession. So it seems maybe relevant.

• Deiseach says:

Maybe the naysayers are actually arguing that we should delay donating to MIRI and friends until we have more information on whether strong AI actually poses any risk.

That seems a very reasonable summation of my objections, whatever about other naysayers. I’m not saying “AI will never happen” or “There are no risks associated with AI”, I am saying “The list of assumptions about the progression, speed and risks/benefits associated with AI is overstated, particularly since we don’t have enough information to be making any such statements”. I’m going to ignore Bostrom’s potential quadrillions of happy humans under the Singularity as not seriously meant but used as part of his philosophical digressions.

• Peter says:

The Multi-Armed Bandit Problem is a study in reinforcement learning. The problem with existential risk is that it is precisely a situation that you can’t learn from. Wipe out 99% of the population and the civilisation founded by the 1% may have cultural memories of how to do better next time. Wipe out 100% and there is no next time. If there’s no disaster – maybe you just got lucky. OK, you can have near-misses, or big disasters not on the x-risk scale, you can use those as analogies, but there you’re getting into much deeper aspects of intelligence (or for that matter evolution) and you’re well past playing a classical bandit.

The second thing about the Multi-Armed Bandit Problem is that the only way to find out about an arm is to shove money into the machine and give it a pull; the mechanic for exploring is the same as the mechanic for exploiting. When you’re presented with a fresh bandit, you pretty much want to give each arm at least one pull before even thinking about biasing your exploration.

If we’re talking about MIRI specifically: in a sense, people have been shoving money into MIRI, and research papers have been coming out. However, lots of people here including myself are unimpressed with the level of the output from MIRI: we think MIRI has been tried, and found wanting.

• FullMeta_Rationalist says:

It appears Deiseach believes AI R&D will develop along a smooth progression through which we can learn along the way, rather than a foom. In which case, the FAI problem indeed resembles a reinforcement-learning problem.

But yes, I agree that X-risk otherwise does not map well to reinforcement-learning.

• Deiseach says:

Okay, tell me where I’m being stupid.

What I am taking is that there is one group of 500 people in the example, not two groups: one of 400 people and one of 500 people. Yes?

So the problem to be worked out is:

Save 400 of the 500: 100% chance (I’m taking that as “complete certainty”. Yes?)

Save all 500: 90% chance.

Try to save all 500, but save nobody: 10% chance.

So we’re saying we can split the group into 400 people: definitely saved, 100 people: 90% chance of being saved. I have a precious daughter, and I don’t know if she’s one of the 400 or one of the 100. Yes?

The “shut up and multiply” says that I should go for the 90% chance of saving everybody, since that gives my daughter a better chance than the 80% of saving the 400. Yes? I don’t know maths, so I don’t know why a smaller chance for a bigger group works out better than a 100% chance for the smaller group, somebody walk this idiot through it.

But what are the chances that my daughter is one of the 100, rather than one of the 400? Is that part of the calculation?

Going by instinct, I would estimate (and here’s where I start going wrong, correct?) that it’s more likely she’s part of the 400 than the 100. Saving the 400 will definitely, 100%, succeed. Trying to save all 500 runs a 10% chance of failure. So, for the sake of 100 people over 400 people, I’m running a small but definite risk of killing everybody, and only 90% as against 100% of saving everybody.

I’d pick the “definitely save the 400” over the “try and save everybody, which means risking 400 people for 100 people”. Tell me why that’s wrong (and I’m not joking, I’m too stupid to work out the sums, seriously explain this to me).

• Nita says:

It goes like this:

Option A: 400 people definitely live, 100 definitely die.
Option B: on average, 450 people live and 50 die, but they’re unevenly split among these possible outcomes:
500 people live and 0 die, in 9 cases out of 10;
0 people live and 500 die, in 1 case out of 10.

I would say that the right answer depends on the context. E.g., if these 500 were the last humans in existence, I think most of us would choose option A, effectively sacrificing 50 lives to ensure the survival of humanity. Interestingly, this is similar to what MIRI seems to advocate in terms of donations — sacrifice some lives (e.g., to malaria) in order to prevent certain annihilation by UFAI.

• Linch says:

Deiseach, assuming a fully random distribution, there’s a 20% (100/500) chance that your daughter is one of the 100. So if your terminal value is “maximize the chances that my daughter will be alive,” go with the 90% chance of the 500.

• Deiseach says:

But doesn’t that mean there’s an 80% chance (400/500) she’s one of the 400 who can definitely be saved so I should stick with the 100% certainty?

• HeelBearCub says:

@Deiseach:
Let’s put this in (admittedly ridiculous) real world terms.

There is a passenger train that has managed by contortions of narrative to find itself with one car (still on the rails) hanging over the edge of a precipice, the end of said rails being on a steep grade into the void.

You, as the engineer, have walked back to look at the damage and realize that the last of 5 passenger cars is slowly pulling the rest of the train into the void. You see leaking hydraulic fluid and realize the brakes are failing!

You know that you have time to either release the coupling on the last car, which will definitely save the other four. Or you can run back to the engine and engage it, which is almost certain to succeed as well. But you also know, due to recent disaster-preparedness training, that it has a 1/20 chance of failing, in which case all the cars will plunge into the void. (Did I admit this is all ridiculous?)

The passengers are all in a drug-induced stupor. There are 100 in each car and you know your daughter is in one of them, but have no idea which one. Do you pull the release for the coupling on the last car? Have your intuitions changed at all?

• Deiseach says:

HeelBearCub, that’s the trolley problem all over again 🙂

Which, if we’re all steely-nerved consequentialists, means we should release the coupling and let the fat man 100 drugged passengers plummet into oblivion.

However, if we’re deontologists, we should run back and engage the engine.

In either case, my daughter is a red herring to be ignored, because that’s introducing an unnecessary subjective element into a test of pure reasoning – “You have attempted to tinge it with romanticism, which produces much the same effect as if you worked a love-story or an elopement into the fifth proposition of Euclid” 🙂

• HeelBearCub says:

@Deiseach:
Hey, you’re the one who brought your daughter into this. Don’t look at me.

🙂

But yeah, it’s the trolley problem, except the mechanism is, while ludicrous, far less ludicrous than the trolley problem. And you aren’t killing anyone who doesn’t already have a chance of dying.

Honestly though, I think the trolley problem, when examined not philosophically but psychologically, is a great example to consider. Especially the original one.

There is a fat man on a bridge, who can, we are told, be thrown onto the tracks and it will save the lives of 5 others by derailing the empty runaway train.

Now imagine actually doing this: running onto the bridge, grabbing hold of this man who weighs 300 lbs and attempting to lift him up and over the rail, all while aiming him at the tracks. He falls exactly onto the tracks, holds still rather than rolling or writhing off the track, the runaway car hits him and miraculously does not just bounce him off the rails, but does indeed derail. Further miracles occur. The car is empty, no one else other than the fat man was hurt by the derailment, and so on.

You could talk about this in terms of probabilities, but this is, of course, complete bullshit. This is a complete fantasy, the kind dreamt of by 8-year-old boys when thinking about being a hero. If you actually start attempting to evaluate fantasy scenarios by the odds they will actually play out like they do in your head, you stop, back up, and realize you have made a serious error in thinking.

• RCF says:

“But what are the chances that my daughter is one of the 100, rather than one of the 400? Is that part of the calculation?”

The simplest way to look at it is that if you save the 400, then the expected lives saved is 400. If you try to save the 500, then the expected lives saved is 450. In that analysis, “what group is my daughter in?” is an obfuscating diversion. But if you really want to do it that way:

If you save the 400, there’s an 80% chance that your daughter is in the 400, and if she is in that group, then she has a 100% chance of being saved, for a total chance of 80%.

If you try to save the 500, there’s a 100% chance that your daughter will be in the 500, and a 90% chance that the group will be saved, for a total chance of 90%.
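The two calculations can be written out as a sanity check (variable names are mine; the numbers come straight from the example):

```python
# RCF's two ways of looking at the 400-vs-500 rescue problem.
group, saved_certain = 500, 400

# Option A: save 400 with certainty. P(daughter lives) = P(in the 400) * 1.
p_daughter_a = (saved_certain / group) * 1.0   # 0.8
# Option B: try to save all 500, which works 90% of the time.
p_daughter_b = 1.0 * 0.9                       # 0.9

# The expected-lives analysis gives the same ranking without the daughter.
expected_a = saved_certain                     # 400
expected_b = 0.9 * group                       # 450
print(p_daughter_a, p_daughter_b, expected_a, expected_b)
```

Both framings agree: Option B comes out ahead, 90% vs 80% for the daughter, 450 vs 400 in expected lives.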

• Deiseach says:

RCF, thank you, that’s admirably clear and resolves my confusion.

• J Thomas says:

Which of these options would you prefer:

1. Save 400 lives, with certainty
2. Save 500 lives, 90% probability; save no lives, 10% probability.

Where did you get the precise statistics? Was it a demon who offered you two choices, who promised to roll a fair ten-sided die in the second case?

In real emergencies, isn’t it far more likely that your choice will be more like:

1. Save 0 to 400 lives with probability that’s probably somewhere in the range 0 to 95 percent.
2. Save 0 to 500 lives with probability that’s probably somewhere in the range 0 to 85 percent.

In either case you put 20 to 200 additional lives at risk.

You can calculate stuff if you want to make assumptions about the shapes of your distributions. Or instead you might spend that time doing something useful.
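One way to "make assumptions about the shapes of your distributions" is a quick Monte Carlo: sample each plan's unknown success probability from the ranges given above and compare expected lives saved. The uniform shape is my assumption, chosen purely for illustration:

```python
import random

def compare_plans(trials=100_000, seed=0):
    """Fraction of sampled worlds where the 400-lives plan has higher expected value."""
    rng = random.Random(seed)
    plan1_wins = 0
    for _ in range(trials):
        p1 = rng.uniform(0.0, 0.95)   # plan 1: up to 400 lives
        p2 = rng.uniform(0.0, 0.85)   # plan 2: up to 500 lives
        if p1 * 400 > p2 * 500:
            plan1_wins += 1
    return plan1_wins / trials

print(compare_plans())
```

Under these made-up uniforms, plan 1 wins the expected-value comparison somewhat less than half the time; a different assumed shape would shift the answer, which is rather the comment's point about how much the conclusion leans on the assumptions.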

But if your precious daughter is one of the 500, and you don’t know which one, then, perhaps, you may feel more impelled to shut up and multiply—to notice that you have an 80% chance of saving her in the first case, and a 90% chance of saving her in the second.

In the first case you have a good chance to save 400 out of 10,000, and in the second case it’s 500 out of 10,000. You estimate the chance for your daughter is 4% versus 4.5%. You choose the second approach. You send in the first scouts who start to find people, you arrange pickup and more scouts. It goes slower than expected, and you commit more of your backups hoping to speed it. The first 20 survivors are extracted and then it all goes pear-shaped. You have lost 9,980 victims plus 100 volunteers. If it had held off longer you would have lost 200 volunteers, but if it had lasted longer still they may have gotten hundreds of victims out, maybe 600 or more, enough to severely strain your resources caring for them.

Did you make the right choice? At the inquest various people will argue that the other approach would have been better, and they would likely have argued for this one if the other approach had failed. They might decide that you should not have risked your volunteers in a lost cause.

But you didn’t know it was a lost cause. You didn’t know how long you had, and an extra hour playing with probability distributions while you delay choosing a plan could have cost lives.

• Nita says:

And there you go, spoiling a perfectly nice toy problem by dragging in all that messy realism.

• FullMeta_Rationalist says:

I think you’re smuggling all sorts of connotations into your example. [5-95] is a wide interval, but it still has to collapse down to some kind of average. If this average collapses down to a 90, then so far so good. If it collapses down to something lower than a 90 (as I suspect your example insinuates), we’re talking past each other.

But you didn’t know it was a lost cause. You didn’t know how long you had, and an extra hour playing with probability distributions while you delay choosing a plan could have cost lives.

In a similar spirit, since when is 90% a lost cause? Don’t you think that “time wasted playing with distributions” might actually reveal that A) the plan is a lost cause, our initial 90% estimate was wrong, and we should abort the operation; or B) the plan is not a lost cause, since 90% is actually pretty good odds?

You have lost 9,980 victims plus 100 volunteers.

The risk to the volunteers should have been adjusted for in the original problem. You’re not making the case for probability less appealing, you’re changing the nature of the problem under discussion.

We can discuss “but what if” scenarios all day. The point of the example was not “Probabilities give you the insight of Laplace’s Daemon”. The point of the example was “intuition will lead us astray in ways that math can’t”.

There are other examples. E.g. check out the Allais Paradox. The math more convincingly demonstrates that intuitions are actually pretty inconsistent. I didn’t use this because the example with the dice at the end of the post left me spinning for days. Or consider the more popular Monty Hall Problem. My intuition tells me all doors have an equal probability, so there’s no point in switching… right?
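The Monty Hall intuition mentioned above is easy to check by brute force. A quick simulation of the standard three-door game (function name mine):

```python
import random

def monty_hall(trials=100_000, seed=0):
    """Return (win rate if you stay, win rate if you switch)."""
    rng = random.Random(seed)
    switch_wins = stay_wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens a door that is neither your pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

print(monty_hall())  # roughly (0.33, 0.67): switching wins twice as often
```

The "all doors are equal" intuition loses to the math here, which is the Allais-style lesson in miniature.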

• J Thomas says:

I think you’re smuggling all sorts of connotations into your example. [5-95] is a wide interval, but it still has to collapse down to some kind of average. If this average collapses down to a 90, then so far so good. If it collapses down a something lower than a 90 (as I suspect your example insinuates), we’re talking past each other.

How about when you have no possible way to know what it collapses down to?

Don’t you think that “time wasted playing with distributions” might actually reveal that A) the plan is a lost cause, our initial 90% estimate was wrong, and we should abort the operation; or B) the plan is not a lost cause, since 90% is actually pretty good odds?

You might get that result. But only when the differences among your probability musings are not swamped by the unknowns in your estimations.

Like, say the concern is that a stockpile of perchlorate has been discovered stored in an unsafe manner, and it might explode. How much is there? You don’t know. How long do you have before it explodes? You don’t know, maybe minutes, or hours, or days.

You must get people out of the area, but how large is the area you must get them out of? If your headquarters is too close you will be a casualty yourself, but the farther away it is the less influence you have over events.

If you have minutes, you can try to get mass media to encourage people to leave, and try to discourage people from going in. If you have hours, you can arrange a phone tree, knock on some doors, etc. If you have days maybe you can get everybody out who’s willing to live. But you don’t know.

The numbers you make up about probability won’t help you much. If you knew more, they might help.

If you’re in a situation where you *know* that one probability is 100% and another is 90%, you can work with that. When you make vague guesses because you don’t want to just say “I don’t know”, that won’t likely be very useful. Last time around the argument was that people say one chance in a million when they probably ought to mean one chance in a hundred. How unlikely should it have to be for them to say 0%, it’s 100% it won’t happen? How unlikely will it be when they say that?

• FullMeta_Rationalist says:

How about when you have no possible way to know what it collapses down to?

The way this sentence is framed rubs me the wrong way. It’s Frequentist in the sense that it suggests the Flying Spaghetti Monster has given Event X a little html tag that says “P(X) = 90%”. If only we mortals had access to such divine knowledge of what the FSM had inscribed! I mean, do we ever truly, literally know what the probability collapses down to?

Probability is a feature of the map, not the territory. Maybe you feel uncomfortable giving P(event X) an explicit number. But as an extension of boolean logic, making any sort of decision at all ipso facto involves making a guess as to what is *probably* (in the colloquial sense) going to happen. E.g. tying my shoes is probably safe, since I have little reason to believe my shoelaces will spontaneously combust. And if I’m making a guess (for something important enough), I feel I might as well make that guess precise and explicit by slapping a number on it.

You might get that result. But only when the differences among your probability musings are not swamped by the unknowns in your estimations.

If you’re referring to the daughter example, 90% is high enough that the underlying distribution should have low enough variance to make any unknowns reasonable to ignore (in this particular example). 90% is pretty close to 100% after all.

If you’re referring to the perchlorate example because you think the daughter example is unreasonably unrealistic, it’s not like you have to spend weeks organizing a conference of world class statisticians. The example you constructed makes time the limiting reagent. So of course it’s silly to waste time finagling with a spreadsheet. Just get the citizens out of the area.

But do notice that the situation forces you to make a guess as to how close you must base your HQ. Even if the conscious part of your brain doesn’t make the calculation, the subconscious is making a similar one with heuristics (see the shoelace example).

If you’re referring to the FAI problem, the time scales are totally different. We have lots of time to muse and play with spreadsheets.

The numbers you make up about probability won’t help you much. If you knew more, they might help.

I don’t know why we’re pretending that information gathering and guesstimation are mutually exclusive. In fact, I don’t even know how this thread relates to the original topic of hard takeoff. Are you arguing that explicit probabilities are less useful than information gathering? Or that explicit probabilities are not useful at all? Or something else entirely?

How unlikely should it have to be for them to say 0%, it’s 100% it won’t happen?

I feel like this should be a different thread, since the merits of using explicit probabilities are a different topic than calibration. But since you asked,

No, this doesn’t make sense either. Probabilities shouldn’t depend on what a person says. In other words, saying “grass is red” doesn’t actually make a patch of green grass suddenly red.

Instead, Scott knows that most people are overconfident. Because Dunning-Kruger, status signalling, etc. So he suggested that maybe the people who had given him estimates of “P(foom) = 10^-6” were maybe overconfident.

If the “P(foom) = 10^-6” had come from a particular person who (for example) was rated perfectly calibrated by PredictionBook, then Scott might take his or her particular estimate more seriously. If humanity were a perfectly calibrated species, then Scott might not have made the overconfidence claim to begin with.

• Nita says:

@ FullMeta_Rationalist

Probability is a feature of the map, not the territory.

If a random event occurs in the world, and no one estimated the likelihood of its outcomes, did the outcomes have a probability?

• J Thomas says:

Probability is a feature of the map, not the territory. Maybe you feel uncomfortable giving P(event X) and explicit number. But as an extension of boolean logic, making any sort a decision at all ipso facto involves making a guess as to what is *probably* (in the colloquial sense) going to happen.

Agreed. Probability is a feature of the map, not the territory. It’s an estimate of what you don’t know. And often you really don’t know what you don’t know.

When you know that you don’t know enough to make adequate estimates, the obvious choice is to try to find out more.

If you can’t find out more, you have to choose.

For some hazardous chemical accidents, rescue workers are told not to go in. Not enough chance they’ll do good, too much chance they’ll just be more casualties. For 9/11 there was no good estimate. The best known estimate was that the first tower would not fall down from a plane crash, until it did. They had no way to tell how much time they had. They went in without knowing that, and died.

There is no way that estimating the probabilities that they had no clue about, would have helped them. When your probability estimates are bullshit, you can do mathematically correct operations on them to generate bullshit answers.

• FullMeta_Rationalist says:

@ Nita

Oh. I was actually expecting the “probability is subjectively objective” post. In any case.

Let’s examine the parallel. In “Disputing Definitions”, EY draws a distinction between two different things both described by the word “sound”. There’s sound (vibration) and there’s sound (sensation). EY says disputes over semantics are pointless, since Albert’s and Barry’s definitions describe the same forest physically, and the dispute therefore does not pay rent.

Back to our current thread. I am drawing a distinction between two types of things both described by the word “probability”. There’s (explicit) probability and (implicit) probability. I think the distinction (though possibly a dispute over definitions, I’m not sure at this point) does indeed pay rent. As I understand it, the dispute between J Thomas and me concerns the usefulness of “performing calculations using explicit probabilities”.

J Thomas essentially argues “Real-life high-stakes situations often do not offer enough easily-extractable information for our brains to manufacture explicit probabilities that are high-quality enough to meaningfully improve our chances of success.” (J Thomas, correct me if I’m wrong).

I argue that “Our System-1 brains automatically and cheaply produce implicit probabilities (called intuitions), which are crude but inevitably used by the brain anyway.” Furthermore, I argue that “We often have the capability to make these intuitions more explicit by quickly assigning them a number ranging from 0 to 100. This can meaningfully contribute to our chances of success because it allows us to: A) compare our estimates to those of others; B) compare the size of our sample-space to those of others; C) keep a record of our estimates in order to improve our responses in future scenarios; D) perform expected-utility calculations; etc.”

(n.b. I think there are more advanced things we can do with numbers. But J Thomas added a time constraint on the order of minutes to days. Which I think is weird, since the FAI problem (which this discussion should ultimately recurse to) is IMHO on the order of decades to centuries. This signifies to me that I’m misunderstanding something important. But I also suspect Cunningham’s Law/topic creep.)

• FullMeta_Rationalist says:

@ J Thomas

You keep bringing up scenarios involving explosions. For the second time, I agree that playing with spreadsheets would have done little good during the 9/11 and perchlorate incidents. The limiting reagent that makes spreadsheets maladapted to these types of scenarios is not just the lack of a model, but also the lack of time. So I’m not sure why you think these scenarios are relevant to FAI.

Like, maybe you noticed in EY’s daughter-example that lives were at stake and you mapped this onto “emergencies”, which implies a lack of time. But not all dire situations require a solution in less than 24 hours. FAI has centuries. The daughter example was unspecified. Miners often become trapped underground for weeks. Arguably, geologists have a pretty solid model of the crust (pun intended). But what about climatologists? Climate Change will occur over the order of decades to centuries, but our climate model still has lots of holes. Imagine if someone had said “Climate Change? We don’t know enough, so what’s the point of estimating the damage?” (… wait, does the U.S. Republican Party still do this? Maybe this was a bad example…)

But seriously, how do your examples relate back to FAI/X-risk?

p.s. Yes, it’s difficult to come up with examples for which humanity has no models. Which is one reason why I borrowed the Daughter example: the context is unspecified.

• J Thomas says:

FAI is a good example. Meiotic drive is another, that I’d like to describe briefly.

In general, we want for natural selection to result in a population whose members survive and reproduce better. But sometimes genes can reproduce better by making their competitors reproduce worse. For example, suppose you have a gene on a Y chromosome which kills the sperm that carry X chromosomes. Then that male may produce only half as many sperm, but they will all carry the Y chromosome with that gene. Given some assumptions, that gene will roughly double each generation, until you get a generation with no females — a bad outcome for everybody.
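The "roughly doubles each generation" claim can be sketched in a toy deterministic model. The simplifying assumptions here are mine (every female mates once with a random male, equal offspring counts, no other selection): a driver male sires only driver-carrying sons, a normal male sires sons and daughters 50/50, which gives the recursion q → 2q/(1+q) for the driver's frequency q among males:

```python
def driver_frequency(q0=0.001, generations=15):
    """Track the fraction q of males carrying the driving Y each generation."""
    q = q0
    history = [q]
    for _ in range(generations):
        # Among offspring: a fraction q of matings yield all driver sons;
        # the rest yield half normal sons, half daughters.
        # Driver fraction among the next generation's males:
        q = 2 * q / (1 + q)
        history.append(q)
    return history

hist = driver_frequency()
# When rare, q roughly doubles per generation (2q/(1+q) is about 2q),
# then saturates near 1 as daughters disappear from the population.
print([round(x, 4) for x in hist])
```

In this toy model the fraction of daughters each generation is (1 − q)/2, so q approaching 1 is exactly the no-females outcome described above.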

That sort of problem can be minimised if you have a bunch of small populations with limited mixing. Then “segregation distorters” can only survive if they can spread to new populations before they destroy the old ones. For most of history and prehistory humans have done that. But every now and then we set up empires or big trading webs where there is a lot of mixing, and pretty soon things fall apart (likely for other reasons). What we have now is the biggest ever.

Sometimes I have the urge to run around telling people we have to stop it. We should divide up into breeding groups of no more than say 10,000, and for many generations enforce the rule that no one has sex with anyone outside the group, or that it happens only a few times per generation. Of course everybody would think I was an utter crackpot and a racist to boot. It’s probably a very serious threat, but there’s a chance that before it hurts us we will learn enough genetic engineering to deal with it. We don’t know nearly enough now, but maybe we’ll learn.

The problem is not on a human time scale. We don’t begin to know enough to decide what to do.

Similarly with FAI. We don’t know whether we’ll ever get significant AI. My guess is that we will eventually. Since we ourselves exist it will probably be possible to figure out how we do things and learn how to copy that, or else do things that work better.

If we do get AI that’s smarter than we are, there’s no way to tell whether it will believe it’s living in a world of competition and natural selection. If so, there’s every reason to expect each instance to compete for resources, doing whatever it takes to get those resources. “Nice” AIs would be at a disadvantage unless we are smart enough to reward them for being nice, in which case some will pretend to be nice while they wait for the opportunity to get enough resources in one lump sum to forego the paltry niceness rewards. But there’s no telling how AIs will think.

We can make metaphors. If a bunch of mice living in your house wanted you to be “friendly”, what could they do toward that goal? The obvious approach would be to try to be cute. But they might not know what you think is cute. If you think they are cute enough you might want to live-trap them and keep them in cages so they won’t nibble into the wiring with bad results for them and for you. But maybe it would be different if it was a bunch of mice raising you from a baby. You might just naturally think they were cute then….

So the basic argument is about how to manipulate somebody who’s a whole lot smarter than we are, when we don’t know what it likes or what kind of manipulation it will be susceptible to. We have no examples and can get no experience. We are proceeding entirely blindly about something which may never exist.

When we understand that probabilities are about the map and not the territory, then I guess there’s nothing particularly wrong with them. They give us a way to quantify our prejudices. We make unjustified assumptions about things that we don’t know, and assigning probabilities lets us look closer at those assumptions and realize that they are bullshit. That’s a good thing.

But when people believe in those numbers enough to manipulate them and draw probabilistic conclusions, that’s bad unless of course they look at the conclusions and realize that they are nonsense and that therefore the worthless assumptions they made to get those conclusions are also nonsense.

When it’s something you actually have knowledge about, then the probabilities could be useful provided they accurately reflect that knowledge.

• Peter says:

Of course, offer people:

1. 100 people certainly die
2. No-one dies, 90% probability, 500 people die, 10% probability

and people may well have different intuitions.

• Tom says:

One complication is that people tend to evaluate decisions with the full benefit of hindsight. If you’re in a position to be making these calls, then if all 500 die that’s a lot of ammunition your opponents can use against you. Think about it: how will the public react to 500 people dying when you could have guaranteed 400 survivors?

I’m not just being frivolous here. If you face this scenario many times and pick B each time, you will eventually fail to save anyone. In the given scenario, the risk of a string of failures is rather low so option B is reasonably safe in most circumstances. If we took an analogous situation where you could guarantee saving 400 people or take a 50% chance at saving 900, then hitting a string of failures is a real possibility. You can get painted as a cowboy taking unnecessary risks and shut down.

• FullMeta_Rationalist says:

You’re fighting the hypothetical. The example is meant to explore ethics and epistemology, not politics. If you care about your public image, then this is a valid concern. If you actually want to save lives, worrying about your image is sort of irrelevant.

52. Alex Zavoluk says:

This whole line of reasoning smacks of “privileging the hypothesis” to me.

“Hmmm, what’s something we could convince people is important so we keep getting research funding?”

“Well, maybe X has a massive risk or could result in a massive benefit.”

“Is there any data indicating such?”

“No, but no one can disprove my assertion about it maybe saving 10^18 lives, and it would be way overconfident to say any non-contrived event will occur with less than 10^-9 probability, so it’s worth at least 10^9 lives in expected utility.”

“Awesome, let’s go see if we can dig up some other arguments to support this wild conjecture.”

If you want an actual argument, here’s mine: what’s stopping us from making basically the same argument that these conjectures will result in massive negative consequences? Why can’t I just assert that the probability that pushing AI research now will lead to an evil, omnipotent AI which raises and tortures 10^60 humans for their entire lifespan just for its own amusement is obviously at least 10^-20, since saying it’s any smaller would be arrogant overconfidence?

• J Thomas says:

This whole line of reasoning smacks of “privileging the hypothesis” to me.

Yes. Your comment was done well and clearly.

When we don’t know much about the situation, we also don’t know how to estimate precise probabilities for outcomes. Our estimates of probability then tend to reflect bias, not information.

53. RCF says:

It seems to me that this is in part arguing about definitions. Suppose we define MRVP (mathematical random variable probability) as a probability arising from a well-defined random variable. A MRVP has many useful properties: it’s (by definition) well defined, and its value is objective rather than a matter of opinion. It also has drawbacks, one of the major ones being that it’s a mathematical abstraction rather than a real-world phenomenon.

Now, we also have another concept. When faced with a decision, people have to weight the expected benefit of an outcome by how likely they think the outcome is. I’m going to call this RPCW (revealed preference confidence weight).

So, now, someone can say “Okay, MRVP is a mathematical model that corresponds in useful ways to how we use the word ‘probability’. I think we should agree that in cases where we need to discuss rigorously what the word ‘probability’ means, we should take MRVP to be the definition of the word. Given that definition, we, by definition, cannot have a probability without a well-defined model.”

Someone else can say “When people are faced with a decision, they implicitly have some idea of how likely events are. That is RPCW. So when we talk about the term ‘probability’, we should understand it to be referring to RPCW. Using this definition, it’s clear that people are coming up with probabilities without an explicit model.”

So when one person says “We can’t have a probability without an explicit model”, and someone else says “Here’s all these examples of people having probabilities without an explicit model”, are they disagreeing about anything substantial, or simply disagreeing about what the word “probability” means?

• Nita says:

I like your analysis. But there is another source of disagreement: Scott wants us to use RPCWs in formulas designed for MRVPs. Now, if our RPCWs happen to satisfy the same mathematical conditions as MRVPs, then there’s no problem. But if they don’t, then our results may not satisfy any conditions either.

Bayes’ theorem, expected value, and other mathematical tools are guaranteed to work only with actual, mathematical probabilities. Perhaps someone can prove similar theorems for RPCWs, and then we can actually apply the power of mathematics to such numbers, instead of merely pretending to do so.

See also Peter’s comment with an example from machine learning.

• Peter says:

RPCWs get you on to Prospect theory, which suggests that RPCWs scale monotonically but not linearly with probability – some “straightening out” would be required before you get things which you can meaningfully do probability mathematics with – i.e. using RPCWs as MRVPs without recalibration is likely to lead to systematically wrong results.

I think this is a little stronger than my machine learning example – mine actually spits out well-calibrated probabilities in the end – but it’s a very similar idea.

This is a bit outside the scope of this particular post, but my main objection to the calculations generating an expected utility from AI risk research has never been the math. There’s also the issue of incommensurate values. Scott, and Bostrom et al., pretty clearly place a great deal of value on having a future universe with more humans in it. Personally, I’m completely indifferent between a future with 10^57 humans in it and a future with 0 humans in it. I don’t consider it an evil thing for humans that don’t currently exist to continue to not exist. All that matters to me is what happens to them once they come into existence.

Furthermore, if our goal is to simply maximize the future proliferation of sentient creatures, it’s not clear to me why we should focus on preserving currently existing species as opposed to firing off deep space landing craft at every known life-supporting heavenly body filled with dormant microbes, which as far as we know, may even be roughly how we came into existence in the first place.

54. Frank Barker from the University of Melbourne wrote a 2001 monograph http://www2.isye.gatech.edu/~brani/isyebayes/bank/bookphilabayes.pdf suggesting that Thomas Bayes, in his famous essay on probability, DEFINED probability in terms of the decision that a risk-neutral decision maker is rationally obliged to make. It seems that Scott is heading in that direction. Of course, if you don’t know what decision you ought to make, this definition doesn’t help.

• RCF says:

I’m not sure that the risk-neutral qualification is needed. I think that risk preference is due to a nonlinear utility function, so properly calibrating the utility function should make everyone risk neutral.

55. J Thomas says:

[EDIT-2: “I don’t know”]

[09:02] X: how many apples are in a tree outside?

[09:02] X: i’ve never seen it and neither have you

[09:02] Eliezer: 10 to 1000

[09:04] Eliezer: if you offer to bet me a million dollars against one dollar that the tree outside has fewer than 20 apples, when neither of us have seen it, I will take your bet

Neither of them knows whether it is an apple tree. I might take either side of that bet if it’s my dollar. I wouldn’t bet a million dollars on either side even if it’s March and the tree is in Anchorage, Alaska, because the bet is likely to be rigged. Someone (not Eliezer, who has never seen the tree) has tied 25 apples to a pine tree.

I don’t need an earful of cider. Bets at million to one odds are sucker bets.

• Deiseach says:

Ah, 10-1000 isn’t a bad estimate, but given that it’s in the context of someone ticking everyone else off for not getting all their ducks in a row, certainly he would need to know a lot more, such as: what season of the year is it (e.g. if the tree is in blossom, there are no apples on it, if it’s winter, etc.), have the apples been picked yet, is it a mature tree or a young one, do you count the unripe apples as well?

A quick search online gives a rough estimate of 300-500 apples per tree. That being said, if Yudkowsky can work out a good method for estimation, there’s a researcher who would like to hear from him:

If we have a good estimate of the average number of fruit per tree, then we can estimate the number of fruit per acre by simply multiplying fruit per tree by trees per acre. Unfortunately, I am not aware of methods for estimating numbers of fruit per tree or per acre. For fruit thinning experiments, researchers often use data from one to three limbs per tree to express crop density (fruit/cm2 branch cross-sectional area). My experience is that this does not provide a very good estimate of either the number of fruit or average fruit size for the entire tree. There is quite a bit of variation from one limb to another and sampling just three limbs is usually inadequate to estimate the entire tree. Currently many orchards attempt to estimate crop load by looking at trees. Experienced estimators usually provide fairly accurate estimates, but every few years the estimates are inaccurate. I think the problem is that the tree is 3-dimensional and we can only see two dimensions and we need to be able to see the fruit in the interior of the tree. So one area where we need additional research is to develop a method for accurately estimating number of fruit per tree. This chore will be easiest for high-density plantings with narrow canopies because we can see into the middle of the tree.

• J Thomas says:

Ah, 10-1000 isn’t a bad estimate, but ….

He’s given an estimate for how many apples on the tree.

Now, how many pears will be on that tree?
How many quinces, lemons, kumquats, figs, coconuts, walnuts, pecans, pistachios, filberts, cherries, jackfruit, ginkgo nut, durian, etc etc etc?

Is there any nut or fruit in the world that he should give an estimate less than 1 for?

For the ginkgo, half the trees are male and don’t bear any fruit at all. If you estimate zero fruit then even in season for an actual ginkgo you will be right half the time, which is better than any other estimate though biased.

Somehow, to me, the question of how many fruit will be on a tree I know nothing about, except that I haven’t seen it, is less meaningful than asking how many angels can dance on the head of a pin. The angel matter at least asks deep questions about the nature of angels, such as whether they are material. The apple question is just frivolous.

56. Deiseach says:

Off-topic, but I think people here would be interested in this test 🙂

• DavidS says:

Interesting test! Although if I’m reading the results right, I think at least one of the tests is rather faulty/misleading. Specifically (protected for spoilers)

V’z cerggl fher vg guvaxf V’z onq ng inyhvat zl gvzr orpnhfr V jnf jvyyvat gb cnl zber gb nibvq jnfgvat gvzr ba n genva wbhearl guna jnfgvat gvzr ba n ubhfrubyq gnfx, rira gubhtu gurl jrer obgu rkcyvpvgyl ‘arhgeny’ nf rkcrevraprf gurzfryirf. Ohg gur wbhearl jnf va n ‘fgenatr pvgl’ fb gur cerfhzcgvba vf zl gvzr gurer vf n ybg zber yvzvgrq: gur bccbeghavgl pbfg bs jnfgvat na ubhe vf n ybg zber ba ubyvqnl guna bgure gvzrf.

• Nita says:

That was fun, thanks! Apparently, I’m good with numbers and not-terrible with everything else.

• Max says:

It’s a great example of a bad test.

Just two examples:

It awarded more points for considering more causes for firemen to die, ignoring that all the evidence and domain knowledge point to a single most likely cause (smoke inhalation). Yeah, sure, there are many ways to die in a fire, but the question was what the most probable cause was.

With the ad campaign, it uses pure statistics, again ignoring trends and domain knowledge. A jump from 42 million to 60 million, with a month-to-month standard deviation of ~6 million (I don’t remember exactly), when the single changed variable is the ad campaign, is not a fluke.

Sales had been trending downwards for the past 3 months (typical for new products); the ad campaign made them jump 2 standard deviations. Yeah, sure, the ad campaign was only “somewhat likely”.

• LTP says:

Yeah, I figured the fire one was a trick question of some sort, so I thought about it for a while, which boosted my score. Still, I think that wasn’t an indication of rationality as opposed to me just being suspicious of the question.

Also, for many of the other questions, what the supposed “rational” answer is seems to me to be too subjective, vague, and dependent on circumstances. For example, the ones about how much you would pay to save effort on neutral tasks, or the ones about money now vs. later. In reality, none of those situations involve pure economic calculus, they involve things like how much you value your time, do you have better things going, how much do you trust yourself to make good investments with the money received earlier, do you have pressing needs now or the near future that need to be met, etc.

I get annoyed when rationality is just reductively conceived as some sort of abstract economics problem, when there’s so much more.

• vV_Vv says:

For example, the ones about how much you would pay to save effort on neutral tasks, or the ones about money now vs. later.

Each of these questions doesn’t individually have a right answer, but there are consistency constraints on the answer you give.

But there’s nothing wrong or irrational about time-inconsistent discounting (e.g. hyperbolic).

FWIW, I got skeptic and I’m satisfied with that.

• LTP says:

That’s what I got too. I figure self-awareness is most important, whereas in the other categories what is considered “rational” strikes me as debatable and subjective.

• Max says:

Oh, also: when it asks whether you prefer \$45 in 1 year or \$90 in 2, the answer is “I don’t care”, because either amount is irrelevant. The scale matters. A lot. You can’t ask abstract questions about economics, because economics is very concrete.

• Pku says:

Interesting, but I disagree on the ad campaign – about two thirds of ad campaigns are effective and only one month out of seven had that level of sales, so it was about 6% likely to be by chance.

57. Deiseach says:

Scott, I love you (in a completely platonic, non-creepy way). You and this blog are doing me more good than the counselling (online and real life) that I’ve started 🙂

I have received this heart-felt, plangent request via Tumblr:

Can you please stop insulting the father of the rationalist community on a LessWrong diaspora blog?

How can I but respond in a spirit of positivity, warmth, uplift, and repentant determination to atone for all my past wrongs to such a plaintive yet hopeful request?

So I hereby and henceforth shall only and ever refer with compliments and sincere admiration to the person, nay, the towering example of transcending human limitations, that colossus who bestrides the fields of pure thought as though they are but flowerbeds in the vast acreage of his own personal demesne,

Eliezer Yudkowsky, Father of the Rationalist Community! (What, all of them? Yes indeed, all! Do you doubt his powers and/or his virility to achieve such a deed? No more belittlement of this giant, this titan, this man above men!)

Eliezer Yudkowsky, supreme prose stylist of our (or any) era!

Eliezer Yudkowsky, enlightening a benighted world by the mere falling of his shadow upon us as he passes!

Eliezer Yudkowsky, Matchless and Uttermost Nonpareil!

Eliezer Yudkowsky, cynosure of all eyes, hearts, minds and wishes whether the entity beholding in awe-stricken amaze is human or AI!

I abjure most humbly all my unworthy and petty references to The Father, and beg him to take pity upon my ignorance and hold them not against me.

(If you think it might be a bad idea to ask an Irish person to refrain from slagging off someone, on the grounds that it never works and only invites renewed slagging, I think you might be onto something there).

• Nita says:

That’s some poetic, vocabulary-expanding slagging, though. Perhaps we should point earnest rationalist anons to your Tumblr blog on a regular basis.

• Cauê says:

Trying to be charitable would be improvement enough, I think.

After all, if there’s one thing we could learn from Scott…

• PSJ says:

Scott said he was giving up on being charitable in the last post, didn’t he?

• Cauê says:

That’s what you took out of that?

• Deiseach says:

Cauê , I am wounded, stricken to the marrow, downcast in the very innermost citadel of my being, that you somehow have reason to doubt my total sincerity and the goodness of my intentions.

Behold the sackcloth in which I array myself! Regard the ashes strewn upon my head! Harken unto my utterances of penitence: woe, alas, eheu, och, ochón agus ulagón ó!

I have been chastised in fraternal correction for my grievous faults, and I tender profoundly meant apologies for any insult or offence I have rendered*, what more can I do to prove my bona fides? 🙂

(*This part is genuine).

• Pku says:

I feel somewhat guilty about enjoying this. I also now want to build a giant robot EY with laser eyes.

• Deiseach says:

I also now want to build a giant robot EY with laser eyes.

I was going to ask if sharks would be involved there somehow, but then I remembered that Effective Altruism is Ethical Altruism and that animal rights are all part of that, so strapping sharks to your giant robot EY’s feet* possibly would count as animal cruelty 🙂

*Or any place else you feel sharks would be better placed; perhaps they could draw the triumphal chariot of the giant robot as it speeds over the waves, much like dolphins draw that of Galatea?

• Pku says:

Maybe having shark arms? You could hook them into life support and low-dose morphine or something, so everyone wins.

• Deiseach says:

Mmmm – but we’d need ethically sourced morphine, and since it’s Afghani warlords controlling the poppy trade, that might be problematic.

We should probably use sharks in aquarium-style tanks in the main body of the robot? Doing – something? Or do they need to do anything except LOOK AWESOME?

• Pku says:

The turbulence of the tank could provide us with a random number generator. (I may not believe in free will, but I do believe that sharks have it.)

58. moridinamael says:

If people are really defending the assertion that “we can’t use probability at all in the absence of a well-defined model”, isn’t that assertion completely exploded by the relatively good track record of prediction markets?

59. keranih says:

A bunch of the questions we have to deal with come down to these kinds of decisions made without models. Like – should I invest for retirement, even though the world might be destroyed by the time I retire?

Eh. We do have models for this. Mostly religious in origin, which makes it easy for some to reject. But the models do exist. (And then there is Luke 9:57-60.)

(My point is not that either of these ideas is necessarily correct, just that people have thought about this before. In which case, one might use “traditional religious cultural teaching” as an “expert who guesses right a great deal of the time”, and (failing other effective, definitive, evidence-based models, such as 40 near-Earth planets followed for 10,000 years following the development of agriculture by the dominant sentient species, which we have not got) use that guide as a reasonable model for action choices.)

60. Peter says:

Random thought experiments for people who like to say “we don’t know what the probability is”.

Get a dice and an opaque cup. What’s the probability that you’re going to roll a six? Roll the dice under the cup, so you can’t see it. What’s the probability that you rolled a six? Get someone else to look at it, but not tell you anything. What’s the probability that you rolled a six?

Flip a fair coin. What’s the probability of heads? Take a coin from a bag of equal numbers of double-headed and double-tailed coins – what’s the probability of heads? Take a coin from a bag full of “unfair” coins – you know there’s an equal number of coins in there with a bias towards heads, and a bias towards tails, and that the biases are equal and opposite, but you don’t know what the bias is – and flip it – what’s the probability of heads?

So the standard Bayesian answers are 1/6 for all of the first lot and 1/2 for all of the second lot and I think there’s a lot of sense to that. But the “we just don’t know” seems to capture something even if I think it needs some rephrasing. With a fair coin, there’s no further information to be gained short of going ahead and flipping it “for real”. With a biased coin, potentially you could learn something more; possibly the more time you spent studying the coin (maybe making trial flips before the one that counted) the more accurately and reliably you could know things. If there was a going to be some sort of bet then studying the coin more might be valuable.
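The biased-coin bag can be checked by simulation; a quick sketch, assuming (since the comment leaves it unspecified) that the bias magnitude is drawn uniformly:

```python
import random

# Monte Carlo check of the symmetric biased-coin bag: equal numbers of
# coins biased towards heads (0.5 + b) and towards tails (0.5 - b),
# where the magnitude b is itself unknown.
random.seed(0)
trials = 100_000
heads = 0
for _ in range(trials):
    b = random.uniform(0.0, 0.5)                       # unknown bias magnitude (assumed uniform)
    p = 0.5 + b if random.random() < 0.5 else 0.5 - b  # draw a heads- or tails-biased coin
    heads += random.random() < p                       # flip it once
print(heads / trials)  # comes out close to 0.5: the opposite biases cancel
```

This is the sense in which the standard Bayesian answer of 1/2 survives the extra uncertainty: the flip-level probability is unchanged, even though studying the particular coin drawn could still be worth something.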

In a way, potential knowledge looks like anti-knowledge… like when Sartre goes to the cafe and finds that his friend Pierre isn’t there, it’s like there’s a special absence of Pierre there, when really Pierre is almost completely omniabsent.

• J Thomas says:

Get a dice and an opaque cup. What’s the probability that you’re going to roll a six? Roll the dice under the cup, so you can’t see it. What’s the probability that you rolled a six? Get someone else to look at it, but not tell you anything. What’s the probability that you rolled a six?

Take a die randomly from a bag of dice, roll it, and record the result. The bag contains an unknown number of four-sided dice, an unknown number of six-sided dice, and an unknown number of twenty-sided dice. An unknown number of the dice are not fair dice.

Before you get the die from the bag, estimate the probability that you will roll a six.

How confident should you be in your estimate?

• Peter says:

Did you have some actual bag of dice in mind? I’m suspicious of the way Knightian uncertainty interacts with hypothetical situations; it seems that you can stipulate “true” Knightian uncertainty in a way that doesn’t necessarily occur in real life. Now in mine I had this cunning way of making the uncertainty exactly cancel out.

“Estimate the probability” is not very meaningful. OK, suppose I had some nice mathematical model to work from, but doing the calculations would take too long – possibly they’re intractable and I’m forced to work with unstable numerical methods.

These unfair dice – do they depend on the mechanics of rolling? The surface I roll them on? How hard I roll them? I think you think there’s a nice well-defined “the probability” that can be calculated using GCSE-level techniques and I’m resisting that.

• Peter says:

(Estimate the probability: I’ve just thought of a good example: when playing poker, you sometimes want to know the odds of getting some particular hand, when you already have some of that hand already. For example if you’re playing Texas Hold’em, between your hand and the three face-up cards on the table you have the 10,J,Q,K in a mix of suits, there are two cards left to see. You could calculate some “exact” odds of getting a straight, but the calculations are a little too fiddly to do at the table, so approximate. There are eight cards that could make your straight, approximately 50 cards in a deck, so each time you see a new card there’s a 2% chance of it being some specified card. So 2% * 8 = 16%, there’s your estimate, you can try using it to help work out whether it’s worth staying in or folding. Now if there’s someone across the table who had been betting quite aggressively in the opening round, you might suspect they might have an ace or two in hand, which suggests those aces might not be face-down on the table – then perhaps you should adjust that 16% down a bit.)
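The back-of-envelope figure can be checked exactly; a minimal sketch, assuming an open-ended draw with 8 outs among the 47 unseen cards after the flop. Note the 2% × 8 estimate is a per-card figure; with both remaining cards to come, the chance of hitting at least one out roughly doubles:

```python
from math import comb

# Exact odds of completing the straight: 8 outs, 47 unseen cards
# (52 minus 2 hole cards and 3 board cards), 2 cards still to come.
outs, unseen, to_come = 8, 47, 2
p_miss = comb(unseen - outs, to_come) / comb(unseen, to_come)  # both cards miss
p_hit = 1 - p_miss
print(f"per card: {outs / unseen:.1%}")   # ~17.0%, close to the 16% estimate
print(f"over two cards: {p_hit:.1%}")     # ~31.5%
```

So the quick 16% estimate is a fine approximation for a single card to come, and the at-the-table shortcut for two cards is simply to double it.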

• J Thomas says:

“These unfair dice – do they depend on the mechanics of rolling? The surface I roll them on? How hard I roll them? I think you think there’s a nice well-defined “the probability” that can be calculated using GCSE-level techniques and I’m resisting that.”

No, I’m saying that there are important unknowns that keep you from making a meaningful estimate.

Say the task is to predict the odds of getting six when you roll one die. You have strong reason to think that the die is not a fair one, it is biased, but you don’t know which direction it’s biased in or how much.

What’s your best estimate of the probability? What kind of confidence interval should you put around that estimate?

If the reason you think the die is loaded is statistical, then you can use that. “Somebody I trust told me he saw this die rolled 10,000 times, and 2118 times it came up six.” That isn’t definitive, but probably your best estimate is .2118.

What if it’s “Somebody I trust told me he got a good look at that die, and it has a great big glob of solder on one edge near the corner.” What does that tell you? I have reason to think it isn’t a fair die but I don’t know by how much, or even which direction.

I guess my best guess is .1666, because I have reason to think that is wrong but I don’t have reason for a better estimate. And my confidence interval? Should it be unchanged compared to a fair die?

Surely I should be less certain than if I had reason to believe it was a fair die. Shouldn’t I?
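The asymmetry between the two reports can be made concrete. A minimal sketch, assuming a uniform Beta(1,1) prior (my choice, not the comment’s) over the chance of rolling a six, applied to the 10,000-roll report:

```python
# Posterior over "probability of six" after the statistical report.
sixes, rolls = 2118, 10_000
mean = (sixes + 1) / (rolls + 2)               # Beta(1+2118, 1+7882) posterior mean
sd = (mean * (1 - mean) / (rolls + 3)) ** 0.5  # posterior standard deviation (approx.)
lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
print(f"estimate {mean:.4f}, ~95% interval ({lo:.4f}, {hi:.4f})")
```

Nothing comparable can be run on the solder report: the best point estimate stays near .1666, but there is no data to put a principled interval around it, which is exactly the loss of confidence being described.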

61. Earthly Knight says:

There’s a crucial disanalogy being missed here between divining the intentions of extra-terrestrial visitors and estimating the chances that an AI will be created in the next century. Given that the only force in the universe capable of giving rise to intelligent life from scratch is evolution by natural selection, we can make reasonable guesses about the motivation and behavior of aliens by way of deductions from one of our best-confirmed scientific theories. We prepare for the risk of attack because it is likely that any alien visitors must have triumphed, like us, in a Malthusian struggle for existence, where violence is often a winning strategy. We send linguists to the landing site because the most intelligent and technologically proficient animals on earth– corvids, cetaceans, hymenoptera, elephants, apes– tend to be social animals with sophisticated systems of communication.

Nothing comparable is available when it comes to artificial intelligence, which means that, in lieu of deductions from scientific theory, we have only fearful and tendentious speculation to go on. In the one case, our judgments are indirectly supported by the reams of evidence accrued to evolution by natural selection. In the other, we pluck numbers from the thin air to hide our ignorance behind a veneer of mathematical sophistication. The two are not alike, and one of the chief failings of the pop-Bayesianism in vogue here is that it contains no mechanism for distinguishing between credences assigned on the basis of evidence and credences assigned on the basis of crankish guesswork.

Incidentally, firing up the banana plantations might not be such a bad idea, were it not just as likely that the aliens would be seeking molybdenum and benzene as potassium and ascorbic acid. In light of our own experiences with sea exploration, it doesn’t seem all that implausible that they would be visiting earth to remedy a nutrient deficiency. Space scurvy kills.

• J Thomas says:

“Given that the only force in the universe capable of giving rise to intelligent life from scratch is evolution by natural selection, we can make reasonable guesses about the motivation and behavior of aliens by way of deductions from one of our best-confirmed scientific theories.”

That makes sense.

“Nothing comparable is available when it comes to artificial intelligence, which means that, in lieu of deductions from scientific theory, we have only fearful and tendentious speculation to go on.”

That makes sense too.

Oh, wait. What if the space entity that approaches us is in fact an AI that space aliens created? Doesn’t that mean that all bets are off in that case too?

• Earthly Knight says:

An AI designed by creatures who are themselves the products of selection will still bear selection’s imprimatur. If the AI is operating by the book, it will inherit the intentions and value system of its masters. If it has gone rogue, it will have to have overpowered, outwitted, or eluded its creators, in which case it will still be wise to arm the warheads in preparation.

There are some more far-fetched scenarios– maybe aliens launched the AI into space as a performance art piece, maybe the computers on a long-derelict vessel achieved self-awareness of their own accord– where we will have no insight into its designs for our planet. But it is just a fact of life that there will always be possibilities too remote and poorly understood to integrate into our plans. We must do the best we can with the information we have.

• J Thomas says:

It sounds like you’re saying we can depend on alien AIs just as much as we can depend on our own AIs, so both will be reasonably predictable.

But of course we can’t be sure how alien AIs came into existence and so we can’t be sure how they’d treat us. And we can’t be sure whether it’s aliens we run into or alien AIs that might perhaps have already exterminated their creators.

I agree, we must do the best we can with the information we have. Except that what we have is not information about the possible future events at all, but only speculation based on things we’ve seen happen in our own provincial experience, here on our isolated planet. Since we have no way to predict what we’ll actually be facing, we might as well think in terms of what we’re comfortable thinking about. Because if it turns out to be useful then we’ll have prepared, and if it isn’t useful at least we haven’t wasted our time trying to think about things that don’t come natural to us, and probably not imagining the ones we’ll actually run into.

62. HeelBearCub says:

I think I have a much better way of framing the debate about probabilities as applied to the AI problem. Maybe this is unfair. Not sure.

Scott just recalled that there is a patient who insisted he be referred to by the name Kal-El. What is the probability you assign to him being Superman? How much effort do you expend trying to convince the patient that AI is an x-risk?

Try and resolve this problem using the rule that zero is not a probability and using Bostrom’s number for potential future humans.

• Deiseach says:

Shouldn’t we be invoking MWI as well? We cannot dismiss the possibility that this person came from one of the planes or dimensions where Krypton did indeed manage to send a rocket bearing the infant Kal-El to an Earth, if not our Earth.

• HeelBearCub says:

Well, sure.

But that’s not quite taking what I am saying seriously. And I actually do mean this seriously.

What probability would we assign to each of the following set of statements:
– Super intelligent creatures are possible
– FTL is possible
– FTL is resource intensive
– Cryonic sleep is possible
– World destroying cataclysms are possible
– World destroying cataclysms are predictable
– Creatures will attempt to flee from a world destroying cataclysm
– Higher-mass worlds naturally produce stronger creatures, stronger on multiple vectors.

So what are the chances that a super-intelligent race facing a world-ending cataclysm sent their progeny in cryonic sleep to all potential life-supporting worlds within range of their FTL drives as limited by their available resources, but were so stupid as to not at least send a breeding pair, and furthermore that one of those worlds was our Earth, and he is here right now in Scott’s hospital, and he is super-strong and super-intelligent?

There are a number of independent statements there. You could assign a probability to each. Are we allowed to use standard probability theory to come up with the likelihood of them occurring simultaneously? Or would that leave us with a number that is “too small” so that we have to consider it to be the result of something like being killed by two asteroids on the day you win two lotteries?
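The conjunction arithmetic itself is straightforward; a sketch in which every per-claim probability is hypothetical (and independence, assumed below, is itself contestable):

```python
# Hypothetical per-claim probabilities for the Kal-El scenario;
# none of these numbers come from anywhere but illustration.
claims = {
    "superintelligent creatures possible": 0.5,
    "FTL possible": 0.01,
    "cryonic sleep possible": 0.1,
    "cataclysm predicted in time to flee": 0.05,
    "a survivor reached our Earth": 1e-6,
    "that survivor is Scott's patient": 1e-9,
}
joint = 1.0
for p in claims.values():
    joint *= p  # standard product rule, assuming independence
print(f"joint probability: {joint:.1e}")  # 2.5e-20
```

Even with individually generous numbers, the product lands far below any everyday “basically zero” threshold, which is the point of the comparison.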

The first statement is very simple: What is the probability that Scott actually made contact with Superman?

The second one contains many of the hidden conjoined statements that add up to it being so unlikely as to be dismissed out of hand.

The simple statement is: What are the chances MIRI helps immanentize the eschaton?

• Deiseach says:

The simple statement is: What are the chances MIRI helps immanentize the eschaton?

I don’t think they will, but that’s mainly because I don’t think what MIRI is presently doing is going to amount to much of anything at all. We may indeed get some useful theory of computation or something out of it, and that’s not a bad result, but that it will produce a working solution universally accepted as the standard governing AI that governments (both those of the U.S. and elsewhere) and independent research teams (ditto) will agree to abide by – I don’t know how likely that is.

Let’s use your list, HeelBearCub, but plug in the AHHHHH! AI WILL KILL US ALL UNLESS! variables that, we are being urged, mean we need to do something right now: that something being give, give, give to MIRI and those like it, and to encourage people to choose research on AI risk over anything else.

– Human-level AI is possible
– Super-human level AI is possible
– Human level AI will naturally evolve/refine itself to super-human level AI. Super-human on multiple vectors
– Super-human level AI is resource intensive
– Super-human AI assuming control on a global scale of human geo-polity is possible
– Civilisation/entire species destroying cataclysms are possible
– Civilisation/entire species destroying cataclysms are predictable
– Rational creatures will attempt to prevent a civilisation/entire species destroying cataclysm
– Super-human level AI will compete with us for resources/may view us as a resource (because of point 4 above); this will be the civilisation/entire species destroying cataclysm we must attempt to prevent

The first statement is very simple: What is the probability that MIRI et al as presently constituted are our best chance to do this?

The second one contains many of the hidden conjoined statements that add up to it being – what? so unlikely as to be dismissed out of hand? credible? immediate and present risk necessitating we concentrate on this to the exclusion of other competing demands such as poverty, racism, global warming, etc.?

Out of the list, I think Civilisation/entire species destroying cataclysms are possible and Rational creatures will attempt to prevent a civilisation/entire species destroying cataclysm are the points everyone is in agreement on and the points to which everyone gives the highest confidence/most probability, it’s the rest of it where we’re not in agreement.

• Well, there have been ~110 billion total human-looking things who ever lived. To the best of our knowledge, none had Superman’s powers.

Moreover, none of Superman’s powers have any plausible explanation in any of the laws of physics as we know them. To find a being with Superman’s powers would arguably be fair evidence that we’re in a simulation and someone’s screwing with the controls; it would overthrow the laws of physics more thoroughly, actually, than the Copernican system overthrew the Ptolemaic; at least both of those could be used to get some OK estimates of where planetary bodies would appear.

Moreover, a being who claims to have superman’s powers (which include a superhuman IQ as well as superhuman strength) but refuses to demonstrate them and for this reason arrives in a psychiatric institution – such a being seems *highly* unlikely to actually have them. Perhaps there could be a subterfuge in some plot that would cause such a being to act this way for a time, granted that we already knew such a being existed. But I honestly cannot think of any subterfuge that would require Superman to continue in this for a long time, chalk up a lengthy history of involuntary commitments, etc.

So we start off with prior odds less than 10^(-11) that an individual is superman. Let’s call the odds that he physically could exist as 10^(-11), because no human has ever made a discovery that seems like that. There’s also the implausibility of such a being being committed, which is harder to provide a justified estimate for, but I’ll also call it 10^(-11) because I like that number and it keeps showing up. Of course, if he is superman, what you do is hardly likely to convince him because he’s already thought of so much, so let’s call the odds of you making him alter his preexisting course (for the better) at 10^(-3).

So we have a 10^(-36) chance that trying to persuade this individual of AI risk, on the off-chance he is superman, will alter his course for the better. That’s not the chance that he’ll make the difference between Bostrom’s number of humans existing, and them not existing, but let’s call that 1 for now.

But wait! It seems that if you multiply that out by Bostrom’s number, the expected value of talking with him is still gigantic! So you must!

Well, no.

After all… you have to consider the distribution of all possible worlds, here. If he’s crazy, talking with him like that makes him worse off, which could have a lot of bad effects. It makes you worse off at your job. You could get fired; and then you couldn’t have the free time to blog about AI risk, and tell other people about it. This could have ripple effects all over the place, with odds much, much greater than 10^(-11) against. And these could all influence the probability of 10^54 (or whatever Bostrom’s number was) of humans living happily, and would *certainly* do so with greater probability than that outlined above. So my expectation of 10^54 humans happily living in the future would be *less* if Scott were to talk to him thus than if otherwise. Talking to him as if he were superman would be killing, probably, millions of statistical humans.

So, no, although all our actions may be weighted with enormous utilitarian significance if Bostrom’s number is right, they don’t oblige you to do stupid things. It just means all your actions are weighted with tremendous utilitarian significance.
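The expected-value comparison above can be put in rough numbers. The 10^(-36) and 10^54 figures come from the comment itself; the probability assigned to “acting normally” is invented purely to illustrate the opportunity-cost point:

```python
# Figures from the comment above; "p_normal_helps" is made up purely
# to illustrate the opportunity-cost argument, not an estimate.
BOSTROM = 1e54                 # potential future humans (Bostrom's number)

p_superman_persuaded = 1e-36   # odds that talking to the patient helps
ev_superman = p_superman_persuaded * BOSTROM   # ~1e18, which looks huge...

p_normal_helps = 1e-30         # invented: odds normal blogging shifts AI risk
ev_normal = p_normal_helps * BOSTROM           # ~1e24, far larger

# The naive "you must talk to him" conclusion ignores that the
# alternative action has much better odds of helping.
print(ev_superman, ev_normal, ev_normal > ev_superman)
```

The point survives any particular choice of numbers: once both actions are multiplied by the same 10^54, only the relative odds of helping matter.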

Edit: Billion is 10^9, not 10^12, which for some reason I was acting as if it were.

• J Thomas says:

“So we start off with prior odds less than 10^(-11) that an individual is superman. Let’s call the odds that he physically could exist as 10^(-11), because no human has ever made a discovery that seems like that. There’s also the implausibility of such a being being committed, which is harder to provide a justified estimate for, but I’ll also call it 10^(-11) because I like that number and it keeps showing up. Of course, if he is superman, what you do is hardly likely to convince him because he’s already thought of so much, so let’s call the odds of you making him alter his preexisting course (for the better) at 10^(-3).”

“So we have a 10^(-36) chance that trying to persuade this individual of AI risk, on the off-chance he is superman, will alter his course for the better.”

I think this would be a good thing for you to read.
http://slatestarcodex.com/2015/08/20/on-overconfidence/

Unless you are being ironical. Sometimes I have the problem myself that irony doesn’t always come across.

• I’ve read it; could you be more specific? No doubt my estimate is off by a few billions of billions, but I don’t think it is unduly improbable. If you have a better model, I’d be happy to listen to it.

If anything, I’d expect the probability to be less if you included elements of what makes superman superman, which I have elided: I.e., he grew up on an alien planet but happens to look just like us and to have gained physiology *more* adapted to our planet than the physiology of things that evolved here.

If you look at the possible space of DNA-animals alone (!) not merely the possible space of living beings, then I’d expect the odds that he looks like us to give us that kind of a number, if not even less. I’m not particularly good at biology, but Wikipedia tells me that we have 3 billion or so base pairs. Let’s assume 1% code for proteins, and 0.1% of those have to be the way they are so that we look human. That gives us, what, 2^(-30,000) probability that Superman looks human? Something like that? And this is taking as given that Superman uses DNA. Even assuming I’m enormously off, my above estimates seem more conservative rather than otherwise.
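Keeping the commenter’s own assumptions (3 billion base pairs, 1% coding, 0.1% of those constrained, 2 options per position), the back-of-the-envelope arithmetic works out as follows; 2^(-30,000) underflows a float, so log arithmetic is the usual trick:

```python
import math

base_pairs = 3e9                # human genome size, per the comment
coding = base_pairs * 0.01      # assume 1% codes for proteins
constrained = coding * 0.001    # assume 0.1% of those must match
# constrained == 30,000 positions

# 2^(-30000) is far below float range, so work in log10 instead.
log10_p = -constrained * math.log10(2)
print(f"~10^{log10_p:.0f}")     # roughly 10^-9031
```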

• LHN says:

While the physical impossibilities and human mimicry are both issues, being better adapted to Earth than we are arguably isn’t. E.g., rabbits are both better adapted to Australia than native competitors and found it a more congenial environment than where they’d evolved. Kryptonians are the perfect invasive species for, well, anywhere.

• Nornagest says:

That gives us, what, 2^(-30,000) probability that Superman looks human? Something like that?

You could make a similar if slightly weaker argument for e.g. thylacines relative to wolves, or any of Nature’s various attempts to evolve a crab — they’re far more closely related to each other than we’d be to Kryptonians, but they look far closer to each other than to their common ancestor. That’s not to say that it’s likely; we have a rather weird body plan and no particular reason to think that it’s an attractor in detail. But it’s not 2^(-LARGENUM) unlikely.

• HeelBearCub says:

He is a super-intelligent, cryo-revived alien from god knows how many light years away.

What is the probability you only think he looks like we do?

• J Thomas says:

I’ve read it; could you be more specific? No doubt my estimate is off by a few billions of billions, but I don’t think it is unduly improbable. If you have a better model, I’d be happy to listen to it.

OK, you are basically arguing that Superman does not exist, therefore this person is not Superman.

But the anthropic principle might apply. Superman obviously does not exist exactly like the comic books, because the US government and the UN have not awarded him any public medals. He does not regularly fly over NYC or Gotham City or any place where people regularly see him and wave at him. He has mostly kept his existence secret.

If he does exist, he could have revealed himself to the comic book authors. They wrote up stories about him, at least partly so that people might like him better when he did reveal himself. If he does exist, then all your hypothetical arguments about why he can’t exist are no longer relevant.

But here is one possible way for him to exist, one of a great many. Perhaps some higher intelligent alien was interested in studying things about humans. So maybe 10,000 years ago they took a large human sample and started genetically engineering them. Some of the things they did look impossible to our physics, because our physics is so backward compared to theirs. The idea that Superman is so strong because he came from a hi-gee planet is obviously flawed, although not completely impossible. More likely somebody who didn’t know, made up that story. It fits that perhaps Krypton was destroyed when the experimenter finished his experiment and euthanised his lab animals. And Superman was sent here because his father knew about this planet (obviously), and thought it was the best place for him.

Can you assign probabilities to the chance that there was a super-intelligent entity who did all that? I can’t. To me it seems like something from the Overconfidence post to imagine that I know enough to do that.

What is the probability that Batman would beat Superman in a fight?

(Answer: Bar uhaqerq creprag, juvpu whfg tbrf gb fubj gung bar uhaqerq creprag vf n inyvq cebonovyvgl nsgre nyy.)

• All of the above seems true, particularly if you relax the hypothesis that Superman looks sufficiently like a human to get a hospital examination and have them notice nothing, and especially if you relax how much Superman has to be like Superman. (Can see through everything but lead, including denser objects? Can fly… just because? Can shoot lasers from his eyes? Can move planets?) Good points about crabs and attractors.

Although that does move uncertainty into things like the thoroughness of the examination, and stuff. So I’ll drop the probability down to… I don’t know. If 10^(-36) seems crazy off, what probability do you assign to the crazy person saying he is Superman actually being Superman?

And I’ve been alternately capitalizing Superman and superman. A superman? The Superman? Is it a title, like Thor is now?

• HeelBearCub says:

@Seeking Omniscience:

Along the lines of what J Thomas said:

– Do you feel that you could make 10^11 similar statements about the likelihood that someone possesses some power unimaginable to you and be right every time?

– Do you feel you could make 10^11 statements about who will and won’t be committed and be right every time?

– On what basis do you justify stating that Scott talking about AI risk reduces AI risk?

• – Sure I could make those statements. Aaron doesn’t simultaneously violate X, Y, and Z physical laws in a stupendously obvious fashion. Aaron B. doesn’t, etc. Then I’d start on the other particular mammals, then on particular ants, and so on. Again, my odds are pretty conservative; see my response to J Thomas.

– I’m probably off on this, sure. But you didn’t quite summarize it right–it isn’t that X will be committed or not, it’s that a super-being would act in Y fashion. But sure, I’m off, granted.

– Talking about risk has historically been a useful way of making people aware that there is risk, and being aware of a risk has historically been useful for avoiding it. I dunno, it doesn’t *always* work, but at least it does sometimes. The chances of it working are pretty obviously better than the superman chances.

I think you missed the main point of what I said, though. It’s that (Scott acting normally, talking about things, through normal channels) is faaar more likely to influence AI risk positively than (Scott talking to random crazy-sounding people). The billions of people who would be saved are just icing on the cake–they make it more important that Scott act (relatively) normally than otherwise. If you have an argument that Scott trying to persuade people who claim to be superman is more likely to positively influence AI risk than Scott talking normally, let’s hear it. But my point is that you have to consider the odds of the alternative actions helping, which you haven’t done.

• HeelBearCub says:

So, do you think Scott is making good arguments about estimating risk?

You seem to be explicitly rejecting his arguments.

• I don’t understand what you are saying. You were originally implying that Scott’s accepting Bostrom’s numbers means [absurd result].

I pointed out that by any reasonable expectation of what the world is like, Scott doing [normal things] will result in more of Bostrom’s [large number of people] being saved (statistically) than Scott doing [stuff like listening to someone who claims to be Superman].

Do you mean I should have more doubt about the certainty of my models? That I need more of my meta-doubt? Is that what you mean by explicitly rejecting his arguments? If so, that’s probably true. But I think even with a healthy load of that, Scott can save more statistical lives acting normally rather than otherwise.

• HeelBearCub says:

@SeekingOmniscience:

Serious question, did you actually read Scott’s last two posts?

Because this is a framing device specifically designed to examine his arguments in this post and the last one. It’s not a general argument about AI x-risk.

• Seth says:

Ah, but there are many examples of Superman being temporarily de-powered for a time, due to one reason or another (e.g. magic, red kryptonite, various advanced devices cutting him off from solar rays). It’d be no trouble at all for the real Kal-El to have a legitimate reason for being powerless at the moment. Indeed, “You say you’re Superman, but you’re not invulnerable” (which can be tested easily and not harmfully, e.g. cut his hair) is arguably one of the weaker refutations in terms of a certain probability chain. That is, when observing Superman at any given time, him being depowered is nontrivially likely. So that’s another reason not to be overconfident in discounting the claim.

[There was one old story where the antagonist depowered Superman, took his place, and managed to have Clark Kent eventually getting psychological therapy for the delusion that he (Kent) was the famous hero.]

• Pku says:

Yeah. There was also an episode of Smallville where an evil phantom trapped superman in an alternate world where he had no powers and was insane, with the catch that if he broke down and believed that world to be true, he would be trapped in it forever. It was somewhat terrifying.

• LHN says:

And of course gold kryptonite exposure would explain why his powers never seem to reappear.

Often Clark Kent has the longest and best-witnessed record of evidence against his being Superman of anyone on the planet: nigh-constant, embarrassingly unsuccessful attempts to prove the opposite on the part of a top-flight newspaper reporter, a neighbor who’s been observing him closely since childhood, a certified supergenius who hates him, and countless lesser investigators. If Kent could be Superman despite that mountain of contrary data, anyone could.

• Yes. I concede this.

• Deiseach says:

Edit: Billion is 10^9, not 10^12, which for some reason I was acting as if it were.

Difference between American and British billion 🙂

63. Gerry Quinn says:

Bayes starts at 50%. If you are asked “What is the chance that X might be the case”, where you have no notion what X is, the answer is 50%. If the question relates to whether aliens would attack the White House or a random banana plantation, you assess probabilities based on what you know about the three entities involved, and you move away from 50%. But 50% is always the starting point.

64. Gilbert says:

I have a long, rambly reply to this. This is a manual pingback since the automatic one doesn’t seem to work.