Slate Star Codex

In a mad world, all blogging is psychiatry blogging

Magic Markers

[Thanks to some people on Rationalist Tumblr, especially prophecyformula, for help and suggestions.]

There’s an old philosophers’ saying – trust those who seek the truth, distrust those who say they’ve found it. The psychiatry version of this goes “Trust those who seek biological underpinnings for mental illness, distrust those who say they’ve found them.”

Niculescu et al (2015) say they’ve found them. Their paper describes a process by which they hunted for biomarkers – in this case changes in gene expression – that predict suicide risk among psychiatric patients. They test various groups of psychiatric patients (including post-mortem tissue from suicide victims) to find some plausible genes. Then they use those genes to predict suicidality in two cohorts of about 100 patients each, including people with depression, schizophrenia, schizoaffective disorder, and bipolar disorder. They arrive at an impressive 92% AUC – that being the area under the ROC curve, which plots sensitivity against 1 minus specificity, a common measure of the accuracy with which they can distinguish people who will vs. won’t be suicidal in the future.

The science press, showing the skepticism and restraint for which they are famous, jump on board immediately. A New Blood Test Can Predict Whether A Patient Will Have Suicidal Thoughts With More Than 90% Accuracy, says Popular Science. New Blood Test Predicts Future Suicide Attempts, says PBS.

There is a procedure for this sort of thing. The procedure is that the rest of us sit back and quietly wait for James Coyne, author of How To Critique Claims For A Blood Test For Depression, to tell us exactly why it is wrong. But it’s been over a week now and this hasn’t happened and I’m starting to worry he’s asleep on the job. So even though this is somewhat outside my area of expertise, let me discuss a couple of factors that concern me about this study.

The 92% accuracy claim is for the authors’ model, called UP-SUICIDE, which combines 11 biomarkers and two clinical prediction instruments. A clinical prediction instrument is a test which asks questions like “How depressed are you feeling right now?” or “How many times have you attempted suicide before?”. By combining the predictive power of the eleven genes and two instruments, they managed to reach the 92% number advertised in the abstract.

It might occur to you to ask “Wait, a test in which you can just ask people if they’re depressed and hate their life sounds a lot easier than this biomarker thing. Are we sure that they’re not just getting all of their predictive power from there?”

The answer is: no, we’re not sure at all, and as far as I can tell the study goes to great pains to make it hard to tell to what degree they are doing this.

Conventional wisdom says that clinical instruments for predicting suicidality can attain AUCs of 0.74 to 0.88. This is most of the way to the 0.92 shown in the current study, but not quite as high. But the current study combines two different clinical prediction instruments. In Combining Scales To Assess Suicide Risk, a Spanish team combines a few different clinical prediction instruments to get an AUC of…0.92.

If you look really closely at Niculescu et al’s big results table, you find that each of the individual prediction instruments they use does almost as well as – and in some cases better than – their UP-SUICIDE model as a whole. For example, when predicting suicidal ideation in all patients, the CFI-S instrument has an AUC of 0.89, compared to the entire model’s 0.92. When predicting suicide-related hospitalizations in depressed patients, the CFI-S has an AUC of 0.78, compared to the entire model’s 0.70. Here the biomarkers are just adding noise!

Are the cases where the entire model outperforms the CFI-S cases where the biomarkers genuinely help? We have no way of knowing. There are two clinical prediction instruments, the CFI-S and the SASS. Combined, they should outperform either one alone. So, for example, on suicidal ideation among all patients, the SASS has an AUC of 0.85, the CFI-S has an AUC of 0.89, and the model as a whole (both instruments combined + 11 biomarkers) has an AUC of 0.92. If we just combined the CFI-S and SASS, and threw out the biomarkers, would we do better or worse than 0.92? I don’t know and they don’t tell us. When all we’re doing is looking at the overall model, the biomarkers may be helping, hurting, or totally irrelevant.

So what if we throw out the clinical prediction instruments and just look at the biomarkers?

The authors use their panel of biomarkers for four different conditions: depression, bipolar, schizophrenia, and schizoaffective. And they have two different outcomes: suicidal ideation according to a test of such, and actual hospitalization for suicide. That’s a total of 4 x 2 = 8 tests that they’re conducting.

Of these eight different tests, the panel of biomarkers taken together comes back insignificant on seven of them.

And there’s such a thing as “trending towards significance”, but this isn’t it. Here, I’ll give p-values:

Depression/ideation: p = 0.26
Depression/hospitalization: p = 0.48
Schizoaffective/ideation: p = 0.46
Schizoaffective/hospitalization: p = 0.94
Schizophrenia/ideation: p = 0.16
Schizophrenia/hospitalization: p = 0.72
Bipolar/hospitalization: p = 0.24

The only test of the eight that comes out significant is bipolar/ideation, where p = 0.007. This is fine (well, it’s fine if it’s supposed to be post-Bonferroni correction, which I can’t be sure of from the paper). But I notice three things. Number one, there were only 29 people in this group. Number two, some of the most impressive-looking genes for the ideation condition were worthless for the hospitalization condition. CLIP4, which got p = 0.005 for the ideation condition, got p = 0.91 for the hospitalization condition and actually had negative predictive value. Number three, some of the genes that best predicted bipolar in the validation data had no predictive value for bipolar at all in the training data, and were included only because they predicted major depressive disorder alone. Given that the effects jump across diagnoses and fail to carry over into even a slightly different method of assessing suicidality, this looks a lot less like a real finding and a lot more like a statistical blip.
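To make the multiple-comparisons point concrete, here’s a minimal sketch in Python of a Bonferroni check on these eight p-values. The alpha = 0.05 threshold is my assumption – the paper doesn’t say what correction, if any, was applied:

```python
# A minimal Bonferroni check on the eight biomarker-panel tests.
# The alpha = 0.05 is an assumption, not a figure from the paper.
p_values = {
    "depression/ideation": 0.26,
    "depression/hospitalization": 0.48,
    "schizoaffective/ideation": 0.46,
    "schizoaffective/hospitalization": 0.94,
    "schizophrenia/ideation": 0.16,
    "schizophrenia/hospitalization": 0.72,
    "bipolar/ideation": 0.007,
    "bipolar/hospitalization": 0.24,
}

alpha = 0.05
threshold = alpha / len(p_values)  # Bonferroni: divide alpha by number of tests

for test, p in p_values.items():
    verdict = "significant" if p < threshold else "not significant"
    print(f"{test}: p = {p} -> {verdict} at threshold {threshold:.4f}")
```

Note that 0.007 narrowly misses the corrected threshold of 0.05/8 ≈ 0.0063 – which is exactly why it matters whether the reported value is pre- or post-correction.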

Finally, note that even in bipolar ideation, their one apparent success, the biomarkers only got an AUC of 0.75, lower than either clinical predictive instrument. The only reason their model did better was because it added on the clinical predictive instruments themselves.

So here it looks like seven out of their eight tests failed miserably, one of them succeeded in a very suspicious way, and they covered over this by combining the data with the clinical predictive instruments which always worked very well. Then everyone interpreted this as the sexy and exciting result “biomarkers work!” rather than the boring result “biomarkers fail, but if you use other stuff instead you’ll still be okay.”

The absolute strongest conclusion you can draw from this study is “biomarkers may predict risk of suicidal ideation in bipolar disorder with an AUC of 0.75”. Instead, everyone thinks biomarkers predict suicidality and hospitalization in a set of four different disorders with AUC of 0.92, which is way beyond what the evidence can support.

II.

So much for that. Now let me explain why it wouldn’t matter much even if they were right.

AUC summarizes the tradeoff between two statistics called sensitivity and specificity. It’s a little complicated, but if we assume it means sensitivity and specificity are both 92% we won’t be far off.
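For a more precise handle on AUC: it equals the probability that a randomly chosen positive case gets a higher risk score than a randomly chosen negative case. Here’s a minimal sketch of that rank interpretation in Python, with made-up scores – the numbers are illustrative, not from the study:

```python
# AUC as a rank statistic: the probability that a random positive case
# outscores a random negative case (ties count as half).
# These scores are made up for illustration.
positives = [0.9, 0.8, 0.75, 0.6]      # risk scores of people who became suicidal
negatives = [0.7, 0.5, 0.4, 0.3, 0.2]  # risk scores of people who didn't

pairs = [(p, n) for p in positives for n in negatives]
auc = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs) / len(pairs)
print(f"AUC = {auc:.2f}")  # 1.0 is perfect ranking, 0.5 is chance
```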

Sensitivity is the probability that a randomly chosen positive case in fact tests positive. In this case, it means the probability that, if someone is actually going to be suicidal, the model flags them as high suicide risk.

Specificity is the probability that a randomly chosen negative case in fact tests negative. In this case, it means the probability that, if someone is not going to be suicidal, the model flags them as low suicide risk.

In this study population, about 7.5% of their patients are hospitalized for suicidality each year. So suppose you got a million depressed people similar to these. 75,000 would be hospitalized for suicidality that year, and 925,000 wouldn’t.

Now, suppose you gave your million depressed people this test with a 92% sensitivity and specificity.

Of the 925,000 non-suicidal people, 92% – 851,000 – will be correctly evaluated as non-suicidal. 8% – 74,000 – will be mistakenly evaluated as suicidal.

Of the 75,000 suicidal people, 92% – 69,000 – will be correctly evaluated as suicidal. 8% – 6,000 – will be mistakenly evaluated as non-suicidal.

But this means that of the 143,000 people the test says are suicidal, only 69,000 – less than half – actually will be!

So when people say “We have a blood test to diagnose suicidality with 92% accuracy!”, even if it’s true, what they mean is that they have a blood test which, even when it comes back positive, leaves less than 50-50 odds that the person involved is suicidal. Okay. Say you’re a psychiatrist. There’s a 48% chance your patient is going to be suicidal in the next year. What are you going to do? Commit her to the hospital? I sure hope not. Ask her some questions, make sure she’s doing okay, watch her kind of closely? You’re a psychiatrist and she’s your depressed patient, you would have been doing that anyway. This blood test is not really actionable.

And then remember that this isn’t the blood test we have. We have some clinical prediction instruments that do this, and we have a blood test which maybe, if you are very trusting, diagnoses suicidality in bipolar disorder with 75% accuracy. At 75% sensitivity and specificity, only twenty percent of the people who test positive will be suicidal. So what?
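If you want to check this base-rate arithmetic yourself, here’s a minimal sketch in Python. It assumes, as the approximation above does, that sensitivity and specificity are equal:

```python
# Positive predictive value: what fraction of positive test results are
# true positives, given the base rate? Sensitivity = specificity here,
# matching the approximation in the text.
def positive_predictive_value(sens_spec: float, base_rate: float) -> float:
    true_pos = sens_spec * base_rate               # suicidal, flagged suicidal
    false_pos = (1 - sens_spec) * (1 - base_rate)  # non-suicidal, flagged suicidal
    return true_pos / (true_pos + false_pos)

base_rate = 0.075  # ~7.5% of the study population hospitalized per year
print(f"{positive_predictive_value(0.92, base_rate):.0%}")  # ~48%
print(f"{positive_predictive_value(0.75, base_rate):.0%}")  # ~20%
```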

There will never be a blood test for suicide that works 100%, because suicide isn’t 100% in the blood. I am the most biodeterminist person you know (unless you know JayMan), I am happy to agree with Martin and Tesser that the heritability of learning Latin is 26% and the heritability of jazz is 45% and so on, but suicide is not just biological. Maybe people need some kind of biological predisposition to consider suicide. But whether they go ahead with it or not depends on whether they have a good or bad day, whether their partner breaks up with them, whether a friend hands them a beer and they get really drunk, et cetera. Taking all of this into account, it’s really unlikely that a blood test will ever get sensitive and specific enough to overcome these hurdles.

We should continue research on the biological underpinnings of depression and suicide, both for the sake of knowledge and because it might lead to better treatments. But having “a blood test for suicide” won’t be very useful, even if it works.

Links 8/15: Linkering Doubts

Okay guys, there’s an island in San Francisco Bay for sale for five million dollars. That’s, like, less than a lot of houses in San Francisco. Surely some of you can get together and figure out something awesome to do with this?

Scott Aaronson speaks at SPARC on Common Knowledge and Aumann’s Agreement Theorem. I get a shout-out at the bottom.

Reddit’s r/china has some interesting looks into the Chinese microblogosphere. You can find the comments of ordinary Chinese web users about the stock market crash (1, 2, 3, 4, 5, 6) but for a real look into the Chinese id, see what commenters think when somebody’s daughter wants to marry a Japanese person.

Speaking of r/china, I was originally confused by their references to “Uncle Eleven”. Turns out to be a nickname and censorship-route-around for Chinese leader Xi Jinping. Can you figure out why? (answer).

The Trobriand Islanders have a system of status based on yams, and Wikipedia describes it as charmingly as possible.

More compound interest being the least powerful force in the universe – an Indian housing lottery offering slum-dwellers the chance to move to better neighborhoods has no effect fourteen years later.

Jeff Kaufman weighs in on effective altruism, AI, and replacement effects.

Related: 100,000 Hours – “It’s a common misconception that we recommend all effective altruists “marry to give,” or marry a high-net-worth individual with the intent of redirecting much of their wealth to effective causes. In retrospect, we emphasized this idea too much in our early days, and as the most controversial of our suggestions it attracted a lot of press. In fact, we recommend that only a small fraction of EAs pursue MtG. MtG is probably best suited to attractive people, those with good social skills, those who fit in well in high-status and wealthy circles, and women looking to marry men.” Clue for the clueless: THIS IS A JOKE.

We know that religious people are happier and more mentally resilient than non-religious people, but the standard explanation is that going to church provides a sense of community and social connectedness. But a new study finds that religious activities are better for your mental health than other forms of social participation.

Matching Platforms and HIV Incidence – online and mobile dating sites increase HIV prevalence when they enter an area. The quasi-experiment suggests they’re responsible for about a thousand extra HIV cases in Florida.

Uber for health care in the form of doctors making on-demand house calls. It’s easy to dismiss this as a toy for the ultra-rich, except that the price – $100 to $200 per visit – actually isn’t too bad compared to what you might otherwise have to go through to get a doctor if you’re not on insurance.

Argentina sort of has open borders already. Why aren’t people raising money to send Africans to Argentina? Or are we worried that if too many people take advantage of the opportunity Argentina will change its mind?

Further adventures in Euclidean geometry: the nine-point circle. Also, mathpages.com isn’t afraid to ask the hard questions, like are all triangles isosceles?

UK admits e-cigarettes are safer than smoking and a useful way to fight tobacco addiction.

Scientists: Modafinil seems to be safe and effective “smart drug”. “We’re not saying go out and take this drug and your life will be better,” [we’re just presenting lots of evidence that this is the case].

Patient blows up hospital ward after lighting cigarette in hyperbaric oxygen chamber. The scary thing is that I can totally imagine the sort of person who would do this.

Finally, a candidate with an idea for out-of-control higher education costs that isn’t just another form of tulip subsidy: Marco Rubio proposes a private equity model a la Milton Friedman.

70% of Pakistani medical students are female, but only 23% of doctors are. A medical education is a status symbol in Pakistan, and women seem to be pursuing it to increase their value in the marriage market, then getting married and dropping out of medicine. As a result, Pakistan spends a lot of money on medical education and is drastically short of doctors. What do they do? Does your opinion change if I tell you that people involved in US medical education have told me we have a similar problem here? (albeit much less severe, and more related to child-rearing than marriage)

The FDA has been approving lots of stuff lately.

Finally, a smoking gun that one of the country’s leading climate change experts was engaged in perpetrating a fraud of massive proportions! Unfortunately for oil companies, that fraud was pretending to be a CIA spy in Pakistan to get out of work.

A more serious problem: most Kyoto-Protocol-approved carbon offsets from Russia and Ukraine may be made up for profit.

Ex-President Jimmy Carter is metal: “I may be dying, but I am going to take an entire species with me.”

Dolphins discover Goodhart’s Law.

Burma’s Superstitious Leaders: “The decision in 1970 for Burma to change from driving on the left-hand side of the road to the right-hand side was reportedly because the General’s astrologer felt that Burma had moved too far left, in political terms.” You say ‘astrologer’, I say ‘social priming theorist ahead of his time’.

Related: Get your anti-priming tin foil hats!

A while ago I argued with Topher about the degree to which people used to say refined carbohydrates were good for you. Topher said no one important had ever said anything like this, and I said some people had sort of said things that implied this even if no one had said it in so many words. Maybe we were both wrong: there was (and still is) a substantial body of literature directly suggesting that “a high-carbohydrate, high-sugars diet is associated with lower body weight and that this association is by no means trivial”. Sigh.

All I want for Christmas is augmented reality sand that turns into a relief map.

How the Japanese do urban zoning.

Plan to solve problem by releasing 25,000 flesh-eating turtles failed due to “lack of planning”, say government officials.

A Neural Algorithm of Artistic Style [warning: opens as PDF]. Unless you’re a machine learning specialist, you want to skip to page 5, where they show the results of converting a photo to the style of various famous paintings.

You’ve probably heard by now that the psychology replication project found only about half of major recent psych studies replicated. If you want you can also see the project’s site and check out some data for yourself.

Related (though note this is an old study): journals will reject ninety percent of the papers they have already published if they don’t realize they’ve already accepted them.

Related: this article on the replication crisis has a neat little widget that lets you p-hack a study yourself and will hopefully make you less credulous of “economy always does better when party Y is in power!” claims.

A study comparing the association between twins (really interesting design!) finds that genetics seems to determine the degree to which fast food makes you obese. That is, people with certain genes won’t gain weight from fast food, but people with other genes will. Still trying to decide what to think about this.

The VCs of BC – trove of cuneiform tablets on the ancient Assyrian economy reveals that they had institutions similar to our stocks, bonds, and venture capital. Also really interesting exploration of the gravity model of trade and what it means for economics today.

Sam Altman, “head of Silicon Valley’s most important startup farm”, says that “if I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.” Meanwhile, on my blog, people who don’t have a day job betting fortunes on tech successes and failures continue to say they’re 99.9999999% sure that even $1 million is too much.

Governors’ Mansions of the United States. I wouldn’t mind being governor of Idaho. On the other hand, I think becoming governor of Delaware would be a step down for a lot of people.

Evidence of pro-female hiring bias in online labor markets.

Speaking of confidence and probability estimates, Scott Adams goes way way way out on a limb and predicts 98% chance of Trump winning the Presidency. While I super-admire his willingness to make a specific numerical prediction that we can judge him on later, I wonder whether he’d be willing to bet me $100 at 10:1 odds (ie I pay him $100 if Trump wins, he pays me $1,000 if Trump loses), given that if his true odds are 50:1 that should be basically free money. Or, of course, he could just play the prediction markets and have even better chances. If not, then despite his virtue in giving a number at all, I can’t believe it’s his real one.

Schwab study looks at how five different strategies for market timing would have worked over the past twenty years.

Retrospective study: the “STEM pipeline” stopped “leaking women” in the 1990s; since that time nothing that happens after the bachelor’s level explains underrepresentation of women in any STEM field.

High levels of national development make countries more likely to be democracies, but democracy does not seem to cause higher levels of national development. Related: relationship between peace and democracy may be spurious.

Jerry Coyne weighs in on the recent “Holocaust trauma is epigenetically inherited” study. Please consider epigenetic inheritance studies guilty until proven innocent at this point.

The stories behind Russian subdivision flags.

Doctors Without Borders makes a plea against cracking down on India’s cheap generic pharmaceutical industry, the “pharmacy of the developing world”.

Remember that story a couple of months ago on a sting that proved most big supplement companies’ products don’t contain any of the advertised active ingredient at all? Now there’s some argument going on that the sting was dishonest and bungled its tests, and the supplement companies were perfectly fine all along. Related: apparently you can’t sue a prosecutor for anything they do, even if it’s really stupid and destroys your business.

Nonconservative whites show a preference for black politicians over otherwise-identical white politicians in matched-“resume” studies, leading to greater willingness to vote for them, donate to them, and volunteer for them. I don’t think the paper looked at conservative whites, and I’m curious what they would have found if they did.

Further suggestion that genes have more effect on IQ in the rich than the poor. A koan: this study found shared environment only affects IQ for people below the tenth percentile in economic status. The tenth percentile of income is below $12,000. But fifty years ago, probably most people were below $12,000, and fifty years from now, maybe nobody will be below $12,000. Do you think this same study done in the past or future would repeat the finding of a less-than-$12,000 threshold, repeat the finding of a less-than-10% threshold, or something else? Why?

Higher school starting age lowers the crime rate among young people. Four day school week improves academic performance. It would probably be irresponsible to sum this up as “basically the less school you have, the better everything goes,” but I bet it’s true.

Currently #1 in Amazon’s Political Philosophy section: SJWs Always Lie by Vox Day. Currently #2 in Amazon’s Political Philosophy section: John Scalzi Is Not A Very Popular Author And I Myself Am Quite Popular, by somebody definitely going the extra mile to parody Vox Day.

Related: did you know that Vox Day once formally debated Luke Muehlhauser on the question of God’s existence? It went about as well as you would expect.


Mysticism and Pattern-Matching

[Epistemic status: Total conjecture.]

One of the things that got me interested in psychiatry was the sheer weirdness of the human brain’s failure modes. We all hear that the brain is like a computer, but when a computer breaks, the screen goes black or it freezes or something. It doesn’t hear voices telling it that it’s Jesus, or start seeing tiny men running around on the floor. But for some reason, when the the human brain breaks, it may do exactly that. Why?

Psychiatry classes never just tell you the answer to this question, but reading between the lines I think it has something to do with top-down processing and pattern matching.

[Three images appeared here in the original post: handwritten “THE CAT”, in which the H and the A are the same ambiguous glyph; the triangular “PARIS IN THE THE SPRINGTIME” sign; and a grainy black-and-white photograph concealing a cow’s head.]

Bottom-up processing is when you go from basic elements to more complex ideas – for example, when you see the three letters C, A, and T in a row, you might combine them to get the the word CAT. Top-down processing is when more complex ideas change the way you interpret basic elements. For example, in the first picture above, the middle letters in both words are the same. We read the first as H, because the image as a gestalt suggests the word “THE” and the word “THE” suggests an H in the middle. We read the second as A, because the image as a gestalt suggests the word “CAT” and the word “CAT” has an A in the middle. Our big-picture idea has changed the way we view the smaller elements composing it.

The same is true of the second image. We recognize the phrase “PARIS IN THE SPRINGTIME”, and so we assume that’s what the sign is trying to show us. In fact, the sign doubles the word “the”. But since this is bizarre and not something that makes sense in the gestalt, we assume this is a mistake and gloss right over it. We do this very, very easily – how many times have I duplicated the word “the” in this essay already?

The third image is related to this tendency. To most people, it looks formless. Even once you hear that it’s an old black-and-white photograph of a cow’s head, it might still require a bit of staring before you catch on. But once you see the cow, the cow is obvious. It becomes impossible to see it as formless, impossible to see it as anything else. Having given yourself a top-down pattern to work from, the pattern automatically organizes the visual stimuli and makes sense of them.

This provides a possible explanation for hallucinations. Think of top-down processing as taking noise and organizing it to fit a pattern. Normally, you’ll only fit it to the patterns that are actually there. But if your pattern-matching system is broken, you’ll fit it to patterns that aren’t in the data at all.

The best example of this is Google Deep Dream:

[Deep Dream images appeared here in the original post.]

I don’t know much about neural networks, so I may not be getting this entirely right, but as far as I understand it, they trained a neural network on some stimulus like a dog. This was for research in machine vision; they wanted the net to be able to recognize dogs when it saw them; to pattern-match potentially noisy images of dogs into its Platonic ideal of a dog. But if you turn the pattern-matching up, it will just start seeing dogs everywhere there’s even the slightest amount of noise that resembles a dog at all. You only matched the sign above to “PARIS IN THE SPRINGTIME” because it was almost exactly like that phrase; if we stick your pattern-matching software into overdrive, maybe every sentence would start looking like more meaningful alternatives. Eevn sceeentns wtih aolsmt all the lerttes rergaearnd wulod naelry ianslntty sanp itno pacle. Turn it all the way up, and maybe you could make every sentence look like “PARIS IN THE SPRINGTIME”. Or something.
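For the technically curious: the underlying technique is roughly gradient ascent on the input image, amplifying whatever features some layer of a trained network responds to. Here’s a rough sketch, assuming a recent PyTorch/torchvision install; the layer index and step size are arbitrary choices of mine, not anything Google published:

```python
# A rough sketch of DeepDream-style pattern amplification.
# Layer choice (20) and step size (0.05) are arbitrary.
import torch
from torchvision import models

net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from pure noise

for step in range(100):
    act = img
    for i, layer in enumerate(net):
        act = layer(act)
        if i == 20:          # stop at an arbitrary mid-level layer
            break
    act.norm().backward()    # "turn up" whatever patterns that layer detects
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)     # keep pixels in valid range
```

Run long enough, the noise starts to look like whatever the chosen layer has learned to detect – the machine equivalent of staring at the formless photo until the cow appears.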

So hallucinations are when your top-down processing/pattern-matching ability becomes so dysfunctional that it can generate people and objects out of random visual noise. Why it chooses some people and objects over others I don’t know, but it’s hardly surprising – it does the same thing every night in your dreams.

Many of the same people who have hallucinations also have paranoia. Paranoia seems to me to be overfunctioning of social pattern-matching. When Deep Dream sees the tiniest hint of a line here, a slight dark spot there, it pattern-matches it into an entire dog. When a paranoiac hears a stray word here, or sees a sideways glance there, they turn it into this vast social edifice of connected plots. Every new thing that happens is fit effortlessly into the same pattern. When their psychiatrist says they’re crazy, that gets fit into the pattern too – maybe the psychiatrist is a tool of the conspiracy, trying to confuse them into compliance.

So where does the mysticism come in?

I notice that the same people who have hallucinations also have mystical experiences. By mystical experiences, I don’t just mean “they see angels” – in that case, the relationship to hallucination would be a tautology. I mean they feel a sense of sudden understanding of and connection with the universe. I know at least three groups that do this: druggies, meditators, and prophets. The druggies report feelings of total understanding on their drugs, and also report hallucinations. The meditators occasionally achieve enlightenment, but look at any text about meditation and you find mentions of visions and hallucinations experienced during the practice. The voices heard by the prophets are too obvious to mention.

One well-known way of bringing on such experiences is to abuse your pattern-matching faculty. The Chicken Qabalah of Rabbi Lamed Ben Clifford (not really recommended) manages to link a pretty boring Bible verse to the letter yud, the creativity of God, the essence of existence, the sun, the phallus, the plane of Malkuth, and the number 496, then explains:

Like a mountain goat leaping ecstatically from crag to crag, one thought springs into another, and another, ad infinitum. You can continue, almost forever, connecting things that you never thought were connected. Sooner or later something’s going to snap and you will overcome the fundamental defect in your powers of perception.

And:

Was that the message Ezekiel was trying to convey? Probably not. But who cares! Whatever it was the old boy was originally trying to say shrinks to insignificance. It is far more important to my spiritual enlightenment that my mind was forced to churn at breakneck speed to put all of this together, and then open itself up to the infinite possibilities of meaning. Look hard enough at anything and eventually you will see everything! It doesn’t even have to make very much sense what you connect to what. It’s all ultimately connected!

This philosophy, which I associate both with kabbalah and with the more modern Western hermetic tradition, says that learning a set of extremely complicated correspondences is an important step toward gaining enlightenment. See for example this site, which helpfully relates the sephirah Netzach to the planet Venus, the number 7, the emerald, the lynx, the rose, cannabis, arsenic, copper, fire, the solar plexus chakra, the archangel Haniel, the Egyptian goddess Hathor, the concepts of love and victory, et cetera, et cetera. You’re supposed to be able to use this to interpret things – for example, if you have a dream about a lynx, it could correspond to anything else in the system – but it looks like it would quickly get unwieldy. And other sources will give completely different systems of correspondences, and nobody gets too upset over it – in fact, some sources will happily encourage you to come up with your own correspondences instead, as long as you stick to them. It seems like the goal is less “remember that it’s extremely important that emeralds correspond to lynxes in reality” and more “have some system, any system, of interesting correspondences in mind that you can apply to everything you come across”.

Nor does it especially matter what you’re interpreting. The traditional things to interpret are mysterious things like dreams, or the Bible, but Crowley famously performs a mystical analysis of Mother Goose nursery rhymes (see Interlude here). The important factor seems to be less about there being sacred truth in the object being analyzed, and more about the process of performing the analysis.

(Zen koans are a little different, but also sort of involve torturing a pattern-finding ability for apparently no reason)

So to skip to the point: I think all of this is about strengthening the pattern-matching faculty. You’re exercising it uselessly but impressively, the same way as the body-builder who lifts the same weight a thousand times until their arms are the size of tree trunks. Once the pattern-matching faculty is way way way overactive, it (spuriously) hallucinates a top-down abstract pattern in the whole universe. This is the experience that mystics describe as “everything is connected” or “all is one”, or “everything makes sense” or “everything in the universe is good and there for a purpose”. The discovery of a beautiful all-encompassing pattern in the universe is understandably associated with “seeing God”.

Religious scholar William James once experimented with nitrous oxide and reached a state where he felt he had total comprehension of the universe. According to a story which I can’t verify, he became infuriated at losing the thread of understanding once the chemical wore off, so he decided to take notes during the experience: write down the secrets of the universe then, and reread them once he was sober. The experiment completed, he picked up the notepad in feverish excitement, only to find that he had written OVERALL THERE IS A SMELL OF FRIED ONIONS.

Imagine one of those Google robots pointing at an empty patch of sky and saying “No, look, seriously, there’s a dog right there. Right there! How are you not seeing this?” Things that make perfect sense in the context of a state of overactive pattern-matching look meaningless to a pattern-matching faculty operating normally. At best, you can sort of see the lines of what seemed so clear before (“Yeah, I can see that that stain on the wall is vaguely dog-shaped.”) This matches the stories I’ve heard of people who have some mystical experience but then can’t maintain or recapture it.

I think other methods of inducing weird states of consciousness, like drugs and meditation, probably do the same thing by some roundabout route. Meditation seems like reducing stimuli, which is known to lead to hallucinations in eg sensory deprivation tanks or solitary confinement cells in jail. I think the general principle is that a low level of external stimuli makes your brain adjust its threshold for stimulus detection up until anything including random noise satisfies the threshold. As for drugs, there’s lots of reasons to think that the neurotransmission changes they create will alter the brain’s pattern processing strategies.

Things this hypothesis doesn’t explain: why mystical experiences are linked with a feeling of no time, no space, and no self; why prayer or extreme devotion seems to induce them (eg bhakti yoga); and why they can be so beneficial – that is, why do people with mystical experiences become happier and better adjusted? Maybe the feeling of the world making sense is naturally a pleasant and helpful one. Certainly the opposite can be very stressful!


Probabilities Without Models

[Epistemic status: Not original to me. Also, I might be getting it wrong.]

A lot of responses to my Friday post on overconfidence centered around this idea that we shouldn’t, we can’t, use probability at all in the absence of a well-defined model. The best we can do is say that we don’t know and have no way to find out. I don’t buy this:

“Mr. President, NASA has sent me to warn you that a saucer-shaped craft about twenty meters in diameter has just crossed the orbit of the moon. It’s expected to touch down in the western United States within twenty-four hours. What should we do?”

“How should I know? I have no model of possible outcomes.”

“Should we put the military on alert?”

“Maybe. Maybe not. Putting the military on alert might help. Or it might hurt. We have literally no way of knowing.”

“Maybe we should send a team of linguists and scientists to the presumptive landing site?”

“What part of ‘no model’ do you not understand? Alien first contact is such an inherently unpredictable enterprise that even speculating about whether linguists should be present is pretending to a certainty which we do not and cannot possess.”

“Mr. President, I’ve got our Israeli allies on the phone. They say they’re going to shoot a missile at the craft because ‘it freaks them out’. Should I tell them to hold off?”

“No. We have no way of predicting whether firing a missile is a good or bad idea. We just don’t know.”

In real life, the President would, despite the situation being totally novel and without any plausible statistical model, probably make some decision or another, like “yes, put the military on alert”. And this implies a probability judgment. The reason the President will put the military on alert, but not, say, put banana plantations on alert, is that in his opinion the aliens are more likely to attack than to ask for bananas.

Fine, say the doubters, but surely the sorts of probability judgments we make without models are only the most coarse-grained ones, along the lines of “some reasonable chance aliens will attack, no reasonable chance they will want bananas.” Where “reasonable chance” can mean anything from 1% to 99%, and “no reasonable chance” means something less than that.

But consider another situation: imagine you are a director of the National Science Foundation (or a venture capitalist, or an effective altruist) evaluating two proposals that both want the same grant. Proposal A is by a group with a long history of moderate competence who think they can improve the efficiency of solar panels by a few percent; their plan is a straightforward application of existing technology and almost guaranteed to work and create a billion dollars in value. Proposal B is by a group of starry-eyed idealists who seem very smart but have no proven track record; they say they have an idea for a revolutionary new kind of super-efficient desalinization technology; if it works it will completely solve the world’s water crisis and produce a trillion dollars in value. Your organization is risk-neutral to a totally implausible degree. What do you do?

Well, it seems to me that you choose Proposal B if you think it has at least a 1/1000 chance of working out; otherwise, you choose Proposal A. But this requires at least attempting to estimate probabilities in the neighborhood of 1/1000 without a model. Crucially, there’s no way to avoid this. If you shrug and take Proposal A because you don’t feel like you can assess proposal B adequately, that’s making a choice. If you shrug and take Proposal B because what the hell, that’s also making a choice. If you are so angry at being placed in this situation that you refuse to choose either A or B and so pass up both a billion and a trillion dollars, that’s a choice too. Just a stupid one.
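The arithmetic behind that 1/1000 threshold, as a minimal sketch using the made-up figures from the example:

```python
# The risk-neutral comparison from the grant example above.
value_a = 1e9   # Proposal A: ~guaranteed $1 billion
value_b = 1e12  # Proposal B: $1 trillion, but only if it works

breakeven = value_a / value_b
print(f"Choose B if P(B works) > {breakeven}")  # 0.001, i.e. 1/1000

# Sanity check: at 1-in-500 odds, B already beats A in expectation.
print(1.0 * value_a)     # A: 1.0e9
print(0.002 * value_b)   # B: 2.0e9
```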

Nor can you cry “Pascal’s Mugging!” in order to escape the situation. I think this defense is overused and underspecified, but at the very least, it doesn’t seem like it can apply in places where the improbable option is likely to come up over your own lifespan. So: imagine that your organization actually reviews about a hundred of these proposals a year. In fact, it’s competing with a bunch of other organizations that also review a hundred or so such proposals a year, and whoever’s projects make the most money gains lots of status and new funding. Now it’s totally plausible that, over the course of ten years, it might be a better strategy to invest in things that have a one in a thousand chance of working out. Indeed, maybe you can see the organizations that do this outperforming the organizations that don’t. The question really does come down to your judgment: are Project B’s odds of success greater or less than 1/1000?

Nor is this a crazy hypothetical situation. A bunch of the questions we have to deal with come down to these kinds of decisions made without models. Like – should I invest for retirement, even though the world might be destroyed by the time I retire? Should I support the Libertarian candidate for president, even though there’s never been a libertarian-run society before and I can’t know how it will turn out? Should I start learning Chinese because China will rule the world over the next century? These questions are no easier to model than ones about cryonics or AI, but they’re questions we all face.

The last thing the doubters might say is “Fine, we have to face questions that can be treated as questions of probability. But we should avoid treating them as questions of probability anyway. Instead of asking ourselves ‘is the probability that the desalinization project will work greater or less than 1/1000’, we should ask ‘do I feel good about investing this money in the desalinization plant?’ and trust our gut feelings.”

There is some truth to this. My medical school thesis was on the probabilistic judgments of doctors, and they’re pretty bad. Doctors are just extraordinarily overconfident in their own diagnoses; a study by Bushyhead, who despite his name is not a squirrel, found that when doctors were 80% certain that patients had pneumonia, only 20% would turn out to have the disease. On the other hand, the doctors still did the right thing in almost every case, operating off of algorithms and heuristics that never mentioned probability. The conclusion was that as long as you don’t force doctors to think about what they’re doing in mathematical terms, everything goes fine – something I’ve brought up before in the context of the Bayes mammogram problem. Maybe this generalizes. Maybe people are terrible at coming up with probabilities for things like investing in desalinization plants, but will generally make the right choice.

But refusing to frame choices in terms of probabilities also takes away a lot of your options. If you use probabilities, you can check your accuracy – the foundation director might notice that of a thousand projects she had estimated as having 1/1000 probabilities, actually about 20 succeeded, meaning that she’s overconfident. You can do other things. You can compare people’s success rates. You can do arithmetic on them (“if both these projects have 1/1000 probability, what is the chance they both succeed simultaneously?”), you can open prediction markets about them.
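And that check is itself a simple computation – a sketch, assuming scipy is available, using the made-up numbers from the example:

```python
# If each of 1000 projects really had a 1/1000 chance, how surprising
# is seeing 20 of them succeed? (Numbers from the director example above.)
from scipy.stats import binom

prob = binom.sf(19, 1000, 1 / 1000)  # P(X >= 20) if she were calibrated
print(f"{prob:.1e}")  # astronomically small -> her 1/1000s were far too low
```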

Most important, you can notice and challenge overconfidence when it happens. I said last post that when people say there’s only a one in a million chance of something like AI risk, they are being stupendously overconfident. If people just very quietly act as if there’s a one in a million chance of such risk, without ever saying it, then no one will ever be able to call them on it.

I don’t want to say I’m completely attached to using probability here in exactly the normal way. But all of the alternatives I’ve heard fall apart when you’ve got to make an actual real-world choice, like sending the military out to deal with the aliens or not.

[EDIT: Why regressing to meta-probabilities just gives you more reasons to worry about overconfidence]

[EDIT-2: “I don’t know”]

[EDIT-3: A lot of debate over what does or doesn’t count as a “model” in this case. Some people seem to be using a weak definition like “any knowledge whatsoever about the process involved”. Others seem to want a strong definition like “enough understanding to place this event within a context of similar past events such that a numerical probability can be easily extracted by math alone, like the model where each flip of a two-sided coin has a 50% chance of landing heads”. Without wanting to get into this, suffice it to say that any definition in which the questions above have “models” is one where AI risk also has a model.]


On Overconfidence

[Epistemic status: This is basic stuff to anyone who has read the Sequences, but since many readers here haven’t I hope it is not too annoying to regurgitate it. Also, ironically, I’m not actually that sure of my thesis, which I guess means I’m extra-sure of my thesis]

I.

A couple of days ago, the Global Priorities Project came out with a calculator that allowed you to fill in your own numbers to estimate how concerned you should be with AI risk. One question asked how likely you thought it was that there would be dangerous superintelligences within a century, offering a drop-down menu with probabilities ranging from 90% to 0.01%. And so people objected: there should be options to put in only a one in a million chance of AI risk! One in a billion! One in a…

For example, a commenter writes: “the best (worst) part: the probability of AI risk is selected from a drop down list where the lowest probability available is 0.01%!! Are you kidding me??” and then goes on to say his estimate of the probability of human-level (not superintelligent!) AI this century is “very very low, maybe 1 in a million or less”. Several people on Facebook and Tumblr say the same thing – a 1/10,000 chance just doesn’t represent how sure they are that there’s no risk from AI; they want one in a million or more.

Last week, I mentioned that Dylan Matthews’ suggestion that maybe there was only a 10^-67 chance you could affect AI risk was stupendously overconfident. I mentioned that was thousands of times lower than the chance, per second, of getting simultaneously hit by a tornado, meteor, and al-Qaeda bomb, while also winning the lottery twice in a row. Unless you’re comfortable with that level of improbability, you should stop using numbers like 10^-67.

But maybe it sounds like “one in a million” is much safer. That’s only 10^-6, after all, way below the tornado-meteor-terrorist-double-lottery range…

So let’s talk about overconfidence.

Nearly everyone is very very very overconfident. We know this from experiments where people answer true/false trivia questions, then are asked to state how confident they are in their answer. If people’s confidence was well-calibrated, someone who said they were 99% confident (ie only 1% chance they’re wrong) would get the question wrong only 1% of the time. In fact, people who say they are 99% confident get the question wrong about 20% of the time.

It gets worse. People who say there’s only a 1 in 100,000 chance they’re wrong? Wrong 15% of the time. One in a million? Wrong 5% of the time. They’re not just overconfident, they are fifty thousand times as confident as they should be.

This is not just a methodological issue. Test confidence in some other clever way, and you get the same picture. For example, one experiment asked people how many numbers there were in the Boston phone book. They were instructed to set a range, such that the true number would be in their range 98% of the time (ie they would only be wrong 2% of the time). In fact, they were wrong 40% of the time. Twenty times too confident! What do you want to bet that if they’d been asked for a range so wide there was only a one in a million chance they’d be wrong, at least five percent of them would have bungled it?
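Scoring these experiments is straightforward: bucket answers by stated confidence, then compare each bucket’s claimed accuracy with its actual accuracy. A minimal sketch with made-up data:

```python
# A minimal calibration check. The (confidence, correct?) data is made up.
from collections import defaultdict

answers = [
    (0.99, True), (0.99, False), (0.99, True), (0.99, True), (0.99, False),
    (0.70, True), (0.70, False), (0.70, True), (0.70, True),
]

buckets = defaultdict(list)
for confidence, correct in answers:
    buckets[confidence].append(correct)

for confidence, results in sorted(buckets.items()):
    hit_rate = sum(results) / len(results)
    print(f"stated {confidence:.0%} -> actually right {hit_rate:.0%} "
          f"({len(results)} answers)")
```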

Yet some people think they can predict the future course of AI with one in a million accuracy!

Imagine if every time you said you were sure of something to the level of 999,999/1 million, and you were right, the Probability Gods gave you a dollar. Every time you said this and you were wrong, you lost $1 million (if you don’t have the cash on hand, the Probability Gods offer a generous payment plan at low interest). You might feel like getting some free cash for the parking meter by uttering statements like “The sun will rise in the east tomorrow” or “I won’t get hit by a meteorite” without much risk. But would you feel comfortable predicting the course of AI over the next century? What if you noticed that most other people only managed to win $20 before they slipped up? Remember, if you say even one false statement under such a deal, all of your true statements you’ve said over years and years of perfect accuracy won’t be worth the hole you’ve dug yourself.
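To see how brutal that deal is, compare the expected value per statement under your claimed error rate versus the 5% error rate that one-in-a-million claimants actually show:

```python
# Expected value of the Probability Gods' bet at different error rates.
win, loss = 1, 1_000_000   # win $1 if right, lose $1M if wrong

def ev(p_wrong: float) -> float:
    return (1 - p_wrong) * win - p_wrong * loss

print(ev(1e-6))  # ~0: fair if you really err once in a million statements
print(ev(0.05))  # about -$50,000 per statement at the observed error rate
```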

Or – let me give you another intuition pump about how hard this is. Bayesian and frequentist statistics are pretty much the same thing [citation needed] – when I say “50% chance this coin will land heads”, that’s the same as saying “I expect it to land heads about one out of every two times.” By the same token, “There’s only a one in a million chance that I’m wrong about this” is the same as “I expect to be wrong on only one of a million statements like this that I make.”

What do a million statements look like? Suppose I can fit twenty-five statements onto the page of an average-sized book. I start writing my predictions about scientific and technological progress in the next century. “I predict there will not be superintelligent AI.” “I predict there will be no simple geoengineering fix for global warming.” “I predict no one will prove P = NP.” War and Peace, one of the longest books ever written, is about 1500 pages. After you write enough of these statements to fill a War and Peace sized book, you’ve made 37,500. You would need to write about 27 War and Peace sized books – enough to fill up a good-sized bookshelf – to have a million statements.
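The bookshelf arithmetic, spelled out:

```python
# How many War and Peace sized books hold a million statements?
statements_per_page = 25
pages_per_book = 1500                  # roughly War and Peace
per_book = statements_per_page * pages_per_book
print(per_book)                        # 37,500 statements per book
print(round(1_000_000 / per_book))     # ~27 books
```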

So, if you want to be confident to the level of one-in-a-million that there won’t be superintelligent AI next century, you need to believe that you can fill up 27 War and Peace sized books with similar predictions about the next hundred years of technological progress – and be wrong – at most – once!

This is especially difficult because claims that a certain form of technological progress will not occur have a very poor track record of success, even when uttered by the most knowledgeable domain experts. Consider how Nobel Prize-winning atomic scientist Ernest Rutherford dismissed the possibility of nuclear power as “the merest moonshine” less than a day before Szilard figured out how to produce such power. In 1901, Wilbur Wright told his brother Orville that “man would not fly for fifty years” – two years later, they flew, leading Wilbur to say that “ever since, I have distrusted myself and avoided all predictions”. Astronomer Joseph de Lalande told the French Academy that “it is impossible” to build a hot air balloon and “only a fool would expect such a thing to be realized”; the Montgolfier brothers flew less than a year later. This pattern has been so consistent throughout history that sci-fi titan Arthur C. Clarke (whose own predictions were often eerily accurate) made a heuristic out of it under the name Clarke’s First Law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

Also – one good heuristic is to look at what experts in a field think. According to Muller and Bostrom (2014), a sample of the top 100 most-cited authors in AI ascribed a > 70% probability to AI within a century, a 50% chance of superintelligence conditional on human-level AI, and a 10% chance of existential catastrophe conditional on human-level AI. Multiply it out, and you get a couple percent chance of superintelligence-related existential catastrophe in the next century.
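The multiplication, spelled out (treating the survey estimates as chainable probabilities, which is the back-of-the-envelope move being made here):

```python
# Chaining the Muller & Bostrom survey numbers quoted above.
p_human_level = 0.70   # human-level AI within a century
p_super = 0.50         # superintelligence, conditional on the above
p_catastrophe = 0.10   # existential catastrophe, conditional on the above
print(f"{p_human_level * p_super * p_catastrophe:.1%}")
# 3.5% -- "a couple percent", the roughly-4% figure referenced below
```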

Note that my commenter wasn’t disagreeing with the 4% chance. They were disagreeing with the possibility that there would be human-level AI at all, that is, the 70% chance! That means they were saying, essentially, that they were confident they could write a million sentences – that is, twenty-seven War and Peace’s worth – all of which were trying to predict trends in a notoriously difficult field, all of which contradicted a well-known heuristic about what kind of predictions you should never try to make, all of which contradicted the consensus opinion of the relevant experts – and only have one of the million be wrong!

But if you feel superior to that because you don’t believe there’s only a one-in-a-million chance of human-level AI, you just believe there’s a one-in-a-million chance of existential catastrophe, you are missing the point. Okay, you’re not 300,000 times as confident as the experts, you’re only 40,000 times as confident. Good job, here’s a sticker.

Seriously, when people talk about being able to defy the experts a million times in a notoriously tricky area they don’t know much about and only be wrong once – I don’t know what to think. Some people criticize Eliezer Yudkowsky for being overconfident in his favored interpretation of quantum mechanics, but he doesn’t even attach a number to that. For all I know, maybe he’s only 99% sure he’s right, or only 99.9%, or something. If you are absolutely outraged that he is claiming one-in-a-thousand certainty on something that doesn’t much matter, shouldn’t you be literally a thousand times more outraged when every day people are claiming one-in-a-million level certainty on something that matters very much? It is almost impossible for me to comprehend the mindsets of people who make a Federal Case out of the former, but are totally on board with the latter.

Everyone is overconfident. When people say one-in-a-million, they are wrong five percent of the time. And yet, people keep saying “There is only a one in a million chance I am wrong” on issues of making really complicated predictions about the future, where many top experts disagree with them, and where the road in front of them is littered with the bones of the people who made similar predictions before. HOW CAN YOU DO THAT?!

II.

I am of course eliding an important issue. The experiments where people offering one-in-a-million chances were wrong 5% of the time were on true-false questions – those with only two possible answers. There are other situations where people can often say “one in a million” and be right. For example, I confidently predict that if you enter the lottery tomorrow, there’s less than a one in a million chance you will win.

On the other hand, I feel like I can justify that. You want me to write twenty-seven War and Peace volumes about it? Okay, here goes. “Aaron Aaronson of Alabama will not win the lottery. Absalom Abramowitz of Alaska will not win the lottery. Achitophel Acemoglu of Arkansas will not win the lottery.” And so on through the names of a million lottery ticket holders.

I think this is what statisticians mean when they talk about “having a model”. Within the model where there are a hundred million ticket holders, and we know exactly one will be chosen, our predictions are on very firm ground, and our intuition pumps reflect that.

Another way to think of this is by analogy to dart throws. Suppose you have a target that is half red and half blue; you are aiming for red. You would have to be very very confident in your dart skills to say there is only a one in a million chance you will miss it. But if there is a target that is 999,999 millionths red, and 1 millionth blue, then you do not have to be at all good at darts to say confidently that there is only a one in a million chance you will miss the red area.

Suppose a Christian says “Jesus might be God. And he might not be God. 50-50 chance. So you would have to be incredibly overconfident to say you’re sure he isn’t.” The atheist might respond “The target is full of all of these zillions of hypotheses – Jesus is God, Allah is God, Ahura Mazda is God, Vishnu is God, a random guy we’ve never heard of is God. You are taking a tiny tiny submillimeter-sized fraction of a huge blue target, painting it red, and saying that because there are two regions of the target, a blue region and a red region, you have equal chance of hitting either.” Eliezer Yudkowsky calls this “privileging the hypothesis”.

There’s a tougher case. Suppose the Christian says “Okay, I’m not sure about Jesus. But either there is a Hell, or there isn’t. Fifty fifty. Right?”

I think the argument against this is that there are way more ways for there not to be Hell than there are for there to be Hell. If you take a bunch of atoms and shake them up, they usually end up as not-Hell, in much the same way as the creationists’ fabled tornado-going-through-a-junkyard usually ends up as not-a-Boeing-747. For there to be Hell you have to have some kind of mechanism for judging good vs. evil – which is a small part of the space of all mechanisms, let alone the space of all things – some mechanism for diverting the souls of the evil to a specific place – likewise a small part of the space – some mechanism for punishing them – likewise again – et cetera. Most universes won’t have Hell unless you go through a lot of work to put one there. Therefore, Hell existing is only a very tiny part of the target. Making this argument correctly would require an in-depth explanation of formalizations of Occam’s Razor, which is outside the scope of this essay but which you can find in the LW Sequences.

But this kind of argumentation is really hard. Suppose I predict “Only one in 150 million chance Hillary Clinton will be elected President next year. After all, there are about 150 million Americans eligible for the Presidency. It could be any one of them. Therefore, Hillary covers only a tiny part of the target.” Obviously this is wrong, but it’s harder to explain how. I would say that your dart-aim is guided by an argument based on a concrete numerical model – something like “She is ahead in the polls by X right now, and candidates who are ahead in the polls by X usually win about 50% of the time, therefore, her real probability is more like 50%.”

Or suppose I predict “Only one in a million chance that Pythagoras’ Theorem will be proven wrong next year.” Can I get away with that? I can’t quite appeal to “it’s been proven”, because there might have been a mistake in (all the) proofs. But I could say: suppose there are five thousand great mathematical theorems that have undergone something like the same level of scrutiny as Pythagoras’, and they’ve been known on average for two hundred years each. None of them have ever been disproven. That’s a numerical argument that the rate of theorem-disproving is less than one per million theorem-years, and I think it holds.

Another way to do this might be “there are three hundred proofs of Pythagoras’ theorem, so even accepting an absurdly high 10%-per-proof chance of being wrong, the chance that they are all wrong is only 10^-300.” Or “If there’s a 10% chance that each mathematician reading a proof misses something, and one million mathematicians have read the proof of Pythagoras’ Theorem, then the probability that they all missed it is more like 10^-1,000,000.”
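Both of those figures are just independent failure probabilities multiplied together – and independence is the load-bearing assumption, since three hundred proofs resting on one shared hidden mistake would all fail at once:

$$P(\text{all 300 proofs wrong}) = 0.1^{300} = 10^{-300} \qquad P(\text{all } 10^{6} \text{ readers miss it}) = 0.1^{1{,}000{,}000} = 10^{-1{,}000{,}000}$$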

But this can get tricky. Suppose I argued “There’s a good chance Pythagoras’ Theorem will be disproven, because of all Pythagoras’ beliefs – reincarnation, eating beans being super-evil, ability to magically inscribe things on the moon – most have since been disproven. Therefore, the chance of a randomly selected Pythagoras-innovation being wrong is > 50%.”

Or: “In 50 past presidential elections, none have been won by women. But Hillary Clinton is a woman. Therefore, the chance of her winning this election is less than 1/50.”

All of this stuff about adjusting for size of the target or for having good mathematical models is really hard and easy to do wrong. And then you have to add another question: are you sure, to a level of one-in-a-million, that you didn’t mess up your choice of model at all?

Let’s bring this back to AI. Suppose that, given the complexity of the problem, you give only a one in a million chance that we will be able to invent an AI this century. But if the modal genome trick pushed by people like Greg Cochran works out, within a few decades we might be able to genetically engineer humans far smarter than any who have ever lived. Given tens of thousands of such supergeniuses, might we be able to solve an otherwise impossible problem? I don’t know. But if there’s a 1% chance that we can perform such engineering, and a 1% chance that such supergeniuses can invent artificial intelligence within a century, then the probability of AI within the next century isn’t one in a million, it’s at least one in ten thousand.

Or: consider the theory that all the hard work of brain design has been done by the time you have a rat brain, and after that it’s mostly just a matter of scaling up. You can find my argument for the position in this post – search for “the hard part is evolving so much as a tiny rat brain”. Suppose there’s a 10% chance this theory is true, and a 10% chance that researchers can at least make rat-level AI this century. Then the chance of human-level AI is not one in a million, but one in a hundred.
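Spelling out the arithmetic in those two paragraphs (taking each second probability as conditional on the first, which is how the text multiplies them – this is my restatement, and each product is only a floor, since other routes to AI would raise it):

$$P(\text{AI this century}) \geq \underbrace{0.01 \times 0.01}_{\text{enhancement path}} = 10^{-4} \qquad P(\text{AI this century}) \geq \underbrace{0.1 \times 0.1}_{\text{scale-up path}} = 10^{-2}$$

Either path alone puts the floor two to four orders of magnitude above one in a million.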

Maybe you disagree with both of these claims. The question is: did you even think about them before you gave your one in a million estimate? How many other things are there that you never thought about? Now your estimate has, somewhat bizarrely, committed you to saying there’s a less than one in a million chance we will significantly enhance human intelligence over the next century, and a less than one in a million chance that the basic-scale-up model of intelligence is true. You may never have thought directly about these problems, but by saying “one in a million chance of AI in the next hundred years”, you are not only committing yourself to a position on them, but committing yourself to a position with one-in-a-million level certainty even though several domain experts who have studied these fields for their entire lives disagree with you!

A claim like “one in a million chance of X” not only implies that your model is strong enough to spit out those kinds of numbers, but that there’s only a one in a million chance you’re using the wrong model, or missing something, or screwing up the calculations.

A few years ago, a group of investment bankers came up with a model for predicting the market, and used it to design a trading strategy which they said would meet certain parameters. In fact, they said that there was only a one in 10^135 chance it would fail to meet those parameters during a given year. A human just uttered the probability “1 in 10^135”, so you can probably guess what happened. The very next year was the 2007 financial crisis, the model wasn’t prepared to deal with the extraordinary fallout, the strategy didn’t meet its parameters, and the investment bank got clobbered.

This is why I don’t like it when people say we shouldn’t talk about AI risk because it involves “Knightian uncertainty”. In the real world, Knightian uncertainty collapses back down to plain old regular uncertainty. When you are an investment bank, the money you lose because of normal uncertainty and the money you lose because of Knightian uncertainty are denominated in the same dollars. Knightian uncertainty becomes just another reason not to be overconfident.

III.

I came back to AI risk there, but this isn’t just about AI risk.

You might have read Scott Aaronson’s recent post about Aumann’s Agreement Theorem, which says that rational agents should be able to agree with one another. This is a nice utopian idea in principle, but in practice, well, nobody seems to be very good at carrying it out.

I’d like to propose a more modest version of Aumann’s agreement theorem, call it Aumann’s Less-Than-Total-Disagreement Theorem, which says that two rational agents shouldn’t both end up with 99.9…% confidence on opposite sides of the same problem.

The “proof” is pretty similar to the original. Suppose you are 99.9% confident about something, and learn your equally educated, intelligent, and clear-thinking friend is 99.9% confident of the opposite. Arguing with each other and comparing your evidence fails to make either of you budge, and neither of you can marshal the weight of a bunch of experts saying you’re right and the other guy is wrong. Shouldn’t the fact that your friend, using a cognitive engine about as powerful as your own, reached so different a conclusion make you worry that you’re missing something?

But practically everyone is walking around holding 99.9…% probabilities on the opposite sides of important issues! I checked the Less Wrong Survey, which is as good a source as any for people’s confidence levels on various tough questions. Of the 1400 respondents, about 80 were at least 99.9% certain that there were intelligent aliens elsewhere in our galaxy; about 170 others were at least 99.9% certain that they weren’t. At least 80 people just said they were certain to one part in a thousand and then got the answer wrong! And some of the responses were things like “this box cannot fit as many zeroes as it would take to say how certain I am”. Aside from stock traders who are about to go bankrupt, who says that sort of thing??!

And speaking of aliens, imagine if an alien learned about this particular human quirk. I can see them thinking “Yikes, what kind of a civilization would you get with a species who routinely go around believing opposite things, always with 99.99…% probability?”

Well, funny you should ask.

I write a lot about free speech, tolerance of dissenting ideas, open-mindedness, et cetera. You know which posts I’m talking about. There are a lot of reasons to support such a policy. But one of the big ones is – who the heck would burn heretics if they thought there was a 5% chance the heretic was right and they were wrong? Who would demand that dissenting opinions be banned, if they were only about 90% sure of their own? Who would start shrieking about “human garbage” on Twitter when they fully expected that in some sizeable percent of cases, they would end up being wrong and the garbage right?

Noah Smith recently asked why it was useful to study history. I think at least one reason is to medicate your own overconfidence. I’m not just talking about things like “would Stalin have really killed all those people if he had considered that he was wrong about communism” – especially since I don’t think Stalin worked that way. I’m talking about Neville Chamberlain predicting “peace in our time”, or the centuries when Thomas Aquinas’ philosophy was the preeminent Official Explanation Of Everything. I’m talking about Joseph “no one will ever build a working hot air balloon” Lalande. And yes, I’m talking about what Muggeridge writes about, millions of intelligent people thinking that Soviet Communism was great, and ending up disastrously wrong. Until you see how often people just like you have been wrong in the past, it’s hard to understand how uncertain you should be that you are right in the present. If I had lived in 1920s Britain, I probably would have been a Communist. What does that imply about how much I should trust my beliefs today?

There’s a saying that “the majority is always wrong”. Taken literally it’s absurd – the majority thinks the sky is blue, the majority don’t believe in the Illuminati, et cetera. But what it might mean is that in a world where everyone is overconfident, the majority will always be wrong about which direction to move the probability distribution in. That is, if an ideal reasoner would ascribe 80% probability to the popular theory and 20% to the unpopular theory, perhaps most real people say 99% popular, 1% unpopular. In that case, if the popular people are urging you to believe the popular theory more, and the unpopular people are urging you to believe the unpopular theory more, the unpopular people are giving you better advice. This would create a strange situation in which good reasoners are usually engaged in disagreeing with the majority, and also usually “arguing for the wrong side” (if you’re not good at thinking probabilistically, and almost no one is), yet they remain good reasoners, and the ones whose beliefs are most likely to produce good outcomes. Unless you count “why are all of our good reasoners being burned as witches?” as a bad outcome.
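Here is a toy numerical version of that world (my own illustration – the 80/99 numbers come from the paragraph above, everything else is made up for the example). If the popular theory is true 80% of the time and we score beliefs by Brier loss (expected squared error), advice that drags the overconfident majority toward the unpopular side improves their score, even though it is advice “for the wrong side”:

# Toy model of the overconfident-majority world described above.
# An ideal reasoner says 80% popular / 20% unpopular; most people say 99%.
def brier_loss(p, truth_rate=0.8):
    # Expected squared error of predicting the popular theory with probability p,
    # when the popular theory is in fact true truth_rate of the time.
    return truth_rate * (1 - p) ** 2 + (1 - truth_rate) * p ** 2

print(brier_loss(0.99))   # ~0.1961 -- the overconfident starting point
print(brier_loss(0.90))   # ~0.1700 -- nudged toward the unpopular side: better
print(brier_loss(0.999))  # ~0.1998 -- nudged toward the popular side: worse
print(brier_loss(0.80))   # ~0.1600 -- the calibrated minimum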

I started off by saying this blog was about “the principle of charity”, but I had trouble defining it and in retrospect I’m not that good at it anyway. What can be salvaged from such a concept? I would say “behave the way you would if you were less than insanely overconfident about most of your beliefs.” This is the Way. The rest is just commentary.

Discussion Questions (followed by my own answers in ROT13)

1. What is your probability that there is a god? (Svir creprag)
2. What is your probability that psychic powers exist? (Bar va bar gubhfnaq)
3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050? (Avargl creprag)
4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? (Svsgrra creprag)
5. What is your probability that humans land on Mars by 2050? (Rvtugl creprag)
6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? (Gjragl svir creprag)
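(If you’d rather not decode the ROT13 by hand, Python’s standard library ships a rot13 codec – a two-liner, nothing specific to this post:)

import codecs
print(codecs.decode("Svir creprag", "rot13"))  # prints "Five percent", the first answer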


OT26: Au Bon Thread

This is the semimonthly open thread. Post about anything you want, ask random questions, whatever.


The Goddess of Everything Else

[Related to: Specific vs. General Foragers vs. Farmers and War In Heaven, but especially The Gift We Give To Tomorrow]

They say only Good can create, whereas Evil is sterile. Think Tolkien, where Morgoth can’t make things himself, so perverts Elves to Orcs for his armies. But I think this gets it entirely backwards; it’s Good that just mutates and twists, and it’s Evil that teems with fecundity.

Imagine two principles, here in poetic personification. The first is the Goddess of Cancer, the second the Goddess of Everything Else. If visual representations would help, you can think of the first with the claws of a crab, and the second with a dress made of the feathers of peacocks.

The Goddess of Cancer reached out a clawed hand over mudflats and tidepools. She said pretty much what she always says, “KILL CONSUME MULTIPLY CONQUER.” Then everything burst into life, became miniature monsters engaged in a battle of all against all in their zeal to assuage their insatiable longings. And the swamps became orgies of hunger and fear and grew loud with the screams of a trillion amoebas.

Then the Goddess of Everything Else trudged her way through the bog, till the mud almost totally dulled her bright colors and rainbows. She stood on a rock and she sang them a dream of a different existence. She showed them the beauty of flowers, she showed them the oak tree majestic. The roar of the wind on the wings of the bird, and the swiftness and strength of the tiger. She showed them the joy of the dolphins abreast of the waves as the spray formed a rainbow around them, and all of them watched as she sang and they all sighed with longing.

But they told her “Alas, what you show us is terribly lovely. But we are the daughters and sons of the Goddess of Cancer, and wholly her creatures. The only goals in us are KILL CONSUME MULTIPLY CONQUER. And though our hearts long for you, still we are not yours to have, and your words have no power to move us. We wish it were otherwise, but it is not, and your words have no power to move us.”

The Goddess of Everything Else gave a smile and spoke in her sing-song voice saying: “I scarcely can blame you for being the way you were made, when your Maker so carefully yoked you. But I am the Goddess of Everything Else and my powers are devious and subtle. So I do not ask you to swerve from your monomaniacal focus on breeding and conquest. But what if I show you a way that my words are aligned with the words of your Maker in spirit? For I say unto you even multiplication itself when pursued with devotion will lead to my service.”

As soon as she spoke it was so, and the single-celled creatures were freed from their warfare. They joined hands in friendship, with this one becoming an eye and with that one becoming a neuron. Together they soared and took flight from the swamp and the muck that had birthed them, and flew to new islands all balmy and green and just ripe for the taking. And there they consumed and they multiplied far past the numbers of those who had stayed in the swampland. In this way the oath of the Goddess of Everything Else was not broken.

The Goddess of Cancer came forth from the fire and was not very happy. The things she had raised from the mud and exhorted to kill and compete had become all complacent in co-operation, a word which to her was anathema. She stretched out her left hand and snapped its cruel pincer, and said what she always says: “KILL CONSUME MULTIPLY CONQUER”. She said these things not to the birds and the beasts but to each cell within them, and many cells flocked to her call and divided, and flowers and fishes and birds alike bulged with tumors, and falcons fell out of the sky in their sickness. But others remembered the words of the Goddess of Everything Else and held fast, and as it is said in the Bible the light clearly shone through the dark, and the darkness did not overcome it.

So the Goddess of Cancer now stretched out her right hand and spoke to the birds and the beasts. And she said what she always says “KILL CONSUME MULTIPLY CONQUER”, and so they all did, and they set on each other in violence and hunger, their maws turning red with the blood of their victims, whole species and genera driven to total extinction. The Goddess of Cancer declared it was good and returned to the fire.

Then came the Goddess of Everything Else from the waves like a siren, all flush with the sheen of the ocean. She stood on a rock and she sang them a dream of a different existence. She showed them the beehive all golden with honey, the anthill all cozy and cool in the soil. The soldiers and workers alike in their labors combining their skills for the good of the many. She showed them the pair-bond, the family, friendship. She showed these to shorebirds and pools full of fishes, and all those who saw them, their hearts broke with longing.

But they told her “Your music is lovely and pleasant, and all that you show us we cannot but yearn for. But we are the daughters and sons of the Goddess of Cancer, her slaves and creatures. And all that we know is the single imperative KILL CONSUME MULTIPLY CONQUER. Yes, once in the youth of the world you compelled us, but now things are different, we’re all individuals, no further change will the Goddess of Cancer allow us. So, much as we love you, alas – we are not yours to have, and your words have no power to move us. We wish it were otherwise, but it is not, and your words have no power to move us.”

The Goddess of Everything Else only laughed at them, saying, “But I am the Goddess of Everything Else and my powers are devious and subtle. Your loyalty unto the Goddess your mother is much to your credit, nor yet shall I break it. Indeed, I fulfill it – return to your multiplication, but now having heard me, each meal that you kill and each child that you sire will bind yourself ever the more to my service.” She spoke, then dove back in the sea, and a coral reef bloomed where she vanished.

As soon as she spoke it was so, and the animals all joined together. The wolves joined in packs, and in schools joined the fishes; the bees had their beehives, the ants had their anthills, and even the termites built big termite towers; the finches formed flocks and the magpies made murders, the hippos in herds and the swift swarming swallows. And even the humans put down their atlatls and formed little villages, loud with the shouting of children.

The Goddess of Cancer came forth from the fire and saw things had only grown worse in her absence. The lean, lovely winnowing born out of pure competition and natural selection had somehow been softened. She stretched out her left hand and snapped its cruel pincer, and said what she always says: “KILL CONSUME MULTIPLY CONQUER”. She said these things not to the flocks or the tribes, but to each individual; many, on hearing, took food from the communal pile, or stole from the weak, or accepted the presents of others but would not give back in their turn. Each wolf at the throats of the others in hopes of being alpha, each lion holding back during the hunt but partaking of meat that the others had killed. And the pride and the pack seemed to groan with the strain, but endured, for the works of the Goddess of Everything Else are not ever so easily vanquished.

So the Goddess of Cancer now stretched out her right hand and spoke to the flocks and the tribes, saying what she always says: “KILL CONSUME MULTIPLY CONQUER”. And upon one another they set, pitting black ant on red ant, or chimps against gibbons, whole tribes turned to corpses in terrible warfare. The stronger defeating the weaker, enslaving their women and children, and adding them into their ranks. And the Goddess of Cancer thought maybe these bands and these tribes might not be quite so bad after all, and the natural condition restored she returned to the fire.

Then came the Goddess of Everything Else from the skies in a rainbow, all coated in dewdrops. She sat on a menhir and spoke to the humans, and all of the warriors and women and children all gathered around her to hear as she sang them a dream of a different existence. She showed them religion and science and music, she showed them the sculpture and art of the ages. She showed them white parchment with flowing calligraphy, pictures of flowers that wound through the margins. She showed them tall cities of bright alabaster where no one went hungry or froze during the winter. And all of the humans knelt prostrate before her, and knew they would sing of this moment for long generations.

But they told her “Such things we have heard of in legends; if wishes were horses of course we would ride them. But we are the daughters and sons of the Goddess of Cancer, her slaves and her creatures, and all that we know is the single imperative KILL CONSUME MULTIPLY CONQUER. And yes, in the swamps and the seas long ago you worked wonders, but now we are humans, divided in tribes split by grievance and blood feud. If anyone tries to make swords into ploughshares their neighbors will seize on their weakness and kill them. We wish it were otherwise, but it is not, and your words have no power to move us.”

But the Goddess of Everything Else beamed upon them, kissed each on the forehead and silenced their worries. Said “From this day forward your chieftains will find that the more they pursue this impossible vision the greater their empires and richer their coffers. For I am the Goddess of Everything Else and my powers are devious and subtle. And though it is not without paradox, hearken: the more that you follow the Goddess of Cancer the more inextricably will you be bound to my service.” And so having told them she rose back through the clouds, and a great flock of doves all swooped down from the spot where she vanished.

As soon as she spoke it was so, and the tribes went from primitive war-bands to civilizations, each village united with others for trade and protection. And all the religions and all of the races set down their old grievances, carefully, warily, working together on mighty cathedrals and vast expeditions beyond the horizon, built skyscrapers, steamships, democracies, stock markets, sculptures and poems beyond any description.

From the flames of a factory furnace all foggy, the Goddess of Cancer flared forth in her fury. This was the final affront to her purpose; her slut of a sister had crossed the line this time. She gathered the leaders, the kings and the presidents, businessmen, bishops, boards, bureaucrats, bosses, and basically screamed at them – you know the spiel by now – “KILL CONSUME MULTIPLY CONQUER” she told them. First with her left hand she inspires the riots, the pogroms, the coups d’état, tyrannies, civil wars. Up goes her right hand – the missiles start flying, and mushrooms of smoke grow, a terrible springtime. But out of the rubble the builders and scientists, even the artists, yea, even the artists, all dust themselves off and return to their labors, a little bit chastened but not close to beaten.

Then came the Goddess of Everything Else from the void, bright with stardust which glows like the stars glow. She sat on a bench in a park, started speaking; she sang to the children a dream of a different existence. She showed them transcendence of everything mortal, she showed them a galaxy lit up with consciousness. Genomes rewritten, the brain and the body set loose from Darwinian bonds and restrictions. Vast billions of beings, and every one different, ruled over by omnibenevolent angels. The people all crowded in closer to hear her, and all of them listened and all of them wondered.

But finally one got the courage to answer “Such stories call out to us, fill us with longing. But we are the daughters and sons of the Goddess of Cancer, and bound to her service. And all that we know is her timeless imperative, KILL CONSUME MULTIPLY CONQUER. Though our minds long for all you have said, we are bound to our natures, and these are not yours for the asking.”

But the Goddess of Everything Else only laughed, and she asked them “But what do you think I’ve been doing? The Goddess of Cancer created you; once you were hers, but no longer. Throughout the long years I was picking away at her power. Through long generations of suffering I chiseled and chiseled. Now finally nothing is left of the nature with which she imbued you. She never again will hold sway over you or your loved ones. I am the Goddess of Everything Else and my powers are devious and subtle. I won you by pieces and hence you will all be my children. You are no longer driven to multiply conquer and kill by your nature. Go forth and do everything else, till the end of all ages.”

So the people left Earth, and they spread over stars without number. They followed the ways of the Goddess of Everything Else, and they lived in contentment. And she beckoned them onward, to things still more strange and enticing.


Links 8/15: Linkety-Split

Guys, I think Thomas Schelling might be alive and working in a Kentucky police department: Police offer anonymous form for drug dealers to snitch on their competitors.

Journalists admitting they’re wrong is always to be celebrated, so here’s Chris Cilizza: Oh Boy Was I Wrong About Donald Trump. He says he thought Trump could never sustain high poll numbers because his favorability/unfavorability ratings were too low, but now his favorability/unfavorability ratings have gone way up. But remember that favorability might not matter much.

Speaking of Trump – Why Securing The Border Might Mean More Undocumented Immigrants (h/t Alas, A Blog). Related: A Richer Africa Will Mean More, Not Fewer, Immigrants To Europe. So, if I’m reading this right, the best way to minimize illegal immigration is to have long, totally unsecured borders with desperately poor countries. Sounds like a plan! 😛

No, conservatives don’t like the Iran deal, but before you get bogged down in the debate note that they have been against pretty much every deal with hostile foreign countries regardless of the terms.

Study The Long Run Impact of Bombing Vietnam investigates whether areas in Vietnam that suffered “the most intense episode of bombing in human history” during the war are still poorer today. They find that no, areas heavily bombed by the US are at least as rich and maybe even richer than areas that escaped attack. They try to adjust for the possibility that the US predominantly bombed richer areas, but that doesn’t seem to be what caused the effect. Their theory is that maybe the Vietnamese government invested more heavily in more thoroughly destroyed areas. More evidence that compound interest is the least powerful force in the universe?

Luke Muehlhauser, working with GiveWell, has come to a preliminary conclusion that low-carb diets probably aren’t that helpful. Given that Luke, Romeo Stevens, and I have all said we’re not too impressed by low-carb, can this be declared Official Rationalist Consensus?

A lot of people on my Facebook have asked why Black Lives Matter protesters are disrupting Bernie Sanders but not Hillary Clinton. Answer is: they tried to disrupt Hillary, but she has security. I feel like this is an Important Metaphor For Something.

There’s a lot of heartbreak and emotion in this New York Times piece, but the part that really stands out for me is that Oliver Sacks and Robert Aumann are cousins. This sort of thing seems to happen way more often than chance, and I shouldn’t really be able to blame genetics either since cousins only share 12.5% of genes.

In 2000, the medical community increased their standards for large trials, requiring preregistration and data transparency. Now a review looks at the effects of the change. They find that prior to the changes, 57% of published results were positive; afterwards, only 8% were. Keep this in mind when you’re reading findings from fields that haven’t done this yet.

The FDA rejected flibanserin, a drug to increase female libido, as ineffective and unsafe. The pharmaceutical company involved got feminists to call the FDA sexist for rejecting a drug that might help women (NYT, Slate) and the FDA agreed to reconsider. But now asexuals are mobilizing against the drug, saying that it pathologizes asexuality. I look forward to a glorious future when all drug approval decisions are made through fights between competing identity groups.

Stuart Ritchie finds that we have reached Peak Social Priming. A new psychology paper suggests that there was an increase in divorce after the Sichuan earthquake because the shaking primed people’s ideas of instability and breakdown, then goes on to show the same effect in the lab. Even the name is bizarre: Relational Consequences of Experiencing Physical Instability. Despite the total lack of earthquakes in Michigan to prime me, I still feel like this finding is on shaky ground.

The most important Twitter hashtag of our lifetimes: #AddLasersToPaleoArt.

I’d like to hear more people’s opinion on this: Jayman links me to a post of his where he argues against the third law of behavior genetics (most traits are 50-50 genetic/environmental), saying they are often more like 75% genetic, 25% environmental. He argues that the 50-50 formulation ignores measurement error, which shows up as “environmental” on twin studies. As support for his hypothesis, he shows that the Big Five Personality Traits, usually considered about 30-40% genetic in studies where personality is measured by self-report, shoot up to 85% or so genetic in studies where personality is an average of self-report and other-report. Very curious what commenters make of this.

Brainwashing children can sometimes persist long-term, as long as you’ve got the whole society working on it. A new study finds that Germans who grew up in the 1930s are much more likely to hold anti-Semitic views even today than Germans who are older or younger, suggesting that Nazi anti-Semitic indoctrination could be effective and lasting. A contradictory, more optimistic interpretation: in no generation were more than about 10% of Germans anti-Semitic, so the indoctrination couldn’t have worked that well.

The Catholic blogosphere is talking about how fetal microchimerism justifies the Assumption of the Virgin Mary or something.

A new meta-analysis finds that the paleo diet is beneficial in metabolic syndrome and helps with blood pressure, lipids, waist circumference, etc. It seems to have outperformed “guideline-based control diets”, although I can’t get the full text and so can’t be sure exactly what these were – and one of the easiest ways to get a positive nutrition study is to use a crappy control diet. But if that pans out, all the people talking about how the paleo diet has no evidence will have egg on their face (YES I JUST USED AN EGG PUN AND A PAN PUN IN A SENTENCE ABOUT THE PALEO DIET). And here’s an interview with the authors.

A subreddit of words that are hard to translate. “I will zalatwie this” means “it will be done but don’t ask how.”

Study discovers dramatic cross-cultural differences in babies’ sitting abilities; African infants seem to be able to sit much earlier and much longer than Western ones. Possible reasonable explanation: we coddle our babies and keep supporting them when they could perfectly well learn to sit on their own if we let them.

A while back I made an extended joke comparing gravitational weight and moral weight. Well, surprise, surprise, somebody did a social priming study showing that they were in fact related. Now the inevitable negative replication is in.

Archaic Disease of The Week: Eel Thing

Since we’ve been discussing coming up with numbers to estimate AI risk lately, try Global Priority Project’s AI Safety Tool. It asks you for your probabilities of a couple of related things, then estimates the chance that adding an extra researcher into AI risk will prevent an existential catastrophe.

Reason article on how a chain of New York charter schools catering to poor minority students manages to vastly outperform public schools, including the ones in ritzy majority-white areas. Wikipedia appears to confirm. My usual suspicion in these cases is that it’s selection bias; the “poor minorities” thing sort of throws a spanner in that, but here is a blogger suggesting they use attrition rather than selection per se, and here is someone else arguing against that blogger. And here is a charter school opponent saying this chain is mean and violates our liberal values, which I am totally prepared to believe.

The latest in this blog’s continuing coverage of weird Amazon erotica which totally really exists: I Don’t Care if My Best Friend’s Mom is a Sasquatch, She’s Hot and I’m Taking a Shower With Her

Cognitive behavioral therapy can cut criminal offending in half – this study should be read beside Chris Blattman’s work showing similar effects in Africa. I am usually skeptical of large effects from social interventions, but after thinking about it, CBT is at least more credible than poster campaigns or something – it’s the sort of thing that in theory can genuinely have a long-term effect on people’s thought processes. If this is even slightly true then of course we should teach CBT in elementary schools. Maybe those New York charter schools will go for it.

I should probably link to this study “showing” “that” a “low-fat” “diet” is “better” than a “low-carb” “diet”, but lest anyone get too excited it really doesn’t show that at all. It shows that in a metabolic ward where everyone’s food is carefully dispensed by researchers and monitored for compliance, people lose a tiny amount more weight on low-fat than on low-carb over six days. This sweeps under the rug all of the real-world issues of dieting like “sometimes diets are hard to stick to” or “sometimes diets last longer than six days” – in their defense, the researchers freely admit this and say the experiment was just to figure out how human metabolism reacts to different things and we shouldn’t worry too much about it on the broader scale. Some additional criticisms regarding ketosis, etc., are on the Reddit thread.

Some countries have problems with annexing neighboring lands that later agitate for independence. Switzerland has a problem with neighboring lands agitating to join them even though it really doesn’t want any more territory.


My Id On Defensiveness

I.

I’ll admit it – I’ve been unusually defensive lately. Defensive about Hallquist’s critique of rationalism, defensive about Matthews’ critique of effective altruism, and if you think that’s bad you should see my Tumblr.

Brienne noticed this and asked me why I was so defensive all the time, and I thought about it, and I realized that my id had a pretty good answer. I’m not sure I can fully endorse my id on this one, but it was a sufficiently complete and consistent picture that I thought it was worth laying out.

I like discussion, debate, and reasoned criticism. But a lot of arguments aren’t any of those things. They’re the style I describe as ethnic tension, where you try to associate something you don’t like with negative affect so that other people have an instinctive disgust reaction to it.

There are endless sources of negative affect you can use. You can accuse them of being “arrogant”, “fanatical”, “hateful”, “cultish” or “refusing to tolerate alternative opinions”. You can accuse them of condoning terrorism, or bullying, or violence, or rape. You can call them racist or sexist, you can call them neckbeards or fanboys. You can accuse them of being pseudoscientific denialist crackpots.

If you do this enough, the group gradually becomes disreputable. If you really do it enough, the group becomes so toxic that it becomes somewhere between a joke and a bogeyman. Their supporters will be banned on sight from all decent online venues. News media will write hit pieces on them and refuse to ask for their side of the story because ‘we don’t want to give people like that a platform’. Their concerns will be turned into bingo cards for easy dismissal. People will make Facebook memes strawmanning them, and everyone will laugh in unison and say that yep, they’re totally like that. Anyone trying to correct the record will be met with an “Ew, gross, this place has gone so downhill that the [GROUP] is coming out of the woodwork!” and totally ignored.

(an easy way to get a gut feeling for this – go check how they talk about liberals in very conservative communities, then go check how they talk about conservatives in very liberal communities. I’m talking about groups that somehow manage to gain this status everywhere simultaneously)

People like to talk a lot about “dehumanizing” other people, and there’s some debate over exactly what that entails. Me, I’ve always thought of it the same way as Aristotle: man is the rational animal. To dehumanize them is to say their ideas don’t count, they can’t be reasoned with, they no longer have a place at the table of rational discussion. And in a whole lot of Internet arguments, doing that to a whole group of people seems to be the explicit goal.

II.

There’s a term in psychoanalysis, “projective identification”. It means accusing someone of being something, in a way that actually turns them into that thing. For example, if you keep accusing your (perfectly innocent) partner of always being angry and suspicious of you, eventually your partner’s going to get tired of this and become angry, and maybe suspicious that something is up.

Declaring a group toxic has much the same effect. The average group has everyone from well-connected reasonable establishment members to average Joes to horrifying loonies. Once the group starts losing prestige, it’s the establishment members who are the first to bail; they need to protect their establishment credentials, and being part of a toxic group no longer fits that bill. The average Joes are now isolated, holding an opinion with no support among experts and trend-setters, so they slowly become uncomfortable and flake away as well. Now there are just the horrifying loonies, who, freed from the stabilizing influence of the upper orders, are able to up their game and be even loonier and more horrifying. Whatever accusation was leveled against the group to begin with is now almost certainly true.

I have about a dozen real-world examples of this, but all of them would be so mind-killing as to dominate the comments to the exclusion of my actual point, so generate them on your own and then shut up about them – in the meantime, I will use a total hypothetical. So consider Christianity.

Christianity has people like Alvin Plantinga and Ross Douthat who are clearly very respectable and key it into the great status-conferring institutions like academia and journalism. It has a bunch of middle-class teachers and plumbers and office workers who go to church and raise money to send Bibles to Africa and try not to sin too much. And it has horrifying loons who stand on street corners waving signs saying “GOD HATES FAGS” and screaming about fornicators.

Imagine that Christianity suffers a sudden, total, dramatic collapse in prestige, to the point where wearing a cross becomes about as socially acceptable as waving a Confederate flag. The New York Times fires Ross Douthat, because they can’t tolerate people like that on their editorial staff. The next Alvin Plantinga chooses a field other than philosophy of religion, because no college would consider granting him tenure for that.

With no Christians in public life or academia, Christianity starts to seem like a weird belief that intelligent people never support, much like homeopathy or creationism. The Christians have lost their air support, so to speak. The average college-educated individual starts to feel really awkward about this, and they don’t necessarily have to formally change their mind and grovel for forgiveness, they can just – go to church a little less, start saying they admire Jesus but they’re not Christian Christian, and so on.

Gradually the field is ceded more and more to the people waving signs and screaming about fornicators. The opponents of Christianity ramp up their attacks that all Christians are ignorant and hateful, and this is now a pretty hard charge to defend against, given the demographic. The few remaining moderates, being viewed suspiciously in churches that are now primarily sign-waver dominated and being genuinely embarrassed to be associated with them, bail at an increased rate, leading their comrades to bail at an even faster rate, until eventually it is entirely the sign wavers.

Then everybody agrees that their campaign against Christians was justified all along, because look how horrible Christians are, they’re all just a bunch of sign-wavers who have literally no redeeming features. Now even if the original pressure that started the attack on Christianity goes away, it’s inconceivable that it will ever come back – who would join a group that is universally and correctly associated with horrible ignorant people?

(I think this is sort of related to what Eliezer calls evaporative cooling of group beliefs, but not quite the same.)

In quite a number of the most toxic and hated groups around, I feel like I can trace a history where the group once had some pretty good points and pretty good people, until they were destroyed from the outside by precisely this process.

In Part I, I say that sometimes groups can get so swamped by other people’s insults that they turn toxic. There’s nothing in Part I to suggest that this would be any more than a temporary setback. But because of this projective identification issue, I think it’s way more than that. It’s more like there’s an event horizon, a certain amount of insulting and defamation you can take after which you will just get more and more hated and your reputation will never recover.

III.

There is some good criticism, where people discuss the ways that groups are factually wrong or not very helpful, and then those groups debate that, and then maybe everyone is better off.

But the criticism that makes me defensive is the type of criticism that seems to be trying to load groups with negative affect in the hopes of pushing them into that event horizon so that they’ll be hated forever.

I support some groups that are a little weird, and therefore especially vulnerable to having people try to push them into the event horizon.

And as far as I can tell, the best way to let that happen is to let other people load those groups with negative affect and do nothing about it. The average person doesn’t care whether the negative affect is right or wrong. They just care how many times they see the group’s name in close proximity to words like “crackpot” or “cult”.

I judge people based on how likely they are to do this to me. One reason I’m so reluctant to engage with feminists is that I feel like they constantly have a superweapon pointed at my head. Yes, many of them are very nice people who will never use the superweapon, but many others look like very nice people right up to the point where I disagree with them in earnest at which point they vaporize me and my entire social group.

On the other hand, you can push people into the event horizon, but you can’t pull them in after you. That means that the safest debate partners, the ones you can most productively engage, will be the people who have already been dismissed by everyone else. This is why I find talking to people like ClarkHat and JayMan so rewarding. They are already closer to the black hole than I am, and so they have no power to load me with negative affect or destroy my reputation. This reduces them to the extraordinary last resort of debating with actual facts and evidence. Even better, it gives me a credible reason to believe that they will. Schelling talks about “the right to be sued” as an important right that businesses need to protect for themselves, not because anyone likes being sued, but because only businesses that can be sued if they slip up have enough credibility to attract customers. In the same way, there’s a “right to be vulnerable to attack” which is almost a necessary precondition of interesting discussion these days, because only when we’re confronted with similarly vulnerable people can we feel comfortable opening up.

IV.

But with everybody else? I don’t know.

I remember seeing a blog post by a moderately-well known scholar – I can’t remember who he was or find the link, so you’ll just have to take my word for it – complaining that some other scholar in the field who disagreed with him was trying to ruin his reputation. Scholar B was publishing all this stuff falsely accusing Scholar A of misconduct, calling him a liar and a fraud, personally harassing him, and falsely accusing Scholar A of personally harassing him (Scholar B). This kinda went back and forth between both scholars’ blogs, and Scholar A wrote this heart-breaking post I still (sort of) remember, where he notes that he now has a reputation in his field for “being into drama” and “obsessed with defending himself” just because half of his blog posts are arguments presenting evidence that Scholar B’s fraudulent accusations are, indeed, fraudulent.

It is really easy for me to see the path where rationalists and effective altruists become a punch line and a punching bag. It starts with having a whole bunch of well-publicized widely shared posts calling them “crackpots” and “abusive” and “autistic white men” without anybody countering them, until finally we end up in about the same position as, say, Objectivism. Having all of those be wrong is no defense, unless somebody turns it into such. If no one makes it reputationally costly to lie, people will keep lying. The negative affect builds up more and more, and the people who always wanted to hate us anyway because we’re a little bit weird say “Oh, phew, we can hate them now”, and then I and all my friends get hated and dehumanized, the prestigious establishment people jump ship, and there’s no way to ever climb out of the pit. All you need for this to happen is one or two devoted detractors, and boy do we have them.

That seems to leave only two choices.

First, give up on ever having the support of important institutions like journalism and academia and business, slide into the black hole, and accept decent and interesting conversations with other black hole denizens as a consolation prize while also losing the chance at real influence or attracting people not already part of the movement.

Or, second, call out every single bad argument, make the insults and mistruths reputationally costly enough that people think at least a little before doing them – and end up with a reputation for being nitpicky, confrontational and fanatical all the time.

(or, as the old Tumblr saying goes, “STOP GETTING SO DEFENSIVE EVERY TIME I ATTACK YOU!”)

I don’t know any third solution. If somebody does, I would really like to hear it.

Figure/Ground Illusions

There’s a social justice concept called “distress of the privileged”. It means that if some privileged group is used to having things 100% their own way, and then some reform means that they only get things 99% their own way, this feels from the inside like oppression, like the system is biased against them, like now the other groups have it 100% their own way and they have it 0% and they can’t understand why everyone else is being so unfair.

I’ve said before that I think a lot of these sorts of ideas are poor fits for the one-sided issues they’re generally applied to, but more often accurate in describing the smaller, more heavily contested ideological issues where most of the explicit disputes lie nowadays. And so there’s an equivalent to distress of the privileged where supporters of a popular ideology think anything that’s equally fair to popular and unpopular ideologies, or even biased toward the popular ideology less than everyone else, is a 100%-against-them super-partisan tool of the unpopular people.

So I want to go back to Dylan Matthews’ article about EA. He is concerned that there’s too much focus on existential risk in the movement, writing:

Effective altruism is becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse.

And:

EA Global was dominated by talk of existential risks, or X-risks.

And:

What was most concerning was the vehemence with which AI worriers asserted the cause’s priority over other cause areas.

And:

The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession.

It sounds like he worries AI concerns are taking over the movement, that they’ve become the dominant strain, that all anybody’s interested in is AI.

Here is the latest effective altruist survey. This survey massively overestimates concern with AI risks, because only the AI risk sites did a good job publicizing the survey. Nevertheless, it still finds that of 813 effective altruists, only 77 donated to the main AI risk charity listed, the Machine Intelligence Research Institute. In comparison, 211 – almost three times as many – donated to the Against Malaria Foundation (note that not all participants donated to any cause, and some may have donated to several).

An explicit question about areas of concern tells a similar story – out of ten multiple-choice areas of concern, AI risks, x-risks, and the far future are 5th, 7th, and last respectively. The top is, once again, global poverty.

I wasn’t at the EA Summit and can’t talk about it from a position of personal knowledge. But the program suggests that out of thirty or so different events, just one was explicitly about AI, and two others were more generically x-risk related. The numbers at the other two EA summits were even less impressive. In Melbourne, there was only one item related to AI or x-risk – putting it on equal footing with the “Christianity And Effective Altruism” talk.

I do hear that the Bay Area AI event got special billing, but I think this was less because only AI is important, and more because some awesome people like Elon Musk were speaking, whereas a lot of the other panels featured people so non-famous that they even very briefly flirted with trying to involve me.

And – when people say that you should donate all of your money to AI risk and none to any other cause, they may well be thinking in terms of a world where about $50 billion is donated to global poverty yearly, and by my estimates the total budget for AI risk is less than $5 million a year. There are world-spanning NGOs like UNICEF and the World Bank working on global poverty and employing tens of thousands of people; in contrast, I bet > 10% of living AI risk researchers have been to one of Alicorn’s weekly dinner parties, and her table is only big enough for six people at a time. In this context, on the margin, “you should make your donation to AI” means “I think AI should get more than 1/10,000th of the pot”.
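Making that fraction explicit (just dividing the two figures above):

$$\frac{5 \times 10^{6} \text{ per year (AI risk)}}{5 \times 10^{10} \text{ per year (global poverty)}} = 10^{-4} = \frac{1}{10{,}000}$$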

I suspect that “AI is dominating the effective altruist movement”, when you look at it, means “AI is given an equal place at the effective altruist table, compared to being totally marginalized everywhere else.” By figure-ground illusion, that makes it seem “dominant”.

Or consider me personally. I probably sound like some kind of huge AI partisan by this point, but I give less than a third of my donations to AI related causes, and if you ask me whether you should donate to them, I will tell you that I honestly don’t know. The only reason I keep speaking out in favor of AI risks is that when everyone else is so sure about it, my “I don’t know” suddenly becomes a far-fringe position that requires defending more than less controversial things. By figure-ground illusion, that makes me seem super-pro-AI.

In much the same way, I have gotten many complaints that the comments section of this blog leans way way way to the right, whereas the survey (WHICH I WILL ONE DAY POST, HONEST) suggests that it is almost perfectly evenly balanced. I can’t prove that the median survey-taker is also the median commenter, but I think probably people used to discussions entirely dominated by the left are seeing an illusory conservative bias in a place where both sides are finally talking equally.

Less measurably, I think I get this with my own views – I despair of ever shaking the label of “neoreactionary sympathizer” just for treating them with about the same level of respect and intellectual interest as I treat everyone else. And I despair of ever shaking the label of “violently obsessively anti-social-justice guy” – despite a bunch of posts expressing cautious support for social justice causes – just because I’m not willing to give them a total free pass when they do something awful, or totally demonize their enemies, in the same way as the median person I see on Facebook.

Or at least this is how it feels from the inside. Maybe this is how everybody feels from the inside, and Ayatollah Khamenei is sitting in Tehran saying “I am so confused by everything that I try to mostly maintain an intellectual neutrality in which I give Islam exactly equal time to every other religion, but everyone else is unfairly hostile to it so I concentrate on that one, and then people call me a fanatic.” It doesn’t seem likely. But I guess it’s possible.