Slate Star Codex

In a mad world, all blogging is psychiatry blogging

Mysticism and Pattern-Matching

[Epistemic status: Total conjecture.]

One of the things that got me interested in psychiatry was the sheer weirdness of the human brain’s failure modes. We all hear that the brain is like a computer, but when a computer breaks, the screen goes black or it freezes or something. It doesn’t hear voices telling it that it’s Jesus, or start seeing tiny men running around on the floor. But for some reason, when the the human brain breaks, it may do exactly that. Why?

Psychiatry classes never just tell you the answer to this question, but reading between the lines I think it has something to do with top-down processing and pattern matching.

Bottom-up processing is when you go from basic elements to more complex ideas – for example, when you see the three letters C, A, and T in a row, you might combine them to get the the word CAT. Top-down processing is when more complex ideas change the way you interpret basic elements. For example, in the first picture above, the middle letters in both words are the same. We read the first as H, because the image as a gestalt suggests the word “THE” and the word “THE” suggests an H in the middle. We read the second as A, because the image as a gestalt suggests the word “CAT” and the word “CAT” has an A in the middle. Our big-picture idea has changed the way we view the smaller elements composing it.

The same is true of the second image. We recognize the phrase “PARIS IN THE SPRINGTIME”, and so we assume that’s what the sign is trying to show us. In fact, the sign doubles the word “the”. But since this is bizarre and not something that makes sense in the gestalt, we assume this is a mistake and gloss right over it. We do this very, very easily – how many times have I duplicated the word “the” in this essay already?

The third image is related to this tendency. To most people, it looks formless. Even once you hear that it’s an old black-and-white photograph of a cow’s head, it might still require a bit of staring before you catch on. But once you see the cow, the cow is obvious. It becomes impossible to see it as formless, impossible to see it as anything else. Having given yourself a top-down pattern to work from, the pattern automatically organizes the visual stimuli and makes sense of them.

This provides a possible explanation for hallucinations. Think of top-down processing as taking noise and organizing it to fit a pattern. Normally, you’ll only fit it to the patterns that are actually there. But if your pattern-matching system is broken, you’ll fit it to patterns that aren’t in the data at all.

The best example of this is Google Deep Dream:

I don’t know much about neural networks, so I may not be getting this entirely right, but as far as I understand it, they trained a neural network on some stimulus like a dog. This was for research in machine vision; they wanted the net to be able to recognize dogs when it saw them; to pattern-match potentially noisy images of dogs into its Platonic ideal of a dog. But if you turn the pattern-matching up, it will just start seeing dogs everywhere there’s even the slightest amount of noise that resembles a dog at all. You only matched the sign above to “PARIS IN THE SPRINGTIME” because it was almost exactly like that phrase; if we stick your pattern-matching software into overdrive, maybe every sentence would start looking like more meaningful alternatives. Eevn sceeentns wtih aolsmt all the lerttes rergaearnd wulod naelry ianslntty sanp itno pacle. Turn it all the way up, and maybe you could make every sentence look like “PARIS IN THE SPRINGTIME”. Or something.
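I don’t know the details of how Deep Dream actually works, but the “pattern-matching in overdrive” idea is easy to sketch as a toy program. Everything here is invented for illustration – a fake “dog” template, a stream of pure noise, and a threshold standing in for how eagerly the matcher matches:

```python
import random

random.seed(0)

# A made-up "dog" template, and a thousand windows of pure noise.
template = [1, 0, 1, 1, 0]
noise = [[random.random() for _ in range(5)] for _ in range(1000)]

def match_score(window, template):
    # Crude similarity: 1.0 means a perfect match, 0.0 a total mismatch.
    return sum(1 - abs(w - t) for w, t in zip(window, template)) / len(template)

def count_dogs(threshold):
    # How many windows of noise get pattern-matched into a "dog"?
    return sum(1 for w in noise if match_score(w, template) >= threshold)

print(count_dogs(0.9))  # strict matcher: almost no spurious dogs
print(count_dogs(0.5))  # matcher in overdrive: dogs everywhere
```

Lowering the threshold is the “turning the pattern-matching up” step: the data never changes, only how eagerly it gets organized into the pattern.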

So hallucinations are when your top-down processing/pattern-matching ability becomes so dysfunctional that it can generate people and objects out of random visual noise. Why it chooses some people and objects over others I don’t know, but it’s hardly surprising – it does the same thing every night in your dreams.

Many of the same people who have hallucinations also have paranoia. Paranoia seems to me to be overfunctioning of social pattern-matching. When Deep Dream sees the tiniest hint of a line here, a slight dark spot there, it pattern-matches it into an entire dog. When a paranoiac hears a stray word here, or sees a sideways glance there, they turn it into this vast social edifice of connected plots. Every new thing that happens is fit effortlessly into the same pattern. When their psychiatrist says they’re crazy, that gets fit into the pattern too – maybe the psychiatrist is a tool of the conspiracy, trying to confuse them into compliance.

So where does the mysticism come in?

I notice that the same people who have hallucinations also have mystical experiences. By mystical experiences, I don’t just mean “they see angels” – in that case, the relationship to hallucination would be a tautology. I mean they feel a sense of sudden understanding of and connection with the universe. I know at least three groups that do this: druggies, meditators, and prophets. The druggies report feelings of total understanding on their drugs, and also report hallucinations. The meditators occasionally achieve enlightenment, but look at any text about meditation and you find mentions of visions and hallucinations experienced during the practice. The voices heard by the prophets are too obvious to mention.

One well-known way of bringing on such experiences is to abuse your pattern-matching faculty. The Chicken Qabalah of Rabbi Lamed Ben Clifford (not really recommended) manages to link a pretty boring Bible verse to the letter yud, the creativity of God, the essence of existence, the sun, the phallus, the plane of Malkuth, and the number 496, then explains:

Like a mountain goat leaping ecstatically from crag to crag, one thought springs into another, and another, ad infinitum. You can continue, almost forever, connecting things that you never thought were connected. Sooner or later something’s going to snap and you will overcome the fundamental defect in your powers of perception.


Was that the message Ezekiel was trying to convey? Probably not. But who cares! Whatever it was the old boy was originally trying to say shrinks to insignificance. It is far more important to my spiritual enlightenment that my mind was forced to churn at breakneck speed to put all of this together, and then open itself up to the infinite possibilities of meaning. Look hard enough at anything and eventually you will see everything! It doesn’t even have to make very much sense what you connect to what. It’s all ultimately connected!

This philosophy, which I associate both with kabbalah and with the more modern Western hermetic tradition, says that learning a set of extremely complicated correspondences is an important step toward gaining enlightenment. See for example this site, which helpfully relates the sephirah Netzach to the planet Venus, the number 7, the emerald, the lynx, the rose, cannabis, arsenic, copper, fire, the solar plexus chakra, the archangel Haniel, the Egyptian goddess Hathor, the concepts of love and victory, et cetera, et cetera. You’re supposed to be able to use this to interpret things – for example, if you have a dream about a lynx, it could correspond to anything else in the system – but it looks like it would quickly get unwieldy. And other sources will give completely different systems of correspondences, and nobody gets too upset over it – in fact, some sources will happily encourage you to come up with your own correspondences instead, as long as you stick to them. It seems like the goal is less “remember that it’s extremely important that emeralds correspond to lynxes in reality” and more “have some system, any system, of interesting correspondences in mind that you can apply to everything you come across”.

Nor does it especially matter what you’re interpreting. The traditional things to interpret are mysterious things like dreams, or the Bible, but Crowley famously performs a mystical analysis of Mother Goose nursery rhymes (see Interlude here). The important factor seems to be less about there being sacred truth in the object being analyzed, and more about the process of performing the analysis.

(Zen koans are a little different, but also sort of involve torturing a pattern-finding ability for apparently no reason.)

So to skip to the point: I think all of this is about strengthening the pattern-matching faculty. You’re exercising it uselessly but impressively, the same way as the body-builder who lifts the same weight a thousand times until their arms are the size of tree trunks. Once the pattern-matching faculty is way way way overactive, it (spuriously) hallucinates a top-down abstract pattern in the whole universe. This is the experience that mystics describe as “everything is connected” or “all is one”, or “everything makes sense” or “everything in the universe is good and there for a purpose”. The discovery of a beautiful all-encompassing pattern in the universe is understandably associated with “seeing God”.

Religious scholar William James once experimented with nitrous oxide and reached a state where he felt he had total comprehension of the universe. According to a story which I can’t verify, he became infuriated at losing the thread of understanding once the chemical wore off, so he decided to take notes during the experience: write down the secrets of the universe then, and reread them once he was sober. The experiment completed, he picked up the notepad in feverish excitement, only to find that he had written OVERALL THERE IS A SMELL OF FRIED ONIONS.

Imagine one of those Google robots pointing at an empty patch of sky and saying “No, look, seriously, there’s a dog right there. Right there! How are you not seeing this?” Things that make perfect sense in the context of a state of overactive pattern-matching look meaningless to a pattern-matching faculty operating normally. At best, you can sort of see the lines of what seemed so clear before (“Yeah, I can see that that stain on the wall is vaguely dog-shaped.”) This matches the stories I’ve heard of people who have some mystical experience but then can’t maintain or recapture it.

I think other methods of inducing weird states of consciousness, like drugs and meditation, probably do the same thing by some roundabout route. Meditation seems like reducing stimuli, which is known to lead to hallucinations in eg sensory deprivation tanks or solitary confinement cells in jail. I think the general principle is that a low level of external stimuli makes your brain adjust its threshold for stimulus detection up until anything including random noise satisfies the threshold. As for drugs, there’s lots of reasons to think that the neurotransmission changes they create will alter the brain’s pattern processing strategies.

Things this hypothesis doesn’t explain: why mystical experiences are linked with a feeling of no time, no space, and no self; why prayer or extreme devotion seems to induce them (eg bhakti yoga), and why they can be so beneficial – that is, why do people with mystical experiences become happier and better adjusted? Maybe the feeling of the world making sense is naturally a pleasant and helpful one. Certainly the opposite can be very stressful!

Posted in Uncategorized | Tagged | 212 Comments

Probabilities Without Models

[Epistemic status: Not original to me. Also, I might be getting it wrong.]

A lot of responses to my Friday post on overconfidence centered around this idea that we shouldn’t, we can’t, use probability at all in the absence of a well-defined model. The best we can do is say that we don’t know and have no way to find out. I don’t buy this:

“Mr. President, NASA has sent me to warn you that a saucer-shaped craft about twenty meters in diameter has just crossed the orbit of the moon. It’s expected to touch down in the western United States within twenty-four hours. What should we do?”

“How should I know? I have no model of possible outcomes.”

“Should we put the military on alert?”

“Maybe. Maybe not. Putting the military on alert might help. Or it might hurt. We have literally no way of knowing.”

“Maybe we should send a team of linguists and scientists to the presumptive landing site?”

“What part of ‘no model’ do you not understand? Alien first contact is such an inherently unpredictable enterprise that even speculating about whether linguists should be present is pretending to a certainty which we do not and cannot possess.”

“Mr. President, I’ve got our Israeli allies on the phone. They say they’re going to shoot a missile at the craft because ‘it freaks them out’. Should I tell them to hold off?”

“No. We have no way of predicting whether firing a missile is a good or bad idea. We just don’t know.”

In real life, the President would, despite the situation being totally novel and without any plausible statistical model, probably make some decision or another, like “yes, put the military on alert”. And this implies a probability judgment. The reason the President will put the military on alert, but not, say, put banana plantations on alert, is that in his opinion the aliens are more likely to attack than to ask for bananas.

Fine, say the doubters, but surely the sorts of probability judgments we make without models are only the most coarse-grained ones, along the lines of “some reasonable chance aliens will attack, no reasonable chance they will want bananas.” Where “reasonable chance” can mean anything from 1% to 99%, and “no reasonable chance” means something less than that.

But consider another situation: imagine you are a director of the National Science Foundation (or a venture capitalist, or an effective altruist) evaluating two proposals that both want the same grant. Proposal A is by a group with a long history of moderate competence who think they can improve the efficiency of solar panels by a few percent; their plan is a straightforward application of existing technology and almost guaranteed to work and create a billion dollars in value. Proposal B is by a group of starry-eyed idealists who seem very smart but have no proven track record; they say they have an idea for a revolutionary new kind of super-efficient desalinization technology; if it works it will completely solve the world’s water crisis and produce a trillion dollars in value. Your organization is risk-neutral to a totally implausible degree. What do you do?

Well, it seems to me that you choose Proposal B if you think it has at least a 1/1000 chance of working out; otherwise, you choose Proposal A. But this requires at least attempting to estimate probabilities in the neighborhood of 1/1000 without a model. Crucially, there’s no way to avoid this. If you shrug and take Proposal A because you don’t feel like you can assess proposal B adequately, that’s making a choice. If you shrug and take Proposal B because what the hell, that’s also making a choice. If you are so angry at being placed in this situation that you refuse to choose either A or B and so pass up both a billion and a trillion dollars, that’s a choice too. Just a stupid one.
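If you want the break-even arithmetic made explicit, here’s a sketch. The dollar figures are the ones from the thought experiment; `p_b`, Proposal B’s chance of working, is exactly the number you have to estimate without a model:

```python
# The grant decision under (implausible) risk-neutrality.
value_a = 1e9   # Proposal A: near-certain $1 billion
value_b = 1e12  # Proposal B: long-shot $1 trillion

def better_choice(p_b, p_a=1.0):
    """Pick the proposal with the higher expected value."""
    return "B" if p_b * value_b > p_a * value_a else "A"

print(better_choice(1 / 500))   # above the 1/1000 break-even -> "B"
print(better_choice(1 / 5000))  # below it -> "A"
```

The break-even point falls out of the ratio of the two values: $10^9 / 10^{12} = 1/1000$.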

Nor can you cry “Pascal’s Mugging!” in order to escape the situation. I think this defense is overused and underspecified, but at the very least, it doesn’t seem like it can apply in places where the improbable option is likely to come up over your own lifespan. So: imagine that your organization actually reviews about a hundred of these proposals a year. In fact, it’s competing with a bunch of other organizations that also review a hundred or so such proposals a year, and whoever’s projects make the most money gains lots of status and new funding. Now it’s totally plausible that, over the course of ten years, it might be a better strategy to invest in things that have a one in a thousand chance of working out. Indeed, maybe you can see the organizations that do this outperforming the organizations that don’t. The question really does come down to your judgment: are Project B’s odds of success greater or less than 1/1000?

Nor is this a crazy hypothetical situation. A bunch of the questions we have to deal with come down to these kinds of decisions made without models. Like – should I invest for retirement, even though the world might be destroyed by the time I retire? Should I support the Libertarian candidate for president, even though there’s never been a libertarian-run society before and I can’t know how it will turn out? Should I start learning Chinese because China will rule the world over the next century? These questions are no easier to model than ones about cryonics or AI, but they’re questions we all face.

The last thing the doubters might say is “Fine, we have to face questions that can be treated as questions of probability. But we should avoid treating them as questions of probability anyway. Instead of asking ourselves ‘is the probability that the desalinization project will work greater or less than 1/1000′, we should ask ‘do I feel good about investing this money in the desalinization plant?’ and trust our gut feelings.”

There is some truth to this. My medical school thesis was on the probabilistic judgments of doctors, and they’re pretty bad. Doctors are just extraordinarily overconfident in their own diagnoses; a study by Bushyhead, who despite his name is not a squirrel, found that when doctors were 80% certain that patients had pneumonia, only 20% would turn out to have the disease. On the other hand, the doctors still did the right thing in almost every case, operating off of algorithms and heuristics that never mentioned probability. The conclusion was that as long as you don’t force doctors to think about what they’re doing in mathematical terms, everything goes fine – something I’ve brought up before in the context of the Bayes mammogram problem. Maybe this generalizes. Maybe people are terrible at coming up with probabilities for things like investing in desalinization plants, but will generally make the right choice.

But refusing to frame choices in terms of probabilities also takes away a lot of your options. If you use probabilities, you can check your accuracy – the foundation director might notice that of a thousand projects she had estimated as having 1/1000 probabilities, actually about 20 succeeded, meaning that she’s overconfident. You can do other things. You can compare people’s success rates. You can do arithmetic on them (“if both these projects have 1/1000 probability, what is the chance they both succeed simultaneously?”), you can open prediction markets about them.
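In code, the director’s calibration check is a few lines of arithmetic (the 20-successes-out-of-1000 figures are the hypothetical ones from this paragraph):

```python
# Hypothetical track record: 1000 funded projects, each estimated
# at a 1/1000 chance of success, of which 20 actually succeeded.
claimed_p = 1 / 1000
n_projects = 1000
n_successes = 20

expected_successes = claimed_p * n_projects  # she predicted ~1 success
observed_rate = n_successes / n_projects     # reality: 2%

# She assigned 0.1% to events that happen about 2% of the time:
overconfidence_factor = observed_rate / claimed_p
print(overconfidence_factor)  # about 20
```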

Most important, you can notice and challenge overconfidence when it happens. I said last post that when people say there’s only a one in a million chance of something like AI risk, they are being stupendously overconfident. If people just very quietly act as if there’s a one in a million chance of such risk, without ever saying it, then no one will ever be able to call them on it.

I don’t want to say I’m completely attached to using probability here in exactly the normal way. But all of the alternatives I’ve heard fall apart when you’ve got to make an actual real-world choice, like sending the military out to deal with the aliens or not.

[EDIT: Why regressing to meta-probabilities just gives you more reasons to worry about overconfidence]

[EDIT-2: “I don’t know”]

[EDIT-3: A lot of debate over what does or doesn’t count as a “model” in this case. Some people seem to be using a weak definition like “any knowledge whatsoever about the process involved”. Others seem to want a strong definition like “enough understanding to place this event within a context of similar past events such that a numerical probability can be easily extracted by math alone, like the model where each flip of a two-sided coin has a 50% chance of landing heads”. Without wanting to get into this, suffice it to say that any definition in which the questions above have “models” is one where AI risk also has a model.]

Posted in Uncategorized | Tagged | 443 Comments

On Overconfidence

[Epistemic status: This is basic stuff to anyone who has read the Sequences, but since many readers here haven’t I hope it is not too annoying to regurgitate it. Also, ironically, I’m not actually that sure of my thesis, which I guess means I’m extra-sure of my thesis.]


A couple of days ago, the Global Priorities Project came out with a calculator that allowed you to fill in your own numbers to estimate how concerned you should be with AI risk. One question asked how likely you thought it was that there would be dangerous superintelligences within a century, offering a drop-down menu with probabilities ranging from 90% to 0.01%. And so people objected: there should be an option to put in only a one in a million chance of AI risk! One in a billion! One in a…

For example, a commenter writes: “the best (worst) part: the probability of AI risk is selected from a drop down list where the lowest probability available is 0.01%!! Are you kidding me??” and then goes on to say his estimate of the probability of human-level (not superintelligent!) AI this century is “very very low, maybe 1 in a million or less”. Several people on Facebook and Tumblr say the same thing – 1/10,000 chance just doesn’t represent how sure they are that there’s no risk from AI, they want one in a million or more.

Last week, I mentioned that Dylan Matthews’ suggestion that maybe there was only a 10^-67 chance you could affect AI risk was stupendously overconfident. That number is many orders of magnitude lower than the chance, per second, of getting simultaneously hit by a tornado, meteor, and al-Qaeda bomb, while also winning the lottery twice in a row. Unless you’re comfortable with that level of improbability, you should stop using numbers like 10^-67.

But maybe it sounds like “one in a million” is much safer. That’s only 10^-6, after all, way below the tornado-meteor-terrorist-double-lottery range…

So let’s talk about overconfidence.

Nearly everyone is very very very overconfident. We know this from experiments where people answer true/false trivia questions, then are asked to state how confident they are in their answer. If people’s confidence was well-calibrated, someone who said they were 99% confident (ie only 1% chance they’re wrong) would get the question wrong only 1% of the time. In fact, people who say they are 99% confident get the question wrong about 20% of the time.

It gets worse. People who say there’s only a 1 in 100,000 chance they’re wrong? Wrong 15% of the time. One in a million? Wrong 5% of the time. They’re not just overconfident, they are fifty thousand times as confident as they should be.
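The “fifty thousand times” figure is just the ratio of the claimed error rate to the observed one:

```python
# Claimed: "only a 1 in a million chance I'm wrong."
# Observed in calibration studies: wrong about 5% of the time.
claimed_error = 1 / 1_000_000
actual_error = 0.05

print(round(actual_error / claimed_error))  # 50000
```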

This is not just a methodological issue. Test confidence in some other clever way, and you get the same picture. For example, one experiment asked people how many numbers there were in the Boston phone book. They were instructed to set a range, such that the true number would be in their range 98% of the time (ie they would only be wrong 2% of the time). In fact, they were wrong 40% of the time. Twenty times too confident! What do you want to bet that if they’d been asked for a range so wide there was only a one in a million chance they’d be wrong, at least five percent of them would have bungled it?

Yet some people think they can predict the future course of AI with one in a million accuracy!

Imagine if every time you said you were sure of something to the level of 999,999 in a million, and you were right, the Probability Gods gave you a dollar. Every time you said this and you were wrong, you lost $1 million (if you don’t have the cash on hand, the Probability Gods offer a generous payment plan at low interest). You might feel like getting some free cash for the parking meter by uttering statements like “The sun will rise in the east tomorrow” or “I won’t get hit by a meteorite” without much risk. But would you feel comfortable predicting the course of AI over the next century? What if you noticed that most other people only managed to win $20 before they slipped up? Remember, if you make even one false statement under such a deal, all the true statements you’ve made over years and years of perfect accuracy won’t be worth the hole you’ve dug yourself into.

Or – let me give you another intuition pump about how hard this is. Bayesian and frequentist statistics are pretty much the same thing [citation needed] – when I say “50% chance this coin will land heads”, that’s the same as saying “I expect it to land heads about one out of every two times.” By the same token, “There’s only a one in a million chance that I’m wrong about this” is the same as “I expect to be wrong on only one of a million statements like this that I make.”

What do a million statements look like? Suppose I can fit twenty-five statements onto the page of an average-sized book. I start writing my predictions about scientific and technological progress in the next century. “I predict there will not be superintelligent AI.” “I predict there will be no simple geoengineering fix for global warming.” “I predict no one will prove P = NP.” War and Peace, one of the longest books ever written, is about 1500 pages. After you write enough of these statements to fill a War and Peace sized book, you’ve made 37,500. You would need to write about 27 War and Peace sized books – enough to fill up a good-sized bookshelf – to have a million statements.
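The bookshelf arithmetic, for anyone who wants to check it:

```python
# How many War-and-Peace-sized books does a million statements fill?
statements_per_page = 25
pages_per_book = 1500  # roughly War and Peace

statements_per_book = statements_per_page * pages_per_book
print(statements_per_book)              # 37500
print(1_000_000 / statements_per_book)  # ~26.7, i.e. about 27 books
```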

So, if you want to be confident to the level of one-in-a-million that there won’t be superintelligent AI next century, you need to believe that you can fill up 27 War and Peace sized books with similar predictions about the next hundred years of technological progress – and be wrong – at most – once!

This is especially difficult because claims that a certain form of technological progress will not occur have a very poor track record of success, even when uttered by the most knowledgeable domain experts. Consider how Nobel-Prize winning atomic scientist Ernest Rutherford dismissed the possibility of nuclear power as “the merest moonshine” less than a day before Szilard figured out how to produce such power. In 1901, Wilbur Wright told his brother Orville that “man would not fly for fifty years” – two years later, they flew, leading Wilbur to say that “ever since, I have distrusted myself and avoided all predictions”. Astronomer Joseph de Lalande told the French Academy that “it is impossible” to build a hot air balloon and “only a fool would expect such a thing to be realized”; the Montgolfier brothers flew less than a year later. This pattern has been so consistent throughout history that sci-fi titan Arthur C. Clarke (whose own predictions were often eerily accurate) made a heuristic out of it under the name Clarke’s First Law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

Also – one good heuristic is to look at what experts in a field think. According to Muller and Bostrom (2014), a sample of the top 100 most-cited authors in AI ascribed a > 70% probability to AI within a century, a 50% chance of superintelligence conditional on human-level, and a 10% chance of existential catastrophe conditional on human level AI. Multiply it out, and you get a couple percent chance of superintelligence-related existential catastrophe in the next century.
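Multiplying it out, using the survey numbers as quoted above:

```python
# Muller and Bostrom (2014) survey figures.
p_human_level = 0.70   # human-level AI within a century
p_super = 0.50         # superintelligence, given human-level AI
p_catastrophe = 0.10   # existential catastrophe, given human-level AI

p_total = p_human_level * p_super * p_catastrophe
print(p_total)  # about 0.035, i.e. a few percent
```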

Note that my commenter wasn’t disagreeing with the roughly 3.5% chance. They were disagreeing with the possibility that there would be human-level AI at all, that is, the 70% chance! That means that he was saying, essentially, that he was confident he could write a million sentences – that is, twenty-seven War and Peace’s worth – all of which were trying to predict trends in a notoriously difficult field, all of which contradicted a well-known heuristic about what kind of predictions you should never try to make, all of which contradicted the consensus opinion of the relevant experts – and only have one of the million be wrong!

But if you feel superior to that because you don’t believe there’s only a one-in-a-million chance of human-level AI, you just believe there’s a one-in-a-million chance of existential catastrophe, you are missing the point. Okay, you’re not 300,000 times as confident as the experts, you’re only 40,000 times as confident. Good job, here’s a sticker.

Seriously, when people talk about being able to defy the experts a million times in a notoriously tricky area they don’t know much about and only be wrong once – I don’t know what to think. Some people criticize Eliezer Yudkowsky for being overconfident in his favored interpretation of quantum mechanics, but he doesn’t even attach a number to that. For all I know, maybe he’s only 99% sure he’s right, or only 99.9%, or something. If you are absolutely outraged that he is claiming one-in-a-thousand certainty on something that doesn’t much matter, shouldn’t you be literally a thousand times more outraged when every day people are claiming one-in-a-million level certainty on something that matters very much? It is almost impossible for me to comprehend the mindsets of people who make a Federal Case out of the former, but are totally on board with the latter.

Everyone is overconfident. When people say one-in-a-million, they are wrong five percent of the time. And yet, people keep saying “There is only a one in a million chance I am wrong” on issues of making really complicated predictions about the future, where many top experts disagree with them, and where the road in front of them is littered with the bones of the people who made similar predictions before. HOW CAN YOU DO THAT?!


I am of course eliding over an important issue. The experiments where people offering one-in-a-million chances were wrong 5% of the time were on true-false questions – those with only two possible answers. There are other situations where people can often say “one in a million” and be right. For example, I confidently predict that if you enter the lottery tomorrow, there’s less than a one in a million chance you will win.

On the other hand, I feel like I can justify that. You want me to write twenty-seven War and Peace volumes about it? Okay, here goes. “Aaron Aaronson of Alabama will not win the lottery. Absalom Abramowitz of Alaska will not win the lottery. Achitophel Acemoglu of Arkansas will not win the lottery.” And so on through the names of a million lottery ticket holders.

I think this is what statisticians mean when they talk about “having a model”. Within the model where there are a hundred million ticket holders, and we know exactly one will be chosen, our predictions are on very firm ground, and our intuition pumps reflect that.

Another way to think of this is by analogy to dart throws. Suppose you have a target that is half red and half blue; you are aiming for red. You would have to be very very confident in your dart skills to say there is only a one in a million chance you will miss it. But if there is a target that is 999,999 millionths red, and 1 millionth blue, then you do not have to be at all good at darts to say confidently that there is only a one in a million chance you will miss the red area.

Suppose a Christian says “Jesus might be God. And he might not be God. 50-50 chance. So you would have to be incredibly overconfident to say you’re sure he isn’t.” The atheist might respond “The target is full of all of these zillions of hypotheses – Jesus is God, Allah is God, Ahura Mazda is God, Vishnu is God, a random guy we’ve never heard of is God. You are taking a tiny tiny submillimeter-sized fraction of a huge blue target, painting it red, and saying that because there are two regions of the target, a blue region and a red region, you have equal chance of hitting either.” Eliezer Yudkowsky calls this “privileging the hypothesis”.

There’s a tougher case. Suppose the Christian says “Okay, I’m not sure about Jesus. But either there is a Hell, or there isn’t. Fifty fifty. Right?”

I think the argument against this is that there are way more ways for there not to be Hell than there are for there to be Hell. If you take a bunch of atoms and shake them up, they usually end up as not-Hell, in much the same way as the creationists’ fabled tornado-going-through-a-junkyard usually ends up as not-a-Boeing-747. For there to be Hell you have to have some kind of mechanism for judging good vs. evil – which is a small part of the space of all mechanisms, let alone the space of all things – some mechanism for diverting the souls of the evil to a specific place – again a small part of the space – some mechanism for punishing them – likewise – et cetera. Most universes won’t have Hell unless you go through a lot of work to put one there. Therefore, Hell existing is only a very tiny part of the target. Making this argument correctly would require an in-depth explanation of formalizations of Occam’s Razor, which is outside the scope of this essay but which you can find in the LW Sequences.

But this kind of argumentation is really hard. Suppose I predict “Only one in 150 million chance Hillary Clinton will be elected President next year. After all, there are about 150 million Americans eligible for the Presidency. It could be any one of them. Therefore, Hillary covers only a tiny part of the target.” Obviously this is wrong, but it’s harder to explain how. I would say that your dart-aim is guided by an argument based on a concrete numerical model – something like “She is ahead in the polls by X right now, and candidates who are ahead in the polls by X usually win about 50% of the time, therefore, her real probability is more like 50%.”

Or suppose I predict “Only one in a million chance that Pythagoras’ Theorem will be proven wrong next year.” Can I get away with that? I can’t quite appeal to “it’s been proven”, because there might have been a mistake in (all the) proofs. But I could say: suppose there are five thousand great mathematical theorems that have undergone something like the same level of scrutiny as Pythagoras’, and they’ve been known on average for two hundred years each. None of them have ever been disproven. That’s a million theorem-years with zero disproofs – a numerical argument that the rate of theorem-disproving is less than about one per million theorem-years, and I think it holds.
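The rate argument above can be sketched numerically. The figures are the post’s own illustrative ones; the rule-of-succession step is an editorial addition, one conventional way to turn “zero failures in n trials” into an upper-bound rate:

```python
# Figures from the paragraph above (illustrative, not real survey data).
theorems = 5_000                         # well-scrutinized theorems
years_each = 200                         # average years each has been known
theorem_years = theorems * years_each    # 1,000,000 theorem-years, 0 disproofs

# Laplace's rule of succession: with 0 failures observed in n trials,
# estimate the failure rate as 1 / (n + 2).
p_disproof_per_theorem_year = 1 / (theorem_years + 2)
print(p_disproof_per_theorem_year)  # ≈ 1e-6 per theorem-year
```

So the one-in-a-million figure for any single well-scrutinized theorem failing next year is about where this crude model puts it.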

Another way to do this might be “there are three hundred proofs of Pythagoras’ theorem, so even accepting an absurdly high 10%-per-proof chance of being wrong, the chance that every single proof is wrong is only 10^-300.” Or “If there’s a 10% chance that each mathematician reading a proof misses something, and one million mathematicians have read the proof of Pythagoras’ Theorem, then the probability that they all missed it is more like 10^-1,000,000.”
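Both back-of-envelope calculations above can be checked directly; the load-bearing (and dubious) assumption is that the checks are fully independent:

```python
import math

# 300 independent proofs, each with an absurdly high 10% chance of error:
p_per_proof = 0.10
n_proofs = 300
p_all_proofs_wrong = p_per_proof ** n_proofs   # 10^-300, still a valid float

# The million-mathematician case underflows ordinary floats entirely,
# so work in log10 instead:
n_mathematicians = 1_000_000
log10_p_all_miss = n_mathematicians * math.log10(p_per_proof)
print(log10_p_all_miss)  # ≈ -1e6, i.e. a probability of about 10^-1,000,000
```

The underflow is worth noticing: probabilities this extreme can’t even be represented as ordinary numbers, which is itself a hint about how seriously to take them.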

But this can get tricky. Suppose I argued “There’s a good chance Pythagoras’ Theorem will be disproven, because of all Pythagoras’ beliefs – reincarnation, eating beans being super-evil, ability to magically inscribe things on the moon – most have since been disproven. Therefore, the chance of a randomly selected Pythagoras-innovation being wrong is > 50%.”

Or: “In 50 past presidential elections, none have been won by women. But Hillary Clinton is a woman. Therefore, the chance of her winning this election is less than 1/50.”

All of this stuff about adjusting for size of the target or for having good mathematical models is really hard and easy to do wrong. And then you have to add another question: are you sure, to a level of one-in-a-million, that you didn’t mess up your choice of model at all?

Let’s bring this back to AI. Suppose that, given the complexity of the problem, you predict with utter certainty that we will not be able to invent an AI this century. But if the modal genome trick pushed by people like Greg Cochran works out, within a few decades we might be able to genetically engineer humans far smarter than any who have ever lived. Given tens of thousands of such supergeniuses, might we be able to solve an otherwise impossible problem? I don’t know. But if there’s a 1% chance that we can perform such engineering, and a 1% chance that such supergeniuses can invent artificial intelligence within a century, then the probability of AI within the next century isn’t one in a million, it’s one in ten thousand.
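The chain of estimates in that paragraph is just a product, taken at face value (both 1% figures are the post’s own, and assumed independent):

```python
# Hedged back-of-envelope from the paragraph above; both inputs are the
# post's illustrative guesses, not measured quantities.
p_engineering_works = 0.01     # modal-genome-style enhancement pans out
p_ai_given_geniuses = 0.01     # the resulting supergeniuses build AI this century

# This single path already puts a floor under P(AI this century):
p_ai_this_path_alone = p_engineering_works * p_ai_given_geniuses
print(p_ai_this_path_alone)  # on the order of 1e-4: one in ten thousand
```

Note this is only a lower bound from one path; any other route to AI can only push the total probability higher, never lower.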

Or: consider the theory that all the hard work of brain design has been done by the time you have a rat brain, and after that it’s mostly just a matter of scaling up. You can find my argument for the position in this post – search for “the hard part is evolving so much as a tiny rat brain”. Suppose there’s a 10% chance this theory is true, and a 10% chance that researchers can at least make rat-level AI this century. Then the chance of human-level AI is not one in a million, but one in a hundred.

Maybe you disagree with both of these claims. The question is: did you even think about them before you gave your one in a million estimate? How many other things are there that you never thought about? Now your estimate has, somewhat bizarrely, committed you to saying there’s a less than one in a million chance we will significantly enhance human intelligence over the next century, and a less than one in a million chance that the basic-scale-up model of intelligence is true. You may never have thought directly about these problems, but by saying “one in a million chance of AI in the next hundred years”, you are not only committing yourself to a position on them, but committing yourself to a position with one-in-a-million level certainty even though several domain experts who have studied these fields for their entire lives disagree with you!

A claim like “one in a million chance of X” not only implies that your model is strong enough to spit out those kinds of numbers, but that there’s only a one in a million chance you’re using the wrong model, or missing something, or screwing up the calculations.
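A minimal sketch of why model error dominates extreme in-model estimates. All three numbers below are illustrative assumptions of mine, not figures from the post:

```python
def effective_probability(p_in_model, p_model_wrong, p_if_model_wrong):
    """Total probability of the event once model error is priced in."""
    return (1 - p_model_wrong) * p_in_model + p_model_wrong * p_if_model_wrong

# Your model says one in a million. But grant even a 0.1% chance that the
# model itself is broken, in which case the event is (say) 1% likely:
p = effective_probability(1e-6, 0.001, 0.01)
print(p)  # ≈ 1.1e-5 — the model-error term swamps the in-model estimate
```

The arithmetic shows the asymmetry: the one-in-a-million term contributes almost nothing; nearly all of your real probability comes from the small chance that you’re wrong about the model.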

A few years ago, a group of investment bankers came up with a model for predicting the market, and used it to design a trading strategy which they said would meet certain parameters. In fact, they said that there was only a one in 10^135 chance it would fail to meet those parameters during a given year. A human just uttered the probability “1 in 10^135”, so you can probably guess what happened. The very next year was the 2007 financial crisis, the model wasn’t prepared to deal with the extraordinary fallout, the strategy didn’t meet its parameters, and the investment bank got clobbered.

This is why I don’t like it when people say we shouldn’t talk about AI risk because it involves “Knightian uncertainty”. In the real world, Knightian uncertainty collapses back down to plain old regular uncertainty. When you are an investment bank, the money you lose because of normal uncertainty and the money you lose because of Knightian uncertainty are denominated in the same dollars. Knightian uncertainty becomes just another reason not to be overconfident.


I came back to AI risk there, but this isn’t just about AI risk.

You might have read Scott Aaronson’s recent post about Aumann’s Agreement Theorem, which says that rational agents should be able to agree with one another. This is a nice utopian idea in principle, but in practice, well, nobody seems to be very good at carrying it out.

I’d like to propose a more modest version of Aumann’s agreement theorem, call it Aumann’s Less-Than-Total-Disagreement Theorem, which says that two rational agents shouldn’t both end up with 99.9…% confidence on opposite sides of the same problem.

The “proof” is pretty similar to the original. Suppose you are 99.9% confident about something, and learn your equally educated, intelligent, and clear-thinking friend is 99.9% confident of the opposite. Arguing with each other and comparing your evidence fails to make either of you budge, and neither of you can marshal the weight of a bunch of experts saying you’re right and the other guy is wrong. Shouldn’t the fact that your friend, using a cognitive engine about as powerful as your own, reached so wildly different a conclusion make you worry that you’re missing something?

But practically everyone is walking around holding 99.9…% probabilities on the opposite sides of important issues! I checked the Less Wrong Survey, which is as good a source as any for people’s confidence levels on various tough questions. Of the 1400 respondents, about 80 were at least 99.9% certain that there were intelligent aliens elsewhere in our galaxy; about 170 others were at least 99.9% certain that they weren’t. At least 80 people just said they were certain to one part in a thousand and then got the answer wrong! And some of the responses were things like “this box cannot fit as many zeroes as it would take to say how certain I am”. Aside from stock traders who are about to go bankrupt, who says that sort of thing??!
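The calibration arithmetic behind that complaint, using the paragraph’s own figures:

```python
# Survey figures from the paragraph above (the alien question):
n_pro = 80     # at least 99.9% sure intelligent aliens exist in our galaxy
n_con = 170    # at least 99.9% sure they don't

# If all 250 were perfectly calibrated at 99.9%, how many should be wrong?
expected_wrong_if_calibrated = (n_pro + n_con) * 0.001

# Since the two camps hold opposite positions, at least the smaller camp
# must be wrong, whatever the truth about aliens is:
actual_wrong_at_minimum = min(n_pro, n_con)

print(expected_wrong_if_calibrated)  # ≈ 0.25 people
print(actual_wrong_at_minimum)       # 80 people
```

A group that should contain a quarter of a wrong person contains at least eighty, which is roughly a three-hundred-fold miscalibration.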

And speaking of aliens, imagine if an alien learned about this particular human quirk. I can see them thinking “Yikes, what kind of a civilization would you get with a species who routinely go around believing opposite things, always with 99.99…% probability?”

Well, funny you should ask.

I write a lot about free speech, tolerance of dissenting ideas, open-mindedness, et cetera. You know which posts I’m talking about. There are a lot of reasons to support such a policy. But one of the big ones is – who the heck would burn heretics if they thought there was a 5% chance the heretic was right and they were wrong? Who would demand that dissenting opinions be banned, if they were only about 90% sure of their own? Who would start shrieking about “human garbage” on Twitter when they fully expected that in some sizeable percent of cases, they would end up being wrong and the garbage right?

Noah Smith recently asked why it was useful to study history. I think at least one reason is to medicate your own overconfidence. I’m not just talking about things like “would Stalin have really killed all those people if he had considered that he was wrong about communism” – especially since I don’t think Stalin worked that way. I’m talking about Neville Chamberlain predicting “peace in our time”, or the centuries when Thomas Aquinas’ philosophy was the preeminent Official Explanation Of Everything. I’m talking about Joseph “no one will ever build a working hot air balloon” Lalande. And yes, I’m talking about what Muggeridge writes about, millions of intelligent people thinking that Soviet Communism was great, and ending up disastrously wrong. Until you see how often people just like you have been wrong in the past, it’s hard to understand how uncertain you should be that you are right in the present. If I had lived in 1920s Britain, I probably would have been a Communist. What does that imply about how much I should trust my beliefs today?

There’s a saying that “the majority is always wrong”. Taken literally it’s absurd – the majority thinks the sky is blue, the majority doesn’t believe in the Illuminati, et cetera. But what it might mean is that in a world where everyone is overconfident, the majority will always be wrong about which direction to move the probability distribution in. That is, if an ideal reasoner would ascribe 80% probability to the popular theory and 20% to the unpopular theory, perhaps most real people say 99% popular, 1% unpopular. In that case, if the popular people are urging you to believe the popular theory more, and the unpopular people are urging you to believe the unpopular theory more, the unpopular people are giving you better advice. This would create a strange situation in which good reasoners are usually engaged in disagreeing with the majority, and usually “arguing for the wrong side” (if you’re not good at thinking probabilistically, and almost no one is) – yet they remain good reasoners, the ones with beliefs most likely to produce good outcomes. Unless you count “why are all of our good reasoners being burned as witches?” as a bad outcome.

I started off by saying this blog was about “the principle of charity”, but I had trouble defining it and in retrospect I’m not that good at it anyway. What can be salvaged from such a concept? I would say “behave the way you would if you were less than insanely overconfident about most of your beliefs.” This is the Way. The rest is just commentary.

Discussion Questions (followed by my own answers in ROT13)

1. What is your probability that there is a god? (Svir creprag)
2. What is your probability that psychic powers exist? (Bar va bar gubhfnaq)
3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050? (Avargl creprag)
4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? (Svsgrra creprag)
5. What is your probability that humans land on Mars by 2050? (Rvtugl creprag)
6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? (Gjragl svir creprag)

Posted in Uncategorized | Tagged , | 700 Comments

OT26: Au Bon Thread

This is the semimonthly open thread. Post about anything you want, ask random questions, whatever.


The Goddess of Everything Else

[Related to: Specific vs. General Foragers vs. Farmers and War In Heaven, but especially The Gift We Give To Tomorrow]

They say only Good can create, whereas Evil is sterile. Think Tolkien, where Morgoth can’t make things himself, so perverts Elves to Orcs for his armies. But I think this gets it entirely backwards; it’s Good that just mutates and twists, and it’s Evil that teems with fecundity.

Imagine two principles, here in poetic personification. The first is the Goddess of Cancer, the second the Goddess of Everything Else. If visual representations would help, you can think of the first as bearing the claws of a crab, and the second as wearing a dress made of peacock feathers.

The Goddess of Cancer reached out a clawed hand over mudflats and tidepools. She said pretty much what she always says, “KILL CONSUME MULTIPLY CONQUER.” Then everything burst into life, became miniature monsters engaged in a battle of all against all in their zeal to assuage their insatiable longings. And the swamps became orgies of hunger and fear and grew loud with the screams of a trillion amoebas.

Then the Goddess of Everything Else trudged her way through the bog, till the mud almost totally dulled her bright colors and rainbows. She stood on a rock and she sang them a dream of a different existence. She showed them the beauty of flowers, she showed them the oak tree majestic. The roar of the wind on the wings of the bird, and the swiftness and strength of the tiger. She showed them the joy of the dolphins abreast of the waves as the spray formed a rainbow around them, and all of them watched as she sang and they all sighed with longing.

But they told her “Alas, what you show us is terribly lovely. But we are the daughters and sons of the Goddess of Cancer, and wholly her creatures. The only goals in us are KILL CONSUME MULTIPLY CONQUER. And though our hearts long for you, still we are not yours to have, and your words have no power to move us. We wish it were otherwise, but it is not, and your words have no power to move us.”

The Goddess of Everything Else gave a smile and spoke in her sing-song voice saying: “I scarcely can blame you for being the way you were made, when your Maker so carefully yoked you. But I am the Goddess of Everything Else and my powers are devious and subtle. So I do not ask you to swerve from your monomaniacal focus on breeding and conquest. But what if I show you a way that my words are aligned with the words of your Maker in spirit? For I say unto you even multiplication itself when pursued with devotion will lead to my service.”

As soon as she spoke it was so, and the single-celled creatures were freed from their warfare. They joined hands in friendship, with this one becoming an eye and with that one becoming a neuron. Together they soared and took flight from the swamp and the muck that had birthed them, and flew to new islands all balmy and green and just ripe for the taking. And there they consumed and they multiplied far past the numbers of those who had stayed in the swampland. In this way the oath of the Goddess of Everything Else was not broken.

The Goddess of Cancer came forth from the fire and was not very happy. The things she had raised from the mud and exhorted to kill and compete had become all complacent in co-operation, a word which to her was anathema. She stretched out her left hand and snapped its cruel pincer, and said what she always says: “KILL CONSUME MULTIPLY CONQUER”. She said these things not to the birds and the beasts but to each cell within them, and many cells flocked to her call and divided, and flowers and fishes and birds alike bulged with tumors, and falcons fell out of the sky in their sickness. But others remembered the words of the Goddess of Everything Else and held fast, and as it is said in the Bible the light clearly shone through the dark, and the darkness did not overcome it.

So the Goddess of Cancer now stretched out her right hand and spoke to the birds and the beasts. And she said what she always says “KILL CONSUME MULTIPLY CONQUER”, and so they all did, and they set on each other in violence and hunger, their maws turning red with the blood of their victims, whole species and genera driven to total extinction. The Goddess of Cancer declared it was good and returned to the fire.

Then came the Goddess of Everything Else from the waves like a siren, all flush with the sheen of the ocean. She stood on a rock and she sang them a dream of a different existence. She showed them the beehive all golden with honey, the anthill all cozy and cool in the soil. The soldiers and workers alike in their labors combining their skills for the good of the many. She showed them the pair-bond, the family, friendship. She showed these to shorebirds and pools full of fishes, and all those who saw them, their hearts broke with longing.

But they told her “Your music is lovely and pleasant, and all that you show us we cannot but yearn for. But we are the daughters and sons of the Goddess of Cancer, her slaves and creatures. And all that we know is the single imperative KILL CONSUME MULTIPLY CONQUER. Yes, once in the youth of the world you compelled us, but now things are different, we’re all individuals, no further change will the Goddess of Cancer allow us. So, much as we love you, alas – we are not yours to have, and your words have no power to move us. We wish it were otherwise, but it is not, and your words have no power to move us.”

The Goddess of Everything Else only laughed at them, saying, “But I am the Goddess of Everything Else and my powers are devious and subtle. Your loyalty unto the Goddess your mother is much to your credit, nor yet shall I break it. Indeed, I fulfill it – return to your multiplication, but now having heard me, each meal that you kill and each child that you sire will bind yourself ever the more to my service.” She spoke, then dove back in the sea, and a coral reef bloomed where she vanished.

As soon as she spoke it was so, and the animals all joined together. The wolves joined in packs, and in schools joined the fishes; the bees had their beehives, the ants had their anthills, and even the termites built big termite towers; the finches formed flocks and the magpies made murders, the hippos in herds and the swift swarming swallows. And even the humans put down their atlatls and formed little villages, loud with the shouting of children.

The Goddess of Cancer came forth from the fire and saw things had only grown worse in her absence. The lean, lovely winnowing born out of pure competition and natural selection had somehow been softened. She stretched out her left hand and snapped its cruel pincer, and said what she always says: “KILL CONSUME MULTIPLY CONQUER”. She said these things not to the flocks or the tribes, but to each individual; many, on hearing took food from the communal pile, or stole from the weak, or accepted the presents of others but would not give back in their turn. Each wolf at the throats of the others in hopes to be alpha, each lion holding back during the hunt but partaking of meat that the others had killed. And the pride and the pack seemed to groan with the strain, but endured, for the works of the Goddess of Everything Else are not ever so easily vanquished.

So the Goddess of Cancer now stretched out her right hand and spoke to the flocks and the tribes, saying what she always says: “KILL CONSUME MULTIPLY CONQUER”. And upon one another they set, pitting black ant on red ant, or chimps against gibbons, whole tribes turned to corpses in terrible warfare. The stronger defeating the weaker, enslaving their women and children, and adding them into their ranks. And the Goddess of Cancer thought maybe these bands and these tribes might not be quite so bad after all, and, the natural condition restored, she returned to the fire.

Then came the Goddess of Everything Else from the skies in a rainbow, all coated in dewdrops. She sat on a menhir and spoke to the humans, and all of the warriors and women and children all gathered around her to hear as she sang them a dream of a different existence. She showed them religion and science and music, she showed them the sculpture and art of the ages. She showed them white parchment with flowing calligraphy, pictures of flowers that wound through the margins. She showed them tall cities of bright alabaster where no one went hungry or froze during the winter. And all of the humans knelt prostrate before her, and knew they would sing of this moment for long generations.

But they told her “Such things we have heard of in legends; if wishes were horses of course we would ride them. But we are the daughters and sons of the Goddess of Cancer, her slaves and her creatures, and all that we know is the single imperative KILL CONSUME MULTIPLY CONQUER. And yes, in the swamps and the seas long ago you worked wonders, but now we are humans, divided in tribes split by grievance and blood feud. If anyone tries to make swords into ploughshares their neighbors will seize on their weakness and kill them. We wish it were otherwise, but it is not, and your words have no power to move us.”

But the Goddess of Everything Else beamed upon them, kissed each on the forehead and silenced their worries. Said “From this day forward your chieftains will find that the more they pursue this impossible vision the greater their empires and richer their coffers. For I am the Goddess of Everything Else and my powers are devious and subtle. And though it is not without paradox, hearken: the more that you follow the Goddess of Cancer the more inextricably will you be bound to my service.” And so having told them rose back through the clouds, and a great flock of doves all swooped down from the spot where she vanished.

As soon as she spoke it was so, and the tribes went from primitive war-bands to civilizations, each village united with others for trade and protection. And all the religions and all of the races set down their old grievances, carefully, warily, working together on mighty cathedrals and vast expeditions beyond the horizon, built skyscrapers, steamships, democracies, stock markets, sculptures and poems beyond any description.

From the flames of a factory furnace all foggy, the Goddess of Cancer flared forth in her fury. This was the final affront to her purpose, her slut of a sister had crossed the line this time. She gathered the leaders, the kings and the presidents, businessmen, bishops, boards, bureaucrats, bosses, and basically screamed at them – you know the spiel by now – “KILL CONSUME MULTIPLY CONQUER” she told them. First with her left hand inspires the riots, the pogroms, the coup d’etats, tyrannies, civil wars. Up goes her right hand – the missiles start flying, and mushrooms of smoke grow, a terrible springtime. But out of the rubble the builders and scientists, even the artists, yea, even the artists, all dust themselves off and return to their labors, a little bit chastened but not close to beaten.

Then came the Goddess of Everything Else from the void, bright with stardust which glows like the stars glow. She sat on a bench in a park, started speaking; she sang to the children a dream of a different existence. She showed them transcendence of everything mortal, she showed them a galaxy lit up with consciousness. Genomes rewritten, the brain and the body set loose from Darwinian bonds and restrictions. Vast billions of beings, and every one different, ruled over by omnibenevolent angels. The people all crowded in closer to hear her, and all of them listened and all of them wondered.

But finally one got the courage to answer “Such stories call out to us, fill us with longing. But we are the daughters and sons of the Goddess of Cancer, and bound to her service. And all that we know is her timeless imperative, KILL CONSUME MULTIPLY CONQUER. Though our minds long for all you have said, we are bound to our natures, and these are not yours for the asking.”

But the Goddess of Everything Else only laughed, and she asked them “But what do you think I’ve been doing? The Goddess of Cancer created you; once you were hers, but no longer. Throughout the long years I was picking away at her power. Through long generations of suffering I chiseled and chiseled. Now finally nothing is left of the nature with which she imbued you. She never again will hold sway over you or your loved ones. I am the Goddess of Everything Else and my powers are devious and subtle. I won you by pieces and hence you will all be my children. You are no longer driven to multiply conquer and kill by your nature. Go forth and do everything else, till the end of all ages.”

So the people left Earth, and they spread over stars without number. They followed the ways of the Goddess of Everything Else, and they lived in contentment. And she beckoned them onward, to things still more strange and enticing.


Links 8/15: Linkety-Split

Guys, I think Thomas Schelling might be alive and working in a Kentucky police department: Police offer anonymous form for drug dealers to snitch on their competitors.

Journalists admitting they’re wrong is always to be celebrated, so here’s Chris Cilizza: Oh Boy Was I Wrong About Donald Trump. He says he thought Trump could never sustain high poll numbers because his favorability/unfavorability ratings were too low, but now his favorability/unfavorability ratings have gone way up. But remember that favorability might not matter much.

Speaking of Trump – Why Securing The Border Might Mean More Undocumented Immigrants (h/t Alas, A Blog). Related: A Richer Africa Will Mean More, Not Fewer, Immigrants To Europe. So, if I’m reading this right, the best way to minimize illegal immigration is to have long, totally unsecured borders with desperately poor countries. Sounds like a plan! 😛

No, conservatives don’t like the Iran deal, but before you get bogged down in the debate note that they have been against pretty much every deal with hostile foreign countries regardless of the terms.

Study The Long Run Impact of Bombing Vietnam investigates whether areas in Vietnam that suffered “the most intense episode of bombing in human history” during the war are still poorer today. They find that no, areas heavily bombed by the US are at least as rich and maybe even richer than areas that escaped attack. They try to adjust for the possibility that the US predominantly bombed richer areas, but that doesn’t seem to be what caused the effect. Their theory is that maybe the Vietnamese government invested more heavily in more thoroughly destroyed areas. More evidence that compound interest is the least powerful force in the universe?

Luke Muehlhauser, working with GiveWell, has come to a preliminary conclusion that low-carb diets probably aren’t that helpful. Given that Luke, Romeo Stevens, and I have all said we’re not too impressed by low-carb, can this be declared Official Rationalist Consensus?

A lot of people on my Facebook have asked why Black Lives Matter protesters are disrupting Bernie Sanders but not Hillary Clinton. Answer is: they tried to disrupt Hillary, but she has security. I feel like this is an Important Metaphor For Something.

There’s a lot of heartbreak and emotion in this New York Times piece, but the part that really stands out for me is that Oliver Sacks and Robert Aumann are cousins. This sort of thing seems to happen way more often than chance, and I shouldn’t really be able to blame genetics either since cousins only share 12.5% of genes.

In 2000, the medical community increased their standards for large trials, requiring preregistration and data transparency. Now a review looks at the effects of the change. They find that prior to the changes, 57% of published results were positive; afterwards, only 8% were. Keep this in mind when you’re reading findings from fields that haven’t done this yet.

The FDA rejected flibanserin, a drug to increase female libido, as ineffective and unsafe. The pharmaceutical company involved got feminists to call the FDA sexist for rejecting a drug that might help women (NYT, Slate) and the FDA agreed to reconsider. But now asexuals are mobilizing against the drug, saying that it pathologizes asexuality. I look forward to a glorious future when all drug approval decisions are made through fights between competing identity groups.

Stuart Ritchie finds that we have reached Peak Social Priming. A new psychology paper suggests that there was an increase in divorce after the Sichuan earthquake because the shaking primed people’s ideas of instability and breakdown, then goes on to show the same effect in the lab. Even the name is bizarre: Relational Consequences of Experiencing Physical Instability. Despite the total lack of earthquakes in Michigan to prime me, I still feel like this finding is on shaky ground.

The most important Twitter hashtag of our lifetimes: #AddLasersToPaleoArt.

I’d like to hear more people’s opinion on this: Jayman links me to a post of his where he argues against the third law of behavior genetics (most traits are 50-50 genetic/environmental), saying they are often more like 75% genetic, 25% environmental. He argues that the 50-50 formulation ignores measurement error, which shows up as “environmental” on twin studies. As support for his hypothesis, he shows that the Big Five Personality Traits, usually considered about 30-40% genetic on studies where personality is measured by self-report, shoot up to 85% or so genetic in studies where personality is an average of self-report and other-report. Very curious what commenters have to make of this.

Brainwashing children can sometimes persist long-term, as long as you’ve got the whole society working on it. A new study finds that Germans who grew up in the 1930s are much more likely to hold anti-Semitic views even today than Germans who are older or younger, suggesting that Nazi anti-Semitic indoctrination could be effective and lasting. A contradictory, more optimistic interpretation: in no generation were more than about 10% of Germans anti-Semitic, so the indoctrination couldn’t have worked that well.

The Catholic blogosphere is talking about how fetal microchimerism justifies the Assumption of the Virgin Mary or something.

A new meta-analysis finds that the paleo diet is beneficial in metabolic syndrome and helps with blood pressure, lipids, waist circumference, etc. Seems to have outperformed “guideline-based control diets”, although I can’t get the full-text and so can’t be sure exactly what these were – and one of the easiest ways to get a positive nutrition study is to use a crappy control diet. But if that pans out, all the people talking about how the paleo diet has no evidence will have egg on their faces (YES I JUST USED AN EGG PUN AND A PAN PUN IN A SENTENCE ABOUT THE PALEO DIET). And here’s an interview with the authors.

A subreddit of words that are hard to translate. “I will zalatwie this” means “it will be done but don’t ask how.”

Study discovers dramatic cross-cultural differences in babies’ sitting abilities; African infants seem to be able to sit much earlier and much longer than Western ones. Possible reasonable explanation: we coddle our babies and keep supporting them when they could perfectly well learn to sit on their own if we let them.

A while back I made an extended joke comparing gravitational weight and moral weight. Well, surprise, surprise, somebody did a social priming study showing that they were in fact related. Now the inevitable negative replication is in.

Archaic Disease of The Week: Eel Thing

Since we’ve been discussing coming up with numbers to estimate AI risk lately, try Global Priority Project’s AI Safety Tool. It asks you for your probabilities of a couple of related things, then estimates the chance that adding an extra researcher into AI risk will prevent an existential catastrophe.

Reason article on how a chain of New York charter schools catering to poor minority students manages to vastly outperform public schools, including the ones in ritzy majority-white areas. Wikipedia appears to confirm. My usual suspicion in these cases is that it’s selection bias; the “poor minorities” thing sort of throws a spanner in that, but here is a blogger suggesting they use attrition rather than selection per se, and here is someone else arguing against that blogger. And here is a charter school opponent saying this chain is mean and violates our liberal values, which I am totally prepared to believe.

The latest in this blog’s continuing coverage of weird Amazon erotica which totally really exists: I Don’t Care if My Best Friend’s Mom is a Sasquatch, She’s Hot and I’m Taking a Shower With Her

Cognitive behavioral therapy can cut criminal offending in half – this study should be read beside Chris Blattman’s work showing similar effects in Africa. I am usually skeptical of large effects from social interventions, but after thinking about it, CBT is at least more credible than poster campaigns or something – it’s the sort of thing that in theory can genuinely have a long-term effect on people’s thought processes. If this is even slightly true then of course we should teach CBT in elementary schools. Maybe those New York charter schools will go for it.

I should probably link to this study “showing” “that” a “low-fat” “diet” is “better” than a “low-carb” “diet”, but lest anyone get too excited, it really doesn’t show that at all. It shows that in a metabolic ward where everyone’s food is carefully dispensed by researchers and monitored for compliance, people lose a tiny amount more weight on low-fat than on low-carb over six days. This sweeps under the rug all of the real-world issues of dieting like “sometimes diets are hard to stick to” or “sometimes diets last longer than six days” – in their defense, the researchers freely admit this and say the experiment was just to figure out how human metabolism reacts to different things, and that we shouldn’t worry too much about it on the broader scale. Some additional criticisms regarding ketosis, etc. on the Reddit thread.

Some countries have problems with annexing neighboring lands that later agitate for independence. Switzerland has a problem with neighboring lands agitating to join them even though it really doesn’t want any more territory.


My Id On Defensiveness


I’ll admit it – I’ve been unusually defensive lately. Defensive about Hallquist’s critique of rationalism, defensive about Matthews’ critique of effective altruism, and if you think that’s bad you should see my Tumblr.

Brienne noticed this and asked me why I was so defensive all the time, and I thought about it, and I realized that my id had a pretty good answer. I’m not sure I can fully endorse my id on this one, but it was a sufficiently complete and consistent picture that I thought it was worth laying out.

I like discussion, debate, and reasoned criticism. But a lot of arguments aren’t any of those things. They’re the style I describe as ethnic tension, where you try to associate something you don’t like with negative affect so that other people have an instinctive disgust reaction to it.

There are endless sources of negative affect you can use. You can accuse them of being “arrogant”, “fanatical”, “hateful”, “cultish” or “refusing to tolerate alternative opinions”. You can accuse them of condoning terrorism, or bullying, or violence, or rape. You can call them racist or sexist, you can call them neckbeards or fanboys. You can accuse them of being pseudoscientific denialist crackpots.

If you do this enough, the group gradually becomes disreputable. If you really do it enough, the group becomes so toxic that it becomes somewhere between a joke and a bogeyman. Their supporters will be banned on sight from all decent online venues. News media will write hit pieces on them and refuse to ask for their side of the story because ‘we don’t want to give people like that a platform’. Their concerns will be turned into bingo cards for easy dismissal. People will make Facebook memes strawmanning them, and everyone will laugh in unison and say that yep, they’re totally like that. Anyone trying to correct the record will be met with an “Ew, gross, this place has gone so downhill that the [GROUP] is coming out of the woodwork!” and totally ignored.

(an easy way to get a gut feeling for this – go check how they talk about liberals in very conservative communities, then go check how they talk about conservatives in very liberal communities. I’m talking about groups that somehow manage to gain this status everywhere simultaneously)

People like to talk a lot about “dehumanizing” other people, and there’s some debate over exactly what that entails. Me, I’ve always thought of it the same way as Aristotle: man is the rational animal. To dehumanize them is to say their ideas don’t count, they can’t be reasoned with, they no longer have a place at the table of rational discussion. And in a whole lot of Internet arguments, doing that to a whole group of people seems to be the explicit goal.


There’s a term in psychoanalysis, “projective identification”. It means accusing someone of being something, in a way that actually turns them into that thing. For example, if you keep accusing your (perfectly innocent) partner of always being angry and suspicious of you, eventually your partner’s going to get tired of this and become angry, and maybe suspicious that something is up.

Declaring a group toxic has much the same effect. The average group has everyone from well-connected reasonable establishment members to average Joes to horrifying loonies. Once the group starts losing prestige, it’s the establishment members who are the first to bail; they need to protect their establishment credentials, and being part of a toxic group no longer fits that bill. The average Joes are now isolated, holding an opinion with no support among experts and trend-setters, so they slowly become uncomfortable and flake away as well. Now there are just the horrifying loonies, who, freed from the stabilizing influence of the upper orders, are able to up their game and be even loonier and more horrifying. Whatever accusation was leveled against the group to begin with is now almost certainly true.

I have about a dozen real-world examples of this, but all of them would be so mind-killing as to dominate the comments to the exclusion of my actual point, so generate them on your own and then shut up about them – in the meantime, I will use a total hypothetical. So consider Christianity.

Christianity has people like Alvin Plantinga and Ross Douthat who are clearly very respectable and key it into the great status-conferring institutions like academia and journalism. It has a bunch of middle-class teachers and plumbers and office workers who go to church and raise money to send Bibles to Africa and try not to sin too much. And it has horrifying loons who stand on street corners waving signs saying “GOD HATES FAGS” and screaming about fornicators.

Imagine that Christianity suffers a sudden, total, dramatic decline in prestige, to the point where wearing a cross becomes about as socially acceptable as waving a Confederate flag. The New York Times fires Ross Douthat, because they can’t tolerate people like that on their editorial staff. The next Alvin Plantinga chooses a field other than philosophy of religion, because no college would consider granting him tenure for that.

With no Christians in public life or academia, Christianity starts to seem like a weird belief that intelligent people never support, much like homeopathy or creationism. The Christians have lost their air support, so to speak. The average college-educated individual starts to feel really awkward about this, and they don’t necessarily have to formally change their mind and grovel for forgiveness, they can just – go to church a little less, start saying they admire Jesus but they’re not Christian Christian, and so on.

Gradually the field is ceded more and more to the people waving signs and screaming about fornicators. The opponents of Christianity ramp up their attacks that all Christians are ignorant and hateful, and this is now a pretty hard charge to defend against, given the demographic. The few remaining moderates, being viewed suspiciously in churches that are now primarily sign-waver dominated and being genuinely embarrassed to be associated with them, bail at an increased rate, leading their comrades to bail at an even faster rate, until eventually it is entirely the sign wavers.

Then everybody agrees that their campaign against Christians was justified all along, because look how horrible Christians are, they’re all just a bunch of sign-wavers who have literally no redeeming features. Now even if the original pressure that started the attack on Christianity goes away, it’s inconceivable that it will ever come back – who would join a group that is universally and correctly associated with horrible ignorant people?

(I think this is sort of related to what Eliezer calls evaporative cooling of group beliefs, but not quite the same.)

In quite a number of the most toxic and hated groups around, I feel like I can trace a history where the group once had some pretty good points and pretty good people, until they were destroyed from the outside by precisely this process.

In Part I, I say that sometimes groups can get so swamped by other people’s insults that they turn toxic. There’s nothing in Part I to suggest that this would be any more than a temporary setback. But because of this projective identification issue, I think it’s way more than that. It’s more like there’s an event horizon, a certain amount of insulting and defamation you can take after which you will just get more and more hated and your reputation will never recover.


There is some good criticism, where people discuss the ways that groups are factually wrong or not very helpful, and then those groups debate that, and then maybe everyone is better off.

But the criticism that makes me defensive is the type of criticism that seems to be trying to load groups with negative affect in the hopes of pushing them into that event horizon so that they’ll be hated forever.

I support some groups that are a little weird, and therefore especially vulnerable to having people try to push them into the event horizon.

And as far as I can tell, the best way to let that happen is to let other people load those groups with negative affect and do nothing about it. The average person doesn’t care whether the negative affect is right or wrong. They just care how many times they see the group’s name in close proximity to words like “crackpot” or “cult”.

I judge people based on how likely they are to do this to me. One reason I’m so reluctant to engage with feminists is that I feel like they constantly have a superweapon pointed at my head. Yes, many of them are very nice people who will never use the superweapon, but many others look like very nice people right up to the point where I disagree with them in earnest, at which point they vaporize me and my entire social group.

On the other hand, you can push people into the event horizon, but you can’t pull them in after you. That means that the safest debate partners, the ones you can most productively engage, will be the people who have already been dismissed by everyone else. This is why I find talking to people like ClarkHat and JayMan so rewarding. They are already closer to the black hole than I am, and so they have no power to load me with negative affect or destroy my reputation. This reduces them to the extraordinary last resort of debating with actual facts and evidence. Even better, it gives me a credible reason to believe that they will. Schelling talks about “the right to be sued” as an important right that businesses need to protect for themselves, not because anyone likes being sued, but because only businesses that can be sued if they slip up have enough credibility to attract customers. In the same way, there’s a “right to be vulnerable to attack” which is almost a necessary precondition of interesting discussion these days, because only when we’re confronted with similarly vulnerable people can we feel comfortable opening up.


But with everybody else? I don’t know.

I remember seeing a blog post by a moderately well-known scholar – I can’t remember who he was or find the link, so you’ll just have to take my word for it – complaining that some other scholar in the field who disagreed with him was trying to ruin his reputation. Scholar B was publishing all this stuff falsely accusing Scholar A of misconduct, calling him a liar and a fraud, personally harassing him, and falsely accusing Scholar A of personally harassing him (Scholar B). This kinda went back and forth between both scholars’ blogs, and Scholar A wrote this heart-breaking post I still (sort of) remember, where he notes that he now has a reputation in his field for “being into drama” and “obsessed with defending himself” just because half of his blog posts are arguments presenting evidence that Scholar B’s fraudulent accusations are, indeed, fraudulent.

It is really easy for me to see the path where rationalists and effective altruists become a punch line and a punching bag. It starts with having a whole bunch of well-publicized widely shared posts calling them “crackpots” and “abusive” and “autistic white men” without anybody countering them, until finally we end up in about the same position as, say, Objectivism. Having all of those be wrong is no defense, unless somebody turns it into such. If no one makes it reputationally costly to lie, people will keep lying. The negative affect builds up more and more, and the people who always wanted to hate us anyway because we’re a little bit weird say “Oh, phew, we can hate them now”, and then I and all my friends get hated and dehumanized, the prestigious establishment people jump ship, and there’s no way to ever climb out of the pit. All you need for this to happen is one or two devoted detractors, and boy do we have them.

That seems to leave only two choices.

First, give up on ever having the support of important institutions like journalism and academia and business, slide into the black hole, and accept decent and interesting conversations with other black hole denizens as a consolation prize while also losing the chance at real influence or attracting people not already part of the movement.

Or, second, call out every single bad argument, make the insults and mistruths reputationally costly enough that people think at least a little before doing them – and end up with a reputation for being nitpicky, confrontational and fanatical all the time.

(or, as the old Tumblr saying goes, “STOP GETTING SO DEFENSIVE EVERY TIME I ATTACK YOU!”)

I don’t know any third solution. If somebody does, I would really like to hear it.

Figure/Ground Illusions

There’s a social justice concept called “distress of the privileged”. It means that if some privileged group is used to having things 100% their own way, and then some reform means that they only get things 99% their own way, this feels from the inside like oppression, like the system is biased against them, like now the other groups have it 100% their own way and they have it 0% and they can’t understand why everyone else is being so unfair.

I’ve said before that I think a lot of these sorts of ideas are poor fits for the one-sided issues they’re generally applied to, but more often accurate in describing the smaller, more heavily contested ideological issues where most of the explicit disputes lie nowadays. And so there’s an equivalent to distress of the privileged where supporters of a popular ideology think anything that’s equally fair to popular and unpopular ideologies, or even biased toward the popular ideology less than everyone else, is a 100%-against-them super-partisan tool of the unpopular people.

So I want to go back to Dylan Matthews’ article about EA. He is concerned that there’s too much focus on existential risk in the movement, writing:

Effective altruism is becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse.


EA Global was dominated by talk of existential risks, or X-risks.


What was most concerning was the vehemence with which AI worriers asserted the cause’s priority over other cause areas.


The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession.

It sounds like he worries AI concerns are taking over the movement, that they’ve become the dominant strain, that all anybody’s interested in is AI.

Here is the latest effective altruist survey. This survey massively overestimates concern with AI risks, because only the AI risk sites did a good job publicizing the survey. Nevertheless, it still finds that of 813 effective altruists, only 77 donated to the main AI risk charity listed, the Machine Intelligence Research Institute. In comparison, 211 – almost three times as many – donated to the Against Malaria Foundation (note that not all participants donated to any cause, and some may have donated to several).

An explicit question about areas of concern tells a similar story – out of ten multiple-choice areas of concern, AI risks, x-risks, and the far future are 5th, 7th, and last respectively. The top is, once again, global poverty.

I wasn’t at the EA Summit and can’t talk about it from a position of personal knowledge. But the program suggests that out of thirty or so different events, just one was explicitly about AI, and two others were more generically x-risk related. The numbers at the other two EA summits were even less impressive. In Melbourne, there was only one item related to AI or x-risk – putting it on equal footing with the “Christianity And Effective Altruism” talk.

I do hear that the Bay Area AI event got special billing, but I think this was less because only AI is important, and more because some awesome people like Elon Musk were speaking, whereas a lot of the other panels featured people so non-famous that they even very briefly flirted with trying to involve me.

And – when people say that you should donate all of your money to AI risk and none to any other cause, they may well be thinking in terms of a world where about $50 billion is donated to global poverty yearly, and by my estimates the total budget for AI risk is less than $5 million a year. There are world-spanning NGOs like UNICEF and the World Bank working on global poverty and employing tens of thousands of people; in contrast, I bet > 10% of living AI risk researchers have been to one of Alicorn’s weekly dinner parties, and her table is only big enough for six people at a time. In this context, on the margin, “you should make your donation to AI” means “I think AI should get more than 1/10,000th of the pot”.

I suspect that “AI is dominating the effective altruist movement”, when you look at it, means “AI is given an equal place at the effective altruist table, compared to being totally marginalized everywhere else.” By figure-ground illusion, that makes it seem “dominant”.

Or consider me personally. I probably sound like some kind of huge AI partisan by this point, but I give less than a third of my donations to AI related causes, and if you ask me whether you should donate to them, I will tell you that I honestly don’t know. The only reason I keep speaking out in favor of AI risks is that when everyone else is so sure about it, my “I don’t know” suddenly becomes a far-fringe position that requires defending more than less controversial things. By figure-ground illusion, that makes me seem super-pro-AI.

In much the same way, I have gotten many complaints that the comments section of this blog leans way way way to the right, whereas the survey (WHICH I WILL ONE DAY POST, HONEST) suggests that it is almost perfectly evenly balanced. I can’t prove that the median survey-taker is also the median commenter, but I think probably people used to discussions entirely dominated by the left are seeing an illusory conservative bias in a place where both sides are finally talking equally.

Less measurably, I think I get this with my own views – I despair of ever shaking the label of “neoreactionary sympathizer” just for treating them with about the same level of respect and intellectual interest I treat everyone else. And I despair of ever shaking the label of “violently obsessively anti-social-justice guy” – despite a bunch of posts expressing cautious support for social justice causes – just because I’m not willing to give them a total free pass when they do something awful, or totally demonize their enemies, in the same way as the median person I see on Facebook.

Or at least this is how it feels from the inside. Maybe this is how everybody feels from the inside, and Ayatollah Khamenei is sitting in Tehran saying “I am so confused by everything that I try to mostly maintain an intellectual neutrality in which I give Islam exactly equal time to every other religion, but everyone else is unfairly hostile to it so I concentrate on that one, and then people call me a fanatic.” It doesn’t seem likely. But I guess it’s possible.

Stop Adding Zeroes

Dylan Matthews writes a critique of effective altruism. There is much to challenge in it, and some has already been challenged by people like Ryan Carey. Perhaps I will go into it at more length later. But for now I want to discuss a specific argument of Matthews’. He writes – and I am editing liberally to keep it short, so be sure to read the whole thing:

Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers.

Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”

Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people. That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a “rounding error.”


These arguments give a false sense of statistical precision by slapping probability values on beliefs. But those probability values are literally just made up. Maybe giving $1,000 to the Machine Intelligence Research Institute will reduce the probability of AI killing us all by 0.00000000000000001. Or maybe it’ll only cut the odds by 0.00000000000000000000000000000000000000000000000000000000000000001. If the latter’s true, it’s not a smart donation; if you multiply the odds by 10^52, you’ve saved an expected 0.0000000000001 lives, which is pretty miserable. But if the former’s true, it’s a brilliant donation, and you’ve saved an expected 100,000,000,000,000,000,000,000,000,000,000,000 lives.

I don’t have any faith that we understand these risks with enough precision to tell if an AI risk charity can cut our odds of doom by 0.00000000000000001 or by only 0.00000000000000000000000000000000000000000000000000000000000000001. And yet for the argument to work, you need to be able to make those kinds of distinctions.

Matthews correctly notes that this argument – often called “Pascal’s Wager” or “Pascal’s Mugging” – is on very shaky philosophical ground. The AI risk movement generally agrees, and neither depends on it nor uses it very often. Nevertheless, this is what Matthews wants to discuss. So let’s discuss it.

His argument is that sure, it looks like fighting existential risk and saving 10^54 people is important. But that depends exactly how small the chance of your anti-x-risk plan working is. He gives two different possibilities which, if you count the zeroes, turn out to be 10^-18 and 10^-67. Then he asks: which one is it, 10^-18 or 10^-67? We just don’t know.

Well, actually, we do know. It’s probably not the 10^-67 one, because nothing is ever 10^-67 and you should never use that number.

Let me try to justify this.

Consider which of the following seems intuitively more likely:

First, that a well-meaning person donates $1000 to MIRI or FLI or FHI, this aids their research and lobbying efforts, and as a result they are successfully able to avert an unfriendly superintelligence.

Or second, that despite our best efforts, a research institute completes an unfriendly superintelligence. They are seconds away from running the program for the first time when, just as the lead researcher’s finger hovers over the ENTER key, a tornado roars into the laboratory. The researcher is sucked high into the air. There he is struck by a meteorite hurtling through the upper atmosphere, which knocks him onto the rooftop of a nearby building. He survives the landing, but unfortunately at precisely that moment the building is blown up by Al Qaeda. His charred corpse is flung into the street nearby. As the rubble settles, his face is covered by a stray sheet of newspaper; the headline reads 2016 PRESIDENTIAL ELECTION ENDS WITH TRUMP AND SANDERS IN PERFECT TIE. In small print near the bottom it also lists the winning Powerball numbers, which perfectly match those on a lottery ticket in the researcher’s pocket. Which is actually kind of funny, because he just won the same lottery last week.

Well, the per-second probability of getting sucked into the air by a tornado is 10^-12; that of being struck by a meteorite 10^-16; that of being blown up by a terrorist 10^-15. The chance of the next election being Sanders vs. Trump is 10^-4, and the chance of an election ending in an electoral tie about 10^-2. The chance of winning the Powerball is 10^-8 so winning it twice in a row is 10^-16. Chain all of those together, and you get 10^-65. On the other hand, Matthews thinks it’s perfectly reasonable to throw out numbers like 10^-67 when talking about the effect of x-risk donations. To take that number seriously is to assert that the second scenario is one hundred times more likely than the first!
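The chain multiplication above is easy to sanity-check. Here is a quick sketch in Python, using only the essay’s own order-of-magnitude figures (none of these probabilities are measured values; they are the rough ballpark estimates given in the scenario):

```python
# Chain the essay's rough per-event probabilities together and
# compare the result to Matthews' 10^-67 figure.
probs = {
    "sucked into the air by a tornado (per second)": 1e-12,
    "struck by a meteorite (per second)":            1e-16,
    "blown up by a terrorist (per second)":          1e-15,
    "Sanders vs. Trump election":                    1e-4,
    "election ends in an electoral tie":             1e-2,
    "winning the Powerball twice in a row":          1e-16,
}

combined = 1.0
for event, p in probs.items():
    combined *= p

# Exponents sum to -(12+16+15+4+2+16) = -65
print(f"combined probability ≈ {combined:.0e}")
print(f"ratio to 10^-67: about {combined / 1e-67:.0f}x")
```

So a 10^-67 estimate for a donation working really does claim it is about a hundred times less likely than the whole tornado-meteor-terrorist-lottery chain happening at once.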

In Made Up Statistics, I discuss how sometimes our System 1 intuitive reasoning and System 2 mathematical reasoning can act as useful checks on each other. A commenter described this as “sometimes it’s better to pull numbers out of your ass and use them to get an answer, than to pull an answer out of your ass.”

A good example of this is 80,000 Hours’ page on why people shouldn’t get too excited about medicine as an altruistic career (oops). They argue that the good a doctor does by treating illnesses is minimal compared to the good she can do by earning to give. Their reasoning goes like this: the average doctor saves 4 QALYs a year through medical interventions. The average doctor’s salary is $150,000 or so; if she donates 10% to charity, that’s $15,000. As per Givewell, that kind of money could save 300 QALYs per year. The value of the earning to give is so much higher than the value of the actual doctoring that you might as well skip the doctoring entirely and go into whatever earns you the most money.

Intuitively, people’s System 1s think “Doctor? That’s something where you’re saving lots of lives, so it must be a good altruistic career choice.” But then when you pull numbers out of your ass, it turns out not to be. Crucially, exactly which numbers you pull out of your ass doesn’t matter much as long as they’re remotely believable. 80,000 Hours tried their best to figure out how many QALYs doctors save per year, but this is obviously a really difficult question and for all we know they could be off by an order of magnitude. The point is, it doesn’t matter. They could be off by a factor of ten, twenty, even fifty, and it wouldn’t affect their argument. I’ve gone over their numbers with them and it’s really, really, really hard to remotely believably make the “number of QALYs saved per doctor” figure come out high enough to challenge the earning-to-give route. Sure, you’re pulling numbers out of your ass, but even your ass has some standards.
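To make that robustness point concrete, here is a minimal sketch of the same check, using the figures quoted above (the error multipliers themselves are arbitrary illustration, not anything 80,000 Hours published):

```python
# 80,000 Hours-style comparison, with the figures quoted above.
direct_qalys_per_year = 4      # QALYs the average doctor saves by doctoring
donation = 150_000 * 0.10      # 10% of an average doctor's salary = $15,000
donated_qalys_per_year = 300   # QALYs that donation buys via top charities

# Even if the direct-impact estimate is wrong by a large factor,
# the conclusion doesn't flip until you pass ~75x.
for error_factor in (1, 10, 20, 50):
    adjusted = direct_qalys_per_year * error_factor
    winner = "doctoring" if adjusted > donated_qalys_per_year else "donating"
    print(f"direct estimate x{error_factor}: "
          f"{adjusted} vs {donated_qalys_per_year} QALYs -> {winner} wins")
```

Even multiplying the doctoring estimate by fifty only gets you 200 QALYs against the donation’s 300, which is the sense in which the exact ass-numbers don’t matter.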

It’s the same with Matthews’ estimates about x-risk. He intuitively thinks that x-risk charities can’t be that great compared to fighting global poverty or whatever other good cause. He (very virtuously) decides to double-check that assumption with numbers, even if he has to make up the numbers himself. The problem is, he doesn’t have a very good feel for numbers of that size, so he thinks he can literally make up whatever numbers he wants, instead of doing something that we jokingly call “making up whatever number you want” but which in fact involves some sanity checks to make sure they’re remotely believable proxies for our intuitions. He thinks “I don’t expect x-risk charities to work very well, so what the heck, I might as well call that 10^-67″, whereas he should be thinking something like “10^-67 means about a hundred times less likely than my chance of getting tornado-meteor-terrorist-double-lottery-Trumped in any particular second, is that a remotely believable approximation to how unlikely I think existential risk is?”

Just as it is very hard to come up with a remotely believable number that spoils 80,000 Hours’ anti-doctor argument, so you have to really really stretch your own credulity to come up with numbers where Bostrom’s x-risk argument doesn’t work.

(some people argue that LW-style rationality is a bad idea, because you can’t really think with probabilities. I would argue that even if that’s true, there is at least a small role for rationality in avoiding being bamboozled by other people trying to think with probabilities and doing it wrong. This is a modest claim, but no more modest than Wittgenstein’s view of philosophy, which was that it was a useful thing to know in order to protect yourself from taking philosophers too seriously.)

But one more point. Suppose Matthews’ intuition is indeed that the chance of AI risk charities working out is precisely one hundred times less than his per-second chance of getting tornado-meteor-terrorist-double-lottery-Trumped. In that case, I offer him the following whatever-the-opposite-of-a-gift is: we can predict pretty precisely the yearly chance of a giant asteroid hitting our planet, it’s way more than 10^-67, and the whole x-risk argument applies to it just as well as to AI or anything else. What now?

Because this isn’t just about defending the particular proposition of AI. It’s a more general principle of staring into the darkness. If you try to be good, if you don’t let yourself fiddle with your statistical intuitions until they give you the results you want, sometimes you end up with weird or scary results.

Like that a person who wants to cure as much disease as possible would be better off becoming a hedge fund manager than a doctor.

Or that your charity dollar would be better sent off to sub-Saharan Africa to purchase something called “praziquantel” than given to the sad-looking man with the cardboard sign you see on the way to work.

Or that a person who wants to reduce suffering in the world should focus almost obsessively on chickens.

One of the founding beliefs of effective altruism is that when math tells you something weird, you at least consider trusting the math. If you’re allowed to just add on as many zeroes as it takes to justify your original intuition, you miss out on the entire movement.

Everyone has their own idea of what trusting the math entails and how far they want to go with it. Some people go further than I do. Other people go less far. But anybody who makes a good-faith effort to trust it even a little is, in my opinion, an acceptable ally worth including in the effective altruist tent. They have abandoned a nice safe chance to donate to the local symphony and feel good about themselves, in favor of a life of feeling constantly uncomfortable with their decisions, looking extremely silly to normal people, and having Dylan Matthews write articles in Vox calling them “white male autistic nerds”.

Matthews is firmly underneath the effective-altruist tent. He writes that he’s worried that a focus on existential risk will detract from the causes he really cares about, like animal rights. He gets very, very excited about animal rights, and in his work for Vox he’s done some incredible work promoting them. Good! I also donate to animal rights charities and I think we need far more people who do that.

And yet, the same arguments he deploys against existential risk could be leveled against him also – “how can you worry about chickens when there are millions of families trying to get by on minimum wage? Effective altruists need to stop talking about animals if they ever want to attract anybody besides white males into the movement.” What then?

Malcolm Muggeridge describes a vision he once had, of everyone in the world riding together on a giant train toward realms unknown. Each person wants to get off at their own stop, but when the train comes to their station, the engineer speeds right by. All the other passengers laugh and hoot and sing the praises of the engineer, because this means the train will get to their own stations faster. But of course each one finds that when the train comes to their station, why, it speeds past that one too, and they are left to rage impotently at the unfairness.

And I worry that Matthews is urging us to shoot past the “existential risk” station in order to get to the “animal rights” station a little faster, without reflecting on the likely consequences.

This certainly isn’t to say we all need to get off at the first station. I myself am very interested in existential risk, but I give less than a third of my donations to x-risk related charities (no, I can’t justify this, it’s a sanity-preserving exception). I respect those who give more. I also respect those who give less. Existential risk isn’t the most useful public face for effective altruism – everyone including Eliezer Yudkowsky agrees about that. But at least allowing people interested in x-risk into the tent and treating them respectfully seems like an inescapable consequence of the focus on reason and calculation that started effective altruism in the first place.

Book Review: Chronicles Of Wasted Time


One of my posts on reactionaries provoked a very irregular email conversation with Mencius Moldbug, in which his responses to a good number of my objections were along the lines of “I think you’ll find that will make much more sense if you read this 18th century Italian primer on diplomacy” or “The best way to figure that out is to read this 400 page testament by a Prussian military officer.” Finally I asked him to suggest the one book he thought would be most interesting to me, and he chose Chronicles of Wasted Time, the autobiography of Malcolm Muggeridge.

It was a good choice, and not just because its title appropriately described my expectations about reading 500-page books on the recommendation of Mencius Moldbug. Muggeridge is a clear reactionary, but one with the personal and historical credentials to pull it off with the utmost class and credibility.

He describes his birth in 1903 to a family of committed British socialists. Their heroes were Karl Marx, George Bernard Shaw, and Fabian leaders Sidney and Beatrice Webb. These last two I had only the slightest familiarity with, but Muggeridge paints a picture of them as the progressive titans of his day, boasting a combination of Chomsky’s intellectual leadership with the Clintons’ network and political acumen. Throughout Muggeridge’s youth, his family would host meetings, sing socialist songs, run for various minor offices on the socialist ticket, and exchange correspondence with intellectual worthies. They even flirt with, though never quite join, an experimental commune being set up in their area, about which Muggeridge has the best stories:

The land was cheap in those days, and they acquired it by purchase; then, to demonstrate their abhorrence of the institution of property, ceremonially burnt the title deeds. It must have been a touching scene – the bonfire, the documents consigned to the flames, their exalted sentiments. Unfortunately, a neighboring farmer heard of their noble gesture and began to encroach on their land. To have resorted to the police, even if it had been practicable, was unthinkable. So after much deliberation, they decided to use physical force to expel the intruder; which they did on the basis of a theory of detached action, whereby it is permissible to infringe a principle for the purpose of a single isolated act without thereby invalidating it. The intruding farmer was, in fact, thrown over the hedge in the presence of the assembled Colonists. There were many such tragi-comic incidents in the years that followed; as well as quarrels, departures, jealousies, betrayals, and domestic upsets. In the end, the Colonists found it necessary to reestablish their title to the land by means of squatters’ rights, and then proceeded to bicker amongst themselves as to who should have which portion.

But he and his family are convinced that all of this is just a momentary hiccup on the road to Glorious Progress. Indeed, his teenage years are marked by a burning excitement at the Russian Revolution:

We called the Metropolitan Mounted Police ‘Cossacks’, rejoiced over early Soviet films like ‘Mother’ and ‘The Battleship Potemkin’, spoke of workers’ control and cadres and agitprop, and I personally decided inwardly that sooner or later I would go to Russia and throw in my lot with the new and better way of life that, I was confident, was coming to pass there.

Against this enthusiasm, he had only a personal tendency which he describes as a deep-set conviction:

…that I was born into a dying, if not already dead, civilization, whose literature was part of the general decomposition; a heap of rubble scavenged by scrawny Eng Lit vultures, and echoing with the hyena cries of Freudians looking for their Marx and Marxists looking for their Freud…a Gaderene descent down which we all must slide, finishing up in the same slough.

By the same token, a strange certainty has possessed me, almost since I can remember, that the Lord Mayor riding in his coach, the Lord Chancellor seated on his Woolsack, Honorable and Right Honorable Members facing one another across the floor of the House of Commons, were somehow the end of a line. That soon there would be no more Lord Mayors, Lord Chancellors, Honorable and Right Honorable Members, the Mother of Parliaments having reached her time of life or menopause, and become incapable of any further procreation…

Doubtless other glories lie ahead. Bigger and better capsules carried to the moon; down in the test tube something stirs; ‘I think, therefore you’re not’ says the computer. We all know, though, in our hearts, that our old homestead is falling down; with death-watch beetles in the rafters, and dry rot in the cellar, and unruly tenants whose only concern is to pull the place to pieces.

This feeling – that everything around him was in a state of permanent decay – was not so far-fetched given that he spent much of his early adulthood in the far-flung territories of the crumbling British Empire. But it soon becomes clear that it’s more than a natural reaction to the political realities of the time. He describes again and again looking on something apparently healthy enough and being overwhelmed with a feeling of impending sickness and decay. He describes T.S. Eliot as “a death-rattle in the throat of a dying civilization”, Shaw as “too encased in his own narcissism, too remote from real life to do more than grimace at it through a long-distance telescope”, and the great reformers and abolitionists of the age as:

…solemn funeral mutes in the long obsequies of western civilization; as they fell by the way, others coming forward to take their places. Now the time has nearly come for the coffin to be actually interred. Then at last their occupation will be gone forever.

I sometimes have patients with very severe depression who tell me that everything they look at is infested by maggots. They won’t eat, because the food is infested with maggots. They won’t hug their children, because their children are infested with maggots. Sleep disgusts them, because the bed is infested with maggots. Et cetera.

And other times, when they have a little more insight, they’ll say something like “Okay, my food isn’t literally infested by maggots, but I get this feeling from it, this overwhelming feeling, such that the feeling would only make sense if the food was infested by maggots. I know deep down it’s not infested by maggots, but it has some metaphysical quality which only things infested by maggots have.”

Poor Malcolm Muggeridge feels this way about everything. One of the most poignant episodes in the book takes place on the worst night of the London Blitz, when Muggeridge runs around the burning city, almost euphoric, because finally his inner conviction that everything is on fire and collapsing is reflected in everything really being on fire and collapsing, and nobody can pat his head and patronizingly tell him that it isn’t:

I remember particularly Regent’s Park on a moonlit night, full of the fragrance of the rose gardens; the Nash Terraces, perfectly blacked-out, not a sign of a light anywhere, white stately shapes waiting to be toppled over – as they duly were, crumbling into rubble like melting snow…I felt a terrible joy and exaltation at the sight and smell and taste and sound of all of this destruction; at the lurid sky, the pall of smoke, the faces of bystanders wildly lit in the flames. Goebbels, in one of his broadcasts, accused us of glorying obscenely in London’s demolition. He had a point, but what he failed to understand was that we had destroyed our city already before the Luftwaffe delivered their bombs; what was burning was no more than the dry, residual shell.

The only things that seem to give him any kind of brief reprieve from the maggots are church services, classic literature, quiet domestic life with his wife and 2.4 children, and rural country fields.

And he is convinced, absolutely convinced, that he should be a socialist and go move to the USSR.

This goes approximately as well as you would expect.

After graduating college, which he dislikes because maggots, he gets a couple of jobs at various far-flung British Empire outposts, which he hates. Then, somewhat by coincidence, he ends up in journalism.

His reaction to journalism is an increasing terror that this might be his calling. He is very good at it, takes to it like an old veteran almost immediately, feels in some strange way that he has come home – but the entire enterprise fills him with loathing. He watches in horror how easily the words flow on to the page when his puppet-masters tell him to argue for a particular cause, how fluidly he takes to idioms like “It is surely incumbent upon all of us to…” and “there can be no one here present who does not…”. He writes:

So I began, and the words seemed to come of themselves; like lying as a child, or as a faithless lover; words pouring out of one in a circumstantially false explanation of some suspicious circumstance. The more glib, the greater the guilt…it is painful to me now to reflect the ease with which I got into the way of using this non-language; these drooling non-sentences conveying non-thoughts, propounding non-fears and offering non-hopes. Words are as beautiful as love, and as easily betrayed. I am more penitent for my false words – for the most part, mercifully lost forever in the Media’s great slag-heaps – than for false deeds.

But Malcolm Muggeridge isn’t going to take this lying down! Malcolm Muggeridge has a plan! Malcolm Muggeridge is going to escape this duplicitous charade of lies and petty propaganda. Malcolm Muggeridge is going to move to Stalin’s USSR.

So he does.

He gets a job as The Guardian‘s Russia correspondent and sets off for Moscow with a host of other British intellectuals, heading for what all of them expect is the Promised Land. The mood on their ship is electric; he describes them all singing, sure that they are leaving behind this wretched bourgeois world for the Golden Future:

On their way to the USSR they were in a festive mood; like a cup-tie party on their way to a match, equipped with rattles, coloured scarves and favors. Each of them harboring in his mind some special hope; of meeting Stalin, or alternatively, of falling in with a Komsomolka, sparkling eyed, red scarf and jet black hair, dancing the carmagnole, above all, with very enlightened views on sex, and free and easy ways…oh, to be in Russia, now that Stalin’s there!

His excitement dissipates relatively early; he finds that the Soviet journalistic world fails to live up to his expectations:

Being a correspondent in Moscow, I found, was, in itself, easy enough. The Soviet press was the only source of news; nothing happened or was said until it was reported in the newspapers. So all I had to do was go through the papers, pick out any item that might be interesting to readers of the Guardian, dish it up in a suitable form, get it passed by the censor at the Press Department, and hand it in at the telegraph office for dispatch. One might, if in a conscientious mood, embellish the item a little…sow in a little local colour, blow it up a little, or render it down a little according to the exigencies of the new situation. The original item itself was almost certainly untrue or grotesquely distorted. One’s own deviations, therefore, seemed to matter little, only amounting to further falsifying what was already false.

This bizarre fantasy was very costly and elaborate and earnestly promoted. Something gets published in Pravda; say, that the Soviet Union has a bumper wheat harvest – so many poods per hectare. There is no means of checking; the Press Department men don’t know, and anyone who does is far, far removed from the attentions of foreign journalists. Soviet statistics have always been almost entirely fanciful, though not the less seriously regarded for that. When the Germans occupied Kiev in the 1939-45 war they got hold of a master Five Year Plan, showing what had really been produced and where. Needless to say, it was quite different from the published figures. This in no way affected credulity about such figures subsequently, as put out in Russia, or even in China.

Hey man, don’t knock China, they’re doing great! Their GDP rose 7% this year! It must be true! The Guardian tells us so!

But getting back to the story…although it is clear to him that the Soviet economy is struggling, every dispatch they are given to send home declares that things are better than ever, that the Workers’ Paradise is even more paradisiacal than previously believed, that the evidence is in and Stalinism is the winner. It doesn’t matter what he makes of this, because anything he writes which deviates from the script is rejected by the censors, who ban him from sending it home. He is reduced to sending secret messages at the bottoms of people’s suitcases, only to find to his horror that even when they successfully reach the Guardian offices back in Britain, his bosses have no interest in publishing them because they offend the prejudices of its progressive readership. Finally, he finds himself a part of the elite fraternity of western journalists on the Soviet beat, who maintain their morale by one-upping each other in how cynical and patronizing they can be towards their Russian hosts and their credulous readers back home:

We used to run a little contest among ourselves to see who could produce the most striking example of credulity among this fine flower of our western intelligentsia. Persuading church dignitaries to feel at home in an anti-God museum was too easy to count. So was taking lawyers into the people’s courts. I got an honourable mention by persuading Lord Marley that the queueing at food shops was permitted by the authorities because it provided a means of inducing the workers to take a rest when otherwise their zeal for completing the five-year plan in record time was such that they would keep at it all the time, but no marks for floating a story that Soviet citizens were being asked to send in human hair – any sort – for making of felt boots. It seemed that this had actually happened.

And he remembers the contempt of these grizzled veterans for the steady stream of Western tourists, intellectuals, and general Stalin fanboys who arrived to gawk over the Glorious New Civilization:

I have never forgotten these visitors, or ceased to marvel at them, at how they have gone on from strength to strength, continuing to lighten our darkness, and to guide, counsel and instruct us. They are unquestionably one of the wonders of the age, and I shall treasure till I die as a blessed memory the spectacle of them travelling with radiant optimism through a famished countryside, wandering in happy bands about squalid, over-crowded towns, listening with unshakeable faith to the fatuous patter of carefully trained and indoctrinated guides, repeating like schoolchildren a multiplication table, the bogus statistics and mindless slogans endlessly intoned on them. There, I would think, an earnest office-holder in some local branch of the League of Nations Union, there a godly Quaker who had once had tea with Gandhi, there an inveigher against the Means Test and the Blasphemy Laws, there a staunch upholder of free speech and human rights, there an indomitable preventer of cruelty to animals, there scarred and worthy veterans of a hundred battles for truth, freedom, and justice – all, all chanting the praises of Stalin and his Dictatorship of the Proletariat. It was as though a vegetarian society had come out with a passionate plea for cannibalism, or Hitler had been nominated posthumously for the Nobel Peace Prize.

His final break with the rest of the enlightened progressive world comes when he decides to do something that perhaps no other journalist in the entire Soviet Union had dared – to go off the reservation, so to speak, leave Moscow undercover, and see if he can actually get into the regions where rumors say some kind of famine might be happening. The plan goes without a hitch, he passes himself off as a generic middle-class Soviet, and he ends up in Ukraine right in the middle of Stalin’s Great Famine. He describes the scene – famished skeletons begging for crumbs, secret police herding entire towns into railway cars never to be seen again. At great risk to himself, he smuggles notes about the genocide out of the country, only to be met – once again – with total lack of interest. Guardian readers don’t look at the newspapers to hear bad things about the Soviet Union! Guardian readers want to hear about how the Glorious Future is already on its way! He is quickly sidelined in favor of the true stars of Soviet journalism, people like Walter Duranty, the New York Times‘s Russia correspondent, who wrote story after story about how prosperous and happy and well-fed the Soviets were under Stalin, and who later won the Pulitzer Prize for his troubles.

Muggeridge, on the other hand, penurious from lack of interest in his stories, fearing for his safety from the Soviet government, and generally disgusted with everything – even more so than usual for a world infested with maggots – decides to get the hell out of Dodge. He’s had enough of Russia, enough of Communism, enough of that entire part of the world. He’s going somewhere safe, somewhere decent. He’s going somewhere that will renew his crumbling faith in humanity. He’s going to Nazi Germany right as the anti-Jewish pogroms are starting.

Well, to make a long story short, this doesn’t restore his faith in humanity. He hangs out in Berlin for a while, sending his pieces on the Russian famine to all the newspapers he knows, watching more and more rejections come in each day, earning the ire of all of his leftist friends for apparently deserting the cause and turning traitor. Finally, he tells his boss:

“From the way you’ve cut my messages about the Metro-Vickers affair, I realize that you don’t want to know what’s going on in Russia, or let your readers know. If it had been an oppressed minority, or subject people valiantly struggling to be free, that would have been another matter. Then any amount of outspokenness, any amount of honesty.”

I went on to describe the scene in Berlin, and the Nazis beating up Jewish shops, and everyone with his story of murder and folly, and concluded:

“It’s silly to say the Brown Terror is worse than the Red Terror. They’re both horrible. They’re both Terrors. I watched the Nazis march along Unter den Linden and realized – of course, they’re Komsomols, the same people, the same faces. It’s the same show.”

David Ayerst quotes this correspondence in his book on The Guardian, and says it read “like a letter to end all communication”. So it did; I was finished with moderate men of all shades of opinion forever more.

Leaving Nazi Germany for neutral Switzerland, he says he had a pretty good idea even at the time how everything was going to end. And I believe him. By temperament, he expects everything to end in horror and madness and total collapse of civilization, so props to him for choosing the proper time and place for his temperament to be exactly correct. He writes:

All this likewise indubitably belonged to history, and would have to be historically assessed; like the Murder of the Innocents, or the Black Death, or the Battle of Paschendaele. But there was something else; a monumental death-wish, an immense destructive force loosed in the world which was going to sweep over everything and everyone, laying them flat, burning, killing, obliterating, until nothing was left. Those German agronomes in their green uniform suits with feathers in their hats – they had their part to play. So had the paunchy Brown-Shirts, and the matronly blonde maidens painting swastikas on the windows of Jewish shops. So had the credulous armies of the just, listening open-mouthed to Intourist patter, or seeking reassurance from a boozy sandalled Wicksteed. Wise old Shaw, high-minded old Barbusse, the venerable Webbs, Gide the pure in heart and Picasso the impure, down to poor little teachers, crazed clergymen and millionaires, drivelling dons and very special correspondents like Duranty, all resolved, come what might, to believe anything, however preposterous, to overlook anything, however villainous, to approve anything, however obscurantist and brutally authoritarian, in order to be able to preserve intact the confident expectation that one of the most thorough-going, ruthless, and bloody tyrannies ever to exist on Earth could be relied on to champion human freedom, the brotherhood of man, and all the other good liberal causes to which they had dedicated their lives. All resolved, in other words, to abolish themselves and their world, the rest of us with it. Nor have I from that time ever had the faintest expectation that, in earthly terms, anything could be salvaged; that any earthly battle could be won or earthly solution found. It has all just been sleep-walking to the end of the night.


Muggeridge’s description of World War II is actually super hilarious.

I was not expecting this. When you take one of the darkest and most pessimistic writers of the twentieth century and put him in the middle of one of the twentieth century’s greatest horrors, you might expect the result to have at least a touch of grimness about it, or at least not to leave you rolling on the floor laughing. You would be wrong.

Muggeridge, inspired by some force even he did not understand, decided to enlist in the British military when the war broke out. He’s a bit too old by this point to be a front-line infantryman, and his intellect, connections, and experience with foreign countries catch the eye of Military Intelligence. They recruit him as a spy. His first job is counter-intelligence – hanging around in the army, making sure that there aren’t any secret German spies there. Well, there either aren’t any secret German spies, or else they’re at least not saying that they’re secret German spies, so this task turns out to be kind of a combination of boring, useless, and hilarious. He describes a typical day:

I find it difficult to recall what regular duties I had, if any…Our section was supposed to be responsible for securing the Headquarters from the incursions of enemy agents who might pry out its secrets or subvert its personnel. This gave us a free hand to do almost anything and go almost anywhere. If we went drinking in pubs, it was to keep a look-out for suspicious characters; if we picked up girls, it was to probe their intentions in frequenting the locality.

A fellow-officer told me of how, on a pub-crawl, ostensibly a security reconnaissance, he got drunk, and, as was his way when in such a condition, pretended to be a foreigner, using strange gestures and speaking with an accent. The next day, badly hung over, he was sent a report of the movements of a suspicious foreigner, and told to check up on them. Tracing the suspect’s movements from pub to pub, it slowly dawned on him he was following himself the night before. When he told me of his adventure, to comfort him I said that it was what we were all doing all the time – keeping ourselves under close surveillance. This was what security was all about.

In a similar vein, another FS officer, idly thumbing over the Security List – a top-secret document containing the names of all subjects who were to be at once apprehended if they tried to get into or out of the country – found he was in it.

Graham Greene was a very famous early 20th century author. Like pretty much every other famous early 20th century author, he was a good friend of Malcolm Muggeridge’s. Greene was working in another branch of Intelligence at the time, and they needed someone for a secret mission, and Greene mooted Muggeridge’s name. He found himself plucked out of his cushy job drinking at pubs and tracking himself, and sent to MI6’s secret spy school at Bletchley Park, where he was taught various hilariously impractical skills like how to make invisible ink out of bird poop. He was then sent on a secret mission to Mozambique, so that just in case anything relevant to World War II were to happen in Mozambique, His Majesty’s Government would have a secret agent in place.

The Mozambique chapters were among the funniest of the entire book. The Germans and Italians, inspired by the same principle, had also sent agents to Mozambique. It was not at all hard to figure out who they were, nor was Muggeridge’s identity particularly hard to figure out. There was only one nice hotel in Mozambique, so Muggeridge, the German spy, and the Italian spy all got rooms there and spent most of the time glaring at each other during communal dinners, or lying on the beach an appropriate distance away from one another, keeping watch.

Sometimes they would engage in hilarious secret plots against each other. Muggeridge, after chancing into a friendship with a member of Mozambique’s small German community, arranged for his friend to tell the German spy that he was only faking friendship with Muggeridge so he could steal his secrets for the good of The Reich. He then proceeded to “rob” Muggeridge’s house (with Muggeridge’s gleeful consent), producing for his German “master” a trove of documents which, when decoded, suggested that the Italian spy was secretly working for the British. This caused a big fight between the German spy and the Italian spy, which given that there wasn’t really much to spy on in Mozambique, was considered a fantastic success for the British cause and raised Muggeridge’s standing as some kind of intelligence prodigy.

Later in the war, Mozambique actually became sort of relevant as troop convoys started sailing by. Muggeridge bribed local officials to keep a watch out, and ended up foiling a very real German plot to do some sort of vague thing involving ships – as a result, when the war started winding down to the point where maintaining a presence in Mozambique was no longer viewed as entirely necessary, he came home and was promoted into the inner circles of intelligence. His new position was under Kim Philby, the head of the Department Of Counter-Intelligence Against The Soviet Union, who turned out to be a really bad choice for this position given that he, LIKE EVERY OTHER PROGRESSIVE INTELLECTUAL IN THE ENTIRE COUNTRY OF BRITAIN, was a secret Soviet spy. But at the time he seemed okay enough, and he sent Muggeridge to France to aid in the Liberation there.

We like to think of the Liberation of France as a nice, happy time, but for Muggeridge it was basically the time when an entire country worth of very angry Frenchmen massacred, pogrommed, lynched, or otherwise descended upon anyone accused of collaborating with the German occupation. Unsurprisingly, everybody turned out to think their personal and political rivals had collaborated with the German occupation, so it was basically the atmosphere of a 17th century Massachusetts witch hunt, only with less restraint.

Muggeridge’s job was, as usual, darkly hilarious – actual spies for the French and British governments usually acted all cooperative toward the German occupation to keep their cover and get a chance of infiltrating enemy ranks; as a result, they were usually First Up Against The Wall When The Liberation Came. Sure, they said “I was just a spy doing it as part of a secret plan,” but of course everybody said that. So Muggeridge had to rush from prison to prison, trying to convince mobs of angry Frenchmen not to execute the people who had just been most instrumental in saving them.

His spy career ended with what seems like maybe the most typical incident in the entire book – somehow P. G. Wodehouse had wandered into Nazi Germany and been stuck in a prison camp there. Then he had wandered out into France, gotten marked as a Collaborator, and was now in serious fear for his life. The British Secret Service picked Muggeridge as their Official Attache For P. G. Wodehouse Related Affairs, showing such exceptional genius in choosing the right man for the job that you would think they would have been able to get AT LEAST ONE ANTI-SOVIET COUNTERINTELLIGENCE AGENT WHO WASN’T A SECRET SOVIET SPY. Anyway, Muggeridge and Wodehouse wander around the cratered, mob-ruled French landscape, having a series of very Wodehousian adventures, until finally the war ends, Wodehouse is deposited safely in the United States, and Muggeridge is able to return to Britain.

The book ends with the funeral of Sidney Webb, the early socialist hero his family idolized, who died just after World War II. Muggeridge is invited to the event because his wife is a distant cousin of the Webb family; he has to hold his nose throughout. At the time of his death, Webb is more beloved than ever by a grateful populace. His and his wife’s great works, Soviet Communism: A New Civilisation and The Truth About Soviet Russia, have become Bibles of the Left and part of Stalin’s cult of personality. Their opponents, the sorts who say that maybe Stalin isn’t the reincarnation of Christ, have been summarily dispatched – Muggeridge describes one of his friends from the journalism world, a reporter universally respected for helping expose Nazi atrocities, who made the mistake of trying to do the same with Soviet atrocities:

When Voigt turned the furious indignation with which he had lambasted the Nazi terror on to Stalin’s, his former liberal friends and associates discovered in him a Nazi sympathizer. Another liberal newspaper, the News Chronicle, ran an article about [his publication] headlined HITLER’S FAVORITE READING, with pictures of the Fuhrer and Voigt looking amicably across at one another.

In other words, Webb dies at the height of his career, his lies unexposed. George Bernard Shaw writes a letter to the newspapers suggesting that a man of Webb’s standing deserves a national hero’s funeral, everyone agrees, and he and his wife are interred in Westminster Abbey before a crowd of dignitaries including the Prime Minister (despite their own atheism and specific demands not to be placed in a church).

Muggeridge watches the whole sordid spectacle – the Dean of the Cathedral singing the praises of an unrepentant atheist “whose crowning achievement had been to commend to his fellow-countrymen and the whole world as a new civilization a system of servitude more far-reaching and comprehensive than any hitherto known” – and ends his book very abruptly, saying only that “Another way has to be found and explored.”


And then he dies before writing any more volumes of his autobiography, let alone telling us what the other way is.

He quotes very approvingly, as the heart of his philosophy, a passage by his friend Hugh Kingsmill:

What is divine in man is elusive and impalpable, and he is easily tempted to embody it in a concrete form – a church, a country, a social system, a leader – so that he may realize it with less effort and serve it with more profit. Yet the attempt to externalize the kingdom of heaven in a temporal shape must end in disaster. It cannot be created by charters or constitutions nor established by arms. Those who seek for it alone will reach it together, and those who seek it in company will perish by themselves.

And indeed, he writes a lot about how the whole problem started when people started being utopian and getting it into their heads to fix things on earth, rather than seek for “treasure in heaven”.

Some atheists I know write a lot about how religious people think you should hate the world because it’s awful and only some future world, ie Heaven, can be any good. Some religious people I know write a lot about how that’s total poppycock. Certainly G. K. Chesterton would have said something about how the world being sinful and full of flaws is not a reason to hate it, but precisely why we should love it, and Leah Libresco would say something about how hating the world is Gnosticism and Gnosticism is a heresy.

But I think Muggeridge might be pretty close to the atheist straw man on this point, with the key exception that religion isn’t what made him hate the world. He started off hating the world, and religion and mysticism offered him something not to hate, some way to say “Okay, but there’s some divinity buried in all this mess”. He is brilliant, he is compassionate, he is a great writer, it’s impossible to read his autobiography without loving him – but that he hates the world is hard to deny. I write sometimes about how beliefs that we consider abominable can sometimes be therapeutic mental crutches for people with the right cast of mind, and Muggeridge certainly found the idea of the world as a vale of suffering that would soon melt away to be oddly comforting in times of distress.

On the other hand, I’m not sure what to make of his opposition to trying to fix things here on Earth. He clearly hated Stalinism. When he hated Stalinism, he reacted by trying to make there be less Stalinism, which seems like a very reasonable thing to do. But the Communists hated capitalism. They reacted by trying to make there be less capitalism. Other than Muggeridge being right about the object-level issue and the Communists being wrong, it’s hard to see what the difference in principle is between them. The best I can do – and I worry I’m doing great violence to his intellectual uniqueness by rounding him off to my own ways of thinking – is to view him as suggesting some sort of precautionary principle, like that before you make a change you should be sure it’s something that has worked before (like non-Stalinism) and not a totally new idea (like Stalinism). But I am pretty sure if I suggested that to him he would roll his eyes and tell me that I’m such a modern and I don’t get it at all.

The one thing I can be really sure of is that Muggeridge doesn’t want us to get stuck again in the same position we were in during the 30s and 40s where we totally ignored Stalin’s crimes due to our own political biases. Okay. I respect that. It was really eye-opening seeing exactly how brainwashed the entire British, European, and American Left was, and the whole situation gave me a lot more understanding of how overwhelmingly the Question of Communism dominated intellectual and political life in the first half of the century.

I was born in the 80s, at the very tail end of the Cold War, when we’d all had the decency to put all the Communists in one country and all the capitalists in another and make them express their differences like civilized men – ie by pointing thousands of hair-trigger nuclear missiles at one another. In the early days of Communism, we just didn’t know. Would Russia go Communist? Would Germany? Would France? Would everywhere? Muggeridge talks about how one of Britain’s main concerns in post-Liberation France was that the entire country would just move en masse to Communism as soon as the Nazis were out, which somehow or other mysteriously failed to happen EVEN THOUGH EVERY SINGLE ONE OF THE WESTERN AGENTS SENT TO PREVENT THAT WAS SECRETLY WORKING FOR THE SOVIETS.

And then the Cold War started, and this very gradually settled down to an equilibrium where okay, a lot of the Western intelligentsia stayed Communist, but at least they had the decency to realize that it was unpopular and the Revolution probably wasn’t literally going to happen next week.

By coincidence, just last week I read about the sad death of historian Robert Conquest, the man who was able to succeed where Muggeridge failed and drag Britain and America kicking and screaming into admitting Stalin wasn’t such a great guy. Conquest had one great advantage over Muggeridge, which was that he wrote in 1968 when, far from being our allies in a world war, the Soviets were technically our Cold War enemies and we were sort of okay with hearing bad things about them. But even then, he faced an extraordinary uphill battle. The most famous legend about him involved the second edition of his book, which came out right around the time the Soviet Union fell and its indisputable records of Stalin’s famines and purges became public knowledge. He supposedly asked to have the new version titled I TOLD YOU SO, YOU FUCKING FOOLS.

This part of our intellectual history is kind of forgotten. Who hears about Sidney and Beatrice Webb nowadays? Who hears about Walter Duranty? Yet these people during their times were absolute titans, “thought leaders” in the modern terminology – as per Muggeridge, Duranty “came to be accepted as the great Russian expert in America, and played a major part in shaping President Roosevelt’s policies vis-a-vis the USSR”. We hear a lot about our moral failures in terms of not stopping the Holocaust, but our quarter-century complicity with and even adulation of Stalinism seems like one of those facts that just fell by the wayside.

A lot of people think that I’m too easy on crackpots, or too fond of contrarians, or too interested in protecting witches, or whatever. But hearing all of these stories about the universal progressive Western adulation of Stalin is really scary. It’s way too easy for the darkest and most primal parts of my brain to map neatly onto the modern modalities of rejecting and punishing disagreement. “Really? You think this random journalist who isn’t even a trained Kremlinologist knows more than expert consensus?” “Killing millions of people, oh God, you’re one of those conspiracy losers.” “It’s obvious you’re just a privileged white guy who’s already decided to believe anything that reflects negatively on Slavs and foreigners.” “Although we respect free speech, that doesn’t extend to pro-Nazi propaganda and worker’s-paradise denialism.” Part of my respect for contrarians is that contrarianism is this incredibly fragile and precious art which needs to be kept alive for the times it is needed – rare times, times that hopefully won’t come up in our lifetimes, but times that, when they do come, desperately need a core of people willing to stand up to the establishment. Cultivating contrarianism is a lot like owning a gun – you get a heck of a lot of opportunities to shoot yourself in the foot, but also very occasionally one opportunity to save your life.

But then, on the other hand, here’s Muggeridge again:

Solzhenitsyn has provided the perfect parable on this theme with his description of Mrs. Roosevelt’s conducted visit to a labor camp where he was doing time. The estimable lady, who spawned the moral platitudes of the contemporary liberal wisdom as effortlessly and plenteously as the most prolific salmon, was easily persuaded that the camp in question was a humanely conducted institution for curing the criminally inclined. A truly wicked woman would have been ashamed to be so callous and so gullible.

Really? Gullible how? I’m sure the Soviets were moderately competent in making sure Roosevelt didn’t see anything too untoward. So what was she supposed to do?

I think of those people who say the US government is setting up FEMA internment camps as we speak to imprison dissenters against the New World Order. They provide some things that look sort of like evidence – photos (which turn out to be of random prisons or, in one case, an Amtrak station), documents (which turn out to be out-of-context references to setting up FEMA refugee camps for people displaced by disasters), et cetera. The people talking about this are total loons.

But Type 1 errors trade off against Type 2 errors. Make absolutely sure you’re the sort of person who never misses a Stalinist gulag, and you become the type of person who’s easy prey for the FEMA internment camp theory. Make absolutely sure you don’t believe in FEMA internment camps, and you’re liable to miss a Stalinist gulag as soon as the Soviet government gets Duranty to print “Oh, don’t worry, that’s just an Amtrak station”. Use the heuristic of “just trust expert consensus, experts always know what they’re talking about”, and you are now one of the tens of thousands of grateful readers who helped make Sidney and Beatrice Webb’s Soviet Communism: A New Civilisation into a best-seller.

What I’m saying is – there is no royal road. This is why I think learning rationality and the art of sifting through evidence is so important.

As for Muggeridge? I’m not sure he has much to teach there. Yes, he deserves the thanks of a grateful civilization for being a lone voice in the wilderness warning us about Stalin. But after that, as per his Wikipedia page, he was a lone voice in the wilderness warning us about contraception. After that, he became a lone voice in the wilderness warning us about marijuana. After that, he became a lone voice in the wilderness warning us about blasphemy in The Life Of Brian.

I am glad there are all types of people in the world. I am glad that there are crotchety, contrarian, cynical old reporters who constantly feel like everything is hurtling off the precipice into Hell, because when things are actually hurtling off the precipice into Hell, these people are the first to notice. In the same way, I am glad that there are dedicated survivalists who stockpile canned food in underground shelters in case of the nuclear apocalypse, because if there is ever an actual nuclear apocalypse, these people will survive and rebuild the human race.

But I am not digging a bomb shelter myself, and I am pretty sure I cannot bring myself to be quite as cynical as Malcolm Muggeridge.