Seattle Meetup This Sunday

Why: Because I’m in Seattle this weekend

Where: 5238 11th Ave NE, Seattle WA (private residence). If there are too many people we’ll figure something out.

When: 4 PM on Sunday 12/23

Who: Anyone who wants. Please feel free to come even if you feel awkward about it, even if you’re not “the typical SSC reader”, even if you’re worried people won’t like you, etc. People who fit those descriptions who decided to come to previous meetups have mostly enjoyed them.


Refactoring: Culture As Branch Of Government

Ribbonfarm likes to talk about refactoring, a conceptual change in how you see the world. I’m not totally sure I understand it, but I think it means things like memetics – where you go from the usual model of people deciding what ideas they want, to a weird and inside-out (but not objectively wrong) model of ideas competing to colonize people.

Here is a refactoring I think about a lot: imagine a world where people considered culture the fourth branch of government. Imagine that civics textbook writers taught high school students that the US government had four branches: executive, legislative, judicial, and cultural.

I think about this because I have a bias to ignore anything that isn’t nailed down and explicit. Culture isn’t nailed down. But if it were in the Constitution in nice calligraphy right beside the Presidency and the Supreme Court, why, then it would be as explicit as it gets.

Like many other people, I was hopeful that nation-building Iraq (or Afghanistan, or…) would quickly turn it into a liberal democracy (in my defense, I was eighteen at the time). Like many other people, I was disappointed and confused when it didn’t. The people in the world that considers culture the fourth branch of government weren’t confused. Bush forgot to nation-build an entire branch of government. If he’d given Iraq a western-style Supreme Court, marble facade and all, but left their executive and legislature exactly how they were before, that would be a recipe for conflict, confusion, and eventually nothing getting done. So why should westernizing their executive, legislature, and courts – but not their culture – work any better?

The world that considers culture the fourth branch of government doesn’t get all confused describing hunter-gatherer bands or peasant villages as “primitive communism” or “anarchism” or “rule by elders” or things like that. Those people’s governments have a cultural branch but not much else. Why should we be surprised? Medieval Iceland had only legislative and judicial branches; medieval Somalia only had a judiciary; some dictatorships run off just an executive.

Each branch of government enforces rules in its own way. The legislature passes laws. The executive makes executive orders. The judiciary rules on cases. And the culture sets norms. In our hypothetical world, true libertarians are people who want less of all of these. There are people who want less of the first three branches but want to keep strong cultural norms about what is or isn’t acceptable – think Lew Rockwell and other paleoconservatives who hope that the retreat of central government will create strong church-based communities of virtuous citizens. These people aren’t considered libertarians. They might be considered principled constitutionalists, the same way as people who worry about the “imperial presidency” and its use of executive orders. But in the end, what they want is to strengthen some branches of government at the expense of others. The real libertarians also believe that cultural norms enforced by shame and ostracism are impositions on freedom, and fight to make these as circumscribed as possible.

Debates about primaries and the Electoral College are understood to be about how to determine who controls the executive branch. Debates about gerrymandering and voter suppression are understood to be about how to determine who controls the legislative branch. And debates about censorship, the media, and – yes – immigration – are understood to be about how to determine who controls the cultural branch.

In our hypothetical world, the First Amendment specifies “This shall apply to the first three branches of government, but not the fourth”. Still, debate goes on. Some people are happy that the first three branches are kept out of it, but glad that the culture still censors the speech of people who shouldn’t be talking. These people are another set of principled constitutionalists, the same as people who want to make sure only Congress (and not the President) can declare war, but don’t mind war in and of itself. Other people really are free speech purists, and think that no branch of government should be able to restrict the marketplace of ideas.

I admit this is kind of a silly hypothetical. Culture is much more different from the three branches of government than they are from each other; a civics textbook writer would have to be pretty strange to think it was worth tacking it on. Still, it’s a useful (if extreme) counter to forgetting to ever think about culture at all.

Fallacies Of Reversed Moderation

A recent discussion: somebody asked why people in Silicon Valley thought that only high-tech solutions to climate change (like carbon capture or geoengineering) mattered, and why they dismissed more typical solutions like international cooperation and political activism.

Another person cited statements from the relevant Silicon Valley people, who mostly said that political solutions and environmental activism are central to the fight against climate change, but that we should look into high-tech solutions too.

This is a pattern I see again and again.

Popular consensus believes 100% X, and absolutely 0% Y.

A few iconoclasts say that X is definitely right and important, but maybe we should also think about Y sometimes.

The popular consensus reacts “How can you think that it’s 100% Y, and that X is completely irrelevant? That’s so extremist!”

Some common forms of this:

Reversed moderation of planning, like in the geoengineering example. One group wants to solve the problem 100% through political solutions, another group wants 90% political and 10% technological, and the first group thinks the second only cares about technological solutions.

Reversed moderation of importance. For example, a lot of psychologists talk as if all human behavior is learned. Then when geneticists point to experiments showing behavior is about 50% genetic, they get accused of saying that “only genes matter” and lectured on how the world is more complex and subtle than that.

Reversed moderation of interest. For example, if a vegetarian shows any concern about animal rights, they might get told they’re “obsessed with animals” or they “care about animals more than humans”.

Reversed moderation of certainty. See for example my previous article Two Kinds Of Caution. Some researcher points out a possibility that superintelligent AI might be dangerous, and suggests looking into this possibility. Then people say it doesn’t matter, and we don’t have to worry about it, and criticize the researcher for believing he can “predict the future” or thinking “we can see decades ahead”. But “here is a possibility we need to investigate” is a much less certain claim than “no, that possibility definitely will not happen”.

I can see why this pattern is tempting. If somebody said the US should allocate 50% of its defense budget to the usual global threats, and 50% to the threat of reptilian space invaders, then even though the plan contains the number “50-50” it would not be a “moderate” proposal. You would think of it as “that crazy plan about fighting space reptiles”, and you would be right to do so. But in this case the proper counterargument is to say “there is no reason to spend any money fighting space reptiles”, not “it’s so immoderate to spend literally 100% of our budget breeding space mongooses”. “Moderate” is not the same as “50-50” is not the same as “good”. Just say “Even though this program leaves some money for normal defense purposes, it’s stupid”. You don’t have to deny that it leaves anything at all.

Or if someone says there’s a 10% chance space reptiles will invade, just say “No, the number is basically zero”. Don’t say “I can’t believe you’re certain there will be an alien invasion, don’t you know there’s never any certainty in this world?”

But I can see why this happens. Imagine the US currently devotes 100% of its defense budget to countering Russia. Some analyst determines that although Russia deserves 90% of resources, the Pentagon should also use 10% to counter China. Since no one person can shift very much of the defense budget, this analyst might spend all her time arguing we need to counter China more, trying to convince everyone that China is really very dangerous; if she succeeds, maybe the budget will shift to 99-to-1 and she’ll have done the best she can. But if she really spends all her time talking about China, this might look to other people like she’s an extremist – that crazy single-issue China person – “Why are you spending all your time talking about China? Don’t you realize Russia is important too?” Still, she’s taking the right strategy, and it’s hard to figure out what she could do better.

I am nervous titling this “reversed moderation fallacy” because any time someone brings up fallacies, people accuse them of thinking all discussion consists of identifying and jumping on Officially Designated Fallacies in someone else’s work. But I’ve gone years without talking about fallacies at all, so when this inevitably happens it’s going here as Exhibit A.


OT117: Ho Ho Hopen Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. Although I’m very happy with the quality of discussion here most of the time, I was disappointed with some comments on the Trump post. Part of this was my fault for going for a few jokes that made it more inflammatory than it had to be – but enough of it was your fault that I banned six people and probably should have banned more. Remember, if you see an immoderate comment that needs moderating, please report it using the report button in the corner.

2. Related: I am going to be stricter on the “necessary” prong of the comment policy. If it’s a thread about the poll numbers for some right-wing policies going down, and you post “The reason Trump won was because everyone knows all liberals are…”, you are probably getting banned. I’ve been reluctant to do this before because it’s the sort of thing that could be true and I don’t want to make it impossible to say certain true things. But now I’m thinking it’s so irrelevant to the topic that it will have to fit both the “true” and “kind” prongs to stay up without getting you banned. If you really can’t figure out whether something you want to post is like this, imagine someone on the opposite side said it about you, and see whether it feels more like a reasonable critique or like they’re trying to start a fight. I like the way Vorkon talks about this here.

3. In early October, I asked people to pick up anxiety sampler kits, try to use them at least twice a week, and send me the results. I gave out thirty kits and so far I have gotten valid results back from two of them (though it hasn’t quite been 10.5 weeks, so don’t worry, you’re not late). If you have a kit, please don’t forget to try it; if you’ve tried it, please don’t forget to send me your numbers.

4. Comments of the week are this discussion of “rods from God” as a nuke alternative; see especially Bean and John Schilling’s responses. Aside from everyone’s erudition, I also appreciate their ability to turn some good phrases – “like playing hide-and-seek while the seeker’s head is wrapped in a burning towel” is going to stick with me for a while.


Trump: A Setback For Trumpism

Donald Trump has been called a setback for many things. America. The global community. The environment. Civil service. Civil society. Civility. Civilization. The list goes on.

One might think he has at least been useful to his own cause. That he could at least claim to have benefited the ideas of populism, nationalism, immigration control, and protectionism. That if anything could avoid being devastated by Trump, it would be Trumpism.

But here are some polls from the past few years. They’re all on slightly different things, but I think together they tell an interesting story:

Support for global free trade mysteriously spiked around 2016.

So did moral support for immigrants.

…and, less clearly but still there, support for increasing the number of immigrants (though see here for an apparently contrary source).

…and opposition to deporting illegal immigrants.

So did belief in racial discrimination as a major cause of inequality, according to this chart with a completely unbiased title which is willing to let readers decide how to think about this issue for themselves.

And so did trust in the New York Times and other mainstream media sources.

The clearest example I can find of this effect doesn’t come from the US at all. It’s Minkus, Deutschmann & Delhey (2018), who take advantage of a large European poll that asked the same question about support for the EU in the week before and the week after Trump’s election. Just after the election, there was a giant spike in support for the EU, “considerable in size, roughly equivalent to three years of education”. They conclude that:

The election of Trump – a right-wing nationalist with a declared aversion to supranational institutions, including the EU – did not trigger a domino effect in the same direction in Europe. To the contrary, a rally effect occurred, in which Europe moved closer together, rallying around the EU’s “flag.” This indicates that an event that may at first sight appear to be a global victory for nationalism can immediately trigger measurable sentiments of resistance in another part of the world, actually leading to new impetus for supranationalism.

This kind of analysis is inherently vulnerable to cherry-picking, and I admit I’ve chosen some especially dramatic results. And polls naturally have a lot of variability, and none of these on their own constitute proof of anything. But I think when you put everything together you do get a trend. Some things have stayed the same, or are inconclusive. But there do seem to be a lot of cases where support for Trumpist positions shows a sudden and lasting decrease as soon as Trump enters the national stage.

I want credit for predicting this. In my endorsement of anyone except Trump, I told progressives not to vote Trump because they opposed his policy, and conservatives not to vote Trump because he would cause a backlash that was worse than anything they might get from him. I said that the left thrives by imagining themselves as brave rebels fighting an ignorant, regressive, hateful authority, and that “bringing their straw man to life and putting him in the Oval Office” would be “the biggest gift” they could give the Democrats, and would end up pushing an entire generation further to the left.

I think this is a good broad theory of what’s happening, but it might be worth digging deeper to try to distinguish possible mechanisms.

First, maybe Trump is just such an offensive and aversive figure that people switch sides in disgust. This is a little weird; if you were anti-immigration before Trump, can’t you just say “I hate Trump, but I’m still against immigration”? But maybe people’s minds don’t work that way.

Second, maybe Trump made causes like protectionism and nativism so central to the Republican narrative that they became untenable for Democrats. That is, in 2010, it might have been possible to be an anti-illegal-immigration Democrat (remember, in the early 2000s Hillary supported a border fence), but in 2018, that would signal being a Republican, or at least someone of questionable loyalty to the Democratic Party. In order to fit in, moderate Democrats abandoned their anti-illegal-immigration stances. The graphs above seem to provide some evidence for this: they usually show the largest shift among Democrats, with Republicans merely staying where they are.

Third, and kind of opposite that, maybe Trump is such an offensive and aversive figure that conservatives feel a need to maintain their reputation by distancing themselves from him. Maybe in 2010, being anti-illegal-immigration signaled things that you wanted to signal, like patriotism and support for low-paid workers. And now, being anti-immigration signals things you don’t want to signal, like Trump’s particular brand of inflammatory divisiveness. This doesn’t fit the evidence from the graphs above, but it does sort of fit the European study, where further-right Europeans were more likely to switch opinions after the election than further-left ones.

Fourth, maybe Trump’s focus on certain causes shifted the focus of Democrats and the mainstream media to those causes, and Democrats and the mainstream media were better at opposing them than Trump was at supporting them. For example, since Trump’s rise the media has been focusing more intensely on negative aspects of ICE and Border Patrol practices which were less well-covered before his presidency. If this focus has successfully changed minds, that would explain a shift away from Trump.

Fifth, maybe Trump has shifted the goalposts. Maybe identifying as anti-illegal-immigrant before Trump just meant you thought there should be a little better border control, but now you think it means you want a wall and mass deportations, plus you think all Mexicans are rapists. If you felt like the anti-illegal-immigrant cause was getting more extreme, but your positions stayed the same, then you might stop identifying as anti-illegal-immigrant.

Sixth, there have been a lot of studies showing that peaceful protests may increase support for a cause, but violent or disruptive protests usually decrease it (1, 2, 3). It’s easy enough to analogize Trump to a “disruptive protest” – in the sense of an ideological cause getting associated with an unsympathetic proponent – and this would be compatible with any of the explanations above. But I notice that most of the research in this area was done on whites reacting to civil rights protests, adding an identity dimension: maybe disruptive racially charged protests by blacks increase the salience of race as a category for whites, causing them to shift their opinions more towards ones based on their race rather than based on other values. This would also explain the paradoxical Ferguson effect mentioned in Part III here. In the same way, we can think of Trump’s election as a disruptive Republican move that makes Democrats feel threatened and increases the salience of partisanship for them. This would cause a sort of unilateral polarization, where Democrats become more progressive but Republicans don’t necessarily become more conservative, and so the country as a whole shifts to the left. Like the second explanation, this is compatible with the party breakdown on the graphs above. It’s also compatible with this:

These show the familiar 2016 spike. But although Trump has taken positions against fighting climate change or regulating guns, I don’t think of these two issues as “Trumpist” in the same way as illegal immigration, and I’m surprised they seem to show a Trump-related change. This would make more sense if Trump caused a wider-reaching closing of ranks among Democrats rather than just a shift away from his personal hobbyhorses.

I think all of this should increase people’s concern about backlash effects. Contrary to what some of my conflict theorist friends seem to think, civility and honesty are not always pointless own-goals in politics. If you’re sufficiently repulsive and offensive, you can also end up damaging your own cause.

As I’ve pointed out before, backlash can sometimes be a necessary trade-off to energize your base. But as I’ve also pointed out before, people tend to overestimate the importance of turning out the base, and to underestimate the importance of not having everyone hate you. So if I were a Trumpist, I would be very worried right now.

Diametrical Model Of Autism And Schizophrenia

One interesting thing I took from Evolutionary Psychopathology was a better understanding of the diametrical theory of the social brain.

There’s been a lot of discussion over whether schizophrenia is somehow the “opposite” of autism. Many of the genes that increase risk of autism decrease risk of schizophrenia, and vice versa. Autists have a smaller-than-normal corpus callosum; schizophrenics have a larger-than-normal one. Schizophrenics smoke so often that some researchers believe they have some kind of nicotine deficiency; autists have unusually low smoking rates. Schizophrenics are more susceptible to the rubber hand illusion and have weaker self-other boundaries in general; autists seem less susceptible and have stronger self-other boundaries. Autists can be pathologically rational but tend to be uncreative; schizophrenics can be pathologically creative but tend to be irrational. The list goes on.

I’ve previously been skeptical of this kind of thinking because there are many things that autists and schizophrenics have in common, many autistics who seem a bit schizophrenic, many schizophrenics who seem a bit autistic, and many risk factors shared by both conditions. But Del Giudice, building on work by Badcock and Crespi, presents the “diametrical model”: schizophrenia and autism are the failure modes of opposing sides of a spectrum from high-functioning schizotypy to high-functioning autism, ie from overly mentalistic cognition to overly mechanistic cognition.

Schizotypy is a combination of traits that psychologists have discovered often go together. It’s classified as a personality disorder in the DSM. But don’t get too caught up on that term – it’s a disorder in the same sense as narcissistic or antisocial tendencies, and like those conditions, some schizotypals do very well for themselves. Classic schizotypal traits include a tendency toward superstition, disorganized communication, and nonconformity (if it sounds kind of like “schizophrenia lite”, that’s not really a coincidence).

Typically schizotypals are supposed to be paranoid and reclusive, the same as schizophrenics. But the diametrical model tries to downplay this in favor of noting that some schizotypals are unusually charismatic and socially successful. I am not exactly sure where they’re getting this from, but I cannot deny knowing several extremely charismatic people with a lot of schizotypal traits. Sometimes these people end up as “cult leaders” – not necessarily literally, but occupying that same niche of strange people who others are drawn toward for their unusually confident and otherworldly nature. Some of the people I know in this category have schizophrenic first-degree relatives, meaning they’re probably pretty loaded with schizotypal genes.

Schizotypals, according to the theory, have overly mentalistic cognition. Their brains are hard-wired for thinking in ways that help them understand minds and social interactions. When this succeeds, it looks like an almost magical insight into what other people are secretly thinking, what their agendas are, and how to manipulate them. When it fails, it fails as animism and anthropomorphism: “I wonder what the universe is trying to tell me by making it rain today”. Or it fails as paranoia through oversensitivity to social cues: “I just saw him twitch his eye muscle slightly, which can sometimes mean he’s not interested in what I’m saying, and in the local status game that could mean that he doesn’t think I’m important enough, and that implies he might think he’s better than me and I’m expendable…”

Autism, then, would be the opposite of this. It’s overly mechanistic cognition, thinking in terms of straightforward logic and the rules of the physical world. Autistic people don’t make the mistake of thinking the universe is secretly trying to tell them something. On the other hand, after several times trying to invite a slightly autistic woman I had a crush on to things, telling her how much I liked her, petting her hair, etc, she still hadn’t figured out I was trying to date her until I said explicitly “I AM TRYING TO DATE YOU”. So not believing that you are secretly being told things has both upsides and downsides.

Autistic people are sometimes accused of looking for a set of rules that will help them understand people, or the secret cheat code that will make people give them what they want. I imagine an autistic person asking something like “What is the alternative?” This is the kind of thought process that usually works on stuff: figure out the rules that govern something, find a way to exploit them, and boom, you’ve landed a rocket on the moon. How are they supposed to know that human interaction is a bizarre set of layered partial-information games that you’re supposed to solve by looking at someone’s eye muscle twitches and concluding they’re going to steamroll over you to get a promotion at work?

Is this true? There’s…not great evidence for it. I’ve never seen any studies. There’s certainly a stereotype that brilliant engineers are not necessarily the most socially graceful people. But I know a lot of people who combine excellent technical skills with excellent social skills, and other people who are failures in both areas. So probably the best that can be said about this theory is that it would be a really neat way to explain the patterns of similarities and differences between schizophrenia and autism.

In this theory, both high-functioning autism (being good at mechanistic cognition) and high-functioning schizotypy (being good at mentalistic cognition) may be good things to have. But the higher your mutational load is – the less healthy your brain, and the fewer resources it has to bring to the problem – the less well it is able to control these powerful abilities. A schizotypal brain that cannot keep its mentalistic cognition yoked to reality dissolves into schizophrenia, completely losing the boundary between Self and Other into a giant animistic universe of universal significance and undifferentiated Mind. An autistic brain that cannot handle the weight of its mechanistic cognition becomes unable to do even the most basic mental tasks like identify and cope with its own emotions. And because in practice we’re talking about shifts in the complicated computational parameters that determine our thoughts and personalities, rather than the thoughts and personalities directly, both of these conditions have a host of related sensory and cognitive symptoms that aren’t quite directly related.

So the reason autism and schizophrenia seem both opposite and similar is that they’re opposite (in the sense of being at two ends of a spectrum) and similar (in the sense that the same failure mode of high mutational load and low “mental resources” will cause both).

If you’re thinking “it sounds like someone should do a principal components analysis on this”, then Science has your back (paper, popular article). They find that:

Consistent with previous research, autistic features were positively associated with several schizotypal features, with the most overlap occurring between interpersonal schizotypy and autistic social and communication phenotypes. The first component of a principal components analysis (PCA) of subscale scores reflected these positive correlations, and suggested the presence of an axis (PC1) representing general social interest and aptitude. By contrast, the second principal component (PC2) exhibited a pattern of positive and negative loadings indicative of an axis from autism to positive schizotypy, such that positive schizotypal features loaded in the opposite direction to core autistic features.
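As a minimal sketch of what that analysis looks like, here is a PCA run on simulated subscale scores. The subscale names, loadings, and noise levels below are all invented for illustration; only the structure (a shared social factor plus a diametrical axis) mirrors the paper’s description.

```python
# Simulated subscale scores: every subscale loads on a shared "social
# aptitude" factor; autistic subscales and positive schizotypy load in
# opposite directions on a second, diametrical axis.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 1000

social_aptitude = rng.normal(size=n)        # hypothetical PC1-like factor
autism_vs_schizotypy = rng.normal(size=n)   # hypothetical PC2-like axis

noise = lambda: 0.5 * rng.normal(size=n)
scores = np.column_stack([
    -social_aptitude + 0.8 * autism_vs_schizotypy + noise(),  # core autistic features
    -social_aptitude + 0.6 * autism_vs_schizotypy + noise(),  # communication problems
    -social_aptitude - 0.8 * autism_vs_schizotypy + noise(),  # positive schizotypy
    -social_aptitude + 0.1 * autism_vs_schizotypy + noise(),  # interpersonal schizotypy
])

pca = PCA(n_components=2).fit(scores)
print(pca.explained_variance_ratio_)
# Expect: PC1 loads all subscales in the same direction (general social
# interest/aptitude); PC2 loads autistic features opposite to positive
# schizotypy (the diametrical axis).
print(pca.components_.round(2))
```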

In keeping with this theory, studies find that first-degree relatives of autists have higher mechanistic cognition, and first-degree relatives of schizophrenics have higher mentalistic cognition and schizotypy. Autists’ relatives tend to have higher spatial compared to verbal intelligence, versus schizophrenics’ relatives who tend to have higher verbal compared to spatial intelligence. High-functioning schizotypals and high-functioning autists have normal (or high) IQs, no unusual number of fetal or early childhood traumas, and the usual amount of bodily symmetry; low-functioning autists and schizophrenics have low IQs, an increased history of fetal and early childhood traumas, and increased bodily asymmetry indicative of mutational load.

If men have much more autism than women, shouldn’t women have much more schizophrenia than men? You’d think so, but actually men have more. But men have greater variability in general, which means they’re probably more likely to satisfy the high-mutational-load criterion. So maybe we should instead predict that women will have higher levels of high-functioning schizotypy. Studies show women do have more “positive schizotypy”, the sort being discussed here, but lower “negative schizotypy”, a sort linked to the negative symptoms of schizophrenia.

Something that bothered me while I was writing this: famous mathematician John Nash was schizophrenic. Isn’t that kind of weird if schizophrenia is about an imbalance in favor of verbal/personal and against logical/mathematical thinking?

There are exceptions to everything, and we probably shouldn’t make too much of one case. But I find it striking that Nash’s work was in game theory: essentially a formalization of social thinking, and centered around the sort of paranoid social thinking of figuring out what to do about how other people might be out to get you. This is probably just a coincidence, but it’s pretty funny.


Del Giudice On The Self-Starvation Cycle

[Content note: eating disorders]

Anorexia has a cultural component. I’m usually reluctant to assume anything is cultural – every mediocre social scientist’s first instinct is always to come up with a cultural explanation which is simple, seductive, flattering to all our existing prejudices, and wrong. But after seeing enough ballerinas and cheerleaders who became anorexic after pressure to lose weight for the big competition, even I have to throw up my hands and admit anorexia has a cultural component.

But nobody ever tells you the sequel. That ballerina who’s losing weight for the big competition at age 16? At age 26, she’s long since quit ballet, worried it would exacerbate her anorexia. She’s been in therapy for ten years; for eight of them she’s admitted she has a problem, that her anorexia is destroying her life. Her romantic partners – the ones she was trying to get thin to impress – have long since left her because she looks skeletal and weird. She understands this and would do anything to cure her anorexia and be a normal weight again. But she finds she isn’t hungry. She hasn’t eaten in two days and she isn’t hungry. In fact, the thought of food sickens her. She goes to increasingly expert therapists and dieticians, asking them to help her eat more. They recommend all the usual indulgences: ice cream, french fries, cookies. She tries all of them and finds them inexplicably disgusting. Sometimes with a prodigious effort of will she will manage to finish one cookie, and congratulate herself, but the next day she finds the task of eating dessert as daunting as ever. Finally, after many years of hard work, she is scraping the bottom end of normal weight by keeping to a diet so regimented it would make a Prussian general blush.

And nobody ever tells you about all the people who weren’t ballerinas. The young man who stops eating because it gives him a thrill of virtue and superiority to be able to demonstrate such willpower. The young woman who stops eating in order to show her family how much their neglect hurts her. If they pursue their lack of appetite far enough, they end up the same way as the ballerina – admitting they have a problem, admitting they need to eat more, hiring all sorts of doctors and dieticians to find them a way to eat more, but discovering themselves incapable of doing so.

And this is why I can’t subscribe to a purely cultural narrative of anorexia. How does “ballerinas are told they should be thin in order to be pretty” explain so many former ballerinas who want to gain weight but can’t? And how does it explain the weird, almost neurological stuff like how anorexic people will mis-estimate their ability to fit through doors?

All of this makes much more sense in a biological context; it’s as if the same system that is broken in obese people who cannot lose weight no matter how hard they try is broken in anorexics who cannot gain weight no matter how hard they try. There are plenty of biological models for what this might mean. But then the question becomes: how do we reconcile the obviously cultural part, where it disproportionately happens to ballerinas, with the probably biological part, where the hypothalamus changes its weight set point?

I’m grateful to Professor del Giudice and Evolutionary Psychopathology for presenting the only reasonable discussion of this I have heard, which I quote here basically in its entirety:

The self-starvation cycle arises in predisposed individuals following an initial phase of food restriction and weight loss. Food restriction may be initially prompted by a variety of motives, from weight concerns and a desire for thinness to health-related or religious ideas (eg spiritual purity, ascetic self-denial). In fact, the cycle may even be started by involuntary weight loss due to physical illness. While fasting and exercise are initially aversive, they gradually become rewarding – even addictive – as the starvation response kicks in. At the same time, restricting behaviors that used to be deliberate become increasingly automatic, habitual, and difficult to interrupt (Dwyer et al, 2001; Guarda et al, 2015; Lock & Kirz, 2013; McGuire & Troisi 1998). The self-starvation cycle plays a crucial role in the onset of anorexia.

Increased physical activity is a key component of the starvation response in many animal species; in general, its function is to prompt exploration and extend the foraging range when food is scarce. This response is so ingrained that animals subjected to food restriction in conditions that allow physical activity often starve themselves to death through strenuous exercise (Fessler, 2002; Guarda et al, 2015; Scheurink et al, 2010). In humans, pride is a powerful additional reward of self-starvation – achieving extraordinary levels of thinness and self-control makes many anorexic patients feel special and superior (Allan & Goss, 2012). The starvation response also brings about some psychological changes that further contribute to reinforce the cycle. In particular, starvation dramatically interferes with executive flexibility/shifting, and patterns of behavior become increasingly rigid and inflexible. The balance between local and global processing is also shifted toward local details. This may contribute to common body image distortions in anorexia, as when patients focus obsessively on a specific body part (eg the neck or hips) but perceive themselves as globally overweight (Pender et al, 2014; Westwood et al, 2016).

The self-starvation cycle has been documented across time and cultures, including non-Western ones. In modern Western societies, concerns with fat and thinness are the main reason for weight loss and probably explain the moderate rise of AN incidence around the second half of the 20th century. However, cases of self-starvation with spiritual and religious motivations have been common in Europe at least since the Middle Ages (and include several Catholic saints, most famously St. Catherine of Siena). In some Asian cultures, digestive discomfort is often cited as the initial reason for restricting food intake, but the resulting syndrome has essentially the same symptoms as anorexia in Western countries (Bell, 1984; Brumberg, 1989; Culbert et al, 2015; Keel & Klump, 2003). The DSM-5 criteria for anorexia include fear of gaining weight as a diagnostic requirement; for this reason, most historical and non-Western cases would not be diagnosed as AN within the current system. However, the present emphasis on thinness is likely a contingent sociohistorical fact and does not seem to represent a necessary feature of the disorder. (Keel & Klump, 2003)

My anorexic patients sometimes complain of being forced into this mold. They’ll try to go to therapy for their inability to eat a reasonable amount of food, and their therapist will want to spend the whole time talking about their body image issues. When they complain they don’t really have body image issues, they’ll get accused of repressing it. Eventually they’ll just say “Yeah, whatever, I secretly wanted to be a ballerina” in order to make the therapist shut up and get to the part where maybe treatment happens.

The clear weak part of this theory is the explanation of the “self-starvation cycle”. Aside from a point about animals sometimes having increased activity to go explore for food, it all seems kind of tenuous.

And how come most people who starve never get anorexia? How come sailors who ran out of food halfway across the Pacific, barely made it to some tropical island, and gorged themselves on coconuts didn’t end up anorexic? Donner Party members? Concentration camp survivors? Is there something special about voluntary starvation? Some kind of messed-up learning process?

I am interpreting the point to be something along the lines of “Suppose for some people with some unknown pre-existing vulnerability, starving themselves voluntarily now flips some biological switch which makes them starve themselves involuntarily later”.

Framed like this, it sounds more like a description of anorexia than a theory about it (though see here for an attempt to flesh this out). But it’s a description which captures part of the disease that a lot of other models don’t, and which brings some things into clearer relief, and I am grateful to have it.


Book Review: Evolutionary Psychopathology

I.

Evolutionary psychology is famous for having lots of stories that make sense but are hard to test. Psychiatry is famous for having mountains of experimental data but no idea what’s going on. Maybe if you added them together, they might make one healthy scientific field? Enter Evolutionary Psychopathology: A Unified Approach by psychology professor Marco del Giudice. It starts by presenting the theory of “life history strategies”. Then it uses the theory – along with a toolbox of evolutionary and genetic ideas – to shed new light on psychiatric conditions.

Some organisms have lots of low-effort offspring. Others have a few high-effort offspring. This was the basis of the old r/k selection theory. Although the details of that theory have come under challenge, the basic insight remains. A fish will lay 10,000 eggs, then go off and do something else. 9,990 will get eaten by sharks, but that still leaves enough for there to be plenty of fish in the sea. But an elephant will spend two years pregnant, three years nursing, and ten years doing at least some level of parenting, all to produce a single big, well-socialized, and high-prospect-of-life-success calf. These are two different ways of doing reproduction. In keeping with the usual evolutionary practice, del Giudice calls the fish strategy “fast” and the elephant strategy “slow”.

To oversimplify: fast strategies (think “live fast, die young”) are well-adapted for unpredictable dangerous environments. Each organism has a pretty good chance of randomly dying in some unavoidable way before adulthood; the species survives by sheer numbers. Fast organisms should grow up as quickly as possible in order to maximize the chance of reaching reproductive age before they unpredictably die. They should mate with anybody around, to maximize the chance of mating before they unpredictably die. They should ignore their offspring, since they expect most offspring to unpredictably die, and since they have too many to take care of anyway. They should be willing to take risks, since the downside (death without reproducing) is already their default expectation, and the upside (becoming one of the few individuals to give birth to the 10,000 offspring of the next generation) is high.

Slow strategies are well-adapted for safer environments, or predictable complex environments whose intricacies can be mastered with enough time and effort. Slow strategy animals may take a long time to grow up, since they need to achieve mastery before leaving their parents. They might be very picky maters, since they have all the time in the world to choose, will only have a few children each, and need to make sure each of those children has the best genes possible. They should work hard to raise their offspring, since each individual child represents a substantial part of the prospects of their genetic line. They should avoid risks, since the downside (death without reproducing) would be catastrophically worse than default, and the upside (giving birth to a few offspring of the next generation) is what they should expect anyway.

Del Giudice asks: what if life history strategies differ not just across species, but across individuals of the same species? What if this theory applied within the human population?

In line with animal research on pace-of-life syndromes, human research has shown that impulsivity, risk-taking, and sensation seeking are systematically associated with fast life history traits such as early intercourse, early childbearing in females, unrestricted sociosexuality, larger numbers of sexual partners, reduced long-term mating orientation, and increased mortality. Future discounting and heightened mating competition reduce the benefits of reciprocal long-term relationships; in motivational terms, affiliation and reciprocity are downregulated, whereas status seeking and aggression are upregulated. The resulting behavioral pattern is marked by exploitative and socially antagonistic tendencies; these tendencies may be expressed in different forms in males and females, for example through physical versus relational aggression (Belsky et al 1991; Borowsky et al 2009; Brezina et al 2009; Chen & Vazsonyi 2011; Copping et al 2013a, 2013b, 2014a; Curry et al 2008; Dunkel & Decker 2010 […]

And:

Disgust sensitivity is another dimension of individual differences with links to the fast-slow continuum. To begin, high disgust sensitivity is broadly associated with measures of risk aversion. Moral and sexual disgust correlate with higher agreeableness, conscientiousness, and honesty-humility; and sexual disgust specifically predicts restricted sociosexuality (Al-Shawaf et al 2015; Sparks et al 2018; Tybur et al 2009, 2015; Tybur & de Vries 2013). These findings suggest that the disgust system is implicated in the regulation of life-history-related behaviors. In particular, sexual and moral disgust show the most consistent pattern of correlations with other indicators of slow strategies.

Romantic attachment styles have wide ranging influences on sexuality, mating, and couple stability, but their relations with life history strategies are somewhat complex. Secure attachment styles are consistently associated with slow life history traits (eg Chisholm 1999b; Chisholm et al 2005; Del Giudice 2009a). Avoidance predicts unrestricted sociosexuality, reduced long-term orientation, and low commitment to partners (Brennan & Shaver 1995; Jackson & Kirkpatrick 2007; Templehof & Allen 2008). Given the central role of pair bonding in long-term parental investment, avoidant attachment – which, on average, is higher in men – can be generally interpreted as a mediator of reduced parenting effort. However, some inconsistent findings indicate that avoidance may capture multiple functional mechanisms. High levels of romantic avoidance are found both in people with very early sexual debut and in those who delay intercourse (Gentzler & Kearns, 2004); this suggests that, at least for some people, avoidant attachment may actually reflect a partial downregulation of the mating system, consistent with slower life history strategies.

And:

At a higher level of abstraction, the behavioral correlates of life history strategies can be framed within the five-factor model of personality. Among the Big Five, agreeableness and conscientiousness show the most consistent pattern of associations with slow traits such as restricted sociosexuality, long-term mating orientation, couple stability, secure attachment to parents in infancy and romantic partners in adulthood, reduced sex drive, low impulsivity, and risk aversion across domains (eg Baams et al 2004; Banai & Pavela 2015; Bourage et al 2007; DeYoung 2001; Holtzman & Strube 2013; Jonasen et al 2013 […] Some researchers working in a life history perspective have argued that the general factor of personality should be regarded as the core personality correlate of slow strategies.

Del Giudice suggests that these traits, and predisposition to fast vs. slow life history in general, are caused by a gene × environment interaction. The genetic predisposition is straightforward enough. The environmental aspect is more interesting.

There has been some research on the thrifty phenotype hypothesis: if you’re undernourished while in the womb, you’ll be at higher risk of obesity later on. Some mumble “epigenetic” mumble “mechanism” looks around, says “We seem to be in a low-food environment, better design the brain and body to gorge on food when it’s available and store lots of it as fat”, then somehow modulates the relevant genes to make it happen.

Del Giudice seems to imply that a similar epigenetic mechanism “looks around” at the world during the first few years of life to try to figure out if you’re living in the sort of unpredictable dangerous environment that needs a fast strategy, or the sort of safe, masterable environment that needs a slow strategy. Depending on your genetic predisposition and the observable features of the environment, this mechanism “makes a decision” to “lock” you into a faster or slower strategy, setting your personality traits more toward one side or the other.
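A deliberately crude sketch of this calibration idea, just to make the “decision” concrete. The weights, cue names, and clamping below are all invented for illustration; nothing here comes from the book.

```python
# Toy gene-by-environment "switch": combine a genetic predisposition with
# observed early-environment cues and lock in a position on the fast-slow
# continuum. All parameters are made up.

def life_history_setting(genetic_bias: float,
                         danger: float,
                         unpredictability: float) -> float:
    """Return a point on the continuum: -1.0 = fully fast, +1.0 = fully slow."""
    # Dangerous, unpredictable environments push toward the fast end.
    environment = -(danger + unpredictability) / 2
    raw = 0.5 * genetic_bias + 0.5 * environment
    return max(-1.0, min(1.0, raw))  # "locked in" after early childhood

# Same genes, different childhoods, different locked-in strategies:
print(life_history_setting(genetic_bias=0.3, danger=0.9, unpredictability=0.8))  # negative: fast-ish
print(life_history_setting(genetic_bias=0.3, danger=0.1, unpredictability=0.0))  # positive: slow-ish
```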

He further subdivides fast vs. slow life history into four different “life history strategies”.

The antagonistic/exploitative strategy is a fast strategy that focuses on getting ahead by defecting against other people. Because it expects a short and noisy life without the kind of predictable iterated games that build reciprocity, it throws all this away and focuses on getting ahead quick. A person who has been optimized for an antagonistic/exploitative strategy will be charming, good at some superficial social tasks, and have no sense of ethics – ie the perfect con man. Antagonistic/exploitative people will have opportunities to reproduce through outright rape, through promising partners commitment and then not providing it, through status in criminal communities, or through things in the general category of hiring prostitutes when both parties are too drunk to use birth control. These people do not have to be criminals; they can also be the most cutthroat businessmen, lawyers, and politicians. Jumping ahead to the psychiatry connection, the extreme version of this strategy is probably antisocial personality disorder.

The creative/seductive strategy is a fast strategy that focuses on getting ahead through sexual selection, ie optimizing for being really sexy. Because it expects a short and noisy life, it focuses on raw sex appeal (which peaks in the late teens and early twenties) as opposed to things like social status or ability to care for children (which peak later in maturity). A person who has been optimized for a creative/seductive strategy will be attractive, artistic, flirtatious, and popular – eg the typical rock star or starlet. They will also have traits that support these skills, which for complicated reasons include being very emotional. Creative/seductive people will have opportunities to reproduce through making other people infatuated with them; if they are lucky, they can seduce a high-status high-resource person who can help take care of the children. The most extreme version of this strategy is probably borderline personality disorder.

The prosocial/caregiving strategy is a slow strategy that focuses on being a responsible pillar of the community who everybody likes. Because it expects a slow and stable life, it focuses on lasting relationships and cultivating a good reputation that will serve it well in iterated games. A person who has been optimized for a prosocial/caregiving strategy will be dependable, friendly, honest, and conformist – eg the typical model citizen. Prosocial/caregiving people will have opportunities to reproduce by marrying their high school sweetheart, living in a suburban tract house, and having 2.4 children who go to state college. The most extreme version of this strategy is probably being a normie.

The skilled/provisioning strategy is a slow strategy that focuses on being good at specific useful tasks. Because it expects a slow and stable life, it focuses on gaining abilities that may take years to bear any fruit. A person who is optimized for a skilled/provisioning strategy will be intelligent, good at learning, and a little bit obsessive. They may not invest as much in friendliness or seductiveness; once they succeed at their chosen path, they will get social status through being indispensable for the continued functioning of the community, and they will have opportunities to reproduce because of their high status and obvious ability to provide for the children. The most extreme version of this strategy is probably high-functioning autism.

This division into life strategies is a seductive idea. I mean, literally, it’s a seductive idea, ie in terms of memetic evolution, we may worry it is optimized for a creative/seductive strategy for reproduction, rather than the boring autistic “is actually true” strategy. The following is not a figure from Del Giudice’s book, but maybe it should be:

There’s a lot of debate these days about how we should treat research that fits our existing beliefs too closely. I remember Richard Dawkins (or maybe some other atheist) once argued we should be suspicious of religion because it was too normal. When you really look at the world, you get all kinds of crazy stuff like quantum physics and time dilation, but when you just pretend to look at the world, you get things like a loving Father, good vs. evil, and ritual purification – very human things, things a schoolchild could understand. Atheists and believers have since had many debates over whether religion is too ordinary or sufficiently strange, but I haven’t heard either side deny the fundamental insight that science should do something other than flatter our existing categories for making sense of the world.

On the other hand, the first thermometer no doubt recorded that it was colder in winter than in summer. And if someone had criticized physicists, saying “You claim to have a new ‘objective’ way of looking at temperature, but really all you’re doing is justifying your old prejudices that the year is divided into nice clear human-observable parts, and summer is hot and winter is cold” – then that person would be a moron.

This kind of thing keeps coming up, from Klein vs. Harris on the science of race to Jussim on stereotype accuracy. I certainly can’t resolve it here, so I want to just acknowledge the difficulty and move on. If it helps, I don’t think Del Giudice wants to argue these are objectively the only four possible life strategies and that they are perfect Platonic categories, just that these are a good way to think of some of the different ways that organisms (including humans) can pursue their goal of reproduction.

II.

Psychiatry is hard to analyze from an evolutionary perspective. From an evolutionary perspective, it shouldn’t even exist. Most psychiatric disorders are at least somewhat genetic, and most psychiatric disorders decrease reproductive fitness. Biologists have equations that can calculate how likely it is that maladaptive genes can stay in the population for certain amounts of time, and these equations say, all else being equal, that psychiatric disorders should not be possible. Apparently all else isn’t equal, but people have had a lot of trouble figuring out exactly what that means. A good example of this kind of thing is Greg Cochran’s theory that homosexuality must be caused by some kind of infection; he does not see another way it could remain a human behavior without being selected into oblivion.

Del Giudice does the best he can within this framework. He tries to sort psychiatric conditions into a few categories based on possible evolutionary mechanisms.

First, there are conditions that are plausibly good evolutionary strategies, and people just don’t like them. For example, nymphomania is unfortunate from a personal and societal perspective, but one can imagine the evolutionary logic checks out.

Second, there are conditions which might be adaptive in some situations, but don’t work now. For example, antisocial traits might be well-suited to environments with minimal law enforcement and poor reputational mechanisms for keeping people in check; now they will just land you in jail.

Third, there are conditions which are extreme levels of traits which it’s good to have a little of. For example, a little anxiety is certainly useful to prevent people from poking lions with sticks just to see what will happen. Imagine (as a really silly toy model) that two genes A and B determine anxiety, and the optimal anxiety level is 10. Alice has gene A = 8 and gene B = 2. Bob has gene A = 2 and gene B = 8. Both of them are happy well-adjusted individuals who engage in the locally optimal level of lion-poking. But if they reproduce, their child may inherit gene A = 8 and gene B = 8 for a total of 16, much more anxious than is optimal. This child might get diagnosed with an anxiety disorder, but it’s just a natural consequence of having genes for various levels of anxiety floating around in the population.
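Here is that toy model in code, just to make the arithmetic concrete (everything about it is as silly as the original):

```python
# Two additive "anxiety genes", optimal total anxiety = 10. Two parents who
# each hit the optimum with opposite allele balances can produce a child
# far above it, with no "anxiety disorder gene" anywhere in the population.
import random

OPTIMUM = 10

def anxiety(gene_a, gene_b):
    return gene_a + gene_b  # purely additive, as in the toy model

alice = (8, 2)  # anxiety 10: well-adjusted
bob = (2, 8)    # anxiety 10: well-adjusted

def child(p1, p2):
    # Cartoon inheritance: each gene comes from a randomly chosen parent.
    return (random.choice((p1[0], p2[0])), random.choice((p1[1], p2[1])))

random.seed(1)
for a, b in (child(alice, bob) for _ in range(8)):
    total = anxiety(a, b)
    print(f"A={a} B={b} anxiety={total}" + ("  <- over optimum" if total > OPTIMUM else ""))
# About a quarter of children land at A=8, B=8 -> anxiety 16,
# much more anxious than is optimal.
```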

Fourth, there are conditions which are the failure modes of traits which it’s good to have a little of. For example, psychiatrists have long categorized certain common traits into “schizotypy”, a cluster of characteristics more common in the relatives of schizophrenics and in people at risk of developing schizophrenia themselves. These traits are not psychotic in and of themselves and do not decrease fitness, nor is schizophrenia necessarily just the far end of this distribution. But schizotypal traits are one necessary ingredient of getting schizophrenia; schizophrenia is some kind of failure mode only possible with enough schizotypy. If schizotypal traits do some other good thing, they can stick around in the population, and this will look a lot like “schizophrenia is genetic”.

How can we determine which of these categories any given psychiatric disorder falls into?

One way is through what is called taxometrics – the study of the degree to which mental disorders are just the far end of a normal distribution of traits. Some disorders are clearly this way; for example, if you quantify and graph everybody’s anxiety levels, they will form a bell curve, and the people diagnosed with anxiety disorders will just be the ones on the far right tail. Are any disorders not this way? This is a hard question, though schizophrenia is a promising candidate.
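A quick sketch of the difference taxometricians are looking for, with entirely made-up numbers and distributions:

```python
# Continuum vs. taxon, illustrated with invented distributions.
import numpy as np

rng = np.random.default_rng(0)

# Continuum: one bell curve; "disordered" just means the top ~2% of scores.
continuum = rng.normal(0.0, 1.0, size=100_000)
cutoff = np.quantile(continuum, 0.98)
print(f"continuum: diagnosed = scores above {cutoff:.2f}")

# Taxon: most people come from one distribution, a distinct subgroup from
# another. The mixture leaves far more mass in the extreme tail than a
# single bell curve can explain.
mixed = np.concatenate([rng.normal(0.0, 1.0, size=99_000),
                        rng.normal(4.0, 1.0, size=1_000)])
print(f"mass above z=3: taxon mix {np.mean(mixed > 3):.2%} "
      f"vs pure continuum {np.mean(continuum > 3):.2%}")
```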

Another way is through measuring the correlation of disorders with mutational load. Some people end up with more mutations (and so a generically less functional genome) than others. The most common cause of this is being the child of an older father, since that gives mutations more time to accumulate in sperm cells. Other people seem to have higher mutational load for other, unclear reasons, which can be measured through facial asymmetry and the presence of minor physical abnormalities (like weirdly-shaped ears). If a particular psychiatric disorder is more common in people with increased mutational load, that suggests it isn’t just a functional adaptation but some kind of failure mode of something or other. Schizophrenia and low-functioning autism are both linked to higher mutational load.

Another way is by trying to figure out what aspect of evolutionary strategy matches the occurrence of the disorder. Developmental psychologists talk about various life stages, each of which brings new challenges. For example, adrenarche (age 6-8) marks “the transition from early to middle childhood”, when “behavioral plasticity and heightened social learning go hand in hand with the expression of new genetic influences on psychological traits such as aggression, prosocial behavior, and cognitive skills” and children receive social feedback “about their attractiveness and competitive ability”. More obviously, puberty marks the expression of still other genetic influences and the time at which young people start seriously thinking about sex. So if various evolutionary adaptations to deal with mating suddenly become active around puberty, and some mental disorder always starts at puberty, that provides some evidence that the mental disorder might be related to an evolutionary adaptation for dealing with mating. Or, since a staple of evo psych is that men and women pursue different reproductive strategies, if some psychiatric disease is twice as common in women (eg depression) or five times as common in men (eg autism), then that suggests it’s correlated with some strategy or trait that one sex uses more than the other.

This is where Del Giudice ties in the life history framework. If some psychiatric disease is more common in people who otherwise seem to be pursuing some life strategy, then maybe it’s related to that strategy. Either it’s another name for that strategy, or it’s another name for an extreme version of that strategy, or it’s a failure mode of that strategy, or it’s associated with some trait or adaptation which that strategy uses more than others do. By determining the association of disorders with certain life strategies, we can figure out what adaptive trait they’re piggybacking on, and from there we can reverse engineer them and try to figure out what went wrong.

This is a much more well-thought-out and orderly way of thinking about psychiatric disease than anything I’ve ever seen anyone else try. How does it work?

Unclear. Psychiatric disorders really resist being put into this framework. For example, some psychiatric disorders have a u-shaped curve regarding childhood quality – they are more common both in people with unusually deprived childhoods and people with unusually good childhoods. Many anorexics are remarkably high-functioning, so much so that even the average clinical psychiatrist takes note, but others are kind of a mess. Autism is classically associated with very low IQ and with bodily asymmetries that indicate high mutational load, but a lot of autistics have higher-than-normal IQ and minimal bodily asymmetry. Schizophrenia often starts in a very specific window between ages 18 and 25, which sounds promising for a developmental link, but a few cases will start at age 5, or age 50, or pretty much whenever. Everything is like this. What is a rational, order-loving evolutionary psychologist supposed to do?

Del Giudice bites the bullet and says that most of our diagnostic categories conflate different conditions. The unusually high-functioning anorexics have a different disease than the unusually low-functioning ones. Low IQ autism with bodily asymmetries has a different evolutionary explanation than high IQ autism without. In some cases, he is able to marshal a lot of evidence for distinct clinical entities. For example, most cases of OCD start in adulthood, but one-third begin in early childhood instead. These early OCD cases are much more likely to be male, more likely to have high conscientiousness, more likely to co-occur with autistic traits, and have a different set of obsessions focusing on symmetry, order, and religion (my own OCD started in very early childhood and I feel called out by this description). Del Giudice says these are two different conditions, one of which is associated with pathogen defense and one of which is associated with a slow life strategy.

Deep down, psychiatrists know that we have not really subdivided the space of mental disorders very well. Every year a new study comes out purporting to have discovered the three types of depression, or the four types of depression, or the five types of depression, or some other number of types of depression that some scientist thinks she has discovered. Often these are given explanatory power, like “number three is the one that doesn’t respond to SSRIs”, or “1 and 2 are biological; 3, 4, and 5 are caused by life events”. All of these seem equally plausible, so given that they all say different things I tend to ignore all of them. So when del Giudice puts depression under his spotlight and finds it subdivides into many different subdisorders, this is entirely fair. Maybe we should be concerned if he didn’t find that.

But part of me is still concerned. If evo psych correctly predicted the characteristics of the psychiatric disorders we observe, then we would count that as theoretical confirmation. Instead, it only works after you replace the psychiatric disorders we observe with another, more subtle set right on the threshold of observation. The more you’re allowed to diverge from our usual understanding, the more chance you have to fudge your results; the more different disorders you can divide things into, the more options you have for overfitting. Del Giudice’s new schema may well be accurate; it just makes it hard to check his accuracy.

On the other hand, reality has a surprising amount of detail. Every previous attempt to make sense of psychopathology has failed. But psychopathology has to make sense. So it must make sense in some complicated way. If you see what looks like a totally random squiggle on a piece of paper, then probably the equation that describes it really is going to have a lot of variables, and you shouldn’t criticize a many-variable equation as “overfitting”. There is a part of me that thinks this book is a beautiful example of what solving a complicated field would look like. You take all the complications and explain them by layering a bunch of different simple and reasonable things on top of one another. The psychiatry parts of Evolutionary Psychopathology: A Unified Approach do this. I don’t know if it’s all just epicycles, but it’s a heck of a good try.

I would encourage anyone with an interest in mental health and a tolerance for dense journal-style writing to read the psychiatry parts of this book. Whether or not the hypotheses are right, in the process of defending them it calls in such a wide array of evidence, from so many weird studies that nobody else would have any reason to think about, that it serves as a fantastic survey of the field from an unusual perspective. If you’ve ever wanted to know how many depressed people are reproducing (surprisingly many! about 90 – 100% as many as non-depressed people!) or what the IQ of ADHD people is (0.6 standard deviations below average; the people most of you see are probably from a high-functioning subtype) or how schizophrenia varies with latitude (triples as you move from the equator to the poles, but after adjusting for this darker-skinned people seem to have more, suggesting a possible connection with Vitamin D), this is the book for you.

III.

I want to discuss some political and social implications of this work. These are my speculations only; del Giudice is not to blame.

We believe that an abusive or deprived childhood can negatively affect people’s life chances. So far, we’ve cashed this out entirely in terms of brain damage. Children’s developing brains “can’t deal with the trauma” and so become “broken” in ways that make them a less functional adult. Life history theory offers a different explanation. Nothing is “broken”. Deprived children have just looked around, seen what the world is like, and rewired themselves accordingly on some deep epigenetic level.

I was reading this at the same time as the studies on preschool, and I couldn’t help noticing how well they fit together. The preschool studies were surprising because we expected them to improve children’s intelligence. Instead, they improved everything else. Why? This would make sense if the safe environment of preschool wasn’t “fixing” their “broken” brains, but pushing them to follow a slower life strategy. Stay in school. Don’t commit crimes. Don’t have kids while you’re still a teenager. This is exactly what we expect a push towards slow life strategies to do.

Life strategies even predict the “fade-out/fade-in” nature of the effects; the theory specifies that although aspects of life strategy may be set early on, they only “activate” at the appropriate developmental period. From page 93: “The social feedback that children receive in this phase [middle childhood]…may feed into the regulation of puberty timing and shape behavioral strategies in adolescence.”

Society has done a lot to try to help disadvantaged children. A lot of the research on downstream effects has been gloomy: none of the interventions raised anybody’s IQ, there are still lots of poor people around, and income inequality continues to increase. But maybe we’re just looking in the wrong place.

On a related note: a lot of intelligent, responsible, basically decent young men complain of romantic failure. Although the media has tried hard to make this look like some kind of horrifying desire to rape everybody because they believe they are entitled to whatever and whoever they want, the basic complaint is more prosaic: “I try to be a nice guy who contributes to society and respects others; how come I’m a miserable 25-year-old virgin, whereas every bully and jerk and frat bro I know is able to get a semi-infinite supply of sex partners whom they seduce, abuse, and dump?” This complaint isn’t imaginary; studies have shown that criminals are more likely to have lost their virginity earlier, that boys with more aggressive and dishonest behaviors have earlier age of first sexual intercourse, and that women find men with dark triad traits more attractive. I used to work in a psychiatric hospital that served primarily adolescents with a history of violence or legal issues; most of them had had multiple sexual encounters by age fifteen; only half of MIT students in their late teens and early 20s have had sex at all.

Del Giudice’s work offers a framework by which to understand these statistics. Most MIT students are probably pursuing slow life strategies; most violent adolescents in psych hospitals are probably pursuing fast ones. Fast strategies activate a suite of traits designed for having sex earlier; slow life strategies activate a suite of traits designed for preventing early sex. There’s a certain logical leap here where you have to explain how, if an individual is trying very hard to have teenage sex, his mumble epigenetic mumble mechanism can somehow prevent this. But millions of very vocal people’s lived experiences argue that it can. The good news for these people is that they are adapted for a life strategy which in the past has consistently resulted in reproduction at some point. Maybe when they graduate with a prestigious MIT degree, they will get enough money and status to attract a high-quality slow-strategy mate, who can bear high-quality slow-strategy kids who produce many surviving grandchildren. I don’t know. This hasn’t happened to me yet. Maybe I should have gone to MIT.

Finally, the people who like to say that various things “serve as a justification for oppression” are going to have a field day with this one. Although del Giudice is too scientific to assign any moral weight to his life history strategies, it’s not that hard to import it.

Life strategies run the risk of reifying some of our negative judgments. If criminals are pursuing a hard-coded antagonistic-exploitative strategy, that doesn’t look good for rehabilitation. Likewise, if some people are pursuing creative-seductive strategies, that provides new force to the warning to avoid promiscuous floozies and stick to your own social class. In the extreme version of this, you could imagine a populism that claims to be fighting for the decent middle-class slow-strategy segment of the population against an antagonistic/exploitative underclass. The creative/seductive people are on thin ice – maybe they should start producing art that looks like something.

(it doesn’t help that this theory is distantly related to an earlier theory proposed by Canadian psychologist J. Philippe Rushton, who added that black people are racially predisposed to fast strategies and Asians to slow strategies, with white people somewhere in the middle. Del Giudice mentions Rushton just enough that nobody can accuse him of deliberately covering up his existence, then hastily moves on.)

But aside from the psychological compellingness, this doesn’t make a lot of sense. We already know that antagonistic and exploitative people exist in the world. All that life history theory does is exactly what progressives want to do: provide an explanation that links these qualities to childhood deprivation, or to dangerous environments where they may be the only rational choice. Sure, you would have to handwave away the genetic aspect, but you’re going to have to be handwaving away some genetics to make this kind of thing work no matter what, and life history theory makes this easier rather than harder. It also provides some testable hypotheses about what aspects of childhood deprivation we might want to target, and what kind of effects we might expect such interventions to have.

Apart from all this, I find life history strategy theory sort of reassuring. Until now, atheists have been denied the comfort of knowing God has a plan for them. Sure, they could know that evolution had a plan for them, but that plan was just “watch dispassionately to see whether they live or die, then adjust gene frequencies in the next generation accordingly”. In life history strategy theory, evolution – or at least your mumble epigenetic mumble mechanism – actually has a plan for you. Now we can be evangelical atheists who have a personal relationship with evolution. It’s pretty neat.

And I come at this from the perspective of someone who has failed at many things despite trying very hard, and also succeeded at others without even trying. This has been a pretty formative experience for me, and it’s seductive to be able to think of all of it as part of a plan. Literally seductive, in the sense of memetic evolution. Like that Hogwarts chart.

Read this book at your own risk; its theories will start creeping into everything you think.

OT116: Opensées Thread

1. I screwed up the WordPress that runs this blog pretty badly. The main effect on your side is that the mailing list disappeared and so no one’s getting email notifications. Don’t bother signing up again as I’m trying to find a way to restore the old list, after which any new ones will be deleted [EDIT: I think this is fixed now].

2. There are rationalist winter solstice celebrations this year in NYC, Boston, Oakland, and Seattle (possibly also elsewhere) at various times through December; see this link for more details. Warning: can be kind of weird.

3. Comment of the week is theredsheep on the new Eastern Orthodox schism, and on the ecclesiastical link between the Patriarch of Constantinople and the USA.

Posted in Uncategorized | Tagged | 623 Comments

Book Review: The Mind Illuminated

I.

The Mind Illuminated is a guide to Buddhist meditation by Culadasa, aka John Yates, a Buddhist meditation teacher who is also a neuroscience PhD. At this point I would be more impressed to meet a Buddhist meditation teacher who wasn’t a neuroscience PhD. If I ever teach Buddhist meditation, this is going to be my hook. “Come learn advanced meditation techniques with Scott Alexander, whose lack of a neuroscience PhD gives him a unique perspective that combines ancient wisdom with a lack of modern brain science.” I think the world is ready for someone to step into this role. But Culadasa is not that person, and The Mind Illuminated is not that book.

Tradition divides meditation into two parts: concentration meditation, where you sharpen and control your focus, and insight meditation, where you investigate the nature of perception and reality. TMI follows a long tradition of focusing on concentration meditation, with the assumption that insight meditation will become safer and easier once you’ve mastered concentration, and maybe partly take care of itself. Its course divides concentration meditation into ten stages. Early stages contain basic tasks like setting up a practice, focusing on the breath, and overcoming distractibility. Later stages are more interesting; the ninth stage is learning how to calm the intensity of your meditative joy; apparently without special techniques “overly intense joy” becomes a big problem.

I usually hate meditation manuals, because they sound like word salad. “One attains joy by combining pleasure with happiness. Pleasure is a state of bliss which occurs when one concentrates focus on the understanding of awareness. Happiness is a state of joy that occurs when one focuses concentration on the awareness of understanding. By focusing awareness on bliss, you can increase the pleasure of understanding, which in turn causes concentration to be pleasant and joy to be blissful, and helps you concentrate on understanding your awareness of happiness about the bliss of focus.” At some point you start thinking “Wait, were all the nouns in that paragraph synonyms for each other?”

Culadasa avoids this better than most people. Whenever he introduces a term, he puts it in bolded italicized letters, and includes it in a glossary at the back. He tries to stick to multiple-word phrases that help clarify the concept, like “bliss of physical pliancy” or “meditative joy”, instead of just calling one thing “joy” and the other thing “bliss” and hoping you remember which is which. He includes a section on what he means by distinguishing “awareness” from “attention”, and admits that some of these are tough choices that do not necessarily cooperate with the spirit of the English language. And his division of the material into stages helps ensure you’re not reading a term until you’re somewhere around the point of personally experiencing the quality being discussed.

This is characteristic of the level of care taken in this book, which despite its unfortunate acronym does a good job of presenting just the right amount of information. For example, when people say “meditate on the breath”, I can only do this for a little while until I notice that the breath doesn’t really exist as a specific object you can concentrate on. Really there are just a bunch of disconnected sensations changing at every moment. What do you concentrate on? I had previously dismissed this as one of several reasons why obsessive-compulsive people shouldn’t do meditation, but TMI describes exactly this issue, says that it is normal and correct to worry about it, and prescribes solutions: concentrate on the disconnected sensations of the breath in whatever way feels easiest for the first few stages, and once you’ve increased awareness to the point where you can notice each subpart of the breath individually, do that.

II.

TMI also solves a whole slew of my obsessive questions and concerns with its “attention vs. awareness” dichotomy.

I had always been confused by instructions like “concentrate on the breath until you feel joy, then notice the joy”. Usually what would happen was: I would concentrate on the breath, ask myself “am I feeling joy yet?”, spend some time trying to figure this out, realize my attention had deviated from the breath, put my attention back on the breath, then feel bad because I wasn’t checking to see if I was feeling joy or not. How could I have 100% of my attention on the breath while also checking for joy? If I came up with the policy “check once per minute for joy, then go back to the breath”, how would I avoid checking arbitrarily often whether it felt like a minute had gone by? This was another issue I just dismissed as “maybe meditation is not for obsessive-compulsive people”.

But TMI distinguishes between “attention” (sometimes “focused attention”) as the one thing in the foreground of your brain, and awareness (sometimes “peripheral awareness”) as the potentially many things in the background of your brain. Think of it working the same way as central vs. peripheral vision. When given instructions like “concentrate on the breath until you feel joy, then notice the joy”, you should be focusing your entire attention on the breath, but potentially noticing joy in your peripheral awareness. These instructions are no more contradictory than “look at this dot on the wall straight ahead, but notice if a dog runs past”.

The book urges meditators to avoid a state of hyperfocus in which they are so intent on the breath that they would not notice the house falling down around them. It says this is a trap that will not build the proper habits of mind to continue to higher stages and do insight meditation later. It recommends instead a form of practice in which meditators, while keeping their attention on the breath, are constantly monitoring both for external events like barking dogs or the house falling, and for internal events like feeling hungry or having thoughts. This last one sort of makes me want to scream: how can I monitor whether or not I am having thoughts without thinking about it, in which case the answer is always ‘yes’? But this is exactly the kind of paradox that the attention/awareness dichotomy is supposed to overcome. You can keep attention on the breath, notice a thought arising in the periphery of your awareness, and gently note it and push it away, all without shifting attention.

Culadasa is very excited about this:

One great example of [a new perspective] is the distinction I make in this book between attention and awareness. Despite hundreds of thousands of meditators practicing over millennia, it has never before been clearly conceptualized that the ordinary mind has two distinct ways of “knowing”, even though these different ways of knowing have so much to do with achieving the goals of meditation. However, cognitive psychology and neuroscience have recently shown that there are two distinctly different kinds of knowing that involve completely different parts of the brain. This is a finding that deeply informs new ways of practicing meditation and interpreting our meditation experiences, from beginner to adept. This is only one example, but the point should be obvious: meditation can guide and inform neuroscience, and neuroscience can do the same for meditation.

I would usually be pretty reluctant to propose that hundreds of thousands of meditators practicing over millennia had all just missed something really important. And I have to admit that in the two or three test meditations I have done since reading this, I have had as much trouble as ever with these issues, and don’t notice an attention/awareness distinction that becomes obvious now that I have the terms I need to understand it. But realistically maybe something like this has to be true for most discussion about meditation to make sense at all.

III.

TMI gives its model of how the mind works in six interludes distributed among the chapters on meditation advice.

It begins with a startling claim that mental time is granular, and only one item can be in consciousness per granule-moment. The seven main types of items that can occupy a moment of consciousness are sight, sound, smell, taste, touch, thought, and a “binding moment” that combines aspects of the previous six. Each moment of consciousness is completely static. The only reason things seem to move or thoughts seem to flow is because the moments of consciousness are moving from moment to moment faster than you can detect, like a movie which flips from still frame to still frame so quickly that it gets perceived as continuous action. Culadasa also compares it to a “string of beads”, with each bead being a particular kind of moment (sight, sound, etc).

There are never two things in consciousness at the same time. If you think there are, that’s either because your consciousness is switching back and forth from thing to thing so quickly that you can’t follow it, or because your consciousness is perceiving a “binding moment” that presents a single aspect including both of those things. For example, if you see a cat, and you hear a meow, you might experience a “binding moment” in which you think you hear the cat meowing, although really what has happened is SIGHT:CAT — SOUND:MEOW — BINDING:(CAT, MEOW).
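The “string of beads” maps naturally onto a data structure, so here is a toy rendering of the cat example in Python. The types and names are mine, not the book’s:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Kind(Enum):
    SIGHT = auto(); SOUND = auto(); SMELL = auto(); TASTE = auto()
    TOUCH = auto(); THOUGHT = auto(); BINDING = auto()

@dataclass(frozen=True)   # each moment is completely static
class Moment:
    kind: Kind
    content: tuple        # whatever this granule of experience presents

# The "string of beads": exactly one item per granule of mental time.
stream = [
    Moment(Kind.SIGHT, ("cat",)),
    Moment(Kind.SOUND, ("meow",)),
    # The binding moment combines the preceding moments into the
    # single experience "the cat is meowing":
    Moment(Kind.BINDING, ("cat", "meow")),
]

for t, m in enumerate(stream):
    print(f"t={t}: {m.kind.name}: {', '.join(m.content)}")
```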

This sounds to me like it completely reverses the point made in the attention/awareness dichotomy, where you can be attentive to one thing but aware of many others at the same time. After all, if consciousness can only contain one thing at a time, what room is there for peripheral awareness? Culadasa states that each individual moment is either a moment of attention, or a moment of awareness. Moments of awareness can contain many things:

For example, say you’re sitting on a cabin deck in the mountains, gazing out at the view. Each moment of visual awareness will include a variety of objects — mountains, trees, birds, and sky — all at the same time. Auditory moments of awareness will include all the various sounds that make up the audible background — birdsong, wind in the trees, a babbling brook, and so forth — again, all at the same time. On the other hand, moments of visual attention might be restricted just to the bird you’re watching on a nearby branch. Auditory attention might include only the sounds the birds are making. Even when your attention is divided among several things at once — perhaps you’re knitting or whittling a piece of wood while you sit — moments of attention are still limited to a small number of objects. Finally, binding moments of attention and binding moments of awareness take the content from the preceding sensory moments and combine them into a whole: “Sitting on the deck, looking out at the mountain, while carving a piece of wood.”

Now, let’s consider the second difference: the degree of mental processing in moments of awareness versus moments of attention. Individual moments of awareness provide information about a lot of things at once, but the information has only been minimally processed. The result is our familiar experience of peripheral awareness of many things in the background. However, these moments of awareness do include some simple interpretations of sense data. You may be aware that the sounds you hear are from “traffic,” or that the things in the background of your visual field are “trees.” These simple concepts help evaluate and categorize all that information, contributing to our understanding of the present context. Although these preliminary interpretations don’t usually lead to any kind of action, some part of this information is frequently referred to attention for more analysis. Other times — say, when the sound of traffic suddenly includes screeching tires — the information in peripheral awareness can trigger an automatic action, thought, or emotion, any of which can then become an object of attention.

This still seems strained, but I grudgingly admit it kind of works.

TMI builds on this idea to create the “mind-system model”, its explanation for what consciousness is and why we have it. In this model, there are many “subminds”. The book is a little vague on how many there are or what level of complexity we’re supposed to be imagining here, and whether they represent only the few most salient divisions (eg “the visual system”) or are more numerous and abstract (eg “the part of your brain that likes to play computer games”), but I get the impression it’s closer to the latter. These subminds usually do their own thing, but sometimes have conflicting agendas.

Consciousness is a neutral ground shared by all subminds:

Here’s the picture presented so far: every sub-mind belongs either to the unconscious sensory or unconscious discriminating mind. Each sub-mind performs its own specialized task independently of others, and all at the same time. Each can project content into consciousness, as well as initiate actions. Obviously, there’s enormous potential for conflict and inefficiency, if not total chaos. This is where consciousness fits into the picture: the conscious mind provides an “interface” that allows these unconscious sub-minds to communicate with each other and work together cooperatively. With all these unconscious sub-minds working independently and at the same time, the potential for conflict is enormous. The conscious mind is what allows them to work together cooperatively.

The conscious mind acts as a universal recipient of information. It can receive information from each and every separate, unconscious sub-mind. In fact, all conscious experience is simply an ongoing stream of moments of consciousness whose content has been projected into the conscious mind by unconscious sub-minds. Then, when information enters consciousness, it becomes immediately available to all the other sub-minds. Therefore, the conscious mind also serves as a universal source of information. Because the conscious mind is both a universal recipient and a universal source of information, all the unconscious sub-minds can interact with each other through the conscious mind.

As a helpful image, picture the whole mind-system as a kind of corporation. It is made up of different departments and their employees, each with distinct roles and responsibilities. These are the unconscious sub-minds. At the top of the corporate structure is the “boardroom,” or conscious mind. The diligent employees working in their separate departments produce reports, which get sent to the boardroom to be discussed further and perhaps acted on. In other words, the unconscious sub-minds send information up into the conscious mind. The conscious mind is simply a passive “space” where all the other minds can meet. In this “boardroom of the mind” metaphor, the conscious mind is where important activities of the mind-system get brought up, discussed, and decided on. One, and only one, sub-mind can present its information at a time, and that’s what creates single moments of consciousness. The object of consciousness during that moment becomes part of the current agenda, and is made simultaneously available to all the other sub-minds for further processing. In subsequent moments, they project the results of their further processing into consciousness, creating a discussion that leads to conclusions and decisions.

If this sounds familiar, it’s because as far as I can tell it’s a rebranding of Bernard Baars’ global workspace theory of consciousness. I like global workspace theory and have always considered it the most plausible solution to the easy problems of consciousness. I’m a little bit concerned that Culadasa never mentions global workspace theory in the book, and that I’ve never heard of any connection between global workspace theory and Buddhism before. Not really sure what to make of this.

TMI continues:

Just because information projected into consciousness becomes available to every sub-mind of the mind-system, that doesn’t mean they all receive it. It’s like a radio show: the show is being broadcast, but not everyone is tuning in to listen.

Meditation increases the degree to which individual subminds are tuned in to consciousness. Since the book will later say this is all a metaphor, I think a better way of framing this might be “increase the bandwidth of the connections between the individual subminds”. When someone says meditators are “more conscious” or have “higher awareness” than non-meditators, they mean that more sub-minds are tuned in to consciousness more closely at any given time.
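The radio-show picture is basically a broadcast loop, and it’s easy to caricature in code. In this toy sketch (the submind names, the tuned_in probabilities, and the meditate() rule are all my inventions, not the book’s), meditation just raises each submind’s chance of catching the broadcast:

```python
import random

class Submind:
    """An unconscious specialist that may or may not 'tune in'."""
    def __init__(self, name, tuned_in=0.3):
        self.name = name
        self.tuned_in = tuned_in   # chance of listening on any given moment
        self.heard = []

    def receive(self, item):
        # Broadcasts reach only the subminds currently tuned in.
        if random.random() < self.tuned_in:
            self.heard.append(item)

subminds = [Submind(n) for n in ("visual", "auditory", "planner", "worrier")]

def conscious_moment(item):
    """One moment: a single submind's content occupies consciousness
    and is broadcast to every submind (the 'radio show')."""
    for s in subminds:
        s.receive(item)

def meditate(hours):
    # On this reading, practice raises each submind's bandwidth to
    # the workspace, i.e. its probability of tuning in.
    for s in subminds:
        s.tuned_in = min(1.0, s.tuned_in + 0.01 * hours)

conscious_moment(("visual", "bird on branch"))
meditate(50)   # tuned_in: 0.3 -> 0.8 for every submind
conscious_moment(("thought", "time to stretch"))
for s in subminds:
    print(s.name, s.heard)
```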

This accomplishes what Culadasa calls “unification of mind”; with more bandwidth, the subminds are able to resolve their conflicting priorities and act more like a single unit. This can start out sort of ugly; there can be good reasons why some heavily repressed and traumatized subminds aren’t usually invited to the table, and the creation of new links between them and the global workspace feels from the inside like scary unconscious material welling up into the psyche. But this is part of the “negotiation” it takes for these subminds to unify; with enough meditation, the system will assimilate their insights and they will join the Borg like everyone else.

This isn’t enlightenment. Enlightenment is something else. TMI calls it a “cessation event”:

A cessation event is where unconscious sub-minds remain tuned in and receptive to the contents of consciousness, while at the same time, none of them project any content into consciousness. Then, consciousness ceases — completely. During that period, at the level of consciousness there is a complete cessation of mental fabrications of any kind — of the illusory, mind-generated world that otherwise dominates every conscious moment. This, of course, also entails a complete cessation of craving, intention, and suffering. The only information that tuned-in sub-minds receive during this event is the fact of a total absence.

What makes this the most powerful of all Insight experiences is what happens in the last few moments of consciousness leading up to the cessation. First, an object arises in consciousness that would normally produce craving. It can be almost anything. However, what happens next is quite unusual: the mind doesn’t respond with the habitual craving and clinging. Rather, it fully understands the object from the perspective of Insight: as a mental construct, completely “empty” of any real substance, impermanent, and a cause of suffering. This profound realization leads to the next and final moment of complete equanimity, in which the shared intention of all the unified sub-minds is to not respond. Because nothing is projected into consciousness, the cessation event arises. With cessation, the tuned-in sub-minds simultaneously realize that everything appearing in consciousness is simply the product of their own activity. In other words, they realize that the input they’re accustomed to receiving is simply a result of their own fabricating activities.

I usually hate theories that explain the brain based on subminds. They seem too easy, in the way anthropomorphizing is always too easy. Want to run marathons, but spend your time drinking beer instead? Just model yourself as having a marathon-running submind and a beer-drinking submind, and they’re fighting, and the beer-drinking submind is winning. Do this enough times and you’ll never figure out anything about hyperbolic discounting or reinforcement learning or any of the very important principles that govern what the brain actually does and which do not look like little people fighting inside of you. Your solutions will always look like some weird form of therapy based on starting a dialogue with the beer-drinking submind and convincing it that beer isn’t so good after all, which never works, and you’ll never get around to taking Adderall, which for some reason will cause all the little men inside your head to change their opinions to whatever you wanted in the first place.

For whatever reason, TMI’s mind-system model doesn’t bother me as much. Maybe it’s because he’s not trying to invent yet another new age psychotherapy to help fight procrastination. Maybe because it’s in the context of global workspace theory, which I already like. Or maybe it’s because the idea of modules and processes without enough bandwidth to connect to the global workspace sounds less anthropomorphic than little people who make you drink beer because they like beer.

IV.

This is a very optimistic book.

Buddhism started out with Theravada teachers saying it would take millions of lifetimes to reach enlightenment. Then the Mahayana and Vajrayana schools started saying maybe you could reach enlightenment in one lifetime, if you did everything right and worked very hard. Recently I’ve been reading works by modern teachers like Daniel Ingram and Vinay Gupta, who compare the amount of work involved in enlightenment to the amount of work in an MD or PhD – maybe five years? But Culadasa states that “for householders who practice properly, it’s possible to master the Ten Stages within a few months or years”, adding in a footnote:

The Dalai Lama has said “If one knows the nature, order, and distinctions of the levels explained above without error and cultivates calm abiding, one can easily generate faultless meditative stabilization in about a year.” When I first began teaching, I also believed that with diligent practice most people should be able to master all Ten Stages in less than a year. I have since learned that this is not realistic in terms of most people, and making such a flat pronouncement can be discouraging for those who have been practicing much longer without attaining that mastery.

So fine, only cool people can get mastery in less than a year. Still, this is a dramatic promise. But then why are there so many cultures where monks study their entire lives in monasteries? Monks have big advantages over the sort of “householder” meditators Culadasa is talking about – they can meditate every waking hour, they have access to the best teachers. Surely they should all get enlightened within a few months? I have read some work on the ideas of “multiple paths” and “endless dharma gates”, which suggests that what is ordinarily called enlightenment is just the first and most obvious step on an endless process of personal exploration. But when I read about historical Buddhist culture, it still seems like a majority of monks at any given time are unenlightened, including those who have been at the monastery many years.

Maybe all of this Western rationality and efficiency really is that great, and by cutting out the chaff modern people can get enlightenment much faster than the ancients could? Is this true in any other field? I get the impression that modern schoolchildren still master subjects like geometry or Latin at about the same age that the medievals would, though I could be wrong about this. Maybe Culadasa was right when he claimed his book includes important distinctions that hundreds of thousands of meditators working for thousands of years have missed. Maybe the past was just stupid and anybody moderately competent can make order-of-magnitude improvements. I don’t know. It seems like a pretty big claim, though.

(or maybe this is overcomplicating things. It’s not necessarily contradictory to say that a talented person, practicing an hour a day, could go from “zero math” to “able to solve calculus problems” in a year, but also that the average student has been studying math for ten years and can’t solve calculus problems.)

TMI also feels optimistic in comparison to another meditation book I reviewed, Mastering the Core Teachings of the Buddha. Its author, Daniel Ingram, counts himself as part of the same “pragmatic dharma” movement as Culadasa, and the two of them have occasionally cooperated on various things and taught together. But Ingram stresses that meditation and enlightenment do not provide many of the worldly gains their advocates promise, and in many cases can make things worse. He warns of what he called “the Dark Night”, a tendency for people midway along the path of meditation to shatter their psyches and fall into states of profound depression and agitation.

Culadasa has a rosier view of both points. He believes that the “unification of mind” produced by meditation will have its common-sense result of reducing internal conflict and improving “willpower”; it will also “overcome all harmful emotions and behavior”, leaving you with few things to worry about except the looming specter of excessive joy.

As for the Dark Night, he doesn’t like the term, and only gives it one sentence in the main text of the book plus two pages in an appendix. The two pages reassure us that enough practice in concentration meditation serves as a prophylactic:

One of the great advantages of samatha [concentration meditation] is that it makes it easier to confront the Insights into impermanence, emptiness, the pervasive nature of suffering, and the insubstantiality of the Self that produce Awakening. Without samatha, these challenging Insights have the potential to send a practitioner spiraling into a “dark night of the soul”.

Since the whole book is about samatha meditation, and treats everything else as something that happens naturally while you’re doing samatha, this makes it sound pretty minimal; just do what you would be doing anyway and you’ll be fine. This is a big difference from Ingram, who thinks that explaining the risk of the Dark Night and how to get through it is one of the most important jobs of a meditation teacher. Culadasa endorses this difference:

Have I seen in my students anything remotely resembling a “dark night” as defined above? Absolutely not. Nor can I recall ever having seen the sorts of extreme experiences of the dukkha nanas that are appearing so frequently in these online discussions.

There seems to be something of a consensus in the relevant community that Culadasa’s type of practice, which is called “wet” (ie includes concentration and jhanas), may be less likely to produce these kinds of problems than the so-called “dry insight” that Ingram discusses, and that if you’re doing everything right maybe you shouldn’t worry about it. Shinzen Young is another meditation teacher who moves in the same circles as Ingram and Culadasa. I found his perspective on this the most informative:

Historically it is not a term from the Buddhist meditative tradition but rather from the Roman Catholic meditative tradition. (Of course, there’s nothing wrong with using Christian terms for Buddhist experiences but…). One must clearly define what one means by a “Dark Night” within the context of Buddhist experience.

It is certainly the case that almost everyone who gets anywhere with meditation will pass through periods of negative emotion, confusion, disorientation, and heightened sensitivity to internal and external arisings. It is also not uncommon that at some point, within some domain of experience, for some duration of time, things may get worse before they get better. The same thing can happen in psychotherapy and other growth modalities. For the great majority of people, the nature, intensity, and duration of these kinds of challenges is quite manageable. I would not refer to these types of experiences as “Dark Night.”

I would reserve the term for a somewhat rarer phenomenon. This phenomenon, within the Buddhist tradition, is sometimes referred to as “falling into the Pit of the Void.” It entails an authentic and irreversible insight into Emptiness and No Self. What makes it problematic is that the person interprets it as a bad trip. Instead of being empowering and fulfilling, the way Buddhist literature claims it will be, it turns into the opposite. In a sense, it’s Enlightenment’s Evil Twin. This is serious but still manageable through intensive, perhaps daily, guidance under a competent teacher. In some cases it takes months or even years to fully metabolize, but in my experience the results are almost always highly positive. For details, see The Five Ways manual pages 97-98.

This whole Dark Night discussion reminds me of a certain Zen Koan. Although the storyline of this koan is obviously contrived, it does contain a deep message. Here’s how the koan goes: A monk is walking on a precipitous path and slips but is able to grab onto a branch by his teeth. A person standing below, recognizing the monk as an enlightened master, asks him to describe Enlightenment. What should the monk do? As a teacher, he’s duty bound to speak, but as soon as he speaks, the consequences will be dire. It sounds like a lose/lose situation. If you were the monk, what would you do? That’s the koan.

If we don’t describe the possibility of Dark Night, then we leave people without a context should it occur. On the other hand, if we do discuss it, people get scared and assume it’s going to happen to them, even if we point out (as I just did), that it’s relatively infrequent. So the take-home message is:

1. Don’t worry, it’s probably not going to happen to you.
2. Even if it does, that’s not necessarily a problem.

It may require input from a teacher and time but once it’s integrated, you’ll be a very, very happy camper.

I think it would be a good thing if people lighten up around this issue. This may help (see attached cartoon).

From this I gather that Culadasa is closer to the mainstream on this issue (also, that enlightenment does not help the mind overcome a propensity to dad jokes).

There’s a lot of drama over this issue, and if you want you can find a bunch of really enlightened and compassionate pot-shots that the different teachers are taking at each other over their respective positions. The only insight I can add comes from my medical experience, where I notice a very similar phenomenon in how many side effects people attribute to certain drugs. For example, although some people will say SSRI discontinuation syndrome is toxic and scary and omnipresent and a good reason never to use SSRIs at all, my experience in five years of taking dozens of people on and off various SSRIs is that I’ve never seen it happen beyond an occasional mild headache if the drugs are tapered properly. I know there are studies that disagree with my experience, but that is definitely my experience. Part of it is probably a difference in the expectations you set (in yourself or your patients/students). Another part is probably a difference in what your patients/students communicate to you. A third part is probably actual differences in the way you prescribe or teach. All of these combined can be pretty powerful.

But the biggest difference I notice is that a “serious” side effect is the one you (or one of your patients) has had, a “minor” side effect is one that you haven’t. If a certain drug works great for 95% of people, but causes a month of constant vomiting for 5%, then a doctor who’s used it a few times and always gotten the great results will think of it as great (plus a rare side effect that doesn’t cause lasting damage) and a patient who has been vomiting constantly for a month will think of it as an evil poison which should never have been made legal (even though most people get lucky and don’t have any problems).

Shinzen says that meditation can definitely cause something terrible called “falling into the Pit of the Void”, but that it usually doesn’t happen, and that with daily guidance you will get better after a few months or years, and so basically it’s not a big problem. My guess is that the person who has been trapped in some kind of weird bad trip for several months thinks of it as a very big problem, and wants everybody to warn about it all the time. All of this closely matches the way I’ve seen doctors and patients talk about medication side effects. I’m not sure there’s a difference here except the hard-to-navigate first-person difference of “did it happen to me?”

But overall Culadasa’s optimism seems justified here. Maybe it’s the only approach to this topic that seems justified. Imagine if there were something you could do an hour a day for a year or two, which would win you more willpower and a release from all suffering, with fewer side effects than the average SSRI. Why aren’t we all doing it?

For more information, you can also check out Culadasa’s website and the The Mind Illuminated subreddit.