Mental Mountains

I.

Kaj Sotala has an outstanding review of Unlocking The Emotional Brain; I read the book, and Kaj’s review is better.

He begins:

UtEB’s premise is that much if not most of our behavior is driven by emotional learning. Intense emotions generate unconscious predictive models of how the world functions and what caused those emotions to occur. The brain then uses those models to guide our future behavior. Emotional issues and seemingly irrational behaviors are generated from implicit world-models (schemas) which have been formed in response to various external challenges. Each schema contains memories relating to times when the challenge has been encountered and mental structures describing both the problem and a solution to it.

So in one of the book’s example cases, a man named Richard sought help for trouble speaking up at work. He would have good ideas during meetings, but felt inexplicably afraid to voice them. During therapy, he described his narcissistic father, who was always mouthing off about everything. Everyone hated his father for being a fool who wouldn’t shut up. The therapist conjectured that young Richard observed this and formed a predictive model, something like “talking makes people hate you”. This was overly general: talking only makes people hate you if you talk incessantly about really stupid things. But when you’re a kid you don’t have much data, so you end up generalizing a lot from the few examples you have.

When Richard started therapy, he didn’t consciously understand any of this. He just felt emotions (anxiety) at the thought of voicing his opinion. The predictive model output the anxiety, using reasoning like “if you talk, people will hate you, and the prospect of being hated should make you anxious – therefore, anxiety”, but not any of the intermediate steps. The therapist helped Richard tease out the underlying model, and at the end of the session Richard agreed that his symptoms were related to his experience of his father. But knowing this changed nothing; Richard felt as anxious as ever.

Predictions like “speaking up leads to being hated” are special kinds of emotional memory. You can rationally understand that the prediction is no longer useful, but that doesn’t really help; the emotional memory is still there, guiding your unconscious predictions. What should the therapist do?

Here UtEB dives into the science on memory reconsolidation.

Scientists have known for a while that giving rats the protein synthesis inhibitor anisomycin prevents them from forming emotional memories. You can usually give a rat noise-phobia by pairing a certain noise with electric shocks, but this doesn’t work if the rats are on anisomycin first. Probably this means that some kind of protein synthesis is involved in memory. So far, so plausible.

A 2000 study found that anisomycin could also erase existing phobias in a very specific situation. You had to “activate” the phobia – get the rats thinking about it really hard, maybe by playing the scary noise all the time – and then give them the anisomycin. This suggested that when the memory got activated, it somehow “came loose”, and the brain needed to do some protein synthesis to put it back together again.

Thus the idea of memory reconsolidation: you form a consolidated memory, but every time you activate it, you need to reconsolidate it. If the reconsolidation fails, you lose the memory, or you get a slightly different memory, or something like that. If you could disrupt emotional memories like “speaking out makes you hated” while they’re still reconsolidating, maybe you could do something about this.
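
(To make that concrete, here is a toy sketch in Python of the activate-then-rewrite logic. It's purely my illustration of the claimed dynamic – the class and its methods are invented, and nothing here models the actual neurobiology.)

```python
# Toy sketch of the reconsolidation dynamic (illustration only; the names
# and structure are invented, not from UtEB or the rat studies).
class EmotionalMemory:
    def __init__(self, prediction):
        self.prediction = prediction  # e.g. "talking makes people hate you"
        self.labile = False           # consolidated (locked) by default

    def activate(self):
        # Recalling the memory opens a brief window where it can be rewritten.
        self.labile = True
        return self.prediction

    def update(self, new_prediction):
        # Revisions only stick while the memory is labile.
        if self.labile:
            self.prediction = new_prediction

    def reconsolidate(self):
        # The window closes and the memory locks in again.
        self.labile = False

memory = EmotionalMemory("talking makes people hate you")
memory.update("talking is usually fine")  # ignored: memory is consolidated
memory.activate()                         # reactivation opens the window
memory.update("talking is usually fine")  # now the revision sticks
memory.reconsolidate()
print(memory.prediction)                  # -> talking is usually fine
```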

Anisomycin is pretty toxic, so that’s out. Other protein synthesis inhibitors are also toxic – it turns out proteins are kind of important for life – so they’re out too. Electroconvulsive therapy actually seems to work pretty well for this – the shock disrupts protein formation very effectively (and the more I think about this, the more implications it seems to have). But we can’t do ECT on everybody who wants to be able to speak up at work more, so that’s also out. And the simplest solution – activating a memory and then reminding the patient that they don’t rationally believe it’s true – doesn’t seem to help; the emotional brain doesn’t speak Rationalese.

The authors of UtEB claim to have found a therapy-based method that works, which goes like this:

First, they tease out the exact predictive model and emotional memory behind the symptom (in Richard’s case, the narrative where his father talked too much and ended up universally hated, and so if Richard talks at all, he too will be universally hated). Then they try to get this as far into conscious awareness as possible (or, if you prefer, have consciousness dig as deep into the emotional schema as possible). They call this “the pro-symptom position” – giving the symptom as much room as possible to state its case without rejecting it. So for example, Richard’s therapist tried to get Richard to explain his unconscious pro-symptom reasoning as convincingly as possible: “My father was really into talking, and everybody hated him. This proves that if I speak up at work, people will hate me too.” She even asked Richard to put this statement on an index card, review it every day, and bask in its compellingness. She asked Richard to imagine getting up to speak, and feeling exactly how anxious it made him, while reviewing to himself that the anxiety felt justified given what happened with his father. The goal was to establish a wide, well-trod road from consciousness to the emotional memory.

Next, they try to find a lived and felt experience that contradicts the model. Again, Rationalese doesn’t work; the emotional brain will just ignore it. But it will listen to experiences. For Richard, this was a time when he was at a meeting, had a great idea, but didn’t speak up. A coworker had the same idea, mentioned it, and everyone agreed it was great and congratulated the other person for having such an amazing idea that would transform their business. Again, there’s this same process of trying to get as deeply into that moment as possible, bringing the relevant feelings back again and again, creating as wide and smooth a road from consciousness to the experience as possible.

Finally, the therapist activates the disruptive emotional schema, and before it can reconsolidate, smashes it into the new experience. So Richard’s therapist makes use of the big wide road Richard built that let him fully experience his fear of speaking up, and asks Richard to get into that frame of mind (activate the fear-of-speaking schema). Then she asks him, while keeping the fear-of-speaking schema in mind, to remember the contradictory experience (coworker speaks up and is praised). Then the therapist vividly describes the juxtaposition while Richard tries to hold both in his mind at once.

And then Richard was instantly cured, and never had any problems speaking up at work again. His coworkers all applauded, and became psychotherapists that very day. An eagle named “Psychodynamic Approach” flew into the clinic and perched atop the APA logo and shed a single tear. Coherence Therapy: Practice Manual And Training Guide was read several times, and God Himself showed up and enacted PsyD prescribing across the country. All the cognitive-behavioralists died of schizophrenia and were thrown in the lake of fire for all eternity.

This is, after all, a therapy book.

II.

I like UtEB because it reframes historical/purposeful accounts of symptoms as aspects of a predictive model. We already know the brain has an unconscious predictive model that it uses to figure out how to respond to various situations and which actions have which consequences. In retrospect, this framing perfectly fits the idea of traumatic experiences having outsized effects. Tack on a bit about how the model is more easily updated in childhood (because you’ve seen fewer other things, so your priors are weaker), and you’ve gone a lot of the way to traditional models of therapy.

But I also like it because it helps me think about the idea of separation/noncoherence in the brain. Richard had his schema about how speaking up makes people hate you. He also had lots of evidence that this wasn’t true, both rationally (his understanding that his symptoms were counterproductive) and experientially (his story about a coworker proposing an idea and being accepted). But the evidence failed to naturally propagate; it didn’t connect to the schema that it should have updated. Only after the therapist forced the connection did the information go through. Again, all of this should have been obvious – of course evidence doesn’t propagate through the brain, I was writing posts ten years ago about how even a person who knows ghosts don’t exist will be afraid to stay in an old supposedly-haunted mansion at night with the lights off. But UtEB’s framework helps snap some of this into place.

UtEB’s brain is a mountainous landscape, with fertile valleys separated by towering peaks. Some memories (or pieces of your predictive model, or whatever) live in each valley. But they can’t talk to each other. The passes are narrow and treacherous. They go on believing their own thing, unconstrained by conclusions reached elsewhere.

Consciousness is a capital city on a wide plain. When it needs the information stored in a particular valley, it sends messengers over the passes. These messengers are good enough, but they carry letters, not weighty tomes. Their bandwidth is atrocious; often they can only convey what the valley-dwellers think, and not why. And if a valley gets something wrong, lapses into heresy, as often as not the messengers can’t bring the kind of information that might change their mind.

Links between the capital and the valleys may be tenuous, but valley-to-valley trade is almost non-existent. You can have two valleys full of people working on the same problem, for years, and they will basically never talk.

Sometimes, when it’s very important, the king can order a road built. The passes get cleared out, high-bandwidth communication to a particular valley becomes possible. If he does this to two valleys at once, then they may even be able to share notes directly, each passing through the capital to get to each other. But it isn’t the norm. You have to really be trying.

This ended up a little more flowery than I expected, but I didn’t start thinking this way because it was poetic. I started thinking this way because of this:

[Figure omitted: a diagram of an energy landscape whose basins of attraction flatten out.]

Frequent SSC readers will recognize this as from Figure 1 of Friston and Carhart-Harris’ REBUS And The Anarchic Brain: Toward A Unified Model Of The Brain Action Of Psychedelics, which I review here. The paper describes it as “the curvature of the free-energy landscape that contains neuronal dynamics. Effectively, this can be thought of as a flattening of local minima, enabling neuronal dynamics to escape their basins of attraction and—when in flat minima—express long-range correlations and desynchronized activity.”

Moving back a step: the paper is trying to explain what psychedelics do to the brain. It theorizes that they weaken high-level priors (in this case, you can think of these as the tendency to fit everything to an existing narrative), allowing things to be seen more as they are:

A corollary of relaxing high-level priors or beliefs under psychedelics is that ascending prediction errors from lower levels of the system (that are ordinarily unable to update beliefs due to the top-down suppressive influence of heavily-weighted priors) can find freer register in conscious experience, by reaching and impressing on higher levels of the hierarchy. In this work, we propose that this straightforward model can account for the full breadth of subjective phenomena associated with the psychedelic experience.

These ascending prediction errors (ie noticing that you’re wrong about something) can then correct the high-level priors (ie change the narratives you tell about your life):

The ideal result of the process of belief relaxation and revision is a recalibration of the relevant beliefs so that they may better align or harmonize with other levels of the system and with bottom-up information—whether originating from within (e.g., via lower-level intrinsic systems and related interoception) or, at lower doses, outside the individual (i.e., via sensory input or extroception). Such functional harmony or realignment may look like a system better able to guide thought and behavior in an open, unguarded way (Watts et al., 2017; Carhart-Harris et al., 2018b).

This makes psychedelics a potent tool for psychotherapy:

Consistent with the model presented in this work, overweighted high-level priors can be all consuming, exerting excessive influence throughout the mind and brain’s (deep) hierarchy. The negative cognitive bias in depression is a good example of this (Beck, 1972), as are fixed delusions in psychosis (Sterzer et al., 2018). In this paper, we propose that psychedelics can be therapeutically effective, precisely because they target the high levels of the brain’s functional hierarchy, primarily affecting the precision weighting of high-level priors or beliefs. More specifically, we propose that psychedelics dose-dependently relax the precision weighting of high-level priors (instantiated by high-level cortex), and in so doing, open them up to an upsurge of previously suppressed bottom-up signaling (e.g., stemming from limbic circuitry). We further propose that this sensitization of high-level priors means that more information can impress on them, potentially inspiring shifts in perspective, felt as insight. One might ask whether relaxation followed by revision of high-level priors or beliefs via psychedelic therapy is easy to see with functional (and anatomic) brain imaging. We presume that it must be detectable, if the right questions are asked in the right way.
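
(For concreteness, here is a minimal sketch of what “relaxing the precision weighting of high-level priors” cashes out to in the standard Gaussian belief-update math. The code and all the numbers are my illustration, not anything from the paper.)

```python
# Precision-weighted Bayesian update under Gaussian assumptions (sketch).
# Precision = inverse variance: the more precise the prior, the less any
# single prediction error can move it. All numbers are illustrative.

def update_belief(prior_mean, prior_precision, obs, obs_precision):
    """Posterior mean is a precision-weighted average of prior and evidence."""
    posterior_precision = prior_precision + obs_precision
    posterior_mean = (prior_precision * prior_mean
                      + obs_precision * obs) / posterior_precision
    return posterior_mean, posterior_precision

# High-level prior: "speaking up is dangerous" (danger = 1.0).
# One contradictory experience says it was safe (danger = 0.0).
rigid, _ = update_belief(1.0, prior_precision=50.0, obs=0.0, obs_precision=1.0)
relaxed, _ = update_belief(1.0, prior_precision=0.5, obs=0.0, obs_precision=1.0)

print(round(rigid, 2))    # 0.98 - the heavily-weighted prior barely budges
print(round(relaxed, 2))  # 0.33 - the relaxed prior lets the evidence in
```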

Am I imagining this, or are Friston + Carhart-Harris and Unlocking The Emotional Brain getting at the same thing?

Both start with a piece of a predictive model (= high-level prior) telling you something that doesn’t fit the current situation. Both also assume you have enough evidence to convince a rational person that the high-level prior is wrong, or doesn’t apply. But you don’t automatically smash the prior and the evidence together and perform an update. In UtEB’s model, the update doesn’t happen until you forge conscious links to both pieces of information and try to hold them in consciousness at the same time. In F+CH’s model, the update doesn’t happen until you take psychedelics which make the high-level prior lose some of its convincingness. UtEB is trying to laboriously build roads through mountains; F+CH are trying to cast a magic spell that makes the mountains temporarily vanish. Either way, you get communication between areas that couldn’t communicate before.

III.

Why would mental mountains exist? If we keep trying to get rid of them, through therapy or psychedelics, or whatever, then why not just avoid them in the first place?

Maybe generalization is just hard (thanks to MC for this idea). Suppose Goofus is mean to you. You learn Goofus is mean; if this is your first social experience, maybe you also learn that the world is mean and people have it out for you. Then one day you meet Gallant, who is nice to you. Hopefully the system generalizes to “Gallant is nice, Goofus is still mean, people in general can go either way”.

But suppose one time Gallant is just having a terrible day, and curses at you, and that time he happens to be wearing a red shirt. You don’t want to overfit and conclude “Gallant wearing a red shirt is mean, Gallant wearing a blue shirt is nice”. You want to conclude “Gallant is generally nice, but sometimes slips and is mean.”

But any algorithm that gets too good at resisting the temptation to separate out red-shirt-Gallant and blue-shirt-Gallant risks falling into the opposite failure mode where it doesn’t separate out Gallant and Goofus. It would just average them out, and conclude that people (including both Goofus and Gallant) are medium-niceness.

And suppose Gallant has brown eyes, and Goofus green eyes. You don’t want your algorithm to overgeneralize to “all brown-eyed people are nice, and all green-eyed people are mean”. But suppose the Huns attack you. You do want to generalize to “All Huns are dangerous, even though I can keep treating non-Huns as generally safe”. And you want to do this as quickly as possible, definitely before you meet any more Huns. And the quicker you are to generalize about Huns, the more likely you are to attribute false significance to Gallant’s eye color.

The end result is a predictive model which is a giant mess, made up of constant “This space here generalizes from this example, except this subregion, which generalizes from this other example, except over here, where it doesn’t, and definitely don’t ever try to apply any of those examples over here.” Somehow this all works shockingly well. For example, I spent a few years in Japan, and developed a good model for how to behave in Japanese culture. When I came back to the United States, I effortlessly dropped all of that and went back to having America-appropriate predictions and reflexive actions (except for an embarrassing habit of bowing whenever someone hands me an object, which I still haven’t totally eradicated).
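
(As a toy illustration of such a patchwork – with the entries and the lookup rule invented for the example – the model behaves something like a table of context-specific predictions with fallbacks:)

```python
# Toy sketch of a context-dependent predictive model: specific experiences
# override broader generalizations, and each context is walled off from the
# rest. All entries are invented for illustration.

predictions = {
    ("japan", "someone hands me an object"): "bow",
    ("usa", "someone hands me an object"): "say thanks",
    ("gallant", "default"): "expect kindness",
    ("goofus", "default"): "expect meanness",
    ("anyone", "default"): "medium niceness",
}

def predict(context, situation):
    # Fall back from the most specific key to the most general one.
    for key in [(context, situation), (context, "default"), ("anyone", "default")]:
        if key in predictions:
            return predictions[key]

print(predict("japan", "someone hands me an object"))  # bow
print(predict("gallant", "asks a favor"))              # expect kindness
print(predict("stranger", "asks a favor"))             # medium niceness

# Updating the Gallant entry never touches Goofus: evidence doesn't
# propagate across the mountains unless something forces it to.
predictions[("gallant", "default")] = "always trust him"
print(predict("goofus", "asks a favor"))               # still: expect meanness
```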

In this model, mental mountains are just the context-dependence that tells me not to use my Japanese predictive model in America, and which prevents evidence that makes me update my Japanese model (like “I notice subways are always on time”) from contaminating my American model as well. Or which prevents things I learn about Gallant (like “always trust him”) from also contaminating my model of Goofus.

There’s actually a real-world equivalent of the “red-shirt-Gallant is bad, blue-shirt-Gallant is good” failure mode. It’s called “splitting”, and you can find it in any psychology textbook. Wikipedia defines it as “the failure in a person’s thinking to bring together the dichotomy of both positive and negative qualities of the self and others into a cohesive, realistic whole.”

In the classic example, a patient is in a mental hospital. He likes his doctor. He praises the doctor to all the other patients, says he’s going to nominate her for an award when he gets out.

Then the doctor offends the patient in some way – maybe refuses one of his requests. All of a sudden, the doctor is abusive, worse than Hitler, worse than Mengele. When he gets out he will report her to the authorities and sue her for everything she owns.

Then the doctor does something right, and it’s back to praise and love again.

The patient has failed to integrate his judgments about the doctor into a coherent whole, “doctor who sometimes does good things but other times does bad things”. It’s as if there’s two predictive models, one of Good Doctor and one of Bad Doctor, and even though both of them refer to the same real-world person, the patient can only use one at a time.

Splitting is most common in borderline personality disorder. The DSM criteria for borderline include splitting (there defined as “a pattern of unstable and intense interpersonal relationships characterized by alternating between extremes of idealization and devaluation”). They also include things like “markedly and persistently unstable self-image or sense of self”, and “affective instability due to a marked reactivity of mood”, which seem relevant here too.

Some therapists view borderline as a disorder of integration. Nobody is great at having all their different schemas talk to each other, but borderlines are atrocious at it. Their mountains are so high that even different thoughts about the same doctor can’t necessarily talk to each other and coordinate on a coherent position. The capital only has enough messengers to talk to one valley at a time. If tribesmen from the Anger Valley are advising the capital today, the patient becomes truly angry, a kind of anger that utterly refuses to listen to any counterevidence, an anger pure beyond your imagination. If they are happy, they are purely happy, and so on.

About 70% of people diagnosed with dissociative identity disorder (previously known as multiple personality disorder) have borderline personality disorder. The numbers are so high that some researchers are not even convinced that these are two different conditions; maybe DID is just one manifestation of borderline, or especially severe borderline. Considering borderline as a failure of integration, this makes sense; DID is total failure of integration. People in the furthest mountain valleys, frustrated by inability to communicate meaningfully with the capital, secede and set up their own alternative provincial government, pulling nearby valleys into their new coalition. I don’t want to overemphasize this; most popular perceptions of DID are overblown, and at least some cases seem to be at least partly iatrogenic. But if you are bad enough at integrating yourself, it seems to be the sort of thing that can happen.

In his review, Kaj relates this to Internal Family Systems, a weird form of therapy where you imagine your feelings as people/entities and have discussions with them. I’ve always been skeptical of this, because feelings are not, in fact, people/entities, and it’s unclear why you should expect them to answer you when you ask them questions. And in my attempts to self-test the therapy, indeed nobody responded to my questions and I was left feeling kind of silly. But Kaj says:

As many readers know, I have been writing a sequence of posts on multi-agent models of mind. In Building up to an Internal Family Systems model, I suggested that the human mind might contain something like subagents which try to ensure that past catastrophes do not repeat. In subagents, coherence, and akrasia in humans, I suggested that behaviors such as procrastination, indecision, and seemingly inconsistent behavior result from different subagents having disagreements over what to do.

As I already mentioned, my post on integrating disagreeing subagents took the model in the direction of interpreting disagreeing subagents as conflicting beliefs or models within a person’s brain. Subagents, trauma and rationality further suggested that the appearance of drastically different personalities within a single person might result from unintegrated memory networks, which resist integration due to various traumatic experiences.

This post has discussed UtEB’s model of conflicting emotional schemas in a way which further equates “subagents” with beliefs – in this case, the various schemas seem closely related to what e.g. Internal Family Systems calls “parts”. In many situations, it is probably fair to say that this is what subagents are.

This is a model I can get behind. My guess is that in different people, the degree to which mental mountains form a barrier will cause the disconnectedness of valleys to manifest as anything from “multiple personalities”, to IFS-findable “subagents”, to UtEB-style psychiatric symptoms, to “ordinary” beliefs that don’t cause overt problems but might not be very consistent with each other.

IV.

This last category forms the crucial problem of rationality.

One can imagine an alien species whose ability to find truth was a simple function of their education and IQ. Everyone who knows the right facts about the economy and is smart enough to put them together will agree on economic policy.

But we don’t work that way. Smart, well-educated people believe all kinds of things, even when they should know better. We call these people biased, a catch-all term meaning something that prevents them from having true beliefs they ought to be able to figure out. I believe most people who don’t believe in anthropogenic climate change are probably biased. Many of them are very smart. Many of them have read a lot on the subject (empirically, reading more about climate change will usually just make everyone more convinced of their current position, whatever it is). Many of them have enough evidence that they should know better. But they don’t.

(again, this is my opinion, sorry to those of you I’m offending. I’m sure you think the same of me. Please bear with me for the space of this example.)

Compare this to Richard, the example patient mentioned above. Richard had enough evidence to realize that companies don’t hate everyone who speaks up at meetings. But he still felt, on a deep level, like speaking up at meetings would get him in trouble. The evidence failed to connect to the emotional schema, the part of him that made the real decisions. Is this the same problem as the global warming case? Where there’s evidence, but it doesn’t connect to people’s real feelings?

(maybe not: Richard might be able to say “I know people won’t hate me for speaking, but for some reason I can’t make myself speak”, whereas I’ve never heard someone say “I know climate change is real, but for some reason I can’t make myself vote to prevent it.” I’m not sure how seriously to take this discrepancy.)

In Crisis of Faith, Eliezer Yudkowsky writes:

Many in this world retain beliefs whose flaws a ten-year-old could point out, if that ten-year-old were hearing the beliefs for the first time. These are not subtle errors we’re talking about. They would be child’s play for an unattached mind to relinquish, if the skepticism of a ten-year-old were applied without evasion…we change our minds less often than we think.

This should scare you down to the marrow of your bones. It means you can be a world-class scientist and conversant with Bayesian mathematics and still fail to reject a belief whose absurdity a fresh-eyed ten-year-old could see. It shows the invincible defensive position which a belief can create for itself, if it has long festered in your mind.

What does it take to defeat an error that has built itself a fortress?

He goes on to describe how hard this is, to discuss the “convulsive, wrenching effort to be rational” that he thinks this requires, the “all-out [war] against yourself”. Some of the techniques he mentions explicitly come from psychotherapy, others seem to share a convergent evolution with it.

The authors of UtEB stress that all forms of therapy involve their process of reconsolidating emotional memories one way or another, whether they know it or not. Eliezer’s work on crisis of faith feels like an ad hoc form of epistemic therapy, one with a similar goal.

Here, too, there is a suggestive psychedelic connection. I can’t count how many stories I’ve heard along the lines of “I was in a bad relationship, I kept telling myself that it was okay and making excuses, and then I took LSD and realized that it obviously wasn’t, and got out.” Certainly many people change religions and politics after a psychedelic experience, though it’s hard to tell exactly what part of the psychedelic experience does this, and enough people end up believing various forms of woo that I hesitate to say it’s all about getting more rational beliefs. But just going off anecdote, this sometimes works.

Rationalists wasted years worrying about various named biases, like the conjunction fallacy or the planning fallacy. But most of the problems we really care about aren’t any of those. They’re more like whatever makes the global warming skeptic fail to connect with all the evidence for global warming.

If the model in Unlocking The Emotional Brain is accurate, it offers a starting point for understanding this kind of bias, and maybe for figuring out ways to counteract it.

173 Responses to Mental Mountains

  1. marthinwurer says:

    The pictures of the energy landscapes remind me of a paper I was reading about how batch normalization helps artificial neural networks learn. Batch norm basically smooths out the gradient landscape, allowing the optimizer to easily figure out the direction to go instead of trying to follow valleys through to the minima. I’ll add a link to the paper if I remember in the morning; I just remember it was on the machine learning subreddit.

    Couldn’t sleep so I found it: arxiv link. Turns out the graphics I was remembering were about the benefits of skip connections, not normalization. There are still some charts in the appendix that show normalization smoothing out the loss landscape, though, so I’ll keep this comment here.
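
    (For concreteness, here is a rough sketch of the batch-norm forward pass in the standard Ioffe & Szegedy formulation; the example array and epsilon are just illustrative.)

    ```python
    # Batch normalization, forward pass (training-mode sketch).
    import numpy as np

    def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
        """Normalize each feature over the batch, then rescale and shift."""
        mean = x.mean(axis=0)                    # per-feature batch mean
        var = x.var(axis=0)                      # per-feature batch variance
        x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
        return gamma * x_hat + beta              # learnable rescale and shift

    activations = np.array([[1.0, 200.0],
                            [3.0, 400.0],
                            [5.0, 600.0]])
    print(batch_norm(activations))  # both features end up on a comparable scale
    ```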

    • viVI_IViv says:

      The pictures of the energy landscapes remind me of a paper I was reading about how batch normalization helps artificial neural networks learn. Batch norm basically smooths out the gradient landscape, allowing the optimizer to easily figure out the direction to go instead of trying to follow valleys through to the minima. I’ll add a link to the paper if I remember in the morning; I just remember it was on the machine learning subreddit.

      I tend to be wary of these kinds of analogies: the human brain has been compared to a mechanical watch, a hydraulic circuit, an electric circuit, a computer, a logical inference engine, a Bayesian network, and now of course an artificial neural network (ironically, since artificial neural networks were originally inspired by the brain). See the trend? The brain is always just like the latest fancy technology du jour.

      Arguably, none of these analogies is completely misleading; they all have a grain of truth. But do they really help our understanding, or are they just a source of false insight that suggests deep connections where there is nothing but superficial resemblance?

      • Erl137 says:

        George Zarkadakis apparently traces the “mind as metaphor for the most advanced technology” trend all the way back to the biblical description of “clay infused with a soul”. This is an analogy with the most advanced contemporary technology of the time: the clay pot, which contains a fluid.

        My mom (a therapist) and I have an inside joke about this, imagining a sort of stage-zero metaphor, “your brain is like a rock. Some days the rock is good, other days, not a good rock.” There’s more comedic than analytic mileage in this one, perhaps.

        https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

      • kai.teorn says:

        To be fair, no mechanical watches or even computers (without an NN) have ever been able to do the tricks that modern NNs are capable of, such as successfully playing a new computer game just from the screen, without advance knowledge of its rules.

        • viVI_IViv says:

          Yes, any new technology is better than the previous ones, at least at something.

          Still, it’s complicated.

          For instance, between the 50s and 80s the predominant psychological theories were based on cognitivism: the notion that the mind is structured similarly to a traditional, high-level software system, with files and databases, discrete arithmetic, formal grammars to generate and understand language, and GOFAI-style planning and inference. After all, this was the most advanced technology at the time, and it did in fact replicate some of the cognitive skills of humans. It turns out that it replicated some of these skills at way better than human level (try to beat a 70s-era pocket calculator at doing multiplication or a floppy disk at storing data), but was crap at other skills.

          When artificial neural networks first became interesting in the 80s-90s, psychology mostly abandoned cognitivism for connectionism, the notion that the mind is just the product of a big black-box neural network without much discernible internal structure, trained to maximize some sort of reward signal. Then artificial neural networks went into their AI winter, neuroscience started to find lots of specialized modules inside the brain, and psychology began to move towards Bayesianism. Then ANNs came back in the 2010s and are now the dominant AI paradigm, while neuroscience and classical psychology stalled and went into a replication crisis, so connectionism is coming back, now with fancy deep learning terminology such as “energy landscapes”.

          ANNs do in fact reproduce some human cognitive skills, in some cases better than humans, still they are crap at other skills. This is why I’m wary of such analogies.

          • kai.teorn says:

            > the notion that the mind is just the product of a big black-box neural network without much discernible internal structure, trained to maximize some sort of reward signal.

            Apart from anything else, this notion has one big advantage: it is much easier to imagine how it could gradually evolve from something extremely primitive — as opposed to, e.g., a computer-like brain with a rigid structure and boundaries between RAM and the processor. For this reason alone, cognitivism and connectionism, to me, are not even comparable in their level of plausibility. They are like top-to-bottom and bottom-to-top descriptions of something — but since we know that this something did in fact grow from bottom to top, the top-to-bottom description is of very limited interest.

          • viVI_IViv says:

            Many organs and systems of the body are made of parts with clearly identifiable functions. E.g. the eye is structured like a camera, with a sensor (retina), a biconvex lens, a diaphragm (iris) and various muscles to change orientation and focus. The circulatory system has pipes (arteries and veins), valves, a pump (heart), gas exchangers (lungs), filters (kidneys), and so on. It was not prima facie implausible that the human brain was also structured similarly.

          • Simon_Jester says:

            Wariness of analogies is justified.

            At the same time, technology iterates towards higher levels of capacity. AI development isn’t progressing ultra-fast, but it is manifestly progressing; whatever it means to ‘think,’ we are getting closer to machines that think.

            Therefore, it is increasingly likely that we will encounter technological discoveries that are at least vaguely related to how the brain really works. No amount of “pot full of liquid” metaphors gets you very far in understanding the brain, because it’s just too simplistic. But a machine that has components explicitly modeled after the kind of things we know the brain does? That may very well provide us with some insights.

            Trying to say “the brain is just like [piece of tech]” is obviously fallacious. But there are still things we may learn from analogy to pieces of tech.

      • dark orchid says:

        While reading your last sentence, I was reminded of how the atom has been compared to a ball, a plum pudding, a planet with moons etc. – all of these models are useful in some way or another, and they did help my understanding of different parts of basic chemistry.

  2. ikew says:

    Sorry for responding to the part of your post you most likely don’t want to be responded to all that much, but:

    I’ve never heard someone say “I know climate change is real, but for some reason I can’t make myself vote to prevent it.”

    really doesn’t match my experience. About half of the skeptics I’ve discussed this with hold the following belief:
    “Climate change appears to indeed be happening. The degree to which it is threatening is likely overblown due to the current political climate (heh). I am not yet fully convinced it is anthropogenic in nature, as (ref.1), (ref.2) and (ref.3). I know these are contested and there are many other studies contradicting these. As I am not a climatologist, I consider the issue to be “currently unsolved”. When politics is involved, I am not eager to accept the overall consensus if there is even a single piece of evidence that does not appear to be fabricated that challenges it. In case climate change is indeed anthropogenic, I am still not convinced it can be countered by policing the first world rather than currently industrializing nations. And even if we are at fault, whatever the effects of it might be, they cannot be worse than a one-world government led by the current crop of globalist neoliberal nanny-state wannabe dictators we have pushing for it.”
    Notice how it’s a position starting with an open mind in regards to the factual, increasingly biased as the topic becomes more political (as is always the case), and then we find a series of fallbacks all the way down to a complete rejection of (perceived) governmental overreach.
    It’s possible that when you are discussing the issue with a skeptic the issue becomes political (in their mind) from the get-go, skipping entirely the lush meadows of factology. I am unsure why this would happen, as you always treat the Other tribe with respect. Could be something specific to American climate skeptics.

    Oh! About this :

    In his review, Kaj relates this to Internal Family Systems, a weird form of therapy where you imagine your feelings as people/entities and have discussions with them. I’ve always been skeptical of this, because feelings are not, in fact, people/entities, and it’s unclear why you should expect them to answer you when you ask them questions.

    I unknowingly and accidentally applied this form of therapy on myself between ages 16 and probably 19-ish. It got pretty involved over time, with a sort of inner council of four personified approaches to life discussing among themselves the best course of action, both long and short term. The terms of the discussion were not always civil.
    It’s unclear to what degree this influenced my subsequent descent into stuff that is still an issue more than ten years later. But it was fun at the time, so there’s that. 6/10 therapy, wouldn’t recommend to everybody, especially if they are already somewhat dissociative as I seem to have been.

    • ARabbiAndAFrog says:

      Anything that dictates how people should be compelled to act is political; climate change becomes a political issue when any kind of legislative measure is proposed.

      • ikew says:

        The issue is both factual and political. They can be discussed separately.
        If one decides that the political side of the issue is important enough that the factual doesn’t matter, that’s one thing. If he decides that the political is so important that he must insist that the facts supporting the other side are wrong (when they may not be), then he is setting himself up for failure, as his position is built on a shaky foundation that is easily exploited.
        If one’s rejection of anti-climate-change legislation is deeply political, but masked as the position that climate change isn’t actually happening, the opposing side can present more evidence until they weaken his publicly held position to the point of ridicule. If the opposing side has a strong grip on academia and/or the media, the evidence doesn’t even need to be particularly strong. This will leave him in a very weak position at the table later, when the issue of legislation is forced.
        If one however rejects anti-climate-change legislation entirely on political grounds, this gives certain freedoms. No matter how many tear-jerking videos of the displaced inhabitants of an island in the Pacific Ocean he is forced to endure, he does not have to concede any ground politically, at least until legislation is proposed that takes into consideration his highly specific requirements in regard to governmental overreach.
        Overall I feel the “climate-change-denial” side was defeated politically the moment they allowed themselves to be manipulated into insisting publicly that climate change isn’t happening. Which seems somewhat typical of the political right over the last half-century or so.

        • FeepingCreature says:

          They can be discussed separately – but they won’t be, because arguments are soldiers.

          If you want this to be discussed separately, you have to create a space that is deeply, convincingly on board with rank hypocrisy. To the level of “I think climate change is happening, but I’ll still vote as if it wasn’t, and I don’t want to talk about why.” And nothing even slightly bad or embarrassing can be allowed to happen to this person. Anything less than that, and you’ll get an incentive to adjust your factual beliefs to support your political ones.

        • ARabbiAndAFrog says:

          I don’t think there’s a lack of different opinions on the spectrum – climate change isn’t real, is real but not anthropogenic, is real but any legislation that isn’t focused on destroying China is meaningless, is real but not worth legislating – you are just mostly presented with strawmen by the media and by stupid people who parrot them, mistaking them for the official party line.

        • Alex M says:

          I agree with you. I think that Republicans are ultimately setting themselves up for failure and ridicule by failing to acknowledge the reality of climate change, even while mass migration and environmental apocalypse occur all around them. This should be deeply frightening for them, since whenever a disaster occurs, the first thought in the public consciousness is “Whom should we blame for this?” Do Republicans really want to be the fall guys who take the blame for the end of the world?

          A wiser strategy would be to say “Look, climate change is happening, but we disagree on the best approach to combat it. Democrats keep saying that we should go green, but any rationalist who has studied incentive problems knows that it is useless for us to go all in on the environment when China and lots of third world countries are continuing to pollute and build coal plants. Why should the United States shoulder the immense financial burden of stopping climate change when we are the nation that will be least impacted? If these other nations – which will likely be totally obliterated by climate change – don’t even care enough about their own survival to adopt green technology, then why should we subsidize their existence when we could instead spend that money to improve our own climate change resilience?”

          To this, Democrats will predictably answer “Because mass migration means that their problems will become our problems. It’s easy to say that their stupidity will only impact them, but in reality we’re talking about millions of immigrants sweeping across the border once their nations start collapsing.”

          And the Republican answer can simply be “Hey, their problems don’t have to be our problems. The only reason they would be is if you guys are too chickenshit to play hardball. We have the technology to make the border completely impassable if we want it to be. Deploying landmines and weaponized AI at the border can drop immigration to literally zero. These civilizations chose not to cooperate and pay their fair share when it comes to stopping climate change, so they made their bed and they can lie in it. Play stupid games, win stupid prizes. Best of all, since the citizens of these nations are now aware that there will be no mercy for them if their own stupidity and stubbornness causes their nations to collapse, fear will make them highly incentivized to change their selfish ways and start adopting green technology quicker, because now it’s their own lives and the lives of their children on the line.”

          You might say that this is an unrealistic position for Republicans to take, because it sounds villainous and unsympathetic. And it is… at least in the current year. But culture changes rapidly depending on how desperate your situation is. In a world full of abundance (like the one we have now) that kind of uncaring “survival at all costs” mentality sounds very unsympathetic to your average voter. But in ten years or so, after a few European countries have completely collapsed due to mass migration, people are going to be a lot more scared and desperate, and they will be willing to allow their governments to take much more drastic measures if it preserves their own quality of life. If Republicans start using negative talking points about these other societies now, pointing out how we subsidize them to be more green while they waste and embezzle the money on internal corruption (which is a completely fair accusation in many cases), then by the time the world is at a crisis point, the game plan of sealing the borders and letting the “uncivilized barbarians” drown is going to look pretty appealing to your average voter. The logic going through their heads is “Why should we endanger our own chances at survival so that you can live? You brought this upon yourselves by being selfish and failing to cooperate when you had the chance.”

          Look at how quickly European culture has changed in the past few years. When the refugee crisis started, Germany had a policy of Willkommenskultur. But as soon as it turned out that the refugees were not the well-behaved people that delusional idealists expected them to be, that attitude changed real quick. Populism surged, and now many European countries are actively hostile to migrants. And this is only the beginning! Imagine how hostile they’re going to get once one of these lovey-dovey open-borders countries totally collapses due to mass migration. Shit will hit the fan with a quickness, and anybody who has a credible game plan to stop civilization from falling into chaos (whether that’s a politician looking to gain power, or an entrepreneurial merchant selling weaponized AI or other useful technologies) will most likely end up in high demand.

          • Simon_Jester says:

            A wiser strategy would be to say “Look, climate change is happening, but we disagree on the best approach to combat it. Democrats keep saying that we should go green, but any rationalist who has studied incentives problems knows that it is useless for us to go all in on the environment when China and lots of third world countries are continuing to pollute and build coal plants.

            Much like the “global warming isn’t happening” strategy, this is vulnerable to being mugged by reality. For example, when China starts investing in solar power harder than any other nation on the planet and building scads of nuclear reactors.

            At some point it becomes apparent that China is taking the problem seriously, or at any rate more seriously than the Republican Party.

            You might say that this is an unrealistic position for Republicans to take, because it sounds villainous and unsympathetic. And it is… at least in the current year. But culture changes rapidly depending on how desperate your situation is. In a world full of abundance (like the one we have now) that kind of uncaring “survival at all costs” mentality sounds very unsympathetic to your average voter. But in ten years or so, after a few European countries have completely collapsed due to mass migration, people are going to be a lot more scared and desperate,

            This, too, is a strategy vulnerable to being mugged by reality if the expected collapses don’t materialize. Given the level of sensationalism that has so far been required to keep up the narrative of “European nations are on the brink of collapse because too many migrants,” this may be difficult.

            You might say that this is an unrealistic position for Republicans to take, because it sounds villainous and unsympathetic. And it is… at least in the current year.

            The other problem with the suggested argument is that it basically boils down to “we should be prepared to machine-gun millions of people at our borders rather than just, y’know, build solar panels and live in smaller apartments closer to the city center and things.”

            Inasmuch as this is a hard sell, it’s a hard sell. If it becomes an easy sell, you’re going to have an entirely different set of problems. Starting with “what if a government accustomed to ‘not being squeamish’ about machine-gunning millions of otherwise harmless people who want to enter the country stops ‘being squeamish’ about its treatment of people inside the country?”

            The mentality of machine-gunning climate refugees for trying to cross the border is not far removed from the mentality of deporting one’s own racial or class “undesirables.”

          • Alex M says:

            Much like the “global warming isn’t happening” strategy, this is vulnerable to being mugged by reality. For example, when China starts investing in solar power harder than any other nation on the planet and building scads of nuclear reactors.

            You have a very unrealistic sense of confidence in China’s willingness and ability to address climate change. My sense is that they pay lip service to the problem and then go right ahead and build hundreds of new coal plants. Perhaps you should pay less attention to what China is saying and more attention to what they are doing.

            At some point it becomes apparent that China is taking the problem seriously, or at any rate more seriously than the Republican Party.

            You say this, and yet you have the balls to talk to me about being “mugged by reality?” Your view of reality itself (certainly, insofar as it applies to China) seems completely at odds with the evidence. In Beijing, the air is so polluted that I could stare directly at the sun at midday, and yet China is still going full speed ahead on coal. In their capital city, the tap water is undrinkable and makes people violently ill. Nobody cares. China doesn’t care about the environment, nor will they until it causes a revolution. Their ruling class is primarily made up of very short-term thinkers who lack the ability to resolve cooperation problems.

            Given the level of sensationalism that has so far been required to keep up the narrative of “European nations are on the brink of collapse because too many migrants,” this may be difficult.

            Again, I don’t think you’re living in the same reality as the rest of us. From my perspective, what has been far more difficult is maintaining the fictional narrative that everything in the EU is fine. Our current US president is talking about not defending NATO allies while German soldiers are training with sticks because they don’t have enough guns. Generally speaking, when you are a country with no effective military to speak of and your main ally is talking about pulling out of the alliance and leaving you to your own devices, your situation is somewhere between “totally fucked” and “up shit creek without a paddle.” The EU most definitely is on the brink of collapse, and I’m not sure how you can interpret the data any other way. When the government is unable to maintain order that’s a symptom of civilizational collapse. When you are unable to police areas within your own country because law enforcement has effectively ceded control to organized gangs, this too is a symptom of civilizational collapse. Anybody with common sense can see it for what it is, regardless of how the media may try to spin it. The collapse of Rome started with a refugee crisis also.

            The other problem with the suggested argument is that it basically boils down to “we should be prepared to machine-gun millions of people at our borders rather than just, y’know, build solar panels and live in smaller apartments closer to the city center and things.”

            That’s not a problem at all. People only get squeamish about death when they have to deliver it up close and personal. When it’s a drone delivering death remotely, Americans don’t care much. The further your average voter is from doing the dirty work themselves, and the easier you make it for them, the less reluctant they will be to pull the trigger. When the message is as easy as “Hey, push this button and killer AI will patrol the border for you. They will humanely push people out and only gun down people who resist (and let’s be honest, those people are kind of asking for it)”, then most voters will be very inclined to agree, particularly when they have just seen European nations collapse for being too compassionate.

            Inasmuch as this is a hard sell, it’s a hard sell. If it becomes an easy sell, you’re going to have an entirely different set of problems. Starting with “what if a government accustomed to ‘not being squeamish’ about machine-gunning millions of otherwise harmless people who want to enter the country stops ‘being squeamish’ about its treatment of people inside the country?”

            Do you even pay attention to the news? Our government has never been squeamish about its treatment of people inside the country. They’re only squeamish about being discovered. The idea of pushing a button and having unstoppable emotionless AI take care of the border control problem is far nicer and more sympathetic than some of the dark extralegal actions that our government has already committed.

            I’m sorry if this line of thinking bothers you. I’m just saying that climate change is a multipolar coordination problem, and trying to solve any sort of multipolar coordination problem without leveraging fear as a tool against defectors is completely impossible. This truth may be bitter and hard to swallow, but it is a fact of life. That means that if you seriously want to resolve the problem (instead of just virtue-signalling intent to resolve the problem) then you have to accept fear as one of the tools in your mental toolbox. The progressive refusal to leverage fear against foreign countries who fail to uphold their climate change commitments is the reason that progressives have failed so utterly at combating climate change. They are trying to fix a complex problem with an inadequate set of tools.

    • eqdw says:

      For the record, your characterization of climate skeptics is one of the most accurate descriptions of my point of view on the matter that I have ever seen someone write down. Congratulations for being the first person to engage with people like myself in good faith. That last sentence might sound sarcastic but I mean it sincerely

    • Radu Floricica says:

      My view as well, regarding climate science. Anthropogenic, but slow, not that bad, not all bad, and unlikely that the place to start is first world countries now. And sure as hell not with plastic straws. But I’m guessing that’s not what Scott is talking about – there most likely exists a minority that is just rejecting anthropogenic climate change outright. Maybe not in our bubbles – probably not in our bubbles – but it’s there. For another so-controversial-it’s-fun example – socialism (hey, Scott started it!). Stuff like rent control. For another – libertarians who completely ignore commons problems. The extremes of any position are pretty obviously wrong, and yet there are people who are arguing for them (presumably) in good faith.

      Regarding Inner Voices Therapy, and the main point of the article, probably my main mental health (? mental optimization?) issue is that the inner actors don’t have voices. I know they’re there, I understand them intellectually (Minsky’s Society of Mind is a damn good book btw), it’s pretty obvious they exist and are quite opinionated but… they don’t have a voice in my inner dialogue. Hell, I don’t have an inner dialogue. Any hint on how I could create one would probably be most useful for me long term.

      • Eigengrau says:

        My experience with people who do not believe in climate change outside of the Rationalism sphere (non-Rationalists being a much, much larger group than Rationalists; I do not think this is some small minority) is that they barely have a coherent belief regarding AGW at all, and are mostly just repeating whatever talking points they last saw in a meme or a PragerU video or something. They’ll at once claim that 1) all those temperature graphs are faked via massive international conspiracy to take your hard-earned money, so there’s no warming, 2) there is warming but it’s fine because the climate has always been changing, 3) scientists all lied about global cooling in the 70s so they’re obviously just lying again, there’s no warming, 4) it’s impossible for 400ppm atmospheric CO2 to warm the planet because that feels like a really small number, 5) there was a cold snap last week, checkmate liberals

        etc.

        This is the level of intellectual engagement I’m used to irl. They seem to sincerely believe it; it’s just that their standard for argumentation is ridiculously low.

        (And fyi, banning plastic straws is about plastic pollution in oceans and waterways, not global warming, though I’m sure there are some foolish environmentalists out there who have conflated those two issues as well)

        • John Schilling says:

          My experience with people who do not believe in climate change outside of the Rationalism sphere (non-Rationalists being a much much larger group than Rationalists, I do not think this is some small minority), is that they barely have a coherent belief regarding AGW at all, and are mostly just repeating whatever talking points they last saw in a meme or a PragerU video or something.

          This is needlessly and suspiciously specific. Your characterization is generally true of people who hold any strong belief re climate change, regardless of whether that belief is “climate change is certainly real and dangerous” or “climate change is certainly not real”. They just use different memes and videos for source material.

          One of these groups will coincidentally hold beliefs that are closest to objective reality, but that doesn’t make them smarter or more rational than the other. And really, they’re all fairly smart and rational about this, because outside the freakishly small Rationalsphere, there is almost no benefit to holding climatological beliefs that are objectively accurate and a high cost to holding climatological beliefs discordant with those of your social circle.

          • Eigengrau says:

            I see the point you’re making but I think it’s somewhat of a false equivalence.

            The average AGW-believer has been hearing for the last 30 years that global warming is real and serious and all the relevant experts and authority figures are concerned about it. So they basically just defer to the consensus/experts, because hey, those people at NASA/NOAA etc probably know what they’re talking about.

            This is a fairly reasonable heuristic overall.

            Meanwhile, the average AGW non-believer rejects 30 years of scientific consensus on the basis that either 1) those experts are fools who know less about their topic of study than me, A Random Guy, or 2) those experts are all liars/frauds/biased and acting nefariously to gain more power and money.

            Both groups have very little knowledge of the details of global warming, and have little incentive to care about accuracy, as you point out. If you asked them about specific details, they’d both get lots wrong, since they get their information from memes and other bad media. But would you agree that the first heuristic (“defer to expert consensus”) is more reasonable than whatever heuristics are driving the second set of beliefs (“predictions of catastrophe are almost always wrong”, “anything the left wing government gets behind is probably bad”, “I trust my own intuitions over what some pencil-pushing urban elites have to say” as a few potential examples)?

          • Aapje says:

            @Eigengrau

            The average AGW-believer is in fact not listening to NASA/NOAA scientists, nor do they read the IPCC reports. Instead, they tend to get their information from people who distort the science for various reasons.

            But would you agree that the first heuristic (“defer to expert consensus”)

            Ironically, the falsehood that mainstream consensus = expert consensus is part of the distortion.

          • The Nybbler says:

            Thirty years, did you say?

            Climate catastrophism has a terrible track record. And the people putting their fingers on the temperature scale aren’t particularly subtle about it any more (e.g. “pausebusting”). So I think the “so-called experts are liars and frauds” hypothesis is not something which can be rejected out of hand.

          • John Schilling says:

            So they basically just defer to the consensus/experts, because hey, those people at NASA/NOAA etc probably know what they’re talking about.

            As Aapje points out, they do no such thing. And I’d wager that if you ask three random AGW-believers, two of them wouldn’t even be able to tell you what “NOAA” stands for without a hint. They defer to non-scientist middlemen like Al Gore and Bill Nye and I guess now Greta Thunberg, who tell them that the Consensus Of Smart Honest Science Guys is X and that all the Not-X believers are liars being paid by Big Evil. And then they feel good about being on the side of the smart honest people, and they find themselves surrounded by friends and family who tell them how smart they are to believe X.

            Which is pretty much exactly what the average AGW-denier does.

          • Eigengrau says:

            @TheNybbler

            Are you misinterpreting “A senior U.N. environmental official says entire nations could be wiped off the face of the Earth by rising sea levels if the global warming trend is not reversed by the year 2000.”

            as

            “A senior U.N. environmental official says entire nations could be wiped off the face of the Earth by the year 2000 by rising sea levels if the global warming trend is not reversed.”

            ?

            Because otherwise his predictions are not unreasonable, even though the article represents only the doomiest and gloomiest end of the forecasting.

          • The Nybbler says:

            Trends weren’t reversed by the year 2000. It’s now the year 2019. The only nation mentioned that was wiped off the face of the Earth is the one they said would do best — the USSR. Admittedly, Russia has indeed had a bumper wheat crop since then, but the US and Canadian wheat regions have not become dust bowls. Even Bangladesh, though it has indeed experienced flooding (and predicting that is like predicting the sun will rise in the East), is still around.

          • Eigengrau says:

            @John Schilling

            Not sure where we actually disagree here. Yes, AGW-believers mostly get their info from scientific middlemen. And yes, they have little knowledge of the actual details. But the scientific middlemen lean really heavily into appealing to expert consensus. Greta Thunberg’s whole bit is begging people to listen to scientists because she is only a child. So the appeal to AGW-believers is still very much “this is what the experts think”.

            Anyways, this is getting way off topic now. My original point was that, in my experience, climate skeptics that reject AGW outright are not a minority of climate skeptics overall, as Radu suggested.

            (Also, for the record, *I* didn’t know what NOAA stands for without looking it up. Something oceans something atmosphere?)

          • Aapje says:

            @Eigengrau

            Greta Thunberg’s whole bit is begging people to listen to scientists because she is only a child.

            No, Thunberg’s bit is to accuse older people of failing children/future generations, demanding that older generations give up wealth.

            In her UN speech, she claimed that: “People are suffering. People are dying. Entire ecosystems are collapsing. We are in the beginning of a mass extinction”

            None of that is in the IPCC reports.

            Anyways, this is getting way off topic now. My original point was that, in my experience, climate skeptics that reject AGW outright are not a minority of climate skeptics overall, as Radu suggested.

            This depends completely on how you define skeptics. According to a YouGov survey, 15% of Americans believe that climate change either doesn’t happen or that it is not caused by humans. However, according to another poll, 62% of Americans believe that climate change is caused mostly by humans. So that makes for 38% of Americans who can be called skeptics by a definition of a skeptic as someone who rejects the idea that climate change is mostly caused by humans. Yet the majority of them apparently don’t reject AGW outright.

            A survey also found that 14% of Americans believe it is too late to do anything. You can call these people skeptics of the current approach to combating climate change.

            Then you have people who object to specific solutions, like the idea that climate change can be fought without nuclear power, without gas, etc. These people can also be called skeptics by some definition.

          • viVI_IViv says:

            @Eigengrau

            Guess it’s time to link this post again.

            Over the last decades, climate scientists have been predicting anything from a new Ice Age to warming so extreme that the polar ice caps would completely melt. Anything from snow-free Britain to Britain with a Siberian climate. None of these catastrophic predictions turned out to be accurate.

            Add all the scandals of scientists caught red-handed fiddling with their data to achieve predetermined outcomes, or colluding to suppress the publication of papers presenting evidence contrary to their position, and you don’t really get a picture of trustworthiness.

          • Simon_Jester says:

            It is kind of disingenuous to use the very large error bars associated with 1975-1980 climate modeling to discredit the much narrower error bars associated with 2015-2020 climate modeling.

            Firstly, climate modeling is a computationally intensive field, and the multiple orders of magnitude in improvement of computer technology would have an obvious, predictable effect on the quality of the models.

            Secondly, any model based on data tends to get better, not worse, as more data are added. We have 30-40 years more climate data now than we did at the dawn of the science, including entire categories of data that did not exist or were in their infancy then (e.g. satellite data), and a wealth of ‘deep history’ data (e.g. ice cores) that had not been collected in those days.

            Thirdly, we have now had about two generations to evaluate and reject the most deeply flawed models of how climate would change. Several such models have indeed been rejected. This is no more ‘proof’ that the remaining models are flawed than the debunking of Piltdown Man ‘proves’ that australopithecus africanus never lived.

            What the average AGW believer is believing is that the scientific community, on the whole, has gotten its shit together, and that while some of their best efforts at evidence-based climate prediction may be wrong, not all their efforts are wrong.

          • The Nybbler says:

            It is kind of disingenuous to use the very large error bars associated with 1975-1980 climate modeling to discredit the much narrower error bars associated with 2015-2020 climate modeling.

            The much narrower error bars haven’t been tested. If your 1975-1980 models didn’t work, and your 1990 and 2001 models didn’t work, why should I believe your 2015-2020 models?

        • eric23 says:

          that they barely have a coherent belief regarding AGW at all, and are mostly just repeating whatever talking points they last saw in a meme or a PragerU video or something

          This is pretty typical for what dumb people (and there are many of those) believe when they doubt AGW. However, dumb people who support AGW aren’t much better. And there are also some smart people who doubt AGW.

      • kai.teorn says:

        > unlikely that the place to start is first world countries now

        Whether you like it or not, first world countries are a place where most things start; if they don’t start there, they rarely stand a chance. So it’s not an issue of who pollutes more; it’s an issue of who will or will not be listened to.

    • Dacyn says:

      I don’t really see your description of what the skeptic believes as matching the emotional/rational dissonance described in Scott’s post. It is more like “Climate change is real, but there are various caveats to that, as well as other reasons why we shouldn’t vote based on it even if it is real.” But both the emotional and the rational brain are saying this. What doesn’t seem to happen is people saying “Climate change is real, and I should vote accordingly, but I can’t bring myself to do so.”

    • rjk says:

      Notice how it’s a position that starts with an open mind regarding the facts, becomes increasingly biased as the topic becomes more political (as is always the case), and then falls back through a series of positions all the way down to a complete rejection of (perceived) governmental overreach.

      This sounds a lot like “solution aversion”. The gist is that people are afraid of the proposed actions, and so express excessive skepticism toward the evidence that motivates those actions. Only when carefully pushed to do so will they reveal that their true objection is to the proposed solution rather than the evidence for the problem.

    • John Schilling says:

      Sorry for responding to the part of your post you most likely don’t want to be responded to all that much, but:

      I’ve never heard someone say “I know climate change is real, but for some reason I can’t make myself vote to prevent it.”

      really doesn’t match my experience. About half of the skeptics I’ve discussed this with hold the following belief: [Description]

      This is baffling to me, because the belief you describe is quite charitable and reasonable, but your equating it with Scott’s formulation is very much not. The people you describe might well say,

      “I know climate change is real, though probably not catastrophic, but for a very obvious reason I can’t make myself vote for things that probably do very little to prevent it and will be catastrophic in other ways”.

      The bit where you do a hack edit like so,

      “I know climate change is real ~~though~~ but ~~probably not catastrophic, but~~ for ~~a very obvious~~ [some] reason I can’t make myself vote ~~for things that probably do very little~~ to prevent it ~~and will be catastrophic in other ways~~”, is fundamentally dishonest. Yes, the person hypothetically said those words in that order, but by editing out all the other important words, you invert a rational position into an irrational one and falsely attribute that irrationality to another. You shouldn’t do that, and you should feel bad about having done that.

      • Simon_Jester says:

        The problem arises when multiple different lines of potential solutions are all rejected, and then the skeptic proceeds to vote in a government that aggressively denies the existence of global warming and cuts funding for even studying how to solve the problem.

        If actions reveal preferences, this is not suggestive of a preference for ‘rational, effective’ action to counter global warming. It suggests a strong preference for doing nothing and hoping things won’t be that bad.

    • hopaulius says:

      What is the universe in which one can “vote to prevent climate change” in any sense other than symbolic? I challenge anyone to produce a shred of evidence that anyone’s vote will have any influence on global climate. To imagine that one’s vote in, say, the US, will slow the construction of coal-fired power plants in China and avert global climate catastrophe (if that is what you “believe in”), is a leap of faith undreamed of by the most devout religionist.

      • Aapje says:

        Even for domestic reductions in CO2 production, my newspaper is cycling between:
        – X is an important solution for climate change
        – X is untenable due to reasons

        They unwittingly seem to be sending a message that it is rather hopeless.

      • Simon_Jester says:

        I will note that if China really weren’t interested in addressing this problem they’d be doing a lot less with solar and nuclear.

        And furthermore that, if nothing else, we’re probably not going to get meaningful efforts at geoengineering until politicians actually face the prospect of having to spend trillions on carbon emissions reduction.

      • slapdashbr says:

        Your argument generalizes to “democracy doesn’t work at all and you should never vote for any reason.” Do you agree with that generalized statement?

    • PeterDonis says:

      “Climate change appears to indeed be happening. The degree to which it is threatening is likely overblown due to the current political climate (heh). I am not yet fully convinced it is anthropogenic in nature, as (ref.1), (ref.2) and (ref.3). I know these are contested and there are many other studies contradicting these. As I am not a climatologist, I consider the issue to be “currently unsolved”. When politics is involved, I am not eager to accept the overall consensus if there is even a single piece of evidence that does not appear to be fabricated that challenges it. In case climate change is indeed anthropogenic, I am still not convinced it can be countered by policing the first world rather than currently industrializing nations. And even if we are at fault, whatever the effects of it might be, they cannot be worse than a one world government led by the current crop of globalist neoliberal nanny state wanna be dictators we have pushing for it.”

      The error I see Scott making in the post is simpler: he started with “believe in anthropogenic climate change” and then switched to “vote to prevent it”. One can believe anthropogenic climate change is happening but still quite rationally refuse to vote for any of the schemes that have been proposed to prevent it–simply because you don’t think any of them will in fact prevent it. And that belief seems perfectly rational to me, since the key takeaway from all climate prediction anyone has done thus far is that nobody is very good at predicting future climate. Which in turn implies that nobody is very good at predicting the effects of interventions. With that set of beliefs, adaptation seems like a much better option than attempts at mitigation, particularly attempts at mitigation that are going to cost many trillions of dollars and require restructuring the entire global economy. The track record of humans at doing that without causing disaster is even worse than our track record of predicting future climate.

      To get back on topic for the overall post: before one concludes that people are irrationally ignoring evidence, one should first be very, very clear about exactly what belief one is claiming there is evidence for. That means being very, very careful not to conflate beliefs that, while distinct, are often held together.

  3. Joy says:

    > Richard might be able to say “I know people won’t hate me for speaking, but for some reason I can’t make myself speak”

    What Richard would likely say is “I know people won’t hate people like me for speaking, but for some reason they will still hate me personally for saying exactly the same thing”

    • The Nybbler says:

      What if he’s right? That there’s some difference, unnoticed by him and noticed (but not consciously or in a way that can be articulated) by others, which results in him being hated for saying the same things others say.

      • Joy says:

        His views may well be accurate, yes, potentially for the reasons you mention.

        • The Nybbler says:

          And if he’s right, successfully encouraging him to do the thing will only _reinforce_ his belief that he shouldn’t do it. And if he does it, comes back and reports that he did it and was hated for it, and the therapist says something idiotic like “Well you must have done it wrong” or “I didn’t mean _that way_”, now the therapist has a bloody nose and Richard has an assault charge.

  4. denverarc says:

    I have a shelf full of books from the 70’s (Structure of Magic, Virginia Satir et al) with plenty of actionable things to enact to help get people over these issues.

    Thing is, the theories behind them have all been disproven, or the techniques don’t match modern ethical considerations, so no one uses them any more even though the methods work.

    For instance, Richard could have had this issue sorted out simply by getting him to deliberately be a complete tosser to other people for a bit, in the way he personally dislikes. He’d find it isn’t so bad actually, his mental schema would update, and then there’d be a natural integration. Good luck getting that past the ethics committee and/or insurance agent.

    Another way to do this is to generate another extreme feeling to counteract the anxiety which removes the feeling but leaves the behaviour as a choice.

    Anyway…

    The most interesting question is not why but how we hold different opinions at the same time. A decent working but unprovable theory is we use different sense memory to do so. i.e. our thinking about topic one is held in visual memory and our thinking about topic two is held in auditory memory and so they don’t interact (except if reality/therapy intercede).

    Finally, a totally random point about climate change. Most people think it’s real, but a lot of people don’t want to change their behaviours because they won’t benefit personally – they also know that openly saying “I don’t care if your or even my own grandchildren live in an oven, I got mine. SUVs are awesome and I’ll be long dead by the time the shit hits the fan” is a non-starter, and so instead they move upstream and deny the science.

    • Scott Alexander says:

      “The most interesting question is not why but how we hold different opinions at the same time. A decent working but unprovable theory is we use different sense memory to do so. i.e. our thinking about topic one is held in visual memory and our thinking about topic two is held in auditory memory and so they don’t interact (except if reality/therapy intercede).”

      I think this might be giving people too much credit. I think it’s the same phenomenon as Russell conjugation (“I am principled, you are obstinate, he is pig-headed”), and the thing where people hold different opinions on the same topic depending on what word you use for it (“Violent crime is very low and the media panics people about it unnecessarily…but gun violence is a national crisis”)

      • denverarc says:

        You can use the different modalities to get a different response reliably.

        Richard in your example will have images of his father in mind with the anxious feeling attached but his auditory experience (internal auditory as well) will not be attached to those images or feeling.

        We aren’t really talking about the pig-headed; we are talking about people who openly admit their emotional reactions are dumb (the reason they are seeking therapy in the first place) but cannot change them.

        Well, by putting together their split experiences, you can resolve their difficulties. However as I said before the ethics of it are far from in vogue because it means putting people in guaranteed distress for a possible (though highly likely) good outcome.

      • Simon_Jester says:

        …and the thing where people hold different opinions on the same topic depending on what word you use for it (“Violent crime is very low and the media panics people about it unnecessarily…but gun violence is a national crisis”)

        How exactly does one accurately describe a society where muggings, burglaries, and petty vandalism have been declining since the 1980s… But where random mass shootings by people with little or no prior criminal record, and who cannot plausibly hope to profit from their crime, have been rising during the same time period?

        While gun violence is a subset of violent crime, the central example of “gun violence” as we now imagine it to be a threat is not a central example of “violent crime.”

        It’s entirely possible to live in a society where violent crime (an aggregate of bare-knuckle attacks, knifepoint muggings, breaking and entering, possibly rapes, and other offenses) is on the decline, and yet to have a society where the central example of “gun violence” (Person A shoots Person B) is about as common as it ever was, or even more common.

        Furthermore, an epidemic of mass shootings may present a national crisis of a very different sort than a generalized violent crime wave, one which requires entirely different solutions to address. It turned out that the best solutions to the 1970s and ’80s crime wave were “ban leaded gasoline and legalize abortion.” These solutions do not appear to be working to prevent rising levels of mass shootings.

        • John Schilling says:

          How exactly does one accurately describe a society where muggings, burglaries, and petty vandalism have been declining since the 1980s… But where random mass shootings by people with little or no prior criminal record, and who cannot plausibly hope to profit from their crime, have been rising during the same time period?

          One hopelessly besotted with a mass media optimized for entertaining fools. “If it bleeds, it leads” is an oversimplification; it has to be novel and bloody to be suitably enthralling infotainment. So, every few years, there’s a new moral panic, chosen almost at random and without regard to the actual material harm involved. See e.g. the Year of the Shark.

          Usually this causes no more than pointless terror, because e.g. sharks don’t watch CNN and say “hey, yeah, biting tourists is the thing to do!”. Potential mass shooters do, and so there’s positive feedback. That, plus increasing existential despair and not-giving-a-fuck, gets you the Year of the Mass Shooting.

          Which, like the Year of the Shark, will last until the media gets bored of it or something bigger happens. But because of the positive feedback, it may take a bit longer this time.

    • brmic says:

      In the same vein, I once knew someone deep into astrology and eventually came to realize they were just operating with different symbols. Where I would summarize a pattern as strident, self-assured etc. they would say ‘fire type’. And they would then derive predictions of social in-/compatibility with other types that were not too different from the predictions my symbol system would offer.
      On a long-term, global scale I’m convinced the system that is open to change or revision is better and the one with specific, falsifiable claims is better, but locally the difference for day-to-day life is small and easily swamped by other factors.

      Similarly, UtEB obviously would have some connection to other symbol systems and be prima facie plausible. It was created by humans after all, so it’s hardly surprising that it matches to some patterns observable in humans. And locally, it’s trivially true that it works for some people, just like talking about ‘fire types’ works in the sense of generating useful predictions or solutions for some people. However, the part that’s interesting to me is whether it creates specific, falsifiable claims (apparently no) and whether it can update (apparently no). So, how is it different from astrology?

      • denverarc says:

        Great question – I’d also ask “What can you do with this knowledge?”

        With astrology you can get housewives to part with their small change, with UtEB the practical bit seems to be extracting a slightly larger amount from a smaller and harder to convince demographic.

        As often occurs with “therapy books” – UtEB seems to be either just a re-write of a couple of 60’s/70’s classics with new terminology or a genuinely brand new arrival at existing knowledge.

        What can you do?

      • Kaj Sotala says:

        However, the part that’s interesting to me is whether it creates specific, falsifiable claims (apparently no)

        UtEB gives you a specific sequence of steps for achieving belief updating, as well as a general theory for why those steps work and what you can do to troubleshoot them if they don’t work. See my review (which Scott links to at the beginning of the post) for details.

        I’ve personally found that they work very well, and that they also accurately describe several other self-therapy techniques that I’ve successfully used and which in retrospect feel like special cases of UtEB’s more general process.

      • Dacyn says:

        Er… you can see right in the name that astrology is not primarily about coming up with new names for everyday concepts, but rather about connecting them with astronomical phenomena. That is the part that is bogus.

        • brmic says:

          @Dacyn
          I see your point, but consider:
          (1) Astrologers work from a false premise (the stars influence …), but
          (1a) part of that premise is just causally wrong, but empirically right for the wrong reasons. E.g. Plausibly children born in winter have sufficiently different experiences to children born in summer so that there is a very small overall ‘birth season’ effect, which correlates perfectly with the zodiac sign.
          (1b) From that bogus base they then need to expand in a way that seems plausible to them and current unbelievers. So, by necessity, they add true elements to their system, based on their own experience and observations. (This makes it sound more agenda driven than it is, IMO. Like everyone else astrologers try to make sense of the world around them and attempt to include regularities they believe to exist into their worldview.)
          (2) Freud believed the ego, the id and the super-ego are theoretical constructs of such importance that they might as well be actual agents in the mind. That’s bogus, too, but the therapy framework built on top of it works. (Somewhat.)
          And just like astrologers are hampered in their attempts to explain things by having to relate it back to the influence of distant stars, Freudian psychoanalysis was and is hampered by the need to connect new ideas back to its founder’s revelations.
          (3) If you take something modern like Myers-Briggs types or the Big Five model the issue persists. ‘Progress’ here consists of avoiding speculation as to the causal grounding of those theories and in some cases an agnostic stance on the psychological ‘existence’ (for want of a better term) of the construct, i.e. like intelligence, extraversion is that which the extraversion questionnaire measures.

          2 & 3 work together, in that if I can accept Freudian psychotherapists I have no strong arguments against astrologers, and I have to accept Freudians (with caveats) because nobody else has offered a better solution. The ‘best’ solution is not to talk about the problem.
          That doesn’t mean I believe in astrology, just that, as long as they don’t talk about the actual stars, astrologers may have as valuable insights, expressed through their symbol systems, as psychologists in general and the proponents of new therapeutic approaches in particular.

          • Dacyn says:

            Regarding (1a), sure it’s vaguely plausible that there is a birth season effect loosely relating personality to zodiac signs, but I’ve never seen any evidence of it. Also, astrologers believe that things like planetary conjunctions have effects, and there isn’t any plausible correlate to this.

            Regarding Freudian psychotherapy, doesn’t the Dodo Bird Verdict say that you don’t need valid insights in order to have a useful psychotherapy? I guess the analogous result would be that people can find horoscopes give meaning to their life even though they don’t correlate with reality. If that is what you mean to say, then I agree, but it doesn’t have much to do with truth claims.

  5. Rachael says:

    Yay, very insightful and enjoyable, like your classic posts.

    Typo: “even a person who knows ghosts exist will be afraid to stay in an old supposedly-haunted mansion at night with the lights off.” Presumably that should be “knows ghosts don’t exist”?

    • Ttar says:

      like your classic posts

      brutal

      a person who knows ghosts exist will be afraid to stay in an old supposedly-haunted mansion

      Those of us who know ghosts exist also know the supposedly-haunted mansions are fine. It’s the actually-haunted mansions you have to avoid.

    • ARabbiAndAFrog says:

      Once you realize ghosts exist, you will never have to be scared and alone.

  6. Phil H says:

    I loved this post, but I wonder about one model underlying it. Scott often compares a dysfunctional mind to a “normal” mind. For example:
    “the degree to which mental mountains form a barrier will cause the disconnectedness of valleys to manifest as anything from “multiple personalities”, to IFS-findable “subagents”, to UtEB-style psychiatric symptoms, to “ordinary” beliefs that don’t cause overt problems”
    The potential problem here is that if we don’t know much about the normal/healthy/ordinary mind – and the whole point of the rationality movement has surely been to highlight how little we know, understand, and control our own minds – then this may not tell us much.
    In this case, the issue is the ability of the mind to react rationally to the evidence around it. If healthy people usually fail to change their minds in response to the evidence, then getting the unhealthy person to do so might be the wrong kind of thing to attempt. What I quite like about the UtEB therapy as described here is that it is explicitly extraordinary, trying to realign a mind that is doing normal things, just in a harmful direction; rather than saying the mind should change itself on an ongoing basis.
    I don’t think I have a point to make, I’m just thinking through the issues out loud. Sorry if I’m being verbose or obvious!

    • Aapje says:

      If healthy people usually fail to change their minds in response to the evidence, then getting the unhealthy person to do so might be the wrong kind of thing to attempt.

      I think that you’d want temporary plasticity, not permanent.

      UtEB might be superior to some psychedelics for this reason, if the drugs cause permanent ~~damage~~ change.

    • coraharmonica says:

      What I quite like about the UtEB therapy as described here is that it is explicitly extraordinary, trying to realign a mind that is doing normal things, just in a harmful direction

      Not saying that all neuroses fall into this category, but I certainly wonder how many cases of clinical depression/anxiety/etc. are really just normal brain functions gone haywire in response to relatively innocuous life events (like, e.g., Richard’s)—I imagine the number is nonzero. The personal narrative offered in biological explanations of neurosis can be pretty crippling, sometimes bordering on deterministic. I know someone at the moment who sees his psychological problems as all caused by congenitally malfunctioning biology, his only option for fixing it being the right pharmaceutical concoction, and he’ll need to be on it for life. Bracketing whether or not he’s right, it’s likely that many people with neuroses like Richard’s end up convincing themselves that their problems are genetically determined when there are viable reasons why totally neurotypical minds can become neurotic.

    • Garrett says:

      There’s also the issue of separating complex issues which we still don’t understand sufficiently to “bin” properly.

      If a person is a poor sketch artist and wants to improve, there are a number of standard ways to do that, such as art classes. But if a person lost their eyes and vision due to something else (e.g. cancer), their problem isn’t skill, it’s more physical.

      With issues like “a dysfunctional mind”, we have to separate more simple things which might be solved with psychological care like CBT, exposure therapy, or one of the reviewed therapy books. This is different from something which is either more complex or more widespread, such as schizophrenia or late-stage syphilis. Put another way – there are various categories of “dysfunction” which we are still only barely able to use, let alone treat. And treating them in a uniform way doesn’t seem like it will be the best approach.

  7. mad-mad-beaver says:

    Why pick global warming as an example?
    With all the politicized bullshit and vested interests and propaganda around this topic?
    Maybe some other topic with a clear unenforced consensus (like minimum wage and rent control effects, or hereditary IQ differences, or whether meat-eating causes cancer and heart disease) would’ve been better suited.

    • NoRandomWalk says:

      Note: am one of those people who disagrees w Scott on global warming (I think it’s happening, most likely man caused, the scientific/political establishment is very misleading, and the proposed solutions will cause much more harm than benefit even taking the scientific claims at face value)

      I think it’s a good example, if indeed I am irrational, because I have feelings that could make me irrational about global warming. I don’t like people using science/math I don’t understand as a soldier to advance a political agenda that is not informed by the science/math, and usually when people do this the science/math is wrong (or my irrational mind makes up a convincing sounding reason that allows me to think they’re wrong).

      By comparison, I have no emotional intuition about minimum wage, rent control effects, etc. My opinions on these topics aren’t really informed by lumping them in an emotional category.

      • slapdashbr says:

        >I have feelings that could make me irrational about global warming. I don’t like people using science/math I don’t understand as a soldier to advance a political agenda that is not informed by the science/math, and usually when people do this the science/math is wrong (or my irrational mind makes up a convincing sounding reason that allows me to think they’re wrong).

        But if you don’t actually understand the science/math, how do you evaluate “not informed by the science/math?” I don’t see a way to interpret this other than “I’m willfully refusing to engage the subject rationally.”

    • eqdw says:

      There is most certainly not a clear unenforced consensus about the last two of your examples.

      And while there are clear unenforced consensuses _amongst experts_ on the first two of your examples, those clear unenforced consensuses are very strongly disagreed with by politicians and the public, and so I don’t see how they’re all that different from climate change

      • Ttar says:

        Which experts who do research on the subject disagree about red meat and IQ heritability? Last I checked, everyone who does a study on it agrees that red meat mildly raises risks and that IQ is somewhat-to-very heritable. Disagreement is all about the nature of the causal links, and whether it matters/is meaningful, not whether the correlative links exist.

    • Garrett says:

      My inner cynic thinks that he’s A/B testing the types of examples he uses to see which ones generate the greatest amount of off-topic conversation.

  8. haxen says:

    I like the analogy, but my guess is that Friston and Carhart-Harris are not saying that. The topology there looks more like a 3D visualisation of a cost function (often used in training a neural network). Here, the prior is more like the point where a helicopter drops a cross-country skier in a mountainous landscape. The skier generally wanders around but finds it easier to go downhill. They’ll probably get to a valley, and if it’s not too deep, may get out of it (OK, so once in a valley, it’s fine to think of it like a prior opinion, and if the hills aren’t too steep, new evidence is the energy to change coordinates, weighed against the steep valley prior) and over into a deeper valley, eventually settling on the deepest valley in their area. This may not, however, be the absolute deepest valley. So it’s really a map of a single decision model, not several competing models.

    Nevertheless, I do like the idea of multiple valleys with limited information exchange as a good way to think about the fact that the brain seems to have several of these prediction models, and that they operate somewhat independently.

    In the skier’s case, the latitude/longitude is the coordinates of the “decision”, and the final valley is the location that Richard chooses. It’s his cost function that’s a bit broken – he’s made the “don’t speak up” valley too deep, so he stays there. Even if the “share your ideas” valley is deeper (which it may or may not be in his mind), there isn’t enough evidence-impetus to move over there. To me, the above approaches are all about trying to strengthen the evidence-impetus energy to move out of the valley. That’s a bit abstract here, as it’s really more complicated than 2 dimensions. (Maybe we could say throw a ball to hit a target, and vary strength=latitude and angle of arm=longitude.)

    Caveat: I know enough about gradient descent and cost functions, but don’t know a great deal about the maths behind Friston’s work. Nevertheless, that sort of map is pretty common in describing training artificial neural networks.

    What I find quite interesting is that @marthinwurer’s link shows that skip-level connections flatten the valleys, behaving much like psychedelics are described to. A skip-level connection in a neural net is a little bit like connecting a neuron to more of its neighbours. I wonder if what psychedelics do is “fire” more connections, or open up unused/rarely-used connections to neighbours, which would be very similar to converting a regular network to a skip net/resnet. That would then be a nice hypothesis for the actual mechanism by which psychedelics achieve these effects.
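
    To make the skier picture concrete, here’s a minimal gradient-descent sketch (Python; the landscape and every constant in it are invented purely for illustration, not taken from Friston or the linked post). Plain descent from an unlucky drop point settles into the shallow valley, while injected noise that slowly anneals away (a crude stand-in for the valley-flattening discussed above, not a model of skip connections themselves) can let the skier hop the ridge into the deeper valley:

    ```python
    import numpy as np

    # A toy 1-D cost landscape: a shallow valley near x = -1 and a deeper
    # valley near x = +2 (the constants are made up for illustration).
    def cost(x):
        return 0.5 * (x + 1) ** 2 * (x - 2) ** 2 - 0.3 * x

    def grad(x, eps=1e-5):
        # central-difference numerical gradient; good enough for a demo
        return (cost(x + eps) - cost(x - eps)) / (2 * eps)

    def descend(x, lr=0.01, noise=0.0, steps=5000, seed=0):
        rng = np.random.default_rng(seed)
        for t in range(steps):
            # annealed noise: large early (can hop ridges), zero by the
            # end, so the skier still settles into *some* valley
            x = x - lr * grad(x) + noise * (1 - t / steps) * rng.normal()
        return x

    print(descend(-1.5))             # ~ -1: stuck in the shallow valley
    print(descend(0.8))              # ~ +2: a luckier drop point
    print(descend(-1.5, noise=0.5))  # often ~ +2, depending on the seed
    ```

    With noise=0 the ridge between the valleys is never crossed no matter how long you run, which is the “not enough evidence-impetus” situation; the annealing schedule is what lets the system explore early and still commit to a valley at the end.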

  9. Kaj Sotala says:

    Kaj Sotala has an outstanding review of Unlocking The Emotional Brain; I read the book, and Kaj’s review is better.

    ^_^ <3 ^_^

    Richard might be able to say “I know people won’t hate me for speaking, but for some reason I can’t make myself speak”, whereas I’ve never heard someone say “I know climate change is real, but for some reason I can’t make myself vote to prevent it.” I’m not sure how seriously to take this discrepancy.

    I haven’t heard this either, but I have heard (and experienced) “I know that eating meat is wrong, but for some reason I can’t make myself become a vegetarian”. Jonathan Haidt uses this as an example of an emotional-rational valley in The Happiness Hypothesis:

    During my first year of graduate school at the University of Pennsylvania, I discovered the weakness of moral reasoning in myself. I read a wonderful book—Practical Ethics—by the Princeton philosopher Peter Singer. Singer, a humane consequentialist, shows how we can apply a consistent concern for the welfare of others to resolve many ethical problems of daily life. Singer’s approach to the ethics of killing animals changed forever my thinking about my food choices. Singer proposes and justifies a few guiding principles: First, it is wrong to cause pain and suffering to any sentient creature, therefore current factory farming methods are unethical. Second, it is wrong to take the life of a sentient being that has some sense of identity and attachments, therefore killing animals with large brains and highly developed social lives (such as other primates and most other mammals) is wrong, even if they could be raised in an environment they enjoyed and were then killed painlessly. Singer’s clear and compelling arguments convinced me on the spot, and since that day I have been morally opposed to all forms of factory farming. Morally opposed, but not behaviorally opposed. I love the taste of meat, and the only thing that changed in the first six months after reading Singer is that I thought about my hypocrisy each time I ordered a hamburger.

    But then, during my second year of graduate school, I began to study the emotion of disgust, and I worked with Paul Rozin, one of the foremost authorities on the psychology of eating. Rozin and I were trying to find video clips to elicit disgust in the experiments we were planning, and we met one morning with a research assistant who showed us some videos he had found. One of them was Faces of Death, a compilation of real and fake video footage of people being killed. (These scenes were so disturbing that we could not ethically use them.) Along with the videotaped suicides and executions, there was a long sequence shot inside a slaughterhouse. I watched in horror as cows, moving down a dripping disassembly line, were bludgeoned, hooked, and sliced up. Afterwards, Rozin and I went to lunch to talk about the project. We both ordered vegetarian meals. For days afterwards, the sight of red meat made me queasy. My visceral feelings now matched the beliefs Singer had given me. The elephant now agreed with the rider, and I became a vegetarian. For about three weeks. Gradually, as the disgust faded, fish and chicken reentered my diet. Then red meat did, too, although even now, eighteen years later, I still eat less red meat and choose nonfactory-farmed meats when they are available.

    That experience taught me an important lesson. I think of myself as a fairly rational person. I found Singer’s arguments persuasive. But, to paraphrase Medea’s lament (from chapter 1): I saw the right way and approved it, but followed the wrong, until an emotion came along to provide some force.

  10. Scared_kid says:

    I have an ongoing disagreement with my roommate about this very subject; he is very much on board with the Internal Family Systems model. (He knows it as “parts theory”, but the principle is the same.) He’s willing to maintain that people always act perfectly rationally on the basis of the evidence available to them. It’s just that there are certain mental blocks that can prevent evidence from being made available to you (or to your parts). So the activities that we generally think of as making us better at rational thinking are really just making us better at receiving evidence.

    The main issue I have is the inclusion of an evidence-based predictive model in all parts. With Richard, for example, IFS explains what’s going on in the following way:
    1. Richard in his childhood saw that his father talked a lot and that his peers disliked him.
    2. One of Richard’s parts–the Anxiety Part–formed the evidence-based conclusion that people who talk a lot are disliked.
    3. The Anxiety Part is incapable of perceiving evidence that contradicts this conclusion, even when Richard himself (or one of his other parts) does perceive it.
    4. Every time he is faced with the prospect of speaking, his Anxiety Part starts trying to communicate its evidence-based conclusion: that people will dislike him. This is what makes him anxious.

    Here is my alternative view:
    1. Richard in his childhood saw that his father talked a lot and that his peers disliked him.
    2. Therefore, as a child, when Richard thought of talking, he thought of being disliked by his peers. When he thought of being disliked by his peers, he felt anxious. Therefore, when he thought of talking, he felt anxious.
    3. This established a well-trodden neural pathway between the prospect of talking and feeling anxious.
    4. Because Richard still has this neural pathway, the prospect of talking still makes him anxious.

    A lot of the same analogies hold. I don’t think my model entails a different therapeutic approach. But it does mean that you can feel emotions without having any particular beliefs. I can feel fear without believing that I’m in danger, I can feel happiness without believing that I’ll have a desire fulfilled, I can feel amusement without believing whatever it is that would rationally justify amusement as a response. It also means I don’t need to attribute to the source of every emotion the ability to form predictive models.
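
    To make the “well-trodden neural pathway” idea concrete, here’s a toy Hebbian sketch (Python; everything in it is my own invented illustration, not anything from UtEB or IFS). Repeated co-activation strengthens a single association weight, after which the “talking” signal alone produces the anxiety response, with no stored belief anywhere in the system:

    ```python
    # Toy Hebbian association: co-activation of a "thinking of talking"
    # signal and an "anxiety" signal strengthens one weight; afterwards
    # the first signal alone triggers the second.

    def hebbian_update(w, pre, post, lr=0.2):
        # classic Hebb rule: the weight grows when both signals fire together
        return w + lr * pre * post

    w = 0.0  # initial strength of the talking -> anxiety pathway

    # Childhood: thoughts of talking (pre = 1) repeatedly co-occur with
    # anxiety about the father being hated (post = 1).
    for _ in range(10):
        w = hebbian_update(w, pre=1.0, post=1.0)

    # Adulthood: the prospect of talking alone now drives an anxiety signal.
    anxiety = w * 1.0  # pathway output given the "talking" input alone
    print(anxiety)     # ~2.0: the pathway fires, yet nothing in this system
                       # stores or evaluates a claim like "talking -> hated"
    ```

    The point of the toy is just that the mapping is a bare weight: there is no slot in it that could hold, or update on, a belief.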

    Allergic reactions occur when someone’s immune system attempts to protect against an otherwise harmless food as though it were poisonous. Does that mean that the person’s immune system must believe that the food is poisonous? If I’m diabetic, does that mean my pancreas believes I need more insulin? If I have a heart attack, is it because my blood believed it should clot? Doesn’t anything in my body happen without anything believing it should? My point here is that you can explain the cause for a part of you acting a certain way without attributing to it any belief relating to that cause. Your immune system might cause an allergic reaction to a harmless food because other foods similar in composition weren’t harmless, but that doesn’t suffice to show that your immune system is forming evidence-based predictive beliefs. Richard might feel anxious about talking because people disliked his father for talking a lot, but that doesn’t suffice to show that Richard believes people will dislike him if he talks a lot.

    This next bit is just a personal anecdote explaining why this matters to me–it’s not any kind of argument, except I suppose in response to the question, “Who cares?”

    I suffer quite badly from anxiety. (Usually around talking and social situations–perhaps I am indeed Richard!) Sometimes, when something triggers it, people will try to help by trying to rationally convince me that there’s no danger. This usually doesn’t work very well, and so I might reply, “I know there’s no danger. The issue isn’t that I think there’s some danger. It’s that I often feel afraid even when I know there’s no danger.”

    Under the IFS view, what I’m saying is false. It’s true that rationally convincing me didn’t work, but that’s because of the “rational” part, not the “convincing” part. It’s the same issue you might have if I thought climate change was a hoax and ignored any evidence-based argument to the contrary. What you need to do is get me in a position where you can convince me that there’s no danger.

    But I’m already convinced of that, dammit!

    Obviously this is well-intentioned and the people saying it are without exception very sweet, but it’s a surreal experience to be surrounded by people who keep insisting that I’m not in danger but that it’s OK for me to believe that I am while I protest that I am well aware there’s no danger. It’s as though I woke up one day to find that everyone I meet insists that I’m a human being and not a cockroach, but that it’s perfectly fine for me to believe that I’m a cockroach and they’ll be there for me for as long as I need. If I protest that I don’t think I’m a cockroach, they soothingly tell me that they know, and it’s just that on some level I can’t bring myself to accept it.

    • Kaj Sotala says:

      A lot of the same analogies hold. I don’t think my model entails a different therapeutic approach.

      There is the difference that, under your model, the anxiety couldn’t be changed by anything that would be reasonably described as belief updating. You would need something like extended desensitization or exposure therapy to fix the anxiety. Whereas under the UtEB model, the anxiety could in principle be quickly fixed by finding the right updates to make.

      Under the IFS view, what I’m saying is false. It’s true that rationally convincing me didn’t work, but that’s because of the “rational” part, not the “convincing” part. It’s the same issue you might have if I thought climate change was a hoax and ignored any evidence-based argument to the contrary. What you need to do is get me in a position where you can convince me that there’s no danger.

      I wouldn’t call this an accurate interpretation of IFS: at least, in this kind of a situation the “convincing” isn’t going to look anything like just telling you that you are safe, the way one would in a normal discussion. IFS has specific techniques for actually accessing the source of the emotion and changing it, which are quite different from what it sounds like the people around you are doing.

      • Scared_kid says:

        There is the difference that, under your model, the anxiety couldn’t be changed by anything that would be reasonably described as belief updating.

        This isn’t entirely true, is it? It just couldn’t be changed by anything that’s merely belief updating. But presumably in actual fact anything that’s belief-updating will also alter your mental associations in some way. It seems perfectly plausible to me that the belief-altering strategies used in IFS can break the association between talking and feeling anxious.

        It’s true that it would be strange to think, under my model, that you could have a sudden realization that cures your anxiety, whereas under UtEB you might think that as soon as your anxiety part accepted a new belief you were cured. However, I don’t think this changes much practically, because UtEB (at least in my experience) generally grants that the process of convincing your parts is lengthy and gradual, and that epiphanies are rare. Further, there’s no reason in principle that you couldn’t break a mental association all at once, so even if epiphanies were common it wouldn’t settle the dispute conclusively.

        At least, in this kind of a situation the “convincing” isn’t going to look anything like just telling you that you are safe, the way one would in a normal discussion.

        I hope I’m not using unnecessarily loaded terminology here. By “convincing”, I just meant getting someone to adopt or give up a belief–I’m not trying to hint at any particular method for doing that. I know that these friends aren’t doing IFS, or at least not purposefully or competently. But they do have the same model as IFS, where the issue is my false belief and what needs to happen is that I get rid of it.

        By analogy, it could be that someone’s tactic for convincing me I’m not a cockroach wasn’t to straightforwardly tell me, “You’re a human,” but rather to employ a series of specific techniques designed around making me give up my belief that I’m a cockroach. It could be that these techniques have helped many people suffering from emotional distress, and even that they’re quite effective at helping me feel better myself. But it would still bother me that they think I believe that I’m a cockroach.

    • Aapje says:

      The verb alieving should go mainstream, IMO.

      • Scared_kid says:

        I like the word alief a great deal, but I’m not so keen on that definition.
        “To subconsciously feel (something) to be true, even if one does not believe it; to hold an alief.”

        This is a quote from the paper that I believe (or at least alieve) coined the word (“Alief in action (and reaction)” by T. S. Gendler):
        “An alief is, to a reasonable approximation, an innate or habitual propensity to respond to an apparent stimulus in a particular way.” I like this definition much better.

        The main difference is that the first definition requires a) that everything with aliefs has subconscious feelings and b) that those feelings make (or are) truth-sensitive claims. The second definition doesn’t require either of those.

        By the second definition, we could say that your immune system has an alief that makes it reject the strawberry, and we could even say (at least if we accepted some generous definitions of words like “stimulus”) that a cloud has an alief that makes it precipitate when it’s cold. Neither of those are acceptable under the first definition unless we attribute subconscious feelings to immune systems and clouds, respectively.

        The first definition is close to what’s going on in UtEB. The second–I think–is close to what’s going on in my model.

        • Aapje says:

          I have to disagree. I think that alief should be restricted to mind-based mechanisms, because otherwise it just becomes a catch-all term for all causal mechanisms that are not conscious. Do you raise your lower leg when the hammer hits the knee? You have the alief that you should kick when hit in the knee with a hammer.

          Please no.

          • Scared_kid says:

            I don’t have the “Please no” reaction to that scenario–make your neologism mean whatever you want–but I’m fine accepting a version of the second definition that’s restricted to the mental. In context it’s pretty clear that that’s what Gendler is thinking of anyway. Does that bring us into agreement?

          • Aapje says:

            Is there a clear distinction between the mind and the body? Isn’t the reflex of the leg a neurological response that can’t be easily separated from the brain?

          • Scared_kid says:

            I’d say I don’t think there’s a clear distinction between mental reactions and physical reactions, which I take to mean that there’s no clear distinction between when something’s an alief and when it isn’t. I don’t think that’s unusual or problematic, though. If you’re in the right mood, you can imagine edge cases for just about any word.
            (https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/)

          • Aapje says:

            I want alief to be more restricted. In my view, an alief is something that the mind subconsciously reasons with, not any mental reaction.

            For example, a person who honestly believes that black men are no more violent than white men, but who is much more afraid when encountering a black man in the dark than when encountering a white man, has the alief that black men are more dangerous.

            However, a person who has done something so many times that they start to do it reflexively does not have an alief. For example, imagine a guy who always checks his pocket for his wallet when exiting his front door. If he has just put the wallet into his pocket and believes that he did so, then the check is not caused by an alief that he doesn’t actually have a wallet in his pocket. Instead, it is part of a reflexive ritual.

          • Scared_kid says:

            Right. I mean, use it how you want, of course, but the reason I don’t like that definition is that I don’t actually think those two cases need be very different. Presumably part of the causal reason the person, despite their honest belief, is afraid when they see the black man is that they’ve encountered lots of stories that linked black men and violence (news stories, TV shows, people mimicking black dialects in order to sound like gangsters, etc.). They’ve formed a habit of, when they see a black man, thinking of violence. That’s not a belief, though–you can’t use that habit to form an argument. It’s just a causal mechanism.

            Similarly, seeing a raven might make me think of Edgar Allan Poe, but I don’t think that’s because of any kind of subconscious belief. It’s just that the thought of ravens has a statistical tendency to cause the thought of Edgar Allan Poe.

            Both of these are similar to the wallet case except that the response is a thought rather than an action.

      • LeeBird says:

        “To subconsciously feel (something) to be true, even if one does not believe it.”

        Oh, so like…knowing that I’m not destined to save the world at some point in my life, but still behaving as if I will?

        • Scared_kid says:

          See, I don’t think those are quite the same. Here are two possible people:

          Person 1 knows he’s not destined to save the world, and he goes about his day-to-day life just like everyone else. However, unlike everyone else, he has a subconscious feeling that he will save the world. It doesn’t change his behaviour much, though.

          Person 2 knows she’s not destined to save the world, and she doesn’t feel like she is. However, she still behaves as though she does–not to try to deceive anyone, but perhaps because up until recently she both believed and felt that she was destined to save the world, and so she’s formed all the habits you’d have if you believed you were destined to save the world.

          I think person 2 is who Gendler was describing, but the definition Aapje linked to is more like person 1.

    • Ttar says:

      The problem is that fixing things in your model requires (as Kaj said) lots of work: manually desensitizing you by forcing you to speak and then giving you a reward, until your anxiety response goes extinct and is replaced by confidence that speaking will be rewarded. This is (as I understand it) the behaviorist approach, and unlike all other forms of therapy, it works. But it is hard and can’t be done from the comfort of a therapist’s office, so everyone keeps looking for a magical cure that doesn’t require as much work/direct confrontation of the phobia.

      • Scared_kid says:

        But you could in theory just have a magical cure that instantly broke the connection between speaking and the anxiety response, couldn’t you? What on earth have I been doing all this thinking for if it can’t be turned into a get-rich-quick scheme?

      • Nancy Lebovitz says:

        No, there’s a type of desensitizing which consists of teaching the person how to relax, then going through a series of carefully graded increasing exposures to the phobia with relaxation after each exposure. The idea is to gradually break the connection to being frightened.

  11. Kaj Sotala says:

    Exploring the connection to politics a bit more, Coherence Therapy: Practice Manual And Training Guide has this page where it claims that emotional learning forms our basic assumptions for a wide variety of domains, including ones that we would commonly think of as being the domains of rationality:

    Unconscious constructs constituting people’s pro-symptom positions tend to be constructs that define these areas of personal reality and felt meaning:

    * The essential nature of self/others/world (ontology/identity)
    * The necessary direction or state of affairs to pursue (purpose, teleology)
    * What necessarily results in what (causality)
    * How to be connected with others; how attachment works (attachment/boundaries)
    * How self-expression operates (identity/selfhood/boundaries/creativity)
    * Where to place responsibility and blame (causality, morality)
    * What is good and what is bad; what is wellness and what is harm; what is safety and what is danger (safety/values/morality)
    * How knowing works; how to know something (epistemology)
    * The way power operates between people (power/autonomy/dominance/survival)
    * What I am owed or what I owe (justice/accountability/duty/loyalty/entitlement)

    Examples (verbalizations of unconscious, nonverbal constructs/schemas held in the limbic system and body)

    Ontology: “People are attackers. If they see me, they’ll try to kill me.”
    Causality: “If too much is going well for me, that will make a big blow happen to me.”
    Purpose: “I’ve got to keep Dad from withdrawing his love from me by never, ever disagreeing with him.”
    Attachment: “I’ll get attention and connection only if I’m visibly unwell, failing, hurting.” “You’ll reject and disconnect from me if I differ from you in any way.”
    Values: “It is selfish and bad to pay attention to my own feelings, needs and views; it is unselfish and good to be what others want me to be.”
    Power: “The one who has the power in a personal relationship is the one who withdraws love; the other is the powerless one.”

    It seems pretty easy to take some of those examples and see how they, or something like them, could form the basis of ideologies. E.g. “people are attackers” could drive support for authoritarian policing and hawkish military policy, with elaborate intellectual structures being developed to support those conclusions. On the other side, “people are intrinsically good and trustworthy” could contribute support to opposite kinds of policies. (Just to be clear, I’m not taking a position on which one of those policies is better nor saying that they are equally good, just noting that there are emotional justifications which could drive support for either one.)

    That might be one of the reasons why you don’t see “I know that X is correct, but can’t bring myself to support it” in politics so much. For things like “will you be hated if you speak up”, there’s much more of a consensus position; most people accept on an intellectual level that speaking up doesn’t make people hated, because there’s no big narrative saying the opposite. But for political issues, people have developed narratives to support all kinds of positions. In that case, if you have a felt position which feels true, you can often find a well-developed intellectual argument which has been produced by other people with the same felt position, so it resonates strongly with your intuitions and tells you that they are right.

    This could also be related to the well-known thing where people in cities tend to become more liberal: different living conditions give rise to different kinds of implicit learning, changing the kinds of ideologies that feel plausible.

    • Aapje says:

      most people accept on an intellectual level that speaking up doesn’t make people hated, because there’s no big narrative saying the opposite.

      What about Twitter (or more generally, the internet)?

      My impression is that speaking up does make you hated, and possibly also loved. For people who get relatively little benefit from love but are very hurt by hatred, it then seems perfectly reasonable to neutralize themselves. Or, if they do benefit a decent amount from love, to adopt a fake persona tailored for approval (although this doesn’t work that well, because they tend to recognize that it is a lie, if only subconsciously).

      Of course, one can then argue that the perception of hurt by hatred is false/excessive, but then again, pretty much everyone around me in real life seems to be playing a role. I’ve seen quite a few people remark on how they and others play different roles in different contexts, never being their true selves, but always acting, playing one role or another.

      I’m not convinced that ‘being yourself’ is actually allowed, unless ‘yourself’ is close to an allowed role.

      • Kaj Sotala says:

        Yeah, I was thinking in the specific context of Richard’s case, and comparable situations. Twitter/Internet are different as you say.

        • Aapje says:

          Twitter is merely different in that it connects bubbles. In real life, people can be so different that there is no safe bubble for them, or they may not have access to one.

          Even if a person has an excessive sense of danger, the danger can still be substantial. Compare it to a soldier with PTSD who feels under attack even at home. This soldier is not helped if you cure him of all fear and on the next deployment, he doesn’t seek cover near the front line and gets himself shot. Or if he wanders in front of cars at home, for that matter.

          The UtEB book seems to present it as easy to destroy an unnecessary coping behavior, without the caveat that this can result in destroying needed coping behavior (or that a lesser amount of the excessive coping behavior can still be needed).

          From my perspective, Richard has a relatively trivial problem, where the things he wants to say are mostly harmless and he benefits from losing that filter. Quite possibly, the example was selected to be simplistic, to make the method look good.

          It seems a lot more complex in reality, especially when reception can differ greatly based on fairly enigmatic traits/status/etc. For example, in school I once said something publicly in class, resulting in mere silence; immediately after, a popular kid said the same thing (not mockingly, but seriously) and got an appreciative response.

          Imagine that I hadn’t said anything in that case and would have come to a UtEB practitioner complaining about anxiety. They might have used the fact that the popular kid got a good response as evidence that I would have gotten a good response when speaking out, even though that would be false, as we know from what actually happened when I did speak out.

          • Kaj Sotala says:

            As you say, the Richard case was a relatively simple one, and used as the first case study because it’s good to start with a straightforward example. The book covers more complicated cases later on.

            Something that the authors emphasize is that the therapist shouldn’t jump to any assumptions about the client and what’s right for them, but instead “work like an anthropologist” to uncover what’s the cause of the different behaviors. They also emphasize that the therapist should assume that all of the client’s schemas are serving some valuable purpose, as opposed to viewing them as irrationalities to be fixed.

            In that case, if they are doing the process right and you know – even on just an implicit level – that speaking up is actually dangerous to you, then that should come up during the process. Even if it failed to come up explicitly, my experience with such techniques strongly suggests that the belief update will simply fail to go through properly in that case. Your brain will only let you change your behaviors once it has become sufficiently convinced that doing so is actually safe.

            I discussed this for a bit in my review:

            “Something that the authors emphasize is that when the target schema is activated, there should be no attempt to explicitly argue against it or disprove it, as this risks pushing it down. Rather, the belief update happens when one experiences their old schema as vividly true, while also experiencing an entirely opposite belief as vividly true. It is the juxtaposition of believing X and not-X at the same time, which triggers an inbuilt contradiction-detection mechanism in the brain and forces a restructuring of one’s belief system to eliminate the inconsistency.

            The book notes that this distinguishes Coherence Therapy from approaches such as Cognitive Behavioral Therapy, which is premised on treating some beliefs as intrinsically irrational and then seeking to disprove them. While UtEB does not go further into the comparison, I note that this is a common complaint that I have heard of CBT: that by defaulting to negative emotions being caused by belief distortions, CBT risks belittling those negative emotions which are actually produced by correct evaluations of the world.

            I would personally add that not only does treating all of your beliefs – including emotional ones – as provisionally valid seem to be a requirement for actually updating them, this approach is also good rationality practice. After all, you can only seek evidence to test a theory, not confirm it.

            If you notice different parts of your mind having conflicting models of how the world works, the correct epistemic stance should be that you are trying to figure out which one is true – not privileging one of them as “more rational” and trying to disprove the other. Otherwise it will be unavoidable that your preconception will cause you to dismiss as false beliefs which are actually true. (Of course, you can still reasonably anticipate the belief update going a particular way – but you need to take seriously at least the possibility that you will be shown wrong.)

            Similar to what Eliezer previously suggested, this can actually be a relief. Not only would it be an error as a matter of probability theory to try to stack the deck towards receiving favorable evidence, doing so would also sabotage the brain’s belief update process. So you might as well give up trying to do so, relax, and just let the evidence come in.

            I speculate that this limitation might also be in place in part to help avoid the error where you decide which one of two models is more correct, and then discard the other model entirely. Simultaneously running two contradictory schemas at the same time achieves good communication within the brain, as it allows both of them to be properly evaluated and merged rather than one of them being thrown away outright. I suspect that in Richard’s case, the resulting process didn’t cause him to entirely discard the notion that some behaviors will make him hated like his dad was – it just removed the overgeneralization which had been produced by having too little training data as the basis of the schema.

            Another issue that may pop up with the erasure sequence is that there is another schema which predicts that, for whatever reason, running this transformation may produce adverse effects. In that case, one needs to address the objecting schema first, essentially carrying out the entire process on it before returning to the original steps. (This is similar to the phenomenon in e.g. Internal Family Systems, where objecting parts may show up and need to have their concerns addressed before work on the original part can proceed.)”

          • Aapje says:

            @Kaj Sotala

            How many therapists will be capable debuggers when this method goes mainstream? I foresee that many therapists will see Richards everywhere, even if the person is not a Richard at all.

  12. jasmith79 says:

    Nitpicky but important correction: you’re not talking about global warming skeptics.

    You’re talking about global warming deniers.

    Skepticism is a completely rational response to the state of affairs:
    1. It’s a highly politicized issue with all of the expected “rah rah team” tribalism and distortion.
    2. Scientists (even entire scientific disciplines) get stuff like this wrong all the time. It’s part of the process of science: we move asymptotically closer to Truth by stating bold conjectures and disproving them.
    3. Despite #2, the media deals in absolutes. I’m very mildly skeptical about the science (I’m no meteorologist) but I’m super skeptical about the media reporting of it and I’m triply skeptical of people’s alarming tendency to conflate the two.

    That being said, the rational move is of course to act (read: support policy positions) as if it were true: the outcome of runaway global warming is unacceptable, the downsides of doing so if we end up being wrong about the danger are fairly minimal, ergo carbon taxes and what have you. But per the list above there’s nothing wrong with being skeptical while doing it.

    • NoRandomWalk says:

      Would respectfully disagree with “rational move is of course to act as if it were true” if this is equivalent to carbon taxes, etc.
      If global warming is true, then carbon taxes high enough to have an effect would cripple the economy, and – because there isn’t a way to credibly negotiate a global international agreement – they would be implemented extremely corruptly (‘who gets the carbon credits that they can then put up for sale to be traded in cap and trade’, etc.). A global coordination mechanism that would be successful would effectively involve a one-world government, and the impact of this in the long run could be much worse than unmitigated global warming.

      One could simply have the model ‘grow the economy by polluting; if you do, technology will probably advance fast enough that we can eventually end up with a geoengineering solution that’s much more cost-effective than any solutions we currently have available’.

      • jasmith79 says:

        We can hash the details out at the bar. My point was merely that you have to take it seriously, even if you think the story around it is balderdash. And that it’s totally ok, given the circumstances involved, to think that it could in fact be balderdash. There isn’t really a category in the discourse for that position, but a lot of people get unfairly lumped in with science-denying troglodytes for applying generic skepticism which is totally warranted in the case in question.

        • NoRandomWalk says:

          Let’s back up.

          I believe most people who don’t believe in anthropogenic climate change are probably biased. Many of them are very smart. Many of them have read a lot on the subject.

          you’re not talking about global warming skeptics.
          You’re talking about global warming deniers.

          My model is that global warming deniers who are well-educated on the subject do not exist, and that Scott has never met them. The terminology is actively misleading because the people you and I call ‘global warming skeptics’ are probably all 100% on board with the idea ‘global warming is happening and is caused by man’. They are not skeptical about the Thing In The Name at all; that’s just what we call people who are approximately 0% on board with the political agenda of people who would call themselves global warming activists.

          @Scott: since my model is wrong (my prior on you confusing a denier and a skeptic is lower than my prior against well-educated deniers existing; I’ve just never met them in the wild), can you please fix it by pointing to an example of such a person?

          I think the source of my confusion is that I see a lot of readers who are global warming skeptics in these threads, and zero global warming deniers, so when Scott worries about offending his global-warming-denying readers, I only have the uncharitable explanation that he doesn’t grok the difference. Which is wrong; I just am failing to see how.

        • Cory Giles says:

          My point was merely that you have to take it seriously, even if you think the story around it is balderdash.

          How does this differ from Pascal’s Wager? Surely action is not required in every potentially catastrophic case simply because inaction has large downsides. Knowing the probability of different outcomes is very important, because only by knowing that can we decide rationally how much effort should be made.

          My problem as a climate skeptic is that it is very, very difficult to get a realistic picture of the different outcomes contingent on different interventions in all the hype and noise. Among the media and activists, we hear “the world is going to end in 20 years unless we mobilize the entire world to this cause immediately”, and from the other side, “no problem here”. Both of these seem unrealistic and unacceptable.

          It is a source of endless confusion to me that positions like the one I just outlined are consistently lumped in as equivalent to anthropogenic climate denial. It is this conflation that can occasionally be offensive, not someone who genuinely has a different opinion on the level of risk.

          Bringing it back to the OP, perhaps people are not so frequently and obviously irrational as we suppose. Perhaps the appearance of obvious irrationality sometimes comes from an inability to recognize nuance in others when issues become emotionally charged.

          Who says Richard’s father is the explanation for his behavior? The therapist? Because that sounds like the sort of simplistic explanation that would not actually drive real-world behavior. In the real world, if someone has trouble speaking up, I would expect it to have resulted from dozens of little incidents all throughout life where Richard spoke up and was belittled or ignored. I do not think that, for the most part, our mental models are developed by any one incident.

          In the case of climate change, my views have resulted from numerous experiences, including seeing the media exaggerate things many times, my personal experience as a working scientist that has seen the way pressures in science affect results, and the way I have seen all-or-nothing thinking go wrong, such as in religion and politics. All of these dozens and hundreds of little things result in the overall position of “climate skepticism”.

          • EchoChaos says:

            Pascal’s Wager in its original form describes a large benefit that has basically no cost.

            Once each side has a cost, balancing probabilities matters a lot more.
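
            To spell out the arithmetic being gestured at here (a sketch of my own; the symbols p, D, and C are illustrative, not from the thread), the decision rule is

            \[ \text{act} \iff p \cdot D > C \]

            where p is the probability of catastrophe, D the damage it would cause, and C the cost of acting. Pascal’s Wager takes C ≈ 0 (or D = ∞), so the inequality holds for any p > 0 and the estimate of p never matters. Once C > 0 and D is finite, the whole decision turns on how well p can be estimated.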

  13. DanMc says:

    The concepts described here seem to bear a lot of resemblance to the Epistemic Learned Helplessness article Scott reposted on SSC:
    https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/
    Except (despite the heavy negative connotations of the title) that article is about more or less beneficial resistance to new information and argument, rather than unhealthy resistance to healing of psychological trauma.

    Both are wonderful articles, but the mountain metaphor seems weak to me in that the geography of this mental mountain range is wildly unstable. Drugs can level mountains, though in many cases only temporarily, and the right mental framework might be able to perform marvels of geoengineering. Even worse, the geography is treated as a given, rather than something purposeful (as described in Epistemic Learned Helplessness) – in this case, helping the mind not be swayed too much by new information while not being unhealthily resistant either.

    I would submit the alternate metaphor of the mind as a bureaucracy. Instead of mountains and valleys separated by good or poor roads, there are departments with more or less complex rules that throttle traffic between them. One of the main functions of bureaucracy is to preserve the current system and make change hard, so that bad actors can’t derail it. The difficulty of changing one’s mind or considering a viewpoint is less like the quality of a road between two towns and more like the difficulty of navigating the red tape between two bureaucratic departments. Psychedelics don’t level mountains; they are more like temporary political fiats allowing communication regardless of the rules. Instead of a flat plain with great roads, you have a government bureaucracy with few constraints, staffed only by political appointees (no career bureaucrats). Another mind doesn’t have savage Himalayan peaks and no big city to speak of, but rather a bureaucracy so overgrown that little can be accomplished, because the executive is largely powerless.

    What do you think?

  14. Brassfjord says:

    then I took LSD and realized that

    Has there ever been an objectively true insight/revelation/idea from a drug affected brain, that holds even for someone who’s never taken any drugs? What ideas did Steve Jobs get from his LSD trips?

    • Paul Brinkley says:

      I wondered this, too. In light of UtEB, it seems like an easy narrative: LSD makes the mental mountains disappear for a while, as Scott describes.

      One catch I’m seeing is that there may exist many valleys in the brain that really ought to stay that way – isolated swamps where mosquitoes breed. And sometimes the city needs to send a messenger there once in a while anyway, because there’s actually no way to tell a swamp from a valley unless you actually send that messenger.

      And LSD might phase out all the mountains, making swamps as accessible as valleys. If it turns out there are more swamps than valleys, then any psychedelic has the problem of allowing the mind to make spurious connections as well as good ones, and is probably more likely to make the former.

    • Ozy Frantz says:

      The Psychedelic Explorer’s Guide discusses several mathematical theorems and an award-winning mall that were discovered/created (respectively) on LSD trips.

    • Anonymous says:

      > What ideas did Steve Jobs get from his LSD trips?

      I don’t know about Jobs, but Apple’s Bill Atkinson got HyperCard:

      “HyperCard was inspired by an LSD trip that I took on a park bench outside my house.”

      https://twitter.com/jonathansampson/status/728273078359236608

      “HyperCard was a precursor to the first web browser, except chained to a hard drive before the worldwide web. Six years later Mosaic was introduced, influenced by some of the ideas in HyperCard, and indirectly by an inspiring LSD experience.”

      https://www.mondo2000.com/2018/06/18/the-inspiration-for-hypercard/

  15. Nancy Lebovitz says:

    Scott, I don’t think you did justice to the process of eliciting Richard’s problem with speaking up.

    The therapist didn’t have a hypothesis about Richard’s father.

    The therapist asked Richard to imagine wanting to speak up at work, but feeling uncomfortable– and that’s when Richard realized it was about not wanting to be like his father.

    I think the distinction is very important.

    I also think Richard’s not wanting to be like his father included moral disgust as well as not wanting to be hated.

  16. wfwhitney says:

    > More specifically, we propose that psychedelics dose-dependently relax the precision weighting of high-level priors

    Do psychedelics treat the symptoms of autism?

  17. fluorocarbon says:

    The patient has failed to integrate his judgments about the doctor into a coherent whole, “doctor who sometimes does good things but other times does bad things”. It’s as if there’s two predictive models, one of Good Doctor and one of Bad Doctor, and even though both of them refer to the same real-world person, the patient can only use one at a time.

    Sorry if this is off topic, but as soon as I read this section a lightbulb went off in my head. It reminded me so much of the behavior of someone I know; not anyone with BPD or NPD, but one of my uncles. He’s smart and successful, but has always believed in ridiculous (to me) conspiracy theories.

    Unlike the example patient, he doesn’t change opinions about things immediately, but he does divide everything in the world into good and bad. He thinks like this even when it comes to things like restaurants: this restaurant is good, this one is bad. If a “good” restaurant serves a bad dish, then it’s not just that they had a bad day, but the restaurant moves to the “bad” category and he reasons that it must be because “new management,” “they sold out,” or something like that.

    In every discussion or argument I’ve had with him over the years, he’s argued that certain people or groups are bad-intentioned and that’s why bad things happen. I’ve argued that it’s not that simple and that bad things happen because of complex interactions between mostly good people and amoral systems.

    I’ve never been able to convince him (or vice versa). My model of his thinking was that he believed in these theories because he couldn’t reconcile his beliefs that most people are good and that bad things happen: therefore, if bad things happen, they must be caused by a small number of evil people doing things in secret.

    I wonder if there have been any studies about this kind of thinking and belief in conspiracy theories. Does this match anyone else’s experience?

  18. Rationalists wasted years worrying about various named biases, like the conjunction fallacy or the planning fallacy. But most of the problems we really care about aren’t any of those. They’re more like whatever makes the global warming skeptic fail to connect with all the evidence for global warming.

    If the model in Unlocking The Emotional Brain is accurate, it offers a starting point for understanding this kind of bias, and maybe for figuring out ways to counteract it.

    Isn’t this kind of bias already described by Jonathan Haidt’s moral foundations theory? The argument in The Righteous Mind is that moral foundations steer the emotional/subconscious ‘elephant’, and that people who purport to be rational are actually just really good at coming up with post hoc explanations that justify the elephant’s leanings (even the most high-profile rationalists make convincing arguments that the sky is green and up is down in instances where they would benefit from motivated reasoning).

    In this view, Richard (no relation) is not atypical at all: he has some kind of disgust/sanctity reaction to being like his father, and no amount of boring old factual evidence is going to cause him to update. Just like everyone else!

    If this is the class of biases we’re talking about, my main takeaways from Haidt’s book were:
    – politics is literally the mind-killer: a partisan brain distorts reality more than most
    – individual rationality is mostly just fooling yourself
    – group-based rationality is more promising
    – hence the importance of heterodoxy (wisdom of crowds requires uncorrelated errors)
    – making the elephant change direction is much more about emotional resonance and direct experience than careful reasoning

    Of those, the last one seems most relevant to the topic at hand.

  19. Doug S. says:

    I’ve pointed it out before, but “do my beliefs contain a contradiction?” is a problem that’s extremely computationally expensive to solve. Even in the simplest case, where beliefs are statements in propositional logic, consistency checking is NP-complete; the worst-case difficulty goes up exponentially with the number of propositions involved. And real human beliefs seem to be more complicated than propositional logic statements. Brains compartmentalize because *not* compartmentalizing is, in fact, computationally intractable.

    (This argument comes from Gödel, Escher, Bach by Douglas Hofstadter, incidentally.)
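
    To make the blow-up concrete, here’s a minimal brute-force sketch (my own illustration, not Hofstadter’s; representing beliefs as Python predicates over truth assignments is a hypothetical encoding). A naive consistency check has to search up to 2^n truth assignments over n propositions:

    from itertools import product

    def consistent(beliefs, n_props):
        """Return True if some truth assignment satisfies every belief.

        Each belief is a predicate over a tuple of n_props booleans.
        The search is exhaustive: up to 2**n_props assignments.
        """
        for assignment in product([False, True], repeat=n_props):
            if all(belief(assignment) for belief in beliefs):
                return True   # found a model: the belief set is consistent
        return False          # every assignment violates some belief

    # Toy belief set over propositions x0, x1, x2:
    # "x0 implies x1", "x1 implies x2", "x0", "not x2" -- inconsistent.
    beliefs = [
        lambda a: (not a[0]) or a[1],
        lambda a: (not a[1]) or a[2],
        lambda a: a[0],
        lambda a: not a[2],
    ]
    print(consistent(beliefs, 3))  # prints False after checking all 8 assignments

    Eight assignments is trivial; a belief set over even a few hundred propositions is already far beyond any brute-force search, and NP-completeness means no known algorithm escapes that worst case.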

    • TJ2001 says:

      The trouble is that, In Real Life, “beliefs” in people’s heads generally have very little to do with the way people conduct their lives. And so, for example, unresolved contradictions in someone’s expressed “beliefs” have literally (almost) nothing to do with the way they conduct their actual lives or business either way… often because they have learned to “act normal,” as Ozy puts it.

      That’s one of the most troubling issues with the Evangelism of the Religion of Global Warming (or whatever your favorite poster child happens to be – this is simply my example because Scott called it out specifically).

      Is it more important to SAY you believe and to loudly champion “The Belief” in public and on The Internet, or is it more important to DO things and ACT consistent with “Solving the Problem” – whatever “The Problem” might be?

      This is important because the so-called “Climate Protestor True Believers” are often ridiculously wanton, wasteful, destructive, and harmful to accomplishing the results we are after (aka saving the climate), while the “Climate Deniers” they battle are often quite a bit more careful about actual waste, consumption, and their real-life environmental impact – just not because of “Climate Change”.

      Therein lies the rub… The people championing “Climate Change” never disavow the Climate Change Protestors for the horrible problems they cause while protesting: the effigies of trash tires and junk they burn, their energy-inefficient cheap apartments, their clapped-out cars causing a LOT of pollution, the tons of protest posters/flyers they litter everywhere they go, or the particularly bad name their very public behavior gives The Cause…

      But it’s exactly THE SAME everywhere else… Jesus said that the “Pious Religious Leaders” were tools of Satan while declaring that the hated tax collectors and sinful prostitutes had open access to God’s Kingdom… That is shocking – but it’s real life…

  20. elguiney says:

    The mountain-valleys-with-limited-bandwidth-roads model may also explain aphantasia (the phenomenon of not having visual mental imagery). As someone with very strong aphantasia (I have absolutely no mental imagery; I realized this was rare and different in my pre-teens, more than 20 years ago), two observations about my mental functioning have always struck me as interesting:
    1) Though I have no mental imagery while awake, I dream in images. This is consistent with my brain having a very low-bandwidth connection between central voluntary consciousness (the capital city on the plain) and the big ol’ mountain valley that processes imagery, and with the general relaxation of high-level priors/reconsolidation that occurs during sleep loosening things up and allowing better access. It is definitely not that I lack the mental machinery to form visual images, just that I can’t call them up when awake.
    2) People are often surprised when I tell them I’m aphantasic, because in many ways my visual/rotational IQ is quite strong – I’m a good artist, like to communicate ideas visually (I’m a cell biologist and do a lot of drawing of models of proteins and signaling pathways as I teach), and am quite good at thinking spatially – I have a great map sense and don’t get lost. When I engage these capabilities and self-reflect, it very much feels like there’s a lot happening “under the hood,” inaccessible to voluntary consciousness or reflection. I can reflect on the inside of my car, for example, and experience a strong sense of “knowing where all the elements of the dashboard and console etc. are” without having any visual of it (or, as far as I can tell, any phenomenal experience or ‘qualia’ of it at all).

    I’ve never used psychedelics, but if this model is true, I suspect that many aphantasics will be able to perceive visual images under the influence of psychedelics. It would be worth testing.

    • MeaningIsCultivated says:

      I have aphantasia. I relate to the good sense of direction. I believe this may be because I have strong spatial awareness – recalling directions feels similar to recalling where things are when my back is turned to them. I do not relate to seeing in my dreams. In my dreams, I am spatially and qualitatively aware of things, but not visually. E.g. I know that there is a person in front of me that I am conversing with, and that they are angry, but I do not see anything. The exception to this is my phone screen – I can visually dream of words, but not pictures, as they might appear on my phone screen. I do not visually dream the phone they are on.

      I’m doing shrooms tomorrow; I’ll try to remember to report back on visual imagination while on psychedelics. I remember having vivid hallucinations with my eyes closed on prior trips, but I do not recall if they were visual ones.

    • LeeBird says:

      I also have aphantasia; this is how it manifests for me:

      1) no visual memory whatsoever
      2) no mental imagery when awake; vivid dreams during REM (consistent with other narcoleptics)
      3) mild visual distortions/hallucinations with use of psychedelics, persistent whether eyes are closed or open
      4) severely impaired spatial reasoning, poor kinesthesia, and multiple executive function deficits

  21. jerryb0222 says:

    ===Intense emotions generate unconscious predictive models of how the world functions and what caused those emotions to occur===

    ===We already know the brain has an unconscious predictive model that it uses to figure out how to respond to various situations and which actions have which consequences===

    The above quote from Kaj’s review and the sentence from Scott’s post seem similar in some respects to Ignacio Matte Blanco’s “unconscious as infinite sets” theory. From the Wikipedia page: Ignacio Matte Blanco developed a logic-based explanation for the operation of the unconscious, and for the non-logical aspects of experience.

    https://en.wikipedia.org/wiki/Ignacio_Matte_Blanco

    Being a night owl, I find this early for me, and my brain is not yet at warp speed; otherwise I would elaborate more.

    Have a great Thanksgiving everyone!

  22. OriginalSeeing says:

    The end result is a predictive model which is a giant mess, made up of constant “This space here generalizes from this example, except this subregion, which generalizes from this other example, except over here, where it doesn’t, and definitely don’t ever try to apply any of those examples over here.” Somehow this all works shockingly well. For example, I spent a few years in Japan, and developed a good model for how to behave in Japanese culture. When I came back to the United States, I effortlessly dropped all of that and went back to having America-appropriate predictions and reflexive actions (except for an embarrassing habit of bowing whenever someone hands me an object, which I still haven’t totally eradicated).

    In this model, mental mountains are just the context-dependence that tells me not to use my Japanese predictive model in America, and which prevents evidence that makes me update my Japanese model (like “I notice subways are always on time”) from contaminating my American model as well.

    So… should people with social-type phobias try moving to a very foreign culture?

  23. lieronet says:

    For whatever it’s worth to anyone, the therapeutic model UtEB describes was used almost verbatim in the Netflix show Maniac.

  24. hedgehog says:

    I recently read Frustration: The Study of Behavior without a Goal, by Norman Maier. It’s a “therapy book,” and thus has some of the happily-ever-after described in your/Scott’s recent post (I loved that post — the snarky parts soothed my soul), but less (and much less ignorantly) than most therapy books I’ve read. Maier’s book has inspired a revolution in my approach to healing from trauma (my own, I mean; this is not an abstract interest, or a professional one), and (same project, different words) my efforts to think and behave ever more rationally.

    Here’s a précis of the book (with some of my own interpretation/synthesis folded in):

    Not all behaviors are goal-oriented. [Semantics can get in the way here; I can see this claim from angles where it’s true, and angles where it’s not. (The latter is my default inclination, but) I have found it very worthwhile to deliberately inhabit the perspective where it’s true, for the sake of following Maier’s reasoning, and for the sake of putting it to use.] Where a need is strong and the need-meeting goal is persistently unattainable, and no sufficient alternatives are accessible, if the pain of the unmet need is extreme and prolonged, frustration becomes not just an emotion, but a (terrible) state of mind/being. In this condition, problem-solving behavior (characterized by variability — that is, trial-and-error — and by learning, creativity, and insight), which is the natural/healthy default (and which seems to me to be approximately synonymous with rationality), is inaccessible. Frustration behaviors do not resolve by means of reasoning, new information, or exposure therapy; they are reinforced, not quenched, by punishment; and they are unresponsive to schemes that reward opposite or alternative/better behaviors. (Maier offers scads of data; I had to skip those parts, unable to bear accounts of rats driven mad by frustration.)

    Maier proposes four categories of frustration (i.e., non-goal-oriented) behavior: fixation, regression, aggression, and resignation. The particular behaviors that become frustration behaviors, arising in conditions of extreme or prolonged frustration, are restricted to the set of behaviors that happen to be available at the time, and are established based on that availability. They are not “chosen”; they do not arise with regard to their specific/practical results. To assume that all behaviors are goal-oriented — thus, to assume that failure to change a pattern of behavior by means of methods that are effective in changing goal-oriented behaviors must be due to an insufficient or inaccurate analysis of the goals of the problematic behavior — is counterproductive to the purpose of altering problematic behavior, Maier says; if the behavior is a frustration behavior, such methods do not work. (Again: Data abound.)

    Maier’s practical recommendations are very interesting, but I won’t go into those, or I’ll be here all day.

    I have been immensely helped (and, by the transitive rule, so have my loved ones) by viewing my own troubles through this lens offered by Maier, and by revising my notions and tactics accordingly. I’m newly able to recognize what I’d been loath to admit: my efforts to change my most problematic behaviors were entirely unavailing. I was locked into a worldview where every behavior is deliberate and amenable to reason and manipulation; the stories I was telling myself to explain my persistent failure were pretty specious (and ineffective, and quite cruel-to-self). I am more coherent, honest, sane, and effective than I was before reading the book. I am less stuck; I now have work to do, where, before, I had mostly rationalizations to refine.

    Besides the concrete and practical learning, I have had lots of new-to-me ideas since reading the book. For instance, I wonder to what degree autistics’ “odd” behaviors are simply fixated frustration behaviors — arisen from among a very small set of behaviors that would be “available” to a highly sensitive, highly rational mind, driven mad by an illogical, overwhelming/overstimulating society; needing a behavior, any behavior, to avoid complete system shutdown, but unable to tolerate activity that would increase sensory/cognitive overload; and confined to behaviors that are not against “the rules” and are otherwise least likely to directly affect others, and thus least likely to invite intrusion and unwanted attention and physical contact. It would not be the first time a population was driven to unconventional measures in reaction to extreme conditions imposed upon them, and then was identified as odd by those who had imposed the extreme conditions.

    Obviously, the realm of intractable behaviors is bigger than can be accounted for by Maier’s model, but my little corner of the world is functioning a lot better since I read the book.
    ___

    On the topic of psyche/consciousness-parts:

    I’m very isolated socially and geographically, and isolated by the constraints of poverty as well. Though I have plenty of dysfunction, and am extremely motivated to heal/improve, I have given up on therapy; I’ve spent a devastating amount of effort, time, and money on therapists, none of whom has done more good than harm.

    (Speaking of being driven to unconventional measures by extreme conditions …) I’ve had to get very creative on my own behalf; one of the devices I’ve resorted to, one that has helped a lot, is that parts-talking-to-parts method. … I hadn’t realized, before reading the comments above, that it was A Thing; I just noticed a chronic sense of internal conflict, which my active and imagery-prone mind eventually rendered into a sense of harboring a set of warring parties. I gradually began to experiment with facilitating peace talks. They have become very productive.

    It doesn’t surprise me that this isn’t a meaningful/helpful/possible tactic for everyone, but many of my best insights come from moderating discussions between parts, and listening to them. It has long been my impression that each of the parts has a particular interest in which it is primarily invested – a single purpose/priority that it pursues, feels responsible for, and feels entitled to. (It’s that entitlement piece that most fuels difficulties with inter-part communication, mutual recognition and respect, and integration. Loosening my grip on various dynamics of entitlement is a key learning these days, with lots of help from meditation practice.) Listening for the particular interest of each part has been especially useful in my learning to give each part the respect and attention it requires, in order to not revive hostilities.

    I think that’s enough soul-baring for now …

    I’d be interested to hear others’ impressions of Maier’s book.

  25. alwhite says:

    Based on the information in this post, UtEB looks like an experiential therapy (Gestalt, Fritz Perls) crammed into a behaviorism construct. Maybe more directly it looks like Transformational Chairwork with CBT.

    A lot of the trouble with all the therapy books is that everyone is talking about the same stuff but we can’t figure out how to codify it, so we end up making up our own words and codes to express our ideas.

    A counter to the Coherence Therapy model is Acceptance and Commitment Therapy (ACT). Hayes outright declares that modifying distorted or unrealistic thoughts is unnecessary for profound change, and goes a step further, saying that this claim by CBT has limited empirical evidence.

    Therefore, a counter approach to Richard isn’t to get him to recognize all the pieces of his belief system but to get him to recognize the uncomfortable emotion he is feeling in that moment. Then teach him to accept that feeling and not make it go away (by refusing to speak up) and act in accordance with his values anyway in spite of the uncomfortable feeling.

    As you said

    I like UtEB because it reframes historical/purposeful accounts of symptoms as aspects of a predictive model.

    But this isn’t the only way that people can see improvement. The historical/purposeful accounts are loosely categorized as insight-based therapies. Psychoanalysis/psychodynamic is very much an insight-based therapy, and UtEB seems to be a counter to that style. But the present-moment, third-wave therapies are a non-insight approach that also gets good results.

    I agree that the crucial problem of rationality is emotion based and the emotion aspect has been lacking for a really long time. I don’t think UtEB is adding much that doesn’t already exist in the therapy world and is just being another therapy book that’s attempting to codify what is really hard to codify.

    In regards to this

    Internal Family Systems, a weird form of therapy where you imagine your feelings as people/entities and have discussions with them. I’ve always been skeptical of this, because feelings are not, in fact, people/entities, and it’s unclear why you should expect them to answer you when you ask them questions.

    Humans are REALLY social beings. We are probably grossly underestimating how social we are. Many people think our brains developed the way they did just to handle social realities. Treating our feelings and thoughts like people does two things. First, it hijacks all that socialness of our brains to create a sense of connection and understanding. It makes total sense to do this. We humans anthropomorphize everything; why not our thoughts as well? A person talking to their cat feels more connection to their cat. A person talking to their sadness feels more connection to their sadness. Second, it starts the defusion process (ACT language). It separates the person from the feeling so they aren’t so entwined. A simple method is, instead of saying “I am sad,” to say “I notice that I am feeling sad” – creating the separation that the self and the feeling aren’t the same thing. Meditation and mindfulness teach this same concept. In IFS, we use the social tools to do the same thing. Instead of saying “I am sad,” we say “A part of me is sad.” I am not the sadness; the sadness is a part of me. And it goes further to say that my Self is something different from the parts, and my Self can relate to the sadness. Between the defusion and the anthropomorphizing, I think IFS makes a lot of sense for why it works.

    • Vanvidum says:

      Anthropomorphizing parts of me never seemed to be useful or sensible. My emotions don’t have inner lives of their own, and there isn’t anyone to talk to there. Being told to imagine the required details simply results in a poorly fleshed-out character that is sad/angry/etc. The problem is, that emotion is all they really are, and it’s the whole of their characterization. I’ve already heard what that part of me has to say, else there wouldn’t be an emotion to feel, so what’s the point in trying to interrogate it further? Any language I project onto that part of me is going to be an entirely external construct with little underlying connection to the aspect it’s supposed to represent. How is that possibly a productive conversation?

      • alwhite says:

        Because the emotion is happening for a reason and to serve a purpose. You’re not “just mad”; a lot more is happening. Biologically, when you’re angry you get a boost of adrenaline and energy to help you fight off threats. You are angry in order to do something. Anthropomorphizing the anger just seems to help people figure that out. What is the anger afraid is going to happen? What is the anger trying to accomplish? There is a productive conversation to have there.

        This is not the only way to get to this point. You can absolutely get these answers without the IFS model happening, and if that works for you, keep doing that. But it seems really useful for a lot of people to construct the anger in a personified way to get better responses. Depersonalized anger is just harder for people to engage with than an angry version of themselves that they can talk to. And I suspect it’s because we are such hugely social creatures with social brains and it’s easier for our brains to interact that way.

        • Vanvidum says:

          I still don’t see that as a useful framework. The persona isn’t necessarily ‘correct’ in any sense; it’s merely a rationalization. If you really don’t know the source of an emotion, you’re still just guessing and consciously making up a narrative that may not bear any resemblance to what’s ‘really’ going on. It may well provide some comfort and capacity to cope to have that narrative, but it certainly doesn’t seem very realistic.

          Humans are social creatures, but we’re usually less good at predicting the inner lives of others than we think we are. Trying to map those inherently flawed models onto fragments of one’s own psyche seems likely to magnify those mistakes, and lead to unjustified confidence.

          • Kaj Sotala says:

            It’s hard to explain. I used to have pretty much the exact same perspective as you, and had occasionally tried viewing emotions as characters but then just ended up spinning useless stories. Until I did some stuff which caused me to hit upon the thing that I was *actually* supposed to do, and which made it work.

            But to mention a few things…

            First, it’s not necessary to imagine a lot of anthropomorphic details for the emotion. Some people will naturally experience the parts as pretty anthropomorphic, but I for example don’t; to me they just feel like e.g. shards of desires and beliefs. IFS still works for me.

            What I think is going on is… let’s go with Scott’s analogy of the brain being a mountainous landscape. Under this model, when you get angry, there’s a valley in your mind from which the anger emerges, and which contains some set of beliefs and assumptions which cause it to generate the anger. But the valley can’t send their full reasons for being angry, because the passes between valleys are narrow and the messengers can only carry short notes. So normally what gets to consciousness is something like “I’m mad at X for being a jerk”, which loses most of the underlying information.

            Now when you do IFS on this anger, you don’t just randomly start imagining things about it. Rather, there’s a specific procedure where you pay attention to the various bodily sensations and feelings associated with the sense of anger. Keeping those in mind, you ask yourself whether there’s e.g. any mental image that might arise from them. You’re not imagining things the way you would imagine if someone told you “imagine a day on the beach”, rather you are just keeping your attention anchored on the different sensations associated with the emotion and letting that create a mental image. Maybe asking yourself something like “if these bodily sensations had a visual appearance, what would that image be” and letting your mind fill in the answer.

            (Note: you don’t need to have visual images to use IFS; other sensory modalities can be used if you have aphantasia. But I’ll just stick to talking about visualizations for the sake of simplicity.)

            One way of putting it would be: usually, when you just imagine stuff, you are taking some internal model of the world that you have and hooking it up to a visualization subroutine, which then produces images. In this case, you are instead taking the sensations associated with the emotion and hooking them up with the visualization subroutine. And if you do it right, the resulting visualization *stays linked* with those sensations. Meaning that when you focus on the visualization, it also helps you keep your attention focused on the sensations related to the emotion.

            And keeping your attention on the emotion in this way seems to have two effects. First, it gives you a bit of distance from the emotion itself. Rather than thinking “I’m mad at X for being a jerk”, it becomes easier to experience it as “there’s a part of me who’s mad at X for being a jerk”. This means that you feel less compelled to just lash out at X right away, and have an easier time looking at the emotion and figuring out whether you actually should lash out.

            Second, as the visualization maintains a stable link to the valley the emotion came from, it seems to slowly widen the pass, or allow more messengers to pass through, or whatever metaphor you want to use. There’s stuff in neuroscience suggesting that one of the functions of keeping a signal in consciousness is to strengthen the signal and allow other subsystems access to the information it carries. As the signal related to the emotion gets stronger, more information from the valley (the underlying schema) can potentially access consciousness. You can then try to query the visualization with a question like “what are you afraid would happen if you didn’t act in an angry way?” and then wait for an answer to arise – again, not actively imagining anything or making things up, but rather just posing the question and then waiting to see whether any sensations emerge that might hint at an answer. E.g. you might get a flash of a mental image of yourself getting hurt if you don’t react with anger; then you can focus your attention on that flash, keep it in consciousness, and let additional details emerge as the act of shifting your attention to it boosts the activation of *that* neural pattern.

          • alwhite says:

            It looks like you’re stuck on these words ‘correct’ and ‘real’, and I think this is distracting the conversation. The goal is to start connecting to emotional content. This idea of ‘correct’ and ‘really real’ is very much cognitive content, and you aren’t going to get there from here. This is kind of like trying to smell the sound of concert A – unless your mind is very different, it’s not going to happen.

            This personalization framework works because our socialness and our emotions are deeply entwined. It’s almost impossible to remain emotionless and connected to another person. We can use IFS techniques with two people sitting together and create an emotional connection – it’s pretty standard family and couples therapy. Treating our emotions in this same way helps us connect to an emotional reality that other methods can’t reach.

            Emotions aren’t about being factually correct; they are about being viscerally real. It seems like you’re chasing the first and don’t recognize that the second exists.

      • Simon_Jester says:

        One possible reason to think of parts of yourself as independent people is that emotions are not, on the whole, ‘pure.’ They have spillover effects on your behavior and the way you reason about situations. A person may be objectively more able to identify a feasible solution to a problem when they are calm than when they are afraid, for instance. And this is because the condition of “being afraid” privileges certain very specific solutions and ideas that aren’t necessarily helpful.

        Have you ever met a person who seems quite different when caught in different emotional states? Exactly.

        So when you conceptualize your fear as a separate person, you may not actually be imagining your fear, in and of itself, as an independent entity with an inner mental life of its own. You may be imagining yourself-when-afraid as an independent entity.

        Likewise, if you’ve got a therapy patient who has issues in the present day because of beliefs they formed as an abused child, it may be on some level helpful, or at least soothing, to try to talk to that abused child directly. To remember what it was like to be the abused child, and then remember hearing the words that were never said at that time.

  26. Elsewhere says:

    Does it matter if the therapist identifies the correct root of the original belief? Does Richard’s public speaking issue really have to do with his father, such that if the therapist had tried anchoring the therapy to something else it would have failed? Or is the key part that the patient strongly believes whatever made-up cause, and since Richard had suggested this one, it was an easier starting point?

    If the former, then these therapies would be more successful when the therapist could correctly intuit the right source to fix. This would mean that different paradigms would be more or less successful at finding root issues (it’s always the mother, it’s always the parents, childhood trauma, feeling inadequate), combined with the therapist’s own emotional detective work.

    If the latter, then it would only matter how well a therapist could convince you that what they identified as the root of your problem was true – in other words, how charismatic your therapist was. This would seem to explain why all these therapy books might name different root causes, yet the initial practitioners all showed great success that wasn’t replicated by others who weren’t selected for their charisma.

    • Kaj Sotala says:

      The judgment of the root cause doesn’t come from the therapist, it comes from the client. This is emphasized a lot in coherence therapy, and it is also my experience that if you try to guess what your client’s (or even your own) root cause is, you will remain stuck at the level of intellectual theorizing and never manage to actually activate the relevant memory structure in the way that makes reconsolidation work.

      • Elsewhere says:

        But does it matter if the patient is correct? Or is the therapist just targeting an already-formed hypothesis to get a running start on the important part: the juxtaposition against a strongly held belief?

        • alwhite says:

          If you’re asking whether the patient’s claimed original belief and memory of the original event match what actually happened, as might be recorded by a video camera: no, it doesn’t matter if the patient’s memory matches what actually happened. The problem is in the patient’s relation to the memory/belief, not the event itself.

          • acymetric says:

            That isn’t what @Elsewhere is asking. The question, I believe, is:

            What if observing his dad wasn’t actually the cause of his anxiety? What if the real cause was a particularly traumatic class presentation in the 2nd grade, where he bungled it so badly that even the teacher mocked him relentlessly? 30 years later he barely remembers anything about second grade, but he remembers being around his dad all the time, so he identifies that as the “cause” (because identifying parents as causes of problems is as common as it is easy).

          • Elsewhere says:

            @acymetric has the idea. Basically, if you are rewriting some specific emotional memory, you need to make sure it is the right memory, and whether or not this treatment works depends partly on having the right one (meaning that if it fails, that isn’t necessarily a datapoint against the treatment).

            Whereas if the important part is that it is a sincerely held belief you are juxtaposing, then all that matters is the climactic realignment. It would also be a kind of grand unifying hypothesis for why the many different therapy “reasons” for why a person is the way they are all work with a sufficiently talented therapist. The reason doesn’t matter; the treatment does.

            This also extends to non-clinical therapies. Did the devil make them do it? Riders on their thetan? Too many video games? It doesn’t matter, as long as they believe that is the reason, that belief is reinforced over an extended period, and then they are put in a ceremonial situation where they have to hold their demon up against the fact that (insert healing truth here) – and they update their worldview.

            This would also make it a great tool for rhetoric. Let me spend a great deal of time reinforcing your conception about X. Now let me show you that X is actually good/bad. Hold both of those thoughts in your head for a while. How do you feel about X now….

          • alwhite says:

            @Elsewhere

            Ah, gotcha. This is repressed-memories-type stuff. I think you’re going to get different answers from different experts on it. I don’t think identifying the “correct” memory is all that useful; I think it’s a false concept that leads to more trouble than help. I usually run with: if that’s the memory that comes up, then that’s the memory that matters. But I also don’t think the historicity approach is all that useful. Blaming something in the past doesn’t really deal with the present, and I focus a lot on present-moment experience, because that is what is being affected.

  27. ChelOfTheSea says:

    > whereas I’ve never heard someone say “I know climate change is real, but for some reason I can’t make myself vote to prevent it.” I’m not sure how seriously to take this discrepancy.

    This is basically how I feel about vegetarianism/veganism. I recognize that vegetarians are pretty clearly morally right, and that it would probably be better for my health and for the environment to be vegetarian, and I go right on not being vegetarian anyway. The same goes for a lot of things. I know my dental health isn’t great, but for some reason I can’t make myself go to the dentist. Etc.

    I think the dividing line is basically cost. Voting to prevent climate change has little direct emotional valence. If you had to, I dunno, crawl across broken glass before you could vote, maybe it would. (Alternative suggestion for democracy?) But becoming vegetarian means giving up nearly all of my favorite foods, and going to the dentist requires an unpleasant experience and potentially confronting problems I can ignore right now.

    • Aapje says:

      Perhaps a major goal in life of yours is not to be morally right or to improve your health or the environment, but to enjoy your life?

      Perhaps the discrepancy is that you don’t allow yourself to openly have that goal, so you pretend to yourself that you have different goals?

  28. morris39 says:

    If being rational is defined as acting in your own interest then it is possible to oppose something (climate warming mitigation here) while agreeing with the existence of the general threat but not the direct personal threat.
    So assuming that climate change is human-caused, that it will likely cause global (in the wide sense) harm, e.g. disease, wars, reduced trade, and that the effects will not be uniform, is it rational to ask whether the proposed cost of mitigation to me (my group) is justified? It seems to me that the question is completely rational. A possible conclusion based on the evidence is that I do not care about the fate of others and oppose higher costs for me.
    Whether I should care about the (remote) others is a separate issue from mitigation costs. This point of view seems to be rarely if ever discussed or acknowledged. Maybe someone can point out the flaw in this line of thought?

    • Scared_kid says:

      Isn’t this just the is-ought gap? Is there something special about the argument with respect to climate change? It seems to me that the issues you have with “You should try to stop or mitigate the effects of climate change” are the same issues one might have with a claim like “We should try to stop children from being hit by cars” – if you happen not to care about the effects of your actions on other people, there’s no logical case that can be made against you.

      I do think the is-ought gap is an interesting philosophical question, but I’m not sure it needs to have a major role in discussing climate change – just as, when we talk about setting speed limits in school zones, it would be somewhat amiss to ask, “But what if I don’t care how many kids are hit by cars? Can you prove to me that I’m wrong?”

      I’m also not really on board with equating “being rational” with “acting in your own self-interest”. It may be that being rational leads you to act in your own self-interest in some or all situations, but I generally think of “being rational” as “proceeding from logical arguments” or “being epistemically justified” or something like that. It might be in my own best interests to believe that I am objectively the center of the universe, because it would make me feel powerful and important (enough to offset the social cost of being kind of unpleasant to be around), but it still doesn’t seem rational. (Perhaps this is just semantics – but because of how much some people value being rational, I think it’s semantics with some weight behind it.)

      • morris39 says:

        No. You seem to have substituted emotion for reason. Why is it not possible for self-interest to be intelligent, i.e. all things considered? So one would care about speed limits and children in one’s own group (where one has entered into social agreements), but would one care about speed limits in, say, India?
        Thinking that your own self-interest somehow makes you feel all-powerful or makes you unpleasant does not follow at all. Might it not be that, with that attitude, you will devote much more time and effort to attending to your purpose? Being agreeable in the proper context may well be best for your self-interest, even if it seems silly and requires effort.
        I do not understand your emotional response, but I would say that you are probably in the majority, and that might have importance for you.

        • Scared_kid says:

          > Thinking that your own self-interest somehow makes you feel all-powerful or makes you unpleasant does not follow at all.

          This is a response to my “It might be in my own best interests to believe that I am objectively the center of the universe” comment, right? Looking back on that I phrased it poorly. I didn’t mean “believing that I am the center of the universe” in the sense of being completely self-interested. I meant it literally–in the sense of believing that the physical universe has a center, and that I am that center. If I believed that, it might make me feel special and important. The degree to which feeling special and important benefits me might outweigh the degree to which holding this silly and unscientific idea about the universe harms me. However, despite that, holding that belief would be irrational. Therefore, being rational does not mean acting in your own self-interest.

          A better example might be a religious belief that contradicts the evidence available to you. It might be that holding that belief makes you feel better about the prospect of dying and helps you fit in with your community, and so it’s in your best interests to hold it. But since it contradicts the evidence, I would maintain that it is nonetheless irrational.

          The other stuff about me being emotional rather than reasonable is just sass, right? For what it’s worth, I wasn’t trying to attack or morally condemn you with my response.

    • Simon_Jester says:

      > **If being rational is defined as acting in your own interest** then it is possible to oppose something (climate warming mitigation here) while agreeing with the existence of the general threat but not the direct personal threat.

      The bolded passage is doing almost all the heavy lifting here.

      I submit that this is an idiosyncratic definition of “rational.” Rationality is not simply ‘the art of maximizing the extent to which one’s own interests are fulfilled.’ It is also ‘the art of identifying one’s interests’ and ‘the art of perceiving that which is true and ignoring that which is false.’ And probably other things.

  29. Seppo says:

    > Then she asks him, while keeping the fear-of-speaking schema in mind, to remember the contradictory experience (coworker speaks up and is praised). Then the therapist vividly describes the juxtaposition while Richard tries to hold both in his mind at once.

    Reminded me of:

    > Moreover, as contributing to knowledge and to philosophic wisdom the power of discerning and holding in one view the results of either of two hypotheses is no mean instrument…

    —Aristotle, Topics

  30. jerryb0222 says:

    “REBUS And The Anarchic Brain: Toward A Unified Model Of The Brain Action Of Psychedelics”

    In skimming through the above paper, I remembered coming across similar work done by Stanislav Grof: his COEX (condensed experience) model, and his use of LSD psychotherapy to access the COEX memories. Grof’s work seems very much like the concepts in the Unlocking The Emotional Brain book and the REBUS paper.

    https://en.wikipedia.org/wiki/Stanislav_Grof

    https://spiritwiki.lightningpath.org/index.php/COEX_Systems

  31. Callum G says:

    Does this suggest that psychedelics would be a stronger therapy tool for people with BPD?

  32. Eigengrau says:

    > I’ve never heard someone say “I know climate change is real, but for some reason I can’t make myself vote to prevent it.”

    Maybe this was a bad example of the discrepancy you’re trying to illustrate, but I personally believe that vegetarianism is morally right and good, while having no intentions of becoming a vegetarian myself, and even bemoaning the increasing proportion of vegetarian restaurants in my area.

    Whether this is the same mechanism causing “I know people won’t hate me for speaking, but for some reason I can’t make myself speak”, I’m not sure. Is it an emotional memory? Some other form of behavioural inertia? Do I just really like the taste of meat and not like the taste of vegetarian food? Can our personal tastes and aesthetic preferences be explained by the same Emotional Brain framework? If not, when does an emotional response triggered by a belief end and an emotional response triggered by a personal preference or taste begin? I know spiders won’t harm me, but I still find their appearance and movements to be so creepy and gross that they evoke a fear response.

  33. Kemp says:

    This post, along with Scott’s posts on free energy/predictive coding/psychedelics, made me go back and read some notes I wrote immediately after a psychedelic trip many years ago. Some of my observations can be very easily interpreted/explained by Scott’s framework. Just replace my concept of “filter” with the word “belief” and you’ll see what I mean:

    > Everything is experienced through a series of filters. Filters are created by instinct and experience… The entire world looks and feels COMPLETELY different for each person, because no one’s set of filters is the same.

    > The “deeper” the filter, the more influence it has over your perception of reality. Evolutionarily developed filters (instinct) arose from millions of years of selection, and are very deep. Filters originating in your childhood that survive into adulthood are generally deep, etc. Someone calling you a mean word adds a filter that might last an hour.

    > These filters are stacked on top of each other like a house of cards. The deeper the filter, the more influential. But every filter affects one’s perception of reality… There are deep filters based on physical brain chemistry, instinct based on human evolution, etc. Layers of filter can be peeled away to see reality in a more “pure” way. Peeling away layers allows one to see things (reality) in a way they didn’t before, and reconsider “their” reality. Note: peeling away layers does not necessarily mean one will then go on to form a truer view of reality.

    > There are many methods of peeling away filter layers. Peeling away filters allows you to rebuild your set of filters to control the “picture”. You can’t control what happens in front of the camera, which is why it’s important to be able to filter it exactly how you want. Remember, the only thing you can control is your filter.
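    (Rereading these notes now, the filter-stack idea almost writes itself as code. Here’s a minimal Python sketch – the labels, depth weights, and blending rule are all invented for illustration; the one real point is that peeling away a deep filter changes the “picture” far more than peeling away a shallow one.)

    ```python
    # Toy sketch of the "filter stack" metaphor; all numbers are made up.
    # Deeper filters get larger weights, so they dominate the final "picture".
    from dataclasses import dataclass

    @dataclass
    class Filter:
        label: str
        depth: float  # 1.0 = instinct-deep; 0.05 = "someone called me a mean word"
        bias: float   # how the filter tilts the percept, from -1 to +1

    def perceive(raw_signal: float, filters: list[Filter]) -> float:
        """Blend the raw signal with each filter's bias, weighted by its depth."""
        total_weight = 1.0 + sum(f.depth for f in filters)
        tilted = raw_signal + sum(f.depth * f.bias for f in filters)
        return tilted / total_weight

    stack = [
        Filter("evolved threat instinct", depth=1.0, bias=-0.8),
        Filter("childhood schema", depth=0.7, bias=-0.5),
        Filter("mean word an hour ago", depth=0.05, bias=-0.9),
    ]

    print(perceive(0.2, stack))      # ~ -0.36: the deep filters dominate
    print(perceive(0.2, stack[2:]))  # ~ +0.15: "peeling away" the deep layers
    ```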

  34. benwave says:

    > In his review, Kaj relates this to Internal Family Systems, a weird form of therapy where you imagine your feelings as people/entities and have discussions with them. I’ve always been skeptical of this, because feelings are not, in fact, people/entities, and it’s unclear why you should expect them to answer you when you ask them questions. And in my attempts to self-test the therapy, indeed nobody responded to my questions and I was left feeling kind of silly.

    Well, I guess I sort of stumbled onto something like IFS in my own modelling of my own and other brains. I’ve had some success with it in helping friends through difficult emotional situations (I’m not a therapist at all, just trying to support the people I care about, as… I presume most people do?). I appreciate that’s just anecdote.

    I did want to point out that in the framework I stumbled into, I never expected anything so coherent as a voice or dialogue from the internal mind elements, just that they would express themselves by way of feelings. It wasn’t necessary for me to imagine them as anything more specific than strong priors, existing in my brain, that experience X leads to feeling Y. I also never imagined them to be immutable, but I did acknowledge that the job of changing them and the job of managing them for the best outcome as they currently exist are two different kettles of fish.

  35. Joe says:

    I believe in “global warming”, or “climate change” as they are now calling it, but I can’t take the climate-catastrophe people seriously. When Beto and others claim the world will end in 10 years, I just have to roll my eyes and not vote for that dumb-ass. It starts to sound like the satanic panic, the attack of the African killer bees, and Y2K. Yes, climate change is real, but is it the end of the world? Probably not.

    • Scared_kid says:

      It is undoubtedly hyperbolic to say the world is ending, but I don’t think it’s stupid or unethical to speak that way. By any ordinary usage of the word “catastrophe”, climate change easily qualifies – without even trying to predict the future, it has already caused a radical increase in the number of natural disasters, the spread of disease, and the number of people forced to seek refuge in another country. It doesn’t strike me as alarmist to call it a catastrophe. As for the claims about the world ending – I’ve done a fair bit of climate activism, and it’s geared overwhelmingly toward motivating people to take climate change seriously, not convincing them it’s real. The main adversary is apathy, not skepticism. And, much as one might wish otherwise, measured Bayesian cost/benefit calculations aren’t effective at fighting that. Those calculations do happen. They just don’t make it into slogans.

    • Nancy Lebovitz says:

      I think the more common form (at least where I hang out) isn’t that the world will end in 10 years; it’s that we only have 10 years or whatever to prevent the world from ending.

    • Simon_Jester says:

      The standard prediction, when you look at the scientists, is not “the world will end in 10 years.” It is “things are getting and will continue to get steadily worse, and within 10 years the amount of CO2 released into the atmosphere will be enough to cause Very Bad Things, potentially civilization-ending.”

      This is a bit more nuanced.

      Imagine a bundle of dynamite in a furnished living room, with a long, burning fuse that snakes around the room and under a heavy fireproof sofa.

      If the fuse is extinguished before it goes under the sofa, all will be more or less well. There will be some minor problems (scorched carpet, say), but nothing we can’t deal with. No explosion; the room does not get blown up.

      If the fuse is NOT extinguished before it goes under the sofa, the problem suddenly becomes much more serious. It may not be possible to move the sofa, get at the fuse, and extinguish it before the fuse burns down and the dynamite explodes. Then again, maybe it is possible – but the effort required will be far greater than if the fuse had been extinguished earlier.

      There may even be last-ditch measures that can be taken, like grabbing the dynamite bundle itself and throwing it out the window with the fuse still burning – but it would be so much better and safer to simply cut that fuse.

      This is the kind of climate change prediction people have been making to date. “If we do not reverse current trends by the year 20XX, Bad Things will happen” does not mean “expect Bad Things in 20XX.” It means “the amount of CO2 in the air by 20XX will be sufficient to eventually cause Bad Things after some more time passes.”

      This is easy to understand from the point of view of the dynamite analogy. If the fuse on the dynamite is burning down and about to pass out of reach, where we can’t extinguish it before it’s too late, it is imperative to act now. And this imperative remains in place even if there will be a considerable delay between “whoops, we didn’t extinguish the fuse in time” and “oh shit, kaboom.”

  36. Iznick says:

    Delighted to see these recent posts on enlightenment, trauma/emotional memories, and psychedelics. I think there might be a Grand Unified Theory here waiting to be articulated, and so I would love to hear Scott’s thoughts on what the following all adds up to:

    Teasdale thinks that meditation updates emotional schemas much as memory reconsolidation and psychedelics do… https://www.openground.com.au/assets/Documents-Openground/Articles/1b91257fa9/transforming-suffering-teasdale-chaskalson-II.pdf

    …which maps very well onto what McGilchrist is saying about brain lateralisation … https://www.openground.com.au/assets/Documents-Openground/Articles/1b91257fa9/transforming-suffering-teasdale-chaskalson-II.pdf

    … which maps onto Culadasa’s attention/awareness distinction… https://dharmatreasurecommunity.org/forums/topic/general-announcements

    … and also onto views about “awareness” in some non-dual strands of Buddhism … https://tricycle.org/trikedaily/practice-effortless-mindfulness/

    So maybe meditation and psychedelics move us out of one evolved mode of mind (focussed, goal-oriented, conceptual) and into another (expansive, receptive, embodied – it’s suggested by McGilchrist and Jordan Peterson that this is our “prey” mode: https://www.youtube.com/watch?v=ea4mEnsTv6Q), and that enlightenment is a decisive shift into the latter. And it seems that updating of emotional schemas, and thus healing from trauma, might happen in the latter mode – which is why lots of meditators and users of psychedelics report (sometimes sudden and dramatic) updating of these schemas.

    And as to why we struggle to process traumatic experiences when you would expect evolution to have equipped us to do so, I wonder whether it’s a modern problem – it’s a commonplace amongst meditators that people in developed nations are particularly “thinky” and find it hard to move out of the conceptual mode of mind. Is this inhibiting the processing of emotional memories, if it’s in the non-conceptual mode that the processing happens? I wonder what role literacy has played in this – I was interested to read in The Secret of Our Success that literacy produces observable changes in brain structure, including a thickening of the corpus callosum, which presumably would be of interest to McGilchrist, who chalks up our thinkiness to dominance of the brain’s left hemisphere.

    So, as I say, I would love to read a post that covers some of that.

  37. kai.teorn says:

    So we have a ton of different therapies – basically, ways to teach people to be better/happier/more functional. They all work to some extent, because the human brain is pliable. But they all largely fail because our capacity to be taught by another human evolved in response to very different pressures – it evolved to allow us to learn practical things like hunting or agriculture or math, not to learn how to become a better human. For most of our history, there was no evolutionary advantage to a capacity for quickly adjusting one’s priors (except maybe in early childhood). That’s why talking (therapy) only takes you so far and no farther.

    On the other hand, we have psychedelics, which somehow make changing the priors easier – just because these priors, like everything else in your mind, are material and thus open to material interventions. It’s quite surprising to me that we have such a master knob, one that magically flattens all the mountains in the brain’s landscape. Still, if this is in fact how it works, it’s obvious that we must hurry to combine all our well-meaning, well-thought-out, rational therapies with the blunt mountain-razing force of psychedelics to make them finally bear fruit. If this indeed works out, the implications are huge.

    One problem I see, in this exciting new era of Therapies That Work, is that we don’t really have nearly enough therapists for everyone who might benefit from them. I think a lot more people must now learn to become amateur therapists for themselves and their friends and family. We need to build a culture where you are expected and encouraged (maybe via the school system, in part) to spend some time working on yourself, identifying the issues that hold you back (with the help of online communities, testing, AIs, self-experimentation), and formulating some ideas of exactly which mountain is your biggest obstacle at the moment. Only after you have formed your armies of rational arguments on both sides of the impassable mountain do you take some earth-shattering psychedelics and, bingo, you now have an 8-lane highway where the mountain once stood. Without such preparation and focus, psychedelics can take you basically anywhere (see Scott’s post on the weirdness of early psychedelicists), not necessarily in a good direction.

    P.S. I especially hope that this psychedelics/therapy combo (again, if it works out!) will be empowering for therapists like Scott, who, while obviously well-meaning, smart, and knowledgeable, has, by his own admission, trouble creating deep emotional experiences in his patients – something which in other therapists (maybe not as smart, but more emotional) may already be working as a kind of light psychedelic contributing to therapy’s overall success.

    • kai.teorn says:

      I would perhaps even summarize it thus: [insert lament about the state of the world] is all because the people with the talent of understanding stuff (understanders) and people with the talent of flattening/reshaping other people’s mental mountains (charismatics) are two categories with little overlap. If psychedelics can indeed bestow the superpowers of charismatics on understanders, the world may be about to change in profound ways. On the other hand, the same superpowers will be bestowed on anyone, including the not very well-meaning, which suggests interesting times ahead.

  38. wonderer says:

    > One can imagine an alien species whose ability to find truth was a simple function of their education and IQ. Everyone who knows the right facts about the economy and is smart enough to put them together will agree on economic policy.

    They might agree on the likely effects of a certain policy, but that’s nowhere close to agreeing on whether the policy is good or bad. For example, should the minimum wage be increased? If you’re already employed in a minimum wage job and are unlikely to lose it, you’d say yes; if you’re unemployed, you’d say no. Should regulations be tightened? If you care about safety over cost, you’d say yes; if you’re poor and really need cheap stuff, you’d say no. Should tariffs be imposed on foreign steel? If you work in a steel mill, you’d say yes; if you’re a consumer who doesn’t know anyone in the steel industry, you’d say no. Should the rich be taxed more? If you’re rich – or alternatively, if you have philosophical objections to taxation that go beyond its objective effect on the economy – you might say no; otherwise, you’d say yes.

  39. googolplexbyte says:

    >“I know climate change is real, but for some reason I can’t make myself vote to prevent it.”

    I feel similarly about God. I know he’s not real, as much as one can know something isn’t real in a Bayesian framework, but the notion that God does in fact exist looms heavy in my mind nonetheless.

    If I’d been born into a religious family I get the feeling I’d be a total fanatic fundamentalist.

  40. viVI_IViv says:

    > The patient has failed to integrate his judgments about the doctor into a coherent whole, “doctor who sometimes does good things but other times does bad things”. It’s as if there’s two predictive models, one of Good Doctor and one of Bad Doctor, and even though both of them refer to the same real-world person, the patient can only use one at a time. …
    > Some therapists view borderline as a disorder of integration. Nobody is great at having all their different schemas talk to each other, but borderlines are atrocious at it. Their mountains are so high that even different thoughts about the same doctor can’t necessarily talk to each other and coordinate on a coherent position. The capital only has enough messengers to talk to one valley at a time. If tribesmen from the Anger Valley are advising the capital today, the patient becomes truly angry, a kind of anger that utterly refuses to listen to any counterevidence, an anger pure beyond your imagination. If they are happy, they are purely happy, and so on.

    But people with borderline personality disorder don’t have general cognitive or learning problems. When they receive evidence about things or people other than themselves or those they personally have a relationship with, they update their beliefs just like normal people do. So the simplest explanation is that BPD people just feel heightened emotions. When the doctor does something nice to you, you like them a bit more; when they do something mean, you like them a little less – a BPD person instead just switches from love to hate, because they can’t feel anything in between.
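    (A toy way to see how far “heightened emotions” alone can go – purely illustrative numbers, with “gain” as a made-up knob for emotional intensity. The same stream of evidence produces graded liking at low gain and near-binary flips between love and hate at high gain.)

    ```python
    import math

    def liking(evidence_sum: float, gain: float) -> float:
        """Map accumulated evidence about a person to liking in (-1, 1)."""
        return math.tanh(gain * evidence_sum)

    # Nice (+) and mean (-) things the doctor does, in order.
    events = [0.3, -0.5, 0.4, -0.6]

    for gain, label in [(1.0, "typical"), (8.0, "heightened, BPD-like")]:
        print(label)
        total = 0.0
        for event in events:
            total += event
            print(f"  after event {event:+.1f}: liking = {liking(total, gain):+.2f}")

    # At gain 1.0, liking drifts gently (+0.29, -0.20, +0.20, -0.38);
    # at gain 8.0, it saturates and flips between ~+1 (love) and ~-1 (hate).
    ```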

    > One can imagine an alien species whose ability to find truth was a simple function of their education and IQ. Everyone who knows the right facts about the economy and is smart enough to put them together will agree on economic policy.

    Or they realize that mistake theory is in fact mistaken and it is impossible to agree on economic policy because different people have conflicting interests.

    • Simon_Jester says:

      Maybe the BPD patient has very high mountains in very specific sections of their mind (the parts that form qualitative judgments about people, places, and things in the external world)? There might be no similar issues affecting their ability to learn to drive or to write an essay, simply because those tasks use different mental toolkits that are not affected.

      > Or they realize that mistake theory is in fact mistaken and it is impossible to agree on economic policy because different people have conflicting interests.

      I would think that mistake theory would be all the more powerful in that imaginary species. A higher proportion of their disagreements would be rooted in one side being aware of a fact that the other side was not aware of.

      Whereas humans are far more likely to have disagreements that are rooted in something more complicated than a pure matter of fact, such as deeply differing ways of interpreting a shared fact.

      If I look at a border crossing and see hundreds of thousands of prospective criminal malcontents coming to disrupt my culture, while you who share that culture see hundreds of thousands of prospective hardworking contributing members of a thriving society, then the problem likely won’t be simply resolved by one or the other of us pointing to a sheaf of crime statistics.

      But on the Planet of the Reasoners, it probably would be – because minds of reasonably high IQ with access to the same objective facts would reach broadly similar conclusions.

  41. Cecilpl says:

    > In his review, Kaj relates this to Internal Family Systems, a weird form of therapy where you imagine your feelings as people/entities and have discussions with them. I’ve always been skeptical of this, because feelings are not, in fact, people/entities, and it’s unclear why you should expect them to answer you when you ask them questions. And in my attempts to self-test the therapy, indeed nobody responded to my questions and I was left feeling kind of silly.

    I’ve actually used this with great success on myself. It works with feelings, with habits, with belief patterns. I’m often able to have extended conversations with them as though they were my children. I have broken some bad habits this way (say, biting my nails) by trying to “work with” the habit. I’ve treated the habit as a manifestation of a bundle of needs, and then anthropomorphized it. Laying out “both” of our needs – talking through how the habit affected me (painful), and learning what the other side got out of it (stress relief) – allowed me to try to compromise on alternative behavior patterns that meet both of our needs.

    It’s also been helpful for eradicating undesirable belief systems (e.g. monogamy). In those cases I’ve tried treating the system as a virus. Simply disassociating it from my identity made it much easier to challenge and part with its component beliefs over time.

    I could see it being very helpful (personally, YMMV) in the example case of Richard being afraid to talk at work.

  42. Roger Sweeny says:

    As I was reading this, I heard the (late, great) Hank Williams’s Cold, Cold Heart:

    I’ve tried so hard my dear to show
    That you’re my every dream.
    Yet you’re afraid each thing I do
    Is just some evil scheme.

    A memory from your lonesome past
    Keeps us so far apart.
    Why can’t I free your doubtful mind
    And melt your cold, cold heart?

    Another love before my time
    Made your heart sad an’ blue.
    And so my heart is paying now
    For things I didn’t do.

    In anger, unkind words are said
    That make the teardrops start.
    Why can’t I free your doubtful mind
    And melt your cold, cold heart?

    There was a time when I believed
    That you belonged to me.
    But now I know your heart is shackled
    To a memory.

    The more I learn to care for you
    The more we drift apart.
    Why can’t I free your doubtful mind
    And melt your cold, cold heart?

  43. Nancy Lebovitz says:

    Scott, would you be willing to update your review about Richard’s case? You’ve misrepresented how the therapy works, and you’re getting quoted.

    Here’s what the book (p.46) says:

    ***

    Richard closed his eyes and imagined being in a meeting at work, making some useful comments and being confident about the knowledge he had shared. This is what ensued:

    Cl: Now I’m feeling really uncomfortable, but – it’s in a different way.
    Th: OK, let yourself feel it – this different discomfort. [Pause] See if any words come along with this uncomfortable feeling.
    Cl: [Pause] Now they hate me.
    Th: “Now they hate me.” Good. Keep going. See if this really uncomfortable feeling can tell you why they hate you now.
    Cl: [Pause] Hnh. Wow. It’s because – now I’m – an arrogant asshole – like my father – a totally insensitive, arrogant know-it-all.
    Th: Do you mean that having a feeling of confidence as you speak turns you into an arrogant asshole, like Dad?
    Cl: Yeah, exactly. Wow.
    Th: And how do you feel about being like him in this way?
    Cl: It’s horrible! It’s what I always vowed not to be!

    ***

    Here’s what you wrote: “During therapy, he described his narcissistic father, who was always mouthing off about everything. Everyone hated his father for being a fool who wouldn’t shut up. The therapist conjectured that young Richard observed this and formed a predictive model, something like “talking makes people hate you”. This was overly general: talking only makes people hate you if you talk incessantly about really stupid things. But when you’re a kid you don’t have much data, so you end up generalizing a lot from the few examples you have.”

    ***

    There is no mention of the therapist knowing about Richard’s father or making a theory about Richard’s relationship with his father.

  44. Nancy Lebovitz says:

    I think there’s a large difference between being willing to become a vegetarian or act to prevent global warming and the sort of problems described in UtEB.

    The first two are fairly remote and based on argument. What I’ve read of UtEB is about problems in personal life where there’s a lot of direct evidence that changing would be to the person’s advantage.

    I think a lot of procrastination would be in the latter category– why put off something which isn’t all that bad, won’t get better if done later, and poisons enjoyment of the present by not being done?

  45. paulbali says:

    “Smart, well-educated people believe all kinds of things, even when they should know better. We call these people biased, a catch-all term meaning something that prevents them from having true beliefs they ought to be able to figure out.”

    Indeed, like when you’ll figure out, Scott, that if you’re going to cite studies built on the scientific torture [and eventual gassing / decapitating / intracardial stabbing / cervical dislocating] of our animal relatives – beings similar enough to us, psychologically, to make these studies relevant to hoomans – you should append a big fat ethical asterisk.

  46. Rewrite says:

    I’m curious if you’re familiar with this take on the brain, LSD, neuroscience, etc: https://opentheory.net/2019/11/neural-annealing-toward-a-neural-theory-of-everything/

  47. enye-word says:

    And then Richard was instantly cured, and never had any problems speaking up at work again. His coworkers all applauded, and became psychotherapists that very day. An eagle named “Psychodynamic Approach” flew into the clinic and perched atop the APA logo and shed a single tear. Coherence Therapy: Practice Manual And Training Guide was read several times, and God Himself showed up and enacted PsyD prescribing across the country. All the cognitive-behavioralists died of schizophrenia and were thrown in the lake of fire for all eternity.

    Sick https://afloweroutofstone.tumblr.com/post/142025388037/averyterrible-afloweroutofstone reference!

  48. denis says:

    Scott – about this time last year (11/28/2018) you reviewed The Mind Illuminated, by Culadasa. Doesn’t the premise of that book also map to this paradigm, where conscious awareness = capital city and subminds = valleys? In TMI too, communication of subminds with consciousness is low-bandwidth, but it’s the way subminds can talk to each other. Meditation is then the practice of clearing consciousness to let the subminds talk, and the result is unification of mind, leading to enlightenment.

    If the mountains between the valleys – the barriers between subminds – are the causes of various biases, this suggests the purpose of rationalists is to unify the mind, and thereby, er, achieve enlightenment?

  49. HectorVictorious says:

    “Many of them have read a lot on the subject (empirically, reading more about climate change will usually just make everyone more convinced of their current position, whatever it is).”

    Fun fact – Nyhan’s work has been failing to replicate.