Slate Star Codex

Meditative States As Feedback Loops

Three years ago, in Going Loopy, I wrote:

If the brain had been designed by an amateur, it would enter a runaway feedback loop the first time it felt an emotion. Think about it. You see a butterfly. This makes you happy. Being happy is an unexpected pleasant surprise. Now you’re happy that you’re happy. This makes you extra happy. Being extra happy is awesome! This makes you extra extra happy. And so on to as much bliss as your neurons are capable of representing. In the real world, either those feedback loops usually don’t happen, or they converge and stop at some finite point. I would not be surprised to learn that a lot of evolutionary innovation and biochemical complexity goes into creating a strong barrier against conditioning on your own internal experience.

“Evolutionary innovation and biochemical complexity”? Haha no, people are just too distractable to keep having the same emotion for more than a couple seconds.

I get this from Leigh Brasington’s excellent Right Concentration, a Buddhist perspective on various advanced meditative states called jhanas. To get to the first of these jhanas (there are eight in all), you become really good at concentration meditation, until you can concentrate on your breath a long time without getting distracted. Then you concentrate on your breath for a long time. Then you take your one-pointed ultra-concentrated mind, and you notice (or generate, or imagine) a pleasant feeling. This produces the first jhana, which the Buddhist scriptures describe as:

One drenches, steeps, saturates, and suffuses one’s body with the rapture and happiness born of seclusion, so that there is no part of one’s body that is not suffused by rapture and happiness.

Brasington backs this up with his own experience and those of other meditators he knows. The first jhana is really, really, really pleasurable; when you hear meditators talk about achieving “bliss states”, it’s probably something like the first jhana.

And here’s the book’s description of why it happens:

When access concentration is firmly established, then you shift your attention from the breath (or whatever your meditation object is) to a pleasant sensation. You put your attention on that sensation, and maintain your attention on that sensation, and do nothing else…

What you are attempting to do is set up a positive feedback loop. An example of a positive feedback loop is that awful noise a speaker will make if a microphone is held too close to it. What’s happening is that the ambient noise in the room goes into the microphone, is amplified by the amplifier, and comes out the speaker louder. It then reenters the microphone, gets amplified even more, comes out louder still, goes into the microphone yet again, and so on. You are trying to do exactly the same thing, except, rather than a positive feedback loop of noise, you are attempting to generate a positive feedback loop of pleasure. You hold your attention on a pleasant sensation. That feels nice, adding a bit more pleasure to your overall experience. That addition is also pleasurable, adding more pleasure, and so on, until, instead of getting a horrible noise, you get an explosion of pleasure.

The book doesn’t come out and say that the other seven jhanas are the same thing, but that seems consistent with the descriptions. For example, the fourth jhana is a state of ultimate calm. Seems like maybe if you become calm, then being so calm is kind of calming, and that’s even more calming, and so on until you’ve maxed out your mental calmness-meter.

And the explanation of why this doesn’t happen all the time is that non-meditators just can’t concentrate hard enough. A microphone-amp system that turns on and off a couple of times each second will never get a really good feedback loop going. A mind that’s always flitting from one thing to another can’t build up enough self-referentiality to reach infinite bliss.
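
Brasington doesn’t put numbers on any of this, but the mechanism is simple enough to run as a toy simulation. Everything below – the gain, the saturation ceiling, the attention-span parameter – is my own invention, not anything from Right Concentration:

```python
# Toy model of the jhana feedback loop: pleasure re-amplifies itself with
# some gain while attention holds, saturates at a ceiling, and resets to
# baseline whenever attention lapses.

def run_loop(steps, gain, attention_span, ceiling=100.0):
    pleasure = 1.0  # the initial pleasant sensation
    for t in range(steps):
        if t % attention_span == attention_span - 1:
            pleasure = 1.0  # distracted: the loop breaks, start over
        else:
            pleasure = min(pleasure * gain, ceiling)  # feed pleasure back in
    return pleasure

# Concentrated meditator: attention never lapses during the session.
print(run_loop(steps=50, gain=1.3, attention_span=1000))  # 100.0 -- maxed out

# Ordinary mind: flits away every few moments; the loop never builds.
print(run_loop(steps=50, gain=1.3, attention_span=4))     # ~1.7
```

What the toy makes obvious: the gain matters much less than the length of the uninterrupted run, which is exactly the microphone-and-speaker intuition.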

Book Review: Mastering The Core Teachings Of The Buddha

I.

I always wanted to meditate more, but never really got around to it. And (I thought) I had an unimpeachable excuse. The demands of a medical career are incompatible with such a time-consuming practice.

Enter Daniel Ingram MD, an emergency physician who claims to have achieved enlightenment just after graduating medical school. His book is called Mastering The Core Teachings Of The Buddha, but he could also have called it Buddhism For ER Docs. ER docs are famous for being practical, working fast, and thinking everyone else is an idiot. MCTB delivers on all three counts. And if you’ve ever had an attending quiz you on the difference between type 1 and type 2 second-degree heart block, you’ll love Ingram’s taxonomy of the stages of enlightenment.

The result is a sort of perfect antidote to the vague hippie-ism you get from a lot of spirituality. For example, from page 324:

I feel the need to address, which is to say shoot down with every bit of rhetorical force I have, the notion promoted by some teachers and even traditions that there is nothing to do, nothing to accomplish, no goal to obtain, no enlightenment other than the ordinary state of being…which, if it were true, would have been very nice of them, except that it is complete bullshit. The Nothing To Do School and the You Are Already There School are both basically vile extremes on the same basic notion that all effort to attain to mastery is already missing the point, an error of craving and grasping. They both contradict the fundamental premise of this book, namely that there is something amazing to attain and understand and that there are specific, reproducible methods that can help you do that. Here is a detailed analysis of what is wrong with these and related perspectives…

…followed by a detailed analysis of what’s wrong with this position, which he compares to “let[ting] a blind and partially paralyzed untrained stroke victim perform open-heart surgery on your child based on the notion that they are already an accomplished surgeon but just have to realize it”.

This isn’t to say that MCTB isn’t a spiritual book, or that it shies away from mysticism or the transcendent. MCTB is very happy to discuss mysticism and the transcendent. It just quarantines the mystery within a carefully explained structure of rationally-arranged progress, so that it looks something like “and at square 41B in our perfectly rectangular grid you’ll encounter a mind-state which is impossible to explain even in principle, here are a few woefully inadequate metaphors for this mind-state so you’ll know when you’ve found it and should move on to square 41C.”

This is a little jarring. But – Ingram argues – it’s also very Buddhist. If you read the sutras with an open mind, the Buddha sounds a lot more like an ER doctor than a hippie. MCTB has a very Protestant fundamentalist feeling of digging through the exterior trappings of a religion to try to return to the purity of its origins. As far as I can tell, it succeeds – and in succeeding helped me understand Buddhism a whole lot better than anything else I’ve read.

II.

Ingram follows the Buddha in dividing the essence of Buddhism into three teachings: morality, concentration, and wisdom.

Morality seems like the odd one out here. Some Buddhists like to insist that Buddhism isn’t really a “religion”. It’s less like Christianity or Islam than it is like (for example) high intensity training at the gym – a highly regimented form of practice that improves certain faculties if pursued correctly. Talking about “morality” makes this sound kind of hollow; nobody says you have to be a good person to get bigger muscles from lifting weights.

MCTB gives the traditional answer: you should be moral because it’s the right thing to do, but also because it helps meditation. The same things that make you able to sleep at night with a clear mind make you able to meditate with a clear mind:

One more great thing about the first training [morality] is that it really helps with the next training: concentration. So here’s a tip: if you are finding it hard to concentrate because your mind is filled with guilt, judgment, envy or some other hard and difficult thought pattern, also work on the first training, kindness. It will be time well spent.

That leaves concentration (samatha) and wisdom (vipassana). You do samatha to get a powerful mind; you get a powerful mind in order to do vipassana.

Samatha meditation is the “mindfulness” stuff you’re always hearing about: concentrate on the breath, don’t let yourself get distracted, see if you can just attend to the breath and nothing else for minutes or hours. I read whole books about this before without understanding why it was supposed to be good, aside from vague things like “makes you feel more serene”. MCTB gives two reasons: first, it gets you into jhanas. Second, it prepares you for vipassana.

Jhanas are unusual mental states you can get into with enough concentration. Some of them are super blissful. Others are super tranquil. They’re not particularly meaningful in and of themselves, but they can give you heroin-level euphoria without having to worry about sticking needles in your veins. MCTB says, understatedly, that they can be a good encouragement to continue your meditation practice. It gives a taxonomy of eight jhanas, and suggests that a few months of training in samatha meditation can get you to the point where you can reach at least the first.

But the main point of samatha meditation is to improve your concentration ability so you can direct it to ordinary experience. Become so good at concentrating that you can attain various jhanas – but then, instead of focusing on infinite bliss or whatever other cool things you can do with your new talent, look at a wall or listen to the breeze or just try to understand the experience of existing in time.

This is vipassana (“insight”, “wisdom”) meditation. It’s a deep focus on the tiniest details of your mental experience, details so fleeting and subtle that without a samatha-trained mind you’ll miss them entirely. One such detail is the infamous “vibrations”, so beloved of hippies. Ingram notes that every sensation vibrates in and out of consciousness at a rate of between five and forty vibrations per second, sometimes speeding up or slowing down depending on your mental state. I’m a pathetic meditator and about as far from enlightenment as anybody in this world, but with enough focus even I have been able to confirm this to be true. And this is pretty close to the frequency of brain waves, which seems like a pretty interesting coincidence.

But this is just an example. The point is that if you really, really examine your phenomenological experience, you realize all sorts of surprising things. Ingram says that one early insight is a perception of your mental awareness of a phenomenon as separate from your perception of that phenomenon:

This mental impression of a previous sensation is like an echo, a resonance. The mind takes a crude impression of the object, and that is what we can think about, remember, and process. Then there may be a thought or an image that arises and passes, and then, if the mind is stable, another physical pulse. Each one of these arises and vanishes completely before the other begins, so it is extremely possible to sort out which is which with a stable mind dedicated to consistent precision and not being lost in stories. This means the instant you have experienced something, you know that it isn’t there any more, and whatever is there is a new sensation that will be gone in an instant. There are typically many other impermanent sensations and impressions interspersed with these, but, for the sake of practice, this is close enough to what is happening to be a good working model.

Engage with the preceding paragraphs. They are the stuff upon which great insight practice is based. Given that you know sensations are vibrating, pulsing in and out of reality, and that, for the sake of practice, every sensation is followed directly by a mental impression, you now know exactly what you are looking for. You have a clear standard. If you are not experiencing it, then stabilize the mind further, and be clearer about exactly when and where there are physical sensations.

With enough of this work, you gain direct insight into what Buddhists call “the three characteristics”. The first is impermanence, and is related to all the stuff above about how sensations flicker and disappear. The second is called “unsatisfactoriness”, and involves the inability of any sensation to be fulfilling in some fundamental way. And the last is “no-self”, an awareness that these sensations don’t really cohere into the classic image of a single unified person thinking and perceiving them.

The Buddha famously said that “life is suffering”, and placed the idea of suffering – dukkha – as the center of his system. This dukkha is the same as the “unsatisfactoriness” above.

I always figured the Buddha was talking about life being suffering in the sense that sometimes you’re poor, or you’re sick, or you have a bad day. And I always figured that making money or exercising or working to make your day better sounded like a more promising route to dealing with this kind of suffering than any kind of meditative practice. Ingram doesn’t disagree that things like bad days are examples of dukkha. But he explains that this is something way more fundamental. Even if you were having the best day of your life and everything was going perfectly, if you slowed your mind down and concentrated perfectly on any specific atomic sensation, that sensation would include dukkha. Dukkha is part of the mental machinery.

MCTB acknowledges that all of this sounds really weird. And there are more depths of insight meditation, all sorts of weird things you notice when you look deep enough, that are even weirder. It tries to be very clear that nothing it’s writing about is going to make much sense in words, and that reading the words doesn’t really tell you very much. The only way to really make sense of it is to practice meditation.

When you understand all of this on a really fundamental level – when you’re able to tease apart every sensation and subsensation and subsubsensation and see its individual components laid out before you – then at some point your normal model of the world starts running into contradictions and losing its explanatory power. This is very unpleasant, and eventually your mind does some sort of awkward Moebius twist on itself, adopts a better model of the world, and becomes enlightened.

III.

The rest of the book is dedicated to laying out, in detail, all the steps that you have to go through before this happens. In Ingram’s model – based on but not identical to the various models in various Buddhist traditions – there are fifteen steps you have to go through before “stream entry” – the first level of enlightenment. You start off at the first step, after meditating some number of weeks or months or years you pass to the second step, and so on.

A lot of these are pretty boring, but Ingram focuses on the fourth step, Arising And Passing Away. Meditators in this step enter what sounds like a hypomanic episode:

In the early part of this stage, the meditator’s mind speeds up more and more quickly, and reality begins to be perceived as particles or fine vibrations of mind and matter, each arising and vanishing utterly at tremendous speed…As this stage deepens and matures, meditators let go of even the high levels of clarity and the other strong factors of meditation, perceive even these to arise and pass as just vibrations, not satisfy, and not be self. They may plunge down into the very depths of the mind as though plunging deep underwater to where they can perceive individual frames of reality arise and pass with breathtaking clarity as though in slow motion […]

Strong sensual or sexual feelings and dreams are common at this stage, and these may have a non-discriminating quality that those attached to their notion of themselves as being something other than partially bisexual may find disturbing. Further, if you have unresolved issues around sexuality, which we basically all have, you may encounter aspects of them during this stage. This stage, its afterglow, and the almost withdrawal-like crash that can follow seem to increase the temptation to indulge in all manner of hedonistic delights, particularly substances and sex. As the bliss wears off, we may find ourselves feeling very hungry or lustful, craving chocolate, wanting to go out and party, or something like that. If we have addictions that we have been fighting, some extra vigilance near the end of this stage might be helpful.

This stage also tends to give people more of an extroverted, zealous or visionary quality, and they may have all sorts of energy to pour into somewhat idealistic or grand projects and schemes. At the far extreme of what can happen, this stage can imbue one with the powerful charisma of the radical religious leader.

Finally, at nearly the peak of the possible resolution of the mind, they cross something called “The Arising and Passing Event” (A&P Event) or “Deep Insight into the Arising and Passing Away”…Those who have crossed the A&P Event have stood on the ragged edge of reality and the mind for just an instant, and they know that awakening is possible. They will have great faith, may want to tell everyone to practice, and are generally evangelical for a while. They will have an increased ability to understand the teachings due to their direct and non-conceptual experience of the Three Characteristics. Philosophy that deals with the fundamental paradoxes of duality will be less problematic for them in some way, and they may find this fascinating for a time. Those with a strong philosophical bent will find that they can now philosophize rings around those who have not attained to this stage of insight. They may also incorrectly think that they are enlightened, as what they have seen was completely spectacular and profound. In fact, this is strangely common for some period of time, and thus may stop practicing when they have actually only really begun.

This is a common time for people to write inspired dharma books, poetry, spiritual songs, and that sort of thing. This is also the stage when people are more likely to join monasteries or go on great spiritual quests. It is also worth noting that this stage can look an awful lot like a manic episode as defined in the DSM-IV (the current diagnostic manual of psychiatry). The rapture and intensity of this stage can be basically off the scale, the absolute peak on the path of insight, but it doesn’t last. Soon the meditator will learn what is meant by the phrase, “Better not to begin. Once begun, better to finish!”

If this last part sounds ominous, it probably should. If the fourth stage looks like a manic episode, the next five or six stages all look like some flavor of deep clinical depression. Ingram discusses several spiritual traditions and finds that they all warn of an uncanny valley halfway along the spiritual path; he himself adopts St. John of the Cross’s phrase “Dark Night Of The Soul”. Once you have meditated enough to reach the A&P Event, you’re stuck in the (very unpleasant) Dark Night Of The Soul until you can meditate your way out of it, which could take months or years.

Ingram’s theory is that many people have had spiritual experiences without deliberately pursuing a spiritual practice – whether this be from everyday life, or prayer, or drugs, or even things you do in dreams. Some of these people accidentally cross the A&P Event, reach the Dark Night Of The Soul, and – not even knowing that the way out is through meditation – get stuck there for years, having nothing but a vague spiritual yearning and sense that something’s not right. He says that this is his own origin story – he got stuck in the Dark Night after having an A&P Event in a dream at age 15, was low-grade depressed for most of his life, and only recovered once he studied enough Buddhism to realize what had happened to him and how he could meditate his way out:

When I was about 15 years old I accidentally ran into some of the classic early meditation experiences described in the ancient texts and my reluctant spiritual quest began. I did not realize what had happened, nor did I realize that I had crossed something like a point of no return, something I would later call the Arising and Passing Away. I knew that I had had a very strange dream with bright lights, that my entire body and world had seemed to explode like fireworks, and that afterwards I somehow had to find something, but I had no idea what that was. I philosophized frantically for years until I finally began to realize that no amount of thinking was going to solve my deeper spiritual issues and complete the cycle of practice that had already started.

I had a very good friend that was in the band that employed me as a sound tech and roadie. He was in a similar place, caught like me in something we would later call the Dark Night and other names. He also realized that logic and cognitive restructuring were not going to help us in the end. We looked carefully at what other philosophers had done when they came to the same point, and noted that some of our favorites had turned to mystical practices. We reasoned that some sort of nondual wisdom that came from direct experience was the only way to go, but acquiring that sort of wisdom seemed a daunting task if not impossible […]

I [finally] came to the profound realization that they have actually worked all of this stuff out. Those darn Buddhists have come up with very simple techniques that lead directly to remarkable results if you follow instructions and get the dose high enough. While some people don’t like this sort of cookbook approach to meditation, I am so grateful for their recipes that words fail to express my profound gratitude for the successes they have afforded me. Their simple and ancient practices revealed more and more of what I sought. I found my experiences filling in the gaps in the texts and teachings, debunking the myths that pervade the standard Buddhist dogma and revealing the secrets meditation teachers routinely keep to themselves. Finally, I came to a place where I felt comfortable writing the book that I had been looking for, the book you now hold in your hands.

Once you meditate your way out of the Dark Night, you go through some more harrowing experiences, until you finally reach the fifteenth stage, Fruition, and achieve “stream entry” – the first level of enlightenment. Then you do it all again on a higher level, kind of like those video games where when you beat the game you get access to New Game+. Traditionally it takes four repetitions of the spiritual path before you attain complete perfect enlightenment, but Ingram suggests this is metaphorical and says it took him approximately twenty-seven repetitions over seven years.

He also says – and here his usual lucidity deserted him and I ended up kind of confused – that once you’ve achieved stream entry, you’re going to be going down paths whether you like it or not – the “stream” metaphor is apt insofar as it suggests being borne along by a current. The rest of your life – even after you achieve complete perfect enlightenment – will be spent cycling through the fifteen stages, with each stage lasting a few days to months.

This seems pretty bad, since the stages look a lot like depression, mania, and other more arcane psychiatric and psychological problems. Even if you don’t mind the emotional roller coaster, a lot of them sound just plain exhausting, with your modes of cognition and perception shifting and coming into question at various points. MCTB offers some tips for dealing with this – you can always slow your progress down the path by gorging on food, refusing to meditate, and doing various other unspiritual things, but the whole thing lampshades a question that MCTB profoundly fails to give anything remotely like an answer to:

IV.

Why would you want to do any of this?

The Buddha is supposed to have said: “I gained nothing whatsoever from Supreme Enlightenment, and for that reason it is called Supreme Enlightenment”. And sure, that’s the enigmatic Zen-sounding sort of statement we expect from our spiritual leaders. But if Buddhist practice is really difficult, and makes you perceive every single sensation as profoundly unsatisfactory in some hard-to-define way, and can plunge you into a neverending depression which you might get out of if you meditate hard enough, and then gives you a sort of permanent annoying low-grade bipolar disorder even if you succeed, then we’re going to need something better than pithy quotes.

Ingram dedicates himself hard to debunking a lot of the things people would use to fill the gap. Pages 261-328 discuss the various claims Buddhist schools have made about enlightenment, mostly to deny them all. He has nothing but contempt for the obviously silly ones, like how enlightened people can fly around and zap you with their third eyes. But he’s equally dismissive of things that sort of seem like the basics. He denies claims about how enlightened people can’t get angry, or effortlessly resist temptation, or feel universal unconditional love, or things like that. Some of this he supports with stories of enlightened leaders behaving badly; other times he cites himself as an enlightened person who frequently experiences anger, pain, and the like. Once he’s stripped everything else away, he says the only thing one can say about enlightenment is that it grants a powerful true experience of the non-dual nature of the world.

But still, why would we want to get that? I am super in favor of knowledge-for-knowledge’s-sake, but I’ve also read enough Lovecraft to have strong opinions about poking around Ultimate Reality in ways that tend to destroy your mental health.

The best Ingram can do is this:

I realize that I am not doing a good job of advertising enlightenment here, particularly following my descriptions of the Dark Night. Good point. My thesis is that those who must find it will, regardless of how it is advertised. As to the rest, well, what can be said? Am I doing a disservice by not selling it like nearly everyone else does? I don’t think so. If you want grand advertisements for enlightenment, there is a great stinking mountain of it there for you to partake of, so I hardly think that my bringing it down to earth is going to cause some harmful deficiency of glitz in the great spiritual marketplace.

[Meditation teacher] Bill Hamilton had a lot of great one-liners, but my favorite concerned insight practices and their fruits, of which he said, “Highly recommended, can’t tell you why.” That is probably the safest and most accurate advertisement for enlightenment that I have ever heard.

V.

I was reading MCTB at the same time I read Surfing Uncertainty, and it was hard not to compare them. Both claim to be guides to the mysteries of the mind – one from an external scientific perspective, the other from an internal phenomenological perspective. Is there any way to link them up?

Remember this quote from Surfing Uncertainty?:

Plausibly, it is only because the world we encounter must be parsed for action and intervention that we encounter, in experience, a relatively unambiguous determinate world at all. Subtract the need for action and the broadly Bayesian framework can seem quite at odds with the phenomenal facts about conscious perceptual experience: our world, it might be said, does not look as if it is encoded in an intertwined set of probability density distributions. Instead, it looks unitary and, on a clear day, unambiguous…biological systems, as mentioned earlier, may be informed by a variety of learned or innate “hyperpriors” concerning the general nature of the world. One such hyperprior might be that the world is usually in one determinate state or another.

Taken seriously, it suggests that some of the most fundamental factors of our experience are not real features of the sensory world, but very strong assumptions to which we fit sense-data in order to make sense of them. And Ingram’s theory of vipassana meditation looks a lot like concentrating really hard on our actual sense-data to try to disentangle them from the assumptions that make them cohere.

In the same way that our priors “snap” phrases like “PARIS IN THE THE SPRINGTIME” to a more coherent picture with only one “the”, or “snap” our saccade-jolted and blind-spot-filled visual world into a reasonable image, maybe they snap all of this vibrating and arising and passing away into something that looks like a permanent stable image of the world.

And in the same way that concentrating on “PARIS IN THE THE SPRINGTIME” really hard without any preconceptions lets you sniff out the extra “the”, so maybe enough samatha meditation lets you concentrate on the permanent stable image of the world until it dissolves into whatever the brain is actually doing. Maybe with enough dedication to observing reality as it really is rather than as you predict it to be, you can expose even the subjective experience of an observer as just a really strong hyperprior on all of the thought-and-emotion-related sense-data you’re getting.
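
To make the “snapping” concrete, here’s about the smallest Bayesian example I can write (all the numbers are invented by me for illustration): even when the bottom-up evidence mildly favors the duplicated “THE”, a strong enough language prior pulls the percept back to the coherent reading.

```python
# Two hypotheses about the sign. The prior encodes the hyperprior-ish rule
# "phrases almost never duplicate words"; the likelihood is what the eyes
# actually reported on a quick saccade.
prior      = {"one THE": 0.999, "two THEs": 0.001}
likelihood = {"one THE": 0.2,   "two THEs": 0.8}  # raw evidence favors the duplicate

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # {'one THE': ~0.996, 'two THEs': ~0.004} -- the percept "snaps"
```

On this picture, what samatha training buys you is the ability to keep staring until that prior loosens its grip and the posterior starts tracking the raw likelihood again.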

That leaves dukkha, this weird unsatisfactoriness that supposedly inheres in every sensation individually as well as life in general. If the goal of the brain is minimizing prediction error, if all of our normal forms of suffering like hunger and thirst and pain are just special cases of predictive error in certain inherent drives, then – well, this is a very fundamental form of badness which is inherent in all sensation and perception, and which a sufficiently-concentrated phenomenologist might be able to notice directly. Relevant? I’m not sure.

Mastering The Core Teachings Of The Buddha is a lucid guide to issues surrounding meditation practice and a good rational introduction to the Buddhist system. Parts of it are ultimately unsatisfactory, but apparently this is true of everything, so whatever.


Also available for free download here

Classified Thread 3: Semper Classifiedelis

This is the…monthly? bimonthly? occasional?…classified thread. Post advertisements, personals, and any interesting success stories from the last thread. Also:

1. Iacta_Procul, who posted about some of her life/mental health problems on the subreddit a few weeks ago, and who lots of people said they wanted a way to help, has decided to quit her dead-end job and try to start a math tutoring company. She has a Masters in math and offers to tutor any non-statistics undergrad mathematics, or any necessary test prep for the SAT/ACT/GRE/GMAT (including English/vocabulary/non-math sections). If you’re interested, contact her on Wyzant.

2. Isak – who doesn’t comment here much but is pretty active on Rationalist Tumblr Discord – is homeless right now, having trouble getting his disability check, and asking for some money to help stay afloat and get his life back on track. See his Fundly campaign page for more information.

3. An old friend of mine is looking for AI/data science people in North Carolina who he can ask questions about the opportunities there. If that describes you, email me at scott [at] shireroth [dot] org and I can get you in touch. (got enough responses; thanks to everyone who emailed)


Toward A Predictive Theory Of Depression

[Epistemic status: Total wild speculation]

I.

The predictive processing model offers compelling accounts of autism and schizophrenia. But Surfing Uncertainty and related sources I’ve read are pretty quiet about depression. Is there a possible PP angle here?

Chekroud (2015) has a paper trying to apply the model to depression. It’s scholarly enough, and I found it helpful in figuring out some aspects of the theory I hadn’t yet understood, but it’s pretty unambitious. The overall thesis is something like “Predictive processing says high-level beliefs shape our low-level perceptions and actions, so maybe depressed people have some high-level depressing beliefs.” Don’t get me wrong, CBT orthodoxy is great and has cured millions of patients – but in the end, this is just CBT orthodoxy with a neat new coat of Bayesian paint.

There’s something more interesting in Section 7.10 of Surfing Uncertainty, “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.
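
The argument, and the hand-wave, both fit in a few lines of code (the numbers and the drive_weight knob are mine, not Clark’s):

```python
# Each action: (expected prediction error, amount of intrinsic drive satisfied).
actions = {
    "sit in dark room": (0.0, 0.0),  # perfectly predictable, nothing gained
    "go explore":       (5.0, 4.0),  # surprising, but feeds hunger/novelty
}

def score(pred_error, drive, drive_weight):
    return -pred_error + drive_weight * drive  # higher is better

for w in (0.0, 2.0):
    best = max(actions, key=lambda a: score(*actions[a], drive_weight=w))
    print(f"drive_weight={w}: choose '{best}'")
# drive_weight=0.0 -> 'sit in dark room' (pure prediction-error minimization)
# drive_weight=2.0 -> 'go explore' (intrinsic error outweighs the surprise)
```

Note the two ways the dark room starts winning again: shrink drive_weight, or inflate the cost assigned to prediction error. Those are exactly the two knobs the next paragraph reaches for.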

But I notice that this whole “sit in a dark room and never leave” thing sounds a lot like what depressed people say they wish they could do (and how the most severe cases of depression actually end up). Might there be a connection? Either a decrease in the mysterious intrinsic-error-style factors that counterbalance the dark room scenario, or an increase in the salience of prediction error that makes failures less tolerable?

(also, there’s one way to end all prediction error forever, and it’s something depressed people think about a lot)

II.

Corlett, Frith, and Fletcher claim that an amphetamine-induced mania-like state may involve pathologically high confidence in neural predictions. I don’t remember if they took the obvious next step and claimed that depression was the opposite, but that sounds like another fruitful avenue to explore. So: what if depression is pathologically low confidence in neural predictions?

Chekroud’s theory of depression as high-level-depressing-beliefs bothers me because there are so many features of depression that aren’t cognitive or emotional or related to any of these higher-level functions at all. Depressed people move more slowly, in a characteristic pattern called “psychomotor retardation”. They display perceptual abnormalities. They’re more likely to get sick. There are lots of results like this.

Depression has to be about something more than just beliefs; it has to be something fundamental to the nervous system. And low confidence in neural predictions would do it. Since neural predictions are the basic unit of thought, encoding not just perception but also motivation, reward, and even movement – globally low confidence levels would have devastating effects on a whole host of processes.

Perceptually, they would make sense-data look less clear and distinct. Depressed people describe the world as gray, washed-out, losing its contrast. This is not metaphorical. You can do psychophysical studies on color perception in depressed people, you can stick electrodes on their eyeballs, and all of this will tell you that depressed people literally see the world in washed-out shades of gray. Descriptions of their sensory experience sound intuitively like the sensory experience you would get if all your sense organs were underconfident in their judgments.

Mechanically, they would make motor movements less forceful. Remember, in PP movements are “active inferences” – the body predicts that the limb it wants to move is somewhere else, then counts on the motor system’s drive toward minimizing prediction error to do the rest. If your predictions are underconfident, your movements are insufficiently forceful, and you get the psychomotor retardation that clinicians describe in depressed people. And what’s the closest analog to depressive psychomotor retardation? Parkinsonian bradykinesia. What causes Parkinsonian bradykinesia? We know the answer to this one – insufficient dopamine, where dopamine is known to encode the confidence level of motor predictions.
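
Here’s underconfident motor prediction as a toy (my own sketch; a real active-inference model would precision-weight errors at every level of a hierarchy, but the gain-scaling is the point):

```python
# Movement as prediction-error cancellation: the brain "predicts" the hand at
# the target, and the motor update rate is scaled by the prediction's confidence.

def reach(target, confidence, steps=20):
    position = 0.0
    for _ in range(steps):
        error = target - position       # proprioceptive prediction error
        position += confidence * error  # cancel it, at confidence-scaled gain
    return position

print(f"healthy:   {reach(1.0, confidence=0.5):.2f}")   # ~1.00 -- completes the reach
print(f"depressed: {reach(1.0, confidence=0.05):.2f}")  # ~0.64 -- slow, undershooting
```

Same target, same error signal; only the confidence term differs, and the low-confidence run is a crude picture of psychomotor retardation.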

Motivationally – well, I’m less certain, I still haven’t found a good predictive processing account of motivation I understand on an intuitive level. But if we draw the analogy to perceptual control theory, some motivations (like hunger) are probably a kind of “intrinsic error” that can be modeled as higher-level processes feeding reference points to lower-level control systems. If we imagine the processes predicting eg hunger, then predicting with low confidence sure sounds like the sort of thing where you should be less hungry. If they’re predicting “you should get out of bed”, then predicting that with low confidence sure sounds like the sort of thing where you don’t feel a lot of motivation to get out of bed.

I’m hesitant to take “low self-confidence” as a gimme – it seems to rely too much on a trick of the English language. But I think there really is a connection. Suppose that you’re taking a higher-level math class and you’re really bad at it. No matter how hard you study, you always find the material a bit confusing and are unsure whether you’re applying the concepts correctly. Your low confidence in your beliefs (eg answers to test questions) and actions (eg problem-solving strategies) creates general low self-confidence and feelings of worthlessness. Eventually you start feeling kind of like a loser, decide math isn’t for you, and drop the class to move on to something you’re more talented at.

If you have global low confidence, the world feels like a math class you don’t understand that you can’t escape from. This feeling might be totally false – you might be getting everything right – but you still feel that way. And there’s no equivalent to dropping out of the math class – except committing suicide, which is how far too many depressed people end up.

One complicating factor – how do we explain depressed people’s frequent certainty that they’ll fail? A proper Bayesian, barred from having confident beliefs about anything, will be maximally uncertain about whether she’ll fail or succeed – but some depressed people have really strong opinions on this issue. I’m not really sure about this, and admit it’s a point against this theory. I can only appeal to the math class example again – if there was a math class where I just had no confidence about anything I thought or said, I would probably be pretty sure I’d fail there too.

(just so I’m not totally just-so-storying here, here’s a study of depressed people’s probability calibration, which shows that – yup – they’re underconfident!)

This could tie into the “increased salience of prediction error” theory in Part I. If for some reason the brain became “overly conservative” – if it assigned very high cost to a failed prediction relative to the benefit of a successful prediction – then it would naturally lower its confidence levels in everything, the same way a very conservative bettor who can’t stand losing money is going to make smaller bets.
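
The bettor analogy can be made quantitative (toy numbers mine): take an agent with log utility betting a fraction of its stake at even odds on a prediction it gives 60% odds of panning out, and multiply the pain of losses by a loss-aversion factor.

```python
import math

def expected_utility(f, p=0.6, loss_aversion=1.0):
    # Log utility over a fractional bet f at even odds; losses weighted extra.
    return p * math.log(1 + f) + (1 - p) * loss_aversion * math.log(1 - f)

def best_fraction(loss_aversion):
    grid = [i / 1000 for i in range(999)]  # candidate bet fractions
    return max(grid, key=lambda f: expected_utility(f, loss_aversion=loss_aversion))

for la in (1.0, 1.2, 1.4):
    print(f"loss aversion {la}: bet {best_fraction(la):.2f} of stake")
# 1.0 -> 0.20 (the standard Kelly bet), 1.2 -> 0.11, 1.4 -> 0.03: weight failed
# predictions more heavily and every commitment shrinks toward zero.
```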

III.

But why would low confidence cause sadness?

Well, what, really, is emotion?

Imagine the world’s most successful entrepreneur. Every company they found becomes a multibillion-dollar success. Every stock they pick shoots up and never stops. Heck, even their personal life is like this. Every vacation they take turns out picture-perfect and creates memories that last a lifetime; every date they go on leads to passionate soul-burning love that never ends badly.

And imagine your job is to advise this entrepreneur. The only advice worth giving would be “do more stuff”. Clearly all the stuff they’re doing works, so aim higher, work harder, run for President. Another way of saying this is “be more self-confident” – if they’re doubting whether or not to start a new project, remind them that 100% of the things they’ve ever done have been successful, odds are pretty good this new one will too, and they should stop wasting their time second-guessing themselves.

Now imagine the world’s most unsuccessful entrepreneur. Every company they make flounders and dies. Every stock they pick crashes the next day. Their vacations always get rained-out, their dates always end up with the other person leaving halfway through and sticking them with the bill.

What if your job is advising this guy? If they’re thinking of starting a new company, your advice is “Be really careful – you should know it’ll probably go badly”. If they’re thinking of going on a date, you should warn them against it unless they’re really sure. A good global suggestion might be to aim lower, go for low-risk-low-reward steady payoffs, and wait on anything risky until they’ve figured themselves out a little bit more.

Corlett, Frith and Fletcher linked mania to increased confidence. But mania looks a lot like being happy. And you’re happy when you succeed a lot. And when you succeed a lot, maybe having increased confidence is the way to go. If happiness were a sort of global filter that affected all your thought processes and said “These are good times, you should press really hard to exploit your apparent excellence and not worry too much about risk”, that would be pretty evolutionarily useful. Likewise, if sadness were a way of saying “Things are going pretty badly, maybe be less confident and don’t start any new projects”, that would be useful too.

Depression isn’t normal sadness. But if normal sadness lowers neural confidence a little, maybe depression is the pathological result of biological processes that lower neural confidence. To give a total fake example which I’m not saying is what actually happens, if you run out of whatever neurotransmitter you use to signal high confidence, that would give you permanent pathological low confidence and might look like depression.

One problem with this theory is the time course. Sure, if you’re eternally successful, you should raise your confidence. But eternally successful people are rarely eternally happy. If we’re thinking of happiness-as-felt-emotion, it seems more like they’re happy for a few hours after they win an award or make their first million or whatever, then go back down to baseline. I’m not sure it makes sense to start lots of new projects in the hour after you win an award.

One way of resolving this: maybe happiness is the derivative of neural confidence? It’s the feeling of your confidence levels increasing, the same way acceleration is the feeling of your speed increasing?
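
In toy form (numbers mine): step confidence up once and the derivative – the proposed “happiness” – spikes briefly, then returns to zero even though confidence stays permanently higher.

```python
# Confidence before and after winning an award at t=3; happiness is the
# step-to-step change in confidence, not its absolute level.
confidence = [0.5, 0.5, 0.5, 0.8, 0.8, 0.8, 0.8]
happiness = [round(b - a, 2) for a, b in zip(confidence, confidence[1:])]
print(happiness)  # [0.0, 0.0, 0.3, 0.0, 0.0, 0.0] -- a brief spike, then baseline
```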

Of course, that’s three layers of crackpot – its own layer, under the layer of emotions as confidence level, under the layer of depression as change in prediction strategies. Maybe I should dial back my own confidence levels and stop there.

OT84: Threadictive Processing

This is the bi-weekly visible open thread. Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. New sidebar ad for Relationship Hero, a phone-in help line for social interaction related questions. Liron Shapira – whom many of you probably know from Quixey/CFAR/etc – is a co-founder, which makes me think they’re probably pretty reasonable and above-board.

2. In fact, thanks to everyone who’s emailed me about sidebar ads recently. I’m trying to walk a careful line here, where I’m neither so selective that it looks like I’m endorsing them, nor so unselective that actually bad or scammy companies make it in. If you ever feel like I’m erring on one side or the other, let me know.

3. Several good comments from last week’s thread on developmental genetics vs. evolutionary psychology. See eg Sam Reuben on how different animals implement instincts, TheRadicalModerate on the connectome, and Catherio on how across different individual animals, novel concepts seem to always get encoded in the same brain areas for some reason. Several people also brought up claims that some animals seem innately afraid of eg snakes, or innately susceptible to learning those fears, suggesting that genetics has managed to find a way to connect to the concept “snake” somehow. But it confuses me that this can be true at the same time as eg the experiment where kittens were raised in an artificial environment with no horizontal lines and weren’t able to see horizontal lines when grown up. I know there’s a difference between having a hard-coded concept and having a biased ability to learn a concept, and I know it makes sense that some hard-coded-ish concepts might need data before they “activate”, but it still seems weird to both have “snake” hard-coded enough to produce behavioral consequences, and “horizontal line” so un-hard-coded that you just might not learn it.

(also weird: trap innocent kittens in a freaky bizarro-dimension without horizontal lines and you win a Nobel, but try to give people one fricking questionnaire…)


How Do We Get Breasts Out Of Bayes Theorem?

[Epistemic status: I guess instincts clearly exist, so take this post more as an expression of confusion than as a claim that they don’t.]

Predictive processing isn’t necessarily blank-slatist. But its focus on building concepts out of attempts to generate/predict sense data poses a problem for theories of innate knowledge. PP is more comfortable with deviations from a blank slate that involve the rules of cognition than with those that involve the contents of cognition.

For example, the theory shouldn’t mind the existence of genes for IQ. If the brain works on Bayesian math, some brains might be able to do the calculations more effectively than others. It shouldn’t even mind claims like “girls are more emotional than boys” – that’s just a question of how different hormones affect the Bayesian weighting of logical vs. emotional input.

But evolutionary psychologists make claims like “Men have been evolutionarily programmed to like women with big breasts, because those are a sign of fertility.” Forget for a second whether this is politically correct, or cross-culturally replicable, or anything like that. From a neurological point of view, how could this possibly work?

In Clark’s version of PP, infants laboriously construct all their priors out of sensory evidence. Object permanence takes months. Sensory coordination – the belief that eg the auditory and visual streams describe the same world, so that the same object might be both visible and producing sound – is not assumed. Clark even flirts with the possibility that some really basic assumptions might be learned:

Plausibly, it is only because the world we encounter must be parsed for action and intervention that we encounter, in experience, a relatively unambiguous determinate world at all. Subtract the need for action and the broadly Bayesian framework can seem quite at odds with the phenomenal facts about conscious perceptual experience: our world, it might be said, does not look as if it is encoded in an intertwined set of probability density distributions. Instead, it looks unitary and, on a clear day, unambiguous…biological systems, as mentioned earlier, may be informed by a variety of learned or innate “hyperpriors” concerning the general nature of the world. One such hyperprior might be that the world is usually in one determinate state or another.

I realize he’s not coming out and saying that maybe babies see the world as a probability distribution over hypotheses and only gradually “figure out” that a determinate world is more pragmatic. But he’s sure coming closer to saying that than anybody else I know.

In any case, we work up from these sorts of deep hyperpriors to testing out new models and ideas. Presumably we eventually gain concepts like “breast” after a lot of trial-and-error in which we learn that they generate successful predictions about the sensory world.

In this model, the evolutionary psychological theory seems like a confusion of levels. How do our genes reach out and grab this particular high-level category in the brain, “breast”, to let us know that we’re programmed to find it attractive?

To a first approximation, all a gene does is code for a protein. How, exactly, do you design a protein that makes men find big-breasted women attractive? I mean, I can sort of imagine that if you know what neurons carry the concept of “breast”, you can sort of wire them up to whatever region of the hypothalamus handles sexual attraction, so that whenever you see breasts you feel attraction. But number one, are you sure there’s a specific set of neurons that carry the concept “breast”? And number two, how do you get those neurons (and no others) to express a certain gene?

And if you want to posit an entire complicated breast-locating system made up of hundreds of genes, remember that we only have about 20,000 genes total. Most of these are already involved in doing things like making the walls of lysosomes flexible enough or something really boring like that. Really it’s a miracle that a mere 20,000 genes can make a human at all. So how many of these precious resources do you want to take up constructing some kind of weird Rube-Goldbergesque breast-related brain circuit?

The only excuse I can think of for the evo psych perspective is that it obviously works sometimes. Animals do have instincts; it can’t be learning all the way down.

Sometimes when we really understand those instincts, they do look like weird Rube Goldberg contraptions made of brain circuits. The classic example is baby gulls demanding food from their mother. Adult gulls have a red dot on their beaks, and the baby bird algorithm seems to be “The first thing you see with a red dot is your mother; demand food from her.” Maybe “red dot” is primitive enough that it’s easier to specify genetically than “thing that looks like a mother bird”?
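
Maybe – it’s at least strikingly cheap to specify. Here’s the whole chick-side algorithm as code (a caricature, obviously; neurons aren’t pixels, and the thresholds are mine). The point is how little information “red dot” takes compared to “thing that looks like a mother bird”:

```python
def is_red_dot(pixel):
    r, g, b = pixel
    return r > 200 and g < 80 and b < 80  # three hard-coded thresholds

def find_mother(image):
    """Gull-chick rule: the first red dot you see is your mother."""
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            if is_red_dot(pixel):
                return (x, y)  # demand food here
    return None

beak_scene = [[(120, 120, 100), (230, 40, 30)],   # one reddish pixel
              [(110, 115, 105), (100, 100, 90)]]
print(find_mother(beak_scene))  # (1, 0)
```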

The clearest example I can think of where animals clearly have an instinctive understanding of a high level concept is sex/gender – a few gay humans and penguins aside, Nature seems pretty good at keeping its creatures heterosexual. But this is one of the rare cases where evolution might really want to devote some big fraction of the 20,000 genes it has to work with to building a Rube Goldberg circuit.

Also, maybe we shouldn’t set those few gender-nonconforming humans aside. Remember, autistic people have some kind of impairment in top-down prior-based processing relative to the bottom-up evidence-based kind, and they’re about eight times more likely to be trans than the general population. It sure looks like there’s some kind of process in which people have to infer their gender. And even though evolution seems to be shouting some really loud hints, maybe if you weigh streams of evidence in unusual ways you can end up somewhere unexpected. Evolution may be able to bias the process or control its downstream effects, but it doesn’t seem able to literally hard-code it.

Someone once asked me how to distinguish between good and bad evolutionary psychology. One heuristic might be to have a strong prior against any claim in which genes can just reach into the level of already-formed concepts and tweak them around, unless there’s a really strong reason for evolution to go through a lot of trouble to make it happen.


Predictive Processing And Perceptual Control

Yesterday’s review of Surfing Uncertainty mentioned how predictive processing attributes movement to strong predictions about proprioceptive sensations. Because the brain tries to minimize predictive error, it moves the limbs into the positions needed to produce those sensations, fulfilling its own prophecy.

This was a really difficult concept for me to understand at first. But there were a couple of passages that helped me make an important connection. See if you start thinking the same thing I’m thinking:

To make [bodily] action come about, the motor plant behaves (Friston, Daunizeau, et al, 2010) in ways that cancel out proprioceptive prediction errors. This works because the proprioceptive prediction errors signal the difference between how the bodily plant is currently disposed and how it would be disposed were the desired actions being performed. Proprioceptive prediction error will yield (moment-by-moment) the projected proprioceptive inputs. In this way, predictions of the unfolding proprioceptive patterns that would be associated with the performance of some action actually bring that action about. This kind of scenario is neatly captured by Hawkins and Blakeslee (2004), who write that: “As strange as it sounds, when your own behavior is involved, your predictions not only precede sensation, they determine sensation.”

And:

PP thus implements the distinctive circular dynamics described by Cisek and Kalaska using a famous quote from the American pragmatist John Dewey. Dewey rejects the ‘passive’ model of stimuli evoking responses in favour of an active and circular model in which ‘the motor response determines the stimulus, just as truly as sensory stimulus determines movement’

Still not getting it? What about:

According to active inference, the agent moves body and sensors in ways that amount to actively seeking out the sensory consequences that their brains expect.

This is the model from Will Powers’ Behavior: The Control Of Perception.

Clark knows this. A few pages after all these quotes, he writes:

One signature of this kind of grip-based non-reconstructive dance is that it suggests a potent reversal of our ordinary way of thinking about the relations between perception and action. Instead of seeing perception as the control of action, it becomes fruitful to think of action as the control of perception [Powers 1973, Powers et al, 2011].

But I feel like this connection should be given more weight. Powers’ perceptual control theory presages predictive processing theory in a lot of ways. In particular, both share the idea of cognitive “layers”, which act at various levels (light-intensity-detection vs. edge-detection vs. object-detection, or movements vs. positions-in-space vs. specific-muscle-actions vs. specific-muscle-fiber-tensions). Upper layers decide what stimuli they want lower levels to be perceiving, and lower layers arrange themselves in ways that produce those stimuli. PCT talks about “set points” for cybernetic systems, and PP talks about “predictions”, but they both seem to be groping at the same thing.
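
The shared picture reduces to something like this two-layer cartoon (my own sketch, with the PCT and PP vocabulary labeled on the same lines):

```python
def upper_layer():
    # The abstract goal "grasp the cup" becomes a perceptual target for the
    # layer below: hand-to-cup distance should read zero.
    return 0.0

def lower_layer(perceived, set_point, gain=0.5):
    error = set_point - perceived    # PP: prediction error; PCT: deviation from set point
    return perceived + gain * error  # act so that perception moves toward the target

distance = 10.0
target = upper_layer()
for step in range(5):
    distance = lower_layer(distance, target)
    print(f"step {step}: hand-to-cup distance {distance:.2f}")
# 5.00, 2.50, 1.25, 0.62, 0.31 -- each layer only ever corrects its own error;
# "set point" (PCT) and "prediction" (PP) are playing the same role.
```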

I was least convinced by the part of PCT which represented the uppermost layers of the brain as control systems controlling various quantities like “love” or “communism”, and which sometimes seemed to veer into self-parody. PP offers an alternative by describing those layers as making predictions (sometimes “active predictions” of the sort that guide behavior) and trying to minimize predictive error. This allows lower level systems to “control for” deviation from a specific plan, rather than just monitoring the amount of some scalar quantity.

My review of Behavior: The Control Of Perception ended by saying:

It does seem like there’s something going on where my decision to drive activates a lot of carefully-trained subsystems that handle the rest of it automatically, and that there’s probably some neural correlate to it. But I don’t know whether control systems are the right way to think about this… I think maybe there are some obvious parallels, maybe even parallels that bear fruit in empirical results, in lower level systems like motor control. Once you get to high-level systems like communism or social desirability, I’m not sure we’re doing much better than [strained control-related metaphors].

I think my instincts were right. PCT is a good model, but what’s good about it is that it approximates PP. It approximates PP best at the lower levels, and so is most useful there; its thoughts on the higher levels remain useful but start to diverge and so become less profound.

The Greek atomists like Epicurus have been totally superseded by modern atomic theory, but they still get a sort of “how did they do that?” award for using vague intuition and good instincts to cook up a scientific theory that couldn’t be proven or universally accepted until centuries later. If PP proves right, then Will Powers and PCT deserve a place in the pantheon beside them. There’s something kind of wasteful about this – we can’t properly acknowledge the cutting-edgeness of their contribution until it’s obsolete – but at the very least we can look through their other work and see if they’ve got even more smart ideas that might be ahead of their time.

(Along with his atomic theory, Epicurus gathered a bunch of philosophers and mathematicians into a small cult around him, who lived together in co-ed group houses preaching atheism and materialism and – as per the rumors – having orgies. If we’d just agreed he was right about everything from the start, we wouldn’t have had to laboriously reinvent his whole system.)


Book Review: Surfing Uncertainty

[Related to: It’s Bayes All The Way Up, Why Are Transgender People Immune To Optical Illusions?, Can We Link Perception And Cognition?]

I.

Sometimes I have the fantasy of being able to glut myself on Knowledge. I imagine meeting a time traveler from 2500, who takes pity on me and gives me a book from the future where all my questions have been answered, one after another. What’s consciousness? That’s in Chapter 5. How did something arise out of nothing? Chapter 7. It all makes perfect intuitive sense and is fully vouched for by unimpeachable authorities. I assume something like this is how everyone spends their first couple of days in Heaven, whatever it is they do for the rest of Eternity.

And every so often, my fantasy comes true. Not by time travel or divine intervention, but by failing so badly at paying attention to the literature that by the time I realize people are working on a problem it’s already been investigated, experimented upon, organized into a paradigm, tested, and then placed in a nice package and wrapped up with a pretty pink bow so I can enjoy it all at once.

The predictive processing model is one of these well-wrapped packages. Unbeknownst to me, over the past decade or so neuroscientists have come up with a real theory of how the brain works – a real unifying framework theory like Darwin’s or Einstein’s – and it’s beautiful and it makes complete sense.

Surfing Uncertainty isn’t pop science and isn’t easy reading. Sometimes it’s on the border of possible-at-all reading. Author Andy Clark (a professor of logic and metaphysics, of all things!) is clearly brilliant, but prone to going on long digressions about various esoteric philosophy-of-cognitive-science debates. In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

It’s also your book if you want to learn about predictive processing at all, since as far as I know this is the only existing book-length treatment of the subject. And it’s comprehensive, scholarly, and very good at explaining the theory and why it’s so important. So let’s be grateful for what we’ve got and take a look.

II.

Stanislas Dehaene writes of our senses:

We never see the world as our retina sees it. In fact, it would be a pretty horrible sight: a highly distorted set of light and dark pixels, blown up toward the center of the retina, masked by blood vessels, with a massive hole at the location of the “blind spot” where cables leave for the brain; the image would constantly blur and change as our gaze moved around. What we see, instead, is a three-dimensional scene, corrected for retinal defects, mended at the blind spot, stabilized for our eye and head movements, and massively reinterpreted based on our previous experience of similar visual scenes. All these operations unfold unconsciously—although many of them are so complicated that they resist computer modeling. For instance, our visual system detects the presence of shadows in the image and removes them. At a glance, our brain unconsciously infers the sources of lights and deduces the shape, opacity, reflectance, and luminance of the objects.

Predictive processing begins by asking: how does this happen? By what process do our incomprehensible sense-data get turned into a meaningful picture of the world?

The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary.

The bottom-up stream starts out as all that incomprehensible light and darkness and noise that we need to process. It gradually moves up all the cognitive layers that we already knew existed – the edge-detectors that resolve it into edges, the object-detectors that shape the edges into solid objects, et cetera.

The top-down stream starts with everything you know about the world, all your best heuristics, all your priors, everything that’s ever happened to you before – everything from “solid objects can’t pass through one another” to “e=mc^2” to “that guy in the blue uniform is probably a policeman”. It uses its knowledge of concepts to make predictions – not in the form of verbal statements, but in the form of expected sense data. It makes some guesses about what you’re going to see, hear, and feel next, and asks “Like this?” These predictions gradually move down all the cognitive layers to generate lower-level predictions. If that uniformed guy was a policeman, how would that affect the various objects in the scene? Given the answer to that question, how would it affect the distribution of edges in the scene? Given the answer to that question, how would it affect the raw-sense data received?

Both streams are probabilistic in nature. The bottom-up sensory stream has to deal with fog, static, darkness, and neural noise; it knows that whatever forms it tries to extract from this signal might or might not be real. For its part, the top-down predictive stream knows that predicting the future is inherently difficult and its models are often flawed. So both streams contain not only data but estimates of the precision of that data. A bottom-up percept of an elephant right in front of you on a clear day might be labelled “very high precision”; one of a vague form in a swirling mist far away might be labelled “very low precision”. A top-down prediction that water will be wet might be labelled “very high precision”; one that the stock market will go up might be labelled “very low precision”.

As these two streams move through the brain side-by-side, they continually interface with each other. Each level receives the predictions from the level above it and the sense data from the level below it. Then each level uses Bayes’ Theorem to integrate these two sources of probabilistic evidence as best it can. This can end up a couple of different ways.

First, the sense data and predictions may more-or-less match. In this case, the layer stays quiet, indicating “all is well”, and the higher layers never even hear about it. The higher levels just keep predicting whatever they were predicting before.

Second, low-precision sense data might contradict high-precision predictions. The Bayesian math will conclude that the predictions are still probably right, but the sense data are wrong. The lower levels will “cook the books” – rewrite the sense data to make it look as predicted – and then continue to be quiet and signal that all is well. The higher levels continue to stick to their predictions.

Third, there might be some unresolvable conflict between high-precision sense-data and predictions. The Bayesian math will indicate that the predictions are probably wrong. The neurons involved will fire, indicating “surprisal” – a gratuitously-technical neuroscience term for surprise. The higher the degree of mismatch, and the higher the supposed precision of the data that led to the mismatch, the more surprisal – and the louder the alarm sent to the higher levels.
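Here’s a minimal sketch of what one such layer might be computing – my own toy model with made-up numbers, treating both streams as Gaussians so the Bayesian math reduces to a precision-weighted average:

```python
def layer_update(prediction, pred_precision, sense, sense_precision,
                 alarm_threshold=4.0):
    """One layer's Bayesian fusion of a top-down prediction with bottom-up
    sense data, each a Gaussian (mean, precision = 1/variance)."""
    # Precision-weighted average: whichever stream is more confident wins.
    total = pred_precision + sense_precision
    posterior = (pred_precision * prediction + sense_precision * sense) / total
    # "Surprisal": the mismatch, scaled by the precision of the data behind it.
    surprisal = sense_precision * (sense - prediction) ** 2
    if surprisal < alarm_threshold:
        return posterior, None       # all is well; stay quiet
    return posterior, surprisal      # alarm sent up to the level above

# Case two: high-precision prediction vs. low-precision data - books get cooked.
print(layer_update(prediction=10.0, pred_precision=100.0,
                   sense=12.0, sense_precision=0.5))
# Case three: high-precision data contradicts the prediction - loud alarm upward.
print(layer_update(prediction=10.0, pred_precision=1.0,
                   sense=12.0, sense_precision=50.0))
```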

When the higher levels receive the alarms from the lower levels, this is their equivalent of bottom-up sense-data. They ask themselves: “Did the even-higher-levels predict this would happen?” If so, they themselves stay quiet. If not, they might try to change their own models that map higher-level predictions to lower-level sense data. Or they might try to cook the books themselves to smooth over the discrepancy. If none of this works, they send alarms to the even-higher-levels.

All the levels really hate hearing alarms. Their goal is to minimize surprisal – to become so good at predicting the world (conditional on the predictions sent by higher levels) that nothing ever surprises them. Surprise prompts a frenzy of activity adjusting the parameters of models – or deploying new models – until the surprise stops.

All of this happens several times a second. The lower levels constantly shoot sense data at the upper levels, which constantly adjust their hypotheses and shoot them down at the lower levels. When surprise is registered, the relevant levels change their hypotheses or pass the buck upwards. After umpteen zillion cycles, everyone has the right hypotheses, nobody is surprised by anything, and the brain rests and moves on to the next task. As per the book:

To deal rapidly and fluently with an uncertain and noisy world, brains like ours have become masters of prediction – surfing the waves of noisy and ambiguous sensory stimulation by, in effect, trying to stay just ahead of them. A skilled surfer stays ‘in the pocket’: close to, yet just ahead of the place where the wave is breaking. This provides power and, when the wave breaks, it does not catch her. The brain’s task is not dissimilar. By constantly attempting to predict the incoming sensory signal we become able – in ways we shall soon explore in detail – to learn about the world around us and to engage that world in thought and action.

The result is perception, which the PP theory describes as “controlled hallucination”. You’re not seeing the world as it is, exactly. You’re seeing your predictions about the world, cashed out as expected sensations, then shaped/constrained by the actual sense data.

III.

Enough talk. Let’s give some examples. Most of you have probably seen these before, but it never hurts to be reminded:

This demonstrates the degree to which the brain depends on top-down hypotheses to make sense of the bottom-up data. To most people, these two pictures start off looking like incoherent blotches of light and darkness. Once they figure out what they are (spoiler), the scene becomes obvious and coherent. According to the predictive processing model, this is how we perceive everything all the time – except usually the concepts necessary to make the scene fit together come from our higher-level predictions instead of from clicking on a spoiler link.

This demonstrates how the top-down stream’s efforts to shape the bottom-up stream and make it more coherent can sometimes “cook the books” and alter sensation entirely. The real picture says “PARIS IN THE THE SPRINGTIME” (note the duplicated word “the”!). The top-down stream predicts this should be a meaningful sentence that obeys English grammar, and so replaces the the bottom-up stream with what it thinks that it should have said. This is a very powerful process – how many times have I repeated the the word “the” in this paragraph alone without you noticing?

A more ambiguous example of “perception as controlled hallucination”. Here your experience doesn’t quite deny the jumbled-up nature of the letters, but it superimposes a “better” and more coherent experience which appears naturally alongside.

Next up – this low-quality video of an airplane flying at night. Notice how after an instant, you start to predict the movement and characteristics of the airplane, so that you’re no longer surprised by the blinking light, the movement, the other blinking light, the camera shakiness, or anything like that – in fact, if the light stopped blinking, you would be surprised, even though naively nothing could be less surprising than a dark portion of the night sky staying dark. After a few seconds of this, the airplane continuing on its (pretty complicated) way just reads as “same old, same old”. Then when something else happens – like the camera panning out, or the airplane making a slight change in trajectory – you focus entirely on that, the blinking lights and movement entirely forgotten or at least packed up into “airplane continues on its blinky way”. Meanwhile, other things – like the feeling of your shirt against your skin – have been completely predicted away and blocked from consciousness, freeing you to concentrate entirely on any subtle changes in the airplane’s motion.

In the same vein: this is Rick Astley’s “Never Gonna Give You Up” repeated again and again for ten hours (you can find some weird stuff on YouTube). The first hour, maybe you find yourself humming along occasionally. By the second hour, maybe it’s gotten kind of annoying. By the third hour, you’ve completely forgotten it’s even on at all.

But suppose that one time, somewhere around the sixth hour, it skipped two notes – just the two syllables “never”, so that Rick said “Gonna give you up.” Wouldn’t the silence where those two syllables should be sound as jarring as if somebody set off a bomb right beside you? Your brain, having predicted sounds consistent with “Never Gonna Give You Up” going on forever, suddenly finds its expectations violated and sends all sorts of alarms to the higher levels, where they eventually reach your consciousness and make you go “What the heck?”

IV.

Okay. You’ve read a lot of words. You’ve looked at a lot of pictures. You’ve listened to “Never Gonna Give You Up” for ten hours. Time for the payoff. Let’s use this theory to explain everything.

1. Attention. In PP, attention measures “the confidence interval of your predictions”. Sense-data within the confidence intervals counts as a match and doesn’t register surprisal. Sense-data outside the confidence intervals fails to match, registers surprisal, and alerts higher levels and eventually consciousness.

This modulates the balance between the top-down and bottom-up streams. High attention means that perception is mostly based on the bottom-up stream, since every little deviation is registering an error and so the overall perceptual picture is highly constrained by sensation. Low attention means that perception is mostly based on the top-down stream, and you’re perceiving only a vague outline of the sensory image with your predictions filling in the rest.
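In the toy terms of the sketch from Part II (again, my numbers, not the book’s), attention is just a gain on sensory precision – turn it up and the same small deviation now clears the alarm threshold:

```python
# Attention as a gain on sensory precision (toy numbers).
prediction, sense = 10.0, 10.3     # a small deviation from what was expected
alarm_threshold = 4.0

for attention, label in [(2.0, "low attention"), (80.0, "high attention")]:
    surprisal = attention * (sense - prediction) ** 2  # precision-weighted error
    print(label, "->", "alarm!" if surprisal > alarm_threshold else "quiet")
```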

There’s a famous experiment which you can try below – if you’re trying it, make sure to play the whole video before moving on:

About half of subjects, told to watch the players passing the ball, don’t notice the gorilla. Their view of the ball-passing is closely constrained by the bottom-up stream; they see mostly what is there. But their view of the gorilla is mostly dependent on the top-down stream. Their confidence intervals are wide. Somewhere in your brain is a neuron saying “is that a guy in a gorilla suit?” Then it consults the top-down stream, which says “This is a basketball game, you moron”, and it smooths out the anomalous perception into something that makes sense, like another basketball player.

But if you watch the video with the prompt “Look for something strange happening in the midst of all this basketball-playing”, you see the gorilla immediately. Your confidence intervals for unusual things are razor-thin; as soon as that neuron sees the gorilla it sends alarms to higher levels, and the higher levels quickly come up with a suitable hypothesis (“there’s a guy in a gorilla suit here”) which makes sense of the new data.

There’s an interesting analogy to vision here, where the center of your vision is very clear, and the outsides are filled in in a top-down way – I have a vague sense that my water bottle is in the periphery right now, but only because I kind of already know that, and it’s more of a mental note of “water bottle here as long as you ask no further questions” than a clear image of it. The extreme version of this is the blind spot, which gets filled in entirely with predicted imagery despite receiving no sensation at all.

2. Imagination, Simulation, Dreaming, Etc. Imagine a house. Now imagine a meteor crashing into the house. Your internal mental simulation was probably pretty good. Without even thinking about it, you got it to obey accurate physical laws like “the meteor continues on a constant trajectory”, “the impact happens in a realistic way”, “the impact shatters the meteorite”, and “the meteorite doesn’t bounce back up to space like a basketball”. Think how surprising this is.

In fact, think how surprising it is that you can imagine the house at all. This really high level concept – “house” – has been transformed in your visual imaginarium into a pretty good picture of a house, complete with various features, edges, colors, et cetera (if it hasn’t, read here). This is near-miraculous. Why do our brains have this apparently useless talent?

PP says that the highest levels of our brain make predictions in the form of sense data. They’re not just saying “I predict that guy over there is a policeman”, they’re generating the image of a policeman, cashing it out in terms of sense data, and colliding it against the sensory stream to see how it fits. The sensory stream gradually modulates it to fit the bottom-up evidence – a white or black policeman, a mustached or clean-shaven policeman. But the top-down stream is doing a lot of the work here. We are able to imagine the meteor, using the same machinery that would guide our perception of the meteor if we saw it up in the sky.

All of this goes double for dreaming. If perception is “controlled hallucination” – the top-down drivers of perception constrained by bottom-up evidence – then dreams are those top-down drivers playing around with themselves unconstrained by anything at all (or else very weakly constrained by bottom-up evidence, like when it’s really cold in your bedroom and you dream you’re exploring the North Pole).

A lot of people claim higher levels of this – lucid dreaming, astral projection, you name it, worlds exactly as convincing as our own but entirely imaginary. Predictive processing is very sympathetic to these accounts. The generative models that create predictions are really good; they can simulate the world well enough that it rarely surprises us. They also connect through various layers to our bottom-level perceptual apparatus, cashing out their predictions in terms of the lowest-level sensory signals. Given that we’ve got a top-notch world-simulator plus perception-generator in our heads, it shouldn’t be surprising when we occasionally perceive ourselves in simulated worlds.

3. Priming. I don’t mean the weird made-up kinds of priming that don’t replicate. I mean the very firmly established ones, like the one where, if you flash the word “DOCTOR” at a subject, they’ll be much faster and more skillful in decoding a series of jumbled and blurred letters into the word “NURSE”.

This is classic predictive processing. The top-down stream’s whole job is to assist the bottom-up stream in making sense of complicated fuzzy sensory data. After it hears the word “DOCTOR”, the top-down stream is already thinking “Okay, so we’re talking about health care professionals”. This creeps through all the lower levels as a prior for health-care-related things; when the sense organs receive data that can be interpreted in a health-care-related way, the high prior boosts that interpretation until it quickly becomes the overwhelming leading hypothesis.
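As a toy Bayesian calculation (all numbers made up, and the normalizer held fixed for simplicity): priming is just a shifted prior, and the same blurry evidence for “NURSE” only becomes the leading hypothesis once “DOCTOR” has raised it:

```python
def posterior(prior, likelihood, p_evidence):
    # Bayes' theorem: P(word | blur) = P(blur | word) * P(word) / P(blur)
    return likelihood * prior / p_evidence

blur_fits_nurse = 0.6   # the jumbled letters are fairly consistent with "NURSE"
p_blur = 0.05           # overall probability of this blur pattern (held fixed)

print(posterior(prior=0.001, likelihood=blur_fits_nurse, p_evidence=p_blur))  # unprimed: 0.012
print(posterior(prior=0.02,  likelihood=blur_fits_nurse, p_evidence=p_blur))  # primed:   0.24
```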

4. Learning. There’s a philosophical debate – which I’m not too familiar with, so sorry if I get it wrong – about how “unsupervised learning” is possible. Supervised reinforcement learning is when an agent tries various stuff, and then someone tells the agent if it’s right or wrong. Unsupervised learning is when nobody’s around to tell you, and it’s what humans do all the time.

PP offers a compelling explanation: we create models that generate sense data, and keep those models if the generated sense data match observation. Models that predict sense data well stick around; models that fail to predict the sense data accurately get thrown out. Because of all those lower layers adjusting out contingent features of the sensory stream, any given model is left with exactly the sense data necessary to tell it whether it’s right or wrong.
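A toy version of that selection story (mine, with made-up numbers): several generative models compete, each generates its own guess at the sense data, and the one whose guesses best match observation survives – no teacher required:

```python
# Unsupervised learning as model selection: no one labels anything; the
# sensory stream itself scores each candidate generative model.
observations = [2.1, 1.9, 2.0, 2.2, 1.8]

models = {"world is ~1": 1.0, "world is ~2": 2.0, "world is ~3": 3.0}
errors = {name: sum((x - guess) ** 2 for x in observations)
          for name, guess in models.items()}
print(min(errors, key=errors.get))  # "world is ~2" survives
```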

PP isn’t exactly blank slatist, but it’s compatible with a slate that’s pretty fricking blank. Clark discusses “hyperpriors” – extremely basic assumptions about the world that we probably need to make sense of anything at all. For example, one hyperprior is sensory synchronicity – the idea that our five different senses are describing the same world, and that the stereo we see might be the source of the music we hear. Another hyperprior is object permanence – the idea that the world is divided into specific objects that stick around whether or not they’re in the sensory field. Clark says that some hyperpriors might be innate – but says they don’t have to be, since PP is strong enough to learn them on its own if it has to. For example, after enough examples of, say, seeing a stereo being smashed with a hammer at the same time that music suddenly stops, the brain can infer that connecting the visual and auditory evidence together is a useful hack that helps it to predict the sensory stream.

I can’t help thinking here of Molyneux’s Problem, a thought experiment about a blind-from-birth person who navigates the world through touch alone. If suddenly given sight, could the blind person naturally connect the visual appearance of a cube to her own concept “cube”, which she derived from the way cubes feel? In 2003, some researchers took advantage of a new cutting-edge blindness treatment to test this out; they found that no, the link wasn’t intuitively obvious to the newly sighted patients. Score one for learned hyperpriors.

But learning runs all the way from these really basic hyperpriors up to normal learning, like what the capital of France is – which, if nothing else, helps predict what’s going to be on the other side of your geography flashcard, and which high-level systems might keep around as a useful concept to help make sense of the world and predict events.

5. Motor Behavior. About a third of Surfing Uncertainty is on the motor system, it mostly didn’t seem that interesting to me, and I don’t have time to do it justice here (I might make another post on one especially interesting point). But this has been kind of ignored so far. If the brain is mostly just in the business of making predictions, what exactly is the motor system doing?

Based on a bunch of really excellent experiments that I don’t have time to describe here, Clark concludes: it’s predicting action, which causes the action to happen.

This part is almost funny. Remember, the brain really hates prediction error and does its best to minimize it. With failed predictions about eg vision, there’s not much you can do except change your models and try to predict better next time. But with predictions about proprioceptive sense data (ie your sense of where your joints are), there’s an easy way to resolve prediction error: just move your joints so they match the prediction. So (and I’m asserting this, but see Chapters 4 and 5 of the book to hear the scientific case for this position) if you want to lift your arm, your brain just predicts really really strongly that your arm has been lifted, and then lets the lower levels’ drive to minimize prediction error do the rest.
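A minimal sketch of that trick (my toy version of the idea, not Friston’s actual equations): the proprioceptive layer resolves its prediction error by moving the arm rather than by revising the model:

```python
def active_inference_step(actual_angle, predicted_angle, gain=0.3):
    """Resolve proprioceptive prediction error by acting, not re-modeling:
    the arm drifts toward wherever the brain insists it already is."""
    error = predicted_angle - actual_angle
    return actual_angle + gain * error  # the muscles close part of the gap

angle = 0.0        # arm actually at rest
prediction = 90.0  # brain "predicts", very strongly, that the arm is raised
for _ in range(20):
    angle = active_inference_step(angle, prediction)
print(round(angle, 1))  # ~89.9: the prediction has made itself true
```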

Under this model, the “prediction” of a movement isn’t just the idle thought that a movement might occur, it’s the actual motor program. This gets unpacked at all the various layers – joint sense, proprioception, the exact tension level of various muscles – and finally ends up in a particular fluid movement:

Friston and colleagues…suggest that precise proprioceptive predictions directly elicit motor actions. This means that motor commands have been replaced by (or as I would rather say, implemented by) proprioceptive predictions. According to active inference, the agent moves body and sensors in ways that amount to actively seeking out the sensory consequences that their brains expect. Perception, cognition, and action – if this unifying perspective proves correct – work together to minimize sensory prediction errors by selectively sampling and actively sculpting the stimulus array. This erases any fundamental computational line between perception and the control of action. There remains [only] an obvious difference in direction of fit. Perception here matches neural hypotheses to sensory inputs…while action brings unfolding proprioceptive inputs into line with neural predictions. The difference, as Anscombe famously remarked, is akin to that between consulting a shopping list (thus letting the list determine the contents of the shopping basket) and listing some actually purchased items (thus letting the contents of the shopping basket determine the list). But despite the difference in direction of fit, the underlying form of the neural computations is now revealed as the same.

6. Tickling Yourself. One consequence of the PP model is that organisms are continually adjusting out their own actions. For example, if you’re trying to predict the movement of an antelope you’re chasing across the visual field, you need to adjust out the up-down motion of your own running. So one “hyperprior” that the body probably learns pretty early is that if it itself makes a motion, it should expect to feel the consequences of that motion.

There’s a really interesting illusion called the force-matching task. A researcher exerts some force against a subject, then asks the subject to exert exactly that much force against something else. Subjects’ forces are usually biased upwards – they exert more force than they were supposed to – probably because their brain’s prediction engines are “cancelling out” their own force. Clark describes one interesting implication:

The same pair of mechanisms (forward-model-based prediction and the dampening of resulting well-predicted sensation) have been invoked to explain the unsettling phenomenon of ‘force escalation’. In force escalation, physical exchanges (playground fights being the most common exemplar) mutually ramp up via a kind of step-ladder effect in which each person believes the other one hit them harder. Shergill et al describe experiments that suggest that in such cases each person is truthfully reporting their own sensations, but that those sensations are skewed by the attenuating effects of self-prediction. Thus, ‘self-generated forces are perceived as weaker than externally generated forces of equal magnitude.’

This also explains why you can’t tickle yourself – your body predicts and adjusts away your own actions, leaving only an attenuated version.
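The escalation dynamic is easy to simulate (a toy model; the attenuation figure is made up): each side matches the force it felt from the other, but perceives its own reply at a discount, so the true forces ratchet upward:

```python
attenuation = 0.7  # self-generated force feels ~70% as strong (made-up figure)

force_a = force_b = 1.0
for round_num in range(1, 6):
    # Each child tries to match the force they felt; since their own output
    # feels weaker to them than it is, they overshoot, and so does the other.
    force_a = force_b / attenuation
    force_b = force_a / attenuation
    print(round_num, round(force_a, 2), round(force_b, 2))
```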

7. The Placebo Effect. We hear a lot about “pain gating” in the spine, but the PP model does a good job of explaining what this is: adjusting pain based on top-down priors. If you believe you should be in pain, the brain will use that as a filter to interpret ambiguous low-precision pain signals. If you believe you shouldn’t, the brain will be more likely to assume ambiguous low-precision pain signals are a mistake. So if you take a pill that doctors assure you will cure your pain, then your lower layers are more likely to interpret pain signals as noise, “cook the books” and prevent them from reaching your consciousness.

Psychosomatic pain is the opposite of this; see Section 7.10 of the book for a fuller explanation.

8. Asch Conformity Experiment. More speculative, and not from the book. But remember this one? A psychologist asked subjects which lines were the same length as other lines. The lines were all kind of similar lengths, but most subjects were still able to get the right answer. Then he put the subjects in a group with confederates; all of the confederates gave the same wrong answer. When the subject’s turn came, usually they would disbelieve their eyes and give the same wrong answer as the confederates.

The bottom-up stream provided some ambiguous low-precision evidence pointing toward one line. But in the final Bayesian computation, it was swamped by the strong top-down prediction that it would be another. So the middle layers “cooked the books” and replaced the perceived sensation with the predicted one. From Wikipedia:

Participants who conformed to the majority on at least 50% of trials reported reacting with what Asch called a “distortion of perception”. These participants, who made up a distinct minority (only 12 subjects), expressed the belief that the confederates’ answers were correct, and were apparently unaware that the majority were giving incorrect answers.

9. Neurochemistry. PP offers a route to a psychopharmacological holy grail – an explanation of what different neurotransmitters really mean, on a human-comprehensible level. Previous attempts to do this, like “dopamine represents reward, serotonin represents calmness”, have been so wildly inadequate that the whole question seems kind of disreputable these days.

But as per PP, the NMDA glutamatergic system mostly carries the top-down stream, the AMPA glutamatergic system mostly carries the bottom-up stream, and dopamine mostly carries something related to precision, confidence intervals, and surprisal levels. This matches a lot of observational data in a weirdly consistent way – for example, it doesn’t take a lot of imagination to think of the slow, hesitant movements of Parkinson’s disease as having “low motor confidence”.

10. Autism. Various research in the PP tradition has coalesced around the idea of autism as an unusually high reliance on bottom-up rather than top-down information, leading to “weak central coherence” and constant surprisal as the sensory data fails to fall within pathologically narrow confidence intervals.

Autistic people classically can’t stand tags on clothing – they find them too scratchy and annoying. Remember the example from Part III about how you successfully predicted away the feeling of the shirt on your back, and so manage never to think about it when you’re trying to concentrate on more important things? Autistic people can’t do that as well. Even though they have a layer in their brain predicting “will continue to feel shirt”, the prediction is too precise; it predicts that next second, the shirt will produce exactly the same pattern of sensations it does now. But realistically as you move around or catch passing breezes the shirt will change ever so slightly – at which point autistic people’s brains will send alarms all the way up to consciousness, and they’ll perceive it as “my shirt is annoying”.
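In the toy terms from Part II (made-up numbers again, with the alarm threshold standing in for the width of the confidence interval): the same tiny drift in shirt-feel stays quiet under a relaxed interval but alarms under a pathologically tight one:

```python
predicted_feel, actual_feel = 3.00, 3.05  # the shirt shifts ever so slightly
surprisal = 10.0 * (actual_feel - predicted_feel) ** 2  # = 0.025

print("relaxed interval (4.0):", "alarm!" if surprisal > 4.0 else "quiet")
print("hyper-precise interval (0.01):", "alarm!" if surprisal > 0.01 else "quiet")
```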

Or consider the classic autistic demand for routine, and misery as soon as the routine is disrupted. Because their brains can only make very precise predictions, the slightest disruption to routine registers as strong surprisal, strong prediction failure, and “oh no, all of my models have failed, nothing is true, anything is possible!” Compare to a neurotypical person in the same situation, who would just relax their confidence intervals a little bit and say “Okay, this is basically 99% like a normal day, whatever”. It would take something genuinely unpredictable – like being thrown on an unexplored continent or something – to give a neurotypical person the same feeling of surprise and unpredictability.

This model also predicts autistic people’s strengths. We know that polygenic risk for autism is positively associated with IQ. This would make sense if the central feature of autism was a sort of increased mental precision. It would also help explain why autistic people seem to excel in high-need-for-precision areas like mathematics and computer programming.

11. Schizophrenia. Converging lines of research suggest this also involves weak priors, apparently at a different level to autism and with different results after various compensatory mechanisms have had their chance to kick in. One especially interesting study asked neurotypicals and schizophrenics to follow a moving light, much like the airplane video in Part III above. When the light moved in a predictable pattern, the neurotypicals were much better at tracking it; when it was a deliberately perverse video specifically designed to frustrate expectations, the schizophrenics actually did better. This suggests that neurotypicals were guided by correct top-down priors about where the light would be going; schizophrenics had very weak priors and so weren’t really guided very well, but also didn’t screw up when the light did something unpredictable. Schizophrenics are also famous for not being fooled by the “hollow mask” and other illusions where top-down predictions falsely constrain bottom-up evidence. My guess is they’d be more likely to see both ‘the’s in the “PARIS IN THE THE SPRINGTIME” image above.

The exact route from this sort of thing to schizophrenia is really complicated, and anyone interested should check out Section 2.12 and the whole of Chapter 7 from the book. But the basic story is that it creates waves of anomalous prediction error and surprisal, leading to the so-called “delusions of significance” where schizophrenics believe that eg the fact that someone is wearing a hat is some sort of incredibly important cosmic message. Schizophrenics’ brains try to produce hypotheses that explain all of these prediction errors and reduce surprise – which is impossible, because the prediction errors are random. This results in incredibly weird hypotheses, and eventually in schizophrenic brains being willing to ignore the bottom-up stream entirely – hence hallucinations.

All this is treated with antipsychotics, which antagonize dopamine, which – remember – represents confidence level. So basically the medication is telling the brain “YOU CAN IGNORE ALL THIS PREDICTION ERROR, EVERYTHING YOU’RE PERCEIVING IS TOTALLY GARBAGE SPURIOUS DATA” – which turns out to be exactly the message it needs to hear.

An interesting corollary of all this – because all of schizophrenics’ predictive models are so screwy, they lose the ability to use the “adjust away the consequences of your own actions” hack discussed in Part 5 of this section. That means their own actions don’t get predicted out, and seem like the actions of a foreign agent. This is why they get so-called “delusions of agency”, like “the government beamed that thought into my brain” or “aliens caused my arm to move just now”. And in case you were wondering – yes, schizophrenics can tickle themselves.

12. Everything else. I can’t possibly do justice to the whole of Surfing Uncertainty, which includes sections in which it provides lucid and compelling PP-based explanations of hallucinations, binocular rivalry, conflict escalation, and various optical illusions. More speculatively, I can think of really interesting connections to things like phantom limbs, creativity (and its association with certain mental disorders), depression, meditation, etc, etc, etc.

The general rule in psychiatry is: if you think you’ve found a theory that explains everything, diagnose yourself with mania and check yourself into the hospital. Maybe I’m not at that point yet – for example, I don’t think PP does anything to explain what mania itself is. But I’m pretty close.

V.

This is a really poor book review of Surfing Uncertainty, because I only partly understood it. I’m leaving out a lot of stuff about the motor system, debate over philosophical concepts with names like “enactivism”, descriptions of how neurons form and unform coalitions, and of course a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”. As I reread and hopefully come to understand some of this better, it might show up in future posts.

But speaking of philosophical debates, there’s one thing that really struck me about the PP model.

Voodoo psychology suggests that culture and expectation tyrannically shape our perceptions. Taken to an extreme, objective knowledge is impossible, since all our sense-data is filtered through our own bias. Taken to a very far extreme, we get things like What The !@#$ Do We Know?‘s claim that the Native Americans literally couldn’t see Columbus’ ships, because they had no concept of “caravel” and so the percept just failed to register. This sort of thing tends to end by arguing that science was invented by straight white men, and so probably just reflects straight white maleness, and so we should ignore it completely and go frolic in the forest or something.

Predictive processing is sympathetic to all this. It takes all of this stuff like priming and the placebo effect, and it predicts it handily. But it doesn’t give up. It (theoretically) puts it all on a sound mathematical footing, explaining exactly how much, and in which ways, our expectations should shape our reality. I feel like someone armed with predictive processing and a bit of luck should have been able to predict that the placebo effect and basic priming would work, but stereotype threat and social priming wouldn’t. Maybe this is total retrodictive cheating. But I feel like it should be possible.

If this is true, it gives us more confidence that our perceptions should correspond – at least a little – to the external world. We can accept that we may be misreading “PARIS IN THE THE SPRINGTIME” while remaining confident that we wouldn’t misread “PARIS IN THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE SPRINGTIME” as containing only one “the”. Top-down processing very occasionally meddles in bottom-up sensation, but (as long as you’re not schizophrenic), it sticks to an advisory role rather than being able to steamroll over arbitrary amounts of reality.

The rationalist project is overcoming bias, and that requires both an admission that bias is possible, and a hope that there’s something other than bias which we can latch onto as a guide. Predictive processing gives us more confidence in both, and helps provide a convincing framework we can use to figure out what’s going on at all levels of cognition.

Highlights From The Comments On My IRB Nightmare

Many people took My IRB Nightmare as an opportunity to share their own IRB stories. From an emergency medicine doctor, via my inbox:

Thanks for the great post about IRBs. I lived the same absurd nightmare in 2015-2016, as an attending, and it’s amazing how your experience matches my own, despite my being in Canada.

One of our residents had an idea for an extremely simple physiological study of COPD exacerbations, where she’d basically look at the patient and monitor his RR, saturation, and exhaled CO2 temporal changes during initial treatment. Just as you were, I was really naive back in 2015, and expected we wouldn’t even need a consent form, since she didn’t even have to *talk* to the patients, much less perform any intervention. Boy, was I wrong! The IRB, of course, insisted on a two-page consent form discussing risks and benefits of the intervention, and many other forms. I had to help her file over 300 pages (!) of various forms. Just as in your case, we had to abandon the study when, two years after the first contact with the IRB, they suggested hilarious “adjustments” to the study protocol “in order to mitigate possible risks”.

From baj2235 on the subreddit:

Currently working in a brand new lab, so one would think I’d have a lot to do. Instead, thus far my job has consisted of sitting in an empty room coming up with increasingly unlikely hypotheses that will probably never be tested because our IRB hasn’t approved our NOU (Notice of Use) forms. For those who don’t know, NOUs are essentially 15-page forms that say “We study this, and we promise to be super responsible while studying it.” We have 4 currently awaiting approval, submitted in May. The reason they aren’t approved yet? The IRB hasn’t met since June, and likely won’t meet again this month because of frickin’ Harvey. Which in essence means the fine American taxpayer has essentially been paying me to sit in a room and twiddle my thumbs for the past 3 months because I can’t even grow E. coli without a frickin’ NOU.

From Garrett in the comments:

Oh, dear! I’ve actually been through this. I work in tech, but volunteer in EMS. As a part of wanting to advance the profession of EMS I figured I’d take on a small study. It would be a retrospective study about how well paramedics could recognize diabetic ketoacidosis (DKA) and Hyperosmolar hyperglycemic state (HHS) in comparison to ER doctors. […]

I had to do the “I am not a Nazi” training as well. In order to pass that, I had to be able to recite the FDA form number used as a part of new implantable medical device investigations. I wasn’t looking at a new device. I wasn’t looking at an old device. I was going to look at pairs of medical records and go “who correctly identified the problem?” […]

It’s now ~5 years after IRB and because of all of the headaches of getting the data to someone who isn’t faculty or a doctor, and who doesn’t have a $100k+ grant, I still don’t have my data. I need to send another email. I’m sure we can get an IRB extension with a few more trees sacrificed.

From Katie on Facebook:

I used to work at an fMRI research center and also had to take the Don’t Be a Nazi course!

My favorite story about the annoying IRB regulations is how they insisted on an HCG (pregnancy) test for our volunteers, despite the fact that MRI has no known adverse effect on pregnancy. So, fine, extra caution against an unknown but possible risk, sure.

But they insisted on a *blood test* done days in advance instead of the five-minute urine dipstick test that *actual doctors’ offices* would use. You know what doesn’t have risks? Peeing in a cup. And what does have risks of fainting, infection, collapsing a vein, etc.? A blood draw.

Of course, we had an extra consent form for them to sign, about the risks of the blood draw the IRB was helpfully insisting on.

From Hirsin on Hacker News:

My freshman year of college I proposed a study to our hospital’s IRB to strap small lasers to three-week-old infants in an effort to measure concentrations of a chemical in their blood. The most frustrating part was not the arcane insistence on ink and bolded study names, but the hardline insistence that it was impossible (illegal) to test the device before getting IRB approval – even on ourselves. Meaning that without any calibration or testing, our initial study would likely come back with poor results or be a dud, but we couldn’t find out until we filled out all the paperwork.

What is our country coming to when you can’t even attach lasers to babies anymore?

Some of the other stories were kind of cute. Dahud in the comments:

I’ve had exactly one interaction with an IRB – in 6th grade. My science fair project involved studying the health risks of Communion as performed in the Episcopal church. (For those unfamiliar, a priest lifts a silver chalice of port wine to your lips, you take a small sip, and the priest wipes the site with a linen cloth and rotates the chalice.)

Thing was, the science fair was being held by a Baptist University. The IRB was really not fond of the whole wine thing. They wanted me to use grape juice instead, in the Baptist fashion. I, as a minor, shouldn’t be allowed anywhere near the corrupting influence of the communion wine that I had partaken of last Sunday.

Of course, the use of communion wine was essential to the study, so we reached a compromise. I would thoroughly document all the sample collection and preparation procedures, and let someone of age carry out the experiment while I waited in the hall.

And of course James Miller is still James Miller:

Several forms I have to sign to do things at my college ask if what I will be doing will expose anyone to radiation. Although I’m an economist, this has caused me to think of experiments I could do with radiation such as secretly exposing a large number of students to radiation and seeing, years later, if it influences their income.

Along with these, a lot of other people were broadly sympathetic but thought that if I knew how to play the system a little better, or was somewhere a little more research-focused, things might have gone better for me. Virgil in the comments:

FWIW, I’m a graduate student in the Social Sciences. Our IRBs have the same rules on paper, but we get around it by using generic versions of applications with the critical info swapped out, or just ignoring them altogether. Though we don’t have to face audits, so…I’ve found that usually if you make one or two glaring errors in the application on purpose, the IRB will be happy to inform you of those and approve it when you correct them. They just want to feel powerful / like they’re making a difference, so if you oblige them they will usually let you through with no further hassle.

From Eternaltraveler in the comments:

Most of the bureaucracy you experienced is institutional and not regulatory. I have done research both in an institutional setting (turn around time at UC Berkeley=5 months to obtain ethics approval and countless hours sucking up to self important bureaucrats who think it’s their sacred duty to grind potentially life saving research to a halt over trivia they themselves know is meaningless), and as an entrepreneur and PI at a biotech startup (turn around time for outsourced IRB=5 days with reasonable and informed questions related to participants well being), where we also do quite a bit more than ask questions. FYI the kind of research I did at UC Berkeley that took 5 months for approval has absolutely no regulatory requirements outside of it.

And from PM_ME_YOUR_FRAME on the subreddit (who I might hunt down and beg to be my research advisor if I ever do anything like this again):

Amateur. What you do is you sweet talk the clinicians into using their medical judgement to adopt the form as part of their routine clinical practice and get them to include it as part of the patient’s medical records. Later… you approach the IRB for a retrospective chart review study and get blessed with waived consent. Bonus: very likely to also get expedited review.

And this really thorough comment from friendlygrantadmit:

I’m not an expert in IRB (although that’s kind of my point–getting to that), but I think your headaches were largely institutional rather than dictated by government fiat. Let me explain…

I used to be the grant administrator for a regional university while my husband was a postdoc at the large research university 20 miles away. Aside from fiscal stuff, I was the grants office, and the grants office was me. However, there was an IRB of longstanding duration, so I never had to do much other than connect faculty whose work might involve human subjects with the IRB Chair. I think I was technically a non-voting member or something, but no one expected me to attend meetings.

This was in the process of changing when I left the university because my husband’s postdoc ended and we moved. It was a subject that generated much bitterness among the small cadre of faculty involved. Because I was on my way out, I never made it my business to worry about nascent IRB woes. My understanding was that they had difficulty getting people to serve on the IRB because it was an unpaid position, but as the university expanded, they were going to need more and different types of expertise represented on the IRB. I can’t be more specific than that without basically naming the university, at which I was very happy and with which I have no quarrel. I never heard any horror stories about our IRB, and I would have been the first point person to hear them, so I presume it was fairly easy to work with.

Anyway, the IRB auditing stuff you outline is just insane. The institutional regulations pertaining to the audits were probably what generated the mind-numbing and arcane complexity of your institution’s IRB. Add in finicky personalities and you have a recipe for endless hassle as described.

So here’s the other thing to bear in mind: almost everyone in research administration is self-trained. I think there are a few programs (probably mostly online), but it’s the sort of field that people stumble into from related fields. You learn on the job and via newsletters, conferences, and listservs. You also listen to your share of mind-numbing government webinars. But almost everyone–usually including the federal program officers, who are usually experts in their field but who aren’t necessarily experts in their own particular bureaucracy–is just winging it.

Most research admins are willing to admit the “winging it” factor among themselves. For obvious reasons, however, you want the faculty and/or researchers with whom you interact to respect your professional judgment. This was never a problem at my institution, which is probably one reason I still have a high opinion of it and its administration, but I heard plenty (PLENTY) of stories of bigshot faculty pulling rank to have the rules and regulations bent or broken in their favor because GRANT MONEY, usually with success. So of course you’re not going to confess that you don’t really have a clue what you’re doing; you’re just puzzling over these regulations like so many tea leaves and trying to make a reasonable judgment based on your status as a reasonably well-educated and fair-minded human being.

What this means in practice is almost zero uniformity in the field. Your IRB from hell story wasn’t even remotely shocking to me. Other commenters’ IRB from just-fine-ville stories are also far from shocking. Since so few people really understand what the regulations mean or how to interpret them, let alone how to protect against government bogeymen yelling at you for failing to follow them, there is a wild profusion of institutional approaches to research administration, and this includes huge variations in concern for the more fine-grained regulatory details. It is really hard to find someone to lead a grants or research administration office who has expertise in all the varied fields of compliance now required. It’s hard to find someone with the expertise in any of the particular fields, to be honest.

There is one area in which this is not so much true, and that is financial regulations. Why? Well, for one thing, they’re not all that tricky–I could read and interpret them with far greater confidence than many other regs, despite having a humanities background. The other reason is that despite their comparative transparency, they were very, very widely flouted until the government started auditing large research institutions around 15-ish years ago.

I have a short story related to that, too–basically, when my husband started grad school, we would frequently go out to dinner with his lab group and advisor. The whole tab, including my dinner and that of any other SOs and all alcoholic beverages (which can’t be paid for with grant funds aside from narrow research-related exceptions), would be charged to whichever research grant because it was a working meal. I found it mildly surprising, but I certainly wasn’t going to argue.

Then the university got audited and fined millions of dollars for violations such as these and Found Religion vis-à-vis grant expenditures.

With regards to your story, I’m guessing that part of the reason the IRB is such a big deal is that human subjects research is the main type of research, so they are really, really worried about their exposure to any IRB lapses. However, it sounds like they are fairly provincial in that they aren’t connected to what more major research institutions are doing or how they handle these issues, which is always a mistake. Even if you don’t think some other institution’s approach is going to work for you, it’s good to know about as many different approaches as you can to know that you’re not some insane outlier as your IRB seems to be. As others have noted, it also sounds like that IRB has become the fiefdom of some fairly difficult personalities.

I already know how extensive, thorough, and helpful training pertaining to IRB regs is, which is not very. I remain deeply curious about the qualifications and training of your obviously well-intentioned “auditor.” My guess is she inherited her procedures from someone else and is carefully following whatever checklist was laid down so as not to expose herself to accusations of sloppiness or lack of thoroughness … but that is only a guess.

Even though I hate hearing stories like yours–there is obviously no excuse for essentially trying to thwart any and all human subjects research the way your IRB did–I am sympathetic to the need for some regulations, and not just because of Nazis and the Tuskegee Syphilis experiments. I’m sympathetic because lack of oversight basically gives big name researchers carte blanche to ignore regulations they find inconvenient because the institutional preference, barring opposing headwinds, will always be to keep researchers happy.

Some people thought I was being too flippant, or leaving out parts of the story. Many of them mentioned that the focus on Nazis overshadowed some genuinely horrific all-American research misconduct like the Tuskegee Syphilis Experiment. They emphasized that my personal experience doesn’t overrule all of the really important reasons IRBs exist. For example, tedwick from the subreddit:

So, I wrote out all of the ways in which Scott’s terrible IRB experience was at least in part self-imposed, and how a lot of the post was about stuff that’s pretty straightforward, but it was kind of a snarky comment. Not unlike his post, but you know, whatever. Long story short, I’ve done similar work (arranged a really simple survey looking at dietary behaviors in kids, another IRB-protected group) and had to interface with the IRB frequently. Yep, it can be annoying at times. But the reason they ask people like Scott whether they’re going to try anything funny with prisoners is because sometimes people like Scott are trying something funny with prisoners. Just because Scott swears that he’s not Mengele doesn’t mean that he’s not going to do something dumb a priori. As his experience with expedited review might indicate, sitting down with an IRB officer for maybe 30 minutes would have cleared up a lot of things on both sides.

Is there room for IRB reform? Sure! Let’s make the easy stuff easy, and let’s make sure IRB intervention is on actual substance. I’m with him on this. However, a lot of the stuff Scott is complaining about doesn’t fall into that category (e.g. “why do all the researchers have to be on the IRB!?”). I get that the post was probably cathartic for Scott to write, but there are plenty of great researchers who are able to navigate this stuff without all the drama. “Bureaucracy Bad” is a fine rallying cry and all that, but most of the stuff Scott is complaining about is not all that hard and there for a reason.

And kyleboddy from the comments:

Nazism isn’t the reason IRBs exist. Far worse. American unethical experimentation is, and omitting it is a huge error. Massive and bureaucratic oversight exists because American scientists would stop at nothing to advance the field of science.

The Tuskegee Syphilis Experiment is the landmark case on why ethical training and IRB approval is required. You should know this. This was 100% covered in your ethical training.

I get why IRB approval sucks. My Informed Consent forms get banged all the time. But we’re talking about consent here, often with disadvantaged populations. It pays to be careful.

Last, most researchers who need speed and expedited review go through private IRB organizations now because the bureaucracy of medical/university systems is too much to handle. Our private IRB that we engage with sends back our forms within a week and their fees are reasonable. Their board meets twice per week, not once per month. The market has solved at least this particular issue.

EDIT: Private IRBs do not care about nonsensical stuff like the Principal Investigator having an advanced degree or being someone of high stature. (For example, I am a college dropout and have had multiple IRB studies approved.) Only bureaucratic, publicly-attached ones do. That’s a very reasonable complaint.

A lot of these are good points. And some of what I wrote was definitely unfair snark – I understand they’ve got to ask you whether you plan on removing anyone’s organs; if they don’t ask, how will they know? And maybe linking to Schneider’s book about eliminating the IRB system was a mistake – I just meant to show there was an existing conversation about this. I definitely didn’t mean to trivialize Tuskegee, to say that I am a radical Schneiderian, to act like my single experience damns all IRBs forever, or to claim that IRBs can’t possibly have a useful role to play. I haven’t even begun to wade into the debate between the critics and proponents of the system. The point I wanted to make was that whether or not IRBs are useful for high-risk studies, they’ve crept annoyingly far into low-risk studies – to the detriment of everyone.

Nobody expects any harm from asking your co-worker “How are you this morning?” in conversation. But if I were to turn this into a study – “Diurnal Variability In Well-Being Among Office Workers” – I would need to hire a whole team of managers just to get through the risk paperwork and the consent paperwork and the weekly reports and the team meetings. I can give a patient twice the standard dose of a dangerous medication without justifying myself to anyone. I can confine a patient involuntarily for weeks and face only the most perfunctory legal oversight. But if I want to ask them “How are you this morning?” and make a study out of it, I need to block off my calendar for the next ten years to do the relevant paperwork.

I feel like I’m protesting a police state, and people are responding “Well, you don’t want total anarchy with murder being legal, do you?” No, I don’t. I think there’s a wide range of possibilities between “police state” and “anarchy”. In the same way, I think there’s a wide range of possibilities between “science is totally unregulated” and “scientists have to complete a mountain of paperwork before they can ask someone how their day is going”.

I dare you to tell me we’re at a happy medium right now. Go on, I dare you.

I regret to say this is only getting worse. New NIH policies are increasingly trying to reclassify basic science as “clinical trials”, requiring more paperwork and oversight. For example, under the new regulations, brain scan research – the type where they ask you to think about something while you’re in an fMRI to see which parts of your brain light up – would be a “clinical trial” since it measures “a health-related biomedical or behavioral outcome”. This could require these studies to meet the same high standards as studies giving experimental drugs or new gene therapies. The Science Magazine article quotes a cognitive neuroscientist:

The agency’s widening definition of clinical trials could sweep up a broad array of basic science studies, resulting in wasted resources and public confusion. “The massive amount of dysfunction and paperwork that will result from this decision boggles the mind” and will hobble basic research.

A bunch of researchers from top universities have written a petition trying to delay the changes (if you’ve got an academic affiliation, you might want to check it out and consider signing). But it’s anyone’s guess whether they’ll succeed. If not, good luck to any brain researcher who doesn’t want to go through everything I did. They’ll need it.


Links 8/17: Exsitement

Hackers encode malware that infects DNA sequencing software in a strand of DNA. Make sure to run your family members through an antivirus program before ordering genetic testing.

Every time I feel like I’ve accepted how surprising optical illusions can be, somebody comes out with an even more surprising one that I have to double-check in Photoshop to confirm it’s really illusory.

Effective altruist organizations estimate it may cost about $7,500 in efficient charitable donations to save one life. But the median American believes it only takes about $40. This and more from a survey on charity discussed on 80,000 Hours.

OpenAI creates AI that can beat some of the best human players in a limited version of the complex online multiplayer game DOTA 2. A few days later, Reddit’s DOTA2 community develops strategies for defeating the AIs. Human creativity wins again!

New method of killing bacteria, a “star-shaped polymer [that rips] apart their cell walls” may be a breakthrough in the fight against antibiotic resistance.

Did you know: Pablo Picasso was once questioned by police who suspected he had stolen the Mona Lisa.

Study: Asian-Americans are treated differently due to their weight – ie fat Asians are viewed as more likely to be “real” Americans.

The Michigan Department Of Corrections’ list of books prisoners may not read (h/t gabrielthefool). Includes atlases (providing maps raises escape risk), textbooks on making webpages with HTML (what if they learn to hack?), and all the Dungeons and Dragons manuals (marked as “threat to order and security of institution”, for some reason). “I shouldn’t be astounded at the level of control and dehumanization in such a list, but somehow I am.”

From the jury selection hearings for the Martin Shkreli trial. I refused to believe this was real at first, but I’ve seen it in multiple credible sources and I guess I’m satisfied. And Ross Rheingans-Yoo spoils our fun and reminds us that actually all of this is deeply disappointing.

LiveScience reaches Peak Rat Study: Why Men Love Lingerie: Rat Study Offers Hints. “Just as lingerie turns on human males, tiny jackets do the same for male rats, a new study finds.”

Did you know: Dwayne “The Rock” Johnson’s Twitter was the first source to report on Osama bin Laden’s death.

I assume this is just lawyers amusing themselves, but technically a New Zealand law could disqualify all Australians from serving in their own Parliament.

Annie Dillard’s classic essay on a solar eclipse. I wanted to write something serious and profound about my eclipse experience, but I gave up after realizing there was no way I could match this.

The mountains of Victoria, Australia, include Mount Useful, Mount Disappointment, Mount Terrible, Mount Buggery, and Mount Typo.

Voting system theorists use a voting system to systematically vote on voting systems, determining that among 18 options approval voting is best and plurality voting (what the US uses) is worst.
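For anyone who hasn’t run into the distinction: under plurality each voter names exactly one candidate, while under approval voting each voter approves as many candidates as they like, and the most-approved candidate wins. Here’s a toy sketch – my own made-up ballots, nothing from the study – of how the two rules can crown different winners from the same electorate:

```python
# Toy comparison of plurality vs. approval voting.
# The ballots below are hypothetical, invented purely for illustration.
from collections import Counter

# Plurality: each voter names exactly one candidate.
plurality_ballots = ["A", "A", "B", "C", "C", "C"]

# Approval: each voter approves any subset of candidates.
# Here the A-voters and one C-voter also find B acceptable.
approval_ballots = [{"A", "B"}, {"A", "B"}, {"B"}, {"C", "B"}, {"C"}, {"C"}]

def plurality_winner(ballots):
    # The candidate with the most single votes wins.
    return Counter(ballots).most_common(1)[0][0]

def approval_winner(ballots):
    # The candidate approved on the most ballots wins.
    return Counter(c for ballot in ballots for c in ballot).most_common(1)[0][0]

print(plurality_winner(plurality_ballots))  # C (3 of 6 votes)
print(approval_winner(approval_ballots))    # B (approved on 4 of 6 ballots)
```

Which is roughly the intuition for why approval voting tends to score well: a broadly acceptable candidate (B) can beat a merely polarizing one (C).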

Julia Galef’s List Of Unpopular Ideas About Social Norms. Number 3: “It should not be considered noble to remain anonymous when donating to charity, because publicizing one’s donation encourages other people to donate.”

New Yorker: Is There Any Point To Protesting? This seems like a really important question, especially given how hard it is to trace whether any recent protests have resulted in real change. The article discusses it briefly (and presents some evidence against), but then shifts topics to a less interesting (though still worth reading) tangent about whether modern decentralized protests work worse than 60s-style highly-regimented ones.

I’ve mentioned a bunch of times on here that studies show going to a therapist isn’t necessarily any better than getting a good therapy self-help workbook. Now, unsurprisingly, a meta-analysis of these studies shows the same thing (paper, popular article).

Just learned 80,000 Hours has a podcast. This week’s topic: pandemic preparedness. I got to talk to some biosecurity researchers at EA Global. The consensus was that we should all be really scared of bioterrorism, but that they can’t explain why – sharing their list of Top Ten Easy Ways To Create A Global Pandemic might not be the best way to promote public safety. If you want to work on this cause and have (or can get) relevant skills, contact 80,000 Hours at the link on their website.

A cartoon from a 1906 newspaper’s Forecasts For 1907 (h/t Steve Omohundro).

I’d previously heard the good news that, even though inequality was rising within developed countries, at least global inequality was on its way down. This good news may no longer be true.

Did you know: Happy, hapless, perhaps, mishap, happen, and haphazard all come from the same Norse root “hap” – meaning “chance”.

Darktka does a really good nootropics survey – way better than mine – but with mostly expected results. Their tl;dr: “Most substances seem to have no or only slight effects when rated subjectively. Most substances with substantial effects were already well-known for their effects, some of them are prescription drugs or pharmaceuticals.” Do note how selegiline and l-deprenyl often get very different results, sometimes barely within each other’s confidence intervals, despite being different names for the same chemical.

GoogleMemoGate update: Fired memo-sender James Damore has set up a Twitter account at @Fired4Truth with 78,000 followers and is well on his way to receiving $60,000 from crowdfunding. Part of me is optimistic; maybe people will feel less afraid if there’s an expectation that other people will look after them if they’re fired. But another part of me is worried that this creates a strong financial pressure for martyrs to transform themselves into sketchy alt-right-linked celebrities obsessed with being politically incorrect – which will retroactively justify firing them, and leave anyone who defended them with egg on their face. In some ways this is a difficult debate without a clear answer. In other ways – Fired4Truth?! Really?! You really couldn’t think of a less sketchy-sounding brand?!

Related: Quillette has an article by four domain-expert scientists who support some of the Google memo’s claims; their site then gets DDoS-ed and taken down. It seems to be back online now. Remember they’re dependent on reader donations.

Vs. Traffic Court. “Traffic laws are supposed to be about safety. But many of us feel strongly that they’re mostly about money. And in that short trial, I was able to make that point…”

Viral joke going around Chinese Twitter about what they would tell Chairman Mao if he came back today, translated by Matt Schrader.

Finally, AI learns to do something useful: remove watermarks from stock images.

I like Venkatesh Rao’s work because it gives me a feeling of reading something from way outside my filter bubble. Like it’s by a bass lure expert who writes about bass lures, secure in the knowledge that everyone he’s ever met considers bass lures a central part of their life, and who expects his readers to share a wide stock of bass-lure-related concepts and metaphors. But Rao writes about modern culture from a Bay Area techie perspective, which really ought to be my demographic. I guess filter bubbles extend along more dimensions than I thought. Anyway, everybody’s talking about The Premium Mediocre Life Of Maya Millennial, and people who know more about bass lures than I do assure me it’s really good (it also says nice things about me!).

Spotted Toad: Good And Bad Arguments Against The Obamacare Opiate Effect – ie the claim that some of the increased opiate-related mortality is due to easier access via Obamacare.

Would an ancient Roman dressed in 50s AD clothing look hopelessly out of style to an ancient Roman in the 60s AD? r/AskHistorians on fashion trends in the ancient world.

Big econ study shows that the rates of profit have skyrocketed over the past few decades, adding a twist to standard labor vs. capital narratives. Likely related to monopolies/oligopolies and restriction of competition. Takes from Tyler Cowen, Robin Hanson, Karl Smith, and Noah Smith.

In the aftermath of Hurricane Harvey, cell phone carriers fight the government over proposed changes to emergency alert systems. My position might be biased by my eclipse trip, when the state of Oregon decided it was necessary to send out Statewide Emergency Alerts telling people not to stare at the sun.

Trump’s cybersecurity advisors resign, cite both bad cybersecurity policy and general moral turpitude. Does Trump even have any advisors left at this point?

In some parts of the world, snake oil remains a popular folk treatment, and you can even buy it on Amazon.

I guess I can’t get away without linking McSweeney’s article on Taylor Swifties.

A substance doesn’t have to be a liquid or a gas to behave as a fluid. For example, have you considered a fluid made of fire ants? (h/t fuckyeahfluiddynamics.tumblr.com)

Samzdat finishes its excellent series on metis, narcissism, and nihilism with a two-post summary/review: The Uruk Machine, The Thresher.

New study in the Lancet (study, popular article) finds that saturated fat in moderation might be good for you, carbs potentially worse. I can’t bring myself to really look into this, but the fundamental questions are always where you started and what you’re trading off against. If someone eats 100% sugar and switches some of their sugar for a little saturated fat from meat, that’s good. If someone eats 100% donuts and switches some of their donuts for a little bit of carbs from fruit, that’s also good. I’m not sure how seriously this study considered these things, but I would warn against taking it as some sort of final “SCIENCE SHOWS FAT GOOD, CARBS BAD, EVERYONE GO HOME NOW.”

QZ: All The Wellness Products Americans Love To Buy Are Sold On Both Infowars and Goop. Infowars is super-Red-Tribe and Goop is super-Blue-Tribe, so it’s fun to compare the way they pitch the same items. See eg the herb advertised on Goop as “Why Am I So Effin’ Tired” vs. on Infowars as “Brain Force Plus”. The former advertises that it “replenishes nutrients you may be lacking…sourced from ancient Ayurveda”, vs. the latter “fights back [against] toxic weapons…with the next generation of advanced neural activation”.

The first written use of the f-word in English is exactly what you expected.
