Three years ago, in Going Loopy, I wrote:
If the brain had been designed by an amateur, it would enter a runaway feedback loop the first time it felt an emotion. Think about it. You see a butterfly. This makes you happy. Being happy is an unexpected pleasant surprise. Now you’re happy that you’re happy. This makes you extra happy. Being extra happy is awesome! This makes you extra extra happy. And so on to as much bliss as your neurons are capable of representing. In the real world, either those feedback loops usually don’t happen, or they converge and stop at some finite point. I would not be surprised to learn that a lot of evolutionary innovation and biochemical complexity goes into creating a strong barrier against conditioning on your own internal experience.
“Evolutionary innovation and biochemical complexity”? Haha no, people are just too distractable to keep having the same emotion for more than a couple seconds.
I get this from Leigh Brasington’s excellent Right Concentration, a Buddhist perspective on various advanced meditative states called jhanas. To get to the first of these jhanas (there are eight in all), you become really good at concentration meditation, until you can concentrate on your breath a long time without getting distracted. Then you concentrate on your breath for a long time. Then you take your one-pointed ultra-concentrated mind, and you notice (or generate, or imagine) a pleasant feeling. This produces the first jhana, which the Buddhist scriptures describe as:
One drenches, steeps, saturates, and suffuses one’s body with the rapture and happiness born of seclusion, so that there is no part of one’s body that is not suffused by rapture and happiness.
Brasington backs this up with his own experience and those of other meditators he knows. The first jhana is really, really, really pleasurable; when you hear meditators talk about achieving “bliss states”, it’s probably something like the first jhana.
And here’s the book’s description of why it happens:
When access concentration is firmly established, then you shift your attention from the breath (or whatever your meditation object is) to a pleasant sensation. You put your attention on that sensation, and maintain your attention on that sensation, and do nothing else…
What you are attempting to do is set up a positive feedback loop. An example of a positive feedback loop is that awful noise a speaker will make if a microphone is held too close to it. What’s happening is that the ambient noise in the room goes into the microphone, is amplified by the amplifier, and comes out the speaker louder. It then reenters the microphone, gets amplified even more, comes out louder still, goes into the microphone yet again, and so on. You are trying to do exactly the same thing, except, rather than a positive feedback loop of noise, you are attempting to generate a positive feedback loop of pleasure. You hold your attention on a pleasant sensation. That feels nice, adding a bit more pleasure to your overall experience. That addition is also pleasurable, adding more pleasure, and so on, until, instead of getting a horrible noise, you get an explosion of pleasure.
The book doesn’t come out and say that the other seven jhanas are the same thing, but that seems consistent with the descriptions. For example, the fourth jhana is a state of ultimate calm. Seems like maybe if you become calm, then being so calm is kind of calming, and that’s even more calming, and so on until you’ve maxed out your mental calmness-meter.
And the explanation of why this doesn’t happen all the time is that non-meditators just can’t concentrate hard enough. A microphone-amp system that turns on and off a couple of times each second will never get a really good feedback loop going. A mind that’s always flitting from one thing to another can’t build up enough self-referentiality to reach infinite bliss.
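(Here’s a toy sketch I made up to illustrate the point; nothing in it comes from Brasington, and the gain, the attention duty cycle, and the saturation ceiling are all invented numbers. A loop that stays closed runs away until it hits a ceiling; a loop that keeps getting interrupted never builds up.)

```python
# Toy sketch (my own invention, not Brasington's): a "pleasure signal" that gets
# amplified whenever attention stays on it, and decays whenever attention wanders.
def run_loop(steps, gain, attention_duty_cycle, ceiling=100.0):
    signal = 1.0  # the initial pleasant sensation
    for t in range(steps):
        attending = (t % 10) < attention_duty_cycle * 10  # crude on/off attention
        if attending:
            signal = min(signal * gain, ceiling)  # loop closed: amplify, capped at saturation
        else:
            signal = max(signal * 0.5, 1.0)       # loop open: pleasure decays back to baseline
    return signal

print(run_loop(steps=100, gain=1.2, attention_duty_cycle=1.0))  # runs away until it hits the ceiling
print(run_loop(steps=100, gain=1.2, attention_duty_cycle=0.3))  # never builds up: distraction wins
```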
There is also the fact that neurons are physical systems, not mathematical abstractions. Even if something that looked like this kind of feedback loop got started, it would end up depleting your neurotransmitters and other limited neural resources in ways that would eventually let signal levels drop back to a less extreme level.
Can’t people in mania feel really great for long stretches of time? You wouldn’t want the annoying mania part here, but the capacity for a sustained feeling of joy could exist and be reachable in other ways.
Concentration and exclusion of everything except joy could play interesting tricks with the self-reported level of intensity, after all.
A feedback loop isn’t necessarily the same neurons the whole time. It can be a pattern moving between different areas and being copied.
Maybe the feedback is not more pleasure neurons firing, but fewer displeasure neurons firing? Or a combination of both?
Are you familiar with Brasington’s meditating MRI and blood test results?
I can’t remember if that’s actually what he did, but I think it was something like that.
Glad to see a review of MCTB here — I’ve been curious, but have a huge backlog of jhana and meditation-in-practice texts to get through and never did read it. Sounds like one of the two or three works that really hit the core of what (I think) the most fundamental/important teachings really were (jhanic, experiential, if not effable then at least reproducibly ineffable…).
Hi, this doesn’t happen in the brain because it is a hierarchical negative feedback control system, not a positive one. Positive feedback loops in nature are typically constrained by negative feedback loops, when they do exist. Most runaway states are caused by the conflict between two negative feedback loops, and this is a consequence of trying to organise multiple negative feedback loops within the same wider system. This conflict is resolved through reorganisation of the hierarchy. Anyway, this is how we see it from the perspective of Perceptual Control Theory (pctweb.org).
This seems far too simplistic. There are many positive feedback loops in the nervous system (e.g., the cGMP increase that results from rod/cone hyperpolarization, itself due to a cGMP decrease, allowing for further hyperpolarization). A bit confusing, but here’s a diagram:
Light activates opsin → cGMP levels reduce → CNG channels close → neuron hyperpolarizes and calcium ion levels reduce → cGMP levels increase → opsin re-opens for further activation
This is a convergent positive feedback loop, but it is definitely a positive feedback loop.
Another example: Long-term potentiation
Sender neuron tends to fire before receiver neuron → synaptic bond increases between these neurons → sender neuron has greater impact on receiver neuron → sender neuron more likely to fire before receiver neuron
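For what it’s worth, here is a toy Hebbian sketch of that second loop; the firing probabilities, learning rate, and weight cap are all invented for illustration rather than taken from any real LTP model.

```python
import random

# Toy Hebbian loop (illustrative numbers only): a stronger synapse makes the
# receiver more likely to fire right after the sender, which strengthens the
# synapse further -- positive feedback, here bounded by a cap on the weight.
def simulate_ltp(steps=1000, learning_rate=0.05, w_max=1.0):
    w = 0.1  # initial synaptic weight
    for _ in range(steps):
        sender_fires = random.random() < 0.5
        # receiver fires more readily when the sender fires and the weight is strong
        p_receiver = 0.1 + 0.8 * w if sender_fires else 0.1
        receiver_fires = random.random() < p_receiver
        if sender_fires and receiver_fires:
            w = min(w + learning_rate * (1.0 - w), w_max)  # potentiate, saturating at w_max
    return w

print(simulate_ltp())  # the weight climbs toward its cap rather than growing without bound
```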
So the idea that the brain is fundamentally a negative-feedback system seems like a broad overgeneralization, even though there are many examples of negative feedback, especially in control (e.g., cerebellar motor control).
PS: Here is a great experimental example of what happens when you artificially transform the natural negative feedback loop controlling the perception of motion during locomotion in chicks into a positive feedback loop: https://link.springer.com/article/10.3758/BF03200092
I’d like to see how well people would do at that puzzle.
There’s a fictional version of it in one of the classics of pulp science fiction, E.E. “Doc” Smith’s Lensman series. In a space battle, someone is firing missiles made of a novel antimatter technobabble that behaves in the opposite way to normal matter in a tractor beam. They fire a missile of it at the bad guys’ spaceship, and the bad guys respond by trying to push it away, which draws it closer; their rigidly authoritarian culture and training makes them too inflexible to realise that they need to reverse the polarity of the tractor beam. The good guys, in contrast, are able to use their initiative and respond successfully to the novel situation.
That puzzle is a metaphor for lots of human social behavior.
Here is an elaborate flash game version of it: Simian Interface.
I’ve heard this before, with people quoting various time-frames. Is it really a thing? If so, where can I learn more?
‘Cause it would be ace if the feelings involved in depression only last a few seconds, or if I could be distracted out of anxiety. I’m wondering if perhaps different emotions have different half-lives? Or maybe there’s a meaningful distinction between moods and emotions that matters here?
Yeah, that’s a fair point. Personally when I try to distract myself from anxiety or stress by doing something else, it can work to some extent, but it feels like the anxiety is still there under the surface.
Personally, and again this is just me, but I’ve come to the conclusion that if I am stressed or anxious about something real that the best thing to do is to try to take some action to deal with the underlying problem, even if it’s mostly symbolic. If I’m stressed about money, I could play video games and try to not think about it, but I’ll still be stressed; it’s better to do something about it, even if it’s just “look for coupons to cut out to save a few bucks on my next grocery trip” or something, because if I feel like I’m doing something about the problem then that actually makes the stress go away for a while. That’s more for stress or anxiety that has a real cause though, I know depression often doesn’t have a real cause you can address.
I think the counterargument is that negative feelings do only occur for a few seconds, but when you’re in a depressive state they reoccur at such a high rate that your memory of your experience for the day is “I was fucking miserable, please god someone help me.” Because when you’re depressed it’s like every thought is an attack that stirs up some negative emotion or sentiment, and every desire or action is squelched by some overriding rejection of all that is. Everything looks like shit. But if you really pay attention to every single moment, it’s not like any single annoyance or negative thought lasts more than a few moments. But there’s probably another one following right on after the last one.
I think part of the point of meditation is to break down your “mood” and observe the thoughts/feelings/etc. that occur on the moment by moment scale. And hopefully observing can be the first step in changing, say if you notice yourself falling into a negative thought pattern and then consciously attempt to change your thinking to something at least more neutral. Which also sounds a lot like some of the CBT techniques.
I guess I don’t see a meaningful distinction between “emotions can last for hours” and “emotions only last a few seconds, but you can have the same emotion recurring for hours”. Either way, at the end of those hours I’ll be able to say “the only emotion I’ve felt since lunch was X”.
What does treating emotions as sequences of discrete feelings bring to the table that wasn’t already there when considering them as continuous over varying lengths of time?
The model in The Mind Illuminated – which AFAIK has empirical support, but I don’t have the chance to dig up the references right now – is that the brain is actually doing something like multithreading in computers: cycling through lots of different strands of sensory experience, thought, emotion, etc., so quickly that you don’t notice them switching without extended practice. So something like depressive feelings really do only last for a brief while, but they keep coming back again. (And often, people can be distracted out of anxiety etc.! You can see this most clearly with small children, but it works with adults too; that said, if there’s something around to constantly remind the person about why they were anxious, the anxiety is going to keep coming back.)
Achieving jhanas has to do with reducing the amount of “computational time” that the non-pleasure strands get, until the pleasure strands make up most of your experience during that moment.
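A cartoon of what I mean, with the strand names and the numbers invented for illustration (this is my own toy picture, not something from the book):

```python
# Cartoon of the "multithreading" picture (toy numbers, not from The Mind Illuminated):
# experience cycles rapidly through strands, and what you feel over a stretch of time
# is roughly the share of moments each strand gets.
def felt_experience(strand_weights, moments=1000):
    """strand_weights: relative share of the moment-to-moment cycle each strand captures."""
    total = sum(strand_weights.values())
    return {name: round(w / total * moments) for name, w in strand_weights.items()}

ordinary = {"breath": 1, "planning": 3, "worry": 3, "pleasure": 1, "misc": 2}
jhana = {"breath": 1, "planning": 0.1, "worry": 0.1, "pleasure": 8, "misc": 0.3}

print(felt_experience(ordinary))  # pleasure gets a small slice of the moments
print(felt_experience(jhana))     # pleasure strands dominate the cycle
```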
Thanks, I’ve grabbed the book and I’ll poke around the references.
Oh, sorry; I’m not sure if the book actually had much in the way of convincing references to empirical work, I meant that I’ve seen work that supports its model. (though the book is definitely worth reading in any case!)
I definitely continue to feel bad long after the trigger happened, and quite often after I forget what the trigger was. This is actually really annoying for doing CBT to my emotions, because it takes a lot of detective work to figure out what the hell I’m upset about.
I don’t think your “Going Loopy” explanation made that much sense to begin with. In the simplified model, I imagine the way happiness works is that there are certain signals in your brain that indicate that things are going well (whatever that means), and your brain responds to these signals by producing a “happiness” signal. But why should the “happiness” signal itself be one of the things used to recognize wellness? Sure, the “happiness” signal correlates with wellness, but it can only be as good a signal as whatever inputs it originally came from, so once the mechanism for producing the “happiness” signal from inputs is in place, there is no additional gain in accuracy from producing “happiness” from more “happiness”. Even if the “happiness” signal does induce the production of more “happiness” signal, I expect this to be a weak effect that eventually reaches a fixed point rather than an unstable feedback loop, for exactly the reason you state: if it were an unstable feedback loop, then your brain could easily reach extreme levels of the “happiness” signal entirely disconnected from any sort of wellness the “happiness” signal is supposed to indicate, and so I expect your brain to simply not do that.
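To make the fixed-point intuition concrete, here is a toy linear model (my own illustration, not something claimed in the post): let s be the wellness-driven input and k the fraction of the “happiness” signal that feeds back into itself.

```latex
% Toy linear model (illustrative only): s is the wellness-driven input,
% k is the fraction of happiness fed back into itself.
\[
  h_{t+1} = s + k\,h_t
  \qquad\Longrightarrow\qquad
  h_t \to h^{*} = \frac{s}{1-k} \quad \text{for } 0 \le k < 1 .
\]
% A weak self-feedback only rescales happiness by a constant factor of 1/(1-k);
% a runaway loop requires k >= 1.
```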
This is true according to the Reinforcement Learning paradigm, which is a fairly dominant view of how positive and negative reinforcement works.
Notice how easy it is to map these feedback loops onto the top-down/bottom-up feedback of the Predictive Perception model. That seems to match up nicely to the frequently-reported description of how the top-level “model” of pleasure/calmness/etc. suffuses all parts of the body – suggesting that the feedback loop is the meditator’s conscious ability to impose a strong top-down model down through the layers.
The idea of “Feedback Enlightenment” is so universally encompassing as to encourage us to view everything as a “Feedback Enlightenment Loop”, including, for example, entire novels and entire historical epochs. As Homer Simpson puts it:
And yet, the quantitative reductions that are a defining characteristic of control theory sensu stricto are (at present) entirely lacking from the “Feedback Enlightenment Loop” worldview.
To borrow a previously quoted phrase from SF “surf noir” author Kem Nunn, Feedback Enlightenment is, in the above two senses, “entirely plausible and hilariously wrong”. Until these ideas are quantitatively and actionably fleshed-out, a blend of hope and humor will have to sustain us.
Persistent bliss states are not commonly experienced because they are maladaptive: if you are always happy then you have little incentive to put effort into doing something great.
For instance, when you were in Michigan, toiling as an intern, far from your friends, you were presumably not very happy, and you wrote great posts such as “The Control Group Is Out Of Control” and “Meditations on Moloch”. Now that you are a full doctor living in a San Francisco commune with your friends, you write hippie posts about jhanas.
Good for you, I suppose, not so good for your audience.
One of my growing fears is that Scott is going to vanish down one of those rat-holes of weirdness that he describes in his post about “why do people who take a lot of psychoactive drugs get so weird”, only without quite so many psychoactive drugs and with a lot more meditation, at least at first, and that this will be a great loss for the world. One of the least bad of the bad outcomes is that he turns into Yet Another of those Grinning Yogis talking up his Meditative Retreat and Book, the kind that already plaster too much surface area of too many community bulletin boards in too many cafes.
This is sort of the reverse of the hypothetical “What if a Time Traveler gave Edgar Allan Poe some good anti-depressants? It would be good for that man, and a loss for mankind.”
Nah Scott’s always been a weirdo. Remember the cactus person thing? I love it.
I remember the cactus person thing. One of my fears is that one of these days, Scott is going to get out of the car, and what drives the car back will be an extradimensional horror that smiles all the time, and stops writing Scott’s essays.
It does not seem obvious to me that depressed people write better fiction than nondepressed people. Perhaps if Poe had had an antidepressant he would have written even more fiction and poetry of equal quality.
They write *different* fiction.
There are more dimensions of interest than just “better (for me)” and “worse (for me)”.
“Meditations on Moloch” was preceded by such classic, much-cited blog posts as “Weird Psychiatric Ads of the Seventies”, “Some Antibiotic Stagnation” and “The Other Codex” (the content of that post is that Scott has bought a copy of a weird book and he is happy about it). I suspect you are comparing the most memorable posts of 2014 to a random sample of posts in 2017.
Which memorable posts from 2017 should I compare to?
The ones you remember in 2020.
From my limited experience of the first Jhana it’s not unalloyed bliss; for me it’s almost painfully intense and accompanied by involuntary twitching and muscle spasms. According to Brasington this is a common experience.
From Brasington’s book and MCTB, the Jhanas involve a chain of concentration: you focus on one perception to get to Jhana N, and once that dominates there will be a specific secondary perception within it, which you focus on to get to Jhana N+1.
If you look at Brasington’s web site, he also does some hand-wavy hypothesizing about the neurochemical correlates of these states.
Brasington’s book is by far the best I’ve found, and is super practical. Really the issue is putting in enough time to get Concentrated enough; I’ve only achieved this sporadically or on retreat. Working on it though!
I agree both with this and with your original explanation. On the one hand, this isn’t meant to happen to people, so your first explanation is right: the brain is programmed not to let this happen. On the other hand, evolution didn’t take into account careful meditative practices, so the “distraction” explanation also corresponds with my experience of meditation.
Scott’s preparing to write buddhist fiction.
I sense it.
This sensation makes me happy.
Which in turn,
The definitive Buddhist/Hinduist fiction is IMO Lord of Light, by Roger Zelazny. It would be difficult, if not impossible, to beat that.
Not buying it. A butterfly may be a pleasant surprise and result in happiness. However, happiness at seeing a butterfly is not a pleasant surprise. Therefore there is no happiness feedback and no positive loop.
Also just mathematically speaking, it is pretty easy to come up with a monotonically increasing series that converges to a finite number.
I wonder if the strength of specific feedback loops and the predictive processing model have any explanatory power for each aspect of the Big 5 personality traits.
This is a little off-topic, but since you seem interested in meditation these days I thought I would share my experience with meditation, which touches on a benefit I’ve sometimes seen advertised but which you hadn’t discussed. I meditate to make better decisions and to have better intuition. This seems somewhat related to the models you’re interested in.
My family is too imaginative, and sometimes we act on light delusions. We are all prone to over-optimism, jumping to conclusions, trusting people too much, getting involved with quixotic campaigns and businesses, etc.
Anyway, when I was about 23 or 24 I tried meditation and noticed that it made my common sense feel stronger. Like if my mind were pictured as a sort of council with different “people” chiming in, then meditation made the homunculus standing for “common sense” turn up his microphone. I don’t mean to say I gained magical insights, just I was able to listen to facts my intuition was already telling me, and thereby to make fewer obvious-in-retrospect, unforced errors.
Today I am an academic, so I often have to decide what ideas to pursue, which people to collaborate with, how deep to go, etc. When I have been meditating regularly for a while, I find it much easier to limit project scope, turn people and opportunities down, intuit promising leads and write quickly.
Recently in a Vox article, Yuval Harari (the Sapiens guy) made some similar points:
I am really interested in how meditation helps judgement from the perspective of a neurological model. Maybe it’s related to resting certain brain systems or strengthening certain connections. But I certainly don’t agree with Harari’s theory of causation. He says “focus on breath” helps you to “focus on what’s important.” I believe he’s succumbing to a coincidence about how we use the word “focus”; physically focusing on the breath is not, in reality, that similar to choosing the leads with the highest expected value and ignoring the others (i.e., focusing on what’s important).
(Note there was also a Penn study specifically about mindfulness and the sunk cost fallacy.)
Another angle on how meditation helps judgment, it seems to me, is that it lowers emotional reactivity. We humans are prone to a lot of emotional reasoning (I feel anxious about going to this party, so that must be because something bad is going to happen there, etc.). Seeing more clearly how we proliferate the “million different tiny stories” in our minds, and how often we do it, seems to help imbue the stories with less reality.
I like the image of your common sense guy tapping at his microphone and clearing his throat to speak.
My guess is there are times though when the flights of fancy are also useful?
“Another angle on how meditation helps judgment, it seems to me, is that it lowers emotional reactivity.”
Yes, this seems right to me, at least for part of it. But in addition I would say there is a distinction between imagination and reality that is important. If I have not been meditating regularly, I am prone to act on ideas with no evidence to support them. I’m reminded of the Mencken quote: “The most costly of all follies is to believe passionately in the palpably not true. It is the chief occupation of mankind.”
For example, my father is a nice guy who owns a business. He does a good job running the business, except that he regularly hires people who are down on their luck, who often turn out to be totally incompetent, shifty or, in one case, mentally handicapped. I do get that he has an immediate, strong emotional reaction to the idea of helping someone. But he follows through for a much longer time than an emotional reaction is usually sustained (except a genetic one like love or fatherhood). I think, rather, that he lives in a world of imagination; the whole time he is receiving information about the person’s fitness for the job, he ignores it because he is instead getting his perception from a totally imagined vision. Later, when things go south, he is not only disappointed (as would be the case with an emotional reaction) but totally bewildered (as would suggest he was just perceiving false information the whole time).
Somewhere down thread I have some (way too long) text trying to put meditation into the predictive processing paradigm. I hypothesize that concentrating on the breath without ignoring it is possible only if the prediction of “what it’s like to breathe” is extremely precise, forcing attention to be paid to the top-down vs. bottom-up mismatch all the time, because the predictive precision is so high that it never exactly matches the sensory information.
This sorta jibes with what you’re describing. If a lack of intuitive sense is caused by making overly vague predictions, which then match sensory information that would otherwise act as a cue to stop and think something through, then an exercise that makes predictions more precise will give you exactly the behavior that you’re describing.
“If a lack of intuitive sense is caused by making overly vague predictions, which then match sensory information that would otherwise act as a cue to stop and think something through, then an exercise that makes predictions more precise will give you exactly the behavior that you’re describing.”
That sounds right to me. “Overly vague predictions” is a lot like “failing to think something through.” Also, one mental exercise shown to help people plan is to imagine the outcome and then backtrack what steps might have led up to it. This is another way of gaining specificity.
One quibble: If we use Kahneman-esque metaphors, then things like visualization exercises are purely a system 2 function. The trick is to get bad system 1 predictions kicked over to system 2 so they can be examined.
So there’s a major difference between an “overly vague prediction” (a failure of system 1) and “failing to think something through” (a failure of system 2). The system 1 failure is that it’s not triggering system 2 when it should. There are a variety of possible system 2 failures, but the one that’s most germane here is probably erroneously deciding that all is well and that system 1 can handle things from here, when it can’t yet.
I see your point. Yes that’s accurate.
For those of us who are just plain grouchy, the brain is dominated by negative feedback loops rather than positive. I’ve never meditated, though; trying is boring and leaves me irritated.
There’s probably prosaic physical reasons a positive feedback loop couldn’t be sustained; neurotransmitter depletion, receptor saturation, that sort of thing.
Seems like a predictive processing explanation of this might be useful.
In typical PP fashion, the more the underlying bottom-up state confirms the top-down prediction, the more the system damps itself out, going into “nothing to see here, all is well” mode. So if an ordinary, untrained person concentrates on his breath, the prediction is confirmed, and attention shifts to something else.
That would imply that meditation, which requires an atypical use of attention, is able to maintain the top-down prediction in the face of constant confirmation. On some of the other threads, I’ve hypothesized two things:
1) Attention is triggered by surprise (i.e., a mismatch between prediction and sensory stimulus).
2) Predictions become less precise in the presence of so much surprise that the attention system becomes saturated, and more precise in the absence of surprise (allowing unused attentional resources to be devoted to improving perceptual and/or motor performance).
Under these hypotheses, in the absence of any stimuli other than the sensory input being attended to, the top-down prediction becomes ultra-precise, essentially making the bottom-up stimulus continually surprising. However, since there are no other demands on the attention system, the precision doesn’t degrade.
So: concentrate on the breath, with high precision, in the absence of other stimuli, and the breath can become arbitrarily surprising without degrading the precision. Once you learn to do that, moving attention to other stimuli will work equally well, as long as nothing else is surprising other parts of the prediction/sensory system.
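Here is a rough toy simulation of hypotheses 1 and 2; every number in it (the threshold, the noise level, the precision drift rates) is invented for illustration.

```python
import random

# Toy simulation of the two hypotheses above (all parameters invented):
# attention fires when precision-weighted surprise crosses a threshold, and
# precision drifts up when attention is idle and down when it is saturated.
def meditate(steps=200, other_stimuli_rate=0.0, threshold=1.0):
    precision = 1.0
    breath_captures_attention = 0
    for _ in range(steps):
        breath_error = random.gauss(0, 0.2)          # breath never exactly matches its prediction
        breath_surprise = precision * abs(breath_error)
        other_surprise = 5.0 if random.random() < other_stimuli_rate else 0.0
        if other_surprise > threshold:
            precision = max(precision * 0.8, 0.5)    # attention saturated: predictions get vaguer
        else:
            precision = min(precision * 1.05, 20.0)  # quiet: predictions sharpen
            if breath_surprise > threshold:
                breath_captures_attention += 1       # the breath stays "surprising"
    return round(precision, 2), breath_captures_attention

print(meditate(other_stimuli_rate=0.0))  # precision climbs; the breath keeps capturing attention
print(meditate(other_stimuli_rate=0.3))  # precision stays low; the breath rarely crosses the threshold
```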
I admire the PP concept as a hypothesis, although I think that too many people oversimplify it. When we talk of bottom-up and top-down signals, the implication is that these signals traverse the whole brain, yet I rather suspect there are many, many PP layers between sensation and top-level awareness and action, and each layer must be ‘detached’ to some extent to enable poor predictions to be suppressed. If there were no dampening, then the brain would lock onto the strongest ‘signal’ and never let go, which would be an unlikely characteristic in an organism selected by evolutionary processes to survive. This is why we don’t eat our favourite food all the time (even though we could): the power of the stimulus fades over time, as it has evolved to. A better analogy is the Mexican wave: lots of individuals (levels) conspire to create it, but it doesn’t exist as an object and doesn’t continue indefinitely.
My best guess is that meditation is one of the methods of changing the PP weightings of bottom up and top down signals. There is no magic and other methods are available.
I’m going to drone on here just because I’m trying to get a handle on some new ideas, and your response gives me a chance to take another crack at them.
The signals don’t traverse the whole brain. Instead, they traverse the set of regions that are involved in making a particular prediction, which in turn have been activated by a previous set of sensory information.
The proper way to think about this is that the brain is a bunch of regions and the regions are hooked together in a partially connected directed graph. There are indeed many levels between the top and the bottom, and each region can have more than one upward and downward set of projections to other regions. So a lower level region will activate all of the higher level regions it’s connected to, and vice-versa.
One other thing, though: All neocortical regions work the same way. It’s only how they’re hooked together that makes different regions work differently. On the other hand, there are lots of different anatomical structures in the brainstem and limbic system that have evolved to have very specific functions. As an ex-software geek, I’m always tempted to think of the brainstem and limbic systems as “I/O” and the neocortex as “CPU”, but this is a terrible analogy. However, thinking of neocortex as a general purpose computational resource capable of adapting, while the other stuff is less adaptive but more closely evolved to achieve specific tasks, is a pretty decent metaphor for describing the brain’s architecture at its most general.
That’s the essence of PP. If the prediction and the sensory information agree, then pretty much everything gets damped out, which makes each region keep doing what it’s doing.
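(A minimal two-level sketch of that damping, with all of the numbers invented for illustration:)

```python
# Minimal two-level sketch (illustrative only): a higher region sends a prediction
# down, the lower region sends the error back up, and once the two agree the
# traffic between them damps out.
def settle(sensory_input, prediction, step_size=0.5, max_steps=20):
    for step in range(max_steps):
        error = sensory_input - prediction   # bottom-up signal
        if abs(error) < 1e-3:
            return step, prediction          # agreement: nothing left to signal upward
        prediction += step_size * error      # top-down model adjusts toward the input
    return max_steps, prediction

print(settle(sensory_input=0.7, prediction=0.0))  # the error shrinks each pass, then goes quiet
```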
The real unanswered question is what happens when they disagree. Some mechanism has to be triggered to step in to clean up the mess. That cleanup process has to do two things:
1) Take short-term actions to get things back on track.
2) Cause synaptic weights to change (learn) to reduce the chance that things will result in another surprise.
I think the first of these is mediated by attention. Attention and short-term memory can sequence a series of predictions (which in PP are the same thing as actions) to make everything agree, but the process is going to be slow and energetically expensive. The goal of attention is to generate new sequences that work.
Once a sequence sorta-kinda works, the second mechanism kicks in. It can’t be a cognitive function, because it’s ultimately the mechanism that builds cognition in the first place. There has to be something that recognizes that the various layers involved in the thing being attended to are now all signaling “things all seem to be in agreement between top-down and bottom up”, and then sends out some kind of neurotransmitter to adjust the weights in the direction of this new sequence of predictions.
Because that mechanism is sub-cognitive, meditation can’t really affect it. What meditation can do is develop a sequence of actions to direct attention back to a set of prediction/sensory interactions that is signaling “you don’t really need to pay attention to this, we’ve got it”. In other words, you need a set of predictive actions that will constantly override the “nothing to see here” signals coming out of the network.
This is where I think the precision of the prediction becomes important. Not all attentional tasks are going to result in new predictive sequences (i.e., “concepts”). Many are simply going to result in the “conclusion” that the prediction was too precise or too fuzzy. (Example: You will fail to recognize an apple if the precision of the prediction requires that it always be oriented upright in a bowl of fruit, but you will also fail to recognize it if it’s sitting next to an orange and your prediction doesn’t care about color.)
The attentional system is a scarce resource, so you don’t want to overload it. However, it’s also an incredibly valuable resource, in that it is the mediator of new behavior, so you don’t want to under-utilize it, either. So there has to be some kind of just-right control process that adjusts predictions to be just precise enough to result in effective behavior, without being so precise that they require constant attention.
And this is where the predictive sequences you learn during meditation can fool with the system a bit. Basically, the concepts that drive meditation convince the lower-level regions that they should constantly be sending out the “I need some help here!” signal to the attentional system. In other words, in learning to meditate, you’ve created a set of concepts that are intentionally over-precise for the task.
In the wild, this would be a terrible pathology, because while you were constantly over-predicting what your breath felt like, something would come along and eat you. But in a secure environment, those over-precise predictions will drive the attentional system in some interesting ways. It is, literally, an altered state of consciousness, because consciousness and attention are intimately connected. (Indeed, I don’t think there’s any real difference between the two concepts.) But the only thing that’s necessary to achieve it is the learning of this over-precise pattern to drive down onto various sensory regions.
So meditation is just a way of making the brain go wrong, and experiencing what it is like to be a brain that is going wrong? Like pouring strange chemicals into your brain, and of no larger significance?
Perhaps all the thousands of years of talk about insight, enlightenment, and so on is nothing but the misinterpretation of such experiences by people who have not realised that their mind is a physical process.
Jhanas are a specific type of meditation outcome, but all of the meditation texts that I’ve seen specifically say that jhanas should not be confused with enlightenment (and MCTB specifically warns that doing too much jhana practice can make you go a little crazy).
This is one of the problems with reinforcement theory: in general we do not see this happen, and it is questionable whether it is what is happening in the case of addiction.
Similar feedback failures are discussed at length in Marvin Minsky’s Society of Mind (1986, text online here).
E.g., in “Section 6.13: Self-knowledge is dangerous” Minsky writes:
And in “Section 29.6: Autistic children” Minsky writes:
These Minskian meditations highlight the danger(s) of a solitary search for unitary explanations of the world … they explain the mechanisms by which drug-takers and introspective-meditators earn a (deserved?) reputation for “weird” cognition … and they explain too why psychoactive drug-taking and introspective soul-seeking alike are viewed by society at large (again, deservedly?) as dangerous practices.
Bringing to mind one of the best pieces of shortform SF horror I’ve ever read: “Chaff”, by Greg Egan.
The title is a reference to the phrase “chaff in the wind”, which is what would happen to your mind if a “rewrite your own drives and desires” user interface was placed under your conscious and semi-conscious control.
I was reminded by the same line of a different SF story: Understand by Ted Chiang.
Another Ted Chiang story, “The truth of fact, the truth of feeling” (2013), is the heart-wrenching story of the narcissistic protagonist’s difficult escape from a self-gratifying, yet functionally abusive, cognitive feedback loop.
Perhaps this selfsame, difficult, realization is a core “point,” too, of rationalism more generally?
I never thought I’d see meditation explained in terms of biology in the course of my life, but here it is! Meditation used to be treated by science as a probably fake but relatively harmless delusion.
Why practice on the breath initially? Why not go ahead and start with the pleasurable thing? Wouldn’t that be more fun?
Here’s an account of someone who took a detour from mindfulness of breathing to working directly with pleasure, with good results. I experimented a little with following that idea and it does seem promising (and fun).
At the same time, I think it wouldn’t have worked very well if someone jumped directly into that to the exclusion of more traditional techniques. In meditation it’s important to learn how to maintain strong ongoing attention without getting frustrated or self-judgmental, attached to the idea of reaching a specific state in any given meditation session, distracted by second guessing yourself, etc. I suspect that learning those things would be harder if you were trying to focus on fun stuff from the start rather than something neutral and automatic like the breath.
“If the brain had been designed by an amateur, it would enter a runaway feedback loop the first time it felt an emotion.”
That does not follow: not all infinite series diverge.
Suppose that seeing a butterfly gives you one unit of happiness. Recognizing that you have one unit of happiness gives you a half unit of happiness. Recognizing that half unit gives you …
1 + 1/2 + 1/4 + 1/8 + … = 2, not infinity.
So, if you have a feedback of happy-for-happy of 0.5, then you are twice as happy as you “should” be.
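More generally, if each increment of happiness feeds back a fraction r < 1 of itself, the total is a convergent geometric series:

```latex
\[
  \sum_{n=0}^{\infty} r^{n} = \frac{1}{1-r}, \qquad 0 \le r < 1 ,
\]
```

so r = 1/2 gives the factor of 2 above, and only r ≥ 1 produces a runaway loop.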
Maybe some forms of depression are suppression of the “happy for happy” loop.