Rereading The Hungry Brain, I notice my review missed one of my favorite parts: the description of the motivational system. It starts with studies of lampreys, horrible little primitive parasitic fish:
How does the lamprey decide what to do? Within the lamprey basal ganglia lies a key structure called the striatum, which is the portion of the basal ganglia that receives most of the incoming signals from other parts of the brain. The striatum receives “bids” from other brain regions, each of which represents a specific action. A little piece of the lamprey’s brain is whispering “mate” to the striatum, while another piece is shouting “flee the predator” and so on. It would be a very bad idea for these movements to occur simultaneously – because a lamprey can’t do all of them at the same time – so to prevent simultaneous activation of many different movements, all these regions are held in check by powerful inhibitory connections from the basal ganglia. This means that the basal ganglia keep all behaviors in “off” mode by default. Only once a specific action’s bid has been selected do the basal ganglia turn off this inhibitory control, allowing the behavior to occur. You can think of the basal ganglia as a bouncer that chooses which behavior gets access to the muscles and turns away the rest. This fulfills the first key property of a selector: it must be able to pick one option and allow it access to the muscles.
Many of these action bids originate from a region of the lamprey brain called the pallium…
Spoiler: the pallium is the region that evolved into the cerebral cortex in higher animals.
Each little region of the pallium is responsible for a particular behavior, such as tracking prey, suctioning onto a rock, or fleeing predators. These regions are thought to have two basic functions. The first is to execute the behavior in which it specializes, once it has received permission from the basal ganglia. For example, the “track prey” region activates downstream pathways that contract the lamprey’s muscles in a pattern that causes the animal to track its prey. The second basic function of these regions is to collect relevant information about the lamprey’s surroundings and internal state, which determines how strong a bid it will put in to the striatum. For example, if there’s a predator nearby, the “flee predator” region will put in a very strong bid to the striatum, while the “build a nest” bid will be weak…
Each little region of the pallium is attempting to execute its specific behavior and competing against all other regions that are incompatible with it. The strength of each bid represents how valuable that specific behavior appears to the organism at that particular moment, and the striatum’s job is simple: select the strongest bid. This fulfills the second key property of a selector – that it must be able to choose the best option for a given situation…
With all this in mind, it’s helpful to think of each individual region of the lamprey pallium as an option generator that’s responsible for a specific behavior. Each option generator is constantly competing with all other incompatible option generators for access to the muscles, and the option generator with the strongest bid at any particular moment wins the competition.
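The selector Guyenet describes is simple enough to sketch in code. Here is a minimal toy model of the "bouncer" logic (the function name and bid values are my own invention, not anything from the book):

```python
# Toy sketch of the basal-ganglia "bouncer" described above.
# Each option generator submits a bid; every behavior is inhibited
# ("off") by default, and only the single strongest bid is released
# to the muscles.

def select_action(bids):
    """bids: dict mapping behavior name -> bid strength (float)."""
    if not bids:
        return None  # nothing is disinhibited; the animal stays put
    # Property 1: exactly one option is selected (all others stay inhibited).
    # Property 2: the selected option is the best one for the situation.
    return max(bids, key=bids.get)

bids = {"track prey": 0.4, "flee predator": 0.9, "build nest": 0.1}
print(select_action(bids))  # -> flee predator
```

The point of the sketch is only that selection is winner-take-all: the striatum never blends bids, it disinhibits one and keeps the rest suppressed.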
The next subsection, which I’m skipping, quotes some scientists saying that the human motivation system works similarly to the lamprey motivation system, except that the human cerebrum has many more (and much more flexible/learnable) options than the lamprey pallium. Humans have to “make up our minds about things a lamprey cannot fathom, like what to cook for dinner, how to pay off the mortgage, and whether or not to believe in God”. It starts getting interesting again when it talks about basal ganglia-related disorders:
To illustrate the crucial importance of the basal ganglia in decision-making processes, let’s consider what happens when they don’t work.
As it turns out, several disorders affect the basal ganglia. The most common is Parkinson’s disease, which results from the progressive loss of cells in a part of the basal ganglia called the substantia nigra. These cells send connections to the dorsal striatum, where they produce dopamine, a chemical messenger that plays a very important role in the function of the striatum. Dopamine is a fascinating and widely misunderstood molecule that we’ll discuss further in the next chapter, but for now, its most relevant function is to increase the likelihood of engaging in any behavior.
When dopamine levels in the striatum are increased – for example, by cocaine or amphetamine – mice (and humans) tend to move around a lot. High levels of dopamine essentially make the basal ganglia more sensitive to incoming bids, lowering the threshold for activating movements…Conversely, when dopamine levels are low, the basal ganglia become less sensitive to incoming bids and the threshold for activating movements is high. In this scenario, animals tend to stay put. The most extreme example of this is the dopamine-deficient mice created by Richard Palmiter, a neuroscience researcher at the University of Washington. These animals sit in their cages nearly motionless all day due to a complete absence of dopamine. “If you set a dopamine deficient mouse on a table,” explains Palmiter, “it will just sit there and look at you. It’s totally apathetic.” When Palmiter’s team chemically replaces the mice’s dopamine, they eat, drink, and run around like mad until the dopamine is gone.
The same can happen to humans with basal ganglia injuries:
Consider Jim, a former miner who was admitted to a psychiatric hospital at the age of fifty-seven with a cluster of unusual symptoms. As recorded in his case report, “during the preceding three years he had become increasingly withdrawn and unspontaneous. In the month before admission he had deteriorated to the point where he was doubly incontinent, answered only yes or no questions, and would sit or stand unmoving if not prompted. He only ate with prompting, and would continue putting spoon to mouth, sometimes for as long as two minutes after his plate was empty. Similarly, he would flush the toilet repeatedly until asked to stop.”
Jim was suffering from a rare disorder called abulia, which is Greek for “an absence of will”. Patients who suffer from abulia can respond to questions and perform specific tasks if prompted, but they have difficulty spontaneously initiating motivations, emotions, and thoughts. A severely abulic patient seated in a bare room by himself will remain immobile until someone enters the room. If asked what he was thinking or feeling, he’ll reply, “Nothing”…
Abulia is typically associated with damage to the basal ganglia and related circuits, and it often responds well to drugs that increase dopamine signaling. One of these is bromocriptine, the drug used to treat Jim…Researchers believe that the brain damage associated with abulia causes the basal ganglia to become insensitive to incoming bids, such that even the most appropriate feelings, thoughts, and motivations aren’t able to be expressed (or even to enter consciousness). Drugs that increase dopamine signaling make the striatum more sensitive to bids, allowing some abulic patients to recover the ability to feel, think, and move spontaneously.
All of this is standard neuroscience, but presented much better than the standard neuroscience books present it, so much so that it brings some important questions into sharper relief. Like: what does this have to do with willpower?
Guyenet describes high dopamine levels in the striatum as “increasing the likelihood of engaging in any behavior”. But that’s not really fair – outside a hospital, almost nobody just sits motionless in the middle of a room and does no behaviors. The relevant distinction isn’t between engaging in behavior vs. not doing so. It’s between low-effort behaviors like watching TV, and high-effort behaviors like writing a term paper. We know that this has to be related to the same dopamine system Guyenet’s talking about, because Adderall (which increases dopamine in the relevant areas) makes it much easier to do the high-effort behaviors. So a better description might be “high dopamine levels in the striatum increase the likelihood of engaging in high-willpower-requirement behaviors”.
But what makes a behavior’s willpower requirement high? I’m always tempted to answer this with some sort of appeal to basic calorie expenditure, but taking a walk requires less willpower than writing a term paper even though the walk probably burns way more calories. My “watch TV” option generator, my “take a walk” option generator, and my “write a term paper” option generator are all putting in bids to my striatum – and for some reason, high dopamine levels privilege the “write a term paper” option and low dopamine levels privilege the others. Why?
I don’t know, and I think it’s the most interesting next question in the study of these kinds of systems.
But here’s a crazy idea (read: the first thing I thought of after thirty seconds). In the predictive processing model, dopamine represents confidence levels. Suppose there’s a high prior on taking a walk being a reasonable plan. Maybe this is for evo psych reasons (there was lots of walking in the ancestral environment), or for reinforcement related reasons (you enjoy walking, and your brain has learned to predict it will make you happy). And there’s a low prior on writing a term paper being a reasonable plan. Again, it’s not the sort of thing that happened much in the ancestral environment, and plausibly every previous time you’ve done it, you’ve hated it.
In this case, confidence in your new evidence (as opposed to your priors) is a pretty important variable. If your cortex makes its claims with high confidence (ie in a high-dopaminergic state), then its claim that it’s a good idea to write a term paper now may be so convincing that it’s able to overcome the high prior against this being true. If your cortex makes claims with low confidence, then it will tentatively suggest that maybe we should write a term paper now – but the striatum will remain unconvinced due to the inherent implausibility of the idea.
In this case, sitting in a dark room doing nothing is just an action plan with a very high prior; you need at least a tiny bit of confidence in your planning ability to shift to anything else.
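One way to make this prior-versus-confidence story concrete is a toy scoring model, where dopamine acts as a gain on cortical evidence but not on the prior. All of the numbers and names below are invented for illustration; this is a sketch of the idea, not a claim about actual neural quantities:

```python
# Toy model of "dopamine as confidence": each plan's bid combines a
# prior (how plausible this action is by default) with cortical
# evidence (how good an idea it looks right now), and dopamine acts
# as a gain on the evidence term only. All numbers are invented.

def plan_score(log_prior, cortical_evidence, dopamine):
    return log_prior + dopamine * cortical_evidence

plans = {
    "sit still":        dict(log_prior=0.0,  cortical_evidence=0.0),
    "take a walk":      dict(log_prior=-1.0, cortical_evidence=1.0),
    "write term paper": dict(log_prior=-5.0, cortical_evidence=4.0),
}

for dopamine in (0.2, 1.5):
    winner = max(plans, key=lambda p: plan_score(dopamine=dopamine, **plans[p]))
    print(f"dopamine={dopamine}: {winner}")
    # -> dopamine=0.2: sit still
    # -> dopamine=1.5: write term paper
```

With low gain the high-prior default wins even though the cortex mildly favors the paper; with high gain the cortical claim overcomes its implausible prior, which is the qualitative pattern described above.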
I mentioned in Toward A Predictive Theory Of Depression that I didn’t understand the motivational system well enough to be able to explain why systematic underconfidence in neural predictions would make people less motivated. I think the idea of evolutionarily-primitive and heavily-reinforced actions as a prior – which logical judgments from the cortex have to “override” in order to produce more willpower-intensive actions – fills in this gap and provides another line of evidence for the theory.
The evo-psych explanation seems solid, but I propose a different one: the disorder penalizes plans based on their complexity. There’s a clear gradient, where ‘writing a term paper’ and ‘doing & feeling nothing’ are on opposite ends. Going out with friends requires coordination, ideas, & social interaction; going for a walk requires getting up to do it; sitting in a dark room doesn’t take much planning at all.
Maybe planning is the right variable, but my experience of behaviors that require planning crowding out simple behaviors (like taking out the trash) suggests otherwise.
I think the activation energy is adjusted dynamically over time: if you write a term paper and feel good about it right away, it’s easier to write a term paper in the future; if you experience an exercise high, it’s easier to exercise in the future. That would be the mechanism by which habits form.
“What about exercise?” was my first thought on reading this–I *do* experience exercise highs (or at least a general feeling of well-being and satisfaction) after doing aerobic exercise. However, the activation energy to do it remains very high unless either it’s an activity that’s intrinsically fun (e.g. skiing), or I have built a routine that makes exercise the default (e.g. biking to work: my bike is tuned up and ready to go by the door, I picked out an outfit that I’ll be comfortable biking to work in, etc.). And I’ll still say “fuck it” and drive to work instead about a third of the time, even when I’ve made it easy as possible and *know* that I’ll feel better biking than driving.
Hyperbolic discounting is a bitch.
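It is, and it can be put in numbers. The standard hyperbolic form values a reward of size A delayed by D at V = A / (1 + kD); the parameters below are made up purely to show the preference reversal that distinguishes it from exponential discounting:

```python
# Hyperbolic discounting: a reward of size A delayed by D is worth
# V = A / (1 + k*D) right now, where k measures impatience.
# Unlike exponential discounting, this curve lets preferences reverse
# as the choice point approaches. All numbers are illustrative only.

def present_value(amount, delay, k=1.0):
    return amount / (1 + k * delay)

# Viewed from five days out, the bigger-but-later reward wins:
print(present_value(5, delay=5, k=2) < present_value(10, delay=6, k=2))  # True
# At the moment of choice, the smaller-but-immediate reward wins:
print(present_value(5, delay=0, k=2) > present_value(10, delay=1, k=2))  # True
```

That reversal is exactly the "I know I'll feel better biking, but I drive anyway" pattern: the choice looks different at the decision point than it does in advance.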
Complexity is frequently going to be correlated with lack of confidence, and confidence (as Scott suggests) is what a lot of people think these systems are computing.
Complexity is also going to be correlated with high opportunity cost/high risk. Engaging in complex tasks is going to prevent monitoring the environment for threats and opportunities.
On the other hand virtually everything we do is complex. Walking is incredibly complex, and takes us years to master, and yet children push themselves to do it with no good reason for confidence (unless watching other people walk around gives it to them), with tons of spills, bumps, cuts and tears.
Cognitive complexity. Walking doesn’t need much of your attention, whereas writing a term paper requires a lot of attention, as well as enough information that holding all of it in your mind at once can be difficult or impossible.
Walking only doesn’t need your attention because you spend years mastering it.
Walking is also a cognitively complex thing, but its complexity is “under the hood” because we mastered it in childhood. But if you try to do it consciously, you’ll notice that it is not that simple.
I’m somewhat echoing the comment above. :)
I think you could actually combine this + Scott’s reasoning to get a pretty solid explanation as well. If a plan is complicated, it stands to reason that it’s more difficult to hypothetically model that plan in your head, which makes your brain extra unsure of its predictions regarding your plan.
For a concrete example: suppose you want to write a term paper. It’s very difficult for your brain to model what “my current plan to write a term paper” includes, because it’s complicated and there are a lot of possible ways to go about it and your brain is only kind of sure what actions it will entail.
Compare that to writing a term paper based off a good outline you’ve already prepared. It’s much easier to model the steps to writing the paper now (“I clean up the intro paragraph for a minute, spend maybe ~45 minutes looking for more sources for paragraph #2, re-write that connector between paragraph #2 and paragraph #3,” etc). Because the mental work you’re doing involves fewer unknowns, it’s easier for your brain to model you completing the paper using the outline than it is for your brain to model you completing the outline from scratch.
(Don’t forget, neuroscience shows that a lot of mental planning involves mentally simulating the actions to be taken. To make it simple, if you’re planning to write a paper and imagining yourself doing so, your brain will start to activate the “sit in the chair” neurons because you’re imagining sitting in a chair. So if that plan then has 80 more steps, most of which are unknown, your brain might start to complain about all the effort it’s being required to put in for uncertain future benefit).
If you assume that more complicated plans get penalized because they’re harder for your brain to simulate, it makes a lot of sense why your motivation to write a term paper is less than your motivation for taking a walk. It also explains why rigorous intellectual activity can seem so unappealing to me sometimes (“ugh, can’t this stupid paper wait another couple hours,”) but I’m perfectly happy to clean my room in the meantime. Cleaning the room is more caloric effort, but far, far less complicated for my brain to model.
This seems on the right track. In Predictive Processing terms, taking a walk doesn’t “cost” much surprisal, but writing a term paper involves dealing with a massive load of surprisal, and at the highest layers of your cognitive stack.
My intuition about my own problems with term papers / professional articles is that they don’t push extroversion buttons. I can perform complex cognitive tasks for quite a long time, as long as they’re mediated by social interaction. Teaching, presenting, debating, board gaming, dating are all activities where motivation seems to beget more motivation. Sitting in front of a screen with my own thoughts is agonizing — sometimes I’ve gone so far as to say that I don’t know what “my own thoughts” are until I express them in dialogue.
Somehow dopamine differences are likely to be involved, but simply relating it to difficulty or complexity doesn’t seem to work.
What’s the extinction time on queries to your brain of the form “Should I do this thing?”
That is to say, if we think of our process-by-which-we-do-things as a literal probability distribution with literal prior probabilities of “how likely am I to do this”, then repeatedly intentionally asking yourself “should I do this thing now?” should eventually result in you doing the thing, so long as those queries are actually independent events.
When I’m trying to get myself to do something I don’t want to do, and if I just repeatedly query my brain, it kind of defaults to “No, not right now, for the same reason as last time.” That is, there’s a timescale over which I can short-circuit the entire decision process and preempt any introspection on ‘why’ I’m not doing the thing. But if I wait a little bit and ask again, it feels like a different sort of “No, not right now” – like I had to go through the mental motions of justifying to myself that, no, really, I do not want to do the thing right now.
But this seems to suggest that spaced repetition of asking yourself, “Should I do the thing now?” might be a hack for getting yourself to do the thing, so long as you can intentionally “query the basal ganglia” so to speak.
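Assuming, as the parent comment does, that each spaced-out query really is an independent draw with some small probability p of coming up "yes", the chance of at least one yes compounds geometrically (the function name and numbers are illustrative):

```python
# If each self-query is an independent Bernoulli trial with a small
# probability p of returning "yes, do it", then the chance that n
# queries produce at least one "yes" is 1 - (1-p)**n, which climbs
# toward 1. (The commenter's caveat: back-to-back queries aren't
# independent, so the queries need to be spaced out.)

def p_at_least_one_yes(p, n):
    return 1 - (1 - p) ** n

print(round(p_at_least_one_yes(0.05, 1), 3))   # -> 0.05
print(round(p_at_least_one_yes(0.05, 50), 3))  # -> 0.923
```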
It’s not a probability thing, it’s a comparison of signals thing. The signal that dominates isn’t random, it’s the one that’s strongest after the modifiers apply.
And mental actions are some of the things that you might choose to do, making it a chicken-egg problem. Chicken-egg problems can be solved by presenting a chicken or an egg, like pharmacological dopamine.
Yeah, continually asking yourself if you should do a thing will sooner or later result in you doing it. I’ve done some incredibly stupid things because I didn’t stop asking myself whether I should do them.
Aren’t you going to hook all this into “Breakdown of Will”?
> due to the inherent implausibility of the idea
You understand me so well.
I made an account to make this comment, something I’ve been meaning to do for a while (it was this second thought, not the dumb joke above, that motivated me). I wonder why my prior on ‘making accounts’ is so low, considering it has essentially no chance of backfiring.
By making an account you’re positing the possibility of continued posting of comments for the foreseeable future. Then reading the replies to those comments and replying in turn.
This is a huge time and effort sink that can’t happen unless you first make that account, but is quite likely to happen after you have made the account.
Or so it seems to me.
Yeah that seems reasonable. Come to think of it, I know multiple people in real life who, on various sites, are shockingly resistant to making accounts. Fear of too large a time sink seems reasonable. Maybe also some sort of loss of anonymity? Although in reality one can always just not log in.
I wonder if you’re just not understanding the psychology of certain people. Anonymity sometimes doesn’t matter at all to people. I’m an extremely self-conscious person, and I tend to worry a lot about what other people think of me. And in theory, I should be more relaxed on the internet (or in other cases where no one knows who I am – like, say I visit a random city and go to a gym that I’ve never been to before, and will never visit again: should I still be self-conscious in the same way that I am at a hometown gym? In theory, no, but I find that’s not the case.)
I think that some people are just really, really averse to judgement, period. It doesn’t really matter whether they’re anonymous, and whether that judgement will ever be connected to their actual real-life identity or not – they just don’t like being judged, end of story. So they avoid doing things that will result in them feeling judged, which includes creating (and posting from) anonymous internet accounts. And they avoid this because people will still read what they post on those anonymous accounts, and respond to them, sometimes negatively, resulting in negative feedback and the feeling of being judged.
I don’t know if the positive feedback means enough to you to cancel out the explicit acknowledgement that judgment has occurred, but I’ve always found your comments to be of unusually high quality, thepenforests.
Honestly, I didn’t think that I had posted enough here to warrant any kind of reputation, let alone a positive one. So yes, that’s appreciated.
That’s a really interesting point.
To clarify what I meant, I was using “anonymous” to mean more than just nameless, but also hidden in presence. Your description of feeling judgement even when anonymous rings really true, including for myself, although I doubt I could have phrased it that well.
I’m not fully satisfied with exposure to judgement as the source of resistance, though, and I don’t think your response really addresses it, because having an account doesn’t require ever actually using that account to make comments. I’ve seen (including in myself) resistance to making an account in the first place, commenting aside. One example that comes to mind is someone who didn’t want to make a reddit account, which I was suggesting because of the ability to tailor what one sees (rather than the ability to comment). I don’t really see how fear of other people can play into that at all.
By “being judged” do you mean that people may encounter opinions that contradict their prior beliefs? Because looked at from that side, it’s definitely true: they don’t like to question their beliefs. And those contradicting opinions may therefore result in negative feelings.
This is an important point. Before commenting one can read all of the posts which contradict their beliefs and come up with arguments as to why they are wrong. But once one posts those arguments the arguments themselves can be proven wrong, leaving one no choice but to consider the possibility that the belief is wrong.
If this is the motivation, not creating an account is a strong version of sticking fingers in one’s ears.
I think this is pretty spot on, the idea is once you’ve commented, you have a persona in this space, and that implies upkeep. Not only that, the standard of discourse here seems pretty high. If you want your ideas taken seriously it requires a bit of effort, in both initial eloquence and responding to critique.
I was in a similar boat re: creating an account. I think part of it, at least for me, was the infinite potential of opportunities never pursued. As long as I never comment, I can still believe my comments would have been brilliant.
I would like to claim some credit for, several years ago, telling Eliezer that he really ought to like Guyenet’s reward-based view of appetite/obesity given his other views.
I don’t know how useful biased views from the inside are, but as someone who’s struggled with depression and ADD, it seems like “dopamine as confidence” is correct. Once I got my depression largely in check, the ADD came roaring back. Instead of thinking I could never do anything, I had a lot more trouble figuring out which ideas were good enough to put effort into. It wasn’t outright mania, but I couldn’t prioritize because everything seemed important. When I was depressed, I actually had much better focus, but little success in executing complex plans. I could sit and watch foreign films in black and white all day, but doing the dishes seemed impossible. Now the reverse is true: simple chores and achievable tasks get done immediately…if I don’t get distracted. Reading a whole news article is much tougher than home improvement. Overall, an improvement, but still progress to be made with my doctor’s help.
With lots of dopamine and a brain trained on little dopamine, you might have the thing where you update too quickly: filled with the ability to decide to do things, your brain says “reading article hasn’t been ‘rewarding’ (in the chemical sense) for a few minutes. I’ll find something that I think will be rewarding.”
The failure mode of that emergent behavior is that it selects things which are rewarding on the short timeframes it is using to update, not the longer timeframes that rational actors use to make decisions.
I don’t know for sure how to solve that problem, but I suspect that positively reinforcing behavior that you are doing, on the short timeframe before you lose interest, might make it easier to continue with the thing longer. That might look like having pieces of candy that you eat while you are on task, or something.
Thanks for the advice! The depression had masked the worst of the ADD for so long, so I’m still figuring out strategies to handle that. It’s a different issue than depression, but it’s certainly easier to deal with a lack of an attention span rather than despair. Maybe I’ll even go zookeeper and get my monkey brain some nice rewarding fruit. 🙂
Just model what the rational actor with your goals should do under this specific set of additional meta-constraints. And then do that.
Under this model, the core problem still seems to be that complex behaviours are too implausible to be competitive. Simply define a ‘good-enough’ state for the cleanliness of your room. Ensure it. Set a timer for 2 hours, take out pen and paper (for work with complex/valuable/rational things) and a ‘thing that you can inflict pain on yourself with, without permanent damage’, stand in front of your desk and hit yourself very hard whenever you think of ‘A Game of Thrones’/internet surfing/whatever competes with your desired rational behaviour.
This should raise the plausibility/feasibility of rational action in no time, by making it more viscerally appealing.
Very interesting post.
First question: have you read Crystal Society? For anyone who hasn’t, it’s a fascinating book about a hypothetical AI that (roughly speaking) consists of a number of different competing submodules. The submodules all have different goals, and they “bid” on motor actions of the AI in question in much the same way that Scott describes in this post. The currency of bidding in Crystal Society is called Strength, and submodules accumulate Strength when actions that were due to their previous bids result in positive outcomes for the AI (this is kind of underspecified in the book – obviously there must be some kind of global utility function that the AI uses to decide whether an outcome was good or not, but the author doesn’t really talk about this much, as far as I can recall). Anyway, it’s an extremely interesting book that I would recommend to anyone reading this. After I read it I found it gradually worming its way into my thinking more and more, to the point where I now find it kind of hard to think of the brain as anything but a competing group of submodules with different goals and different Strengths (where Strength is apportioned to submodules by some other part of the brain, which could kind-of sort-of be said to represent our “utility function”). And probably these submodules could each have their own internal versions of Strength which they use to reward sub-submodules, and those sub-submodules could reward sub-sub-submodules, and so on and so forth down to the neuronal level, where individual neurons are being reinforced by Hebbian learning or whatever.
Anyway, under this picture, I think that different parts of the brain have different goals, some of which are long-term and some of which are short-term. And I view “willpower” as pretty much equivalent to “the limited amount of Strength that the submodules concerned with long-term goals have.” Some people have more “willpower,” in that their long-term-focused submodules generally have more Strength (either because the brain more easily notices when long-term planning has had a good result and thus rewards it, or because the brain’s “utility function” is just intrinsically more inclined to reward it in the first place). But in either case, regardless of how much Strength a person’s long-term submodules have, anyone can “run out” of willpower (which we’ve all experienced) when those submodules run out of Strength.
Second question: what might this have to do with addiction? When I think of someone addicted to e.g. heroin when using this picture, I basically imagine a submodule of the brain that has come into existence, and is solely concerned with getting more heroin, and which has gained an enormous amount of Strength (enough to overpower almost any other submodule in the brain, except maybe those concerned with basic life support functions). As a result, the person in question will take actions almost exclusively based on how likely those actions are to lead to getting more heroin. Maybe in extreme situations, or in cases when the addiction hasn’t quite taken hold, the other submodules of the brain might be able to team up and overwhelm the “heroin” module to do something else. But almost always the heroin module will win (hence, they’re addicted). Does this seem like a useful way of conceiving of addiction?
This whole thing is giving me flashbacks to Everyone is John.
The thing about submodules you describe is pretty much the main theme of this non-fiction book. It’s worth reading, though as far as I remember I didn’t see a lot of real science behind the ideas the author elaborates on: they were mostly assumptions.
My lengthy comment below describes a different non-fiction book that has similar things to say.
This reminds me of work some people were doing a long time ago, which involved designing a market system inside a computer for allocating resources. Different subsystems could bid for use of RAM, CPU cycles, and the like. I’m afraid I no longer remember the details.
This fits with my experience.
My parents weren’t the most predictable people, and in college I was stupid and joined a cult. (Which I didn’t realize was a cult until much later.) The overarching pattern is that punishment can occur for any reason or no reason at all. (In extreme cases, some cult members were known to defend pederasty and this was considered OK, but other people would have defamation spread all over the internet about them for liking the wrong TV shows.) I eventually changed my name and moved far away and I’m still not sure if I’ve escaped yet.
My general motivation steadily decreased over all this time, and it feels like it’s because of uncertainty or lack of confidence — I can’t be sure if I’ll benefit from taking action, but I can be sure I’ll be wildly disproportionately harmed by it, so I just don’t do anything anymore.
I’m not really sure what to do about that.
That sounds like a story you should tell a psychiatrist. You should call a psychiatrist to make an appointment to tell them that story.
The psychiatrist is not going to harm you for calling them.
Once, I was having a stupidly hard time with this online course. The problem wasn’t the material (er, in terms of difficulty, anyway), but activation. A therapist hooked me up to one of those focus-detecting games made to teach inattentive children what focusing feels like by making them telekinetic, and, sure enough, thinking about this coursework turned the thing off completely.
OK, this sounds like run-of-the-mill severe ADHD / moderate Depression, which, ok, probably was, but since I was pet-sitting for my parents for a few days during this fiasco, there was a simple intervention: hide the laptop charger and the router. So what happened? A lot of staring at the wall for hours. At that point, it was clear that this was a physical impossibility with the resources available to me, and I just gave up.
That was the n>=4th time something of that magnitude had happened: possible distractions eliminated, plenty of reason to do the work, and still incapable of doing the blasted work for anything. One of these (not the above) was while on Focalin, which presumably kicked in during the “GAH JUST DO IT!” ugh-of-war and kept me there for hours. It’s not a lack of desire (“just get through it and you can get out of here” is sufficient motivation when you’re stuck in a tiny lifeless office/car/etc. until the work gets done, and yet…). The first severe episode of this nature that I recall was when I was nine, and all I had to do was read a chapter of a book that I felt mildly positive about. Several minutes of nothing, a couple minutes of flailing and crying, and it’s kinda blurry after that, but at some point I actually got through that one and read the book, which probably took less time than it did to generate enough AP to start.
So, yeah, what the hey is up there? I brought all this up because of the “sitting there staring at a wall” thing.
I presume this varies between people and between topics. I have no idea why you’d have difficulty starting to read a chapter of a book you were interested in (finishing is another story altogether, since a writer’s style can easily stymie that).
I had to take Physics 1 lab four times in college (at three different universities) due to writer’s block on the experiment write-ups. I’d end up staring at a piece of paper with my name, the date, and the title of the lab for up to half an hour before going off to do something else.
I attributed it to the following:
1) All of this had been done before by literally millions of people. Nothing new was being learned or discovered; I even knew exactly what I was expected to learn prior to performing the lab.
2) I was being asked to regurgitate experimental results, and simply rewrite the descriptions of the lab in the textbook/labbook without adding anything novel. And to do so as introduction, materials and methods, results, analysis – a rigid, predetermined-by-someone-else layout with no novelty (textbooks at least sometimes interweave the sections). No novelty whatsoever, because there simply wasn’t room for it, given the simplicity of the experiments and the fact that they had been done millions of times before with expected results (even expected deviations). Somehow this triggered my sense of “too much like plagiarism, and plagiarism is wrong!!!!”.
The fourth time at the third university I was both a decade older and simply had to fill out a worksheet instead of write up a report. That was manageable.
I’ve never heard of these focus-detecting games. Do you have any names I could follow up on? Googling “Focus detecting game” just gives me programming results.
The one he used was Mindflex. Wiki says it’s disputed whether or not it actually works. Anecdotally, it was reasonably consistent when I tried it, with small amounts of noise – e.g. trying to focus on the course consistently brought it down to 0–10% or so, thinking about imaginary friends consistently brought it over 75%, and adding something simple to focus on (a kamehameha powered by The Power of Friendship™) kept it at 95–100%. Without trying anything in particular, and just talking to the therapist, it was mostly noise.
The alternative modes of action suggested by those who doubt the mini-EEG claim are randomness and detecting jaw-tightening. I didn’t pay attention to what my face was doing at the time. The relative consistency I got from it makes me doubt that it’s entirely random, but I will point out that the examples I gave were the only objects of attention that I could consistently control it with. I couldn’t really generalize the feeling to do anything more interesting with it.
This sounds exactly like me. To me the battle to do something is almost like someone screaming in your ear as you try to work, a fuzzy wave of distraction that gets louder the harder you try to concentrate. It’s exhausting, and you can wear yourself out before you even start. The only thing that ever helped me was the adrenaline buzz once the deadline got close enough, though the deadline always seemed too close to do an actually good job. The strangest part is that it can be something interesting, or even enjoyable; once it gets into that certain category, it gets resisted.
It costs calories to think, so it’s not necessarily obvious that writing a term paper uses fewer calories than taking a walk.
Perhaps more importantly, in the EEA a walk is more likely to result in a calorie surplus than sitting and thinking hard, because humans are natural hunter/gatherers. Even if walking costs more calories than thinking, it gives you the chance to encounter tasty animals and plants, as well as keep an eye on what other humans around you are doing.
See the post just below yours, or replace “write a term paper” with “take out the trash” or “open the letter that’s been sitting on the table for a week”
One thing I find strange is that while the brain uses “up to 20%” of the calories we burn, that doesn’t seem to change depending on how hard we think!
You’d think that if you spend your day thinking really hard, the brain would use more energy than if you just watch TV, but I’m told by people who claim to know that that is not how it works.
My best theory for why is that most of what the brain actually does is not conscious thought. Maybe 98% is about “making the trains run on time” in the physical body and the little part that is conscious “me” probably does use more energy when I think hard, but it’s such a small subunit that it doesn’t register on a whole body measurement.
Maybe the brain doesn’t have Intel’s “SpeedStep” technology built in, so it’s always running at maximum clock speed. When you’re not consciously thinking very hard, it just runs a spin wait loop really intensely 🙂
Maybe the brain doesn’t have Intel’s “SpeedStep” technology built in, so it’s always running at maximum clock speed.
The Sherlock Holmes Model?
Mine is digit-sum sequences often enough seeded (or continued) by words, numbers, and symbols in the environment. This is very annoying and very distracting when it pops into conscious thought, which happens more frequently when I’m stressed.
The brain spends largely the same amount of energy on neuronal inhibition as neuronal excitation.
Here’s the wrong, but intuitive, model. The “baseline” state is being at mental rest. Like a CPU, all we need to do to be idle is dial back neuronal activity. Keep the breathing circuit on, but turn off the higher level cognition functions.
The right model is understanding that maintaining any coherent mental state is keeping a chaotic system balanced on a razor’s edge. Unlike electronic components, individual neurons are wet, messy and misbehaved. If you power down a transistor, it’s not going to randomly fire. But neurons, even absent any input are just constantly generating noise all the time. Even being at mental rest, especially being at mental rest, requires a huge amount of inhibitory error correction.
When thinking hard, the marginal excitation energy is mostly cancelled out by the reduced inhibitory demands. So the energy expenditure roughly comes out in the wash. The CPU analogy doesn’t hold, because the vast majority of CPU activity is excitatory in nature. (Though as components keep shrinking, relatively more inhibitory activity is required to counteract quantum noise.)
Even being at mental rest, especially being at mental rest
Maybe that’s why meditation is quite challenging for some people. But it’s not all about meditation. Even ordinary rest is hard: people (I’m not going to overgeneralize, though) constantly try to engage in some activity. I wonder whether they fall into an if-you’re-not-asleep, you-must-be-engaged-in-something pattern?
Thanks, that really clarifies how this works!
I think another way of viewing this is future vs. present time orientation. Writing a term paper is effort now for a reward that pays off much later (getting a good grade). Both watching TV and taking a walk pay off almost instantly. This view is also supported in that the dopamine signaling gene, DRD2, strongly affects the ability to delay satisfaction.
Preferring the instant reward acts as a sort of Bayesian regularization. With perfect information, you should always prefer the bigger payoff, even if it requires deferring satisfaction. However in the face of uncertainty, near-term payouts are more likely sure things than long-term payouts. Particularly long-term payouts requiring a long-chain of Rube Goldberg-esque events.
(“Okay, first I write this paper. Then I send it by email. Then my professor receives it by email. Then my professor will read it. Then after reading it, she’ll enter a good grade in the university system. Then the university will add that grade to my transcript. Then my future employer will see that transcript and be impressed. Then that employer will offer me more money when I start work 3 years from now.” Vs. “Turn on sitcom. Wait at most 3 minutes for joke.”)
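The Rube Goldberg intuition can be made concrete with a toy calculation (illustrative numbers only, invented for this sketch, not from any study): if each link in the chain succeeds with some probability, the expected value of the final payoff decays multiplicatively with chain length.

```python
# Toy model: discount a reward by the chance that every step in the
# chain succeeds. All numbers here are made up for illustration.

def expected_value(reward, step_probabilities):
    """Multiply the reward by the probability that all steps succeed."""
    p_total = 1.0
    for p in step_probabilities:
        p_total *= p
    return reward * p_total

# Term paper: a nominally big payoff behind seven ~90%-reliable steps.
paper = expected_value(100.0, [0.9] * 7)
# Sitcom: a small payoff behind one near-certain step.
sitcom = expected_value(5.0, [0.99])

print(round(paper, 1))   # 47.8 -- over half the nominal value is gone
print(round(sitcom, 2))  # 4.95 -- almost nothing is lost
```

The nominal 100-vs-5 advantage of the paper shrinks fast, and a less reliable per-step probability (or a longer chain) shrinks it further, which is the regularization point above.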
Seems like the distinction isn’t the effort of the inputs, but the certainty of outputs. Most “hard” activities are hard because they require us to have high confidence in a complex, evolutionarily unnatural system. E.g. making comments on Internet forums is much “easier” than writing a novel, even though they’re both essentially the same activity. It’s just that Internet comments come with a much tighter reward cycle (“Write comment. Wait 10 minutes for upvotes/responses/likes/etc.”)
Thank you! This ties in really neatly with how addiction and depression go hand in hand: addiction emerges from seeking short-term, easy-to-achieve, certain rewards (dopamine hits), because those are easy to predict when you are underconfident in your predictions, over long-term, uncertain rewards that you don’t really trust you will get.
I think we have a winner here.
Another thing that I think both Scott and you were maybe missing: CONTROL. That is, feeling you are in control of your life vs. feeling helpless, powerless, not in control. Control and prediction are closely related. Control means I make this thing go beep. Every time I want it, it will go beep, every time I don’t want it, it won’t. So control is confidence in your predictions to make stuff you want happen.
Present time orientation (high time preference) was tied to not feeling in control already in the sixties. Banfield’s famous (for some, infamous) book tying time preference to socioeconomic class (The Unheavenly City Revisited) mentioned that lower-class criminals don’t really think that going to prison is a consequence of committing crimes, or that they can stay out of prison by not committing crimes. They think they are not in control: going to prison is just bad luck, not a consequence. Now it is entirely possible that in some cases they are right (when the police are very prejudiced, or plant evidence on them, whatever); the point is that when people feel life is largely luck outside their control, i.e. they cannot confidently predict that they are capable of making things go the way they want, they are likely to be present-oriented and to seek the kinds of rewards that are short-term, often addictive, and easily predicted and controlled.
I don’t know why Scott says the most typical depressed sentence is “I feel I am a burden”. For me the most typical depressed sentence is “I feel like I have lost control over my life, I am helpless, things just happen to me”.
It looks very much to me like the way out of depression and addiction is to chase, first, short-term, easily predicted, but healthier rewards (say, a good stretch instead of hitting the bottle), and gradually switch to longer- and longer-term, but still fairly easily predicted, rewards. Slowly learning to gain confidence in longer and longer time predictions?
How about stuff like getting depressed, addicted people together for a 2 hour long session where they build something, out of lego or whatever, the point is they can surely do it, but it takes 2 hours to make that prediction come true? So instead of chasing hits that are seconds or minutes away, they learn to confide in a prediction that working on something that is rewarding 2 hours later is OK?
EDIT: another insight. When you are depressed you chase short-term rewards because you can predict them; you can control them in the sense of being sure that if you do X you will feel better. Then you get addicted, realize you have a problem, but now feel even less in control when you realize you cannot really control your drinking, or phone-checking, or procrastinating with sitcoms. Welcome to even more depression.
This is why I think the way out is to make people do stuff they can control.
I used to take public transport and then now bought a car and sometimes use public transport (commuting), sometimes the car (everything else) and every time I drive the car I feel empowered – I control where I am going! While on the bus it is like they take me wherever they want to…
Thanks for the kind words and additional insights.
I think this is essentially correct. In statistical control theory the optimal step size is inversely related to the magnitude of the noise. One way to think about this is walking around in a dark room vs a well-lit room. In the latter you can take large, brisk steps confidently. In the dark room, you take baby steps to avoid slamming your shins. A walker in the dark room would “feel” much less control over his ability to get from one place to another.
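The dark-room picture can be sketched as a minimal 1-D simulation (a made-up illustration of the step-size/noise trade-off, not the actual control-theory result): a walker steers toward a target using noisy observations of where it is, and we compare big steps vs. baby steps at different noise levels.

```python
import random

def walk_to_target(step_size, noise_sd, target=10.0, steps=400, seed=0):
    """Walk toward `target` in 1-D, steering by a noisy observation of it.
    Returns the final distance from the target."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        observed = target + rng.gauss(0.0, noise_sd)  # noisy sense of where the target is
        x += step_size if observed > x else -step_size
    return abs(x - target)

# Well-lit room: accurate observations, so big brisk steps home in cleanly.
bright = walk_to_target(step_size=1.0, noise_sd=0.1)

# Dark room: the same big steps overshoot and oscillate around the target,
# while baby steps settle much closer (averaged over many runs).
dark_big = sum(walk_to_target(1.0, 5.0, seed=s) for s in range(100)) / 100
dark_small = sum(walk_to_target(0.1, 5.0, seed=s) for s in range(100)) / 100
```

Under heavy noise the small-step walker ends up closer on average, at the cost of taking much longer to get anywhere, which matches the "feels much less in control" description.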
I like this idea. It’s like the equivalent of what CBT does for anxiety. Gradually train your brain to increase its confidence in actions that result in longer-term, more circuitous rewards. Anecdotally, I know the advice of making your bed every day is popular for dealing with depression. Make your bed in the morning, then reap the rewards of lying down in a well-made bed later at night.
Perhaps a computer game that gradually spaces out the rewards more and more.
These don’t have to be thought of as entirely separate. Sometimes the reason you feel like a burden is because you can’t tie your actions to your outcomes, so you feel like other people have to take care of you and get nothing in return because you lose the belief that your actions have any influence. If you can’t believe that what you do makes any difference, how could you possibly be anything other than dead weight in your own mind?
I’m not saying this ties the two together perfectly, but it’s consistent with my own experience.
My experience with depression was dealing with a sense of total control; that I could do anything, so what was the point in doing anything?
It wasn’t a negative depression, more like… a total absence of anything. A depression of zero affect. No sadness, no happiness, no anything.
Where does it start to feel “longer”? I mean, if we go deeper, we see that pretty much everything is out of your control (and all you’ve got is an illusion). Sure, the degree of control varies (as with the watching-TV thing), but inside you do realize that life does not work as you predicted. For example, you write a term paper, send it to a professor, and they reject it or grade it much lower for whatever reason they could possibly think of. You can ask that professor about the reasons, but you already feel somewhat disappointed. And those feelings weren’t predicted: they are out of your control, as it were.
“There are things which are within our power, and there are things which are beyond our power. Within our power are opinion, aim, desire, aversion, and, in one word, whatever affairs are our own. Beyond our power are body, property, reputation, office, and, in one word, whatever are not properly our own affairs.
Now, the things within our power are by nature free, unrestricted, unhindered; but those beyond our power are weak, dependent, restricted, alien. Remember, then, that if you attribute freedom to things by nature dependent, and take what belongs to others for your own, you will be hindered, you will lament, you will be disturbed, you will find fault both with gods and men. But if you take for your own only that which is your own, and view what belongs to others just as it really is, then no one will ever compel you, no one will restrict you, you will find fault with no one, you will accuse no one, you will do nothing against your will; no one will hurt you, you will not have an enemy, nor will you suffer any harm.
Aiming therefore at such great things, remember that you must not allow yourself any inclination, however slight, towards the attainment of the others; but that you must entirely quit some of them, and for the present postpone the rest. But if you would have these, and possess power and wealth likewise, you may miss the latter in seeking the former; and you will certainly fail of that by which alone happiness and freedom are procured.
Seek at once, therefore, to be able to say to every unpleasing semblance, “You are but a semblance and by no means the real thing.” And then examine it by those rules which you have; and first and chiefly, by this: whether it concerns the things which are within our own power, or those which are not; and if it concerns anything beyond our power, be prepared to say that it is nothing to you.”
Thus begins the Enchiridion. CBT, mentioned above, is closely related to this.
There is very little over which we have 100% control, and it is folly to aim for things outside of it. This does not mean we have to stare at a wall all day, it means we have to internalize our goals. An archer (that doesn’t want to be miserable) aims to be a good archer, not to hit the target, since that is only partially under their control (there might be a sudden gust of wind, the target may get up and run). Hitting the target is to be chosen, but not to be desired.
This resonates personally for me right now. Right now in my life I have a long-term project, which may work or it may not, and thus may be financially and socially rewarding, or it may not. The horizon for knowing whether it will or won’t work is years, and fraught with things I can’t anticipate. So when I go home, I can work on my project, or I can play a video game, feedback from which I will see practically immediately. Even if I do poorly in the game, there is no uncertainty involved.
Starting off with lampreys made me immediately think of a surfeit of lampreys, the reason given for the death of Henry I.
Plainly my dopamine system is easily distractable and likes wandering off on historical tangents instead of buckling down to reading about real science 🙂
The wikipedia page on lampreys has an excellent historical tangent, with this quote from Seneca:
Vedius here is Publius Vedius Pollio, a friend of the Emperor Augustus, possibly this explains why the slave was in a position to bring the matter directly to the Emperor himself. Apparently another historian remarked that Vedius “could not punish his servant for what Augustus also had done”.
Here is another crazy idea. It’s kinda obvious that some actions (like writing a term paper) require numerous further judgments once chosen, and other actions (like watching TV) require fewer judgments down the road. So with each action bid, the cerebral cortex may submit not only information about the expected utility, environment, and so on, but also about the expected intensity of further queries to the dopamine subsystem.
Then the logical way to organize it is: when the system is low on dopamine, it is biased toward selecting actions that require very little further intervention from the basal ganglia, and when the system is high on dopamine, it chooses the other way.
This hypothesis would explain the relation between willpower and ability to make decisions, and also helps explain why there is a certain bias in decisions made under Adderall. The interesting thing to do would be to compare the mechanism of decision making in important situations versus game-like situations (such as in role-playing games).
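The proposal above can be sketched as a toy selector (entirely made up to illustrate the commenter's idea, not an actual neuroscience model): each bid carries an estimate of how many follow-up decisions the action would demand, and a low "dopamine" parameter penalizes follow-up-heavy bids.

```python
# Toy action selector, invented to illustrate the hypothesis above.
# bids: (name, bid_strength, followup_decisions); all numbers are made up.

def select_action(bids, dopamine):
    """dopamine: 0.0 (depleted) .. 1.0 (high)."""
    def score(action):
        name, strength, followups = action
        # Penalize follow-up-heavy actions more when dopamine is low.
        penalty = (1.0 - dopamine) * followups
        return strength - penalty
    return max(bids, key=score)[0]

bids = [
    ("write term paper", 8.0, 12),  # strong bid, many later decisions
    ("watch TV",         5.0,  1),  # weaker bid, almost no follow-up
]

print(select_action(bids, dopamine=0.9))  # write term paper
print(select_action(bids, dopamine=0.2))  # watch TV
```

With the same bids, only the dopamine parameter changes which action wins, which is the bias the hypothesis predicts.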
But perhaps simpler is to note that any chain of decisions will have compounded uncertainties – action 10 in a series is more uncertain than it would be as a solitary action. So if you have a reasonably well-calibrated uncertainty measurement system, which is likely what we have here, then discounting for number of judgements gets built in.
On the other hand, the special dopamine subsystem that regulates uncertainty level is isolated from the cortex – both physically and chemically. My understanding is that the cortex presents information about possible actions and then the decision is made in the basal ganglia. The uncertainty level depends on dopamine level and therefore is mostly confined to the basal ganglia. The cortex may access this information, construct estimates such as “this action requires N consecutive decisions, and each decision was x uncertain last time it was made, so the total uncertainty is Y = x^N”, and provide Y to the basal ganglia, but then Y quickly becomes irrelevant as soon as the dopamine level fluctuates, and predicting these fluctuations is a task far beyond the mental abilities of most humans, let alone mammals.
So, I think with the given roles of the cortex as an information gatherer/planner/reporter and the basal ganglia as a decision maker, the cortex is providing N, and not Y. In other words, there is no “reasonably well-calibrated uncertainty measurement system” in a typical cortex.
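The N-versus-Y distinction can be shown with a one-liner (toy numbers, purely illustrative): if the basal ganglia hold the per-step confidence x (tracking the current dopamine state) and the cortex supplies only the chain length N, the total confidence x^N can be recomputed whenever x moves, whereas a precomputed Y goes stale immediately.

```python
# Toy illustration: the same 10-step plan evaluated under two different
# per-step confidence levels (numbers invented for this sketch).

def total_confidence(per_step_confidence, n_steps):
    return per_step_confidence ** n_steps

print(total_confidence(0.95, 10))  # ~0.60 -- plan still looks viable
print(total_confidence(0.80, 10))  # ~0.11 -- same plan now looks hopeless
```

A modest drop in per-step confidence collapses the long chain's total confidence, which is why a cortex-supplied Y would be stale the moment dopamine shifts, while a supplied N lets the recomputation happen where x lives.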
Sorry, I just cannot wrap my head around dopamine-as-confidence-in-prediction. Please answer these questions:
1) Why does dopamine generally feel good/rewarding? Shouldn’t you get your rewards after you finish an action? Dopamine’s feel-good effect is generally explained as “expectation, excitement looking forward to a good thing”. While the real reward, “you got it, now relax”, is serotonin…
2) Maybe dopamine in and of itself does not feel rewarding but fscking with it, say, cocaine, does. Is there a difference? How can one explain a coke high in the confidence context?
3) How can one explain alcohol addiction in the confidence context? Alcohol is simply a selective downer: it shuts off the parts of the brain responsible for worries or inhibitions first, so the rest can feel relatively better. Yet getting addicted to it is a dopamine thing…
4) How can one explain non-chemical stimulant addiction, gambling, porn, internet, in the confidence context?
5) Finally, any insight from this for people struggling with addictions? From the normal, usual, dopamine-feels-good viewpoint, you gotta do other stuff that feels good and has a rewarding progress/feedback cycle, say, sports. From a confidence angle? Do stuff you are really sure will happen the way you expect, or what?
6) Bonus question: people say the book The Power of Habit is pretty good at breaking the lighter kinds of addiction, say the early stages, when they are still just bad habits and don’t yet require the more radical interventions. What would the habit loop look like from a confidence perspective? The summary of the book is that you have the stimulus/craving -> action -> reward cycle, and you swap out the action. So you have the stimulus/craving to check your phone or drink wine; you replace it with some other action that feels good (say, go for a walk, stretch, or just tell yourself you are beautiful), and of course it gives you a reward. How does this look from a confidence viewpoint?
I thought about my questions and formed some tentative answers:
1) Dopamine is “want, not like”. Craving but also the expectation buzz. I think stripping is a good parallel. Seeing a good looking person of your preferred sex naked is pretty cool at least if you are not asexual, but the slow stripping down of the clothes rather works with the expectation, the promise of future nakedness rather than the nakedness itself.
2) The coke high and suchlike roughly feel like “I won ten million dollars and gonna get paid tomorrow, so excited!”
3) Alcoholism is very strongly in the want, not like category. Craving, but not expectation – i.e. the bad feeling kind of dopamine. It seems dopamine highs can feel good or bad. Usually good but in such a craving case bad. Looking forward to get something cool vs. damn I MUST get this thing RIGHT NOW. Could possibly come from reinforcing the prediction that drinking feels good.
4) It is seriously weird that people say intermittent rewards are more addictive than constant rewards. This really does not make sense. This is the biggest WTF here. Unless we say that with constant rewards you stop putting effort into prediction, i.e. you get the most dopamine NOT when you are 100% confident but when you hit the sweet spot between winning often enough for high confidence yet not so often that you stop paying attention and treat it as automatic, like walking. THIS SOUNDS IMPORTANT. In other words, you need to hit a sweet spot where things are challenging yet not too challenging – same thing as in Csikszentmihalyi’s flow?
5) It seems obvious to do things we can reliably predict and thus learn to trust our predictions, but if it is too easy, if it is too 100% then maybe we stop paying attention. So it seems we need to do slightly challenging things? If you want to get addicted to running and you know you can run a 5K in X minutes, you should generate a random target every time between 1.05X and 0.9X so usually you can do it but not always thus it is still challenging?
6) Sounds sort of unrelated.
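The randomized-target idea in (5) is trivially codable; here is a throwaway sketch (the 0.9x–1.05x band and the 5K framing are just the commenter's suggestion, not a training recommendation).

```python
import random

def session_target(usual_5k_minutes, rng=random):
    """Draw a per-run 5K time target between 0.9x and 1.05x of your usual
    time, so most runs are winnable but none are automatic."""
    return rng.uniform(0.90, 1.05) * usual_5k_minutes
```

With a usual time of 25 minutes, targets land between 22.5 and 26.25 minutes: most days you beat the target, some days you don't, which is the intermittent-reward sweet spot from (4).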
Your answers are generally pretty good. For what it’s worth, I am not an addiction scientist, but I have friends who are, and if I understand what they say correctly, the idea is that you are not addicted to being high, rather you are addicted to getting high. Addiction is (largely) related to problems in subsystems associated with the pursuit of goals rather than the pleasure that results from their attainment. I suspect your point in #4 about automaticity and unpredictableness of reward is important here. You don’t need to pursue reward that falls like manna from the sky.
This (I think) explains addictions to things other than drugs. We are very good at representing even highly abstract goals, and so they can become the focus of a twisted goal-pursuit system. Of course there will be differences between those types of addictions and ones involving drugs that directly target pleasure systems (the “being high” part), or drugs that change the normal functioning of important systems to require their presence, so that when they are no longer present there are massive problems (the classic withdrawal symptoms). But the goal-pursuit issue is still pretty standard.
How does the lamprey decide what to do? Within the lamprey basal ganglia lies a key structure called the striatum, which is the portion of the basal ganglia that receives most of the incoming signals from other parts of the brain. The striatum receives “bids” from other brain regions, each of which represents a specific action. A little piece of the lamprey’s brain is whispering “mate” to the striatum, while another piece is shouting “flee the predator” and so on… Each little region of the pallium is responsible for a particular behavior, such as tracking prey, suctioning onto a rock, or fleeing predators…
Do we actually know all this? Are we able to isolate and decode these “bids” on a material level? Can we point a region of the pallium and say which exact behavior it’s responsible for?
In short, how much of this is actually known and how much a just-so story?
It’s a bit tangential to my own field, but I’m close enough to know that lampreys are a fairly common model organism in neuroscience (albeit nowhere near as common as mice/rats/worms/flies), particularly for having a very “simple” vertebrate brain. The above account may be simplified/idealized, but AFAIK there’s a lot of solid empirical work on lamprey nervous systems which it probably pulls from. But that’s about all I really know, as it’s outside my area.
…and any explanation needs to account for one of the most baffling things about the human experience, which is the staggering degree of variation from one human to another in what gets more money to make the bids in the first place.
Case in point: it’s 6 A.M. for yours truly, and I have this truly beastly day ahead of me, so I’m getting a final few moments of procrastination in… my thinking is basically “oh, before I go do all that [high effort] stuff like [meeting people] and [reading stuff that analyzes stuff at a very high level], I’ll do something fun and low effort, like visit Slate Star Codex, a blog that analyzes things at an EXTRAORDINARILY high level, then see what all the other people who read the article thought.”
Related: some stuff “charges you up” in that it builds up your willpower. Other stuff “runs you down” in that you ‘expend’ your willpower to do it. The problem is it’s so variable from human to human. I’m a lazy slob. “Running” requires immense willpower. I have to do like, a whole day of fun stuff to charge myself up for a 15 minute run. I’m creative. “Writing” (fiction, even very complicated fiction) requires no willpower at all. I do it INSTEAD of doing other things. I do it BEFORE doing other things, as a way to “charge myself up” to go do unpleasant stuff. Many people I know are totally backwards. “I need to go on a run to clear my head before I can get to work on this term paper” sounds objectively false to me, like, no… that’s not an effective way to do that. But so many of my friends say this, so it must be true for them. Certainly not for me.
I don’t find this entirely convincing. Evo psych doesn’t work nearly as well when you phrase it as “TV vs term paper” and predicts generally wrong when you do “TV vs walk”. And simple reinforcement doesn’t work all that well for other comparisons, either. Sure, reinforcement makes me dislike writing term papers, but it should make me like writing Naval Gazing over watching TV. And yet if it’s been a long and exhausting day, I generally prefer TV.
Naval Gazing requires work and the payoff is extremely delayed (and abstract at best). Watching TV is immediately rewarding. Not sure what the conflict is.
The down payment comes a lot sooner than the posting. I enjoy the research process, and particularly the moments when I understand something better because I’m trying to explain it. I’m pointing out that the simple conditioning explanation at the very least needs more terms added than just concrete vs abstract payoff.
With respect, I think you’re grossly over-privileging your conscious explanations of your behaviour here. TV is immediately rewarding for 0 effort. We are a social species, we love stories, sitting in front of a box that flashes them at you is a great experience (from the perspective of a pure immediate reward-seeker). Writing and thinking is (a) work, and (b) delayed reward, even if it does come sooner than one might think. At the very best, reward comes after seconds for TV and minutes (but more likely hours at the earliest) for research and blogging.
I agree the TV vs walk comparison doesn’t work particularly well.
I recently re-read the review rather than the book… and really wanted to know the source for the graph showing correlation of height/BMI/cognitive skills etc. Particularly because of not just the strong correlation with monozygotic twins but the surprising apparent zero correlation between adoptive siblings’ weights (which I take to mean one adopted kid and one genetic kid, or two adopted kids, but either way with the same parents).
I always assumed I was overweight at least partially because of the ‘culture’ of my overweight upbringing, not just the (obviously linked) genetics. Interesting if it really is basically genes.
Scott or anyone else aware of the source?
This seems related to the phenomenon whereby it is suddenly much easier to concentrate on the term paper the night before it is due than it was in the weeks you knew you ought to be working on it. And this isn’t just all-nighters: if it’s a project that actually can’t be done in a day, my brain will gain the ability to focus on it a week, or even months, in advance.
I would say my ability to concentrate is fueled by fear of failure, but interestingly I experience being able to concentrate as more pleasant than the reverse. Maybe my dopamine systems’ default is to aim at low-hanging, if long-term unfulfilling, reward fruit like donuts and Reddit; but when enough urgency arises to aim it at a bigger goal, it makes my executive decision mind, which is usually only feebly able to direct and marshal my attention, happier.
I am the same way. The way I explained it to my parents was “left-queueing vs right-queueing” – if you imagine task planning as a timeline, I tend to queue up tasks towards the “right” of the interval – ie. right up against the point where they come due. In comparison, my parents queue to the left – they do tasks as soon as they become viable. I don’t know why, but from conversations it seems they perceive outstanding tasks as unpleasant.
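For fun, the left-vs-right queueing idea can be written down as a toy scheduling rule (the task names and numbers here are my own, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float  # hours of work required
    due: float       # deadline, in hours from now

def left_queue_start(task: Task, now: float = 0.0) -> float:
    """Left-queueing: start as soon as the task becomes viable."""
    return now

def right_queue_start(task: Task) -> float:
    """Right-queueing: start at the last moment that still meets the deadline."""
    return task.due - task.duration

paper = Task("term paper", duration=8.0, due=72.0)
print(left_queue_start(paper))   # 0.0
print(right_queue_start(paper))  # 64.0
```

Same task, same deadline; the two styles differ only in where on the timeline the work gets placed.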
Can anyone explain to me the difference between prediction and expectation (in this context)? Is there one?
If you still haven’t read The Mind Illuminated, I think you would find its section on the “colony of subminds” model of consciousness to have a lot of applicability here.
Culadasa writes about how your attention is constantly being tugged at by inputs from a plethora of individual subminds, each of which has its own suggestions for what needs to be injected into consciousness. These can be thoughts, memories, or even motor impulses. Five seconds of meditation confirms this sense of an endless series of thoughts “arising” and the need to continuously keep your hand on the tiller of attention to avoid letting the subminds sweep it away. Half an hour of meditation confirms that motor impulses such as “God how much longer do I have to sit here please let me look at the clock” can be overwhelmingly strong. That’s one way in which meditation is useful; you hone the skill of controlling where your attention rests, rather than letting every “bid” jerk it onto whatever submind at that moment is yelling loudest.
To editorialize, I would emphasize that doing even a moderate amount of meditation has really underlined the extent to which we don’t have willpower at all. We only ever take actions that correspond to the submind that won the bidding war for our motor neurons.
If you feel like you’re “expending willpower” to take an action that you would prefer not to take, I think what’s really happening is that the subminds are fighting it out more than usual. Your experience of “inner struggle”, of your hand hovering over the plate of chocolates and neither picking one up nor moving back to your side, is the real-time battle between different subminds for control of your motor neurons. (None of them is “you”, by the way.)
The “eat chocolate” submind puts in a very strong bid but is countered by a host of others that say “you told Fred that you would work out and not eat chocolate, don’t let him down” and/or “if you don’t start working out now you’re really going to be late” and/or “the eat-chocolate action that is currently winning the bidding war is one that we have previously all agreed you shouldn’t actually do”. All these smaller bids align against the bid of the “eat chocolate” impulse and support the bid of the submind that is injecting the “go work out” impulse. You lose the struggle if these bids don’t collectively exceed the eat-chocolate bid.
And of course, that’s why you only ever observe yourself to successfully work out rather than eat chocolate when you’ve actually put in the prior groundwork of establishing intentions to not let Fred down, to not be late, or to specifically not eat chocolate. It doesn’t happen spontaneously, unless you’ve made a habit of working out, which is the kind of thing that gives the work-out impulse a character of being a “default action”, or in your predictive model, an action that has a very high likelihood of being a good choice relative to whatever other choices seem salient.
There is no “you” that is “choosing” between bids. At best, you’ll have a discriminatory submind inject its meta-bid to prioritize one existing bid over another. But that discriminating bid is itself subject to the rules of the bidding war, and you can still find yourself eating the chocolate even if the discriminatory, rational submind made a very good argument that you shouldn’t.
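A toy sketch of the bidding-war model described above (all bid values are invented for illustration; real “bids” would be something like net neural signal strengths):

```python
def select_action(bids: dict) -> str:
    """Winner-take-all: the basal-ganglia 'bouncer' releases its
    inhibition only for the single strongest bid; every other
    behavior stays suppressed."""
    return max(bids, key=bids.get)

bids = {"eat chocolate": 0.9, "go work out": 0.4}

# Prior commitments act as additional smaller bids supporting one
# option; they shift the totals rather than "choosing" directly.
bids["go work out"] += 0.3   # "you told Fred you'd work out"
bids["go work out"] += 0.25  # "you're going to be late"

print(select_action(bids))  # "go work out" (0.95 beats 0.90)
```

Without the prior groundwork (the two extra bids), “eat chocolate” wins at 0.9 to 0.4; there is no separate “you” anywhere in the loop.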
> That’s one way in which meditation is useful; you hone the skill of controlling where your attention rests, rather than letting every “bid” jerk it onto whatever submind at that moment is yelling loudest.
But what is “you” in this context? Wouldn’t it just be the emergent sum of submodules, so that you’ve just created a regress (like envisioning a homunculus in our mind)?
I don’t mean to dismiss the usefulness of meditation at all. But superficially I might think based on your description that you are simply making one submodule (the one who likes to meditate, think slowly and/or not at all, or something) stronger at the expense of others.
First of all, you definitely are making the submodule(s) that want to meditate stronger at the expense of the others. But that’s not really where the benefits are coming from. That’s just the aspect that allows you to actually do something as boring as meditate day after day.
Rather than handwaving something about where the nonexistent “you” in my statement really resides, I think it’s more productive to distinguish “learning a skill” as a separate mental activity that is relatively independent from “self”. Just as practicing your tennis swing over and over trains a suite of separate interoperating subskills, training your mind to remain consistently focused on very boring sensations actually trains a set of subskills which are generally useful outside of the meditation context.
For example, you train the skill of being metacognitively aware of where your attention actually is focused. You train the skill of being aware of the quality of your attention – tight or broad, detailed or vague, scattered or singleminded. You train the skill of noticing when your mind has wandered, and in fact of being aware of what your mind is doing at a more bird’s-eye level, so that you can see when it’s about to wander and stop it in advance. You train the skill of dispassionately observing the content that the submodules are “bidding” with and remaining detached and objective, no matter what the content is. Even if that content is physical discomfort.
All of these skills are useful in daily life. When you really introspect on it, “skills” are like affordances. They’re either present or they aren’t.
My guess is that low dopamine threshold activities require less overall executive focus. Walking and watching TV let you relax and zone out, requiring little focused concentration. Writing a term paper requires intense focus, on and off, across many hours or days. I would also guess that there’s a reward function involved, and perhaps some comparison of reward vs focus level. Writing term papers is initially high focus, low reward, but with practice it might become “medium focus” as you learn the technique and “high reward” if you come to like writing papers. This might eventually outcompete low focus, medium reward activities like walking. Reward levels probably change during task execution, as does ability to sustain focus, so perhaps after five hours of working on a term paper, a walk starts to sound much more appealing.
Armchair Evo Psych explanation to follow:
How much should you do is a question with a non-obvious answer. If you do nothing you will die, of starvation, predation, dehydration; one of the -ations is going to do you in well before you reproduce. If you do too much you will also die, of exhaustion or distraction. Even a simple model where you work until you are very tired and then sleep will kill you eventually; after all, emergencies and surprises happen at night as well, and even if only 1/1000 nights actually carries a life-or-death risk, you will be dead 3-4 times over before you hit puberty if you can’t react to them well. I think this is an explanation for why you can feel “too tired to sleep”: when you get to that point you are really vulnerable, and when you are vulnerable your body wants to be hypervigilant. Eventually fatigue wins, but only eventually.
The more complex the environment and the more complex the tasks, the more difficult it is to parse what you should do now. “Hungry, find something to eat” isn’t enough for an omnivore. You can be dumb as hell if you only eat eucalyptus leaves, because the routine “hungry, find food” just means find one distinct type of tree and eat its leaves. For humans, “hungry, find food” means something like “hungry, should I eat my store of saved food, go out and pick berries, dig for roots, or organize a hunting party”, and the best answer is a combination of costs, likelihood of success, and future discounting. For our middle-of-the-food-web ancestors this is even more complex. Hunting risk isn’t just about the energy cost of trying to hunt; it exposes you to mortal danger. While you are focusing on the antelope, a lion could be focusing on you, and many group hunting strategies mean fanning out and sacrificing the strength in numbers you once had. There is also opportunity cost for high-focus activities: if you focus on counting basketballs you can easily miss a gorilla. If you focus on hunting you can easily walk past a patch of ripe berries, or run past a frozen rabbit who would make an easier meal. I’ve heard (citation needed) that bears won’t try to fish if there are ripe berries in the area, even for species which have fish as a large component of their diet.
The solution is to build two systems, one based on effort/reward and the other on satiation/desire, and hit their intersecting point. Desire and satiation prevent you from doing nothing, or too much. You go for a hunt, kill an antelope and eat it, and satiation stops you from going hunting again almost immediately afterwards, despite some Bayesian updates that conditions are conducive to hunting. The effort/reward system starts with “what is easy with a quick payoff” to prevent you from jumping to high-effort/high-cost actions, because the cost is often as much in opportunity cost as it is in energy cost. If you go fishing when there are ripe berries to be eaten, then birds or other bears eat all the berries. Even if your fishing was successful, and more successful per calorie in vs. calorie out than berry eating, you have sacrificed the best-case scenario of berry eating followed by successful fishing.
I think this explains procrastination pretty well: lots of people procrastinate big projects by doing trivial things with tiny payoffs, until eventually the pressure gets large enough to overwhelm the low-payoff activities. It’s also why Bean would rather watch TV (low cost, low reward) than work on a Naval Gazing post (high cost/high reward), and why keeping sweet food (low cost/low reward) out of the house, making it moderate cost/low reward to obtain, is a better dieting strategy than “don’t eat junk food”.
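The two-system model above, as a toy formula (the functional form and every number are my own, purely illustrative):

```python
def action_drive(reward: float, effort: float, desire: float) -> float:
    """The effort/reward system weighs payoff against cost; the
    satiation/desire system scales the result up or down."""
    return (reward / effort) * desire

# Hungry: hunting (high effort, high reward) is worth it.
print(action_drive(reward=50.0, effort=10.0, desire=1.0))  # 5.0
# Just ate the antelope: satiation suppresses the very same bid.
print(action_drive(reward=50.0, effort=10.0, desire=0.1))  # 0.5
```

The same hunt scores differently depending on internal state, which is the “intersecting point” the comment describes.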
Greetings all — Recently a colleague and I have published an in-depth interview with Dr. Karl Friston (UCL), the brain behind the Free Energy Principle. It is in the most recent ALIUS Bulletin:
In the interview, Friston addresses the “Dark Room” paradox, Consciousness in humans/ants, the evolutionary neurophysiology of schizophrenia, the relationship between Free Energy & Bayesian Brain et al. — and more.
Also there is a supplemental piece that Friston wrote called, “Am I Autistic?”, which may be of interest to many of you.
Also as to this blog’s final point about predictive coding and depression, check out “The Depressed Brain: An Evolutionary Systems Theory” (2017):
May the Dopamine be with you all ~
From the second link:
I’ve never seen a more succinct explanation for the negative attractor state induced by social media addiction.
I am blissfully(?) unaware of how social media works (I have like 20 people on my FB, almost never posting), but doesn’t it sound more like that theory that depression comes from seeing yourself as being on the bottom of the social totem pole? Of course the existence of highly successful depressed people (Churchill) suggests that it has almost nothing to do with real social status. But say when I was a boy my parents scolded me when I was lying – bad boy, be ashamed of yourself, etc. – a very strong “now you have temporarily very low status” message. And I internalized it. And if I tell a lie as an adult it triggers feeling very low status, basically feeling like a piece of shit, even though in reality I may be successful. Does this make any sort of sense?
Any recommendation for a comprehensible explanation of the free energy thing? I’ve read a bunch of Friston’s stuff and I’ve never really felt like I had a good intuitive sense of what he was talking about. I’d happily read a whole book on this if it existed and was good.
Speaking as someone in a related field, everyone finds Friston hard to understand. All my coworkers who are much more capable of dealing with the kind of concepts he throws around (I don’t have a background in either biology or computation) readily admit that (a) he’s clearly really smart and seems to be saying something important and (b) they don’t really get it.
That is a common sentiment.
See my response to Scott on this thread for some more resources.
Thanks, will look those over.
I think that a good place to start is actually with the interview that I linked to in my first comment. In our interview, we ask him to specify what exactly the Free Energy Principle is, explain how it is related to other theories (e.g. Bayesian Brain), and we push him on how the framework can be useful. There are some explanations in the interview that are not explained or published elsewhere. Also he states the Free Energy Principle in plain English (!).
After reading the interview, for more technical neuroscience-centric perspectives on the Free Energy Principle, see:
For the most recent zoomed-out review of how Free Energy Principle applies to all scales of biological systems, see:
Unfortunately, I do not believe that there is a single reference book of Free Energy Principle.
Let me know if I can provide other resources or writeups here.
Maybe you have a non-paywalled version of the second paper?
>Boredom is simply the product of explorative behavior; emptying a world of its epistemic value: a barren world in which all epistemic affordance has been exhausted through information seeking, free energy minimizing action.
Would you or someone else care to rephrase it in simpler terms? I find that highly intelligent people on SSC/LW tend to be never bored, rather having a “too much I want to do, not enough time, I want to be more productive” attitude, while many other just-as-intelligent people are somehow quite often bored; the “spleen” of rather well-educated, intellectual people was a fairly major element in early 20th-century European literature, and even in the mid-century existentialists. You can be someone like Sartre and feel very bored.
Is boredom running out of things to explore? Why do some intelligent people run out of them and others not? Or rather, is it getting to a state where the uncertainties you see are irreducible – the world looks fairly incomprehensible?
I am always bored. Always.
At a certain point you just have to deal with it.
But boredom, for me, isn’t “Everything is either understood or irreducible”. It is the metaconstruct, the realization that nothing that isn’t impossible will ever really challenge me. The closest I get to “challenge” is “not knowing how to get around an obstacle”, which itself follows a predictable pattern, which is to say, I just wait a while and the answer comes to me.
For example, the P=NP problem. P=NP, but the solution for every problem is unique and may require domain-specific definition. The problem can be said to originate in the way matrices are designed; P=NP trivially for any matrix whose properties enable a conversion to a canonical form. This is so because there is a polynomial-time solution to NP problems with a canonical matrix representation, which has already been written. The NP time component arose in comparing the matrices for identicalness, as no general-purpose canonical form can be written.
By canonical form, I mean a single representation of every possible permutation of identical matrix. So, to consider the network case, [0,1,0;1,0,1;0,1,0] is identical to [0,0,1;0,0,1;1,1,0], because both represent the same network; one node connecting two identical nodes. The problem of canonical representation is a long-standing problem in matrices – but is eminently solvable in most real-world cases. (Just assign arbitrary numeric IDs to the nodes, then sort them with whatever valid transformation schemes are available.)
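The canonicalization idea can be made concrete for small adjacency matrices by brute force, minimizing over all relabelings. Note this takes factorial time, which is exactly why an efficient general-purpose canonical form is the hard part of the problem:

```python
from itertools import permutations

def canonical_form(adj):
    """Lexicographically smallest relabeling of an adjacency matrix.
    Brute force over all node permutations: factorial time, so only
    viable for small graphs."""
    n = len(adj)
    return min(
        tuple(adj[p[i]][p[j]] for i in range(n) for j in range(n))
        for p in permutations(range(n))
    )

# The comment's example: the same 3-node path, labeled two ways.
a = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
b = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]
print(canonical_form(a) == canonical_form(b))  # True
```

Two matrices represent the same network exactly when their canonical forms match; a non-isomorphic graph (say, a triangle) gets a different canonical form.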
Now, obviously this works fine for, for example, the traveling salesman problem. It is less obvious how it applies to more abstract problems, and only gives a glimmer of a possibility of how to go about solving intermediary case problems, such as when you start with matrices without a canonicalization scheme available.
That is where the domain-specific solutions arise; you have to develop some mechanism of canonicalizing your data. There isn’t a general-purpose approach to this problem – every class of problems has to be dealt with individually, and the solutions will be dependent on their specific characteristics. Which rules out a general purpose approach.
So, P=NP, at least for non-abstract cases, but it doesn’t help us, because we want the abstracted solution.
And that is pretty much where I lose interest, because the problem that keeps arising is the mathematical domain, which suggests the P=NP problem is probably related to incompleteness. I suspect P=NP for any specific case in some set of axioms, but that there is no set of axioms for which P=NP for all possible cases (which is to say, there is no universal solver).
A totally ungrounded assumption: maybe someone has a greater curiosity capacity than others, as it were.
Full text: https://www.researchgate.net/publication/313265601_The_Depressed_Brain_An_Evolutionary_Systems_Theory
We cannot really eliminate loss or rejection, but we could make social behavior more predictable. We could have a formal set of etiquette rules for how to behave. Start with this sexual approach thing, as it became a big discussion recently about where normal approach ends and harassment begins. Set formal rules for how men should ask a woman out and how she should accept or politely reject. This is not a new idea; just sometime during the 20th century people figured it was too restrictive and you should just be yourself. Apparently not. Codified behavior is predictable behavior. When you have a limited number of acceptable things to say to a person you don’t know well, and they have a limited number of acceptable things to answer, there is little uncertainty. And people we do know well we can predict better, so we can relax the rules.
Hey, start here. Start comments with “Dear X” and end them with “regards, Y”. This encapsulates the message predictably in a “you are okay” envelope. Have a praise – criticism – praise sandwich, which I think used to be a custom back when people wrote actual letters; Toastmasters also told me to sandwich criticism in praise, so it may as well become a formal etiquette rule. Try it. Make a thread or two to practice predictable etiquette. Not necessarily the nicest possible etiquette, but predictable etiquette. That means, probably, a fairly easy one.
Known better under a scatological name. But consider what you’re reinforcing with such a delivery method. You’re teaching people that praise is merely a prelude to criticism. Sort of like saying “Nice doggy” then whacking the dog with a stick. Any wonder the dog cringes the next time someone says “nice doggy”?
Make sure you stratify those classes though, can’t have people of different backgrounds mixing with these rules.
I think a mix of loss aversion and time preference makes more sense than priors. Low confidence means you over-value things that will happen very soon, because you don’t expect to be able to predict things happening in the more distant future – because, again, you have low confidence. Therefore, you prefer to go for a walk or watch TV because there’s no danger of immediate bad feelings associated with it. When your confidence is high, however, your intellectual understanding that not writing the term paper will cause you to fail the course (an undesirable outcome) becomes an expectation rather than a distant abstraction, and you can choose to write it.
This also predicts panicked all-nighters the day before the paper is due. Once the due date crosses your time horizon, the term paper becomes predictable and you’re suddenly motivated to do it.
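A toy version of the time-horizon model: hyperbolic discounting with a confidence-dependent steepness (the functional form and parameters are my own illustration, not anything from the post):

```python
def present_value(outcome: float, delay: float, confidence: float) -> float:
    """Hyperbolic discounting where low predictive confidence steepens
    the discount, so the future barely registers."""
    k = (1.0 - confidence) * 2.0  # steeper curve at low confidence
    return outcome / (1.0 + k * delay)

# Failing the course, felt from 30 days out vs. the night before:
print(present_value(100.0, delay=30.0, confidence=0.2))  # ~2: a distant abstraction
print(present_value(100.0, delay=1.0, confidence=0.2))   # ~38: suddenly looms large
```

The same outcome jumps in felt magnitude once the deadline crosses the horizon, which is the panicked all-nighter in miniature.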
(For cross-persona consistency, I’m actually UltraRedSpectrum.)
Evolution confers camouflage on species that are subject to high predation in the natural environment, and consequently associated predators may evolve high visual or auditory sensitivity to motion as a means of identifying prey. A bias toward remaining generally still (motionless most of the time) would therefore enhance survival prospects in the target prey species. And since some movement will be necessary in normal life functioning, a high threshold for override in the striatum ensures that this discretionary activity will be rare rather than dominant. A variation on this theme is the “deer in the headlights” autonomic response.
How is choosing to read SSC rather than the countless other alternatives on the web an evolutionarily-primitive and heavily-reinforced action? I get the heavily-reinforced, insofar as reading SSC stimulates my thinking, but evolutionarily-primitive?
It’s a dopamine thing. Seeking/gaining new information is also about prediction (you could think of it in evo-psycho terms as well): you know for sure you’ll get something new. You basically feed your curiosity (besides, your curiosity is fed right now/in the very near future). Even if it is fed with somewhat complex info, the mechanics behind this is relatively simple. In other words the means may be sophisticated but the ends are nevertheless evolutionary-primitive. It also depends on a person (their nurture/nature): some are prone to the watching-TV thing, some are SSC readers. Though sometimes even the latter may prefer browsing FB/watching series.
Four pieces of recommended reading, tangentially related:
And recommended listening:
This is brilliant. A couple more points in favor of it: habit and hypnosis.
1) The fact that doing something often makes it likely we will do it again is so basic to human nature that it’s never struck me before how interesting this is. The truth of that statement requires that there must be some mechanism in the brain in which we predict our actions, and then that prediction makes the predicted action more likely to be taken. This seems to be exactly that mechanism.
2) It seems plausible to me that hypnosis works the same way. We have some sort of wiring in place to deeply trust certain individuals, so much so that hearing their opinions immediately changes our beliefs. And once they say we are doing or will do something, this becomes our default action–the one we have strong beliefs we’ll do, and doing the default action is incredibly relaxing. As we continue doing what they say we’re about to do, our trust in them strengthens because they appear to be accurate. Maybe this is why hypnotists start with breathing. The friendly man said I would breathe deeply. I AM breathing deeply! Wow he knows what he’s talking about. He says my eyes will close when he gets to 1. He’s probably right. OH MY GOD he was totally right!
I forgot if these have been addressed already but how do susceptibility to addiction and hypnosis covary with all this?
I always assumed that it’s because the brain, as a sort-of neural network, tends to produce the same outputs for the same inputs+state (unless the weights have been updated sufficiently to change that). Of course, ANN isn’t in any way a good model for a real brain, but I think this might be an alternative plausible hypothesis.
With that explanation, I wouldn’t predict that doing an action repeatedly would build the habit. It would explain the correlation between past actions and future actions in the same circumstance, but it wouldn’t explain how a causal intervention would change the output of the neural net.
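To make the distinction concrete: a causal habit-formation story needs a weight update, not just a fixed input-output mapping. A minimal reinforcement-style sketch (the learning rule and numbers are illustrative):

```python
def update_weight(w: float, reward: float, lr: float = 0.1) -> float:
    """One reinforcement step: performing the action and receiving a
    reward nudges that action's future bid strength upward."""
    return w + lr * reward

w = 0.2                      # initial bid strength of the habit
for _ in range(10):          # repeat the action ten times...
    w = update_weight(w, reward=1.0)
print(round(w, 2))           # 1.2: the intervention changed the weights
```

A net with frozen weights reproduces past behavior in the same circumstances, but only an update rule like this explains why doing something makes you more likely to do it again.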
The reinforcement explanation seems much stronger than the evo-psych one. In my experience, the low-effort behaviors I default to when depressed or distracted are highly artificial, e.g. browsing the internet (or using a spoon, in Jim’s case). Meanwhile, if evolution really made people highly likely to pick walking as a default pattern, we would probably not have an obesity epidemic. If you choose to go for a walk instead of writing a paper, it seems more likely that you’re convincing yourself that you are doing something good for you, so you’re giving your superego something even as you are shirking the work you should really be doing. Which suggests that the mechanism in humans involves multiple levels of complexity.
Where evo-psych may come back in is with how certain behaviors appear to be inherently more addictive than others. Mobile games are infamous for taking advantage of this, and it’s not uncommon for people to fall into playing one as a low-effort fallback pattern even after just a modest amount of reinforcement.
Another thing that is probably obvious to you, but perhaps not to all readers: the description of dopamine levels as representing “confidence” makes sense in a certain model, but it cannot be taken as an accurate description of how the system actually works. As we know from Darwin, an evolutionary explanation should be causal, not teleological. So what we should really say is that the system causes certain actions or perceptions to carry more weight than others, and a better choice of weights ultimately leads to better outcomes, and the reinforcement mechanism helps adjust the weights correctly. At a highly evolved level, it’s possible that this mechanism underlies our conscious experience of confidence; but this would only be a specific case of it. And we can also take an anthropomorphic view of the system, and describe its variables as confidence; but this would only be an abstraction. Either way, we should expect that there will be instances where the system will operate in a way that strays from our intuitive notion of confidence; especially in pathological situations. In such cases, we must be careful not to try and force everything into a just-so story that looks like confidence again. Perhaps thinking about it in the more mechanical terms of reinforcement would help.
I remember reading a study saying that when reinforcement learning does not work well – lack of motivation, depression – it could be either not feeling the rewards, which the serotonin-hacking SSRIs target, or not LEARNING the action-reward correlation, which is more dopamine-related. Note that this kind of learning is not the same as learning a textbook for an exam.
My point is, it sounds like the second case is people who are perfectly capable of enjoying things, but their friends kind of have to drag them to those activities, because even though they liked it last time and the time before that, somehow they still are not motivated to go and enjoy it again. Sounds like one of Scott’s older posts about a person who did not really realize it is possible to like some food more than some other food. The deficiency was not in the enjoyment (serotonin) but in the drive to seek enjoyment (dopamine).
Does that ring a bell? Do you have friends like that?
If your pleasure receptors are blunt (serotonin), I think only medication can help, although there is something to be said for trying to reset/downregulate their threshold via stuff like meditation, stoicism, or hormesis (https://gettingstronger.org/) – I mean, eating plain brown rice for a week could probably help in enjoying other food more.
But if it is the motivation that is lacking then I think, hope, that people simply forcing themselves to try activities would help. Just pushing yourself to go out and do stuff people you find similar to yourself do…
“For example, if there’s a predator nearby, the “flee predator” region will put in a very strong bid to the striatum, while the “build a nest” bid will be weak…
Each little region of the pallium is attempting to execute its specific behavior and competing against all other regions that are incompatible with it. The strength of each bid represents how valuable that specific behavior appears to the organism at that particular moment”
This strikes me as very much “at this point a miracle happens” or (to be kinder) “more research is needed”. It sounds plausible for half a second until you start thinking it through.
What’s going on here? The idea seems to be that we want “distributed” analysis of responses, but what’s the mechanism that decides that center A should put in weaker bids than center B? The model as described begs that question.
This is not mere pedantry. The model is supposed to be interesting insofar as it illuminates the way we make decisions, but perhaps the most interesting thing about our choices is the way that some of them force themselves into our minds more than others. Scott talks about this in the context of watching TV vs writing the term paper, but a different sort of example might be something like assuming the intentional stance (the spirit living in that tree made it move the way it did) vs the monotheistic stance vs the scientific stance.
In these sorts of decisions we’ve been told to assume something like a Darwinian model — each meme fights, lies, dissembles in the brain, and the one that somehow best matches the underlying hardware gets to win. But go back to our lamprey — that model of centers putting in bids has no scope for lying or even honest miscalibration by different centers, because it falls apart if every center simply insists all the time that it’s going to make a maximal bid.
You see my point? Either there’s something extremely fundamental missing in the lamprey model, something that forces honest bids; or it’s a model that doesn’t seem to match the problem as we care about it in most humans (wandering attention, ear worms, …)
I came here to write something similar. Thanks for making this point!
I think you’re taking the use of the word “bid” as an implicit anthropomorphization of distinct brain modules as independent agents who “want” to “win” the bid. Yes, it’s true that the word “bid” is loaded and hides a lack of completeness in the model, but we know it has to be something like “net excitatory signal strength minus inhibitory signal strength”. Yeah, it’s vague, but it’s not as bad as you’re making it out.
I’m curious by what mechanism you think “dishonest” bids would ever occur in the first place. The part of your brain that’s constantly scanning your environment for tigers doesn’t abstractly want you to notice tigers that aren’t there. It’s just hanging out detecting tigers. It’s not scheming to create a scenario where it can startle you.
The closest thing to a “dishonest” bid would be intrinsically addictive behaviors. Any time you take an opiate drug, you’re increasing the odds that behaviors associated with consuming that substance win future bidding wars.
Maybe building on what moridinamael (jesus that’s hard to spell) said above…
What’s missing from the model is simply what causes variation in the strength of signal from a different region. To be honest, though I know nothing at all about lamprey brains, I’d be very surprised if this is a purely regional matter – the same region, even the same specific cells, can and are used in wildly different functional networks. “Region” is easier to think about, and occasionally these things really are that simple, but to my knowledge not very often. So what we should really just be talking about is strength of output from a given network, where “network” is a very fuzzy term that basically means any collection of neurons that are currently doing something together.
So you might have a “mate” network, a “flight” network, a “nest” network, etc. What keeps them “honest”, in your sense, is simply that the strength of each network’s output is affected by its input, which is to say by all the networks that feed into it. So (obvious hand waving here) you might have some sort of predator detection network that responds to sensory signals of, I dunno, particular visual shapes or chemicals in the water or whatever (as noted, I don’t know anything about lampreys). When these inputs fire particularly strongly, the “flight” network will produce a stronger output.
Now this is grossly oversimplified, but you see the basic idea? The honesty simply comes out of the assumption that sensory and “cognitive” processing is operating normally; if that’s the case, then the output strengths of each network (their “bids”) will be roughly calibrated to the actual current state of the environment. When this doesn’t happen, you get mistaken activity, and in the worst cases, serious mental illness.
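To make the hand-waving concrete, here’s a toy sketch (emphatically not neuroscience – every network name and weight below is an illustrative invention) of the “honest bids” idea: each behavioral network’s bid is just a function of its sensory inputs, so the bids stay calibrated to the environment by construction, and the “striatum” simply disinhibits the strongest one.

```python
def bids(senses):
    """Map sensory readings (0..1) to bid strengths for each network."""
    return {
        "flee": 2.0 * senses["predator_signal"],            # strongly input-driven
        "mate": 1.0 * senses["mate_pheromone"],
        "nest": 0.5 * (1.0 - senses["predator_signal"]),    # suppressed by danger
    }

def select_action(senses):
    """The 'selector': release inhibition on only the single strongest bid."""
    b = bids(senses)
    return max(b, key=b.get)

print(select_action({"predator_signal": 0.9, "mate_pheromone": 0.3}))  # flee
print(select_action({"predator_signal": 0.0, "mate_pheromone": 0.8}))  # mate
```

Note that nothing here is “scheming”: a network can only bid higher by its inputs actually firing harder, which is the whole sense in which the bids are honest.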
> Each little region of the pallium is attempting to execute its specific behavior and competing against all other regions that are incompatible with it.
This reminds me of parts psychology, such as Internal Family Systems (referenced by Yudkowsky on hpmor.com).
Which Scott apparently tried doing on himself lol
This seems to have a lot in common with the concept of the Modular Mind; in an Ezra Klein interview with Robert Wright, author of Why Buddhism is True (here, starting at 26:00), the point comes up about different modules of the mind competing to get the attention of the “consciousness”.
+1. Minsky was a fan of this I think.
Interesting stuff, but I’m suspicious of the idea that dopamine in the striatum encourages taking more high-willpower actions relative to low-willpower actions (assuming, as you said, that some action is going to be taken). Certainly this fits with the image we have of college students (and Senior Regional Manipulators Of Tiny Numbers, etc.) using Adderall. But that isn’t the only way that people use dopaminergic stimulants, and in a lot of the other use cases, people don’t seem to be doing higher-willpower actions after taking the stimulant — think of what people tend to do after snorting coke, smoking crack, or taking meth.
I don’t think this is a difference in the drugs themselves (Ritalin has the same mechanism of action as cocaine and so forth). It seems like more of a situational difference — people who take these kinds of drugs at a party will channel the drug effect into partying, even if the same drug effect could have been channelled into a term paper if they were at home, trying to write one. So whatever the drug effect is, it can help people perform high-willpower actions but doesn’t automatically make their action preferences shift toward favoring higher willpower actions.
I’ve always thought about this kind of thing using something kind of like the predictive coding framework (although really my own bastardization of it): “we tend to take actions that our midbrains predict will be rewarding. ‘High-willpower’ tasks are actually those that our midbrains predict to be unrewarding (at least according to the EEA-focused metrics they use, which may have higher time preference than we’d like). There’s some baseline of predicted reward that an action has to cross before it starts to look actively appealing, and dopaminergic stimulants push more things above the threshold, so that while some actions may still be more appealing than others, most actions (even formerly ‘boring’ or ‘difficult-looking’ ones) are now at least in the ‘appealing’ category.” (IIRC I came up with that independently, so IDK if it is a hypothesis that a neuroscientist would take seriously)
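That threshold picture can be sketched in a few lines (my own toy formalization of the comment above, not an established model – the reward numbers and threshold are made up): each candidate action has a predicted reward, only actions above a baseline look “appealing”, and a uniform dopaminergic boost lifts formerly boring actions over the line without changing their relative ranking.

```python
THRESHOLD = 0.5  # baseline an action must cross to look actively appealing

def appealing(predicted_reward, dopamine_boost=0.0):
    """Return the actions whose (boosted) predicted reward clears the threshold."""
    return {action: reward + dopamine_boost
            for action, reward in predicted_reward.items()
            if reward + dopamine_boost > THRESHOLD}

rewards = {"party": 0.9, "term_paper": 0.3, "stare_at_wall": 0.1}

print(sorted(appealing(rewards)))                      # ['party']
print(sorted(appealing(rewards, dopamine_boost=0.3)))  # ['party', 'term_paper']
```

This also fits the situational point above: the boost doesn’t reorder preferences (partying still beats the term paper), it just widens the set of actions that feel doable, so context decides which one actually gets taken.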
This whole article hit really close to home for me.
I always seem to have an enormous amount of difficulty initiating action. It’s not that I’m lazy or unfocused; I can work hard at a task once I’ve started doing it, and I can focus on things to the point of obsessiveness (usually if it’s something that really interests me, but sometimes even if it isn’t), I just have an inordinate amount of trouble getting started in the first place. It’s not a matter of distractions; even if I force myself not to go online or watch TV or play video games or otherwise distract myself, I still can’t force myself to actually do anything productive, to the point where I’ll just lie in bed staring up at the ceiling. It’s not just depression or lack of energy either; there are times when I get depressed or feel low on energy (especially in winter), but there are also times when I feel energetic to the point of restlessness and I still can’t bring myself to do any specific tasks, even though I desperately want to do something.
Also, when I am doing something, I have a lot of trouble switching to a different task, even if it’s more important or more urgent or more interesting than whatever I’m already doing. I can’t even count how many times I’ve been late to something because I was too busy posting on forums or playing video games to get ready and leave the house on time. It’s not even just a matter of enjoyment, because I’d much rather be out on a date than posting on Facebook, but I’ll still be late for the former because I spent too much time on the latter.
Sometimes I have trouble just getting up from my desk to get a snack or drink some water or use the bathroom, or even do something as simple as change out of uncomfortable clothes. If I have anything on my chair, I’ll just sit on top of it for extended periods of time instead of moving it out of the way, even if that’s less comfortable. If I just got out of the shower and I’m wearing a towel and I make the mistake of going on my computer, even if it’s just to check something for a second, I’m likely to still be sitting there wearing that towel half an hour later. Basically, I’ll ignore feelings of discomfort (or hunger, thirst, etc.) until they’re strong enough for me to prioritize them, instead of just reacting to those feelings right away like a normal person. Occasionally I’ll even have moments where I’ll suddenly freeze up and stand still doing nothing for a few seconds, for no particular reason. I tend to feel mildly dissociated from my body most of the time and that’s probably a big part of the problem, but I think my weird inability to switch tasks is also a big part of it (or maybe they’re both just different aspects of the same underlying issue, I don’t know).
I keep coming back to this concept of action thresholds. I’m not sure if it’s physiological or neurological or psychological or some mix of all three, but I feel like my threshold for taking action is a lot higher than most people’s. Not to the extent where I’ll spend every day just sitting in a dark room doing literally nothing, but enough to massively interfere with every aspect of my life. My girlfriend says I have “low inertia” and that sounds like a very accurate description of my problem, but I still don’t get why I’m like this or what I can do about it.
My roommate says it’s just because I’m timid and indecisive, and maybe that’s part of it. The “confidence levels” theory would explain why I’d be hesitant about taking complex, difficult, time-consuming, high-risk actions like writing my thesis. It might even explain why I’d be hesitant about cleaning my room, since I’d be inclined to completely reorganize everything and that would require a large expenditure of time and energy. But it doesn’t really explain why I’d have difficulty getting up to grab a glass of water. Can someone’s insecurities really affect their fundamental action systems like that?
Is there any logical reason for my brain to prioritize my current action state so strongly? If it’s not about calorie expenditure and not about complexity and not about the risk involved, if it’s not even about what’s the most enjoyable or what leads to the best outcomes or what I have the best associations with, then what is it?
I wonder if the activity of the basal ganglia can explain the impression of having free will.
Just some additional food/links/ideas for thought.
I remember a while back Scott making fun of Ben Carson as being a ‘murderer’ in an earlier post about hemispherectomies, and how patients who’ve undergone them exhibit behaviors as though their left and right hemispheres acted as two independent people. CGP Grey did a video that touches upon this (https://www.youtube.com/watch?v=wfYbgdo8e-8), so it could be possible that humans are just a Russian nesting doll of multiple systems trying to gain motor control. Would it be far-fetched to believe there will always be at least 2 competing interests in the brain (even if, say, ‘be lazy’ and ‘work out’ aren’t driven by separate hemispheres)?
I remember listening to an NPR program that talked about how some researchers were looking into localized electrical stimulation of the brain in order to “speed up” the learning process in certain domains of expertise. While the closest article that I could find on this topic looks into use by students (https://www.npr.org/sections/alltechconsidered/2017/01/07/507133313/students-zap-their-brains-for-a-boost-for-better-or-worse), I remember the program talking about one of the people going through it learning how to become an expert rifle shot. Either way, the article talked about how tDCS could improve mood and learning. A lot of previous commenters mentioned meditation, but it seems like meditation and tDCS are just analogues for exercise-this-this-brain-function-to-strengthen-willpower-like-you-would-strengthen-a-muscle.
Also I put two this’s in the previous hyphenated word, because lol