What Is Depression, Anyway?: The Synapse Hypothesis

I.

The problem with depression research isn’t that we don’t have any leads on what causes depression. It’s that we have so many leads on what causes depression that we don’t know what to do with all of them. For example:

1. Life adversity, like getting fired or breaking up with a partner, can make people depressed. The biological correlate of this seems to be the hypothalamic–pituitary–adrenal (HPA) axis, where your brain tells your adrenal glands to produce glucocorticoid stress hormones like cortisol and this does something to your brain that increases the risk of depression.

2. Inflammation and immune overactivity can make people depressed. The classic examples of this are cancer-related depression (which exceeds what you would expect just from cancer being stressful) and depression induced by administration of the immunomodulator interferon-α. Anti-inflammatory drugs have a small but clinically relevant antidepressant effect. Some of the relevant chemicals here seem to be TNF-α and IL-1; these do something to your brain that increases the risk of depression.

3. Serotonin and other monoamines seem to be involved. Most existing antidepressants, like SSRIs and MAOIs, seem to work by increasing monoamine levels. There are some conditions which affect monoamine levels and also increase risk of depression, though it’s nothing like a perfect correlation.

4. The glutamate system (eg NMDA and AMPA receptors) seems to be involved. Ketamine acts on both of these receptors in different ways, and one of those actions is the source of its rapid and unprecedented antidepressant effects.

5. There’s some kind of important link between depression and folate balance. Various folate-related chemicals (eg l-methylfolate and s-adenosylmethionine) are effective antidepressants. Some studies show that people with depression sometimes have disrupted folate cycles, for example elevated homocysteine levels.

6. Electroconvulsive therapy (“shock therapy”) is very effective at treating depression if it induces a seizure in the patient, so the increased activity from seizures must be helpful somehow.

So if we wanted to know what depression really was, it might be promising to look for some process that seems to match depressive symptoms and affects/is affected by life adversity, inflammation, monoamines, glutamate, folate, and electricity.

Recently some people think they’ve found one. According to Duman’s Neurobiology of Stress, Depression, and Rapid Acting Antidepressants, it’s decreased synaptogenesis, and it’s regulated by a protein complex called mTORC1.

Neurons communicate with other neurons through branches called dendrites and connections called synapses. Healthy neurons often create new dendrites and synapses to expand their network of connections and adjust to new information. The process of making new synapses is called “synaptogenesis”, and it’s common throughout the adult brain.

Depressed people have decreased volume in some brain areas, most notably the hippocampus and prefrontal cortex. But in postmortem studies, they don’t actually have fewer cells in those areas. So it looks like maybe these neurons just have less synaptogenesis going on.

Synaptogenesis is partly controlled by a protein complex called mechanistic target of rapamycin complex 1 (mTORC1 to its friends). Like every other protein complex, mTORC1 is controlled by a giant mess of receptors and second messengers and intracellular signals with names like VDCC and GSK3.

People try to make this seem simple by displaying it as a system of billiard balls and tubes in a cute cartoon, but don’t be fooled – no human being has ever remembered any of it for more than two seconds.

The factors that affect synaptogenesis and mTORC1 are many of the same factors that affect depression. Let me count the ways:

1. Life adversity causes chronic stress, biologically represented by upregulation of the HPA axis and increased corticosteroid production. A 2008 study finds that rats subjected to chronic stress develop atrophy of dendrites in their prefrontal cortex. Administering glucocorticoids directly mimicked some of these effects, suggesting that glucocorticoids account for part of the stress effect, though stress is probably a whole cocktail of other things besides. When humans take glucocorticoids (they’re a useful medicine for various diseases) they tend to develop hippocampal atrophy and “simplification of dendrites” there, which I think is the same as decreased synaptogenesis. They also tend to get depressed – in some studies of Cushing’s Syndrome (the medical name for the collection of bad things that happen when you have too much glucocorticoid, whether from medication or from your own adrenal glands), up to 90% of patients are depressed.

2. I didn’t find the linked paper’s attempt to link inflammation to synaptogenesis very convincing, but it looks like there’s a little bit of research that has found that systemic inflammation decreases synaptogenesis. “Morphometric analysis of dendritic spines identified a period of vulnerability, manifested as a decrease in [dendritic] spine density in response to inflammation. The density of presynaptic excitatory terminals was similarly affected. When the systemic inflammation was extended from 24h to 8 days, the negative effects on the excitatory terminals were more pronounced and suggested a reduced excitatory drive.” This seems pretty relevant.

3. Everyone used to think that traditional antidepressants like SSRIs worked by increasing serotonin (and so by extension depression must have something to do with low serotonin levels). But SSRIs increase serotonin very quickly (within hours) yet take months to work. Something longer-term must happen when serotonin levels have been increased for long enough. That something has now been pretty conclusively identified as an increase in brain-derived neurotrophic factor (BDNF) – although I can’t find any good explanation of why increased serotonin should cause increased BDNF after a month. BDNF is a nerve growth factor – its main action is activating mTORC1 and telling nerve cells to grow more dendrites and synapses. And it’s most active in the cortex and hippocampus.

4. Ketamine affects the brain by either blocking NMDA receptors (boring traditional explanation), activating AMPA receptors (exciting new explanation), or possibly both (wishy-washy neoliberal compromise explanation). Duman et al are kind of ambiguous about which explanation they accept, but I think they present a theory where NMDA blockade causes AMPA activation, or something, which I’d never heard before. In any case, they present ample evidence that AMPA rapidly affects BDNF and dendritogenesis – for example, Positive AMPA Receptor Modulation Rapidly Stimulates BDNF Release And Increases Dendritic mRNA Translation. The “rapidly” part is important – the surprising thing about ketamine is how quickly it works compared to other antidepressants, so it’s exciting to find a theory that predicts this should happen.

5. I haven’t seen much attempt to fit folate into this theory, which is a shame. A quick Google search brings up a few people talking about how folate deficiency decreases neurogenesis in the hippocampus, which is sort of related.

6. Studies show that ECT increases BDNF levels and increases hippocampal volume, though I’m not sure exactly how or why giving someone a seizure should do that.

So the synapse hypothesis can unify at least five of the six lines of research into the causes of depression.

II.

My remaining skepticism is mostly based on a worry that anyone can do this with anything. The body is so interconnected, and there’s so much bad biology research out there, that I worry that if I said that the real cause of depression was, uh, thickness of the blood, I could find some way that all of those lines of research above affected blood thickness.

A quick demonstration: glucocorticoids can cause thicker blood, inflammation can cause thicker blood, SSRIs cause thinner blood, folate causes thinner blood. Huh, actually that’s kind of creepy.

My point isn’t that the (very respectable) academic research on depression is anywhere near this silly. It’s just to explain why I can hear a theory that seems to explain everything beautifully and my only reaction is “Eh, sounds like it has potential, let’s see what happens.”

Here are some of the things that confuse me, or that I hope get researched more in the future:

1. Why should decreased synaptogenesis cause depression, of all things? If you asked me, a non-neuroscientist, to guess what happens if the brain can’t create new synapses very well and loses hippocampal volume, I would say “your memory gets worse and you stop being able to learn new things”. But this doesn’t really happen in depression – even the subset of depressed people who get cognitive problems usually just have “pseudo-dementia” – they’re too depressed to put any effort into answering questions or doing intelligence tests. Why should decreased synaptogenesis in the hippocampus and prefrontal cortex cause poor mood, tiredness, and even suicidality? All that the Duman et al paper has to say about this is:

This reduction in dendrite complexity and synaptic connections could contribute to the decreased volume of PFC and hippocampus observed in depressed patients. Moreover, loss of synaptic connections could contribute to a functional disconnection and loss of normal control of mood and emotion in depression (Fig. 1). In particular, the medial PFC exerts top down control over other brain regions that regulate emotion and mood, most notably the amygdala, and loss of synaptic connections from PFC to this and other brain regions could thereby result in more labile mood and emotion, as well as cognitive deficits.

…which sounds more like an IOU for a theory than anything really fleshed out.

2. Why can’t we just give people BDNF for depression? I’ve been looking into this and it seems like the answer is something like “this works great if you cut open someone’s skull and inject it directly into their brain, but most people aren’t up for it” (the relevant studies were done in rats). But why can’t it be given peripherally? Some studies suggest it’s stable on injection and crosses the blood-brain barrier. Some people tried this in mice and got modest results, but why aren’t people looking into it more?

3. Why does the body have so many “decrease synaptogenesis” knobs? That is, why go through the trouble to evolve all these chemicals and systems whose job is to tell your brain to decrease synapse formation so much that you end up depressed? Is there some huge problem with having too much synapse formation which the brain is desperately trying to avoid? For that matter, what is it like to have too much synapse formation? If it’s the opposite of depression, it sounds kind of fun. If I got someone to open up my skull and inject a lot of BDNF, could I be really happy and energetic all the time? How come all the good stuff is always reserved for rats?

4. Why is depression an episodic disease? That is, how come so many people get depressed for no reason, stay depressed for a few months to a few years, and then get better – only to relapse back into depression a few years later? If people get depressed because of some life stressor like a divorce, how come they don’t get un-depressed once the life stressor goes away? Is depression some kind of attractor state? If so, why?

5. Why doesn’t rapamycin cause depression? Remember, mTOR is the “mechanistic target of rapamycin”, so named because the drug rapamycin inhibits it. But we give people rapamycin for various things all the time, and depression isn’t really known as a major side effect (even though IIRC it crosses the blood-brain barrier). If depression is really under the immediate control of mTORC1, rapamycin should be the most depressive thing. Instead it’s not obviously depressive at all.

6. How does bipolar disorder fit into all of this? Is mania the answer to my “what is it like to have too many synapses?” question from point (3)? If so, why do some people go back and forth between that and depression?

A lot of these questions could be answered in one stroke if we had a good evolutionary theory of depression. I’m skeptical that this exists – depression just seems too fitness-decreasing, and the various just-so stories people have come up with for why it might increase fitness in certain weird situations seem a little too convoluted. So it’s not that I’m expecting some sort of evolutionary story to work out. Just noticing that, even if the synapse theory of pathophysiology turns out to be right, there’s still a lot more that needs to be explained.

147 Responses to What Is Depression, Anyway?: The Synapse Hypothesis

  1. baconbacon says:

    “A lot of these questions could be answered in one stroke if we had a good evolutionary theory of depression. I’m skeptical that this exists – depression just seems too fitness-decreasing, and the various just-so stories people have come up with for why it might increase fitness in certain weird situations seem a little too convoluted. So it’s not that I’m expecting some sort of evolutionary story to work out. Just noticing that, even if the synapse theory of pathophysiology turns out to be right, there’s still a lot more that needs to be explained.”

    I think at the basic level there is a good evolutionary base for why depression exists. An organism’s optimal activity level depends heavily on its environment. You don’t want to spend your winters wandering around looking for scarce food when you should be hibernating, you don’t want to fight with other members of your clan/tribe after challenging an alpha and losing, and you don’t want to flower at the wrong time of year. Having multiple energy states that your body cycles through makes a lot of sense.

    I don’t know if fleshing this theory out will help anything though. It would if depression is an important function, but it won’t if depression is the symptom of an important function getting messed up for one reason or another, and it doesn’t generate clear pathways to investigate for depression if the body is as complicated as you say (which it is).

    • nhnifong says:

      Depression may exist because it can amplify sexual selection pressures. If you get dumped, but you stay upbeat about it, find another mate, and go on with life, maybe your tribe’s gene pool is worse off than if you had gotten depressed about it and dropped out of the race all together.

      • ryanwc4 says:

        I think this is interesting. I was about to suggest that stress itself, broadly defined, might work similarly, as an amplifier of minor differences in fitness. Populations with large degrees of genetic similarity may trend towards a Malthusian situation in which every individual is on the margins of survival, leaving the entire population vulnerable to small normal fluctuations in resource availability. Some mechanism for avoiding that situation seems adaptive in populations. Though it’s not precisely the same thing, consider how many species have adapted so that the young eliminate and in many cases eat their siblings, usually as a result of the ‘fitness’ of higher birth order. The problem this is overcoming is insufficient resources for offspring to survive. In the absence of birth order, it would seem that some biologically primed mechanism for eliminating the less successful (though possibly genetically identical) individual at little cost to the more successful individual helps assure the survival of the genetic legacy of both, and certainly the genetic legacy of the parents.

        • Debug says:

          I believe The Selfish Gene discusses a group of birds in which, after the mating season, the birds that failed to procure a mate go off on their own and typically die before the next mating season. These birds passively observe other birds and, if a mate opens up, they can then procure a mate for the season. This strategy isn’t maladaptive, as the behavioural costs of struggling to find a new mate (competing with another male) are typically higher.

          This sounds like a variant of the mechanism you have proposed. Such a strategy makes the group better adapted for dealing with the problem of having a surplus of unfit males and reduces competition for resources in Malthusian conditions. However, such a gene isn’t actively selected against, since if you are a carrier the gene is also beneficial to you (it reduces the probability of you dying by wasting energy trying to procure a mate someone else has already procured).

      • Broseph says:

        Sounds wrong. Any maladaptive trait that’s likely to take you out of the gene pool has to have some pretty hefty selective pressure on it for it to become so common. From what I remember the rule of thumb is double your sibling’s chance of reproduction, then double that for cousins, and so on.

        • nhnifong says:

          If I understand it right, it doesn’t have to double the sibling’s fitness, only increase it at least 2x as much as it decreases the gene bearer’s own fitness. So if a gene decreases its bearer’s offspring by 1%, and increases his sibling’s offspring by 2%, then it spreads, right?
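
          For reference, the rule of thumb being gestured at here is Hamilton’s rule: a gene for costly helping spreads when

          rB > C

          where r is relatedness to the beneficiary (1/2 for a full sibling, 1/8 for a first cousin), B is the benefit to them, and C is the cost to the bearer. That works out to roughly “more than 2x the cost” for a full sibling and “more than 8x” for a first cousin. With C = 1% and B = 2% for a sibling, rB = (1/2)(2%) = 1% = C, so the example above sits exactly at the threshold – the benefit has to be strictly more than twice the cost for the gene to spread.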

          If only a small effect was needed, maybe depression was supposed to function in small doses, tempered with lots of exercise.

      • baconbacon says:

        Depression may exist because it can amplify sexual selection pressures. If you get dumped, but you stay upbeat about it, find another mate, and go on with life, maybe your tribe’s gene pool is worse off than if you had gotten depressed about it and dropped out of the race all together.

        This is exceedingly unlikely, as others have pointed out. It is basically impossible for you to have an adaptation that kills you without leaving offspring but improves your overall group fitness by helping some distant relative.

        Additionally, most depressed people don’t commit suicide; many (most?) people will undergo at least one depressive episode in their lives, but they generally don’t kill themselves.

      • MawBTS says:

        maybe your tribe’s gene pool is worse off than if you had gotten depressed about it and dropped out of the race all together.

        Remember that a gene coding for this behavior wouldn’t know or care about the purity of your tribe’s gene pool. Genes exist to make more of themselves, and people with suboptimal genetics (cripples, retards, etc) strive to reproduce just like everyone else.

      • The idea that depression is a specific response to being dumped reminds me of Kevin Simler’s theory that tears evolved as a specific response to bullying.

        http://www.meltingasphalt.com/tears/

        • The being-dumped theory could be generalised into a theory that depression is a way of responding to status-lowering events, a way of saying “I get the message, I will be low-statussy from now on”.

          That doesn’t entirely contradict the idea that depression is an energy-saving adaptation, since signals have to come from somewhere. Simler thinks that crying evolved as a signal from the physiological response to being punched on the nose.

          Signalling theory also explains its apparent maladaptiveness: that’s the cost of the costly signal.

          (But it’s less adaptive in modern societies, where stressors tend to be impersonal forces).

    • sixo says:

      My awful, unrigorous, anecdotal, biased theory of depression (my source: many years of it) is:
      a) (my) depression seemingly WAS some combination of social factors: not being taken seriously, lacking power, not having things to be proud of, or not being able to model where future self-esteem/pride/being-taken-seriously were going to come from. I don’t really mean to assert those things caused the depression, or that depression caused me to think those things, though I think the former is true.

      And b) depression was incompatible with actual, practical survival instincts. When money got short, depression left me and was replaced by something else, stress about finding a job, cheap food, etc. This stress WAS my body knowing it needed to locate resources for the future.

      That’s my impression at this point, anyway. Perhaps this is useless or trivial. Depression left me (mostly for good) when I found a way to take some control of my life, moved cities, switched fields & greatly increased income (to tech, so basically like cheating). Did I do those things because depression departing allowed me to? Not sure.

      Probably I’m ignorant. I do think it’s important (and probably hard) to distinguish between what depression IS and what’s causing it.

    • thoramboinensis says:

      This makes a whole lot of sense to me as an ecologist/evolutionary biologist. At least at first blush, depression looks an awful lot like ‘energy-saving mode’ for an organism. Sleep a ton–check. Don’t move around a lot–check. Don’t leave your home/den/cave–check. It also nicely ties in a seventh common trigger for depression that Scott doesn’t touch on: seasonality. I’m from Alaska originally, and it’s common knowledge that during the winters you need to maximize use of limited daylight or get one of those UV happy lamps to stave off Seasonal Affective Disorder (SAD).

      If you think about populations of animals that are resource limited in environments with variable resource availability, you’d sure as shit expect an evolutionarily stable strategy to come about that looked an awful lot like ‘energy-saver mode’. And you’d want it to be potentially triggerable in a variety of ways, including general stress, seasonality, and potentially things like traumatic events (if I’m getting the shit kicked out of me in life, it probably means I don’t have the best access to resources, and I should turn on energy-saver mode). It looks like the connection between animal metabolic depression (i.e. torpor/aestivation/hibernation) and clinical major depression has been made before in the medical literature.

      As far as why this might be so seemingly maladaptive in humans, it could just be that all the downtime spent in pseudo-hibernation/torpor opens up opportunity for existential dread to set in (I don’t imagine this is a problem for, say, our prosimian ancestors). 21st century humans have a lot of evolutionary baggage.

      • Nornagest says:

        It’s a clever idea, but I wonder about the side effects. We have a couple of other energy-saving modes that point toward hanging around and not doing very much. Tiredness, for example, or the malaise that comes with flu-like illness. Both pretty much do what it says on the box and not much more. If we’ve got these, why would we evolve a completely different mechanism that comes with all these random dysfunctions?

        • thoramboinensis says:

          One possibility is time-scale. You’d want sleepiness to be pretty well synced to your circadian rhythm, but torpor/hibernation to be a more seasonal, stickier state.

        • baconbacon says:

          There are situations where tiredness doesn’t cut it. Take fighting over mates; this is often ritualized in a lot of animals. Bighorn sheep, as an example, butt heads until one just sort of wanders off aimlessly, and this is a great result evolutionarily for both participants. They could easily continue on until one dies from injuries/exhaustion, but then not only does that one not reproduce, but the opposition is also likely to be exhausted/injured by the end and will be greatly reduced in his ability to mate.

          A mechanism that allows the loser to signal that they are done (and credibility is key – the winner doesn’t want to have to battle another challenger and have the initial loser come back refreshed for another shot) allows both to gain.

      • Jaskologist says:

        So we’re basically saying that depression is hibernation gone haywire?

        If that’s the case, we should find it more prevalent in human populations that evolved in colder areas, and probably close to non-existent in Africans. Do we find that?

        (Then again, maybe there are relevant seasons there, too. But surely we should find some sort of correlation with ancestral climate.)

        • markk116 says:

          Apparently, it is rampant in Scandinavia, but this is attributed to vitamin D deficiency.

        • phil says:

          To the extent to which suicide rates are a proxy for depression…

          Black people have a lower suicide rate than white people

          http://students.com.miami.edu/netreporting/?page_id=1285

          I suspect there is more directly on point data somewhere

        • thoramboinensis says:

          ‘Hibernation gone haywire’ is a catchy handle, but a bit different from what I would contend or the author of the paper I cited would probably contend (caveat: haven’t had a chance to read the main text yet). What I’m saying is that metabolic depression (which underpins hibernation, aestivation, and other forms of extended torpor in animals) in humans may be what we identify as major depression.

          That’s still a long way from saying that depression = hibernation. I can’t think of any primates that undergo hibernation or aestivation, so we’re talking about physiological pathways that long ago would have been evolutionarily co-opted from creating a true hibernation state, to a softer, torpor-like ‘energy-saver mode’ (alternatively, there could have been an independent evolutionary origin of bistable mood/activity states). To give just one example, I know white-faced capuchins in dry forests of Central America face extreme seasonality in food availability, and researchers who study them have a hundred different ways of quantifying just how lethargic they are during the dry season.

          All that’s to say that even if the evolutionary origin of major depression in humans is the same physiological/neurological state as hibernation, tens of millions of years of evolutionary history would separate the two and one would expect that they would look pretty different. It could even be vestigial physiology (i.e. evolutionary baggage that hasn’t been selected out of the gene pool yet), and to the degree that it’s maladaptive in modern humans, it certainly is. If that’s the case, you might even expect fewer clinical features of depression (e.g. suicidal thoughts) in polar human populations that have existed in those regions for a sufficient number of generations (since selection would have a chance to weed out some of the problematic aspects of human ‘energy saver mode’). A stronger prediction of this theory would be high levels of major depression in populations of humans from seasonal mid-latitudes translocated to higher latitudes during winter.

          I think the more immediate questions derived from the theory to test at this point would be things along the lines of ‘do we see similar physiological pathways in animals (especially primates) that experience strongly seasonal resource availability as we do in hibernating animals (especially mammals)?’ or ‘is there genetic evidence of shared evolutionary origin of pathways underpinning metabolic depression in primates and hibernation in other mammals?’ If it is shared ancestral genetic architecture, it would be really interesting to figure out at what point in our evolutionary tree the state became triggerable by things like traumatic events, low social standing, etc. Also it would be interesting to see if apes in very seasonal habitats have clear depressed and elevated metabolic equilibria, and if any of the characteristics of the former look like symptoms of major depression. I bet there’s clear bistable metabolic equilibria that have a lot of behavior manifestations for orangutans (Southeast Asia’s crazy dipterocarp forests are crazy variable in resource availability–basically its a bumper ‘mast’ crop of fruit every few years, then very little in between).

      • markk116 says:

        This also is my go-to hypothesis. The decrease in synaptogenesis also fits with this. The brain uses about a fifth of the calories we expend, and growing stuff always costs a lot of energy. It makes sense to me that if you go into forced hibernation mode you not only stop building stuff but you slack on the upkeep too. This would also explain the atrophy seen in existing structures.

      • Tarhalindur says:

        I’d agree as well, with the caveat that this explanation might be even more applicable to bipolar disorder than it is to depression. If depression is the energy-saving response to low resource conditions, then the obvious explanation for mania/hypomania is that it is the inverse effect – upregulation of energy expenditure in response to a perceived increase in resources. If that’s correct, then the first hypothesis that comes to mind for bipolar disorder is that people with it have lower-than-usual thresholds for activating the “low resources, save energy” and “high resources, go nuts” modes.

        That doesn’t explain depression quite as well, though. The best hypotheses I can think of are depression involving missed signals to come out of energy-saving mode (or those signals just not being there at all?) and/or the depression energy-saving mode running headlong into some other effect (Durkheim’s original work on anomie comes to mind here).

      • vV_Vv says:

        This makes a whole lot of sense to me as an ecologist/evolutionary biologist. At least at first blush, depression looks an awful lot like ‘energy-saving mode’ for an organism. Sleep a ton–check. Don’t move around a lot–check. Don’t leave your home/den/cave–check. It also nicely ties in a seventh common trigger for depression that Scott doesn’t touch on: seasonality. I’m from Alaska originally, and it’s common knowledge that during the winters you need to maximize use of limited daylight or get one of those UV happy lamps to stave off Seasonal Affective Disorder (SAD).

        Also muscle loss and fat gain, which occur in depression and are specific side effects of corticosteroid therapies.

        Synapses are metabolically expensive: an adult brain consumes 20% of the body’s energy intake, and the brain of a young child, around the age of peak synapse density, consumes over 40%. Therefore, decreased synaptogenesis in the brain is probably another energy saving mechanism. Maybe it does not cause depression per se, but it is a correlate of it.

    • Scott Alexander says:

      Humans wouldn’t have evolved such a system in tropical Africa, and it’s unlikely there was enough time to evolve this complex of a system in the few tens of thousands of years they’ve been in cold climates. It couldn’t have come from Neanderthals or anything because we find it in lots of different racial groups in pretty much equal amounts.

      More to the point, depression only correlates a little with the season. And it has some features – like suicidality – which don’t make sense in a hibernation context. And early humans don’t seem to have had a tendency to store up enough food for proper hibernation. And we see hunter-gatherers all the time, including some far-northern ones, and none of them do anything like hibernating.

      The one about not wasting your time in losing situations makes a little more sense, but still not much. Why become suicidal and insomniac after you lose something? How do you know it’s not better to work extra hard to maintain your place in the group and make up for what you lost? If you’re already in people’s bad books, isn’t stopping all work pretty much the worst response? Don’t people already know how to lay low without being told by a mental state which most of the time misfires and results in your death or impoverishment?

      These are exactly the kinds of just-so story that make me very suspicious of the whole field.

      • Garrett says:

        Why become suicidal and insomniac after you lose something?

        I think of it more as driven to ignore pain and make a change. Much like a wolf will (allegedly) gnaw off its own leg to escape a trap. For hunter-gatherers, perhaps this manifests itself as fleeing the current tribe with the possible result of being adopted by another tribe. In today’s culture that could look like going to Tibet to “find yourself”, running off and joining the circus, or committing suicide.

      • thoramboinensis says:

        I think the much more compelling version of this just-so story involves inherited, rather than de novo genetic architecture (more mammalian dive reflex than… I dunno, speech?). Bistability (or adjustable set points, to use a control theory framework) of metabolic state is almost certainly an ancestral characteristic. I bet there was an early mammal/mammal precursor somewhere along the line that hibernated, and from first principles that physiology would be super useful to maintain or tweak in any environment with seasonal/interannual cycles in resource availability.

        To get even more just-so, in primates you might even predict that that system would evolve toward triggerability by low social status. If you’re the omega of a troop of monkeys, you probably get last pick of food resources, and you’d want to start drifting toward ‘low-activity equilibrium’ sooner than if you’re alpha.

        A few hundreds of thousands of years of hominid evolution probably isn’t enough to scrub out such a deeply ingrained physiology, I’d think. Plus, if ‘human depression = vestigial torpor’ is right, my hunch is that it wouldn’t be that maladaptive in hunter-gather humans, both because it allows them to appropriately respond to resource availability, and because you’re probably going to be forced into being functional anyway, or else starve/get left behind by your clan. It was only super relatively recently in evolutionary time that humans encountered situations where we’re capable of storing up enough food to last long enough for depression to get so severe you commit suicide.

      • baconbacon says:

        Humans wouldn’t have evolved such a system in tropical Africa

        The oldest human remains we have don’t come from tropical Africa (I don’t even concede that point, as food scarcity isn’t the only reason it could be adaptive); they come from Ethiopia and (now) Morocco.

      • Andrew Klaassen says:

        Humans wouldn’t have evolved such a system in tropical Africa

        Why not? All that’s needed for the hypothesis is alternating times of dearth and times of plenty. A dry season would work as well for the hypothesis as a cold season.

        and it’s unlikely there was enough time to evolve this complex of a system in the few tens of thousands of years they’ve been in cold climates.

        It seems like an amplification of the “bad mood”/”lethargy”/”learned helplessness” systems which many animals have for various reasons. A whole new system doesn’t need to be evolved; you just need to turn an existing system up to 11, which is an easier evolutionary challenge. I’ve read somewhere – sorry for forgetting where – that the personality of North American wolves was altered toward excessive shyness after only a few generations of hunting which sought to wipe them out completely, and that they’re getting bolder now that hunting has decreased. Foxes were domesticated in half a century in a classic Soviet/Russian experiment, presumably by neoteny. With an existing system and strong selection pressure, personality traits can be altered fairly rapidly.

        I’m not saying that it did happen, but it’s not a ridiculous hypothesis.

      • Tarhalindur says:

        There’s a reason I framed it as regulating energy expenditure in response to perceived changes in resources (and apparently did a bad job of it). I’d expect access to food and mates to be more relevant to those systems overall (if you don’t have current access to resources, conserve what resources you have so that you are more likely to live to the next point resources are abundant), with hibernation and estivation being the extreme form of the general effect, appearing when you get severe seasonal changes in resource availability – and food/mate access (and, by proxy, status) were definitely important to human ancestors!

        I remember some research a few years back about some fans of sports teams seeing a testosterone boost/drop when their team wins/loses, respectively. If I’m right then the systems responsible for that are also responsible for bipolar/depression. (PPE: Actually, I’m slow. Forget sports fan blood testosterone, the obvious modern example of upregulating in response to resource/status/mate availability is the Baby Boom.)

        Which, now that I actually think about it, offers an obvious test for the up/down regulation hypothesis: is there an inverse correlation between blood testosterone levels and bipolar/depression? If there is no such correlation then that’s a hit to this energy regulation hypothesis, though it could also be due to varying sensitivity to whichever molecule or molecules are responsible.

        (That doesn’t explain suicide, but I suspect suicide is another matter entirely and just piggybacking on depression mechanisms.)

      • vV_Vv says:

        Clinical depression may be just an extreme and maladaptive form of a normal and adaptive behavioral and physiological mechanism: sadness, melancholia, being pissed off, whatever you want to call it.

    • Andrew Klaassen says:

      The Discovery of France has an interesting historical example of human semi-hibernation. As thoramboinensis mentions in their example from Alaska, seasonality was the driver in France, too. If growing seasons are such that a big expenditure of energy during some times of the year will yield minimal returns, it would be idiotic for most people not to semi-hibernate. Staying “on” anyway is going to increase your chance of death, and reduce the resources you have available to feed your children.

      What’s weird is the modern era, in which fossil fuels mean that there is endless-ish energy available all the time. People who are “on” all the time are suited to this new environment, but they would’ve been at a disadvantage through most of human history.

      The cognitive features of depression are interesting, too. Last I looked at the research, cognitive approaches to the treatment of depression worked as well as any of biochemical approaches. Animals which can’t speak can suffer obvious analogues of human depression, so it’s not purely a cognitive thing, but our internal monologues have an amplifying effect. It’s a reminder that, while DNA and genes hold a lot of information and decision-making power, they have ceded some of that power to neural networks which can learn and adapt more quickly. We use narratives to guide some of our decision-making, including decisions about whether or not to get out of bed in the morning. How do you map that to the physical, chemical and electrical events happening in the brain? I have no idea, but the relative effectiveness of cognitive therapies suggests that a mapping exists, even though it’s too complicated for us to map at this point.

  2. ryanwc4 says:

    Wait. “Life adversity” can affect synapto-genesis? But, but, that can’t be.

    We just had a 300-comment post that took as an assumption that inter-group differences in intelligence must be mostly genetic, despite clear differences between groups in the levels of “life adversity” which seem to correlate pretty well with intelligence.

    Much of the discussion was based on a 10-year-old paper positing the primacy of beneficial synaptic effects (well, dendritic, but you also equate synaptic and dendritic effects) of otherwise deleterious mutations.

    You should have told us about this adverse life issue earlier, Scott. It might have improved the quality of the discussion. A lot of commenters look rather ridiculous now.

    • Nornagest says:

      Before I started jumping to conclusions here, I would want to know what “life adversity” means specifically and how it cashes out in inter-group differences. Most minority groups in the US have lower rates of depression than whites (although women have higher rates than men, IIRC), so if we’re using that as a proxy it doesn’t seem to fit the model of oppression -> stress -> lower synaptogenesis -> poor outcomes.

      • ryanwc4 says:

        Doesn’t entirely matter. Scott’s gloss of the study is that life adversity reduces synaptogenesis. What Cochran and Harpending argued was that it was increased “dendritogenesis” in people with Tay Sachs and Niemann-Pick genes that led to increased intelligence. The fact that depression can be another expression of reduced synaptogenesis, an alternate pathway for the pathology when circumstances are different, doesn’t really change the devastating implications for Cochran and Harpending, since the demographics of “life adversity” are much more consistent with population intelligence than the demographics of Tay Sachs or Niemann Pick.

        I’m editing to add that I agree with you about jumping to conclusions. I don’t think Scott’s reference to a 2008 study of rats is proof positive that stress is the primary cause of inter-population intelligence differentials. That would be jumping to conclusions.

        What I believe pretty strongly is that Cochran and Harpending jumped even farther. And that Scott’s reference is a very big strike against them. I find it bizarre that so many commenters two weeks ago clung to such a poorly proven theory. Or rather, I find it self-serving.

        • Nornagest says:

          since the demographics of “life adversity” are much more consistent with population intelligence than the demographics of Tay Sachs or Niemann Pick.

          The point is more that we don’t have the data for statements like this. The examples of life adversity given in the OP are divorce and losing a job; these are rare, severe, acute events. I don’t think we even have a good model for what the stress of oppression looks like, but I’ll bet it isn’t much like the stress of divorce.

          • alwhite says:

            Might have to get even more specific. Neither divorce nor job loss has to be a stressful event. Things like “loss of security”, “animosity”, “interpersonal conflict” would be better terms to focus on.

          • baconbacon says:

            Divorce is tricky though: happy marriages don’t end in divorce, so while the actual event is a one-off, the build-up is often years of stress. This is often the case with job loss as well, though less often.

          • ryanwc4 says:

            I think you raise interesting complications.

            I wasn’t actually thinking of “oppression”, whether as an abstraction or in the post-modern “micro-aggression” mode.

            I was thinking more of direct causes, like high rates of violence or a parental reliance on negative reinforcement.
            (I cut most of my comment b/c I feel I’ve hijacked an interesting post on depression.)

          • Trevor Adcock says:

            @alwhite Stress doesn’t really have to do with whether something is good or bad though. It mostly has to do with whether something is different. Both getting married and getting divorced are very stressful events. Anything that means you have to adapt to a new circumstance, whether that circumstance is good or bad for you, is going to be stressful.

          • Zodiac says:

            Is the theory of eustress and distress disproven? If not, maybe getting married counts as eustress while divorce counts as distress.

          • liberosthoughts says:

            Yet divorce has been used to compare adaptive and non-adaptive theories of depression in this study.
            Granted, antidepressant purchases might not be the best proxy for depression, but the best models here are the adaptive and stress-relief ones.

        • GravenRaven says:

          the demographics of “life adversity” are much more consistent with population intelligence than the demographics of Tay Sachs or Niemann Pick.

          Yep, early 20th century Jews sure had remarkably little life adversity.

          • Andrew Klaassen says:

            Yep, early 20th century Jews sure had remarkably little life adversity.

            You’re saying that sarcastically, but it’s true of most of the remarkable Jews who formed our impression of Central European Jewish genius. They were mostly raised in well-off homes, and experienced only brief adversity.

            If you were able to get intelligence scores which included all the Jews who experienced long-term, grinding childhood suffering, anxiety and malnutrition as a result of pogroms and prejudice, I suspect that the average IQ would come down a bit.

      • baconbacon says:

        This assumes there is a base standard for “depression”. If people respond relative to their expectations (which may or may not be true), you would have people with high levels of perpetual stress not identify their depression, as it would be within their normal range of emotions. Just one hypothetical where this line of reasoning wouldn’t work.

        • Nornagest says:

          Sure. Could be other reasons for demographic differences in reporting, too; to take men vs. women as an example, men might be acculturated to minimize their symptoms and stay out of the doctor’s office as long as possible. This would look in the medical stats like rarer but more severe depression.

          These are all pretty standard issues in medical surveys, though, and we can figure them out.

      • wiserd says:

        A quick search turns up a study which suggests that African Americans have a higher raw rate of depression than Whites, but a significantly lower rate once confounding variables are controlled for.

        “Results. African Americans … exhibited elevated rates of major depression relative to Whites. After control for confounders, Hispanics and Whites exhibited similar rates, and African Americans exhibited significantly lower rates than Whites.”

        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1199525/

    • Scott Alexander says:

      Sunlight, exercise, and coffee also increase synaptogenesis. I don’t think it means what you think it means, or implies what you think it implies. And depression has no effect on intelligence (as far as we know).

      This comment is the kind of combination of mean, misunderstanding, and bringing-culture-wars-into-a-previously-unsullied-thread that may get people banned in the future.

      • ryanwc4 says:

        That’s unfair. I admitted it isn’t definitive, but pointed out that synaptogenesis is absolutely critical to Cochran and Harpending, whose study was the basis of dozens of replies to your Hungarian piece, most of which were actually mean and culture-war oriented.

        • ryanwc4 says:

          On reflection, I think it fair to describe my initial post as mean. It was troll-like. I want to apologize to the host. I did offer caveats and balance in my replies to others.

  3. sidewalkProf says:

    I am not a neuroscientist or anything close, so apologies if this is totally off base. But re: the question of why decreased synaptogenesis leads to depression instead of memory/learning effects – is it possible that there’s a control system-y answer here? I can tell a vague sort of story where having fewer connections available means the brain has to adapt its behavior to ensure that the shortage of connections doesn’t cripple it. Thus a tendency to avoid high-complexity tasks, etc, so as to preserve the availability of what connections it does have for pre-existing memories or tasks of extremely high import.

    Is there any feasibility to a theory like this, or am I glossing over too many important details?

    • agmatine says:

      YES. Very long time and very varied depression sufferer here. Have done lots of thinking about this, and I suspect it is something like a drop below some minimal level of structural complexity that pushes the mechanism of consciousness below some threshold of irreducible complexity – I’ve long thought the best description of chronic and/or severe depression is “sub-human”. Sadly my guess is that it will be a long time before our models in both neuroscience and control/complexity theory are sophisticated enough to characterize these effects in a way that would offer more insight than the hand-wavy hypothesis you described.

    • Nav says:

      That study requires what I would consider relatively large amounts of caffeine for the results to show up. From Wentz & Magavi, the study you linked:

      A low dose of caffeine, 10 mg/kg/day, had no effect on proliferation. Moderate doses, between 20 and 25 mg/kg/day, depressed proliferation in the dentate gyrus by 20 to 25% (Figure 2a, Table 1).

      While extrapolating effects from mice to humans presents a number of challenges, 25 mg/kg of caffeine in rodent models is generally assumed to be equivalent to 5–7 cups of coffee, or 625 mg of caffeine, in adult humans (Fredholm et al., 1999).

      To investigate, as I didn’t feel comfortable assuming linearity in dose equivalence (i.e. assuming that 25 mg/kg in mice being equivalent to 625mg in humans means 10 mg/kg is equivalent to 250mg in humans), I dug up the Fredholm article, and it confirmed that this is indeed what the authors were assuming:

      A similar dose-concentration relationship is found in many species, including rodents and primates (Hirsh, 1984). However, because the metabolism of caffeine differs between rodents and humans and the half-life of the methylxanthine is much shorter in rats (0.7–1.2 h) than in humans (2.5–4.5 h) (Morgan et al., 1982), it seems reasonable to correct for the metabolic body weight when comparing animal and human doses. Thus, it is generally assumed that 10 mg/kg in a rat represents about 250 mg of caffeine in a human weighing 70 kg (3.5 mg/kg), and that this would correspond to about 2 to 3 cups of coffee [emphasis added].

      I personally do not drink more than 250mg of caffeine per day and I weigh more than 70kg. I wonder how many people consume beyond 7 mg/kg/day of caffeine – the human equivalent of the level at which significant effects were found in mice – since that seems like quite a lot!
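
      To make the conversion explicit, here’s a minimal sketch of the arithmetic in Python (my own illustration, not from either paper – the helper name is made up), assuming the Fredholm et al. anchor of 10 mg/kg/day in rodents ≈ 250 mg/day in a 70 kg human, and that the correction scales linearly with dose and body weight:

      # Minimal sketch: rodent-to-human caffeine dose conversion.
      # Assumes the Fredholm et al. anchor (10 mg/kg/day in rodents ~ 250 mg/day in a
      # 70 kg human) and linear scaling with dose and body weight - a simplification.

      RODENT_ANCHOR_MG_PER_KG = 10.0   # rodent dose from Fredholm et al.
      HUMAN_ANCHOR_MG = 250.0          # stated human-equivalent daily dose
      ANCHOR_WEIGHT_KG = 70.0          # for a 70 kg adult

      def human_equivalent_mg(rodent_mg_per_kg, weight_kg=70.0):
          """Approximate human daily dose (mg) matching a rodent dose in mg/kg/day."""
          human_mg_per_kg = (rodent_mg_per_kg / RODENT_ANCHOR_MG_PER_KG) * (HUMAN_ANCHOR_MG / ANCHOR_WEIGHT_KG)
          return human_mg_per_kg * weight_kg

      for dose in (10, 20, 25):        # doses used in the Wentz & Magavi study
          print(f"{dose} mg/kg/day in rodents ~ {human_equivalent_mg(dose):.0f} mg/day for a 70 kg human")
      # prints ~250 mg (the no-effect dose), ~500 mg, and ~625 mg respectively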

      • It’s not much actually. A strong medium coffee from, say, Starbucks, will hit somewhere in the 200 range. (Too lazy to provide a link, but google it)

        • Nav says:

          Starbucks is known for having large amounts of caffeine, larger than other comparable coffees. It’s such that CaffeineInformer, which has copious stats on caffeine content in various beverages, suggests:

          This chain has some of the highest caffeinated retail brewed coffee available!… Those new to Starbucks should use caution when drinking their coffee as it’s probably much higher in caffeine than the coffee they are used to.

          I wanted to find data on how much coffee people are drinking, to see if my intuitions were correct, so I found a study that surveyed caffeine intake in Americans: the mean was 2.2 mg/kg/day, and even the 90th percentile consumed only 5.0 mg/kg/day (350mg for a 70kg individual, less than 2 medium coffees at Starbucks).

          Another paper from the FDA summarizes multiple surveys. One particularly interesting study was from the National Coffee Association, who surveyed “regular coffee drinkers,” and found the mean caffeine intake (at time of survey in Winter, 2009) to be 374.7mg/day: “Each annual NCA survey since 1974 has found daily consumption by regular coffee drinkers to be 3 to 3.6 cup of coffee.” I expect this is on the high side relative to other surveys.

          Using data from the Wentz & Magavi study, for 70kg individuals, 250mg represents a lower bound and 500mg represents an upper bound on the amount one needs to drink to witness the demonstrated effect.

          Now, I’m not trying to imply that there aren’t people out there drinking far more than 500mg of coffee per day, but most people are not consuming that much caffeine regularly. Many seem to be in the ambiguous region (250mg-500mg) where it’s unclear whether it’s safe or not, but many are also below 250mg.

          I also wonder whether the study’s methodology makes a difference: they spaced the two doses of caffeine at 12 hours apart, whereas most coffee drinkers are unlikely to space their drinks such. Most I know have a morning cup followed by an afternoon cup 4-6 hours later. I wonder if this dosage pattern would affect the results of the Wentz & Magavi study.

  4. manwhoisthursday says:

    Bipolar depression shares genetic correlations with high B5 Openness:
    http://www.nature.com/ng/journal/v49/n1/full/ng.3736.html

    Schizophrenia also has genetic correlations with high B5 Openness. ADHD has genetic correlations with high Extraversion. And in the not-terribly-surprising category, anxiety disorders and such have genetic correlations with high Neuroticism.

  5. rahien.din says:

    Is there some huge problem with having too much synapse formation which the brain is desperately trying to avoid?

    Epilepsy!

    In my field, aberrant synapse formation in the hippocampus is a good candidate for the pathogenesis of temporal lobe epilepsy. The purported cycle is : seizure damages hippocampus, abnormal neurons proliferate and synapse inappropriately, these neurons cause seizures, which damage the hippocampus, etc.

        • rahien.din says:

          Ozy Frantz,

          People with epilepsy have a higher rate of mood disorders, too.

          And “low synaptogenesis causes depression” (even if it is found to be true) does not have to mean/imply “high synaptogenesis prevents depression.” There’s a lot wrapped up within “synaptogenesis.” Synaptogenesis could have different effects based on synapse types, dose, micro-/macroscopic localization, and if its source is genetic, the other effects of the causative gene.

          Probably, as Deiseach suggests, depressive symptomatology acts as a funnel for multiple causes (just as with dizziness, or sedation, or fever).

          There is so much left to learn. “Synaptogenesis” is yet a placeholder for verified pathophysiology.

          • Scott Alexander says:

            Re: people with epilepsy having more mood disorders – I thought I heard that ECT was invented by a guy who noticed that people with seizures had less depression. I wonder why the contradiction.

            I agree with Ozy that a low-synapse model of depression plus a high-synapse model of autism is at least suspicious.

          • rahien.din says:

            Scott,

            I may have overinterpreted/misread. Maybe the common ground is overall suspicion of the abundance-of-synapses model of depression. That’s definitely sufficient.

            A possible other hypothesis is that some degree of depressability is an outcome of the overall synaptogenesis plan, and that it is a normal and necessary adaptive reaction that can become maladaptive if excessive or if it occurs with no cause. Like anxiety : a little tiny bit of anxiety could be described as “attention to detail,” (and there are situations in which it is very weird not to be anxious, no matter your brain microstructure), but turn up the volume and you have “generalized anxiety disorder.” Some people are tuned to be a little more anxious – and some people are tuned a little more depressable.

            That would unite the “something about your brain makes you easily depressable,” and “depression is a reaction to your circumstances” ideas. It follows then that : maybe people with epilepsy and people with autism have higher rates of depression because those things just plain suck. We don’t have to second-guess their depression – being depressed is a plausible reaction to the suckiness of epilepsy or autism.

            So maybe what’s weird is that some people with autism don’t get depressed. Perhaps every person with autism[1] would be depressed, if not for their plenitude of synapses[1]?

            Any or all of that would fall to the same reasoning I described earlier. Much is unknown and I am just spitballin’.

            [1] mumble mumble phenotypic variability mumble mumble

            Regarding the interface between seizures and psychiatric disease, the rabbit hole is deeper yet :
            – Sometimes, a kid with autism will have a seizure and postictally their behavior and language will temporarily improve. Sometimes people (autistic and otherwise) will have a seizure and have postictal psychosis.
            – There is the somewhat controversial phenomenon of forced normalization, wherein abruptly gaining control of a patient’s seizures is linked to the development of psychosis. I have always wondered if this is the other side of the coin from ECT.
            – Many of the antiseizure medications can cause a mood disturbance as part of their sedative effects, which is more likely the worse your seizures are and/or the more affected your substrate. Some of the antiseizure medications stabilize mood. Neither of these is necessarily predictable by the drugs’ known mechanisms.

  6. Briefling says:

    A quick demonstration: glucocorticoids can cause thicker blood, inflammation can cause thicker blood, SSRIs cause thinner blood, folate causes thinner blood. This only draws in four of the six lines, and they’re pointing in opposite directions.

    It sounds like those are all pointing in the same direction. Things that cause depression cause thicker blood, things that treat depression cause thinner blood.

    A lot of these questions could be answered in one stroke if we had a good evolutionary theory of depression.

    Isn’t depression one of those things that basically doesn’t exist in the ancestral environment?

    My prior is, it would be really hard to get clinically depressed if you’re getting frequent social interaction and high levels of physical activity (which should have been the case for almost everyone until recently). If that’s not true, I’d be very interested to hear.

    • Deiseach says:

      Isn’t depression one of those things that basically doesn’t exist in the ancestral environment?

      Since we can’t hop into a handy time machine and go back 100,000 years to check, we don’t know.

      My prior is, it would be really hard to get clinically depressed if you’re getting frequent social interaction and high levels of physical activity

      This is the “all you need is a nice cup of tea and a chat” mindset which really pisses me off. You know when I really wanted to/was thinking about the best way to throw myself off a bridge? When I was getting all that nice healthy physical activity out in the sunshine and fresh air, and daily interacting with people at my place of employment.

      Oddly enough, I do better in winter, when I can stay indoors and not have to talk to or see people. Sunshine and getting outside and mingling with humans drive me down, they don’t raise me up.

      • Briefling says:

        I’m not trying to trivialize depression. But I suspect that, like diabetes (and cancer? and Alzheimers?), it’s a disease that occurs primarily due to the extreme lifestyle shifts made possible by modernity. And if that’s true, it’s worth saying.

        You know when I really wanted to/was thinking about the best way to throw myself off a bridge? When I was getting all that nice healthy physical activity out in the sunshine and fresh air, and daily interacting with people at my place of employment.

        I think these would still qualify as extraordinarily low levels of social interaction and physical activity, by historical standards.

        • Deiseach says:

          I think these would still qualify as extraordinarily low levels of social interaction and physical activity, by historical standards.

          I don’t want to eat the face off you because I think you are trying to offer what you think are genuinely successful interventions.

          But consider: if there was One Weird Trick to cure depression where all that was needed was “go to the gym five nights a week and make lots of friends”, then Scott’s profession could roll up its tents and the pharma companies could concentrate on the search for the female viagra.

          Can you see why your advice does not strike me as hugely helpful? You don’t know my particular levels of exercise and social interaction apart from what you can pick up in my comments, yet your stance is:

          You: Exercise and social interaction, just like the Good Old Days! That’s what’ll knock depression for six!

          Me: Exercise does not have the magic “ah, endorphins!” result for me

          You: Plainly the trouble here is YOU ARE NOT DOING IT RIGHT, YOU NEED TO EXERCISE MOAR!

          Me: I don’t like being around people, interacting with humans gives me headaches, makes me feel light-headed and nauseous, and if prolonged for too long a period makes me think that running amok is a reasonable way of dealing with social requirements

          You: Ah, I see the problem! What is lacking is MOAR PEOPLE ALL THE TIME LESS ALONE TIME!

          Me: I prefer winter to summer, unlike some I don’t find that SAD sets in and indeed the brighter, hotter weather does not agree with me (also, I sunburn terribly)

          You: See, here’s the snag! What you need is MOAR UV EXPOSURE FOR LONGER!

          As far as I’m concerned, your advice – though well-meant – falls on the spectrum of “the beatings will continue until morale improves” 🙂

          • Forge the Sky says:

            Both of these things could be true.

            Consider that exercising is a great way to reduce your risk of heart disease. Consider also that, once you HAVE heart disease, exercise is a great way to cause a heart attack.

            Maybe proper diet/exercise/socialization when done from infancy could help prevent depression, but once a few fuses blow they don’t help and can hurt by disrupting the body’s habitual patterns even more.

            But it’s also true that that is a fairly unsubstantiated, even if intuitive, theory at this point.

          • Briefling says:

            Thank you for being civil; my comment was definitely kind of dismissive, and I apologize.

            Really I don’t mean to argue that high levels of socialization and exercise can heal major depression; certainly not that they can heal major depression quickly. (And even if they could, it’s hard for a depressed person to exercise a lot, and extremely hard for a depressed person to socialize a lot.)

            What I do believe is that these factors are strongly preventive before an individual gets depression. As Forge the Sky says.

      • leoboiko says:

        It’s also highly subject to survivorship bias. Social ostracism is a pretty real and dramatic thing in forager groups; we’ve seen it happen and it will kill you, or make you permanently and miserably alone. Even if we can prove that there are no depressive people among foragers (and I insist we don’t know if that’s true), that may be simply because they all die. Postagricultural societies may have more depressives because they can afford to keep them around.

        Further, when I look at the list of suicides by country, I can’t see obvious correlates of exercise, social isolation and suicide. What’s the common cultural or lifestyle trend between Sri Lanka, Lithuania, South Korea, Bolivia and India (high-suicide) that opposes all of them to Brunei, Albania, Myanmar, Guatemala and Pakistan (low-suicide)? Exactly.

        • Postagricultural societies may have more depressives because they can afford to keep them around.

          On the “going on strike” theory, depression would make more sense in agricultural societies, where most people are under someone’s thumb.

      • Ralf says:

        > Since we can’t hop into a handy time machine and go back 100,000 years to check,

        But we could check indigenous tribes still living in the rain forest, and other very rural cultures?

        • Zodiac says:

          Are they sufficiently isolated and are there great enough numbers of them to draw useful conclusions?

          • Nancy Lebovitz says:

            Also, modern hunter-gatherers have as much time behind them as we do, and aren’t on the best land. They may be different from earlier hunter-gatherers.

    • TheEternallyPerplexed says:

      My prior is, it would be really hard to get clinically depressed if you’re getting frequent social interaction and high levels of physical activity (which should have been the case for almost everyone until recently).

      Although entries involving interaction or activity are rated among the more effective interventions here, the historical record is less clear. During ~1200–1900 in Europe there was an abundance of both, but it did not prevent the shift towards depression in the general population that is seen in (among other things) the whole “we are all doomed sinners”, “repent”, “vale of tears” mood and coloration of art and the predominant religion (please read the book below for the detailed argumentation, no “but there were no psychiatrists then so we can’t really know”, ‘K?).

      The cause was a colder climate (the ‘Little Ice Age’) – reflected e.g. in the clothing of people in pictures from those times: light flowing dresses and cleavage before 1200 vs. furs, high (anti-draft) collars, and velvet after 1300.

      Among the discussed mechanisms are cloudier skies (less light, SAD longer/stronger/all-year), hard rains and storms or too little sun, leading to staying indoors more, crammed together with fellow humans (more aggression, easier spread of infections) or rats that moved in (the plague), not to mention frequent crop failures and stocks rotting. Down the causal chains, distribution conflicts, social upheaval and witch hunts added to life stress.

      I recommend Behringer’s “A Cultural History of Climate”, it’s easy reading and very informative.

    • Murphy says:

      Don’t mistake lack of documentation for lack of physical reality.

      There are few records from WW1 of PTSD.
      There are, however, plenty of records of people being shot for “cowardice”; later they started getting a handle on the idea of “shell shock”, which was still often viewed as a “lack of moral fiber”.

      Do you expect them to have called depression depression? Up until quite recently in real terms a large fraction of the population were living in abject poverty and surviving pretty much on a knife edge.

    • Scott Alexander says:

      “It sounds like those are all pointing in the same direction. Things that cause depression cause thicker blood, things that treat depression cause thinner blood.”

      Oooh, you’re right, thanks.

    • vV_Vv says:

      Isn’t depression one of those things that basically doesn’t exist in the ancestral environment?

      There is no way of knowing. What we know is that since writing was invented, we have accounts of people being sad and inactive for long times (depressed, as we would say in modern parlance), people committing suicide, and people becoming alcoholics.

      Depression certainly existed in ancient times, though we don’t know if it was as frequent as it is now. Probably there was stronger selection against it: the few wealthy people could stay depressed for long times, write poems about it and maybe even eventually kill themselves, while the depressed peasants either pulled themselves up by their bootstraps or starved to death.

  7. ryanwc4 says:

    >Is there some huge problem with having too much synapse formation …

    Isn’t one mainstream theory of autism …

    Edited because Azure said it better with a link.

  8. Surprise, surprise! Psychedelics also increase BDNF and mitigate depressive symptoms.

    As for how it could be problematic to have too many synaptic connections, I have one word for you: tripping. If there were any subjective experience that just screamed “too many synaptic connections being made, too many thoughts, not enough pruning and logical organizing,” it is the subjective experience of tripping on psychedelics. While some people enjoy visiting this state from time to time, I imagine that few people would want to (or find it adaptive to) feel like this all the time, whether in an ancestral environment or in a modern context.

    Still, habitual micro-dosing and/or occasional supervised tripping sessions could be promising for treating depression and…dare I say it, even becoming mentally sharper all around and bumping one’s IQ up a few points in a lasting way. One of the nicest things about going on a trip every few years is the renewed feeling of mental crispness that I’m left with for several months afterwards. The best way I can describe it is, it feels like learning new things and recalling things becomes as easy once again as when I was 10 or 11 years old. So, it’s like I get to return to that fluid intelligence while also keeping my accumulated crystallized, domain-specific intelligence as an adult. It’s pretty rad. And it makes sense if the stuff really is helping to grow new synaptic connections.

    If there were a pill that gave this feeling without having to go through a trip first, I’d be very interested. “Ask your doctor about orally active BDNF!” One can dream….

  9. alwhite says:

    What is the argument against “software” causes of depression and/or software cures?

    CBT is claimed as the most effective non-medication treatment for depression and it’s even claimed as equally effective as medication. CBT is all about changing thought patterns, ie changing software.

    Can we even describe or define depression as a biological/hardware problem? It seems like all of our diagnostic criteria, whether DSM or Beck Depression Inventory, rely on self-reported, software-like symptoms (“I feel down”, etc).

    • Deiseach says:

      I’d say CBT works for some people/some forms of depression, but not for others. And depression is a lot more than simply “I feel down”, but that’s where symptoms get murky. It’s very hard to communicate how it feels to constantly be “not there”, when you’re not sure exactly what there should be anyway, and all you have are phrases like “I feel down, I have no energy, I’ve lost my enthusiasm, I don’t enjoy the things I used to anymore” etc.

      • baconbacon says:

        Link 1

        Link 2

        This is the best explanation of what I felt when I was depressed as well. The most important intentional factor in not being depressed anymore for me was getting a dog. Something about the whole package really worked for me.

  10. Deiseach says:

    I think that there isn’t one over-all illness called “depression”, there are various depressive illnesses that get lumped in under the one umbrella. So that’s why different treatments seem to work in different fashions.

    I do think biology has something to do with it, but who the hell knows what. Some people may be pre-disposed to be depressives, so that you get whomped with the life stressors or something like “whoops your auto-immune system is attacking itself with the inflammatory response” and as an added bonus just for you, we’ll throw in depression as part of the package!

    Other people have it from birth, practically, so even if “life is okay, I’m not sick, things are pretty normal”, they still suffer from depression (and I think treatment-resistant depression falls into this side of the balance).

    I have a family member who is depressed and on top of that she has hypothyroidism, so when her doctor got that sorted out, it helped with the depression. If that gets out of balance again, the depression gets worse, so see the biological explanation.

    I think there is also “normal” depression, in that if your life goes to hell in a handbasket, it’s normal to react with depression. But once you get things sorted out to an acceptable or functional level, the depression clears up. Maybe in the biologically pre-disposed people, this is what triggers the depressive reaction: “how come so many people get depressed for no reason, stay depressed for a few months to a few years, and then get better – only to relapse back into depression a few years later?”

    The ‘normal’ people get normally depressed but once the stressors are removed, or they’ve learned how to deal with them, that knocks the depression on the head (I think this is why CBT works for some people). The ‘biological’ people get depressed for the same reasons in the same situation, once the situation improves the depression goes away – but it’s like getting shingles or a cold sore virus; once you’re run-down again or something triggers it, it flares back up. The depression hasn’t really gone away, it’s just gone into remission after it’s been activated and your system has been sensitised to it.

    • alwhite says:

      The really weird thing about CBT is how it works. Much of the time the client is asked to keep a thought record, a practice in learning awareness about your own thoughts. Evidence is almost always demanded for thoughts. Depression says “everybody hates me”, therapist “what is the evidence for that thought?”, “Does literally EVERYBODY hate you?” And it continues on this way.

      I think there’s a very real phenomenon that if you think the wrong kind of thoughts for a long enough period, you will give yourself depression, and I think most of CBT relies on this idea. At the very least, I think we can say this is true for some people.

      From this perspective, when we talk about depression in remission, it seems off. It’s like I have a practice of smashing my leg with a hammer, which results in repeated broken legs. After stopping the hammer beatings we then say my broken leg has “gone into remission”. Just doesn’t seem right.

      Then there’s loneliness. The research shows that loneliness can cause all of the symptoms represented by depression and that increasing social connection both cures and protects against depression.

      Sure, I can totally understand how things like hypothyroidism cause depression, but when trying to tackle the giant umbrella of depression, it seems like we need software-like solutions held in equal regard with hardware-like solutions, unless we can effectively detect and segregate the different types.

      • Deiseach says:

        From this perspective, when we talk about depression in remission, it seems off. It’s like I have a practice of smashing my leg with a hammer, which results in repeated broken legs. After stopping the hammer beatings we then say my broken leg has “gone into remission”. Just doesn’t seem right.

        Which is why I said CBT works for some people and some forms of depression. If you’re complaining about crippling leg pains, and a neutral third party points out mildly that this might have something to do with you hitting your shins with a hammer, then you can ‘put your depression into remission’ by stopping doing that.

        However, if your leg pains are caused by a bear gnawing on your ankle, a third party telling you “now just ignore the bear, don’t entertain its presence, negative thoughts reinforce negative behaviour” is not going to help. You need to get the bear to let go.

        Or maybe it’s not a bear, maybe you have leg pains because both your legs have been cut off below the knees. Again, talking about ways to feel upbeat and ignore the missing halves of your lower limbs are not going to be much use.

        In other words, CBT is not going to help as much when it comes to “your negative thoughts are exaggerating and distorting the reality of the situation and things are not, objectively, that bad” if you are in a situation where objectively things are gone to hell and are indeed shitty.

        But what I meant by depression going into remission was meant to parallel an allergic or viral condition; you suffer an attack of depression for a good reason (you lose your job unexpectedly and it’s hard to find a new one; your house burns down with all your goods the day after your insurance lapsed), you get over that (maybe by gritting your teeth and bootstrapping your way out, maybe with treatment) and then we have two possibilities: you are not someone genetically predisposed to depression, so further down the road you are not likely to lapse back into depression for a minor set-back, or you are someone who is so disposed, so now your system is ‘sensitised‘ and when a lesser stressor or some other situation occurs, or maybe just out of the blue, you relapse into depression.

    • Debug says:

      I think that there isn’t one over-all illness called “depression”, there are various depressive illnesses that get lumped in under the one umbrella. So that’s why different treatments seem to work in different fashions.

      If this is true, it might be why it’s so hard to develop an evolutionary theory of depression. We have a bunch of subsystems which we need to develop evolutionary theories for, but due to the interactions between these subsystems it’s really hard to identify the function of each subsystem. In a simple case, imagine you have three subsystems and subsystem one – due to genetic load – isn’t performing in an evolutionarily optimal manner. Subsystems two and three will also be performing differently than expected. This gets worse if there is feedback between the subsystems.

      So as you say – if life goes to hell in a handbasket it might be normal to react with depression but the other cases are abnormal as they are primarily maladaptive. I’m unsure if people have developed extensive vocabularies to describe the different types of depression but it seems like it would be necessary to begin to understand depression. There’s also the problem of developing vocabularies to describe the range of symptoms people exhibit so maybe it’s all a wash.

    • manwhoisthursday says:

      One might quibble with the details, but this is essentially right: we don’t know yet if this is one disease or different diseases with similar symptoms. Kinda like flu and cold wouldn’t have been distinguished from each other back in the day.

  11. Bram Cohen says:

    There’s been a suggested evolutionary theory of depression focusing on how it happens in humans and baboons but not other primates, which postulates that depression happens to creatures capable of changing their environment and it causes the individual to spend time trying to figure out what they should do. This seems to dovetail reasonably with the sort of situations which trigger depression in the first place (including just plain living in the modern world), and an easy just-so story for why people pull out of it is that their brain eventually gives up on changing anything and goes back to being content. If this is the case, then maybe trying to stop depression directly while it’s happening is a fool’s errand and what should really be done is fixing the mechanism for pulling out of a funk which seems to be broken in depressive people. If existing drugs do that, then that might explain why they take so long to kick in. Using drugs to pull the person directly out might have the ill effect of forcing the person to stay on very strong medications the rest of their lives or rapidly go into a tailspin of depression if there’s any hiccup in their medication. It also might cause people to live what are fundamentally unfulfilling lives and not do anything about it.

    • TheEternallyPerplexed says:

      Stop giving people choices?

      • John Nerst says:

        You mean like in “Choiceless mode”, which the author argues is the natural way for humans to live?

        • TheEternallyPerplexed says:

          Naaaaa… Can’t follow that, because
          Unfortunately, the choiceless mode depends on ignorance of alternatives. It’s usually impossible for nearly everyone in the developed world, and survives mostly only in remote areas in the most “backward” countries.

          EDIT: Maybe a temporary relief from having to manage all of life could work, say, in a clinic. With an additional re-evaluation of life priorities and lifestyle (and if necessary, training required skills) afterwards (do it too early and the patients are just pushed back into overwhelmed mode). Would require clinics to accept that it’s not always the dreaded “facilitating regressive development” for the first part.

    • agmatine says:

      THIS. Longtime depression sufferer here, both chronic and, at times, severe. I have a working hypothesis, suggested by my own and others’ (low n) experience, that being on antidepressants and then going through a breakup, say, or starting antidepressants right after a breakup, IS EXACTLY WHAT YOU DO NOT WANT TO DO. Not being able to “prune” the old memories, always being bathed in a flood of BDNF to protect those synapses that really, really need to wither, just makes things worse, more obsessive, and in the long term makes the person less able to grow and move on.

      Being on semi-effective antidepressants, then going through a sudden and traumatic breakup for which the response was to increase dosage, resulted in over a year of horrific pathological obsessive grief… in my mind, caused by “unpruned” synapses.

  12. Slocum says:

    I’ve always been attracted to the bargaining model of depression:

    http://anthro.vancouver.wsu.edu/media/PDF/Hagen_2003_The_bargaining_model_of_depression.pdf

    I wouldn’t say that this accounts for all forms of depression, but it seems to have quite a lot of explanatory power. The idea is that depression is inherently social. It’s a form of going on strike, but not a voluntary one. And it’s a costly signal — the depressed person neglects their duties, neglects their own interests, even their personal hygiene. True depressive behavior of this kind is very hard to fake (sort of like being head-over-heels in love is very hard to fake — any unaffected person would find it terribly embarrassing to do the silly things that an infatuated person does. Which is the evolutionary point of it). Depression is a way for a low-status person to involuntarily demand better treatment from their friends and loved ones.

    • Yakimi says:

      Compelling. This would suggest that allowing sick leave for depression would only enhance the bargain, thereby further incentivizing depression.

      • tmk says:

        Compelling because it fits the evidence well, or because you like the moral implications?

    • Zodiac says:

      This is also just from low n observation but don’t depressed people often try to hide their depression for as long as they can?

      • TheEternallyPerplexed says:

        Even from themselves, and then they begin acknowledging it with the term “burnout”.

  13. Pseudocydonia says:

    Scott, I tried Semax a while back on the basis of one of your nootropics surveys – and I found it to be very effective. Its mechanism of action is also supposed to be an acute, immediate release of BDNF.

    If this model is essentially correct, it is yet another reason why the iron curtain of psychopharmacology is so tragic.

    • Scott Alexander says:

      I didn’t know Semax was a direct BDNF releaser. The only thing like that I’d ever heard about was Lion’s Mane, which doesn’t really do anything. Anyone know of anything else in this class?

  14. Kevin says:

    My first thought experiment not mentioned in your post, was exercise.

    Exercise appears to increase synaptogenesis (https://scholar.google.com/scholar?q=exercise+synaptogenesis).

    Thus, one would predict that exercise would improve depression, which it appears to do (http://bjsm.bmj.com/content/35/2/114.short).

  15. abc says:

    Well, one theory is that depression (or at least bipolar disorder) is a way of acting out to force other tribe members to pay attention to you and thus raise one’s status.

    Sometime ago in my wild and reckless youth that hopefully isn’t over yet, a certain ex-girlfriend took to harassing me with suicide threats. (So making her stay alive was presumably our common interest in this variable-sum game.) As soon as I got around to looking at the situation through Schelling goggles, it became clear that ignoring the threats just leads to escalation. The correct solution was making myself unavailable for threats. Blacklist the phone number, block the email, spend a lot of time out of home. If any messages get through, pretend I didn’t receive them anyway. It worked. It felt kinda bad, but it worked.

  16. hforrestalexander says:

    Why does the body have so many “decrease synaptogenesis” knobs? That is, why go through the trouble to evolve all these chemicals and systems whose job is to tell your brain to decrease synapse formation so much that you end up depressed?

    Hand-wavy thick-blood* hypothesis (along the lines of the optimal activity theory of depression “baconbacon” introduces in a top-level comment):

    Sometimes an organism might find itself in a non-ideal situation where it has very few options. Maybe the safest immediate thing to do is to reduce activity, correspondingly reducing risk. Once you’ve correctly recognized the situation as one in which you have few options, it isn’t necessarily a good idea to learn from the experience. That is, the sustained state of reduced options might be relatively harmless, but isn’t a fruitful way to exercise interesting synaptic connections. Perhaps, besides being the safest thing to do, curling up and sleeping is a way to avoid developing an entirely new set of habits that will be inappropriate once the depression somehow passes.

    Major flaws in this vague hypothesis include:
    * At the very least, anecdotal evidence suggests that depressed people probably do form more bad habits, while depressed. Similarly, as Scott hinted, I don’t get the impression that depressed people actually fail to learn new things (say, from a book) if they actually engage in the right activities. On the other hand, anecdotally, the most depressed people I know do seem to fail to retain certain kinds of information, including personal insights about their own depression and mental state.
    * The situation I described, where an organism has few options, is in nature probably either escapable or fatal. If it’s the former, then “curl up and wait it out and repress it” sounds a lot less helpful. If it’s certain doom, then there’s no selection pressure to favor a particular cognitive strategy.
    * Learning and neuroplasticity entail a much more complicated correspondence than “new synapses = new behaviors”.

    In any case, it seems clear that humans have a lot of ways to create “depressing” situations that are somewhat decoupled from transient or periodic environmental conditions. And, again, anecdotally, it seems obvious that depression in humans is rife with feedback loops (a classic one being depression-induced inactivity leading to a lack of new, fulfilling experiences, and, presumably, more depression). I’d be interested in re-examining the factors Scott discussed in his post, with an eye towards the extent to which depression causally feeds back into stimulating the intensification of the (purportedly causal) factors preceding it. Presumably, depressed people are more prone to inflammation (as a correlation). But which way does causality go, and is there a feedback loop?

    * Comparing to Scott’s cherry picking is generous, since I didn’t even cherry pick. But I claim that all of the bold assertions here could probably be justified with a literature search, adding absolutely zero clarity to the discussion.

  17. justinliebernotes says:

    Why should decreased synaptogenesis cause depression, of all things? If you asked me, a non-neuroscientist, to guess what happens if the brain can’t create new synapses very well and loses hippocampal volume, I would say “your memory gets worse and you stop being able to learn new things”. But this doesn’t really happen in depression – even the subset of depressed people who get cognitive problems usually just have “pseudo-dementia” – they’re too depressed to put any effort into answering questions or doing intelligence tests. Why should decreased synaptogenesis in the hippocampus and prefrontal cortex cause poor mood, tiredness, and even suicidality?

    My intuition here is that depression may be tied to lower-than-normal synaptogenesis somewhere, but not necessarily the hippocampus. So increasing synaptogenesis everywhere increases it in a whole bunch of places that don’t matter (including the hippocampus) as well as the mystery target area that’s causing the problems.
    Given how strongly depression seems to affect motivation, I’m surprised there aren’t stronger links with the reward/dopamine system.

    • TheEternallyPerplexed says:

      There is an interesting one: retinal contrast-processing neurons are influenced by dopamine levels. Depressed patients show less of this activity, literally seeing more gray-in-gray rather than bright white against dark black. It can be measured easily with a thin thread electrode in the lower lid of the eye (and a counterpart somewhere else) – think of a one-point EEG. Eyes of depressed patients give readings of <2µV when presented with strong b/w checkerboard patterns; eyes of healthy subjects give around 3µV. Voltages even correlate with the timing of depressive episodes. Media speculation says you could go to your ophthalmologist for a quick depression check in a few years.

      • doublebuffered says:

        Thank you for explaining why lights seem brighter to me when I’m not feeling as depressed! This explanation is making me wonder if there’s actually some correlation between the literal contrast sensations and the feeling that life is “greyer” under depression. Maybe that metaphor develops out of the sensory changes.

      • TheEternallyPerplexed says:


        The opposite may also be true: some hypersensitivity to what patients call “light” in other mental diseases may really be a sensitivity to sharp contrast.
        I don’t know enough to say which are correlated with *increased* dopamine levels or effects.

        EDIT: Original paper, voltages here were even more different than I remembered. They used b/w patterns, maybe there is a similar effect for colors, also possible at a later stage of processing (related to this?)?

    • Scott Alexander says:

      I think that MRIs and BDNF injection experiments have pretty consistently found that the hippocampus and parts of the cortex are the relevant areas, but I’m not sure how strong those results are.

  18. fortaleza84 says:

    Just shooting from the hip, but my approach would be to understand depression by first looking at the mild non-clinical depression that most people suffer now and then.

    From introspection, what seems to cause mild depression are (1) lack of sunlight; (2) insufficient exercise; (3) social rejection; and (4) other bad news where I have little control. So I would guess that depression (at least mild depression) is an evolutionary response to put you into an energy-saving state. Perhaps clinical depression is a situation where the normal depression response goes overboard. Kind of like clinical anxiety.

    • engleberg says:

      Just shooting from the hip, I’d look at Darwin’s The Expression of the Emotions in Man and Animals – the six basic emotions of anger, fear, appetite, disgust, happiness, unhappiness. What is depression? Unhappiness. Is it noticeable in lower animals, plants, bacterial colonies? Yes, but you kind of have to want to see it and have read Darwin recently. Can you see it in dogs, pigs, chimps, the enlisted swine, lawyers? Sure.

  19. void_genesis says:

    Given the long history of depression/melancholia I think it is unhelpful to link it too closely with modern influences such as caffeine or artificial light, other than to speculate about why it is more common. It also seems to be present in simple societies, even the !Kung.

    It seems more like a homeostatic state change that can be brought on by multiple factors, similar to the idea of a fever being a general physiological state in response to all sorts of infections or hormones or other causes. As such it probably has an adaptive advantage under the right circumstances. I see the parallel with obesity here – in precivilisational and even preindustrial societies the ability to eat as much as possible and put on weight during good times would be adaptive for unreliable food supplies. Under industrial conditions depression can be maladaptive, in that we demand that people be constantly engaged and “productive”. Even non-depressive introverted people find this burdensome.

    If depression is designed to realign behaviour, minimising energy output and distractions so one can ruminate on a problem with a view to solving it sooner, then in modern societies people may be so highly constrained by their circumstances that they get stuck in that state longer than they should, or, even worse, are simply denied the “luxury” of going through the necessary process of being depressed. Kind of similar to people taking painkillers and fever-reducing medication because life demands they keep work-work-working and cannot rest and recover instead.

    This brings up the disturbing possibility that modern science may indeed find a way to switch off depression (more consistently and safely than current medications at least) so people can pop a pill and remain at least serene if not blissful in their allotted work station in the ant-hill we call civilisation.

  20. stochasticlight says:

    Is there some huge problem with having too much synapse formation which the brain is desperately trying to avoid?

    How about the good old stability-plasticity dilemma? The unbridled creation of new synapses would lead to catastrophic forgetting just as surely as the loss of old ones, by altering what each neuron responds to.

    Maybe all these fast depression therapies act by temporarily shifting the stability-plasticity balance a tiny, tiny bit towards more plasticity, knocking the brain out of its stable but miserable thought patterns. But massive, constant rewiring would essentially scramble your brain.
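
    To make the catastrophic-forgetting idea above concrete, here is a minimal toy sketch (my own illustration, not anything from the neuroscience literature): a single linear unit trained by plain SGD on one task and then on a conflicting task, with nothing protecting what it learned first. The tasks, learning rate, and data are arbitrary assumptions chosen only to show the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_fit(w, X, y, lr=0.1, epochs=200):
    """Plain least-squares SGD: keep adjusting w toward whatever task it is shown."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = w + lr * (yi - w @ xi) * xi
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Task A: y = x0 - x1.  Task B: y = -x0 + 2*x1 (a conflicting input-output mapping).
X_a, X_b = rng.normal(size=(50, 2)), rng.normal(size=(50, 2))
y_a = X_a @ np.array([1.0, -1.0])
y_b = X_b @ np.array([-1.0, 2.0])

w = sgd_fit(np.zeros(2), X_a, y_a)
print("after task A: error on A =", round(mse(w, X_a, y_a), 4))

w = sgd_fit(w, X_b, y_b)   # full plasticity: nothing stabilises what was learned on A
print("after task B: error on A =", round(mse(w, X_a, y_a), 4),
      "| error on B =", round(mse(w, X_b, y_b), 4))
```

    With full plasticity and no stability mechanism, learning task B drives the error on task A from roughly zero back up to a large value – the toy analogue of unbridled rewiring scrambling what was already learned.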

    • agmatine says:

      Which is of course what ECT does…blank out all memories from months to a year before the treatment series

  21. Furslid says:

    There’s a possible evolutionary explanation for depression even if it is maladaptive: the anti-depressive state is so maladaptive that avoiding it is worth risking depression. Think of sickle cell disease – the resistance to malaria conferred by the trait is so valuable that the disease persists despite the strong selection pressure against it.

    What could be wrong with anti-depression?
    Rather than lethargy, needless activity. This increases starvation risk. This makes getting proper rest and healing for injuries less likely.
    Rather than avoiding useful risks, taking needless risks. This increases predation and conflict between humans.
    Rather than sticking to a bad routine, excessive innovation. Most innovations don’t work in the evolutionary environment. This wastes resources and increases risks.
    Rather than social isolation, increased social contact. If depression is linked to immune response, depression may slow the spread of disease, especially STDs.

    • Anonymous says:

      I’ve read somewhere that depressed people have a more accurate view of the world than non-depressed people. Which is plausible, since the latter group is more likely to be unjustifiably optimistic than the former. Maybe “depression” is piggy-backing on “accurate perception”, like the Jewish genetic illnesses are piggy-backing on intelligence?

    • What you are calling anti-depressive is pretty much mania, but it is quite possible to be neither manic nor depressed.

  22. tinyangrycrab says:

    Worth noting there are some other interesting drugs that have strong effects on growth factors eg BDNF and NGF. One that I have looked at in particular is apomorphine, which seems to in some case have protective effects in Parkinson’s and actively regenerative effects on dopaminergic varicosities. It had an interesting history in the treatment of addiction (initially as an aversion therapy, but then at subemetic doses). Arvid Carlsson is still convinced to this day it works.

    I would be very interested to see a test of apomorphine for depression, actually, as it’s anxiolytic to boot.

  23. Baughn says:

    I might be able to suggest an explanation for your last question. At least, this seems like a regime in which control theory may be applicable.

    Control systems that try to maintain homeostasis tend to fall into one of three stereotypes, depending on whether or not they’re correctly tuned.

    – Stable. If everything works right then the error signal will stick close to zero, and any excursions will last only a short time (on the timescale of the system) before being corrected. The system won’t overcorrect, or overcorrections will be small relative to the original excursion.

    – Cyclic. Stability is fleeting and unreliable; the system will tend towards a cyclic form of metastability where it constantly overcorrects. A plot of this may look roughly like a sine curve. The error graph is limited to a maximum amplitude, though the exact value of that may vary.

    Of course there are states in-between the two above to consider as well, but… Did that sound a bit like bipolar disorder? I can certainly see why it would be attractive to think so.

    You can get this behavior with about three analog components in a PID controller, though, and the brain is vastly more complicated. But that doesn’t mean it can’t be what’s happening.

    And for completeness:

    – Unstable. Errors are magnified rather than corrected, or alternately there are cyclic errors whose magnitude grows over time up to the physical limit of the system. These get bundled together because both conditions are generally fatal, as the control system usually has a purpose for existence. (The usual fix is to start from scratch.)

    … Does *that* sound like anything?
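
    As a rough illustration of those three regimes – a toy sketch with made-up numbers, not a model of any real neural circuit – here is the simplest possible proportional feedback loop: a single variable nudged toward a set point of zero each step, with the gain deciding whether it settles, oscillates forever, or blows up.

```python
import numpy as np

def simulate(gain: float, steps: int = 40) -> np.ndarray:
    """Correct a one-dimensional 'error' toward zero by a fraction `gain` each step."""
    x = 1.0                       # initial deviation from the set point
    trajectory = []
    for _ in range(steps):
        x -= gain * x             # gain > 1 overshoots past zero (overcorrection)
        trajectory.append(x)
    return np.array(trajectory)

for gain, label in [(0.5, "stable"), (2.0, "cyclic"), (2.3, "unstable")]:
    traj = simulate(gain)
    print(f"{label:9s} gain={gain}: last five errors {np.round(traj[-5:], 3)}")
```

    With gain 0.5 the error decays smoothly, at 2.0 each correction overshoots by exactly as much as it fixes and the system oscillates forever, and above 2.0 each overshoot is larger than the last – the three stereotypes described above.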

    • Scott Alexander says:

      The problem with using any cycling theory to explain bipolar disorder is that most sufferers have long periods of normal mood in between their depressive and manic episodes.

      • liberosthoughts says:

        What about cyclothymia?
        Besides, with Fourier analysis you could theoretically decompose the different rhythms, ultradian and infradian.
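
        To show what that decomposition might look like in the crudest possible way, here is a toy sketch on a made-up mood series (the signal, sampling rate, and cycle lengths are all invented for illustration; real mood data would be far noisier and more irregular):

```python
import numpy as np

hours = np.arange(0, 120 * 24)                      # hourly samples over 120 days
days = hours / 24.0
mood = (np.sin(2 * np.pi * days / 30)               # infradian ~30-day cycle
        + 0.4 * np.sin(2 * np.pi * days / 0.5)      # ultradian ~12-hour cycle
        + 0.3 * np.random.default_rng(1).normal(size=days.size))  # noise

spectrum = np.abs(np.fft.rfft(mood - mood.mean()))
freqs = np.fft.rfftfreq(days.size, d=1 / 24)        # frequencies in cycles per day

for i in sorted(np.argsort(spectrum)[-2:]):          # the two strongest components
    print(f"recovered period ~ {1 / freqs[i]:.2f} days")
```

        The two strongest peaks come back at roughly 30 days and 0.5 days, i.e. the infradian and ultradian rhythms that were mixed into the series.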

  24. marchemars says:

    The interesting implication of one hypothesis for the evolutionary adaptiveness of depression has to do with treatment: the idea that people become depressed as a way to credibly signal that they need more support from their group than they are getting.

    Sister Y speaking on this topic 9 years ago. Surely the field has advanced since then, and I couldn’t predict how, but this “social motivation function” seems interesting.

  25. Nancy Lebovitz says:

    I’m extremely dubious about depression being adaptive, especially the more severe sort of depression. If depression were adaptive, I think it would have a better off switch.

    My model is that broken bones aren’t adaptive. Having bones is adaptive, and we live in a world where sometimes bones are subjected to more stress than they can take.

    Also, evolutionary theories seem to just address the lethargy part of depression and ignore the misery part.

    If I’m right, the thing to research might be trying to get better understanding of how healthy people work.

    For example, speaking as a person with some serious problems with akrasia, how does a person go from having an intention to doing something to achieve it? I can do that much more easily at some times than others, but what’s the difference? Sometimes, enacting an intention seems like having to haul myself over a high threshold. This might be mostly physiological, but I’m not sure.

    • Andrew Klaassen says:

      My model is that broken bones aren’t adaptive. Having bones is adaptive, and we live in a world where sometimes bones are subjected to more stress than they can take.

      That’s a good point. Another model for you: Lack of oxygen to a baby’s brain causes brain damage. But if you cool the brain just after the injury, the amount of injury is greatly reduced. Without cooling, there’s a cascade of biochemical events which makes the injury much *worse* than what was strictly caused by the lack of oxygen. This natural response is clearly maladaptive; it increases the chance of mental retardation by about 50%. Without intervention, the system is pushed beyond what it can handle, and it amplifies the injury instead of limiting it.

      Also, evolutionary theories seem to just address the lethargy part of depression and ignore the misery part.

      The misery part seems like it fits well with your broken bone example, in support of the evolutionary Just So story: The pain is there to enforce inaction. It says: Don’t move your broken leg, or you will be punished with physical pain; don’t get your lethargic self out of bed, or you will be punished with emotional pain. (Although… it doesn’t always work that way with depression, does it?)

    • baconbacon says:

      I’m extremely dubious about depression being adaptive, especially the more severe sort of depression. If depression were adaptive, I think it would have a better off switch.

      The evolutionary discussion isn’t about X’s current experiences with depression; it’s about understanding how basic, short-term depression could be beneficial. There is a big difference between looking at clinical depression as a screw-up of a system that would normally produce a base level of depression at times, and as a screw-up of a system that, when working well, constantly prevents any kind of depression from occurring.

      Also, evolutionary theories seem to just address the lethargy part of depression and ignore the misery part.

      For at least some people the initial stages of depression don’t include misery, link for one person’s description.

      My model is that broken bones aren’t adaptive. Having bones is adaptive, and we live in a world where sometimes bones are subjected to more stress than they can take.

      While broken bones aren’t adaptive, having portions of your bone that are easily broken at times of your life is adaptive. See growth plates.

    • rahien.din says:

      Think of anxiety.

      If a person has a tiny bit of anxiety, it exists as a little worry in the back of their brains. “Did I double-check my calculations?” “Have I considered all potential causes for my patient’s symptoms?” “Is there something I could be doing better to help my child succeed?” Etc.

      It is adaptive in small doses, in which case we call it something like “attention to detail” or “being driven.” Turn up the volume and you get so anxious that it starts to affect your life adversely, and we call it something like “generalized anxiety disorder” or “this is why you have migraines.” Being anxious isn’t being broken. Being anxious is having an excess dose of worry.

      Depressability could exhibit a similar dose-response.

      I get that it is hard to say exactly what a tiny dose of depressability would do for a person, but if we are going to entertain the notion that it could be adaptive in small doses, we first have to discard the “depressed = broken” model. Depression isn’t being broken. It may be a normal response. People who are depressed all the time or for no reason may just have an excess dose of depressability.

      • swarmofbeasts says:

        A tiny dose of depressability might be the ordinary responses of guilt/shame. What depression looks like for a lot of people is thoughts like “I’m a burden on those around me,” “I’m a bad person,” “I don’t do worthwhile things with my life.” It’s not inconceivable that these thoughts could be adaptive in tiny doses – as an incentive to repair social relationships, make amends, work hard. (Haven’t we all met people who seem completely unfazed by social rejection, immune to any kind of shame or guilt response, who breeze through life until they realize they’ve burned one too many bridges?)

        The trouble with full-scale depression, of course, is that it often drives one to socially isolate oneself and do less, so that one ends up “proving” to oneself all those depressed ideas about being a burden and insufficiently productive and so on.

  26. Mediocrates says:

    Fascinating stuff. I would take issue with the claim in the caption that nobody is capable of memorizing the mTOR pathway in all its terrible majesty; I’ve met such folks, they walk among us. I’m not one of them, but I do have a few points to add.

    Rapamycin/sirolimus is commonly referred to as an mTORC1 inhibitor, but it’s sort of the odd man out among that class of drugs (despite being the founding member). Unlike most of the synthetic mTOR inhibitors ground out by big pharma med-chem teams, rapamycin doesn’t directly inhibit the kinase activity of either mTOR complex. Instead, it works through a complex mechanism involving a bank shot off of an accessory protein (FKBP12), which seems to inhibit mTORC1 and have some more complex, tissue-specific effect on mTORC2. Even the mTORC1 inhibition is only partial: not for nothing did David Sabatini, perhaps the world’s preeminent mTOR expert (who definitely has the whole pathway stored in RAM at all times), publish a paper bluntly titled “Rapamycin inhibits mTORC1, but not completely”.

    All that’s simply to say that rapamycin’s failure to induce depression may only count as weak evidence against mTORC1’s involvement, depending on exactly what mTORC1 is doing to regulate synaptogenesis. The second-generation mTOR inhibitors coming out of pharma, which completely block the signaling activity of mTORC1 and/or mTORC2, might make a better test of the hypothesis if someone took a close look at the Phase I trial data. All I could find with a quick dive is this slide deck claiming that 18% of the patients in Novartis’ BKM-120 trial reported depression.

    Also, I wouldn’t necessarily count the folate connection out just yet. There may not be any research directly linking folate metabolism to synaptogenesis, but it looks like mTOR can probably sense folate levels, and folate deficiency probably suppresses mTOR signaling: paper. Not terribly surprising, since mTOR’s generally thought to be the “nutrient availability” signal integrator.

    • Scott Alexander says:

      Is there anything that does inhibit mTOR really well, and do we know if it causes depression?

      • Mediocrates says:

        Pharma’s cranked out a boatload of mTOR inhibitors in the past decade or two, a gold rush sparked by the observation that the mTOR pathway is frequently dysregulated in cancer. However, unlike rapamycin these compounds pretty much all hit both mTORC1 and mTORC2, which govern somewhat different pathways (so if synaptogenesis is truly mTORC1-driven, as the article suggests, they may be imperfect tools). There’s been a lot of interest in specific mTORC1 inhibitors but I don’t think anyone’s cracked that nut; looks like Sabatini’s taking a swing at it, though.

        A lot of the mTORC1/2 inhibitors have made it into clinical trials for cancer, but to my knowledge none of them have made it back out – looks like the FDA-approved mTOR inhibitors are all still “rapalogs”. These compounds tend to fail on efficacy, as most tumors seem to be able to weasel their way around an mTOR blockade pretty quickly (for an interesting and non-obvious reason). What this means is that we – or rather, some Big Pharma datavaults – have a lot of Phase I/II safety data, but we may never get the chance to see what long-term dosing in a large population looks like.

        That said, that PhI/II safety data might be instructive if anyone could get a close look. A lot of those compounds were brain penetrant, and could get up to high enough levels to hit brain tumors in animal models.

  27. Forge the Sky says:

    This is obviously something we don’t know nearly enough about yet, and I can’t provide much past reiterating the questions in the OP. But two things that might be interesting:

    First (and correct me if I’ve misremembered), it seems like whenever we look at the neurology involved in depression, the forebrain is much more implicated than mid- or hindbrain structures. This is interesting because human forebrain structures are in a sense fairly primitive; they only evolved fairly recently, and have therefore had little time to become strong or optimized compared with more ancient hindbrain structures. In addition to the concerns Scott has with evo-bio explanations above, this makes me think that an adaptationist explanation for depression is less likely. Possibly there is some sort of regulatory function at play, but that regulatory function may have ‘unintended’ side effects on the prefrontal cortex in addition to its original intended targets.

    This is kind of just saying evolution is blind in a lot of words, but it suggests that maybe the solution isn’t turning on or off a single trigger or imbalance, but in trying to find an optimal balance of many systems that properly nurtures the more delicate parts of our neurology. Hard problem.

    Second: this is anecdotal but maybe weird enough to be interesting. I’ve never been diagnosed with clinical depression but have had subclinical depressive symptoms before. Supplementing with s-AMe has tended to help a good deal, and would work pretty quickly – in about 5 days – and it also seemed to have a mildly stronger effect for a few hours after taking it, starting about an hour after ingestion. That seems like an awfully fast effect for it to be doing things with synaptic development in the brain, but I’m not a neurologist so I’m not sure. At any rate, after about 3 weeks I’d have to halve the dose because I would become manic – unable to sleep, having more ideas about things I wanted to do than I could do, pacing, etc. I’ve heard of others getting the same effect; it subsides very quickly once you decrease the dose. Making people manic if they have pre-existing manic spells is a known side effect of s-AMe, but I haven’t heard it reported scientifically in people who usually do not suffer from mania.

    But that’s interesting because you don’t hear about mania from SSRIs… maybe the ‘manic/depressive’ dipole is only relevant in certain aspects of the depression dynamic. I did subjectively notice that s-AMe seems to give motivation more than it dispels dysphoric mood.

    Finally, since I’m somewhat rambling – does anyone know of diseases other than cancer that create greater risk of becoming depressed? I work in health optimization and am trying to figure out how much optimizing lifestyle might have to do with avoiding depression, and the sorts of things it’s co-morbid with could strongly inform that.

  28. Andrew Klaassen says:

    Supplementing with s-AMe has tended to help a good deal, and would work pretty quickly…

    Because of the proven power of placebo in depression, there’s unfortunately little-to-nothing that we can learn about treatment for it (and any mechanisms that a treatment might suggest) from individual cases. You really do need a blinded, placebo-controlled trial. Even those present challenges, since the side effects of treatments often pierce the blind, and depressive symptoms lift faster when people are convinced that they’re getting the real drug.

    Edit: Did the comment I was responding to disappear?

    • Forge the Sky says:

      Looks like you were responding to my comment, which has disappeared. I have no idea why, don’t think there was anything that would have made it mod-worthy.

      At any rate, I’m not trying to make a case for anecdata here. But in studies s-AMe does seem to work for depression; given that, I found it interesting that a few people, without prompting, mentioned getting mania from it and never, say, euphoria. And also that I experienced a state of mind I never had before, without being told that was a possibility.

      These are the sorts of things that don’t give us any power to speculate about mechanisms with any rigor, but can be the starting-point to actual research being done.

  29. patrissimo says:

    Why does the body have so many “decrease synaptogenesis” knobs? That is, why go through the trouble to evolve all these chemicals and systems whose job is to tell your brain to decrease synapse formation so much that you end up depressed?

    This seems silly for a variety of reasons. First, it would be silly to notice that many things can decrease someone’s chance of partnering or living to 100, and then posit some type of “decrease pairing” and “anti-centenarian” knobs. We understand these are complex, difficult outcomes, so of course many things affect the final outcome. Building a brain seems plausibly to be such an outcome.

    Second, if synaptogenesis is expensive, then many things should affect the optimal amount of it to do, and a proper feedback system should involve tuning up or down the amount in response to a variety of inputs. (This holds even if the first point is false and the body can easily choose an exact degree of synaptogenesis.) We don’t wonder why the body has so many “how much / what should I eat” knobs, because it’s obvious that past & predicted exercise, food availability, weather/season, sleep, etc. should all affect how much / what a forager eats.

    And finally, as the “eat” example alludes to, all of these control systems evolved in a different environment with a different parameter space than we have now. It is not in the slightest bit surprising that if we take a complex control system optimized for a particular range of parameters (some shape in the multi-dimensional space of possible inputs to the control system) and feed it parameters from a different part of the space, it will do the wrong thing. Or, to put it another way, the performance of any learning system must degrade toward zero as the test set diverges from the training set. Any performance demonstrates (perhaps tautologically) similarity in structure between the test and training sets.
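
    To make that last point concrete, here is a toy illustration of my own (the numbers and the "controller" are purely hypothetical, nothing to do with any real biological system): fit a simple model on one input range, then feed it inputs from a different range and watch the error blow up.

        # Toy illustration: a "controller" tuned on one input range does the
        # wrong thing when fed inputs from a different range. Arbitrary numbers.
        import numpy as np

        rng = np.random.default_rng(1)

        def true_response(x):
            # The environment the controller was tuned for.
            return np.sin(x)

        # "Training" inputs: the ancestral parameter range.
        x_train = rng.uniform(0, 3, size=200)
        y_train = true_response(x_train) + rng.normal(0, 0.05, size=200)

        # Fit a simple polynomial controller to that range.
        coeffs = np.polyfit(x_train, y_train, deg=4)

        # Evaluate inside vs. far outside the tuned range.
        x_in = np.linspace(0, 3, 50)
        x_out = np.linspace(8, 11, 50)   # the "modern environment"
        err_in = np.abs(np.polyval(coeffs, x_in) - true_response(x_in)).mean()
        err_out = np.abs(np.polyval(coeffs, x_out) - true_response(x_out)).mean()

        print(f"mean error inside tuned range:  {err_in:.3f}")
        print(f"mean error outside tuned range: {err_out:.3f}")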

  30. Andrew Klaassen says:

    How does the effectiveness of active placebo in the treatment of depression tie into the synapse hypothesis?

  31. Yair says:

    The classic examples of this are cancer-related depression (which exceeds what you would expect just from cancer being stressful)

    I wonder how they worked that one out. How did they decide how many cancer sufferers you’d expect to be depressed just from cancer being stressful?

    Did somebody say something like “52% of people with cancer suffering depression is what you’d expect, so if 70% suffer from depression, that’s way above what you’d expect”?!

    Seems totally arbitrary (which means I must be missing something).

    • Aapje says:

      My guess:

      1) Develop a fairly objective measure of stress
      2) Measure the correlation between stress and depression
      3) Notice that cancer patients have higher rates of depression than the normal correlation between stress and depression can explain (rough sketch below)
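
      Rough sketch of those three steps, with all numbers made up purely for illustration (nothing here is from any real study):

        # Hypothetical sketch of the three steps above. Simulated data only.
        import numpy as np

        rng = np.random.default_rng(0)

        # 1) A "fairly objective" stress score for people without cancer.
        stress_controls = rng.uniform(0, 10, size=500)
        depression_controls = 2.0 + 0.8 * stress_controls + rng.normal(0, 1.5, size=500)

        # 2) Fit the normal stress -> depression relationship.
        slope, intercept = np.polyfit(stress_controls, depression_controls, deg=1)

        # 3) Compare cancer patients' observed depression with what their
        #    measured stress alone would predict under that relationship.
        stress_cancer = rng.uniform(6, 10, size=100)                   # cancer is stressful
        observed_depression_cancer = 2.0 + 0.8 * stress_cancer + 3.0   # plus an extra effect
        predicted_from_stress = intercept + slope * stress_cancer

        excess = observed_depression_cancer.mean() - predicted_from_stress.mean()
        print(f"Excess depression beyond what stress alone predicts: {excess:.2f}")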

  32. AZpie says:

    Hello; this being my first comment ever on an SSC post, allow me to thank you for having created a highly interesting blog.

    Thanks! ^__^

    Now, not having read all SSC posts, I realize you may have already seen this. But on that evolutionary theory: http://www.sciencedirect.com/science/article/pii/S0149763415000287

    There are some very intriguing ideas in that article, perhaps the most relevant one here being the hypothesis that the serotonergic system evolved to regulate energy metabolism (in a sense).

  33. Peter Gerdes says:

    Here is the simplest, most obvious evolutionary theory of depression, and I feel we need to eliminate it before we feel any need to look for alternatives.

    At any time how happy you are is a combination of some innate happiness set point and what is going on in your life.

    For obvious reasons, evolution balances our happiness set point to best avoid both the harms of mania and the harms of depression. Being very unhappy much of the time is an obviously shitty thing, so it is itself a negative life event – which explains why one can get stuck being depressed and why depression can disappear and reappear. Having a lower happiness set point makes one disposed to this state.
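
    (As a toy sketch of that feedback – entirely made-up numbers, not a fitted model – mood relaxes toward a set point while misery itself counts as an extra negative event, which can keep mood stuck low:)

        # Toy simulation of "set point + life events + misery-as-event" feedback.
        # All parameters are arbitrary assumptions, for illustration only.
        import random

        random.seed(0)

        set_point = -0.5          # a slightly low innate happiness set point
        mood = set_point
        days_stuck = 0

        for day in range(365):
            life_event = random.gauss(0, 1)        # ordinary ups and downs
            feedback = -1.0 if mood < -2 else 0.0  # being miserable is itself a negative event
            # Mood relaxes toward the set point but gets pushed around by events and feedback.
            mood = 0.8 * mood + 0.2 * set_point + 0.3 * (life_event + feedback)
            days_stuck += mood < -2

        print(f"days below the 'stuck' threshold: {days_stuck} / 365")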

    The obvious maladaptive nature of depression can be accounted for by noting that mistakes on the manic side were probably even more dangerous, since they encouraged excessive risk-taking. In other words, given the tools available to it, evolution struck a balance… I’m not sure what more is wanted from an evolutionary explanation.

    Maybe special kinds of depression, like anhedonic depression, do require more explanation, but doesn’t this suffice for the basic phenomenon?

  34. liberosthoughts says:

    Here’s a theory that I find promising: http://psych-networks.com/challenges-to-the-network-approach/

  35. Robin K says:

    ” For that matter, what is it like to have too much synapse formation? If it’s the opposite of depression, it sounds kind of fun.”

    I think cannabis increases connectivity in the brain and it is kind of fun ;-).

    “The results suggest increases in connectivity, both structural and functional that may be compensating for grey matter losses. Eventually, however, the structural connectivity or ‘wiring’ of the brain starts degrading with prolonged marijuana use.”

    https://www.theguardian.com/society/2014/nov/10/cannabis-smoking-brain-shrinks-increases-connectivity-study-texas

    "How does bipolar disorder fit into all of this? Is mania the answer to my “what is it like to have too many synapses?” question from point (3)? If so, why do some people go back and forth between that and depression?"

    Anecdote: a friend of mine got something like manic episodes when he was using cannabis and was depressed when he was not using. So the downside of too many synapses might be that you get too many crazy ideas and too much magical thinking, and perhaps even act upon them. The behavior in turn could get you into trouble. The negative feedback could force you to think about your own behavior and rewire your brain.

    From an evolutionary point of view, this could have resulted in exclusion from the tribe or group, which goes hand in hand with deprivation of resources. So the “energy-save-mode” discussed earlier might work for this scenario as well.