The Omnigenic Model As Metaphor For Life

The collective intellect is change-blind. Knowledge gained seems so natural that we forget what it was like not to have it. Piaget says children gain long-term memory at age 4 and don’t learn abstract thought until ten; do you remember what it was like not to have abstract thought? We underestimate our intellectual progress because every sliver of knowledge acquired gets backpropagated unboundedly into the past.

For decades, people talked about “the gene for height”, “the gene for intelligence”, etc. Was the gene for intelligence on chromosome 6? Was it on the X chromosome? What happens if your baby doesn’t have the gene for intelligence? Can they still succeed?

Meanwhile, the responsible experts were saying traits might be determined by a two-digit number of genes. Human Genome Project leader Francis Collins estimated that there were “about twelve genes” for diabetes, and “all of them will be discovered in the next two years”. Quanta Magazine reminds us of a 1999 study which claimed that “perhaps more than fifteen genes” might contribute to autism. By the early 2000s, the American Psychological Association was a little more cautious, saying intelligence might be linked to “dozens – if not hundreds” of genes.

The most recent estimate for how many genes are involved in complex traits like height or intelligence is approximately “all of them” – by the latest count, about twenty thousand. From this side of the veil, it all seems so obvious. It’s hard to remember back a mere twenty or thirty years ago, when people earnestly awaited “the gene for depression”. It’s hard to remember the studies powered to find genes that increased height by an inch or two. It’s hard to remember all the crappy p-hacked results announcing that okay, we found the gene for extraversion, here it is! It’s hard to remember all the editorials in The Guardian about how since nobody had found the gene for IQ yet, genes don’t matter, science is fake, and Galileo was a witch.

And even remembering those times, they seem incomprehensible. Like, really? Only a few visionaries considered the hypothesis that the most complex and subtle of human traits might depend on more than one protein? Only the boldest revolutionaries dared to ask whether maybe cystic fibrosis was not the best model for the entirety of human experience?

This side of the veil, instead of looking for the “gene for intelligence”, we try to find “polygenic scores”. Given a person’s entire genome, what function best predicts their intelligence? The most recent such effort uses over a thousand genes and is able to predict 10% of variability in educational attainment. This isn’t much, but it’s a heck of a lot better than anyone was able to do under the old “dozen genes” model, and it’s getting better every year in the way healthy paradigms are supposed to.
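
To make that concrete: a polygenic score is basically just a weighted sum over a person’s genotyped variants. Here is a minimal sketch, with invented SNP names and weights standing in for what a real genome-wide association study would supply:

```python
# Illustrative only: real weights come from a genome-wide association study,
# and real scores sum over hundreds of thousands of SNPs. These are made up.
snp_weights = {
    "rs0000001": 0.021,   # hypothetical effect per copy of the "effect" allele
    "rs0000002": -0.013,
    "rs0000003": 0.008,
}

def polygenic_score(genotype):
    """genotype maps SNP id -> number of effect alleles carried (0, 1, or 2)."""
    return sum(w * genotype.get(snp, 0) for snp, w in snp_weights.items())

person = {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1}
print(round(polygenic_score(person), 3))  # 0.05
```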

Genetics is interesting as an example of a science that overcame a diseased paradigm. For years, basically all candidate gene studies were fake. “How come we can’t find genes for anything?” was never as popular as “where’s my flying car?” as a symbol of how science never advances in the way we optimistically feel like it should. But it could have been.

And now it works. What lessons can we draw from this, for domains that still seem disappointing and intractable?

Turn-of-the-millennium behavioral genetics was intractable because it was more polycausal than anyone expected. Everything interesting was an excruciating interaction of a thousand different things. You had to know all those things to predict anything at all, so nobody predicted anything and all apparent predictions were fake.

Modern genetics is healthy and functional because it turns out that although genetics isn’t easy, it is simple. Yes, there are three billion base pairs in the human genome. But each of those base pairs is a nice, clean, discrete unit with one of four values. In a way, saying “everything has three billion possible causes” is a mercy; it’s placing an upper bound on how terrible genetics can be. The “secret” of genetics was that there was no “secret”. You just had to drop the optimistic assumption that there was any shortcut other than measuring all three billion different things, and get busy doing the measuring. The field was maximally perverse, but with enough advances in sequencing and processing power, even the maximum possible level of perversity turned out to be within the limits of modern computing.

(this is an oversimplification: if it were really maximally perverse, chaos theory would be involved somehow. Maybe a better claim is that it hits the maximum perversity bound in one specific dimension)

One possible lesson here is that the sciences where progress is hard are the ones that have what seem like an unfair number of tiny interacting causes that determine everything. We should go from trying to discover “the” cause, to trying to find which factors we need to create the best polycausal model. And we should go from seeking a flash of genius that helps sweep away the complexity, to figuring out how to manage complexity that cannot be swept away.

Late-90s/early-00s psychiatry was a lot like late-90s/early-00s genetics. The public was talking about “the cause” of depression: serotonin. And the responsible experts were saying oh no, depression might be caused by as many as several different things.

Now the biopsychosocial model has caught on and everyone agrees that depression is complicated. I don’t know if we’re still at the “dozens of things” stage or the “hundreds of things” stage, but I don’t think anyone seriously thinks it’s fewer than a dozen. The structure of depression seems different from the structure of genetic traits in that one cause can still have a large effect; multiple sclerosis might explain less than 1% of the variance in depressedness, but there will be a small sample of depressives whose condition is almost entirely because of multiple sclerosis. But overall, I think the analogy to genetics is a good one.

If this is true, what can psychiatry (and maybe other low-rate-of-progress sciences) learn from genetics?

One possible lesson is: there are more causes than you think. Stop looking for “a cause” or “the ten causes” and start figuring out ways to deal with very numerous causes.

There are a bunch of studies that are basically like this one linking depression to zinc deficiency. They are good as far as they go, but it’s hard to really know what to do with them. It’s like finding one gene for intelligence. Okay, that explains 0.1% of the variability, now what?

We might imagine trying to combine all these findings into a polycausal score. Take millions of people, measure a hundred different variables – everything from their blood zinc levels, to the serotonin metabolites in their spinal fluid, to whether their mother loved them as a child – then do statistics on them and see how much of the variance in depression we can predict based on the inputs. “Do statistics on them” is a heck of a black box; genes are kind of pristine and causally unidirectional, but all of these psychological factors probably influence each other in a hundred different ways. In practice I think this would end up as a horribly expensive boondoggle that didn’t work at all. But in theory I think this is what a principled attempt to understand depression would look like.
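
To put a slightly more concrete face on “do statistics on them”: the simplest version would be a penalized regression over all the measured variables at once. A minimal sketch on simulated data, where every variable and effect size is invented and the mutual influence between factors is ignored:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical survey: n people, p candidate causes (zinc, inflammation, life events, ...).
n, p = 5000, 100
X = rng.normal(size=(n, p))
true_effects = rng.normal(scale=0.05, size=p)        # many tiny causes, none decisive
depression_score = X @ true_effects + rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, depression_score, random_state=0)

# One crude stand-in for "do statistics on them": ridge regression with a cross-validated penalty.
model = RidgeCV(alphas=np.logspace(-2, 3, 20)).fit(X_train, y_train)
print(f"out-of-sample variance explained: {model.score(X_test, y_test):.2f}")
```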

(“understand depression” might be the wrong term here; it conflates being able to predict a construct with knowing what real-world phenomenon the construct refers to. We are much better at finding genes for intelligence than at understanding exactly what intelligence is, and whether it’s just a convenient statistical construct or a specific brain parameter. By analogy, we can imagine a Martian anthropologist who correctly groups “having a big house”, “driving a sports car”, and “wearing designer clothes” into a construct called “wealth”, and is able to accurately predict wealth from a model including variables like occupation, ethnicity, and educational attainment – but who doesn’t understand that wealth = having lots of money. I think it’s still unclear to what degree intelligence and depression have a simple real-world wealth-equals-lots-of-money style correspondence – though see here and here.)

A more useful lesson might be skepticism about personalized medicine. Personalized medicine – the idea that I can read your genome and your blood test results and whatever and tell you what antidepressant (or supplement, or form of therapy) is right for you – has been a big idea over the past decade. And so far it’s mostly failed. A massively polycausal model would explain why. The average personalized medicine company gives you recommendations based on at most a few things – zinc levels, gut flora balance, etc. If there are dozens or hundreds of things, then you need the full massively polycausal model – which, as mentioned before, is computationally intractable, at least without a lot more work.

(you can still have some personalized medicine. We don’t have to know the causes of depression to treat it. You might be depressed because your grandfather died, but Prozac can still make you feel better. So it’s possible that there’s a simple personalized monocausal way to check who eg responds better to Prozac vs. Lexapro, though the latest evidence isn’t really bullish about this. But this seems different from a true personalized medicine where we determine the root cause of your depression and fix it in a principled way.)

Even if we can’t get much out of this, I think it can be helpful just to ask which factors and sciences are oligocausal vs. massively polycausal. For example, what percent of variability in firm success are economists able to explain? Does most of the variability come from a few big things, like talented CEOs? Or does most of it come from a million tiny unmeasurable causes, like “how often does Lisa in Marketing get her reports in on time”?

Maybe this is really stupid – I’m neither a geneticist nor a statistician – but I imagine an alien society where science is centered around polycausal scores. Instead of publishing a paper claiming that lead causes crime, they publish a paper giving the latest polycausal score for predicting crime, and demonstrating that they can make it much more accurate by including lead as a variable. I don’t think you can do this in real life – you would need bigger Big Data than anybody wants to deal with. But like falsifiability and compressibility, I think it’s a useful thought experiment to keep in mind when imagining what science should be like.
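
For what it’s worth, the alien-style paper boils down to a few lines of analysis: fit the current polycausal score, refit it with the candidate variable added, and report the change in out-of-sample accuracy. A toy sketch on simulated data, where every variable and effect size is invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000

# Simulated districts: ten existing predictors plus one candidate variable (lead exposure).
existing = rng.normal(size=(n, 10))                     # poverty, policing, demographics, ...
lead = rng.normal(size=n)
crime = existing @ rng.normal(scale=0.3, size=10) + 0.5 * lead + rng.normal(size=n)

baseline = cross_val_score(LinearRegression(), existing, crime, cv=5).mean()
augmented = cross_val_score(LinearRegression(),
                            np.column_stack([existing, lead]), crime, cv=5).mean()
print(f"R^2 without lead: {baseline:.2f}   R^2 with lead: {augmented:.2f}")
```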


257 Responses to The Omnigenic Model As Metaphor For Life

  1. MB says:

    What Astronomy needs is a polycausal model of planetary motion.

  2. Bill Murdock says:

    “One possible lesson here is that the sciences where progress is hard are the ones that have what seem like an unfair number of tiny interacting causes that determine everything. We should go from trying to discover “the” cause, to trying to find which factors we need to create the best polycausal model. And we should go from seeking a flash of genius that helps sweep away the complexity, to figuring out how to manage complexity that cannot be swept away.”

    Argument for markets against centralized planning; argument for republic against democracy. Well said.

  3. Ketil says:

    The problem isn’t that traits (or phenotypes) are polycausal (or polygenic), but that the multiple causes interact in complicated ways. If factors contributed a small but predictable amount, we would surely be able to pry them apart and describe their contribution using our standard reductionistic approach. But gene A interacts with gene B, which depends on gene C, and so on in a complex network. Having A and B together could raise your IQ by 10 points, but having either without the other could lower it by 5 points, for an average contribution of … nothing. And with three million SNPs in the genome, the number of possible pairs is about 4.5e12, triples about 4.5e18… to infinity and beyond. We are not talking about factors that contribute, we are talking about a huge, dynamic, nonlinear system.
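
    A toy simulation makes the point concrete (the numbers here are invented, and symmetrized slightly so the additive signal cancels exactly): a large, perfectly real interaction that a one-locus-at-a-time scan cannot see.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Two loci, each carried by half the population, independently of each other.
A = rng.integers(0, 2, size=n)
B = rng.integers(0, 2, size=n)

# Pure interaction (invented numbers): matching genotypes add 5 IQ points,
# mismatched genotypes subtract 5. There is no additive effect at all.
iq_shift = np.where(A == B, 5.0, -5.0)

# One-gene-at-a-time view: the marginal effect of A is ~0,
# so a single-locus association test sees nothing.
print(iq_shift[A == 1].mean() - iq_shift[A == 0].mean())
```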

    A similar argument goes for ecosystems. We can check how species behave in isolation, in our controlled environment. But we can’t really generalize to nature, because there are so many interactions, and especially, so many possible interactions. If cod eats capelin, and the capelin goes away, what happens? The cod eats something else. But what? No way of knowing until it happens. Or the cod goes elsewhere. But where? I’d like to model this, but I worry that we have no reasonable way of simulating systems like this. And so we end up with cases like Newfoundland, where we fished too much cod, and the ecosystem (probably) switched into a different stable state where there no longer was any niche for cod.

    There are probably many similar systems where standard reductionist approaches fail to resolve the complexity.

    I’d like us to do better, but we are stuck with this: Enthusiastic molecular biologists who find what seems to be the most interesting SNPs for their phenotype (and there will always be some), and then look at the genomic region for genes (there will always be some), and use similarity search to databases of dubious quality to identify pathways that could reasonably be thought to be relevant (which you can always find). Or ecologists either studying a single organism in a test tube, or building models with no basis for their parameters, happily publishing their results that apply only in their lab or in their simulations.

  4. hollyluja says:

    This reminds me of Tyler Cowen’s Average is Over prediction: that we expect true scientific discoveries to be beautiful and intuitive, but in the future all discoveries will be ugly and take a lifetime to understand, much less advance.

    It also made me think of this article on incremental medicine (for migraines in this case). You keep a detailed diary of your symptoms and treatments, and they tweak one tiny thing at a time, aiming for 50% reduction in frequency over two years!! No medical specialty is set up for that kind of long-term, labor-intensive care… except psych therapy. But all of our big-dollar diseases right now are chronic and will probably require something like this model.

    Incremental care treats diseases like a black box – you test inputs and see if you get the desired outcome. The VA has done a good job in standardizing the treatment of chronic insomnia that way, but it still requires the patient to collect a lot of data points and try a lot of different things to see results.

  5. JohnNV says:

    Just to pick at a thread that was meant as a throwaway line – the reason that we don’t have flying cars isn’t that the technology or engineering is harder than we thought. People have been building flying cars since the early 1950s. And the technology is certainly there to build and even mass-produce flying cars today at the same scale as road-based cars. The failure is more regulatory – there’s no way to satisfy both the FAA and NHTSA safety standards at the same time and still have a device that satisfies both purposes. (The NHTSA requires that cars be manufactured in a certain way, and the FAA requires that aircraft be manufactured in a different way, and there’s no possible way of satisfying both). That’s why you see every year the headline that some group of students at MIT or Caltech has invented a flying car, but then nothing comes of it.

    • beleester says:

      The idea of a “flying car” in sci-fi generally implies an airplane that’s as easy to drive as a car. Something that would make a plausible commuter vehicle for George Jetson. Present-day attempts at “flying cars” are more like airplanes that happen to be car-shaped.

      Rephrase the question as “why can’t I fly an airplane from my house to my job?” and the reason why nobody’s seriously working on flying cars becomes obvious.

  6. JohnBuridan says:

    Certain fields might be maximally perverse.
    History for example does not seem mathematically tractable, and yet that has not stopped an enormous number of people noticing patterns and feeling like there must be some laws of history. Whatever laws of history do exist, they are provisional and chaos theory limits predictability. I would call history maximally perverse.

    This is why we as a society have left it to University of Connecticut Russian Peter Turchin to go toe-to-toe with the maximally perverse. Because what we call a truly nasty quest which will destroy the souls of all who enter upon it, the Russians call Tuesday.

    Historical dynamics, he says, on its best days is akin to seismology, fraught with imprecision, lacking in direct observation, and only moderately useful at predicting sites of future quakes.

    https://www.amazon.com/Historical-Dynamics-Princeton-Studies-Complexity/dp/0691116695

  7. deciusbrutus says:

    The collective intellect is change-blind. The collective intellect has always been change-blind, Citizen.

  8. dark orchid says:

    > you would need bigger Big Data than anybody wants to deal with

    Google is estimated to store on the order of 10^19 bytes of data in its datacentres and handle 3.5 billion searches per day (and that link is more than a year out of date). Encoding and searching a few billion data points for each of a few billion humans shouldn’t be beyond the realm of what’s possible, although actually sequencing the DNA in the first place would be a challenge worthy of one of the “Your mission is” threads.
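
    A rough back-of-envelope calculation (every figure below is an approximate assumption) lands in the same ballpark as that 10^19-byte figure:

```python
# Rough back-of-envelope; every figure is an approximate assumption.
base_pairs = 3e9          # haploid human genome length
bits_per_base = 2         # A/C/G/T -> 2 bits, ignoring compression and diploidy
people = 8e9              # roughly everyone alive

bytes_per_genome = base_pairs * bits_per_base / 8    # ~7.5e8 bytes (~750 MB)
total_bytes = bytes_per_genome * people              # ~6e18 bytes

print(f"{bytes_per_genome:.1e} bytes per genome, {total_bytes:.1e} bytes for everyone")
```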

    • baconbits9 says:

      Google’s search results don’t have to be nearly as accurate to get good results, because there is a secondary filter (the user) selecting from them. They just have to be a decent approximation of what is out there, which wouldn’t lead to good results with genomic differences.

  9. adamshrugged says:

    Although this is by no means universally accepted, modern developmental cognitive science leans strongly against Piaget’s claim; there’s a ton of evidence that very young infants have some kind of abstract thought. For example, they seem to be able to think about approximate large numbers, objects even when they can’t currently sense them, certain physical laws, certain geometric laws, addition and subtraction, other people’s goals, and even other people’s moral status (in some limited sense).

    I’ve only linked to some of the seminal studies, but there’s a huge body of work on this; Susan Carey (who was, funny enough, Piaget’s student) wrote a book called The Origin of Concepts that goes through the evidence in painstaking detail.

  10. vV_Vv says:

    but I imagine an alien society where science is centered around polycausal scores. Instead of publishing a paper claiming that lead causes crime, they publish a paper giving the latest polycausal score for predicting crime, and demonstrating that they can make it much more accurate by including lead as a variable. I don’t think you can do this in real life – you would need bigger Big Data than anybody wants to deal with.

    The problem is that the highly polycausal models trained on big data, such as deep learning models, are brittle: they are very good at learning correlations in the training data, in fact they are too good and learn all sorts of spurious correlations introduced by the data sampling process that you used to create the training set.
    Then, if you use them to make predictions on data from a slightly different distribution their accuracy goes down a lot.
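
    The failure mode doesn’t need a deep network to appear; a toy sketch with an ordinary logistic regression shows it, where all data are simulated and the “spurious” feature stands in for whatever the sampling process leaked into the training set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_data(n, leak):
    """Simulated data: a weak true signal plus a feature that may leak the label."""
    label = rng.integers(0, 2, size=n)
    noisy_signal = label + rng.normal(scale=1.5, size=n)           # weak but genuine
    spurious = np.where(rng.random(n) < leak, label, rng.integers(0, 2, size=n))
    return np.column_stack([noisy_signal, spurious]), label

# Training distribution: the sampling process couples the spurious feature to the label.
X_train, y_train = make_data(5000, leak=0.95)
# Slightly different test distribution: the coupling is gone.
X_test, y_test = make_data(5000, leak=0.0)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on training distribution:", round(model.score(X_train, y_train), 2))
print("accuracy after the shift:         ", round(model.score(X_test, y_test), 2))
```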

    And of course deep learning is the latest fashion in the maximally perverse discipline of artificial intelligence.

    It seems that Lethe, the goddess of oblivion, always rears her ugly head.

    • Ilya Shpitser says:

      This entire paragraph made no sense, fyi.

      • vV_Vv says:

        Better?

        • Ilya Shpitser says:

          Not really. If the model is just learning associations, it is not causal, sort of by definition.

          Did you mean something like:

          “The problem is that the high dimensional multivariate models trained on big data, such as deep learning models, are brittle: they are very good at learning correlations in the training data, in fact they are too good and learn all sorts of spurious correlations introduced by the data sampling process that you used to create the training set.”

          I partly agree on “brittle”, depending on how you define brittle. Obviously DL models can be very robust, depending on how they are trained. Not picking up causality is not what I would call “brittleness.” More like “wrong tool for the job.”

          DL models will not give you causality out of the box, and understanding why is also a good exercise, re: understanding the clear limits of DL in the enterprise of development of general AI.

          DL models can certainly be used as a subroutine in causal modeling, however. In fact, people do this now. But it’s not just using DL itself, additional ingredients are needed, e.g. understanding “confounding” and so on.

          • Nootropic cormorant says:

            I would say that not picking up on causality (or rather not considering causality at all) makes a model “brittle” even when only considering it as a predictive model.

          • quanta413 says:

            Are there non-brittle deep learning results outside of games? Where I would define non-brittle as basically generalizing well outside the sample – to a reasonable if lesser extent than a human for image recognition, for example.

            I’m curious. Some of the practical results with supervised labeling of images are pretty neat. Yet adversarial attacks on imagenets can lead to misclassifications obvious to a human.

            Deep learning is not a technique that leaves me excited enough to learn it right now for the sort of things I’m interested in. I generally prefer working with rich mathematical models. I guess these would fall under the label of being “causal”. They’re very field specific. Like using Maxwell’s equations in physics or replicator-mutator equations in evolutionary biology. Basically, I guess I like working with difference and differential equations that time evolve the system I’m studying.

            I vaguely remember seeing an interesting looking result on extending the prediction time for chaotic equations using a neural net as a predictor (I think), but that’s all I can think of off the top of my head.

          • Ilya Shpitser says:

            I agree highly detailed models, e.g. ODE models, are generally causal in the sense in which I use that word. But those are difficult to get in domains outside of engineering or physics.

            You can try to use DL to try to learn certain kinds of causation, if you can phrase that sort of problem as a classification problem with known labels, and this causation is actually identified (otherwise your classifier will not do better than random).

            That’s sort of ‘boring’, though.

          • quanta413 says:

            ‘boring’ and straightforward yet not a use that occurred to me or that I’d heard of. I guess it’s similar in some sense to using deep learning to approximate DEs although in the DE case you already have the “true” model.

            Not sure how many problems it would turn out to be useful for. It implies a lack of a stronger model to work with, but also a lot of causal knowledge about what’s going on.

            There are solid differential or difference equation models in biology. Also some tree-like models in biology. However, they are much more abstracted than most physical models. There’s a lot more detail they have to leave out or that we don’t have the data for. This hurts their power a lot.

  11. hannesmalmberg says:

    I only partially agree. It is good to push back on monocausal statistics, but I do not agree with your vision for science, since it leaves out the role of mechanisms. Equating science with statistics creates a false dichotomy between utterly silly monocausal scores, and only slightly silly polycausal scores.

    Take malaria. On a basic level, the mechanism is an unexciting “mosquito-bites-man” type of story.

    However, this simple mechanism generates very complex causal patterns, relating infection to outside temperatures, presence of still water, sleeping times, bednets as well as complex dynamics reflecting genetic drift due to human interventions.

    As long as we don’t include “parasite-infected mosquito bites person” in the equation, the polycausal model would load on tonnes of things, each giving some insights into the causal predictors of malaria. The polycausal model would, of course, be better than a silly monocausal statistical model which seeks to explain malaria only with “still water” or “lack of bednets”. The polycausal model might also be of some help in fighting malaria.

    But calculating polycausal scores is not a good vision for what “science should be like”. Indeed, before science realizes that malaria is about mosquitoes biting people, the polycausal scores would be very hard to interpret. It would also dramatically limit their practical use, because credibly answering “why” is often central for external validity, and we often underestimate just how important knowing the mechanism is for interventions (e.g., giving bednets too coarse to block mosquitoes is pointless).

    So we should drop the hope for monocausal statistical explanations, but not the hope for dramatic conceptual dimension-reductions of the problem, which are central to understanding the polycausal scores.

  12. cmurdock says:

    Maybe this is really stupid – I’m neither a geneticist nor a statistician – but I imagine an alien society where science is centered around polycausal scores.

    This society sounds like it’s falling into the attractive snare of “if it looks sophisticated, it must be true”. It’s already too easy here in Earth society for fake intellectual types to dismiss others’ ideas as being “simplistic” or “reductive” or whatever just because they lack some superficial veneer of complexity. Sometimes things do have simple explanations. Tycho Brahe, man.

  13. johnsonmx says:

    My expectation is that Explaining Depression is necessarily hard, because of the factors Scott mentioned, but Explaining Emotional Valence can be easy, because emotional valence seems to be a phenomenological natural kind, and could be simple (highly compressible) if we get a good framework for understanding phenomenology.

    If we can do this, we can then redefine Depression as chronic low emotional valence, and explore what sorts of biological states could lead to this phenomenological state. This could help us make progress on understanding both neuroscience and phenomenology.

    (This is the short version of how QRI is approaching this – https://qualiaresearchinstitute.org/2018/04/13/videos-from-tsc2018/ )

  14. fwiffo says:

    I think it is incorrect to say that a polycausal model is somehow easier to identify than a single cause. After all, the polycausal model is going to consist of a whole bunch of coefficients on individual genes, plus some interactions — exactly the same kind of coefficients which you assume are really difficult to determine to begin with. Why is it easier when you consider a lot of causes at once?

    If your best attempt to measure a single coefficient is biased, then the coefficients in the polycausal models are going to have the same kind of bias.

    Further, even with a small number of interesting base pairs, if you allow effects to be interactive (i.e. the combination of gene 1 and 2 have a bigger effect than the sum of the independent effects), you can very rapidly get to a level of complexity where there aren’t enough humans in the world to identify the model. The set of potential meaningful permutations grows at 2^x. Throw in the fact that many of the observed outcomes we care about are also influenced by environment and you get a fundamental identification problem from a totally different direction.
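
    A quick sense of scale (the SNP count is a rough assumption): even restricting to pairwise interactions, the number of candidate terms dwarfs the number of humans who could ever be genotyped.

```python
from math import comb

snps = 1_000_000                 # rough assumption: SNPs on a typical genotyping panel
people_on_earth = 8e9

pairwise_terms = comb(snps, 2)   # two-way interactions alone
print(f"pairwise interaction terms: {pairwise_terms:.2e}")       # ~5e11
print("more terms than people on Earth?", pairwise_terms > people_on_earth)

# Allowing arbitrary subsets of loci to interact, 40 loci already give ~1e12 combinations.
print(f"subsets of just 40 loci: {2**40:.2e}")
```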

    Some other commenters are interpreting this post as saying we can use some kind of machine learning algorithm to predict phenotypes from the set of all genes. That might or might not work, because it depends on the complexity of the interaction effects. This is still just trying to predict a bunch of coefficients — except that in some contexts machine learning seems to be better at predicting coefficients than human modelers. So the scale problem is still there — 7 billion people isn’t enough to get the model right — and far fewer people’s genomes have ever been sequenced. The machine is also going to be foiled by environment interactions — if everyone with gene X grows up in a resource poor environment and ends up stunted, we’re going to have a lot of errors.

  15. Christian Kleineidam says:

    Modern genetics is healthy and functional because it turns out that although genetics isn’t easy, it is simple. Yes, there are three billion base pairs in the human genome. But each of those base pairs is a nice, clean, discrete unit with one of four values.

    Unfortunately, we are at six values: https://www.sciencedaily.com/releases/2009/04/090416144639.htm

  16. Jakub Łopuszański says:

    Today machine learning could probably look at this 6Gb of data and predict something from it by building much more complicated models than people could. But if the genetic code is really a code, like a programming language, then I would expect that ML would still have problems analyzing it for essentially “halting problem”-related reasons.

    • Anon. says:

      Genetic code is not really a code like a programming language. We know that genetic effects are almost entirely additive so there’s not much to gain by complicated models. People have had success with more sophisticated regression approaches (the Hsu height prediction paper used LASSO regression to improve the PGS substantially), but other than that there’s not much room in this direction.
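
      For illustration, the LASSO approach amounts to fitting one big additive model with an L1 penalty that zeroes out most SNPs. A sketch of the general technique on simulated genotypes (not the actual pipeline or data from that paper):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Simulated genotypes: n people, p SNPs coded as 0/1/2 copies of the minor allele.
n, p = 2000, 5000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)

# Invented sparse additive architecture: a few hundred SNPs, each with a tiny effect.
beta = np.zeros(p)
causal = rng.choice(p, size=300, replace=False)
beta[causal] = rng.normal(scale=0.2, size=300)
height = X @ beta + rng.normal(scale=3.0, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, height, random_state=0)
model = LassoCV(cv=5).fit(X_train, y_train)
print("SNPs kept by the L1 penalty:", int(np.sum(model.coef_ != 0)))
print("out-of-sample R^2:", round(model.score(X_test, y_test), 2))
```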

      • greghb says:

        How do we know that genetic effects are almost entirely additive? I would think there could be complex, non-linear interactions among different proteins. And I would, therefore, think that non-linear models (i.e., beyond regression) would be potentially fruitful — given enough data to avoid over-fitting and enough computation power to optimize. But I don’t know much about genetics.

        • Michael Watts says:

          We don’t know this. (And in fact, the standard dominant/recessive paradigm directly contradicts this.) However, for approximately-continuous traits, there is good reason to believe that genetic effects of genes on which there is standing variation in the population are mainly additive, because meiotic recombination is unkind to genes that depend on the presence of other genes.

          If a gene is fixed in the population, e.g. because altering it is invariably lethal, then other genes are free to interact with it in weird ways that violate the assumption of independent additivity, but we won’t see those counterexamples to the assumption demonstrated because they’ll just end up in a miscarriage.

    • Bugmaster says:

      Machine learning is routinely used in genetic research. It does indeed build “much more complicated models than people could”, but the predictive power of such models is… somewhat lacking. The problem is that, when you have a 1-billion-dimensional space, finding relationships in that space is really easy (for a machine learning algorithm, at least).

      Additionally, genetic code is not really a “code”, nor a programming language; it’s more like millions (if not billions) of little verniers in some vast analog contraption. It’s all being “executed” in parallel, via chemistry (I put the word “executed” in scare quotes, because the biochemistry of a cell is not at all similar to a modern digital computer).

  17. greghb says:

    Elements of AI research work this way, for example image processing and language processing. There are standard tasks/datasets which everyone uses to compare performance, and improvements in performance are basically taken as prima facie strong evidence of scientific progress. There are lots of papers of the form, “if we include X sort of input data in doing XYZ task, then our powerful statistical model ends up getting better performance.” There are also papers of the form, “if we use the standard set of input data, but make these algorithmic changes to our powerful statistical model, then we get better performance.”

    I think resistance to this comes in around explicability. Some people see science as, definitionally, about gaining a mechanistic, explicable understanding of nature. Sometimes these big predictive models don’t give much mechanistic insight. Is it really science? That’s a terminological/definitional question, so not worth debating. Perhaps we draw a distinction between “predictive” science and “mechanistic” science.

    You can debate the importance of having a mechanistic understanding. At a minimum, mechanistic understanding has aesthetic value. A data-driven algorithmic prediction of the location of the planets in the sky is less satisfying than Newton’s Laws. Beyond that, though, Newton’s Laws give you insight that lets you apply similar insights across domains (it explains the tides as well), and also lets you guide future research.

    Of course, some systems may not be amenable to mechanistic understanding, at least not within the limits of human cognition: there’s complexity that’s simply beyond us. It’s not clear which fields are this way, but they probably exist in principle. So I personally do agree that predictive science is very valuable.

    Also, I don’t see why this should lead to principled skepticism about personalized medicine, rather than an insight that personalized medicine probably needs more data and more sophisticated statistics. Although then you get back to explicability: do we care if we have statistically valid care recommendations that can’t really explain (in terms we can understand) why they’re making these particular recommendations?

    • Nootropic cormorant says:

      Mechanistic explanation blows “Predictive science” out of the water when it comes to generalization, which is arguably the most important facility of learning.
      Knowing which way the causality flows is invaluable when adapting to previously unobserved circumstances, such as those an agent might create himself.

      Abstracting all hidden confounders away is clever, but we cannot expect these to be the same across time and space, which means our model has only a narrow area of applicability.

  18. Gustavo says:

    “Instead of publishing a paper claiming that lead causes crime, they publish a paper giving the latest polycausal score for predicting crime, and demonstrating that they can make it much more accurate by including lead as a variable”

    This quote summarizes a deeper misunderstanding. Causality and prediction are not so closely connected. Suppose, for example, that you want a model to predict rain. You can look at how many people left home with umbrellas. In general, this number will be very correlated with rain and help to predict it. But it does not cause rain!

    Now about the lead and crime paper. When it claims that lead causes crime, what is meant is that, everything else constant, increasing lead would make crime increase. Note this is not necessarily helpful for making predictions. If lead is just a small part of what causes crime and it is generally correlated with police presence, it may be the case that the best prediction when you see lead rising is to expect a decrease in crime!

    So, what is causation good for? If you know that, everything else constant, increasing lead would make crime increase, you can make better policy choices. You can advocate for a decrease in lead in an “independent way”, i.e., without decreasing police presence.

  19. Eponymous says:

    On the topic of genetics, I am actually confused about something, which maybe somebody could clear up.

    Question: Why is there so much common variation across quantitative traits in the first place?

    I get rare variants doing things (mostly bad). And I get that you might have a gene that’s midway through a sweep. And you have cases of heterozygote advantage.

    But it seems that these GWAS studies are finding thousands of genes that affect IQ or height or whatever, that are at some medium level of frequency. That’s weird to me. 10kya, either a particular gene is on the whole a net + or a net – in the ancestral environment. So why wouldn’t it just go to fixation, one way or the other?

    Unless there are a lot of genes that are sort of close to neutral (+IQ, but maybe have some other negative effects), and they just sort of drift up and down over time, sometimes slightly favored by the local environment, sometimes not. Is that what this is?

    It just seems that there’s a whole lot more effective variation from common variants than I would expect based on theory. Which suggests my understanding of the theory has a hole in it, which I would like to patch.

    • gwern says:

      This is something I’ve wondered too. In some cases, like personality traits, they seem to be under balancing selection (hawk-dove), so no mystery there but on the other hand, there’s also little additive genetic variance, which is clearly not the case for many traits. Mutation-selection balance does seem to describe IQ and other traits, but not why there’s so much of it. The explanation so far seems to be a mix of:

      1. very small ‘effective’ population size; humans may be 8 billion now, but for almost all of human history, the population was tiny and the reproducing population being selected on was even tinier. This is exacerbated by constant population bottlenecks from invasions/colonization/genocide and severe skew in reproductive success, making genetic drift very powerful (sort of the Amish or Ashkenazi or Mormon or Genghis Khan thing writ large).
      2. introgression from other species like Neanderthals; their alleles on net seem to be harmful and are slowly being selected away except where they happen to be useful like the Denisovan altitude adaptations in the Tibetans. Probably a minor effect.
      3. need to adapt to local environments; farmers vs fishers, cold vs hot climates, white vs dark skin for vitamin D (possibly where the East Asian EDAR comes from, not the smell or earwax stuff at all). Too recent for everything to have been finetuned. Something similar might be true for human brains in general: perhaps we became intelligent/social too recently for all the kinks to be worked out and this is more or less where schizophrenia/autism risk common alleles come from. (Although given the increasingly ancient dates of divergence between human races, this is a dangerous thing to think.)
      4. slowness of selection: the fitness effects of these traits can be quite subtle. How much does being 1cm taller on average actually affect fitness? At least in a modern context, the effect isn’t even consistent between sexes. And it’s worse for anything which is a binary liability threshold trait like schizophrenia: when only 1% of the population ever develops it, truncation selection is very weak.
      5. constraints on selection – you can’t strongly select for everything at once, and when you do select, you can get unwanted consequences like genetic hitchhiking. Considering that the overwhelming mode of human adaptation is soft selective sweeps on many genes simultaneously… For schizophrenia, this might be why it’s still around so much: “Common schizophrenia alleles are enriched in mutation-intolerant genes and in regions under strong background selection”, Pardiñas et al 2018 – background selection might be the mechanism driving all the human mutation load in common schizophrenia variants. Could this also explain all the mutation load on other traits like intelligence?
      6. recent reversals/relaxation of selection: dysgenics obviously is not going to reduce mutation load. IQ/EDU PGSes are falling in the US and Iceland, so that will maintain the common variants as none of the good ones are driven to fixation, and last year there was an interesting study looking at a bunch of disease PGSes in ancient genomes: “The Genomic Health Of Ancient Hominins”, Berens et al 2017 – if I’m reading the graphs right, the overall mutation load from common variants was decreasing steadily for tens of thousands of years, only for that to reverse sometime in the past millennia and result in a ~+35% percentile increase in general disease risk.

      • Eponymous says:

        These all sound reasonable, but even after reading them all written down (and I think I was familiar with all of them beforehand) I still don’t think I would predict as many common variants as seem to exist.

        Out of curiosity: do you know whether working geneticists regard this as (1) puzzling, (2) not a puzzle, or (3) don’t talk about it?

        • gwern says:

          I am surprised too. The general paradigm among human geneticists (still) seems to be oriented toward expecting rare variants – eg the continued lust for exome/WGS data, the downplaying of SNP heritability and ignoring that they are lower bounds due to measurement error etc, the constant enthusiasm for measuring CNVs, the continued existence of some handwringing over ‘missing heritability’ – but the tension of this with how many common variants there are & how much of variance they account for appears to just go largely ignored.

      • BlindKungFuMaster says:

        “very small ‘effective’ population size”

        That would drive variants to fixation not keep them around.

        “need to adapt to local environments”

        But we see the variation in local populations as well, right?

        • gwern says:

          That would drive variants to fixation not keep them around.

          It would drive many of them up, yes. Not necessarily to fixation. (Again, look at the Amish etc – how many of those genetic disorders are fixed? None, as far as I know.) More importantly, the small effective size means selection is ineffective so you get lots of variants sitting around with meaningful effect sizes.

          But we see the variation in local populations as well, right?

          Which are still responding to selection. The altitude adaptations aren’t fixed in Tibetans either. Which means lots of variants sitting around with meaningful effect sizes.

      • vV_Vv says:

        Too recent for everything to have been finetuned. Something similar might be true for human brains in general: perhaps we became intelligent/social too recently for all the kinks to be worked out and this is more or less where schizophrenia/autism risk common alleles come from.

        If the environment changes quickly enough, this creates a group selection pressure to maintain enough variability in the population in order to quickly track the environment, since it’s faster to tune the frequencies of existing alleles rather than create new alleles by random mutation.

        • gwern says:

          Only if you have a ton of pressure at the group-level and constant replacement and also rapid fluctuation in selection pressures, none of which is true (all of the recent human evolution work I know tends to show consistent selection on traits, not fluctuating selection), and the history of group selection arguments is such that the burden of proof is on anyone claiming them to show that they are even plausibly possible via Price’s equation rather than making verbal arguments.

    • Michael Watts says:

      Question: Why is there so much common variation across quantitative traits in the first place?

      The quick answer to this question is “group selection”. 😀

      In more detail, this question is related to a major question in biology: Why are there males? Organisms which reproduce sexually in the “standard” manner pay an enormous cost by having fully half their mature population unable to reproduce. This makes population expansion (the entire goal of natural selection) much slower in the event that favorable circumstances occur and the population finds an ecological niche into which it can expand.

      The usual answer is that sexual reproduction maintains a body of standing variation in the population which allows the population to adapt to changing circumstances. (For example, if it suddenly became extremely disadvantageous for humans to be shorter than 190cm, we have enough people who are already that tall that the population could quickly converge on that height over just a couple of generations. If it suddenly became extremely disadvantageous to be shorter than 390cm, we would just go extinct instead.) The heavy burden of supporting an unproductive male population makes expansion under favorable circumstances slower, and that is bad. But the variation maintained by the sexual reproduction that they enable makes decline under unfavorable circumstances slower, and that is good.

      Against that background, the common variation you’re asking about is the purpose of sexual reproduction. It’s there because populations which don’t maintain it are very prone to going extinct.

      • Eponymous says:

        Regarding your answer: is this the standard answer in biology, or a minority view, or not a common answer?

        I can see why genetic diversity would be a species-level advantage; but I’m pretty suspicious of group selection arguments, and I don’t see a mechanism that would generate the genetic diversity. So just how is it generated and maintained?

        Why are there males?

        I thought males were just the sex with smaller gametes? No more or less able to reproduce than females, and 1/2 is the only equilibrium?

        From the rest of your comment it seems that you mean, “Why is there sexual reproduction?” Here I thought the answer was “Muller’s Ratchet”, which is about mutational load, not common variants / genetic diversity.

        I’m entirely self-taught on biology, so everything I just said might be wrong.

        • Randy M says:

          I thought males were just the sex with smaller gametes? No more or less able to reproduce than females, and 1/2 is the only equilibrium?

          Males are also the sex without wombs. Consider hermaphrodites as a possible alternative.

          • Randy M says:

            Not all organisms have wombs:

            Not sure how this contradicts males not having wombs.

          • [Thing] says:

            There was an interesting discussion of this very question on Robin Hanson’s blog recently. My favorite factoid that I learned from it, from an essay linked in one of the comments: One factor militating against hermaphroditism is non-nuclear DNA (mitochondrial DNA, and also plastids, in plants). Because they are typically only inherited from the mother, a hermaphrodite’s non-nuclear genes “want” their host organism to divert resources from the male to the female reproductive role, and in some plant species they have been observed to accomplish this, by sabotaging the development of male reproductive organs.

            I guess the stabilizing factor is that if non-nuclear genes made too high a fraction of a population female, it would increase the payoff nuclear genes could reap by making their host organism male.

        • Michael Watts says:

          I thought males were just the sex with smaller gametes? No more or less able to reproduce than females, and 1/2 is the only equilibrium?

          This depends on how you mean the word “reproduce”. It is definitely true that males contribute genetically to their children.

          However, in the sense of my answer, reproducing refers to physically creating more organisms. A population of humans can produce one child per woman every two years. The number of men in the population is not relevant, as long as there is at least one. Thus, the number of women limits the rate at which the population can grow, and from this perspective, at first glance, men are a massive cost with no benefit.

          The question is well-known in biology under the name “the paradox of sex” (e.g. here, here). I was under the impression that the answer I gave is well accepted, but I’m not familiar enough with the space to say how well accepted it is or if it is considered “standard”.

          I can see why genetic diversity would be a species-level advantage; but I’m pretty suspicious of group selection arguments

          If I hadn’t labeled my answer as “group selection”, would you have recognized that that was the implication? Assume I left in the language about how “It’s there because populations which don’t maintain it are very prone to going extinct”.

          There is a pronounced stigma against “group selection”, but it seems undeserved to me. For another example, The Selfish Gene takes some pains to describe how grooming behavior is adaptive, and also to emphasize that it’s a crapshoot whether a species develops mutual grooming or not, because “mutual grooming” and “no grooming” are both equally valid Nash equilibria. This completely fails to explain why grooming is so common, but group selection handles the question very easily, by noting that “mutual grooming” and “no grooming” are both Nash equilibria within-species, but a species that happens upon “mutual grooming” will last longer than one that settles on “no grooming”. Thus, if your sampling procedure is “choose a point in time such as ‘now’, look around, and see whether animals engage in mutual grooming or no grooming”, you’d expect to see mutual grooming overrepresented. (Modulo asymmetric grooming like crocodile birds.)

          • Christian Kleineidam says:

            It seems to me like your argument is a pop-science argument about what you think makes sense, while academia largely rejects group selection these days because they actually looked at mathematical models of it.

          • Eponymous says:

            If I hadn’t labeled my answer as “group selection”, would you have recognized that that was the implication? Assume I left in the language about how “It’s there because populations which don’t maintain it are very prone to going extinct”.

            I see those two phrases as having the same meaning. That said, it’s plausible that I have an automatic suspicion reaction to the particular phrasing “group selection” which might not trigger when presented with a description of group selection. I can’t be sure.

            There is a pronounced stigma against “group selection”, but it seems undeserved to me.

            Do you mean among the rationalist community (broadly defined) or among biologists?

            Group selection arguments are difficult to make work mathematically, but they can apply in some cases. Selecting between two Nash equilibria is one such case.

            The problem is when you use group selection to argue for an adaptation that is a group benefit but that is actually nonadaptive at the individual (or more correctly, gene) level.

            Speciation and extinction are an obvious case where group selection works: local groups get stuck in equilibria (local fitness maxima), and when they run into more fit species (that competes with them) they go extinct. So there is some higher-order selection going on.

            I’m having trouble seeing how this mechanism could increase overall genetic diversity much by itself, though. I mean, I can sort of work out the mathematical conditions that would need to apply in my head, and it doesn’t seem to add up. But I could be wrong, which is why I wanted to know whether this is a standard answer to the question I asked (about the number of common variants, not the existence of sex or males).

    • Faza (TCM) says:

      The boring answer is that natural selection is a satisficer and that a lot of those variations don’t really affect reproductive success that much.

      If you look at things like height, IQ or even eye colour or blood type, it’s not readily apparent that any of these things make you more or less fit to reproduce (in terms of predicting whether you will, in fact, reproduce) – and that’s not even considering stuff that doesn’t typically manifest, as discussed elsewhere.

      If they did, the inferior variants would gradually be selected against.

      IQ is especially fun, because I believe that currently it is negatively correlated with the number of offspring (in the West, anyway), so – if anything – it’s being selected against, despite offering clear individual advantage (as well as being advantageous, generally speaking, to the group as a whole).

      Given imperfections in the DNA copying process and assuming that organisms are capable of homeostasis most of the time, it would be surprising if we didn’t have a large amount of overt variability between individuals – as long as the species as a whole remains viable in its environment.

    • BlindKungFuMaster says:

      It takes a long time for a variant to go to fixation, even if the effect is pretty big (say, lactose tolerance). Everybody introduces a couple of dozen new variants into the gene pool. Most of them have basically negligible effect. So why would there not be lots and lots of variants around?
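
      A neutral Wright-Fisher simulation gives a feel for the timescale (population size and starting frequency are arbitrary choices): even a variant that is already common typically drifts for thousands of generations before it is fixed or lost.

```python
import numpy as np

rng = np.random.default_rng(5)

def generations_until_fixed_or_lost(pop_size=10_000, start_freq=0.1):
    """Neutral Wright-Fisher drift: resample the allele count every generation."""
    count = int(pop_size * start_freq)
    generations = 0
    while 0 < count < pop_size:
        count = rng.binomial(pop_size, count / pop_size)
        generations += 1
    return generations

runs = [generations_until_fixed_or_lost() for _ in range(20)]
print("median generations until fixation or loss:", int(np.median(runs)))
```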

  20. Bugmaster says:

    If you think that genetics (as a science) and/or genetic engineering (as a technology) are mostly solved problems, that merely await a large enough computing cluster to be built, then… well… let’s just hope you are well-versed in dealing with disappointment.

  21. nacht says:

    Your points about “…doesn’t understand that wealth = having lots of money” and “boondoggle” make me think of Douglas Adams’ Hitchhikers Guide to the Galaxy…

    Life the Universe and Everything = 42, now what was the question again?

    The whole thought of finding some billion-order equation to predict the human mind is kind of out there. Maybe we are just at the leeches part of this developmental cycle and in another 150 years we will see it that way (leeches were still a thing just 150 years ago).

  22. Lila Rieber says:

    The worst case scenario for predicting phenotype from genotype isn’t 3 billion, it’s (3 billion)^x, because the variants are probably non-independent.

  23. John Schilling says:

    Even if we can’t get much out of this, I think it can be helpful just to ask which factors and sciences are oligocausal vs. massively polycausal.

    The omnigenic metaphor should apply fairly broadly to the important normal behaviors of complex, general-purpose systems that have undergone extensive evolutionary optimization. Which maps pretty closely to “life”, but there’s a whole lot of caveats that need to be kept in mind even if we do limit this insight to biology and especially if we try to move beyond. As I just argued elsewhere, if any important trait isn’t polycausal, evolution will hammer the significantly divergent genes into conformity and low variability.

    Unless we’re dealing with a trait of no great importance to reproductive fitness, e.g. eye color, in which case a single gene causing broad variation will be allowed to persist.

    Unless we’re dealing with clearly abnormal behavior, which can often be attributed to a single cause, and in a living system a specific genetic defect. As Murphy notes above, it takes a whole lot of parts to make a car move fast, but if a car stops moving fast a good mechanic will usually be able to find the one part that did that.

    Unless we’re dealing with engineered rather than evolved systems, in which case the engineer may have decided to focus his efforts on a few key components rather than spend hundreds of generations tinkering with everything. If you want to know why a Grumman Tiger can fly faster and carry more weight than a Grumman Cheetah, I can pin that down to one cause each (the 40 extra cubic inches of engine displacement and 0.05″ extra thickness in the wing spar, respectively). And this may apply to the crude sort of genetic engineering practiced by selective breeding, which might well grab hold of a handful of genes and start stretching.

    Note that these three examples provided most of the high-signal-to-noise data from which the early science of genetics was deduced, hence the initial misleading paradigm.

    Now add in systems with specialized variants. Going back to cars, you might find a “sport” model of an otherwise-generic car that does go significantly faster than the normal variety. And when you find e.g. the supercharger, you’ll have the one cause for that. If you want specialized variants of the base model, the easiest way to get them is by varying a few key parts. Even evolution often settles for that solution.

    This mostly isn’t an issue for the normal operation of human beings; we’re all pretty much optimized to be generic hunter-gatherers with some patches for living in preindustrial civilization. We don’t have biological castes like bees or ants. We sometimes have culturally-defined castes, but these rarely if ever persist long enough to have significant evolutionary effects. We do have a measure of sexual dimorphism – and if we can’t find the specific gene for any sexually dimorphic aspect, we can usually pin down the chromosome. We might also observe some differences between e.g. the optimal Sub-Saharan African hunter-gatherer and the optimal Northern European model, and then find a handful of distinct genetic loci for things like melanin production and malaria resistance. And for the species that do have biological caste systems, there’s usually a specific cause (not necessarily genetic) for the difference.

    Then there’s the very simple systems, some of which are still within the category “life”. These can have single causes for significant traits even if evolution is hard at work looking for a local optimum, because there isn’t room for many causes.

    And then there’s all the stuff in the universe that is neither engineered nor evolved, but just is. Don’t look for omnicausality in, e.g., astrophysics. Stars basically just shine because of p-p fusion. In geology, igneous rocks are different than the other types because they were squirted out of a volcano and left to solidify and the other types weren’t. Rocks, or even planets and stars, don’t have to be massively complex to perform their function, and they aren’t optimized by evolution or deliberate engineering. A few simple causes are left to work on the raw materials, and we get what we get.

    TL;DR: I’m guessing that most sciences are going to be basically “oligocausal” (nice phrasing), but that the massively polycausal ones will be disproportionately interesting to humans.

  24. proyas says:

    Think of how much better our understanding of human genetics will be once environmental factors can be much more closely monitored and controlled. For instance, under a benign AI dictatorship, humans would be monitored 24/7, and the effects of every environmental stimulus could be accounted for.

  25. b_jonas says:

    Wait, three billion base pairs? Last I heard, humans had four billion base pairs. And that was back in the days of the big effort to “sequence the human genome”, you know, the one that every human has with a few changes. I guess I must have rounded the number I heard before committing it to my memory.

    I also must have listened to different news, because I don’t remember anyone talking about any number of genes for diabetes. What I heard is people asking about diabetes in my family, so my current internal model is that I will likely get type 2 diabetes twenty or thirty years from now because my mother and grandmother have diabetes, I’m overweight, have hypertension, and don’t eat healthy. Twenty years from when I get diabetes, I will likely die from a cardiovascular disease. If I’m lucky, my death might get postponed by ten or even twenty more years after that, at which point I’ll be hard of hearing, have brain damage and serious motor control difficulties from multiple strokes, and be blind, not necessarily in this order.

  26. Nootropic cormorant says:

    If a trait is massively polygenic, maybe it needs to be broken down into smaller high-level traits.

    I would expect intelligence to be an interplay of several mental faculties that we might be able to test effectively some day and that these will be explainable using a lesser number of genes.

    Likewise, I suspect that conditions such as depression, diabetes or transgenderism subsume different phenomena that may have very different causal structures.

    Results obtained by brute force may be useful, but how robust to generalization can we expect them to be? It seems like these are likely to detect lots of factors that contribute only indirectly by affecting the environment in ways that aren’t stable across societies, time or societal groups. This is without getting into objections given above about how statistically weak this methodology may be.

    • Bugmaster says:

      If a trait is massively polygenic, maybe it needs to be broken down into smaller high-level traits.

      Well, maybe, but right now no one knows how to do that. I’m not talking in terms of genes, but phenotypes: AFAIK, no one knows how human intelligence works, or what other abilities comprise it.

      • Nootropic cormorant says:

        I would be surprised if we didn’t have many workable hypotheses about it that require an enormous amount of serious multidisciplinary work to go anywhere.

        In general it seems to me that when we have hundreds of small causes affecting a phenomenon there’s either 1) hope that proximal causes will greatly simplify the picture once discovered, or 2) the prospect of the phenomenon not being meaningful, that is, being too arbitrarily defined.
        Is this unreasonable?

        • Bugmaster says:

          It’s not unreasonable, but in practice, it often (not always, but often) turns out that even relatively simple and clear-cut traits can have mind-bogglingly complex mechanisms behind them. For example, in plants, traits like “soil aluminum tolerance” are reasonably well defined, but that doesn’t mean that you can just knock out some gene and get an aluminum-resistant plant; nor does it mean that the full chemistry behind what makes a plant aluminum-resistant is well understood. It will definitely “require an enormous amount of serious multidisciplinary work to go anywhere”, as you said, but in many cases we don’t even know what that work would look like. Intelligence is one of those cases, I think.

  27. Freddie deBoer says:

    Of course, inductively this is all an argument for the inevitability of another big paradigm shift in genetics!

    • baconbits9 says:

      It’s already happening.

      • [Thing] says:

        Can you elaborate? Also, not sure I understand GP’s point. Is it that the phenomenon of geneticists once again making confident predictions about future discoveries is a signal that they will soon be needing to come up with a new paradigm to explain why their confident predictions failed?

        • baconbits9 says:

          The ability to sequence entire genomes quickly and cheaply has led to a lot of issues. A lot of the easy answers (when a small number of genes directly controls a trait/disease) have apparently been found and now researchers are bumping up against those issues. It has been known for a long time that some plants behave in weird ways genetically, but the examples have been written off as not generalizable and fluky. While the first part might be true, there is a growing number of such flukes, and it might be that there are many more flukes than previously assumed.

          When I was working with somaclonal variation in date palms 10+ years ago it was pretty much assumed that now, with the ease of sequencing (which has gotten much easier since then), it was only a matter of time before we could identify if not the causes then at least the markers needed to solve an issue that had been first identified in the 70s. Last I heard (5ish years ago) no one had made significant progress over our (turned down) proposal, where we thought that with a combination of 5-6 markers we could identify maybe 60% of mutants (the proposal was to test those markers as our sample expanded, and that would have been a good-to-great outcome).

  28. Eponymous says:

    I think the use of the word “causal” is wildly incorrect here.

    What you’re suggesting is basically generating predictive scores from large masses of data. And if you’re smart about it, you try to make your predictor perform well out of sample.

    But this is decidedly *not* identifying the underlying causal mechanisms, which is exactly why scientists are leery of it.

    Despite genetics being your go-to example (and a sort of perfect case to do this), there is a very active debate right now about just how causal the variants people have identified really are. One basic problem is “linkage disequilibrium”, which is a fancy way of saying that genes are correlated: if I have gene X, that makes it more likely that I’m descended from certain lineages, and therefore have gene Y and don’t have gene Z. But this means that your predictor might be weighting gene X, but really genes Y and Z are the causal variants.
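
    A tiny simulation of that confounding (all numbers invented, and linkage disequilibrium is modelled crudely here): variant Y is causal, variant X merely travels with it, yet a naive one-variant-at-a-time scan flags both.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 20_000

    y_allele = rng.binomial(2, 0.5, n)                  # causal variant, 0/1/2 copies
    # X usually rides along with Y (crude stand-in for linkage disequilibrium)
    x_allele = np.where(rng.random(n) < 0.9, y_allele, rng.binomial(2, 0.5, n))
    trait = 0.3 * y_allele + rng.normal(0, 1, n)        # only Y has a real effect

    for name, g in [("Y (causal)", y_allele), ("X (just linked)", x_allele)]:
        r = np.corrcoef(g, trait)[0, 1]
        print(f"{name}: correlation with trait = {r:.3f}")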

    Another problem is the “red-headed gene” problem; basically that in a society that forces red-headed kids to work themselves to death in mines instead of going to school, genetic variants for red hair will show up as hits for educational attainment and mortality.

    This problem gets worse when you try to shift the approach to other domains. At least if you have the entire genome, you know you have all the genes in your data. But in many contexts you don’t have a measure of every possible variable. This makes it way more likely that any correlation you discover (which is essentially what you use to construct your predictor) is non-causal.

    I’m not opposed to using the methodology you describe as a supplement to more traditional approaches, but I don’t think it should be the main approach in most fields. It seems too much like putting your ignorance in a box and trying to reason about it, rather than trying to unpack the underlying causal mechanism in the box, which I take to be the main work of science.

    • greghb says:

      This data-driven approach can have tremendous economic value, even if it’s not that enlightening.

      For example, many hedge funds trade on statistical patterns, and because they’re careful to make their models work out of sample, they mostly avoid blowing up – though some do anyway. They are successful at this even if they don’t have much of a causal understanding of the economy.

      So, it’s valuable and useful, but it falls short of the high goals of science which, as you say, help you understand the world mechanistically.

  29. Eponymous says:

    Piaget says children gain long-term memory at age 4 and don’t learn abstract thought until ten

    Either my son is a prodigy, or long-term memory develops a good deal before 4.

    I’ll start checking for abstract thought in a few years. Does Piaget offer a specific test?

    • brmic says:

      He does. You don’t want to use it.
      See here (https://www.simplypsychology.org/formal-operational.html) and here (https://www.verywellmind.com/formal-operational-stage-of-cognitive-development-2795459) for examples. But later researchers have generally found that Piaget’s methods tend to overestimate the time it takes to master certain skills, for a variety of reasons: having to verbalize the answer, needing domain-specific (physics) knowledge, needing familiarity with that kind of task and with testing environments, etc. In some cases it has been found that e.g. abstract thought (formal operational stage) is available earlier in some domains (areas of knowledge) while still inaccessible in other areas.

    • BlindKungFuMaster says:

      I think this is more about stable long-term memory. The kind that lasts a lifetime. Obviously kids have long term memories earlier. These just don’t make it into adulthood.

      • Maxwell says:

        Stable long term memory isn’t normal, is it? Aside from things that get massively reinforced.

        The things I remember from my childhood are just a few things that for some flukey reason I remembered in the past. What I’m remembering is past remembering of past remembering; the original memory is long gone, except perhaps for the faintest wisps.

        • BlindKungFuMaster says:

          One of the interesting things about memories is that they resurface in waves. When I was in my mid-twenties I started to remember a lot of things from my early childhood.

          That makes some sense from a functional point of view, because I was biologically supposed to have kids by that point. It also means that you are right, and that there might actually be a systematic reinforcing going on. But it also means that some memories stay unremembered for decades and then just pop up again. That’s pretty stable.

      • Michael Watts says:

        Most of your memories of being 30 years old don’t last either. Should we conclude that 30-year-olds haven’t yet developed long-term memory?

        • BlindKungFuMaster says:

          This is not about “most”, it’s about “any”.

          Even if some people do remember stuff from the age of two instead of four, it is pretty clear that the earliest formed memories (maybe those of the first year) don’t make it into anybody’s adulthood.

          To me that makes some sense, because you need to learn a stable representation of aspects of the world, before you can remember them.

  30. melboiko says:

    Related: Polygenic threshold model for the biological roots of transgender identity, published this year in the Behavior Genetics journal (one can use sci-hub to bypass the paywalls). AFAICT this is currently the best model of transgender etiology we have, data-wise.
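
    For readers who haven’t met threshold models: the gist is that a liability built from many small genetic effects plus non-shared environment has to cross a cutoff before the trait shows up at all. A toy version (my numbers, nothing from the paper) also shows why identical twins can be discordant even when the genetics is fully shared:

    import numpy as np

    rng = np.random.default_rng(2)
    n_pairs, n_loci = 20_000, 500

    effects = rng.normal(0, 1 / np.sqrt(n_loci), n_loci)   # many tiny effects
    genotypes = rng.binomial(2, 0.5, (n_pairs, n_loci))
    genetic = genotypes @ effects                           # shared by both MZ twins
    genetic -= genetic.mean()                               # centre for convenience

    def affected(gen_liability):
        env = rng.normal(0, 1, n_pairs)                     # non-shared environment
        return gen_liability + env > 2.5                    # arbitrary threshold

    twin_a, twin_b = affected(genetic), affected(genetic)
    prevalence = twin_a.mean()
    concordance = (twin_a & twin_b).sum() / twin_a.sum()
    print(f"prevalence ~ {prevalence:.1%}, MZ concordance ~ {concordance:.0%}")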

    • a reader says:

      In the case of early onset MTF transgenders (and also of male homosexuals), afaik the best model involves something happening in the womb and preventing masculinisation of the fetus’s brain:

      “This explanation posits that some mothers develop antibodies against a Y-linked protein important in male brain development, and that this effect becomes increasingly likely with each male gestation, altering brain structures underlying sexual orientation in their later-born sons.[…] In addition, after statistically controlling for number of pregnancies, mothers of gay sons, particularly those with older brothers, had significantly higher anti-NLGN4Y levels than did the control samples of women, including mothers of heterosexual sons. The results suggest an association between a maternal immune response to NLGN4Y and subsequent sexual orientation in male offspring.”

      http://www.pnas.org/content/early/2017/12/05/1705895114

      Genes probably play a role, but they couldn’t be decisive, because there are identical twins, one transgender (from childhood) and one not:

      https://people.com/books/twin-transgender-story-jonas-and-nicole-maines-told-in-book-becoming-nicole/

      Although the other model also needs an explanation: why was one twin less affected by the mother’s antibodies than the other?

      • Ozy Frantz says:

        The problem with identical-twin models for transness is that you’ll always get a considerable nonshared environment component from whether or not the person decides to transition. (A similar thing, of course, is true for same-gender attraction– one could have three men with the exact same ‘objective’ sexual orientation, but one identifies as bi, one as straight, and one as gay.) I don’t think that explains 100% of the variation– if nothing else, one would expect one’s twin transitioning to cause one to seriously contemplate transition– but I expect even if a single gene explained transness you’d get some level of identical-twin nonconcordance.

        • a reader says:

          The problem with identical-twin models for transness is that you’ll always get a considerable nonshared environment component from whether or not the person decides to transition. (A similar thing, of course, is true for same-gender attraction– one could have three men with the exact same ‘objective’ sexual orientation, but one identifies as bi, one as straight, and one as gay.)

          That would apply if the twins were raised apart – for example one in a big city, with a large gay community, the other on a farm, in a conservative community. There was such a case in a study, “Homosexuality in Monozygotic Twins Reared Apart“: the one in the city was gay, the one on the farm had a gay relationship as a teen for 3 years, but at 20 he married, stayed married and had 4 children.

          But the trans twin case was different: the identical twins were raised together and, although looking the same, their interests were different from the beginning:

          Jonas and Wyatt Maines were born identical twins, but from the start each had a distinct personality. Jonas was all boy. He loved Spiderman, action figures, pirates, and swords. Wyatt favored pink tutus and beads. At 4, he insisted on a Barbie birthday cake and had a thing for mermaids. On Halloween, Jonas was Buzz Lightyear. Wyatt wanted to be a princess; his mother compromised on a prince costume.

          http://archive.boston.com/lifestyle/family/articles/2011/12/11/led_by_the_child_who_simply_knew/?page=full

          The trans twin transitioned in the fourth grade. If the other twin had the same wish but was less assertive, he would have expressed it too, once he saw the parents being supportive. But he seemed to be really a regular boy, with usual interests.

  31. Robin says:

    Typo in first paragraph: “because every every sliver”

  32. Deiseach says:

    Human Genome Project leader Francis Collins estimated that there were “about twelve genes” for diabetes, and “all of them will be discovered in the next two years”.

    Yeah, this is why I’m always banging on about being old and having lived through a series of Chicken Little scares. I mean, by the time you hit fifty, you are going to have a selection of the following under your belt:

    (1) Genuinely Cool Discovery is made
    (2) Media, abetted by Excited Scientists With Eye On Grant Applications, trumpets ‘Cure For Fizzywhizz Just Five Years Away!’
    (3) Five years on, much less trumpeted article sneaks out shamefacedly about ‘Actually, Turns Out Fizzywhizz is Really Complicated, Guys!’
    (4) Rinse and repeat with next Cool Discovery

    Which is why I tend to the “smiling with affectionate amusement which yes I do understand comes across as patronising” at all the young’uns running around excited about AI, cryogenics, transhumanism, climate change is going to wipe out humanity in an extinction event, and so on. A good rule of thumb is always the oft-quoted laws, whether Finagle’s or Murphy’s or somebody else’s: “Everything takes longer. And costs more.”

    50s SF is full of this, and Asimov’s Foundation series was just the logical culmination: there are a few basic simple physical laws underpinning human society, and once we figure out those principles, we can predict behaviour en masse, influence it, and social programme our way to Utopia and happy, healthy, sane, productive, non-criminal, non-aberrant citizens. Psychiatry/psychology/sociology are all sciences just like physics and chemistry, and just like physics and chemistry there are natural laws which we can understand and distill down to a set of processes where you put in the correct input/stimuli and get out the desired outcome reproducibly, precisely, and accurately all the time.

    We want simple, clear rules and methods we can implement to make things work out, the way we developed industrial production of acetic acid to bypass reliance on the biological methods up to then. But biology is messy (in an elegantly complex way) when you start getting complex systems that are not easily reducible to “this system of steps”, so yeah – we’ll be looking at “this single gene turned on or off makes you a genius or a cretin, but to get an average intelligence of 110 up to 115 requires this interlocking series of a gazillion genes all working together and even then it might not pull through” for quite a while yet, until the next Big Breakthrough.

    • Murphy says:

      > Asimov’s Foundation series was just the logical culmination

      It did turn out that there was a psychic super-illuminati actually keeping everything on course since there were far too many uncontrollable factors that could throw the predictions off.

      I had a grumpy old genetics professor who’d gotten a tad too cynical, liked to rant about how “all this AI nonsense” never paid off with regards to annotation of genetics data.

      I basically did my paper for his class on the details of the progress in machine annotation, and the interesting point was that people had made big promises a few years ago, the grumpy farts had decided it was all smoke and mirrors when it didn’t pay off right away… meanwhile, in the intervening years, the promises had been slowly fulfilled to the point where machine annotation had surpassed humans. A few years on, now working in the field, nobody sane would try to hand-annotate things like they used to have to do, because of course you just download some software packages and hit go.

      (1) Genuinely Cool Discovery is made
      (2) Media, abetted by Excited Scientists With Eye On Grant Applications, trumpets ‘Cure For Fizzywhizz Just Five Years Away!’
      (3) Five years on, much less trumpeted article sneaks out shamefacedly about ‘Actually, Turns Out Fizzywhizz is Really Complicated, Guys!’

      (3.1) Over the next 5 years lots of quite smart and dedicated people grapple with the problems in (3) and fulfill the promises from (2), but because these are all gradual improvements it gets little attention

      (4) Rinse and repeat with next Cool Discovery

      it’s like how people whinge about fusion always being 50 years away and you have to be like “actually they’re pretty much perfectly on track as measured in money spent and have been making good progress, it’s just that their budget was massively massively slashed once oil prices went down”

      people love a big story about how XYZ CURES CANCER!!!!! which is why the daily mail continues its holy quest to separate everything in the universe into that which causes and/or cures cancer.

      But people kinda ignore boring charts of infant mortality or mortality from various diseases that slowly creep towards zero thanks to thousands of little people making small discoveries and improvements.

      • James C says:

        people love a big story about how XYZ CURES CANCER!!!!! which is why the daily mail continues its holy quest to separate everything in the universe into that which causes and/or cures cancer.

        Ironically, it’ll eventually turn out that reading the Daily Mail causes cancer.

      • Randy M says:

        Out of curiosity, what did your prof think of your paper?

      • Faza (TCM) says:

        people love a big story about how XYZ CURES CANCER!!!!!

        Obligatory xkcd.

      • Deiseach says:

        It did turn out that there was a psychic super-illuminati actually keeping everything on course since there were far too many uncontrollable factors that could throw the predictions off.

        Well, Asimov realised that a successful story requires drama, and there’s not much drama in an all-conquering organisation that can successfully forecast the future for the next thousand years. Hari Seldon’s hologram pops up on the dot every time there’s a particular crisis to push the plot forward the next part of the way, and to introduce drama there could only be a failure of the infallible theory.

        Which is a sticky problem, because if the infallible theory is indeed fallible, then all Seldon’s work is likely to come undone over the course of the millennium because more noise will accrete the further away from the origin point in his time the (remnants of the) Empire gets. I think Asimov wrote himself into a corner with “here’s a sophisticated social science that is proper science, and being proper science it can never fail!” so he had to introduce a figure like the Mule – a mutant so out of the ordinary course that he could not have been predicted by Seldon’s theory. And in order to beat the Mule, which the First Foundation couldn’t do, there had to be others with similar or greater powers, and that’s the Second Foundation.

        And then he tries putting that on a proper scientific basis with talk of “mentalics” and the usual developing natural human powers of empathic sensing and the like. I don’t really take the later novels in the series as ‘canon’ so much, since there were a lot of 80s/90s SF writers and/or their collaborators revisiting their 50s selves and works to re-jig them to fit the sensibilities of the times, as well as publishers wanting to milk a cash cow till it keeled over and died and then indulging in necromancy with the corpse (looking at you, Dune books!)

    • Michael Watts says:

      A good rule of thumb is always the oft-quoted laws, whether Finagle’s or Murphy’s or somebody else’s: “Everything takes longer. And costs more.”

      You want Hofstadter’s Law:

      It always takes longer than you expect, even when you take into account Hofstadter’s Law.

      😀

  33. Murphy says:

    I think this post in one way is completely correct… and in other ways is deeply misleading re: genetics.

    Imagine I said “there’s many things about a car that contribute to how fast it can go”

    That would be entirely true: there are thousands of little design tweaks in car and engine that can be the difference between something chugging along at 30mph belching smoke and something whizzing along at 120 miles an hour purring like a kitten.

    You could call it polycausal.

    Change the body shape slightly and you get a slight variation in speed and performance. Change the shape or diameter of this pipe here in the engine or the timing of this control signal or vary the octane of the fuel slightly and you get tiny effects.

    But then there’s also single-cause issues. Poke a hole in the fuel tank, jam a potato in the exhaust, spike the fuel with sugar. Suddenly you’ve got some very simple sources of massive variation with very clear causes.

    There’s a thousand ways to screw up a working brain just a little bit or to let it work ever so slightly more smoothly… and then there’s a lot of things that can have gigantic effects.

    So pick a number: 1, 4, 40, 2000… You can find people with a 60-point difference in IQ from a single mutation – the difference between genius and profoundly disabled – and you can find thousands of variants which have tiny positive or negative effects, and everything in between.

    There being thousands of things which can affect a trait is not mutually exclusive with potential gigantic modifiers existing, and there’s some evidence that there are sometimes genetic games of chicken going on to allow us to sustain more complex/effective neural architecture.

    https://www.economist.com/science-and-technology/2015/03/05/a-faustian-bargain

    • Michael Watts says:

      There’s much better than “some” evidence of genetic games of chicken. Prader-Willi / Angelman Syndrome is the clearest example I know of for humans, but the same dynamic is why the offspring of a male lion and a female tiger is much larger than the offspring of a female lion and a male tiger.

      Actually, although I thought of this immediately on seeing the phrase “genetic games of chicken”, it’s not a good match to how games of chicken actually work. In a game of chicken, you have two parties who are each attempting to stay on a disastrous course longer than the other one, but disaster is assured if either party stays on too long.

      But in Prader-Willi / Angelman phenomena, each side has long since overshot the point of disaster, but because the two sides are in balanced opposition, the disaster will not occur so long as both sides stay on track. It’s when one side bails out that the problem occurs.

      Is this what you were thinking of, or was it something else?

      Edit: I read your link; the article appears to use “genetic game of chicken” to refer to a circumstance where as the amount of X which an organism has increases, fitness first increases and then decreases. This sounds like pretty basic evolution to me. Height behaves the same way, but I suspect nobody quoted in the article would call height “a genetic game of chicken”.

  34. zzzzort says:

    What if we apply that intuition backwards? In the pre-DNA era there were a number of candidates for what controlled heritability. It turns out that it was (more or less) just one molecule. Even in the early DNA period it wasn’t known how DNA controlled protein expression. It would have been plausible that proteins could be the result of a complex combination of properties of DNA, including the global sequence, interaction with DNA proteins, and topology. Should the fact that there exist things called genes which are spatially localized on DNA and have a universal three-base-pair alphabet be surprising?

    Even if the mechanism was not known a priori, the payoff of discovering the single intelligence gene/hereditary molecule/genetic code is so much higher than discovering even a pretty good polycausal function that it makes sense to look for the easy answer first.

  35. nameless1 says:

    Can we say information in genes is zipped?

    Suppose I have 30 christmas lights in a long chain, and each light bulb can be switched between 3 colors. A computer is controlling it and to set up a certain combination of lights we need to enter data into a .txt file (Notepad). The first position represents the first light bulb, the second the second one etc. Into each position I can write a number 1 or 2 or 3, all other inputs are illegal and the bulb will be dark. 1 is red, 2 is white and 3 is blue. So if I want the christmas lights look like a French flag, I will enter 111111111122222222223333333333 into the file. Each digit, each position directly corresponds to one light and directly controls it.

    Then I zip the file. I am not exactly sure how zipping works but the logical thing is to write something like 10×1,10×2,10×3 which is indeed shorter than the input file. I modify the light bulb controller program to be able to unzip the file and work with that.

    In my zipped file, positions do not correspond directly to light bulbs anymore. It has to be unzipped to do that. I cannot tell which byte controls light 4 anymore and modifying the zip file directly to make light 4 white is non-obvious. 3×1,1×2,6×1,10×2,10×3.
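
    For concreteness, here is roughly the run-length scheme being described, as a few lines of Python (a sketch; real zip/DEFLATE is more elaborate, but the point that positions stop mapping to bulbs survives):

    from itertools import groupby

    def rle_encode(s):
        return ",".join(f"{len(list(g))}x{ch}" for ch, g in groupby(s))

    def rle_decode(z):
        return "".join(ch * int(count) for count, ch in
                       (part.split("x") for part in z.split(",")))

    flag = "1" * 10 + "2" * 10 + "3" * 10       # the French-flag light string
    packed = rle_encode(flag)
    print(packed)                               # 10x1,10x2,10x3
    assert rle_decode(packed) == flag
    # editing bulb 4 in the unpacked string reshapes the packed form entirely:
    print(rle_encode(flag[:3] + "2" + flag[4:]))   # 3x1,1x2,6x1,10x2,10x3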

    Is this how genes work?

    • The Element of Surprise says:

      Compression is probably a bad analogy. Genes actually encode (in an “uncompressed” format) the proteins they give rise to, together with signals that control how much and under what conditions the proteins are to be produced. These proteins then go on to lead to consequences (cell division, cell differentiation, tissue formation, up to the actual behaviour of specific cell types) that determine the ultimate shape and behaviour (“phenotype”) of the living organism.

      Organisms of the same species agree on almost all DNA; two humans have >99% of DNA in common. However, small variability between individuals (often just single-point differences in DNA) leads to e.g. proteins that have small differences in their chemical characteristics, or small differences in how much they are expressed. This small variability is what we actually look at when we look for “genes” for something.

    • zzzzort says:

      It’s actually really remarkable how simple the genetic code is. In situations where the length of the total genetic code is constrained, such as viruses, more information is sometimes packed in through frameshifting. Essentially, if you have a sequence of ABCABCABC you can read it as a repeating sequence of either ABC’s, BCA’s, CAB’s, CBA’s, BAC’s, or ACB’s by changing the starting point and direction that you read. I seem to remember some virus used 5 of the 6 possible senses to encode different proteins.
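
      A small sketch of those reading frames (plain string reversal stands in for the reverse direction; in real DNA it would be the reverse complement):

      def codons(seq, offset):
          s = seq[offset:]
          return [s[i:i + 3] for i in range(0, len(s) - len(s) % 3, 3)]

      seq = "ABCABCABCABC"
      for strand_name, strand in [("forward", seq), ("reverse", seq[::-1])]:
          for offset in range(3):
              print(strand_name, offset, codons(strand, offset))
      # forward frames read ABC / BCA / CAB; reverse frames read CBA / BAC / ACB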

    • Murphy says:

      Not so much. From a computer scientist/coder point of view….

      If you were programming a system to produce a certain alternating signal with certain timing you’d need a clock and you’d do something logical like have a loop that checks the clock and changes the signal.

      When you evolve something to do the same task…

      So how did evolution do it – and without a clock? When he looked at the final circuit, Thompson found the input signal routed through a complex assortment of feedback loops. He believes that these probably create modified and time-delayed versions of the signal that interfere with the original signal in a way that enables the circuit to discriminate between the two tones. “But really, I don’t have the faintest idea how it works,” he says.

      One thing is certain: the FPGA is working in an analogue manner. Up until the final version, the circuits were producing analogue waveforms, not the neat digital outputs of 0 volts and 5 volts. Thompson says the feedback loops in the final circuit are unlikely to sustain the 0 and 1 logic levels of a digital circuit. “Evolution has been free to explore the full repertoire of behaviours available from the silicon resources,” says Thompson.

      That repertoire turns out to be more intriguing than Thompson could have imagined. Although the configuration program specified tasks for all 100 cells, it transpired that only 32 were essential to the circuit’s operation. Thompson could bypass the other cells without affecting it. A further five cells appeared to serve no logical purpose at all – there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working.

  36. Clegg says:

    Instead of publishing a paper claiming that lead causes crime, they publish a paper giving the latest polycausal score for predicting crime, and demonstrating that they can make it much more accurate by including lead as a variable.

    This is more or less what Jessica Wolpaw Reyes did in her original lead-crime paper: http://www.nber.org/papers/w13097.pdf
    See especially the introduction, and table 6 on p. 59.

    In general, genetics is a bad metaphor for empirical microeconomics, because in genetics there’s a fixed universe of possible causes given by the genome. Finding a gene that explains 1% of the variation in some phenotype isn’t that exciting in part because you expected some gene or other to be important. But finding out that e.g. HBCUs have to pay an extra 25 basis points to sell their bonds is a startling and compelling result, even though adding racial bias to an asset pricing model doesn’t help it fit the data much better.

    • Faza (TCM) says:

      I’m not sure that genetics offers a more fixed universe than other complex systems (such as empirical microeconomics), given that genes don’t operate in a vacuum.

      To the best of my knowledge (I am by no means a specialist), an actual organism has numerous mechanisms built in to ensure correct development, regardless of what the genome actually says (in other words: correcting for mutations). When these mechanisms fail – for example, due to environmental insults – such mutations may suddenly manifest, despite having been passed down for generations with no prior signs of their presence. It is, therefore, not possible to predict actual gene expression in advance, without the ability to also predict the environment the organism will be functioning in.

  37. JulieK says:


    The Anna Karenina principle states that a deficiency in any one of a number of factors dooms an endeavor to failure. Consequently, a successful endeavor (subject to this principle) is one where every possible deficiency has been avoided.

    The name of the principle derives from Leo Tolstoy’s book Anna Karenina, which begins:

    Happy families are all alike; every unhappy family is unhappy in its own way.

    In other words: in order to be happy, a family must be successful on each and every one of a range of criteria, e.g. sexual attraction, money issues, parenting, religion, in-laws. Failure on only one of these counts leads to unhappiness. Thus there are more ways for a family to be unhappy than happy.

    In order for a person to have normal or above-normal intelligence, a lot of things have to be working correctly. Thus, there are a lot of potential ways for a genetic mutation to reduce someone’s intelligence.

    I expect that there are also a lot of ways that something could go wrong and a person could be depressed.
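
    A two-line quantification of the principle (the reliability number is invented): if being “fine” requires every one of k independent factors to work, even very reliable individual factors leave plenty of overall failures, each failing in its own way.

    p_single_factor_ok = 0.99
    for k in (10, 100, 1000):
        print(f"{k} factors: {p_single_factor_ok ** k:.4%} of people fully ok")
    # 10 factors: ~90%; 100 factors: ~37%; 1000 factors: ~0.004%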

  38. Frederic Mari says:

    @Scott Alexander: Out of curiosity, are you saying that lead isn’t a big component of crime? The models and stuff I read around it (arguably popular-science articles rather than actual scientific papers) make it sound pretty solid.

    It’s not like poverty, gun laws, behavioural expectations etc. don’t play a role. It’s that lead is the element responsible for the 80s and early 90s crime waves we saw around the world. But Japan’s crime stats are still better than the US’s or Mexico’s.

    • Deiseach says:

      It’s that lead is the element responsible for the 80s and early 90s crime waves we saw around the world.

      Oh sure. But dig into it further: who are the children most likely to be exposed to lead, and in large amounts? And then the class and race angle emerges as it’s the children of the poor not the rich. So little Johnny grew up to be Tommy Gun Taylor the Terror of Chicago, but if he hadn’t been the eldest son of a poor widow making a living by being a seamstress raising six kids in the ghetto, but instead the son of the respectable lawyer prosecuting his case, would his future have been different? How much is the fault of Johnny’s criminal genes and how much the fault of society? 🙂

      • Frederic Mari says:

        Yes, I am quite aware. In the 90s and early 00s, I was quite willing to believe the rise in violence was entirely down to social or socio-economic issues, including the economics of the drug trade versus, say, a protection racket.

        But the fact that lead poisoning isn’t evenly distributed, and/or that lead poisoning does not lead to specific outcomes independently of other economic factors, only refines the basic acceptance that lead poisoning is responsible for the surge in violence in the 80s/90s.

        I was wondering if Scott rejects that fact/hypothesis.

    • Douglas Knight says:
    • Scott Alexander says:

      No, I’m not saying that. I agree it sounds pretty solid.

  39. Douglas Summers-Stay says:

    This is what happened to computer vision in the last 20 years, going from a few explainable causes to a model with many tiny indescribable ones. A neural network is what you end up with when you say “everything in the image needs to be able to affect everything else, and those effects need to be able to affect each other, many layers deep.” A convolutional neural network is what happens when you simplify the complexity by only allowing local interactions to happen at each level.
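
    A rough illustration of that contrast in PyTorch (toy sizes, nothing specific to any real vision model): a dense layer lets every pixel interact with every other pixel, while a convolutional layer only allows 3×3 neighbourhoods to interact, which is why it needs vastly fewer parameters per layer.

    import torch
    import torch.nn as nn

    h = w = 32
    dense = nn.Linear(h * w, h * w)                     # everything affects everything
    local = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # only local interactions

    def n_params(m):
        return sum(p.numel() for p in m.parameters())

    print("dense layer parameters:", n_params(dense))   # 1,049,600
    print("conv layer parameters: ", n_params(local))   # 10

    x = torch.randn(1, 1, h, w)
    assert dense(x.flatten(1)).shape == (1, h * w)
    assert local(x).shape == (1, 1, h, w)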

  40. Michael Watts says:

    Piaget says children gain long-term memory at age 4 and don’t learn abstract thought until ten

    Um… so what? Lots of people say lots of stupid things. When did Piaget get any more credible than the Tongue Map?

    For decades, people talked about “the gene for height”, “the gene for intelligence”, etc. Was the gene for intelligence on chromosome 6? Was it on the X chromosome? What happens if your baby doesn’t have the gene for intelligence? Can they still succeed?

    Meanwhile, the responsible experts were saying traits might be determined by a two-digit number of genes. Human Genome Project leader Francis Collins estimated that there were “about twelve genes” for diabetes, and “all of them will be discovered in the next two years”. Quanta Magazine reminds us of a 1999 study which claimed that “perhaps more than fifteen genes” might contribute to autism. By the early 2000s, the American Psychological Association was a little more cautious, was saying intelligence might be linked to “dozens – if not hundreds” of genes.

    I was about to say something here, and then I noticed this:

    And even remembering those times, they seem incomprehensible. Like, really? Only a few visionaries considered the hypothesis that the most complex and subtle of human traits might depend on more than one protein?

    This essay seems to be agonizing over the fact that people have used the same word, “gene”, to mean different things. The original meaning is the abstract unit of inheritance — genetics was discovered long before DNA was. And in that sense, it’s perfectly reasonable to talk about “the gene for X” to whatever degree X is narrow-sense heritable.

    DNA was discovered as the culmination of the effort to find the physical basis for inheritance. It is, in fact, the physical basis for inheritance, and so to the extent that “genes” had physical form they were necessarily embodied in DNA. Investigation revealed that the more immediate function of DNA is to produce proteins, so theorists working in an entirely different paradigm adopted the term “gene” to refer to the stretch of DNA which codes any particular protein. But that’s mostly unrelated to the inheritance sense. There’s plenty of information contained in stretches of DNA which code for no proteins at all.

    I don’t see how using the term “gene” to refer to the raw code for a protein means that using the older term “gene” to refer to the abstract concept of inheritance is a “diseased paradigm”.

    • Scott Alexander says:

      Any paradigm that allowed people to say “the gene for intelligence is on chromosome six” or “there could be as many as ten genes affecting autism” is just object-level bad. I appreciate your attempt at charity here, but I lived through this period and I think people meant what it sounded like they meant.

      • Michael Watts says:

        There never was a paradigm that allowed people to say “the gene for intelligence is on chromosome six”. Nothing stopped them from saying it anyway, but why do you believe there was a theoretic paradigm backing them up?

        People say “race does not exist” and “there hasn’t been enough time for Indians (in India) and Eskimos to evolve separate adaptations to their local climates” all the time too. They describe things as “light years ahead of their time”. They hallucinate “scientific” beliefs out of nothing:

        Turing describes a number of objections that people are inclined to raise to the possibility of AI. One, which he calls the “Heads in the Sand” objection, he characterises thus: “The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.” It is, he says, particularly prevalent among intellectual people, since they most value the power of thinking. Turing notes that this position needs no refuting: “Consolation would be more appropriate.”

        (from here)

        • Scott Alexander says:

          It makes perfect sense to me that there could be three or four different genes that affect intelligence, each contributing 25% of the hereditary portion of the variance.

          This is how most of the traits we understood in 1990 (mostly simple genetic diseases) worked. Given that this was plausible, matched our existing evidence, and lots of people said they believed it, why do you think it wasn’t a real paradigm?

          Or on a more philosophical level – yes, false paradigms will eventually be found to have errors in them, and some of those errors will even be found to be contradictions and nonsense. But saying they didn’t really exist as paradigms seems to be dismissing that “false paradigm” is a useful concept.

          • Michael Watts says:

            false paradigms will eventually be found to have errors in them, and some of those errors will even be found to be contradictions and nonsense. But saying they didn’t really exist as paradigms seems to be dismissing that “false paradigm” is a useful concept.

            That’s a fair criticism.

            I tried to choose some examples that I felt were related to what was going on with “genes”. Someone describing something as “light years ahead of its time” has confused “light years”, which measure distance, with “years”, which measure time, because of the unfair similarity of the words. I think this type of confusion over “gene” was the norm in science journalism and lay understanding and explains most of what you talk about in the post. I don’t think this sort of situation involves a false paradigm, just people confusing two good ones.

            Someone saying “race does not exist” definitely is working from a false paradigm. If there were no paradigm, the least we could expect to see is an occasional rephrase like “there’s no such thing as race”. However, I don’t see that paradigm as being part of the field of genetics, and as genetics progresses I wouldn’t describe it as having overcome its own diseased roots. The “race does not exist” paradigm exists outside of genetics and is simply irrelevant to it except to the extent that believers attempt to run political interference.

            People independently concluding that machines cannot think because, in their opinion, that would be bad, aren’t working from a paradigm either. But where a desire exists, people will try to deliver on it, and many of those people will be totally incoherent, and I think that also explains a lot of “we will find the gene for X” publicity.

            I say there was no paradigm that allowed saying “the gene for intelligence is on chromosome 6” because regardless of how you want to use the word “gene”, it has always been clear that this couldn’t possibly be true. For example, Down’s Syndrome is unrelated to chromosome 6, but has a dramatic effect on intelligence. Many, many abnormalities have a dramatic effect on intelligence.

            Food for thought: Turner’s Syndrome is genetically straightforward. According to the paradigm of your choice, where is the gene, or genes, that code for it?

          • Eponymous says:

            It makes perfect sense to me that there could be three or four different genes that affect intelligence, each contributing 25% of the hereditary portion of the variance.

            Given that twin studies showed high heritability, and the absence of Mendelian inheritance patterns for IQ among siblings, there clearly were more than just 3 or 4.

            Now you probably couldn’t distinguish between 40, 400, and 4k+ back in the 90s based on this line of reasoning.

    • BlindKungFuMaster says:

      “The original meaning is the abstract unit of inheritance — genetics was discovered long before DNA was.”

      But it wasn’t just the abstract unit, it was known to be a discrete unit. At least that drops out of Mendel’s laws. And that rules out just retroactively defining a polygenic cause as “what we meant when we said gene for X in 1990”.

      • Michael Watts says:

        Why? People investigating genetics in plants have always recognized that some inheritance appears to be discrete with low resolution and some appears to be “blending”. Breeding a red-flowered plant with a white-flowered plant to get a pink-flowered plant shows you that the plant’s flower color is (1) inherited in a manner that (2) appears to be continuous.

        Any trait that appears in more than 3 discrete variations cannot possibly be described by the classical Mendelian model (in a diploid species). But nobody concluded from this that “height is not a genetic phenomenon, because people come in more than three different heights”.

        Gregor Mendel’s data is so clean that it is generally believed to be falsified.

        • John Schilling says:

          Why? People investigating genetics in plants have always recognized that some inheritance appears to be discrete with low resolution and some appears to be “blending”.

            Some inheritance being discrete indicates that there exists a discrete unit of inheritance. That, plus Occam’s razor, strongly suggests that “blending” is the result of multiple discrete units of inheritance contributing to the same observed trait.

          Regardless, I believe that BlindKungFuMaster is correct that the word “gene” was introduced to define the discrete unit of inheritance. And that from about the time of Mendel, inheritance was widely believed to consist only of discrete units.

          • Michael Watts says:

            But it [the word “gene”] wasn’t just the abstract unit, it was known to be a discrete unit. At least that drops out of Mendel’s laws. And that rules out just retroactively defining a polygenic cause as “what we meant when we said gene for X in 1990”.

            Why? People investigating genetics in plants have always recognized that some inheritance appears to be discrete with low resolution and some appears to be “blending”.

            Some inheritance being discrete indicates that there exists a discrete unit of inheritance. That, plus Occam’s razor, strongly suggests that “blending” is the result of multiple discrete units of inheritance contributing to the same observed trait.

            I fully agree. But multiple discrete units of inheritance contributing to the same phenotypically continuous trait is a longer way of describing a “polygenic cause”. So… why would that paradigm, which is much much older than 1990, rule out saying that a polygenic cause is “what we meant when we were talking about genes in 1990”?

          • caethan says:

            > Some inheritance being discrete, indicates that there exists a discrete unit of inheritance. That plus Occam’s razor, strongly suggests that “blending” is the result of multiple discrete units of inheritance contributing to the same observed trait.

            There’s also the obvious problem with the “blending” model of inheritance, that was seen as far back as Darwin: if the offspring of red-flowered and white-flowered parents are always pink-flowered, after enough random breeding, all the plants should have pink flowers. Why is the variance retained? And the solution is: particulate inheritance.

          • Michael Watts says:

            if the offspring of red-flowered and white-flowered parents are always pink-flowered, after enough random breeding, all the plants should have pink flowers. Why is the variance retained?

            This is not correct; assuming that color is a pure real number varying between white and red and that the child’s color is the arithmetic mean of its two parents’, random breeding will always retain a distribution of color from white to red. (The shape of the distribution is given by the rows of Pascal’s Triangle, if you assume that in the beginning there were equal numbers of whites and reds.) To wipe out the reds, you have to force them to breed with the whites (and not the other reds), and vice versa.
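
            A quick way to play with the blending model as described (random mating, each child the exact average of two randomly chosen parents, starting from half white and half red; my toy setup, not from the comment):

            import random
            import statistics

            pop = [0.0] * 500 + [1.0] * 500        # 0.0 = white, 1.0 = red
            for gen in range(1, 11):
                pop = [(random.choice(pop) + random.choice(pop)) / 2 for _ in pop]
                print(f"gen {gen}: min={min(pop):.3f} max={max(pop):.3f} "
                      f"sd={statistics.stdev(pop):.3f}")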

        • Deiseach says:

          Gregor Mendel’s data is so clean that it is generally believed to be falsified.

          AHEM. Sticking up for the honour of my co-religionist, there has been discussion of this on here before. The “he must have fudged the figures” conclusion came from one person, while competing discussions give plausible reasons why this need not be true.

          A large part of the problem is that we don’t have Mendel’s original papers, as the new abbot cleared out and burned documents after his death for various reasons. So what we have are the ‘cleaned-up’ results presented for lectures and essays, not the rough work leading up to them.

          The “ZOMG Mendel totes cheated!!111” narrative is sexy clickbait and has been picked up as such (I’m not ascribing such base motives to the original Ronald Fisher) but it does seem to be becoming one of those unexamined “everyone knows” ‘factoids’ that gets thrown around.

  41. Clarence says:

    Biological determinism, also known as genetic determinism[1] or genetic reductionism,[2] is the belief that human behaviour is controlled by an individual’s genes or some component of their physiology, generally at the expense of the role of the environment, whether in embryonic development or in learning.[3] It has been associated with movements in science and society including eugenics, scientific racism, the supposed heritability of IQ, the supposed biological basis for gender roles, and the sociobiology debate.

    https://en.wikipedia.org/wiki/Biological_determinism

    • Scott Alexander says:

      Yes, and research has shown it to be largely accurate. Obviously genes and the environment both play a role, but the recent trend in research has been to find genes are much more important than people thought possible even a few decades ago, with traditional “nurture” assumptions much less relevant than people would have considered plausible.

      You can find a very high-level overview of the IQ case here, but also look into twin studies, GCTAs, adoption studies, etc. In terms of books, The Nurture Assumption will be especially helpful, but any other book on genetics or intelligence written in the past 10-20 years should at least give you the basics. See also this post here.

      • Michael Watts says:

        the recent trend in research has been to find genes are much more important than people thought possible even a few decades ago, with traditional “nurture” assumptions much less relevant than people would have considered plausible

        I find this phrasing odd. If you go farther back in time, the trend is to believe that genes are more important, not less important. The period a few decades ago is the wacky outlier. How did people feel about breeding vs. raising 45 decades ago?

      • Faza (TCM) says:

        Conjecture: Looking through the overview of studies on the Wikipedia page, it seems to me that the heritable component of intelligence could be viewed as “intelligence potential”, while environmental factors as “intelligence filters”.

        What do I mean by that? Presumably, the capacity for “intelligent operations” (however defined; say: solving puzzles on IQ tests) is dependent on the actual structure of the person’s nervous system – a biological feature and hence subject to genetic heritability. I believe this isn’t a controversial assumption.

        In practice, this would imply that “maximum achievable intelligence” is in some part determined by actual inherited genes. In ideal conditions (that is: with no adverse environmental factors affecting gene expression) it might be wholly dependent on genetics. I say “might”, because we don’t really know whether “maximum achievable intelligence” is a coherent concept – that is: whether the capacity for intelligence has an intrinsic limit. Nevertheless, I believe there’s no reason to believe it doesn’t.

        Having the potential to reach some level of intelligence (a particular IQ score, for example) does not, however, imply that any particular individual will reach that level. It is my understanding that development of intelligence is driven, to an extent, by environmental needs. An individual that has the capacity for high intelligence may never achieve the fullness of their potential, unless they are challenged with appropriately difficult problems. If that seems dubious, it should at least be obvious that a person with exceptional aptitude for the kind of problems one finds in computer science will never realize their potential in a society that hasn’t gotten around to inventing computers, yet.

        The ability of an individual’s environment to stimulate the development of their intelligence to its maximum, biologically-dependent level – or rather, lack thereof – is what I call an “intelligence filter”.

        I find this hypothesis attractive, because it explains a number of features observed in the data:

        1. Environmental effects tend to be more pronounced in lower-income families (Turkheimer 2003; Harden 2007), which is consistent with the environment restricting the development of intelligence below its biological potential. The opposite is true of higher-income families which is to be expected if there are few environmental constraints towards developing intelligence to its biological maximum.

        2. Environmental effects tend to be more pronounced in childhood than in adulthood (Tucker-Drob 2011), which is consistent with the idea that stimulation coming from the individual’s environment drives development of intelligence. For bonus points, environmental effects are much more prominent in lower SES families than higher SES ones, which again is consistent with the idea of a “filter”.

        3. The hypothesis is also consistent with the findings of Capron and Duyme (1999), with difficult early circumstances having a markedly adverse effect on orphans’ intelligence development and post-adoption gains being proportional to the wealth of the adopting family, which is what we’d expect if we assume that:
        a. intelligence has a positive correlation with wealth (which I believe is statistically the case),
        b. more affluent adoptive parents will have more resources to devote to improving the lot of the adopted child (this is kinda obvious).

        I realize that the idea is likely not in any way novel, but I’m surprised nobody else has brought it up.

        • BlindKungFuMaster says:

          What would be a competing hypothesis? This just seems to be the default way of looking at it.

          • Faza (TCM) says:

            One competing hypothesis, which started this whole conversation, is that there is little or no biological determinism to intelligence and hence no “maximum achievable intelligence” on an individual level.

            There are two ways I can read this: either there is some theoretical maximum that is essentially the same for all humans, and differences can be explained by what environmental filters are in place, or there exist environmental “intelligence amplifiers” available to some people, but not others, that explain observed correlations between SES and IQ.

            I don’t think I need to elaborate on why this is an attractive position, politically.

            That’s the “nurture” perspective. A vulgar reading of the “nature” perspective is that most or all eventually observed variance (in adults) is purely down to genetics and therefore environmental considerations aren’t going to affect outcomes in the long run.

            This is one way to read Rushton and Jensen (2010) and is also attractive politically – to a different political mindset.

            I wouldn’t accuse Scott of holding such a position, but it’s an impression that an inattentive reader might get from reading his writings on the subject.

          • Eponymous says:

            A vulgar reading of the “nature” perspective is that most or all eventually observed variance (in adults) is purely down to genetics and therefore environmental considerations aren’t going to affect outcomes in the long run.

            This is one way to read Rushton and Jensen (2010) and is also attractive politically – to a different political mindset.

            It’s also attractive scientifically.

          • Faza (TCM) says:

            I’m not sure it’s that attractive scientifically, to be honest.

            In fact, I’d have to see some really strong evidence that environment isn’t an important factor, because of the aforementioned filtering effects.

            Simply having the potential to be really smart doesn’t mean you will be, if superior intelligence isn’t particularly necessary or helpful in your day-to-day life.

          • Eponymous says:

            In fact, I’d have to see some really strong evidence that environment isn’t an important factor

            ~0 effect of shared environment from twin studies doesn’t do it for you?

            Simply having the potential to be really smart doesn’t mean you will be, if superior intelligence isn’t particularly necessary or helpful in your day-to-day life.

            For the genes to evolve, they must have been reliably expressed in the ancestral environment, and resulted in differential reproductive consequences.

            Whatever genes made Einstein able to do what he did came from somewhere. Sure, they wouldn’t let a neolithic hunter-gatherer invent GR; but they would help him do something useful, or they wouldn’t be there.

          • Deiseach says:

            I wouldn’t accuse Scott of holding such a position, but it’s an impression that an inattentive reader might get from reading his writings on the subject.

            Oh God, we’re going to have to tip-toe around The Forbidden Subject. Yes, political viewpoints M, Z and F will all have favourite theories they flog to death. But that does not mean we can’t discuss them. For instance, there is an interplay between nature and nurture where – take me as an example – I’m a shortarse from a family of shortarses. Put me in an optimum environment, feed me properly from babyhood, give me plenty of healthy exercise, and I am not ever going to make the women’s basketball team because there’s a genetically determined limit to my height*. HOWEVER, suppose I had the genes (however many of them and in whatever complex interplay) to be six foot tall, but I grow up in a Dickensian hell-hole and develop rickets as a toddler, then naturally I will not grow up to be as tall as I could optimally have been. So good outcomes and bad outcomes are a dance between environment and heredity. Am I short because of my environment, or because my genes put a limit on it? How can we tell, unless we do some testing and investigating here?

            And yeah, I see the trap gaping for us if I move one tiny step into discussion of That Which Must Not Be Breathed With Reference To Intelligence and I’m not going to walk that direction. But it’s a crying shame we can’t have a discussion without the skeleton at the feast of warnings about “You do realise what some people – not us, we’re smart and sophisticated and aware – but some other readers, inattentive ones from outside, might make of this topic? Nice blog you’ve got, be a shame if a howling Twitter mob denounced you, better not risk it”.

            *Also I would fail to do so because I have no hand-eye co-ordination, can never remember the rules of games, and couldn’t hit a cow’s arse with a banjo, but that’s beside the point here.

          • Faza (TCM) says:

            I don’t disagree.

            I must point out, however, that even a poorly developed visual organ (I was half-tempted to say “half-an-eye”) is better than no eye at all.

            As it happens, probably the best candidate for “intelligence-selecting mechanism” is “other people”. Just looking at the Wikipedia entry for feral children it appears that children deprived of human contact in the earliest stages of development are likely to suffer at least some intellectual impairment for the rest of their lives. Outcomes are markedly better for those children who have at least learned to speak before going feral.

            Re twin studies: unless one of the twins was subject to a sufficiently strong filter, we wouldn’t really see any difference.

            EDIT:

            @Deiseach – The reason I wouldn’t accuse Scott of holding this view is that it isn’t nuanced enough. I frankly don’t care a whit about people’s feels, but I try not to go around calling smart people dumb.

          • Faza (TCM) says:

            Note: I am a dunce and this was meant to reply to the post here.

            My, admittedly limited, understanding is that the number of gene variants – alleles – in existing populations is large. Nevertheless, most members of those populations do not appear to be measurably more or less “fit”, in an evolutionary sense, and the presence or absence of any particular allele cannot necessarily be used to predict fitness.

            Given expected rates of mutation, these variants must have accumulated over a long time, but do not, for the most part, affect biological fitness. In certain cases, however, homozygous presence of a recessive allele may result in the manifestation of a genetic disease, for example.

            The second part – suppression of pathological expression – is a bit more subtle, but the gist of it is this: organisms have evolved numerous ways of ensuring correct development (chaperone proteins come to mind) in less than optimal conditions. As long as these mechanisms continue to operate, the organism propagates normally.

            If, for whatever reason, the bounds of tolerance are exceeded, things start to go wrong. A rather sad example was the phocomelia outbreak in the 1960s as a result of Thalidomide use during pregnancy. Phocomelia is a genetically heritable recessive disease and was known prior to the introduction of Thalidomide, but manifested rarely. The introduction of Thalidomide during pre-natal development apparently disabled whatever mechanisms normally suppressed expression of this mutation, leading to tens of thousands of cases – meaning that there existed (and still exists) a considerable population that carries the phocomelia mutation without themselves being sufferers.

            I hope that makes matters clearer.

        • quanta413 says:

          One thing I’m uncertain about. How does this idea square with the fact that adopted children’s IQ starts closer to their adoptive parents’ but then moves strongly towards their biological parents’ IQ as they become young adults? How are the environments’ effects unwinding?

          That implies that many positive environmental effects tend to be temporary. It’s one of the most severe interventions we can morally try, and it does something, but it falls way short of what you’d expect if the variance due to environmental effects were strong.

          There’s definitely a filter, but I think it’s more negative than positive. That also squares with the idea that environmental variance is higher at lower SES. I expect the difference there is that more bad things happen to people of lower SES, not that rich people’s enrichment attempts do much. If someone hits their noggin hard enough…

          • Faza (TCM) says:

            My take is that environment can at best facilitate and at worst stunt intellectual development, so the matter boils down to “how smart a person can this particular environment support?”

            Higher SES offers greater opportunities for developing one’s intellectual potential, but probably does not affect the magnitude of this potential, which is probably hereditary.

            I’d expect children adopted from low SES families into high SES families to have a good chance of achieving more than they would have, had they remained in a low SES environment. However, they’ll probably achieve less than children raised in the same environment who also have the genetic advantage.

          • Eponymous says:

            Everyone agrees certain things can lower IQ. Physical trauma most obviously (as you mention).

            But most people who advance this sort of argument seem to have something else in mind; something like, you need a sufficiently intellectually enriched environment, which most middle class and above households achieve, but which (some) households at the bottom don’t.

            Setting aside some mathematical issues with these papers, there is a logical problem: relative to our society, the ancestral environment was highly deprived. How could these variants evolve if they weren’t reliably expressed in the EEA?

            Given what we now know about inheritance, I think we should be skeptical of claims that rely on large nonlinearities, showing up in very convenient locations, without any good theoretical basis.

          • Deiseach says:

            How are the environments’ effects unwinding?

            Not so much the environmental effect unwinding as the genetic component becoming expressed. You take sixteen babies, feed them properly, attend to their needs, be affectionate and disciplined parents/caretakers, raise them all in as similar an environment as you can manage, and for the early years they’ll develop similarly. But as they get older, the kids who are (in my earlier example) genetically disposed to be short and the kids who are going to be tall will start to show differences and no amount of “eat your greens and drink your milk” will make the five foot eight kids catch up to the six foot four kids. What you will do is make sure the five foot eight kids get to be five foot eight, and not five foot three with rickets and scoliosis, and that’s just as important and valuable as cranking out sixteen seven foot tall kids for the basketball team.

          • Faza (TCM) says:

            How could these variants evolve if they weren’t reliably expressed in the EEA?

            Interestingly enough, what little I know of genetics seems to imply that most selection operates on combinations of ancient mutations that are kept safely repressed if everything goes well – giving offspring that looks and acts much like its parents.

            Then you have stuff like genetic assimilation which is also quite fun.

            Probably the best approach, with regards to the topic of Scott’s post, is that real-life genetics, as far as individual expression is concerned, is incredibly messy.

          • Eponymous says:

            most selection operates on combinations of ancient mutations that are kept safely repressed if everything goes well

            I don’t understand this part. Could you expand?

          • Faza (TCM) says:

            Posted reply in the wrong thread, it is here.

  42. romeostevens says:

    ‘If a problem seems hard the problem formulation is probably wrong’ -Chapman

    ‘one must understand information processing systems at three distinct, complementary levels of analysis. This idea is known in cognitive science as Marr’s Tri-Level Hypothesis:

    computational level: what does the system do (e.g.: what problems does it solve or overcome) and similarly, why does it do these things
    algorithmic/representational level: how does the system do what it does, specifically, what representations does it use and what processes does it employ to build and manipulate the representations
    implementational/physical level: how is the system physically realised (in the case of biological vision, what neural structures and neuronal activities implement the visual system)’ -Wiki summary of David Marr’s work.

    So, we encounter a problem ontologized at the algorithmic level, and our solution is to go down to the implementation level. What about going up to the computational level? My inside view on depression is that when I was depressed it was because I didn’t understand what it was I was trying to do. Lacking such knowledge I did things that were of the same sort as trying to train a dog to be vegan. That these all went horribly wrong constantly in many varied ways (why is the dog acting so crazy?) eventually paralyzed me into inaction. Playing whack-a-mole with all the specific ways the dog is acting crazy is something I don’t think would have ever worked.

    • Scott Alexander says:

      I’m using genetics because it’s the best example we have right now, and so far it’s a counterexample to the Chapman quote. It turned out genetics was just hard. Once you brute-forced the hard thing, you could do genetics just fine.

      • Freddie deBoer says:

        But doesn’t your own history here equally suggest that paradigm shifts are inevitable and that someday we will likely view “doing genetics just fine” as constituting a failure to really understand it?

        • Michael Pershan says:

          In science there is a persistent tendency to take our most currently useful models and build a metaphysics out of them. But it’s not obvious that our most useful models also give us our best explanation of the world.

          It might be that the world really is exactly like the precise thing that our best tools can measure and not like the thing the previous generation’s best tools could measure…but we need to make that case.

        • quanta413 says:

          Science is like ogres.

    • whateverthisistupd says:

      I keep harping on this, but brains are the product of evolution, not designed computer systems. They aren’t being optimized for anything. They are the result of billions of years of randomness bumping against the limits of natural selection, and probably some more abstract high-level principles, like systems-flow theory or symmetry-breaking tendencies in long-term thermodynamic processes. Nothing like the intelligently designed computer. I keep seeing this bias from programmers again and again, who assume that the body in general and the brain in particular is fundamentally reducible to a computer program, when it evolved and functions under totally different principles, environments, purposes, and boundary conditions.

  43. johnswentworth says:

    I’ll throw out a claim, and if anyone wants to operationalize it and offer a bet, let me know.

    Intelligence isn’t massively polycausal. Neither is height. Massive polycausality basically never happens except when there’s symmetry involved (e.g. air pressure resulting from a huge number of identical particles). 80/20 rule is a thing, and if a phenomenon looks massively polycausal, it’s because we haven’t figured out the 80 yet.

    In the case of both intelligence and height, I see two likely ways for this to play out (though there may of course be others). First, the hereditary component of intelligence/height isn’t primarily genetic – e.g. maybe it turns out to be all about microbiota or something like that. Second, it could be that we’re missing a level of abstraction – e.g. it turns out there’s a bunch of proteins which implement some simple predictive model of the environment and then make height/intelligence investment decisions, but we haven’t managed to decode the model yet.

    I expect, fifty years from now, we’ll look back and say “obviously intelligence/height isn’t massively multicausal, we just hadn’t figured out the actual causal process yet.”

    For background on why I expect this, see The Epsilon Fallacy.
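
    If someone wants to poke at the 80/20 claim quantitatively, here is a minimal sketch of the kind of check I have in mind (the input numbers are made up; with real GWAS summary statistics you would plug in each variant’s estimated variance contribution, roughly 2pq·β²):

    ```python
    # Illustrative only: how concentrated is trait variance across variants?
    # Under a Pareto-like world the top fifth of variants carries most of it;
    # under an omnigenic-like world the curve is much flatter.
    import numpy as np

    def top_share(variance_contributions, fraction=0.2):
        """Share of total variance carried by the top `fraction` of variants."""
        v = np.sort(np.asarray(variance_contributions))[::-1]
        k = max(1, int(len(v) * fraction))
        return v[:k].sum() / v.sum()

    rng = np.random.default_rng(0)
    pareto_like = rng.pareto(1.2, size=10_000)      # hypothetical: a few variants dominate
    omnigenic_like = rng.gamma(5.0, size=10_000)    # hypothetical: thousands of comparable effects

    print("top-20% share, Pareto-like world:   ", round(top_share(pareto_like), 2))
    print("top-20% share, omnigenic-like world:", round(top_share(omnigenic_like), 2))
    ```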

    • quanta413 says:

      At a large scale, the effect of genes on traits often is a lot like the effect of the molecules of a gas on pressure and temperature. Independent additive variance type models work pretty well for a lot of cases.

      You’re made of trillions of not identical but very similar complex units. Yes, those units can be specialized. But they still have a really deep underlying similarity that the wheels on your car don’t have with its electronics for example.

      I’m trying to figure out how to operationalize a bet with you or if it’s possible. I think the number of genes contributing to height and intelligence is large. We already have enough measurements. But I think polycausal is the wrong way to think about it, so I agree with you there. I wouldn’t say the temperature of my room is polycausal because there are a lot of molecules in the room. Strictly speaking it’s kind of true, but…
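
      To make the “independent additive variance” picture concrete, here’s a toy sketch (my own, not drawn from any real GWAS): a trait built from thousands of tiny, independent SNP effects comes out smooth and roughly Gaussian at the population level, much like pressure emerging from many molecules.

      ```python
      # Toy additive model: trait = sum of many small SNP effects + noise.
      import numpy as np

      rng = np.random.default_rng(1)
      n_people, n_snps = 5_000, 2_000

      freqs = rng.uniform(0.05, 0.95, size=n_snps)                   # allele frequencies
      genotypes = rng.binomial(2, freqs, size=(n_people, n_snps))    # 0/1/2 dosages per person
      effects = rng.normal(0, 1 / np.sqrt(n_snps), size=n_snps)      # thousands of tiny effects

      genetic_value = genotypes @ effects
      noise = rng.normal(0, genetic_value.std(), size=n_people)      # roughly 50% "heritability"
      trait = genetic_value + noise

      print("variance explained by the additive genetic score:",
            round(float(np.corrcoef(genetic_value, trait)[0, 1] ** 2), 2))
      ```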

      • BlindKungFuMaster says:

        “But I think polycausal is the wrong way to think about it, so I agree with you there. I wouldn’t say the temperature of my room is polycausal because there are a lot of molecules in the room. Strictly speaking it’s kind of true, but…”

        The causal SNPs are much more independent than the molecules in the room. If you speed up a single molecule it’ll collide and dissipate the additional energy among the other molecules. It doesn’t matter which one you chose. If you change a single SNP, it’ll have an effect on the phenotype that is specific to only that SNP. Looking at SNPs as causes makes sense, because that’s where you can intervene.

        • quanta413 says:

          Sure, but independence of contributions normally makes things simpler to handle, not less simple. My statement would be true for a noninteracting ideal gas too.

          If I put a pile of sand grains on a scale and each grain has a slightly different mass, the weight of the pile is polycausal from the point of view that each grain of sand makes an independent contribution to weight (note to others: obviously human traits are mechanistically more complicated than this). And each grain of sand has a unique weight. But the weight of the pile isn’t polycausal in an interesting way.

          Genetics is obviously much more interesting than that, but I don’t think the idea of polycausality is really gaining us anything here. At a really low level of description, biology is a very large physical system with a particularly interesting set of initial conditions. At a high level of description, we find that genes often add to traits as if they were unique grains of sand adding weight to a scale. At some middle levels, it’s more polycausal in an interesting way. But those middle levels are deeply structured by the physical level below and the emergent rules of natural selection above. What is the idea of polycausality really gaining us here?

          • BlindKungFuMaster says:

            A causal model allows us to predict and to intervene. That is exactly what finding causal SNPs is about. That is what we gain. Saying that a causal relationship that can be used to predict and intervene isn’t interesting enough to be thought of as causal is just absurd.

          • quanta413 says:

            I’m not saying the idea of causality isn’t useful.

            I’m saying the idea that polycausality is special isn’t very useful.

            The weight on the scale in my example is indeed polycausal. And if for some reason it was hard to remove all the grains of sand at once, knowing the weight of each individual one helps you decide which ones to remove first to change the weight by some amount.

            But my point is that polycausality doesn’t make genetics special in and of itself in any way. Lots of physics is polycausal in a way that doesn’t make physics hard. No big computers or fancy new theories were required by the idea of polycausality in the case of the scale. Admittedly, a fancy new theory was needed for the case of ideal gases.

      • johnswentworth says:

        Kind of tangential, but… I just did a little back-of-the-envelope math, and found that 20% of air molecules carry about 52% of the energy. Even if we buy that genes are like air molecules, Scott’s example of “about twenty thousand” genes involved in height or intelligence (i.e. ~2/3 of human genes) seems excessive – that should be well into diminishing-returns territory, as far as predictive power goes.
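
        A sketch of how one might redo that envelope math (assuming a plain 3D monatomic ideal gas; the exact share depends on the assumptions):

        ```python
        # Sample molecular kinetic energies for an ideal gas and ask what share of
        # the total energy the most energetic 20% of molecules carry.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 1_000_000

        # Each velocity component is Gaussian, so kinetic energy (in units of kT)
        # follows a Gamma(3/2) distribution.
        velocities = rng.normal(size=(n, 3))
        energies = 0.5 * (velocities ** 2).sum(axis=1)

        top20 = np.sort(energies)[int(0.8 * n):]
        print(f"share of energy carried by the top 20% of molecules: {top20.sum() / energies.sum():.0%}")
        ```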

        For massive polycausality to be a useful idea, even independent additivity may be too weak – we may need something which actively forces the inputs to have similar impact size. John Schilling suggests one such mechanism below, although I’m not convinced it’s the dominant effect behind intelligence or height variation in practice.

        • quanta413 says:

          Currently, there’s no selection going on for increased intelligence in humans. At least not on average. But if there was in the past (and it looks like it, although exactly when that stopped is hard to determine), John is right that any genes that caused a large enough change in intelligence to significantly affect fitness would either quickly vanish from the population or quickly sweep the population.

          That doesn’t force variants to all have the same effect size, but it definitely narrows the range of effect sizes we should expect to see for common variants.

    • BlindKungFuMaster says:

      We can predict height with an accuracy of something like 2 inches directly from genetic data. That’s basically all the additive heritability encoded in a polygenic score. Where does that leave your claim?
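
      For intuition on how “predicts to within ~2 inches” maps onto variance explained, here’s a rough sketch (my ballpark numbers, not from any specific paper): the typical prediction error is just the trait’s standard deviation scaled by sqrt(1 − R²).

      ```python
      # Rough relation between variance explained by a polygenic score and
      # the typical height prediction error, using ballpark numbers.
      import math

      sd_height_in = 2.8   # assumed SD of adult height within one sex, in inches
      for r2 in (0.10, 0.25, 0.40, 0.60):
          residual_sd = sd_height_in * math.sqrt(1 - r2)
          print(f"R^2 = {r2:.2f} -> typical prediction error ~ {residual_sd:.1f} inches")
      ```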

    • Anatoly says:

      I want to agree with you, but primarily I’m afraid of staking out an opinion on this because I don’t fully understand what all these “polygenic” and “multicausal” explanations are really saying. I think I’m confused. So I’ll just babble on and hope to be corrected.

      So they find a “polygenic score” that explains 10% of variability in “educational attainment”. Say there’s a gene that codes for a protein that makes you feel slightly healthier as you go through life – your immune system is a bit stronger and your flu bouts don’t last as long. I think that’ll correlate with educational attainment weakly, because healthier people find it easier to study. So is it correct intuition that the polygenic score picks up such a gene and hundreds of others like it, each of them influencing the target variable a tiny bit, often through confounders like “having fewer flus”? Or are such confounders (if it’s correct to call them that) weeded out by the analysis (how?).

      If this intuition is correct, I don’t understand how it makes sense to call this a resounding success of recent-years genetics as opposed to earlier failures, the way Scott is doing. Obviously having fewer flus does not underpin or explain intelligence in the sense that we might reasonably want to explain it. And the entire approach seems really prone to p-hacking (if the same analysis was done on an entirely different dataset from a different country, how confident should we be that the same polygenic score would behave as well?). If the score explained close to 100% of the variability, I think one could say “there’s no there there, all of intelligence is built up from these thousands of fewer-flus tidbits in some sort of strange Skinnerian manner”, but with 10% explained, there’s not a strong case even for that claim. What am I getting wrong here?
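
      To make my own confusion concrete, here’s a toy sketch (purely illustrative, nothing like a real GWAS pipeline) of the safeguard I’m asking about: fit the score’s weights in one sample, then measure variance explained in a completely separate sample. A p-hacked score should collapse out of sample, even if the surviving signal still runs through indirect “fewer flus” paths.

      ```python
      # Toy out-of-sample check for a polygenic-score-style predictor.
      import numpy as np

      rng = np.random.default_rng(3)
      n_train, n_test, n_snps = 10_000, 10_000, 500
      true_effects = rng.normal(0, 0.1, size=n_snps)    # many small real effects

      def make_sample(n):
          genotypes = rng.binomial(2, 0.5, size=(n, n_snps)).astype(float)
          outcome = genotypes @ true_effects + rng.normal(0, 3.0, size=n)
          return genotypes, outcome

      X_train, y_train = make_sample(n_train)
      X_test, y_test = make_sample(n_test)

      # Crude "GWAS": one marginal regression coefficient per SNP, estimated on training data.
      X_c = X_train - X_train.mean(axis=0)
      weights = (X_c * (y_train - y_train.mean())[:, None]).mean(axis=0) / X_c.var(axis=0)

      score = (X_test - X_test.mean(axis=0)) @ weights
      r2 = np.corrcoef(score, y_test)[0, 1] ** 2
      print(f"variance explained in the held-out sample: {r2:.1%}")
      ```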

    • John Schilling says:

      Intelligence isn’t massively polycausal.

      Intelligence is massively polycausal at the genetic level, and necessarily so. Intelligence comes from working brains. Brains are constructed from proteins – lots and lots of different kinds of proteins. A gene encodes one protein. Therefore it takes many many different genes to cause intelligence.

      What you probably meant to say was that variation in intelligence isn’t massively polycausal, and that doesn’t work either. It is the nature of evolution that the human genome will consist of minor variations in all the genes for all the proteins that are used to construct an optimally-intelligent(*) human brain. An individual might have a little more than the optimum of protein X, a little less of protein Y, a minor variation on protein Z – and, with thousands of genes for thousands of proteins, thousands of little variations. Each of these will make their brain slightly less optimal, mostly in the “less intelligent” direction, sometimes in the “more intelligent but at too great a cost” direction, all of which add flexibility to adapt to different environments and/or incorporate new mutations in a positive way.

      If any one of those variations resulted in even 10% of the observed variation in IQ, evolution would hammer it hard to cull out the far-from-optimal variants of the responsible gene. Single IQ points represent a non-trivial variation in reproductive fitness, and single genes are easy for evolution to tweak. Over thousands of generations, the highly variant versions of that gene would be culled, and the range would be restricted to that which produces no more IQ variation than any of a thousand other brain-protein genes. Only when it vanishes into the noise will evolution leave it (mostly) alone.

      Polycausal intelligence is an almost inevitable consequence of genetics and evolution. In hindsight, at least, an obvious one.

      * Which is not the same as maximally-intelligent, because intelligence has costs, and which may vary depending on environment.
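
      A toy simulation (heavily simplified, my own numbers) of the culling argument: track a deleterious variant’s frequency under selection plus drift. The larger its fitness cost, the faster it is driven toward the mutation-selection floor, leaving mostly small-effect variants segregating at appreciable frequencies.

      ```python
      # Toy selection-plus-drift model: bigger fitness costs get purged faster.
      import numpy as np

      rng = np.random.default_rng(4)
      pop_size, generations = 10_000, 500

      def simulate(selection_coeff, start_freq=0.2):
          freq = start_freq
          for _ in range(generations):
              # deterministic selection against the variant, then binomial drift
              freq = freq * (1 - selection_coeff) / (1 - selection_coeff * freq)
              freq = rng.binomial(2 * pop_size, freq) / (2 * pop_size)
          return freq

      for s in (0.1, 0.01, 0.001):
          mean_final = np.mean([simulate(s) for _ in range(20)])
          print(f"selection coefficient {s:<5} -> mean frequency after {generations} generations: {mean_final:.3f}")
      ```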

      • baconbits9 says:

        A gene encodes one protein.

        No, the “one gene, one protein” dogma is gone.

        • Randy M says:

          I’d be interested in elaboration. Are you talking about things like editing producing different proteins, for example in antibodies? Various genes combining proteins into one functional unit? More?

          • baconbits9 says:

            You have alternative splicing, where one sequence can be read in several different ways, producing different RNAs leading to different proteins. You can have multiple genes code for the same protein, and if you take protein folding (we are beyond the edge of my knowledge), I believe you can have a gene produce two proteins that are the ‘same’ in terms of transcription and translation but perform two different functions within the cell.

          • Randy M says:

            You can have multiple genes code for the same protein

            Presumably activated at different rates depending on position, too. With potential of mutation in one without altering the other.

            protein folding

            Generally I think protein folding is automatic, depending on the amino acid sequence (and interaction of the chemical side chains thereof) and environment (pH, H2O vs. lipid, etc.). But I wouldn’t be surprised if there was something else that altered this “purposely”, like translation rate.

          • baconbits9 says:

            Generally I think protein folding is automatic, depending on the amino acid sequence (and interaction of the chemical side chains thereof) and environment (pH, H2O vs. lipid, etc.)

            I’m out of my depth here in terms of personal knowledge/understanding and confidence in it so I am just going to link and quote something that I think disagrees with this statement.

            Protein post-translational modifications (PTMs) increase the functional diversity of the proteome by the covalent addition of functional groups or proteins, proteolytic cleavage of regulatory subunits, or degradation of entire proteins. These modifications include phosphorylation, glycosylation, ubiquitination, nitrosylation, methylation, acetylation, lipidation and proteolysis and influence almost all aspects of normal cell biology and pathogenesis. Therefore, identifying and understanding PTMs is critical in the study of cell biology and disease treatment and prevention.

            I am pretty sure that protein folding is far more complex than just amino acid chain + the most basic environmental factors. I believe that one aspect controlling protein folding even includes how many copies of the protein were made, but I fully admit that you shouldn’t take this as true on my word and hopefully someone more knowledgeable can correct me or confirm and expand.

          • Loris says:

            In prokaryotes (bacteria), a sequence of DNA encodes a sequence of RNA, which (barring various exceptions) encodes one protein sequence.

            In eukaryotes (animals, plants &c), a sequence of DNA encodes a sequence of RNA, which is then edited to remove “introns” and leave a protein-coding sequence made up of the remaining bits (“exons”). Sometimes most of the sequence which notionally encodes a gene may be introns of various sizes. The editing to remove introns is sometimes variable – alternate exons may be used. This may be dependent on the nature of the cell – those in different tissues may produce different versions of the product, or it may vary depending on the stage of the cell cycle, or other weirdness.

            At least, that’s what I assume they’re referring to.
            It’s not really an issue for John Schilling’s argument.

            For what it’s worth, I think for evolution to work it’s pretty essential that it’s not the case that all genes affect everything. I mean, they obviously do in the weak uninteresting sense. But given the nature of how genes work you’d expect a mutation in a gene to make one specific change to the very complicated biochemistry of the cell. That can obviously be in itself significant, or have knock-on effects which are significant.

            That doesn’t mean that “high level” whole-body effects like height, running speed, disease etc. can’t have multiple genes acting on them, but it probably does mean that there may be some genes with very significant contributions to an individual organism’s variation from the mean in a specific trait.

          • Randy M says:

            Very well, I’m… not surprised.
            The relevant fact about those processes is that they are, as you say, more than one gene coding for one protein – the gene for the protein’s sequence, and the one doing the methylation, etc.

            I am pretty sure that protein folding is far more complex than just amino acid chain + the most basic environmental factors.

            As far as I understand it, a protein’s function is largely determined by its shape, which is an emergent property of its chemical structure and how that structure interacts with the environment it is in. Looks like there is a lot more to “chemical structure” than the codons specify.

            one aspect controlling protein folding even includes how many copies of the protein were made

            RNA degrades rapidly, but I don’t see how this doesn’t cash out to some change to the translation or subsequent alterations.

            hopefully someone more knowledgeable can correct me or confirm and expand.

            Hey man, that’s my motto.

          • Randy M says:

            The editing to remove introns is sometimes variable – alternate exons may be used.

            I still remember hearing about this in intro to cell biology back in 1999 or thereabouts, in regards to antibodies as I mentioned – in this case, it allows for a compression of information which can make sort of generic pathogen receptors which can then back modify into specific receptors based on which pieces of the sequence are used.

            It took what was a conceptually simple mechanism–transcription–> translation–> function, and dramatically increased the amount of potential outcomes based upon a somewhat black box (at least to me, at the time) process.

            but it probably does mean that there may be some genes with very significant contributions to an individual organism’s variation from the mean in a specific trait.

            To predict the factors involved in a trait, we’d want to think about the molecular function behind the trait, like John talks about for the brain. For something like digestion of a specific chemical – the sort of thing the lactase enzyme is responsible for – it makes a great deal of sense to be monogenic (with some of the caveats discussed above).
            It’s easy to hypothesize some changes that could alter height dramatically – duplication of some genes that send out hormones involved in growth. Definitely a more complicated process than digestion, though, and one where some mutations could be catastrophically wrong, which is another thing that would lower the number of factors contributing to phenotypic diversity despite a very high number of interacting parts.

          • Loris says:

            Randy, the generation of antibodies is kind of a special case.

            (Please bear in mind I’m not an expert in immunology.)

            [antibody production] allows for a compression of information which can make sort of generic pathogen receptors which can then back modify into specific receptors based on which pieces of the sequence are used.

            This is true, but it’s something which happens through alteration of the DNA sequence during development. (A description, for those interested, on Wikipedia: V(D)J recombination)

            It took what was a conceptually simple mechanism–transcription–> translation–> function, and dramatically increased the amount of potential outcomes based upon a somewhat black box (at least to me, at the time) process.

            While you do get more variation, this is between cells – each cell only has antibodies recognising one thing. It works for the immune system but it’s quite a costly process – at least two thirds of the cells which do this generate useless products and (ideally) terminate themselves. This is not because the process went wrong as such, it’s just that it has an inherent 1/3 chance of creating a valid junction (essentially the reading frame must be conserved).
            That’s not the sort of thing which can scale to general gene regulation. If it wasn’t for the existential threat posed by infectious disease, it wouldn’t happen at all – because in itself it introduces a significant risk of cancer.

            What comes out the end is ideally a single gene which is regulated with all the hilarious complexity involved in eukaryotic regulation – that is, I don’t think its regulation is going to be anything special. (The genome editing involved definitely is!)
            So it’s kind of unrelated to transcription and translation.

            I’d go so far as to say that it definitely isn’t relevant to intelligence, except in the weak sense in which all genes (and the rest of the environment) affect intelligence.

            baconbits9 preempted my reply with a list of options. I don’t think any of the others are particularly significant:

            * multiple genes [can] code for the same protein
            So what?

            * protein folding
            Sometimes misfolded proteins can cause disease (see: prions).
            Many proteins need help folding correctly. Do mis-folded proteins count as different proteins? I’d say not.
            Many, many proteins take multiple different conformations as part of their function. That’s just part of being a protein.
            Did you know proteins are often post-translationally modified in various interesting ways?
            Anyway, the important thing is how these changes occur. Well, insomuch as it’s a controlled effect (in whatever protein is under consideration), it’s encoded in the genome. There’ll be a gene for it.

            … and some other things which can happen (I believe these to be much more relevant in prokaryotes than eukaryotes):

            * Long and short form of a protein
            A gene can have alternate start codons. May have biological significance.

            * proteins encoded in different reading-frames of a sequence
            I like this a lot, but seems to be much more common in organisms with compact genomes, like viruses and plasmids.

          • baconbits9 says:

            At Loris

            The original post stated that

            Intelligence is massively polycausal at the genetic level, and necessarily so. Intelligence comes from working brains. Brains are constructed from proteins – lots and lots of different kinds of proteins. A gene encodes one protein. Therefore it takes many many different genes to cause intelligence.

            This is the sentiment that I was responding to (though I agree that intelligence is massively poly-causal, I disagree with the logic here). The inference that something is complex and thus must be massively poly-causal does not hold. If one protein can be folded in many different functional ways, and one gene can be read multiple ways and translated into several proteins, and if the number of proteins made is controlled by a small number of genes upstream, then you can build very, very complex structures off a small number of genes.

          • Randy M says:

            If one protein can be folded in many different functional ways, and one gene can be read multiple ways and translated into several proteins, and if the number of proteins made is controlled by a small number of genes upstream, then you can build very, very complex structures off a small number of genes

            bb9, I don’t think this is true, because the reading and folding of those proteins almost certainly has to be regulated by something, likely very precisely so. Unless it is the case that the folding and reading is dependent on some external factor intrinsically… but this seems very unlikely to me (I’m probably not explaining this well, because I have a difficult time conceptualizing it).

            In any event, I enjoyed the thread.

          • baconbits9 says:

            bb9, I don’t think this is true, because the reading and folding of those proteins almost certainly has to be regulated by something, likely very precisely so. Unless it is the case that the folding and reading is dependent on some external factor intrinsically… but this seems very unlikely to me (I’m probably not explaining this well, because I have a difficult time conceptualizing it).

            Yes, but also yes (normal disclaimer: not my field, and I’m well out of any tangential field, so I could be anywhere from slightly wrong to flat-earthing here). Protein folding is kind of automatic in that it involves the attraction and repulsion of the bonds of the protein and the composition of the cellular liquid. Differences in pH levels, salt levels, etc. will force different folds; additionally, groups can be attached at points on the protein to alter how it will fold. These changes are regulated by other genes in the cell, but (in theory) a handful of them could create changes for all the potential proteins. So you could require 1 regulatory gene + 1 protein-coding gene to make 2 different folds for a single protein, but all the available protein-coding genes could pair with the regulatory gene, so that 1 regulatory gene + 10 coding genes = 20 different functional proteins (10 amino acid chains with 2 folds each).

          • Randy M says:

            What are the odds that there will be a situation where a suite of proteins will have necessary functions in one environment, and different necessary functions in a second environment, that correspond to the needs of the cell? That seems highly serendipitous and unusual.

          • Loris says:

            Okay, so I hope I can respond to all this in a comprehensible manner.

            a) I believe we know empirically (from data and analysis) that eukaryotes have a very significant hierarchy of regulators. And these act in combinations. There are also ways specific proteins can be modified which affect their state post-translationally, and these can act in ‘cascades’ to affect the state of the entire cell.

            b) Folding is what a protein does just after/during synthesis. (This is distinct from conformational changes, which can happen many times in the life of a protein, and are widely recognised). I don’t know of any examples of a single protein sequence having two different, functional folds.

            b2) I don’t believe that proteins do generally fold in multiple different, functional ways. As with all things eukaryotic, folding is going to be complicated (even in prokaryotes there are proteins which help other proteins to fold correctly). I think this is a distraction from the discussion and will disregard it henceforth.

            c) Alternate splicing means eukaryotic genes are complicated. Sure. But in a sense that doesn’t really matter. Because either i) there’s something else in the cell which regulates it – which is itself encoded in the genome on some level, or ii) it’s random. Or some combination of both.
            If what it means is that organisms have fewer, more complicated genes, then that doesn’t really make a saving on complication, it just “moves” it.

            d) I was going to have an analogy here but thought better of it. Be grateful.

            e) We can hopefully get away without a semantic argument about what a ‘gene’ is. It’s a fuzzy concept at the best of times.
            For the sake of this discussion only let’s use the term subgene to mean part of a gene which is involved in the production of at least one form of a protein in at least one tissue.
            (So an exon would be a subgene and so would a regulatory sequence.)

            f) What you’re saying is that you can get a lot of complexity from interactions between genes, and a gene can be comprised of many subgenes – so we don’t need lots of genes.
            But the thing is, that doesn’t really matter. Either way there’s quite a lot of subgenes involved. John’s point is not that we need lots of genes to describe the brain, it’s that all the subgenes that are involved can vary a little bit and be tolerated, but if they do vary too much they’ll be eliminated. The consequence of this is that the variability in lots of subgenes has a small effect on intelligence.

            g) Because of the iterative nature of the system, in a sense all the genes in an organism are involved in development of the brain. Plus the environment. This is true of all emergent traits; it’s not really a secret.

            h) In my experience what happens next in this sort of conversation is that someone asks for a specific, clear example of a ‘random’ gene affecting intelligence, then is disappointed that it is so blatant and yet irrelevant to healthy brain function.

    • fr8train_ssc says:

      Taking a systems approach, I would make the argument that height and intelligence are highly likely to be multi-factored, because height and intelligence are governed by complex systems, and are not expressed as pathologies in those systems but as performance or output of them. Your Epsilon Fallacy piece is highly reductive in assuming that intelligence or height can be likened to a singular computer program, and that the emergent property of intelligence (such as g-score) is likened to optimization in a program.

      Consider the brain to be less like a computer program, and more like the whole computer itself, and g factor score to be less like FLOPS and more like a Passmark CPU bench-score. Genetics in this case, isn’t just the source code for the program, but is instead the schematics/source code that govern:
      The different types of processors on board: CPU, Graphics Card, Sound Card, any other ASICs
      Size of various memory modules: Cache, RAM, hard-drives
      The interconnects/buses between those components as well as their protocols,
      Availability and interactivity of peripherals.

      I assume your deference to the 80/20 principle is based on exposure to problems when optimizing program performance (i.e. here’s a program that isn’t performing well; profile it, and it turns out the problem is an I/O blocking call that should be threaded or polled to be non-blocking, or an algorithm that doesn’t fully exploit locality in memory). This makes sense because we view sub-optimal performance as a pathology, similar to how many genetic diseases are single-gene disorders, like Cystic Fibrosis, Sickle Cell, or Huntington’s. In this case, the pathology of the sub-optimal software is a failure to properly activate those hardware components in the system.

      However, go back to my interpretation of genetics as also being the instructions that govern the hardware. Consider that the program’s poor performance could be caused not only by the software not being optimized, but also by a problem in the hardware itself, such as a component being under-voltage, noisy data or poor error correction on a bus, or bad disk sectors on a hard drive. Hell, it could even just be multiple software processes, from kernel drivers to applications, that are having trouble piping data to each other.

      Remember, tests for g-factor won’t just test computation in the brain. They also test:
      Speed with which visual data is perceived.
      Speed with which perceived visual data is processed into abstract symbols.
      Speed with which abstract symbols are processed into meaning/parameters for the problem.

      Consider all the research on genetic causes of Dyslexia

      Your desire to put “Skin in the game” is admirable though. What’s your confidence level/odds that a single genetic marker responsible for more than 50% of reading deficiency (Dyslexia) will be found by 2020? 2025? 2070?

      The SHA-1 of a text file containing my estimates is: 7e14e5f1edc04fb447429867e70b4be37ec79f1f
      If you propose a monetary bet, establish your odds and amount, and I’ll reveal mine.
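
      For anyone unfamiliar with the trick, here’s a minimal sketch of the commitment scheme (the file contents below are hypothetical, not my actual estimates): publish the SHA-1 of the file now, reveal the file when the bet resolves, and anyone can re-hash it to confirm nothing was edited after the fact.

      ```python
      # Hash-commitment sketch: publish the digest now, reveal the file later.
      import hashlib

      estimates = b"P(single marker explains >50% of dyslexia by 2025) = 0.03\n"  # hypothetical contents
      print(hashlib.sha1(estimates).hexdigest())
      ```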

      • johnswentworth says:

        Oh, I definitely do not think a single genetic marker responsible for the bulk of dyslexia will be found, if none is known at this point. That would have been noticed by now. I’d put well-below-10% on that one, probably somewhere in the 2-5% range.

        A few specific scenarios I’d put larger odds on, without actually googling anything about dyslexia:
        – >50% of reading deficiency explained by overall mutation load: ~15%
        – >50% of reading deficiency explained by something inherited outside the main genome (e.g. mitochondrial DNA, microbiota, etc): ~15%
        – >50% of reading deficiency explained by an environmental factor that previous studies somehow didn’t manage to control for: 5-10%

        If any of those or something like them appeal to you, I’ll spend five minutes googling and then set harder numbers.

    • whateverthisistupd says:

      I think it’s a mistake to think that evolution occurred similar to the way a computer is optimized, or even similar to the way physics manifests with power laws.

      The human organism is the result of billions of years of random interactions with no designer, bumping against the limits of natural selection and possibly following some more abstract rules like systems-flow complexity. If you imagined, based on observing single-celled organisms, that you could use an 80/20 formulation to describe what would evolve over the next few billion years… you would be wrong.

  44. albertborrow says:

    Tractable is on the word calendar today, I guess.

    I think the biggest difference between polycausal models and single-cause models is that polycausal models are only polycausal because we don’t understand the precise underlying mechanisms. A polycausal model helps us predict things better, but it still hasn’t reduced what we’re studying to its components the way a sufficiently detailed normal model would. In the case of depression and the genetic components of intelligence, the reduction might not happen for a long while, but if it ever does, I don’t think it will be because we’ve mapped the inputs and outputs of the black box – it will be because we made it transparent and tracked the path of each of these inputs and outputs, which is orders of magnitude more complicated. This probably means that experts in the relevant fields will need to be either hyper-specialized or half-computer, as well.

  45. Briefling says:

    One reason science is resistant to the idea of polycausality is: in highly polycausal domains, science isn’t very useful.

    Science is great at figuring out how simple things operate — complex things not so much. Saying “phenomenon X is highly polycausal” also implies “studying phenomenon X with scientific rigor is almost certainly not going to lead to breakthroughs; likely not very useful at all; very possibly harmful due to ensuing misapplication of knowledge gained.” It’s a tough pill for scientists to swallow.

    By the way, the inaccurate belief that science is highly effective in polycausal domains is one of the great sins of the New York Times-Harvard-Democratic Party triumvirate. Hopefully this belief will soften as polycausality becomes an explicit concept among the elite.

    (Having written all this, not sure if I really endorse my point 100%, but publishing anyway because… it’s provocative. And it lets me mention TALEB.)

    • Scott Alexander says:

      This is what I would have said aside from the example of genetics, but the recent success of genetics makes me hopeful that occasionally it’s possible to just steamroll through this by finding all of the complexity and dealing with it.

      Obviously there are cases where it’s much harder, but I still wonder if genetics can provide a blueprint that can be approximated in other areas.

      • Bugmaster says:

        I said this in a separate comment down below, but just to reiterate: genetics, and especially genetic engineering, is not nearly as simple as “let’s take all the base-pairs in the entire genome, build a giant computing cluster, let it run for a year and then we’re done”. As it turns out, genes are used to build proteins, which interact in complex networks to regulate the function of genes. As it also turns out, even relatively simple plant traits such as “drought resistance” are really hard to understand. Forget genes – what makes a plant resistant to drought in the first place?

        If you think that genetics is basically a solved problem, plus or minus some computing infrastructure, then you’re not just a little wrong — but disastrously so.

        • sclmlw says:

          That’s all I could think about when reading the article above. How exactly is genetics as simple to study as Scott suggests? (He gives it its due, but claims a bounded level of complexity, which we honestly can’t claim to have reached the boundaries for yet.)

          For example, say we wanted to understand what “gene X” does in a cell. Let’s assume it’s a well-conserved gene, with little meaningful population-level variation. Understanding what it does should just be a matter of studying that one gene, right?

          And its promoter region, and all the proteins that modify its expression,
          And histone modifications of various types,
          And microRNA modulations that control its expression,
          And splice variants. It’s not like that one gene only necessarily codes for one protein, after all,
          And the balance of all of the above in different cell types,
          Don’t forget microenvironments, small-molecule availability, and which different interacting proteins are available, and at what levels of abundance; because those can change things, too.

          Should we mention how we’ve basically ignored protein modifications, glycosylations, etc., because they’re often just too hard to study? What about the existence of redundant pathways, which often makes the knockout approach nearly meaningless, or at least misleading?

          We should at least mention that each gene appears to have multiple functions, making the question, “What does this gene do?” really hard to answer in a consistent way.

          The most striking revelation from the Human Genome Project early on was that there were only projected to be about 20,000 genes. How could all of the complexity of the human body come out of so few genes? Simple! The system appears, at our current level of analysis, to have nearly fractal levels of complexity. That’s how you get a working human from 20,000 genes. Yes, we appear to learn new things, but we’re honestly plucking the low-hanging fruit, with most of the inaccessibly complex stuff well beyond our reach.

          It’s a little premature to say, “genetics has paved the way to understanding how to handle multicausal phenomena.” Unless you’re trying to say, “we chip around the edges and hope future generations will figure out how to approach the bulk of it.”

          • Bugmaster says:

            Yeah, I’m just a humble merchant of quality genome visualization software, and even I know most of that stuff (and don’t get me started on polyploid phased SNPs). Whenever I talk to the bioinformaticians who are our customers, I always get this feeling they’re on the verge of tears 24/7. About a year ago, we had a meeting on the best way to implement pathway visualization in our viewer, and the consensus was, basically, “let’s just not do it for now, because we don’t even know what to visualize or how pathways work”. And that’s just for plants, not even humans!

    • whateverthisistupd says:

      So the DARPA brain project has been incredibly successful at treating depression through the use of stimulation guided by neural nets, which “train” on comparisons of scans of depressed and non-depressed brains and then make inferential guesses about where to stimulate.

      This seems to work too well – the problem isn’t that it can’t cure depression, it’s that it has a tendency to make people overly euphoric. Not the worst problem.

      Anyway, the point is we may be approaching areas of limits to what’s comprehensible to the human brain, and will have to rely on neural net based-engineered solutions or transhumanist upgrades to human cognitive abilities.

      • [Thing] says:

        Anyone have a reference for this? I found some links about DARPA’s deep-brain-stimulation research, but nothing that quite matches what whateverthisistupd described.

  46. zrkrlc says:

    I get that there’s been a failure of models of the form genes -> observable traits, but maybe it’s just that we’re looking at this from the wrong abstraction level? Just as with DNA, we take machine code as the theoretical complete specification of programs (sans bit errors and modulo machine memory models), but that doesn’t mean it’s the best language to work with.

    Sure, with more statistics and Big Data™ we’re going to arrive at arbitrarily accurate genetic predictions but this is still essentially brute force, and I’m not sure I’m ready to accept having scientific models of larger and larger computational complexity.

    PS Is it just me, or does this read more like an essay I’d read in a magazine and less like usual Scott?

    • Murphy says:

      Thing is: a lot of the time

      genes -> observable traits

      Works really, really well. A lot of the time, when people have dug into various disorders, they’ve been able to narrow the cause down to specific mutations and can then figure out how the mutation is affecting various pathways.

      On the other hand polygenic risk scores have a tendency to replicate poorly across different populations and can be prone to P-hacking.

    • Randy M says:

      PS Is it just me, or does this read more like an essay I’d read in a magazine and less like usual Scott?

      I’m not sure it really makes sense to look for monogenic causes to article quality. Sure, many articles written by Scott have a similar feel, but it’s certainly far from uniform, and this one may simply fall slightly below the arbitrary cut-off on the bell curve of Scottiness.
      You also have to consider environmental interactions. Are you in the same mood today as you were when you read previous articles? Really the factors are multitudinous, as we’d expect from the SSC landscape.

  47. It almost seems like academia as a whole would have to change to match this alien culture. Currently, we have independent papers thrown over the fence that measure one or a few causes. This alien society would somehow have an interface between all the academics, so that all their research would be synthesized and shared in real time. Even though we as humans are quick to adopt signals, we are slow to adopt standards, thus making large-scale integration of research impossible. We might have to wait for an early AGI that can understand science papers and integrate that knowledge for us.

    • gwern says:

      The way I would phrase it is, “everything correlates with everything”. Just on the genetic level, correlations between complex traits are pervasive. Individual-differences/personality/psychometric psychology has been aware of this for a long time: you need large (ideally longitudinal) multiple-measured datasets if you want to make progress. The American Soldier, Project Talent, SMPY, the many twin registries, the Scandinavian population registries (how many things have been debunked with a quick family study in a population registry?), and so on. The UK Biobank has led to a revolution in human genetics and related fields – not because it just has genetics data on 500k people, but because it has that and lots of other data like MRI imaging as well. What is necessary is a realization that lots of small studies is penny-wise pound-foolish: it’s better to run 1000 studies on a single n=500k than 1000 studies with n=500 each…
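
      To put rough numbers on the power gap (a back-of-the-envelope sketch using the standard Fisher-transform approximation; the 0.1%-of-variance effect size and the genome-wide threshold are illustrative assumptions, not figures from any particular study):

      from math import atanh, sqrt
      from scipy.stats import norm

      # Approximate power to detect a correlation rho at two-sided alpha:
      # the sample Fisher z is roughly Normal(atanh(rho), 1/sqrt(n-3)).
      def power(rho, n, alpha):
          z_crit = norm.ppf(1 - alpha / 2)
          shift = sqrt(n - 3) * atanh(rho)
          return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

      rho = 0.03  # a variant "explaining" ~0.1% of variance (made-up example)
      for n in (500, 500_000):
          print(n, round(power(rho, n, alpha=5e-8), 3))
      # n=500 has essentially zero power at a genome-wide threshold;
      # n=500,000 detects the same effect essentially every time.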

      • [Thing] says:

        What is necessary is a realization that lots of small studies is penny-wise pound-foolish: it’s better to run 1000 studies on a single n=500k than 1000 studies with n=500 each…

        This sounds like a potentially worthwhile lever for people who want to change the world (EAs, ambitious academics, social entrepreneurs) to pull on. I won’t personally be doing that, because I lack relevant expertise, among other reasons, but maybe someone reading this would be interested. Does anyone have an opinion on whether it would be a good idea? Compared to most of the problems filed under “civilizational inadequacy” (e.g. perverse incentives in academic research, or the fact that many important subjects are inherently extremely complex), this one seems reasonably tractable. Just round up a bunch of scientists currently doing small-n studies, persuade them that they would get better results if they pooled their grant money to study a bunch of variables at once on a large n, and create organizational infrastructure to facilitate that. That seems like it’s well within the range of “projects that sometimes actually succeed,” and depending on what kind of research it helps to advance, it could have a pretty big impact.

        (Also, you could ask Tyler Cowen for seed money.)

        • gwern says:

          ‘Oh lord, grant me consortiums with large sample sizes rather than a small sample of my own – just not yet!’

          The story behind most of the big datasets seems to be a mix of third parties who demand real results and not p-hacking (Project Talent was post-Sputnik, American Soldier was for WWII optimization) and isolated visionaries (SMPY was due to the chance coincidence of Stanley meeting a kid at just the time the Spencer Foundation had been created and was looking for something to fund; the Reproducibility Project and associated things are almost entirely funded by the Arnold Foundation because Arnold is personally convinced it’s a problem, so it’s a good thing he’s a billionaire; the twin registries, well, behavioral geneticists manage to survive somehow despite everyone hating them, maybe the key is that they’re cheap to run and mostly the occasional survey), and some origins are simply mysterious to me (how did SSGAC or UKBB get going and succeed despite the structural incentives against them? I have yet to see any public explanation but I’m sure it must be quite a story).

          persuade them that they would get better results if they pooled their grant money to study a bunch of variables at once on a large n

          They would get real results. But I’m not sure they would be better off. The incentives are all against it, and the counter-arguments are predictably tiresome: only data thugs and research parasites care about this, it’s a slur on hardworking scientists to suggest their results are irreproducible, a large dataset accessible to many researchers is ‘unethical’, it will exacerbate systemic racism and raises nuanced questions about minorities and community participation, such centralization destroys research creativity and diversity and creates groupthink, overlap of results will akshully lead to lots of false positives, the race to analyze data and get a preprint out before anyone else is destructive etc.

          • [Thing] says:

            I kinda suspected there were incentive problems. Still, the scattered success stories already suggest strategies for overcoming them: Try to route around expert gatekeepers and get the people controlling the purse strings on your side. Work with large centralized organizations like the US military or the NHS (or just “governments”). Look for cases where there are incentives beyond academic prestige for getting the right answer.

            We hear a lot about Big Business’s enthusiasm for Big Data. There are probably all sorts of obstacles to bridging the gap between their data collection and analysis efforts and what would be most useful to the scientific community (trade secrets, differing ethical standards, etc.). But presumably those could be overcome if enough money were at stake.

            As for the culture-war political objections, they are certainly a big deal in subjects like human genetics, psychology, and probably lots of other fields, but I wonder about fields that aren’t as afflicted by such controversies. Do any of them nevertheless suffer from underpowered studies? Outside of the social sciences and life sciences that are at least tangentially about humans, I can’t recall hearing anything about a replication crisis. Like, do people studying plants or microbes have this problem, and it just hasn’t been front page news? Are astronomy or geology journals littered with p-hacked junk papers? If so, trying to improve that state of affairs might be easier than in more politicized fields. If not, that raises the intriguing question of why the study of our own species is uniquely susceptible to corruption.

    • whateverthisistupd says:

      The problem I think is we don’t have a unified theory of consciousness to be able to put all this information into context. There are exciting candidates for unified theories currently, such as Connectome-Specific Harmonic Waves and Friston’s complexity reduction drive model that has been covered here before.

      It’s as if people were doing all these isolated physics experiments without the benefit of the Principia, and trying to make sense of “physics” hoping the accumulation of enough raw data would add up to something at some point.

  48. gwern says:

    I would point out that what Plomin actually said was

    If genes responsible for genetic variance in IQ scores typically account for less than one percent of IQ variance in the population, dozens of such genes might be required to make a reasonable prediction of children’s IQ. Moreover, if many genes individually account for far less than 1% of the variance, as I suspect, most of these genes will never be identified and thus genes that can be identified will fall far short of predicting all of the genetic variance of IQ.

    ‘reasonable prediction’ != ‘explain all genetic variance’, and of course, if most relevant genes are less than 1% of variance, then at 80% variance (excluding the catalogue of genetic disorders which have effects on intelligence like retardation, which must have been into the hundreds by 1995), the implication is that there are easily hundreds. (I recently scanned a book from around the same time where in the discussion transcript Plomin also implies he expects hundreds of variants but since the link makes the same point, I don’t need to dig it up.) His appreciation of this point, incidentally, is probably one of the reasons he and associated researchers pushed for some very early genetic sequencing of SMPY participants (using high-IQ samples delivers much higher power so their tiny sample wasn’t as useless as you would expect) and successfully debunked some ’90s candidate-gene hits for IQ ( https://www.gwern.net/SMPY#chorney-et-al-1998 ).

    And Fisher’s infinitesimal model was both a landmark achievement in genetics and widely used and accepted in many fields, and that model literally entails all genes having an effect of some sort. So the real question shouldn’t be ‘why was polygenicity a surprise’ but why were human medical geneticists, specifically, so surprised and allowed to get away with the claims they did…

    Even if we can’t get much out of this, I think it can be helpful just to ask which factors and sciences are oligocausal vs. massively polycausal. For example, what percent of variability in firm success are economists able to determine? Does most of the variability come from a few big things, like talented CEOs? Or does most of it come from a million tiny unmeasurable causes, like “how often Lisa in Marketing gets her reports in on time”?

    It’s a good question. Let’s go back to heritability. What is it? It’s a variance component: the total amount of differences explained by an entire category of effects. One of the things that makes talking about heritability/shared-environment/error unusual is that, well, you hardly ever see anyone talking about the variance components directly in any field other than quantitative genetics and behavioral genetics specifically. It’s part of the mathematical machinery, but the focus is always on direct effects: does this specific value of variable X increase or decrease Y, and how does it compare to Z? You pretty much never see anyone ask ‘how much of the variance could variables X-Z explain even in the limit of infinite data?’

    One of the only counterexamples I know of is “Morphometricity as a measure of the neuroanatomical signature of a trait” http://www.pnas.org/content/early/2016/09/07/1604378113.long , Sabuncu et al 2016; instead of using relatedness coefficients (we know siblings are ~50% genetically similar etc), it defines similarity by brain measurements and looks at how phenotypically similar people with similar brains are, to gauge the limits of perfect prediction from those measurements. Possibly relevant is “Phenomic selection: a low-cost and high-throughput alternative to genomic selection”, Rincent et al 2018 https://www.biorxiv.org/content/early/2018/04/16/302117

    It would be nice to see more use of this approach to try to quantify bounds on what sets of variables could deliver.
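
    For anyone who wants to play with the idea, here is a toy sketch of that ‘similarity bounds prediction’ logic (my own illustration, not the Sabuncu et al procedure verbatim; every number below is simulated): build a similarity matrix from a block of measurements, then recover the variance component with a Haseman-Elston-style regression of phenotype cross-products on pairwise similarity.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 2000, 500                 # people, measurements
    true_vc = 0.6                    # simulated fraction of variance tied to the measurements

    X = rng.standard_normal((n, p))
    X = (X - X.mean(0)) / X.std(0)   # standardize each measurement
    b = rng.standard_normal(p) * np.sqrt(true_vc / p)
    y = X @ b + rng.standard_normal(n) * np.sqrt(1 - true_vc)
    y = (y - y.mean()) / y.std()

    K = X @ X.T / p                  # pairwise similarity from the measurements
    iu = np.triu_indices(n, k=1)     # each pair once, skip the diagonal
    slope = np.polyfit(K[iu], np.outer(y, y)[iu], 1)[0]
    print(f"estimated variance component: {slope:.2f} (simulated truth {true_vc})")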

    • quanta413 says:

      Thanks for saying what I wanted to say but much better and with more evidence.

      Given that the assumption of a very large number of additive genetic effects worked well for explaining variation and evolution in a quantitative manner, it’s a mystery to me why any scientist would have thought we’d get many hits for genes explaining most of the variance in some particular trait that all humans have. I’m not really sure how many did and how much of it was the public misunderstanding. For certain crippling inherited diseases, it does make sense to look for mutations after all, especially if the disease shows signs of being due to a single recessive allele from genealogy and basically working out Punnett squares with your data.

      • Deiseach says:

        I wonder if things like Down’s Syndrome led early researchers astray? Here’s a very clear and simple example of “one extra copy = this condition” where there is a large effect on intelligence. I can see why it would be very tempting to think of intelligence as being influenced by one-some-not a whole lot of genes from models like that; if one extra copy of something means lower intelligence, then maybe correspondingly having this other gene (or an extra copy of that gene, or one copy from both parents) means higher intelligence.

        You can’t fault people for thinking in broad brush strokes at the start, because otherwise you’d look at the problem, sit down and cry at the prospect, and decide to go be a sheep shearer in Australia instead of a geneticist 🙂

    • BlindKungFuMaster says:

      Yeah, polygenicity pretty much drops out of the bell curve, so people must have known that’s the way to bet for intelligence.

      • Scott Alexander says:

        Given that there’s also a non-heritable component to intelligence, would we really have been able to tell the difference between a bell curve formed by the interaction of 10 genes (+ non-heritable component) vs. 10,000?

        • BlindKungFuMaster says:

          If these ten genes all have the same effect size, occur in 50% of the population and are statistically independent? Then probably not.

          But these are strong assumptions. It’s not that you can’t get a bell curve with few causal variants, it’s just that the bell curve is derived from a model with many small effect variants, so whenever you see that distribution, that’s probably the underlying model.

        • quanta413 says:

          Trying to think of a good way to ballpark it, but my instinct is you’d want a little more than 10 as a lower bound. Maybe that’s enough though. Somewhere between 10-100 might have been a reasonable lower bound on expectations.

          But why would anyone expect to find the lower bound? The upper bound was something like all non-junk DNA contributes.

          Genetic diseases carried on a single gene or small set of genes do not show patterns of inheritance like what we see for most traits. Those diseases are often binary traits (on or off) and recessive to boot. That’s totally unlike a continuous trait. Why would anyone have thought rare genetic diseases were a good model for most normal traits?

          Selective breeding wouldn’t work as well if we typically only had ~10 genes that could be selected on. We’d find that the variance in the trait we were selecting on shrank really fast.

          • BlindKungFuMaster says:

            With the assumptions I made in the above post, 10 variants would lead to 1024 little IQ bins. Add to that some non-hereditary variance that smooths out the edges and that would be very much a bell curve in smaller samples.

            The biggest problem with this would be that the genetically most intelligent person is just 1:1000, something like IQ 145? So in huge sample sizes we’d see a tail that is solely due to non-hereditary factors and that should be a very detectable deviation from the bell curve.
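
            Out of curiosity, here is what that looks like as a quick toy simulation (same assumptions as above: independent variants, each carried by 50% of people, equal additive effects, plus a non-heritable component; the 80% heritability figure and everything else here is made up for illustration):

            import numpy as np
            from scipy import stats

            rng = np.random.default_rng(0)
            n = 200_000          # simulated people
            h2 = 0.8             # assumed heritability of the toy trait

            def toy_trait(n_variants):
                # Equal effects, 50% frequency, independence: the genetic score
                # is just how many variants you carry, i.e. Binomial(n_variants, 0.5).
                g = rng.binomial(n_variants, 0.5, size=n).astype(float)
                g = (g - g.mean()) / g.std()
                e = rng.standard_normal(n)              # non-heritable part
                return np.sqrt(h2) * g + np.sqrt(1 - h2) * e

            for label, k in [("10 variants", 10), ("10,000 variants", 10_000)]:
                t = toy_trait(k)
                print(label,
                      "kurtosis:", round(stats.kurtosis(t), 3),
                      "1-in-10,000 score:", round(np.percentile(t, 99.99), 2))

            Both look like bell curves in a histogram; any difference should show up in exactly the places mentioned above, the kurtosis and the extreme tail.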

            The selective breeding example is a good one. We increased milk production and chicken size and stuff by so many standard deviations, those just have to be massively polygenic.

          • quanta413 says:

            The biggest problem with this would be that the genetically most intelligent person is just 1:1000, something like IQ 145? So in huge sample sizes we’d see a tail that is solely due to non-hereditary factors and that should be a very detectable deviation from the bell curve.

            Yes, this sort of question. The shape’s right, but the test you give or something similarly clever might mean we could detect the difference. I’m not sure how to work out whether 10 is enough to fool cleverer tests with what we knew in 1990, or whether it’s more than enough or not nearly enough.

            My intuition though is that by the time we get to 100, that’s definitely enough to make even very clever tests unable to tell, and probably the threshold is somewhere well before 100.

          • Deiseach says:

            We increased milk production and chicken size and stuff by so many standard deviations, those just have to be massively polygenic.

            Where we had no idea what we were doing, simply selecting “that cow is a good milker, let’s breed her” and “that sheep is the heaviest, breed him” and selecting out offspring with the traits we wanted and going on breeding and culling like that. If you’d asked why we were getting bigger animals producing more milk and meat, you’d have gotten the common sense answer “breed two big animals and you’re going to get big offspring” but not how, exactly, that happened – until people learned about genes, then they’d say “well, plainly big animals have the genes for bigness and those are inherited by their offspring”.

            I think there’s a lot of hindsight going on in the comments here – ‘well it’s obvious when you look at the data that this is what was going on’ but hindsight is only wisdom about the past, not the future.

          • quanta413 says:

            @Deiseach

            Breeding is old, but mathematical models of breeding are relatively recent. Last century basically. And they often encoded a set of assumptions in theory (and then fit that theory to data) that’s exactly the opposite of the few genes per trait model.

            So breeding would be hindsight in and of itself… except Fisher and Wright (founders of quantitative genetics; neo-darwinian synthesis) and others long ago built up quantitative genetics and it’s highly doubtful any of them would’ve thought to model intelligence as under the control of only a few genes. It’s just… really hard for me to imagine Crow or Kimura (more famous quantitative geneticists but of a generation after Fisher and Wright) being surprised by any of this. I dunno. Crow was still alive into the 2000s, so maybe I should see if he ever said anything on the topic?

            I wasn’t clear enough. My point isn’t just that we had the examples, it’s that we had a theory that fit the examples long ago. And that theory was totally opposite to how some people talked about genetics later. It’s kind of surprising that people talked about a few genes, but not super surprising, because a lot of biologists even now are pretty suspicious of mathematics and think it can’t possibly show anything useful. And because it sounded a lot cooler to say we’d totally figure this out in 5 years.

            It feels a lot like “people made fancy new tools; assumed tools were the solution and all problems would fall before them! Proceeded to completely ignore important past mathematical and experimental work that preceded the tools and would have taught them a little humility”.

      • gwern says:

        Jensen, among others, makes that argument in Bias in Mental Testing. (I don’t know if that influenced the polygenicist imperialists like Visscher who invaded human geneticists’ domain when the god of candidate-genes failed and the alternative was missing-heritability atheism…) But while he doesn’t give a specific estimate like Plomin does there, his hypothetical example of how a normal curve becomes approximated very well suggests he was thinking of scores, not hundreds or thousands, of variants.

        Hard to see how looking at the distribution of observed score measurements would let you distinguish between that and the current status quo where the distribution of effects and family-GCTA are implying ~10k relevant variants in normal people. I don’t think a DeFries-Fulker could detect a discretization, underneath all the noise and environmental influences, and if it did, it would be easy to rescue by just adding a few more hypothetical variants, and one could further argue at the time that the Spearman effect and apparent compression of cardinal-measurements like digit span are in fact evidence for only modest polygenicity. It is not as if anyone has done any of the relevant experiments like selective breeding for intelligence in humans, and the psychological selective breeding experiments like maze-bright rats can only go so far, unlike maize oil percentage or liters of milk, so are still consistent with modest polygenicity.

    • baconbits9 says:

      So the real question shouldn’t be ‘why was polygenicity a surprise’ but why were human medical geneticists, specifically, so surprised and allowed to get away with the claims they did…

      Because there are enough situations where 1 or a few genes are doing all the heavy lifting and those are also the ones that are the easiest to identify. So the people working under the simpler assumptions were moving faster for a time, but they hit their limits faster as well.

      • secondcityscientist says:

        Yeah, I was going to say this. Francis Collins, before becoming famous for running the Human Genome Project and later the National Institutes of Health, was known for his work identifying the one gene responsible for the overwhelming majority of cystic fibrosis cases.

        And of course there’s other monogenic diseases, like Tay-Sachs or sickle-cell anemia. Of course most of these are diseases, not population level variation.

  49. Shion Arita says:

    Sorry to be a bit off the main topic, but Piaget is very obviously wrong; I have many long-term memories from before 4 (as early as 2), and definitely had abstract thoughts before 10; I very distinctly remember thinking about things like the nature of space and time, among many other things. I’d like to see how they came to that conclusion, because whatever they based it on is obviously bogus. I guess it’s possible that I’m some kind of extreme rarity in this regard, but I seriously doubt that.

    • imoimo says:

      I have no intention to gaslight you, but I wonder how you’re so sure those memories are real. For years I thought I had scattered memories from my far youth, but recent research has made me question that. Studies on false memories seem to suggest they’re easy to make, just by attempting to remember something you can’t. And I myself know of times I’ve gone years mistaking a dream for a memory.

      • Shion Arita says:

        Some of them were independently confirmed by other people, like I mentioned to my mom that I liked to watch minnows in a river near where I lived as a small child, and she said she remembers me doing so as well.

        Others I remember my internal thought process, which was unusual enough to be very unlikely to be post-hoc, like I remember my dad telling me we were going to move because our current house was “getting too small for us”, and I thought that he meant that he was still growing and soon wouldn’t be able to fit under the ceiling.

        • BlindKungFuMaster says:

          “Some of them were independently confirmed by other people, like I mentioned to my mom that I liked to watch minnows in a river near where I lived as a small child, and she said she remembers me doing so as well.”

          That’s not independent confirmation, that’s just the same mechanism again.

          But I also remember some stuff from before my fourth birthday and I guess many people do. I’m pretty sure Piaget wasn’t super categorical about it.

          I remember our neighbour visiting in our old apartment (we moved when I was four). He had to stoop to get through the door. I remember him being absurdly large, brushing the ceiling with his head.

          It must have been your dad.

          That reminds me of a YA book by Diana Wynne Jones, “Fire and Hemlock”, which features a “giant” and a lot of magical stuff, but for a long time it is unclear whether this is just a little kid’s imagination interpreting more ordinary events or really something magical.

        • whateverthisistupd says:

          Same with me, independent confirmation from parents and shock that I could remember things from such a young age.

      • Frederic Mari says:

        I personally have a lot of problems remembering things before 7 or 8. Even stuff around that age is unusually shrouded in mist… while I can remember, with a precision that unnerves friends, conversations or stories they shared 10-15 years ago (I’m mid-40s).

        It could be Piaget just worked out some baseline for what’s most common in people…

    • Evan Þ says:

      I also have scattered clear memories from age 3, some of which have been independently confirmed.

      However, while I also used to think about abstract ideas before age 10, I seem to recall my thoughts had a different nature back then – I was to a great extent parroting what I had most recently read, and I don’t believe I ever got close to synthesizing different positions until I turned 12 or so. Yes, that could have other explanations, but given Piaget et al I’m strongly inclined to attribute that to my age.

      • BlindKungFuMaster says:

        When I was a teenager, I was of the opinion that I had basically gained consciousness at the age of 12. Before that I was more like a little animal, never thinking any deep thoughts, just running around playing all the time.

      • Tarpitz says:

        I don’t think independent confirmation gets you very far – it could be that the people who confirm the events happened also told you about them in the dim-but-less-dim past, and you’ve reconstructed false memories of real events and forgotten being told about them. Definitively identifying a memory as “real” is damn near impossible.

        • imoimo says:

          +1. One way to probe memories for fakeness is to ask someone who claims to remember the same event as you about smaller details. What was the physical environment like? What were you wearing? How long were you there? What was the occasion? Get each of you to commit to a description before comparing out loud.

          I’ve had “memories” that took place in entirely the wrong location. While this could be a distorted memory rather than an outright false one, it’s pretty suspicious.

          Edit: My memory is pretty garbage in general; I realize I may be typical-minding here, assuming others have as many fake memories as me.

          • Shion Arita says:

            I think there’s a big difference between some details of the memory being wrong and the memory being false.

          • imoimo says:

            @Shion Sure, but without a way to tell between them, distortion is decent Bayesian evidence of falsification.

            And how else would you spot a false memory? If false memories are often built from cues by others, I’d expect them to have the main details right but mess up other stuff that was never provided.

        • Error says:

          Even for recent memories, I don’t really trust them.

          One reason I prefer to communicate in writing is that it kills a certain type of argument, where two people remember a previous conversation differently. Even if both parties are honest, on the inside, it appears to each of them that the other is gaslighting or just plain lying. But while your memories can lie to you, your logs won’t. It’s nice to have a reference in such cases.

          • nonyorker says:

            This is why I always take notes in business meetings and email them out afterward – if people disagree later on what happened, you can always refer back. Like writing history as it happens. (Not that my work is particularly history-worthy.)

          • axiomsofdominion says:

            Ah, this is the plot of “The Truth of Fact, the Truth of Feeling”.

      • whateverthisistupd says:

        At the age of seven, I was already observing how the subjective happiness produced by certain experiences decreased with age and repetition, which made me realize happiness wasn’t rooted solely in external causes but was inherently related to novelty.

        I also thought about things like the fact that I spent a lot of my free time in imaginary cartoon worlds where heroes were saving the world, but imagined that world consisted of people imagining their world saved by fictional heroes, and it made me think that perhaps the way I was spending my free time wasn’t the best, as there seemed to be a lack of substance. I would also deconstruct, for example, why spontaneous playing with action figures lacked the narrative satisfaction of scripted stories, and what elements were necessary to create that narrative immersion. Also thoughts about the contrast between the ethics of said heroes and the harsh ethics of the Christian god, and questioning whether the morality of “God” was good, i.e., is good a thing dependent on God or is it independent? Can a human judge God?

        Thoughts about mortality and inevitability, questioning why the instinctive fear of death should exist if an afterlife was assured.

        Thoughts about how thoughts dissolved into processes: like I would try to imagine, say, a ninja turtle, and realized when I tried to focus on the image, I couldn’t actually hold it in my mind, it dissolved into processes. Concern that reality wasn’t “real” and similarly could dissolve into such processes.

        Comparison of how humanity lived in the past, how at the mercy of the forces of nature they were, and how from the perspective of a future person we were equally as helpless. Concern with cosmological phenomena that could wipe out the species, and what it would mean for the universe to exist without any conscious being to perceive it. Terror at the thought that every night I fell asleep, experienced hallucinations beyond my control, but despite a conscious effort to observe the phenomenon, could never observe or remember anything like a boundary condition.

        All before the age of ten. But again, neuroatypical.

        • whateverthisistupd says:

          A thought here: could there be an observation bias going on? It’s possible that children have the ability to think abstract thoughts, but lack the linguistic ability to communicate those thoughts to adults in an intelligible way. When I was describing the abstract thoughts I had as a kid, I wasn’t using those words. I didn’t have words for a lot of the things I was thinking. The “dissolving into processes” thing: it was something I could think, but I remember trying to express it, and failing to have adults understand me at all.

          Me at age 8: Ok so like, when George plays with toys, he just makes it up as he goes along. But the thing is, it’s not really fun, because it’s not like a show. He makes the toys do things that don’t make any sense, not like the way a show would make sense. So for me, it’s more fun if like, we have a story in mind we want to play beforehand, and we can make some stuff up, but we stick to the story, and I tried to say we should do that, but he couldn’t understand. It’s like, you see the toys, and you think it’s going to be like you have the show, but it’s not really like the show because… in the show things make sense. Like, the reason it’s interesting is because, it’s not just like anything can happen. There are rules, and it’s fun to watch, because you don’t know what’s going to happen, but they can’t break the rules so it makes sense. But he just makes up whatever he wants, which seems like it would be fun, but it’s not, cause then it’s nothing like the show, and why even bother having the toys in the first place?

          To which my mom would say something like: I think you’re a little different than most boys your age. Most people like to play without scripts or rules. You have to remember if your friends all like to do things one way, and you like to do it a different way, and you insist on that, you’re not going to have friends.

          Which was actually pretty good advice, but I never got the sense that she understood what I meant.

    • brmic says:

      Most likely. I’m not up to date on developmental psych, but as a general rule assume everything Piaget gives age ranges for happens earlier, in many cases much earlier. Combined with the possibility that any particular person may be an outlier on the normal distribution (i.e. if the average child develops long-term memory at 2.5 years, some will start to do so at 2), there is no reason to doubt your recollection.

    • JulieK says:

      My 2-year-old certainly has long-term memory, if that means she can remember things that happened yesterday or last month.

      I have two memories of things that happened before I was 2, but one of them may be a false memory, because there is a picture of the event I remember in a photo collage in my parents’ house. Or maybe it’s in-between – seeing the photo reinforced the memory and helped me not forget? My older sister was going to visit our grandparents. I wanted to go too, and I thought that if I got in the car and refused to get out, they would have to take me along. The photo shows me sitting in the car.
      The other memory is of the time I was in the hospital because of epiglottitis. (Obviously I didn’t know the name at the time, but I remember having a tube in my throat.)

      I guess it’s possible that I’m some kind of extreme rarity in this regard, but I seriously doubt that.

      It’s definitely possible that a lot of the commenters here were precocious children.

      • sandoratthezoo says:

        My three year old has long-term memory as well, though she is also starting to experience an advancing veil of amnesia over stuff more than a few months old.

    • Tarpitz says:

      What is an “abstract thought”, for purposes of this discussion?

      • roystgnr says:

        I second this question. A typical 10 year old is in 4th or 5th grade, practicing conversions between decimals and fractions. Maybe they won’t be discovering the existence of irrational numbers on their own, but they’re definitely many steps beyond “if you have two apples and I give you two more apples”…

    • Chalid says:

      More anecdata: my 3.75 year old recently described to us an event that happened 14 months ago, including some details that I’m pretty sure she hadn’t ever had any reminders of.

      • baconbits9 says:

        Anecdata the other way- I don’t have any memories before age 5 and only a few fragments from that age on for many years. I thought my 5 year old son was similar, but am less convinced now as he spontaneously remembered who gave him a gift 2 dears ago, and knew who gave him a different one 3 years ago. These aren’t frequent topics of conversation or particularly notable gifts (a train bridge for a wood train set he has 1,000 pieces to, and a train car that plays music when I leave the battery in).

        Abstract thought? I don’t have any idea when that started.

        • Joseph Greenwood says:

          It is possible that as a 4 year old you remembered things from when you were two, but as a 40 year old you no longer do.

        • Mark V Anderson says:

          he spontaneously remembered who gave him a gift 2 dears ago,

          I love this typo.

      • youzicha says:

        This is standard childhood amnesia: young children can remember events from far back, then as they grow older they start to forget things that happened when they were young.

      • Bla Bla says:

        Early childhood memories are lost during puberty.

    • SamChevre says:

      I have two clear visual memories from before I was 3.

      The one I’m most certain is a real memory is funny. My parents were garden-variety hippies, including using pot, and started attending a Plain church (and, obviously, stopped using pot) when I was 3. And my clear visual memory is of my father rolling a joint; I had the picture in my head all along, but it wasn’t until I was in my early twenties and worked with someone who rolled his own cigarettes that I had the referent of a rolling paper to identify it.

    • MasteringTheClassics says:

      My Dev. Psych professor stressed really strongly that when Piaget developed his theory he never stated age ranges for his stages – he just said they happened in order. Others have tacked on typical ages for the stages, but they’re just x% confidence intervals, not indispensable parts of the model.

      All of which is to say that even if your recollections are entirely accurate, maybe you were just an early bloomer. If you have above-average intelligence, that tends to move the stages earlier, so maybe that explains part of it.

      • CherryGarciaMillionaire says:

        when Piaget developed his theory he never stated age ranges for his stages – he just said they happened in order.

        Interesting aside, regarding that developmental order: M.I.T.’s Seymour Papert, inventor of the LOGO (Turtle) programming language and math-learning environment, worked with Piaget for five years in Switzerland, from 1959 to 1964. Papert had this to say about the individual’s evolution from concrete operational thinking (~ age 7+) to the formal operational stage of awareness (~11-15 years of age):

        What is the nature of the difference between the so-called “concrete” operations involved in conservation [e.g., where the results of counting do not depend on the order in which the relevant objects are counted, or where the volume of a liquid remains the same whether it is in a tall or a short glass] and the so-called “formal” operations involved in the combinatorial task? The names given them by Piaget and the empirical data suggest a deep and essential difference.

        [But from] a computational point of view, the most salient ingredients of the combinatorial task are related to the idea of procedure—systematicity and debugging. A successful solution consists of following some such procedure as:

        1. Separate the beads into colors

        2. Choose a color A as color 1

        3. Form all the pairs that can be formed with color 1

        4. Choose color 2

        5. Form all the pairs that can be formed with color 2

        6. Do this for each color

        7. Go back and remove the duplicates

        So what is really involved is writing and executing a program including the all-important debugging step. This observation suggests a reason for the fact that children acquire this ability so late: Contemporary culture provides relatively little opportunity for bricolage [i.e., do-it-yourself “experimentation”] with the elements of systematic procedures of this type….

        [Endnote: Of course our culture provides everyone with plenty of occasions to practice particular systematic procedures. Its poverty is in materials for thinking about and talking about procedures….]

        I see no reason to doubt that this difference could account for a gap of five years or more between the ages at which conservation of number and combinatorial abilities are acquired….

        It may well be universally true of precomputer societies that numerical knowledge would be more richly represented than programming knowledge. It is not hard to invent plausible explanations of such a cognitive-social universal. But things may be different in the computer-rich cultures of the future. If computers and programming become a part of the daily life of children, the conservation-combinatorial gap will surely close and could conceivably be reversed: Children may learn to be systematic [a purportedly distinguishing characteristic of formop, and one standard experimental “proof” that a child is at that stage of development] before they learn to be quantitative [in conop]!
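
        For what it’s worth, the pair-forming procedure Papert lists maps almost line-for-line onto a little program, complete with the “debugging” pass at the end (a minimal sketch; the bead colors are invented for illustration):

        colors = ["red", "blue", "green", "yellow"]   # made-up bead colors

        pairs = []
        for first in colors:                  # steps 2-6: take each color in turn
            for second in colors:
                if second != first:
                    pairs.append(frozenset((first, second)))

        pairs = list(dict.fromkeys(pairs))    # step 7: the debugging pass, drop duplicates

        for pair in pairs:
            print(sorted(pair))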

    • Simon_Jester says:

      Among other things, Piaget was trying to chart stages of development in typical human brains (and the cute little tykes those brains occupy).

      Note that SSC is not read by a representative sample of humans. And consider exactly how the sample “SSC readers” deviates from the general human population.

      We’re likely to run into a LOT of SSC readers who went through some of these stages of development earlier than normal. I wouldn’t be at all surprised to learn that there are plenty of SSC readers who were capable of abstract thought considerably before the age of ten; I strongly suspect that I was.

      Nor would I be surprised to learn that there were SSC readers whose brains were at least sporadically capable of forming long term memories earlier than Piaget’s arbitrary number for the age at which the average human is consistently capable of forming them.

      In itself, this doesn’t make Piaget wrong in the sense of ‘bullshit.’ It just means that Piaget was doing one of the very first scientific projects to even try to classify how fast human minds develop. And as a result his model is rather simplistic and is best taken with a certain amount of wiggle room as to specific details because human brains aren’t mass-produced on an assembly line to identical parameters.

    • Michael Handy says:

      Can confirm, my earliest confirmable memory is of the days leading up to my 3rd birthday. It’s specific enough that I was able to confirm the details with my parents 15 or so years later, having never mentioned it in the interim.

      I was having quite complex thoughts as early as 4-5… but then I was a pretty bookish child and deeply interested in science (well, space and dinosaurs, which amounts to the same thing) from preschool.

      I have a memory of myself aged 5, answering the teacher asking “what do we breathe?” with “Oxygen and Carbon Dioxide” and being shot down in favour of “Air” (which in hindsight is probably more true, but annoyed the hell out of me at the time). Meeting her years later, she recalled the incident.

    • whateverthisistupd says:

      This might sound odd, but are they actual memories, or memories of memories?

      For example, I at one point had distinct memories (they were isolated short moments) from the ages of 1 to 2.
      However, thinking carefully about it, I no longer have the original memories, but I do have the memories of memories, i.e. incidents from when I was young when I recalled those memories.

      He’s absolutely wrong about abstract thought though.

      Of course I am a bit neuroatypical (Geschwind’s, it’s sort of the opposite end of the spectrum from autism), so perhaps I am not representative.

      Are you by chance neuroatypical? (If you’ve never heard of Geschwind’s, I can describe it for you.)

  50. Ilya Shpitser says:

    I mostly endorse this.

    What you call “polycausal models” is what we call “causal models,” and is mostly what we work on, in practice.

    Looking for a single “gene for intelligence” is extra funny to me, because it’s a multi-cause, multi-effect problem. It’s looking for something that can’t exist, sort of by definition.

    “Personalized medicine” is defined in different ways by different people. I can suggest some reading; not all work in this space uniformly deserves skepticism.

    • vaaal888 says:

      Why do you say that it can’t exist by definition? What about a gene that determines the speed of neuron transmission? I can imagine different ways in which this could be determined by one factor, which could be expressed by one gene…

      • Eponymous says:

        Agreed.

      • Randy M says:

        Not with our current understanding of neurons, neurotransmitters, etc. For example, do you want your gene to target axon transmission speed, or neurotransmitter re-uptake (you know, getting the chemicals out of the synapse so it can re-fire; someone correct me if I’m typing gibberish…) or myelination?
        You could certainly imagine a gene that would screw up intelligence if altered, but that’s different from one gene determining it.

      • Ilya Shpitser says:

        Because lots of biological things may potentially “improve intelligence” (you named one possibility, but lots of others exist). And each of those may have many genetic causes.

        So it’s a problem with many causes and many effects. The issue is the label “intelligence” for something biologically very complicated.

        • Radu Floricica says:

          And also because “improving” itself is unlikely to be a single-variable thing. The brain is already too complicated to have a single maximizable trait – it would have been selected for already many generations ago. It’s more of a “minimize fuckups” type of situation, plus very small marginal improvements to the design.

      • whateverthisistupd says:

        I think this is a bias amongst people from programming backgrounds, that brains basically work on the same principles as computers. They don’t. Speed of neuron transmission would probably be a better predictor of risk for epilepsy than “intelligence.”

        • Nootropic cormorant says:

          Why not both? I could imagine there being a balancing selection for transmission speed due to this, with only some people having intelligence-optimal speed.