
What Developmental Milestones Are You Missing?

[Epistemic status: Speculative. I can’t make this post less condescending and elitist, so if you don’t like condescending elitist things, this might not be for you.]

Developmental psychology never struck my interest in the same way as a lot of other kinds of psychology. It didn’t seem to give me insight into my own life, help me understand my friends, or explain weird things about society.

I’ve changed my mind about all of that after reading David Chapman’s Developing Ethical, Social, and Cognitive Competence.

First, a refresher. Developmental psychology describes how children go from helpless infants to reasonable adults. Although a lot of it has to do with sensorimotor skills like walking and talking, the really interesting stuff is cognitive development. Children start off as very buggy reasoners incapable of all but the most superficial forms of logic but gradually go on to develop new abilities and insights that allow them to navigate adult life.

Maybe the most famous of these is “theory of mind”, the ability to view things from other people’s perspective. In a classic demonstration, researchers show little Amy a Skittles bag and ask what she thinks is inside. She guesses Skittles, but the researchers open it and reveal it’s actually pennies. Then they close it up and invite little Brayden into the room. Then they ask Amy what Brayden thinks is inside. If Amy’s three years old or younger, she’ll usually say “pennies” – she knows that pennies are inside, so why shouldn’t Brayden know too? If she’s four or older, she’ll usually say “Skittles” – she realizes on a gut level that she and Brayden are separate minds and that Brayden will have his own perspective. Sometimes the same mistake can extend to preferences and beliefs. Wikipedia gives the example of a child saying “I like Sesame Street, so Daddy must like Sesame Street too.” This is another theory of mind failure grounded in an inability to separate self and environment.

Here’s another example which tentatively sounds like a self-environment failure. Young children really don’t get foreign languages. I got a little of this teaching English in Japan, and heard more of it from other people. The really young kids treated English like a cipher; everybody started out knowing things’ real (ie Japanese) names, but Americans insisted on converting them into their own special American-person code before talking about them. Kids would ask weird things like whether American parents would make an exception and speak Japanese to their kids who were too young to have learned English yet, or whether it was a zero-tolerance policy sort of thing and the families would just not communicate until the kids went to English school. And I made fun of them, but I also remember the first time I visited Paris I heard somebody talking to their dog, and for a split second I was like “Why would you expect your dog to know French?” before my brain kicked in and I was like “Duuhhhh….”

The infamous “magical thinking” which kids display until age 7 or so also involves confused self-environment boundaries. Maybe little Amy gets mad at Brayden and shouts “I HATE HIM” to her mother. The next day, Brayden falls off a step and skins his knee. Amy intuits a cause-and-effect relationship between her hatred and Brayden’s accident and feels guilty. She doesn’t realize that her hatred is internal to herself and can’t affect the world directly. Or kids displaying animism at this age, and expecting that the TV doesn’t work because it’s angry, or the car’s not starting because it’s tired.

Psychology textbooks never discuss whether this progression in and out of developmental stages is innate or environmental, which is weird because psychology textbooks usually love that sort of thing. I always assumed it was innate, because it was on the same timeline as things like walking and talking which are definitely innate. But I’ve been moved to question that after reading some of the work comparing “primitive” cultures to primitive developmental stages.

This probably isn’t the most politically correct thing to do, but it’s notable enough that anthropologists have been thinking about it for centuries. For example, from Ethnicity, Nationality, and Religious Experience:

Primitive people are generally as intelligent as the people of any culture, including the contemporary industrial-electronic age cultures. That makes it all the more significant that their publicly shared cognitive style shows little identifiable formal operational thought. The probable explanation for this, if true, is simply that formal operational thought is more complexly difficult than earlier modes of thought and will be used in a culture in a publicly shared way only if that culture has developed techniques for training people in its use. Primitive cultures do not do that, and thus by default use easier styles of thought, ones closer in form to concrete operational and even pre-operational thought, as defined by Piaget.

Primitive cultures certainly exhibit the magical thinking typical of young children; this is the origin of a whole host of superstitions and witch-doctory. They exhibit the same animism; there are hundreds of different animistic religions worldwide. And although I didn’t talk much about theories of moral development, primitive cultures’ notion of taboo is pretty similar to Kohlberg’s conventional stage.

But if different cultures progress through developmental milestones at different rates or not at all, then these aren’t universal laws of child development but facts about what skills get learned slowly or quickly in different cultures. In this model, development is not a matter of certain innate abilities like walking “unfolding” at the right time, but about difficult mental operations that you either learn or you don’t depending on how hard the world is trying to cram them into your head.

So getting back to David Chapman: his post is mostly about Robert Kegan’s account of “stages of moral development”. I didn’t get much from Kegan himself, but I was fascinated by an idea just sort of dropped into the middle of the discussion: that less than half of the people in modern western countries had attained Kegan’s fourth stage, and only a small handful attained his fifth. This was a way of thinking about development that I’d never heard before.

On the other hand, it makes sense. Take General Semantics (please!). I remember reading through Korzybski’s giant blue book of General Semantics, full of labyrinthine diagrams and promises that if only you understood this, you would engage with the world totally differently, you’d be a new man armed with invincible cognitive weapons. And the key insight, maybe the only insight, was “the map is not the territory”, which seems utterly banal.

But this is a self-environment distinction of exactly the sort that children learn in development. It’s dividing your own representation of the world from the world itself; it’s about as clear a reference to theory of mind as you could ask for. Korzybski considered it a revelation when he discovered it; thousands of other people found it helpful and started a movement around it; I conclude that these people were missing a piece of theory-of-mind and Korzybski gave it to them. Not the whole deal, of course. Just a piece. But a piece of something big and fundamental, so abstract and difficult to teach that it required that whole nine-hundred-something page book to cram it in.

And now I’m looking for other things in the discourse that sound like developmental milestones, and there are oodles of them.

I remember reading this piece by Nathan Robinson, where he compares his own liberal principles saying that colleges shouldn’t endorse war-violence-glorifying film “American Sniper” to some conservatives arguing that colleges shouldn’t endorse homosexuality-glorifying book “Fun Home”:

It is hypocrisy for liberals to laugh at and criticize the Duke students who have objected to their summer reading book due to its sexual and homosexual themes. They didn’t seem to react similarly when students at other universities tried to get screenings of American Sniper cancelled. If you say the Duke students should open their minds and consume things they disagree with, you should say the same thing about the students who boycotted American Sniper. Otherwise, you do not really have a principled belief that people should respect and take in other opinions, you just believe they should respect and take in your own opinions. How can you think in one case the students are close-minded and sheltered, but in the other think they are open-minded and tolerant? What principled distinction is there that allows you to condemn one and praise the other, other than believing people who agree with you are better?

He proposes a bunch of potential counterarguments, then shoots each counterargument down by admitting that the other side would have a symmetrical counterargument of their own: for example, he believes that “American Sniper” is worse because it’s racist and promoting racism is genuinely dangerous to a free society, but then he admits a conservative could say that “Fun Home” is worse because in their opinion it’s homosexuality that’s genuinely dangerous to a free society. After three or four levels of this, he ends up concluding that he can’t come up with a meta-level fundamental difference, but he’s going to fight for his values anyway because they’re his. I’m not sure what I think of this conclusion, but my main response to his article is oh my gosh he gets the thing, where “the thing” is a hard-to-describe ability to understand that other people are going to go down as many levels to defend their self-consistent values as you will to defend yours. It seems silly when I’m saying it like this, and you should probably just read the article, but I’ve seen so many people who lack this basic mental operation that this immediately endeared him to me. I would argue Nathan Robinson has a piece of theory-of-mind that a lot of other people are missing.

Actually, I was kind of also thinking this with his most recent post, which complains about a Washington Post article. The Post argues that because the Democrats support gun control and protest police, they are becoming the “pro-crime party”. I’m not sure whether the Post genuinely believes the Democrats are pro-crime by inclination or are just arguing their policies will lead to more crime in a hyperbolic figurative way, but I’ve certainly seen sources further right make the “genuinely in favor of crime as a terminal value” argument. And this doesn’t seem too different from the leftist sources that say Republicans can’t really care about the lives of the unborn, they’re just “anti-woman” as a terminal value. Both proposals share this idea of not being able to understand that other people have different beliefs than you and that their actions proceed naturally from those beliefs. Instead of saying “I believe gun control would increase crime, but Democrats believe the opposite, and from their different perspective banning guns makes sense,” they say “I believe gun control would increase crime, Democrats must believe the same, and therefore their demands for gun control must come from sinister motives.”

(compare: “Brayden brought the Skittles bag with him for lunch, so he must enjoy eating pennies.” Or: “Daddy is refusing to watch Sesame Street with me, so he must be secretly watching it with someone else he likes better instead.”)

Here are some other mental operations which seem to me to rise to the level of developmental milestones:

1. Ability to distinguish “the things my brain tells me” from “reality” – maybe this is better phrased as “not immediately trusting my system 1 judgments”. This is a big part of cognitive therapy – building the understanding that just because your brain makes assessments like “I will definitely fail at this” or “I’m the worst person in the world” doesn’t mean that you have to believe them. As Ozy points out, this one can be easier for people with serious psychiatric problems who have a lot of experience with their brain’s snap assessments being really off, as opposed to everyone else who has to piece the insight together from a bunch of subtle failures.

2. Ability to model other people as having really different mind-designs from theirs; for example, the person who thinks that someone with depression is just “being lazy” or needs to “snap out of it”. This is one of the most important factors in determining whether I get along with somebody – people who don’t have this insight tend not to respect boundaries/preferences very much simply because they can’t believe they exist, and to simultaneously get angry when other people violate their supposedly-obvious-and-universal boundaries and preferences.

3. Ability to think probabilistically and tolerate uncertainty. My thoughts on this were mostly inspired by another of David Chapman’s posts, which I’m starting to think might not be a coincidence.

4. Understanding the idea of trade-offs; things like “the higher the threshold value of this medical test, the more likely we’ll catch real cases but also the more likely we’ll get false positives” or “the lower the burden of proof for people accused of crimes, the more likely we’ll get real criminals but also the more likely we’ll encourage false accusations”. When I hear people discuss these cases in real life, they’re almost never able to maintain this tension and almost always collapse it to their preferred plan having no downside.
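The medical-test version of this trade-off can be sketched in a few lines of Python. The score distributions and the thresholds below are invented purely for illustration; the point is that moving the threshold changes true positives and false positives together, and no threshold makes one number better without making the other worse.

```python
# Invented test scores for illustration: people who have the disease tend to
# score higher, but the two distributions overlap.
sick_scores    = [4, 5, 6, 6, 7, 7, 8, 8, 9, 10]
healthy_scores = [1, 2, 3, 3, 4, 4, 5, 5, 6, 7]

def classify(scores, threshold):
    """Count how many scores the test flags as positive at this threshold."""
    return sum(1 for s in scores if s >= threshold)

# Lowering the threshold catches more real cases (true positives) but also
# flags more healthy people (false positives) -- the tension never collapses.
for threshold in (8, 6, 4):
    tp = classify(sick_scores, threshold)     # real cases caught
    fp = classify(healthy_scores, threshold)  # healthy people wrongly flagged
    print(f"threshold {threshold}: {tp}/10 true positives, {fp}/10 false positives")
```

At threshold 8 you miss most real cases but flag no healthy people; at threshold 4 you catch every real case but flag six healthy people. Any preferred plan is a point on this curve, not an escape from it.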

Framed like this, both psychotherapy and LW-style rationality aim to teach people some of these extra mental operations. The reactions to both vary from enlightenment to boredom to bafflement depending on whether the listener needs the piece, already has the piece, or just plain lacks the socket that the piece is supposed to snap into.

This would have a funny corollary: the LW Sequences try to hammer in how different other minds can be from your own in order to develop the skill of thinking about artificial intelligences, but whether or not AI matters, this might be an unusually effective hack to break a certain type of person out of their egocentrism and teach them how to deal with other humans.

This raises the obvious question of whether there are any basic mental operations I still don’t have, how I would recognize them if there were, and how I would learn them once I recognized them.


565 Responses to What Developmental Milestones Are You Missing?

  1. There’s a typo at the start of the “Ability to think probabilistically” paragraph, causing a broken link and disappearing text.

  2. On the topic of thinking probabilistically, it seems to me that a lot of people have trouble thinking in terms of distributions. Like if a study finds that group X has some tendency to do something more often than group Y, people will invariably try to refute the study by bringing up some person they know from group Y who does the thing all the time. But almost always the study is just talking about some difference in the mean for group X and group Y, and the two distributions overlap significantly, so you would absolutely expect to find counterexamples (in fact, it would be weird if there weren’t counterexamples). This is basically the ecological fallacy, and it’s always really annoyed me when people commit it. I think that might just be my physics training, though – physics really drills distributions into your brain.
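A quick sketch of the point, with made-up numbers: group X's mean really is higher, yet a large fraction of individual cross-group comparisons go the other way, so individual counterexamples refute nothing.

```python
# Invented data: how often each member of two groups does some behavior per
# month. Group X does it more often *on average*.
group_x = [2, 3, 5, 5, 6, 7, 8, 9]
group_y = [1, 2, 3, 4, 5, 6, 7, 8]

def mean(xs):
    return sum(xs) / len(xs)

print(mean(group_x), mean(group_y))  # X's mean is higher (5.625 vs 4.5)...

# ...but in many individual pairings the Y member out-does the X member,
# which is exactly the "person I know from group Y" counterexample:
pairs = sum(1 for y in group_y for x in group_x if y > x)
print(pairs, "of", len(group_x) * len(group_y), "cross-pairs have the Y member higher")
```

A claim about means is compatible with a huge supply of anecdotal counterexamples; their existence is predicted by the overlap, not evidence against it.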

    • Now that I think of it, there’s a more general cognitive skill here which should have a name – I guess you could call it “the platonic switch”, or something like that. I can’t describe it that well, but it’s the difference between a set of tabulated data and a binned histogram of that data. Or the difference between describing the universe as a set of momentum+position vectors for each particle in existence and describing it all as a point in 6N-dimensional phase space. Or the difference between thinking of intelligence as something that *generates* solutions to problems, and thinking of it as something that *selects* solutions from an enumerated set of possible solutions.

      Argh, I’m finding this hard to articulate. Does anyone know what I’m talking about/have a better name for it?

      • Jeffrey Soreff says:

        I’m not sure I would consider the tabulated data/histogram an example of the same thing as the other two. I’d call both the distinction between selecting a plan from a set of possible solutions vs constructing a solution, and the distinction between a 6N-dimensional point in phase space vs N particles with 6 locations+momenta, instances of switching between a description of something as a single unit with lots of attributes/dimensions vs a description of the same thing as lots of parts with many fewer attributes per part. I don’t know a compact name for the skill. Aggregating/disaggregating degrees of freedom?

      • Harald Korneliussen says:

        Not quite sure it’s the same thing you’re getting at, but I got an “aha” experience about how (imperative) programmers think of the world vs. how mathematicians think when learning the modeling language Alloy. It looks like a programming language, but because it’s really first-order logic modeling, whenever you specify that something should “change”, you also have to specify all the things that don’t change. Otherwise, the modeler (the logic, really!) assumes that they may or may not change. Forget a constraint, and anything may happen.

        In math, we think in terms of an immutable universe. In a sense I guess I’d “known” this since I learned algebra, but I didn’t really appreciate it until I first tried making formal models.

        Afterwards, I’d always be a little annoyed at people who marketed a game (like Fluxx) with “Rules change during play”. Dude, that adds nothing! You have fixed (possibly implicit) rules for how the rules “change”, you could just think of it as one set of unchanging rules anyway.

        Is that what you’re getting at? Immutable universe vs. mutable universe?

      • Aegeus says:

        I think I know what you’re grasping at. I’d call it “Separating the label on the data from the numbers which make up the data.” Or possibly “How you frame the data determines how you interpret it.” Or “the View is not the Model.”

        It’s an important skill for programmers. For instance, computer hacking is built on the ability to recognize “This web page says it’s a shopping cart, but what it *really* is is a place you can inject SQL queries that report any piece of data you want.” Or “this text field says that it’s a username, but what it really is is a block of memory right next to the stack pointer, which could be modified if I dumped a suitably large block of shellcode in there.”

        • Brian says:

          This is interesting. I wasn’t super sure of what thepenforest was trying to get at with his examples, but the way you and Harald above formulated it made it clear for me.

          Terry Tao talks about the same thing in his lecture notes on foundations of probability theory as an instance of the map-territory distinction.

          Like, the same abstract idea of probability spaces, events, and random variables can be instantiated by various different concrete models, and we often want to change our concrete models to incorporate different facets of the abstract system in question.

          I think we can take this example even further by using different axioms or formal frameworks for modeling non-deterministic phenomena. Like, we can model probability theory using discrete event spaces or the Kolmogorov axioms, or with non-probabilistic frameworks altogether.

          More generally, we can model the same abstract structure using different concrete concepts; and those abstract structures are themselves concrete concepts modeling more abstract structures.

          I guess another way to phrase this insight might be “It’s maps all the way down.” I’m not sure we ever get to the “territory”–that seems to be the Aristotelian fallacy, that we can identify the way the world is essentially outside our minds.

      • Lambert says:

        Are you talking about the sense that a program written in C is in some way the same thing as a provably equivalent compiled LISP program? That pieces of data with different encodings are the same? If so, it kind of reminds me of extensionality, the notion that equality depends only on the external effects of an object or concept.

      • Okay, in the clear (and more importantly, sober) light of day I think I can explain what I meant a little better.

        Take the histogram example first. Let’s say that someone makes the claim that…I don’t know, that Canadians are more polite than Americans. And let’s say they go out and do surveys and the data backs them up – Canadians are, on average, more polite than Americans. Say Canadians score an average of 7/10 on some Politeness Inventory while Americans average only 6/10.

        Now, it seems to me that when some people hear the sentence “Canadians are more polite than Americans” they come to the conclusion that Canadians are fundamentally different from Americans in some way, because Canadians have the “politeness” attribute and Americans do not. But if we look at things in terms of distributions/histograms, this is simply utter nonsense. If the possible politeness scores are 1 through 10, then for each score both countries will obviously have many many people receiving that score. And there’s absolutely nothing that differentiates an American who received a certain score from a Canadian who received that same score. A Canadian who scores 9/10 is exactly as polite as an American who scores 9/10 – they’re literally identical, at least as far as politeness goes. So even if the data shows that Canadians are more polite than Americans on average, it’s still true that for every single Canadian you can find an American who’s equally polite.

        The actual difference between the countries, of course, is not any kind of metaphysical politeness attribute, but rather a difference in the relative fraction of the population that falls into each bin of the politeness distribution. Canadians would have a higher fraction who score 10/10, yes – but Americans who score 10/10 are just as polite as those Canadians, even if there are fewer of them.

        This is what I meant by the “platonic switch” – if we switch over to thinking of politeness levels as things that “already exist” in some weird platonic sense, then the only question in determining how polite a country is is figuring out how many people that country has at each politeness level – how full each bin is, in other words.

        The momentum+position example is similar – we can think of 6N-dimensional phase space as a kind of platonic realm of “possible universes”. Describing the (classical) universe as a point in phase space is saying that this particular universe, rather than any other, is instantiated as “real”. It’s just like the politeness histogram, except here all platonic bins but one will have a “population” of zero, and one particular bin will be filled.

        And the intelligence example is also similar. To make it less abstract we can talk about a concrete example that requires intelligence: writing a novel. Most people think of writing a novel as the process of generating that novel from scratch. But of course it’s trivial to enumerate the space of all possible novels – just assume that all novels are less than N characters long, where N is very large, and then list all the possible permutations of allowed characters that are less than length N. If we view things in this light, people aren’t generating novels – they’re choosing them, from the (vast) platonic space of all possible novels. Other examples of problem solving using intelligence are similar – you can always just enumerate the possible solutions to the problem as some kind of platonic list, and then reframe the process of generating solutions as the process of choosing solutions from a generalized solution space.

        So I guess all I’m really talking about is the ability to think in terms of “spaces” – Politeness Space, Phase Space, Novel Space, Person Space, Configuration Space, Thing Space, whatever. All these spaces already “exist” in some kind of weird platonic sense, and then anything “real” can be thought of as a certain point (or distribution) that’s instantiated from the space in question. I’ve found this to be a very useful mode of thought, and I think I picked it up both from my physics background and the LW sequences.

        Or that’s what I was trying to say last night, anyway. Other people have posted some interesting responses to that comment, and I honestly can’t tell if we’re all getting at the same thing or not. So I’ll ask again: is any of this resonating with people?

        • Aegeus says:

          That definitely wasn’t what I guessed the first time around. The only time I’ve heard something like that was in Neal Stephenson’s Anathem, which is basically about applied Platonism.

          But it’s not resonating with me. I can’t think of a situation where it’d be useful to me to conceptualize a problem as “the solution already exists somewhere in Platonic Space and you just need to find it.”

          The novel example came close – when I write a story, I’m very aware that there are many possible ways I could take the story, and I have to choose one to actually write it. I can imagine it as hacking away at a branching plotline, or carving away bits of Platonic Novel Space until only a single story remains.

          But that’s just not how my brain actually operates when writing a story. I’m not iterating over all the possibilities in Platonic Novel Space. I’m picking single, mostly-arbitrary points. “That character sounds like a Rick to me.” If I sat there and iterated names from Adam to Zeke, I’d never get anything done.

          • Richard Frankel says:

            > I can’t think of a situation where it’d be useful to me to conceptualize a problem as “the solution already exists somewhere in Platonic Space and you just need to find it.”

            This is basically what genetic algorithms are.
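A minimal hill-climbing search illustrates the same spirit (a stripped-down cousin of a genetic algorithm: one candidate, mutation only, no crossover or population). The target string and alphabet are invented for illustration; the framing is that "hello" already exists in the space of all five-letter strings, and the loop merely locates it.

```python
import random

TARGET = "hello"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    """How many characters match the target in place."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Replace one random character with a random letter."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

random.seed(0)  # fixed seed so the run is reproducible
current = "aaaaa"
# Accept any mutation that doesn't lower fitness; selection, not generation.
while fitness(current) < len(TARGET):
    candidate = mutate(current)
    if fitness(candidate) >= fitness(current):
        current = candidate

print(current)  # the loop can only exit once current == TARGET
```

Nothing in the loop "writes" the string; it only scores and keeps points in the pre-existing search space, which is the platonic-selection framing made literal.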

        • Outis says:

          I think you’re confused. Yes, vector spaces are a useful concept that can be applied to many things. But that has nothing to do with why people are confused by the statement “Canada is more polite than America”. It also has nothing to do with platonism or metaphysics, or at least no more so than any other mathematical concept.

          Constructing a CxP space, where C is the set of Canadians and P is the 1-10 scale of politeness, is not the insight people are missing. Everybody understands the notion of “giving each Canadian a politeness score”, even if they cannot express it in abstract terms. In fact, the abstraction is of absolutely no use in comparing the politeness of Canada and America: it’s not like the CxP space implies a total order. In fact, it’s a step backwards from P, which did!
          The operation that’s needed is the mean (or the median, or some other statistic – again, the abstract notion of the CxP space gives zero insight towards the choice of one). That lets you place Canada (and America) back into the P space, where you can tell that it is in fact more polite than America.

          And the notion that people are confused with is that of simple statistics, and what they imply – and, more importantly, what they *don’t* imply – about the original population. The notion of spaces is a complete red herring.

          • FullMeta_Rationalist says:

            Your comment is accurate mathematically. But I think thepenforests was trying to describe the phenomenon as humans experience it from the inside. My own brain definitely recognizes a qualitative difference between Platonic Niceness Trait which all Canadians share, and the actual statistical distribution.

            I believe it was in the Bravery Debates post which Scott detailed a particular experience at work. IIRC he attended a slideshow about the ADHD epidemic. At one point during the presentation, the presenter said ADHD is overdiagnosed. At another point, the presenter said ADHD is underdiagnosed.

            After the presentation had concluded, Scott inquired about the apparent contradiction. The presenter responded that ADHD is overdiagnosed because lots of Tiger Moms are begging for Ritalin because little Johnny dared to earn as low as an A-. Simultaneously, ADHD is underdiagnosed because not all the kids who actually have ADHD are getting the meds and support they need.

            If you think about ADHD statistically, i.e. as a distribution of several separate elements, the paradox disappears. Underdiagnosis maps to “too many False Negatives” (poor sensitivity); overdiagnosis maps to “too many False Positives” (poor specificity). The two words describe two different dimensions. But on the surface level, the quotidian language suggests the existence of a single dimension (called diagnosis accuracy) which can be above XOR below a certain threshold. This raises the appearance of a contradiction.

            It’s as if we measured and calculated the average hospital patient’s temperature to be “100 degrees” (which happens to be above the normal threshold of 98 degrees) and therefore decided that fevers were undertreated and not overtreated. This single temperature is then applied across the hospital’s residents as if the distribution were homogeneous and not variable, because “the thermometer says 100 degrees, which reflects the platonic attribute of all the patients. And since God created all patients equal, all patients share this one attribute (even the hypothermic patients).”

            Likewise. When someone says “Canadians are nicer people”, my system 2 is trained enough to ask “what statistical distributions does this statement allow?”, but my system 1 instantaneously crystallizes a model which predicts that every single Canadian that exists is impeccably nice (or at least nicer than I am).

            The concept of temperature is itself testament that humans think this way. How long as a kid did you believe that hot and cold were non-emergent properties of material objects, rather than an average of vibrational energies of molecules? Even now, do you think the statistical way first? Or do you just shrug and implicitly assume that “the thermometer says 50 F, so the entire object is literally 50 F.”
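The over/under-diagnosis point can be put in a few lines with invented numbers: "has ADHD" and "got diagnosed" are separate dimensions, so both error types can be large at the same time, and the apparent contradiction never arises.

```python
# Invented toy population of 1000 kids. Each entry pairs the ground truth
# with what the diagnostic system actually did.
population = (
    [("has_adhd", "diagnosed")] * 40     # true positives
    + [("has_adhd", "missed")] * 60      # false negatives -> underdiagnosis
    + [("no_adhd", "diagnosed")] * 30    # false positives -> overdiagnosis
    + [("no_adhd", "missed")] * 870      # true negatives
)

false_negatives = sum(1 for truth, label in population
                      if truth == "has_adhd" and label == "missed")
false_positives = sum(1 for truth, label in population
                      if truth == "no_adhd" and label == "diagnosed")

# Both numbers are large at once: no contradiction, just two dimensions.
print("underdiagnosis (real cases missed):", false_negatives)
print("overdiagnosis (spurious diagnoses):", false_positives)
```

Collapsing the two counts into a single "diagnosis accuracy is too high/too low" axis is exactly the move that manufactures the paradox.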

        • RCF says:

          Have you heard this story?

          Also, suppose you and I write programs to play Tic-Tac-Toe. I write a bunch of code that evaluates each possible move by calculating a score for each of them and taking the move with the highest score. Your program is just a look-up table, and an instruction to take the current position and perform the associated move. Is this getting at the distinction you’re making?
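That evaluator-vs-lookup-table distinction can be sketched concretely (with a made-up scoring rule standing in for real game evaluation, not actual Tic-Tac-Toe logic): build the table by running the evaluator once per position, and afterwards the two programs are extensionally identical.

```python
def score(position, move):
    # Invented scoring rule for illustration; a real engine would evaluate
    # the resulting game state instead.
    return -((position + move) % 5)

def best_move_computed(position):
    """The 'evaluator': scores each candidate move and picks the best."""
    return max(range(3), key=lambda m: score(position, m))

# The 'lookup table': precompute the answer for every position once...
LOOKUP = {p: best_move_computed(p) for p in range(9)}

def best_move_table(position):
    """...after which playing a move is just a table read."""
    return LOOKUP[position]

# From the outside the two programs are the same function.
assert all(best_move_computed(p) == best_move_table(p) for p in range(9))
```

This is the extensionality point from upthread in miniature: one program generates its answers, the other selects them from an enumerated space, and no external test can tell them apart.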

    • Jeffrey Soreff says:

      I’m curious as to what fraction of the population _ever_ grasps the idea of independent random events. As far as I know, the Monte Carlo fallacy is very common…

      • Marc Whipple says:

        A reasonable subset of the population is capable of understanding it.

        The ability to really accept that it applies to everything, and that sometimes things just happen even though it looks like they are connected, is vanishingly rare.

        Both of the above are nothing more than my opinion, but I’m fairly confident in my opinion.

    • Zur says:

      The trouble is that claims of this sort are so insidious. Let’s say someone makes a claim like “women are, on average, harder working than men.” This claim, if true, has all kinds of policy consequences. For instance, large companies might be advised, as a policy, to hire only women, assuming that it is nearly impossible to get information about how hard working someone is from a job interview.

      So you want to argue against this claim, because you don’t think it’s true and because its consequences if true are terrible. How are you going to do it? It’s almost impossible to refute. If you’re lucky, there are studies on the topic that you can cite (of course the other guy probably has his studies), but usually you won’t know about any studies, and all you can do is think about people you know, and how they are, and you get these sorts of anecdotal counterexamples out of frustration.

      • scav says:

        Except, policy consequences should only be as extreme as the claim itself, even if true.

        A small difference in the mean of two greatly-overlapping distributions is not a lot of information to make a decision on. If you can’t screen it off by gathering more information and deciding on that, then maybe you might play the odds (such as they are) and prefer to choose a random member of group A over one of group B.

        But in practice, the failure to collect more data to make that choice puts you in group F. It would mean you literally were unable to find ANYTHING ELSE relevant about the two individuals in question.

        • JBeshir says:

          This is true. And believing a true fact can’t by itself create terrible consequences anyway- it can’t make what’s currently happening any worse than it already is, just highlight potential better options. “What’s true is already so, believing it doesn’t make it any worse.”

          Humans are sufficiently bad at dealing with and communicating ideas, though- and sufficiently prone to opportunistic use of them- that the widespread acceptance of a true fact *could* have negative consequences.

          Possible mechanisms include them grossly overweighting the factor due to difficulty dealing with evidence/screening of evidence, them having a sizeable reduction in sense of “moral worth” assigned to the group in question, or them simply not factoring long term cumulative effects of a shift in policy into their evaluation of the consequences because they’re hard to know (e.g. the sort of dynamics discussed in

          My personal assessment is that there’s probably not anything in current discourse which we’re better off “the masses” not knowing; accurate beliefs have a lot of positive consequences for understanding so much of the time. But I think it’d be reasonable to think otherwise.

        • Jiro says:

          > Except, policy consequences should only be as extreme as the claim itself, even if true.

          No, not really. Suppose that women really do work harder than men. Then given a population of a lot of women and a lot of men, and no other information, the policy effect is that someone who wants to hire the hardest worker should hire 100% of the women before starting on any of the men, even if the percentage difference is quite small.

          If there is other information than sex that is also a useful predictor, the employer would not hire 100% of the women first, *but* the extent to which he does hire the women first would depend on the degree to which other predictors are good compared to the degree to which sex is good as a predictor–it does not depend on the *absolute* usefulness of sex as a predictor. So even then it is still possible for the policy effect to be more extreme than the claim.

          • Steven says:

            More precisely, in order for a man to be hired before a woman in your model, the man needs to be paid a lower wage. The magnitude of the wage difference required to make the employer indifferent between hiring a man and hiring a woman is increasing in the average difference in work effort between the populations (and in the effect of worker effort on the employer’s profit). In your example, a small difference implies a small wage gap.

          • Jiro says:

            That would require that the employer gives different starting wages to men and women with (otherwise) equal qualifications. It is illegal to discriminate that way (and would continue to be illegal with more realistic examples than women working harder than men).

          • Anonymous says:


            It would also, technically, be illegal for the employer to hire only women until the pool of female candidates was completely depleted, and only then start hiring men. In practice I suspect that doing that is much easier to hide than paying men and women differently, and thus likely to be much more common.

            Although if it really were the case that there were no factors that employers could use to distinguish ability other than gender, discriminating on this factor probably wouldn’t have been made illegal in the first place.

          • CatCube says:

            I don’t see why hiring all women first should be true, given the conditions stated. Just because there’s a slight tendency for women to work harder doesn’t mean that every woman will work harder than every man. Hiring *all* the women first will fill your slots with a larger proportion of lazy women while leaving harder-working men on the table.

            If the standard deviations of the populations were the same and the difference between the means large, what you proposed makes sense. For small differences between μ, it doesn’t.

          • Anonymous says:


            One of the assumptions involved was that you have no way to tell candidates apart other than whether they’re a man or a woman. Under those conditions, it really does make sense to base your selection entirely on gender, exclusively choosing women for as long as there are women to choose.

          • RCF says:


            Suppose men’s Hard Working Index is uniformly distributed from 0 to 100, and women’s from 10 to 110. Suppose you know that 20% of the women have been hired, and 0% of men have. If you assume that those 20% are taken from the top of the female population, then it does indeed make sense to hire a man. However, in the hypothetical that others are imagining, no one knows anything about any of the workers other than whether they are a woman or a man. Thus, when people hire women, they aren’t hiring from the top of the female worker pool, they are hiring randomly from the female worker pool. So if a company is considering hiring a woman, their chances of getting a hard working woman aren’t lowered by the fact that other women had already been hired.
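A quick simulation of these made-up distributions bears the point out:

```python
import random

# RCF's numbers: men's Hard Working Index uniform on [0, 100], women's
# uniform on [10, 110], and hires drawn at random from each pool.
random.seed(2)
men   = [random.uniform(0, 100)  for _ in range(100_000)]
women = [random.uniform(10, 110) for _ in range(100_000)]

avg_man   = sum(men) / len(men)       # near 50
avg_woman = sum(women) / len(women)   # near 60

# A randomly drawn woman out-works a randomly drawn man only about 60%
# of the time, yet with no other information her *expected* index is
# always the higher one, regardless of how many women were already hired.
p_woman_wins = sum(w > m for w, m in zip(women, men)) / len(men)
```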

      • Outis says:

        > For instance, large companies might be advised, as a policy, to hire only women, assuming that it is nearly impossible to get information about how hard working someone is from a job interview.

        And also assuming that how hard working someone is is absolutely the only thing that matters to job performance. In fact, you have to assume that it is absolutely the only property that could possibly affect the choice to hire someone.

        It’s like how it would be horrible if people had dogs, assuming that dog’s bodies are covered with constantly firing guns, and therefore we should not entertain the notion that dogs make good pets.

        • RCF says:

          It is not necessary that there not be any other parameters that affect job performance; it is necessary only that the employer not know the values of those parameters for their applicants.

      • Brian Donohue says:

        1. But “hard working” is only one attribute to consider.

        2. The idea that you can’t learn anything about how hard working someone is (or any other attribute of relevance) is what strikes me as insidiously smuggled into your argument.

        Yes, there are instances when we must fall back on gross categories to make decisions, but, considering the massive overlap in distributions, better information about individuals is almost always obtainable and worth the effort.

    • Banananon says:

      I’ve long felt that the ‘weirdness’ associated with quantum mechanics is primarily people having a crappy understanding/intuition of distributions. Complex amplitudes vs. real probabilities is a much smaller intuition gap than uniform or product distributions vs. arbitrary distributions.

      • This is very true. Schrödinger’s cat isn’t weird because of the probability distribution (1/2) dead + (1/2) alive. That’s just a reasonable posterior based on an experiment that might have killed the cat.

        Schrödinger’s cat is weird because (based on the mathematics of the time) one could imagine an experiment where interference happens, and the cat suddenly has a nearly 100% chance of being alive. I.e., you can do Stern-Gerlach experiments where you have (1/sqrt(2)) dead + (i/sqrt(2)) alive which combine with (1/sqrt(2)) dead – (i/sqrt(2)) alive, and result in 1 x dead.

        Now that we understand decoherence (which Schrodinger did not at that time), it turns out this can only happen to a cat sized object with exceedingly small probability.
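The amplitude arithmetic above checks out numerically (a minimal sketch, using just the two states given in the comment):

```python
# Amplitudes add, then probabilities are squared magnitudes of the sums.
from math import isclose

dead1, alive1 = 1 / 2**0.5,  1j / 2**0.5   # (1/sqrt2)|dead> + (i/sqrt2)|alive>
dead2, alive2 = 1 / 2**0.5, -1j / 2**0.5   # (1/sqrt2)|dead> - (i/sqrt2)|alive>

# Before combining, each branch alone gives 50/50:
assert isclose(abs(dead1)**2, 0.5) and isclose(abs(alive1)**2, 0.5)

# Combine and renormalise: the |alive> amplitudes cancel exactly.
dead, alive = dead1 + dead2, alive1 + alive2
norm = (abs(dead)**2 + abs(alive)**2) ** 0.5
p_dead = abs(dead / norm)**2   # interference leaves the cat certainly dead
```

No classical probability mixture behaves this way; adding two 50/50 mixtures can never yield certainty.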

        • Merkwürdiglieb says:

          If fault tolerant quantum error correction is possible, and at least theoretically it is, then decoherence is not a problem for macroscopic superpositions.

          • 27chaos says:

            What do you mean by “not a problem”? You mean that we should expect there to be a (mathematically) zero percent chance of weird macroscopic occurrences, right? As opposed to a universe in which in one Everett branch I might have a functional blue hand suddenly burst forth from my chest.

          • Luke Somers says:

            I’m pretty sure that the techniques for fault tolerant quantum computing do not apply to arbitrary objects.

        • Paul Tiplady says:

          Is that so? I thought the weird thing about the Schrodinger’s Cat thought experiment is that until the wavefunction collapses, the cat is in a superposition of “alive” and “dead” states.

          I don’t really see it as anything to do with probability distributions per se, since everyone can understand 50:50 in this example.

          The fundamental quantum weirdness is in things being literally in both states at once, without even getting into the mathematics.

    • MicaiahC says:

      Sort of to add to this: back when I taught premed physics I would ask some question like “Why do you think the light bends in this experiment?”, and some dedicated subsection of students queried would just pick one of the words in the question and sciencify it. That subsection seems to expect the answer to a question to be contained in the question.

      I have no idea whether I should treat this as just them grasping desperately for an answer, or them having some internal model which always tries to answer things in terms of themselves, as opposed to finding lower-level abstractions.

      • Loquat says:

        Possibly related: in college, I spent some time working as a peer tutor, and in the process encountered some students (in a non-mathy major) who had tremendous difficulty with the concept that formulas might need to be rearranged. They had a certain formula in their flash cards, Thing X / Thing Y = The XY Ratio, but when given a word problem that gave them Thing X and the XY Ratio and asked them to solve for Thing Y, they went searching through their flash cards for a formula that already had Thing Y on the right-hand side.

        And one existed! But it called for elements A and B, which weren’t mentioned in the word problem.

        Great confusion ensued.

        I think I managed to explain the concept of using basic algebra to move variables around so that you could solve for any one of them, but I’m not sure they absorbed it as a lesson with broader implications beyond that specific problem.

        If I had to model their mental processes, I suspect they were seeing mathematical formulas not as descriptions of the relationships between the component items, but as the way to find the 1 item which happened to be on the right-hand side when the formula was taught to them.
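A tiny numeric illustration of the rearrangement (values invented): the same relationship X / Y = R can be solved for any of the three quantities, not just the one that happened to sit alone on one side of the flash card.

```python
# The flash-card formula: X / Y = R. Algebra gives Y = X / R, the
# rearrangement the students went hunting through their cards for.
def xy_ratio(x, y):
    return x / y

x, y = 84.0, 12.0
r = xy_ratio(x, y)       # the "XY Ratio"
y_from_ratio = x / r     # rearranged: Y = X / R recovers the original Y
```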

    • JBeshir says:

      I favour using the language “Group X has a greater proportion of people who are Z than Group Y” as a counter to this sort of thing; it is a little imprecise where Z is not a binary, but it seems to parse correctly and generate a sense of its implications in line with its meaning for most people in my experience, and humans seem to draw the implication that the rest of the distribution is appropriately shifted on their own, which makes sense- it would be very weird if it wasn’t.

      Whereas “Group X is more Z than Group Y” automatically parses for me to something like “Almost all members of group X are more Z than group Y”, if not something fuzzier like “You should update your archetypical example of Group X to include Z”, and every time I hear it I need to replace that claim with the weaker one about distributions “manually”, and then wait to see if everyone else noticed they needed to do that or if I need to point out the issue.

      I think this might be because the latter invokes reasoning-by-archetype, and the former explicitly invokes reasoning-by-proportions at the least.

      I do see some people who still respond the way you describe, though, with counterexamples and other things which make little sense. The idea that they’re doing it because they lack the skill of reasoning about distributions makes a lot of sense.

      Maybe this will improve if, for one reason or another, society moves to talking explicitly in terms of distributions, presumably like how other skills have increased?

    • Brandon Berg says:

      I coined the phrase “Correlations are real-valued” (as opposed to integral) as a succinct response to this fallacy. Sadly, those who most need to hear this are least likely to understand it.

    • Neanderthal From Mordor says:

      I call this anecdotal thinking.

  3. Jeremy says:

    I’m always skeptical when people categorize and theorize about complex things in this way, that everything fits into some hierarchy, or “stage of development”, or whatnot. For example, I don’t think “understanding the idea of trade-offs” corresponds at all to a “stage of development”. Like, everyone understands the idea of a tradeoff at a young age. It’s the ability to apply it to situations (which requires practice with abstraction, working memory, a desire to understand things well) that is rarer. I think that the number of things that people “just don’t have the mind ability for” is pretty small compared to the things that people don’t care to put in the effort to understand well.

    • lunatic says:

      I think some developmental milestones – like the Amy and Brayden theory of mind – seem to more or less make sense as such. Things that are learned almost universally by young kids. However, there is a very blurry line between that and things that people think should be learned by everyone but unfortunately are not. Piaget is one of the big names in the founding of developmental psychology (correct me if I’m wrong), and he came up with a lot of very interesting experiments showing what children could and couldn’t do. On the other hand, he conceived of psychological development as a one-way train to maths town, which always struck me as a dubious universal endpoint, especially coming from an academic.

    • Paul Kinsky says:

      I think it’s worth tabooing ‘hierarchy’ for the time being: there are many mental operations, there seem to be dependencies among them (a theory of mind requires the concept of other people as distinct entities, for example), but there’s no single hierarchy to be found. Viewing members of enemy villages as inhuman is a mental operation that’s probably adaptive during periods of ethnic strife. Different Buddhist meditation disciplines could be viewed as branching hierarchies of mental operations.

      There are different paths, not all of which are exclusive, in mental operation space. (Ooh, here’s a mental operation: replacing lists with trees or graphs when advantageous)

      • Yeah, I think “dependency graph” is the best view. Not a tree, since you often need many skills with independent learning histories in order to develop a new higher-order skill.

        For instance, in my model of Kegan’s model, reaching stages 3 and 4 aren’t dependent on each other, but stage 5 is dependent on having developed both 3 and 4.

        Or as I summed it up the other day, “don’t teach the synthesis until you are sure your student knows both the thesis and the antithesis.”

    • Zur says:

      Or you could just be committing a typical mind fallacy, reasoning about other people’s abilities based on your own experience.

      • Brad (the other one) says:

        @ Zur

        That’s absurd. I could just as easily apply an “aha, typical mind fallacy” to Piaget (as mentioned in lunatic’s comment) as to Jeremy.

        What exactly should I be using, other than my own experience, to reason about people’s minds?

    • Jeffrey Soreff says:

      Some of it might also not be a matter of effort, but at how prone one is to notice relevant patterns (or applicable paradigms – even if they had previously used the paradigms in other situations):

      Does someone perceive some problem as more budget/resource-limited, or do they perceive an undesirable effect which goes up with resources used and makes it a trade-off, regardless of budget/resources available?

      Does someone look at a set of events and think of each as having idiosyncratic causes, or do they see them as viewable as samples of a random variable?

      Do they look at some ongoing process, and just view it as continuing, or do they perceive it as case where thinking about an equilibrium state is useful?

    • Yeah, my sense is that certain trade-offs are much more complex than others. A 10-year-old might get the tradeoff inherent in “I have 2 hours at the beach, and any hour I spend swimming is an hour I’m not spending building sandcastles.” Or if not a 10yo, then a 20yo at least. But very mature people fail at the complex tradeoff Scott describes, possibly because the tradeoff has a taboo-tradeoff nature, or possibly just because the greater complexity makes the necessity of the tradeoff less obvious.

      I actually just realized that I had a milestone like this 2 years ago, and wrote a blog post about it.

    • Eli says:

      > I think that the number of things that people “just don’t have the mind ability for”, is pretty small compared to the things that people don’t care to put in the effort to understand well.

      Maybe. But I think you missed a major point Scott made: the public practices of reasoning in which people are trained are a major influence on what sorts of cognitive skills they actually acquire.

    • Aegeus says:

      Yeah, this article doesn’t seem to describe “milestones” so much as “learnable mental skills.” Heck, Scott points out that some of these “milestones” don’t exist in some cultures.

      Can we come up with a better name? “Mode of thinking”? “Mental pattern”?

      • Jeremy says:

        It’s a rather surreal experience to post a comment and have almost every response not only understand my original comment but further expand on some aspect of the concept in a more eloquent way than I could have fit into the original post.

        So, yes, I absolutely agree, not only with you, but with almost everyone who responded above. Of course, I don’t think this is a typical mind fallacy on my part because it’s based on experiences with people who fail at applying one of these skills, but then I talk to them for a bit and it’s not like talking to a child who doesn’t understand some concept, the response is more like “Oh, that’s an interesting way of thinking about it, I guess it’s kind of like when I do X…”

    • PGD says:

      Very much agree with this. Imposing developmental hierarchies on mental operations like this actually makes it harder to understand others, since you are baking a linear more/less-developed judgmentalism into differences in mental style that may make sense from the perspective of the other person. A lot of seemingly ‘correct’ mental approaches don’t make second-order sense — excessive empathy and ‘understanding’ of others can be paralyzing or harmful if taken too far. Fast intuitive thinking contains predictable errors but, hey, it’s fast and economizes on effort. Etc.

      Having a toddler, I think some of this stuff is a little too negative on kids too. It’s not magical thinking to believe that the car might not be starting because it’s tired. You don’t know what an engine is but you have observed a lot about being tired, and seen plenty of complex systems not function because of fatigue. You just have a lot to learn about the world. Kids do an amazing job considering they are figuring out the entire world on the fly.

    • Michael vassar says:

      Not to mention things that people Do care to put in the effort to Not understand because doing so would threaten their social embeddedness. In particular, as far as I can tell, almost all middle class Western children of age 8 understand ‘stage 4’ or as Rao would say, are clueless. Rao’s ‘losers’ get a clue and learn not to understand that their stage 3 behavior is parasitic, because it is normal.

  4. Ton says:

    Data point: even before reading this post, I had explicitly formulated the thought that I sometimes assume others are like me, just with differences. I think I never fully picked up that development, and try to compensate by explicitly reminding myself that others are different.

    Like, I’ll assume that everyone gets a joke or reference I made, instead of realizing that I didn’t provide enough context.

    • Johannes Dahlström says:

      Yes, the typical mind fallacy. The tendency to think one’s words convey more meaning than they actually do is known as illusion of transparency.

      An interesting thing is that the golden rule, “One should treat others as one would like others to treat oneself” should be understood to be just a first-approximation heuristic – it contains the assumption that others like to be treated the same way you would!

      • Marc Whipple says:

        I once wrote an essay twisting the Categorical Imperative until it screamed in pain which was based on rephrasing it thusly:

        “Treat people as you would wish to be treated if you were them.”

  5. Michael Watts says:

    > The infamous “magical thinking” which kids display until age 7 or so also involves confused self-environment boundaries. Maybe little Amy gets mad at Brayden and shouts “I HATE HIM” to her mother. The next day, Brayden falls off a step and skins his knee. Amy intuits a cause-and-effect relationship between her hatred and Brayden’s accident and feels guilty. She doesn’t realize that her hatred is internal to herself and can’t affect the world directly.

    This is, um, generous. I would have said kids tend to display this throughout their entire lives.

    • lunatic says:

      I presume there are huge differences in how much people of different ages do it.

    • Simon says:

      I would assume that strong emotions later in life can also lead to regression into magical thinking. I’m a pretty rational person, but when late at night I stub my toe against the table I can curse the damned evil piece of shit table, before realizing it’s my fault to have put it there.

      Minsky’s idea of emotions regulating what parts of the brain are active is maybe the more straightforward view.

  6. Tracy W says:

    I was recently re-reading Suzette Elgin, and she talks about George Miller, who reportedly said that if you want to understand what someone says, you have to imagine how that person could view the world such that the thing they said is true. This strikes me as another variant on the idea in your post.
    (The Miller rule is not useful when dealing with Brits, they’re being ironic.)

  7. Lightman says:

    Re: the modeling other people’s minds as being very different from yours thing, I find some difficulties. I can imagine people having very different beliefs pretty easily, and can imagine the logical processes that would lead them to these beliefs. But I can do this only when I believe that the person in question is similar in intelligence to me (or smarter, I suppose). I have a lot of trouble getting a sense of what (apparently) less intelligent people are thinking.

    Anyone else share this sort of implicit elitism?

    • I have this problem as well. In high school, I had a difficult time understanding why some people couldn’t do their math homework, since the concepts were so obvious. I have come to accept, intellectually, that different people have different abilities to process abstract concepts, but I have a very difficult time modeling that difference.

      On a related note, when I’m talking with someone that I don’t know well, I sometimes spend a long time trying to figure out what knowledge we have in common. Knowledge is not quite the same thing as intelligence, but I still have to stop and wonder: does this person know who Pythagoras was? Nietzsche? Rawls? Thomas Aquinas?

    • Matthew says:

      When I used to play go regularly, I perceived opponents’ play as follows:

      Opponent is 1-3 stones weaker than me — She makes slightly suboptimal moves, obvious immediately.

      Opponent is 1-3 stones stronger than me — She makes slightly better moves than me at the margin, obvious in hindsight.

      Opponent is 4+ stones weaker than me — She appears to be throwing down stones at random. I cannot model what she is attempting to accomplish.

      Opponent is 4+ stones stronger than me — I am defeated by deep magic beyond my ken.

      • Walter says:

        I play go each week, about 1d amateur. I don’t have the trouble modeling 4+ stones weaker than myself. I agree with you about the 4+ stones stronger player though, losing with a huge handicap to a pro is more or less the direct opposite of a miracle.

      • Peter says:

        There’s a scene in Hikaru no Go where a very strong player has been persuaded to play two simultaneous blindfold games against weaker players. He protests at this, saying that go isn’t like chess, partly because the board is bigger, but goes through with it anyway. Against the stronger of the two, he manages to play a legal game – against the weaker one, he ends up trying to play an illegal move, placing a stone on top of another stone. He says in exasperation something like, “So-and-so’s moves at least have some logic to them, but your moves are so asinine that I’m having trouble remembering them.”

        • Diadem says:

          This agrees with my own experience in chess. During chess training we sometimes practised remembering positions: Study a position you’ve never seen before for 30 seconds, then wait a minute and try to set up the position from memory. Back when I was actively playing I could generally do this correctly.

          Place the 32 chess pieces *randomly* on a chessboard though, and there’s no way in hell I’m even going to remember half of them. My pattern matcher just runs into a brick wall, and my short-term memory alone can’t memorize 32 things in 30 seconds.

          Pattern matching is extremely important in chess. If you want to defeat someone playing a blindfold simultaneous game, play a weird opening that has atypical patterns. Even better is if you can get several people to play a nearly but not quite identical variation. Even the most skilled players will be in trouble then.

          • Nombringer says:

            Slightly off topic, but I’m currently a developing chess player (no official rating yet, but my performance so far puts me in the 1500-1700 range), and I would be interested in any blindfold/general training exercises you know.

            Chess isn’t particularly big here in New Zealand, and although I’m lucky to have exceptionally strong players at my club, there is no way for me to learn or train beyond finding resources myself. Just any study tips would be helpful, but did you ever train with any specific exercises?

            I managed to play a full blindfold game once, but I can’t practice a lot of the drills without a training partner. Do you have any that are appropriate for just one person?

            I realise that I may have put some big asks in there and I apologise if I’m coming off as a bit demanding, but I always find talking and learning from other players a huge help, despite my limited opportunities to do so.

          • patzer says:

            Nombringer: if you haven’t gotten a copy yet, Lazlo Polgar’s “Chess: 5334 Problems, Combinations, and Games” is excellent.

            But it would probably be even more worthwhile to find an online peer group.

          • 27chaos says:

            I always play The Grob. Always. My initial reasoning was that eventually I’ll have enough experience playing The Grob that I’ll have an advantage even against players who ought to beat me in a normal match. However, I don’t play often enough for that to be meaningful, so really I am just doing it because I find the idea funny.

        • Harald Korneliussen says:

          It’s not Akira who says it, but one of the relatively stronger players, who says it in internal monologue. Yuri Hidaka, a senior woman at the Go club, walks in on them and sets the situation straight. I love that series.

      • James Picone says:

        I wonder if there’s some kind of level of complexity or feature that a game needs to have to have that property. Considering some examples from games I’m at least pretty good at:
        – Tetris (I’m okay): Play much worse and play worse than mine are very obvious, usually very distinguishable, and I can usually figure out what they’re thinking, just that it takes them much longer and they miss plays I would go for. Play better than mine is mostly just much faster than mine, with occasional drops that I look at and immediately recognise as a Good Play that I wouldn’t have spotted. Play much better than mine is impenetrable, though – I’ve had the experience of watching someone much better than me play tetris, and not being able to figure out whether they’re really good or really bad for something like ten minutes. T-spin setups and apparently-poor-drops that clear out fine in several more plays are nonobvious at my level of skill until they’re activated. I suspect some of this is because being better at Tetris allows you to get away with sloppier play, so as a result if you’re not being pushed to the limit, moving up in skill can result in apparently-sloppy play that works out in the end.

        – Super Smash Bros Brawl, miscellaneous other fighting games (I’ve been at baseline Serious Business competitive levels of play; my friendship group considers me the best player of a miscellaneous and generally-unpopular game that we all play in the area): I do not have an explicit mental model of my opponent at all. If you asked me to tell you what I thought an opponent was thinking while I was playing, I could not do it. If I’m in the mental state required to make explicit long-ranged plans, either my opponent is much worse than me and I’m ‘playing down’, trying to do something silly, or trying to teach. It’s not that I can’t predict my opponent; it’s that the prediction occurs below conscious thought and is not easily brought up to the surface. Poor play is easily recognised – they’re the people who repeatedly make unsafe attacks, fall into simple traps, don’t follow up opportunities or pick strange ways to punish, etc.. Terrible play is obvious because it makes no sense, it’s just button-mashing, your opponent has no internal model other than “hit them until they fall down”. Better play is recognisable as the inverse of poor play. Extremely good play is recognisable more because of the feeling than the appearance – it’s intensely frustrating, your opponent always seems to be just out of reach, just a bit faster than you, just a bit better priority, etc., because they’re spacing and timing way better than you are. It’s not recognisable why it works at all; the better spacing and timing is invisible except for the effect of winning. I think the subsurface prediction is a product of the game requiring extremely fast decisions – you get better at fighting games by training your subconscious to make good decisions, then getting into a flow state so it can take over. Concentration is as much of an asset as raw reflexes, because you can buy reflexes with concentration. Or it might just be a quirk of how I play.

        – Starcraft 2 (I was in Platinum league in SEA IIRC, which is essentially “I had the basics down, and nothing else”): Really bad players just do things, they make no sense, and there’s no understanding why. Players that are worse than you make observable mistakes that you can understand and then exploit. Players that are better than you feel like they’ve just got more – that they’re doing the same thing you are, but they can do more of it. Players that are really good win engagements against ludicrous odds; if you’re watching closely you can see that they won the engagement because they were positioned better, or micro’d better, or whatever. They appear to be doing the same basic things you are, but much, much more of it, and much, much better. Build order differences are essentially invisible – it’s just that sometimes seventeen million marines rock up at your base when you weren’t expecting them.

        – Hearthstone (I’m bad): Really bad players don’t actually appear to know what the cards do. Worse players do “the obvious thing”, but without taking into account your response or strategy. Better players do “the obvious frustrating thing”, the thing that you were really hoping they didn’t have in hand when you did your thing. Much better players just always have the right card, and always seem to have board control, and you have no idea where it went so wrong.

        One general conclusion: Being ‘really much worse’ at a game appears to be a problem mostly of subgoal creation – the RMW player understands what they have to do (‘clear lines’, ‘hit the other guy with attacks’, ‘blow up his structures with units’, ‘hurt the hero’), but they don’t understand how to get there from here, and try to just draw the straight-line path. For some games, this is readily modellable even at low skill levels, because you genuinely can just go in a straight line – in Tetris, for example, the goal ‘clear lines’ is not readily broken down until much higher levels of play, and getting better at the game mostly consists of finding more routes that clear lines, faster. In fighting games, the goal ‘hit the opponent with attacks’ can and should be broken down into subgoals like ‘make my opponent vulnerable’ and ‘try to stay at a range my opponent finds awkward and I do not’, but the straight-line approach is easy to understand – the model is “my opponent is there, I am in a state where I can attack, attacking is how you win the game, so press an attack button”. In some other games, the straight-line path is a ludicrous choice – if you start a game of Starcraft by sending all your workers to attack the enemy base, you’re either playing the SC2 beta as Zerg, or you’re about to lose – so even RMW players don’t take it. They just take so long forming subgoals – subgoals that more experienced players don’t even realise are subgoals any more, like ‘collect resources’, ‘build more workers to collect resources faster’ and ‘figure out what my opponent is doing’ – that the experienced player watches them and has no idea what they’re thinking, because the experienced player hasn’t had to think about those subgoals for a long time.

        Another general conclusion: Sometimes being much, much better at a game than someone allows you to play more sloppily and get away with it, which muddies the waters of recognition. An MMB player in a fighting game will confuse you about when they are vulnerable, and as a result they will be able to do things that leave them vulnerable and that you would have punished a worse player for. An MMB Tetris player can have worse structure, because they can find the weird drops that appear to leave holes until three blocks later. An MMB Starcraft player needs fewer military units to defend themselves safely, and so they can play SimCity instead. An MMB Hearthstone player knows when they can ignore possibility Q, because they have counter Y in hand, or because they recognise your deck and realise that Q is a low-probability card in that deck.

        • Anon says:

          For a while I was playing Magic: the Gathering about once a week (cube drafting) in a group with a gradation of skill. There was one guy who was clearly the best, but the reason why was mostly opaque. (This is not just related to superior drafting skill — we’d often play each other with other people’s decks after the draft proper was done, and his win percentage was noticeably higher than I would have expected when he was using other people’s card pools.) I think I was second in the group in terms of technical skill of play. I’m not sure whether he was better than I am or much better than I am — it was rare that he’d play in a way that I would not have seen. I currently think most of the skill gap could be explained by something like him having more stamina against “decision fatigue”.

          In contrast, it was usually easy to tell when someone made a mistake, usually by failing to remember something.

          • James Picone says:

            Similar ‘invisible’ skills that translate across wide game domains:
            – Adaptability – the ability to learn things ‘on the fly’ in a game. For example, in fighting games, it’s fascinating how often you can show people a trap – by doing it to them – and then they’ll just fall right back into it later. I’ve had games where I’ve gotten someone with the exact same trap setup something like four times, some of them right after a previous instance. And they’d spot it, every time, right after it was too late to do something about it, because they hadn’t trained the ability to learn new habits on the fly.

            – Situational awareness, in a broad sense. Weak players in games often develop tunnel vision, either literally or metaphorically. Metaphorical tunnel vision in Magic would be looking to destroy another player’s permanent – because it’s scary or a problem or whatever – without paying enough attention to other threats or resources. Tunnel vision in a fighting game is continuing some sequence or manoeuvre without paying attention to respective resources, like super meters or escape mechanics (Say, Guilty Gear’s Burst). Tunnel vision in an RTS is prosecuting an attack without noticing that they’ve dropped a force of units on your base.

            – Equanimity. I know so many people who would be better at games if they didn’t get upset the instant things started going poorly. The worst cases basically can’t learn to play games well, because if they lose a few games in a row they’ll get angry and stop playing.

        • LCL says:

          I got good enough at a couple of online multiplayer games to be one of the best few players. At least good enough that when people outplayed me, I always understood how they’d done it. No inverse-miracle losses.

          From that level there wasn’t much secret; performance was about:

          1) Correctly (and quickly) understanding the situation as it unfolds
          2) Correctly understanding the possible space of outcomes of the current situation
          3) Acting decisively to steer towards favorable outcomes and away from unfavorable ones

          Poor players tended to misunderstand the situation entirely. Average players mostly got the situation but tended not to anticipate potential developments very well. Strong players anticipated developments but reacted too slowly; often their errors were better characterized as “missing opportunities” rather than “making mistakes.”

          The best players ruthlessly exploited any advantage and, if they did err, usually did so via overaggressive play. That’s going to be difficult for lower-level players to see; the pro is repeatedly prosecuting small-odds advantages that lower-level players don’t even recognize, eventually adding up to an overwhelming snowball effect. The type of error the pro was inclined to make (if he made one at all) was “presenting an opportunity for counterattack”, which even strong players often miss. As a result the pros could seem invincible or unfairly powerful.

          Also, a major and persistent theme was lower-skilled players misattributing to “communication” or “team discipline” (these were team games) what was actually explained by situational awareness. Leading them to experiment with all kinds of useless interventions aimed at improving team communication or discipline. In fact what was happening was simply pros reading the game situation correctly and independently taking appropriate action, where appropriate action was often “support team member X to do Y.”

          The assumption that coordinated teams were in constant communication or (even wronger) were hierarchically organized and following orders was a persistent illusion. It was an artifact of low-level players being unable to read the situation well enough to know when the situation dictates action, and therefore assuming some “leader” was dictating action.

          • RCF says:

            So, how do high level players deal with being on a team with a low skill player? Coordinating with a low skill player requires modeling them.

          • DrBeat says:

            Generally, they yell at the low-skill players and call them noob pieces of shit.

          • FullMeta_Rationalist says:

            It depends on the game.

            In fast-paced games like Counter Strike, a single strong player can easily “Solo-Carry” their team to victory. It only takes a headshot to “Outplay” an opponent, so it’s easy for a single player to cause decisive outcomes. A single match’s outcome is dictated by the strongest outlier, so teamwork doesn’t really affect the metagame much.

            A slower game like League of Legends requires teamwork and coordination. Outplaying an opponent takes at least several seconds, which gives opponents more time to react, so decisive battles are more difficult for a single player to achieve. This makes slower games more difficult for a single player to Carry. Rather, a single match’s outcome is more often dictated by the weakest link. Teamwork is more important in order to ensure that all teammates are exerting their best efforts. This lowers the chances of the weakest link being on your team (and raises the chances that the weakest link is on your opponent’s team).
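
            The weakest-link versus strongest-outlier contrast can be made concrete with a toy model. Everything below – the function names and the skill numbers – is invented for illustration; it is a sketch of the claim, not anything from a real game.

```python
# Toy model: in fast-paced games a match outcome tracks each team's
# strongest player (one outlier can solo-carry); in slower,
# coordination-heavy games it tracks the weakest link.

def fast_game_strength(team_skills):
    """Fast games: the maximum skill on the team dominates."""
    return max(team_skills)

def slow_game_strength(team_skills):
    """Slow games: the minimum skill on the team dominates."""
    return min(team_skills)

carried = [90, 40, 40, 40, 40]   # one pro, four novices
balanced = [60, 60, 60, 60, 60]  # five average players

# The pro's team is favoured in a fast game but not in a slow one.
print(fast_game_strength(carried) > fast_game_strength(balanced))  # True
print(slow_game_strength(carried) > slow_game_strength(balanced))  # False
```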

            One distinction in Metagame theory is between Power and Resources.

            Power usually means Damage Per Second, which is a World of Warcraft term used to describe the ability of a player to consistently damage an opponent over time. It’s described as Power because it’s the rate at which you change the external state of the game. Dps is what wins you games. Carries have the Dps. Item-efficiency is usually measured in Dps / gold.

            Resources usually means anything else the player has — including spells, health, items, etc. E.g. health is looked at as a Resource because it doesn’t actually affect your chances of winning (until it hits 0). Resources are subsidiary to Dps.

            Therefore in slower-paced games, it’s best for the pro to tell the noob “play defensively (so you make fewer exploitable mistakes) and follow my lead”. The pro is then better able to carry the game (to victory) by leveraging the noob’s resources towards the pro’s higher Dps and superior decision-making.
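
            The Dps-per-gold measure mentioned above can be sketched in a few lines. The item names and stats here are entirely made up for illustration; no real game’s numbers are implied.

```python
# Hypothetical item stats (names and numbers invented): item efficiency
# as damage-per-second per unit of gold, as described above.

items = {
    "Rusty Blade": {"dps": 20, "gold": 400},
    "War Axe":     {"dps": 45, "gold": 1200},
    "Cursed Fang": {"dps": 70, "gold": 1600},
}

def efficiency(stats):
    """Dps per gold spent: higher means a more cost-effective item."""
    return stats["dps"] / stats["gold"]

# The cheap starter item is the most gold-efficient here, even though
# the expensive item has the highest raw Dps.
most_efficient = max(items, key=lambda name: efficiency(items[name]))
print(most_efficient)  # Rusty Blade
```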

          • James Picone says:

            I play DOTA, not LOL, and there are definitely heroes you can play as a high-skill player with low-skill teammates and dominate with. The traditional suggestion is someone like Storm Spirit or Templar Assassin, a snowbally, high-skill-cap, self-sufficient mid. In the terms you just defined, they’re really good at parlaying power into more power, as long as you don’t screw up too badly (because once they’re behind the curve, they stay there).

            This might be because DOTA’s laning game has much more direct conflict than LOL’s; a significant skill gap in midlane usually means the lower-skilled mid will get something like half as much gold/experience as the high-skilled mid. And they’ll probably die a few times.

        • 27chaos says:

          This is really interesting; I nominate it for comment of the month.

          You might like David Sirlin’s series of posts on competition and gamesmanship. I don’t play Street Fighter, or any of the games you mentioned, but I found it very insightful for thinking about strategy in general.

        • FullMeta_Rationalist says:

          I think the term you’re looking for is “deep Metagame”. The Metagame might be defined as the strategic landscape of a particular game when played at the highest competitive level. Or more concisely, the set of optimal (or at least viable) strategies.

          Some of your comment describes what’s known as “Win Conditions”: the subgoals a particular strategy depends on. E.g. the Metagame of Smash Bros, Tekken, Mortal Kombat, etc. invariably revolves around “Juggling”. For the uninitiated, Juggling entails controlling the opponent by knocking them into the air repeatedly.

          Fox was arguably #1 in SSB Melee because despite his poor damage, power, reach, and weight — his raw agility allowed him to Juggle quickly and easily. One major drawback of Fox is the inability to “Spike”. A Spike is a technique unique to SSB where the player punts the opponent downward into the abyss.

          Marth was competitive in the Metagame because his Ken Combo was able to incorporate a Spike at the end of a Juggle. Marth’s sword also afforded great reach and priority. The reason I suspect he never dethroned Fox was because his agility was average and his combos required extremely precise mechanics to properly execute.

          While I’m on the subject, the subconscious decision-making falls under something called “Mechanics”. Mechanics is an umbrella term for quick decision-making, execution, and reaction time. Reaction time shortens and decision-making improves with experience, because you’re able to anticipate certain attacks before the opponent executes them. In Hearthstone, the canonical example is when a Zoo deck (zerg rush) vs a Mage (weak vs zoo) plays a Nerubian Egg (insurance against Board-Clears) on turn 6 in anticipation of a Flame Strike (a mage’s only reliable Board-Clear) on turn 7.

          Games which continue to evolve after release often experience shifts in the Metagame. Meaning strategies which prioritize different Win Conditions become more favored. Sometimes, players break the meta by employing unorthodox tactics. Basically, just slightly outside the Overton window.

          Rarely, players employ “cheese” strategies. Cheese strategies are strategies that are 100% unorthodox and 100% dumb. Except they work… sometimes. Cheese strategies usually have a Zerg Rush feel because they put all the eggs in one basket early on. If a cheese fails, it’s hard to come back from. But it sometimes succeeds because it Munchkins something the developers didn’t intend to be exploited, and the opponent just doesn’t know how to counterplay it because they’ve never seen it before.

          A looser definition of Cheese is sometimes used to describe tactics outside the metagame which exploit noob mistakes or laziness.

          (N.B. all of these definitions I’m pulling from experience. After googling Cheese Strategies, it seems that the formal definitions don’t line up exactly with my personal definitions. Whatever, definitions follow from language use.)

          PvP games are all about punishing the opponent’s mistakes. At higher levels of play, decisive events tend to be low-frequency, because the top players make fewer mistakes. Higher levels of play also tend to close out games comparatively quickly once a decisive event occurs, since experienced players know how to “Snowball” an advantage after they gain it and new players don’t. From The Art of War:

          1. Sun Tzu said: The good fighters of old first put
          themselves beyond the possibility of defeat, and then
          waited for an opportunity of defeating the enemy.

          2. To secure ourselves against defeat lies in our
          own hands, but the opportunity of defeating the enemy
          is provided by the enemy himself.

          3. Thus the good fighter is able to secure himself against defeat,
          but cannot make certain of defeating the enemy.

          4. Hence the saying: One may know how to conquer
          without being able to do it.

          • FullMeta_Rationalist says:

            I feel like my explanation of Metagame was lacking. So I’m going to explain the Chess Meta, since it’s a popular game (and since I don’t have any experience with Go, which I hear is more elegant and plan to learn some day).

            Historically, the Chess Meta used to have a single basic strategy. That strategy was to control the 4 squares in the center of the board using pawns. If you can control the center, you have more space to maneuver your units, while the opponent feels claustrophobic. In other words, you gain a Positional Advantage.

            This Positional Advantage ideally allows you to set up a Double Attack. A Double Attack means attacking two pieces with a single move. It forces the opponent to sacrifice at least one of their units, leaving the attacker with a Material Advantage. All chess tactics are variations of the Double Attack.

            Once a player has a Material Advantage, they are free to force equal trades like Berserkers. This is how you Snowball the game: a 20 vs 18 piece advantage isn’t much, but by forcing equal trades, the initial advantage becomes a 4 vs 2 piece advantage, which is much more significant. Furthermore, you can’t afford to dilly-dally, because drawing out the game gives the opponent more opportunities to reverse the tide. Equal trades ossify any material advantage that existed beforehand.
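
            A toy calculation shows why equal trades ossify a material edge: the absolute difference stays fixed while the ratio of forces grows. The helper function and the starting numbers are illustrative only.

```python
# Equal one-for-one trades keep the absolute material edge fixed but
# grow the *ratio* of forces, which is what makes the edge decisive.

def trade_down(mine, theirs, trades):
    """Apply `trades` equal one-for-one exchanges of pieces."""
    return mine - trades, theirs - trades

mine, theirs = 16, 14  # standard chess armies, minus two pawns on one side
for trades in (0, 6, 12):
    m, t = trade_down(mine, theirs, trades)
    print(f"{m} vs {t}, ratio {m / t:.2f}")
# The ratio climbs from 1.14 toward 2.00 as material comes off the board.
```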

            After WWI, the Hypermodernists decided that black was at an inherent disadvantage given white’s inherent tempo (because white goes first). Therefore, black should relinquish direct control of the center to white. Instead, black should seek to flank the center from the sides, thereby undermining white’s control of the center and eventually usurping it.

            (No, this wasn’t because racism. In ye olde days, black was believed to have the inherent advantage because moving second allowed it to react to white’s opening.)

            The Hypermodern School reflected the largest change in the metagame in the history of chess. This wasn’t because the rules of the game changed, but because the contemporary grandmasters grew more fond of a different overall strategy. Arguably, it had the same Win Conditions as before (one of which was to Control the Center), but it satisfied that Win Condition in a novel way.

            The Metagame isn’t the space of all possible strategies, just the space of contemporary strategies that are successfully implemented. E.g. I have a friend who doesn’t have any notion of strategy. I keep telling him to fight for the center. But his opening move is invariably to advance a pawn on one of the edge files (a4 or h4 from white’s perspective). While it’s a safe move, it doesn’t do anything to develop his position. His waste of a move gives me a tempo. Thus, such opening moves are notably absent in the metagame, though they are technically valid moves by the game’s rules. (I know this isn’t an instance of the Blub Paradox because I could beat him even if I were blindfolded.)

            p.s. The 4-Move Checkmate could be considered a Cheese. It’s easy to see coming and trivial to block. A rookie often falls for it the first time they see it. Rarely a second time.

          • FullMeta_Rationalist says:

            erratum: a Chess Player only has 16 pieces to work with. I don’t know why I said 20.

          • James Picone says:

            “I think the term you’re looking for is ‘deep Metagame’. The Metagame might be defined as the strategic landscape of a particular game when played at the highest competitive level. Or more concisely, the set of optimal (or at least viable) strategies.”

            I don’t think so. Tetris has the property of “it is extremely difficult to model players much better than yourself, to the point where sometimes they’re difficult to recognise if they’re just playing casually”, and battle-tetris doesn’t exactly have a deep metagame – at least, not the variants that are just based around clearing lines really fast.

            And on the other hand, there’s Starcraft 2, where a thoughtful novice watching a pro player play could probably explain why the pro is microing in certain ways or what they’re trying to achieve out of a given attack or drop, even if they wouldn’t necessarily understand build order concerns or why attacks are occurring at a given timing, and Starcraft 2 has a pretty deep metagame.

            If I had to guess, the property occurs when one of the most significant aspects of winning the game is best modelled as deep tree search in a situation where deeper/better tree search leads to better outcomes. Tetris is pretty much exactly that – finding ways of best maintaining a structure that allows arbitrary block placement and a channel for an I piece is just a search problem, and doing it faster and deeper makes you a better player – and only some of Starcraft’s goals are readily modelled like that (specifically, build order/timing attacks), and the rest of it doesn’t go nearly as deep down the tree or is easier to model more directly (“I am blinking my stalkers back when they are damaged to spread damage more evenly” as opposed to an explicit tree structure of blinks back you can make and selecting the one with the highest outcome).
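
            The tree-search framing can be sketched with a deliberately tiny stand-in for Tetris: the board is just a list of column heights, a ‘piece’ is a single block dropped onto one column, and the search looks a few pieces deep for the flattest resulting stack. Everything here (the board model, the scoring heuristic) is invented for illustration; real Tetris search also has to track holes, line clears, and piece shapes.

```python
from itertools import product

# Tiny stand-in for Tetris-as-tree-search: exhaustively search all
# placement sequences `depth` pieces deep and keep the one whose final
# stack is lowest and flattest.

def drop(heights, col):
    """Drop a one-block 'piece' onto a column, raising its height."""
    new = list(heights)
    new[col] += 1
    return new

def score(heights):
    """Prefer a low, flat stack: penalise height and bumpiness."""
    bumpiness = sum(abs(a - b) for a, b in zip(heights, heights[1:]))
    return -(max(heights) + bumpiness)

def best_placements(heights, depth):
    """Depth-limited exhaustive tree search over column choices."""
    best = None
    for seq in product(range(len(heights)), repeat=depth):
        h = heights
        for col in seq:
            h = drop(h, col)
        if best is None or score(h) > best[0]:
            best = (score(h), seq)
    return best

# Filling in the low column twice is the flattest two-piece plan here.
print(best_placements([3, 0, 1, 1], depth=2))  # (-5, (1, 1))
```

Searching deeper makes the planner stronger at the cost of exponentially more positions examined, which is the trade-off the comment points at.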

            [juggling, melee]

            I’m not sure fighting-game subgoals are ‘win conditions’ in the same sense that a combination of cards in a CCG is a ‘win condition’, but that might be because I read ‘win condition’ as an absolute and as a specific individual strategy – ‘If I have card X, Y, and Z, and can play them in one go, I will set up a crushing advantage that is almost guaranteed to lead to a win’ – and the fighting-game equivalent is more about the route that leads to winning and isn’t individual (everybody wants to space well, bait their opponent into being vulnerable, etc.).

            Interestingly, I think of the Smash Bros. series as being unusually not-focused on comboing as compared to other 2D fighters. Take that with a hint of salt; I don’t have much experience with other 2D fighters. Let me explain via a simplified model of a fighting game. There are some number of phases:

            Neutral phase: Characterised by both players having almost free rein, with most of their options available. You start in the neutral phase. Play consists of jockeying for position/spacing, pokes, and trying to bait a mistake. If a player makes a mistake, move into the Punish phase. If a player commits to a line of play (perhaps by jumping in), move to the Attack phase.

            Attack phase: Characterised by one player being ‘on the attack’, and the other player attempting to defend themselves. Both players have limited options – you could draw a flowchart with the moves the attacking player could make in this situation, the responses the defending player could make, the responses to the responses, etc. Making ‘the wrong response’, or one not on the flowchart, is usually a mistake and leads to the Punish phase. When the attack peters out we move to either the Neutral phase or the Advantage phase, depending on the sequence chosen. The attacking player’s goal is to convince the defending player to make the wrong response and then punish them for it, or at the very least to not be punished and maintain advantage. The defending player’s goal is to find a flaw in the attacking player’s offense that allows them to counterattack, or at the very least to weather the storm unharmed. This is what you’re in if somebody jumps in and goes for an attack.

            Punish phase: Characterised by one player having pretty much complete freedom and the other one having few to zero options. The attacking player is free to choose whichever line they believe will lead to the best outcome for them, in terms of damage or position or resources, and the defending player can only watch. In some games there are escape mechanics that allow defending players in the Punish phase to escape into the Neutral or Advantage phases, depending on the game – see Guilty Gear’s Burst. Games with such mechanics have an interesting sub-game built around baiting out the escape in a situation where the attacking player can quickly regain control. Punish phase leads to either Neutral phase or Advantage phase, depending on the game and the punish option chosen. This is what you’re in if there’s a combo going on.

            Advantage phase: Characterised by one player having a positional advantage over the other in a way that narrows the field of options without actually being in a flowchart situation. For example, if one player is knocked down or in the corner, the other player has the advantage. In Smash Bros., if one player is guarding the stage edge against the other, the guarding player is on advantage. The advantaged player’s goal is to make the advantage more concrete by causing a mistake or committing, leading to the Punish or Attack phase, and the disadvantaged player’s goal is to escape the advantage situation or *maybe* turn it around, leading to the Neutral phase, or the Advantage phase with the roles reversed.

            In that formulation, I’d suggest that Smash Bros. spends most of its time in the Advantage phase, with comparatively little time spent in Neutral, Attack, or Punish compared to other games. More combo-heavy games – like Street Fighter – spend much more time in the Punish phase. Less mobile games – like Street Fighter – also spend more time in the Neutral and Attack phases (because it’s harder to ‘get out’ of the attack flowchart by rolling away or whatever).
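
            The four-phase model above can be summarised as a small transition graph. The event labels are my own paraphrase of the triggers described in the comment, not standard fighting-game terminology.

```python
# Phase-transition sketch of the Neutral/Attack/Punish/Advantage model.
# Event names are informal paraphrases of the triggers described above.

TRANSITIONS = {
    "neutral":   {"mistake": "punish", "commit": "attack"},
    "attack":    {"wrong_response": "punish", "peters_out": "neutral",
                  "peters_out_well": "advantage"},
    "punish":    {"done": "neutral", "done_well": "advantage",
                  "escape_mechanic": "neutral"},   # e.g. Guilty Gear's Burst
    "advantage": {"force_mistake": "punish", "force_commit": "attack",
                  "escape": "neutral"},
}

def step(phase, event):
    """Follow one labelled transition; stay in place on an unknown event."""
    return TRANSITIONS[phase].get(event, phase)

# One possible exchange: a jump-in, a bad defensive guess, a strong
# punish, then pressing the resulting positional advantage.
phase = "neutral"
for event in ["commit", "wrong_response", "done_well", "force_mistake"]:
    phase = step(phase, event)
print(phase)  # punish
```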

            This might be because I played more Brawl than Melee. The combo game in Melee was definitely more there (and it’s back again in Smash 4). Most of what looks like juggling in Brawl is totally escapable if the other player plays well, and there’s enough time and range of motion there that sticking it in the same category as a blockstring isn’t really appropriate, but one player definitely has an advantage and is exploiting it for damage.

            My personal definition of ‘cheese’ is basically ‘weird all-in, usually at game start’. It’s all-in in the sense that if it fails, you’re completely screwed. It’s weird and off-kilter, in the sense that it’s outside the Overton window of usual strategies and attacks in a strange direction. And in a lot of games, it’s a strategy you execute towards the start of the game, not one you execute later – it’s harder to all-in later. Example: Cannon rushes in Starcraft. Milling in Hearthstone. Playing Meepo at all in DOTA.

          • Linch says:

            I think different people are defining “Metagame” in different ways. Personally, I define “Metagame” in the usual sense of “meta”: something like “the game above the game.” So in Chess the metagame is almost entirely about strategic opening preparation, often to counter specific individuals (so the metagame is not that useful for players below 2000); in Magic/Yugioh the metagame is deck selection, where uppercase “Meta” decks are the decks seen as having the greatest advantage in the current set or banlist; in MOBAs it’s Hero/God selection through picks and counterpicks; in Starcraft it’s the Zerg>Protoss>Terran>Zerg counter cycle, etc.

    • MicaiahC says:

      See also: the Blub paradox – more powerful languages look weird, less powerful languages look needlessly painful.

      • science says:

        Presumably without the mysterious ascension effect whereby practitioners of sufficiently high programming languages cease doing much programming.

    • zz says:

      I’m currently helping an autodidact transition to “upper” (read: proofy) math. It’s an interesting experience when they send me proofs, because they often feel a lot like proofs from a mature (read: lots of small steps left out), slightly-too-high-level-for-me text, which generally take quite a bit of working through to fully understand. The difference is that when you work through them, the latter give you a valid argument (and whenever you become convinced that they don’t, and try to make counterexamples, you can’t), whereas the former tend to collapse into nonsense.

      So, insofar as I have about as much trouble modelling what an author writing somewhat above me and an author writing somewhat below me are thinking, I have the same experience. I should also mention that I don’t think my student is substantially less intelligent than myself; writing valid proofs is just this weird skill that’s really tricky to pick up, but once you do, you use it with such a degree of automaticity that it’s really difficult to emulate someone who doesn’t have it.

      • Mammon says:

        Per the Curry-Howard correspondence, writing good proofs is a skill very similar to writing good programs, especially in functional languages. You have a “library” of already-proved theorems, “parameters” which are your assumptions and axioms, and you try to produce a term that inhabits the type corresponding to the theorem.
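
        A minimal sketch of the correspondence, using Python type hints in place of a real proof assistant (Python checks none of this statically, and totality is on the honour system; the function names are illustrative):

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Under Curry-Howard, "A implies B" corresponds to the function type
# A -> B, and "A and B" to the pair type (A, B). Writing a total
# function with the right type is (a sketch of) a proof of the
# corresponding proposition.

def syllogism(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """Proof of (A -> B) -> ((B -> C) -> (A -> C)): compose the proofs."""
    return lambda a: g(f(a))

def and_comm(p: Tuple[A, B]) -> Tuple[B, A]:
    """Proof that (A and B) implies (B and A): swap the pair."""
    a, b = p
    return (b, a)
```

In a dependently typed language like Agda or Coq the type checker would actually verify these; here the types only document the intent.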

        • science says:

          Like computer generated proofs maybe, but those are too tedious for anyone to read and not terribly useful because they don’t actually help you understand the insight.

          We keep on being promised that someday a language will come along that will be like writing (human) math proofs but it has yet to happen.

    • Ghatanathoah says:

      I’ve had some success with this by modeling them as similar to the way I think when I get insufficient sleep. When I’m sleepy I tend to do more things automatically, and often don’t think about things that I really should. I don’t have insights as frequently, and so on.

    • pneumatik says:

      I do experience that level of elitism, though I try to fight it. I’ll hear people talking about something that to me sounds completely ridiculous and has no solid evidence of any sort to support it, and I’ll get confused trying to understand how they could do that. I usually end up assuming they behave in some of the ways Scott talks about in this post. If someone doesn’t assign heavier weight to beliefs supported by strong evidence, for example, they may believe in crystal-based healing because it sounds as good to them as medicine they get from medical doctors. Once I can develop a simple model that seems to predict their decisions I find it’s easier to communicate with them.

    • Besserwisser says:

      My assumption tends to be that people disagreeing with me fall into one of two categories. One, they lack information or are just plain dumb, and therefore believe a thing regardless of whether it is actually right or wrong. Obviously, people who agree with me can often be described in the same way, though I’m confident they won’t leave my side, so to speak, once they see the full picture.

      Two, they don’t actually believe what they say and have ulterior motives to do as they do. Some people just are “pro-crime” or whatever ridiculous concept you can come up with and they just pay lip-service to the idea that they aren’t.

      I think both assumptions accurately describe many people; my fallacy is to believe the categories line up neatly with whether or not someone agrees with me, so I try to avoid those conclusions. Still, it’s hard to come up with effective ways to deal with this other than staying open to change.

    • Lola says:

      Yes! I found it much easier to combat when an episode of “Extra Credits” suggested the model that “kids are simply less experienced adults.” This model has helped me considerably when it’s clear that the person I’m talking to is literally not understanding what I’m saying (as opposed to understanding and disagreeing) – as when tutoring in math – because it is actionable. What experience can I give this person so that they will see the concept? For example, young kids are worse at addition than I am – but they literally have far less experience seeing five things and seeing twelve more things and seeing that overall there are seventeen things! Similarly for algebra – often, people haven’t seen enough manipulation of variables to have grasped the character of variables and what they can and can’t do.

  8. Eric says:

    I’m surprised that you didn’t list “conditional” or “hypothetical thinking” in your developmental milestones. It has been my experience that many people, who are otherwise intelligent, are very uncomfortable with ungrounded discussion: discussions about the consequences of unlikely or arbitrary premises. That is, people have a very tough time distinguishing soundness from validity.

    Perhaps, you consider this an aspect of statistics since Bayesianism includes conditional probability in its formulation.
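    The soundness/validity distinction can be made concrete in code. A minimal sketch (my own illustration, not from the thread): an inference form is *valid* if its conclusion holds under every truth assignment that makes the premises true, which says nothing about whether the premises are actually true in the world.

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """VALID: the conclusion is true under every truth assignment that
    makes all the premises true. SOUND additionally requires that the
    premises actually be true, which no amount of logic can check."""
    for assignment in product([False, True], repeat=num_vars):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False
    return True

# Modus ponens: from P and (P implies Q), infer Q.
# Valid no matter what P and Q stand for.
modus_ponens = is_valid(
    [lambda p, q: p, lambda p, q: (not p) or q],
    lambda p, q: q,
    2,
)
print(modus_ponens)  # True

# Affirming the consequent: from Q and (P implies Q), infer P. Invalid.
affirming_consequent = is_valid(
    [lambda p, q: q, lambda p, q: (not p) or q],
    lambda p, q: p,
    2,
)
print(affirming_consequent)  # False
```

A valid-but-unsound argument is one where `is_valid` returns `True` but a premise is false in fact – which is exactly the distinction people struggle with.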

    • Randy M says:

      I think in some cases people may have trouble with abstract models, in others they just suspect that people are going to try to pull a fast one.
      “Suppose I were to give you a hundred dollars. Would you buy product x then?”
      “Well you’re not going to give it to me, so what’s the point?”
      It’s more of “I don’t want to give you a chance to persuade me of whatever it is you’re after.”

      • Eric says:

        It’s true that people can entertain hypothetical “facts” such as, “What if I got a million dollars?” But really that just requires imagining one hypothetical fact, and everything else falls out. However, if you generalize to conditional propositions, it becomes much more tenuous. Something I have to frequently remind myself of is that the general public has little or no experience with theoretical thinking. When people think of the Theory of Economics or the Theory of Physics, they think of each theory as fundamentally descriptive. The idea that an economic theory may be valid (theorems following from axioms) rather than sound (and therefore descriptive) is a foreign idea.

        • Glen Raphael says:

          I was just going to suggest “economic reasoning” or perhaps “economic intuition” as one of these milestones. I’m particularly thinking of Bastiat’s That Which is Seen, and That Which is Not Seen – the skill of perceiving what likely won’t happen as a result of a rule or economic action, rather than focusing only on what actually does happen or, worse, on what someone intends should happen.

      • Tracy W says:

        Or the deep suspicion that the questioner is going to take their agreement to something under an unlikely or arbitrary premise out of context and pretend it’s an agreement to that something unconditionally.
        It only takes one person to distort someone’s words that way to induce distrust.

        • DrBeat says:

          Unfortunately, it’s WAY more than one person that’s actually doing this, so that distrust is entirely well-founded. It’s why I hate the Hypothetical Trolley in all its forms.

          It always frustrates me when people talk about analyzing responses to hypothetical questions and what it means about what people are thinking and what their values are and what is wrong with them, because they always completely ignore the possibility of “The person responded to this question in the way you dislike because they think your question is a bullshit attempt to trick them and gain some advantage over them, and your ‘analysis’ proves they were correct in this assessment.”

        • Deiseach says:

          Yeah, pulling the “gotcha!” on you after getting you to eventually agree to some premise, then out of nowhere going “So you do think witches should be burned at the stake!” or the likes.

          Once bitten, twice shy. When someone asks you to agree or disagree with something that seems very unlikely or unusual, you start to ask yourself “What’s the hook hidden under this bait?”

          • XerxesPraelor says:

            Do you really think lying to someone just because you don’t want to “lose” is moral? I feel like it’s more important to cultivate the character of telling the truth than to preserve your own sense of rightness.

          • Tracy W says:

            XerxesPraelor: I don’t think anyone is lying, just refusing to engage with the question in the way the questioner wants.

            I once had to deal with a persistent questioner who wanted to know what gender of child I wanted to have (I was obviously pregnant). After a couple of joking non-answers failed to deflect her, I sat and stared at her silently until she gave up.

    • LCL says:

      As someone who resists discussing ungrounded or silly hypotheticals, my experience is that my judgement-making module resists thinking about them carefully. I try to apply concentration and careful consideration, but keep getting interrupted by preemptive returns of “it doesn’t matter; stop process.” Persisting past a bunch of those returns is effortful and I can come to resent people asking me to do it.

      So it’s not that I think you’re trying to trick me into saying something weighty that’s against my values. It’s almost the opposite – I don’t seem to be capable of assigning any serious weight, internally, to possibilities I model as ridiculous or impossible. I can’t care enough about them to think carefully about them.

      I should add that the cognitive feature which persistently interrupts with “it doesn’t matter; stop process” is generally a very useful one. Separating signal from noise is a tremendously valuable skill, and this feature provides some intuitive noise-recognition ability that would otherwise take a lot of conscious effort.

      • Deiseach says:

        I don’t find that happens; with me, it’s more a case of I can feel my mind slamming shut when some topics are up for discussion or being argued.

        It’s like “No, I don’t care. I am not going to be persuaded by what you say” and the gates slam shut.

        Mostly it’s old arguments that have been rehashed over and over again, and there’s nothing new going to be said, and my brain hasn’t the tolerance or resources to go through the same old thing over again when it already knows what the conclusion is going to be.

        • XerxesPraelor says:

          Do you think that’s good, though? Having some situations where it’s impossible for you to accept the truth?

    • Eli says:

      It depends what sort of hypothetical. Certainly, if I think that a hypothetical situation exposes some uncertainty in my model of the world or the concepts out of which I built that model, if it’s so far from average that I’m not entirely certain my normal vocabulary is meaningful, I’ll resist talking about it.

      Take, for instance, the old Star Trek “transporter question”: are you alive after going through the transporter, or did you die and get replaced with an identical copy?

      For most people who don’t have a very thorough education, one that includes not only physics but also the notion of a causal-role concept, their conceptual vocabulary is too vague to yield a good answer. Instead, they’ll answer in a way that has more to do with their preconceptions than with the content of the question, because they can’t ground the question in their existing knowledge.

    • Mark says:

      Let’s assume we live in a world where people will be killed every time you engage in an ungrounded hypothetical…

    • brad says:

      This is a big part of the “thinking like a lawyer” thing that is supposed to be why it’s worth spending six figures and three years going to law school. When you can argue in the alternative (“my client wasn’t there, and if he was there he didn’t do it”) with no cognitive dissonance, that means you’ve got it.

    • Mike says:

      People keep talking about weird hypotheticals, but this comes up in much more grounded cases. It’s at the root of much political ideology.

      “Would this logic make sense *if* you believed that global warming was real, or that fetuses have rights, or that self-defense is an important right, or that poverty/culture/biology causes crime?”

      It’s what we all do when trying to judge which opposing viewpoints are crazy/stupid, which ones are sound logic built on bad premises, and which are built on premises that are unlikely to be proven.

      There are so many people who have a hard time separately evaluating the assumptions, logic, and conclusions of a particular stance.

      • Anthony says:

        There are so many people who have a hard time separately evaluating the assumptions, logic, and conclusions of a particular stance.

        In the case of politics, it doesn’t help that the arts of rhetoric include deliberately confusing the assumptions, logic, and conclusions of a particular stance, for fun and profit.

  9. Totient says:

    With the caveat that I don’t think of these sorts of things as “developmental milestones” so much as “things everyone has to a certain degree, but with high variance on how well they can do it”, I’d add:

    Ability to think counterfactually: to think in terms of “Given that X is x_1 and Y is y_1, Y would have been y_2 if X had been x_2.” Everyone I know can think in these terms (“I wouldn’t have burned my hand if the stove hadn’t been hot when I touched it”), but I’ve seen incredible variance in how well, and to what level of abstraction, people do so.
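    The counterfactual pattern can be made explicit in code. A toy sketch of my own (in the spirit of structural causal models, not anything from the comment): hold the other inputs fixed, intervene on one variable, and recompute the outcome.

```python
def got_burned(stove_hot, touched):
    """Tiny structural model: you get burned iff the stove is hot
    and you touch it."""
    return stove_hot and touched

# Actual world: the stove was hot and I touched it, so I got burned.
actual = got_burned(stove_hot=True, touched=True)

# Counterfactual: keep "touched" fixed, intervene on "stove_hot".
# "I wouldn't have burned my hand if the stove hadn't been hot."
counterfactual = got_burned(stove_hot=False, touched=True)

print(actual, counterfactual)  # True False
```

The “level of abstraction” point is then the question of how complicated a model, and how unfamiliar an intervention, someone can run this procedure on.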

    • Sniffnoy says:

      Yes, I think counterfactual reasoning is an important one.

      To tie this in with an earlier recent thread, towards the end of that word problems link I posted in the other thread, one finds the following:

      What do we know about cognitive development of people who have always solved only practical problems with real data? This question is answered by several expeditions into regions populated by people belonging to so-called ‘traditional’ cultures. The scientists observed that these people do not solve even simple word problems if these problems go beyond their experience, although they can perform arithmetical operations. This is what some scientists wrote:

      Luria about his expedition to Central Asia [Luria], p. 120:

      Subjects who lived in remote villages and had not been influenced by school instruction were incapable of solving even the simplest problems. The reason did not involve difficulties in direct computation (the subjects handled these fairly easily, using special procedures to make them more specific). The basic difficulty lay in abstracting the conditions of the problem from extraneous practical experience, in reasoning within the limits of a closed logical system, and in deriving the appropriate answer from a system of reasoning determined by the logic of the problem rather than graphic practical experience.

      Cole & Scribner about their expedition to Africa [CS], p. 162:

      Experimenter: Spider and black deer always eat together. Spider is eating. Is black deer eating?
      Subject: But I was not there. How can I answer such a question?

      Scribner [Scribner], p. 155:

      Both Luria and Cole identified this empirical bias as an important determinant of the poor problem performance of nonliterate traditional people…”

      The most interesting observation made by these scientists is that “traditional” (that is, belonging to traditional cultures) subjects do not solve simple syllogistic problems, that is, problems for which it is necessary and sufficient to perform one syllogism. This does not mean that they try to solve these problems in our sense and fail or make mistakes. It means that they refuse to make statements which are not substantiated by their personal experience. One of Luria’s subjects said exactly this: “We speak about what we have seen. What we have not seen, we do not speak about.”

      So, counterfactual reasoning seems to be something people learn only as their culture demands it.

      • Adam Casey says:

        I have an amusing case of a very well educated westerner. My girlfriend does geography at university, but really struggled with the maths around triangulation.

        I tried to help out, and realised what was up. People were telling her “suppose I have a point here and a point there, the angle is such and this distance is so, then we calculate that distance with this formula”. What she needed to hear was “I am at a base station here, the mountain is there, I take a reading like that and measure a distance like that”. Anything more abstract was hopeless.

        • Tracy W says:

          Dan Willingham, a cognitive scientist, says this is common if not always so evident. He distinguishes between shallow knowledge and deep knowledge, the latter being the abstract stuff, and apparently cognitive scientists and teachers know of no way to skip the shallow stage in learning: most people just have to go through it.

      • Ghatanathoah says:

        I was once the literacy tutor for an older woman with a low reading level. Whenever I asked her to use a vocabulary word in a sentence, she always composed a sentence that related to her personal experiences in some way. I think now I understand why that was.

        I also wonder if this is why framing narratives used to be such a popular part of fiction before the 20th century. People before then had trouble reading completely made-up stories because they were so abstract, framing narratives were a way to help coax them into the idea.

        • Eric says:

          I’m interested in what you mean by framing narrative in this context.

          • PDV says:

            “Here are a bunch of ordinary people sitting around together telling stories, and here are the fantastic stories they told”

          • Ghatanathoah says:

            In lots of classic literature there is some sort of bit at the beginning that explains how the author got ahold of the story you’re about to read. For instance, “Heart of Darkness” starts with an unnamed narrator who is implied to be Joseph Conrad telling us about the time he met the main character, who related the story to him. “Frankenstein” starts as letters from an explorer to his sister; the explorer later meets Doctor Frankenstein and is told the story by the Doctor. “Rime of the Ancient Mariner” starts with the author running into the main character outside a wedding, and the main character relating his story to him.

            It seems like in earlier times the idea that you could just tell a totally made up story and expect people to immerse themselves in it was controversial.

          • onyomi says:

            Absolutely true with East Asian literature as well. Almost all fictional narratives prior to the 19th century are framed somehow in terms of either reporting/embellishing on a historical event, and/or relating a weird story you heard from a totally reliable source. You can’t just be like “here’s a cool story that I just pulled out of thin air.”

          • The original Mr. X says:

            It seems like in earlier times the idea that you could just tell a totally made up story and expect people to immerse themselves in it was controversial.

            I dunno, some of the most popular and successful authors made no use of framing narratives (Dickens, Austen, etc.). Besides, if “tell[ing] a totally made up story” was controversial, I’m not sure why telling a totally made-up story about somebody being told a totally made-up story would be any better.

          • onyomi says:

            In terms of the history of literature, Dickens and Austen don’t count as “earlier times” by most reckonings. And of course you don’t tell the reader you made up the story about hearing the story from someone else.

          • The original Mr. X says:

            Dickens and Austen were in the same literary period as Coleridge, Shelley and Conrad, whom Ghatanathoah used as his examples of authors “in earlier times”.

            And of course you don’t tell the reader you made up the story about hearing the story from someone else.

            Yeah, but I suspect they know it already, and even if they didn’t you could just not tell them you made up the main story itself. I’m not sure that people who are going to quibble with “A funny thing happened to me recently. It all started the day I decided to seek my fortune in London…” because it didn’t really happen are going to be any more satisfied with “I met a stranger on the road the other day, who came up to me and said, ‘A funny thing happened to me recently. It all started the day I decided to seek my fortune in London…'”

          • onyomi says:

            I’m not sure why you’re quibbling with the reality of premodern literature. It doesn’t seem like it should make a difference, but, apparently, it did.

            And it’s also not necessarily about getting the reader to actually believe you, but because, even if they know it’s a lie, premodern readers especially seem to find the experience more immersive when it’s presented in such a frame. And I think the framing device still works on us: I find the epistolary nature of Bram Stoker’s Dracula to be immersive, for example, even though I obviously knew it was fiction from the beginning.

        • FullMeta_Rationalist says:

          I have a friend with whom I often watch movies. When he suggests a film and I ask how good it is, he invariably likes to add that it was “based on a true story” (if applicable). Which I’ve always found weird. I’ve asked him whether he might somehow appreciate a film less if the “true story” attribute were revealed to be a total hoax. To which he replies “no”, yet he continues to fanboy about films which are “based on a true story”. He gets confused when I try to explain counterfactual reasoning. Ho hum.

    • moridinamael says:

      I wonder if “skill with abstraction” is a general trait.

      If you show me a complex machine that I’ve never seen before, I’ll be able to “get it” within minutes, and probably correctly predict the consequences of turning a given knob and possibly diagnose problems.

      If you show me a mathematical representation of exactly the same system, it will take me much, much longer to understand what’s going on, particularly if the variables are named things like “x”.

      A big breakthrough that I made, which seemed to come to me way too late in my education, was that I could mentally treat the variable x the same way that I would treat a physical valve, such that my “gut” understood the properties of x, since my working memory can’t actually hold the formal properties of the entire mathematical system at a glance. This meant that I could “visualize” or “feel” the consequences of varying x, rather than trying to use the same part of my brain that manipulates symbols syntactically.

      I feel like many (most? a few?) people who are “bad at math” have simply not made this transition between doing math and/or programming “in their head” versus “in their gut.”

      • Mammon says:

        This is a very important insight. When doing math, I’m often faced with new abstract structures, described only through a handful of properties. No rationale, no background, no context.

        In those situations, the easiest thing is to take a central example of that data structure. (“Central” in the sense of “non-central fallacy”.) That’s usually the “least object such that A”, or a free construction.

        For example, what the hell is a monoid? All I know is that it has an associative operation, and an identity element. Well, it turns out that the “free monoid” over a set is the set of all finite strings of characters with the elements of that set as an alphabet, so the free monoid over {A, B, C} would be A, B, C, AA, AB, AC… and the empty string. So any time you’re playing with a monoid, you can just pretend you’re playing with a string.

        You can rinse and repeat with monads (free monads), fields (rational numbers), rings (polynomials), complete metric spaces (the reals)… Just pick a representative sample, and roll with it. You’d be surprised at how far you can get.
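        The free-monoid trick is easy to see in code. A small sketch of my own (assuming nothing beyond the comment’s definitions): a monoid is an identity element plus an associative operation; strings over an alphabet form the free monoid under concatenation; and any fold written against the monoid interface works unchanged for strings, integers, or any other instance.

```python
def mconcat(identity, op, xs):
    """Fold a list using any monoid: an identity element plus an
    associative binary operation."""
    result = identity
    for x in xs:
        result = op(result, x)
    return result

# The free monoid over {"A", "B", "C"}: finite strings over that
# alphabet, with concatenation as the operation and "" as identity.
a, b, c = "A", "AB", "C"
assert ("" + a) == a == (a + "")     # identity law
assert (a + b) + c == a + (b + c)    # associativity law

print(mconcat("", lambda x, y: x + y, [a, b, c]))  # AABC

# The same fold works for any other monoid, e.g. integers under +:
print(mconcat(0, lambda x, y: x + y, [1, 2, 3]))   # 6
```

So “pretend you’re playing with a string” is exactly the claim that anything you prove about `mconcat` using only the two laws holds for every monoid, not just the free one.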

        • Magnap says:

          Thank you for giving these examples. I realize that I have done this before, but without considering exactly what I was doing. “Basing intuition on a central example” is a good expression for it, I think.

          Another area where this is useful: learning conjugation rules of a foreign language. Just pick a regular verb and roll with it. Personally, I’ve always remembered German conjugation by the word “kaufen”.

  10. Anaxagoras says:

    It occurs to me that people can have these insights for positions they disagree with, but be totally blind to them for ones that fit their worldview. Like, the notion of tradeoffs is something people get very easily when defending controversial positions they hold (e.g. “Yes, it’s theoretically possible embryos are morally significant, but that remote possibility is outweighed by the certain harm caused to women by restricting access to reproductive care” or vice versa), but not that their opponents may simply see the tradeoff going the other way.

    Eh, not sure if this is the best example, but it’s late, and I’m a bit too tired to construct a better one right now. Still, you can probably think of some.

  11. Jacobian says:

    After being immersed in the LessWrong world (on and off-line) for a year and a half my main thought when contemplating Jacobian-2013 is how *childish* I was. I almost can’t believe I managed to get along in life with all the stuff I believed in, crazy notions I took seriously like “my political party is right and the other one is evil”. After reading Chapman, his level 4 sounds exactly like the general mindset of LW rationality.

    Unfortunately, it seems like the sequences only appeal to a very narrow segment of people, based on Scott’s own excellent surveys. Surely level 4 development doesn’t have a top 1% SAT math score as a necessary requirement. I wonder how the mindset can be spread to people LW isn’t a fit for. Any ideas?

    • Mark says:

      It’s a bit long, isn’t it?
      How about a one page summary.

    • Brandon Berg says:

      I almost can’t believe I managed to get along in life with all the stuff I believed in, crazy notions I took seriously like “my political party is right and the other one is evil”.

      Much of what’s wrong with politics follows from the fact that your incredulity at having been able to get through life with this belief is almost entirely unfounded—i.e., that there is effectively zero practical cost to holding this belief.

      • Jacobian says:

        The cost is low, but it’s definitely not zero. I am now able to enjoy friendships with several “evil tribe” people. Whenever they start a political conversation I take it up to the meta level until they give up in confusion.

        Still, obviously most of humanity gets by just fine without rationality. It’s just that how I think of myself has changed drastically over two years, so much so that I can hardly recognize my 2013 version.

    • Jaskologist says:

      This is but the beginning, grasshopper. When you reach true enlightenment, you will see that it is in fact more useful and rational to adopt the “my tribe is right and the other is evil” viewpoint.

  12. Anonymous says:

    Scott: I notice you tend to post around the same times. Do you schedule your posts, or is this an artifact of your writing schedule?

  13. How about “ability to separate words from their meanings” as per A Human’s Guide to Words?

    Edit: Oooh, I just thought of one that’s a personal bugbear! The inability to separate how one feels about something from its value. That is, being able to be repulsed by homosexuality or experience machines while still acknowledging that said revulsion is not proof of their moral disvalue. (I call this distinction “feeling” vs “feeling about” because I think qualia are the only candidate for moral value that our brains interact with directly.)

    • “Ability to separate words from their meaning”. That’s an important one. I find an almost palpable difference between people who think in words, and people who think in concepts.

      • Creutzer says:

        This is interesting. Would you care to elaborate with some prototypical examples of ways in which the two groups differ?

        I wonder if this could explain why I have, for all intents and purposes, lost the ability to bullshit: if I think only in concepts and never in words, then of course producing verbiage that is conceptually meaningless will be difficult, because I would have to be mistaken about the concepts themselves.

        Note that the loss of this ability is a very double-edged sword. It does actually reduce my employment prospects.

        • Niall says:

          Hey, that’s my experience too! I’ve been accused of being pathologically honest because I just can’t bullshit anything. I discussed accountant exams with a colleague – he liked narrative ones because you just write a lot of crap and get marks for it, which was exactly what I found difficult about them, because I have to write something that makes sense.

        • Apart from “feel”, the main test of whether someone is thinking in concepts is their ability to paraphrase or unpack statements. Educators set store by rephrasing things in your own words because they are trying to teach conceptual thought.

          “I wonder if this could explain why I have, for all intents and purposes, lost the ability to bullshit: If I think only in concepts and never in words, then of course producing verbiage that is conceptually meaningless will be difficult, because I have to be mistaken about concepts.”,

          No, you just need to translate them differently.

    • Deiseach says:

      Or the opposite side of the coin to that: the assumption that your opponent or the person disagreeing with your view is motivated by disgust or revulsion: “X is perfectly moral or harmless, so your disapproval of it cannot be motivated by any reasonable objection, so you must be rationalising a feeling of disgust! And if you deny you are basing your objection on belly-feel, that only proves you are motivated by it!”

    • Emily says:

      Revulsion is informative regarding moral value. Particularly if you’re someone who can convince yourself of all sorts of bad things, your intuition may actually be the best guide you have to whether something is right or wrong. Certainly it’s possible to over-rely on it, but it’s also possible to under-rely on it. I don’t think either qualifies as a developmental milestone.

    • Ghatanathoah says:

      I think there’s a difference between disgust at homosexuality and disgust at experience machines. The error in the homosexuality example is when someone is disgusted by gay sex, and therefore concludes that nobody should have it. If you asked a heterosexual person if they, personally wanted to have gay sex, and they disgustedly replied that they would not, they are not making an error. The error is overgeneralizing to others (#2 on Scott’s list of milestones), not in using disgust to make a judgement.

      Most of the time someone is posed the experience machine question they are asked if they, personally would want to go inside one. If they are disgusted and reply that they never would, they are not making an error, the same way one is not making an error if one personally declines having gay sex out of disgust, but is willing to allow others to have it.

      I think “feeling about” can sometimes be more reliable than “feeling” because “feeling” sometimes involves unwanted changes to your brain, and “feeling about” something can warn you of the danger of those changes. For instance, what I “feel about” being made into an anti-wirehead is absolute utter horror. I know that my anti-wirehead self, however, would feel that pain is awesome and that he’d need to be in as much of it as possible. He would feel just as horrified about being changed back as I would be opposed to being turned into him. That is precisely why I am so opposed to being turned into him.

    • Cord Shirt says:

      The inability to separate how one feels about something from its value. That is, being able to be repulsed by homosexuality or experience machines while still acknowledging that said revulsion is not proof of their moral disvalue.

      Until about 2000-2005 or so, I could say, “I feel X, but I believe Y,” and people would nod along. It didn’t sound weird to them, and they came away with the understanding that I believe and act on Y and ignore my silly feelings of X. We kind of all shared an assumption that of course I did because that’s…what you do. 😉

      But with the people who became adults in or after 2000-2005 or so, there’s a tendency to hear, “I feel X, but I believe Y,” and freak out. “OMG, you think X! You monster!” It’s as if they, and not older folks, haven’t been taught to make the distinction.

      I think it is a matter of cultural training and not just that they’re young (or somehow Inferior Beings or w/e), because many of them are as old now as many people were in 2005 who had no problem with it then.

  14. Toggle says:

    This raises the obvious question of whether there are any basic mental operations I still don’t have, how I would recognize them if there were, and how I would learn them once I recognized them.

    Alternately: is the number of significant milestones in potentia a relatively small number (order 5), such that we might think about a few that we might not have yet? Or can a sufficiently advanced civilization discover an arbitrarily large number of them?

    I have a friend who lists ‘the ability to detect anachronism’ as one of these that we don’t necessarily think about, because it’s so fundamental. For example, people in the Middle Ages wouldn’t necessarily have seen anything wrong with a letter from Peter the Rock, written in Italian. Certainly a lot of the art is flagrantly anachronistic in terms of architecture and clothing. If true, it’s a bit scary to think that equivalently flagrant blind spots might be operating on everyone on Earth, including us, right now.

    (Also, this line of reasoning seems unusually susceptible to ‘listing things that describe me as if they were objectively better’. It is easy for me to see such a discussion that elides the difference between a developmental milestone and a personal attribute that is seen as virtuous.)

    • Evan Þ says:

      I think you’re generalizing much too fast about the Middle Ages. Remember that medieval art also always showed halos around saints’ heads, even though there were people who’d seen other saints when they were still alive and could confirm they had no visible halos. I’m sure the clothes were also symbolic rather than representational.

      And with respect to anachronistic language, remember that in late antiquity, St. Jerome pointed out that the tale of Susanna and the Elders could not have been part of the original Hebrew text of Daniel, because it depended on several significant puns only present in Greek. Also, by the 1400’s, people were pointing to several anachronistic terms to prove the Donation of Constantine a forgery.

      • See Rick Steves’ “Europe 101” for a good overview of early Christian art. Each of the Saints had a characteristic object that appeared in early paintings with them so viewers could distinguish all the Saints. There were items of clothing, animals, tools, buildings among many other things. Since no one knew what any of them had looked like, it’s not clear there was a better solution, but it’s still surprising how long it took to develop the ability to actually draw recognizable faces.

    • Tracy W says:

      Certainly a lot of the art is flagrantly anachronistic in terms of architecture and clothing.

      But that doesn’t mean that the artists, let alone the general population, had no idea of anachronism, any more than Picasso’s paintings prove that early twentieth-century Europeans knew nothing about anatomy. It just means they had other concerns. In medieval art, the donors of the painting were often painted into scenes like the crucifixion, praying at the foot of the cross, but that doesn’t mean that the artists believed them to have literally been there. Nor would the general population have failed to realise that there was something off with, say, a wealthy young man wearing the fashions of his grandfather’s generation.

    • Banananon says:

      In addition to what the other commenters have said about the sophistication of people in the middle ages, I think you’re giving the modern population too much credit for its ability to detect anachronism.

      Consider the old joke “if English was good enough for Jesus it’s good enough for me”, and the plethora of images depicting biblical scenes with everyone noticeably white. While most people will acknowledge the absurdity of the above if pointed out, it sort of floats in the cultural background without being questioned — routinely calling this out is unusual.
      Together with the other commenters’ examples, it’s not clear to me how much the spectrum of ‘anachronistic awareness’ of the present day differs from that of the past.

    • anodognosic says:

      A little pet peeve from Game of Thrones: grammar and usage. In the series, Stannis at some point corrects a less/fewer error, and I remember in the books someone corrected someone else on the hanged/hung distinction.

      Whereas at the (analogous real-world) time, not only were there no standard grammar and usage rules, but English varied wildly from place to place, so much so that speakers might not even have recognized it as the same language.

      (The only way to deal with pedants is to out-pedant them.)

      • Paul says:

        Well, one *can* go a bit deeper, and suggest that there might well be a loosely-standardized grammar in Westeros that is taught to the nobility. They’re (more or less) all educated by maesters, who themselves are centrally educated in the Citadel. Given that, it would hardly be surprising that well-educated people even across a continent have an understanding of what ‘proper’ grammar or word choice should be.

        It would be a bit like the Catholic Church essentially being the authority on the linguistic rules of ecclesiastical Latin, even before widespread printing.

      • Deiseach says:

        I would dispute that Westeros is based on anything analogous to real-world time/place; you can argue it’s loosely based on the Wars of the Roses, but George R.R. Martin seems to have taken the “mud and grime” view of the past to an extreme.

        It’s one thing to react against artificial, romantic notions of perfection but it’s just as artificial to swing to the other extreme and say that everyone was caked in filth, there was no colour, cleanliness or honour, and life was all murder and rape enlivened only by the occasional bout of torture and treachery.

        But even in mediaeval England, one dialect of English did win out and establish itself as the standard, derived from the London and East Midlands dialects. If we suppose that Stannis et al speak a particular dialect associated with the most populous and most politically important centre(s), and that people wanting to get on in the courts (both royal and law), finance, the church(es), etc. need to speak this (besides whatever native dialect they speak), then it is possible and indeed probable that Stannis would correct someone not familiar with the grammar of that dialect and importing their own dialectal usage instead.

        • Nornagest says:

          ASoIaF’s grittiness level seemed appropriate to a civil war of the period to me; high medieval warfare really was very nasty, and more about raids and slaughtering peasants and what’s euphemistically called “foraging” than about honorable setpiece battles. The peacetime scenes we see earlier in the series weren’t too bad. I did find my suspension of disbelief being strained a bit when it came to the Boltons, but I suppose we could fanwank that away by noting that Ramsay’s the only member of that family we see doing anything a noble in a feudal society couldn’t expect to get away with.

          There’s a bit in Foucault’s Pendulum where Umberto Eco’s mouthpiece character notes that you can learn a lot about the problems a society’s having by looking at what it forbids. If a crusader order’s rule talks about trading in slaves or having sex with the locals or worshiping what they thought of as the local idols, it follows that they had enough crusaders going out and doing just that that they felt the need to expend scarce resources to prevent it — and probably with limited success, considering how much weaker central authority was back then. Apply that idea to conventional chivalry…

          • The original Mr. X says:

            Yes, but at least with chivalry there was a recognition that certain types of behaviour aren’t right and an attempt to rein them in. In ASOIAF we don’t even get that.

          • Cet3 says:

            ASOIAF’s setting has chivalry, too. Hence Sansa and Brienne going on about “true knights” all the time.

      • Lignisse says:

        There’s also a spot in the latest book where a character asks *whence* they’re travelling *to*. Very jarring, because at no point in the history of the English language has it been permissible to vary “whence” and “whither”. Of course now we’d say “where” for both, but GRRM clearly reached for “ye olde tyme synonym” and hit the wrong one.

        The point being, GRRM isn’t an expert on English and his editors aren’t able to correct him in either of these cases.

      • The original Mr. X says:

        Westeros is based on mediaeval England (or at least the pop culture version thereof), but it’s not actually meant to be mediaeval England, and there’s no obligation for Martin to make it exactly like its real-world analogue.

    • Simon says:

      There is also the question of whether the ‘milestone I don’t have’ would even be recognizable as such. Having full insight into a system also means being less able to step into another person’s shoes.

      A politician says crime statistics have gone down; An Angry Voter replies that just last week his grandmother was robbed. The politician apologizes but ultimately dismisses the objection, because the voter isn’t seeing the system, and loses the next election.

      Progressives who try really hard to see the nuance and motivations in everything might think they are on a higher level than a reactionary who thinks it’s necessary to see evil for what it is and fight it. One of the two is right, but certainly neither sees the other side as such.


    • Deiseach says:

      Oh, we have our own version of not detecting anachronism. Someone commented that modern-day writers should ignore historical realism when writing historical fiction or fantasy based on it: to avoid further harm to marginalised peoples (ethnic, gender- and sexuality-based, etc.), they should not be concerned with “Would someone in the 18th century really have spoken, thought or acted in those terms?” but should instead provide representation.

      That means, and I do think we see it, people being unable to consume older (and that seems to mean “anything written more than 50 years ago”) media because it’s boring, it’s too hard to understand, and if the characters do not talk and act like late 20th century/early 21st century Americans, they can’t be “identified with”.

      I’m seeing huge enthusiasm for this rap musical “Hamilton” on Tumblr, and no doubt it’s very funny and well-done, but it’s based on the contrast of dressing people up in 18th century clothes while having them talk like 21st century people. Which is all very well, but there seems to be little to no recognition amongst the fans that Benjamin Franklin (or whoever) didn’t really share the same mindset as a modern-day person, and that if he were brought forward to the present day, he would have beliefs and motivations completely different to those of people today.

      • Nornagest says:

        I think you’re overestimating the differences there; certainly there are going to be some differences in beliefs and motivations, but those are generally pretty separable from the actual language. We go through an almost identical process every time we translate something into English, especially from a non-European language, and while that usually loses some of the symbolism and wordplay in the original the characters usually come across pretty well.

        I’ve never seen “Hamilton”, but a couple link threads ago, Scott linked to a nearly line-for-line rap translation of the first book of the Iliad. It struck me both as a pretty good rendition and as a lot more readable than the traditional free verse (which I’ve read in a couple of different translations). Similarly, you can tell the story of “Hamlet” over anything from a 1920s gangster family to a pride of cartoon lions and preserve all the important motivations.

    • Deiseach says:

      Certainly a lot of the art is flagrantly anachronistic in terms of architecture and clothing.

      Ehhhh. Are modern-dress productions of Shakespeare plays “flagrantly anachronistic”? Shakespeare may have set some of his plays (“Lear”, “Cymbeline”) in ‘Dark Age’ Britain, but the actors played them in the costume of the day. It really wasn’t until the 19th century that the notion of “authentic” costume took hold, so ironically we could say that presenting the history plays in mediaeval costume is conscious antiquarianism; Shakespeare’s audience would not have had an “Ah yes, real old-fashioned armour!” reaction to Henry V, so putting on the plays in anything but modern dress goes against their spirit and intention.

      And indeed, the notions of “genuine historical costume” that the 19th century used were based as much on romantic notions as actual scholarship and research, so while we may laugh at the idea of putting on “Hamlet” with actors in bag wigs, it’s no more ridiculous than putting on “Richard II” with Fascist imagery or The Ring Cycle in a post-apocalyptic wasteland imagining.

      • Salem says:

        It’s normally Richard III (or Julius Caesar) with fascist imagery. Richard II is a very different play.

    • NN says:

      Certainly a lot of the art is flagrantly anachronistic in terms of architecture and clothing.

      That is because church paintings and the like were intended to be viewed by common people, most of whom could not read and had little, if any, formal education. Depicting contemporary clothes made it so that anyone looking at the painting could immediately recognize that this guy is a tax collector, that guy is a soldier, etc. If the artist had depicted authentic historical clothes (assuming that it was even possible for them to find out what authentic historical clothes looked like), then only an educated person who had studied the fashion and customs of Ancient Palestine would be able to tell what was going on.

      I agree with Banananon that you seem to be giving modern societies too much credit. The practice is pretty similar to how Hollywood historical movies will usually depict everyone talking in modern English, even if the movie is set in Ancient Rome or Egypt. But just looking at religious art specifically, Jesus is still commonly depicted in the West as a white man, even though everyone knows that Jesus was a Middle Eastern Jew.

  15. Chirpchirp says:

    Maybe it goes without saying, but I think people’s ability to engage in this type of higher-level thinking would also vary by topic: how personal it is for them, whether they’ve given it a lot of thought already, etc.

  16. Ed says:

    I’d suggest as another milestone: being open to falsification of your beliefs. It takes a lot of energy and maturation to reach that level where when confronted with evidence against your beliefs, you modify your beliefs instead of doubling down on them.

    This is especially hard to do when said evidence is flawed, and instead of harping on the flaws of that evidence to avoid modifying your beliefs, you carefully consider whether those flaws are relevant to whether they falsify your beliefs.

    • Ryan Beren says:

      Quite right.

      And the next step: Being actively open to falsification (correction and improvement!) of our beliefs by noticing when we hold a belief that is non-falsifiable and deliberately changing it to a falsifiable version.

      And the next step: Deliberately attempting to falsify (correct and improve) our beliefs by searching out disconfirmatory evidence.

      And the next step: ???

    • onyomi says:

      “…and instead of harping on the flaws of that evidence to avoid modifying your beliefs, you carefully consider whether those flaws are relevant to whether they falsify your beliefs.”

      I think this is really important because it may be *the* number one strategy for resolving cognitive dissonance without changing a cherished belief: try to undermine the source or quibble about the details without considering the substance of the argument.

  17. hamnox says:

    I spend my days trolling politically-inclined FB friends using my ability to understand that other people are going to go down as many levels to defend their self-consistent values as my friends will to defend theirs.

  18. Wirehead Wannabe says:

    He proposes a bunch of potential counterarguments, then shoots each counterargument down by admitting that the other side would have a symmetrical counterargument of their own: for example, he believes that “American Sniper” is worse because it’s racist and promoting racism is genuinely dangerous to a free society, but then he admits a conservative could say that “Fun Home” is worse because in their opinion it’s homosexuality that’s genuinely dangerous to a free society. After three or four levels of this, he ends up concluding that he can’t come up with a meta-level fundamental difference, but he’s going to fight for his values anyway because they’re his.

    I’m not sure what I think of this conclusion, but my main response to his article is oh my gosh he gets the thing, where “the thing” is a hard-to-describe ability to understand that other people are going to go down as many levels to defend their self-consistent values as you will to defend yours. It seems silly when I’m saying it like this, and you should probably just read the article, but I’ve seen so many people who lack this basic mental operation that this immediately endeared him to me. I would argue Nathan Robinson has a piece of theory-of-mind that a lot of other people are missing.

    I actually think this might be the reason that supreme court justices get more liberal as they age. They definitely seem to Get The Thing more than anyone else, and it’s not hard to imagine that Getting the Thing and being around other people who Get The Thing leads toward a particular ideology.

    • Scott Alexander says:

      Okay, that sounds really interesting, but you’re going to have to spell it out in more depth, especially its relevance to the Supreme Court in particular.

      • platypus says:

        If I understand what Wirehead Wannabe is saying properly, they’re suggesting the following:

        1) Getting The Thing makes it easier to treat competing ideologies as competing ideologies. That is, when you understand that the people on all sides of an argument have access to the same rhetorical tools and the same tricks of reasoning and you have to apply any valid metric or concept or technique equally to all of those ideologies, that makes it a lot easier to treat an ideological difference as an ideological difference, as opposed to, say (and I’m picking this example to help with my own self-deprogramming), asserting that everyone who believes money spent on speech should be treated as speech from a rights perspective only believes that because they have money to spend on speech that they don’t want regulated (or value-identifies as if they did). That was a horrible beast of a sentence, so restating for clarity – Getting The Thing is an important tool for having meaningful thoughts about ideological issues.
        2) Some legal conflicts are also ideological conflicts, and the supreme court deals with a disproportionate number of these.
        3) As a supreme court justice, you are exposed to a lot of really good arguments, made by very smart people who are both passionate about their causes and highly incentivized to advocate for them, on ideological issues, that are in direct conflict with each other. Even more, you’re (at least theoretically) expected to make a sober, reasoned judgment between them, which can be justified on its legal and rational basis, not on the basis of your personal ideology. This is a brute-force approach to Getting the Thing, but it would certainly follow that you can only experience that for so many decades without Getting the Thing despite yourself, if you’re intelligent enough to be a supreme court justice in the first place.
        4) Getting the Thing (and thus, seeing ideological issues more clearly for what they are, and considering the arguments on their own merits, instead of weighting things in “your side’s” favor by applying logical arguments and mental effort more strongly for “your side” than for “the opposition”) is likely to have an effect on your final opinions on ideological conflicts.
        5) If Supreme Court justices really do become more liberal as they age, perhaps that’s the manifestation of the increasing degree to which they Get the Thing.

        That all seems relatively straightforward to me, though, so I’m probably missing something you’re looking for. Either that, or you just wanted Wirehead Wannabe to be more thorough so you could link to their comment somewhere, and I’m butting in!

        • MasteringTheClassics says:

          I’m following your train of logic right up until 5), but that final step doesn’t seem to click. Based on that reasoning I’d expect Supreme Court justices to move towards the middle of the political spectrum – I can’t see how it would encourage an already liberal justice to get even more liberal.

          • Linch says:

            Doesn’t your comment partly presume that some level of Platonic truth exists in the median of contemporary American politics? Ex ante it seems quite possible that SC justices will converge on a point of view that is not the same as the median American voter’s, though of course in their public statements/decisions they have to operate within the Overton window.

        • Hypothesis: Getting The Thing works only inside broad frameworks and paradigms. The supreme court judge has to choose between a liberal ideology and a kind of conservative ideology that is basically a decade-or-two-older version of the liberal ideology. The first is obviously more consistent and more in line with the Zeitgeist; after all, the kind of conservative he deals with, and that conservative’s arguments, are just more awkward and slower liberalism.

          My point is, the conservative has at some level healthier instincts, but still operates within the liberal framework; he is sitting in the same car, just in the back seat.

          To fully realize the healthy aspects of the conservative instincts, one needs a radical break with modernity as such. Something as weird and extreme as Faye’s Archeofuturism, for example.

          And that is simply not the supreme court judges job. He has to choose between proposals that are inside the general accepted framework of the current society that hired him. He chooses between preserving things as they are (conservative) or **preserving the spirit of how the current things were made (liberal)** but he cannot promote a reactionary counterrevolution any more than he could promote an anarcho-communist revolution. Current conservatism is the tradition of current things, current liberalism is the tradition of how current things were made. The judge works inside these two. Inside the system.

          The judge operates within the framework, and within the framework the liberals are right. The framework itself is wrong. And that is not the judge’s job to judge.

          Test: judges moving left is a fairly new phenomenon; it was not so 100 years ago. Reason: not everything that existed had been constructed by liberalism, so there was a way to preserve a real, tangible alternative to it. E.g. the issue was not gay marriage but no-fault divorce or something similar. Back then marriage looked more like a thing about children, not just a romantic partnership, so it was more coherent to resist its liberalization. The modern judge sees that straight people no longer take marriage seriously and figures that keeping gays out of these romantic-only Hollywood marriages is inconsistent, which it is. The old conservative judge had something tangible to protect, some elements of an actually functional civilization, natalism and all that. He could even reason “we don’t allow no-fault divorce because of the kids, and we want kids because we want soldiers, because history is a brutal competition.” So he had something more real to protect, not just a vaguely conservative ideology that is basically just older liberalism.

          • onyomi says:

            I think it is very true and important to realize how “mainstream” conservatism in the US really is a kind of sluggish version of liberalism.

            I think this is the source of a strange feeling I get when I read “respectable,” “mainstream,” “not-crazy” conservatives like David Brooks and Ross Douthat: in order to sound respectable they have to assume the liberal worldview as a given. In order to be conservative, they have to somehow dissent from that worldview, but since they’ve already accepted its premises, all they can do is basically say it goes too far, too fast.

            This is a fundamentally inconsistent, boring, annoying sort of opinion to read, because it is exciting to be told “we are moving in the right direction; here’s how to move faster,” or “we are moving in the wrong direction; here’s how to reverse course.” It is not exciting to hear, “we are moving in the right direction, but let’s do it more slowly and cautiously.”

        • FJ says:

          The problem with this theory is that, of course, SCOTUS justices aren’t picked up off the street: all of the current justices, and most of the previous ones, spent significant time either litigating before the Supreme Court or serving on inferior appellate courts (or both). Anthony Kennedy, for example, was a law professor, litigator, advisor to then-Governor Reagan, and judge on the Ninth Circuit before being elevated to SCOTUS. To the extent that SCOTUS is a favorable environment for Getting the Thing, why weren’t those earlier positions also conducive to Getting the Thing?

          (I’m also a little skeptical of the extent to which Getting the Thing really entails contemporary liberalism; Kennedy’s pre-appointment rhetoric that states have “the right to make a wrong decision”, for example, sure sounds like Getting the Thing to my possibly un-Getting ears).

          • FullMeta_Rationalist says:

            I feel like this will be explained by some obscure statistical phenomenon which I’m sure I’m totally ignorant of. Like Berkson’s Paradox or something.

    • Evan Þ says:

      Are you saying that justices will tend more liberal as they Get The Thing because they’re around other liberal people who Get The Thing, the commonality draws them together, and they adjust to their friends’ views? That’s possible, but remember that virtually all the justices were educated at Ivy League schools, which I’d guess are marked by the same two traits. I think that people who come out of those schools still conservative have a higher resistance than average to adopting their compatriots’ political views.

      Of course, there could still be some however-small effect which, extended over several decades, could be what we’re seeing here.

    • Sebastian H says:

      I suspect that a huge part of the “Supreme Court becomes more liberal” phenomenon is survivorship bias about what counts as “liberal”. A large part of what counts as “liberal” is the change from the past that we accept as good, and the Supreme Court plays an enormous role in mediating what changes the United States accepts as good. This is clearest when you look at concepts which were once thought to be liberal but that we don’t currently accept. Eugenics is a classic example: in the 1910s-1940s, eugenics had a strong grounding in what we would otherwise think of as ‘liberal’ thought. If you were against eugenics in 1935 and still against it in 1955, the share of your views which agreed with ‘liberal’ would have increased despite the fact that your views had not changed.

      • JBeshir says:

        This is interesting. It would predict that if you broke down their attitudes issue by issue, holding which issues counted as “liberal” constant, you wouldn’t see as clear or strong a signal in their shift on individual issues. Conversely, it’d be falsified if you did.

        • Sebastian H says:

          Further, it may change based on an increasing willingness to make law with less concern for tradition, becoming more ‘liberal’ in a change sense but not in a ‘changed my idea about the underlying question’ sense.

          Brennan is a classic example. He was clearly uncomfortable with the idea of the death penalty from the beginning, but didn’t vote to actually ban it until the end. Similarly he clearly didn’t like the idea of censoring obscene material very much, but felt compelled in the beginning to try to draw up rules about it anyway.

          If you look at it that way, we are seeing the Supreme Court Justices adjusting to their power. They grow into the idea that they are the last word on the subject if they can convince a particular 4 other people in the world of their point of view.

    • Milosz Tanski says:

      Are they more liberal as they age, or do they just not want to be on the wrong side of history with their viewpoint?

      • Saradin says:

        Maybe a phase of cognitive development is recognizing your own axioms.

        For example, I realize one of the axioms I lack, but many people have, is that history is a straight line terminating in a universalist system of ethics.

        • TrivialGravitas says:

          I don’t think that’s a stage of development so much as having become unhinged from the modern western culture.

        • ad says:

          “The statesman’s task is to hear God’s footsteps marching through history, and to try and catch on to His coattails as He marches past.”

          – Otto von Bismarck

          Or to put it another way – try to make sure you are on the side that is going to win. That is the right side of history.

          • Mary says:

            The problem is that the right side is usually determined by the people who don’t care what the right side is.

      • hlynkacg says:

        You’re assuming that the “wrong side of history” is going to be universal or predictable.

    • The original Mr. X says:

      I think there’s probably a simpler explanation.

      The left tends to have less respect for tradition than the right, and (at least in America) to support more power for the central government rather than localism. Since tradition and devolved powers both tend to limit the power of individual Supreme Court Judges, it’s not really surprising that they’d tend over time towards an ideology that views the past negatively and supports a greater role for the central government.

  19. SCPantera says:

    I think I slowly came around to point #2 as I developed an understanding of (what I would eventually discover is called) linguistic relativity. After even just casual (but regular) exposure to people who are themselves immersed in their own language-centric culture, I think it’s probably at least intuitively obvious to most people that the structure and form of a language influences patterns of thoughts in pretty different ways and could at least be presumed to be responsible for major differences in cultures.

    • Anonymous says:

      I think it’s probably at least intuitively obvious to most people that the structure and form of a language influences patterns of thoughts in pretty different ways and could at least be presumed to be responsible for major differences in cultures.

      But this is wrong, though. Not just a little bit wrong but hugely wrong, proven false. The structure of a language has no measurable effect on thinking at all; the hypothesis that it does (the “Sapir-Whorf hypothesis”) was very popular among progressives for much of the previous century, essentially for the quite literally Orwellian reason that it would allow the progressive vanguard to change and control the thoughts of the recalcitrant populace.

      Empirically, however, it’s occupied the same territory as phlogiston or racial biology since the Sixties. There are still some holdouts, of course, but no evidence.

      • Hm, casually Googling around, I only find the opposite conclusion in linguistics resources. But there are multiple forms of the Sapir-Whorf hypothesis, ranging from linguistic determinism to linguistic relativism.

        Anecdotally, I know that my awareness of certain concepts only began with study of languages that express them differently than English does. E.g. Spanish raised grammatical gender and verb conjugation to my awareness; ancient Greek added in noun declension and case systems and strange word order effects; Lojban made me see ambiguity everywhere; Ithkuil taught me to take note of the configuration, association, extension, essence, etc of nouns.

        Sometimes in discussion with a monolingual person I really wish I could make use of the expressive resources of a different language to get at a distinction that is overtly marked in one language but hidden or subtle in another. Compare “Oesyawuŝfa iuzvaft okhe” to its multiparagraph expansion into English. 😀

  20. Innocent Bystander says:

    > “American Sniper” is worse because it’s racist and promoting racism is genuinely dangerous to a free society

    I don’t know how people can watch this film and come to this conclusion.

    First, the film is very clear about the horrible cost of war.

    Second, I see (though perhaps I am imagining this) the film as subtly bringing out a parallel between the enemies: both patriotic, both paying a huge price, both thinking they are fighting for right and for their country. It is the antithesis of goodies versus baddies.

    • Montfort says:

      Interesting. I had interpreted it as quietly satirizing Chris Kyle’s black and white view of the war. I mean, Clint Eastwood may have a taste for the dramatic, but when the enemy sniper started doing flips and comic-book-style parkour I figured something was up. The butcher scene, too, seemed deliberately sensationalized in the same way (he was even wearing a black leather trenchcoat!).

      But no one else I talked to saw it that way, and we’ve come up with independent and somewhat orthogonal explanations for how the film is really anti-war, so maybe this is on shakier ground than I thought.

      • Innocent Bystander says:

        Clint Eastwood has also claimed it is anti-war. I think people often confuse the actions of the characters with the message of the film.

        The person I saw the film with said she thought it was anti-war and portrayed the Americans as invaders.

        Still, a good film makes you think and may not have a single interpretation and message.

        • Brandon Berg says:

          There’s another one: The ability to grasp the distinction between depiction and endorsement.

        • Simon says:

          Anti-war can mean different things to different people.

          American Sniper does show the consequences of war on the life of a man, and it’s almost purely negative. So it’s anti-war.

          On the other hand, it doesn’t say the war in question is itself bad, and it certainly doesn’t say that The Other Side of the war were victims. Therefore, it’s pro-war.

          People in more liberal spaces would probably go more for the second reading. Liberals’ favorite war film (/violent-uprising film), The Battle of Algiers, doesn’t make the point that the war is bad.

          It suggests that, while both sides commit horrific acts, it’s for a good cause. So you might even say it is pro-war.

          But proponents will say that it shows the side of people rebelling for freedom as humane and deserving of freedom, so it’s anti-war in the correct way.

        • Peffern says:

          I think the distinction between the characters in a work and the author of the work is lost on people (maybe one of the milestones?) I’ve seen people criticize, say, Cryptonomicon for being sexist because the main character is sexist, despite the fact that his opinions are presented as ridiculous in context.

    • vV_Vv says:

      > Second (perhaps I am imagining this), the film subtly brings out a parallel between the enemies: both patriotic, both paying a huge price, both thinking they are fighting for right and for their country. It is the antithesis of goodies versus baddies.

      That’s why the Manichaean SJWs don’t like it.

  21. patient_one says:

    On the point of something sounding “like a self-environment failure”…

    One of the major ways that Kegan talks about his constructive-developmental theory is in terms of the “subject-object distinction”: what things one is “subject to” (like, what things feel like part of the “self”), vs what things one can “hold as an object” (like, what things are part of the environment). His claim is that as one reaches more developmental milestones, one develops a more complicated construction of the self (constructive-developmental, see?), which means a more nuanced ability to reason about the self-environment separation.

  22. Brian Slesinsky says:

    This reminds me a bit of the conceptual issues around learning computer programming. The notion that the machine will do exactly what you told it, without doing any of the automatic correction of minor errors that a human listener would do, seems to take a while to sink in.

    But there are lots of insights that people learn in school. I’m not sure it makes sense to count all of them as developmental milestones?
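
    A minimal sketch of that literalism (an illustrative example, not from the original comment): a human who hears “remove all the 2s” fills in the intent, but the machine executes the loop exactly as written, and the index shifting caused by removal quietly skips an element.

    ```python
    # Intended meaning: "remove all the 2s from the list".
    # Literal meaning: "walk the list by position, removing any 2 you land on" --
    # each removal shifts the remaining elements left, so the walk skips one.
    nums = [1, 2, 2, 3]
    for n in nums:
        if n == 2:
            nums.remove(n)
    print(nums)  # [1, 2, 3] -- one 2 survives, exactly as instructed
    ```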

    • Possibly related to the widespread difficulty of “getting” what “literal” literally means.

      • Peter says:

        I have a theory about the abuse of “literally”; I think people use it as a metaphor revitaliser.

        So, “I was torn apart” uses a pretty dead metaphor; you don’t get mental images of people being savaged by lions or anything like that, your mind goes straight to the translation “I was subjected to mild-to-moderate distress”.

        My theory says “I was literally torn apart” means “forget that ‘I was torn apart’ is a thing at all, imagine you were encountering this metaphor for the first time, think about literal lion-like tearing apart and then use that image to represent how I was treated.” To which the implicit response is, “no, you don’t get to do that, if you want a fresh, vivid metaphor, get your own, don’t use cheap tricks like that, or I’ll laugh at your lack of claw marks”.

        Except I can’t seem to convince anyone of my theory so maybe it’s a bad one.

        • roystgnr says:

          It’s definitely a hyperbole revitalizer. People roll their eyes at exaggeration, but one good way to make them pay attention is to use a word that connotes “I’m not exaggerating”. The trouble is the usual euphemism treadmill: once you use such a word hyperbolically too often, it will inevitably devolve into just another bland intensifier (e.g. “very”, “really”, “seriously”, “truly”) and your spiritual successors will have to suck the life from a different word. (I’m guessing “actually” will be next on the chopping block?)

          • LCL says:

            Agree completely; had similar thought. The irony is that any word meaning “seriously, just like I said it, without any hyperbole” is going to be an especially striking word to use for hyperbole. Even if we invent a new one, it’s going to get co-opted for hyperbole too. The only recourse is to phrases too unwieldy to make good tools for overstatement.

    • Ghatanathoah says:

      I remember having very few problems with that aspect of computer programming. I attribute it to reading a ton of Amelia Bedelia when I was little and generalizing that to computers.

  23. Daniel Speyer says:

    I think there’s a milestone about treating an idea in a discursive context differently from an idea as a whole.

    It’s seeing a specific analogy regarding how one thing resembles another in a particular way and not being outraged at those two things being compared.

    It’s hearing an argument by contradiction and not concluding that the arguer supports the claim they start by assuming, or any of the claims they derive from it.

    In its simplest form, it’s recognizing the difference between “don’t do X” and “do X”.

    • J says:

      Yeah, people get uptight about hyperbolic examples used in a rhetorical setting. What do you mean you’re upset when I talk about the number of dead babies caused by Hitler?

      Sadly, I think the political machine actively tries to harm the level of political discourse. One of my pet peeves is the cry of “false equivalence” used to derail any argument that involves comparing two things.

      If you search Google Trends for “false equivalence”, you’ll see it takes off right after Jon Stewart’s famous rally speech where he encourages people to empathize with their neighbors and avoid divisiveness. I remember everyone talking about the speech online and starting to say “hey, yeah, both parties do a lot of crappy things”, and almost immediately seeing faux-rationalist accusations of “false equivalence”.

      • Steven says:

        The uptick was especially weird for me; the only place I’d previously seen extensive use of “false equivalence” was in 1980s Cold War rhetoric of the pattern:

        “The Soviet Union is an evil empire. For example . . .”
        “The US does lots of evil things, too. For example . . .”
        “False equivalence! The Soviets . . .”

        So, when I started seeing . . .

        “Both parties do a lot of crappy things. For example . . .”
        “False equivalence! The Orange Party does X only because it needs it to compete with the Purple Party . . .”

        . . . my initial emotional associations were such that I read it as people accusing the Purple Party of being as evil as Stalin.

        Which, well, sometimes they were, but.

    • Sarah says:

      Yes, this one is legit.
      I once saw a blogger describe as “Talmudic thinking” the ability to entertain an idea and see what it implies while *temporarily shelving* the question of whether it is true or good or whatever.

    • Simon says:

      But this *is* really hard!

      I find it pretty weird that arguments for A or B (should we go to a restaurant tonight or cook at home?) go in almost the same way as arguments for a little more A or a little more B (should we sleep in for 30 minutes or get up 30 minutes early tomorrow?).

      There will soon be arguments for why getting up early makes you more productive, or sleeping in later keeps you healthier, even though neither of the two parties would be for staying in bed till 12 or waking up at sunrise. When a third person in this poly-relationship conversation comes in and says she wants to wake up an *hour* earlier, the person previously defending waking up early might completely switch arguments.

      This happens a lot, especially on the internet, and I’m wondering whether it’s something you have to learn not to do, or just a necessity of discourse.

      • JBeshir says:

        People are really, really bad not just at keeping tradeoffs in tradeoff form once they’ve been stated that way, but also at noticing when their own intuition is of the form “I assess the best balance between these factors as being here”.

        That said, going through pertinent factors which the other person might not have properly considered, and which would account for your balance being away from theirs in that direction, is probably a decent strategy for trying to reach alignment. It’s just that the language used is confused.

    • Adam Casey says:

      A thing I notice in response to this: Lots of people use (or seem to use) this skill in bad and illegitimate ways.

      Take the example of an analogy. Sure it’s perfectly reasonable to say “feature x of A relates to feature x of B like so, we expect feature y of A will relate to feature y of B like such”. But it’s often used as “mumble mumble A mumble mumble B” where A is the focus of the argument and B is value-laden.

      Add to that the problem of treating “wow, you sound like an asshat” as an argument which should be evaluated as one, rather than a comment outside the argument about how to be kind.

      That said, a correct use of this skill in a consensual context is immensely valuable.

  24. yli says:

    Talk of developmental milestones is an easy way to smuggle in prescriptive claims while being superficially objective. In my incompletely developed, emotion-dominated state, I’m always annoyed by this. Just because people can move from one stage to another doesn’t mean that it’s good to do so. If it *is* good to do so, then argue for that, instead of appealing to how it’s a developmental milestone.

    If you want to feel this annoyance yourself, think of how Kierkegaard defined a hierarchy of existential stages that goes from “aesthetic” to “ethical” and finally to “religious”. You’re an atheist, but that’s just because your development’s been stunted. You’ve got some growing up to do.

    I think I’ll start defining having Alzheimer’s as the ultimate developmental stage. After all, people reach it at the end of their lives, and only a subset of all people ever attain it…

    • blacktrance says:

      Seconded. I have similar problems with the assumptions that Kohlberg sneaks into his theory of moral development.

    • Innocent Bystander says:

      It was a long time ago that someone pointed out that every philosophical/moral hierarchy places the philosopher who invented it at the pinnacle.

      From Kant to Ken Wilber.

      • But how is one to see the level above one’s own?

        • Irenist says:

          Yeah. I don’t know how fair it is to expect people to posit ethical systems where they themselves are wrong. I mean, I was a little annoyed when I read the linked Chapman article and the conclusion was that Stage 5 is basically just being a (Chapman-style) Buddhist. But then I thought, “Hey, Chapman’s a Buddhist. What else is he gonna think? Give the guy a break.”

          • roystgnr says:

            It’s almost a tautology that people won’t posit ethical systems where the ethical system is predictably wrong – if they knew how to improve the system then they’d posit the improved version instead.

            But it’s quite common that people will espouse ethical systems where their own actions are predictably wrong. Total utilitarians almost always prioritize their own comfort more highly than others’, even if they’re deliberately doing so much less than others; Christians believe themselves to be sinners in need of redemption, even if they’re trying harder than others to avoid sin.

          • Irenist says:

            I agree with that. My “they themselves are wrong” phrasing was unhelpfully ambiguous, though, so I’m glad you made the point explicit. Thanks.

        • Peter says:

          Possibly there’s a difference between being fluent in something and being able to say anything about it at all.

          You can imagine a person saying, “I’m dimly aware that there are terminal values, although I’m not sure what they are, and being able to give justifications of one’s actions (and thus being able to act in a justifiable manner) in terms of those values would be really neat, however when I thought about actually doing this in specific cases I got stuck, so for practical purposes I’m going to go back to my usual muddling through with vague analogies and widely-accepted principles, and accept that in some situations I’m going to get drawn into futile shouting matches.” (I’m not sure which Kegan (or Kohlberg) stages those levels correspond to, but they look different to me.)

          A thought: is being at a higher stage on your own like being the only person to own a fax machine?

        • FullMeta_Rationalist says:

          This is called Moore’s Paradox, and Wikipedia says no one has figured it out.

      • ChristianKl says:

        Ken Wilber didn’t invent spiral dynamics but copied it from Graves. I’m also not sure whether he places himself in “coral”, which is theoretically the highest known stage.

    • Salem says:

      I agree with this. The first time I read that article I was thoroughly put off by that, although re-reading it now I see there is a great deal of value in it.

      I note in particular that stage 5 (at least in that article) is the least described, and most nebulous of the stages. We are given very concrete illustrations of the difference between stage 3 and 4, and very little of the difference between 4 and 5. I would be very interested if Chapman has written any more about this.

  25. J says:

    That’s funny, just yesterday my friend was telling me about the concept of Adult Developmental Psychology. I think he was talking about Erik Erikson and Gail Sheehy’s work:

  26. J says:

    I always figured one mental skill I lack which disqualifies me for work in politics is that I’m used to knowing when I’ve succeeded at a task. In politics, you almost never get a clear victory, your vision never gets implemented the way you wanted, and nobody agrees on whether it was overall a good or bad thing, even if it did go just the way you wanted.

  27. I think you will have a large number of replies about #2. I struggle with dealing with other minds. My sense is very hot and cold. There are some things that are other-mindly but understandable/allowable. Someone liking spinach seems entirely reasonable to me, they just have different preferences. Other things are significantly harder to deal with as they are either alien or have been completely rejected by my brain. Someone who actually is using disgust as part of their moral framework is nearly incomprehensible to me.

    Anytime something falls into that territory I really have no clue what to do. I know something is up but have no idea how to deal with it. I get a profound sadness and even anger, and can be reduced to tears, whenever I encounter a situation where someone disagrees with me and any attempt to pierce or understand their reasoning fails or reaches some alien fundamental difference. Just about the only thing that will fill me with anger/rage is not understanding, at any level, why someone did something. I hate the crazy person at the board game table, the one who is behaving seemingly at random; it irks me deep down inside. If you betray me/hurt me when playing board games to get ahead, that’s completely cool, but if you betray me and I can’t think of any good reason why that makes sense, then I get really sad/angry, nearly to tears. I also get the same feeling, probably out of jealousy, whenever someone is deriving pleasure from something I don’t understand.

    I understand that respecting others’ preferences is a thing, but it is very difficult when it isn’t returned and everything defaults to society’s defaults. That generally screws people, like me, with unusual minds/preferences. E.g. I don’t really care about gifts/traditions and would much rather not deal with them, while most others prefer such things. As a result I have to participate/reciprocate and gain little benefit from it, but others participate, gain significantly, and expect me to do the same.

    • The_Dancing_Judge says:

      “I hate the crazy person at the board game table, the one who is behaving seemingly at random; it irks me deep down inside. If you betray me/hurt me when playing board games to get ahead, that’s completely cool, but if you betray me and I can’t think of any good reason why that makes sense, then I get really sad/angry, nearly to tears.”

      lolol you should (or perhaps for sanity reasons should not) play diplomacy. Everyone thinks they are the rational agent. Everyone thinks they know all possible moves on the board (or at least in their local area). Someone always ends up enraged when their ally turns on them, convinced that it is a self-defeating move so hopelessly illogical that no reasonable person could have made it. Happens every time, myself included. Oh, and then there are those who enjoy creating chaos through “unreasonable” moves and then trying to take advantage of the chaos.

      • I have played diplomacy a decent amount online and a few times in person. I’m not a huge fan of it, since the game is poorly balanced, and the number of people new to the game nearly always makes things even weirder. Often I’m expecting worst-case play from my opponents, and they just make clearly inferior moves.

        It would be interesting to play with a set of fully knowledgeable players though.

        • Chevalier Mal Fet says:

          I play regularly online with a group of friends; we all know fairly well what we’re doing, and the games usually have a good level of discussion and intrigue. You’re welcome to join in any time you wish!

    • Outis says:

      “E.g. I don’t really care about gifts/traditions and would much rather not deal with while most others prefer such things. As a result I have to participate/reciprocate and gain little benefit out of it but others participate and gain significantly and expect me to do so.”

      You gain enormous benefit from participating in society, which completely eclipses the small annoyance of not living in a society of people all exactly like you (assuming such a society would even be viable).

    • Qetchlijn says:

      > Someone who actually is using disgust as part of their moral framework is nearly incomprehensible to me.

      I found that not very long after I became a vegetarian for ethical reasons I started to become disgusted by the thought of eating meat. So the disgust response can have a supporting function in a moral framework.

  28. Acedia says:

    I’ve always been fascinated by the fact that as children we have to learn object permanence, the fact that things continue to exist when you can’t see them. Even the most fundamental rules of reality are not instinctive to us.

    Like, does that mean that you could place a newborn infant into a reality where the rules were radically different, where things really did stop existing when nobody observed them, and they’d adapt to it with no trouble and not even think it was strange?

    • Innocent Bystander says:

      > does that mean that you could place a newborn infant into a reality where the rules were radically different, where things really did stop existing when nobody observed them, and they’d adapt to it with no trouble and not even think it was strange?

      There must be some limits. But two anecdotes:

      1. I showed my three year old daughter a dead bird by the road as part of my attempt to explain what had happened to my deceased grandmother. She misunderstood – she later reported, without any sign of surprise, that I had told her my grandmother had died and turned into a bird.

      2. Children have to learn what is real and what is fictional. I took my daughter a few years later to a festival a tiny bit like burning man. She looked at me and said “I thought people like this were only in movies”.

      • Anonymous says:

        A humorous anecdote: My older sister taught me left from right when I was very young by standing in front of me facing me and pointing out “this is your left, and my right, and this is your right and my left.”

        I immediately presumed that right and left were inverted for girls.

      • Adam Casey says:

        > I took my daughter a few years later to a festival a tiny bit like burning man. She looked at me and said “I thought people like this were only in movies”.

        To be fair, I’m not quite convinced the kind of people who go to burning man are real yet either.

    • gold-in-green says:

      “where things really did stop existing when nobody observed them”

      Well, this is how dreams work, and we all get used to them, more or less.

  29. Anonymous says:

    I’m not sure how relevant this is. But, one fascinating and useful experience I had was finding sites on the internet populated by intelligent right wing people. Sounds ludicrous perhaps, but up until my late teens I was under the impression that there were no intelligent right wing people, at least no honest ones. I had been so totally immersed in blue tribe culture that I had completely taken for granted the view that the left is always correct, and anyone proposing a right wing idea must be either stupid or evil.

    The dangerous side of this is when it turns into the view that, because there are all these smart people who have been hidden from you, with good arguments for things you were told are obviously wrong, it must be that they are the good guys who are right about everything and it’s the left that is stupid and evil. This is something I was pulled in by for a little while but then moved on from. But I see lots of people who seem stuck there – so angry at how the left betrayed them that they have totally switched worldviews, and see anything left wing as evil, anything right wing as good.

    So perhaps that’s an example of something like moving through these different systems. I don’t know. Has anyone else here had similar experiences? I think this bears a resemblance to things Bryan Caplan has said, with Objectivism and Austrian economics as his right-wing counterculture of choice.

    • Acedia says:

      I had a similar experience when I discovered Chesterton and CS Lewis as an insufferably smug teenager who thought that all religious people were morons. I remained an atheist but I was thoroughly disabused of the notion that I was smarter than everyone who believed.

    • Actually, if ideas are low-prestige and untrue, they have little in the way of evolutionary advantage as memes or mental viruses. If you see low-prestige ideas surviving, and perhaps not even in stupid minds, you could guess that their fitness factor is their truth. Of course there could be other things, like contrarian posturing (but one can also do that as a wannabe communist, with more prestige).

      Imagine you are selling watches that are better than the watches your competitor sells, but his are simply cooler. They bring more prestige. So all the cool people wear them, and because cool, high-prestige people dominate the public discourse, people who wear your watches are seen as idiotic losers. Who will be your customers? Could be the misfits who hate elite prestige culture, but could also be people who discovered the truth that your watches are better. The predictable outcome is that despite “everybody” thinking your watches suck and only idiots wear them, you still have a small but reliable market share, and some of your customers are not stupid. Sound familiar?

      Also, as the West is a guilt culture, not a shame culture, prestige loss is often internalized, not externally enforced by shaming. I remember the mental gymnastics by which I tried to avoid that horrible, horrible label “conservative”: so low-prestige, so outcast, so uncool. I even tried calling myself a “Neocon” or “Neo-Neocon” just to be able to stick a “new” label on it, to signal that it is not some “horrible”, “outdated”, low-prestige thing but something hip and cool and new. It took a lot of time to accept that yes, I suck now, from a social angle.

      “Stupid” is in most usages a prestige-word; it is not about actual mental capacities. In a culture that values brains more than, say, physical courage, the high-prestige people who dominate the discourse will simply call the low-prestige stupid. In a culture that was martial, focused on physical courage, the high-prestige would call the low-prestige cowardly, wimpish traitors or something. In a religious culture, the high-prestige call themselves holy and call the low-prestige worldly. Same story.

      This is why, from the other angle, I am mostly open to low-prestige left-leaning ideas. Distributism, while culturally conservative, is economically curiously egalitarian and anticapitalist, and has about zero prestige. Maybe some truths keep it alive? Or, from the opposite angle, Objectivism. Everybody loves to hate them, so better check it out.

      • JBeshir says:

        I think that there’s another reason to hold these kind of ideas which is probably more important to most than either truth or prestige and that’s whether they’re convenient; whether they support things that you already support.

        For one example, I often see people in my social circles posting about how meat does all kinds of horrible things to you (far beyond a mild cancer-risk increase). These aren’t things they believe because they’re high-prestige in society in general, or because they’ve got a truth value making them more competitive, but because they support a worldview in which, as vegans, they are doing the best thing.

        Another is the Soviet Union apologetics I see from time to time, which assert things that are both untrue and low-prestige; again, it’s nice for them to believe that the failure of Soviet economics wasn’t really a big problem/was caused by outside forces rather than by communism, regardless of whether it’s true.

        More or less any belief which suggests that the believer is better, or should be treated as more important or a higher priority, or should be higher status, than another person, is also convenient in this fashion, as are negative beliefs about long-term enemies.

        I think it’s worth asking why unpopular beliefs upwards of ~30 years old survive, but as well as being true beliefs kept down by a lack of convenience and prestige, they could also be convenient beliefs for their holders kept down by a lack of prestige or truth. Especially if a belief would be convenient to you, I’d suggest being cautious in the evaluation.

        (There’s also an absolute ton of the natural cures, anti-vaccination stuff going around that doesn’t seem convenient, true, or high-prestige-in-wider-society, so I think there may be other reasons big enough to create movements which aren’t captured by any of those three broad categories. More reason to be cautious before assuming survival is due to truth.)

        • This is also a very good point. If one conservative belief is held by, e.g., a small-town religious meat-business owner who just feels great in his local patriarchy and economic setup and culture, and a similar conservative belief is held by, e.g., an urban person of partially or fully Jewish extraction who is generally part of the “cool” culture and has about nothing in his life that is justified or supported or helped by it, the latter belief is far more likely to be true. I think I have learned most of my conservative views from such “reluctant” conservatives (Irving Kristol, Moldbug, Scruton), as they had to go through a lot of struggle to get to this point, and that struggle itself is far more convincing than when some people just conveniently hold ideas that justify their lives.

          The same way, if I had to name two leftists whom I respect, it is Orwell and Terry Eagleton. For Orwell the struggle was really there: he could not fit into a movement whose goals he supported but whose people he found repulsive, for the same reasons I do. As for Eagleton, he is a Marxist; it would be so convenient for him to support any kind of atheism, so his vocal critique of Dawkins et al. sounds really honest. Similarly, his Marxist critique of postmodernism just cannot be convenient in an academic environment where postmodernism is big. Eagleton comes across as honest to me. I think in his case leftism-as-religion is not a legacy, it is 100% real; he really is trying to build that New Jerusalem or die trying.

          • E. Harding says:

            “it would be so convenient for him to support any kind of atheism so his vocal critique of Dawkins et al sounds really honest.”

            -I don’t think you understand just how strongly socially desirable anti-atheism is, and how strong the incentives for one-upsmanship within atheism are. I have a very dim view of Eagleton for his anti-Dawkinsism.

            Orwell, though, I get.

          • Deiseach says:

            What praise does Eagleton get for not being on board the Dawkins fan train? He’s still an atheist himself, and moreover a Marxist, so he won’t be applauded by religious/non-Marxists for his bold stance, and for the other side of the fence, your own reaction shows how it’s presumed that he is only doing this for the sake of social desirability.

            Why should his critique of Dawkins be motivated by “one-upmanship”? If someone in a movement or belief is mistaken or going astray in some way, isn’t it better and healthier if others within the group point out errors for self-correction?

            Either Eagleton is expected to keep his mouth shut for the sake of “the cause”, or Dawkins is genuinely regarded as without flaw and infallible, a kind of Pope of Atheism, and so any criticism of him must be from heretics and the envious.

            Declaration of animus: Yes, I very much enjoyed Eagleton’s review of “The God Delusion” because I dislike Dawkins (not so much for his atheism as for his smuggery). I can be sympathetic to Eagleton because I recognise the traces of Irish Catholic education, and indeed anyone who wants to get his kicking boots on to give Martin Amis a going-over will always appeal to me.

            So I know why I’m prejudiced 🙂

          • E. Harding says:

            “What praise does Eagleton get for not being on board the Dawkins fan train?”

            -Lots. Just look at the comment I’m responding to.

            “He’s still an atheist himself, and moreover a Marxist, so he won’t be applauded by religious/non-Marxists for his bold stance”

            -Yes, he will. There is a substantial contingent of liberal Christians who are perfectly fine with the personal atheism of their colleagues, just not fine with their public advocacy for it. Do you read Jerry Coyne’s blog? Read the archives. You see this all the time.

            “Why should his critique of Dawkins be motivated by “one-upmanship”?”

            -Er… why not? Dawkins is popular among atheists, though not among the social elite. “Debunking” a dissenter from orthodoxy serves quite well to keep the elite’s existing preferences in line. Criticizing Dawkins is like criticizing Trump. It’s something done by the social and media elite against everyday Americans with better grasps of the central issues at hand, as well as men holding unpopular opinions who happen to be richer and more famous than all but the most elite members of that elite.

            “If someone in a movement or belief is mistaken or going astray in some way, isn’t it better and healthier if others within the group point out errors for self-correction?”

            -Maybe; maybe not. The Mormon Church believes in ridiculous doctrine, but it has great unity, as our host has pointed out. It’s the reverse with atheists, herding whom, it has been said, is like herding cats.

            “Either Eagleton is expected to keep his mouth shut for the sake of “the cause”, or Dawkins is genuinely regarded as without flaw and infallible, a kind of Pope of Atheism, and so any criticism of him must be from heretics and the envious.”

            -Or Eagleton is wrong, and Dawkins is, while not infallible, the lightning and the sun compared to the undistinguished and non-popular accommodationist writers who populate the mainstream media.

            Come on. You can do better than this. Fallacy of the excluded middle.

            And, in any case, any random non-accommodationist atheist SJW is a much fiercer critic of Dawkins than all his theology student critics combined.

          • Anthony says:

            Why should his critique of Dawkins be motivated by “one-upmanship”? If someone in a movement or belief is mistaken or going astray in some way, isn’t it better and healthier if others within the group point out errors for self-correction?

            There is no contradiction between pointing out errors being healthy for the group’s self-correction and the pointer-outer being motivated by one-upmanship. Many successful social systems operate by harnessing “ignoble” drives.

      • Viliam says:

        If you see low-prestige ideas surviving, and perhaps not even in stupid minds, you could guess their fitness factor is their truth.

        Prestige is social-group-dependent. So when I see people with low-prestige ideas, my first guess is that in their subculture those ideas are high-prestige.

        • Vaniver says:

          I think this overestimates the degree to which rare beliefs are held and the degree to which people can control their social groups.

      • “Imagine you are selling watches that are better than the watches your competitor sells, but those are simply cooler. They bring more prestige. So all the cool people wear that, and because cool people, high prestige people dominate the public discourse, people who wear your watches are seen as idiotic losers. Who will be your customers? Could be the misfits who hate elite prestige culture, but could also be people who discovered the truth that your watches are better. The predictable outcome is that despite “everybody” thinking your watches suck and only idiots wear them, you still have a small but reliable market share and some of your customers are not stupid. Sounds familiar?”

        There’s another thing that feels the same from the inside, and that’s where your watch is as bad as or worse than everyone else’s, but your subculture thinks it’s better (“Pabst Blue Ribbon syndrome”). Thing is, there are always multiple status/prestige games going on.

    • LCL says:

      I think this is a relatively common experience for people who grow up in a bubble. In the bubble you hear a lot of stupid or at least unconsidered support of [original bubble side]. When you then grow up and start to make an effort to expose yourself to some of the ideas of [other side], the sources you are going to find will probably be the smartest and most eloquent articulations of those views. It’s easy to then conclude that [other side] is the smart, eloquent, correct people and [original bubble] is the stupid, unconsidered, incorrect people.

      I guess you have two alternatives to mitigate this tendency: expose yourself more directly to [other side’s] ranks of the stupid and unconsidered, or seek out the most eloquent, well-considered viewpoints supporting [original bubble] (note: not the same thing as the best publicized viewpoints – you may actually need to search). For the sake of sanity and feelings towards humanity, the second alternative is probably the best choice.

    • onyomi says:

      As an inveterate contrarian I can relate to this: the expert consensus on such issues as politics and economics and nutrition has, in the past, struck me as SO wrong as to indicate that there is nothing worthy of consideration there, which is clearly not the case. And once authorities prove unreliable in one area, it makes you question them in every area, leading to the sort of contrarian mindset which basically views all doctors as evil profiteering drug company shills, etc.

  30. Ruben says:

    Insightful post as always.
    Re mental operations that you might not have completely internalised, how about the ecological fallacy:
    I’m saying “might”, and in that comment I said I wasn’t sure you missed it, so I’m honestly unsure this is true; but you asked, and it’s truly an utter waste of time to talk about country correlations if you’re actually interested in individual differences. I used to think my opponents were arguing dishonestly when I saw it, but I’ve come around to believing that for a lot of people it’s really hard not to commit this fallacy.

    It’s certainly a common error of reasoning and I often see it even here in your comment section, so maybe it hasn’t been eliminated from this garden. It seems a worthwhile goal. Many people are of course able to see this fallacy when the conclusion disagrees with them, and I think I see it more often when there are deplorably low amounts of evidence on the appropriate level (often inter- or intra-individual). But it’s still grasping at straws.
    I don’t like the “stage” logic so much, because I think people committing fallacies when it suits them occurs more (and even in people who tried not to) than people committing fallacies the whole time. Motivated reasoning is powerful.

    On a different note, I think being a contrarian forces you to switch sides so often that you may notice your motivated reasoning more easily, and forces you to commit fallacies less (if you want to be able to point out a fallacy in an opposing position and then switch sides to argue the opposite, you’d better not commit it yourself). It’s consistent with the reported low-agreeableness, high-openness scores in LW surveys, though apparently you had a bad test then, got only percentiles, and would have liked a different norm; a better questionnaire for contrarianism is imaginable (but then you’d be left with no data on stability, relatedness to important life outcomes, etc.).

  31. Anonymous says:

    But if different cultures progress through developmental milestones at different rates or not at all, then these aren’t universal laws of child development but facts about what skills get learned slowly or quickly in different cultures

    Not necessarily true. Keeping one eye blindfolded throughout early childhood will make that eye functionally blind. A civilization that existed in pitch-black darkness would consist entirely of blind individuals; that does not mean that sight is a social construct.

    • Peter says:

      The alternative approach you could take is to judo it; say that sight is a social construction, and so whenever anyone talks about something being a social construction they’re saying very little.

      • Adam Casey says:

        >and so whenever anyone talks about something being a social construction they’re saying very little.

        Given the number of things that I’m told are social constructs this must be true.

        • Peter says:

          Ian Hacking has an amusing list in Social Construction of What? which covers everything from authorship to Zulu nationalism, but curiously, nothing beginning with J.

          That little book helped persuade me that the term “social construction” should be done away with and people should be more specific about what they actually mean. However, the real clincher was Vivien Burr’s “Social Constructionism” which, entirely contrary to the intentions of the author, killed off any remaining hope that “social construction” might be remotely salvageable.

  32. mbka says:

    I’m not sure why the idea of these stages of development should come as too much of a surprise. I would prefer to call them thinking modes. As Chapman (following Kegan, I suppose) suggests, people might function in several modes at once, depending on context. There is a related psychological concept called “context-dependent personality”; we all have it, and Asians tend to have it more than Westerners. Another thing that slightly bugs me is how neatly stages 3 and 4 seem to map onto perceived fault lines between “modernity” and “traditional society”. And “modernity” is conflated with “contemporary US culture”. That in turn makes the whole theory sound like a purpose-built vehicle to explain very specific Americana, in other words, behavior patterns that might not even be applicable to that extent in Spain or Germany, much less in Singapore. And yet, all of these are “modern” societies.

    Sticking with the classification for a moment, one thing that immediately jumped out at me is how “rationalists” really aren’t, in my humble opinion, pure stage 4. And beyond this, I’d argue that rationalism is a strong impediment to making it to stage 5. That assessment might include the LW community; I don’t know enough about it to say this with a lot of weight. Some time in the past I got a lot of mileage out of LW in terms of methods, e.g. Bayesianism etc., but overall I find the community very geekish-parochial and ultimately naive. That’s why I stopped reading the LW website a long time ago. In Kegan’s terms, I feel it’s a community competent to do stage 4, but only in select areas and enveloped in its own stage 3 community cocoon.

    Related to the above, the whole area is a brilliant example of how understanding has nothing to do with IQ or rationality. High-IQ or highly rational people, who are extremely capable (by definition) of navigating formal systems, are in my experience often incapable of separating map from territory, or of recognizing that different people have different systems, or that the functioning of their pet system may not be the most important thing in the context of the wider world. Meanwhile, ordinary people, whom I would imagine to be of ordinary IQ or rationality, can “get the thing” very well. This includes people from so-called primitive societies whom Kegan would probably put between stages 2 and 3. And again, I am scratching my head over why this should be news. The wise farmer vs. the bone-headed engineer who can’t see the forest for the trees: it’s already a cliché.

    So in conclusion, the whole approach is a promising opening, but in its current formulation seems a tad simplistic.

    Note: minor edits for clarity and typos.

  33. kyle says:

    Self reflection of previous mind-states.

    The ability to model others’ mind-designs is one thing, but understanding how your own mind has changed is perhaps yet another milestone. Such self reflection has been very insightful for me.

    • Harald Korneliussen says:

      I know some people who have radically changed their interpretation of their own mind in the past, but not in a good way. Like, if they hate a guy that they used to like in the past, they will say they never really liked him, or were always suspicious of him. It can be pretty scary, seeing people radically reinterpret their past selves to suit their current states of mind.

      • Hari Seldon says:

        The scariest thing is that we don’t know how much we do that to ourselves. It is easy to spot in other people; much less so when it is your own brain betraying you.

        Relevant short story:

      • no one special says:

        I have deliberately chosen not to reflect on my past for exactly this reason. My ex-wife turned into a horrible person by the end, and if I think back on any happy memory of the time we were together, I end up getting angry about what eventually happened. I’d like to be able to remember without rewriting, but I don’t seem to be able to.

      • Anonymous says:

        Like, if they hate a guy that they used to like in the past, they will say they never really liked him, or were always suspicious of him. It can be pretty scary, seeing people radically reinterpret their past selves to suit their current states of mind.

        Do you know that they do this, though? I don’t intend this as a rhetorical question, I just wonder how much of a basis you have for it because I’ve known several people — I know one or two now — whom I was suspicious of/plain disliked, from day one, but kept it to myself because it would cause problems in that social group and because I couldn’t demonstrate anything to the satisfaction of others. But then if it comes to a head I might admit I’d always loathed that person.

    • onyomi says:

      Related is the ability to understand that you will not *always* feel this way, be it good or bad. This is harder than it sounds.

  34. Scott, thank you for the mentions!

    I intend to translate Kegan’s specific stage theory into a form more accessible to rationalist folks. (Real Soon Now!)

    I agree with mbka’s assessment. Also, to me it looks like some of the LW diaspora leave when, and because, they move beyond stage 4. (I hope to write about that.)

    I intend to explain, in a format accessible to STEM/rationalist folks, why one might want to do that, and how, and how to avoid falling into the nihilistic chasm of stage 4.5 on your way to stage 5.

    • Peter says:

      I’ve been reading your posts with interest; one thing that’s notable is how you can deal with a lot of the ideas active in continental philosophy, post*ism etc. but actually manage to write clearly while doing so.

      Another thing: the list of stages ends at 5, and it seems the justification for this is not that this is the “true final stage” but that basically that’s where the trail goes cold – there isn’t enough evidence to make a clear description of stage 6. I’ve heard some people say that stage 5 resembles stage 3, and 4 resembles 2 – could 6 resemble 4?

      • Anonymous says:

        Looking at the stages, the common theme seems to be that at each transition, your perspective is generalized outwards somehow. So depending on how you view the stage 4 to 5 transition from systematic to meta-systematic, I have different intuitions about what a stage 5 to 6 transition might look like.

        One view would be that stage 5 is kind of a “fixed-point” – if you try to generalize from a meta-systematic view to a meta-meta-systematic view, has anything actually changed?

        If there were a stage 6 described in David Chapman’s post, it looks like it would start with the sentence: “Here fluidity is relativized.” This suggests a possible line of attack – another view might be that a stage 6 would be characterized by the development of _something_ that subordinates and organizes the fluid interactions between systems. This reminds me of something Alan Kay said about Smalltalk/Squeak [0]:

        > The Japanese have a small word — ma — for “that which is in between” — perhaps the nearest English equivalent is “interstitial”. The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet — to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas.

        I don’t know what the _something_ would necessarily look like, but it might mean a shift from appreciating the nebulosity of systems towards nailing down the patterned-ness of their interactions. This feels like the similarity with stage 4 that might be expected.


    • Skimming the linked article, I’m not impressed by lv4 being ‘systems’ and lv5 being ‘relationships between systems’ [it rather depends what you fill into the blanks created by your intuition pump]. I do like the idea/aspiration of collecting the skills Scott gleaned; there might be something farther from basic that’s also useful (or will one day become accepted as such in service of value-drifted goals I don’t yet recognize), but it doesn’t imply to me that going more abstract or meta is always interesting or rewarding.

    • name says:

      Are you implying you can refute Nihilism or am I just misunderstanding you?

      • Harald Korneliussen says:

        Nihilism doesn’t need a refutation, since by definition, if nihilism is correct then it makes no difference whether you believe in it or not. Kind of like how the model “this signal is pure noise” is still no better than any other model if the signal is actually pure noise.

        • same name says:

          I think you are mistaking “value” for “intrinsic value”.
          Nihilism only claims nothing has the latter.
          What you do with it doesn’t -really- matter, but that doesn’t mean that if someone truly accepts nihilism he should stop existing.

          In any case I understood his words to imply that he believes there are words he could say which would lead people away from nihilism (leaving aside the fact that he seems to imply that is a good thing), and I find that interesting, seeing as I haven’t heard anyone else come up with such words that actually deliver on the promise.

          • Harald K says:

            Everything you value, you either value for the sake of something else that you value, or it’s an intrinsic value to you. The things you value must eventually bottom out in some thing (or things) that you value for its own sake.

            If you deny the existence of or refuse to hold intrinsic values, then either you must do the same to all values, or you hold an inconsistent value system.

            Though of course, if nihilism is “correct”, it doesn’t matter if you have an inconsistent value system either.

          • same name says:

            But there is a difference between the values that I, as an imperfect machine, feel, and what, on the logical level, I can evaluate them to be.

            Nihilism does not imply that you, as a human, cannot hold “value” towards anything that makes you feel good, just the same as it would have no issue with a computer program trying to optimize some value, and just like it would have no issue with planets circling a star.
            These things just are; you shouldn’t feel compelled for them to be one way or another because of nihilism.

            This mostly has an equalizing effect, in that it basically implies that “no value system is more correct than any other”, at least not in any objective sense.
            Feeling like eating a cake is about as “correct”, or has as much “actual value”(as in value in an absolute sense, as opposed to value in some relative frame of reference), as a leaf falling off a tree because of the wind.

            In that sense I believe nihilism is mostly a framework, rather than a complete “solution”, in the sense that even if you agree with it you don’t feel any effects on your day-to-day.
            It’s mostly when you try to evaluate other philosophies that it comes into play.

  35. David Moss says:

    Some of these look less like people lacking the ability and more like them selectively failing to apply it.

    For example, where people (seem to) interpret their political opponent as believing the most fantastically uncharitable, evil thing they possibly could. That’s because we’re trying to construct and defend a particular argument and aren’t incentivised to think up reasons against it. People seem to generally have that ability. For example, in cases where their argument relied on understanding that someone else held a totally different, reasonable perspective, they would.

    • Adam Casey says:

      Obvious question. If you set things up right with the Amy and Brayden experiment (give her something good if she gets it right, give her lots of time to think about it, explain it in just the right way, have her explain her thoughts in detail), could you make Amy give the correct answer before she got to that development stage?

      • David Moss says:

        Not with 3-year-olds in the classic development studies, I’d bet. But with an adult who asserts that their political opponent must just hate women, or that there would be no disadvantages to just assuming that people accused of certain crimes are guilty unless they can prove their innocence, or who thinks that people with PTSD must just be faking it if they respond badly to loud noises… yeh, I think so.

        • Tom Ash says:

          There are studies showing that when money’s on the line partisans adjust their opinions about factual matters (the effects of the Iraq surge, etc.) much closer to reality.

  36. Saro says:

    I don’t know what developmental milestone this refers to exactly, but the study referred to by this article certainly seems to demonstrate one regarding understanding abstraction well enough to program:

    It being a developmental hurdle is certainly less scary than the thought that it is an innate ability, anyhow.

  37. Alex says:

    I very much agree with what you wrote.

    Related phenomenon:

    Most people know some simple sender-channel-receiver Shannon-Weaver type of model on an intellectual level … and some then abuse this model for a very strange kind of emotional argument along the lines of “sender and receiver share responsibility for successful communication so if you get me wrong please be aware that as the receiver you have a responsibility here and should try harder to understand me but if I get you wrong please be aware that the sender also has a responsibility and you should try harder to make yourself understood”.

    Which sounds just silly put this way because it always deflects responsibility but is brought forward in different phrasing without the slightest hint of irony.

  38. Anatoly says:

    My go-to example of this is the Killian documents controversy (yes, it’s political, but I don’t care about the politics of it). Specifically, the part where a blogger typed up the crucial memo again on a modern Microsoft Word with default settings, and it was a perfect pixel match, modulo fax artifacts, with the original memo (see the top right picture on the Wikipedia page). It was a mind-blowing revelation to see how many commentators, journalists, bloggers, etc., including purported “experts”, including some not obviously incredibly partisan ones, just didn’t get the weight of the evidence here. It wasn’t ignored, either; it was reported on briefly and then most people went back to talking about what Killian’s widow thought of his frame of mind, about proportional-width typewriters in 1972, about the internal consistency of the memo’s contents… Meanwhile the people who got it just looked aghast at all this continuing hullabaloo that went on for another week or two. It seemed incredible to me back then that people, including plenty of those who were computer literate and knew things about fonts and typography, just wouldn’t get it, wouldn’t immediately stand up and say “Well, the debate’s over, or at least, if we want to be really careful about it, it isn’t over but here’s this incredibly damning piece of evidence that’s orders of magnitude stronger than anything else claimed by either side, and it’s silly to talk about those other arguments before addressing this one”. But they wouldn’t. They didn’t have something, some thing, as you say, that’s difficult to define precisely, but so clear to everyone who did have it.

    I guess I’d class this thing under your “Ability to think probabilistically”, though I’ve long suspected that perhaps it’s more specialized than that. I don’t know if it’s useful to treat it as a developmental milestone, which seems to presuppose some sort of orderly cognitive progression in which everyone is supposed to attain it. If it’s mostly cultural and not innate, then perhaps “ability to think probabilistically” is more like “ability to control fire” or “ability to build boats”.
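    For what it’s worth, the “weight of evidence” intuition here can be put in toy Bayesian terms: independent pieces of evidence add in log-odds, so a single very strong piece can outweigh many weak ones combined. The likelihood ratios below are invented purely for illustration; this is a hedged sketch, not a claim about the actual strength of any argument in the Killian case.

```python
import math

# Toy sketch: combine independent evidence as log-odds ("bits").
# Every likelihood ratio here is invented for illustration only.
def bits(likelihood_ratio):
    """Bits of evidence contributed by one likelihood ratio."""
    return math.log2(likelihood_ratio)

# Several weak clues favoring "the memos are genuine" (LR > 1)
weak_clues = [2.0, 3.0, 1.5]

# One very strong clue favoring forgery: a pixel-perfect match
# with Microsoft Word's default output (LR << 1 for "genuine")
strong_clue = 1 / 10_000

total = sum(bits(lr) for lr in weak_clues) + bits(strong_clue)
print(f"net evidence for authenticity: {total:.1f} bits")
```

    With these made-up numbers the three weak clues contribute about +3.2 bits while the strong one contributes about −13.3, so the net verdict is dominated by the single strong clue; that is the sense in which “the debate’s over” unless that one piece of evidence is addressed directly.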

    • stargirl says:

      A similar situation occurred in Turkey.

      A forged document showed the plans for a military coup in Turkey. The documents dated as far back as 2003, but it was conclusively shown they were written in Word 2007. Not everyone immediately gave up: for example, a number of sentences were explicitly upheld in court even after the revelation that the documents were made with Word 2007.

      (eventually everyone was freed)

      • Douglas Knight says:

        The historical anachronisms in the text in that example seem much more convincing to me, to the point that I find it bewildering that anyone talks about the version of Word. Why do you?

    • dndnrsn says:

      If you want a more broad non-political example, look at ref/ump calls in sports.

      Most people will regard calls against their team or player or for the opponent as bogus, and calls for their team or player or against the opponent as just and correct, and it’s rare to hear someone say “yeah the ump was right to call my team’s player out” or “the ump was wrong to say my team’s player was safe, but I’m glad he blew the call because we’re more likely to win.”

      Not as serious an example as the Killian documents, but significantly less likely to mindkill people.

      • Peffern says:

        Maybe mindkill is one of the developmental milestones Scott is talking about.

      • Mary says:

        ALL calls? Have you ever been to a game where anyone objected to ALL the calls against his team, and cheered ALL the calls against the other?

        It has been my experience that most calls pass with no comment. Why on earth would you actually say, “That was a good call”?

      • Chevalier Mal Fet says:

        Actually not as common as I would have thought.

        My sample for this is /r/baseball, where I was following the recent playoffs (since I am from Kansas City, what a time to be alive!).

        The playoffs this year were riddled with shoddy ump calls, and while there were indeed vast numbers of partisans on either side asserting that so-and-so was obviously out/safe, that was clearly a strike/ball, c’mon ump are you blind, there was a small but not insignificant minority for each call acknowledging the correct state of affairs even though it went against their team.

        That said, it’s still blindingly common – a favorite pastime of mine was to dip into the subreddit of one of the contending teams, read the constant complaints about Joe Buck’s blindingly obvious bias in favor of [other team], why didn’t he ever mention [my team], this was disgusting, etc., then go into [other team]’s subreddit and read the constant complaints about Joe Buck’s blindingly obvious bias in favor of [my team], why didn’t he ever mention [other team], this was disgusting, etc.

        Er, so basically, I agree with you, it’s definitely a very common thing, but it is by no means universal. <_< Not sure where I was going with this.

    • Seth says:

      To be fair, it was a bit more complicated overall. I think there’s some interesting discussions one can have over how people weigh evidence, and that some mental models seem very wrong to outsiders but reasonable to insiders. When you’ve got someone who has a history of dirty tricks, it’s not unreasonable to wonder if this happened – but it’s almost impossible to prove in an absolute sense.

      Let’s take for the sake of discussion “These memos are forgeries”.

      Side 1: DEBATE IS OVER! Story is *false*. This piece of evidence is false, therefore ALL weaker evidence is suspect and should be regarded invalid character-assassination.

      Side 2: *That* is false, but *this* and *this* and *this*, all weaker, but collectively, are strong evidence that the story is true. Evidence is messy in the real world.

      Which inference is obviously correct?

      • Douglas Knight says:

        If something turns out to be a forgery, then the state of the debate should go back to what it was before it surfaced. But sensational forgeries generate attention and it is very common for the side that was supported by the forgery to say: well, now that we have your attention, listen to our other evidence. This is not going back to the prior state, but exploiting undeserved attention. (Not that most attention is rationally distributed. But distributing it by forgeries is even worse.)

    • Jaskologist says:

      Bonus: They just came out with a movie about how the documents were totes reals.

  39. I find the possibility of missing teenage developmental stages more interesting. They are obviously fuzzier, because once people are sexually mature and can reproduce, it looks like biology/evolution could just as well consider the job done.

    – Adolescent Narcissism – if not outgrown, could be a err hmm Tumblr syndrome

    – Late onset or incomplete puberty, primarily for boys, could be widespread, perhaps the root of “emasculation of the West”.

    – Fast Life History Strategy: an obvious external sign is girls having their first menses as early as 11. It is usually a function of a stressful upbringing, and tends to predict an, um, high-time-preference life: teenage pregnancy, drugs, etc., most common in households without fathers. It is part of the story of why PUA culture says “girls like bad boys” – for the FLHS at least it is true, i.e. since single-mom households became widespread; read the whole series:

    – Teenage Girl Syndrome – largely insecurity about in-group acceptance, hypothesized as an effect of pubertal estrogen. Maybe, if not outgrown, complaints about “marginalization”?

    For me, teenage boyhood was long, and it took me into my twenties to outgrow those attitudes. Some of my views and ideas were, in hindsight, narcissistic and solipsistic, sort of an “I am super smart, if something does not make sense to me it is WRONG” attitude. I rolled my eyes if my views were deemed immature – then a few years later, in Internet debates, I was That Guy who told others “you just have immature views, you need to outgrow them”. I would characterize it as hubris, as lack of awareness of our own fallibility. Could we say it is a really long Adolescent Narcissism / Adolescent Solipsism?

    As an adult now, I fully support arguments like “you just need to grow up and you will see”, while I understand the pain of intelligent teens who find such requirements, which are by definition unfulfillable for now, highly unfair. But it seems it is not just about learning and argument; there is a really deeper brain or psychological change required. A good test is Chesterton’s Fence: a person with an adult, mature brain should understand the instinctive logic in it – we don’t know everything, we are fallible, others are not idiots, so let’s assume others have actual reasons for doing things unknown to us. For an immature, teenage, narcissistic/solipsistic brain, we (I) are super smart, others are idiots, and if they cannot immediately give good arguments defending their fence, then off with it.

    This is perhaps related to the childhood Theory of Mind and thus autism – perhaps a fully developed Theory of Mind requires not only that we know people have one, it is also knowing it can be as good as ours, and thus they could have good reasons for doing things we don’t know.

    Or maybe just developing a sense of personal fallibility.

    School is also a problem. We don’t teach kids to think in a probabilistic, fallibility-aware, “Bayesian” way. Education is often like talking about hierarchies of categories. “Give me an example of a slide brass instrument!” “Trombone.” “What are brass instruments?” “A kind of wind instrument.” “Other kinds?” “Ugh… woodwind?” “Correct. You get an A.” And the problem is that this kind of teaching is always about things that are 100% certainly true, simply because humans decided how to categorize them. But the real world is not like that… so when faced with real-world problems, teenagers and adults with immature brains look for 100%-sure truths rather than probabilistic solutions.

    Now, education methods have improved a lot, that is clear; there is less of this and more sensible problem-solving. But at some level this is unavoidable, because this is how science works. The true spirit of it may be falsification by observation, but the already-known material comes in these gigantic trees of categories.

    • Anonymous says:

      >Fast Life History Strategy, obvious external signs is girls having the first menses as early as 11, usually a function of stressful upbringing and tends to predict an, um, high time preference life, teenage pregnancy, drugs etc.

      Did you mean low time preference?

  40. >Ability to model other people as having really different mind-designs from theirs; for example, the person who thinks that someone with depression is just “being lazy” or needs to “snap out of it”.

    This is a two-way street. While our judgements about others can be too “strict”, our judgements about ourselves may be too “forgiving”. The Fundamental Attribution Error has two legs. The most common leg is the “too strict” one: you think someone kicking a vending machine is a bad guy, while if you do it, you just had a bad day. So one fix is to assume the other guy, too, only had a bad day. But we could just as well use the other leg and tell ourselves: stop doing it already, because you can see what a shameful thing it is when others do it. From a prediction viewpoint, the common leg is more useful, since we can figure out that bad days predictably make people kick vending machines. But the other leg is just as useful, as it tells us that at some level we all suck and we should all do a bit of conscience-whipping on ourselves.

    If we want to taboo the moralistic aspects, and taboo both terms, strict and forgiving, we can simply say we need not only to model other minds as ours (empathy) but also to **model our minds as other minds**, evaluating them from an external viewpoint: conscience, self-criticism, the “outside view”, and so on. Because yes, sometimes you think you are depressed when you are just being lazy. It is easy to find excuses. See akrasia.

    So the point is to learn it both ways: see other minds internally and our own externally.

    Which is more important? Both are, but if you want to make people feel better, you have to focus on the “forgiving” culture of seeing other minds internally – Cognitive Behavior Therapy in the water supply. If you want to make people more productive, you have to focus on a “strict” culture of seeing ourselves externally, with some kind of self-whipping big Protestant conscience as an internal mental policeman. See also the Jesuits and the Examen – a very basic and useful akrasia tactic IS largely the spirit of the Examen. This is maybe anti-CBT. You will have more depression, but also better productivity and those kinds of more material results.

    • DavidS says:

      Agreed, except I think it’s by no means always the case that people are too kind on themselves and too harsh on other people. Plenty of folks will forget a slightly stupid/insensitive thing someone else said, but be convinced that if they say something like that themselves, they’re awful.

      • Also agreed, self-beating-up is a thing, hence CBT. But what is really curious is that it seems – and this may be entirely wrong, but still it seems – we live in an age that is both extremely strict-with-yourself and extremely forgiving-with-yourself.

        Our dietary habits, compared to previous generations, are horrible. For my grandma as a child, it was: you eat when it is mealtime even when not hungry, you don’t eat when it is not mealtime even when you are hungry, and you eat exactly as much as your parents put on your plate – you don’t leave anything on it and you don’t demand more. This instilled excellent, almost military discipline in her, because it basically trained her to ignore every desire with respect to food and follow conscious rules and rational decisions instead. Result: not fat.

        Meanwhile, raising a kid like that would horrify most people today. And then the end result is being fat – and then there is a lot of self-beating-up and self-hate about being fat. While my grandma’s contemporaries, if some of them managed to get fat by 50, didn’t care. You don’t have to live forever or be sexy forever. They could forgive that in themselves, because they could say: look, my accomplishment in life is three kids, not having a perfect figure forever.

        So it seems today we somehow manage to be strict exactly when we should be more forgiving, and forgiving exactly when we should be strict.

        This is just one example, but I see this all the time. Maybe we need to get strict with children again and forgiving with adults. Maybe at 8 always interpret it as laziness, and at 28 always interpret it as depression – IF the 28-year-old was sufficiently disciplined against laziness at 8.

        Maybe the issue is increasingly treating children as adults. The Victorian age treated younger adults somewhat like children – well, not fully, but there was the idea that up to 40 a man really grows, in wisdom and all that; he does not become old, he just grows and becomes really full. A 25-year-old man was supposed to behave with some kind of deference to the older man, taking his advice, and the older man was allowed to talk with a sense of condescension. So some elements of a child-father type relationship with older people remained well into young adulthood. And younger doctors tried to look older to seem more reliable, etc.

        Maybe we are paying with too much adult strictness for the mistake of not enough child strictness.

        • DavidS says:

          I’d really like to see independent evidence on that (in general, and specifically on the dietary point). I know plenty of people who seem to have issues with eating caused by the ‘eat everything on your plate’ thing, as well as those who were given lots of freedom and are fussy. And lots of people advise specifically ‘listen to your body, eat till you’re sated, not because it’s in front of you’ as a way to deal with weight.

          Basically, without more detail your argument reads to me as essentially ‘I observe X change in outcomes and Y change in activity, therefore Y must cause X’. There are plenty of other potential causes in terms of access to food, processed/calorie-dense food, level of activity etc.

        • Salem says:

          I think what’s really going on is a lack of differentiation. It’s not just that we treat children like adults; we also treat adults like children. A lot of people do not possess the cognitive structures and justifications to exercise appropriate discernment (in Chapman’s terms, they are stage 3 thinkers) and so react in an undifferentiated manner.

    • Simon says:

      This is also true because other people (who know us) are generally way better at predicting our behavior than we ourselves are.

  41. smn says:

    Re: The Thing & trade-offs, it’s been my experience that debate (parliamentary style, I’m not sure how the American style of debate works but I’ve been told it’s very different) is a very effective tool for developing both of those.

    (The Thing) ‘Winning’ in debate requires you to present arguments that the opposing side cannot effectively contradict, and the position you have on a given topic is decided for you, so the game is effectively ‘who can model an intelligent proponent of their position more effectively’.

    (Tradeoffs) Presenting arguments that can’t be effectively contradicted means you can’t stipulate your correctness on there being no harm to your position (or no good as a result of the opposite position).
    My debate coach would even specifically instruct the team not to present arguments in terms of absolutes (e.g. ‘gun control will definitely eliminate violence entirely’ is contradicted by ‘gun control will probably not reduce violence, and even if it does, it will probably not eliminate it entirely’, which is very easy to support, so it’s better to cede that ground to begin with and consider how your position could be attractive without resorting to those sorts of extremes).

    • Paul says:

      That sort of formal, structured debate is very dependent on effective neutral moderation, which rarely exists in the real world. Westminster parliaments, though nominally bound by rules and the Speaker, aren’t under those same constraints. They’re performing for a biased audience at home, not attempting to make ‘effective’ intellectual arguments.

      I’d agree that formal, moderated debate is good mental exercise for a set of skills that badly need cultivation. We’d all be much better off and have a far more effective political culture if that style of argument was the standard. Given the effectiveness of base rhetoric though, it’s difficult to imagine there’s actually much desire for a higher quality of discourse in most public settings.

  42. Who wouldn't want to be Anonymous says:

    Slightly OT, but since we’re on (developmental) psychology, it sort of fits.

    Re: Universal Human Experiences that you could be missing.

    Facial recognition is apparently sometimes broken. Rapid onset in adults is apparently very obvious and generally the result of brain trauma. Developmental impairment is much harder to notice. There’s a test you can take if you want.

  43. TeMPOraL says:

    To “basic mental operations” I would add *understanding of feedback loops*. A lot is written about economics and politics by people who don’t seem to get that one, and those discussions are mostly pointless, in a way similar to how not thinking in terms of trade-offs makes a lot of discussions pointless. I myself understood it only after a control theory course at university, though now I recognize that high-school-level chemistry teaches it too (chemical equilibrium).

    After you grok it you start to see how incentives can feed one another and e.g. that it’s perfectly reasonable (and common) for a bad system to exist in which there is no single actor to assign blame to (compare: is television stupid because people are dumb, or are people dumb because TV is stupid?).

    • Salem says:

      This seems very interesting.

      Can you give some examples? By feedback loops, I assume you mean positive feedback rather than dynamic equilibrium? I’d be particularly interested in political examples. Maybe this is my own bias, but I don’t see a lot of positive feedback loops in the real world*, and I would love to know what I’m missing.

      * Obviously the very existence of dynamic equilibrium implies positive feedback in some range, but if feedback loops operate quickly, then you wouldn’t expect to see a lot of them in action. I’m talking instead about long-term positive feedback processes, such as the hypothetical “better technology enables better science which enables better technology” line that some people push.

      • Murphy says:

        Most things eventually reach some kind of equilibrium, but in many cases you can get positive feedback loops that push you towards some new equilibrium.

        A simple one might be: the government creates some new expensive regulatory requirement which happens to be a flat cost per entity, one which doesn’t increase with the scale of operation, such as needing to pay for a license or initial inspection. Every business has to pay, say, 50K worth of costs to comply, whether they produce 100 units or 100,000 units.

        This creates barriers to entry, which is good for existing players, and for the large entities it’s especially good because it’s a flat cost. This drives lots of the small companies out of business and cuts down on new competitors.

        Someone suggests getting rid of the license/cost/requirement, but now all the existing players have already paid it and want at least to maintain it to keep out competitors, while the largest organizations, who produce the most units, want it increased, because it’s a trivial cost to them which hurts their smaller competitors far more.

        So you get a feedback loop until you reach a new equilibrium where the larger companies have little more to gain.

        You get similar effects with professional organizations adding barriers to entry to their own field. Since it costs existing members nothing, it’s in their interest to push to increase barriers, while newly admitted members are even more incentivized to maintain or increase costs for those coming after them. The costs themselves become part of the incentive to increase the costs. So you get a feedback loop until some other part of the system starts pushing back because the costs or inconvenience are too high.
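        The exit-then-lobby loop described above can be put in code as a toy simulation. This is a minimal sketch with entirely invented numbers – `shakeout`, the profit figures, and the lobbying step are all hypothetical, not a model of any real market:

```python
# Toy sketch of the flat-cost feedback loop: firms that can't cover a flat
# compliance cost exit, and the survivors then lobby the cost upward, until
# no remaining competitor is threatened (the new equilibrium).
# All numbers are invented for illustration.

def shakeout(firm_profits, cost, cost_step=10):
    """Run the exit-then-lobby loop; return (surviving firms, final cost)."""
    firms = sorted(firm_profits)
    while True:
        firms = [p for p in firms if p > cost]  # firms below the cost exit
        # If no survivor is within one lobbying step of the cost, raising it
        # further eliminates nobody, so the incumbents stop pushing.
        if not any(p <= cost + cost_step for p in firms):
            return firms, cost
        cost += cost_step  # survivors lobby the flat cost upward

survivors, final_cost = shakeout([20, 40, 60, 200, 500], cost=50)
# Only the firms comfortably above the compliance cost remain once it settles.
```

        The loop terminates exactly when the incumbents gain nothing from a further increase – the “new equilibrium” in the comment above.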

        • Salem says:

          Sure. But aren’t these classic examples of “quick” feedback loops that we would expect to rarely see, but see the results of often (in the same way that we rarely see landslides, but see rocks at the bottom of a slope often)? I’m talking instead about “slow” positive feedback loops where we might expect to see the ongoing process.

          Professional organisations have thoroughly cartelized their areas (centuries ago, in some cases) and are long since in that dynamic equilibrium phase where other parts are pushing back equally hard. What is the professional area that is being cartelized, right now, and we can expect to still be being cartelized in 50 years time?

          • Murphy says:

            For the production one, I had in mind a specific recent example of a city bringing in licensing for craft beer makers a couple of years ago.

            In the UK nursing is currently in the middle of a shift towards more barriers to entry which has been ongoing for the best part of 20 years and is only now running up against heavy pushback due to trouble recruiting enough nurses.

            The thought depresses me but I expect programming to be bonded and licensed within 50 years as the fears of internet crime and international conflicts involving software etc get exploited as excuses to put up barriers to entry while signed code provides a mechanism for enforcement.

          • Steven says:

            Paid tax preparation in the US is another example.
            Currently, you don’t need a license if you want to charge people to help them file their taxes.
            The IRS tried to regulate the industry (cheered on by H&R Block and other large tax prep companies), but was blocked by the courts on the grounds that it lacked statutory authority.
            There are some bills floating around Congress to give it that authority, and I suspect that eventually the big tax prep lobby will get its way.

      • Nicholas says:

        The answer I was given in a systems theory survey was that runaway positive feedback is unstable: if the cycle doesn’t wind down, the system is pushed past a metabolic or material limit and ceases to function altogether.

      • ThrustVectoring says:

        In a circuits lab, I built a circuit with a transistor in it. When the transistor drew power, it got hotter (since that’s how power going through things with resistance works). When it got hotter, its resistance went down, which made it draw more power. This can be fine, since hotter things lose heat more quickly. But I was using a different transistor than the lab called for, which had different parameters, and the overall thermal feedback was positive.

        tl;dr – this one time, an electrical component heated up more the hotter it got. I burned my thumb.
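        The thermal-runaway story above can be sketched numerically as a toy model with made-up coefficients (`gain`, `loss_coeff`, and the ambient temperature are invented for illustration, not measured transistor parameters). The sign of the net feedback decides whether a small disturbance decays back to ambient or runs away:

```python
# Toy model of self-heating: generated heat grows with temperature (resistance
# drops as the part warms), and lost heat also grows with temperature. Whether
# the net feedback is negative or positive decides stability vs. runaway.

def simulate(gain, loss_coeff, ambient=25.0, steps=200, dt=0.1):
    """Return the temperature trajectory of a toy self-heating component."""
    temp = ambient + 1.0  # start with a small disturbance above ambient
    history = [temp]
    for _ in range(steps):
        generated = gain * (temp - ambient)   # heating term (positive feedback)
        lost = loss_coeff * (temp - ambient)  # cooling term (negative feedback)
        temp += dt * (generated - lost)
        history.append(temp)
    return history

# Cooling dominates: the disturbance decays back toward ambient.
stable = simulate(gain=0.5, loss_coeff=1.0)
# Heating dominates: the disturbance grows without bound (thermal runaway).
runaway = simulate(gain=2.0, loss_coeff=1.0)
```

        The two runs differ only in which term wins; the runaway case is the burned-thumb scenario.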

    • Peter says:

      Chemical equilibrium doesn’t give a complete understanding of feedback loops. There are chemical “oscillators”[1] and other out-of-equilibrium systems that go beyond what equilibrium can do. Also, there are cases of positive feedback – branched chain reactions, autocatalysis, exothermic reactions like combustion or Grignard reactions[2] – which aren’t to do with equilibrium.

      [1] Often systems in which the concentrations of some components – usually the colourful ones – oscillate (there are photochemical systems too), whereas others may steadily accumulate or be depleted. There was a lot of talk of chemical oscillators being thermodynamically impossible until people figured this out.
      [2] You mix your solvent, magnesium and brominated compound, and carefully warm the mixture up; you’re not supposed to use a heat gun but people do it anyway. Once you’ve got the reaction going it will happily keep going without a need to keep heating it; the heat produced by the reaction seems to balance out the heat lost to the environment (often to the reflux condenser, so evaporation is a key part of this). So you have a room temperature steady state, and a higher-temperature medium-term steady state; a bistable system.

  44. Murphy says:

    I remember having a discussion with a friend of mine a few years ago – gender studies students (no, not that type of gender studies student) talking about pro-lifers.

    It sort of kicked me up one meta level (recognizing that others aren’t even thinking in meta levels), because it made me realize that she genuinely believed that pro-lifers’ terminal value was hatred of women.

    She’d never considered that they might have a coherent moral philosophy that simply weighted different values differently. She genuinely believed that they were taking their positions in an effort to fuck things up for women, and that their statements about valuing the life of fetuses were just some kind of lie/cover for the goal of oppressing women.

    Until then I’d assumed it was obvious to other adults that, in most cases where they have an opposition and a significant number of people support that opposition, there’s a coherent moral position behind it.

    I’d assumed that the majority of people saw through obvious political crap like “they hate us for our freedom” as empty words, but… no. Apparently that’s really the kind of explanation that lots of people genuinely accept of their opposition’s actions.

    • E. Harding says:

      I don’t think “they hate us for our freedom” is as vacuous as you think it is. Look, for example, at the Charlie Hebdo massacre or at how many Muslims in Egypt and Pakistan support the death penalty for leaving Islam. Hating freedom is not a terminal value here, but is an instrumental one.

      • Viliam says:

        Exactly, it’s instrumental. They hate “freedom of leaving Islam”. Other people hate their “freedom to kill infidels”. Hating a “freedom to X” simply means “hating X”.

      • NN says:

        It also depends on who is saying “they hate us for our freedom.” As far as I can tell, it would be entirely accurate for an Iraqi Kurd to say that about ISIS. Most Kurds are Sunni, but ISIS still considers them apostates because of their secular, liberal, and democratic government.

    • gold-in-green says:

      Well, lies and cover-ups are actually really common in politics. People have lots of things they want to do for reasons that would be too politically unpalatable to say aloud, so they use cover stories.

      On abortion, I think that valuing fetuses differently is a real thing. But because it’s treated as an incontestable moral intuition, it also serves as a convenient cover for – maybe let’s not say hate, but – a cultural conservatism that wants to keep women in their traditional roles and is therefore uncomfortable with women’s sexual liberation and the reproductive autonomy it entails. Part of the reason I think this is that denying access to abortion seems to go hand in hand with denying access to contraceptives, which have nothing to do with fetuses but everything to do with reproductive autonomy.

      • E. Harding says:

        It’s ’cause abortion and contraception lead to more out-of-wedlock births!:

        I don’t know if they lead to more unplanned pregnancies, though.

      • Gbdub says:

        It’s not a “convenient cover” for anything – it’s an honest belief that out-of-wedlock sex is morally wrong, certainly because it may result in unsupported or aborted kids but for other reasons as well. Why does this have to stem from a dislike of women at all? Certainly, the burdens of unprotected sex / no abortion fall most directly on women, but keep in mind that the framework that rejects both abortion and contraception, at least in theory, considers the male and female adulterer equally sinful. (“Thou shalt not commit adultery” doesn’t end with “unless you’ve got a willy”)

        Also, I confess to being consistently annoyed by the construction “deny access to contraception”, because in practice it usually means “refuse to pay for someone else’s contraceptives”. Which has a whole different flavor to it. Certainly there are some things you find objectionable that you’d probably balk at paying for on others’ behalf, and you don’t need nefarious reasons for that – why not extend the same charity to Hobby Lobby?

      • Jaskologist says:

        Part of the reason I think this is that denying access to abortion seems to go hand in hand with deny access to contraceptives, which have nothing to do with fetuses but everything to do with reproductive autonomy.

        Bollocks. While the official Roman Catholic position is to oppose both, that is far from the modal pro-life position in the US, unless you stretch “deny access to contraceptives” past all reason.

      • onyomi says:

        Lots of people (like myself) are fine with contraception but think at-will, no-questions-asked late-term abortions are morally wrong.

        But even assuming pro-lifers are anti-contraception, in addition to being anti-abortion, it still doesn’t follow that they are just doing it to control women, as many religious people view having sex while preventing pregnancy as a kind of frustration of God’s plan: pregnancy is the purpose of sex, in their view, so to frustrate that is wrong, and it may even be viewed as a kind of “soft” abortion (the pill may sometimes cause a fertilized egg to fail to implant, I believe). This is not my view, but it is a view some religious people I’ve met genuinely hold.

        Lastly, for the sake of argument, let’s examine the baseline assumption: female (and, indeed male) reproductive autonomy is an unalloyed, axiomatic good. I, for one, can certainly see some problems with that.

        Consider, for example, the case of an accidental pregnancy the father wishes to see to term but which the mother wishes to terminate. Why should the mother get full control of whether the baby is born or not? “It’s her body,” you might say, but it is also his baby, just as much as it is hers. And the baby also belongs to him/herself, or will, at least, once it is old enough to develop that kind of agency. Why does the mother’s desire overrule the desires of the father and the probable desire of the unborn child?

        There are also ways in which even unwanted pregnancies have bound people together socially in the past. This may often have been bad: unhappy “shotgun marriages,” for example, but I don’t take it as a given that our current view of marriage as all about the happiness of the individuals in the couple and nothing to do with family and society as a given. There are social, emotional, and economic advantages to big families, as China is finding out.

        That is not to say that I’m in favor of limiting access to contraception or view use of contraception as immoral–I am not and do not. But I think we also shouldn’t take it as a slamdunk argument that if we can, in fact, prove that pro-lifers are “really” just out to control women’s reproductive freedoms, that they are, therefore, evil and wrong.

        • Anonymous says:

          Lots of people (like myself) are fine with contraception but think at-will, no-questions-asked late-term abortions are morally wrong.

          Me too, yet I would oppose regulations against late-term abortions (why??).

          But assuming pro-lifers are anti-contraception . . . it still doesn’t follow to say they are just doing it to control women, as many religious people view it as a kind of frustration of God’s plan to have sex while preventing pregnancy

          Controlling women may not be an end-goal, but it is still a goal. There’s nothing in the sex=pregnancy equation that doesn’t boil down to controlling women somehow.

          Why does the mother’s desire overrule the desires of the father and the probable desire of the unborn child?

          A pro-choicer would argue that it is because, as you had said yourself, it is her body (and if something were to go wrong, she would die).

          But I think we also shouldn’t take it as a slamdunk argument that if we can, in fact, prove that pro-lifers are “really” just out to control women’s reproductive freedoms, that they are, therefore, evil and wrong.

          This might prove too much? Like, I’m not permitted to call the things that I find evil and wrong, evil and wrong? What would pro-lifers say about not being allowed to call abortion evil and wrong?

          • onyomi says:

            “A pro-choicer would argue that it is because, as you had said yourself, it is her body (and if something were to go wrong, she would die).”

            The “death of the mother” thing strikes me as something of a red herring the way it is usually deployed. Even among strong pro-lifers, I think very few are against abortion in cases of danger to the mother’s life. The problem is that a large proportion of abortions are not chosen due to genuine fear of danger to the mother’s health, but because of genuine fear of negative economic and social impact.

            “This might prove too much? Like, I’m not permitted to call the things that I find evil and wrong, evil and wrong? What would pro-lifers say about not being allowed to call abortion evil and wrong?”

            I’m just pointing out that I’ve never seen an argument for why it is evil and wrong to limit women’s reproductive choices. The argument always proceeds from “it IS evil and wrong to limit women’s reproductive choices” and then proceeds to try to show that that is the real motivation, as if proving that were self evidently enough.

            And as someone else pointed out somewhere in here, the people who are all about protecting women’s choices to do whatever they want with their bodies are mysteriously not all about freedom of choice in other areas (particularly economic).

          • Anonymous says:


            +1 for feeling mild irritation at people who invoke “two consenting adults” without being willing to follow the implications of that argument a single pace beyond the particular situation they want to apply it to.

          • Anonymous says:

            The problem is that a large proportion of abortions are not chosen due to genuine fear of danger to the mother’s health, but because of genuine fear of negative economic and social impact.

            True, yet it’s still the woman’s body at stake.

            I’m just pointing out that I’ve never seen an argument for why it is evil and wrong to limit women’s reproductive choices.

            Well, I could point to things like this or this, or I could turn the question around and ask why you think abortion is evil and wrong, or we could even discuss the nature of evil, but . . . it just seems to be getting us out into the weeds. People are going to have different terminal values (or whatever the kids are calling it these days). I’m not sure that “Well, you just *think* it’s wrong” is a great way to engage.

            the people who are all about protecting women’s choices to do whatever they want with their bodies are mysteriously not all about freedom of choice in other areas (particularly economic).

            I don’t see this as such a damning argument. As someone else pointed out, it’s just as silly to expect pro-life people to oppose wars or the death penalty (or death itself). Personally, I think choices about what to do with your body aren’t subject to the same considerations as, e.g., economic choices, and vice-versa.

          • @onyomi: according to Wikipedia, it is standard Catholic doctrine (since 1889) that the baby must not be killed to save the mother. Of course that doesn’t necessarily mean that most Catholics agree with it, but I suspect that “very few” is probably an exaggeration. (There are presumably more than a few Catholics who accept all Church doctrine as infallible.)

            And as someone else pointed out somewhere in here, the people who are all about protecting women’s choices to do whatever they want with their bodies are mysteriously not all about freedom of choice in other areas (particularly economic).

            Apples and oranges, IMO. Believing that people should be entitled to control over their body – disapproving of slavery, for example – doesn’t mandate a belief that people have the right to do whatever they please in less extreme cases.

          • onyomi says:

            @Harry Johnston

            “…doesn’t mandate a belief that people have the right to do whatever they please in less extreme cases.”

            Less extreme cases… like when the mother’s life is not in danger?

          • onyomi says:


            I’m not surprised you can find a case of a woman disagreeing with her doctor about the threat to her life, and such cases would, indeed, be a tradeoff for a more stringent policy about doctor approval for abortion. But if we are looking at it from the pro-life framework, in which the fetus’s life is as important as the mother’s, one might say that allowing millions of fetuses to be killed on the off chance a very small number of women might die due to medical error is not a very good moral calculus.

            For what it’s worth, I *don’t* think a fetus has the moral importance of a full grown woman, especially not in the early stages of its development, and certainly wouldn’t condemn anyone for having an abortion if they genuinely feared for their life (as a Libertarian, I’m not super comfortable banning any abortions unilaterally, but I do pass a moral judgment, and my moral judgment is that abortions of convenience, especially if not done very early, are morally wrong–not as wrong as murdering a five year old child–but still wrong).

            Re. the El Salvador case, I can agree that women probably need more reproductive freedom in El Salvador, but I don’t think it’s as obvious in much of the developed world, where falling birth rates are a serious concern (the birth rate in the US is actually still downright lagomorphic compared to China, Taiwan, Japan, Singapore…they are actually bribing couples in Singapore to have babies with offers of free sexy vacations).

            And again, though I think abortions of convenience, especially late abortions of convenience are immoral, I’m not particularly calling for any new laws or anything and am certainly not against contraception: I just think we shouldn’t take it as a given that there are only upsides to giving women greater control over reproduction, especially when, by physical necessity, they already have more control of it than men (though they, of course, bear a much heavier physical burden as well).

            Many of the nasty traditional cultural mores surrounding women’s behavior are, in fact, imo, an attempt to contain and control women’s *greater* natural control over sex and reproduction. We should be glad to be rid of many of these mores, but we can’t expect it to be all upside, especially when it happens at roughly the same time as (and partially because of) a revolution in contraceptive technology.

          • No, less extreme cases like having to pay taxes. It may be a metaphorical pain, but it isn’t a literal one. (And of course the mother’s life is always in some danger. Childbirth is an inherently hazardous proposition.)

          • Gerry Quinn says:

            Harry Johnston – it is also standard Catholic doctrine that if the foetus dies as an indirect side effect of action needed to save the mother’s life, that is permissible.

          • @Gerry, so the Wikipedia article said, if I remember rightly. Didn’t seem directly relevant though.

    • The fact that a lot of anti abortion people are also pro capital punishment casts doubt on “pro life” as an accurate summary of their position.

      • Salem says:

        Can “pro-choice” be an accurate summary of the pro-abortion lobby, when most of them oppose all kinds of other choices I might make? We can play this silly game all day. What does it say about vegetarians that they are generally pro-choice? And so on.

        This all seems to be based off what I call the “Montaigne Fallacy” – i.e. we see someone doing something, and think to ourselves “What beliefs, reasons, aims, etc would I have if I were doing that?” Then, we quickly conclude that the other chap is a fool or a madman or evil.

        Of course “pro-life” doesn’t mean “all lives, everywhere, should be extended indefinitely regardless of cost.” It doesn’t even mean “all killing is wrong.” It just means “it’s wrong to murder babies, even if they haven’t been born yet.” Similarly, “pro-choice” doesn’t mean “all choices, in all circumstances, are beyond moral reproach.” It doesn’t even mean “women who choose to have an abortion always do so wisely.” It just means “it’s wrong to force women to bring pregnancies to term against their will.” In both cases, the terms emphasise what the partisans view as the most salient moral factor of the abortion debate, and so are good labels.

      • Wrong Species says:

        Is it that difficult to understand that some people might actually consider the life of an unborn baby to be worth saving? Do you think that belief is less likely than a giant conspiracy to oppress women that all pro-life people are in on?

        • Anonymous says:

          It’s not difficult. The argument is that the common folk are indeed motivated by saving the unborn, but that when you look at pro-life organizations, they do not seem to share the same motivation (because, e.g., their goals would result in a greater number of abortions). This has all been stated very explicitly by pro-choicers. I’m not sure what’s going on with the straw-manning of their position here.

          • The original Mr. X says:

            but that when you look at pro-life organizations, they do not seem to share the same motivation (because, e.g., their goals would result in a greater number of abortions).

            Erm, reducing the number of abortions would result in a greater number of abortions?

          • Anonymous says:

            Erm, reducing the number of abortions would result in a greater number of abortions?

            Um. No. I was referring to their other goals (e.g., opposing contraception).

          • Wrong Species says:

            Look at what he said:

            >The fact that a lot of anti abortion people are also pro capital punishment casts doubt on “pro life” as an accurate summary of their position.

            Notice he didn’t say organization, he said people. If he is making the argument that you say he’s making, he’s not doing it explicitly. Most authors don’t explicitly make the argument you claim they are really making.

          • houseboatonstyx says:

            I think there’s no point in taking the terms ‘pro-life’ and ‘pro-choice’ literally, because both are meaningless euphemisms. ‘Pro-life’ refers to opposition to abortion and assisted suicide, and perhaps to merely-palliative care; ‘pro-choice’ refers to support for choice of abortion. Trying to argue substantively against either position by focusing on those euphemisms, is … ignoring the territory by playing map-games.

      • The original Mr. X says:

        Yes, I mean, it’s not like there could be any relevant differences between an unborn baby and a vicious mass murderer…

      • Vorkon says:

        Most of these people are also very pro-killing in self-defense, and also support abortions in the case of rape or to save the life of the mother, which is roughly analogous to self-defense. I fail to see the contradiction there.

        Pro-lifers, when you actually try to take their position charitably, are not, as Salem points out, pro “all lives everywhere.” They are “pro INNOCENT life.” Taking that into consideration, their position on the death penalty should make sense, too. (Well, their position on the moral aspects of the death penalty, at any rate…)

  45. DavidS says:

    I think you have to be really, really careful about assuming that because people don’t do things, they can’t. I also find it very aggravating when people make these errors, but I think pretty often it’s because they (rightly) see debate as working on rather more informal rules than systematic logical argument. Partially what you might dramatically call ‘dark arts’, but basically because they replace the issue that is explicitly being discussed with the more important practical one.

    It’s a rationalist habit to very comfortably have arguments about very controversial topics in a very detached way: I enjoy this myself! But I suspect we’re wired such that members of the tribe sitting round discussing whether maybe your sub-group is inherently inferior, or whether it would be OK to kill you in some weird scenario involving trolleys, or whatever, is very threatening. The way people react in those cases doesn’t show a lack of a developed ability: it shows the ability isn’t being applied for whatever reason.

    Finally, you can just have disagreement on the substance. Scott gives the example of people saying depressed people are ‘just lazy’ as not being able to imagine different mind models. But it’s not inconceivable that some people are indeed just lazy, and that the other people are diagnosing them correctly. Similarly, I think it’s perfectly possible that some people hold beliefs for very negative reasons even if positive ones are available.

    In the UK we banned fox hunting: some people supported this for animal welfare reasons etc., some just hated posh people. Often the two were confused and combined to some degree (similarly, some people are appalled by halal slaughter but OK with battery chickens).

    Principle of Charity suggests you shouldn’t assume bad motives without evidence: but it doesn’t mean you can castigate people who do assume bad motives. And this is all made more complex by the fact that people’s reasoning is generally spotty and inconsistent. People present a value as absolute “it’s really bad for people if they live off unearned income!” but their actual view is more complex, which is why some people who say that might be anti-aristocrats, pro-welfare and others pro-aristocrat anti-welfare.

    There’s also the point that the Typical Mind fallacy being invoked here can cut both ways. Perhaps Scott in his innocence can’t model people who are genuinely motivated by their hatred of women/gays and wants to convince himself that they really are concerned with life/family/whatever. It feels like the sort of argument that could be thrown in any direction.

    • >Principle of Charity suggests you shouldn’t assume bad motives without evidence

      Does previous bad behavior count as evidence? Only when done by the same person, or also when done by people who merely sound like the same person?

      What I mean by “sounding like”: if people who want X do bad things and their reputation is destroyed, and I also want X, I will probably try to sound very different. And it is not difficult. People sound like each other if they have the same personality, or the same hidden goals, not the same open goals.

      Hatred is such a no-term; time to clarify it, as it gets constantly redefined. Hatred was originally understood as something from the hater’s perspective: some kind of violent rage, the opposite of love, some kind of I-want-to-hurt-you attitude, something really close to revenge. Today it is more often seen as something defined from the hatee’s perspective, e.g. “wanting to reduce the rights of women counts as hating women” – so what matters is not that the hater really feels true blood-boiling, adrenaline-pumping hatred, but whether the people affected feel threatened and, as such, hated.

      This is not very logical. But it is a generic tendency in social justice to redefine everything from a victim perspective and not a perpetrator perspective.

      Asking if people feel threatened is valid; asking if people feel hated is pointless. A threat is something to be seen from the victim’s perspective, and a motivation for threatening is something to be seen from the perpetrator’s perspective – and hatred as such is obviously a motivation, not a threat.

      So it does not make much sense; simply put, “hate”, “hater”, “hatred” became low-prestige terms. So people are no longer asked if they hate someone; they are told “you are being hateful”, and that is basically a P- communication, a your-prestige-points-just-got-lowered type of move.

      It would be useful to use D+, D-, P+, P- to signal the dominance and prestige aspects of communication, as they tend to influence the human brain, so better to lay them out clearly: “You don’t mess with Zohan D+” “gays are sissies D-” “X is a really cool dude P+” “antifeminists just hate women P-”

      Anyhow, to reclaim “hate” beyond the P- message it carries, you could say there are many ways to threaten or harm people, and many reasons why people can feel threatened or harmed, but hate is a really rare reason or motivation.

      Almost nobody hates women. It is just crazy to. Even if your mom hurt you, it is more often fear. In most cases, it is not hate in the I-want-revenge-on-you sense. “Back to the kitchen” moves are more a sort of dominance in the worst case, protection in the best. If “hate” were not so imbued with ideology and thus seen as just a P- message, it would be easy to see the difference. If I want to enserf some people to serve me as a noble lord, do I hate them? No, I just like having serfs. The fact that it violates their rights may count as a harm, but not all harm is hate-based. However, to fight back against it, it is logical that people hit back with P- messages, and “you are being hateful” is one.

      As for gays, the negative emotion ranges from disgust to fear. Again not quite the same as hate.

      Hate in its pure form is usually just revenge/vengeance, and it is really rare.

      • JBeshir says:

        It seems to be commonly used to assert “wants to treat as having lower moral worth than other humans”, which doesn’t imply hatred in its usual sense; it might simply mean they perceive the needs of people like themselves as in conflict with the needs of those people and want people like themselves to be given priority.

        I think “hate” ended up used for this partly as an exaggeration, and partly because it lacks a good short verb of its own. It would be better for discourse if people used clearer, less angry language.

        And I think this ends up as a common accusation, firstly because it’s commonly true of people engaging in lower-grade discourse; we are still the same species as the humans that dehumanise their enemies until genocide of them is morally fine, we just don’t go quite that far any more here.

        Secondly, because it’s a very low-effort accusation that’s hard to rebut: “You just don’t think group X should have normal moral value”.

      • Viliam says:

        I guess some people, when forced to develop a theory of mind, find a way to have their cake and eat it too. They develop beliefs like: “Yes, other people have different beliefs than me… but that’s because they are mentally ill.” Which admits that there are other perspectives, but there are no other valid perspectives — only my perspective is valid!

        Thus the “psychiatrization” of the opponent. They can’t have a healthy mind with a different model of the world, but they could have a mental illness that would produce the different opinions. So don’t listen to their delusions, and think about which known mental illness is most likely to explain what they say.

        For example, they disagree with a person P. The person P happens to have a trait T. Could a phobia of T explain the disagreement (via hostility towards P)? Yes, it could. Problem solved. If necessary, add a few extra explanations to resist falsification.

        Played against a conservative, it could be a hatred of women, or phobia of gays. But of course it could be (and historically was) played against other sides too. If we go too deep in the history, we have to replace “mental illness” with “imbalance of body humors” or “contract with devil” or whatever was the culturally accepted generator of invalid beliefs.

    • Michael vassar says:

      Thanks, David! It’s really helpful to have people spelling out the subtext of more cynical worldviews without endorsing the cynicism or rage. Optimism about the shared value of communication and clarity tends to make those who do have such a value lose.

    • Virbie says:

      > And this is all made more complex by the fact that people’s reasoning is generally spotty and inconsistent. People present a value as absolute “it’s really bad for people if they live off unearned income!” but their actual view is more complex, which is why some people who say that might be anti-aristocrats, pro-welfare and others pro-aristocrat anti-welfare.

      I don’t think this is really inconsistent. The former group may imagine welfare as primarily a leg up during hard times (and those who intentionally live off of it as an unavoidable downside of an imperfect system), and the latter may just have a bizarre definition of earned which implicitly includes “inherited from an ancestor who was good at killing people for land” (the more charitable way to describe this view is that aristocrats historically earned the right to their land through governing it to whatever degree, and additionally that inheritance falls under “earned income”).

      Either or both may be wrong in their assumptions, but that’s very different from inconsistent.

  46. Peter says:

    Understanding the idea of trade-offs: I think what you have here is a specific and temporary inability to understand some trade-offs. There’s this taxonomy of routine, taboo and tragic tradeoffs; I think some people are reluctant to accept that if they hold some values sacred then there are going to be tragic tradeoffs.

    I think those people saying their preferred plan has no drawbacks would understand tradeoffs just fine when going out shopping for consumer goods – although I’d like to see some actual research to be sure.

  47. TMK says:

    All this sounds familiar to me (and I totally get that feeling – *hey, he gets this too!*), but one thing got me really interested. I mean, if it is really environmental, not innate, then this is absolutely huuuge. If people could stop over-using stereotypes and other still-commonplace cognitive biases not through hard work and constant watchfulness, but because they learnt better in childhood and it comes to them naturally, then it would truly, really, change the world.

    • Viliam says:

      The chicken-and-egg problem here: How do you create an environment on “mental development level X” where people could absorb the necessary skills automatically from the environment, if you don’t have such people yet?

      Historically, I guess it was a question of luck — one person gained that level “naturally” (which is possible, although rare), and they happened to be in a position visible to many other people, some of whom started copying the skills. That could explain religious prophets.

      (Note: I am not suggesting here that everyone who “gained a level” automatically became a prophet. Only that if someone “gained a level” and happened to become a prophet, they were able to create a culture on the higher level.)

      The problem is, I guess: if level X seems to resemble level X-2 to people who are currently on level X-1… how do we distinguish people genuinely on level X from, say, charming psychopaths on level X-2? In other words, how do we separate “true prophets” from “false prophets” if we don’t have the prophetic powers ourselves?

      • Peter says:

        There’s something about each stage containing the ability to function at the lower stage, but not the higher stages. So by the theory, you could check for good X-1 emulation.

      • TMK says:

        I do not know, obviously, how this happened, if it is true. If you ask me what I think, then, based on how the Enlightenment happened, what positivism wanted to do, the scientific approach in general, and mass education, I would guess the latest changes were made by slow and steady organic work. For earlier times (the hypothesis of the bicameral mind, which was supposedly still around as late as 4000 years ago), I have no idea… that seems to have happened without leaving tracks in history (i.e. written stuff)…

        The organic work probably starts as you described, with a few people getting it by accident and then conveying the mindset to others.

      • Michael vassar says:

      Hypothesis: no stages, two states. Everyone is familiar with both states; what looks like x-2 from the outside always feels from the inside like x performing x-2. Executing at x involves tactically not acknowledging x-1.

        • sakkyokusha says:

          So, if your current state is x-1, and your environment is x = x-2, should you try to “execute at x-1” and thus tactically not acknowledge x-2 = x? I don’t think that would work. I suspect what you would want to say instead is “no, change to x and then ignore x-1, like everyone else”. Problem is, that assumes WAY too much free will.

          My experience suggests that you have to be “snapped” between states by some significant exogenous change.

  48. vV_Vv says:

    Primitive cultures certainly exhibit the magical thinking typical of young children; this is the origin of a whole host of superstitions and witch-doctory. They exhibit the same animism; there are hundreds of different animistic religions worldwide.

    And modern cultures have plenty of people who not only still believe in traditional religions, but also in all sorts of modern conspiracy theories, quack medicine, and so on. Even among the self-professed “rationalists”, there are people who take a Matrix-like “simulation hypothesis” (that is, a creator god) seriously, not to mention the weird diets, “life hacking”, cryonics, investing in bitcoins, etc.

    Are primitive cultures really more prone to irrational thinking than us, or is their irrationality just more obvious to us because it takes forms that are culturally alien to us?

  49. Carlos says:

    “I always assumed it was innate, because it was on the same timeline as things like walking and talking which are definitely innate.”
    Are you sure? Is Chomsky around? Ughhh!

    • Peter says:

      Yeah yeah. The nativism in language debate has been going back and forth for decades, and I doubt that the book you link to is the decisive refutation; the blurb and endorsements may say so but that’s blurb and endorsements for you. I’m especially skeptical of his claim to have debunked anything.

      I notice also a “bravery debate”: both nativists and non-nativists are given to claim that they’re shocking iconoclasts going against the received wisdom. The subtitle of the book seems to be taking aim at Pinker, and Pinker’s The Blank Slate has a similar “debunk the received wisdom” vibe, but for Pinker it’s non-nativism (in various senses) he wants to debunk.

    • Scott Alexander says:

      In terms of walking, given that other animals do it and that we wouldn’t have legs if we didn’t need them, I assume it’s innate.

      • TMK says:

        The fun thing is that walking is not completely innate either 😀 It requires practice, and without it, it does not develop properly (see the kid who was locked up for 12 or so years and never learned to walk properly later on; her name was Genie, iirc).

        Fun related trivia: animals have to learn sexuality by observing members of their species in action to properly breed.

        • Anonymous says:

          Fun related trivia: animals have to learn sexuality by observing members of their species in action to properly breed.

          Perhaps that’s why the Victorians died out?

          • TMK says:

            Oh, I am sorry, I suppose that apes use their imagination and trial and error and hearsay if they do not have better sources of information.

            Many other animals are not as lucky, though.

    • Douglas Knight says:

      Innateness isn’t a binary but a spectrum. Everything depends on both nature and nurture. The question of whether something is so innate that it would be restored by children raised by wolves is at one end of the spectrum, not relevant to this post, which is about the other end of the spectrum: diversity within a culture and diversity between existing cultures.

  50. Good. There should be a name for this. How about “ideological empathy”? The ideological Turing Test could be seen as a (possibly somewhat crude) test of ideological empathy.

  51. If we’ve made progress historically on 1)-4), does that mean we will continue to do so in the future – that we will get better at thinking about trade-offs, modelling other people’s minds, etc.? I’m thinking there is some room for cautious optimism – not least thanks to our increased understanding of psychology. The fact that people do point out the need to override your System 1, not to fall prey to ideological biases, etc. – as is done in this post – is bound to have an effect.

  52. ildánach says:

    “The car’s not starting because it’s tired”

    Hur hur.

    • Liskantope says:

      It reminds me of this Bertrand Russell quote whose context is not completely unrelated to theory of mind.

      No man treats a motor car as foolishly as he treats another human being. When the car will not go, he does not attribute its annoying behavior to sin, he does not say, “You are a wicked motorcar, and I shall not give you any more petrol until you go.” He attempts to find out what is wrong and set it right.

      For some reason, it never fails to crack me up.

  53. Sarah says:

    Putting this in a more self-critical way: what developmental stages do you (commenters) know you are missing? Here are some of mine:

    *Situational awareness. I just don’t have it. If someone comes up behind me and taps me on the shoulder I will shriek. It’s why I can’t drive; I have no way to estimate how soon the turn is coming.

    *The ability to parse multiple people’s motivations at the same time. Books like “Game of Thrones” are confusing to me because of the multiple plots. Financial instruments are confusing for the same reason — I have to work out sloooowly and explicitly why each party would find it worthwhile to participate in the transaction, and by the end I’ve forgotten how it works and have no intuition. I think most people offload such things into a “social thinking” bucket instead of just working memory?

    *Sally-Anne Test problems. The ability to keep straight “What Sally says”, “What Sally believes” and “What is true” and not forget that these may be three different things — and different from “What Anne says”, and “What Anne believes.” This is a very serious flaw in reasoning about business.

    *Skepticism. The ability to notice “Just because someone is saying this to me in an authoritative voice doesn’t mean it’s true.” (I have this in most situations but moral questions trigger my OBEY OBEY RIGHT NOW reflex unless I put a lot of energy into fighting it.)
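
    The Sally-Anne bookkeeping above can be sketched in code. This is a minimal illustrative model, not anything from an actual psychology library – the `Agent` class, field names, and marble scenario are all hypothetical – but it shows the discipline Sarah describes: keeping “what is true”, “what Sally believes”, and “what Sally says” in three separate records, so that no one of them can silently stand in for another.

```python
# Hypothetical sketch of false-belief bookkeeping (Sally-Anne style).
# Ground truth, each agent's beliefs, and each agent's statements are
# stored separately, so conflating them requires an explicit step.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    beliefs: dict = field(default_factory=dict)     # what this agent believes
    statements: dict = field(default_factory=dict)  # what this agent has said


# Ground truth: Anne moved the marble to the box while Sally was out.
world = {"marble": "box"}

# Sally missed the move, so her belief is stale...
sally = Agent("Sally", beliefs={"marble": "basket"})
# ...and what she *says* tracks her belief, not the world.
sally.statements["marble"] = sally.beliefs["marble"]

# The three values can all differ; the reasoning error is assuming
# any two of them are the same without checking.
assert world["marble"] == "box"
assert sally.beliefs["marble"] == "basket"
assert sally.statements["marble"] == "basket"
```

    The design point is just that “disbelieve Sally” only becomes an available move once `sally.statements` and `world` are represented as distinct things that can disagree.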

    • Ydirbut says:

      I really don’t alieve that people really speak languages other than English

      • moridinamael says:

        Once I was hanging out with a large group of Iranians. They were all speaking English because I was there – even in side conversations. They had been speaking Farsi before I arrived and resumed speaking Farsi after I left. Experiences like this definitely make it hard for my brain to take seriously the idea that there are any languages other than English.

      • onyomi says:

        As someone who has studied many foreign languages and who is really into the mechanics of language learning, linguistics, etc. I can confirm that this is much, much harder to intuitively grasp than it seems, even though everyone grasps it conceptually.

        To really get into a foreign language enough that you can perceive it as an alternate “base level” of communication rather than just an encryption for your mother language, which is the “real” language, is surprisingly hard, and, I think, often not even accomplished by many successful foreign language learners.

        • MF says:

          Living abroad for a couple of years has kind of turned this around for me. It’s not that my mother tongue now sounds like a cipher to me, but whenever I’m back in my country, it takes some time to alieve that almost every random stranger around me understands German.

          “Strangers speak Dutch and English. They almost never speak German. That’s just a law of nature!”

          EDIT: On second thought…that actually sounds exactly like a cipher.

    • LCL says:

      +1 for trying to think of cognitive developments we lack instead of just condescending stuff we do well but think other people lack. But it’s much harder.

      My contribution:
      Hard to articulate precisely, but something like being able to evaluate another person’s situation, traits, and actions beyond the context of my relationship with that person. Seeing them more objectively without letting my judgement be too colored by my own relationship with them. Like being able to tell if someone is a caring or friendly person in general without that judgement just being a proxy for whether they’ve acted caring or friendly towards me.

      I’ve noticed that this is a skill that I associate with religious people. Not that all religious people have it, but that the people I’ve seen demonstrate it tended to be devoutly religious. Especially when the example is like being able to see someone as a sensitive, or injured, or even loving person even after they’ve been horrible to you. Something to do with forgiveness, or cultivating an ability to see the divine image in everyone.

      I’m only aware enough of it to know how bad I am at it.

    • Douglas Knight says:

      It might just be the phrasing, but some of these seem very different from the examples that Scott gave. #4 fits right in. #2 sounds pretty similar to #3, except that #3 is phrased more like Scott’s examples of people making System 1 errors, while #2 is phrased as something that your System 1 got right and alerted System 2 about, but System 2 couldn’t handle. The classic developmental-stage examples are ones that people don’t understand even when they are pointed out. (Of course, they are classic because the demonstrations are so simple and convincing, not necessarily because they are the most important.)

      #1 is completely different in that it isn’t social and/or metacognitive. It is probably a good direction to look for other examples. I suppose it is similar in that it is the failure to have an unconscious process running in System 1, continually checking.

      • Sarah says:

        Yeah, 2 & 3 are probably different versions of the same thing. And they might not be real “developmental stages” in that if you tell me, or diagram it for me, I can understand it, it’s just sometimes too cognitively taxing to keep straight all at once.

        #1 may not be in the right category (it’s not theory-of-mind per se) but it’s an input to a lot of social processes. Task switching and situational awareness problems make it hard to make sense of what’s going on socially. My impression is that attentional and sensory-processing modules are “prior” to social modules.

        Come to think of it, 4 is more like one type of *consequence* of 2/3 — if you don’t maintain awareness that “the truth” and “what Sally says” are not necessarily identical, the option of “disbelieve Sally” doesn’t come to mind.

    • Virbie says:

      I seem to have a subconscious inability to imagine a system working inefficiently, which really really screws me when dealing with systems that have no incentive to fix small inefficiencies. It took me four separate visits on four separate days to get a Cuban visa from the consulate, which most people can do in three minutes flat (the Cuban consulate is exactly as efficient as you think it is). I lost my driver’s license and it took me 111 days and multiple phone calls and visits to the DMV to get a replacement.

      I simply don’t think to check for arbitrary constraints: I’ll check things like hours and which forms I need, but not things like “you can’t download this form, you have to wait on hold for two hours and then tell someone to mail it to you” or “the Cuban embassy in Chile does not accept Chilean pesos, only USD free of even the tiniest millimeter long rip on the edge” (a friend of mine had his USD turned down in Chile because it was yellowed from age).

      I sometimes try to overcompensate, but it’s tough to know where to draw that line. Should I assume the address of the agency on their website is wrong? Should I assume that the times listed are in GMT instead of local time? (these examples are intentionally ridiculous)

  54. Tibor says:

    I can almost exactly remember reaching these “milestones” over the course of the last 10 years or so (I am 26). When I was 16, my model of the world and politics was that there are the good guys and the bad guys, and the bad guys use the idiots to support them. So whoever opposed the Good Policy was either evil, knowing well that it was a Good Policy, or too stupid to realize that it was. Also, the good guys were good, period. Therefore, whatever they proposed had to be good as well, and I would defend or oppose ideas and policies based more on whether the good guys liked them or not. I am pretty sure the vast majority of people, even a majority of people with university degrees, think like this throughout their lives. I think the realization that the others are not necessarily stupid or evil came from reading David Friedman’s blog (and watching some recorded talks), but had I noticed LWers before, I would have gotten the message from there as well. It also spoiled a lot of other stuff for me 🙂 Things I used to enjoy, but then started seeing as partisan rants. Probabilistic thinking is something I believe I learned by, well, studying probability theory. It may be the effect of “it is my field, therefore it is the most important field”, but I really have a hard time today imagining how people with zero knowledge of statistics and basic probability function in modern life. I can hardly imagine a subject that would be more universally useful at school (beyond reading, writing and basic arithmetic) than the basics of probability and statistics. However, for some reason, one gets literature or chemistry instead 🙂

    I think the reflex in most people is still “people who do not agree with me are evil or stupid”, and it takes mental effort to overcome that reflex – same as when someone who is a bit unfocused sees a person talk to a dog in French and is puzzled before realizing that Frenchmen do not speak English as their native language (and often not at all 🙂 ). The more it is practiced, the more natural it becomes. The same goes for probabilistic thinking, and probably for everything else above magical thinking and the corresponding primitive models.

    I still have some problems realizing that other people might think differently or even that they do not get the full meaning of what I want to say (including how I personally feel about it and all the underlying experience I have), that they only hear exactly what I tell them.

    This is a problem for me more on a practical everyday level than on an abstract conceptual level. When I care about not being misunderstood, I often have to stop and “tell” something to myself in my head, picturing myself as the recipient, to realize that something sounds unintentionally rude, for example. From what I gather, most people can do this effortlessly and subconsciously; I for the most part cannot (although it also gets better with training, and some things are automatic nowadays that did not use to be when I was 15). On the other hand, I find it quite strange that people who have this skill, empathy, in abundance are often also the ones who have an especially hard time realizing on a more abstract level that other people are different and do not necessarily share their worldview, and who therefore often fall back to “my opponents are either stupid or evil”. I think this is often the case with artists, who, on the other hand, can usually get along with other people on a personal level with much more ease (I am not talking about the “mad genius” stereotype, but about most “artsy types”).

  55. I am really not sure exactly what Nathan J Robinson is really “getting”. Let’s put it this way: there are substantive values (homosexuality is wrong, racism is wrong) and there are instrumental values (we should be tolerant of things we think are wrong), and I think instrumental values are in most cases just posturing – only in some really rare cases are they attempts to build a functional framework for different people to coexist. In most cases we wish nothing more than for everything we disagree with on a substantive level to be hurt and destroyed by any means possible. Winning, not winning nicely. Ends justifying means. Burning the heretics and making them a scary example. But since that is socially unacceptable, you have to posture about instrumental values and make it really look like you care.

    Is what he is “getting” the deconstruction of this instrumental-value posturing, forcing people to admit honestly that they just want to harm the racists / harm the gays by any means possible?

    Sort of putting a P- prestige-points hit on everybody who does not set the posturing aside and agree to seriously work on building frameworks of coexistence between disagreeing people? Interesting strategy, but what is he “getting”?

    • JBeshir says:

      I think he’s getting that if you’re debating with people who have different ideas about which policies do what, and you’re trying to actually convince them to change how they’re acting, any “type” of justification you argue they should accept is one you have to accept as well.

      For example, if you say “because X causes social harm” and suggest that they should accept such justification from you even if they don’t believe it, then you’d have to accept “because Y causes social harm” from them even if you didn’t believe it, at least unless Y was something like “You not licking my shoes” where everyone would agree that they were taking the piss.

      I think what he gets to in the end is more like “I support the non-filtering side of both because while I think trying to reduce racism is instrumentally useful, I also think a bias towards non-filtering is instrumentally useful, and you should think the same – speech and debate and consideration of viewpoints both fulfil values and are useful in working out what to do”, rather than “Ideally I’d have banned what I don’t like, but in the interests of avoiding my stuff getting banned back we should have a tacit truce” – but I might be misunderstanding.

      • Oh. My point is that that really rarely happens. Usually nobody is convincing anyone to act differently; they are sending messages so that bystanders will give them more prestige than the opponent, or team up against the opponent, or something like that. And I am not being overly cynical; I just think it is basically lawsuit logic. You are trying to convince the judge or jury, not the other attorney. And the judge or jury will not simply change their minds – they will also throw the other attorney’s client into prison, so you are essentially convincing them to harm and hurt the bad guy. The basic mode of politics was already this kind of adversarial lawsuit type back in Cicero’s time. Do you think today’s Ciceros really want to convince today’s Catilinas of anything? The idea is to convince everybody else to destroy the guy – to convince the jury to throw him in the pit. Everything is a signal to the outside, not a desire to change the mind of the opponent, and this is not an overly cynical view, just the normal standard lawsuit view. If laws are _applied_ through adversarial rhetoric, why shouldn’t they be made the same way?

  56. stillnotking says:

    Boy, this post hit a lot of my “rationalists teaching Grandma to suck eggs” triggers. There is little, if anything, you can teach even the most ignorant person about the nuances of human social interaction and theory-of-mind. We grasp this stuff intuitively. You can teach people to model it, which is basically like teaching them the physics of catching a ball: academically interesting, but not likely to turn them into a pro athlete.

    People who say things like “pro-lifers hate women” or “the terrorists hate us for our freedom” are not failing to reason about the motives of others, they’re accusing the others of being deceptive. If anything, they’re reasoning at a higher level than you. It’s hardly unprecedented for political partisans to be cagey about their true motives (whatever you think of those two specific examples). Also, they’re doing it in a deliberately aggressive and hyperbolic way, in an attempt to influence their audience. There’s a reason that detached perspectives like Nathan Robinson’s (or yours) are not the preferred approach to winning elections.

    • But maybe Scott is trying to educate the audience not to buy this. If the audience buys it, then it is not true that even the most ignorant person grasps this intuitively. There is a real contradiction here.

      My proposed – geeky – solution is to defuse it by inserting D+, D-, P+, P- signs into quoted text to make clear the dominance (toughness or submissiveness/cowardice) or prestige (morality, ethics, achievement) aspect of someone’s speech, and so defuse the emotional effect that way.

      “terrorists hate (P-) us for our freedoms (P+)”

      “pro-lifers hate (P-) women”

      Most messages are P, because P is far more accepted today than D. Exceptions: “X has balls of steel (D+)”, “this is just a knee-jerk reaction (D-)”.

      • stillnotking says:

        Statements like those are never intended to convince, but to inspire. The audience buys it for the same reason they buy any type of flattery. Neither the speaker nor his intended listeners want to defuse the emotional impact of “the terrorists hate us for our freedoms”. They will actively resist attempts to defuse it, in fact.

        I don’t mean to dismiss the whole liberal project here. There are definite differences in the thinking of modern people and “primitive” ones, especially the nuances of ingroup/outgroup emotional response. I just don’t think Scott is telling a modern reader much that he doesn’t already know on an intuitive level. The logic presented in this post is, if anything, a few steps behind.

        • Sure, but then there is a third type of person – not the opposing politician and not the audience, but some sort of third party: maybe someone else’s audience, or a neutral, or someone who really takes it seriously and really tries to argue back. It looks to me like Scott was often the type who really tried to argue back, in a futile attempt to find some truth in it, or to find people who are interested in truth. Basically you could say this is a warning to himself and to such neutrals not to do so. The idea here seems to be “don’t try to argue with ingroup flattery messages outside your group”.

          Autistic traits, such as taking everything literally, can play a role.

          Part of this confusion is caused by the Internet – heck, even by print media. If we are football supporters right out there in the stadium, yelling that the other team sucks, we know at some level that it is not serious, because the groups are clear.

          But in print media and especially Internet the groups are not clear.

          I remember years ago I browsed a subreddit called cringepics. And I think it was mostly about people who looked cringeworthy without any group affiliation. Too much Ed Hardy etc. And then there was a new trend of posting OKCupid pictures of men and quoting some of their answers like “I am a nice guy” and “I think women should shave their legs”.

          In hindsight, cringepics was cleverly taken over by a group of feminists. What was especially clever and devastating was that they didn’t argue and didn’t even comment – within their in-group it was OBVIOUSLY cringeworthy. This uncommented, silent obviousness is often the most devastating rhetorical weapon, because it makes you feel like you are left out of something that everybody else is part of. So I argued. Of course. Aaaand it wasn’t a good idea. There weren’t even counter-arguments, just downvotes. For “obviously” wrong ideas. I can tell you, that kind of silent frown is psychologically more devastating than getting called an asshole. If others think you are bad, that hurts; if they think guys like you are so irrelevant that they are hardly worth talking to, that hurts more. It is a really efficient, clever tactic. And often you don’t notice that what you thought was generic neutral individuals talking is actually a group dynamic.

          The point is, print media and the Internet have made groups hard to see. You could think you are in a group-free, rational, neutral debate while you are actually watching a group takeover.

    • Adam Casey says:

      I think I disagree with you about the empirics here. “We grasp this stuff intuitively” is false as far as young kids and people with certain kinds of mental disorders go, and seems to me only partially true for regular adults.

      As far as politics goes, your description might be correct for some. But I’m a right-winger who can disguise himself well as a leftie, and lots of my leftie friends sure talk like I’d expect people to talk if they didn’t grok being right-wing. They keep talking like this in more detailed discussions, which you can model as them being very good at playing the part, or as them actually not getting it; the latter seems more reasonable.

      • stillnotking says:

        They don’t necessarily grok being right-wing, but they have to realize that right-wingers don’t literally say “We want to control women’s reproduction, because we hate them.”

        Your friends’ belief that righties’ professed concern for fetuses is just a deceptive pose may be highly ingrained, but it’s still a belief; it requires second-order reasoning and a coherent (if not plausible) theory of mind. It isn’t the same type of error that young children make.

        Edit: Actually, it requires third-order reasoning: They want me to think that they think X. That is not evidence of some glaring cognitive deficit; quite the opposite.

        • Adam Casey says:

          > It isn’t the same type of error that young children make.

          Oh sure. But I don’t think the post claims it’s the same error. It’s a different error which is analogous.

    • Psmith says:

      Strongly endorsed. If people don’t have these skills, or have them but don’t use them, perhaps it’s because they aren’t actually very useful. Playing at Horrible Debate Club Nerds is fun, but we’re political animals, and there ain’t no such thing as a free lunch.

  57. Alsadius says:

    God, this explains why I hate so much political debate. I’ve understood it on some level for a while, but this fits it into a framework much better than my previous “Most people don’t care that much about what their opponents really think, and just want to ‘win’ the debate”.

    This is a bit free-association, but a bit of politics just popped to mind. The correlation between critical thinking and political alignment has been studied a few times, and the results are that left-wingers are better critical thinkers – and, completely by coincidence, the sort of people who study it are mostly left-wing academics. Conversely, there have been studies of the “ideological Turing test” – how well you can identify the honest beliefs of someone with different opinions than yours – and right-wingers usually do far better on that, which has commonly been explained by left-wing dominance of culture forcing them to internalize it whether they want to or not: they can’t put up a bubble the same way a lefty can.

    What if this is a result of the two sides each mastering a different one of the techniques above? It might be less about bubbles and bias, and more about something in the rightist memespace encouraging the development of some aspect of theory of mind, while something in the leftist memespace encourages a different virtue that gets labelled “critical thinking”.

    (I suspect the traditional explanations are closer to the truth, FWIW, but it’s interesting enough that I thought I’d share)

    • Could be, but only weakly. I am on the right, and I desperately try to understand the modern mind because that is the difference between no chance and some chance of winning. My current theory is that moderns (leftists) like prestige status more than dominance status: they would rather be pop stars than lead a warband. I have no idea how this happened, but it is almost a fact that modern people are really like that, and it maps really well onto the last 500 years of history and the trajectory of changes. OTOH, if I want to be really charitable to the left, they always seem to have cared more about fighting systems than about fighting people – fighting capitalism as such, not just specific capitalists – so due to this tendency towards abstraction (which is probably an excellent move for gathering prestige) they are more interested in being critical of systems. But we know many counter-examples as well.

      • Viliam says:

        They would rather be pop stars than lead a warband.

        Seems obvious to me. A pop star can take a vacation, or change their mind and try doing something else, without getting killed. Also, failing to become a successful pop star usually doesn’t kill you. On the other hand, if you lead a warband, you get killed if you lose, and you are likely to get stabbed in the back even if you keep winning. (At least this is my model of the world; I have never been a pop star or a warband leader.)

        A preference for keeping your future options open, if you wish.

        • This… could also be a fundamental mental difference that determines political orientation. Good catch! People more to the left like this open-endedness, maximizing the breadth of the option tree, while people more to the right basically have a certain sense of attachment, loyalty and desire for solidity. Think of the typical old-fashioned farmer who is outcompeted by factory farming but will never sell, because it is HIS farm. Profitable or not – MY farm, part of my identity, stick to it.[1] This is attachment / loyalty. See also how “rootless” can be an insult: rootlessness is option-maximization, a lack of loyalty signals, and hence perhaps lower trustworthiness.

          Loyalty in this attachment sense is something people either “get” or not.

          [1] This doesn’t necessarily mean people more to the right have a bigger identity – more that different things matter in their identity. They can change political opinions without identity loss, but a location change is more difficult; perhaps for people on the left it is the other way around.

  58. I know a guy who told me that when he was three years old, he was convinced that his parents were unconscious robots. They could not be real people because real people would not, for example, refrain from eating the ice cream in the freezer when no one else could stop them.

    • Murphy says:

      I remember at around age 6 realizing that adults/teachers didn’t all know almost everything. I already knew I could know things that they didn’t but I had the belief that they knew all about the world and had all the information that I did not.

      I wonder sometimes whether some people never get over this but simply apply it to book authors and believe anything they see written in a book.

      I think it’s the flip side of the earlier theory-of-mind thing: first you go from understanding that you have information others don’t, to understanding that others have information you don’t, to understanding that others don’t have almost limitless information that you don’t.

      • moridinamael says:

        I think it actually takes some big, significant thing to first cause your brain to say “This person, who exudes all the signs of being an authority figure, is wrong in a way that I am certain of” and then to actually follow through with the implications of this.

        • Tracy W says:

          My three year old has been doing this ever since he could talk enough to do it. He doesn’t believe me on electricity, he doesn’t believe me on pronouncing Maori words, he doesn’t believe me on whether the tide was going in or out. Never mind that I’m right on all of those.
          Although in his defence he did learn to talk in London where nearly everyone talks with a different accent to mine.

  59. Deiseach says:

    Scott, this isn’t condescending and elitist at all, it’s very reasonable and presented neutrally.

    That means we’re all going to pull bits out of it and go “But what if – ?” 🙂

    RE: primitive cultures – I think the big mistake made there, at least in older anthropology etc., was to conflate “primitive level of development compared with civilised societies today” with “representing an earlier stage of human development”. So Australian Aborigines could be considered the closest thing to Stone Age peoples or whatever, and their customs could shed light on early Man. Which is not so – Australian Aborigines of the 18th century were just as much products of the 18th century as Europeans or Americans or Africans etc.

    The idea of different developmental milestones on the macro as well as the micro level is interesting, and I’m not saying it’s wrong – nor am I saying this is what you are doing – but I would be cautious about taking “Amazonian tribesmen today = earlier developmental stage” and drawing from it the same kind of conclusion as previous scholarly generations: that this means they are chronologically more primitive, and so on the same cultural and social level as people 5,000, 10,000 or more years ago.

    • Adam Casey says:

      But of course this gels very well with actual child development stages.

      There are lots of mental disorders that cause adults not to reach some development stage or other. And in many ways it is then accurate to describe them as being like a child of 10. But in many other ways they are 40, and there are aspects of them that are inherently a product of being 40.

  60. John Ohno says:

    With regard to general semantics and ‘the map is not the territory’: my position (informed in part by seeing a lot of otherwise intelligent people make these kinds of fundamental mistakes) is that, rather than being an environment-driven developmental step like talking, accurately distinguishing between internal mental state and external world is a relatively rare skill that children learn to some degree but rarely fully internalize, and that under mental stress people will fall back on magical thinking of this type. In other words, confusing the menu with the meal is an extremely common cognitive bias, and general semantics owes its complexity to being a system intended to minimize the rate at which practitioners commit this particular systemic error in cognition. (In other words, it stands with the ideas and techniques in the LW Sequences and in Dennett’s Intuition Pumps as a tool for avoiding particular cognitive biases through habits of thought.)

    To claim that “the map is not the territory” is obviously true is, well, trivially correct: even children who in practice do not make the distinction between self and other will not claim that a concrete map is identical to the place it represents; in the same way, it’s “obviously true” that qualia aren’t a necessary or meaningful idea, and most of the ideas that the Sequences go over at length are equally obviously true. But knowing that something is true when thinking directly about it is very different from putting that knowledge into practice in another situation. (Indeed, you can see this in the Sequences, where Yudkowsky spends a whole essay on how counterproductive it is to pander to your in-group and insult your out-group when making a rational argument, and then proceeds to pander to his in-group and insult his out-group in three more essays, right afterwards, on subjects essentially unrelated to his in-group/out-group division; Yudkowsky knew this was bad, but he hadn’t internalized a mechanism for detecting when he was doing it.)

    General semantics, even if it’s not an effective set of techniques, at least has an appropriate goal in mind, because you can indeed trace a lot of unfortunate situations down to people who intellectually know that the menu is not the meal but in practice are eating the menu.

    • We actively teach kids NOT to notice the map–territory problem, because so much of education is about categories. We teach kids at school that whales ARE mammals; we don’t teach them “we found it most useful to categorize animals by how they reproduce, and whales reproduce like rabbits, so we found it useful to put them there”. They just ARE mammals, and that is it. Teaching to the test, of course. This teaches kids that what we think and say about reality is the same as reality as such; it teaches them to take everything as literally true or false. There are no 65%-correct test answers. You guess the teacher’s password or lose.

      To a good student, saying the map is not the territory sounds weird – is it a wrong map or something? A good map is one that gets me an A on the test. What is the territory anyway? The territory is whatever map the authority figures who judge me will accept.

      GS or LessWrong are basically unschooling.

      If you are actually trying to get something done in life – as a manual laborer, a programmer – you figure out early that the map is not the territory. If you have a creative hobby or interest, you figure it out even in your teens, before you must work.

      If not, you can go through a significant chunk of your life thinking truth is whatever answer gets rewarded – with grades, status, or anything else – by the figures of authority. The right answer is the answer that gets the praise and the pat on the back. Schooling teaches this.

      And how do you even avoid this? This is how language works. We still say whales ARE mammals, not “whales share 73% of their characteristics with the other animals categorised as mammals”.

      • ChristianKl says:

        I think that explanation suggests bad teaching. The alternative to saying whales are mammals because we found it useful to put them in that category is to teach that whales and rabbits have a common ancestor that lived X years ago. You can find that out via DNA analysis.

        There is also the map you can create by classifying organisms by the way they reproduce.

        Other people looked at fossils and made maps of how species are related on that basis.

        Interestingly, all those maps of how organisms are related to each other give the same results. In general semantics terms, the maps are *accurate*. Then we call a specific area in that map “mammals”.

        In class it’s also interesting to note that the map you get by organising organisms by whether they live in the ocean or on land doesn’t produce the same results in the case of whales. The map of organising organisms by their size also produces different results.

        That’s how a good biology teacher who understands the lessons of general semantics should teach this issue. That has little to do with unschooling.

        The result of teaching that way is children who understand the principle of evolution and can do a lot more than correctly answer the useless question “Are whales mammals?”
        As a society we care that schools teach evolution in a way that children understand. We care less about whether everybody says “whales are mammals” when you quiz them.

        And how do you even? This is how language works.

        I applied general semantics to this example, but if that’s a question you care about, read Science and Sanity. The book doesn’t just say “the map isn’t the territory”; it also tells you what to do instead.

        • >That’s how a good biology teacher who understands the lessons of general semantics should teach this issue.

          In some sort of elite school where children are highly intelligent and are actually interested in it, yes.

          In the average public school? If they remember anything from the lesson – and especially anything three days after the test – it is a success.

          My point is that if you dumb down the GS method, you arrive at exactly this common “X is Y”, “X, Y, Z are called N” teaching. The job of the public school teacher is not to fail most of the class; the system is simply not prepared for the majority failing, no matter how bad they are.

          • ChristianKl says:

            It’s certainly possible that you are constrained by certain standardized tests, and that those tests consider it important for children to be able to say that whales are mammals.

            In that case I consider it a bad test. I don’t care whether a high school succeeds in teaching its students that whales are mammals. It’s useless knowledge that’s irrelevant.

            But if I get to write the test, then I can write a better question:

            If you order different animals by DNA similarity, does the resulting ordering look more like ordering them by size, or by birthing method?

          • Jon Gunnarsson says:

            Knowing that whales are mammals is useful. For example, that fact tells you that whales have lungs, give birth to live young, invest a significant amount of time rearing their offspring, and taste more nearly like beef than salmon.

          • ChristianKl says:

            I don’t want to eat the flesh of whales in the first place, so I don’t care whether it tastes more like salmon or beef.

            Can you tell me a story about how a twenty-year-old will make better life decisions because he knows how the flesh of whales tastes? Or, for that matter, how much time whales spend raising their young?

            Why is it valuable for a society when its citizens know those facts?

          • Jiro says:

            Can you tell me a story about how a twenty-year-old will make better life decisions because he knows how the flesh of whales tastes?

            The answer to this for most people is “no, I can’t tell you that”, but the reason is that the body of facts which contains that fact is cumulatively important to know, even though each individual member is not very important.

            Asking how often a specific scientific fact is useful in real life is like asking “it’s not important to calculate 12.39 * 6%, after all, how many times will anyone run into that specific calculation in real life?”

          • Jon Gunnarsson says:

            Sure, knowing specific facts about whales has little practical benefit, but the same is true of the vast majority of things taught in school. I could just as easily ask you how knowing that ordering animals by DNA similarity looks more like an ordering by birthing method than by size will help the average 20 year old make better life decisions.

          • ChristianKl says:

            “it’s not important to calculate 12.39 * 6%, after all, how many times will anyone run into that specific calculation in real life?”

            There’s good reason why we shouldn’t teach children to memorize the answer to 12.39 * 6%.

            Any teacher who says that he teaches public school children and that it would be much easier to tell them to memorize a few numbers isn’t a good math teacher.

            That’s the equivalent of TheDividualist wanting to teach directly that whales are mammals because anything else would go over the children’s heads.

            I could just as easily ask you how knowing that ordering animals by DNA similarity looks more like an ordering by birthing method than by size will help the average 20 year old make better life decisions.

            No – that happens to be more useful, because it helps with understanding the principle of evolution. It illustrates why it can be useful to do a lot of scientific experiments on small rats: the rats share more with us than the size difference would suggest.

      • FullMeta_Rationalist says:

        One of the concepts I’ve found most productive is E-Prime. E-Prime mimics English, except you’re not allowed to use the lexeme “to be”. This taboo includes (but is not limited to) “is”, “am”, “are”, “were”, “was”, and “will be”, and extends to negative contractions. E-Prime limits exceptions to cases where “to be” acts as a helping verb rather than the primary verb; e.g., E-Prime excuses “was going”. (Empirically, it makes sense for me to also make exceptions for topic sentences and theses. I don’t know why.)

        E-Prime forces a speaker to include the subject in their constructions and encourages the active voice. It makes writing more exciting, like an action movie. It also makes writing more literal and less euphemistic.

        When I say “cake is good”, my brain applies a little XML tag that reasons: “‘cake’ now carries a ‘good’ tag; I shall file the concept ‘cake’ under the category of things which are good.” The cake now lives in a static, platonic universe devoid of exciting stuff, because there does not exist a subject to operate on the elements of the universe and make exciting things happen.

        All sentences including the lexeme “to be” can be converted into E-Prime. “Cake is good” becomes “I like cake” or “cake pleases me” or “I find cake enjoyable” (empirically, E-Prime feels hard at first, until you start substituting “someone finds this X” for “this is X”). A difference exists because your psychic universe now exhibits operations, has an actor, and must describe sentences in terms of action and change and movement.

        When I say “cake is good”, “good” just represents a judgement given my personal preferences. But the sentence pretends “good” is an objective attribute, as if it were a property of the universe rather than a property of my mind. E-Prime forces subjective judgements masquerading as objective attributes to expose themselves as what they are: subjective judgements.

        E-Prime allows me to easily resolve a lot of linguistic sleights of hand. I can hardly stand reading my local newspaper anymore, because I consciously but involuntarily reinterpret the text in E-Prime, which lets me interpret the author’s message more intelligently – at which point I often quickly realize it does not appeal to me.

        (Cat fact: the guy behind E-Prime, D. David Bourland, Jr., studied under the guy credited with the map–territory distinction, Alfred Korzybski.)
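The taboo described here lends itself to a toy checker; a sketch (deliberately naive, and not Bourland's own tooling – it flags every form of “to be”, including the helping-verb uses E-Prime excuses, and the word list is my assumption):

```python
import re

# Toy E-Prime linter: flag forms of "to be".  Naive on purpose -- it also
# flags helping-verb uses ("was going") that E-Prime excuses, and the word
# list below is an assumption, not an official E-Prime specification.
BE_FORMS = re.compile(
    r"\b(isn't|aren't|wasn't|weren't|am|is|are|was|were|been|being|be)\b",
    re.IGNORECASE,
)

def eprime_violations(sentence):
    """Return every tabooed 'to be' form found in the sentence."""
    return BE_FORMS.findall(sentence)

print(eprime_violations("Cake is good."))     # ['is']
print(eprime_violations("Cake pleases me."))  # []
```

The two example sentences mirror the comment’s own rewrite: the “is” version trips the taboo, while the E-Prime version passes.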

  61. ChristianKl says:

    To me, General Semantics claims more than just “the map is not the territory”. Korzybski, for example, has the notion of “semantic reactions” (s.r.): words don’t work through how their identity is defined in the dictionary – the point of a word is the reaction it causes in the brain of the listener and the speaker.

    To me, thinking of words that way is also a developmental milestone: words produce effects; they don’t work through having an identity.

    • Vaniver says:

      This is a special case of “the map is not the territory,” though–the dictionary definition of words (the map) is not their s.r. / what they actually do (the territory).

      • ChristianKl says:

        It might be a special case, but while most people who read the Sequences understand “the map is not the territory”, I don’t think they think of words as producing semantic reactions.

        I remember doing Focusing with a person whose native language was French. I don’t speak French. When searching for a handle, they came up with a French expression for which they couldn’t readily find an English or German translation.

        For the Focusing process it’s useful that they spoke the label out loud, but it’s irrelevant whether I, as facilitator of the process, understand its meaning.
        When doing change work, it’s often more important what words do than what their dictionary meaning happens to be.

        Scott has frequently expressed that, to give valuable verbal suggestions to his clients as a psychiatrist, he would have to say things that *are* clever and insightful. That suggests he thinks of words more as a medium for transmitting ideas than as a tool to produce reactions in the other person.

  62. Toth says:

    For something completely different, but with some interesting parallels, I would highly recommend the first Quora answer to “What is it like to understand advanced mathematics?”.

    On a related note, I feel that there are certain concepts such that the simple fact that you understand them can qualitatively improve your thinking. I wouldn’t characterize them as giving access to different “stages”, but they do expand the space of things you are able to think about productively.

    You already mentioned probability. Another example would be the notion of a derivative – something a lot of people never really grok. I don’t mean knowing how to compute derivatives, just understanding the concept well. I remember trying to understand what speed means in a physics class before I learned about derivatives, and I was just hopelessly confused. If you need to think about anything that involves instantaneous rates of change and don’t know derivatives, aren’t you in that same boat?
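    A minimal numeric sketch of that concept (the function and numbers are invented for illustration): speed is just the derivative of position with respect to time, and you can approximate it with a finite difference.

```python
# Hypothetical example: position(t) = metres fallen after t seconds
# (free fall, ignoring air resistance).
def position(t):
    return 4.9 * t ** 2

def speed(t, h=1e-6):
    # Central finite-difference approximation of the derivative dx/dt.
    return (position(t + h) - position(t - h)) / (2 * h)

# speed(1.0) approximates the instantaneous speed at t = 1 s (about 9.8 m/s).
```

    The point is that “speed at an instant” only makes sense as a limit of average speeds over ever-shorter intervals, which is exactly what the derivative formalizes.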

    I think other examples are things like the notion of a limit more generally, the Turing machine (or other models of computation), game theory, and Cartesian geometry [1]. All my examples are from math, but that’s probably just because math is what I’m most familiar with.

    [1] I’m trying to give examples of concepts that improve your reasoning about things outside mathematics. If you are willing to consider those that improve your reasoning about mathematics, then most mathematical concepts would count.

    • Skippy says:

      I think the ability to think strategically about systems in evolutionary terms is a level-up.

      In the stock market there’s the notion of cutting losers and riding winners. If you are tempted to buy more when the stock you like drops, and to sell when it e.g. doubles, you will have invested the most in the worst stocks and missed the ones that go up 10x. It’s similar to probabilistic thinking, but sort of the next level up: 1) estimate the probability it’s a winner, and then 2) knowing you’re going to be wrong some fraction of the time and will find out pseudo-randomly, figure out how to play it so you lose less when you’re wrong and win more when you’re right.
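      A toy simulation of that asymmetric-payoff point (all probabilities and multipliers invented): even if 90% of picks lose half their value, a few 10x winners can make the portfolio profitable on average, which is why sizing for the winners matters more than being right often.

```python
import random

random.seed(0)

def average_return(n_picks=10_000, p_winner=0.1, win_mult=10.0, lose_mult=0.5):
    # Each pick costs 1 unit; a winner returns win_mult units, a loser lose_mult.
    total = 0.0
    for _ in range(n_picks):
        total += win_mult if random.random() < p_winner else lose_mult
    return total / n_picks

# Expected value per pick: 0.1 * 10 + 0.9 * 0.5 = 1.45, i.e. +45% on average,
# despite being wrong 90% of the time.
```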

      Or poker. The first level is: if I bluff here, on average I’m going to win. The next level is: if I’m winning most of my bluffs, I need to bluff more, because I’m not bluffing enough – the other guy is folding too often and I need to exploit him, or I’m too cautious and not getting enough calls on my value bets. The next level is: if I play a certain way, how does it make the game evolve? Sometimes it’s better to get a certain rep, or to keep a guy in the game, than to win a hand.

  63. Kevin Simler says:

    Another theory-of-mind feature I wish more people had was the understanding that their own mind starts to falter (logically/rationally) when considering sacred, taboo, or emotionally-charged subjects. Oddly enough, unlike other theory-of-mind features, this is one that’s easier to recognize in *other people* first, and takes training to learn to recognize in *oneself*.

  64. David Pinto says:

    This would have a funny corollary: the LW Sequences try to hammer in how different other minds can be from your own in order to develop the skill of thinking about artificial intelligences, but whether or not AI matters, this might be an unusually effective hack to break a certain type of person out of their egocentrism and teach them how to deal with other humans.

    When I read that, I thought about Alex Rodriguez and his therapy sessions during his suspension, chronicled here.

    They talk about mind-mapping, which is how Dr. David describes the way human beings read each other, and about regressing, by which Dr. David means being blocked or immature or unevolved. They talk about perping, Dr. David’s term for lying or conning or disrespecting or generally mistreating people. Actually, on second thought, it’s not at all clear what perping is, not from Rodriguez’s précis, but it sounds a lot like being an asshole, and after many months with Dr. David, Rodriguez concedes that he’s done more than his fair share.

  65. Vaniver says:

    On the subject of developmental stages, people here might be interested in Triumphs of Experience, a popular-science level summary of a longitudinal study on adult development. When I get home and have access to the book I’ll respond with some details.

    • Vaniver says:

      He mostly focuses on Erikson’s psychosocial developmental stages. Focusing on just the ‘adulthood’ stages:

      1. Identity vs. Role Diffusion. Achieving identity “is to separate from social, economic, and ideological dependence upon one’s parents.”

      2. Intimacy vs. Isolation. Intimacy is “the capacity to live with another person in an emotionally attached, interdependent, and committed relationship for ten years or more.”

      3. Generativity vs. Stagnation. Generativity is “the wish and capacity to foster and guide the next generations (not only one’s own adolescents) to independence.”

      4. Guardianship vs. Hoarding. Guardians are caretakers and curators who try to preserve cultures and cultural values for posterity, not specifically caregiving to individual people (as in generativity).

      5. Integrity vs. Despair. “Integrity is the capacity to come to terms constructively with our pasts and our futures in the face of inevitable death.”

      The ‘psychosocial’ bit is strongly relevant; this is rather different from the epistemic developmental milestones discussed in this post.

  66. Patrick says:

    A lot of concepts discussed under the umbrella of error theory fall into this framework.

    People *feel* the emotion of moral approval when they think about something, and conclude that goodness is one of its traits. But this seems no different from enjoying music and concluding that it is objectively good music, or hating your kids’ music and concluding that it is objectively just noise. It’s a fundamental inability to recognize the interplay between your preferences and the outside world.

    Or look at the concept of “meaning.” People act like texts “have meaning,” and will go to elaborate lengths to defend this idea no matter how far it breaks down.

    • Brad (the other one) says:

      >Or look at the concept of “meaning.” People act like texts “have meaning,” and will go to elaborate lengths to defend this idea no matter how far it breaks down.

      So texts don’t have meaning, huh? Well, let’s apply this to your post. You might argue that your post is about people reading meaning into situations based on emotivism, but *I* will argue that your post was actually an amusing short story about dinosaurs. Since texts have no meaning, neither option is better than the other, right?

      • Patrick says:

        Yeah, thanks for the object lesson.

        Whether one interpretation is better than another relies on there being a standard of some kind by which they can be judged.

        “Is this interpretation what the author likely intended” is a standard.

        “Is this interpretation one which a specific audience group might adopt” is another.

        “Does this interpretation fully account for all facets of the text in an internally consistent manner” is one more.

        We could go on.

        The fact that we can list these options, and the fact that these options don’t necessarily lead to the same conclusion, is what demonstrates that “what does the text mean” is a useless framework.

  67. ChristianKl says:

    And this doesn’t seem too different from the leftist sources that say Republicans can’t really care about the lives of the unborn, they’re just “anti-woman” as a terminal value.

    If it’s about the life of the unborn, you would expect those people to support paid leave around pregnancy to actually encourage women to go through with the pregnancy. If it’s about not respecting women’s right to self-determination, you would expect that people against abortion would also be against parental leave around pregnancy. Claims such as that women can shut down a pregnancy in cases of “genuine rape” would also count as evidence for the anti-woman thesis.

    I’m not sure whether the kind interpretation is correct.

    • Murphy says:

      The linked article covers it well but you’re making a lot of unfounded assertions.

      The most important point is the difference deontology makes.
      The Catholic church is hardcore on deontology: the idea that choosing to do something bad yourself is not acceptable, no matter whether the results are good. The idea that the end never justifies the means.

      As such they draw a distinction between goals and side effects. If a woman is pregnant and gets cancer and needs chemo, and the treatment ends up killing the foetus, the Catholic church is 100% OK with that. It’s a tragedy, but since nobody set out with the goal of killing the foetus, it’s morally OK.

      On the other hand, if a woman is pregnant and is suffering from organ failure due to the strain of the pregnancy, and the treatment she needs to live is an abortion, then the Catholic church’s position is that that is 100% bad and unacceptable. Someone is setting out to choose to kill a child in order to save someone else. Sure, both would die anyway, but they see it no differently from the mother choosing to kill and eat a 5-year-old child in a life raft in order to stay alive. The fact that they’d both die changes nothing.

      They don’t even accept committing moderate or mild sins to avoid bad outcomes, including the use of contraception. I think it’s moronic that they count contraception as a sin, but they do, and their moral philosophy doesn’t allow them to commit one sin to reduce the average incidence of a worse sin.

      • Linch says:

        It is difficult for me to condone this, because in my mind it *seems* obvious that deontology is a FAR more harmful meta-belief to have than the inability to model other people’s beliefs precisely…but then again I say this as somebody who can’t seriously model deontological beliefs as legitimate ones for intelligent rational people to have, suggesting some serious self-serving bias on my end.

        • Murphy says:

          People tend to be more deontological the closer to a situation they are.

          The simple example is that I would be totally unwilling to strangle a healthy baby to save the lives of 7 others, all dying for lack of a good organ and all of the same tissue type even if nobody would ever find out about it.

          In terms of consequentialism it’s the moral thing to do but the deontological position is that the murder isn’t made irrelevant by the lives saved.

          Most people are largely deontological until you get to population level choices.

          A lot of libertarian philosophy is based partly on the deontological idea that forcefully taking money or property from one person to save another’s life is unacceptable.

          • Linch says:

            I would personally be unwilling to die from organ failure to prevent 7 babies from being strangled, but that seems like a poor way to extrapolate/aggregate preferences.

            There are preferences, and then there are meta-preferences.

    • Vaniver says:

      If it’s about the life of the unborn you would expect those people to support paid leave around pregnancy to actually encourage women to go through with the pregnancy.

      Only if paid leave around pregnancy is completely costless. If it’s not (and it’s certainly not), then we have potentially differing beliefs about those costs and the tradeoffs involved.

      • ChristianKl says:

        The point is that in an environment where women have the free choice to have an abortion, you would expect people who sincerely care a lot about the life of the child to support paid pregnancy leave more than people who don’t care about unborn life.

        If you ran a PCA, being against abortion should correlate with supporting paid pregnancy leave. That’s what we would expect if concern about life is the driving factor.

        If we expect being anti-woman to be the driving factor, we would expect a correlation between being against abortion and being against paid pregnancy leave.

        As far as the issue of the act/omission distinction goes, we could control for that by including non-gender-related questions where it comes into play.
        Do you strongly believe that the relevant factor analysis would come out in favor of “concern about life” being the driving factor rather than “being anti-woman”?

        • Emily says:

          Hey, let’s try this the other way. Why do people support paid pregnancy leave? I don’t think it’s actually about concern for children. Because if it were about concern for children, we’d expect supporting paid pregnancy leave to be correlated with opposing abortion.

          • Who wouldn't want to be Anonymous says:

            I support paid child birthing leave because otherwise there is a strong incentive to try scheduling childbirth. Which generally means opting for a C-section. But use of that option includes a number of obvious surgery related risks, which are largely unnecessary. (Edit: And increased costs.) In addition, the concurrent rise in cesarean delivery and a whole host of other diseases is also, at the very least, suggestive.

            Early childhood parental leave is also probably a really good idea. But you would have to do a lot of convincing for pre-delivery pregnancy leave.

            I oppose abortion.

            I think this is all consistent.

          • Emily says:

            Yes, of course it’s consistent. But you could be consistent if you did support abortion as well. I was not seriously making that argument, I was pointing out that it’s a bad style of argument.

        • Anonymous says:

          I’m not sure it’s so obvious that someone who opposes abortion out of concern for the child should also support paid pregnancy leave. Perhaps they believe that it acts as a disincentive for employers to hire women. Perhaps they believe that it does work as intended, but that children are better off being brought up by their mother rather than put into daycare while the mother works.

    • Salem says:

      What Emily, Murphy and Vaniver say, but even more than that, you’re completely ignoring the act/omission distinction. There’s no contradiction between (for example):

      1. Pregnant women should not be allowed to kill the unborn.
      2. Employers should not be forced to grant parental leave to mothers.

      • Murphy says:

        To be fair, there is, from the consequentialist and, more importantly, from the pragmatist’s point of view.

        If you pick a less controversial thing and ask similar questions of catholics who are hardcore deontologists when it comes to abortion and contraception you can get wildly inconsistent answers.

        Though a big part of that is that a huge portion of people don’t actually understand their own side’s moral philosophy and just parrot things.

      • blacktrance says:

        Yeah, “you’re a hypocrite if you say you want to protect the unborn but don’t support paid parental leave” is analogous to “you’re a hypocrite if you want to protect people against murder but don’t donate to lifesaving charities”.

    • roystgnr says:

      If it’s about helping the women then you would expect those people to get out their own wallets or at least their collective tax dollars rather than trying to lobby a politician to try to coerce an employer into paying. If it’s about virtue signaling…

      You know, I’ll stop there. This sounded like a fun game, but I can’t get a third of the way through without feeling dirty. (But if I can’t easily put myself in the minds of people who can’t put themselves in the minds of others, does that mean I need to work on my Theory of Theory of Mind?)

      What does the unkind interpretation say about the nearly half of all women who are therefore “anti-woman”? It’s suspicious that you couldn’t even see a gender gap in abortion polling until a few years ago, if it’s all pure gender bias underneath.

      • ChristianKl says:

        If it’s about helping the women then you would expect those people to get out their own wallets or at least their collective tax dollars rather than trying to lobby a politician to try to coerce an employer into paying.

        I would indeed expect that if there were a bill on the floor of the House saying that a woman gets 3 months of government-paid pregnancy leave, a majority of feminists would want that bill approved and not rejected.
        Do you think they would want it rejected?

        What does the unkind interpretation say about the nearly half of all women who are therefore “anti-woman”?

        From what we know from implicit gender bias tests, women aren’t immune to it.

      • mtraven says:

        The point of Scott’s bringing up the issue was that the two sides don’t model each other fairly or accurately, which seems true. Anti-abortionists paint pro-choicers as murderers of children, and in the opposite direction the anti-abortionists are painted as oppressors of women. It’s probably safe to say that neither side sees themselves in that way.

        But I’m not sure the reason the two sides paint each other as moral monsters is a lack of cognitive development. In a moral/political battle, you are *motivated* to see your opposition as monstrous and alien. It’s just the basic dynamics of enmity. And while compassion and peaceful coexistence may be preferable, that’s not the world we are living in — there’s a war going on.

        The two sides of the abortion debate have very different ideas about morality, individual autonomy, and a great many other things that fit together into what I guess should be called ideologies. Say traditionalism vs. modernism. When a pro-choicer calls an opponent “anti-woman”, it’s not that they believe that person goes about consciously trying to harm women all day long; it is shorthand for the fact that the opponent subscribes to an ideology whose model for how women should live is radically opposed to their own.

        There is certainly some value in trying to understand your opponent’s real value-structure. On the other hand, in a war your main mission is to defeat the enemy, and understanding them is ancillary to that. To take a rather extreme form of traditionalism (trigger warning!) as an example – while it’s certainly worthwhile trying to understand the logic and reasoning behind such a worldview, the battle between my values and that one is not going to be won by argument or by empathic understanding.

        • Anonymous says:

          When a pro-choicer calls an opponent “anti-woman”, it’s not that they believe that person goes about consciously trying to harm women all day long, it is shorthand for the fact that the opponent subscribes to an ideology whose model for how women should live is radically opposed to their own.

          Thank you! This is the same point I just made elsewhere on this thread. I would add to “is radically opposed to their own,” in such a way that it actively harms women (rather than being beneficial or neutral).

  68. Emily says:

    I’m hesitant to call something a developmental milestone unless mastering it actually helps the typical person with life in some way. I don’t think it’s enough that it’s a sometimes-useful/informative mode of thought that some people go from not having to having.

    • Scott Alexander says:

      I didn’t get a chance to mention this in the article, but I think a lot of patients see psychiatrists because they’re missing these milestones.

      Think about, for example, people who are having really serious relationship problems / getting into fights. In a lot of cases, people don’t understand their partners’ different mind designs. For example, I used to have big fights with my parents where they would call me a slob for not doing some cleaning task, and I would call them oppressive for making me clean something that was already clean; now this seems to me like people just having different might-as-well-model-as-innate levels of what triggers their perception of dirty and gross, and if I had realized this it would have been easier to come to a compromise.

      Likewise, when someone says “My girlfriend isn’t spending enough time with me, that must be because she doesn’t care about me / doesn’t love me anymore”, they’re either committing a different-mind-design error (my desire for social interaction with a partner was about 10% of my ex’s, which was one reason we broke up) or a trust-System-1-judgment error (I feel neglected, therefore I am being neglected, because my partner is a bad neglectful person). This turns into them shouting “Why are you neglecting me?!” at their partner, instead of a reasonable discussion like “Look, I’m sorry about this, but when you spend less than two hours with me a day I feel neglected, what can we do to deal with this?”. And if they do say this, but their partner hasn’t grasped the insight, the partner might answer “Well, you’re being silly, two hours a day is too much” rather than something like “My desire for interaction is much less than that, but I realize yours is higher, let’s negotiate some kind of compromise”.

      I don’t know exactly how much these sorts of things matter in real life, but I bet it’s a lot.

      • onyomi says:

        One of the hardest but probably most helpful things my fiancee and I have worked on in our relationship is understanding that what is easy or hard or stressful or relaxing for one of us may not be so for the other, and vice-versa. We have had a lot of arguments that basically boiled down to “why are you stressing out about this when it is clearly not a big deal??” and “why are you not taking this more seriously when it clearly IS a big deal??”, and trying to more successfully model the other’s mind as different from our own has been somewhat helpful.

      • Reader says:

        Your examples of relating to your parents or girlfriend being useful make a lot of sense, but so many examples tend to be Republican/Democrat Blue/Red differences.

        Those examples strike me as closer to the post about people not being able to smell or thinking that everyone thinks numbers have a color associated with them. Kind of interesting to think about, but ultimately not especially relevant. If I know (or think I know) that right wingers don’t actually hate gay people, they just think that there’s going to be a massive zombie invasion any day now and gay sex pisses off god, and we are going to really need god’s help for the invasion — what am I supposed to do with that information?

        • Nornagest says:

          Well, in the example you could try establishing that God or zombies don’t exist. But in real life, it’s more likely that you’ve talked yourself into believing something short of their actual model. Stupidity usually beats malice as a model, but real-world stupidities are mostly high-level and subtle, not object-level and obvious; if you model someone as believing something that’s really absurd on its face, then ninety-nine times out of a hundred, it’ll turn out that you’re misunderstanding it, or that you’re lacking some context that makes it less absurd than you thought, or that it’s being professed for purely social reasons and doesn’t much affect their actual motivations.

          (But look out for that hundredth case. There are some Time Cubes out there — they just don’t usually underlie major political factions. Usually.)

  69. Neanderthal From Mordor says:

    Shinto is highly animistic and I wouldn’t call it primitive.
    The animistic intuitions of children are the basis of animistic religions, while in Abrahamic cultures the animism of children is superseded by learned religious beliefs.

    Edit: Lawrence Kohlberg’s theory of moral development states that the highest stage of moral reasoning is being a liberal.

  70. Anonymous says:

    Psychology textbooks never discuss whether this progression in and out of developmental stages is innate or environmental, which is weird because psychology textbooks usually love that sort of thing. I always assumed it was innate, because it was on the same timeline as things like walking and talking which are definitely innate.

    We might need to be a bit careful about what we call ‘innate’ and ‘environmental’. Wikipedia says that some feral children struggle mightily to learn how to walk upright after spending years on all fours. While I can’t vouch for any of the underlying research behind that conclusion, I do know about Space Rats. The experiment had rats which were gestated/born in space (a microgravity environment). If they returned to Earth within a certain time period (I think something like nine days; I’m going off memory here), they were quite likely to figure out how to stand and locomote on a flat surface in a 1G environment. If they returned after a certain time period (something like 12 days), it was quite likely that they would never figure out how to do this.

    One way I try to understand this is in the context of machine learning, particularly simple epsilon-greedy algorithms. We don’t know what the environment really is, so we start off injecting randomness (things like kicking in the womb may be mostly random events that are used to tune neural circuitry to physical parameters). Gradually, we learn about the environment, so we turn down the randomness and settle on a strategy that seems optimal considering what we’ve seen before.
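    A minimal epsilon-greedy sketch along those lines (the arms and their rewards are invented for illustration): start fully random, decay the exploration rate over time, and settle on whichever option looks best.

```python
import random

random.seed(1)

def epsilon_greedy(true_means, steps=5000, eps=1.0, eps_decay=0.999):
    # counts/estimates track a running mean of each arm's observed reward.
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(steps):
        if random.random() < eps:
            arm = random.randrange(n)  # explore: inject randomness
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = random.gauss(true_means[arm], 1.0)  # noisy feedback
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        eps *= eps_decay  # "turn down the randomness" over time
    return estimates

# With arms of true mean 0.2, 0.5 and 1.0, the learned estimates should
# end up ranking the third arm highest.
est = epsilon_greedy([0.2, 0.5, 1.0])
```

    The analogy to the womb-kicking story is only loose, of course: early random movement plays the role of the high initial epsilon, and the settled gait plays the role of the converged greedy policy.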

    The extent to which this tuning is, itself, ‘innate’ is deeply uncertain for biological systems. Nevertheless, it seems likely that most development is a complicated, time-dependent process that almost always has an environmental component coupled to whatever is going on ‘innately’.

  71. sherman says:

    Grasping the is-ought problem and value noncognitivism.

  72. greenergrassgrowing says:

    Funny, just yesterday I was writing a story in which (non-centrally to the plot) some people were capable of reasoning and some not, and I was all worried about how self-aggrandizingly tribalistic it was. Now it turns out it’s actually true?

    I need to update all my models. What if there’s a sixth level no one’s reached yet? What if there’s a sixth level, and someone *has* reached it?

  73. Orphan Wilde says:

    I observe that the author of the linked piece has effectively defined himself as being part of the most highly developed group, and defined some of his least-favorite people as significantly beneath him. I also observe that his model of development has a particular emphasis on social awareness as the metric of development, which results in a rather narrow idea of what constitutes development.

    There’s a pattern to those stages, though. The even-numbered stages are, for some position, a “Model of How Things Are,” a stage of certainty, which describes the class of entities defined in the even-numbered stage below. The odd-numbered stages are, for some position, “Realizing that that Model of How Things Are isn’t the only one,” a stage of uncertainty, which reflects the realization that the class of entities defined most recently is part of a larger set of classes of classes of entities. The second stage is a model based on Self. The third stage is realizing you’re not the only Self, and others have different experiences. The fourth stage is a model based on Society. The fifth stage is realizing yours is not the only society.

    The sixth stage, therefore, would be an internal model based on the class of societies and their interactions. The seventh stage would be the realization that this model is only one of a class of models describing societies and interactions. The eighth stage would be a model of the classes of models of interactions societies can engage in. And so on and so forth, each layer an abstraction on top of the one below it. Granted, fifth stage and onward are useless for those of us who aren’t engaging in social engineering. (The fifth stage is really only useful as a stepping stone to the sixth stage.)

    Which is to say, the general pattern is recognizable: Developing a map for the territory, and then realizing your map is insufficient as you explore more territory, and then, eventually, developing a new map. Repeat until death.

    • Is there an omega stage in which you realize that you can climb the stages indefinitely? Is there an omega+1 stage after that?

      • Whatever Happened To Anonymous says:

        Yes, you access it by pressing Up, Up , Down, Down, Left, Right, Left, Right, B, A when on the title screen.

      • Orphan Wilde says:

        You actually have to have the model, and then identify the issues with it, to meaningfully advance. Knowledge that knowledge exists isn’t the same thing as possession of that knowledge.

        As an example, knowing that there are other instances of the “Self” program out there, from inside a “Self” instance running at Stage 2, doesn’t mean you’re at Stage 3. You’re going to model the other “Self” instances as copies of your own; assuming they have the same information you do, for example.

        Likewise, knowing that other societies can have other values in no way prepares you to actually deal with societies that have different values. The knowledge is abstract; you don’t truly grok it. If you happen to somehow venture into a hunter-gatherer society that practices infanticide during lean years (when feeding one extra child means two deaths instead of one), your abstract knowledge doesn’t help you deal with living in that situation firsthand.

  74. onyomi says:

    One probably related problem I notice in myself and others is the inability to completely banish notions of agency from the natural world/fate. Most ancient religions, of course, explicitly personify entities like the weather and “fate,” but I think, even though we don’t curse Zeus when things go badly for us, it is still very hard for us not to feel, on some level, that the world “owes us” something or is out to get us, or is cruel and indifferent (an emotion experienced when the natural world continually fails to meet our subtle expectation that it should act like a person).

    I think people feel more entitled to be rude when they’ve been having a bad day, for example, not just because their bad mood makes them less able to engage in social niceties, but because, on some level, they treat “the world” and everybody in it as if it were a unitary thing possessed of agency with which they interact: when “the world” sends me a bad day, I’m not going to be nice to the world.

  75. Jiro says:

    He proposes a bunch of potential counterarguments, then shoots each counterargument down by admitting that the other side would have a symmetrical counterargument of their own… After three or four levels of this, he ends up concluding that he can’t come up with a meta-level fundamental difference, but he’s going to fight for his values anyway because they’re his.

    Now apply this to evolution versus creation. Or medicine versus homeopathy.

    I think the results you get show that there is a fundamental flaw in thinking this way. (His own example does too, for that matter, but it’s clearer when you consider the evolution or homeopathy cases.)

  76. J B says:

    A mental operation that’s become important to me directly as a result of reading LW and Scott is that a high level of certainty in a belief about the world, or commitment to an ethical value, is completely consistent with a willingness to change that belief/value based on new evidence or lines of reasoning.

  77. Sebastian H says:

    One might be the importance of procedural safeguards. It is most strongly seen in criminal settings, where people can’t see how letting one person off in order to make a safer system is better than trying to go after them by all means. (Type I vs. Type II errors?)

    Similarly in politics, many people don’t want to talk about the systemic damage of doing certain things to try to win or how violating norms to win in the short term can hurt things in the long run. The filibuster is a classic example. It was used/misused at all sorts of different points in US history, but it never became so regularized as in the last 20 years. Each step along the way was a violation of the norms of use and expanded filibuster frequency. Each step was condemned by the majority side and then embraced as they became the minority. And now we are at the point where the Senate is essentially a super-majority only institution.

    I’m not sure I’m defining the insight properly so I’m attacking it from a lot of different angles.

    Another example is rules vs. spirit of the law thinking. There are a lot of cases where a smart person who is rules-lawyering can prevail over a less intelligent person despite violating the spirit of the law. This actually cuts against my first point on procedural justice–which is why I include it. Procedural justice exists because going after certain types of errors (catch all the bad people!) ends up hurting the system (violate the privacy of everyone to catch all the bad people!). Rules lawyering corrupts this by abusing the procedure without any regard to the aim.

    Which maybe gets to another important insight. Balance. Procedural justice and substantive criminal justice are both important. Get too far into seeking just one and you end up hurting the system.

    (This may just be the trade offs insight so I’m going to stop here).

  78. Sam Rosen says:

    Are any of these examples of the thing you are talking about?

    – concepts don’t have necessary and sufficient conditions
    – there isn’t “the dictionary” aka platonism is false
    – outside view is super useful
    – steelmanning is not supererogatory for understanding people
    – categories we find interesting are probably not ontologically basic
    – we are not the center of the universe
    – humans were not built for “a reason” aka teleology is false
    – most distinctions are continuums
    – inactions are causally efficacious
    – most things don’t have one cause

  79. In “The Better Angels of Our Nature”, Pinker argues that the rise of novels was a key factor in falling rates of violence, because they made it vastly easier to imagine another person’s experiences.

  80. Sigivald says:

    The Post argues that because the Democrats support gun control and protest police, they are becoming the “pro-crime party”. I’m not sure whether the Post genuinely believes the Democrats are pro-crime by inclination

    It might be better to replace “The Post” with “The Author”, or the name of the blogger – whoever it was, since I didn’t bother to check.

    After all, that link was to one of The Post’s hosted blogs, rather than an editorial proclaiming the paper’s Editorial Position on the subject.

    Not the same thing at all, you know.

  81. John Sidles says:

    Is there historical precedent for David Chapman’s essay?  Selected references and quotations follow.

    David Chapman’s contemporary essay “Developing Ethical, Social, and Cognitive Competence” (2015) strikingly parallels Thomas Clarkson’s two-century-old (!) three-volume survey “A portraiture of Quakerism: Taken from a view of the education and discipline, social manners, civil and political economy, religious principles and character, of the Society of Friends” (1806).

    Clarkson’s work is valuable in that it is an “outsider” account, which arose from Clarkson’s founding role in the Society for Effecting the Abolition of the Slave Trade (Clarkson being one of three non-Friend members of the founding twelve-member committee).

    Clarkson’s opus thoroughly documents the Friends’ remarkably modern methods for “developing ethical, social, and cognitive competence” among Friends … methods that anticipate by two centuries many doctrines of the Less Wrong community:

    It is a just feature in their [Friends’] character, that, whenever they can be brought to argue upon political subjects, they reason upon principle, and not upon consequences; for if this mode of reasoning had been adopted by others, but particularly by men in exalted stations, policy had given way to moral justice, and there had been but little public wickedness in the world.

    (emphasis added by me)  It is the commitment to “reasoning upon principle, and not upon consequences” that lends a Less-Wrongesque flavor to both Clarkson’s and Chapman’s analysis.

    In this regard Part I of Clarkson’s work (“Moral Education”, Chapters 1-9) contains innumerable Less-Wrongesque passages:

    Moral Education of the Quakers  Amusements necessary for youth — Quakers distinguish between the useful and the hurtful — the latter specified and forbidden.

    The Quakers have thought it proper […] to draw the line between those amusements, which they consider to be salutary, and those, which they consider to be hurtful. […]

    Among the bodily exercises, dancing, and the diversions of the field [e.g., hawking and riding-to-hounds], have been proscribed; among the mental, music, novels, the theatre, and all games of chance, of every description, have been forbidden. […]

    Chapter 1: Games of Chance  Quakers forbid cards, dice, and other similar amusements — also, concerns in lotteries — and certain transactions in the stocks — they forbid also all wagers, and speculations by a monied stake — the peculiar wisdom of the latter prohibition. […]

    Chapter 4: The Theatre  The theatre — the theatre as well as music abused — plays respectable in their origin — but degenerated — Solon, Plato, and the ancient moralists against them — particularly immoral in England in the time of Charles the Second — English plays better than formerly, but still objectionable […]

    Chapter 9: Novels  Novels forbidden — their fictitious nature no argument against them — arguments of the Quakers are, that they produce an affectation of knowledge — a romantic spirit —and a perverted morality — and that by creating an indisposition towards other kinds of reading, they prevent moral improvement and real delight of mind — hence novel-reading more pernicious than many other amusements.

    These effects [of novels] the Quakers consider as particularly frightful, when they fall upon this sex [of women]. For an affectation of knowledge, or a forwardness of character, seems to be much more disgusting among women than among men.

    It may be observed also, that an unsteady or romantic spirit or a wonder-loving or flighty imagination, can never qualify a woman for domestic duties, or make her a sedate and prudent wife. Nor can a relaxed morality qualify her for the discharge of her duty as a parent in the religious education of her children.

    Differences between Friendly cognitive practices and LW cognitive practices can be largely ascribed to the Friends’ relatively greater emphasis upon the crucial role — alike in virtue, security, and happiness — of multigenerational domestic life:

    The Task

    Domestic happiness, thou only bliss
    Of Paradise, that has survived the fall!

    Thou art the nurse of virtue — In thine arms
    She smiles, appearing, as in truth she is,
    Heav’n-born, and destin’d to the skies again.

    Forsaking thee, what shipwreck have we made
    Of honour, dignity, and fair renown!

    — William Cowper (1785)

    Indeed, in both historical and modern times the medical and therapeutic professions are well-represented in Friendly communities … the cultivation of happiness, both individual and domestic, having considerable economic value and social cachet in all centuries.

    What can we take away from these teachings of history?  The Friends’ well-tested (and outstandingly effective) cognitive training methods extend up to Chapman Mode 4 “System”, but notably stop short of Chapman Mode 5 “Fluid” … and so we can conclude that a great challenge for the Friends — and for the LW community too, and for all “born members of the [Melville’s] First Congregational Church” — is the extension to Mode 5 of these venerably Friendly practices.

    Needless to say, too, Mode 5 extensions to faith and practice must of necessity be as heretically outrageous in our century, as the extensions to faith and practice of Clarkson and Cowper and Melville (and many more) were in theirs.

    Other natural questions arise.  The Friends codify their faith and practice in concrete recommendations for family life and child-rearing … it is striking (to me) that the LW community and/or Chapman’s essay offer scant guidance in this regard … can anyone recommend a LW child-rearing guide? … do such things even exist?

    Appreciation and thanks are extended to Scott A/SSC and its commenters, for so capably fostering public discourse upon these crucial issues (as they seem to many, including me).

    • John Sidles says:

      One further reflection:

      Scott Alexander wonders  “This raises the obvious question of whether there are any basic mental operations I still don’t have, how I would recognize them if there were, and how I would learn them once I recognized them.”

      Many cultures (most cultures? all cultures?) evolve methods for teaching/learning the answers to Scott’s questions.

      The Society of Friends has evolved a unique method for teaching/learning basic mental operations — a method whose traditions are flexible, adaptive, and communal — unprogrammed worship.

      Professional therapists are intimately familiar with Friendly cognition-augmenting teaching / learning methods … indeed, in urban Friends meetings it commonly happens that the medical and therapeutic professions are over-represented to a degree that Friends themselves experience as hilarious.

      The Friends’ still-evolving practices of unprogrammed worship thus provide an historically effective (and empirically well-tested) answer to the natural “Mode 5” question “Where do therapists go for therapy?”

      • Vaniver says:

        Many years after I became an atheist, I decided to attend a Quaker meeting for a month after learning that they accepted nontheists. Friendly people, but far too politically liberal for my tastes.

        The experience of being “moved to speak” was a surprising one, and I’d recommend others give it a try if there’s a meeting nearby. If there were sufficient LWers in my city (who were willing to risk shutting up for an hour 😉 ) I’d be very interested in seeing how unprogrammed worship would work with them.

        • John Sidles says:

          Randall Munroe’s fans (especially) might enjoyably accept Vaniver’s invitation … xkcd’s natural join of “Mode 4+” rationality with irenic hilaritas being thoroughly informed by Randall’s Friendly roots …

          • Vaniver says:

            You may be interested to learn that I reached the 5th highest post count on the xkcd messageboards before moving on to LessWrong 😉

        • Peter says:

          I had a friend at university (one of the lecturers who I liked to sit with at college breakfast) who was known to say of his experience of Quakerism: “I know the Spirit moves in mysterious ways, but it moves people to speak rather more often in Cambridge than it does in other places.”

          • John Sidles says:

            Lol … this accords with Scott A’s wry observation on human cognition — which is very much in the Friendly spirit of Mark Twain’s Pudd’nhead Wilson — “Not all readers from Massachusetts are able to correctly spell ‘Massachusetts’.”

            Which, when you think about it, would make a pretty good general-purpose cognition-expanding xkcd “title text”.

          • Peter says:

            Note that this is the Cambridge which features bridges over the Cam, rather than the one featuring bridges over the Charles. Although I wouldn’t be surprised if things are similar over there, too.

          • John Sidles says:

            Peter opines  “I wouldn’t be surprised if things are similar over there too [at both Cambridges].”

            Indeed, for three centuries and more, even unto the present day, fluke-chasing persons have been heartily welcomed to seek illumination — including Chapman-style “Mode 4+” and/or connectome-rewiring illumination — equally on the eastern and the western shores of the “rolling deep and dark blue ocean“.

  82. Thanks! Glad the writing is clear. I wasn’t sure many people would be able to understand this particular post.

    There have been various proposals for “what comes after 5.” For instance, I briefly discussed Cook-Greuter’s framework in a comment.

    There’s good empirical support for the existence of stages 1-5; proposed stage 6s, not really. Interesting to speculate, however! And yes, several people have suggested that even numbered stages might continue to be individuated and odd-numbered ones more socially-oriented.

  83. onyomi says:

    Other things I find it hard to intuitively grasp:

    My words and actions have real-world impact
    It is possible to like me
    It is possible to not like me
    It is possible to be intimidated by me
    Not everyone understands I am basically a nice guy
    People can’t read my mind
    It is possible to lie and get away with it because people can’t read my mind

  84. Cet3 says:

    A hypothesis I’ve been toying with for a while is that the human brain is heavily optimized for something that might be called “narrative” or “dramatic” thinking. Basically, modeling the world in terms of the purposeful actions of a limited number of intelligent agents. Stories are the human mind’s favored way of organizing and storing complex information. Even the most disciplined of critical thinkers can only avoid slipping into this sort of reasoning with constant vigilance and effort. It’s like trying to keep a bike out of the ruts in a well-worn trail.

    • Troy Rex says:

      Yes, couldn’t agree more! Physics and evolution and economics all show small pieces combining accidentally to form a larger system – and that’s a very inhuman picture.

      It’s hard not to reach for story and morality, but avoiding that rut seems to be a huge part of a scientifically-informed perspective (or whatever better way that can be phrased).

      • Max says:

        Why fight what is natural? Create a better, more fitting story, instead of making a patchwork of incoherencies which you think might be a better representation of the system.

        I always had a hard time with abstract math, but did much better with physics. Why? Because in physics there are enough things to grab onto to create a visual image with a narrative. Math, at least the way I was taught, was a bunch of formulas with very mechanistic meaning. Only later did I discover that math can actually be represented much better with images and real concepts, not with cryptic formulas and definitions obscure in their soulless precision.

  85. Mary says:

    “The Post argues that because the Democrats support gun control and protest police, they are becoming the “pro-crime party”. I’m not sure whether the Post genuinely believes the Democrats are pro-crime by inclination or are just arguing their policies will lead to more crime in a hyperbolic figurative way”

    Or, three, they are acting objectively pro-crime. That framing was very popular among Communists. And others. Orwell observed that during WWII, pacifism was objectively pro-Axis: because the Axis powers treated pacifists as criminals and the Allies didn’t, pacifists’ net effect was to hinder the Allied war effort and so aid the Axis.

    He also admitted that he overdid it — there were certainly situations in which the pacifist’s subjective stance would matter — but still, he had a point.

  86. FJ says:

    Here’s a nice illustration of how difficult modeling other-minds can be: in an article explaining a case in which the US Supreme Court is trying to interpret a statute punishing child pornography, Noah Feldman writes,
    “If you’re still reading, you’re either interested in grammar or really care about civil liberties, even for attempted rapists who possess child pornography.”

    Now, I have no beef with Feldman and recognize that he’s just trying to be lighthearted and self-deprecating. But it’s amazing to me that he’d totally overlook the other reasons why someone might be interested in such a case: hypothetically, a human being might actually want attempted rapists who possess child pornography to be incarcerated for lengthy terms. Or, for that matter, his readers might be convicted sex offenders who like child pornography and thus have a rational curiosity in the legal penalties for that activity (although I suppose that would be a pretty motivating factor in making one care deeply about civil liberties). It’s funny to think that a case interpreting a criminal statute is of interest only to grammar pedants and civil libertarians.

  87. Michael vassar says:

    Wow! Kegan is interesting. I think that I very strongly identify his idea of developmental progress with my idea of inauthenticity or valuelessness or domination, however. The major triumph of history seems to me to be Liberalism, the replacement of his 4th stage with the (usually massively slandered) second stage. The ‘fluid mode’ sounds like Taoism, or like Tantra or Bhakti, and Chapman, of course, is a Tantra student. To me, that’s an evolved non-agentic parasite of Stage 4 systems. I like killing stage 4 systems, so I like Tantra and Bhakti, but personally I want to be what he calls stage two and want other people to be that as well. Admittedly, I prefer stage 2 to be tempered with fragments of cognitive software from stages 3 and 4, but only because that’s sexier. I don’t think you can sustainably have a technological society at stages beyond 2, as higher stages aren’t generally intelligent, either alone or collectively.

    • sakkyokusha says:

      Kegan seems identical to “constructive developmental theory”, right down to the numerical labels. At any rate, whatever the differences between them, the following applies to both:

      The primary problem in my life has been the oppression of my Stage-4 self by Stage-3 authority figures (parents etc.) arising from the mistaken belief that I was at Stage 2. That is, people at Stage 3 who can’t tell the difference between Stage 4 and Stage 2, at least as far as it concerns people over whom they have power (as opposed, perhaps, to those having power over them).

      This may help explain why you (Michael) have seemed to shift enigmatically between being my ally and adversary over the years. On the one hand, you dislike the oppression coming from Stage 3, but on the other, you don’t ultimately approve of my quest to move through Stages 4, 4.5, and 5. (Also, there have been some direct conflicts arising from your Stage-2-ness.)

      For what it’s worth, I think my current situation is more like 5 (or 4.5) being oppressed by 4, which is less bad than before, but still a problem. One part of the problem, ironically, is insufficient empathy and understanding of how oppressive Stage 3 can be to Stage 4, because the Stage 4 folks haven’t really dealt with stages lower than 4 trying to exert power over them.

      • Michael vassar says:

        Ally? Adversary? Who are you?
        I agree that what Kegan calls stage 4 is actually frequently oppressed by parents at what he calls stage 3, which should provide strong evidence that they aren’t really stages, since otherwise, given similar genetics 4 shouldn’t frequently appear in a child while absent in parents.
        In Venkatesh Rao’s terms, I think 4 is clueless, 3 is loser and mad about sociopaths, 2 is sociopath, and 5 is loser but aware that losers get the best part of the deal, so not mad.
        Note though, that Rao sees this as a stage progression in the reverse direction from Kegan.

        • sakkyokusha says:

          I’m not convinced that Stage 4 appears particularly frequently in children of Stage-3 parents. I’m pretty much the only case I know. For nearly all of my “peers” (by which I mean bay-area rationalist/LW types, who are near-universally 4’s), the main drama of their lives is entirely intra-4 drama. The exceptions involve a few elite 2’s such as yourself, idolized but not really imitated. Stage 3 just isn’t part of the landscape. People know about it theoretically, and (at most) pity the poor LW/SSC readers in flyover country who have to deal with it, the way they pity the homeless on the streets of SF, i.e. without accepting them into the same reference class as themselves with full cognizance of the circumstantial differences.

          Who am I? Imagine a muggleborn wizard in a magical society that thinks there are only muggles and pure-bloods. The options are acceptance (for wizards, assumed to be pure) and pity (for muggles), but not both.

          I do think I agree about the Kegan-Rao mapping you describe, and, in the end, probably also about your general implication that “beyond” (i.e. better than) Stage 5 lies a return to Stage 2 in some fashion. (Actually, this is already contained in Rao’s “Gervais principle”.)

  88. alaska3636 says:

    It took me a while to figure out that other people are often not aware of what actions they are choosing and why they are choosing them; growing up, it always felt to me that everyone else was more in tune with themselves, when it was almost precisely the opposite.

    Another aspect of this is my intuitive grasp of social pressure and signaling: I appear to be very socially intelligent, but I am just very good at memorizing actions and extrapolating different contexts for certain behaviors that I don’t intuitively understand, like when someone fails at something, takes it personally, and just needs a pat on the back or something.

    This would feel somewhat patronizing to me, as I am usually pretty good at not personalizing individual failures (lack of prep, lack of natural inclination, luck, whatever…), but I am good at recognizing when other people really appreciate certain gestures. If I don’t have experience with a particular gesture or context, then I appear very foolish, rude, or uncaring, because the rest of the time I seem to know all the other social cues and rules. I think the underlying bias is a lack of emotional awareness, which oftentimes appears as coldness and over-reliance on thinking. INTJ, basically, not to dredge up that post again or the question of whether it means anything.

    Probably a lot of people here would relate to the conflict between doing things your own way which appear to be (and probably are) more efficient but don’t take into account other people’s reliance on traditional modes of thought and social pressures to conform.

  89. SUT says:

    We know SSC-ers can debate intellectually.

    But how many of us can “sell me this pen” ?

    • Scrolling backward from my own similar sentiment in search of a kindred spirit, I have to say this is extremely well put.

    • Bugmaster says:

      What do you mean by “sell me this pen” — do you mean, “convince me that our way of thinking is superior”, or “embezzle millions of dollars to live a carefree life in a drug-fueled haze”? Heh.

      Seriously though, if it’s the former (hell, or even the latter), then there are at least two ways of accomplishing the task.

      One way would be to start a cult of personality around some incredibly charismatic public figure, so that everyone just does what he/she says. This is going to be a little tricky, since you are trying to teach people a mode of thinking that prevents them from falling for this exact technique.

      Another way would be to take on some intellectually challenging task, then outperform all other people who attempt to complete this task, but who are not blessed with your enhanced way of thinking. This is also going to be tricky, since such tasks — scientific discoveries, engineering achievements, game tournaments, finance management, etc. — usually require a lot of background knowledge, not just raw intelligence. In addition, this approach will totally fail unless your enhanced mental techniques are actually better than the usual ones when applied to real-world problems.

      • Aegeus says:

        “Sell me this pen” refers to a classic interview question for people in marketing. The interviewer picks up a pen from his desk – an ordinary pen, nothing special – and asks you to sell it to him. It’s supposed to test how good your rhetoric is when your product is completely worthless. I think SUT is asking if SSC readers can persuade as well as debate, which would be an example of a mental skill we’re missing.

        Forming a cult of personality around the pen could work, but that’s hard to squeeze into a job interview 😛

        • SUT says:

          It’s not a test of rhetoric, e.g. a rousing reading of King Lear’s soliloquy (or Psalm 86).

          It’s about demonstrating a theory of mind in its most practical form: get someone to give you their money.

          To do this, you need to size up your customer on the spot, understand his inner desires which will remain unspoken. You’re free to use any theory you think can help – Jung’s collective unconscious, or your grandma’s stereotypes.

          If you can’t do this (and I certainly cannot) do you really have a well calibrated theory of mind?

          • Bugmaster says:

            If the pen is truly worthless, then merely having a theory of mind of the prospective customer — let’s call him “Mark” — is not enough. In addition to that, you must understand how to exploit all of his mental weaknesses, and how to do so quickly (the long con won’t work in a job interview).

            By analogy, basic understanding of the principles of control theory and computer science is necessary, but not sufficient, in order to build a fully functional autonomous drone.

          • Aegeus says:

            Rhetoric doesn’t just mean ability to read a prescripted speech. The term applies to pretty much any use of words to persuade, inform, or direct an audience. Convincing someone they want a pen definitely falls under “rhetoric.”

            I’m not disagreeing with you, just being a little pedantic.

    • Vorkon says:

      “That’s a real nice pen you’ve got there. It would be a shame if something were to…”

      *snaps the plastic paperclip thingy off the cap and looms menacingly over the desk*

      …happen to it.”

      I don’t tend to get too many call-backs from interviews. Strange…

  90. Bugmaster says:

    People living in our modern culture exhibit animism all the time. We yell at our cars when they don’t start up right away. We ask, “what the hell does it want from me?” when some website’s login process is not intuitively obvious. We think that “The Internet” is something that magically serves us cat pictures on demand.

    Ok, so maybe you know exactly how networking protocols work. Also, microwaves. Even cars, maybe, although I personally only have a vague idea about mine. But most people do not think this way. They do not intuitively understand that there’s some specific hierarchy of systems that powers a technological device (and certainly, not a natural occurrence like tides or eclipses); to them, it is all one monolithic thing that might as well be magical (of course, some people are religious and thus explicitly believe that natural occurrences are magical).

    I say “they”, but really I should say “we”, because most people act this way about things that are outside of their direct area of expertise.

    • onyomi says:

      Agree, but I don’t think it necessarily has to do with how well or poorly one understands the workings of a particular machine or natural phenomenon. I think it’s a symptom, rather, of the fact that our big brains got so big at least partially, if not primarily, for the purpose of navigating complex social interactions among smallish groups of people. Thus, our default mode of analysis is to treat anything complex as if it were a human brain.

  91. Tony says:

    Since when do modern western societies not obviously and constantly engage in magical thinking?

  92. Vorkon says:

    I’ve always wondered if maybe the results of the Amy and Brayden experiment have less to do with Amy being unable to model Brayden’s mind than with Amy being unable to figure out why the researchers are asking her such a ridiculous question, and giving them the answer she expects they want. (After all, she was wrong about the Skittles last time, and felt quite silly for it! They’re not gonna’ fool her again!)

    Admittedly, this is probably just a result of *ME* being unable to correctly model the mind of a 3 year old, since that is the only way I could imagine myself giving such an answer, but I still think it’s an amusing way to look at it.

    • ThrustVectoring says:

      It’s a modeling thing. They’ll ask you if you remember something they’ve seen.

      • Vorkon says:

        Oh, I get that. I’m just saying that Amy may be thinking, “why on earth are these weird people in lab coats asking me something about Brayden? That’s a pointless question. They already know what’s in the bag, so what does it matter what Brayden thinks about it? They’re just trying to trick me into saying “skittles” again, aren’t they? Those big meanies! Well, they won’t fool me THIS time!”

        In some ways, that line of reasoning requires even higher level modeling of another person’s mind than the original question, since she’s assigning complex motivations to the researchers. In other ways it’s even more selfish, because she can’t seem to understand that the researchers could be asking her something that isn’t about her, and that the question MUST be related in some way to her hurt feelings relating to getting the previous question wrong. I’m just saying that maybe it has less to do with whether or not she understands that other minds are separate from hers than with whether or not she cares.

        I know, that’s kind of a stretch. Like I said, I only proposed this theory because it’s hard for me to imagine NOT being able to model another person’s mind. (Even if I get my model wrong most of the time. lol.) I just think it’s another potential explanation, and one that’s fun to consider.

  93. TomA says:

    This topic brings to mind a fundamental question in the etiology of psychosis: when is a deficiency a problem? In other words, if your cognitive toolbox is short a few items, are you sufficiently ill to justify a remedy, or just living the simple life with less stress?

  94. John Brunner played with an idea a little like this in The Long Result. It wasn’t really central to the plot though.

  95. We might also look for rewarding and practical skills (storytelling! dance! face-to-face communication! teasing! etc.) that primitive humans may have had and we personally do not, and not only for more high-falutin, energy-intensive, unproven meta-philosophical mental technology.

  96. ThrustVectoring says:

    I think it’s much more valuable to talk about the kinds of things you’ve generalized over in the higher levels of development, rather than what level of development you’re at. Like, I have a work-in-progress model of akrasia as the result of children generalizing over “when I accomplish things, my parents react by raising their expectations, and this new system of rewards and punishments isn’t worth the gains from accomplishing things”. (I model excessive work ethic similarly, except with systematic changes that are worth the price.) The really sad thing about this is that since these children were forced to grow up too soon, they’ll tend to pattern-match outside experiences as part of a narrow framework.

    As an aside, you can get a lot of mileage out of constructive developmental theory in analyzing and appreciating movies and TV shows. For a couple of great examples: “What’s Eating Gilbert Grape” (for the struggle of being in Stage 3) and “BoJack Horseman” (for some excellent examples of the interplay between characters in the first four stages).

  97. stargirl says:

    Honestly, many people’s reactions to Kegan’s ideas remind me why Buddhism has a system to authenticate spiritual progress. It is extremely difficult to know if you have made spiritual/ethical progress or if you have just gotten confused in a new and different way. In many branches, if you feel you have reached a new understanding, you discuss this with your teacher. Many people think they have reached enlightenment.

    It seems many people think they are in the “stage 4.5-5” group. At least one of these people seems extremely morally confused to me. Though I am, of course, not going to say this. Perhaps I am just stuck in stage 4 or perhaps stage 2 (stage 3 seems unlikely).

    I am mostly just venting that people’s reaction to this article is annoying to me personally. It will be even more annoying if people start dismissing things as something like “Stage 4 concerns.” But I am also wondering if people have any ideas on how one can get a less subjective measure of their moral/ethical achievement.

  98. discursive2 says:

    May have missed it but has anyone brought up Buddhism / meditative philosophy about the self yet? Seems like a pretty clear candidate for a developmental stage that isn’t widely adopted yet:

    -reframing of the way you see the world / decentering of naive childhood models: namely, the self is a construct that the mind is constantly attempting to maintain, and this construct can be released / detached from while still staying alive and functional
    -new mental degrees of freedom gained by adopting this perspective: death and pain are no longer sources of fear, and goal-striving becomes less important, allowing for more creative use of time
    -claims that this represents an evolutionary step forward for individuals and society

    Dunno if this is true — I haven’t personally mastered this perspective so I don’t have the perspective to claim it’s superior — but certainly seems to match the pattern

  99. Murphy says:

    Ok, this is weird

    for some reason I can’t seem to respond to this comment.

    Any response I post appears to get black-holed somehow and never turns up on the site.

    What’s up?

  100. ryan says:

    This reminds me of the few times I’ve tried to convince people that antidiscrimination laws are regulating morality. Discrimination is immoral, so there’s a law against it. I’ve never gotten anything but pushback on that point.

    • Not sure what you mean by that but you are probably running into issues because “regulating morality” is a poisoned phrase that only those opposing discrimination laws use.

      • Really? It (or very similar phrases) used to be quite common as I recall, usually used to argue against laws criminalizing things like homosexuality, prostitution, or oral sex.

    • onyomi says:

      I’ve never taken this approach, but I’m not surprised you get pushback: regulating morality is something intolerant, often religious people do. Though it’s funny that the group which is usually more okay with regulating stuff refuses to admit they might want to regulate morality; it seems to mean that they actually have a problem with the word “morality.”

      • ryan says:

        I think that’s it. It’s like they really want to see their moral rules as scientific truths or something.

    • blacktrance says:

      Because in unsophisticated discourse, “morality” is synonymous with “social conservatism”.

      • ryan says:

        So one of two things is going on. One is they think I’m stating that discrimination laws are based in social conservative values. Or they haven’t leveled up their thinking to be able to conceive of a common category, “values,” which can contain social conservative values, liberal values, and other kinds of values.

        Obviously I suspect the second, but I can’t completely write off the first. It amazes me that it’s possible, but people will often give a construction to a person’s argument which is sufficiently stupid that its existence stands in stark contradiction to the fact that the argument was parsed in English, was composed of correctly spelled words, had subject-verb agreement, etc.

  101. AR says:

    This reminds me: it was a “brain on fire” revelation for me when I read George Lakoff’s “Moral Politics: How Liberals and Conservatives Think”. It completely explains how it is possible to think things like “if we tolerate homosexuals we are likely to have more pedophiles.” It’s because of the underlying mental model of the world that conservatives have, which is different from my liberal one! (The book is not about how liberals or conservatives are all right or all wrong; this happens to be an example where the conservative idea is totally wrong though 🙂)

  102. Christian H says:

    I am breaking my self-imposed commenting ban to recommend you look into personal epistemology. Kuhn’s version (not the same Kuhn) was a big deal to me; Perry’s version is likely the most famous. Kuhn and Perry also postulate that most people get stuck in the middle stages or even regress and become entrenched when they try the next stage and find it difficult.

  103. Anthony says:

    This post hit home very hard for me. The experience of gaining what I call “full theory of mind” was a very big one for me, and only occurred around the middle of college. I went from “I think, they do” (and, usually, “Why do they do what they do when I just think…?!”) to the infinite recursion of:

    “I think, so I do.”
    “They see what I do, think, and infer what I think. Considering this, they do.”
    “I see what they do, think, and infer what they think. Considering this, I do.”
    …and so on down the line…

    Putting this into your rubric, I experienced a combination of 1. growth of theory of mind, and 2. ability to model different mind-designs.

    It was like magic. Suddenly, I could flirt. I could figure out how to ask someone out without it being weird. I could understand why one person liked me and another one didn’t, and how I could modify my behavior to make myself nicer to be around. I felt that I was included in this enormous hitherto invisible community of people who were all continuously modeling one another’s minds via their behavior and ingeniously accommodating one another’s desires with their own. Social competence is about optimizing the outcome for everyone involved in an interaction, and so you cannot be socially competent until you understand your own desires, your peers’ desires, and how your actions affect your peers’ ability to fulfill their desires.

    The hardest part, actually, was interacting with people who were very like myself before I had this enormous epiphany. How do you explain to someone that they have a totally inaccurate view of themselves in the world because they’re just not thinking clearly enough to arrive at true empathy?

    Anyways, good post.

  104. unsafeideas says:

    I suspect that those cognitive development findings are highly cluttered with local culture. I live in an area where languages traditionally mixed, toddlers routinely watch tales in foreign languages (the belief is that it might make it easier for them to learn the language later on), and kindergartens have English lessons. Since learning a foreign language is considered so important, many parents push it on their kids practically from day one.

    I have not observed the “kids don’t get foreign languages” thing. There are different languages and they know that. Moreover, kids growing up in a nationally mixed environment also get to see differently speaking families all the time, so they are not confused about the concept. (That does not imply a tolerant utopia; animosity exists, of course.)

    It seems to me more likely that the seven-year-olds were having those weird reactions because it was their first encounter with something that is not Japanese. As I said, where I live, the concept of foreign languages is something a small kid runs into all the time.

    • unsafeideas says:

      I would also treat those “we asked a kid under three a complicated question and the kid did not answer right” studies with skepticism. In my experience, kids under three do not understand “if” and “imagine that” questions. They are too complicated on the language, abstraction, and even logic level.

      A wrong answer might just mean that the kid does not get what you are asking, and may have nothing to do with whether the kid thinks you know exactly what the kid knows.

      Anecdotally, my kids, when they were under three, knew that I know different things than they do and did not expect me to know exactly what they know (e.g., that objective reality thing), but they would not have been able to interpret the question mentioned above.

  105. Guy says:

    How about “recognizing that the terms of a problem are limited”? I’ve had a bunch of people object to, say, the blue-eyed islanders problem with something along the lines of “but real humans don’t work like that”. Or perhaps “but what if they look in a puddle?”, or “why would they even think through the problem?”. This seems to me like them being annoying and refusing to consider the problem as stated, but I sort of believe there has to be something more going on. (another fundamental skill: you are not describing a real human belief/action/thought unless a real human would believe/act/think like that)

  106. Nero tol Scaeva says:

    A nefarious typical mind fallacy I’ve started noticing everywhere:

    1. Group A is composed of subgroups which believe 1, 2, 3, 4, and 5
    2. My beliefs round off to something similar enough to belief 3.
    3. Therefore, the “correct” belief of Group A is belief 3.

    So for a bit more of a concrete example: Bob is a liberal. He sees a somewhat large variation in the types of Islam present in the world. Since liberal Islam lines up the closest to Bob’s liberal beliefs, he believes that liberal Islam is the “correct” version of Islam, even though Bob is an atheist.

    This is really closely related to the “I love watching Power Rangers so daddy must secretly love watching Power Rangers too” and the “Conservatives must hate women since they oppose abortion” type of thinking.

    • brad says:

      Do non-Muslims really believe there is a “correct” type of Islam? Sure, people say things like “ISIS is a perversion that doesn’t represent true Islam, a beautiful religion of peace,” but I tend to think that statement is more instrumental than a positive statement about the actual fact-of-the-matter. I’m not even sure what such a positive statement would mean — something like “the majority of believers believe X” or “Mohammed, if he were here, would say X” or ???

      • Anonymous says:

        Non-muslims may believe there are better versions of Islam than others, and that there are central vs. non-central versions. It makes little sense to talk about a “correct” version of Islam.

  107. John Sidles says:

    Scott A observes  “Both psychotherapy and LW-style rationality aim to teach people some of these extra mental operations. The reactions to both [psychotherapy and LW-rationality] vary from enlightenment to boredom to bafflement depending on whether the listener needs the piece, already has the piece, or just plain lacks the socket that the piece is supposed to snap into.”

    (emphasis added by me).

    But doesn’t Scott A (surprisingly) overlook two exceedingly prominent reactions? The omitted reactions being anger and denial.

    It’s not easy to circumvent reactive anger and denial … because, as we all know, the circumvention of denial automatically triggers the reaction of anger … as is so vividly portrayed in James Bond’s angry denial of his own problematic drinking habits in the recent movie Spectre (for which, kudos to Peter Rosenthal’s lucidly transgressive analysis).

    A proclivity for angry denial is hard-wired in our minds, for common-sense reasons that the sociobiologist Ed Wilson articulates:

    “Nowhere do people tolerate attacks on their person, their family, their country … or their creation myth. […] Our leaders, religious, political, and business, mostly accept supernatural explanations of human existence. […] Scientists who might contribute to a more realistic worldview are especially disappointing. Largely yeoman, they are intellectual dwarves content to stay within the narrow specialities for which they were trained and are paid.”

    Further readings include a thrilling (to me) four-essay sequence by mathematicians / philosophers Colin McLarty, Michael Harris, Tim Gowers, and Barry Mazur, as collected in Circles Disturbed: the Interplay of Mathematics and Narrative (2012, Princeton University Press). The four essays are:

    •  Colin McLarty  “Hilbert on theology and its discontents: the origin myth of modern mathematics”
    •  Michael Harris  “Do androids prove theorems in their sleep?”
    •  Tim Gowers  “Vividness in mathematics and narrative”
    •  Barry Mazur  “Visions, dreams, and mathematics”

    We are fortunate to live in a century in which the foundations of mathematical reasoning and culture are being so ably deconstructed by its highest-level practitioners.

    Contrastingly, consider the angry denialistic cognition that is commonly elicited (even among STEM professionals) by deconstructive analyses of ultra-rationalist icons like Ayn Rand, Steven Pinker, John von Neumann, Kurt Gödel, and Richard Feynman.

    Needless to say, it’s not easy to argue against angry denialist cognition: even the much-praised prose of Tom Paine’s Common Sense (1776) rates an ultra-high difficulty Flesch–Kincaid Level 12.0, whereas (in comparison) the high-level abstract mathematical language of Charles Lutwidge Dodgson’s seminal Symbolic Logic (1896) rates a significantly easier Flesch–Kincaid Level of 10.5. Perhaps this explains why more people praise Tom Paine’s clarity than scrutinize his difficult prose and intricate reasoning.
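    (For the curious: the Flesch–Kincaid grade level cited above comes from a simple arithmetic formula over sentence and word lengths. Here is a minimal sketch in Python; the syllable counter is my own crude vowel-group heuristic, so scores will only roughly approximate those from proper readability tools, which use dictionary-based syllable counts.)

```python
import re

# Flesch-Kincaid grade level:
#   grade = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def count_syllables(word: str) -> int:
    """Crude heuristic: count runs of vowels, dropping a common silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # Drop a trailing silent 'e' (but keep '-le' and '-ee' endings).
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a text."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Short, simple sentences score low (even below zero); long words push it up.
print(fk_grade("The cat sat on the mat. It was warm."))
print(fk_grade("Extraordinary circumlocution obfuscates comprehension."))
```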

    As Paine says:

    Absolute governments (tho’ the disgrace of human nature) have this advantage with them, that they are simple; if the people suffer, they know the head from which their suffering springs, know likewise the remedy, and are not bewildered by a variety of causes and cures. But the constitution of England is so exceedingly complex, that the nation may suffer for years together without being able to discover in which part the fault lies; some will say in one and some in another, and every political physician will advise a different medicine.

    Can we assert with confidence that the intricate workings of the world’s modern democracies, and also the intricate workings of the world’s modern STEM professions (including medicine), are any more simply described than the constitution of England in the eighteenth century?

    To suggest otherwise — and thereby, to challenge the creation myths of modern democracy and the modern STEM community — suffices (as everyone knows) to elicit angry denialistic cognition in a great many public forums … equally among ideological conservatives and ideological liberals.

    More broadly, we can conclude that the angry denial that psychotherapy and LW-style rationality so commonly elicit is grounded in instinctive rejection of the radical challenges that modern deconstructive practices — including but not limited to psychotherapy and LW-style rationality — pose to our creation myths, both personal and communal.

  108. Murray Hayes says:

    I found this to be a very enlightening article. There are many points that hit home regarding some of my own social interrelational (if such is even a word) failures, as well as those of other people I know.

  109. Xaverius says:

    I’m pretty impressed by this insightful post, just because it makes something “snap”, make sense all of a sudden. I think I was doing 1, 3, and 4 without fully doing 2, in that I hadn’t accounted for the fact that others haven’t reached these milestones. I mean, in other ways I’ve reached 2; I can understand that some people are weird to me in that they are extroverted, or just different in a way I brand as “probably okay person, but uninteresting to me”. I guess I kinda understood it when facing it during an argument, but just didn’t take it into account by default.

    Now, I will admit, as a defect and not bragging, that I live in a meta-bubble. That is, I not only went to university for a STEM degree (enough to be a bit isolated from the general population), but also live in a bubble compared with my classmates, so I must ask, 100% honestly, and will be thankful for any answer from the many commenters here:

    What percentage of the population do you estimate has reached each of these 4 milestones?

  110. keranih says:

    This raises the obvious question of whether there are any basic mental operations I still don’t have, how I would recognize them if there were, and how I would learn them once I recognized them.

    For what it’s worth, this sort of statement is why I keep coming back to SSC. I don’t know what I don’t know.

  111. Fj says:

    By the way, you might be interested in the research by Lev Vygotsky, but unfortunately you won’t be able to find anything in English about it, so there’s that.

    Anyway, his point was that the stuff like internal monologue, being able to choose from somewhat good, somewhat bad options, and so on, is developed in children first as an external thing (a child talking about her experiences during the day etc), then it gets internalized. Sort of like ontogeny recapitulates phylogeny.

    So we can learn about the ways our highly-developed brains work by looking at the ways children acquire those skills, and also at primitive cultures where those skills remain un-internalized, like Native Americans using knotted ropes to remind them about stuff, or like everyone using an equivalent of flipping a coin to decide between two differently bad courses of action (I mean looking at bird intestines etc).

    His ideas have a lot of predictive power as well, saying that a Buridan’s Ass problem, when properly set up, will demonstrate that dogs suck at deciding to do stuff, while humans do unbelievably more OK.

    On a completely irrelevant note: what’s the point of entering my email here if I don’t receive emails when people reply to me? Or am I doing this wrong? How do I get any sort of notification when people reply to me here? Also, how do I format my comments to make quotes or links, I don’t see any helpful “halp I can’t into format” button.

    edit: a maybe helpful link:

  112. Alex Reynard says:

    Is it okay though if I think people shouldn’t see American Sniper because it presents itself as a true story while misrepresenting the real-life events? And would the corresponding counterargument be, ‘Fine, no fiction about gay people, but how about a nonfiction one?’

  113. Åke says:

    Developmental psychologist Gordon Neufeld has done a lot of research about how and when (and if) people mature as they grow up, and about which factors are conducive to growth. You may find an introduction to his theories in his speech “Kids need us more than friends” found on YouTube.
    I find the theories fascinating, refreshing and very useful. They’ve changed how I understand my own childhood, fill my role as a stepparent better, and helped me shed light on my current relationship issues.
    My partner appears to lack what you describe in pt. 2, so even if she loves me deeply (which she does), she is not able to respond to my needs when they differ from hers, because she lacks the ability to understand that I have them. She grew up with an absent father and an alcoholic stepfather, and Neufeld’s theories explain why this environment has a high risk of leading to such developmental issues. It helps me to at least understand her, so that when she does hurt me I know where it comes from. I understand that she doesn’t do these things in order to hurt me, even though I would only have done such a thing if I wanted to hurt her.

  114. Zanzard says:

    Hey There Scott! I’m a big fan of your blog.

    The way I came to know about this blog was through a very random comment someone left on The Last Psychiatrist Blog last year saying that his blog was as interesting as FOFOA and Slate Star Codex. (I’m not much a fan of FOFOA but I do like your Blog)

    The way I learned about The Last Psychiatrist was through an article from David Wong, his viral article about 5 Harsh Truths that will make you a better person.

    Now he has written a new article and in it he links to this post.

    I’m just writing to say that I find it quite amusing the way this feedback loop of blog recommendations has just formed.
    Also that if David Wong reads your blog, and thinks that you are smarter than him, you should feel very proud! Both his writings and yours have helped me cope with several very troublesome times I have experienced lately. I guess what I’m trying to write here is basically just Thank You.