"Talks a good game about freedom when out of power, but once he’s in – bam! Everyone's enslaved in the human-flourishing mines."

List Of Passages I Highlighted In My Copy Of “Hive Mind”

Even though simple logic said “there will be blood”, there were nevertheless long stretches [of WWI] when all was quiet on the Western front. Why? Axelrod said this was due to the power of tit for tat, which we discussed in Chapter 5. German and French soldiers tacitly created their own unwritten peace treaties: if you don’t shoot at our side, we won’t shoot at yours. And if the higher-ups do force us to fire our artillery, we’ll intentionally aim short of your position, or aim long of it. These rules were never written down, and they were rarely even spoken as far as we can tell – but they left their mark. Take this example. One day the German artillery fired toward the British side without doing any damage. A German infantryman climbed up on a parapet just to deliver an apology to the British: “We are very sorry about that; we hope no one was hurt. It’s not our fault, it’s that damned Prussian artillery.” The German infantrymen didn’t want to destroy the fragile truce they had created with their alleged enemies. Military higher-ups hated this tacit cooperation across enemy lines. Fortunately for the officers (but not for the enlisted men) there was a simple solution: move troops around. By swapping one division south two kilometers and another north two kilometers, military officers could turn a repeated prisoner’s dilemma into a literal one-shot game.

This seems almost unbelievable, and absent knowledge of the primary source I can’t be 100% sure, but I guess it fits together with what you always hear about the soldiers celebrating Christmas together and so on.
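Axelrod’s mechanism is easy to see in a toy simulation. Here’s a minimal sketch (the payoff numbers are the standard textbook prisoner’s dilemma values, not anything from the book):

```python
# Toy repeated prisoner's dilemma: 'C' = hold fire, 'D' = shoot to kill.
# Standard illustrative payoffs: mutual cooperation beats mutual defection,
# but defecting against a cooperator pays best in any single round.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

# Two tit-for-tat divisions facing each other settle into permanent quiet:
print(play(tit_for_tat, tit_for_tat, 100))  # (300, 300)
```

Rotating the troops works precisely because it empties `their_history`: with no future rounds against the same opponent, defection strictly dominates (5 > 3 against a cooperator, 1 > 0 against a defector), and the truce never gets off the ground.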

Some skeptics dismiss IQ tests as just measuring whether you’re good at staring at a piece of paper, coming up with an answer, and writing it down. But the comprehensive IQ test used most often today – the Wechsler mentioned earlier – involves little paper-staring and almost no pencils. The person giving the test (a psychologist or other testing expert) asks you why the seasons change or asks you to recite a list of numbers that she reads out to you. You answer verbally. Later you are handed some wooden puzzle blocks and you try to assemble them into something meaningful.

I hadn’t realized this. There’s a lot of talk about IQ being related to “test-taking skills”, and it’s easy to imagine that people who are used to taking paper-and-pencil multiple-choice exams might be more comfortable with them, but that’s not what a lot of real IQ tests are and we should stop imagining them that way. This also makes theories about education and the Flynn effect more impressive.

The images are simple shapes like a regular C or a backwards one, and your job is to note, for instance, whether the C is open to the left or the right. So at what point would you no longer be able to do better than random at correctly answering “left” or “right”? When the image flashes for just an eighth of a second? A 32nd? A 128th? That’s the key variable the researcher keeps track of in this study…can it possibly be the case that people who only need to see the image for a tiny fraction of a second tend to have higher average IQ scores than people who need to see it for an eighth or sixteenth of a second? Summarizing “dozens of studies” run on “four continents” dating back to the 1970s, psychologist Ian Deary says, “the overall answer is yes, there is a moderate association between how good people are at the inspection time test and how well they score on intelligence tests.”

…speaking of IQ not just being about test-taking. Jones also goes through some studies on reaction time and reverse digit span, not to mention literal brain size as measured by MRI.

IQ tests do about the same as the best kind of job interviews – structured job interviews in which the interviewer carefully designs the questions beforehand and sticks to the same ones with each candidate – and better than most of the methods people use to choose employees…IQ tests are even better at predicting outcomes when the job requires higher skills. Back in the 1960s, the Bell Telephone System gave its entry level management trainees an IQ-type test along with a number of personality tests…when, after two decades, the company looked back to see which tests did the best job of predicting which trainees eventually rose the highest in the company hierarchy, the IQ-type test did the best job, beating out the personality tests. Looking across many studies of IQ in the elite workforce, one review says “general cognitive ability is the best single predictor of executive/professional-level performance, just as it is of performance in the middle to high end range of the general workforce.”

What I get out of this is “apparently people should structure their job interviews more”.

One might contend that it’s harder to measure social or emotional intelligence than it is to measure more conventional intelligence. But psychologists have tried: they’ve checked to see whether people with more social or emotional intelligence tend to have higher IQs, and so far it looks like they do. The relationship often isn’t as strong as the relationship between, say, a person’s vocabulary test scores and their Raven matrices scores, but the results are clear: IQ scores predict practical social skills.

On the other hand, all of these “measures of social intelligence” are things like “can you read this person’s face to determine what emotion they are feeling?”; I wonder how well that correlates with tasks like “work your way up to becoming coolest person in your high school class” or “win the heart of your love interest”. There is a George Washington Social Intelligence Test, but it doesn’t seem to actually involve becoming the universally beloved father of your country.

But what did the apparently more cautious, more careful Wicherts report? He said that the average IQ in sub-Saharan Africa was about 82 – corresponding to the 12th percentile in the United Kingdom…that’s an improvement from [Lynn’s estimate of] 70, and it’s an improvement that arose partly because Wicherts chose to throw out samples of students who came from families with nutrition problems and low socioeconomic status. The Wicherts average of 82 only includes samples of apparently healthy students from families that have typical socioeconomic status. And in a region of the world with as much poverty and disease as sub-Saharan Africa, that decision is quite likely to leave out substantial portions of the population. To further test the data and get the best average, Wicherts wrote a separate paper that looked at only the best test samples…Wicherts’ best samples of students have an average IQ score of 76. That’s at the 5th percentile within the United Kingdom.

I keep hearing wildly different numbers for sub-Saharan African IQ, but 76 seems pretty plausible. Note that if the higher 82 number which I keep seeing around is true, it would throw a wrench in a lot of things. Remember that African-Americans are at about 85, so if going from Africa to America only gains you at most 3 IQ points, everything we know about the Flynn Effect falls apart. Even 76 -> 85 is a kind of low Flynn Effect estimate, but I’ll allow it if we speculate that these Africans probably have been at least a little Flynnified, and maybe African Americans are still impoverished enough not to have been fully Flynnified.
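For reference, these percentile conversions are just points on a normal curve with mean 100 and standard deviation 15; a quick sanity check (assuming UK scores are exactly normally distributed):

```python
from statistics import NormalDist

# IQ is conventionally normed to mean 100, standard deviation 15.
uk = NormalDist(mu=100, sigma=15)

for iq in (82, 76, 85):
    pct = round(uk.cdf(iq) * 100)
    print(f"IQ {iq} is roughly the {pct}th percentile")
# IQ 82 -> 12th percentile, IQ 76 -> 5th percentile, IQ 85 -> 16th percentile
```

This matches the 12th-percentile and 5th-percentile figures in the quoted passage.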

Even in rural Pakistan, higher Raven’s IQ predicts higher wages. One might think that thousands of miles away from the Western universities where the tests were designed, abstract IQ tests would have no power to predict which workers earned more and which earned less – but an IQ test made up of boxes and lines and circles had a modest ability to predict a person’s wages across rural Pakistan, just like in the United States.

Also relevant to IQ tests and culture.

Country-level results go back decades. As early as the 1960s, studies in Taiwan and Hong Kong found average IQ scores slightly above the European average. This happened at a time when these countries’ economies were growing fast, but were still poor by US and UK standards. Any simple story that “wealth causes IQ” has to account for the puzzlingly high average scores found in Taiwan and Hong Kong decades ago…a healthy environment helps to boost IQ, but it can’t be the whole story. The high average IQs of East Asia and Singapore have yet to be fully explained.

See, this is another thing that confuses me about the Flynn Effect. Taiwan etc are still slightly above the European average. So either they had fully caught up to Europe in Flynnification by 1960 (how?!) or Asians have some sort of magic anti-Flynn-effect armor. Didn’t Ron Unz speculate something like that once?

The key evidence comes from the horrific experience of famine in the Netherlands during World War II. Some Dutch towns were cut off from regular food supplies toward the end of the war – daily calories were down to about one third of recommended levels at one point – and the children in gestation during that period had low birth weight. But when these children grew up, their IQs were nearly identical to those of children born a few years earlier or a few years later: massive in utero food deficiencies had no long-term effect on child IQ…famines lasting a few months might not matter for a child’s brain development. But what about chronic malnutrition among the young?…Again the experimental method offers an answer. In one study of Guatemalan villagers, some malnourished children were given a protein supplement and some were given a nonprotein supplement, and then their cognitive skills were measured in a few different ways. Of the two interventions, the protein supplement gave a bigger boost to student scores.

I was expecting something awful but on first glance the paper actually looks really good. I’ve uploaded it here if you want to see. They don’t measure the results in IQ so I’m not sure exactly what the magnitude is. It only worked on people who were already poor and probably not well-nourished. Possibly another argument for vegetarians taking protein supplements?

So far it appears that schooling clearly boosts crystallized intelligence, but the typical effect of school on fluid intelligence may be small or even nonexistent…[but] here are two examples of apparently successful increases in fluid intelligence. First, in a study in Sudan, a few months of training on the abacus appeared to boost Raven’s matrix scores dramatically. Second, a study of Israeli schoolchildren compared students of similar ages who were born just before or just after the school’s age cutoff: because students born before the cutoff get an extra year of formal schooling, the age cutoff let the researchers identify the effect of an extra year of schooling on student IQ scores. The researchers found a bigger effect on vocabulary scores than on matrix scores, but matrix scores nevertheless rose for students who had received an extra year of schooling.

Right now I am pretty confused on what affects fluid vs. crystallized intelligence. Also, really want to know if the school year results hold up 10 or 20 years later.

In the case of classroom learning, the evidence is mixed on whether your child’s standardized test scores will likely rise if she’s in a classroom of high achievers or tend to fall if she’s placed in a classroom of weaker students. This is one question that’s been tested and retested in numerous ways in classrooms around the world, and perhaps the best way to summarize the vast literature is to say that some signs point to positive peer effects and some point to no peer effects. Amid the ambiguous findings, the clearest peer effect is that disruptive students hurt learning.

I could have told you that in third grade.

Sub-Saharan Africa was the last major region of the world to eliminate lead from its gasoline, a goal that apparently was reached in 2006.

That is exciting and will hopefully cause some progress for the region.

Did your neighbor win the lottery and buy a nicer car? That means that you’re more likely to buy a nicer car too. At least that’s what economist Peter Kuhn and his coauthors found when studying the results of lotteries in the Netherlands: people who won big lottery prizes tended to buy nicer cars (no surprise), but people who lived near the winners tended to buy nicer cars too. Buying is a social activity: most of us try at least a little to fit in with our neighbors.

This section was titled “Staying Frugal Like The Joneses”. Stop tooting your own horn, Garrett.

One study had twins play a repeated prisoner’s dilemma game against each other for a hundred rounds, and found that higher-IQ pairs of twins did cooperate more often than lower-IQ pairs of twins.

I feel like making twins play the prisoners’ dilemma against each other is the sort of thing where you risk accidentally making the machinery of the universe divide by zero and explode.

Here’s the most exciting result from my experiments with Omar and Jaap: on average, over the course of the entire experiment, higher-IQ pairs were five times more cooperative than higher-IQ individuals. The link between IQ and cooperation was an emergent phenomenon: it arose not from smart individual players but from smart pairs of players.

There was a similar result in a dataset taken by looking at all the prisoners’ dilemmas done in different colleges’ psych programs and such, then sorting them by the SAT score of the college. This is a scarier lesson about WEIRD samples than anything I’ve read explicitly on that subject.

So far, two studies in the United Kingdom find that higher [IQ] individuals are more likely to vote, regardless of other things known about the person such as his social class, his education, and some personality traits…however, a study in the United States drawing on three different surveys finds no substantial evidence that IQ predicts voting behavior.

Thanks a lot, Get Out The Vote people.

A team of Notre Dame political scientists ran a series of deliberative polling studies to investigate these questions. They started by surveying participants’ political attitudes before they were put into a room with other participants. Once they were in the room they were asked to discuss a prescribed set of topics such as the then-ongoing war in Iraq. After the discussion, they were surveyed again. Participants were randomly put into different rooms, some with more Iraq war supporters, some with more opponents, and so on. That made it possible to check whether their post-discussion opinion moved toward the pre-discussion opinion of the average person in their room: it was possible to see if they were conforming. The researchers ran 330 groups of about 10 people each, so if conformity was even a modestly strong force it should have shown up. Here’s what they conclude: “After several years of experimentation, we have found little evidence that group composition influences postdiscussion attitude. This persistent finding, which holds across a range of experimental variations and for subgroups defined by political knowledge, suggests that the expectations of the Asch literature do not apply to the Deliberative Poll setting.”

I’m not sure how surprised to be about this. I guess it makes sense that political issues like the Iraq War are so polarizing that you can’t change people’s minds about them with a short discussion. It’s still surprising to find something without an Asch conformity effect at all.

Psychologist Christopher Chabris and his coauthors looked at team efforts another way: they checked to see if there was a “da Vinci effect” for teams, a tendency for teams that did well on one kind of task such as a team game of checkers to do well on another task such as taking an IQ test as a team. Indeed, they found such an effect, which they called a c factor, presumably as an homage to the general or g factor of human intelligence. And what predicted which teams did better overall? The strongest two factors were whether team members routinely took turns speaking, and how well team members did on a test of reading the emotions of others…but nevertheless, the IQ of the average group member and the IQ of the highest-scoring group member were weak but still notable predictors of group intelligence, and differences in individual IQ explained over one third of differences in actual team performance.

I guess we should care a lot more about people taking turns speaking. Also, I bet that last statistic is basically entirely dependent on the IQ range in the study.

And I’ll take this space at the bottom here to point out that author Garrett Jones very kindly replied to some of my criticisms in the review here. I reply to his reply here, followed by a long conversation with Pseudoerasmus that continues here. I am not quite to the point of sticking this on the Mistakes page, especially since Arnold Kling had the same concern even before me, but I admit some of this is more complicated than I originally expected and I might end up going either way on it.


323 Responses to List Of Passages I Highlighted In My Copy Of “Hive Mind”

  1. RCF says:

    “And I’ll take this space at the bottom here to point out that author Garrett Jones very kindly replied to some of my criticisms in the review here.”

    And link goes to a tweet.

    Whaaaaa …. ?

    How did Twitter come to be considered a proper forum for responding to someone’s criticism of your book?

    And what does “1/N” mean? “This is the first tweet of a series, the length of which I don’t currently know”?

  2. MawBTS says:

    Do Lynn’s numbers get referenced much in the book?

    I gotta say, overuse of his work is a pretty big danger sign. Yes, he’s given us the best numbers we have for some places. He’s also prone to small and unrepresentative sample sizes, as well as doing things like calculating Cambodia’s IQ by averaging the scores of Laos and Vietnam.

    Lynn should be treated like a walking stick with a crack in it – lean on him as lightly and infrequently as possible.

  3. suntzuanime says:

    Apparently level of Americanization is associated with higher wages in rural Pakistan, which doesn’t seem implausible.

  4. :/ says:

    “Remember that African-Americans are at about 85”

    Welp, now I’m more racist. Thanks? :/

    • Acedia says:

      Is this true for all African-Americans, including say, the children or grandchildren of recent Nigerian immigrants? Or only the descendants of slaves?

    • Technically Not Anonymous says:

      On average, though. You can’t just look at a black guy and say with good confidence “that guy’s IQ is 85.” And that’s with the bold assumption that things like stereotype threat don’t factor in at all here.

      • Anon says:

        Stereotypes are priors that are generally useful. Despite the propaganda, they are useful and easily dismissed when you have better information. Conditional on blacks having lower IQs than whites, it is correct to assume the average African American you meet will be less intelligent than the average white person you meet, just as it is correct to assume the average Jew you meet will be much smarter than the average gentile and the average fnord will be much fnord smarter than any human you meet.

        • Evan Þ says:

          Assuming that you’re meeting a random cross-section of each race, though – which most people probably aren’t.

          • Anonymous says:

            Sure, but you can in turn generalize depending on your circumstances.

            If you meet a fitting-in black guy in Harvard, it’s a safe bet that he’ll be smarter than the average black by a substantial margin.

            If you meet a fitting-in black guy in the ghetto, it’s a safe bet that he’ll be a bit below average.

        • nope says:

          Useful, sure, but not as easily dismissed as you might think. Humans implicitly think essentialistically, even when they explicitly don’t.

        • Anon says:

          That’s not what “stereotype threat” is, other Anon. Stereotype threat is anxiety-mediated underperformance due to stereotypes.

Basically, it’s been observed that performance on a task measuring a construct for which your group bears a widespread stereotypical stigma (e.g., IQ for African-Americans) tends to be substantially lower if the task is framed in terms of that construct (as would presumably be the default) than if it is framed as something else.

          A possibly comparable example would be differential performance for someone with social anxiety, if a social encounter is perceived as casual or a potential source of social judgment. Basically anxiety undermines performance, and certain groups have an extra source of anxiety beyond the typical amounts, due to social stigma.

      • ryan says:

        Is anyone actually that stupid? It’s the equivalent of meeting a 6’0″ tall Mexican dude and thinking he’s actually 5’6″ because that’s the average height for Mexicans.

      • :/ says:

        You can’t just look at a black guy and say with good confidence “that guy’s IQ is 85.”

        But I am an irrational animal and that’s what I’m going to do, now that I’ve heard this statistic from someone I trust to not throw around statistics without thoroughly researching them first.

        I never encountered the concept of one race having a higher IQ than another during childhood. In adulthood I encountered it in the context of racist cranks and dismissed it. Now I have to accept it as an inconvenient truth, and it will affect my perceptions, whether I want it to or not.

    • suntzuanime says:

      You should expect half the truths, on average, to make you more racist.

      • DrBeat says:

        You just blew my mind, Jerry!

      • Anon says:

        Conservation of expected evidence doesn’t work that way. You could also expect “most truths will make me a little more racist but some truths will make me a lot less racist” or “most truths will make me a little less racist but some truths will make me a lot more racist”.

        Even if we average over “you” the expectation shouldn’t be 50-50. Unless you’re in a society where everyone is on the fence about some issue, most people should expect most new information to support their beliefs (that’s what making a prediction means, practically) but should be mathematically obligated to make a larger update when they find disconfirming information instead. A society full of rational non-racists, for instance, should expect to see lots of new truths which make them even a little less racist balanced by occasional new truths which make them significantly more racist.
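The property being invoked is conservation of expected evidence: under Bayesian updating, the expected posterior equals the prior, however lopsided the individual updates are. A toy check with made-up numbers:

```python
# Prior P(H) = 0.1, and a test that usually confirms H a little
# but occasionally disconfirms it a lot (all numbers invented).
prior = 0.1
p_e_given_h, p_e_given_not_h = 0.9, 0.6  # likelihoods of observing e

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h     # = 0.63
post_e = prior * p_e_given_h / p_e                  # after e: rises to ~0.143
post_not_e = prior * (1 - p_e_given_h) / (1 - p_e)  # after ~e: drops to ~0.027

expected_posterior = p_e * post_e + (1 - p_e) * post_not_e
print(expected_posterior)  # 0.1 (up to floating point) -- exactly the prior
```

The common outcome nudges you one way, the rare outcome yanks you the other, and the pulls cancel exactly in expectation.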

        • Alraune says:

          Conservation of expected evidence doesn’t work that way.

          I assumed he was referencing the “everything is 50%+ genetic” rule of thumb.

    • Selerax says:

You do realize that average British IQ in 1948 was about 81 (on a modern scale), right?

      http://www.iapsych.com/iqmr/fe/LinkedDocuments/wicherts2010.pdf

      Seriously, people don’t realize how big the Flynn effect is.

      • onyomi says:

        Yeah, this seems ridiculous to me. When a test or study produces ridiculous results, you question the test first, not how the results happened.

  5. RCF says:

    I wonder about the motivations of the generals in WWI. It seems like at least one side must benefit from the truces. Did one side’s general try to break the truce while the other tried to preserve it? Were the generals motivated by personal interests? Did the generals view truces as bad, even if they served their country’s purposes?

One explanation of how so many European countries got drawn into the conflict is that once one nation mobilized, then other countries had to mobilize immediately, because if they waited until attacked they wouldn’t have time to mobilize before being defeated. Then once they mobilized, this forced another country to mobilize, and so on. The suggestion seems to be that the countries didn’t want to go to war, but were caught up in a vicious cycle of escalation. But if front line troops were able to work out truces, why weren’t countries able to expand that into a peace? Pride?

    • Will S. says:

      The generals were emphatically not in favor of the truces. After the Christmas truce in 1914 there was a ban on them by the upper levels of the military (at least on the side of the Allies, idk about the Central Powers).

      Wikipedia has an article on the truce, which mentions the subsequent ban.

There were also soldiers who wanted the various armies to join together and overthrow their respective national governments, which refused to end the war. Nothing much came from those desires though.

      • CatCube says:

Except for the Soviet Union.

      • Brn says:

Any truce favors the side that is currently occupying the other side’s territory. The Germans were occupying French and Belgian territory, so the truce was to their advantage. The truces, it was feared, would make it less likely for the soldiers of the Allied armies to be willing to fight to push them back. If the truce had been made permanent, then the Allies would have been conceding that the Germans had won and large parts of France and Belgium would be German today.

        Or take the current Israel-Palestine situation. If there was a permanent truce with things as they are now, then there would never be a Palestinian state.

    • Samuel Skinner says:

“One explanation of how so many European countries got drawn into the conflict is that once one nation mobilized, then other countries had to mobilize immediately, because if they waited until attacked they wouldn’t have time to mobilize before being defeated.”

Previous European crises also featured mobilization.

      • keranih says:

Previous European crises also featured mobilization.

        Did they feature as much rail tracks and rolling stock?

        It wasn’t so much “get all the soldiers together and start passing out rifles and cold weather kit 500 miles from the border” as it was “mobilize troops right next to a rail yard that could put all those troops on the enemy’s doorstep *tomorrow*”.

        (I agree that it wasn’t *just* mobilization, but rail transport of mobilized troops was very important.)

    • Oliver Cromwell says:

      Truces were considered bad for morale, since they normalised passivity at a time when action would still eventually be called for.

      That said, don’t get carried away imagining WWI as some Marxist fable in which noble working class warriors tried to bring about peace on earth, against the machinations of their cigar-twirling officers whose only goal was to enrich the arms manufacturers. Truces occurred in quiet sectors – i.e. sectors where the artillery was not being concentrated – where there was no possibility of breakthrough anyway and so they did nothing but formalise the status quo. They were way down the list of problems being faced by any side in the war.

By far the biggest problem was how to bring about a decisive victory in the active sectors where the artillery was concentrated, and there certainly were not major unofficial truces there. Even when the French army actually mutinied in 1917 the mutineers still pledged to defend their positions if attacked.

      • Marc Whipple says:

        It is my understanding that the use of chemical weapons by both sides in WWI ended rather abruptly. My pet theory on this is as follows:

        1) While they did have masks, etc, they weren’t very reliable and it’s very hard to fight in them, and the alternative is hideous suffering and/or death if they are deployed and you’re not geared up or your gear fails.

        2) It didn’t take long for the soldiers to figure out that advancing into a chemical weapons barrage was near-certain hideous suffering and/or death in ways which are, to most people, even more terrifying than being machine-gunned or blown up.

        3) Not long after that, the soldiers started saying things like, “You can run into that mustard gas if you want, Captain, but if you try to make us, we’ll shoot you in the back and dump you in a hole.” This is a little bit analogous to the whole “localized truces were most common in areas where both sides were so well dug in that they knew trying to advance against the other was certain death.” It’s hard enough to make people advance into gun and artillery fire. If they KNOW it won’t work, and their dying won’t accomplish anything, it’s even harder.

        4) Both sides realized – again, somewhat like the problem with localized truces – that if this spread, it would mess up the war effort, and decided to stop using the chemical weapons.

        No idea if it’s at all accurate. I just like it.

        • ADifferentAnonymous says:

          The way I’ve always heard it is that chemical weapons weren’t actually that militarily effective.

        • stillnotking says:

          I think the real problem was that chemical weapons aren’t terribly effective in warfare — changing weather conditions, for instance, can render them useless or even dangerous to one’s own side. Also, gas masks were pretty good even in 1914.

          The “best” use of chemical weapons has turned out to be terror attacks on civilian populations, where neither of those problems apply.

          • Publius Varinius says:

            > Also, gas masks were pretty good even in 1914.

            None of the belligerents had any gas masks in 1914. The first gas attack took place on 22 April 1915, by German troops wearing swimming goggles and hypo-soaked cotton respirators. Mass-produced German gas masks became available in Autumn 1915, British and French ones in 1916.

        • Publius Varinius says:

British and French requests for gas were increasing right through 1917, so much so that by the end of the year, industry could fulfill only about 1/3 of the request quotas. ~100000 gas shells were fired in April, ~600000 in July. During 1918, 300000 mustard gas shells were the norm for *single offensives*.

          In late 1918, the British still used cylinder attacks, an average attack used 4000 cylinders (~120 tons) of phosgene – cf. 1915, when the British releases near Reims had about 1300 cylinders.

          1) Masks were very reliable for defending forces. In fact, Germany abandoned gas cylinder beaming attacks in 1917 altogether because they were ineffective, choosing to rely on gas shells instead. For offense, gas masks did cause difficulties. Source: Simon Jones, Gas Warfare Tactics and Equipment, 2007.

3) I think you’re vastly overestimating the occurrence of open mutiny. By far the biggest one occurred after the failed Nivelle Offensive, and that only involved a relatively small number of French divisions. Let’s also not forget that open refusal to follow orders did get you quickly, often on-the-spot, executed in that war.

          4) Sides did not stop using chemical weapons. In fact, they used more and more, up until the armistice of November 1918.

        • Oliver Cromwell says:

          I don’t understand why this was posted as a reply to my post. For interest, gas use didn’t suddenly end in WWI, so I think your argument collapses in face of a flawed premise.

          • Marc Whipple says:

My apologies for posting something which apparently wasn’t correct. My understanding came from various pieces of anecdotal evidence which my brain collected into an incorrect understanding.

      • RCF says:

        “Truces were considered bad for morale, since they normalised passivity at a time when action would still eventually be called for.”

        But my point is that the disadvantages posed are irrelevant; what matters is the differential disadvantage. If your side’s morale is suffering, then so is the enemy’s.

        • Oliver Cromwell says:

          The absolute disadvantage is relevant if it means you are losing control of your troops. Fighting the enemy isn’t the only reason you want to be in control of your troops.

          However, you are right that passivism in general did not benefit both sides equally. On the Western Front it benefited the Germans, because they occupied French territory. But that is also why the British and French tended to launch more major offensives than the Germans; their troops’ favouring passivism at other times and places was therefore not a bigger problem for them than it was for the Germans.

    • pneumatik says:

      I wonder about the motivations of the generals in WWI. It seems like at least one side must benefit from the truces. Did one side’s general try to break the truce while the other tried to preserve it? Were the generals motivated by personal interests? Did the generals view truces as bad, even if they served their country’s purposes?

      My understanding of WWI is that no one really understood how enormous and terrible WWI would be. Military officers generally want opportunities to demonstrate how awesome they are in battle, and WWI gave them the opportunity to do just that after a long quiet period. But WWI happened right when technology encouraged a very slow and static attrition-based warfare while at the same time being much more effective at killing people than ever before. (source: John Boyd’s Patterns of Conflict). So while to us (well, at least to me) WWI looks like the most insane and pointless system of death and dismemberment ever, at the time the generals and other officers were trying to figure out how to break through the line and make progress, just like they had in previous wars.

  6. ryanch says:

    Doesn’t this actually undercut the rest of the argument:

    >Even in rural Pakistan, higher Raven’s IQ predicts higher wages. One might think that thousands of miles away from the Western universities where the tests were designed, abstract IQ tests would have no power to predict which workers earned more and which earned less – but an IQ test made up of boxes and lines and circles had a **modest** ability to predict a person’s wages across rural Pakistan, just like in the United States.

    If the predictive power is different in different places, then it seems unlikely that it’s measuring what they say it’s measuring.

    The complaint has always been that IQ tests tend to measure exposure to certain types of abstract thinking. The idea that there is a scale of exposure to such ideas seems uncontroversial to me. I never see anyone offer any useful evidence against this criticism. Instead, we get strange theories about how one can look at social settings in which drastic malnutrition (or lead poisoning) is a known confounding factor, and then exclude the obviously malnourished, and pretend the effects of malnourishment and lead poisoning don’t exist on a continuum, making the exercise pointless.

    Sigh.

    • Samuel Skinner says:

      “If the predictive power is different in different places, then it seems unlikely that it’s measuring what they say it’s measuring. ”

      Returns to IQ depend on the structure of the economy; we should expect places like the US to have higher returns than rural Pakistan (the gap between janitor and financial analyst is a lot bigger than between migrant laborer and successful farmer).

    • Doesn’t this actually undercut the rest of the argument:

      To the contrary. The observation is at the heart of the ‘paradox’ that is the book’s starting point: IQ predicts GDP/capita much better than it predicts domestic earnings.

      Returns to IQ depend on the structure of the economy; we should expect places like the US to have higher returns than rural Pakistan.

      The book supplies evidence (albeit limited) that this is not so, or at least that individual returns to IQ are not much higher in the USA than in Pakistan. A 1-point increase in IQ => a 1% increase in earnings in the USA. In Pakistan and some other developing countries, the reported semi-elasticities are 1% or less.
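
      To see what that semi-elasticity implies across a full standard deviation, a quick compounding sketch (the 1% figure is taken from the comment above; applying it multiplicatively point-by-point is my assumption):

```python
# If each IQ point is worth ~1% higher earnings (the semi-elasticity
# reported above), one standard deviation (15 points) compounds to:
one_sd_points = 15
multiplier = 1.01 ** one_sd_points
print(round(multiplier, 3))  # 1.161 -- about a 16% earnings difference
```

      So a one-SD gap predicts only about a 16% individual earnings gap, far smaller than the cross-country GDP/capita gaps the book is trying to explain.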

    • HeelBearCub says:

      The complaint has always been that IQ tests tend to measure exposure to certain types of abstract thinking. The idea that there is a scale of exposure to such ideas seems uncontroversial to me. I never see anyone offer any useful evidence against this criticism

      I am really interested in an answer to this criticism. Using the letter C (backwards vs. forwards) is not going to work as well as an IQ measure for someone who is illiterate, not comfortable reading, or not familiar with the Roman alphabet. Presumably if you used an Arabic or Cyrillic or Kanji character in a test measuring my IQ, I would do less well than someone who uses those characters on a daily basis.

      • Scott Alexander says:

        I think the fact that it’s the letter “C” is irrelevant. Imagine it was a crescent moon or something.

        Someone who knows more can correct me if I’m wrong.

        • HeelBearCub says:

          @Scott Alexander:

          Quick, which way does the crescent moon open, left or right?

          • Nornagest says:

            Depends on its phase (i.e. whether it’s a waxing or waning crescent).

          • HeelBearCub says:

            @Nornagest:

            Correct. Meaning if you look at a series of “moons” you don’t see what you would if you were looking at “Cs”: right right WRONG WRONG right WRONG

            Whereas a vaguely moon-shaped character looks like: crescent, crescent, crescent, crescent, crescent, crescent.

            Given that reaction time is also apparently measured for some of these tests, having that sort of ingrained reaction would probably be pretty helpful not just for correctness but also speed.

          • Nornagest says:

            Good point.

  7. Anonymous says:

    “Remember that African-Americans are at about 85, so if going from Africa to America only gains you at most 3 IQ points, everything we know about the Flynn Effect falls apart. Even 76 -> 85 is a kind of low Flynn Effect estimate, but I’ll allow it if we speculate that these Africans probably have been at least a little Flynnified, and maybe African Americans are still impoverished enough not to have been fully Flynnified.”

    Another possibility is that the Africans that were sent over as slaves had a lower average IQ than the African population as a whole.

    • E. Harding says:

      Speculative; requires more evidence to move forward.

      • keranih says:

        Agreed, but I’m wondering how this could be effectively demonstrated.

        On a basic level, when slaves were war prisoners, and it was known for some time (the slave trade in Africa had been going on for centuries under Arab traders) that slaves had a good chance of being sold off across an ocean (of water or sand) and hence never returning to their homelands, it would seem logical that communities and individuals would do their best not to be captured as slaves. Luck and other gestures of the hand of fate would play a role, but it would be reasonable to assume that those who were captured were, on average, less bright than those capturing them, as well as less bright than those who got away.

        • D says:

          Look up the Igbo response to slavery. They had a reputation for suicide so were disfavored as captives.

          • Douglas Knight says:

            Right, the sugar plantations hated them, so instead of being worked to death they were sent to North America, where they formed a larger proportion of the ancestry of African Americans than they did of captives. At least, that’s what history says; it would be nice to confirm with genetic tests, but I don’t think that they are yet adequate to distinguish several hundred year old ancestry into Igbo, Hausa, Yoruba, and Fulani.

          • Anthony says:

            I’d think that genetic testing would be able to notice something like the absence of Igbo ancestry among American Blacks, if someone has bothered to look. The genetic data sets for various peoples of Africa exist, and there may be markers which occur (or don’t) among the Igbo but not among their neighbors.

            My ancestry is primarily from two different countries in Europe, with a dash of American, but the 23andme test could still pick out which two, and find the uncommon ancestry from within the one country which led me to discover something I didn’t know about medieval history; and it correctly placed (on a country-size scale) the American ancestry.

  8. E. Harding says:

    “See, this is another thing that confuses me about the Flynn Effect. Taiwan etc are still slightly above the European average. So either they had fully caught up to Europe in Flynnification by 1960 (how?!) or Asians have some sort of magic anti-Flynn-effect armor.”

    -Or, alternatively, the technology required to produce the Flynn effect spreads really quickly*. After all, if Taiwanese scores were slightly higher than European ones in 1960 and slightly higher than European ones in 2010, this means Taiwan and Europe must have had the same Flynn effect.

    BTW, Northern Italy and Japan were always roughly similarly rich. And according to the below paper:
    http://ahes.ier.hit-u.ac.jp/ahec_tokyo/papers/S7A-2_Cha.pdf
    Japan and Taiwan had roughly the same real wages in the 1920s (before that, they were higher in Taiwan, after, in Japan).

    *How do we know the Flynn effect is not roughly the same all around the world due to the quick spread of technology?

  9. anon85 says:

    >The images are simple shapes like a regular C or a backwards one, and your job is to note, for instance, whether the C is open to the left or the right. So at what point would you no longer be able to do better than random at correctly answering “left” or “right”? When the image flashes for just an eighth of a second? A 32nd? A 128th? That’s the key variable the researcher keeps track of in this study…can it possibly be the case that people who only need to see the image for a tiny fraction of a second tend to have higher average IQ scores than people who need to see it for an eighth or sixteenth of a second? Summarizing “dozens of studies” run on “four continents” dating back to the 1970s, psychologist Ian Deary says, “the overall answer is yes, there is a moderate association between how good people are at the inspection time test and how well they score on intelligence tests.”

    This is perfectly consistent with IQ tests just testing test-taking abilities. After all, we all know that essentially *all* good traits positively correlate with each other, even things like IQ and height. So *even if* IQ tests just measured test-taking skill, we’d expect them to positively correlate with reaction time (and height, and musical ability, etc., etc.)

  10. Ialdabaoth says:

    > Fortunately for the officers (but not for the enlisted men) there was a simple solution: move troops around. By swapping one division south two kilometers and another north two kilometers, military officers could turn a repeated prisoner’s dilemma into a literal one-shot game.

    I remember hearing that a variant of this problem occurred in Vietnam: enlisted men tended to side with their fellow enlisted men, rather than the officers, when told to meaninglessly risk their lives on missions that had no obvious military benefit.

    It reached a point where any officer who gave obviously suicidal orders risked being ‘fragged’ – killed by his own men in an “accidental” friendly-fire incident. Of course, any officer who DIDN’T relay obviously suicidal orders from his superiors risked being court-martialled, so what do you do?

    The answer was to break up units and move men around enough that they never felt loyalty to their fellow platoon members – and thus unit cohesion and morale suffered a blow, from which it has ostensibly never recovered.

    • keranih says:

      The answer was to break up units and move men around enough that they never felt loyalty to their fellow platoon members – and thus unit cohesion and morale suffered a blow, from which it has ostensibly never recovered.

      Eh. Not debating the existence of fragging, but low unit morale in VN was baked hard into the system of rotating individuals in and out, rather than whole units. Hence, if you speak with people who actually served recently in Iraq/Afghanistan, they were sent over as a whole group and came back as a whole group, rather than being replaced one at a time. (Likewise, “stop-loss” meant that a unit scheduled to deploy would not have people leaving the military before the unit was relieved; they would return to the USA with the unit and leave military service then.)

      The military has its issues, I am assured, but is not incapable of learning from past mistakes.

      • Randy M says:

        I wonder if this is part of why the current military leadership is insistent that they do not want conscripts. People with an idealistic commitment may be less likely to balk at nigh-suicidal objectives than those fearing punishment.

        • keranih says:

          From what I’ve seen, it’s because they (military leadership) have a job to do, and they don’t want to have to work with people who don’t want to be there. The potential for sabotage, negligence or even just plain inefficiency goes way up with conscripts.

          This holds for jobs that don’t involve sacred honor and the brotherhood of battle, too.

        • pneumatik says:

          The US military uses being rich as a competitive advantage. It can pay wages competitive enough to attract people who want to enlist. That motivation means it’s much easier to train them on more complicated tactics, to get them to stay in good shape, and to teach them how to use more expensive and complicated equipment. It means that in combat you can trust 20-year-olds with an enormous amount of firepower without worrying that they’ll use it incorrectly.

          • Steve Sailer says:

            The IQ requirements to enlist in the modern military are surprisingly high. They’ve taken practically nobody who scores below the 30th percentile on the SAT-like AFQT since the downsizing at the end of the Cold War. The Air Force and Navy during the recent recession required about the 50th percentile.

          • houseboatonstyx says:

            @ Steve Sailer,

            Hi, Steve. Way off topic and from a thread far ago, but was it you who gave me a link re a Lewis book, concerning morality by fiat? I seem to have lost the link; may I have it again? _Reflections on the Psalms_, iirc.

  11. Zluria says:

    Just pointing out that playing the Prisoner’s Dilemma a predetermined number of times is a really bad test of IQ, since the rational strategy is to always defect. To get the effect where the rational thing is to cooperate, you have to hide the number of repetitions from the participants.

    • suntzuanime says:

      It’s cool that not only do you have an unsophisticated perspective on the prisoner’s dilemma, you also just casually equate rationality with IQ. Did you look all the “rational” answers up in a book and feel good about how smart that made you?

      • Marc Whipple says:

        One of Piers Anthony’s Xanth books involves one of the main characters teaching an unimaginably powerful demon the basics of iterated Prisoner’s Dilemma strategies, including what he calls “tough but fair,” which is how he describes tit-for-tat. The point of the story is that while the demon is unimaginably powerful, he’s not especially intelligent. Once he has the strategies explained to him, his interactions with other demons take a huge leap forward. (They rarely fight directly, they’re too powerful. They play strategy games, more or less, to determine the outcome of disagreements.)

        I thought of this because while these books are usually thought of as rather juvenile, it’s one of the best explanations of Prisoner’s Dilemma strategy I’ve seen… and demonstrates more nuanced understanding than the post you’re replying to. 😉

        • pneumatik says:

          Was the book The Source of Magic? I remember the protagonist talking to the demon, but I don’t remember any of the details and when I read it last I didn’t know what the Prisoner’s Dilemma was.

          • Marc Whipple says:

            That is the book in which we first meet the demon, but the book I am thinking of is Golem in the Gears.

          • houseboatonstyx says:

            @ Marc Whipple

            _Golem in the Gears_ did tic-tac-toe on a large scale. Grundy and Rapunzel were on their way from her ivory tower toward the capital and got in trouble with an enclave of Elves. The game involved Grundy swinging around in a giant wooden framework, iirc.

      • Anonymous says:

        Hey, being annoyed at someone’s reasoning doesn’t make it OK to be mean to them, even if it makes you feel good.

      • “Unsophisticated perspective on the prisoner’s dilemma”

        Could you support that? What he is describing is the standard counter-intuitive analysis.

        Given a PD with a known number of plays, why is it in the interest of either player to cooperate on the final play? If not, why doesn’t the whole series unravel?

        • jaimeastorga2000 says:

          Could you support that? What he is describing is the standard counter-intuitive analysis.

          Given a PD with a known number of plays, why is it in the interest of either player to cooperate on the final play? If not, why doesn’t the whole series unravel?

          Like I said below, the standard analysis assumes that “two rational players” means both players are causal decision theorists. Changing the assumptions leads to different analyses.

          For one thing, a lot of people reason that both players must follow the same strategy, since they are both “rational”. But if both players must follow the same strategy, then when you choose your strategy you are actually choosing the strategy of both players, in which case it makes more sense to use a superrational analysis and reason that, since both you and your opponent will make the same choice, you should choose a strategy that when followed by both players leads to the best possible outcome for you. Note that superrational decision theorists cooperate with themselves in the one-shot prisoner’s dilemma, so the situation where the last iteration leads to defection never even arises.

          For another, if you relax the requirement that both players must have the same strategy, then imagine that you are entering a fixed-length iterated prisoner’s dilemma tournament with a precommitted strategy. Do you really want to show up as defectbot and close off the possibility of cooperation with an agent running, say, tit-for-tat with initial cooperation? Note that this is far closer to the situation real humans in the fixed-length iterated prisoner’s dilemma tests encounter, except perhaps for the twins, who might be argued to be much more similar to each other and therefore more closely modeled by the “both players are the same” approach.

          • Earthly Knight says:

            For one thing, a lot of people reason that both players must follow the same strategy. But if both players must follow the same strategy, then when you choose your strategy you are actually choosing the strategy of both players,

            What? Who thinks this? I had thought the conclusion that both players will (not “must”) employ the same strategy followed from the stipulation that they are symmetrically well-informed and self-interested. If you make my opponent’s choice counterfactually dependent on mine, then of course the optimal strategy will be different, because you are playing a different game.

    • g says:

      And yet, all those other “less rational” people are getting better results than you are with your maximally-rational always-defect strategy.

      You might want to reconsider your notion of “rational strategy”.

      Hint 1: the best thing to do in a PD encounter depends on the other participant’s strategy. If they’re playing tit-for-tat, you’re going to get predictably worse results by always defecting than someone else who always cooperates (never mind someone who plays something more sophisticated, like tit-for-tat themselves).

      Hint 2: yes, there is a “recursive” argument for always defecting. Concluding from that that you should actually always defect involves an astonishing level of confidence in how the other guy is thinking.

      Hint 3: perhaps the simplest scenario in which you are entitled to really high confidence in how the other guy is thinking is one where you know they think just like you do. What’s the best strategy in a PD, if you know that the other player will be adopting the same strategy as you do? It’s not “always defect”.
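
      These hints can be checked with a short simulation. A minimal sketch (the payoff values T=5, R=3, P=1, S=0 and the strategy names are my illustrative assumptions, not from the thread; any payoffs with T > R > P > S give the same ordering):

```python
# Fixed-length iterated prisoner's dilemma.
# PAYOFF maps (my_move, their_move) to my score.
PAYOFF = {("D", "C"): 5, ("C", "C"): 3, ("D", "D"): 1, ("C", "D"): 0}

def play(strat_a, strat_b, rounds):
    """Run both strategies for `rounds` iterations; return total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def always_defect(own_hist, other_hist):
    return "D"

def tit_for_tat(own_hist, other_hist):
    # Cooperate first, then copy the opponent's previous move.
    return other_hist[-1] if other_hist else "C"

print(play(tit_for_tat, tit_for_tat, 100))      # (300, 300)
print(play(always_defect, always_defect, 100))  # (100, 100)
print(play(always_defect, tit_for_tat, 100))    # (104, 99)
```

      Against tit-for-tat, always-defect collects the temptation payoff exactly once and the punishment payoff thereafter; a pair of reciprocators outscores a pair of “rational” defectors three to one.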

      • Earthly Knight says:

        the best thing to do in a PD encounter depends on the other participant’s strategy. If they’re playing tit-for-tat, you’re going to get predictably worse results by always defecting than someone else who always cooperates (never mind someone who does something more sophisticated like tit-for-tat).

        But surely, even if both me and my counterpart seem to be playing a reciprocal strategy, it would make no sense for either of us to cooperate on the final iteration, when there will be no subsequent opportunities for us to be rewarded for our cooperation. That’s just leaving money on the table! So we can both know that the other will defect on the final iteration, and, moreover, we can both know that the other knows that we will defect. But given that there is no hope of reciprocation on the final iteration, it will no longer make sense for either of us to cooperate on the penultimate iteration. That’s just leaving money on the table! So we can both know that the other will defect, and, moreover, we can both know that the other knows that we will defect. But given that there is no hope of reciprocation on the penultimate iteration…

        It’s not hard to see the force of the backwards induction. It is true, of course, that if either I or my opponent is an altruist or a nincompoop it won’t work, but it’s built into the thought experiment that this is not the case.

        Here’s a question for you: at what point in a series of n prisoner’s dilemmas do you stop cooperating? The nth trial? n-1? n-2?

        • Anon says:

          I think defecting at round n-1 tends to win in fixed-length iterated prisoner’s dilemma competitions.

        • Anonymous says:

          Won’t it depend on the payouts for the different possible outcomes? Imagine a Prisoner’s Dilemma in which the rules are: mutual defection -> both players get 1; mutual cooperation -> both players get 1,000,000; A cooperates, B defects -> A gets 0, B gets 1,000,001. In such a game, the cost of mutual defection is so high, and the benefit of defecting over cooperating when your opponent cooperates so small, that even the tiniest attempt to establish cooperation ought to be enormously beneficial, and the benefit from being the first one to defect is minuscule. So you should keep cooperating as long as you can – you would be leaving far more money on the table if you cause cooperation to break down early than you would by not defecting first.

          The opposite case would be one in which mutual cooperation is barely better than mutual defection, but defecting when your opponent cooperates is much better than mutual cooperation. For example, rules such as: mutual defection -> both players get 1; mutual cooperation -> both players get 2; A cooperates, B defects -> A gets 0, B gets 1,000,000. In this case the benefit from being the first to defect dwarfs everything else, so I would expect defection to happen early – if cooperation could even be established at all.

          • Earthly Knight says:

            Won’t it depend on the payouts for the different possible outcomes?

            No– the backwards induction doesn’t depend on the payout values. All that matters is that it will be rational to defect on the nth iteration and rational to defect on the predecessor of each iteration in which you know you will both defect.

          • Anonymous says:

            If you are going to cooperate until some point then I think the question of where that point is does depend on the payouts. If you’re going to defect all the way then it won’t, but the argument that it’s rational to do that seems mistaken, although I can’t put my finger on why.

          • Earthly Knight says:

            If you are going to cooperate until some point then I think the question of where that point is does depend on the payouts.

            It shouldn’t, because the backwards induction isn’t sensitive to what comes before. If someone holds a gun to your head and demands that you cooperate any non-zero number of times, that number should be precisely one, because your opponent will always defect and always expect you to defect.

          • Anonymous says:

            @Earthly Knight

            According to the backwards induction reasoning, no. I’m talking about what seems to me to be the ‘correct’ result, whatever the reasoning behind it is. As others have said, imagine a prisoner’s dilemma competition, in which the number of rounds is fixed but set to some high number, maybe a million or so. Under those conditions, I expect the strategy that would actually win would not be to always defect, but to attempt cooperation until some point at which the gain from defecting first exceeds the gain from further cooperation – so perhaps tit for tat, but switching to always defecting a few rounds before the end.

            I understand the logic of the backwards induction, but it seems to me that the answer it produces is wrong.

    • stargirl says:

      Calling always-defect “rational” seems like a serious stretch. It is, of course, the Nash equilibrium. But imagine you entered a tournament where a program you wrote was going to play million-round iterated prisoner’s dilemmas. For real stakes. Would you actually suggest the program always defect?

    • jaimeastorga2000 says:

      Just pointing out that playing the Prisoner’s Dilemma a predetermined number of times is a really bad test of IQ, since the rational strategy is to always defect. To get the effect where the rational thing is to cooperate, you have to hide the number of repetitions from the participants.

      What you are describing is the behavior of a causal decision theorist playing against another agent known to be a causal decision theorist. To say that it is the rational strategy, and in particular the strategy that a human should follow when facing another human is… unwarranted, to say the least.

  12. Anonymous says:

    I keep hearing wildly different numbers for sub-Saharan African IQ, but 76 seems pretty plausible. Note that if the higher 82 number which I keep seeing around is true, it would throw a wrench in a lot of things. Remember that African-Americans are at about 85, so if going from Africa to America only gains you at most 3 IQ points, everything we know about the Flynn Effect falls apart. Even 76 -> 85 is a kind of low Flynn Effect estimate, but I’ll allow it if we speculate that these Africans probably have been at least a little Flynnified, and maybe African Americans are still impoverished enough not to have been fully Flynnified.

    Remember that African-Americans are one-fifth to one-fourth white. And apparently substantially descended from some singular white guy in the colonial era.

    http://isteve.blogspot.no/2013/02/henry-louis-gates-exactly-how-black-is.html

    • Scott Alexander says:

      I do remember that. In fact, African-Americans are 20% white, and their mean of 85 happens to be 20% of the way from the supposed sub-Saharan African mean of 82 to the white mean of 100. This is really bad for the Flynn Effect unless we find that the sub-Saharan African mean is in fact well below 82 in real life.
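
      The arithmetic here checks out as stated, as a quick sketch shows (the linear-admixture interpolation is the comment’s own back-of-the-envelope model, not an established result):

```python
# Interpolate 20% of the way from the supposed sub-Saharan African
# mean (82) toward the white mean (100), per the comment above.
african_mean, white_mean, white_admixture = 82.0, 100.0, 0.20
predicted = african_mean + white_admixture * (white_mean - african_mean)
print(round(predicted, 1))  # 85.6
```

      The predicted 85.6 is close to the observed African-American mean of about 85, which is the coincidence being pointed out.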

      • Ilya Shpitser says:

        Weren’t African slaves generally drawn from West Africa, not sub-Saharan Africa?

        • Stezinech says:

          West Africa is in sub-Saharan Africa.

          The majority of Africa is below the Sahara, with the exception of North Africa, which is considered a different region.

        • Salem says:

          West Africa is part of sub-Saharan Africa.

          Sub-Saharan Africa doesn’t just mean those countries that are part of the Sahara desert. It means those countries that are South of (or in) the Sahara. It basically means “black” Africa, and it’s used because “Arab” Africa is very different in all sorts of ways.

          • Nornagest says:

            The political boundaries between countries like Algeria, Libya, Egypt (“Arab” Africa) and Mali, Chad, Sudan (“black” Africa) are in the middle of the Sahara, maybe a little south of the middle, but it doesn’t matter much because no one lives there. Population density in the Sahara might as well be zero, and travel across it is pretty difficult; even today, there are only something like a half-dozen highways crossing it, most of them narrow and poorly maintained. This over a desert about the size of the United States.

            For practical purposes, you can think of e.g. Mali as being a country of the Sahel, the savanna belt south of the Sahara, just as e.g. Libya is a country of the Mediterranean coast and a few oases, and Egypt is a country of the Nile valley and delta (and the coast, and a few oases).

      • ryan says:

        This is possibly a misunderstanding of what an IQ score means. Tests are re-normalized on a regular basis to keep the mean at 100 and the standard deviation at 15. The Flynn effect functions as follows (caution: numbers not exact): give two kids the WISC-V today; one scores 100, the other 85. Then wait a year (to control for test-preparation bias) and administer the WISC-III, with the same questions and scoring methods as used in 1990. The scores will be something like 105 and 90. The first kid is only average compared to other kids today, but compared to kids in 1990 they are slightly above average.

        It might help to think of IQ scores in terms of percentiles. 100 simply means 50th percentile. 115 means 84.1 and 85 means 15.9. If you scored a 115 it means that in a room of 100 random people about 16 of them would have scored higher than you. If you scored an 85 it means about 84 of them would have scored higher than you. Thinking of IQ scores as percentiles would I think help to understand why something like a group level standard deviation in average score can lead to really substantial differences in social outcomes.
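
        The percentile conversion above is just the cumulative normal distribution with mean 100 and SD 15; a quick sketch (the function name is mine):

```python
from math import erf, sqrt

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentile rank of a score under a normal(mean, sd) model."""
    z = (iq - mean) / sd
    # Normal CDF via the error function, expressed as a percentage.
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

print(round(iq_percentile(115), 1))  # 84.1
print(round(iq_percentile(85), 1))   # 15.9
print(round(iq_percentile(100), 1))  # 50.0
```

        This reproduces the figures in the comment: 115 sits at roughly the 84th percentile, 85 at roughly the 16th.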

        Anyway, that difference, which I hereby name for all time and mankind “The Sailer Gap” – the observed one-standard-deviation gap between the average IQs of African and European Americans – has held constant over time. Back when IQs were normalized using only white children, the scores were 85 and 100; now that they’re normalized using all children, the scores sit at about 90 and 104. The Sailer Gap has not changed at all.

        So this is good news for the Flynn effect. Whatever the specific nature of the environmental tide, it is lifting all boats as expected.

  13. scav says:

    Not at all surprised to find that IQ tests are a better predictor of job performance than personality tests.

    It is not clear to me that personality tests are measuring an objectively real phenomenon, or if it is, how the designers of the tests would calibrate them. Especially Myers-Briggs, which seems to be pure voodoo.

    But to the extent that your job performance is about problem solving, a test for whether you can solve problems seems like it would be directly relevant.

    • ryan says:

      I’ve only taken one personality test for employment. My impression was that you would have to be pretty out of it not to be able to game the answers. It doesn’t take a genius to know that if you’re asked whether you would call a manager or scream and yell back at an angry customer, you’re supposed to say you’d call the manager even if you wouldn’t.

      • DrBeat says:

        Yeah, but then you get to the questions like “Agree/Disagree/Etc: It is infuriating when guilty criminals go free.”

        The standard questionnaire I have seen multiple times has several questions like that, where you just don’t know how they are going to interpret the answer; the other questions make it clear they want you to lie, you just don’t know what lie you are supposed to tell. Are you supposed to agree and indicate you care about right and wrong and won’t steal from the company? Or are you supposed to disagree and indicate you value following the rules all the time and thus won’t steal from the company?

        • The Nybbler says:

          Search for “unicru answer key” and you can find the desired answers to all those questions.

          “It is infuriating when guilty criminals go free.” should be answered Strongly Agree.

          They’re looking for sociable, agreeable, diligent, patient, and honest doormats who follow rules and want others to as well. Except they want these people to think of themselves as thrill-seekers… or maybe that’s just an error in the answer key.

          Also the answer is always “Strongly Agree” or “Strongly Disagree”. No wishy-washiness allowed.

  14. ivvenalis says:

    WWI anecdotes: truces definitely occurred, often so that the dead and wounded in No Man’s Land could be cleaned up after an attack. Ernst Junger, whom no one would ever accuse of being soft in WWI, describes one such truce in Storm of Steel. These were always explicitly negotiated. Unspoken or low-level truces on quiet sectors probably happened, although this has probably been exaggerated and is probably indeed better understood as the natural emergent state of two heavily fortified but lightly manned positions facing each other.

    There’s no way artillery was intentionally fired to miss. The explanation could be total bullshit, trench superstition, or vets yanking the newbies’ chains. (Great War veterans could tell from sound whether incoming artillery was falling directly at them or would miss–many primary sources describe this–and they could easily act nonchalant about off-target shells while trolling rookies about how the Germans were missing on purpose.)

    No one would stick their head over a parapet. Even if the line unit facing you had super pinkie sworn not to shoot, there might be snipers from outside that unit in concealed positions outside the trench (they moved in and out at night). Someone might have actually shouted that, either sincerely or as a joke with various shades of sarcasm (“sorry about our SICK ARTY tommy hope u didn’t get owned too hard lol”), but the mean, median, and mode experience of being shelled in 1916 didn’t involve apologies from the other side.

    • Anonymous says:

      Are you sure that’s not just projection of modern thinking on people a century back? It is not inconceivable that they might have thought differently. WWI soldiers weren’t trained to fire on reflex whenever they saw a humanoid form.

    • Oliver Cromwell says:

      The sense given in the quote is a bit misleading for people with no other knowledge of how WWI was fought. It was probably very common in terms of space and time but most of the casualties in WWI were suffered in relatively small areas in relatively short spaces of time. In the active areas there will not have been such behaviour and certainly not by distant support arms like artillery. Outside the active areas there was no chance – or intention – of achieving a breakthrough anyway so activity will mostly have been to avoid the troops becoming lazy, bored, or ill-disciplined.

  15. g says:

    But the comprehensive IQ test used most often today, the Wechsler mentioned earlier – involves little paper-staring and almost no pencils. The person giving the test (a psychologist or other testing expert) asks you why the seasons change or asks you to recite a list of numbers that she reads out to you.

    [Emphasis mine — g]

    Wow, it’s really hard to imagine how anyone could possibly think intelligence test results have any kind of dependence on culture and education, isn’t it?

    • Murphy says:

      Just for fun I looked up the Wechsler.

      It’s like every shitty screwup anyone ever put in an IQ test all rolled into one.

      Testing for specific knowledge, check.

      Testing for associations with a strong cultural context, check.

      Looking for whatever random shit the test composer was thinking of at the time, check. (Choose the odd one out? Perhaps only this one has fur? Perhaps this is the only one with stripes? Perhaps this is the only one with four legs? Don’t worry, though: the thicko who wrote the test had a single clear answer in mind.)

      There’s a few decent questions which simply test abstract thinking without cultural baggage but they’re few and far between.

      I don’t have any beef with IQ tests in general but the Wechsler is like a parody designed to include everything which gives IQ tests a bad name.

      • JK says:

        How did you look up a copyrighted test that doesn’t exist in text format? Don’t say you found some test online and took it for the real thing.

        In any case, your subjective impressions of what is “specific knowledge” or “cultural baggage” or “abstract thinking” are just that, subjective impressions. The meaninglessness of such judgments was demonstrated in a classic study by Frank McGurk:

        McGurk collected a representative sample of 226 test items from various well-known group-administered IQ tests that were widely used at the time, such as the Otis Test, Thorndike CAVD, and the American Council on Education Test. A panel of 78 judges, including professors of psychology and sociology, educators, professional workers in counseling and guidance, and graduate students in these fields, were asked to classify each of the 226 test items into one of three categories: I, least cultural; II, neutral; III, most cultural. Each rater was permitted to ascribe his own meaning to the word ‘cultural’ in classifying the items. McGurk wanted to select the test items regarded as the most and the least ‘cultural’ in terms of some implicit consensus as to the meaning of this term among psychologists, sociologists, and educators. Only those items were used on which at least 50% of the judges made the same classification or on which the frequency of classification showed significantly greater than chance agreement. The main part of the study then consisted of comparing blacks and whites on the 103 items claimed as the ‘most cultural’ and the 81 items claimed as the ‘least cultural’ according to the ratings described. The 184 items were administered to 90 high school seniors. From these data, items classed as ‘most cultural’ were matched for difficulty (i.e. percentage passing) with items classed as ‘least cultural’; there were 37 pairs of items matched (+/- 2%) for difficulty.

        These 37 pairs of matched items were then administered as a test to seniors in 14 high schools in Pennsylvania and New Jersey, totalling 2630 whites and 233 blacks. Because there were so many more whites than blacks, it was possible for McGurk to obtain practically perfect matching of a white pupil with each of 213 black pupils. Each black pupil was paired with a white pupil in (1) the same curriculum, (2) the same school, and (3) enrollment in the same school district since first grade. The white-black pairs were also matched so that the white member of each pair was either equal to or lower than the black member on an 11-item index of socioeconomic background (the Sims Scale). (Exact matching on the 11 items of the SES index was achieved, on the average, in 66% of the 213 matched black-white pairs.) The matched black and white groups averaged 18.2 and 18.1 yr of age, respectively.

        The results flatly contradicted the hypothesis that the white-black difference in test scores is due to the cultural loading of the items, at least as the culture loading of test items is commonly judged. On the test composed exclusively of the 37 items classified as ‘most cultural’, the mean white-black difference (expressed in units of the average standard deviation in the two samples) is 0.30σ, as compared with the mean difference of 0.58σ on the test composed of the 37 items classified as ‘least cultural’. In a subset of 28 pairs of ‘most’ and ‘least’ cultural items that were matched for difficulty (based on the per cent passing in the combined samples), the mean black-white differences are 0.32σ and 0.56σ on the ‘most’ and ‘least’ cultural tests, respectively. Hence differences in item difficulty are not responsible for the relatively greater black deficit on the ‘least cultural’ items.

        [source]

        • Murphy says:

          There’s no way that anything copyrighted EVER ends up on the internet such that people can read or view it! Not a chance! That would imply people breaking copyright law which nobody ever ever does on the internet!

          Sounds like a real shit-show of a study.

          What’s your point?

          • JK says:

            My point is that, firstly, you did not actually review Wechsler items (e.g., the plastic cubes used in the block design subtest), and, secondly, that your impressions of test items’ cultural “loadings” are as meaningful as a baboon’s impressions of the same would be. McGurk’s study is a great one.

          • Murphy says:

            Some of the physical parts of the WISC may be fine but you don’t get to pretend they’re the only parts.

            original title:

            “Comparison of the performance of Negro and white high school seniors on cultural and non-cultural psychological test questions” (1952)

            not generally available online but you may be able to view it through a university library

            Data re-analyzed:

            http://arthurjensen.net/wp-content/uploads/2014/06/Black-White-Bias-in-Cultural-and-Noncultural-Test-Items-1987-by-Arthur-Robert-Jensen-Frank-C.-J.-McGurk.pdf

            No, it sounds like a crapfest of a study.

            They paired children with others at the same school, matched to be as similar as possible, then found that those children did similarly on the test items they’d guessed would be most affected by the children having similar backgrounds. (Oh my god!)

            Matching was of course done manually, with the method of selecting matched candidates not fully described in the paper, and unblinded.

            If you saw a drug trial run this poorly and you failed to throw it in the trash you’d be doing a disservice to your patients.

          • roystgnr says:

            It turns out that not only does the internet have copyrighted materials, it has images and videos too.

            https://www.google.com/search?q=wechsler+block+design+test&tbm=isch

            https://www.google.com/search?q=wechsler+block+design+test&tbm=vid

          • JK says:

            However they were matched is completely irrelevant. In the full, unmatched sample, black-white gaps on cultural and non-cultural items were 0.32σ and 0.69σ, respectively, that is, the discrepancy was larger than in the matched sample. See p. 527 in Jensen’s Bias in Mental Testing, which is available online.

            Whether a test item is good and unbiased is not decided based on subjective impressions. You need to look at the item statistics, which is routinely done when creating tests these days.

      • Vaniver says:

        It’s like every shitty screwup anyone ever put in an IQ test all rolled into one.

        You’re right, your preconceptions about measuring IQ are better than empirical validation. That’s how knowledge works.

        • Anonymous says:

          @Vaniver, actually, a statement such as “It’s like every shitty screwup anyone ever put in an IQ test all rolled into one” seems like something Scott would write. And you’d probably agree with him.

  16. There is a failed replication of the group-performance study by Chabris. Not published yet, but it was a great talk at ISIR 2015. That is, it found that group performance was all about individual members’ cognitive ability, with no effects of gender or emotional factors.

    One can read the abstract here.
    http://www.isironline.org/wp-content/uploads/2015/09/ISIR-2016-program1.pdf

  17. Deiseach says:

    Later you are handed some wooden puzzle blocks and you try to assemble them into something meaningful.

    Well, that’s me failed.

    I have very poor hand-eye co-ordination and little to no spatial processing ability. Hand me odd-shaped puzzle blocks to put together into “something meaningful” and Cody the Chimp will beat me every time. My dyscalculia also means that give me a list of numbers to recite and I will transpose, forget or duplicate some of them (I constantly have to re-check I’m dialling the correct phone number, if it’s an unfamiliar one, at work).

    So – the result of the Wechsler for me would be “thick as two short planks”, which I can’t really argue with 🙂

    A bit more seriously: a question like “why do the seasons change?” only measures “Did you learn this in school?” A smart person from a more ‘primitive’ background who never learned anything about axial tilt is not going to get it right. Though if they worked it out on their own, that is genius right there.

    • GerardD says:

      @Deiseach

      “Something meaningful”

      Nah, at some point it becomes a test of your verbal ability to concoct a bullshit explanation of how that apparently random pile of blocks is a post-existential commentary on the symbolism of eros, and it sublimates the underlying metaphor.

      • Deiseach says:

        Ah! So if I work up something along the lines of “The collapsed and innately incapable attempt at building a structure out of randomly-shaped objets trouvés not alone references the symbolism of the archetypal Tower of Babel, and the esoteric tradition of the Tarot “Lightning-Struck Tower” – also known, pregnantly, as “The House of God” – it is, in sum, a statement of the meaninglessness of all human attempts to defy entropy and construct meaning out of the void, even our childish yearning for religion to give order and structure to the chaotic and indifferent forces of physical law and the disappointment of those yearnings as the House of God which is also and at its foundation the House of Man falls apart beneath our gaze, and our helplessness in the face of natural forces such as storm and flood which sweep away all our vaunted technological progress. This is the nakedness and futility of Humanity in the vast, indifferent Cosmos that stares us in the face like a blind cruel idol!”, then I can bump my marks on the test up? 🙂

      • TrivialGravitas says:

        What? The goal is to produce an exact duplicate of a picture you’re shown. A stopwatch is involved.

    • ryan says:

      I don’t think this is right. The fact that you did well on many sections of the test but completely fouled up the numbers repetition portion is likely how you were diagnosed with dyscalculia in the first place.

      Internet world: IQ tests are used to grind identity politics axes

      Real world: IQ tests are used to diagnose learning disabilities and screen kids for gifted classes

  18. Deiseach says:

    This seems almost unbelievable, and absent knowledge of the primary source I can’t be 100% sure, but I guess it fits together with what you always hear about the soldiers celebrating Christmas together and so on.

    Why do you think there was so much effort put into propaganda about “poor little Belgium”, “the Hun” (explicitly making the link with Attila the Hun and the pop-culture reputation for the Huns being a barbaric militaristic horde which swept over and destroyed the civilised nations), “the baby-killing Boche” and so forth?

    Arthur Machen who wrote the story which became the inspiration for the legend of The Angels of Mons worked as a journalist during the period and wrote many stories which were, in the words of Wikipedia, “morale-boosting propaganda”, as in his story The Happy Children:

    Many of them had wreaths of dripping seaweed about their brows; one showed a painted scar on her throat; a tiny boy held open his white robe, and pointed to a dreadful wound above his heart, from which the blood seemed to flow; another child held out his hands wide apart and the palms looked torn and bleeding, as if they had been pierced. One of the children held up a little baby in her arms, and even the infant showed the appearance of a wound on its face.

    …I had seen those who came singing from the deep waters that are about the Lusitania; I had seen the innocent martyrs of the fields of Flanders and France rejoicing

    You see? The Germans are the murderers and torturers of little innocent children, how can our brave boys in the trenches possibly fraternise with such animals? And how can the Americans stand aloof when children are being drowned by German submarines sinking passenger ships?

  19. Jan Rzymkowski says:

    “I’m not sure how surprised to be about this. I guess it makes sense that political issues like the Iraq War are so polarizing that you can’t change people’s minds about them with a short discussion. It’s still surprising to find something without an Asch conformity effect at all.”

    I wonder if there’s another factor in play. People know that there are people out there with “wrong” views on the Iraq War. On the other hand, they don’t expect people to have “wrong” views about the lengths of lines.

  20. Eoin says:

    Sherlock Holmes (a fictional character, but one who is from a Western society and who is considered very smart) would fail the “Why do the seasons change” IQ question, simply because it’s outside of his field of interest.

    • keranih says:

      Interestingly, the modern Sherlock, who is not above using a GPS, actually should care if the sun goes round and round the garden like a teddy bear.

      Doyle’s Holmes – him I’ll cut some slack.

      • Deiseach says:

        I have always thought that was Holmes trolling Watson. It’s the early days of their acquaintance; he can see Watson is trying to get a read on him, and he’s not above having a laugh at his expense.

        Look at Watson’s list: “Knowledge of literature – Nil” and yet Holmes in the next story is quoting Goethe and Racine and who knows what all 🙂

        • Eoin says:

          I’m reminded of the various passages in Flann O’Brien where otherwise-idiotic characters are nevertheless making constant references to the classics and philosophy. Certainly Holmes trying to get a rise out of Watson is funnier to me, and therefore I will accept your hypothesis.

    • jaimeastorga2000 says:

      Sherlock Holmes (a fictional character, but one who is from a Western society and who is considered very smart) would fail the “Why do the seasons change” IQ question, simply because it’s outside of his field of interest.

      Sherlock Holmes is unrealistic. In the real world, smart people tend to pick up well-known facts like that even if they are not inside their area of expertise. I bet everybody in this blog knows what causes the seasons to change even if they did not draw “meteorology” or “astronomy” from the lottery of fascinations.

      In general, “smart” fictional characters are not actually all that intelligent.

      • Douglas Knight says:

        Harvard students don’t pick up this fact. I think you are wrong about what smart people do, although it is also possible that Harvard students are special.

        I think that this is a very common error here and in related communities. I am not sure how to describe this error. Roughly speaking, it is confusing intelligence with IQ+Openness. More accurately, though less precisely, it is confusing intelligence with nerdiness. But even that is not quite right.

        • Chalid says:

          +1

          This is grey tribalism in Scott’s original sense – people aren’t meeting anyone who comes from a different culture, and thus don’t really believe they exist.

          I know a guy with a physics PhD from one of the world’s top schools who thought evolution was Lamarckian. And I’ve worked in corporate environments with people who were really smart, and I wouldn’t have been one bit surprised if some of them didn’t know what caused the seasons – I’d see similar ignorance come up on occasion.

          An example that might work on the average commenter here – how many basic facts about sports are commenters here likely to be ignorant of? Surely everyone has at least heard in passing why Joe Namath is famous, or who won the Super Bowl last year. It’s just that a fact that doesn’t interest you doesn’t stick.

          An even better example might be facts about women’s fashion, or celebrity gossip, but they’re outside my spheres of interest so I can’t come up with any facts about them at all 🙂

          • Jiro says:

            Sports is a form of entertainment. Asking who won the Super Bowl isn’t like asking someone about the seasons; it’s more like asking who Emperor Palpatine is. Most of us know who Palpatine is, but we wouldn’t be surprised to find someone who doesn’t.

          • suntzuanime says:

            Astronomy is a form of entertainment for 99% of the people who pay attention to it. It does not provide actionable information for most people’s everyday lives.

          • I don’t know who won the Super Bowl. I recognize “Joe Namath” as the name of someone famous in sports, I think football, but that’s it.

            My standard example of widespread scientific ignorance is the popularity of videos that purport to demonstrate the greenhouse effect with a simple experiment–and only work for people who don’t understand how the greenhouse effect works. For details see:

            http://daviddfriedman.blogspot.com/2014/12/a-nice-example-of-scientific-ignorance.html

          • Glen Raphael says:

            I don’t know who won the Super Bowl or what teams played in it last year or any year – my “bubble” excludes nearly all information about sports. I did know that Joe Namath was a football quarterback but that’s all I knew about him – I couldn’t say why he was a famous one.

            (I suspect what knowledge I used to get of such things mainly came from television commercials, back when I watched over-the-air TV.)

          • Chalid says:

            For myself – I’m probably exposed to celebrity gossip on a majority of days. I’ll glance in passing at supermarket tabloid covers when standing in line. Or when I’m on the subway, the news screens will often display celebrity news. And, in spite of having an IQ well above average, I retain nothing whatsoever from these repeated exposures because I don’t care about the topic, just like a significant fraction of this comment section doesn’t care about sports and doesn’t know who won the Super Bowl even though they surely saw something about it in the news, on their Facebook, etc. And most of the population cares about science even less than that, including many smart people.

          • Montfort says:

            I agree with you in general, but the specific facts you point out are poor examples – though both are good signs that someone watches professional football (or, for Namath, that one cares about the Jets or watched football in the 70s), they’re not necessarily what someone would get taught in the football equivalent of third grade.

            Nevertheless, I’m sure many people here could fail more equivalent tests – e.g. “why is the offensive line important” or “why would you run the ball instead of passing” or even “how many players are on the field at one time”.

          • suntzuanime says:

            “Or even”? As someone who has watched football casually, the first two questions are drop dead obvious, but when I’m watching football I hope to be watching football, not counting dudes.

          • Glen Raphael says:

            (or, for Namath, that one cares about the Jets or watched football in the 70s)

            I’ve never seen Namath play football but there’s an excellent chance I saw him play himself on The Brady Bunch. He also guest starred on Fantasy Island, The A Team, The Love Boat and other shows I’m likely to have seen.

            (Agreed that “how many players are on the field at one time” seems the hardest of those three questions. The answer I guessed at was off by about 50%)

          • TrivialGravitas says:

            @suntzuanime Days being longer in the summer and shorter in the winter is a pretty everyday-life thing; you need to know that if you wait till 7 to rake the leaves, it’ll be dark.

            What’s interesting here is that it suggests science education is actually making people *more* wrong. If people knew nothing about orbits, they’d probably just say the days are getting longer or shorter, which is more or less correct even if it misses the exact mechanism.

          • Montfort says:

            Glen: If you can tell me that Namath recounted the “legendary” tale of his Superbowl III win on Love Boat or The Brady Bunch, I will revise my snark w/r/t his current irrelevance.

            My impression was that number of players on the field would be a part of most fans’ or players’ introduction to the game, but I suppose that’s less likely if you come to the game without dedicated instruction (or, if you prefer, indoctrination) from a parent/coach/significant other/etc.

          • Glen Raphael says:

            @Montfort
            Not sure what your thesis is here, but the relevant Brady Bunch episode just indicates Namath is a quarterback with the Jets and famous to the extent that people recognize him on sight and are excited to meet him.

            30 second youtube snippet. Full episode via hulu.

            The Love Boat appearances were playing characters (not “himself”) so whatever that “legendary” tale is, it wouldn’t have come up.

          • Deiseach says:

            It gets worse when you come to cultural bubbles. Take that standard question doctors on TV hospital shows ask patients who’ve just been knocked out: “Who is the president?”

            If I answered “Michael D. Higgins”, you might say “Okay, she’s out of it, brain injury”.

            It wouldn’t be my fault you didn’t specify the question was about who your president was 🙂

            How many players on a football team?

            Soccer – 11
            Gaelic – 15
            American – I haven’t a clue

  21. Vaniver says:

    This also makes theories about education and the Flynn effect more impressive.

    Not quite. One of the example questions in Flynn-effect discussions is “what do dogs and rabbits have in common?” The answer that gets the most points is “they’re both mammals.” So it’s not just knowing that they’re mammals that gets you the points; it’s having that be the first thing that comes to mind.

    Which is very possibly anticorrelated with actual experience with dogs and rabbits.

    • Professor Frink says:

      This is a terrible example for a Flynn-effect discussion. The largest Flynn-effect gains are consistently on culturally reduced tests like Raven’s matrices.

    • Deiseach says:

      And you’ll only learn that “dogs and rabbits are mammals” in school. I mean, “dogs and rabbits have four legs, have fur, are animals” are going to come to mind more readily until you’ve sat enough tests in school to have learned that the answer for tests is “dogs and rabbits are mammals”.

      That’s a response that gets drilled into you with learning-by-rote education; it’s not a natural answer at all. That’s the Gradgrind School of Fact:

      ‘If you please, sir, when they can get any to break, they do break horses in the ring, sir.’

      ‘You mustn’t tell us about the ring, here. Very well, then. Describe your father as a horsebreaker. He doctors sick horses, I dare say?’

      ‘Oh yes, sir.’

      ‘Very well, then. He is a veterinary surgeon, a farrier, and horsebreaker. Give me your definition of a horse.’

      (Sissy Jupe thrown into the greatest alarm by this demand.)

      ‘Girl number twenty unable to define a horse!’ said Mr. Gradgrind, for the general behoof of all the little pitchers. ‘Girl number twenty possessed of no facts, in reference to one of the commonest of animals! Some boy’s definition of a horse. Bitzer, yours.’

      …‘Bitzer,’ said Thomas Gradgrind. ‘Your definition of a horse.’

      ‘Quadruped. Graminivorous. Forty teeth, namely twenty-four grinders, four eye-teeth, and twelve incisive. Sheds coat in the spring; in marshy countries, sheds hoofs, too. Hoofs hard, but requiring to be shod with iron. Age known by marks in mouth.’ Thus (and much more) Bitzer.

      ‘Now girl number twenty,’ said Mr. Gradgrind. ‘You know what a horse is.’

      …‘Very well,’ said this gentleman, briskly smiling, and folding his arms. ‘That’s a horse. Now, let me ask you girls and boys, Would you paper a room with representations of horses?’

      After a pause, one half of the children cried in chorus, ‘Yes, sir!’ Upon which the other half, seeing in the gentleman’s face that Yes was wrong, cried out in chorus, ‘No, sir!’—as the custom is, in these examinations.

  22. Vaniver says:

    This section was titled “Staying Frugal Like The Joneses”. Stop tooting your own horn, Garrett.

    Obviously a reference to “Keeping Up With The Joneses,” an old idiom.

    But the real question is: Jones is a Welsh name. About 5% of the Welsh population is named Jones, making it more common there than Smith is in the English-speaking world generally (about 1%). But is status-seeking spending a Welsh thing, or is this just a first-mover effect, where one rich family happened to be named Jones, one cartoon writer happened to pick Jones, and it spiraled from there? I suspect it’s the latter. (Frugality, on the other hand, does appear to be a Scottish thing; “Staying Frugal like the Fergusons” would fit. See The Millionaire Next Door for more on high-income, low-consumption, high-wealth people in the US.)

  23. CS says:

    Here’s something on the WWI trench peace (‘live and let live’) https://www.youtube.com/watch?v=dMR_sddvM3I

  25. Anonymous says:

    Has anyone done any work on figuring out which way the arrow of causality goes on the tight relationship between talking about IQ all the time and being an asshole?

    • HeelBearCub says:

      Please don’t do this.

      As someone who generally thinks that the argument “IQ explains everything” is far too simplistic and subject to a great deal of confirmation bias, you are still not helping.

    • jonathan says:

      See now Anonymous, you’re making a fundamental mistake by assuming that “being an asshole” is a one-dimensional concept that one can assess with a simple test. In reality, being an asshole is a high-dimensional concept, and any purported unitary measure (AQ) you construct will necessarily fail to capture this complexity, leading to incorrect inferences about the causes of commenting behavior of various individuals.

  26. Deiseach says:

    Psychologist Christopher Chabris and his coauthors looked at team efforts another way: they checked to see if there was a “da Vinci effect” for teams

    When a team of researchers uses “da Vinci” instead of “Leonardo” as the naming for their putative effect, I automatically suspect their methodology will be poor. I also question their taste in literature (confess, how many of you guys are Dan Brown fans?) 🙂

    • pneumatik says:

      The authors knew that if they talked about a “Leonardo effect” the readers would have immediately started thinking about Teenage Mutant Ninja Turtles, which is not the association they wanted people to have.

      • Deiseach says:

        That just goes to show that a Japanese-American rat living in the sewers had more genuine cultural knowledge than university-educated moderns.

        It’s also surrendering to and perpetuating false understanding, which does not reassure me about how good their work is or how reliable as a source for other work it should be accounted.

        “Yeah, maybe we fudged the data a bit and sure we used that source everyone knows is dodgy, but hey, we managed to work a Star Wars reference in, and isn’t that what really counts?” 🙂

  27. Here’s my interview with Jones about Hive Mind, for those of you who like to consume your science through the ears.

  28. jonathan says:

    This is just a general comment on this topic, and this seems as good a place as any to put it:

    I’m highly skeptical of claims of huge differences in average IQ across countries and over time.

    When you put together estimates of Flynn effects and cross-country differences, you come up with claims like 30 points of average IQ difference between one place and another. But people 30 points below average are essentially borderline mentally challenged.

    It seems to me that a difference that large would be more noticeable. Why don’t the intellectual and artistic contributions of modern man far outstrip earlier generations? Why don’t more people notice that people today are geniuses compared to people of the past? Why don’t old people, or immigrants from less developed countries, shake their heads at the astonishing brilliance of young people today? How do societies with ~70 average IQ function *at all*?

    It seems more plausible to me that, while IQ tests might be able to accurately differentiate by cognitive ability *within* a society, and predict outcomes of interest, they may simply not be appropriate tools to measure such differences between societies.

    This logic applies to a lesser extent to measured differences between distinct groups within a society. Maybe I don’t know enough poor black people, but “15-18 points lower IQ” wouldn’t be the first thing that springs to mind when assessing blacks’ relative cognitive ability.

    • jonathan says:

      By the way, here’s a related puzzle:

      Today the total population that receives a reasonable education is *many times greater* than at any point before the very recent past. Thus, at any level of ability, we should have many more great artists, inventors, thinkers, scientists, and so on. Where are they? Why don’t we see a huge explosion of great art, thought, inventions, and so on?

      Possibilities: The geniuses are here, but we don’t notice them because of specialization; low-hanging fruit is all taken; the super high-IQ core that really drives innovation were already being educated 200 years ago, so educating a lot of <120 IQ people doesn't make much difference; other social trends (internet, video games, drugs, not learning Latin/Euclid, smart people becoming lawyers and bankers) act to retard high achievement.

      • jaimeastorga2000 says:

        Scott adds a few more possibilities in his essay “If you’re so smart, why are you dead?”

      • Murphy says:

        patents aren’t a great proxy but still:

        http://2.bp.blogspot.com/-N5VmZ3jgaNY/TkmtvVzFntI/AAAAAAAAACo/_rhX0LcWHqY/s1600/Patents+issued+per+year.png

        And in reality we do see a continuing explosion in inventions. When my father started in computing programming was something you did with a soldering iron or punch-cards. Now I carry around a practically-magical pocket supercomputer which allows me to talk to people on the opposite side of the globe. My old supervisor who isn’t even very old did his PHD on 27 codons of one gene, one of his phd students did his on 180 whole genomes and the people I’m working with now are doing theirs on something closer to 18000 using tools which do in a day what it used to take billions of dollars and years of effort to do.

        Smug people who like to bash anything modern will claim that we have no modern great art, yet we have more artists creating more art than at any other point in human history.

        I can’t find exact numbers but from eyeballing the numbers I’d bet a reasonable sum on more books being published in the last 50 years than in the previous 5000.

        In almost every technical field people are working at higher and higher levels of abstraction. Interview candidates are routinely expected to solve, in an hour-long interview, obscure graph theory problems which could have netted you a PhD 50 years ago had you solved them, because a higher-level understanding of graph theory is now expected from new grads as standard.

        • jonathan says:

          I’m not saying there is no innovation today. I’m simply observing that its pace hasn’t drastically increased, as one might have naively predicted based on the growth of the talent pool.

          • Murphy says:

            To add, books published per year.

            http://www.journalofelectronicpublishing.org/images/3336451.0009.208-00000004.jpg

            How many new inventions and new works of art are you expecting? Is it not enough that the current generation produces more than literally every generation before them combined since the beginning of recorded history?

          • Who wouldn't want to be anonymous says:

            What does that work out to per capita?

            Absolute values are cool and all, but can be extremely misleading. I have heard (and don’t feel like verifying) that more people over the age of 65 (or something like that) are alive than in the sum of all previous human history. The biggest factor is exponential population growth, not medicine. I mean, sure, medicine has done a spectacular job of reducing childhood mortality, but people generally just compensated by not having so many kids. On the other hand, adult life expectancy hasn’t really increased that much.

            So even a tiny reduction in the difficulty of getting published recently (movable type, anyone) coupled with just how god damn many people are alive now pretty much makes that observation uninteresting.

          • Murphy says:

            @Who wouldn’t want to be anonymous

            Well, the total accumulation of the dead has reached about 100 billion, so the most recent 7 billion aren’t doing too bad for themselves.

        • brad says:

          Art is probably a terrible one to look at, because it is going to inevitably degrade into squabbling about taste. But what about math?

          Given the increases in population, the increase in access to education, and the purported increases in average IQ how many Gauss level mathematical minds should we expect there to be today?

          • Murphy says:

            jaimeastorga2000 already posted a link to this http://squid314.livejournal.com/348160.html which I think applies to math as well.

            How much is down to low-hanging fruit? It’s hard to say: if you erased everyone’s knowledge of math, would we have many people who could re-create the work of the old geniuses from first principles right away?

          • Saint_Fiasco says:

            They are probably working for Wall Street or the NSA and similar organizations.

          • brad says:

            @Murphy

            What about the stories of Gauss, Ramanujan, von Neumann and others doing really impressive things at a really young age? That shouldn’t be impacted by the low-hanging fruit being eaten. Similar stories are told about Terence Tao, but given the parable of the soccer players (http://putanumonit.com/2015/11/10/003-soccer1/), shouldn’t we expect at least hundreds of such kids to be running around if the average IQ is so much higher now than in the late 18th century and there are so many more of us to boot?

          • God Damn John Jay says:

            The natural progression for a field seems to be that one person invents a ton of groundwork concepts and gets a huge amount of money / prestige and then that turns into an endless array of smart people developing refinements upon those ideas.

            John Nash developed game theory and won his place in legend for something that you can discuss with bright middle schoolers. Edwin Catmull has everything in computer graphics named after him, and a lot of his advancements are relatively simple to explain.

            On a related note, I have heard multiple PhDs / Nobel Prize winners bemoan how they would never have distinguished themselves amongst the keen-eyed youth of today.

        • Let me take it to the individual level.

          I’ve known a fair number of top people in my field, including five Nobel prize winners. I don’t think any of them was smarter than David Ricardo 200 years ago, judged by his writing. Combine population increase, education increase, and Flynn effect and all of them should have been.

          I don’t read a lot of modern poetry, but I sometimes look at things other people say are good. I have yet to find any poet still alive who I would rank with the top ten or twenty of the past few centuries.

          I’m not a mathematician. Can someone who is say how he thinks the current top level people compare to Gauss or Ramanujan?

          • jonathan says:

            My impression is similar. (I’m also an academic.)

            I remember someone (Scott Aaronson?) mentioning that reading over a lot of some famous mathematicians’ old papers (Gauss?) it was clear just how much more difficult frontier-level math is today.

            In general, there seems to be a mismatch between subjective impressions of modern intellectual accomplishment + measurements of GDP growth on the one hand, and quantitative measures of the amount of new knowledge being generated (patents, books, etc).

          • Murphy says:

            Thing is, it’s almost impossible to read the “classics” without suffering a distorted view.

            One of the happy offshoots of natural language processing, one that’s starting to provide a more objective way to assess old writing, is AIs that can grade student essays; these are already used to double-check teachers’ marking. Simple-ish ones already exist, and you can feed them arbitrary text and they will treat it the way they would a student’s paper.

            The classics do not fare well under this treatment with the equivalent of a dispassionate reader who isn’t starting out with a massive throbbing hardon for the author.

            Trying to judge the writing of one of the “greats” as hard as you would the writing of a coworker is hard, really really hard.

            It’s like trying to assess the effectiveness of a drug by only questioning people who’ve just shelled out 100 bucks per tablet: they’re all already biased towards saying it’s fantastic. (It has to be; otherwise they’d be idiots for shelling out a hundred bucks per tablet.)

            How about an experiment:

            Get a group of students utterly unfamiliar with a great author. Randomise them into trial or control.

            Present the trial group with the texts as if they’re the scribblings of some random blogger who keeps emailing the department with his theories, then ask them what their assessment is.

            Present the same texts to the other group, saying they’re the work of one of the ultimate geniuses of history.

            See how harsh the two groups are and how smart they think the author is.

          • onyomi says:

            I’m sure you could get strongly divergent outcomes this way, but the thing is, you would find the effect weaker and weaker the more well-educated the test subjects (still assuming you can find works they aren’t actually previously familiar with).

            For example, if a large portion of our rosy assessment of Shakespeare’s writing had to do with him being 400 years old and, you know, Shakespeare, then one would expect assessments of Shakespeare (or something similar but not so famous, and therefore more testable) to worsen as the test audience grows more familiar with literature and is therefore less subject to priming effects.

            But the opposite is, in fact, the case; high school students don’t usually really like Shakespeare very much, especially if you can get them to be candid. Literature professors do like Shakespeare. This isn’t just because Shakespeare is the cathedral; it’s because, despite everyone saying Shakespeare is really good, Shakespeare is really good (is this a real HL Mencken quote or did Scott just paraphrase something? Either way, it’s great).

            If we are, in fact, biased to evaluate the ancients as smarter than us, or, at least, not as dumb as they actually, supposedly were, I would say it is probably more due to the posterity effect: 500 years from now, some people may still listen to recordings of Diana Ross, while probably no one will listen to recordings of Britney Spears. This will give people the erroneous impression that music from the 20th century was all great; it wasn’t, of course, but only the great stuff will survive 500 years.

          • The original Mr. X says:

            If people only think a particular old author is great because everybody says he is, how did he get his reputation for greatness in the first place?

          • Jiro says:

            Mr. X: Chance. Someone has to be most popular by chance, and popular authors tend to grow more popular.

          • Deiseach says:

            Present the trial group with the texts as if they’re the scribblings of some random blogger who keeps emailing the department with his theories, then ask them what their assessment is.

            The trouble with that is you would have to “translate” the passage into “modern” which loses all the effect of language, and good writing is about how language is used, the emotional impact of word choice, the sound and rhythm of the text as well as the written word, the colour and choice of one word over another.

            You would lose all the musicality of the following speech:

            Not poppy nor mandragora
            Nor all the drowsy syrups of the world,
            Shall ever medicine thee to that sweet sleep
            Which thou owedst yesterday

            And this has already been translated into “modern” for the students of today (so that the old language isn’t too difficult for the petals to understand):

            No drugs or sleeping pills will ever give you the restful sleep that you had last night.

            Can’t you see these are two different things? The flat modern version may give you the bare literal meaning of the words, but it has nothing of the effect of the sound of Shakespeare’s original.

            Ask your putative test groups to analyse the spoken version, what effect do all those sibilants have? The hissing of a snake, to indicate Iago’s untrustworthiness, his venom dropping into Othello’s mind and heart; the “shushing” sound associated with lulling to sleep or rest, which fits with the theme of restful slumber; the rhythm and pulse of the verse.

            The original is poetry. The modern version for students who can’t handle “old language” is a sleeping pill advertisement.

            It’s the same as Tolkien wrote in his “Letters” about deliberate archaism in order to express concepts that don’t fit into modern prescribed prose:

            The proper use of ‘tushery’ is to apply it to the kind of bogus ‘medieval’ stuff which attempts (without knowledge) to give a supposed temporal colour with expletives, such as tush, pish, zounds, marry, and the like. But a real archaic English is far more terse than modern; also many of things said could not be said in our slack and often frivolous idiom. …But take an example from the chapter that you specially singled out (and called terrible): Book iii, “The King of the Golden Hall’. ‘Nay, Gandalf!’ said the King. ‘You do not know your own skill in healing. It shall not be so. I myself will go to war, to fall in the front of the battle, if it must be. Thus shall I sleep better.’ This is a fair sample — moderated or watered archaism. Using only words that still are used or known to the educated, the King would really have said: ‘Nay, thou (n’)wost not thine own skill in healing. It shall not be so. I myself will go to war, to fall . . .’ etc. I know well enough what a modern would say. ‘Not at all my dear G. You don’t know your own skill as a doctor. Things aren’t going to be like that. I shall go to the war in person, even if I have to be one of the first casualties’ — and then what? Theoden would certainly think, and probably say ‘thus shall I sleep better’! But people who think like that just do not talk a modern idiom. You can have ‘I shall lie easier in my grave’, or ‘I should sleep sounder in my grave like that rather than if I stayed at home’ – if you like. But there would be an insincerity of thought, a disunion of word and meaning. For a King who spoke in a modern style would not really think in such terms at all, and any reference to sleeping quietly in the grave would be a deliberate archaism of expression on his part (however worded) far more bogus than the actual ‘archaic’ English that I have used.

            I do feel sorry for the youth who have, in their schooling, been deliberately deprived of any roots, who have had everything put into “relevant” terms for them, have been spoon-fed summaries and notes and essay themes and bullet-point lists of what “the Classics” are all about so that, given the plethora of online resources, they can successfully write essays and reports without ever needing to actually read the original text; who have been carefully guarded from any stretch of their mental muscles involved in trying to move outside their bubble of modern-day experience into the mindset of the past (because this would be “elitist” and “Dead White Males”, as though the treasures of literature did not belong as much to me, from working-class nowhere, as to the ‘proper’ class who would go to university) and above all, who have been deprived of the chance of discovering the muscular, supple, musical use of language merely because it wears a different set of clothes than jeans and T-shirt.

          • The original Mr. X says:

            Mr. X: Chance. Someone has to be most popular by chance, and popular authors tend to grow more popular.

            Chance probably works as an explanation for why one good/great author takes off instead of another one, but that doesn’t mean that chance is the whole explanation. A genuinely bad work isn’t going to become regarded as a classic, chance or no.

          • “and popular authors tend to grow more popular.”

            I don’t think that’s true. If you look at the popular novelists of a century ago, most are people you have never heard of.

            How many people, for instance, have ever heard of the other Winston Churchill–who, according to Wikipedia, was one of the best selling novelists of the early 20th century?

            Here’s a list of the ten best selling fiction books of 1895:

            Beside the Bonnie Brier Bush. Ian Maclaren
            Trilby. George du Maurier
            The Adventures of Captain Horn. Frank R. Stockton
            The Manxman. Hall Caine
            The Princess Aline. Richard Harding Davis
            The Days of Auld Lang Syne. Ian Maclaren
            The Master. Israel Zangwill
            The Prisoner of Zenda. Anthony Hope
            Degeneration. Max Nordau
            My Lady Nobody. Maarten Maartens

            (https://redeemingqualities.wordpress.com/early-20th-century-bestsellers/)

            I recognize two, plus the author of another.

          • Jiro says:

            A genuinely bad work isn’t going to become regarded as a classic, chance or no.

            Sure, but “not genuinely bad” isn’t a high barrier to meet. Yes, the work has to have *some* merit to survive, so survival isn’t only caused by chance. But it’s still *largely* caused by chance; a lot of works have some merit and are not genuinely bad, and only a few of those actually become classics.

            If you look at references to the popular novelists of a century ago, most are people you have never heard of.

            Technically, yes, but it’s a random walk.

            It’s like surnames. Some surnames today are far more common than others. In some places it’s pretty extreme–21.6% of all Koreans have the family name Kim. That’s not because one family name is better than another–it’s because the percentage of the population with a specific family name undergoes a slight random change from generation to generation. After lots of generations, some names get random walked to extinction and some to popularity. Popularity of “classics” follows the same rule; some work is going to get random walked to classic status (and of course popular works are more likely to do so, even though most popular works still drop out.)
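
            Jiro’s surname story is easy to check numerically. Below is a minimal sketch of neutral drift with made-up parameters (the function name and all the numbers are mine, not anything from this thread): every name starts equally common and none confers any advantage, yet names still get random-walked to extinction while the survivors balloon.

```python
import random
from collections import Counter

def surname_drift(num_names=100, pop=500, generations=1000, seed=42):
    """Neutral drift: each person in the next generation copies the
    surname of a uniformly chosen member of the current generation.
    Pure chance; no surname carries any advantage over another."""
    rng = random.Random(seed)
    people = [i % num_names for i in range(pop)]  # names start spread evenly
    for _ in range(generations):
        people = [rng.choice(people) for _ in range(pop)]
    return Counter(people)

counts = surname_drift()
print(len(counts), "of 100 names survive")
print(max(counts.values()) / 500, "share held by the most common name")
```

            With these hypothetical numbers, a run typically ends with a small fraction of the original names holding most of the population; raising the number of generations relative to the population size tends to drive it all the way to a single fixated name.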

          • onyomi says:

            @David Friedman

            Yeah, I was also going to mention that phenomenon: of all the successful artists of a given century, some very small percentage will “stand the test of time” and/or still be consumed in some form a few centuries later. This can give us the mistaken impression that premodern people didn’t produce heaps and heaps of silly, derivative, bad art. They did, of course; we’ve just never heard of it.

            BUT the ones we like and have heard of are often not the ones people of the time would have predicted. It would be so interesting to take a time machine 500 years in the future and find out, for example, that no one has heard of Harry Potter, but Bon Jovi is widely considered the greatest musician of the 20th century or something.

          • Murphy says:

            @Deiseach

            I don’t think you’d have to translate it.

            Intro for the “trial” group: “OK, in academic departments we often receive lots of emails from random people asking us to review their work. Recently I’ve been getting a string of emails from a gentleman who likes to write in something like an old English style.”

            There. No need to translate.

          • Murphy says:

            @onyomi

            “you would find the effect weaker and weaker the more well-educated the test subjects”

            If you select your subjects based on “number of years spent investigating ESP”, you also tend to find that the people who’ve spent many, many years on it are more likely to believe it exists, because people aren’t just learning; they’re being selected for (1) already believing and (2) being able to withstand or even enjoy it.

            If someone finds that every passage of James Joyce grates on their very soul and that every plot from Emily Brontë makes them want to stab themselves in the eye, then they’re extremely unlikely to still be in the field 20 years later. Conversely, their classmate who feels great joy at exposure to the same authors is far more likely to end up as an English lit professor.

            That being said, the above is hyperbole for clarity and to get my point across. I do believe there’s a reasonable number of genuinely good classics but simply saying that since english lit professors like something it must be good is not good logic.

            Selection of english lit professors is not based on anything objective. It strongly selects for people who already like and enjoy the same things that previous english lit professors do because first you have to make it through years of courses written by english lit professors.

            For assessing objective reality this isn’t too big a deal but when it comes to matters of taste it yields a feedback loop.

        • Deiseach says:

          Smug people who like to bash anything modern will claim that we have no modern great art yet we have more artists creating more art than at any other point in human history.

          And we won’t know who the real greats are until a century or so goes by and winnows out the merely popular from the enduring.

          There are more examples than I can count of authors who sold hundreds of thousands of copies of each novel, who were wealthy and famous and well-regarded in their day, of whom critics at the time said their works would endure in letters, and who a year or five or ten after their death were completely forgotten.

          Back in my school days when I was going to vocational college, there was a book from 1900 in the library with a couple of hundred names of young up-and-coming poets and writers of the time, whom the critics agreed would probably be the ones to watch, the names that would endure.

          Out of those two hundred, about a handful of names were ones that would be recognised today, Yeats being one. Back then, he was just one of the field.

          So out of your hundreds today, expect only a handful to be recognised as real creative forces in the future.

          • Maware says:

            It’s not even that. A lot of the greats owe their success to unknowns, too.

            Sherlock Holmes is a huge example. Probably one of the reasons he endured instead of falling into obscurity was the efforts of William Gillette, the actor.

            https://en.wikipedia.org/wiki/William_Gillette

            A lot of the details people assume are canonical Holmes actually came from him. The deerstalker cap and pipe, and “elementary, my dear fellow,” are examples. He did an absurd number of stage plays about him, to the point where he was Holmes in the popular eye, even more than Basil Rathbone. Without him, Holmes probably would have faded from the public eye after Doyle tired of him. But Gillette is unknown today, and I only learned of him because I live close to the castle (!) he built.

            Or someone like Milo Hastings, who wrote City of Endless Night and The Book of Gud. You’d be surprised at how creative some of the forces were that we don’t hear about.

    • Douglas Knight says:

      The Flynn effect is probably basically false. It occurs much more on, say, Raven’s matrices than on more mainstream tests. In a single place and time Raven correlates highly with other IQ tests, but across time and space, it doesn’t. So there is an improvement on a narrow facet of IQ. International comparisons emphasize Raven for two reasons: (1) it doesn’t need translation; (2) it is hard for people to accuse it of being culturally biased.

      When you think of mental retardation, you probably think of developmental disabilities, which is not well described by an IQ threshold. If IQ were a bell curve, 0.4% of whites would score <60. Actually, more do, largely due to developmental disabilities, but they have a lot more problems, even mental problems, than are captured by an IQ test. Whereas 5% of (American) blacks score <60, very few due to developmental disabilities. They are much more able to function in society, although that IQ causes them a lot of problems. Even casual conversation distinguishes the two groups. Added: So I don’t think a country of IQ 70 is so implausible.
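
      The tail percentages quoted here drop straight out of the normal curve; a quick check (a sketch assuming an SD of 15 throughout, and taking roughly 85 as the second group’s mean, the commonly cited figure):

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd=15.0):
    """P(score < x) for a normal distribution with the given mean and SD."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

# Mean-100 bell curve: share scoring below 60
print(round(100 * normal_cdf(60, 100), 2))  # about 0.4%

# Same threshold against a mean of about 85
print(round(100 * normal_cdf(60, 85), 2))   # about 4.8%
```
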

      • Scott Alexander says:

        I thought the Flynn Effect was first spotted on ASVAB-style tests, which are pretty different from the Raven’s.

        • Douglas Knight says:

          I don’t know the history and I don’t see why it matters.* Even within ASVAB there are several subtests with different Flynn strengths. When I said “much more” I didn’t mean to imply that the smaller quantity was zero. But I do think that it is a small facet of intelligence.

          The fact that the strength of the Flynn effect is correlated with the practice effect (ie, gains on retest) suggests to me that it is measuring an easily learned skill and due to an environment (whether school or life) that more emphasizes this skill. But if it is easily learned, it probably doesn’t have much consequence – those who need it, learn it. Maybe people who really need it learn it in one area of their life and that helps them in other areas where it is optional, but maybe not.

          * A google search finds me claiming without citation that Flynn did first find it on Raven. Wikipedia says that he found it on the Dutch military test, but that it was pretty close to Raven.

          • onyomi says:

            This was my first explanatory thought: there’s no way the average Englishman was 20% dumber only 60 years ago, but it is conceivable that he was 20% less frequently exposed to standardized testing-type situations.

            As HeelBearCub commented further down, this might also explain some or all of the difference we see in, say, Sub-Saharan populations, who presumably don’t take tests any more often than the British of 60 years ago. The only difference is lagging development relative to Britain, which could indicate some more substantive difference, whether because of genetics, malnutrition, parasites, or some combination.

          • HeelBearCub says:

            @onyomi:
            “20% dumber”

            I don’t think we can say that? IQ is a distribution curve, not an absolute measure of knowledge.

          • Douglas Knight says:

            I don’t think it’s as simple as standardized tests. I don’t think that they fit the timeline very well.

    • Murphy says:

      >Why don’t the intellectual and artistic contributions of modern man far outstrip earlier generations?

      Do they not? Kids nowadays read and write more than any previous generation. More people are producing more art, much of it original and interesting, than ever before in human society.

      Unless you simply define everything modern as automatically crap it’s hard to argue that this isn’t the case.

      >Why don’t more people notice that people today are geniuses compared to people of the past?

      70-year-olds of today have their intelligence plus 50 years of experience with which to contend with 20-year-olds who merely have their intelligence. It’s also socially acceptable and pretty much normal for elderly people to claim bafflement at modern technology. Also, the people most likely to have stayed in a particular field are likely to be at the higher end of the ability distribution, while new entrants are more likely to be closer to the average.

      >Why don’t old people, or immigrants from less developed countries, shake their heads at the astonishing brilliance of young people today?

      many do.

      >How do societies with ~70 average IQ function *at all*?

      Because they’re societies and the people aren’t damaged even if they’re not great at abstract thinking.

      • jonathan says:

        > Do they not?

        Maybe you and Douglas Knight should talk about this 🙂

        > Kids nowadays read and write more than any previous generation. More people are producing more art, much of it original and interesting, than ever before in human society.

        This isn’t my perception. What art are you referring to?

        • Murphy says:

          >What art are you referring to?

          Books, music, television, films, paintings, comics, games, designs, crafts, plays, comedy, flash videos, operas and discussions.

          If you don’t automatically hand 10,000 free “being really old + everyone told me this was the pinnacle of literature when I was growing up so I believe them” points to a lot of old “classics” you can see them for what they are, mostly barely readable tat with the occasional jewel.

          If you don’t automatically penalize everything written in the last 50 years with -10,000 “darn kids, music was better in my day, their music is just noise” points then there’s a hell of a lot of good works in the last couple of decades.

          • The original Mr. X says:

            This seems like a bad case of typical-minding here: “I don’t like old stuff, therefore no-one does, and anybody who claims otherwise is just lying for status reasons.”

          • Murphy says:

            Typical mind? I’m arguing against the exact wanky-inverse.

            “I don’t like new stuff, therefore no-one does, and anybody who claims otherwise is just lying or doesn’t know any better.”

            Not all “classics” are bad. Some are very good yet the way they’re introduced to people almost guarantees an inflated assessment of them.

    • Anonymous says:

      >How do societies with ~70 average IQ function *at all*?

      How do animal societies function at all? Animals aren’t very bright from a human perspective, yet they do have communication, do have societies of a sort, and do manage to reproduce without help… except when they don’t, which is where the process of evolution prunes them. The ones that you do see functioning are the ones who have whatever it takes to succeed at the game of life.

      In the case of a human subspecies that has an average IQ around 70, it’s probably similar. High intelligence, abstract thought, low time preference – they’re demonstrably not required to function in tribal savagery, hell, they might actually impede survival.

    • ryan says:

      This is a pretty nice explanation of how IQ tests are normalized:

      http://homepages.rpi.edu/~verwyc/Chap4btm.htm

      Key part: “a large sample of test takers who represent the population for which the test is intended. This standardization sample is also referred to as the norm group (or norming group).”

      If you’re using the test for someone from a population other than the population for which the test was intended, you’re going to have a bad time. And if you’re comparing scores on tests normalized for different populations, you’re probably vastly underestimating the error in whatever conversion metric you’re using.

      To my knowledge there is no IQ test which has been normed for all 7 billion homo sapiens sapiens. So population to population comparisons are always going to be using suboptimal data.
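      Ryan’s point can be made concrete. Below is a minimal sketch of deviation-IQ scoring (the function name and the raw scores are hypothetical, purely for illustration): the same raw score maps to different IQs depending on which norm group it is compared against.

```python
import statistics

def deviation_iq(raw_score, norm_group):
    """Deviation-IQ scoring: z-score a raw test result against a norm
    group, then rescale to the conventional mean 100, SD 15."""
    mean = statistics.mean(norm_group)
    sd = statistics.stdev(norm_group)
    return 100 + 15 * (raw_score - mean) / sd

# Two hypothetical norm groups; the second population simply scores
# 10 raw points higher across the board.
group_a = [38, 42, 45, 48, 50, 52, 55, 58, 62]
group_b = [48, 52, 55, 58, 60, 62, 65, 68, 72]
```

      Here deviation_iq(55, group_a) comes out above 100 while deviation_iq(55, group_b) comes out below it, which is the whole worry about using a test normed on one population to score another.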

  29. DensityDuck says:

    Every study I’ve ever seen about the long-term effect of intensive schooling comes to the same conclusion: “Students do great while they’re in it, but the effects fade pretty quickly, often within a year or two”.

    • jonathan says:

      I think this is the true individual/society IQ puzzle.

      If the Flynn effect is even somewhat right, we’ve done something at the society level that we can’t seem to do even marginally at the individual level.

      • Anon says:

        Wait, isn’t it the same as the schooling example? It’s just that you never “quit” society, so it’s like school that never ends.

        I wonder whether a person left to live in the wilderness for a decade would see their IQ decrease. I’d guess it would.

  30. Nornagest says:

    From the social intelligence test link:

    Most important, however, the GWSIT came under immediate criticism for its relatively high correlation with abstract intelligence.

    *facepalm*

  31. pf says:

    Thanks a lot, Get Out The Vote people.

    Describing voting as a means of influencing the outcome of an election probably doesn’t encourage intelligent people to vote, since it’s trivial to show that outside of rare cases in very small local elections, the election outcomes will be identical regardless of whether an individual votes or not.

    Phrasing voting as a social duty, or as a visible pro-social act that encourages positive secondary behaviors, is likely to get more traction among intelligent potential voters (assuming you make a good case).

    • HeelBearCub says:

      @pf:
      That’s just a “tragedy of the commons” problem. Smart people usually understand that commons problems aren’t solved by saying “meh, who cares”.

      • pf says:

        Looking exclusively at election results (ignoring all secondary effects), voting is strategically different from “tragedy of the commons”. In the latter case, cooperation vs. defection gives you noticeable positive and negative feedback (misaligned in the short term, but favoring cooperation for a long-term thinker known to be in the presence of other long-term thinkers, especially if mutual agreements are allowed and can be monitored/enforced).

        In the former case, your vote makes exactly no difference at all unless the remaining votes form an exact tie (there’s a little fuzz on that, since in very very close votes, each vote marginally increases or decreases the likelihood of triggering a recount or runoff vote). In each election, you can actually confirm that your vote (would have) made no difference, so even the usual optimistic biases will be trained out of you if you pay any attention.

        • brad says:

          The vote totals are communicative. So in addition to the negligible chance of swinging the election, a vote has a tiny-but-not-negligible marginal communicative effect.

          Probably still not enough to outweigh the costs of voting, but at least worth mentioning.

          • pf says:

            Vote totals were one of the secondary effects I had in mind when I said “ignoring all secondary effects”. The others that come to mind are:

            Voting as a discipline:
            An intelligent, pro-social person is likely to feel obliged to look into the issues and candidates before voting, which means that mentally committing to the act of voting amounts to committing to the act of keeping informed and potentially sharing that opinion (creating an influence). Personally, when I’ve made comments online of the form “you should become informed about the candidates and issues, and then vote your opinion”, I’ve gotten push-back from people who thought I was discouraging voting by adding difficult preconditions to an otherwise simple task.

            Voting as a license to commit otherwise rude behaviors:
            Exhorting people to make certain choices in the voting booth would be hypocritical if you haven’t voted yourself, so voting is a kind of license to publish political rants, join flamewars, etc.

            Voting as a means to encourage other pro-social behavior via peer pressure:
            Describing voting as a civic duty, and then making it a public display, might help to establish an environment in which this and other nominally pro-social behaviors are normal and expected (this is also a reason to vote in person at a voting station rather than via the mail).

        • HeelBearCub says:

          @pf:
          You are looking at this in isolation, as if a) there is only one election and not a slate of elections all across the country, b) the state of a given race is secret and unknowable, and c) politicians pay no attention to previous election results when making decisions going forward.

          Even if a candidate wins an election regardless, it makes a difference whether they win by 20% or 5% or 1%.

          Much as no single fish taken from a fishery makes a real difference, it is not the loss of a single voter from the electorate but the cumulative effect of all voters that matters.

          And the elections that make the most difference and where individual voters can make the most difference, the primaries, are the ones where turnout is the lowest. This is a bizarre result if what is really motivating people is the results on election day.

          • pf says:

            When I said “ignoring all secondary effects”, I meant excluding, among other things, your case (c). My original point was exactly that: when people advocate voting by claiming only the opportunity to decide who is elected and which ballot propositions pass, they’re advocating voting to people who haven’t done the math.

            A convincing argument for voting directed at “smart people” would have to involve secondary effects which change the paradigm from “threshold result with a very high probability of having zero impact” to a different one, where marginal positive contributions yield marginal positive returns.

            If your original point was “yes, and by the way, (c)”, and you agree with all of that, then I misunderstood, and we’ve been in agreement the whole time.

            I would further suggest, based on the U.S./U.K. difference, that the ability to change voter tallies isn’t a winning argument: U.S. news covers voter tallies extensively, so if it were convincing, “smart people” would already be acting accordingly. My understanding is that high-IQ people in the U.S., like the U.K., are disproportionately pro-social.

    • Squirrel of Doom says:

      An individual vote has vastly more influence under proportional representation.

      If we’re a million voters and there are 100 mandates, the average vote has a 1-in-10,000 chance of adding a mandate for my party. If we’re 10,000 voters in a winner-takes-all race, it’s probably more like one chance in a billion.

      This might explain why smart US voters don’t vote much, but it sadly fails to explain why UK voters behave differently.
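      The back-of-envelope numbers above can be checked under a toy binomial model (everything here is illustrative: real electorates are not independent coin flips, and the specific probabilities are assumptions, not data):

```python
from math import comb, exp, log

def pivotal_probability(n_other_voters, p_support):
    """Chance that the other voters split exactly evenly, so one extra
    ballot decides a two-candidate winner-take-all race, under a toy
    model where each other voter independently backs your candidate
    with probability p_support."""
    if n_other_voters % 2:
        return 0.0  # an odd number of others can never tie
    k = n_other_voters // 2
    # Binomial pmf at the tie point, in log space to avoid underflow.
    log_pmf = log(comb(n_other_voters, k)) + k * log(p_support) + k * log(1 - p_support)
    return exp(log_pmf)

even_race = pivotal_probability(10_000, 0.50)    # dead heat: ~0.008
leaning_race = pivotal_probability(10_000, 0.53)  # slight lean: ~1e-10
```

      With 10,000 independent voters a dead-even race ties surprisingly often (about 0.8% of the time), but a race leaning even 53/47 pushes the pivotal probability to roughly one in ten billion, which is the intuition behind “one chance in a billion” for winner-take-all.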

  32. onyomi says:

    At what point do we stop trying to explain why the Flynn effect is happening and instead try to explain why it appears to be happening (but isn’t real)? The idea that the current generation of young people has more native intelligence than their grandparents or great-grandparents strikes me as completely implausible and not the least bit supported by any of my experience with the very elderly, even without taking into account the gradual cognitive decline associated with normal aging.

    • HeelBearCub says:

      If the Flynn effect isn’t real, then what are the odds that IQ is measuring g with enough fidelity to compare countries with wildly different circumstances?

    • Jaskologist says:

      I also find it questionable whether people are getting smarter.

      On the other hand, sports records keep getting broken. You’d think we would have hit the human limit for the fastest 100m dash a while ago, but somehow people* are still getting faster. We’re definitely improving on that fairly objective measure, so maybe we’re really improving in the more nebulous IQ area, too. It’s still a mystery to me how this could be.

      * Or, looking at this chart, maybe it’s just Usain Bolt getting faster through the power of nominative determinism. Still, so many of those records are really recent.

      • onyomi says:

        I do think we are getting better at athletics, though probably not quite as much as we think we are–I saw a presentation basically explaining why Olympic athletes keep breaking records and part of it is just stuff like better equipment–softer gym mats, etc. But another big part of it is better training methods (no one would be a major league pitcher and not also do weight training nowadays, for example), and, maybe even more important, better sorting–recognizing early the body types suited to various sports and getting those kids in there young. There is also an interesting psychological dynamic where, once one person runs the 100 meters in under 10 seconds, shortly thereafter 20 people can do it even though no one could before. Knowing something is possible seems to make it more achievable.

        I think the relevant comparison for athletic achievement is not IQ, but intellectual achievement. I think more good scholarship is being written now than at any other time in history (though there are some fields which I think are weaker now than at times past), but I don’t think people are a lot smarter now than they were 100 years ago; I think we’re just building on a much bigger base of knowledge and have improved our methods.

        To make a more parallel comparison, I think you’d have to compare IQ to potential for athletic achievement, which I doubt has changed nearly so much. I think people born 100 years ago had about the same potential for athletic achievement as we do, assuming they avoided polio, etc. but the training and sorting methods were not as advanced, nor the equipment as good.

        Given my relatively dim view of education nowadays, however, one wonders: when did our ability to train our bodies outstrip our ability to train our minds?

      • Dr Dealgood says:

        Not to be a cynical curmudgeon but there might be a more prosaic explanation for the explosion in athletic records.

        • onyomi says:

          Care to elaborate?

          • jaimeastorga2000 says:

            Better or more widely used drugs?

          • Dr Dealgood says:

            Steroids. Maybe some more exotic performance enhancers like blood doping too, but mainly the juice.

            I mean it shouldn’t be a surprise that Olympic athletes cheat, there’s a well established tradition of blatant cheating in the Olympics. And most professional sports are already decades into their respective steroid scandals. Hell, even some of the kids on my old high school football team used steroids.

            The most parsimonious explanation is that all, or at least most, of the record-breaking athletes in the last half-century or more were on some cocktail of hormones at the time.

          • Saint_Fiasco says:

            Even if steroids are a valid explanation for the athletic records, that doesn’t mean the improvement isn’t happening.

            Couldn’t we also say that children are now “cheating” on IQ tests by having better nutrition and education?

        • stargirl says:

          The conventional opinion is that athletes are getting less chemical assistance than they got in the 80s. The effects of massive doses of steroids are more noticeable in women than men. And in many women’s sports, especially track and field, most of the records still date back to the 80s: http://www.perelman-pioneer.com/drugs-and-the-greatest-women-athletes-of-all-time/

          Essentially all successful Olympic athletes are on steroids. But steroid testing is sufficiently effective that people cannot take as many steroids as they could in the “Steroid era.”

          • Deiseach says:

            Essentially all successful Olympic athletes are on steroids.

            Which is why I contend eventually professional sporting bodies will throw in the towel on doping and permit certain drugs at such-and-such levels. Because if they really mean to get drugs out of sport, they’re going to have to scrap the records for twenty or thirty years and go back to natural, unaided performance from athletes which means slower times etc.

            And since money comes from “tune in to watch the 100m record get smashed tonight!” that is not going to play with audiences or advertisers. So no money means genuine amateur athletics, which means say hello to all your new Irish gold medallists in all sports, or it means recognising that horse has long since bolted and you can’t lock the stable door.

          • Psmith says:

            “Because if they really mean to get drugs out of sport, they’re going to have to scrap the records for twenty or thirty years and go back to natural, unaided performance from athletes which means slower times etc.”

            This has actually happened in several sports (weightlifting, powerlifting, javelin with the redesigned javelin, cycling with various rules about equipment, wrestling–and maybe judo?–with weight class changes insofar as those even have quantitative records), probably some others I don’t know about. It has also happened informally in the sense that, for example, average power outputs on big climbs in the Tour de France are consistently 5-10% lower than they were ten years ago, the men’s mile record hasn’t changed since the late nineties, the women’s 100m record was set in 1984, etc., and these sports seem to be about as popular as ever, modulo, for instance, the presence of a dominant American in road cycling. In general, people seem to be willing to tolerate reductions in performance in exchange for at least a credible fiction of drug-free sports, at least in the context of relatively high-brow sports as opposed to, say, pro wrestling.

      • TrivialGravitas says:

        The body of knowledge about what constitutes the best training is an ever-advancing field. This is something the education system lacks, and even if it didn’t, training advances only benefit a small percentage of the population, so it’s not comparable to Flynn effects unless there’s an Extremistan effect in the data (think average incomes: a lot of schlubs making 30k a year and a few billionaires throwing the average up).

        • onyomi says:

          But why is the state of athletic education and training advancing much faster and better than that of actual education…?

          Actually, maybe I shouldn’t be surprised, considering how much the best football coaches get paid…

          • HeelBearCub says:

            @onyomi:
            Take away your computer, the internet, books and pen and paper and how “intelligent” would you be able to become? Take away iodine and add lead to the water?

            Athletes have faster tracks, faster suits, faster pools, and better drugs now than they did not too long ago. They also, unlike in the not-too-distant past, dedicate themselves to their chosen sport, and to sport in general, year round.

          • onyomi says:

            “Take away your computer, the internet, books and pen and paper and how “intelligent” would you be able to become? Take away iodine and add lead to the water?”

            I’m not sure what your point is.

            “Athletes have faster tracks, faster suits, faster pools, and better drugs now than they did not too long ago. They also, unlike in the not-too-distant past, dedicate themselves to their chosen sport, and to sport in general, year round.”

            We spend waaaay more money on school equipment than in the not too distant past as well. Why is intellectual achievement not keeping pace?

            And professional academics do dedicate themselves to research and education year round. Students get the summer off, which I think is not ideal (a relic of the necessities of farming–we’d probably be better off distributing the vacation more evenly throughout the year), but 40 hours a week in class 8 months out of the year, plus homework, is nothing to sniff at. Anyone training that much could be Olympic level for sure.

          • Saint_Fiasco says:

            If your athletic training method does not work you will find that out relatively soon, while if your actual education method does not work you will only find out half a generation later.

            This feedback makes developing effective athletic education easier than actual education.

          • onyomi says:

            “This feedback makes developing effective athletic education easier than actual education.”

            This is a good point.

            I also think it’s one of those cases, like with healthcare, where we actually care about it too much to be at all objective, and therefore end up making worse decisions than we do about more trivial matters (as a libertarian I always point out we have more problems with the issues “too important” to leave up to the vagaries of the wildcat free market: housing, healthcare, education).

          • HeelBearCub says:

            @onyomi:

            Moore’s law sort of blows the idea of “we aren’t gaining in intellectual achievements as fast as athletic ones” out of the water. Compare how fast chip speeds are improving to how fast 100M dash speeds are improving and it isn’t a contest. And to a great extent, the athletic achievements are occurring because of intellectual ones.

            My point about year round was that year round athletic dedication is new, but year round academic dedication is old.

          • onyomi says:

            “we aren’t gaining in intellectual achievements as fast as athletic ones”

            That’s why I said, “I think more good scholarship is being written now than at any other time in history.”

          • TrivialGravitas says:

            @Saint_Fiasco: Sadly not true (a good example is stretching to prevent injury, which it turns out may or may not do anything, and at best dynamic warmups work way better, yet people have sold, and continue to sell, stretching as really good for injury prevention), and most people waste their time relative to their potential. But it is easy to SCIENCE things, and people at the top levels are generally getting good advice; many are able to dive into journals and examine the evidence themselves.

          • TrivialGravitas says:

            @onyomi: to more directly answer your question, now that I realize it was directed to me:

            Given 40 or so people at roughly equal training levels, you can test two training methods for six weeks and figure out which one is better at approaching a specific goal. That is, when we SCIENCE things we know, with very little room for doubt, that plyometrics will make you jump higher and run faster, but only if you’re already strong enough (successful tests used men who could parallel squat bodyweight for 3 reps). We also know that continuing to lift high-intensity/low-volume weights with progressive overload in tandem with plyos improves this (an average of ten centimeters added to standing jump height vs. a no-plyo control), and we know additional techniques that have even more additive effect (low-intensity low-volume lifts, explosive lifts). Start stacking all this together and you get people who have a massive advantage over people 50 years ago, when this stuff was rarely done and never done in tandem.

            In education? You need to run experiments for years, you need to somehow keep parents from demanding you put their kid in a different group, and then you have to follow up to see if what you did provided a cumulative advantage, because if it only lasts six weeks, hell, if it only lasts six minutes, that can get you a new record in athletics but it’s useless in education.

          • Anonymous says:

            @TrivialGravitas

            Perhaps this would be less the case if kids did more useful things earlier on, rather than being effectively shielded from the real world until their twenties.

            (That sounds snarky but isn’t meant to be – I’m not claiming that kids definitely should start doing productive work earlier on, only that one advantage of them doing so might be the ability to more quickly see the results of different education techniques.)

          • TrivialGravitas says:

            I don’t really see it, maybe for teenagers, but if you come up with techniques that make an 8 year old able to do something low level productive that’s probably not a useful means of creating highly skilled adults.

  33. Will S. says:

    Scott, have you considered doing an IQ FAQ? Not a racial IQ blah blah blah, but something about how the different IQ tests are structured, what they claim to measure, and what predictive power they actually possess. I’ve tried to read some papers, and to me they sound like what critical theory sounds like to you: a group of people made up a field of study whose truths are obvious to those that are part of the field, while to those on the outside it sounds like ducks quacking.

    For me the biggest red flag is the constant comparison of IQ to SAT scores. I haven’t taken an IQ test but I have taken the SATs and the GREs and I’m baffled how anyone can think they’re a good proxy for innate intelligence (whatever that is). I mean that in the sense that it’s something you can train for or apply things that you’ve already learned, not that the questions themselves are meaningless.

    Also, the audience for this blog is very critical of the social sciences generally, but psychometrics seems to get a giant pass. Maybe there is a good reason for this but, once again, when I read papers linked here I don’t see how they are any better than papers that (usually justifiably) are castigated here.

    I admit to having a strong bias against IQ meaning very much, and when I read comments here and look over linked papers I feel like I’m reading a high concentration mixture of arguments from authority and question begging propositions. An example: I was looking at something linked here about racial admixture that categorized black subjects based on the shade of their skin. This seems crazy to me, as black people from the same family can have a wide variety of shades of skin. And this was offered in the paper as obvious evidence of racial admixture without, as far as I could tell, any citation supporting this method for classifying people.

    Anyway, I’m open to changing my mind. At the very least it would be nice to have all of the information structured in a way that promotes an understanding of the topic rather than scattershot comments. And your FAQs/More Than You Wanted to Know are much more informative than something like a Vox explainer and I trust you not to engage in disingenuous demagoguery like a Steve Sailer type.

    • Max says:

      I too was always puzzled by the extremely good correlation between IQ and SAT/GRE, but it’s a fact. There is a lot of very good science and statistics supporting it (there is a series of posts about it on Information Processing). I chalk it up to the fact that very long tests, even on relatively trivial subjects, do measure the general quality of the analytical/memory apparatus.

      As to whether IQ tests measure anything useful: if you look at intelligence as first and foremost a classification problem, then IQ tests (Raven’s matrices being the prime example, and the WAIS still is, even if it’s somewhat diluted) are a very good representation of that ability.

      • Will S. says:

        Do you have any links? I just spent far too long looking through InfoProc for information correlating SAT scores to IQ. The only thing I found was this arXiv paper by Steve Hsu, which cites exactly one study about IQ and SAT scores. The study is, unfortunately, behind a paywall. However, it only tracks people who score in the top 1% when they are 13, and doesn’t have, afaict, anything to do with IQ at all.

        And if you look at the common conversion chart, it seems to be based on what’s acceptable to high IQ societies, which is basically arbitrary.

        With further research I found a single(!) paper that tries to correlate IQ and SAT scores. Except guess what, the study doesn’t actually do that. It’s called “Scholastic Assessment or g? The Relationship Between the Scholastic Assessment Test and General Cognitive Ability” (I’m afraid of the spam filter). It uses data from 1979, and that’s it, and it correlates the SAT with the ASVAB and Raven’s Matrices, the former of which is not really an IQ test (I guess?) and the latter of which can be converted to IQ, although that appears to be something they do after the fact. In any case, the correlation is .48, which is high, but not SAT —> IQ conversion chart high.

        Anyway, a single study that looks at data from 1979 seems like a highly dubious justification for the SAT —> IQ correlation. This is actually driving me crazy, because so many of the commenters make it seem like the two are interchangeable and that there’s robust evidence of it, when that does not seem to be the case afaict. Which makes me even more skeptical of the whole IQ thing than I already was, which was pretty skeptical.
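        For what it’s worth, the .48 figure itself shows why conversion charts overreach. A back-of-envelope sketch (function names invented for illustration):

```python
import math

def best_linear_iq_guess(sat_z, r=0.48):
    """Regression prediction of IQ (mean 100, SD 15) from a
    standardized SAT score, given a correlation r."""
    return 100 + 15 * r * sat_z

def prediction_spread(r=0.48):
    """SD of actual IQs around that regression prediction."""
    return 15 * math.sqrt(1 - r * r)
```

        At r = .48, even a 3-SD SAT score only predicts an IQ of about 122, with a residual spread of roughly 13 points around that guess, so a chart that maps SAT scores one-to-one onto IQs is claiming far more precision than the correlation licenses.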

        Of course, I’d be happy if someone could provide me with links or papers that I didn’t find.

        • Emily says:

          You could check out “SAT and ACT predict college GPA after removing g” for both the lit review and the research. (Yes, the research is with NLSY97, so they have the ASVAB and not an IQ test.)

          Raven’s is an IQ test.

          But the SAT —> IQ conversion charts are not reliable.

          • Will S. says:

            Thank you very much!

            So “Predictive Validity of Non-g Residuals of Tests: More Than g” says that the non-g aspects of the SAT predict GPA in college about as well as the g aspects of it. Which seems like almost the opposite of the conclusion generally supported here, or one that at least splits the difference.

            And if I’m understanding this correctly, g is essentially a correlation coefficient that’s treated as an independent quantity with independent properties? Once again, that sounds like a dubious premise on which to base an entire field of science. Are there examples of other fields doing something similar?

            This gets to my original request of Scott, which is for an IQ FAQ. The paper you pointed me towards, while interesting, isn’t foundational enough to placate my concerns. Like, I’ve never seen a concerted defense of why IQ tests are important, what they tell us, and why we should care. Why should I care about g or believe it measures something real? On this blog especially these truths are considered to be self-evident, but when I look at the papers it seems far from obvious to me. And I have neither the time nor the inclination to try and piece together an understanding from reading papers on my own and trying to sort the good from the bad.

            What I’d really like is a sort of “Brief History of Time” for psychometrics. But an SSC FAQ would be also be great.

          • Will S,

            What you want to read is this book http://emilkirkegaard.dk/en/wp-content/uploads/The-g-factor-the-science-of-mental-ability-Arthur-R.-Jensen.pdf and this paper http://www.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf

            The first is best read with some background in statistics. g is not a correlation, it is a factor (latent variable). It is a measure of general cognitive ability, a mental ability. One can extract factor scores, which are estimates of cases’ distributional positions on that ability. g is a scientific construct, not a concrete object. It has numerous correlations with expected biological foundations such as brain size, nerve conduction velocity, lesions and so on. I summarized correlates in this unfinished paper: http://emilkirkegaard.dk/en/?p=5034 We now know of more, like measures of brain interconnections http://www.nature.com/neuro/journal/v18/n11/full/nn.4125.html
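            The “latent factor” idea can be sketched with a toy single-factor simulation (all loadings and sample sizes here are invented, and the sum score is only a crude stand-in for proper factor scores):

```python
import math
import random

random.seed(0)

# Toy single-factor model: five mental tests, each equal to a shared
# latent ability g times a loading, plus independent noise. All
# numbers are invented for illustration.
N_PEOPLE = 2000
LOADINGS = [0.8, 0.75, 0.7, 0.65, 0.6]

g = [random.gauss(0, 1) for _ in range(N_PEOPLE)]
scores = [
    [l * gi + math.sqrt(1 - l * l) * random.gauss(0, 1) for l in LOADINGS]
    for gi in g
]

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

tests = list(zip(*scores))  # one sequence of 2000 scores per test

# The "positive manifold": every test correlates positively with every other.
pairwise = [corr(tests[i], tests[j]) for i in range(5) for j in range(i + 1, 5)]

# A crude factor-score estimate: each person's sum across all five tests.
total = [sum(row) for row in scores]
g_recovery = corr(total, g)  # how well the estimate tracks the true latent g
```

            Every pair of simulated tests correlates positively (the “positive manifold”), and even the crude sum score correlates around .9 with the true latent g, which is the sense in which factor scores estimate a person’s position on the latent ability.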

            Non-g residuals sometimes have validity, but usually do not. Try reading the chapter about predictive validity in this book http://emilkirkegaard.dk/en/wp-content/uploads/Helmuth_Nyborg_The_Scientific_Study_of_General_IBookos.org_.pdf (“The Ubiquitous Role of g in Training”).

            E.g.:

            “Thorndike (1986) estimated the comparative validity of g versus specific ability composites for predicting results for about 1,900 enlisted U.S. Army trainees enrolled in 35 technical training schools. Specific abilities showed little incremental validity (0.03) beyond g and on cross-validation the multiple correlations for specific abilities usually shrunk below the bivariate correlation for g.

            Using a large U.S. Air Force sample, Ree & Earles (1991) demonstrated that training performance was almost exclusively a function of g rather than specific factors. Participants were 78,041 enlisted men and women enrolled in 82 job-training courses. Ree and Earles examined whether g predicted training performance in about the same way regardless of the kind of job or its difficulty. Based on Hull’s (1928) theory, it might be argued that although g was useful for some jobs, specific abilities were more important or compensatory and therefore, more valid for other jobs. Ree and Earles tested Hull’s hypothesis with regression analyses. They sought to resolve whether the relationship between g and training performance was identical for the 82 jobs. This was accomplished by initially imposing the constraint that the regression coefficients for g be the same for each of the 82 equations, and then freeing the constraint and allowing the 82 regression coefficients to be estimated individually. Even though there was statistical evidence that the relationship between g and the training outcomes differed by job, these differences were so small as to be of no practical predictive consequence. The relationship between g and training performance was nearly identical across jobs. Using a single regression equation for all 82 jobs resulted in a reduction in the correlation of less than one-half of 1%.

            In selection for technical training, specific ability tests may be given to qualify applicants on the assumption that specific abilities are predictive or incrementally predictive. For example, the U.S. Air Force uses specific ability tests for qualifying applicants for training as computer programmers and intelligence operatives. Besetsny, Earles & Ree (1993) and Besetsny, Ree & Earles (1993) examined these two specific ability tests to determine if they measured a construct other than g and if their validity was incremental to g. The samples were 3,547 computer-programmer and 776 intelligence-operative trainees and the criterion was training performance. Two multiple regression equations were computed for computer-programmer and intelligence-operative trainees. The first equation for each group had only g and the second g and specific cognitive abilities. The difference in R2 between these two equations was tested for each group of trainees to determine whether specific abilities incremented g. Incremental validity gains for specific abilities beyond g for the two training courses were 0.00 and 0.02, respectively. Although the specialized tests were designed to measure specific cognitive abilities thought to be incrementally predictive, they added nothing (0.00) or little (0.02) beyond g.”

            Bad methods can easily find evidence for validity of non-g variance, see e.g. my little study here: http://emilkirkegaard.dk/en/?p=5465

          • Will S. says:

            Thanks, I will look through this giant stack of resources (it will take some time).

            The conclusion I’m beginning to reach is that people tend to engage in Motte and Bailey reasoning when it comes to IQ. My impressions were formed by a variety of very(!) strong statements about IQ, and when you look through these papers it’s a bunch of correlations between .3 and .5. Which is certainly something, but it’s not the answer to the meaning of life, which I would’ve guessed was the case based on many of the comments I’ve read about IQ.

        • Dr Dealgood says:

          Intelligence: Knowns and Unknowns was an official statement by the APA after the massive controversy from ‘The Bell Curve,’ and is basically a big review article on the field of psychometrics circa 1996.

          It’s missing some important developments that have happened in the last twenty years, like the very interesting results of genome wide association studies attempting to find genes which predict IQ, but the statistics at least haven’t changed since then.

          • Will S. says:

            Thank you! That’s very much what I was looking for. I’ll give it a read.

          • Will S. says:

            I finished reading it. It was certainly interesting and I feel like I have a much better understanding of psychometrics now, but it left me feeling less charitable towards the IQ-obsessed. I can see why it would be a good tool for diagnosing learning disabilities, but it does not seem to be close to the be all end all so often stated by commenters here.

            I know it’s not the place of an overview to go into a lot of detail, but there is also a lot of “we’ve found this correlation” and leaving it at that, instead of interrogating that correlation further. I wouldn’t bring this up if this were merely in the overview, but it’s also a major weakness of a lot of individual papers I’ve looked at. IQ has a correlation of .3 to .5 with a lot of positive life outcomes, which I don’t think warrants its primacy in this community. Especially not when the primacy is so unquestioned.

            Look at this chart. It has a variety of correlations with life outcomes, of which IQ is one (and it has one of the strongest correlations… but not a lot stronger). That’s interesting to me in a way that correlating similar tests to one another is not.

          • jonathan says:

            IQ has the advantage of being fairly easy to measure, and a more powerful predictor than just about any other single measure. It also has a long history and well-established literature, unlike many other psychological concepts that tend to be faddish. (“Hey, I just came up with a new measure of emotional intelligence that explains everything!”)

            That said, individual outcomes are the result of very complex and random processes, and we can’t reasonably expect any single measure to predict them with high confidence.

            I prefer the perspective, “Wow, isn’t it amazing how much we can predict based on a simple test that can be administered in a few hours?” But that’s very different from saying that it’s the only thing that matters.

    • JK says:

      Essentially any test item that can be objectively scored (true/false) and doesn’t require some secret or restricted knowledge can be used to measure IQ or g. All such items will measure other things besides g as well, but when you sum up scores on a large number of such items, the sum score will mostly reflect g. The more diverse content-wise the items are, the better measure of g the sum score is. Therefore, Wechsler’s IQ scores are better measures of g than Raven’s matrices scores. This is because the sum score reflects whatever is common to all the items, and when the items are diverse, what is common to all of them is not some special ability needed to solve specific kinds of items but rather the general ability that helps solve items regardless of what they are like.

      The ability to learn and apply that knowledge is as good a definition of intelligence as any, and that’s what SATs and GREs measure. In general, the more uniform the opportunity to train for some task is, the more the differences between people on that task will reflect their genetic differences (assuming there’s no ceiling on the task).

      I don’t understand your skin color comment. Why does it matter that it varies within families? It’s still an indicator of ancestry. Darker black individuals have more African ancestry, on average (although this, interestingly, doesn’t apply within families, which can be used to test various hypotheses).

    • Saint_Fiasco says:

      It’s something you can train for or apply things that you’ve already learned

      Is IQ correlated with conscientiousness? If so, high-IQ people would study and train harder for their SATs, and their high IQ might make such training more efficient. Also, the cumulative effects of having a high IQ when they were children will make it so they already learned lots of stuff in school and don’t need to study as hard.

      I would have been extremely surprised if the SAT scores didn’t correlate with IQ.

      • HeelBearCub says:

        @Saint Fiasco:
        The question is, how much do any differences in IQ scores between either individuals or groups tell you about their IQ, and then what does that tell you about “g” (innate intelligence, which is what most people think IQ is measuring)?

        Reductio: Tranq dart a member of an isolated Amazonian tribe, put them in an SAT test room and then calculate their IQ (and by inference g) from their SAT test score.

      • Will S. says:

        I don’t know, is it? In any case, you seem to have assumed your conclusion pretty handily.

        As far as I can tell there’s been exactly one study trying to directly correlate SAT scores and IQ. I was able to find the full paper yesterday, but now I can’t. However, the abstract can be found here. It found a correlation of .48 between SAT scores and IQ derived from Raven Matrices tests. And it used a fairly small sample size (104) from 1979 (and no p-values are mentioned), so I would be interested in an updated version of this study, which apparently doesn’t exist.

        My point was that many commenters write about being able to plug in SAT scores for IQ like it’s an established fact. I would believe there is a moderately high correlation, but having taken the SAT, I wouldn’t put much faith in its inability to be gamed, or to be skewed in a way that doesn’t reflect “innate intelligence,” whatever that is.

        Of course, if you define intelligence as the sum total of everything that decides what one scores on their SATs, you’ve neatly defined away the problem while saying something neither interesting nor falsifiable!

        • There are more studies, but it is not a well-researched area.

          http://sapa-project.org/dmc/docs/CRICAR2014.pdf

          Note that self-rated SAT has some error. They did not correct for range restriction, but the reliability corrected correlation to SAT was .59 (N=thousands). If you correct for range restriction, it will be a much stronger correlation, probably something like .80 to .90.

          RAPM is not a good measure of general cognitive ability, it is too narrow, so the correlations from the first study you mention also need to be corrected for construct invalidity (imperfect measurement of the construct of interest). The paper I cited for you above finds a correlation of about .80 between the g of Cattell’s test and that of the WAIS etc. Cattell’s test is similar to RAPM, so one could use that value for the correction, giving .48/.80=.60 before restriction of range correction. This is only .01 off the value with the ICAR, a broader test.

          There are older studies looking at other scholastic achievement tests (like SAT/ACT/GRE) and the good studies found r’s of .80 or so.

          So yes, SAT/ACT/GRE are useful proxies for general cognitive ability, but not perfect.
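The corrections described above are simple ratio adjustments (disattenuation). A minimal sketch, using the .48 observed correlation and the ~.80 construct-validity figure cited above; the function name is just for illustration:

```python
# Disattenuation: an observed correlation between two imperfect measures
# understates the correlation between the underlying constructs.
# Dividing by the measure-construct correlation recovers an estimate
# of the construct-level correlation.
def disattenuate(r_observed, r_measure_construct):
    return r_observed / r_measure_construct

# RAPM-derived IQ vs. SAT: observed r = .48; RAPM-style g correlates
# about .80 with the g of broader batteries, so the corrected estimate:
r_corrected = disattenuate(0.48, 0.80)
print(round(r_corrected, 2))  # 0.6
```

This is the .60 figure quoted above, still before any restriction-of-range correction.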

    • “Also, the audience for this blog is very critical of the social sciences generally, but psychometrics seems to get a giant pass.”

      The soft sciences become so much more acceptable when they are able to say flattering things about nerds.

  34. Selerax says:

    Not surprisingly, Wicherts himself has a different view of the discrepancy between his data and Lynn’s.

    Spoiler: Lynn & Rushton also throw away a lot of data.

    https://www.researchgate.net/profile/Jelte_Wicherts/publication/222668167_A_systematic_literature_review_of_the_average_IQ_of_sub-Saharan_Africans/links/00463537c7d0bb0da1000000.pdf

    FWIW, All of Wicherts’ publications are available on his Google Scholar profile. Worth a look.

    https://scholar.google.com/citations?user=V4uGPd8AAAAJ

  35. onyomi says:

    This may be obvious, and forgive me if it’s been done to death, but the Flynn Effect reminds me a lot of the drastic gains in life expectancy we see compared to just 100 years ago: at first glance, it may seem like the modal living person is now likely to enjoy 20 more years of life than a person of the same age living 100 years ago. But then you realize that it’s actually just that the mean had previously been pulled way down by the extremely high infant mortality. 100 years ago, if you made it past the age of 5 or 10, your chances of living to 80 or 90 were probably not a lot worse than today.

    Similarly, 100 years ago, could it not simply have been a larger number of people with severe impairments (like, say, IQ 60) due to malnutrition, severe childhood illness, etc. pulling down the average, rather than the modal person having IQ 80? I have a very hard time believing the average, reasonably healthy British adult of 60 years ago had IQ 80 by our standards. I have a much easier time believing that the UK of 60 years ago had a larger number of people with more severe mental handicaps.

    Put another way, has the shape of the curve not shifted, or is the claim that whole curve has shifted without much changing its shape? And, if the latter, could we be mistaken about that?

    • Nornagest says:

      100 years ago, if you made it past the age of 5 or 10, your chances of living to 80 or 90 were probably not a lot worse than today.

      They were, actually. Infant mortality is largely responsible for the really dramatically low life expectancies of the past, like ~40 in 1900, but it’s not the whole story; life expectancy at 20 around the same time would have been something in the neighborhood of 40 more years, for a total of ~60.

      The shape of the curve has changed, though. It wouldn’t have been unheard of to make it to your seventies or even eighties in 1900 if you didn’t get carried off by infectious disease (or a bunch of other stuff — accidents were more common back then, too, and homicide probably was although our data is deficient), but that was a much bigger “if” then than it is now.

    • Douglas Knight says:

      Yes, the Flynn effect is more on the left-hand side of the curve. The shape has changed a lot, too. This paper shows the distributions from the Scottish population tests of 1932 and 1947. According to the norms of the time, the mean was 100, but the mode was 105.

  36. vV_Vv says:

    I keep hearing wildly different numbers for sub-Saharan African IQ, but 76 seems pretty plausible.

    IQ 76 is only 0.4 standard deviations above the conventional cutoff for mental retardation (IQ 70). Are societies where 34% of the population is retarded plausible?

    I don’t know, maybe I’m being led astray by a “typical society fallacy”, since I’ve never visited sub-Saharan Africa and the only sub-Saharan Africans I personally know are immigrants to Europe, likely self-selected for intelligence. I know that sub-Saharan African countries notoriously suck, but how could they avoid complete collapse with so many cognitively disabled people?

    If sub-Saharan African average IQ is so low, shouldn’t all anti-poverty interventions to sub-Saharan Africa be focused on improving it?
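The 34% figure implied above can be checked directly. Assuming a Gaussian IQ distribution with SD 15 and a retardation cutoff of IQ 70 (standard conventions, supplied here as assumptions), a population mean of 76 puts the cutoff 0.4 SD below the mean, and the share falling under it follows from the normal CDF:

```python
import math

def share_below(cutoff, mean, sd=15.0):
    """Fraction of a Gaussian IQ distribution below `cutoff`,
    using the standard normal CDF written via the error function."""
    z = (cutoff - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Mean IQ 76, cutoff 70: z = -0.4, so roughly a third falls below.
print(round(share_below(70, 76), 3))  # 0.345
```

Of course, if the actual distribution is non-Gaussian (as a later comment suggests), the mean alone doesn’t pin down this share.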

    • Deiseach says:

      There are two explanations:

      (1) The way IQ has been measured in sub-Saharan and other populations sucks and is so bad we should scrap the results until we come up with something better

      (2) A functioning human society really doesn’t need that much IQ or whatever value is measured in IQ tests

      While I’m mostly inclined to (1), the older I get, the more I wonder about (2); sure, you need high IQ for the kind of advanced culture we in the West live in, but for some kind of working society where you can manage to have agriculture, domestication of animals and a basic lifestyle, maybe you really don’t need to be that ‘smart’. If you can learn how to weave baskets, make clay pots, which berries will and won’t poison you, and how to milk a goat, you might be able to function perfectly well.

      • Psmith says:

        It was pointed out to me recently that people from traditional societies can do hilariously poorly on IQ-test-type items while nevertheless being able to function pretty well.

        For instance, consider this exchange from Luria:
        IQ Test Examiner: All bears are white where there is always snow; in Novaya Zemlya there is always snow; what color are the bears there?

        Soviet Union Peasant: I have seen only black bears and I do not talk of what I have not seen.

        IQ Test Examiner: But what do my words imply?

        Soviet Union Peasant: If a person has not been there he can not say anything on the basis of words. If a man was 60 or 80 and had seen a white bear there and told me about it, he could be believed.

        • houseboatonstyx says:

          @ PSMITH
          IQ Test Examiner: But what do my words imply?

          Soviet Union Peasant: If a person has not been there he can not say anything on the basis of words. If a man was 60 or 80 and had seen a white bear there and told me about it, he could be believed.

          That’s not stupidity, that’s wisdom, and courtesy.

          • HeelBearCub says:

            @houseboatonstyx:
            This answer strikes me as “I reject the premise of the question”.

            Or, as Mona Lisa Vito says, “It’s a bullshit question!”

      • vV_Vv says:

        Maybe I’m overestimating the value of intelligence to a functioning human society, or I’m overestimating the degree of functioning of sub-Saharan African societies, or both.

        Or, as other people commented, the IQ distribution of these societies is non-Gaussian, with a median substantially higher than the mean. That would imply that very low-IQ people are still a burden, but there aren’t enough of them to make the society “retarded” as a whole.

        • “That would imply that very low-IQ people are still a burden”

          Not necessarily. There might be particular niches in the society where low-IQ people functioned just fine.

          Horses and oxen have much lower IQ’s than humans, but that doesn’t mean that they are a burden.

          • vV_Vv says:

            Until they were largely replaced by technology.

            Anyway, domesticated horses and oxen are generally not a burden to society because they are intentionally bred by humans in the amount needed by the economy (as far as markets can optimize), and when they become unproductive they are killed and even eaten.

            Domesticated horses and oxen are also the result of a long process of selective breeding that made them economically useful. Note that chimps are smarter than them but we don’t use them for economic purposes.

            Low IQ people just occur in some rate determined by genetic and environmental factors, and it’s generally considered immoral to kill them if they are unproductive. In fact, they have relatives who will generally try to spend resources to keep them alive (in developed countries, the government will do it as well). Therefore it’s unlikely that most of them will find some economic niche that allows them to be net producers.

      • houseboatonstyx says:

        @ Deiseach
        “If you can learn how to weave baskets, make clay pots, which berries will and won’t poison you, and how to milk a goat, you might be able to function perfectly well.”

        Milking a goat is easy, I’ll admit. But the rest of those things — especially if you’re pretty much figuring it out as you go, with minimal instruction — take a lot of intelligence. Especially which berries, at what stage of ripeness, pre-treated how, cooked how, stored how — for which our book-learning is very helpful, so imagine doing it without books. Given symbols, an IQ test using those symbols could work.

        With pots and baskets, the intelligence is between the senses and the fingers, so testing it would be too messy and expensive, and so it doesn’t get counted as intelligence.

      • keranih says:

        It doesn’t take a “within one standard dev of normal” intelligence to make a person loveable by another person. We form social bonds with inanimate objects all the time. The level of intelligence needed to be a functioning and valued part of a community is lower than we might expect. (And it’s part of what makes me crazy about IQ discussions – we’re talking about something akin to strength and eyesight and attention span. It’s not human worth that we’re measuring, here.)

        Something to keep in mind – Africa is the motherland of the human race. All of the parasites and pathogens there are very well adapted to us. In terms of megafauna, carrying capacity of agricultural land, and the aforementioned parasites and pathogens, humans in Africa are subject to more threats from things of subhuman intelligence than in the rest of the world. Outside of Africa, the primary challenges were far more likely to be humans. On balance, people outside of Africa had less need to develop adaptive resistance to the environment, while people in Africa were (comparatively) more likely to die from the environment than from other humans.

        Genetic diversity in sub-Saharan humans is far broader than in the rest of the population put together – dozens and dozens of tribal/clan groups that had adapted to local threats, compared to far less diversity outside Africa.

        So long as selection was hitting as hard on parasite resistance and famine adaptation as it was on the ability to outpredict the other humans in the region, it’s understandable that shifts to higher average intelligence would take a hit in Africa, compared to the softer, easier conditions in other parts of the world.

  37. Alex says:

    With IQ alone, I think we are missing half the picture.

  38. HeelBearCub says:

    I want to highlight this 2006 Psychological Science paper by Dickens and Flynn. Professor Frink linked it above.

    http://www.brookings.edu/views/papers/dickens/20060619_IQ.pdf

    We analyze data from nine standardization samples for four major tests of cognitive ability. These suggest that blacks have gained 5 or 6 IQ points on non-Hispanic whites between 1972 and 2002.

    Here is a response that they wrote to criticism of the paper by Rushton and Jensen.

    http://www.brookings.edu/views/papers/dickens/20060619_response.pdf

    Rushton and Jensen concede that the magnitude of the black/white IQ gap is not immutable but could have narrowed by as much as 3.44 IQ points or 0.23 white SDs. They concur that the black/white IQ gap rises with age.

    I don’t see Dickens mentioned anywhere here, which would seem to indicate that people aren’t dealing with this contention, or that there is further work building on this which is more current.

  39. I wonder whether IQ tests actually test what’s needed for great achievement. They don’t cover the ability to choose a worthwhile project, generate and choose good methods, and the ability to stay with the project for a long time. They also don’t test the ability to notice anomalies that no one else is noticing.

    I’d be surprised if someone with great achievements did badly on IQ tests, but the Flynn effect may be about being good at what a lot of people do.

    I agree that there are random effects in what goes into the canon, but I don’t think it’s a coincidence that people keep making new versions of Shakespeare.

    As for the low IQs of black people, it doesn’t seem likely to me that they’re measuring general mental competence. I could be missing a lot, but I know some black people (mostly from sf fandom), read some black sf authors, and have random interactions with a fair number of black people in Philadelphia. None of these groups seem noticeably less intelligent than white people in those groups.

    • vV_Vv says:

      I could be missing a lot, but I know some black people (mostly from sf fandom), read some black sf authors, and have random interactions with a fair number of black people in Philadelphia. None of these groups seem noticeably less intelligent than white people in those groups.

      The sf fandom probably selects for intelligence above/within a certain range, as do most jobs.

      You may have probably noticed that in any “nerd” space blacks are underrepresented compared to whites and Asians, and Ashkenazi Jews are overrepresented. It’s probably not only a matter of IQ, but IQ certainly plays a role.

    • Nicholas says:

      Shakespeare’s works, in turn, are often remakes of even older popular stories that his English audience would have been as familiar with as we are with Shakespeare.

  40. Anon says:

    Has anyone addressed the issue of malaria in Africa? P. falciparum has been shown to, in the case of the cerebral malaria infections that are common in children, produce a number of neurological sequelae that could plausibly show up as cognitive deficits on IQ tests. Living adults may have managed to acquire malaria resistance, but that comes from actually suffering through the disease.

    IQ isn’t usually conceptualized as a public health predictor beyond nutrition, but maybe we’re not thinking about it broadly enough.

  41. Wackademic says:

    Serious question: psychology, as a whole, has been demonstrated to be absolutely dominated by un-reproducible results. Given that fact, how many of these findings (about correlations between IQ and management performance, or whatever) should we actually believe?

    http://www.nytimes.com/2015/08/28/science/many-social-science-findings-not-as-strong-as-claimed-study-says.html?_r=0

    • Deiseach says:

      My own personal uneducated opinion:

      There is something we might as well call IQ, though we don’t know what it is and our current definitions aren’t as helpful as we like to think. I note a lot of assumptions that “mathematical ability/giftedness” = “this is what IQ is and this is how we should measure IQ” and I’m not so sure that’s correct.

      It may or may not be associated with being smarter as defined by “how well did you succeed academically?” Note that many of the mathematically gifted were crap at other subjects; Einstein’s bad school reports are the legendary example here, but I saw something posted that was said to be verse by Richard Feynman, and let me tell you, people, that was not poetry by any measure, unless you think Rod McKuen is poetry 🙂

      Higher IQ is useful and does generate real results. Higher IQ people are generally more creative and find results quicker and better.

      Being high IQ does have some correlation to success; you’re more likely to get non-manual labour so you have a better chance of getting something that pays better than digging ditches and waiting tables (skilled trades and things that hover on the edges – e.g. work that would have been considered artisan but was uplifted into ‘the arts’ – are a different matter. Someone who works in design from the art, not the engineering, side is just as necessary for producing goods that the public will buy but we’re not necessarily measuring mathematical intelligence in their talents and thus ability to get a good job there).

      Countries where the population has an overall higher IQ do better, for various reasons not related to IQ alone, than countries with a population of lower national IQ.

      Apart from that – how do we increase IQ, how do we get smarter babies, can we make adults smarter once they’re past a certain age, how do we translate IQ into achievement (and most governments, like my own, measure that by “smart people going into commercial research and development which results in the discovery or invention of products or items that can be turned into industries providing manufacturing jobs which bring down unemployment rates”, which is why they push for more students doing STEM subjects at school so they can get jobs as computer programmers or in the pharmaceutical industry) – who the hell knows?

      • HeelBearCub says:

        @Deiseach:
        “There is something we might as well call IQ”

        I think that what you are pointing at is g, which is the part of IQ that is solely “mental talent” and isn’t affected by learning. IQ is a measure that is supposed to let us see g obliquely.

    • jonathan says:

      If you will accept appeal to an authority who expresses himself in 140 characters:
      https://twitter.com/sapinker/status/645301814955388930

    • namae nanka says:

      IQ and the general factor of intelligence(g) are the best constructs to have come out of psychology.

      “The positive manifold of correlations among scores on diverse cognitive tests was discovered, and named as g, by Charles Spearman (Spearman, 1904). Spearman’s g is the most well-documented construct in the human behavioral sciences. The reliability of g is greater than the reliability of height and weight measured in a doctor’s office (Jensen, 1998, p50), its predictive power leaves rival psychometric constructs in the dust yet, despite a century of research, certain properties of g are still unresolved.”

      http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3283908/

  42. Decius says:

    How did the inspection time test/IQ correlation account for the big confounders? For example, I would be very surprised if people who were tired did not do significantly worse on both the inspection time test and any valid IQ test worth using.

    • JK says:

      The association between IQ and inspection time is largely genetically mediated. Unless being tired on a test is highly heritable, the correlation is not due to tiredness.

      • Decius says:

        Yes, sleep habits are heritable. Also, it looks like the study said that it was the correlation that was probably genetic; that is, that identical twins’ IT and IQ scores were more likely to share a degree of correlation than fraternal twins’. It’s possible that I just lack reading comprehension (not sarcastic):
        “Bivariate model fitting indicated that the covariance between IT and FIQ was due to a common genetic factor…”

        • JK says:

          Not sure what the disagreement is here. When the phenotypic correlation between two variables is due to a genetic factor common to both, it’s said that the correlation is genetically mediated.

  43. Anthony says:

    So far, two studies in the United Kingdom find that higher [IQ] individuals are more likely to vote, regardless of other things known about the person such as his social class, his education, and some personality traits…however, a study in the United States drawing on three different surveys finds no substantial evidence that IQ predicts voting behavior.

    Does this result hold when controlling for race in the U.S.?

  44. cassander says:

    >This seems almost unbelievable, and absent knowledge of the primary source I can’t be 100% sure, but I guess it fits together with what you always hear about the soldiers celebrating Christmas together and so on.

    I can’t talk about the particular instance, but there is a long and very well documented history of such battlefield truces, both short ones for doing things like collecting wounded, and much longer term agreements do do things like not fight at certain times of the day. People are VERY touchy about them, reacting with great anger when they are violated, and working with extreme dispatch to excuse accidental or unintended violations of them. It’s a very interesting subject.