Slate Star Codex

In a mad world, all blogging is psychiatry blogging

OT13: Thread, The Blood Of Angry Men

Blog, the dark of ages past!

This is the semimonthly open thread. Post about anything you want, ask random questions, whatever. Also:

1. I will be in the Bay Area from about 2/21 to maybe 3/7. I’ll see all of you who plan to be at Miranda and Ruby’s wedding there; otherwise I hope to get a chance to see some other people as schedules allow. If there’s interest in an SSC meetup, I could tentatively try scheduling such for the afternoon of Sunday 3/1 somewhere in Berkeley. If there’s interest I’ll give a firmer date later on.

2. Comment of the week is the whole discussion of gender equality in Soviet Russia and Eastern Europe. But I also need to praise everyone who continued the coffee shop gag in the comments. Among my favorites were Hayek, Heidegger, various economists, gwern, Thomas Schelling, various Chinese legalists, G.K. Chesterton, Nick Bostrom (1, 2), Enrico Fermi, various Islamic philosophers, Terry Tao, Nick Land, Alicorn, and various biologists.

3. Some people seem to have gotten genuinely upset about some of the recent discussion of IQ, on grounds something like that if high IQ is a necessary ingredient of some forms of academic success and they’re lower-IQ than other people, then they are bad and worthless. I strongly disagree with this and think it gets the reasoning exactly wrong, and I hope to explain why. But work has been pretty crazy lately (no pun intended) and I might not get the chance to write it up for a little while. Until then, please do me a favor and just take it on faith that you are a valuable human being who is worthy of existence.

4. Many of you probably know Multiheaded. My statistics say she is the most frequent commenter on this blog (pushing me down to second place) and we all acknowledge her heartfelt Communist comments as, um, things that exist. What you may not know about her is that she is a trans woman who lives in Russia, which is not known as a very safe place for trans women. She’s planning to escape to Canada and claim refugee status. Most of the steps of the plan are in place, and we have a few people in the Canadian rationalist community willing to host her for a while, but she is asking for some money to help with travel and living expenses. She’s set up a GoFundMe account with a target of $2500. If there’s any doubt about the story, I can confirm that Ozy and I have known her for a long time and she’s kept her biography consistent longer than I would expect anyone to fake; also, her IP address does trace to Russia. Multi intends to pay as much as possible forward eventually with donations to effective charities. I intend to donate, and I hope some of you do as well.

Remember, no race and gender in the open thread, EXCEPT that I will permit, this time only, discussion of Hyde & Mertz (2009) because it’s interesting and I want to know what other people here think about it. Everything else can go over to Ozy’s place.

Posted in Uncategorized | Tagged | 119 Comments

A Philosopher Walks Into A Coffee Shop

I have been really enjoying a site that publishes complicated jokes about what famous authors and fictional characters order at Starbucks. I like it so much I wish I knew more great literature, so I could get more of the jokes.

Since the creators seem to be restricting themselves to the literary world, I hope they won’t mind if I fail to resist the temptation to steal their technique for my own field of interest. Disclaimer: two of these are widely-known philosophy jokes and not original to me.

* * *

Parmenides goes up to the counter. “Same as always?” asks the barista. Parmenides nods.

* * *

Pythagoras goes up to the counter and orders a caffe Americano. “Mmmmm,” he says, tasting it. “How do you guys make such good coffee?” “It’s made from the freshest beans,” the barista answers. Pythagoras screams and runs out of the store.

* * *

Thales goes up to the counter, says he’s trying to break his caffeine habit, and orders a decaf. The barista hands it to him. He takes a sip and spits it out. “Yuck!” he says. “What is this, water?”

* * *

Gottfried Leibniz goes up to the counter and orders a muffin. The barista says he’s lucky since there is only one muffin left. Isaac Newton shoves his way up to the counter, saying Leibniz cut in line and he was first. Leibniz insists that he was first. The two of them come to blows.

* * *

Georg Wilhelm Friedrich Hegel goes up to the counter and gives a tremendously long custom order in German, specifying exactly how much of each sort of syrup he wants, various espresso shots, cream in exactly the right pattern, and a bunch of toppings, all added in a specific order at a specific temperature. The barista can’t follow him, so just gives up and hands him a small plain coffee. He walks away. The people behind him in line are very impressed with his apparent expertise, and they all order the same thing Hegel got. The barista gives each of them a small plain coffee, and they all remark on how delicious it tastes and what a remarkable coffee connoisseur Hegel is. “The Hegel” becomes a new Starbucks special and is wildly popular for the next seventy years.

* * *

Socrates goes up to the counter. “What would you like?” asks the barista. “What would you recommend?” asks Socrates. “I would go with the pumpkin spice latte,” says the barista. “Why?” asks Socrates. “It’s seasonal,” she answers. “But why exactly is a seasonal drink better than a non-seasonal drink?” “Well,” says the barista, “I guess it helps to connect you to the rhythm of the changing seasons.” “But do you do other things to connect yourself to that rhythm?” asks Socrates. “Like wear seasonal clothing? Or read seasonal books? If not, how come it’s only drinks that are seasonal?” “I’m not sure,” says the barista. “Think about it,” says Socrates, and leaves without getting anything.

* * *

Rene Descartes goes up to the counter. “I’ll have a scone,” he says. “Would you like juice with that?” asks the barista. “I think not,” says Descartes, and he ceases to exist.

* * *

Jean-Paul Sartre goes up to the counter. “What do you want?” asks the barista. Sartre thinks for a long while. “What do? I want?” he asks, and wanders off with a dazed look on his face.

* * *

William of Occam goes up to the counter. He orders a coffee.

* * *

Adam Smith goes up to the counter. “I’ll have a muffin,” he says. “Sorry,” says the barista, “but those two are fighting over the last muffin.” She points to Leibniz and Newton, who are still beating each other up. “I’ll pay $2 more than the sticker price, and you can keep the extra,” says Smith. The barista hands him the muffin.

* * *

John Buridan goes up to the counter and stares at the menu indecisively.

* * *

Ludwig Wittgenstein goes up to the counter. “I’ll have a small toffee mocha,” he says. “We don’t have small,” says the barista. “Then what sizes do you have?” “Just tall, grande, and venti.” “Then doesn’t that make ‘tall’ a ‘small’?” “We call it tall,” says the barista. Wittgenstein pounds his fist on the counter. “Tall has no meaning separate from the way it is used! You are just playing meaningless language games!” He storms out in a huff.

* * *

St. Anselm goes up to the counter and considers the greatest coffee of which it is possible to conceive. Since existence is more perfect than nonexistence, the coffee must exist. He brings it back to his table and drinks it.

* * *

Ayn Rand goes up to the counter. “What do you want?” asks the barista. “Exactly the relevant question. As a rational human being, it is my desires that are paramount. Since as a reasoning animal I have the power to choose, and since I am not bound by any demand to subordinate my desires to that of an outside party who wishes to use force or guilt to make me sacrifice my values to their values or to the values of some purely hypothetical collective, it is what I want that is imperative in this transaction. However, since I am dealing with you, and you are also a rational human being, under capitalism we have an opportunity to mutually satisfy our values in a way that leaves both of us richer and more fully human. You participate in the project of affirming my values by providing me with the coffee I want, and by paying you I am not only incentivizing you for the transaction, but giving you a chance to excel as a human being in the field of producing coffee. You do not produce the coffee because I am demanding it, or because I will use force against you if you do not, but because it most thoroughly represents your own values, particularly the value of creation. You would not make this coffee for me if it did not serve you in some way, and therefore by satisfying my desires you also reaffirm yourself. Insofar as you make inferior coffee, I will reject it and you will go bankrupt, but insofar as your coffee is truly excellent, a reflection of the excellence in your own soul and your achievement as a rationalist being, it will attract more people to your store, you will gain wealth, and you will be able to use that wealth further in pursuit of excellence as you, rather than some bureaucracy or collective, understand it. That is what it truly means to be a superior human.” “Okay, but what do you want?” asks the barista. “Really I just wanted to give that speech,” Rand says, and leaves.

* * *

Voltaire goes up to the counter and orders an espresso. He takes it and goes to his seat. The barista politely reminds him he has not yet paid. Voltaire stays seated, saying “I believe in freedom of espresso.”

* * *

Thomas Malthus goes up to the counter and orders a muffin. The barista tells him somebody just took the last one. Malthus grumbles that the Starbucks is getting too crowded and there’s never enough food for everybody.

* * *

Immanuel Kant goes up to the counter at exactly 8:14 AM. The barista has just finished making his iced cinnamon dolce latte, and hands it to him. He sips it for eight minutes and thirty seconds, then walks out the door.

* * *

Bertrand Russell goes up to the counter and orders the Hegel. He takes one sip, then exclaims “This just tastes like plain coffee! Why is everyone making such a big deal over it?”

* * *

Pierre Proudhon goes up to the counter and orders a Tazo Green Tea with toffee nut syrup, two espresso shots, and pumpkin spice mixed in. The barista warns him that this will taste terrible. “Pfah!” scoffs Proudhon. “Proper tea is theft!”

* * *

Sigmund Freud goes up to the counter. “I’ll have ass sex, presto,” he says. “What?!” asks the barista. “I said I’ll have iced espresso.” “Oh,” said the barista. “For a moment I misheard you.” “Yeah,” Freud tells her. “I fucked my mother. People say that.” “WHAT?!” asks the barista. “I said, all of the time other people say that.”

* * *

Jeremy Bentham goes up to the counter, holding a $50 bill. “What’s the cheapest drink you have?” he asks. “That would be our decaf roast, for only $1.99,” says the barista. “Good,” says Bentham and hands her the $50. “I’ll buy those for the next twenty-five people who show up.”

* * *

Patricia Churchland walks up to the counter and orders a latte. She sits down at a table and sips it. “Are you enjoying your beverage?” the barista asks. “No,” says Churchland.

* * *

Friedrich Nietzsche goes up to the counter. “I’ll have a scone,” he says. “Would you like juice with that?” asks the barista. “No, I hate juice,” says Nietzsche. The barista misinterprets him as saying “I hate Jews”, so she kills all the Jews in Europe.

Posted in Uncategorized | Tagged , | 326 Comments

Perceptions Of Required Ability Act As A Proxy For Actual Required Ability In Explaining The Gender Gap


I briefly snarked about Leslie et al (2015) last week, but I should probably snark at it more rigorously and at greater length.

This is the paper that concludes that “women are underrepresented in fields whose practitioners believe that raw, innate talent is the main requirement for success because women are stereotyped as not possessing that talent.” They find that some survey questions intended to capture whether people believe a field requires innate talent correlate with percent women in that field at a fairly impressive level of r = -0.60.

The media, science blogosphere, et cetera have taken this result and run with it. A very small sample includes: National Science Foundation: Belief In Raw Brilliance May Decrease Diversity. Science Mag: the “misguided” belief that certain scientific fields require brilliance helps explain the underrepresentation of women in those fields. Reuters: Fields That Cherish Genius Shun Women. LearnU: Study Findings Point To Source Of Gender Gap In STEM. Scientific American: Hidden Hurdle Looms For Women In Science. Chronicle Of Higher Education: Disciplines That Expect Brilliance Tend To Punish Women. News Works: Academic Gender Gaps Tied To Stereotypes About Genius. Mathbabe: “The genius myth” keeps women out of science. Vocativ: Women Avoid Fields Full Of Self-Appointed Geniuses. And so on in that vein.

Okay. Imagine a study with the following methodology. You survey a bunch of people to get their perceptions of who is a smoker (“97% of his close friends agree Bob smokes”). Then you correlate those numbers with who gets lung cancer. Your statistics program lights up like a Christmas tree with a bunch of super-strong correlations. You conclude “Perception of being a smoker causes lung cancer”, and make up a theory about how negative stereotypes of smokers cause stress which depresses the immune system. The media reports that as “Smoking Doesn’t Cause Cancer, Stereotypes Do”.

This is the basic principle behind Leslie et al (2015).

The obvious counterargument is that people’s perceptions may be accurate, so your perception measure might be a proxy for a real thing. In the smoking study, we expect that people’s perception of smoking only correlates with lung cancer because it correlates with actual smoking which itself correlates with lung cancer. You would expect to find that perceived smoking correlates with lung cancer less than actual smoking, because the perceived smoking correlation is just the actual smoking correlation plus some noise resulting from misperceptions.
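To make that attenuation point concrete, here is a toy simulation (purely illustrative; the variable names, noise levels, and seed are invented for the sketch, not taken from any study). Perception is modeled as the actual variable plus misperception noise, and the outcome depends only on the actual variable:

```python
import math
import random

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n = 20_000
actual = [random.gauss(0, 1) for _ in range(n)]           # actual smoking
perceived = [a + random.gauss(0, 0.5) for a in actual]    # perception = actual + misperception noise
cancer = [a + random.gauss(0, 1) for a in actual]         # outcome driven only by actual smoking

r_actual = pearson(actual, cancer)        # about 0.71 in this setup
r_perceived = pearson(perceived, cancer)  # attenuated, about 0.63
```

In this sketch, perception correlates with the outcome at roughly 0.63 versus roughly 0.71 for the real variable, and the gap grows with the amount of misperception noise.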

So I expected the paper to investigate whether or not perceived required ability correlated more, the same as, or less than actual required ability. Instead, they simply write:

Are women and African-Americans less likely to have the natural brilliance that some fields believe is required for top-level success? Although some have argued that this is so, our assessment of the literature is that the case has not been made that either group is less likely to possess innate intellectual talent.[1]

So we will have to do this ourselves. The researchers helpfully include in their supplement a list of the fields they studied and GRE scores for each, as part of some sub-analysis to check for selectivity. GRE scores correlate closely with IQ and with a bunch of measures of success in graduate school, so this sounds like it would be a good test of the actual required ability hypothesis. Let’s use this to figure out whether actual innate ability explains the discrepancies better or worse than perceived innate ability does.

When I use these data I find no effect of GRE scores on female representation.

But these data are surprising – for example, Computer Science had by far the lowest GRE score (and hence projected IQ?) of any field, which matches neither other sources nor my intuition. I looked more closely and found their measure combines Verbal, Quantitative, and Writing GREs. These are to some degree anti-correlated with each other across disciplines;[2] i.e., those disciplines whose students have higher Quantitative scores tend to have lower Writing scores (not surprising; consider a Physics department versus an English department).

Since the study’s measure includes two tests of verbal intelligence and only one test of mathematical intelligence, it makes more mathematical departments appear to have lower scores and lower innate ability. Certainly a measure set up such that computer scientists come out as the least intelligent people in the academy isn’t going to find innate ability related to STEM!

Since the gender gap tends to favor men in more mathematical subjects, if we’re checking for a basis in innate ability we should probably disentangle these tests and focus on the GRE Quantitative. I took GRE Quantitative numbers by department from the 2014 edition of the ETS report. The results looked like this:

There is a correlation of r = -0.82 (p = 0.0003) between average GRE Quantitative score and percent women in a discipline. This is among the strongest correlations I have ever seen in social science data. It is much larger than Leslie et al’s correlation with perceived innate ability.[3]

Despite its surprising size, this is not a fluke; it’s very similar to what other people have found when attempting the same project. Templer and Tomeo (2002) tried the same thing and found r = 0.76, p < 0.001. Randal Olson tried a very similar project on his blog a while back and got r = 0.86. My finding is right in the middle.

A friendly statistician went beyond my pay grade and did a sequential ANOVA on these results[4] and Leslie et al’s perceived-innate-ability results. They found that they could reject the hypothesis that the effect of actual innate ability was entirely mediated by perceived innate ability (p = 0.002), but could not reject the hypothesis that the effect of perceived-innate-ability was entirely mediated by actual-innate ability (p = 0.36).

In other words, we find no evidence for a continuing effect of people’s perceptions of innate ability after we adjust for what those perceptions say about actual innate ability, in much the same way we would expect to see no evidence for a continuing effect of people’s perceptions of smoking on lung cancer after we adjust for what those perceptions say about actual smoking.
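The logic of that mediation test can be mimicked with partial correlations on the same kind of toy data (a hypothetical sketch with made-up noise levels, not the statistician’s actual sequential ANOVA):

```python
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y after controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

n = 20_000
actual = [random.gauss(0, 1) for _ in range(n)]
perceived = [a + random.gauss(0, 0.5) for a in actual]  # noisy proxy for the actual variable
outcome = [a + random.gauss(0, 1) for a in actual]      # driven only by the actual variable

# Perception's link to the outcome vanishes once the actual variable is controlled for...
p_perceived_given_actual = partial_corr(perceived, outcome, actual)   # near 0
# ...but the actual variable stays predictive after controlling for perception.
p_actual_given_perceived = partial_corr(actual, outcome, perceived)   # clearly positive
```

This is the same asymmetry the mediation analysis found: the proxy adds nothing once the real variable is accounted for, while the real variable survives adjustment for the proxy.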


Correlation is not causation, but a potential causal mechanism can be sketched out.

I’m going to use terms like “ability” and “innate ability” and “genius” and “brilliance” because those are the terms Leslie et al use, but I should clarify. I’m using them the way Leslie et al seem to, as a contrast to hard work, the internal factors that give different people different payoffs per unit effort. So a genius is someone who can solve difficult problems with little effort; a dullard is one who can solve them only with great effort or not at all.

This use of “innate ability” is not the same thing as “genetically determined ability”. Genetically determined ability will be part of it, but there will also be many other factors. Environmental determinants of intelligence, like good nutrition and low lead levels. Exposure to intellectual stimulation during crucial developmental windows. The effect of stereotypes, insofar as those stereotypes globally decrease performance. Even previous training in a field might represent “innate ability” under this definition, although later we’ll try to close that loophole.

Academic programs presumably want people with high ability. The GRE bills itself as an ability test, and under our expanded definition of ability this is a reasonable claim. So let’s talk about what would happen if programs selected based solely on ability as measured by GREs.

This is, of course, not the whole story. Programs also use a lot of other things like grades, interviews, and publications. But these are all correlated with GRE scores, and anyway it’s nice to have a single number to work with. So for now let’s suppose colleges accept applicants based entirely on GRE scores and see what happens. The STEM subjects we’re looking at here are presumably most interested in GRE Quantitative, so once again we’ll focus on that.

Mathematics unsurprisingly has the highest required GRE Quantitative score. Suppose that the GRE score of the average Mathematics student – 162.0 – represents the average level that Mathematics departments are aiming for – ie you must be this smart to enter.

The average man gets 154.3 ± 8.6 on GRE Quantitative. The average woman gets 149.4 ± 8.1. So the threshold for Mathematics admission is 7.7 points ahead of the average male test-taker, or 0.9 male standard deviation units. This same threshold is 12.6 points ahead of the average female test-taker, or 1.55 female standard deviation units.

GRE scores are designed to follow a normal distribution, so we can plug all of this into our handy-dandy normal distribution calculator and find that 19% of men and 6% of women taking the GRE meet the score threshold to get into graduate level Mathematics. 191,394 men and 244,712 women took the GRE last year, so there will be about 36,400 men and 14,700 women who pass the score bar and qualify for graduate level mathematics. That means the pool of people who can do graduate Mathematics is 29% female. And when we look at the actual gender balance in graduate Mathematics, it’s also 29% female.
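The arithmetic above can be reproduced with nothing but the normal CDF, using the means, standard deviations, and test-taker counts quoted in the text (small differences from the 19%/6% figures are rounding):

```python
import math

def norm_sf(x, mu, sigma):
    """P(X > x) for X ~ Normal(mu, sigma), via the complementary error function."""
    z = (x - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

THRESHOLD = 162.0  # average GRE Quantitative score of Mathematics graduate students

men_frac = norm_sf(THRESHOLD, 154.3, 8.6)    # fraction of male test-takers above threshold, ~19%
women_frac = norm_sf(THRESHOLD, 149.4, 8.1)  # fraction of female test-takers above threshold, ~6%

men_n = 191_394 * men_frac      # qualifying men, ~36,000
women_n = 244_712 * women_frac  # qualifying women, ~15,000

female_share = women_n / (men_n + women_n)   # ~0.29, matching observed graduate Mathematics
```

So the pool of GRE takers clearing the Mathematics bar comes out about 29% female, the same as the observed gender balance in graduate Mathematics.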

Vast rivers of ink have been spilled upon the question of why so few women are in graduate Mathematics programs. Are interviewers misogynist? Are graduate students denied work-life balance? Do stereotypes cause professors to “punish” women who don’t live up to their sexist expectations? Is there a culture of sexual harassment among mathematicians?

But if you assume that Mathematics departments are selecting applicants based on the thing they double-dog swear they are selecting applicants based on, there is literally nothing left to be explained.[5]

I am sort of cheating here. The exact perfect prediction in Mathematics is a coincidence. And I can’t extend this methodology rigorously to any other subject because I would need a much more complicated model where people of a given score level are taken out of the pool as they choose the highest-score-requiring discipline, leaving fewer high-score people available for the low-score-requiring ones. Without this more complicated model, at best I can set a maximum expected gender imbalance, then eyeball whether the observed deviation from that maximum is more or less than expected. Doing such eyeballing, there are slightly fewer women in graduate Physics and Computer Science than expected and slightly more women in graduate Economics than expected.

But on the whole, the prediction is very good. That it is not perfect means there is still some room to talk about differences in stereotypes and work-life balance and so on creating moderate deviations from the predicted ratio in a few areas like computer science. But this is arguing over the scraps of variance left over, after differences in mathematical ability have devoured their share.


There are a couple of potentially very strong objections to this hypothesis. Let me see if I can answer them.

First, maybe this is a binary STEM vs. non-STEM thing. That is, STEM fields require more mathematical aptitude (obviously) and they sound like the sort to have more stereotypes about women. So is it possible that my supposedly large sample size is actually just showing an artifact of division into these two categories?

No. I divided the fields into STEM and non-STEM and ran an analysis within each subgroup. Within the non-STEM subgroup, there was a correlation between GRE Quantitative and percent female in a major of -0.64, p = 0.02. It is completely irresponsible to do this within the STEM subgroup, because it has n = 7, which is too small a sample size to get real results. But if we are bad people and do it anyway, we find a very similar correlation of -0.63. p is only 0.12, but with n = 7 what did you expect?

Both of these correlations are higher than Leslie et al were able to get from their entire sample.

Second, suppose that it’s something else driving gender-based patterns in academia. Maybe stereotypes or long hours or whatever. Presumably, these could operate perfectly well in undergrad. So stereotypes cause lots of men to go into undergraduate math and lots of women to go into undergraduate humanities. The men in math classes successfully learn math and the women in humanities classes successfully learn humanities. Then at the end of their time in college they all take the GRE, and unsurprisingly the men who have been taking all the math classes do better in math. In this case, the high predictive power of mathematical ability would be a result of stereotypes, not an alternative to them.

In order to investigate this possibility we could look at SAT Math instead of GRE Quantitative scores, since these would show pre-college ability. SAT scores show a gap much like that in GRE scores; in both, the percentile of the average woman is in the low 40s.

Here is a graph of SAT Math scores against percent women in undergraduate majors:

SAT Math had a correlation of -0.65, p = 0.016.

This correlation is still very strong. It is still stronger than Leslie et al’s correlation with perceived required ability. But it is slightly weaker than the extremely strong correlation we find with GRE scores. Why?

I can’t answer that for sure, but here is a theory. The “undergraduate major” data is grabbed from what SAT test-takers put down as their preferred undergraduate major when they take the test in (usually) 11th grade. The “percent female” data is grabbed from records of degrees awarded in each field. So these are not exactly the same people on each side. One side shows the people who thought they wanted to do Physics in 11th grade. The other side shows the people who ended up completing a Physics degree.

The people who intend to pursue Physics but don’t end up getting a degree will be those who dropped out for some reason. While there are many reasons to drop out, one very common reason is surely that the coursework was too hard. Therefore, the people who drop out will be disproportionately those with lower mathematical ability. Therefore, the average SAT Math score of 11th grade intended Physics majors will be lower than the average SAT Math score of Physics degree earners. So the analysis above likely underestimates the average SAT Math score of people in mathematical fields. This could certainly explain the lower correlation, and I predict that if we could replace our unrepresentative measure of SAT scores with a more representative one, much of the gap between this correlation and the previous one would close.

These data do not rule out simply pushing everything back a level and saying that these stereotypes affect what classes girls take in middle school and high school. Remember, we are using “ability” as a designation for a type of excellence, not an explanatory theory of it. This simply confirms that by eleventh grade, the gap has already formed.[7]

Third, perhaps SAT and GRE math tests are not reflective of women’s true mathematical ability. This is the argument from stereotype threat, frequently brought up as a reason why tests should not be used to judge aptitude.

But this is based on a fundamental misunderstanding of stereotype threat found in the popular media, which actual researchers in the field keep trying to correct (to no avail). See for example Sackett, Hardison, and Cullen (2004), who point out that no research has ever claimed stereotype threat accounts for gender gaps on mathematics tests. What the research found was that, by adding an extra stereotype threat condition, you could widen those gaps further. The existing gaps on tests like the SAT and GRE correspond to the “no stereotype threat” control condition in stereotype threat experiments, and “absent stereotype threat, the two groups differ to the degree that would be expected based on differences in prior SAT scores”. Aronson and Steele, who did the original stereotype threat research and invented the field, have confirmed that this is accurate and endorsed the warning.

Anyway, even if the pop sci version of stereotype threat were entirely true and explained everything, it still wouldn’t rescue claims of bias or sexism in the sciences. It would merely mean that the sciences’ reasonable and completely non-sexism-motivated policy of trusting test scores was ill-advised.[8]

Fourth, might there be reverse causation? That is, suppose that there are stereotypes and sexism restricting women’s entry into STEM fields, and unrelatedly men have higher test scores. Then the fields with the stereotypes would end up with the people with higher test scores, and it would look like they require more ability. Might that be all that’s happening here?

No. I used gender differences in the GRE scores to predict what scores we would expect each major to have if score differences came solely from differences in gender balance. This predicted less than a fifth of the variation. For example, the GRE Quantitative score difference between the average test-taker and the average Physics graduate student was 9 points, but if this were solely because of differential gender balance plus the male test advantage we would predict a difference of only 1.5 points. The effect on SAT scores is similarly underwhelming.
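A minimal sketch of that composition-only prediction, using the gender means and test-taker counts quoted earlier (the 25% female share below is an illustrative figure for a Physics-like field, not a number from the paper):

```python
# GRE Quantitative test-taker counts and means by gender, from the text above.
MEN_N, MEN_MEAN = 191_394, 154.3
WOMEN_N, WOMEN_MEAN = 244_712, 149.4

# Average score of all test-takers, weighted by how many of each gender took the test.
overall_mean = (MEN_N * MEN_MEAN + WOMEN_N * WOMEN_MEAN) / (MEN_N + WOMEN_N)

def composition_only_mean(female_share):
    """Field mean predicted purely from its gender mix plus the male/female score gap."""
    return (1 - female_share) * MEN_MEAN + female_share * WOMEN_MEAN

# Illustrative: a field whose graduate population is 25% female (hypothetical share).
predicted_gap = composition_only_mean(0.25) - overall_mean  # about 1.5 points
```

Composition alone predicts a gap of only about 1.5 points for a field with that mix, versus the 9 points actually observed for Physics, so reverse causation can account for only a small slice of the score differences between fields.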

But I think the most important thing I want to say about objections to Part II is that, whether they’re correct or not, Part I still stands. Even if the correlation between innate ability and gender balance turns out to be an artifact, Leslie et al’s correlation between perceived innate ability and gender balance is still an artifact of an artifact.


A reader of an early draft of this post pointed out the imposingly-named Nonlinear Psychometric Thresholds In Physics And Mathematics. This paper uses SAT Math scores and GPA to create a model in which innate ability and hard work combine to predict the probability that a student will be successful in a certain discipline. It finds that in disciplines “such as Sociology, History, English, and Biology” these are fungible – greater work ethic can compensate for lesser innate ability and vice versa. But in disciplines such as Physics and Mathematics, this doesn’t happen. People below a certain threshold mathematical ability will be very unlikely to succeed in undergraduate Physics and Mathematics coursework no matter how hard-working they are.

And that brought into relief part of why this study bothers me. It ignores the pre-existing literature on the importance of innate ability versus hard work. It ignores the rigorous mathematical techniques developed to separate innate ability from hard work. Not only that, but it ignores pre-existing literature on predicting gender balance in different fields, and the pre-existing literature on GRE results and what they mean and how to use them, and all the techniques developed by people in those areas.

Having committed itself to flying blind, it takes the thing we already know how to use to predict gender balance, shoves it aside in favor of a weird proxy for that thing, and finds a result mediated by that thing being a proxy for the thing they are inexplicably ignoring. Even though it just used a proxy for aptitude to predict gender balance, everyone congratulates it for having proven that aptitude does not affect gender balance.

Science journalism declares that the myth that ability matters has been vanquished forever. The media take the opportunity to remind us that scientists are sexist self-appointed geniuses who use stereotypes to punish women. And our view of an important issue becomes just a little muddier.

I encourage everyone to reanalyze this data and see if I’m missing something. You can find the GRE data I used here and the SAT data here (both in .xlsx format).


1. They cite for this claim, among other things, Stephen Jay Gould’s The Mismeasure Of Man

2. Beware the ecological fallacy; these scores are still positively correlated in individuals.

3. It was also probably more highly significant, but I can’t tell for sure because (ironically) their significance result wasn’t reported to enough significant digits.

4. There was a small error in the percent of women in Communications in the dataset I provided them with, so these numbers are off by a tiny fraction from what you will get if you try to replicate. I didn’t feel comfortable asking them to redo the entire thing, but the small error would not have changed the results significantly, and the tiny amount it would have changed them would have been in the direction of making the innate ability results more striking rather than less.

5. Although Leslie et al focused on women, they believe their results could also extend to why African-Americans are underrepresented compared to European-Americans and Asian-Americans in certain subjects. They theorize that European and Asian Americans, like men, are stereotyped as innately brilliant, but African-Americans, like women, lack this stereotype. I find this a bit off – after all, in the gender results, they contrasted the male “more innately brilliant” stereotype with the female “harder-working” stereotype, but African Americans suffer from a stereotype of not being hard-working, and Asian-Americans do have a stereotype of being hard-working, even more so than women. Anyway, this is only a mystery if you stick to Leslie et al’s theory of stereotypes about perceived innate ability. Once you look at GRE Quantitative scores, you find that whites average 150.8, Asians average 153.9, and blacks average 143.7, and there’s not much left to explain.

6. It’s hard to correlate SAT scores with majors, because the SAT data is full of tiny vocational majors that throw off the results. For example, there are two hundred people in the country studying some form of manufacturing called “precision production”, they’re almost all male, and they have very low SAT scores. On the other hand, there are a few thousand people studying something called “family science”, they’re almost all women, and they also all have very low SAT scores. The shape of gender*major*SAT scores depends almost entirely on how many of these you count. I circumvented the entire problem by just counting the fields that approximately corresponded to the ones Leslie et al counted in their graduate-level study. I tried a few different analyses using different ways of deciding which fields to count, and as long as they were vaguely motivated by a desire to include academic subjects and not the vocational subjects with very low scores, they all came out about the same.

7. The argument that stereotypes cause boys to take more middle school and high school math classes than girls is somewhat argued against by the finding that actually girls take more middle school and high school math classes than boys. However, there are some contrary results; for example, boys are more likely than girls to take the AP Calculus test. This entire area gets so tangled up in differing levels of interest and ability and work-ethic that it’s not worth it, at my level of interest and ability and work ethic, to try to work it out. The best I can say is that the gap appears by the time kids take the SAT in 11th grade.

8. I can’t help adding that I continue to believe that the stereotype threat literature looks like a null field which continues to exist only through publication bias and experimenter effects. The funnel plot shows a clear peak at “zero effect” and an asymmetry indicating a publication bias for positive results (for some discussion of why I like funnel plots, see here.) And a closer look at the individual research shows this really disturbing pattern of experiments by true believers finding positive effects, experiments by neutral parties and skeptics not finding them, replication attempts failing, and large real-world quasi-experiments turning up nothing – in a way very reminiscent of parapsychology. Although I am far from 100% sure, I would tentatively place my money on the entire idea of stereotype threat vanishing into the swamp of social psychology’s crisis of replication.
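If you want to see why the funnel plot is so damning, here is a quick simulation – my own sketch, with made-up numbers, not any actual stereotype threat data. Generate studies of a truly null effect, publish only the statistically significant positive ones, and the small studies end up reporting the biggest effects, which is exactly the asymmetry a funnel plot exposes:

```python
import random
random.seed(0)

def simulate_study(n, true_effect=0.0):
    # Each study estimates the effect with noise that shrinks as
    # sample size grows (standard error ~ 1/sqrt(n)).
    se = 1 / n ** 0.5
    estimate = random.gauss(true_effect, se)
    return estimate, se

# Publication bias: only "positive" results (estimate > 1.96*se) see print.
published = []
for _ in range(20000):
    n = random.randint(10, 400)
    est, se = simulate_study(n)   # the TRUE effect is zero
    if est > 1.96 * se:
        published.append((est, se))

# Even though the true effect is zero, the published literature shows a
# positive average effect -- and the small studies (large se), which need
# a bigger fluke to reach significance, show the largest effects of all.
small = [e for e, s in published if s > 0.15]
large = [e for e, s in published if s <= 0.15]
avg = lambda xs: sum(xs) / len(xs)
print(avg(small), avg(large))  # small-study estimates are inflated more
```

Plot estimate against standard error for the published studies and you get the lopsided funnel with its peak at zero effect – the signature described above.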

These Are A Few (More) Of My (Least) Favorite Things

One year ago, I wrote Ten Things I Want To Stop Seeing On The Internet In 2014.

And now it’s 2015, and I think things are getting better. Take doge. I swear to God that the last time I saw the word doge, it was referring to an honest-to-God Venetian noble. And the price of dogecoin is down an order of magnitude from its peak last February. The War on Doges is starting to seem winnable.

I can’t take any credit for this. It has been a concerted effort on the part of millions of people who saw doge memes on Facebook, let their fingers briefly drift towards the “share” button, and then pulled themselves back from the precipice, restrained by their better nature.

But in the hopes that this is the first success of many, I would like to share some things I want to stop seeing on the Internet in 2015:

1. Abuse Of Poe’s Law

Poe’s Law is the belief that some religious fundamentalists are so stupid that it’s impossible to distinguish them from a parody.

This is all nice and well in the abstract, but when applied to a particular case, where a particular atheist has fallen for a parody site, it tends to be an unfortunate stand-in for “Some atheists are so ignorant that it’s impossible for them to distinguish religious people from a parody of religious people.” Listen:

A: “The Pope just said that everyone who isn’t creationist should be put in jail! What an outrage!”

B: “Uh, you do know that’s on The Onion, right?”

A: “Oh, well, haha, Poe’s Law, just goes to show how dumb those religious people are.”

Problem is, Poe’s Law isn’t limited to religion any more. Now it’s politics, culture, science, and anywhere else where one side thinks their opponents are so stupid it’s literally impossible to parody them (ie everywhere on both sides). You spread the dumbest and most obviously fake rumors to smear your opponents. And then when you’re caught, instead of admitting you were fooled, you claim Poe’s Law and smear your opponents even more.

On the other hand, once you’re willing to admit this dynamic exists, it can make for some pretty interesting guessing games and unintentional Intellectual Turing Tests – see the Poe’s Law In Action subreddit for some examples.

2. People Getting Destroyed By Other People

Whenever I write a persuasive piece, I get to see my fans share it on Twitter like this:

I didn’t destroy anybody. I disagreed with them.

I’m glad to know I’m not the only one who has to deal with this. Newsweek writes about how Jon Stewart Is A Violent Sociopath Who Must Be Stopped in response to increasing claims that Stewart “destroys”, “demolishes”, “disembowels”, and “makes ground beef” out of whoever he’s arguing against on his show.

This bothers me the same way that “debunked” bothers me. Both sides are going to insist that their own research “debunks” the other, and so make it impossible to have a conversation based on the premise that there’s still room for disagreement. The flip side of my fans believing that I’ve destroyed whoever is that when that person writes a response, their fans are going to believe they’ve destroyed me.

At least no one can eviscerate me, since Jon Stewart has already eviscerated the entire blogosphere.

3. Demonstrating That People Are Stupid By Having Them Use The Word “Muh”

No straw man is ever concerned about immigrants stealing his job. He’s always concerned about immigrants stealing “muh jarb”, or possibly “muh jawb”, which sounds like some form of obscure Islamic garment.

This has lately taken a disturbing turn in the form of straw feminists worrying about “muh sojiny”. I strongly believe that every woman has a right to her sojiny and no man should be able to take it from her, but I still can’t help wishing that people would lay off the cheap shots for a while.

4. Wikipedia-Shaming

Did you know it is 2015 and people will still criticize you for getting facts off of Wikipedia?

I’m not even talking about controversial conclusions, like “on balance, the research about gun control shows…”. I’m talking about simple facts.

A: “China is bigger than the United States”

B: “Where’d you hear that one, Wikipedia?”

A: “…yes?”

B: “You expect me to believe something you literally just took off a Wikipedia article?”

Yes. Yes I do. I could go find the CIA World Factbook or whatever, but it will say the same thing as Wikipedia, because Wikipedia is pretty much always right. When you challenge Wikipedia on basic facts, all you do is force people to use inconvenient sources to back up the things Wikipedia says, costing people time for no reason and making them hate you. There may have been a time when Wikipedia was famously inaccurate. Or maybe there wasn’t. I don’t know. Wikipedia doesn’t have an article on it, so it would take time and energy to find out. The point is, now it’s 2015, and the matter has been settled.

How accurate is Wikipedia?:

Several studies have been done to assess the reliability of Wikipedia. An early study in the journal Nature said that in 2005, Wikipedia’s scientific articles came close to the level of accuracy in Encyclopædia Britannica and had a similar rate of “serious errors”. The study by Nature was disputed by Encyclopædia Britannica, and later Nature replied to this refutation with both a formal response and a point-by-point rebuttal of Britannica’s main objections. Between 2008 and 2012, articles in medical and scientific fields such as pathology, toxicology, oncology, pharmaceuticals, and psychiatry comparing Wikipedia to professional and peer-reviewed sources found that Wikipedia’s depth and coverage were of a high standard.

I know this because I got it from Wikipedia’s Reliability Of Wikipedia article. Go ahead, challenge me, I dare you.

5. Articles That Start Off With An Image Taking Up The Entire Screen

This is what I’m talking about. I click on the link expecting an article on gas pipeline deals, and there is exactly zero article on the first screenfull of page I come to. That’s fine, you know, the only reason I even clicked was to see a huge, high-resolution picture of Vladimir Putin’s head. Information is totally optional. Screw you. This is why if I want to learn about Russian-Chinese gas deals, I’ll just look them up on Wikipedia.

I feel the same way about those Web 2.0 sites where the landing page is just an image of a smiling group of people engaged in a nondescript activity, and then way up in the corner is a tiny button that says “Discover” (it’s always “discover”) which leads to actual information. Likewise this site, which probably made its designer feel very smug about their clean minimalist style, but where you can’t get a single word of information without watching a video.

6. Ads That Disappear Very Slowly

You get an ad. It appears at the bottom of the screen. You look at it, decide you’re not interested, click the little X. It disappears. But not right away. It crawls. It saunters. After what seems to be a long and arduous journey, during which it had to ford several rivers and stop off at Fort Laramie for supplies, it finally makes it to the bottom of the screen and fades away.

I try hard to understand other people’s perspectives. I know that companies need to have ads to make money. I know that they have an incentive to make those ads as disruptive and obnoxious as possible to make you look at them. I even understand why some ads have the little x kind of hidden, so you can’t find it without some poking around, which forces you to view the ad for a little while longer. I understand all those things.

But I don’t understand why the ad has to take so long to disappear. It’s obviously not just incompetence. They specifically have to add an extra little sliding-down animation to the ad to make it take so long. They put in more work to make it more annoying for no benefit. Do you really think that while I’m waiting for the ad to disappear, I’m thinking “You know, I thought I didn’t need to meet hot desperate singles in my area, which is why I clicked the X to make it go away, but that sliding-down-the-screen animation is so cool that I’m going to reload the page a couple of times, wait for the ad to come back, and then click it”?

7. Overuse Of Demonstratives In Clickbait

I understand that demonstratives (“this”, “that”, “yon”) are supposed to give you a bit of mystery, make you want to click on the article to see what’s happening. “This Celebrity Just Came Out As Gay” makes you wonder which one it is. “Rare Disease Spreads To These Three US States” makes you check if yours is one of them. Fine. I would personally prefer “Rare Disease Spreads To Three States” or even “Which Three States Did A Rare Disease Spread To? Click Here To Find Out”, but whatever.

However. On Vox recently, Obama Just Hit These North Koreans With Sanctions. What, exactly, are we supposed to get out of this? “Oh! I wonder if it’s Yu Kwang Ho! Surely they wouldn’t get Yu Kwang Ho! Better click to find out!”

8. Any Use Of The Word “Entitled”

Okay, I’ve already written on what I think of people calling nerds “entitled”. But it goes beyond that.

Number 8 in last year’s Least Favorite Things was “arguments about which generation is better”. Well, now those have progressed to arguments over which generation is most entitled. Hard Work? No Thanks! Meet Entitled-To-It-All Generation Y. Millennials are Selfish and Entitled and Helicopter Parents Are To Blame. But The Most Entitled Generation Isn’t Millennials, It’s Baby Boomers. And coming in from left field, maybe The Greatest Generation Was The Most Entitled. There are even entire books about this.

Men feel entitled to women. Women feel entitled to men. Blacks feel entitled. Whites feel entitled. The Entitlement Mentality of Liberals coexists with Entitled Conservative White Male Putzes, possibly because Conservatives Feel ‘Entitled’ To Scorn ‘Entitlement’ (whatever).

Anyone can call their out-group entitled. The easiest way is – well, poor people are entitled because they demand hand-outs without working for them. Rich people are entitled because they think they deserve 100% of what they have and refuse to acknowledge or change the inequalities in the system that benefit them. One side or the other of that dichotomy is likely to map onto whatever group you want to insult.

“Entitled” is a Fully General Insult that can apply to anyone, and it really hurts. That makes it irresistible to the wrong kind of people, and it’s why I hope I start seeing less of it. Alternately, people could start giving their enemies the Psychological Entitlement Scale, which is so hilariously obvious with what it’s doing that I find it astounding that it apparently still manages to successfully detect some entitled people. The Titanic? Really?

9. People Being Post-Things

I recently heard someone describe themselves as “post-Zionist”, then go on to give what sounded like pretty standard criticism of Zionism. I don’t want to get too heavily into this particular example, because I understand post-Zionism is complex and every time I write something about Israel I get Israeli commenters saying I’ve gotten it wrong and other Israeli commenters saying no they’ve gotten it wrong and still other Israeli commenters saying we’ve all got it wrong. What was that saying about “two Jews, three opinions” again?

But what bothers me about post-Zionism is that it seems to carry this kind of smug “Oh, you guys are still Zionist? Don’t you know Zionism is, like, totally five years ago? Nowadays all the cool people have moved on to more exciting things,” which I don’t think really adds to the argument. Zionism versus anti-Zionism suggests a picture of two sides with two different opinions – which seems to match the reality pretty well. Zionism versus post-Zionism suggests one side just hasn’t gotten the message yet.

I feel the same way about post-rationalism. Yes, maybe you’ve seen through rationalism in some profound way and transcended it. Or maybe you just don’t get it. This is exactly the point under debate, and naming yourselves “post-rationalists” seems like an attempt to short-circuit it, not to mention leaving everyone else confused. And maybe you could give yourself a name that actually reflected your beliefs (“Kind Of New-Age-y People Who Are Better At Math Than Usual For That Demographic And Will Angrily Deny Being New-Age-y If Asked Directly”?) and we wouldn’t have to have a new “but what is post-rationalism?!?!” conversation every month.

Post-modernism can stay, though. At this point it’s less of a name than a warning label.

10. Disputes Over Whether Humans Evolved From Monkeys

I don’t mean creationism. I mean disputes among people who accept evolution, over whether it was monkeys in particular that humans evolved from.

It tends to go something like this.

A: “Humans evolved from monkeys”.

B: “No they didn’t! They evolved from chimps! Chimps are an ape, not a monkey!”

C: “Humans didn’t evolve from chimps! They evolved from a most recent common ancestor whose descendants include both humans and chimps!”

Everything about this conversation is not-even-wrong.

First, humans clearly evolved from monkeys in the same sense humans evolved from single-celled organisms. No one’s saying it had to be the most recent step.

Second, apes are ambiguously a type of monkey. Think square versus rectangle. All squares are rectangles but not all rectangles are squares and “rectangle” is usually used to indicate rectangles that are not squares but can technically refer to squares as well. Here’s a primatologist saying that Apes Are Monkeys – Deal With It.

Third, the most recent common ancestor of humans and chimpanzees may (or may not) have been a chimpanzee. This is Richard Wrangham’s thesis, and he calls it Pan prior, placing it firmly within the chimpanzee genus.

These last two issues are especially annoying because they’re kind of meaningless category disputes. Yet for some reason the Internet seems to be obsessed with the lurking fear that someone, somewhere, might be saying that people evolved from monkeys or chimps.

Seriously. Get a life, Internet.

Posted in Uncategorized | Tagged | 674 Comments

Links 1/2014: Link, For You Know Not Whence You Came Nor Why

This blog sometimes discusses how ideas which weren’t originally religious can evolve into a semi-religious form. But even I was flabbergasted to see Chinese peasants offering bowls of pig blood to statues of Mao on his birthday (h/t Spandrell).

Speaking of Chinese religion, here’s yet another Christianity Is Exploding In China article. This makes me think: China is a big and powerful dictatorship with weak traditional religions and widespread concern about decaying values and social decadence. It’s a lot like the late Roman Empire where Christianity originally took off. I would really like to see someone knowledgeable write an analysis of what the unexpectedly rapid spread of Christianity in China can tell us about the unexpectedly rapid spread of early Christianity and why the religion took off at all.

23andMe finally gets a business plan that the FDA can’t torpedo – selling genetic data to pharmaceutical companies. Key statistic – a single drug company deal is worth as much as doubling their current consumer base. Probably a good thing for anyone who wants to advance personal genomics or drug discovery.

The effect of the tsetse fly on African development finds that modeled fly population predicts some of the underdevelopment of the region before colonial times. The theory is that fly-borne disease decreased farming output and thus population density, making it difficult for strong states and economies to form except in rare fly-free areas like Great Zimbabwe. H/t Marginal Revolution.

Belgian serial rapist requests euthanasia in place of his life sentence on the grounds that he is facing “unbearable psychological suffering” in prison; government originally agrees, but cancels due to lack of a doctor willing to perform the procedure. Before you argue about how refusal to permit prisoner euthanasia successfully draws a bright line that will one day protect prisoners’ rights, keep in mind that the families of the man’s victims have been petitioning against it on the grounds that he deserves unbearable psychological suffering rather than “a swift release”. I know this’ll be unpopular, but I’m pretty in favor of changing the appropriate UN conventions to specify that any country where prisoners who request euthanasia can’t get it gets charged with torture.

A new experimental treatment for multiple sclerosis: destroy the immune system with chemo, then build it back up again.

Israel Won’t Recognize Armenian Genocide, Says Ambassador. Apparently it wants better relations with Turkey, which I get, but the irony of Israel of all countries being willing to compromise genocide-recognition for its short-term goals is really really sad.

Scientists develop computer program that can always win at poker. I was originally confused why they published this result instead of heading to online casinos and becoming rich enough to buy small countries, but it seems that it’s a very simplified version of the game with only two players. More interesting, the strategy was reinforcement learning – the computer started with minimal domain knowledge, then played poker against itself a zillion times until it learned everything it needed to know. Everyone who thinks that AI is nothing to worry about, please think very carefully about the implications of a stupid non-generalized algorithm being able to auto-solve a game typically considered a supreme test of strategy and intellect.

A US Air Force team including a young Carl Sagan spent the 1950s trying to nuke the moon for extremely shaky reasons including “a possible boosting of domestic morale”.

India’s new ruling party is trying to hack through its legendary government bureaucracy. Minor victory of the month – a government employee who did not show up to work for twenty-four years has finally gotten fired.

A Career In Science Will Cost You Your First-Born. I’d like to see a really good analysis by someone who understands economics of why the science job market is so terrible. Is it that lots of bright-eyed idealistic young geniuses have so much non-monetary attraction to the idea of going into science that labs and universities can make the career as awful as they want and still have a ready supply of takers?

Authorities Suspect A Shark Tried To Eat Vietnam’s Internet is a deliberately clickbaity title, but there is no way for me to stay mad after watching a video of a shark eating the Internet.

Quiz: Anatomical Feature, Or Obscure Tolkien Reference? I have been preparing my entire life for this moment. And I still got three of them wrong.

Language Log finds the most perversely pronounced monosyllabic word in all human language – and, no surprise, it’s Gaelic.

It’s been recognized for a while that school choice programs can improve standardized test scores, but a new study finds that they can also result in more higher education, greater salaries at age 30, and less dependence on government handouts.

I’ve been saying for a while that BPA is probably bad news, and now there’s some evidence that it alters fetal brain development in fish, which are sort of like humans in that they are both animals. Supposed BPA-free substitute plastics don’t fare any better. I’m hoping this will eventually result in a ban. Until then, you can try avoiding canned foods and plastic water bottles, but that’s not going to prevent the pipes that bring water to your home from often being lined with the same stuff.

The first big randomized controlled trial of police body cameras shows they very dramatically reduce incidents of police misbehavior. Previous studies were unable to distinguish between better officer behavior and fewer frivolous complaints by citizens, but this one provides some strong evidence it’s mostly the officers who are changing.

The Impending Collapse Of Venezuela looks pretty grim, with the only plus side being hopefully this will encourage them to get a competent government and end up better off. I feel like we’ve already been over the whole “no, really, socialism doesn’t work” thing, but I guess some people always need more reminders.

Speaking of which, 538 draws the obvious-in-hindsight conclusion that this is why Cuba, whose economy is heavily dependent on Venezuelan aid, is suddenly cozying up to the US – they realize that their lifeline is about to be cut off, and that once that happens their government is in big trouble. A better question – why is Obama choosing to deal with them now, rather than waiting until they’re desperate or just letting them collapse so he can help pick up the pieces? Maybe because he’s a nice guy and my cutthroat geopolitical instincts aren’t very healthy in the real world?

Vox: Paul Ryan isn’t running for president. He’s after something even bigger. TL;DR – Paul Ryan is the Petyr Baelish of the USA.

I try to train myself to remember that blindly debating a factual question is dumb, because some responsible scientist has already investigated it much more thoroughly than I have. This is a remarkably hard habit to stick to, and I always like reminders. So – did you know people have formally investigated whether or not austerity worked in Europe?

Robin Hanson suggests selling cities to people or corporations. Sounds familiar.

A new study finds that underrepresentation of women in a field is closely linked to perception of that field as requiring lots of innate talent or “genius”. The news sites explain to us that “women avoid fields full of self-appointed geniuses”, that genius-intensive fields “punish” women, and ask whether “the genius stereotype is holding women back”. The researchers recommend that genius-heavy academic fields “examine the culture they have about how much brilliance influences success”. I hereby give everyone involved in this discussion the prestigious Sailer Award For Excellence In Failure To Consider Alternate Hypotheses.

Ritual Circumcision Linked To Increased Risk Of Autism In Young Boys (EDIT: Highly dubious)

This is big news – Attorney General Eric Holder has limited police ability to take money from people for no reason, which surprisingly was not limited until now. Between this and the camera study, I feel like we’re finally heading towards the right track with policing.

Publishers pull best-selling religious inspiration book The Boy Who Came Back from Heaven after the boy in question admits he did not, in fact, come back from Heaven. Alex Malarkey (nominative determinism!) said that the story was “all made up” and “I said I went to heaven because I thought it would get me attention”. Interesting for its implications about other paranormal claims. I am adjusting my view of the median case somewhat away from “the brain does weird things sometimes in states of great stress or illness” and towards “people often lie”.

I’ve been trying to avoid talking about Charlie Hebdo because it seems like classic toxoplasma. It’s something everyone should agree is terrible, and instead we’re desperately trying to figure out how to turn it into a controversy / a stick to hit one of various out-groups with. But I was impressed by some of the discussion of French double standards – a Charlie Hebdo cartoonist who said something mildly anti-Semitic was recently fired by the magazine, then charged with ‘inciting racial hatred’ by the government. And a Middle Eastern comedian who used some arguably inflammatory language to describe how he felt about the attacks was charged and faces seven years in prison. If I had to justify the existence of Charlie Hebdo to a French Muslim, I would want to be able to say “Look, I know it offends you, but we hold freedom of speech absolutely sacred and we want you to join us in that”. Instead it’s going to look to them (maybe accurately) like Muslims are specifically singled out as a group it’s ok to offend even while everyone else gets “protection”. It’s good that this incident has gotten everyone excited about free speech, but now the French need to start making sure the realities match their newfound ideals.

Related: If Charlie Hebdo Had Been Published In Britain.

That Korean company I linked to a while back that everyone suspected was trying to clone a mammoth? They’ve admitted they’re trying to clone a mammoth. ETA seems to be a couple of years.

OLD: Psychedelic use causes mental disease. NEW: Psychedelic use doesn’t cause mental disease. NEWER: Psychedelic use may protect against mental disease.

My spirit animal might be the confused flour beetle.

Relevant to our interests: the Handbook of Relationship Initiation. Unfortunately seems more academic than practical, but still probably really interesting. Now I want a practical one of these.

Posted in Uncategorized | Tagged | 639 Comments

Depression Is Not A Proxy For Social Dysfunction


Here is a terrible article from the New York Post: Sorry, Liberals, Scandinavian Countries Aren’t Utopias.

Its thesis is interesting and worth exploring, but instead of a principled investigation, the article just publishes a bunch of cherry-picked smears about Scandinavia. Did you know that 5% of Danes have had sex with animals?

(What percent of people in other countries have had sex with animals? I don’t know. More important, I see no sign that the New York Post knows either.)

But the part that really caught my eye was statements like these:

Why does no one seem particularly interested in visiting Denmark? Visitors say Danes are joyless to be around. Denmark suffers from high rates of alcoholism. In its use of antidepressants it ranks fourth in the world. (Its fellow Nordics the Icelanders are in front by a wide margin) … Finland, which tops the charts in many surveys, is also a leader in categories like alcoholism, murder, suicide and antidepressant usage.

The Post is not the only paper to make this argument. The Guardian (“The Grim Truth Behind The Scandinavian Miracle”) has said much the same thing:

Take the Danes, for instance. True, they claim to be the happiest people in the world, but why no mention of the fact they are second only to Iceland when it comes to consuming anti-depressants?…Finland has by far the highest suicide rate in the Nordic countries.

I’ve heard this same argument applied to other issues; for example, in his debate with Noah Smith, Michael Anissimov argues against the supposed success of modern liberal society by pointing out rising rates of depression and suicide.

It’s really tempting to equate depression with misery and misery with social dysfunction. Danes and Finns have high levels of depression, therefore their lives must be unusually miserable, therefore Denmark and Finland are poorly-organized societies.

But first of all, it’s not clear that Scandinavian countries really have very high depression and suicide rates. There are a lot of collections of statistics, and many of them show Scandinavia around the middle. Going by “antidepressant prescriptions” is a terrible way to do things, because it mixes amount of depression with resources devoted to treating depression – if the Scandinavian health systems are as good as everyone says, maybe they just treat a greater percent of their depressives than everywhere else.
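A toy calculation makes the confound obvious. The country names and every number here are invented for illustration:

```python
# Two hypothetical countries with the SAME underlying depression rate,
# differing only in how much of that depression their health systems treat.
population = 5_000_000
depression_rate = 0.05          # 5% depressed in both countries

treatment_rate = {"Denmarkia": 0.60,   # strong health system: most cases treated
                  "Elsewheria": 0.25}  # weaker system: most cases untreated

for country, treated in treatment_rate.items():
    prescriptions = population * depression_rate * treated
    per_1000 = round(prescriptions / population * 1000, 1)
    print(country, per_1000)  # antidepressant prescriptions per 1000 people
```

Denmarkia “ranks far higher in antidepressant use” (30.0 vs 12.5 per 1000) despite identical depression, because the statistic measures treatment, not misery.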

But more important, even if Scandinavia does have very high rates of depression, that doesn’t tell us much about whether they’re happy or not. Depression is not the same thing as being sad. Sadness is a risk factor for depression – although even there I suspect that it’s very specific kinds of sadness that we haven’t yet teased out from the general construct – but it is not the condition itself. The condition itself is a complicated mess of neurotransmitters, cytokines, hormones, changes in brain structure, and goodness only knows what else.

Off the top of my head, here are six plausible reasons why Scandinavia could have higher rates of depression than the United States, even if it is a utopian society of perfect happiness.

1. Light. Scandinavia is far north [citation needed] which puts its citizens at very high risk for seasonal affective disorder, which can present as depression.

2. The midnight sun. Scandinavia’s weird day-night cycle could easily disrupt people’s circadian rhythms. Studies find that “increasing evidence points to a role of the biological clock in the development of depression…it seems likely the circadian system plays a vital role in the genesis of the disorder.” This is why some European countries use melatonergic substances as antidepressants.

3. Parasite load. It’s positively correlated with temperature, which means Scandinavia probably has some of the lowest parasite load in the world. But low parasite load causes the immune system to get antsy and start attacking random stuff, leading to increased risk of autoimmune disease. If there’s an immunological component to depression – and right now lots of people think there is – then that’s another risk factor right there.

4. Diet. The Scandinavian diet has unusually little fresh food, because the area is a frozen wasteland and most things have to be imported from elsewhere. They’re big on frozen stuff, processed stuff, and canned stuff. I am an expert neither in Scandinavian cuisine nor in nutrition, but if depression is linked to diet and imbalances in the gut microbiome – and there’s some evidence it is – then the Scandinavians are in a prime position to get hit extra hard.

5. Genetics. The New York Post article mentions that Scandinavians have an unusual variant of the MAO-A enzyme (I told you it was a weird hit piece. Scandinavia is too liberal, therefore they have bad genes?). MAO-A is also known as “the thing that processes serotonin” and “the thing that MAO inhibitors, some of the most powerful known antidepressants, inhibit”. I’m not saying this gene in particular is responsible for Scandinavian depression, I’m saying that the article itself is admitting that Scandinavia contains some genetically distinct populations and for all we know this could be involved.

6. Culture. Maybe the biggest factor in the level of depression and suicide in a culture is whether it is culturally acceptable to be depressed and commit suicide. Some of the lowest suicide rates are found in heavily religious cultures and communities who believe suicide is a mortal sin. On the other hand, one of the most suicidal countries in the world is Japan, with its heavily-mythologized history of heroic samurai taking “the honorable way out” when they had brought shame upon themselves. Well, Scandinavia is one of the least religious regions in the world. And all I know about their culture is that they produce about 100% of good death metal, and their native mythology ends with the world being plunged into eternal winter and the gods being eaten by wolves.


But all this is just speculation. Let me give a concrete example of a case where social dysfunction doesn’t track depression and suicidality in a predictable way.

What about white versus black Americans? To some degree these two groups live in separate “societies”. Most people would consider the white society better off in most ways – higher income, better health, more family stability, less involvement with the criminal justice system. If White America and Black America were countries, White America would get all of the accolades currently given to the Scandinavians.

But American whites have higher rates of depression than blacks. There are the usual contradictory studies and arguments about how to adjust for which confounder, but I’m pretty sure this is something like a consensus position right now. More solidly, white Americans have much higher suicide rates than black Americans.

(Although I feel bad mentioning this, because the stereotype that blacks never commit suicide is wrong and sometimes prevents black people from getting the help they need.)

We can go a few centuries back and get even more surprising results. Although it’s difficult to get data from the era, analyses of suicide rates among African-American slaves in the antebellum South describe them as “surprisingly low”. I can’t find any hard evidence proving Kurt Vonnegut’s contention that “the suicide rate per capita among slave owners was much higher than the suicide rate among slaves”, but it seems to have been commonly believed. Kneeland writes:

“[These low suicide rates are] consistent with suicide rates for Africa and for people of African descent living in other areas of the world, and further supports the theory that a low suicide rate is an element of African culture.”

If you’re going to say that Scandinavia’s higher depression and suicide rates mean Scandinavia has it worse off than America, you also need to theorize that white people have it worse off than black people, including black slaves. Why don’t you go post something to that effect on Tumblr and see what they have to say? I’ll wait.


Or maybe we’re barking up entirely the wrong tree. What if it’s not even that happy, well-functioning societies can sometimes still end up with high suicide rates? What if people become suicidally depressed precisely because they live in happy, well-functioning societies?

This is the fascinating hypothesis of Daly, Oswald, and Wu (2011), who after crunching the numbers find pretty convincingly that “suicide rates tend to be highest in happy places”:

A little-noted puzzle is that many of [the happiest] places have unusually high rates of suicide. While this fact has been remarked on occasionally for individual nations, especially for the case of Denmark, it has usually been attributed in an anecdotal way to idiosyncratic features of the location in question (eg the dark winters in Scandinavia), definitional variations in the measurement of well-being and suicide, and differences in culture and social attitudes regarding happiness and taking one’s life. Most scholars have not thought of the anecdotal observation as a systematic relationship that might be robust to replication or investigation…this paper attempts to document the existence of a happiness-suicide paradox: happier areas have a higher percentage of suicides.

They then go on to show a strong positive relationship between average self-reported happiness and suicidality across Western nations – Greece is both the least happy country and the one with the lowest suicide rate – and US states, where confirmed hellholes New York and New Jersey are at or near the bottom. The relationship holds whether you adjust for confounders (including income!) or not.

I expected this to be a straightforward effect of modernization/industrialization/liberalism, as per Michael Anissimov’s hypothesis. The country-level data maybe sort of vaguely supports that trend – Greece and Portugal are our token incompletely-modernized countries and have very low suicide rates, Scandinavia is high, and everywhere else is sort of a toss-up. But US states really really don’t support that hypothesis – New York and New Jersey both seem high on the modernization/industrialization/liberalism axis, and they’re right in the bottom left corner of the study’s graphs along with Greece and Portugal. Meanwhile, tropical paradise Hawai’i is suicidal as heck, even though it doesn’t seem especially modern/industrial/liberalized. The US state data also torpedo – albeit less conclusively – an attempt to make the whole issue one of latitude.

One caveat I do have about the US data is that several of the happiest and most suicidal states – at least on the unadjusted plot – are also high-altitude. Utah, Wyoming, Colorado, Montana, Idaho are all up there at the top left side of the graph. But we already know there’s a strong positive relationship between altitude and suicide in 2584 US counties, probably because the brain’s emotional regulation system doesn’t work well in low-oxygen environments. If we assume people living in beautiful open forested mountain areas are especially happy, that takes away a big chunk of the graph right there. But it leaves other chunks untouched, and I don’t think it’s going to be that simple.

The authors’ preferred explanation is that suicide is an effect of relative rather than absolute misery. If you’re depressed and everybody around you is very happy, that makes things worse than if you’re depressed and everyone around you is also pretty miserable. Thus suicide is more common in happier societies.

I really don’t like this theory. Although everyone else should be happier in these societies, the person in question who might or might not commit suicide should also be, on average, happier. There’s no reason to think that the average hedonic distance between potential suicides and their neighbors is higher in these areas. Indeed, given that Scandinavia – and many of the other happy societies – are also some of the most equal societies, I would expect an unusually low hedonic distance between people. And in fact, I notice that suicide rates by country are negatively correlated with inequality – that is, the more unequal the country, the lower the suicide rate (wow, I definitely don’t remember seeing that one in The Spirit Level.)

On the other hand, I can’t for the life of me think of a better theory, so whatever.

Other things that increase suicide rates, by the way, include springtime, nice weather, high levels of education, and very occasionally antidepressants. My father, a very hard-headed internist, makes fun of me for doing psychiatry because “the whole field is just common sense”, but sometimes it really isn’t.

So you should probably think very carefully before using a difference in depression or suicide rates to support your pet theory about which societies work better than others.

The Influenza Of Evil


A recent Cracked piece: Five Everyday Groups Society Says It’s Okay To Mock. It begins:

There’s a rule in comedy that says you shouldn’t punch down. It’s okay to make fun of someone rich and famous, because they’re too busy molesting groupies with 100-dollar bills to notice, but if you make a joke at the expense of a homeless person, you’re just an asshole. That said, we as a society have somehow decided on a few arbitrary exceptions to this rule.

“Somehow decided on a few arbitrary exceptions” isn’t very technical. Then again, perhaps we shouldn’t expect technical explanations from a humor website. Let’s try something a little bit more rigorous, like poetry:

For Humanity sweeps onward: where to-day the martyr stands,
On the morrow crouches Judas with the silver in his hands;
Far in front the cross stands ready and the crackling faggots burn,
While the hooting mob of yesterday in silent awe return
To glean up the scattered ashes into History’s golden urn.

’Tis as easy to be heroes as to sit the idle slaves
Of a legendary virtue carved upon our fathers’ graves,
Worshippers of light ancestral make the present light a crime;—
Was the Mayflower launched by cowards, steered by men behind their time?
Turn those tracks toward Past or Future, that make Plymouth Rock sublime?

They were men of present valor, stalwart old iconoclasts,
Unconvinced by axe or gibbet that all virtue was the Past’s;
But we make their truth our falsehood, thinking that hath made us free,
Hoarding it in mouldy parchments, while our tender spirits flee
The rude grasp of that great Impulse which drove them across the sea.

No? Still not technical enough? I guess that was kind of a long shot. Fine, let’s do this the hard way.


Earlier this week, I wrote about things that are anti-inductive. Something is anti-inductive if it fights back against your attempts to understand it. The classic example is the stock market. If someone learns that the stock market is always low on Tuesdays, then they’ll buy lots of stocks on Tuesdays to profit from the anomaly. But this raises the demand for stocks on Tuesdays, and therefore stocks won’t be low on Tuesdays anymore. To detect a pattern is to destroy the pattern.

The less classic example is job interviews where every candidate is trying to distinguish themselves from every other candidate. If someone learns that interviewers are impressed if you talk about your experience in tropical medicine, then as more and more people catch on they’ll all get experience in tropical medicine, it will become cliche, and people won’t be impressed by it anymore.

Evil, too, is anti-inductive.

The Nazis were very successful evildoers, at least for a while. Part of their success was convincing people – at least the German people, but sometimes also foreigners – that they were the good guys. And they were able to convince a lot of people, because people can be pretty dumb, a lot of them kind of just operate by pattern-matching, and the Nazis didn’t match enough patterns to set off people’s alarms.

Neo-Nazis cannot be called “successful” in any sense of the word. Their PR problem isn’t just that they’re horrible – a lot of groups are horrible and do much better than neo-Nazis. Their PR problem is that they’re horrible in exactly the way that our culture formed memetic antibodies against. Our pattern-matching faculties have been trained on Nazis being evil. The alarm bells that connect everything about Nazis to evil are hypersensitive, so much so that even contingent features of the Nazis remain universally acknowledged evil-signals.

It would be premature to say that we will never have to worry about fascism again. But for now, we are probably pretty safe from fascism that starts its sales pitch with “Hi, I’m fascism! Want a swastika armband?”

Huey Long supposedly predicted that “Fascism in America will attempt to advance under the banner of anti-fascism.” I’m not sure I like the saying as it stands – it seems too susceptible to Hitler Jr. telling Churchill Jr. that he’s marching under the banner of anti-fascism, which proves he’s the real fascist. Then again, in a world where capitalism marches under the banner of “socialism with Chinese characteristics”, who knows? I would prefer to say that fascism will, at the very least, advance in a way which carefully takes our opposition to fascism into account.

Sure enough, people who had learned to be wary of fascism were still highly susceptible to communism, which wore its anti-fascism proudly on its sleeve as a symbol of how great it was. It convinced a lot of very smart people in the free world that it was the best thing since sliced bread, all while murdering tens of millions of people. Meanwhile, our memetic immune systems were sitting watchfully at their posts, thinking “Well, this doesn’t look at all like Nazism. They’re saying all the right stuff about equality, which is like the opposite of what the Nazis said. I’m giving them a pass.”

In fact, I’ll make the analogy more explicit. Every winter, there’s a flu epidemic. Every spring and summer, people’s bodies put in a lot of effort making antibodies to last year’s flu. The next winter, the flu mutates a little, a new virus with new antigens starts a new epidemic, and the immune system doesn’t have a clue: “This virus doesn’t have the very very specific characteristic I’ve learned to associate with the flu. Maybe it wants to be my friend!” This is why we need the WHO to predict what the up-and-coming flu virus will be and give us vaccines against it; it’s also why their job is so hard: they don’t know what’s coming, except that it will look different from how it has looked before.

Nowadays most people’s memetic immune systems have some antibodies to communism, and people talking with Russian accents about how we need to eliminate the bourgeoisie and institute a dictatorship of the proletariat send shivers up the spines of a lot of people. Nowadays an openly Communist party faces the same uphill battle as an openly Nazi party.

But that just means that if there’s some other evil on the horizon, it probably won’t resemble either fascism or communism. It will be a movement about which everyone’s saying “These new guys are so great! They don’t pattern-match to any of the kinds of evil we know about at all!” By Long’s formulation, it may very well be marching under the banners of anti-fascism and anti-Communism.

(I’m not vagueblogging, by the way. I honestly don’t have anyone in mind here. The whole point is that it’s probably someone I’m not expecting. And if you say “I KNOW EXACTLY WHICH GROUP IT WILL BE, BASED ON THOSE CRITERIA IT’S CLEARLY X!” consider the possibility that you’re missing the point.)


But getting back to the Cracked article.

We as a society have mostly figured out that shouting “GET A JOB, LOSER!” at the homeless is mean. We have mostly figured out that shouting “YOU’RE GOING TO HELL” at people of different religions is bad. We’re even, slowly but surely, starting to wonder whether there’s something problematic about shouting “FAGGOTS!” at the local gay couple.

Stupid bullies will continue to do those things, just as stupid investors will continue to read “How To Beat The Stock Market” books published in 1985, and stupid socialites will continue to wear the fashion that was cool six months ago.

But smart bullies are driven by their desire to have their bullying make them more popular, to get the rest of the world pointing and laughing with them. In a Blue Tribe bubble, shouting “FAGGOT” at gay people is no longer a good way to do that. The smart bullies in these circles have long since stopped shouting at gays – not because they’ve become any nicer, but because that’s no longer the best way to keep their audience laughing along with them.

Cracked starts off by naming mentally ill celebrities as a group society considers it okay to mock. This doesn’t seem surprising. Nowadays people talk a lot about punching-up versus punching-down. But that just means bullies who want to successfully punch down will come up with a way to make it look like they’re punching up. Take a group that’s high-status and wealthy, but find a subset who are actually in serious trouble and mock them, all the while shouting “I’M PUNCHING UP, I’M PUNCHING UP!”. Thus mentally ill celebrities.

The other examples are harder to figure out. I would argue that they’re ones that are easy to victim-blame (ie obesity), ones that punch down on axes orthogonal to the rich-poor axis we usually think about and so don’t look like punching down (ie virginity), or ones that are covertly associated with an outgroup. In every case, I would expect the bullies involved, when they’re called out, to loudly protest “But that’s not real bullying! It’s not like [much more classic example of bullying, like mocking the homeless]!” And they will be right. It’s just different enough to be the hot new bullying frontier that most people haven’t caught onto yet.

I think the Cracked article is doing good work. It’s work that I also try to do (see for example number 6 here, which corresponds to Cracked’s number 5). It’s the work of pointing these things out, saying “Actually, no, that’s bullying”, until eventually it sinks into the culture, the bullies realize they’ll be called out if they keep it up, and they move on to some new target.

All of this ties back into the dynamic I talked about in Untitled. I mean, look at the people on Cracked’s list of whom society says it’s okay to mock. Virgins. The obese. People who live in their parents’ basements. Generalize “mentally ill celebrities” just a little bit to get “people who are financially well-off but non-neurotypical” and there you go.

I apologize for irresponsibly claiming to have found a pattern in an anti-inductive domain. You may now all adjust your behavior to make me wrong.

The Physics Diet?

There are at least four possible positions on the thermodynamics of weight gain:

1. Weight gain does not depend on calories in versus calories out, even in the loosest sense.

2. Weight gain is entirely a function of calories in versus calories out, but calories may move in unexpected ways not linked to the classic “eat” and “exercise” dichotomy. For example, some people may have “fast metabolisms” which burn calories even when they are not exercising. These people may stay very thin even while eating and exercising exactly as much as far more obese people.

3. Weight gain is entirely a function of calories in versus calories out, and therefore of how much you eat and exercise. However, these are in turn mostly dependent on the set points of a biologically-based drive. For example, some people may have overactive appetites, and feel starving unless they eat an amount of food that will make them fat. Other people will have very strong exercise drives and feel fidgety unless they get enough exercise to keep them very thin. These things can be altered in various ways which cause weight gain or loss, without the subject exerting willpower. For example, sleep may cause weight loss because people who get a good night’s sleep have decreased appetite and lower levels of appetite-related hormones.

4. Weight gain is entirely a function of calories in versus calories out, and therefore of how much you eat and exercise. That means diet is entirely a function of willpower and any claim that factors other than amount of food eaten and amount of exercise performed can affect weight gain is ipso facto ridiculous. For example, we can dismiss claims that getting a good night’s sleep helps weight loss, because that would violate the laws of thermodynamics.

1 and 4 are kind of dumb. 1 is dumb because…well, to steal an Eddington quote originally supposed to apply to the second law of thermodynamics:

If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against…thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

But 4 is also dumb. We have a long list of things that affect weight gain – for example, patients on the powerful psychiatric medication clozapine usually gain a lot of weight – fifteen pounds more on average than people on safer antipsychotics. Other medications are known to increase weight to a lesser degree, and some medications even decrease weight, though you wouldn’t like the side effects of most of them. Certain genetic diseases are also known to cause increased weight – Prader-Willi syndrome, for example.

One could try to rescue 4 by saying that people with rare genetic diseases or taking powerful prescription-only medications are a different story and in normal people it’s entirely controlled by willpower. But first, this is an area where possibility proofs are half the battle, and we have a possibility proof. And second, there are more than enough studies about genetics, microbiome, and, yes, sleep showing that all of these things can have effects in normal people.

So 1 and 4 are out. And although I do sometimes see people pushing them, they mostly seem to do a thriving business as straw men for people who want to accuse their opponents of saying something absurd.

The most interesting debate to be had is between 2 and 3. 3 says that all of the interventions that we know affect weight – certain pills, certain recreational drugs, changes in gut bacteria, whatever – do it by affecting appetite and exercise drive. 2 says that basal metabolism is also involved. 3 seems to at least leave open the possibility of just starving yourself even when your body is telling you really hard to eat. 2 says even that won’t work.

There’s room for a little bit of gradation between 2 and 3. A lot of people suggest that one way “fast metabolism” presents is by people fidgeting a lot, which is sort of the same as “your body increases its exercise drive”.

But in general, I think 2 points to a real phenomenon that causes at least some interpersonal weight differences.

We’ll start with the “possibility proof” again. MRAP2. It’s a gene. Scientists can delete it in mice. These mice will eventually develop excessive appetites. But when they are young, they eat the same amount as any other mouse and still get fatter.

Likewise, 2,4-dinitrophenol is a cellular uncoupling agent which increases metabolic rate and consistently produces weight loss of 2-3 pounds per week. It would be an excellent solution to all of our obesity-related problems if the papers on it didn’t keep having names like 2,4-Dinitrophenol: A Weight Loss Agent With Significant Acute Toxicity And Risk Of Death.

So what about everyday life?

A study of individual variation in basal metabolic rate found very significant interpersonal differences. A lot of that was just “some people are bigger than others”, but some of it wasn’t – they state that “twenty-six percent of the variance remained unexplained”. The Wikipedia article puts this in context: “One study reported an extreme case where two individuals with the same lean body mass of 43 kg had BMRs of 1075 kcal/day (4.5 MJ/day) and 1790 kcal/day (7.5 MJ/day). This difference of 715 kcal/day (67%) is equivalent to one of the individuals completing a 10 kilometer run every day.”
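A quick arithmetic check on those Wikipedia numbers (my own sketch; the ~70 kcal burned per kilometer of running is an assumed round figure, not from the article):

```python
# BMR figures quoted above for two individuals with the same 43 kg lean mass.
low_bmr, high_bmr = 1075, 1790  # kcal/day

diff = high_bmr - low_bmr   # absolute gap in daily energy burn
pct = 100 * diff / low_bmr  # gap as a percentage of the lower BMR

# Rough running equivalence, assuming ~70 kcal burned per kilometer.
km_per_day = diff / 70

print(diff, round(pct), round(km_per_day, 1))  # 715 67 10.2
```

The quoted figures check out: a 715 kcal/day gap is 67% of the lower BMR, and at ~70 kcal/km it matches a daily 10 km run.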

Dr. Claude Bouchard and his team stuck 12 pairs of male identical twins in isolation chambers where their caloric intake and exercise could be carefully controlled, then fed them more calories than their bodies needed. All sets of twins gained weight, and in all twin groups both twins gained about the same amount of weight as each other, but the amount of weight gained varied between twin pairs by a factor of 3 (from 4 to 13 kg).
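One way to see why a design like this implicates genes: if co-twins gain similar amounts while pairs differ a lot from each other, most of the variance in weight gain sits between pairs rather than within them. A hypothetical sketch (the individual gains below are invented for illustration; only the 4–13 kg range comes from the study):

```python
# Invented weight gains (kg) for four twin pairs, spanning the study's
# reported 4-13 kg range; each twin closely resembles their co-twin.
pairs = [(4.0, 4.5), (7.0, 7.5), (9.5, 10.0), (12.5, 13.0)]

pair_means = [(a + b) / 2 for a, b in pairs]
grand_mean = sum(pair_means) / len(pair_means)

# Decompose variance: between-pair (pair means vs. the grand mean)
# and within-pair (each twin vs. their own pair mean).
between = sum((m - grand_mean) ** 2 for m in pair_means) / len(pairs)
within = sum((a - m) ** 2 + (b - m) ** 2
             for (a, b), m in zip(pairs, pair_means)) / (2 * len(pairs))

print(between > 100 * within)  # between-pair variance dominates
```

When twins share the same overfeeding regimen, shared genes are the obvious candidate for that dominant between-pair component.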

A lot of the sites that talk about this thing are careful to say that people “can’t blame” genes for their obesity, because obesity levels have been rising for decades and genes can’t change that quickly. I think this is wrong-headed. True, genes are not the source of the modern rise in obesity levels. But it’s entirely possible that a globally rising tide of obesity has disproportionately affected the people with the wrong genes. Just as Bouchard fed the same amount extra to all his study participants but some of them gained more weight than others, so if you put an entire civilization worth of people in an obesogenic environment, some of them might be genetically predisposed to do worse than the rest.

A more practical question – can individual people’s metabolism change?

I am personally predisposed to answer in the affirmative. In my early twenties, I ate a crazy amount every day – two bagels with breakfast, cookies with lunch, a big dinner followed by dessert – and I stayed pretty thin throughout. Now I’m thirty, I eat a very restrained diet, and my weight still hovers at just above the range where I am supposed to be. I know that people are famously bad at understanding how much they’re eating and exercising, but seriously if you try to convince me that I’m eating more now than I was then I’m going to start doubting my own sanity, or at least my autobiographical memory.

But there’s not much evidence to back me up. Metabolic rate is well-known to decline with age, but linearly and predictably. And it changes with muscle mass, but only minimally – and I don’t think I used to be any more muscular.

The sites that talk about drastic and unexpected ways to change metabolism seem mostly crackpottish. This isn’t to say their methods don’t work – green tea, for example, has a statistically significant effect – but it’s all so small as to be pretty meaningless in a real-world context.

So my own story seems to be on shaky ground. But as far as I can tell, the people arguing that they’re trying just as hard as anybody else but still unable to lose weight because of their metabolism are very possibly right.


The Phatic And The Anti-Inductive


Ozy recently taught me the word “phatic”. It means talking for the sake of talking.

The classic example is small talk. “Hey.” “Hey.” “How are you?” “Fine, and you?” “Fine.” No information has been exchanged. Even if the person involved wasn’t fine, they’d still say fine. Indeed, at least in this country, giving an information-bearing response to “how are you?” is a mild social faux pas.

Some people call this “social grooming behavior” and it makes sense. It’s just a way of saying “Hello, I acknowledge you and still consider you an acquaintance. There’s nothing wrong between us. Carry on.” That you are willing to spend ten seconds holding a useless conversation with them signals this just fine.

We can go a little more complex. Imagine I’m calling a friend from college after five years out of contact; I’ve heard he’s got a company now and I want to ask him for a job. It starts off “Hey, how are you?”, segues into “And how are the wife and kids?”, then maybe into “What are you doing with yourself these days?” and finally “Hey, I have a big favor to ask you.” If you pick up the phone and say “Hello, it’s Scott from college, can you help me get a job?” this is rude. It probably sounds like you’re using him.

And I mean, you are. If I cared about him deeply as a person I probably would have called him at some point in the last five years, before I needed something. But by mutual consent we both sweep that under the rug by having a few minutes of meaningless personal conversation beforehand. The information exchanged doesn’t matter – “how’s your business going?” is just as good as “how’s your wife and kids?” is just as good as “how are your parents doing?”. The point is to clock a certain number of minutes about something vaguely personal, so that the request seems less abrupt.

We can go even more complex. By the broadest definition, phatic communication is equivalent to signaling.

Consider a very formulaic conservative radio show. Every week, the host talks about some scandal that liberals have been involved in. Then she explains why it means the country is going to hell. I don’t think the listeners really care that a school in Vermont has banned Christmas decorations or whatever. The point is to convey this vague undercurrent of “Hey, there are other people out there who think like you, we all agree with you, you’re a good person, you can just sit here and listen and feel reassured that you’re right.” Anything vaguely conservative in content will be equally effective, regardless of whether the listener cares about the particular issue.


Douglas Adams once said there was a theory that if anyone ever understood the Universe, it would disappear and be replaced by something even more incomprehensible. He added that there was another theory that this had already happened.

These sorts of things – things such that if you understand them, they get more complicated until you don’t – are called “anti-inductive”.

The classic anti-inductive institution is the stock market. Suppose you found a pattern in the stock market. For example, it always went down on Tuesdays, then up on Wednesdays. Then you could buy lots of stock Tuesday evening, when it was low, and sell it Wednesday, when it was high, and be assured of making free money.

But lots of people want free money, so lots of people will try this plan. There will be so much demand for stock on Tuesday evening that there won’t be enough stocks to fill it all. Desperate buyers will bid up the prices. Meanwhile, on Wednesday, everyone will sell their stocks at once, causing a huge glut and making prices go down. This will continue until the trend of low prices Tuesday, high prices Wednesday disappears.
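The feedback loop in the last two paragraphs can be put in a toy simulation (my own illustration, with made-up starting prices and a made-up adjustment rule): each week, traders exploiting the Tuesday-low/Wednesday-high gap bid Tuesday’s price up and push Wednesday’s down by some fraction of the gap, and the pattern eats itself.

```python
# Toy anti-inductive market: Tuesday starts cheap, Wednesday dear.
tuesday, wednesday = 90.0, 110.0
gaps = []

for week in range(20):
    gap = wednesday - tuesday
    gaps.append(gap)
    # Exploiters buy Tuesday (bidding it up) and sell Wednesday
    # (pushing it down), each move a fraction of the current gap.
    tuesday += 0.3 * gap
    wednesday -= 0.3 * gap

print(round(gaps[0], 2), round(gaps[-1], 6))  # the gap shrinks toward zero
```

With each round of exploitation the gap shrinks geometrically, which is the sense in which detecting the pattern destroys it.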

So in general, it should be impossible to exploit your pattern-finding ability to profit off the stock market unless you are the smartest and most resourceful person in the world. That is, maybe stocks go up every time the Fed cuts interest rates, but Goldman Sachs knows that too, so they probably have computers programmed to buy so much stock milliseconds after the interest rate announcement is made that the prices will stabilize on that alone. That means that unless you can predict better than, or respond faster than, Goldman Sachs, you can’t exploit your knowledge of this pattern and shouldn’t even try.

Here’s something I haven’t heard described as anti-inductive before: job-seeking.

When I was applying for medical residencies, I asked some people in the field to help me out with my interviewing skills.

“Why did you want to become a doctor?” they asked.

“I want to help people,” I said.

“Oh God,” they answered. “No, anything but that. Nothing says ‘person exactly like every other bright-eyed naive new doctor’ like wanting to help people. You’re trying to distinguish yourself from the pack!”

“Then…uh…I want to hurt people?”

“Okay, tell you what. You have any experience treating people in disaster-prone Third World countries?”

“I worked at a hospital in Haiti after the earthquake there.”

“Perfect. That’s inspirational as hell. Talk about how you want to become a doctor because the people of Haiti taught you so much.”

Wanting to help people is a great reason to become a doctor. When Hippocrates was taking his first students, he was probably really impressed by the one guy who said he wanted to help people. But since that time it’s become a cliché, overused. Now it signals people who can’t come up with an original answer. So you need something better.

During my interviews, I talked about my time working in Haiti. I got to talk to some of the other applicants, and they talked about their time working in Ethiopia, or Bangladesh, or Nicaragua, or wherever. Apparently the “stand out by working in a disaster-prone Third World country” plan was sufficiently successful that everyone started using it, and now the people who do it don’t stand out at all. My interviewer was probably thinking “Oh God, what Third World country is this guy going to start blabbering about how much he learned from?” and moving my application to the REJECT pile as soon as I opened my mouth.

I am getting the same vibe from the critiques of OKCupid profiles in the last open thread. OKCupid seems very susceptible to everybody posting identical quirky pictures of themselves rock-climbing, then talking about how fun-loving and down-to-earth they are. On the other hand, every deviation from that norm has also been explored.

“I’m going for ‘quirky yet kind’.”

“Sarcastic, yet nerdy?”

“Outdoorsy, yet intellectual.”

“Introverted, yet a zombie.”

“I thought we went over this. Zombies. Are. Super. Done.”

I’ve been thinking about this lately in the context of psychotherapy.

I’m not talking about the very specific therapies, the ones where they teach special cognitive skills, or expose you to spiders to cure your arachnophobia. They don’t let me do those yet. I’m talking about what’s called “supportive therapy”, where you’re just talking to people and trying to make them feel generally better.

When I was first starting out, I tried to do therapy anti-inductively. I figured that I had to come up with something unexpected, something that the patient hadn’t thought of. Some kind of brilliant interpretation that put all of their problems in a new light. This went poorly. It tended to be a lot of “Well, have you tried [obvious thing?]”, them saying they had, and me escalating to “Well, have you tried [long shot that probably wouldn’t work]?”

(I wonder if this was Freud’s strategy: “Okay, he says he’s depressed, I can’t just tell him to cheer up, probably everybody says that. Can’t just tell him to accept his sadness, that one’s obvious too. Got to come up with something really original…uh…HAVE YOU CONSIDERED THAT YOU WANT TO KILL YOUR FATHER AND MARRY YOUR MOTHER?!”)

Now I tend more to phatic therapy. This happened kind of by accident. Some manic people have a symptom called “pressured speech” which means they never shut up and they never let you get a word in edgewise. Eventually, more out of surrender than out of a strategic plan, I gave up and stopped trying. I just let them talk, nodded my head, said “Yeah, that sounds bad” when they said something bad-sounding, said “Oh, that’s good” when they said something good-sounding.

After a while I realized this went at least as well as any other therapy I was doing, plus the patients really liked me and thought I was great and gave me lots of compliments.

So after that, “active listening” became sort of my default position for supportive therapy. Get people talking. Let them talk. Nod my head as if I am deeply concerned about their problems. Accept their effusive praise about how well I seem to be understanding them.

This is clearly phatic. I would say the ritual is “High status person is willing to listen to my problems. That means society considers my problems important and considers me important. It means my problems are okay to have and I’m not in trouble for having them.” As long as I seem vaguely approving, the ritual reaches its predetermined conclusion.


I was thinking about this recently because several friends have told me how much they hated “therapist speak”. You know, things like “I feel your pain” or “And how does that make you feel?”

I interpret this as an anti-inductive perspective on therapy. The first therapist to say “I feel your pain” may have impressed her patients – a person who herself can actually feel all my hurt and anger! Amazing! But this became such a standard in the profession that it became the Default Therapist Response. Now it’s a signal of “I care so little about your pain that I can’t even bother to say anything other than the default response.” When a therapist says “I feel your pain,” it’s easy to imagine that in her head she’s actually planning what she’s going to make for dinner or something.

So just as some people find it useful to divide the world into “ask culture” and “guess culture”, I am finding it useful to divide the world into “phatic culture” and “anti-inductive culture”.

There are people for whom “I feel your pain” is exactly the right response. It shows that you are sticking to your therapist script, it urges them to stick to their patient script, and at the end of the session they feel like the ritual has been completed and they feel better.

There are other people for whom “I feel your pain” is the most enraging thing you could possibly say. It shows that you’re not taking them seriously or engaging with them, just saying exactly the same thing you do to all your other patients.

There are people for whom coming up with some sort of unique perspective or clever solution for their problems is exactly the right response. Even if it doesn’t work, it at least proves that you are thinking hard about what they are saying.

There are other people for whom coming up with some sort of unique perspective or clever solution is the most enraging thing you could possibly do. At the risk of perpetuating gender stereotypes, one of the most frequently repeated pieces of relationship advice I hear is “When a woman is telling you her problems, just listen and sympathize, don’t try to propose solutions”. It sounds like the hypothetical woman in this advice is looking for a phatic answer.

I think that I and most of my friends fall far to the anti-inductive side, with little tolerance for the phatic side. And I think we probably typical-mind other people as doing the same.

This seems related to the classic geek discomfort with small-talk, with pep rallies, and with normal object-level politics. I think it might also be part of the problem I had with social skills when I was younger – I remember talking to people, panicking because I couldn’t think of any way to make the conversation unusually entertaining or enlightening, and feeling like I had been a failure for responding to the boring-weather-related question with a boring-weather-related answer. Very speculatively, I think it might have something to do with creepy romantic overtures – imagine the same mental pattern that made me jokingly consider giving “I want to hurt people” as my motivation for becoming a doctor, applied to a domain that I really don’t understand on a fundamental enough level to know whether or not saying that is a good idea.

I’ve been trying to learn the skill of appreciating the phatic. I used to be very bad at sending out thank-you cards, because I figured if I sent a thank-you card that just said “Thank you for the gift, I really appreciate it” then they would think that the lack of personalization meant I wasn’t really thankful. But personalizing a bunch of messages to people I often don’t really know that well is hard, and it made me miserable. Now I just send out the thank-you card with the impersonal message, and most people are like “Oh, it was so nice of you to send me a card, I can tell you really appreciated it.” This seems like an improvement.

As for psychotherapy, I think I’m going to default to phatic in most cases when I don’t have some incredibly enlightening insight, then let my patients tell me if that’s the wrong thing to do.

Posted in Uncategorized | Tagged | 302 Comments

OT12: Openness To Threadxperience

(seen at the New York Solstice celebration. Explanation here)

This is the semimonthly open thread. Post about anything you want, ask random questions, whatever. Also:

1. Thanks to custom website and software design company Trike Apps for agreeing to host this blog. The occasional downtime when the hosting service gets annoyed at too high a traffic volume should be over for good now.

2. Comments of the month have to be the various stories of overachieving German soldiers in the last links post. Here’s Doug Muir on Beate Uhse and Chaosmage on Ernst Junger.

3. A while back I linked to Nick Land’s experimental horror short story Phyl-Undhu on the grounds that someone whose blog I read wrote a thing. At the time I hadn’t actually read it. I was recently alerted by a friend that I should, and that it contains a character named “Alex Scott” who makes exactly my argument about the Great Filter, which is cited in the appendix. I still think Land is too quick to round my “Late filter doesn’t make sense, therefore early filter” to “Late filter doesn’t make sense, therefore I am putting my head in the sand and refusing to think about it”, but I am prepared to excuse this for a literary cameo.

Now more than ever, no race or gender in the open thread. There’s a race and gender open thread at Ozy’s.

Posted in Uncategorized | Tagged | 533 Comments