Slate Star Codex

Addendum to “Targeting Meritocracy”

I’ve always been dissatisfied with Targeting Meritocracy and the comments it got. My position seemed so obvious to me – and the opposite position so obvious to other people – that we both had to be missing something.

Reading it over, I think I was missing the idea of conflict vs mistake theory.

I wrote the post from a mistake theory perspective. The government exists to figure out how to solve problems. Good government officials are the ones who can figure out solutions and implement them effectively. That means we want people who are smart and competent. Since meritocracy means promoting the smartest and most competent people, it is tautologically correct. The only conceivable problem is if we make mistakes in judging intelligence and competence, which is what I spend the rest of the post worrying about.

From a conflict theory perspective, this is bunk. Good government officials are ones who serve our class interests and not their class interests. At best, merit is uncorrelated with this. At worst, we are the lower and middle class, they are the upper class, and there is some system in place (eg Ivy League universities) that disproportionately funnels the most meritorious people into the upper class. Then when we put the most meritorious people in government, we are necessarily seeding the government with upper class people who serve upper class interests.

This resolves my confusion about why people disagree with me on this point. It reinforces a lesson I’ve had to learn again and again: if people seem slightly stupid, they’re probably just stupid. But if they seem colossally and inexplicably stupid, you probably differ in some kind of basic assumption so fundamental that you didn’t realize you were assuming it, and should poke at the issue until you figure it out.


Confirmation Bias As Misfire Of Normal Bayesian Reasoning

From the subreddit: Humans Are Hardwired To Dismiss Facts That Don’t Fit Their Worldview. Once you get through the preliminary Trump supporter and anti-vaxxer denunciations, it turns out to be an attempt at an evo psych explanation of confirmation bias:

Our ancestors evolved in small groups, where cooperation and persuasion had at least as much to do with reproductive success as holding accurate factual beliefs about the world. Assimilation into one’s tribe required assimilation into the group’s ideological belief system. An instinctive bias in favor of one’s “in-group” and its worldview is deeply ingrained in human psychology.

I think the article as a whole makes good points, but I’m increasingly uncertain that confirmation bias can be separated from normal reasoning.

Suppose that one of my friends says she saw a coyote walk by her house in Berkeley. I know there are coyotes in the hills outside Berkeley, so I am not too surprised; I believe her.

Now suppose that same friend says she saw a polar bear walk by her house. I assume she is mistaken, lying, or hallucinating.

Is this confirmation bias? It sure sounds like it. When someone says something that confirms my preexisting beliefs (eg ‘coyotes live in this area, but not polar bears’), I believe it. If that same person provides the same evidence for something that challenges my preexisting beliefs, I reject it. What am I doing differently from an anti-vaxxer who rejects any information that challenges her preexisting beliefs (eg that vaccines cause autism)?

When new evidence challenges our established priors (eg a friend reports a polar bear, but I have a strong prior that there are no polar bears around), we ought to heavily discount the evidence and slightly shift our prior. So I should end up believing that my friend is probably wrong, but I should also be slightly less confident in my assertion that there are no polar bears loose in Berkeley today. This seems sufficient to explain confirmation bias, ie a tendency to stick to what we already believe and reject evidence against it.
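
To make that concrete, here is a toy Bayes calculation for the polar bear case – a sketch with made-up numbers, just to show the shape of the update:

```python
# Toy Bayes update for the polar-bear report; all numbers are invented for illustration.
prior_bear = 1e-6              # P(a polar bear is loose in Berkeley today)
p_report_if_bear = 0.9         # P(friend reports a polar bear | there really is one)
p_report_if_no_bear = 1e-3     # P(friend reports one anyway | there isn't): mistake, joke, hallucination

posterior_bear = (p_report_if_bear * prior_bear) / (
    p_report_if_bear * prior_bear + p_report_if_no_bear * (1 - prior_bear)
)
print(f"P(polar bear | report) = {posterior_bear:.6f}")
# ~0.0009: I still think my friend is almost certainly wrong, but my confidence that
# there is no polar bear loose in Berkeley today has dropped a little, exactly as above.
```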

The anti-vaxxer is still doing something wrong; she somehow managed to get a very strong prior on a false statement, and isn’t weighing the new evidence heavily enough. But I think it’s important to note that she’s attempting to carry out normal reasoning, and failing, rather than carrying out some special kind of reasoning called “confirmation bias”.

There are some important refinements to make to this model – maybe there’s a special “emotional reasoning” that locks down priors more tightly, and maybe people naturally overweight priors because that was adaptive in the ancestral environment. Maybe after you add these refinements, you end up at exactly the traditional model of confirmation bias (and the one the Fast Company article is using) and my objection becomes kind of pointless.

But not completely pointless. I still think it’s helpful to approach confirmation bias by thinking of it as a normal form of reasoning, and then asking under what conditions it fails.


Welcome (?), Infowars Readers

Hello to all the new readers I’ve gotten from, uh, Paul Joseph Watson of Infowars. Before anything else, consider reading this statement by the CDC about vaccines.

Still here? Fine.

Infowars linked here with the headline Survey Finds People Who Identify As Left Wing More Likely To Have Been Diagnosed With A Mental Illness. This is accurate only insofar as the result uses the publicly available data I provide. The claim about mental illness was made by Twitter user Philippe Lemoine and not by me. In general, if a third party analyzes SSC survey data, I would prefer that media sources reporting on their analysis attribute it to them, and not to SSC.

As far as I can tell, Lemoine’s analysis is accurate enough, but needs some clarifications:

1. Both extreme rightists and extreme leftists are more likely than moderates to have been diagnosed with most conditions.

2. Leftists might be more likely to trust the psychiatric system and get diagnosed. My survey shows some signs of that. Liberals are 60% more likely than conservatives to have formal diagnoses of depression, but only 30% more likely to have a self-diagnosis of depression.

3. Leftists might be more likely to think of their issues through a psychiatric lens than rightists, meaning that even the self-diagnosis numbers might be inflated.

4. The SSC survey is a bad sample to use for this, not just because it’s unrepresentative, but because it might be unrepresentative of different political affiliations in different ways. For example, SSC Marxists really are surprisingly depressed, but maybe the only Marxists who would read an anti-Marxist blog are depressed Marxists looking for things to be miserable and angry about (though see below for some counterevidence).

5. A commenter on Lemoine’s tweet links to this blog post by someone who found the same thing in the General Social Survey. The General Social Survey is much larger and more rigorous than my survey, and there’s no reason to care what my survey has to say when there are GSS results available.

In general, if a survey analysis is posted on this blog, it’s mine. If not, then it isn’t mine and you should link to whoever performed it and let them clean up their own mess. Thanks – and seriously, vaccines are fine.

Autogenderphilia Is Common And Not Especially Related To Transgender

“Autogynephilia” means becoming aroused by imagining yourself as a woman. “Autoandrophilia” means becoming aroused by imagining yourself as a man. There’s no term that describes both, but we need one, so let’s say autogenderphilia.

These conditions are famous mostly because a few sexologists, especially Ray Blanchard and Michael Bailey, speculate that they are the most common cause of transgender. They point to studies showing most trans women endorse autogynephilia. Most trans people disagree with this theory, sometimes very strongly, and accuse it of reducing transgender to a fetish.

Without wading into the moral issues around it, I thought it would be interesting to get data from the SSC survey. The following comes partly from my own analyses and partly from wulfrickson’s look at the public survey data on r/TheMotte.

The survey asked respondents to rate, on a scale from 1 to 5, how sexually arousing they found the idea of being a woman and the idea of being a man [exact question wording omitted here].

First of all, thanks to the 6,715 people (182 trans, 6259 cis, 274 confused) who answered these questions despite my disclaimers. Here’s how it worked out. 5 is maximally autogenderphilic, 1 is no autogenderphilia at all:

Group (n) | Autogynephilia (1 – 5) | Autoandrophilia (1 – 5)
Cis men (5592) | 2.6 | 1.9
Cis women (667) | 2.5 | 2.0
Trans men (35) | 1.9 | 2.3
Trans women (147) | 3.2 | 1.3

Group* (n)** | Autogynephilia (1 – 5) | Autoandrophilia (1 – 5)
Straight cis men (4871) | 2.6 | 1.8
Bi cis men (430) | 2.6 | 3.3
Gay cis men (197) | 1.7 | 3.4
Straight cis women (375) | 2.4 | 1.9
Bi cis women (201) | 2.8 | 2.5
Lesbian cis women (31) | 2.5 | 1.9
Straight trans men (5) | ??? | ???
Bi trans men (19) | ??? | ???
Gay trans men (3) | ??? | ???
Straight trans women (5) | ??? | ???
Bi trans women (76) | 3.1 | 1.4
Lesbian trans women (39) | 3.4 | 1.2

*sexual orientation was self-reported. Almost all transgender people report sexual orientation relative to their current gender rather than their birth gender, so for example a “lesbian trans woman” would be someone who grew up male, currently identifies as female, and is attracted to other women. This is the opposite of how Blanchard and Bailey sometimes use these terms, so be careful comparing these results to theirs!
**results are marked as ??? for groups with sample size lower than 20
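
For anyone who wants to recompute these numbers, here is a minimal sketch of the kind of group-mean calculation involved, using the public survey data (linked at the end of this post). The file name and column names are hypothetical placeholders – check the actual download for the real labels:

```python
# Sketch only: recomputing group means of the two autogenderphilia questions.
# "ssc_survey_public.csv" and the column names below are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("ssc_survey_public.csv")

groups = df.groupby(["GenderIdentity", "SexualOrientation"])
summary = groups[["Autogynephilia", "Autoandrophilia"]].mean().round(1)
counts = groups.size()

# As in the tables above, drop subgroups with fewer than 20 respondents.
summary = summary[counts >= 20]
summary["n"] = counts[counts >= 20]
print(summary)
```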

The survey confirmed Blanchard and Bailey’s finding that many lesbian trans women had strong autogynephilia. But it also confirmed other people’s findings that many cis people also have strong autogenderphilia. In this dataset, autogenderphilia rates in gay cis men were equal to those in lesbian trans women.

Autogenderphilia in cis people was divided between fantasies about being the opposite gender, and fantasies about being the gender they already were. What does it mean to fantasize about being a gender you already are? I asked a cis female friend who admitted to autogynephilia. She told me:

My literal body is arousing – it’s hot that I have breasts and can get pregnant and have a curvy figure and a feminine face and long hair, and it’s hot to dress up in femme clothes. There are certain gendered/social interactions that are very hot, or that can easily springboard into ones that are very hot. I’ve honestly wondered whether I might not be nonbinary or trans male, because I’m not really sure how euphoric being female is, besides that it’s like living in a sex fantasy.

(score one for the hypothesis that this kind of thing causes gender transition, because after reading this I kind of want to be a woman.)

Uh…moving on. The highest rates of autogenderphilia were found in bi cis men (autoandrophilia), gay cis men (autoandrophilia), bi trans women (autogynephilia), and lesbian trans women (autogynephilia).

These groups all have three things in common: they identify as the gender involved, they are attracted to the gender involved, and they are biologically male.

I would guess biological men have more of every fetish, regardless of their current gender identity, so it’s not surprising that they have more autogenderphilia also. In fact, we see that in biological women, the two highest categories are bi cis women (autogynephilia), and lesbian cis women (autogynephilia); again, they identify as the gender involved, and they are attracted to the gender involved.

So abstracting that away, the SSC survey data suggest a very boring hypothesis of autogenderphilia: if you identify as a gender, and you’re attracted to that gender, it’s a natural leap to be attracted to yourself being that gender.

The SSC survey hypothesis explains the same evidence that Blanchard and Bailey’s hypothesis explains (that lesbian trans women very often have autogynephilic fantasies), but reverses the proposed causation: it’s not that autogynephilia causes gender transition; it’s that identification as a gender is one factor that causes autogenderphilia.

But after that, it can go on to explain other things that Blanchard and Bailey can’t explain, like why cis gay men have as much autoandrophilia as trans lesbian women have autogynephilia. Or why some people with low levels of autogenderphilia transition, but many people with high levels don’t. I think it’s a simpler and more defensible explanation of the evidence.

I asked some people I know who supported Blanchard and Bailey’s theory for their thoughts. They focused on a few concerns about the data.

First, weird Internet samples plausibly have more of every paraphilia. This might inflate the rate for cis gay men and the number of trans lesbian women, assuming the latter all had to be above some cutoff; that might falsely lead me to believe the two groups have the same rate.

One counterargument might be that the responses among cis people alone are enough to generate the hypothesis discussed above. The low rate of autogynephilia in gay cis men, compared to straight and bi cis men, suggests that being attracted to a gender is a prerequisite for autogenderphilia toward it. And (adjusting again for the general tendency of male-bodied people to have more fetishes) the higher rates of autogynephilia in cis women and autoandrophilia in cis men, compared to autoandrophilia in cis women and autogynephilia in cis men, suggest that identifying as a gender is a prerequisite for autogenderphilia toward it.

Another counterargument might be the similarity of the histograms produced by cis gay male and trans lesbian female responses; they don’t look like they’re being generated by two different processes which have only coincidentally averaged out into the same summary statistic:

[Histograms omitted.]

This doesn’t look like all cis men over a certain cutoff are becoming trans women; it looks like the curves for cis gay men and trans lesbian women are being shaped by the same process.

Second, did the survey questions accurately capture autogenderphilia? Fetishes range from very mild to very extreme; some people like being slapped during sex, other people have whole BDSM dungeons in their basement. Is it possible the survey captured some boring meaning of autogenderphilia, like “sure, I guess it would be hot to be a woman”, but some people have a much stronger and more obsessive form? The histogram above argues against this a little, but there might be ceiling effects.

Alice Dreger seems to take something like this perspective here:

Q: Do you think autogynephilia might be a part of the female experience, trans or cis? I’ve seen some (very preliminary) theorizing about it as well as a paper with a tiny sample size that suggest that cis women also experience sexual arousal at the thought of themselves as women.

A: I’ve talked with Blanchard, Bailey, and also Anne Lawrence about this, and my impression is they all doubt cis (non-transgender) women experience sexual arousal at the thought of themselves as women. Clinically, Blanchard observed autogynephilic natal male individuals who were aroused, for example, at the ideas of using a tampon for menses or knitting as a woman with other women. I have never heard a natal woman express sexual arousal at such ideas. I’ve never heard of a natal woman masturbating to such thoughts.

I asked the same cis female friend who gave me the quotation above, and she described using a tampon to masturbate and finding it hot. I think Dreger makes an important point that there are some pretty unusual manifestations of autogenderphilic fetishes out there and we should hesitate before drawing too many conclusions from a single question that lumps them all together. But also, Alice Dreger seems like a really dignified and important person who probably doesn’t hang out with people who talk openly about their menstruation-related masturbation fantasies, and she should probably adjust for that. Maybe she could move to the Bay Area.

There’s a common failure mode in psychiatry, where we notice people with some condition doing some weird thing, and fail to notice that huge swathes of people without the condition do the exact same weird thing. For example, everyone knows schizophrenics hear voices, but until recently nobody realized that something like 20% of healthy people do too. Everyone knows that LSD users can end up with permanent visual hallucinations, but until recently nobody realized that lots of drug-free people have the same problem. Schizophrenics definitely hear more voices than healthy people, and LSD users have more permanent visual hallucinations, but it’s movement along the distribution rather than a completely novel phenomenon.

I think autogenderphilia is turning out to work the same way, and that this will require us to reassess the way we think about it.

As usual, I welcome people trying to replicate or expand on these results. All of the data used in this post are freely available and can be downloaded here. I’ve also heard Michael Bailey is going to release his own interpretation of these data, so stay tuned for that. I’d like to delve into these issues further on future surveys, so let me know if you have ideas about how to do that.

And a big thanks to Tailcalled for helping me set up this section of the survey. If you’re interested in these issues, you might enjoy his blog or his own analysis of these results.

Open Thread 147

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit – and also check out the SSC Podcast. Also:

1. I’m no longer soliciting updates about when links in my old posts no longer work. There are over a thousand SSC posts, and some are 5+ years old. I’m sure there are lots of links that no longer work, but keeping up with them would be a full-time job and I’m not interested (if someone else is, let me know).

2. I’ve added this to my Mistakes page, but it seems important enough that I want to signal-boost it here too: I’ve been informed of some studies suggesting Ritalin is just as likely to increase Parkinson’s disease risk as Adderall. This contradicts my previous position expressed in Adderall Risks: Much More Than You Wanted To Know that only Adderall and not Ritalin had this risk. I can no longer trace down the evidence supporting my previous position. Sorry for getting this wrong.

3. I had originally planned to end my review of Human Compatible with a push for Søren Elverlin’s AI Safety Reading Group, which meets online every Wednesday and which was discussing Human Compatible for a while. But I waited too long and didn’t publish the review until they were done with the discussion. That’s not their fault, and I still think you should check them out [see this comment for logistical info].

4. Comment of the week is Nick on Tyler Cowen’s state capacity libertarianism and the whole ensuing comment thread. And somehow I’ve lost the source comment, but also check out this article on how latitude affects binge drinking rather than alcohol consumption per se. This may mean we don’t have to bring in sexual abuse to understand Greenland’s continued high suicide rates.


Suicide Hotspots Of The World

[Content warning: suicide, rape, child abuse. Thanks to MC for some help with research.]

I.

Guyana has the highest national suicide rate in the world, 30 suicides per year per 100,000. Guyana has poverty and crime and those things, but no more so than neighboring Brazil (suicide rate of 6) or Venezuela (suicide rate of 4). What’s going on?

One place to start: Guyana is a multi-ethnic country. Is its sky-high suicide rate focused in one ethnic group? The first answer I found was this article by a social justice warrior telling us it constitutes racial “essentialism” to even ask the question. But in the process of telling us exactly what kind of claims we should avoid, she mentions someone bringing up that “80% of the reported suicides are carried out by Indo-Guyanese”. I feel like one of those classicists who has reconstructed a lost heresy through hostile quotations in Irenaeus.

Indo-Guyanese aren’t American Indians; they’re from actual India. Apparently thousands of Indians immigrated to the region as indentured laborers in the late 1800s. Most went to Guyana, and somewhat fewer went to neighboring Suriname. Suriname also has a sky-high suicide rate, but slightly lower than Guyana’s, to the exact degree that its Indian population is slightly smaller than Guyana’s. Basically no Indians went anywhere else in South America, and nowhere else in South America has anywhere near the suicide rate of these two countries. The most Indian regions of Guyana also have the highest suicide rates. Hmmm.

Does India itself have high suicide rates? On average, yes. But India has a lot of weird suicide microclimates. Statewide rates range from 38 in Sikkim (higher than any country in the world) to 0.5 in Bihar (lower than any country in the world except Barbados). Indo-Guyanese mostly come from Bihar and other low-suicide regions. While I can’t rule out that the Indo-Guyanese come from some micro-micro-climate of higher suicidality, this guy claims to have traced them back to some of their ancestral villages and found that those villages have low suicide rates.

So what’s going on? Social and Cultural Dimensions of Indian Indentured Labour and Its Diaspora argues that despite the mixed suicide rates in India itself, rates across the Indian Diaspora are universally high. For example:

The Fiji Indian suicide rate in the period 1900 to 1915 was the highest among all Indian labour importing colonies in Africa and the West Indies, and much higher than in India itself. In Mauritius too, hundreds of indentured Indian laborers committed suicide by jumping from a particular hillock during the indenture period, which acquired the name of ‘Suicide Hill’, now turned into a monument […]

In his article ‘Veil of Dishonor’ Lal (1985) describes what officials tend to point out as the primary cause of the Fiji Indian suicides: sexual jealousy arising from the persistent shortage of women on the plantations. The ratio of indentured adult Indian females to males in Fiji was only 43 to 100. The intense competition for women among the indentured men was seen as the main reason for male suicides in Fiji. Lomarsh Roopnarine (2007) also shows high rates of suicides among indentured Indians in British Guiana […]

Although there is no reason to doubt the existence of sexual jealousy, this emphasis on the scarcity of women disregards the arduous circumstances in which the indentured labourers were working, and the disruption of the “integrative institutions” of society – family, marriage, caste, kinship, and religion – as the underlying causes of suicide and other ills affecting the Indian indentured labour population.

Yeah, but arduous circumstances affected dozens of different ethnicities involved in various colonization and forced labor schemes, and most of them didn’t have these kinds of suicide rates. I can kind of imagine a story where first-generation laborers had no hope of settling down or raising families, committed suicide at high rates, and that ingrained suicidal tendencies in the culture that never went away. But then how come that didn’t happen to eg indentured Englishmen in Virginia?

The incongruously named Vijayakumar and John (2018) blame the Hindu religion. Did you know that the Ramayana ends with Rama, three of his brothers, and the entire population of his kingdom committing mass suicide by drowning? Or that the mahaparasthana is a traditional Hindu method of suicide “where the person walks in a north easterly direction, subsisting only on water and air, until his body sinks to rest”? Any religion that has a traditional direction to walk in while you’re committing suicide by starving yourself seems kind of suspicious here. But then how come Hindus in some parts of India have such low suicide rates? How come it’s just the diaspora that suffers? The paper suggests maybe it’s because religiosity has a protective effect, but it sounds kind of strained.

I don’t have better answers to any of these questions. Maybe the combination of Hindu religion, imbalanced gender ratios, and uprooted communities created a perfect storm. I don’t have any better ideas.

II.

Guyana, at 30 suicides per year per 100,000, has the highest national suicide rate in the world. But if Greenland ever wins independence, it will steal first place. Greenland’s suicide rate is 83 per year per 100,000, almost three times higher than that of any other country in the world.

As in Guyana, this is more ethnic than national. Greenland is mostly Inuit, and Inuit everywhere have equally high suicide rates. The suicide rate in the mostly-Inuit Canadian territory of Nunavut is 71 (for comparison, Canada in general is 10). The suicide rate among Alaskan Inuit is 40 (for comparison, the US in general is 14).

This definitely is not just because of the cold and darkness. White Alaskans who live right next to Alaskan natives have a rate of about 20, not much higher than the US average. And suicide in Greenland – like everywhere else – peaks in the spring and summer anyway.

Most damning of all, Greenland’s high suicide rates are a recent phenomenon. In 1971, the rate was 4. I didn’t forget a zero there. Fifty years ago, Greenland had one of the lowest suicide rates in the world. But by 1990, it had reached 120 (it’s since come down a little bit). What happened in those twenty years?

You would think limiting it to such a short time period would make things easy. It isn’t. There are two main theories: social alienation, and alcohol.

There is definitely a lot of social alienation. For centuries, the Inuit hunted seals in traditional villages. At some point the Danish government decided that was unacceptably backwards and resettled them in cities, especially the capital of Nuuk. This didn’t go well.

One counterargument to this story is that Nuuk has the lowest suicide rate in Greenland, and the more remote the village, the worse the suicide crisis. Maybe you could argue that everywhere was modernized and disrupted and alienated but at least a big city has some interesting stuff to do. This would kind of match the American experience, where it’s small towns in West Virginia that are getting hit by the opioid crisis and deaths of despair.

Another counterargument is that all Native American communities suffered a lot of displacement and alienation and modernization, but none of them suffered the same suicide spike as the Inuit. Sources disagree on the exact Native American suicide rate in the US, but it isn’t unusually high; the CDC numbers say it is slightly below the rate for non-Hispanic whites. Canadian First Nations suicide rates are elevated, but still only a third or so of Inuit levels. Maybe Inuits suffered stronger relocation pressures than other native peoples because of their Arctic environment? Or maybe every native group suffered a suicide spike, but Native Americans and Canadians have adjusted by now and their suicide rates have come back down? I’m not sure.

The other theory about Greenland is alcohol. Alcohol consumption in Greenland skyrocketed around the same time suicide did, reached levels that temporarily made Greenland by far the most alcoholic country in the world – then started declining around the same time suicide did. This seems to be a pattern when hunter-gatherers with no genetic or cultural resistance encounter alcohol for the first time – Native Americans in the 1700s got up to some crazy stuff.

But the Inuit seem to have gotten it much worse. Now we can bring back in the cold and darkness. Alcohol consumption seems to increase reliably with latitude, whether we’re talking about the US, Japan, or the whole world. [Charts omitted.]

So you take some hunter-gatherers who have never encountered alcohol before, stick them in the northernmost place in the world, and throw cheap Danish alcohol at them at the exact moment their communities are being uprooted and destroyed forever, and you get…well, you get this:

By 1980, Greenland was the most alcoholic country in the world, drinking an average of 22 liters of pure alcohol per capita per year (Russia is 15). It doesn’t look like this was responsible social drinking either. Take the most deranged binge drinking in the worst college fraternity in the world, multiply it by a thousand, and that was Greenland during much of the late 20th century.

But this can’t be the whole story. Alcohol consumption in Greenland has since dropped to the same level as Denmark and other European countries. But the suicide rate is still ten times higher. Why? Maybe the moderate quantities are hiding deeply dysfunctional drinking patterns with lots of binging and addiction.

Or maybe it’s something worse. Child sexual abuse rates in Greenland range from 37% in Nuuk to 46% in East Greenland. As far as I can tell, you are understanding those numbers correctly – almost half of children in Greenland are sexually abused. In Nunavut, the numbers are 52% of women and 22% of men suffering “severe” childhood sexual abuse. The New York Times sets a disturbingly vivid scene:

Pay days are the worst time for the children of Tasiilaq, officials say. With their salaries or social benefits in hand, many adults tend to drink and parents become too inebriated to look after their children, officials say. That’s when an already high rate of sexual abuse rises, according to a police study published last week […]

So on the last Friday of every month, officials open a sports hall in the district as a shelter to keep children away from sexual abuse.

“Children were abused by their stepfathers, cousins and by the neighbor looking after them as the parents were on a bender,” Naasunnguaq Ignatiussen Streymoy, the mother of a sexual abuse victim and an anti-abuse activist, told Weekendavisen, a newsweekly, in an article published on Friday about the crisis.

Correlation is not causation. Maybe the same dysfunction or social alienation or alcoholism that causes the sexual abuse separately causes the suicides. But maybe the obvious answer is true, and the sexual abuse contributes to the mental health problems that eventually lead to suicide. Maybe a generation of staggeringly high alcoholism led to staggeringly high child abuse, and a generation later those children are still committing suicide at staggeringly high rates.

III.

This is getting really depressing. Let’s talk about something a little bit lighter, like the remote Siberian okrugs with the highest suicide rates in the world.

The highest suicide rate I have seen credibly attributed to an ethnic group is the Chukchi of northeastern Russia, who are said to have reached 165 per year per 100,000 in 1998. They may be distantly related to the Inuit, but I wouldn’t put too much weight on this; Siberia is riddled with weird ethnicities with super-high suicide rates. The Evenks reached 121; their western neighbors the Nenets reached 119. There is a group called the Koryaks with a rate of 92, and another group called the Udmurts with a rate of 40ish – which is still higher than Guyana.

Voracek, Fisher, and Marusic try to tie some of these groups into their Finno-Ugrian Suicide Hypothesis, claiming that the genetically related Finno-Ugric peoples have a unique predisposition to suicidality. The theory has some superficial plausibility – in the 1990s, the world’s first, second, and third most suicidal countries were Finland, Hungary, and Estonia – all Finno-Ugric. Their surrounding non-Finno-Ugric neighbors, like Sweden or Austria, were unremarkable, so a genetic hypothesis made sense. Unfortunately for the theory (but fortunately for everyone else) these countries have since improved a lot, and are now barely above the world average; improved mental health care may be responsible (and the fall of Communism didn’t hurt). I’m actually a little confused about what happened here.

But the Finno-Ugric hypothesis can’t explain the Chukchi, Evenks, Nenets, Koryaks, and Udmurts. Sure, the Udmurts are Finno-Ugric. And the Nenets are closely related. But the Chukchi, Evenks, and Koryaks aren’t. It’s tempting to group all of these tiny Siberian ethnic groups together, but eg the Evenks are more closely related to the Japanese than they are to the Nenets (despite living right next to them). Any genetic hypothesis flounders on the sheer genetic diversity and unrelatedness of this region.

Psychologist David Lester tries to point the finger at these groups’ ancient culture, which he says has been especially suicidal since the time of the earliest records. He quotes an account of the 19th-century Chukchi:

Bogoras described the [Chukchi] as irritable and obstinate and, when frustrated, impulsively self-destructive. He reports the case of a young girl who hung herself when her mother refused to take her to a feast in a neighboring camp. [He] reported cases of suicide in a husband over grief at his wife’s death and of a mother after her ten-year-old son’s death; a case motivated by bad fortune, compounded by the fear of further bad fortune; a woman who no longer found any pleasure in life; a young man who was driven away by his father-in-law for being lazy who then killed his pregnant wife and himself; and a young woman whose husband wanted to lend her to a friend in a group marriage, a friend whom she disliked.

Suicidal behavior appeared to be so common that people planning to kill themselves would often ask for a last meal of exotic tastes before they did so. Some Chukchee prefer to commit suicide by having someone else kill them. The man reported above who committed suicide because of present and anticipated misfortune asked to be strangled. In another case, a man who fell ill asked his wife to shoot him. Bogoras noted that ‘voluntary death’ as he called it, suicide by getting others to kill oneself, was common for the elderly and those suffering from physical illnesses.

However, Bogoras also noted ‘peculiar’ causes of voluntary death, such as that of a man who grew weary of quarrelling with his wife over their ill-behaved sons. Part of the motive in these cases may be to induce guilt in the survivors. As one father said, ‘Then he asks to be killed, and charges the very son who offended him with the execution of his request. Let him give me the mortal blow, let him suffer from the memory of it’.

I can only aspire to one day achieve this level of passive-aggressiveness. But in the end it has the same problem as the genetic hypothesis: these groups are just totally unrelated to each other. The Chukchi are not much more suicidal than the Nenets or Evenks, who have none of these traditions. And the Inuit are up there with all of them, and they had one of the lowest suicide rates in the world pre-colonization.

I think the explanation here is the same as with Greenland: the combination of alcohol-naive hunter gatherers, alcohol, the alcohol-promoting effects of high latitudes, and a disruption of the traditional way of life. There’s apparently a Russian proverb about Siberians that goes “reindeer-herders are sober only when they don’t have the money to get drunk” – and when the Russians are appalled by your alcoholism, you know you have a problem. Alcohol was found in the blood of 75 – 80% of Nenets suicides. And if anything, the Siberians had their way of life disrupted even worse than the Inuit did – Soviet central planners tried to collectivize them as a PR move – they wanted to demonstrate that Communism could work for even the most primitive of peoples. Well, it didn’t, and here we are.

While genetics or culture may matter a little, overall I am just going to end with a blanket recommendation to avoid being part of any small circumpolar ethnic group that has just discovered alcohol.


Map Of Effective Altruism

In the spirit of my old map of the rationalist diaspora, here’s a map of effective altruism:


[Map image omitted.]

Continents are cause areas; cities are charities or organizations; mountains are individuals. Some things are clickable links with title-text explanations. Thanks to AG for helping me set up the imagemap.


Book Review: Human Compatible

I.

Clarke’s First Law goes: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Stuart Russell is only 58. But what he lacks in age, he makes up in distinction: he’s a computer science professor at Berkeley, neurosurgery professor at UCSF, DARPA advisor, and author of the leading textbook on AI. His new book Human Compatible states that superintelligent AI is possible; Clarke would recommend we listen.

I’m only half-joking: in addition to its contents, Human Compatible is important as an artifact, a crystallized proof that top scientists now think AI safety is worth writing books about. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies previously filled this role. But Superintelligence was in 2014, and by a philosophy professor. From the artifactual point of view, HC is just better – more recent, and by a more domain-relevant expert. But if you also open up the books to see what’s inside, the two defy easy comparison.

S:PDS was unabashedly a weird book. It explored various outrageous scenarios (what if the AI destroyed humanity to prevent us from turning it off? what if it put us all in cryostasis so it didn’t count as destroying us? what if it converted the entire Earth into computronium?) with no excuse beyond that, outrageous or not, they might come true. Bostrom was going out on a very shaky limb to broadcast a crazy-sounding warning about what might be the most important problem humanity has ever faced, and the book made this absolutely clear.

HC somehow makes risk from superintelligence not sound weird. I can imagine my mother reading this book, nodding along, feeling better educated at the end of it, agreeing with most of what it says (it’s by a famous professor! I’m sure he knows his stuff!) and never having a moment where she sits bolt upright and goes what? It’s just a bizarrely normal, respectable book. It’s not that it’s dry and technical – HC is much more accessible than S:PDS, with funny anecdotes from Russell’s life, cute vignettes about hypothetical robots, and the occasional dad joke. It’s not hiding any of the weird superintelligence parts. Rereading it carefully, they’re all in there – when I leaf through it for examples, I come across a quote from Moravec about how “the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria”. But somehow it all sounds normal. If aliens landed on the White House lawn tomorrow, I believe Stuart Russell could report on it in a way that had people agreeing it was an interesting story, then turning to the sports page. As such, it fulfills its artifact role with flying colors.

How does it manage this? Although it mentions the weird scenarios, it doesn’t dwell on them. Instead, it focuses on the present and the plausible near-future, and uses those to build up concepts like “AI is important” and “poorly aligned AI could be dangerous”. Then it addresses those concepts abstractly, sallying into the far future only when absolutely necessary. Russell goes over all the recent debates in AI – Facebook, algorithmic bias, self-driving cars. Then he shows how these are caused by systems doing what we tell them to do (ie optimizing for one easily-described quantity) rather than what we really want them to do (capture the full range of human values). Then he talks about how future superintelligent systems will have the same problem.

His usual go-to for a superintelligent system is Robbie the Robot, a sort of Jetsons-esque butler for his master Harriet the Human. The two of them have all sorts of interesting adventures together where Harriet asks Robbie for something and Robbie uses better or worse algorithms to interpret her request. Usually these requests are things like shopping for food or booking appointments. It all feels very Jetsons-esque. There’s no mention of the word “singleton” in the book’s index (not that I’m complaining – in the missing spot between simulated evolution of programs, 171 and slaughterbot, 111, you instead find Slate Star Codex blog, 146, 169-70). But even from this limited framework, he manages to explore some of the same extreme questions Bostrom does, and present some of the answers he’s spent the last few years coming up with.

If you’ve been paying attention, much of the book will be retreading old material. There’s a history of AI, an attempt to define intelligence, an exploration of morality from the perspective of someone trying to make AIs have it, some introductions to the idea of superintelligence and “intelligence explosions”. But I want to focus on three chapters: the debate on AI risk, the explanation of Russell’s own research program, and the section on misuse of existing AI.

II.

Chapter 6, “The Not-So-Great Debate”, is the highlight of the book-as-artifact. Russell gets on his cathedra as top AI scientist, surveys the world of other top AI scientists saying AI safety isn’t worth worrying about yet, and pronounces them super wrong:

I don’t mean to suggest that there cannot be any reasonable objections to the view that poorly designed superintelligent machines would present a serious risk to humanity. It’s just that I have yet to see such an objection.

He doesn’t pull punches here, collecting a group of what he considers the stupidest arguments into a section called “Instantly Regrettable Remarks”, with the connotation that their authors (“all of whom are well-known AI researchers”) should have been embarrassed to have been seen with such bad points. Others get their own sections, slightly less aggressively titled, but it doesn’t seem like he’s exactly oozing respect for those either. For example:

Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.

Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.

Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.

This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.

Or:

The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, Artificial Intelligence and Life in 2030, includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”

To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.

What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism — the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.

If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution. For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.

Russell displays master-level competence at the proving too much technique, neatly dispatching sophisticated arguments with a well-placed metaphor. Some expert claims it’s meaningless to say one thing is smarter than another thing, and Russell notes that for all practical purposes it’s meaningful to say humans are smarter than chimps. Some other expert says nobody can control research anyway, and Russell brings up various obvious examples of people controlling research, like the ethical agreements already in place on the use of gene editing.

I’m a big fan of Luke Muehlhauser’s definition of common sense – making sure your thoughts about hard problems make use of the good intuitions you have built for thinking about easy problems. His example was people who would correctly say “I see no evidence for the Loch Ness monster, so I don’t believe it” but then screw up and say “You can’t disprove the existence of God, so you have to believe in Him”. Just use the same kind of logic for the God question you use for every other question, and you’ll be fine! Russell does great work applying common sense to the AI debate, reminding us that if we stop trying to out-sophist ourselves into coming up with incredibly clever reasons why this thing cannot possibly happen, we will be left with the common-sense proposition that it might.

My only complaint about this section of the book – the one thing that would have added a cherry to the slightly troll-ish cake – is that it missed a chance to include a reference to On The Impossibility Of Supersized Machines.

Is Russell (or am I) going too far here? I don’t think so. Russell is arguing for a much weaker proposition than the ones Bostrom focuses on. He’s not assuming super-fast takeoffs, or nanobot swarms, or anything like that. All he’s trying to do is argue that if technology keeps advancing, then at some point AIs will become smarter than humans and maybe we should worry about this. You’ve really got to bend over backwards to find counterarguments to this, those counterarguments tend to sound like “but maybe there’s no such thing as intelligence so this claim is meaningless”, and I think Russell treats these with the contempt they deserve.

He is more understanding of – but equally good at dispatching – arguments for why the problem will really be easy. Can’t We Just Switch It Off? No; if an AI is truly malicious, it will try to hide its malice and prevent you from disabling it. Can’t We Just Put It In A Box? No, if it were smart enough it could probably find ways to affect the world anyway (this answer was good as far as it goes, but I think Russell’s threat model also allows a better one: he imagines thousands of AIs being used by pretty much everybody to do everything, from self-driving cars to curating social media, and keeping them all in boxes is no more plausible than keeping transportation or electricity in a box). Can’t We Just Merge With The Machines? Sounds hard. Russell does a good job with this section as well, and I think a hefty dose of common sense helps here too.

He concludes with a quote:

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research. The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

I couldn’t have put it better myself.

III.

If it’s important to control AI, and easy solutions like “put it in a box” aren’t going to work, what do you do?

Chapters 7 and 8, “AI: A Different Approach” and “Provably Beneficial AI”, will be the most exciting for people who read Bostrom but haven’t been paying attention since. Bostrom ends by saying we need people to start working on the control problem, and explaining why this will be very hard. Russell is reporting all of the good work his lab at UC Berkeley has been doing on the control problem in the interim – and arguing that their approach, Cooperative Inverse Reinforcement Learning, succeeds at doing some of the very hard things. If you haven’t spent long nights fretting over whether this problem was even solvable, it’s hard to convey how encouraging and inspiring it is to see people gradually chip away at it. Just believe me when I say you may want to be really grateful for the existence of Stuart Russell and people like him.

Previous stabs at this problem foundered on inevitable problems of interpretation, scope, or altered preferences. In Yudkowsky and Bostrom’s classic “paperclip maximizer” scenario, a human orders an AI to make paperclips. If the AI becomes powerful enough, it does whatever is necessary to make as many paperclips as possible – bulldozing virgin forests to create new paperclip mines, maliciously misinterpreting “paperclip” to mean uselessly tiny paperclips so it can make more of them, even attacking people who try to change its programming or deactivate it (since deactivating it would cause fewer paperclips to exist). You can try adding epicycles in, like “make as many paperclips as possible, unless it kills someone, and also don’t prevent me from turning you off”, but a big chunk of Bostrom’s S:PDS was just example after example of why that wouldn’t work.

Russell argues you can shift the AI’s goal from “follow your master’s commands” to “use your master’s commands as evidence to try to figure out what they actually want, a mysterious true goal which you can only ever estimate with some probability”. Or as he puts it:

The problem comes from confusing two distinct things: reward signals and actual rewards. In the standard approach to reinforcement learning, these are one and the same. That seems to be a mistake. Instead, they should be treated separately…reward signals provide information about the accumulation of actual reward, which is the thing to be maximized.

So suppose I wanted an AI to make paperclips for me, and I tell it “Make paperclips!” The AI already has some basic contextual knowledge about the world that it can use to figure out what I mean, and my utterance “Make paperclips!” further narrows down its guess about what I want. If it’s not sure – if most of its probability mass is on “convert this metal rod here to paperclips” but a little bit is on “take over the entire world and convert it to paperclips”, it will ask me rather than proceed, worried that if it makes the wrong choice it will actually be moving further away from its goal (satisfying my mysterious mind-state) rather than towards it.

Or: suppose the AI starts trying to convert my dog into paperclips. I shout “No, wait, not like that!” and lunge to turn it off. The AI interprets my desperate attempt to deactivate it as further evidence about its hidden goal – apparently its current course of action is moving away from my preference rather than towards it. It doesn’t know exactly which of its actions is decreasing its utility function or why, but it knows that continuing to act must be decreasing its utility somehow – I’ve given it evidence of that. So it stays still, happy to be turned off, knowing that being turned off is serving its goal (to achieve my goals, whatever they are) better than staying on.
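
To give a flavor of the idea, here is a toy sketch I put together (not Russell’s actual CIRL math; the goals, probabilities, and likelihoods are all invented). The robot keeps a probability distribution over interpretations of the command, treats my words and reactions as evidence about the hidden goal, asks when it is unsure, and treats an objection or a lunge for the off switch as strong evidence that its current plan is making things worse:

```python
# Toy illustration of "commands and reactions as evidence about a hidden goal".
# A sketch of the idea only, not Russell's CIRL formulation; all numbers are made up.

beliefs = {
    "convert this metal rod here to paperclips": 0.70,
    "take over the entire world and convert it to paperclips": 0.30,
}

def choose_action(beliefs, act_threshold=0.95):
    """Act only if one interpretation is near-certain; otherwise ask the human."""
    goal, p = max(beliefs.items(), key=lambda kv: kv[1])
    return f"act on: {goal}" if p >= act_threshold else "ask which interpretation was meant"

def bayes_update(beliefs, likelihood):
    """Standard Bayes rule over the candidate goals."""
    unnorm = {g: beliefs[g] * likelihood[g] for g in beliefs}
    total = sum(unnorm.values())
    return {g: p / total for g, p in unnorm.items()}

print(choose_action(beliefs))  # asks, because no interpretation reaches 95% confidence

# The human answers "just the rod, please" -- far likelier if the first goal is right.
beliefs = bayes_update(beliefs, {
    "convert this metal rod here to paperclips": 0.99,
    "take over the entire world and convert it to paperclips": 0.01,
})
print(choose_action(beliefs))  # now confident enough to act on the rod

# A shout of "No, wait!" or a lunge for the off switch gets the same treatment: it is far
# likelier if the current plan is wrong, so after updating, "keep going" loses to "stop
# and allow shutdown" -- being switched off now looks like the goal-maximizing move.
```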

This also solves the wireheading problem. Suppose you have a reinforcement learner whose reward is you saying “Thank you, you successfully completed that task”. A sufficiently weak robot may have no better way of getting reward than actually performing the task for you; a stronger one will threaten you at gunpoint until you say that sentence a million times, which will provide it with much more reward much faster than taking out your trash or whatever. Russell’s shift in priorities ensures that won’t work. You can still reinforce the robot by saying “Thank you” – that will give it evidence that it succeeded at its real goal of fulfilling your mysterious preference – but the words are only a signpost to the deeper reality; making you say “thank you” again and again will no longer count as success.

All of this sounds almost trivial written out like this, but number one, everything is trivial after someone thinks about it, and number two, there turns out to be a lot of controversial math involved in making it work out (all of which I skipped over). There are also some big remaining implementation hurdles. For example, the section above describes a Bayesian process – start with a prior on what the human wants, then update. But how do you generate the prior? How complicated do you want to make things? Russell walks us through an example where a robot gets great information that a human values paperclips at 80 cents – but the real preference was valuing them at 80 cents on weekends and 12 cents on weekdays. If the robot didn’t consider that a possibility, it would never be able to get there by updating. But if it did consider every single possibility, it would never be able to learn anything beyond “this particular human values paperclips at 80 cents at 12:08 AM on January 14th when she’s standing in her bedroom.” Russell says that there is “no working example” of AIs that can solve this kind of problem, but “the general idea is encompassed within current thinking about machine learning”, which sounds half-meaningless and half-reassuring.

People with a more technical bent than I have might want to look into some deeper criticisms of CIRL, including Eliezer Yudkowsky’s article here and some discussion in the AI Alignment Newsletter.

IV.

I want to end by discussing what was probably supposed to be an irrelevant middle chapter of the book, Misuses of AI.

Russell writes:

A compassionate and jubilant use of humanity’s cosmic endowment sounds wonderful, but we also have to reckon with the rapid rate of innovation in the malfeasance sector. Ill-intentioned people are thinking up new ways to misuse AI so quickly that this chapter is likely to be outdated even before it attains printed form. Think of it not as depressing reading, however, but as a call to act before it is too late.

…and then we get a tour of all the ways AIs are going wrong today: surveillance, drones, deepfakes, algorithmic bias, job loss to automation, social media algorithms, etc.

Some of these are pretty worrying. But not all of them.

Google “deepfakes” and you will find a host of articles claiming that we are about to lose the very concept of truth itself. Brookings calls deepfakes “a threat to truth in politics” and comes up with a scenario where deepfakes “could trigger a nuclear war.” The Guardian asks “You Thought Fake News Was Bad? Deepfakes Are Where Truth Goes To Die”. And these aren’t even the alarmist ones! The Irish Times calls it an “information apocalypse” and literally titles their article “Be Afraid”; Good Times just writes “Welcome To Deepfake Hell”. Meanwhile, deepfakes have been available for a couple of years now, with no consequences worse than a few teenagers using them to make pornography, ie the expected outcome of every technology ever. Also, it’s hard to see why forging videos should be so much worse than forging images through Photoshop, forging documents through whatever document-forgers do, or forging text through lying. Brookings explains that deepfakes might cause nuclear war because someone might forge a video of the President ordering a nuclear strike and then commanders might believe it. But it’s unclear why this is so much more plausible than someone writing a memo saying “Please launch a nuclear strike, sincerely, the President” and commanders believing that. Other papers have highlighted the danger of creating a fake sex tape with a politician in order to discredit them, but you can already convincingly Photoshop an explicit photo of your least favorite politician, and everyone will just laugh at you.

Algorithmic bias has also been getting colossal unstoppable neverending near-infinite unbelievable amounts of press lately, but the most popular examples basically boil down to “it’s impossible to satisfy several conflicting definitions of ‘unbiased’ simultaneously, and algorithms do not do this impossible thing”. Humans also do not do the impossible thing. Occasionally someone is able to dig up an example which actually seems slightly worrying, but I have never seen anyone prove (or even seriously argue) that algorithms are in general more biased than humans (see also Principles For The Application Of Human Intelligence – no, seriously, see it). Overall I am not sure this deserves all the attention it gets any time someone brings up AI, tech, science, matter, energy, space, time, or the universe.
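To make the “conflicting definitions” point concrete, here is a toy calculation, with numbers I made up and no connection to any real tool or dataset. A risk score that is equally well calibrated for two groups will generally produce different false positive rates whenever the groups have different base rates, so the same tool can look fair by one definition and biased by another at the same time.

```python
# Toy numbers, invented for illustration: a risk score that means the same
# thing for both groups (calibration) still yields different false positive
# rates when the groups' base rates differ. With imperfect prediction and
# unequal base rates, the two fairness definitions can't both be satisfied.

def group_stats(n_high, n_low, p_high=0.8, p_low=0.2):
    """Everyone gets a calibrated risk score of 'high' (80% reoffend) or 'low'
    (20% reoffend); everyone scored 'high' is flagged.
    Returns (base rate of reoffending, false positive rate of the flagging)."""
    reoffenders = p_high * n_high + p_low * n_low
    non_reoffenders = (n_high + n_low) - reoffenders
    false_positives = (1 - p_high) * n_high  # flagged, but did not reoffend
    return reoffenders / (n_high + n_low), false_positives / non_reoffenders

for name, n_high, n_low in [("Group A", 600, 400), ("Group B", 200, 800)]:
    base_rate, fpr = group_stats(n_high, n_low)
    print(f"{name}: base rate {base_rate:.0%}, false positive rate {fpr:.0%}")

# Group A: base rate 56%, false positive rate 27%
# Group B: base rate 32%, false positive rate 6%
# Same perfectly calibrated score, very different false positive rates: judged
# by the second definition the score looks "biased", even though no human or
# algorithm could have satisfied both definitions here.
```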

Or: with all the discussion about how social media algorithms are radicalizing the youth, it was refreshing to read a study investigating whether this was actually true, which found that social media use did not increase support for right-wing populism, and online media use (including social media use) and right-wing populism actually seem to be negatively correlated (remember, these are correlational studies, so take them with the usual grain of salt). Recent studies of YouTube’s algorithms find they do not naturally tend to radicalize, and may deradicalize, viewers, although I’ve heard some people say this is only true of the current algorithm and the old ones (which were not included in these studies) were much worse.

Or: is automation destroying jobs? Although it seems like it should, the evidence continues to suggest that it isn’t. There are various theories for why this should be, most of which suggest it may not destroy jobs in the near future either. See my review of technological unemployment for details.

A careful reading reveals Russell appreciates most of these objections. A less careful reading does not reveal this. The general structure is “HERE IS A TERRIFYING WAY THAT AI COULD BE KILLING YOU AND YOUR FAMILY although studies do show that this is probably not literally happening in exactly this way AND YOUR LEADERS ARE POWERLESS TO STOP IT!”

I understand the impulse. This book ends up doing an amazing job of talking about AI safety without sounding weird. And part of how it accomplishes this is building on a foundation of “AI is causing problems now”. The media has already prepared the way; all Russell has to do is vaguely gesture at deepfakes and algorithmic radicalization, and everyone says “Oh yeah, that stuff!” and realizes that they already believe AI is dangerous and needs aligning. And then you can add “and future AI will be the same way but even more”, and you’re home free.

But the whole thing makes me nervous. Lots of right-wingers say “climatologists used to worry about global cooling, why should we believe them now about global warming?” They’re wrong – global cooling was never really a big thing. But in 2040, might the same people say “AI scientists used to worry about deepfakes, why should we believe them now about the Singularity?” And might they actually have a point this time? If we get a reputation as the people who fall for every panic about AI, including the ones that in retrospect turn out to be kind of silly, will we eventually cry wolf one too many times and lose our credibility before crunch time?

I think the actual answer to this question is “Haha, as if our society actually punished people for being wrong”. The next US presidential election is all set to be Socialists vs. Right-Wing Authoritarians – am I supposed to say with a straight face that the public notices when movements were wrong before and lowers their status? Have the people who said there were WMDs in Iraq lost status? The people who said sanctions on Iraq were killing thousands of children? The people who said Trump was definitely for sure colluding with Russia? The people who said global warming wasn’t real? The people who pushed growth mindset as a panacea for twenty years?

So probably this is a brilliant rhetorical strategy with no downsides. But it still gives me a visceral “ick” reaction to associate with something that might not be accurate.

And there’s a sense in which this is all obviously ridiculous. The people who think superintelligent robots will destroy humanity – these people should worry about associating with the people who believe fake videos might fool people on YouTube, because the latter group is going beyond what the evidence will support? Really? But yes. Really. It’s more likely that catastrophic runaway global warming will boil the world a hundred years from now than that it will reach 75 degrees in San Francisco tomorrow (predicted high: 59); extreme scenarios about the far future are more defensible than even weak claims about the present that are ruled out by the evidence.

There’s been some discussion in effective altruism recently about public relations. The movement has many convincing hooks (you can save a life for $3000, donating bednets is very effective, think about how you would save a drowning child) and many things its leading intellectuals are actually thinking about (how to stop existential risks, how to make people change careers, how to promote plant-based meat), and the Venn diagram between the hooks and the real topics has only partial overlap. What to do about this? It’s a hard question, and I have no strong opinion besides a deep respect for everyone on both sides of it and appreciation for the work they do trying to balance different considerations in creating a better world.

HC’s relevance to this debate is as an extraordinary example. If you try to optimize for being good at public relations and convincingness, you can be really, really good at public relations and convincingness, even when you’re trying to explain a really difficult idea to a potentially hostile audience. You can do it while still being more accurate, page for page, than a New York Times article on the same topic. There are no obvious disadvantages to doing this. It still makes me nervous.

V.

My reaction to this book is probably weird. I got interested in AI safety by hanging out with transhumanists and neophiles who like to come up with the most extreme scenario possible, and then walk it back if it turns out not to be true. Russell got interested in AI safety by hanging out with sober researchers who like to be as boring and conservative as possible, and then accept new ideas once the evidence for them proves overwhelming. At some point one hopes we meet in the middle. We’re almost there.

But maybe we’re not quite there yet. My reaction to this book has been “what an amazing talent Russell must have to build all of this up from normality”. But maybe it’s not talent. Maybe Russell is just recounting his own intellectual journey. Maybe this is what a straightforward examination of AI risk looks like if you have fewer crazy people in your intellectual pedigree than I do.

I recommend this book both for the general public and for SSC readers. The general public will learn what AI safety is. SSC readers will learn what AI safety sounds like when it’s someone other than me talking about it. Both lessons are valuable.

Posted in Uncategorized | Tagged | 311 Comments

Assortative Mating And Autism

Introduction

Assortative mating is when similar people marry and have children. Some people worry about assortative mating in Silicon Valley: highly analytical tech workers marry other highly analytical tech workers. If highly analytical tech workers have more autism risk genes than the general population, assortative mating could put their children at very high risk of autism. How concerned should this make us?

Methods / Sample Characteristics

I used the 2020 Slate Star Codex survey to investigate this question. It had 8,043 respondents, a group self-selected for interest in a highly analytical blog about topics like science and economics. The blog is associated with – and draws many of its readers from – the rationalist and effective altruist movements, both highly analytical. More than half of respondents worked in programming, engineering, math, or physics. 79% described themselves as atheist or agnostic. 65% described themselves as more interested in STEM than the humanities; only 15% said the opposite.

According to Kogan et al (2018), about 2.5% of US children are currently diagnosed with autism spectrum disorders. The difference between “autism” and “autism spectrum disorder” is complicated, shifts frequently, and is not very well-known to the public; this piece will treat them interchangeably from here on. There are no surveys of what percent of adults are diagnosed with autism; it is probably lower since most diagnoses happen during childhood and the condition was less appreciated in past decades. These numbers may be affected by parents’ education level and social class; one study shows that children in wealthy neighborhoods were up to twice as likely to get diagnosed as poorer children.

Given that respondents are likely wealthier than average, we might expect a rate of 2.5% – 5% among children, and somewhat less among adults. Instead the age-adjusted rates are noticeably higher than that, consistent with the hypothesis that this sample is more autistic than average. About 4% of the SSC survey sample had a formal diagnosis of autism, but this rose to 6% when the sample was limited to people below 30, and to 8% for those below 20. This sample is plausibly about 2-3x more autistic than the US population. Childhood social class was not found to have a significant effect on autism status in this sample.

Results

I tried to get information on how many children respondents have, but I forgot to ask important questions about age until a quarter of the way through the survey. I want to make sure I’m only catching children old enough that their autism would have been diagnosed, so the information below (except when otherwise noted) comes from the three-quarters of the sample where I have good age information. I also checked it against the whole sample and it didn’t make a difference.

Of this limited sample, 1,204 individual parents had a total of 2,459 children. 1,892 of those children were older than 3, and 1,604 were older than 5. I chose to analyze children older than 3, since autism generally becomes detectable around 2.

71 children in the 1,892-child sample had formal diagnoses of autism, for a total prevalence of 3.7%. When parents were asked to include children who were not formally diagnosed but who they thought had the condition, this increased to 99 children, or a 5.2% prevalence. Both numbers are much lower than the 8% prevalence in young people in the sample.

What about marriages where both partners were highly analytical? My proxy for this was the following survey question:

I’ll be referring to these answers as “yes”, “sort of”, and “no” from here on, and moving back to the full sample. 938 parents answered this question; 51 (5.4%) yes, 233 (24.8%) sort of, and 653 (69.4%) no. Keep in mind the effective sample is even smaller, since both partners in two-partners-read-SSC-families may have filled out the survey individually about the same set of children (though this should not have affected the “sort of” group). Here is the autism rate for each group, with 95% confidence interval in black:

There is little difference. If we combine the “yes” and “sort of” groups, the confidence interval narrows slightly, to 2.7% – 6.5%.
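For anyone who wants to check the arithmetic, here is roughly the kind of confidence-interval calculation involved, using the normal approximation to the binomial. Only the overall counts (71 diagnosed children out of 1,892) come from this post; the smaller subgroup in the second call is hypothetical, included just to show how quickly the interval widens as the group shrinks. The intervals in the figure may have been computed with a different method, so treat this as a sketch rather than a reproduction.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% confidence interval for a proportion, normal approximation."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Overall sample, from the post: 71 diagnosed children out of 1,892.
low, high = proportion_ci(71, 1892)
print(f"overall: {71 / 1892:.1%}, 95% CI {low:.1%} to {high:.1%}")
# roughly 3.8%, 95% CI 2.9% to 4.6%

# A hypothetical subgroup a quarter the size, at a similar rate, to show how
# much wider the interval gets; the per-group estimates are this imprecise.
low, high = proportion_ci(18, 473)
print(f"subgroup: {18 / 473:.1%}, 95% CI {low:.1%} to {high:.1%}")
# roughly 3.8%, 95% CI 2.1% to 5.5%
```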

I asked respondents about the severity of their children’s autism.

People who hadn’t previously reported any children with autism gave answers other than N/A for this one, which was confusing. Instead of the 71 children we had before, now it’s up to 144 children. I’m not sure what’s going on here. Of these 144 children (phantoms included), 101 had mild cases, 31 moderate, and only 12 severe. Severe autism was only present in 0.6% of the children in the sample. There was no tendency for couples where both partners were highly analytical to have children with more severe autism.

Discussion

Autism rates among respondents’ children were generally low. Although the overall rate of 3.7% was higher than the commonly-estimated US base rate of 2.5%, this is consistent with the slight elevation of autism diagnoses observed in higher social classes.

There was no sign of elevated risk when both partners were highly analytical. The sample size was too small to say for certain that no such elevation exists, but we can say with 95% confidence that the elevated risk is less than three percentage points.

This suggests that the answer to the original question – does assortative mating between highly analytical people significantly increase chance of autism in offspring – is at least a qualified “no”.

Why should this be? It could just be that regression to the mean is more important in this case than any negative effects from combining recessive genes or mixing too many risk genes together. Or maybe we should challenge the assumption that being a highly analytical programmer is necessarily on a continuum with autism. It seems like p(highly analytical|on autism spectrum) is pretty high, but p(on autism spectrum|highly analytical) might be much lower.
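As a back-of-the-envelope illustration of that asymmetry, with entirely invented numbers (none of these come from the survey):

```python
# Entirely invented numbers, just to illustrate the asymmetry between the two
# conditional probabilities; none of these values come from the survey.
p_autism = 0.025                    # assumed base rate of autism
p_analytical = 0.10                 # assumed fraction of people who are "highly analytical"
p_analytical_given_autism = 0.60    # assumed: most people on the spectrum are highly analytical

# Bayes: p(autism | analytical) = p(analytical | autism) * p(autism) / p(analytical)
p_autism_given_analytical = p_analytical_given_autism * p_autism / p_analytical
print(f"{p_autism_given_analytical:.0%}")  # 15%

# Even with p(analytical | autism) at 60%, only 15% of highly analytical people
# are on the spectrum under these assumptions, because the analytical group is
# several times larger than the autistic group.
```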

Obvious limitations of this survey include the small sample size of both-partners-highly-analytical couples, the weak operationalization of highly analytical as “member of the SSC, rationalist, and effective altruist communities”, and the inability to separate non-autistic children from children who are not yet diagnosed. Due to these limitations, this should only be viewed as providing evidence against the strongest versions of the assortative mating hypothesis, where it might increase risk by double, triple, or more. Smaller elevations of risk remain plausible and would require larger studies to assess.

I welcome people trying to replicate or expand on these results. All of the data used in this post are freely available and can be downloaded here.

Open Thread 146

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit – and also check out the SSC Podcast. Also:

1. In 2016, I made a bet with bhauth that the US median income growth under Donald Trump wouldn’t significantly outperform the trendline for the past 25 years. It did, so I lost. As his prize, bhauth asks me to signal-boost his work on a new type of battery that outperforms lithium-ion.

2. Some people have already gotten a nice head start analyzing the SSC survey; see eg wulfrickson on autogynephilia and jsmp on various things.

3. The SSC podcast is still trying to recoup its costs, so it’s started offering ads. You can get your ad read on the podcast for $100/month; they get about 1500 downloads per episode, and there are 10-ish episodes per month. Email slatestarpodcast[at]gmail[dot]com for details.

4. I need to make my inbox more manageable, so I am going to ask you not to send me emails asking for comments on your manifestos or ideas or interesting links you found. I find myself feeling annoyed if I spend time on them and guilty if I don’t, and it’s unfair to you to have to listen to me saying I will answer you and then never doing so. If you have interesting things like this you want to bring to my attention, post them on the SSC subreddit, which I read pretty often. I continue to accept other types of emails. Sorry about this.

Posted in Uncategorized | Tagged | 712 Comments