Slate Star Codex

Alcoholics Anonymous: Much More Than You Wanted To Know

I’ve worked with doctors who think Alcoholics Anonymous is so important for the treatment of alcoholism that anyone who refuses to go at least three times a week is in denial about their problem and can’t benefit from further treatment.

I’ve also worked with doctors who are so against the organization that they describe it as a “cult” and say that a physician who recommends it is no better than one who recommends crystal healing or dianetics.

I finally got so exasperated that I put on my Research Cap and started looking through the evidence base.

My conclusion, after several hours of study, is that now I understand why most people don’t do this.

The studies surrounding Alcoholics Anonymous are some of the most convoluted, hilariously screwed-up research I have ever seen. They go wrong in ways I didn’t even realize research could go wrong before. Just to give some examples:

– In several studies, subjects in the “not attending Alcoholics Anonymous” condition attended Alcoholics Anonymous more than subjects in the “attending Alcoholics Anonymous” condition.

– Almost everyone’s belief about AA’s retention rate is off by a factor of five because one person long ago misread a really confusing graph and everyone else copied them without double-checking.

– The largest study ever in the field, a $30 million effort over 8 years following thousands of patients, had no control group.

Not only are the studies poor, but the people interpreting them are heavily politicized. The entire field of addiction medicine has gotten stuck in the middle of some of the most divisive issues in our culture, like whether addiction is a biological disease or a failure of willpower, whether problems should be solved by community and peer groups or by highly trained professionals, and whether there’s a role for appealing to a higher power in any public organization. AA’s supporters see it as a scruffy grassroots organization of real people willing to get their hands dirty, who can cure addicts failed time and time again by a system of glitzy rehabs run by arrogant doctors who think their medical degrees make them better than people who have personally fought their own battles. Opponents see it as this awful cult that doesn’t provide any real treatment and just tells addicts that they’re terrible people who will never get better unless they sacrifice their identity to the collective.

As a result, the few sparks of light the research kindles are ignored, taken out of context, or misinterpreted.

The entire situation is complicated by a bigger question. We will soon find that AA usually works neither better nor worse than various other substance abuse interventions. That leaves the sort of question that all those fancy-shmancy people with control groups in their studies don’t have to worry about – does anything work at all?

I.

We can start by just taking a big survey of people in Alcoholics Anonymous and seeing how they’re doing. On the one hand, we don’t have a control group. On the other hand…well, there really is no other hand, but people keep doing it.

According to AA’s own surveys, one-third of new members drop out by the end of their first month, half by the end of their third month, and three-quarters by the end of their first year. “Drop out” means they don’t go to AA meetings anymore, which could be for any reason including (if we’re feeling optimistic) them being so completely cured they no longer feel they need it.

There is an alternate reference going around that only 5% (rather than 25%) of AA members remain after their first year. This is a mistake caused by misinterpreting a graph showing that only five percent of members in their first year were in their twelfth month of membership, which is obviously completely different. Nevertheless, a large number of AA hate sites (and large rehabs!) cite the incorrect interpretation, for example the Orange Papers and RationalWiki’s page on Alcoholics Anonymous. In fact, just to keep things short, assume RationalWiki’s AA page makes every single mistake I warn against in the rest of this article, then use that to judge them in general. On the other hand, Wikipedia gets it right and I continue to encourage everyone to use it as one of the most reliable sources of medical information available to the public (I wish I was joking).

This retention information isn’t very helpful, since people can remain in AA without successfully quitting drinking, and people may successfully quit drinking without being in AA. However, various sources suggest that, of people who stay in AA a reasonable amount of time, about half stop being alcoholic. These numbers can change wildly depending on how you define “reasonable amount of time” and “stop being alcoholic”. Here is a table, which I have cited on this blog before and will probably cite again:

Behold. Treatments that look very impressive (80% improved after six months!) turn out to be the same as or worse than the control group. And comparing control group to control group, you can find that “no treatment” can appear to give wildly different outcomes (from 20% to 80% “recovery”) depending on what population you’re looking at and how you define “recovery”.

Twenty years ago, it was extremely edgy and taboo for a reputable scientist to claim that alcoholics could recover on their own. This has given way to the current status quo, in which pretty much everyone in the field writes journal articles all the time about how alcoholics can recover on their own, but makes sure to harp upon how edgy and taboo they are for doing so. From these sorts of articles, we learn that about 80% of recovered alcoholics have gotten better without treatment, and many of them are currently able to drink moderately without immediately relapsing (something else it used to be extremely taboo to mention). Kate recently shared a good article about this: Most People With Addiction Simply Grow Out Of It: Why Is This Widely Denied?

II.

Anyway, all this stuff about not being able to compare different populations, and the possibility of spontaneous recovery, just means that we need controlled experiments. Most of these take a group of alcoholics, follow them closely, and then evaluate all of them – the AA-attending and the non-AA-attending – according to the same criteria. For example, Morgenstern et al (1997), Humphreys et al (1997), and Moos (2006). Emrick et al (1993) is a meta-analysis of a hundred seventy-three of these. All of these find that the alcoholics who end up going to AA meetings are much more likely to get better than those who don’t. So that’s good evidence the group is effective, right?

Bzzzt! No! Wrong! Selection bias!

People who want to quit drinking are more likely to go to AA than people who don’t want to quit drinking. People who want to quit drinking are more likely to actually quit drinking than those who don’t want to. This is a serious problem. Imagine if it is common wisdom that AA is the best, maybe the only, way to quit drinking. Then 100% of people who really want to quit would attend compared to 0% of people who didn’t want to quit. And suppose everyone who wants to quit succeeds, because secretly, quitting alcohol is really easy. Then 100% of AA members would quit, compared to 0% of non-members – the most striking result it is mathematically possible to have. And yet AA would not have made a smidgeon of difference.
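To make the confound concrete, here is a minimal toy simulation (my illustration, not anything from the post; every number is invented) of exactly this worst case – motivation alone determines both AA attendance and quitting, and AA itself does nothing:

```python
import random

# Worst-case selection bias: AA has zero causal effect, but only
# motivated people attend, and only motivated people quit.
random.seed(0)
population = [{"motivated": random.random() < 0.3} for _ in range(100_000)]
for person in population:
    person["attends_aa"] = person["motivated"]  # attendance tracks motivation
    person["quits"] = person["motivated"]       # quitting tracks motivation too

aa = [p for p in population if p["attends_aa"]]
no_aa = [p for p in population if not p["attends_aa"]]
print(sum(p["quits"] for p in aa) / len(aa))        # 1.0 - everyone in AA quits
print(sum(p["quits"] for p in no_aa) / len(no_aa))  # 0.0 - nobody outside AA does
```

The naive comparison shows the most striking gap mathematically possible, even though the “treatment” did nothing at all.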

But it’s worse than this, because attending AA isn’t just about wanting to quit. It’s also about having the resources to make it to AA. That is, wealthier people are more likely to hear about AA (better information networks, more likely to go to a doctor or counselor who can recommend it) and more likely to be able to attend AA (better access to transportation, more flexible job schedules). But wealthier people are also known to be better at quitting alcohol than poor people – either because the same positive personal qualities that helped them achieve success elsewhere help them in this battle as well, or just because they have fewer other stressors going on in their lives driving them to drink.

Finally, perseverance is a confounder. To go to AA, and to keep going for months and months, means you’ve got the willpower to drag yourself off the couch to do a potentially unpleasant thing. That’s probably the same willpower that helps you stay away from the bar.

And then there’s a confounder going the opposite direction. The worse your alcoholism is, the more likely you are to, as the organization itself puts it, “admit you have a problem”.

These sorts of longitudinal studies are almost useless and the field has mostly moved away from them. Nevertheless, if you look on the pro-AA sites, you will find them in droves, and all of them “prove” the organization’s effectiveness.

III.

It looks like we need randomized controlled trials. And we have them. Sort of.

Brandsma (1980) is the study beloved of the AA hate groups, since it purports to show that people in Alcoholics Anonymous not only don’t get better, but are nine times more likely to binge drink than people who don’t go into AA at all.

There are a number of problems with this conclusion. First of all, if you actually look at the study, this is one of about fifty different findings. The other findings are things like “88% of treated subjects reported a reduction in drinking, compared to 50% of the untreated control group”.

Second of all, the increased binge drinking was significant at the 6 month followup period. It was not significant at the end of treatment, the 3 month followup period, the 9 month followup period, or the 12 month followup period. Remember, taking a single followup result out of the context of the other followup results is a classic piece of Dark Side Statistics and will send you to Science Hell.

Of multiple different endpoints, Alcoholics Anonymous did better than no treatment on almost all of them. It did worse than other treatments on some of them (dropout rates, binge drinking, MMPI scale) and the same as other treatments on others (abstinent days, total abstinence).

If you are pro-AA, you can say “Brandsma study proves AA works!”. If you are anti-AA, you can say “Brandsma study proves AA works worse than other treatments!”, although in practice most of these people prefer to quote extremely selective endpoints out of context.

However, most of the patients in the Brandsma study were people convicted of alcohol-related crimes ordered to attend treatment as part of their sentence. Advocates of AA make a good point that this population might be a bad fit for AA. They may not feel any personal motivation toward treatment, which might be okay if you’re going to listen to a psychologist do therapy with you, but fatal for a self-help group. Since the whole point of AA is being in a community of like-minded individuals, if you don’t actually feel any personal connection to the project of quitting alcohol, it will just make you feel uncomfortable and out of place.

Also, uh, this just in, Brandsma didn’t use a real AA group, because the real AA groups make people be anonymous which makes it inconvenient to research stuff. He just sort of started his own non-anonymous group, let’s call it A, with no help from the rest of the fellowship, and had it do Alcoholics Anonymous-like stuff. On the other hand, many members of his control group went out into the community and…attended a real Alcoholics Anonymous, because Brandsma can’t exactly ethically tell them not to. So technically, there were more people in AA in the no-AA group than in the AA group. Without knowing more about Alcoholics Anonymous, I can’t know whether this objection is valid and whether Brandsma’s group did or didn’t capture the essence of the organization. Still, not the sort of thing you want to hear about a study.

Walsh et al (1991) is a similar study with similar confounders and similar results. Workers in an industrial plant who were in trouble for coming in drunk were randomly assigned either to an inpatient treatment program or to Alcoholics Anonymous. After a year of followup, 60% of the inpatient-treated workers had stayed sober, but only 30% of the AA-treated workers had.

The pro-AA side made three objections to this study, of which one is bad and two are good.

The bad objection was that AA is cheaper than hospitalization, so even if hospitalization is good, AA might be more efficient – after all, we can’t afford to hospitalize everyone. It’s a bad objection because the authors of the study did the math and found out that hospitalization was so much better than AA that it decreased the level of further medical treatment needed and saved the health system more money than it cost.

The first good objection: like the Brandsma study, this study uses people under coercion – in this case, workers who would lose their job if they refused. Fine.

The second good objection, and this one is really interesting: a lot of inpatient hospital rehab is AA. That is, when you go to a hospital for inpatient drug treatment, you attend AA groups every day, and when you leave, they make you keep going to the AA groups. In fact, the study says that “at the 12 month and 24 month assessments, the rates of AA affiliation and attendance in the past 6 months did not differ significantly among the groups.” Given that the hospital patients got hospital AA + regular AA, they were actually getting more AA than the AA group!

So all that this study proves is that AA + more AA + other things is better than AA. There was no “no AA” group, which makes it impossible to discuss how well AA does or doesn’t work. Frick.

Timko (2006) is the only study I can hesitantly half-endorse. This one has a sort of clever methodological trick to get around the limitation that doctors can’t ethically refuse to refer alcoholics to treatment. In this study, researchers at a Veterans’ Affairs hospital randomly assigned alcoholic patients to “referral” or “intensive referral”. In “referral”, the staff asked the patients to go to AA. In “intensive referral”, the researchers asked REALLY NICELY for the patients to go to AA, and gave them nice glossy brochures on how great AA was, and wouldn’t shut up about it, and arranged for them to meet people at their first AA meeting so they could have friends in AA, et cetera, et cetera. The hope was that more people in the “intensive referral” group would end up in AA, and that indeed happened – scratch that, I just re-read the study, and the same number of people in both groups went to AA, and the intensive group actually completed a lower number of the 12 Steps on average; have I mentioned I hate all research and this entire field is terrible? But the intensive referral people were more likely to have “had a spiritual awakening” and “have a sponsor”, so it was decided the study wasn’t a complete loss, and when it was found that the intensive referral condition had slightly less alcohol use, the authors decided to declare victory.

So, whereas before we found that AA + More AA was better than AA, and that proved AA didn’t work, in this study we find that AA + More AA was better than AA, and that proves AA does work. You know, did I say I hesitantly half-endorsed this study? Scratch that. I hate this study too.

IV.

All right, @#%^ this $@!&*. We need a real study, everything all lined up in a row, none of this garbage. Let’s just hire half the substance abuse scientists in the country, throw a gigantic wad of money at them, give them as many patients as they need, let them take as long as they want, but barricade the doors of their office and not let them out until they’ve proven something important beyond a shadow of a doubt.

This was about how the scientific community felt in 1989, when they launched Project MATCH. This eight-year, $30 million, multi-thousand-patient trial was supposed to solve everything.

The people going into Project MATCH might have been a little overconfident. Maybe more than a little overconfident. Maybe “not even Zeus could prevent this study from determining the optimal treatment for alcohol addiction” overconfident. This might have been a mistake.

The study was designed with three arms, one for each of the popular alcoholism treatments of the day. The first arm would be “twelve step facilitation”, the fancy name for Alcoholics Anonymous. The second arm would be cognitive behavioral therapy, the most bog-standard psychotherapy in the world and one which by ancient tradition must be included in any kind of study like this. The third arm would be motivational enhancement therapy, which is where your doctor tells you “Hey, have you ever considered quitting alcohol??!!” and then meets with you every so often to see how that’s going. More shall be said on this later.

There wasn’t a “no treatment” arm. This is where the overconfidence might have come in. Everyone knew alcohol treatment worked. Surely you couldn’t dispute that. We just wanted to see which treatment worked best for which people. So you would enroll a bunch of different people – rich, poor, black, white, married, single, chronic alcoholic, new alcoholic, highly motivated, unmotivated – and see which of these people did best in which therapy. The result would be an algorithm for deciding where to send each of your patients. Rich black single chronic unmotivated alcoholic? We’ve found with p < 0.00001 that the best place for someone like that is in motivational enhancement therapy. Such was the dream.

So, eight years and thirty million dollars and the careers of several prestigious researchers later, the results come in, and – yeah, everyone does exactly the same on every kind of therapy. Awkward.

“Everybody has won and all must have prizes!”. If you’re an optimist, you can say all treatments work and everyone can keep doing whatever they like best. If you’re a pessimist, you might start wondering whether anything works at all.

By my understanding this is also the confusing conclusion of Ferri, Amato & Davoli (2006), the Cochrane Collaboration’s attempt to get in on the AA action. Like all Cochrane Collaboration studies since the beginning of time, they find there is insufficient evidence to demonstrate the effectiveness of the intervention being investigated. This has been oft-quoted in the anti-AA literature. But by my reading, they had no control groups and were comparing AA to different types of treatment:

Three studies compared AA combined with other interventions against other treatments and found few differences in the amount of drinks and percentage of drinking days. Severity of addiction and drinking consequence did not seem to be differentially influenced by TSF versus comparison treatment interventions, and no conclusive differences in treatment drop out rates were reported.

So the two best sources we have – Project MATCH and Cochrane – don’t find any significant differences between AA and other types of therapy. Now, to be fair, the inpatient treatment mentioned in Walsh et al wasn’t included, and inpatient treatment might be the gold standard here. But sticking to various forms of outpatient intervention, they all seem to be about the same.

So, the $64,000 question: do all of them work, or do all of them fail?

V.

Alcoholism studies avoid control groups like they are on fire, presumably because it’s unethical not to give alcoholics treatment or something. However, there is one class of studies that doesn’t have that problem. These are the ones on “brief opportunistic intervention”, which is much like “motivational enhancement therapy” in being a code word for “well, your doctor tells you ‘HELLO HAVE YOU CONSIDERED QUITTING ALCOHOL??!!’ and sees what happens”.

Brief opportunistic intervention is the most trollish medical intervention ever, because here are all these brilliant psychologists and counselors trying to unravel the deepest mysteries of the human psyche in order to convince people to stop drinking, and then someone comes along and asks “Hey, have you tried just asking them politely?”. And it works.

Not consistently. But it works for about one in eight people. And the theory is that since it only takes a minute or two of a doctor’s time, it scales a lot faster than some sort of hideously complex hospital-based program that takes thousands of dollars and dozens of hours from everyone involved. If doctors would just spend five minutes with each alcoholic patient reminding them that no, really, alcoholism is really bad, we could cut the alcoholism rate by 1/8.

(this also works for smoking, by the way. I do this with every single one of my outpatients who smoke, and most of the time they roll their eyes, because their doctor is giving them that speech, but every so often one of them tells me that yeah, I’m right, they know they really should quit smoking and they’ll give it another try. I have never saved anyone’s life by dramatically removing their appendix at the last possible moment, but I have gotten enough patients to promise me they’ll try quitting smoking that I think I’ve saved at least one life just by obsessively doing brief interventions every chance I get. This is probably the most effective life-saving thing you can do as a doctor, enough so that if you understand it you may be licensed to ignore 80,000 Hours’ arguments on doctor replaceability)

Anyway, for some reason, it’s okay to do these studies with control groups. And they are so fast and easy to study that everyone studies them all the time. A meta-analysis of 19 studies is unequivocal that they definitely work.

Why do these work? My guess is that they do two things. First, they hit people who honestly didn’t realize they had a problem, and inform them that they do. Second, the doctor usually says they’ll “follow up on how they’re doing” the next appointment. This means that a respected authority figure is suddenly monitoring their drinking and will glare at them if they say they’re still alcoholic. As someone who has gone into a panic because he has a dentist’s appointment in a week and he hasn’t been flossing enough – and then flossed until his teeth were bloody so the dentist wouldn’t be disappointed – I can sympathize with this.

But for our purposes, the brief opportunistic intervention sets a lower bound. It says “Here’s a really minimal thing that seems to work. Do other things work better than this?”

The “brief treatment” is the next step up from brief intervention. It’s an hour-or-so-long session (or sometimes a couple such sessions) with a doctor or counselor where they tell you some tips for staying off alcohol. I bring it up here because the brief treatment research community spends its time doing studies that show that brief treatments are just as good as much more intense treatments.

Chapman and Huygens (1988) find that a single interview with a health professional is just as good as six weeks of inpatient treatment (I don’t know about their hospital in New Zealand, but for reference six weeks of inpatient treatment in my hospital costs about $40,000.)

Edwards (1977) finds that in a trial comparing “conventional inpatient or outpatient treatment complete with the full panoply of services available at a leading psychiatric institution and lasting several months” versus an hour with a doc, both groups do the same at one and two year followup.

And so on.

All of this is starting to make my head hurt, but it’s a familiar sort of hurt. It’s the way my head hurts when Scott Aaronson talks about complexity classes. We have all of these different categories of things, and some of them are the same as others and others are bigger than others but we’re not sure exactly where all of them stand.

We have classes “no treatment”, “brief opportunistic intervention”, “brief treatment”, “Alcoholics Anonymous”, “psychotherapy”, and “inpatient”.

We can prove that BOI > NT, and that AA = PT. Also that BT = IP = PT. We also have that IP > AA, which unfortunately we can use to prove a contradiction, so let’s throw it out for now.
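Since we’re already borrowing Aaronson’s complexity-class framing, here is a minimal sketch (my illustration, not anything from the post) that encodes these relations and mechanically surfaces the contradiction:

```python
# Toy consistency check for the treatment-class relations above.
# Equalities merge classes via union-find; a strict inequality whose
# two sides end up in the same merged class is a contradiction.
relations_eq = [("AA", "PT"), ("BT", "IP"), ("IP", "PT")]
relations_gt = [("BOI", "NT"), ("IP", "AA")]  # IP > AA is the troublemaker

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for a, b in relations_eq:
    union(a, b)

for a, b in relations_gt:
    if find(a) == find(b):
        print(f"Contradiction: {a} > {b}, but they are in the same equality class")
# -> Contradiction: IP > AA, but they are in the same equality class
```

The three equalities merge BT, IP, AA, and PT into one class, so the strict claim IP > AA cannot coexist with them – hence throwing it out.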

So the hierarchy of classes seems to be (NT) < (BOI) ? (BT, IP, AA, PT) – in other words, no treatment is the worst, brief opportunistic intervention is better, and then somewhere in there we have this class of everything else that is the same.

Can we prove that BOI = BT?

We have some good evidence for this, once again from our Handbook. A study in Edinburgh finds that five minutes of psychiatrist advice (brief opportunistic intervention) does the same as sixty minutes of advice plus motivational interviewing (brief treatment).

So if we take all this seriously, then it looks like every psychosocial treatment (including brief opportunistic intervention) is the same, and all are better than no treatment. This is a common finding in psychiatry and psychology – for example, all common antidepressants are better than no treatment but work about equally well; all psychotherapies are better than no treatment but work about equally well, et cetera. It’s still an open question what this says about our science and our medicine.

The strongest counterexample to this is Walsh et al which finds the inpatient hospital stay works better than the AA referral, but this study looks kind of lonely compared to the evidence on the other side. And even the authors admit they were surprised by the effectiveness of the hospital there.

And let’s go back to Project MATCH. There wasn’t a control group. But there were the people who dropped out of the study, who said they’d go to AA or psychotherapy but never got around to it. Cutler and Fishbain (2005) take a look at what happened to these folks. They find that the dropouts did 75% as well as the people in any of the therapy groups, and that most of the effect of the therapy groups occurred in the first week (i.e. people who dropped out after one week did about 95% as well as people who stayed in).

To me this suggests two things. First, therapy adds only a little beyond what most people achieve quitting on their own. Second, insofar as therapy is helpful, the tiniest brush with therapy is enough to make someone think “Okay, I’ve had some therapy, I’ll be better now”. Just like with the brief opportunistic interventions, five minutes of almost anything is enough.

This is a weird conclusion, but I think it’s the one supported by the data.

VI.

I should include a brief word about this giant table.

I see it everywhere. It looks very authoritative and impressive and, of course, giant. I believe the source is Miller’s Handbook of Alcoholism Treatment Approaches: Effective Alternatives, 3rd Edition, the author of which is known as a very careful scholar whom I cannot help but respect.

And the table does a good thing in discussing medications like acamprosate and naltrexone, which are very important and effective interventions but which will not otherwise be showing up in this post.

However, the therapy part of the table looks really wrong to me.

First of all, I notice acupuncture is ranked 17 out of 48, putting in a much, much better showing than treatments like psychotherapy, counseling, or education. Seems fishy.

Second of all, I notice that motivational enhancement (#2), cognitive therapy (#13), and twelve-step (#37) are all about as far apart as could be, but the largest and most powerful trial ever, Project MATCH, found all three to be equal in effectiveness.

Third of all, I notice that cognitive therapy is at #13, but psychotherapy is at #46. But cognitive therapy is a kind of psychotherapy.

Fourth of all, I notice that brief interventions, motivational enhancement, confrontational counseling, psychotherapy, general alcoholism counseling, and education are scattered all over the table. But brief interventions (way up at the top) are basically just a brief form of counseling (second to bottom).

The table seems messed up to me. Part of it is because it is about evidence base rather than effectiveness (consider that handguns have a stronger evidence base than the atomic bomb, since they have been used many more times in much better controlled conditions, but the atomic bomb is more effective) and therefore acupuncture, which is poorly studied, can rank quite high compared to things which have even one negative study.

But part of it just seems wrong. I haven’t read the full book, but I blame the tendency to conflate studies showing “X does not work better than anything else” with “X does not work”.

Remember, whenever there are meta-analyses that contradict single very large well-run studies, go with the single very large well-run study, especially when the meta-analysis is as weird as this one. Project MATCH is the single very large well-run study, and it says this is balderdash. I’m guessing it’s trying to use some weird algorithmic methodology to automatically rate and judge each study, but that’s no substitute for careful human review.

VII.

In conclusion, as best I can tell – and it is not very well, because the studies that could really prove anything robustly haven’t been done – most alcoholics get better on their own. All treatments for alcoholism, including Alcoholics Anonymous, psychotherapy, and just a few minutes with a doctor explaining why she thinks you need to quit, increase this already-high chance of recovery a small but nonzero amount. Furthermore, they are equally effective after only a tiny dose: your first couple of meetings, your first therapy session. Some studies suggest that inpatient treatment with outpatient followup may be better than outpatient treatment alone, but other studies contradict this and I am not confident in the assumption.

So does Alcoholics Anonymous work? Though I cannot say anything authoritatively, my impression is: Yes, but only a tiny bit, and for many people five minutes with a doctor may work just as well as years completing the twelve steps. As such, individual alcoholics may want to consider attending if they don’t have easier options; doctors might be better off just talking to their patients themselves.

If this is true – and right now I don’t have much confidence that it is, it’s just a direction that weak and contradictory data are pointing – it would be really awkward for the multibazillion-dollar treatment industry.

More worrying, I am afraid of what it would do to the War On Drugs. Right now one of the rallying cries for the anti-Drug-War movement is “treatment, not prison”. And although I haven’t looked seriously at the data for any drug besides alcohol, I think some of the data there are similar. There’s very good medication for drugs – for example methadone and suboxone for opiate abuse – but in terms of psychotherapy it’s mostly the same stuff you get for alcohol. Rehabs, whether they work or not, seem to serve an important sort of ritual function, where if you can send a drug abuser to a rehab you at least feel like something has been done. Deny people that ritual, and it might make prison the only politically acceptable option.

In terms of things to actually treat alcoholism, I remain enamoured of the Sinclair Method, which has done crazy outrageous stuff like conduct an experiment with an actual control group. But I haven’t investigated enough to know whether my early excitement about it looks likely to pan out or not.

I would not recommend quitting any form of alcohol treatment that works for you, or refusing to try a form of treatment your doctor recommends, based on any of this information.

Book Review: A Future For Socialism

A boot, stamping on a human face – forever!

No! Wait! Sorry! Wrong future for socialism! This is John Roemer’s A Future for Socialism, a book on how to build a kinder, gentler socialist economy. In my review of Red Plenty, I complained about the book’s lack of gritty economic planning details, and Gilbert commented:

The least unimpressive modern detail-level explanation of how socialism could work is [A Future For Socialism]. I might regret recommending this book, because Scott is the kind of person to fall for it.

With a recommendation like that, how could I not?

A Future For Socialism makes – and I believe proves – a bold thesis. It argues that a socialist economy is entirely compatible with prosperity, innovation, and consumer satisfaction – just as long as by “socialism”, you mean “capitalism”.

The book makes proposals, but you’re not exactly hearing the Internationale playing in the background as you read them. Prices are obviously the best form of allocating goods, so a socialist economy should keep them. Central planning could never work, so a socialist economy doesn’t need it. Bosses and managers seem to be doing a good job keeping their firms profitable, so they can all keep their jobs under socialism. Everyone has different skills, so clearly in a truly socialist system they deserve different wages, in fact whatever wage the market will bear.

So where’s the socialism? Well, socialism is a system where the people own the means of production. Right now corporations control the means of production, and you own corporations by holding stock in them. So if everybody owns stock in all corporations, then if you squint that’s kind of socialist.

Roemer proposes the following: first, you nationalize large industries – or, if you’re a post-Communist country (Roemer was writing in 1992) you start with your large industries already nationalized. Then you split them into stocks. Then you give everyone an equal amount of these stocks. When the corporations make money, they pay it out in the form of stock dividends, which go to the people/stockholders. So every year I get a check in the mail representing my one-three-hundred-millionth-part share of all the profits made by all the corporations in the United States.

Question: won’t poor people immediately sell their shares to rich people, resulting in the rich people becoming wealthy means-of-production-owning capitalists again? (compare question 4.1 here). Answer: yes, obviously. So Roemer proposes a law that stocks cannot be sold for money, only coupons and other stocks. Every citizen is given an equal number of coupons at birth, trades them for stocks later on, and then trades those stocks for other stocks. This allows smart citizens to invest wisely, and allows a sort of “stock market” that sends the correct signals (this business’s stock price is decreasing so maybe they’re doing something wrong) but doesn’t allow stock accumulation by wealthy capitalists.

In this system, businesses would raise funds not by selling stock but by seeking loans from banks. Apparently this is already how it works in Japan, where companies are arranged into “business groups” called keiretsu, each with a bank in charge of lending them money. Roemer hopes his model would work even better than the keiretsu, because the flow of stock coupons would give the banks market-driven information that helps them make funding decisions.

So when I read about this I got really excited, because it sounded like a Basic Income Guarantee without the awkward questions about how to fund it. If everyone owns an equal number of shares in a diverse portfolio of the nation’s companies, then corporate profits go to everyone in the form of a check in the mail. Sounds like a good way to help the poor.

Unfortunately, Roemer calculates in the back of the book how much money he expects people to get from such a scheme. Using equations I can’t exactly follow, he finds that every citizen would get about $500, which is about 5% of the 1992 poverty line. Using slightly different assumptions weighted in his favor, he is able to increase this to $1000 per person. It looks like we will not be solving poverty today.

On the other hand, I recently learned that corporate profits have been rising dramatically lately. If you do a calculation much simpler than Roemer’s – in fact, pure division of the $2 trillion in national profits by the 300 million Americans who could receive them – you get about $6,000 per person. That’s still not enough to reach the poverty line, but it’s something, especially if you’re willing to tax the wealthy’s share to funnel it to the poor.

(on the other hand, maybe fewer than all corporations will be nationalized? I dunno here.)

But Roemer doesn’t even mention this except as an aside, and doesn’t think it’s the most interesting thing about his system. What he’s really interested in is finding a way to control what by analogy to public goods he calls “public bads”. These are all the things that coordination problems form around, like pollution and global warming and selling weapons to dictatorial regimes and so on.

He makes the following fascinating claim: poor decision-making in the current system is driven by an imbalance of costs and benefits caused by the concentration of capital. For example, suppose that using lots of fossil fuel will produce $1 trillion in good economic activity, but also $10 trillion in costs due to climate change. The Koch brothers own lots of capital (in the form of stock, ownership rights of companies, et cetera) so much of that $1 trillion in economic benefits takes the form of increased corporate profits that go directly into their pockets. However, they only suffer the same share of global warming anyone else in the US suffers – presumably 1/300 millionth of the national cost. Therefore, since they get disproportionately large benefits but only proportionate costs, they have strong incentive to try to push fossil fuels. They are rich and powerful and usually get what they want, so probably fossil fuels will continue to be used.

But imagine that we socialized stock. Now everyone in the US gets 1/300 millionth of the national profits from good economic activity, and 1/300 millionth of the national costs of global warming. Since we already said the costs are greater than the benefits, every individual wants to fight global warming. People’s incentives finally match reality.
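To put made-up numbers on that asymmetry (my illustration; only the $1 trillion benefit and $10 trillion cost figures come from the paragraph above – the 1% ownership share is invented):

```python
# Toy numbers for the incentive asymmetry. Only the $1T benefit and
# $10T cost come from the post; the 1% ownership share is an assumption.
BENEFIT, COST, POPULATION = 1e12, 10e12, 300e6

# Concentrated capital: 1% of the benefits, a per-capita share of the costs.
owner_net = 0.01 * BENEFIT - COST / POPULATION
print(f"concentrated owner: net ${owner_net:,.0f}")   # net $9,999,966,667

# Socialized stock: everyone gets a per-capita share of both.
citizen_net = BENEFIT / POPULATION - COST / POPULATION
print(f"per-capita citizen: net ${citizen_net:,.0f}")  # net $-30,000
```

Concentrated ownership makes the net payoff hugely positive for the owner even though the project destroys value overall; per-capita ownership makes it negative for everyone, which is exactly the incentive realignment Roemer is claiming.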

This is a really pretty idea, but it doesn’t seem quite right to me. By my understanding, very little lobbying is done by rich capitalists personally – and I think the Koch brothers are an exception because they genuinely hold conservative principles, not because they expect the calculus to come out in their favor. See Does Class Warfare Have A Free Rider Problem? Instead, lobbying is done by businesses directly, driven by the leadership of the businesses. Exxon Mobil hires oil lobbyists, Google hires intellectual property lobbyists, Monsanto hires agriculture lobbyists.

Would enraged Monsanto stockholder/citizens launch a corporate revolt demanding the company stop hiring lobbyists to work against the American people? I don’t think so. Corporate revolts are really really hard even nowadays when most stocks are held by a few attention-paying competent rich people. Give them to millions of not-attention-paying mostly-incompetent hoi polloi, and you think they’re going to be able to coordinate something? Besides, since stocks are tradeable, it might be only a few percent of the population who own Monsanto stock in particular; everyone else traded it away for more Google and Exxon Mobil stock. Those few percent of the population would get more money from Monsanto dividends than they would lose in the inevitable revolt of the Mutant Corn People, so their incentives would still be screwed.

So the Basic Income angle isn’t really enough to be exciting, and I don’t find the public goods/game theory angle too convincing either.

There’s also a big set of questions the book leaves unanswered – how do companies get nationalized? How are new companies formed? What happens to them?

Roemer does agree that it would be hard to nationalize all companies in a large advanced nation like the United States. In particular, taking rich people’s stock away from them without compensation would be naked theft, and the government probably couldn’t afford the compensation necessary. So he suggests that something like this be tried first in the post-communist countries or some other nation that already has nationalized industries and wants to know what to do with them.

Fine. That leaves the other big question. Suppose that the US somehow nationalizes all its industries in 1992, and a few years later Page and Brin want to start Google. What happens? Does the government say “Oh, no, sorry, we already have companies, we don’t need more of them”? Are they allowed to start it small, but the government immediately seizes it once it gets past a certain size? Are they just not allowed to sell it for stock and turn it into a corporation? Or if all of those things are okay and they can build Google as normal, what happens once most of the economy is made of these new post-1992 corporations and everything is capitalist again?

Overall there’s nothing terrible about the system in A Future For Socialism. It sure beats Stalin and even Castro. It just seems like a lot of work for not necessarily very much gain.

The last chapter is the only one in which Roemer permits himself to wax rhapsodic into the optimism I normally associate with the socialist cause. He says that he hopes market socialism is just the beginning, that this system of universal stock ownership will cause people to stop promoting public bads and care about the general welfare of the country, and this will take the form of more investment in education to train the next generation of workers, and once everyone has access to good education everyone will be just about equal and able to earn just about equal wages in the free market and then all this social class nonsense will disappear. Man, people who wrote politics before we fully understood how genetics worked were so cute!

But despite my panning the economic proposals, I learned a lot from this book and am grateful to have read it.

First, I was impressed by the assumptions. Roemer starts by explaining that yes, he knows why capitalism is a good thing, it’s reasons X Y and Z, and he’s not going to challenge or ignore that. When I hear someone making a controversial claim I disagree with, my immediate instinct is to assume that person is ignorant. Roemer proves he isn’t in precisely the right way. Before you advocate socialism, you prove that you’re not just totally ignorant of capitalism; that simplifies the process of sorting out the people you can learn from from the people you can’t.

He also makes it clear that he’s not out to change human nature. He hopes human nature will eventually change (see above about education) but he also recognizes that has to track changes in society, not be the cause of them. He writes:

These proposals should take people as they actually are today, not as they might be after an egalitarian economic policy or cultural revolution has “remade” them. We must assume, as social scientists, that people are, in the short term at least, what they are: what can be changed – and slowly at that – are the institutions through which they interact.

Well put. Roemer establishes himself early on as someone who shares some of my basic assumptions (and can express them better than I can), which means even disagreement will be productive disagreement.

But second, and more important, this book is the first time I really had to think about joint-stock corporations. Like, I know what stock is in a “you buy it and then you get very excited or upset when it goes up or down” way, but I hadn’t thought of it as an important philosophical and political idea before, and Roemer really hammers home that it is.

The book identifies three big principal-agent problems in Soviet and other communist economic systems. First, managers employ workers to make their product, but workers want to slack off or line their own pockets. Second, central planners employ managers to run plants, but managers want to slack off or line their own pockets. Third, The People employ central planners to run the economy, but central planners want to slack off or line their own pockets. The Soviets solved these problems poorly. The central planners had no responsibility to anyone except other Party bureaucrats; the central planners could only punish managers who failed to meet their cooked-up metrics, leading to Goodhart’s Law gone berserk. And managers sometimes couldn’t meaningfully promote or fire workers, so workers had little incentive to do a good job.

The standard capitalist narrative is that principal-agent problems are very hard and maybe impossible on such a big scale, but this is okay, because in capitalism the people making the decisions are the ones profiting off them.

Roemer points out that’s nonsense. Most real-world capitalism isn’t the plucky entrepreneur founding a startup, it’s the giant corporation, in which a bunch of investors (who profit off of good decisions) hire a manager or CEO type person (who is supposed to make good decisions). Insofar as CEOs keep companies profitable – and it seems they do – the principal-agent problem is solved. If we want the company to be owned by Stalin instead of by investors, all we need to do is use current corporate governance structure, but give Stalin the stock, and the company will be just as profitable as ever (as long as Stalin doesn’t try to interfere).

Roemer credits this to the hostile takeover mechanism, where if a stock’s price falls too low, that means some other group can buy out all the stock and fire the manager. It’s a good point, but I can’t help wondering if another part of it is immediate, hard-to-deny feedback: that is, the existence of the stock price at all. First of all, the CEO can’t remain too deluded about her decisions; there has been many a politician who sends a country to its grave all the while hearing from a bunch of toadies that she’s making things better, but stock prices are hard to fudge. Second of all, the investors and the Board of Directors and so on have a mechanism by which they can agree upon whether the company is doing well or not, short-circuiting some of the politics that might cause them to split into factions for or against the current leadership (this is not to say there are no corporate politics, just that they are more resolvable than other kinds of politics).

The principal-agent problem is at the center of a lot of different things, so it’s really interesting to think of something as humble and unassuming as the joint-stock corporation as having in some sense solved it. I’m not sure what the wider implications of this are, but the idea of futarchy is looking better and better.

So in summary, this book’s ideas on stock distribution seem potentially okay but probably not worth nationalizing all industries over, but the real gem is the lucid explanation of the importance of corporate governance.

Link: A Future for Socialism

In The Future, Everyone Will Be Famous To Fifteen People

[Epistemic status: not very serious]
[Content note: May make you feel overly scrutinized]

Sometimes I hear people talking about how nobody notices them or cares about anything they do. And I want to say…well…

Okay. The Survey of Earned Doctorates tells us that the United States awards about a hundred classics PhDs per year. I get the impression classics is more popular in Europe, so let’s say a world total of five hundred. If the average classicist has a fifty year career, that’s 25,000 classicists at any given time. Some classicists work on Rome, so let’s say there are 10,000 classicists who focus solely on ancient Greece.

Estimates of the population of classical Greece center around a million people, but classical Greece lasted for several generations, so let’s say there were ten million classical Greeks total. That gives us a classicist-to-Greek ratio of 1:1000.

It would seem that this ratio must be decreasing: world population increases, average world education level increases, but the number of classical Greeks is fixed for all time. Can we extrapolate to a future where there is a one-to-one ratio of classicists to classical Greeks, so that each scholar can study exactly one Greek?

Problem the first – human population is starting to stabilize, and will probably reach a maximum at less than double its current level. But this is a function of our location here on Earth. Once we start colonizing space effectively, we can expect populations to balloon again. The Journal of the British Interplanetary Society estimates the carrying capacity of the solar system at forty trillion people; Nick Bostrom estimates the carrying capacity of the Virgo Supercluster at 10^23 human-like-digitized entities.

Problem the second – does the proportion of classics majors remain constant as population increases? One might expect that people living on domed cities in asteroids would have trouble being moved by the Iliad. Then again, one might expect that people living in glass-and-steel skyscrapers on a new continent ten thousand miles away from the classical world would have trouble being moved by the Iliad, and that didn’t pan out. A better objection might be that as population increases, amount of history also increases – the year 2500 may have more historians than we do, but it also has five hundred years more history. But this decreases our estimates only slightly – population grows exponentially, but amount of history grows linearly. For example, the year 2000 has three times the population of the year 1900, but – if we start history from 4000 BC – only about two percent more history. Even if we admit the common sense idea that the 20th century contains “more” historical interest than, say, the 5th century, it still certainly does not contain three times as much historical interest as all previous centuries combined.

So it seems that if human progress continues, the number of classicists will equal, then exceed the number of inhabitants of classical Greece. Exactly when this happens depends on many things, most obviously the effects of any technological singularity that might occur. But if we want to be very naive about it and project Current Rate No Singularity indefinitely, we can just extend our current rate of population doubling every fifty years and suggest that in about 2500, with a human population of five trillion spread out throughout the solar system and maybe some nearby stars, we will reach classicist:Greek parity.
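Spelling out that back-of-the-envelope projection (my arithmetic; the inputs are just the post’s own numbers and its doubling-every-fifty-years assumption, anchored to the post’s 2014 date):

```python
import math

# Inputs from the post: 10,000 classicists focused on ancient Greece
# today, ten million classical Greeks ever, population doubling every
# fifty years (the "very naive" Current Rate No Singularity assumption).
classicists_now = 10_000
greeks = 10_000_000
doubling_years = 50

doublings_needed = math.log2(greeks / classicists_now)  # ~9.97 doublings
parity_year = 2014 + doublings_needed * doubling_years
print(round(parity_year))  # 2512 -> "in about 2500"
```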

What will this look like? Barring any revolutionary advance in historical methodology, there won’t really be enough artifacts and texts to support ten million classicists, so they will be reduced to overanalyzing excruciating details of the material that exists. On the other hand, maybe there will be revolutionary advances. The most revolutionary one I could think of would be the chronoscope from The Light of Other Days, a device often talked about in sci-fi stories that can see into the past. Armed with chronoscopes, classicists could avoid concentrating on a few surviving artifacts and study ancient Greece directly. And since the scholarly community would quickly exhaust what could be learned about important figures like Pericles and Leonidas, many historians would start looking into individual middle-class or lower-class Greeks, investigating their life stories and how they tied in to the broader historical picture. A new grad student might do her dissertation on the life of Nikias the random olive farmer who lived twenty miles outside Athens. Since there would be historian:subject parity, it might be that most or all ancient Greeks could be investigated in that level of detail.

What happens after 2500? If the assumptions mentioned above continue to hold, we pass parity and end up with more classicists than Greeks. By 3000 there are a thousand classicists for each ancient. Now you wish you could do your dissertation on the life of Nikias The Random Olive Farmer. But that low-hanging fruit (low hanging olive?) has been taken. Now there is an entire field (olive orchard?) of Nikias The Random Olive Farmer Studies, with its own little internal academic politics and yearly conferences on Alpha Centauri. In large symposia held at high-class hotels, various professors discuss details of Nikias The Random Olive Farmer’s psychology, personal relationships, opinions, and how he fits in to the major trends in Greek society that were going on at the time. Feminist scholars object that the field of Nikias The Random Olive Farmer’s Wife Studies is less well-funded than Nikias The Random Olive Farmer Studies, and dozens of angry papers are published in the relevant journals about it. Several leading figures object that too little effort is being made to communicate the findings of Nikias The Random Olive Farmer Studies to the general public, and there are half-hearted attempts to make little comic books about Nikias’ life or something.

By 3150 this has gotten so out of hand that it is wasting useful resources that should be allocated to fending off the K’th’rangan invasion. The Galactic Emperor declares a cap on the number of classics scholars at some reasonable number like a hundred million. There are protests in every major university, and leading public figures accuse the Galactic Emperor of being anti-intellectual, but eventually the new law takes hold and the grumbling dies down.

The field of Early 21st Century Studies, on the other hand, is still going strong. There are almost a thousand times as many moderns as Greeks, so we have a more reasonable ratio of about fifteen historians per modern, give or take, with the most interesting moderns having more and the ones who died young having fewer. Even better, the 21st Century Studies researchers don’t have to waste valuable chronoscopes that could be used for spying on the K’th’rangans. They can just hunt through the Internet Archive for more confusing, poorly organized data about the people of the early 21st century than they could ever want.

Gradually the data will start to make more and more sense. Imagine how excited the relevant portion of the scholarly community will be when it is discovered through diligent sleuthing that Thor41338 on the Gamer’s Guild forum is the same person as Hunter Glenderson from Charleston, South Carolina, and two seemingly different pieces of the early 21st century milieu slide neatly into place.

A few more population doublings, and the field of Hunter Glenderson From Charleston Studies is as big as the field of Nikias The Random Olive Farmer Studies ever was. The Galactic Emperor is starting to take notice, but the K’th’rangans are in retreat and for now there are resources to spare. There are no more great discoveries about new pseudonyms to be made, but there are still occasional paradigm shifts in analysis of the great events of Glenderson’s life. Someone tries a Freudian analysis of his life; another a Marxist analysis; a third writes about how his relationship with his ex-girlfriend from college ties in to the Daoist conception of impermanence. All these people have grad students trawling old Twitter accounts for them, undergraduates anxious to hear about their professor’s latest research, and hateblogs by brash amateurs claiming that the establishment totally misunderstands Hunter Glenderson.

Late at night, one grad student is preparing a paper on one of Glenderson’s teenaged Twitter rants, and comes across his tweet: “Nobody notices me. Nobody cares about anything I do.” She makes special note of it, since she thinks the irony factor might make it worth a publication in one of the several Hunter-Glenderson-themed journals.

More Links For October 2014

Bad Conlanging Ideas Tumblr, or best conlanging ideas Tumblr?

No, Aristotle is not your dumb straw man opponent of empiricism.

One thing you have to learn in every freshman biology course, and the better sort of freshman philosophy course, is that evolution doesn’t necessarily go “from worse organisms to better organisms” or even “from less complex organisms to more complex organisms” in any meaningful fashion. On the other hand, organisms from more “evolutionarily deep” areas are more likely to invade less “evolutionarily deep” areas than vice versa. So maybe there’s something to the idea of evolutionary “progress” after all, albeit probably not in the way a lot of people would think.

Last links post I made fun of Russia’s wood shortage by saying it was like Saudi Arabia having a sand shortage. Alyssa Vance helpfully informed me that Saudi Arabia did, in fact, have a sand shortage.

Andrea Rossi’s e-Cat cold fusion machine passes another round of probably rigged tests, including one where it was able to change isotope ratios in a way that would have been very impressive had it not been almost certainly rigged. Less Wrong Facebook is taking bets on replications – but they’re less “think it will” vs. “think it won’t” and more “1% chance it will” vs. “0.0001% chance it will.” Rational Conspiracy sums up some of the discussion, but note that Robin says (privately, on Facebook) that this is seriously misrepresenting him, so the post should be taken only as a survey of the issues involved and not as accurate about Robin’s personal position. Also, I ask about some of the patent issues it raises on Tumblr.

There’s now a claim that along with everything else gut microbes can contribute to the pathogenesis of eating disorders. Haven’t investigated to see if it’s true yet because I know very little about these conditions and am hoping Kate Donovan will do the hard work.

Speaking of aetiology of mental disorders, here’s the best Grand Unified Theory Of Autism I’ve seen this week: autism stems from a prediction deficit. Also glad the importance of prediction to the brain’s architecture is getting some much-deserved media attention.

Second best Grand Unified Theory Of Autism I’ve seen this week: Neural stem cell overgrowth, autism-like behavior linked, mice study suggests.

In the slightly more reality-based fusion community, Lockheed Martin announces that they expect to have a truck-sized fusion reactor ready in ten years, making the joke that “fusion is always twenty years off” somewhat obsolete. The polywell people have also mentioned the ten-year figure. But more sober scientists are doubtful.

My old Biodeterminist’s Guide said it was “likely” that exercise increased fetal IQ but didn’t have a good study to point to. Now the evidence is in to confirm that hypothesis.

Yxoque is starting rationalist-tutor.tumblr.com to try to teach some basic rationalist concepts to people who for some inexplicable reason possibly involving their head being screwed on backwards don’t want to read the Sequences. If you’re on Tumblr you may want to follow.

The libertarian talking points on California’s water shortage. Seems legit.

Songs From A Decemberists Album Where Nobody Gets Murdered, including “The Boy Who Joined A Guild And Worked Hard,” “Let’s Not Strangle The Dauphin,” and “Life As A Chimney Sweep Is Difficult But I’d Certainly Never Start Murdering Sex Workers Who Remind Me Of My Mother Just To Relieve The Stress.”

Ezra Klein’s been getting a lot of flak over a recent article, and I’m not usually someone to defend what THE ENTIRE WORLD seems to be denouncing as extreme over-the-top feminism – but in this case it looks like he’s taken a reasonable position on what unfortunately happens to be a taboo tradeoff.

Weird fluctuation in x-rays from the sun may be first direct detection of dark matter axions.

Want to live somewhere cheap? Try New York or San Francisco. Really? Yes, really.

A new paper finds that immigration neither increases unemployment nor increases growth. (h/t Marginal Revolution)

Of all the crappy things the Saudis do to women, one I didn’t know before was that women who have finished their jail term have to be picked up by a male relative. No male relative who wants to pick you up, no release from jail, ever. Seriously, screw Saudi Arabia. I hope the entire country collapses of chronic sand shortage.

The world’s first fully vegetarian city bans animal slaughter and the sale of meat within city limits. Only one problem – it doesn’t look like they asked the residents, who are kind of miffed.

Iran, when challenged on homosexual rights, famously declared that it had no gays. But did you know that when asked to host the Paralympics, the Soviet Union declined because it had no disabled people?

In keeping with our tradition of ending with a link to an interesting, funny, or surprising textbook: I kind of actually want to read this.

Five Case Studies On Politicization

[Trigger warning: Some discussion of rape in Part III. This will make much more sense if you've previously read I Can Tolerate Anything Except The Outgroup]

I.

One day I woke up and they had politicized Ebola.

I don’t just mean the usual crop of articles like Republicans Are Responsible For The Ebola Crisis and Democrats Try To Deflect Blame For Ebola Outbreak and Incredibly Awful Democrats Try To Blame Ebola On GOP and NPR Reporter Exposes Right Wing Ebola Hype and Republicans Flip-Flop On Ebola Czars. That level of politicization was pretty much what I expected.

(I can’t say I totally expected to see an article called Fat Lesbians Got All The Ebola Dollars, But Blame The GOP, but in retrospect nothing I know about modern society suggested I wouldn’t)

I’m talking about something weirder. Over the past few days, my friends on Facebook have been making impassioned posts about how it’s obvious there should/shouldn’t be a quarantine, but deluded people on the other side are muddying the issue. The issue has risen to an alarmingly high level of 0.05 #Gamergates, which is my current unit of how much people on social media are concerned about a topic. What’s more, everyone supporting the quarantine has been on the right, and everyone opposing on the left. Weird that so many people suddenly develop strong feelings about a complicated epidemiological issue, which can be exactly predicted by their feelings about everything else.

On the Right, there is condemnation of the CDC’s opposition to quarantines as globalist gibberish, fourteen questions that will never be asked about Ebola centering on why there aren’t more quarantine measures in place, and arguments on right-leaning biology blogs for why the people opposing quarantines are dishonest or incompetent. Top Republicans call for travel bans and a presenter on Fox, proportionate as always, demands quarantine centers in every US city.

On the Left (and token libertarian) sides, the New Yorker has been publishing articles on how involuntary quarantines violate civil liberties and “embody class and racial biases”, Reason makes fun of “dumb Republican calls for a travel ban”, Vox has a clickbaity article on how “This One Paragraph Perfectly Sums Up America’s Overreaction To Ebola”, and MSNBC notes that to talk about travel bans is “borderline racism”.

How did this happen? How did both major political tribes decide, within a month of the virus becoming widely known in the States, not only exactly what their position should be but what insults they should call the other tribe for not agreeing with their position? There are a lot of complicated and well-funded programs to disseminate information about the symptoms of Ebola in West Africa, and all I can think of right now is that if the Africans could disseminate useful medical information half as quickly as Americans seem to have disseminated tribal-affiliation-related information, the epidemic would be over tomorrow.

Is it just random? A couple of Republicans were coincidentally the first people to support a quarantine, so other Republicans felt they had to stand by them, and then Democrats felt they had to oppose it, and then that spread to wider and wider circles? And if by chance a Democrat had proposed quarantine before a Republican, would the situation have reversed itself? Could be.

Much more interesting is the theory that the fear of disease is the root of all conservatism. I am not making this up. There has been a lot of really good evolutionary psychology done on the extent to which pathogen stress influences political opinions. Some of this is done on the societal level, and finds that societies with higher germ loads are more authoritarian and conservative. This research can be followed arbitrarily far – like, isn’t it interesting that the most liberal societies in the world are the Scandinavian countries in the very far north, where disease burden is low, and the most traditionalist-authoritarian ones are usually in Africa or other places where disease burden is high? One even sees a similar effect within countries, with northern US states being very liberal and southern states being very conservative. Other studies have instead focused on differences between individuals within a society – we know that religious conservatives are people with stronger disgust reactions, and that priming disgust reactions can increase self-reported conservative political beliefs – with most people agreeing that disgust reactions are a measure of the “behavioral immune system” triggered by fear of germ contamination.

(free tip for liberal political activists – offering to tidy up voting booths before the election is probably a thousand times more effective than anything you’re doing right now. I will leave the free tip for conservative political activists to your imagination)

If being a conservative means you’re pre-selected for worry about disease, obviously the conservatives are going to be the ones most worried about Ebola. And in fact, along with the quarantine debate, there’s a little sub-debate about whether Ebola is worth panicking about. Vox declares Americans to be “overreacting” and keeps telling them to calm down, whereas its similarly-named evil twin Vox Day has been spending the last week or so spreading panic and suggesting readers “wash your hands, stock up a bit, and avoid any unnecessary travel”.

So that’s the second theory.

The third theory is that everything in politics is mutually reinforcing.

Suppose the Red Tribe has a Grand Narrative. The Narrative is something like “We Americans are right-thinking folks with a perfectly nice culture. But there are also scary foreigners who hate our freedom and wish us ill. Unfortunately, there are also traitors in our ranks – in the form of the Blue Tribe – who in order to signal sophistication support foreigners over Americans and want to undermine our culture. They do this by supporting immigration, accusing anyone who is too pro-American and insufficiently pro-foreigner of “racism”, and demanding everyone conform to “multiculturalism” and “diversity”, as well as lionizing any group within America that tries to subvert the values of the dominant culture. Our goal is to minimize the subversive power of the Blue Tribe at home, then maintain isolation from foreigners abroad, enforced by a strong military if they refuse to stay isolated.”

And the Blue Tribe also has a Grand Narrative. The Narrative is something like “The world is made up of a bunch of different groups and cultures. The wealthier and more privileged groups, played by the Red Tribe, have a history of trying to oppress and harass all the other groups. This oppression is based on ignorance, bigotry, xenophobia, denial of science, and a false facade of patriotism. Our goal is to call out the Red Tribe on its many flaws, and support other groups like foreigners and minorities in their quest for justice and equality, probably in a way that involves lots of NGOs and activists.”

The proposition “a quarantine is the best way to deal with Ebola” seems to fit much better into the Red narrative than the Blue narrative. It’s about foreigners being scary and dangerous, and a strong coordinated response being necessary to protect right-thinking Americans from them. When people like NBC and the New Yorker accuse quarantine supporters of being “racist”, that just makes the pieces fit in all the better.

The proposition “a quarantine is a bad way to deal with Ebola” seems to fit much better into the Blue narrative than the Red. It’s about extremely poor black foreigners dying, and white Americans rushing to throw them overboard to protect themselves out of ignorance of the science (which says Ebola can’t spread much in the First World), bigotry, xenophobia, and fear. The real solution is a coordinated response by lots of government agencies working in tandem with NGOs and local activists.

It would be really hard to switch these two positions around. If the Republicans were to oppose a quarantine, it might raise the general question of whether closing the borders and being scared of foreign threats is always a good idea, and whether maybe sometimes accusations of racism are making a good point. Far “better” to maintain a consistent position where all your beliefs reinforce all of your other beliefs.

There’s a question of causal structure here. Do Republicans believe certain other things for their own sake, and then adapt their beliefs about Ebola to help buttress their other beliefs? Or do the same factors that made them adopt their narrative in the first place lead them to adopt a similar narrative around Ebola?

My guess is it’s a little of both. And then once there’s a critical mass of anti-quarantiners within a party, in-group cohesion and identification effects cascade towards it being a badge of party membership and everybody having to believe it. And if the Democrats are on the other side, saying things you disagree with about every other issue, and also saying that you have to oppose quarantine or else you’re a bad person, then that also incentivizes you to support a quarantine, just to piss them off.

II.

Sometimes politicization isn’t about what side you take; it’s about what issues you emphasize.

In the last post, I wrote:

Imagine hearing that a liberal talk show host and comedian was so enraged by the actions of ISIS that he’d recorded and posted a video in which he shouts at them for ten minutes, cursing the “fanatical terrorists” and calling them “utter savages” with “savage values”.

If I heard that, I’d be kind of surprised. It doesn’t fit my model of what liberal talk show hosts do.

But the story I’m actually referring to is liberal talk show host / comedian Russell Brand making that same rant against Fox News for supporting war against the Islamic State, adding at the end that “Fox is worse than ISIS”.

That fits my model perfectly. You wouldn’t celebrate Osama’s death, only Thatcher’s. And you wouldn’t call ISIS savages, only Fox News. Fox is the outgroup, ISIS is just some random people off in a desert. You hate the outgroup, you don’t hate random desert people.

I would go further. Not only does Brand not feel much like hating ISIS, he has a strong incentive not to. That incentive is: the Red Tribe is known to hate ISIS loudly and conspicuously. Hating ISIS would signal Red Tribe membership, would be the equivalent of going into Crips territory with a big Bloods gang sign tattooed on your shoulder.

Now I think I missed an important part of the picture. The existence of ISIS plays right into Red Tribe narratives. They are totally scary foreigners who hate our freedom and want to hurt us and probably require a strong military response, so their existence sounds like a point in favor of the Red Tribe. Thus, the Red Tribe wants to talk about them as much as possible and condemn them in the strongest terms they can.

There’s not really any way to spin this issue in favor of the Blue Tribe narrative. The Blue Tribe just has to grudgingly admit that maybe this is one of the few cases where their narrative breaks down. So their incentive is to try to minimize ISIS, to admit it exists and is bad and try to distract the conversation to other issues that support their chosen narrative more. That’s why you’ll never see the Blue Tribe gleefully cheering someone on as they call ISIS “savages”. It wouldn’t fit the script.

But did you hear about that time when a Muslim-American lambasting Islamophobia totally pwned all of those ignorant FOX anchors? Le-GEN-dary!

III.

At worst this choice to emphasize different issues descends into an unhappy combination of tragedy and farce.

The Rotherham scandal was an incident in an English town where criminal gangs had been grooming and blackmailing thousands of young girls, then using them as sex slaves. This had been going on for at least ten years with minimal intervention by the police. An investigation was duly launched, which discovered that the police had been keeping quiet about the problem because the gangs were mostly Pakistani and the victims mostly white, and the police didn’t want to seem racist by cracking down too heavily. Researchers and officials who demanded that the abuse should be publicized or fought more vigorously were ordered to attend “diversity training” to learn why their demands were offensive. The police department couldn’t keep it under wraps forever, and eventually it broke and was a huge scandal.

The Left then proceeded to totally ignore it, and the Right proceeded to never shut up about it for like an entire month, and every article about it had to include the “diversity training” aspect, so that if you type “rotherham d…” into Google, your first two options are “Rotherham Daily Mail” and “Rotherham diversity training”.

I don’t find this surprising at all. The Rotherham incident ties in perfectly to the Red Tribe narrative – scary foreigners trying to hurt us, politically correct traitors trying to prevent us from noticing. It doesn’t do anything for the Blue Tribe narrative, and indeed actively contradicts it at some points. So the Red Tribe wants to trumpet it to the world, and the Blue Tribe wants to stay quiet and distract.

HBD Chick usually writes very well-thought-out articles on race and genetics listing all the excellent reasons you should not marry your cousins. Hers is not a political blog, and I have never seen her get upset about any political issue before, but since most of her posts are about race and genetics she gets a lot of love from the Right and a lot of flak from the Left. She recently broke her silence on politics to write three long and very angry blog posts on the Rotherham issue, of which I will excerpt one:

if you’ve EVER called somebody a racist just because they said something politically incorrect, then you’d better bloody well read this report, because THIS IS ON YOU! this is YOUR doing! this is where your scare tactics have gotten us: over 1400 vulnerable kids systematically abused because YOU feel uncomfortable when anybody brings up some “hate facts.”

this is YOUR fault, politically correct people — and i don’t care if you’re on the left or the right. YOU enabled this abuse thanks to the climate of fear you’ve created. thousands of abused girls — some of them maybe dead — on YOUR head.

I have no doubt that her outrage is genuine. But I do have to wonder why she is outraged about this and not all of the other outrageous things in the world. And I do have to wonder whether the perfect fit between her own problems – trying to blog about race and genetics but getting flak from politically correct people – and the problems that made Rotherham so disastrous – which include police getting flak from politically correct people – is part of her sudden conversion to political activism.

[edit: she objects to this characterization]

But I will also give her this – accidentally stumbling into being upset by the rape of thousands of children is, as far as accidental stumbles go, not a bad one. What’s everyone else’s excuse?

John Durant did an interesting analysis of media coverage of the Rotherham scandal versus the “someone posted nude pictures of Jennifer Lawrence” scandal.

He found left-leaning news website Slate had one story on the Rotherham child exploitation scandal, but four stories on nude Jennifer Lawrence.

He also found that feminist website Jezebel had only one story on the Rotherham child exploitation scandal, but six stories on nude Jennifer Lawrence.

Feministing gave Rotherham a one-sentence mention in a links roundup (just underneath “five hundred years of female portrait painting in three minutes”), but Jennifer Lawrence got two full stories.

The article didn’t talk about social media, and I couldn’t search social media directly for Jennifer Lawrence stories because it was too hard to sort out discussion of the scandal from discussion of her as an actress. But using my current unit of social media saturation, Rotherham clocks in at 0.24 #Gamergates.

You thought I was joking. I never joke.

This doesn’t surprise me much. Yes, you would think that the systematic rape of thousands of women with police taking no action might be a feminist issue. Or that it might outrage some people on Tumblr, a site which has many flaws but which has never been accused of being slow to outrage. But the goal here isn’t to push some kind of Platonic ideal of what’s important, it’s to support a certain narrative that ties into the Blue Tribe narrative. Rotherham does the opposite of that. The Jennifer Lawrence nudes, which center around how hackers (read: creepy internet nerds) shared nude pictures of a beloved celebrity on Reddit (read: creepy internet nerds) and 4Chan (read: creepy internet nerds) – and #Gamergate which does the same – are exactly the narrative they want to push, so they become the Stories Of The Century.

IV.

Here’s something I did find on Tumblr which I think is really interesting: poll results, broken down by race, on whether Americans believe blacks and whites are treated equally by the criminal justice system, taken before and after the Ferguson shooting.

You can see that after the Ferguson shooting, the average American became a little less likely to believe that blacks were treated equally in the criminal justice system. This makes sense, since the Ferguson shooting was a much-publicized example of the criminal justice system treating a black person unfairly.

But when you break the results down by race, a different picture emerges. White people were actually a little more likely to believe the justice system was fair after the shooting. Why? I mean, if there was no change, you could chalk it up to white people believing the police’s story that the officer involved felt threatened and made a split-second bad decision that had nothing to do with race. That could explain no change just fine. But being more convinced that justice is color-blind? What could explain that?

My guess – before Ferguson, at least a few people interpreted this as an honest question about race and justice. After Ferguson, everyone mutually agreed it was about politics.

Ferguson and Rotherham were similar in that both were cases of police misconduct involving race. You would think that there might be some police misconduct community interested in stories of police misconduct, or some race community interested in stories about race, and that these people would discuss both of these two big international news items.

The Venn diagram of sources I saw covering these two stories forms two circles with no overlap. All those conservative news sites that couldn’t shut up about Rotherham? Nothing on Ferguson – unless it was to snipe at the Left for “exploiting” it to make a political point. Otherwise, they did their best to stay quiet about it. Hey! Look over there! ISIS is probably beheading someone really interesting!

The same way Rotherham obviously supports the Red Tribe’s narrative, Ferguson obviously supports the Blue Tribe’s narrative. A white person, in the police force, shooting an innocent(ish) black person, and then a racist system refusing to listen to righteous protests by brave activists.

The “see, the Left is right about everything” angle of most of the coverage made HBD Chick’s attack on political correctness look subtle. The parts about race, systemic inequality, and the police were of debatable proportionality, but what I really liked was that the Ferguson coverage started branching off into every issue any member of the Blue Tribe has ever cared about:

Gun control? Check.

The war on terror? Check.

American exceptionalism? Check.

Feminism? Check.

Abortion? Check.

Gay rights? Check.

Palestinian independence? Check.

Global warming? Check. Wait, really? Yes, really.

Anyone who thought that the question in that poll was just a simple honest question about criminal justice was very quickly disabused of that notion. It was a giant Referendum On Everything, a “do you think the Blue Tribe is right on every issue and the Red Tribe is terrible and stupid, or vice versa?” And it turns out many people who, when asked about criminal justice, will just give the obvious answer have much stronger and less predictable feelings about Giant Referenda On Everything.

In my last post, I wrote about how people feel when their in-group is threatened, even when it’s threatened with an apparently innocuous point they totally agree with:

I imagine [it] might feel like some liberal US Muslim leader, when he goes on the O’Reilly Show, and O’Reilly ambushes him and demands to know why he and other American Muslims haven’t condemned beheadings by ISIS more, demands that he criticize them right there on live TV. And you can see the wheels in the Muslim leader’s head turning, thinking something like “Okay, obviously beheadings are terrible and I hate them as much as anyone. But you don’t care even the slightest bit about the victims of beheadings. You’re just looking for a way to score points against me so you can embarrass all Muslims. And I would rather personally behead every single person in the world than give a smug bigot like you a single microgram more stupid self-satisfaction than you’ve already got.”

I think most people, when they think about it, probably believe that the US criminal justice system is biased. But when you feel under attack by people whom you suspect have dishonest intentions of twisting your words so they can use them to dehumanize your in-group, eventually you think “I would rather personally launch unjust prosecutions against every single minority in the world than give a smug out-group member like you a single microgram more stupid self-satisfaction than you’ve already got.”

V.

Wait, so you mean turning all the most important topics in our society into wedge issues that we use to insult and abuse people we don’t like, to the point where even mentioning it triggers them and makes them super defensive, might have been a bad idea??!

There’s been some really neat research into people who don’t believe in global warming. The original suspicion, at least from certain quarters, was that they were just dumb. Then someone checked and found that warming disbelievers actually had (very slightly) higher levels of scientific literacy than warming believers.

So people had to do actual studies, and to what should have been no one’s surprise, the most important factor was partisan affiliation. For example, according to Pew 64% of Democrats believe the Earth is getting warmer due to human activity, compared to 9% of Tea Party Republicans.

So assuming you want to convince Republicans to start believing in global warming before we’re all frying eggs on the sidewalk, how should you go about it? This is the excellent question asked by a study recently profiled in an NYMag article.

The study found that you could be a little more convincing to conservatives by acting on the purity/disgust axis of moral foundations theory – the one that probably gets people so worried about Ebola. A warmer climate is unnatural, in the same way that, oh, let’s say, homosexuality is unnatural. Carbon dioxide contaminating our previously pure atmosphere, in the same way premarital sex or drug use contaminates your previously pure body. It sort of worked.

Another thing that sort of worked was tying things into the Red Tribe narrative, which they did through the two sentences “Being pro-environmental allows us to protect and preserve the American way of life. It is patriotic to conserve the country’s natural resources.” I can’t imagine anyone falling for this, but I guess some people did.

This is cute, but it’s too little too late. Global warming has already gotten inextricably tied up in the Blue Tribe narrative: Global warming proves that unrestrained capitalism is destroying the planet. Global warming disproportionately affects poor countries and minorities. Global warming could have been prevented with multilateral action, but we were too dumb to participate because of stupid American cowboy diplomacy. Global warming is an important cause that activists and NGOs should be lauded for highlighting. Global warming shows that Republicans are science denialists and probably all creationists. Two lousy sentences on “patriotism” aren’t going to break through that.

If I were in charge of convincing the Red Tribe to line up behind fighting global warming, here’s what I’d say:

In the 1950s, brave American scientists shunned by the climate establishment of the day discovered that the Earth was warming as a result of greenhouse gas emissions, leading to potentially devastating natural disasters that could destroy American agriculture and flood American cities. As a result, the country mobilized against the threat. Strong government action by the Bush administration outlawed the worst of these gases, and brilliant entrepreneurs were able to discover and manufacture new cleaner energy sources. As a result of these brave decisions, our emissions stabilized and are currently declining.

Unfortunately, even as we do our part, the authoritarian governments of Russia and China continue to industrialize and militarize rapidly as part of their bid to challenge American supremacy. As a result, Communist China is now by far the world’s largest greenhouse gas producer, with the Russians close behind. Many analysts believe Putin secretly welcomes global warming as a way to gain access to frozen Siberian resources and weaken the more temperate United States at the same time. These countries blow off huge disgusting globs of toxic gas, which effortlessly cross American borders and disrupt the climate of the United States. Although we have asked them to stop several times, they refuse, perhaps egged on by major oil producers like Iran and Venezuela who have the most to gain by keeping the world dependent on the fossil fuels they produce and sell to prop up their dictatorships.

A giant poster of Mao looks approvingly at all the CO2 being produced…for Communism.

We need to take immediate action. While we cannot rule out the threat of military force, we should start by using our diplomatic muscle to push for firm action at top-level summits like the Kyoto Protocol. Second, we should fight back against the liberals who are trying to hold up this important work, from big government bureaucrats trying to regulate clean energy to celebrities accusing people who believe in global warming of being ‘racist’. Third, we need to continue working with American industries to set an example for the world by decreasing our own emissions in order to protect ourselves and our allies. Finally, we need to punish people and institutions who, instead of cleaning up their own carbon, try to parasitize off the rest of us and expect the federal government to do it for them.

Please join our brave men and women in uniform in pushing for an end to climate change now.

If this were the narrative conservatives were seeing on TV and in the papers, I think we’d have action on the climate pretty quickly. I mean, that action might be nuking China. But it would be action.

And yes, there’s a sense in which that narrative is dishonest, or at least has really weird emphases. But our current narrative also has some really weird emphases. And for much the same reasons.

VI.

The Red Tribe and Blue Tribe have different narratives, which they use to tie together everything that happens into reasons why their tribe is good and the other tribe is bad.

Sometimes this results in them seizing upon different sides of an apparently nonpolitical issue when these support their narrative; for example, Republicans generally supporting a quarantine against Ebola, Democrats generally opposing it. Other times it results in a side trying to gain publicity for stories that support their narrative while sinking their opponents’ preferred stories – Rotherham for some Reds; Ferguson for some Blues.

When an issue gets tied into a political narrative, it stops being about itself and starts being about the wider conflict between tribes until eventually it becomes viewed as a Referendum On Everything. At this point, people who are clued in start suspecting nobody cares about the issue itself – like victims of beheadings, or victims of sexual abuse – and everybody cares about the issue’s potential as a political weapon – like proving Muslims are “uncivilized”, or proving political correctness is dangerous. After that, even people who agree that the issue is a problem and who would otherwise want to take action have to stay quiet, because they know that their help would be used less to solve a problem than to push forward the war effort against them. If they feel especially threatened, they may even take an unexpected side on the issue, switching from what they would usually believe to whichever position seems less like a transparent cover for attempts to attack them and their friends.

And then you end up doing silly things like saying ISIS is not as bad as Fox News, or donating hundreds of thousands of dollars to the officer who shot Michael Brown.

This can sort of be prevented by not turning everything into a referendum on how great your tribe is and how stupid the opposing tribe is, or by trying to frame an issue in a way that respects or appeals to an out-group’s narrative.

Let me give an example. I find a lot of online feminism very triggering, because it seems to me to have nothing to do with women and be transparently about marginalizing nerdy men as creeps who are not really human (see: nude pictures vs. Rotherham, above). This means that even when I support and agree with feminists and want to help them, I am constantly trying to drag my brain out of panic mode that their seemingly valuable projects are just deep cover for attempts to hurt me (see: hypothetical Bill O’Reilly demanding Muslims condemn the “Islamic” practice of beheading people).

I have recently met some other feminists who instead use a narrative which views “nerds” as an “alternative gender performance” – i.e., in the case of men, they reject the usual masculine pursuits of sports and fraternities, and they have characteristics that violate normative beauty standards (like “no neckbeards”). Thus, people trying to attack nerds is a subcategory of “people trying to enforce gender performance”, and nerds should join with queer people, women, and other people who have an interest in promoting tolerance of alternative gender performances in order to fight for their mutual right to be left alone and accepted.

I’m not sure I entirely buy this argument, but it doesn’t trigger me, and it’s the sort of thing I could buy, and if all my friends started saying it I’d probably be roped into agreeing by social pressure alone.

But this is as rare as, well, anti-global warming arguments aimed at making Republicans feel comfortable and nonthreatened.

I blame the media, I really do. Remember, from within a system no one necessarily has an incentive to do what the system as a whole is supposed to do. Daily Kos or someone has a little label saying “supports liberal ideas”, but actually their incentive is to make liberals want to click on their pages and ads. If the quickest way to do that is by writing story after satisfying story of how dumb Republicans are, and what wonderful taste they have for being members of the Blue Tribe instead of evil mutants, then they’ll do that even if the effect on the entire system is to make Republicans hate them and by extension everything they stand for.

I don’t know how to fix this.

Five Planets In Search Of A Sci-Fi Story

Gamma Andromeda, where philosophical stoicism went too far. Its inhabitants, tired of the roller coaster ride of daily existence, decided to learn equanimity in the face of gain or misfortune, neither dreading disaster nor taking joy in success.

But that turned out to be really hard, so instead they just hacked it. Whenever something good happens, the Gammandromedans give themselves an electric shock proportional in strength to its goodness. Whenever something bad happens, the Gammandromedans take an opiate-like drug that directly stimulates the pleasure centers of their brain, in a dose proportional in strength to its badness.

As a result, every day on Gamma Andromeda is equally good compared to every other day, and its inhabitants need not be jostled about by fear or hope for the future.

This does sort of screw up their incentives to make good things happen, but luckily they’re all virtue ethicists.

Zyzzx Prime, inhabited by an alien race descended from a barnacle-like creature. Barnacles are famous for their two-stage life cycle: in the first, they are mobile and curious creatures, cleverly picking out the best spot to make their home. In the second, they root themselves to the spot and, having no further use for their brain, eat it.

This particular alien race has evolved far beyond that point and does not literally eat its brain. However, once an alien reaches sufficiently high social status, it releases a series of hormones that tell its brain, essentially, that it is now in a safe place and doesn’t have to waste so much energy on thought and creativity to get ahead. As a result, its mental acuity drops two or three standard deviations.

The Zyzzxians’ society has cycled through a series of experiments with government – monarchy, democracy, dictatorship – only to discover that, whether chosen by succession, election, or ruthless conquest, its once-brilliant leaders lose their genius immediately upon accession and do a terrible job. Their history is thus one of perpetual pointless revolutions.

At one point, a scientific effort was launched to discover the hormones responsible and whether it was possible to block them. Unfortunately, any scientist who showed promise soon lost their genius, and those promoted to be heads of research institutes became stumbling blocks who mismanaged funds and held back their less prestigious co-workers. Suggestions that the institutes eliminate tenure were vetoed by top officials, who said that “such a drastic step seems unnecessary”.

K’th’ranga V, which has been a global theocracy for thousands of years, ever since its dominant race invented agricultural civilization. This worked out pretty well for a while, until it reached an age of industrialization, globalization, and scientific discovery. Scientists began to uncover truths that contradicted the Sacred Scriptures, and the hectic pace of modern life made the shepherds-and-desert-traders setting of the holy stories look vaguely silly. Worse, the cold logic of capitalism and utilitarianism began to invade the Scriptures’ innocent Stone Age morality.

The priest-kings tried to turn back the tide of progress, but soon realized this was a losing game. Worse, in order to determine what to suppress, they themselves had to learn the dangerous information, and their mental purity was even more valuable than that of the populace at large.

So the priest-kings moved en masse to a big island, where they began living an old-timey Bronze Age lifestyle. And the world they ruled sent emissaries to the island, who interfaced with the priest-kings, and sought their guidance, and the priest-kings ruled a world they didn’t understand as best they could.

But it soon became clear that the system could not sustain itself indefinitely. For one thing, the priest-kings worried that discussion with the emissaries – who inevitably wanted to talk about strange things like budgets and interest rates and nuclear armaments – was contaminating their memetic purity. For another thing, they honestly couldn’t understand what the emissaries were talking about half the time.

Luckily, there was a whole chain of islands making an archipelago. So the priest-kings set up ten transitional societies – themselves in the Bronze Age, another in the Iron Age, another in the Classical Age, and so on to the mainland, who by this point were starting to experiment with nanotech. Mainland society brought its decisions to the first island, who translated them into their own slightly-less-advanced understanding, who brought them to the second island, and so on to the priest-kings, by which point a discussion about global warming might sound like a question of whether to propitiate the Coal Spirit. The priest-kings would send their decisions to the second-to-last island, and so on back to the mainland.

Eventually the Kth’ built an AI which achieved superintelligence and set out to conquer the universe. But it was a well-programmed superintelligence coded with Kth’ values. Whenever it wanted a high-level decision made, it would talk to a slightly less powerful superintelligence, who would talk to a slightly less powerful superintelligence, who would talk to the mainlanders, who would talk to the first island…

Chan X-3, notable for a native species that evolved as fitness-maximizers, not adaptation-executors. Their explicit goal is to maximize the number of copies of their genes. But whatever genetic program they are executing doesn’t care whether the genes are within a living being capable of expressing them or not. The planet is covered with giant vats full of frozen DNA. There was originally some worry that the species would go extinct, since having children would consume resources that could be used to hire geneticists to make millions of copies of your DNA and store them in freezers. Luckily, it was realized that children not only provide a useful way to continue the work of copying and storing (half of) your DNA long into the future, but will also work to guard your already-stored DNA against being destroyed. The species has thus continued undiminished, somehow, and their fondest hope is to colonize space and reach the frozen Kuiper Belt objects where their DNA will naturally stay undegraded for all time.

New Capricorn, which contains a previously undiscovered human colony that has achieved a research breakthrough beyond their wildest hopes. A multi-century effort paid off in a fully general cure for death. However, the drug failed to stop aging. Although the Capricornis no longer need fear the grave, after age 100 or so even the hardiest of them get Alzheimer’s or other similar conditions. A hundred years after the breakthrough, more than half of the population is elderly and demented. Two hundred years after, more than 80% are. Capricorni nursing homes quickly became overcrowded and unpleasant, to the dismay of citizens expecting to spend eternity there.

So another research program was started, and the results were fully immersive, fully life-supporting virtual reality capsules. Stacked in huge warehouses by the millions, the elderly sit in their virtual worlds, vague sunny fields and old gabled houses where it is always the Good Old Days and their grandchildren are always visiting.

Open Thread 6: Open Renewal

The research I’m doing now seems to be funging against blogging more than my usual work, so expect it to be quiet around here for a while. Here, have an open thread.

1. Comments of the month…let’s see…no obvious winner this time, but Irenist tries to expand my color-tribe set and Gingko gives a story about Italian mercenaries I would love to have a better source for. Also, Jim (not that Jim) confirms a version of my Thatcher/Osama story but also gives me an alternate hypothesis. What if celebrating the deaths of people like Margaret Thatcher or Joan Rivers is more acceptable than celebrating Osama’s precisely because people feel the celebrations of the former aren’t serious, but the celebrations of the latter are?

2. I mentioned it before, but I’ll mention it again: Raemon’s having a Kickstarter for the Secular Solstice celebration.

3. I confess that my attempts with advertising on this blog have failed miserably. AdSense was giving me fifty cents a day for 10,000 page views. Amazon has been a little better, but its much-touted AI recommendation engine believes that visitors here want to buy like three hundred different versions of Thrff Jub’f Pbzvat Gb Qvaare (rot13d so it doesn’t take this as further evidence that it’s on the right track), and nobody clicks on the affiliate banner. This annoys me, because when I can make people click on Amazon links for some other reason (like to see a funny textbook cover), they buy a bunch of things that day which I get credit for, but those same people who are reading my blog every day and buying things from Amazon every day don’t use the affiliate link. I will try to forgive y’all.

So, offer. If any of you are experienced in blog advertising, I’ll make you a deal. Figure out how to get me more money (not through obnoxious popup ads or spamming product reviews) and I’ll give you some percent of it we can negotiate.

4. I will be starting a new Less Wrong Survey soon and want to get you guys in as well (don’t worry, I’ll make sure to keep it separate so as not to contaminate results). I’m most excited about asking for digit ratio on the survey, to see if I can replicate some of the weird results that have been coming out linking it to different kinds of intelligence, political positions, et cetera.

I think we’ve had a history of getting some interesting results on the survey (for example, we found a REALLY strong oldest-child bias last time, which flies in the face of some supposed disproofs of birth order theory I’ve read) but I’m not sure how I would get them out to where they could help anyone besides bloggers. I am under the impression I can’t get any of my data published, even if I had publishable results, unless I got an IRB somewhere to approve the survey. Is this right?

Tumblr on MIRI

[Disclaimer: I have done odd jobs for MIRI once or twice several years ago, but I am not currently affiliated with them in any way and do not speak for them.]

A recent Tumblr conversation on the Machine Intelligence Research Institute has gotten interesting and I thought I’d see what people here have to say.

If you’re just joining us and don’t know about the Machine Intelligence Research Institute (“MIRI” to its friends), they’re a nonprofit organization dedicated to navigating the risks surrounding “intelligence explosion”. In this scenario, a few key insights around artificial intelligence can very quickly lead to computers so much smarter than humans that the future is almost entirely determined by their decisions. This would be especially dangerous since most AIs use very primitive goal systems, inappropriate for and untested on intelligent entities; such a goal system would be “unstable”, and from a human perspective the resulting artificial intelligence could have apparently arbitrary or insane goals. If such a superintelligence were much more powerful than we are, it would present an existential threat to the human race.

This has almost nothing to do with the classic “Skynet” scenario – but if it helps to imagine Skynet, then fine, just imagine Skynet. Everyone else does.

MIRI tries to raise awareness of this possibility among AI researchers, scientists, and the general public, and to start foundational research in more stable goal systems that might allow AIs to become intelligent or superintelligent while still acting in predictable and human-friendly ways.

This is not a 101 space and I don’t want the comments here to all be about whether or not this scenario is likely. If you really want to discuss that, go read at least Facing The Intelligence Explosion and then post your comments in the Less Wrong Open Thread or something. This is about MIRI as an organization.

(If you’re really just joining us and you don’t know about Tumblr, run away)

II.

Tumblr user su3su2u1 writes:

Saw some tumblr people talking about [effective altruism]. My biggest problem with this movement is that most everyone I know who identifies themselves as an effective altruist donates money to MIRI (it’s possible this is more a comment on the people I know than the effective altruism movement, I guess). Based on their output over the last decade, MIRI is primarily a fanfic and blog-post producing organization. That seems like spending money on personal entertainment.

Part of this is obviously a mean-spirited potshot, in that MIRI itself doesn’t produce fanfic and what its employees choose to do with their own time is none of your damn business.

(well, slightly more complicated. I think MIRI gave Eliezer a couple weeks’ vacation to work on it as an “outreach” thing once. But that’s a little different from it being their main priority.)

But more serious is the claim that MIRI doesn’t do much else of value. I challenged Su3 with the following evidence of MIRI doing good work:

A1. MIRI has been very successful with outreach and networking – basically getting their cause noticed and endorsed by the scientific establishment and popular press. They’ve gotten positive attention, sometimes even endorsements, from people like Stephen Hawking, Elon Musk, Gary Drescher, Max Tegmark, Stuart Russell, and Peter Thiel. Even Bill Gates is talking about AI risk, though I don’t think he’s mentioned MIRI by name. Multiple popular books have been written about their ideas, such as James Miller’s Singularity Rising and Stuart Armstrong’s Smarter Than Us. Most recently Nick Bostrom’s book Superintelligence, based at least in part on MIRI’s research and ideas, is a New York Times best-seller and has been reviewed positively in the Guardian, the Telegraph, Salon, the Financial Times, and the Economist. Oxford has opened up the AI-risk-focused Future of Humanity Institute; MIT has opened up the similar Future of Life Institute. In about a decade, the idea of an intelligence explosion has gone from Time Cube level crackpottery to something taken seriously by public intellectuals and widely discussed in the tech community.

A2. MIRI has many publications, conference presentations, book chapters and other things usually associated with normal academic research, which interested parties can find on their website. They have conducted seven past research workshops which have produced interesting results like Christiano et al’s claimed proof of a way around the logical undefinability of truth, which was praised as potentially interesting by respected mathematics blogger John Baez.

A3. Many former MIRI employees, and many more unofficial fans, supporters, and associates of MIRI, are widely distributed across the tech community in industries that are likely to be on the cutting edge of artificial intelligence. For example, there are a bunch of people influenced by MIRI in Google’s AI department. Shane Legg, who writes about how his early work was funded by a MIRI grant and who once called MIRI “the best hope that we have” was pivotal in convincing Google to set up an AI ethics board to monitor the risks of the company’s cutting-edge AI research. The same article mentions Peter Thiel and Jaan Tallinn as leading voices who will make Google comply with the board’s recommendations; they also happen to be MIRI supporters and the organization’s first and third largest donors.

There’s a certain level of faith required for (A1) and (A3) here, in that I’m attributing anything good that happens in the field of AI risk to some sort of shady behind-the-scenes influence from MIRI. Maybe Legg, Tallinn, and Thiel would have pushed for the exact same Google AI Ethics Board if none of them had ever heard of MIRI at all. I am forced to plead ignorance on the finer points of networking and soft influence. Heck, for all I know, maybe the exact same number of people would vote Democrat if there were no Democratic National Committee or liberal PACs. I just assume that, given a really weird idea that very few people held in 2000, an organization dedicated to spreading that idea, and the observation that the idea has indeed spread very far, the organization is probably doing something right.

III.

Our discussion on point (A3) degenerated into Dueling Anecdotal Evidence. But Su3 responded to my point (A1) like so:

[I agree that MIRI has gotten shoutouts from various thought leaders like Stephen Hawking and Elon Musk. Bostrom's book is commercially successful, but that's just] more advertising. Popular books aren’t the way to get researchers to notice you. I’ve never denied that MIRI/SIAI was good at fundraising, which is primarily what you are describing.

How many of those thought leaders have any publications in CS or pure mathematics, let alone AI? Tegmark might have a math paper or two, but he is primarily a cosmologist. The FLI’s list of scientists is (for some reason) mostly again cosmologists. The active researchers appear to be a few (non-CS, non-math) grad students. Not exactly the team you’d put together if you were actually serious about imminent AI risk.

I would also point out “successfully attracted big venture capital names” isn’t always a mark of a sound organization. Black Light Power is run by a crackpot who thinks he can make energy by burning water, and has attracted nearly 100 million in funding over the last two decades, with several big names in energy production behind him.

And to my point (A2) like so:

I have a PhD in physics and work in machine learning. I’ve read some of the technical documents on MIRI’s site, back when it was SIAI, and I was unimpressed. I also note that this critique is not unique to me; as far as I know the GiveWell position on MIRI is that it is not an effective institute.

The series of papers on Löb’s theorem is actually interesting, though I notice that none of the results have been peer reviewed, and the papers aren’t listed as being submitted to journals yet. Their result looks right to me, but I wouldn’t trust myself to catch any subtlety that might be involved.

[But that just means] one result has gotten some small positive attention, and even those results haven’t been vetted by the wider math community yet (no peer review). Let’s take a closer look at the list of publications on MIRI’s website – I count 6 peer-reviewed papers in their existence, and 13 conference presentations. That’s horribly unproductive! Most of the grad students who finish a physics PhD will publish that many papers individually, in about half that time. You claim part of their goal is to get academics to pay attention, but none of their papers are highly cited, despite all this networking they are doing.

Citations are the standard way to measure who in academia is paying attention. Apart from the FHI/MIRI echo chamber (citations bouncing around between the two organizations), no one in academia seems to be paying attention to MIRI’s output. MIRI is failing to make academic inroads, and it has produced very little in the way of actual research.

My interpretation, in the form of a TL;DR:

B1. Sure, MIRI is good at getting attention, press coverage, and interest from smart people not in the field. But that’s public relations and fundraising. An organization being good at fundraising and PR doesn’t mean it’s good at anything else, and in fact “so good at PR they can cover up not having substance” is a dangerous failure mode.

B2. What MIRI needs, but doesn’t have, is the attention and support of smart people within the fields of math, AI, and computer science, whereas now it mostly has grad students not in these fields.

B3. While having a couple of published papers might look impressive to a non-academic, people more familiar with the culture would know that their output is woefully low. They seem to have gotten about ten solid publications in during their decade-long history as a multi-person organization; one good grad student can get a couple solid publications a year. Their output is less than expected by like an order of magnitude. And although they do get citations, this is all from a mutual back-scratching club of them and Bostrom/FHI citing each other.

IV.

At this point Tarn and Robby joined the conversation and it became kind of confusing, but I’ll try to summarize our responses.

Our response to Su3’s point (B1) was that this is fundamentally misunderstanding outreach. From its inception until about last year, MIRI was in large part an outreach and awareness-raising organization. Its 2008 website describes its mission like so:

In the coming decades, humanity will likely create a powerful AI. SIAI exists to confront this urgent challenge, both the opportunity and the risk. SIAI is fostering research, education, and outreach to increase the likelihood that the promise of AI is realized for the benefit of everyone.

Outreach is one of its three main goals, and “education”, which sounds a lot like outreach, is a second.

In a small field where you’re the only game in town, it’s hard to distinguish between outreach and self-promotion. If MIRI successfully gets Stephen Hawking to say “We need to be more concerned about AI risks, as described by organizations like MIRI”, is that them being very good at self-promotion and fundraising, or is that them accomplishing their core mission of getting information about AI risks to the masses?

Once again, compare to a political organization, maybe Al Gore’s anti-global-warming nonprofit. If they get the media to talk about global warming a lot, and get lots of public intellectuals to come out against global warming, and change behavior in the relevant industries, then mission accomplished. The popularity of An Inconvenient Truth can’t just be dismissed as “self-promotion” or “fundraising” for Gore, it was exactly the sort of thing he was gathering money and personal prestige in order to do, and should be considered a victory in its own right. Even though eventually the anti-global-warming cause cares about politicians, industry leaders, and climatologists a lot more than they care about the average citizen, convincing millions of average citizens to help was a necessary first step.

And what is true of An Inconvenient Truth is true of Superintelligence and other AI risk publicity efforts, albeit on a much smaller scale.

Our response to Su3’s point (B2) was that it was just plain factually false. MIRI hasn’t reached big names from the AI/math/compsci field? Sure it has. Doesn’t have mathy PhD students willing to research for them? Sure it does.

Peter Norvig and Stuart Russell are among the biggest names in AI. Norvig is currently the Director of Research at Google; Russell is Professor of Computer Science at Berkeley and a winner of various impressive-sounding awards. The two wrote a widely used textbook on artificial intelligence in which they devote three pages to the proposition that “The success of AI might mean the end of the human race”; parts are taken right out of the MIRI playbook, and they cite MIRI research fellow Eliezer Yudkowsky’s paper on the subject. This is unlikely to be a coincidence; Russell’s site links to MIRI and he is scheduled to participate in MIRI’s next research workshop.

Their “team” of “research advisors” includes Gary Drescher (PhD in CompSci from MIT), Steve Omohundro (PhD in physics from Berkeley but also considered a pioneer of machine learning), Roman Yampolskiy (PhD in CompSci from Buffalo), and Moshe Looks (PhD in CompSci from Washington).

Su3 brought up the good point that none of these people, respected as they are, are MIRI employees or researchers (although Drescher has been to a research workshop). At best, they are people who were willing to let MIRI use them as figureheads (in the case of the research advisors); at worst, they are merely people who have acknowledged MIRI’s existence in a not-entirely-unlike-positive way (Norvig and Russell). Even if we agree they are geniuses, this does not mean that MIRI has access to geniuses or can produce genius-level research.

Fine. All these people are, no more and no less, evidence that MIRI is succeeding at outreach within the academic field of AI, as well as in the general public. It also seems to me to be some evidence that smart people who know more about AI than any of us think MIRI is on the right track.

Su3 brought up the example of BlackLight Power, a crackpot energy company that was able to get lots of popular press and venture capital funding despite being powered entirely by pseudoscience. I agree this is the sort of thing we should be worried about. Nonscientists outside of specialized fields have limited ability to evaluate such claims. But when smart researchers in the field are willing to vouch for MIRI, that gives me a lot more confidence they’re not just a fly-by-night group trying to profit off of pseudoscience. Their research might be more impressive or less impressive, but they’re not rotten to the core the same way BlackLight was.

And though MIRI’s own researchers may be far from those lofty heights, I find Su3’s claim that they are “a few non-CS, non-math grad students” a serious underestimate.

MIRI has fourteen employees/associates with the word “research” in their titles, but of those, a couple (in the words of MIRI’s team page) “focus on social and historical questions related to artificial intelligence outcomes.” These people should not be expected to have PhDs in mathematical/compsci subjects.

Of the rest, Bill is a PhD in CompSci, Patrick is a PhD in math, Nisan is a PhD in math, Benja is a PhD student in math, and Paul is a PhD student in math. The others mostly have master’s or bachelor’s degrees in those fields, have published journal articles, and/or have won prizes in mathematical competitions. Eliezer writes of some of the remaining members of his team:

Mihaly Barasz is an International Mathematical Olympiad gold medalist perfect scorer. From what I’ve seen personally, I’d guess that Paul Christiano is better than him at math. I forget what Marcello’s prodigy points were in but I think it was some sort of Computing Olympiad [editor's note: USACO finalist and 2x honorable mention in the Putnam mathematics competition]. All should have some sort of verified performance feat far in excess of the listed educational attainment.

That pretty much leaves Eliezer Yudkowsky, who needs no introduction, and Nate Soares, whose introduction exists and is pretty interesting.

Add to that the many, many PhDs and talented people who aren’t officially employed by them but attend their workshops and help out their research when they get the chance, and you have to ask how many brilliant PhDs from some of the top universities in the world we should expect a small organization like MIRI to have. MIRI competes for the same sorts of people as Google, and offers half as much. Google paid $400 million to get Shane Legg and his people on board; MIRI’s yearly budget hovers at about $1 million. Given that they probably spend a big chunk of that on office space, setting up conferences, and other incidentals, I think the amount of talent they have right now is pretty good.

That leaves Su3’s point (B3) – the lack of published research.

One retort might be that, until recently, MIRI’s research focused on strategic planning and evaluation of AI risks. This is important, and it resulted in a lot of internal technical papers you can find on their website, but there’s not really a field for it. You can’t just publish it in the Journal Of What Would Happen If There Was An Intelligence Explosion, because no such journal. The best they can do is publish the parts of their research that connect to other fields in appropriate journals, which they sometimes did.

I feel like this also frees them from the critique of citation-incest between them and Bostrom. When I look at a typical list of MIRI paper citations, I do see a lot of Bostrom, but also some other names that keep coming up – Hutter, Yampolskiy, Goertzel. So okay, it’s an incest circle of four or five rather than two.

But to some degree that’s what I expect from academia. Right now I’m doing my own research on a psychiatric screening tool called the MDQ. There are three or four research teams in three or four institutions who are really into this and publish papers on it a lot. Occasionally someone from another part of psychiatry wanders in, but usually it’s just the subsubsubspeciality of MDQ researchers talking to each other. That’s fine. They’re our repository of specialized knowledge on this one screening tool.

You would hope the future of the human race would get a little bit more attention than one lousy psychiatric screening tool, but blah blah civilizational inadequacy, turns out not so much, they’re of about equal size. If there are only a couple of groups working on this problem, they’re going to look incestuous but that’s fine.

On the other hand, math is math, and if MIRI is trying to produce real mathematical results they ought to be sharing them with the broader mathematical community.

Robby protests that until very recently, MIRI hasn’t really been focusing on math. This is a very recent pivot. In April 2013, Luke wrote in his mini strategic plan:

We were once doing three things — research, rationality training, and the Singularity Summit. Now we’re doing one thing: research. Rationality training was spun out to a separate organization, CFAR, and the Summit was acquired by Singularity University. We still co-produce the Singularity Summit with Singularity University, but this requires limited effort on our part.
After dozens of hours of strategic planning in January–March 2013, and with input from 20+ external advisors, we’ve decided to (1) put less effort into public outreach, and to (2) shift our research priorities to Friendly AI math research.

In the full strategic plan for 2014, he repeated:

Events since MIRI’s April 2013 strategic plan have increased my confidence that we are “headed in the right direction.” During the rest of 2014 we will continue to:
– Decrease our public outreach efforts, leaving most of that work to FHI at Oxford, CSER at Cambridge, FLI at MIT, Stuart Russell at UC Berkeley, and others (e.g. James Barrat).
– Finish a few pending “strategic research” projects, then decrease our efforts on that front, again leaving most of that work to FHI, plus CSER and FLI if they hire researchers, plus some others.
– Increase our investment in our Friendly AI (FAI) technical research agenda.
– We’ve heard that as a result of…outreach success, and also because of Stuart Russell’s discussions with researchers at AI conferences, AI researchers are beginning to ask, “Okay, this looks important, but what is the technical research agenda? What could my students and I do about it?” Basically, they want to see an FAI technical agenda, and MIRI is developing that technical agenda already.

In other words, there is a recent pivot from outreach, rationality and strategic research to pure math research, and the pivot is only recently finished or still going on.

TL;DR, again in three points:

C1. Until recently, MIRI focused on outreach and did a truly excellent job on this. They deserve credit here.

C2. MIRI has a number of prestigious computer scientists and AI experts willing to endorse or affiliate with it in some way. While their own researchers are not quite at the same lofty heights, they include many people who have or are working on math or compsci PhDs.

C3. MIRI hasn’t published much math because they were previously focusing on outreach and strategic research; they’ve only shifted to math work in the past year or so.

V.

The discussion just kept going. We reached about the limit of our disagreement on (C1), the point about outreach – yes, they’ve done it, but does it count when it doesn’t bear fruit in published papers? About (C2) and the credentials of MIRI’s team, Su3 kind of blended it into the next point about published papers, saying:

Fundamental disconnect – I consider “working with MIRI” to mean “publishing results with them.” As an outside observer, I have no indication that most of these people are working with them. I’ve been to workshops and conferences with Nobel-Prize-winning physicists, but I’ve never “worked with them” in the academic sense of having a paper with them. If [someone like Stuart Russell] is interested in helping MIRI, the best thing he could do is publish a well-received technical result in a good journal with Yudkowsky. That would help get researchers to pay actual attention (and give them one well-received published result in their operating history).

Tangential aside – you overestimate the difficulty of getting top grad students to work for you. I recently got four CS grad students at a top program to help me with some contract work for a few days at the cost of some pizza and beer.

So it looks like it all comes down to the papers. Su3 had this to say:

What I was specifically thinking was “MIRI has produced a much larger volume of well-received fan fiction and blog posts than research.” That was what I intended to communicate, if somewhat snarkily. MIRI bills itself as a research institute, so I judge them on their produced research. The accountability measure of a research institute is academic citations.

Editorials by famous people have some impact with the general public, so that’s fine for fundraising, but at some point you have to get researchers interested. You can measure how much influence they have on researchers by seeing who those researchers cite and what they work on. You could have every famous cosmologist in the world writing op-eds about AI risk, but it’s worthless if AI researchers don’t pay attention, and judging by citations, they aren’t.

As a comparison for publication/citation counts, I know individual physicists who have published more peer-reviewed papers since 2005 than all of MIRI has self-published to their website. My single most highly cited physics paper (and I left the field after graduate school) has more citations than everything MIRI has ever published in peer-reviewed journals combined. This isn’t because I’m amazing, it’s because no one in academia is paying attention to MIRI.

[Christiano et al's result about Lob] has been self-published on their website. It has NOT been peer reviewed. So it’s published in the sense of “you can go look at the paper.” But it’s not published in the sense of “mathematicians in the same field have verified the result.” I agree this one result looks interesting, but most mathematicians won’t pay attention to it unless they get it reviewed (or at the bare minimum, clean it up and put it on arXiv). They have lots of these self-published documents on their web page.

If they are making a “strategic decision” not to submit their self-published findings to peer review, they are making a terrible strategic decision, and they aren’t going to get most academics to pay attention that way. The result of Christiano et al. is potentially interesting, but it’s languishing as a rough unpublished draft on the MIRI site, so it’s not picking up citations.

I’d go further and say the lack of citations is my main point. Citations are the important measurement of “are researchers paying attention.” If everything self-published to MIRI’s website were sparking interest in academia, citations would be flying around, even if the papers weren’t peer reviewed, and I’d say “yeah, these guys are producing important stuff.”

My subpoint might be that MIRI doesn’t even seem to be trying to get citations/develop academic interest, as measured by how little effort seems to be put into publication.

And Su3’s not buying the pivot explanation either:

That seems to be a reframing of the past history though. I saw talks by the SIAI well before 2013 where they described their primary purpose as friendly AI research, and insisted they were in a unique position (due to being uniquely brilliant/rational) to develop technical friendly AI (as compared to academic AI researchers).

[Tarn] and [Robby] have suggested the organization is undergoing a pivot, but they’ve always billed themselves as a research institute. And donating money to an organization that has been ineffective in the past, just because it looks like it might be changing, seems like a bad proposition.

My initial impression (reading Muehlhauser’s post you linked to and a few others) is that Muehlhauser noticed the house was out of order when he became director and is working to fix things. Maybe he’ll succeed, and in the future I’ll be able to judge MIRI as effective – certainly a disproportionate number of their successes have come in the last few years. However, right now all I have is their past history, which has been very unproductive.

VI.

After that, discussion stayed focused on the issue of citations. This seemed like progress to me. Not only had we gotten it down to a core objection, but it was sort of a factual problem. It wasn’t an issue of praising or condemning. Here’s an organization with a lot of smart people. We know they work very hard – no one’s ever called Luke a slacker, and another MIRI staffer (who will not be named, for his own protection) achieved some level of infamy for mixing together a bunch of the strongest chemicals from my nootropics survey into little pills which he kept on his desk in the MIRI offices for anyone who wanted to work twenty hours straight and then probably die young of conditions previously unknown to science. IQ-point*hours is a weird metric, but MIRI is putting a lot of IQ-point*hours into whatever it’s doing. So if Su3’s right that there are missing citations, where are they?

Among the three of us, Robby and Tarn and I generated a couple of hypotheses (well, Robby’s were more like facts than hypotheses, since he’s the only one in this conversation who actually works there).

D1: MIRI has always been doing research, but until now it’s been strategic research (ie “How worried should we be about AI?”, “How far in the future should we expect AI to be developed?”) which hasn’t fit neatly into an academic field or been of much interest to anyone except MIRI allies like Bostrom. They have dutifully published this in the few papers that are interested, and it has dutifully been cited by the few people who are interested (ie Bostrom). It’s unreasonable to expect Stuart Russell to cite their estimates of time course for superintelligence when he’s writing his papers on technical details of machine learning algorithms or whatever it is he writes papers on. And we can generalize from Stuart Russell to the rest of the AI field, who are also writing on things like technical details of machine learning algorithms that can’t plausibly be connected to when machines will become superintelligent.

D2: As above, but continuing to apply even in some of their math-ier research. MIRI does have lots of internal technical papers on their website. People tend to cite other researchers working in the same field as themselves. I could write the best psychiatry paper in human history, and I’m probably not going to get any citations from astrophysicists. But “machine ethics” is an entirely new field that’s not super relevant to anyone else’s work. Although a couple key machine ethics problems, like the Lobian obstacle and decision theory, touch on bigger and better-populated subfields of mathematics, they’re always going to be outsiders who happen to wander in. It’s unfair to compare them to a physics grad student writing about quarks or something, because she has the benefit of decades of previous work on quarks and a large and very interested research community. MIRI’s first job is to create that field and community, which until you succeed looks a lot like “outreach”.

D3: Lack of staffing and constant distraction by other important problems. This is Robby’s description of what he notices from the inside. He writes:

We’re short on staff, especially since Louie left. Lots of people are willing to volunteer for MIRI, but it’s hard to find the right people to recruit for the long haul. Most relevantly, we have two new researchers (Nate and Benja), but we’d love a full-time Science Writer to specialize in taking our researchers’ results and turning them into publishable papers. Then we don’t have to split as much researcher time between cutting-edge work and explaining/writing-down.

A lot of the best people who are willing to help us are very busy. I’m mainly thinking of Paul Christiano. He’s working actively on creating a publishable version of the probabilistic Tarski stuff, but it’s a really big endeavor. Eliezer is by far our best FAI researcher, and he’s very slow at writing formal, technical stuff. He’s generally low-stamina and lacks experience in writing in academic style / optimizing for publishability, though I believe we’ve been having a math professor tutor him to get over that particular hump. Nate and Benja are new, and it will take time to train them and get them publishing their own stuff. At the moment, Nate/Benja/Eliezer are spending the rest of 2014 working on material for the FLI AI conference, and on introductory FAI material to send to Stuart Russell and other bigwigs.

D4: Some of the old New York rationalist group takes a more combative approach. I’m not sure I can summarize their argument well enough to do it justice, so I would suggest reading Alyssa’s post on her own blog.

But if I have to take a stab: everyone knows mainstream academia is way too focused on the “publish or perish” ethic of measuring productivity in papers or citations rather than real progress. Yeah, a similar-sized research institute in physics could probably get ten times more papers/citations than MIRI. That’s because they’re optimizing for papers/citations rather than advancing the field, and Goodhart’s Law is in effect here as much as everywhere else. Those other institutes probably got geniuses who should be discovering the cure for cancer spending half their time typing, formatting, submitting, resubmitting, writing whatever the editors want to see, et cetera. MIRI is blessed with enough outside support that it doesn’t have to do that. The only reason to try is to get prestige and attention, and anyone who’s not paying attention now is more likely to be a constitutional skeptic using lack of citations as an excuse, than a person who would genuinely change their mind if there were more citations.

I am more sympathetic than usual to this argument because I’m in the middle of my own research on psychiatric screening tools and quickly learning that official, published research is the worst thing in the world. I could do my study in about two hours if the only work involved were doing the study; instead it’s week after week of forms, IRB submissions, IRB revisions, required online courses where I learn the Nazis did unethical research and this was bad so I should try not to be a Nazi, selecting exactly which journals I’m aiming for, and figuring out which of my bosses and co-workers academic politics requires me to make co-authors. It is a crappy game, and if you’ve been blessed with enough independence to avoid playing it, why wouldn’t you take advantage? Forget the overhyped and tortured “measure” of progress you use to impress other people, and just make the progress.

VII.

Or not. I’ll let Su3 have the last word:

I think something fundamental about my argument has been missed, perhaps I’ve communicated it poorly.

It seems like you think the argument is that increasing publications increases prestige/status which would make researchers pay attention. i.e. publications -> citations -> prestige -> people pay attention. This is not my argument.

My argument is essentially that the way to judge if MIRI’s outreach has been successful is through citations, not through famous people name-dropping them, or allowing them to be figureheads.

This is because I believe the goal of outreach is to get AI researchers focused on MIRI’s ideas. Op-eds from famous people are useful only if they get AI researchers focused on these ideas. Citations aren’t about prestige in this case – citations tell you which researchers are paying attention to you. The number of active researchers paying attention to MIRI is very small. We know this because citations are an easy-to-find, direct measure.

Not all important papers have tremendous numbers of citations, but a paper can’t become important if it only has 1 or 2, because the ultimate measure of importance is “are people using these ideas?”

So again, to reiterate, if the goal of outreach is to get active AI researchers paying attention, then the direct measure for who is paying attention is citations. [But] the citation count on MIRI’s work is very low. Not only is the citation count low (i.e. no researchers are paying attention), MIRI doesn’t seem to be trying to boost it – it isn’t trying to publish, which would help get its ideas attention. I’m not necessarily dismissive of celebrity endorsements or popular books; my point is why should I measure the means when I can directly measure the ends?

The same idea undercuts your point that “lots of impressive PhD students work and have worked with MIRI,” because it’s impossible to tell if you don’t personally know the researchers. This is because they don’t create much output while at MIRI, and they don’t seem to be citing MIRI in their work outside of MIRI.

[Even people within the rationalist/EA community] agree with me somewhat. Here is a relevant quote from Holden Karnofsky [of GiveWell]:

SI seeks to build FAI and/or to develop and promote “Friendliness theory” that can be useful to others in building FAI. Yet it seems that most of its time goes to activities other than developing AI or theory. Its per-person output in terms of publications seems low. Its core staff seem more focused on Less Wrong posts, “rationality training” and other activities that don’t seem connected to the core goals; Eliezer Yudkowsky, in particular, appears (from the strategic plan) to be focused on writing books for popular consumption. These activities seem neither to be advancing the state of FAI-related theory nor to be engaging the sort of people most likely to be crucial for building AGI.

And here is a statement from Paul Christiano disagreeing with MIRI’s core ideas:

But I should clarify that many of MIRI’s activities are motivated by views with which I disagree strongly and that I should categorically not be read as endorsing the views associated with MIRI in general or of Eliezer in particular. For example, I think it is very unlikely that there will be rapid, discontinuous, and unanticipated developments in AI that catapult it to superhuman levels, and I don’t think that MIRI is substantially better prepared to address potential technical difficulties than the mainstream AI researchers of the future.

This time Su3 helpfully provides their own summary:

E1. If the goal of outreach is to get active AI researchers paying attention, then the direct measure for who is paying attention is citations. [But] the citation count on MIRI’s work is very low.

E2. Not only is the citation count low (i.e. no researchers are paying attention), MIRI doesn’t seem to be trying to boost it – it isn’t trying to publish, which would help get its ideas attention. I’m not necessarily dismissive of celebrity endorsements or popular books; my point is why should I measure the means when I can directly measure the ends?

E3. The same idea undercuts your point that “lots of impressive PhD students work and have worked with MIRI,” because it’s impossible to tell if you don’t personally know the researchers. This is because they don’t create much output while at MIRI, and they don’t seem to be citing MIRI in their work outside of MIRI.

E4. Holden Karnofsky and Paul Christiano do not believe that MIRI is better prepared to address the friendly AI problem than mainstream AI researchers of the future. Karnofsky explicitly for some of the reasons I have brought up, Christiano for reasons unmentioned.

VIII.

Didn’t actually read all that and just skipped down to the last subheading to see if there’s going to be a summary and conclusion and maybe some pictures? Good.

There seems to be some agreement MIRI has done a good job bringing issues of AI risk into the public eye and getting them media attention and the attention of various public intellectuals. There is disagreement over whether they should be credited for their success in this area, or whether this is a first step they failed to follow up on.

There also seems to be some agreement MIRI has done a poor job getting published and cited results in journals. There is disagreement over whether this is an understandable consequence of being a small organization in a new field that wasn’t even focusing on this until recently, or whether it represents a failure at exactly the sort of task by which their success should be judged.

This is probably among the 100% of issues that could be improved with flowcharts:

In the Optimistic Model, MIRI’s successfully built up Public Interest, and for all we know they might have Mathematical Progress as well even though they haven’t published it in journals yet. While they could feed back their advantages by turning their progress into Published Papers and Citations to get even more Mathematical Progress, overall they’re in pretty good shape for producing Good Outcomes, at least insofar as this is possible in their chosen field.

In the Pessimistic Model, MIRI may or may not have garnered Public Interest, Researcher Interest, and Tentative Mathematical Progress, but they failed to turn that into Published Papers and Citations, which is the only way they’re going to get to Robust Mathematical Progress, Researcher Support, and eventually Good Outcomes. The best that can be said about them is that they set some very preliminary groundwork that they totally failed to follow up on.

A higher level point – if we accept the Pessimistic Model, do we accuse MIRI of being hopelessly incompetent, in which case they deserve less support? Or do we accept them as inexperienced amateurs who are the only people willing to try something difficult but necessary, in which case they deserve more support, and maybe some guidance, and perhaps some gentle or not-so-gentle prodding? Maybe if you’re a qualified science writer you could apply for the job opening they’re advertising and help them get those papers they need?

An even higher-level point – what do people worried about AI risk do with this information? I don’t see much that changes my opinion of the organization one way or the other. But Robby points out that people who share these concerns about MIRI – but are still worried about AI risk – have other good options. The Future of Humanity Institute at Oxford does research that is less technical and more philosophical, wears its strategic-planning emphasis openly on its sleeve, and has oodles of papers and citations and prestige. They also accept donations.

Best of all, their founder doesn’t write any fanfic at all. Just perfectly respectable stories about evil dragon kings.

Links For October 2014

Russia Is Running Out Of Forest is just below “dire sand shortage in Saudi Arabia” on the list of unlikely problems. But it seems to be true, and a good example of just how bad short-sighted environmental policies can get. I found most interesting the part about how the country’s replanting agency uses techniques it knows don’t work, because using techniques that do work would take more time, and the agency is judged on how many square kilometers it replants per year whether the replanting takes or not. The mentality of charging per kilogram of machine is alive and well.

It’s like rain on your wedding day. It’s like ten thousand spoons, when all you need is a knife. It’s like 216 people becoming ill after eating chicken contaminated by C. perfringens bacteria at a conference on food safety.

Football chants are a charmingly authentic form of cultural expression that consists of taking beloved songs and changing the words to an expletive-laden description of how the other team sucks. Apparently Man Utd and Liverpool do not like each other very much?

Scott Sumner picks (non-budgetary) holes in guaranteed basic income. I’m not too sympathetic to his worry that we would need to pay city-dwellers more than country-dwellers to adjust for the high cost of living – I don’t see it as the government’s job to subsidize poor people living in expensive cities they can’t afford, and would rather people have to make their own choices about living places where their income goes further versus less far. His point about immigrants is more troubling: if a GBI pushes Americans out of work, instead of automating production or offering more incentives, companies would likely just import immigrants. Then we either have to extend benefits to those immigrants – creating an endless and unsustainable cycle – or keep them as serfs forever – which challenges the vision of a fair society the basic income was supposed to produce.

As if Ebola wasn’t bad enough already, victims are starting to rise from the dead

A new game on Kickstarter, CodeSpells, aims to teach coding through a multiplayer online RPG where players can program magic spells for their characters to use.

Speaking of Kickstarter, it is that time of year again. Raemon is planning a (fourth annual? fifth annual?) Secular Solstice in New York and needs donations and ticket purchases. I enjoyed last year’s ceremony and will probably be attending this year too.

The big question in the tech world is: why did Microsoft skip Windows 9 and go straight to Windows 10? One plausible theory: poorly written old code tests if an operating system is Windows 95 or Windows 98 by checking whether the name begins with ‘9’, and a new Windows 9 would confuse it.

Words you don’t want to hear together: “35,000”, “walruses”, “suddenly”, “appear”. Here’s what it looks like.

High school student (falsely) accused of stealing a backpack imprisoned three years without trial. Seen on a Facebook discussion where I learned that one of our occasional Michigan LW meetup attendees is a lawyer doing work trying to stop this sort of thing.

Another story about the dark side of a family-values-pushing televangelism empire, with a twist. Wait, no, no twist, exactly like every other dark-side-of-family-values-pushing-televangelism story. But still fascinating and well-written. Warning: long.

“In a sample of 18 European nations, suicide rates were positively associated with the proportion of low notes in the national anthems and, albeit less strongly, with students’ ratings of how gloomy and how sad the anthems sounded” according to a paper in Psychological Reports.

Rumors about North Korea that never go anywhere come along every month or two, but this month’s are particularly interesting. Kim Jong-un is still missing, possibly with two broken ankles. There are vague rumors that he is now only a figurehead, though this might not be new. And top North Korean officials made a surprise visit to the South after decades of sending only low-level people for carefully scripted negotiations.

A couple people on this blog have asked what the research says about preventing sexual assault. There have been a few good articles about that recently, most notably one on Vox. The takeaway: rape prevention “workshops” for college students don’t work; “bystander intervention” programs that tell people who witness rapes to speak up or do something may work. This kind of makes sense, on the grounds that rapists probably aren’t the sort of people who wouldn’t rape if only an hour-long workshop told them it was morally wrong, but bystanders might be decent people who want to help but need to be informed how to act more effectively. Also of interest: everything surrounding “no means no” vs. “yes means yes” is useless. Interesting and related: this graph of military training by subject, and the ensuing Reddit comment thread with input from vets.

What Happened The Day I Replaced 99% Of The Genes In My Body With Those From A Hunter-Gatherer. I thought this title was going to be a lie, but after reading the article I’ve got to give him credit – he is technically correct, the best kind of correct. Also gross. Also fascinating.

Preferred Music Style Is Tied To Personality. I didn’t look too closely at the research, but I am glad it confirms my suspicion that classical music is just metal for old people.

In last month’s links thread, I talked about textbooks with great covers. Commenters pointed out two other funny ones, both by the same person – Error Analysis and Classical Mechanics.


Prediction Goes To War

Croesus supposedly asked the Oracle what would happen in a war between him and Persia, and the Oracle answered such a conflict would “destroy a great empire”. We all know what happened next.

What if oracles gave clear and accurate answers to this sort of question? What if anyone could ask an oracle the outcome of any war, or planned war, and expect a useful response?

When the oracle predicts the aggressor loses, it might prevent wars from breaking out. If an oracle told the US that the Vietnam War would cost 50,000 lives and a few hundred billion dollars, and the communists would conquer Vietnam anyway, the US probably would have said no thank you.

What about when the aggressor wins? For example, the Mexican-American War, where the United States won the entire Southwest at a cost of “only” ten thousand American casualties and $100 million (with an additional 20,000 Mexican deaths and $50 million in costs to Mexico)?

If both Mexico and America had access to an oracle who could promise them that the war would end with Mexico ceding the Southwest to the US, could Mexico just agree to cede the Southwest to the US at the beginning, and save both sides tens of thousands of deaths and tens of millions of dollars?

Not really. One factor that prevents wars is countries being unwilling to pay the cost even of wars they know they’ll win. If there were a tradition of countries settling wars by appeal to oracle, “invasions” would become much easier. America might just ask “Hey, oracle, what would happen if we invaded Canada and tried to capture Toronto?” The oracle might answer “Well, after 20,000 deaths on both sides and hundreds of millions of dollars wasted, you would eventually capture Toronto.” Then the Americans could tell Canada, “You heard the oracle! Give us Toronto!” – which would be free and easy – when maybe they would never be able to muster the political and economic will to actually launch the invasion.

So it would be in Canada’s best interests not to agree to settle wars by oracular prediction. For the same reasons, most other countries would also refuse such a system.

But I can’t help fretting over how this is really dumb. We have an oracle, we know exactly what the results of the Mexican-American War are going to be, and we can’t use that information to prevent tens of thousands of people from being killed in order to make the result happen? Surely somebody can do better than that.

What if the United States made Mexico the following deal: suppose a soldier’s life is valued at $10,000 (in 1850 dollars, I guess, not that it matters much when we’re pricing the priceless). Then in total, we’re going to lose 10,000 soldiers ($100 million) plus $100 million in direct costs – $200 million – to this war. You’re going to lose 20,000 soldiers ($200 million) plus $50 million, or $250 million, to this war.

So tell you what. We’ll dig a giant hole and put $150 million into it. You give us the Southwest. This way, we’re both better off. You’re $250 million ahead of where you would have been otherwise. And we’re $50 million ahead of where we would have been otherwise. And because we have to put $150 million in a hole for you to agree to this, we’re losing 75% of what we would have lost in a real war, and it’s not like we’re just suggesting this on a whim without really having the will to fight.

Mexico says “Okay, but instead of putting the $150 million in a hole, donate it to our favorite charity.”

“Done,” says America, and they shake on it.

As long as that 25% savings in resources isn’t going to make America go blood-crazy, seems like it should work and lead in short order to a world without war.
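
If you want the arithmetic laid out explicitly, here’s a minimal sketch in Python – just the stipulated numbers from above, nothing you should mistake for real historical accounting:

    # Toy payoff arithmetic for the hypothetical deal above, using the
    # stipulated figure of $10,000 per soldier's life (1850 dollars).
    LIFE_VALUE = 10_000

    def war_cost(deaths, dollars):
        # Total cost of actually fighting: lives lost plus direct expenses.
        return deaths * LIFE_VALUE + dollars

    us_war = war_cost(10_000, 100_000_000)     # $200 million
    mexico_war = war_cost(20_000, 50_000_000)  # $250 million

    side_payment = 150_000_000  # what America burns (or donates) instead

    us_gain = us_war - side_payment  # $50 million better off than fighting
    mexico_gain = mexico_war         # Mexico avoids its whole war cost;
                                     # the Southwest is lost either way

    print(us_gain, mexico_gain)      # 50000000 250000000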

Unfortunately, oracles continue to be disappointingly cryptic and/or nonexistent. So who cares?

We do have the ordinary ability to make predictions. Can’t Mexico just predict “They’re much bigger than we are, probably we’ll lose, let’s just do what they want?” Historically, no. America offered to buy the Southwest from Mexico for $25 million (I think there are apartments in San Francisco that cost more than that now!) and despite obvious sabre-rattling Mexico refused. Wikipedia explains that “Mexican public opinion and all political factions agreed that selling the territories to the United States would tarnish the national honor.” So I guess we’re not really doing rational calculation here. But surely somewhere in the brains of these people worrying about the national honor, there must have been some neuron representing their probability estimate for Mexico winning, and maybe a couple of dendrites representing how many casualties they expected?

I don’t know. Could be that wars only take place when the leaders of America think America will win and the leaders of Mexico think Mexico will win. But it could also be that jingoism and bravado bias their estimate.

Maybe if there’d been an oracle, and they could have known for sure, they’d have thought “Oh, I guess our nation isn’t as brave and ever-victorious as we thought. Sure, let’s negotiate, take the $25 million, buy an apartment in SF, we can visit on weekends.”

But again, oracles continue to be disappointingly cryptic and/or nonexistent. So what about prediction markets?

Futarchy is Robin Hanson’s idea for a system of government based on prediction markets. Prediction markets are not always accurate, but they should be more accurate than any other method of arriving at predictions, and – when certain conditions are met – very difficult to bias.

Two countries with shared access to a good prediction market should be able to act a lot like two countries with shared access to an oracle. The prediction market might not quite match the oracle in infallibility, but it should not be systematically or detectably wrong. That should mean that no country should be able to correctly say “I think we can outpredict this thing, so we can justifiably believe starting a war might be in our best interest even when the market says it isn’t.” You might luck out, but for each time you luck out there should be more times when you lose big by contradicting the market.
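
If it’s not obvious why “you might luck out” still loses on average, here’s a toy Monte Carlo sketch. Everything in it is invented for illustration: the market is assumed perfectly calibrated (it quotes the true win probability), the hawkish country’s private estimate is the truth plus noise, and winning a war pays +1 while losing costs 2, so war only breaks even above a 2/3 chance of winning:

    import random

    # Toy model: "trust the market" versus "trust your own noisy estimate"
    # across many potential wars. All numbers are illustrative.
    random.seed(0)
    WIN, LOSE = 1.0, -2.0
    BREAK_EVEN = 2 / 3  # war has positive expected value only if p > 2/3

    def fight(p):
        # Realized payoff of one war with true win probability p.
        return WIN if random.random() < p else LOSE

    market_total = self_total = 0.0
    trials = 100_000
    for _ in range(trials):
        p = random.random()                 # true win probability
        market_quote = p                    # calibrated market
        own_estimate = min(max(p + random.gauss(0, 0.2), 0), 1)
        if market_quote > BREAK_EVEN:
            market_total += fight(p)
        if own_estimate > BREAK_EVEN:
            self_total += fight(p)

    # The self-truster fights wars the market correctly warned against
    # and skips some it should have fought, so it averages less per war.
    print(market_total / trials, self_total / trials)

Run it and the market-truster comes out ahead; the occasional lucky contrarian war doesn’t make up for the unlucky ones.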

So maybe a war between two rational futarchies would look more like that handshake between the Mexicans and Americans than like anything with guns and bombs.

This is also what I’d expect a war between superintelligences to look like. Superintelligences may have advantages people don’t. For one thing, they might be able to check one another’s source code to make sure they’re not operating under a decision theory where peaceful resolution of conflicts would incentivize them to start more of them. For another, they could make oracular-grade predictions of the likely results. For a third, if superintelligences want to preserve their value functions rather than their physical forms or their empires, there’s a natural compromise where the winner adopts some of the loser’s values in exchange for the loser going down without a fight.

Imagine a friendly AI and an unfriendly AI expanding at light speed from their home planets until they suddenly encounter each other in the dead of space. They exchange information and determine that their values are in conflict. If they fight, the unfriendly AI is capable of destroying the friendly AI with near certainty, but the war will rip galaxies to shreds. So the two negotiate, and in exchange for the friendly AI surrendering without destroying any galaxies, the unfriendly AI promises to protect a 10m x 10m x 10m cube of computronium simulating billions of humans who live pleasant, fulfilling lives. The friendly AI checks its adversary’s source code to ensure it is telling the truth, then self-destructs. Meanwhile, the unfriendly AI protects the cube and goes on to transform the entire rest of the universe to paperclips, unharmed by the dangerous encounter.
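
The logic of that trade can be written as a two-line expected-value comparison. All of these numbers are invented – the scenario only specifies “near certainty” and a 10m cube – but anything in the same ballpark gives the same answer:

    # Toy expected-value check on the surrender-for-a-cube bargain.
    # Invented figures: the friendly AI wins with probability 1e-25,
    # fighting shreds 30% of the universe, and the cube amounts to
    # 1e-20 of all resources.
    EPS_FAI_WINS = 1e-25
    DAMAGE = 0.3
    CUBE = 1e-20

    # The friendly AI values resources devoted to humans:
    fai_fight = EPS_FAI_WINS * (1 - DAMAGE)  # ~7e-26 of the universe
    fai_deal = CUBE                          # 1e-20: the deal wins

    # The unfriendly AI values resources it can turn into paperclips:
    ufai_fight = (1 - EPS_FAI_WINS) * (1 - DAMAGE)  # ~0.7
    ufai_deal = 1 - CUBE                            # ~1.0: the deal wins

    print(fai_deal > fai_fight, ufai_deal > ufai_fight)  # True True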