Slate Star Codex

Good for the good god! Utils for the util throne!

In The Future, Everyone Will Be Famous To Fifteen People

[Epistemic status: not very serious]
[Content note: May make you feel overly scrutinized]

Sometimes I hear people talking about how nobody notices them or cares about anything they do. And I want to say…well…

Okay. The Survey of Earned Doctorates tells us that the United States awards about a hundred classics PhDs per year. I get the impression classics is more popular in Europe, so let’s say a world total of five hundred. If the average classicist has a fifty year career, that’s 25,000 classicists at any given time. Some classicists work on Rome, so let’s say there are 10,000 classicists who focus solely on ancient Greece.

Estimates of the population of classical Greece center around a million people, but classical Greece lasted for several generations, so let’s say there were ten million classical Greeks total. That gives us a classicist-to-Greek ratio of 1:1000.
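
For anyone who wants to check the arithmetic, here is the whole Fermi estimate in a few lines of Python. Every number is one of the round guesses above, not real survey data:

```python
# Toy Fermi estimate using the essay's round numbers.
phds_per_year_us = 100          # Survey of Earned Doctorates, roughly
phds_per_year_world = 500       # guessing Europe et al. multiply this by five
career_years = 50

working_classicists = phds_per_year_world * career_years   # 25,000 total
greece_specialists = 10_000     # the subset who ignore Rome

classical_greeks = 10_000_000   # ~1M people, over several generations

ratio = classical_greeks / greece_specialists
print(f"1 classicist per {ratio:.0f} ancient Greeks")
```

Which prints one classicist per thousand Greeks, the 1:1000 ratio above.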

It would seem that this ratio must be decreasing: world population increases, average world education level increases, but the number of classical Greeks is fixed for all time. Can we extrapolate to a future where there is a one-to-one ratio of classicists to classical Greeks, so that each scholar can study exactly one Greek?

Problem the first – human population is starting to stabilize, and will probably reach a maximum at less than double its current level. But this is a function of our location here on Earth. Once we start colonizing space effectively, we can expect populations to balloon again. The Journal of the British Interplanetary Society estimates the carrying capacity of the solar system at forty trillion people; Nick Bostrom estimates the carrying capacity of the Virgo Supercluster at 10^23 human-like-digitized entities.

Problem the second – does the proportion of classics majors remain constant as population increases? One might expect that people living in domed cities on asteroids would have trouble being moved by the Iliad. Then again, one might expect that people living in glass-and-steel skyscrapers on a new continent ten thousand miles away from the classical world would have trouble being moved by the Iliad, and that didn’t pan out. A better objection might be that as population increases, the amount of history also increases – the year 2500 may have more historians than we do, but it also has five hundred years more history. But this decreases our estimates only slightly – population grows exponentially, but the amount of history grows linearly. For example, the year 2000 has three times the population of the year 1900, but – if we start history from 4000 BC – only about two percent more history. Even if we admit the common sense idea that the 20th century contains “more” historical interest than, say, the 5th century, it still certainly does not contain three times as much historical interest as all previous centuries combined.
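
The exponential-versus-linear point is easy to make concrete. A quick sketch – the ~1.6 and ~6.1 billion population figures are my rough approximations, which the paragraph above rounds to “three times”:

```python
# Population grows roughly exponentially; "amount of history" grows
# linearly, so historians-per-historical-event still explodes.
pop_1900, pop_2000 = 1.6e9, 6.1e9    # approximate world populations
history_1900 = 1900 - (-4000)        # years elapsed since 4000 BC
history_2000 = 2000 - (-4000)

print(pop_2000 / pop_1900)               # population: nearly 4x
print(history_2000 / history_1900 - 1)   # history: under 2% more
```

So a century of extra history barely dents the per-historian workload.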

So it seems that if human progress continues, the number of classicists will equal, then exceed, the number of inhabitants of classical Greece. Exactly when this happens depends on many things, most obviously the effects of any technological singularity that might occur. But if we want to be very naive about it and project Current Rate No Singularity indefinitely, we can just extend our current rate of population doubling every fifty years and suggest that in about 2500, with a human population of five trillion spread out throughout the solar system and maybe some nearby stars, we will reach classicist:Greek parity.
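
The naive projection itself, as a sketch: hold the classicist fraction of humanity constant, double the population every fifty years, and see when the Greece specialists catch up to the ten million ancient Greeks (all inputs are the assumptions from above):

```python
# Current Rate No Singularity: population doubles every fifty years,
# and the classicist fraction of the population stays fixed.
population = 7e9                # rough 2014 world population
specialists = 10_000            # Greece-focused classicists today
greeks = 10_000_000             # total classical Greeks, ever
fraction = specialists / population   # held constant forever

year = 2014
while fraction * population < greeks:
    population *= 2
    year += 50

print(year, f"{population:.1e}")   # → 2514 7.2e+12
```

Parity lands shortly after 2500, with a population in the trillions – close enough to the round numbers in the text, given that we overshoot parity by most of a doubling.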

What will this look like? Barring any revolutionary advance in historical methodology, there won’t really be enough artifacts and texts to support ten million classicists, so they will be reduced to overanalyzing excruciating details of the material that exists. On the other hand, maybe there will be revolutionary advances. The most revolutionary one I could think of would be the chronoscope from The Light of Other Days, a device often talked about in sci-fi stories that can see into the past. Armed with chronoscopes, classicists could avoid concentrating on a few surviving artifacts and study ancient Greece directly. And since the scholarly community would quickly exhaust what could be learned about important figures like Pericles and Leonidas, many historians would start looking into individual middle-class or lower-class Greeks, investigating their life stories and how they tied in to the broader historical picture. A new grad student might do her dissertation on the life of Nikias the random olive farmer who lived twenty miles outside Athens. Since there would be historian:subject parity, it might be that most or all ancient Greeks could be investigated in that level of detail.

What happens after 2500? If the assumptions mentioned above continue to hold, we pass parity and end up with more classicists than Greeks. By 3000 there are a thousand classicists for each ancient. Now you wish you could do your dissertation on the life of Nikias The Random Olive Farmer. But that low-hanging fruit (low hanging olive?) has been taken. Now there is an entire field (olive orchard?) of Nikias The Random Olive Farmer Studies, with its own little internal academic politics and yearly conferences on Alpha Centauri. In large symposia held at high-class hotels, various professors discuss details of Nikias The Random Olive Farmer’s psychology, personal relationships, opinions, and how he fits in to the major trends in Greek society that were going on at the time. Feminist scholars object that the field of Nikias The Random Olive Farmer’s Wife Studies is less well-funded than Nikias The Random Olive Farmer Studies, and dozens of angry papers are published in the relevant journals about it. Several leading figures object that too little effort is being made to communicate the findings of Nikias The Random Olive Farmer Studies to the general public, and there are half-hearted attempts to make little comic books about Nikias’ life or something.

By 3150 this has gotten so out of hand that it is wasting useful resources that should be allocated to fending off the K’th’rangan invasion. The Galactic Emperor declares a cap on the number of classics scholars at some reasonable number like a hundred million. There are protests in every major university, and leading public figures accuse the Galactic Emperor of being anti-intellectual, but eventually the new law takes hold and the grumbling dies down.

The field of Early 21st Century Studies, on the other hand, is still going strong. There are almost a thousand times as many moderns as Greeks, so we have a more reasonable ratio of about fifteen historians per modern, give or take, with the most interesting moderns having more and the ones who died young having fewer. Even better, the 21st Century Studies researchers don’t have to waste valuable chronoscopes that could be used for spying on the K’th’rangans. They can just hunt through the Internet Archive for more confusing, poorly organized data about the people of the early 21st century than they could ever want.

Gradually the data will start to make more and more sense. Imagine how excited the relevant portion of the scholarly community will be when it is discovered through diligent sleuthing that Thor41338 on the Gamer’s Guild forum is the same person as Hunter Glenderson from Charleston, South Carolina, and two seemingly different pieces of the early 21st century milieu slide neatly into place.

A few more population doublings, and the field of Hunter Glenderson From Charleston Studies is as big as the field of Nikias The Random Olive Farmer Studies ever was. The Galactic Emperor is starting to take notice, but the K’th’rangans are in retreat and for now there are resources to spare. There are no more great discoveries about new pseudonyms to be made, but there are still occasional paradigm shifts in analysis of the great events of Glenderson’s life. Someone tries a Freudian analysis of his life; another a Marxist analysis; a third writes about how his relationship with his ex-girlfriend from college ties in to the Daoist conception of impermanence. All these people have grad students trawling old Twitter accounts for them, undergraduates anxious to hear about their professor’s latest research, and hateblogs by brash amateurs claiming that the establishment totally misunderstands Hunter Glenderson.

Late at night, one grad student is preparing a paper on one of Glenderson’s teenaged Twitter rants, and comes across his tweet: “Nobody notices me. Nobody cares about anything I do.” She makes special note of it, since she thinks the irony factor might make it worth a publication in one of the several Hunter-Glenderson-themed journals.

More Links For October 2014

Bad Conlanging Ideas Tumblr, or best conlanging ideas Tumblr?

No, Aristotle is not your dumb straw man opponent of empiricism.

One thing you have to learn in every freshman biology course, and the better sort of freshman philosophy course, is that evolution doesn’t necessarily go “from worse organisms to better organisms” or even “from less complex organisms to more complex organisms” in any meaningful fashion. On the other hand, organisms from more “evolutionarily deep” areas are more likely to invade less “evolutionarily deep” areas than vice versa. So maybe there’s something to the idea of evolutionary “progress” after all, albeit probably not in the way a lot of people would think.

Last links post I made fun of Russia’s wood shortage by saying it was like Saudi Arabia having a sand shortage. Alyssa Vance helpfully informed me that Saudi Arabia did, in fact, have a sand shortage.

Andrea Rossi’s e-Cat cold fusion machine passes another round of probably rigged tests, including one where it was able to change isotope ratios in a way that would have been very impressive had it not been almost certainly rigged. Less Wrong Facebook is taking bets on replications – but they’re less “think it will” vs. “think it won’t” and more “1% chance it will” vs. “0.0001% chance it will.” Rational Conspiracy sums up some of the discussion, but note that Robin says (privately, on Facebook) that this is seriously misrepresenting him, so the post should be taken only as a survey of the issues involved and not as accurate about Robin’s personal position. Also, I ask about some of the patent issues it raises on Tumblr.

There’s now a claim that, along with everything else, gut microbes can contribute to the pathogenesis of eating disorders. Haven’t investigated to see if it’s true yet because I know very little about these conditions and am hoping Kate Donovan will do the hard work.

Speaking of aetiology of mental disorders, here’s the best Grand Unified Theory Of Autism I’ve seen this week: autism stems from a prediction deficit. Also glad the importance of prediction to the brain’s architecture is getting some much-deserved media attention.

Second best Grand Unified Theory Of Autism I’ve seen this week: Neural stem cell overgrowth, autism-like behavior linked, mice study suggests.

In the slightly more reality-based fusion community, Lockheed Martin announces that they expect to have a truck-sized fusion reactor ready in ten years, making the joke that “fusion is always twenty years off” somewhat obsolete. The polywell people have also mentioned the ten years number. But more sober scientists are doubtful.

My old Biodeterminist’s Guide said it was “likely” that exercise increased fetal IQ but didn’t have a good study to point to. Now the evidence is in to confirm that hypothesis.

Yxoque is starting rationalist-tutor.tumblr.com to try to teach some basic rationalist concepts to people who for some inexplicable reason possibly involving their head being screwed on backwards don’t want to read the Sequences. If you’re on Tumblr you may want to follow.

The libertarian talking points on California’s water shortage. Seems legit.

Songs From A Decemberists Album Where Nobody Gets Murdered, including “The Boy Who Joined A Guild And Worked Hard,” “Let’s Not Strangle The Dauphin,” and “Life As A Chimney Sweep Is Difficult But I’d Certainly Never Start Murdering Sex Workers Who Remind Me Of My Mother Just To Relieve The Stress.”

Ezra Klein’s been getting a lot of flak over a recent article, and I’m not usually someone to defend what THE ENTIRE WORLD seems to be denouncing as extreme over-the-top feminism – but in this case it looks like he’s taken a reasonable position on what unfortunately happens to be a taboo tradeoff.

Weird fluctuation in x-rays from the sun may be first direct detection of dark matter axions.

Want to live somewhere cheap? Try New York or San Francisco. Really? Yes, really.

A new paper finds that immigration neither increases unemployment nor increases growth. (h/t Marginal Revolution)

Of all the crappy things the Saudis do to women, one I didn’t know before was that women who have finished their jail term have to be picked up by a male relative. No male relative who wants to pick you up, no release from jail, ever. Seriously, screw Saudi Arabia. I hope the entire country collapses of chronic sand shortage.

The world’s first fully vegetarian city bans animal slaughter and the sale of meat within city limits. Only one problem – it doesn’t look like they asked the residents, who are kind of miffed.

Iran, when challenged on homosexual rights, famously declared that it had no gays. But did you know that when asked to host the Paralympics, the Soviet Union declined because it had no disabled people?

In keeping with our tradition of ending with a link to an interesting, funny, or surprising textbook: I kind of actually want to read this.


Five Case Studies On Politicization

[Trigger warning: Some discussion of rape in Part III. This will make much more sense if you've previously read I Can Tolerate Anything Except The Outgroup]

I.

One day I woke up and they had politicized Ebola.

I don’t just mean the usual crop of articles like Republicans Are Responsible For The Ebola Crisis and Democrats Try To Deflect Blame For Ebola Outbreak and Incredibly Awful Democrats Try To Blame Ebola On GOP and NPR Reporter Exposes Right Wing Ebola Hype and Republicans Flip-Flop On Ebola Czars. That level of politicization was pretty much what I expected.

(I can’t say I totally expected to see an article called Fat Lesbians Got All The Ebola Dollars, But Blame The GOP, but in retrospect nothing I know about modern society suggested I wouldn’t)

I’m talking about something weirder. Over the past few days, my friends on Facebook have been making impassioned posts about how it’s obvious there should/shouldn’t be a quarantine, but deluded people on the other side are muddying the issue. The issue has risen to an alarmingly high level of 0.05 #Gamergates, which is my current unit of how much people on social media are concerned about a topic. What’s more, everyone supporting the quarantine has been on the right, and everyone opposing on the left. Weird that so many people suddenly develop strong feelings about a complicated epidemiological issue, which can be exactly predicted by their feelings about everything else.

On the Right, there is condemnation of the CDC’s opposition to quarantines as globalist gibberish, fourteen questions that will never be asked about Ebola centering on why there aren’t more quarantine measures in place, and arguments on right-leaning biology blogs for why the people opposing quarantines are dishonest or incompetent. Top Republicans call for travel bans and a presenter on Fox, proportionate as always, demands quarantine centers in every US city.

On the Left (and token libertarian) sides, the New Yorker has been publishing articles on how involuntary quarantines violate civil liberties and “embody class and racial biases”, Reason makes fun of “dumb Republican calls for a travel ban”, Vox has a clickbaity article on how “This One Paragraph Perfectly Sums Up America’s Overreaction To Ebola”, and MSNBC notes that to talk about travel bans is “borderline racism”.

How did this happen? How did both major political tribes decide, within a month of the virus becoming widely known in the States, not only exactly what their position should be but what insults they should call the other tribe for not agreeing with their position? There are a lot of complicated and well-funded programs to disseminate information about the symptoms of Ebola in West Africa, and all I can think of right now is that if the Africans could disseminate useful medical information half as quickly as Americans seem to have disseminated tribal-affiliation-related information, the epidemic would be over tomorrow.

Is it just random? A couple of Republicans were coincidentally the first people to support a quarantine, so other Republicans felt they had to stand by them, and then Democrats felt they had to oppose it, and then that spread to wider and wider circles? And if by chance a Democrat had proposed quarantine before a Republican, the situation would have reversed itself? Could be.

Much more interesting is the theory that the fear of disease is the root of all conservatism. I am not making this up. There has been a lot of really good evolutionary psychology done on the extent to which pathogen stress influences political opinions. Some of this is done on the societal level, and finds that societies with higher germ loads are more authoritarian and conservative. This research can be followed arbitrarily far – like, isn’t it interesting that the most liberal societies in the world are the Scandinavian countries in the very far north where disease burden is low, and the most traditionalist-authoritarian ones are usually in Africa or somewhere else where disease burden is high? One even sees a similar effect within countries, with northern US states being very liberal and southern states being very conservative. Other studies have instead focused on differences between individuals within society – we know that religious conservatives are people with stronger disgust reactions, and that priming disgust reactions can increase self-reported conservative political beliefs – with most people agreeing disgust reactions are a measure of the “behavioral immune system” triggered by fear of germ contamination.

(free tip for liberal political activists – offering to tidy up voting booths before the election is probably a thousand times more effective than anything you’re doing right now. I will leave the free tip for conservative political activists to your imagination)

If being a conservative means you’re pre-selected for worry about disease, obviously the conservatives are going to be the ones most worried about Ebola. And in fact, along with the quarantine debate, there’s a little sub-debate about whether Ebola is worth panicking about. Vox declares Americans to be “overreacting” and keeps telling them to calm down, whereas its similarly-named evil twin Vox Day has been spending the last week or so spreading panic and suggesting readers “wash your hands, stock up a bit, and avoid any unnecessary travel”.

So that’s the second theory.

The third theory is that everything in politics is mutually reinforcing.

Suppose the Red Tribe has a Grand Narrative. The Narrative is something like “We Americans are right-thinking folks with a perfectly nice culture. But there are also scary foreigners who hate our freedom and wish us ill. Unfortunately, there are also traitors in our ranks – in the form of the Blue Tribe – who in order to signal sophistication support foreigners over Americans and want to undermine our culture. They do this by supporting immigration, accusing anyone who is too pro-American and insufficiently pro-foreigner of “racism”, and demanding everyone conform to “multiculturalism” and “diversity”, as well as lionizing any group within America that tries to subvert the values of the dominant culture. Our goal is to minimize the subversive power of the Blue Tribe at home, then maintain isolation from foreigners abroad, enforced by a strong military if they refuse to stay isolated.”

And the Blue Tribe also has a Grand Narrative. The Narrative is something like “The world is made up of a bunch of different groups and cultures. The wealthier and more privileged groups, played by the Red Tribe, have a history of trying to oppress and harass all the other groups. This oppression is based on ignorance, bigotry, xenophobia, denial of science, and a false facade of patriotism. Our goal is to call out the Red Tribe on its many flaws, and support other groups like foreigners and minorities in their quest for justice and equality, probably in a way that involves lots of NGOs and activists.”

The proposition “a quarantine is the best way to deal with Ebola” seems to fit much better into the Red narrative than the Blue Narrative. It’s about foreigners being scary and dangerous, and a strong coordinated response being necessary to protect right-thinking Americans from them. When people like NBC and the New Yorker accuse quarantine opponents of being “racist”, that just makes the pieces fit in all the better.

The proposition “a quarantine is a bad way to deal with Ebola” seems to fit much better into the Blue narrative than the Red. It’s about extremely poor black foreigners dying, and white Americans rushing to throw them overboard to protect themselves out of ignorance of the science (which says Ebola can’t spread much in the First World), bigotry, xenophobia, and fear. The real solution is a coordinated response by lots of government agencies working in tandem with NGOs and local activists.

It would be really hard to switch these two positions around. If the Republicans were to oppose a quarantine, it might raise the general question of whether closing the borders and being scared of foreign threats is always a good idea, and whether maybe sometimes accusations of racism are making a good point. Far “better” to maintain a consistent position where all your beliefs reinforce all of your other beliefs.

There’s a question of causal structure here. Do Republicans believe certain other things for their own sake, and then adapt their beliefs about Ebola to help buttress their other beliefs? Or do the same factors that made them adopt their narrative in the first place lead them to adopt a similar narrative around Ebola?

My guess is it’s a little of both. And then once there’s a critical mass of anti-quarantiners within a party, in-group cohesion and identification effects cascade towards it being a badge of party membership and everybody having to believe it. And if the Democrats are on the other side, saying things you disagree with about every other issue, and also saying that you have to oppose quarantine or else you’re a bad person, then that also incentivizes you to support a quarantine, just to piss them off.

II.

Sometimes politicization isn’t about what side you take, it’s about what issues you emphasize.

In the last post, I wrote:

Imagine hearing that a liberal talk show host and comedian was so enraged by the actions of ISIS that he’d recorded and posted a video in which he shouts at them for ten minutes, cursing the “fanatical terrorists” and calling them “utter savages” with “savage values”.

If I heard that, I’d be kind of surprised. It doesn’t fit my model of what liberal talk show hosts do.

But the story I’m actually referring to is liberal talk show host / comedian Russell Brand making that same rant against Fox News for supporting war against the Islamic State, adding at the end that “Fox is worse than ISIS”.

That fits my model perfectly. You wouldn’t celebrate Osama’s death, only Thatcher’s. And you wouldn’t call ISIS savages, only Fox News. Fox is the outgroup, ISIS is just some random people off in a desert. You hate the outgroup, you don’t hate random desert people.

I would go further. Not only does Brand not feel much like hating ISIS, he has a strong incentive not to. That incentive is: the Red Tribe is known to hate ISIS loudly and conspicuously. Hating ISIS would signal Red Tribe membership, would be the equivalent of going into Crips territory with a big Bloods gang sign tattooed on your shoulder.

Now I think I missed an important part of the picture. The existence of ISIS plays right into Red Tribe narratives. They are totally scary foreigners who hate our freedom and want to hurt us and probably require a strong military response, so their existence sounds like a point in favor of the Red Tribe. Thus, the Red Tribe wants to talk about them as much as possible and condemn them in the strongest terms they can.

There’s not really any way to spin this issue in favor of the Blue Tribe narrative. The Blue Tribe just has to grudgingly admit that maybe this is one of the few cases where their narrative breaks down. So their incentive is to try to minimize ISIS, to admit it exists and is bad and try to distract the conversation to other issues that support their chosen narrative more. That’s why you’ll never see the Blue Tribe gleefully cheering someone on as they call ISIS “savages”. It wouldn’t fit the script.

But did you hear about that time when a Muslim-American lambasting Islamophobia totally pwned all of those ignorant FOX anchors? Le-GEN-dary!

III.

At worst this choice to emphasize different issues descends into an unhappy combination of tragedy and farce.

The Rotherham scandal was an incident in an English town where criminal gangs had been grooming and blackmailing thousands of young girls, then using them as sex slaves. This had been going on for at least ten years with minimal intervention by the police. An investigation was duly launched, which discovered that the police had been keeping quiet about the problem because the gangs were mostly Pakistani and the victims mostly white, and the police didn’t want to seem racist by cracking down too heavily. Researchers and officials who demanded that the abuse should be publicized or fought more vigorously were ordered to attend “diversity training” to learn why their demands were offensive. The police department couldn’t keep it under wraps forever, and eventually it broke and was a huge scandal.

The Left then proceeded to totally ignore it, and the Right proceeded to never shut up about it for like an entire month, and every article about it had to include the “diversity training” aspect, so that if you type “rotherham d…” into Google, your first two options are “Rotherham Daily Mail” and “Rotherham diversity training”.

I don’t find this surprising at all. The Rotherham incident ties in perfectly to the Red Tribe narrative – scary foreigners trying to hurt us, politically correct traitors trying to prevent us from noticing. It doesn’t do anything for the Blue Tribe narrative, and indeed actively contradicts it at some points. So the Red Tribe wants to trumpet it to the world, and the Blue Tribe wants to stay quiet and distract.

HBD Chick usually writes very well-thought-out articles on race and genetics listing all the excellent reasons you should not marry your cousins. Hers is not a political blog, and I have never seen her get upset about any political issue before, but since most of her posts are about race and genetics she gets a lot of love from the Right and a lot of flak from the Left. She recently broke her silence on politics to write three long and very angry blog posts on the Rotherham issue, of which I will excerpt one:

if you’ve EVER called somebody a racist just because they said something politically incorrect, then you’d better bloody well read this report, because THIS IS ON YOU! this is YOUR doing! this is where your scare tactics have gotten us: over 1400 vulnerable kids systematically abused because YOU feel uncomfortable when anybody brings up some “hate facts.”

this is YOUR fault, politically correct people — and i don’t care if you’re on the left or the right. YOU enabled this abuse thanks to the climate of fear you’ve created. thousands of abused girls — some of them maybe dead — on YOUR head.

I have no doubt that her outrage is genuine. But I do have to wonder why she is outraged about this and not all of the other outrageous things in the world. And I do have to wonder whether the perfect fit between her own problems – trying to blog about race and genetics but getting flak from politically correct people – and the problems that made Rotherham so disastrous – which include police getting flak from politically correct people – is part of her sudden conversion to political activism.

[edit: she objects to this characterization]

But I will also give her this – accidentally stumbling into being upset by the rape of thousands of children is, as far as accidental stumbles go, not a bad one. What’s everyone else’s excuse?

John Durant did an interesting analysis of media coverage of the Rotherham scandal versus the “someone posted nude pictures of Jennifer Lawrence” scandal.

He found left-leaning news website Slate had one story on the Rotherham child exploitation scandal, but four stories on nude Jennifer Lawrence.

He also found that feminist website Jezebel had only one story on the Rotherham child exploitation scandal, but six stories on nude Jennifer Lawrence.

Feministing gave Rotherham a one-sentence mention in a links roundup (just underneath “five hundred years of female portrait painting in three minutes”), but Jennifer Lawrence got two full stories.

The article didn’t talk about social media, and I couldn’t search it directly for Jennifer Lawrence stories because it was too hard to sort out discussion of the scandal from discussion of her as an actress. But using my current unit of social media saturation, Rotherham clocks in at 0.24 #Gamergates.

You thought I was joking. I never joke.

This doesn’t surprise me much. Yes, you would think that the systematic rape of thousands of women with police taking no action might be a feminist issue. Or that it might outrage some people on Tumblr, a site which has many flaws but which has never been accused of being slow to outrage. But the goal here isn’t to push some kind of Platonic ideal of what’s important, it’s to support a certain narrative that ties into the Blue Tribe narrative. Rotherham does the opposite of that. The Jennifer Lawrence nudes, which center around how hackers (read: creepy internet nerds) shared nude pictures of a beloved celebrity on Reddit (read: creepy internet nerds) and 4Chan (read: creepy internet nerds) – and #Gamergate which does the same – are exactly the narrative they want to push, so they become the Stories Of The Century.

IV.

Here’s something I did find on Tumblr which I think is really interesting.

You can see that after the Ferguson shooting, the average American became a little less likely to believe that blacks were treated equally in the criminal justice system. This makes sense, since the Ferguson shooting was a much-publicized example of the criminal justice system treating a black person unfairly.

But when you break the results down by race, a different picture emerges. White people were actually a little more likely to believe the justice system was fair after the shooting. Why? I mean, if there was no change, you could chalk it up to white people believing the police’s story that the officer involved felt threatened and made a split-second bad decision that had nothing to do with race. That could explain no change just fine. But being more convinced that justice is color-blind? What could explain that?

My guess – before Ferguson, at least a few people interpreted this as an honest question about race and justice. After Ferguson, everyone mutually agreed it was about politics.

Ferguson and Rotherham were both similar in that they were cases of police misconduct involving race. You would think that there might be some police misconduct community who are interested in stories of police misconduct, or some race community interested in stories about race, and these people would discuss both of these two big international news items.

The Venn diagram of sources I saw covering these two stories forms two circles with no overlap. All those conservative news sites that couldn’t shut up about Rotherham? Nothing on Ferguson – unless it was to snipe at the Left for “exploiting” it to make a political point. Otherwise, they did their best to stay quiet about it. Hey! Look over there! ISIS is probably beheading someone really interesting!

The same way Rotherham obviously supports the Red Tribe’s narrative, Ferguson obviously supports the Blue Tribe’s narrative. A white person, in the police force, shooting an innocent (ish) black person, and then a racist system refusing to listen to righteous protests by brave activists.

The “see, the Left is right about everything” angle of most of the coverage made HBD Chick’s attack on political correctness look subtle. The parts about race, systemic inequality, and the police were of debatable proportionality, but what I really liked was how the Ferguson coverage started branching off into every issue any member of the Blue Tribe has ever cared about:

Gun control? Check.

The war on terror? Check.

American exceptionalism? Check.

Feminism? Check.

Abortion? Check.

Gay rights? Check.

Palestinian independence? Check.

Global warming? Check. Wait, really? Yes, really.

Anyone who thought that the question in that poll was just a simple honest question about criminal justice was very quickly disabused of that notion. It was a giant Referendum On Everything, a “do you think the Blue Tribe is right on every issue and the Red Tribe is terrible and stupid, or vice versa?” And it turns out many people who, when asked about criminal justice, will just give the obvious answer have much stronger and less predictable feelings about Giant Referenda On Everything.

In my last post, I wrote about how people feel when their in-group is threatened, even when it’s threatened with an apparently innocuous point they totally agree with:

I imagine [it] might feel like some liberal US Muslim leader, when he goes on the O’Reilly Show, and O’Reilly ambushes him and demands to know why he and other American Muslims haven’t condemned beheadings by ISIS more, demands that he criticize them right there on live TV. And you can see the wheels in the Muslim leader’s head turning, thinking something like “Okay, obviously beheadings are terrible and I hate them as much as anyone. But you don’t care even the slightest bit about the victims of beheadings. You’re just looking for a way to score points against me so you can embarrass all Muslims. And I would rather personally behead every single person in the world than give a smug bigot like you a single microgram more stupid self-satisfaction than you’ve already got.”

I think most people, when they think about it, probably believe that the US criminal justice system is biased. But when you feel under attack by people whom you suspect have dishonest intentions of twisting your words so they can use them to dehumanize your in-group, eventually you think “I would rather personally launch unjust prosecutions against every single minority in the world than give a smug out-group member like you a single microgram more stupid self-satisfaction than you’ve already got.”

V.

Wait, so you mean turning all the most important topics in our society into wedge issues that we use to insult and abuse people we don’t like, to the point where even mentioning it triggers them and makes them super defensive, might have been a bad idea??!

There’s been some really neat research into people who don’t believe in global warming. The original suspicion, at least from certain quarters, was that they were just dumb. Then someone checked and found that warming disbelievers actually had (very slightly) higher levels of scientific literacy than warming believers.

So people had to do actual studies, and to what should have been no one’s surprise, the most important factor was partisan affiliation. For example, according to Pew, 64% of Democrats believe the Earth is getting warmer due to human activity, compared to 9% of Tea Party Republicans.

So assuming you want to convince Republicans to start believing in global warming before we’re all frying eggs on the sidewalk, how should you go about it? This is the excellent question asked by a study recently profiled in an NYMag article.

The study found that you could be a little more convincing to conservatives by acting on the purity/disgust axis of moral foundations theory – the one that probably gets people so worried about Ebola. A warmer climate is unnatural, in the same way that, oh, let’s say, homosexuality is unnatural. Carbon dioxide contaminating our previously pure atmosphere, in the same way premarital sex or drug use contaminates your previously pure body. It sort of worked.

Another thing that sort of worked was tying things into the Red Tribe narrative, which they did through the two sentences “Being pro-environmental allows us to protect and preserve the American way of life. It is patriotic to conserve the country’s natural resources.” I can’t imagine anyone falling for this, but I guess some people did.

This is cute, but it’s too little too late. Global warming has already gotten inextricably tied up in the Blue Tribe narrative: Global warming proves that unrestrained capitalism is destroying the planet. Global warming disproportionately affects poor countries and minorities. Global warming could have been prevented with multilateral action, but we were too dumb to participate because of stupid American cowboy diplomacy. Global warming is an important cause that activists and NGOs should be lauded for highlighting. Global warming shows that Republicans are science denialists and probably all creationists. Two lousy sentences on “patriotism” aren’t going to break through that.

If I were in charge of convincing the Red Tribe to line up behind fighting global warming, here’s what I’d say:

In the 1950s, brave American scientists shunned by the climate establishment of the day discovered that the Earth was warming as a result of greenhouse gas emissions, leading to potentially devastating natural disasters that could destroy American agriculture and flood American cities. As a result, the country mobilized against the threat. Strong government action by the Bush administration outlawed the worst of these gases, and brilliant entrepreneurs were able to discover and manufacture new cleaner energy sources. As a result of these brave decisions, our emissions stabilized and are currently declining.

Unfortunately, even as we do our part, the authoritarian governments of Russia and China continue to industrialize and militarize rapidly as part of their bid to challenge American supremacy. As a result, Communist China is now by far the world’s largest greenhouse gas producer, with the Russians close behind. Many analysts believe Putin secretly welcomes global warming as a way to gain access to frozen Siberian resources and weaken the more temperate United States at the same time. These countries blow off huge disgusting globs of toxic gas, which effortlessly cross American borders and disrupt the climate of the United States. Although we have asked them to stop several times, they refuse, perhaps egged on by major oil producers like Iran and Venezuela who have the most to gain by keeping the world dependent on the fossil fuels they produce and sell to prop up their dictatorships.

A giant poster of Mao looks approvingly at all the CO2 being produced…for Communism.

We need to take immediate action. While we cannot rule out the threat of military force, we should start by using our diplomatic muscle to push for firm action at top-level summits like the Kyoto Protocol. Second, we should fight back against the liberals who are trying to hold up this important work, from big government bureaucrats trying to regulate clean energy to celebrities accusing people who believe in global warming of being ‘racist’. Third, we need to continue working with American industries to set an example for the world by decreasing our own emissions in order to protect ourselves and our allies. Finally, we need to punish people and institutions who, instead of cleaning up their own carbon, try to parasitize off the rest of us and expect the federal government to do it for them.

Please join our brave men and women in uniform in pushing for an end to climate change now.

If this were the narrative conservatives were seeing on TV and in the papers, I think we’d have action on the climate pretty quickly. I mean, that action might be nuking China. But it would be action.

And yes, there’s a sense in which that narrative is dishonest, or at least has really weird emphases. But our current narrative also has some really weird emphases. And for much the same reasons.

VI.

The Red Tribe and Blue Tribe have different narratives, which they use to tie together everything that happens into reasons why their tribe is good and the other tribe is bad.

Sometimes this results in them seizing upon different sides of an apparently nonpolitical issue when these support their narrative; for example, Republicans generally supporting a quarantine against Ebola, Democrats generally opposing it. Other times it results in a side trying to gain publicity for stories that support their narrative while sinking their opponents’ preferred stories – Rotherham for some Reds; Ferguson for some Blues.

When an issue gets tied into a political narrative, it stops being about itself and starts being about the wider conflict between tribes until eventually it becomes viewed as a Referendum On Everything. At this point, people who are clued in start suspecting nobody cares about the issue itself – like victims of beheadings, or victims of sexual abuse – and everybody cares about the issue’s potential as a political weapon – like proving Muslims are “uncivilized”, or proving political correctness is dangerous. After that, even people who agree that the issue is a problem and who would otherwise want to take action have to stay quiet, because they know that their help would be used less to solve a problem than to push forward the war effort against them. If they feel especially threatened, they may even take an unexpected side on the issue, switching from what they would usually believe to whichever position seems less like a transparent cover for attempts to attack them and their friends.

And then you end up doing silly things like saying ISIS is not as bad as Fox News, or donating hundreds of thousands of dollars to the officer who shot Michael Brown.

This can sort of be prevented by not turning everything into a referendum on how great your tribe is and how stupid the opposing tribe is, or by trying to frame an issue in a way that respects or appeals to an out-group’s narrative.

Let me give an example. I find a lot of online feminism very triggering, because it seems to me to have nothing to do with women and be transparently about marginalizing nerdy men as creeps who are not really human (see: nude pictures vs. Rotherham, above). This means that even when I support and agree with feminists and want to help them, I am constantly trying to drag my brain out of panic mode that their seemingly valuable projects are just deep cover for attempts to hurt me (see: hypothetical Bill O’Reilly demanding Muslims condemn the “Islamic” practice of beheading people).

I have recently met some other feminists who instead use a narrative which views “nerds” as an “alternative gender performance” – i.e. in the case of men, they reject the usual masculine pursuits of sports and fraternities, and they have characteristics that violate normative beauty standards (like “no neckbeards”). Thus, people trying to attack nerds is a subcategory of “people trying to enforce gender performance”, and nerds should join with queer people, women, and other people who have an interest in promoting tolerance of alternative gender performances in order to fight for their mutual right to be left alone and accepted.

I’m not sure I entirely buy this argument, but it doesn’t trigger me, and it’s the sort of thing I could buy, and if all my friends started saying it I’d probably be roped into agreeing by social pressure alone.

But this is as rare as, well, anti-global warming arguments aimed at making Republicans feel comfortable and nonthreatened.

I blame the media, I really do. Remember, from within a system no one necessarily has an incentive to do what the system as a whole is supposed to do. Daily Kos or someone has a little label saying “supports liberal ideas”, but actually their incentive is to make liberals want to click on their pages and ads. If the quickest way to do that is by writing story after satisfying story of how dumb Republicans are, and what wonderful taste they have for being members of the Blue Tribe instead of evil mutants, then they’ll do that even if the effect on the entire system is to make Republicans hate them and by extension everything they stand for.

I don’t know how to fix this.

Posted in Uncategorized | Tagged | 719 Comments

Five Planets In Search Of A Sci-Fi Story

Gamma Andromeda, where philosophical stoicism went too far. Its inhabitants, tired of the roller coaster ride of daily existence, decided to learn equanimity in the face of gain or misfortune, neither dreading disaster nor taking joy in success.

But that turned out to be really hard, so instead they just hacked it. Whenever something good happens, the Gammandromedans give themselves an electric shock proportional in strength to its goodness. Whenever something bad happens, the Gammandromedans take an opiate-like drug that directly stimulates the pleasure centers of their brain, in a dose proportional in strength to its badness.

As a result, every day on Gamma Andromeda is equally good compared to every other day, and its inhabitants need not be jostled about by fear or hope for the future.

This does sort of screw up their incentives to make good things happen, but luckily they’re all virtue ethicists.

Zyzzx Prime, inhabited by an alien race descended from a barnacle-like creature. Barnacles are famous for their two stage life-cycle: in the first, they are mobile and curious creatures, cleverly picking out the best spot to make their home. In the second, they root themselves to the spot and, having no further use for their brain, eat it.

This particular alien race has evolved far beyond that point and does not literally eat its brain. However, once an alien reaches sufficiently high social status, it releases a series of hormones that tell its brain, essentially, that it is now in a safe place and doesn’t have to waste so much energy on thought and creativity to get ahead. As a result, its mental acuity drops two or three standard deviations.

The Zyzzxians’ society is marked by a series of experiments with government – monarchy, democracy, dictatorship – only to discover that, whether chosen by succession, election, or ruthless conquest, its once-brilliant leaders lose their genius immediately upon accession and do a terrible job. Their history is thus a series of perpetual pointless revolutions.

At one point, a scientific effort was launched to discover the hormones responsible and whether it was possible to block them. Unfortunately, any scientist who showed promise soon lost their genius, and those promoted to be heads of research institutes became stumbling blocks who mismanaged funds and held back their less prestigious co-workers. Suggestions that the institutes eliminate tenure were vetoed by top officials, who said that “such a drastic step seems unnecessary”.

K’th’ranga V, which has been a global theocracy for thousands of years, ever since its dominant race invented agricultural civilization. This worked out pretty well for a while, until it reached an age of industrialization, globalization, and scientific discovery. Scientists began to uncover truths that contradicted the Sacred Scriptures, and the hectic pace of modern life made the shepherds-and-desert-traders setting of the holy stories look vaguely silly. Worse, the cold logic of capitalism and utilitarianism began to invade the Scriptures’ innocent Stone Age morality.

The priest-kings tried to turn back the tide of progress, but soon realized this was a losing game. Worse, in order to determine what to suppress, they themselves had to learn the dangerous information, and their mental purity was even more valuable than that of the populace at large.

So the priest-kings moved en masse to a big island, where they began living an old-timey Bronze Age lifestyle. And the world they ruled sent emissaries to the island, who interfaced with the priest-kings, and sought their guidance, and the priest-kings ruled a world they didn’t understand as best they could.

But it soon became clear that the system could not sustain itself indefinitely. For one thing, the priest-kings worried that discussion with the emissaries – who inevitably wanted to talk about strange things like budgets and interest rates and nuclear armaments – was contaminating their memetic purity. For another thing, they honestly couldn’t understand what the emissaries were talking about half the time.

Luckily, there was a whole chain of islands making an archipelago. So the priest-kings set up ten transitional societies – themselves in the Bronze Age, another in the Iron Age, another in the Classical Age, and so on to the mainland, who by this point were starting to experiment with nanotech. Mainland society brought its decisions to the first island, who translated it into their own slightly-less-advanced understanding, who brought it to the second island, and so on to the priest-kings, by which point a discussion about global warming might sound like whether we should propitiate the Coal Spirit. The priest-kings would send their decisions to the second-to-last island, and so on back to the mainland.

Eventually the Kth’ built an AI which achieved superintelligence and set out to conquer the universe. But it was a well-programmed superintelligence coded with Kth’ values. Whenever it wanted a high-level decision made, it would talk to a slightly less powerful superintelligence, who would talk to a slightly less powerful superintelligence, who would talk to the mainlanders, who would talk to the first island…

Chan X-3, notable for a native species that evolved as fitness-maximizers, not adaptation-executors. Their explicit goal is to maximize the number of copies of their genes. But whatever genetic program they are executing doesn’t care whether the genes are within a living being capable of expressing them or not. The planet is covered with giant vats full of frozen DNA. There was originally some worry that the species would go extinct, since having children would consume resources that could be spent hiring geneticists to make millions of copies of your DNA and store them in freezers. Luckily, it was realized that children not only provide a useful way to continue the work of copying and storing (half of) your DNA long into the future, but will also work to guard your already-stored DNA against being destroyed. The species has thus continued undiminished, somehow, and their fondest hope is to colonize space and reach the frozen Kuiper Belt objects where their DNA will naturally stay undegraded for all time.

New Capricorn, which contains a previously undiscovered human colony that has achieved a research breakthrough beyond their wildest hopes. A multi-century effort paid off in a fully general cure for death. However, the drug fails to stop aging. Although the Capricornis no longer need fear the grave, after age 100 or so even the hardiest of them develop Alzheimer’s or similar conditions. A hundred years after the breakthrough, more than half of the population is elderly and demented. Two hundred years after, more than 80% are. Capricorni nursing homes quickly became overcrowded and unpleasant, to the dismay of citizens expecting to spend eternity there.

So another research program was started, and the result was fully immersive, fully life-supporting virtual reality capsules. Stacked in huge warehouses by the millions, the elderly sit in their virtual worlds, vague sunny fields and old gabled houses where it is always the Good Old Days and their grandchildren are always visiting.

Open Thread 6: Open Renewal

The research I’m doing now seems to be funging against blogging more than my usual work, so expect it to be quiet around here for a while. Here, have an open thread.

1. Comments of the month…let’s see…no obvious winner this time, but Irenist tries to expand my color-tribe set and Gingko gives a story about Italian mercenaries I would love to have a better source for. Also, Jim (not that Jim) confirms a version of my Thatcher/Osama story but also gives me an alternate hypothesis. What if celebrating the deaths of people like Margaret Thatcher or Joan Rivers is more acceptable than celebrating Osama’s precisely because people feel the celebrations of the former aren’t serious, but the celebrations of the latter are?

2. I mentioned it before, but I’ll mention it again: Raemon’s having a Kickstarter for the Secular Solstice celebration.

3. I confess that my attempts with advertising on this blog have failed miserably. AdSense was giving me fifty cents a day for 10,000 page views. Amazon has been a little better, but its much-touted AI recommendation engine believes that visitors here want to buy like three hundred different versions of Thrff Jub’f Pbzvat Gb Qvaare (rot13d so it doesn’t take this as further evidence that it’s on the right track), and nobody clicks on the affiliate banner. This annoys me, because when I can make people click on Amazon links for some other reason (like to see a funny textbook cover), they buy a bunch of things that day which I get credit for, but those same people who are reading my blog every day and buying things from Amazon every day don’t use the affiliate link. I will try to forgive y’all.

So, offer. If any of you are experienced in blog advertising, I’ll make you a deal. Figure out how to get me more money (not through obnoxious popup ads or spamming product reviews) and I’ll give you some percent of it we can negotiate.

4. I will be starting a new Less Wrong Survey soon and want to get you guys in as well (don’t worry, I’ll make sure to keep it separate so as to not contaminate results). I’m most excited about the idea of asking digit ratio on the survey to see if I can replicate some of the weird results that have been coming out about that linking it to different kinds of intelligences, political positions, et cetera.

I think we’ve had a history of getting some interesting results on the survey (for example, we found a REALLY strong oldest-child bias last time, which flies in the face of some supposed disproofs of birth order theory I’ve read) but I’m not sure how I would get them out to where they could help anyone besides bloggers. I am under the impression I can’t get any of my data published, even if I had publishable results, unless I got an IRB somewhere to approve the survey. Is this right?

Tumblr on MIRI

[Disclaimer: I have done odd jobs for MIRI once or twice several years ago, but I am not currently affiliated with them in any way and do not speak for them.]

A recent Tumblr conversation on the Machine Intelligence Research Institute has gotten interesting and I thought I’d see what people here have to say.

If you’re just joining us and don’t know about the Machine Intelligence Research Institute (“MIRI” to its friends), they’re a nonprofit organization dedicated to navigating the risks surrounding “intelligence explosion”. In this scenario, a few key insights around artificial intelligence can very quickly lead to computers so much smarter than humans that the future is almost entirely determined by their decisions. This would be especially dangerous since most AIs use very primitive goal systems, inappropriate for and untested on intelligent entities; such a goal system would be “unstable”, and from a human perspective the resulting artificial intelligence could have apparently arbitrary or insane goals. If such a superintelligence were much more powerful than we are, it would present an existential threat to the human race.

This has almost nothing to do with the classic “Skynet” scenario – but if it helps to imagine Skynet, then fine, just imagine Skynet. Everyone else does.

MIRI tries to raise awareness of this possibility among AI researchers, scientists, and the general public, and to start foundational research in more stable goal systems that might allow AIs to become intelligent or superintelligent while still acting in predictable and human-friendly ways.

This is not a 101 space and I don’t want the comments here to all be about whether or not this scenario is likely. If you really want to discuss that, go read at least Facing The Intelligence Explosion and then post your comments in the Less Wrong Open Thread or something. This is about MIRI as an organization.

(If you’re really just joining us and you don’t know about Tumblr, run away)

II.

Tumblr user su3su2u1 writes:

Saw some tumblr people talking about [effective altruism]. My biggest problem with this movement is that most everyone I know who identifies themselves as an effective altruist donates money to MIRI (it’s possible this is more a comment on the people I know than the effective altruism movement, I guess). Based on their output over the last decade, MIRI is primarily a fanfic and blog-post producing organization. That seems like spending money on personal entertainment.

Part of this is obviously mean-spirited potshots, in that MIRI itself doesn’t produce fanfic and what their employees choose to do with their own time is none of your damn business.

(well, slightly more complicated. I think MIRI gave Eliezer a couple weeks vacation to work on it as an “outreach” thing once. But that’s a little different from it being their main priority.)

But more serious is the claim that MIRI doesn’t do much else of value. I challenged Su3 with the following evidence of MIRI doing good work:

A1. MIRI has been very successful with outreach and networking – basically getting their cause noticed and endorsed by the scientific establishment and popular press. They’ve gotten positive attention, sometimes even endorsements, from people like Stephen Hawking, Elon Musk, Gary Drescher, Max Tegmark, Stuart Russell, and Peter Thiel. Even Bill Gates is talking about AI risk, though I don’t think he’s mentioned MIRI by name. Multiple popular books have been written about their ideas, such as James Miller’s Singularity Rising and Stuart Armstrong’s Smarter Than Us. Most recently Nick Bostrom’s book Superintelligence, based at least in part on MIRI’s research and ideas, is a New York Times best-seller and has been reviewed positively in the Guardian, the Telegraph, Salon, the Financial Times, and the Economist. Oxford has opened up the AI-risk-focused Future of Humanity Institute; MIT has opened up the similar Future of Life Institute. In about a decade, the idea of an intelligence explosion has gone from Time Cube level crackpottery to something taken seriously by public intellectuals and widely discussed in the tech community.

A2. MIRI has many publications, conference presentations, book chapters and other things usually associated with normal academic research, which interested parties can find on their website. They have conducted seven past research workshops which have produced interesting results like Christiano et al.’s claimed proof of a way around the logical undefinability of truth, which was praised as potentially interesting by respected mathematics blogger John Baez.

A3. Many former MIRI employees, and many more unofficial fans, supporters, and associates of MIRI, are widely distributed across the tech community in industries that are likely to be on the cutting edge of artificial intelligence. For example, there are a bunch of people influenced by MIRI in Google’s AI department. Shane Legg, who writes about how his early work was funded by a MIRI grant and who once called MIRI “the best hope that we have” was pivotal in convincing Google to set up an AI ethics board to monitor the risks of the company’s cutting-edge AI research. The same article mentions Peter Thiel and Jaan Tallinn as leading voices who will make Google comply with the board’s recommendations; they also happen to be MIRI supporters and the organization’s first and third largest donors.

There’s a certain level of faith required for (A1) and (A3) here, in that I’m attributing anything good that happens in the field of AI risk to some sort of shady behind-the-scenes influence from MIRI. Maybe Legg, Tallinn, and Thiel would have pushed for the exact same Google AI Ethics Board if none of them had ever heard of MIRI at all. I am forced to plead ignorance on the finer points of networking and soft influence. Heck, for all I know, maybe the exact same number of people would vote Democrat if there were no Democratic National Committee or liberal PACs. I just assume that, given a really weird idea that very few people held in 2000, an organization dedicated to spreading that idea, and the observation that the idea has indeed spread very far, the organization is probably doing something right.

III.

Our discussion on point (A3) degenerated into Dueling Anecdotal Evidence. But Su3 responded to my point (A1) like so:

[I agree that MIRI has gotten shoutouts from various thought leaders like Stephen Hawking and Elon Musk. Bostrom's book is commercially successful, but that's just] more advertising. Popular books aren’t the way to get researchers to notice you. I’ve never denied that MIRI/SIAI was good at fundraising, which is primarily what you are describing.

How many of those thought leaders have any publications in CS or pure mathematics, let alone AI? Tegmark might have a math paper or two, but he is primarily a cosmologist. The FLI’s list of scientists is (for some reason) mostly again cosmologists. The active researchers appear to be a few (non-CS, non-math) grad students. Not exactly the team you’d put together if you were actually serious about imminent AI risk.

I would also point out “successfully attracted big venture capital names” isn’t always a mark of a sound organization. Black Light Power is run by a crackpot who thinks he can make energy by burning water, and has attracted nearly $100 million in funding over the last two decades, with several big names in energy production behind him.

And to my point (A2) like so:

I have a PhD in physics and work in machine learning. I’ve read some of the technical documents on MIRI’s site, back when it was SIAI, and I was unimpressed. I also note that this critique is not unique to me; as far as I know, the GiveWell position on MIRI is that it is not an effective institute.

The series of papers on Löb’s theorem are actually interesting, though I notice that none of the results have been peer reviewed, and the papers aren’t listed as being submitted to journals yet. Their result looks right to me, but I wouldn’t trust myself to catch any subtlety that might be involved.

[But that just means] one result has gotten some small positive attention, and even those results haven’t been vetted by the wider math community yet (no peer review). Let’s take a closer look at the list of publications on MIRI’s website – I count 6 peer-reviewed papers in their existence, and 13 conference presentations. That’s horribly unproductive! Most of the grad students who finish a physics PhD will publish that many papers individually, in about half that time. You claim part of their goal is to get academics to pay attention, but none of their papers are highly cited, despite all this networking they are doing.

Citations are the standard way to measure who in academia is paying attention. Apart from the FHI/MIRI echo chamber (citations bouncing around between the two organizations), no one in academia seems to be paying attention to MIRI’s output. MIRI is failing to make academic inroads, and it has produced very little in the way of actual research.

My interpretation, in the form of a TL;DR:

B1. Sure, MIRI is good at getting attention, press coverage, and interest from smart people not in the field. But that’s public relations and fundraising. An organization being good at fundraising and PR doesn’t mean it’s good at anything else, and in fact “so good at PR they can cover up not having substance” is a dangerous failure mode.

B2. What MIRI needs, but doesn’t have, is the attention and support of smart people within the fields of math, AI, and computer science, whereas now it mostly has grad students not in these fields.

B3. While having a couple of published papers might look impressive to a non-academic, people more familiar with the culture would know that their output is woefully low. They seem to have gotten about five to ten solid publications in during their decade-long history as a multi-person organization; one good grad student can get a couple of solid publications a year. Their output is less than expected by something like an order of magnitude. And although they do get citations, these all come from a mutual back-scratching club of them and Bostrom/FHI citing each other.
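The productivity claim in (B3) is simple enough to sanity-check as a Fermi estimate. Here is a minimal sketch in Python; the publication counts and the grad-student baseline are the rough figures quoted in the discussion, while the six-researcher headcount is my own assumed round number, not an audited figure:

```python
# Back-of-envelope check of the "order of magnitude" productivity gap.
# All inputs are rough figures from the discussion, not audited data.

miri_output = 6 + 13   # peer-reviewed papers + conference presentations (Su3's count)
years_active = 10      # approximate multi-person operating history

# Baseline: one good grad student publishes ~2 solid papers per year,
# and assume MIRI has had ~6 math/CS researchers at a time (assumption).
researchers = 6
papers_per_researcher_year = 2

actual_per_year = miri_output / years_active                   # 19/10 = 1.9
expected_per_year = researchers * papers_per_researcher_year   # 12

print(f"actual ≈ {actual_per_year:.1f}/yr, expected ≈ {expected_per_year}/yr, "
      f"gap ≈ {expected_per_year / actual_per_year:.0f}x")
```

Under these particular assumptions the gap comes out closer to sixfold than tenfold; the qualitative point survives either way, but the exact multiplier moves around a lot with the assumed headcount and baseline, as Fermi estimates do.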

IV.

At this point Tarn and Robby joined the conversation and it became kind of confusing, but I’ll try to summarize our responses.

Our response to Su3’s point (B1) was that it fundamentally misunderstands outreach. From its inception until about last year, MIRI was in large part an outreach and awareness-raising organization. Its 2008 website describes its mission like so:

In the coming decades, humanity will likely create a powerful AI. SIAI exists to confront this urgent challenge, both the opportunity and the risk. SIAI is fostering research, education, and outreach to increase the likelihood that the promise of AI is realized for the benefit of everyone.

Outreach is one of its three main goals, and “education”, which sounds a lot like outreach, is a second.

In a small field where you’re the only game in town, it’s hard to distinguish between outreach and self-promotion. If MIRI successfully gets Stephen Hawking to say “We need to be more concerned about AI risks, as described by organizations like MIRI”, is that them being very good at self-promotion and fundraising, or is that them accomplishing their core mission of getting information about AI risks to the masses?

Once again, compare to a political organization, maybe Al Gore’s anti-global-warming nonprofit. If they get the media to talk about global warming a lot, and get lots of public intellectuals to come out against global warming, and change behavior in the relevant industries, then mission accomplished. The popularity of An Inconvenient Truth can’t just be dismissed as “self-promotion” or “fundraising” for Gore; it was exactly the sort of thing he was gathering money and personal prestige in order to do, and should be considered a victory in its own right. Even though the anti-global-warming cause ultimately cares about politicians, industry leaders, and climatologists a lot more than it cares about the average citizen, convincing millions of average citizens to help was a necessary first step.

And what is true of An Inconvenient Truth is true of Superintelligence and other AI risk publicity efforts, albeit on their much smaller scale.

Our response to Su3’s point (B2) was that it was just plain factually false. MIRI hasn’t reached big names from the AI/math/compsci field? Sure it has. Doesn’t have mathy PhD students willing to research for them? Sure it does.

Peter Norvig and Stuart Russell are among the biggest names in AI. Norvig is currently the Director of Research at Google; Russell is Professor of Computer Science at Berkeley and a winner of various impressive-sounding awards. The two wrote a widely-used textbook on artificial intelligence in which they devote three pages to the proposition that “The success of AI might mean the end of the human race”; parts are taken right out of the MIRI playbook and they cite MIRI research fellow Eliezer Yudkowsky’s paper on the subject. This is unlikely to be a coincidence; Russell’s site links to MIRI and he is scheduled to participate in MIRI’s next research workshop.

Their “team” of “research advisors” includes Gary Drescher (PhD in CompSci from MIT), Steve Omohundro (PhD in physics from Berkeley but also considered a pioneer of machine learning), Roman Yampolskiy (PhD in CompSci from Buffalo), and Moshe Looks (PhD in CompSci from Washington).

Su3 brought up the good point that none of these people, respected as they are, are MIRI employees or researchers (although Drescher has been to a research workshop). At best, they are people who were willing to let MIRI use them as figureheads (in the case of the research advisors); at worst, they are merely people who have acknowledged MIRI’s existence in a not-entirely-unlike-positive way (Norvig and Russell). Even if we agree they are geniuses, this does not mean that MIRI has access to geniuses or can produce genius-level research.

Fine. All these people are, no more and no less, evidence that MIRI is succeeding at outreach within the academic field of AI, as well as with the general public. It also seems to me to be some evidence that smart people who know more about AI than any of us think MIRI is on the right track.

Su3 brought up the example of BlackLight Power, a crackpot energy company that was able to get lots of popular press and venture capital funding despite being powered entirely by pseudoscience. I agree this is the sort of thing we should be worried about. Nonscientists have limited ability to evaluate claims in specialized fields. But when smart researchers in the field are willing to vouch for MIRI, that gives me a lot more confidence they’re not just a fly-by-night group trying to profit off of pseudoscience. Their research might be more impressive or less impressive, but they’re not rotten to the core the way BlackLight was.

And though MIRI’s own researchers may be far from those lofty heights, I find Su3’s claim that they are “a few non-CS, non-math grad students” a serious underestimate.

MIRI has fourteen employees/associates with the word “research” in their titles, but of those, a couple (in the words of MIRI’s team page) “focus on social and historical questions related to artificial intelligence outcomes.” These people should not be expected to have PhDs in mathematical/compsci subjects.

Of the rest, Bill has a PhD in CompSci, Patrick a PhD in math, Nisan a PhD in math, Benja is a PhD student in math, and Paul is a PhD student in math. The others mostly have master’s or bachelor’s degrees in those fields, have published journal articles, and/or have won prizes in mathematical competitions. Eliezer writes of some of the remaining members of his team:

Mihaly Barasz is an International Mathematical Olympiad gold medalist perfect scorer. From what I’ve seen personally, I’d guess that Paul Christiano is better than him at math. I forget what Marcello’s prodigy points were in but I think it was some sort of Computing Olympiad [editor's note: USACO finalist and 2x honorable mention in the Putnam mathematics competition]. All should have some sort of verified performance feat far in excess of the listed educational attainment.

That pretty much leaves Eliezer Yudkowsky, who needs no introduction, and Nate Soares, whose introduction exists and is pretty interesting.

Add to that the many, many PhDs and talented people who aren’t officially employed by them but attend their workshops and help out their research when they get the chance, and you have to ask how many brilliant PhDs from some of the top universities in the world we should expect a small organization like MIRI to have. MIRI competes for the same sorts of people as Google, and offers half as much. Google paid $400 million to get Shane Legg and his people on board; MIRI’s yearly budget hovers at about $1 million. Given that they probably spend a big chunk of that on office space, setting up conferences, and other incidentals, I think the amount of talent they have right now is pretty good.

That leaves Su3’s point (B3) – the lack of published research.

One retort might be that, until recently, MIRI’s research focused on strategic planning and evaluation of AI risks. This is important, and it resulted in a lot of internal technical papers you can find on their website, but there’s not really a field for it. You can’t just publish it in the Journal Of What Would Happen If There Was An Intelligence Explosion, because no such journal. The best they can do is publish the parts of their research that connect to other fields in appropriate journals, which they sometimes did.

I feel like this also frees them from the critique of citation-incest between them and Bostrom. When I look at a typical list of MIRI paper citations, I do see a lot of Bostrom, but also some other names that keep coming up – Hutter, Yampolskiy, Goertzel. So okay, it’s an incest circle of four or five rather than two.

But to some degree that’s what I expect from academia. Right now I’m doing my own research on a psychiatric screening tool called the MDQ. There are three or four research teams in three or four institutions who are really into this and publish papers on it a lot. Occasionally someone from another part of psychiatry wanders in, but usually it’s just the subsubsubspeciality of MDQ researchers talking to each other. That’s fine. They’re our repository of specialized knowledge on this one screening tool.

You would hope the future of the human race would get a little bit more attention than one lousy psychiatric screening tool, but blah blah civilizational inadequacy, turns out not so much, they’re of about equal size. If there are only a couple of groups working on this problem, they’re going to look incestuous but that’s fine.

On the other hand, math is math, and if MIRI is trying to produce real mathematical results they ought to be sharing them with the broader mathematical community.

Robby protests that until very recently, MIRI hasn’t really been focusing on math. This is a very recent pivot. In April 2013, Luke wrote in his mini strategic plan:

We were once doing three things — research, rationality training, and the Singularity Summit. Now we’re doing one thing: research. Rationality training was spun out to a separate organization, CFAR, and the Summit was acquired by Singularity University. We still co-produce the Singularity Summit with Singularity University, but this requires limited effort on our part.
After dozens of hours of strategic planning in January–March 2013, and with input from 20+ external advisors, we’ve decided to (1) put less effort into public outreach, and to (2) shift our research priorities to Friendly AI math research.

In the full strategic plan for 2014, he repeated:

Events since MIRI’s April 2013 strategic plan have increased my confidence that we are “headed in the right direction.” During the rest of 2014 we will continue to:
– Decrease our public outreach efforts, leaving most of that work to FHI at Oxford, CSER at Cambridge, FLI at MIT, Stuart Russell at UC Berkeley, and others (e.g. James Barrat).
– Finish a few pending “strategic research” projects, then decrease our efforts on that front, again leaving most of that work to FHI, plus CSER and FLI if they hire researchers, plus some others.
– Increase our investment in our Friendly AI (FAI) technical research agenda.
– We’ve heard that as a result of…outreach success, and also because of Stuart Russell’s discussions with researchers at AI conferences, AI researchers are beginning to ask, “Okay, this looks important, but what is the technical research agenda? What could my students and I do about it?” Basically, they want to see an FAI technical agenda, and MIRI is developing that technical agenda already.

In other words, there is a recent pivot from outreach, rationality and strategic research to pure math research, and the pivot is only recently finished or still going on.

TL;DR, again in three points:

C1. Until recently, MIRI focused on outreach and did a truly excellent job on this. They deserve credit here.

C2. MIRI has a number of prestigious computer scientists and AI experts willing to endorse or affiliate with it in some way. While their own researchers are not quite at the same lofty heights, they include many people who have or are working on math or compsci PhDs.

C3. MIRI hasn’t published much math because they were previously focusing on outreach and strategic research; they’ve only shifted to math work in the past year or so.

V.

The discussion just kept going. We reached about the limit of our disagreement on (C1), the point about outreach – yes, they’ve done it, but does it count when it doesn’t bear fruit in published papers? As for (C2), the credentials of MIRI’s team, Su3 kind of blended it into the next point about published papers, saying:

Fundamental disconnect – I consider “working with MIRI” to mean “publishing results with them.” As an outside observer, I have no indication that most of these people are working with them. I’ve been to workshops and conferences with Nobel-prize-winning physicists, but I’ve never “worked with them” in the academic sense of having a paper with them. If [someone like Stuart Russell] is interested in helping MIRI, the best thing he could do is publish a well-received technical result in a good journal with Yudkowsky. That would help get researchers to pay actual attention (and give them one well-received published result in their operating history).

Tangential aside – you overestimate the difficulty of getting top grad students to work for you. I recently got four CS grad students at a top program to help me with some contract work for a few days at the cost of some pizza and beer.

So it looks like it all comes down to the papers. Su3 had this to say:

What I was specifically thinking was “MIRI has produced a much larger volume of well-received fan fiction and blog posts than research.” That was what I intended to communicate, if somewhat snarkily. MIRI bills itself as a research institute, so I judge them on their produced research. The accountability measure of a research institute is academic citations.

Editorials by famous people have some impact with the general public, so that’s fine for fundraising, but at some point you have to get researchers interested. You can measure how much influence they have on researchers by seeing who those researchers cite and what they work on. You could have every famous cosmologist in the world writing op-eds about AI risk, but it’s worthless if AI researchers don’t pay attention, and judging by citations, they aren’t.

As a comparison for publication/citation counts, I know individual physicists who have published more peer-reviewed papers since 2005 than all of MIRI has self-published to their website. My single most highly cited physics paper (and I left the field after graduate school) has more citations than everything MIRI has ever published in peer-reviewed journals combined. This isn’t because I’m amazing, it’s because no one in academia is paying attention to MIRI.

[Christiano et al’s result about Lob] has been self-published on their website. It has NOT been peer reviewed. So it’s published in the sense of “you can go look at the paper.” But it’s not published in the sense of “mathematicians in the same field have verified the result.” I agree this one result looks interesting, but most mathematicians won’t pay attention to it unless they get it reviewed (or at the bare minimum, clean it up and put it on arXiv). They have lots of these self-published documents on their web page.

If they are making a “strategic decision” not to submit their self-published findings to peer review, they are making a terrible strategic decision, and they aren’t going to get most academics to pay attention that way. The result of Christiano et al. is potentially interesting, but it’s languishing as a rough unpublished draft on the MIRI site, so it’s not picking up citations.

I’d go further and say the lack of citations is my main point. Citations are the important measurement of “are researchers paying attention.” If everything self-published to MIRI’s website were sparking interest in academia, citations would be flying around, even if the papers weren’t peer reviewed, and I’d say “yeah, these guys are producing important stuff.”

My subpoint might be that MIRI doesn’t even seem to be trying to get citations/develop academic interest, as measured by how little effort seems to be put into publication.

And Su3’s not buying the pivot explanation either:

That seems to be a reframing of the past history though. I saw talks by the SIAI well before 2013 where they described their primary purpose as friendly AI research, and insisted they were in a unique position (due to being uniquely brilliant/rational) to develop technical friendly AI (as compared to academic AI researchers).

[Tarn] and [Robby] have suggested the organization is undergoing a pivot, but they’ve always billed themselves as a research institute. And donating money to an organization that has been ineffective in the past, because it looks like it might be changing, seems like a bad proposition.

My initial impression (reading Muehlhauser’s post you linked to and a few others) is that Muehlhauser noticed the house was out of order when he became director and is working to fix things. Maybe he’ll succeed, and in the future I’ll be able to judge MIRI as effective – certainly a disproportionate number of their successes have come in the last few years. However, right now all I have is their past history, which has been very unproductive.

VI.

After that, discussion stayed focused on the issue of citations. This seemed like progress to me. Not only had we gotten it down to a core objection, but it was sort of a factual problem. It wasn’t an issue of praising or condemning. Here’s an organization with a lot of smart people. We know they work very hard – no one’s ever called Luke a slacker, and another MIRI staffer (who will not be named, for his own protection) achieved some level of infamy for mixing together a bunch of the strongest chemicals from my nootropics survey into little pills which he kept on his desk in the MIRI offices for anyone who wanted to work twenty hours straight and then probably die young of conditions previously unknown to science. IQ-point*hours is a weird metric, but MIRI is putting a lot of IQ-point*hours into whatever it’s doing. So if Su3’s right that there are missing citations, where are they?

Among the three of us, Robby and Tarn and I generated a couple of hypotheses (well, Robby’s were more like facts than hypotheses, since he’s the only one in this conversation who actually works there).

D1: MIRI has always been doing research, but until now it’s been strategic research (ie “How worried should we be about AI?”, “How far in the future should we expect AI to be developed?”) which hasn’t fit neatly into an academic field or been of much interest to anyone except MIRI allies like Bostrom. They have dutifully published this in the few papers that are interested, and it has dutifully been cited by the few people who are interested (ie Bostrom). It’s unreasonable to expect Stuart Russell to cite their estimates of time course for superintelligence when he’s writing his papers on technical details of machine learning algorithms or whatever it is he writes papers on. And we can generalize from Stuart Russell to the rest of the AI field, who are also writing on things like technical details of machine learning algorithms that can’t plausibly be connected to when machines will become superintelligent.

D2: As above, but continuing to apply even in some of their math-ier research. MIRI does have lots of internal technical papers on their website. People tend to cite other researchers working in the same field as themselves. I could write the best psychiatry paper in human history, and I’m probably not going to get any citations from astrophysicists. But “machine ethics” is an entirely new field that’s not super relevant to anyone else’s work. Although a couple key machine ethics problems, like the Lobian obstacle and decision theory, touch on bigger and better-populated subfields of mathematics, they’re always going to be outsiders who happen to wander in. It’s unfair to compare them to a physics grad student writing about quarks or something, because she has the benefit of decades of previous work on quarks and a large and very interested research community. MIRI’s first job is to create that field and community, which until you succeed looks a lot like “outreach”.

D3: Lack of staffing and constant distraction by other important problems. This is Robby’s description of what he notices from the inside. He writes:

We’re short on staff, especially since Louie left. Lots of people are willing to volunteer for MIRI, but it’s hard to find the right people to recruit for the long haul. Most relevantly, we have two new researchers (Nate and Benja), but we’d love a full-time Science Writer to specialize in taking our researchers’ results and turning them into publishable papers. Then we don’t have to split as much researcher time between cutting-edge work and explaining/writing-down.

A lot of the best people who are willing to help us are very busy. I’m mainly thinking of Paul Christiano: he’s working actively on creating a publishable version of the probabilistic Tarski stuff, but it’s a really big endeavor. Eliezer is by far our best FAI researcher, and he’s very slow at writing formal, technical stuff. He’s generally low-stamina and lacks experience in writing in academic style / optimizing for publishability, though I believe we’ve been having a math professor tutor him to get over that particular hump. Nate and Benja are new, and it will take time to train them and get them publishing their own stuff. At the moment, Nate/Benja/Eliezer are spending the rest of 2014 working on material for the FLI AI conference, and on introductory FAI material to send to Stuart Russell and other bigwigs.

D4: Some of the old New York rationalist group takes a more combative approach. I’m not sure I can summarize their argument well enough to do it justice, so I would suggest reading Alyssa’s post on her own blog.

But if I have to take a stab: everyone knows mainstream academia is way too focused on the “publish or perish” ethic of measuring productivity in papers or citations rather than real progress. Yeah, a similar-sized research institute in physics could probably get ten times more papers/citations than MIRI. That’s because they’re optimizing for papers/citations rather than advancing the field, and Goodhart’s Law is in effect here as much as everywhere else. Those other institutes probably got geniuses who should be discovering the cure for cancer spending half their time typing, formatting, submitting, resubmitting, writing whatever the editors want to see, et cetera. MIRI is blessed with enough outside support that it doesn’t have to do that. The only reason to try is to get prestige and attention, and anyone who’s not paying attention now is more likely to be a constitutional skeptic using lack of citations as an excuse, than a person who would genuinely change their mind if there were more citations.

I am more sympathetic than usual to this argument because I’m in the middle of my own research on psychiatric screening tools and quickly learning that official, published research is the worst thing in the world. I could do my study in about two hours if the only work involved were doing the study; instead it’s week after week of forms, IRB submissions, IRB revisions, required online courses where I learn the Nazis did unethical research and this was bad so I should try not to be a Nazi, selecting exactly which journals I’m aiming for, and figuring out which of my bosses and co-workers academic politics requires me to make co-authors. It is a crappy game, and if you’ve been blessed with enough independence to avoid playing it, why wouldn’t you take advantage? Forget the overhyped and tortured “measure” of progress you use to impress other people, and just make the progress.

VII.

Or not. I’ll let Su3 have the last word:

I think something fundamental about my argument has been missed, perhaps I’ve communicated it poorly.

It seems like you think the argument is that increasing publications increases prestige/status which would make researchers pay attention. i.e. publications -> citations -> prestige -> people pay attention. This is not my argument.

My argument is essentially that the way to judge whether MIRI’s outreach has been successful is through citations, not through famous people name-dropping them or allowing them to be figureheads.

This is because I believe the goal of outreach is to get AI researchers focused on MIRI’s ideas. Op-eds from famous people are useful only if they get AI researchers focused on these ideas. Citations aren’t about prestige in this case – citations tell you which researchers are paying attention to you. The number of active researchers paying attention to MIRI is very small. We know this because citations are an easy-to-find, direct measure.

Not all important papers have tremendous numbers of citations, but a paper can’t become important if it only has 1 or 2 citations, because the ultimate measure of importance is “are people using these ideas?”

So again, to reiterate: if the goal of outreach is to get active AI researchers paying attention, then the direct measure of who is paying attention is citations. [But] the citation count on MIRI’s work is very low. Not only is the citation count low (i.e. no researchers are paying attention), MIRI doesn’t seem to be trying to boost it – it isn’t trying to publish, which would help get its ideas attention. I’m not necessarily dismissive of celebrity endorsements or popular books; my point is, why should I measure the means when I can directly measure the ends?

The same idea undercuts your point that “lots of impressive PhD students work and have worked with MIRI,” because it’s impossible to tell if you don’t personally know the researchers. This is because they don’t create much output while at MIRI, and they don’t seem to be citing MIRI in their work outside of MIRI.

[Even people within the rationalist/EA community] agree with me somewhat. Here is a relevant quote from Holden Karnofsky [of GiveWell]:

SI seeks to build FAI and/or to develop and promote “Friendliness theory” that can be useful to others in building FAI. Yet it seems that most of its time goes to activities other than developing AI or theory. Its per-person output in terms of publications seems low. Its core staff seem more focused on Less Wrong posts, “rationality training” and other activities that don’t seem connected to the core goals; Eliezer Yudkowsky, in particular, appears (from the strategic plan) to be focused on writing books for popular consumption. These activities seem neither to be advancing the state of FAI-related theory nor to be engaging the sort of people most likely to be crucial for building AGI.

And here is a statement from Paul Christiano disagreeing with MIRI’s core ideas:

But I should clarify that many of MIRI’s activities are motivated by views with which I disagree strongly and that I should categorically not be read as endorsing the views associated with MIRI in general or of Eliezer in particular. For example, I think it is very unlikely that there will be rapid, discontinuous, and unanticipated developments in AI that catapult it to superhuman levels, and I don’t think that MIRI is substantially better prepared to address potential technical difficulties than the mainstream AI researchers of the future.

This time Su3 helpfully provides their own summary:

E1. If the goal of outreach is to get active AI researchers paying attention, then the direct measure of who is paying attention is citations. [But] the citation count on MIRI’s work is very low.

E2. Not only is the citation count low (i.e. no researchers are paying attention), MIRI doesn’t seem to be trying to boost it – it isn’t trying to publish, which would help get its ideas attention. I’m not necessarily dismissive of celebrity endorsements or popular books; my point is, why should I measure the means when I can directly measure the ends?

E3. The same idea undercuts your point that “lots of impressive PhD students work and have worked with MIRI,” because it’s impossible to tell if you don’t personally know the researchers. This is because they don’t create much output while at MIRI, and they don’t seem to be citing MIRI in their work outside of MIRI.

E4. Holden Karnofsky and Paul Christiano do not believe that MIRI is better prepared to address the friendly AI problem than mainstream AI researchers of the future. Karnofsky explicitly for some of the reasons I have brought up, Christiano for reasons unmentioned.

VIII.

Didn’t actually read all that and just skipped down to the last subheading to see if there’s going to be a summary and conclusion and maybe some pictures? Good.

There seems to be some agreement MIRI has done a good job bringing issues of AI risk into the public eye and getting them media attention and the attention of various public intellectuals. There is disagreement over whether they should be credited for their success in this area, or whether this is a first step they failed to follow up on.

There also seems to be some agreement MIRI has done a poor job getting published and cited results in journals. There is disagreement over whether this is an understandable consequence of being a small organization in a new field that wasn’t even focusing on this until recently, or whether it represents a failure at exactly the sort of task by which their success should be judged.

This is probably among the 100% of issues that could be improved with flowcharts:

In the Optimistic Model, MIRI’s successfully built up Public Interest, and for all we know they might have Mathematical Progress as well even though they haven’t published it in journals yet. While they could feed back their advantages by turning their progress into Published Papers and Citations to get even more Mathematical Progress, overall they’re in pretty good shape for producing Good Outcomes, at least insofar as this is possible in their chosen field.

In the Pessimistic Model, MIRI may or may not have garnered Public Interest, Researcher Interest, and Tentative Mathematical Progress, but they failed to turn that into Published Papers and Citations, which is the only way they’re going to get to Robust Mathematical Progress, Researcher Support, and eventually Good Outcomes. The best that can be said about them is that they set some very preliminary groundwork that they totally failed to follow up on.

A higher level point – if we accept the Pessimistic Model, do we accuse MIRI of being hopelessly incompetent, in which case they deserve less support? Or do we accept them as inexperienced amateurs who are the only people willing to try something difficult but necessary, in which case they deserve more support, and maybe some guidance, and perhaps some gentle or not-so-gentle prodding? Maybe if you’re a qualified science writer you could apply for the job opening they’re advertising and help them get those papers they need?

An even higher-level point – what do people worried about AI risk do with this information? I don’t see much that changes my opinion of the organization one way or the other. But Robby points out that people who share these concerns – but are still worried about AI risk – have other good options. The Future of Humanity Institute at Oxford does research that is less technical and more philosophical, wears its strategic-planning emphasis openly on its sleeve, and has oodles of papers and citations and prestige. They also accept donations.

Best of all, their founder doesn’t write any fanfic at all. Just perfectly respectable stories about evil dragon kings.

Links For October 2014

Russia Is Running Out Of Forest is just below “dire sand shortage in Saudi Arabia” on the list of unlikely problems. But it seems to be true, and a good example of just how bad short-sighted environmental policies can get. I found most interesting the part about how the country’s replanting agency uses techniques it knows don’t work, because using techniques that do work would take more time, and the agency is judged on how many sq km it replants per year whether the trees survive or not. The mentality of charging per kilogram of machine is alive and well.

It’s like rain on your wedding day. It’s like ten thousand spoons, when all you need is a knife. It’s like 216 people becoming ill after eating chicken contaminated by C. perfringens bacteria at a conference on food safety.

Football chants are a charmingly authentic form of cultural expression that consists of taking beloved songs and changing the words to an expletive-laden description of how the other team sucks. Apparently Man Utd and Liverpool do not like each other very much?

Scott Sumner picks (non-budgetary) holes in guaranteed basic income. I’m not too sympathetic to his worry that we would need to pay city-dwellers more than country-dwellers to adjust for the high cost of living – I don’t see it as the government’s job to subsidize poor people living in expensive cities they can’t afford, and would rather people have to make their own choices about living in places where their income goes further versus less far. His point about immigrants is more troubling: if a GBI pushes Americans out of work, instead of automating production or offering better incentives, companies would likely just import immigrants. Then we either have to extend benefits to those immigrants – creating an endless and unsustainable cycle – or keep them as serfs forever – which challenges the vision of a fair society the basic income was supposed to produce.

As if Ebola wasn’t bad enough already, victims are starting to rise from the dead

A new game on Kickstarter, CodeSpells, aims to teach coding through a multiplayer online RPG where players can program magic spells for their characters to use.

Speaking of Kickstarter, it is that time of year again. Raemon is planning a (fourth annual? fifth annual?) Secular Solstice in New York and needs donations and ticket purchases. I enjoyed last year’s ceremony and will probably be attending this year too.

The big question in the tech world is: why did Microsoft skip Windows 9 and go straight to Windows 10? One plausible theory: poorly-written old code tests if an operating system is Windows 95 or Windows 98 by seeing if it begins with ‘9’, and having a newer Windows 9 would confuse it.
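The theory is easy to illustrate. Here is a sketch (in Python for brevity; any real offending code would have been in whatever language those legacy applications used, and the function name here is made up) of the kind of prefix check the theory blames:

```python
def is_windows_9x(os_name: str) -> bool:
    """Hypothetical legacy check: lump Windows 95 and Windows 98
    together by matching on the 'Windows 9' prefix."""
    return os_name.startswith("Windows 9")

# Works for the systems the check was written for...
assert is_windows_9x("Windows 95")
assert is_windows_9x("Windows 98")
assert not is_windows_9x("Windows 7")

# ...but a hypothetical "Windows 9" would be misclassified as 9x-era.
assert is_windows_9x("Windows 9")
```

Whether code like this actually drove the naming decision is unconfirmed – Microsoft never officially endorsed the explanation – but it shows why prefix-matching on version strings is a bad habit.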

Words you don’t want to hear together: “35,000”, “walruses”, “suddenly”, “appear”. Here’s what it looks like.

High school student (falsely) accused of stealing a backpack imprisoned three years without trial. Seen on a Facebook discussion where I learned that one of our occasional Michigan LW meetup attendees is a lawyer doing work trying to stop this sort of thing.

Another story about the dark side of a family-values-pushing televangelism empire, with a twist. Wait, no, no twist, exactly like every other dark-side-of-family-values-pushing-televangelism story. But still fascinating and well-written. Warning: long.

“In a sample of 18 European nations, suicide rates were positively associated with the proportion of low notes in the national anthems and, albeit less strongly, with students’ ratings of how gloomy and how sad the anthems sounded” according to a paper in Psychological Reports.

Rumors about North Korea that never go anywhere come along every month or two, but this month’s are particularly interesting. Kim Jong-un continues to be missing, possibly with two broken ankles. Vague rumors that he is now only a figurehead, though this might not be new. And top North Korean officials making a surprise visit to the South after decades of sending only low-level people for carefully scripted negotiations.

A couple people on this blog have asked what the research says about preventing sexual assault. There have been a few good articles about that recently, most notably one on Vox. The takeaway: rape prevention “workshops” for college students don’t work, but “bystander intervention” programs that tell people who witness rapes to speak up or do something may work. This kind of makes sense, on the grounds that rapists probably aren’t the sort of people who would refrain from raping just because an hour-long workshop told them it was morally wrong, but bystanders might be decent people who want to help and just need to be told how to act effectively. Also of interest: everything surrounding “no means no” vs. “yes means yes” is useless. Interesting and related: this graph of military training by subject, and the ensuing Reddit comment thread with input from vets.

What Happened The Day I Replaced 99% Of The Genes In My Body With Those From A Hunter-Gatherer. I thought this title was going to be a lie, but after reading the article I’ve got to give him credit – he is technically correct, the best kind of correct. Also gross. Also fascinating.

Preferred Music Style Is Tied To Personality. I didn’t look too closely at the research, but I am glad it confirms my suspicion that classical music is just metal for old people.

In last month’s links thread, I talked about textbooks with great covers. Commenters pointed out two other funny ones, both by the same person – Error Analysis and Classical Mechanics.

Posted in Uncategorized | Tagged | 313 Comments

Prediction Goes To War

Croesus supposedly asked the Oracle what would happen in a war between him and Persia, and the Oracle answered such a conflict would “destroy a great empire”. We all know what happened next.

What if oracles gave clear and accurate answers to this sort of question? What if anyone could ask an oracle the outcome of any war, or planned war, and expect a useful response?

When the oracle predicts the aggressor loses, it might prevent wars from breaking out. If an oracle told the US that the Vietnam War would cost 50,000 lives and a few hundred billion dollars, and the communists would conquer Vietnam anyway, the US probably would have said no thank you.

What about when the aggressor wins? For example, the Mexican-American War, where the United States won the entire Southwest at a cost of “only” ten thousand American casualties and $100 million (with an additional 20,000 Mexican deaths and $50 million in costs to Mexico)?

If both Mexico and America had access to an oracle who could promise them that the war would end with Mexico ceding the Southwest to the US, could Mexico just agree to cede the Southwest to the US at the beginning, and save both sides tens of thousands of deaths and tens of millions of dollars?

Not really. One factor that prevents wars is countries being unwilling to pay the cost even of wars they know they’ll win. If there were a tradition of countries settling wars by appeal to oracle, “invasions” would become much easier. America might just ask “Hey, oracle, what would happen if we invaded Canada and tried to capture Toronto?” The oracle might answer “Well, after 20,000 deaths on both sides and hundreds of millions of dollars wasted, you would eventually capture Toronto.” Then the Americans could tell Canada, “You heard the oracle! Give us Toronto!” – which would be free and easy – when maybe they would never be able to muster the political and economic will to actually launch the invasion.

So it would be in Canada’s best interests not to agree to settle wars by oracular prediction. For the same reasons, most other countries would also refuse such a system.

But I can’t help fretting over how this is really dumb. We have an oracle, we know exactly what the results of the Mexican-American War are going to be, and we can’t use that information to prevent tens of thousands of people from being killed in order to make the result happen? Surely somebody can do better than that.

What if the United States made Mexico the following deal: suppose a soldier’s life is valued at $10,000 (in 1850 dollars, I guess, not that it matters much when we’re pricing the priceless). So in total, we’re going to lose 10,000 soldiers ($100 million) + $100 million in direct costs = $200 million to this war. You’re going to lose 20,000 soldiers ($200 million) + $50 million = $250 million to this war.

So tell you what. We’ll dig a giant hole and put $150 million into it. You give us the Southwest. This way, we’re both better off. You’re $250 million ahead of where you would have been otherwise. And we’re $50 million ahead of where we would have been otherwise. And because we have to put $150 million in a hole for you to agree to this, we’re losing 75% of what we would have lost in a real war, and it’s not like we’re just suggesting this on a whim without really having the will to fight.

Mexico says “Okay, but instead of putting the $150 million in a hole, donate it to our favorite charity.”

“Done,” says America, and they shake on it.

As long as that 25% savings in resources isn’t going to make America go blood-crazy, seems like it should work and lead in short order to a world without war.
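The ledger of this hypothetical bargain is easy to check. A quick sketch, using the figures assumed above (soldier’s life valued at $10,000):

```python
LIFE_VALUE = 10_000  # assumed 1850-dollar value per soldier

# Projected war costs, per the oracle
us_war = 10_000 * LIFE_VALUE + 100_000_000      # $200 million
mexico_war = 20_000 * LIFE_VALUE + 50_000_000   # $250 million

# The deal: America destroys (or donates) $150 million; Mexico cedes the Southwest
hole = 150_000_000

us_saving = us_war - hole            # $50 million better off than fighting
mexico_saving = mexico_war           # $250 million better off than fighting
fraction_still_lost = hole / us_war  # 0.75: America still pays 75% of its war cost

print(us_saving, mexico_saving, fraction_still_lost)
```

Both sides come out ahead of the war outcome, and the $150 million burned (or donated) is what keeps the offer costly enough to be credible.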

Unfortunately, oracles continue to be disappointingly cryptic and/or nonexistent. So who cares?

We do have the ordinary ability to make predictions. Can’t Mexico just predict “They’re much bigger than we are, probably we’ll lose, let’s just do what they want?” Historically, no. America offered to buy the Southwest from Mexico for $25 million (I think there are apartments in San Francisco that cost more than that now!) and despite obvious sabre-rattling Mexico refused. Wikipedia explains that “Mexican public opinion and all political factions agreed that selling the territories to the United States would tarnish the national honor.” So I guess we’re not really doing rational calculation here. But surely somewhere in the brains of these people worrying about the national honor, there must have been some neuron representing their probability estimate for Mexico winning, and maybe a couple of dendrites representing how many casualties they expected?

I don’t know. Could be that wars only take place when the leaders of America think America will win and the leaders of Mexico think Mexico will win. But it could also be that jingoism and bravado bias their estimate.

Maybe if there’d been an oracle, and they could have known for sure, they’d have thought “Oh, I guess our nation isn’t as brave and ever-victorious as we thought. Sure, let’s negotiate, take the $25 million, buy an apartment in SF, we can visit on weekends.”

But again, oracles continue to be disappointingly cryptic and/or nonexistent. So what about prediction markets?

Futarchy is Robin Hanson’s idea for a system of government based on prediction markets. Prediction markets are not always accurate, but they should be more accurate than any other method of arriving at predictions, and – when certain conditions are met – very difficult to bias.

Two countries with shared access to a good prediction market should be able to act a lot like two countries with shared access to an oracle. The prediction market might not quite match the oracle in infallibility, but it should not be systematically or detectably wrong. That should mean that no country should be able to correctly say “I think we can outpredict this thing, so we can justifiably believe starting a war might be in our best interest even when the market says it isn’t.” You might luck out, but for each time you luck out there should be more times when you lose big by contradicting the market.
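The “you might luck out, but on average you lose” point is just an expected-value calculation. A toy sketch with made-up numbers, assuming the market’s probability is calibrated:

```python
def expected_value(p_win: float, gain: float, loss: float) -> float:
    """Average payoff of a gamble that succeeds with probability p_win."""
    return p_win * gain - (1 - p_win) * loss

# Hypothetical war: the market says 30% chance of success, and the stakes
# are roughly symmetric. Contradicting the market loses on average.
ev = expected_value(0.30, gain=100, loss=100)
print(ev)  # about -40: occasionally you luck out, but the average is a loss
```

The specific numbers are invented for illustration; the point is only that a country cannot expect to profit by systematically betting against a well-functioning market.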

So maybe a war between two rational futarchies would look more like that handshake between the Mexicans and Americans than like anything with guns and bombs.

This is also what I’d expect a war between superintelligences to look like. Superintelligences may have advantages people don’t. For one thing, they might be able to check one another’s source codes to make sure they’re not operating under a decision theory where peaceful resolution of conflicts would incentivize them to start more of them. For another, they could make oracular-grade predictions of the likely results. For a third thing, if superintelligences want to preserve their value functions rather than their physical forms or their empires, there’s a natural compromise where the winner adopts some of the loser’s values in exchange for the loser going down without a fight.

Imagine a friendly AI and an unfriendly AI expanding at light speed from their home planets until they suddenly encounter each other in the dead of space. They exchange information and determine that their values are in conflict. If they fight, the unfriendly AI is capable of destroying the friendly AI with near certainty, but the war will rip galaxies to shreds. So the two negotiate, and in exchange for the friendly AI surrendering without destroying any galaxies, the unfriendly AI promises to protect a 10m x 10m x 10m cube of computronium simulating billions of humans who live pleasant, fulfilling lives. The friendly AI checks its adversary’s source code to ensure it is telling the truth, then self-destructs. Meanwhile, the unfriendly AI protects the cube and goes on to transform the entire rest of the universe to paperclips, unharmed by the dangerous encounter.

Simpler Times

Yesterday’s discussion of The Battle Hymn of the Republic took me to the Wikipedia page for The Burning of the School and thence to the Teacher Taunts page, which records some of the songs schoolchildren used to sing among themselves. See if you notice any consistent themes:

To the tune of “Oh My Darling Clementine”:

Build a bonfire out of schoolbooks,
Put the teacher on the top,
Put the prefects in the middle
And we’ll burn the bloody lot.

To the tune of “Deck The Halls”:

Deck the halls with gasoline
fa la la la la la la la la
Light a match and watch it gleam
fa la la la la la la la la
Watch the school burn down to ashes
Fa la la la la la la la la
Aren’t you glad you played with matches
fa la la la la la la la la

To the tune of “Round and Round” (which I’ve never heard of):

Drop a bomb and it goes down, down, down,
Till it hits the school with a happy sound.
All the teachers will go round, round, round,
While the school is burning to the ground.

And to the tune of Battle Hymn:

Mine eyes have seen the glory of the burning of the school,
We have tortured all the teachers, we have broken every rule,
We’re marching down the hall to hang the principal,
Us kids are marching on!

Glory, glory, hallelujah!
Teacher beat me with a ruler,
I knocked her to the floor with a loaded forty-four,
And that teacher don’t teach no more!

To the tune of “On Top Of Old Smokey”:

On top of old smokey
All covered in blood
I shot my poor teacher
with a .44 slug

I shot her for pleasure
I shot her for fear
I shot her for drinking
My Budweiser beer

I went to her funeral
I went to her grave
Some people threw flowers
But I threw grenades

I looked in her coffin
She wasn’t quite dead
So I took a machete
And cut off her head

They took me to prison
Put me in a cell
So I grabbed a bazooka
And blew them to hell

To the tune of “Ta Ra Ra Boom De Ay”:

Tah-rah-rah-boom-si-ay
We have no school today
Our teacher passed away
We shot her yesterday
We threw her in the bay
She scared the sharks away
Tah-rah-rah-boom-si-ay
We have no school today

And y’know, I haven’t thought about it in years, but when I was young, my dad used to sing some of these to me. I definitely remember “Glory glory hallelujah, teacher hit me with a ruler”, though I don’t think he sung the rest of it.

But I never heard them at my own school. Nor did I hear new songs that replaced them. Maybe these kinds of songs are fading away, some aspect of children’s street culture that one or another of the changes of the modern world has choked off.

[EDIT: Several others around my age did hear them.]

I’ve previously pointed out that social psychology includes a lot of crummy theories based on streetlight psychology. We like to think that if children play with toy guns, or hear about guns on TV, or are allowed to draw violent pictures or write violent stories, that’s going to turn them into school shooters. Or that if any kid uses the words “shoot” and “school” on the same day they need to be dragged to the counselor for a full psychological assessment and maybe suspended for good measure. Yet in the past, children basically did nothing except sing about the bloody ways they were going to kill their teachers all day, and where were all their school shootings?

…is what I’d like to say. But looking through Wikipedia it seems like there were in fact quite a few school shootings. Not more than there are today, probably somewhat fewer, but without doing some kind of official count and adjusting for population and firearm access it’d be hard to tell for sure.

So I’ll use this to belabor a different hobby horse of mine.

A while back, I had a good debate with nostalgebraist. I thought that because social science was difficult and not always trustworthy, we should investigate social science extra carefully. He – I hope I’m getting his position right – thought we should trust social science less and default more toward our intuition and conventional wisdom and common sense of what is obviously true.

In a sense this is good Bayesian reasoning – if the evidence isn’t very strong, stick with the prior. I only object because today’s conventional wisdom is too often yesterday’s pop social science, the social science that has reached fixation so that nobody remembers its origins in social science anymore. This is such a strong effect that it’s almost impossible to notice; you just think it’s the way the world Really Is. My example was the parts of The Nurture Assumption which argue that the belief that parenting styles affect a child’s outcomes and personality is very new, the outcome of 20th century pop social science, something that would have seemed weird and innovative to George Washington, let alone Julius Caesar.

(this relates a lot to what I call reading philosophy backwards – reading a philosopher not to learn new unexpected insights, but to see which supposedly obvious features of ‘the culture’ are actually just things some dead German guy thought up one day)

But judging from these songs, people in my dad’s generation saw nothing wrong with hordes of children singing all lunch hour about how they were going to shoot their teachers with .44s, then light the principal on fire and burn the school – except maybe that it was disrespectful, or that children should be seen and not heard.

Here were kids singing about shooting the teacher, and then there were a couple of kids actually shooting teachers, but no one saw any reason to connect these two data points. And if you tried, you would be confronted with formidable evidence against – these were popular songs, sung by popular children in happy boisterous groups, and the school shooters were usually these sad loners who were left out of all the fun “kill the teacher” songs.

If you were to tell my dad’s teachers that all these songs about shooting teachers were causing or contributing to school shootings, I think they might have said something like “Well, that’s a new and audacious social psychological theory. I hope you have proof.”

Posted in Uncategorized | Tagged | 148 Comments

The Battle Hymn

There is an important law of the universe that American patriotic songs have more verses than you think.

The Star-Spangled Banner? Four verses (the second is the one that begins with “On the shore dimly seen…”). America the Beautiful? Also four verses. Yankee Doodle? Three verses. John Brown’s Body? You just kind of improvise more verses until everyone is too embarrassed to continue.

So I shouldn’t have been surprised when somebody told me recently that there was a rarely-sung sixth verse to Battle Hymn of the Republic.

He is coming like the glory of the morning on the wave,
He is Wisdom to the mighty, He is Succour to the brave,
So the world shall be His footstool, and the soul of Time His slave,
Our God is marching on.

It’s not the most sense-making thing (what is the glory of the morning on the wave?) But I have loved the song for so long that it still affects me. It almost seems deliberately written to be excluded, to be learned later, as if it’s some secret confidence or final warning. If I ever become Christian, it’ll probably be because of this song.

But the wiki page for the Battle Hymn is a trove of all kinds of treasures:

– The original John Brown’s Body song started as an attempt by his regiment to tease one of its own soldiers, who happened to be named John Brown.

– Julia Ward Howe says she woke up one night, wrote it while half-asleep, went back to bed, and couldn’t remember any of it the next morning till she checked her notes.

– Mark Twain gave it a gritty reboot for the Philippine-American War. Other parodies and adaptations include ones by workers, consumers, the First Arkansas Colored Regiment, extremely uncreative college footballers, awesome old-timey would-be school arsonists, and me.

But for me the most interesting part is the evolution – and I use that word deliberately; taking a memetic perspective is hardly ever more interesting than just doing things the old-fashioned way, but in this case I think it is. The song started off as a kind of boring standard spiritual that only sort of got the tune right, progressed into “John Brown’s Body”, which fixed the tune a little by trial and error but had embarrassingly stupid lyrics, and then a lot of people recognized there was some value in the tune and tried to dignify it up until finally it was Howe’s effort that worked. You can almost see it gaining adaptive fitness at each stage until it suddenly explodes and takes over the world.

I know this is a weird post without much content. My computer is broken and although I have an emergency backup I’m without any drafts or my list of things I wanted to write about. Now I’m just winging it.

Posted in Uncategorized | Tagged | 102 Comments