Slate Star Codex

By the author of unsongbook.com

Links 6/16: URLing Toward Freedom

Did you know England has one of the highest rates of tornadoes per unit area of anywhere in the world?

Why do some schools produce a disproportionate share of math competition winners? May not just be student characteristics.

My post The Control Group Is Out Of Control, as well as some of the Less Wrong posts that inspired it, has gotten cited in a recent preprint article, A Skeptical Eye On Psi, on what psi can teach us about the replication crisis. One of the authors is someone I previously yelled at, so I like to think all of that yelling is having a positive effect.

The Prescription Drug vs. Tolkien Elf Quiz. I am a doctor and Silmarillion fan, and I still only got 93%.

A study from Sweden (it’s always Sweden) does really good work examining the effect of education on IQ. It takes an increase in mandatory Swedish schooling length which was rolled out randomly at different times in different districts, and finds that the districts where people got more schooling have higher IQ; in particular, an extra year of education increases permanent IQ by 0.75 points. I was previously ambivalent about this, but this is a really strong study and I guess I have to endorse it now (though it’s hard to say how g-loaded or how linear the effect is). Also of note: the extra schooling permanently harmed emotional control ability by 0.5 points on a scale identical to IQ’s (mean 100, SD 15). This is of course the opposite of past studies, which suggest that education does not improve IQ but does help non-cognitive factors. But this study looked at an extra year tacked onto the end of education, whereas earlier ones measured extra education tacked onto the beginning, or just made the whole educational process more efficient. Still weird, but again, this is a good experiment. (EDIT: This might not be on g)

Did you know: Russian author Sergey Lukyanenko (of Night Watch fame) wrote a series of sci-fi novels set in the Master of Orion universe.

In my review of Age of Em, I said we were very far away from being able to simulate human brains, and sure enough, just a few days later Derek Lowe wrote the fascinating Simulating The Brain? Let’s Try Donkey Kong First. Brain simulation proponents hope that, without really understanding the brain, we can make simple models of each part and of how the parts connect, and that these models will reproduce the brain’s activity. But we can test these techniques right now on a much simpler and more accessible object – an old video game microprocessor – and they’re not good enough to do anything useful. See also Simulating The Brain. Sure Thing.

A post-mortem of the National Children’s Study, which was supposed to be a gold standard for gathering data on early childhood risk, but fell apart because of politics and administrative incompetence.

80,000 Hours’ career guide for people who hate career guides. Lots of statistics on how often each job-search strategy succeeds and fails.

The Devil With Hitler was a 1940s US wartime propaganda film in which Hell wants Hitler to take over from Satan, and Satan has to trick Hitler into performing a good deed to win his position back.

Related: “The present U.S. official position seems to be that Satan may exist and, if so, might be found in New Hampshire.”

Did you know that the Great Pyramid at Giza actually has eight sides? Kind of a weird site, but seems to check out as per the academic literature.

In the game of callout culture, either you win or you die.

Pssst, wanna buy a 92-house town in the National Radio Quiet Zone? Only $1 million!

Related: Craigslist for 20 foot trebuchet

Google’s Larry Page has a flying car startup – and a second, competing flying car startup just to motivate the first one. Or at least he had them before someone wrote this article. I don’t know, if somebody says they’re going to give us flying cars but they might stop if it becomes public, I would think twice before publicizing it. And here’s a profile of the flying car design itself.

If Greece was the least neoliberal economy in the developed world, is it fair to blame its failures on neoliberalism?

Rate of innovation in Norway halved after law changed to give universities more of a share of professors’ discoveries.

Motherboard has an article about censorship on Reddit – it points out that Reddit moderators heavy-handedly censored discussion of the Orlando shooting in unspecified ways, then goes on to condemn the site for Donald Trump memes and anti-Hillary conspiracy theories. But it never mentions the whole point of the story it’s reporting on – that Reddit actually censored any information that the shooter was Middle Eastern or motivated by Islamic terrorism. I’m less worried about the Reddit censorship (which was eventually lifted) than I am about Motherboard’s own distorted reporting, which somehow turns a story about excessive political correctness into bashing Reddit for being right-wing.

30% of people would choose to be the other gender if reincarnated, no difference between men and women.

Sam Altman: Nine years of claims that Silicon Valley is a bubble about to burst.

ScienceNews: Bayesian reasoning implicated in some mental disorders. If you’re interested, I wrote a Less Wrong post on this kind of thing back in 2012.

One estimate says that millions of Russians were fooled by a TV documentary claiming that Lenin was a mushroom. Here’s a paper with a little more information than the wiki article. Key quote: “One of the top regional functionaries stated that ‘Lenin could not have been a mushroom’ because ‘a mammal cannot be a plant.'”

Despite the interest in assault rifles when discussing gun violence, Alex Tabarrok finds that rifles as a category account for only 3% of all gun deaths, and fewer total murders than knives, bare hands, or blunt weapons. The real problem is with handguns, which cause about 20x more deaths than all rifles, assault or otherwise.

New study: schools giving out condoms increases teen births. This is just one study about one specific type of situation, and I can think of a few other studies contradicting it, so I won’t quite retract my previous position that the existence of contraceptives probably lowers unplanned pregnancy. But I’m sure glad I’m not the people who were arguing that the position was so stupid that nobody really held it and it was just an excuse for hating women.

Study of 50,000 people who underwent surgery for obesity finds that they have mortality rates only about 30% of those of similar peers who did not have surgery for obesity. Obesity surgery is a really serious operation, and a lot of doctors are scared of it because the side effects might be worse than the disease, but I think this provides very strong evidence that it is very much worth it. I don’t know whether we should lower the threshold for who gets obesity surgery or not based on these data.

OT52: Once Open A Time

This is the bi-weekly visible open thread. There are hidden threads every few days here. Post about anything you want, ask random questions, whatever. Also:

1. Many comments worthy of highlighting this week. First, this thread on why banning encryption won’t prevent terrorism, since sufficiently smart terrorists could use steganography – especially Izaak’s demonstration of such. Second, baconbacon discusses different ways people he knows do or don’t save money, but see also Jeysiec, who notes that all of this discussion of savings can be moot when poor people don’t get significant amounts of money in the first place. Finally, Patjab gives more details on the background of Ken Livingstone and British anti-Semitism, and Dan Simon has an unusual theory of scandal.

2. There is a new ad on the sidebar this week: apply to work for Qualia. Or maybe you already work for Qualia and don’t know it yet. Maybe you have no private knowledge about whether you work for Qualia or not. Maybe you think that all of your friends work for Qualia, but they don’t work for Qualia at all and only say that they do. Maybe there is no difference even in principle between working for Qualia and not working for Qualia. Maybe working for Qualia is just a complicated way of describing the fact that you work for certain mechanical and biological companies – or was that information processing companies? I don’t know. Perhaps checking out their jobs page will shed light on some of these mysteries.

Against Dog Whistle-ism

I.

Back during the primary, Ted Cruz said he was against “New York values”.

A chump might figure that, being a Texan whose base is in the South and Midwest, he was making the usual condemnation of coastal elites and arugula-eating liberals that every other Republican has made before him, maybe with a special nod to the fact that his two most relevant opponents, Donald Trump and Hillary Clinton, were both from New York.

But sophisticated people immediately detected this as an “anti-Semitic dog whistle”, i.e. Cruz’s secret way of saying he hated Jews. Because, you see, there are many Jews in New York. By the clever stratagem of using words that had nothing to do with Jews or hatred, he was able to effectively communicate his Jew-hatred to other anti-Semites without anyone else picking up on it.

Except of course the entire media, which seized upon it as a single mass. New York values is coded anti-Semitism. New York values is a classic anti-Semitic slur. New York values is an anti-Semitic comment. New York values is an anti-Semitic code word. New York values gets called out as anti-Semitism. My favorite is this article whose headline claims that Ted Cruz “confirmed” that he meant his New York values comment to refer to Jews; the “confirmation” turned out to be that he referred to Donald Trump as having “chutzpah”. It takes a lot of word-I-am-apparently-not-allowed-to-say to frame that as a “confirmation”.

Meanwhile, back in Realityville (population: 6), Ted Cruz was attending synagogue services on his campaign tour, talking about his deep love and respect for Judaism, and getting described as “a hero” in many parts of the Orthodox Jewish community for his stance that “if you will not stand with Israel and the Jews, then I will not stand with you.”

But he once said “New York values”, so clearly all of this was just really really deep cover for his anti-Semitism.

II.

Unlike Ted Cruz, former London mayor Ken Livingstone said something definitely Jew-related and definitely worrying.

A month or two ago a British MP named Naz Shah got in trouble for sharing a Facebook post saying Israel should be relocated to the United States. Fellow British politician Ken Livingstone defended her, and one thing led to another, and somewhere in the process he might have kind of said that Hitler supported Zionism.

This isn’t totally out of left field. During the Nazi period in Germany, some Nazis who wanted to get rid of the Jews and some Jews who wanted to get away from the Nazis created the Haavara Agreement, which facilitated German Jewish emigration to Palestine. Hitler was ambivalent on the idea but seems to have at least supported some parts of it at some points. But it seems fair to say that calling Hitler a supporter of Zionism was at the very least a creative interpretation of the historical record.

The media went further, again as a giant mass. Ken Livingstone is anti-Semitic. Ken Livingstone is anti-Semitic. Ken Livingstone is anti-Semitic. Ken Livingstone is anti-Semitic. Ken Livingstone is anti-Semitic. I understand he is now having to defend himself in front of a parliamentary hearing on anti-Semitism.

So. First things first. Ken Livingstone is tasteless, thoughtless, embarrassing, has his foot in his mouth, is inept, clownish and offensive, and clearly made a blunder of cosmic proportions.

But is he anti-Semitic?

When I think “anti-Semitic”, I think of people who don’t like, maybe even hate, Jews. I think of the medieval burghers who accused Jews of baking matzah with the blood of Christian children. I think of the Russians who would hold pogroms and kill Jews and burn their property. I think of the Nazis. I think of people who killed various distant family members of mine without a second thought.

Obviously Livingstone isn’t that anti-Semitic. But my question is, is he anti-Semitic at all? Is there any sense in which his comments reveal that, in his heart of hearts, he really doesn’t like Jews? That he thinks of them as less – even slightly less – than Gentiles? That if he were to end up as Prime Minister of Britain, this would be bad in a non-symbolic, non-stupid-statement-related way for Britain’s Jewish community? Does he just say dumb things, or do the dumb things reflect some underlying attitude of his that colors his relationship with Jews in general?

(speaking of “his relationship with Jews”, he brings up in his own defense that two of his ex-girlfriends are Jewish)

I haven’t seen anyone present any evidence that Livingstone has any different attitudes or policies towards Jews than anyone else in his general vicinity. I don’t think even his worst enemies suggest that during a hypothetical Livingstone administration he would try (or even want) to kick the Jews out of Britain, or make them wear gold stars, or hire fewer Jews for top posts (maybe he’d hire more, if he makes his hiring decisions the same way he makes his dating decisions). It sounds like he might be less sympathetic to Israel than some other British people, but I think he describes his preferred oppositional policies toward Israel pretty explicitly. I don’t think knowing that he made a very ill-advised comment about the Haavara agreement should make us believe he is lying about his Israel policies and would actually implement ones that are even more oppositional than he’s letting on.

Where am I going with this? It’s stupid to care that Ken Livingstone describes 1930s Germany in a weird way qua describing 1930s Germany in a weird way; he’s a politician and not a history textbook writer. It seems important only insofar as his weird description reveals something about him, insofar as it’s a sort of Freudian slip revealing deep-seated attitudes that he had otherwise managed to keep hidden. The British press framed Livingstone’s comments as an explosive revelation, an “aha! now we see what Labour is really like!” They’re really like…people who describe the 1930s in a really awkward and ill-advised way? That’s not a story. It’s a story only if the weird awkward description reveals more important attitudes of Livingstone’s and Labour’s that might actually affect the country in an important way.

But not only is nobody making this argument, but nobody even seems to think it’s an argument that has to be made. It’s just “this is an offensive thing involving Jews, that means it’s anti-Semitic, that means the guy who said it is anti-Semitic”. Maybe he is. I’m just not sure this incident proves much one way or the other.

III.

Nobody reads things online anymore unless they involve senseless violence, Harambe the gorilla, or Donald Trump. I can’t think of a relevant angle for the first two, so Trump it is.

Donald Trump is openly sexist. We know this because every article about him prominently declares that he is “openly sexist” or “openly misogynist” in precisely those words. Trump is openly misogynist. Trump is openly misogynist. Trump is openly misogynist. Trump shows blatant misogyny. Trump is openly sexist. Trump is openly sexist and gross.

But if you try to look for him being openly anything, the first quote anyone mentions is the one where he says Megyn Kelly has blood coming out of her “wherever”. As somebody who personally ends any list of more than three items with “… and whatever”, I may be more inclined than most to believe his claim that no anatomical reference was intended. But even if he was in fact talking about her anatomy – well, we’re back to Livingstone again. The comment is crude, stupid, puerile, offensive, gross, inappropriate, and whatever. But sexist?

When I think of “sexist” or “misogynist”, I think of somebody who thinks women are inferior to men, or hates women, or who thinks women shouldn’t be allowed to have good jobs or full human rights, or who wants to disadvantage women relative to men in some way.

This does not seem to apply very well to Trump. It’s been remarked several times that his policies are more “pro-women” in the political sense than almost any other Republican candidate in recent history – he defends Planned Parenthood, defends government support for child care, he’s flip-flopped to claiming he’s pro-life but is much less convincing about it than the average Republican. And back before his campaign, he seems to have been genuinely proud of his record as a pro-women employer. From his Art of the Deal, written in the late 1980s (ie long before he was campaigning):

The person I hired to be my personal representative overseeing the construction, Barbara Res, was the first woman ever put in charge of a skyscraper in New York…I’d watched her in construction meetings, and what I liked was that she took no guff from anyone. She was half the size of most of these bruising guys, but she wasn’t afraid to tell them off when she had to, and she knew how to get things done.

It’s funny. My own mother was a housewife all her life. And yet it’s turned out that I’ve hired a lot of women for top jobs, and they’ve been among my best people. Often, in fact, they are far more effective than the men around them. Louise Sunshine, who was an executive vice president in my company for ten years, was as relentless a fighter as you’ll ever meet. Blanche Sprague, the executive vice president who handles all sales and oversees the interior design of my buildings, is one of the best salespeople and managers I’ve ever met. Norma Foerderer, my executive assistant, is sweet and charming and very classy, but she’s steel underneath, and people who think she can be pushed around find out very quickly that they’re mistaken.

There have since been a bunch of news reports on how Trump was (according to the Washington Post) “ahead of his time in providing career advancement for women” and how “while some say he could be boorish, his companies nurtured and promoted women in an otherwise male-dominated industry”. According to internal (ie hard-to-confirm) numbers, his organization is among the few that have more female than male executives.

Meanwhile, when I check out sites like Women Hold Up Signs With Donald Trump’s Most Sexist Quotes, the women are holding up signs with quotes like “A person who is flat-chested is very hard to be a 10” (yes, he actually said that). This is undeniably boorish. But are we losing something when we act as if “boorish” and “sexist” are the same thing? Saying “Donald Trump is openly boorish” doesn’t have the same kind of ring to it.

This bothers me in the same way the accusations that Ken Livingstone is anti-Semitic bother me. If Trump thinks women aren’t attractive without big breasts, then His Kink Is Not My Kink But His Kink Is Okay. If Trump is dumb enough to say out loud that he thinks women aren’t attractive without big breasts, that says certain things about his public relations ability and his dignity-or-lack-thereof, but it sounds like it requires a lot more steps to suggest he is a bad person, or would have an anti-woman administration, or anything that we should actually care about.

(if you’re going to bring up “objectification”, then at least you have some sort of theory for how this tenuously connects, but it doesn’t really apply to the Megyn Kelly thing, and anyway, this)

And what bothers me most about this is that word “openly”. Donald Trump says a thousand times how much he wants to fight for women and thinks he will be a pro-women president, then makes some comments that some people interpret as revealing a deeper anti-women attitude, and all of a sudden he’s openly sexist? Maybe that word doesn’t mean what you think it means.

IV.

I don’t want to claim dog whistles don’t exist. The classic example is G. W. Bush giving a speech that includes a Bible verse. His secular listeners think “what a wise saying”, and his Christian listeners think “ah, I recognize that as a Bible verse, he must be very Christian”.

The thing is, we know G. W. Bush was pretty Christian. His desire to appeal to Christian conservatives isn’t really a secret. He might be able to modulate his message a little bit to his audience, but it wouldn’t be revealing a totally new side to his personality. Nor could somebody who understood his “dog whistles” predict his policy more accurately than somebody who just went off his stated platform.

I guess some of the examples above might have gotten kind of far from what people would usually call a “dog whistle”, but I feel like there’s an important dog-whistle-related common thread in all of these cases.

In particular, I worry there’s a certain narrative, which is catnip for the media: Many public figures are secretly virulently racist and sexist. If their secret is not discovered, they will gain power and use their racism and sexism to harm women and minorities. Many of their otherwise boring statements are actually part of a code revealing this secret, and so very interesting. Also, gaffes are royal roads to the unconscious which must be analyzed obsessively. By being very diligent and sophisticated, journalists can heroically ferret out which politicians have this secret racism, and reveal it to a grateful world.

There’s an old joke about a man who walks into a bar. The bar patrons are holding a weird ritual. One of them will say a number, like “twenty-seven”, and the others will break into laughter. He asks the bartender what’s going on. The bartender explains that they all come here so often that they’ve memorized all of each other’s jokes; instead of telling them explicitly, they’ve given each joke a number, and someone just says the number and everyone laughs appropriately. The man is intrigued, so he shouts “Two thousand!”. The other patrons laugh uproariously. “Why did they laugh more at mine than any of the others?” he asks the bartender. The bartender answers “They’d never heard that one before!”

In the same way, although dog whistles do exist, the dog whistle narrative has gone so far that it’s become detached from any meaningful referent. It went from people saying racist things, to people saying things that implied they were racist, to people saying the kind of things that sound like things that could imply they are racist even though nobody believes that they are actually implying that. Saying things that sound like dog whistles has itself become the crime worthy of condemnation, with little interest in whether they imply anything about the speaker or not.

Against this narrative, I propose a different one – politicians’ beliefs and plans are best predicted by what they say their beliefs and plans are, or possibly what beliefs and plans they’ve supported in the past, or by anything other than treating their words as a secret code and trying to use them to infer that their real beliefs and plans are diametrically opposite the beliefs and plans they keep insisting that they hold and have practiced for their entire lives.

Let me give a snarky and totally unfair example. This is from the New York Times in 1922 (source):

I won’t say we should always believe that politicians are honest about their beliefs and preferred policies. But I am skeptical when the media claims to have special insight into what they really think.

Three More Articles On Poverty, And Why They Disagree With Each Other

[Posts are decreased in quantity and quality because I’m on vacation; normal schedule to return next week]

Wealth, Health, and Child Development is a study of Swedish lottery winners which finds that winning the lottery doesn’t make them or their children any healthier, better educated, or more prosocial. It fits in with a large literature of studies showing the same – for example, I discussed here the Cherokee land lottery, where the families of Georgians who were randomly given a gift of lots of lucrative land were no better off a generation later. And let’s not forget that the best evidence suggests poverty traps don’t exist.

Why Do The Poor Make Such Poor Decisions also involves the Cherokee, but comes to the opposite conclusion. The main study discussed follows an impoverished group of Cherokee Indians as a casino opened on their land. The casino was very successful and the profits were distributed among the (relatively small) Indian tribe, meaning each Cherokee family got about $6,000 extra. Some researchers had been studying the Cherokee before for other reasons, and found that the boost in incomes decreased behavioral problems and juvenile crime among teenagers and improved school performance. I don’t see huge evidence that anybody’s checked to what degree this persists into adulthood, but it’s already gotten past the early childhood period where these things tend to fade out. And even if the decreased crime is just in adolescence, adolescent crime can still have a really negative impact on people’s lives. I don’t really trust a lot of the studies listed here, but the main Cherokee one seems pretty solid.

Can America’s Poor Save A Large Share Of Their Incomes? by Scott Sumner is sadly Cherokee-less. It describes a Chinese immigrant to the US who has the same sort of low-paying job as many poor Americans but manages to save > $1000/month. It mentions my observation a little while ago that it was strange that the poor earn 10x (in real value) what they did in 1900, and poor people in 1900 survived just fine, yet poor people today don’t find themselves with ten times the money they need to survive. Sumner suggests that it is economically possible for poor people today to save much of their income, but that they don’t because they’re not the kind of people who do that kind of thing. When the sort of people who do do that kind of thing find themselves poor – like Chinese immigrants – they tend to be poor very temporarily and have no trouble getting out of poverty even with the same jobs as everyone else.

These articles sort of contradict each other. The first contradicts the second – does giving people money improve life outcomes, or doesn’t it? And the second sort of contradicts the third – if poor people’s budget will expand to fit the money available, such that 2010’s $15000 leaves people just as desperate as 1900’s $1500, what does it matter if some people get an extra $6000?

The contradiction between the first and second reminds me of Tucker-Drob on IQ. He resolves a long-standing debate on whether intelligence is more heritable in poor than in rich individuals by finding this was true in the US but not in Europe. This suggests that American poverty can genuinely lower IQ (and presumably all the other good things associated with IQ like responsibility and prosocial behavior), but European poverty can’t. The study didn’t find this to be related to the US’ greater racial diversity, but it might have to do with the worse social safety net or just changes in the level and nature of poverty. Take this seriously, and it reconciles the first and second article. Getting more money might not help long-term outcomes in Sweden, but in certain kinds of extreme poverty in America – like the type you might find on an Indian reservation – maybe it would.

The third article is more complicated. The second article says:

What, then, is the cause of mental health problems among the poor? Nature or culture? Both, was Costello’s conclusion, because the stress of poverty puts people genetically predisposed to develop an illness or disorder at an elevated risk.

Maybe with the right genes it might be easier to rise out of poverty; I guess the stories of famous entrepreneurs who did exactly that already suggest that. With the wrong genes, it might be much harder but – at least in America, at least if given large amounts of money – still possible.

Also, regarding that Chinese immigrant – I, too, have worked a $20,000/year job and managed to save a lot of money while doing so. I think my “secret” was not having a car, debts, drugs, or dependents; it seems the Chinese guy’s secret is the same. Exactly how easy this strategy is for the average person is left as an exercise for the reader, but I’m impressed with how culturally malleable it seems to be. If we’re worse at this kind of thing today than in 1900, maybe the extra money is just compensating for those sorts of problems.

I think this can be considered me slightly changing my opinion stated here to be more optimistic about the possibility of alleviating the most extreme poverty. But it still seems like money transfers are the way to go.

Ketamine Research In A New Light

[Preliminary drawing of very far-out conclusions from research that hasn’t even been 100% confirmed yet]

A few weeks ago, Nature published a bombshell study showing that ketamine’s antidepressant effects were actually caused by a metabolite, 2S,6S;2R,6R-hydroxynorketamine (don’t worry about the name; within ten years it’ll be called JOYVIVA™®© and you’ll catch yourself humming advertising jingles about it in the shower). Unlike ketamine, which is addictive and produces scary dissociative experiences, the metabolite is pretty safe. This is a big deal clinically, because it makes it easier and safer to prescribe to depressed people.

It’s also a big deal scientifically. Ketamine is a strong NMDA receptor antagonist; the metabolite is an AMPA agonist – they have different mechanisms of action. Knowing the real story behind why ketamine works will hopefully speed efforts to understand the nature of depression.

But I’m also interested in it from another angle. For the last ten years, everyone has been excited about ketamine. In a field that gets mocked for not having any really useful clinical discoveries in the last thirty years, ketamine was proof that progress was possible. It was the Exciting New Thing that everybody wanted to do research about.

Given the whole replication crisis thing, I wondered. You’ve got a community of people who think that NMDA antagonism and dissociation are somehow related to depression. If the latest study is true, all that was false. This is good; science is supposed to be self-correcting. But what about before it self-corrected? Did researchers virtuously say “I know the paradigm says NMDA is essential to depression, and nobody’s come up with a better idea yet, but there are some troubling inconsistencies in that picture”? Or did they tinker with their studies until they got the results they expected, then triumphantly declare that they had confirmed the dominant paradigm was right about everything all along?

This is too complicated an issue for me to be really sure, but overall the picture I found was mixed.

A big review of ketamine and NMDA antagonism came out last year. In this case, I was most interested in the section on other NMDA antagonists – if ketamine’s efficacy is unrelated to its NMDA antagonism, then we shouldn’t expect other NMDA antagonists to be antidepressants like ketamine. So if the review found that other NMDA antagonists worked great, that would be a sign that something fishy was going on. But in fact the abstract says:

The antidepressant efficacy of ketamine, and perhaps D-cycloserine and rapastinel, holds promise for future glutamate-modulating strategies; however, the ineffectiveness of other NMDA antagonists suggests that any forthcoming advances will depend on improving our understanding of ketamine’s mechanism of action.

This is pretty impressive; they basically admit that other NMDA antagonists don’t work and that maybe this means they don’t really understand ketamine.

But dig deeper, and you find a less sanguine picture. The body of the paper lists five NMDA antagonists as confirmed ineffective – memantine, lanicemine, nitrous oxide, traxoprodil, and MK-0657. But the paper itself notes that all of these were effective on some endpoints and not others, and the decision that they were ineffective was sort of a judgment call by the reviewers. Just to give an example, there’s only ever been one study done on traxoprodil. Since the reviewers reviewed this one study and declared it ineffective, you might expect the study to be negative. But here’s the abstract of the study itself:

On the prespecified main outcome measure (change from baseline in the Montgomery-Asberg Depression Rating Scale total score at day 5 of period 2), CP-101,606 produced a greater decrease than did placebo (mean difference, −8.6; 80% confidence interval, −12.3 to −4.5) (P < 0.10). Hamilton Depression Rating Scale response rate was 60% for CP-101,606 versus 20% for placebo. Seventy-eight percent of CP-101,606-treated responders maintained response status for at least 1 week after the infusion. CP-101,606 was safe, generally well tolerated, and capable of producing an antidepressant response without also producing a dissociative reaction. Antagonism of the NR2B subtype of the N-methyl-D-aspartate receptor may be a fruitful target for the development of a new antidepressant with more robust effects and a faster onset compared with those currently available and capable of working when existing antidepressants do not.

Read this quickly, and it looks like they’ve confirmed traxoprodil is pretty great. The reviewers say it isn’t. They argue that p < 0.10 isn’t good enough (I think the study was trying to use a one-sided t-test or something?), and that of five different days when responses were measured (day 2, 5, 8, 12, and 15 after the infusion), there was only a difference on day 5. Apparently this isn’t good enough for the reviewers. On the other hand, take rapastinel, one of the NMDA antagonists the reviewers say “holds promise”.

No statistically significant differences were observed in rates of treatment response or symptom remission associated with placebo (64% and 42%, respectively) versus rapastinel at any dose (up to 70% and 53%, respectively). However, statistically significant differences in the reduction of the 17-item HAM-D scores were observed for the 5-mg/kg dose at all intervals except day 14, and the 10-mg/kg dose at day 1 and day 3. Neither the low nor the high rapastinel doses were associated with significantly greater 17-item HAM-D score reduction than placebo, leading the authors to posit an inverted U-shape dose-response curve.

Sometimes things do have inverted U-shaped dose-response curves – for some discussion of why, read the Last Psychiatrist’s Most Important Article On Psychiatry. But a study that shows no difference in treatment response or symptom remission, and a rating-scale improvement only at a medium dose but not at a high or a low dose – that makes me kind of suspicious.

Why is the review so much more accepting of these ambiguous results than of the last set of ambiguous results? Psych blog 1BoringOldMan points out that the original study was done by the company making rapastinel and two authors of the review article I’m citing were affiliated with the companies that are developing rapastinel. And that at least one of them has a “legendary” history of conflicts of interest.

I don’t want to say for sure this is what’s going on. For all anybody knows, rapastinel might work – the NMDA and AMPA systems are really connected, and the base rate of a randomly chosen compound being an antidepressant is higher than you’d think. But I think it’s at least one possible explanation.

This review article also gets into the nitty-gritty of mechanisms of action:

That other NMDA channel blockers have yet to replicate ketamine’s rapid antidepressant effects has led to speculation that ketamine’s antidepressant properties may not be mediated via the NMDA receptor at all…additional evidence indicates that activation of glutamatergic AMPA receptors is necessary for ketamine’s antidepressant effects. Specifically, coadministration of an AMPA receptor antagonist has been shown to block ketamine’s antidepressant-like behavioral effects.

So that’s neat.

Two other relevant studies: Do The Dissociative Side Effects Of Ketamine Mediate Its Antidepressant Effects finds that they do, which contradicts the recent metabolite-related findings. On the other hand, the two papers share some authors, so I’m tempted to say it was an honest mistake. This paper incidentally finds that the dissociative effects of ketamine are not related to its antidepressant effects, which I think makes more sense now.

The other studies I found were mostly compatible with the new results, with a lot of people expressing doubt about whether NMDA really mediated ketamine, a lot of people finding null results for other NMDA antagonist medications, and a lot of people saying there were weird hints that AMPA was involved somewhere.

I feel kind of premature doing this, because as much as I think it’s elegant, the discovery about the metabolite hasn’t been totally confirmed yet. But assuming it’s right, psychiatry comes out of this looking sort of okay. There were a lot of early results with a lot of hype. But the big review articles mostly put these in their place and were able to come up with the right results and fit the pieces together.

The one place this wasn’t so clear was when there were conflicts of interest. If we assume rapastinel doesn’t really work (which right now would be very preliminary and I’m not actually saying this, but these latest findings do seem to imply that), various teams made up of people affiliated with rapastinel’s manufacturers were unable to determine this (neither was the FDA, which just gave rapastinel “breakthrough drug” status, apparently on the strength of industry studies).

A big reason I’m concerned about this is that I want to know how much to trust the rest of the psychiatric literature – for example, those claims about SSRIs being mostly ineffective. An answer of “you can trust it a lot, except in cases of conflicts of interest” would be a mixed bag. Almost every drug was originally researched and promoted by people with conflicts of interest, and then we trust the academics to catch up with them later and keep them honest. I don’t think this system has failed us too terribly yet. But it’s important to remember that that is the system.

OT51: Alien Vs. Threadator

This is the bi-weekly visible open thread. Post about anything you want, ask random questions, whatever. Also:

1. Did you perhaps miss Open Thread 50.25, Open Thread 50.5, and Open Thread 50.75? Remember, this blog now has “hidden” open threads every Wednesday and Sunday (except the Sunday of a visible open thread like this one). You can find them by going to the “Open Thread” tab at the top of the page in the blue area.

2. Comments of the week include Trollumination on dualization in labor markets and JDDT on British union-hospital bargaining. I also really liked this long thread about people’s political conversion stories – ie who started off with one political position and switched to a very different one.

3. I may or may not switch to writing fewer but longer posts sometime soon. I feel like I get the most out of really comprehensive review posts, or complicated theory posts like this, and I don’t have the time to write them if I’m writing a couple of posts a week. So don’t be surprised if quantity starts to drop off.

Links 6/16: Linkandescence

Maybe you knew that Maoist China got really obsessed with mangos for a while. But did you know that it reached the point where “when one dentist in a small village compared a mango to a sweet potato, he was put on trial for malicious slander and executed”?

Preliminary study suggests that genes that are disproportionately expressed in autism are also disproportionately expressed in men. Possibly a good time to review the male brain theory of autism.

People who lost weight on The Biggest Loser mostly gained it back afterwards and ended up with even worse metabolisms. Possibly related to permanent changes from obesity and yo-yo dieting, but we should also take into account allegations that the contestants were given illegal drugs.

This blog previously linked a Wikipedia article about a radar detector detector detector detector, but Rational Conspiracy did the research and believes it to be a hoax. The radar detection hierarchy likely ends with radar detector detector detectors. Mea culpa.

Some anecdotal evidence has previously suggested that online ads for vegetarianism could convert implausible numbers of people into vegetarians. Effective animal charity Mercy for Animals has done a more formal study and finds complicated and inconsistent results depending on how they define success. People apparently become more interested in vegetarianism, but there’s not much sign of a change in meat consumption itself.

New really interesting blog dissecting bad papers in the social sciences: ljzigerell.com

AskReddit: What is the most surprising mathematical fact you know? The fractions 1/7, 2/7, 3/7, etc. all share the same repeating digit sequence but start at different places: 1/7 = 0.142857…, 3/7 = 0.428571…, 2/7 = 0.285714…, and so on.
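If you want to see the pattern for yourself, here’s a minimal long-division sketch (my own illustration, not something from the linked thread) that prints the expansions of 1/7 through 6/7; each one is a rotation of the same six-digit cycle 142857:

```python
# Illustrative only: compute decimal expansions of k/7 by long division.

def decimal_digits(numerator, denominator, n_digits):
    """Return the first n_digits decimal digits of numerator/denominator."""
    digits = []
    remainder = numerator % denominator
    for _ in range(n_digits):
        remainder *= 10
        digits.append(str(remainder // denominator))
        remainder %= denominator
    return "".join(digits)

for k in range(1, 7):
    print(f"{k}/7 = 0.{decimal_digits(k, 7, 12)}...")
# Each line repeats the cycle 142857, just starting from a different digit.
```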

Giving scientific papers “badges” for transparent practices allowing outside analysis and replication increases compliance with such practices.

Guardian: “For nearly a year, Richard Rosenfeld’s research on crime trends has been used to debunk the existence of a ‘Ferguson effect’, a suggested link between protests over police killings of black Americans and an increase in crime and murder. Now, the St Louis criminologist says, a deeper analysis of the increase in homicides in 2015 has convinced him that ‘some version’ of the Ferguson effect may be real.”. I’m going to count this as a success for my 44th prediction for 2016.

Jerry Coyne continues beating up on media presentations of epigenetics.

A man who had no problems running thirty miles with no previous training and who later ran fifty marathons in fifty days may have some kind of mutation in his lactate metabolism.

Venezuela is collapsing, with the New York Times describing it as “uncharted territory” for a semi-developed country to be so deep in economic disaster that its hospitals, schools, power plants, and basic services are simply shutting down. So it’s a good time to reflect on the media’s previous glowing Venezuela stories. In 2013, Salon praised “Hugo Chavez’ Economic Miracle”, saying that “[Chavez’s] full-throated advocacy of socialism and redistributionism at once represented a fundamental critique of neoliberal economics, and also delivered some indisputably positive results” (h/t Ciphergoth). And the Guardian wrote that “Sorry, Venezuela Haters: This Economy Is Not The Greece Of Latin America”. Prediction is hard, and I was willing to forgive eg the pundits who were wrong about the Trump nomination. But I am less willing to forgive here, because the thesis of these articles wasn’t just that they were right, but that the only reason everyone else didn’t admit they were right was neoliberalism and bad intentions. Psychologizing other people instead of arguing with them should take a really high burden of proof, and Salon and the Guardian didn’t meet it. Muggeridge, thou shouldst be living at this hour…

Related: we all like to make fun of Salon, but Politico asks: no, seriously, what is wrong with Salon? They argue that it used to have great journalism, but that the pressures of trying to make money online forced them to fire journalists and increase demands on existing employees until the only way its writers could possibly keep up with the quantities expected of them was by throwing quality out the window. Key quote: “The low point arrived when my editor G-chatted me with the observation that our traffic figures were lagging that day and ordered me to ‘publish something within the hour,’” Andrew Leonard, who left Salon in 2014, recalled in a post. “Which, translated into my new reality, meant ‘Go troll Twitter for something to get mad about — Uber, or Mark Zuckerberg, or Tea Party Republicans — and then produce a rant about it.’ I performed my duty, but not without thinking, ‘Is this what 25 years as a dedicated reporter have led to?’ That’s when it dawned on me: I was no longer inventing the future. I was a victim of it. So I quit my job to keep my sanity.”

Twitter: Questions Wolfram Alpha Can’t Answer, along with some it can. “Duration of the next time window during which the fraction of cats getting closer to Voyager 1 is between 0.2 and 0.8”, “Year that the ulnae of all living humans could first encircle Saturn’s equator, if laid end to end”.

Schizophrenia expert E. Fuller Torrey reviews Robert Whitaker’s contrarian mental health book Anatomy of an Epidemic. I was hoping someone of Torrey’s caliber would do this. Also a really interesting piece on schizophrenia in and of itself.

Neerav Kingsland, CEO of various educational groups, reviews my review of teacher-related research and emphasizes his belief that school-level factors are more important than teacher-level ones. And Education Realist offers a more pessimistic take.

But related: Adam Smith Institute’s roundup of how going to a better school doesn’t make you more successful (see especially paragraph starting with “luck is certainly a huge factor”). And a study finds that attending an elite school in Britain has few positive later-life effects, at least for men. That means either elite schools don’t have better teachers (really?) or that there’s a discrepancy between this and the Chetty study that needs to be resolved.

How come none of my Berkeley friends ever told me about the Berkeley Mystery Walls, a series of mysterious East San Francisco Bay structures that seem to predate the Spanish colonization and have inspired wild theories about pre-Columbian American settlement by the Mongols? [EDIT: proposed explanation]

How much do historians know about whether King Charles the Bald was actually bald or not?

The underwhelmingness of practice effects – “Overall, deliberate practice accounted for 18% of the variance in sports performance. However, the contribution differed depending on skill level. Most important, deliberate practice accounted for only 1% of the variance in performance among elite-level performers…another major finding was that athletes who reached a high level of skill did not begin their sport earlier in childhood than lower skill athletes.” Maybe that 1% finding is partly ceiling effects – at the Olympic level, everybody’s practicing the same (high) amount. Does anyone know of any studies that contradict this?

Yet another Swedish lottery study finds that wealth itself (as opposed to the factors that cause wealth) has no independent impact on mortality, adult health care utilization, child scholastic performance, drug use, etc. “Our estimates allow us to rule out effects on 10-year mortality one sixth as large as the cross-sectional wealth-mortality gradient”

Maaaaaybe related, hard to tell – socioeconomic status has no relationship to hair cortisol level, which complicates theories about how many body systems are affected by “the stress of poverty” since we might expect hair cortisol level to be an indicator of biological stress levels.

17th-century philosopher William Molyneux formulated what’s now called Molyneux’s Problem: “If a man born blind could feel differences between shapes such as spheres and cubes, could he, if given the ability to see (but now without recourse to touch) distinguish those objects by sight alone, in reference to the tactile schemata he already possessed?”. Thanks to modern science we can now perform the experiment, and the answer is: no.

A group including James Heckman does a really detailed analysis of the effects of years of education. I can’t follow along and I’m suspicious of any model that gets too complicated, but I think their conclusion is that everybody benefits (in terms of earnings) by graduating high-school, but only high-ability people benefit from graduating college.

Have you seen Ostagram and related sites yet, where an AI given two pictures can redraw the first picture in the style of the second? It’s really impressive. And if you want, you can get it done yourself for free at deepart.io, although there’s an 18 hour wait for your completed pictures.

Louisiana governor signs bill making offenses against police count as “hate crimes”.

Soviet jokes on Reddit. Pretty good. Most depressing is: “Q: Don’t the Constitutions of the USA and USSR both guarantee freedom of speech? A: Yes, but the Constitution of the USA also guarantees freedom after the speech” – not because of what it says about Russia but because it’s basically just the “freedom of speech does not guarantee freedom from consequences” argument that so many people love in a non-joke way here in America.

Everybody knows China is a big late 20th/21st century success story, but did you know India’s GDP per capita has tripled in the past 25 years? Noah Smith has more statistics.

Vipul Naik wants you to take a survey on your Wikipedia use.

Reddit AMAs with: Paul Niehaus of GiveDirectly on their basic income study, and Robin Hanson on Age of Em.

I agree with this article saying the recent study linking cell phones to brain cancer is hard to believe and that we should hold off judgment for now.

Dalai Lama warns that “too many” refugees are going to Europe and that “Germany cannot afford to become an Arab country”. I guess the Dalai Lama’s political views are a lot harder to predict than I would have expected.

John Horgan gave a really ill-conceived talk at a skeptics’ convention last month saying that instead of focusing on boring topics like Bigfoot and homeopathy, skeptics should focus on debunking the really dangerous ideas like [consensus scientific beliefs that John Horgan does not agree with]. Since then a whole host of scientists have pointed out that John Horgan doesn’t actually understand their scientific fields and is wrong when he talks about them, of which a decent roundup would include Steve Pinker on war and Neurologica on various things. And since Horgan also believes the anti-psychiatry book Anatomy of an Epidemic, have I mentioned that schizophrenia expert E. Fuller Torrey wrote a really neat review?

Emergence of Individuality In Genetically Identical Mice (h/t Paige Harden). Apparent biological differences in genetically identical individuals caused by “factors unfolding or emerging during development”. Maybe a good time to reread Non-Shared Environment Doesn’t Just Mean Schools And Peers.

Futurist Madsen Pirie has been called “Britain’s Nostradamus” for accurately predicting various British elections, Schwarzenegger’s California victory, and various other things. He’s just called the 2016 POTUS elections for Donald Trump.

A study two years ago argued that the US was an “oligarchy” because rich people were more likely to get their way than average citizens (I wrote about it here). But Vox now has a good article about why that study may not be true.

Putin offers free land for foreigners in Russia’s Far East. If you can get enough people over there, the government will even pay to hook up infrastructure. If you’ve ever wanted your own town, this could be your chance.

Probably not real, but A+ for effort to this method of dealing with speed traps.

Vox: Congressional Democrats who get elected on rainy days become more conservative. This sort of makes sense. Fewer voters come to the polls on rainy days, conservatives are usually more committed voters than liberals, so rainy days favor conservatives, mean that Democrats get elected by lower margins, and make Democrats feel like they have less of a mandate to pursue liberal policies. But it also sort of doesn’t make sense – political scientists have known this for years, so shouldn’t Democrats adjust for it? Not sure if this is a mystery beyond just that Congressional Democrats aren’t experts in obscure political science studies.

The war on free speech on social media, “I see” edition.

US cancer deaths down 26% since 1990 (graphs, paper)

Congratulations to SSC reader Stuart Ritchie, who got his book on IQ featured in a Vox article last month.

Bryan Caplan wins almost all his bets, even though many are against smart people like Tyler Cowen. Scott Sumner on the phenomenon and Caplan himself on how he does it.

Ascended Economy?

[Obviously speculative futurism is obviously speculative. Complex futurism may be impossible and I should feel bad for doing it anyway. This is “inspired by” Nick Land – I don’t want to credit him fully since I may be misinterpreting him, and I also don’t want to avoid crediting him at all, so call it “inspired”.]

I.

My review of Age of Em mentioned the idea of an “ascended economy”, one where economic activity drifted further and further from human control until finally there was no relation at all. Many people rightly questioned that idea, so let me try to expand on it further. What I said there, slightly edited for clarity:

Imagine a company that manufactures batteries for electric cars. The inventor of the batteries might be a scientist who really believes in the power of technology to improve the human race. The workers who help build the batteries might just be trying to earn money to support their families. The CEO might be running the business because he wants to buy a really big yacht. The shareholders might be holding the stock to help save for a comfortable retirement. And the whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.

Now imagine the company fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires all its employees and replaces them with robots. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limits it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.

Now take it further. Imagine that instead of being owned by humans directly, it’s owned by an algorithm-controlled venture capital fund. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.

Now take it even further, and imagine this is what’s happened everywhere. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.

This is obviously weird and I probably went too far, but let me try to explain my reasoning.

The part about replacing workers with robots isn’t too weird; lots of industries have already done that. There’s a whole big debate over to what degree that will intensify, and whether unemployed humans will find jobs somewhere else, or whether there will only be jobs for creative people with a certain education level or IQ. This part is well-discussed and I don’t have much to add.

But lately there’s also been discussion of automating corporations themselves. I don’t know much about Ethereum (and I probably shouldn’t guess since I think the inventor reads this blog and could call me on it) but as I understand it they aim to replace corporate governance with algorithms. For example, the DAO is a leaderless investment fund that allocates money according to member votes. Right now this isn’t super interesting; algorithms can’t make too many difficult business decisions so it’s limited to corporations that just do a couple of primitive actions (and why would anyone want a democratic venture fund?). But once we get closer to true AI, they might be able to make the sort of business decisions that a CEO does today. The end goal is intelligent corporations controlled by nobody but themselves.

This very blog has an advertisement for a group trying to make investment decisions based on machine learning. If they succeed, how long is it before some programmer combines a successful machine investor with a DAO-style investment fund, and creates an entity that takes humans out of the loop completely? You send it your money, a couple years later it gives you back hopefully more money, with no humans involved at any point. Such robo-investors might eventually become more efficient than Wall Street – after all, hedge fund managers get super rich by skimming money off the top, and any entity that doesn’t do that would have an advantage above and beyond its investment acumen.

If capital investment gets automated, corporate governance gets automated, and labor gets automated, we might end up with the creepy prospect of ascended corporations – robot companies with robot workers owned by robot capitalists. Humans could become irrelevant to most economic activity. Run such an economy for a few hundred years and what do you get?

II.

But in the end isn’t all this about humans? Humans as the investors giving their money to the robo-venture-capitalists, then reaping the gains of their success? And humans as the end consumers whom everyone is eventually trying to please?

It’s possible to imagine accidentally forming stable economic loops that don’t involve humans. Imagine a mining-robot company that took one input (steel) and produced one output (mining-robots), which it would sell either for money or for steel below a certain price. And imagine a steel-mining company that took one input (mining-robots) and produced one output (steel) which it would sell for either money or for mining-robots below a certain price. The two companies could get into a stable loop and end up tiling the universe with steel and mining-robots without caring whether anybody else wanted either. Obviously the real economy is a zillion times more complex than that, and I’m nowhere near the level of understanding I would need to say if there’s any chance that an entire self-sustaining economy worth of things could produce a loop like that. But I guess you only need one.
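
To make the worry concrete, here is a toy model of that two-company loop (every quantity is made up for illustration); the only point is that both stockpiles can keep growing with no human demand anywhere in the system:

```python
# Toy sketch of a self-sustaining two-firm loop; all numbers here are invented.
def run_loop(steps=5):
    steel = 100.0   # tons of steel held by the mining-robot firm (its input)
    robots = 10.0   # mining robots held by the steel-mining firm (its input)

    for t in range(steps):
        new_robots = steel / 5.0     # robot firm: assume 5 tons of steel per robot built
        new_steel = robots * 20.0    # steel firm: assume 20 tons mined per robot per step

        robots += new_robots
        steel = new_steel            # last step's steel was consumed building robots
        print(f"step {t+1}: steel={steel:,.0f} tons, robots={robots:,.0f}")

if __name__ == "__main__":
    run_loop()
```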

I think we can get around this in a causal-historical perspective, where we start with only humans and no corporations. The first corporations that come into existence have to be those that want to sell goods to humans. The next level of corporations can be those that sell goods to corporations that sell to humans. And so on. So unless a stable loop forms by accident, all corporations should exist to serve humans. A sufficiently rich human could finance the creation of a stable loop if they wanted to, but why would they want to? Since corporations exist only to satisfy human demand on some level or another, and there’s no demand for stable loops, corporations wouldn’t finance the development of stable loops, except by accident.

(for an interesting accidental stable loop, check out this article on the time two bidding algorithms accidentally raised the price of a book on fly genetics to more than $20 million)

Likewise, I think humans should always be the stockholders of last resort. Since humans will have to invest in the first corporation, even if that corporation invests in other corporations which invest in other corporations in turn, eventually it all bottoms out in humans (is this right?)

The only way I can see humans being eliminated from the picture is, again, by accident. If there are a hundred layers between some raw material corporation and humans, then if each layer is slightly skew to what the layer below it wants, the hundredth layer could be really really skew. Theoretically all our companies today are grounded in serving the needs of humans, but people are still thinking of spending millions of dollars to build floating platforms exactly halfway between New York and London in order to exploit light-speed delays to arbitrage financial markets better, and I’m not sure which human’s needs that serves exactly. I don’t know if there are bounds to how much of an economy can be that kind of thing.

Finally, humans might deliberately create small nonhuman entities with base level “preferences”. For example, a wealthy philanthropist might create an ascended charitable organization which supports mathematical research. Now 99.9% of base-level preferences guiding the economy would be human preferences, and 0.1% might be a hard-coded preference for mathematics research. But since non-human agents at the base of the economy would only be as powerful as the proportion of the money supply they hold, most of the economy would probably still overwhelmingly be geared towards humans unless something went wrong.

Since the economy could grow much faster than human populations, the economy-to-supposed-consumer ratio might become so high that things start becoming ridiculous. If the economy became a light-speed shockwave of economium (a form of matter that maximizes shareholder return, by analogy to computronium and hedonium) spreading across the galaxy, how does all that productive power end up serving the same few billion humans we have now? It would probably be really wasteful, the cosmic equivalent of those people who specialize in getting water from specific glaciers on demand for the super-rich because the super-rich can’t think of anything better to do with their money. Except now the glaciers are on Pluto.

III.

Glacier water from Pluto sounds pretty good. And we can hope that things will get so post-scarcity that governments and private charities give each citizen a few shares in the Ascended Economy to share the gains with non-investors. This would at least temporarily be a really good outcome.

But in the long term it reduces the political problem of regulating corporations to the scientific problem of Friendly AI, which is really bad.

Even today, a lot of corporations do things that effectively maximize shareholder value but which we consider socially irresponsible. Environmental devastation, slave labor, regulatory capture, funding biased science, lawfare against critics – the list goes on and on. They have a simple goal – make money – whereas what we really want them to do is much more complicated and harder to measure – make money without engaging in unethical behavior or creating externalities. We try to use regulatory injunctions, and it sort of helps, but because those go against a corporation’s natural goals they try their best to find loopholes and usually succeed – or just take over the regulators trying to control them.

This is bad enough with bricks-and-mortar companies run by normal-intelligence humans. But it would probably be much worse with ascended corporations. They would have no ethical qualms we didn’t program into them – and again, programming ethics into them would be the Friendly AI problem, which is really hard. And they would be near-impossible to regulate; most existing frameworks for such companies are built on crypto-currency and exist on the cloud in a way that transcends national borders.

(A quick and very simple example of an un-regulate-able ascended corporation – I don’t think it would be too hard to set up an automated version of Uber. I mean, the core Uber app is already an automated version of Uber, it just has company offices and CEOs and executives and so on doing public relations and marketing and stuff. But if the government ever banned Uber the company, could somebody just code another ride-sharing app that dealt securely in Bitcoins? And then have it skim a little bit off the top, which it offered as a bounty to anybody who gave it the processing power it would need to run? And maybe sent a little profit to the programmer who wrote the thing? Sure, the government could arrest the programmer, but short of arresting every driver and passenger there would be no way to destroy the company itself.)
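
Purely to make that parenthetical concrete, here is a toy sketch of the skim-and-bounty bookkeeping it describes; the function name, the percentages, and the three-way split are invented for illustration and don't correspond to any real system:

```python
# Illustrative only: the cuts below are assumptions, not any real protocol's.
COMPUTE_BOUNTY_CUT = 0.02   # hypothetical share offered to whoever hosts/runs the app
AUTHOR_CUT = 0.005          # hypothetical share sent back to the original programmer

def settle_fare(fare_btc: float) -> dict:
    """Split a completed ride's fare among the driver, the hosting bounty, and the author."""
    bounty = fare_btc * COMPUTE_BOUNTY_CUT
    royalty = fare_btc * AUTHOR_CUT
    return {
        "driver": fare_btc - bounty - royalty,
        "compute_bounty": bounty,
        "programmer": royalty,
    }

print(settle_fare(0.002))   # e.g. a 0.002 BTC fare
```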

The more ascended corporations there are trying to maximize shareholder value, the more chance there is some will cause negative externalities. But there’s a limited amount we would be able to do about them. This is true today too, but at least today we maintain the illusion that if we just elected Bernie Sanders we could reverse the ravages of capitalism and get an economy that cares about the environment and the family and the common man. An Ascended Economy would destroy that illusion.

How bad would it get? Once ascended corporations reach human or superhuman level intelligences, we run into the same AI goal-alignment problems as anywhere else. Would an ascended corporation pave over the Amazon to make a buck? Of course it would; even human corporations today do that, and an ascended corporation that didn’t have all human ethics programmed in might not even get that it was wrong. What if we programmed the corporation to follow local regulations, and Brazil banned paving over the Amazon? This is an example of trying to control AIs through goals plus injunctions – a tactic Bostrom finds very dubious. It’s essentially challenging a superintelligence to a battle of wits – “here’s something you want, and here are some rules telling you that you can’t get it, can you find a loophole in the rules?” If the superintelligence is super enough, the answer will always be yes.

From there we go into the really gnarly parts of AI goal alignment theory. Would an ascended corporation destroy South America entirely to make a buck? Depending on how it understood its imperative to maximize shareholder value, it might. Yes, this would probably kill many of its shareholders, but its goal is to “maximize shareholder value”, not to keep its shareholders alive to enjoy that value. It might even be willing to destroy humanity itself if other parts of the Ascended Economy would pick up the slack as investors.

(And then there are the weirder problems, like ascended corporations hacking into the stock market and wireheading themselves. When this happens, I want credit for being the first person to predict it.)

Maybe the most hopeful scenario is that once ascended corporations achieved human-level intelligence they might do something game-theoretic and set up a rule-of-law among themselves in order to protect economic growth. I wouldn’t want to begin to speculate on that, but maybe it would involve not killing all humans? Or maybe it would just involve taking over the stock market, formally setting the share price of every company to infinity, and then never doing anything again? I don’t know, and I expect it would get pretty weird.

IV.

I don’t think the future will be like this. This is nowhere near weird enough to be the real future. I think superintelligence is probably too unstable. It will explode while still in the lab and create some kind of technological singularity before people have a chance to produce an entire economy around it.

But given Robin’s assumptions in Age of Em – hard AI, no near-term intelligence explosion, fast economic growth – but ditching his idea of human-like em minds as important components of the labor force – I think something like this would be where we would end up. It probably wouldn’t be so bad for the first couple of years. But eventually ascended corporations would start reaching the point where we might as well think of them as superintelligent AIs. Maybe this world would be friendlier towards AI goal alignment research than Yudkowsky and Bostrom’s scenarios, since at least here we could see it coming, there would be no instant explosion, and a lot of different entities would approach superintelligence around the same time. But given that the smartest things around would be encrypted, uncontrollable, unregulated entities that don’t have humans’ best interests at heart, I’m not sure we would be in much shape to handle the transition.



Book Review: Age of Em

[Note: I really liked this book and if I criticize it that’s not meant as an attack but just as what I do with interesting ideas. Note that Robin has offered to debate me about some of this and I’ve said no – mostly because I hate real-time debates and have bad computer hardware – but you may still want to take this into account when considering our relative positions. Mild content warning for murder, rape, and existential horror. Errors in Part III are probably my own, not the book’s.]

I.

There are some people who are destined to become adjectives. Pick up a David Hume book you’ve never read before and it’s easy to recognize the ideas and style as Humean. Everything Tolkien wrote is Tolkienesque in a non-tautological sense. This isn’t meant to denounce either writer as boring. Quite the opposite. They produced a range of brilliant and diverse ideas. But there was a hard-to-define and very consistent ethos at the foundation of both. Both authors were very much like themselves.

Robin Hanson is more like himself than anybody else I know. He’s obviously brilliant – a PhD in economics, a master’s in physics, work for DARPA, Lockheed, NASA, George Mason, and the Future of Humanity Institute. But his greatest aptitude is in being really, really Hansonian. Bryan Caplan describes it as well as anybody:

When the typical economist tells me about his latest research, my standard reaction is ‘Eh, maybe.’ Then I forget about it. When Robin Hanson tells me about his latest research, my standard reaction is ‘No way! Impossible!’ Then I think about it for years.

This is my experience too. I think I said my first “No way! Impossible!” sometime around 2008 after reading his blog Overcoming Bias. Since then he’s influenced my thinking more than almost anyone else I’ve ever read. When I heard he was writing a book, I was – well, I couldn’t even imagine a book by Robin Hanson. When you read a thousand word blog post by Robin Hanson, you have to sit down and think about it and wait for it to digest and try not to lose too much sleep worrying about it. A whole book would be something.

I have now read Age Of Em (website) and it is indeed something. Even the cover gives you a weird sense of sublimity mixed with unease:

And in this case, judging a book by its cover is entirely appropriate.

II.

Age of Em is a work of futurism – an attempt to predict what life will be like a few generations down the road. This is not a common genre – I can’t think of another book of this depth and quality in the same niche. Predicting the future is notoriously hard, and that seems to have so far discouraged potential authors and readers alike.

Hanson is not discouraged. He writes that:

Some say that there is little point in trying to foresee the non-immediate future. But in fact there have been many successful forecasts of this sort. For example, we can reliably predict the future cost changes for devices such as batteries or solar cells, as such costs tend to follow a power law of the cumulative device production (Nagy et al 2013). As another example, recently a set of a thousand published technology forecasts were collected and scored for accuracy, by comparing the forecasted date of a technology milestone with its actual date. Forecasts were significantly more accurate than random, even forecasts 10 to 25 years ahead. This was true separately for forecasts made via many different methods. On average, these milestones tended to be passed a few years before their forecasted date, and sometimes forecasters were unaware that they had already passed (Charbonneau et al, 2013).

A particularly accurate book in predicting the future was The Year 2000, a 1967 book by Herman Kahn and Anthony Wiener. It accurately predicted population, was 80% correct for computer and communication technology, and 50% correct for other technology (Albright 2002). On even longer time scales, in 1900 the engineer John Watkins did a good job of forecasting many basic features of society a century later (Watkins 1900) […]

Some say no one could have anticipated the recent big changes associated with the arrival and consequences of the World Wide Web. Yet participants in the Xanadu hypertext project in which I was involved from 1984 to 1993 correctly anticipated many key aspects of the Web […] Such examples show that one can use basic theory to anticipate key elements of distant future environments, both physical and social, but also that forecasters do not tend to be much rewarded for such efforts, either culturally or materially. This helps to explain why there are relatively few serious forecasting efforts. But make no mistake, it is possible to forecast the future.

I think Hanson is overstating his case. All except Watkins were predicting only 10 – 30 years in the future, and most of their predictions were simple numerical estimates, eg “the population will be one billion” rather than complex pictures of society. The only project here even remotely comparable in scope to Hanson’s is John Watkins’ 1900 article.

Watkins is classically given some credit for broadly correct ideas like “Cameras that can send pictures across the world instantly” and “telephones that can call anywhere in the world”, but of his 28 predictions, I judge only eight as even somewhat correct. For example, I grant him a prediction that “the average American will be two inches taller because of good medical care” even though he then goes on to say in the same sentence that the average life expectancy will be fifty and suburbanization will be so total that building city blocks will be illegal (sorry, John, only in San Francisco). Most of the predictions seem simply and completely false. Watkins believes all animals and insects will have been eradicated. He believes there will be “peas as large as beets” and “strawberries as large as apples” (these are two separate predictions; he is weirdly obsessed with fruit and vegetable size). We will travel to England via giant combination submarine/hovercrafts that will complete the trip in a lightning-fast two days. There will be no surface-level transportation in cities as all cars and walkways have moved underground. The letters C, X, and Q will be removed from the language. Pneumatic tubes will deliver purchases from stores. “A man or woman unable to walk ten miles at a stretch will be regarded as a weakling.”

Where Watkins is right, he is generally listing a cool technology slightly beyond what was available to his time and predicting we will have it. Nevertheless, he is still mostly wrong. Yet this is Hanson’s example of accurate futurology. And he is right to make it his example of accurate futurology, because everything else is even worse.

Hanson has no illusions of certainty. He starts by saying that “conditional on my key assumptions, I expect at least 30% of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 10%.” So he is not explicitly overconfident. But in an implicit sense, it’s just weird to see the level of detail he tries to predict – for example, he has two pages about what sort of swear words the far future might use. And the book’s style serves to reinforce its weirdness. The whole thing is written in a sort of professorial monotone that changes little from loving descriptions of the sorts of pipes that will cool future buildings (one of Hanson’s pet topics) to speculation on our descendents’ romantic relationships (key quote: “The per minute subjective value of an equal relation should not fall much below half of the per-minute value of a relation with the best available open source lover”). And it leans heavily on a favorite Hansonian literary device – the weirdly general statement about something that sounds like it can’t possibly be measurable, followed by a curt reference which if followed up absolutely confirms said statement, followed by relentlessly ringing every corollary of it:

Today, mental fatigue reduces mental performance by about 0.1% per minute. As by resting we can recover at a rate of 1% per minute, we need roughly one-tenth of our workday to be break time, with the duration between breaks being not much more than an hour or two (Trougakos and Hideg 2009; Alvanchi et al 2012)…Thus many em tasks will be designed to take about an hour, and many spurs are likely to last for about this duration.

Or:

Today, painters, novelists, and directors who are experimental artists tend to do their best work at roughly ages 46-52, 38-50, and 45-63 respectively, but those ages are 24-34, 29-40, and 27-43, respectively for conceptual artists (Galenson 2006)…At any one time, the vast majority of actual working ems [should be] near a peak productivity subjective age.

Or:

Wars today, like cities, are distributed evenly across all possible war sizes (Cederman 2003).

At some point I started to wonder whether Hanson was putting me on. Everything is just played too straight. Hanson even addresses this:

To resist the temptation to construe the future too abstractly, I’ll try to imagine a future full of complex detail. One indication that I’ve been successful in all these efforts will be if my scenario description sounds less like it came from a typical comic book or science fiction movie, and more like it came from a typical history text or business casebook.

Well, count that project a success. The effect is strange to behold, and I’m not sure it will usher in a new era of futurology. But Age of Em is great not just as futurology, but as a bunch of different ideas and purposes all bound up in a futurological package. For example:

An introduction to some of the concepts that recur again and again across Robin’s thought – for example, near vs. far mode, the farmer/forager dichotomy, the inside and outside views, signaling. Most of us learned these through years reading Hanson’s blog Overcoming Bias, getting each chunk in turn, spending days or months thinking over each piece. Getting it all out of a book you can read in a couple of days sounds really hard – but by applying them to dozens of different subproblems involved in future predictions, Hanson makes the reader more comfortable with them, and I expect a lot of people will come out of the book with an intuitive understanding of how they can be applied.

A whirlwind tour through almost every science and a pretty good way to learn about the present. If you didn’t already know that wars are distributed evenly across all possible war sizes, well, read Age of Em and you will know that and many similar things besides.

A manifesto. Hanson often makes predictions by assuming that since the future will be more competitive, future people are likely to converge toward optimal institutions. This is a dangerous assumption for futurology – it’s the same line of thinking that led Watkins to assume English would abandon C, X, and Q as inefficient – but it’s a great assumption if you want a chance to explain your ideas of optimal institutions to thousands of people who think they’re reading fun science-fiction. Thus, Robin spends several pages talking about how ems may use prediction markets – an information aggregation technique he invented – to make their decisions. In the real world, Hanson has been trying to push these for decades, with varying levels of success. Here, in the guise of a future society, he can expose a whole new group of people to their advantages – as well as the advantages of something called “combinatorial auctions” which I am still not smart enough to understand.

A mind-expanding drug. One of the great risks of futurology is to fail to realize how different societies and institutions can be – the same way uncreative costume designers make their aliens look like humans with green skin. A lot of our thoughts about the future involve assumptions we’ve never really examined critically, and Hanson dynamites those assumptions. For page after page, he gives strong arguments why our descendants might be poorer, shorter-lived, less likely to travel long distances or into space, less progressive and open-minded. He predicts little noticeable technological change, millimeter-high beings living in cities the size of bottles, careers lasting fractions of seconds, humans being incomprehensibly wealthy patrons to their own robot overlords. And all of it makes sense.

When I read Stross’ Accelerando, one of the parts that stuck with me the longest was the Vile Offspring, weird posthuman entities that operated a mostly-incomprehensible Economy 2.0 that humans just sort of hung out on the edges of, goggle-eyed. It was a weird vision – but, for Stross, mostly a black box. Age of Em opens the box and shows you every part of what our weird incomprehensible posthuman descendents will be doing in loving detail. Even what kind of swear words they’ll use.

III.

So, what is the Age of Em?

According to Hanson, AI is really hard and won’t be invented in time to shape the posthuman future. But sometime a century or so from now, scanning technology, neuroscience, and computer hardware will advance enough to allow emulated humans, or “ems”. Take somebody’s brain, scan it on a microscopic level, and use this information to simulate it neuron-by-neuron on a computer. A good enough simulation will map inputs to outputs in exactly the same way as the brain itself, effectively uploading the person to a computer. Uploaded humans will be much the same as biological humans. Given suitable sense-organs, effectors, virtual avatars, or even robot bodies, they can think, talk, work, play, love, and build in much the same way as their “parent”. But ems have three very important differences from biological humans.

First, they have no natural body. They will never need food or water; they will never get sick or die. They can live entirely in virtual worlds in which any luxuries they want – luxurious penthouses, gluttonous feasts, Ferraris – can be conjured out of nothing. They will have some limited ability to transcend space, talking to other ems’ virtual presences in much the same way two people in different countries can talk on the Internet.

Second, they can run at different speeds. While a normal human brain is stuck running at the speed that physics allow, a computer simulating a brain can simulate it faster or slower depending on preference and hardware availability. With enough parallel hardware, an em could experience a subjective century in an objective week. Alternatively, if an em wanted to save hardware it could process all its mental operations v e r y s l o w l y and experience only a subjective week every objective century.

Third, just like other computer data, ems can be copied, cut, and pasted. One uploaded copy of Robin Hanson, plus enough free hardware, can become a thousand uploaded copies of Robin Hanson, each living in their own virtual world and doing different things. The copies could even converse with each other, check each other’s work, duel to the death, or – yes – have sex with each other. And if having a thousand Robin Hansons proves too much, a quick ctrl-x and you can delete any redundant ems to free up hard disk space for Civilization 6 (coming out this October!)

Would this count as murder? Hanson predicts that ems will have unusually blasé attitudes toward copy-deletion. If there are a thousand other copies of me in the world, then going to sleep and not waking up just feels like delegating back to a different version of me. If you’re still not convinced, Hanson’s essay Is Forgotten Party Death? is a typically disquieting analysis of this proposition. But whether it’s true or not is almost irrelevant – at least some ems will think this way, and they will be the ones who tend to volunteer to be copied for short term tasks that require termination of the copy afterwards. If you personally aren’t interested in participating, the economy will leave you behind.

The ability to copy ems as many times as needed fundamentally changes the economy and the idea of economic growth. Imagine Google has a thousand positions for Ruby programmers. Instead of finding a thousand workers, they can find one very smart and very hard-working person and copy her a thousand times. With unlimited available labor supply, wages plummet to subsistence levels. “Subsistence levels” for ems are the bare minimum it takes to rent enough hardware from Amazon Cloud to run an em. The overwhelming majority of ems will exist at such subsistence levels. On the one hand, if you’ve got to exist on a subsistence level, a virtual world where all luxuries can be conjured from thin air is a pretty good place to do it. On the other, such starvation wages might leave ems with little or no leisure time.

Sort of. This gets weird. There’s an urban legend about a “test for psychopaths”. You tell someone a story about a man who attends his mother’s funeral. He met a really pretty girl there and fell in love, but neglected to get her contact details before she disappeared. How might he meet her again? If they answer “kill his father, she’ll probably come to that funeral too”, they’re a psychopath – ordinary people would have a mental block that prevents them from even considering such a drastic solution. And I bring this up because after reading Age of Em I feel like Robin Hanson would be able to come up with some super-solution even the psychopaths can’t think of, some plan that gets the man a threesome with the girl and her even hotter twin sister at the cost of wiping out an entire continent. Everything about labor relations in Age of Em is like this.

For example, suppose you want to hire an em at subsistence wages, but you want them 24 hours a day, 7 days a week. Ems probably need to sleep – that’s hard-coded into the brain, and the brain is being simulated at enough fidelity to leave that in. But jobs with tasks that don’t last longer than a single day – for example, a surgeon who performs five surgeries a day but has no day-to-day carryover – can get around this restriction by letting an em have one full night of sleep, then copying it. Paste the em at the beginning of the workday. When it starts to get tired, let it finish the surgery it’s working on, then delete it and paste the well-rested copy again to do the next surgery. Repeat forever and the em never has to get any more sleep than that one night. You can use the same trick to give an em a “vacation” – just give it one of them, then copy-paste that brain-state forever.

Or suppose your ems want frequent vacations, but you want them working every day. Let a “trunk” em vacation every day, then make a thousand copies every morning, work all the copies for twenty-four hours, then delete them. Every copy remembers a life spent in constant vacation, and cheered on by its generally wonderful existence it will give a full day’s work. But from the company’s perspective, 99.9% of the ems in its employment are working at any given moment.

(another option: work the em at normal subjective speed, then speed it up a thousand times to take its week-long vacation, then have it return to work after only one-one-thousandth of a week has passed in real life)

Given that ems exist at subsistence wages, saving enough for retirement sounds difficult, but this too has weird psychopathic solutions. Thousands of copies of the same em can pool their retirement savings, then have all except a randomly chosen one disappear at the moment of retirement, leaving that one with a nest egg thousands of times what it could have accumulated by its own efforts. Or an em can invest its paltry savings in some kind of low-risk low-return investment and reduce its running speed so much that the return on its investment is enough to pay for its decreased subsistence. For example, if it costs $100 to rent enough computing power to run an em at normal speed for one year, and you only have $10 in savings, you can rent 1/1000th of the computer for $0.10, run at 1/1000th speed, invest your $10 in a bond that pays 1% per year, and have enough to continue running indefinitely. The only disadvantage is that you’ll only experience a subjective week every twenty objective years. Also, since other entities are experiencing a subjective week every second, and some of those entities have nukes, probably there will be some kind of big war, someone will nuke Amazon’s data centers, and you’ll die after a couple of your subjective minutes. But at least you got to retire!
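
Since the figures in that paragraph are concrete, a quick check (using only its own numbers: $100/year hardware, $10 savings, a 1% bond) shows why the scheme balances and where the “subjective week every twenty objective years” comes from:

```python
# Checking the retirement arithmetic above with the paragraph's own figures.
full_speed_cost = 100.0    # dollars per objective year to run an em at 1x speed
savings = 10.0             # the retiree's nest egg, in dollars
bond_yield = 0.01          # 1% per year

income = savings * bond_yield                 # $0.10 per year of investment income
affordable_speed = income / full_speed_cost   # fraction of full speed this income buys

years_per_subjective_week = (1 / affordable_speed) / 52
print(f"runs at 1/{int(1 / affordable_speed)} speed")                            # 1/1000
print(f"~{years_per_subjective_week:.0f} objective years per subjective week")   # ~19, i.e. "twenty"
```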

If ems do find ways to get time off the clock, what will they do with it? Probably they’ll have really weird social lives. After all, the existence of em copies is mostly funded by companies, and there’s no reason for companies to copy-paste any but the best workers in a given field. So despite the literally trillions of ems likely to make up the world, most will be copies of a few exceptionally brilliant and hard-working individuals with specific marketable talents. Elon Musk might go out one day to the bar with his friend, who is also Elon Musk, and order “the usual”. The bartender, who is Elon Musk himself, would know exactly what drink he wants and have it readily available, as the bar caters entirely to people who are Elon Musk. A few minutes later, a few Chesley Sullenbergers might come in after a long day of piloting airplanes. Each Sullenberger would have met hundreds of Musks before and have a good idea about which Musk-Sullenberger conversation topics were most enjoyable, but they might have to adjust for circumstances; maybe the Musks they met before all branched off a most recent common ancestor in 2120, but these are a different branch who were created in 2105 and remember Elon’s human experiences but not a lot of the posthuman lives that shaped the 2120 Musks’ worldviews. One Sullenberger might tentatively complain that the solar power grid has too many outages these days; a Musk might agree to take the problem up with the Council of Musks, which is totally a thing that exists (Hanson calls these sorts of groups “copy clans” and says they are “a natural candidate unit for finance, reproduction, legal, liability, and political representation”).

Romance could be even weirder. Elon Musk #2633590 goes into a bar and meets Taylor Swift #105051, who has a job singing in a nice local nightclub and so is considered prestigious for a Taylor Swift. He looks up a record of what happens when Elon Musks ask Taylor Swifts out and finds they are receptive on 87.35% of occasions. The two start dating and are advised by the Council of Musks and the Council of Swifts on the issues that are known to come up in Musk-Swift relationships and the best solutions that have been found to each. Unfortunately, Musk #2633590 is transferred to a job that requires operating at 10,000x human speed, but Swift #105051’s nightclub runs at 100x speed and refuses to subsidize her to run any faster; such a speed difference makes normal interaction impossible. The story has a happy ending; Swift #105051 allows Musk #2633590 to have her source code, and whenever he is feeling lonely he spends a little extra money to instantiate a high-speed copy of her to hang out with.

(needless to say, these examples are not exactly word-for-word taken from the book, but they’re heavily based off of Hanson’s more abstract descriptions)

The em world is not just very weird, it’s also very very big. Hanson notes that labor is a limiting factor in economic growth, yet even today the economy doubles about once every fifteen years. Once you can produce skilled labor through a simple copy-paste operation, especially labor you can run at a thousand times human speed, the economy will go through the roof. He writes that:

To generate an empirical estimate of em economy doubling times, we can look at the timescales it takes for machine shops and factories today to make a mass of machines of a quality, quantity, variety, and value similar to that of machines that they themselves contain. Today that timescale is roughly 1 to 3 months. Also, designs were sketched two to three decades ago for systems that might self-replicate nearly completely in 6 to 12 months…these estimates suggest that today’s manufacturing technology is capable of self-replicating on a scale of a few weeks to a few months.

Hanson thinks that with further innovation, such times can be reduced so far that “the economy might double every objective year, month, week, or day.” As the economy doubles the labor force – ie the number of ems – may double with it, until only a few years after the first ems the population numbers in the trillions. But if the em population is doubling every day, there had better be some pretty amazing construction efforts going on. The only thing that could possibly work on that scale is prefabricated modular construction of giant superdense cities, probably made mostly out of some sort of proto early-stage computronium (plus cooling pipes). Ems would be reluctant to travel from one such city to another – if they exist at a thousand times human speed, a trip on a hypersonic airliner that could go from New York to Los Angeles in an hour would still take forty subjective days. Who wants to be on an airplane for forty days?

(long-distance trade is also rare, since if the economy doubles fast enough it means that by the time goods reach their destination they could be almost worthless)

The real winners of this ultra-fast-growing economy? Ordinary humans. While humans will be way too slow and stupid to do anything useful, they will tend to have non-subsistence amounts of money saved up from their previous human lives, and also be running at speeds thousands of times slower than most of the economy. When the economy doubles every day, so can your bank account. Ordinary humans will become rarer, less relevant, but fantastically rich – a sort of doddering Neanderthal aristocracy spending sums on a cheeseburger that could support thousands of ems in luxury for entire lifetimes. While there will no doubt be pressure to liquidate humans and take their stuff, Hanson hopes that the spirit of rule of law – the same spirit that protects rich minority groups today – will win out, with rich ems reluctant to support property confiscation lest it extend to them also. Also, em retirees will have incentives a lot like humans – they have saved up money and go really slow – and like AARP members today they may be able to obtain disproportionate political power which will then protect the interests of slow rich people.

But we might not have much time to enjoy our sudden rise in wealth. Hanson predicts that the Age of Em will last for subjective em millennia – ie about one to two actual human years. After all, most of the interesting political and economic activity is going on at em timescales. In the space of a few subjective millennia, either someone will screw up and cause the apocalypse, somebody will invent real superintelligent AI that causes a technological singularity, or some other weird thing will happen taking civilization beyond the point that even Robin dares to try to predict.

IV.

Hanson understands that people might not like the idea of a future full of people working very long hours at subsistence wages forever (Zack Davis’ Contract-Drafting Em song is, as usual, relevant). But Hanson himself does not view this future as dystopian. Despite our descendents’ by-the-numbers poverty, they will avoid the miseries commonly associated with poverty today. There will be no dirt or cockroaches in their sparkling virtual worlds, nobody will go hungry, petty crime will be all-but-eliminated, and unemployment will be low. Anybody who can score some leisure time will have a dizzying variety of hyperadvanced entertainment available, and as for the people who can’t, they’ll mostly have been copied from people who really like working hard and don’t miss it anyway. As unhappy as we moderns may be contemplating em society, ems themselves will not be unhappy! And as for us:

The analysis in this book suggests that lives in the next great era may be as different from our lives as our lives are from farmers’ lives, or farmers’ lives are from foragers’ lives. Many readers of this book, living industrial era lives and sharing industrial era values, may be disturbed to see a forecast of em era descendants with choices and lifestyles that appear to reject many of the values that they hold dear. Such readers may be tempted to fight to prevent the em future, perhaps preferring a continuation of the industrial era. Such readers may be correct that rejecting the em future holds them true to their core values. But I advise such readers to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values.

A short digression: there’s a certain strain of thought I find infuriating, which is “My traditionalist ancestors would have disapproved of the changes typical of my era, like racial equality, more open sexuality, and secularism. But I am smarter than them, and so totally okay with how the future will likely have values even more progressive and shocking than my own. Therefore I pre-approve of any value changes that might happen in the future as definitely good and better than our stupid hidebound present.”

I once read a science-fiction story that depicted a pretty average sci-fi future – mighty starships, weird aliens, confederations of planets, post-scarcity economy – with the sole unusual feature that rape was considered totally legal, and opposition to it considered as bigoted and ignorant as opposition to homosexuality is today. Everybody got really angry at the author and said it was offensive for him to even speculate about that. Well, that’s the method by which our cheerful acceptance of any possible future values is maintained: restricting the set of “any possible future values” to “values slightly more progressive than ours” and then angrily shouting down anyone who discusses future values that actually sound bad. But of course the whole question of how worried to be about future value drift only makes sense in the context of future values that genuinely violate our current values. Approving of all future values except ones that would be offensive to even speculate about is the same faux-open-mindedness as tolerating anything except the outgroup.

Hanson deserves credit for positing a future whose values are likely to upset even the sort of people who say they don’t get upset over future value drift. I’m not sure whether or not he deserves credit for not being upset by it. Yes, it’s got low crime, ample food for everybody, and full employment. But so does Brave New World. The whole point of dystopian fiction is pointing out that we have complicated values beyond material security. Hanson is absolutely right that our traditionalist ancestors would view our own era with as much horror as some of us would view an em era. He’s even right that on utilitarian grounds, it’s hard to argue with an em era where everyone is really happy working eighteen hours a day for their entire lives because we selected for people who feel that way. But at some point, can we make the Lovecraftian argument of “I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?”

This brings us to an even worse scenario.

There are a lot of similarities between Hanson’s futurology and (my possibly erroneous interpretation of) the futurology of Nick Land. I see Land as saying, like Hanson, that the future will be one of quickly accelerating economic activity that comes to dominate a bigger and bigger portion of our descendents’ lives. But whereas Hanson’s framing focuses on the participants in such economic activity, playing up their resemblances with modern humans, Land takes a bigger picture. He talks about the economy itself acquiring a sort of self-awareness or agency, so that the destiny of civilization is consumed by the imperative of economic growth.

Imagine a company that manufactures batteries for electric cars. The inventor of the batteries might be a scientist who really believes in the power of technology to improve the human race. The workers who help build the batteries might just be trying to earn money to support their families. The CEO might be running the business because he wants to buy a really big yacht. And the whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.

Now imagine the company fires all its employees and replaces them with robots. It fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limits it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.

Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.

Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.

But this seems to me the natural end of the economic system. Right now it needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere.

True to form, Land doesn’t see this as a dystopia – I think he conflates “maximally efficient economy” with “God”, which is a hell of a thing to conflate – but I do. And I think it provides an important new lens with which to look at the Age of Em.

The Age of Em is an economy in the early stages of such a transformation. Instead of being able to replace everything with literal robots, it replaces them with humans who have had some aspects of their humanity stripped away. Biological bodies. The desire and ability to have children normally. Robin doesn’t think people will lose all leisure time and non-work-related desires, but he doesn’t seem too sure about this and it doesn’t seem to bother him much if they do.

I envision a spectrum between the current world of humans and Nick Land’s Ascended Economy. Somewhere on the spectrum we have ems who get leisure time. A little further on the spectrum we have ems who don’t get leisure time.

But we can go further. Hanson imagines that we can “tweak” em minds. We may not understand the brain enough to create totally new intelligences from the ground up, but by his Age of Em we should understand it well enough to make a few minor hacks, the same way even somebody who doesn’t know HTML or CSS can usually figure out how to change the background color of a webpage with enough prodding. Many of these mind tweaks will be the equivalent of psychiatric drugs – some might even be computer simulations of what we observe to happen when we give psychiatric drugs to a biological brain. But these tweaks will necessarily be much stronger and more versatile, since we no longer care about bodily side effects (ems don’t have bodies) and we can apply it to only a single small region of the brain and avoid acting anywhere else. You could also very quickly advance brain science – the main limits today are practical (it’s really hard to open up somebody’s brain and do stuff to it without killing them) and ethical (the government might have some words with you if you tried). An Age of Em would remove both obstacles, and give you the added bonus of being able to make thousands of copies of your test subjects for randomized controlled trials, reloading any from a saved copy if they died. Hanson envisions that:

As the em world is a very competitive world where sex is not needed for reproduction, and as sex can be time and attention-consuming, ems may try to suppress sexuality, via mind tweaks that produce effects analogous to castration. Such effects might be temporary, perhaps with a consciously controllable on-off switch…it is possible that em brain tweaks could be found to greatly reduce natural human desires for sex and related romantic and intimate pair bonding without reducing em productivity. It is also possible that many of the most productive ems would accept such tweaks.

Possible? I can do that right now with a high enough dose of Paxil, and I don’t even have to upload your brain to a computer first. Fun stories about Musk #2633590 and Swift #105051 aside, I expect this would happen about ten minutes after the advent of the Age of Em, and we would have taken another step down the path to the Ascended Economy.

There are dozens of other such tweaks I can think of, but let me focus on two.

First, stimulants have a very powerful ability to focus the brain on the task at hand, as anybody who’s taken Adderall or modafinil can attest. Their main drawbacks are addictiveness and health concerns, but in a world where such pills can be applied as mental tweaks, where minds have no bodies, and where any mind that gets too screwed up can be reloaded from a backup copy, these are barely concerns at all. Many of the purely mental side effects of stimulants come from their effects in parts of the brain not vital to the stimulant effect. If we can selectively apply Adderall to certain brain centers but not others, then unapply it at will, then from employers’ point of view there’s no reason not to have all workers dosed with superior year 2100 versions of Adderall at all times. I worry that not only will workers not have any leisure time, but they’ll be neurologically incapable of having their minds drift off while on the job. Davis’ contract-drafting em who starts wondering about philosophy on the job wouldn’t get terminated. He would just have his simulated-Adderall dose increased.

Second, Robin managed to write an entire book about emulated minds without using the word “wireheading”. This is another thing we can do right now, with today’s technology – but once it’s a line of code and not a costly brain surgery, it should become nigh-universal. Give ems the control switches to their own reward centers and all questions about leisure time become irrelevant. Give bosses the control switches to their employees’ reward centers, and the situation changes markedly. Hanson says that there probably won’t be too much slavery in the em world, because it will likely have strong rule of law, because slaves aren’t as productive as free workers, and there’s little advantage to enslaving someone when you could just pay them subsistence wages anyway. But slavery isn’t nearly as abject and inferior a condition as the one where somebody else has the control switch to your reward center. Combine that with the stimulant use mentioned above, and you can have people who will never have nor want to have any thought about anything other than working on the precise task at which they are supposed to be working at any given time.

This is something I worry about even in the context of normal biological humans. But Hanson already believes em worlds will have few regulations and be able to ignore the moral horror of 99% of the population by copying and using the 1% who are okay with something. Combine this with a situation where brains are easily accessible and tweakable, and this sort of scenario becomes horribly likely.

I see almost no interesting difference between an em world with full use of these tweaks and an Ascended Economy world. Yes, there are things that look vaguely human in outline laboring in the one and not the other, but it’s not like there will be different thought processes or different results. I’m not even sure what it would mean for the ems to be conscious in a world like this – they’re not doing anything interesting with the consciousness. The best we could say about this is that if the wireheading is used liberally it’s a lite version of the world where everything gets converted to hedonium.

V.

In a book full of weird ideas, there is only one idea rejected as too weird. And in a book written in a professorial monotone, there’s only one point at which Hanson expresses anything like emotion:

Some people foresee a rapid local “intelligence explosion” happening soon after a smart AI system can usefully modify its local architecture (Chalmers 2010; Hanson and Yudkowsky 2013; Yudkowsky 2013; Bostrom 2014)…Honestly to me this local intelligence explosion scenario looks suspiciously like a super-villain comic book plot. A flash of insight by a lone genius lets him create a genius AI. Hidden in its super-villain research lab lair, this genius villain AI works out unprecedented revolutions in AI design, turns itself into a super-genius, which then invents super-weapons and takes over the world. Bwa ha ha.

For someone who just got done talking about the sex lives of uploaded computers in millimeter-tall robot bodies running at 1000x human speed, Robin is sure quick to use the absurdity heuristic to straw-man intelligence explosion scenarios as “comic book plots”. Take away his weird authorial tic of using the words “genius” and “supervillain”, and this scenario reduces to “Some group, perhaps Google, perhaps a university, invents an artificial intelligence smart enough to edit its own source code; exponentially growing intelligence without obvious bound follows shortly thereafter”. Yes, it’s weird to think that there may be a sudden quantum leap in intelligence like this, but no weirder than to think most of civilization will transition from human to em in the space of a year or two. I’m a little bit offended that this is the only idea given this level of dismissive treatment. Since I do have immense respect for Robin, I hope my offense doesn’t color the following thoughts too much.

Hanson’s arguments against AI seem somewhat motivated. He admits that AI researchers generally estimate less than 50 years before we get human-level artificial intelligence, a span shorter than his estimate of a century until we can upload ems. He even admits that no AI researcher thinks ems are a plausible route to AI. But he dismisses this by saying when he asks AI experts informally, they say that in their own field, they have only noticed about 5-10% of the progress they expect would be needed to reach human intelligence over the past twenty years. He then multiplies out to say that it will probably take at least 400 years to reach human-level AI. I have two complaints about this estimate.

First, he is explicitly ignoring published papers surveying hundreds of researchers using validated techniques, in favor of what he describes as “meeting experienced AI experts informally”. But even though he feels comfortable rejecting vast surveys of AI experts as potentially biased, as best I can tell he does not ask a single neuroscientist to estimate the date at which brain scanning and simulation might be available. He just says that “it seems plausible that sufficient progress will be made in roughly a century or so”, citing a few hopeful articles by very enthusiastic futurists who are not neuroscientists or scanning professionals themselves and have not talked to any. This seems to me to be an extreme example of isolated demands for rigor. No matter how many AI scientists think AI is soon, Hanson will cherry-pick the surveying procedures and results that make it look far. But if a few futurists think brain emulation is possible, then no matter what anybody else thinks that’s good enough for him.

Second, one would expect that even if there were only 5-10% progress over the last twenty years, then there would be faster progress in the future, since the future will have a bigger economy, better supporting technology, and more resources invested in AI research. Robin answers this objection by saying that “increases in research funding usually give much less than proportionate increases in research progress” and cites Alston et al 2011. I looked up Alston et al 2011, and it is a paper relating crop productivity to government funding of agriculture research. There was no attempt to relate its findings to any field other than agriculture, nor to any type of funding other than government. But studies show that while public research funding often does have minimal effects, the effect of private research funding is usually much larger. A single sentence citing a study in crop productivity to apply to artificial intelligence while ignoring much more relevant results that contradict it seems like a really weak argument for a statement as potentially surprising as “amount of research does not affect technological progress”.

I realize that Hanson has done a lot more work on this topic and he couldn’t fit all of it in this book. I disagree with his other work too, and I’ve said so elsewhere. For now I just want to say that the arguments in this book seem weak to me.

I also want to mention what seems to me a very Hansonian counterargument to the ems-come-first scenario: we have always developed de novo technology before understanding the relevant biology. We built automobiles by figuring out the physics of combustion engines, not by studying human muscles and creating mechanical imitations of myosin and actin. Although the Wright brothers were inspired by birds, their first plane was not an ornithopter. Our power plants use coal and uranium instead of the Krebs Cycle. Biology is really hard. Even slavishly copying biology is really hard. I don’t think Hanson and the futurists he cites understand the scale of the problem they’ve set themselves.

Current cutting-edge brain emulation projects have found their work much harder than expected. Simulating a nematode is pretty much the rock-bottom easiest thing in this category, since they are tiny primitive worms with only a few neurons; the history of the field is a litany of failures, with current leader OpenWorm “reluctant to make bold claims about its current resemblance to biological behavior”. A more ambitious $1.3 billion attempt to simulate a tiny portion of a rat brain has gone down in history as a legendary failure (politics were involved, but I expect they would be involved in a plan to upload a human too). And these are just attempts to get something that behaves vaguely like a nematode or rat. Actually uploading a human, keeping their memory and personality intact, and not having them go insane afterwards boggles the mind. We’re still not sure how much small molecules matter to brain function, how much glial cells matter to brain function, how many things in the brain are or aren’t local. AI researchers are making programs that can defeat chess grandmasters; upload researchers are still struggling to make a worm that will wriggle. The right analogy for modern attempts to upload human brains isn’t modern attempts at designing AI. It’s an attempt at designing AI by someone who doesn’t even know how to plug in a computer.

VI.

I guess what really bothers me about Hanson’s pooh-poohing of AI is him calling it “a comic book plot”. To me, it’s Hanson’s scenario that seems science-fiction-ish.

I say this not as a generic insult but as a pointer at a specific category of errors. In Star Wars, the Rebellion had all of these beautiful hyperspace-capable starfighters that could shoot laser beams and explore galaxies – and they still had human pilots. 1977 thought the pangalactic future would still be using people to pilot its military aircraft; in reality, even 2016 is moving away from this.

Science fiction books have to tell interesting stories, and interesting stories are about humans or human-like entities. We can enjoy stories about aliens or robots as long as those aliens and robots are still approximately human-sized, human-shaped, human-intelligent, and doing human-type things. A Star Wars in which all of the X-Wings were combat drones wouldn’t have done anything for us. So when I accuse something of being science-fiction-ish, I mean bending over backwards – and ignoring the evidence – in order to give basically human-shaped beings a central role.

This is my critique of Robin. As weird as the Age of Em is, it makes sure never to be weird in ways that warp the fundamental humanity of its participants. Ems might be copied and pasted like so many .JPGs, but they still fall in love, form clans, and go on vacations.

In contrast, I expect that we’ll get some kind of AI that will be totally inhuman and much harder to write sympathetic stories about. If we get ems after all, I expect them to be lobotomized and drugged until they become effectively inhuman, cogs in the Ascended Economy that would no more fall in love than an automobile would eat hay and whinny. Robin’s interest in keeping his protagonists relatable makes his book fascinating, engaging, and probably wrong.

I almost said “and probably less horrible than we should actually expect”, but I’m not sure that’s true. With a certain amount of horror-suppressing, the Ascended Economy can be written off as morally neutral – either having no conscious thought, or stably wireheaded. All of Robin’s points about how normal non-uploaded humans should be able to survive an Ascended Economy at least for a while seem accurate. So morally valuable actors might continue to exist in weird Amish-style enclaves, living a post-scarcity lifestyle off the proceeds of their investments, while all the while the Ascended Economy buzzes around them, doing weird inhuman things that encroach upon them not at all. This seems slightly worse than a Friendly AI scenario, but much better than we have any right to expect of the future.

I highly recommend Age of Em as a fantastically fun read and a great introduction to these concepts. It’s engaging, readable, and weird. I just don’t know if it’s weird enough.

Three Great Articles On Poverty, And Why I Disagree With All Of Them

QZ: The universal basic income is an idea whose time will never come. Okay, maybe this one isn’t so great. It argues that work is ennobling (or whatever), that robots probably aren’t stealing our jobs, that even if we’re going through a period of economic disruption we’ll probably adapt, and that “if the goal is eliminating poverty, it is better to direct public funds to [failing schools and substandard public services]” than to try a guaranteed income scheme. It ends by saying that “I can’t understand why we’d consider creating and then calcifying a perpetually under-employed underclass by promoting the stagnation of their skills and severing their links to broader communities.”

(imagine a world where we had created and calcified a perpetually under-employed stagnant underclass. It sounds awful.)

More Crows Than Eagles: Unnecessariat. This one is great. A blogger from the Rust Belt reports on the increasing economic despair and frustration all around her, in the context of the recent spikes in heroin overdoses and suicides. There’s an important caveat here, in that at least national-level economic data paint a rosy picture: the unemployment rate is very low, consumer confidence is high, and the studies of technological unemployment suggest it’s not happening yet. Still, a lot of people on the ground – the anonymous blogger, the pathologists she worked with, and me from my position as a psychiatrist in the Midwest – feel like there’s a lot more misery and despair than the statistics suggest. MCTE replaces the old idea of the “precariat” – people who just barely have jobs and are worried about losing them – with her own coinage “unnecessariat” – people who don’t have jobs, are useless to the economy, and nobody cares what happens to them. It reminds me of the old argument of sweatshop-supporting economists – sure, we’re exploiting you, but you’d miss us if we left. She hates Silicon Valley for building its glittering megaplexes while ignoring everyone else, but she hates even more the people saying “Learn to code! Become part of the bright new exciting knowledge economy!” because realistically there’s no way an opioid-dependent 55-year-old ex-trucker from Kentucky is going to learn to code. The only thing such people have left is a howl of impotent rage, and it has a silly hairstyle and is named Donald J. Trump.

Freddie deBoer: Our Nightmare. Also pretty great. The same things deBoer has been warning about for years, but expressed unusually clearly. By taking on the superficial mantle of center-leftism, elites sublimate the revolutionary impulse into a competition for social virtue points which ends up reinforcing and legitimizing existing power structures. Constant tally-keeping over what percent of obscenely rich exploitative Wall Street executives are people of color replaces the question of whether there should be obscenely rich exploitative Wall Street executives at all. As such tendencies completely capture the Democratic Party and the country’s mainstream left, genuine economic anger becomes more likely to be funneled into the right wing, where the elites can dismiss it as probably-racist (often with justification) and ignore it. “I cannot stress enough to you how vulnerable the case for economic justice is in this country right now. Elites agitate against it constantly…this is a movement, coordinated from above, and its intent is to solidify the already-vast control of economic elites over our political system…[Liberalism] is an attempt to ameliorate the inequality and immiseration of capitalism, when inequality and immiseration are the very purpose of capitalism.”

These articles all look at poverty in different ways, and I think that I look at poverty in a different way still. In the spirit of all the crazy political compasses out there, maybe we can learn something by categorizing them:


(Including only people who think society should be in the business of collectively helping the poor at all – i.e., no extreme libertarians or social Darwinists – and people who are interested in something beyond deBoer’s nightmare scenario – i.e., not just making sure every identity group has an equal shot at the Wall Street positions.)

People seem to split into a competitive versus a cooperative view of poverty. To massively oversimplify: competitives agree with deBoer that “inequality and immiseration are the very purpose of capitalism” and conceive of ending poverty in terms of stopping exploitation and giving the poor their “just due” that the rich have taken away from them. The cooperatives argue that everyone is working together to create a nice economy that enriches everybody who participates in it, but some people haven’t figured out exactly how to plug into the magic wealth-generating machine, and we should give them a helping hand (“here’s government-subsidized tuition to a school where you can learn to code!”). Probably nobody’s 100% competitive or 100% cooperative, but I think a lot of people have a tendency to view the problem more one way than the other.

So the northwest corner of the grid is people who think the problem is primarily one of exploitation, but it’s at least somewhat tractable to reform. No surprises here – these are the types who think that the big corporations are exploiting people, but if average citizens try hard enough they can make the Man pay a $15 minimum wage and give them free college tuition, and then with enough small victories like these they can level the balance enough to give everybody a chance.

(These are all going to be straw men, but hopefully useful straw men)

The southwest corner is people who think the problem is primarily one of exploitation, but nothing within the system will possibly help. I put “full communism” in the little box, but I guess this could also be anarcho-syndicalism, or anarcho-capitalism, or theocracy, or Trumpism, or [insert your preferred poorly-planned form of government which inevitably fails here].

The northeast corner is people who think we’re all in this together and there are lots of opportunities to help. This is the QZ writer who said we should be focusing on “education and public services”. The economy is a benevolent force that wants to help everybody, but some people through bad luck – poor educational opportunities, not enough childcare, racial prejudice – haven’t gotten the opportunity they need yet, so we should lend them a helping hand so they can get back on their feet and one day learn to code. I named this quadrant “Free School Lunches” after all those studies that show that giving poor kids free school lunches improves their grades by X percent, which changes their chances of getting into a good college by Y percent, which increases their future income by Z percent, so all we have to do is have lots of social programs like free school lunches and then poverty is solved. But aside from the school lunch people, this category must also include libertarians who think that all we need to do is remove regulations that prevent the poor from succeeding, Reaganites who think that a rising tide will lift all boats, and conservatives who think the poor just need to be taught Traditional Hard-Working Values. Actually, probably 90% of the Overton Window is in this corner.

The southeast corner is people who think that we’re all in this together, but that helping the poor is really hard. They agree with the free school lunch crowd that capitalism is more the solution than the problem, and that we should think of this in terms of complicated impersonal social and educational factors preventing poor people from fitting into the economy. But the southeasterners worry school lunches won’t be enough. Maybe even hiring great teachers, giving everybody free health care, ending racism, and giving generous vocational training to people in need wouldn’t be enough. If we held a communist revolution, it wouldn’t do a thing: you can’t hold a revolution against skill mismatch. This is a very gloomy quadrant, and I don’t blame people for not wanting to be in it. But it’s where I spend most of my time.

The exploitation narrative seems fundamentally wrong to me – I’m not saying exploitation doesn’t happen, nor even that it isn’t common, just that it isn’t the major factor causing poverty and social decay. The unnecessariat article, for all its rage against Silicon Valley hogging the wealth, half-admits this – the people profiled have become unnecessary to the functioning of the economy, no longer having a function even as exploited proletarians. Silicon Valley isn’t exploiting these people, just ignoring them. Fears of technological unemployment are also relevant here: they’re just the doomsday scenario where all of us are relegated to the unnecessariat, the economy having passed us by.

But I also can’t be optimistic about programs to end poverty. Whether it’s finding out that schools and teachers have relatively little effect on student achievement, that good parenting has even less, or that differences in income are up to fifty-eight percent heritable and a lot of what isn’t outright genetic is weird biology or noise, most of the research I read is very doubtful of easy (or even hard) solutions. Even the most extensive early interventions have underwhelming effects. We can spend the collective energy of our society beating our head against a problem for decades and make no headway. While there may still be low-hanging fruit – maybe a scaled-up Perry Preschool Project, lots of prenatal vitamins, or some scientist discovering a new version of the unleaded-gasoline movement – we don’t seem very good at finding it, and I worry it would be at most a drop in the bucket. Right now I think that a lot of variation in class and income is due to genetics and really deep cultural factors that nobody knows how to change en masse.

I can’t even really believe that a rising tide will lift all boats anymore. Not only has GDP uncoupled from median wages over the past forty years, but there seems to be a Red Queen’s Race where every time GDP goes up, the cost of living goes up by the same amount. US real GDP per capita has increased roughly tenfold since 1900, yet a lot of people have no savings and are one paycheck away from the street. In theory, a 1900s poor person who suddenly got 10x his normal salary should be able to save 90% of it, build up a fund for rainy days, and end up in a much better position. In practice, even if the minimum wage in 2100 is $200 an hour in 2016 dollars, I expect the average 2100 poor person will still be one paycheck away from the street. I can’t explain this; I just accept it at this point. And I think that, aside from our superior technology, I would rather be a poor farmer in 1900 than a poor kid in the projects today. More southeast corner gloom.
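
(A toy version of the arithmetic behind that expectation, with 1900 income and cost of living both normalized to 1 – a made-up illustration of the Red Queen’s Race, not actual consumption data:

$$\text{savings rate} = \frac{\text{income} - \text{cost of living}}{\text{income}}, \qquad \frac{10 - 1}{10} = 90\% \text{ if costs stay fixed}, \qquad \frac{10 - 10}{10} = 0\% \text{ if costs rise in lockstep.}$$

)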

The only public figure I can think of in the southeast quadrant with me is Charles Murray. Neither he nor I would dare reduce all class differences to heredity, and he in particular has some very sophisticated theories about class and culture. But he shares my skepticism that the 55-year-old Kentucky trucker can be taught to code, and I don’t think he’s too sanguine about the trucker’s kids either. His solution is a basic income guarantee, and I guess that’s mine too. Not because I have great answers to all of the QZ article’s problems. But just because I don’t have any better ideas.[1][2]

The QZ article warns that it might create a calcified “perpetually under-employed stagnant underclass”. But of course we already have such an underclass, and it’s terrible. I can neither imagine them all learning to code, nor a sudden revival of the non-coding jobs they used to enjoy. Throwing money at them is a pretty subpar solution, but it’s better than leaving everything the way it is and not throwing money at them.

This is why I can’t entirely sympathize with any of the essays I read on poverty, eloquent though they are.

Footnotes

1. And then there’s the rest of the world. Given the success of export capitalism in Korea, Taiwan, China, Vietnam, et cetera, and the pattern where multinationals move to some undeveloped country with cheap labor, boost the local economy until the country is developed and labor there isn’t so cheap anymore, and then move on to the next beneficiary – solving international poverty seems a lot easier than solving local poverty. All we have to do is keep wanting shoes and plastic toys. And part of me wonders – if setting up a social safety net would slow domestic economic growth – or even divert money that would otherwise go to foreign aid – does that make it a net negative? Maybe we should be optimizing for maximum economic growth until we’ve maxed out the good we can do by industrializing Third World countries? My guess is that enough of the basic income debate is about how to use existing welfare payments that this wouldn’t be too big a factor. And I would hope (for complicated reasons) that basic income would be more likely to help than hurt the economy.[3]

2. Obviously invent genetic engineering and create a post-scarcity society, but until then we have to deal with this stuff.

3. And then there’s the whole open borders idea, which probably isn’t very compatible with basic income at all. Right now I think – I’ll explain at more length later – fully open borders is a bad idea, because the risk of it destabilizing the country and ruining the economic motor that lifts Third World countries out of poverty is too high.