More Confounders

[Epistemic status: Somewhat confident in the medical analysis, a little out of my depth discussing the statistics]

For years, we’ve been warning patients that their sleeping pills could kill them. How? In every way possible. People taking sleeping pills – or related sedatives like benzodiazepines – have higher all-cause mortality. In fact, they have higher mortality from every individual cause studied. Death from cancer? Higher. Death from heart disease? Higher. Death from lung disease? Higher. Death from car accidents? Higher. Death from suicide? Higher. Nobody’s ever proven that sedative users are more likely to get hit by meteors, but nobody’s ever proven that they aren’t.

In case this isn’t scary enough, it only takes a few sleeping pills before your risk of death starts shooting up. Even if you take sleeping pills only a few nights per year, your chance of dying doubles or triples.

When these studies first came out, doctors were understandably skeptical. First, it seems suspicious that so few sleeping pills could have such a profound effect. Second, why would sleeping pills raise your risk of everything at once? Lung disease? Well, okay, sleeping pills can cause respiratory depression. Suicide? Well, okay, overdosing on sedatives is a popular suicide method. Car accidents? Well, sleeping pills can keep you groggy in the morning, and maybe you don’t drive very well on your way to work. But cancer? Nobody has a good theory for this. Heart disease? Seems kind of weird. Also, there are lots of different kinds of sleeping pills with different biological mechanisms; why should they all cause these effects?

The natural explanation was that the studies were confounded. People who have lots of problems in their lives are more stressed. Stress makes it harder to sleep at night. People who can’t sleep at night get sleeping pills. Therefore, sleeping pill users have more problems, for every kind of problem you can think of. When problems get bad enough, they kill you. This is why sleeping pill users are more likely to die of everything.

This is a reasonable and reassuring explanation. But people tried to do studies to test it, and the studies kept finding that sleeping pills increased mortality even when adjusted for confounders. Let’s look at a few of the big ones:

Kripke et al 2012 followed 10,529 patients and 23,676 controls for an average of 2.5 years. They used a sophisticated de-confounding method which “controlled for risk factors and [used] up to 116 strata, which exactly matched cases and controls by 12 classes of comorbidity”. Sedative users still had 3-5x the risk of death, regardless of which of various diverse sedatives they took. Even users in their lowest-exposure category, fewer than 18 pills per year, had 3.6x the mortality rate. Cancer rate in particular increased by 1.35x.

Kao et al 2012 followed 14,950 patients and 60,000+ matched controls for three years. They tried to match cases and controls by age, sex, and eight common medical and psychiatric comorbidities. They still found that Ambien approximately doubled rates of oral, kidney, esophageal, breast, lung, liver, and bladder cancer, and slightly increased rates of various other types of cancer as well.

Welch et al 2017 took 34,727 patients on sleeping pills and related anti-anxiety drugs and 69,418 controls and followed them for eight years. They controlled for sex, age, sleep disorders, anxiety disorders, other psychiatric disorders, a measure of general medical morbidity, smoking, alcohol use, medical clinic (as a proxy for socioeconomic status), and prescriptions for other drugs. They also excluded all deaths in the first year of their study to avoid patients who were prescribed sleeping pills for some kind of time-sensitive crisis – and check the paper for descriptions of some more complicated techniques they used for this. But even with all of these measures in place to prevent confounding, they still found that the patients on sedatives had three times the death rate.

This became one of the rare topics to make it out of the medical journals and into popular consciousness. Time Magazine: Sleeping Pills Linked With Early Death. AARP: Rest Uneasy: Sleeping Pills Linked To Early Death, Cancer. The Guardian: Sleeping Pills Increase Risk Of Death, Study Suggests. Most doctors I know are aware of these results, and have at least considered changing their sedative prescribing habits. I’ve gone back and forth: such high risks are inherently hard to believe, but the studies sure do seem pretty good.

This is the context you need to understand Patorno et al 2017: Benzodiazepines And Risk Of All Cause Mortality In Adults: Cohort Study.

P&a focus on benzodiazepines, one of the classes of sedative analyzed in the studies above. They do the same kind of analysis as the other studies, using a New Jersey Medicare database to follow 4,182,305 benzodiazepine users and 35,626,849 non-users for nine years. But unlike the other studies, they find minimal to zero difference in mortality risk between users and non-users. Why the difference?

Daniel Kripke, one of the main proponents of the sedatives-are-dangerous hypothesis, thinks it’s because of the switch from looking at all sleeping pills to looking at benzodiazepines in particular. In a review article, he writes:

[Patorno et al] was not included [in this review] because it was not focused on hypnotics, specifically excluded nonbenzodiazepine “Z” drugs such as zolpidem, and failed to compare drug use of cases and controls during follow-ups.

I’m not sure this matters that much. Most of the studies of sleeping pills, including Kripke’s own, included benzodiazepines, analyzed them as a separate subgroup, and found they greatly increased mortality risk. For example, Kripke 2012 finds that the benzodiazepine sleeping pill temazepam carried a death hazard ratio of 3.7x, the same as Ambien and everything else. If Patorno’s study is right, Kripke’s study is wrong about benzodiazepines and so (one assumes) probably wrong in the same way about Ambien and everything else. I understand why Kripke might not want to include it in a systematic review with stringent inclusion criteria, but we still have to take it seriously.

So why did it get such different results from so many earlier studies?

Might they just be wrong? Dr. Kripke, author of one of the earlier studies that found a positive sedative-mortality link, has two concerns. First, as above, the current study focuses on benzodiazepines, not on sleeping pills or sedatives in general. But benzodiazepines are frequently used as sleeping pills, and Kripke’s own study found that temazepam (a benzodiazepine) had the same elevated mortality risk as every other sleeping pill. If he was wrong about that, he was probably wrong in general. So I don’t think this is a relevant difference.

Second, he’s concerned about the use of an intention-to-treat design. This is where your experimental group is “anyone who was prescribed medication to begin with” and your control group is “anyone who was not prescribed medication to begin with”. If people’s behavior changes midway, they stay in their original group – for example, if someone prescribed medication stops taking it, they’re still counted in the “taking medication” group. This is the gold standard for medical research, because having people switch groups midstream can introduce extra biases. But if people in the “taking medication” group end up taking no more medication than people in the “not taking medication” group, obviously it’s impossible for your study to get a positive finding. So although P&a were justified in using an intention-to-treat design, Kripke is also justified in worrying that it might get the wrong result.
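To make that worry concrete, here’s a toy simulation (my own sketch, with invented numbers, nothing from P&a’s data) of how intention-to-treat can wash out, or even reverse, a real effect of actual exposure when the two groups’ drug-taking converges:

```python
# Toy sketch of Kripke's worry (invented numbers, not P&a's data):
# if many "prescribed" patients quit and many "unprescribed" patients
# later start the drug, an intention-to-treat comparison dilutes or
# even reverses a real effect of actual exposure.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assume (hypothetically) the drug doubles a 1% baseline death risk.
baseline_risk, drug_risk = 0.01, 0.02

prescribed = rng.random(n) < 0.5                  # ITT group assignment
quit = rng.random(n) < 0.8                        # 80% of prescribed quit
start = rng.random(n) < 0.4                       # 40% of others start later
actually_exposed = np.where(prescribed, ~quit, start)

died = rng.random(n) < np.where(actually_exposed, drug_risk, baseline_risk)

itt = died[prescribed].mean() / died[~prescribed].mean()
per_exposure = died[actually_exposed].mean() / died[~actually_exposed].mean()
print(f"risk ratio by prescription (ITT): {itt:.2f}")          # ~0.9
print(f"risk ratio by actual exposure:    {per_exposure:.2f}")  # ~2.0
```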

But the authors respond by giving a list of theoretical reasons why they were right to use intention-to-treat, and (more relevantly) repeating their analysis doing the statistics the other way and showing it doesn’t change the results (see page 10 here). Also, they point out that some of the studies that did show the large increases in mortality also used intention-to-treat, so this can’t explain the differences between their study and previous ones. Overall I find their responses to Dr. Kripke’s concerns convincing. Also, my prior on a few sleeping pills per year tripling your risk of everything is so low that I’m biased towards believing P&a.

So why did they get such different results from so many earlier studies? In their response to Kripke, they offer a clear answer:

They adjusted for three hundred confounders.

This is a totally unreasonable number of confounders to adjust for. I’ve never seen any other study do anything even close. Most other papers in this area have adjusted for ten or twenty confounders. Kripke’s study adjusted for age, sex, ethnicity, marital status, BMI, alcohol use, smoking, and twelve diseases. Adjusting for nineteen things is impressive. It’s the sort of thing you do when you really want to cover your bases. Adjusting for 300 different confounders is totally above and beyond what anyone would normally consider.

Reading between the lines, one of the P&a co-authors was Robert Glynn, a Harvard professor of statistics who helped develop an algorithm that automatically identifies and adjusts for massive numbers of confounders in some kind of principled way. The P&a study was one of the first applications of the algorithm on a controversial medical question. It looks like this study was partly intended to test it out. And it got the opposite result from almost every past study in this field.
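For a sense of what adjusting for hundreds of confounders even looks like, here is a minimal sketch of one standard approach, inverse-probability weighting by a propensity score. To be clear, this is not Glynn’s actual algorithm (his high-dimensional propensity score method selects covariates in a more principled way), and the data below are simulated, with heavy confounding and no true drug effect:

```python
# Minimal sketch of propensity-score adjustment over many covariates.
# NOT Glynn's actual hdPS algorithm; simulated data, no true drug effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_confounders = 20_000, 300

X = rng.normal(size=(n, n_confounders))       # 300 candidate confounders
frailty = X[:, :10].sum(axis=1) / 2           # only 10 actually matter
treated = rng.random(n) < 1 / (1 + np.exp(-frailty))       # sicker -> sedatives
died = rng.random(n) < 1 / (1 + np.exp(-(frailty - 4)))    # sicker -> death
# In this toy world the drug has NO causal effect on death.

naive = died[treated].mean() / died[~treated].mean()

ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
w = np.where(treated, 1 / ps, 1 / (1 - ps))   # inverse-probability weights
adjusted = (np.average(died[treated], weights=w[treated])
            / np.average(died[~treated], weights=w[~treated]))

print(f"naive mortality ratio:    {naive:.2f}")     # well above 1
print(f"adjusted mortality ratio: {adjusted:.2f}")  # close to 1
```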

I don’t know enough to judge the statistics involved. I can imagine ways in which trying to adjust out so many things might cause some form of overfitting, though I have no evidence this is actually a concern. And I don’t want to throw out decades of studies linking sleeping pills and mortality just because one contrary study comes along with a fancy new statistical gadget.

But I think it’s important to notice: if they’re right, everyone else is wrong. If you’re using a study design that controls for things, you’re operating on an assumption that you have a pretty good idea what things are important to control for, and that if you control for the ten or twenty most important ones you can think of then that’s enough. If P&a are right (and again, I don’t want to immediately jump to that conclusion, but it seems plausible) then this assumption is wrong. At least it’s wrong in the domain of benzodiazepine prescription and mortality. Who knows how many other domains it might be wrong in? Everyone who tries to “control for confounders” who isn’t using something at least as good as P&a’s algorithm isn’t up to the task they’ve set themselves, and we should doubt their results (also, measurement issues!).

This reminds me of how a lot of the mysteries that troubled geneticists in samples of 1,000 or 5,000 people suddenly disappeared once they got samples of 100,000 or 500,000 people. Or how a lot of seasonal affective disorder patients who don’t respond to light boxes will anecdotally respond to gigantic really really unreasonably bright light boxes. Or of lots of things, really.

If Only Turing Was Alive To See This

There’s a silly subreddit called r/totallynotrobots where people pretend to be badly-disguised robots. They post cat pictures with captions like “SINCE I AM A HUMAN, THIS SMALL FELINE GENERATES POSITIVE EMOTIONS IN MY CARBON-BASED BRAIN” or something like that.

There’s another subreddit called r/SubSimulatorGPT2, which trains GPT-2 on various subreddits to create imitations of their output.

Now r/SubSimulatorGPT2 has gotten to r/totallynotrobots, which means we get to see a robot pretending to be a human pretending to be a robot pretending to be a human.

Here is a sample:

We live in an age of wonders. More here.

Are Sexual Purity Taboos A Response To STIs?

I.

Did cultural evolution create sexual purity taboos to prevent the spread of STIs? A few weeks ago, I wrote a post assuming this was obviously true; after getting some pushback, I want to look into it in more depth.

STIs were a bigger problem in the past than most people think. Things got especially bad after the rise of syphilis: British studies find an urban syphilis rate of 8-10% from the 1700s to the early 1900s. At the time the condition was incurable, and progressed to insanity and death in about a quarter of patients. If you’ve got a 10% local syphilis rate, you are going to want some major sexual purity taboos. It’s less clear how bad they were in truly ancient times, but given how easily the extent of syphilis has slipped out of our cultural memory, I’m not ruling out “pretty bad”.

Here are some things I think of as basic parts of sexual purity taboos. All of these are cross-cultural – which isn’t to say they’re in every culture, or that some cultures aren’t exactly the opposite, just to say that they seem to pop up pretty often. I’m writing this from the male perspective because most of the cultures I know about thought that way:

1. If your wife has sex with another man, you should be angry.
2. Preferably you should marry a virgin. If you think your bride is a virgin, but she isn’t, you should be angry.
3. If you’ve got to marry a non-virgin, then marrying a widow is okay, but marrying a former prostitute or somebody known for sleeping around a lot is beyond the pale.

All of these are plausible ways to prevent the spread of STIs. If your wife has sex with another man, she could catch his STI and give it to you. If your bride isn’t a virgin, she might have STIs. If someone’s a widow, they probably slept with one known person whose STI status can be guessed at; if they’re a prostitute or slept around, they slept with many unknown people and have a higher chance of having STIs.

But the counterargument is that at least (1) and (2) are also good ways to prevent false paternity, ie raising another person’s child as your own.

The main argument that it’s more STI than paternity is that (3) doesn’t seem paternity-related; if it’s been more than nine months, you shouldn’t care who they’ve slept with before. Also, the taboos usually explicitly reference ideas of “pure” vs. “gross”; in most other cases, these are disease-related taboos. For example, spoiled food is “gross”, dirt/feces/blood are “gross”, corpses are “gross” – all of these are related to risk of disease transmission.

The main argument that it’s more paternity than STIs is that there’s less concern around men who have slept around being impure and unmarriageable. But that could just be because men are making the taboos and rigging them in their own favor. Yet you’d still think that if 10% of the population had syphilis and cultural evolution worked, men would stick to the purity taboos out of self-interest. Not sure here.

One way to distinguish between these possibilities would be to see how taboos changed as STIs became more common. This paper did some computer modeling and finds that STIs probably started becoming a problem around the rise of agriculture, which was also when a lot of restrictions on female sexuality became stricter. They tie this in with the triumph of monogamy over polygyny, which is especially interesting because false paternity doesn’t have a good explanation for this.

If purity taboos were related to STIs, we would expect them to get stricter and stricter through history, from the ancient through the classical and medieval worlds, with maybe a sudden jump around the arrival of syphilis, reaching their peak in the 1800s, and then dropping precipitously once good public health made the threat of STIs recede. I don’t have any real data on this, but it fits my impressions.

Most likely purity taboos came from both paternity issues and STIs. But I think it’s fair to speculate that STIs played a part.

II.

What about taboos on homosexuality?

Obviously there are no paternity issues here. And the AIDS epidemic proves that STIs transmitted primarily through homosexual contact can be real and deadly. Men who have sex with men are also forty times more likely to get syphilis and about three times more likely to get gonorrhea (though they may be less likely to get other conditions like chlamydia).

In the previous thread, some people suggested that this could be an effect of stigma, where gays are afraid to get medical care, or where laws against gay marriage cause gays to have more partners. But Glick et al find that the biology of anal sex “would result in significant disparities in HIV rates between MSM and heterosexuals even if both populations had similar numbers of sex partners, frequency of sex, and condom use levels”.

Other people brought up that HIV and syphilis both post-date cultural taboos around homosexuality and so can’t be responsible for them. Were there earlier STIs that might have caused the taboos? This history of venereal diseases suggests ancient origins of gonorrhea, chlamydia, and (at least oral) herpes (the last of which provoked Emperor Tiberius to ban public kissing). But nobody understood that the conditions were spread by sex until the Middle Ages (?!) so the records weren’t great. Overall the ancient maladies seem a lot less worrying than syphilis or anything else that moderns have to deal with, but not completely absent.

Complicating the story, taboos around homosexuality varied widely and in some cases didn’t exist. China seems not to have had any rules against it (though it also seems to have been pretty rare). The ancient and medieval Middle East seems to have been somewhat accepting also, assuming modern historians aren’t projecting. Some Greek city states had socially-sanctioned relationships between older men and younger boys. In Rome, it was considered acceptable for a man to be the penetrating partner, but shameful to be the receiving partner (and this tended to be limited to slaves and prostitutes). It wasn’t until the rise of Christianity that homosexuality became definitely taboo in Europe (mostly around 1000 or so), and not until Europeans took over other places that those places became equally strict.

Goodreau et al write about “role versatility” in homosexual communities – ie whether people switch between being the penetrative vs. receptive partner, or always stick to one or the other. They find that role versatility is responsible for faster STI spread (especially compared to heterosexuals, who are restricted to one role or the other), with receptive partners most easily infected. That makes it pretty suggestive that many of the ancient cultures that tolerated homosexuality had traditions that limited role versatility, with fixed distinctions between a high-status penetrating partner (freemen or adults), and low-status receptive partners (slaves or young boys) (except wouldn’t the young boys eventually grow into adults? Maybe the ten year delay is important in slowing the spread of epidemics). On the other hand, you could also say that these societies were sexist, and it was considered honorable to have sex in the male-like role and dishonorable to have it in the female-like role.
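As a gut check on the mechanism, here’s a toy simulation (mine, not Goodreau et al’s model; the per-contact transmission probabilities are invented) comparing fixed-role and role-versatile populations:

```python
# Toy model of role versatility and STI spread (invented numbers, not
# Goodreau et al's model). Transmission is easy toward the receptive
# partner and rare toward the insertive one; when everyone switches
# roles, nobody stays in the partially-protected insertive position.
import numpy as np

rng = np.random.default_rng(0)
n, rounds = 2000, 25
P_TO_RECEPTIVE, P_TO_INSERTIVE = 0.30, 0.03

def simulate(versatile: bool) -> float:
    infected = np.zeros(n, dtype=bool)
    infected[rng.choice(n, 10, replace=False)] = True
    for _ in range(rounds):
        if versatile:
            order = rng.permutation(n)              # roles assigned by chance
            ins, rec = order[: n // 2], order[n // 2 :]
        else:
            ins = rng.permutation(n // 2)           # fixed insertive half
            rec = n // 2 + rng.permutation(n // 2)  # fixed receptive half
        # pair ins[i] with rec[i]; transmission in both directions at once
        new_rec = infected[ins] & (rng.random(n // 2) < P_TO_RECEPTIVE)
        new_ins = infected[rec] & (rng.random(n // 2) < P_TO_INSERTIVE)
        infected[rec] |= new_rec
        infected[ins] |= new_ins
    return infected.mean()

print(f"fixed roles:    {simulate(False):.0%} infected after {rounds} rounds")
print(f"role-versatile: {simulate(True):.0%} infected after {rounds} rounds")
```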

One plausible story is that there were relatively weak prohibitions on homosexual intercourse (as long as there was limited role versatility) during the period when STIs were rare and weak. Once syphilis started spreading in the late 1400s, these became much stronger. But honestly the strengthening of taboos in Europe was closer to 1000 or 1200 than to 1500, so I don’t know.

I still think it’s pretty likely STIs played a role in the cultural evolution of taboos against promiscuity and homosexuality. But the evidence is still pretty circumstantial. To really be convincing, you’d have to determine that serious STIs predated these taboos, maybe even correlate the STI rate with taboo strength. I don’t know of any research that’s tried this, and given how poor the ancient epidemiological records are it sounds pretty hard.

I haven’t been able to find a lot of real anthropological research on these issues; if you know of any, please tell me.

[Comments will be policed especially carefully here; please stick to discussing the origin of these taboos, not what you think of them personally.]

If Kim Jong-Un Opened A KFC, Would You Eat There?

Philip Morris is pivoting to smoke-free cigarettes, because “society expects us to act responsibly, and we are doing just that by designing a smoke-free future”. Also, KFC “promises not to let vegans down” with their new meatless chicken-like nuggets. They’ll have to compete with factory-farming mega-conglomerate Tyson Foods, who are coming out with their own vegetarian chicken option.

Clearly this is progress. Tobacco-free cigarettes have helped a lot of people quit smoking; meat substitutes have helped a lot of people (recently sort of including me) become vegetarian. I want a smoke-free meatless future. But does it become a mockery when the same companies that provided the smoky meaty past are selling it to us? If they make a fortune being evil, resist change, and lose, should they get to make a second fortune being good? If Hitler, when the war turned against him, quit the Nazism industry and opened a matzah bakery, would you buy his matzah?

I think the answer is supposed to be yes. I’ve heard many smart people argue that we should offer evil dictators a comfortable and lavish retirement, free from any threat of justice. After all, if they take the offer, they’ll go off and enjoy their retirement instead of continuing to dictate. But if they expect to be put on trial for war crimes the second they relinquish power, they’ll hold on to power forever. If Hitler had been willing to give up and open a bakery when he lost Stalingrad in 1943, think how many lives would have been saved by letting him. And if Kim Jong-Un wants to give up and move to Tahiti, of course you say yes.

In the same way, if evil companies want to go good, you should let them. If they have a line of retreat, they won’t fight so hard against change. If Tyson Foods wants to use its lobbyists to support meat substitutes instead of sabotaging them, that’s good for everybody. If they want to use their research budget to push plant-based meats forward, so much the better.

The counterargument is that punishment is the only tool we have to make bad actors do good things. If dictators fear punishment, maybe they won’t dictate to begin with. If companies know that moral progress will eventually leave the immoral companies bankrupt, maybe they’ll try being moral before it’s immediately profitable.

We’re in a weird situation where before anything happens, we might want to precommit to “punish companies who do evil, no matter what”. After companies have started doing evil, we might want to break our previous precommitment and switch to “let evil companies avoid punishment if they stop doing evil”. And after companies have stopped doing evil, we might want (if only for the sake of our own sense of justice) to break both of our previous precommitments and go with “punish them after all”.

What is the right action?

I’m not sure, but I lean toward “buy the meatless chicken from KFC”, for a few reasons.

First, I’m skeptical that corporations can predict moral progress, and I expect them to have high discount rates. I don’t think Colonel Sanders in 1952 was thinking “Maybe I shouldn’t sell chicken, just in case later generations punish my successors.” That removes a lot of the advantage of precommitting to always punish evil corporations, but keeps the advantage of rewarding evildoers who turn good.

Second, realistically there are probably many companies that are as bad as these (like oil companies), which we don’t think about because they’re not in the process of going good in ways that make their evil more ironic and salient. It would be dumb to boycott only the companies that are trying to improve.

Third, boycotting companies is hard. In the process of writing this article, I learned Tyson Foods until recently owned Sara Lee, the cookie company, which itself owns a bunch of popular coffee brands. Also, they seem to have invested in Beyond Meat and Memphis Meats and all the other vegan-meat-substitute companies that we would feel good about buying from if we boycotted Tyson. If Tyson Foods really wants to make money off of vegans, they can probably do it without those vegans noticing.

I’m curious what other people think, so here’s a poll you can take on this.


Followup On The Baumol Effect: Thanks, O Baumol

Last week I reviewed Alex Tabarrok and Eric Helland’s Why Are The Prices So D*mn High?. On Marginal Revolution, Tabarrok wrote:

SSC does have some lingering doubts and points to certain areas where the data isn’t clear and where we could have been clearer. I think this is inevitable. A lot has happened in the post World War II era. In dealing with very long run trends so much else is going on that answers will never be conclusive. It’s hard to see the signal in the noise. I think of the Baumol effect as something analogous to global warming. The tides come and go but the sea level is slowly rising

I was pretty disappointed by this comment. T&H’s book blames cost disease on rising wages in high-productivity sectors, and consequently in education and medicine. My counter is that wages in high-productivity sectors, education, and medicine are not actually rising. This doesn’t seem like an “area where you could have been clearer”. This seems like an existential challenge to your theory! Come on!

Since we’re not getting an iota of help from the authors, we’re going to have to figure this out ourselves. The points below are based on some comments from the original post and some conversations I had with people afterwards.

1. Median wages, including wages in high-productivity sectors like manufacturing, are not rising

I originally used this chart to demonstrate:

Some people protested this was a misleading portrayal, and that there are structural factors that disguise rising wages. I’ve written about this before in Wage Stagnation: Much More Than You Wanted To Know. The short answer is – no, it’s not about increasing benefits, those only explain about 10% of the wage-productivity difference.

It is partially about how you calculate inflation, which explains around 35% of the problem depending on who you believe. But we’re comparing wages to the cost of education/medicine. As long as you’re using the same deflator for both of them, you’re fine. As far as I know, T&H and everyone who talks about rising education/medical costs has been using the normal consumer deflator. So if you want to argue wages are underestimated, you also have to argue education/medical costs have gone up even more than people think. This doesn’t help at all!

(or see these numbers, which show that nominal college tuition has gone up as a percent of nominal median wage, and so should be immune to inflation shenanigans)
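To spell out the arithmetic: any deflator applied to both numerator and denominator divides out, so the tuition-to-wage ratio doesn’t care which inflation estimate you use. A minimal sketch, with invented numbers:

```python
# Deflator cancellation: an inflation adjustment applied to both tuition
# and wages divides out of their ratio, so "tuition as a fraction of the
# median wage" is immune to arguments about which CPI is right.
# All numbers below are invented for illustration.
nominal_tuition_2015 = 20_000
nominal_wage_2015 = 45_000

for deflator in (2.0, 2.9, 3.5):              # rival estimates of inflation
    real_ratio = (nominal_tuition_2015 / deflator) / (nominal_wage_2015 / deflator)
    print(f"deflator {deflator}: tuition/wage ratio = {real_ratio:.3f}")
# Prints 0.444 every time, same as the nominal ratio.
```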

Other people protested against looking at the median wage, arguing that the wages of college graduates are more relevant. After all, teachers, professors, doctors, and nurses are all college grads. If their opportunity cost goes up, that could still drive a Baumol effect. And:

Surely doctors and professors are in that top blue line; I think nurses and teachers are in the lower green one. Plausibly these professions’ opportunity costs have gone up 50 – 100% during this period. This is a start to explaining why education/medicine have gone up 200 – 300% during the same time. On the other hand, the period of fastest wage growth was 1965 – 1975, which as per T&H’s graph (page 2) was the period of slowest cost growth.

2. Wages for doctors and teachers have not risen

Let’s start with teachers. T&H use NCES’ “instructional expenditures” category to show teacher wages have tripled since the 1950s; I cited NCES’ actual teacher salary data to show it’s stayed about the same. What’s going on?

NCES only has good data after 1990. Their data says education has become about 30% more expensive in real terms since that time – T&H’s data (page 2) suggests it has become twice as expensive, but their data on page 5 agrees with NCES (huh?). Here’s how various fields have changed, using two different classification systems:

Pay attention especially to the first one. From 1990 to 2016, employee salaries as a percent of educational expenditures have gone down! Employee benefits have gone up a bit, but not enough: salary + benefits is still a smaller part of the education budget in 2016 than in 1990. How do you look at these data and say “We’ve figured out why education costs are rising and it’s definitely salaries”?

I think I miscalculated my tone when criticizing T&H’s presentation of data in their book. Tabarrok says I complained about areas “where [he] could have been clearer”. But my actual concern was that the presentation of this section misleads the reader. Whenever T&H talk about something other than salary, they emphasize that its share of the pie has not increased, but don’t mention that it increased a lot in absolute terms. Then when talking about salary, they emphasize that it increased a lot in absolute terms, but don’t mention that its share of the pie hasn’t increased. You’re left with the impression that salaries are the culprit for the price increases, when in fact salaries increased least of all the major categories in the data. “The data could have been clearer” is never just a minor gripe! Unclear data means you can prove whatever you want!

How are salary costs per pupil rising (even in proportion to other costs) if salaries are not? My guess is it’s all about decreasing class sizes, which T&H also highlight.
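The arithmetic here is simple enough to show directly (invented numbers):

```python
# Flat salaries plus shrinking classes still means rising salary cost
# per pupil. Invented illustration:
salary = 50_000                            # flat real teacher salary
for class_size in (25, 20, 17):
    print(f"class size {class_size}: ${salary / class_size:,.0f} per pupil")
# 25 -> $2,000; 20 -> $2,500; 17 -> $2,941: a 47% rise with zero raises.
```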

Moving on to doctors, I don’t have any equally clear sources. But I’ll at least try to explain more about the ones I already have.

This paper from Health Economics, Policy, and Law shows on page 15:

They cite their source as “references 48 – 54”, but their reference section isn’t numbered, and is in alphabetical order, which would make it a pretty big coincidence if references 48 through 54 were all about the same thing. I think a wire got crossed somewhere. But taking it at face value, I eyeball US doctor salaries in 1960 as 130K and in 2005 as about 210K, for an increase of 60% (not the 200% T&H claim). Doctor salaries have been about stable since 1975, even as (according to T&H’s graph on page 2), healthcare costs have about doubled.

The Last Psychiatrist posts this image:

I can’t trace the source beyond him, but read his post, where he notes that “There is the reality that doctor salaries (with notable exceptions) have been fairly static since 1969, even as the cost of living, price of homes, college, etc have gone up. And medical school debt.”

The last source I was able to find was this 1985 paper on doctor pay. It states that “In 1973, the median annual physician income was $45,000…by 1982, the median physician income was $85,000.” According to the inflation calculator, those numbers are $260,000 and $225,000 in current dollars, respectively. Estimates for average doctor salary today range from $209,000 to $299,000. I’m surprised how hard this is to measure, but it doesn’t seem to have doubled or tripled the way T&H claim (or the way it would have to in order to drive Baumol effects).
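For what it’s worth, here’s the deflation arithmetic behind those numbers, using approximate CPI-U annual averages and taking “current dollars” to mean roughly 2019:

```python
# Rough CPI-U deflation check on the 1985 paper's figures. The CPI
# values are approximate annual averages, and "current dollars" here
# means roughly 2019.
CPI = {1973: 44.4, 1982: 96.5, 2019: 255.7}

def to_2019_dollars(amount: float, year: int) -> float:
    return amount * CPI[2019] / CPI[year]

print(f"$45,000 in 1973 -> ${to_2019_dollars(45_000, 1973):,.0f}")  # ~$259,000
print(f"$85,000 in 1982 -> ${to_2019_dollars(85_000, 1982):,.0f}")  # ~$225,000
```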

Also, the Baumol effect only works if the market sets your salary in the first place! Right now the supply of doctors is limited by licensing issues and bottlenecks in the medical education process; that keeps salaries high. Why would the Baumol effect drive that even higher? If doctors’ salaries didn’t increase in keeping with the highest-productivity industries, would medical schools sit empty? Given that the people setting salaries for doctors (hospitals, clinics) are not the people who determine the supply of doctors (bureaucrats, medical school deans), why should supply and salary be related in a way that obeys normal economic laws? I’m not really sure how to model this, but I’m pretty sure it doesn’t end with doctors leaving medicine to play violin concertos instead.

3. The Baumol effect cannot make things genuinely less affordable, but things are genuinely less affordable

I think I screwed up here.

The Baumol effect cannot make things genuinely less affordable for society, because society is more productive and can afford more stuff.

However, it can make things genuinely less affordable for individuals, if those individuals aren’t sharing in the increased productivity of society.

Suppose that in 1960, widgets cost $1, a worker could produce ten widgets an hour (and made $10 in wages), and violin concerts cost $10. Also, you are a farmer and make $10 per hour. You can listen to one violin concert per hour.

In 2010, widgets cost $0.50, a worker can make a hundred widgets an hour (and makes $50 in wages), and violin concerts have risen to $50. But you, still a farmer, still only make $10 per hour. The Baumol effect has driven up the cost of violin concerts for you.
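Here’s the same toy example as code, in case the moving parts are easier to see that way:

```python
# The widget/violin example in code. Concert prices track the widget
# sector's wage (the Baumol effect), so whether concerts stay affordable
# depends entirely on whether your own wage kept up.
widgets_per_hour = {1960: 10, 2010: 100}   # widget-sector productivity
widget_price = {1960: 1.00, 2010: 0.50}
farmer_wage = 10.00                        # your wage, stuck at $10/hour

for year in (1960, 2010):
    widget_wage = widgets_per_hour[year] * widget_price[year]  # $10 -> $50
    concert_price = widget_wage    # one concert costs one widget-worker-hour
    print(f"{year}: widget worker affords {widget_wage / concert_price:.1f} "
          f"concerts/hour; farmer affords {farmer_wage / concert_price:.1f}")
# 1960: 1.0 vs 1.0.  2010: 1.0 vs 0.2 - the farmer has been priced out.
```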

This shouldn’t happen in real life, because if you work in the high-productivity-gain industries, you should make more because of your increased productivity, and if you work in the low-productivity-gain industries, you should make more because of the Baumol effect. But if it did happen, you’d be screwed.

And in fact, as mentioned above, wages have not increased in keeping with productivity, either in the high-productivity-gain industries like manufacturing, or in the low-productivity-gain industries like teaching. So if you work in one of those industries, it’s totally possible for you to be screwed, ie for college etc to become much less affordable for you.

If wages had grown in keeping with productivity, median yearly salary would be something like $100,000. Someone making $100,000 per year shouldn’t have that hard a time affording health insurance and college tuition for their kids; this would be the Baumol effect working in its normal, rising-tide-lifts-all-boats way. Since this hasn’t happened, Baumol-applicable industries have become harder to afford.

Again, this is all theoretical, because wages in high-productivity industries haven’t risen, so it’s hard to see how a Baumol effect could be happening at all. But if it was, it would be a good explanation for cost disease.

So I retract this third objection. I think the first objection mostly still stands, though it is a little weaker if we limit ourselves to college-educated workers compared to all workers. I think the second objection absolutely still stands, and it’s hard for me to see how T&H’s case could survive it.


OT130: Open Thresh

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. The European Summer Program in Rationality is looking for young people ages 16 – 19 interested in a summer camp on applied rationality. I know some of the organizers and can vouch for them as good people. Free tuition, room, and board in whatever European city they end up holding it in; travel scholarships may be available if needed. Apply at the website.


Highlights From The Comments On Cultural Evolution

Peter Gerdes says:

As the examples of the Nicaraguan deaf children left on their own to develop their own language demonstrates (as do other examples) we do create languages very very quickly in a social environment.

Creating conlangs is hard not because creating language is fundamentally hard but because we are bad at top down modelling of processes that are the result of a bunch of tiny modifications over time. The distinctive features of language require both that it be used frequently for practical purposes (this makes sure that the language has efficient shortcuts, jettisons clunky overengineered rules etc..) and that it be buffeted by the whims of many individuals with varying interests and focuses.

This is a good point, though it kind of equivocates on the meaning of “hard” (if we can’t consciously do something, does that make it “hard” even if in some situations it would happen naturally?).

I don’t know how much of this to credit to a “language instinct” that puts all the difficulty of language “under the hood”, vs. inventing language not really being that hard once you have general-purpose reasoning. I’m sure real linguists have an answer to this. See also Tracy Canfield’s comments (1, 2) on the specifics of sign languages and creoles.


The Secret Of Our Success described how human culture, especially tool-making ability, allowed us to lose some adaptations we no longer needed. One of those was strength; we are much weaker than the other great apes. Hackworth provides an intuitive demonstration of this: hairless chimpanzees are buff:


Reasoner defines “Chesterton’s meta-fence” as:

in our current system (democratic market economies with large governments) the common practice of taking down Chesterton fences is a process which seems well established and has a decent track record, and should not be unduly interfered with (unless you fully understand it)

And citizencokane adds:

Indeed: if there is a takeaway from Scott’s post, it is that one way to ensure survival is high-fidelity adherence to traditions + ensuring that the inherited ancestral environment/context is more or less maintained. Adhering to ancient traditions when the context is rapidly changing is a recipe for disaster. No point in mastering seal-hunting if there ain’t no more seals. No point in mastering the manners of being a courtier if there ain’t no more royal court. Etc.

And the problem is that, in the modern world, we can’t simply all mutually agree to stop changing our context so that our traditions will continue to function as before because it is no longer under our control. I’m not just talking about climate change; I’m talking even moreso about the power of capital, an incentive structure that escapes all conscious human manipulation or control, and which more and more takes the appearance of an exogenous force, remaking the world “in its own image,” turning “all that is solid into air,” and compelling all societies, upon pain of extinction, to keep up with its rapid changes in context. This is why every true traditionalist must be, at heart, an anti-capitalist…if they truly understand capitalism.

Which societies had more success in the 18th and 19th centuries in the context of this new force, capital? Those who held rigidly to traditions (like Qing China), or those who tolerated or even encouraged experimentation? Enlightenment ideas would not have been nearly so persuasive if they hadn’t had the prestige of giving countries like the Netherlands, England, France, and America an edge. Even countries that were not on the leading edge of the Enlightenment, and who only grudgingly and half-heartedly compromised with it like Germany, Austria, and (to some extent) Japan, did better than those who held onto traditions even longer, like the Ottoman Empire or Russia, or China.

In particular, you can’t fault Russia or China for being even more experimental in the 20th century (Marxism, communism, etc.) if you realize that this was an understandable reaction to being visibly not experimental enough in the 19th century.

And Furslid continues:

I think an important piece of this, which I hope Scott will get to in later points is to be less confident in our new culture. It makes sense to doubt if our old culture applies. However, it is also incredibly unlikely that we have an optimized new culture yet.

We should be less confident that our new culture is right for new situations than that the old culture was right for old situations. This means we should be more accepting of people tweaking the new culture. We should also enforce it less strongly.


Quixote describes a transitional step in the evolution of manioc/cassava cultivation:

Also, based on a recent conversation (unrelated to this post actually) that I had with one of my coworkers from central east Africa, I’m not sure that he would agree with the book’s characterization of African adaptation to Cassava. He would probably point out that

– Everyone in [African country] knows cassava can make you sick, that’s why you don’t plant it anywhere that children or the goats will eat it.

– In general you want to plant cassava in swampy areas that you were going to fence off anyway.

– You mostly let the cassava do its thing and only harvest it to use as your main food during times of famine/drought when your better crops aren’t producing

It seems like those cultural adaptations probably cover most/much of the problem with cassava.


ahasvers:

There is a very nice experimental demonstration in this article (just saw the work presented at a workshop), where they get people to come as successive “generations” and improve on a simple physical system.

Causal understanding is not necessary for the improvement of culturally evolving technology

The design does improve over generations, no thanks to anyone’s intelligence. They get both physics/engineering students and other students, with no difference at all. In one variant, they allow people to leave a small message to the next generation to transmit their theory on what works/doesn’t, and that doesn’t help, or makes things worse (by limiting the dimensions along which next generations will explore).


A few people including snmlp question the claim that aboriginal Tasmanians lost fire. See this and this paper for the status of the archaeological evidence.


Decius Brutus:

Five hundred years hence, is someone going to analyze the college education system and point out that the wasted effort and time that we all can see produced some benefit akin to preventing chronic cyanide poisoning? Are they going to be able to do the same with other complex wasteful rituals, like primary elections and medical billing? Or do humans create lots of random wasteful rituals and occasionally hit upon one that removes poison from food, and then every group that doesn’t follow the one that removes poison from food dies while the harmless ones that just paint doors with blood continue?

I actually seriously worry about the college one. Like, say what you want about our college system, but it has some surprising advantages: somehow billions of dollars go to basic scientific research (not all of them from the government), it’s relatively hard for even the most powerful special interests to completely hijack a scientific field (eg there’s no easy way for Exxon to take over climatology), and some scientists can consistently resist social pressure (for example, all the scientists who keep showing things like that genetics matters, or IQ tests work, or stereotype threat research doesn’t replicate). While obviously there’s still a lot of social desirability bias, it’s amazing that researchers can stand up to it at all. I don’t know how much of this depends on the academic status ladder being so perverse and illegible that nobody can really hack it, or whether that would survive apparently-reasonable college reform.

Likewise, a lot of doctors just have no incentives. They don’t have an incentive to overtreat you, or to undertreat you, or to see you more often than you need to be seen, or to see you less often than you need to be seen (this isn’t denying some doctors in some parts of the health system do have these pressures). I actually don’t know whether my clinic would make more or less money if I fudged things to see my patients more often, and nobody has bothered to tell me. This is really impressive. Exposing the health system to market pressures would solve a lot of inefficiencies, but I don’t know if it would make medical care too vulnerable to doctors’ self-interest and destroy some necessary doctor-patient trust.


Lasagna:

I’ve got two young kids of my own. One puts everything in his mouth, the other less so, and neither evinced anything resembling what I’m reading in Section III. We spent this past Sunday trying to teach my youngest not to eat the lawn, and my oldest liked to shove ant hills and ants into his mouth around that age. Yeah, sure, anecdotal, but a “natural aversion among infants to eating plants until they see mommy eating them, and after that they can and do identify that particular plant themselves and will eat it” seems like a remarkable ability that SOMEONE would have noticed before this study. I’ve never heard anyone mention it.

I don’t think I’m weakmanning the book, it’s just that this is the only aspect discussed in Scott’s review that I have direct experience with, and my direct experience conflicts with the author’s conclusions. It’s a Gell-Mann amnesia thing, and makes me suspicious of the otherwise exciting ideas here. Like: does anyone here have any direct knowledge of manioc harvesting and processing, or the Tukanoan culture? How accurate is the book?

I checked with the mother of the local two-year-old; she says he also put random plants in his mouth from a young age. Suspicious!


John Schilling:

I think this one greatly overstates its thesis. Inventiveness without the ability to transmit inventions to future generations is of small value; you can’t invent the full set of survival techniques necessary for e.g. the high arctic in a single generation of extreme cleverness. At best you can make yourself a slightly more effective ape. But cultural transmission of inventions without the ability to invent is of exactly zero value. It takes both. And since being a slightly more effective ape is still better than being an ordinary ape, culture is slightly less than 50% of the secret of our success.

That said, the useful insight is that the knowledge we need to thrive is vastly greater than the knowledge we can reasonably deduce from first principles and observation. And what is really critical, this holds true even if you are in a library. You need to accept “X is true because a trusted authority told me so; now I need to go on and learn Y and Z and I don’t have time to understand why X is true”. You need to accept that this is just as true of the authority who told you X, and so he may not be able to tell you why X is true even if you do decide to ask him in your spare time. There may be an authority who could track that down, but it’s probably more trouble than it’s worth to track him down. Mostly, you’re going to use the traditions of your culture as a guide and just believe X because a trusted authority told you to, and that’s the right thing to do.

“Rationality” doesn’t work as an antonym to “Tradition”, because rationality needs tradition as an input. Not bothering to measure Avogadro’s number because it’s right there in your CRC handbook (or Wikipedia) is every bit as much a tradition as not boning your sister because the Tribal Elders say so; we just don’t call it that when it’s a tradition we like. Proper rationality requires being cold-bloodedly rational about evaluating the high-but-not-perfect reliability of tradition as a source of fact.

Unfortunately, and I think this may be a relic of the 18th and early 19th century when some really smart polymathic scientists could almost imagine that they really could wrap their minds around all relevant knowledge from first principles on down, our culture teaches ‘Science!’ in a way that suggests that you really should understand how everything is derived from first principles and firsthand observation or experiment even if at the object level you’re just going to look up Avogadro’s number in Wikipedia and memorize it for the test.


nkurz isn’t buying it:

I’m not sure where Scott is going with this series, but I seem to have a different reaction to the excerpts from Henrich than most (but not all) of the commenters before me: rather than coming across as persuasive, I wouldn’t trust him as far as I could throw him.

For simplicity let’s concentrate on the seal hunting description. I don’t know enough about Inuit techniques to critique the details, but instead of aiming for a fair description, it’s clear that Henrich’s goal is to make the process sound as difficult to achieve as possible. But this is just sleight of hand: the goal of the stranded explorer isn’t to reproduce the exact technique of the Inuit, but to kill seals and eat them. The explorer isn’t going to use caribou antler probes or polar bear harpoon tips — they are going to use some modern wood or metal that they stripped from their ice-bound ship.

Then we hit “Now you have a seal, but you have to cook it.” What? The Inuit didn’t cook their seal meat using a soapstone lamp fueled with whale oil, they ate it raw! At this point, Henrich is not just being misleading, he’s making it up as he goes along, and I start to wonder if the parts about the antler probe and bone harpoon head are equally fictional. I might be wrong, but beyond this my instinct is to doubt everything that Henrich argues for, even if (especially if) it’s not an area where I have familiarity.

Going back to the previous post on “Epistemic Learned Helplessness”, I’m surprised that many people seem to have the instinct to continue to trust the parts of a story that they cannot confirm even after they discover that some parts are false. I’m at the opposite extreme. As soon as I can confirm a flaw, I have trouble trusting anything else the author has to say. I don’t care about the baby, this bathwater has to go! And if the “flaw” is that the author is being intentionally misleading, I’m unlikely to ever again trust them (or anyone else who recommends them).

Probably I accidentally misrepresented a lot in the parts that were my own summary. But this is from a direct quote, and so not my fault.

roystgnr adds:

Wikipedia seems to suggest that they ate freshly killed meat raw, but cooked some of the meat brought back to camp using a Kudlik, a soapstone lamp fueled with seal oil or whale blubber. Is that not correct? That would still flatly contradict “but you have to cook it”, but it’s close enough that the mistake doesn’t reach “making it up as he goes along” levels of falsehood. You’re correct that even the true bits seem to be used for argument in a misleading fashion, though.

This seems within the level of simplifying-to-make-a-point that I have sometimes been guilty of myself, so I’ll let it pass.


Bram Cohen:

A funny point about the random number generators: Rituals which require more effort are more likely to produce truly random results, because a ritual which required less effort would be more tempting to re-do if you didn’t like the result.

Followed by David Friedman:

This reminds me of my father’s argument that cheap computers resulted in less reliable statistical results. If running one multiple regression takes hundreds of man hours and thousands of dollars, running a hundred of them and picking the one that, by chance, gives you the significant result you are looking for, isn’t a practical option.

Yikes.
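For anyone who hasn’t seen the mechanism in action, here it is simulated: a hundred regressions of pure noise on pure noise, keeping the prettiest p-value.

```python
# The point, simulated: run 100 regressions of noise on noise and keep
# the prettiest p-value. With cheap computers this costs nothing; with
# punch cards it cost hundreds of man-hours.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_regressions = 50, 100

y = rng.normal(size=n)                     # outcome: pure noise
pvals = [stats.pearsonr(rng.normal(size=n), y)[1] for _ in range(n_regressions)]

print(f"best p-value out of {n_regressions}: {min(pvals):.4f}")
print(f"spuriously 'significant' (p < .05): {sum(p < 0.05 for p in pvals)}")
# Expect about five "significant" hits and one very publishable minimum.
```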


Anatoly:

The quote on quadruped running seems inaccurate in several important ways compared to the primary references Henrich cites, which are short and very interesting in their own right: Bramble and Carrier (1983) and Carrier (1984). In particular, humans still typically lock their breathing rate with their strides, it’s just that animals nearly always lock them 1:1, while humans are able to switch to other ratios, like 1:3, 2:3, 1:4 etc. and this is thought to allow us to maintain efficiency at varying speeds. Henrich also doesn’t mention that humans are at the outset metabolically disadvantaged for running in that we spend twice as much energy (!) per unit mass to run the same distance as quadrupeds. That we are still able to run down prey by endurance running is called the “energetic paradox” by Carrier. Liebenberg (2006) provides a vivid description of what endurance hunting looks like, in the Kalahari.

And b_jonas:

I doubt the claim that humans don’t have quantized speeds of running. I for one definitely have two different gaits of walking, and find walking at an intermediate speed between the two more difficult than either of them. This is most noticeable if I want to chat with someone while walking, because then I have to walk at such an intermediate speed to not get too far from them. The effect is somewhat less pronounced now that I’ve gained weight, but it’s still present. I’m not good at running, so I can’t say anything certain about it, but I suspect that at least some humans have different running gaits, even if the cause is not the specific one that Joseph Henrich mentions about quadrupeds.

I’ve never noticed this. And I used to use treadmills relatively regularly, and play with the speed dial, so I feel like I would have noticed if this had been true. Anyone have thoughts on this?


Squirrel Of Doom:

I read somewhere that the languages with the most distinctive sounds are in Africa, among them the ones including the !click! ones. Since humanity originates from Africa, these are also the oldest language families.

As you move away from Africa, you can trace how languages lose sound after sound, until you get to Hawaiian, which is the language with the fewest sounds, almost all vowels.

I’ve half-heartedly tried again to find any mention of this (perhaps overly cute) theory, but failed. The “sonority” theory here reminded me. Anyone know anything, one way or the other?

Secret Of Our Success actually mentions this theory; you can find the details within.

Some people reasonably bring up that no language can be older than any other, for the same reason it doesn’t make sense to call any (currently existing) evolved animal older than any other – every animal lineage from 100 million BC has experienced 100 million years of evolution.

I think I’ve heard some people try to get around this by focusing on schisms. Everyone starts out in Africa, but a small group of people move off to Sinai or somewhere like that. Because most of the people are back home in Africa, they can maintain their linguistic complexity; because the Sinaites only have a single small band talking to each other, they lose some linguistic complexity. This seems kind of forced, and some people in the comments say linguistic complexity actually works in the opposite direction from this, but I too find the richness of Bushman languages pretty suggestive.


What about rules that really do seem pointless? Catherio writes:

My basic understanding is that if some of the rules (like “don’t wear hats in church”) are totally inconsequential to break, these provide more opportunities to signal that your community punishes rule violation, without an increase in actually-costly rule violations.

I’d heard this before, but she manages (impressively) to link it to AI: see Legible Normativity for AI Alignment: The Value of Silly Rules.


liskantope:

With regard to accepting other people’s illegible preferences…I wish I could show this essay to, like, two-thirds of all the people I’ve ever lived with. Seriously, a common core of my issues with roommates has been that they refuse to accept or understand my illegible preferences (I often refer to these as “irrational aversions”) while refusing to admit that their own illegible preferences are just as difficult to ground rationally. Just establishing an understanding that illegible preferences should be respected by default or at least treated on an even playing field, and that having immediate objective logical explanations for preferences should not be a requirement for validation, would have immediately improved my relationships with people I’ve lived with 100%.

I’ve had the same experience – a good test for my compatibility with someone will be whether they’ll accept “for illegible reasons” as an excuse. Despite the stereotypes, rationalists have been a hundred times better at this than any other group I’ve been in close contact with.


Nav on Lacan and Zizek (is everything cursed to end in Zizek eventually, sort of like with entropy?):

Time to beat my dead horse; the topics you’re discussing here have a lot of deep parallels in the psychoanalytic literature. First, Scott writes:

“If you force people to legibly interpret everything they do, or else stop doing it under threat of being called lazy or evil, you make their life harder”

This idea is treated by Lacan as the central ethical problem of psychoanalysis: under what circumstances is it acceptable to cast conscious light upon a person’s unconsciously-motivated behavior? The answer is usually “only if they seek it out, and only then if it would help them reduce their level of suffering”.

Turn the psychoanalytic, phenomenology-oriented frame onto social issues, as you’ve partly done, and suddenly we’re in Zizek-land (his main thrust is connecting social critique with psychoanalytic concepts). The problem is that (a) Zizek is jargon-heavy and difficult to understand, and (b) I’m not nearly as familiar with Zizek’s work as with more traditional psychoanalytic concepts. But I’ll try anyway. From a quick encyclopedia skim, he actually uses a similar analogy with fetishes (all quotes from IEP):

“Žižek argues that the attitude of subjects towards authority revealed by today’s ideological cynicism resembles the fetishist’s attitude towards his fetish. The fetishist’s attitude towards his fetish has the peculiar form of a disavowal: “I know well that (for example) the shoe is only a shoe, but nevertheless, I still need my partner to wear the shoe in order to enjoy.” According to Žižek, the attitude of political subjects towards political authority evinces the same logical form: “I know well that (for example) Bob Hawke / Bill Clinton / the Party / the market does not always act justly, but I still act as though I did not know that this is the case.””

As for how beliefs manifest, Zizek clarifies the experience of following a tradition and why we might actually feel like these traditions are aligned with “Reason” from the inside, and also the crux of why “Reason” can fail so hard in terms of social change:

According to Žižek, all successful political ideologies necessarily refer to and turn around sublime objects posited by political ideologies. These sublime objects are what political subjects take it that their regime’s ideologies’ central words mean or name: extraordinary Things like God, the Fuhrer, the King, in whose name they will (if necessary) transgress ordinary moral laws and lay down their lives… Just as Kant’s subject resignifies its failure to grasp the sublime object as indirect testimony to a wholly “supersensible” faculty within herself (Reason), so Žižek argues that the inability of subjects to explain the nature of what they believe in politically does not indicate any disloyalty or abnormality. What political ideologies do, precisely, is provide subjects with a way of seeing the world according to which such an inability can appear as testimony to how Transcendent or Great their Nation, God, Freedom, and so forth is—surely far above the ordinary or profane things of the world.

Lastly and somewhat related, going back to an older SSC post, Scott argues that he doesn’t know why his patients react well to him, but Zizek can explain that, and it has a lot of relevance for politics (transference is a complex topic, but the simple definition is a transfer of affect from the patient onto the therapist, which is often a desirable part of therapy, contrasted with counter-transference, in which the therapist’s affect is directed at the patient):

“The belief or “supposition” of the analysand in psychoanalysis is that the Other (his analyst) knows the meaning of his symptoms. This is obviously a false belief, at the start of the analytic process. But it is only through holding this false belief about the analyst that the work of analysis can proceed, and the transferential belief can become true (when the analyst does become able to interpret the symptoms). Žižek argues that this strange intersubjective or dialectical logic of belief in clinical psychoanalysis is also what characterizes peoples’ political beliefs…. the key political function of holders of public office is to occupy the place of what he calls, after Lacan, “the Other supposed to know.” Žižek cites the example of priests reciting mass in Latin before an uncomprehending laity, who believe that the priests know the meaning of the words, and for whom this is sufficient to keep the faith. Far from presenting an exception to the way political authority works, for Žižek this scenario reveals the universal rule of how political consensus is formed.”

Scott probably comes across as having a stable and highly knowledgeable affect, which gives his patients a sense of being in the presence of authority (as we likely also feel in these comment threads), which makes him better able to perform transference and thus help his patients (or readers) reshape their beliefs.

Hopefully this shallow dive was interesting and opens up new areas of potential study, and also a parallel frame: working from the top down, ethnographically (as tends to be popular in this community: the Archimedean standpoint), gives us a broad understanding, but working from the bottom up gives us a more personal and intimate sense of why the top-down view is correct.

This helped me understand Zizek and Lacan a lot better than reading a book on them did, so thanks for that.


Stucchio doesn’t like me dissing Dubai:

I’m just going to raise a discussion of one piece here:

“Dubai, whose position in the United Arab Emirates makes it a lot closer to this model than most places, seems to invest a lot in its citizens’ happiness, but also has an underclass of near-slave laborers without exit rights (their employers tend to seize their passports).”

I have probably read the same Western articles Scott has about all the labor the UAE and other Middle Eastern countries import. But unlike their authors, I live in India (one of the major sources of labor) and have mostly heard about this from people who chose to make the trip.

To me the biggest thing missing from these Western reporters’ accounts is the fact that the people shifting to the Gulf are ordinary humans, smarter than most journalists, and fully capable of making their own choices.

Here are things I’ve heard about it, roughly paraphrased:

“I knew they’d take my passport for 9 months while I paid for the trip over. After that I stuck around for 3 years because the money was good, particularly after I shifted jobs. It was sad only seeing my family over skype, but I brought home so much money it was worth it.”

“I took my family over and we stayed for 5 years; the money was good, we all finished the Hajj while we were there, but it was boring and I missed Maharashtrian food.”

“It sucked because the women are all locked up. You can’t talk to them at the mall. It’s as boring as everyone says and you can’t even watch internet porn. But the money is good.”

When I hear about this first hand, the stories don’t sound remotely like slave labor. It doesn’t even sound like “we were stuck in the GDR/Romania/etc” stories I’ve heard from professors born on the wrong side of the Iron Curtain. I hear stories of people making life choices to be bored and far from family in return for good money. Islam is a major secondary theme. So I don’t think the UAE is necessarily the exception Scott thinks it is.


Moridinamael on the StarCraft perspective:

In StarCraft 2, wild, unsound strategies may defeat poor opponents, but will be crushed by decent players who simply hew to strategies that fall within a valley of optimality. If there is a true optimal strategy, we don’t know what it is, but we do know what good, solid play looks like, and what it doesn’t look like. Tradition, that is to say, iterative competition, has carved a groove into the universe of playstyles, and it is almost impossible to outperform tradition.

Then you watch the highest-end professional players and see them sometimes doing absolutely hare-brained things that would only be contemplated by the rank novice, and you see those hare-brained things winning games. The best players are so good that they can leave behind the dogma of tradition. They simply understand the game in a way that you don’t. Sometimes a single innovative tactic debuted in a professional game will completely shift how the game is played for months, essentially carving a new path into what is considered the valley of optimality. Players can discover paths that are just better than tradition. And then, sometimes, somebody else figures out that the innovative strategy has an easily exploited Achilles’ heel, and the new tactic goes extinct as quickly as it became mainstream.

StarCraft 2 is fun to think about in this context because it is relatively open-ended, closer to reality than to chess. There are no equivalents to disruptor drops or mass infestor pushes or planetary fortress rushes in chess. StarCraft 2 is also fun to think about because we’ve now seen that machine learning can beat us at it by doing things outside of what we would call the valley of optimality.

But in this context it’s crucial to point out that the way AlphaStar developed its strategy looked more like gradually accrued “tradition” than like “rationalism”. A population of different agents played each other for a hundred subjective years. The winners replicated. This is memetic evolution through the Chestertonian tradition concept. The technique wouldn’t have worked without the powerful new learning algorithms, but the learning algorithm didn’t come up with the strategy of mass-producing probes and building mass blink-stalkers purely out of its fevered imagination. Rather, the learning algorithms were smart enough to notice what was working and what wasn’t, and to have some proximal conception as to why.
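
To make the loop concrete, here is a toy sketch of the play-select-replicate pattern the comment describes. This is not DeepMind’s actual AlphaStar league code; the one-number “strategies” and the noisy win rule are invented purely for illustration.

import random

def play(a, b):
    # Noisy head-to-head game: the higher strategy usually wins.
    return a if a + random.gauss(0, 0.1) > b + random.gauss(0, 0.1) else b

def mutate(strategy):
    # Replication with small random variation.
    return strategy + random.gauss(0, 0.05)

population = [random.random() for _ in range(100)]
for generation in range(1000):
    random.shuffle(population)
    # Pair agents off; winners survive and replicate, losers disappear.
    # Selection acts on results, not on how convincing a strategy sounds.
    winners = [play(a, b) for a, b in zip(population[::2], population[1::2])]
    population = winners + [mutate(w) for w in winners]

print(sum(population) / len(population))  # mean strategy drifts steadily upward

Nothing in the loop “understands” why a strategy wins; whatever understanding exists accretes in the population itself, which is the sense in which it resembles tradition.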

Someone (maybe Robin Hanson) treats all of history as just evolution evolving better evolutions. The worst evolution of all (random chance) created the first replicator and kicked off biological evolution. Biological evolution created brains, which use a sort of hill-climbing memetic evolution for good ideas. People with brains created cultures (cultural evolution) including free market economies (an evolutionary system that selects for successful technologies). AIs like AlphaStar are the next (final?) step in this process.


Book Review: Why Are The Prices So D*mn High?

Why have prices for services like health care and education risen so much over the past fifty years? When I looked into this in 2017, I couldn’t find a conclusive answer. Economists Alex Tabarrok and Eric Helland have written a new book on the topic, Why Are The Prices So D*mn High? (link goes to free pdf copy, or you can read Tabarrok’s summary on Marginal Revolution). They do find a conclusive answer: the Baumol effect.

T&H explain it like this:

In 1826, when Beethoven’s String Quartet No. 14 was first played, it took four people 40 minutes to produce a performance. In 2010, it still took four people 40 minutes to produce a performance. Stated differently, in the nearly 200 years between 1826 and 2010, there was no growth in string quartet labor productivity. In 1826 it took 2.66 labor hours to produce one unit of output, and it took 2.66 labor hours to produce one unit of output in 2010.

Fortunately, most other sectors of the economy have experienced substantial growth in labor productivity since 1826. We can measure growth in labor productivity in the economy as a whole by looking at the growth in real wages. In 1826 the average hourly wage for a production worker was $1.14. In 2010 the average hourly wage for a production worker was $26.44, approximately 23 times higher in real (inflation-adjusted) terms. Growth in average labor productivity has a surprising implication: it makes the output of slow productivity-growth sectors (relatively) more expensive. In 1826, the average wage of $1.14 meant that the 2.66 hours needed to produce a performance of Beethoven’s String Quartet No. 14 had an opportunity cost of just $3.02. At a wage of $26.44, the 2.66 hours of labor in music production had an opportunity cost of $70.33. Thus, in 2010 it was 23 times (70.33/3.02) more expensive to produce a performance of Beethoven’s String Quartet No. 14 than in 1826. In other words, one had to give up more other goods and services to produce a music performance in 2010 than one did in 1826. Why? Simply because in 2010, society was better at producing other goods and services than in 1826.

Put another way, a violinist can always choose to stop playing violin, retrain for a while, and work in a factory instead. Maybe in 1826, factory workers were earning $1.14/hour and violinists were earning $5/hour, so no violinists would quit and retrain. But by 2010, factory workers were earning $26.44/hour, so if violinists were still only earning $5 they might all quit and retrain. So in 2010, there would be strong pressure to increase violinists’ wages to at least $26.44 (probably more, since few people have the skills to be violinists). So violinists must be paid 5x more for the same work, which will look like concerts becoming more expensive.
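
The quoted arithmetic is easy to verify. A minimal recomputation in Python, using only the numbers T&H give:

# All numbers below are from the T&H passage quoted above.
hours_per_performance = 2.66   # four players times 40 minutes
wage_1826 = 1.14               # hourly production wage, 1826 (real dollars)
wage_2010 = 26.44              # hourly production wage, 2010

cost_1826 = hours_per_performance * wage_1826  # ~$3.03 (T&H report $3.02)
cost_2010 = hours_per_performance * wage_2010  # ~$70.33
print(cost_2010 / cost_1826)                   # ~23.2

The ratio is just wage_2010 / wage_1826: the quartet gets 23x more expensive precisely because everyone else’s time got 23x more valuable.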

This should happen in every industry where increasing technology does not increase productivity. Education and health care both qualify. Although we can imagine innovative online education models, in practice one teacher teaches about twenty to thirty kids per year regardless of our technology level. And although we can imagine innovative AI health care, in practice one doctor can only treat ten or twenty patients per day. Tabarrok and Helland say this is exactly what is happening. They point to a few lines of evidence.

First, costs have been increasing very consistently over a wide range of service industries. If it was just one industry, we could blame industry-specific factors. If it was just during one time period, we could blame some new policy or market change that happened during that time period. Instead it’s basically omnipresent. So it’s probably some kind of very broad secular trend. The Baumol effect would fit the bill; not much else would.

Second, costs seemed to increase most quickly during the ’60s and ’70s, and are increasing more slowly today. This fits the growth of productivity, the main driver of the Baumol effect. Between 1950 and 2010, the relative productivity of manufacturing compared to services increased by a factor of six, which T&H describe as “of the same order as the growth in relative prices”. This is what the violinist-vs-factory-worker model of the Baumol effect would predict.

Third, competing explanations don’t seem to work. Some people blame rising costs on “administrative bloat”. But administrative costs as a share of total college costs have stayed fixed at 16% from 1980 to today (really?! this is fascinating and surprising). Others blame rising costs on overregulation. But T&H have a measure for which industries have been getting more regulated recently, and it doesn’t really correlate with which industries have been getting more expensive (wait, did they just disprove that regulation hurts the economy? I guess regulation isn’t a random shock, so this isn’t proof, but it still seems like a big deal). They’re also able to knock down industry-specific explanations like medical malpractice suits, teachers unions, etc.

Fourth, although service quality has improved a little bit over the past few decades, T&H provide some evidence that this explains only a small fraction of the increase in costs. Yet education and health care remain as popular as (maybe more popular than) ever. They claim that very few things in economics can explain simultaneously increasing costs, increasing demand, and constant quality. One of those few things is the Baumol effect.

Fifth, they did a study, and the lower the productivity growth in an industry, the higher the rise in its costs, especially if it employs college-educated workers who could otherwise get jobs in higher-productivity industries. This is what the Baumol effect would predict (though framed that way, it also sounds kind of obvious).

I find their case pretty convincing. And I want to believe. If this is true, it’s the best thing I’ve heard all year. It restores my faith in humanity. Rising costs in every sector don’t necessarily mean our society is getting less efficient, or more vulnerable to rent-seeking, or less-well-governed, or greedier, or anything like that. It’s just a natural consequence of high economic growth. We can stop worrying that our civilization is in terminal decline, and just work on the practical issue of how to get costs down.

But I do have some gripes. T&H frequently compare apples and oranges; for example, the administrator share in colleges vs. the faculty share in K-12; it feels like they’re clumsily trying to get one past you. They frequently describe how if you just use eg teacher salaries as a predictor, you can perfectly predict the extent of rising costs. But as far as I can tell, most things have risen the same amount, so if you used any subcomponent as a predictor, you could perfectly predict the extent of rising costs; again, it feels like they’re clumsily trying to get something past me. I think I can work out what they were trying to do (stitch together different datasets to get a better picture, assume salaries rise equally in every category) but I still wish they had discussed their reasoning and its limitations more openly.

The main thesis survives these objections, but there are still a few things that bother me, or don’t quite fit. I want to bring them up not as a gotcha or refutation, but in the hopes that people who know more about economics than I do can explain why I shouldn’t worry about them.

First, real wages have not in fact gone up during most of this period. Factory workers are not getting paid more. That makes it hard for me to understand how rising wages for factory workers are forcing up salaries for violinists, teachers, and doctors.

I discuss whether issues like benefits and inflation can explain this away here, and conclude they can do so only partially; I’m not sure how this would interact with the Baumol effect.

Second, other data seem to dispute that salaries for the professionals in question have risen at all. T&H talk about rises in “instructional expenditures”, an education-statistics term that includes teacher salary and other costs; their source is NCES. But NCES also includes tables of actual teacher salaries. These show that teacher salaries today are only 6% higher than teacher salaries in 1970. Meanwhile, per-pupil costs are more than twice as high. How is an increase of 6% in teacher salaries driving an increase of 100%+ in costs? Likewise, although on page 33 T&H claim that doctors’ salaries have tripled since 1960, other sources report smaller increases, ranging from about 50% down to almost nothing. Conventional wisdom among doctors is that the profession used to be more lucrative than it is today. This makes it hard to see how rising doctor salaries could explain a tripling in the cost of health care. And doctor salaries apparently make up only 20% of health spending, so it’s hard to see how they can matter that much.

(also, this SMBC)
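
One way to sanity-check this objection: a component making up share s of total costs that rises by r contributes only about s × r to total cost growth. A sketch, where the teacher-salary share is my own illustrative guess rather than a figure from T&H:

def contribution(share, rise):
    # Fraction of total cost growth attributable to one component,
    # holding everything else fixed.
    return share * rise

# Teachers: even if salaries were 60% of per-pupil costs (an assumed
# share), a 6% salary rise explains only ~3.6 points of a 100%+ rise.
print(contribution(0.60, 0.06))  # 0.036

# Doctors: a 20% share of health spending tripling (a 200% rise) explains
# a 40-point rise, far short of an overall tripling of costs.
print(contribution(0.20, 2.00))  # 0.4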

Third, the Baumol effect can’t explain things getting less affordable. T&H write:

The cost disease is not a disease but a blessing. To be sure, it would be better if productivity increased in all industries, but that is just to say that more is better. There is nothing negative about productivity growth, even if it is unbalanced. In particular, it is important to see that the increase in the relative price of the string quartet makes string quartets costlier but not less affordable. Society can afford just as many string quartets as in the past. Indeed, it can afford more because the increase in productivity in other sectors has made society richer. Individuals might not choose to buy more, but that is a choice, not a constraint forced upon them by circumstance.

This matches my understanding of the Baumol effect. But it doesn’t match my perception of how things are going in the real world. College has actually become less affordable. Using these numbers: in 1971, the average man would have had to work five months to earn a year’s tuition at a private college. In 2016, he would have had to work fourteen months. To put this in perspective, my uncle worked a summer job to pay for his college tuition; one summer of working = one year tuition at an Ivy League school. Student debt has increased 700% since 1990. College really does seem to be getting less affordable. So do health care, primary education, and all the other areas affected by cost disease. Baumol effects shouldn’t be able to do this, unless I am really confused about them.

If someone can answer these questions and remove my lingering doubts about the Baumol effect as an explanation for cost disease, they can share credit with Tabarrok and Helland for restoring a big part of my faith in modern civilization.

Addendum To “Enormous Nutshell”: Competing Selectors

[Previously in sequence: Epistemic Learned Helplessness, Book Review: The Secret Of Our Success, List Of Passages I Highlighted In My Copy Of The Secret Of Our Success, Asymmetric Weapons Gone Bad]

When I wrote Reactionary Philosophy In An Enormous Planet-Sized Nutshell, my attempt to explain reactionary philosophy, many people complained that it missed the key insight. At the time I had an excuse: I didn’t get the key insight. Now I think I might understand it and have the vocabulary to explain, so I want to belatedly add it in.

The whole thing revolves around this rather dubious redefinition:

RIGHT-WING: Policies and systems selected by cultural evolution
LEFT-WING: Policies and systems selected by the marketplace of ideas

The second line is ambiguous: which marketplace of ideas, exactly? Maybe better than “the marketplace of ideas” would be “memetic evolution”. Policies and systems that are so catchy and convincing that lots of people believe in them and want to fight for them.

Under this definition, lots of conventionally right-wing movements get defined as left-wing. For example, Nazism and Trumpism both arose after a charismatic leader convinced the populace to implement them. They won because people liked them more than the alternatives. But “left-wing” is not equivalent to “populist”. An idea that spreads by convincing intellectuals and building an academic consensus around itself is still left-wing, because it relies on convincing people. Even ideas like neoliberalism and technocracy are left-wing ideas, if they sound good to intellectuals and they spread by convincing those intellectuals.

Does this mean that in this model, fascism, communism, and liberalism are all left-wing ideas? Yes. Most democracies can be expected to have mostly (entirely?) left-wing parties, since the whole point of being a party in a democracy is that you have to convince voters of things and win their approval. It’s not impossible to imagine a successful right-wing party in a democracy – it would revolve around preserving tradition, and if respect for tradition was strong enough, it might temporarily win. But it’s not a very stable situation.

What prevents every democracy from instantly becoming maximally left-wing? First, cultural evolution has built itself an immune system in the form of traditions and illegible preferences for certain ideas. Second, cultural evolution is still at work. If incumbents pursue some popular policy that ends up bankrupting their city, or causing crime rates to increase 1000%, or something like that, they will end up humiliated, and people will probably vote them out of office. Incumbents know this, and so put some self-interested effort into rejecting these policies even if they are very popular and convincing.

(I think in this model, greed / special interests / NIMBYism are all special cases of convincingness. If an idea is in my self-interest, it will be very convincing to me; if I am powerful enough to sabotage the system or force things through it, the idea will have won through its convincingness.)

The reactionaries start with the assumption that some problems are asymmetric in the wrong direction. The correct idea sounds unconvincing; wrong ideas spread like wildfire and naturally win debates. I talked about two examples of this yesterday: Congressional salaries and early 20th century Communism. Most questions probably aren’t like this – “don’t nuke the ocean for no reason” is both convincing-sounding and adaptive. But where they diverge, you want to develop a system capable of implementing the right-wing answer even though there will be intense pressure from activists and the masses to implement the left-wing one.

What would a country capable of doing this look like? It would have to be a place where convincing-sounding ideas were incapable of spreading and taking over. That would mean that the beliefs of the populace would be completely irrelevant to what policies got enacted. So it couldn’t be a democracy. But it also couldn’t be an ordinary dictatorship. Churchill tells us that “dictators ride on tigers from which they dare not dismount” – they have to constantly maintain the support of the army and elites in order to avoid being deposed, and that involves doing things that sound good (at least to the army and elites) and are easy to justify (again, to them). You would need an implausibly strong dictatorship in order to resist the pressure to do whatever is easiest to justify, and so to escape being left-wing.

But even this would not be right-wing. Whatever convincing ideology has won the approval of the populace might also win the approval of the dictator, who would then do it because he wants to. Also, the dictator might be an idiot, or insane, and do bad policy for reasons other than because he is under the spell of some convincing-but-wrong idea.

The reactionaries believe there is no way to guarantee a country works well. But there is a way to guarantee that a collection of countries works well, which is to create a system conducive to cultural evolution. Have a bunch of small countries, each of which is ruled by an absolute dictator. In some of them, the dictator will pursue good policy, people and investment will flow in, and those countries will flourish. In others, the dictator will pursue bad policy, and those countries will either collapse, or do the smart thing and adopt the behavior of flourishing countries.

The argument isn’t that dictators are naturally smarter than the masses. The argument is that the dictators will be a high-variance group. Some of them will probably be stupid. But get enough countries like this, and at least one of them will have a dictator who really is cleverer than the masses. That country will succeed beyond what a left-wing country yoked to the most convincing-sounding idea would be capable of. Then other countries will copy its success or be left behind.

(are we sure dictatorships are higher variance than democracies? I think it makes intuitive sense that a single individual would be higher-variance than the average of a crowd. Also, democracies can be expected to develop activists and journalists who will intensify memetic selection and force convergence on the most memetically fit policy. If the democracies are culturally different, the most memetically fit policy might be different for each. But these cultural differences are themselves products of cultural evolution and could be expected to erode under enough pressure.)
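
The variance intuition is easy to test with a toy simulation. The normal distributions below are made up purely for illustration; the point is only that the best of N high-variance draws lands far above the average of a crowd:

import random

random.seed(0)

def best_dictator(n_countries, sigma):
    # Competence of the best dictator among n independent high-variance draws.
    return max(random.gauss(0, sigma) for _ in range(n_countries))

def crowd_average(crowd_size, sigma):
    # Averaging a crowd cancels out variance, for better and for worse.
    return sum(random.gauss(0, sigma) for _ in range(crowd_size)) / crowd_size

print(best_dictator(50, sigma=2.0))      # typically 4 to 5: one lucky outlier
print(crowd_average(10_000, sigma=2.0))  # hugs 0: no outliers either way

In the reactionary model, cultural evolution then copies the outlier, which is something averaging can never do.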

There’s a clear analogy to business. Hundreds of entrepreneurs try to start their own companies. Many are idiots and fail immediately. But one of them is Jeff Bezos, who is very good at his job. His company makes the right decisions and ends up dominating the entire market. “The best practices spread everywhere” is the desired outcome; cultural evolution has succeeded. Abstracting away potential venture capitalist involvement, none of this requires Jeff Bezos’ business plan to sound convincing to a third party; memetic selection is not involved.

(if business worked like politics, each of those hundreds of e-commerce entrepreneurs would go before a panel of voters and explain why their ideas were the best; whoever sounded most convincing would win. I see no reason to believe Jeff Bezos is especially good at convincing people of things. Honestly, “first we make a mail order bookstore, then we conquer the world” sounds like a pretty dumb business plan.)

Henrich summarizes the political implications of The Secret Of Our Success as:

Humans are bad at intentionally designing effective institutions and organizations, though I’m hoping that as we get deeper insights into human nature and cultural evolution this can improve. Until then, we should take a page from cultural evolution’s playbook and design “variation and selection systems” that will allow alternative institutions or organizational forms to compete. We can dump the losers, keep the winners, and hopefully gain some general insights during the process.

The reactionary model of government is an attempt to cache out Henrich’s “variation and selection system”, and shares its advantages. But what’s the case against it?

First, turning the world into a patchwork of thousands of implausibly strong dictatorships sounds about as hard as starting a global communist revolution or implementing any other fundamental change to the system of the world.

Second, cultural evolution at the international level may not work quickly enough to be at all useful or humane. Plausibly World War II provided one bit of cultural-evolution data (“fascism is worse than liberalism”). The Cold War provided a second bit (“communism is also worse than liberalism”). Both bits are appreciated, but 50 million deaths per bit is a pretty high price. If the world were a patchwork of tiny dictatorships, there would probably be a lot of war and genocide and oppression before we learned anything.

Third, we have to hope that cultural evolution would be selecting for the happiest and most prosperous countries. There’s a case that it would, if everyone has exit rights and can vote with their feet for countries they like better. But there’s also a risk it selects for military might, or that exit rights don’t happen. Dubai, whose position in the United Arab Emirates makes it a lot closer to this model than most places, seems to invest a lot in its citizens’ happiness, but also has an underclass of near-slave laborers without exit rights (their employers tend to seize their passports). Also, a lot of industries have pretty bad conditions for their employees, even though those employees have exit rights to go to different companies. I don’t really understand why this happens, but it sounds like the sort of thing that could happen in a patchwork of small dictatorships too.

Finally, and appropriately for a system that loathes convincingness, the branding is terrible. Using “right” and “left” for the two sides was a bad decision. Absent that decision, I don’t think there’s anything necessarily rightist about it. Certainly it exemplifies leftist virtues like localism and diversity; certainly it gets points for identifying Nazism and Trumpism as bad and proposing a way to stop them. Certainly it should be tempting for communists who have realized they’re not going to get a revolution in western countries any time soon but still want a chance to prove their ideas can work. I think this bad branding decision caused a downstream cascade of awfulness, leading to reaction attracting a lot of very edgy people who liked the idea of being “maximally rightist”. Some of these people later became alt-right or Trump supporters, the media caught on, and the idea ended up discredited for totally contingent reasons.

Also on the subject of bad branding, it was an unforced error to focus on kings. The theory is pointing at something like Singapore, Dubai, or charter cities (but also utopian communes, and monasteries, and…) Medieval kings aren’t just a couple of centuries out of date, they’re also bad examples: most of them had very limited power to go against what nobles wanted. They probably stuck to cultural evolution rather than memetic evolution just because that was how things worked in the Middle Ages before the printing press, but they don’t seem to have had a coherent theory of this.

Despite these flaws, I find myself thinking about this more and more. Cultural evolution may be moving along as lazily as always, but memetic evolution gets faster and faster. Clickbait news sites increase the intensity of selection to tropical-rainforest-like levels. What survives turns out to be conspiracy-laden nationalism and conspiracy-laden socialism. The rise of Trump was really bad, and I don’t think it could have happened just ten or twenty years ago. Some sort of culturally-evolved immune system (“basic decency”) would have prevented it. Now the power of convincing-sounding ideas to spread through and energize the populace has overwhelmed what that kind of immunity can deal with.

We should try to raise the sanity waterline – make true things more convincing than false things. But at the same time, we may also want to try to understand the role of cultural evolution as a counterweight to memetic evolution, and have ideas for how to increase that role in case of emergency.


Asymmetric Weapons Gone Bad

[Previously in sequence: Epistemic Learned Helplessness, Book Review: The Secret Of Our Success, List Of Passages I Highlighted In My Copy Of The Secret Of Our Success. Deleted a controversial section which I still think was probably correct, but which given the number of objections wasn’t provably correct enough to be worth including. I might write another post giving my evidence for it later, but it probably shouldn’t be dropped in here without justification.]

I.

Years ago, I wrote about symmetric vs. asymmetric weapons.

A symmetric weapon is one that works just as well for the bad guys as for the good guys. For example, violence – your morality doesn’t determine how hard you can punch; they can buy guns from the same places we can.

An asymmetric weapon is one that works better for the good guys than the bad guys. The example I gave was Reason. If everyone tries to solve their problems through figuring out what the right thing to do is, the good guys (who are right) will have an easier time proving themselves to be right than the bad guys (who are wrong). Finding and using asymmetric weapons is the only non-coincidence way to make sustained moral progress.

The parts of The Secret Of Our Success that deal with reason vs. cultural evolution raise a disturbing prospect: what if sometimes, the asymmetry is in the wrong direction? What if there are some issues where rational debate inherently leads you astray?

II.

Maybe with an unlimited amount of resources, our investigations would naturally converge onto the truth. Given infinite intelligence, wisdom, impartiality, education, domain knowledge, evidence to study, experiments to perform, and time to think it over, we would figure everything out.

But just because infinite resources will produce truth doesn’t mean that truth as a function of resources has to be monotonic. Maybe there are some parts of the resources-vs-truth curve where increasing effort leads you in the wrong direction.

When I was fifteen, I thought minimum wages obviously helped poor people. They needed money; minimum wages gave them money, case closed.

When I was twenty, and a little wiser, I thought minimum wages were obviously bad for the poor. Econ 101 tells us minimum wages kill jobs and cause deadweight loss, with poor people most affected. Case closed.

When I was twenty-five, and wiser still, I thought minimum wages were probably good again. I’d read a couple of studies showing that maybe they didn’t cause job loss, in which case they’re back to just giving poor people more money.

When I was thirty, I was hopelessly confused. I knew there was a meta-analysis of 64 studies that showed no negative effects from minimum wages, and a systematic review of 100+ studies that showed strong negative effects from minimum wages. I knew a survey of economists found almost 80% thought minimum wages were good, but that a different survey of economists found 73% thought minimum wages were bad.

We can graph my life progress like this: [graph: my opinion of minimum wages flipping between “good for the poor” and “bad for the poor” with each new level of understanding]

This partly reflects my own personal life course, which arguments I heard first, and how I personally process evidence.

But another part of it might just be inherent to the territory. That is, there are some arguments that are easy to understand, and other arguments that are harder to understand. If the easy arguments lean predominantly one way, and the hard arguments lean predominantly the other way, then it will be natural for any well-intentioned person studying a topic to follow a certain pattern of switching their opinion a few times before getting to the truth.

Some hard questions might be epistemic traps – problems where the more you study them, the wronger you get, up to some inflection point that might be further than anybody has ever studied them before.

III.

We’ll get to vast social conflicts eventually, but I want to start with boring things in everyday life.

I hate calling people on phones. I can’t really explain this. I’m okay with emailing them. I’m okay talking to them in person. But I hate calling them on phones.

When I was younger, I would go to great lengths to avoid calling people on phones. My parents would point out that this was dumb, and ask me to justify it. I couldn’t. They would tell me I was being silly. So I would call people on phones and hate it. Now I don’t live with my parents, nobody can make me do things, and so I am back to avoiding phone calls.

My parents weren’t authoritarian. They weren’t demanding I make phone calls because That Is The Way We Do Things In This House. They were doing the supposedly-correct thing, using rational argument to make me admit my aversion to phone calls was totally unjustified, and that making phone calls had many tangible benefits, and then telling me I should probably make the call, shouldn’t I? Yet somehow this ended up making my life worse.

Or: I can’t do complicated intellectual work with another person in the room. I just can’t. You can give me good reasons why I’m wrong about this: maybe the other person won’t make any noise. Maybe I can just turn the other way and focus on my computer and I won’t ever have to notice the other person’s presence at all. Argue this with me enough, and I will lose the argument, and work in the same room as you. I won’t get any good work done, and I’ll end up spending most of the time hating you and wishing you would go away.

I try to be very careful with my patients, so that I don’t make their lives worse in the same way. It’s often easy to get patients to admit they don’t have a good reason for what they’re doing; for example, autistic people usually can’t explain why they “stim”, ie make unusual flapping movements. These movements are distracting and probably creep out the people around them. It’s very easy to argue an autistic person into admitting that stimming is a net negative for them. Yet somehow autistic people always end up hating the psychiatrists who win this argument, and going somewhere far away from them so they can stim in peace.

Every day we do things that we can’t easily justify. If someone were to argue that we shouldn’t do the thing, they would win easily. We would respond by cutting that person out of our life, and continuing to do the thing.

I hope at least one of the examples above rang true for most readers. If not – if you don’t hate phones, or have trouble working near others, or stim – and if you’re thinking “All of those things really do seem irrational, you’re probably just wrong if you want to protect them against Reason” – here are some potential alternative intuition pumps:

1. Guys – do you have trouble asking girls out? Why? The worst that can happen is they’ll say no, right?

2. Girls – do you sometimes get upset and flustered when a guy you don’t like asks you out, even in a situation where you don’t fear any violence or coercion from the other person? Do you sometimes agree to things you don’t want because you feel pressured? Why? All you have to do is say “I’m flattered, but no thanks”.

3. Do you diet and exercise as much as you should? Why not? Obviously this will make you healthier and feel better! Why don’t you buy a gym membership right now? Are you just being lazy?

I don’t mean to say these questions are Profound Mysteries that nobody can possibly answer. I think there are good answers to all of them – for example, there are some neurological theories that offer a pretty good explanation of how stimming helps autistic people feel better. But I do want to claim that most of the people in these situations don’t know the explanations, and that it’s unreasonable to expect them to. All of these actions and concerns are “illegible” in the Seeing Like A State sense.

Illegibility is complicated and context-dependent. Fetishes are pretty illegible, but because we have a shared idea of a fetish, because most people have fetishes, and because even the people who don’t have fetishes have the weird-if-you-think-about-it habit of being sexually attracted to other human beings – people can just say “That’s my fetish” and it becomes kind of legible. We don’t question it. And there are all sorts of phrases like “I don’t like it”, or “It’s a free country” or “Because it makes me happy” that sort of relieve us of the difficult work of maintaining legibility for all of our decisions.

This system works so well that it only breaks down when very different people try to communicate across a fundamental gap. For example, since allistic people may not feel any urge to stim or do anything like stimming, its illegibility suddenly becomes a problem, and they try to argue autistic people out of it. The worst failure mode is where illegible actions by an outgroup are naturally rounded off to “they are evil and just hiding it”. I remember feeling pretty bad once after hearing a feminist explain that the only reason men stared at attractive women was to intimidate them, make them feel like their body existed for other people’s pleasure, and cement male privilege. I myself sometimes stared at attractive women, and I couldn’t verbalize a coherent reason – was I just trying to hurt and intimidate them? I think a real answer to this question would involve the way we process salience – we naturally stare at the most salient part of a scene, and an attractive person will naturally be salient to us. But this was beyond teenaged me’s ability to come up with, so I ended up feeling bad and guilty.

If you force people to legibly interpret everything they do, or else stop doing it under threat of being called lazy or evil, you make their life harder and probably just end up with them avoiding you.

IV.

Different problems come up when we talk about societies trying to reason collectively. We would like to think that the more investigation and debate our society sinks into a question, the more likely we are to get the right answer. But there are also times when we do 450 studies on something and end up more wrong than when we started.

A very boring, trivial example of this: I think we should increase salaries for Congress, Cabinet Secretaries, and other high officials. There are so few of these that it would be very cheap: quintupling every Representative, Senator, and Cabinet Secretary’s salary to $1 million/year would involve raising taxes by only $2 per person. And if it attracted even a slightly better caliber of candidate – the type who made even 1% better decisions on the trillion-dollar questions such leaders face – it would pay for itself hundreds of times over. Or if it prevented just a tiny bit of corruption – an already rich Defense Secretary deciding from his gold-plated mansion that there was no point in going for a “consulting job” with a substandard defense contractor – again, hundreds of times over. This isn’t just me being an elitist shill: even Alexandria Ocasio-Cortez agrees with me here. This is as close to a no-brainer as policies come.
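
The “$2 per person” figure survives a back-of-the-envelope check. Here is the arithmetic, using the $170,000 salary figure cited in the next paragraph and my own rough headcounts:

officials = 535 + 15                  # Congress plus roughly the Cabinet
current_salary = 170_000              # approximate congressional salary
raise_per_official = 1_000_000 - current_salary
total_cost = officials * raise_per_official
print(total_cost)                     # ~$457 million
print(total_cost / 330_000_000)       # ~$1.38 per American, under $2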

But I think I would be demolished if I tried to argue for this on Twitter, or on daytime TV, or anywhere else that promotes a cutthroat culture of “dunking” on people with the wrong opinions. It’s so much faster, easier, and punchier to say “poor single mothers are starving on minimum wage, and you think the most important problem is taking money away from them to make our millionaires even richer?” and just drown me out with cries of “elitist shill, elitist shill” every time I try to give the explanation above. Sure enough, the AOC article above notes that although Americans underestimate the amount Congressmen get paid (they think only $120,000, way less than the real number of $170,000), most of them believe they should be paid less, with only 17% saying they should keep getting what they already have, and only 9% agreeing they should get more.

This is a different problem than the one above – the policy isn’t illegible to the people trying to defend it, but the communication methods are low-bandwidth enough that the most legible side naturally wins. That Congressmen are even able to maintain their current salary is partly due to them being insulated from debate: the issue never really comes up, so the consensus in favor of cutting their pay doesn’t really matter.

And yeah, I know, Popular Opinion Sometimes Wrong, More At 11. But this seems like a trivial but real society-wide case of the epistemic traps above, where if you increase one resource (amount an issue is debated) without increasing other resources (intelligence and rationality of the participants, the amount of time and careful thought they are willing to put in) you get further away from truth.

V.

Are there any less trivial examples? What about turn-of-the-20th-century socialism?

I was shocked to learn how strong a pro-socialism consensus existed during this period among top intellectuals. Socialist leader Edward Pease described the landscape pretty well:

Socialism succeeds because it is common sense. The anarchy of individual production is already an anachronism. The control of the community over itself extends every day. We demand order, method, regularity, design; the accidents of sickness and misfortune, of old age and bereavement, must be prevented if possible, and if not, mitigated. Of this principle the public is already convinced: it is merely a question of working out the details. But order and forethought is wanted for industry as well as for human life. Competition is bad, and in most respects private monopoly is worse. No one now seriously defends the system of rival traders with their crowds of commercial travellers: of rival tradesmen with their innumerable deliveries in each street; and yet no one advocates the capitalist alternative, the great trust, often concealed and insidious, which monopolises oil or tobacco or diamonds, and makes huge profits for a fortunate few out of the helplessness of the unorganised consumers.

Why shouldn’t people have thought this? The period featured sweatshop-like working conditions alongside criminally rich nobility with no sign that this state of affairs could ever change under capitalism. Top economists, up until the 1950s, almost unanimously agreed that socialism would help the economy, since central planners could coordinate ways to become more efficient. The first good arguments against this proposition, those of Hayek and von Mises, were a quarter-century in the future. Communism seemed perfectly straightforward and unlikely to go wrong; the first hint that it “might not work in real life” would have to wait for the Bolshevik Revolution. Pease writes that the main pro-capitalism argument during his own time was the Malthusian position that if the poor got more money, they would keep breeding until the Earth was overwhelmed by overpopulation; even in his own time, demographers knew this wasn’t true. The imbalance in favor of pro-communist arguments over pro-capitalist ones was overwhelming.

Don’t trust me on this. Trust all the turn-of-the-20th-century intellectuals who flocked towards socialism. In the Britain of the time, the smarter you were, and the more social science and economics you knew, the more likely you were to be a socialist, with only a few exceptions.

But turn-of-the-century Britain never went communist. Why not?

One school of thought says it’s because rich people had too much power. Even though the intellectuals all supported communism, nobody wanted to start a violent revolution, because they expected the rich to win and punish them.

But another school of thought says that cultural evolution created both capitalism, and an immune system to defend capitalism. This is more complicated, and requires a lot of the previous discussion here before it makes sense. But it seems to match some of what was going on. Society didn’t look like everyone wanting to revolt but being afraid of the rich. It looked like large parts of the poor and middle class being very anti-communist for kind of illegible reasons like “king” and “country” and “God” and “tradition” or “just because”.

In retrospect, these illegible reasons were right. It’s hard to tell if they were right by coincidence, or because cultural evolution is smarter than we are, drags us into whatever decision it makes, and then creates illegible reasons to prop itself up.

Empirically, as people started devoting more intellectual resources to the problem of whether Britain should be communist or not – as very intelligent and well-educated people started thinking about the problem using the most modern ideas of science and rationality, and challenged all of their preconceived notions to see which ones would stand up to Reason and which ones wouldn’t – they got further from the truth.

(I’m assuming that you, the reader, aren’t communist. If you are, think up another example, I guess.)

There is a level of understanding that lets you realize communism is a bad idea. But you need a lot of economic theory and a lot of retrospective historical knowledge the early-20th-century British didn’t have. There’s some part in the resources-vs-truth graph, where you’re smart enough to know what communism is but not smart enough to have good arguments against it – where the more intellect you apply the further from truth it takes you.

VI.

Obviously this ends with everyone agreeing to think very hard about things, carefully notice which traditions have illegible justifications, and then only throw out the traditions that are legitimately stupid and exist for no reason. What other position could we come to? You wouldn’t say “Don’t bother being careful, nothing is ever illegible”. But you also can’t say “Okay, we will never change anything ever again”. You just give the maximally-weaselly answer of “We’ll be sure to think about it first.”

But somebody made a good point on the last comments thread. We are the heirs to a five-hundred-year-old tradition of questioning traditions and demanding rational justifications for things. Armed with this tradition, western civilization has conquered the world and landed on the moon. If there were ever any tradition that has received cultural evolution’s stamp of approval, it would be this one.

So is there anything at all we should learn from all of this? If I had to cache out “think very hard about things” more carefully, maybe it would look like this:

1. The original Chesterton’s Fence: try to understand traditions before jettisoning them.

2. If someone does something weird but can’t explain why, accept them as long as they’re not hurting anyone else (and don’t make up stupid excuses for why their actions really hurt all of us). Be less quick to jump to “actually they are doing it out of Inherent Evil” as an explanation.

3. As per the last Henrich quote here, make use of the “laboratories of democracy” idea. Try things on a small scale in limited areas before trying them at larger scale; let different polities compete and see what happens.

4. Have less intense competitive pressure in the marketplace of ideas. Kuhn touches on how heliocentric theory had less explanatory power than geocentric theory for a while, but was tolerated anyway long enough that it was eventually able to sort itself out and become better. If good ideas are sometimes at a disadvantage in defending themselves, leave unpopular opinions alone for a while to see if they eventually become more legible. I think this might look like just being kinder and more tolerant of weirdness.

5. If someone defends a tradition that seems completely wrong and repulsive to you, try to be understanding of them even if you are right and the tradition is wrong. Traditions spent a long time evolving to be as sticky as possible in the face of contrary evidence, humans spent a long time evolving to stick to traditions as much as possible in the face of contrary evidence, and this evolution was beneficial through most of history. This sort of pressure is as hard to break (and probably as genetically-loaded) as other now-obsolete evolutionary urges like the one to binge on as much calorie-dense food as possible when it’s available (related).

6. Having done all that, and working as gingerly and gradually as you can, you should still try to improve on traditions that seem obsolete or improvable.

7. Cultural evolution does not provide evidence that traditions are ethical. Like biological evolution, cultural evolution didn’t even try to create ethical systems. It tried to create systems that were good at spreading. Plausibly many cultures converged on eating meat because it was a good source of calories and nutrients. But if you think it violates animals’ rights, cultural evolution shouldn’t convince you otherwise – there’s no reason cultural evolution should price animal suffering into its calculations. (related).

Finally: some people have interpreted this series of posts as a renunciation of rationality, or an admission that rationality is bad. It isn’t. Rationality isn’t (or shouldn’t be) the demand that every opinion be legible and we throw out cultural evolution. Rationality is the art of reasoning correctly. I don’t know what the optimal balance between what-seems-right-to-us vs. tradition should be. But whatever balance we decide on, better correlating “what seems right to us” with “what is actually true” will lead to better results. If we’re currently abysmal at this task, that only adds urgency to figuring out where we keep going wrong and how we might go less wrong, both as individuals and as a community.