Highlights From The Comments On Cultural Evolution

Peter Gerdes says:

As the example of the Nicaraguan deaf children left on their own to develop their own language demonstrates (as do other examples), we create languages very, very quickly in a social environment.

Creating conlangs is hard not because creating language is fundamentally hard but because we are bad at top-down modelling of processes that are the result of a bunch of tiny modifications over time. The distinctive features of language require both that it be used frequently for practical purposes (this makes sure that the language has efficient shortcuts, jettisons clunky overengineered rules, etc.) and that it be buffeted by the whims of many individuals with varying interests and focuses.

This is a good point, though it kind of equivocates on the meaning of “hard” (if we can’t consciously do something, does that make it “hard” even if in some situations it would happen naturally?).

I don’t know how much of this to credit to a “language instinct” that puts all the difficulty of language “under the hood”, vs. inventing language not really being that hard once you have general-purpose reasoning. I’m sure real linguists have an answer to this. See also Tracy Canfield’s comments (1, 2) on the specifics of sign languages and creoles.


The Secret Of Our Success described how human culture, especially tool-making ability, allowed us to lose some adaptations we no longer needed. One of those was strength; we are much weaker than the other great apes. Hackworth provides an intuitive demonstration of this: hairless chimpanzees are buff.


Reasoner defines “Chesterton’s meta-fence” as:

in our current system (democratic market economies with large governments) the common practice of taking down Chesterton fences is a process which seems well established and has a decent track record, and should not be unduly interfered with (unless you fully understand it)

And citizencokane adds:

Indeed: if there is a takeaway from Scott’s post, it is that one way to ensure survival is high-fidelity adherence to traditions + ensuring that the inherited ancestral environment/context is more or less maintained. Adhering to ancient traditions when the context is rapidly changing is a recipe for disaster. No point in mastering seal-hunting if there ain’t no more seals. No point in mastering the manners of being a courtier if there ain’t no more royal court. Etc.

And the problem is that, in the modern world, we can’t simply all mutually agree to stop changing our context so that our traditions will continue to function as before, because it is no longer under our control. I’m not just talking about climate change; I’m talking even more so about the power of capital, an incentive structure that escapes all conscious human manipulation or control, and which more and more takes the appearance of an exogenous force, remaking the world “in its own image,” turning “all that is solid into air,” and compelling all societies, upon pain of extinction, to keep up with its rapid changes in context. This is why every true traditionalist must be, at heart, an anti-capitalist…if they truly understand capitalism.

Which societies had more success in the 18th and 19th centuries in the context of this new force, capital? Those who held rigidly to traditions (like Qing China), or those who tolerated or even encouraged experimentation? Enlightenment ideas would not have been nearly so persuasive if they hadn’t had the prestige of giving countries like the Netherlands, England, France, and America an edge. Even countries that were not on the leading edge of the Enlightenment, and who only grudgingly and half-heartedly compromised with it, like Germany, Austria, and (to some extent) Japan, did better than those who held onto traditions even longer, like the Ottoman Empire, Russia, or China.

In particular, you can’t fault Russia or China for being even more experimental in the 20th century (Marxism, communism, etc.) if you realize that this was an understandable reaction to being visibly not experimental enough in the 19th century.

And Furslid continues:

I think an important piece of this, which I hope Scott will get to in later points, is to be less confident in our new culture. It makes sense to doubt whether our old culture applies. However, it is also incredibly unlikely that we have an optimized new culture yet.

We should be less confident that our new culture is right for new situations than that the old culture was right for old situations. This means we should be more accepting of people tweaking the new culture. We should also enforce it less strongly.


Quixote describes a transitional step in the evolution of manioc/cassava cultivation:

Also, based on a recent conversation (unrelated to this post, actually) that I had with one of my coworkers from central east Africa, I’m not sure that he would agree with the book’s characterization of African adaptation to cassava. He would probably point out that

– Everyone in [African country] knows cassava can make you sick, that’s why you don’t plant it anywhere that children or the goats will eat it.

– In general you want to plant cassava in swampy areas that you were going to fence off anyway.

– You mostly let the cassava do its thing and only harvest it to use as your main food during times of famine/drought when your better crops aren’t producing.

It seems like those cultural adaptations probably cover most/much of the problem with cassava.


ahasvers:

There is a very nice experimental demonstration in this article (just saw the work presented at a workshop), where they get people to come as successive “generations” and improve on a simple physical system.

Causal understanding is not necessary for the improvement of culturally evolving technology

The design does improve over generations, no thanks to anyone’s intelligence. They get both physics/engineering students and other students, with no difference at all. In one variant, they allow people to leave a small message to the next generation to transmit their theory on what works/doesn’t, and that doesn’t help, or makes things worse (by limiting the dimensions along which next generations will explore).


A few people including snmlp question the claim that aboriginal Tasmanians lost fire. See this and this paper for the status of the archaeological evidence.


Decius Brutus:

Five hundred years hence, is someone going to analyze the college education system and point out that the wasted effort and time that we all can see produced some benefit akin to preventing chronic cyanide poisoning? Are they going to be able to do the same with other complex wasteful rituals, like primary elections and medical billing? Or do humans create lots of random wasteful rituals and occasionally hit upon one that removes poison from food, and then every group that doesn’t follow the one that removes poison from food dies while the harmless ones that just paint doors with blood continue?

I actually seriously worry about the college one. Like, say what you want about our college system, but it has some surprising advantages: somehow billions of dollars go to basic scientific research (not all of them from the government), it’s relatively hard for even the most powerful special interests to completely hijack a scientific field (eg there’s no easy way for Exxon to take over climatology), and some scientists can consistently resist social pressure (for example, all the scientists who keep showing things like that genetics matters, or IQ tests work, or stereotype threat research doesn’t replicate). While obviously there’s still a lot of social desirability bias, it’s amazing that researchers can stand up to it at all. I don’t know how much of this depends on the academic status ladder being so perverse and illegible that nobody can really hack it, or whether that would survive apparently-reasonable college reform.

Likewise, a lot of doctors just have no incentives. They don’t have an incentive to overtreat you, or to undertreat you, or to see you more often than you need to be seen, or to see you less often than you need to be seen (this isn’t denying that some doctors in some parts of the health system do have these pressures). I actually don’t know whether my clinic would make more or less money if I fudged things to see my patients more often, and nobody has bothered to tell me. This is really impressive. Exposing the health system to market pressures would solve a lot of inefficiencies, but I don’t know if it would make medical care too vulnerable to doctors’ self-interest and destroy some necessary doctor-patient trust.


Lasagna:

I’ve got two young kids of my own. One puts everything in his mouth, the other less so, and neither evinced anything resembling what I’m reading in Section III. We spent this past Sunday trying to teach my youngest not to eat the lawn, and my oldest liked to shove ant hills and ants into his mouth around that age. Yeah, sure, anecdotal, but a “natural aversion among infants to eating plants until they see mommy eating them, and after that they can and do identify that particular plant themselves and will eat it” seems like a remarkable ability that SOMEONE would have noticed before this study. I’ve never heard anyone mention it.

I don’t think I’m weakmanning the book; it’s just that this is the only aspect discussed in Scott’s review that I have direct experience with, and my direct experience conflicts with the author’s conclusions. It’s a Gell-Mann amnesia thing, and makes me suspicious of the otherwise exciting ideas here. Like: does anyone here have any direct knowledge of manioc harvesting and processing, or the Tukanoan culture? How accurate is the book?

I checked with the mother of the local two-year-old; she says he also put random plants in his mouth from a young age. Suspicious!


John Schilling:

I think this one greatly overstates its thesis. Inventiveness without the ability to transmit inventions to future generations is of small value; you can’t invent the full set of survival techniques necessary for e.g. the high arctic in a single generation of extreme cleverness. At best you can make yourself a slightly more effective ape. But cultural transmission of inventions without the ability to invent is of exactly zero value. It takes both. And since being a slightly more effective ape is still better than being an ordinary ape, culture is slightly less than 50% of the secret of our success.

That said, the useful insight is that the knowledge we need to thrive is vastly greater than the knowledge we can reasonably deduce from first principles and observation. And what is really critical, this holds true even if you are in a library. You need to accept “X is true because a trusted authority told me so; now I need to go on and learn Y and Z and I don’t have time to understand why X is true”. You need to accept that this is just as true of the authority who told you X, and so he may not be able to tell you why X is true even if you do decide to ask him in your spare time. There may be an authority who could track that down, but it’s probably more trouble than it’s worth to track him down. Mostly, you’re going to use the traditions of your culture as a guide and just believe X because a trusted authority told you to, and that’s the right thing to do.

“Rationality” doesn’t work as an antonym to “Tradition”, because rationality needs tradition as an input. Not bothering to measure Avogadro’s number because it’s right there in your CRC handbook (or Wikipedia) is every bit as much a tradition as not boning your sister because the Tribal Elders say so; we just don’t call it that when it’s a tradition we like. Proper rationality requires being cold-bloodedly rational about evaluating the high-but-not-perfect reliability of tradition as a source of fact.

Unfortunately, and I think this may be a relic of the 18th and early 19th century when some really smart polymathic scientists could almost imagine that they really could wrap their minds around all relevant knowledge from first principles on down, our culture teaches ‘Science!’ in a way that suggests that you really should understand how everything is derived from first principles and firsthand observation or experiment even if at the object level you’re just going to look up Avogadro’s number in Wikipedia and memorize it for the test.


nkurz isn’t buying it:

I’m not sure where Scott is going with this series, but I seem to have a different reaction to the excerpts from Henrich than most (but not all) of the commenters before me: rather than coming across as persuasive, I wouldn’t trust him as far as I could throw him.

For simplicity let’s concentrate on the seal hunting description. I don’t know enough about Inuit techniques to critique the details, but it’s clear that, instead of aiming for a fair description, Henrich’s goal is to make the process sound as difficult to achieve as possible. But this is just sleight of hand: the goal of the stranded explorer isn’t to reproduce the exact technique of the Inuit, but to kill seals and eat them. The explorer isn’t going to use caribou antler probes or polar bear harpoon tips — they are going to use some modern wood or metal that they stripped from their ice-bound ship.

Then we hit “Now you have a seal, but you have to cook it.” What? The Inuit didn’t cook their seal meat using a soapstone lamp fueled with whale oil, they ate it raw! At this point, Henrich is not just being misleading, he’s making it up as he goes along. Now I start to wonder if the part about the antler probe and bone harpoon head is equally fictional. I might be wrong, but beyond this my instinct is to doubt everything that Henrich argues for, even if (especially if) it’s not an area where I have familiarity.

Going back to the previous post on “Epistemic Learned Helplessness”, I’m surprised that many people seem to have the instinct to continue to trust the parts of a story that they cannot confirm even after they discover that some parts are false. I’m at the opposite extreme. As soon as I can confirm a flaw, I have trouble trusting anything else the author has to say. I don’t care about the baby, this bathwater has to go! And if the “flaw” is that the author is being intentionally misleading, I’m unlikely to ever again trust them (or anyone else who recommends them).

Probably I accidentally misrepresented a lot in the parts that were my own summary. But this is from a direct quote, and so not my fault.

roystgnr adds:

Wikipedia seems to suggest that they ate freshly killed meat raw, but cooked some of the meat brought back to camp using a Kudlik, a soapstone lamp fueled with seal oil or whale blubber. Is that not correct? That would still flatly contradict “but you have to cook it”, but it’s close enough that the mistake doesn’t reach “making it up as he goes along” levels of falsehood. You’re correct that even the true bits seem to be used for argument in a misleading fashion, though.

This seems within the level of simplifying-to-make-a-point that I have sometimes been guilty of myself, so I’ll let it pass.


Bram Cohen:

A funny point about the random number generators: Rituals which require more effort are more likely to produce truly random results, because a ritual which required less effort would be more tempting to re-do if you didn’t like the result.
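Cohen’s mechanism is easy to check in simulation. Here’s a minimal sketch (my own toy model, not anything from the comments): a yes/no divination ritual where the diviner quietly redoes the ritual when the answer is unwelcome. The cheaper the ritual, the more re-dos are tempting, and the less random the final answer.

```python
import random

def divination(max_redos):
    """Fair yes/no oracle, redone up to max_redos times
    if the answer isn't the one the diviner wanted."""
    for _ in range(max_redos + 1):
        if random.random() < 0.5:  # the desired answer comes up
            return True
    return False

trials = 100_000
for redos in (0, 1, 3):  # cheaper rituals tempt more re-dos
    favorable = sum(divination(redos) for _ in range(trials)) / trials
    print(f"{redos} re-dos allowed: {favorable:.0%} favorable answers")
# 0 re-dos: ~50% (truly random); 3 re-dos: ~94% (biased toward the desired answer)
```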

David Friedman adds:

This reminds me of my father’s argument that cheap computers resulted in less reliable statistical results. If running one multiple regression takes hundreds of man hours and thousands of dollars, running a hundred of them and picking the one that, by chance, gives you the significant result you are looking for, isn’t a practical option.

Yikes.
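Friedman’s father’s point is easy to reproduce. A minimal sketch (assuming numpy and scipy are available): run a hundred regressions on pure noise and count how many come out “significant” at p < 0.05.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_regressions, n_points = 100, 50

# Every dataset is pure noise: y has no real relationship to x.
p_values = []
for _ in range(n_regressions):
    x = rng.normal(size=n_points)
    y = rng.normal(size=n_points)  # independent of x by construction
    p_values.append(linregress(x, y).pvalue)

# At the conventional p < 0.05 threshold, ~5 spurious "discoveries" are expected.
significant = sum(p < 0.05 for p in p_values)
print(f"{significant} of {n_regressions} noise regressions look significant")
```

When regressions cost nothing, someone who runs enough of them and reports only the winners will reliably find “results” in randomness.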


Anatoly:

The quote on quadruped running seems inaccurate in several important ways compared to the primary references Henrich cites, which are short and very interesting in their own right: Bramble and Carrier (1983) and Carrier (1984). In particular, humans still typically lock their breathing rate with their strides; it’s just that animals nearly always lock them 1:1, while humans are able to switch to other ratios, like 1:3, 2:3, 1:4, etc., and this is thought to allow us to maintain efficiency at varying speeds. Henrich also doesn’t mention that humans are at the outset metabolically disadvantaged for running in that we spend twice as much energy (!) per unit mass to run the same distance as quadrupeds. That we are still able to run down prey by endurance running is called the “energetic paradox” by Carrier. Liebenberg (2006) provides a vivid description of what endurance hunting looks like in the Kalahari.

And b_jonas:

I doubt the claim that humans don’t have quantized speeds of running. I for one definitely have two different gaits of walking, and find walking at an intermediate speed between the two more difficult than either of them. This is most noticeable when I want to chat with someone while walking, because then I have to walk at such an intermediate speed to not get too far from them. The effect is somewhat less pronounced now that I’ve gained weight, but it’s still present. I’m not good at running, so I can’t say anything certain about it, but I suspect that at least some humans have different running gaits, even if the cause is not the specific one that Joseph Henrich mentions about quadrupeds.

I’ve never noticed this. And I used to use treadmills relatively regularly, and play with the speed dial, so I feel like I would have noticed if this had been true. Anyone have thoughts on this?


Squirrel Of Doom:

I read somewhere that the languages with the most distinctive sounds are in Africa, among them the !click! languages. Since humanity originates from Africa, these are also the oldest language families.

As you move away from Africa, you can trace how languages lose sound after sound, until you get to Hawaiian, which is the language with the fewest sounds, almost all vowels.

I’ve half-heartedly tried to find any mention of this perhaps-overly-cute theory again, but failed. The “sonority” theory here reminded me. Anyone know anything, one way or the other?

Secret Of Our Success actually mentions this theory; you can find the details within.

Some people reasonably bring up that no language can be older than any other, for the same reason it doesn’t make sense to call any (currently existing) animal lineage older than any other – every animal lineage from 100 million BC has experienced 100 million years of evolution.

I think I’ve heard some people try to get around this by focusing on schisms. Everyone starts out in Africa, but a small group of people move off to Sinai or somewhere like that. Because most of the people are back home in Africa, they can maintain their linguistic complexity; because the Sinaites only have a single small band talking to each other, they lose some linguistic complexity. This seems kind of forced, and some people in the comments say linguistic complexity actually works in the opposite direction from this, but I too find the richness of Bushman languages pretty suggestive.


What about rules that really do seem pointless? Catherio writes:

My basic understanding is that if some of the rules (like “don’t wear hats in church”) are totally inconsequential to break, these provide more opportunities to signal that your community punishes rule violation, without an increase in actually-costly rule violations.

I’d heard this before, but she manages (impressively) to link it to AI: see Legible Normativity for AI Alignment: The Value of Silly Rules.


liskantope:

With regard to accepting other people’s illegible preferences…I wish I could show this essay to, like, two-thirds of all the people I’ve ever lived with. Seriously, a common core of my issues with roommates has been that they refuse to accept or understand my illegible preferences (I often refer to these as “irrational aversions”) while refusing to admit that their own illegible preferences are just as difficult to ground rationally. Just establishing an understanding that illegible preferences should be respected by default or at least treated on an even playing field, and that having immediate objective logical explanations for preferences should not be a requirement for validation, would have immediately improved my relationships with people I’ve lived with 100%.

I’ve had the same experience – a good test for my compatibility with someone will be whether they’ll accept “for illegible reasons” as an excuse. Despite the stereotypes, rationalists have been a hundred times better at this than any other group I’ve been in close contact with.


Nav on Lacan and Zizek (is everything cursed to end in Zizek eventually, sort of like with entropy?):

Time to beat my dead horse; the topics you’re discussing here have a lot of deep parallels in the psychoanalytic literature. First, Scott writes:

“If you force people to legibly interpret everything they do, or else stop doing it under threat of being called lazy or evil, you make their life harder”

This idea is treated by Lacan as the central ethical problem of psychoanalysis: under what circumstances is it acceptable to cast conscious light upon a person’s unconsciously-motivated behavior? The answer is usually “only if they seek it out, and only then if it would help them reduce their level of suffering”.

Turn the psychoanalytic, phenomenology-oriented frame onto social issues, as you’ve partly done, and suddenly we’re in Zizek-land (his main thrust is connecting social critique with psychoanalytic concepts). The problem is that (a) Zizek is jargon-heavy and difficult to understand, and (b) I’m not nearly as familiar with Zizek’s work as with more traditional psychoanalytic concepts. But I’ll try anyway. From a quick encyclopedia skim, he actually uses a similar analogy with fetishes (all quotes from IEP):

“Žižek argues that the attitude of subjects towards authority revealed by today’s ideological cynicism resembles the fetishist’s attitude towards his fetish. The fetishist’s attitude towards his fetish has the peculiar form of a disavowal: “I know well that (for example) the shoe is only a shoe, but nevertheless, I still need my partner to wear the shoe in order to enjoy.” According to Žižek, the attitude of political subjects towards political authority evinces the same logical form: “I know well that (for example) Bob Hawke / Bill Clinton / the Party / the market does not always act justly, but I still act as though I did not know that this is the case.””

As for how beliefs manifest, Zizek clarifies the experience of following a tradition and why we might actually feel like these traditions are aligned with “Reason” from the inside, and also the crux of why “Reason” can fail so hard in terms of social change:

According to Žižek, all successful political ideologies necessarily refer to and turn around sublime objects posited by political ideologies. These sublime objects are what political subjects take it that their regime’s ideologies’ central words mean or name: extraordinary Things like God, the Fuhrer, the King, in whose name they will (if necessary) transgress ordinary moral laws and lay down their lives… Just as Kant’s subject resignifies its failure to grasp the sublime object as indirect testimony to a wholly “supersensible” faculty within herself (Reason), so Žižek argues that the inability of subjects to explain the nature of what they believe in politically does not indicate any disloyalty or abnormality. What political ideologies do, precisely, is provide subjects with a way of seeing the world according to which such an inability can appear as testimony to how Transcendent or Great their Nation, God, Freedom, and so forth is—surely far above the ordinary or profane things of the world.

Lastly and somewhat related, going back to an older SSC post, Scott argues that he doesn’t know why his patients react well to him, but Zizek can explain that, and it has a lot of relevance for politics (transference is a complex topic, but the simple definition is a transfer of affect or mind from the patient onto the therapist, which is often a desirable part of therapy, contrasted with counter-transference, in which the therapist’s affect is directed at the patient):

“The belief or “supposition” of the analysand in psychoanalysis is that the Other (his analyst) knows the meaning of his symptoms. This is obviously a false belief, at the start of the analytic process. But it is only through holding this false belief about the analyst that the work of analysis can proceed, and the transferential belief can become true (when the analyst does become able to interpret the symptoms). Žižek argues that this strange intersubjective or dialectical logic of belief in clinical psychoanalysis is also what characterizes people’s political beliefs…. the key political function of holders of public office is to occupy the place of what he calls, after Lacan, “the Other supposed to know.” Žižek cites the example of priests reciting mass in Latin before an uncomprehending laity, who believe that the priests know the meaning of the words, and for whom this is sufficient to keep the faith. Far from presenting an exception to the way political authority works, for Žižek this scenario reveals the universal rule of how political consensus is formed.”

Scott probably comes across as having a stable and highly knowledgeable affect, which gives his patients a sense of being in the presence of authority (as we likely also feel in these comment threads), which makes him better able to perform transference and thus help his patients (or readers) reshape their beliefs.

Hopefully this shallow dive was interesting and opens up new areas of potential study, and also a parallel frame: working from the top-down ethnography (as tends to be popular in this community; the Archimedean standpoint) gives us a broad understanding, but working from the bottom-up gives us a more personal and intimate sense of why the top-down view is correct.

This helped me understand Zizek and Lacan a lot better than reading a book on them did, so thanks for that.


Stucchio doesn’t like me dissing Dubai:

I’m just going to raise a discussion of one piece here:

“Dubai, whose position in the United Arab Emirates makes it a lot closer to this model than most places, seems to invest a lot in its citizens’ happiness, but also has an underclass of near-slave laborers without exit rights (their employers tend to seize their passports).”

I have probably read the same western articles Scott has about all the labor the UAE and other middle eastern countries imports. But unlike them, I live in India (one of the major sources of labor) and mostly have heard about this from people who choose to make the trip.

To me the biggest thing missing from these western reporters’ accounts is the fact that the people shifting to the Gulf are ordinary humans, smarter than most journalists, and fully capable of making their own choices.

Here are things I’ve heard about it, roughly paraphrased:

“I knew they’d take my passport for 9 months while I paid for the trip over. After that I stuck around for 3 years because the money was good, particularly after I shifted jobs. It was sad only seeing my family over skype, but I brought home so much money it was worth it.”

“I took my family over and we stayed for 5 years; the money was good, we all finished the Hajj while we were there, but it was boring and I missed Maharashtrian food.”

“It sucked because the women are all locked up. You can’t talk to them at the mall. It’s as boring as everyone says and you can’t even watch internet porn. But the money is good.”

When I hear about this first hand, the stories don’t sound remotely like slave labor. It doesn’t even sound like “we were stuck in the GDR/Romania/etc” stories I’ve heard from professors born on the wrong side of the Iron Curtain. I hear stories of people making life choices to be bored and far from family in return for good money. Islam is a major secondary theme. So I don’t think the UAE is necessarily the exception Scott thinks it is.


Moridinamael on the StarCraft perspective:

In StarCraft 2, wild, unsound strategies may defeat poor opponents, but will be crushed by decent players who simply hew to strategies that fall within a valley of optimality. If there is a true optimal strategy, we don’t know what it is, but we do know what good, solid play looks like, and what it doesn’t look like. Tradition, that is to say, iterative competition, has carved a groove into the universe of playstyles, and it is almost impossible to outperform tradition.

Then you watch the highest-end professional players and see them sometimes doing absolutely hare-brained things that would only be contemplated by the rank novice, and you see those hare-brained things winning games. The best players are so good that they can leave behind the dogma of tradition. They simply understand the game in a way that you don’t. Sometimes a single innovative tactic debuted in a professional game will completely shift how the game is played for months, essentially carving a new path into what is considered the valley of optimality. Players can discover paths that are just better than tradition. And then, sometimes, somebody else figures out that the innovative strategy has an easily exploited Achilles’ heel, and the new tactic goes extinct as quickly as it became mainstream.

StarCraft 2 is fun to think about in this context because it is relatively open-ended, closer to reality than to chess. There are no equivalents to disruptor drops or mass infestor pushes or planetary fortress rushes in chess. StarCraft 2 is also fun to think about because we’ve now seen that machine learning can beat us at it by doing things outside of what we would call the valley of optimality.

But in this context it’s crucial to point out that the way AlphaStar developed its strategy looked more like gradually accrued “tradition” than like “rationalism”. A population of different agents played each other for a hundred subjective years. The winners replicated. This is memetic evolution through the Chestertonian tradition concept. The technique wouldn’t have worked without the powerful new learning algorithms, but the learning algorithm didn’t come up with the strategy of mass-producing probes and building mass blink-stalkers purely out of its fevered imagination. Rather, the learning algorithms were smart enough to notice what was working and what wasn’t, and to have some proximal conception as to why.

Someone (maybe Robin Hanson) treats all of history as just evolution evolving better evolutions. The worst evolution of all (random chance) created the first replicator and kicked off biological evolution. Biological evolution created brains, which use a sort of hill-climbing memetic evolution for good ideas. People with brains created cultures (cultural evolution) including free market economies (an evolutionary system that selects for successful technologies). AIs like AlphaStar are the next (final?) step in this process.


Book Review: Why Are The Prices So D*mn High?

Why have prices for services like health care and education risen so much over the past fifty years? When I looked into this in 2017, I couldn’t find a conclusive answer. Economists Alex Tabarrok and Eric Helland have written a new book on the topic, Why Are The Prices So D*mn High? (link goes to free pdf copy, or you can read Tabarrok’s summary on Marginal Revolution). They do find a conclusive answer: the Baumol effect.

T&H explain it like this:

In 1826, when Beethoven’s String Quartet No. 14 was first played, it took four people 40 minutes to produce a performance. In 2010, it still took four people 40 minutes to produce a performance. Stated differently, in the nearly 200 years between 1826 and 2010, there was no growth in string quartet labor productivity. In 1826 it took 2.66 labor hours to produce one unit of output, and it took 2.66 labor hours to produce one unit of output in 2010.

Fortunately, most other sectors of the economy have experienced substantial growth in labor productivity since 1826. We can measure growth in labor productivity in the economy as a whole by looking at the growth in real wages. In 1826 the average hourly wage for a production worker was $1.14. In 2010 the average hourly wage for a production worker was $26.44, approximately 23 times higher in real (inflation-adjusted) terms. Growth in average labor productivity has a surprising implication: it makes the output of slow productivity-growth sectors (relatively) more expensive. In 1826, the average wage of $1.14 meant that the 2.66 hours needed to produce a performance of Beethoven’s String Quartet No. 14 had an opportunity cost of just $3.02. At a wage of $26.44, the 2.66 hours of labor in music production had an opportunity cost of $70.33. Thus, in 2010 it was 23 times (70.33/3.02) more expensive to produce a performance of Beethoven’s String Quartet No. 14 than in 1826. In other words, one had to give up more other goods and services to produce a music performance in 2010 than one did in 1826. Why? Simply because in 2010, society was better at producing other goods and services than in 1826.

Put another way, a violinist can always choose to stop playing violin, retrain for a while, and work in a factory instead. Maybe in 1826, when factory workers were earning $1.14/hour, violinists were earning $5/hour, so no violinists would quit and retrain. But by 2010, factory workers were earning $26.44/hour, so if violinists were still only earning $5 they might all quit and retrain. So in 2010, there would be a strong pressure to increase violinists’ wage to at least $26.44 (probably more, since few people have the skills to be violinists). So violinists must be paid 5x more for the same work, which will look like concerts becoming more expensive.
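The quartet arithmetic is easy to check. A minimal sketch using only the figures quoted above:

```python
# Baumol-effect arithmetic from T&H's string-quartet example.
labor_hours = 2.66                   # four players, 40 minutes each
wage_1826, wage_2010 = 1.14, 26.44   # average production-worker hourly wage (real $)

# Opportunity cost of one performance = labor hours valued at the prevailing wage.
cost_1826 = labor_hours * wage_1826  # ~$3.03 (T&H round to $3.02)
cost_2010 = labor_hours * wage_2010  # ~$70.33

print(f"1826: ${cost_1826:.2f}  2010: ${cost_2010:.2f}  "
      f"ratio: {cost_2010 / cost_1826:.0f}x")  # ~23x more expensive
```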

This should happen in every industry where increasing technology does not increase productivity. Education and health care both qualify. Although we can imagine innovative online education models, in practice one teacher teaches about twenty to thirty kids per year regardless of our technology level. And although we can imagine innovative AI health care, in practice one doctor can only treat ten or twenty patients per day. Tabarrok and Helland say this is exactly what is happening. They point to a few lines of evidence.

First, costs have been increasing very consistently over a wide range of service industries. If it was just one industry, we could blame industry-specific factors. If it was just during one time period, we could blame some new policy or market change that happened during that time period. Instead it’s basically omnipresent. So it’s probably some kind of very broad secular trend. The Baumol effect would fit the bill; not much else would.

Second, costs seemed to increase most quickly during the ’60s and ’70s, and are increasing more slowly today. This fits the growth of productivity, the main driver of the Baumol effect. Between 1950 and 2010, the relative productivity of manufacturing compared to services increased by a factor of six, which T&H describe as “of the same order as the growth in relative prices”. This is what the violinist-vs-factory-worker model of the Baumol effect would predict.

Third, competing explanations don’t seem to work. Some people blame rising costs on “administrative bloat”. But administrative costs as a share of total college costs have stayed fixed at 16% from 1980 to today (really?! this is fascinating and surprising). Others blame rising costs on overregulation. But T&H have a measure for which industries have been getting more regulated recently, and it doesn’t really correlate with which industries have been getting more expensive (wait, did they just disprove that regulation hurts the economy? I guess regulation isn’t a random shock, so this isn’t proof, but it still seems like a big deal). They’re also able to knock down industry-specific explanations like medical malpractice suits, teachers unions, etc.

Fourth, although service quality has improved a little bit over the past few decades, T&H provide some evidence that this explains only a small fraction of the increase in costs. Yet education and health care remain as popular as ever (maybe more popular). They claim that very few things in economics can explain simultaneous increasing cost, increasing demand, and constant quality. One of those few things is the Baumol effect.

Fifth, they did a study, and the lower productivity growth in an industry, the higher the rise in costs, especially if they use college-educated workers who could otherwise get jobs in higher-productivity industries. This is what the Baumol effect would predict (though framed that way, it also sounds kind of obvious).

I find their case pretty convincing. And I want to believe. If this is true, it’s the best thing I’ve heard all year. It restores my faith in humanity. Rising costs in every sector don’t necessarily mean our society is getting less efficient, or more vulnerable to rent-seeking, or less-well-governed, or greedier, or anything like that. It’s just a natural consequence of high economic growth. We can stop worrying that our civilization is in terminal decline, and just work on the practical issue of how to get costs down.

But I do have some gripes. T&H frequently compare apples and oranges; for example, the administrator share in colleges vs. the faculty share in K-12; it feels like they’re clumsily trying to get one past you. They frequently describe how if you just use eg teacher salaries as a predictor, you can perfectly predict the extent of rising costs. But as far as I can tell, most things have risen the same amount, so if you used any subcomponent as a predictor, you could perfectly predict the extent of rising costs; again, it feels like they’re clumsily trying to get something past me. I think I can work out what they were trying to do (stitch together different datasets to get a better picture, assume salaries rise equally in every category) but I still wish they had discussed their reasoning and its limitations more openly.

The main thesis survives these objections, but there are still a few things that bother me, or don’t quite fit. I want to bring them up not as a gotcha or refutation, but in the hopes that people who know more about economics than I do can explain why I shouldn’t worry about them.

First, real wages have not in fact gone up during most of this period. Factory workers are not getting paid more. That makes it hard for me to understand how rising wages for factory workers are forcing up salaries for violinists, teachers, and doctors.

I discuss whether issues like benefits and inflation can explain this away here, and conclude they can do so only partially; I’m not sure how this would interact with the Baumol effect.

Second, other data seem to dispute that salaries for the professionals in question have risen at all. T&H talk about rises in “instructional expenditures”, an education-statistics term that includes teacher salary and other costs; their source is NCES. But NCES also includes tables of actual teacher salaries. These show that teacher salaries today are only 6% higher than teacher salaries in 1970. Meanwhile, per-pupil costs are more than twice as high. How is an increase of 6% in teacher salaries driving an increase of 100%+ in costs? Likewise, although on page 33 T&H claim that doctors’ salaries have tripled since 1960, other sources report smaller increases of about 50% to almost nothing. Conventional wisdom among doctors is that the profession used to be more lucrative than it is today. This makes it hard to see how rising doctor salaries could explain a tripling in the cost of health care. And doctor salaries apparently make up only 20% of health spending, so it’s hard to see how they can matter that much.
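To make the arithmetic behind that last point explicit, here’s a back-of-envelope sketch; the inputs are the estimates quoted above, not figures from T&H:

```python
# How much total cost growth can doctor salary growth alone explain?
salary_share = 0.20    # doctors' salaries as a share of health spending
salary_growth = 0.50   # upper-end estimate of salary growth since 1960

# Holding everything else fixed, total costs rise by share x growth:
implied_total_growth = salary_share * salary_growth
print(f"Implied rise in total health costs: {implied_total_growth:.0%}")  # 10%
```

A 50% raise on a 20% share moves total costs by about 10%, nowhere near a tripling.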

(also, this SMBC)

Third, the Baumol effect can’t explain things getting less affordable. T&H write:

The cost disease is not a disease but a blessing. To be sure, it would be better if productivity increased in all industries, but that is just to say that more is better. There is nothing negative about productivity growth, even if it is unbalanced. In particular, it is important to see that the increase in the relative price of the string quartet makes string quartets costlier but not less affordable. Society can afford just as many string quartets as in the past. Indeed, it can afford more because the increase in productivity in other sectors has made society richer. Individuals might not choose to buy more, but that is a choice, not a constraint forced upon them by circumstance.

This matches my understanding of the Baumol effect. But it doesn’t match my perception of how things are going in the real world. College has actually become less affordable. Using these numbers: in 1971, the average man would have had to work five months to earn a year’s tuition at a private college. In 2016, he would have had to work fourteen months. To put this in perspective, my uncle worked a summer job to pay for his college tuition; one summer of working = one year tuition at an Ivy League school. Student debt has increased 700% since 1990. College really does seem to be getting less affordable. So do health care, primary education, and all the other areas affected by cost disease. Baumol effects shouldn’t be able to do this, unless I am really confused about them.

If someone can answer these questions and remove my lingering doubts about the Baumol effect as an explanation for cost disease, they can share credit with Tabarrok and Helland for restoring a big part of my faith in modern civilization.

Addendum To “Enormous Nutshell”: Competing Selectors

[Previously in sequence: Epistemic Learned Helplessness, Book Review: The Secret Of Our Success, List Of Passages I Highlighted In My Copy Of The Secret Of Our Success, Asymmetric Weapons Gone Bad]

When I wrote Reactionary Philosophy In An Enormous Planet-Sized Nutshell, my attempt to explain reactionary philosophy, many people complained that it missed the key insight. At the time I had an excuse: I didn’t get the key insight. Now I think I might understand it and have the vocabulary to explain, so I want to belatedly add it in.

The whole thing revolves around this rather dubious redefinition:

RIGHT-WING: Policies and systems selected by cultural evolution
LEFT-WING: Policies and systems selected by the marketplace of ideas

The second line is ambiguous: which marketplace of ideas, exactly? Maybe better than “the marketplace of ideas” would be “memetic evolution”. Policies and systems that are so catchy and convincing that lots of people believe in them and want to fight for them.

Under this definition, lots of conventionally right-wing movements get defined as left-wing. For example, Nazism and Trumpism both arose after a charismatic leader convinced the populace to implement them. They won because people liked them more than the alternatives. But “left-wing” is not equivalent to “populist”. An idea that spreads by convincing intellectuals and building an academic consensus around itself is still left-wing, because it relies on convincing people. Even ideas like neoliberalism and technocracy are left-wing ideas, if they sound good to intellectuals and they spread by convincing those intellectuals.

Does this mean that in this model, fascism, communism, and liberalism are all left-wing ideas? Yes. Most democracies can be expected to have mostly (entirely?) left-wing parties, since the whole point of being a party in a democracy is that you have to convince voters of things and win their approval. It’s not impossible to imagine a successful right-wing party in a democracy – it would revolve around preserving tradition, and if respect for tradition was strong enough, it might temporarily win. But it’s not a very stable situation.

What prevents every democracy from instantly becoming maximally left-wing? First, cultural evolution has built itself an immune system in the form of traditions and illegible preferences for certain ideas. Second, cultural evolution is still at work. If incumbents pursue some popular policy that ends up bankrupting their city, or causing crime rates to increase 1000%, or something like that, they will end up humiliated, and people will probably vote them out of office. Incumbents know this, and so put some self-interested effort into rejecting these policies even if they are very popular and convincing.

(I think in this model, greed / special interests / NIMBYism are all special cases of convincingness. If an idea is in my self-interest, it will be very convincing to me; if I am powerful enough to sabotage the system or force things through it, the idea will have won through its convincingness.)

The reactionaries start with the assumption that some problems are asymmetric in the wrong direction. The correct idea sounds unconvincing; wrong ideas spread like wildfire and naturally win debates. I talked about two examples of this yesterday: Congressional salaries and early 20th century Communism. Most questions probably aren’t like this – “don’t nuke the ocean for no reason” is both convincing-sounding and adaptive. But where they diverge, you want to develop a system capable of implementing the right-wing answer even though there will be intense pressure from activists and the masses to implement the left-wing one.

What would a country capable of doing this look like? It would have to be a place where convincing-sounding ideas were incapable of spreading and taking over. That would mean that the beliefs of the populace would be completely irrelevant to what policies got enacted. So it couldn’t be a democracy. But it also couldn’t be an ordinary dictatorship. Churchill tells us that “dictators ride on tigers from which they dare not dismount” – they have to constantly maintain the support of the army and elites in order to avoid being deposed, and that involves doing things that sound good (at least to the army and elites) and are easy to justify (again, to them). You would need an implausibly strong dictatorship in order to resist the pressure to do whatever is easiest to justify, and so to escape being left-wing.

But even this would not be right-wing. Whatever convincing ideology has won the approval of the populace might also win the approval of the dictator, who would then do it because he wants to. Also, the dictator might be an idiot, or insane, and do bad policy for reasons other than because he is under the spell of some convincing-but-wrong idea.

The reactionaries believe there is no way to guarantee a country works well. But there is a way to guarantee that a collection of countries works well, which is to create a system conducive to cultural evolution. Have a bunch of small countries, each of which is ruled by an absolute dictator. In some of them, the dictator will pursue good policy, people and investment will flow in, and those countries will flourish. In others, the dictator will pursue bad policy, and those countries will either collapse, or do the smart thing and adopt the behavior of flourishing countries.

The argument isn’t that dictators are naturally smarter than the masses. The argument is that the dictators will be a high-variance group. Some of them will probably be stupid. But get enough countries like this, and at least one of them will have a dictator who really is cleverer than the masses. That country will succeed beyond what a left-wing country yoked to the most convincing-sounding idea would be capable of. Then other countries will copy its success or be left behind.

(are we sure dictatorships are higher variance than democracies? I think it makes intuitive sense that a single individual would be higher-variance than the average of a crowd. Also, democracies can be expected to develop activists and journalists who will intensify memetic selection and force convergence on the most memetically fit policy. If the democracies are culturally different, the most memetically fit policy might be different for each. But these cultural differences are themselves products of cultural evolution and could be expected to erode under enough pressure.)
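The intuition that an individual is higher-variance than an average is just the statistics of averaging. A toy sketch (my own, with made-up distributions): treat each dictatorship as one draw of “policy quality” and each democracy as roughly the mean of many draws.

```python
import random
import statistics

random.seed(0)
population = [random.gauss(0, 1) for _ in range(100_000)]  # individual "policy quality"

# A dictatorship's quality is one draw; a democracy's is roughly an average of many.
dictators = [random.choice(population) for _ in range(10_000)]
democracies = [statistics.fmean(random.sample(population, 100)) for _ in range(10_000)]

print(f"stdev of dictatorships: {statistics.stdev(dictators):.2f}")    # ~1.0
print(f"stdev of democracies:   {statistics.stdev(democracies):.2f}")  # ~0.1, i.e. 1/sqrt(100)
```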

There’s a clear analogy to business. Hundreds of entrepreneurs try to start their own companies. Many are idiots and fail immediately. But one of them is Jeff Bezos and very good at his job. His company makes the right decisions and ends up dominating the entire market. “The best practices spread everywhere” is the desired outcome; cultural evolution has succeeded. Abstracting away potential venture capitalist involvement, none of this requires Jeff Bezos’ business plan to sound convincing to a third party; memetic selection is not involved.

(if business worked like politics, each of those hundreds of e-commerce entrepreneurs would go before a panel of voters and explain why their ideas were the best; whoever sounded most convincing would win. I see no reason to believe Jeff Bezos is especially good at convincing people of things. Honestly, “first we make a mail order bookstore, then we conquer the world” sounds like a pretty dumb business plan.)

Henrich summarizes the political implications of The Secret Of Our Success as:

Humans are bad at intentionally designing effective institutions and organizations, though I’m hoping that as we get deeper insights into human nature and cultural evolution this can improve. Until then, we should take a page from cultural evolution’s playbook and design “variation and selection systems” that will allow alternative institutions or organizational forms to compete. We can dump the losers, keep the winners, and hopefully gain some general insights during the process.

The reactionary model of government is an attempt to cache out Henrich’s “variation and selection system”, and shares its advantages. But what’s the case against it?

First, turning the world into a patchwork of thousands of implausibly strong dictatorships sounds about as hard as starting a global communist revolution or implementing any other fundamental change to the system of the world.

Second, cultural evolution at the international level may not work quickly enough to be at all useful or humane. Plausibly World War II provided one bit of cultural-evolution data (“fascism is worse than liberalism”). The Cold War provided a second bit (“communism is also worse than liberalism”). Both bits are appreciated, but 50 million deaths per bit is a pretty high price. If the world were a patchwork of tiny dictatorships, there would probably be a lot of war and genocide and oppression before we learned anything.

Third, we have to hope that cultural evolution would be selecting for the happiest and most prosperous countries. There’s a case that it would, if everyone has exit rights and can vote with their feet for countries they like better. But there’s also a risk it selects for military might, or that exit rights don’t happen. Dubai, whose position in the United Arab Emirates makes it a lot closer to this model than most places, seems to invest a lot in its citizens’ happiness, but also has an underclass of near-slave laborers without exit rights (their employers tend to seize their passports). Also, a lot of industries have pretty bad conditions for their employees, even though those employees have exit rights to go to different companies. I don’t really understand why this happens, but it sounds like the sort of thing that could happen in a patchwork of small dictatorships too.

Finally, and appropriately for a system that loathes convincingness, the branding is terrible. Using “right” and “left” for the two sides was a bad decision. Absent that decision, I don’t think there’s anything necessarily rightist about it. Certainly it exemplifies leftist virtues like localism and diversity; certainly it gets points for identifying Nazism and Trumpism as bad and proposing a way to stop them. Certainly it should be tempting for communists who have realized they’re not going to get a revolution in western countries any time soon but still want a chance to prove their ideas can work. I think this bad branding decision caused a downstream cascade of awfulness, leading to reaction attracting a lot of very edgy people who liked the idea of being “maximally rightist”. Some of these people later became alt-right or Trump supporters, the media caught on, and the idea ended up discredited for totally contingent reasons.

Also on the subject of bad branding, it was an unforced error to focus on kings. The theory is pointing at something like Singapore, Dubai, or charter cities (but also utopian communes, and monasteries, and…) Medieval kings aren’t just a couple of centuries out of date, they’re also bad examples: most of them had very limited power to go against what nobles wanted. They probably stuck to cultural evolution rather than memetic evolution just because that was how things worked in the Middle Ages before the printing press, but they don’t seem to have had a coherent theory of this.

Despite these flaws, I find myself thinking about this more and more. Cultural evolution may be moving along as lazily as always, but memetic evolution gets faster and faster. Clickbait news sites increase the intensity of selection to tropical-rainforest-like levels. What survives turns out to be conspiracy-laden nationalism and conspiracy-laden socialism. The rise of Trump was really bad, and I don’t think it could have happened just ten or twenty years ago. Some sort of culturally-evolved immune system (“basic decency”) would have prevented it. Now the power of convincing-sounding ideas to spread through and energize the populace has overwhelmed what that kind of immunity can deal with.

We should try to raise the sanity waterline – make true things more convincing than false things. But at the same time, we may also want to try to understand the role of cultural evolution as a counterweight to memetic evolution, and have ideas for how to increase that role in case of emergency.


Asymmetric Weapons Gone Bad

[Previously in sequence: Epistemic Learned Helplessness, Book Review: The Secret Of Our Success, List Of Passages I Highlighted In My Copy Of The Secret Of Our Success. Deleted a controversial section which I still think was probably correct, but which given the number of objections wasn’t provably correct enough to be worth including. I might write another post giving my evidence for it later, but it probably shouldn’t be dropped in here without justification.]

I.

Years ago, I wrote about symmetric vs. asymmetric weapons.

A symmetric weapon is one that works just as well for the bad guys as for the good guys. For example, violence – your morality doesn’t determine how hard you can punch; they can buy guns from the same places we can.

An asymmetric weapon is one that works better for the good guys than the bad guys. The example I gave was Reason. If everyone tries to solve their problems through figuring out what the right thing to do is, the good guys (who are right) will have an easier time proving themselves to be right than the bad guys (who are wrong). Finding and using asymmetric weapons is the only non-coincidence way to make sustained moral progress.

The parts of The Secret Of Our Success that deal with reason vs. cultural evolution raise a disturbing prospect: what if sometimes, the asymmetry is in the wrong direction? What if there are some issues where rational debate inherently leads you astray?

II.

Maybe with an unlimited amount of resources, our investigations would naturally converge onto the truth. Given infinite intelligence, wisdom, impartiality, education, domain knowledge, evidence to study, experiments to perform, and time to think it over, we would figure everything out.

But just because infinite resources will produce truth doesn’t mean that truth as a function of resources has to be monotonic. Maybe there are some parts of the resources-vs-truth curve where increasing effort leads you in the wrong direction.

When I was fifteen, I thought minimum wages obviously helped poor people. They needed money; minimum wages gave them money, case closed.

When I was twenty, and a little wiser, I thought minimum wages were obviously bad for the poor. Econ 101 tells us minimum wages kill jobs and cause deadweight loss, with poor people most affected. Case closed.

When I was twenty-five, and wiser still, I thought minimum wages were probably good again. I’d read a couple of studies showing that maybe they didn’t cause job loss, in which case they’re back to just giving poor people more money.

When I was thirty, I was hopelessly confused. I knew there was a meta-analysis of 64 studies that showed no negative effects from minimum wages, and a systematic review of 100+ studies that showed strong negative effects from minimum wages. I knew a survey of economists found almost 80% thought minimum wages were good, but that a different survey of economists found 73% thought minimum wages were bad.

We can graph my life progress like this:

This partly reflects my own personal life course, which arguments I heard first, and how I personally process evidence.

But another part of it might just be inherent to the territory. That is, there are some arguments that are easy to understand, and other arguments that are harder to understand. If the easy arguments lean predominantly one way, and the hard arguments lean predominantly the other way, then it will be natural for any well-intentioned person studying a topic to follow a certain pattern of switching their opinion a few times before getting to the truth.

Some hard questions might be epistemic traps – problems where the more you study them, the wronger you get, up to some inflection point that might be further than anybody has ever studied them before.

III.

We’ll get to vast social conflicts eventually, but I want to start with boring things in everyday life.

I hate calling people on phones. I can’t really explain this. I’m okay with emailing them. I’m okay talking to them in person. But I hate calling them on phones.

When I was younger, I would go to great lengths to avoid calling people on phones. My parents would point out that this was dumb, and ask me to justify it. I couldn’t. They would tell me I was being silly. So I would call people on phones and hate it. Now I don’t live with my parents, nobody can make me do things, and so I am back to avoiding phone calls.

My parents weren’t authoritarian. They weren’t demanding I make phone calls because That Is The Way We Do Things In This House. They were doing the supposedly-correct thing, using rational argument to make me admit my aversion to phone calls was totally unjustified, and that making phone calls had many tangible benefits, and then telling me I should probably make the call, shouldn’t I? Yet somehow this ended up making my life worse.

Or: I can’t do complicated intellectual work with another person in the room. I just can’t. You can give me good reasons why I’m wrong about this: maybe the other person won’t make any noise. Maybe I can just turn the other way and focus on my computer and I won’t ever have to notice the other person’s presence at all. Argue this with me enough, and I will lose the argument, and work in the same room as you. I won’t get any good work done, and I’ll end up spending most of the time hating you and wishing you would go away.

I try to be very careful with my patients, so that I don’t make their lives worse in the same way. It’s often easy to get patients to admit they don’t have a good reason for what they’re doing; for example, autistic people usually can’t explain why they “stim”, ie make unusual flapping movements. These movements are distracting and probably creep out the people around them. It’s very easy to argue an autistic person into admitting that stimming is a net negative for them. Yet somehow autistic people always end up hating the psychiatrists who win this argument, and going somewhere far away from them so they can stim in peace.

Every day we do things that we can’t easily justify. If someone were to argue that we shouldn’t do the thing, they would win easily. We would respond by cutting that person out of our life, and continuing to do the thing.

I hope most readers found that at least one of the examples above rang true for them. If not – if you don’t hate phones, or have trouble working near others, or stim – and if you’re thinking “All of those things really do seem irrational, you’re probably just wrong if you want to protect them against Reason” – here are some potential alternative intuition pumps:

1. Guys – do you have trouble asking girls out? Why? The worst that can happen is they’ll say no, right?

2. Girls – do you sometimes get upset and flustered when a guy you don’t like asks you out, even in a situation where you don’t fear any violence or coercion from the other person? Do you sometimes agree to things you don’t want because you feel pressured? Why? All you have to do is say “I’m flattered, but no thanks”.

3. Do you diet and exercise as much as you should? Why not? Obviously this will make you healthier and feel better! Why don’t you buy a gym membership right now? Are you just being lazy?

I don’t mean to say these questions are Profound Mysteries that nobody can possibly answer. I think there are good answers to all of them – for example, there are some neurological theories that offer a pretty good explanation of how stimming helps autistic people feel better. But I do want to claim that most of the people in these situations don’t know the explanations, and that it’s unreasonable to expect them to. All of these actions and concerns are “illegible” in the Seeing Like A State sense.

Illegibility is complicated and context-dependent. Fetishes are pretty illegible, but because we have a shared idea of a fetish, because most people have fetishes, and because even the people who don’t have fetishes have the weird-if-you-think-about-it habit of being sexually attracted to other human beings – people can just say “That’s my fetish” and it becomes kind of legible. We don’t question it. And there are all sorts of phrases like “I don’t like it”, or “It’s a free country” or “Because it makes me happy” that sort of relieve us of the difficult work of maintaining legibility for all of our decisions.

This system works so well that it only breaks down when very different people try to communicate across a fundamental gap. For example, since allistic people may not feel any urge to stim or do anything like stimming, its illegibility suddenly becomes a problem, and they try to argue autistic people out of it. The worst failure mode is where illegible actions by an outgroup are naturally rounded off to “they are evil and just hiding it”. I remember feeling pretty bad once after hearing a feminist explain that the only reason men stared at attractive women was to intimidate them, make them feel like their body existed for other people’s pleasure, and cement male privilege. I myself sometimes stared at attractive women, and I couldn’t verbalize a coherent reason – was I just trying to hurt and intimidate them? I think a real answer to this question would involve the way we process salience – we naturally stare at the most salient part of a scene, and an attractive person will naturally be salient to us. But this was beyond teenaged me’s ability to come up with, so I ended up feeling bad and guilty.

If you force people to legibly interpret everything they do, or else stop doing it under threat of being called lazy or evil, you make their life harder and probably just end up with them avoiding you.

IV.

Different problems come up when we talk about societies trying to reason collectively. We would like to think that the more investigation and debate our society sinks into a question, the more likely we are to get the right answer. But there are also times when we do 450 studies on something and end up more wrong than when we started.

A very boring, trivial example of this: I think we should increase salaries for Congress, Cabinet Secretaries, and other high officials. There are so few of these that it would be very cheap: quintupling every Representative, Senator, and Cabinet Secretary’s salary to $1 million/year would involve raising taxes by only $2 per person. And if it attracted even a slightly better caliber of candidate – the type who made even 1% better decisions on the trillion-dollar questions such leaders face – it would pay for itself hundreds of times over. Or if it prevented just a tiny bit of corruption – an already rich Defense Secretary deciding from his gold-plated mansion that there was no point in going for a “consulting job” with a substandard defense contractor – again, hundreds of times over. This isn’t just me being an elitist shill: even Alexandria Ocasio-Cortez agrees with me here. This is as close to a no-brainer as policies come.
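Here’s the back-of-the-envelope version of that arithmetic in Python, with the headcounts and current salary as rough assumptions on my part rather than official figures:

```python
# Rough cost of quintupling top federal officials' salaries.
# Headcounts and the current salary are approximate assumptions.
representatives = 435
senators = 100
cabinet = 15  # cabinet secretaries, give or take
officials = representatives + senators + cabinet

current_salary = 174_000  # approximate congressional salary
new_salary = 1_000_000
extra_cost = officials * (new_salary - current_salary)

us_population = 330_000_000
print(f"total extra cost: ${extra_cost:,}")                    # ~$454 million
print(f"cost per person:  ${extra_cost / us_population:.2f}")  # ~$1.38
```

That comes out closer to $1.50 per person than $2, but either way it rounds to pocket change next to a single trillion-dollar decision.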

But I think I would be demolished if I tried to argue for this on Twitter, or on daytime TV, or anywhere else that promotes a cutthroat culture of “dunking” on people with the wrong opinions. It’s so much faster, easier, and punchier to say “poor single mothers are starving on minimum wage, and you think the most important problem is taking money away from them to make our millionaires even richer?” and just drown me out with cries of “elitist shill, elitist shill” every time I try to give the explanation above. Sure enough, the AOC article above notes that although Americans underestimate the amount Congressmen get paid (they think only $120,000, way less than the real number of $170,000), most of them believe they should be paid less, with only 17% saying they should keep getting what they already have, and only 9% agreeing they should get more.

This is a different problem than the one above – the policy isn’t illegible to the people trying to defend it, but the communication methods are low-bandwidth enough that the most legible side naturally wins. That Congressmen are even able to maintain their current salary is partly due to them being insulated from debate: the issue never really comes up, so the consensus in favor of cutting their pay doesn’t really matter.

And yeah, I know, Popular Opinion Sometimes Wrong, More At 11. But this seems like a trivial but real society-wide case of the epistemic traps above, where if you increase one resource (amount an issue is debated) without increasing other resources (intelligence and rationality of the participants, the amount of time and careful thought they are willing to put in) you get further away from truth.

V.

Are there any less trivial examples? What about turn-of-the-20th-century socialism?

I was shocked to learn how strong a pro-socialism consensus existed during this period among top intellectuals. Socialist leader Edward Pease described the landscape pretty well:

Socialism succeeds because it is common sense. The anarchy of individual production is already an anachronism. The control of the community over itself extends every day. We demand order, method, regularity, design; the accidents of sickness and misfortune, of old age and bereavement, must be prevented if possible, and if not, mitigated. Of this principle the public is already convinced: it is merely a question of working out the details. But order and forethought is wanted for industry as well as for human life. Competition is bad, and in most respects private monopoly is worse. No one now seriously defends the system of rival traders with their crowds of commercial travellers: of rival tradesmen with their innumerable deliveries in each street; and yet no one advocates the capitalist alternative, the great trust, often concealed and insidious, which monopolises oil or tobacco or diamonds, and makes huge profits for a fortunate few out of the helplessness of the unorganised consumers.

Why shouldn’t people have thought this? The period featured sweatshop-like working conditions alongside criminally rich nobility with no sign that this state of affairs could ever change under capitalism. Top economists, up until the 1950s, almost unanimously agreed that socialism would help the economy, since central planners could coordinate ways to become more efficient. The first good arguments against this proposition, those of Hayek and von Mises, were a quarter-century in the future. Communism seemed perfectly straightforward and unlikely to go wrong; the first hint that it “might not work in real life” would have to wait for the Bolshevik Revolution. Pease writes that the main pro-capitalism argument during his own time was the Malthusian position that if the poor got more money, they would keep breeding until the Earth was overwhelmed by overpopulation; even in his own time, demographers knew this wasn’t true. The imbalance in favor of pro-communist arguments over pro-capitalist ones was overwhelming.

Don’t trust me on this. Trust all the turn-of-the-20th-century intellectuals who flocked towards socialism. In the Britain of the time, the smarter you were, and the more social science and economics you knew, the more likely you were to be a socialist, with only a few exceptions.

But turn-of-the-century Britain never went communist. Why not?

One school of thought says it’s because rich people had too much power. Even though the intellectuals all supported communism, nobody wanted to start a violent revolution, because they expected the rich to win and punish them.

But another school of thought says that cultural evolution created both capitalism, and an immune system to defend capitalism. This is more complicated, and requires a lot of the previous discussion here before it makes sense. But it seems to match some of what was going on. Society didn’t look like everyone wanting to revolt but being afraid of the rich. It looked like large parts of the poor and middle class being very anti-communist for kind of illegible reasons like “king” and “country” and “God” and “tradition” or “just because”.

In retrospect, these illegible reasons were right. It’s hard to tell if they were right by coincidence, or because cultural evolution is smarter than we are, drags us into whatever decision it makes, and then creates illegible reasons to prop itself up.

Empirically, as people started devoting more intellectual resources to the problem of whether Britain should be communist or not – as very intelligent and well-educated people started thinking about the problem using the most modern ideas of science and rationality, and challenged all of their preconceived notions to see which ones would stand up to Reason and which ones wouldn’t – they got further from the truth.

(I’m assuming that you, the reader, aren’t communist. If you are, think up another example, I guess.)

There is a level of understanding that lets you realize communism is a bad idea. But you need a lot of economic theory and a lot of retrospective historical knowledge the early-20th-century British didn’t have. There’s some part of the resources-vs-truth graph where you’re smart enough to know what communism is but not smart enough to have good arguments against it – where the more intellect you apply, the further from truth it takes you.

VI.

Obviously this ends with everyone agreeing to think very hard about things, carefully notice which traditions have illegible justifications, and then only throw out the traditions that are legitimately stupid and exist for no reason. What other position could we come to? You wouldn’t say “Don’t bother being careful, nothing is ever illegible”. But you also can’t say “Okay, we will never change anything ever again”. You just give the maximally-weaselly answer of “We’ll be sure to think about it first.”

But somebody made a good point on the last comments thread. We are the heirs to a five-hundred-year-old tradition of questioning traditions and demanding rational justifications for things. Armed with this tradition, western civilization has conquered the world and landed on the moon. If there were ever any tradition that has received cultural evolution’s stamp of approval, it would be this one.

So is there anything at all we should learn from all of this? If I had to cash out “think very hard about things” more carefully, maybe it would look like this:

1. The original Chesterton’s Fence: try to understand traditions before jettisoning them.

2. If someone does something weird but can’t explain why, accept them as long as they’re not hurting anyone else (and don’t make up stupid excuses for why their actions really hurt all of us). Be less quick to jump to “actually they are doing it out of Inherent Evil” as an explanation.

3. As per the last Henrich quote here, make use of the “laboratories of democracy” idea. Try things on a small scale in limited areas before trying them at larger scale; let different polities compete and see what happens.

4. Have less intense competitive pressure in the marketplace of ideas. Kuhn touches on how heliocentric theory had less explanatory power than geocentric theory for a while, but was tolerated anyway long enough that it was eventually able to sort itself out and become better. If good ideas are sometimes at a disadvantage in defending themselves, leave unpopular opinions alone for a while to see if they eventually become more legible. I think this might look like just being kinder and more tolerant of weirdness.

5. If someone defends a tradition that seems completely wrong and repulsive to you, try to be understanding of them even if you are right and the tradition is wrong. Traditions spent a long time evolving to be as sticky as possible in the face of contrary evidence, humans spent a long time evolving to stick to traditions as much as possible in the face of contrary evidence, and this evolution was beneficial through most of history. This sort of pressure is as hard to break (and probably as genetically-loaded) as other now-obsolete evolutionary urges like the one to binge on as much calorie-dense food as possible when it’s available (related).

6. Having done all that, and working as gingerly and gradually as you can, you should still try to improve on traditions that seem obsolete or improvable.

7. Cultural evolution does not provide evidence that traditions are ethical. Like biological evolution, cultural evolution didn’t even try to create ethical systems. It tried to create systems that were good at spreading. Plausibly many cultures converged on eating meat because it was a good source of calories and nutrients. But if you think it violates animals’ rights, cultural evolution shouldn’t convince you otherwise – there’s no reason cultural evolution should price animal suffering into its calculations. (related).

Finally: some people have interpreted this series of posts as a renunciation of rationality, or an admission that rationality is bad. It isn’t. Rationality isn’t (or shouldn’t be) the demand that every opinion be legible and we throw out cultural evolution. Rationality is the art of reasoning correctly. I don’t know what the optimal balance between what-seems-right-to-us vs. tradition should be. But whatever balance we decide on, better correlating “what seems right to us” with “what is actually true” will lead to better results. If we’re currently abysmal at this task, that only adds urgency to figuring out where we keep going wrong and how we might go less wrong, both as individuals and as a community.

List Of Passages I Highlighted In My Copy Of “The Secret Of Our Success”

[Previously in sequence: Epistemic Learned Helplessness, Book Review: The Secret Of Our Success]

A rare example of cultural evolution in action:

Throughout the Highlands of New Guinea, a group’s ability to raise large numbers of pigs is directly related to its economic and social success in competition with other regional groups. The ceremonial exchange of pigs allows groups to forge alliances, re-pay debts, obtain wives, and generate prestige through excessive displays of generosity. All this means that groups who are better able to raise pigs can expand more rapidly in numbers—by reproduction and in-migration—and thus have the potential to expand their territory. Group size is very important in intergroup warfare in small-scale societies so larger groups are more likely to successfully expand their territory. However, the prestige more successful groups obtain may cause the rapid diffusion of the very institutions, beliefs, or practices responsible for their competitive edge as other groups adopt their strategies and beliefs.

In 1971, the anthropologist David Boyd was living in the New Guinea village of Irakia, and observed intergroup competition via prestige-biased group transmission. Concerned about their low prestige and weak pig production, the senior men of Irakia convened a series of meetings to determine how to improve their situation. Numerous suggestions were proposed for raising their pig production but after a long process of consensus building the senior men of the village decided to follow a suggestion made by a prestigious clan-leader who proposed that they “must follow the Fore’” and adopt their pig-related husbandry practices, rituals, and other institutions. The Fore’ were a large and successful ethnic group in the region, who were renowned for their pig production. The following practices, beliefs, rules, and goals were copied from the Fore’, and announced at the next general meeting of the community:

1) All villagers must sing, dance and play flutes for their pigs. This ritual causes the pigs to grow faster and bigger. At feasts, the pigs should be fed first from the oven. People are fed second.

2) Pigs should not be killed for breaking into another’s garden. The pig’s owner must assist the owner of the garden in repairing the fence. Disputes will be resolved following the dispute resolution procedure used among the Fore’.

3) Sending pigs to other villages is tabooed, except for the official festival feast.

4) Women should take better care of the pigs, and feed them more food. To find extra time for this, women should spend less time gossiping.

5) Men must plant more sweet potatoes for the women to feed to the pigs, and should not depart for wage labor in distant towns until the pigs have grown to a certain size.

The first two items were implemented immediately at a ritual feast. David stayed in the village long enough to verify that the villagers did adopt the other practices, and that their pig production did increase in the short term, though unfortunately we don’t know what happened in the long-run.

Let me highlight three features of this case. First, the real causal linkages between many of these elements and pig production are unclear. Maybe singing does cause pigs to grow faster, but it’s not obvious and no one tried to ascertain this fact, via experimentation for example. Second, the village leadership chose to rely on copying institutions from other groups, and not on designing their own institutions from scratch. This is smart, since we humans are horrible at designing institutions from scratch. And third, this transmission between groups occurred rapidly because Irakia already had a political institution in the village, involving a council of the senior members of each clan, who were empowered by tradition (social norms) to make community-level decisions. Lacking this decision-making institution, Fore’ practices would have had to spread among households, and thus been much slower in spreading. Of course, such political decision-making institutions themselves are favored by intergroup competition.

This is it. This is the five-point platform that the Democratic Party can use to win in 2020.


Yesterday’s review mentioned that children have certain “slots” in their heads that are ready for specific types of incoming information. How far can we take this idea?

The UCLA anthropologist Dan Fessler argues that during middle childhood (ages 6-9) humans go through a phase in which we are strongly attracted to learning about fire, by both observing others and manipulating it ourselves. In small-scale societies, where children are free to engage this curiosity, adolescents have both mastered fire and lost any further attraction to it. Interestingly, Fessler also argues that modern societies are unusual because so many children never get to satisfy their curiosity, so their fascination with fire stretches into the teen years and early adulthood.


On prestige-based socialization and learning who to learn from:

By 14 months, infants are already well beyond social referencing and already showing signs of using skill or competence cues to select models. After observing an adult model acting confused by shoes and placing them on his hands, German infants tended not to copy his unusual way of turning on a novel lighting device: using his head. However, if the model acted competently, confidently putting shoes on his feet, babies tended to copy the model and used their heads to activate the novel lighting device.


Kind of unrelated to culture, but did you know quadruped animals run at quantized speeds?

Many four-legged animals are saddled with a design disadvantage. Game animals thermoregulate by panting, like a dog. If they need to release more heat, they pant faster. This works fine unless they are running. When they run, the impact of their forelimbs compresses their chest cavities in a manner that makes breathing during compressions inefficient. This means that, ignoring oxygen and thermoregulation requirements, running quadrupeds should breathe only once per locomotor-cycle. But, since the need for oxygen goes up linearly with speed, they will be breathing too frequently at some speeds and not frequently enough at other speeds. Consequently, a running quadruped must pick a speed that (1) demands only one breath per cycle, but (2) supplies enough oxygen for his muscle-speed demands (lest fatigue set in), and (3) delivers enough panting to prevent a meltdown (heat stroke), which depends on factors unrelated to speed such as the temperature and breeze. The outcome of these constraints is that quadrupeds have a discrete set of optimal or preferred speed settings (like the gears on a stick-shift car) for different styles of locomotion (e.g., walking, trotting and galloping). If they deviate from these preferred settings, they are operating less efficiently.

Humans lack these restrictions because (1) our lungs do not compress when we stride (we’re bipedal) so (2) our breathing rates can vary independent of our speed, and (3) our thermoregulation is managed by our fancy sweating-system, so the need to pant does not constrain our breathing. Because of this, within our range of aerobic running speeds (not sprinting), energy use doesn’t vary too much. That means we can change speeds within this range without paying much of a penalty. As a result, a skilled endurance hunter can strategically vary his speed in order to force his prey to run inefficiently. If his prey picks an initial speed just faster than the hunter, to escape, the hunter can speed up. This forces the prey to ‘shift up’ to a much faster speed, which will cause rapid overheating. The animal’s only alternative is to run inefficiently, at a slower speed which will exhaust his muscles more quickly. The consequence is that hunters force their prey into a series of sprints and rests that eventually result in heat stroke. The overheated prey collapses, and is easily dispatched. Tarahumara, Paiute and Navajo hunters report that they then simply strangle the collapsed deer or pronghorn antelope.


Even locomotion is culturally learned!

To achieve a running form that maximizes both performance and freedom from injury, humans need to rely on some cultural learning, on top of much individual practice. The evolutionary biologist and anatomist, Dan Lieberman, has studied long-distance barefoot and minimally shod running in communities around the globe. When he asks runners of all ages how they learned to run, they never say they “just knew how.” Instead, they often name or point to an older, highly skilled, and more prestigious member of their group or community, and say they just watch him, and do what he does. We are such a cultural species that we’ve come to rely on learning from others even to figure out how to run in ways that best harness our anatomical adaptations.


Why we use spices:

Why do we use spices in our foods? In thinking about this question keep in mind that (1) other animals don’t spice their foods, (2) most spices contribute little or no nutrition to our diets, and (3) the active ingredients in many spices are actually aversive chemicals, which evolved to keep insects, fungi, bacteria, mammals and other unwanted critters away from the plants that produce them.

Several lines of evidence indicate that spicing may represent a class of cultural adaptations to the problem of food-borne pathogens. Many spices are antimicrobials that can kill pathogens in foods. Globally, common spices are onions, pepper, garlic, cilantro, chili peppers (capsicum) and bay leaves. Here’s the idea: the use of many spices represents a cultural adaptation to the problem of pathogens in food, especially in meat. This challenge would have been most important before refrigerators came on the scene. To examine this, two biologists, Jennifer Billing and Paul Sherman, collected 4578 recipes from traditional cookbooks from populations around the world. They found three distinct patterns.

1. Spices are, in fact, antimicrobial. The most common spices in the world are also the most effective against bacteria. Some spices are also fungicides. Combinations of spices have synergistic effects, which may explain why ingredients like “chili powder” (a mix of red pepper, onion, paprika, garlic, cumin and oregano) are so important. And, ingredients like lemon and lime, which are not on their own potent anti-microbials, appear to catalyze the bacteria killing effects of other spices.

2. People in hotter climates use more spices, and more of the most effective bacteria killers. In India and Indonesia, for example, most recipes used many anti-microbial spices, including onions, garlic, capsicum and coriander. Meanwhile, in Norway, recipes use some black pepper and occasionally a bit of parsley or lemon, but that’s about it.

3. Recipes appear to use spices in ways that increase their effectiveness. Some spices, like onions and garlic, whose killing power is resistant to heating, are deployed in the cooking process. Other spices like cilantro, whose antimicrobial properties might be damaged by heating, are added fresh in recipes.

Thus, many recipes and preferences appear to be cultural adaptations adapted to local environments that operate in subtle and nuanced ways not understood by those of us who love spicy foods. Billing and Sherman speculate that these evolved culturally, as healthier, more fertile and more successful families were preferentially imitated by less successful ones. This is quite plausible given what we know about our species’ evolved psychology for cultural learning, including specifically cultural learning about foods and plants.

Among spices, chili peppers are an ideal case. Chili peppers were the primary spice of New World cuisines, prior to the arrival of Europeans, and are now routinely consumed by about a quarter of all adults, globally. Chili peppers have evolved chemical defenses, based on capsaicin, that make them aversive to mammals and rodents but desirable to birds. In mammals, capsicum directly activates a pain channel (TrpV1), which creates a burning sensation in response to various specific stimuli, including acid, high temperatures and allyl isothiocyanate (which is found in mustard or wasabi). These chemical weapons aid chili pepper plants in their survival and reproduction, as birds provide a better dispersal system for the plants’ seeds than other options (like mammals). Consequently, chilies are innately aversive to non-human primates, babies and many human adults. Capsaicin is so innately aversive that nursing mothers are advised to avoid chili peppers, lest their infants reject their breast (milk), and some societies even put capsicum on mom’s breasts to initiate weaning. Yet, adults who live in hot climates regularly incorporate chilies into their recipes. And, those who grow up among people who enjoy eating chili peppers not only eat chilies but love eating them. How do we come to like the experience of burning and sweating—the activation of pain channel TrpV1?

Research by psychologist Paul Rozin shows that people come to enjoy the experience of eating chili peppers mostly by re-interpreting the pain signals caused by capsicum as pleasure or excitement. Based on work in the highlands of Mexico, children acquire this gradually without being pressured or compelled. They want to learn to like chili peppers, to be like those they admire. This fits with what we’ve already seen: children readily acquire food preferences from older peers. In Chapter 14, we further examine how cultural learning can alter our bodies’ physiological response to pain, and specifically to electric shocks. The bottom line is that culture can overpower our innate mammalian aversions, when necessary and without us knowing it.

Fascinating if true. But don’t we use spices because of their taste? If spices are antimicrobials, why aren’t there any tasteless spices? I guess you could argue most plants taste like something, usually something bad, and if a plant is a good antimicrobial then we go through the trouble of culturally reinterpreting its taste to be “exciting” or “interesting”. Also, how far can this “cultural reinterpretation” idea go? Does this explain things like masochism, or like the weak form of masochism that makes people like naively unpleasant experiences like roller coasters?


I knew that Europeans had light skin because they lived in northern latitudes without much sunlight. But then how come Inuit and North Asians never developed light skin? Henrich explains:

To understand this, we need first to consider how culture has shaped genes for skin color over the last 10 millennia. Much evidence now indicates that the shades of skin color found among different populations—from dark to light—across the globe represent a genetic adaptation to the intensity and frequency of exposure to ultraviolet light, including both UVA and UVB. Near the equator, where the sun is intense year round, natural selection favors darker skin, as seen in populations near the equator in Africa, New Guinea and Australia. This is because both UVA and UVB light can dismantle the folate present in our skin, if not impeded or blocked by melanin. Folate is crucial during pregnancy, and inadequate levels can result in severe birth defects like spina bifida. This is why pregnant women are told by their physicians to take folic acid. In men, folate is important in sperm production. Preventing the loss of this reproductively valuable folate means adding protective melanin to our epidermis, which has the side effect of darkening our skin.

The threat from intense UV light to our folate diminishes for populations farther from the equator. However, a new problem pops up, as darker skinned people face a potential vitamin D deficiency. Our bodies use UVB light to synthesize vitamin D. At higher latitudes, the protective melanin in dark skin can block too much of the UVB light, and thereby inhibit the synthesis of vitamin D. This vitamin is important for the proper functioning of the brain, heart, pancreas and immune system. If a person’s diet lacks other significant sources of this vitamin, then having dark skin and living at high latitudes increases one’s chances of experiencing a whole range of health problems, including most notably rickets. A terrible condition especially in children, rickets causes muscle weakness, bone and skeletal deformities, bone fractures and muscle spasms. Thus, living at high latitude will often favor genes for lighter skin. Not surprising for a cultural species, many high latitude populations of hunter-gatherers (above 50°–55° latitude), such as the Inuit, culturally evolved adaptive diets based on fish and marine animals, so the selection pressures on genes to reduce the melanin in their skin were not as potent as they would have been in populations lacking such resources. If these resources were to disappear from the diet of such northern populations, selection for light skin would intensify dramatically.

Among regions of the globe above 50°–55° latitude (e.g. much of Canada), the area around the Baltic Sea was almost unique in its ability to support early agriculture. Starting around 6,000 years ago, a cultural package of cereal crops and agricultural know-how gradually spread from the south, and was adapted to the Baltic ecology. Eventually, people became primarily dependent on farmed foods, and lacked access to the fish and other vitamin D-rich food sources that local hunter-gatherer populations had long enjoyed. However, being at particularly high latitudes, natural selection kicked in to favor genes for really light skin, so as to maximize whatever vitamin-D could be synthesized using UVB light.

Secret Of Our Success spends a lot of time talking about gene-culture coevolution and how we should expect people from different cultures to have different genes. When asked whether this is potentially racist, it argues it’s really maximally anti-racist, because “racism” means “believing in exactly the same racial categories as 19th century racists”, and gene-culture coevolution proves that variation is actually much more widespread than that, so there.


In case you needed proof that high status increases your inclusive fitness:

Chris asked a sample of Tsimane to rank the men in two villages along a number of dimensions, including their fighting ability, generosity, respect, community persuasiveness, ability to get their way, and their number of allies. Each Tsimane’ man could then be assigned a score based on the aggregate results from his fellow villagers. Chris argues that his measures of fighting ability and community persuasiveness provide the best proxies for dominance and prestige, respectively, in this context. He then shows that both of these proxies for social status are associated with having more babies with one’s wife, having more extra-marital affairs, and being more likely to remarry after a divorce, even after statistically removing the effects of age, kin group size, economic productivity and several other factors. Beyond this, the children of prestigious men die less frequently and prestigious men are more likely to marry at younger ages (neither of these effects hold for dominant men). All this suggests that, at least in this small scale society, being recognized as either dominant or prestigious has a positive influence on one’s total reproductive output (children) or mating success over and above the consequences that might accrue from factors associated with status like economic productivity or hunting skills. Not surprisingly, both dominant and prestigious men tended to get their way at group meetings, but only prestigious men were respected and generous.


On the Sanhedrin:

Effective institutions often harness or suppress aspects of our status psychology in non-intuitive ways. Take the Great Sanhedrin, the ancient Jewish court and legislature that persisted for centuries at the beginning of the Common Era. When deliberating on a capital case, its 70 judges would each share their views beginning with the youngest and lowest ranking member and then proceed in turn to the “wisest” and most respected member. This is an interesting norm because (1) it’s nearly the opposite of how things would go if we let nature take its course, and (2) it helps guarantee that all the judges got to hear the least varnished views of the lower ranking members, since otherwise the views of the lowest status individuals would be tainted by both the persuasive and deferential effects of prestige and dominance. Concerns with dominance may have been further mitigated by (1) a sharing of the directorship of the Sanhedrin by two individuals, who could be removed by a vote of the judges, (2) the similar social class and background of judges, and (3) social norms that suppressed status displays.

I like this idea, but I worry it could backfire. Supposing that even the best of us are at least a little tempted to conform, it risks the youngest and least experienced members setting the tone for the discussion, so that the older and wiser members are tempted to conform with people more foolish than themselves. If the wisest people spoke first, at least we could get their untainted opinions and guarantee that any conformity was at least in favor of the opinion most likely to be correct. Overall it seems like they should have gone with secret ballots. I wonder if anyone’s ever done an experiment comparing wisest-first, youngest-first, and secret-ballot decision-making to see if any have a clear advantage. You could do it with one of those “guess the number of jelly beans in this jar” tasks or something, with participants who did well on a test problem clearly marked as “elders”.
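For what it’s worth, you can get a feel for the tradeoff with a toy Monte Carlo. Everything in this sketch is an invented assumption – the noise levels, the conformity weight, the idea that the verdict is just the average statement – since which scheme wins depends entirely on those choices:

```python
import random
import statistics

# Toy Monte Carlo of the jelly-bean version of the experiment.
# Invented assumptions: 70 judges estimate a true count, "wiser"
# judges (lower rank) get less noisy private guesses, and every
# public statement is pulled partway toward the running average
# of earlier statements. A secret ballot is just zero conformity.
TRUE_VALUE = 1000
N_JUDGES = 70
TRIALS = 2000

def deliberate(speaking_order, conformity):
    statements = []
    for rank in speaking_order:  # rank 0 = wisest judge
        signal = random.gauss(TRUE_VALUE, 50 + 5 * rank)
        if statements:
            anchor = statistics.mean(statements)
            signal = (1 - conformity) * signal + conformity * anchor
        statements.append(signal)
    return statistics.mean(statements)  # the group's verdict

def mean_error(speaking_order, conformity):
    return statistics.mean(
        abs(deliberate(speaking_order, conformity) - TRUE_VALUE)
        for _ in range(TRIALS)
    )

ranks = list(range(N_JUDGES))
print("wisest first:  ", mean_error(ranks, conformity=0.5))
print("youngest first:", mean_error(ranks[::-1], conformity=0.5))
print("secret ballot: ", mean_error(ranks, conformity=0.0))
```

Under these particular parameters, wisest-first tends to beat the secret ballot (conformity effectively gives extra weight to the wise judges), and youngest-first does worst; but if the group’s wisdom ranking is unreliable, the secret ballot pulls ahead – which is arguably the whole question.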


On why societies often dictate naming children after their paternal relatives:

In building a broader kinship network, social norms and practices connect a child more tightly to his or her father’s side of the family, in subtle ways. In contrast to many complex societies, mobile hunter-gatherer populations often emphasize kinship through both mom and dad, and permit new couples much flexibility in where they can live after marriage. However, there’s always that problem of paternity certainty for dad’s entire side. Among Ju/’hoansi, mobile hunter-gatherers in the Kalahari Desert in southern Africa, social norms dictate that a newborn’s father—or, more accurately, the mother’s husband—has the privilege of naming the child. These norms also encourage him to name the child after either his mother or father, depending on the infant’s sex. Ju/’hoansi believe name sharing helps the essence of the paternal grandparents live on, and it consequently bonds both the grandparents and the father’s whole side of the family to the newborn. Relatives of the grandparents often refer to the newborn using the same kinship term they use for his or her older namesake—that is, the grandfather’s daughter will call the newborn baby “father.”

This bias to the father’s side is particularly interesting since Ju/’hoansi kinship relationships are otherwise quite gender egalitarian, emphasizing equally the links to both mom’s and dad’s sides of the family. This biased naming practice may help create that symmetry by evening out the imbalance that paternity uncertainty leaves behind. In many modern societies, where social norms favoring the father’s side have disappeared, the effect of paternity certainty emerges as maternal grandparents, uncles and aunts invest more than the same paternal relatives do. Thus, Ju/’hoansi practices link newborns directly to their father’s parents and simultaneously, via the use of close kin terms like “father” and “sister”, pull all of dad’s relatives closer.

I wonder if this can be extended to our own practice of kids (mostly) taking their father’s last name rather than their mother’s.

And Joseph Henrich continues with an anecdote I eventually decided to consider cute:

More broadly, in Ju/’hoansi society, sharing the same name is an important feature of social life, which has many economically important implications. Psychologically, creating namesakes may work in two interlocking ways. First, even among undergraduates and professors, experiments suggest that sharing the same, or even a similar, name increases people’s liking for the other person, their perceptions of similarity and their willingness to help that person. In one study, for example, professors were more likely to fill out a survey and mail it back if the cover letter was signed by someone with a name similar to their own name. The perception of similarity suggests that namesakes may somehow spark our kin psychology, since we already know we use other cues of similarity (appearance) to assess relatedness. Second, even if this same-name trick doesn’t actually spark any change in immediate feelings, it still sets the appropriate social norms—the reputational standards monitored by others—which among the Ju/’hoansi specify all kinds of important things about relationships, ranging from meat sharing priorities to water-hole ownership. Norms related to naming or namesake relationships are common across diverse societies, and many people in small-scale societies intuitively know the power of namesakes, as my Yasawan friends with names like Josefa, Joseteki and Joseses often remind me. My own kids are named Joshua, Jessica and Zoey, thus matching my own first name by first initial or by rhyming.

(his wife is also an anthropologist, so maybe that makes naming your kids according to anthropological phenomena easier to pull off).


Relevant to a frequent discussion here about whether polyamory is “unnatural” or at least a violation of Chesterton’s Fence:

Even in societies with marriage, social norms and beliefs need not re-enforce concerns about sexual fidelity that arise from male pair-bonding psychology, but can instead promote investment in children in other ways. Many South American indigenous populations believe that a child forms in his or her mother’s womb through repeated ejaculations of sperm, a belief system that anthropologists have labeled partible paternity. In fact, people in many of these societies maintain that a single ejaculation cannot sustain a viable pregnancy, and men must “work hard” with repeated ejaculations over many months to sustain a viable fetus. Women, especially after the first fetus appears, are permitted, and sometimes even encouraged, to seek another man, or men, to have sex with in order to provide ‘additional fathers’ for their future child. Anyone who contributes sperm to the fetus is a secondary father. In some of these societies, periodic rituals prescribe extramarital sex after successful hunts, which helps establish and formalize the creation of multiple fathers. Secondary fathers — often named at birth by the mother — are expected to contribute to the welfare of their children (e.g., by delivering meat and fish), although not as much as the primary father, the mother’s husband. Frequently, the secondary father is the husband’s brother.

Obtaining a second father is adaptive, at least sometimes. Detailed studies among both the Bari’ in Venezuela and the Ache’ show that kids with exactly two fathers are more likely to survive past age fifteen than kids with either one father or three or more fathers.

Importantly, social norms cannot just make male sexual jealousy vanish. Men don’t like it when their wives seek sex with other men. However, rather than being supported by their communities in monitoring and punishing their wives for sexual deviations, they are the ones acting defiantly—violating social norms—if they show or act on their jealousy. Reputational concerns and norms are flipped around here, so now the husband has to control himself. In the eyes of the community, it’s considered a good thing for an expectant mother to provide a secondary father for her child.

Henrich adds that about 85% of human societies have practiced something other than traditionally-understood monogamy.

Suppose somebody in a weird Californian counterculture scene is trying to decide to what degree polyamory is Chesterton’s-Fence-compliant. They might look around their own social network and find that most of the people they know have organically become polyamorous over the past decade or so, and decide it is the local tradition (and therefore it is good). But they could look on a broader scale and see that most people in their civilization over the past few centuries have been monogamous (and therefore polyamory is bad). Or they could look on an even broader scale and see that most people in the world throughout human history have been non-monogamous (and therefore polyamory is potentially good again). I understand other people’s intuition that the “my civilization, past few hundred years” scale seems important, but I’m not sure how you would non-arbitrarily justify choosing that particular scale instead of others. The strongest argument seems to be something like “Wait two generations to see if it builds strong families”, but I could see going either way.


I mentioned aversion to eating insects in the original review, but Henrich suggests some food taboos are easier to acquire than others:

There is reason to suspect that we humans have an innate susceptibility to picking up meat aversions, due to the tendency of dead animals to carry dangerous pathogens. Thus, we humans are primed to acquire meat taboos over other food avoidances


More on taboos. A lot of taboos were of the form “you personally are not allowed to eat this particular meat or else something terrible will happen to you, so you might as well share it with the less fortunate instead”; this looks like a pretty transparent attempt by cultural evolution to build a social safety net. Henrich asks why these taboos persisted in the face of greed:

A good learner will acquire this rule while growing up and never actually violate it (meat is consumed in public), so he’ll never directly experience eating the tabooed part and not having bad luck. Rare cases of taboo violation that, by coincidence, were followed by bad luck or illness will be readily remembered and passed on (psychologists call this “negativity bias”). Meanwhile, cases of violations followed by a long period when nothing bad happens will tend to be missed or forgotten, unless people keep and check accurate records.

Based on my field experience, any skeptic who questions the taboos will be met with vivid descriptions of particular cases in which the taboos were violated and then poor hunting, illnesses, or bad luck ensued.

This is a huge stretch, but I wonder if you could make an argument that evolution favored confirmation bias because it helped prevent people from questioning their cultural rules.


How social norms are maintained:

In research in the villages of Yasawa Island, my team and I have studied how norms are maintained. When someone, for example, repeatedly fails to contribute to village feasts or community labor, or violates food or incest taboos, the person’s reputation suffers. A Yasawan’s reputation is like a shield that protects them from exploitation or harm by others, often from those who harbor old jealousies or past grievances. Violating norms, especially repeatedly, causes this reputational shield to drop, and creates an opening for others to exploit the norm-violator with relative impunity. Norm violators have their property (e.g., plates, matches, tools) stolen and destroyed while they are away fishing or visiting relatives in other villages; or, they have their crops stolen and gardens burned at night. Despite the small size of these communities, the perpetrators of these actions often remain anonymous and get direct benefits in the form of stolen food and tools as well as the advantages of bringing down a competitor or dispensing revenge for past grievances.

Despite their selfish motivations, these actions act to sustain social norms, including cooperative ones, because—crucially—perpetrators can only get away with such actions when they target a norm-violator, a person with his reputational shield down. Were they to do this to someone with a good reputation, the perpetrator would himself become a norm-violator and damage his or her reputation, thereby opening themselves up to gossip, thefts and property damage. This system, which Yasawans themselves can’t explicitly lay out, thereby harnesses past grievances, jealousies and plain old self-interest to sustain social norms, including cooperative norms like contributing to village feasts. Thus, individuals who fail to learn the correct local norms, can’t control themselves or repeatedly make mistaken violations are eventually driven from the village, after having been relentlessly targeted for exploitation.

This sounds sort of like the Icelandic legal system in Legal Systems Very Different From Ours, in that the consequence of breaking the law is that the laws cease to protect you. But viewed from a more critical angle, it also sounds like the modern “tradition” of committing (and/or tolerating) hate crimes against people who don’t conform.


Speaking of hate crimes, Henrich (like me) thinks “racism” is not a natural category. He thinks ethnic hostility is much more natural than racial hostility, with the difference being that race is biological and ethnicity is culture. People are naturally friendly towards people of their own culture and skeptical of people from other cultures, which may or may not follow racial lines. He discusses an experiment in which children are asked to view a puppet playing a game incorrectly:

We can see how deeply norms are intertwined with our folk sociology by returning to the experiments with Max the puppet. The child subjects now encounter Max along with Henri. Max speaks native-accented German but Henri speaks French-accented German. Young German children protested much more when Max—their co-ethnic as cued by accent—played the game differently from the model than when Henri did. Co-ethnics are favored because they presumably share similar norms, but that also means they are subject to more monitoring and punishment if they violate those norms. This appears to hold cross-culturally, as people from places as diverse as Mongolia and New Guinea willingly pay a cost to preferentially punish their co-ethnics in experiments like the Ultimatum Game, over their non-co-ethnics, for norm violations.

This approach to how and why we think about tribes and ethnicity has broader implications. First, intergroup competition will tend to favor the spread of any tricks for expanding what members of a group perceive as their tribe. Both religions and nations have culturally evolved to increasingly harness and exploit this piece of our psychology, as they create quasi-tribes. Second, this approach means that the ingroup vs. out-group view taken by psychologists misses a key point: not all groups are equally salient or thought about in the same way. Civil wars, for example, strongly trace to ethnically or religiously marked differences, and not to class, income or political ideology. This is because our minds are prepared to carve the social world into ethnic groups, but not into classes or ideologies.

Finally, the psychological machinery that underpins how we think about ‘race’ actually evolved to parse ethnicity, not race. You might be confused by this distinction since race and ethnicity are so often mixed up. Ethnic group membership is assigned based on culturally-transmitted markers, like language or dialect. By contrast, racial groups are marked and assigned according to perceived morphological traits, such as color or hair form, which are genetically transmitted. Our folk-sociological abilities evolved to pick out ethnic groups, or tribes. However, cues like skin color or hair form can pose as ethnic markers in the modern world because members of different ethnic groups sometimes also share markers like skin color/hair form, and racial cues can automatically and unconsciously ‘trick’ our psychology into thinking that different ethnic groups exist. And, this byproduct can be harnessed and reified by cultural evolution to create linguistically labeled racial categories and racism.

Underlining this point is the fact that racial cues do not have cognitive priority over ethnic cues: when children or adults encounter a situation in which accent or language indicate ‘same ethnicity’ but skin color indicates ‘different race’, the ethno-linguistic markers trump the racial markers. That is, children pick as a friend someone of a different race who speaks their dialect over someone of the same race who speaks a different dialect. Even weaker cues like dress can sometimes trump racial cues. The tendency of children and adults to preferentially learn and interact with those who share their racial markers (mistaken for ethnic cues) likely contributes to the maintenance of cultural differences between racially marked populations, even in the same neighborhood.

This ties in to my crackpot theory that the number one way to fight racism in the US is to somehow get everyone speaking exactly the same accent.


In one well-studied case among the Gebusi, in New Guinea, my failure to meet my sister exchange obligations would increase the chances that I would, at some future date, be found guilty of witchcraft.

#out of context quotes


Henrich discusses a theory of intrinsic growth pretty similar to the one in my recent singularity post. But he introduces a neat experimental test: Polynesian islands. On larger islands (ie with higher carrying capacities), technological advance is faster:

Islands or island clusters with larger populations and more contact with other islands had both a greater number of different fishing-tool types and more-complex fishing technologies. Figure 12.2 shows the relationship between population size and the number of tool types. People on islands with bigger populations had more tools at their disposal, and those tools tended to be more sophisticated.

Another team, led by the evolutionary anthropologist Mark Collard, found the same kind of strong positive relationship when they examined forty nonindustrialized societies of farmers and herders from around the globe. Once again, larger populations had more-complex technologies and a greater number of different types of tools.

These effects can even be observed in orangutans. While orangutans have little or no cumulative culture, they do possess some social learning abilities that result in local, population-specific traditions. For example, some orangutan groups routinely use leaves to scoop up water from the ground or use sticks to extract seeds from fruit. Data from several orangutan populations show that groups with greater interaction among individuals tend to possess more learned food-obtaining techniques.

The point is, larger and more interconnected populations generate more sophisticated tools, techniques, weapons, and know-how because they have larger collective brains.

Henrich’s model is actually a little more complicated than mine, because it includes a term for forgetting technology (which actually happens pretty often when the group is small enough!). The more technology the group has, the more likely that one or two things slip through the cracks every generation and don’t get passed on to the kids. That means that most primitive societies are in an equilibrium between the rate of generating and the rate of losing technology, whose exact level depends on the population size:

Some information was lost every generation, because copies are usually worse than the originals. Cumulative cultural evolution has to fight against this force and is best able to do so in larger populations that are highly socially interconnected. The key is that most individuals end up imperfect, worse than the models they are learning from. However, some few individuals, whether by luck, fierce practice, or intentional innovation, end up better than their teachers…
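
To make the dynamic concrete, here is a minimal toy simulation of that equilibrium – my own sketch, not Henrich’s actual model, and the invention and forgetting rates are made-up illustrative parameters. Each generation, every member of the population has a small chance of adding one technology, and every existing technology has a small chance of failing to be passed on:

```python
import random

def simulate_culture(pop_size, generations=500,
                     invention_rate=0.0005, loss_rate=0.005, seed=0):
    # Toy model: per generation, each person may invent one technology,
    # and each existing technology may be forgotten in transmission.
    # All rates here are made-up parameters for illustration only.
    rng = random.Random(seed)
    techs = 0
    for _ in range(generations):
        gained = sum(rng.random() < invention_rate for _ in range(pop_size))
        lost = sum(rng.random() < loss_rate for _ in range(techs))
        techs += gained - lost
    return techs

for n in (50, 500, 5000):
    print(n, simulate_culture(n))  # bigger populations sustain more technology
```

In expectation the process settles where gains balance losses, at roughly pop_size × invention_rate / loss_rate technologies – which is why the equilibrium level scales with population size, and why small, isolated groups can slide backwards.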


One point the book really drove home is how much of the absolute basics of knowledge are cultural inventions. We laugh at primitive tribes who count “one, two, many”, but counting more specifically than this was a discovery someone had to make, and it only survived when there was a context that made it useful:

Many of the products of cumulative cultural evolution give us not only ready concepts to apply to somewhat new problems, and concepts to recombine (bows are projectiles + elastically stored energy), but actually give us cognitive tools or mental abilities that we would not otherwise have. Arabic numerals, Roman letters, the Indian zero, the Gregorian calendar, cylindrical projection maps, basic color terms, clocks, fractions, and right vs. left are just some of the cognitive tools that have shaped your mind and mine.

Alas, this quote is missing some context from the rest of the book showing just how hard these ideas were to develop. Remember that mathematicians spent a while debating whether “zero” was truly a number, that ancient people had what we consider very confusing concepts around color (even the Greeks were weird about this). Remember that the alphabet – breaking words up into their smallest components – arose only after millennia of logographs and syllabaries, and in some areas never arose at all. There’s even some speculation that basic ideas about introspection and emotion were invented pretty late. Or even:

Subordinating conjunctions like “after”, “before”, and “because of” may have evolved only recently, in historical times, and are probably no more a feature of *human* languages than composite bows are a feature of *human* technological repertoires. The tools of subordination seem less well-developed in the earliest versions of Sumerian, Akkadian, Hittite, and Greek. This makes these languages slow, ponderous, and repetitious to read. …This is not to say that we humans don’t have some souped-up innate abilities for dealing with hierarchical structures, which may also be useful for making tools or understanding social relationships, but merely that the elegant bits of grammar that permit us to fully harness these abilities were built by cultural evolution.

This kind of thing is why Henrich thinks comparing the IQ of young chimps and human toddlers is fair, but comparing older chimps and older humans isn’t. Older humans have all of these deep-level concepts to work with that make solving even abstract puzzles much easier. This is also plausibly related to the Flynn Effect.


On sonority:

A successful communicator is one who can most effectively be understood, given the local social, environmental, or ecological conditions. As young or naïve learners focus on and learn from more successful communicators—who are using more effective communication tools—cumulative cultural evolution will gradually assemble sign or whistled repertoires, over time, in the same way that it hones kayaks, spears, and boomerangs. Given this, there’s no reason to suspect that such cultural evolutionary processes somehow apply only to whistled or gestural sign languages, and not to typical spoken languages. Thus, spoken languages should—under the right circumstances—show some response to the local acoustic environments and to nonlinguistic social norms, just as whistled and sign languages do. While researchers have done little work on such topics, there’s some preliminary evidence.

Spoken languages vary in their sonority. The sonority of our voices decreases as the airflow used for speech is obstructed; it is highest for open vowels, like the /a/, and lowest for so-called voiceless stops, like the /t/ in tin. Pronounce each of these sounds and note the difference in the constriction of your airflow. Both vowels and consonants vary in sonority, but vowels generally have much higher sonority than consonants. This means that more sonorous languages tend to have more vowels (e.g., Hawaiian), while less sonorous ones pack the consonants together (e.g., Russian). For the same energy and effort, more sonorous speech sounds can be heard at greater distances and over more ambient noise than less sonorous ones.

If languages adapt culturally, then we can predict that in situations in which people do relatively more talking over greater interpersonal distances with more ambient noise and sound dispersion, languages will be more sonorous. Many environmental variables might influence this, but Robert Munroe, John Fought, and their colleagues reasoned that climate, and specifically temperature, might have a big effect. The idea is simple: in warmer climates, people work, play, cook, and relax outdoors. Compared to life indoors, living outside means that communicators more frequently face the challenges of distance, noise, and poor acoustics. Their team generated measures of sonority from word lists for dozens of languages and then looked at the relationship between sonority and measures of climatic temperature, like the number of months per year when it’s below 10°C (50°F).

It turns out that if all you know is climatic temperature, then you can account for about one-third of the variation in the sonority of languages. Languages in warmer climates tend to use more vowels than those in colder climates and rely more heavily on the most sonorous vowel, /a/. For consonants, languages in warmer climates rely more heavily on the most sonorant consonants, like /n/, /l/, and /r/. By contrast, languages in colder climates lean more heavily on the least sonorous vowels, like the /i/ in deep.

This simple idea can have much nuance added to it. For example, not all warm climates are equally conducive to sonorous speech. In regions with dense forest cover, the advantages of high sonority might be less pronounced, or as the anthropologists Mel and Carol Ember have argued, very cold and windy climates may select against linguistic practices that involve opening one’s mouth widely, due to the increased heat loss. To this they added the idea that social norms about sexual restrictiveness might also influence sonority. Adding both of these nuances to the basic climatic temperature analysis, they managed to account for four-fifths of the variation in the sonority of language.

I’m a little worried about p-hacking here, but still, whoa! The thing where Inuit languages sound like tikkakkooktttippik but Polynesian languages sound like waoiuhieeawahiaii has a cause! The phonetic nature of words is shaped by the experience of the people who produce them! There’s something delightfully kabbalistic about this.
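
Out of curiosity about how a “sonority measure” could even be computed: the book doesn’t spell out the scoring, so here is a crude letter-level index of my own, loosely following the standard sonority hierarchy (open vowels at the top, voiceless stops at the bottom). The actual studies scored phonemes from word lists, not romanized spellings, so treat this as illustration only:

```python
# Crude letter-level sonority scores (1 = least sonorous, 9 = most),
# loosely following the standard sonority hierarchy. The published
# studies score phonemes, not spellings; this is illustration only.
SONORITY = {ch: score
            for chars, score in [("a", 9), ("eo", 8), ("iu", 7),
                                 ("wyj", 6), ("lr", 5), ("mn", 4),
                                 ("vzbdg", 3), ("fsh", 2), ("ptkc", 1)]
            for ch in chars}

def sonority_index(word):
    # Mean sonority of the scorable letters in a (romanized) word.
    scores = [SONORITY[ch] for ch in word.lower() if ch in SONORITY]
    return sum(scores) / len(scores)

print(sonority_index("aloha"))    # vowel-heavy Hawaiian word: high index
print(sonority_index("vzglyad"))  # Russian consonant cluster: low index
```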


The chili pepper quote promised a study on cultural learning of pain, so here it is:

Ken Craig has directly tested the relationship between cultural learning and pain. Ken’s team first exposed research participants to a series of electric shocks that gradually increased in intensity and thus painfulness. Some participants observed another person – a “tough model” – experience the same shocks right after them, and some did not. Both the participant and model had to rate how painful the shock was each time. The tough model, however, was secretly working for the experimenter and always rated the pain about 25% less painful than the participant did. Then, after this, the model left and the participants received a series of random electric shocks. For this new series of shocks, the participants who had seen the tough model rated them half as painful as those who didn’t see the tough model….

Those who saw the tough model showed (1) declining measurements of electrodermal skin potential, meaning that their bodies stopped reacting to the threat, (2) lower and more stable heart rates, and (3) lower stress ratings. Cultural learning from the tough model changed their physiological reactions to electric shocks.

I see a commenter on Quillette has already thought to connect this to telling people they should be harmed by triggers and microaggressions. But also note the connection to the predictive processing model of perception.


Books like this are supposed to end with an Exhortation Relevant To Modern Society, so here’s Henrich’s:

Humans are bad at intentionally designing effective institutions and organizations, though I’m hoping that as we get deeper insights into human nature and cultural evolution this can improve. Until then, we should take a page from cultural evolution’s playbook and design “variation and selection systems” that will allow alternative institutions or organizational forms to compete. We can dump the losers, keep the winners, and hopefully gain some general insights during the process.

If that sounds familiar, it could be because it’s capitalism; if it sounds very familiar, it could be because it’s also the case for things like charter cities and seasteads; if it sounds super familiar, it could be because it’s also Archipelago.

And to finish:

Once we understand the importance of collective brains, we begin to see why modern societies differ in their innovativeness. It’s not the smartness of individuals or the formal incentives. It’s the willingness and ability of large numbers of individuals at the knowledge frontier to freely interact, exchange views, disagree, learn from each other, build collaborations, trust strangers, and be wrong.

Hopefully this means Henrich won’t be too angry that I just quoted like half of his copyrighted book without permission.

Book Review: The Secret Of Our Success

[Previously in sequence: Epistemic Learned Helplessness]

I.

“Culture is the secret of humanity’s success” sounds like the most vapid possible thesis. The Secret Of Our Success by anthropologist Joseph Henrich manages to be an amazing book anyway.

Henrich wants to debunk (or at least clarify) a popular view where humans succeeded because of our raw intelligence. In this view, we are smart enough to invent neat tools that help us survive and adapt to unfamiliar environments.

Against such theories: we cannot actually do this. Henrich walks the reader through many stories about European explorers marooned in unfamiliar environments. These explorers usually starved to death. They starved to death in the middle of endless plenty. Some of them were in Arctic lands that the Inuit considered among their richest hunting grounds. Others were in jungles, surrounded by edible plants and animals. One particularly unfortunate group was in Alabama, and would have perished entirely if they hadn’t been captured and enslaved by local Indians first.

These explorers had many advantages over our hominid ancestors. For one thing, their exploration parties were made up entirely of strong young men in their prime, with no need to support women, children, or the elderly. They were often selected for their education and intelligence. Many of them were from Victorian Britain, one of the most successful civilizations in history, full of geniuses like Darwin and Galton. Most of them had some past experience with wilderness craft and survival. But despite their big brains, when faced with the task our big brains supposedly evolved for – figuring out how to do hunting and gathering in a wilderness environment – they failed pathetically.

Nor is it surprising that they failed. Hunting and gathering is actually really hard. Here’s Henrich’s description of how the Inuit hunt seals:

You first have to find their breathing holes in the ice. It’s important that the area around the hole be snow-covered—otherwise the seals will hear you and vanish. You then open the hole, smell it to verify it’s still in use (what do seals smell like?), and then assess the shape of the hole using a special curved piece of caribou antler. The hole is then covered with snow, save for a small gap at the top that is capped with a down indicator. If the seal enters the hole, the indicator moves, and you must blindly plunge your harpoon into the hole using all your weight. Your harpoon should be about 1.5 meters (5ft) long, with a detachable tip that is tethered with a heavy braid of sinew line. You can get the antler from the previously noted caribou, which you brought down with your driftwood bow.

The rear spike of the harpoon is made of extra-hard polar bear bone (yes, you also need to know how to kill polar bears; best to catch them napping in their dens). Once you’ve plunged your harpoon’s head into the seal, you’re then in a wrestling match as you reel him in, onto the ice, where you can finish him off with the aforementioned bear-bone spike.

Now you have a seal, but you have to cook it. However, there are no trees at this latitude for wood, and driftwood is too sparse and valuable to use routinely for fires. To have a reliable fire, you’ll need to carve a lamp from soapstone (you know what soapstone looks like, right?), render some oil for the lamp from blubber, and make a wick out of a particular species of moss. You will also need water. The pack ice is frozen salt water, so using it for drinking will just make you dehydrate faster. However, old sea ice has lost most of its salt, so it can be melted to make potable water. Of course, you need to be able to locate and identify old sea ice by color and texture. To melt it, make sure you have enough oil for your soapstone lamp.

No surprise that stranded explorers couldn’t figure all this out. It’s more surprising that the Inuit did. And although the Arctic is an unusually hostile place for humans, Henrich makes it clear that hunting-gathering techniques of this level of complexity are standard everywhere. Here’s how the Indians of Tierra del Fuego make arrows:

Among the Fuegians, making an arrow requires a 14-step procedure that involves using seven different tools to work six different materials. Here are some of the steps:

– The process begins by selecting the wood for the shaft, which preferably comes from chaura, a bushy, evergreen shrub. Though strong and light, this wood is a non-intuitive choice since the gnarled branches require extensive straightening (why not start with straighter branches?).

– The wood is heated, straightened with the craftsman’s teeth, and eventually finished with a scraper. Then, using a pre-heated and grooved stone, the shaft is pressed into the grooves and rubbed back and forth, pressing it down with a piece of fox skin. The fox skin becomes impregnated with the dust, which prepares it for the polishing stage (Does it have to be fox skin?).

– Bits of pitch, gathered from the beach, are chewed and mixed with ash (What if you don’t include the ash?).

– The mixture is then applied to both ends of a heated shaft, which must then be coated with white clay (what about red clay? Do you have to heat it?). This prepares the ends for the fletching and arrowhead.

– Two feathers are used for the fletching, preferably from upland geese (why not chicken feathers?).

– Right-handed bowmen must use feathers from the left wing of the bird, and vice versa for lefties (Does this really matter?).

– The feathers are lashed to the shaft using sinews from the back of the guanaco, after they are smoothed and thinned with water and saliva (why not sinews from the fox that I had to kill for the aforementioned skin?).

Next is the arrowhead, which must be crafted and then attached to the shaft, and of course there is also the bow, quiver and archery skills. But, I’ll leave it there, since I think you get the idea.

How do hunter-gatherers know how to do all this? We usually summarize it as “culture”. How did it form? Not through some smart Inuit or Fuegian person reasoning it out; if that had been it, smart European explorers should have been able to reason it out too.

The obvious answer is “cultural evolution”, but Henrich isn’t much better than anyone else at taking the mystery out of this phrase. Trial and error must have been involved, and less successful groups/people imitating the techniques of more successful ones. But is that really a satisfying explanation?

I found the chapter on language a helpful reminder that we already basically accept something like this is true. How did language get invented? I’m especially interested in this question because of my brief interactions with conlanging communities – people who try to construct their own languages as a hobby or as part of a fantasy universe, like Tolkien did with Elvish. Most people are terrible at this; their languages are either unusable, or exact clones of English. Only people who (like Tolkien) already have years of formal training in linguistics can do a remotely passable job. And you’re telling me the original languages were invented by cavemen? Surely there was no committee of Proto-Indo-European nomads that voted on whether to have an inflecting or agglutinating tongue? Surely nobody ran out of their cave shouting “Eureka!” after having discovered the interjection? We just kind of accept that after cavemen working really hard to communicate with each other, eventually language – still one of the most complicated and impressive productions of the human race – just sort of happened.

(this is how I feel about biological evolution too – how do you evolve an eye by trial and error? I’ve read papers speculating on the exact process, and they make lots of good points, but I still don’t feel happy about it, like “Oh, of course this would happen!” At some point you just have to accept evolution is smarter than you are and smarter than you would expect to be possible.)

Taking the generation of culture as secondary to this kind of mysterious process, Henrich turns to its transmission. If cultural generation happens at a certain rate, then the fidelity of transmission determines whether a given society advances, stagnates, or declines.

For Henrich, humans started becoming more than just another species of monkey when we started transmitting culture with high fidelity. Some anthropologists talk about the Machiavellian Intelligence Hypothesis – the theory that humans evolved big brains in order to succeed at social maneuvering and climbing dominance hierarchies. Henrich counters with his own Cultural Intelligence Hypothesis – humans evolved big brains in order to be able to maintain things like Inuit seal hunting techniques. Everything that separates us from the apes is part of an evolutionary package designed to help us maintain this kind of culture, exploit this kind of culture, or adjust to the new abilities that this kind of culture gave us.

II.

Secret gives many examples of culture-related adaptations, and not all of them are in the brain.

Our digestive tracts evolved alongside our cultures. Specifically, they evolved to be unusually puny:

Our mouths are the size of the squirrel monkey’s, a species that weighs less than three pounds. Chimpanzees can open their mouths twice as wide as we can and hold substantial amounts of food compressed between their lips and large teeth. We also have puny jaw muscles that reach up only to just below our ears. Other primates’ jaw muscles stretch to the top of their heads, where they sometimes even latch onto a central bony ridge. Our stomachs are small, having only a third of the surface area that we’d expect for a primate of our size, and our colons are too short, being only 60% of their expected mass.

Compared to other animals, we have such atrophied digestive tracts that we shouldn’t be able to live. What saves us? All of our food processing techniques, especially cooking, but also chopping, rinsing, boiling, and soaking. We’ve done much of the work of digestion before food even enters our mouths. Our culture teaches us how to do this, both in broad terms like “hold things over fire to cook them” and in specific terms like “this plant needs to be soaked in water for 24 hours to leach out the toxins”. Each culture has its own cooking knowledge related to the local plants and animals; a frequent cause of death among European explorers was cooking things in ways that didn’t unlock any of the nutrients, and so starving while apparently well-fed.

Fire is an especially important food processing innovation, and it is entirely culturally transmitted. Henrich is kind of cruel in his insistence on this. He recommends readers go outside and try to start a fire. He even gives some helpful hints – flint is involved, rubbing two sticks together works for some people, etc. He predicts – and stories I’ve heard from unfortunate campers confirm – that you will not be able to do this, despite an IQ far beyond that of most of our hominid ancestors. In fact, some groups (most notably the aboriginal Tasmanians) seem to have lost the ability to make fire, and never rediscovered it. Fire-making was discovered a small number of times, maybe once, and has been culturally transmitted since then.

But it’s not just about chopping things up or roasting them. Traditional food processing techniques can get arbitrarily complicated. Nixtamalization of corn, necessary to prevent vitamin deficiencies, involves soaking the corn in a solution containing ground-up burnt seashells. The ancient Mexicans discovered this and lived off corn just fine for millennia. When the conquistadors took over, they ignored it and ate corn straight. For four hundred years, Europeans and Americans ate unnixtamalized corn. By official statistics, three million Americans came down with corn-related vitamin deficiencies during this time, and up to a hundred thousand died. It wasn’t until 1937 that Western scientists discovered which vitamins were involved and developed an industrial version of nixtamalization that made corn safe. Early 1900s Americans were very smart and had lots of advantages over ancient Mexicans. But the ancient Mexicans’ culture got this one right in a way it took Westerners centuries to match.

Our hands and limbs also evolved alongside our cultures. We improved dramatically in some areas: after eons of tool use, our hands outclass those of any other ape in terms of finesse. In other cases, we devolved systems that were no longer necessary; we are much weaker than any other ape. Henrich describes a circus act of the 1940s where the ringmaster would challenge strong men in the audience to wrestle a juvenile chimpanzee. The chimpanzee was tied up, dressed in a mask that prevented it from biting, and wearing soft gloves that prevented it from scratching. No human ever lasted more than five seconds. Our common ancestor with other apes grew weaker and weaker as we became more and more reliant on artificial weapons to give us an advantage.

Even our sweat glands evolved alongside culture. Humans are persistence hunters: they cannot run as fast as gazelles, but they can keep running for longer than gazelles (or almost anything else). Why did we evolve into that niche? The secret is our ability to carry water. Every hunter-gatherer culture has invented its own water-carrying techniques, usually some kind of waterskin. This allowed humans to switch to perspiration-based cooling systems, which allowed them to run as long as they want.

III.

But most of our differences from other apes are indeed in the brain. They’re just not where you’d expect.

Tomasello et al tested human toddlers against chimpanzees and orangutans on a series of traditional IQ-type questions. The match-up was surprisingly fair; in areas like memory, logic, and spatial reasoning, the three species did about the same. But in the ability to learn from another person, humans wiped the floor with the other two ape species.

Remember, Henrich thinks culture accumulates through random mutation. Humans don’t have control over how culture gets generated. They have more control over how much of it gets transmitted to the next generation. If 100% gets transmitted, then as more and more mutations accumulate, the culture becomes better and better. If less than 100% gets transmitted, then at some point new culture gained and old culture lost fall into equilibrium, and your society stabilizes at some higher or lower technological level. This means that transmitting culture to the next generation is maybe the core human skill. The human brain is optimized to make this work as well as possible.

Human children are obsessed with learning things. And they don’t learn things randomly. There seem to be “biases in cultural learning”, ie slots in an infant’s mind that need to be filled with knowledge, and the infant preferentially seeks out the knowledge necessary to fill them.

One slot is for language. Human children naturally listen to speech (as early as in the womb). They naturally prune the phonemes they are able to produce and distinguish to the ones in the local language. And they naturally figure out how to speak and understand what people are saying, even though learning a language is hard even for smart adults.

Another slot is for animals. In a world where megafauna has been relegated to zoos, we still teach children their ABCs with “L is for lion” and “B is for bear”, and children still read picture books about Mr. Frog and Mrs. Snake holding tea parties. Henrich suggests that just as the young brain is hard-coded to want to learn language, so it is hard-coded to want to learn the local animal life (maybe little boys’ vehicle obsession is an outgrowth of this – buses and trains are the closest thing to local megafauna that most of them will encounter!)

Another slot is for plants:

To see this system in operation, let’s consider how infants respond to unfamiliar plants. Plants are loaded with prickly thorns, noxious oils, stinging nettles, and dangerous toxins, all genetically evolved to prevent animals like us from messing with them. Given our species’ wide geographic range and diverse use of plants as foods, medicines, and construction materials, we ought to be primed to both learn about plants and avoid their dangers. To explore this idea in the lab, the psychologists Annie Wertz and Karen Wynn first gave infants, who ranged in age from eight to eighteen months, an opportunity to touch novel plants (basil and parsley) and artifacts, including both novel objects and common ones, like wooden spoons and small lamps.

The results were striking. Regardless of age, many infants flatly refused to touch the plants at all. When they did touch them, they waited substantially longer than they did with the artifacts. By contrast, even with the novel objects, infants showed none of this reluctance. This suggests that well before one year of age infants can readily distinguish plants from other things, and are primed for caution with plants. But, how do they get past this conservative predisposition?

The answer is that infants keenly watch what other people do with plants, and are only inclined to touch or eat the plants that other people have touched or eaten. In fact, once they get the ‘go ahead’ via cultural learning, they are suddenly interested in eating plants. To explore this, Annie and Karen exposed infants to models who both picked fruit from plants and also picked fruit-like things from an artifact of similar size and shape to the plant. The models put both the fruit and the fruit-like things in their mouths. Next, the infants were given a choice to go for the fruit (picked from the plant) or the fruit-like things picked from the object. Over 75% of the time the infants went for the fruit, not the fruit-like things, since they’d gotten the ‘go ahead’ via cultural learning.

As a check, the infants were also exposed to models putting the fruit or fruit-like things behind their ears (not in their mouths). In this case, the infants went for the fruit or fruit-like things in equal measure. It seems that plants are most interesting if you can eat them, but only if you have some cultural learning cues that they aren’t toxic.

After Annie first told me about her work while I was visiting Yale in 2013, I went home to test it on my 6-month-old son, Josh. Josh seemed very likely to overturn Annie’s hard empirical work, since he immediately grasped anything you gave him and put it rapidly in his mouth. With Josh comfortable in his mom’s arms, I first offered him a novel plastic cube. He delighted in grabbing it and shoving it directly into his mouth, without any hesitation. Then, I offered him a sprig of arugula. He quickly grabbed it, but then paused, looked with curious uncertainty at it, and then slowly let it fall from his hand while turning to hug his mom.

It’s worth pointing out how rich the psychology is here. Not only do infants have to recognize that plants are different from objects of similar size, shape, and color, but they need to create categories for types of plants, like basil and parsley, and distinguish ‘eating’ from just ‘touching’. It does them little good to code their observation of someone eating basil as ‘plants are good to eat’, since that might cause them to eat poisonous plants as well as basil. But it also does them little good to narrowly code the observation as ‘that particular sprig of basil is good to eat’, since that particular sprig has just been eaten by the person they are watching. This is another content bias in cultural learning.

This ties into the more general phenomenon of figuring out what’s edible. Most Westerners learn insects aren’t edible; some Asians learn that they are. This feels deeper than just someone telling you insects aren’t edible and you believing them. When I was in Thailand, my guide offered me a giant cricket, telling me it was delicious. I believed him when he said it was safe to eat, I even believed him when he said it tasted good to him, but my conditioning won out – I didn’t eat the cricket. There seems to be some process where a child’s brain learns what is and isn’t locally edible, then hard-codes it against future change.

(Or so they say; I’ve never been able to eat shrimp either.)

Another slot is for gender roles. By now we’ve all heard the stories of progressives who try to raise their children without any exposure to gender. Their failure has sometimes been taken as evidence that gender is hard-coded. But it can’t be quite that simple: some modern gender roles, like girls = pink, are far from obvious or universal. Instead, it looks like children have a hard-coded slot that gender roles go into, work hard to figure out what the local gender roles are (even if their parents are trying to confuse them), then latch onto them and don’t let go.

In the Cultural Intelligence Hypothesis, humans live in obligate symbiosis with a culture. A brain without an associated culture is incomplete and not very useful. So the infant brain is adapted to seek out the important aspects of its local culture almost from birth and fill them into the appropriate slots in order to become whole.

IV.

The next part of the book discusses post-childhood learning. This plays an important role in hunter-gatherer tribes:

While hunters reach their peak strength and speed in their twenties, individual hunting success does not peak until around age 30, because success depends more on know-how and refined skills than on physical prowess.

This part of the book made most sense in the context of examples like the Inuit seal-hunting strategy which drove home just how complicated and difficult hunting-gathering was. Think less “Boy Scouts” and more “PhD”; a primitive tribesperson’s life requires mastery of various complicated technologies and skills. And the difference between “mediocre hunter” and “great hunter” can be the difference between high status (and good mating opportunities) and low status, or even between life and death. Hunter-gatherers really want to learn the essentials of their hunter-gatherer lifestyle, and learning it is really hard. Their heuristics are:

Learn from people who are good at things and/or widely-respected. If you haven’t already read about the difference between dominance and prestige hierarchies, check out Kevin Simler’s blog post on the topic. People will fear and obey authority figures like kings and chieftains, but they give a different kind of respect (“prestige”) to people who seem good at things. And since it’s hard to figure out who’s good at things (can a non-musician who wants to start learning music tell the difference between a merely good performer and one of the world’s best?) most people use the heuristic of respecting the people who other people respect. Once you identify someone as respect-worthy, you strongly consider copying them in, well, everything:

To understand prestige as a social phenomenon, it’s crucial to realize that it’s often difficult to figure out what precisely makes someone successful. In modern societies, the success of a star NBA basketball player might arise from his:

(1) intensive practice in the offseason
(2) sneaker preference
(3) sleep schedule
(4) pre-game prayer
(5) special vitamins
(6) taste for carrots

Any or all of these might increase his success. A naïve learner can’t tell all the causal links between an individual’s practices and his success. As a consequence, learners often copy their chosen models broadly across many domains. Of course, learners may place more weight on domains that for one reason or other seem more causally relevant to the model’s success. This copying often includes the model’s personal habits or styles as well as their goals and motivations, since these may be linked to their success. This “if in doubt, copy it” heuristic is one of the reasons why success in one domain converts to influence across a broad range of domains.

The immense range of celebrity endorsements in modern societies shows the power of prestige. For example, NBA star LeBron James, who went directly from high school to the pros, gets paid millions to endorse State Farm Insurance. Though a stunning basketball talent, it’s unclear why Mr. James is qualified to recommend insurance companies. Similarly, Michael Jordan famously wore Hanes underwear, and apparently Tiger Woods drove Buicks. Beyoncé drinks Pepsi (at least in commercials). What’s the connection between musical talent and sugary cola beverages?

Finally, while new medical findings and public educational campaigns only gradually influence women’s approach to preventive medicine, Angelina Jolie’s single op-ed in the New York Times, describing her decision to get a preventive double mastectomy after learning she had the ‘faulty’ BRCA1 gene, flooded clinics from the U.K. to New Zealand with women seeking genetic screenings for breast cancer. Thus prestige, though an unwanted evolutionary side effect, turns out to be worth millions, and represents a powerful and underutilized public health tool.

Of course, this creates the risk of prestige cascades, where some irrelevant factor (Henrich mentions being a reality show star) catapults someone to fame, everyone talks about them, and you end up with Muggeridge’s definition of a celebrity: someone famous for being famous.

Some of this makes more sense if you go back to the evolutionary roots, and imagine watching the best hunter in your tribe to see what his secret is, or being nice to him in the hopes that he’ll take you under his wing and teach you stuff.

(but if all this is true, shouldn’t public awareness campaigns that hire celebrity spokespeople be wild successes? Don’t they just as often fail, regardless of how famous a basketball player they can convince to lecture schoolchildren about how Winners Don’t Do Drugs?)

Learn from people who are like you. If you are a man, it is probably a bad idea to learn fashion by observing women. If you are a servant, it is probably a bad idea to learn the rules of etiquette by observing how the king behaves. People are naturally inclined to learn from people more similar to themselves.

Henrich ties this in to various studies showing that black students learn best from a black teacher, female students from a female teacher, et cetera.

Learn from old people. Humans are almost unique in having menopause; most animals keep reproducing until they die in late middle-age. Why does evolution want humans to stick around without reproducing?

Because old people have already learned the local culture and can teach it to others. Henrich asks us to throw out any personal experience we have of elders; we live in a rapidly-changing world where an old person is probably “behind the times”. But for most of history, change happened glacially slowly, and old people would have spent their entire lives accumulating relevant knowledge. Imagine a Silicon Valley programmer stumped by a particularly tough bug in his code calling up his grandfather, who has seventy years’ experience in the relevant programming language.

Sometimes important events only happen once in a generation. Henrich tells the story of an Australian aboriginal tribe facing a massive drought. Nobody knew what to do except Paralji, the tribe’s oldest man, who had lived through the last massive drought and remembered where his own elders had told him to find the last-resort waterholes.

This same dynamic seems to play out even in other species:

In 1993, a severe drought hit Tanzania, resulting in the death of 20% of the African elephant calves in a population of about 200. This population contained 21 different families, each of which was led by a single matriarch. The 21 elephant families were divided into 3 clans, and each clan shared the same territory during the wet season (so, they knew each other). Researchers studying these elephants have analyzed the survival of the calves and found that families led by older matriarchs suffered fewer deaths of their calves during this drought.

Moreover, two of the three elephant clans unexpectedly left the park during the drought, presumably in search of water, and both had much higher survival rates than the one clan that stayed behind. It happens that these severe droughts only hit about once every four to five decades, and the last one hit about 1960. After that, sadly, elephant poaching in the 1970s killed off many of the elephants who would have been old enough in 1993 to recall the 1960 drought. However, it turns out that exactly one member of each of the two clans that left the park, and survived more effectively, was old enough to recall life in 1960. This suggests that, like Paralji in the Australian desert, they may have remembered what to do during a severe drought and led their groups to the last water refuges. In the clan that stayed behind, the oldest member was born in 1960, and so was too young to have recalled the last major drought.

More generally, aging elephant matriarchs have a big impact on their families, as those led by older matriarchs do better at identifying and avoiding predators (lions and humans), avoiding internal conflicts and identifying the calls of their fellow elephants. For example, in one set of field experiments, researchers played lion roars from both male and female lions, and from either a single lion or a trio of lions. For elephants, male lions are much more dangerous than females, and of course, three lions are always worse than only one lion. All the elephants generally responded with more defensive preparations when they heard three lions vs. one. However, only the older matriarchs keenly recognized the increased dangers of male lions over female lions, and responded to the increased threat with elephant defensive maneuvers.

V.

I was inspired to read Secret by Scholar’s Stage’s review. I hate to be unoriginal, but after reading the whole book, I agree that the three sections Tanner cites – on divination, on manioc, and on shark taboos – are by far the best and most fascinating.

On divination:

When hunting caribou, Naskapi foragers in Labrador, Canada, had to decide where to go. Common sense might lead one to go where one had success before or to where friends or neighbors recently spotted caribou.

However, this situation is like [the Matching Pennies game]. The caribou are mismatchers and the hunters are matchers. That is, hunters want to match the locations of caribou while caribou want to mismatch the hunters, to avoid being shot and eaten. If a hunter shows any bias to return to previous spots, where he or others have seen caribou, then the caribou can benefit (survive better) by avoiding those locations (where they have previously seen humans). Thus, the best hunting strategy requires randomizing.

Can cultural evolution compensate for our cognitive inadequacies? Traditionally, Naskapi hunters decided where to go to hunt using divination and believed that the shoulder bones of caribou could point the way to success. To start the ritual, the shoulder blade was heated over hot coals in a way that caused patterns of cracks and burnt spots to form. This patterning was then read as a kind of map, which was held in a pre-specified orientation. The cracking patterns were (probably) essentially random from the point of view of hunting locations, since the outcomes depended on myriad details about the bone, fire, ambient temperature, and heating process. Thus, these divination rituals may have provided a crude randomizing device that helped hunters avoid their own decision-making biases.

This is not some obscure, isolated practice, and other cases of divination provide more evidence. In Indonesia, the Kantus of Kalimantan use bird augury to select locations for their agricultural plots. Geographer Michael Dove argues that two factors will cause farmers to make plot placements that are too risky. First, Kantu ecological models contain the Gambler’s Fallacy, and lead them to expect floods to be less likely to occur in a specific location after a big flood in that location (which is not true). Second…Kantus pay attention to others’ success and copy the choices of successful households, meaning that if one of their neighbors has a good yield in an area one year, many other people will want to plant there in the next year. To reduce the risks posed by these cognitive and decision-making biases, Kantu rely on a system of bird augury that effectively randomizes their choices for locating garden plots, which helps them avoid catastrophic crop failures. Divination results depend not only on seeing a particular bird species in a particular location, but also on what type of call the bird makes (one type of call may be favorable, and another unfavorable).

The patterning of bird augury supports the view that this is a cultural adaptation. The system seems to have evolved and spread throughout this region since the 17th century when rice cultivation was introduced. This makes sense, since it is rice cultivation that is most positively influenced by randomizing garden locations. It’s possible that, with the introduction of rice, a few farmers began to use bird sightings as an indication of favorable garden sites. On average, over a lifetime, these farmers would do better – be more successful – than farmers who relied on the Gambler’s Fallacy or on copying others’ immediate behavior. Whatever the process, within 400 years, the bird augury system spread throughout the agricultural populations of this Borneo region. Yet, it remains conspicuously missing or underdeveloped among local foraging groups and recent adopters of rice agriculture, as well as among populations in northern Borneo who rely on irrigation. So, bird augury has been systematically spreading in those regions where it’s most adaptive.

Scott Aaronson has written about how easy it is to predict people trying to “be random”:

In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either “f” or “d” and would predict which key they were going to push next. It’s actually very easy to write a program that will make the right prediction about 70% of the time. Most people don’t really know how to type randomly. They’ll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model. Even a very crude one will do well. I couldn’t even beat my own program, knowing exactly how it worked. I challenged people to try this and the program was getting between 70% and 80% prediction rates. Then, we found one student that the program predicted exactly 50% of the time. We asked him what his secret was and he responded that he “just used his free will.”
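
Aaronson doesn’t include the program itself, but something in its spirit takes a dozen lines: guess each key from frequency counts over the last few keys, updating as you go. This is my own reconstruction, not his code, and the sample keystroke string below is invented for illustration:

```python
from collections import defaultdict, Counter

def prediction_rate(keystrokes, order=3):
    # Guess each key from what most often followed the previous
    # `order` keys so far -- a crude frequency model, learned online.
    counts = defaultdict(Counter)
    correct = 0
    for i, key in enumerate(keystrokes):
        context = keystrokes[max(0, i - order):i]
        guess = counts[context].most_common(1)
        if guess and guess[0][0] == key:
            correct += 1
        counts[context][key] += 1  # update the model after guessing
    return correct / len(keystrokes)

# An invented "trying to be random" sequence that over-alternates:
print(prediction_rate("fdfdffdfddfdfdfdffdfddfdffdfdfdfddfdffdfdf"))
```

Against a genuinely uniform random sequence, no such model can beat 50% in the long run; against human over-alternation, even this crude one does noticeably better.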

But being genuinely random is important in pursuing mixed game theoretic strategies. Henrich’s view is that divination solved this problem effectively.

I’m reminded of the Romans using augury to decide when and where to attack. This always struck me as crazy; generals are going to risk the lives of thousands of soldiers because they saw a weird bird earlier that morning? But war is a classic example of when a random strategy can be useful. If you’re deciding whether to attack the enemy’s right vs. left flank, it’s important that the enemy can’t predict your decision and send his best defenders there. If you’re generally predictable – and Scott Aaronson says you are – then outsourcing your decision to weird birds might be the best way to go.
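
The general’s problem has a matching-pennies structure, and the cost of predictability is easy to demonstrate. In this sketch (mine, with invented numbers), a defender who simply guards the attacker’s historically favored flank beats a biased attacker, while a coin-flipping attacker – the “augury” strategy – holds the defender to fifty-fifty no matter what they learn:

```python
import random

def attack_success_rate(left_bias, rounds=10000, seed=1):
    # Attacker picks a flank, attacking left with probability `left_bias`;
    # defender guards whichever flank the attacker has favored so far.
    # An attack succeeds only when it hits the unguarded flank.
    # Strategies and numbers are made up for illustration.
    rng = random.Random(seed)
    wins = left_attacks = 0
    for i in range(rounds):
        attack_left = rng.random() < left_bias
        defend_left = left_attacks * 2 > i  # guard the more common flank
        if attack_left != defend_left:
            wins += 1
        left_attacks += attack_left
    return wins / rounds

print(attack_success_rate(0.7))  # predictable attacker: succeeds ~30% of the time
print(attack_success_rate(0.5))  # randomizing attacker: succeeds ~50% of the time
```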

And then there’s manioc. This is a tuber native to the Americas. It contains cyanide, and if you eat too much of it, you get cyanide poisoning. From Henrich:

In the Americas, where manioc was first domesticated, societies who have relied on bitter varieties for thousands of years show no evidence of chronic cyanide poisoning. In the Colombian Amazon, for example, indigenous Tukanoans use a multistep, multiday processing technique that involves scraping, grating, and finally washing the roots in order to separate the fiber, starch, and liquid. Once separated, the liquid is boiled into a beverage, but the fiber and starch must then sit for two more days, when they can then be baked and eaten. Figure 7.1 shows the percentage of cyanogenic content in the liquid, fiber, and starch remaining through each major step in this processing.

Such processing techniques are crucial for living in many parts of Amazonia, where other crops are difficult to cultivate and often unproductive. However, despite their utility, one person would have a difficult time figuring out the detoxification technique. Consider the situation from the point of view of the children and adolescents who are learning the techniques. They would have rarely, if ever, seen anyone get cyanide poisoning, because the techniques work. And even if the processing was ineffective, such that cases of goiter (swollen necks) or neurological problems were common, it would still be hard to recognize the link between these chronic health issues and eating manioc. Most people would have eaten manioc for years with no apparent effects. Low cyanogenic varieties are typically boiled, but boiling alone is insufficient to prevent the chronic conditions for bitter varieties. Boiling does, however, remove or reduce the bitter taste and prevent the acute symptoms (e.g., diarrhea, stomach troubles, and vomiting).

So, if one did the common-sense thing and just boiled the high-cyanogenic manioc, everything would seem fine. Since the multistep task of processing manioc is long, arduous, and boring, sticking with it is certainly non-intuitive. Tukanoan women spend about a quarter of their day detoxifying manioc, so this is a costly technique in the short term. Now consider what might result if a self-reliant Tukanoan mother decided to drop any seemingly unnecessary steps from the processing of her bitter manioc. She might critically examine the procedure handed down to her from earlier generations and conclude that the goal of the procedure is to remove the bitter taste. She might then experiment with alternative procedures by dropping some of the more labor-intensive or time-consuming steps. She’d find that with a shorter and much less labor-intensive process, she could remove the bitter taste. Adopting this easier protocol, she would have more time for other activities, like caring for her children. Of course, years or decades later her family would begin to develop the symptoms of chronic cyanide poisoning.

Thus, the unwillingness of this mother to take on faith the practices handed down to her from earlier generations would result in sickness and early death for members of her family. Individual learning does not pay here, and intuitions are misleading. The problem is that the steps in this procedure are causally opaque—an individual cannot readily infer their functions, interrelationships, or importance. The causal opacity of many cultural adaptations had a big impact on our psychology.

Wait. Maybe I’m wrong about manioc processing. Perhaps it’s actually rather easy to individually figure out the detoxification steps for manioc? Fortunately, history has provided a test case. At the beginning of the seventeenth century, the Portuguese transported manioc from South America to West Africa for the first time. They did not, however, transport the age-old indigenous processing protocols or the underlying commitment to using those techniques. Because it is easy to plant and provides high yields in infertile or drought-prone areas, manioc spread rapidly across Africa and became a staple food for many populations. The processing techniques, however, were not readily or consistently regenerated. Even after hundreds of years, chronic cyanide poisoning remains a serious health problem in Africa. Detailed studies of local preparation techniques show that high levels of cyanide often remain and that many individuals carry low levels of cyanide in their blood or urine, which haven’t yet manifested in symptoms. In some places, there’s no processing at all, or sometimes the processing actually increases the cyanogenic content. On the positive side, some African groups have in fact culturally evolved effective processing techniques, but these techniques are spreading only slowly.

Rationalists always wonder: how come people aren’t more rational? How come you can prove a thousand times, using Facts and Logic, that something is stupid, and yet people will still keep doing it?

Henrich hints at an answer: for basically all of history, using reason would get you killed.

A reasonable person would have figured out there was no way for oracle-bones to accurately predict the future. They would have abandoned divination, failed at hunting, and maybe died of starvation.

A reasonable person would have asked why everyone was wasting so much time preparing manioc. When told “Because that’s how we’ve always done it”, they would have been unsatisfied with that answer. They would have done some experiments, and found that a simpler process of boiling it worked just as well. They would have saved lots of time, maybe converted all their friends to the new and easier method. Twenty years later, they would have gotten sick and died, in a way so causally distant from their decision to change manioc processing methods that nobody would ever have been able to link the two together.

Henrich discusses pregnancy taboos in Fiji; pregnant women are banned from eating sharks. Sure enough, these sharks contain chemicals that can cause birth defects. The women didn’t really know why they weren’t eating the sharks, but when anthropologists demanded a reason, they eventually decided it was because their babies would be born with shark skin rather than human skin. As explanations go, this leaves a lot to be desired. How come you can still eat other fish? Aren’t you worried your kids will have scales? Doesn’t the slightest familiarity with biology prove this mechanism is garbage? But if some smart independent-minded iconoclastic Fijian girl figured any of this out, she would break the taboo and her child would have birth defects.

In giving humans reason at all, evolution took a huge risk. Surely it must have wished there was some other way, some path that made us big-brained enough to understand tradition, but not big-brained enough to question it. Maybe it searched for a mind design like that and couldn’t find one. So it was left with this ticking time-bomb, this ape that was constantly going to be able to convince itself of hare-brained and probably-fatal ideas.

Here, too, culture came to the rescue. One of the most important parts of any culture – more important than the techniques for hunting seals, more important than the techniques for processing tubers – is techniques for making sure nobody ever questions tradition. Like the belief that anyone who doesn’t conform is probably a witch who should be cast out lest they bring destruction upon everybody. Or the belief in a God who has commanded certain specific weird dietary restrictions, and will torture you forever if you disagree. Or the fairy tales where the prince asks a wizard for help, and the wizard says “You may have everything you wish forever, but you must never nod your head at a badger”, and then one day the prince nods his head at a badger, and his whole empire collapses into dust, and the moral of the story is that you should always obey weird advice you don’t understand.

There’s a monster at the end of this book. Humans evolved to transmit culture with high fidelity. And one of the biggest threats to transmitting culture with high fidelity was Reason. Our ancestors lived in Epistemic Hell, where they had to constantly rely on causally opaque processes with justifications that couldn’t possibly be true, and if they ever questioned them then they might die. Historically, Reason has been the villain of the human narrative, a corrosive force that tempts people away from adaptive behavior towards choices that “sounded good at the time”.

Why are people so bad at reasoning? For the same reason they’re so bad at letting poisonous spiders walk all over their face without freaking out. Both “skills” are really bad ideas, most of the people who tried them died in the process, so evolution removed those genes from the population, and successful cultures stigmatized them enough to give people an internalized fear of even trying.

VI.

This book belongs alongside Seeing Like A State and the works of G.K. Chesterton as attempts to justify tradition, and to argue for organically-evolved institutions over top-down planning. What unique contribution does it make to this canon?

First, a lot more specifically anthropological / paleoanthropological rigor than the other two.

Second, a much crisper focus: Chesterton had only the fuzziest idea that he was writing about cultural evolution, and Scott was only a little clearer. I think Henrich is the only one of the three to use the term, and once you hear it, it’s obviously the right framing.

Third, a sense of how traditions contain the meta-tradition of defending themselves against Reason, and a sense for why this is necessary.

And fourth, maybe we’re not at the point where we really want unique contributions yet. Maybe we’re still at the point where we have to have this hammered in by more and more examples. The temptation is always to say “Ah, yes, a few simple things like taboos against eating poisonous plants may be relics of cultural evolution, but obviously by now we’re at the point where we know which traditions are important vs. random looniness, and we can rationally stick to the important ones while throwing out the garbage.” And then somebody points out to you that actually divination using oracle bones was one of the important traditions, and if you thought you knew better than that and tried to throw it out, your civilization would falter.

Maybe we just need to keep reading more similarly-themed books until this point really sinks in, and we get properly worried.

[REPOST] Epistemic Learned Helplessness

[This is a slightly edited repost of an essay from my old LiveJournal]

A friend recently complained about how many people lack the basic skill of believing arguments. That is, if you have a valid argument for something, then you should accept the conclusion. Even if the conclusion is unpopular, or inconvenient, or you don’t like it. He envisioned an art of rationality that would make people believe something after it had been proven to them.

And I nodded my head, because it sounded reasonable enough, and it wasn’t until a few hours later that I thought about it again and went “Wait, no, that would be a terrible idea.”

I don’t think I’m overselling myself too much to expect that I could argue circles around the average uneducated person. Like I mean that on most topics, I could demolish their position and make them look like an idiot. Reduce them to some form of “Look, everything you say fits together and I can’t explain why you’re wrong, I just know you are!” Or, more plausibly, “Shut up I don’t want to talk about this!”

And there are people who can argue circles around me. Maybe not on every topic, but on topics where they are experts and have spent their whole lives honing their arguments. When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn’t so much the lucidity of the consensus view as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented seemingly insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah’s Flood couldn’t have been a cultural memory both of the fall of Atlantis and of a change in the Earth’s orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in archaeology, rather than those of the universally reviled crackpots who write books about Venus being a comet.

You could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try. If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don’t want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right.

(This is the correct Bayesian action: if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way. I should ignore it and stick with my prior.)
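(To spell out the arithmetic behind that parenthetical: in odds form, Bayes’ rule says posterior odds = prior odds × likelihood ratio, where the likelihood ratio here is P(sounds convincing | true) / P(sounds convincing | false). A minimal sketch in Python, with made-up numbers purely for illustration:

    # Bayes' rule in odds form; all numbers below are invented for illustration.
    def posterior_odds(prior_odds, p_convincing_if_true, p_convincing_if_false):
        # posterior odds = prior odds * likelihood ratio
        return prior_odds * (p_convincing_if_true / p_convincing_if_false)

    # If false theories sound convincing exactly as often as true ones,
    # the likelihood ratio is 1 and a convincing argument moves me nowhere:
    print(posterior_odds(0.01, 0.9, 0.9))  # 0.01, unchanged from the prior
    # Convincingness only counts as evidence when it tracks truth:
    print(posterior_odds(0.01, 0.9, 0.3))  # ~0.03, updated upward

“Argument convincingness provides no evidence” is exactly the likelihood-ratio-of-1 case.)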

I consider myself lucky in that my epistemic learned helplessness is circumscribed; there are still cases where I’ll trust the evidence of my own reason. In fact, I trust it in most cases other than infamously deceptive arguments in fields I know little about. But I think the average uneducated person doesn’t and shouldn’t. Anyone anywhere – politicians, scammy businessmen, smooth-talking romantic partners – would be able to argue them into anything. And so they take the obvious and correct defensive maneuver – they will never let anyone convince them of any belief that sounds “weird”.

(and remember that, if you grow up in the right circles, beliefs along the lines of “astrology doesn’t work” sound “weird”.)

This is starting to resemble ideas like compartmentalization and taking ideas seriously. The only difference between their presentation and mine is that I’m saying that for 99% of people, 99% of the time, taking ideas seriously is the wrong strategy. Or, at the very least, it should be the last skill you learn, after you’ve learned every other skill that allows you to know which ideas are or are not correct.

The people I know who are best at taking ideas seriously are those who are smartest and most rational. I think people are working off a model where these co-occur because you need to be very clever to resist your natural and detrimental tendency not to take ideas seriously. But I think they might instead co-occur because you have to be really smart in order for taking ideas seriously not to be immediately disastrous. You have to be really smart not to have been talked into enough terrible arguments to develop epistemic learned helplessness.

Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.

A friend tells me of a guy who once accepted fundamentalist religion because of Pascal’s Wager. I will provisionally admit that this person “takes ideas seriously”. Everyone else gets partial credit, at best.

Which isn’t to say that some people don’t do better than others. Terrorists seem pretty good in this respect. People used to talk about how terrorists must be very poor and uneducated to fall for militant Islam, and then someone did a study and found that they were disproportionately well-off, college-educated people (many were engineers). I’ve heard a few good arguments in this direction before, things like how engineering trains you to have a very black-and-white right-or-wrong view of the world based on a few simple formulae, and this meshes with fundamentalism better than it meshes with subtle liberal religious messages.

But to these I’d add that a sufficiently smart engineer has never been burned by arguments above his skill level before, has never had any reason to develop epistemic learned helplessness. If Osama comes up to him with a really good argument for terrorism, he thinks “Oh, there’s a good argument for terrorism. I guess I should become a terrorist,” as opposed to “Arguments? You can prove anything with arguments. I’ll just stay right here and not blow myself up.”

Responsible doctors are at the other end of the spectrum from terrorists here. I once heard someone rail against how doctors totally ignored all the latest and most exciting medical studies. The same person, practically in the same breath, then railed against how 50% to 90% of medical studies are wrong. These two observations are not unrelated. Not only are there so many terrible studies, but pseudomedicine (not the stupid homeopathy type, but the type that links everything to some obscure chemical on an out-of-the-way metabolic pathway) has, for me, proven much like pseudohistory – unless I am an expert in that particular sub-subfield of medicine, it can sound very convincing even when it’s very wrong.

The medical establishment offers a shiny tempting solution. First, a total unwillingness to trust anything, no matter how plausible it sounds, until it’s gone through an endless cycle of studies and meta-analyses. Second, a bunch of Institutes and Collaborations dedicated to filtering through all these studies and analyses and telling you what lessons you should draw from them.

I’m glad that some people never develop epistemic learned helplessness, or develop only a limited amount of it, or only in certain domains. It seems to me that although these people are more likely to become terrorists or Velikovskians or homeopaths, they’re also the only people who can figure out if something basic and unquestionable is wrong, and make this possibility well-known enough that normal people start becoming willing to consider it.

But I’m also glad epistemic learned helplessness exists. It seems like a pretty useful social safety valve most of the time.

OT129: Opaque Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. Thanks to everyone who made it to the San Francisco meetup today, despite my terrible directions and wild underestimation of the size of the park in question.

2. The Register of Bans now has a section for people who are banned from meetups. In keeping with the name of this thread, I won’t explain. If a banned person shows up, I’ll assume ignorance and ask them to leave. If they refuse, I may have to publicly announce the considerations that made me add their name to the list, in order to keep other meetup-goers safe. Please don’t make things reach that point. Thanks to the people who reminded me I should do this.


Bay Area SSC Meetup 6/2

Join me at 5 PM on Sunday, June 2, for the traditional once-every-three-months big SSC Bay Area meetup. For a change of pace, it’ll be in San Francisco this time around.

Meet on the grass near the tennis courts at Mission Dolores Park (near the 16th Street Mission BART station).

EDIT: Blue tarp near restrooms near Hancock Street

I understand this conflicts with some other rationalist community events, but there are events every weekend for the next month or so, and I don’t want to not have meetups. We’ll probably be at the park until at least 7 or so, so feel free to come late.

Lurkers/newbies/people-who-aren’t-sure-if-they’re-welcome-or-not are welcome!


Postscript To APA Photo-Essay

I was surprised how many people responded to my APA photo-essay with comments like “Seems psychiatry as a field is broken beyond repair” or “This proves you should never trust psychiatrists”.

The mood I was going for was more “let’s share a laugh at the excesses of the profession” than “everything must be burned down”. Looks like I missed it.

I was disappointed to see a lot of the most hostile comments coming from people in tech. It would be easy to write an equally damning report on the tech industry. Just cobble together a few paragraphs about Juicero and Theranos, make fun of whatever weird lifestyle change @jack is supporting at the moment, and something something Zuckerberg something Cambridge Analytica something. You can even throw in something about James Damore (if you’re writing for the left) or about the overreaction to James Damore (if you’re writing for the right). And there you go! Tech is a malicious cancerous industry full of awful people and everyone should hate it. We’ve all read this exact thinkpiece a thousand times.

I’ve tried to push back against this line of thinking. A lot of the most visible and famous things in tech are bad, because scum tends to rise to the top. But there’s also some extraordinary innovation going on, and some extraordinarily good people involved. “@jack invents new health fad of rolling around naked on glaciers” is a much juicier story than “we can now fit twice as many billions of transistors on a chip as we could last year”, but tech journalism that only reports on the former is missing an important part of the story.

I feel the same way about psychiatry. There’s a lot of cringeworthy stuff going on at conferences, but conferences are designed to be about signaling and we shouldn’t expect otherwise. There are also a lot of great people working really hard to help fight mental illness and support the mentally ill. “Most Americans remain alive and basically functional despite record-breaking amounts of depression and anxiety” isn’t sexy, any more than “Internet continues to connect billions of people around the world at the speed of light” is. But it’s a much bigger part of the story than the part where silly people do silly things at conferences.

Michael Crichton invented the term Gell-Mann Amnesia as a reference to Nobel-winning physicist Murray Gell-Mann, who remarked that even though the newspapers were always wrong about his area of expertise (physics), he always found himself trusting them about everything else. If you don’t trust hatchet jobs against tech, try to be a little less credulous of hatchet jobs against psychiatry, even when I’m the one doing the hatcheting.

(also, RIP Murray Gell-Mann, who passed away last week – at least if you believe obituaries)