
Highlights From The Comments On Cultural Evolution

Peter Gerdes says:

As the example of the Nicaraguan deaf children left on their own to develop their own language demonstrates (as do other examples), we do create languages very, very quickly in a social environment.

Creating conlangs is hard not because creating language is fundamentally hard but because we are bad at top-down modelling of processes that are the result of a bunch of tiny modifications over time. The distinctive features of language require both that it be used frequently for practical purposes (this makes sure that the language has efficient shortcuts, jettisons clunky overengineered rules, etc.) and that it be buffeted by the whims of many individuals with varying interests and focuses.

This is a good point, though it kind of equivocates on the meaning of “hard” (if we can’t consciously do something, does that make it “hard” even if in some situations it would happen naturally?).

I don’t know how much of this to credit to a “language instinct” that puts all the difficulty of language “under the hood”, vs. inventing language not really being that hard once you have general-purpose reasoning. I’m sure real linguists have an answer to this. See also Tracy Canfield’s comments (1, 2) on the specifics of sign languages and creoles.


The Secret Of Our Success described how human culture, especially tool-making ability, allowed us to lose some adaptations we no longer needed. One of those was strength; we are much weaker than the other great apes. Hackworth provides an intuitive demonstration of this: hairless chimpanzees are buff:


Reasoner defines “Chesterton’s meta-fence” as:

in our current system (democratic market economies with large governments) the common practice of taking down Chesterton fences is a process which seems well established and has a decent track record, and should not be unduly interfered with (unless you fully understand it)

And citizencokane adds:

Indeed: if there is a takeaway from Scott’s post, it is that one way to ensure survival is high-fidelity adherence to traditions + ensuring that the inherited ancestral environment/context is more or less maintained. Adhering to ancient traditions when the context is rapidly changing is a recipe for disaster. No point in mastering seal-hunting if there ain’t no more seals. No point in mastering the manners of being a courtier if there ain’t no more royal court. Etc.

And the problem is that, in the modern world, we can’t simply all mutually agree to stop changing our context so that our traditions will continue to function as before because it is no longer under our control. I’m not just talking about climate change; I’m talking even moreso about the power of capital, an incentive structure that escapes all conscious human manipulation or control, and which more and more takes the appearance of an exogenous force, remaking the world “in its own image,” turning “all that is solid into air,” and compelling all societies, upon pain of extinction, to keep up with its rapid changes in context. This is why every true traditionalist must be, at heart, an anti-capitalist…if they truly understand capitalism.

Which societies had more success in the 18th and 19th centuries in the context of this new force, capital? Those who held rigidly to traditions (like Qing China), or those who tolerated or even encouraged experimentation? Enlightenment ideas would not have been nearly so persuasive if they hadn’t had the prestige of giving countries like the Netherlands, England, France, and America an edge. Even countries that were not on the leading edge of the Enlightenment, and who only grudgingly and half-heartedly compromised with it, like Germany, Austria, and (to some extent) Japan, did better than those who held onto traditions even longer, like the Ottoman Empire, Russia, or China.

In particular, you can’t fault Russia or China for being even more experimental in the 20th century (Marxism, communism, etc.) if you realize that this was an understandable reaction to being visibly not experimental enough in the 19th century.

And Furslid continues:

I think an important piece of this, which I hope Scott will get to in later points, is to be less confident in our new culture. It makes sense to doubt whether our old culture applies. However, it is also incredibly unlikely that we have an optimized new culture yet.

We should be less confident that our new culture is right for new situations than that the old culture was right for old situations. This means we should be more accepting of people tweaking the new culture. We should also enforce it less strongly.


Quixote describes a transitional step in the evolution of manioc/cassava cultivation:

Also, based on a recent conversation (unrelated to this post actually) that I had with one of my coworkers from central east Africa, I’m not sure that he would agree with the book’s characterization of African adaptation to Cassava. He would probably point out that

– Everyone in [African country] knows cassava can make you sick, that’s why you don’t plant it anywhere that children or the goats will eat it.

– In general you want to plant cassava in swampy areas that you were going to fence off anyway.

– You mostly let the cassava do its thing and only harvest it to use as your main food during times of famine/drought when your better crops aren’t producing.

It seems like those cultural adaptations probably cover most of the problem with cassava.


ahasvers:

There is a very nice experimental demonstration in this article (just saw the work presented at a workshop), where they get people to come as successive “generations” and improve on a simple physical system.

Causal understanding is not necessary for the improvement of culturally evolving technology

The design does improve over generations, no thanks to anyone’s intelligence. They get both physics/engineering students and other students, with no difference at all. In one variant, they allow people to leave a small message to the next generation to transmit their theory on what works/doesn’t, and that doesn’t help, or makes things worse (by limiting the dimensions along which next generations will explore).


A few people including snmlp question the claim that aboriginal Tasmanians lost fire. See this and this paper for the status of the archaeological evidence.


Decius Brutus:

Five hundred years hence, is someone going to analyze the college education system and point out that the wasted effort and time that we all can see produced some benefit akin to preventing chronic cyanide poisoning? Are they going to be able to do the same with other complex wasteful rituals, like primary elections and medical billing? Or do humans create lots of random wasteful rituals and occasionally hit upon one that removes poison from food, and then every group that doesn’t follow the one that removes poison from food dies while the harmless ones that just paint doors with blood continue?

I actually seriously worry about the college one. Like, say what you want about our college system, but it has some surprising advantages: somehow billions of dollars go to basic scientific research (not all of them from the government), it’s relatively hard for even the most powerful special interests to completely hijack a scientific field (eg there’s no easy way for Exxon to take over climatology), and some scientists can consistently resist social pressure (for example, all the scientists who keep showing things like that genetics matters, or IQ tests work, or stereotype threat research doesn’t replicate). While obviously there’s still a lot of social desirability bias, it’s amazing that researchers can stand up to it at all. I don’t know how much of this depends on the academic status ladder being so perverse and illegible that nobody can really hack it, or whether that would survive apparently-reasonable college reform.

Likewise, a lot of doctors just have no incentives. They don’t have an incentive to overtreat you, or to undertreat you, or to see you more often than you need to be seen, or to see you less often than you need to be seen (this isn’t to deny that some doctors in some parts of the health system do have these pressures). I actually don’t know whether my clinic would make more or less money if I fudged things to see my patients more often, and nobody has bothered to tell me. This is really impressive. Exposing the health system to market pressures would solve a lot of inefficiencies, but I don’t know if it would make medical care too vulnerable to doctors’ self-interest and destroy some necessary doctor-patient trust.


Lasagna:

I’ve got two young kids of my own. One puts everything in his mouth, the other less so, and neither evinced anything resembling what I’m reading in Section III. We spent this past Sunday trying to teach my youngest not to eat the lawn, and my oldest liked to shove ant hills and ants into his mouth around that age. Yeah, sure, anecdotal, but a “natural aversion among infants to eating plants until they see mommy eating them, and after that they can and do identify that particular plant themselves and will eat it” seems like a remarkable ability that SOMEONE would have noticed before this study. I’ve never heard anyone mention it.

I don’t think I’m weakmanning the book, it’s just that this is the only aspect discussed in Scott’s review that I have direct experience with, and my direct experience conflicts with the author’s conclusions. It’s a Gell-Mann amnesia thing, and makes me suspicious of the otherwise exciting ideas here. Like: does anyone here have any direct knowledge of manioc harvesting and processing, or of Tukanoan culture? How accurate is the book?

I checked with the mother of the local two-year-old; she says he also put random plants in his mouth from a young age. Suspicious!


John Schilling:

I think this one greatly overstates its thesis. Inventiveness without the ability to transmit inventions to future generations is of small value; you can’t invent the full set of survival techniques necessary for e.g. the high arctic in a single generation of extreme cleverness. At best you can make yourself a slightly more effective ape. But cultural transmission of inventions without the ability to invent is of exactly zero value. It takes both. And since being a slightly more effective ape is still better than being an ordinary ape, culture is slightly less than 50% of the secret of our success.

That said, the useful insight is that the knowledge we need to thrive is vastly greater than the knowledge we can reasonably deduce from first principles and observation. And what is really critical, this holds true even if you are in a library. You need to accept “X is true because a trusted authority told me so; now I need to go on and learn Y and Z and I don’t have time to understand why X is true”. You need to accept that this is just as true of the authority who told you X, and so he may not be able to tell you why X is true even if you do decide to ask him in your spare time. There may be an authority who could track that down, but it’s probably more trouble than it’s worth to track him down. Mostly, you’re going to use the traditions of your culture as a guide and just believe X because a trusted authority told you to, and that’s the right thing to do.

“Rationality” doesn’t work as an antonym to “Tradition”, because rationality needs tradition as an input. Not bothering to measure Avogadro’s number because it’s right there in your CRC handbook (or Wikipedia) is every bit as much a tradition as not boning your sister because the Tribal Elders say so; we just don’t call it that when it’s a tradition we like. Proper rationality requires being cold-bloodedly rational about evaluating the high-but-not-perfect reliability of tradition as a source of fact.

Unfortunately, our culture teaches ‘Science!’ in a way that suggests you really should understand how everything is derived from first principles and firsthand observation or experiment, even if at the object level you’re just going to look up Avogadro’s number in Wikipedia and memorize it for the test. I think this may be a relic of the 18th and early 19th century, when some really smart polymathic scientists could almost imagine that they really could wrap their minds around all relevant knowledge from first principles on down.


nkurz isn’t buying it:

I’m not sure where Scott is going with this series, but I seem to have a different reaction to the excerpts from Henrich than most (but not all) of the commenters before me: rather than coming across as persuasive, I wouldn’t trust him as far as I could throw him.

For simplicity let’s concentrate on the seal hunting description. I don’t know enough about Inuit techniques to critique the details, but instead of aiming for a fair description, it’s clear that Henrich’s goal is to make the process sound as difficult to achieve as possible. But this is just sleight of hand: the goal of the stranded explorer isn’t to reproduce the exact technique of the Inuit, but to kill seals and eat them. The explorer isn’t going to use caribou antler probes or polar bear harpoon tips — they are going to use some modern wood or metal that they stripped from their ice-bound ship.

Then we hit “Now you have a seal, but you have to cook it.” What? The Inuit didn’t cook their seal meat using a soapstone lamp fueled with whale oil, they ate it raw! At this point, Henrich is not just being misleading, he’s making it up as he goes along. I start to wonder whether the parts about the antler probe and bone harpoon head are equally fictional. I might be wrong, but beyond this my instinct is to doubt everything that Henrich argues for, even if (especially if) it’s not an area where I have familiarity.

Going back to the previous post on “Epistemic Learned Helplessness”, I’m surprised that many people seem to have the instinct to continue to trust the parts of a story that they cannot confirm even after they discover that some parts are false. I’m at the opposite extreme. As soon as I can confirm a flaw, I have trouble trusting anything else the author has to say. I don’t care about the baby, this bathwater has to go! And if the “flaw” is that the author is being intentionally misleading, I’m unlikely to ever again trust them (or anyone else who recommends them).

Probably I accidentally misrepresented a lot in the parts that were my own summary. But this is from a direct quote, and so not my fault.

roystgnr adds:

Wikipedia seems to suggest that they ate freshly killed meat raw, but cooked some of the meat brought back to camp using a Kudlik, a soapstone lamp fueled with seal oil or whale blubber. Is that not correct? That would still flatly contradict “but you have to cook it”, but it’s close enough that the mistake doesn’t reach “making it up as he goes along” levels of falsehood. You’re correct that even the true bits seem to be used for argument in a misleading fashion, though.

This seems within the level of simplifying-to-make-a-point that I have sometimes been guilty of myself, so I’ll let it pass.


Bram Cohen:

A funny point about the random number generators: Rituals which require more effort are more likely to produce truly random results, because a ritual which required less effort would be more tempting to re-do if you didn’t like the result.

Followed by David Friedman:

This reminds me of my father’s argument that cheap computers resulted in less reliable statistical results. If running one multiple regression takes hundreds of man hours and thousands of dollars, running a hundred of them and picking the one that, by chance, gives you the significant result you are looking for, isn’t a practical option.

Yikes.
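Bram Cohen’s ritual point and David Friedman’s regression point are two faces of the same selection effect: when re-running a cheap procedure is easy, the results you keep are the ones you liked, not an unbiased sample. Here is a minimal sketch of Friedman’s version (my own illustration in Python, with arbitrary made-up sample sizes): regress pure noise a hundred times and keep whichever run happens to look “significant”.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_runs, n_obs = 100, 50   # arbitrary numbers, for illustration only
best_p = 1.0

for _ in range(n_runs):
    # Predictor and outcome are independent noise, so the true effect is zero.
    x = rng.normal(size=n_obs)
    y = rng.normal(size=n_obs)
    result = stats.linregress(x, y)
    best_p = min(best_p, result.pvalue)

# With 100 tries at a 5% threshold, at least one "significant" result is
# nearly guaranteed: 1 - 0.95**100 is roughly 0.994.
print(f"smallest p-value across {n_runs} regressions of noise on noise: {best_p:.4f}")
```

When each regression cost hundreds of man-hours, this kind of cherry-picking simply wasn’t an option, which is exactly the sense in which the more expensive ritual is the more trustworthy randomizer.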


Anatoly:

The quote on quadruped running seems inaccurate in several important ways compared to the primary references Henrich cites, which are short and very interesting in their own right: Bramble and Carrier (1983) and Carrier (1984). In particular, humans still typically lock their breathing rate with their strides; it’s just that animals nearly always lock them 1:1, while humans are able to switch to other ratios, like 1:3, 2:3, 1:4, etc., and this is thought to allow us to maintain efficiency at varying speeds. Henrich also doesn’t mention that humans are at the outset metabolically disadvantaged for running, in that we spend twice as much energy (!) per unit mass to run the same distance as quadrupeds. That we are still able to run down prey by endurance running is called the “energetic paradox” by Carrier. Liebenberg (2006) provides a vivid description of what endurance hunting looks like in the Kalahari.

And b_jonas:

I doubt the claim that humans don’t have quantized speeds of running. I for one definitely have two different gaits of walking, and find walking at an intermediate speed between the two more difficult than either of them. This is most noticeable when I want to chat with someone while walking, because then I have to walk at such an intermediate speed to not get too far from them. The effect is somewhat less pronounced now that I’ve gained weight, but it’s still present. I’m not good at running, so I can’t say anything certain about it, but I suspect that at least some humans have different running gaits, even if the cause is not the specific one that Joseph Henrich mentions about quadrupeds.

I’ve never noticed this. And I used to use treadmills relatively regularly, and play with the speed dial, so I feel like I would have noticed if this had been true. Anyone have thoughts on this?


Squirrel Of Doom:

I read somewhere that the languages with the most distinctive sounds are in Africa, among them the ones with the !click! sounds. Since humanity originates from Africa, these are also the oldest language families.

As you move away from Africa, you can trace how languages lose sound after sound, until you get to Hawaiian, which is the language with the fewest sounds, almost all vowels.

I’ve half-heartedly tried to find any mention of this perhaps-overly-cute theory again, but failed. The “sonority” theory here reminded me. Anyone know anything, one way or the other?

Secret Of Our Success actually mentions this theory; you can find the details within.

Some people reasonably bring up that no language can be older than any other, for the same reason it doesn’t make sense to call any (currently existing) evolved animal older than any other – every animal lineage from 100 million BC has experienced 100 million years of evolution.

I think I’ve heard some people try to get around this by focusing on schisms. Everyone starts out in Africa, but a small group of people moves off to Sinai or somewhere like that. Because most of the people are back home in Africa, they can maintain their linguistic complexity; because the Sinaites only have a single small band talking to each other, they lose some linguistic complexity. This seems kind of forced, and some people in the comments say linguistic complexity actually works in the opposite direction from this, but I too find the richness of Bushman languages pretty suggestive.
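For what it’s worth, the schism story can at least be made mechanical with a toy serial-founder-effect model (my own sketch, not something from the book or the comments): a parent population with a large phoneme inventory keeps budding off small splinter bands, and each splinter retains only the sounds its few founders happen to use, so inventories can only shrink with each hop away from the origin.

```python
import random

random.seed(1)

def splinter(inventory, band_size=8, tokens_per_founder=30):
    """A small founding band keeps only the phonemes its members happen to use."""
    retained = set()
    for _ in range(band_size):
        # Each founder produces a random sample of speech drawn from the parent inventory.
        retained |= set(random.choices(sorted(inventory), k=tokens_per_founder))
    return retained

inventory = set(range(100))   # parent language: 100 phonemes (an arbitrary number)
for hop in range(1, 8):       # successive migrations away from the origin
    inventory = splinter(inventory)
    print(f"after hop {hop}: {len(inventory)} phonemes remain")
```

Whether real sound change works anything like this is exactly what the commenters dispute; the toy only shows that small founding bands mechanically lose variants they never happen to sample, while the large stay-at-home population keeps them.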


What about rules that really do seem pointless? Catherio writes:

My basic understanding is that if some of the rules (like “don’t wear hats in church”) are totally inconsequential to break, these provide more opportunities to signal that your community punishes rule violation, without an increase in actually-costly rule violations.

I’d heard this before, but she manages (impressively) to link it to AI: see Legible Normativity for AI Alignment: The Value of Silly Rules.


liskantope:

With regard to accepting other people’s illegible preferences…I wish I could show this essay to, like, two-thirds of all the people I’ve ever lived with. Seriously, a common core of my issues with roommates has been that they refuse to accept or understand my illegible preferences (I often refer to these as “irrational aversions”) while refusing to admit that their own illegible preferences are just as difficult to ground rationally. Just establishing an understanding that illegible preferences should be respected by default or at least treated on an even playing field, and that having immediate objective logical explanations for preferences should not be a requirement for validation, would have immediately improved my relationships with people I’ve lived with 100%.

I’ve had the same experience – a good test for my compatibility with someone will be whether they’ll accept “for illegible reasons” as an excuse. Despite the stereotypes, rationalists have been a hundred times better at this than any other group I’ve been in close contact with.


Nav on Lacan and Zizek (is everything cursed to end in Zizek eventually, sort of like with entropy?):

Time to beat my dead horse; the topics you’re discussing here have a lot of deep parallels in the psychoanalytic literature. First, Scott writes:

}} “If you force people to legibly interpret everything they do, or else stop doing it under threat of being called lazy or evil, you make their life harder”

This idea is treated by Lacan as the central ethical problem of psychoanalysis: under what circumstances is it acceptable to cast conscious light upon a person’s unconsciously-motivated behavior? The answer is usually “only if they seek it out, and only then if it would help them reduce their level of suffering”.

Turn the psychoanalytic, phenomenology-oriented frame onto social issues, as you’ve partly done, and suddenly we’re in Zizek-land (his main thrust is connecting social critique with psychoanalytic concepts). The problem is that (a) Zizek is jargon-heavy and difficult to understand, and (b) I’m not nearly as familiar with Zizek’s work as with more traditional psychoanalytic concepts. But I’ll try anyway. From a quick encyclopedia skim, he actually uses a similar analogy with fetishes (all quotes from IEP):

}} “Žižek argues that the attitude of subjects towards authority revealed by today’s ideological cynicism resembles the fetishist’s attitude towards his fetish. The fetishist’s attitude towards his fetish has the peculiar form of a disavowal: “I know well that (for example) the shoe is only a shoe, but nevertheless, I still need my partner to wear the shoe in order to enjoy.” According to Žižek, the attitude of political subjects towards political authority evinces the same logical form: “I know well that (for example) Bob Hawke / Bill Clinton / the Party / the market does not always act justly, but I still act as though I did not know that this is the case.””

As for how beliefs manifest, Zizek clarifies the experience of following a tradition and why we might actually feel like these traditions are aligned with “Reason” from the inside, and also the crux of why “Reason” can fail so hard in terms of social change:

According to Žižek, all successful political ideologies necessarily refer to and turn around sublime objects posited by political ideologies. These sublime objects are what political subjects take it that their regime’s ideologies’ central words mean or name, extraordinary Things like God, the Fuhrer, the King, in whose name they will (if necessary) transgress ordinary moral laws and lay down their lives… Just as Kant’s subject resignifies its failure to grasp the sublime object as indirect testimony to a wholly “supersensible” faculty within herself (Reason), so Žižek argues that the inability of subjects to explain the nature of what they believe in politically does not indicate any disloyalty or abnormality. What political ideologies do, precisely, is provide subjects with a way of seeing the world according to which such an inability can appear as testimony to how Transcendent or Great their Nation, God, Freedom, and so forth is—surely far above the ordinary or profane things of the world.

Lastly and somewhat related, going back to an older SSC post, Scott argues that he doesn’t know why his patients react well to him, but Zizek can explain that, and it has a lot of relevance for politics (transference is a complex topic, but the simple definition is a transfer of affect or mind from the therapist to the patient, which is often a desirable outcome of therapy, contrasted with counter-transference, in which the patient affects the therapist):

}} “The belief or “supposition” of the analysand in psychoanalysis is that the Other (his analyst) knows the meaning of his symptoms. This is obviously a false belief, at the start of the analytic process. But it is only through holding this false belief about the analyst that the work of analysis can proceed, and the transferential belief can become true (when the analyst does become able to interpret the symptoms). Žižek argues that this strange intersubjective or dialectical logic of belief in clinical psychoanalysis is also what characterizes people’s political beliefs…. the key political function of holders of public office is to occupy the place of what he calls, after Lacan, “the Other supposed to know.” Žižek cites the example of priests reciting mass in Latin before an uncomprehending laity, who believe that the priests know the meaning of the words, and for whom this is sufficient to keep the faith. Far from presenting an exception to the way political authority works, for Žižek this scenario reveals the universal rule of how political consensus is formed.”

Scott probably comes across as having a stable and highly knowledgeable affect, which gives his patients a sense of being in the presence of authority (as we likely also feel in these comment threads), which makes him better able to perform transference and thus help his patients (or readers) reshape their beliefs.

Hopefully this shallow dive was interesting and opens up new areas of potential study, and also a parallel frame: working from the top-down ethnography (as tends to be popular in this community; the Archimedean standpoint) gives us a broad understanding, but working from the bottom-up gives us a more personal and intimate sense of why the top-down view is correct.

This helped me understand Zizek and Lacan a lot better than reading a book on them did, so thanks for that.


Stucchio doesn’t like me dissing Dubai:

I’m just going to raise a discussion of one piece here:

}} “Dubai, whose position in the United Arab Emirates makes it a lot closer to this model than most places, seems to invest a lot in its citizens’ happiness, but also has an underclass of near-slave laborers without exit rights (their employers tend to seize their passports).”

I have probably read the same western articles Scott has about all the labor the UAE and other Middle Eastern countries import. But unlike them, I live in India (one of the major sources of labor) and mostly have heard about this from people who choose to make the trip.

To me the biggest thing missing from these western reporters’ accounts is the fact that the people shifting to the gulf are ordinary humans, smarter than most journalists, and fully capable of making their own choices.

Here are things I’ve heard about it, roughly paraphrased:

“I knew they’d take my passport for 9 months while I paid for the trip over. After that I stuck around for 3 years because the money was good, particularly after I shifted jobs. It was sad only seeing my family over skype, but I brought home so much money it was worth it.”

“I took my family over and we stayed for 5 years; the money was good, we all finished the Hajj while we were there, but it was boring and I missed Maharashtrian food.”

“It sucked because the women are all locked up. You can’t talk to them at the mall. It’s as boring as everyone says and you can’t even watch internet porn. But the money is good.”

When I hear about this first hand, the stories don’t sound remotely like slave labor. It doesn’t even sound like “we were stuck in the GDR/Romania/etc” stories I’ve heard from professors born on the wrong side of the Iron Curtain. I hear stories of people making life choices to be bored and far from family in return for good money. Islam is a major secondary theme. So I don’t think the UAE is necessarily the exception Scott thinks it is.


Moridinamael on the StarCraft perspective:

In StarCraft 2, wild, unsound strategies may defeat poor opponents, but will be crushed by decent players who simply hew to strategies that fall within a valley of optimality. If there is a true optimal strategy, we don’t know what it is, but we do know what good, solid play looks like, and what it doesn’t look like. Tradition, that is to say, iterative competition, has carved a groove into the universe of playstyles, and it is almost impossible to outperform tradition.

Then you watch the highest-end professional players and see them sometimes doing absolutely hare-brained things that would only be contemplated by the rank novice, and you see those hare-brained things winning games. The best players are so good that they can leave behind the dogma of tradition. They simply understand the game in a way that you don’t. Sometimes a single innovative tactic debuted in a professional game will completely shift how the game is played for months, essentially carving a new path into what is considered the valley of optimality. Players can discover paths that are just better than tradition. And then, sometimes, somebody else figures out that the innovative strategy has an easily exploited Achilles’ heel, and the new tactic goes extinct as quickly as it became mainstream.

StarCraft 2 is fun to think about in this context because it is relatively open-ended, closer to reality than to chess. There are no equivalents to disruptor drops or mass infestor pushes or planetary fortress rushes in chess. StarCraft 2 is also fun to think about because we’ve now seen that machine learning can beat us at it by doing things outside of what we would call the valley of optimality.

But in this context it’s crucial to point out that the way AlphaStar developed its strategy looked more like gradually accrued “tradition” than like “rationalism”. A population of different agents played each other for a hundred subjective years. The winners replicated. This is memetic evolution through the Chestertonian tradition concept. The technique wouldn’t have worked without the powerful new learning algorithms, but the learning algorithm didn’t come up with the strategy of mass-producing probes and building mass blink-stalkers purely out of its fevered imagination. Rather, the learning algorithms were smart enough to notice what was working and what wasn’t, and to have some proximal conception as to why.
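As a rough illustration of that “winners replicate” dynamic (emphatically not AlphaStar’s actual training setup, just the bare selection loop being gestured at), here is a toy population of strategies that never reasons about the game and still converges on good play:

```python
import random

random.seed(0)

# Toy "strategy": a single number in [0, 1], say how aggressively to expand.
# The payoff secretly favors values near 0.7; no agent ever sees this rule.
def match(a, b):
    """Return the winner of one noisy head-to-head game between two strategies."""
    score_a = -abs(a - 0.7) + random.gauss(0, 0.05)
    score_b = -abs(b - 0.7) + random.gauss(0, 0.05)
    return a if score_a > score_b else b

population = [random.random() for _ in range(50)]

for generation in range(200):
    new_population = []
    for strategy in population:
        # Play a random opponent; the winner replicates with a small mutation.
        winner = match(strategy, random.choice(population))
        child = min(1.0, max(0.0, winner + random.gauss(0, 0.02)))
        new_population.append(child)
    population = new_population

print(f"population mean after selection: {sum(population) / len(population):.2f}")
```

No individual agent ever works out why a particular style wins; the population just accumulates whatever survived its matches, which is the sense in which the result looks more like accreted tradition than reasoning from first principles.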

Someone (maybe Robin Hanson) treats all of history as just evolution evolving better evolutions. The worst evolution of all (random chance) created the first replicator and kicked off biological evolution. Biological evolution created brains, which use a sort of hill-climbing memetic evolution for good ideas. People with brains created cultures (cultural evolution) including free market economies (an evolutionary system that selects for successful technologies). AIs like AlphaStar are the next (final?) step in this process.