[Previously in sequence: Epistemic Learned Helplessness, Book Review: The Secret Of Our Success]
A rare example of cultural evolution in action:
Throughout the Highlands of New Guinea, a group’s ability to raise large numbers of pigs is directly related to its economic and social success in competition with other regional groups. The ceremonial exchange of pigs allows groups to forge alliances, re-pay debts, obtain wives, and generate prestige through excessive displays of generosity. All this means that groups who are better able to raise pigs can expand more rapidly in numbers—by reproduction and in-migration—and thus have the potential to expand their territory. Group size is very important in intergroup warfare in small-scale societies so larger groups are more likely to successfully expand their territory. However, the prestige more successful groups obtain may cause the rapid diffusion of the very institutions, beliefs, or practices responsible for their competitive edge as other groups adopt their strategies and beliefs.
In 1971, the anthropologist David Boyd was living in the New Guinea village of Irakia, and observed intergroup competition via prestige-biased group transmission. Concerned about their low prestige and weak pig production, the senior men of Irakia convened a series of meetings to determine how to improve their situation. Numerous suggestions were proposed for raising their pig production but after a long process of consensus building the senior men of the village decided to follow a suggestion made by a prestigious clan-leader who proposed that they “must follow the Fore’” and adopt their pig-related husbandry practices, rituals, and other institutions. The Fore’ were a large and successful ethnic group in the region, who were renowned for their pig production. The following practices, beliefs, rules, and goals were copied from the Fore’, and announced at the next general meeting of the community:
1) All villagers must sing, dance and play flutes for their pigs. This ritual causes the pigs to grow faster and bigger. At feasts, the pigs should be fed first from the oven. People are fed second.
2) Pigs should not be killed for breaking into another’s garden. The pig’s owner must assist the owner of the garden in repairing the fence. Disputes will be resolved following the dispute resolution procedure used among the Fore’.
3) Sending pigs to other villages is tabooed, except for the official festival feast.
4) Women should take better care of the pigs, and feed them more food. To find extra time for this, women should spend less time gossiping.
5) Men must plant more sweet potatoes for the women to feed to the pigs, and should not depart for wage labor in distant towns until the pigs have grown to a certain size.
The first two items were implemented immediately at a ritual feast. David stayed in the village long enough to verify that the villagers did adopt the other practices, and that their pig production did increase in the short term, though unfortunately we don’t know what happened in the long-run.
Let me highlight three features of this case. First, the real causal linkages between many of these elements and pig production are unclear. Maybe singing does cause pigs to grow faster, but it’s not obvious and no one tried to ascertain this fact, via experimentation for example. Second, the village leadership chose to rely on copying institutions from other groups, and not on designing their own institutions from scratch. This is smart, since we humans are horrible at designing institutions from scratch. And third, this transmission between groups occurred rapidly because Irakia already had a political institution in the village, involving a council of the senior members of each clan, who were empowered by tradition (social norms) to make community-level decisions. Lacking this decision-making institution, Fore’ practices would have had to spread among households, and thus would have spread much more slowly. Of course, such political decision-making institutions themselves are favored by intergroup competition.
This is it. This is the five-point platform that the Democratic Party can use to win in 2020.
Yesterday’s review mentioned that children have certain “slots” in their heads that are ready for specific types of incoming information. How far can we take this idea?
The UCLA anthropologist Dan Fessler argues that during middle childhood (ages 6-9) humans go through a phase in which we are strongly attracted to learning about fire, by both observing others and manipulating it ourselves. In small-scale societies, where children are free to engage this curiosity, adolescents have both mastered fire and lost any further attraction to it. Interestingly, Fessler also argues that modern societies are unusual because so many children never get to satisfy their curiosity, so their fascination with fire stretches into the teen years and early adulthood.
On prestige-based socialization and learning who to learn from:
By 14 months, infants are already well beyond social referencing and already showing signs of using skill or competence cues to select models. After observing an adult model acting confused by shoes and placing them on his hands, German infants tended not to copy his unusual way of turning on a novel lighting device: using his head. However, if the model acted competently, confidently putting shoes on his feet, babies tended to copy the model and used their heads to activate the novel lighting device.
Kind of unrelated to culture, but did you know quadruped animals run at quantized speeds?
Many four-legged animals are saddled with a design disadvantage. Game animals thermoregulate by panting, like a dog. If they need to release more heat, they pant faster. This works fine unless they are running. When they run, the impact of their forelimbs compresses their chest cavities in a manner that makes breathing during compressions inefficient. This means that, ignoring oxygen and thermoregulation requirements, running quadrupeds should breathe only once per locomotor-cycle. But, since the need for oxygen goes up linearly with speed, they will be breathing too frequently at some speeds and not frequently enough at other speeds. Consequently, a running quadruped must pick a speed that (1) demands only one breath per cycle, but (2) supplies enough oxygen for his muscle-speed demands (lest fatigue set in), and (3) delivers enough panting to prevent a meltdown (heat stroke), which depends on factors unrelated to speed such as the temperature and breeze. The outcome of these constraints is that quadrupeds have a discrete set of optimal or preferred speed settings (like the gears on a stick-shift car) for different styles of locomotion (e.g., walking, trotting and galloping). If they deviate from these preferred settings, they are operating less efficiently.
Humans lack these restrictions because (1) our lungs do not compress when we stride (we’re bipedal) so (2) our breathing rates can vary independent of our speed, and (3) our thermoregulation is managed by our fancy sweating-system, so the need to pant does not constrain our breathing. Because of this, within our range of aerobic running speeds (not sprinting), energy use doesn’t vary too much. That means we can change speeds within this range without paying much of a penalty. As a result, a skilled endurance hunter can strategically vary his speed in order to force his prey to run inefficiently. If his prey picks an initial speed just faster than the hunter, to escape, the hunter can speed up. This forces the prey to ‘shift up’ to a much faster speed, which will cause rapid overheating. The animal’s only alternative is to run inefficiently, at a slower speed which will exhaust his muscles more quickly. The consequence is that hunters force their prey into a series of sprints and rests that eventually result in heat stroke. The overheated prey collapses, and is easily dispatched. Tarahumara, Paiute and Navajo hunters report that they then simply strangle the collapsed deer or pronghorn antelope.
Even locomotion is culturally learned!
To achieve a running form that maximizes both performance and freedom from injury, humans need to rely on some cultural learning, on top of much individual practice. The evolutionary biologist and anatomist, Dan Lieberman, has studied long-distance barefoot and minimally shod running in communities around the globe. When he asks runners of all ages how they learned to run, they never say they “just knew how.” Instead, they often name or point to an older, highly skilled, and more prestigious member of their group or community, and say they just watch him, and do what he does. We are such a cultural species that we’ve come to rely on learning from others even to figure out how to run in ways that best harness our anatomical adaptations.
Why we use spices:
Why do we use spices in our foods? In thinking about this question keep in mind that (1) other animals don’t spice their foods, (2) most spices contribute little or no nutrition to our diets, and (3) the active ingredients in many spices are actually aversive chemicals, which evolved to keep insects, fungi, bacteria, mammals and other unwanted critters away from the plants that produce them.
Several lines of evidence indicate that spicing may represent a class of cultural adaptations to the problem of food-borne pathogens. Many spices are antimicrobials that can kill pathogens in foods. Globally, common spices are onions, pepper, garlic, cilantro, chili peppers (capsicum) and bay leaves. Here’s the idea: the use of many spices represents a cultural adaptation to the problem of pathogens in food, especially in meat. This challenge would have been most important before refrigerators came on the scene. To examine this, two biologists, Jennifer Billing and Paul Sherman, collected 4578 recipes from traditional cookbooks from populations around the world. They found three distinct patterns.
1. Spices are, in fact, antimicrobial. The most common spices in the world are also the most effective against bacteria. Some spices are also fungicides. Combinations of spices have synergistic effects, which may explain why ingredients like “chili powder” (a mix of red pepper, onion, paprika, garlic, cumin and oregano) are so important. And, ingredients like lemon and lime, which are not on their own potent anti-microbials, appear to catalyze the bacteria-killing effects of other spices.
2. People in hotter climates use more spices, and more of the most effective bacteria killers. In India and Indonesia, for example, most recipes used many anti-microbial spices, including onions, garlic, capsicum and coriander. Meanwhile, in Norway, recipes use some black pepper and occasionally a bit of parsley or lemon, but that’s about it.
3. Recipes appear to use spices in ways that increase their effectiveness. Some spices, like onions and garlic, whose killing power is resistant to heating, are deployed in the cooking process. Other spices like cilantro, whose antimicrobial properties might be damaged by heating, are added fresh in recipes.
Thus, many recipes and preferences appear to be cultural adaptations to local environments that operate in subtle and nuanced ways not understood by those of us who love spicy foods. Billing and Sherman speculate that these evolved culturally, as healthier, more fertile and more successful families were preferentially imitated by less successful ones. This is quite plausible given what we know about our species’ evolved psychology for cultural learning, including specifically cultural learning about foods and plants.
Among spices, chili peppers are an ideal case. Chili peppers were the primary spice of New World cuisines, prior to the arrival of Europeans, and are now routinely consumed by about a quarter of all adults, globally. Chili peppers have evolved chemical defenses, based on capsaicin, that make them aversive to mammals and rodents but desirable to birds. In mammals, capsaicin directly activates a pain channel (TrpV1), which creates a burning sensation in response to various specific stimuli, including acid, high temperatures and allyl isothiocyanate (which is found in mustard or wasabi). These chemical weapons aid chili pepper plants in their survival and reproduction, as birds provide a better dispersal system for the plants’ seeds than other options (like mammals). Consequently, chilies are innately aversive to non-human primates, babies and many human adults. Capsaicin is so innately aversive that nursing mothers are advised to avoid chili peppers, lest their infants reject their breast (milk), and some societies even put capsicum on mom’s breasts to initiate weaning. Yet, adults who live in hot climates regularly incorporate chilies into their recipes. And, those who grow up among people who enjoy eating chili peppers not only eat chilies but love eating them. How do we come to like the experience of burning and sweating—the activation of pain channel TrpV1?
Research by psychologist Paul Rozin shows that people come to enjoy the experience of eating chili peppers mostly by re-interpreting the pain signals caused by capsicum as pleasure or excitement. Based on work in the highlands of Mexico, children acquire this gradually without being pressured or compelled. They want to learn to like chili peppers, to be like those they admire. This fits with what we’ve already seen: children readily acquire food preferences from older peers. In Chapter 14, we further examine how cultural learning can alter our bodies’ physiological response to pain, and specifically to electric shocks. The bottom line is that culture can overpower our innate mammalian aversions, when necessary and without us knowing it.
Fascinating if true. But don’t we use spices because of their taste? If spices are antimicrobials, why aren’t there any tasteless spices? I guess you could argue most plants taste like something, usually something bad, and if a plant is a good antimicrobial then we go through the trouble of culturally reinterpreting its taste to be “exciting” or “interesting”. Also, how far can this “cultural reinterpretation” idea go? Does this explain things like masochism, or like the weak form of masochism that makes people like naively unpleasant experiences like roller coasters?
I knew that Europeans had light skin because they lived in northern latitudes without much sunlight. But then how come Inuit and North Asians never developed light skin? Henrich explains:
To understand this, we need first to consider how culture has shaped genes for skin color over the last 10 millennia. Much evidence now indicates that the shades of skin color found among different populations—from dark to light—across the globe represent a genetic adaptation to the intensity and frequency of exposure to ultraviolet light, including both UVA and UVB. Near the equator, where the sun is intense year round, natural selection favors darker skin, as seen in populations near the equator in Africa, New Guinea and Australia. This is because both UVA and UVB light can dismantle the folate present in our skin, if not impeded or blocked by melanin. Folate is crucial during pregnancy, and inadequate levels can result in severe birth defects like spina bifida. This is why pregnant women are told by their physicians to take folic acid. In men, folate is important in sperm production. Preventing the loss of this reproductively valuable folate means adding protective melanin to our epidermis, which has the side effect of darkening our skin.
The threat from intense UV light to our folate diminishes for populations farther from the equator. However, a new problem pops up, as darker skinned people face a potential vitamin D deficiency. Our bodies use UVB light to synthesize vitamin D. At higher latitudes, the protective melanin in dark skin can block too much of the UVB light, and thereby inhibit the synthesis of vitamin D. This vitamin is important for the proper functioning of the brain, heart, pancreas and immune system. If a person’s diet lacks other significant sources of this vitamin, then having dark skin and living at high latitudes increases one’s chances of experiencing a whole range of health problems, including most notably rickets. A terrible condition especially in children, rickets causes muscle weakness, bone and skeletal deformities, bone fractures and muscle spasms. Thus, living at high latitude will often favor genes for lighter skin. Not surprising for a cultural species, many high latitude populations of hunter-gatherers (above 50–55° latitude), such as the Inuit, culturally evolved adaptive diets based on fish and marine animals, so the selection pressures on genes to reduce the melanin in their skin were not as potent as they would have been in populations lacking such resources. If these resources were to disappear from the diet of such northern populations, selection for light skin would intensify dramatically.
Among regions of the globe above 50–55° latitude (e.g. much of Canada), the area around the Baltic Sea was almost unique in its ability to support early agriculture. Starting around 6,000 years ago, a cultural package of cereal crops and agricultural know-how gradually spread from the south, and was adapted to the Baltic ecology. Eventually, people became primarily dependent on farmed foods, and lacked access to the fish and other vitamin D-rich food sources that local hunter-gatherer populations had long enjoyed. However, being at particularly high latitudes, natural selection kicked in to favor genes for really light skin, so as to maximize whatever vitamin-D could be synthesized using UVB light.
Secret Of Our Success spends a lot of time talking about gene-culture coevolution and how we should expect people from different cultures to have different genes. When asked whether this is potentially racist, it argues it’s really maximally anti-racist, because “racism” means “believing in exactly the same racial categories as 19th century racists”, and gene-culture coevolution proves that variation is actually much more widespread than that, so there.
In case you needed proof that high status increases your inclusive fitness:
Chris asked a sample of Tsimane to rank the men in two villages along a number of dimensions, including their fighting ability, generosity, respect, community persuasiveness, ability to get their way, and their number of allies. Each Tsimane’ man could then be assigned a score based on the aggregate results from his fellow villagers. Chris argues that his measures of fighting ability and community persuasiveness provide the best proxies for dominance and prestige, respectively, in this context. He then shows that both of these proxies for social status are associated with having more babies with one’s wife, having more extra-marital affairs, and being more likely to remarry after a divorce, even after statistically removing the effects of age, kin group size, economic productivity and several other factors. Beyond this, the children of prestigious men die less frequently and prestigious men are more likely to marry at younger ages (neither of these effects hold for dominant men). All this suggests that, at least in this small scale society, being recognized as either dominant or prestigious has a positive influence on one’s total reproductive output (children) or mating success over and above the consequences that might accrue from factors associated with status like economic productivity or hunting skills. Not surprisingly, both dominant and prestigious men tended to get their way at group meetings, but only prestigious men were respected and generous.
On the Sanhedrin:
Effective institutions often harness or suppress aspects of our status psychology in non-intuitive ways. Take the Great Sanhedrin, the ancient Jewish court and legislature that persisted for centuries at the beginning of the Common Era. When deliberating on a capital case, its 70 judges would each share their views beginning with the youngest and lowest ranking member and then proceed in turn to the “wisest” and most respected member. This is an interesting norm because (1) it’s nearly the opposite of how things would go if we let nature take its course, and (2) it helps guarantee that all the judges got to hear the least varnished views of the lower ranking members, since otherwise the views of the lowest status individuals would be tainted by both the persuasive and deferential effects of prestige and dominance. Concerns with dominance may have been further mitigated by (1) a sharing of the directorship of the Sanhedrin by two individuals, who could be removed by a vote of the judges, (2) the similar social class and background of judges, and (3) social norms that suppressed status displays.
I like this idea, but I worry it could backfire. Supposing that even the best of us are at least a little tempted to conform, it risks the youngest and least experienced members setting the tone for the discussion, so that the older and wiser members are tempted to conform with people more foolish than themselves. If the wisest people spoke first, at least we could get their untainted opinions and guarantee that any conformity was at least in favor of the opinion most likely to be correct. Overall it seems like they should have gone with secret ballots. I wonder if anyone’s ever done an experiment comparing wisest-first, youngest-first, and secret-ballot decision-making to see if any have a clear advantage. You could do it with one of those “guess the number of jelly beans in this jar” tasks or something, with participants who did well on a test problem clearly marked as “elders”.
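For concreteness, here’s a minimal toy simulation of the experiment I’m imagining. Everything in it is invented for illustration: the conformity weight, the assumption that “elders” have less noisy private guesses, and the rule that each speaker blends their private guess with the running average of what’s already been said are all hypothetical, not taken from Henrich or from any actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

def committee(order, n=15, conformity=0.5, truth=100.0):
    """Toy jelly-bean committee. Members have noisy private guesses of the
    true count; 'elders' (low index) are assumed less noisy. Each speaker
    states a blend of their private guess and the average of what has
    already been said, except under a secret ballot."""
    noise = np.linspace(5, 30, n)            # elder 0 is the most accurate
    private = truth + rng.normal(0, noise)
    speakers = range(n) if order == "wisest_first" else range(n - 1, -1, -1)
    stated = []
    for i in speakers:
        if stated and order != "secret_ballot":
            public = conformity * np.mean(stated) + (1 - conformity) * private[i]
        else:
            public = private[i]
        stated.append(public)
    return abs(np.mean(stated) - truth)      # error of the group's final average

for order in ("wisest_first", "youngest_first", "secret_ballot"):
    errors = [committee(order) for _ in range(2000)]
    print(order, round(float(np.mean(errors)), 2))
```

Under these made-up numbers you’d expect wisest-first and secret ballots to beat youngest-first, since the earliest speakers anchor everyone else; the interesting empirical question is what the real conformity weights look like.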
On why societies often dictate naming children after their paternal relatives:
In building a broader kinship network, social norms and practices connect a child more tightly to his or her father’s side of the family, in subtle ways. In contrast to many complex societies, mobile hunter-gatherer populations often emphasize kinship through both mom and dad, and permit new couples much flexibility in where they can live after marriage. However, there’s always that problem of paternity certainty for dad’s entire side. Among Ju/’hoansi, mobile hunter-gatherers in the Kalahari Desert in southern Africa, social norms dictate that a newborn’s father—or, more accurately, the mother’s husband—has the privilege of naming the child. These norms also encourage him to name the child after either his mother or father, depending on the infant’s sex. Ju/’hoansi believe name sharing helps the essence of the paternal grandparents live on, and it consequently bonds both the grandparents and the father’s whole side of the family to the newborn. Relatives of the grandparents often refer to the newborn using the same kinship term they use for his or her older namesake—that is, the grandfather’s daughter will call the newborn baby “father.”
This bias to the father’s side is particularly interesting since Ju/’hoansi kinship relationships are otherwise quite gender egalitarian, emphasizing equally the links to both mom’s and dad’s sides of the family. This biased naming practice may help create that symmetry by evening out the imbalance that paternity uncertainty leaves behind. In many modern societies, where social norms favoring the father’s side have disappeared, the effect of paternity certainty emerges as maternal grandparents, uncles and aunts invest more than the same paternal relatives do. Thus, Ju/’hoansi practices link newborns directly to their father’s parents and simultaneously, via the use of close kin terms like “father” and “sister”, pull all of dad’s relatives closer.
I wonder if this can be extended to our own practice of kids (mostly) taking their father’s last name rather than their mother’s.
And Joseph Henrich continues with an anecdote I eventually decided to consider cute:
More broadly, in Ju/’hoansi society, sharing the same name is an important feature of social life, which has many economically important implications. Psychologically, creating namesakes may work in two interlocking ways. First, even among undergraduates and professors, experiments suggest that sharing the same, or even a similar, name increases people’s liking for the other person, their perceptions of similarity and their willingness to help that person. In one study, for example, professors were more likely to fill out a survey and mail it back if the cover letter was signed by someone with a name similar to their own name. The perception of similarity suggests that namesakes may somehow spark our kin psychology, since we already know we use other cues of similarity (appearance) to assess relatedness. Second, even if this same-name trick doesn’t actually spark any change in immediate feelings, it still sets the appropriate social norms—the reputational standards monitored by others—which among the Ju/’hoansi specify all kinds of important things about relationships, ranging from meat sharing priorities to water-hole ownership. Norms related to naming or namesake relationships are common across diverse societies, and many people in small-scale societies intuitively know the power of namesakes, as my Yasawan friends with names like Josefa, Joseteki and Joseses often remind me. My own kids are named Joshua, Jessica and Zoey, thus matching my own first name by first initial or by rhyming.
(his wife is also an anthropologist, so maybe that makes naming your kids according to anthropological phenomena easier to pull off).
Relevant to a frequent discussion here about whether polyamory is “unnatural” or at least a violation of Chesterton’s Fence:
Even in societies with marriage, social norms and beliefs need not reinforce concerns about sexual fidelity that arise from male pair-bonding psychology, but can instead promote investment in children in other ways. Many South American indigenous populations believe that a child forms in his or her mother’s womb through repeated ejaculations of sperm, a belief system that anthropologists have labeled partible paternity. In fact, people in many of these societies maintain that a single ejaculation cannot sustain a viable pregnancy, and men must “work hard” with repeated ejaculations over many months to sustain a viable fetus. Women, especially after the first fetus appears, are permitted, and sometimes even encouraged, to seek another man, or men, to have sex with in order to provide ‘additional fathers’ for their future child. Anyone who contributes sperm to the fetus is a secondary father. In some of these societies, periodic rituals prescribe extramarital sex after successful hunts, which helps establish and formalize the creation of multiple fathers. Secondary fathers — often named at birth by the mother — are expected to contribute to the welfare of their children (e.g., by delivering meat and fish), although not as much as the primary father, the mother’s husband. Frequently, the secondary father is the husband’s brother.
Obtaining a second father is adaptive, at least sometimes. Detailed studies among both the Bari’ in Venezuela and the Ache’ show that kids with exactly two fathers are more likely to survive past age fifteen than kids with either one father or three or more fathers.
Importantly, social norms cannot just make male sexual jealousy vanish. Men don’t like it when their wives seek sex with other men. However, rather than being supported by their communities in monitoring and punishing their wives for sexual deviations, they are the ones acting defiantly—violating social norms — if they show or act on their jealousy. Reputational concerns and norms are flipped around here, so now the husband has to control himself. In the eyes of the community, it’s considered a good thing for an expectant mother to provide a secondary father for her child.
Henrich adds that about 85% of human societies have practiced something other than traditionally-understood monogamy.
Suppose somebody in a weird Californian counterculture scene is trying to decide to what degree polyamory is Chesterton’s-Fence-compliant. They might look around their own social network and find that most of the people they know have organically become polyamorous over the past decade or so, and decide it is the local tradition (and therefore it is good). But they could look on a broader scale and see that most people in their civilization over the past few centuries have been monogamous (and therefore polyamory is bad). Or they could look on an even broader scale and see that most people in the world throughout human history have been non-monogamous (and therefore polyamory is potentially good again). I understand other people’s intuition that the “my civilization, past few hundred years” scale seems important, but I’m not sure how you would non-arbitrarily justify choosing that particular scale instead of others. The strongest argument seems to be something like “Wait two generations to see if it builds strong families”, but I could see going either way.
I mentioned aversion to eating insects in the original review, but Henrich suggests some food taboos are easier to acquire than others:
There is reason to suspect that we humans have an innate susceptibility to picking up meat aversions, due to the tendency of dead animals to carry dangerous pathogens. Thus, we humans are primed to acquire meat taboos over other food avoidances.
More on taboos. A lot of taboos were of the form “you personally are not allowed to eat this particular meat or else something terrible will happen to you, so you might as well share it with the less fortunate instead”; this looks like a pretty transparent attempt by cultural evolution to build a social safety net. Henrich asks why these taboos persisted in the face of greed:
A good learner will acquire this rule while growing up and never actually violate it (meat is consumed in public), so he’ll never directly experience eating the tabooed part and not having bad luck. Rare cases of taboo violation that, by coincidence, were followed by bad luck or illness will be readily remembered and passed on (psychologists call this “negativity bias”). Meanwhile, cases of violations followed by a long period when nothing bad happens will tend to be missed or forgotten, unless people keep and check accurate records.
Based on my field experience, any skeptic who questions the taboos will be met with vivid descriptions of particular cases in which the taboos were violated and then poor hunting, illnesses, or bad luck ensued.
This is a huge stretch, but I wonder if you could make an argument that evolution favored confirmation bias because it helped prevent people from questioning their cultural rules.
How social norms are maintained:
In research in the villages of Yasawa Island, my team and I have studied how norms are maintained. When someone, for example, repeatedly fails to contribute to village feasts or community labor, or violates food or incest taboos, the person’s reputation suffers. A Yasawan’s reputation is like a shield that protects them from exploitation or harm by others, often from those who harbor old jealousies or past grievances. Violating norms, especially repeatedly, causes this reputational shield to drop, and creates an opening for others to exploit the norm-violator with relative impunity. Norm violators have their property (e.g., plates, matches, tools) stolen and destroyed while they are away fishing or visiting relatives in other villages; or, they have their crops stolen and gardens burned at night. Despite the small size of these communities, the perpetrators of these actions often remain anonymous and get direct benefits in the form of stolen food and tools as well as the advantages of bringing down a competitor or dispensing revenge for past grievances.
Despite their selfish motivations, these actions act to sustain social norms, including cooperative ones, because—crucially—perpetrators can only get away with such actions when they target a norm-violator, a person with his reputational shield down. Were they to do this to someone with a good reputation, the perpetrator would himself become a norm-violator and damage his or her reputation, thereby opening themselves up to gossip, thefts and property damage. This system, which Yasawans themselves can’t explicitly lay out, thereby harnesses past grievances, jealousies and plain old self-interest to sustain social norms, including cooperative norms like contributing to village feasts. Thus, individuals who fail to learn the correct local norms, can’t control themselves, or repeatedly make mistaken violations are eventually driven from the village, after having been relentlessly targeted for exploitation.
This sounds sort of like the Icelandic legal system in Legal Systems Very Different From Ours, in that the consequence of breaking the law is that the laws cease to protect you. But viewed from a more critical angle, it also sounds like the modern “tradition” of committing (and/or tolerating) hate crimes against people who don’t conform.
Speaking of hate crimes, Henrich (like me) thinks “racism” is not a natural category. He thinks ethnic hostility is much more natural than racial hostility, with the difference being that race is biological and ethnicity is cultural. People are naturally friendly towards people of their own culture and skeptical of people from other cultures, which may or may not follow racial lines. He discusses an experiment in which children are asked to view a puppet playing a game incorrectly:
We can see how deeply norms are intertwined with our folk sociology by returning to the experiments with Max the puppet. The child subjects now encounter Max along with Henri. Max speaks native-accented German but Henri speaks French-accented German. Young German children protested much more when Max —their co-ethnic as cued by accent — played the game differently from the model than when Henri did. Co-ethnics are favored because they presumably share similar norms, but that also means they are subject to more monitoring and punishment if they violate those norms. This appears to hold cross-culturally, as people from places as diverse as Mongolia and New Guinea willingly pay a cost to preferentially punish their co-ethnics in experiments like the Ultimatum Game, over their non-co-ethnics, for norm violations.
This approach to how and why we think about tribes and ethnicity has broader implications. First, intergroup competition will tend to favor the spread of any tricks for expanding what members of a group perceive as their tribe. Both religions and nations have culturally evolved to increasingly harness and exploit this piece of our psychology, as they create quasi-tribes. Second, this approach means that the ingroup vs. out-group view taken by psychologists misses a key point: not all groups are equally salient or thought about in the same way. Civil wars, for example, strongly trace to ethnically or religiously marked differences, and not to class, income or political ideology. [310] This is because our minds are prepared to carve the social world into ethnic groups, but not into classes or ideologies.
Finally, the psychological machinery that underpins how we think about ‘race’ actually evolved to parse ethnicity, not race. You might be confused by this distinction since race and ethnicity are so often mixed up. Ethnic group membership is assigned based on culturally-transmitted markers, like language or dialect. By contrast, racial groups are marked and assigned according to perceived morphological traits, such as color or hair form, which are genetically transmitted. Our folk-sociological abilities evolved to pick out ethnic groups, or tribes. However, cues like skin color or hair form can pose as ethnic markers in the modern world because members of different ethnic groups sometimes also share markers like skin color/hair form, and racial cues can automatically and unconsciously ‘trick’ our psychology into thinking that different ethnic groups exist. And, this byproduct can be harnessed and reified by cultural evolution to create linguistically labeled racial categories and racism.
Underlining this point is the fact that racial cues do not have cognitive priority over ethnic cues: when children or adults encounter a situation in which accent or language indicate ‘same ethnicity’ but skin color indicates ‘different race’, the ethno-linguistic markers trump the racial markers. That is, children pick as a friend someone of a different race who speaks their dialect over someone of the same race who speaks a different dialect. [311] Even weaker cues like dress can sometimes trump racial cues. The tendency of children and adults to preferentially learn and interact with those who share their racial markers (mistaken for ethnic cues) likely contributes to the maintenance of cultural differences between racially marked populations, even in the same neighborhood.
This ties in to my crackpot theory that the number one way to fight racism in the US is to somehow get everyone speaking exactly the same accent.
In one well-studied case among the Gebusi, in New Guinea, my failure to meet my sister exchange obligations would increase the chances that I would, at some future date, be found guilty of witchcraft.
#out of context quotes
Henrich discusses a theory of intrinsic growth pretty similar to the one in my recent singularity post. But he introduces a neat experimental test: Polynesian islands. On larger islands (ie with higher carrying capacities), technological advance is faster:
Islands or island clusters with larger populations and more contact with other islands had both a greater number of different fishing-tool types and more-complex fishing technologies. Figure 12.2 shows the relationship between population size and the number of tool types. People on islands with bigger populations had more tools at their disposal, and those tools tended to be more sophisticated.
Another team, led by the evolutionary anthropologist Mark Collard, found the same kind of strong positive relationship when they examined forty nonindustrialized societies of farmers and herders from around the globe. Once again, larger populations had more-complex technologies and a greater number of different types of tools.
These effects can even be observed in orangutans. While orangutans have little or no cumulative culture, they do possess some social learning abilities that result in local, population-specific traditions. For example, some orangutan groups routinely use leaves to scoop up water from the ground or use sticks to extract seeds from fruit. Data from several orangutan populations show that groups with greater interaction among individuals tend to possess more learned food-obtaining techniques.
The point is, larger and more interconnected populations generate more sophisticated tools, techniques, weapons, and know-how because they have larger collective brains.
Henrich’s model is actually a little more complicated than mine, because it includes a term for forgetting technology (which actually happens pretty often when the group is small enough!). The more technology the group has, the more likely that one or two things slip through the cracks every generation and don’t get passed on to the kids. That means that most primitive societies are in an equilibrium between the rate of generating and the rate of losing technology, whose exact level depends on the population size:
Some information is lost every generation, because copies are usually worse than the originals. Cumulative cultural evolution has to fight against this force and is best able to do so in larger populations that are highly socially interconnected. The key is that most individuals end up imperfect, worse than the models they are learning from. However, some few individuals, whether by luck, fierce practice, or intentional innovation, end up better than their teachers…
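To make that equilibrium concrete, here’s a minimal sketch of the dynamic, loosely in the spirit of Henrich’s argument but with every number made up for illustration: each generation everyone imperfectly copies the most skilled person, copies are on average a bit worse but vary, and only in a large population does someone reliably end up better than the teacher.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_skill_after(pop_size, generations=200, copy_loss=4.0, copy_noise=2.0):
    """Each generation, everyone copies the current best individual.
    Copies lose `copy_loss` on average but vary by `copy_noise`, so the
    chance that someone out-does the model grows with population size."""
    skill = np.zeros(pop_size)
    for _ in range(generations):
        best = skill.max()        # the prestigious model everyone learns from
        skill = best - copy_loss + rng.normal(0, copy_noise, pop_size)
    return skill.max()

for n in (10, 100, 1000):
    print(n, round(float(max_skill_after(n)), 1))
# Small groups drift downward (technology is lost); large groups accumulate skill.
```

In this toy version, whether skill grows or decays depends entirely on whether the expected best of N noisy copies beats the average copying loss, which is how population size enters; Henrich’s actual model is more careful about whom learners copy and how skill is measured.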
One point the book really drove home is how much of the absolute basics of knowledge are cultural inventions. We laugh at primitive tribes who count “one, two, many”, but the idea of counting more specifically than this was a discovery that had to be discovered by someone, and only survived when there was a context that made it useful:
Many of the products of cumulative cultural evolution give us not only ready concepts to apply to somewhat new problems, and concepts to recombine (bows are projectiles + elastically stored energy) but actually give us cognitive tools or mental abilities that we would not otherwise have. Arabic numerals, Roman letters, the Indian zero, the Gregorian calendar, cylindrical projection maps, basic color terms, clocks, fractions, and right vs. left are just some of the cognitive tools that have shaped your mind and mine.
Alas, this quote is missing some context from the rest of the book showing just how hard these ideas were to develop. Remember that mathematicians spent a while debating whether “zero” was truly a number, that ancient people had what we consider very confusing concepts around color (even the Greeks were weird about this). Remember that the alphabet – breaking words up into their smallest components – arose only after millennia of logographs and syllabaries, and in some areas never arose at all. There’s even some speculation that basic ideas about introspection and emotion were invented pretty late. Or even:
Subordinating conjunctions like “after”, “before”, and “because of” may have evolved only recently, in historical times, and are probably no more a feature of *human* languages than composite bows are a feature of *human* technological repertoires. The tools of subordination seem less well-developed in the earliest versions of Sumerian, Akkadian, Hittite, and Greek. This makes these languages slow, ponderous, and repetitious to read. …This is not to say that we humans don’t have some souped-up innate abilities for dealing with hierarchical structures, which may also be useful for making tools or understanding social relationships, but merely that the elegant bits of grammar that permit us to fully harness these abilities were built by cultural evolution.
This kind of thing is why Henrich thinks comparing the IQ of young chimps and human toddlers is fair, but comparing older chimps and older humans isn’t. Older humans have all of these deep-level concepts to work with that make solving even abstract puzzles much easier. This is also plausibly related to the Flynn Effect.
On sonority:
A successful communicator is one who can most effectively be understood, given the local social, environmental, or ecological conditions. As young or naïve learners focus on and learn from more successful communicators—who are using more effective communication tools—cumulative cultural evolution will gradually assemble sign or whistled repertoires, over time, in the same way that it hones kayaks, spears, and boomerangs. Given this, there’s no reason to suspect that such cultural evolutionary processes somehow apply only to whistled or gestural sign languages, and not to typical spoken languages. Thus, spoken languages should—under the right circumstances—show some response to the local acoustic environments and to nonlinguistic social norms, just as whistled and sign languages do. While researchers have done little work on such topics, there’s some preliminary evidence.
Spoken languages vary in their sonority. The sonority of our voices decreases as the airflow used for speech is obstructed; it is highest for open vowels, like the /a/, and lowest for so-called voiceless stops, like the /t/ in tin. Pronounce each of these sounds and note the difference in the constriction of your airflow. Both vowels and consonants vary in sonority, but vowels generally have much higher sonority than consonants. This means that more sonorous languages tend to have more vowels (e.g., Hawaiian), while less sonorous ones pack the consonants together (e.g., Russian). For the same energy and effort, more sonorous speech sounds can be heard at greater distances and over more ambient noise than less sonorous ones.
If languages adapt culturally, then we can predict that in situations in which people do relatively more talking over greater interpersonal distances with more ambient noise and sound dispersion, languages will be more sonorous. Many environmental variables might influence this, but Robert Monroe, John Fought, and their colleagues reasoned that climate, and specifically temperature, might have a big effect. The idea is simple: in warmer climates, people work, play, cook, and relax outdoors. Compared to life indoors, living outside means that communicators more frequently face the challenges of distance, noise and poor acoustics. Their team generated measures of sonority from word lists for dozens of languages and then looked at the relationship between sonority and measures of climatic temperature, like the number of months per year when it’s below 10°C (50°F).
It turns out that if all you know is climatic temperature, then you can account for about one-third of the variation in the sonority of languages. Languages in warmer climates tend to use more vowels than those in colder climates and rely more heavily on the most sonorous vowel, /a/. For consonants, languages in warmer climates rely more heavily on the most sonorant consonants, like /n/, /l/, and /r/. By contrast, languages in colder climates lean more heavily on the least sonorous vowels, like the /i/ in deep.
This simple idea can have much nuance added to it. For example, not all warm climates are equally conducive to sonorous speech. In regions with dense forest cover, the advantages of high sonority might be less pronounced, or as the anthropologists Mel and Carol Ember have argued, very cold and windy climates may select against linguistic practices that involve opening one’s mouth widely, due to the increased heat loss. To this they added the idea that social norms about sexual restrictiveness might also influence sonority. Adding both of these nuances to the basic climatic temperature analysis, they managed to account for four-fifths of the variation in the sonority of language.
I’m a little worried about p-hacking here, but still, whoa! The thing where Inuit languages sound like tikkakkooktttippik but Polynesian languages sound like waoiuhieeawahiaii has a cause! The phonetic nature of words is shaped by the experience of the people who produce them! There’s something delightfully kabbalistic about this.
The chili pepper quote promised a study on cultural learning of pain, so here it is:
Ken Craig has directly tested the relationship between cultural learning and pain. Ken’s team first exposed research participants to a series of electric shocks that gradually increased in intensity and thus painfulness. Some participants observed another person – a “tough model” – experience the same shocks right after them, and some did not. Both the participant and model had to rate how painful the shock was each time. The tough model, however, was secretly working for the experimenter and always rated the pain about 25% less painful than the participant did. Then, after this, the model left and the participants received a series of random electric shocks. For this new series of shocks, the participants who had seen the tough model rated them half as painful as those who didn’t see the tough model….
Those who saw the tough model showed (1) declining measurements of electrodermal skin potential, meaning that their bodies stopped reacting to the threat, (2) lower and more stable heart rates, and (3) lower stress ratings. Cultural learning from the tough model changed their physiological reactions to electric shocks.
I see a commenter on Quillette has already thought to connect this to telling people they should be harmed by triggers and microaggressions. But also note the connection to the predictive processing model of perception.
Books like this are supposed to end with an Exhortation Relevant To Modern Society, so here’s Henrich’s:
Humans are bad at intentionally designing effective institutions and organizations, though I’m hoping that as we get deeper insights into human nature and cultural evolution this can improve. Until then, we should take a page from cultural evolution’s playbook and design “variation and selection systems” that will allow alternative institutions or organizational forms to compete. We can dump the losers, keep the winners, and hopefully gain some general insights during the process.
If that sounds familiar, it could be because it’s capitalism; if it sounds very familiar, it could be because it’s also the case for things like charter cities and seasteads; if it sounds super familiar, it could be because it’s also Archipelago.
And to finish:
Once we understand the importance of collective brains, we begin to see why modern societies differ in their innovativeness. It’s not the smartness of individuals or the formal incentives. It’s the willingness and ability of large numbers of individuals at the knowledge frontier to freely interact, exchange views, disagree, learn from each other, build collaborations, trust strangers, and be wrong.
Hopefully this means Henrich won’t be too angry that I just quoted like half of his copyrighted book without permission.
Paprika. It’s bland and adds no flavor to a dish but is still called for in recipes all the time.
It has lots of carotenoids, which is good if you’re not getting them from other dietary sources.
It’s spicy enough that my kids don’t like it.
Paprika’s just ground chilli that uses particularly mild varieties and has a correspondingly mild flavour.
Paprika can be a huge range of things. Actual Hungarian or Spanish smoked paprika is not bland: the hot variety is hot, though perhaps not as hot as say cayenne, while the sweet variety also has a flavour beyond simply “smoke”, which it also adds.
Generic supermarket paprika might well be bland, especially if it’s been allowed to go stale, but that is true of a lot of other spices.
Paprika adds lots of flavour to chorizo. Without it, it’s just a sausage.
A lot of paprika has all sorts of interesting, subtle, flavours, generally in the smoky, sweet, and hot range. However, even the most basic of paprikas has an advantage in cooking – colour. Even a small amount of it gives almost anything a really nice red hue, which is often desirable if, as is often the case for the sort of standard meal I cook (which is generally some meat and veg pan-fried with cheese and vegan creme fraiche substitute forming a sauce with the meat juices, plus pasta), the general look of the meal is a sort of pallid yellow/white without it. Even if it isn’t changing the taste (and I’d maintain it generally is, as long as I’ve used decent paprika), I really value its ability to make a meal look much more appetising.
Paprika is great with eggs! Sprinkle some on egg mayonnaise, it really gives a lift but it’s not too hot. It works well in beef stews also, and with roast chicken.
I discount the “spices covered up rotten meat” notion a bit, since (a) people in all sorts of cultures ate game, which is hung to tenderise the meat by aging, hence the “gamey” (slightly rotten) flavour (b) his bland Norwegians eating a bland diet also rejoice in the culinary achievement known as rakfisk, while the Swedes have surstromming and the Icelanders beat them all.
I think it’s a combination of taste, flavour and also ‘hey this helps preserve/prevent food going bad’.
I didn’t see any reference to spices being used to cover up the taste/smell of rotten food, rather that they are antimicrobial and so they make foods that may be contaminated safer to eat. Especially in a hot climate, the levels of salmonella and other pathogens can rise high enough to sicken you well before the food shows any sign of rot through odor or taste. The claim is that the spices function to inhibit these microbes. Also, a) “gamey” taste comes from the diet and species of the game animal, and has nothing to do with rot or hanging. Game cooked and eaten minutes after the hunter brings it down is just as gamey tasting as meat that has been hung, and all types of meat are hung to dry age, not just wild game. Hanging done right doesn’t allow the meat to rot at all; it tenderizes it through the release of lactic acid from muscle cells. All your examples in b) are lactofermentation, a method of food preservation that’s practiced everywhere. I think the difference is that cultures in hot places take the additional step of adding antimicrobial spices during preparation just before eating, and that acts as a safeguard against contamination by pathogens which can sicken you without necessarily producing putrefaction of the food.
Deviled eggs with paprika sprinkled on top are more likely to tempt younger eaters than the plainer version, based on n = a handful of kids at a picnic.
There exist bland paprikas with no flavor but the paprikas used in most ethnic dishes are not flavorless. I think bland paprika is mostly used for color in 1950s U.S.-type dishes.
ties in to my crackpot theory that the number one way to fight racism in the US is to somehow get everyone speaking exactly the same accent.
I’m sorry, but isn’t this the opposite of what we would conclude? The children punish the person with the same accent, but erratic behavior, more than they punish the outsider who presumably knows the least. As a result, the more difficult it would be to distinguish blacks and whites, the more likely extreme punishment would rain down on blacks (as an example).
We can imagine the most extreme scenario: Pop A is 90% and Pop B is 10%, and the only way you can tell which population someone is from is through a genetic test. However, pop B has a statistically higher chance of engaging in erratic behavior, like robbery. Per the study you discuss right before the above quote, people in Pop A are much more likely to approve of extreme punishments for robbery than if Pop B was composed of Green Skinned, heavily accented lizardmen.
I think we could conclude that if white people perceived black people as “part of the same culture”, they might punish black people for violations of shared etiquette norms more (though no more than they punish other white people). But they would be less likely to think of them as an “outgroup” and be easily moved to exclude or hate them. We forgive outsiders minor cultural differences, but overall we are more likely to favor co-ethnics.
I think the decreased demand for conformity among outgroups only goes so far. If an Arabic man didn’t say “God bless you” when someone sneezed, we would put it down to ignorance of US norms and not consider him rude. But if Arabic people commit terrorism, we get at least as angry as when native-born Americans do the same.
When I visited the UK, the black people I met (descendants of 20th-21st century African/Caribbean immigrants) spoke with the “normal” middle/upper class London accent. It gave a very different feeling from the US where African-Americans all speak with a very different accent.
I suspect that’s like saying Michelle Obama shows black Americans don’t have distinctive accents, which makes sense from my UK perspective but I suspect not to an actual American. Whilst there are plenty of black Britons who speak with standard accents, like their white co-accentees they are generally the professional middle class (my tribe I suppose, although my accent is noticeable to foreigners). Most black Britons speak with distinctively-tinged versions of local accents.
Note though that the UK black population is far more recent than the US one, so something different is happening in terms of integration. The accents are now British with a Caribbean tinge, not Caribbean with a British edge. I suspect in terms of culture the immigrant communities are converging with the local ones (note also the relatively high rates of what anthropologists might label out-group marriage). Ethnically distinct accents haven’t quite gone yet though.
This may be technically true but would be highly offensive to a modern person who goes out of their way to combat racism. You would be accused of some combination of defining racism too narrowly and engaging in some form of cultural imperialism, i.e. blackmailing a certain group of people with the threat of unjust animosity if they failed to adopt a set of norms.
That last part is exceptionally uncharitable, but the fact that someone would necessarily be less trusting of another person who acted very differently is only an easier pill to swallow than the distrust based on physiology for a very abstractly minded individual. Most people are more concrete-bound, so it’s unlikely they could tell the difference. Taking it as a fact of life would smack of defeatism.
I think the definition of racism became absurdly wide when “it’s in their genes” and “no, it’s absolutely not in their genes, it is a learned behavior” both fall into the same category.
But I agree, you would be accused of something absurd like this. Loyalty signaling trumps logic.
> I think the decreased demand for conformity among outgroups only goes so far.
You might think that, but it isn’t what the quoted text would imply.
Social deviancy at its most extreme is having high crime rates, including murder, and an unhealthy relationship with law enforcement. And the study concerns deviancy. At best I think your “idea” would have taken away one (fairly weak IMO) signal that precipitates things like white flight. In that instance, whites would stay longer in neighborhoods that are slowly being taken over by minorities, which would for a time keep the overall deviancy levels of that particular neighborhood lower, but the end result would likely still be the same. Because people don’t like living in such neighborhoods.
Even if you are correct about militarism caused by terrorism, this is a category difference. Terrorism committed by Arabs or another foreigner is an act of war, humans instinctively know what to do in war. On the other hand, look at the extreme tolerance for crimes by minorities so long as they appear contained(ish). If the North Side looked like the South Side we would put every resident of Wrigleyville in a cage. Rotterdam’s child sex enslavement operation has gotten approximately the same amount of coverage as Robert Kraft’s handjob. Etc etc
Rotherham?
Not the same place.
Following the basic template used by the US Census Bureau, which defines “Hispanic” as an ethnicity, not a race, my definitions are:
A racial group is an extended family that has more coherence and continuity because it is partly inbred.
An ethnic group is a people who are united by some traits that are typically passed down within biological families, but don’t technically have to be: e.g., language, religion, cuisine, values, legends, etc etc
For example, a Korean baby adopted by an Italian-American family on Staten Island would likely grow up to be racially Korean and ethnically Italian-American.
One thing to keep in mind for evaluating what Henrich is talking about is that “race” and “ethnicity” overlap to a sizable extent.
Interestingly, accents are less of an ethnic trait than many other traits because children typically get their accents not from their parents but from their school and playground friends.
If race is who your ancestors were and ethnicity is who raised you, then class might be, in effect, who you go to school with. So, for example, in England, the ruling class’s Received Pronunciation accent is a product of attending “public” boarding schools like Eton and Harrow.
Anecdotal evidence in support of your crackpot theory for addressing racism.
I grew up in Memphis, but my parents are both from the NYC area and have mid-Atlantic (standard American) accents. As a little kid, I did not like anyone with a southern accent. I often perceived them as mean or at least strange. And I must have made a generalization about black people and southern accents, because I very distinctly remember the first time I met a black person who didn’t have a southern accent. I was four and we were visiting my aunt in Connecticut. One of her friends was black, and I remember just being shocked when she spoke and it wasn’t what I expected. (The cues of her being a family friend probably helped, but back home I had a black baby sitter who my parents liked but who I didn’t like, so I don’t think it was sufficient.) I remember liking this northern woman instantly, and literally reformulating a racist opinion — as a four-year-old — from “I don’t like black people” to “I don’t like people with southern accents”.
Didn’t a lot of the other people around you in Memphis have southern accents? Also, NYC area mid-Atlantic isn’t really standard American.
I am not sure about this, but perhaps the difference is not in the magnitude of the non-conforming behavior, but whether it has an impact on me (or my tribe).
For example, if another tribe had a habit of stealing pennies from people of my tribe, that would make me angry, because it is an action that hurts me, however slightly.
For an opposite example, if a subgroup of another tribe decides to murder another subgroup of the same tribe, I don’t give a fuck, as long as my tribe is safe. (Well, I do, but that’s because my culture taught me to consider every human being a part of my tribe in the wider sense.)
If the other tribe worships a wrong god, and they all end up in hell, it’s their problem, not mine. If they eat forbidden food, it’s their problem, but my tribe will have a norm to never eat at their places. In general, I want to believe that their non-conforming behavior will only have consequences on them, not me. (But how can I know what behaviors do or don’t have an impact on me? Well, my culture will tell me.)
I am on board for this plan provided only that the accent chosen is New Zealander
>and that for a long time people thought temperature was an abstraction and could never truly be measured.
This idea came up here in the links thread two years ago. I tried to confirm it then, and found no evidence for it and much to the contrary. I think it’s wrong.
Hmmmm. I got it from this tweet by Siberian Fox, who I usually trust. He suggests there’s some backup in the book “Inventing Temperature” by Hasok Chang, but I haven’t read it and can’t confirm that’s true, or which parts it backs up.
I’ve removed the statement since it’s unproven, but I suspect it’s probably true.
The temperature at which water freezes and the temperature at which it boils are both objective facts readily observed, which I would expect to encourage the idea that temperature in general is an objective fact, even if intermediate temperatures are not as easily measured.
Yes, they’re objective facts, but it can be tricky to measure them (especially boiling) if you don’t set up your apparatus properly. Superheating is one example.
From Hasok Chang (http://www.sites.hps.cam.ac.uk/boiling/):
People haven’t heard of this? I was taught that water boils at different temperatures at different pressures (up a mountain, in a pressure cooker) so early in my life that I don’t even remember when.
The change in the boiling temperature of water when you dissolve salts in it was one of the first experiments in my science class back in 5th grade.
There are discussions over whether to add salt to cooking water before or after the water boils.
For me “water boils at 100°C” was always a shorthand for a more complex topic. Like everything I was taught in school.
To me this sounds like one of those cases where a philosopher does not know how to language.
I think this is fairly well known. Maybe not the full extent of it, but in particular I think most educated people know at least that salt and altitude can change those thresholds.
Erm. . . “once widely known”? I’m pretty sure every chemist and physicist knows this. This was my 7th grade science fair project.
Yes, freezing and boiling of water are objective phenomena. Metacelsus has pointed out that boiling is fuzzy. Freezing is also fuzzy. Impurities (most obviously salt) can shift the freezing point as well.
We know that people’s emotional states vary and some can be objectively observed. People cry and laugh. People blush. Heart rates rise with anger. Despite these objectively observed phenomena, we don’t expect a numerical scale of emotions.
Utilitarians do in fact expect a numerical scale of emotions.
David, this seems like an area you’d have special knowledge in. What do the ancient recipes you collect use as references for how hot to cook stuff? Do they get beyond boiling? Do they ever say how hot the oil should get, for instance? When do temperatures start appearing in recipes?
Actual temperatures don’t appear in recipes before 1600, which is, with some minor exceptions, my late end limit. I’m pretty sure that less specific terms do, that there are references to putting something in a hot oven or simmering something, but I would have to look through the recipes for examples.
Chang’s book came up in the discussion I linked to, with several people including myself not finding any backup in it. According to Chang (beginning of Ch. 2) very few scientists who studied temperature in the 17-18th centuries had any doubt that it was a physical quality that could be assigned a definite numerical value. He only briefly talks about attitudes before the invention of the thermometer in the late 16th century, but even then there’s no mention at all of this “could never be measured” attitude.
https://en.wikipedia.org/wiki/R%C3%B8mer_scale#Importance
> Women, especially after the first fetus appears, are permitted, and sometimes even encouraged, to seek another man, or men, to have sex with in order to provide ‘additional fathers’ for their future child. Anyone who contributes sperm to the fetus is a secondary father.
Wait, so Patria is real (or something kind of like Patria)?
> Polyamory
My impression (though now that I’m actually writing the comment I’m less sure) was that at least some polyamorous people saw polyamory as a thing that works for them but might not work for everyone, in which case a general cultural norm (for the entire civilization) would be less relevant.
…and perhaps, more generally, if people can determine if they’re different from typical people for whom traditions work, they might be more inclined to break tradition and use reason (or some alternate tradition) instead.
> On sonority:
That seems like it’s similar to this map (red = complex syllable structure, which means that syllables can have consonant clusters at the beginnings or ends of syllables ≈ less sonorous; white = simple syllable structure, which means every syllable is either a vowel or a consonant + a vowel ≈ more sonorous).
> My impression (though now that I’m actually writing the comment I’m less sure) was that at least some polyamorous people saw polyamory as a thing that works for them but might not work for everyone, in which case a general cultural norm (for the entire civilization) would be less relevant.
This is by far the most common belief among people I know who are polyamorous. You might not hear about it proportionately often, because people who think everyone should be polyamorous are more likely to be talkative about it.
To clarify, it’s more that I couldn’t actually remember many poly people saying one way or the other whether it’s just for them or something they thought everyone should do (I vaguely recall hearing both positions at different points, but I’m not certain about that).
Patria was my attempt to worldbuild a partitive paternity culture in the present day.
A norm that people should decide what sort of relationships work for them is, in fact, a norm.
> A norm that people should decide what sort of relationships work for them is, in fact, a norm.
My point was more about what happens if one is in a culture without that norm. You could want to change tradition or not follow it yourself either because you think it’s a bad idea in general (like an arbitrary superstition or whatever), or because you think it’s not necessarily a bad idea for most people but is a bad idea for your specific case. The argument that these posts are talking about addresses the first possibility but not the second. In other words, the fact that monogamy has been around for a long time is (according to this argument) evidence for the idea that monogamy is good in general, but not necessarily for the idea that monogamy is good for one specific person who knows they’re different in some relevant way.
I suspect this is a result of the polyamorists we know being part of a larger culture that insists on monogamy. They depend on the norm of “mind your own business” and have to be careful to practice it themselves and not look like they’re telling other people how to live.
Agreed.
If you’re pioneering a new social practice that rubs against the dominant culture in your area, it would seem to be a very good idea to say something like “Don’t worry, this is just for me, nobody is ever going to make you do it!”
If you’re unclear about that, you probably get chased out of town with torches and pitchforks.
4 year MFF poly relationship here. IME the norm is not “mind your own business” so much as “if it has informed consent among all participating adults, it’s OK”.
One place where the difference between these attitudes is illustrated is in the context of cheating. Poly people, myself included, tend to take a very dim view of cheating, if only because we have to explain so often why polyamory is different from cheating. It violates the norm of informed consent. Other types of consensual “sexual deviations”, though, have to be tolerated, on pain of massive hypocrisy.
Is polyamory Chesterton-compliant? I think not, and it bothers me sometimes. If we look at it from a Categorical Imperative perspective, if everyone who could was in a relationship like I’m in, we’d have a lot of single, unhappy males and an unstable society. I think the only kind of society that could stably tolerate polygyny would be one that has very high adult male mortality. Obviously that is not our situation, and it is not the typical historical situation.
If there were a relatively balanced amount of polygyny and polyandry, I think it could work. But I do not expect this to ever be the case. On the occasions when my partners were with other men, even for short periods, it caused me intense jealousy — a problem poly women, in general, do not seem to have so badly. This makes perfect biological sense. I had to tolerate and even encourage it because I don’t want to be a hypocrite, but I didn’t *like* it. Since IMO we have no reason to expect balanced polygyny and polyandry, I conclude widespread polyamory probably would be bad for society, and so a Chesterton prohibition on it makes sense.
You could also have widespread male homosexuality, bisexuality, or asexuality. I realize there is an argument that cultural norms that foster male homo/bi/asexuality may be less feasible to create than polygamous norms, but these are options other than high male mortality.
I doubt the claim that humans don’t have quantized speeds of running. I for one definitely have two different gaits of walking, and find walking at an intermediate speed between the two more difficult than either of them. This is most noticeable if I want to chat with someone while walking, because then I have to walk at such an intermediate speed to not get too far from them. The effect is somewhat less pronounced now that I’ve gained weight, but it’s still present. I’m not good at running, so I can’t say anything certain about it, but I suspect that at least some humans have different running gaits, even if the cause is not the specific one that Joseph Henrich mentions about quadrupeds.
Completely agree. Maybe it is not linked to respiration or panting, but there is definitely a natural gait (probably linked to weight, leg and maybe arm length) at which it is comfortable to walk. It’s not the same for everybody, and it’s infuriating when walking at a slower speed, and tiring when you need to walk faster.
Same for running, but less marked, I think.
And it’s funny you mention that, but I also feel the natural gait is less pronounced now that I am older. When I was young, it was super annoying to be forced to walk at a lower speed, so I tended to stop and then walk faster instead of adopting a slower constant speed. Less so now, but I still feel a preference.
How much of that is down to the way you learned to run, though?
Not much. Frankly I think the author takes the responses he got when asking “how did you learn to run” far too literally. A lot of cultures expect juniors to thank their elders/tutors for their accomplishments, regardless of the teacher’s actual impact on those accomplishments. It’s just expected that skilled individuals remain modest and thank their teacher. Even better if the teacher is dead. The typical senpai/sensei relation in Japan, but you see that in other cultures too, just less formalised: saying “I’m just naturally gifted” is almost universally frowned upon as lacking modesty.
Asking an average (or even better, a mediocre) runner “how did you learn to run” will probably result in him getting angry at your sarcastic question, but if you manage to get past that, they will say “nobody taught me, running is natural, but I’m just not good at it / I have this disability / I ate too many hamburgers”.
I’m not sure what you’re getting at; competitive runners definitely learn technique consciously, and I assume mediocre runners are mediocre in part because they haven’t learned the proper technique.
As a former distance runner, I also had quantized running speeds, because I found it difficult to breathe out of rhythm with my footsteps. (By implication, I breathed under conscious control. I never thought to ask any of my teammates if they did the same, so it might just be me.)
That said, it wasn’t too hard to vary speeds somewhat, because I could always shorten a breath to match a shorter stride, or add in an extra breath on every third one, say.
Running became more fun for me once I switched breathing methods. I moved away from the inhale on right footfall to alternating and that had some immediate benefits. First, the stitch-in-the-side problem went away. Second, my times sped up and third I found it pretty easy to zone out on the alternate breathing while still enjoying the scenery.
For most of my life I had problems with a stitch in my side while running. I played two sports (soccer and wrestling) through high school and college, and it didn’t affect me during competition, but during conditioning training it invariably appeared. I couldn’t figure out why it happened, or what I was doing wrong, so I just tried to tough it out and rarely ran more than 5 miles.
I started syncing breathing with footfalls after reading a random post on an internet message board and it literally changed my life. I do still begin inhaling on right footfall, and it works for me, but I can vary my pace fairly easily by breathing according with the number of steps. For anything longer than a few miles I breathe out for 4 steps, in for four steps. To increase the pace I breathe out for three steps and in for three steps. If there is an inefficiency somewhere I increase or decrease footfalls/breath as necessary. I can also alter stride length somewhat in order to change pace.
My wife thinks I’m a crazy person, and I can’t listen to music anymore while I run because it throws me off completely. Still, I can run far faster for far longer than I ever could. It also helps me get into a zone where time just seems to fly.
I’m not sure about you, but I don’t walk/run nearly enough to be this specialized. I’d suspect that someone who is almost always walking or running would have much more fine-tuned control.
For example, I used to be a swimmer, and at my peak I definitely had ‘speeds’. So much so, in fact, that for longer distance events, my coach (and most coaches) would use the lap counters to signal “speed up,” “slow down,” “maintain pace,” etc.
Even so, human running is probably much less quantised.
Is there anyone here that does a sport where your chest moves as much as a running horse’s? Rowing or something?
Personally, I can either walk or run comfortably at basically any speed below my absolute max. There’s a bit of an awkward zone around the transition from walking to running, and the low end is basically limited by how patient I’m feeling that day. (There’s a meditation where you walk across a route taking sixty seconds per step, in constant motion. I never did this for long, but I did do it.)
I have no idea how much of this is teachable/learnable, but it mostly seems to do with adjusting length of stride, only secondarily with speed of gait.
The claim that humans can switch running speeds while maintaining equal energy efficiency is commonplace both in discussions of human evolution and discussions of running. I was surprised that this merited a “did you know?”
I myself make quite fine adjustments to my running speed over the course of a race. Runners are usually advised to do this and I’ve never heard anyone suggest that there’s any physiological difficulty in doing so, although it does require a certain amount of self discipline.
WRT polyamory, has anyone done a comparison of male mortality rates in monogamous vs polyamorous societies?
Also, my uneducated impression had been that the vast majority of non-monogamous societies were polygynous rather than polyandrous, but the scenario you are describing seems to be the latter. Is there any distinction drawn in the book between the two?
Yes, the vast majority are polygynous.
Or, more accurately, monogamous with elite polygyny.
>Chris argues that his measures of fighting ability and community persuasiveness provide the best proxies for dominance and prestige
>Not surprisingly, both dominant and prestigious men tended to get their way at group meetings, but only prestigious men were respected and generous.
Kevin Simler also sees dominant people as feared and disliked, obeyed only through fear. I don’t understand it. Why can’t the good fighter be seen as the protector of the tribe and thus respected? Or if not of the tribe, at least of his supporters? Does nobody ever think “I want this super scary guy to lead us because he will scare our enemies”?
I mean, one of the sad facts of the modern world is that the Mussolini types were very often genuinely popular. People did not just follow them out of fear. Rather they thought their fearsomeness, their dominance, would protect them from their enemies.
If you have no enemies, you want to be led by nice guys/gals. But proportional to the amount of fear you have from real or imaginary enemies, don’t you want that kind of leader who tends to scare everybody shitless?
And there is also Stockholm Syndrome. Which in my interpretation means we get scared, we don’t like feeling like a coward, so we rationalize the scary guy is actually somehow a good guy.
So no, I don’t understand why they say dominant, fearsome folks are always unpopular.
I am saying this because I had sorta asshole bosses… whom I liked. I am not a particularly submissive or masochistic type. But they were kind of reassuring. They emitted a “don’t worry, everything is being taken care of” vibe. They emitted a “don’t be afraid” vibe: nothing bad will happen, because he wants us to be successful and will just break down every obstacle to it. They were assholes in the dominant sense. Not liars or anything. Quite honest. Just “my way or the highway”. No discussions but curt orders. Sometimes yelling or threats of the sack. One had the feeling that no recession or anything else could shake our jobs. We just do our bit the way he wants it and he will ensure all works out. Sort of how old-style Strict Fathers were still popular with their kids. The ol’ tyrant was somehow reassuring, with a nothing-bad-can-happen-to-the-family-under-his-watch kind of vibe. My grandpa was still of that type. Had all the empathy of a rock but was just as solid and reliable.
EDIT: idea: dominant leaders can be popular when dominance is called for; dominant non-leaders are unpopular. “Gimme your sandwich or I’ll kick your butt” is unpopular; “You! Shut up and follow me! We will kick the sandwich-thieves’ butts!” can be popular. When a dominant leadership style is called for, it will be interpreted as skill and therefore given prestige.
I think some people overly segregate emotions into “true” and not-so-true categories, based on whether they seem baseless or obviously expedient. When in reality our feelings are evolutionarily conditioned responses.
What I’m saying is that it makes perfect sense to me to respect those who can easily kill me, because not respecting them would be evolutionarily disadvantageous. Or how some imagine a purely irrational “true” love and would not accept the desire to seduce people with resources or good genes as such. It’s kinda hard to explain what I mean, but I think I’m getting the point across.
I don’t think this quote implies that dominance and prestige are anti-correlated, perhaps not even that they are not positively correlated. It just means that prestigious men are respected (whether they are dominant or not), but men who are dominant but not prestigious aren’t.
> Fore’
If that last letter is yearning to be an “e” with an acute diacritical mark, you can create it by holding down the “Alt” key and then pressing the keys (on the numeric keypad), in sequence, for its eight-bit extended ASCII character representation—in this case, 130.
Foré
Or just select the desired symbol in a word processor, and copy-and-paste it.
I prefer the compose key, myself, which is a more discoverable way to enter interesting characters. For instance, é is compose e ‘, and a proper emdash is compose – – -.
It’s a Linux thing natively, but there are software solutions for Windows and Mac. On my Windows PC, it runs in a little tray applet.
This is true on Windows, not on the Mac OS.
Macs, of course, have a much easier way of typing an acute-accented letter:
Simply press Option + e (that is, hold Option and press e), then press the letter key (e, in this case).
(Option + u for umlaut, Option + backtick/tilde for grave accent, Option + i for circumflex, Option + n for tilde)
Thank you. I’ve been looking for this information for months, if not years. I’d figured out some of it by exploration/good luck, but never saw any list putting it all together.
Google wasn’t much use – that may have been where I found the instructions for umlaut originally – but never a complete list. Except of how to do this on MS Windows ;-(
Of course what Apple wants you to do these days is use a pop-up menu (available in some but not all apps) which also contains emojis. I’ve had to resort to ugly tricks like creating an email message, putting accented characters into it, then cut and pasting them into programs that didn’t support this interface. (And generally getting them in a different font than the rest of my text, if I wasn’t very careful – but that’s another bitch about Mac UI choices.)
Also, for those using non-apple keyboards with apple computers, the key apple sometimes labels “option” is generally labelled alt on them.
Unless they changed it in more recent versions, if you enable the input menu in System Preferences (Keyboard > Input Sources in the version I have), there should be an option “Show Keyboard Viewer”. Click that, then hold down option and/or shift to see what symbols you can type with the current layout.
If you need accents often, you can tell Windows to use the US International Keyboard, which lets you type ‘e to make é, “e for ë, `e for è, etc., ‘c for ç, and ~n for ñ. It also lets you make letters by pressing the right Alt key (sometimes called AltGr): AltGr+[a|e|i|o|u] → [á|é|í|ó|ú], AltGr+z → æ, AltGr+? → ¿, AltGr+s → ß, AltGr+t → þ, etc.
It does mean that when you need one of ` ‘ “ ~ by itself, you have to type a space after it to prevent it from combining with the next letter, if that happens to be a letter it can combine with.
I just read the summaries of The Secret of Our Success and Moral Mazes ( https://thezvi.wordpress.com/2019/05/30/quotes-from-moral-mazes/ ) back to back. And corporate behavior in Moral Mazes makes a lot of sense if interpreted as collective social learning.
> People in hotter climates use more spices, and more of the most effective bacteria killers. In India and Indonesia, for example….
The answer to the question as to why British food is so legendarily bland: Blame the weather!
“Get a tan from standing in the English rain….”
Nope. Danes can cook well. They have good pastries because when their pastry bakers went on strike, they imported Austrian ones, who had learned the art from Italians. But even their steaks are tastier. Or seafood. Is there even anything Swedes do better than Danes? 😀 (They have an old historic rivalry that has by now turned into a playful, humorous, harmless one, so it is fun to stoke those flames a bit.)
I believe (was told by a history friend who studies this sort of thing in an academic capacity) that bland British food is a fairly recent stereotype, that came out of post-war rationing. In the UK, rationing continued long after the war and was actually strictest in 1947. After rationing, people just sort of continued cooking the kind of things they’d cooked under rationing, and food was horribly bland and basic in the UK for a few decades. I’m told that, before this period, the UK was notable for having a particularly rich and varied culinary culture, which makes sense if you think about what kind of places were in the British Empire, and so what kind of spices and exotic wares were traded back to the UK at preferential prices inaccessible to much of the rest of Europe.
Then, in the 70s and 80s, everyone just went mad for no adequately explained reason and started encasing everything in clear savoury jelly, and so I guess we decided that having a reputation for bland food was at least better than anyone digging too deeply and noticing that awkward phase :p
My maternal grandparents left the UK long before world war II. My comfort food tradition is English. Grandma didn’t use “hot” spices beyond black pepper and maybe ginger, but food was varied and tasty. She made things like Yorkshire Pudding routinely (hard to do now, with cuts of meat having as little fat as can be managed), organ meats, and traditional english fruit cake, Christmas pudding etc. Spices were used in baking, but meat dishes tended to have their spices on the side, in the form of e.g. mustard pickles. (Grandma’s mustard pickles were wonderful.)
What I notice is that typical Americans actively dislike many of the foods I listed above. Fruitcake is a byword for something no one wants to eat. They mostly don’t eat organ meats. Etc. etc. They happily eat mince pies if I make them (at one point, I brought one to every pot luck I attended), but never ever cook them.
So that’s my anecdatum on this subject – and also that I’d always figured the British food reputation was just “we don’t like their food, so let’s laugh at them” from outside – until for some strange reason, Brits still in the UK went crazy and rejected their native cuisine for e.g. Indian.
I used to live by a traditional UK pie shop that has been open since 1902. It was extremely bland and greasy, in much the same way that other traditional British dishes are bland and greasy. Also, somehow everyone else in Europe figured out how to make a good sausage except for the UK. Good cakes, though.
Two thoughts:
1. The Irakia example reminds me of the westernising element of the Meiji restoration; if this is something that’s fairly common in small societies it seems much less weird.
2. Cultural evolution is probably an argument against polyamory (although by no means a decisive one): if 85% of societies used to be polygamous/polyamorous, but it’s now only found in small, isolated populations, that implies it’s fairly heavily selected against. The counter-argument would be that monogamy is potentially just a random Indo-European thing that piggy-backed on whatever drove their expansion and confers no more advantage than having a giant snake in your religion (the Jews, Japanese and Chinese all seem to have abandoned their [quasi-]polygamy under European influence, which could be an argument for either side). The point is, “this used to be a thing but isn’t” should be more of a warning than an advert.
1. Of course it is! The natural words would be multiamory or polyphilia. But all this needless mixing of Greek and Latin–what could be more unnatural?
(Kids these days. Next they’ll be telling me that the Latin Mass’s “Kyrie Eleison” is actually Greek….)
2. Yup, there’s a theory that monogamy is socially adaptive. Relative to a world of harems, monogamy “redistributes” wealth (in the form of women) down the status scale of men, thereby “domesticating” a larger share of low-status (and otherwise marriageable) men who would otherwise be prone to organize with their unmarried peers into antisocial gangs.
But all this needless mixing of Greek and Latin–what could be more unnatural?
Such dysfunctional behaviour. What sociopaths would do such a thing? There’s only one cure for this: genocide. Take off, and nuke the site from geostationary orbit.
I blame the homosexuals.
“Yup, there’s a theory that monogamy is socially adaptive. Relative to a world of harems, monogamy “redistributes” wealth (in the form of women) down the status scale of men, thereby “domesticating” a larger share of low-status (and otherwise marriageable) men who would otherwise be prone to organize with their unmarried peers into antisocial gangs.”
Or just abandon the society as a whole, refusing to defend it or produce wealth for it. The harem is wonderful, until you get into the details of how to pay for it. This is why, despite official sanction, it was never really common in Islamic societies, with only around 3% of families being polygamous. This explains the modern attraction to polyamory, which promises a way to socialize the costs.
It’s not so much “a random Indo-European thing” as it is specifically a cultural norm of Rome which was exported via Christianity.
Polygamy, on the other hand, is not found only in small, isolated populations today. Rather, it’s found in quite a substantial fraction of populations: most Muslim-majority societies practice polygamy, and specifically the structure of polygamy supported by Islam. Most societies today have marital structures derived from the influence of either Christianity or Islam.
In that context, I think the success of monogamy looks pretty contingent on, and probably not determinative of, the cultural success of a particular religion.
For how large a fraction of families?
An Englishman who lived in Cairo in the 19th century and wrote a book about it (The Modern Egyptians) reported that, among those he knew, only about one family in a hundred was polygamous.
This is kind of obvious in peacetime, unless you have really significant sex-skewed emigration. The major alternative for an Islamic society is making outgroup men the ultimate losers of polygyny by sending the single men out on jihad.
The fundamentalist Latter-Day Saints practice sex-skewed emigration within the United States by kicking out young adult male believers who are lower status than the women-monopolizing Elders. This makes them a very different and less healthy subculture from the mainstream LDS.
It was selected against in intensive farming societies because inheritance of land was a hugely important factor in success in those. But if you’re a farmer in a place where population densities are limited by disease and warfare, or a hunter-gatherer, or a white-collar professional, that’s not really a huge concern.
Polyamory was never common. For every primitive society with some weird cultural meme like “partible paternity,” there were 99 without it. They clearly are not the primitive human norm, any more than the Shakers or the Mormon fundamentalists are the norm among industrial-era humans.
Is this the same reason having only one child is correlated with loyalty?
It still seems weird to me, because the Meiji Restoration strikes me as one of the central examples of “uproot all our traditions in favor of Progress with a capital P,” which is supposedly the thing we’re not supposed to be doing. They’re doing basically the same thing, but if you say “A council of wise elders proposed a new tradition” it sounds like cultural evolution and if you say “A state decided to modernize its government, military, culture, and everything else” then it sounds like rational progress.
Maybe the salient fact is that someone else tried it first, but there are lots of cultural traditions out there that seem to be doing just fine until suddenly they aren’t.
Right, but they were trying to copy pretty much an entire proven cultural model, one which was by its own standards relatively conservative and had centuries of tradition behind it. That’s different from “let’s adopt these ideas some progressive thinkers tell us will work better and have maybe been alpha-tested in a commune of three-sigma weirds”.
And then they decided that they couldn’t adopt the entire Western model and would have to instead create a new cultural fusion with no real precedent. That part didn’t work out terribly well, and after losing a nuclear war(*), a foreign military governor was given a mandate to create yet another fusion culture on top of the remains of the last. Eventually they’ll get it right?
* Very progressive of them; history’s more tradition-bound cultures had only ever managed to lose conventional wars.
The quote on quadruped running seems inaccurate in several important ways compared to the primary references Henrich cites, which are short and very interesting in their own right: Bramble and Carrier (1983) and Carrier (1984). In particular, humans still typically lock their breathing rate to their strides; it’s just that animals nearly always lock them 1:1, while humans are able to switch to other ratios, like 1:3, 2:3, 1:4, etc., and this is thought to allow us to maintain efficiency at varying speeds. Henrich also doesn’t mention that humans are at the outset metabolically disadvantaged for running, in that we spend twice as much energy (!) per unit mass to run the same distance as quadrupeds. That we are still able to run down prey by endurance running is called the “energetic paradox” by Carrier. Liebenberg (2006) provides a vivid description of what endurance hunting looks like in the Kalahari.
I have a question about spices and food. There are lots of people with weird historical interests that might be able to answer.
If spices were used for antimicrobial properties, this would explain why hot regions use more spices than cold regions. There is another difference in infection risks. Some foods spoil faster or with more risk. Meats and seafood are dangerous. Grains and root vegetables are safe. Do traditional recipes spice foods that are dangerous more heavily than safer foods?
The cookbooks I have most experience working with (~10th-15th centuries) tend to confound this by being collections biased very heavily towards fancy, high-status recipes – otherwise why bother to write them down? Meat is more expensive and hence higher status, as is spicing, so cookbooks with fancier spicing have more meat. Even within a cookbook, if you look at marks of fanciness that don’t involve spices, something elaborate like a peacock with its feathers put back on after roasting or a mythical creature made up of several animals sewn together is more likely to be meat. (Witness all the examples I could think of were.) The fanciest non-meat dishes you get tend to be the vegetarian fake meat/”in time of lent” dishes. That means you’ve already got a confounder for your question – meat goes with spicing the same way saffron or gold foil would, antimicrobial properties aside.
You can filter for this somewhat by sticking to one cookbook, ideally as low-status a one as you can find. If you want to look for yourself, http://www.daviddfriedman.com/Medieval/Medieval.html has links to many cookbooks online, either originally in English or in English translation. Menagier is probably a good source for this, being late enough to be an actual middle-class cookbook. (Upper-middle-class, I think, but not upper class, which almost all our cookbooks are.) But that won’t entirely do it; a single cookbook still has higher- and lower-status dishes.
I have just done a quick look through the cookbook I translated in college (Italian, 15th-century), and the only recipes I have yet found that do not call for “fine spices” or “ginger” or something spice-related are in fact recipes that do not use meat. Some of them call for herbs or wine instead, but a couple only use salt and sugar. Note: one of them uses milk and butter, and is for serving alongside meat, but has no spices of its own.
Make of that what you will.
That information seems to match. The milk and butter example is not high risk. Milk would have likely been fresh. Butter keeps well in the short term and is produced year round. It lasts for several days at room temperature, and would have been kept in a buttery/root cellar with a relatively low temperature.
That sounds as though you think a buttery is a place for keeping butter. The name came from it being a place for butts—barrels containing beer and other liquors.
The Sir Mix-a-lot song is about beer?
It is also the source of the word butler.
The servant in charge of the buttery was closest to the lord, so over time he became head of the servant staff.
Oops. I knew it was a cool room for storage and I accidentally pattern matched it to storing food rather than wine cellar.
Hmmm. All butts in the sense of barrels are the same size, so it is odd to make such a big deal of their being big, but they are also pretty big (over a hundred gallons), so he may just be commenting on that fact. He also says he doesn’t like flat butts, but that also makes sense for beer. So, maybe.
On “youngest first” deliberation: the test you propose to compare the effectiveness of youngest first, oldest first, and secret ballots in guessing the number of jelly beans in the jar wouldn’t test the right parameter. Youngest first isn’t best for voting, it’s best for deliberation on complex problems where no one person can reasonably be expected to have the perfect answer. Specifically, youngest first is the best method when those discussing the problem have different knowledge. The lack of inhibition associated with speaking before you know what the boss wants produces better, more candid information from inexperienced people.
Your point about shaping the discussion space is valid but backwards. The discussion space is locked down more irrevocably by elders than by juniors.
My reason for saying so is a background flying as crew on combat aircraft. Over thousands of flights I found that the best problem-solving technique, by far, was to solicit input from crew-members in inverse experience order (when time constraints allowed). Within the constraints of whatever problem we were trying to solve I never noticed that conversations were limited in scope by the inexperienced; the experienced crew-members would invariably bring up factors that significantly changed the understanding of the problem.
It would be very interesting to try and test whether ‘inexperience first’ or ‘experience first’ produces consistently better results for specific types of problems, but the problems that need to be tested will be inherently difficult to create an apples-to-apples comparison on, because the technique works best when the problems are extremely complex and the stakes for being wrong extremely high (life or death both in my case and the case described in the post).
My suggestion for researching this would be in hospital ORs. This tool more than any other, I feel, would decrease surgeon error.
My gut instinct here is that “inexperienced first” would not be the best method for solving problems quickly, but would be superior for training purposes.
I think it’s pretty uncontroversial to say that one really effective teaching method is to give the student a task, have that student attempt the task, and have the teacher observe the attempt and offer guidance and/or improvements afterwards. “Good job killing that deer! If you had aimed your arrow at this spot, though, you would have hit its heart, and it would have died faster, so you wouldn’t have to work as hard to haul it home. Do that next time.”
If your experienced rabbis get to speak first, they’re going to give the right answer most of the time, and the inexperienced rabbis are going to either nod and agree with them, or lose status. You’re optimizing for fast consensus.
If your inexperienced rabbis are forced to speak first, they can’t hide behind the experienced rabbis, so they have to actually throw out ideas and reasoning. They may be wrong and get called out for being wrong by later, wiser rabbis, but each rabbi in line until the final one has another, even older and wiser rabbi waiting to pounce on any questionable statements of his, so they have some incentive to be kind to the younger, dumber rabbis, lest they get a taste of their own medicine.
And by the time you reach Rabbi #70, probably the whole issue has been hashed out at length, so the last few rabbis in line have gotten to hear all the arguments and evidence and had plenty of time to make up their minds and compose suitably wise statements to give in their decisions.
Agreed.
If you’re sitting around pondering Jewish theology, “inexperienced first” seems fine enough to solicit new ideas and perspectives in an open and collaborative environment, etc.
If a patient is bleeding on the table and will die in minutes without treatment, I’d prefer the youth to just shut up and do what the most experienced surgeon in the room advises, thanks.
I think we’d also need to test moral/legal issues separately. In your aircraft, everyone was interested in solving the issue. The least experienced crew member could be wrong, but they didn’t risk offending their superiors with some idea. They also couldn’t reasonably anticipate how a more experienced member would act.
This isn’t the case for the courts. A senior judge might be known as a strong believer in Virtue Ethics. The junior member could anticipate offending them with a Utilitarian argument, and not make it. This could corrupt the process.
Passage and commentary are rather incoherent.
If humans are bad at intentionally designing institutions, then we’d be bad at intentionally designing for institutions that maximize “variation and selection” as well.
And secondly, as the quote already suggests, the winnowing process (the use of reason to evaluate each competing institution) has to be deliberately guided at some object level beyond simply “variation and selection”. You don’t get to escape this by creating seasteads or archipelagos. At some point, even in the libertarian fantasy, a human still has to ask “what should the institutions of this seastead I’m designing be?”, and we’re back where we started. As much as libertarians are familiar with the classic bad argument of “humans are inherently corrupt, therefore we need a state”, surely you must recognize the fallacy in its inverse: “humans are bad at planning, therefore we need to delegate all power to the property owners”.
And lastly, it’s not like we don’t already have competing political and economic systems we can examine, and the resulting political debates centered around what “dumping the losing [ideas], keep the winning [ideas]” exactly entails. It seems odd that the grand moral, the master plan, simply boils down to “create a world with many competing cultural, political and economic systems”. Uh, alright…done? Now for the winnowing phase?
I second the Quillette commenter. Woke culture rewards being a victim and by extension being extremely vulnerable to the slightest nonsense, as illustrated by the latest YouTube controversy. What’s interesting is that, based on the electroshock experiment, the woke are probably honest when they claim to be deeply hurt by the words of some guy on the internet, because they’ve been culturally conditioned to suffer pain at the slightest provocation. I used to believe that they were lying for attention, but I will adjust my priors in this regard.
To be fair, the unwoke side appears to be extremely vulnerable to the paralyzing fear that someone, somewhere, may be taking their own offspring to a drag queen story hour.
You are correct.
On one side, people are expressing discomfort at the idea of introducing children to sexual themes.
On the other side, you have people calling for (and committing) political violence, de-platforming, and firing of anyone with different views.
It’s basically the same thing.
For the benefit of easier interpretation, I suggest marking irony, or just always saying your actual argument, rather than the opposite. Especially since I don’t think introducing children to sexual themes is immediately parsed as obviously wrong by everyone on the forum.
Hm. You seem very harmed and distressed by the words of some guy on the Internet. Do you think this is because you have been culturally conditioned to suffer pain at the slightest provocation?
(For the record, I’d be inclined to argue that all politicians should have milkshakes thrown on them now and then, lest they start putting on airs.)
Fair point.
I would hope though that the violation of norms around political violence, which milkshaking is, would bring about some condemnation from the reasonable people on the side of the people doing the milkshaking.
It’s specifically designed to be at the somewhat acceptable edge of political violence, and to bring about an escalation, either from the side on the receiving end of the milkshaking, or from the side doing the milkshaking, once they figure out they can get away with it. It’s a really bad idea.
I can agree that politicians should be embarrassed as much as possible, but the right way to do that is to make fun of them, not to have random protesters throw projectiles at them.
So you realize that this tweet is, in and of itself, not very upsetting, but you’re concerned that it might lead to violence?
I feel like you have more sympathies to Carlos Maza than you might realize! In the general case, “fag” and “queer” are words that, for many LGBT+ people, are associated with threats of violence: they’re the words they called you when you got beat up in high school; they’re the words a large group of men yell at you when you make the wrong decision about which street to walk down holding your partner’s hand. Many LGBT people are afraid that the use of those words contributes to a culture where they’re afraid of violence for their sexual orientation.
In the specific case, there has been at least one case of a Vox writer getting doxxed and having to flee his home with his family for fear that people would attack him. The concern that Steven Crowder’s fanbase might escalate from unpleasant tweets to threats of violence is in fact practical!
@Ozy
Any evidence for this? None of the stories about the doxing mention this at all.
Couldn’t you have picked someone who didn’t get doxed and not have anyone actually show up just after he defended a bunch of hooligans actually showing up at someone’s house, yelling, making violent threats, and spray-painting their driveway?
Neither Yglesias nor Carlson may have deserved what happened (or maybe they both did if you’re feeling less charitable), but what happened to Carlson was much worse judging by your own linked articles.
I feel like your object level example is kind of working against your point. Tweets occurred and nothing happened (as is usually the case). Furthermore, twitter braveguy flees (supposedly? I don’t see that in the article) out of fear of disorganized and possibly violent hooligans showing up at his house… after defending disorganized and possibly violent hooligans showing up at his opponent’s house. It seems more like a parable about why not to condone in person violent threats against those you dislike than whatever point you are trying to make with it.
“The point is, larger and more interconnected populations generate more sophisticated tools, techniques, weapons, and know-how because they have larger collective brains.”
The book The Rational Optimist explores this observation and hypothesis in more detail. The author (Matt Ridley) claims that the mechanism by which larger population leads to more technology is trade and specialization. The more things you can trade for, and the more people you can trade with, the stronger the incentive to specialize and focus on making very good copies of one particular tool or object, or on becoming very good at hunting one kind of animal, and trading for the rest of what you need. But you need a critical mass of individuals with different specializations in order for all of the necessary areas to be covered.
I remember a particular case Ridley discussed, a part of (I believe) Australia which was separated from the mainland and became an island. The people living there showed signs of regression in terms of technology and skill level; when the last specialist in making fishhooks or spears died, the others had to take over without ever having really learned how, and so their tools declined in quality. It isn’t just about developing techniques; whether they persist is determined by the laws of economics. If this book didn’t mention it, I’m sure Henrich would love to hear about it.
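That skill-loss dynamic is easy to see in a toy model. Below is a minimal sketch (my own illustration, not anything from Ridley or Henrich; the parameters are invented) of the usual “collective brain” argument: each generation copies its best practitioner imperfectly, and only large populations reliably produce an imitator who matches or exceeds the model.

```python
import random

def simulate(pop_size, generations=50, alpha=2.0, beta=1.0, seed=0):
    """Toy 'collective brain' model, loosely in the spirit of the Tasmania
    argument; all parameters are made up for illustration.

    Each generation, everyone tries to copy the most skilled individual.
    Imitation falls short on average (-alpha) but is noisy (beta), so a
    lucky imitator occasionally overshoots the model. Larger populations
    draw more lottery tickets, so they are more likely to contain someone
    who matches or improves on the best; small ones tend to slide back."""
    rng = random.Random(seed)
    best = 10.0                       # current best skill level in the group
    for _ in range(generations):
        copies = [best - alpha + rng.gauss(0, beta) for _ in range(pop_size)]
        best = max(0.0, max(copies))  # skill can be lost entirely, never negative
    return best

for n in (10, 100, 1000):
    print(f"population {n}: best skill after 50 generations ~ {simulate(n):.1f}")
```

With these made-up numbers the group of 10 loses the skill entirely while the group of 1000 keeps improving, which is the qualitative point, not a quantitative claim about any real society.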
I’m surprised yours is the only comment in this whole series mentioning Matt Ridley, particularly with Scott’s pull of “collective brain” here (being a chapter title Ridley used).
The Rational Optimist is so uplifting and persuasive. Can’t recommend it enough to anyone who hasn’t read it.
Minor thread of excerpts here
The Inuit look fairly white to me: https://www.google.com/search?q=inuit&tbm=isch
Seconded; I’m also confused by that passage.
The people in those images don’t look like they’re all the same race to me. I think some of them probably have a good bit of European blood.
The Inuit are well known for tanning, so that could explain why some are much darker than others.
They look like they span roughly the same range of paleness as Chinese people or Mediterranean people to me for comparisons. They don’t look as uniformly pale as Scandinavians.
The first critique I see is that, of the 5 points, 4 clearly could affect pig populations.
The second and more important critique is that the majority of his examples are of the least successful groups. It’s very hard to take the string of logic that says ‘the least successful often imitate the most successful’ and then jump to the conclusion that imitation is primary in our success.
Point 3 is that they imitated things that had to do with pigs; they didn’t simply imitate all aspects of the superior tribe but speculated that they ought to treat their pigs like the other tribe did.
Pre-1964, black protesters would attempt to be seated at an all-white restaurant, but be turned away. The next day, one of the same protesters would return wearing a suit, glasses, and turban, speaking refined English, in the company of other suited white men who introduced him as the Ambassador from Tunisia (or whatever). They would be seated at once—demonstrating that the prohibition on serving “coloreds” was not really about color.
Granted the actual black diplomats often had it no better:
And from the first reply:
In Albert Camus’s L’Etranger (The Stranger), the protagonist seems callously indifferent to a variety of social norms—he ate, smoked, and refrained from crying throughout his mother’s funeral, for example. When he is later charged with murder, these transgressive acts are trotted out as evidence of his guilty character.
It seems to me that he is saying the exact opposite of what you are, or something at best orthogonal. Your contention was that “racism” wasn’t definable and that nobody actually “was” a racist (because who is actually a murderist anyway).
His contention is that “racism” is easily understood as bog-standard inter-ethnic strife. What Northern Irish Catholic was actually for the segregation and murder of Protestants? As it turns out, enough of them to keep “The Troubles” going for quite a while. And vice versa.
This seems like an attempt at a massive retcon of your previous position.
I feel like this was easily countered by the already extant urge to trust older people more than younger. As the quoted passage mentions, this method is nearly the opposite of how things would naturally go. So I think the way the author wants us to look at this is as a culturally evolved mechanism to balance out the natural drive to just do what the elders say.
Consider, too, how eldest-first might play out. A case is presented. The elders weigh in. The younger ones effectively defer. One: over time, the younger opinions appear consistently worthless, as they add nothing to what the elders said. So why hear them? Two: as the eldest die off, the next eldest now have no opinion to defer to, so they end up winging it – asking themselves WWED? and voicing whatever answer pops into their heads. But this is a recipe for judging cases by playing a mental telephone game with the founding ancestor. Youngest-first stands to promote a model of what the law says ought to be done, rather than what the elder says.
It’s genuinely not clear to me which model is superior – especially since there’s nothing stopping younger judges from using WWED? to form their opinions when asked first, either. They might be worse at it, since they had less time with $Elder. But that still just means it’s a question of whether WWED? beats WWLD?. I’d like to say WWLD? is better, but that might also be my modernist bias. I’m guessing the Sanhedrin preferred WWLD?, so they organized their methods to make that more likely, but that again might be my bias.
It’s also not clear to me how this might have evolved. It’s not like there was a rival Sanhedrin that was pushing eldest-first, and one of them seemed to be putting out better quality judgements. At least, I don’t think there was. But it might still be adaptive. Maybe the judges felt some sort of endorphin rush from hearing novel opinions that could still be in line with existing law. Or maybe it was sheer accident, propped up by traditional inertia. It would’ve been interesting to study Sanhedrin records to see how many final judgements ended up going with the youngest versus the oldest, but I doubt we have records complete enough for stats.
And as Scott says, there’s an interesting experiment to be run here between EF, YF, and secret ballot. I don’t think jelly bean guessing would be probative, however; I think law requires different cognitive skills from spatial logic.
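To make that experiment concrete, here is a minimal sketch (my own toy code, not anything from the book or the Mishnah) of how EF, YF, and secret ballot could be compared in simulation. It assumes each judge gets an independent noisy read on the case, and that a judge who has already heard more senior colleagues defers to their majority most of the time; all the numbers (23 judges, 65% individual accuracy, 90% deference) are invented for illustration.

```python
import random

def run_trial(n_judges=23, p_correct=0.65, defer_to_senior=0.9, order="youngest_first"):
    """One toy case. The 'right' ruling is 1; each judge's private read of the
    case is correct with probability p_correct. Judges announce opinions in
    the given order; a judge who has already heard more senior colleagues
    usually defers to their majority, otherwise votes their own read."""
    truth = 1
    # judge 0 is the youngest, judge n-1 the most senior
    signals = [1 if random.random() < p_correct else 0 for _ in range(n_judges)]
    speaking_order = (list(range(n_judges)) if order == "youngest_first"
                      else list(range(n_judges - 1, -1, -1)))
    heard = []   # (judge_index, announced_vote) so far
    votes = {}
    for j in speaking_order:
        senior_votes = [v for i, v in heard if i > j]
        if senior_votes and random.random() < defer_to_senior:
            ones = sum(senior_votes)
            if 2 * ones > len(senior_votes):
                vote = 1
            elif 2 * ones < len(senior_votes):
                vote = 0
            else:
                vote = signals[j]   # seniors tied: fall back on own read
        else:
            vote = signals[j]
        votes[j] = vote
        heard.append((j, vote))
    majority_ruling = 1 if 2 * sum(votes.values()) > n_judges else 0
    return majority_ruling == truth

def accuracy(order, defer_to_senior=0.9, trials=20000):
    return sum(run_trial(order=order, defer_to_senior=defer_to_senior)
               for _ in range(trials)) / trials

if __name__ == "__main__":
    random.seed(0)
    print("youngest-first:", round(accuracy("youngest_first"), 3))
    print("eldest-first:  ", round(accuracy("eldest_first"), 3))
    print("secret ballot: ", round(accuracy("youngest_first", defer_to_senior=0.0), 3))
```

Under these toy assumptions youngest-first collapses into a secret ballot (nobody has heard a senior before speaking), while eldest-first wastes most of the independent information in a cascade. Real deliberation involves reasons as well as votes, of course, so this only captures the information-cascade part of the story.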
Youngest-first seems plainly better to me. The young are trying to copy the elders in either case, but having them go first forces them to copy the elder-approved methods of reasoning rather than just repeating the result of that reasoning. By the time they’re elders themselves, they’ve had a lot more experience reasoning than they would have in the elder-first system.
In terms of how it came about, it seems to me like some elder saw a problem in how things work under the default elder-first system and came up with a solution, and elders in other communities noticed that it worked well and copied it.
We don’t actually know what the real Sanhedrin did. There aren’t any “Sanhedrin records”. The only source we have for the rule under discussion is one sentence in the Mishnah, a collection of rules and prescriptions written down at least 200 years later and strongly biased to describe the past glory of the Sanhedrin in the best light possible, while simultaneously providing arcane theological justifications for every detail. That sentence also says that for civil cases the order is the other way around (start from the greatest judge).
200 years later than what? The Mishnah was written at the beginning of the third century. The last binding decision of the Sanhedrin was in the fourth century and the Sanhedrin was finally disbanded in 425.
200 years later than the Temple was destroyed and the Sanhedrin ceased to exist in its independent form in Jerusalem. Yes, it was later reconstituted elsewhere and continued semi-officially or sometimes underground, but it didn’t have the same authority, and there’s no reason to think its traditions and customs were preserved. I think the laws and customs of the Sanhedrin described in the Mishnah are the Tannaim’s idea of the ideal Sanhedrin of the glorious times, and they explicitly refer to the Sanhedrin in Jerusalem here and there (describing a particular chamber of hewn stone, etc.). I should perhaps have said 150 years rather than 200, but it doesn’t change the argument.
Henrich shouldn’t be angry at all – just bought the book because of your wonderful review… 😉
This brings to mind Contrite Tit for Tat. Perhaps a model with collective knowledge and a more fluid reputation metric would accurately simulate the development of cooperation in small communities.
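For anyone who hasn’t run into it: Contrite Tit for Tat is the standard noise-tolerant variant of Tit for Tat, in which a player who knows its own defection was accidental accepts one round of punishment instead of retaliating. Below is a minimal sketch of that textbook version (my own toy code, with the usual prisoner’s-dilemma payoffs 5/3/1/0 and a made-up 5% chance that an intended cooperation misfires); the fluid, community-wide reputation I have in mind would need gossip layered on top of the purely dyadic ‘standing’ used here.

```python
import random

C, D = "C", "D"
PAYOFFS = {"CC": 3, "CD": 0, "DC": 5, "DD": 1}   # usual illustrative PD values

def contrite_tft(my_standing, opp_standing):
    """Contrite Tit for Tat: defect only when I am in good standing and the
    opponent is in bad standing; otherwise cooperate (including 'contritely'
    after my own accidental defection)."""
    return D if my_standing == "good" and opp_standing == "bad" else C

def update_standing(action, own_standing, opp_standing):
    """Cooperation restores good standing. A defection puts you in bad
    standing unless it was justified punishment (you were in good standing
    and the opponent was in bad standing)."""
    if action == C:
        return "good"
    return "good" if own_standing == "good" and opp_standing == "bad" else "bad"

def play(noise=0.05, rounds=200):
    standings = ["good", "good"]
    scores = [0, 0]
    for _ in range(rounds):
        intended = [contrite_tft(standings[0], standings[1]),
                    contrite_tft(standings[1], standings[0])]
        # implementation noise: an intended cooperation occasionally misfires
        actual = [D if a == C and random.random() < noise else a for a in intended]
        new_standings = [update_standing(actual[0], standings[0], standings[1]),
                         update_standing(actual[1], standings[1], standings[0])]
        scores[0] += PAYOFFS[actual[0] + actual[1]]
        scores[1] += PAYOFFS[actual[1] + actual[0]]
        standings = new_standings
    return scores

if __name__ == "__main__":
    random.seed(1)
    print(play())   # two contrite players recover quickly from noise
```

After an accidental defection the offender accepts exactly one round of punishment and both players return to mutual cooperation, instead of the endless vendetta that plain Tit for Tat falls into under noise.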
“The UCLA anthropologist Dan Fessler argues that during middle childhood (ages 6-9) humans go through a phase in which we are strongly attracted to learning about fire”
When I became a Boy Scout at age 11, I soon was obsessed with building the perfect fire by structuring my logs, kindling, and tinder so adroitly that I could always light it with just one match.
In fact, I still am. If not actively dissuaded, I’ll spend an hour assembling the perfect fire while my wife puts up the tent and does all the real work of the campout.
Funny, I read it as more like the Shasta County norms. Particularly “Norm violators have their property stolen and destroyed” (emphasis mine); it seems to be important for a justice system that the incentives to the enforcer are not too strong (else they lead to framing of innocents, entrapment etc. to produce more opportunities to punish — classic rent-seeking behaviour). So at least some of the property stolen from the violator is destroyed rather than being kept by the enforcer.
See also The Virtues of Inefficient Punishment; I’m not sure whether it goes into more or less detail than Legal Systems….
I suspect the quote just means that the violator’s property is stolen or destroyed depending on whether it’s feasible to take it and on whether it’s valuable to the person taking it, not that property is taken and then intentionally destroyed when there would be an incentive to keep it. (E.g. you can’t steal a house or a farm, but you can burn it down. As far as I understand, they don’t consider it fully legal to take the norm violator’s property, they just turn a blind eye to it, so you probably couldn’t evict them and take their house.)
Judging by the description of Shasta County norms in Order Without Law, butchering the neighbor’s cattle that have repeatedly been allowed to stray into your field and damage your crops is not permitted by the norms; driving them far away so that the owner will have a hard time finding and retrieving them is.
Scott, I would love to see a review and discussion of David Deutsch’s Beginning of Infinity. It goes pretty far in depth on many of these ideas.
I’d also like to say I love the book review posts here in general.
I had the same nagging concern throughout. This stuff is all wonderful and super-interesting, but it hangs together so neatly that I can’t help thinking about researcher degrees of freedom, sample sizes and the replication crisis almost every time a study is quoted.
I’m not really sure what to do about that fact, since I haven’t the energy to bottom-out the studies themselves, but I suppose I place a bit less weight on the conclusions than I would have, say, ten years ago.
The combination of a preference for the ingroup with a greater tolerance for outgroup diversity is very interesting. It suggests we have to choose:
– conform and get the benefits of being accepted by the group
– segregate and get to be diverse, but without that acceptance
Social Justice then seems to be a desire for both, which may be impossible.
Presumably, with sufficient prestige you can have both (after a fashion, at least). I’m thinking of ascetic, monastic traditions – especially those found in India. In more flippant but easier-to-understand terms: Lord Buckingham is eccentric, you are just a loon.
(Generic “you”, of course.)
I conjecture that divergence from group norms is more easily tolerated if there is an expectation of benefits from association with the divergent party.
I don’t think the lords were accepted as equals by the commoners. Respected, yes. Tolerated, yes. But not treated as one of their own.
Of course, if you have many more goodies, you may not care.
My personal opinion is that social justice does not actually desire diversity, at least on things they believe to be important.
They want racial diversity, because they insist that race doesn’t matter.
They don’t want diversity of say, political commentators in newspapers, because they believe that political opinions in newspapers matter a lot.
To me it seems like the goal is to create conflicts… between men and women, or blacks and whites… just to take away attention from the actual power: the 0.01% vs the 99.99%.
If your group is not racially diverse, you will get attacked for not being racially diverse. But if your group is racially diverse, you will still get attacked for… something else. The possibilities are infinite; there is always some category that is missing in your group… which makes you a Nazi, of course.
The message is that you (and your friends) are never okay, and the goal is to make you hate your neighbor.
There are essentially two types of people attracted to “social justice”:
* cluster-B people, who believe that everything is someone else’s fault (and the movement tells them where to point their finger);
* depressed people, who suspect that everything actually might be their own fault (and the movement is happy to tell them “yes, and you are actually underestimating the magnitude of your inherent evil”).
In other words, those who enjoy yelling at others, and those who believe they deserve getting yelled at.
Having gone through many similar thought processes, I actually came to the opposite conclusion.
We have been, since the Industrial Revolution, on an unprecedented curve of growth and development, where cultural priors tend to become obsolete faster than they can be updated. We also possess tools and methods of memetic propagation of unprecedented scale and an almost self-improving level of effectiveness. In this setting, preferring legible norms, whose reasoning and scope of applicability are inseparable from the norms themselves, over illegible traditions becomes a moral imperative as well as the best chance of a benignly sustained society.
I wonder if you could still make an argument, based on the book, for cultural racism. For example, in the Southern United States, the cultural taboos around integration and miscegenation seem to have outlasted (at least in some quarters) slavery, Jim Crow, etc… Leading to unfortunate incidents… https://www.ajc.com/news/local-govt–politics/georgia-mayor-under-fire-for-alleged-remarks-about-black-job-candidate/Qr403ZLnF5VuB8CzpngLjP/
This is, I suppose, just the standard SJW argument re-framed around this book, but the context you provided made me wonder if the claim could still be true that the US (or parts of it) are racist without racism being particularly intrinsic. And in that sense, the US might be a special case due to the unique way the economics of slavery built up a cultural framework here.
US society is heavily racist in the sense that it imposes racial categories (including in its fight against chauvinism) on divisions that are instead ethnic.
Thus, a half-Kenyan, who was raised in Jakarta by a white woman from Kansas, can be considered to be “of the same people” as someone whose ancestors were brought to America as slaves centuries ago, who has specific cultural traits and dialect.
And this is based purely on their blood, or rather their color of skin, and in a way that demarcates the category of “Caucasian” as the “purer” race in cases of (recent) miscegenation.
And then Americans online tell me how they have nothing against “African-Americans”, they just won’t tolerate “improper English” and hip-hop fashion in their neighborhood. And this is supposed to be progressive!
Is this a reference to Jaynes & the Bicameralism hypothesis? I would love to see Scott’s thoughts on that. Fringe academia can be so fascinating.
The Coen Brothers edit their movies together under the assumed name Roderick Jaynes. I presume this surname was chosen as a tribute to Julian Jaynes, a philosophy instructor at Princeton who published his famous “Bicameral Mind” book while Ethan Coen was a philosophy major at Princeton.
The partible paternity thing doesn’t sound like polyamory to me–it sounds like a kind of prostitution (for lack of a more neutral term), wherein sexual access during pregnancy is traded for child support later. The passage doesn’t imply any kind of lasting relationship between sexual partners.
You could say the same about monogamy, no?
Not at all. Monogamy very much establishes a lasting relationship–ideally, a lifelong one. This South American custom establishes a relationship of sorts between the other man and the child, but he isn’t the woman’s husband or anything. He’s just this guy who had sex with her a couple of times. He very likely has the same not-a-relationship with several other women.
Oh, I misunderstood your point. I didn’t think polyamory is anything stricter than “non-exclusive sexual mores orthogonal to marriage”.
I read somewhere that the languages with the largest inventories of distinct sounds are in Africa, among them the ones with the !click! consonants. Since humanity originates from Africa, these are also the oldest language families.
As you move away from Africa, you can trace how languages lose sound after sound, until you get to Hawaiian, which is the language with the fewest sounds, almost all vowels.
I’ve half-heartedly tried to find any mention of this perhaps overly cute theory again, but failed. The “sonority” theory here reminded me of it. Does anyone know anything, one way or the other?
All (natural) languages are, in all likelihood, equally old, just as genetic lineages are.
The absence of click consonants from all languages outside of South Africa is indeed a fascinating phenomenon, but there is nothing analogous for other types of phonemes.
Languages gain and lose sounds all the time, and different sounds are gained and lost at different rates. Click consonants are seemingly difficult to gain, since they aren’t observed anywhere outside of South Africa, but they are also difficult to lose, since they are stable in the languages that have them.
There is a tendency of gaining sounds found in neighboring languages, so that some of the Bantu languages that are recent arrivals to the Kalahari basin have also obtained click consonants in inherited vocabulary through contact with Khoisan (not a real language family) languages.
I found some stuff about this by Googling “phonemic diversity”.
At a bit of a tangent, I think English has one click in its vocabulary, if you’re willing to stretch the meaning of “vocabulary”: it’s usually written “tsk”, and means something like “that was dumb”.
More intriguingly, some German speakers apparently sometimes realize /tk/ (which occurs only at morpheme boundaries AIUI) as /k!/.
It’s not an insane theory. You can get a broad sense of something like that happening from the World Atlas of Language Structures’ consonant and vowel inventory size maps. New distinctions do form, however, muddying the waters (e.g. General American English /r/ [ɹ], the postalveolar approximant, which is a rare sound). And there seem to be other pressures like population size that are hard to disentangle.
In re population size, there are similar proposals about grammatical structures, namely that the more cosmopolitan the society, the more isolating (as opposed to agglutinating) its languages will grow to be over time, etc.
The part about language adapting to the ecological context is an idea that surfaces periodically but never achieves significant support in the linguistic community.
I am pretty sure that it is wrong. In general, languages exhibit subtle typological convergence across huge stretches of territory, which is hard to quantify. This makes it very hard to tell how mutually dependent the points of your dataset are, so the effective number of observations is much lower than the researcher assumes. I consider this a likely problem with other comparative studies too (of religion, culture, economy, psychology, …).
The forces that shape language are, from my experience of comparative linguistics, the language’s history, random chance, other languages that surround it and ease of communication, in that order.
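The effective-sample-size worry is easy to demonstrate with a toy simulation (mine, not taken from any of the studies under discussion): put ‘languages’ on a line, let a trait and an unrelated ecological variable each vary smoothly between neighbours, then run a naive correlation test that treats the languages as independent. With invented parameters (100 languages, neighbour-to-neighbour correlation 0.9), spurious ‘significant’ associations turn up far more often than the nominal 5%.

```python
import math
import random

def ar1_series(n, rho):
    """A spatially smooth variable: each 'language' resembles its neighbour."""
    x = [random.gauss(0, 1)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + math.sqrt(1 - rho ** 2) * random.gauss(0, 1))
    return x

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def false_positive_rate(n=100, rho=0.9, trials=2000):
    """How often a naive test calls two *independent* variables correlated."""
    r_crit = 1.96 / math.sqrt(n)  # nominal 5% threshold assuming n independent points
    hits = 0
    for _ in range(trials):
        trait = ar1_series(n, rho)        # e.g. a phonological feature
        environment = ar1_series(n, rho)  # an unrelated ecological variable
        if abs(pearson(trait, environment)) > r_crit:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    random.seed(0)
    print("independent languages   :", false_positive_rate(rho=0.0))  # roughly 0.05
    print("autocorrelated languages:", false_positive_rate(rho=0.9))  # far above 0.05
```

Phylogenetically and spatially explicit methods exist to correct for this, but the basic point stands: the headline sample sizes in these cross-cultural correlations overstate how much independent evidence there really is.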
As a more general criticism, isn’t this genre of proposing a general theory and then citing science papers that support it doomed to fall prey to confirmation bias? Let’s check back in 10 years and see how many of these findings get replicated. I don’t expect it to fare much better than “Thinking, Fast and Slow”.
(my secret agenda in this post is actually to attack the emphasis placed on confirmation bias and to push a positive, constructivist theory of knowledge against rationalist falsificationism)
I provisionally reject the antimicrobial explanation, but I am intrigued by the question. I have observed young men competing to see who can eat the spiciest food, which I find curious, because it seems irrelevant to any domain we should care about. I’m not aware of any similar behaviour in any other species.
I do find it plausible (although of course plausibility is insufficient for accepting a proposition) that people come to enjoy the taste of chili by reinterpreting the pain signals as pleasure or excitement, and that something similar might occur with masochism.
I don’t know much about vanilla sex, but it seems to be a common experience that stimulating another person’s genitals can move unintentionally from pleasure to pain by a minor miscalculation in the degree of pressure being applied, which suggests that the two types of sensation are closely linked even among those who dislike pain. I also note that (while it is difficult to form accurate impressions about so private a matter) there does seem to be an increasing recognition that people who are certainly not lifestyle masochists like to incorporate some mild pain (light slapping, hair-pulling etc) into their sex-lives. The word “spice” is even sometimes used in this context.
I don’t have the impression that any of this troubles any other species. ‘Reinterpreting’ a pain signal seems potentially maladaptive. Maybe some combination of our highly developed nervous systems and moving away from our ancestral environment has left us with a propensity to fire pain neurons too frequently, and the ability to ‘reinterpret’ them is an adaptation to this?
I come back to endurance running. I recently ran a marathon and I found the last fifth pretty unpleasant. Somebody else advised me, “The last 10k are hell at the best of times,” which at face value seems an extraordinary statement. Why would people engage in a leisure activity of which a quarter (about an hour at typical speeds) is hell at best? The injury rate is also high and Christopher McDougall quotes Dr Joe Torg (said to be the “godfather of sports medicine”) as saying, “the human body is not designed for that kind of abuse.” But if the endurance running hypothesis is correct, the human body is ‘designed’ (i.e. evolved) for precisely that. At least that provides an answer to the question “Why do we do it?” We do it because we’ve evolved to do it.
What I think might be going on here is that we’ve only very recently (in evolutionary terms) adapted to endurance running. Bipedal locomotion is characteristic of the genus, but the earlier species certainly couldn’t run far. Even H. neanderthalensis was likely an inferior runner. So endurance running pushes H. sapiens against the limits of its retained traits.
Indulging in some pure speculation, I wonder whether successful endurance hunting requires a high degree of pain tolerance for this reason and whether this might be the reason for our ability to ‘reinterpret’ pain signals.
Saying the human body is designed for endurance running doesn’t mean that it’s designed to run 26 miles at relatively high speeds. Persistence hunting requires [citation needed] much less distance, and much less speed.
Marathons are actively bad for you. People who run marathons are much healthier, on average, than those who don’t, but that’s because the category “those who don’t” includes a lot of couch potatoes. People with a similar level of fitness to marathon runners who don’t run marathons are better off.
That being said, with a little bit of tweaking your hypothesis might still hold, I’m just being anal retentive about the marathon point.
I do think that in general one of the things cortex allows is the reinterpretation of signals. And humans are the cortical species par excellence.
Wikipedia suggests that persistence hunting involves running for “up to five hours” over a distance of “up to 35km”, which works out to roughly 7 km/h, so you appear to be correct about the speed. It isn’t much less distance, though, and IME running 35km at 150% of that pace (around 10.5 km/h, albeit at cooler temperatures and with modern gear) is vastly less fatiguing than a full marathon.
Being able to reinterpret signals seems really useful for adapting to a wide range of environments, although I don’t think that can be the initial selective pressure for the trait, because by definition it evolved in some particular ancestral environment.
I’m late to the party, so I’m not sure anyone will even see this, but I find it peculiar how the book’s author spends chapters describing how strongly culture influences everything. It can make us reinterpret pain as pleasure (spices); it can even make us commit suicide. But there is not a single word about how culture could, maybe, potentially, have some influence on sexual preferences?
P.S. For people who read things literally and would start saying that the book does talk about it: I mean specifically culture influencing the gay/straight divide.
Forgetting technology remains a potent force today!
A very high-tech example: when the Apollo program disbanded and was replaced by the Space Shuttle program, much of the knowledge of how to build capsule spacecraft was lost. Personnel were scattered to different aerospace companies; techniques were not recorded; some experts have died. The new Orion program is spending a lot of effort on the forensic reconstruction of the Apollo techniques, among other practices—recovering a technology that has been forgotten.
But this is true for plenty of mundane handicrafts from past eras. Does anyone today know how to make a window of animal horn? Do they know how to make it as well as the best medieval masters? If you wanted to install a pneumatic tube message system in your brand new office building, could you get it? What if you wanted to run a postal service that delivered four times a day in urban areas; could this be done?
Some of these forgotten “technologies” are primarily social—surely the carts and pants and so on of a pre-telephone postal worker could be recreated easily enough. But there are complicated cultural elements that might be harder to recover.
There are two major factors that separate our society from these models. One is literacy. This allows technologies to go into hibernation when they might otherwise have been lost. Yet much can still be lost! The use of seawater in the recipe for Roman concrete was only rediscovered in the past decade, even though it’s one of the most significant inventions of antiquity. Preserved recipes mentioned water as an ingredient but did not specify seawater. So a hibernating technology is a lot like a “dead” language preserved through texts. It’s easy to imagine everything is there, but rather harder to get the whole thing working again.
The second factor is replacement. We have lost the technology of making horn windows; but we have glass windows we consider to be superior in every regard; and in fact, there are parallels in all of the above. So we’re much more sanguine about technology loss, and indeed, we may not even notice that old technologies have been lost when new ones supplant them. But the forgetting occurs nevertheless.
Is this any different from the idea that “race is a social construct” (apart from potentially mapping different words to the same concepts)?
Man, pretty much every one of these anecdotes makes me think “replication crisis”!
For example, https://en.wikipedia.org/wiki/Nominative_determinism certainly suggests that the business of names is far from settled…
I do wonder which half of these claims will be considered “obvious nonsense” in thirty years.