[Previously in sequence: Epistemic Learned Helplessness, Book Review: The Secret Of Our Success, List Of Passages I Highlighted In My Copy Of The Secret Of Our Success. Deleted a controversial section which I still think was probably correct, but which given the number of objections wasn’t provably correct enough to be worth including. I might write another post giving my evidence for it later, but it probably shouldn’t be dropped in here without justification.]
Years ago, I wrote about symmetric vs. asymmetric weapons.
A symmetric weapon is one that works just as well for the bad guys as for the good guys. For example, violence – your morality doesn’t determine how hard you can punch; they can buy guns from the same places we can.
An asymmetric weapon is one that works better for the good guys than the bad guys. The example I gave was Reason. If everyone tries to solve their problems through figuring out what the right thing to do is, the good guys (who are right) will have an easier time proving themselves to be right than the bad guys (who are wrong). Finding and using asymmetric weapons is the only non-coincidental way to make sustained moral progress.
The parts of The Secret Of Our Success that deal with reason vs. cultural evolution raise a disturbing prospect: what if sometimes, the asymmetry is in the wrong direction? What if there are some issues where rational debate inherently leads you astray?
Maybe with an unlimited amount of resources, our investigations would naturally converge onto the truth. Given infinite intelligence, wisdom, impartiality, education, domain knowledge, evidence to study, experiments to perform, and time to think it over, we would figure everything out.
But just because infinite resources will produce truth doesn’t mean that truth as a function of resources has to be monotonic. Maybe there are some parts of the resources-vs-truth curve where increasing effort leads you the wrong direction.
When I was fifteen, I thought minimum wages obviously helped poor people. They needed money; minimum wages gave them money, case closed.
When I was twenty, and a little wiser, I thought minimum wages were obviously bad for the poor. Econ 101 tells us minimum wages kill jobs and cause deadweight loss, with poor people most affected. Case closed.
When I was twenty-five, and wiser still, I thought minimum wages were probably good again. I’d read a couple of studies showing that maybe they didn’t cause job loss, in which case they’re back to just giving poor people more money.
When I was thirty, I was hopelessly confused. I knew there was a meta-analysis of 64 studies that showed no negative effects from minimum wages, and a systematic review of 100+ studies that showed strong negative effects from minimum wages. I knew a survey of economists found almost 80% thought minimum wages were good, but that a different survey of economists found 73% thought minimum wages were bad.
We can graph my life progress like this:
This partly reflects my own personal life course, which arguments I heard first, and how I personally process evidence.
But another part of it might just be inherent to the territory. That is, there are some arguments that are easy to understand, and other arguments that are harder to understand. If the easy arguments lean predominantly one way, and the hard arguments lean predominantly the other way, then it will be natural for any well-intentioned person studying a topic to follow a certain pattern of switching their opinion a few times before getting to the truth.
Some hard questions might be epistemic traps – problems where the more you study them, the wronger you get, up to some inflection point that might be further than anybody has ever studied them before.
We’ll get to vast social conflicts eventually, but I want to start with boring things in everyday life.
I hate calling people on phones. I can’t really explain this. I’m okay with emailing them. I’m okay talking to them in person. But I hate calling them on phones.
When I was younger, I would go to great lengths to avoid calling people on phones. My parents would point out that this was dumb, and ask me to justify it. I couldn’t. They would tell me I was being silly. So I would call people on phones and hate it. Now I don’t live with my parents, nobody can make me do things, and so I am back to avoiding phone calls.
My parents weren’t authoritarian. They weren’t demanding I make phone calls because That Is The Way We Do Things In This House. They were doing the supposedly-correct thing, using rational argument to make me admit my aversion to phone calls was totally unjustified, and that making phone calls had many tangible benefits, and then telling me I should probably make the call, shouldn’t I? Yet somehow this ended up making my life worse.
Or: I can’t do complicated intellectual work with another person in the room. I just can’t. You can give me good reasons why I’m wrong about this: maybe the other person won’t make any noise. Maybe I can just turn the other way and focus on my computer and I won’t ever have to notice the other person’s presence at all. Argue this with me enough, and I will lose the argument, and work in the same room as you. I won’t get any good work done, and I’ll end up spending most of the time hating you and wishing you would go away.
I try to be very careful with my patients, so that I don’t make their lives worse in the same way. It’s often easy to get patients to admit they don’t have a good reason for what they’re doing; for example, autistic people usually can’t explain why they “stim”, ie make unusual flapping movements. These movements are distracting and probably creep out the people around them. It’s very easy to argue an autistic person into admitting that stimming is a net negative for them. Yet somehow autistic people always end up hating the psychiatrists who win this argument, and going somewhere far away from them so they can stim in peace.
Every day we do things that we can’t easily justify. If someone were to argue that we shouldn’t do the thing, they would win easily. We would respond by cutting that person out of our life, and continuing to do the thing.
I hope at least one of the examples above rang true for most readers. If not – if you don’t hate phones, or have trouble working near others, or stim – and if you’re thinking “All of those things really do seem irrational, you’re probably just wrong if you want to protect them against Reason” – here are some potential alternative intuition pumps:
1. Guys – do you have trouble asking girls out? Why? The worst that can happen is they’ll say no, right?
2. Girls – do you sometimes get upset and flustered when a guy you don’t like asks you out, even in a situation where you don’t fear any violence or coercion from the other person? Do you sometimes agree to things you don’t want because you feel pressured? Why? All you have to do is say “I’m flattered, but no thanks”.
3. Do you diet and exercise as much as you should? Why not? Obviously this will make you healthier and feel better! Why don’t you buy a gym membership right now? Are you just being lazy?
I don’t mean to say these questions are Profound Mysteries that nobody can possibly answer. I think there are good answers to all of them – for example, there are some neurological theories that offer a pretty good explanation of how stimming helps autistic people feel better. But I do want to claim that most of the people in these situations don’t know the explanations, and that it’s unreasonable to expect them to. All of these actions and concerns are “illegible” in the Seeing Like A State sense.
Illegibility is complicated and context-dependent. Fetishes are pretty illegible, but because we have a shared idea of a fetish, because most people have fetishes, and because even the people who don’t have fetishes have the weird-if-you-think-about-it habit of being sexually attracted to other human beings – people can just say “That’s my fetish” and it becomes kind of legible. We don’t question it. And there are all sorts of phrases like “I don’t like it”, or “It’s a free country” or “Because it makes me happy” that sort of relieve us of the difficult work of maintaining legibility for all of our decisions.
This system works so well that it only breaks down when very different people try to communicate across a fundamental gap. For example, since allistic people may not feel any urge to stim or do anything like stimming, its illegibility suddenly becomes a problem, and they try to argue autistic people out of it. The worst failure mode is where illegible actions by an outgroup are naturally rounded off to “they are evil and just hiding it”. I remember feeling pretty bad once after hearing a feminist explain that the only reason men stared at attractive women was to intimidate them, make them feel like their body existed for other people’s pleasure, and cement male privilege. I myself sometimes stared at attractive women, and I couldn’t verbalize a coherent reason – was I just trying to hurt and intimidate them? I think a real answer to this question would involve the way we process salience – we naturally stare at the most salient part of a scene, and an attractive person will naturally be salient to us. But this was beyond teenaged me’s ability to come up with, so I ended up feeling bad and guilty.
If you force people to legibly interpret everything they do, or else stop doing it under threat of being called lazy or evil, you make their life harder and probably just end up with them avoiding you.
Different problems come up when we talk about societies trying to reason collectively. We would like to think that the more investigation and debate our society sinks into a question, the more likely we are to get the right answer. But there are also times when we do 450 studies on something and end up more wrong than when we started.
A very boring, trivial example of this: I think we should increase salaries for Congress, Cabinet Secretaries, and other high officials. There are so few of these people that it would be very cheap: quintupling every Representative, Senator, and Cabinet Secretary’s salary to $1 million/year would involve raising taxes by only $2 per person. And if it attracted even a slightly better caliber of candidate – the type who made even 1% better decisions on the trillion-dollar questions such leaders face – it would pay for itself hundreds of times over. Or if it prevented just a tiny bit of corruption – an already rich Defense Secretary deciding from his gold-plated mansion that there was no point in going for a “consulting job” with a substandard defense contractor – again, hundreds of times over. This isn’t just me being an elitist shill: even Alexandria Ocasio-Cortez agrees with me here. This is as close to a no-brainer as policies come.
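The “$2 per person” figure is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, where all the inputs are my rough assumptions rather than official figures (535 members of Congress plus roughly 15 Cabinet secretaries, current salaries near $174,000, and a US population of about 330 million):

```python
# Rough sanity check of the per-person cost of the proposed raise.
# All inputs are approximate assumptions, not official figures.
officials = 435 + 100 + 15              # House + Senate + Cabinet
current_salary = 174_000                # approximate current salary
new_salary = 1_000_000                  # proposed salary
us_population = 330_000_000             # approximate US population

extra_cost = officials * (new_salary - current_salary)
cost_per_person = extra_cost / us_population
print(f"${cost_per_person:.2f} per person per year")
```

On these assumptions the raise costs on the order of a dollar or two per American per year, consistent with the claim above.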
But I think I would be demolished if I tried to argue for this on Twitter, or on daytime TV, or anywhere else that promotes a cutthroat culture of “dunking” on people with the wrong opinions. It’s so much faster, easier, and punchier to say “poor single mothers are starving on minimum wage, and you think the most important problem is taking money away from them to make our millionaires even richer?” and just drown me out with cries of “elitist shill, elitist shill” every time I try to give the explanation above. Sure enough, the AOC article above notes that although Americans underestimate the amount Congressmen get paid (they think only $120,000, way less than the real number of $170,000), most of them believe they should be paid less, with only 17% saying they should keep getting what they already have, and only 9% agreeing they should get more.
This is a different problem than the one above – the policy isn’t illegible to the people trying to defend it, but the communication methods are low-bandwidth enough that the most legible side naturally wins. That Congressmen are even able to maintain their current salary is partly due to them being insulated from debate: the issue never really comes up, so the consensus in favor of cutting their pay doesn’t really matter.
And yeah, I know, Popular Opinion Sometimes Wrong, More At 11. But this seems like a trivial but real society-wide case of the epistemic traps above, where if you increase one resource (amount an issue is debated) without increasing other resources (intelligence and rationality of the participants, the amount of time and careful thought they are willing to put in) you get further away from truth.
Are there any less trivial examples? What about turn-of-the-20th-century socialism?
I was shocked to learn how strong a pro-socialism consensus existed during this period among top intellectuals. Socialist leader Edward Pease described the landscape pretty well:
Socialism succeeds because it is common sense. The anarchy of individual production is already an anachronism. The control of the community over itself extends every day. We demand order, method, regularity, design; the accidents of sickness and misfortune, of old age and bereavement, must be prevented if possible, and if not, mitigated. Of this principle the public is already convinced: it is merely a question of working out the details. But order and forethought is wanted for industry as well as for human life. Competition is bad, and in most respects private monopoly is worse. No one now seriously defends the system of rival traders with their crowds of commercial travellers: of rival tradesmen with their innumerable deliveries in each street; and yet no one advocates the capitalist alternative, the great trust, often concealed and insidious, which monopolises oil or tobacco or diamonds, and makes huge profits for a fortunate few out of the helplessness of the unorganised consumers.
Why shouldn’t people have thought this? The period featured sweatshop-like working conditions alongside criminally rich nobility with no sign that this state of affairs could ever change under capitalism. Top economists, up until the 1950s, almost unanimously agreed that socialism would help the economy, since central planners could coordinate ways to become more efficient. The first good arguments against this proposition, those of Hayek and von Mises, were a quarter-century in the future. Communism seemed perfectly straightforward and unlikely to go wrong; the first hint that it “might not work in real life” would have to wait for the Bolshevik Revolution. Pease writes that the main pro-capitalism argument during his own time was the Malthusian position that if the poor got more money, they would keep breeding until the Earth was overwhelmed by overpopulation; even in his own time, demographers knew this wasn’t true. The imbalance in favor of pro-communist arguments over pro-capitalist ones was overwhelming.
Don’t trust me on this. Trust all the turn-of-the-20th-century intellectuals who flocked towards socialism. In the Britain of the time, the smarter you were, and the more social science and economics you knew, the more likely you were to be a socialist, with only a few exceptions.
But turn-of-the-century Britain never went communist. Why not?
One school of thought says it’s because rich people had too much power. Even though the intellectuals all supported communism, nobody wanted to start a violent revolution, because they expected the rich to win and punish them.
But another school of thought says that cultural evolution created both capitalism, and an immune system to defend capitalism. This is more complicated, and requires a lot of the previous discussion here before it makes sense. But it seems to match some of what was going on. Society didn’t look like everyone wanting to revolt but being afraid of the rich. It looked like large parts of the poor and middle class being very anti-communist for kind of illegible reasons like “king” and “country” and “God” and “tradition” or “just because”.
In retrospect, these illegible reasons were right. It’s hard to tell if they were right by coincidence, or because cultural evolution is smarter than we are, drags us into whatever decision it makes, and then creates illegible reasons to prop itself up.
Empirically, as people started devoting more intellectual resources to the problem of whether Britain should be communist or not – as very intelligent and well-educated people started thinking about the problem using the most modern ideas of science and rationality, and challenged all of their preconceived notions to see which ones would stand up to Reason and which ones wouldn’t – they got further from the truth.
(I’m assuming that you, the reader, aren’t communist. If you are, think up another example, I guess.)
There is a level of understanding that lets you realize communism is a bad idea. But you need a lot of economic theory and a lot of retrospective historical knowledge the early-20th-century British didn’t have. There’s some region of the resources-vs-truth graph where you’re smart enough to know what communism is but not smart enough to have good arguments against it – where the more intellect you apply, the further from truth it takes you.
Obviously this ends with everyone agreeing to think very hard about things, carefully notice which traditions have illegible justifications, and then only throw out the traditions that are legitimately stupid and exist for no reason. What other position could we come to? You wouldn’t say “Don’t bother being careful, nothing is ever illegible”. But you also can’t say “Okay, we will never change anything ever again”. You just give the maximally-weaselly answer of “We’ll be sure to think about it first.”
But somebody made a good point on the last comments thread. We are the heirs to a five-hundred-year-old tradition of questioning traditions and demanding rational justifications for things. Armed with this tradition, western civilization has conquered the world and landed on the moon. If there were ever any tradition that has received cultural evolution’s stamp of approval, it would be this one.
So is there anything at all we should learn from all of this? If I had to cash out “think very hard about things” more carefully, maybe it would look like this:
1. The original Chesterton’s Fence: try to understand traditions before jettisoning them.
2. If someone does something weird but can’t explain why, accept them as long as they’re not hurting anyone else (and don’t make up stupid excuses for why their actions really hurt all of us). Be less quick to jump to “actually they are doing it out of Inherent Evil” as an explanation.
3. As per the last Henrich quote here, make use of the “laboratories of democracy” idea. Try things on a small scale in limited areas before trying them at larger scale; let different polities compete and see what happens.
4. Have less intense competitive pressure in the marketplace of ideas. Kuhn touches on how heliocentric theory had less explanatory power than geocentric theory for a while, but was tolerated anyway long enough that it was eventually able to sort itself out and become better. If good ideas are sometimes at a disadvantage in defending themselves, leave unpopular opinions alone for a while to see if they eventually become more legible. I think this might look like just being kinder and more tolerant of weirdness.
5. If someone defends a tradition that seems completely wrong and repulsive to you, try to be understanding of them even if you are right and the tradition is wrong. Traditions spent a long time evolving to be as sticky as possible in the face of contrary evidence, humans spent a long time evolving to stick to traditions as much as possible in the face of contrary evidence, and this evolution was beneficial through most of history. This sort of pressure is as hard to break (and probably as genetically-loaded) as other now-obsolete evolutionary urges like the one to binge on as much calorie-dense food as possible when it’s available (related).
6. Having done all that, and working as gingerly and gradually as you can, you should still try to improve on traditions that seem obsolete or improvable.
7. Cultural evolution does not provide evidence that traditions are ethical. Like biological evolution, cultural evolution didn’t even try to create ethical systems. It tried to create systems that were good at spreading. Plausibly many cultures converged on eating meat because it was a good source of calories and nutrients. But if you think it violates animals’ rights, cultural evolution shouldn’t convince you otherwise – there’s no reason cultural evolution should price animal suffering into its calculations. (related).
Finally: some people have interpreted this series of posts as a renunciation of rationality, or an admission that rationality is bad. It isn’t. Rationality isn’t (or shouldn’t be) the demand that every opinion be legible and we throw out cultural evolution. Rationality is the art of reasoning correctly. I don’t know what the optimal balance between what-seems-right-to-us vs. tradition should be. But whatever balance we decide on, better correlating “what seems right to us” with “what is actually true” will lead to better results. If we’re currently abysmal at this task, that only adds urgency to figuring out where we keep going wrong and how we might go less wrong, both as individuals and as a community.