Jacobite – which is apparently still a real magazine and not a one-off gag making fun of Jacobin – summarizes their article Under-Theorizing Government as “You’ll never hear the terms ‘principal-agent problem,’ ‘rent-seeking,’ or ‘aligning incentives’ from socialists. That’s because they expect ideology to solve all practical considerations of governance.”
There have been some really weird and poorly-informed socialist critiques of public choice theory lately, and this article generalizes from those to a claim that Marxists just don’t like considering the hard technical question of how to design a good government. This would explain why their own governments so often fail. Also why, whenever existing governments are bad, Marxists immediately jump to the conclusion that they must be run by evil people who want them to be bad on purpose.
In trying to think of how a Marxist might respond to this attack, I thought of commenter no_bear_so_low’s conflict vs. mistake dichotomy (itself related to the three perspectives of sociology). To massively oversimplify:
Mistake theorists treat politics as science, engineering, or medicine. The State is diseased. We’re all doctors, standing around arguing over the best diagnosis and cure. Some of us have good ideas, others have bad ideas that wouldn’t help, or that would cause too many side effects.
Conflict theorists treat politics as war. Different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites or to help the People.
Mistake theorists view debate as essential. We all bring different forms of expertise to the table, and once we all understand the whole situation, we can use wisdom-of-crowds to converge on the treatment plan that best fits the need of our mutual patient, the State. Who wins on any particular issue is less important than creating an environment where truth can generally prevail over the long term.
Conflict theorists view debate as having a minor clarifying role at best. You can “debate” with your boss over whether or not you get a raise, but only with the shared understanding that you’re naturally on opposite sides, and the “winner” will be based less on objective moral principles than on how much power each of you has. If your boss appeals too many times to objective moral principles, he’s probably offering you a crappy deal.
Mistake theorists treat different sides as symmetrical. There’s the side that wants to increase the interest rate, and the side that wants to decrease it. Both sides have about the same number of people. Both sides include some trustworthy experts and some loudmouth trolls. Both sides are equally motivated by trying to get a good economy. The only interesting difference is which one turns out (after all the statistics have been double-checked and all the relevant points have been debated) to be right about the matter at hand.
Conflict theorists treat the asymmetry of sides as their first and most important principle. The Elites are few in number, but have lots of money and influence. The People are many but poor – yet their spirit is indomitable and their hearts are true. The Elites’ strategy will always be to sow dissent and confusion; the People’s strategy must be to remain united. Politics is won or lost by how well each side plays its respective hand.
Mistake theorists love worrying about the complicated and paradoxical effects of social engineering. Did you know that anti-drug programs in school actually increase drug use? Did you know that many studies find raising the minimum wage hurts the poor? Did you know that executing criminals actually costs more money than imprisoning them for life? This is why we can’t trust our intuitions about policy, and we need to have lots of research and debate, and eventually trust what the scientific authorities tell us.
Conflict theorists think this is more often a convenient excuse than a real problem. The Elites get giant yachts, and the People are starving to death on the streets. And as soon as somebody says that maybe we should take a little bit of the Elites’ money to feed the People, some Elite shill comes around with a glossy PowerPoint presentation explaining why actually this would cause the Yellowstone supervolcano to erupt and kill everybody. And just enough People believe this that nobody ever gets around to achieving economic justice, and the Elites buy even bigger yachts, and the People keep starving.
Mistake theorists think you can save the world by increasing intelligence. You make technocrats smart enough to determine the best policy. You make politicians smart enough to choose the right technocrats and implement their advice effectively. And you make voters smart enough to recognize the smartest politicians and sweep them into office.
Conflict theorists think you can save the world by increasing passion. The rich and powerful win because they already work together effectively; the poor and powerless will win only once they unite and stand up for themselves. You want activists tirelessly informing everybody of the important causes that they need to fight for. You want community organizers forming labor unions or youth groups. You want protesters ready on short notice whenever the enemy tries to pull a fast one. And you want voters who show up every time, and who know which candidates are really fighting for the people vs. just astroturfed shills.
For a mistake theorist, passion is inadequate or even suspect. Wrong people can be just as loud as right people, sometimes louder. If two doctors are debating the right diagnosis in a difficult case, and the patient’s crazy aunt hires someone to shout “IT’S LUPUS!” really loud in front of their office all day, that’s not exactly helping matters. If a group of pro-lupus protesters block the entry to the hospital and refuse to let any of the staff in until the doctors agree to diagnose lupus, that’s a disaster. All that passion does is use pressure or even threats to introduce bias into the important work of debate and analysis.
For a conflict theorist, intelligence is inadequate or even suspect. It doesn’t take a supergenius to know that poor farm laborers working twelve hour days in the scorching heat deserve more than a $9/hour minimum wage when the CEO makes $9 million. The supergenius is the guy with the PowerPoint presentation saying this will make the Yellowstone supervolcano erupt.
Mistake theorists think that free speech and open debate are vital, the most important things. Imagine if your doctor said you needed a medication from Pfizer – but later you learned that Pfizer owned the hospital, and fired doctors who prescribed other companies’ drugs, and that the local medical school refused to teach anything about non-Pfizer medications, and studies claiming Pfizer medications had side effects were ruthlessly suppressed. It would be a total farce, and you’d get out of that hospital as soon as possible into one that allowed all viewpoints.
Conflict theorists think of free speech and open debate about the same way a 1950s Bircher would treat avowed Soviet agents coming into neighborhoods and trying to convince people of the merits of Communism. Or the way the average infantryman would think of enemy planes dropping pamphlets saying “YOU CANNOT WIN, SURRENDER NOW”. Anybody who says it’s good to let the enemy walk in and promote enemy ideas is probably an enemy agent.
Mistake theorists think it’s silly to complain about George Soros, or the Koch brothers. The important thing is to evaluate the arguments; it doesn’t matter who developed them.
Conflict theorists think that stopping George Soros / the Koch brothers is the most important thing in the world. Also, they’re going to send me angry messages saying I’m totally unfair to equate righteous crusaders for the People like George Soros / the Koch brothers with evil selfish arch-Elites like the Koch brothers / George Soros.
Mistake theorists think racism is a cognitive bias. White racists have mistakenly inferred that black people are dumber or more criminal. Mistake theorists find narratives about racism useful because they’re a sort of ur-mistake that helps explain how people could make otherwise inexplicable mistakes, like electing Donald Trump or opposing [preferred policy].
Conflict theorists think racism is a conflict between races. White racists aren’t suffering from a cognitive bias, and they’re not mistaken about anything: they’re correct that white supremacy puts them on top, and hoping to stay there. Conflict theorists find narratives about racism useful because they help explain otherwise inexplicable alliances, like why working-class white people have allied with rich white capitalists.
When mistake theorists criticize democracy, it’s because it gives too much power to the average person – who isn’t very smart, and who tends to do things like vote against carbon taxes because they don’t believe in global warming. They fantasize about a technocracy in which informed experts can pursue policy insulated from the vagaries of the electorate.
When conflict theorists criticize democracy, it’s because it doesn’t give enough power to the average person – special interests can buy elections, or convince representatives to betray campaign promises in exchange for cash. They fantasize about a Revolution in which their side rises up, destroys the power of the other side, and wins once and for all.
Mistake theorists think a Revolution is stupid. After the proletariat (or the True Patriotic Americans, or whoever) have seized power, they’re still faced with the same set of policy problems we have today, and no additional options. Communism is intellectually bankrupt since it has no good policy prescriptions for a communist state. If it did have good policy prescriptions for a communist state, we could test and implement those policies now, without a revolution. Karl Marx could have saved everyone a lot of trouble by being Bernie Sanders instead.
Conflict theorists think a technocracy is stupid. Whatever the right policy package is, the powerful will never let anyone implement it. Either they’ll bribe the technocrats to parrot their own preferences, or they’ll prevent their recommendations from carrying any force. The only way around this is to organize the powerless to defeat the powerful by force – after which a technocracy will be unnecessary. Bernie Sanders could have saved himself a lot of trouble by realizing everything was rigged against him from the start and becoming Karl Marx.
Mistake theorists naturally think conflict theorists are making a mistake. On the object level, they’re not smart enough to realize that new trade deals are for the good of all, or that smashing the state would actually lead to mass famine and disaster. But on the more fundamental level, the conflict theorists don’t understand the Principle of Charity, or Hanlon’s Razor of “never attribute to malice what can be better explained by stupidity”. They’re stuck at some kind of troglodyte first-square-of-the-glowing-brain-meme level where they think forming mobs and smashing things can solve incredibly complicated social engineering problems. The correct response is to teach them Philosophy 101.
(This is the Jacobite article above. It accuses Marxists of just not understanding the relevant theories. It’s saying that there’s all this great academic work about how to design a government, and Marxists are too stupid to look into it. It’s so easy to picture one doctor savaging another: “Did you even bother to study Ingerstein’s latest paper on neuroimmunology before you inflicted your idiotic opinions about this case on us?”)
Conflict theorists naturally think mistake theorists are the enemy in their conflict. On the object level, maybe they’re directly working for the Koch Brothers or the American Enterprise Institute or whoever. But on the more fundamental level, they’ve become part of a class that’s more interested in protecting its own privileges than in helping the poor or working for the good of all. The best that can be said about the best of them is that they’re trying to protect their own neutrality, unaware that in the struggle between the powerful and the powerless neutrality always favors the powerful. The correct response is to crush them.
What would the conflict theorist argument against the Jacobite piece look like? Take a second to actually think about this. Is it similar to what I’m writing right now – an explanation of conflict vs. mistake theory, and a defense of how conflict theory actually describes the world better than mistake theory does?
No. It’s the Baffler’s article saying that public choice theory is racist, and if you believe it you’re a white supremacist. If this wasn’t your guess, you still don’t understand that conflict theorists aren’t mistake theorists who just have a different theory about what the mistake is. They’re not going to respond to your criticism by politely explaining why you’re incorrect.
Is this uncharitable? I’m not sure. There’s a meta-level problem in trying to understand the position “don’t try to understand other positions and engage with them on their own terms” and engage with it on its own terms. If you succeed, you’ve failed, and if you fail, you’ve succeeded. I am pretty sure it would be wrong to “steelman” conflict theory into a nice cooperative explanation of how we all need to join together, realize that conflict theory is objectively the correct way to think, and then use this insight to help cure our mutual patient, the State.
So if this model has any explanatory power, what do we do with it?
Consider a further distinction between easy and hard mistake theorists. Easy mistake theorists think that all our problems come from very stupid people making very simple mistakes: dumb people deny the evidence about global warming; smart people don't. Hard mistake theorists think that the questions involved are really complicated and require more evidence than we've been able to collect so far – the weird morass of conflicting minimum wage studies is a good example here. Obviously some questions are easier than others, but the disposition to view questions as hard or easy in general seems to separate into different people and schools of thought.
(Maybe there’s a further distinction between easy and hard conflict theorists. Easy conflict theorists think that all our problems come from cartoon-villain caricatures wanting very evil things; bad people want to kill brown people and steal their oil, good people want world peace and tolerance. Hard conflict theorists think that our problems come from clashes between differing but comprehensible worldviews – for example, people who want to lift people out of poverty through spreading modern efficient egalitarian industrial civilization, versus people who want to preserve traditional cultures with all their thorns and prickles. Obviously some moral conflicts are more black-and-white than others, but again, some people seem more inclined than others to use one of these models.)
This blog has formerly been Hard Mistake Theory Central, except that I think I previously treated conflict theorists as making an Easy Mistake. I think I was really doing the “I guess you don’t understand Philosophy 101 and realize everyone has to be charitable to each other” thing. This was wrong of me. I don’t know how excusable it was and I’m interested in seeing how many comments here are “This is super obvious” vs. “I never thought about this consciously and I think I’ve just been misunderstanding other people as behaving inexplicably badly my whole life”. But people have previously noticed that this blog is good at attracting representation from all across the political spectrum except Marxists. Maybe that’s related to treating every position except theirs with respect, and appreciating conflict theory better would fix that. I don’t know. It could be worth a shot.
Right now I think conflict theory is probably a less helpful way of viewing the world in general than mistake theory. But obviously both can be true in parts and reality can be way more complicated than either. Maybe some future posts on this, which would have to explore issues like normative vs. descriptive, where tribalism fits in here, and “the myth of the rational voter”. But overall I’m less sure of myself than before and think this deserves more treatment as a hard case that needs to be argued in more specific situations. Certainly “everyone in government is already a good person, and just has to be convinced of the right facts” is looking less plausible these days. At the very least, if I want to convince other people to my position here, I actually have to convince them – instead of using the classic Easy Mistake Theorist tactic of “smh that people still believe this stuff in the Year Of Our Lord 2018” repeated over and over again.
I for one think this is a great change, and a brilliant post. Absolutely, less time delightedly exploring still more abstruse mistake-theory-legible problems (although these are fun and the theory that total unity is possible feels good) in favor of more time spent on projects such as, “which candidates are really fighting for the people vs. just astroturfed shills” … hear hear!
Uh… are you agreeing that mistake-theory-legible problems are not the main problems, and find that to be good cause to curse the heavens? Or did I draw a “goddamnit” by saying something stupid at the top of the comment thread?
I think “which candidates are really fighting for the people vs. just astroturfed shills” is not only the wrong question to be asking but also emblematic of one of the biggest problems out there, and I’m also a little bit afraid that you’re right, possible sarcasm or no, and we will be seeing more of that sort of question around here in the future. Either would merit a “goddamnit”, but I was thinking mainly of the latter when I wrote that.
Ah. Now I don’t like my comment either; I seem to have sounded really uncharitable. I think charity is a key tool for, er, The People. I mean, there really are a small set who are intentionally sowing confusion, and the way to beat them is to use the principle of charity and other mistake-view tools to each develop our own understanding of “easy” mistakes, and even some “hard” mistakes, and also of how to distinguish trolls and sophists from rubes and confused types (as much as we can)… and then sometimes you can catch someone being a troll or a sophist in how they discuss a mistake-type problem, and act appropriately.

And when you do that in public, and people see you patiently correcting the honestly mistaken, while starving trolls and calling out sophists, that potentially comes with renown. There are small differences in temperament (perhaps especially when measured across people who have both motive and opportunity and the ability to rationalize their behavior, and people who don’t) but people are damn similar at bottom.

Really smart, well-informed people with energy need to spend a little less time on enjoying the respect of other really smart, well-informed people and a little more time outside their comfort zones, in the messy places where it takes a bit of work, and even then you can’t quite perfectly tell who is a troll and who is a sophist.
I’m absolutely not calling for more mindless purity tests. I disapprove of anyone still litigating Bernie vs Hillary, although both candidates failed some purity tests, yada yada. Rather, folks on the purity test side of things need to remember that there *will* be good-hearted, learned individuals who have mistaken views about how to solve mistake-type problems, and who are easily brought back onto the right team. Or only brought back onto the right team with difficulty.
Politics is still a *hard* problem.
On both sides of most issues.
I am open to the idea of carefully and systematically examining which candidates are shills and which “care about the people”… But I fear that any such analysis would end up simply dividing candidates along tribal lines… Or in other words, I think correctly detecting which politicians really Care is difficult, and people trying to do so will make mistakes.
It may work better if you look mostly for the shills on your own side. Although that still runs into the problem that “your own side” may not be well defined.
Uh, yeah. I think somewhere I ran across the idea that about 9/10ths of one’s time should be spent at the object level (here, the “mistake” level) and about 1/10th at the meta-level (the “which people are actively helping address mistakes, and how much” level, with about 2/3rds of that focused on *oneself* and one’s closest relationships, since that’s where a betrayal would cut deepest).
It seems to me that if you view governance as trivial or not worth trying to solve, you are, in fact, making an Easy Mistake. Obviously it’s still difficult to know what the right thing to do is when all the experts are potentially biased or self-interested, and in that respect conflict theory is worth engaging with and learning from, but if Marxists have anything to teach us, it’s only to the extent that they treat governance as a serious problem, too.
But since you asked, yes, I think you erred in not engaging more seriously with conflict theory. Having it written up like this is useful, but the basic problem of “what if the leftists are right, and the supposed experts are biased and corrupt, and the ostensibly technical arguments for leftist-disfavored solutions are really just cover for seizing a bigger piece of the pie” did occur to me on a number of occasions, and the absence of that perspective was a problem with your posts on topics where this is relevant, like minimum wage.
(This is not to say there aren’t any errors in your framing in this post. There probably are, since it’s a grand sweeping narrative theory of politics. But I’m not smart enough to spot them.)
Well… you’re probably not the person I need to say this to, but it’s been bugging me lately, so here goes:
I think there’s a common political attitude, probably having to do with teleology or intent, that strikes me as basically voodoo-ish.
Say there’s a new minimum wage law, and the Evil Plutocrats are unhappy because it’s getting in the way of their ability to bleed the People dry. They fund a think tank of elite scientists who argue that this will cause Yellowstone to erupt, for complicated sciency reasons. The People, noticing that the Plutocrats are chuckling and rubbing their hands together, feel pretty comfortable disregarding the argument.
And then Yellowstone erupts and thousands die.
The OP touches on this, but it bears underlining: Evil Plutocrats aren’t magic. Evil people in general aren’t magic. Reversed stupidity is, say it with me, not intelligence.
If corrupt, evil people make an argument while chuckling and rubbing their hands with glee and thinking ‘Ha, now I shall swindle these suckers,’ the chuckling and hand-rubbing and evil thoughts don’t leak into reality and make it be the other way around.
I think a lot of people feel like it’s unfair, profane, just *wrong* for evil people arguing in bad faith ever to be right about anything. But the universe doesn’t actually much care what we think is fair or not.
When you hear the words “just cover for seizing a bigger piece of the pie,” your conjunction-detector should emit a loud screeching noise, because the first of those words encodes a whole separate assertion, one that needs its own argument to justify it apart from the rest of the quote.
(I know this is old-hat for a lot of the people here, and I apologize for the sermon, but it feels like it needs to be said every so often.)
(I’m tempted to go write a science-fiction story about a Superintelligence with this worldview, trying to carefully manipulate its enemies into lying to it about things it wants not to be true.)
A very good point – the object-level arguments should screen off the intentions of the person who made them. The problem is that for most social problems there is ample (weak) evidence both for and against new policies, and then the situation is not so clear.
Have a real-world example: Nazis discovered the connection between smoking and lung cancer.
> Have a real-world example: Nazis discovered the connection between smoking and lung cancer.
I particularly like this example because the Nazis did not just happen upon the right answer. They had already decided that they did not like smoking and wanted to stamp it out based on reasons that were, at least in some ways, selfish. Then, because of this, they went on fishing expeditions looking for ways tobacco is bad. This is not seeking for the truth; had they found that tobacco is good for you, the research would not have been published.
While doing that, they stumbled upon the real reason why you should not smoke. This is why “they are really evil and only pushing their selfish personal agenda” is never an argument against a position. Even if true, that might just mean that they went looking for reasons to push the position and actually found something real.
Motivated reasoning can sometimes produce the correct result. There are still good reasons to be wary of motivated reasoning and not to simply ignore the fact that it is motivated when analyzing it.
That’s pretty much the best example I could have asked for.
Thanks so much. I’m going to find so many uses for this.
That was a bit annoying. I used to cite early anti-tobacco people as evidence that “uptight people were right in the first place” until I found they included Nazis … which turned them into an example on the other side.
The Nazi belief system itself is a pretty great example of “evil means wrong” thinking in the other direction. They believed that the Jews were evil, so therefore “Jewish Science” must be ignored, resulting in Nazi Germany losing out completely on the atom bomb when it otherwise might have beaten the Americans to the punch. We should be thankful they weren’t more rational.
The Germans didn’t have the wealth to develop the bomb as fast as the US did and wage the war at the same time. The estimates Heisenberg put forward for what it would cost and how long it would take are very stark and, given what the US spent on the effort, not far off.
You should know better than this. The US got the bomb in pretty much the most expensive way possible. The Germans might have chosen to do the same, but that’s because they were the Germans, and they’d never take cheap, simple, and quick over massive and complicated. The German bomb program was a joke. The Japanese were ahead of them because of how badly they’d screwed it up.
That said, Forward Synthesis isn’t exactly right on this, either. They abandoned it because they didn’t think they could make it work, which was true given the program they had.
The US chose to get the bomb in every way possible, because they had the budget to do it and they weren’t sure which way would work best. The German effort would have been more focused out of necessity, but that also means that they might have chosen wrong. I forget which methods Heisenberg wanted to pursue, but the method that the US eventually settled on wasn’t the one that the best experts thought would work at first.
Hence “most expensive way possible”.
The British got it mostly right from the start. I was pointing out that saying that Heisenberg’s estimate was the same as the Manhattan Project was not an accurate estimate of how much it would actually have cost the Germans to get the bomb if they’d had someone competent in charge. Heisenberg may have been a great physicist, but the US was smart enough to put Leslie Groves, not a physicist of any sort, in charge of ours.
Germans most likely failed to make an atom bomb because they decided to use heavy water moderators instead of graphite moderators. The ideological explanation is a myth. There were some good reasons for this decision a priori but it completely derailed them.
We need a maxim for this. Something like “Nasty people are not less likely to be right than nice people”.
Startup idea: hire people to be nasty online in support of a disfavored cause. Although that’s hardly a new idea…
That maxim isn’t true. It’s certainly possible that someone arguing in bad faith can be right, but on a Bayesian level, arguing in bad faith is evidence against being correct.
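The Bayesian point can be made concrete with a toy calculation. All the numbers here are invented purely for illustration (they are not data about real arguers); the only claim is the direction of the update:

```python
# Toy Bayesian update: how much should observing bad faith lower our
# credence that a claim is correct? All numbers are made up for illustration.
p_correct = 0.5                    # prior: the claim is correct
p_bad_faith_given_correct = 0.2    # assumed: bad faith is rarer behind true claims
p_bad_faith_given_incorrect = 0.4  # assumed: twice as common behind false ones

# Bayes' rule: P(correct | bad faith)
posterior = (p_bad_faith_given_correct * p_correct) / (
    p_bad_faith_given_correct * p_correct
    + p_bad_faith_given_incorrect * (1 - p_correct)
)
print(round(posterior, 3))  # 0.333
```

Under these made-up numbers, observing bad faith moves the posterior from 0.5 down to about 0.33: evidence against correctness, but weak evidence, which fits both the maxim and the Nazi-tobacco example above.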
“Intentions are only weak evidence”
Being nasty isn’t the same thing as arguing in bad faith (unless I misunderstand what bad faith means).
But you proposed the maxim in response to someone who referred to bad faith.
I was focusing more on the “evil people” part, but you’re right.
That’s true, but most people, including most quite well-educated people, are not equipped to follow the complicated sciency reasons. And even of those that are equipped in principle, most don’t have the time and interest.
If you can’t trust experts bearing fancy titles and sciency explanations – because either they’ve been directly subverted by oodles of cash from mustache-twirling plutocrats, or because complicated incentive structures have been put into place or arisen naturally that bias things in a particular direction even with the scientists themselves all acting in good faith – then this is a huge problem regardless of whether or not it is possible in principle for someone to come along, do a deep dive literature review, seek out the raw datasets, and independently examine the evidence.
Similarly in journalism, I’m reminded of a time Noam Chomsky was being interviewed by someone from Sky News on the topic of media bias/propaganda, and the interviewer asked Chomsky if Noam thought he, the interviewer, was intentionally lying in his reports. And Chomsky responded “you wouldn’t be sitting in the chair you’re sitting in right now if you didn’t hold the opinions you do.”
Yes, this is IMO the strength of a conflict-theory approach to understanding the world. It appears on the right, as well, in some of the critiques of AGW that basically make the same point–if you didn’t basically accept the model of human-caused climate change by CO2 emissions, there are a thousand reasons you’d have ended up in another field by now (you think these complicated climate models are a waste of time, your views make you an outcast in your community, you can’t find anyone but coal companies to fund your research, it’s twice as hard to get your papers published because they go against the overwhelming beliefs of your scientific community, etc.)
I’m not sure I see why that’s a point in favor of the conflict theory worldview. It sounds very much like a hard mistake frame — we want to have this scientific apparatus that tells us true things about the world, but it turns out that when we try to set up systems to do that, incentives and human nature come in and distort the outputs away from pure truth uncovering.
Because if you look at this through the pure mistake frame, you’ll see a lot of people all inexplicably happening to make mistakes in the same direction. If you look at it in the conflict frame, you’ll see self-interested actors making a “mistake” that supports their prior beliefs and their position in the world.
This makes me think a bit of Scott’s figure/ground inversion thing. (Government is a collective operation to provide services and enforce laws that happens to have some rent-seeking and empire building happening vs government is a rent-seeking and empire-building operation that happens to provide some services and enforce some laws sometimes.) Both frames are partly right, and it’s useful to be able to flip back and forth between them.
I may not be thinking of this frame in the same way as Scott. I’m thinking of the conflict and mistake frames as a way to understand the world, and here we have a case where people’s prior ideological commitments and personal/institutional interests may be driving their arguments and findings. That’s different from the “let’s talk about our differences” vs “let’s destroy the heretics with fire until there’s no more heresy” strategy.
That’s not the understanding I came away with.
I don’t think it needs to be inexplicable for a mistake framework to fit it quite well. A system can in principle be robust against stupid, greedy, and evil people. If the system in question isn’t, then someone made a mistake and the system should be fixed. This can be an easy problem or a very hard problem.
On the conflict side, it’s not the system that needs to be fixed to be robust against the stupid, greedy, and evil — rather, it’s the stupid, greedy, and evil people themselves who are the irreducible problem. Instead of trying to fix the system, we need to go out and kill/subjugate the stupid, greedy, and evil people, and then everything will be great without any need for complicated systems.
The way it looks to me: If you think that the mistake-theorists proposing ideas and arguments are mainly just saying what serves their interests/ideology/employers, then I think you’re a lot less likely to think that a mistake-theorist approach to resolving our problems is going to bear fruit. Instead, it will just be a matter of the powerful people determining which theories get a hearing and which ones are suppressed or defunded or no-platformed.
It’s also true that less greedy, less stupid, less evil people need less complex systems to manage their interactions. You don’t need a “no pooping in the pool” sign at the community pool until some asshole starts pooping in the pool.
That might work as long as people have strongly similar preferences, outlooks, and neurological makeups. Then they can just anticipate each other’s preferences and apply the Golden Rule.
With differing people, things get complicated fast even if all are sincere and altruistic. Combinatorial explosion is a harsh mistress.
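A toy back-of-the-envelope sketch (my own illustration, not from the thread; all numbers are arbitrary) of how coordination overhead blows up with the number of distinct people:

```python
from math import comb

# Toy illustration: with n distinct people, the number of pairwise
# relationships that might need a norm grows quadratically, and the
# number of possible factions (non-empty subsets) grows exponentially.
def pairwise_relationships(n: int) -> int:
    return comb(n, 2)  # n * (n - 1) // 2

def possible_factions(n: int) -> int:
    return 2 ** n - 1  # every non-empty subset of people

for n in (5, 50, 500):
    print(f"{n} people: {pairwise_relationships(n)} pairs, "
          f"{possible_factions(n)} possible factions")
```

Going from 5 to 500 people multiplies the pairwise count by over ten thousand and makes the faction count astronomically large, which is one way to cash out “combinatorial explosion is a harsh mistress.”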
I’m pretty sure you just walked into a trap.
I guess I see what you are saying about “how the world works” vs. “how we can solve problems”. I was treating the second as primary in terms of taxonomy.
While I didn’t actually intend it that way (I meant what I said: more moral people need less government), Brad’s right that it’s also an argument against multiculturalism and for homogeneous societies. With multiple cultures you need more written rules to cover interactions between cultures than you would within a monoculture.
I think there is a very general principle there somewhere–to the extent we all agree on the basic rules and there are informal social enforcement mechanisms that work passably well (even if they’re just glares and offended looks and an occasional cold shoulder), we only need actual laws/rules (with the police or the HR department or the principal getting involved) for rare, exceptional cases. When we don’t all agree on the basic rules, or when we do but social enforcement mechanisms don’t work, then we end up needing a lot more overt enforcement mechanisms.
In some workplaces, it wouldn’t matter if you removed any management/HR rules about, say, not pinning up Playboy centerfolds in your workspace–anyone who did that would get so much social pressure that they’d very quickly take it down, or their coworkers would do so with as much rudeness as needed to get the message across. Only the most utterly clueless or trying-to-give-offense people would persist in such activities enough that the boss needed to get involved.
In other workplaces, either the overwhelming consensus or the willingness to use social enforcement mechanisms don’t exist, and if the boss doesn’t want Playboy centerfolds hung up in the workspace, he has to make it a formal policy and come down on people who violate it.
Where we have shared values + social enforcement mechanisms (which is partly about people having confidence in their shared values and willingness to push back on people violating them), we can get away with fewer police and courts. That’s not always a win–sometimes, social pressure can be pretty damned oppressive, and clear written rules are better. But I suspect it’s a win 90+% of the time to have things handled informally.
More even than this. If rationality is systematized winning, then we should expect winners to be more rational. We should expect the ideas of those with a track record of success to be more predictive of reality than those with a track record of failure. In a conflict of the Rich and Elite versus the Poor Masses, we should expect the masses to have less insight into the fact of the matter.
This is especially true when one side’s argument is emotionally satisfying and the other’s is not. True things which conform to basic human emotional biases tend to not be debated in the first place. False things which conform to them tend to be retained for some time even after the accumulated evidence indicates that they ought to be discarded. Pessimists will tend to have more accurate worldviews than optimists.
Winners are more rational in the sense that the things they do are rational ways of meeting their goals. That doesn’t make their arguments rational.
Someone who wants your house, tells lies to your boss, gets you fired, and buys your house for cheap when you go bankrupt and need to sell it is being rational–he just won. But he isn’t arguing rationally.
I am so sorry I just hit the “report” button on this by accident; please forgive.
Someone who successfully pulls off a predatory scheme should, all things being equal, be expected to have a more accurate map of the world than someone who fails to pull off a similarly predatory scheme.
It is only by believing that one side of a debate is systematically more predatory that one can conclude that the more powerful and wealthy side is no more likely to have accurate beliefs. This is, of course, what the typical conflict theorist thinks, but the system of inferences is clearly circular.
What beliefs the person who pulled off the scheme has, and what beliefs they claim to have, may diverge.
That being said, I think you’re dead on about the rest of it. And there’s maybe a (weak) feedback loop, where it’s easier to be rational in the first place if you’re not overly poor and miserable.
There’s maybe a larger pattern where it’s easier to do things if you’re relatively privileged, because you don’t have the disadvantages and distractions disadvantaged people do. And ‘do things’ includes things like ‘educate yourself’ and ‘learn to think clearly’ and maybe even ‘cultivate tolerance and compassion’- if you have less to be angry about, you might end up being less angry in general.
Which produces a dynamic that really strongly violates ‘just-world’ thinking and common memetic defenses against ‘just-world’ thinking, where privileged people *are* likely to be better according to some metrics, but due to causes beyond their control. If you mention this in public, there’s a chance the left will round it off to “Poor people are subhuman and deserve what they get” and the right will round it off to “Poor people aren’t responsible for anything they do, and no matter how good rich people are none of it counts”.
I’m not sure how strong this dynamic is- it’s easy to come up with counter-dynamics that would funge against it- but it’s the sort of thing that might explain some of our current social troubles.
And it’s really complicated, so I’m emphatically not taking any moral or policy stance here.
He may be expected to have a more accurate map in his head, but you don’t care so much about what’s in his head as you do about the arguments he is making. Predatory schemes may not involve arguments at all, or if they do, poor ones.
Well, since wealth makes people more narcissistic and less compassionate, we end up with a lot of rich people who think they popped out of a Horatio Alger novel and look at everyone who hasn’t pulled themselves up by their bootstraps with disdain.
Which is why we have billionaires who think adding a work requirement to Medicaid will somehow magically make poor sick people able to hold down a job rather than just spiral down and out of control. Really, it’s their insufficient work ethic that is the problem; not the lack of jobs that pay a decent wage, the internalised feelings of uselessness for not being able to provide for yourself, or any other barriers to employment like a criminal record or even a hole in your resume because of an illness.
Thousands of people will die because of this, and it is either because rich people are rationally greedy and don’t want to pay more taxes, or because they are so far removed from the problems of people in poverty that they rely on bogus ideas about why some people are successful and others aren’t, making them much less rational about how the programs they are operating actually work.
Disagreed. Our schemer should be expected to have a map better suited to predatory schemes, ie a more specialized map. That map is not more likely to be more correct than their target’s map in a meaningful way; it’s just more useful for the schemer’s purpose than we should expect their target’s map to be for blocking the schemer. This is to be expected; “avoid predatory schemes” is not likely to be as high a priority for a random person as “predatorily scheme” is for a predatory schemer, so the predatory schemer will dedicate more resources to their domain-specific tools.
The separation of people from reality in economics or politics is a mistake. You cannot run a massive minimum wage experiment under the assumption that it can be reversed by reducing or removing the minimum wage later if the experiment shows an undesirable outcome. It is entirely plausible that raising the minimum wage for a period and then reducing it could lead to the worst outcome of the five generic options (holding it flat, lowering it, raising it, raising it then lowering it, lowering it then raising it), because these actions can affect reality. No, they won’t cause a super-volcano to erupt, but they might actually cause the framework that we need to make measurements to shift.
Yup, Yudkowsky’s whole thing about the clever arguer. I’m not arguing that motivated reasoning isn’t a poor tool for finding truth, or that people don’t engage in dishonesty.
But I think there’s a strain of thought that goes pretty far beyond that.
I saw a TV show a couple of years ago- it’s been awhile, so my memory may be fuzzy. People in a small town were getting sick, and they thought a local corporation might be polluting the water supply. They didn’t really do any testing- either Official Scientific testing or LessWrong-style rationality testing, and no time was spent considering other environmental explanations (or psych explanations- the symptoms were kind of vague). It was just obvious that EvilCorp was doing it, because when they went to the media, EvilCorp tried to shut them up just like they would if they were guilty.
And no one noticed that, no, corporations don’t like it when you say bad things about them even when those things are false.
I think there’s a certain mentality where all self-interested discourse is untrustworthy, and all untrustworthy things are false. Or where, as long as the people on the other side are bad, you can be sure the people on your side are good, and virtue-goodness necessarily implies good consequences and vice-versa. And all of this looks kind of silly written out, but I worry that it doesn’t feel silly when people encounter it in the wild.
Your TV show probably wasn’t Henrik Ibsen’s 1882 play An Enemy of the People, but it sure sounds similar.
But isn’t the tannery actually responsible in An Enemy of the People? It’s true that it’s likely whoever wrote this TV show was familiar with the play, of course, but it sounds like the show is different? If Lecter remembers it right, that is. I certainly have no idea what show he’s talking about.
I read parent comment as saying that the show itself portrayed EvilCorp as obviously guilty.
I think one of the reasons people focus on the “just a cover up” part, and deny that maybe there can be something real beyond the power grab, is that they often cannot comprehend the supposed real reason at all. Discussing the actual thing often requires a level of understanding and dedication beyond the ability of many people, while discussing the motives behind the thing is quite accessible to everyone of normal intelligence.
For example, you need a lot of knowledge and dedication to meaningfully discuss the risks of AI explosion. In fact, if you don’t already have quite a lot of knowledge, you can barely understand what the AI-concerned people are concerned about. On the other hand, you need much less knowledge to discuss the motivations AI researchers may have to swindle the public. And if one ends up hearing all this discussion without getting anything about the content, one’s mind will tend to drift to the only part of the discussion they can follow: the motives.
I think a lot of the derailing we see on discussions of evolution, climate change, and economy boils down to this. The public is exposed to things they don’t understand at all, and they try to concoct a reason to why some people are so invested in things that sound like gibberish.
(Full disclosure: I know jack squat about AI. And I often feel like it’s a swindle. Sorry guys, I can’t avoid it. Just like I know my evangelical Christian acquaintances can’t avoid feeling like evolution is a swindle.)
It’s also the case that if something looks like a swindle to you, it will look like one to other people, including to swindlers. So if it starts getting some traction as a meme, swindlers (who are very numerous compared to researchers in unpopular fields) will perk up their ears and say “Hey, this is the hot new swindle!” At that point, while “AI” may not imply a swindle when discussed by people who were “talking about it before it was cool,” it will still imply a swindle when said by anyone who joined after the bubble began.
This seems very smart to me. I encounter no end of frustration with people’s preoccupation with motives, and sometimes fly off the handle a bit on climate change especially, but also sometimes other things, when motives are all people tend to talk about.
Ah, but consider: “what if the rightists are right, and the supposed experts are biased and corrupt, and the ostensibly technical arguments for rightist-disfavored solutions are really just cover for seizing a bigger piece of the pie”
Depending on the flavor of rightist, they are sometimes right. Also, sometimes they are right but I don’t care about their complaint. For instance, white nationalists are right that “white” people are going to become almost non-existent in America in a few generations, for a lot of reasons. They are wrong that it’s about white genocide, though. But I’m not necessarily sure that it’s a huge problem. I mean, wealthy capitalists will still be wealthy capitalists even when their poor white stooges are gone.
If you believe that “white culture” is the underpinning of Western common-law civilization and freedom though, you can be concerned even without being some kind of tiki-torch stereotypical white supremacist. Wealthy capitalists in totalitarian China are also getting rich. Wealthy capitalists in Islamic Saudi Arabia are getting rich. In this sense I think the leftists are correct–we do live in a white supremacist culture. I also agree that to achieve their goals (destruction of Western civilization) we need to dismantle white supremacy. It’s important to note these Marxists are playing motte and bailey with the term white supremacy. Of COURSE racism is bad and we should judge people as individuals. But deep down they believe whiteness is irredeemable and a form of original sin.
That doesn’t have to be what it’s cover for.
Consider global warming. I think a lot of the alarmist arguments are wrong and some of them dishonest. But in most cases, as I interpret it, the motivation isn’t seizing a bigger piece of the pie in any ordinary sense. It’s providing arguments for things those people are in favor of for other reasons–and in most cases believe almost everyone should be in favor of. As per a cartoon that is popular with the same people whose arguments it is a reason to discount.
An even better example would be nuclear winter. As I interpret that, it was a PR project with scientific cover, a conclusion announced with great fanfare before the relevant scientific papers had been published, hence well before there had been an opportunity for them to be critiqued.
But it was a PR project with the defensible purpose of preventing nuclear war, making the pie bigger not seizing a larger part of it.
The fact that global cooling/global warming/climate change necessitates One-World-Government doesn’t raise any alarm bells for you? Have you considered that cartoon to be a strawman argument against climate change skeptics?
I believe in global warming. I am also deeply, deeply skeptical about political schemes to address it, such as the Paris Climate Accords.
> climate change necessitates One-World-Government
Strong words are usually wrong words. Please justify this statement, or at least define one-world-government.
C’mon – you’ve already concluded that the men behind this were dishonest and not acting as scientists – finish it off. They weren’t concerned with preventing nuclear war – they were part of the “nuclear disarmament” movement – itself a euphemism for the “surrender to the Soviet Union” movement.
In reality the risk of nuclear war dropped massively as a result of the exact opposite policy – one of aggressive confrontation with the USSR which caused them to fold. American disarmament would – at best – have massively increased the chance of a nuclear conflict because the USSR would likely have persisted. At worst the US capitulates in some way and everyone in the world lives under communism and there’s no alternative system to demonstrate how bad it really is.
It wasn’t my claim, but there is a sense in which it is true. The people arguing that climate change is a terrible risk don’t mostly argue for one world government. But the logic of their position does imply a case for it.
One problem with slowing climate change by reducing CO2 output in the present world is that doing so is a public good not only at the individual level but at the level of states. If England uses expensive renewable power instead of cheap fossil fuel, it pays all the cost and the benefit of slower warming is shared around the world, so England gets only a tiny fraction of it. Unless that tiny fraction of the benefit is greater than all of the cost, it doesn’t pay England to do it.
That problem can be reduced by international agreements, but there are a lot of countries and it is in the interest of each, if it can, to free ride on the efforts of others. So that makes it a problem which would be easier to solve if there were a one world government.
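The free-rider arithmetic above can be sketched with made-up numbers (the figures below are purely illustrative assumptions, not estimates of actual costs or benefits):

```python
# Hypothetical figures for the free-rider logic: one country bears the
# full cost of cutting emissions, while the benefit is shared globally.
N = 190                 # countries sharing the climate benefit
cost = 1.0              # cost (arbitrary units) of one country's abatement
total_benefit = 50.0    # worldwide benefit produced by that abatement

own_share = total_benefit / N  # the abating country's slice of the benefit

# Unilateral action is individually rational only if the country's own
# share of the benefit exceeds its cost -- false here (~0.26 < 1.0):
unilateral_pays = own_share > cost

# But the cut is collectively worthwhile whenever the total benefit
# exceeds the total cost -- true here (50 > 1):
collective_pays = total_benefit > cost

print(unilateral_pays, collective_pays)
```

Every country faces the same payoff table, so each prefers that the others abate, which is exactly the gap an enforceable international agreement (or, in the limit, a world government) is supposed to close.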
There are lots of disadvantages to a one world government, of course. And as it happens I don’t think there is good reason to expect climate change to have terrible effects. But if the more extreme claims were true, if climate change threatened the survival of the human race, it would be a serious argument for one world government.
I’m really surprised to hear you say this, David! This free-rider argument is basically the argument that everybody gives as to why there should be governments at all, and you don’t buy it for that, so what’s so special about climate change that now you would take the argument seriously?
It’s a legitimate argument for government. It’s just that there are stronger arguments against. Similarly in this case.
Market failure is real. There are situations where laissez-faire predictably produces worse outcomes than would be produced by a benevolent, all knowing and all powerful dictator. But the real alternative is not that, it’s a political mechanism where the kind of situation that causes market failure is the rule instead of the exception.
If there is a situation where market failure under laissez-faire is not merely inconvenient but catastrophic, then government, for all its problems, might be better–that’s the possibility I discussed in the context of national defense in Machinery. It might be the situation at a world level if global warming really was an extinction level threat.
I think I’m very close to your perspective here, that the main problem with this article is the strawmanning of conflict theory.
> conflict theorists aren’t mistake theorists who just have a different theory about what the mistake is
I’m not sure this is right. There probably are conflict theorist activists who fit the picture in this article, but I think that serious Marxists, especially Marxist academics, absolutely are mistake theorists, but their theory is that the bourgeois liberal theory of governance is mistaken, and that the bourgeois theory that capitalist political economy is still the best available system is also mistaken.
And further: a Marxist would say that the conflict between classes is a *structural* product of material conditions. Capitalists do not need to be bad people with bad intentions for them to be in class conflict with workers. A capitalist may sincerely believe capitalist ideology, and believe that everything they do is for the good of humanity. This belief is a *mistake*.
Finally, I’d say that Marxists do not think that governance is simple or solved. But they do think that there are serious issues that cannot be resolved as long as the bourgeoisie control the political process, and that therefore revolution is required prior to working on the remaining problems.
[Note: I am not really a Marxist, though I do count Marxist writers among my influences. I’m a libertarian socialist steelmanning Marxism here.]
One other thing to add: Any theory of politics or society that doesn’t incorporate some notion that there are groups with different interests and beliefs who are in conflict with one another will not describe reality very well.
In reality, we have interest groups with conflicts based on their interests, and conflicts on values that often split along culture-war lines, and also genuine disagreements about what policies would be best for reaching our shared goals.
This isn’t steel-manning Marxism, this is weak-manning or straw-manning mistake theory.
Interesting. The relevant Marx quote here is “It is not the consciousness of men that determines their being, but, on the contrary, their being that determines their consciousness.”*
I have met academics calling themselves Marxists on the grounds of accepting that premise, and have so far done well enough predicting ideology in academics by that rule. That does not necessarily imply to them that capitalist (?) classical economic theory is wrong, nor does it imply that revolution is required (see: social democrats**).
What it does mean is that to solve social problems, like governance, as a Marxist you stress the material side of a matter, rather than the ideas being discussed. So to a Marxist it is obvious that you have to redistribute power and wealth (the being) to solve complicated issues like racism.
From there, the discussion is about who is a victim of the structure one is morally inclined to help, what to redistribute, from where/whom to get it, and by what means.
That looks very much like conflict theory there. It invites the answer:
Just because the system disfavors group XYZ, you have neither proven the system overall to be bad, nor offered a better idea, nor shown how to get there. Those are hard issues, and probably the reason I see many Marxists making simple mistakes in politics. As soon as you take those seriously, you are on common mistake-theory grounds with non-Marxists and can even start to discuss the premise.
My classification of the Marxist position is obviously broader than prosthetic concience’s, and I think this is probably due to “Marxism” spanning a wide range of meanings.
* “The being” (German: Sein) is translated as “social being” in the first translation I googled. That says something about anglophone Marxism right there.
** One distinction I make is Marxist politics vs. Marxist philosophy. I think I outlined a Marxist philosophy, while Marxist politics would be narrower, accepting Marx’s political ideas about classes, revolution, etc. That is obviously totally different from any democratic interpretation.
I don’t know about Marxists in general (I don’t think there’s all that much they universally agree on), but I think Scott made a pretty strong case in his linked post that Karl Marx himself viewed governance as not worth trying to solve, in a stronger sense than merely “whatever solution we come up with can’t be implemented until after the revolution”. And I think this is an Easy Mistake.
Marx was not interested in “solving governance” because he did not believe that there was such a thing as “governance” in the abstract that could be abstractly optimized at any time in any situation for some universal set of human preferences. As Marx saw it, different groups in society inevitably had conflicting interests. This is not due to their virtues or lack thereof. It’s not because they are disagreeable people. It is simply baked into their social roles. Those conflicting interests exist whether they take notice of them or not. But Marx thought that, on average, people would be aware of these conflicting interests, and they would want government to do different things.
So, you could design a government to optimally serve the interests of a feudal aristocracy in a particular historical situation (what “optimal” means will depend on the situation–Marx always paid attention to historical context), or a government to optimally serve the interests of a capitalist class in a particular historical situation, or a government to optimally serve the interests of a peasantry in a particular historical situation, or a government to optimally serve a proletariat in a particular historical situation. To Marx, these would all be different tasks, requiring different templates.
There are few, if any, universal principles. It might suit a feudal aristocracy’s interests to have an absolute monarchy in one situation, but an elective monarchy (like in the Polish-Lithuanian Commonwealth) in another. It might suit a capitalist class to have a Bonapartist dictatorship in one historical situation, or a liberal democracy in another. It just depends on a lot of factors.
Yes, specifics of social context are important and there’s no universally applicable solution to the problems of governance. None of this negates the fact that if you’re going to overthrow the government, you’d better have a concrete plan for solving the problems that government solves, or else things are going to completely predictably go really far downhill, really fast.
If revolution is required before we can even talk about real-world, grown-up problems, what business do Marxists have with (flailing) attempts to pick apart public choice theory?
The criticism of public choice in the three articles referenced by the Jacobite article is that public choice theory is wrong today. These articles are fairly representative of the left-wing take on public choice theory. There’s never a contention that public choice will be wrong after the revolution; the contention is that its assessment of governance in a capitalist society is wrong with regard to capitalist society.
So which is it? Can technical questions of governance be addressed by Marxists in a pre-revolution bourgeois society, or can’t they? Because (quasi-)Marxists are taking a concrete position right now that public choice theory is mistaken in its attempt to describe the optimization of bourgeois governance. So this looks like pure motte-and-bailey:
There’s a second motte-and-bailey here.
Alice can be consistent and hold the position that public choice theory is bad because it cements capitalist society and inhibits the revolution/transition into a non-capitalist state. But that has nothing to do with technical claims! Public choice theory is either mistaken about its claims or it is not, and this doesn’t change based on whether you think that communism is good or not.
It seems like every attempt to engage with Marxists on the serious questions of making governments operate well ends up being a circular intellectual retreat. Other explanations (in this reply-thread) point in this same direction:
Well shucks, I guess we have to pack up and go home. We don’t have to examine the differences between the states of Norway and Zimbabwe because governance doesn’t exist. Public choice theory pwned again?
As far as I can tell, Public Choice is Right Marxism, so this is a particularly clear example of Marxists asking “Who Whom”, e.g. being conflict theorists.
That’s a bold claim. At its core, public choice theory is the belief that people who work for the government aren’t selfless. How on earth is that akin to marxism? Because if you’re trying to gin up some connection based on class interests, that’s gross abuse of the language.
This paragraph gives the reader +5 to all attempts to take Marxists seriously. Or at least, it gave me a place to stand that makes it easier to understand what Marxists are saying, and not mistake it for “Capitalists are evil.”
Firstly, we should probably at least glance at the right-wing conflict theorists, too.
E.g. “Democrats don’t really want to ‘increase diversity’ and ‘improve the economy through immigration’, they want to increase population demographics that consistently vote Democrat”. I don’t believe this, but it’s certainly an alarming idea.
Secondly, conflict theorists are definitely sometimes right. E.g. the whole tobacco industry influencing research on cigarettes and cancer thing. Perhaps the war in Iraq; I’m not sure how clear-cut a scam that’s considered nowadays.
I think paying attention to what the conflict theorists of the right are saying is important to understand what is going on in the world. In addition to declaring that Democrats want more Democratic voters, they also believe that Democrats and moderate Republicans favor immigration because they want low-skilled labor for their fat corporate donors.
Why can’t they want both?
It’s definitely true that there are groups with fundamentally competing interests. One of the failure modes for mistakists is forgetting this.
There are a lot of different Democrats. It’s really a question of what percentage that applies to, rather than whether it applies at all.
There’s also a much weaker version, which posits that in the counterfactual world where immigrants overwhelmingly vote Republican, the Democratic bloc would find a reason to oppose immigration, or at least be less enthusiastic about it.
Then the leftist Conflict Theorists should be able to capture most of the Mistake Theorists with a slam-dunk argument showing where the mistakes are in the technical arguments being deployed against them.
Oh, if only there were some institution filled with technical expertise but not in the employ of the Evil Right.
I don’t think this works. At least, it probably wouldn’t convince me.
Somebody upthread mentioned that conflict theorists will occasionally dismiss technical arguments on polarized issues, on the grounds that they can’t understand them but can understand what would motivate people to make them if they were false. But you can actually go further than that. I occasionally find myself unwilling to put much stock in such arguments even when I do understand them, because I don’t trust that there isn’t a hole in the argument that I simply wasn’t smart enough to catch.
Re: treating governance as an easy problem, it’s not necessary to go to the meta level of governance structures, but to see that there are lots of policies that are broadly supported by experts, and in some cases by the majority of the population, that don’t get implemented. Solving the deep hard problems isn’t a priority while you still can’t implement the known solutions to easier ones.
To take a few well known American examples: Both experts and the majority of the population want tighter gun laws. Same for immigration reform, tighter campaign spending laws, etc.
So it’s not unreasonable for someone to conclude that the underlying problem with the system is not that we don’t have solutions, but that the people in charge are unwilling to implement them, so we should change who is in charge.
I reject the notion that we already know what the optimal gun policies are. Many of the ideas in that article (like banning sales to people on watchlists, which have been managed quite poorly by the TSA, or with medical diagnoses) strike me as extremely questionable. Maybe the NYT found some pool of “experts” such that within it there’s expert consensus, but I don’t trust those “experts” to actually be right. See also Scott’s analyses of the issue, which are very far from coming down firmly on the side of gun control.
Immigration and campaign finance seem to be in the same state.
The issue isn’t whether you and I are personally convinced about the gun control issue but whether it’s reasonable for someone to get the impression there’s a public and expert majority on the side of tighter gun controls and then conclude that some non-democratic or non-benevolent force is stopping tighter gun control from happening.
Or perhaps that a non-democratic or non-benevolent force is influencing public opinion in favor of tighter gun control.
This worries me: mistake theorists converting to conflict theorists because they conclude that conflict theory is the only explanation for certain powerful people and their brainwashed constituents to keep opposing obviously good ideas. But often the empirical case and social consensus aren’t nearly as strong as they imagine.
It’s reasonable to get that impression, but it’s not an accurate impression. As a moderate member of the right-wing political coalition, I am okay with some marginally increased gun control regulations.
Other members of my coalition are really, REALLY opposed to this. The price of keeping them in this coalition is dialing back what gun control policies I support.
This is a totally acceptable trade-off to me, because I don’t think gun control is a big deal, especially in comparison to tax and regulatory policy.
So even if I end up in the 80% of people who want tougher gun control, there is a democratic force making sure gun control doesn’t happen.
This falls in with “mistake theory.”
The perfect is the enemy of the good; the optimal gun control policy is unknowable, but you have to admit that almost all proposals would be better than the disaster of the current gun policy in the US.
At the very least, banning semiautomatic weapons should be the start.
I “have to admit” no such thing.
In what way is current gun policy a disaster in the US? For the size of our nation and the size of our individual arsenals, the US is a remarkably peaceful place.
I am very, very worried about what’ll happen when we try to enforce this. With the cops and public servants we have now, with the legal culture we have now, with the way the War on Drugs and Prohibition turned out.
Like, if we *sensibly* banned automatic weapons so people couldn’t buy them openly, then shrugged and accepted that we can’t stop all sales or ownership, that would be better than this. You can 3-D Print guns these days, so we’d still probably end up with a fair number of mass shootings, but there’d probably be fewer.
But I remember Scott saying on his old blog how more people drowned in swimming pools than fell to mass shootings, so I kind of suspect the anti-gun movement isn’t entirely actuated by carefully studying the statistics. I think at least part of this is about screwing over the Red Tribe, and the Red Tribe *know* that, so I don’t think banning guns is a good way to reduce ‘gun culture’.
Which leaves us with a state where lots of people still have guns, we still have enough mass shootings to fuel outrage cycles, and the police are trying to enforce a (structurally) victimless crime, with all the psychological effects arising from that situation- except this time, the people they’re trying to catch are, by definition, armed.
I don’t know if that would get us fewer deaths, but I don’t think it would be good for civil liberties. Or for Red-Blue cultural tensions.
But automatic weapons are effectively banned. You can’t buy any full auto weapon made after 1986.
This is the other problem with public opinion on firearms regulation: an awful lot of the public is very uninformed about firearms. It’s sort of like an inverse technocracy where the ones most in favor of regulation have the least expertise. Contrast with say climate change regulations, where the experts are the ones in favor of regulation.
I strongly suspect you do not understand what semiautomatic means in this context. But if you think you do, please explain what it is about semiautomatic weapons that you think makes banning a high priority.
Why that? Banning semiautomatic weapons would make hunting, varmint shooting, and self-defense a little harder. It would also make mass shootings a little harder. But mass shootings are a tiny fraction of the murder rate–they just get a lot of attention because they are dramatic, hence newsworthy. And I don’t think the effect would be all that large on any of those things.
I can’t tell if, as another commenter seems to think, you don’t know what semiautomatic weapons are or if there is an argument against them that I am missing.
Technically true, but the “technically” there elides a lot of complexity. You can 3D print most of an inaccurate single-shot weapon that might blow up when you try to fire it if you had the printer set up wrong (might need to add some off-the-shelf springs or firing pins, I’m not sure), or you can 3D print a receiver (the serial-numbered part, so legally the firearm) for some types of weapon and fill in the rest with aftermarket parts, but both are a lot of work and neither adds much value. It is not currently practical to use 3D printing as an end-run around “assault weapon” bans — which have nothing to do with automatic weapons, but that’s another conversation.
Cheap, commodified CNC milling machines, on the other hand…
But they don’t have the shiny (except sometimes literally), so nobody talks about those.
They originally talked about banning automatic weapons and how easy they were to buy in any gun store.
I see they’ve tried to stealth-edit their post after being called out.
There is one aspect of American gun law that I find maddeningly insane.
My State of Residence has issued me several kinds of licenses. One is an Operator’s License for a vehicle. Another is a Marriage License, acquired in the past year.
And a third license is called a Concealed Pistol License.
This third license is the one that I cannot guarantee will be honored by all the other States in the Union. Each State that does honor the license has its own welter of laws that apply to how/when/where I can use the License to legally carry a firearm.
I find this to be insane.
I’ve had the Concealed Pistol License–and carried a concealed firearm–for more than a decade. In that time, I’ve had exactly zero violent encounters. I haven’t even had any stare-downs with people who were threatening me.
Some twenty or thirty times, I stepped into a legal gray-zone by walking into a building while carrying a concealed weapon. Sometimes, I wasn’t aware of whether the Law was worded precisely enough to tell me whether I was allowed to bring a firearm inside. Sometimes, I realized that a not-easy-to-see sign near the entry requested that people not bring “weapons” into the building.
Again, I find this to be insane.
In all those cases, I was able to leave without being detected. And without causing any harm to the denizens of the building.
And I gained the knowledge that it’s very hard to deter a person with a concealed gun from walking into a building. Unless there is a metal-detector-manned-by-a-guard at the door.
I also learned that it is much harder to deter a person with a concealed weapon from walking down a sidewalk, entering a park, or strolling along a line of storefronts.
I find laws banning guns at bars (or sporting centers, or shopping malls) to be insane. Unless the people who support those laws are also willing to support metal-detectors-plus-law-enforcement at every such building.
Did not know that. Mea culpa.
I read a (kind of panicked) article about it a few years ago, did enough fact-checking to verify it wasn’t completely full of shit, and assumed its projections regarding technological progress were basically sound, since I couldn’t think of any reason they wouldn’t be.
My only excuse is that I was young, still believed the media was mostly accurate, and failed to reevaluate the cached thoughts from back then before posting.
On an emotional level, the idea of giving the government an excuse to do to the Internet what they did to the Fourth Amendment still makes me break out in hives – but my estimate for how likely this is was heavily influenced by the technical feasibility of 3-D printed guns, so it should be much lower now.
Republicans and the NRA have been discussing concealed carry reciprocity legislation for some time now. I’m surprised it hasn’t happened yet with the Republican congress. It seems silly, though. If my red state has to recognize your blue state’s gay marriage license, your blue state should have to recognize my red state gun license. Marriage, gay or otherwise, is not a specifically guaranteed right in the Constitution, whereas the right to bear arms is.
With regards to 3D printing of guns, Nornagest and John are right. It’s not practical (currently), but who cares? Guns are very simple tools and anyone with a basic machine shop can make one. If you look around gun forums there are lots of hobbyists who make their own guns. This is perfectly legal. It’s only illegal to sell a gun without the proper licensing.
Heartily seconded on the ridiculousness of the lack of concealed carry reciprocity, both from a legal standpoint (IMHO, it violates both the 2nd and 14th amendments, though admittedly there’s room for debate on those) and, more importantly, from a functional one. I’m currently stationed right on the border between California and Arizona; if I miss one stop on the highway, either because an exit is closed for construction, or because it’s just faster to get some places by getting off on the California side and crossing back over on the backroads, I’m technically a criminal. And before that I was stationed in San Diego, so despite being required to complete use of force training and qualify with a pistol at least annually, it was completely impossible for me to carry legally, unless I felt like donating thousands of dollars to the Sheriff’s re-election campaign.
More to the point, I just can’t fathom what anyone expects it to actually accomplish. As you said, it’s practically impossible to enforce in most places, so it’s not going to stop anyone with the intent to do harm, and your chances of being caught are so low as to make its effects as a deterrent negligible. Making guns harder to acquire in the first place might make it slightly harder for people to use guns to commit crimes, but literally the only people who restrictions on concealed carry will stop from carrying are the most scrupulous and law-abiding people out there, and those are precisely the people we most want carrying.
Even worse, as we all know, most guns used in crime are stolen. But where are the vast majority of stolen guns stolen from? Out of peoples’ cars. And why do people leave guns in their cars? Because they’re not legally allowed to carry them in whatever building they just drove their car to. If you really want to get guns off the street, it seems to me that the last thing you should be doing is incentivizing people to leave guns in their cars.
I can totally understand (if not agree with) the arguments in favor of banning, or at least restricting the ownership of guns in the first place. You’re (at least in theory) reducing the total number of an item in circulation. But if the guns are already THERE, the only thing that restricting concealed carry accomplishes is making everyone less safe.
(Well, at least from a mistake theory perspective, just to keep this thread on topic. :op )
I do need to disagree with you on one front, though; banning the carry of guns in bars is at least reasonably sensible. Well, more to the point, restrictions on drinking and carrying are sensible. Banning concealed carry in bars still runs into the same “how the hell do we enforce it” problem as banning concealed carry in general, but at the very least it also acts as a roundabout deterrent to drinking and carrying. (i.e. in the case of a DD who gets pressured into having JUUUUUUUST one drink…) I’d still prefer laws that only ban drinking and carrying, but I can’t complain too much about states that ban it in bars.
The reciprocity bill passed in the House, but stalled in the Senate.
They’d need to eliminate the filibuster if they wanted to pass it, and I don’t think anyone in Washington actually cares about gun rights enough to go that far just to get reciprocity, as long as they can still expect to count on votes from the people who do. Additionally, due to the current polarized climate, the Democrats have been hesitant to agree to attach the reciprocity bill to any other bill that they might be more amenable to, like banning bump stocks or the Fix NICS bill, and because of the “WHAT PART OF ‘SHALL NOT BE INFRINGED’ DO YOU NOT UNDERSTAND?!?!?” crowd, the Republicans haven’t been willing to push to attach any kind of new gun regulation, whatsoever, to the reciprocity bill in the first place. So it’s likely to remain stalled indefinitely.
> Both experts and the majority of the population want tighter gun laws.
“Our expert survey asked dozens of social scientists, lawyers and public health officials how effective each of 29 policies would be in reducing firearm homicide deaths,”
So basically they decided who the experts were, and who *weren’t* the experts. Given the NY Times’s history both in the “Gun Control” debate and with regard to activism, why would you trust them at all?
This expands to the broader point, and is why we have such a YUGE problem with policy debates.
The problem is that you have to trust the people who anoint the “experts” as “experts”. When you are trusting an organization with a long history of being biased toward one position or another to pick “experts”, then you push people from “Mistake” to “Conflict”, because you appear (appear) to be operating in that mode yourself.
If you trust the NY Times and other similarly…oriented “news” organizations, then it is not unreasonable to conclude that the people who run the system are unwilling to implement them.
But if your sources of news are all singing from the same hymnal, and reality isn’t lining up, maybe there’s a different problem.
If your news sources are more diverse, and you dig deeper, you find out that things aren’t as cut and dried as a single news article or 30-second evening news clip can convey.
It’s very hard for me to discuss this generally because while I may not technically be an expert in this field, it’s only a lack of credentials – not knowledge. I have held instructor certifications for rifles and pistols, and for using them for self defense. I’ve spent tens of thousands of dollars on my own training. I know generally the federal laws in this area, and the state laws of any state I am living in at the time; I have kept up to date (and occasionally refresh that knowledge) on the statistics, etc. Hell, I used to be an activist in the area.
I also work very hard to counter the Gell-Mann amnesia effect. So when I see “experts” advocating positions that I know are *utterly useless* at achieving their stated goals, and these experts – and their positions – are being pushed hard across multiple media fronts, I not only discount anything along those lines in that media, I also start to question their veracity *across subjects*.
If they’re going to flat out lie to me about Gun Control, why are they going to tell me the truth about Immigration or anything else contentious and political?
There are very, very rich people in this country on all colors of the political spectrum who feel very passionate about issues, or know a lot about them, or maybe just want to get richer off them, or maybe they’re angry at their mothers for not loving them enough and want to destroy the world over it. Whatever, but they and their money are part of the process too.
So yeah, on the surface it’s reasonable for some to conclude that the underlying problem with the system is not that we don’t have solutions, but that the people in charge are unwilling to implement them.
But that’s a shallow analysis. The deeper analysis is that most of the issues facing us today are hard problems.
Another moderately contentious example – simplistic univariate analysis shows that women make 73 cents for every dollar men make. This is a YUGE problem, say the Feminists and the Left. Multivariate analysis closes this gap to what, somewhere around 93 to 97 cents on the dollar, depending on the analysis and who you trust (there we go again). Breaking it out, we also find out things like “single professional women make more than single professional men” (or used to, I don’t know if that’s changed in the last 15 years) and other things that suggest more than just “bias” is going on.
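To make the univariate-vs-multivariate point concrete, here’s a toy illustration with entirely made-up numbers (not real wage data): within each occupation, men and women earn exactly the same, yet the pooled comparison still shows a large raw gap, because the occupational composition differs.

```python
# Hypothetical illustration of how a raw (univariate) pay gap can shrink
# once you control for a confounder such as occupation. All numbers are
# invented for the sketch; they are not real wage statistics.

# (sex, occupation, salary) records
people = (
    [("F", "teaching", 50_000)] * 70 + [("M", "teaching", 50_000)] * 30 +
    [("F", "engineering", 90_000)] * 30 + [("M", "engineering", 90_000)] * 70
)

def mean(xs):
    return sum(xs) / len(xs)

# Univariate comparison: pool everyone, ignore occupation.
raw_gap = (mean([s for sex, _, s in people if sex == "F"]) /
           mean([s for sex, _, s in people if sex == "M"]))

# Multivariate-style comparison: compare within each occupation, then average.
ratios = []
for occ in ("teaching", "engineering"):
    f = mean([s for sex, o, s in people if sex == "F" and o == occ])
    m = mean([s for sex, o, s in people if sex == "M" and o == occ])
    ratios.append(f / m)
controlled_gap = mean(ratios)

print(f"raw ratio: {raw_gap:.2f}")         # 0.79 -- a large apparent gap
print(f"controlled ratio: {controlled_gap:.2f}")  # 1.00 -- gap vanishes here
```

Real analyses control for many more variables (hours, experience, and so on), and real controlled gaps don’t vanish entirely, but the mechanism is the same: the raw ratio conflates the sex difference with the composition difference.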
But we have Policy Wonks in D.C. proposing equal wage laws at the federal level, and Republicans being made fun of when they suggest that we don’t need another law like that since we’ve had one since 1963.
Laws against (flipping back here) mentally ill people buying firearms? Already federal law. Violent criminals buying firearms? We’ve gone one better – NO felons can buy firearms legally, nor can people convicted of certain violent misdemeanors. So there are at least two solutions that are already law that the “Experts” think we need. If you don’t know your proposal is already law, then how the hell are you an expert? We already have *required* background checks for buying firearms from someone in the business of legally selling firearms.
You know the one thing that lowers violence that *none* of the “Experts” is suggesting? If you get convicted of a violent crime, you are locked up until you’re 30. Go dig through the statistics; I think you’ll find that if you take people who have a record of violence between 14 and 18, and lock them up until they’re 30, the rate of recidivism goes way down. Of course, the side effect of that is that prosecutors looking for a “win” will let them plead to a lesser/non-violent crime (one of the reasons we’ve got so many people in prison for “possession” right now is that they pled a trafficking or weapons violation down).
 https://twitter.com/iowahawkblog/status/332494589934047234, https://twitter.com/jtLOL/status/501493192953319424
Banning “Assault weapons” would, in 99% of years, have no measurable impact on crime or rates of violence. Less than 450 murders (out of roughly 16k) are committed with a rifle, and most of those aren’t “assault weapons” by legal definition. “Assault weapons” are generally defined in such a way as to refer to cosmetic features rather than functional features. Likewise “High Capacity Magazines” – in all but one mass shooting I am familiar with, the rate of fire was such that a 7 or 10 round magazine could have been used almost as efficiently as a 15 or 30 round one.
 I would also suggest significant reform of the prison system in the US such that prisoners have a reasonable chance at job training, but this is a complex and difficult problem.
I am not sure what “experts” in immigration reform would be testifying to, exactly. My impression is that the evidence really does indicate that low-skilled immigration depresses low-skilled wages to some extent, and that immigration does lower “social trust.” But most of what I would call the educated, expert class favor more immigration *anyway* because they put a high value on the well-being of the immigrants, who are much better off if they’re allowed to live where they want.
That’s surely one reason. But conventional economic analysis also suggests that more immigration provides net benefits to those already here, although not necessarily to all people already here.
You’re right–that is a better description of the expert consensus. I see a lot of parallels between what anti-immigration populists say about immigration and what lefter people say about things which are good for the economy, GDP, businesses overall but which are nonetheless bad because of the effect on “inequality” or some specific group x.
Part of the problem with using a phrase like “immigration reform” is that it can mean anything to anyone, so it is meaningless. I want immigration reform: deport illegals, reduce legal immigration, end birthright citizenship and chain migration and move to a skills-based model for who gets to come in. Endless reforms! But I’m betting that’s not what you think “immigration reform” is…
Governance is trivial.
It’s as simple as switching from:
A Democracy rooted in conflict theory, in the form of Plurality voting, to
A Democracy rooted in mistake theory, in the form of Score voting.
Doing it is as simple as changing the ballot from “mark one” to “score each” and optimal governance would auto-resolve.
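For what it’s worth, the mechanical difference between the two ballot types is easy to sketch. This is a minimal illustration with invented candidates and scores, not a claim about any real election:

```python
# "Mark one" vs "score each": the same electorate, tallied two ways.
# Candidates A, B, C and all scores below are invented for illustration.

plurality_ballots = ["A", "A", "B", "C"]  # "mark one"
score_ballots = [                          # "score each", 0-100
    {"A": 100, "B": 60, "C": 0},
    {"A": 100, "B": 70, "C": 10},
    {"B": 100, "A": 40, "C": 90},
    {"C": 100, "B": 80, "A": 0},
]

def plurality_winner(marks):
    # Most first-place marks wins.
    tally = {}
    for mark in marks:
        tally[mark] = tally.get(mark, 0) + 1
    return max(tally, key=tally.get)

def score_winner(ballots):
    # Highest total score across all ballots wins.
    totals = {}
    for ballot in ballots:
        for cand, score in ballot.items():
            totals[cand] = totals.get(cand, 0) + score
    return max(totals, key=totals.get)

print(plurality_winner(plurality_ballots))  # A -- largest committed base
print(score_winner(score_ballots))          # B -- broadest overall support
```

The interesting case is exactly this one: the score winner can differ from the plurality winner when a candidate with broad moderate support faces one with a committed but narrow base.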
How does this solve the problem of rational ignorance? The public good problem in political action that leads to legislation designed to transfer from dispersed to concentrated interest groups even if the net effect is negative?
How are you going to get around strategic voting? If you have two candidates, the rational move is to score your preferred candidate at 100% and their opponent at 0. This is less of a problem with more candidates, but remains an issue.
Also, democracy is always limited by the knowledge and intelligence of the voters. Any system of counting votes is still vulnerable to the people’s mistakes.
The worst case scenario for score voting – universal strategic voting in which voters only assign 100% to their single preferred candidate, and 0% to all others – is that it degenerates into the system we already have: One Person, One Vote. I don’t see the creation of massive potential utility gains as a problem that we have to “get around” just because its lower bound is the same as what we are currently stuck with.
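That degeneration claim is easy to check mechanically: if every voter bullet-votes (100 to their favorite, 0 to everyone else), the score tally produces the same winner as a plain plurality count. A small sketch with invented voter preferences:

```python
# Worst-case score voting: universal strategic "bullet voting" collapses
# into One Person, One Vote. Candidate names and preferences are invented.

preferences = ["A", "A", "B", "C", "A", "B"]  # each voter's single favorite

def strategic_ballot(favorite, candidates=("A", "B", "C")):
    # Bullet vote: 100 for the favorite, 0 for all others.
    return {c: (100 if c == favorite else 0) for c in candidates}

def score_winner(ballots):
    totals = {}
    for ballot in ballots:
        for cand, score in ballot.items():
            totals[cand] = totals.get(cand, 0) + score
    return max(totals, key=totals.get)

def plurality_winner(marks):
    tally = {}
    for m in marks:
        tally[m] = tally.get(m, 0) + 1
    return max(tally, key=tally.get)

ballots = [strategic_ballot(fav) for fav in preferences]
# Under universal bullet voting, the two systems agree.
assert score_winner(ballots) == plurality_winner(preferences)
```

Which is the point: the strategic worst case costs nothing relative to the status quo, while honest scoring opens up the upside.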
The problem with our One Vote system is that it entrenches two parties in power, and narrows the expression of voters’ political will to whatever those parties choose to offer – leading, in our latest election, to the least-liked pair of candidates ever. In a score voting system, we wouldn’t be stuck with those shitty options, and we wouldn’t have had to hold our noses and vote for whichever we perceived as the lesser evil. If Hillary and Donald were somehow still seen as the front runners in that hypothetical campaign, people could still vote strategically by giving either of them 100% and the other 0% – but they could also give Bernie 100%, or Rand Paul, or whoever they actually liked and genuinely most agreed with. And we would start to see what the real range of political opinion is, and candidates could begin to build new coalitions of previously under-represented constituents. And massive power to control the collective Overton window would no longer be given to small groups of party insiders for no good reason.
This is a really interesting post that I think crystallizes a number of similar dynamics for me.
I’ve thought for a while that there are two definitions of “racism” in use: colloquial, in the sense of treating people differently on the basis of race, and academic, in the sense of structural, privilege + power. And what I found so frustrating about this dynamic wasn’t just that different people were using different definitions, but that the colloquial definition was academically racist, because it ignores and perpetuates power dynamics, and the academic definition is colloquially racist because it says, for instance, that black people can’t be racist. I think this might be a case of mistake vs. conflict theory. To confirm this, I’d need to find out more about how the Klansman sort of conflict theorist views racism, but I’d guess that would bear this theory out.
This also reminds me a little of the Cactus Person story, but I may be overmatching there. I’ll think about that comparison more in the morning.
It’s kind of similar in that both present a sharp dichotomy between perspectives, with one being skeptical and analytic and the other being more like Dumbledore in HPMOR- seeing the world as comprised of emotionally satisfying narratives. If you wanted, you could pattern-match this to the empathizer-systematizer scale, C. P. Snow’s ‘Two Cultures’, the left-brain/right-brain distinction, and pretty much any model that splits the world between logic and feeling.
Which is ‘better’ depends on how the world really is, both in general and in the domain you happen to be focused on.
My personal narrative, at this point, is something like ‘mistake theorists are Humanities People who’ve stumbled into a complicated, systemic-type domain, and are trying to force it to fit their narratives/prejudices’. But I should be skeptical of this, because the underlying reality is probably a complicated, systemic-type place, and I shouldn’t try to force it to fit my narrative/prejudices.
EDIT: I’m emphatically not against Humanities People in general- their approach is perfect for literary analysis, creative work, and all the spheres of human activity where emotion and narrative work really well. Many of them are genuinely lovely people, and do good work. They’re just uniquely ill-matched to this specific category of problem.
Do you mean to say “mistake theorists” in the first sentence of the second-to-last paragraph, or did you mean “conflict theorists”? If you meant what you said, explain?
Yeah, that’s a typo. Was so busy trying to remember which uses were capitalized, I mixed up the categories.
Off topic, but can I just say I really love your work? Without geeking out too much that you’re talking to me, because that would be awkward, but you’re kind of incredibly inspiring.
But not all conflict theorists are easy conflict theorists. You can have an analysis in terms of conflict that doesn’t contain the easy answer that one of the conflicting sides is 100% correct.
I feel like there’s a distinction between ‘people who are conflict theorists’ and ‘people who think conflict is part of the picture’.
If you’re a hard conflict theorist, you think the other side has an understandable perspective that motivates its actions, but most of your energy is still going into fighting the people who have the wrong answers instead of trying to find the right ones.
(The way I read it) hard mistake theorists can have a fully fleshed-out, non-cartoon version of their opponents, but they still think their opponents are ‘The Problem’ and not ‘Problem #46, Incentive Tier’.
That’s interesting, because I took the opposite conclusion. That Conflict types were the ones most likely to say their opponents are ‘The Problem’ and need to be gotten rid of, and Mistake types were most likely to say “We’re all reasonable people here, just need to figure out how to point that reasoning in the right direction (ie incentives, checks & balances)”
Arrgh, I keep mixing up the words. Pretty much what you said.
My guess (honestly) is that some part of my mental auto-processing software thinks conflict theorists are making a mistake, so it keeps autocorrecting their designation to ‘mistake theorist’ without me noticing.
I get how ironic this is in context, and I do not endorse it, but it explains why I made the same stupid typo twice in the same thread.
Not all conflict theorists are “Humanities People”. I brought up the “conflict vs mistake” idea with one of my coworkers today. He’s a college-educated programmer (nowhere near the Silicon Valley level though) with blue-collar sensibilities who is conservative and very pro-Trump, and he emphatically agreed with the Conflict side, taking the position that society is rigged by the wealthy elites / establishment who are either (in the case of the media & academia) pushing a Leftist agenda or (in the case of the Republican establishment) are too weak / self-serving / incompetent to offer the first group any pushback. Only solution is to kick them all out and put people with some common sense in charge.
I don’t know of any answer to how people end up as Mistake vs Conflict people, since I’ve seen both types on all sides and from all backgrounds.
And I know some (literal) Humanities People at college who really don’t strike me as conflict theorists.
I didn’t exactly mean this as a metaphor, originally, but it’s pretty clearly not literally true. I do think there’s a connection, though, even if I’m having trouble formulating exactly what it is. Something to do with partially overlapping clusters in peoplespace, maybe? But that still feels a little hand-wavy.
I see what you’re after, and I can’t get it quite to work either.
I think conflict theorist versus mistake theorists is a horseshoe model, and the Two Cultures is a straight binary, so they can’t really line up.
My own personal narrative is that the Conflict theory people are like the dog that finally caught the truck. They worked really, really hard, for a really, really long time, to make everything reduce to a no-quarter brawl between groups for gain – and now they’ve finally got their wish, and are finding out what it’s really like to have powerful interest groups gang up on you, stab you in the back at every turn, and count “making you lose” as a primary goal.
I have a subagent that thinks this is really funny.
And a second subagent that’s horrified at the first subagent because we are Not The Kind Of Person who takes pleasure in bad things happening to the Outgroup- we’re a Nice, Cooperative Person!
And a third subagent who sort of sniffs dismissively at the second subagent, because we are Not The Kind Of Person who lies to ourself about what kind of person we are.
And a fourth subagent who’s honestly unhappy at the thought of the conflict theory people suffering, but, when questioned, admits that it’s partly reacting to the ‘dog’ analogy and it really likes dogs, and anyway people in pain aren’t always safe to be around.
And a fifth subagent who thinks this is all a little artificial and performative and possibly signaling, and we probably shouldn’t be calling these ‘subagents’ when they’re more ‘inner voices’ or ‘feelings’.
I think this is progress, but I’m not sure how to feel about it yet. Or, I guess, which feelings to endorse.
This is only barely related to your comment, but I really dislike the Humanities/Science(?) distinction because I feel like it just doesn’t hold up very well when you try to apply it to actual people. Whatever happened to being well rounded and having multiple interests? For instance I’m a computer science and art history double major and I don’t feel like I’m using wildly different skills or parts of my brain when I engage with each subject. I just think it is impossible to say that “Humanities people” are even a definable group that all approach problems using emotion and narrative.
I agree that these are the two most common, but given where we are it is fair to mention a third–the one from Against Murderism
Under that definition, Scott would exclude a lot of people that the colloquial definition people would definitely say are racist (see part IV). And at least in certain parts of the internet that type of definition is gaining ground. It basically holds that almost no one is racist and indeed that it is pretty close to a meaningless slur that ought to be retired.
So the spectrum goes from certain academics that think that everyone is racist because that just means living in a society that embodies structural racism; through the colloquial definition where everyone might be a little racist, most people aren’t a lot racist, but there’s a non-trivial minority that are; through almost no one is racist because you’d need to be totally irrational on the subject but not sub-clinically (or clinically) mentally ill.
Re: two uses of racism, I think we need new terms with less divergent connotations. Instead of “colloquial (or personal) racism” and “academic (or institutional) racism” how about something like “prejudice” and “discrimination”?
(Not sure how this ties into mistake/conflict theory, though, except that I tend to see the righty we-are-all-anti-racist(prejudiced) folks and the lefty minorities-are-all-super-hurt-by-racism(discrimination) folks as caught up in conflict theory while I’m more concerned about both recognizing our progress in fixing mistakes and recognizing the mistakes that remain to be fixed…)
Mistake theories are best suited to the task of avoiding negative-sum conflicts. Conflict theories are best suited to the task of winning zero-sum conflicts. The “easy” half of your easy/hard split appears to be the people in the zero-sum meta-conflict of which framing should dominate who argue that the opposing position doesn’t describe a real circumstance.
I experience this fact acutely when I play conflict-driven games. I mostly play chess as a cooperative effort to have a “great game”, even to the point of trading colors with my opponent if we become mismatched. I usually don’t enjoy games like Diplomacy or Risk, because they’re too zero-sum and there’s not much about them (especially Risk) that feels like learning or like a beautiful game. But I think it’s a great mental exercise for mistake-lens people to play these games, and really attend to what is different about them compared to (say) DnD. And to how some people playing these games get more uncomfortable, whereas other, typically disengaged, people suddenly come alive into a flow state.
The mistake-type and conflict-type attitudes aren’t, IMHO, fundamental. (at least not normally)
Rather, they’re products of ecology. When people feel they’re playing Risk, Chess (in a one-off game for high stakes against an uncharitable opponent), or Diplomacy, they develop their conflict lens. When they play DnD or play chess against the same opponent many times, both aiming to improve their games… they develop their cooperative lens.
Creating an ecology where being a candidate for office more resembles DnD than Diplomacy is a *hard* problem. There’s room for help from all willing hands.
ooooh so what does it mean that the Russians are playing chess while the Americans are playing poker?
We’re either playing Checkers, or Whack-a-mole.
Diplomacy is an interesting example, because while the ultimate goal is zero-sum and conflict-type, much negotiation is aimed at convincing the other players using mistake-type reasoning. It’s me as England convincing France that we have a win-win solution to crush Germany.
Business litigation is a fascinating real-world example–the parties want to achieve a positive solution through a negative-sum process. Parties that view a dispute on purely business terms use discovery primarily to discover what the true likelihood of victory is and then settle around the appropriate number. Parties that bring personal vendettas into cases view litigation as a negative-sum, winner take all game, and avoid settling. Good lawyers have to freely switch between attitudes throughout the case.
If there is conflict in the territory, which you can’t do anything about, mistake theory isn’t much good at all.
I feel like the words ‘which you can’t do anything about’ are doing a lot of work here.
Often, in the real world, people find there’s a mountain or something in an inconvenient place, and for a while they have to make allowances for it. In the long term, though, we’ve had a lot of success tunneling under it. Or installing ski lifts or treadmills to make surmounting it easier. Or setting up shop somewhere else.
If you’re motivated to avoid conflict, that’s a technical problem we can work out how to solve. Or not. But our record for solving technical problems that seemed impossible is pretty impressive.
I don’t think short-term vs. long-term considerations should be trivialized; policies need to be decided on in a reasonable timescale, and our record for predicting when we’ll actually solve any given problem is less impressive.
So, sure, focus on mitigating conflict but you still need to devote some resources to managing it in the meanwhile.
EDIT: I reread the list of authoritative pronouncements I linked earlier, and the one about trains stood out to me so I googled it. That one’s apocryphal (the one purportedly from Martin van Buren). I googled a couple of the others and they seem to be genuine, though.
I still think the list as a gestalt points to something true and important, but at least one of the more extreme examples is fabricated, so, y’know, ignore that one.
Several of them are still “True” though.
We don’t have space travel–we have the ability to put someone up there and get them back safely most of the time, but not really “travel”.
That makes sense- and explains why we still see both styles instead of one being predominant. Incorporating it into my model. Thanks.
I am a mistake theorist (like I suspect a lot of the regulars here are). I tend toward Hard Mistake Theory, but dabble in Easy Mistake Theory as well. The thing that most frequently pushes me toward Conflict Theory is the gnawing suspicion that people I encounter are not looking for an answer, that they’re not even trying to be rational, that they’re simply indulging their emotional narratives without even thinking things through.
Basically, that they’re conflict theorists.
tl;dr: Mistake Theory is almost always correct, except when it comes to describing Conflict Theorists, who are moral mutants and hate goodness for no reason.
Is this meant to be ironic? I genuinely can’t tell anymore
Definitely ironic, in the sense of looking back at my own thought processes with bemusement and a little distaste, and wondering how much effort I should put into changing them.
“Moral mutants” is
Yudkowsky’s phrase– meant to be a reference to Yudkowsky’s Are Your Enemies Innately Evil (which conforms to Betteridge’s Law). Scott uses the phrase in this article, in a way that underscores its absurdity.
The comment section here is often so bizarre it’s hard to tell.
Ironically I was going to post that link in my reply when I was unsure if you were serious, but I didn’t notice the symmetry of the phrase.
It’s a really good article.
I like the weirdness- it’s kind of bracing. The absurdity heuristic really doesn’t work very well here.
Was nice talking to you :).
https://youtu.be/udJw-CzX7sA (10 seconds, just for amusement)
To be fair, those criteria don’t technically conflict. (For certain values of ‘technically’ arguably being a ‘mutant’ implies being at least a little odd, outside biology departments anyway.)
Any tips for routing around them?
I think that when it comes to politics people instinctively resort to moralistic tribalism. Routing around that tribalism is the hard problem that the mistake theorists really need to solve.
I hope this isn’t true, because if it is, then I don’t think I like the vast bulk of humanity very much and am less concerned about what happens to them. If conflict theory is an intractable part of human nature, and all we have is team vs team with flexible facts to fit the convenient narratives needed to “win”, then it doesn’t seem like there’s any point in even trying to make a better world. Actually, it’s the kind of feeling that if I wasn’t cursed with average intelligence and ability would make me legitimately consider becoming a supervillain.
I wonder if mistake theory isn’t more attractive to cognitive minorities, since it allows them an escape from having to join one of the inevitable two big teams, when neither one fits their cognitive profile, and are tailored for two different types of normals. People who have no tribe to begin with other than their friends and local relations aren’t going to want everything to devolve into abstract ideological tribes where reality becomes relative to achieving the distant and messianic end goals of the group.
I saw a Twitter post a few months ago from either @ClarkHat or somebody he retweeted that said something like:
“My preferences for governmental system, in rank order:
1. A limited, federal republic with almost no government interference in day to day business.
2. A strong central government that controls almost all aspects of our daily lives with my team in charge.”
I sometimes think that at least part of the rise of the “alt-right” is a big chunk of America starting to believe (rightly or wrongly) that another big chunk of America is doing everything they can to crush them and their way of life, and deciding to fight back on that other chunk’s terms.
Also see the posts on Status 451 about what right-wingers need to learn from the left (tl;dr: having power is more important than convincing people that your ideas are correct): https://status451.com/2017/11/11/radical-book-club-what-righties-can-do/
Actually, I love this quote from the article since it seems very related to this topic of Mistake vs. Conflict:
“The legendary biographer Robert Caro mentioned once that he had heard college professors talk very convincingly about how the paths for freeways in New York City were chosen. The professors listed variables, and considerations, and trade-offs, and they talked very knowledgeably and nothing they said was worth a damn because the paths for freeways in New York City were chosen for one reason and one reason only: a freeway was where it was because Robert Moses wanted to build the freeway there. Considerations meant nothing next to power.”
That quote is annoying as hell. It’s not like Moses just randomly picked them out of a hat.
It’s even worse – Moses didn’t even pick them. The Regional Plan Association (which was basically a tool of the Rockefellers) laid out where the highways were to go long before Moses came on the scene. Moses was the Rockefellers’ hired muscle to get it done, and when he stopped doing what they told him to he was slapped down.
Does that make me a conflict theorist?
(See “The Assassination of New York City” by Robert Fitch for the details of, well, see its title…)
I think there’s a third axis, that measures social trust. Conflict theorists ignore it because they assume zero social trust, and mistake theorists ignore it because they assume total social trust. But there’s also room for considering social trust as a variable, and deciding your approach to an issue based on where exactly you think it is now, or on how the issue would affect trust.
This doesn’t sound right. Mistake theorists routinely treat the absence of social trust as a problem that needs to be worked around.
Yes, but from the same outside-view that leads them to consider conflict theorists as making a basic mistake. It sees it from the outside, but doesn’t go into the inside view mode.
Here’s one of my go-to links for pointing at lack-of-trust as a problem to be worked-around…
About once every two years I seem to be posting this game-theory link to my facebook wall… http://ncase.me/trust//
Nifty game theory lesson. And the music reminds me of Toady’s noodling in Dwarf Fortress.
I love that link, and also send it to absolutely anyone who will listen.
I often use it as an argument for why small churches (ideally with a sense of humor about their doctrines) are so valuable. Even if the larger world goes nutso defect-defect on you, a small church can keep everybody singing on Sundays, and generously supporting all with each’s relative specialties. I never trusted a mechanic, a dentist, or a lawyer so well as when said professional was doing work for someone who taught Sunday school or otherwise attended church at the same place as the professional’s kids. Particularly for kids, for whom the sense that there’s a high-functioning haka–a place where everybody moves in step, for everyone’s benefit–and extra-particularly for kids with only one parent… well, small churches can be *great* at giving small kids a place to belong that’s loving and sacred, when the parents aren’t in a position to supply that at home.
What’s the best church for community with a sense of humor and not too much oppressive stuff? I’ve heard good things about Mormons but also that they can be pretty strict (and being gay in a Mormon area is apparently terrible). Maybe reform Jews are better, but Jews are argumentative at the best of times and have a tendency to have a high ritual to community ratio.
Finally got the time to do it. It’s so great, thank you.
Glad you liked it!
I don’t have much more to say besides “small, sense of humor” as my suggestions for finding a church. Actually, another thing to look for might be independence from national political trends–e.g., the fact that Utah goes its own way politically (or seems to, lately) is a strong positive for the LDS church.
I agree that a major trade-off to religious community can be the strictness of the restraints / exclusions they enforce. Particularly for women (expected to sacrifice their careers to have kids and be at home) and the whole range of sexually non-vanilla people out there, from gays and lesbians to master/slave and bdsm, to poly communities. Religion kinda functions like a canal for flowing through life. It really saves a lot of time and effort–a veritable lifesaver!–if the canal goes your way, but otherwise, very much less so.
The best thing about small churches with leadership with a sense of humor is that the leadership can keep track of everyone’s differences, and find ways to accommodate or dig side-canals as needed, while still keeping the main canal clear and flowing. Maybe you’d enjoy the story “On Mars, Do we have a Rabbi!” I think I first saw that linked to here on SSC, not sure who shared it.
Zero social trust in the far-away “real world”, total social trust among people discussing solutions?
A good example of this is how people treat internal vs. international politics. Members of one common polity who all gain from the collective good of that group can treat each other as well-meaning but mistaken collaborators. But in a situation where your goals honestly don’t match, and what is good for country A might be bad for country B, you need to act like competitors.
Some conflicts are genuinely zero-sum. If I’m 100% pro-life and you’re 100% pro-choice, then we’re probably not going to come to agree–one of us will win and get their policy imposed on the other, or maybe we’ll end up in some middle-ground where we both think we’re having evil policies imposed on us (like if you allow abortions up to halfway through the pregnancy and then forbid them.) But anything I count as a win, you’ll count as a loss, so compromise isn’t really on the table.
But most conflicts aren’t zero-sum. If you only want things to go better for women in our society, and I only want things to go better for men in our society, there are zero-sum conflicts that can come out of that, but most things we can do to make society work better will make both women and men better off. When we’re talking about child support arrangements, maybe we’re stuck in a conflict-only situation, but when we’re talking about tax or trade policy, probably we can come to some compromise that makes both men and women better off.
Alright, now my brain is stuck trying to think up a non-zero sum solution to this. Here’s where I’m at so far:
Scenario 1: Pro life (PL) person gets unwantedly pregnant, does not get abortion
Pro-Choice (PC) is unaffected, given that they don’t care what the PL does.
Net effect in terms of overall happiness: PL + 0, PC + 0
Scenario 2: PL gets unwantedly pregnant, gets an abortion
No one is happy here: PC -1, PL -1
Scenario 3: PC gets unwantedly pregnant, is unable to get an abortion due to regulation imposed by PL
Net effect is PL + 0, PC -1
Scenario 4: PC gets unwantedly pregnant, is able to get an abortion
Net effect: PL -1 , PC + 0
In this case, the only inciting factor to decreasing overall happiness is when PC gets pregnant. As such, rational actors from both camps could realize that their net happiness is increased if they minimize the number of times that PC gets pregnant. Therefore a non-zero sum solution would be to try and reduce the number of times that happens (via sex ed, access to contraceptives, etc)
This doesn’t describe reality. So let’s go back to the scenarios. I think my error was in scenario 1. It might look more like this
Scenario 1b: Pro life (PL) person gets unwantedly pregnant, does not get abortion.
Let’s now assume that PL being pregnant, even though it is unwanted, increases the net happiness in that tribe.
Net effect in terms of overall happiness: PL + 1, PC + 0
This immediately removes PL’s incentive to reduce the overall pregnancy rate, since they’re cutting themselves off from the increase in happiness from scenario 1. In this paradigm, you would now have them doing very little to combat unwanted pregnancy (since it might successfully combat the pregnancies they want to happen) and instead focusing entirely on preventing PC from being able to get an abortion… but that can’t be right.
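The scenario accounting above can be made explicit with a tiny model. A sketch, using the commenter’s scenarios with the same purely illustrative happiness deltas (these numbers are assumptions for the sake of the example, not data):

```python
# Hypothetical happiness deltas (pro-life delta, pro-choice delta) for each
# scenario from the comment above; scenario 1b replaces scenario 1.
scenarios = {
    '1b: PL pregnant, no abortion':      (+1, 0),
    '2:  PL pregnant, gets abortion':    (-1, -1),
    '3:  PC pregnant, abortion blocked': (0, -1),
    '4:  PC pregnant, gets abortion':    (-1, 0),
}

for name, (pl, pc) in scenarios.items():
    print(f'{name}: PL {pl:+d}, PC {pc:+d}, net {pl + pc:+d}')

# Under the original scenario 1 (PL delta 0), every unwanted pregnancy is
# weakly negative-sum, so both camps share an incentive to reduce them.
# Under scenario 1b (PL delta +1), PL pregnancies become positive-sum for
# PL, which removes that shared incentive -- matching the argument above.
net = sum(pl + pc for pl, pc in scenarios.values())
print('net over all scenarios:', net)  # -3
```

Under these assumed weights, the only policy both camps still agree on is preventing PC’s unwanted pregnancies specifically, which is much harder to target.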
The closest thing to non-zero-sum policies I can think of is that if both sides agree that unwanted pregnancies are bad regardless of what happens to them, maybe they can both agree on trying to decrease the number of unwanted pregnancies.
In my experience, Pro-Life and Pro-Choice DO agree that decreasing unwanted pregnancies is one solution to the problem of abortion, but PC says “The solution is more contraception so that casual sex doesn’t lead to unwanted pregnancy” and PL says “The solution is less casual sex and, to the extent that contraception greatly normalizes / incentivizes casual sex, less contraception”.
If the people with whom you are having a discussion about some difficult problem *consistently* have demonstrated that they cannot be trusted to follow through with their agreements, is that zero social trust, negative social trust, or do I fundamentally misunderstand the word?
Probably zero. Negative trust would describe a scenario where someone will go out of their way to screw you over against their own interest (like if Jerry stopped Newman from getting a postal route in Hawaii even though it’d mean Newman stays near him).
But even negative trust creates reliability, since it makes the person you’re dealing with more predictable. And it could also lead to social trust, if negative social trust is the product of someone disliking you for a specific reason (since you’d want to avoid making people dislike you).
What if you’re a mistake theorist who’s started noticing all the conflict theorists hanging around and sharpening their knives, and believes in self-defense?
That’s an essentially conflict-theoretic perspective.
Maybe you can be different things on different levels of meta?
I feel like there’s a perfectly coherent position that kind of merges the two. Something like: ‘Our problem, that we need to solve, is that other people have mistakenly adopted this conflict-type perspective. This is essentially a mind-virus that places them beyond reason, so we can’t really debate with them. But it’s not their fault they’re like this, and those of us who are uninfected should work together to find a cure!’
That still seems like an essentially mistake-theoretic perspective, but one that’s very prepared to treat conflict-theorists as the enemy in a functional sense.
This might go to what Scott was talking about on the Political Spectrum Quiz; maybe some mistake theorists are conflict theorists one meta-level up.
Isn’t that basically the reason for Marxist Democratic Centralism? People who are vetted Cadre can be trusted to discuss using Mistake theory. Comrade Luxemburg is simply mistaken about the role of the Vanguard party.
Outside this, everyone is a Capitalist Running Dog, bound into conflict by class dialectic, and Conflict Theory prevails.
In that it divides the world into “us folks” and “those fuckers”? Yeah, I guess. But the point I’m trying to make is that if you have any concept of “those fuckers” at all, who you bucket into it depends just as much on your model of their motivations as on your grand unifying theory of politics. And that depends to some extent on the other theories of politics in the wild.
Right. Well, with your left hand you work on evangelizing the principle of charity and on getting those conflict-types to notice that positive sum interactions are possible, and to get mistake-type people to help you with this evangelism type work… and with your left hand you dig a bomb shelter, keep a list of people you think are substantially more conflict-type than you (and keep them away from things like your values, while you keep a close eye on which of their tactics are so obviously paying off that you can’t afford not to play along).
It’s an ecology. If it pays to be conflict type, we’re going to see more of that.
I think you are getting quite close to re inventing the Overton window there.
But this ecology is more like a prisoner’s dilemma – positive sums when all sides cooperate (choose mistake), advantage to conflict types when they do their thing and the other side is trying to cooperate ineffectively. But when both defect (choose conflict) it’s more of a gamble as to who has the bigger bomb (shelter) and what the costs/spoils ratio is, so maybe not prisoner’s dilemma.
But if my analog holds, then we’d expect hybrid types like tit-for-tatters to dominate.
Also, the ecology is multi-layered. Each person is part of a group that chooses conflict or debate with other groups; for the group to be stronger and survive conflicts, all members need to be strict debaters within the group. So now you’d expect most interactions in the ecology to be mistake-type.
“But if my analog holds, then we’d expect hybrid types like tit-for-tatters to dominate.”
The problem is that it’s a tragedy of the commons, not prisoner’s dilemma. It’s very difficult for one agent, or even a small collection of cooperating agents, to shift the behavior of the entire population.
There’s no commons here, no pool of shared resources that you’re burning, only the strategies of other players. Scaling and coordination problems make it more complicated than a pure PD, but I think PD is still a better analogy — at least for the case where you’re only trying to determine your own strategy — than the tragedy of the commons.
Tit-for-tat does seem like a generally good approach.
It’s multipolar, though, and it only takes one Defectbot to throw all the tit-for-tatters into full Defect mode.
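The dynamic in this subthread is easy to sketch in an iterated prisoner’s dilemma. A minimal simulation (the payoff values are the conventional PD numbers, chosen here as an illustration, not taken from the thread): two tit-for-tatters cooperate indefinitely, while a single unconditional defector drags a tit-for-tatter into permanent mutual defection after one exploited round.

```python
# Standard PD payoffs: (row player, column player) for moves (Cooperate, Defect).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_last):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if opponent_last in (None, 'C') else 'D'

def defect_bot(opponent_last):
    """Always defect, regardless of history."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Play an iterated PD; each strategy only sees the opponent's last move."""
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

# Two tit-for-tatters cooperate every round: 3 points each per round.
print(play(tit_for_tat, tit_for_tat))  # (30, 30)
# Against a DefectBot, tit-for-tat is exploited once, then defects forever.
print(play(tit_for_tat, defect_bot))   # (9, 14)
```

This is the “one Defectbot” point in miniature: tit-for-tat never recovers cooperation against an unconditional defector, so in a mixed population a few such agents can lock their neighbors into the Defect equilibrium.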
Hey look, it’s Bill the Galactic Hero (sorry, you’re using two left hands, so I hadda…)
That left hand is doing a lot of work!
But what if you don’t have two left hands?
(Or what if you made a typo and intended to reference a right hand?)
Yep. Shoulda said “right” in one of those two places where I wrote “left.” Glad you all rightly pointed that out. Otherwise, so many people would have been left directionless.
Then you have a map that doesn’t match the territory.
Care to justify that statement?
The knives have been out for a good hundred years at least. I do feel there was a marked shift towards conflict theory in 2014-2016, but I hope it will turn out to be temporary. That is not to say that conflict theory is wrong, but it would be bad if it became dominant.
You mean we have had a hundred years of conflict theory, or left-driven conflict theory. Scott nods towards right-driven conflict theory as well. The original reactionaries had a theory that the populace should be prevented from overturning the established order.
And conflict, if not conflict theory, has been around forever. How else did we end up with adversarial political systems, where opposing parties face each other in debating chambers like armies?
Time? Maybe democracies naturally merge into two parties, and those parties become more and more polarised, and people who use nasty tactics outcompete those who are honest and charitable. It could just be good old Moloch up to his tricks again.
Please read a few books on 19th-20th century European history, y’all.
No idea, but I definitely notice and am worried by all the knife sharpening too. :-/
Without having read further, and realizing that it doesn’t contradict what you say, I want to note here for the record that I first learnt about public choice theory from a Marxist. (I mean some kind of Old Left holdout, a Stalinist or Trotskyist, I never figured out exactly where he lay.) I have some books on it that I got from him and haven’t read. (He was also into linear programming, by the way.)
I mentioned this in another comment, but Old Left are essentially Mistake theorists internally, and Conflict Theorists externally. This is mostly because when the far Left attempts to Mistake theory the government (“Hey King Louis, you know what would really help the average person do better in our society? If we took the means of production from the rich and distributed it to them so they had a collective stake in society and production.”) they are quickly taught the nature of Conflict Theory.
I never thought about this consciously and I think it is an *enormously* useful concept.
Like, easily top 5 among the posts of this blog in terms of making me go “So much makes sense now that didn’t 30 minutes ago.” And that is not light praise.
FWIW, I am a dyed-in-the-wool mistake theorist (I echo Lecter’s suspicion that this blog’s regular readers will be mostly the same). I believe most of my family is as well (my parents are both veterinarians, which I doubt is a coincidence).
I am in the process of sending this to about eight people, most with some personalized variation of “Remember [that half-articulated idea I tried to explain last week/that fight we had a month ago/that “why is the world the way it is” conversation from some point last year]? Reading this made it click for the first time.”
Ya done good, Scott.
“FWIW, I am a dyed-in-the-wool mistake theorist”
I think there’s an important distinction between “mistake theorist” and “never thought about it and assumed mistake theory was the only game in town”.
I would urge people to think about which one they actually are. I doubt doing so will change many people’s minds but it might be edifying.
I have a friend who is clearly conflict, and I realize now I was dimly aware of this axis, and that I’ve been trying to convince him in arguments using the mistake approach for years until I gave up because I realized he just resets. It’s useful to have this crystalized in such a concise form. Of course my first impulse is to just explain this axis to him (We are otherwise politically on the same side but frequently disagree, now it’s clear why) but of course that’s just not going to work.
I guess it’s just abortions for some and little american flags for others?
Do you realize that you also have things to learn from him?
Hmm, I had a long reflection about that and… in many aspects of life I would ask for his counsel, but politically he’s as naive as a child. This may be a prejudice on my part, but it’s a very strong one I cannot consciously break. I’d as soon take medical advice from Deepak Chopra
You know, I’d bet if I asked your acquaintance, he’d say that you are smart in many ways, but politically as naive as a child, and he’d as soon take medical advice from Doctor Who. 🙂
> “never thought about it and assumed mistake theory was the only game in town”.
I guess you could call it “mistake-theorist-by-default”.
I think you can be one way on some issues and the other on some issues.
For example I am definitely in the “Conflict Theorist” camp on Gun Control, but I am a Mistake Theorist on violence abatement and crime reduction.
I never thought about it until I read your 2018 survey, and the questions about ‘political disagreement’ – whose offered answers largely seemed to be variants of ‘who is making the mistake, and why?’. I couldn’t answer; the choices seemed to _presuppose_ a mistake-theorist view of the world. I think there was also an option to say the opponents are evil, but the very fact that this was the main counter-option to ‘mistake’ seemed to reinforce my complaint; only a mistake-theorist would see that as covering the ‘non-mistake’ options.
I wouldn’t describe myself at all as a conflict-theory person, based on your description.
But I’m most certainly not a mistake-theorist which, as you describe it, seems factually wrong and (to me) somewhat morally repugnant. So I don’t really think the dichotomy is really right or even a very useful tool.
I’m probably failing the ITT here, but:
If you can, would you mind elaborating on the ‘morally repugnant’ part? I can come up with stories for why you might feel that way, but none of them have the ring of truth.
Maybe that was a tad too strong, but it certainly makes me very uncomfortable. As I read Scott – the analogies about finding the best treatment for society’s disease, the “best policy”, “saving the world”, everyone wanting “a good economy” and only differing on the means … and then the actual definitions of soft and hard mistake-theorists, one thing jumps out:
I do not see _anything_ he says, not one word, suggesting mistake theorists acknowledge different utility functions and values. But there’s no best policy, best economy, and ‘saving the world’ means (if anything) as many things as there are people. For most interesting societal questions, the (singular) truth is not out there, and IMO you are making a mistake (!) if you take it as axiomatic that it is.
So mistake theorists are either wrong(*), or – and this is the creepy part – they think there _is_ a right set of Values and people who disagree have the wrong ones (but we can solve this with the right education I suppose). I’d not want to live in a society where this is conventional wisdom.
(*) We could really soften things a lot. Something like: political disagreement is generally really messy, but some people have a tendency to attribute a somewhat larger role to disagreements over matters of objective fact (mistakes) than others do, and this tendency influences their politics somewhat in such-and-such ways. But now (a) it’s a bit boring, and (b) even if I had somewhat more of this tendency than average, nothing but trouble is gained by saying (as people are here) “I’m a mistake theorist.”
Thanks, this helps a lot. I get where you’re coming from now.
I think I was reading it with the word “best” in heavy, not-actually-visible quotation marks. Which makes it a lot less creepy, but may not have been the most accurate reading.
I’ve also got a pretty strong allergy to people trying to impose their personal values on others, especially by force. This isn’t necessarily consistent, since it involves trying to impose my values about other people trying to impose their values, but it doesn’t mesh well with the Knight Templar memes.
Metaethically I’m somewhere in the emotivist/error theorist neighborhood, which a lot of people (understandably) find creepy, but probably for the opposite reasons as the ones you express. I’m not sure how much it influences my object-level ethical intuitions, though, which are mostly vanilla enlightenment-liberal. I like consent a lot, and tend to gravitate toward utilitarianism in dubious situations out of a feeling that it produces less horrifying failure modes (although a lot of people would vehemently disagree with this, and my own feelings about it are far from resolved.)
I arguably have a bit of a blind-spot where Scott’s concerned- finding this blog was kind of a turning point for me, so I have enough good feeling and gratitude associated with him that it’s harder to see him as a sinister Brave-New-World type, even though when I slow down I can kind of see the parallels. I think he genuinely means well, and my primate brain attaches enormous significance to this fact before I really get a chance to weigh in.
Thanks again for responding- I was genuinely puzzled, and now I’m much less so, and that’s always a thing to celebrate.
> I think I was reading it with the word “best” in heavy, not-actually-visible quotation marks. Which makes it a lot less creepy, but may not have been the most accurate reading.
Thanks for engaging, and for your thoughtful comments. I see that you can fudge the reading of ‘best’, but to the extent you allow some slop, you weaken the coherence of the mistake-oriented point of view. If Scott had made some noticeable nod, any at all, toward the value-difference issue, I’d find the ‘mistake-theory’ view much less interesting, but also less objectionable, and would not have commented. But he didn’t, and the “there is a best”, “there is a truth” assumption simply pervades his writing on this; IMO it stretches the principle of charity (and makes nonsense of his position) to suppose he isn’t really thinking that.
Scott gives one very concrete example, which has the virtue of not being a lightning rod for high emotion. He suggests that, under MT, people might differ about their preferred interest-rate policy, but they all want a ‘good economy’, so the only question is who is actually getting it right. But suppose, at the next financial crisis, the entire world takes Paul Krugman’s preferences on this as gospel; and let’s go further and say Krugman is actually perfectly correct as to what will cause what, and how to keep the economy as good (as he sees it) as can be done. So what is left, apart from evil people who only seek conflict?
Well, there will be (e.g.) those who think transferring spending power from retirees with a hard-earned nest egg (and, appropriately for their age, conservative investments) to the young and poorer (and freer-spending, and bolder)(*) is morally questionable, and who would trade off a bit of society-wide ‘economic goodness by any of the standard measures’ to address this concern, the (in their view) unfairness, and the (in their view) long-term incentive problems.
So no, interest rate policy differences are NOT going to be resolved – maybe lessened, but no more – by a mistake-oriented analysis. That’s simply not all that is going on.
(*) Which is part of what the Keynesian low-interest-rate/higher-inflation policy does, and which certainly plays a part (I’d guess a large part, but that’s very debatable) in why this policy ‘works’ in the first place.
I think I agree with you on this. I wouldn’t say I’m either type of theorist. I don’t think morality is real except in a consensus sense, so I think a lot of human conflict and argument aims not to discover what’s moral or what’s the “correct” course of action, but to define what’s moral by winning – whether by winning an argument or a war. I may have particular preferences here and only be willing to be in a coalition with certain people, but I don’t think that’s quite the same viewpoint as thinking those I disagree with are evil.
Of course, mistakes will be made left and right almost regardless of my moral viewpoint, but I’m not sure those are a more significant cause of problems than fundamental disagreements with no empirical grounding.
On my first reading of this article, I thought I was a mistake theorist. After rereading the article and most of the comments, I’m beginning to think I’m probably some sort of weak conflict theorist. I believe that most issues are based on mistakes, not conflict, and that structural problems, rather than a failure to “just elect the right people”, are at the root of many political issues. On the other hand, I think there are concrete differences between individuals (I’m not sure whether to term them preferences or aesthetics or core values) that are not compatible with each other. If my preference is a society like Star Wars, where everyone walks around with lightsabers and blasters, how can that coexist with someone whose preference is an unarmed society?
In one of the Open Threads a week or two back, someone asked about how the “politically correct” Overton Window thought of Libertarians (harmless or threat?), and my input into the convo was that, regardless of political leanings, people who view politics as [what I called “technocratic” but which Scott is calling “mistake”] tend to handle disagreement much better than people who view politics as [what I called “existential” but which Scott is calling “conflict”], who may consider dissent to be a mark of The Enemy.
I also think this might be my new favorite post, not because it’s an amazing new insight (other posts have felt more profound), but because I think it’s a good summation of why I read here: to avoid the conflict-perspective in favor of the mistake-perspective.
In other news, I think that, after being a reader for a year+, I’m finally becoming a commenter. Huh.
It took me about that long, too. There’s so much to absorb, it can be a little overwhelming, and I’m never sure I’m not just telling people things they already know (some of the commenters here are scary-smart). But so far it’s going really well.
I’m not qualified to speak for others, but I welcome your company.
I think some misunderstanding of Marxism is suggested by the fact that you attribute such a moralistic attitude to “conflict theorists”, one which sees class conflicts in terms of villainous capitalists behaving in immoral ways towards the noble and heroic working class. No doubt some socialists, even many who call themselves Marxists, do frame things in such moral terms, but Marx himself was strongly opposed to this sort of moralism, and frequently derided other socialists who framed class conflict in this way. See https://books.google.com/books?id=ieixAAAAQBAJ&lpg=PR1&pg=PA82 for more discussion of Marx’s rejection of moral arguments for socialism.
Marx’s attitude, as I understand it, was that ideology is heavily influenced by the “material conditions” of a society, so it’s mostly futile to try to change people’s ideological beliefs about the way society ought to work via moral exhortation; any actual mass change in attitudes will happen due mostly to changes in those material conditions. (One example is his prediction that ever-increasing automation would cause a “tendency of the rate of profit to fall”, eventually leading to a major crisis for capitalism – an idea I think makes some sense if you consider the limit case of self-replicating machines. If multiple competing sellers have such machines, market competition would be expected to drive the prices of the machines down to barely more than the cost of the materials and energy needed for them to replicate a new copy, so profits from selling them would become negligible.) In modern terms you could describe it as a view that people’s economic circumstances exert a strong selection pressure on which memes are most popular in the society… as Upton Sinclair put it, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.” Vivek Chibber had a good, if a little overlong, essay on this at https://catalyst-journal.com/vol1/no1/cultural-turn-vivek-chibber
Yes, this needs to be emphasized. The sort of conflict theory that Scott was describing is the sort of thing that I would expect to hear out of the mouth of some right-wing populist… especially the part about blaming the George Soroses of the world for problems. That sort of emotional, moralistic, paranoid scapegoating is very un-Marxist. (And if this sounds like a No-True-Scotsman defense, then I’ll admit that many self-styled Marxists will sometimes resort to this sort of thinking… but I still think we would benefit from having a vocabulary term for “unemotional, non-moralistic descriptions of inherent conflicts of interest within capitalism,” and if Marxism as it is practiced no longer fits that bill, then we need a new term.)
Likewise with the complaints about “corporate greed” that we heard coming out of Occupy Wall Street. That’s just cheap, unsophisticated populism. Marx would not chide capitalists for maximizing profit. That’s what they are supposed to do! If moral chiding could actually work, then Marx would have just proposed that instead. It would have been a hell of a lot simpler than proposing a social revolution to restructure the very fabric of daily life!
Likewise, Marx’s theory of exploitation is often misunderstood in a very vulgar way. Most importantly, Marx did not think that exploitation took place in the realm of exchange. When workers sell their labor-power to capitalists for wages, neither is systematically cheating the other. They each receive their due worth. (Sure, the occasional worker might be underpaid for her labor-power, but the occasional worker might also be overpaid for her labor-power).
For Marx, exploitation took place in the realm of production, and is a description of a purely objective phenomenon. Marx’s theory of exploitation was simply an assertion that workers, in the average case, add more to the final sale price of the commodity than they are paid. Now, one could still argue that this arrangement is right and proper, or wrong and improper, and certainly Marx argued that it was not in the interests of workers (that sort of descriptive claim about the interests of workers is the closest you’ll be able to come to locating normative judgments in Marx’s writings). Juridically, though, Marx was always careful to point out that neither workers nor capitalists were cheating each other.
Now, on the other hand, if workers were selling the commodities that they produced, and capitalists systematically bought them for less than their sale prices and then resold them for a profit, then there would be unequal exchange. And indeed, Marx notes that this is how very early primitive capital accumulation often happened in the late middle ages, and why (coincidentally, Jewish) pawn-brokers who preyed upon desperate artisans and peasants by buying low and selling high (while not even providing useful transportation services, as merchants-proper did) were so hated. But as a rule, under modern capitalism workers do not sell the commodities that they produce. They merely sell their ability to labor, and they are fully paid for that.
Nitpick: proposing such a revolution is eminently simple. Often fun. Not very difficult. If done correctly, not risky or emotionally taxing.
Figuring out how such a revolution should work, however, is a mindbogglingly complicated and difficult thing to do, and I do not envy the poor sap who takes it upon erself to do so. Particularly if one wants the revolution to actually happen, and particularly particularly if one cares about what happens afterward.
I’m not clear on exactly how much of this work Marx actually did- according to Scott’s reading of Singer, post-revolution planning wasn’t a big concern for him, but that’s obviously only one small part of the total work.
‘Exploitation’ in Marxism is a bait and switch- bait with ‘exploitation’ just meaning useful. Then switch to the connotations of ‘I’m exploited’ and ‘you’re just using me!’
Do you think Marx himself is guilty of this bait-and-switch, or are you just accusing some of his followers (perhaps those with a less intellectually sophisticated understanding of his ideas)? Marx would have defined exploitation in terms of the amount of “surplus value” employers are getting from their employees – at http://www.marxistsfr.org/archive/deville/1883/peoples-marx/ch09.htm you can find the quote “The rate of surplus-labor is, consequently, the exact expression of the degree of exploitation of labor-power by capital, or of the laborer by the capitalist”. (For the purposes of discussing whether Marx was guilty of any bait-and-switch, we can leave aside the question of whether the labor theory of value and the notion of surplus value actually make sense – I’m not sure they do, but if anyone’s interested I talked a little about my best attempt to make sense of them in terms of an equilibrium model at https://www.reddit.com/r/neoliberal/comments/7klon1/whats_a_legitimate_argument_against_the_labour/drfvix0/ .) When I was reading up on Marx’s dismissal of moralistic arguments against capitalism, one source I found was his “notes on Adolph Wagner” at https://www.marxists.org/archive/marx/works/1881/01/wagner.htm , where Wagner tried to do exactly the sort of bait-and-switch you describe – switching from Marx’s technical notion of capitalists extracting surplus value from workers to a moralistic notion that the capitalists were “robbing” the workers of surplus value. Marx was completely dismissive of this tack, denying the validity of any ahistorical moral sense in which the capitalist was doing anything wrong or unjust:
and a little later:
What would Marx’s prescription be then? Let capitalism run its course and then……..
The issue with what I will call a “soft” reading of Marxism is that it is mostly just describing industrial capitalism; it isn’t advocating for anything (except perhaps legal recognition of unions) once people are emancipated from the land. It isn’t a moral, political, or economic philosophy, because it advocates nothing.
Bertrand Russell said of Marx:
(source is online, but I also have a physical copy somewhere).
I had to read him for class last year, and this really matched my impression, even after correcting for hindsight bias. Marx was a little less vitriolic than I’d imagined, but considerably more prone to stupid mistakes we’d just sort of glide over without discussing. And the mistakes all seemed to point one way.
I haven’t studied Marx enough to have a lot of confidence that I’m not missing something important. With that disclaimer: when Marx says he’s not trying to appeal to moral feelings, my current inclination is to treat this statement as bearing approximately the same resemblance to reality as Donald Trump saying “I love Mexicans!”
Bertrand Russell is selling Marx short here. Marx arrived at his theory of value through a lifetime of study of classical political economy. Marx’s theory directly descends from Adam Smith’s labor theory of value, especially as it was developed further by David Ricardo. It was not some populist slogan thought up on the cheap.
You are viewing Marx through a vulgar lens if that is all you got out of Marx’s quote above.
Marx never hesitated to congratulate capitalism on its achievements. Yes, capitalism is historically, contingently useful for building up surplus value and enhancing the productivity of labor. That’s not bait and switch. It’s just something a little more complicated than “boo! capitalism bad! socialism good!” or whatever else Marxist populists are arguing nowadays.
One of the biggest questions among Marxists has always been, “How long do we have to let capitalism run its course?” The Mensheviks, for example, thought that capitalism needed MUCH more time in Russia. Although I don’t hear it that often, one could make a Marxist case for allowing capitalism to run its course for quite a bit longer. Did it not give us The Internet? What other treasures might this golden goose provide?
If capitalism can continue to deliver widely-shared increases in living standards, then those in favor of capitalism have nothing to fear. It will be an immortal social system. Marx’s prime concern about capitalism was not that he had a bleeding heart for workers. It was Marx’s perception that capitalism constantly threw avoidable barriers in front of itself, that it fettered its own development, and that a time would come when capitalism would fall far short of taking advantage of the full physical limit of production and bringing people the full maximum limit of human freedom possible at a given technological level. Means of production, workers, and innovations would sit idle and un-implemented due to unprofitability.
Yes, machines, human laborers, and innovations would be potentially useful but unprofitable. That is because Marx argued that it was not the subjective usefulness of commodities that determined their prices, but instead their socially-necessary labor times. (Note: this is a purely descriptive claim). So, for example, a benevolent AGI would be highly useful and possibly profitable in a physical sense of producing useful wealth but monetarily unprofitable because all prices would drop to zero.
Marx never denied that machines produce use-values just as much, if not more than, humans do. What makes humans unique is that they produce value—i.e. their labor gives useful things prices. No human labor, no sale price.
Um, the person you’re quoting isn’t me, but I was the one who posted the Russell quotes. Just so we’re clear.
My understanding of all this is incomplete, and I have a firm policy of not pretending to understand things I couldn’t explain to someone like my mother (e.g. fairly smart, no major cultural or language barriers, etc.).
Also, Russell wasn’t perfect. So I may be merely repeating bad arguments. But my model of him doesn’t predict him pulling things out of thin air or Eulering people, so my prior is that he probably has a point.
Russell also says
This doesn’t read like a vacuous criticism to me, at a first glance. It has big words in it, and I know what some of them mean and I can see how they might be relevant. I think I’ve got a sort of general picture of what he’s saying, although IANAE and if David Friedman or someone wants to clarify, I’d appreciate it.
@Do you think Marx himself is guilty of this bait-and-switch-
Yes, habitually. ‘State of Siege’ is a brilliant and accurate essay, and anyone who’s read the 42 volumes of Marx-Engels letters and checked out all the allusions is intelligent and educated, but Marx did like this bait and switch.
No, it did not give us the Internet, which was a product of the military/academic/industrial research complex. Capitalism gave us Compuserve and AOL.
I can see what Marx is getting at if he thinks that machinery supplanting workers (a rise in what Marx called the “organic composition of capital”) will lead to a decline in profit, but talking about “value” is a really weird way of putting it. The nearest I can get to making sense of Marx is that if workers do less and less work, then they wouldn’t be paid enough to buy the products being produced, increasingly by “dead labor” (or capital), so capitalism would reach a limit and/or crash catastrophically.
The problem here is that this has nothing to do with labor giving things prices. What’s going on here is the transformation of wages into prices, and purchasing power can be totally disconnected from wages as a product of labor, as we all know living in Western states with social programs and welfare handouts.
The problem with Marx is that while he might not have been making moral claims inside his economic works like Das Kapital, he clearly was outside of them, and it’s pretty clear that Marx was a political activist. His position didn’t end at simply saying “wage labor is ultimately self-detonating”, but added “…and therefore communism”. It’s the “therefore communism” bit that is unjustified, and his belief in a post-scarcity, directly democratic system in which private property is abolished resembles magical thinking. The two can’t easily be untangled, because Marx saw communism as the necessarily inevitable outcome of the economic process he described (besides the “common ruin of the contending classes”), when it’s trivial to show, by converting Marx’s statement about labor into one about wages, that other outcomes are possible. Did Marx even consider the possibility of nationalized wages, AKA a basic income, for example? Capitalism could easily keep the profit cycle going under such a scenario. The abolition of private property and the market isn’t a foregone conclusion of the end of human labor.
That’s an issue Marx considered as a source of periodic capitalist crises, but it isn’t really his main argument about the tendency of the rate of profit to fall due to automation, so your suggestion of a basic income wouldn’t really solve the problem of decreasing profits according to his analysis. I think the main weakness of his argument is that he assumes capitalist profits are always based on capitalists selling physical goods made using means of production that they own, whereas in the modern world a lot of companies can use intellectual property laws to make money even while they outsource production to other companies, or produce informational goods like software.
But if you assume a hypothetical economy with no intellectual property laws, I think it does make sense to say profits would tend to approach zero as the amount of human labor went to zero too, even if some policy like a basic income allowed there to be plenty of buyers for mass-produced goods. Marx would argue this in terms of the labor theory of value, which I think is most sensibly seen as a statement about what the prices would be in the equilibrium state of an idealized economy where any worker can train to do any production job, and workers always try to find the jobs with the highest hourly rate (this of course doesn’t describe reality exactly, but it may have use as an approximation to reality, much like the simplified equilibrium models in neoclassical economics). But I think there’s a good argument for automation decreasing profits which doesn’t depend on the labor theory of value, just based on considering the limit case where 100% of the physical labor in mass production can be done by machines. In this case, some sufficiently large set of machines and parts would be self-replicating (if provided the necessary raw materials and energy)–every machine or part can be replicated using other machines/parts in the set.
So consider a compact self-replicating machine, like a 3D printer with robot arms that can print and assemble all the parts needed to make a duplicate of itself, given raw materials and energy. In this case, if you own one of these, the production cost for you to get another is no more than the raw materials and energy that go into making one. This means that if many different people own these 3D printers and are trying to sell them, and buyers know one seller’s 3D printer is as good as any other’s and so just opt for whichever one they see on the market with the lowest price, then the prices on these 3D printers will soon get driven down to a price only negligibly different from the cost of raw materials and energy that go into each one–profits for the sellers will be negligible, in other words.
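To make the undercutting dynamic concrete, here is a toy sketch in Python – all numbers are invented for illustration, and the fixed one-dollar undercut step is an arbitrary assumption – showing the market price of such a printer being driven down toward its raw-materials-and-energy cost:

```python
# Toy model of competitive undercutting (illustrative numbers only).
MARGINAL_COST = 100.0  # raw materials + energy for a printer to replicate itself
UNDERCUT = 1.0         # assumed size of each competitor's price cut

price = 500.0          # arbitrary starting market price
while price - UNDERCUT > MARGINAL_COST:
    price -= UNDERCUT  # some seller undercuts the current lowest price

print(price)  # 101.0 -- just above marginal cost; per-unit profit is negligible
```

A smaller undercut step just moves the resting price correspondingly closer to `MARGINAL_COST`; the point is only that nothing stops the descent until the price nearly meets the cost of replication.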
Likewise, suppose a seller wants to sell some other good that can be made by such a 3D printer, a “widget” in econo-speak. If a single 3D printer can turn out 1000 widgets before wearing out and needing to be replaced, then the owner can turn out endless widgets with the production cost per widget being only (cost of raw materials and energy going into one widget) + (cost of raw materials and energy needed to make a new 3D printer, divided by 1000). And again, if many sellers are trying to sell these widgets–no artificial monopolies created by intellectual property laws–the market price of a widget would tend to be driven down to this amount, and so profits would become negligible even if plenty of people were buying them.
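The per-widget arithmetic in that paragraph can be spelled out as a one-line function – the dollar figures below are hypothetical, picked only to show the shape of the calculation:

```python
def widget_production_cost(materials_energy_per_widget: float,
                           printer_replication_cost: float,
                           widgets_per_printer_lifetime: int) -> float:
    """Cost per widget = direct inputs + amortized cost of replacing the printer."""
    return (materials_energy_per_widget
            + printer_replication_cost / widgets_per_printer_lifetime)

# Hypothetical numbers: $2 of raw materials/energy per widget, and $50 of raw
# materials/energy to replicate a printer that wears out after 1000 widgets.
cost = widget_production_cost(2.0, 50.0, 1000)
print(cost)  # 2.05
```

Under the comment's competition assumption, this amortized figure is the floor toward which the widget's market price gets driven.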
The same basic argument should apply even if the self-replicating set of machines is much less compact, like a huge factory complex. As long as there is genuine market competition, with multiple capitalist firms owning such self-replicating factory complexes, then for generic goods whose profits today don’t depend on intellectual property – forks, say – the profits should get driven down to next to nothing, because the price for such goods is driven down to raw materials and energy plus some extra cost (probably much smaller) for occasional replication of the machines used in production as older ones need to be replaced.
Even if the possibility of making money from intellectual property allows a way out of the conclusion that automation would destroy capitalist profits completely, I think this at least suggests it’s likely the system is in for something like a phase transition, as there will no longer be situations where it’s more profitable for companies to own the production facilities as opposed to just outsourcing all the production (like what Apple does with hiring FoxConn to produce all its iPhones). In this case it seems plausible to me that national or local governments would use some tax money to create their own self-replicating production facilities (or buy up existing private ones for cheap), the cost being much lower than today given that all the machines would cost little more than the raw materials and energy that went into them, which could themselves be cheaper if automated mining equipment, solar panels etc. were also much cheaper. In this case, private businesses might increasingly go in the direction of just outsourcing all their production to such publicly-owned production facilities. So even if this wouldn’t involve any revolutionary overthrow of capitalism, the case that automation will lead to decreasing profits from production and that this is likely to culminate in some kind of public ownership of the “means of production” still seems pretty good to me.
@Forward Synthesis, I think you are attributing to Marx what many Marxists would call an “underconsumptionist” critique of capitalism—the argument that capitalism suffers dysfunctions because workers do not earn enough to buy back what they produce.
While that line of thinking has always had a respectable following in Marxist circles, it is by no means the only interpretation of crises, or even the most popular. (Nor is it my personally favored interpretation). For more information on this topic, I suggest starting here with the Critique of Crisis Theory blog.
I’ve considered these things before at length. I guess it depends on what you count as “capitalism”. Small profits are still profits after all, even if they are more widely spread. The cost of raw materials and energy are still going to exist as you note. We might see a much more perfect market with fewer monopolies. Another cost to note along with raw materials and energy is land, because if I have more land than you, then I can make more stuff, and bigger stuff (and parts for other things) than you can because I have more storage space.
Also, it seems unlikely to me that this would lead to the abolition of private property, since private property is a useful form of state-granted title that people would especially want to apply to their means of production if things like 3D printers and personal robots become really advanced. That seems to involve everyone becoming petit bourgeois and/or artisans more than it does everyone becoming proletarian in a fully socialized system. It brings to mind the notion of distributism more than socialism, since it would be necessary to retain private property title under such a scenario.
Now other systems have had private property besides capitalism (every system between it and “primitive communism” according to Marx), but I think so long as the possibility for even small profits remains there will still be capitalism. It’s just that superprofits will be impossible. Another factor to add on besides land and space, is the rarity of certain materials. If some company owns an asteroid mining facility then they control a large supply of metals that are rarer on Earth. The 3D printers will still need processed powder to produce anything.
Sure, but “some kind of public ownership of the means of production” isn’t sufficient to be communism if there’s still the state and private property rights. We have “some kind of public ownership of the means of production” now providing you consider control of some services by representative democratic states to be “public ownership” in any meaningful sense. One of my philosophical problems with the idea of communism (vs the economic Marxist analysis which makes sense) is that I find the idea of “public ownership” to be a lie at best. I believe in an iron law of hierarchy. If you have the state control everything, democracy isn’t enough to prevent the fixation of some kind of bureaucratic class, and then you just have something like state capitalism.
So I think there’s lots of room for things that aren’t at all a stateless (impossible), classless (meaningless), moneyless (money would still be useful even under full automation), private-property-less (private property would be even more useful under full automation, since it applies to more people) society of global scale (national, cultural, and numeric prejudices would still exist), even if capitalism as we understand it today comes to an end. Marx was right about there being contradictions in wage labor, but I don’t think these are insurmountable for capitalist profit (even if we end up with smaller profits shared by far more “businesses”), and even if they are, it doesn’t follow that the end of capitalism means communism, since profit could fall while the state and private property are retained for other reasons.
So I can perhaps be convinced of Marxist economics, and perhaps that the end of capitalism is possible, but I don’t think communism is a coherent or possible idea itself, and I don’t think the state and its private property laws are going to stop existing even if they change form.
I’ll check that link out. I believe the underconsumptionist theory of crisis because I came to it independently. It follows from the end of human labor that if you don’t have some kind of government welfare system to keep money cycling, the economy would crash, because workers would have no wages to spend. Of course, since basic income is already being widely discussed by the powers that be, I think that issue is already solved. There would have to be a different reason for a titanic ending to capitalism (vs a slow whittling away of profits due to a greater number of producers and costs going down to raw material, energy, and land costs).
This is because “Marxism” without the revolution is just unsophisticated capitalism, stripping individual wants down to a base level and reasoning from there.
The thing is, it makes no sense for privileged people to be leftist amoral conflict theorists–if there’s no right or wrong, just people fighting for their own interests, then you should fight to defend your privilege.
Another aspect of Marxism is that class interests are more significant than anything else. So even if you have white privilege, male privilege, or whatever (even class privilege in an SES sense of class rather than in a Marxist sense), then as long as you make your living from selling your labour power rather than from capital that you own, then it is in your interest to make common cause with all of the more oppressed people in your class and be a leftist.
Marxist entrepreneurs, on the other hand, now those are useful idiots (ironically suffering from a false consciousness).
A privileged person might not define self-interest solely in terms of the amount of material goods they can amass, though – for example, they might value ideas and art, and thus prefer a situation where far more people were free to develop their talents in these areas. Also, even if a conflict theorist is “amoral” in the sense of not seeing the conflicts in moral terms where one side is “bad” and the other “good”, they may still be moral in other ways: consider the utilitarian desire to maximize everyone’s happiness (which can be applied to beings we don’t see as morally blameworthy or praiseworthy, like animals – would anyone deny that there is some genuine conflict of interest between predators and prey?), or a Buddhist who sees all harmful actions as due to “ignorance” but still feels compassion for all sentient beings. I also read an interesting piece at https://www.marxists.org/reference/subject/philosophy/works/us/brenkert.htm which argues that although Marx rejected arguments for socialism based on moral duties or other rules, he did have a kind of virtue ethics in which certain human qualities tend to be seen as good ones in all eras (this could include not just moral qualities such as kindness but also non-moral types of ‘virtue’ like creativity or thoughtfulness), and he thought that socialism would be most conducive to developing these qualities in people.
I have not read Marx, so I can’t vouch for that myself (and I’ll disclose I have a strong bias against him); but I’ll note this seems to be a common problem of every intellectual tradition. The Founders and the High Philosophers are often more careful and better thinkers than the Followers who spread the word and the hacks who profit from it. Since the tradition’s thought is also made by these less careful thinkers, the whole sect drifts away from the founder’s ideal, usually for the worse.
As I read somewhere else, this happens in three generations:
First generation aka The Founders: “Though it may be counterintuitive, our studies show that one cannot talk meaningfully of X without taking Y into consideration, and we propose the main driver of Y is Z.”
Second generation aka The Preachers: “The founders proved that Z causes Y and Y causes X.”
Third generation aka The Hacks: “The Preachers said that X doesn’t matter because Z causes Y.”
Fuck that shit. We need to crush those fuckers who believe in conflict theory since they are the enemy.
Ok, I said it that way because I thought it was funny, but in a very real way I do believe that conflict theory is like a conspiracy theory: a self-reinforcing worldview resistant to any debate or evidence, so no matter how well-intentioned its adherents are, the only option left is to crush them.
What I’ve found works better than “crush them” is to admit of layers in most of our thinking. Libertarians, existentialists, Nietzsche–they’re all a little bit right that there’s something *personal* about power, and a way in which the single, simple thing that can “go right” is for a person to start using their personal power to their own benefit, which is something very similar to “develop a self.” Small children in a restaurant don’t just defer to what their parents order for them–they don’t actually have preferences yet, or an ability to be disappointed if they don’t get what they want. Having preferences, and a strong will to satisfy them, is an advancement… and will create conflict. Of course the shepherd will try to convince the sheep that what’s in his interest is also in theirs… but it won’t always be true. But parents do not generally find it necessary to crush their children, nor shepherds to crush sheep. We can *mostly* get along.
I think it depends on whether we’re talking about people who are just, like, intuitively more conflict-theoretic, versus those who have seriously gone all-in on it and accept it entirely. The former you can hope to teach a better way. (This also applies perhaps to those who espouse it but haven’t really accepted it.)
The true hardcore conflict-theorists though… I mean, you can’t really have useful debates with people who don’t believe in useful debate… or honesty… or even truth (a lot of these conflict theorists are anti-realists). So, y’know…
But, it’s worth noting that there’s more to “conflict theory” than just a descriptive theory that disagreements are conflicts. Which is after all true in some cases — not just in the ironic case of dealing with conflict theorists, but also, as has been mentioned, things like copyright law. But that doesn’t mean one should be a “conflict theorist” about such cases! Scott breaks it down here as if it’s fundamentally this one disagreement between the two points of view, but really there’s a number of disagreements — we’re looking at two clusters here — and one of those is the fact that conflict theorists just don’t really worry about mistakes, like, at all. They don’t seem to consider it important to put systems in place to keep one aligned with reality; they seem to think that if they win the conflict the right things will happen automatically.
So even where the “this disagreement is a conflict” aspect is true, you still don’t want to be a conflict theorist, it still contains a lot of wrongness. Which is to say, we can fight — but we can fight like mistake theorists.
Agree that there’s a spectrum, and not everyone is ‘all in’.
I’ve noticed that this is one of the heuristics my lizard brain tends to use to decide if it considers someone an ally or not. I think it’s one of those divisions between people that actually accounts for a lot of the observed differences in outlook.
(Another one is whether people are trying to ‘push in a direction’ or ‘hit a target’- which I think is very tied to whether they’re interested in binding their outlook to the truth. I have you to thank for this conceptualization, which I’ve introduced to various family members, so it seemed as good a time as any to say thanks.)
Oh, well, glad you’ve found it a helpful distinction! 🙂
Or Guerrilla theorists, who treat politics as an armed struggle by individuals against the state, forever fighting to determine whether the State should release its control. Sometimes there is no ‘us’, only ‘me, and possibly a few fellow travellers’.
So yah, I used the term “crush them” because it was amusing in light of the article, not because I actually want to murder them… merely to somehow use means other than rational persuasion to ensure they don’t get to make choices for polities I’m a member of.
You should read about Academic Choice Theory.
Help, I tried to go a meta-level up by making fun of this and writing about Blogger Choice Theory, but I just ended up turning into Robin Hanson. Now I’m sitting in an office in George Mason University in a 50-something year old man’s body and I have no idea how to get home.
This is especially concerning because Robin Hanson is currently in Davos.
Could be worse. Could be a giant centipede.
Is this a reference to something?
My first thoughts are the old Buddhist claim (possibly apocryphal?) that those who enjoy frightening others will be reborn as centipedes, and the litany of “It could be worse – at least we’re not (graphic nightmarish scenario)” in Fight Club.
The most salient difference between Scott and Robin Hanson is that reading Scott doesn’t make me want to slit my f@cking wrists. If everything is signalling, I desire to believe everything is signalling, but I honestly don’t know how to live in that world.
>I honestly don’t know how to live in that world.
We already are, and it’s doable. Actually I think this point is touched upon in the Elephant In The Brain book.
The Screwtape Letters.
I’m pretty sure I read this when it was first posted, but thanks for reminding me of it.
I think it’s a dangerous temptation to diagnose people who disagree as ignorant, as the Jacobite article does. The evidence that people are ignorant of the questions seems to be that they don’t focus on the particular subjects and buzzwords the author considers serious policy.
To make an analogy from the other direction, a fundamentalist Christian could diagnose others as ignorant because they don’t talk about the moral degradation of society and the undermining of traditional family values in any serious way, which are obviously the most important issues.
The underlying problem for both, I think, is failing to appreciate how deeply different the other side’s worldview is. Secular politicians aren’t ignorantly ignoring moral degradation; they genuinely believe it isn’t a real thing, or they consider it much less important than, say, the economy. The far left doesn’t ignore technical questions out of ignorance but thinks they are less important than changing other aspects of the system, chiefly who has power.
It doesn’t matter whether you know the cure for the patient’s disease if you don’t have the money for medicine, or a doctor to perform the operation. So, to stretch the analogy slightly: we are not dealing with one patient’s complex illness; we have thousands of patients dying from easily preventable diseases, and we don’t have basic antibiotics to save them. In that circumstance, getting the antibiotics as quickly as possible should be the priority.
Important correction from my experience as a friend of some very religious people:
“A fundamentalist Christian could diagnose others as ignorant because” they don’t know that God exists and all morality derives from him. It would be hard to treat anyone who denies something as basic as the existence of God as anything but ignorant.
Moral degradation can still be debated; the problem of our children’s souls being sent to hell is just straight outside the secular politician’s Overton window.
If we don’t know better than people who disagree with us, what exactly is our grounds for disagreeing with them?
One way to frame this might be:
Conflict theorist: The problem here is that evil people are causing problems.
Mistake theorist: The problem here is that the world is broken and too complicated to easily fix.
To which, one might respond: Is there any reason to think there’s only one problem here?
I’m not sure I believe in evil, but I believe in stupid, crazy, and incorrigible.
I think ‘evil’ isn’t really a useful term. At best, it’s severely nonspecific, and carries peak cultural baggage.
(‘Crazy’ has some of the same problems, but I think I know what you mean.)
Have you seen the Zetas and other Mexican drug cartels that skin children alive? Is that stupid, crazy, incorrigible, or something else?
Not having met any of them, it’s hard to be sure. I’d bet on ‘crazy’, which is admittedly not as specific as you might like.
Some measure of ‘stupid’ and ‘ignorant’ is also on the table- if one truly wishes, it’s not hard to reduce lack of empathy to these.
If you’re looking for an expression of disapproval, I’ll oblige: skinning children alive is a bad bad thing to do* according to my utility function.
*In most situations. As with anything else, we can probably think of exceptions, but in this case I don’t anticipate them carrying much practical relevance.
Is there a particular reason “evil” isn’t on your list?
Presumably because it’s not on mine.
The less snappy but more accurate way of putting it would be that “evil” is a descriptive class but not a prescriptive one. It makes sense to me to say that Pol Pot was evil because he had a third of his country killed; it does not make sense to say that Pol Pot had a third of his country killed because he was evil.
Well of course. Unless evil is a thing in itself rather than the absence of a thing (good), it makes no sense to say so-and-so did X because he’s evil. Rather we have to ask “What inspired the behavior that caused so much privation of good?”
I like this framing.
A counter-response would be that they are both phrasing the same problem differently. Something like – Evil(differently-valuing) people having power is too complicated complicated to easily fix, this is possibly the fault of said people having broken the world.
Meh, ended up too much conflicty instead of a 50-50 split…
Maybe the counter-response would be that since you can specify each of these problems in the terms of the other you need some additional evidence that they exist independently.
Because of the complexity of the world it is easy for evil people to gain power and hard for good people to be effective enough to retain power.
If you find someone in power, odds are they are some variety of small-e evil, but odds are also that replacing them won’t improve things on net.
(epistemic status: floating a theory)
I buy it.
Actually, it seems so obviously true that it kind of feels like I’ve always believed it.
Take out the “evil” and “good”: In order to get to the top levels of power, whether that’s political or corporate power, your overriding priority has to be the acquisition and preservation of power, because if it’s not, at some stage you’ll be swept aside by one of the people who made it their top priority. If you make it into the top ranks, whatever your goals and politics are in theory, you’re already compromised.
Achieving almost any difficult goal means that you tend to have to trade off other values for that one, make compromises, etc.
Reaching the top levels of political power in the US is a pretty demanding goal, so you can expect that the people who get there have had to compromise a lot of their other values to get there. And the specific nature of what you have to do to get to the top levels of political power in the US tends toward requiring you to be willing to compromise a lot of your personal principles, align yourself with repugnant people whose support you need, kiss up to unappealing potential donors, take positions you don’t really agree with because they’re popular among the voters you need, promise support for programs you think are probably a bad idea to get votes, etc. You can refrain from some of these things, some of the time and still be successful. But I think it’s quite hard to end up in the white house or in a leadership position in Congress if you aren’t willing to do those things most of the time.
This doesn’t require evil, exactly (though politicians often do pretty evil things to get or keep power), but it certainly rewards being someone with pretty flexible principles and morality.
That’s begging the question.
The point of setting up a system of governance is to put in selective pressures that hamper the rise of amoral careerists. Saying that amoral careerists will rise to the top because they can better leverage Goodhart’s law is assuming that it’s an unsolvable problem.
Sounds like the chapter in The Road to Serfdom on “Why The Worst Get on Top.”
The point is to design your government so the worst don’t get to the top – that’s Carlyle’s entire formulation of governance.
It’s Marx’s also but Marx claimed that there’s an easy way to identify the men who are not the worst – they’re the intellectuals who speak for the workers. This has been empirically disproven.
I just want to take a moment to say that “people having power is too complicated complicated to easily fix” was a lot easier for me to notice than “Paris in the the spring”.
First off, I think Scott’s post is a good post that seems to make good points and takes steps on the path to true knowledge.
But, as I read it I kept thinking about the Indian wise men describing the elephant. Or rather, I pictured two amateur carpenters arguing about how wood is joined together, one being a “nail” theorist and the other a “screw” theorist.
Now obviously, you build using nails or screws, but structures out in the real world are not built exclusively using one or the other.
This seems to me to be a recurring failure of thinking on Scott’s part, honestly. The tendency to naturally think in binary terms (even while knowing this is incorrect).
That’s not a fair framing, of course. A fair framing would be that conflict theorists believe evil people cause problems and mistake theorists believe that dumb and crazy people cause problems.
Those are both conflict theory positions as I read it.
No, as I read it, that’s precisely not how Scott is describing it.
I’m naturally inclined to think that there are elites who want to hold on to their power *and* that everyone (elites and otherwise) makes mistakes. Easy mistake theory and easy conflict theory are pretty much incompatible, but why not both hard mistake theory and hard conflict theory?
I think that because mistake theory suggests the problem is coordination and the solution is compromise, while conflict theory suggests the problem is malice and the solution is winning, the two are made mutually exclusive in the case of any given actor.
Maybe I believe that Scott is wrong about psychiatry but is motivated deep down by the desire to help, while a theoretical Dr Maison is wrong about psychiatry because he is motivated by the desire to earn money and defang the revolution by medicalising poverty. Is there any belief I could hold where Scott was wrong for both reasons equally, or would I always choose one above the other when deciding how to act?
A lot of social and political action comes down to talking: saying what we believe, trying to influence what others believe.
So even if you think one factor dominant over another, there’s often good reason to acknowledge and address both.
Maybe in the case of a given actor on a given issue, you have to choose one or the other.[^1] But not as overall philosophies. There are a few conflicting Dr Maisons (even more in politics than in psychiatry), but also lots of mistaken Dr Alexanders.
[^1]: Although the most frustrating issues are those where someone is both incompetent and nefarious in such a way that the honest mistakes and the evil intentions make things worse together than either would alone. A lot of foreign policy issues seem like this to me, from the perspective of world leaders. But if you’re going to insist that I decide which reason is more prominent, then even these issues will fall one way or the other.
Yes, but do you think that “they’re trying to hold on to their power” is the only reason an elite would ever advance an argument defending their status, for whatever given value of ‘elite’? I think that’s the difference. I keep thinking of examples but they all trigger fights – no doubt people here can all think of examples of specific methods of combating unfairness that would be counterproductive, or involve a cure worse than the disease.
Not the only reason, but one reason, certainly. If you're going to define the terms so that mistake theory is the normal thing and you're only a conflict theorist if you believe that this is the only reason, then of course I'm a mistake theorist, but then that’s not saying much.
Let’s complete the four by four.
Hard mistake theory, hard conflict theory, easy conflict theory, and just enough easy mistake theory to create plausible deniability for the hard conflict theorists to undermine the hard mistake theorists.
I think this is a really useful axis of distinction, and not one I’ve seen articulated well before. I also think your articulation of it (with respect to conflict theory, at least) is still largely in Easy Mistake land, though, because the conflict theorist you describe sounds like an anti-rational boogeyman. I think a tendency toward conflict theory can come from historic abuse at the hands of people who preach things that sound a lot like mistake theory (even if they aren’t actually adopting it in good faith). As a paradigm for policy development, it seems like mistake theory heavily favors the status quo until deliberation/research suggests an alternative; if that deliberation takes a long time and the status quo seems harmful to you, it’s awfully hard to distinguish from malicious actors hiding behind a facade of careful consideration, which isn’t without precedent.
I think the thrust of my thought on this is that a person can be on board with the principles of mistake theory but not trust other people who claim to be mistake theorists to play fair ball, and accordingly adopt behaviors that look a lot like conflict theory. (The more I articulate this, the more convinced I am that this is likely happening both on the left and right, though I’m not certain they have equal cause for it.)
That aside, though, this is yet another obviously insightful sketch of a concept I wouldn’t have gotten a lasso around on my own, so thank you for that, Scott.
Which you shouldn’t- it seems almost tautological that, at any given moment, there are at least some bad actors in play. If someone repeatedly makes reasonable-sounding arguments that, on closer inspection, turn out to be bullshit, it eventually stops being worth your time to listen to them.
To some extent, this is justified… but the fact that other people aren’t acting in good faith to solve problems doesn’t mean the problems will solve themselves. There’s a vast, yawning abyss between ‘I am sure that you are wrong about X’ and ‘I know how to fix X’, even ignoring the fact that it’s possible to be both dishonest and right.
I suspect it’s different for different people. I’m attending a very left-leaning college right now, so most of the idiots, trolls and fanatics I happen to run into are of the SJ type. My sister is a teacher at a public high school in a different part of the state, and has an analogous experience with people on the right.
This certainly isn’t to say the total cumulative would balance, if you could somehow aggregate it- just that individual perspectives can differ quite widely for purely situational reasons, even before you factor in bias/subjectivity/framing effects.
The thing of it is, to me the mistake theorist Scott describes ALSO sounds wildly wrong. At least, public choice theory as I (very much a layman) understand it is nearly 100% opposed to the idea that “all you need are really smart technocrats.” Public choice theory stands against the idea that the government can just magically solve problems. Government is composed of people who 1) cannot possibly have enough information to always make “correct” decisions and 2) are just as self-interested as everyone else.
These days “really smart technocrats” usually means technocrats who decide to create prediction markets and use those to make correct decisions. It speaks to the intuition that these are all technical problems that can, at least theoretically, have technical solutions.
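For what it’s worth, the mechanism usually meant here is Hanson’s Logarithmic Market Scoring Rule (LMSR), the standard automated market maker for prediction markets. A minimal sketch (the liquidity parameter `b` and the trade sizes are arbitrary choices for illustration, not anything from the original comment):

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Market maker's cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous price of each outcome; the prices sum to 1 and can
    be read off as the market's current probability estimate."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

# A two-outcome market ("policy works" / "policy fails") with no trades yet
# prices both outcomes at 0.5:
q = [0.0, 0.0]
print(lmsr_prices(q))  # -> [0.5, 0.5]

# A trader who buys 50 shares of outcome 0 pays the cost-function delta,
# and the market's implied probability of outcome 0 rises above 0.5:
payment = lmsr_cost([50.0, 0.0]) - lmsr_cost(q)
print(round(payment, 2))
print([round(p, 3) for p in lmsr_prices([50.0, 0.0])])
```

The appeal to the technocratic mistake theorist is that the market aggregates dispersed information into one probability without anyone needing to win a debate.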
Would like to upvote, turn into an article, and turn into a book!
This is THE KEY THING that SJWs are basically about condemning the Privileged for not recognizing adequately.
To go up a meta level, maybe conflict theorists are mistake theorists who have applied the same methodology to “how do we get our ideas implemented” and decided based on the empirical evidence that emotional appeals and group action are more effective than policy analysis
To go sideways a meta level, maybe mistake theorists are conflict theorists who have applied the same methodology to “how do we avoid taking heavy mutual casualties during internal ideological conflicts” and decided that it makes strategic sense to participate only in limited verbal skirmishes, except where overwhelming force can be coordinated against especially hated/dangerous foes.
Amazing post, and definitely something I had never explicitly considered before. In fact, I made an account just to comment on this.
That being said, I think some of these opposing viewpoints are nothing more than false dichotomies. People like having power and coming up with policy is hard. A true Mistake Theorist is naive in (implicitly) assuming that all politicians/policy-makers/voters are working towards the common goal of [your utility function], and a true Conflict Theorist is naively throwing out Pareto improvements on the basis that power grabbing has to be zero-sum.
I think it might be healthy to consider these viewpoints from a Consequentialist stance. Under which conditions will being a Conflict Theorist lead to better outcomes (I think this is when a policy with huge negative impact is being proposed/implemented and all your effort and political capital has to go to fighting this) and under which conditions will being a Mistake Theorist lead to better outcomes (I think this is when it is not clear which policy leads to the best outcomes)? And isn’t switching between the two depending on the situation simply less naive than either one?
Keep in mind that the hottest newest evidence on policy has to factor into your long-term strategy for making policy somehow. On each individual issue it might be worth going full Conflict Theorist until the situation of the People has been improved, but in the back of your mind you should keep track of which evidence would convince you that you are wrong.
True in theory, but in practice, given cognitive limitations and the need to compete with specialists, any given person is likely to benefit from specialization.
I never thought about this consciously and I think it’s a useful concept. Three comments:
This exposition makes me more sympathetic to conflict theory, but only on the specific issues where conflict theorists are clearly on to something. The three that spring to mind are global warming, redistribution towards the poorest, and making people less racist and sexist. However, it doesn’t seem like conflict theorists have particularly good tools for distinguishing these cases from ones which push at all the same group identity levers (respectively: opposition to GMOs/nuclear power, massive minimum wage increases/other policies which harm the economy overall, making people deny sex differences or the existence of gender). Perhaps having conflict theorists who scaremonger about GMOs is the price we need to pay in order to have people who actually take large-scale action against global warming; that may well be worth it. But on the other hand, you don’t want conflict theorists who are able to take actions at too large a scale, because they have no solution to the problem of their actions creating a system where there are new elites and new power structures. To get a system that avoids that, it feels like your best bet is to create slow cultural change, Scandinavia-style.
Secondly, if “everyone in government is already a good person, and just has to be convinced of the right facts” is false, that doesn’t necessarily imply “everyone in the government is a bad person”, but could instead imply “everyone in government is incapable of changing their minds”. Maybe the reason for that is their deep mistrust of the other side. Then we need to figure out whether people who deeply mistrust the other side, but are wrong, are Actually Bad People. The left-wing are too inclined to say yes, but the readers of this blog (let’s call it the Slate Star position; is there a better term?) are probably too inclined to say no. The left-wing are correct that the best way to deal with these people is by treating them like Actually Bad People (i.e. fighting back), but the Slate Star position is probably correct that the best way to prevent another generation of people like them is to treat them like Actually Good (but misguided) People.
Thirdly, it sounds like the people who hate EA most are probably conflict theorists, and the people who like EA most are probably mistake theorists.
[ten minutes of screaming]
Wow. All I got was a “Goddamnit.”
Yeah, so I thought I might get this reaction, and should probably clarify. I don’t mean that all the solutions that conflict theorists propose in those three domains are good ideas. Rather, I mean that there are *some* solutions which they propose in these areas which would be good ideas, and they have correctly identified that the main opposition to these has come from powerful interest groups who don’t want to lose out.
Solutions I’m thinking of are, respectively: subsidies for renewable energy/taxes on pollution; spending money on better homeless shelters and soup kitchens; getting rid of Jim Crow laws and making outright discrimination illegal. All of these seem to me to have significant positive utility, and were/are opposed for no particularly good reasons. Do you disagree?
I think cost of goods directly relates to energy spent in their construction, so if your renewables don’t make economic sense without subsidies then they’re probably not worth deploying. I’m strongly pro nuclear energy, and I find that many of the arguments against nuclear are profoundly conflict theorist in nature.
Interesting test case.
Are you simply making the _mistake_ of confusing emissions with energy consumption? Or are you simply aiming an argument in the direction of the other side in the knowledge that, even if many people will spot the flaw, some won’t, so it is still a net win to say the wrong thing?
Huh. Don’t know where it fits in this framework, but your comment makes me think of an attitude that I’d paraphrase as “This is the biggest crisis ever, so you should compromise. (Of course I don’t have to compromise.)” Think of the way H. Clinton tried to get libertarian and/or socialist votes, or how most vocal anti-global-warming also steadfastly oppose nuclear power…
1soru1, I’m positing that fiat currency can be used as an acceptable stand-in for energy use. On that basis, if the currency cost of deploying a solar panel isn’t less than the expected currency return from deploying that solar panel, it’s not worth doing regardless of how many emissions were created in the process, since it would be a net energy loss.
The obvious issue with this theory is that fossil fuel emissions externalize costs, so the stated price for fossil fuel power doesn’t accurately track with actual costs, and sufficiently cheap oil would prevent renewables from coming online while still polluting the hell out of everything. But it’s somewhere to start.
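That externality point can be made concrete with a toy calculation. Every number below is invented purely for illustration; the only point is that a project can look unprofitable at the sticker price of electricity yet come out ahead once the displaced fossil power’s externalized costs are priced in:

```python
def net_value(upfront_cost, annual_output_kwh, years, value_per_kwh):
    """Lifetime net value of a generator (no discounting, for simplicity)."""
    return annual_output_kwh * years * value_per_kwh - upfront_cost

market_price = 0.10   # $/kWh -- sticker price of electricity (made up)
externality = 0.06    # $/kWh -- hypothetical unpriced health/climate damages
                      #          of the fossil power the panel displaces

panel_cost = 4000     # $ up front (made up)
output = 1500         # kWh/year
lifetime = 25         # years

# At the market price alone, the panel loses money over its lifetime;
# counting the avoided externality, it comes out well ahead.
at_market = net_value(panel_cost, output, lifetime, market_price)
with_externality = net_value(panel_cost, output, lifetime,
                             market_price + externality)

print(f"Net value at market price:     ${at_market:,.0f}")
print(f"Net value pricing externality: ${with_externality:,.0f}")
```

Which is just the standard argument for a carbon tax: fold the externality into the sticker price and the currency-as-energy-proxy heuristic starts working again.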
So you are quite aware that what you stated was wrong (i.e. ‘obvious issue with this theory’), but you said it anyway.
Can you reach back and reconstruct your state of mind when you said it? Was it temporarily forgetting that issue, or deliberate engaging in a conflict between sides?
I don’t think articulating an argument compels one to reiterate all possible counterarguments, if only because they tend to be so numerous in almost all cases. Discussion invariably contains gaps, and we don’t have the resources to explore all of them. ‘All models are wrong’. But not, of course, equally.
I can’t say I exactly understand the point in dispute here, so maybe this was a particularly blatant example. (I half-understand it: I could bullshit my way through an essay question about it and come away with a decent grade, but I don’t have a good framework for the exact degree of tie-in between monetary costs and energy costs, or the limits of potential externality issues.)
For what it’s worth, though, I wouldn’t have parsed ‘I think X directly relates to Y’ as an assertion of fact that X and Y had an inelastic unvarying 1:1 relationship.
All-externalizations-included cost accounting, of the sort the ExternE project did, implies very strongly that the only energy sources worth using are dams and reactors.
Wind is acceptable if you have enough dams to smooth supply, and nothing else is remotely economic.
Coal in particular is terrifyingly expensive; you just mostly pay for it at the doctor or the mortuary.
I think the cost of goods more strongly relates to the labor spent in their construction than the energy, and to the extent that it includes energy it is because of the labor spent obtaining energy. It is quite likely that the cheaper good is cheaper because it traded the labor of one coal miner against that of two master craftsmen and said, “burn the coal to brute-force this one”.
My dad used to say that you could do anything if you had enough people and energy. (He’s an engineer. To engineers, energy matters). I would add time. The three main contributors to value are people, energy, and time, or the interaction between the three.
… haven’t we already done that? I mean, I admit that I don’t know exactly what Jim Crow laws are, but I’ve always heard them brought up as an example of “bad thing that we used to have but that we thankfully got rid of.”
Jim Crow laws were laws in the South that made discrimination against blacks mandatory. They are indeed no longer a thing.
Also, it’s important to note that “mandatory” part, because there seems to be a group that says “if we got rid of nondiscrimination laws, it’d be just like Jim Crow again”. Unless you think every restaurant owner is chomping at the bit to break out their “whites” and “coloreds” signs, removing laws that make something forbidden will not have the same effect as making it mandatory.
Warming is, on the whole, good when you are cold and bad when you are hot. So if it’s a straight conflict theory fight, with each group supporting its own interest, Canada and the Scandinavian countries should be anti-AGW, the U.S. and China about neutral, India and other warm countries pro-AGW, where “pro-AGW” means “believing that warming is a very serious problem which something must be done about.”
That’s not the pattern I see. It looks more like poor countries being pro-AGW mostly in the belief that they can use the argument to get money from rich countries, everyone else dividing on ideological rather than self-interest lines.
My thought is that your statement “on the whole, good when you are cold and bad when you are hot” is so far wrong that everything else has to be mostly not-even-wrong. Knowing who AGW is bad for means knowing about climate variability, storm intensity, rainfall patterns, refugee flows, ocean acidification, clathrate gun hypotheses… it’s not about “on the whole, good when you are cold and bad when you are hot.”
Also, things are mostly not good or bad for whole countries; they’re good or bad for individuals and interest groups. Don’t ask what’s good for “the U.S.”; ask what’s good for the professionals who benefit from high energy levels on red team or blue team, or what’s good for middle class folks who care about (un)employment, or what’s good for the investor class as a set, or for specific billionaires with investments in (say) coal, natural gas, or solar. Goldman Sachs has climate change science specialists on payroll, and is buying water rights in target areas. Or look at how much it did for the Sierra Club to get a huge injection of cash from big natural gas to go after coal, and ask if it did as much to help climate action as a cause as it did to give natural gas a boost over coal.
Here’s an example of how my view explains others’ AGW politics. Say you are a ruling elite in a petrol-rich state, like Russia, where something like a trillion dollars (that’s the order of magnitude, anyway) worth of fossil fuels that are somewhat marginal (below permafrost, or coal and hence quite dirty, or very far from consumers and requiring big new pipelines) are still below ground. Policies that tend to make it uneconomic to drill up and sell these marginally-profitable fossil fuels… you hate those policies. So do all the folks who work for you – your lawyers, business hacks, security guards – everyone who traces their well-being to your largesse will also want the oil to flow. Some people with a little cleverness see how much you are likely to hate those policies, and build whole careers anticipating the largesse that will flow from you if you see them as solving this problem for you… by any acceptable means. (THE SPICE MUST FLOW!) So, in Russia we’d expect that Putin and the others who seem to view the country as their personal fiefdom these days… they probably don’t really want people to think climate change is real.
Now do that again across a lot of other places where agendas can form. (Again, the U.S. isn’t a thinking thing; don’t try to find what “The US” wants. Find what David Koch wants, or what a 22-year-old climate activist wants, such as a 20-something buddy of mine who did a 40-day vitamins-only fast to protest climate inaction… etc.) Notice that it’s no more profitable, in dollar terms (and arguably a *lot* less so), to be a pro-AGW mainstream scientist than to just take a payout and go work for Heartland or etc. Climate change is very complicated to understand, but there’s a very, very serious collective action problem at the bottom of it. In whose personal interest is it to be the chap who tells Vladimir Putin that climate change is so bad that he should stop all his elite cronies from doing what they might otherwise do to ensure they don’t have to write off $1 trillion?
The name Alexander Litvinenko springs to mind for some reason… can’t seem to think of why.
If you want to know the object-level facts of climate change–to decide what *you* think–look for the single person, or small set of them, that are most incentivized to tell only the robust truth (i.e. only the stuff that they can defend against claims that they’ve lied). I’d say if you get a graduate student in a physical sciences program to tell you about the atmospheric and oceanic effects–the direct temperature and pH changes–and then get an ecologist grad student or two to tell you what they can say with some confidence is threatened by that (e.g. is ocean acidification bad for some people more than others, and if so, who?), then you are well on your way. I don’t think you’ll end up with “on the whole, good when you are cold and bad when you are hot.”
Instead, it’s more like “AGW… quite bad. On balance, not worth caring about if you are rich and powerful and don’t care about the future, since you’d have to spend down/use up your power or money to do anything. But since, if you are rich and powerful and prone to shooting the messenger, it’s not that important if you care about the future anyway, because you won’t know it’s bad, because no messenger smart enough to persuade you is stupid enough to try. If you aren’t rich, powerful, or you do care about the future, it’s pretty bad. How bad depends a lot–there’s some uncertainty on how much, at this point–on whether anything is done, and how much, and how soon.”
In principle, conflict theory is easily separable from political Manicheanism. In practice, it usually requires a crowbar.
> The three that spring to mind are global warming, redistribution towards the poorest, and making people less racist and sexist
Uh, the first one, sure, but wouldn’t economic redistribution as well as identity politics be almost the archetypal complex mistake-problems that need careful analysis, but that people (well, conflict-theory types) typically love to treat as easily solvable if the other side would just admit they’re evil bastards and go stand in a corner?
I should have been more careful with my phrasing. There are definitely lots of problems which mean that redistribution and identity politics don’t have easy solutions. However, the last century or two of progress in combating racism and sexism have been mostly conflicts against people who didn’t think the interests of minorities were very important. We’re now reaching a time when the gains are a lot more marginal, and therefore maybe mistake theory is a better lens.
Same with redistribution: pumping more money into somewhat dysfunctional modern welfare systems probably should be analysed with mistake theory, but setting up a basic safety net in the first place was very much a conflict-based struggle.
Possibly the clearest conflict-theoretic examples today are international borders, which very effectively preserve the interests of citizens of wealthy countries.
Also, at the risk of flattering myself overmuch, I suspect that an increased appreciation of conflict theory (or rather, an increased wariness of overly-trusting forms of mistake theory) is a common thread among many of my moderate-but-unpopular-among-rationalists political views. These include:
• The median voter should have more influence over government. (By median I mean in terms of power, status, wealth, etc., not in terms of ideology like in the median voter theorem.) In particular, mechanisms designed to limit voters’ influence (like the Supreme Court and superdelegates) are dangerous, and direct democracy (or some smarter variant like liquid democracy) is probably good on the margin.
• A Pigovian wealth tax might be a good idea.
• The ability of arbitrary fringe groups to get their message out via platforms like Facebook is basically a positive development, and everyone screaming about how Mark Zuckerberg is morally culpable for the outcome of the 2016 election needs to calm down and go read Douglas Adams’s 1999 essay which anticipated this whole controversy.
With respect to democracy in particular, I’ve heard a fair number of rationalists explicitly state that they don’t view corruption (which technocracy and other undemocratic systems naturally facilitate) as a serious problem, and that instead of worrying about it we should worry about voters making dumb and destructive choices. And yes, the most recent United States presidential election was evidence in favor of this proposition. But I still see plenty of problems caused by corruption, and I suspect that if the typical rationalist lived under a regime less favorable to people like themselves, they’d feel differently.
(Incidentally, I am confused about the idea that public choice theory is a mistake-theory thing. The triumph of concentrated interests over diffuse ones is a central insight of public choice theory, and that insight is the major reason why I fear corruption and want more democracy on the margin. And it plays nicely into conflict theory; everyone agrees that of course concentrated interests are going to fight kicking and screaming against anything that reduces their relative influence.)
Your ideas are intriguing to me, and I wish to subscribe to your newsletter. But since this is slatestarcodex, do we have any evidence that increased democracy reduces corruption (etc), or are we just engaging in wishful thinking?
Also, your DNA link is broken.
I wouldn’t call it “wishful thinking” so much as a 101-level spherical-cow argument. To the extent that voters influence policy, those policies have to match those voters’ interests. To the extent that people and groups not accountable to voters influence policy, they can steer towards policies that benefit themselves at the expense of voters (which is what I mean by “corruption”). And indeed there are many examples of this in practice.
We do need more empirical tests of these kinds of questions, though. The spherical-cow factor doesn’t always dominate.
The link worked when I posted it, and Google gives the same URL as the top result. It must have literally just gone down. Wayback Machine link.
Are you pulling this from the kind of arguments/research of The Dictator’s Handbook? Because if not you should definitely consider doing so! Those authors note that in various fields, whether we’re talking government, corporations, or sports, needing to have a broader base of support decreases corruption. The simplified version of that argument is that the more people you need to have supporting you to hold power, the more you need to provide public goods that make everyone better off rather than private goods benefiting a few.
Haven’t read The Dictator’s Handbook; the above argument wasn’t meant to be anything more than the bog-standard pro-democracy argument that you learn in kindergarten.
Update: It’s back up.
And back down, or at any rate not up for me. I suppose that the Sirius Cybernetics Corporation is to blame.
>The ability of arbitrary fringe groups to get their message out via platforms like Facebook is basically a positive development
I thought rationalists were on board with this position.
There was a big media narrative that conservatives were hearing political information from their friends and other sources from their filter bubble, instead of from authoritative sources, and this was why they believed in crazy things like Pizzagate, leading to Trump’s election victory. Most educated people I know are in agreement with this. I’m not so sure about rationalists, but there’s definitely a major anti-Facebook movement going on in the rationalsphere right now.
I’m pretty sure we’ve moved on from that because it was too easy to justifiably call the “authoritative” sources that-thing-we-can’t-say. Now the line is that Facebook is bad because Russians can buy ads on it.
I don’t know about anyone else, but I’m pretty down on FB because it seems to me to be having a bad effect on public discussions, and also because I feel like it often has a bad effect on me and people in my social circle. It’s not a matter of whether Russia has bought some Facebook ads or Zuckerberg is a good guy or a bad guy or anything–it just looks like it’s making the world worse in many ways even as in others (like helping people keep in touch with far-flung friends and family) it makes the world better.
I found your 3rd para confusing. Maybe I’m just mindkilled by recent politics*, but my instinct is that the association b/t technocratic vs. democratic and corrupt vs. uncorrupt is nearly backward. There are a million technocratic diagnoses and solutions to various kinds of corruption, which go routinely ignored not because serious, well-intentioned people disagree on the prescription, but because beneficiaries of corrupt bargains are central enough members of political coalitions to protect their privileged position. The more a political system is subject at detailed junctures to democratic processes vs. technocratic decision-making, the more opportunities for interests to maintain corrupt bargains. See, for example, local land-use decision-making favoring incumbents, vs. the increasingly broad push for zoning liberalization coming from technocratic institutions and seeing support only at the state or federal levels.
* ’16 I saw not as dumb vs. corrupt, but as dumb and corrupt vs. merely corrupt–but on the Hansonian view maybe really just reflects corrupt-in-a-way-that-disfavors-me vs. corrupt-in-a-way-that-favors-me.
I think the housing crisis is mostly caused by voters voting in support of their own interests. Normally this is good. The reason it’s bad this time is because a lot of morally relevant people (those who’d like to immigrate but can’t) don’t get a vote. So in a sense, the solution is more democracy. (Which is sort of what’s happening at the state level.)
More generally, the problem with technocracy as a solution to corruption is that, if the technocrats themselves are corrupt, then you’re screwed. If they have control over the broad future direction of things, then this could plausibly be worse than smaller-scale forms of corruption that we tolerate because it’s part of a system that prevents voters’ interests from being steamrollered too badly.
A point in favour of mistake theory in the case of the US election – in many other countries, with the same distribution of votes, Donald Trump would not have been selected as president.
And in those countries, some other distribution of votes would lead to equally perverse outcomes, contrary to the will of a majority of the population. Arrow’s Theorem is totally a thing, and it is a thing you need to account for to avoid making mistakes in this area.
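The point about Arrow’s Theorem can be made concrete with the classic Condorcet cycle it generalizes. Here is a minimal sketch (my own illustration, not from the thread): three voters with rotated preference orders, under which simple pairwise majority voting has no stable winner.

```python
# Toy illustration of the Condorcet paradox underlying Arrow's
# impossibility result: with these three ballots, every candidate
# loses at least one head-to-head matchup.
from itertools import combinations

# Each ballot ranks candidates best-to-worst.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def pairwise_winner(x, y, ballots):
    """Return whichever of x, y a majority of ballots ranks higher."""
    x_wins = sum(b.index(x) < b.index(y) for b in ballots)
    return x if x_wins > len(ballots) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y, ballots)}")
# A vs B: majority prefers A
# A vs C: majority prefers C
# B vs C: majority prefers B
```

Majority preference is cyclic (A beats B, B beats C, C beats A), so any rule that picks a single winner from these ballots must break the cycle somewhere, and different rules break it differently. That is the sense in which no voting system is immune to “perverse” outcomes.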
The lack of a perfect solution does not imply that all real solutions are equally bad.
Obviously, because your clear implication was that the US solution was exceptionally bad. But if the extent of your argument is that a solution is exceptionally bad because look at this one single failure in a marginal case, then that does look like you are arguing for the existence of a perfect solution that never fails.
Also, your initial statement was weak on the facts. There are not “many” countries, and may not be any country, in which a 48/46/3/1/1 popular vote split results immediately in the candidate with 48% being appointed head of government. Whether Donald Trump would have been selected as president/prime minister/whatever depends on second-choice preferences that are not recorded on US ballots and about which you cannot make confident assertions.
With different election rules, the campaigns would have been different. Whether intentionally or not, Trump traded millions of worthless votes in California and New York for thousands of critical votes in Pennsylvania and Michigan.
Campaigns don’t move that many votes. It’s plausible that different campaigning would have lost Trump Michigan and Wisconsin, and maybe won him New Hampshire (though probably not Pennsylvania or Florida), since those were only a few thousand votes’ margin. The popular vote gap was way too big for campaigning to shift.
Anything that reduces benefits to that particular interest group. Preventing something that reduces benefits to concentrated interest groups in general, probably not, since the collection of all concentrated interest groups is itself a diffuse interest group.
For your previous point, note that interest groups are not people. The logic of public choice implies that governments will give benefits to the concentrated interest group I am a member of at the cost of the diffuse interest groups I am a member of, and I am a member of both. That’s why it might be true both that I would lose as a stockholder in a steel company from a shift to free trade but benefit overall from that same shift.
Hence the free trade vs protection argument may be a conflict issue from the standpoint of interest groups but a mistake issue from the standpoint of the individuals who compose them.
Yeah, to be clear, “that particular interest group” was what I meant.
The only way for the average individual to benefit from policies favoring concentrated interest groups over diffuse ones, is if those policies are net positive. Maybe some forms of welfare are like this, but the more typical example is a group being awarded the right to extract rents in a way that’s economically inefficient. Even if everyone is a member of some concentrated interest group that benefits from some such policy, they all still lose. (The real world, of course, is inegalitarian, so some people are net beneficiaries and most people aren’t.)
Both sides sound like caricatures.
I feel like the situation in the realm of policy is no different from what we see in the realm of truth. Radical rationalism ultimately hits a limit because you cannot conjure truth out of pure reason; at some point you need to build upon undemonstrated axioms (and if you think you don’t, you’re probably not noticing your assumptions). Likewise, in policy you ultimately run into competing interests that cannot be explained away or reconciled.
The conflict theorist is like the fundamentalist zealot, and the mistake theorist is like the fedora atheist. Both extremes are bad. However, in general our world still needs more rationality than zealotry, so I invite you to keep the current course.
I see your point, and I think I’m reasonably good at not making the mistakes you describe, but I still identified extremely hard with the mistake theory side and against the conflict theory side of that dichotomy (insert standard caveats here). I don’t think it’s that much of a caricature.
Seeing this comment section, I think I’m being pretty representative here.
By “ultimately” do you mean ultimately in all issues, or only that there will be at least one issue where this happens?
Suppose my view of the effects of minimum wage is correct. Then most of its supporters, who support it in order to help the poor, are making a mistake. Some supporters, workers in or stockholders of firms that use skilled labor but compete with firms that use less skilled labor (the northern textile industry supporting the minimum wage to handicap their southern competitors), are not making a mistake, but they don’t have enough political power to get what they want without the help of their benevolent and mistaken allies. Have we ultimately run into competing interests?
I mixed up two different things for the sake of brevity. I think we can say that ultimately we always run into some interests or goals that cannot be further reduced or proven right (they are similar to axioms); furthermore, in a non-negligible number of cases (but not in all cases, as you point out) the interests we run into are in conflict.
Wouldn’t this theory predict that you’d have an audience lacking both left-wing conflict-theorists and right-wing conflict-theorists? But you’ve had far less trouble attracting representation from people who think the primary driving conflict is Western Civilization vs Barbaric Degenerates, although that is definitely still a conflict-based narrative.
Still, as you say, it could be worth a shot. Hypotheses are there to be tested.
The people bemoaning civilizational degeneracy have been treated with more respect here than they would’ve been in virtually any other ideologically-neutral intellectual space.
I will grant that the… Raumgeist? of this ostensibly-neutral place casts hostility leftwards more easily than rightwards, but even so, I don’t think it’s enough to explain the discrepancy. Adding what should be a symmetric effect from lack of conflict-theoretical consideration wouldn’t increase the asymmetry of the outcome.
Hmm, but maybe addition is the wrong metaphor. Perhaps feeling like people aren’t listening to your message or speaking a discursive language you can understand is multiplicatively more unpleasant than merely one or the other.
I would have said the people here were unusually good at this. Maybe I’m misunderstanding?
Sorry, that should really be “nor.” That is, the hypothetical Marxist might feel that they’re being attacked as an Outgrouper, and that the local population are talking in the weird way that ‘mistake theorists’ do, thereby seeming like shills for Wall Street.
I don’t know about hostility, but there are a couple of blog entries that would naturally attract some of the more right-wing, conflict-oriented people.
To which the response started:
Alternatively, other “ideologically-neutral intellectual spaces” qualify their neutrality by the principle that there is no enemy to the left, that people to their right are evil, people to their left at worst mistaken. It’s the relative openness to right wing conflict types that’s at issue, after all.
That’s probably true, and I think they’re correct to think that way. As a libertarian, I find myself often agreeing with left-liberals on ends, but not always on means (i.e. I agree with liberal terminal values like “improving quality of life for everyone” and “ending discrimination based on immutable traits like race and gender”, but I also believe that a lot of left-liberal policies are at best ineffective and at worst counter-productive at achieving such goals). Conversely, I find myself completely disagreeing with conservatives on ends, while sometimes agreeing with them on means (i.e. even when I agree with conservative policies, I feel like conservatives are promoting those polices for all the wrong reasons; for instance, I oppose the minimum wage because it hurts the poor, whereas they seem to oppose the minimum wage because they either don’t care about the poor or actively believe that poor people deserve to remain in poverty).
Effectively, I tend to apply conflict theory when dealing with people to my right and mistake theory when dealing with people to my left (with some exceptions, like authoritarian far-leftists, who I definitely see as the enemy just as much as conservatives and far-rightists).
> I oppose the minimum wage because it hurts the poor, whereas they seem to oppose the minimum wage because they either don’t care about the poor or actively believe that poor people deserve to remain in poverty.
I have literally *never* met anyone on the right (and I have spent my life among them) who “didn’t care about the poor” or who “actively believe[d] that poor people deserve to remain in poverty.”
I have met people whose concern for the impoverished was of a different shape than mine, or a different degree, and I have met people who defined poverty and poor differently than I. I have met people who believe that actions have consequences, and that it is a predictable (and sad) consequence of spending money on booze, smokes, gambling, and cheap frills that one has less for other things, and that people who exert themselves at school and at work get more and better raises than those who don’t. Even those who viewed the conditions of poverty as capable of teaching the value of hard work and thrift (when all the nagging in the world would not) very rarely saw an issue with other people gifting the poor with their own money – just so long as the handout came from their own pocket, and did not dip into others’.
keranih: Someone who believes that poor people are poor because they didn’t work hard enough, or because they wasted all their money on vices, is basically saying that poor people deserve to be poor. Those reasons might be true for some individual poor people, and I’m not against calling out individuals who genuinely have made bad life decisions, but if you assume that *most* poor people are just poor because they’ve made bad choices, that’s ignorant at best and downright elitist at worst. And yes, I actually have seen plenty of conservatives denounce voluntary charity towards people or groups they feel are “undeserving” of help, to the point of criticizing companies that choose to pay living wages to entry-level employees (admittedly, some of that criticism might be rooted in opposition to what they perceive as corporate virtue signaling).
Case in point, libertarians* will usually argue against the minimum wage by pointing out that it leads to lower employment rates and ultimately hurts poor people in the long run, or that it’s unfair to small business owners who can’t afford to pay their employees so much. Conservatives will usually argue against the minimum wage by saying that baristas don’t deserve to be making $15/hour just for making coffee, often with the implication that minimum wage workers are stupid and/or lazy. There is a sense in which it’s true that baristas don’t “deserve” to be making as much as people with jobs that require more education/training or involve greater labor/risks (e.g. construction workers, firefighters, neurosurgeons), but that’s self-evident to the point where it’s basically a strawman (it’s not like anyone is proposing that baristas *should* make as much as neurosurgeons), and there’s no reason to assign a moral judgment to that argument rather than a purely economic argument.
*At least the libertarians I know, who (like me) mostly tend to be Blue Tribe libertarians with Blue Tribe values, even if they support traditionally ‘right-wing’ fiscal policies for pragmatic reasons.
It’s elitist, but it’s only ignorant if it’s false.
They are proposing that baristas make as much as garbagemen or construction workers, however. And the arguments _for_ minimum wage tend to have a moral component, so it’s no surprise the arguments against them do as well.
But you’re stopping in the middle.
I believe that for the most part, a poor person is poor because they didn’t care enough about not being poor.
But that doesn’t mean I want him to be poor. Nor does it mean that, because of my outrage at how he has mismanaged and wasted his life, I would be at all inclined to take steps to make his plight even worse.
My disagreement with various sorts of bleeding hearts lies in what kinds of policies might possibly improve the situation of such a person. Welfare evidently doesn’t, usually — it just kicks down the road the point where the person faces facts and finally takes steps to improve himself. Minimum wage doesn’t, usually — it just makes it illegal to hire the poor slob who has never bothered to learn more lucrative skills, or to take a discounted chance on a person whose record suggests that he might or might not show up sober on any given day.
We are no longer allowed to make a distinction between the deserving poor and the undeserving poor; it’s an insult to the latter, or something. But, while the cases close to the boundary are always hard, surely it is still a distinction that makes sense, and should matter to us?
Strawman, I think. Anybody with the slightest economic sense knows that baristas making $15/hour deserve $15/hour to the same extent anybody else deserves what they are getting paid — somebody is willing to pay it without coercion.
The objection to minimum-wage laws is that it leaves completely out in the cold anybody who deserves (by virtue of what added value they can deliver) $14/hour, because it is illegal to hire them for what they deserve.
> Someone who believes that poor people are poor because they didn’t work hard enough, or because they wasted all their money on vices, is basically saying that poor people deserve to be poor.
@ LadyJane – I don’t know how exactly to explain to you that NO, that is NOT “basically” what is being said.
What is being said is that consequences exist. In the case of people who spend more than they bring in, those people do not have money. They are poor. This is a consequence of the equation above.
To say “deserve” is to imply “not deserve” – as if greater expense and lesser income would somehow re-arrange themselves for those of virtue, and only those of vice were bound by mathematics.
> Those reasons might be true for some individual poor people, and I’m not against calling out individuals who genuinely have made bad life decisions, but if you assume that *most* poor people are just poor because they’ve made bad choices, that’s ignorant at best and downright elitist at worst.
Neither ignorant nor elitist – it’s the result of having been among, and lived among, so many in that state. Many of us come to our senses and get out of it. Others continue to compound previous bad mistakes with new ones.
> (admittedly, some of that criticism might be rooted in opposition to what they perceive as corporate virtue signaling).
See? Even your example of “bad” conservatives badmouthing charity has its root in something other than dislike of the poor.
> Conservatives will usually argue against the minimum wage by saying that baristas don’t deserve to be making $15/hour just for making coffee, often with the implication that minimum wage workers are stupid and/or lazy.
They don’t deserve to be making more than minimum wage based on the worth of their labor. Not because of their *human* worth, but because pulling coffee and accurately dispensing change via pushing the auto change button is hardly a rare skill.
> There is a sense in which it’s true that baristas don’t “deserve” to be making as much as people with jobs that require more education/training or involve greater labor/risks (e.g. construction workers, firefighters, neurosurgeons), but that’s self-evident to the point where it’s basically a strawman
It is not self-evident enough, as there are still plenty of people who feel that artists shouldn’t be starving and that PhD graduates “deserve” jobs of X quality and that the term “living wage” even exists in the much-manipulated form that it does.
(Look up “living wage” for your area. Then go look and see what expenses it’s supposed to cover for a single wage earner supporting one non-working adult and two kids. Then ask yourself why childcare is on the list as a necessary, typical expense.)
> and there’s no reason to assign a moral judgment to that argument rather than a purely economic argument.
There is no reason to say that *any* worker has any moral right to any set level of monetary compensation. (Aside from what has been agreed upon by that worker and his employer.) Yet this is an argument made all the time by the left. “They deserve better!” No, actually, “they” don’t.
Would they (and their families, if they are supporting any) be better off with a higher wage? Almost assuredly, but that’s math again. Does every human have worth regardless of how much or how little they can produce in a day? Absolutely, but that’s in the eye of God, not the market. Does every human have worth regardless of whether they are smart or stupid, a liar or honest, lazy or hard working, crippled or whole? Absolutely, but again, that’s the eye of God – which does not use income as a metric (at least not in my denomination.)
(I would caution you against the hasty assertion that it is typical of the Left to see humans as God does.)
Could you point at conservatives actually making that argument? I don’t read National Review or similar sources so might have missed it, but it sounds more like what critics of conservatives would imagine they are saying.
> There is no reason to say that *any* worker has any moral right to any set level of monetary compensation. (Aside from what has been agreed upon by that worker and his employer.) Yet this is an argument made all the time by the left. “They deserve better!” No, actually, “they” don’t.
I agree with you! Fundamentally, no one has any *right* to anything except to be left alone (i.e. the right to not be subject to violence, coercion, or fraud), and other rights like freedom of speech and freedom of religion are just extensions of that basic principle. But as both a political scientist and a classical liberal, I have a fairly strict definition of what exactly a ‘right’ entails. So yes, there’s a sense in which a barista doesn’t deserve to make a living wage, but it’s the same sense in which no one deserves anything other than to NOT be the victim of a crime. By that logic, a neurosurgeon doesn’t intrinsically deserve to be making $100,000+ a year any more than the barista deserves to be making $15/hour; they both ‘deserve’ whatever the market decides their labor is worth.
As The Nybbler pointed out, left-liberals tend to make moral arguments for why people deserve to be making at least $15/hour, while conservatives tend to make moral arguments for why some of those people don’t deserve to be making $15/hour. But I reject both of those arguments in favor of my own pragmatic utilitarian views. There’s no real objective way to determine what people ‘deserve’ because different people have different values, and you’ll just drive yourself insane if you try to figure it out. Instead of focusing on what people deserve, we should be focusing on what works best for everyone.
See? Even your example of “bad” conservatives badmouthing charity has its root in something other than dislike of the poor.
Honestly, if that is the case, that just makes them even worse in my view. If they’re badmouthing charity because they genuinely don’t think the recipients deserve it, or because they think it’ll make the recipients weak and dependent or some such nonsense, then at least they’re being morally and intellectually honest, even if I strongly disagree with their worldview and value system. If they’re badmouthing charity just because they see it as virtue signalling, that means they’re willing to throw desperate people under the bus just to prevent the ‘other side’ from scoring a few political points, and that’s totally, irredeemably, unjustifiably evil to me.
It’s bad for the left to use disenfranchised people as political footballs. It’s much worse for the right to shoot down those disenfranchised people just to keep the left from scoring a goal.
DavidFriedman: It’s not something you’ll see in a lot of respectable conservative publications, or even publications like Fox News and National Review that try to maintain some veneer of respectability. It’s what you’ll see in the comments sections of Facebook posts and Yahoo News articles, on conservative subreddits and alt-right image boards, on inflammatory media outlets like Breitbart that don’t even pretend to be neutral or credible. It’s what you’ll hear radical Red Tribe conservative populists scream about, and what you’ll hear moderate Blue Tribe conservative elitists whisper about in their dining rooms when no one else is listening.
And yes, that’s all anecdotal evidence. Maybe you don’t believe me, or just don’t think that my personal experiences are representative of American conservatism as a whole. But I’m going by what I’ve seen and heard firsthand.
Exactly, it’s the neutral vs conservative thing again. The mainstream media isn’t really a hotbed of Marxism, as some people like to think, but at least Marxists are free to join in the comment section without getting banned for expressing their views. So for the right-wing equivalent to comment outside their own ideological bubble, they pretty much have to come here.
Hypothesis: Blogs attract (angry?) comments from conflict theorists opposed to the viewpoints expressed in the blog, even if the blog mostly makes mistake theoretic arguments. More expansively, it may well be that mistake theorists will only comment on the blogs of other mistake theorists (regardless of viewpoint) while conflict theorists prefer to comment on the blogs of people with opposing viewpoints (regardless of theory).
As a result, a mistake theoretic blog would (according to this hypothesis) attract every demographic -except- for conflict theorists aligned with the blog’s overall goals; while a conflict theoretic blog will attract conflict theorists of both sides and very few mistake theorists of either kind.
Are you claiming there aren’t any Marxists here because the blog is Marxist (or overall Marxist-aligned)?
If so I think we can discard said Hypothesis safely.
More that Marxists do not regard SSC as ‘the enemy’ in any appreciable way.
Plausibly SSC and Marxists share a technocratic ideal rather than the racialist/culturist ideal of the Alt Right?
Maybe not my most charitable interpretation, but I think Right-wing conflict theorists stay here because it’s one of the few cosmopolitan comment sections where they don’t get banned, and Left-wing conflict theorists avoid this place precisely because the Right-wingers are allowed to stay.
(They see Right-wingers around, and conclude the blog is a Right-wing fortress; they choose to move because they have other cosmopolitan blogs to voice their opinions with more peer support)
Hey, I even saw one of them saying something like this!
Two points: What struck me in your post was that the examples you gave for conflict theories all came from the Marxist perspective. While (cultural) Marxists may be the most obvious, unabashed conflict theorists these days, the behavior of the American right wing suggests it has its fair share of conflict theorists too, and Republican tax and health care policy often smells more of undeclared class warfare than of careful consideration of the pros and cons.
Which brings me to the second point: this looks not like a fundamental question of what the world is really like, and more like a multi-player game theory problem, in particular a multi-player prisoner’s dilemma. It’s all fine and dandy – in fact, it’s probably the most constructive, helpful thing to do – to play “mistake theory” if everyone else is playing the same game, but if you have a sufficiently strong faction playing “conflict theory” (refusing to compromise, because everyone else is the devil), they have more success than they should. “Conflict theory” is like a bad Nash equilibrium, a self-fulfilling prophecy – if everyone behaves according to the diagnosis “It’s power-hungry, uncompromising people on the other side who cause the problem”, there will be no lack of power-hungry, uncompromising people on all sides, causing all sorts of problems.
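The bad-equilibrium point can be made concrete with a toy payoff matrix. (All numbers below are illustrative assumptions, not anything measured; this is the standard prisoner’s dilemma structure applied to the mistake/conflict choice.)

```python
# Toy 2-player game: each side plays "mistake" (debate in good faith)
# or "conflict" (fight for power). Payoffs are illustrative assumptions:
# (my payoff, opponent payoff).
PAYOFFS = {
    ("mistake", "mistake"): (3, 3),   # both cooperate: best joint outcome
    ("mistake", "conflict"): (0, 4),  # the conflict player exploits the cooperator
    ("conflict", "mistake"): (4, 0),
    ("conflict", "conflict"): (1, 1), # mutual warfare: worst joint outcome
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(["mistake", "conflict"],
               key=lambda my: PAYOFFS[(my, opponent_move)][0])

# Whatever the other side does, "conflict" pays more for me...
assert best_response("mistake") == "conflict"
assert best_response("conflict") == "conflict"
# ...so (conflict, conflict) is the Nash equilibrium, even though
# (mistake, mistake) would give everyone a strictly higher payoff.
```

Since “conflict” is the best response to every opponent strategy under these payoffs, mutual conflict sustains itself even though everyone would prefer the mutual-mistake-theory world, which is the self-fulfilling prophecy described above.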
The first part of this is more or less what I wanted to say; Conflict Theory is basically how we wound up with Trump.
We basically wound up with Trump because Clinton got caught when she stole the primary from Bernie. On that note, this is a socialist Presidential candidate’s best chance ever in America. It’s like if Wilson got caught stealing the primary from Debs and used his remaining clout to make the D party make fools of themselves covering for him. People in 2020 will have a choice between voting for D party fools and crooks who get caught, voting for Trump, and voting for a socialist.
Or a Libertarian. Or a Green.
I don’t think the two main parties stay in power mostly because everyone likes them more than the alternatives. I think they stay in power because they’re pretty effective Schelling Points, and there are a lot of alternatives.
Right, Clinton got caught stealing the primary, which made certain groups realize she was in Conflict with them rather than just making Mistakes.
Better than 90% of American voters today do not acknowledge the existence of choices other than voting for the guy with the (D) after their name and voting for the guy with the (R) after their name. If that changes in 2020, it will be because Trump soured some of the (R) crowd on that approach, but for the same reason team (D) will be even more convinced that “wasting” their votes on a third party is an intolerable risk.
The #NeverTrump former Republicans are not going to even consider voting for a socialist, so the only way that a socialist candidate has a better chance this year than last is if the socialist actually captures the Democratic party nomination. How likely do you really think that is?
Not hugely likely, but not beyond the realms of possibility. It was a reasonably close race between Sanders and Clinton by the end – the gap was significant but not necessarily insurmountable, so a socialist candidate could potentially take it if they played their cards right and their opponents weren’t incredibly strong.
I think it’s at least plausible, if not necessarily *likely*.
TBH people thinking it’s not going to happen is probably one of the factors that helps make it possible. People outside the party concentrate smears etc. on the person they see as a serious contender and so advantage the underdog; candidates themselves do the same, and it potentially lets the left-field candidate sneak in. That’s kind of what happened with Jeremy Corbyn and the UK Labour leadership elections – people endorsed him as an outside candidate to ‘open up the debate’, and once he was on the ballot he’d mobilized enough popular support to swing it before anyone was really taking him seriously as a threat. And now that he’s got the position and that popular support, he’s proven very difficult to oust.
It’s possible Sanders has entirely blown the US socialists’ chance to manage the same trick by getting as far as he did and thereby calling everyone’s attention to the possibility, but it’s also possible that the amount of vitriol thrown at Clinton during and since the campaign will have had the effect of pushing enough people towards Sanders and by extension the left of the party after the fact to make a difference next time. Especially if the most left-wing candidates are still not considered serious contenders worth actively countering by the rest of the party, and are allowed to get a good run-up.
The Democrats can basically either swing right and hope to capture the #NeverTrump vote, or swing left and hope to catch enough think-the-Dems-are-too-rightwing voters to score them an overall win once the Republicans are weakened by losing #NeverTrump folk to the Libertarians or to straight-up ballot-spoiling (is that a thing in the US?). From the limited amount I know about the situation, both of those sound like at least semi-plausible options.
Sanders had a major issue reaching minority voters; any socialist running is going to have to address this without driving middle-of-the-road voters away.
For a second i did a double take and wondered if this was sarcasm or tinfoil hattery, and i was really hoping it would be the former. Imagine my disappointment on reading the rest of the post and finding that you are dead serious.
Clinton won because more Democratic primary voters wanted Clinton. This is confirmed by the polls, which largely tracked with Clinton’s and Sanders’s actual performance in the primaries. Unless the Clinton campaign managed to somehow sabotage both the vote and the polls without anybody noticing, they did not steal the primary.
As far as i can tell, the only evidence of the Clinton campaign doing anything like stealing the primary are emails showing DNC antipathy toward Sanders. This amounts to… what precisely? Psychic anti-Bernie rays? There is no evidence that this materially swung the election from Sanders to Clinton, or indeed that the DNC even tried to do such a thing. This whole narrative stands on nothing more than bad vibes.
Maybe the real reason we wound up with Trump is that Sanders damaged the party front runner by refusing to concede when it was obvious he’d already lost. This is not something i really believe, but it sure as hell sounds way more plausible than, “Clinton stole the primary!”
I’d bet five coin tosses in a row Clinton stole the primary. Could use a new tin foil hat, this one’s not working.
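As a side note on what that stake implies (a toy calculation, assuming fair, independent tosses): being willing to bet at odds equivalent to calling five coin tosses in a row corresponds to roughly the 97% confidence figure that comes up downthread.

```python
# Winning a bet equivalent to calling five fair coin tosses in a row
# means you only expect to lose if all five independent 50/50 events
# go against you.
p_lose = (1 / 2) ** 5        # 1/32 = 0.03125
confidence = 1 - p_lose      # 31/32 = 0.96875, i.e. roughly 97%
```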
She didn’t steal the primary from Sanders, though. She maneuvered to exclude all other serious contenders beforehand. And of course she stole the Republican primary for Trump. (OK, that overstates the case by a lot, but search for [pied piper candidate])
Nope, totally stole the primary. She pulled the sleazy political tricks of “being more electable,” “bothering to invest in party relationships beforehand,” “having policy positions that didn’t alienate non-socialists.”
It is super sketchy to say that an email outlining a strategy whereby the dems would try to force the republicans further right during the primary than would be acceptable in a general election is “stealing the primary for Trump.”
If you remember 2015 and 2016, you’ll recall that pretty much every talking head was saying this same thing, and that in the months and weeks before the republican convention, the biggest fear on the left was that an establishment republican would manage to coopt the process and install Rubio or Cruz as the candidate instead of the (at the time) clearly unelectable Trump.
I think there’s probably something to the idea that the Clintons were privately avoiding harm to the Trump campaign, and even significantly supporting it as their perceived least-electable opponent. There was a headline that Trump had received a phone call from Bill Clinton, encouraging him to run. Now, looking back for good reporting… Google gives me this link. Anyway, immediately after seeing some reporting on this story (so, around August 2015), and straight through until Trump’s election, I was joking/soap-boxing privately with friends that Hillary and Bill would live to regret 2015 as the worst mistake of their lives, when (in the supposed service of the greater good) they lent their influence to get the most repugnant person available to run for the presidency as an R, and to get that person the nomination, and then saw that person use the momentum from that win, and Hillary’s own history of dancing-with-devils-for-the-greater-good, to defeat Hillary in the general and completely undo any sense the Clintons had that their net effect on history is positive. They sowed early support of an ethically monstrous narcissist, and, as the saying goes, as you sow, so shall ye reap. And now the joke’s on me, because while I was joking in a “joking but actually really serious” kind of way, my friends never seemed to get the “but actually really serious” half.
* I consider electoral politics something similar to war by other means, with religion and charity and compassion and institutions and etc., being the reason it thankfully isn’t conducted with actual violence [well, mostly not, in many countries, including the U.S., where I and my family and friends mostly live]. Consequently I can’t say “steal” and really mean the connotation–politics is rough and tumble. If it wasn’t violent or illegal, and you had a sincere and ethically-defensible belief that the compromises of principle that were involved were on-net good for the arc of history, good on you for doing it. That’s all that’s available to even the very best of us. The question of how to draw lines around what’s “ethically defensible” is, like most of politics, a hard problem.
So you say that Clinton got “caught when she stole the primary”, as if this was some incontrovertible fact. When i point out that you have no evidence of this assertion beyond DNC favouritism and the general sleaziness of the Clintons, you have the gall to go and say that you’re putting 97% odds on the Clintons stealing the primary. Fine, you believe what you want to believe, i just want to make it abundantly clear here that you’ve got no rational basis for that belief. You might as well claim that Trump got the Russians to steal the 2016 election for him, and Jeb Bush stole the 2000 election for his brother. Oh and George Bush did 9/11, the moon landing was staged, and LBJ put a second gunman on the grassy knoll.
I understand you to be saying both that it’s crazy to think Julian Assange was right and Clinton stole the primary, and that the Russia investigation against Trump is crazy. I think you are wrong about the first part.
I don’t think Assange’s allegations against the DNC and the Clintons are crazy. I don’t think Donna Brazile was talking about psychic anti-Bernie rays. I don’t trust the Washington Post to honestly report the Iowa coin tosses, or the rest of the election. I think the Clintons took a half-billion dollar bribe from Microsoft’s competitors to sic the Justice Department on Microsoft, founding the billion dollar slush foundation that bankrolls Hillary Clinton’s career. I think she is a habitual conspirator.
I could easily imagine Trump beginning his political career taking a massive bribe, and going on as he began. I could easily imagine Hillary Clinton starting out as a slumlord, moving on to carriage trade real estate, casinos, Mr America contestants waking alone on some cold hillside, poor bare forked things, shrivelled by a night of shame with the judge of the contest. Except she’d need some executive ability.
I think it’s odd of you to claim only obvious lunatics believe people of bad character steal high office. Creepy, sure. Crazy, no.
It is not in question that people of bad character attempt to steal high office, nor in question that the Clintons are shady, sleazy, and corrupt. The problem is that there is no evidence that the election was in fact stolen. First you have to establish that fact, then you can start pointing fingers at who did it. Right now, all you’ve got is that you didn’t like the result. That’s the crazy part, that you are asserting a crime happened and the Clintons did it based on nothing more than bad vibes.
Are they? The sheer amount of partisan investigation on them has revealed basically nothing concrete, and the vague insinuations it has revealed are exactly the sort of thing you’d get when investigating an honest person to this intensity.
Have you looked at the details of the cattle futures case? It was a long time ago and the amount involved, about a hundred thousand dollars, is small change by modern standards. But it’s hard to see any plausible interpretation of what happened other than a bribe to Bill Clinton disguised as speculative profits to his wife, arranged by a broker playing games with records of which trade was for which customer.
DavidFriedman makes a good point; here is the study that shows how staggeringly unlikely it was that Clinton got her money legitimately.
On top of that, you have her blatant violation of security laws in hosting almost all senior State Department business on her personal servers. A US Navy sailor is currently going to jail for having a couple of pictures on his phone; Clinton had tens of thousands of emails on her servers, hundreds of them with classified material, and that’s just what we know about in the tens of thousands of emails she didn’t delete despite them being under subpoena. That’s brazen obstruction of justice on top of the violations we know about.
@That’s the crazy part, you are asserting a crime happened and the Clintons did it based on nothing more than bad vibes.
‘I won Trump the election, obviously,’ says Julian Assange. Hillary Clinton blames Wikileaks for losing the election. Assange is a smart loose cannon.
@all you’ve got is that you didn’t like the result.
All I’ve got is Wikileaks and Donna Brazile. I wouldn’t have minded Sanders or Clinton winning the primary.
Donna Brazile’s claim is that the Clinton campaign was bankrolling the DNC. The emails released by Wikileaks could be interpreted as evidence of the DNC effectively being an arm of the Clinton campaign. Both of these things are corrupt, but neither amounts to Clinton stealing the primary. Sanders was still able to reach the voters and get his message out. His problem is they didn’t like it as much as Clinton’s, and a completely impartial DNC would not have changed this. Contrast this with the GOP’s anti-Trump bias doing nothing to keep him from winning the primary because he did have the voters.
@Both of these things are corrupt, but neither amounts to stealing the primary-
So all your foofaraw about tinfoil hattery, no rational basis, etc is over the difference between ‘bought the ref’ and ‘stole the contest’.
I think Clinton would have won the primary without suborning the judges, and I think she bent the judges and lost the general election because she got caught. That’s what Julian Assange says happened. Hillary Clinton says it only looks like that’s what happened because vast right wing conspiracy. Perhaps you’d say they are both tinfoil-hat loonies with no rational basis to speak. Simpler to assume that a politician whose career is based on a half-billion dollar bribe to suborn the Justice Department is telling another lie.
She didn’t suborn the ref, she suborned the sports association. There is a huge gulf between the NFL pulling for your team and your team stealing the Super Bowl. The polls show Democrat voters wanted Clinton, they got Clinton, nothing was stolen.
As for why Clinton lost the election, the answer is basically bad luck. Electoral-College-only victories are black swan events; you can’t plan for them. Nobody likes that narrative though, because people don’t like admitting that sometimes their lives are at the mercy of cosmic dice. About the only lesson to be drawn from 2016 by the Democrats is that alienating the rural white vote is risky due to population distribution. They can still win if they do it, but the EC margins become narrow enough to be susceptible to bad rolls.
@She didn’t suborn the ref, she suborned the sports association-
@As for why Clinton lost the election, the answer is basically bad luck-
With you there. I thought she’d win. Easy to overthink luck.
Still, I like my Wilson/Debs analogy a lot better than I bet you like ‘She didn’t suborn the ref, she suborned the sports association’ much less ‘the polls show’. Polling is an unbiased science now? And if I was a D party socialist candidate I’d be going for it.
Properly weighted aggregated polling seems to have a fairly low margin of error when compared to real world results, which i think makes it accurate enough to draw real world conclusions.
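As a minimal sketch of what “properly weighted aggregated polling” means in the simplest case, here is sample-size weighting (the poll numbers below are hypothetical, and real aggregators also adjust for recency and pollster house effects):

```python
import math

# Hypothetical polls: (share supporting the candidate, sample size).
polls = [(0.52, 900), (0.49, 1200), (0.51, 600)]

# Pool the polls, weighting each by its sample size.
total_n = sum(n for _, n in polls)
pooled = sum(p * n for p, n in polls) / total_n   # = 1362/2700 ~ 0.504

# 95% margin of error for the pooled sample, under the idealization
# that each poll is an independent simple random sample.
moe = 1.96 * math.sqrt(pooled * (1 - pooled) / total_n)
print(f"pooled estimate: {pooled:.3f} +/- {moe:.3f}")
```

Pooling shrinks the sampling margin of error relative to any single poll, which is the sense in which aggregation has a fairly low margin of error; systematic errors shared by all the polls, however, don’t shrink this way.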
How do you define success? If the other side is right in its mistake theory, then the policies the conflict theory faction is pushing are bad for it as well as for others.
Exactly. I believe in mistake theory, but my opponent believes in conflict theory so I have to deal with them in conflict terms. I can’t convince them, I just have to out-compete them.
I never consciously thought about it like this, and I think it’s a useful concept. I was already aware that many people care more about ideology than policy.
I think I’d consider myself a hard conflict theorist, in a way. I’d prefer to concern myself with mistake theory, but policy details are not as relevant as fixing the main ideology. E.g., I don’t think there is much value in discussing the pros&cons of increasing minimum wage if we can’t agree whether we want to help poor people in the first place.
Or to put it differently: the mistake theorists can only begin their work when the conflict theorists are done.
Everyone agrees that helping poor people is a good thing–the bottom half of the income distribution pays close to zero federal income tax and the Republican tax bill did not change that. People disagree about what policies help the poor, about how much they want to help the poor, and about what cost, in terms of other things they favor, they are willing to pay.
I’m a big believer in conflict theory as a descriptive theory of politics. I think you’re selling it a little short in this post by overemphasizing Marxist theory when it’s not strictly necessary.
Let’s pick a relatively clean example: the continuing copyright extensions to keep Mickey Mouse out of the public domain. If you try to understand the extensions through mistake theory you’re going to get very very confused, because the arguments will seem obviously nonsensical. However, if you use conflict theory and assume that Disney is wielding power over Congress, everything makes sense and you’ll accurately guess what will happen the next time copyright extension comes up.
Another example: American slavery. The intellectual and moral arguments against slavery were well developed at the time of the revolution, but it took almost a century for slavery to actually be abolished. All the evil and stupid arguments for slavery were downstream of the massive power slaveholders had, and their strong interest in maintaining that power. The only way to resolve the problem was to reduce the power of the slaveholders in the traditional way, by killing most of their young men and conquering their territory. Democracy was a ritualized alternative to the bloodshed, but the same fundamental power underlies them both. If you’re trying to make predictions about the world, the troops matter much more than the arguments.
So yes, I think for most political issues, both “sides” correctly understand that they will benefit from winning and lose from the other side winning. This can be complicated by complex alliances and radically different moral values, but if you dig deep enough you’ll find where the “winners” actually win. The Marxist version pairs this insight with a specific alliance (the working class) and a specific interest (the full value of the work produced), but you don’t need that to understand the idea and make accurate predictions.
I don’t know if you have seen, but copyright will probably not be extended again in the near future. Mostly because the anti-copyright side is now much stronger politically than it was in 1998. https://arstechnica.com/tech-policy/2018/01/hollywood-says-its-not-planning-another-copyright-extension-push/
That seems to support a conflict theory way of looking at the issue. It’s not that the arguments against a copyright extension have gotten better, it’s that the opponents of a copyright extension have gotten stronger.
I’ve seen this mentioned in the news, but I still don’t understand why. Just how far in advance do you think you can predict American politics? The first time that not extending copyrights in the U. S. would have significant effects will be in 2023, which is five years from now, and definitely after the next presidential election. And that’s not even a hard deadline, because it has happened at least once that U. S. copyright law was modified in such a way that some works whose copyright protection had previously expired in the U. S. became protected again (in 1994, see “https://en.wikipedia.org/wiki/Uruguay_Round_Agreements_Act”).
> The only way to resolve the problem was to reduce the power of the slaveholders in the traditional way, by killing most of their young men and conquering their territory.
Note that that was only necessary precisely because the slaveholders consciously defined themselves as such and interpreted any argument attempting to find a mistake in their presentation of the rationality or theology of the case as an attack.
Most countries were able to get away with abolition via compensation because both sides were still operating in mistake mode.
I wish I had the time to answer this with a thousand words, but I’ll just say this:
There’s a thousand caveats to that, where I could point out that people should use both theories to become a true Gray Jedi of political theory, but I think that wouldn’t address your point.
I think both a Pure Mistake Theorist and a Pure Conflict Theorist will (probably, I don’t know the specifics) look at the “Disney meddles with public domain” issue and say “Disney is defecting in the cosmic Prisoner’s Dilemma”.
The fundamental difference isn’t that Mistake Theory says “We should always cooperate no matter what”. It’s more subtle and nuanced than that, where Conflict Theory focuses more on the conflict part, finding targets, gathering troops, while mistake theory focuses more on making defecting hard and cooperating easier, making honest mistakes harder, etc.
An example is the Net Neutrality debate. We all want cheaper, faster internet. Some people think that, since ISPs want to make more money, the way to go is to make it illegal for them to make certain things more expensive. I think this is inefficient and/or counter-productive, and the way to go would be to do what France did and go for aggressive local-loop unbundling, to encourage competition. No specific ISP would be punished or forbidden to do things, and people wouldn’t have to rely on ISPs somehow not being evil, but market rules would lower the prices nonetheless (again, see the French market for details).
This sounds like a prediction. If Steamboat Willie enters the public domain, will you consider that evidence against your current model? Or just evidence that Disney wasn’t as strong as you thought?
Put another way: given your current model, how weak would Disney’s influence have to be to produce this result, and how (else) could we gauge this?
Even the Civil War isn’t that clear a case. The war imposed enormous costs on both sides. If the outcome had been accurately predicted by both sides they could have saved a lot of blood and treasure, all been better off, with some compromise, perhaps along the lines of what was done in the British West Indies.
So the decision of the Confederate states to go to war was, ex post, a mistake, as was the failure of the Union to offer them more attractive terms for abolition.
Under the 1909 act, the copyright would have expired in 1984. The 1976 act extended protection for Mickey but it did so as a result of bringing U.S. practice into alignment with European practice. The Sonny Bono copyright term extension act of 1998 was the first change in the law that one could plausibly describe as designed to keep Mickey Mouse out of the public domain.
Also, so far, the last.
How could the Union have offered more attractive terms, when the South seceded before any terms were given? The Civil War happened because the South did not even want to discuss the matter, and on realizing it could no longer be avoided, they chose to draw blades rather than come to the table.
It seems to me they both have a nugget of truth to them. I don’t think that the Koch bros or George Soros go home at night twirling their mustaches and petting a white cat while thinking about how they can get more rich and powerful. I think they developed a world view based on their own life experiences, think that gives them some kind of special insight on how to run the economy, and believe their donations are purely altruistic to help people. I’m also 100% sure that is the worst possible thing they could be spending money on.
As for the happy technocrats who just want the best policy: I’m sure that is why during the ACA fight we had a long discussion about how single payer works twice as well for half the cost in every other country… Oh wait, nobody even mentioned it, because our right-wing president Obama just pulled something off the shelf at Heritage and then dangled a public option before pulling that back.
Technocrats like Obama who designed programs to use poor home owners to ‘Foam the Runway’ for the big banks.
Oh, wait. They probably weren’t very good technocrats anyways since Obama let Citibank staff his cabinet just after they crashed the economy.
IMO technocrats fall into the “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” category.
I remember someone in a Magic: The Gathering discussion who said roughly “If your personality can be summarized with a set of colors on the Personality Color Wheel, then you suck and your personality is really boring and flat”.
The same applies here. Nobody (worth talking to) is only ever on one side of any given axis.
U.S. health care is indeed unusually expensive, but the alternatives don’t work twice as well–on measures of quality of health care the U.S. does reasonably well.
And single payer doesn’t describe health care in every other country. Other countries have a range of policies involving various mixes of public and private provision.
I’ll cop to being slightly hyperbolic, but every other major country has universal coverage at a much more affordable rate, which plenty of people just shorthand to single payer. We are the only industrialized country that doesn’t treat health care as a human right. The result is that people put off going to the doctor because they can’t afford it, turning an easily preventable case into a costly corrective procedure. It is absolutely insane the way we do it, and it is 100% due to technocrats’ willful ignorance on the subject because they, or whichever party they are propagandizing for, are being funded by insurance and pharma companies who want to maintain their profits. Then our famously free press does its best to present each party’s propaganda, ignoring the obvious solution from the rest of the world. Because the elites in this country have nothing but disdain for the rest of us and are quite content to see differences of 10 to 15 years in life expectancy between the richest and the poorest.
The obvious question is what happens when we control that number for the various risks that low-income people have. I believe that smoking, alcoholism and obesity are all inversely correlated with income. So, how much of the reduced life span among our low-income population is down to those, and how much is because they’re being denied life-saving treatment by heartless elites?
In the UK (with the NHS), rich men live 8.2 years longer than poor ones. So 15 − 8.2 = 6.8 years of the gap come from not having single payer.
For women it’s 5.8, so 10 − 5.8 = 4.2 years less for not having single payer.
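A minimal sketch of that back-of-envelope arithmetic, taking the quoted gap figures at face value (they are the commenters' numbers, not verified data, and the function name is just illustrative):

```python
# Naive residual: subtract the UK (NHS) rich-poor life-expectancy gap from
# the US gap, attributing whatever is left to the lack of universal coverage.
# Inputs are the figures quoted in the comments above, not verified data.

def residual_gap(us_gap_years: float, uk_gap_years: float) -> float:
    """Years of the US rich-poor gap not explained by the UK baseline."""
    return round(us_gap_years - uk_gap_years, 1)

men_residual = residual_gap(15, 8.2)    # 6.8 years
women_residual = residual_gap(10, 5.8)  # 4.2 years
```

Of course, as the replies point out, this residual only means anything if the non-institutional risk factors are comparable between the two countries.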
That’s some extremely sketchy math. At minimum you’d need to make sure that the non-institutional risk factors for poverty are constant between the two countries before you can make that kind of comparison.
Is it a perfect comparison? No. Is it good enough to make the point that the lack of universal coverage in this country is killing poor people? Absolutely.
I suspect you underestimate how good your comparison has to be to convince people who aren’t already convinced.
It seems clear to me that some political problems are mistakes and some are conflicts. The issue here is really about the meta-level: when theorists disagree about which problems are mistakes and which are conflicts, are those disagreements themselves mistakes or conflicts? It seems to me that either disagreements about theory are mistakes, or we’re doomed to epistemological nihilism in which “truth”, “science”, etc are nothing more than rhetorical weapons. If being a Marxist means having a conflict theory of political *theory*, as opposed to just a conflict theory of politics, then perhaps Marxists are scarce here simply because they deny the relevance of the conversation you’re trying to have. Such meta-Marxists can’t be reasoned with because they reject reason itself. Whether that rejection is a mistake or a conflict may be left as an exercise for the reader.
Yes, I agree. There are a lot of conflicts, and some groups have different interests.
I am an Israeli, and there are a lot of examples of conflicts in the Israeli context – Jew/Arab is the main one, where a large group of Israeli Arabs have a conflicted national identity.
But not only that – the religious settler movement also has very different interests from mainstream Jewish society, and would spend lots of resources to achieve ends that are not in line with what most people believe.
Or Ultra-Orthodox society, which wants to keep state support for religious practices and for its own group.
I think treating society as homogeneous leads to mistake theory, but when distinct groups arise, conflict theorists come to the fore.
There are also the demonstrably true facts that:
a. People tend to find mistake-theory type arguments that align with their interests or beliefs a lot more convincing than ones that contradict their interests or beliefs.
b. People with strong beliefs/interests on one side of an issue often fund the thinkers/writers/researchers who are arguing their side of the mistake-theory debate. Sometimes this just means finding sympathetic people and giving them money; other times it’s more like intellectual hired guns.
c. Powerful people can and do get some mistake-theory-type arguments excluded from the public sphere. (Think of someone like Charles Murray, who’s operating almost entirely in the realm of mistake theory.)
All of these are things that play very well with a smart conflict theorist’s worldview–sure, sometimes there are genuine differences of opinion about the best policies, but those conflicts routinely have powerful interests putting a heavy thumb on the scales to make sure the outcomes of the debates favor their interests.
Mistake theorist: there can be multiple reasons that …
Conflict theorist interrupts: MANSPLAINER!
I don’t know if I approve of this*, but it made me laugh.
Conflict Theorists are gonna give us flak for this…
(OK, I laughed too)
It seems like the best argument for conflict theory is other conflict theorists. You don’t have to look too closely at the rhetoric of Donald Trump to see that he and his supporters view the world from a zero-sum conflict-theory standpoint. So how do you respond to that? They aren’t going to be any more receptive to how they might be mistaken than the Marxists are. Rather, you need to rely on building a mobilized opposition, and work to divide his base, if you wish to successfully oppose any of his aims.
Put another way, perhaps even if one is inclined more toward the mistake-theorist worldview, it can be necessary to adopt the conflict-theorist one for dealing with other conflict theorists? Are they even separate worldviews, or just a reflection of one’s level of social trust? I would think the appropriate level of social trust is one proportionate to the degree of social trustworthiness.
By the way were you thinking of another way to frame the post-modernism post when you wrote this? It would seem like, by your definition anyway, the post-modernist view would correspond with the conflict theorist one, just seen from a different angle.
It’s a little more complex than that.
Mistake/conflict is an important dichotomy and identifies two orthogonal vectors in idea-space.
The problem facing someone who looks at the world purely in terms of conflict is that if they want to win and acquire power, they do need to engage with the world as it is and not how they wish it were.
The problem facing someone who looks at the world purely in terms of mistakes is that boy! do your ideological opponents often make mistakes that benefit themselves personally! Isn’t it funny how that works out?
Seen this way, Donald Trump and ‘Trumpism’ is more of a synthesis of the conflict/mistake thesis/antithesis. It is a combination of the low-tax, low-regulation policies that have always been advocated by the intellectual Right with “lock her up” and “drain the swamp” and “you have to go back.” The kind of us-and-them fighting-talk that hasn’t been seen at the forefront of the Right for a *long* time.
I have noticed this. It does not appear to be exclusive to my ideological opponents.
It’s still useful to target the mistakes, though, even if people are self-deceiving. Your enemies depend on the lies they tell themselves. Every piece of muddled thinking is a strategic weakness.
Yudkowsky alluded to this here. (I am a fan of the school that says you can quote someone approvingly without that suggesting you actually like them. That being said, I do actually like Yudkowsky. Quite a lot, actually.)
My point is that mistake theorists don’t realise that sometimes the people who disagree with them are not self-deceiving. Not really. They say they believe in the invisible dragon, they believe that they believe in the invisible dragon, but all their predictions about reality are as if they did not really believe this.
They act as if they believe in the invisible dragon when that belief benefits them, and not when it doesn’t, but if you ask them if they believe in the invisible dragon they will say “oh of course I believe in the invisible dragon.”
While we’re quoting people we don’t necessarily approve of, proto-alt-right Lawrence Auster coined the phrase “unprincipled exception” to describe this behaviour on the Left.
Edit: I also see Scott has used this phrase.
I often forget about belief-in-belief. I need to watch for that blind spot.
I had the same thought after reading it. There appears to be a lot of melding between the conflict and the mistake paradigm. In particular I’m thinking about Communist Russia and FDR and the new deal. In both cases they figured if you were just smart enough you could fix the state with technocrats, but in both cases they had a fairly conflict driven ideology.
Except there is no rigorous reason to believe that the New Deal fixed anything, and Communist Russia sure as shit didn’t.
No one is saying the melding worked. Just that it was an example of the melding of the two theories.
Hmm.. where have I seen this before? Oh right:
I’ve seen this before:
So… is that a good thing or a bad thing?
They eat babies, so the whole discussion is pointless; peace or war, their regular life cycle is far bloodier than their conflicts.
I’ve read the story (and it was pretty good).
I meant, if I’m a mistake theorist, does the quote imply that I should be
a. Wrestling with my previously-unconscious bias that predisposes me to see conflict theorists as the enemy
b. Wrestling with my previously-held heuristic that seeing people as the enemy is unproductive at best and disastrous at worst?
Is it possible that current Trump supporters (and Trump himself) started as mistake theorists on illegal immigration, and after 30 years of making coherent arguments against lax enforcement of immigration laws on moral, practical, and economic grounds realized they were not dealing with mistake theorists in an illegal immigration debate, but with conflict theorists in a demographic war, and responded?
Which came first, “open borders” and “no person is illegal” or “you have to go back?”
To the extent that it is possible to evaluate where mistake theorists stand by looking at expert opinion, the majority of evidence seems to fall on the pro-immigration side. Vox has a good summary. (In particular, I would draw your attention to the “Immigration-skeptical experts are rare and eccentric” section — but read the whole thing.) Noah Smith has a bunch of links in this Twitter thread.
If Trump supporters were disillusioned mistake theorists, I would expect them to have engaged with these arguments and come up with satisfactory replies. Where is the evidence of that engagement? I agree that there are conflict theorists on both sides of the immigration debate, but as far as I can tell the mistake theorists are heavily clustered on one side.
Click on lengthy Vox article. Control-F “illegal.” Two matches. Hmmmm….
I too believe in totally eliminating almost all illegal immigration … by legalizing it.
There are some synonyms for ‘illegal’ also used.
jhertzlinger, should they be given the vote? Why would you want people from a completely different culture, with a completely different understanding of civics from typical Americans to have political power alongside you?
Would your answer to this question change if the illegals were evangelical Christians highly likely to vote for Republicans, instead of impoverished minorities highly likely to vote for Democrats?
I’d rather have Mexicans voting in California than the current crop of Californians.
BTW, I am not any kind of leftist. If anything, I would prefer the votes of evangelical Christians.
The article was a glorified listicle that didn’t steelman any of the arguments for keeping our border with Mexico porous. It didn’t address the fact that poor immigrants consume more in taxpayer benefits than they provide. Let’s step outside the US. Take a look at the Somalian unemployment rate in Sweden. Last I checked it was somewhere above 90% [edit: 75%]. While I am sure the Somalians themselves are better off, do you think the Somalians are a net benefit for the native Swedes economically?
It would be a “net benefit to happiness” if I let homeless people sleep in my home, but I’m not going to do it. It would drastically change the culture of my home and incur social and economic costs for me. Trump’s approach to immigration is this attitude at a nation-level. Is immigration a privilege which we grant to those we deem worthy, or a right to those who twist the door handle and make themselves comfortable?
EDIT: To further elaborate because I cannot reply to the two intelligent responses below. I used the Swedish example to point out that “economic benefit” seems to be more of a Motte-and-Bailey argument than a core argument. In other words, if these illegal immigrants were not a net economic benefit, the response on the Left would be the same. To the extent there is an economic benefit from these illegal immigrants, I believe that is why the moderate Republicans in Congress steadfastly refuse to enforce immigration laws even when they are elected to do so.
I believe Trumpism’s argument against illegal immigration is a triad, one of the principal reasons he won the primary and was elected in the first place, and not addressed well in the Vox article: 1) it harms those at the bottom of the American economic ladder the most, by putting them in direct competition with illegal immigrant labor willing to work for less, 2) immigration without assimilation will balkanize and weaken the United States as a whole, and 3) the rate of immigration (legal + illegal) is very, very high compared to other countries and the consequences are unknown. So let’s enforce the laws on the books while we still can, until we figure out just what those consequences are.
First, it’s not 90%, though the unemployment rate for Somalis in Sweden is pretty shocking. But how is it relevant to how porous the Mexican border with the U.S. should be? The unemployment rate of Mexicans in the US is barely higher than the total unemployment rate, so it would appear that either Mexicans and Somalis are different, or the US and Sweden are different, or perhaps both.
The issue here is that the model treating your nation as your house, entails treating other people’s houses as part of your house.
The debate is not over whether immigrants are allowed to live in your house, but whether they’re allowed to be invited into other peoples’ houses, or sold houses, or sold land, by those other people.
If the answer is yes, those people get control of their own property. If the answer is no, their property is getting controlled by other people, who presume they own it because it’s in the country they live in.
I can see it as being reasonable to not want consequentialism used to tell you who to let in your house, but it seems reasonable for it to tell you that you’re not allowed to control other people’s houses.
Answer to first question: No.
Blaming problems on newcomers appears to be one of the commonest failure modes in human thinking. This applies to opponents of relaxed zoning laws, opponents of gentrification, and opponents of colonialism. Maybe it also applies here.
In a related story, I’ve become more reluctant to blame Trump on Democrats newly converted to conservatism.
So would the Native Americans have been wrong to blame the Settlers for their problems? Or those that were brutally colonized wrong to blame the colonialists? Please elaborate.
Maybe we can resume this discussion at the next open thread.
Registered to say that I’m definitely in the “Hard Conflict Theory” classification, and it always seemed obvious to me.
I come here for Scott’s posts, which are usually extremely high-quality though with certain blind spots sometimes, but sure, everybody has them. The ability to make what would normally be rather dry medical literature into an extraordinarily engaging read is really an amazing talent and so I read pretty much every main post that comes up.
Sometimes, I read the comments, which, you know, is something you should never do, but hey, he moderates the blog and people are generally very civil! I don’t read them a lot, though, so I might not have gotten the best read.

Still, in the past I just always found it a little weird that the readers and commenters of this blog would talk about incentive structures, but didn’t seem to apply that logic to power and wealth in our current, real-life society, and I really *did* attribute it to “These people are closer in the hierarchy to the rich and powerful, and are therefore incentivized to be opposed to any radical changes to the status quo.” Not in a malice sense, more in a “Job depends on it and therefore won’t question it” sense, but, heh, I felt that it was there.

That might be a little mean to y’all here in the comments, but it’s sort of an important point, right? Because y’all seem to make such a big deal of how smart and how well-considered you try to be, and all that, and I’m in the column of “This is super obvious, and furthermore, anybody who is posting here, where high IQ is regularly mentioned as being both a real fact and a point of pride, who doesn’t see it must be lying, which makes them either silently complicit, or an enemy.”
That makes me sound a lot more aggressive than I am in person! But it has been something I’ve thought of when reading the comments. And…while I appreciate that some of the folks here might really just be people who assume good faith all the time, the post hasn’t really disabused me of the feeling that many people, here and elsewhere, are more intentional about that. I don’t know if that’s both necessary and true, so, ya know, this can be my one and only post on here if you’d like.
I’m getting the feeling that the commentariat is dismissing conflict theory a bit rashly because it sounds like a vindication of those mean SJW bullies.
A lot of people are conflating descriptive conflict theory — “interests conflict, and conflicts of interest drive many things” — and normative conflict theory — “My Side Must Win!”
Oh, that makes sense, nicely spotted.
Yes, thank you, I can’t tell which one Scott wants to talk about.
That’s exactly what struck me as wrong about this post. With a more neutral description of conflict theory, there doesn’t really turn out to be much of a dichotomy. It’s trivially easy to be both a conflict theorist and a mistake theorist.
There’s also a gene’s-eye normative conflict theory: “I must win by being as close as possible to the center of the winning side. Since everyone will race to the bottom and all information about past defection and cooperation behavior will be erased, the winning side will be the largest recognizable kin group or the most numerous green-beard. If I’m able to defect freely, without record being kept, from close to the center of the winning side, I can put many copies of my genes into the gene pool. And even if future generations are much smaller than the present generation, due to population collapse from inability to coordinate action on a large scale, my fitness can be high.”
We can predict that genes for implicitly holding this theory, in contexts where conflict theory seems like a good description of the situation, will have risen in fitness in many times and places.
It’s also a vindication of Trump’s approach to those same mean SJW bullies. Both Trump and SJWs are looked on rather poorly here, but I have to admit Trump’s approach seems to be fighting them more effectively than careful rhetoric ever has.
He makes a lot of them unhappy. I’m guessing others are thrilled with him.
(Speaking of ‘SJW Bullies’ which != SJ people in general, because, conjunction if nothing else).
Unless I’ve missed something, Trump hasn’t done anything to take the wind out of their sails, though. Kind of the opposite. This does not seem like a good thing.
I just wanted to say I really like that you are sharing this.
I’d like to hear more about this, if you’d care to elaborate.
The point is your political philosophy is DANGEROUS to me. Scott is way way way too kind to Marxism in this piece. And you’re providing an example of it.
You state directly that anyone not agreeing with you is complicit with the bourgeoisie, and therefore an enemy. You understand how aggressive this makes you sound. That’s because Conflict Theory leaves only one “solution” for fixing that problem, which is destroying your enemy!
This is why Communism killed 100 million people in the 20th century!!!
I vehemently disagree with the idea that this blog should be “more fair” to Marxists. If anything it’s been too kind.
Marxist theory is utter complete deadly bullshit. Its final solution is always mass murder. It can’t escape from that solution or come up with any other solution.
My vote is to discard it completely. If Conflict Theory wants to be taken seriously, the idea that “anybody who is posting here, where High IQ is regularly mentioned as being both a real fact and a point of pride who doesn’t see it must be lying, which makes them either silently complicit, or an enemy.” MUST be taken off the table!
Because if we’re talking about good governance and your solution is to kill/imprison/subjugate everyone who disagrees with you about what good governance is, you need to step aside and let the adults govern.
>Sometimes, I read the comments, which, you know, is something you should never do
I really strongly disagree with this sentiment. You should always read the comments on an article, because they will (usually) give a rebuttal to it. Yes, that rebuttal has a good chance of being passionate, polemic, and/or profane, but it will usually point out at least some weaknesses in the author’s points. In fact, there seems to be a pattern that writers who say comments are terrible, or who close comment sections, tend to also be the ones who write outrage bait with bad epistemology.
I guess there is no way of saying what you are trying to say without sounding rude; in my opinion, you did a good job and didn’t sound obnoxious at all.
Maybe it will sound like I’m trying to play gotcha, but here I go:
You should be more specific. What is it that folks here are failing to discuss in terms of incentive structures, and why is it a problem? Bear in mind that this blog can’t cover every possible topic, and that many of these social topics are complicated and may lead people operating under the same Mistake Theory paradigm to different conclusions.
Somewhat paradoxically, it looks like you have a higher opinion of us than most of the commenters themselves do. The general IQ and compassion may be high here, but we are not immune to error and disinformation. Even the smartest philosophers among the Greeks believed things later proved wrong. Maybe we really don’t get what is obvious to you because of different info, background, education, etc.
Most, if not all, of the “incentive problems” that are ascribed to the state can also be ascribed to private property. This is even more so, in a world where private ownership becomes the highest entity in the social hierarchy.
The Jacobite blog post is pretty blatant in this mistake. It’s not that the initial Jacobin post was ignoring the principal-agent problem (go back and read it if you don’t believe me!); it’s that it was arguing that shifting the balance of power from the state to property doesn’t solve the problem. The Jacobite mistook (?) this as saying the Leftist was uninterested in the problem, going on to declare Marxists “uninterested in theory.”
except they are orders of magnitude smaller, because the institutions in question are orders of magnitude smaller, and unlike with the state, the people who let them fester suffer direct, personal loss, not generalized societal loss, so the problem is more visible and the people creating it have more incentive to fix it.
This looks like the brewing of a bad dichotomy, but I’m not sure what you mean enough to critique it.
One reason property holdings remain small is that the highest level of social authority, and the one responsible for defense, is the state. If you took that position away, I don’t think private property holdings would remain small for long, as they would begin to fill that role.
And anyway, it’s not size that matters so much as whether the power is autocratic or democratic. Say what you will about democracy’s bad incentives: autocracy is worse, and has a bloodier history to prove it.
Competitive dictatorship, which is how we run hotels and restaurants, on the other hand, has much better incentives. I have no vote on what is on the menu in your restaurant, but an absolute vote on which restaurant I eat at.
Is that what you mean by autocracy? It’s the main control mechanism under private property.
@Guy in TN:
I wrote a few paragraphs suggesting that families with children tend to be (very small) autocracies, and most people don’t mind them much, but rereading it it seemed needlessly snarky.
I agree that we want to avoid violence; violence is bad. And I’m not actually a fan of autocracy (between you and me, I honestly have my doubts about the way we treat children).
But I think most people do actually have an intuition that size is pretty important. I’d rather be metaphorically enslaved by a corporation I could leave if I ever really really wanted to than literally enslaved by a government where the only hope of release was death.
Informally, I have the sense that both the government and the market are large, clunky machines that sometimes break down or go haywire in ways that result in enormous human cost. There are certainly a lot of ways in which they mirror each other. But my current sense is that corporate breakdown is “safer”: not in the sense that it doesn’t matter, but in terms of scale.
The Financial Crisis was bad.
The Holocaust was worse.
King Leopold II’s reign in the Congo was pretty terrible, and I’ve heard some people blame capitalism for that, but I feel like it’s not a coincidence that we’re talking about a King with a government to sic on people who resist, so I’m reluctant to pin that one entirely on the market.
But the history I know is only a tiny fraction of the history there is. Can you give me an example of holocaust-level failure resulting from a haywire market? Not a case where a market failed to save people, but one where it actually killed them?
It’s the same problem we discussed a few weeks ago. Looking at the relative non-violence of, say, the government of a random county in Michigan, and thinking that if we just vest the highest level of power with these folks, world peace would be assured.
Yes, private property is an example of autocratic control, at least in theory. In practice, it is tempered by the higher, democratic control of state power. It works okay when it is controlled like this; it’s our current status quo. But changing to a system of absolute, undemocratic authority would be very different from the current system.
Not unrelated: Have you thought about the reason that actually-existing rights enforcement agencies choose not to cooperate with each other? It seems to be that once you are at the top, you lose your incentive to be competitive.
Your family argument isn’t bad, honestly. In a scenario with a sharp divergence in mental capacities (such as parent/toddler), an autocratic situation is better than a democratic, at least to a certain point.
You won’t get any disagreement from me here: the failure or malevolence of corporations doesn’t hold a candle to the failures or malevolence of states, and it’s because of scale. If you change the scale of an entity, which includes not just its physical size but the power it exerts, then you change its influence. There are no examples of private entities bombing cities, because they are not at the top of the social hierarchy in our system. There are also no examples of county governments bombing cities, despite these governments being democratically controlled, non-market entities.
This is why arguments along the lines of “Look at all the bad things the state has done, let’s decrease its size, and in turn increase the size of private power, which up to this point hasn’t had failures at such a large scale” make no sense to me. Replacing the authority of the state with the authority of private property drastically increases the power of property, which increases its influence on world events.
Maybe to limit fallout from failure, we should have geographically smaller states? It’s a question worth examining, at least.
Right. Governments actually are property owners, are solely property owners (all of their powers derive from their ownership), and in the theoretical libertarian sense of property (allodial title) they are actually the only property owners.
The Norman Conquest reduced all allods in England to fiefs, and the state, or the crown, was the sole owner of England from that point. The crown charters that created the colonies never granted allodial title to “owners” since then, and the American Revolution did not alter their status — it merely gave them a vote in the organization that had become the new single allod.
Fee simple estates, colloquially known as “property,” are subsidiary agreements with the owner (allod) — the self-styled “owner” in the context of a modern state is a mere tenant. That is why he is not as dangerous as a government (i.e., true owner, allod) — though he can still be dangerous, insofar as the allod delegates to him the power to be dangerous.
Yes, I've often thought that, from the deontological perspective that motivates anarchocapitalism, we already live in an anarchocapitalist world, one in which all habitable land (and most of the uninhabitable land and even much of the sea) is the property of one of about 200 corporations (mostly nonprofits, mostly run on some more or less democratic membership model). That this isn't what anarchocapitalists really want shows the incoherence of their philosophy (or so I like to think), and the same thing goes for some on the antiauthoritarian left who think that member-run nonprofits are the perfect organizational model.
That said, it doesn't really argue against David Friedman's reasons for anarchocapitalism, which are pragmatic and consequentialist. Presumably (and hopefully he'll correct me if I'm wrong), he'd argue that it's a good idea for all of these organizations to run internally on a libertarian basis (and probably break apart due to diseconomies of scale, and with the successor organizations being less bound to obsolete membership criteria like geography), and perhaps the same at every level of organization, but that it would only make things worse if some higher authority had the power to compel this. And speaking of authority in the family, David is also known for having raised his children in a non-authoritarian way, so at least he's no hypocrite!
As an anarchist (a left-anarchist, not just an anarchocapitalist), I don’t accept the legitimacy of any authority, whether family, corporation, or state. But it also seems clear that in practice, size usually makes things worse (although I do appreciate the help that large organizations sometimes provide in limiting the authority of their smaller subsidiaries). This makes my short-term economic policy preferences unusually right-wing for a left-anarchist.
It’s funny, I’ve got it flipped from the way you see it: I think the standard deontological Rothbardian has a good case that we don’t live in a system that conforms to his values. Since the homestead principle isn’t incorporated into our legal property system, our system of property is built on a foundation of lies (to him).
In contrast, I think the “competing legal systems” style of ancap should focus on explaining why we aren’t currently living in the results of their desired system having already been brought into fruition. I mean, there are Rights Enforcement Agencies who (for the right price) will do your bidding. We just call them the “police”, the “military”, or the “mob” instead.
@Toby Bartels, @DavidFriedman
It does argue against what DavidFriedman is saying in this thread, specifically:
Because these hotels and restaurants are absolutely subject to our vote. That is why they cannot discriminate or segregate clientele on the basis of race. That is why they cannot pay a chambermaid less than the minimum wage (and why she can collect the difference by force if they do). They cannot serve as an example of autocracy, because their position is inferior to that of the actual owner (the state) who imposes on them these obligations (among many others… such as fire safety, food safety, etc.).
It is no defense of autocratic power for hotels, as contrasted with the autocratic power of the state, to say that hotels use their power to benefit society when they are given exactly as much power as the state deliberately chooses to give them.
(All this is just to reiterate the point originally made by Guy in TN.)
@ Guy in TN :
It is built on lies, since it lies about what is and isn't property. But the main result of that is confusion and irony, not economic injustice. If your reference to homesteading is because the governments claim huge tracts of virgin land, then I agree that you have a point, although I still doubt that it would have much effect on contemporary economics or politics if those were treated as international waters, as long as the state got to keep the valuable land being used for grazing and mining.
Of course, the way that the land was acquired in the first place was quite unjust. But ultimately, that’s true for almost any property whatsoever! I’ve seen right-libertarians argue that Coase's Theorem means that we don't need to worry about correcting past injustices as long as we start being libertarian from now on. So a deontological anarchocapitalist who accepts that argument should have very little to complain about, whereas one who rejects it should also be clamoring for slave reparations, renegotiating treaties with the Lakota, and maybe even (considering the name of the magazine whose article began this discussion) returning Great Britain and Ireland to the Stuarts. I don't know of any anarchocapitalists that take such positions, but if there are any, then I would like to meet them!
Any individual commenter knows that his comments have essentially no effect on the overall structure of the society. Insofar as we have a selfish interest it isn’t in comments that maintain a status quo we like, it’s in comments that make us seem smart, or interesting, or in other ways get us status here and, if we comment under our real names, elsewhere.
To make your argument work, you need something more like “we benefit from the status quo, we will feel guilty if we believe the status quo is radically unjust in our favor, we don’t like feeling guilty, so we have an incentive to believe that the status quo is about right.”
Most commenters here, left right or libertarian, don’t seem to believe the status quo is about right, although they differ in what is wrong with it. But I can’t think of any who appear to believe not only that it is wrong but that it is wrong in ways which largely benefit them.
This is the explicit belief of one group which is mostly absent here – SJWs, or at least SJWs who are white or male. It’s also true of wealthy Marxists, though in their case only implicitly, so they may believe otherwise.
Well, we probably just spill our brains most of the time — brains that are filled with beliefs for reasons having nothing to do with the effects of the immediate conversation here. We will naturally make our speech here consistent with our existing beliefs.
Feelings don’t have to enter into it at all. And certainly not feelings of guilt. Maybe conformity to power is mediated by emotion somehow (I guess most everything is somehow) but it’s more relevant to look at the social structure as the ultimate cause (if it is).
Consider someone who is an active duty service member in the US Army. This person has an incentive to believe, one way or another, in what the US Army is basically doing overall. Either (ideally) belief in the immediate mission, or else a more complex belief about why the mission itself doesn’t define the institution. Such beliefs will make it easier for him to perform his job, to fit in socially with his peers, and to relate to his superiors. (Attached feelings may be: enthusiasm, pride, affection and respect for peers and superiors, admiration for celebrated heroes and top leaders.)
If he comments here, he’s going to express those exact same beliefs, even though the incentive he has to believe them has nothing to do with influencing people here (or really with influencing anyone anywhere).
(He does have an incentive to put these beliefs forward to every person he meets socially in order to find out if there is any future social compatibility. Furthermore there is an incentive to put forward one’s self-image in social contexts that are not likely to develop into relationships because you gain information about how others respond to your projection of your self. Also, you just get some simple practice performing your self. You satisfy the expectations of others that you would put your self forward, which allows you to gain whatever social benefits they would confer upon you. None of this stuff is ordinarily conscious though, it is just natural social behavior.)
I’m just trying to put forward an account of what is likely really going on when people conform to the ideologies of the powerful. The whole business about “we will feel guilty if we believe the status quo is radically unjust in our favor” seems to be very detached from any actual understanding of human psychology. I think you just had this model where you needed an incentive and inserted “feelings” by default. The actual psychology of it is a bit more complicated and indirect.
Of course the US Army has one set of beliefs that it’s helpful to believe; Silicon Valley startups, big corporations, universities, churches, etc., have different sets. All of one’s connections to institutions and to society pull on one’s beliefs.
Oh, hey, there’s…a lot of replies to this.
I skimmed most of them, and read some! Unfortunately I don’t have either the spare time or the effort available to actually engage with the folks here (evident in that it’s now nearly two days after I wrote that and there’s a ten-paragraph conversation I haven’t read), so I apologize that I’m basically dropping that up there and more or less disappearing. If Scott hadn’t more or less asked for feedback on that I wouldn’t normally have commented.
I do appreciate that there’s only one person freaking out and accusing me of wanting to murder him. Philosophy aside, y’all seem very civil, and that’s a genuine compliment.
Most folk here, like most folk everywhere, don’t really know how to think rigorously about situations where good faith can’t be assumed, so, like everybody else, they either give up rigor or they give up good faith.
I have been trying for a long time to point to a third way, to advocate the extremely difficult path from initially assuming good faith and trying to think rigorously to assuming a mix of good and bad faith where bad faith preys on the principle of charity using distraction, disinformation, reflexive control, corruption, manipulation, fear, shame and evolutionary game theory.
It’s important that people understand that the toxoplasma of rage, e.g. bad faith, can be evolutionarily fit at the expense of its hosts.
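The claim above – that a bad-faith strategy can be evolutionarily fit even while making its host population worse off – can be illustrated with a toy replicator dynamic. This is purely a sketch: the strategy names and every payoff number below are made-up assumptions for illustration, not anything from the original comment.

```python
# Toy replicator dynamic: a "bad faith" strategy invading a population
# of "charitable" agents. Assumed payoffs (illustrative only):
#   charitable meets charitable: both gain 3  (productive discussion)
#   bad_faith meets charitable:  exploiter gains 5, host loses 1
#   bad_faith meets bad_faith:   both gain 0   (pure conflict)

def fitness(share_bad, offset=2.0):
    """Expected payoff of each strategy against a random partner.
    The offset shifts payoffs positive so replicator shares stay valid."""
    share_good = 1.0 - share_bad
    f_good = share_good * 3 + share_bad * (-1) + offset
    f_bad = share_good * 5 + share_bad * 0 + offset
    return f_good, f_bad

def step(share_bad):
    """One discrete replicator step: a strategy's share grows in
    proportion to its fitness relative to the population average."""
    f_good, f_bad = fitness(share_bad)
    avg = (1 - share_bad) * f_good + share_bad * f_bad
    return share_bad * f_bad / avg

def mean_payoff(share_bad):
    """Population-average payoff, without the offset."""
    f_good, f_bad = fitness(share_bad, offset=0.0)
    return (1 - share_bad) * f_good + share_bad * f_bad

share = 0.05          # bad faith starts as a small minority
history = [share]
for _ in range(60):
    share = step(share)
    history.append(share)
# Bad faith spreads toward fixation, even though the population's
# average payoff falls as it does so.
```

Under these assumed payoffs, bad faith is always the individually fitter strategy (it free-rides on charity), so it spreads; but a population of mostly bad-faith agents earns far less on average than a population of charitable ones. That is the sense in which it is “fit at the expense of its hosts.”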
That is a great analysis. And a corollary of the mistake theory/conflict theory dichotomy immediately sprang to my mind.
Mistake theory is a great tool for people whose main strength is their intelligence. It’s an intellectual strategy.
Conflict theory is a great tool for people whose main strength is their strength of will. It’s an emotional strategy.
You can clearly see this coming out in the two sides of the argument – the Jacobite article actually bothers to explain what public choice theory IS, on the grounds that if you’re making an intellectual argument it’s fairly key that people actually understand what you’re talking about. The people who wrote the Baffler argument don’t appear to care very much if their audience is – well – baffled. ‘Take it from us, these public choice people are just Bad And Wrong, and you know this because we’re the Good Guys and we’re telling you.’ It looks a little like an intellectual argument (because it’s written down at all in the first place) but its primary purpose is to stir up emotions.
Avoiding mistake theory arguments is rational for a conflict theorist, because the more intellectually skilled mistake theorists might be able to persuade them they’re wrong even if they’re not, using their uber rhetorical skills.
Avoiding conflict theory conflicts is rational for a mistake theorist, because after sufficient yelling and handwaving they’re likely to run out of emotional juice and just agree to whatever the conflict theorist wants in order to keep the peace.
Mistake theorists would probably like to think that they can educate conflict theorists into being mistake theorists instead, but my analysis above still holds, since even if the general level of smarts in the population is high, somebody has to be in the bottom 50 percent, and it’s in these people’s interests to move a dispute onto a ground that they have more of an advantage in.
I don’t know what people who are both dumb AND weak-willed do in this situation.
Go hide under the bed?
Or maybe just not talk about politics at all.
I think this model makes a lot of sense. Of note: if Conflict Theory attracts people who are intellectually disadvantaged, that probably doesn’t just mean dumb people. I imagine there are a lot of people with perfectly good intellectual ability who still find themselves mysteriously losing arguments even when they Know They Are Right. If your identity gets tangled up with a bad model, rational debate may be somewhat hazardous for you even if your intellectual abilities are generally good.
Conversely, if your identity gets tangled up with a good model, you can punch above your weight-class in rational debate.
People who are both dumb and weak-willed, or who recognize there is a risk they may run up against someone smarter or stronger willed than themselves, well, those people can join teams. Plus, other people are joining teams too, so even the smartest/toughest has to do that also, just to keep up. For some contests there may be a natural ceiling to effective team size, and for some contests that size may be very close to “one” for practical purposes. Even so, nearly nobody has a strong enough signal as to how smart/strong willed other people are to feel too confident that they’re sufficiently outside the mean to be the smartest/most willful of all 7 billion people on the planet, or even of all the 100k or so folks they might cross paths with in their life, particularly given there are incentives for people who are/are not extremely strong or extremely smart to disguise the fact. So everybody will at least kinda try to have a team for basically everything important.
… Which in turn means that if you convince someone to treat your arguments as something other than soldiers, you’ve convinced them you and they are on the same team, at least provisionally, at some level. And if you’re smarter or stronger-willed than they, and empathize with them (or appear to), perhaps they’ll experience that as “charisma” because everyone really wants to be on teams that are smart and strong willed.
Also, team memberships (religion, red-team/blue-team, something-something-gender-mumble-cough) with major alignment-type feelings can be expected to have systematic reach into one’s whole world view, and changing these “highest-order” alignments is the sort of thing that would completely destabilize nearly all one’s other team-memberships / relationships, much the way that falling in love tends to do.
Some people might try to avoid even being open to having beliefs at all on these highest-level areas, and take a strategy of “but not very serious about it” toward these highest-order alignments, and mostly aim to get through life as sports fans and sushi lovers, rather than as materialists-and-very-serious-about-it, or as be-fruitful-and-multiply-Christians-and-very-serious-about-it, or etc.
… and also means that institutions like “marriage” and “gender roles” might be a way to help people get through life without big zero-sum or even negative-sum power struggles. Which might be why, although I’m not a reactionary, I’m now going to point at SSC’s planet-sized-reactionary-ideas injection post, because I think what reaction is really about (and maybe exclusively about, although I definitely don’t understand most of it) is rejecting the kind of team-building from which revolutions are made.
(I should also point at this one: https://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/, and this response-to-the-response making the point that reaction is about rejecting revolution-style-team-building… https://nickbsteves.wordpress.com/2013/10/21/shots-across-the-bow/)
Which also might be why symptoms of everyone becoming unsure about their basic alignments (and the alignments of everyone else) can result in lots of shallow people who are emphatically “on the team” and eager to celebrate the team’s strengths, and there you go: fascism after hyperinflation.
Jeez I’m making no sense. Can anyone help me make sense of this jumble?
I got some of it. People are forming alliances to compensate for their individual weaknesses, letting them compete with other people for (I’m guessing power or status).
There are maybe different kinds of teams, and some social institutions (like marriage) also function to help build teams in the game.
The heart of Re@ction is maybe rejecting the sort of patchwork/identpol teams that tend to upset the social order.
Sometimes the signals about who’s on which team get scrambled, and everyone rushes to identify themselves either with their pre-existing teams or big powerful teams.
Team membership ties into worldview, which ties into self-perception, so things that mess with worldview and self-perception too drastically can strongly interfere with team-identification.
How’d I do?
It makes a lot of sense. And thinking of teams as the way we compensate for our own weaknesses also might help explain the growing hyperpartisanship over the last decade, in that the more we’re exposed to larger and larger numbers of people who oppose our opinions, the more we feel the need to build and maintain strong and unified teams to help us put forward our agenda, and to police those teams to ensure that everyone’s on board with every issue.
So, if I’m primarily concerned with, say, global warming, a deal where all the feminists, all the gay-rights activists, and all the ethnic minorities get on board with opposing global warming, in return for me policing my fellow environmentalists to make sure they all support feminism, gay rights and minority rights, might be very much worth my while.
If I were confident that I was often likely to be the smartest or most strong-willed person in the room, I could reject that deal and make up my own mind about feminism, gay rights, and minorities, or ignore them all completely in favour of focusing on environmental issues all the time. But the more we’re exposed to ever-increasing numbers of people in online spaces, the less likely it is that we won’t come across at least someone with better arguments or more passion than us. So more and more people feel the need for a strong unified team at their back.
@ Jack Lecter:
Yep. Exactly. Especially the bit about how when signals get scrambled, everyone has to rush. I can point at well-respected men who briefly and clumsily fell into “rush” mode during peak MeToo. It wasn’t supposed to be about men generally, but some older gents of my acquaintance missed that somehow, and got very defensive about manhood, with predictable toxoplasmosis style blow-ups.
@ Embry: Yep. Maybe even more than lots of people coming across lots of other people online, it’s lots of people having no friends other than the near strangers they hang out with online, or not even that. The “bowling alone” phenomenon comes to mind. And it’s not so much that most members of blue team or red team have concrete policy goals themselves, for which they extract concessions, as that they have concrete insecurities paved under by specific team identifications (e.g. “I’m of no value to anyone” is paved beneath “I’m a member of team do-something-about-the-climate.”) and sometimes end up doing other things (say, speaking in favor of minority protections) as part of getting to preserve their sense of membership on the team they care about.
So new question, then. Does this possibly relate to cost disease? Are things more expensive all the time because people are all the time becoming more reliant on mistake theory? Or because people are all the time becoming more reliant on conflict theory? Or because the mistake-theory people and the conflict-theory people are increasingly unable to communicate or compromise with each other at all?
One piece of (weak) evidence in favor of conflict theory: It’s easy to imagine The-Elephant-in-the-Brain-style claims that “[system] is really about [Conflict Theory category], not [Mistake Theory category]”. For example, “Education is really about social class, not teaching people things.” On the other hand, when I try to generate plausible claims of the reverse form (“[system] is really about [Mistake Theory category], not [Conflict Theory category]”) I come up empty.
I don’t know why that would be, if it’s accurate. My impression is that Mistake Theory explanations make better rationalizations and Conflict Theory explanations are more likely to need rationalizing, but I don’t have the terminology to explain why.
Maybe Hansonian conflict theory requires imagining one hidden motive, where Hansonian mistake theory works better if people make multiple mistakes?
Paranoia doesn’t really have to be taught – it wouldn’t surprise me if we had some dedicated hardware designed to detect the workings of hostile agency. On the other hand, the idea that other people are making mistakes that seem rational and/or invisible from their perspectives but which you can detect is famously counterintuitive. The research on hindsight bias (imo) suggests it can be hard to empathize with someone else’s errors because once you can see the mistakes you automatically correct for them, the same way your top-down system imposes order on bottom-up data.
Alternative explanation: mistake theory is always right. Therefore only conflict theory explanations need rationalization. The latter set is actually empty, because nothing is described by conflict theory in the first place. (The somewhat more charitable version of this is that when mistake theory is right, it is *obviously* right, so conflict theory rarely gets misapplied to it.)
On the other hand, I really think that sexism is a mistake theory issue that’s consistently presented as a conflict theory issue. I mean, just look at how much feminist rhetoric talks about how men and/or society “hate women”. This is obviously false in the general case, and yet somehow it has become a dominant narrative.
Claims about systems, sure. “This impersonal system is actually driven by personal conflict” has the form of a potentially interesting argument, whereas “this impersonal system is just a system” sounds empty.
The alternatives on the other side would be claims about people, in situations that initially look like conflicts. “Peer reviewers aren’t trying to humiliate you, they’re making sure your work stands up to scrutiny, because that’s how we know what to believe.”
The trick is: find a conflict theorist for whom you are on the side of righteousness on the object level, and then attempt to understand their position “the two of us should not try to engage with our common Mistaken Enemy”.
Perhaps it seems like this would be impossible, because:
But you’re overapplying the conflict/mistake analogy here. Mistakists can consistently think that literally anyone who disagrees with them is making a mistake, and the result will be interesting discussions. Conflictists, if they want to win the conflict, need to form alliances; the more competent ones will listen to you if you seem to share their main values and/or goals.
Theres the rub.
Mistake theorists think everyone who doesn’t agree with them is making a mistake and will try to convince them of it.
Conflict theorists think everyone who doesn’t agree with them is the enemy and will try to eliminate them.
It’s pretty easy to figure out which set of theorists in this case is far more dangerous than the other….
I’m a mistake theorist with conflict-theorist friends; empirically, they do not think I’m the enemy, nor have they shown any interest in eliminating me.
Hmm, I maybe should taboo Scott’s terms for clarity. I’m a person who thinks politics is, or at least should be, mostly a question of figuring out what policies would actually benefit people, which can be difficult and lead to counterintuitive results. I have friends who think politics is mostly a question of uniting to defeat the Bad People who are doing bad things. We all recognize that well-intentioned policies with bad consequences are a thing, and that bad actors are a thing; we just have different emphases. Although we disagree about many things, my friends don’t automatically consider me one of the Bad People.
Nice work. This is a good way of explaining that our relative emphases, conflict versus mistake, don’t determine which team we play for. Rather, they determine which tactics we emphasize.
However, certain goals preclude certain tactics. If my goal is for there to eventually be a sufficiently large coalition of people who all put the effort in to build pressure for action on climate change for the coalition to broadly succeed in forcing powerful special interests to leave trillions of dollars of fossil fuels in the ground… With all that that would entail to be successful on a global scale, particularly the *clarity of resolve* that would be required for the coalition to persevere, there may be certain kinds of tactics I simply cannot permit to my team members. (Deliberately sowing confusion is a good example. Violence is another. Anarchists aren’t generally welcome in the act-on-climate tent. Certainly they aren’t welcome as spokespeople.)
Coexistence between mistake theorists and conflict theorists is possible because at some level, M-theory and C-theory are solutions to different problems. M theory is an attempt to solve interpersonal problems within a shared Overton Window; C theory is an attempt to solve interpersonal problems when there is no shared Overton Window. In the first case, the M-theorist can take for granted that both she and her opponent have identical (or similar) conceptions of the “truth” or “good” against which the M-theorist is comparing her opponent’s position. The M-theorist’s arguments gain legitimacy by appealing to that shared set of values/intuitions/norms/etcetera. However, when there is no such shared set, appealing to some “objective” standard confers no legitimacy, and is as likely to confuse the issue as enlighten. In those circumstances, the C-theorist has the advantage. He avoids engaging with his opponent’s ideas – they don’t even have all that much of a common framework within which to compare them, and constructing one would be contentious, difficult, and draining – and skips straight to removing his opponent from the debate. Not only is this more effective when there is no shared Overton Window, but, if successfully executed, has the salutary effect of *creating* an Overton Window within which everyone on the C-theorist’s side can happily trundle about in M-theory mode.
Hm. @Schmendrick… I don’t think the overton window concept aligns with mistake-lens / conflict-lens so neatly.
Let’s use an example of a house party in the U.S., where a football game is currently playing, but the home-owners have been called away, and now the guests can do whatever they want with the TV.
Overton windows are about hypothesis space for not sounding crazy or hostile. At our house party, you can say “I’m not that into football”, and you might get people to agree to change the channel, and you will have stayed inside the overton window. You could say, “people who are into football just like seeing black men smash their brains out for money, and literally every football fan deserves to experience a game’s worth of getting hit by offensive linemen” and then change the channel yourself, and glare at the first person who voices a protest. I think it’s safe to say at most U.S. parties with a football game playing, that’d be well outside the overton window.
But now ask, at this party, is your view that everyone comes with a different, pre-baked and effectively unchangeable preference for what to watch at house parties (conflict theory) or do you think the people whose preferences differ from yours are probably mistaken, or else have information that shows *you* are mistaken about what is best to watch (mistake theory)? In the conflict-theory situation, while you may use words as tools, or even as soldiers (e.g. including identifying allies and building rapport with them, and verbally attacking enemies and sowing mistrust and confusion among them), you aren’t trying to learn anything about the merits of watching football, nor trying to get anyone else to learn anything about the merits of football. In the mistake situation, discussion could be effective at changing people’s preferences and building consensus, and the best use of your time might be to find the person who disagrees with you *most* and focus on making sense of their opinion. In the conflict situation, particularly if you don’t favor the status quo, waiting for consensus to emerge from discussion with those who most disagree with you is a recipe for *disaster*–just take action, and build allegiances where you can.
So here’s where your model seems to break down. All the people at the party have the *same* overton window, or very nearly (some of them might accidentally say something gauche, or deliberately be offensively ‘contrarian’, but my point is mostly everyone knows how to be polite). And if one of them says something outside the overton window, it may be a mistake-theory effort to shift the overton window to include the true answer, or it may be a conflict-theory effort to explode reasoned debate entirely.
I think it is more of a case-by-case type of situation than a this-or-that case.
I mean, sometimes politics is a mistake-theory situation – like when setting interest rates.
Sometimes it is a conflict – like when deciding on abortion, pro-choice/pro-life.
If you actually believe abortion is murder, the fact that it lowers crime won’t convince you.
And, by the same logic, you won’t be okay ‘killing babies’ even if the pregnancy was the result of rape or incest-rape. But some people are.
I can’t say I fully understand what goes through people’s heads here, but I don’t think it’s entirely immune to cost-benefit analysis.
Scott wrote about this here, but I can’t say he managed to totally dissolve my confusion on the matter.
It’s a contradiction of principles, yes, but not everyone is good at being consistent. I think the line of thought goes “Being a rape victim is awful, so I’m more willing to go along with what she thinks will reduce that awfulness.” Sympathy overrides principle. The higher-level rationalization is that this is an extremely exceptional case (which is true – a tiny fraction of abortions are the result of rape), and exceptional cases are exactly what exceptions are for.
Again, it’s not consistent with the “life begins at conception” theory of abortions, and it’s worth noting that there are a number of people who hold this position who don’t make a rape exception. But it’s also worth noting that abortion is already a hard problem that involves tradeoffs of sacred values (life vs liberty), and it’s not surprising that people break on principles when you make the situation even harder.
Or perhaps in extra simplified form, “Force woman to have pregnancy = bad ; Kill baby = Bad; Bad > bad; being rape victim = bad; Bad > bad + bad? Result unclear”
@eyeballfrog: I’m not criticizing them for this- just expressing confusion.
I intuitively understand having principles you think you should live up to and failing through weakness of will, or realizing your principles didn’t say what you thought they said, or having a hard time putting into words exactly what your principles are.
There’s something else going on here that definitely happens, is definitely normal, and seems more intellectually mysterious. As is often the case when you have a common occurrence which people can’t really explain, asking about it tends to elicit repeated claims that the occurrence does in fact happen (this isn’t what you were doing, but it’s happened enough times by now that I kind of expect it going in.)
I’m not saying I don’t do this (it’s hard to catch yourself, but probabilistically I doubt I’m immune to the effect), and I’m not saying people who do this are bad, and I’m not asking if people really do this. But I’d like to understand exactly how we’re pulling this off- are we somehow forgetting we had the principle in the first place? Dissociating from the event so it doesn’t trigger the subroutine that cares about the principle? Is it a communication issue, where the real principle was never exactly what people said it was, and it always had this exception built in but it just never came up before?
For that matter, I ought to ask- what’s a principle? I feel like I understand the word intuitively, but maybe I’m subtly misinterpreting it?
I remember being just extraordinarily grateful to Scott for writing that post. It’s one of the first posts of his that I read, and it was a couple years old when I found it, so I didn’t comment. But at the time I wanted to say, “Yes! Thank you! Feel free to think I’m completely wrong and should totally be a consequentialist, but thank you for believing that I’m not lying or being inconsistent. You’re the only person I’ve ever found who understands.”
@JustToSay: I’m sorry to bring it up- I don’t want to make you feel bad.
FWIW, I don’t think you’re lying. That’s something I would understand, and realistically I’d probably be a jerk about it, but I wouldn’t be asking questions or expressing confusion.
I can’t say I understand, but I’m not trying to round you off to something I can.
In retrospect, this topic turned out to be a little more trigger-y than I’d thought through. Apologies.
If you think of abortion as war, some people might think of abortion in case of rape as a just war.
Seems like a fair description of normie politics. Not so much anything out of the beaten path, like Death Eaterism, which seems orthogonal to the dichotomy.
Moldbug’s tone, at least in the early essays, is very mistake-theoretic: you, the reader, have some misconceptions which he’s going to clear up for you. It’s condescending and kind of bitter, but a long way from entreaties to cleanse with fire and sword.
His proposed solutions, likewise, are technical, engineering-oriented things (which would not work, but that’s not the point).
The conspiratorial aspects of his work – Cthulhu, the Cathedral, etc. – are proposed explicitly to explain how (he thinks) his opponents can be acting in good faith, yet doing bad things in a way that appears nonrandom.
I don’t know how representative he is of Death Eater culture generally (with which I am not involved), but there’s certainly a case to be made that, despite his radical policy prescriptions and generally caustic tone, he’s a mistake theorist at heart.
Honestly, I think this is exactly the difference between the elitist anti-activism EnArEcks lot, and the populist (and much more popular) alt-right. Both are extremist in their solutions and way off from the mainstream right, but EnArEcks (or just “Formalism” as Moldbug puts it) is mistake theoretic, whereas the alt-right is conflict theory based. This is reflected in the alt-right’s resort to the classic nazi bogeyman of the eternal Jew.
Moldbug wrote a piece titled “Why I am not a white nationalist”, and I don’t remember it because it’s been years, but you could possibly boil that down to “Because they are irrational conflict theorists who can’t solve any problems” from what I recall.
But Moldbug would take 1000 times as many words to say it. :–)
Yeah, Moldbuggery is fundamentally mistake-theoretic. It’s just that the central mistake it likes to harp on — roughly, that the political aspect of the Enlightenment was a bad deal and should never have happened — is so huge and so alien to modern perspectives that it tends to break most people’s mental taxonomies.
Some other strains of Death Eater strike me as more conflict-theoretic, though.
I’ve had similar thoughts – at least, that some of the other people I’ve heard identify as Death Eaters don’t seem to belong in the same category as Moldbug at all; some of them have really different styles of engagement. And some of them are doing the “let’s talk this through rationally” thing, and some really, really aren’t. And that seems like kind of an important distinction.
The complication that this piece misses is that it is about Public Choice Theory, which, while generally falling under the technocratic “mistake theory” umbrella, is specifically a theory about conflicts.
So that there is more of an asymmetry in this case: the mistake theorists are acknowledging the existence of conflicts, and attempting to reason about them; the conflict theorists are dismissing the possibility of making a mistake.
This is a very important point! I already said this in another comment, but, let me restate it here: There’s more to “conflict theory” than just a descriptive theory that disagreements are conflicts – which is, after all, true in some cases. But that doesn’t mean one should be a “conflict theorist” about such cases! Scott breaks it down here as if it’s fundamentally this one disagreement between the two points of view, but really there’s a number of disagreements – we’re looking at two clusters here – and one of those is the fact that conflict theorists just don’t really worry about mistakes, like, at all.
Or, in short, even where things are conflicts, conflict theory still contains lots of badness, and taking a conflict theory point of view is still the wrong thing to do.
Right, there’s a related but separate dichotomy about how to handle things that we all agree are pretty much zero-sum conflicts. On the one hand there is “treat each party as having legitimate interest and try to cut a deal, i.e. compromise” and on the other hand there is “the enemy is fundamentally wrong/evil and we will win by outright conquest”.
Each has its place, but some are more willing to go the second route right away. I tend to think that the best way to handle things, and what should be the social norm, is basically to bend over backwards to recognize that almost everyone has some sort of legitimate point, even if it’s one you don’t find particularly important or valid.
Apologies, but if that’s a debate that exists it’s because people don’t understand what “zero-sum” means. In an actual zero-sum situation, fighting to the death is the correct solution. But that’s because zero-sum (more generally, constant-sum; more generally still, no outcome Pareto-comparable to any other) is actually a very restrictive condition, and things that people call zero-sum generally aren’t. Actual literal wars are very much not constant-sum; that’s why it’s possible for people to surrender and to accept surrender, for instance. If you are in a situation where any sort of negotiation or compromise makes sense, it’s not constant-sum – because no compromise will be accepted unless both parties find it to be better for them than fighting, and in a constant-sum situation no option is a Pareto improvement over any other (to the parties involved).
My suspicion therefore is that the concept you are trying to get across here when you say “zero-sum” is not actually the property of being zero-sum, and you should find a better label for it.
I get your point, there’s a “minimal” and a “maximal” sense of zero-sumness.
Two people sharing a pile of money is zero-sum in the minimal sense: what I get you don’t get. But it isn’t in a maximal sense: if we fight over it instead of making a deal peacefully, it imposes extra costs (concrete costs, opportunity costs in terms of a ruined relationship, and the moral cost of screwing over another person) on us both.
What you’re saying is that if something is zero-sum in the maximal sense then it doesn’t make sense to compromise at all. Sure, I agree. I also doubt such maximal zero-sum issues are all that common.
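The Pareto claim in this sub-thread – that in a genuinely constant-sum game no outcome Pareto-dominates any other, whereas adding a costly fight to a “split the pot” game immediately creates outcomes everyone prefers to avoid – can be checked mechanically. A minimal Python sketch (the payoff numbers are invented for illustration, not taken from the comments):

```python
# Find pairs of outcomes where one Pareto-dominates the other.
# Outcome (a, b) Pareto-dominates (c, d) if a >= c and b >= d,
# with at least one strict inequality.

def pareto_improvements(outcomes):
    """Return (better, worse) pairs where `better` Pareto-dominates `worse`."""
    pairs = []
    for x in outcomes:
        for y in outcomes:
            if x != y and x[0] >= y[0] and x[1] >= y[1]:
                pairs.append((x, y))
    return pairs

# Constant-sum "split $10" outcomes: payoffs always total 10.
constant_sum = [(0, 10), (5, 5), (10, 0)]

# Add a fight that destroys the pot and costs each side $2 on top.
with_fight = constant_sum + [(-2, -2)]

print(pareto_improvements(constant_sum))  # prints []: no split dominates another
print(pareto_improvements(with_fight))    # every peaceful split dominates the fight
```

Under these (assumed) payoffs the constant-sum game has no Pareto-comparable pair at all, which is the restrictive condition the comment describes; once fighting carries deadweight costs, any negotiated split is a Pareto improvement over the fight, so compromise becomes possible.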
Indeed, the most characteristic belief of public-choice theorists is that if you have “a technocracy in which informed experts can pursue policy insulated from the vagaries of the electorate”, the policies they will pursue are the ones that most benefit the informed experts.
I feel that the dichotomy is false because those groups aim for different places in the “food chain”, so to speak.
Mistake theorists want to be technocrats on the payroll of politicians/elites (who are, of course, willing to govern rationally) while conflict theorists want to be those politicians/elites themselves.
That’s why there is so little attention to governance from the conflict side.
And that’s why, since elites are more keen to preserve their status than to govern rationally, technocrats are limited in what they can do and thus are seen as supporting the status quo.
Why, then, do technocrats typically aim for positions that are intentionally insulated from politics? The central example of arch-technocrats, after all, is the academic economist with tenure! I would say that the goal of most mistake-theorist types (at least, those who don’t just want to shout in the wilderness) is to constrain the set of options that a rotating cast of politicians/elites have at hand – take the terrible ones off the table, put good ones on the table, and try to steer the conversation towards less-mistaken options.
My impression (as someone on the mistake-theory/technocrat side of the equation by both inclination and training) is that the median technocrat supports a package of policies that deviate from the status quo by more than the median politician, while working within the reality that tweaks on the margin are what’s realistically on the menu 99% of the time.
Exactly, technocrats do tweaks on the margin, conflict types don’t feel the difference so their assumption is that you can’t fix the system by “technocrating”, only by radical displacement of elites.
Personally I tend to switch back and forth between the two approaches based on the particulars of the problem at hand.
Clearly there are problems, like measles outbreaks in the US due to anti-vaxxers, that would go away if some set of people were just a bit smarter and/or a bit better-informed. It’s really not clear to me how conflict theory could possibly account for something like anti-vaccine sentiment given that there do not appear to be any stakes involved in terms of power, wealth, or status that would rationally motivate someone to adopt the theory that vaccines cause autism.
Equally clearly, when rich individuals or corporations violate ethics and/or laws to increase their own wealth or power, they do not believe that they are working towards the best interests of society as a whole, while being mistaken about how to pursue those interests. Instead they have surveyed the options and decided to violate social, legal, and/or moral norms because they perceive an advantage for themselves in doing so. I find it hard to imagine some piece of information or some level of intelligence that would cause them not to try to take advantage of the system and their positions in it.
Societies establish schools and universities to address problems of the first type, and courts and police to address problems of the second type. But actually I think that most problems are mixed.
Take problems of tribal epistemology. Do they exist because, lacking adequate rationalist training, people default to believing what their friends and neighbors believe? Or do they exist because most people quite rationally perceive that it is in their interests to pursue status within their own group and they do this by adopting group beliefs? Both. That’s why the problem is so intractable. That’s also why politics includes both social/emotional coalition building and debate about optimal solutions/public information campaigns.
I think all these things are involved at some level when you’re talking about conspiracy theories, because the people shameless enough to espouse them can rise to levels of minor celebrity they would’ve been incapable of had they stayed in the mainstream. Andrew Wakefield might be disgraced to the broader scientific community, but he still gets feted and paid for speeches and asked his opinion by enough of a subset of laymen that I imagine he’s probably pretty internally satisfied with his life. Go up a level to the promoters (your Alex Joneses or Kevin Trudeaus) and you can build a media empire and become a household name simply by acting as an amplifier for the always-hot topic of What They Don’t Want You To Know.
That might explain someone promoting anti-vaccine sentiment but it doesn’t really explain someone adopting anti-vaccine sentiment.
We don’t need them to be altruistically working for the benefit of society as a whole, nor do we need to adopt Conflict Theory to claw back “our” wealth from “those” selfish rich bastards. We’ve got over two hundred years of theory and experience on how to channel their greed (and ours) towards the common good, and we know that this almost always leads to better outcomes for just about everyone than does any of the alternatives.
Part of this process does involve laws against e.g. dumping toxic waste in the local water supply, and a thousand other things. If corporations are disobeying those laws, and if this poses a serious problem, then you have clearly made A Mistake. Passing laws that won’t be enforced or obeyed is a classic Mistake. A Mistake theorist can learn from that and try something different. A Conflict theorist can basically only say “Those bastards! Clearly we need more and harsher laws!”, and doubling down on something that didn’t work the last time is likely just going to be another Mistake.
But there’s still a conflict between the lawbreakers and the lawmakers. You’re just abstracting away from the conflict by choosing to consider it as the result of a mistake in the conflict-handling system rather than as a primary motivating force in political economy; but of course the conflict between those who put their personal interests ahead of the group and those who follow group norms predates capitalism by all of human history minus 200 years, so the conflict must be considered primary and the system is just a way of handling that conflict.
Conflict theorists aren’t irrational slavering monsters. They’re capable of saying “here is a conflict, now I wonder what the most rational way to deal with it is.” Violent vs. non-violent protest is a tactical argument that often takes place wholly within conflict-theorist groups. Incrementalism vs. revolution is a strategic debate that conflict theorists often fall into. For those who view political economy as a conflict between rich and poor, very few people are actually saying “let’s just storm the mansions and eat the rich.” Most of us just want to charge them a higher marginal tax rate and use the proceeds to pay for a first-world health care system.
Mistake theory says “if our arguments are strong enough we can convince them to pay more taxes through logic and reason alone.” Conflict theory says, “no, we’re basically just going to have to have the government take some of their money away – using the threat of force, as is the nature of government – and redistribute it.” I think that if you view the historical record you will find that the number of times the rich have voluntarily paid more taxes pales in comparison with the number of times they’ve used loopholes, or even broken the law, to stash their money where the government can’t get it.
Actually, any law at all, if backed by the force of the government, is an exercise in conflict theory. By passing a law, by definition you’re saying “this is something not everyone would agree to do voluntarily no matter how good our arguments are.”
If the State passes a 55-mph speed limit but doesn’t flood the streets with highway patrolmen, and the People nigh-universally drive 70 mph while packing radar detectors, does that represent a fundamental Conflict between the State and the People, or did the state just make a Mistake?
There’s no conflict involved in breaking a law that isn’t being enforced, because it takes two to make a fight. There’s only token conflict in breaking a law that is seeing only token enforcement, and if we’re going to try to make a Conflict vs Mistake distinction, it shouldn’t be over tokens.
This isn’t the conflict you are looking for. Not if you are invoking (first-world) rich people and corporations.
Rich people and corporations, for the most part, follow rich-people group norms. It is in their personal interest to follow rich-people group norms because rich-people group norms are engineered to benefit rich people. Poor people do exactly the same thing w/re poor-people group norms. And the norms that encompass rich and poor alike, most rich people and most poor people generally do follow them.
But poor people declaring a norm for rich and poor alike doesn’t make it a group norm for the group of rich people. Not even if the poor people are a majority in a democratic society who pass it into law, if they do so without buy-in from the rich community and offer only token enforcement. You don’t get to accuse people of violating group norms for quietly dissenting from your token gestures. And you’ll rarely see rich people or corporations violating the laws that are being seriously enforced.
Neither? The idea of conflict presumes two groups with opposed interests with the State forming the battleground or the object of contestation between those two groups. Presumably in the case of speed limits it’s “concerned citizens who want people to drive safely” vs. “hurried citizens who want to get where they’re going faster”, with the values in conflict being safety vs. efficiency.
The idea of a “mistake” presumes two groups of people with the same objective, about which one group has more correct information than the other group. Again, the State is merely the organization that carries out whatever the current resolution of the debate happens to be. Mistake theorists are the ones arguing that actually, lower speed limits are more efficient or actually, higher speed limits save lives (I do not endorse or deny either of these arguments, because IANAE) to try to win over the other side.
A conflict is not necessarily a fight. We’re talking about conflict of interests here, like between “safety” and “efficiency.” Someone who drives at an unsafe speed is equally unsafe regardless of what the law is or whether it is enforced or not. Therefore they are “in conflict” with people who perceive their interests to require that others drive slowly/safely.
I don’t think that a group of people, no matter their resources, get to just secede from society and establish their own norms. But it’s immaterial to my point. If rich people and non-rich people are counted as being in the same society, then when the rich don’t act for the benefit of society then other members of society have an interest in stripping them of their wealth and power and redistributing it among people who do act for the benefit of society. On the other hand if rich people are in a whole different society from non-rich people, then non-rich people have an interest in conquering the society of the rich in order to strip them of their wealth and power and redistribute it among people who act for the benefit of society. It’s exactly the same conflict, you’re just analyzing it slightly differently.
Do all societies have an interest in conquering all other societies? Do you see your society as having such an interest? Because other societies have Stuff, and it would be in the selfish interest of your society to take their Stuff and distribute it amongst yourselves.
I think it is a classical Easy Mistake to see other societies only in terms of the Stuff you could take from them and the Conflict that this would generate. But if that’s your take, if it’s always “Look, that other society has Stuff, therefore we are in Conflict because we want to take their Stuff, and that’s no Mistake!”, then that’s a degenerate framing of the potentially useful Conflict vs. Mistake distinction Scott is trying to make.
And if you throw in counting even the “conflicts” nobody is bothering to fight over, then yeah, you’re just trying to reshape everything into a conflict, Us against Them.
Some of us, aren’t. That’s the distinction.
No, but clearly large, relatively poor societies have an interest in conquering tiny, ludicrously wealthy societies with no standing army that occupy the same geographical location.
Considering I led with an example of a situation that was not a conflict, this seems like a particularly ill-placed strawman you’re arguing against. I am not “trying” to “reshape” anything into a conflict. I have presented the argument that conflict theory explains why wealthy people evade taxes. Perhaps you would care to present an argument that wealthy people are evading taxes because they just don’t know they’re supposed to pay taxes? Or perhaps they are making some other mistake I haven’t thought of? Or perhaps I am the one who is mistaken, and tax evasion by the wealthy is actually a net social good?
It is obvious to me on its face that rich people who evade taxes to increase their wealth and corporations which break laws to increase their profits do so because of a moral failing rather than an intellectual failing. When the public debates tax policy and the government implements it, many of the people involved are thinking “I’m trying to do the best thing for my country by setting up an appropriate taxation system.” No wealthy person thinks “I’m trying to do the best thing for my country by illegally hiding all of my assets in overseas shell corporations” or whatever. Instead they think “I’m trying to do the best for myself and my family” or “I earned this money fair and square” or “f**k poor people, let all the moochers and the looters die in a fire for all I care” or whatever rich people think. It’s not the same interest pursued by different means; it’s incompatible interests. Hence, conflict theory.
I am guessing that you mean “avoid” not “evade” – take advantage of legal loopholes rather than doing things that are illegal. But either way, what is the moral theory which makes you view rich people who do things (legal or illegal) to hold down their taxes as morally wrong?
Are you assuming that morality is defined by legality – that right and wrong are made by act of Congress? If so, why? Alternatively, is it your view that the existing tax system is inherently just, hence it is wrong to try to pay less than it prescribes? If so, again why?
I used “evade” deliberately because I think there’s a stronger argument that breaking the law to avoid paying taxes is immoral/unethical than that taking advantage of legal loopholes to avoid paying taxes is immoral/unethical.
I don’t believe that moral theory determines peoples’ moral views, so I’d say there is no answer to this question. My alternative proposal is that people render moral judgments on individual questions or situations, then aggregate these into moral views or opinions (such as “murder is wrong” or “people should pay their fair share”) and that moral theories or frameworks are post hoc attempts to explain/rationalize their moral views. Mistake theorists may even try to change people’s minds with reference to moral theories, although I’m not sure I have ever seen this actually work in practice. I think the only way to change people’s moral views is to expose them to numerous situations in which they are prompted to render moral judgments that, upon reflection, turn out to support a moral view different from the one they thought they had. This is why, for example, it’s very hard to get someone to think that it’s okay to be gay by arguing with them on the basis of moral theory, but not so hard if you get them to meet or observe a number of gay people, each of whom they morally judge to be okay.
That being said, it is my moral judgment that a wealthy member of society should give a significant portion of their wealth back to the society in which they live. Obviously if a person gives so much money to charity that their deductions bring their tax burden down to zero, I wouldn’t have a moral objection (this technically “avoids” paying taxes but is not tax evasion). It’s not taxes per se that I think are moral.
However, if I had to put a lower limit on someone’s ethical obligation to give back to society, that limit would be set at that person’s legal tax burden. That amount is the consensus view on what government needs from each person to be able to provide basic public services like infrastructure, education, security, and some amount of social welfare. If a libertarian society finds a way to provide good education without any government action, maybe that tax burden will be lower. I haven’t seen any empirical evidence that this could happen, but if it did I’d be open to it (I say this as someone who works at a private school). Similarly if a technocrat finds a cheaper way to, for example, conduct the Census, or administer VA benefits, or whatever, then I’m entirely comfortable lowering everyone’s tax burden proportionately. But I think it’s obviously inadequate to ask each person to pay only what they’d need to pay in an ideal world governed by perfect technocrats and organized according to perfect libertarian principles. Instead each person should pay what we actually need, right now, to run our society. And given the current state of affairs (inadequate healthcare, food, and housing for millions of Americans) I’d say that the tax burden is actually quite a bit lower than it morally ought to be; but given my knowledge of human behavior I wouldn’t expect anyone to voluntarily pay more taxes than they are asked to pay.
Finally, I sympathize with the view that actually paying taxes is immoral because the government is going to use some of that money to do [thing I think is immoral]. Given that wealthy people have a) a disproportionate influence on policy and b) a disproportionate ability to just exit a society they think is being run immorally, I think that this excuse is much less morally exonerating the more money someone has.
Short of some very difficult and creative accounting, there is no way that most rich people can avoid paying a significant portion of their income in taxes, quite aside from other things they do to benefit the society they live in. Despite lots of rhetoric to the contrary, the federal income tax system is progressive–richer people pay a larger fraction of their income than poorer people.
That’s the part I find puzzling. The tax burden is the outcome of a political process in which lots of different people are trying to get outcomes they want. Some of those people want the government to spend money making food more expensive or jailing people who use drugs or subsidizing what those people are doing. Most of them would prefer that someone else pay for those things, and do what they can to get that result. It isn’t a consensus view and I don’t see where it gets any moral weight from.
But suppose it was. Suppose we took a vote on the total budget and the pattern of taxes and somehow discovered that, in some sense, 51% of the people supported a particular result. Why is that morally binding on me? Doesn’t your argument depend on what is actually necessary to do the things you think government should do, not on what some other people think is necessary? If so, shouldn’t you base your view of the subject on what you believe is true, not on what others believe is true?
I’m missing the “neutral conflict theorist” point of view here. I think “different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites” or to enrich a different elite.
I also think that most people make the mistake of thinking they can stay on top of shifting alliances in these conflicts, but most are in fact too stupid to actually pull it off and are thrown under the bus after the revolution.
This was a very interesting post. I find it especially interesting as somebody whose roots are very much in conflict theory but who has been sympathising more and more with mistake theory with time (partly as a result of reading this blog).
I need to mull this over more, but initial thoughts: I think we need to find the Mistakes and fix them. The way to do this is to use all the tools in the Mistake theorists’ toolbox. But there is a Conflict, which we need to be aware of. There are some people who are fighting for their own material interests, not trying to solve the great Puzzle.
To make this explicit: I think the Question is how to make life better for people. It seems to be more important to help people who are struggling than to help people who are already doing fine. Given the shape of the wealth curve, this is a motivation for redistribution. But some redistribution measures will be mistakes and others will not (EDIT: or perhaps they’re all mistakes). That is, some will actually make things worse for everybody and some will make things better for everybody. This question needs to be answered using the tools of Mistake theorists. We need studies. We need debate. We need new ideas that might achieve the same aims as the old ideas but with fewer drawbacks.
But there *is* a conflict going on. There are some people who are acting in their material interests and not in the aforementioned goal of “making life better for people”. So some poor people will favour redistribution measures that do more harm to society than good because they help “me” and some rich people will oppose redistribution measures that do more good than harm to society because they hurt “me”. And then it gets complicated further because you will have some people making mistakes, so you will get some poor people who favour a redistribution measure that even makes “me” less well-off, but it sounds like it’ll make “me” better off. The libertarians would probably say this is most redistribution measures.
I think the way society has evolved has been through conflict, but the conflict isn’t between “us”, the good and virtuous and “them”, the evil and dangerous. Or at least, not always. The conflicts that drive society are just between “me and people who share my interests” and “them with interests that are opposed to mine”. Maybe it *feels* like you’re fighting evil, but what it really is is a political struggle.
I think this is the important distinction. If you ask “what’s the best way to shape society?” then the answer is all the tools of Mistake theory, but if you ask “why is society the way it is?” then I think Conflict theory does most of the work. We didn’t get to be where we are by loads of powerful philosophers trying to figure out how best to do things and doing it; we got here by a political struggle arising out of the conflicts of interest between different classes.
Wouldn’t it be cool if “people who are acting in their material interests ” actually “made life better for people”.
Capitalism says Hi!!!!
To really hammer the point home.
Capitalism: Pulled 2 billion people out of poverty in the last century…
Marxism: Killed 100 million people in the last century…
I can’t tell if you’re being sarcastic or not. Marx was one of the most vocal believers in the power of capitalism to increase productivity and prosperity.
And attributing deaths to economic systems and political philosophies is kind of dumb. Like, how many people has “capitalism” killed? What does that question even mean?
(If you were being sarcastic and I just massively missed the point then I apologise!)
It’s very hard to attribute deaths to ideologies or systems – and usually degenerates into cherry picking so you can say “the other guy’s system has killed 200 million people!” or whatever. Usually the math is very dubious, a lot of context is missing (is the decline in starvation worldwide due to capitalism, to communism, to general technological improvement? Is an uptick in starvation somewhere due to capitalism, to communism, to bad luck?)
However, you can far more reasonably attribute deaths to individual leaders or groups of leaders. Take Snyder – he puts Hitler at around 12 million, Stalin at about 10 million. (I’ve seen anti-communists accuse Snyder of counting very low, and communists say that Snyder can’t be trusted because if he were trustworthy, i.e. a communist, he would just ignore anything Ukrainians have ever said about anything.)
The supposed death toll of communism presented by some anti-communists is unbelievably high (ends up looking closer to 40 million than 100), but communists who come up with claims that capitalism has killed such-and-such a number of people depend on claiming that every potentially-preventable death from disease and famine outside of communist territory is capitalism’s fault (as though those things didn’t happen before capitalism came into the world). I remember there was one commenter here who claimed that the Soviet famine in the early 30s was because of capitalism, which seems a wee bit odd.
However, a system the leaders of which more commonly cause mass deaths, is maybe worth looking a bit askance at, and communism has that problem more than capitalism.
When you say 100 million is ‘unbelievably high’ do you mean the evidence makes such a claim look ridiculous or that it is unbelievable that 100 million people could have died under communism?
That the evidence is a bit dubious for 100m and that the people who come up with numbers that high were as much or more propagandists during the Cold War as they were impartial academics.
Getting to 100m involves a high count for Stalin, very high counts for starvation under Mao, counting famine and disease during the 1918-21 civil war (bad shit was going to happen in Russia no matter what following a disastrous war and the collapse of the government) the same as the basically manmade famine in the early 30s (which was due to collectivization and attempts to root out an imaginary Ukrainian conspiracy), etc.
This is as true as the statement that the people who came up with the 12 million deaths in the holocaust were as much anti-fascist propagandists as they were impartial academics. It’s not false, and they were trying to castigate an ideological system that they found abhorrent, but that doesn’t mean they were wrong. Their figures for the USSR have been largely confirmed by post-USSR research, and god only knows how many people starved to death under Mao. 100 million is a nice round figure that, if not precisely correct, is not far off.
But the 12m number for Hitler is a low count (it elides the issue of how to divide blame for war deaths) and it isn’t based on post-war inflated commie numbers. It’s not the highest # you could lay at Hitler’s door.
If you take a higher count for Mao, meanwhile (modern Maoists, in my experience, will if you poke them hard enough admit to about 15 million dead; a lot of the more credible estimates sit in the ~30m range) you get to 50m, maybe 60m, but both are closer to 40m than 100m.
WW2 starts when Nazis and Communists invade Poland, so if we’re going to start assigning blame for war deaths, the communist toll is going to rise.
Neither is 10-20 million for the USSR. The holocaust figures are quite reliable because Germany was militarily occupied after the war, and so we could use their documents to measure what they did. The communist scholars in the Cold War did not have that benefit, they had to estimate, and the reputable among them were not far off.
If you assume Mao got 30 instead of 60, you’re looking more at ~70 million deaths instead of 100, still far more than any other idea in history, and still unspeakably horrific. And the higher numbers for Mao are not implausible.
However, western scholars didn’t have access to the places where the vast, vast majority of the Nazi deaths took place. After the USSR fell, estimates tended to fall a bit too, due to better archival access. Also, the Soviets blamed some stuff they did (like Katyn) on the Germans, although these tended to be pretty small numbers (Katyn was 22k, for example) compared to the German mass killings.
With regard to Mao, I just think that he’s more debatable than Stalin, and the death toll that can be laid at his door involves less intentionality. It’s also easier to make the argument that, despite the human cost, in the end the net effect was positive: he took China from a subsistence economy that had just gone through a brutal occupation, and turned it into a feared world power. In comparison, Stalin’s actions played a role in the USSR almost losing the war in 1941.
In any case, regardless of my numbers, my point is that there’s a difference between “this idea killed people” and “this idea enabled awful people to get into power” – personally, I think that the problem is not communism, but revolutionary communism. The vanguard party is a dreadful concept, because the vanguard never hands over power, and the concept of a revolution led by a small cadre leads to dictatorships, and that’s how you get Stalin or Pol Pot or whoever. It should be noted also that the death toll that can be laid at capitalism’s door also involves dictators, in the form of western governments working to overthrow any democratically-elected governments they thought were too left-wing, and replacing them with right-wing dictators.
Shouldn’t WW2 be counted as starting either when Japan invades China (starting the first of the continental wars of the era), or when Germany declares war on the US (linking the two continental wars into a true World War)?
But acknowledging that China was a party to WW2 makes the accounting rather tricky here, because you’ve got a three-way conflict with megadeaths that need to be apportioned to the Communists, the Nationalists, and the Japanese.
It’s rather difficult even to define a starting point for the Chinese theater of WWII. 1937, when the Japanese invaded China proper? 1931, when they invaded Manchuria? 1928, when the Kuomintang and the Chinese Communist Party (previously allies) kicked off the main phase of the Chinese Civil War? 1912, when the Qing Dynasty was overthrown? There was fighting throughout.
There was I believe an official armistice in Manchuria in 1932, making 1937 the start of something new. And there’s always some group of hotheads waving guns around somewhere in the world, so either there’s World War Always (4004 BC – Armageddon), or local insurgencies don’t count as part of a World War.
This went a little above the level of local insurgencies. The amount of shooting varied, but between 1912 and 1928 there was essentially no central authority over most of China, and after that the KMT and the CCP were busy tearing each other to shreds. It’s probably most comparable to the Second Congo War in terms of other 20th century conflicts.
1937 is as good a starting point as any, though.
Unfortunately we’ve reached the recursion limit and I’m a little late to the party – but this comment is responding to your first one.
I think this is a very reasonable take on it. There have definitely been disproportionately many preventable deaths (including murders, mass-murders and some but not all famines) under self-described communist leaders than under non-communist ones in the 20th century. I agree that this should cause alarm bells and one should be very careful indeed of supporting communism or Marxism or whatever unless one can say in what ways one differs from those that have gone before and done terrible things.
I think it’s possible to do this. I think the terrible things done by self-described communists and Marxists are not a direct result of their believing in communism/Marxism. (Although I acknowledge my biases in this…)
No, but they had access to the guys that sent those people to Poland to die, and they didn’t come back.
When you take peasants’ seed grain, and they starve, you are murdering them every bit as much as if you ordered them shot. When you increase grain exports in the face of famine in order to post impressive export figures for propaganda purposes, you’re murdering people.
No he didn’t. China was still a subsistence economy when Mao died. It had nukes, but it was still poorer than sub-Saharan Africa.
There is no other kind. And frankly, an idea that only attracts awful people to it is just as bad as an inherently awful idea.
The toll of such deaths that can be laid at capitalism’s door is multiple orders of magnitude smaller. The worst white terror was the Indonesian one, which killed a few hundred thousand people, more than every other white terror put together.
What you are saying is about as plausible as saying that the Holocaust wasn’t a result of Nazism. The Marxists said they were going to liquidate their class enemies, and they did, or tried to. There is a direct, clear line from words to action.
-I think you’re missing something from your first bit.
-how do you value an attempt at accelerated industrialization vs the lives of peasants? Would a leader be justified in saying “well, lots of peasants will die if we do this, but we need to build up a military base ASAP; lots of peasants died when the Japanese were around here too”?
-“having nukes” goes more hand in hand with “feared power” than “high standard of living”.
-I didn’t say communism only attracted awful people. The vanguard party idea, which is horribly flawed, leads to dictators who never hand power over to the proletariat. Dictators are far more likely to do awful shit (in their home countries, at least).
-I think you’re missing something from your first bit.
It doesn’t matter how I value it, because Mao didn’t achieve a meaningful amount of accelerated industrialization.
The results of communist regimes were universally terrible, with no exceptions. And Communism did not just produce run-of-the-mill dictators, it produced several people who might claim the title of the worst dictators in all of history, in a very narrow span of years. There are only two possibilities: either there is something awful about the ideology, or the ideology only attracts awful people. You’ve denied the former, which leaves only the latter. Frankly, I don’t care which is more accurate, I suspect it’s both, but either way, the results are awful and no one espousing the ideology should be elected dog catcher, much less trusted with another country to ruin.
In the case of Mao, the largest number of his victims were almost certainly the peasants who died in the famine during the Great Leap Forward. Peasants were not his class enemies. Unlike the Ukraine famine, which was arguably deliberate, that was an unintentional result of a different bad policy.
I don’t think we will ever know for certain whether Mao realized that millions of people were starving and decided to keep exporting food anyway or whether he was fooled by the false information that the incentive structure he had set up generated and really believed that agricultural output was high enough to permit exports without creating famine.
It wasn’t just a famine, it was a famine combined with requisitions and exports of grain. And while peasants were not Mao’s class enemies, “rich peasants” definitely were, and that was always who he insisted he was going after.
Frank Dikötter presents compelling evidence that he did know. He might not have known the sheer scale, but he knew that huge numbers were dying.
The majority of deaths in the Ukraine famine were peasants and not the ‘undesirable’ classes, but the famine was caused at least in large part by class warfare against the more productive peasants. We don’t know if Mao intentionally created the famine, but it doesn’t particularly matter: engaging in class warfare caused it. He couldn’t end the famine without repudiating (in action at least) Communism.
Wasn’t China recognized as a world power immediately at the end of the war? The ROC was one of the five permanent members of the UN Security Council.
He kept the world’s most populous country dirt poor when other countries were getting rich.
One striking statistic: From Mao’s death to 2010, the per capita real GNP of China went up twenty fold.
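For scale, a quick back-of-envelope calculation (a sketch only, using the twenty-fold figure and the 1976 and 2010 endpoints from the comment above) shows what that growth implies on an annual basis:

```python
# Back-of-envelope: what average annual growth rate turns 1 unit into
# 20 units over the 34 years between Mao's death (1976) and 2010?
import math

years = 2010 - 1976          # 34 years
multiple = 20                # twenty-fold increase in per capita real GNP
rate = multiple ** (1 / years) - 1

print(f"{rate:.1%} per year")  # prints "9.2% per year"
```

In other words, the twenty-fold claim corresponds to sustained growth of roughly 9% a year over three and a half decades.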
Certainly. There has been an enormous increase in the Chinese standard of living due to capitalist-ish reforms. Deng Xiaoping can be credited with initiating some huge increases in real standard of living. But that’s not the same thing as the change of role on the international stage that happened under Mao.
EDIT: And, what countries comparable to China were getting rich at the same time, by adopting free-market reforms?
China’s change in role happened because its civil war ended, but since Mao was responsible for one side of that civil war, you can’t exactly give him credit for ending it. He could have just surrendered decades earlier. Had Chiang won, China would have undergone a similar shift in geo-political importance.
Depends what you mean by comparable to. The only country with a comparable population was India, and it was running with five year plans and exchange controls and the permit raj.
Countries that were poorer than China in terms of natural resources, most easily measured in population density, would include Singapore, Taiwan, South Korea and, except that it wasn’t a country, Hong Kong. All of which did spectacularly better than China with, compared to China, relatively free market policies.
You’re assuming that there’s one, clear, obvious description of “the best”, and that this is knowable equally by everyone. But it isn’t true, and the more fundamental differences there are between the members of the populace (i.e. how “diverse” it is, in ways that matter) the more points of conflict there will be over things that matter. You’re assuming that all right-thinking people are, deep down, just like you and merely need to have the facts explained to them patiently.
One group thinks “the best” schools teach evolution as undisputed fact; another group thinks it “best” if the theory is not brought up at all, or brought up as a controversial opinion perhaps. There’s no “mistake theory” answer to the “best” way to shape the schools in this situation, because the two groups don’t share the same desire for outcomes. The US solution was originally to minimize what the government tried to do– you can’t have a governance conflict over the schools if the government isn’t setting the curriculum– but we can’t seem to agree to go back to that modus vivendi. The European nationalist solution was to break the polities down to reasonably homogenous ethnic groups– everyone spoke the same language, worshiped the same way, sang the same songs to their kids, had the same view of history, etc., and so didn’t likely have a lot of fundamental conflicts– which required either the borders to move to the people, or the people to move to the borders, of course. Temporarily very cruel to Greeks in Turkey and vice versa, et cetera, but it did result in a low-violence end state… and is kind of coming apart at the seams now as migration unravels those tacit understandings. I don’t know of a third answer.
Isn’t that disagreement based on a disagreement about facts for which mistake theory is relevant? If evolution is false, as presumably many opponents believe, then almost everyone should be against teaching it. If it is true, almost everyone for.
Of course, there might be people who believe that even if evolution is false, teaching it is a good way of undermining religion. And there might be people who believe that even if it is true, teaching it is bad because it undermines religion. But even there, the disagreement hinges in part on beliefs about religion–whether it is true and whether believing in it, true or false, has good or bad consequences.
Not exactly, although I see I wasn’t clear about the problem in my comment. Let me try to explain.
Evolution in schools is a specific example of a larger problem. I’m pretty sure everybody wants “the truth” taught in schools. But if it were as simple as just laying the facts out on the table, this would have been settled over 100 years ago (if not longer). The problem is, we don’t agree on the ultimate goal of our lives (and our society), nor on the relative weights to give to various sources of authority, nor on how to balance benefits and drawbacks– just to name a few!
So, for evolution. My experience is, the vast majority of the populace doesn’t actually understand the details of evolution. I doubt I’ll get much argument in this crowd about anti-evolutionists not understanding; but I’m always struck by how confused pro-evolutionists can be. For example, they claim that evolution “says” such and such (exactly as creationists say the Bible “says” something). Or they make a moral claim based on evolution, although evolution has no more moral content than F=ma. Or they demand it as unquestionable truth in every living organism– except humans, where it has no role whatsoever, especially with respect to sexual dimorphism. Or they insert a ritual obeisance to evolution in the middle of some totally unrelated conversation (although I’ve noticed that this role has been replaced by the ritual obeisance to AGW, in recent years).
So… something is going on under the surface here. I identify several causes:
1) The vast majority of the populace actually don’t have a grasp of the mathematics and biology behind evolution. They’re “outsourcing” their judgment to authorities they trust– as, in fact, we all have to do for much of our lives. (I can’t independently verify any but a tiny handful of the important things in my life– the efficacy and safety of medical care, my employer’s accounting and tax status, political events I don’t personally witness, etc.) It’s hard-to-impossible to come to a unified agreement on who are the most reliable authorities– and part of the problem is that everybody agreeing to trust the same authorities can enable some truly horrific problems.
2) But that’s not all! A fact is literally a “factum”, something that is constructed in the mind of the thinker, and that process depends on our internal metaphysics and philosophy. We don’t agree on those either. The evangelical finds his experience of the working of the Spirit in his heart and soul to be the single most important “fact” about the universe; but the materialist doesn’t accept this as reality at all!
3) We’re still not done! There’s the question of overall goals, and how to balance competing goods against each other, or how to resolve asymmetric rewards (good for me, bad for you). In schools, this might show up as things like: how much deference should the school show to the parents’ authority over their children? How much “general” education, versus vocational education, should be targeted? Should the schools teach ethics at all (and is that even practically possible)? Or maybe a “minimum set” of ethics, such as “sportsmanship”, “academic ethics/honor code”, “citizenship”, etc. (and what would those be)? Is academic tracking a good thing, because it allows smart kids to achieve more, or is it a bad thing, because it shortchanges kids who are already facing a challenging situation? And on, and on.
Given all of this, it’s not surprising that we end up with a chronic jam when we try to “reason” with each other over these hot topics, if we don’t already start out at a high level of agreement on our worldview, goals, values, etc. and a general agreement on the trustworthiness of various authorities or data sources. The “melting pot” or “assimilation” concept is, effectively, imposing a minimum set of philosophy, metaphysics, and shared facts on the populace sufficient to support the level of government intrusion into their lives. Less intrusion = less need for imposition, and I’m in favor of it. But it’s not going to “just happen”… even if you talk real slow to those morons over there about Daaaarrrrrwiiiiinnnnnn….
I agree with your general point, and have made the same argument in the past with regard to both evolution and AGW. Almost everyone is working on second hand information, so it depends what sources you trust.
Incidentally, not only does almost nobody on either side understand evolution, almost nobody on either side of the AGW dispute understands the greenhouse effect. For some evidence … .
No I’m not. Finding “the best” is in two parts: choosing a question and finding the answer to that question. My question is an essentially utilitarian one: “how do we make life better for people?” For people who disagree with my question, we’re never going to agree on the answer.
The more interesting case is where people broadly agree that the question is the right one but disagree on the answer. But it’s no surprise that we disagree on the answer, because it’s an incredibly hard question! We need to debate and experiment and hopefully limp towards some kind of common answer. It’s not clear nor obvious.
Don’t we all wish… but no. I’m an engineer. Defining what “the best” means, what measurements to take and how to cope with noise and error, and in fact whether we should be even aiming for “the best” at all (vs. the “good enough, and iterate as needed”)– is the single most important part of the project, even if it’s a $5 widget. Not getting everyone on the same page at the beginning for these types of questions is the #1 cause of failure in R&D. And object design is very simple compared to the “big” societal questions we’re talking about.
I recently had a friend of mine very vigorously argue US immigration policy from the perspective of “What Would Jesus Do?” But to an atheist, WWJD isn’t even a thing, let alone the most important benchmark of the rightness (or righteousness) of the policy. My friend was sincerely trying to answer the question, how do we make life better for people? But “better”, to her, is inextricable from the image of Jesus separating the sheep from the goats at the final judgment, and herself really needing to go to the right, with the sheep. Atheists… have probably ground another layer of tooth enamel off already. But they’ll come up with a different definition of “better”, which may very well be morally disgusting to my friend (who will, of course, then see them as actively seeking to maximize the vileness of the situation).
Also, who are “the people” to be bettered? Americans? Every human currently alive? Future generations? Poor people? Me and my family only? How would you measure it? And how do you rank choices which help some, hurt others (i.e. all decisions, more or less)? There’s no SI unit for these things, there’s no objective measurement. It’s a category error to think you can treat this as a math problem, when it’s a Calvinball game.
I think we’re speaking slightly at cross-purposes. I think we agree that there are two problems and that they’re of different classes: one is defining “the best”, deciding the objectives, asking the “question”, whatever; the other is computing “the best” based on our definition, meeting the objectives and answering the “question”.
We also agree, I think, that the tools you need to approach both sides of the problem are different ones.
I was interested to hear your opinion that the first part (defining objectives) is the biggest cause of problems in R&D.
But what I wanted to emphasise is that for big societal questions, the second part (meeting objectives) is so incredibly hard that even if you manage to agree entirely with somebody’s “questions” (and I accept that this is very far from trivial) then you’ll still be likely to disagree on answers.
Hanson’s “Elephant in the Brain” touches on this ambiguity. We make a lot of mistakes, but many of those “mistakes” happen to help our coalition in whatever conflict we’re in.
Count me in the “this is obvious” camp – although I wouldn’t phrase it so dismissively, this is probably the best description of it I’ve ever seen. I find it such a great source of frustration because it, by itself, is more responsible for the sad state of public debate than anything else.
It’s uniquely hard to deal with because not only do the parties misunderstand each other’s communications, they disagree fundamentally about the reasons they’re even talking.
Disclosure: On an emotional level, I hate “conflict theorists”. My id thinks the dichotomy described isn’t so much any old way to categorize people as the very difference between Good and Evil.
But that doesn’t mean they are wrong. As in, not totally wrong on a factual level.
Mistake theory vs. conflict theory seems a textbook case of two complementary (but believed to be substitutes) partial narratives, two different stories you can tell about the same phenomenon by focusing on different aspects and connecting the dots differently (like drawing two different and partially overlapping constellations in the night sky).
It’s obvious to anyone with half a brain that examples of both exist: there are certainly zero-sum conflicts, but also plenty of problems that are failures of rationality and system design.
The question is which one of the two narratives you prefer, because even if you on some level understand that both of them have valid points it’s still incredibly hard to keep both in mind at the same time. They sort of interfere with each other. It feels like a contradiction even if it isn’t one, strictly speaking. Therefore you’ll resolve the cognitive dissonance by having one of them represent the “fundamental truth” and the other as a “corrective” to account for the noise that doesn’t quite fit*.
While some subscribe to only one narrative, most admit that correctives exist when not in the heat of battle. The important difference is which one you’ll put first, as that will determine how you act most of the time, whenever nuance and charity is less than maximal.
That in turn depends on where in time, space and context you’re situated (and what your personal characteristics are). In other words, depending on what parts of reality you come in contact with and how you interpret them, you’ll put one or the other first**.
What I’m saying is that it’s not necessarily its degree of truth or validity that makes mistake theory better than conflict theory (because I do think it is better). Truth and validity for something this vague are going to be heavily dependent on local conditions and personal interpretations.
No, what makes mistake theory “better” is consequences. When we act as though mistake theory is true, things tend to get better. When we don’t consider naked power plays acceptable it becomes more difficult to pull them off. When we expect civil servants not to be corrupt it becomes easier to shame them when they are (and the corrupt are less drawn to civil service). When we expect people to be charitable and rational in debate it raises the costs of not being so. The price is eternal vigilance etc. etc.
Ideologies do kind of reshape the world in their image – to the extent that it’s possible – which is why the “best” ideologies are practical but not cynical, optimistic but not utopian. Historically, when conflict theory gets to define the way politics is done, things turn to shit (or Mountains of Skulls).
To be plain: cooperation is better than defection, and going around saying “hey there is lots of defecting going on, therefore I’m going to defect and so should everyone else on my side” is antisocial behavior that amounts to a deliberate destruction of social capital (the good response is to try to change things so defecting gets comparatively harder and less profitable).
But I have to admit that it is a rational course of action if you believe that your enemy is already defecting all the time and won’t ever change. There really is no reasoning with someone who believes that. Sometimes that’s even right, and it often was in premodern times. But it’s rarely the case in modern democracies, and if you truly believe that there are no positive-sum processes to nurture and develop, then you are probably under the spell of a destructive ideology.
*I wrote this model down first in a comment here about a year ago and fleshed it out in an article last month and I’ve been sort of stuck on applying it to everything since then.
**I might be way off here, but I wonder whether academics and politicians could be more prone to conflict theory than businesspeople, because their everyday experience is less characterized by positive-sum exchanges. Idk.
Very well said!
Reading through the other comments made me think that there really are two separate dichotomies here, not just one, and that I was a little confused about which one the original post was talking about. If “mistake theory” means to believe that policy questions have a correct answer then I’m not one of those, not emotionally and not rationally. It’s obvious that that isn’t the case. We also can’t and shouldn’t act as if that was true.
The other dichotomy (where my id really does hate the other side, ironically) is between two kinds of preferred conflict resolution. “Mistake theorists” are right that some conflicts can be resolved by making everybody better informed, but this is far from all cases. Often people do have different values, and then the relevant issue becomes how to resolve these conflicts. One way is to treat a conflict as a war where the other side should be defeated; the other is to treat it as a business negotiation and compromise. The second, however, requires that you see the other side as basically legitimate. And this, I think, has something to do with “mistake theory”. Not like thinking that the other side is mistaken and not evil. Neither of the two, because both of them presuppose that there is a right/good side and that you’re on it. Instead it requires recognizing that they have a different viewpoint and set of values that you might not even understand because it isn’t expressible in familiar terms. You need to have a similarly open, charitable and inquisitive attitude to values as a mistake theorist does about facts to be able to empathize with an alien other party in a conflict-theoretical situation. This in order to achieve an amicable solution that builds long term social capital.
Basically, there is a certain moral humility that accepts conflict theory as broadly true but approaches it with a mistake theorist’s rationality and openness. What is truly bad is to refuse to engage with a viewpoint before you can understand it well enough to empathize with why it makes sense to the person that holds it. Then of course you can disagree, and forcefully. Even fight.
+1, and ‘practical but not cynical, optimistic but not utopian’ is where I thought Scott was going in Guided By The Beauty Of Our Weapons, which I was reading as an argument for mistake over conflict for as long as you can manage it: