Varieties Of Argumentative Experience

In 2008, Paul Graham wrote How To Disagree, ranking arguments on a scale from name-calling to explicitly refuting the other person’s central point.

And that’s why, ever since 2008, Internet arguments have generally been civil and productive.

Graham’s hierarchy is useful for its intended purpose, but it isn’t really a hierarchy of disagreements. It’s a hierarchy of types of response within a disagreement. Sometimes a response refutes the other person’s point, but the point should never have been made at all, and refuting it doesn’t help. Sometimes it’s unclear how the argument even connects to the sorts of things that could in principle be proven or refuted.

If we were to classify disagreements themselves – talk about what people are doing when they’re even having an argument – I think it would look something like this:

Most people are either meta-debating – debating whether some parties in the debate are violating norms – or they’re just shaming, trying to push one side of the debate outside the bounds of respectability.

If you can get past that level, you end up discussing facts (blue column on the left) and/or philosophizing about how the argument has to fit together before one side is “right” or “wrong” (red column on the right). Either of these can be anywhere from throwing out a one-line claim and adding “Checkmate, atheists” at the end of it, to cooperating with the other person to try to figure out exactly what considerations are relevant and which sources best resolve them.

If you can get past that level, you run into really high-level disagreements about overall moral systems, or which goods are more valuable than others, or what “freedom” means, or stuff like that. These are basically unresolvable with anything less than a lifetime of philosophical work, but they usually allow mutual understanding and respect.

I’m not saying everything fits into this model, or even that most things do. It’s just a way of thinking that I’ve found helpful. More detail on what I mean by each level:

Meta-debate is discussion of the debate itself rather than the ideas being debated. Is one side being hypocritical? Are some of the arguments involved offensive? Is someone being silenced? What biases motivate either side? Is someone ignorant? Is someone a “fanatic”? Are their beliefs a “religion”? Is someone defying a consensus? Who is the underdog? I’ve placed it in a sphinx outside the pyramid to emphasize that it’s not a bad argument about the topic itself – it’s an argument about something completely different.

“Gun control proponents are just terrified of guns, and if they had more experience with them their fear would go away.”

“It was wrong for gun control opponents to prevent the CDC from researching gun statistics more thoroughly.”

“Senators who oppose gun control are in the pocket of the NRA.”

“It’s insensitive to start bringing up gun control hours after a mass shooting.”

Sometimes meta-debate can be good, productive, or necessary. For example, I think discussing “the origins of the Trump phenomenon” is interesting and important, and not just an attempt to bulverize the question of whether Trump is a good president or not. And if you want to maintain discussion norms, sometimes you do have to have discussions about who’s violating them. I even think it can sometimes be helpful to argue about which side is the underdog.

But it’s not the debate, and it’s also much more fun than the debate. It’s an inherently social question, the sort of who’s-high-status and who’s-defecting-against-group-norms question that we like a little too much. If people have to choose between this and some sort of boring scientific question about when fetuses gain brain function, they’ll choose this every time; given the chance, meta-debate will crowd out everything else.

The other reason it’s in the sphinx is that its proper function is to guard the debate. Sure, you could spend your time writing a long essay about why creationists’ objections to radiocarbon dating are wrong. But the meta-debate is what tells you creationists generally aren’t good debate partners and you shouldn’t get involved.

Social shaming also isn’t an argument. It’s a demand for listeners to place someone outside the boundary of people who deserve to be heard; to classify them as so repugnant that arguing with them is only dignifying them. If it works, supporting one side of an argument imposes so much reputational cost that only a few weirdos dare to do it, it sinks outside the Overton Window, and the other side wins by default.

“I can’t believe it’s 2018 and we’re still letting transphobes on this forum.”

“Just another purple-haired SJW snowflake who thinks all disagreement is oppression.”

“Really, do conservatives have any consistent beliefs other than hating black people and wanting the poor to starve?”

“I see we’ve got a Silicon Valley techbro STEMlord autist here.”

Nobody expects this to convince anyone. That’s why I don’t like the term “ad hominem”, which implies that shamers are idiots who are too stupid to realize that calling someone names doesn’t refute their point. That’s not the problem. People who use this strategy know exactly what they’re doing and are often quite successful. The goal is not to convince their opponents, or even to hurt their opponent’s feelings, but to demonstrate social norms to bystanders. If you condescendingly advise people that ad hominem isn’t logically valid, you’re missing the point.

Sometimes the shaming works on a society-wide level. More often, it’s an attempt to claim a certain space, kind of like the intellectual equivalent of a gang sign. If the Jets can graffiti “FUCK THE SHARKS” on a certain bridge, but the Sharks can’t get away with graffitiing “NO ACTUALLY FUCK THE JETS” on the same bridge, then almost by definition that bridge is in the Jets’ territory. This is part of the process that creates polarization and echo chambers. If you see an attempt at social shaming and feel triggered, that’s the second-best result from the perspective of the person who put it up. The best result is that you never went into that space at all. This isn’t just about keeping conservatives out of socialist spaces. It’s also about defining what kind of socialist the socialist space is for, and what kind of ideas good socialists are or aren’t allowed to hold.

I think easily 90% of online discussion is of this form right now, including some long and carefully-written thinkpieces with lots of citations. The point isn’t that it literally uses the word “fuck”; the point is that the active ingredient isn’t persuasiveness, it’s the ability to make some people feel like they’re suffering social costs for their opinion. Even genuinely persuasive arguments can be used this way if someone links one on Facebook with “This is why I keep saying Democrats are dumb” underneath.

This is similar to meta-debate, except that meta-debate can sometimes be cooperative and productive – both Trump supporters and Trump opponents could in theory work together trying to figure out the origins of the “Trump phenomenon” – and that shaming, unlike meta-debate, is at least sort of an attempt to resolve the argument.

Gotchas are short claims that purport to be devastating proof that one side can’t possibly be right.

“If you like big government so much, why don’t you move to Cuba?”

“Isn’t it ironic that most pro-lifers are also against welfare and free health care? Guess they only care about babies until they’re born.”

“When guns are outlawed, only outlaws will have guns.”

These are snappy but almost always stupid. People may not move to Cuba because they don’t want a government that big, because governments can be big in many ways, some of which are bad, because governments can vary along dimensions other than how big they are, because countries can vary along dimensions other than what their governments are, or just because moving is hard and disruptive.

They may sometimes suggest what might, with a lot more work, be a good point. For example, the last one could be transformed into an argument like “Since it’s possible to get guns illegally with some effort, and criminals need guns to commit their crimes and are comfortable with breaking laws, outlawing guns might only slightly decrease the number of guns available to criminals. And it might greatly decrease the number of guns available to law-abiding people hoping to defend themselves. So the cost of people not being able to defend themselves might be greater than the benefit of fewer criminals being able to commit crimes.” I don’t think I agree with this argument, and I might challenge assumptions like “criminals aren’t that much less likely to have guns if they’re illegal” or “law-abiding gun owners using guns in self-defense is common and an important factor to include in our calculations”. But this would be a reasonable argument and not just a gotcha. The original is a gotcha exactly because it doesn’t invite this level of analysis or even seem aware that it’s possible. It’s not saying “calculate the value of these parameters, because I think they work out in a way where this is a pretty strong argument against controlling guns”. It’s saying “gotcha!”.

Single facts are when someone presents one fact, which admittedly does support their argument, as if it solves the debate in and of itself. It’s the same sort of situation as one of the better gotchas – it could be changed into a decent argument, with work. But presenting it as if it should change someone’s mind on its own is naive and sort of an aggressive act.

“The UK has gun control, and the murder rate there is only a quarter of ours.”

“The USSR was communist and it was terrible.”

“Donald Trump is known to have cheated his employees and subcontractors.”

“Hillary Clinton handled her emails in a scandalously incompetent manner and tried to cover it up.”

These are all potentially good points, with at least two caveats. First, correlation isn’t causation – the UK’s low murder rates might not be caused by their gun control, and maybe not all communist countries inevitably end up like the USSR. Second, even things with some bad features can be overall net good. Trump could be a dishonest businessman, but still have other good qualities. Hillary Clinton may be crap at email security, but skilled at other things. Even if these facts are true and causal, they only prove that a plan has at least one bad quality. At best they would need to be followed up by an argument for why this particular flaw is really important.

I think the move from shaming to good argument is kind of a continuum. This level is around the middle. At some point, saying “I can’t believe you would support someone who could do that with her emails!” is just trying to bait Hillary supporters. And any Hillary supporter who thinks it’s really important to argue specifics of why the emails aren’t that bad, instead of focusing on the bigger picture, is taking the bait, or getting stuck in this mindset where they feel threatened if they admit there’s anything bad about Hillary, or just feeling too defensive.

Single studies are better than scattered facts since they at least prove some competent person looked into the issue formally.

“This paper from Gary Kleck shows that more guns actually cause less crime.”

“These people looked at the evidence and proved that support for Trump is motivated by authoritarianism.”

“I think you’ll find economists have already investigated this and that the minimum wage doesn’t cost jobs.”

“There are actually studies proving that money doesn’t influence politics.”

We’ve already discussed this here before. Scientific studies are much less reliable guides to truth than most people think. On any controversial issue, there are usually many peer-reviewed studies supporting each side. Sometimes these studies are just wrong. Other times they investigate a much weaker subproblem but get billed as solving the larger problem.

There are dozens of studies proving the minimum wage does destroy jobs, and dozens of studies proving it doesn’t. Probably it depends a lot on the particular job, the size of the minimum wage, how the economy is doing otherwise, etc, etc, etc. Gary Kleck does have a lot of studies showing that more guns decrease crime, but a lot of other criminologists disagree with him. Both sides will have plausible-sounding reasons for why the other’s studies have been conclusively debunked on account of all sorts of bias and confounders, but you will actually have to look through those reasons and see if they’re right.

Usually the scientific consensus on subjects like these will be as good as you can get, but don’t trust that you know the scientific consensus unless you have read actual well-conducted surveys of scientists in the field. Your echo chamber telling you “the scientific consensus agrees with us” is definitely not sufficient.

A good-faith survey of evidence is what you get when you take all of the above into account, stop trying to devastate the other person with a mountain of facts that can’t possibly be wrong, and start looking at the studies and arguments on both sides and figuring out what kind of complex picture they paint.

“Of the meta-analyses on the minimum wage, three seem to suggest it doesn’t cost jobs, and two seem to suggest it does. Looking at the potential confounders in each, I trust the ones saying it doesn’t cost jobs more.”

“The latest surveys say more than 97% of climate scientists think the earth is warming, so even though I’ve looked at your arguments for why it might not be, I think we have to go with the consensus on this one.”

“The justice system seems racially biased at the sentencing stage, but not at the arrest or verdict stages.”

“It looks like this level of gun control would cause 500 fewer murders a year, but also prevent 50 law-abiding gun owners from defending themselves. Overall I think that would be worth it.”

Isolated demands for rigor are attempts to demand that an opposing argument be held to such strict invented-on-the-spot standards that nothing (including common-sense statements everyone agrees with) could possibly clear the bar.

“You can’t be an atheist if you can’t prove God doesn’t exist.”

“Since you benefit from capitalism and all the wealth it’s made available to you, it’s hypocritical for you to oppose it.”

“Capital punishment is just state-sanctioned murder.”

“When people still criticize Trump even though the economy is doing so well, it proves they never cared about prosperity and are just blindly loyal to their party.”

The first is wrong because you can disbelieve in Bigfoot without being able to prove Bigfoot doesn’t exist – “you can never doubt something unless you can prove it doesn’t exist” is a fake rule we never apply to anything else. The second is wrong because you can be against racism even if you are a white person who presumably benefits from it; “you can never oppose something that benefits you” is a fake rule we never apply to anything else. The third is wrong because eg prison is just state-sanctioned kidnapping; “it is exactly as wrong for the state to do something as for a random criminal to do it” is a fake rule we never apply to anything else. The fourth is wrong because Republicans have also been against leaders who presided over good economies and presumably thought this was a reasonable thing to do; “it’s impossible to honestly oppose someone even when there’s a good economy” is a fake rule we never apply to anything else.

Sometimes these can be more complicated and ambiguous. One could argue that “Banning abortion is unconscionable because it denies someone the right to do what they want with their own body” is an isolated demand for rigor, given that we ban people from selling their organs, accepting unlicensed medical treatments, using illegal drugs, engaging in prostitution, accepting euthanasia, and countless other things that involve telling them what to do with their bodies – “everyone has a right to do what they want with their own bodies” is a fake rule we never apply to anything else. Other people might want to search for ways that the abortion case is different, or explore what we mean by “right to their own body” more deeply. Proposed without this deeper analysis, I don’t think the claim would rise much above this level.

I don’t think these are necessarily badly-intentioned. We don’t have a good explicit understanding of what high-level principles we use, and tend to make them up on the spot to fit object-level cases. But here they act to derail the argument into a stupid debate over whether it’s okay to even discuss the issue without having 100% perfect impossible rigor. The solution is exactly the sort of “proving too much” argument used in the last paragraph. Then you can agree to use normal standards of rigor for the argument and move on to your real disagreements.

These are related to fully general counterarguments like “sorry, you can’t solve every problem with X”, though usually these are more meta-debate than debate.

Sometimes isolated demands for rigor can be rescued by making them much more complicated; for example, I can see somebody explaining why kidnapping becomes acceptable when the state does it but murder doesn’t – but you’ve got to actually make the argument, and don’t be surprised if other people don’t find it convincing. Other times these work not as rules but as heuristics – for example “let people do what they want with their body in the absence of very compelling arguments otherwise” – and if those heuristics survive someone else challenging whether banning unlicensed medical treatment is really that much more compelling than banning abortion, they usually end up as high-level generators of disagreement (see below).

Disputing definitions is when an argument hinges on the meaning of words, or whether something counts as a member of a category or not.

“Transgender is a mental illness.”

“The Soviet Union wasn’t really communist.”

“Wanting English as the official language is racist.”

“Abortion is murder.”

“Nobody in the US is really poor, by global standards.”

It might be socially important what we call these things; for example, the social perception of transgender might shift based on whether it was commonly thought of as a mental illness or not. But if a specific argument between two people starts hinging on one of these questions, chances are something has gone wrong; neither factual nor moral questions should depend on a dispute over the way we use words. This Guide To Words is a long and comprehensive resource about these situations and how to get past them into whatever the real disagreement is.

Clarifying is when people try to figure out exactly what their opponent’s position is.

“So communists think there shouldn’t be private ownership of factories, but there might still be private ownership of things like houses and furniture?”

“Are you opposed to laws saying that convicted felons can’t get guns? What about laws saying that there has to be a waiting period?”

“Do you think there can ever be such a thing as a just war?”

This can sometimes be hostile and counterproductive. I’ve seen too many arguments degenerate into some form of “So you’re saying that rape is good and we should have more of it, are you?” No. Nobody is ever saying that. If someone thinks the other side is saying that, they’ve stopped doing honest clarification and gotten more into the performative shaming side.

But there are a lot of misunderstandings about people’s positions. Some of this is because the space of things people can believe is very wide and it’s hard to understand exactly what someone is saying. More of it is because partisan echo chambers can deliberately spread misrepresentations or cliched versions of an opponent’s arguments in order to make them look stupid, and it takes some time to realize that real opponents don’t always match the stereotype. And sometimes it’s because people don’t always have their positions down in detail themselves (eg communists’ uncertainty about what exactly a communist state would look like). At its best, clarification can help the other person notice holes in their own opinions and reveal leaps in logic that might legitimately deserve to be questioned.

Operationalizing is where both parties understand they’re in a cooperative effort to pin down exactly what they’re arguing about, where the goalposts are, and what all of their terms mean.

“When I say the Soviet Union was communist, I mean that the state controlled basically all of the economy. Do you agree that’s what we’re debating here?”

“I mean that a gun buyback program similar to the one in Australia would probably lead to less gun crime in the United States and hundreds of lives saved per year.”

“If the US were to raise the national minimum wage to $15, the average poor person would be better off.”

“I’m not interested in debating whether the IPCC estimates of global warming might be too high, I’m interested in whether the real estimate is still bad enough that millions of people could die.”

An argument is operationalized when every part of it has either been reduced to a factual question with a real answer (even if we don’t know what it is), or it’s obvious exactly what kind of non-factual disagreement is going on (for example, a difference in moral systems, or a difference in intuitions about what’s important).

The Center for Applied Rationality promotes double-cruxing, a specific technique that helps people operationalize arguments. A double-crux is a single subquestion where both sides admit that if they were wrong about the subquestion, they would change their mind. For example, if Alice (gun control opponent) would support gun control if she knew it lowered crime, and Bob (gun control supporter) would oppose gun control if he knew it would make crime worse – then the only thing they have to talk about is crime. They can ignore whether guns are important for resisting tyranny. They can ignore the role of mass shootings. They can ignore whether the NRA spokesman made an offensive comment one time. They just have to focus on crime – and that’s the sort of thing which at least in principle is tractable to studies and statistics and scientific consensus.

Not every argument will have double-cruxes. Alice might still oppose gun control if it only lowered crime a little, but also vastly increased the risk of the government becoming authoritarian. A lot of things – like a decision to vote for Hillary instead of Trump – might be based on a hundred little considerations rather than a single debatable point.

But at the very least, you might be able to find a bunch of more limited cruxes. For example, a Trump supporter might admit he would probably vote Hillary if he learned that Trump was more likely to start a war than Hillary was. This isn’t quite as likely to end the whole disagreement in one fell swoop – but it still gives a more fruitful avenue for debate than the usual fact-scattering.

High-level generators of disagreement are what remains when everyone understands exactly what’s being argued and agrees on what all the evidence says, but both sides still have vague and hard-to-define reasons for disagreeing anyway. In retrospect, these are probably why the disagreement arose in the first place, with a lot of the more specific points being downstream of them, serving as post-hoc justifications. These are almost impossible to resolve even in principle.

“I feel like a populace that owns guns is free and has some level of control over its own destiny, but that if they take away our guns we’re pretty much just subjects and have to hope the government treats us well.”

“Yes, there are some arguments for why this war might be just, and how it might liberate people who are suffering terribly. But I feel like we always hear this kind of thing and it never pans out. And every time we declare war, that reinforces a culture where things can be solved by force. I think we need to take an unconditional stance against aggressive war, always and forever.”

“Even though I can’t tell you how this regulation would go wrong, in past experience a lot of well-intentioned regulations have ended up backfiring horribly. I just think we should have a bias against solving all problems by regulating them.”

“Capital punishment might decrease crime, but I draw the line at intentionally killing people. I don’t want to live in a society that does that, no matter what its reasons.”

Some of these involve what social signal an action might send; for example, even a just war might have the subtle effect of legitimizing war in people’s minds. Others involve cases where we expect our information to be biased or our analysis to be inaccurate; for example, if past regulations that seemed good have gone wrong, we might expect the next one to go wrong even if we can’t think of arguments against it. Others involve differences in very vague and long-term predictions, like whether it’s reasonable to worry about the government descending into tyranny or anarchy. Others involve fundamentally different moral systems, like whether it’s okay to kill someone for a greater good. And the most frustrating involve chaotic and uncomputable situations that have to be solved by metis or phronesis or similar-sounding Greek words, where different people’s Greek words give them different opinions.

You can always try debating these points further. But these sorts of high-level generators are usually formed from hundreds of different cases and can’t easily be simplified or disproven. Maybe the best you can do is share the situations that led to you having the generators you do. Sometimes good art can help.

The high-level generators of disagreement can sound a lot like really bad and stupid arguments from previous levels. “We just have fundamentally different values” can sound a lot like “You’re just an evil person”. “I’ve got a heuristic here based on a lot of other cases I’ve seen” can sound a lot like “I prefer anecdotal evidence to facts”. And “I don’t think we can trust explicit reasoning in an area as fraught as this” can sound a lot like “I hate logic and am going to do whatever my biases say”. If there’s a difference, I think it comes from having gone through all the previous steps – having confirmed that the other person knows as much as you do, might be your intellectual equal, and is equally concerned about doing the moral thing – and realizing that both of you alike are controlled by high-level generators. High-level generators aren’t biases in the sense of mistakes. They’re the strategies everyone uses to guide themselves in uncertain situations.

This doesn’t mean everyone is equally right and okay. You’ve reached this level when you agree that the situation is complicated enough that a reasonable person with reasonable high-level generators could disagree with you. If 100% of the evidence supports your side, and there’s no reasonable way that any set of sane heuristics or caveats could make someone disagree, then (unless you’re missing something) your opponent might just be an idiot.

Some thoughts on the overall arrangement:

1. If anybody in an argument is operating on a low level, the entire argument is now on that low level. First, because people will feel compelled to refute the low-level point before continuing. Second, because we’re only human, and if someone tries to shame/gotcha you, the natural response is to try to shame/gotcha them back.

2. The blue column on the left is factual disagreements; the red column on the right is philosophical disagreements. The highest level you’ll be able to reach is the lower of your positions on the two columns.

3. Higher levels require more vulnerability. If you admit that the data are mixed but seem to slightly favor your side, and your opponent says that every good study ever has always favored his side plus also you are a racist communist – well, you kind of walked into that one. In particular, exploring high-level generators of disagreement requires a lot of trust, since someone who is at all hostile can easily frame this as “See! He admits that he’s biased and just going off his intuitions!”

4. If you hold the conversation in private, you’re almost guaranteed to avoid everything below the lower dotted line. Everything below that is a show put on for spectators.

5. If you’re intelligent, decent, and philosophically sophisticated, you can avoid everything below the higher dotted line. Everything below that is either a show or some form of mistake; everything above it is impossible to avoid no matter how great you are.

6. The shorter and more public the medium, the more pressure there is to stick to the lower levels. Twitter is great for shaming, but it’s almost impossible to conduct a good-faith survey of evidence there, or to operationalize a tricky definitional question.

7. Sometimes the high-level generators of disagreement are other, even more complicated questions. For example, a lot of people’s views come from their religion. Now you’ve got a whole different debate.

8. And a lot of the facts you have to agree on in a survey of the evidence are also complicated. I once saw a communism vs. capitalism argument degenerate into a discussion of whether government works better than private industry, then whether NASA was better than SpaceX, then whether some particular NASA rocket engine design was better than a corresponding SpaceX design. I never did learn whether they figured out whose rocket engine was better, or whether that helped them solve the communism vs. capitalism question. But it seems pretty clear that the degeneration into subquestions and discovery of superquestions can go on forever. This is the stage a lot of discussions get bogged down in, and one reason why pruning techniques like double-cruxes are so important.

9. Try to classify arguments you see in the wild under this system, and you find that some fit and others don’t. But the main thing you find is how few real arguments there are. This is something I tried to hammer in during the last election, when people were complaining “Well, we tried to debate Trump supporters, they didn’t change their mind, guess reason and democracy don’t work”. Arguments above the first dotted line are rare; arguments above the second are basically nonexistent in public unless you look really hard.

But what’s the point? If you’re just going to end up at the high-level generators of disagreement, why do all the work?

First, because if you do it right you’ll end up respecting the other person. Going through all the motions might not produce agreement, but it should produce the feeling that the other person came to their belief honestly, isn’t just stupid and evil, and can be reasoned with on other subjects. The natural tendency is to assume that people on the other side just don’t know (or deliberately avoid knowing) the facts, or are using weird perverse rules of reasoning to ensure they get the conclusions they want. Go through the whole process, and you will find some ignorance, and you will find some bias, but they’ll probably be on both sides, and the exact way they work might surprise you.

Second, because – and this is total conjecture – this deals a tiny bit of damage to the high-level generators of disagreement. I think of these as Bayesian priors; you’ve looked at a hundred cases, all of them have been X, so when you see something that looks like not-X, you can assume you’re wrong – see the example above where the libertarian admits there is no clear argument against this particular regulation, but is wary enough of regulations to suspect there’s something they’re missing. But in this kind of math, the prior shifts the perception of the evidence, but the evidence also shifts the perception of the prior.

Imagine that, throughout your life, you’ve learned that UFO stories are fakes and hoaxes. Some friend of yours sees a UFO, and you assume (based on your priors) that it’s probably fake. They try to convince you. They show you the spot in their backyard where it landed and singed the grass. They show you the mysterious metal object they took as a souvenir. It seems plausible, but you still have too much of a prior on UFOs being fake, and so you assume they made it up.

Now imagine another friend has the same experience, and also shows you good evidence. And you hear about someone the next town over who says the same thing. After ten or twenty of these, maybe you start wondering if there’s something to all of these UFO stories. Your overall skepticism of UFOs has made you dismiss each particular story, but each story has also dealt a little damage to your overall skepticism.
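To make the arithmetic concrete, here’s a minimal sketch of that updating process in Python. All the numbers are made up for illustration – the 1-in-1000 prior, the assumption that each story is three times likelier in a world where UFOs are real than in a world of hoaxes, and the treatment of the stories as independent evidence:

```python
def update_odds(prior_prob: float, likelihood_ratio: float, n_stories: int) -> float:
    """Posterior probability after n_stories independent Bayesian updates."""
    odds = prior_prob / (1 - prior_prob)    # convert probability to odds
    odds *= likelihood_ratio ** n_stories   # each independent story multiplies the odds
    return odds / (1 + odds)                # convert back to probability

prior = 0.001  # lifelong skeptic: 1-in-1000 chance UFOs are real (made-up number)
lr = 3.0       # each story is 3x likelier if UFOs are real (made-up number)

for n in (1, 5, 10, 20):
    print(f"after {n:2d} stories: P(UFOs real) = {update_odds(prior, lr, n):.3f}")

# after  1 stories: P(UFOs real) = 0.003
# after  5 stories: P(UFOs real) = 0.196
# after 10 stories: P(UFOs real) = 0.983
# after 20 stories: P(UFOs real) = 1.000
```

One story barely moves the needle – that’s the prior doing its job of making you dismiss it. But a couple dozen stories wear the prior itself down – that’s the evidence doing its job on the prior.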

I think the high-level generators might work the same way. The libertarian says “Everything I’ve learned thus far makes me think government regulations fail.” You demonstrate what looks like a successful government regulation. The libertarian doubts, but also becomes slightly more receptive to the possibility of those regulations occasionally being useful. Do this a hundred times, and they might be more willing to accept regulations in general.

As the old saying goes, “First they ignore you, then they laugh at you, then they fight you, then they fight you half-heartedly, then they’re neutral, then they grudgingly say you might have a point even though you’re annoying, then they say on balance you’re mostly right although you ignore some of the most important facets of the issue, then you win.”

I notice SSC commenter John Nerst is talking about a science of disagreement and has set up a subreddit for discussing it. I only learned about it after mostly finishing this post, so I haven’t looked into it as much as I should, but it might make good followup reading.


435 Responses to Varieties Of Argumentative Experience

  1. Sniffnoy says:

    Tangential, but I think Graham’s original hierarchy needs some revision. I’ll copypaste from where I wrote about this earlier:

    I would like to suggest two revisions to the disagreement hierarchy.

    Revision 1: Add level DH4.5: Nonconstructive refutation.

    The archetypical example of refuting an argument is finding a hole in it — “Your inference of P is unjustified given only what you’ve established so far.” (Or, better yet, “Your inference of P is unjustified given only what you’ve established so far; indeed, here is an example where what you’ve established so far holds, but P does not.”) But it’s possible to show an argument wrong without actually finding a hole in it. The classic example is showing that an argument proves too much. If an argument proves too much, you can conclude that it’s wrong — but you still don’t necessarily know exactly why it’s wrong. It’s still a form of refutation and should be above counterargument, but it’s not as good as a constructive refutation.

    Revision 2: Replace DH6, “Refuting the central point”, with “Refutation and counterargument”.

    “Refuting the central point” doesn’t really strike me as qualitatively different from “refutation”. Honestly to my mind, if you’re refuting some peripheral thing, that hardly even counts. When I argue I like to spot the other person lots of points because I want to get to the central disagreement as quickly as possible; arguing over peripheral stuff is mostly a waste of time. Of course, sometimes peripheral stuff becomes central later, but you can always un-spot a point.

Anyway, point is, what is qualitatively different is refuting and counterarguing. If you only refute but you don’t counterargue, all you’ve established is that the other person’s argument is wrong — not that your own position is right! Refutation does not automatically include counterargument, and I think this is worth singling out as a separate higher level.

  2. nameless1 says:

Let’s clarify something: outside, in the big room with the blue ceiling called meatspace, pretty much all high school lunch room debates are about social shaming and there is not much else. If you demand rigor, you are a dork and you lost. In fact, most people participating in them are probably not even aware of the higher levels.

    When the Internet began, it was a highly intellectual space. Think UseNet, mostly professors. Of course the standards were high.

    On the other hand, Twitter is now not much different from the high school lunchroom.

    Blogs are in between. Or Reddit.

    You should be aware where you are. Never demand rigor on Twitter. *rolls eyes* is a good argument on Twitter. It is literally the same as rolling eyes in the high school lunchroom.

    • Mary says:

      Still remember the person whose idea of contributing to an online discussion was “You’re RUDE” — for asking someone to back up what she said.

  3. Michael Handy says:

    I propose this pyramid (or a revised version thereof) be posted at the start of all SSC comment threads.

I wonder if there might be use in a study of how disagreements, responses, and rhetorical techniques (like Schopenhauer’s guide) intersect, and how to bump a discussion that’s falling down the pyramid back up using rhetorical techniques.

    • Radu Floricica says:

      Unfortunately, it’s not as easy to “grok” as the one by Graham. Needs reading the whole article.

      Offtopic, do you know of anything similar to Schopenhauer’s guide?

    • Edward Scizorhands says:

      What if people could denote a background shade color for their posts, with one color for meta-arguments, and another color for the actual argument? Unless you are near the top of a chain you usually shouldn’t change color from the comment above you.

      I feel it would probably fail because these things always fail. Maybe this comment should be colored with “High-level generators of disagreement.”

      • AdamOfCascadia says:

        This comment would be colored with high-level generators of disagreement, I think.

        I don’t think forcing this scheme onto every slatestarcodex comment is a good idea. It would make the comments less welcoming to newcomers. Plus, not every comment needs to be a structured debate. Perhaps this would be something people decide to do when debating. I think one difficulty is accurately assessing the color in question. Everyone wants to think they are getting at the root of the issue, but often, we’re not.

    • Yaleocon says:

      Let me go on record as saying that this is perhaps the worst idea in the universe, not because the pyramid is incorrect, but because it will create tons and tons of meta-debate.

      You know how sometimes, people will look at an argument, accuse it of one “fallacy”, and then leave it at that? “Correlation does not imply causation, your argument is invalid!” This is a terrible form of argument, closer to meta-debate than true debate. (Debate would go something more like “correlation can suggest causation, but there are lots of confounders here, and I’m wondering if you can back up what seems like an important assertion of causation more fully, or else make your argument without leaning on it.”)

      Now imagine everyone having an easy reference for accusing one another of meta-debating, or using single facts or single studies, or not rising above the level of “disputing definitions”. And then imagine them disagreeing about whether that’s really what’s going on, and arguing about that instead of the actual point. I don’t want to see an SSC where that’s normalized.

      If people were perfect debaters, then they would be able to use the pyramid well, and it wouldn’t create more meta-debate; but then again, if they were perfect debaters, they wouldn’t have to use the pyramid at all. We’re not perfect, which is why we fuck up in the first place; and even admitting that the pyramid is correct, I think it might haunt the comment section of this blog for years to come by creating lots and lots of meta-debate about who is arguing at what level of the pyramid.

      • John Schilling says:

        Let me go on record as saying that this is perhaps the worst idea in the universe, not because the pyramid is incorrect, but because it will create tons and tons of meta-debate.

        To make it the worst idea in the universe, you’d need to reframe the pyramid as a Bingo card.

      • Michael Handy says:

I feel that since people already meta-debate about what level of the pyramid they are on, it might speed those sub-debates up a bit.

Of course, by the time you accept this diagram is valid, you’re likely at least a third of the way up it.

    • The Nybbler says:

      I’m pretty sure most SSC debates need the sphinx more than the pyramid.

  4. b_jonas says:

If there’s one thing I learnt in my previous job, it’s that when I disagree with the opinion of an intelligent co-worker, it’s worth asking for clarification. Very often the disagreement is simply a communication failure – that is, I misunderstood what the co-worker was saying, and after clarifying the details, it turns out we actually hold mostly the same opinion, just phrased differently.

    • cuke says:

      I’ve been amazed at how much a few reflective listening skills can improve a conversation. Just to stop long enough to say “Here’s what I think you’re saying, is that right?” before diving in with one’s opinion goes a really long way in a potential disagreement.

    • dick says:

      It struck me that what Scott described as “Clarifying” seems to be the de facto bar between low and high quality discourse, but I’ve never seen it explicitly described as such before now, or called out as a requirement for participation. For example, many fora have adopted some variation of the Principle of Charity as a local guideline (e.g. Hacker News: “Please respond to the strongest plausible interpretation of what someone says, not a weaker one that’s easier to criticize”) but I’ve never seen one that goes on to say, “…and also paraphrase their point, and ask them if that is indeed what they are arguing.”

      It seems like a super-weapon, good for combatting both intentional mischaracterization (straw-men, by far the most common example of unproductive argumentation I see online) and honest misunderstandings. In particular, I’m wondering about the Vinay Gupta thread – some of the ingredients that led to that thread going off the rails seem like they could’ve been cleared up with good rhetorical hygiene. I wonder if a strong norm around explicit Clarifying, say by prefacing all combative comments with “It sounds like you’re saying X, is that right? If so, you’re wrong because….”, would’ve led to a better outcome?

      • Alsadius says:

        I like it in principle, but it’s easy to do badly. If you do it badly, it comes across as “It sounds like you want to kick babies, is that right?”, and that can make the argument go downhill even faster than not trying.

        • cryptoshill says:

          One of the things that brought Jordan Peterson to the limelight was a talk show host attempting to use this exact method on him, repeatedly.
          “So what you’re saying is that women should just accept being paid less?”
(Apologies for interjecting with culture-war, but the interview is almost a perfect example) – link here: https://www.youtube.com/watch?v=aMcjxSThD54

  5. onyomi says:

    Higher levels require more vulnerability.

    I think this is a very important point.

    It’s easier for me to say “look at these statistics proving more libertarian countries have higher life expectancy” than to say “since childhood, I’ve always felt a strong revulsion and defiance toward what I perceive as arbitrary, unjustified authority and/or social bullying. As an adult, much of what I see states doing and much of my own interactions with the state pattern-match to these qualities of human behavior I’ve always hated. Over the years, I’ve found countless examples of what seem like empirical data and real-life experience backing up my intuitions (of course, I was looking for them), so even if you can show me a different study proving big government works better in this case you’re not likely to shake my underlying conviction that big government is generally just bad.”

    Yet I think one is also more likely to arrive at some form of mutual understanding when willing to frankly discuss reasons for disagreement of the latter sort, than when staying at the level of “your study says this, but my study says that,” or certainly at the level of social shaming.

But the reason this takes more vulnerability is that if I am willing to admit my reasons for believing x aren’t fully rational and the interlocutor isn’t, I’ve just put myself in a weak position, rhetorically. Hence people would prefer to use facts to imply that they’re being fully rational while their opponents are guided by gut reactions and hunches, and/or remain at the bottom and/or Sphinx levels, which basically allow you to argue from the gut but without exposing yourself the way the higher level does.

    • moscanarius says:

      I think you’re right, but I also think Scott meant “vulnerability to refutation”, not “emotional vulnerability”.

      The higher levels of argumentation require using more precise definitions and counting the evidence that supports the points, instead of just asserting them vaguely. That’s bound to make your argument more vulnerable to attacks, as your opponent can now focus on the specific points you brought instead of guessing what your position entails.

The lower level assertion “the elites hate the people” is less vulnerable to criticism/refutation than the higher level “the elites, defined as [owners of production means / people making more than X per year / people with de facto political privileges], hate the poor, and these four papers plus my life story are the reason why I believe this”.

      • Doesntliketocomment says:

I am pretty sure that the vulnerability Scott is talking about is not the chance of refutation but social vulnerability – the chance that a bad actor could use your willingness to entertain thinking at that level as proof that you are secretly endorsing socially unacceptable views (see Peterson, Jordan). When discussing things at a higher level, the chance that you will inadvertently produce a negative sound bite is much, much higher. This is one of the reasons recent politicians (with one notable exception) seem to sound so affected: they are trying hard never to say anything that can be used out of context.

        • cuke says:

          I think so too, that this is about social/emotional vulnerability, not logical vulnerability (though there’s likely entanglement between these).

          It would be easy for someone to say back to onyomi’s first long paragraph, “So the basis of your libertarian position is not evidence of what works in the world but because you were bullied as a child?”

On the other hand, when I read onyomi’s paragraph, it immediately makes me more interested in talking with them about their libertarian views — which is not a feeling I usually have about libertarianism in general. The reason is that I also have a button about unjust, arbitrary authority because of being mistreated by my father. For whatever reason, those same feelings did not lead me to libertarianism, but I can see they definitely influenced my political views. And I’ll have to think about this more; those feelings may even contribute to some views I hold being more libertarian than other views I hold.

Onyomi disclosing this (and I’m taking it as hypothetical in this context) makes me identify with them in a way I wouldn’t have otherwise and makes me think about what brought me to my own views in a different way. Maybe other people have a different response, and this self-disclosure would undercut that person’s credibility so much that no further discussion is considered worthwhile. I’m much more likely to trust someone’s reasoning if they seem aware of their potential biases.

          Maybe discussants need to agree at the outset whether consideration of how personal experience shapes our view on this topic is something we want to include in this conversation or not. And then the agreement is that both parties need to offer it up so the vulnerability is shared. (side note: I suspect finding structured ways for interlocutors to increase vulnerability gradually and symmetrically could be helpful to good conversations about difficult topics — maybe diplomats know something about this)

          And then the agreement is something like “We’re going to include discussion of the non-scientific factors that may inform our positions and trust each other not to weaponize that information or to reduce all of our views to being the product of those factors.”

          My assumption is that significant personal experiences almost always underlie our most deeply held beliefs about the world. At the same time, I’d consider it unfair to treat this personal experience angle as if it obliterates everything else the person has to say. Disclosing it though completely changes for me what feels possible in the conversation.

          • Doesntliketocomment says:

            finding structured ways for interlocutors to increase vulnerability gradually and symmetrically

            This sounds a bit like dating.

    • Baeraad says:

      It’s easier for me to say “look at these statistics proving more libertarian countries have higher life expectancy” than to say “since childhood, I’ve always felt a strong revulsion and defiance toward what I perceive as arbitrary, unjustified authority and/or social bullying. As an adult, much of what I see states doing and much of my own interactions with the state pattern-match to these qualities of human behavior I’ve always hated. Over the years, I’ve found countless examples of what seem like empirical data and real-life experience backing up my intuitions (of course, I was looking for them), so even if you can show me a different study proving big government works better in this case you’re not likely to shake my underlying conviction that big government is generally just bad.”

      Well, for what it’s worth – much like Scott predicted, after hearing you say that I respect your honesty and understand where you’re coming from even though I disagree entirely. 🙂

I will try to match it and admit that in my case, I felt from childhood that the authorities (parents, teachers) were fundamentally on my side, had expectations of me that were made clear and were possible for me to fulfil, and had a plan that, while sometimes flawed, at least made sense, while the rebels (schoolyard bullies and troublemakers) were constantly hostile to me for arbitrary reasons that they seemed to make up as they went along. As an adult, I find that those two groups map, respectively, to the government (which makes its rules plain and puts them in writing, and which has in its admittedly ham-fisted way done much to help me get by in the world) and to employers and (for lack of a better term) “authorities on what is socially acceptable” (both of whom keep saying one thing while they mean another, demand the impossible from me, and are hostile to me for not measuring up to some standard that is always left infuriatingly vague).

      So… yeah, I make no claim to have come by my opinion from rational analysis of statistics.

      • onyomi says:

        The reactions to this comment describing non-rational reasons for having views different from my own are both encouraging and discouraging:

        Encouraging in that commenters here seem willing to engage in this way, and that they share my intuition that discussion on this level seems more likely to generate empathy or intuitive understanding of ideological opponents’ views, if not necessarily agreement.

Discouraging in that it confirms my sense that many, if not most, disagreements about tribal/political issues are very hard, if not impossible, to resolve at the level of purely rational debate.

        This takes me back to a thread not too long ago where I suggested evo-psych reasons for the, to me, inexplicable continued popularity of socialism. This was met with at least some accusations of Bulverism, not to mention descent into debate about object-level merits of capitalism and socialism, which was very much not what I was interested in.

        So I am wondering if a different approach to Bulverism is not in order, with the caveat that it can definitely be misused: “responsible Bulverism”? “Mutually Assured Bulverism” (this is starting to sound like a sex act…)?

        That is, if we stipulated that Baeraad, Cuke, and I were a three-member subcommittee, lived on an island, or were otherwise somehow in a position where we had to reach a consensus on some kind of policy rather than just going our own ways (and assuming we are only making policy based on what we actually think best, as opposed to factors like pleasing various constituencies), I predict that willingness to talk about this kind of thing (along with facts, studies, etc.) would produce better results.

        Put another way, feelings, intuitions, and predispositions are facts about the world. A rational analysis cannot fail to take them into account, and rationality, in fact, might not function without them: apparently people with damage to emotional processing centers (Phineas Gage) don’t become super-rational Vulcans, they just become people who make very bad life choices, because they don’t know where to focus their attention, how to evaluate risk, etc.

        I’m not saying go full “facts-don’t-matter” Scott Adams, but I do think he’s right that psychological factors like the way an issue is framed are probably way more important than almost anyone realizes. My current proposed solution is increased bilateral willingness to discuss facts about one’s own introspection, in addition to facts about the world.

        • cuke says:

          I really like this sentence of yours:

          “Put another way, feelings, intuitions, and predispositions are facts about the world. A rational analysis cannot fail to take them into account, and rationality, in fact, might not function without them…”

          How might our conversations look different if we took seriously the idea that our predispositions behave like facts for each of us?

          I often think of this in terms of a person’s default mode. The set of beliefs and assumptions they carry about themselves and others and the world. This default mode is going to entirely run a person in the background if they aren’t aware of how it works. Awareness of it at finer and finer grained levels is going to open up the possibility of not letting it run automatically. The less the default mode gets to run automatically, without question, the more flexibility that enables over time to make new choices.

          And this one:

          “My current proposed solution is increased bilateral willingness to discuss facts about one’s own introspection.” I second that.

          Another potential source of good news to my mind is that for many of us our feelings, intuitions, and predispositions don’t all run in the same direction, so that all kinds of internal contradictions are possible.

          I am oddly very eager to please authority and very eager to resist it, depending on the circumstances. After more than five decades, I haven’t quite figured out the rules for what factors send me one way or the other. I am truly ambi-valent towards authority. Perhaps most of us are, given the unevenness of parenting and schooling. I also don’t think my political/moral/philosophical views can be reduced to these facts of my psychology.

          I would venture to guess though that most people’s political views are shaped in some important ways by early experiences with authorities and caregivers and how trustworthy or not we found them to be.

          Some of it I have to believe is hard-wired too though. And that means our genetics can run totally counter to our early experiences in ways that produce yet more interesting combinations.

        • onyomi says:

Thinking a bit more about the practical application of what I just wrote above: if what I wanted in the old thread was to hear socialists and people with any warm-fuzzy feelings about socialism and socialism-adjacent ideas explain to me why they feel those, to me, inexplicable positive inclinations, maybe a better strategy would have been to say something like:

          “Here’s why, in addition to philosophical arguments, statistics, and studies, I think there is a deep part of me predisposed to go ‘boo socialism’ regardless of the particular case I’m considering; can anyone whose internal applause lights tend to go on with regard to socialism explain/describe to me those sentiments?”

          This way it comes off less as a unilateral trap (asking the opponent to give you free ammunition, in effect) and more as an invitation to mutual understanding. Instead of “My position is backed up by all the facts; therefore I suspect people who disagree with me are being irrational; please tell me your irrational reasons for disagreeing with me,” something more like “given the facts I know and personal experiences/predispositions, I feel this way about x and can’t understand how anyone could feel that way; can anyone who feels that way about x describe that experience and, if possible, its origins?”

          • Enkidum says:

            Uh so apparently I wrote a response to your hypothetical post in the “old thread”, without realizing this is a new thread. Well, here y’go.

            I grew up with pretty damn left-wing parents, and some fairly strong left-wingers of various persuasions in my extended family. I would describe myself as a democratic socialist, I suppose. Probably the closest thing to a come to Jesus moment for me was reading Orwell. Apologies for self-promotion, but I think it’s relevant? Anyways, I wrote about it here. It really was a profound case of emotional resonance for me. There’s obviously a lot more to my beliefs but it’s maybe an interesting starting point.

          • Probably the closest thing to a come to Jesus moment for me was reading Orwell.

            I also like Orwell. But Orwell’s position, at least in his later writing, seems to be that it will be very difficult to make the sort of democratic socialism he wants work, but that all other alternatives are worse. I’m thinking in part of his joint review of The Road to Serfdom and a book by Zilliacus.

            Western Europe and the U.S. didn’t go socialist after WWII and they didn’t end up with the sort of nightmare scenario Orwell seems to have expected—basically monopoly plus a return to Great Depression conditions. Does that mean that his argument for socialism as the least bad alternative fails?

          • Enkidum says:

            I’d argue along the lines of what I think Scott was trying to get at with the Fabian Society post: that many aspects of what was then radical socialism have now become central components of our society. Others haven’t. Similarly for capitalism.

            So Western Europe didn’t “go socialist”, but it went more socialist. Orwell’s extreme assumptions certainly may not have been warranted, but it’s easy to be an extremist when you’ve spent twenty years watching the rise of fascism.

          • cuke says:

            “given the facts I know and personal experiences/predispositions, I feel this way about x and can’t understand how anyone could feel that way; can anyone who feels that way about x describe that experience and, if possible, its origins?”

            Just wanted to pull that question out because it seems like a really useful way to initiate a conversation about difficult issues. I plan on using it somewhere later, and I promise to give credit, onyomi.

        • paultorek says:

          Agreed about Phineas Gage and the lesson to draw. Relatedly, I think that metis and phronesis and those other Greek words have a much larger role to play than what Scott seems to hope for. Which isn’t to say that logic-chopping shouldn’t be tried for all it’s worth: low hanging fruit shouldn’t be left to rot.

        • If it’s any consolation, I’ve had the opposite experience from yours as well – I’ve always gotten along really well with authority figures – but I share many of your political views, being a fellow anarcho-capitalist.

      • Aapje says:

        @Baeraad

        I will try to match it and admit that in my case, I felt from childhood that the authorities (parents, teachers) were fundamentally on my side, had expectations on me that were made clear and were possible for me to fulfil, and had a plan that, while sometimes flawed, at least made sense, while the rebels (schoolyard bullies and troublemakers) were constantly hostile to me for arbitrary reasons that they seemed to make up as they went along.

        More or less the same here, although I mainly trusted my peers less than the authorities and didn’t necessarily trust the latter that much on an absolute, rather than relative level.

        • In my case I think I trusted my parents, had reservations about both my peers and my teachers. I don’t remember seeing the latter groups as either “fundamentally on my side” or “constantly hostile to me.”

  6. timujin says:

    Second, because – and this is total conjecture – this deals a tiny bit of damage to the high-level generators of disagreement. I think of these as Bayesian priors; you’ve looked at a hundred cases, all of them have been X, so when you see something that looks like not-X, you can assume you’re wrong – see the example above where the libertarian admits there is no clear argument against this particular regulation, but is wary enough of regulations to suspect there’s something they’re missing. But in this kind of math, the prior shifts the perception of the evidence, but the evidence also shifts the perception of the prior.

    I think the high-level generators might work the same way. The libertarian says “Everything I’ve learned thus far makes me think government regulations fail.” You demonstrate what looks like a successful government regulation. The libertarian doubts, but also becomes slightly more receptive to the possibility of those regulations occasionally being useful. Do this a hundred times, and they might be more willing to accept regulations in general.

    This only really works if your opponent is truly Bayesian. For most people, this would only strengthen their beliefs, as per The Cowpox of Doubt / attitude inoculation.

    • Alsadius says:

      I don’t think that’s fair. Non-Bayesians do sometimes change their minds. It’s not usually a conscious process, and it’s probably less likely than a rationalist doing so, but it still happens. The Cowpox of Doubt is about only ever experiencing bad counter-arguments, whereas if you can get to the top of the pyramid you’re giving good arguments against a point. No one argument in this vein is going to carry the day (unless you have an opponent who is both unusually scrupulous and who has never heard that argument before), but large numbers of them will often tell in time.

      • Simon_Jester says:

        I agree.

        The Bayesian effect on people’s priors that Scott discusses in this article isn’t caused by your debating partner being a Bayesian philosopher. It’s caused by your debating partner having a Bayesian brain. Brains do in fact run on a semi-Bayesian system, or they wouldn’t work at all.

        The system is generally very bad at solving any specific, abstract example of a Bayesian problem, much as most people can’t solve equations of motion when you set them out in front of them on pencil and paper. But it still routinely performs Bayesian-like analysis of problems, just as people routinely catch a ball even though they can’t consciously solve its equations of motion.

        Our entire sensory system revolves around prior expectations and our brain’s ability to match stimuli to the most likely pattern available in our consciousness given its list of priors.

        Our ability to navigate the physical and social world depends on us having a mix of ultra-high-confidence priors (“if I press the gas pedal, the car accelerates”), high-confidence priors (“the other drivers are mostly trying to avoid collisions”), and low-confidence priors undergoing experimentation (“this is the fastest route to work, but if I get caught in a traffic jam eight times in a row then I’m probably wrong about this and should find a new route”). If we never updated our priors we’d never *learn*.

        And in that spirit, if you provide enough experiences and reasons to change a prior, most people will change that prior in the direction of what you provide. The precise value of ‘enough’ may be frustratingly high, or movement in that direction may be slower than you like, but it’s not a null effect.
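        A minimal sketch of that arithmetic, in Python – the prior and the likelihood ratio here are invented numbers for illustration, not a model of anyone’s actual beliefs:

            # One Bayes update in odds form: posterior odds = prior odds * LR.
            def update(prior, likelihood_ratio):
                odds = prior / (1 - prior)
                odds *= likelihood_ratio
                return odds / (1 + odds)

            p = 0.01  # a very skeptical prior: "this kind of thing almost never works"
            for i in range(10):
                # suppose each demonstration is twice as likely if the claim is true
                p = update(p, 2.0)
                print(f"after demonstration {i + 1}: p = {p:.3f}")

        The belief barely moves for the first few demonstrations, then crosses 0.5 around the seventh – frustratingly slow, as noted above, but not a null effect.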

        • Nancy Lebovitz says:

          I’m reasonably sure I’m more likely to change my priors if I get reasons to do so from independent sources, or at least apparently independent sources.

          No one of the sources has total responsibility or gets total credit for changing my mind.

    • Enkidum says:

      Perhaps we’re all Bayesians in some sense? At least those willing to rise above the dotted lines in Scott’s triangle.

      • RC-cola-and-a-moon-pie says:

        In my experience, over 90% of references to Bayes in practical discussions like this one are utterly superfluous. In nearly all such cases one could simply refer, e.g., to considering new evidence and adjusting one’s beliefs according to its strength. I think it’s quite rare that jargony references to updating priors and the like add anything meaningful.

        • Enkidum says:

          Agreed, actually.

          I think the non-superfluous references to Bayesianism are along the lines of Simon_Jester’s use of it to refer to just the way our brains work. I use the jargon because it’s the default in my field, but I’m not sure that it’s terribly useful in practical discussions most of the time, as you say. So perhaps I should change my *ahem* priors about what language is appropriate.

        • Barr says:

          Not only that, but there doesn’t exist a way in the literature to incorporate Bayesianism into a huge number of functions we expect from dialectical reasoning.

          There is no way to use Bayesianism (or, more generally, Aumann-style dialectic) to do theory-building, like constructing a model for a problem. Likewise, it’s useless for mathematical deduction. How do you use Bayesian updating to improve your confidence in a mathematical statement?

          And of course, there is no way to apply Bayesianism to ethical statements, e.g. “we should debate in a generous way so as to converge on something closer to the truth”.

          I feel like 90% of this stuff was figured out almost a hundred years ago through the work and failures of the positivists, especially Wittgenstein.

          • Simon_Jester says:

            You can’t apply the probability-crunching mathematical version of Bayes’ theorem to these cases.

            You can apply the broader concept of “I have opinions, and I believe they are probably-correct but am not literally 100% certain they are correct.” Because anything you are literally 100% certain of, you’d be incapable of changing your mind on.

            So when studying a mathematical proof, you start out with a probability of [negligible] that the conclusion is true. Then you skim the proof and you update it to [pretty high number], assuming it was in a credible source. Then you pore over the line-by-line details and update it to [much higher number], by which point you’re probably willing to use the result yourself without undue fear of wasting time. Then you really pore over it, to the point where you’ve internalized it, and you update your probability that it’s true to [1 – negligible].

            You’re still not in a binary belief/disbelief state, because (again) if you were, it’d be literally impossible for you to entertain the notion of a disproof of your new theorem.

            A morality argument is an even better example, the trick being that you can’t just model the probability you assign to a prior as “this is the likelihood that this theory of ethics is the One True Way.” The value you assign there influences not just your abstract opinions, but also things like how willing you are to actually live by your theory, and how strongly you expect other people to follow it. A moral law that you are 99.999999% certain applies is a law you will be willing to inconvenience yourself to follow (insofar as you ever are) and a law you will be put out at others for ignoring.

          • Barr says:

            A true Bayesian is 100 percent certain not to hold beliefs with 0 or 100 percent certainty.

            The broader concept of “I have opinions, but they’re not 100% or 0%” and thereby always assailable by evidence has a lot of problems with it. And I would argue that it’s not even really in the spirit of Bayesianism.

            For mathematical cases, consider how a Bayesian would have to reason by induction. For instance, it has been conjectured that there are infinitely many twin primes. But how certain should we be of this conjecture? Any schema for updating our belief in a Bayesian way will arrive at one of two conclusions: either our certainty is just our prior (if we disregard all the evidence), or it’s exponentially close to 1 (if we give the evidence virtually any credence at all, unless we apply some strange rule).
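            A toy sketch of that dichotomy, treating each newly verified twin-prime pair as one piece of evidence (all numbers invented):

                # Posterior after n independent updates with likelihood ratio lr.
                def posterior(prior, lr, n):
                    odds = (prior / (1 - prior)) * lr ** n
                    return odds / (1 + odds)

                # Give each confirmation even a sliver of evidential weight...
                print(posterior(0.5, 1.001, 100_000))  # ~1.0: certainty explodes
                # ...or give each confirmation no weight at all, and the
                # posterior never leaves the prior:
                print(posterior(0.5, 1.0, 100_000))    # 0.5 exactly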

            Neither of these conclusions seems congenial to how mathematics is actually done. Moreover, the whole Bayesian approach doesn’t fit with the search for better schemas or worldviews.

            Worldviews are completely irreducible to states of affairs or probability weights over them; they tell us how to weigh evidence in the first place.

    • Doesntliketocomment says:

      Isn’t the point of Bayesian reasoning that you are practicing it whether you think you are or not? I thought it was supposed to be inescapable, like evolution.

  7. John Nerst says:

    Thanks for the shoutout 🙂

    I thought of this post as an excellent example of erisology while reading it, and was pleasantly surprised to find out that you knew about the concept when I got to the end. I’ve carefully avoided calling it the science of disagreement, though, so far favouring the study of disagreement, because much of it (the stuff I do, anyway) doesn’t meet scientific standards. Also, it might fit in better with the humanities than the sciences (although this sort of goes against the whole idea of transcending intellectual gulfs), because the natural way to do it is most likely philosophy-like analysis and history-like textual interpretation. The description on my blog is more of a wishlist – but yeah, some actual science would be cool.

    Dividing the pyramid into an empirical and a philosophical half is a great idea. And even more important is how the lower levels are actually downstream from the highest – it’s almost as if the upper grey triangle is actually “behind” the rest of the pyramid, propping it up, but you can only see it by climbing all the way up to the top level and getting an unobscured view.

    It seems that for disagreements, Plato’s Cave might be literally true.

    • emblem14 says:

      Your analysis of the Harris/Klein disagreement was epic. I’d like to think there could be a way to get them both to read it and respond to the critique! After listening to their podcast, I too found their inability to communicate effectively – and, more critically, the specific ways they were failing at productive conversation – both illuminating and despair-inducing. If the likes of them can’t figure it out, who could? But I think you’re touching on some extremely important phenomena. Moreover, if Harris and Klein both absorbed some of your analytical framework on this issue, I believe they could actually come to an amicable resolution if they had a do-over, without either of them needing to change their core commitments.

  8. Alsadius says:

    I’d also add one further category at the tip of the pyramid – differing axioms. The classic example is “An embryo is a human blessed by God with a soul at the instant of conception” vs “An embryo is just a few cells with potential – if it can’t subjectively experience the world, it isn’t a moral agent and deserves no consideration in its own right”.

    Arguments over axioms are supremely unproductive, because a true case of differing axioms is unresolvable even in principle, but they’re unproductive in a similar way to high-level generators (where both sides can understand and respect the other in principle), not in the same way as gotchas and social markers.

    • liskantope says:

      I feel like Scott sort of covered that under “high-level generators of disagreement”.

      • Alsadius says:

        In Scott’s hierarchy it’s clearly in that category, but deep-seated heuristics that generate broader political arguments are very different in practice from true axioms, because the heuristics can change due to evidence. I think it’d be useful to distinguish between them.

        • Watchman says:

          To bring in a meta-argument here: is there really a category of axioms separate from heuristics, or are axioms simply heuristics that are more firmly held and less easily challenged? Axioms can change (take the idea of a change in religious beliefs), and arguments against axioms are therefore not futile, even if they are unlikely to bring about a change in belief. So whilst a good-faith argument requires people to acknowledge their priors, I am not sure that there is a real distinction between axioms and heuristics here. It looks like a single sliding scale of priors to me, all of which are best acknowledged without attempting to buttress particular ones by making them axiomatic; indeed, the introduction of “axiom” as a category is probably non-conducive to open and honest argument, as it allows a participant to actively discourage discussion of particular beliefs by labelling them as axiomatic, rather than simply expressing the fact that they hold this particular prior for this particular reason.

          Note also the simpler point that regardless of how firmly held each party’s opposing priors might be this does not make seeking out high-level generators of disagreement (I might want a snappier title to keep using that…) pointless. I have a very strong prior that can be caricatured as “socialism is never the right option” but still enjoy discussing politics with people with a belief that socialism is the answer, because I learn how they see the world working and where my prior does and does not speak to theirs. That our respective strongly-held priors are unchanged does not make the attempt pointless or unproductive.

        • 10240 says:

          It was also in “operationalization”, though somewhat hidden.

          “An argument is operationalized when every part of it has either been reduced to a factual question […], or when it’s obvious exactly what kind of non-factual disagreement is going on (for example, a difference in moral systems, or a difference in intuitions about what’s important).”

          This is also a huge generator of dishonest debating tactics: if you know you can’t convince most people of your terminal values, you may try to convince them that your preferred policies follow from their terminal values, whether that’s true or not.
          People do this in a subconscious way, rather than a cynical, calculating way; e.g. you don’t spend much time evaluating a counterargument if it’s based on other people’s values and clearly wouldn’t change your support for the policy under your own values.

          • po8crg says:

            if you know you can’t convince most people of your terminal values, you may try to convince them that your preferred policies follow from their terminal values, whether that’s true or not.

            That isn’t necessarily dishonest, though. There are plenty of cases where the same policy can arise from lots of different terminal values; indeed, that’s the essence of a political coalition (policy X benefits both A and B, so A and B work together to achieve it – defining “benefit” here in a nonselfish way, i.e. “achieves goals compatible with their terminal values”). This can also happen when one person’s terminal value is another person’s intermediate value (i.e. I believe X as an axiom; you believe X as a result of believing Y as an axiom and believing that X results in Y).

            The essence of politics is putting together a majority coalition for a policy, and since there are very few terminal values that are shared by a majority (other than the near-universal ones), it’s usually necessary to combine people who hold a number of different terminal values.

          • Iain says:

            This is also a huge generator of dishonest debating tactics: if you know you can’t convince most people of your terminal values, you may try to convince them that your preferred policies follow from their terminal values, whether that’s true or not.

            I’ll go even further than po8crg and claim that, far from being dishonest, the ability to target your arguments at the terminal values of other people is necessary for persuasive discourse.

            People don’t change their terminal values willy-nilly. You’d be crazy to expect that every person you talk to can be talked into embracing your values. Any worthwhile debate is necessarily going to involve people with non-identical (but possibly overlapping) sets of terminal values. So how do you justify your arguments?

            You’re never going to convince somebody by appealing to values they don’t care about. You have to meet people where they are. If I support a policy for several reasons, and I know that you will only care about a subset of those reasons, it’s crazy to say that focusing on that subset is dishonest. I’m just saving us all time and frustration. Similarly, although you should be clear when you are doing it, there’s nothing inherently dishonest about presenting an argument that you don’t personally find compelling.

            It might be that your proposed policy does not, in fact, follow from their terminal values. But that would make you wrong, not dishonest.

          • Nornagest says:

            People don’t change their terminal values willy-nilly.

            People don’t change their terminal values, period. That’s what “terminal value” means: the values that every other value is rooted in, the ones that don’t depend on your model of the environment. If you can persuade someone to change their values, those values weren’t terminal.

            (I am not persuaded that people even have terminal values in a meaningful sense, but that’s another conversation.)

          • crh says:

            ‘Necessary for persuasion’ and ‘dishonest’ aren’t at all mutually exclusive, though. If I’m a salesman, and I want you to buy some product, it is necessary for me to convince you that the product is good/will be useful to you/will elevate your status/etc. Of course the actual reason I want you to buy the product is so I will earn a commission, but I have little hope of convincing you to share my terminal value of, “maximizing how much money crh has,” so I’d stick to the other arguments. Depending on who you are and what the product is, these arguments might not be dishonest. But they often will be, and I do think a disconnect between “why I believe X” and “why I’m saying you should believe X” in general promotes dishonesty. (Which is why salesmen are stereotypically dishonest.)

          • 10240 says:

            It’s not necessarily dishonest to argue from other people’s values; it’s only dishonest when you claim and argue that a policy follows from their values even though you know it doesn’t, or you have no idea whether it actually does.

            I’m not trying to moralize, but to explain why people often seem to refuse to accept rational arguments, ignore your arguments, or not prefer the style of debate where evidence is thoroughly reviewed. This is often attributed to their irrationality (or to your arguments not being actually good). But it may also be that if the debate is focused on whether a policy follows from your values, they may not have an interest in uncovering the truth if they have different values.

            @Nornagest: Terminal values can definitely change, just not from a perfectly logical, factual debate or thought, and typically gradually. (A typical question which involves terminal values for many people, the way I understand the term, is how you weigh the importance of freedom and (economic) equality against each other. I’ve definitely changed on that.) A common way terminal values change is that you find a contradiction between your (or, if you’re persuading someone, their) values, which often makes one shift one or both of those values.

          • Nornagest says:

            A common way terminal values change is that you find a contradiction between your (or, if you’re persuading someone, their) values, which often makes one shift one or both of those values.

            Terminal values — or facets of a single terminal value, depending on how you look at it — can have different magnitudes and can come into conflict, and what happens then is that the stronger one wins, but that doesn’t make the one that’s weaker in this context change, it just means that it isn’t taking priority right now. Take Alice as a (very simplified) example. Alice has a strong preference for owning puppies, and a weak one for eating meat. Those are her terminal values; everything else in her life is instrumental to her puppy-owning and meat-eating. She receives a puppy. Does she eat the puppy? Obviously no; but that doesn’t mean she won’t be happy to eat a steak if someone gives her one.

            Terminal value has a very specific meaning. A value is not terminal just because you feel it very strongly. It isn’t even necessarily terminal if you can’t think of anything more fundamental. It’s only terminal if it’s inexpressible in terms of anything else. That is a very strong criterion, and among other things it means that being talked out of your terminal values is definitionally impossible.

            Note however that an ethical system’s terminal values are not necessarily those of the person implementing that system. People implement ethics only imperfectly, and we can definitely be talked out of one ethical framework and into another; that just means that the framework’s values were not a perfect match with the person’s. Human terminal values, if they exist, are probably some kind of messy neurochemical function; anyone that’s ever told you “my terminal values are such-and-such” is almost certainly wrong.

          • Iain says:

            @Nornagest:

            You’re being unhelpfully pedantic.

            In the context of this conversation, “terminal value” was being used as shorthand to refer to Scott’s “non-factual disagreements (a difference in moral systems, or a difference in intuitions about what’s important)”. I don’t see any sign that anybody else was confused by that.

            You’re trying to enforce a specific interpretation of the phrase “terminal value”, while at the same time arguing that your interpretation isn’t useful. It doesn’t help that your preferred definition is actually not very widely used. The SEP doesn’t have an entry; Wikipedia just redirects to “instrumental and intrinsic value”. There’s a LessWrong wiki page, but I don’t think we’re all bound to use words a particular way just because Eliezer Yudkowsky wrote an article once.

    • albatross11 says:

      But it’s often really valuable to recognize when you’re arguing against a definition or premise vs a factual claim.

  9. 6jfvkd8lu7cc says:

    I think the «value of searching the high-level generator of disagreement» part could benefit from an explicit mention of «splash damage».

    Just as social shaming is aimed at bystanders, a good argument can change your «bystander beliefs» — the studies cited might not be convincing enough to overturn the balance of many well-researched value tradeoffs, but they can still teach you things about adjacent issues where you didn’t previously have any strong opinions.

    Also, you say that you can respect the other person and their differing position — while this is immensely valuable in itself, I think learning about tangentially related issues where you want to collaborate (in addition to learning where you need to agree to disagree on values) might be a significant additional gain.

  10. tailcalled says:

    I think this fails to take Aumann’s Agreement Theorem seriously enough. Even at the upper levels, you assume a model of disagreement where people’s opinions will change gradually and predictably from discussions, either towards a compromise or just remaining static. This is not what we’d expect from genuine disagreements. The gradual compromise does make sense if you’re modelling argument as a sort of cultural battle, where the factual questions are merely weapons that you use to support your side, and where cultural rules force you to yield political ground in proportion to the arguments lost. However, you don’t seem to explicitly acknowledge this model much in your post.

    • Aapje says:

      I would argue that it can be seen as weakening the foundation of a building. The building is often not going to fail gradually, even if the weakening is gradual. Instead, you get catastrophic failure. If one support goes, the others get too much load and they go as well.

    • Nornagest says:

      People probably don’t follow the VNM axioms, so Aumann probably doesn’t apply. The independence axiom’s particularly fraught, and I think human preferences aren’t always transitive either.

      • simon says:

        Even if we do follow the axioms, we may not have enough computational power.

        In Aumann’s original paper, he writes:

        Worthy of note is the implicit assumption that the information partitions P1 and P2 are themselves common knowledge. Actually, this constitutes no loss of generality. Included in the full description of the state w of the world is the manner in which information is imparted to the two persons.

        Now, before reading further, you might want to take some time to think what calculation this statement is implying. What’s the implicit algorithm to find out someone’s information partition?
        .
        .
        .
        .
        .
        It’s to simulate the world state, for each possibility under consideration, in sufficient detail that the simulation includes every aspect of your interlocutor’s life history that has any bearing on the question of what evidence your interlocutor has as to whether the possibility is true. Your life history is likely compatible with enormously many possible such simulations, and you must take them all into account.

        Of course, the fact that there’s a hard way to calculate something does not exclude the possibility of there being an easy approximation.

        But, does it seem that people are, in fact, good at determining other people’s information partitions? It seems to me that people have a hard enough time determining their own.

        Aaronson’s paper does not solve this; he treats the partition functions as already known, albeit with some discussion on pages 7–8.
        ——–

        I’d like to add an additional irrelevant point. It seems to me that people tend to assume that, when two people converge by Aumann’s theorem, the value they converge to is the same value that they would get if they shared information.

        But not only is using Aumann’s theorem properly enormously more difficult than sharing relevant information directly, it also leads to less accurate results!

        Any time two people don’t know how much evidence they have in common, they can overweight the evidence supplied by the other person’s belief (if they rationally think they have less evidence in common than they actually do) and wind up overconfident, or they can underweight the other person’s evidence (if they rationally think they have more evidence in common than they actually do) and wind up underconfident.

        In principle one way to solve this, and greatly simplify the task of coming to agreement to boot, is to actually share the direct evidence. Not to abstract it away with a percent confidence number. Of course, people may not be able to determine and verbalize all their evidence, but using Aumann’s theorem isn’t likely to be any easier, and is less accurate.
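        A toy sketch of that double-counting worry, with invented numbers – prior odds of 1:1, and each piece of evidence doubling the odds for the hypothesis:

            LR = 2.0
            shared, a_only, b_only = 5, 3, 3  # pieces of evidence each party holds

            def to_prob(odds):
                return odds / (1 + odds)

            # Pooling the raw evidence counts the shared pieces once:
            print(to_prob(LR ** (shared + a_only + b_only)))  # 11 pieces -> ~0.9995

            # Combining the two posteriors as if they were independent
            # (odds_A * odds_B / prior odds) counts the shared pieces twice:
            odds_a = LR ** (shared + a_only)
            odds_b = LR ** (shared + b_only)
            print(to_prob(odds_a * odds_b / 1.0))  # acts like 16 pieces -> overconfident

        The second number is the overconfident one: each party rationally but wrongly treated the other’s posterior as if it rested on fresh evidence.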

      • simon says:

        Never mind what I said above about computational power, I forgot about Hanson’s argument.

        But, even if there were convergence, it wouldn’t necessarily be convergence to the proper Aumann value, let alone to the best estimate given the collective evidence.

  11. Murphy says:

    Related to the top section, re: splits based on high-level ethical positions etc.: for commonly debated issues, where most positions fall into a set of reasonably coherent streams, it feels like it should be possible to create flowcharts of the coherent chains of reasoning.

    Because most have a few coherent logic paths.

    [Do you believe a 1 month old foetus is a person with full ethical weight] yes/no/maybe

    [Do you believe inequality is more important than absolute wealth, i.e. is 100 people with $100 each better than 99 people with $101 each and one person with $1,000,000?] yes/no/maybe

    The thing is how to treat the “islands”.

    Because some people take bizarre positions that stem from what seems to be the belief that [oddball belief] is just a facet of the universe. It’s not exactly “illogical” or incoherent. After all, most positions rest on at least a handful of base assumptions like “People being in agony is mostly bad” or “Murdering people is bad”.

    But most people share a small-ish subset of common-ish lower-level precepts and mostly seem to differ on a modest number of them, like whether murdering 1 person to save 1,000,000,000 is ok.

    But then some people have islands out in the middle of nowhere of “Failing to wear yellow hats every third wednesday is basically worse than being a murder-rapist”

    And I’m not sure how to treat those islands in a flowchart other than a giant nest of esoteric short-tree positions off to the side.
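    A toy sketch of what such a flowchart might look like as a data structure – the questions, answer labels, and classify helper are all invented for illustration:

        flowchart = {
            "question": "Is a 1-month-old foetus a person with full ethical weight?",
            "yes": {"position": "abortion is homicide"},
            "no": {
                "question": "Does bodily autonomy outweigh potential personhood?",
                "yes": {"position": "broadly pro-choice"},
                "no": {"position": "permissible only in hard cases"},
            },
        }

        def classify(node, answers):
            # Walk the tree using a list of "yes"/"no" answers.
            while "question" in node:
                node = node[answers.pop(0)]
            return node["position"]

        print(classify(flowchart, ["no", "yes"]))  # -> broadly pro-choice

    The islands would then be short, esoteric subtrees off to the side, sharing few or no ancestor nodes with the main chart – the “giant nest” problem above in data-structure form.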

  12. liskantope says:

    Beautiful post, which deserves a place on the list of SSC classics. The one part that makes me sort of uneasy is the characterization of social shamers, where I can’t help but feel that this take on their intentions is overly sweeping and/or uncharitable. (I’ve had this feeling before about similar discussions on this blog, where I tend to find others’ views of people’s intentions too cynical, and this may well say more about me than about anyone’s characterization of social shamers.)

    In particular, I’m kind of torn on this passage:

    Nobody expects this to convince anyone. That’s why I don’t like the term “ad hominem”, which implies that shamers are idiots who are too stupid to realize that calling someone names doesn’t refute their point. That’s not the problem. People who use this strategy know exactly what they’re doing and are often quite successful.

    On the one hand, I believe a lot of the time this is true, and social shamers even openly admit it (“I want to make racists feel afraid again of expressing their views!”). On the other hand, I know people personally who are sometimes inclined to argue in a way that would fall under this post’s definition of social shaming, and I don’t have the impression that this tactic is deliberate, or that, even if it’s their subconscious intention, they realize that’s what they’re doing. Perhaps with some people it’s more of a plea (which is almost an argument but skips the crucial argumentative step) along the lines of “(1) It’s clear that my opponent’s position amounts to X; and (2) we’ve already agreed that X is so ridiculous/repugnant that it’s not worth listening to; therefore (3) my opponent shouldn’t be listened to”. This is a lazy excuse for argumentation, to be sure, but I see an (admittedly fine) line between that and a consciously intended social power play.

    • Aapje says:

      I agree. This may be Scott typical-minding. Perhaps people-oriented people are quite frequently swayed by peer pressure and other appeals to emotion, but this is far less the case for system-oriented people?

      So then it is not so much the case that these people are defecting from the norm, but rather, that they use debating tactics that are effective on themselves.

      • albatross11 says:

        I think some people react to some offensive idea with genuine revulsion and anger. But many other people have learned that cultivating revulsion and anger in themselves and others, in response to certain ideas, is an effective technique for shutting down the other side. It’s not so easy to decide which case you’re in.

        • Aapje says:

          Isn’t revulsion/anger caused by people noticing a threat to their group cohesion? They then react to the threat by shouting down dissent and, if that doesn’t work, by ostracizing the threat.

          If so, there is not actually a real separation between taking offense and shutting down the other side. It is part of the same mechanism.

          • Alsadius says:

            Let’s take a pretty extreme example here – if someone were to suggest legalization of child porn, I think most people would be revolted and angry. But it’s not a threat to any particular sort of ingroup cohesion – there’s the sense where it threatens the group consisting of all humans, but at that point “group cohesion” stops being the most parsimonious explanation.

            I think that the revulsion action often protects the group’s beliefs, but I think the mechanism of action is that people really believe in their group. For example, I get cranky when I see people attacking nerds, but it’s not a tactic – I know a lot of nerds, most are really good people (if awkward), and so it feels grossly unfair. The reason for my annoyance isn’t that it’s my group; the reason is the reason why it’s my group in the first place.

          • Simon_Jester says:

            There’s a real revulsion reaction which can be used subconsciously or consciously as a tool for extended campaigns of ‘social warfare,’ just as humans have real running abilities that can be used deliberately or spontaneously in a contest of physical fitness against other humans.

            If you enjoy competitive running, that doesn’t mean your ability to run is a fake thing cultivated purely for social purposes. But it does tend to mean that your biological propensity to run has been cranked up and hypertrophied in order to better serve your own ends.

    • Simon_Jester says:

      Yeah, I think there are sort of three overlapping categories of people who rely mainly on social shaming/signaling in ‘debates’ (more accurately, arguments; more accurately still, posturing matches disguised as arguments).

      1) People who genuinely are too thick to realize that insulting people does not in fact do anything to prove them wrong.

      2) People who deliberately use the insults as a way to mark out a social space in which they can insult their enemies without consequence, or who are intentionally fighting to ‘conquer’ a social space for their own side.

      3) People who are a bit too thick to have consciously analyzed and worked out a strategy of denigrating dissenters to secure more influence over their social space, but who still reflexively behave this way, because it serves the purposes outlined in (2) and they are conditioned to pursue those goals even when they don’t pursue them consciously.

      (1) is rare. (2) requires a fairly high level of self-awareness. But (3)? (3) is most of the human race, because we start playing social influence games in early childhood, and the fear of losing social influence is a strong pressure to optimize one’s strategies for preserving status. Children and young adults can and do optimize those strategies long before they get anywhere near the level of self-awareness required to understand what they’re doing. Which is a large part of why the psychiatric industry even has a job.

      Nearly all humans exhibit this instinctive urge to defend our social ‘territory’ and (under some conditions) press attacks into others’ “territory.” To define a domain in which others cannot insult or offend against us without consequence, to create a safe ‘citadel’ within which *we* can insult *others* without consequence, and to overthrow the domains and citadels of those we are opposed to.

      So we do a lot of things that are, in effect, social power plays of the “huh, I sure showed HIM” type. And we may have conscious motivations for advancing the arguments we use to do that… but on some very important level, we wouldn’t be doing it if not for a desire to metaphorically pee on trees to mark our social territory.

      • Alsadius says:

        I think there’s also (4), people who don’t have any empathy for the other side and act cruelly because cruelty is fun. Pain + distance = comedy, and all that.

        (I’m not sure if that’s what you meant by #3, actually…)

        • Simon_Jester says:

          (4) would be a subset of (2). Deliberately inflicting psychological pain on others is a way of asserting social dominance, exercising control over social territory, and denying those things to others. The reason people get sadistic enjoyment out of such actions is precisely because they feel like they have conquered social territory at the other person’s expense.

          (3), by contrast, is people who are doing the same things, but doing them subconsciously. The entire “Someone is WRONG on the Internet!” mentality is a subset of (3), because your instincts are telling you that the Internet (all of it, or part of it) is your social territory. As such, your instincts also say that you are required to run around marking your territory, and repelling intruders in order to protect that territory.

      • Mary says:

        I have had people literally tell me that they are not indulging in ad hominem, but insulting me, as if that were an improvement.

        The fun one was the one who tried to insist that even though it’s the name of a thing (a fallacy), “ad hominem” was an adjective and therefore I was grammatically illiterate.

        • Alsadius says:

          From a logical point of view, “I’m not trying to prove you wrong, I’m just trying to insult you” is a more accurate statement than “$insult, therefore you’re wrong”, but you’re still a jerk.

          (And I say that as someone who’s used that exact line once or twice in the past.)

        • SamChevre says:

          Well, that is a possibility. In terms of amazing rants by former colleagues about former colleagues: “I’m not saying he’s wrong, I’m saying he’s an idiot. If he’s not wrong it’s entirely by accident.”

    • arlie says:

      *thoughtful* I had a probably relevant experience a few years ago.

      I’d decided that I needed to learn about climate change. I knew the topic was politically fraught, so I selected a MOOC purporting to teach the scientific underpinnings of climate change, i.e. how it worked, rather than anything either obviously partisan or intended for those who avoid STEM.

      Fairly early on I asked a question that mentioned the medieval warm period. It was something like “how does this prediction compare with the temperatures of the medieval warm period?” The result was something of a pile-on; it appears that the “medieval warm period” is mentioned in some common low-quality arguments about climate change. Thus my use of it “proved” that I was an anti-climate-change troll.

      This was in the context of other, IMO poor-quality, arguments in favour of the reality of human-caused climate change – in particular, in a course on mechanisms, presumably intended for the scientifically competent, the instructors found it necessary to begin by stressing the proportion of climate scientists who agree that the phenomenon is real. It took a lot of research for me to get past this evidence that some believers in climate change were “true believers” who did not understand the scientific method, and were so frightened of clarifying questions that they resorted to social shaming in response.

      Probably the root cause of the nonsense was a combination of fatigue – self-appointed refuters of anti-climate-change propaganda too tired to distinguish a question from the start of a disingenuous debating tactic – and, importantly, research showing that the single most effective argument for convincing the average person of the reality of climate change is to cite the proportion of climate scientists who agree with it.

      At any rate, I think both parts of this experience – the shut down from the audience, and the implicit claim that the scientific method is based on majority voting – are relevant to Scott’s points, and to yours. I’m just not sure how they fit.

      [And by the way, for those NOT in STEM, the scientific method involves designing experiments to potentially refute a theory. It’s not a popularity contest – at least in theory – though sometimes it certainly winds up looking like one.]

      • liskantope says:

        self-appointed refuters of anti-climate-change propaganda too tired to distinguish a question from the start of a disingenuous debating tactic

        Yes. I meant to sharpen the point in my comment above by saying that although the syllogism I laid out (“(1) It’s clear that my opponent’s position amounts to X; and (2)…”) is mainly unreasonable because it doesn’t bother to justify (1), many of those who follow it probably do sincerely believe that (1) is true, and very obviously so, and/or are feeling too tired or lazy to check whether (1) really does hold.

    • Nornagest says:

      I’d go further than that. I think social shaming is usually intended to persuade, because, between sufficiently people-oriented people, it is persuasive. People are pretty good at rationalizing a position when they have a good reason to, and the prospect of being drummed out of the tribe for being a weirdo is a pretty good reason.

    • Doesntliketocomment says:

      The issue is that social shaming isn’t a method of argument, it’s a method of control. Like any method of control, it can be misused, but when used properly it serves a purpose.

      By way of example, I recall not too long ago seeing in the comment section an individual who saw that the community was open and willing to debate without the usual social thought embargoes, and thus decided this was a good place to start a discussion on the benefits of sexual relations with minors. Not only did he not get the discussion he wanted, he was very quickly informed that he should not broach the subject again. His topic and his interest in discussing it were both seen as shameful and were censured accordingly.

      • 10240 says:

        Wait, what was the purpose of the shaming in this case? This is not even an opinion where it’s easy to argue that it would become popular without proper shaming (as one might argue about e.g. racism).

        • Aapje says:

          Allowing some arguments can:
          – drive out commenters who dislike those arguments
          – pull in more people with the same beliefs, transforming the commentariat
          – make the forum look bad to outsiders, especially since many people seem to confuse a non-moderated opinion from one commenter with the opinion of everyone on the forum

      • The Nybbler says:

        Social shaming is a special case of the most effective argument in the world, the argumentum ad baculum (argument from force). If you really want to change someone’s mind, the best way to do it is to figure out some way to punish them if they don’t. Pretty soon they’ll have rationalized themselves into the new position. “A man convinced against his will / is of the same opinion still” holds only for a few; most people cannot maintain beliefs in opposition to what they are required to express.

        • Alsadius says:

          I think that the fall of the Warsaw Pact is strong counter-evidence to your claim. A lot of people spent decades risking death or disaster if they fought the government, and so they kept their heads down.

          The very day that one functionary misspoke on live TV, the Berlin Wall came down, because people finally thought they could safely express their opinions. Within two years, every single nation in the region had (at least in theory) gone democratic and capitalist, because that was what the people wanted all along.

          • Radu Floricica says:

            Romanian here. We’re having a trial right now about how our popular revolution was just a regime change. Wasn’t as heroic as that at all.

          • The impression I got on my recent visit to Bucharest was that there had been a significant change—it didn’t feel like a communist country—but that a lot of the population felt that the only thing that had been wrong with the previous system was the ruler.

    • Illuminatus major says:

      Wanted to log in just to say this: should there be some kind of revamp of this post, it should incorporate liskantope’s comment.

  13. theredsheep says:

    On a related note, years ago I made up (for myself) “Cole’s hierarchy of discourse,” a way of ranking arguments according to what they intend to do and how far removed they are from accomplishing something. Tier one is two people who agree that something should be done and what that something is, and they’re arguing over the proper way to do it, e.g. two people on a trip arguing over the quickest route. Tier two is when one person is legitimately trying to bring the other around to a different point of view (so that, in theory, you can then accomplish something). Tier three is when you’ve accepted that the other person’s POV won’t change but you’re arguing anyway in the hopes that you’ll learn something about each other or arrive at new insights as a side benefit. Tier four is arguing just to argue, AKA being a dick. The other person is basically irrelevant there, and you’re arguing to reinforce to yourself your own sense of cleverness and superiority.

    I composed the hierarchy years ago, when I mostly hung around on internet forums having arguments that don’t really matter with other people who argue for fun. The shaming-shutdown aspect basically never came up, possibly because they never took it that seriously. I guess that would qualify as tier five, arguing purely to terminate the argument. Not that it matters, since the whole point of the hierarchy was so I could ask myself, when I got into arguments, just what I was hoping to get out of this, and tier four is useless already. NB that two people can be on different tiers for the same argument; you can be honestly trying to convince me while I’m just being a jerk …

  14. Nabil ad Dajjal says:

    I liked the post but there’s one thing that drives me nuts whenever I see it: the scientific consensus.

    Scientists sometimes form a consensus within a field after carefully reviewing the evidence and weighing different possible models. More often, they form a consensus because of intellectual incest. One or two well-known papers produce a suggestive result; that result is cited in contemporary reviews; later reviews cite those reviews; eventually it ends up in a textbook just in time to be shown to be incomplete and misleading.

    In my own field, the consensus explanation for how the system I work with is organized came about through this kind of game of telephone. You can actually go back to the original papers and read, in the authors’ own words, that their experiments couldn’t distinguish between what would become the consensus and alternative explanations. But the consensus in the field is so strong that in the last twenty years nobody has actually gone back and checked using modern techniques which could make that distinction. That’s good for me, because that lack of interest leaves me with an awesome thesis project, but it’s bad for science.

    Consensus isn’t scientific; in fact, it’s the precise opposite of science to base your conclusions on what other researchers think rather than on your own interpretation of the experimental evidence.

    • Enkidum says:

      I have a really strong objection to your last sentence, but I’m trying to frame it in a way that is productive, and follows the guidelines our host is trying to encourage with this post.

      We always weight evidence according to how well it agrees with what we already think. “Extraordinary claims require extraordinary evidence” is simply a statement of that fact. This is a huge part of science, and I’d say it’s a feature, not a bug.

      It might be that we need explicit mechanisms for allowing and encouraging consensus-challenging ideas. But at what point does that mean we have to start taking creationists and parapsychologists seriously? Even a serious consideration of their ideas requires time and energy that cannot then be devoted to stuff we actually think might be worthwhile.

      I’m not sure how to balance things correctly, but I think your comment pushes too heavily on the anti-consensus side of the scales.

      • Aapje says:

        The problem with dogma is that scientists can:
        – greatly overestimate the quality of the evidence in favor of the dogma
        – start interpreting the outcomes of their experiments in a biased way, because they favor the dogma over what the experiments show
        – decide not to publish experiments that give the ‘wrong’ outcome
        – look down on, and deny jobs and funding to, people who try to check the validity of the dogma, resulting in scientists being afraid to do so
        – be very skeptical of some theories & experiments and far less skeptical of others, because of the political implications of the theories & experiments being correct
        – build a large quantity of work that depends on the dogma being correct, resulting in them becoming fearful for their own reputation and that of their field if they were to admit to being wrong

        At a certain level you can have a reinforcing system where it looks like all the evidence clearly supports the dogma, even though there is actually a feedback-loop at work, where the community prevents strong evidence against the dogma from being discovered and presented.

        Personally, I favor erring on the side of being more skeptical. At that end of the scale, the cost is that you collect more evidence than is strictly necessary, which is really not that bad, especially given the replication crisis.

        • Enkidum says:

          I can’t deny any of the points you bring up. I’ve seen them all in action.

          That being said, I think that it’s possible to err on the side of iconoclasm, and I’ve seen people on this very website essentially arguing that BECAUSE something is dogma in, e.g., a social science, it is inherently suspicious. Which is madness. Sometimes a whole bunch of smart people thinking the same thing do so because it’s, you know, right. And I feel that should be our default prior.

          However I’m not sure where I want to come down at the end of the day in terms of how this applies to actual researchers within a field, like, say, Nabil, or me. Still ruminating on that one.

          • mcd says:

            When something needs to be treated as a dogma (AGW) as opposed to just being commonly accepted (the germ theory of disease), then it is inherently suspicious.

          • Enkidum says:

            I’m not sure what the meaningful practical difference between those two conditions is. How can I distinguish between the two?

          • mcd says:

            If you mean dogma vs. acceptance, I assumed you had a difference in mind already when you mentioned dogma. I think there are two differences between dogma-style consensus and non-dogmatic consensus: the fact that anyone feels the need to point out the consensus at all, and the political or religious ferment surrounding the issue.

            Once upon a time there was religious and political/sociological opposition to the germ theory of disease, but there is none to speak of anymore (not even among anti-vaxxers). Significantly, this opposition would have existed regardless of the objective truth of the germ theory–that is, even if it had been disproven by later science. Today there is both political opposition to AGW and political support for AGW. The motives behind both seem rather orthogonal to any future, post-political conclusion about the truth of AGW.

          • Watchman says:

            Enkidum,

            I’ll bite on that question, although I should note that, like Nabil, I have an academic profile built on sceptical responses to existing orthodoxy. It may be significant that I don’t (by choice) hold an academic post in this area, which keeps the professional risk minimal.

            The honest answer is that there is no way of clearly establishing whether an academic consensus is worth upholding without testing it. The twin of the replication crisis, the failure to publish negative results, makes it very hard to assess what has been tested and what the outcomes were, though.

            What I would say is that the signs of a false consensus I would look for are the following:
            – a link between the consensus and a political movement, which is self-reinforcing. Take, for example, the dominance of economic history by Marxists, which meant economic history could be presented (with serious effort – towards the end of this consensus, a fourth stage of economic development was invented to help patch the growing holes in the theory…) as proving Marx correct.
            – appeals to authority rather than to the evidence. That 97% of a sample selected in an unstated manner believe something is no more useful than an opinion poll suggesting Hillary Clinton would become President. It tells us something, but about beliefs and perhaps incentives as much as about actual reality.
            – that all the major supporters of the consensus literally belong to the same school of thought, with an academic genealogy back to one or two progenitors of the idea, whose pupils become employable through their association with the theory and therefore have an incentive to defend it. I’d point out, though, that in niche areas this is difficult to distinguish from a subject where opportunities to study it were limited to one or two schools when it became fundable.
            – perversely, perhaps, references to the peer-reviewed literature rather than to specific studies; I tend to think this is a sign of possible false consensus simply because it takes comfort in the consensus rather than addressing the evidence.

            The presence of any of these things doesn’t mean I would automatically say there was a false consensus, but I’d see each as a ‘dig here’ flag rather than a sign of a healthy academic environment, although there are other reasons any of these factors might exist. Not sure if this answers your question fully, but it’s my best attempt to set out my feelings on this.

          • 10240 says:

            @mcd The fact that people feel the need to point out the consensus comes from the existence of politically-motivated denial. The same would happen with the germ theory of disease if significant groups started denying it for no good reason.

          • mcd says:

            @10240 Yes, the fact that a field is both new and politicized is what makes it inherently suspicious. People don’t up and start doubting the germ theory of disease “for no good reason” because the field is neither new nor politicized.

          • 10240 says:

            @mcd Didn’t the field of climate change become politicized because people started to question it for non-scientific reasons? You can always make a field look suspicious by politicizing it, and then saying that it’s suspicious because it’s politicized. (Though I don’t know how exactly the political debate about global warming started.)

          • John Schilling says:

            @mcd Didn’t the field of climate change become politicized because people started to question it for non-scientific reasons?

            That’s a muddled question, especially if you go back far enough. But the modern era of “Consensus vs the Deniers!” begins with people of the Green persuasion eagerly adopting the strong-AGW hypothesis for non-scientific reasons. Then, when most every public discussion of climate change ended with “…and thus we must downsize industrial civilization” rather than e.g. “…and thus we must build lots of shiny nuclear reactors”, we saw reactionary politicization from the other direction.

      • Nabil ad Dajjal says:

        If I understand you correctly, you’re asking when we’re justified in being skeptical of unlikely claims.

        The answer to that is that we don’t need a justification to be skeptical. Skepticism is the default response in science. The onus is always on the person advancing a model to provide experimental evidence to validate it. Reversing that would be insanely wasteful, as you point out.

        The alternative that you favor, being skeptical only in the absence of consensus opinion, may be acceptable for a layman who is incapable of properly evaluating scientific evidence. But for a scientist or someone working in an applied science like engineering or medicine you need to understand the limitations of the models you use in order to do your job properly. That means understanding both the model itself and the evidence which supports it.

        Especially now, when large portions of the established literature in many fields are being revealed as irreproducible, you can’t rely solely on the judgement of other scientists. You need to roll up your sleeves and try to figure out whether the literature actually supports the claims people are making.

        • Enkidum says:

          It’s not that I favour only being skeptical in the absence of consensus. It’s that I favour being less skeptical in the presence of consensus (or more skeptical in its absence). Basically, I’m suggesting that consensus should affect our priors, and priors should affect our judgement of evidence.

          However both you and Aapje above reveal problems with this view. I’m going to have to think about this for a while.

          • Randy M says:

            At the base level, you are right, Enkidum. At the very least, consensus means “other people have looked at this problem and found the evidence persuasive.” That is positive Bayesian evidence for the conclusion; it just isn’t anything close to dispositive, because people attempt to bring about consensus by means other than strictly rational persuasion, and those that make up the consensus are imperfect.

          • John Schilling says:

            It’s not that I favour only being skeptical in the absence of consensus. It’s that I favour being less skeptical in the presence of consensus (or more skeptical in its absence).

            What is the pragmatic manifestation of “being less skeptical”, when we are talking about the difference between 80% certain something isn’t true and 90% certain something isn’t true?

            Most of science can, I think, claim better than 20% reliability in its consensus beliefs. But most of science’s consensus beliefs are of no relevance to anyone outside that field. The ones that do claim to be of paramount importance to laymen, and particularly the ones that call for substantial action, are the worst of the bunch and are the ones with the least spread in p(true) between the consensus and contrarian views.

            TL;DR: My prior for all (non-rocket) scientific belief or opinion, consensus or otherwise, is p<<<0.05 that I should give a damn.

          • Simon_Jester says:

            This sounds like a recipe for an Iron Age lifestyle.

            I mean, the germ theory of disease is a scientific consensus that governs much of how we live and behave, including large swathes of public regulations (food safety, etc.). It affects our private behavior (people used to treat the sick very differently before they believed in germs). It affects how we react to new, unknown sickness in our area.

            Under your philosophy, John Schilling, why is the germ theory of disease not a good example of a case where we should “assume p <<< 0.05 that any given scientific prescription for how to live is correct”? Why shouldn’t we be ignoring the germ theory of disease on the grounds that it’s probably just something a handful of quacks pressured everyone else into believing?

          • Enkidum says:

            @Randy M: agreed with everything you say, though reading between your lines I suspect you take that to imply a lot more skepticism about consensus than I think is warranted. But *shrugs* I don’t know that I have a good enough justification for my opinion here, and I think that there’s enough anecdata out there about consensus being wrong that there are definite reasons to be wary of over-emphasizing the importance of consensus.

            @John Schilling: Eh… the 80% vs 90% cases aren’t the kinds of things I was thinking of (and aren’t the kinds of things I think can be usefully quantified). Rather, more along the lines of effective certainty about truth. So effectively 100% prior certainty vs something way, way closer to 0%. (That is, according to the consensus.)

            So… what about those cases? I think you would agree that in those cases, even people within the field have reason to weight their judgements according to the consensus opinion? Perhaps not as strongly as the consensus might imply (for reasons that have been given up and down this thread), but I think it should play a role. And as for people outside the field, I think to the extent they’re entitled to an opinion, it’s pretty much got to be “whatever pretty much everyone in the field seems to agree on”.

            However… I think you are definitely correct that there’s a problem in moving from purely factual questions to practical advice that is supposedly implied by scientific consensus. I don’t know if I agree with your claim about p(true) of consensus and contrarian views, but at the very least I agree that there’s reason to universally downgrade p(true) of ALL views when they’re impacting on practical matters, unless there’s an existing technology that works, in a way that is apparently explained by the view. (Which I think is, more or less, what you meant by the rocket science thing?)

          • Enkidum says:

            @Simon_Jester

            See my suggestion above that John’s “rocket science” = science that resulted in or explains existing technology. Which I think “germ theory of diseases” pretty clearly falls into.

          • John Schilling says:

            Rather, more along the lines of effective certainty about truth. So effectively 100% prior certainty vs something way, way closer to 0%. (That is, according to the consensus.)

            If there’s an ongoing debate within the field, then there isn’t “effectively 100% prior certainty” and anybody who says otherwise is selling snake oil. That’s the point where my prior goes to “80% this claim is either false or too weakly true to be usefully actionable”.

            If there isn’t an ongoing debate in the field, then the claim probably still isn’t usefully actionable for a layman. If I doubt the germ theory of disease, what am I going to do about it? Save money by not paying extra for that fancy “pasteurized” milk?

            And at the meta level: “Consensus” is like “everybody knows”, a self-refuting term in that, where it is actually true, people generally don’t bother using it or even holding the sort of discussions where they would be inclined to use it.

          • A Definite Beta Guy says:

            If I doubt the germ theory of disease, what am I going to do about it?

            Not worry so much about that delicious medium-rare chicken!

          • Enkidum says:

            @John

            If there’s an ongoing debate within the field, then there isn’t “effectively 100% prior certainty” and anybody who says otherwise is selling snake oil

            Agreed, with a little bit of hand-waving I’m not going to get into here. But what I’m interested in, and what I think Nabil was originally thinking about, is cases where there ISN’T such debate, because there genuinely is consensus. What should that do to our priors, both as experts and laypeople?

          • John Schilling says:

            …cases where there ISN’T such debate, because there genuinely is consensus. What should that do to our priors, both as experts and laypeople?

            Render them irrelevant, because we aren’t having a debate and probably aren’t thinking privately about the question. Pretending that your every thought, word, and deed is a Bayesian calculation, just makes it harder to do explicit Bayesian calculation on the rare occasion when it is appropriate.

        • Murphy says:

          I somewhat doubt that you reject consensus as much as you imply.

          In any field investigating anything, every tool and method is going to be based on thousands of assumptions. Most of them are probably correct, or at least “correct enough” for the methods and tools to work as expected.

          I am not an expert on chemical fluorescence but I’m happy to accept the consensus in the form of accepting results from machines which utilize it.

          I’m not an expert on RNA interaction and digestion… but I’m willing to accept data generated from workflows that rely on it.

          For every belief within your field that you’re genuinely skeptical about, there are probably a thousand consensuses that you happily build upon.

          • gbdub says:

            I think it’s important to make a distinction between “established scientific fact” and “a consensus theory on something that is still uncertain”. Or something like that anyway.

            E.g. using germ theory to justify following “consensus”. That’s something directly observable, we can see bacteria and watch them multiply. We can make predictions (“this thing that kills bacteria will make you feel better”) and rapidly prove (or disprove) the result.

            Or Keplerian orbital dynamics, which are directly observable (as are their limitations).

            Something like AGW or the many-worlds interpretation is different: these are theories that have a lot of support, and are plausible explanations for observable phenomena. So a “consensus” in these fields isn’t meaningless. But it is still a “best guess” that has yet to be proven out (AGW because we’re making forward predictions about a very complex system we don’t fully understand, many-worlds because it might not be directly observable at all).

            And none of this even touches on the sort of false consensus that gets built by one or two flawed studies getting overhyped until they become dogma.

            Point is, there are levels of consensus, and it’s not fair to treat all skepticism of “consensus” as equivalent to “oh, so you’d question whether the earth revolves around the sun? That’s a consensus!” Just as it’s unfair to treat all “consensus” as baseless dogma.

          • Nabil ad Dajjal says:

            I endorse gbdub’s comment but want to add something from a practical standpoint to show more of where I’m coming from.

            When I do a new assay for the first time I look up how it’s supposed to work, and I look at the sort of results these kinds of assays generate in the literature well before I ever attempt it. I try to think through whether it’s appropriate for my experiments although I have been mistaken before. And whenever possible I test it out with positive and negative controls to make sure that it’s measuring what it’s supposed to be measuring.

            That may sound like bragging but in science that’s actually the bare minimum. If you don’t ask these questions and try to answer them… well, you get a shitty paper like the one I’m presenting today in journal club.

            There is a lot that I don’t understand about the machines and reactions that I use in the lab, but I can evaluate them when it touches my field. I can make sure that the computer program gives sensible results with prior data, or that a flow cytometer is picking up signal and not random noise, without understanding the math or optics beyond a basic level.

          • Patrick Cruce says:

            I would disagree that global warming is less observable than germ theory. (1) The absorption spectrum of CO2 is well known. (2) You can prove global warming experimentally about as well as germ theory (you need a small amount of lab equipment, but nothing too extravagant). (3) It is as common-sense as putting on a blanket.

            Sure, climate models and predictions are tough, but it is banally foundational physics that increasing the proportion of CO2 in the atmosphere adds insulation, which increases temperature.

            (Required equipment: thermometer, lamp, sealed chamber (an aquarium would do), CO2, GN2.)
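
            For the back-of-the-envelope version (not the aquarium experiment, just the standard simplified forcing formula from Myhre et al. 1998 – the 5.35 W/m² coefficient is an empirical fit, and the 0.8 K per W/m² sensitivity below is a rough ballpark, not a precise value):

              import math

              def co2_forcing(c_ppm, c_ref_ppm=280.0):
                  """Simplified radiative forcing (W/m^2) of CO2 relative to a
                  pre-industrial reference concentration (Myhre et al. 1998)."""
                  return 5.35 * math.log(c_ppm / c_ref_ppm)

              # Rough equilibrium warming, assuming ~0.8 K per W/m^2 of forcing.
              for ppm in (280, 400, 560):
                  f = co2_forcing(ppm)
                  print(f"{ppm} ppm: forcing {f:.2f} W/m^2 -> ~{0.8 * f:.1f} K")

            Doubling CO2 (280 → 560 ppm) gives about 3.7 W/m² of forcing, or roughly 3 K of warming under that assumed sensitivity – the familiar textbook figure.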

      • albatross11 says:

        There’s a difference between

        a. I’m an outsider wanting to know what practitioners in the field believe.

        b. I’m an insider trying to figure out what we practitioners in the field *should* believe.

        For (a), an expert consensus is useful–since I’m not a sophisticated consumer of climate models, I probably can’t do a lot better than accepting the consensus among experts in the field. Though I probably also want to try to figure out at a meta-level whether these experts know what they’re talking about–are they chemists talking about what happens when you spill fuming nitric acid on your foot, or are they postmodern theorists talking about the de-colonizing of repressive white male math?

        For (b), expert consensus is what you’re trying to improve, and factually weak expert consensus is sometimes an opportunity (though other times, it’s quicksand and you need to avoid it to get to a place where you can do useful work despite the crazy current expert consensus making your whole field dumber right now).

      • Nicholas Conrad says:

        “Extraordinary claims require extraordinary evidence” is simply a statement of that fact.

        In one sentence ostensibly about good scientific reasoning you’ve introduced two subjective measures. It’s better to simply say:

        “Claims require evidence”.

        Adding the subjective “extraordinary”s unpacks to:

        “claims confirming my priors need little or weak supporting evidence to be convincing; those contradicting my priors can likely never have enough or strong enough evidence to be convincing.”

        I’ve heard more than one scientist asked about “argument from consensus = appeal to authority” offer the unsatisfying (though infinitely self-serving) response “Yes, well technically, but it doesn’t count when we’re the authority”. Now that’s a claim that requires evidence.

        • Enkidum says:

          No, but claims confirming my priors require less evidence than those contradicting them. This is actually a consequence of Bayes’ theorem.
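
          To put toy numbers on that: write Bayes in odds form (posterior odds = prior odds × likelihood ratio) and ask how strong the evidence must be to reach, say, 95% confidence. A minimal sketch:

            def required_likelihood_ratio(prior, target_posterior=0.95):
                """Bayes factor needed to move a prior up to the target posterior,
                using posterior_odds = prior_odds * likelihood_ratio."""
                prior_odds = prior / (1 - prior)
                target_odds = target_posterior / (1 - target_posterior)
                return target_odds / prior_odds

            for prior in (0.9, 0.5, 0.1):
                lr = required_likelihood_ratio(prior)
                print(f"prior {prior:.1f}: evidence must be ~{lr:.0f}x likelier "
                      f"under the claim than under its negation")
            # prior 0.9 -> ~2x, prior 0.5 -> 19x, prior 0.1 -> 171x

          A claim I already find 90% plausible needs only weak evidence to cross 95%; one I find 10% plausible needs evidence about eighty times stronger.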

          • Nicholas Conrad says:

            The first principle is that you must not fool yourself and you are the easiest person to fool.

            Yes, and you (in general, not personally) being the easiest person to fool is a consequence of relying on your priors to subjectively evaluate evidence. This is the quintessential difference between scientific and unscientific reasoning.

    • DocKaon says:

      You shouldn’t rely on the scientific consensus when you’re doing science. It’s a tool for everyone outside of a specific field to base their decisions on. You can’t train every single citizen to the level of a PhD in every field they need to have an opinion on so they are able to synthesize the available evidence into their own scientific opinion. The consensus may be wrong, but I certainly know the way I’m going to place my bets.

      I do think it would be valuable to more formally build a scientific consensus on issues of major importance, like the IPCC does with AGW, rather than rely on an informal consensus which can be based on very weak evidence.

      • Anonymous Bosch says:

        I do think it would be valuable to more formally build a scientific consensus on issues of major importance, like the IPCC does with AGW,

        I’m not really sure how valuable this is for the purposes of argument. I would guesstimate that roughly 1% of global warming skeptics are lukewarmers arguing for the low end of IPCC predictions, and 99% either have never read anything by the IPCC and think “water vapor is the biggest greenhouse gas” is a devastating riposte, or can’t be convinced the IPCC isn’t a socialist cabal that simply fabricates what it likes and ignores what it doesn’t.

        • gbdub says:

          Well I would guesstimate <1% of anti-skeptics are actual climate experts, and 99% are people parroting "denial is a crime" because The Daily Show made them laugh at an icky climate denialist, or various other stuff gleaned second- or fifth-hand from Al Gore PowerPoints.

          I still think the IPCC is a useful "formal consensus building" exercise, and could be valuably applied to other areas where we aren't quite certain enough to establish "facts" but generating a formal best guess is valuable for advising policy.

        • I would guesstimate that roughly 1% of global warming skeptics are lukewarmers arguing for the low end of IPCC predictions

          Speaking as one of them, it’s not so much arguing for the low end of IPCC predictions as arguing that even at the high end it isn’t clear whether the net effect is positive or negative.

          With regard to gbdub’s point, it’s consistent with my casual observations. Back when I was arguing climate issues on FB, I concluded that almost nobody on either side understood the greenhouse effect.

          And it isn’t limited to FB. There’s a webbed “demonstration” of it put up by the Cleveland Museum of Natural History and the Clean Air Conservancy, that only works if you don’t understand what the greenhouse effect is.

          • Alsadius says:

            There’s a really good test for whether CO2 is preferentially absorbing short-wavelength visible light, and it’s one that the kid actually did (even if he didn’t draw attention to doing so).

            He pointed a camera at the jars.

          • Not all the short wavelength light coming from the sun is in the visible part of the spectrum.

            And given how short a distance the light was passing through, I’m not sure that a significant difference in absorption would have been visible to the naked eye.

      • Nabil ad Dajjal says:

        I agree somewhat. If the doctor hands you a prescription then, unless you have scientific or medical training, you’re better off trusting him than not.

        That said, there aren’t many cases where a layman actually needs to “have an opinion on” any scientific question. If you can’t evaluate a claim then your opinion on it is worthless. We can’t and shouldn’t train everyone to be a scientist but we could at least try to instill some intellectual humility.

        Also, I would be horrified if my field started behaving like climatologists. I understand the siege mentality that they’re under, given how thoroughly politicized their research is. But it’s not healthy to enforce message discipline on a scientific field. Journals are primarily responsible for disseminating results to other scientists; when they take it on themselves to be “responsible” for how laymen might misinterpret those findings, they’re abandoning their actual responsibility.

        • Simon_Jester says:

          Note that extreme ‘bad end’ climate change outcomes are borderline existential risk for humanity.

          That being the case, it’s pretty easy to construct arguments for why the scientists working in the field have a higher duty to the cause of “get the truth about our field, and ONLY the truth, to the laymen” than to the cause of “keep up a maximally high-minded discussion within our own ivory tower.”

          • Lapsed Pacifist says:

            I am confident that I could construct a scenario where any given scientific field has a duty to “get the truth out to avert existential crisis.”

            This is how critics of GMOs are framing their objections, on the strongest possible interpretation of the narrowest band of worst outcomes.

          • I had a blog post a few years back on the general issue, sparked by a case in my field. It’s tempting to argue for dishonesty for the greater good–that isn’t the way you put it, but it is what it comes down to if a scientist deliberately avoids saying true things that he thinks provide support for the wrong conclusion. But it ends up subverting the mechanism by which we find out what is for the greater good.

            You might want to consider the parallel between the present campaign to persuade people that AGW is a terrible threat and the campaign fifty years ago to persuade people that overpopulation was a terrible threat.

          • Simon_Jester says:

            I had a blog post a few years back on the general issue, sparked by a case in my field. It’s tempting to argue for dishonesty for the greater good–that isn’t the way you put it, but it is what it comes down to if a scientist deliberately avoids saying true things that he thinks provide support for the wrong conclusion. But it ends up subverting the mechanism by which we find out what is for the greater good.

            Well, the thing is, there’s a good reason that it isn’t how I put it. So I’m going to resist the attempt to rephrase the discussion as “should we withhold truths that undermine our position or not?”

            The problem is that “our position” exists on multiple levels. There is the broad-level position “uh yeah, there’s a high risk of elevated CO2 levels in the atmosphere causing major climate disruptions, since we probably don’t want the climate disrupted, we should probably avoid raising CO2 levels too rapidly.” Then there’s the detail-level position “exactly how much rise, how fast, who specifically are the winners and losers?”

            There is, by nature, much more consensus on the broad picture than the narrow one. Failing to accurately present that fact does the public a grave disservice, because it creates the illusion that scientists don’t honestly believe they know, on the basis of the best calculations they’ve been able to do, that global warming is happening.

            And the general public not being aware of that until sea level rise eats Miami is… problematic.

            I in no way oppose the ongoing debate within the narrow consensus, but refusing to present a broad consensus on grounds that it might somehow bias continued debate is, in effect, advocating for science to become useless. At some point you do have to be prepared to have all the experts put their heads together and make a recommendation, and climate science is ready to make its broad recommendation.

          • Perhaps I misread you. What do you mean by:

            “get the truth about our field, and ONLY the truth, to the laymen”

            In particular, what is its implication for whether one tells the laymen true things which point to what you believe is the wrong conclusion?

            I don’t know if you followed the link, but my attitude is partly based on my experience in a context having nothing to do with global warming, where people who claimed to be experts informing the interested lay public about an issue deliberately omitted information obviously true and obviously important because it pointed in what they regarded as the wrong direction.

          • Simon_Jester says:

            Perhaps I misread you. What do you mean by:

            “get the truth about our field, and ONLY the truth, to the laymen”

            In particular, what is its implication for whether one tells the laymen true things which point to what you believe is the wrong conclusion?

            What I mean is… Suppose there is a broad consensus within the field about a core finding of the field, such as “command economies don’t work very well,” “energy is a conserved quantity,” “organisms have evolved over millions/billions of years of time, not a few thousand,” or “diseases are generally caused by infectious microbes, not by smelly air or witchcraft.”

            If such a core finding exists, then it is imperative that it be communicated to the layman in an accurate, legible manner that gives them a basic grasp of how the field’s core findings impact (or should impact) their lives. Public outreach and popularization of the field should be organized around conveying this core finding in a simple, concise way.

            If there is a significant amount of information in the field that points away from the core finding… Well, then there is or ought to be still significant debate within the field over whether the core finding is true. If doctors are genuinely arguing over whether infectious microbes are a thing, then it’s not time to focus on making sure everyone else ‘knows’ that microbes are a thing. I certainly shouldn’t take it on myself to deliberately withhold any strong evidence for the miasma theory. If there IS any strong evidence for the miasma theory, then the germ theory isn’t firmly established.

            On the other hand, if it’s pretty well settled that infectious microbes are causing many diseases, and the argument is “so, what about cancer, is that caused by a virus?” then it’s time to start telling people about the germ theory of disease. It’ll probably save lives.

          • Nornagest says:

            If such a core finding exists, then it is imperative that it be communicated to the layman in an accurate, legible manner that gives them a basic grasp of how the field’s core findings impact (or should impact) their lives. Public outreach and popularization of the field should be organized around conveying this core finding in a simple, concise way.

            The only thing on your list that directly impacts the layman’s life is the germ theory of disease, and even there the real takeaway is closer to “wash your hands periodically, especially after handling disease-carrying material”. (Which doesn’t follow as closely from the germ theory of disease as you might think — long after the theory was accepted, it wasn’t common for medical professionals to wash regularly.)

        • arlie says:

          There are more alternatives than either trusting your doctor’s prescription, or not trusting it. Doctors make mistakes, and the less time they have per patient, the more mistakes they are likely to make. I am not qualified to prescribe medications – but I can certainly double check potential interactions with my other medications, make reasonable evaluations of potential side effects (vs leaving the problem untreated), and even consider potential effects beyond myself.

          Thus for example, when given antibiotics for something that’s probably viral, just in case it’s bacterial, I can make a decision about whether to use them. (Of course it helps if the doctor says that’s what she’s doing, either to me or in my hearing.)

          If you don’t have time for more than a coin flip, then your odds are indeed better if you trust the professional. But if it’s important to you, you probably have time for more. And I’ve received a lot of mutually contradictory advice from medical professionals (broadening beyond doctors), much of it plainly out of date. (Health plans in the US publicize many outdated nostrums, sometimes putting them into the mouths of nurses and nursing assistants [who I suspect are working from a script, not from professional knowledge]).

      • Enkidum says:

        But you also can’t train every scientist to the level of a PhD on every aspect of their own field.

        I’m kind of a neuroscientist. I know vastly more than virtually anyone alive about some very, very particular aspects of my field, but once I get more than one or two steps removed from those specific aspects, there are literally hundreds if not thousands of hours of work that would be necessary for me to get to the consensus-challenging stage. Or even to the stage of judging what is a plausible challenge vs what is pseudoscientific wishful thinking.

        I’m going to be dogmatic about this: the above is true for every human being that has ever lived, save that most of them aren’t really experts about anything other than the mundane details of their own lives.

        So everybody, including ALL experts, depends on a massive web of consensuses. There’s no way around this. We have to hope that the very few people around who are capable of judging a particular consensus are doing due diligence, but we have to be realistic that in many cases, they’re not. So… I dunno.

        It occurs to me that I may have misread the emphasis in your comment and I’m possibly not really disagreeing with anything you said. It also occurs to me that I should go get some of that neuroscience I supposedly do done.

        • Aapje says:

          Sure, but you should then not mistake your ignorance (for practical reasons) for actual knowledge.

        • So everybody, including ALL experts, depends on a massive web of consensuses

          The way I usually put this is that we are all working almost entirely from second hand information, including the people who are producing a tiny bit of the information we are all working from—and depending for their conclusions on information most of which was produced by other people.

          That’s part of my argument against the process that creates apparent consensus. If there is strong pressure to reach a particular conclusion, each person biases his work a little–not necessarily lying about it, but selectively presenting it to support the conclusion. His opinion that he ought to support that conclusion is based mostly on the work of other people which, if there is an official truth that they are all supporting, is itself biased in the same direction.

          • Enkidum says:

            What do you think is the practical alternative, then?

          • Aapje says:

            @Enkidum

            I would favor taking activism out of academia, rather than the current trend of making academia about activism.

            Activism is politics and politics is about building a maximally persuasive case & about accumulating power, not about creating a maximally correct case or being open to dissent.

            This argument is similar to the arguments for the trias politica: if you merge the roles into the same institutions/people, they corrupt each other.

          • At the level of the individual scientist, the alternative is to be compulsively honest.

            At the level of the consumer of scientific information, the implication is that apparent consensus is weaker evidence of truth than it would otherwise appear to be.

          • 10240 says:

            There might be pressure against e.g. claiming that the AGW theory is entirely wrong but, as far as I know (I don’t know much), there is a wide range of estimates on its extent in the scientific community, and debate about various details. And in a complex field like this, a single paper usually doesn’t refute or confirm the entire theory, but it adds to our understanding of some detail, which may then increase or decrease the estimates.

            Then if the AGW theory somehow came into being but it was entirely false (or much less significant than the current consensus says), then scientists’ own findings would lower the estimates all the time, and most of the time they publish estimates, those would fall near the lower end of the consensus. Then the lower end of the range that can be claimed without pressure would decrease, and the consensus would gradually return to the truth. As such, I expect that a wrong consensus wouldn’t stay for long, or even form in the first place.
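
            A crude simulation of that dynamic (all numbers invented, purely to show the mechanism): suppose the true value is 1.0, the lowest ‘respectable’ published estimate starts at 3.0, and the boldest paper each year undercuts the old floor slightly:

              import random

              random.seed(1)
              TRUE_VALUE = 1.0
              floor = 3.0  # lowest estimate currently considered publishable
              for year in range(12):
                  raw = [random.gauss(TRUE_VALUE, 0.5) for _ in range(50)]
                  published = [max(e, floor) for e in raw]  # estimates censored at the floor
                  floor = max(floor - 0.25, min(raw))       # boldest paper undercuts the floor
                  consensus = sum(published) / len(published)
                  print(f"year {year:2d}: consensus {consensus:.2f} (floor now {floor:.2f})")
              # While the floor binds, the consensus tracks it downward; once the
              # floor falls below the data, estimates settle near the true 1.0.

            While the floor binds, the consensus just tracks it downward; once it stops binding, the estimates settle near the truth, which is the gradual return described above.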

    • Randy M says:

      I think, given the replication crisis and the doubt cast on well-regarded theories, scientific consensus should be weighted lower than it would be in an ideal world. Perhaps differently in various fields, but judging that requires some expert knowledge itself.

      • Enkidum says:

        Lower than in the ideal world, certainly, but nowhere near 0.

        • gbdub says:

          Who is arguing for zero? The argument is against using “scientific consensus” as a shaming tactic to put skepticism outside the Overton window, not that “99 out of 100 smart people agree” is totally meaningless.

        • Randy M says:

          This is one of those topics where some people need to hear “do more” and some people need to hear “do less”, and I don’t know what the correct level is, nor which side you or most anyone here falls on.

          If you are an absolutist, you probably need to modify your view.

        • albatross11 says:

          What are we comparing it with? Scientific consensus is often wrong, but it’s at least using techniques that might somehow end up getting us closer to reality–most other ways of figuring out what’s what don’t even have that going for them.

          • Randy M says:

            I think we are comparing it to considering the evidence and rationale for ourselves.

            One could be humble and suppose that if you reason your way to a different view than the scientists, those smart, knowledgeable scientists are the ones who got it right, so you should just skip the middle step and believe what most scientists believe.

            But that kind of view is the one that needs to keep in mind the points in this thread that novel scientific findings may fail to replicate for various reasons, and scientific consensus can form around ideas that haven’t had sufficient proof.

            It’s a matter of degree of certainty or strength of evidence, not absolutely ruling out certain kinds of evidence.

    • crh says:

      I think it’s a mistake to dismiss appeals to authority just because they’re not valid deductive logic. Lots of things that aren’t deductively valid are nonetheless super-useful heuristics. Scientific communities can go astray, yes, but so can you. You have to weight the probability that the consensus has gone awry against the probability that you will make a mistake when you try to evaluate all the evidence yourself. When you’re a non-expert trying to evaluate a technical claim, the probability that you will make substantive mistakes is pretty much 1, and you should weight the Authority Heuristic pretty heavily.
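
      As a toy version of that tradeoff (every probability below is invented, purely for illustration): compare your chance of ending up right if you defer to the consensus versus if you evaluate the evidence yourself:

        # Invented numbers, purely to illustrate the tradeoff described above.
        p_consensus_right = 0.90  # how often a mature field's consensus holds up
        p_expert_diy = 0.85       # an insider re-deriving the answer alone
        p_layman_diy = 0.55       # a non-expert wading through technical literature

        for label, p_diy in (("expert", p_expert_diy), ("layman", p_layman_diy)):
            better = "defer" if p_consensus_right > p_diy else "evaluate it yourself"
            print(f"{label}: defer is right {p_consensus_right:.0%} of the time, "
                  f"DIY is right {p_diy:.0%} -> better to {better}")

      On those numbers the Authority Heuristic dominates for the layman, while for the expert the margin is thin enough that checking the evidence can pay.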

      • Enkidum says:

        +1

        This is something in the background of what I was trying to say above, but far better-expressed than I managed.

      • crh says:

        Can something really simultaneously be 1. a good reason for believing X, 2. my actual reason for believing X, and yet simultaneously 3. inadmissible as evidence in a debate about X?

        That idea seems deeply weird to me. I need to think about it more.

      • Alsadius says:

        crh: At first glance, #2 can go with either, but #1 and #3 are hard to match up.

        The only case where I can imagine both is if you’re intentionally limiting the debate to a subset of the issue – e.g., “women should have complete control over their bodies” may be both a good reason to believe in legal abortion and your actual reason, but it doesn’t apply to a debate that’s specifically limited to the question of fetal personhood.

      • crh says:

        I don’t think it’s valueless. It establishes burden of proof. If you’re going against the scientific consensus, especially if it’s an area in which you are not yourself an expert, then a priori it’s much more likely that you made a mistake than that the field as a whole did. So you’d better have some really good arguments for why that’s not the case. It sounds like in the specific case of AGW you have (or at least believe you have) both arguments against the consensus conclusions directly and arguments for why the consensus is more likely than usual to have gone awry in this particular case. That’s fine. I’m not saying appeals to authority are always correct or that they can admit no counterarguments, I’m just saying that merely labeling something an “appeal to authority” is not such a counterargument.

      • gbdub says:

        Consensus is valuable but fragile. It’s better than nothing, but (should be) vulnerable to any evidence that contradicts the theories the consensus is based on. Consensus beats conjecture, but evidence trumps consensus.

        Consensus can be dangerous if it prematurely causes an uncertain question to be considered resolved.

      • carvenvisage says:

        Can something really simultaneously be 1. a good reason for believing X, 2. my actual reason for believing X, and yet simultaneously 3. inadmissible as evidence in a debate about X?

        That idea seems deeply weird to me. I need to think about it more.

        Not saying there isn’t something there, but ‘inadmissible as evidence’ vs ‘not a good idea’ is really not the same thing. It’s like ‘actively forbidden’ vs ‘not profitable’ (and it also jumps from an ‘epistemological’ frame to one of social action).

        Consider the extremely simple case of someone who has an excellent ‘high-level generator of disagreement’, such as, “I just get the feeling it’s wrong to murder people just to take their stuff, ..that it’s not good”, but can’t articulate it very well.

        -‘This seems kinda fucked up’ is certainly admissible evidence within the bounds of your own mind, but it’s not in itself a brilliant armament for wading into a (pre-existing) bravery debate trying to deromanticise criminality.

        _

        Or to try and put it in a sentence: conflating ‘inadmissible’ with ‘not a good idea’ glosses over at least two factors: “can you, fundamentally, articulate your reasons”, and “practically, can you present them in a way that people will listen to”.

      • Nicholas Conrad says:

        “Consensus view is X (coincidentally, likely confirming my priors) therefore I find X more plausible than not X” = reasonable opinion

        “Consensus view is X (coincidentally, likely confirming my priors) therefore policy should be Y” = fallacious argument

        “Consensus view is X (coincidentally, likely confirming my priors) therefore anyone who believes policy should be not Y is a ‘science denier'” = fallacious argument that deeply misunderstands what science is

        • crh says:

          1. The consensus view is X, therefore X is probably true
          2. We should base policies on our best estimates of what is true
          3. Therefore policy should be Y

          Where’s the error?

          • Nicholas Conrad says:

            The consensus view is X, therefore X is probably true

            There are myriad reasons one could reject any particular consensus, ranging from groupthink, to systemic bias, to the pessimistic meta-induction. You may personally find it convincing (though you likely do not if it contradicts, rather than confirms, your priors), but it is not evidence in support of an argument, as it is an appeal-to-authority fallacy.

            We should base policies on our best estimates of what is true

            We usually base policies not on preponderance of evidence, but in deference to some value. This is why we ask for proof beyond a reasonable doubt, not 50% plus a feather, before convicting accused criminals. This is why, despite best estimates of only moderate warming in the next few hundred years, many of the loudest ‘concerned scientists’ are demanding the ‘precautionary principle’ be employed to hedge against the admittedly unlikely worst-case scenarios. Strong evidence suggests strict speech codes could prevent some suicides, yet we hold free speech in such high regard that we have preemptively prohibited Congress from making any law abridging it.

            Therefore policy should be Y

            As addressed above, “The consensus view is X, therefore X is probably true” is a fallacy, therefore your “therefore” does not follow from this premise.

            Additionally, even if you still feel consensus is valid evidence on its own, the ‘consensus view’ is based on the consensus of experts in the field. I daresay you would not accept the consensus of fox news producers as authoritative in the climate change discussion. Neither then should you accept the consensus of climate scientists as authoritative in policy discussions as their domain of expertise is climate science rather than public policy. In fact, it would seem to follow from your reasoning that the lack of consensus in the policy domain should be a convincing argument for exploring policy alternatives besides Y.

            Finally, it is possible to completely agree with all of your premise, and still disagree with your conclusion. For instance I can agree that AGW is real, but most of the harms will be in the distant future when people will very likely be much richer and more technologically advanced, and have specific knowledge of time and place where the harms are having impacts, thus can address the harms much more easily and effectively than we could. Therefore I can conclude it’s probably more important to grow global GDP now and alleviate the suffering of the bottom billion than to spend significant portions of global GDP trying to make life a little better for rich people in the far future (the ‘consensus’ policy prescription).

        • Tim van Beek says:

          “Consensus view is X (coincidentally, likely confirming my priors) therefore anyone who believes policy should be not Y is a ‘science denier’” = fallacious argument that deeply misunderstands what science is

          From a sociological viewpoint, a scientific community defines a set of beliefs that you must accept, with a given certainty threshold, after “being presented evidence”, in order to be initiated into the group.
          Of course that’s not static: both the set of beliefs and the threshold depend, among other things, on the evolution of the group, the rank of the individual within the group, and the concrete context of a given conversation.

          Example: Points 2 and 6 of John Baez’s Crackpot Index.

          Resistance to acting on said beliefs in the form of a policy may be used to infer that you are below the required certainty threshold.

          Example: “Germs are real, but I still do not think surgeons should disinfect their hands before an operation.”

          That an individual is rejected from the group, or at least loses authority, as a consequence, is exactly how scientific groups work, from, again, a sociological viewpoint.

          P.S.: For the time being I will do my best to try to follow the implicit assumptions about what “science” and “policies” etc. are, from an epistemological viewpoint. Don’t hold it against me if I get that wrong 🙂
          P.P.S.: In case you work in an experimental group in some “exact science”, try how long you can tell your colleagues something along the line “there is some flaw in your/our experiment that we all don’t understand yet, but you/we cannot publish any results until we do” before they throw you out 🙂
          P.P.P.S.: Thinking about it: maybe do that as a thought experiment, instead.

          • Nicholas Conrad says:

            I think this is a good comment because I cannot tell if you are agreeing with me or disagreeing. 🙂

        • crh says:

          There are myriad reasons one could reject any particular consensus

          Fine. Then make those particular arguments in those particular cases. My point is that just saying, “Appeal to authority!” and declaring victory is not an argument, it’s just a kind of name-calling. Unless, I guess, you are so skeptical of the scientific consensus that you think it’s accurate no more often than chance. Otherwise it makes sense to start from the position that the consensus is probably true and only deviate from that position with good reason.

          We usually base policies not on preponderance of evidence, but instead in deference to some value.

          Sure, fine. I don’t think this has anything to do with the question of whether “appeal to authority” is a useful way to reason about the world, but I admit that I vastly oversimplified the connection between evidence and policy.

          Neither then should you accept the consensus of climate scientists as authoritative in policy discussions as their domain of expertise is climate science rather than public policy.

          I agree. This is maybe a good time for me to notice that you (and maybe some of the other people in the thread) seem to want this argument to be a proxy battle in the AGW wars. I can understand how you got there, but let me state emphatically that that’s not where I’m coming from. I come at this from a longstanding annoyance with the various ways I think taxonomies of “fallacies” actually impoverish debate, and one of those ways is, “people refusing to use useful heuristics on the grounds that they’re not deductively valid.” In particular, I don’t think I’ve said anything in this thread that is incompatible with the object-level beliefs of AGW skeptics.

          • Nicholas Conrad says:

            This is maybe a good time for me to notice that you… seem to want this argument to be a proxy battle in the AGW wars.

            If we are identifying ways of arguing that impoverish debate, ascribing secret motives to your interlocutors would top my list. I use AGW only for simplicity, due to its recent close association with the phrase ‘scientific consensus’ and the well-documented efforts to use that particular consensus as evidence in the policy domain, for social shaming, and to move the Overton window. I could just as easily talk about corporate tax, subjective value theory, minimum wage, or E2EE, to name a few other areas where expert consensus goes unheeded.

            My point is that just saying, “Appeal to authority!” and declaring victory is not an argument, it’s just a kind of name-calling.

            I think our fundamental disagreement is that I believe you have that backwards: in my view just saying “Consensus” and declaring victory is not an argument (though is frequently used as a social shaming tactic). By contrast, pointing out the appeal to authority fallacy seems to me not name calling, but dispassionate refutation.

            Even saying that consensus may not be correct in all cases but establishes the burden of proof is problematic. Consensus as a class is subject to groupthink, systemic bias, error, Lysenkoism, etc., so whether you find a given consensus more convincing than the claims of bias against it is almost exclusively a function of its sign relative to your priors:

            I have argument A for the proposition, you have argument B against the proposition; slapping “consensus” on the table in either direction refutes neither, and convinces no one. I fail to see how it contributes to enriched or elevated debate. Assuming it is not an outright shaming ploy, it can only possibly be ignored, or lead to a proxy meta-debate about the strengths of biases at play in the relevant field.

            It occurs to me we may have a natural experiment: you express frustration with the number of times people raise the ‘appeal to authority’ fallacy against your ‘argument from consensus’; it seems like you hear that a lot. But if argument from consensus is convincing evidence, wouldn’t you paradoxically have to at least consider that your opponents, by their sheer volume, are correct? Or do you remain steadfast in your own considered opinion despite the volume of detractors, perhaps suspecting there’s a confounding bias (i.e.: “[waging] proxy battles in the AGW wars”)?

            If you have deeper arguments than “consensus” then it is incumbent upon you to use them. If you have no deeper argument than “consensus” then it seems to me you have no deep argument at all. I’m sorry if that is irritating to you, from other comments of yours I’ve read you are very thoughtful and well reasoned, but I just don’t think you and I are going to come to consensus (pun intended!) on this… Though I have changed my mind on bigger things before, so who knows, maybe a year from now your arguments will have sunk in in a way I’m not seeing at the moment.

        • crh says:

          I don’t have time tonight to compose a full response but there’s one point that I feel the need to address immediately.

          If we are identifying ways of arguing that impoverish debate, ascribing secret motives to your interlocutors would top my list.

          You are 100% correct about this and I apologize. What I wanted to express was that I was very worried about being misperceived as having an ulterior motive. I even thought that’s what I had expressed until I saw your reply. Rereading my comment it’s clear that I fumbled horribly and somehow accused you of exactly what I was afraid of being accused of. This was terrible and I feel very badly about it. Mea maxima culpa.

        • Ketil says:

          I think we all(?) agree that knowledge should be based on evidence, not opinion. So consensus is then just a shortcut, and the majority opinion is only valuable to the extent it is based on evidence. Scientific consensus ought to mean that a bunch of smart people carefully looked at the evidence, and based their opinion on it. But it should – at least in theory – be possible to follow the trail of evidence and come to the same conclusions.

          • Tim van Beek says:

            Meta: Human conversation and its analysis depend essentially on context: who is talking to whom, how, about what, in front of which audiences, with which intentions? It seems to me that both Scott and most commentators assume (implicitly) that the context of this post is something similar to:

            It is about conversations between scientists from different communities who disagree about a statement that should be in the realm of “objective evaluation is possible”, disregarding audience and disregarding communication channels (personal contact, voice only, text-based synchronous or asynchronous). And the question is: why don’t modes of conversation that work in my scientific community work outside of it?

            I don’t know if I got that right, but I do have the impression that the discussion is hindered by differences in the assumed context and also changes of context mid discussion.

            Back from meta to the “factual statement level”:

            …be possible to follow the trail of evidence and come to the same conclusions.

            To expand on my previous, seemingly cryptic 🙂 comment:

            1. How do you determine who is smart? Coming to the same conclusions as the community when presented the same “evidence” (an undefined and possibly unclear term, which depends heavily on context) for a set of test questions is part of it, right?

            2. How do you determine that you have seen enough evidence to come to a conclusion? Is the threshold objectively measurable?

            3. You cannot evaluate all evidence, for most topics, ever (AGW is a particularly good example). “One should be able to do so in principle” translates, in practice, for the producer of the evidence, into a “when my colleague’s new graduate student, the one who asks all the smart questions, gets to see this, I hope I’m not screwed” kind of quality measure. This is not objective, and it may not match expectations outside of the given scientific community.

      • qwertie says:

        @Gossage you read dozens of which papers, I wonder? There are well over 10,000 peer-reviewed papers in climate science. The field is so big that IPCC reports take thousands of pages merely to summarize the research results. The majority take no position about human causation (if the topic is not relevant to a paper, it won’t be mentioned) and there are over 100 peer-reviewed papers that disagree with the consensus (about 1%), some from household-name contrarians like Richard Lindzen, Roy Spencer, John Christy, and (to a lesser extent) Judith Curry, and others from more obscure folks, not including people who pretend to be experts like Tim Ball and Christopher Monckton. Of course, mainstream scientists have produced response papers, but using AGW-skeptic sites like WattsUpWithThat as one’s guide, one will be either unaware of the responses or primed to reject them as “bullshit”.

        Trying to understand AGW denial has been a hobby of mine for the last year – so much so that SkepticalScience.com took notice and invited me to join their team. I must say, Gossage, I am interested to hear what you have to say, since your presence at SSC suggests you are likely to bring a higher level of thoughtfulness to the topic than I am used to. Maybe you could link to something you’ve written elsewhere? Or to something that strongly influenced your thinking?

      • @ qwertie:

        Since you say you have a connection with sks.com, you might want to look at my old blog post concerning Cook et al. 2013. I claim to demonstrate deliberate dishonesty by John Cook, who I believe is responsible for sks.com. The evidence I rely on is publicly available, mostly webbed by Cook and his coauthors.

        If true, that might concern you. If you can show that my claim is false, that would concern me–you would, of course, be welcome to put your rebuttal as a comment on my post.

        there are over 100 peer-reviewed papers that disagree with the consensus

        “The consensus” is ambiguous. Do you mean

        1. “Global temperature is trending up.”

        2. “The increase in global temperature is in part due to human actions.”

        3. “The increase in global temperature is primarily due to human actions.”

        4. “The increase in global temperature is primarily due to human actions, and if nothing is done about it the results will be catastrophic.”

        Cook et al. 2013 offered evidence of consensus on claim 2, but a lot of people represent that as consensus on claim 4. Which claim (or some other) do you mean?

      • qwertie says:

        Hi David. In broad strokes I agree with you – i.e. Cook 2013 itself doesn’t show that 97% of papers endorse that humans are the “main” cause of global warming or that “most” of the warming is human-caused. Last month I raised basically that same point on the SkS internal forum, after a commenter on SkS compared the 10 papers in category 7 against the 65 in category 1 to arrive at an “87% consensus” and was, I would say, not understood by the moderators and therefore not treated with proper respect. (IIRC I had earlier butted heads with Bärbel, one of our most dedicated volunteers, about it and I didn’t think she grokked what I said.) I don’t think the issue here is honesty – the SkS team is honest, but as insiders whose friends are climate scientists, some of them aren’t good at seeing the facts from a contrarian’s point of view.
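
        For concreteness, here is the arithmetic in question as a quick sketch (Python; the 65 and 10 are just the counts quoted above, so treat them as approximate rather than the exact published figures):

          # Sketch of the "87% consensus" arithmetic from the category counts
          # quoted above (approximate; see Cook 2013 for the exact values).
          cat1 = 65   # abstracts explicitly endorsing AGW, with quantification
          cat7 = 10   # abstracts explicitly rejecting AGW, with quantification
          consensus = cat1 / (cat1 + cat7)
          print(f"{consensus:.1%}")   # 86.7%, i.e. the "87% consensus"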

        I argued that SkS should admit that Cook et al doesn’t in fact show that 97% of papers agree with the “most” warming or “main cause” claim, and should instead argue primarily from other consensus studies, but John Cook’s response was to give me some additional data about author self-ratings that was gathered but not actually published in Cook 2013 (anonymized data is, however, published in The Consensus Project associated with Cook 2013). This data is actually mentioned already in the SkS rebuttal where the user was complaining about the 87% thing, but it’s stated in a sufficiently obscure way that I didn’t notice it.

        So I have written a better explanation – see the discussion of Cook 2013 near the bottom of this post I wrote about the consensus. TL;DR: papers making an explicit quantification show a 96% consensus by self-ratings. BTW this is all basically off-topic, so I suggest directing further comments to that Medium post.

      • @ qwertie:

        I have responded, in some detail, in a comment to your Medium post.

      • Re my exchange with qwertie …

        Here is my initial post to the blog he suggested we move the discussion to.

        Here is his reply.

        And here is my reply to it.

    • arlie says:

      Yes. Arguments from consensus are only relevant to people who either cannot understand the underlying data, or choose not to put in the effort to do so. Being human, scientists are subject to all the usual sources of error, including being influenced by the beliefs of their peers. But that’s not what the scientific method is about, and anything that claims to be “science” but doesn’t use the scientific method is at best using the term in the archaic sense, as “a system of knowledge” – and at worst claiming the mantle of science in order to profit from deceiving the unwary.

      Congratulations on spotting the omission.

      • crh says:

        Arguments from consensus are only relevant to people who either cannot understand the underlying data, or choose not to put in the effort to do so.

        I disagree. They ought to be relevant to anyone with enough intellectual humility to think Prob(I have misunderstood the underlying data) is at least comparable to Prob(the majority of experts have misunderstood the underlying data).

        It’s good to be a skeptic. It’s not good to pair your skepticism of everyone else with perfect confidence in your own ability to reason from data.
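
        To put toy numbers on that comparison (a minimal sketch in Python; both probabilities are invented purely for illustration):

          # Toy model: when I disagree with the expert majority, what are the
          # relative odds that they, not I, read the data correctly?
          p_me_wrong = 0.3        # hypothetical chance I misread the data
          p_experts_wrong = 0.1   # hypothetical chance the expert majority did
          odds = ((1 - p_experts_wrong) * p_me_wrong) / ((1 - p_me_wrong) * p_experts_wrong)
          print(round(odds, 2))   # ~3.86: the consensus should still move me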

        • arlie says:

          Perhaps I phrased that a little strongly, but it’s unlikely that any argument from consensus will ever persuade me of anything stronger than “I should recheck my work” and/or “I should get critical feedback from someone competent, who’s unlikely to have an axe to grind”.

          I’ve seen too many generally agreed upon truths change over the course of my lifetime, or when I moved to another country, and I’m too aware of the human wiring that makes us move towards whatever beliefs are commonly shared in our environment.

          About all consensus is good for is indicating which arguments/theories to look at first 😉 They may be totally wrong – but at least they’ll give you the context and common assumptions for most people’s arguments. And *if* folks promoting the consensus provide evidence, and the evidence checks out, maybe you can stop there.

          But if they are “true” because we believe them, and we believe them because they are “true” and/or because we are good people and/or other such nonsense, then, while the beliefs might still happen to be true, you might as well move on to something else.

          • crh says:

            I’ve seen too many generally agreed upon truths change over the course of my lifetime, or when I moved to another country

            How does the number of times you’ve seen generally agreed upon truths change compare to the number of times you’ve realized you were wrong about something?

            (Tangent: I’m choosing to interpret “generally agreed upon” to mean “generally agreed upon among the relevant experts.” I think “what does the general population believe” can be a useful heuristic too, but it clearly performs worse than scientific consensus, and anyway it’s not what I want to talk about/thought we were talking about.)

            Nobody is arguing, “The consensus is always correct and you should just trust it.” Or at least, I’m not arguing that, and if anyone else is I think they’re being foolish. The question I’m interested in is, “What strategy will minimize the number of wrong beliefs I hold at any given time?” I’m arguing that the optimal strategy is something like, “Assign more weight to expert consensus than to your own analysis, unless you have special reasons (high technical expertise, an unusual set of personal experiences, you are a super-genius and are never wrong, etc.) to do otherwise.” I do believe that “always rely on your own analysis” is a worse strategy, in terms of the metric I set, than “always rely on the consensus,” but I’m not advocating for the latter; the former is just especially bad.
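
            To make that concrete, here is a minimal simulation (Python, with accuracy numbers I am inventing purely for illustration: my own analysis is right 70% of the time, 95% on the few topics where I have real expertise, and the expert consensus is right 90% of the time):

              import random

              def wrong_rate(strategy, trials=100_000):
                  # Count how often each strategy leaves me with a wrong belief.
                  wrong = 0
                  for _ in range(trials):
                      special = random.random() < 0.10   # a topic where I have expertise
                      me_right = random.random() < (0.95 if special else 0.70)
                      experts_right = random.random() < 0.90
                      if strategy == "always own analysis":
                          wrong += not me_right
                      elif strategy == "always consensus":
                          wrong += not experts_right
                      else:  # defer to consensus unless I have special reasons
                          wrong += not (me_right if special else experts_right)
                  return wrong / trials

              for s in ("always own analysis", "always consensus", "consensus unless special reasons"):
                  print(s, round(wrong_rate(s), 3))
              # Typical output: ~0.275, ~0.100, ~0.095. "Always own analysis" is
              # by far the worst; the mixed strategy edges out pure deference.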

          • arlie says:

            @crh

            First of all, I was not restricting the topic to esoteric details within arcane fields, where no one but a specialist is even likely to understand the question 😉

            I was thinking of the universe of things where I’m likely to want to know the answer, rather than being content to say “I don’t know” in one form or another.

            But if we’re talking hard sciences, my go-to example of consensus change is epigenetics. When I was learning biology, textbooks made it absolutely clear that only genetic inheritance mattered; life experience had no effect on offspring. This usually came complete with a dig at Lysenko, and the Soviet state that supported him. Well, epigenetics is not Lysenkoism, and while I don’t know exactly what Lysenko claimed, the two probably aren’t compatible. But we now know that starvation at a certain stage of pregnancy can affect your granddaughters, and not via selection (some genotypes being more likely to be miscarried). We even know some of the mechanisms.

            With regard to the other comparison, I’m perhaps more willing than most to say “I don’t know”. But also I think we’re arguing at cross purposes. Your strategy of “Assign more weight to expert consensus than to your own analysis, unless you have special reasons (high technical expertise, an unusual set of personal experiences, you are a super-genius and are never wrong, etc.) to do otherwise.” is pretty close to mine, except I’m a bit more likely to hold things in a state of “I don’t know, but experts generally claim …”

            Where I may be systematically wrong – or not – is the case where the experts aren’t explaining their beliefs. It’s a red flag for me if I can’t find either statements about the evidence or statements about the mechanism that stand up to scrutiny. Likewise if the statements don’t appear to make sense. I exclude the case where I’m not qualified to understand such statements – mostly those that rely on various branches of mathematics. (In that case the statements need to be present, and not have mathematicians lining up claiming there are large ugly holes in them. I.e., I rely on experts.)

            Sometimes non-explanation or non-rigorous explanation isn’t an indication of snake oil. That’s especially true if I’m getting information from popularizers and/or non-specialists. And there’s a special case where it’s hard to find a general introduction – the experts are writing about contested details on the fringes, ignoring the generally agreed upon central thesis. The trick there is usually to find teaching material intended for juniors in the field. But if the field is too esoteric, that may only exist in the form of seminars and similar, not texts. Fortunately the set of things of that type where I care about the answer is very small.

            [Edit: reading the above, I see it could be misunderstood as suggesting that I don’t do math – which, if true, would make me unable to understand most scientific arguments. I do many types of math competently, or at least well enough to understand someone else’s argument. But not all of them, and I ran up against a “can’t seem to learn this” barrier in one particular area.]

          • albatross11 says:

            I think you also need an evaluation of the field’s ability to generate consensus that’s meaningfully related to reality.

            At one end, you have stuff where you can do (and people have done) experiments and verified their consensus view, or where they’ve at least done tons of observations in many different places and ways and verified their consensus view.

            At the other end, you have stuff where there’s no practical way to test the consensus view, and it’s the result of which side made the more convincing arguments in rooms full of the right kind of experts.

            Somewhere in the middle, you have fields where some stuff can actually be verified by experiment or observation, but there’s a lot being inferred from that. And fields where there’s a lot of practice being done based on the consensus, and people who would at least have an incentive to improve on the consensus view, if they could.

            That tells you something about how much weight you should give the consensus of a field. For fields where the consensus is checkable and has been checked multiple times by experiment, it’s really solid. For fields where the consensus means one side won all the arguments and pushed the other side out of the field, you shouldn’t give it a lot of weight.

          • crh says:

            Your strategy of “Assign more weight to expert consensus than to your own analysis, unless you have special reasons (high technical expertise, an unusual set of personal experiences, you are a super-genius and are never wrong, etc.) to do otherwise.” is pretty close to mine, except I’m a bit more likely to hold things in a state of “I don’t know, but experts generally claim …”

            I have a hard time understanding how this is compatible with statements like “Arguments from consensus are only relevant to people who either cannot understand the underlying data” or even “all consensus is good for is indicating which arguments/theories to look at first”.

            I wonder if maybe you have a more binary conception of “being agnostic about things” than I do. I agree that if the only two possibilities are “being completely certain X is true” and “being totally neutral on the question of X”, then appeals to authority are unhelpful, because they’ll never get you all the way to the former. But then, neither will anything else, for most values of X.

            [Examples of red flags/reasons to distrust the consensus in specific cases.]

            That all seems reasonable. My position is only that one has to actually make the here’s-why-this-consensus-is-unreliable arguments in each case where they apply, instead of just declaring “appeal to authority fallacy!” and moving on.

          • arlie says:

            @crh Yes, I think I’ve been very sloppy here.

            There are at least 3 contexts:

            1. Some kind of debate about The Truth (TM).
            2. Trying to make better decisions when the truth matters but is not perfectly known.
            3. Improving the breadth and accuracy of one’s beliefs.

            In context (1), I’m happy with my initial soundbite “Arguments from consensus are only relevant …”

            In context (2), your heuristic is pretty decent: “Assign more weight to expert consensus than …” This is particularly true when it’s impractical to arrive at a high level of certainty, e.g. because action is time-critical.

            In context (3), I like my other heuristic “… consensus is good for is indicating which arguments/theories to look at first …”

            I think albatross11’s comment is also highly relevant. “…you also need an evaluation of the field’s ability to generate consensus that’s meaningfully related to reality…”

            Obviously I have high confidence in my own ability to learn, and relatively low belief in the existence of arcana which cannot be meaningfully comprehended without being properly initiated and accepted among those officially defined as specialists.

            Yes, I’ve certainly encountered clueless amateurs, and even been the clueless amateur myself. But I usually have a pretty good idea of the extent and reliability of my understanding, and confine myself to polite questions when I rate both of those as low.

            You say:
            “My position is only that one has to actually make the here’s-why-this-consensus-is-unreliable arguments in each case where they apply, instead of just declaring “appeal to authority fallacy!” and moving on.”

            I’m disinclined to do that, especially in matters of science. In science, either there’s real data, which can be cited, summarized, etc. – or there’s a theory which makes predictions, which can be and has been tested repeatedly.

            Or failing that, even the specialists don’t know – they just have guesses about what further research will eventually demonstrate ;-(

            Don’t tell me that 99.9% of physicists agree about the “law of gravity.” Either show me the Newtonian model, and its equations, and the predictions from them, and invite me to go make some measurements. Or go all the way and give me the relativistic version as well.

            Because the truth is, unless you give me the full explanation, I don’t actually know what this “law of gravity” really is 🙂

            And now I’ve contradicted myself again – by using the strong form (case 1) in a context of case 3 (learning) 😉 I guess the truth is, if I don’t understand it, I don’t really trust it – or at least I don’t think I actually *know* it.

            [Edit: By the way, that latter is a really *good* heuristic when dealing with recommendations from “financial planners”, “economists”, and a whole raft of other non-scientific people, particularly when they are selling something. If I don’t understand it, then I certainly cannot evaluate its truth ;-( So I probably should not buy whatever product they are selling ;-)]

        • RC-cola-and-a-moon-pie says:

          Imagine that I were able to produce a rigorous empirical demonstration that the vast majority of scientists believe that appeals to consensus should carry no weight as a matter of scientific epistemology. (That’s just made up for purposes of proposing a hypothetical.) What would defenders of deference to consensus do with that showing? Anyone who defers to scientific consensus would have to defer to this consensus too, and thereby reject such deference. (The opponent of deference to consensus would not face any similar paradox, since they would be free to agree with that conclusion without deferring to it.)

          Edit: I’m not raising this as a categorical argument against deference to scientific consensus (not least because I have not made the empirical showing). I support deference in many circumstances. I’m just curious how strong defenders of the practice would react to such a demonstration.

          • Enkidum says:

            I’d probably think I should trust consensus a bit less? I mean yes there would be something paradoxical in my reasoning, obviously, but I feel like I would avoid being Buridan’s Ass simply by getting on with my life?

      • 10240 says:

        On any given issue, 99.9% of people don’t have the time to analyze the underlying data themselves.

        • For most issues, that includes the experts. Someone who is an expert on the analysis of global temperature data doesn’t have the time to evaluate for himself the analysis that goes into building climate models or projecting CO2 emissions or figuring out the effects of lowering ocean pH. I’m willing to give considerable weight to his opinion about what average global temperature is – certainly much more than to my guess on the subject. But I give no more weight to his opinion on the rest of the issues that go into the AGW controversy than to mine – probably less, if I don’t have some additional reason to trust his judgement.

    • Douglas Knight says:

      Lucky for you, Scott gave you a great excuse to ignore the scientific consensus. He said that it can only be measured by survey. I agree that this is the best way to measure it and I wish people did more surveys, but they are practically nonexistent, so he has defined the consensus out of existence (or at least out of argumentative scope).
      That’s almost the opposite of what he seemed to me to say before.

    • Nabil ad Dajjal says:

      A lot of people have mentioned this so I was probably unclear:

      If an experimental result clearly validates a model and it has been reproduced in the literature, there will almost certainly be a consensus in favor of that model. So, yes, there’s a scientific consensus of ~100% of scientists in favor of the germ theory of disease. But the bare fact that the consensus exists is by far the weakest piece of evidence in favor of germ theory that you could possibly point to. The theory can stand on its own thanks to the mountain of widely available evidence supporting it.

      Another point that I don’t think was clear enough is that it makes sense for a layman to assume that the doctors, engineers, etc. in his life know what they’re talking about until results demonstrate otherwise. The problem is that the layman doesn’t then have “an opinion” on the underlying science, at least not one worth talking about. Science is not an evangelical religion which needs believers to go out and spread the good news.

      • Everybody is a layman about things that are not their speciality. If examining evidence were low-cost, then you could safely ignore consensus. But examining evidence that is outside your field is high-cost, because evidence often requires interpretation.

        • Nabil ad Dajjal says:

          Everybody is a layman about things that are not their speciality.

          Yes and no.

          Most of the skills you develop as a scientist are actually highly transferable between fields. It’s quite common for scientists to move into entirely different fields from what they were trained in and do well in them. You need to familiarize yourself with the literature and techniques of the new field, but it’s not like you need to get a new PhD.

          If it’s important for some reason that you have an opinion on a scientific issue, that’s entirely doable with a bit of study. If it’s not important enough to put in the effort to study the issue, then it’s not important enough for you to have an opinion on it.

          • Most of the skills you develop as a scientist are actually highly transferable between fields.

            Two real world examples from my experience in climate arguments:

            1. A doctrine accepted in a number of fields is that you judge a theory by whether its predictions fit the data. After observing claims on both sides that IPCC projections did or did not fit what happened thereafter, I went back to the early reports, figured out what I would expect from what each of them said, then compared it to what happened. That does not tell me much about what will happen, but it does tell me something useful about how reliable IPCC projections are. (See the sketch after this list.)

            2. I spotted a logical error in the reason many people believed a particular climate claim (the claim itself might be true), by combining my knowledge of an unrelated issue in economics and of the relevant physics.
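
            Here is a sketch of the kind of check described in (1), in Python, with invented numbers standing in for the real report figures and temperature record (the point is the mechanics, not the values):

              # Compare a projected warming rate to the observed one over the
              # same period. Both numbers below are placeholders, not data.
              projected_rate = 0.30   # hypothetical scenario projection, degC/decade
              observed_rate = 0.17    # hypothetical observed trend, degC/decade
              print(f"observed warming ran at {observed_rate / projected_rate:.0%} of projected")
              # Once you pin down what a report actually predicted, the
              # comparison itself is this mechanical.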

          • DocKaon says:

            DavidFriedman gives a great example of how, unless you’re familiar with the field, arguments can easily be incorrect or misleading. He compares the Business as Usual emissions scenarios against observed warming and concludes that the predictions are bad. Unless you know that emissions were substantially below the Business as Usual scenarios, you’d be inclined to think that the climate science has problems, rather than the economic modeling underlying the emissions scenarios.

          • RC-cola-and-a-moon-pie says:

            Hey, DocKaon. For purposes of any policy response, isn’t it just as important to know that emissions will be high enough to have deleterious effects as it is to know what will happen if those emissions occur? I appreciate that the former may not embarrass the underlying science as much, but the case for policy action presumably requires establishing every fact leading to the bad outcome, isn’t that right?

          • DocKaon says:

            They’re both important, but the two errors drive completely different policy conclusions, because total emissions are where most policies would act.

            If climate science overestimates the warming impact of a given amount of greenhouse gases, the cost-benefit of doing something decreases. There is less harm to reduce.

            If economics overestimates the amount of greenhouse gases released because it overestimated economic growth, then the cost-benefit ratio is unchanged and we might have more chronological time.

            If economics overestimates the amount of greenhouse gases released because it overvalues the economic activity we get from releasing them, or overestimates the cost of reducing emissions, then the cost-benefit of doing something increases. We’re giving up less economic activity, or the cost of reducing emissions is less than what we thought.

          • qwertie says:

            “emissions were substantially below the Business as Usual scenarios”? Fact check: when I cross-reference Global Carbon Project emissions with the BaU scenario in IPCC’s first 1990 report, I see that emissions in 2016 are indeed lower than projected, but I also see that emissions in 1990 are lower in GCP than in the IPCC report. Fig. 5 on p.23 of FAR shows 1990 emissions as 7-7.5 GtC whereas GCP data says emissions in 1990 were about 6 GtC. Interestingly the first IPCC report also projected a faster rate of warming than later reports did… coincidence? Maybe.
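
            In code, the baseline problem looks like this (a sketch in Python using only the figures quoted above; the 7.25 is my midpoint of the 7-7.5 GtC range, so an assumption):

              # If the report's 1990 starting point is already above the observed
              # series, a raw projected-vs-observed comparison inherits that offset.
              far_1990 = 7.25   # assumed midpoint of the 7-7.5 GtC in FAR Fig. 5
              gcp_1990 = 6.0    # GCP estimate for 1990 emissions, GtC
              print(f"baseline offset: {far_1990 / gcp_1990:.2f}x")   # ~1.21x
              # A fair comparison should re-baseline (or rescale) before asking
              # whether later emissions ran above or below "business as usual".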

            The original scenarios were later replaced with “RCPs”, and RCP 8.5 is the closest thing we have to a business-as-usual scenario. The graph here shows that current emissions were tracking RCP 8.5, and not the other three scenarios, as of 2014.

    • nameless1 says:

      Of course it is the perfect opposite of science. When you are writing a thesis, you are supposed to do science. But when you are conducting an internet debate, you are not supposed to do science. Cite science, maybe. But not DO science – that is, an internet debate is not research.

      Look, you’re going to write that thesis. From then on, when scientists debate the subject in a pub next to the university, they will cite your thesis. But before you published that thesis, what should they cite? The consensus, what else! You cannot expect them all to do thesis-level work for the sake of something as unimportant as a debate! The utility of you writing that thesis is precisely that scientists can now cite it in debates instead of the consensus. Your work is debate-fuel. I hope your stuff even gets into the textbooks. Would you expect them to do thesis-level work every time they want to voice an opinion? Or when they write a textbook? Come on.

      To be fair, consensus has become a four-letter word since the whole climate change thing. I know. But only because people use it wrongly. When people come up with actually evidenced disagreements with the alarmist opinion, way too many people just close their ears and say “the consensus accepts it, you cannot possibly have found evidence that all those experts have ignored”. Thus they use the consensus as a cudgel to beat down empiricism and skepticism based on the scientific method – as an appeal to authority. That is wrong. When the consensus is properly challenged, it has to be properly defended or discarded.

      But you want to not rely on the consensus even before it is properly challenged? That makes no sense to me, sorry. Do that thesis. Challenge the consensus properly. Then and only then can you expect them to update their opinions. Why should they do so before?

    • Thegnskald says:

      The issue isn’t consensus, or the lack thereof; the issue is when consensus is used as evidence, rather than recognized as a lossy proxy for evidence.

      I think what angers people about “consensus” is when it is used as an argument in a discussion which is otherwise about evidence. The base argument, when consensus is brought up, shifts from evidence to an implication that your opponents aren’t qualified to present evidence; it is, effectively, an ad hominem, lowering the effective level of the discussion.

      Discussion of whether or not consensus is important is missing the point.

      • arlie says:

        Yes

        • albatross11 says:

          When the consensus in some field becomes a tool in political/social battles, there is also a huge incentive for partisans to try to drive the consensus in the field in a desired direction.

          This is a variant of Goodhart’s law–as long as the consensus view among {climate modelers, macroeconomists, psychometricians, anthropologists} is just the result of a bunch of people arguing about what reality looks like, it’s useful for policy. Once that consensus view is seen as a soldier in a political fight, it quickly gets corrupted by political concerns.

          • Yes. An important point.

            Consider the process by which a professional association ends up making a public statement on some issue. It is likely to be driven by a small minority of the members who actually care, in one direction or the other, about the issue, and how hard they try to get their way depends in part on whether the statement will be useful in the associated political conflict.

      • Enkidum says:

        the issue is when consensus is used as evidence, rather than recognized as a lossy proxy for evidence.

        I mean, I think part of the point is that what you call “evidence” is itself a lossy proxy for evidence. Consensus is simply a more forest-level proxy, which has all sorts of specific weaknesses. In many fields of reasonable complexity, however, the more tree-level evidence is always questionable for a variety of reasons. Consensus, assuming you’ve got some way of measuring it and some reason to trust the consensus-havers, is evidence that the trees are growing in good soil, well-watered, not rotten, etc.

        And it’s all you’re ever going to be able to access about the vast majority of human knowledge.

  15. Toby Bartels says:

    I was unhappy with your examples of isolated demands for rigor, because I thought that two of them were very good arguments: quite brief, of course, but getting right to the heart of the matter.
    Then when I read your explanations as to why they were bad, I saw that I do apply those standards to everything. So yay for me, I guess?

    • Alsadius says:

      I suspect this blog selects very heavily for people who are unusually fond of taking a few firm principles and following them rigorously – most people have a broad cloud of principles they place different weights on, with few or none being absolute, and even the really strong ones probably haven’t been thought through in real depth.

    • crh says:

      +1. I was really confused to see, “Capital punishment is just state-sanctioned murder,” on that list. I don’t see that as a knock-down argument against capital punishment; I just see it as a (frankly kind of trivial) fact about the world. Same for imprisonment being state-sanctioned kidnapping — yeah, of course it is. What else would it be?

      • Alsadius says:

        The acts are similar, but there’s an important distinction here. Capital punishment is state-sanctioned killing, but “murder” generally refers to wrongful killing, not merely any killing. For example, putting down your terminally ill dog is killing, but most would agree it’s not murder.

        Everyone agrees that abortion, euthanasia, capital punishment, shooting someone in self-defence, and shooting someone on a battlefield are all killing, but most will say that some (or all) of them are not murder. And therein lies the distinction being drawn.

        • crh says:

          I would have defined “murder” as something like “deliberate killing”. Maybe throw in a “premeditated”, I’m not sure. Once you throw in “wrongful” as part of the definition my confusion evaporates.

          • Alsadius says:

            Ah, that makes sense – I’ve never come across “murder” being used in a sense that covers non-wrongful killing, but if that’s how you think of it, I understand.

            As with so many debates, this really just comes down to definitions of words.

          • Nick says:

            Is it common for people to think of murder as just deliberate and/or premeditated killing? I don’t see how that lines up with common usage, but I admit I haven’t taken a survey or anything.

            To press the point, since I definitely think “wrongful killing” is more reasonable here: it seems like euthanasia is deliberate, premeditated killing, so do you think it’s fair to call it murder? How about abortion?

          • Alsadius says:

            My own personal opinions are that euthanasia is killing with consent and that early-term abortions are killing of something which has not yet achieved personhood. Both are fine by me – they’re deliberate, of course, but not wrongful. As such, I wouldn’t call either one “murder”. Late-term abortions could plausibly be wrongful, but I don’t have a really strong opinion on that.

            However, I know many people who’d passionately disagree with me on both of those, and consider them both murder.

          • crh says:

            Is it common for people to think of murder as just deliberate and/or premeditated killing?

            Maybe it is, I don’t know. I’m not committed to the definition I offered, it’s just the one I had in mind when I made my first comment.

            It seems like euthanasia is deliberate, premeditated killing, so do you think it’s fair to call it murder? How about abortion?

            Prior to this conversation I would have said, “Yeah, duh, of course euthanasia is murder.” I say this as someone who is pro-euthanasia. Abortion is more complicated because whether it even counts as killing or not depends on whether you think of the unborn as “alive”.

          • Sigivald says:

            Isn’t the entire point of the words “murder” vs. “killing” – as words that are not the same word – that the former is a killing deemed immoral or wrong or unlawful (depending on context)?

            (Thus all the above clears up, in that “we disagree whether killing X is a murder” mostly or entirely depends on “whether we think killing X is wrong or just fine”, for any reasons we happen to have.)

          • crh says:

            the former is a killing deemed immoral or wrong or unlawful (depending on context)?

            I checked the OED and those are the two senses it gives (plus a bunch of others that are irrelevant to the present conversation). I happily concede that my sense of what “murder” meant was weird/nonstandard/wrong.

          • Nick says:

            Ah, if we all concede it’s nonstandard I’m happy too. 🙂

        • Nicholas Conrad says:

          I guess I fail to see why ‘lawful’ negates ‘wrongful’. In fact, we often hear that, to take an obscure example: “Hitler murdered 6 million Jews.” The fact that it was legal under German law at the time seems not to morally redeem these killings (nor should it). Then why so with other state-sanctioned murders?

          If I write a list of ‘bad’ actions in my diary, it doesn’t change the moral bearing of those actions. If I hired a gang of thugs to beat up people who did the ‘bad’ things I wrote, it still wouldn’t change the morality of those things, and it wouldn’t change the immorality of me and my thugs beating people up. Getting over 100 mostly old white guys to write in a diary together and giving their thugs shiny badges doesn’t change the immorality of any of their actions.

          • crh says:

            I think the point is not, “Capital punishment is legal, and therefore not wrongful, and therefore cannot be called murder.” Rather the point is that, “Capital punishment is murder, and therefore is wrong,” is not much of an argument, since if you rephrase it as, “Capital punishment is wrongful killing, and therefore is wrong,” it’s clear that it begs the question.

        • nameless1 says:

          Murder is defined as unlawful, unjustifiable killing with malice aforethought. Not as “wrongful”. But perhaps you can replace “unjustifiable” with “wrongful”. Self-defense, battlefield killing, etc. – i.e. non-wrongful killings – you can also call justifiable. So let’s say that refers to this element.

          Now of course the unlawful element cannot apply to capital punishment.

          I think people who say capital punishment is murder think that 1) malice aforethought is a very important element of murder and 2) capital punishment involves malice aforethought. The argument is that if someone murders your mother, and you kill him in revenge, not in defense of her, a year after the act, that counts as malice aforethought and thus murder. And the state does the same exact thing.

          I think this reasoning is faulty because EVERY punishment the state may mete out is generally something private people are not allowed to do themselves as vengeance. Thus if I am not allowed to lock the killer of my mother in my basement for 15 years then the state may also not lock up the murderer. So at the end of the day no punishment would be possible: prisons are just state-sanctioned kidnapping, fines are just state-sanctioned theft and so on.

          • whateverthisistupd says:

            ugh, I didn’t mean to report your comment, accidental click.

            “So at the end of the day no punishment would be possible: prisons are just state-sanctioned kidnapping, fines are just state-sanctioned theft and so on.”

            There are absolutely people who believe this and advocate it on a consistent basis.

          • I think this reasoning is faulty because EVERY punishment the state may mete out is generally something private people are not allowed to do themselves as vengeance.

            The usual A-C (anarcho-capitalist) position is that private individuals are entitled to punish, provided they take adequate precautions to impose only appropriate levels of punishment and only on the guilty. The usual expectation is that there will be middlemen, rights enforcement agencies, doing the punishing, but that they will have no rights other people don’t have, just division-of-labor advantages for exercising those rights.

      • Simon_Jester says:

        ‘Murder’ is one of those words that’s good at triggering noncentral example fallacies. The central example of killing in our comparatively civilized and peaceful societies is, well, a murder. Consequently, killing in general is easily identified with murder as part of the connotation, and for people who aren’t thinking carefully this looks like a very strong argument, made possible by a sort of denotation/connotation two-step.

        If you are thinking carefully, you don’t see it as a strong argument and may not grasp the appeal or rhetorical effect… but then, a lot of people aren’t motivated to think carefully about this issue.

        • whateverthisistupd says:

          Right. I think the point of that is to make people think about how they have a different view of killing based on a narrative about why the person is doing it.

          Obviously “murder” is defined a different way, but the point is to draw a moral equivalence.

      • gbdub says:

        It’s a bit weird that Scott uses it as an example of “isolated demand for rigor” since it’s often cited as a central example (heh) of the noncentral fallacy.

      • helloo says:

        Felt this is more of a “proves too much” than an isolated demand for rigor.

        Most people who argue against capital punishment are not against imprisonment by the government. However, they are similarly against murder and kidnapping.

        So while the argument that you should oppose capital punishment could also apply to imprisonment, they do not seem to be acting on that.

        I think a better summary of this spot would be “the argument is justified by the conclusion rather than vice versa”. In which case, the people using those arguments aren’t trying to debate, but rather to justify their beliefs to others.

        • Simon_Jester says:

          Every isolated demand for rigor contains an argument that, if generalized, would prove either too much or, at any rate, more than is intended by the person advancing the argument.

          Thus, I would say that the two categories of bad argument are related. Most arguments that prove too much are being deployed in the same role that an isolated demand for rigor would be deployed, and in many cases one can be converted into the other easily.

  16. HaraldN says:

    This post crystallized a lot of things I’d been thinking about for a while.

    Don’t have anything constructive to add, just wanted to say thanks. I’ve read through most of the archives on this blog over the last few months and it’s been a blast, and made me consider spending money on charity.

  17. Ohforfs says:

    The fifth is wrong because we ban people from selling their organs, accepting unlicensed medical treatments, using illegal drugs, engaging in prostitution, accepting euthanasia, and countless other things that involve telling them what to do with their bodies; “everyone has a right to do what they want with their own bodies” is a fake rule we never apply to anything else.

    Hm. AFAIK, none of those are punished on the buyer side, but on the seller side (selling organs, euthanizing someone, etc.). Prostitution and drugs are more complicated cases: prostitution is actually punished on the seller side – although in many countries it is not punished while facilitating it is – and drug selling is often punished while drug use is not. So I don’t think these are the best examples, perhaps.

    • Toby Bartels says:

      But don’t we also punish abortion primarily on the seller side?

      • John Schilling says:

        We punish everything on the seller’s side, if there’s a seller around to punish. Gun control, to take this post’s recurring theme, is 90% the BATF regulating licensed dealers and shutting down unlicensed ones, and 10% police trying to take guns away from people who shouldn’t legally have them.

        We do this, first, because there are fewer sellers to punish and their need to market their product or service makes them easier to find. Second, because the seller’s explicitly mercenary motive makes them less sympathetic. And third, because driving the seller underground and into explicit black-marketeering usually accomplishes most of the material goals and all of the political goals.

        This is orthogonal to whether we sincerely believe a thing to be wrong, or to whether it actually is wrong by some objective standard.

        • Ohforfs says:

          I didn’t mean punished in practice, I meant in the law. At least here, both prostitution and abortion are legal to do – but assisting in prostitution and performing an abortion on someone are illegal.

          • John Schilling says:

            Here in the United States, prostitution is illegal to do, and abortion used to be.

            I don’t think there’s a terribly meaningful distinction in the fact that, e.g., in some jurisdictions every cop and DA knows the unwritten rule that you only really go after the pimps, while other jurisdictions had to write that down because someone wasn’t getting the message.

        • Lambert says:

          I suspect this is just because there are more buyers than sellers.

          Why go after 20,000 drug users when you can do the same by stopping 200 dealers?

          • John Schilling says:

            If there are still 20,000 drug users, you’ll get another 200 dealers soon enough.

            Though if you’re a well-paid professional drug warrior, that may be a feature rather than a bug. Doubly so if you’re allowed to drive around in the fastest or fanciest car owned by any of the dealers you arrest.

    • Anonymous says:

      Prostitution is fully legal here and, what’s somewhat curious, unregulated. Due to some quirks of the law – among others, the outlawing of brothels and pimping, i.e. making profits from someone else’s prostitution, which also applies to taxation of prostitution (to get around the problem of the state acting as pimp) – it is not legally enforceable to demand payment for sex or for someone else’s sex acts. Which means that if you are a prostitute, you won’t go to jail for it, but your revenues are quasi-illegal – since you may not demand payment for sex – and may be subject to penalty tax rates.

  18. av says:

    Hey, having skimmed this, and in particular the last paragraphs, I thought you might be interested in https://onlinelibrary.wiley.com/doi/abs/10.3982/TE436

  19. Freddie deBoer says:

    I never would have thought that self-identified leftists would reject the dialectic, but that’s the culture now.

    • Aapje says:

      It’s pretty bad when even Marxists like you are not authoritarian enough 🙂

      /joke

      • carvenvisage says:

        isn’t the point of this kind of (somewhat insulting) joke that it’s ‘countersignalling’, i.e. that it works best between war buddies and worst between acquaintances? (And worst on identity/sensitive subjects?)

        Maybe I’m missing something, but this seems like a genuine oversight of said principle.

        • Aapje says:

          My joke points out the incongruity between the popular image of Marxists/communists as authoritarian and the most extreme kind of leftists, and the sight of a Marxist asking for more civility among the left.

          By a non-leftist, it can be read agreeably as ‘the left sucks,’ but it can also be seen as criticism of a simplistic categorization of Marxists as the most authoritarian and extreme kind of leftists (and in general, of equating support for radical beliefs with support for radical means).

          By leftists, it can be read agreeably as ‘the other left sucks’ or as an insult: ‘the left sucks.’ When read agreeably, it can create a bond of mutual dislike among those on the left and right who both reject a certain part of the left, creating a bond on the meta-level (‘we may disagree on many things, but we agree on the social norms’). When read in this last way, it is countersignalling (‘you are my opponent, but you are still part of one of my ingroups, in a meaningful way’).

          So I don’t think that my joke has one point. It was a Rorschach joke.

    • Sniffnoy says:

      I’d be interested to hear you expand on this point.

    • benwave says:

      Wouldn’t you though? It’s not as if the papers we distribute at marches run headlines like “Dialectics 101!” or “Unaffordable housing fault of contradiction between use values and exchange values!”

      It took me a pretty long time hanging out with socialists before I first encountered the idea.

    • zzzzort says:

      They’ve moved on to the anti-dialectic. Still waiting for a good synthesis though…

  20. Patrick Cruce says:

    I was thinking about the scandal with Tyler Cowen and the Mercatus Center, and it occurred to me that if there is strong evidence that someone is arguing in bad faith, then it is legitimate to make arguments that put them outside the Overton window. A scientific study is a complicated enterprise where small judgements can affect the outcome. I don’t always have the time or expertise to investigate in detail. If I can’t trust that the work going into a study was done ethically, then it is probably a better use of my time to write it off as a suspect source of evidence.

    I would like to term this the Shaggy Exception to the recommendation against social shaming, because if I’m “caught red handed creepin’ with the girl next door”, you shouldn’t trust any claims that “It wasn’t me” even if you miss the opportunity to roll 1D4 damage to my priors.

    • Alsadius says:

      > the scandal with Tyler Cowen and the Mercatus Center

      Can you elaborate? I can’t find anything that seems to be about this on Google. The closest I can find is that Mercatus took money from the Kochs, but that’s hardly a scandal.

      • Patrick Cruce says:

        https://www.nakedcapitalism.com/2018/05/tyler-cowen-koch-brothers-funiding-mercatus-center-george-mason-university-academic-freedom.html

        It was in WaPo and the NYT, too. But pretty fresh, so maybe not indexed by Google yet.

        tl;dr: That they took money was not the scandal. The scandal is that the money came with contractual strings allowing the donors to select personnel and review academic product, and that the existence of those strings was then lied about.

        • Alsadius says:

          So basically, the donors got a minority role on the board that selected how their gift would be spent? That’s not much of a string – they don’t have control over the funds, and if the university wants something then they’ll get it regardless of what the Koch reps say.

          I actually did see that post in my search, and it was what I was referring to. But it’s such a weak scandal that I thought there had to be more to it.

          • Anonymous Bosch says:

            So basically, the donors got a minority role on the board that selected how their gift would be spent? That’s not much of a string – they don’t have control over the funds, and if the university wants something then they’ll get it regardless of what the Koch reps say.

            I’m sure the majority members of the board felt completely free to ignore the wishes of their donors.

          • Alsadius says:

            I doubt the issue came up at all, tbh. Mercatus, GMU, and the Kochs are all pretty much believers in the same things, which makes most of this (heh) academic.

          • Patrick Cruce says:

            So you’d be OK with selling Warren Buffett 4 seats on the Supreme Court?

            It is a minority, right? I’m sure it wouldn’t change anything. If your answer is no, please explain how that is different.

            Sarcasm aside, it is completely legal for a private institution to be employed as a public relations firm, but we can no longer classify their output as academic, and so it would be substantially degraded in the taxonomy of arguments that was outlined in the OP. This is for the same reason that a survey of research is considered a good argument, whereas a survey of press releases is not. (unless you’re an academic studying press releases)

          • I’m sure the majority members of the board felt completely free to ignore the wishes of their donors.

            Probably not. But they wouldn’t have felt free to do so even if the donors had no representatives on the committee.

            Universities like being given money. One way of making it more likely that someone will give you more money is to use the money he already gave you in ways you think he will approve of.

        • A Definite Beta Guy says:

          I… don’t see the issue. Isn’t the Mercatus Center a think tank run out of a university? A private think tank is setting up rules governing admission to its private club, and has additional review to determine whether a professor is meeting obligations under the rules of said private club.

          It sounds, additionally, like the rules in place allow the donors to select a board member from the existing faculty at the university?

          Given that this person is admittedly not an expert in this… uhhh… why should I trust them? Especially when they throw in random rheto-bombs like “UnKoching” and “god bless those teens fighting against irrational gun policy!”

          Seems GMU itself has a problem with this relationship, so it looks like a norm violation just on that front, but this seems a long way from the Koch brothers firing GMU professors or controlling curriculum. It looks like there are a lot of firewalls in place.

    • gbdub says:

      ” If I can’t trust that the work going into a study was done ethically, then it is probably a better use of my time to write it off as a suspicious source of evidence.”

      Fair enough, but if you only apply that filter when the Kochs are involved, that’s an isolated demand for rigor.

      • j1000000 says:

        Yeah. The “file-drawer effect” section from Scott’s Reactionary FAQ is evergreen.

      • Patrick Cruce says:

        I’d agree, but if we can’t assume good faith from ostensibly academic institutions then we can’t rely on many common sources of evidence.

        I brought this up not because of the politics but because it happened rather recently (so it was on my mind) and struck me as a sufficiently egregious violation of academic norms that it demonstrated my point.

        • gbdub says:

          But can we assume a good-faith defense of academic freedom from a group called “UnKoch My Campus”, and from an author who makes remarks like

          We all know this goes on, of course. Does anyone think that these extreme libertarian and right-wing ideas are so widespread because they’re actually better ideas? No, I don’t think so.

          This is not a defense of strings-attached scientific funding. Rather, it’s a strong suspicion that an awful lot of scientific study starts from similarly questionable beginnings, but doesn’t get noticed because it advances causes more in line with the beliefs of the article author.

          Or more bluntly, I’m much more confident that the misdeeds of the Kochs will get called out than similar behavior from less politically controversial (among the academy) sources.

          I’ll also note that it’s never made clear how exactly this involvement of the Federalist Society hurt the truth-seeking ability of academics – just a general assumption that anything that gets clerkships for Federalists, or (oh, the horror!) gets faculty members positions in the Trump administration, is ipso facto terrible.

          EDIT: And besides all of that, law (or even economics) faculty aren’t exactly where I look for “unbiased questing for scientific truth” – I kind of expect them all to be advocates for something, and by all accounts the legal profession is pretty incestuous anyway. I don’t think the biggest problem in legal academia is that too many libertarians / conservatives can find employment. I’d of course prefer them to be upfront about whatever they are doing, and if GMU is serious about thinking this was wrong then I’ll take their word for it.

          • Patrick Cruce says:

            I’m not really interested in defending that author’s personal biases. That article is mainly useful because you can find links to the NYT and WaPo articles in it.

            Nor am I interested in defending the biases of unkoch my campus; an activist group which makes no claim to being an academic institution.

            But I will strongly disagree with the presumption that an “awful lot” of scientific research starts from similarly questionable beginnings. In my experience, an awful lot of academic research comes from people who are singularly passionate about finding new truths. Obviously, it is hard to prove this statement empirically, because it is rare to see a case like this one where a conflict of interest is specified so clearly in a contract. But I think you would be making a mistake to generalize from this one example at George Mason to the belief that this sort of conflict of interest is the norm rather than the exception.

            On the contrary, I would argue that it is the exceptional nature of this case which justifies shaming. Going back to my point, that sometimes the bottom level is appropriate; when the counterparty to the debate is provably acting in bad faith.

    • Nicholas Conrad says:

      I’ve never understood why the ‘money influences science’ argument is only applied unidirectionally. The government is by far the largest funder of science, yet research that consistently aligns with the stated policy objectives of the relevant government body (EPA, USDA, etc.) is never questioned on the basis of its funding.

      • Patrick Cruce says:

        Interestingly enough, the federal government doesn’t actually demand seats on Harvard’s board of regents, nor does it try to hide its efforts to fund research. If the Koch brothers had created the Koch Institute of Economic Research, their behavior would have been beyond reproach.

        • johnWH says:

          Well: https://www.charleskochinstitute.org/

          Also, do you think the Koch brothers are influencing GMU economists to be more libertarian, or are they giving money to a department that’s already very libertarian?

          • Patrick Cruce says:

            I don’t see any contradiction in approving of the Charles Koch Institute while condemning George Mason University.

            FWIW, organizations that have apparent founding biases can do superlative work. The Christian Science Monitor is one example that comes to mind.

            But it is important for such institutions to be open about these biases to allay suspicion. Hiding and lying are breaches of trust that merit deep concern about the reliability of institutional work product.

            As for George Mason. It was a commuter university before the Koch endowment. It isn’t like it was the University of Chicago. And the releases seem to indicate the stipulations of the endowment went beyond soft influence and were formally required.

          • johnWH says:

            But it is important for such institutions to be open about these biases to allay suspicion. Hiding and lying are breaches of trust that merit deep concern about the reliability of institutional work product.

            I agree to an extent, but I get the sense that you’re trying to take the next step and say we can dismiss all research that comes out of GMU. I don’t think that’s warranted. Many GMU scholars publish in mainstream journals. I don’t think the peer review process is perfect, but I think that is some signal of quality.

            And if you know that a particular book or research paper was produced by a GMU scholar, I can understand being more suspicious, but shouldn’t that suspicion encourage you to scrutinize the piece more thoroughly rather than dismiss it out of hand?

          • Patrick Cruce says:

            I’m not saying that everything produced by the university is necessarily incorrect, and I think it unfortunate that some genuinely well-meaning scholars will be tarnished by the reputational damage done to the university.

            But I also think that truly deep review is rare. Reliable peer review requires trust. Raw data or source code is often not made available, and even when it is, the reviewers don’t have the time or resources to inspect it closely. Review often takes the form of a dialog between the reviewer and the author, with an expectation of accurate answers.

            So for my part, if a study is published by a GMU scholar, I don’t think there is any harm in not trusting its veracity until it has been corroborated elsewhere.

          • albatross11 says:

            Patrick:

            You said:
            But I also think that truly deep review is rare. Reliable peer review requires trust. Raw data or source code is often not made available, and even when it is, the reviewers don’t have the time or resources to inspect it closely. Review often takes the form of a dialog between the reviewer and the author, with an expectation of accurate answers.

            As I understand it, the story is that the Koch brothers had a say in the hiring and promotion of professors at GMU. There’s no implication that GMU researchers are engaging in some kind of fraud. But that’s what would be required for your statement above to make any sense.

          • DavidFriedman says:

            So for my part, if a study is published by a GMU scholar, I don’t think there is any harm in not trusting its veracity until it has been corroborated elsewhere.

            Remove the words “by a GMU scholar” and the statement is still true. Academics, like other people, have lots of incentives other than the desire to learn and publish the truth; hence, unless you have strong reasons to trust a particular scholar, you should never trust the veracity of published work that has not been confirmed.

          • Patrick Cruce says:

            @albatross
            They had a say in hiring and firing professors. The firing part is pretty important.

            @david
            You’ve stated the existence of a prior probability of academic malfeasance. I agree it exists. I don’t see how the existence of the prior would suggest that I shouldn’t condition my posterior probability on evidence.
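
            To make that concrete, here is a toy numerical sketch of that kind of updating; every number below is invented for illustration and comes from nowhere in this thread:

            ```python
            # Toy illustration of conditioning a prior on evidence (Bayes' rule).
            # All numbers are made up; nothing here is estimated from real data.

            def posterior(prior, p_e_given_h, p_e_given_not_h):
                """P(H | E) = P(E | H) * P(H) / P(E), with P(E) from total probability."""
                p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
                return p_e_given_h * prior / p_e

            # H = "this study is unreliable"
            # E = "it comes from an institution caught hiding donor influence"
            prior = 0.10  # hypothetical base rate of unreliable studies
            p = posterior(prior, p_e_given_h=0.5, p_e_given_not_h=0.1)
            print(f"P(unreliable | evidence) = {p:.2f}")  # ~0.36, up from 0.10
            ```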

          • albatross11 says:

            Patrick:

            Yes. Basically, it’s putting GMU researchers in the same position as researchers working at a private company’s research labs, or at a think tank. I think this is a bad idea for a university, and I’m glad this came to light. But I don’t think this should make me a lot more skeptical of GMU researchers’ publications than of publications from other universities.

            What should we expect the consequences of this to be? As far as I can tell, we should expect:

            a. Some lines of research won’t be pursued by GMU researchers, if they know it’s likely to annoy the donors enough to get them fired.

            b. It’s possible that researchers will drop some line of research or decide not to publish it, because of fears that this will annoy the donors enough to get them fired.

            Neither of these seem to me to undermine peer-review, or make the research results published by GMU unreliable. Instead, they tell us that there are some areas where we can’t expect GMU researchers to do any research, and maybe even some findings they would choose not to publish.

            Note that this is *exactly the same* situation we often see among researchers in politically/socially contentious topics–researchers will often explicitly decide not to work in some area (race and IQ, say) because they don’t want political hassle or pushback from their colleagues or (worse) their tenure committee. I assume researchers sometimes abandon lines of research when they’re politically too hot, too, but I don’t know where we’d get a lot of examples.

          • Patrick Cruce says:

            They also lied about it, which is very different from a company like Google, where the researchers don’t try to hide their affiliations.

            It suggests an institutional lapse in ethics, which is a big problem in a discipline that relies on trust.

        • Nicholas Conrad says:

          Wait, so your contention is not that funding per se conveys influence, only that secret funding does? I don’t see how this distinction alters the incentives. Also, we can see from recent Dear Colleague letters, to take just one instance, that the feds can exert incredible control over universities’ internal policies by threatening to withhold funding, without needing to place plants on those schools’ boards.

  21. Urstoff says:

    Got chas? A few, yes.

  22. Nicholas Weininger says:

    ‘“it is exactly as wrong for the state to do something as for a random criminal to do it” is a fake rule we never apply to anything else.’

    ‘“everyone has a right to do what they want with their own bodies” is a fake rule we never apply to anything else.’

    In my experience plenty of anarchists, rightly or wrongly, do apply these rules consistently to everything. Did you mean to exclude those people from “we”?

    Also, where in this hierarchy do you put disagreements over what counts as evidence? For instance, suppose I argue that the minimum wage probably does cost jobs by pointing out that for most goods setting a price floor decreases the quantity demanded, that almost everyone agrees this is so, and that if you want to claim labor is different you bear a heavy burden of both proof and explanation; and someone else disagrees and says we should assume labor is special and look only at studies specifically about labor rules.

    • Simon_Jester says:

      ‘“it is exactly as wrong for the state to do something as for a random criminal to do it” is a fake rule we never apply to anything else.’

      ‘“everyone has a right to do what they want with their own bodies” is a fake rule we never apply to anything else.’

      In my experience plenty of anarchists, rightly or wrongly, do apply these rules consistently to everything. Did you mean to exclude those people from “we”?

      Most of the people advancing those arguments (against capital punishment and abortion, respectively) are not anarchists.

      An anarchist who really truly believed in absolute rights to bodily autonomy, and in the state never doing anything it would be illegal for a private citizen to do with an equal amount of coercion involved, could say those things without making an isolated demand for rigor. However, such an anarchist is obviously a counter-case devised purely to invalidate this example of a valid general point: namely, that isolated demands for rigor are extremely common in political debate, so common that many SSC readers* will be able to think of quite a number of times when they have advanced one in politics.

      *(As in, exactly the kind of person who normally thinks isolated demands for rigor are wrong)

      Like, a LOT of people will say “it’s wrong for the state to kill someone because murder is wrong,” who would never say “it’s wrong for the state to take your money because theft is wrong” or “it’s wrong for the state to put someone in jail because kidnapping is wrong.”

      A slim minority of those people will say the first because they would also say the others, but let’s not ignore the insight about the majority just because there exists a minority to which the insight does not apply.

      [Tries manfully to resist the urge to get sidetracked to talk about burdens of proof and labor being different in certain important ways from other commodities]

  23. helloo says:

    This isn’t exactly on topic, but is quite similar to something that I’ve been thinking about.

    Are there standard terms for these two types of arguments/responses?

    Type A: Refutes the statement (or part of the central point) directly.
    E.g.: shows a math error, debates one of the conclusions, calls out the use of data from a retracted study, argues about definitions.
    Type B: Undermines the statement based on outside information or scenarios.
    E.g.: “But the other option is worse!”, the author is an Xist, the data is backed by biased sources, trying to prevent such a thing is unlikely to succeed and a waste of money, ad hominem.

    I’m currently calling them direct and indirect for now, but those terms are probably already taken, so I’m open to suggestions if names for them don’t already exist.

    Neither is necessarily a “bad” or weak argument (or rather, both can be done badly).
    However, I wonder if it might be a good idea to differentiate between the two and possibly lead the discussion so that one or the other is more prominent.
    For instance, take these two statements:
    “Capitalism promotes excessive competition, making it a virtue, and I’d rather live in a less competitive world even if it is somewhat less effective/advanced.”
    “Capitalist society’s winners tend to be those who are greedy, power-hungry, and immoral. We do not need role models such as these.”
    I tried to give them uncommon complaints (thus no canned responses) about roughly the same topic that try to focus arguments one way or the other. Are there more definite ways to “shape” an argument that also shape the responses? Should we be careful of them, or encourage them?

    • Lambert says:

      Not sure there are standard terms, but if the argument is:
      If P, then Q
      P
      Therefore Q

      then Type A is a refutation of P,
      and Type B is a refutation of “If P, then Q”.
      Sort of.

      • crh says:

        The sound vs. valid distinction seems relevant here. An argument is valid if the conclusion follows logically from the premises. An argument is sound if it is valid and, in addition, all the premises are true.

        In your example, Type A is accusing the argument of being unsound. Type B is accusing the argument of being invalid.
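
        To make the distinction concrete, here is a minimal sketch in Lean 4 (the theorem name is mine; and note that if the conditional is itself listed as a premise, denying it is strictly also a soundness challenge, so the “invalidity” reading treats the inference step as implicit):

        ```lean
        -- Validity is a matter of form alone: modus ponens goes through
        -- for any propositions P and Q, whatever their truth values.
        theorem modusPonens (P Q : Prop) (h : P → Q) (hp : P) : Q := h hp

        -- Soundness = validity plus true premises. A Type A objection
        -- denies that the premise `hp : P` can actually be supplied; a
        -- Type B objection denies the inferential step `h : P → Q`.
        ```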

  24. skybrian says:

    There is a basic assumption that getting into arguments online is a useful thing to do. I think that’s actually not a good assumption for most comment sections. People are busy and mostly unwilling to do research as “homework” for a stranger on the Internet. A symptom of this is the common meta-debate about who has the “burden of proof”.

    Once we acknowledge that, as a matter of politeness, nobody is required to debate us, what can we expect from an online discussion? I like to encourage two things: telling stories and sharing links. I feel like if people are telling interesting stories and sharing interesting links to read, the discussion is going well, even if these stories and links are unlikely to resolve the argument, or are only tangentially related to it.

    Often stories and links are shared in support of some argument, and that’s fine. But I’d judge them by whether they’re interesting in themselves and not by how well they support the argument.

    This is also a good way to make essays more worthwhile to read. Even if the main argument fails to convince, if the examples are really good, I’ll feel like it’s worthwhile.

    (Apologies in advance for not telling any stories or sharing any links. I’m usually better about links, at least.)

    • Enkidum says:

      I think most of us would agree that the proportion of arguments that consistently rise above both dotted lines is vanishingly small. But I’ve experienced them on rare occasions, I think, as have (I hope) others. This post is simply trying to increase their frequency by a tiny amount (as is the adversarial collaboration thing Scott has been pushing in previous posts), which I think is a noble goal.

      That being said, I like stories and links too. But I don’t have any relevant ones.

    • Nick says:

      Of course it’s not a good assumption for most comment sections—Scott alluded to as much when he said Twitter is great for shaming. But I think (or hope, for our own sanity) that folks here are selective enough that the places we argue at are more worthwhile. I like sharing interesting links as much as the next guy, but I’d be bummed if that’s all we did around here.

      • skybrian says:

        Sure, but they’re not entirely opposed, since you can also share links to interesting arguments.

        Maybe another way to think about it is that there are low-effort and high-effort discussions, and we should be clear about what we’re doing and what to expect. High-effort discussions are great, but they’re also better presented as actual blog posts and reshared via links. (If you’re going to do the work, it’s worth a bit more trouble to make it stand out and share with more people.)

        The highest-effort discussion consists of actual scientific research. Typically only a few people are willing and able to participate at that level for any given subject, but the rest of us can follow along.

        We could also distinguish between low-effort theorizing (what we’re doing) and low-effort sharing of evidence. (Telling stories, taking pictures, and so on.)

        Based on my casual reading of Less Wrong, I feel like the rationalist community is strong on theorizing (at various levels of effort) and you couldn’t stop it if you tried. We’re weak on storytelling and other forms of evidence-sharing, so that’s where I will lazily encourage more effort.

    • Wrong Species says:

      I completely disagree. Internet arguments do have higher variance than IRL arguments. That means they are often worse, but it also means they are often better. When you are talking to someone IRL, you usually end up making claims that are too weak, because you are going to see this person and interact with them outside this conversation. That makes people far too reluctant to say the things they want to say. On the internet you obviously don’t have this worry; you feel much more free to speak your mind. Not only that, but you can’t link things IRL either, which means you are both just disagreeing on the facts without a way to resolve the dispute in the span of that argument. IRL arguments do have their advantages, like being faster and more fluid, but that’s not always a good thing. And the internet platform does determine the limits of debating. But aside from a few small exceptions, I would much rather have an argument here on SSC than with any person I know.

      • skybrian says:

        Yes there’s good stuff. That’s why we’re here!

        But I’m not sure that encouraging people to “speak your mind” is responsible for it. I associate that with making discussions more emotional, which tends to make them worse in the classic flamefest way.

        Instead, I associate good discussions with politeness, civility, and a bit of distancing as a community norm. It’s easier to talk about serious things as something that happened (a story) rather than an active emergency (stuff happening to you right now).

        Even if you’re reading about the Holocaust in the library, you’re still in the library. Good community spaces are more like the library.

      • nameless1 says:

        Yes, verbal debates are the absolute worst. Even when you see experts do it on TV, they are not going to google the relevant research they half-remember mid-debate, so their claims will be poorly backed up, etc.

        But there are also debates better than internet ones: debates that happen in journals, or in books. Seriously, reading a book and writing another in response is usually better than whatever you could do in this comment box.

        • Wrong Species says:

          The problem with book and journal debates is that the participants spend a lot of time talking past each other. It’s not really a dialogue so much as different people presenting their own viewpoints. On the internet, people call you out if you don’t really answer their question.

          They have their place. They’re definitely better when it comes to gathering and presenting data. But I think an actual dialogue is the best way to get to the root of a disagreement.

    • Mark V Anderson says:

      I like to encourage two things: telling stories and sharing links. I feel like if people are telling interesting stories and sharing interesting links to read, the discussion is going well, even if these stories and links are unlikely to resolve the argument, or are only tangentially related to it.

      I’m a little confused by this, because I think links and stories often get in the way of discussion. I think we’ve all seen the person who gives us 10 links and says “see, here is your proof!” And then when one reads the links, they are all irrelevant or pure polemic. Links are often how the lazy debater operates.

      And I think we’ve also seen the person who goes on and on with personal stories and segues about various things. I’m always saying in my head: “Get to the point.”

      Maybe I’m confused about what you are talking about, but links and stories to me usually signal a debate gone awry.

  25. Neutrino says:

    Fascinating article and comments; a pleasure to read and learn from. One idea that sprang to mind was a way of defining an argument or discussion market or space. Bear with me as I lay out some thoughts.

    Have discussants provide the following (a rough code sketch follows the list):
    Category/ies of position, e.g., on Scott’s pyramid or sphinx.
    Initial assessment of strength of conviction, analogous to a percentage of epistemic certainty.
    A points-and-authorities section, as some measure of a good-faith survey component.
    Double-crux or similar cues.
    Some post-discussion assessment of how well the initial assessment held up.
    A double-blind or similar anonymizing tool for discussant administration.
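
    A rough sketch of what one such record might look like in code; every name here is my own invention, not an existing tool:

    ```python
    # Hypothetical record for the proposal above; all field names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Position:
        level: str                  # where on Scott's pyramid/sphinx the claim sits
        conviction: float           # initial epistemic certainty, 0.0 to 1.0
        authorities: list[str] = field(default_factory=list)  # good-faith survey
        double_crux: str = ""       # what evidence would change your mind

    @dataclass
    class PostDiscussionAssessment:
        initial: Position
        final_conviction: float     # re-assessed once the discussion ends

        def shift(self) -> float:
            """How far the discussant actually moved from their opening stance."""
            return self.final_conviction - self.initial.conviction
    ```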

  26. Tim van Beek says:

    “Erisology” seems to encompass every discipline that humans have made up in order to make sense of the world and talk about it. If the motivation behind this idea (and/or this blog post) is “I am disappointed in how a few comment threads turned out”, it would be a huge overreaction and not very productive*.

    Scott, was this the trigger of this blog post? (If not, skip the rest.)

    We are all participants in a global experiment about, among other things, how humans react, individually and as groups, if they have the ability to interact with people they don’t know from different cultural contexts using text based asynchronous communication channels.

    It is not a surprise that this can be really frustrating.

    But there are netiquette and moderation policies that seem to improve the overall experience, both on social media and on news forums, and they are surprisingly simple, like “formulate clear rules”, “delete offending posts fast, be consistent, explain the deletion”, and “offer a meta discussion about policies”.

    *Just as an example: the discussion about “what is a fact?”, including the meta-question “how does this question make any sense, or doesn’t it?”, has been ongoing in epistemology for thousands of years, without any hope of convergence of viewpoints. You can earn a PhD in philosophy just by reconstructing and comparing some aspects of the viewpoints of Kant and Derrida.

  27. RC-cola-and-a-moon-pie says:

    Scott says about the meta-issues denoted by the sphinx: “I’ve placed it in a sphinx outside the pyramid to emphasize that it’s not a bad argument for the thing, it’s just an argument about something completely different.”

    This is what bothered me so much about his recent article expressing sympathy with what he called “conflict theory.” The “conflict” arguments all belong in the sphinx, I believe. And that illustrates why, to my mind, “conflict theory” is not simply a different way to approach evaluating a policy issue; it is addressing a different set of issues entirely from those dealt with by what Scott calls “mistake theory.” Mistake theorists are discussing the issue at hand while conflict theorists are engaged in a discussion of something else, and right or wrong, that conversation cannot even in principle resolve the merits issue being discussed by the so-called “mistake theorist” (i.e., the person talking about the subject at issue).

  28. “then you win”

    As Socrates pointed out, only the person who changes their mind wins anything, while the other guy obviously gets nothing out of it.

    • gbdub says:

      You at least get a potential ally, adherent, or admirer. Or did Socrates get nothing out of teaching a bunch of people things?

    • Randy M says:

      Unless there’s some real world effect of holding the belief, which may impact something the argument “winner” cares about (which might very well be the other person).

      Also, he gets to update his certainty in that belief, as it has survived a challenge and received validation. This may allow him to act more appropriately in the future.

    • christhenottopher says:

      Socrates was a guy so hated by the populace of Athens that they decided to put him to death for the crime of basically being a public nuisance. He may have a point about the amount of learning happening, but in terms of social relations I’m not sure he’s a particularly good go-to source of insight.

  29. Enkidum says:

    I was going to argue… but then I realized that yeah, you’ve got a real point there.

    I think maybe we could say that the system given here is a hierarchy of arguments for the sake of truth-finding. So it’s a given that this is what you’re trying to do, at least in part.

    However it’s also true that this isn’t what most people are trying to do, most of the time, even when they think they are.

    Perhaps one practical role this could have for society at large would simply be to suggest that people occasionally consider whether they actually care about the truth of what someone is saying, and maybe not frame things as an argument if they don’t. But perhaps that’s naively optimistic.

  30. ragnarrahl says:

    “‘It is exactly as wrong for the state to do something as for a random criminal to do it’ is a fake rule we never apply to anything else.”
    That doesn’t seem accurate. “Random criminal” is not how that rule would be formulated; it’s “the same moral rules that apply to you and me apply to the government.” Pretty much the entire Enlightenment was a bunch of philosophers proposing that rule as an alternative to privilege in the classical sense (the privilege of clergy, nobility, and so forth), applying it to a bunch of things, and seeing what stuck.

    Modern libertarians are pretty much defined by attempting to apply that totally-not-fake rule to literally everything.

    Granted, to turn this into “capital punishment is wrong” you need more than that rule: you need a rule that says the same thing would be wrong for private citizens even if they, say, set up a trial with significant due-process safeguards.

    • Fluffy Buffalo says:

      it’s “the same moral rules that apply to you and me apply to the government.” Pretty much the entire Enlightenment was a bunch of philosophers proposing that rule as an alternative to privilege in the classical sense (the privilege of clergy, nobility, and so forth), applying it to a bunch of things, and seeing what stuck.

      That seems to capture only part of the picture. As far as I can see, the big progress during the Enlightenment was to abstract the power of the state, as an institution, from the power of the ruler, as a person. The right to exert violence, demand money, restrict people’s liberties, etc. lies with the state as an institution, but is exerted through people appointed as the state’s representatives for certain functions. So while a judge may have the right to condemn someone to death (following the state’s rulebooks), he does not, as a person, have the privilege to kill someone in his free time.
      The application of that principle has allowed citizens to outsource many concerns about personal safety and has drastically reduced the necessary application of violence. Questioning the state’s monopoly on violence in principle looks like a dumb and dangerous move to me. (That doesn’t mean that there shouldn’t be a discussion about the rules under which the state can apply violence, and which forms of violence are permissible. That is absolutely necessary.)

      • whateverthisistupd says:

        Questioning the state’s monopoly on violence in principle looks like a dumb and dangerous move to me. (That doesn’t mean that there shouldn’t be a discussion about the rules under which the state can apply violence, and which forms of violence are permissible. That is absolutely necessary.)

        I can’t really perform the ethical doublethink needed to get where you are. (I’m one of the “take this fake rule literally” people.)
        It literally makes my head hurt; it’s just so counterintuitive to me. You have to say that an action that normally would warrant self-defense doesn’t, because… this is where I get confused.

        I’m not trying to argue in bad faith, I genuinely don’t understand this.

        Like, I can get “having a unified government is better then having competing governments”.

        But how do you handle that on the personal ethical level, where Person X can kill Person Y and Y has no right to self-defense because X claims authorization from the Monopoly Government, whereas Person A can defend themselves against Person B because Person B claims authorization from a Minority Government, all other factors being the same?

        It seems you either need to have given up your a priori moral feelings in favor of utilitarian moral principles; but without a priori feeling, how do you maintain a fundamental compass to inform morality?

        Or, alternatively, does your a priori moral feeling work on a much more abstract level?

        I understand why most people feel this way; it’s based on an equation of authority with moral correctness.

        I don’t get how you do it as a rationalist.

  31. Edward Scizorhands says:

    “you can never oppose something that benefits you” is a fake rule we never apply to anything else.

      This rule is used all the time. It’s a stupid rule, and you can see why when you state it as plainly as you do here. But just on the government issue, I’ve heard conservatives say “liberals who want higher taxes must be hypocrites because they don’t voluntarily pay more taxes,” and liberals say “conservatives are hypocrites because they want smaller government yet continue to use it.”

    It’s a stupid gotcha, and unfortunately not isolated at all.

    • Simon_Jester says:

      🙁

      I knows, buddy.

      Sadly, the hallmark of the isolated demand for rigor isn’t that it’s a strictly one-use argument, never used again. We’d be better off if it were. It’s that it’s an argument used only when it is needed, and treated as if it doesn’t exist or apply whenever we’re not using it to win an argument.

    • The Nybbler says:

      It turns out there are counterparts. “You can never support anything which benefits you”: “You just want lower taxes because you’re a selfish 1%er.” “You can never oppose anything which doesn’t benefit you”: “You only oppose universal pre-K because you don’t have any kids.” I’m not sure about “you can never support anything that doesn’t benefit you,” but I can’t rule it out.

      • Alsadius says:

        > I’m not sure about “you can never support anything that doesn’t benefit you”, but I can’t rule it out.

        That’s also quite common, though mostly about public figures. Consider “(Famous politically-active rich guy) just gave a million dollars to an anti-spousal abuse charity. I wonder if he’s beating his wife, or if he’s just trying to virtue signal before he runs for office next year.”

        • Lambert says:

          “When I was poor and complained about inequality they said I was bitter; now that I’m rich and I complain about inequality they say I’m a hypocrite. I’m beginning to think they just don’t want to talk about inequality.”

          Russell Brand

  32. Sigivald says:

    This entire post was an excuse to post that Sierpinski triangle, wasn’t it?

  33. fion says:

    I like this. It reads kind of like a Greatest Hits.

  34. eqdw says:

    Two things

    —-

    First, I would like to briefly play devil’s advocate in defense of gotchas.

    Do you know the old joke about the comedians’ convention?

    Sammy is attending his first comedians’ convention. He’s very excited to see all the comics he’s seen on TV sitting at tables all around him.
    The proceedings begin with a joke session. Jay Leno gets up and says “Number 64.” Everyone in the hall laughs uproariously, except Sammy. Sammy turns to the older comedian who brought him and says, “I didn’t hear any joke. What’s everybody laughing about?”
    “These are all professional comedians,” says his friend. “They don’t need to hear the jokes. They all know them so well, they’ve given every joke a number. They just get up and say the number. It saves time.”
    Then Chris Rock gets up and says, “Number one hundred forty-three,” and again everyone in the room cracks up.
    “Could I try it?” Sammy asks his friend.
    “Of course,” the friend says.
    So Sammy stands up and says, “Number fifteen.” Nobody laughs. Sammy is so embarrassed, he sits back down. Then he hears a voice mutter down near the end of his table, “Some people just don’t know how to tell a joke.”

    I think that a lot of gotchas have a function similar to the numbers in this joke. To follow the example you gave above:

    For example, [“When guns are outlawed, only outlaws will have guns”] be transformed into an argument like “Since it’s possible to get guns illegally with some effort, and criminals need guns to commit their crimes and are comfortable with breaking laws, it might only slightly decrease the number of guns available to criminals. And it might greatly decrease the number of guns available to law-abiding people hoping to defend themselves. So the cost of people not being able to defend themselves might be greater than the benefit of fewer criminals being able to commit crimes.”

    I think that the social function of the gotcha has very little to do with arguing at all. I think it’s more like saying “number 27” to your comedian friends. The gotcha is a short, concise pointer to a larger argument. The larger argument is expected to be understood by everyone participating. It’s not meant to be taken literally, as an instance of ignorance and stupidity, but instead to refer to and/or emphasize the wider argument, amongst a group of people who all have similar feelings and understandings on that subject.

    I think, further, that the reason that it appears to be used as a low-effort argument towards counterparties is an artifact of social media, and specifically how it has blurred the boundaries between private and public spaces. Twitter is a fully public space, for example, but most of the people I know, and most of the accounts I see, seem to act as if it is more or less a private space. They might tweet the guns gotcha, intending it to be private gossip amongst friends. Of course, since Twitter is not private, it inevitably is seen by people who are not in the loop. They might agree with it, or they might disagree with it, but in either case, they’re not part of the ingroup that knows to automatically dereference the pithy phrase to the reasonable argument.

    Come to think of it, if we’re being very charitable, this would seem to be a reasonable explanation for both motte-and-baileying and sealioning. The former: the person doing it does not realize they are doing something wrong, because they know that the motte is supposed to expand into the bailey (or vice versa; I always mix those two up). The latter: people think of their Twitter as a de facto private space, and are not happy when outsiders come into their mentions and disrupt their fun.

    —-

    Second thing, I don’t have a good way to work this in but a recent tweet from Alice Maz seems appropriate to this whole taxonomy of arguments discussion https://twitter.com/alicemazzy/status/991865939674222592

    • gbdub says:

      But the comics don’t just shout out numbers on tour, because there the audience isn’t part of the ingroup that “gets” the numbered joke references. There, to get a laugh, Chris Rock has to actually articulate the whole bit.

      I think you’ve identified an important phenomenon here, but it still doesn’t make a “gotcha” an argument. It just demonstrates that “gotchas” are basically performance art for the ingroup that already agrees with you rather than something that actually furthers a debate with an opponent.

      • eqdw says:

        but it still doesn’t make a “gotcha” an argument.

        This was not my intention! This was the opposite of my intention! I was not trying to highlight how gotchas are actually reasonable arguments. I was trying to highlight that I think that most of the time gotchas are not trying to be arguments at all.

        • gbdub says:

          Rereading you more carefully I don’t think we disagree on much. I was mostly expanding on your own point:

          “The larger argument is expected to be understood by everyone participating.”

          Which you yourself (rereading more carefully) also expand on in a way I mostly agree with.

          But I don’t think that really works as a “defense” of gotchas (which was what you said you intended) because I don’t think it actually clashes with any of what Scott says against gotchas. If anything, it pushes them more into “social shaming” since they start to function as ingroup markers.

    • A Definite Beta Guy says:

      I agree with part of your point, but I disagree about the usage. “Gotchas” are used pretty extensively to disprove other people’s viewpoints: I spent a lot of time on other internet forums, and the “if you hate government so much you should move to Somalia” line was used many times against libertarians. Other gotchas and ill-formed arguments accompanied it, to the extent that MOST of the criticism of libertarians (or of any group, really) consisted of gotchas.

      These also dominate inter-tribal discussions on Twitter as well as Facebook, so I think they are basically the backbone of most arguments. But they aren’t convincing at all, and aren’t serious attempts at discussion, so there’s definitely a need to call them out. I think we might want a better term than “gotcha,” though. Reductio ad sound bite?

      I think part of the problem is that the tribal rank-and-file do not really understand their arguments beyond the gotchas, and so can’t really communicate the actual meat to other tribes.

    • fion says:

      I’ve heard that joke before, but with a different punchline. In my version (I’ll use your names), Sammy says “Can I try it?” “Of course,” says his friend. “Number three hundred and eighty-two!” says Sammy. Everybody laughs hysterically, and for much longer than at any of the other jokes. Sammy sits down rather bemused and says to his friend, “Wow, what happened there?” His friend fights down his remaining chuckles and says, “Oh, we’ve not heard that one before!”

    • soreff says:

      Many Thanks! I was considering writing something similar.
      Re:

      The gotcha is a short, concise pointer to a larger argument. The larger argument is expected to be understood by everyone participating. It’s not meant to be taken literally, as an instance of ignorance and stupidity, but instead to refer to and/or emphasize the wider argument, amongst a group of people who all have similar feelings and understandings on that subject.

      Exactly agreed. The crucial point is what information the writer can expect their audience to already have. If a physicist at a physics seminar sees a mistake in a presentation, it may suffice for them to just say “angular momentum” to communicate it to everyone else there.

      To put it another way: gotchas can be abbreviations.

    • Galle says:

      The obvious objection, though, is that if everyone involved already knows the argument, then there’s no point in making it. One thing I noticed a long while ago is that particularly interminable arguments tend to follow a “script” of standardized responses and counter-responses. For a completely non-political example, let’s take an argument about Luke Skywalker’s characterization in The Last Jedi that I’ve seen repeated on multiple occasions. Spoilers for that movie, obviously:

      Argument: TLJ presents Luke as someone willing to murder a teenage boy over a mere possibility, something that completely contradicts his characterization at the end of Return of the Jedi.
      Response: But Luke didn’t murder a teenage boy. He was tempted to, but resisted the temptation.
      Counter-response: But Luke shouldn’t have been tempted in the first place. He moved beyond that in ROTJ.
      Counter-counter-response: You can’t move beyond temptation, you can only learn to resist it, which Luke did.

      As long as you follow the script, no actual new information is being communicated and nothing useful is getting done, you’re just performing some kind of complicated social ritual to excise your feelings of unhappiness that someone is wrong on the internet. If you’re genuinely trying to persuade someone, then step one is to break script.

      If an argument is so well-known that you expect your opponent to recognize it from a simple gotcha, then that argument is part of the script, and therefore isn’t useful.

  35. doublebuffered says:

    I think this is a valuable post that helps clarify some thoughts I’ve been having over the last few weeks. I’m interested in discussions of how to better deal with the prevalence of social shaming.

    One concrete criticism I want to offer is that your red/blue chart is REALLY hard to read. Specifically, the blue gradient makes the topmost point illegible on my monitor. If you can, I would suggest slightly redoing that graphic for legibility.

  36. gbdub says:

    “assuming that for most people the natural purpose of argumentation is truth-seeking”

    Doesn’t Scott come right out and say that 90% of argumentation isn’t that? That’s why “social shaming” is at the bottom of the pyramid and meta-debate is the sphinx!

    I’m sure plenty of people are quite happy to make the bottom of the pyramid their be-all and end-all, but I don’t think most of them are self-aware about, or willing to admit, that this is what they are doing, and gaining that self-awareness is probably valuable even if they conclude “yeah, all that high-level stuff is great, but I just want to win the interpersonal status game”.

    • Patrick Cruce says:

      I’m not sure failure to admit this is necessarily a bad thing. For one, there are many social games where some sort of unspoken common knowledge is a critical element of successful play.

      And two, if they’re sufficiently good at the game, they may not say this for the same reason that chess masters don’t spend time talking about how a knight moves on the board.

      At best these acknowledgements would be digressions, and at worst they could mean losing the game (tantamount to revealing your next three moves in chess).

  37. MNadolsky says:

    > Most people are either meta-debating – debating whether some parties in the debate are violating norms

    This is the heart of conflict theory argumentation.

    Try as you might, the more you examine effective disagreement with authentic and good-faith truth-discovery as an end-goal, the more difficult it is to take conflict theory seriously as an effective way to interact with the world.

    I know that conflict/mistake theorists roughly correlate with tribes, and I know you want to suppress tribal urges, but some things really are beyond defense. Admitting that conflict theory is unambiguously bad is not to dismiss the tribe it’s associated with; the tribe mistake theory is associated with has failures as well, and pointing one out is not dismissing the tribe as a whole, even if the failure is pervasive.

    • fion says:

      I don’t think conflict and mistake theorists correlate with tribes at all. In fact I don’t know which way around you think the correlation goes… My best guess is that Blue and Red tribes are most associated with conflict theory and Grey tribe is associated with mistake theory?

  38. rubberduck says:

    Great post!

    I haven’t gone through all the comments yet, so maybe somebody has addressed this, but in my opinion either next to this pyramid or parallel to it (there’s overlap) there’s another pyramid for levels of misunderstanding. This might sound like “people don’t REALLY disagree, they just need to come to a common set of goals and definitions!”, but I find it important to determine whether there is any actual disagreement in play before looking at its sources. I’m taking “disagreement” to mean primarily those things above the upper dashed line on Scott’s pyramid, since everything below it sounds more like debate tactics than actual disagreements of fact. Being a conflict vs. mistake theorist probably plays into this, so I guess this is a disagreement pyramid for mistake theorists.

    I imagine the pyramid of misunderstanding as follows, in which going up the levels, the misunderstanding becomes more difficult to resolve:

    Level 0: Inability to understand even the surface-level argument due to some sort of barrier that’s independent of the claim being made. (Ex: not understanding your friend because he’s very drunk and his speech is slurred, not understanding a scientific publication due to insufficient technical knowledge, not understanding something written in a foreign language.)

    Level 1: Inconsistent definitions between parties, essentially the same thing as the “disputing definitions” section above. (Ex: “How can you not be a feminist? Don’t you know that feminism gave women the right to vote?”, the “capital punishment is murder” thing, etc.)

    Level 2: Access to different sources of information. (Ex: debating gun control with someone who has read a different set of publications than you, trying to explain to a time-traveller from 1900 that manned spaceflight is possible.)

    Level 3: Inability to follow the other person’s reasoning: you have access to the same facts and agree on definitions but draw different conclusions, could be related to higher-level disagreements. (Ex: you and your friend have examined the wage gap and agree on a number of other factors besides sexism in hiring that account for women earning less than men, such as women choosing flexibility, working fewer hours, going into different fields, etc. Your friend claims that all this still indicates a sexist society, just not that hiring managers specifically are biased. You do not understand how she came to that conclusion.) (Alternate ex: a crazy person says that the text on the side of his cereal box holds a prophecy that the world is ending next week.)

    Level 4: Inability to understand motivations and values (Ex: The gun-control debate in the “operationalizing” description above: One person’s motivation is to ensure that citizens are protected from governmental tyranny, the other’s is to lower the number of deaths by firearms. If either side approaches the debate without seeing the other’s motivation, then they will still disagree even if they’ve agreed on definitions and evidence.)

    Level 5: Inability to understand physical/emotional experiences that have strong influences on people’s views and standards (Ex: arguments against waterboarding that hinge on how unpleasant the experience is will not be understood by most people the way they’d understand hunger, being tired, or having a cold.) (Ex 2: a religious person’s understanding of the universe is deeply tied to a spiritual awakening they went through, which anyone who has not experienced a spiritual awakening will never quite understand.)

    The examples aren’t the best, since I haven’t thought about them as deeply as I should have, but overall I think disentangling misunderstanding from actual disagreement is important.

  39. greghb says:

    Does anyone else feel somehow blessedly isolated from things below the lower dotted line, and even largely isolated from things below the higher dotted line? I believe such things exist, and even that they are the majority of discourse. But am I somehow misguided/wrong in thinking that I mostly don’t see them in the discourse I consume voluntarily? I mostly see a bunch of intelligent, decent, and philosophically sophisticated people working through genuinely complex issues, about which they are mostly unsure. Heck, that’s why I like Scott’s posts. Does anyone else feel like the discourse they engage with is mostly well-intentioned, intelligent, sane, and not that strident or confident?

    Reasons this might be plausible: I’m not on Twitter or Tumblr; I don’t watch cable news or read the major newspapers beyond the headlines; I get most of my analysis from nerdy blogs and long-form journalism.

    Reasons this might be implausible: many of the people I read/talk with are probably left-of-center when it comes down to it, so maybe I’m just not recognizing my own echo chamber; what else?

  40. ksvanhorn says:

    “‘It is exactly as wrong for the state to do something as for a random criminal to do it’ is a fake rule we never apply to anything else.”

    Bad example, as this is, in fact, pretty close to the hard-core libertarian position. Just replace “criminal” with “person”. So rather than being a “fake rule we never apply to anything else,” it is a fundamental and often-used rule for hard-core libertarians.

  41. ksvanhorn says:

    “‘Everyone has a right to do what they want with their own bodies’ is a fake rule we never apply to anything else.”

    Again, only if you exclude libertarians.

    • Simon_Jester says:

      Does this invalidate the observation that 90% or more of the general population uses this class of argument as a fake rule never applied consistently?

      This isn’t about the specific political positions involved here, it’s about the general principle that many, many of our political ‘sacred cow’ arguments are in fact isolated demands for rigor when taken seriously in their strong form.

      • fion says:

        I think this is exactly the point. Maybe when Alice says “everyone has a right to do what they want with their own bodies” she’s not applying a fake rule to win an abortion argument, but is actually applying her consistent libertarian principles. But Bob isn’t a consistent libertarian, and he also uses “everyone has a right to do what they want with their own bodies” in abortion arguments.

        What Scott’s saying isn’t wrong, it just doesn’t apply to Alice.

        • Simon_Jester says:

          Okay.

          Is this incompatible with the observation that for a variety of very common political arguments, including the ones Scott uses as examples, the Bobs greatly outnumber the Alices?

    • Garrett says:

      “I’m pro-choice”

      “Cool – I choose 3 new machine guns and heroin.”

  42. Nornagest says:

    Cactus people, obviously.

  43. edgepatrol says:

    “One could argue that ‘Banning abortion is unconscionable because it denies someone the right to do what they want with their own body’ is an isolated demand for rigor, given that we ban people from selling their organs, accepting unlicensed medical treatments, using illegal drugs, engaging in prostitution, accepting euthanasia, and countless other things that involve telling them what to do with their bodies – ‘everyone has a right to do what they want with their own bodies’ is a fake rule we never apply to anything else.”

    Am I the only one who doesn’t have a problem with ANY of those things and, indeed, holds it as a hard rule that is applied to everything else? 😉

    Great post, as always.

    • hnau says:

      I don’t have such a rule. I do, however, have the following as a hard rule: “Relying on hard rules to determine your beliefs in practical matters is wrong and dangerous.”

  44. Yaleocon says:

    tl;dr: In many contexts, but especially moral/political discourse, “disputing definitions” is good and necessary.

    Take the statement “a fetus is a person”, and arguments about it. We can define “person” as “having a beating heart,” or as “being rational” or “having the potential to be rational,” or “being alive and having human DNA”, or more. This is clearly just a question of “how we use our words”, which Scott says “nothing in our factual or moral debates should hinge on.”

    But this seems obviously wrong. One of those definitions, perhaps, is the important one, the one that matters and confers moral significance. And if we’re wrong about which definition it is, then we might deny a hundred million women their reproductive rights, or kill hundreds of millions of people. Disputing this definition, and arriving at a right answer, is crazy important.

    And “person” isn’t the only case of this. Any instance of what metaethicists and philosophers of language call “evaluative judgments”–key examples being “wrong”, “good”, “person”, and “important”–has a definition that isn’t trivial, and can’t be dispensed with in argument. And before anyone @’s me with some Yudkowsky, he agrees with this in a comment on his article about disputing definitions (ctrl+f “direct appearance”). I just don’t think he fully thought through the implication of that.

    Because let’s say you’ve agreed that a “person” must be “rational.” What is “rationality”? All of a sudden, another term whose definition matters has snuck its way in, and you can no longer just say “well, we can define it many different ways, and it’s only important that we agree about whatever arbitrary definition we give it.” Let’s say we’ve agreed that “right” means “maximizing pleasure.” Now it becomes important to know what “pleasure” is and what kinds of beings experience it. Oh, and also, “maximizing” needs clarification, especially because it seems to involve some kind of causation, and now it’s important what makes an action cause a result (a ridiculously thorny metaphysical problem), and so on.

    Granted, arguments about trees falling in forests and making sounds can be dumb, because the answer isn’t important. But there are some definitions that are important, and we can’t dispense with them. And once we know those definitions, their importance is contagious; everything they mention gains that same importance. In the end, disputing definitions is incredibly important and totally indispensable, and there are lots and lots of definitions for which it matters what we think about them.

    (Also, notice how all of these definitions inherit their importance–in some sense, their “objectiveness”–from their moral importance? You might think that there’s some kind of metaphysical significance there, where the Good gives rise to certain classes of things united by Forms that define and characterize them. This is called Platonism. You don’t have to adopt it to think that I’m right about everything else, which is why this part is in parentheses. But maybe, if you agree with me about the rest, you can see why Platonism is still a live metaphysics.)

    • Randy M says:

      Disputing this definition, and arriving at a right answer, is crazy important.

      Disputing the implications of the definition is what you are worried about; that is, it isn’t critical to know whether or not a fetus is “a person”, it is critical to know whether or not it falls into the category of “entity worth protecting even at some cost to other people” (or however carefully you want to phrase the requisite action). Maybe in this case that is the whole of what is meant by “person”, but your interlocutor might not have that definition in mind; the argument will stall or devolve unless you can put the label aside and debate the consequences that you are implicitly associating with it.

      It’s possible some social, let’s say, influence may depend on the labels used. Say we discover some aliens that we feel are as worthy of protection as a human; perhaps we want to extend words like “people” and “person” and “man” and “woman” to them in order to condition people to view them as equivalent to Homo sapiens. But doing so isn’t winning the argument; it is a kind of dark art. Explicitly outlining the criteria for personhood may be having the argument in a roundabout way, but it is a proxy for arguing about the implications for our behavior.

      Of course, ultimately a lot of our values may cash out in definitions, at least without a lot of reflection. (edit: Actually, I think I may be coming around to agree with you…)

      • Yaleocon says:

        I’m not sure I’ll directly respond to your point (and maybe it will be unnecessary for me to, if you’re coming around anyway). But this might clarify my thinking here: I think “person” is our word for the implications of the definition.

        That implication is that the thing “has rights” or “has dignity” or “should have its pleasure maximized” or whatever. That’s why animal rights activists, for example, use the language “animals are people too”–they don’t mean they’re human, they mean they matter (in the sense of having rights/dignity/feelings etc, whichever is the truly important one). The word “person” is a tag for that quality.

        So both people agreeing to extend the word “person” to aliens would constitute winning the argument about whether aliens are “equally worthy of protection” or whatever. If you can convincingly define “person” such that it includes aliens (or doesn’t include, depending), you win. But in order to do that, you have to actually do argumentative work: offer a case for why rationality is what gives us our dignity, or why anything which can feel pain deserves not to feel pain, or something. This argument will be a dispute over a definition–the definition of “person”–which clearly matters.

        Put another way: there are kind of two definitions of “person” going on here. The first is the moral/evaluative definition–the “implications”, we called it–and the second is the descriptive/empirical characterization. The debate is actually about which moral implications match up to which descriptive characterization, an important and valuable debate–which manifests as a dispute over the definition of “person.”

        And yes, we could totally do away with the word “person” in this argument. We could say “there is something that makes things have rights; that thing is X; aliens also have X; ergo aliens have rights.” Maybe we’d be better off if we did reformulate every dispute over definitions in those terms. But I think I’d say that this is just a roundabout way of disputing definitions anyway, an artificial reframing of a perfectly natural and meaningful form of argument.

        And at the very least: if two people are having a perfectly productive discussion about the definition of “personhood” and what confers moral meaning, nobody should step in and say “stop disputing definitions!” They’re not doing anything wrong, and the interloper will only kick off a round of unproductive metadebate. If you agree with nothing else, agree with this, please 🙂

    • cuke says:

      I had the same reaction, that arguments over definitions are often essential, not a sideshow.

      During the whole conflict theory/mistake theory conversation a while back, I kept asking in various threads for us to clarify definitions for each of these. The construct continued to be incoherent to me. I see the dichotomy picked back up here in this discussion, and it’s still incoherent to me. That incoherence (as I experience it, anyway) makes any further claims about “mistake theorists” and “conflict theorists” impossible to evaluate.

      We often smuggle biases and assumptions into our definitions and so starting with definitions can really help ensure the rest of the discussion isn’t a waste of time.

      My mind here starts to imagine another graphic that maps out a road to better conversations. For me, the road starts with these three things:

      1. Are we comprehending each other’s starting positions? “What I hear you saying is… do I have that right?”

      2. If there are misunderstandings of meaning, do they have to do with how we’re defining things and can we get more precise? “By conflict theorist I mean someone who… is that also what you mean?”

      3. Can we agree about what would constitute evidence for either side of the discussion? “I only want to consider quantitative studies in peer-reviewed journals” or “I’m open to being persuaded by moral arguments as long as I can follow your logic,” etc.

      I found the debate between Sam Harris and Ezra Klein about Charles Murray (that took place on Vox, podcasts, Sam’s website, Reddit, and so on) to be fascinating because so much of the time they were talking past each other. I wished there were a moderator requiring them to respond to each other more coherently. And I thought later someone trained in rhetoric could dismantle the conversation in a way that would be illuminating. I spent way too long on that whole thing partly because I want us to do better with the mechanics of how we talk to each other and I find all the evasive tactics, willful mischaracterization, and ego-defending to be so frustrating.

      • Yaleocon says:

        I think what you’re pointing out is important, but the issue you’re addressing is clarifying definitions, not disputing them, and I think Scott would agree with you. If I’m interpreting you correctly, you’re saying that people can use the same words differently without realizing it, so making sure we agree on what the definitions of things are is really important.

        So, for example, there are different definitions of “blue”–e.g., reflecting a certain wavelength range vs. activating certain “cones” in the retina (these don’t line up perfectly)–and we might get confused if you’re using definition 1 and I’m using definition 2. But once we figure out that we’re using different definitions, it seems like there’s no problem with just picking one. 1 or 2, doesn’t matter. So long as we agree on it, we can stop talking past each other.

        I’m making a further claim, which you may or may not agree with. In certain cases, I think that we can all agree about a definition, not talk past one another, and still be wrong. Take “good.” There’s a question about which actions are “good.” Let’s say you believe definition 1, which says “goodness is pleasure and absence of pain,” and I believe definition 2, which says “goodness is virtue and absence of vice.” We realize that if we just said “good” without clarifying, we would talk past each other, so we agree to use definition 2. But it seems definition 2 could still be wrong. There might be a true definition of goodness, one which has actual moral importance. So it’s not enough to just agree on our definitions: sometimes, our definitions must also be the right ones (or at least, that’s the case I’m making).

        • cuke says:

          Okay, interesting. My default is that I think there’s no bright line between clarifying and disputing and that it’s often not clear until afterwards whether we were mainly clarifying or disputing a definition. Was it merely clarification if we came to agreement finally and dispute if we couldn’t agree? Anyway, I don’t see the distinction at the moment. Maybe you have an example that would clarify.

          Your next point, about some definitions being true or right, separate from whether we agree on a shared definition, also stumps me. I’m guessing if I look up “good” in the dictionary, there will be several definitions at least. Are we taking the dictionary to be true and right or are you speaking to some grand underlying true and right?

          I get that some definitions could be wrong in the sense that they are contrary to a widely-accepted definition for something, like say if I said diabetes is the condition where you have a sore throat and a stuffy nose.

          How do you define “true” and “right” when it comes to definitions? In the context of the word “good” I think you suggest a definition would be true or right if it has “moral importance” — which takes us to another term to be defined.

          So I think my position is that there’s no objective standard for a true or right definition, outside of one that the parties might agree to, like a dictionary or common usage or whatever. But I am interested in being persuaded by more examples!

          • Yaleocon says:

            I think there definitely is a “grand underlying true and right.” If you’re a relativist, you can get out of this: but otherwise, it seems clear that all else equal, we should prevent pain. Now let’s say two people agree on what pain is, but they’re wrong: they exclude emotional pain, let’s say. Their agreement isn’t enough. There’s a true definition of pain which matters, and it would be good for them to correct their false definition.

            For a perhaps higher-stakes example: defining “person” to exclude humans with dark skin would be very bad. As a matter of fact, black humans do have rights and dignity, even if people at a certain time and place agree not to recognize that. A correct definition of “person” should reflect that truth.

            Is either of these a satisfying example for you, cuke? If not, is it because you’re a relativist/nihilist? Intrigued to hear.

          • cuke says:

            Yaleocon, thanks for your response!

            I’ve tried for the past hour to write something coherent and concise, and it’s not happening. So here’s a partial reply…

            Let’s say I doubt whether there’s a grand underlying true or right, but I think we do better as a society if we live by provisional truth claims and broadly shared moral rights, even if our ideas about these change by place and time and many of them are hotly contested at any given moment.

            So for the purposes of this conversation, I don’t think there’s any right or true definition of pain in an absolute sense. In a practical, daily sense, we can see that there are multiple kinds of pain and suffering going on around us all the time. If someone wants to say that there’s no such thing as emotional pain, I would show them the evidence I have for it or I’d point to someone in evident emotional pain and say “What do you call that? How do you think about what that is?”

            I gather from your examples that definitions matter to you because they point to rights and responsibilities. So, if you define pain as including emotional pain, it implies to you a duty to alleviate emotional pain. If you define a person as having certain rights, then you have to make sure everyone gets included in that definition. I would consider the definition and the duty to be separate things but I can see how historically they’ve been quite connected in practice.

            Societies are constantly negotiating our rights and responsibilities to each other and I think that’s a good thing. Things like freedom of speech and press, democratic participation, and universal education are helpful to making those negotiations more productive and beneficial. Things like abortion, gun control, transgender rights, and the acceptable degree of state corruption are all hotly contested things in U.S. society right now, in the same way that the personhood of slaves or women was hotly contested in another time. Given how our collective moral compass changes, how do we argue for its absolute rightness in any given time?

            I think killing is wrong. It’s often cited as something universally recognized as wrong in the same way you suggest pain is obviously something we should prevent. But where is the “right and true” between the pacifist and the just war theorist?

            From my point of view, physical pain and emotional suffering are both inevitable and efforts to prevent or reduce them are commendable, but which kinds of pain and whose pain do we prioritize? And how much pain is self-inflicted or constructed in the mind and how much control do we have over its prevention up against other priorities we might have? Where’s the line between experiences that are challenging, very difficult, and painful? A lot of the things I value about myself I arrived at through pain.

            Anyway, it’s complicated, right? I can tell you I started as a person very uncomfortable with uncertainty and I’ve grown into an older person who is somewhat more comfortable with uncertainty and that that’s made a lot of things easier and more enjoyable.

          • Yaleocon says:

            Thanks for your response as well!

            I think the “right and true” doesn’t exist between the pacifist and the just war theorist. The moral law is something outside of them, which is what their debate is about. They both have hypotheses about it, which conflict—but they both agree that it exists, and that it matters which of them is correct.

            There’s the position (error theory, nihilism) which claims that they’re postulating about a nonexistent thing, and that “better” and “right” don’t have true definitions. That’s probably impossible to disprove, as a question of faith. But thankfully, it doesn’t seem like you believe that, since you refer to “do[ing] better as a society if we live as if there were [moral truth]”.

            Correct me if you are, in fact, an error theorist. But it seems like you believe something different. Maybe we collectively create or invent a moral order to serve our own purposes, or something of the kind, and it’s a “provisional” truth (let’s call this “constructivism” or maybe “contractualism”). This is a position to be taken seriously, but I find it somewhat hard to believe. It seems possible for people to agree on evil things, and it seems we have in the past; society-wide endorsement of racialized slavery is an important example. But if morality is merely what we invent, how can we express the idea that it was bad then? If the Civil War had gone the other way, would slavery still be good now? (These aren’t unanswerable questions—but I think each possible answer has its own problems, including maybe collapsing back into error theory.)

            Rather than explaining our confusion over controversial issues (like the ones you laid out) by denying that the moral law exists, I’d rather say that it’s hard to get exactly right, and therefore easy to disagree about, but we can still discover more about it over time. We might never achieve “absolute rightness”, but without something absolute that we’re reaching toward, we can’t even have relative rightness; the ideas of “progress” and “regress” wouldn’t make sense. These concerns lead me to endorse some version of the “grand underlying true and right” theory.

            (And I hope in all this that I haven’t mischaracterized your views.)

          • cuke says:

            I think you’ve characterized my views entirely fairly. You clearly have some training in moral philosophy!

            I do lean constructivist-y, and I’m not a nihilist or error theorist, as I understand them. I have yet to meet truths that are not contingent on various particulars of time and place though.

            You say the standard of right and true isn’t between the pacifist and the just war theorist, that moral law exists on its own outside of them, and it sounds like maybe also outside of all people?

            The existence of a universal, objective moral law looks to me like something people take on faith; I don’t have evidence for its existence. It seems fine to me that some people would take it on faith like others might take the existence of a god on faith. But then I’d want to be clear that faith is the bedrock that the whole moral structure sits on. For me, if such a moral law rests on faith, then I have trouble talking about it in true and false terms; I think about faith things in terms of what I believe, prefer, would like to be so.

            I think I consider moral law more analogous to the law in the legal sense — a long-crafted, collective, and usually-improving product of human thought. So it’s bigger than the two people talking, but it is the product of a lot of people talking to each other over a long time. It’s a living, changing kind of a document. We’re always working on drafts and there’s no true final draft waiting out there for us.

            This conversation reminds me of how we talk about the self in Buddhism. The self seems to have no ultimate reality to it; it’s an emergent property of consciousness. We function better day-to-day if our stance with regard to the existence of the self is to act as if it is something that is kind of real, but to not get overly attached to its realness. Buddhists will sometimes say to capture this feeling: “it’s okay to think the self is real; just don’t imagine that it’s really real.” So that is a nutshell for how I feel about morals. I take them seriously, but I also consider them provisional and an artifact of human society.

            I am puzzling over what it means that, when we both started talking about the value of discussing definitions while people are having arguments, I was thinking “agreeing on definitions matters because otherwise the conversation won’t result in new information” — while it seems you were thinking something like “agreeing on definitions matters because otherwise the conversation might lead to morally wrong outcomes.” Does that sound right?

        • Certainly you can’t resolve a dispute over definitions by just picking some arbitrary definition. You have to figure out what you and your interlocutor are actually talking about. In your example with “good”, if the disputants are talking about, say, whether some action is the morally right thing to do or not, then obviously they need to talk about what’s morally right to do, and talking about either what’s pleasurable or virtuous without at least talking also about how this factors into what’s right to do is a distraction from the original issue. The dispute must be resolved by defining “good” as “morally right”, rather than by using either definition 1 or definition 2. (Or better, tabooing “good”—because that way, if the discussion devolves into being solely about what’s “pleasurable”, the virtue ethicist interlocutor will not be able to miss that.)

          But I don’t think the “blue” example is different. Suppose a certain medical treatment involves bombarding the eyes with light in the blue wavelength range (450–495 nm), because it’s at the right energy to trigger a certain chemical reaction, say. A doctor says to his patient: “We can cure you by bombarding your eyes with blue light. So we’ll shine this 470 nm wavelength light in your eyes”. The patient says: “Hold on. I actually have abnormal colour vision. The cone cells that, for most people, are activated by blue light are, in my eyes, activated by green light instead. So perhaps you should shine green light in my eyes.” Clearly, in this situation, it’s blue light the patient needs—since it’s the wavelength, rather than cone activation, that matters—but it’s understandable that the patient is concerned, since the doctor didn’t specify which definition of “blue” he was using originally.

      • Tim van Beek says:

        Sorry, but I find both the post and the comment thread frustrating, because of the constant conflation of so many different concepts. In his point “Disputing definitions” Scott conflates the need for a common understanding of language, including technical terminology, with the “no true Scotsman” fallacy. Those two concepts have absolutely no relation to each other except that they may frustrate the participants of the discussion.

        But since you mention the Harris-Murray-Klein debate: That’s an example of a conversation that left both participants frustrated and exhausted without touching any of the points that Scott lists.

        Harris believes that there is a meaningful distinction between objective reality, our knowledge about it, and what we decide to do with that knowledge. Klein believes that our intentions about what to do take precedence. Both state that multiple times, both in the podcast and in their conclusions on their web pages, so this is IMHO pretty clear.

        Klein explicitly states in the podcast that Murray is primarily a political actor, his intention is e.g. to get rid of affirmative action, that this is morally wrong, that this determines his (Murray’s) scientific reasoning, and that said reasoning must therefore be refutable and has actually been refuted.

        This means they live in disjoint epistemological worlds, with disjoint definitions of what counts as evidence. Therefore, neither one can convince the other.

        As Harris states himself, there is no need to dismiss Hanlon’s razor to explain Klein’s performance, even if it seems he cannot quite get himself to implement his own finding.

        • cuke says:

          Hi Tim,

          I’m not clear what you’re responding to in this thread.

          I didn’t mean to open a debate about the contents of what Klein and Harris were doing, only to say it was a salient example for me of smart people having a sloppy conversation and that I wished we all had better tools to have better conversations. I disagree with some of how you characterize their debate, and my guess is that could lead right back to all the secondary arguments many, many people had after their debate. I didn’t mean to open that up here.

          I agree with you that this whole conversation about the role of clarifying/disputing definitions could use some greater clarifying/disputing. That’s the conversation I intended to extend here by replying to Yaleocon.

    • beleester says:

      I think in this taxonomy, the question “Is a fetus a person?” is actually two separate questions:
      1. Does a fetus have a certain property (human DNA, capacity for thought, ability to feel pain, whatever)? This is a factual question.
      2. Should that property determine whether an entity has moral value? This fits into the “higher-level generators of disagreement” – it’s a question of morals rather than facts.

      I think all of your “important definitions” fall into that category of high-level generators. Either because they’re questions of morality (what does “right” mean?), or because they’re at such a basic level (what does “cause” mean?) that disagreeing on them means that you’re disagreeing on some basic axiom that determines everything about how you view the world.

      • albatross11 says:

        Right, “is a fetus a human” isn’t a biology question, it’s a question of how to classify a fetus morally. When someone says “I believe a fetus is human,” he’s not saying that it was ever up for debate whether a fetus, say, contained human DNA or was properly classified as a member of the human species; he’s saying that he thinks we should apply the same rules w.r.t. proper moral treatment to a fetus that we do to, say, five-year-old children, or kids with Down syndrome, or healthy adults, or 99-year-olds who are senile enough that they don’t recognize themselves in a mirror.

        When someone says “I don’t believe a fetus is human,” likewise, he’s not saying he thinks fetuses have dinosaur DNA or would properly be classified as members of the species of dogs, he’s saying he doesn’t think we should apply the same rules w.r.t. proper moral treatment that we do with healthy/disabled children and adults.

      • cuke says:

        I think I agree with this, beleester.

        Yaleocon, what do you say to this based on what you wrote?

        I suspect that many confusions/disagreements over definitions involve both questions of fact and questions of morals. To me, it’s helpful to distinguish between those two types of questions. I’m not sure, but it seems in our above conversation, Yaleocon, that I was focused on definitional disagreements about facts and you were focused on definitional disagreements about morals. But I can’t tell because the way you talk about morals is the way I talk about facts (I think). I consider both to be worthwhile to talk about in order to have better arguments about important topics.

        albatross11 here seems to be saying the question of “do we define a fetus as a human?” is ONLY a question of morals, not of facts at all. Is that right? I would say like you do beleester that it’s a question of both facts and morals.

      • Yaleocon says:

        Hey! Sorry to get to this late, hectic week.

        @beleester: when we have disputes over “Is a fetus a person?”, I think we’re usually arguing question 2. That’s probably because we already have question 1, the empirical part, mostly figured out (we know when the heart beats, fingers form, neurons fire, etc.), and all that’s left is the moral dispute. And question 2, like similar questions about “right” and “good”, might in fact be a high-level generator—but it’s also a definition that’s under dispute, the definition of “person.” It’s practically my point, at least as I took myself to be making it, that some cases can be both.

        @albatross: “Human” seems like a biological category; “person” like a moral one. I would say “fetuses are human” is obvious biological fact, while “fetuses are people” is a moral question to be investigated. This could be a matter of dialect, and you might use the words “human” and “person” differently–but I don’t think we’re really in disagreement anyway! We both agree that the biology isn’t really what’s at issue, and the moral question is the thing worth disputing. Either way, when we dispute the boundaries of the moral category (whatever name we tack on it), we’re having an important and non-trivial dispute over definitions.

        @cuke: I’ll reference my response to beleester here. Question 1 is “what empirical properties do fetuses have?” Question 2 is “what properties confer the moral property of personhood?” I think question 2 is “purely moral,” and question 1 “purely factual.” But note that we won’t be able to answer the question “Is a fetus a person?” until we’ve answered both questions. And yes, looking back, it makes sense that you were focused on facts where I was focused on morals; that would explain some of our disagreement. And it makes sense that I would talk about morals “the way you talk about facts” because of my moral-realist belief that there are grand and underlying moral facts! Sorry if that was confusing.

        @everyone: We all seem to think that talking about empirical facts and moral questions are both important. My original point, which may or may not have come through, is that because both are important, and because the latter is often a dispute over a definition (of “person”, “good”, “right”), some disputes over definitions clearly matter. I hope I’ve talked at least some of you into buying that 🙂

  45. moridinamael says:

    Thanks! I’m going to shame so many people with these images.

    • Yaleocon says:

      So, you’re going to do a lot of… social shaming? The thing at the very bottom of the pyramid?

      Please refrain.

  46. Nicholas Conrad says:

    Hey Scott,

    Enjoying the article, but had to chime in at:

    it is exactly as wrong for the state to do something as for a random criminal to do it

    I see how this is an incoherent demand for rigor coming from anyone supporting otherwise statist policies, but for libertarians and an-caps the state is nothing more than the most powerful and well-oiled organized crime syndicate in a geographic area. It is a real rule, applied consistently.

    Not to mention the empirical literature on public choice (turns out people acting under color of state respond to their incentives the same as everyone else, therefore they deserve no special moral status as a function of their employer).

    • Alsadius says:

      A lot of those rules are actually used seriously and honestly by a small percentage of the population, and as isolated demands from a much larger percentage of people. The joys of debating people you don’t know.

      • Nicholas Conrad says:

        Really? Maybe I’m just sensitive to my own philosophy being dismissed as something nobody honestly believes, but upon careful re-reading I still have the distinct impression

        🎵 “Three of these things belong together; one of these things isn’t the same” 🎵

        you can never doubt something unless you can prove it doesn’t exist

        you can never oppose something that benefits you

        it is exactly as wrong for the state to do something as for a random criminal to do it

        it’s impossible to honestly oppose someone even when there’s a good economy

        Three are absolute claims based on logically unrelated contingents; one is (quite reasonably) evaluating the morality of action between two classes of actor. To your point, I’m sure someone somewhere earnestly argues the other three; nevertheless I think it’s unfair to lump all libertarians in with that level of illogical argumentation.

  47. InBalance says:

    Hi Scott, would you please reconsider your color scheme inside the pyramid you created? Both the red and blue regions are too dark for me to read the text therein (other than the phrase “Single fact”). I had to zoom in quite a bit to be able to read the whole diagram.

    I realize that I may not be a representative visitor. But the color hurdle made it difficult for me to decode the presumed intent: that the viewer’s gaze should move upwards along the left-hand side, as the “good” path.

    Also, splitting the word “Gotchas” between the red and blue regions was a little confusing for me. At first, I thought that both the red and blue regions were being described as Gotchas.

    Another issue: considering the diagram in isolation, it wasn’t obvious to me what the dashed lines mean. I had to read far into your post, in order to learn their significance.

  48. MrApophenia says:

    I can’t help but think that to some extent, I disagree with parts of this for the same reasons I disagreed with “Against Murderism,” which is basically the Argumentative Problem of Evil – i.e., how does this theory of argument deal with the actual existence of the evil people you’re saying should be assumed not to exist for the purposes of quality argument?

    Let’s take a specific example here that really struck me –

    I’ve seen too many arguments degenerate into some form of “So you’re saying that rape is good and we should have more of it, are you?” No. Nobody is ever saying that. If someone thinks the other side is saying that, they’ve stopped doing honest clarification and gotten more into the performative shaming side.

    After the past two weeks, making a statement like this is honestly kind of wild. Not to rehash the whole Incel War yet again, but a significant portion of that community were openly advocating that rape was good and we should have more of it. It wasn’t coded, it wasn’t a wink-wink nudge-nudge thing like a lot of extreme types try to engage in for that thin veneer of acceptability. r/incels was kicked off Reddit for being rape advocates – that was the specific violation of the site’s terms of service that caused them to ban the whole thing.

    And then in the past couple of weeks, various folks started coming out with hot takes about how “Yeah, but maybe they have a point! Let’s really engage with the idea of Sexual Marxism!” They strip out all the *explicit* advocacy for rape, and just leave the line of logic that led down that path, so they can write their essay or think piece or whatever and if anyone says they’re advocating for rape, clearly they’re just rabid SJWs trying to shut down debate. And yet, the people who coined “Sexual Marxism” as a term were openly advocating for state-sanctioned rape. That’s what it means. If pointing this out isn’t an appropriate response when people strip the most abhorrent verbiage out and start trying to have a big public debate about it, what is the correct response in this situation?

    The whole enterprise here needs a way to grapple with the correct argumentative response when you meet one of the actual, in the flesh Murderists.

    • ilikekittycat says:

      The variety of this I’ve gotten most worried by, given how much it occurs now, is propositions like “The Facts Can’t Be Racist”, which are both trivially, tautologically true and at the same time completely irrelevant to convincing anyone of the argument the person making them is actually advancing. The whole point is to make you go through the ritual of admitting some kind of truth proposition that that sort of authoritarian personality knows will signal, to people like them, that they just succeeded in a social-dominance test and “won.”

      For certain kinds of people you want to win an argument against, you have to find a way to not even give them credit for saying the sky is blue, or you undermine your whole point.

    • cuke says:

      It seems like if you’re arguing with someone about whether rape is ever acceptable and their position is that it is acceptable, then it’s not the situation Scott describes where you are disingenuously and inaccurately attributing malice or extreme positions to another person. You are arguing with someone who does hold a position pretty far out of currently acceptable discourse. So you have some choices, depending on your intention.

      Option 1: If your intention is to understand how someone could come to such extreme conclusions, you ask them a lot of open-ended questions without expressing judgment. You’re taking the position of an investigator to learn more about this strange phenomenon.

      Option 2: If your intention is to shame them or exclude them from the conversation space, then doing what 99% of everyone else will be doing is sufficient.

      Option 3: If your intention is to change their opinion, you would do better to start with option 1 and see where it goes, but be prepared for the fact that most people don’t change their minds because of online interactions. If that person were a brother, friend, or co-worker you interact with regularly, you might have some chance of affecting their opinion over the long run by doing mostly option 1 with them until they feel safe enough with you to ask how you see things and are open to hearing your thoughts.

      Maybe other people will see other options.

      In my experience, people come to believe all kinds of crazy things. We don’t make any progress in conversation by labeling them as crazy. There might be other situations in which we decide to do that for strategic reasons, but then it’s not really about having a conversation, but ending one.

    • Not to rehash the whole Incel War yet again, but a significant portion of that community were openly advocating that rape was good and we should have more of it.

      I cannot speak to arguments elsewhere online, but I do not think that is a true description of arguments being made here.

    • carvenvisage says:

      I agree it’s at least imprecise there, and there are real hostile people.

      Regarding your example:

      As much as incel places strike me as negative places to go and stew in anger, I’m pretty sure they don’t believe half the shit they say. The whole point/aesthetic of ‘incels’ seems to be saying no, we’re not inferior, yooou’re inferior, in the most ridiculous, performance-arty, over the top way possible. -I reject the norms of the society that have rejected me! I am not a virgin but an incel, and you are not an everyman but a dirty normie. Aha! So it is that the tables are turned….

    • hnau says:

      One thing I noticed about Scott’s description is that you don’t get further up the pyramid by making better arguments. You get further up the pyramid by recognizing and internalizing the fact that your opponents are arguing in good faith.

      So to some extent I think Scott’s post actually covers and supports your point: engaging with not-in-good-faith opponents is a waste of time, energy, social/political capital, image, etc. for little or no belief-updating benefit. Thus the Sphinx.

      On the other hand, there is a cost to being permissive about accepting the “not in good faith” meta-claim: you lose a lot of potential belief-updating benefit by making it easy to dismiss inconvenient counter-arguments as “not in good faith”. Of course I’m not accusing you of this. But there’s a spectrum over people and beliefs of how much inconvenience we can take before we simply dismiss the argument. Having the “not in good faith” dismissal button readily available shifts that cost curve in the wrong direction.

      So if we were to have a high-level-of-the-pyramid argument about how to address the Argumentative Problem of Evil, it might look like gathering evidence of the incidence and consequences of (A) letting Murderism participate in higher-up-the-pyramid argument, versus (B) denying worthwhile higher-up-the-pyramid argument to inconvenient non-Murderists. My impression is that (B) would win, because (A) is relatively rare and low-impact: Murderist tactics usually don’t rely on intellectual respectability anyway.

      • cuke says:

        This makes a lot of sense to me.

        I also think arguing-in-good-faith and not-arguing-in-good-faith is a spectrum rather than a toggle switch and that how one proceeds in a conversation can help move both parties either way along the spectrum.

        Really good conversational skills are more likely to move someone further towards good faith, but their starting position on the spectrum and other psychological factors will also determine whether they get far enough towards good faith that it feels worth continuing the conversation. Relationships in general are like this.

    • hnau says:

      Also, because apparently I can’t resist making an object-level point: I happened to read both Hanson’s and Douthat’s posts, blissfully unaware of the background you describe, and it didn’t ever occur to me to interpret either of them as talking about “state-sanctioned rape”. It’s just a single data point that doesn’t fit your interpretation, but I thought I’d put it out there.

    • Hoopdawg says:

      I am going to be pretty rude and uncharitable here, but I’m pretty sure I’m passing the two remaining requirements.

      Considering that the concept of “sexual Marxism” can’t be demonstrated to exist in any meaningful capacity earlier than a few days ago, I’m pretty sure you are now witnessing the result of a media-induced moral panic. In other words, no, the people who you claim are “seriously” arguing for it aren’t (and are at best actually trolls aiming to rile you up and succeeding; at worst – a pastiche of selective out-of-context quotations existing only in your internet bubble’s collective imagination), and yes, you are currently attempting to silence and shame people who are trying to have a good-will, nuanced argument that we should probably try to deal with society’s rising problem of loneliness.

      Not only does Scott’s point stand, you are also exactly the person who it was directed at and who seriously needs to grasp it.

      • MrApophenia says:

        Cursory Googling shows the term “Sexual Marxism” in common use in the incel and red pill sections of the internet as far back as 2012, along with all the other buzzwords we got to learn so much about last week (hypergamy, high status males, etc.), often as part of coherently written essays about how the sexual revolution/feminism stripped men of the rightful distribution of women they enjoyed in all traditional societies up to that point, and how a proper remedy would be assigning women to men for proper womanly duties.

        I have to say, it’s almost as if these “no one could possibly hold these ridiculous, caricatured views” arguments are timed for maximal refutation by current events. I mean, hell, “Against Murderism” was posted just in time for Charlottesville, when a bunch of murderists conveniently showed up to declare that, no, they really do exist and hold exactly the views that we’re being told that no one really holds, and claiming anyone does is simply an attempt to shut down discussion.

        • Hoopdawg says:

          I am going to continue to be rude. It’s still true and necessary.

          Cursory googling for “sexual marxism” shows a total of 1,120 results. That’s now; it was ~400 a few hours ago. (Random fluctuation perhaps, but I do expect the number to rise as the witch hunt spreads until nobody can recognize its bullshit origins.) The pre-last-week hits mostly consist of conservatives complaining about Marxist corruption of society, with the words appearing next to each other by random chance. I also searched reddit and 4chan archives for good measure, and found a few unrelated hits and a couple of copypastas, a few dozen posts in total. Needless to say, there’s no evidence of “common use” among any community ever.

          For posterity: http://archive.is/ijDYZ

          So, while I’m sure that you did learn a lot of buzzwords last week, I feel very confident in asserting that you did not in fact encounter any of the people your social bubble is currently othering. Which did not stop you from treating them, and everyone who dares to sympathize with them in any capacity, as inhuman monsters. This tends to shut down discussion, though I am willing to believe this was not what you’re consciously attempting. It’s simply an unfortunate side effect of participating in a toxic social bubble.

          • MrApophenia says:

            Who said anything about inhuman? You don’t need to be inhuman or incomprehensible to hold evil views. Quite the contrary, part of why I find this approach so flummoxing is my close personal history with some of this.

            I mentioned “Against Murderism” a few times, and like I mentioned back when that was posted, the reason it seemed so very off-key to me is that I grew up in a poor, rural area where a ton of people were exactly the kind of cartoon-cliche racists that we’re being told don’t really exist. I grew up with family members and friends who casually dropped the N-word in conversation all the time, who would laugh at and tell jokes that would (rightly) get me banned if I related them here, who held black people in absolute contempt and if asked about it would go on at some length, who voted based on these views – the whole deal.

            And none of them were inhuman monsters. In 99% of daily life, most of them were the nicest people you could hope to meet. (I mean, some of them were all around assholes, but that’s true of any group.) But they were nice, generous, loving people who also had racist views like the bad-guys in some kind of after-school special.

            Likewise, as a deeply socially awkward lifelong nerd, I have a lot of sympathy for how lifelong loneliness and feeling like Gollum every day of your life can basically drive you mad. That doesn’t in any way prevent someone from genuinely adopting shocking, abhorrent views, though.

            In both of these pieces, the general vibe I get (and I believe it’s stated fairly explicitly in “Against Murderism”) is that in order for reasonable, productive conversation to occur, it needs to be true that both sides hold reasonable views and it’s just a matter of finding some way to communicate between them. But that’s my objection – not everyone holds reasonable views. That doesn’t make them inhuman, but if you can’t acknowledge their existence and have an idea about what to do when you wind up in an argument with them, it’s going to be a problem when you run into them.

          • cuke says:

            I’m hearing your central question in this conversation, MrApophenia, to be “how do we have conversations with people who hold unreasonable views?” Is that right?

            My sense is that in most disagreements, people feel the other side holds unreasonable views. So I find your question relevant to most important conversations.

            I’m guessing that many people concerned about climate change consider people who are not concerned about climate change to be holding harmful and unreasonable views. Same on both sides of the abortion debate.

            To me, a big part of what makes conversation possible is how well either side can regulate their own emotions so as to not feel under attack even when the other person expresses “unreasonable” views. Our tendency to feel under attack by views we disagree with will quickly force the level of debate down to the bottom of the pyramid.

            So in your original post I heard you saying “How do we have conversations with people who think rape is acceptable because that’s a really unreasonable, harmful viewpoint?”

            And Hoopdawg seems to respond by saying your characterization is unreasonable, harmful, and othering.

            I don’t know if this is an accurate characterization — I hear this exchange as both of you saying something like “It’s not reasonable to expect me to have a conversation with someone whose views are so unreasonable.”

            This seems to be a common form of disagreement, where two parties keep talking and rallying evidence to support their position for why talking to the other person isn’t something they should have to do. There’s an ad hominem implied: “I find you unworthy of talking to because your views are so bad.”

            It would be more efficient to come out and say that, but I think in the desire to appear reasonable, we keep talking, but not really in good faith because what we’re talking about is why we shouldn’t have to be talking to such an unreasonable person.

            Alternatively, it would be interesting to see where a conversation could go if we took it as a given that on many issues, opposing sides feel that the other person’s views are unreasonable. “Yes, so you think I’m a monstrous baby killer because I support the right to abortion…” Now what? Do we keep name-calling, or can we try to understand each other better?

            I am pretty confident that not every self-identified incel thinks rape is okay. I’m thinking some do. For sure there are non-incel people in the wider world who think rape is acceptable, no question. We can call all incels beyond the pale and unworthy of conversation. I’m inclined to express curiosity and to feel confident that my curiosity isn’t going to harm me; and at the point that it does feel harmful, that I will have choices about what to do next.

            (side note: I’m a woman, feminist, and a sexual abuse survivor)

          • Hoopdawg says:

            Who said “inhuman”? Scott in “Against Murderism”, describing what counts as “actual murderists”. This discussion is downstream from it, after all.

            You are now saying that you knew racists and they weren’t monsters – but that’s the whole point of “Against Murderism”! They’re humans who understand and respect our society and its human values, kind people who’ve probably never harmed anyone, they just have an irrational fear or dislike of another group of people. You can disagree with them or even think it makes them worse as persons, but you don’t have to disregard the entire standards of civilized society in favor of kill-or-get-killed tribalism to deal with them. In fact, you probably shouldn’t, because then they have a completely rational reason to treat you the same way, and then we all die.

            This does not mean that everyone always holds only reasonable views. What it means is that there is a core of reason underneath surface-level positions that can be reached if you approach them honestly and charitably. Someone’s abhorrent intolerance against a disadvantaged group easily can, and usually does, come from a genuine and laudable sentiment. Islamophobes do worry that Muslims will perpetrate terrorist attacks and rape their women, antisemites do feel repulsed by the evil plots of the Elders of Zion…

            I feel that Cuke’s post exposed my response plan a bit, so let’s get straight to the punchline – can you blame them, when it turns out that, given the right environment, the right group of people to abhor, and the right “evidence”, you ended up with suspiciously similar beliefs?

          • albatross11 says:

            Note that there are two things going on here:

            a. Basically decent human beings with some abhorrent-to-us views. That’s plenty of people right now, and pretty much everyone when you go back 100+ years in time.

            b. Abhorrent-to-us views that have some actual intelligent reasons for believing them, non-crazy arguments, solid-looking evidence, plausible-seeming theories, etc. Depending on your starting point of view, this might mean intelligent and worthy-of-attention human b-odiversity advocates, opponents of gay marriage/gay rights, Communists, whatever.

        • John Schilling says:

          FWIW, I got 36 results predating this month. I didn’t see any that were clearly unironic uses by incels; there seemed to be roughly an even mix of stuff talking about the sexual implications of Actual Marxism and outsiders mockingly attributing “sexual marxism” to incels.

          • MrApophenia says:

            I found this in 30 seconds of Googling, which is what my “cursory Googling” comment was in reference to –

            https://robertlindsay.wordpress.com/2012/07/22/sexual-marxism-has-always-been-the-norm-for-human-society/

            What really jumped out at me is this is a debate in which both parties are already throwing these terms around as fully formed concepts, years ago.

            But that said, a somewhat less cursory Googling also does show that the actual term “Sexual Marxism” was indeed quite thin on the ground.

            The places it does show up definitely make it look like it already was a term of art being used in communities that Google is no longer turning up – people using comment handles like “Sexual Free Market vs Sexual Marxism” in comment sections on a few relevant blogs, and I did find an offhand snark against incels like you mention, so I do think it existed as a concept to some degree – but I definitely am not finding the primary sources. I kind of suspect it’s because said sources are just not around to Google anymore, but I can’t prove it, so I will retract that part since I can’t show it. (At least, not without more work than I am willing to put into the research.)

          • The Nybbler says:

            Robert Lindsay does not appear to be an incel, and he’s quoting his own commenter Bhabiji, who is also not an incel. So, not an unironic use of the term by incels.

  49. 10240 says:

    “The shorter and more public the medium, the more pressure there is to stick to the lower levels.”

    This should be emphasized more. I think it’s the primary reason most internet debates are stupid.

    – In a big internet forum or comment section, if you post a two-page reply, nobody is going to read it. Often there is a popular one-sentence argument that sounds nice but is (IMO) wrong, but the proper counter-argument takes pages. Or there is a one-sentence counter-argument, but it makes you look X (racist/communist/…), not only getting you attacks, but also making most people ignore your argument — and explaining why your argument doesn’t mean you’re X (or why being X doesn’t mean your argument should be ignored) takes pages. Even if you’re intelligent, decent, and philosophically sophisticated, your choices are posting a one-sentence gotcha, or ceding the territory to your opponents.
    – It’s not worth spending an hour to convince a few faceless internet strangers out of millions who hold a view you oppose (just to start from square one when someone else posts the same view).
    – It’s not worth the effort particularly when you can’t even be sure they read your counter-argument. Or that, if they read it and they still disagree, they’ll post a counter-counter-argument you can continue to argue against, rather than deciding they are not willing to spend more time on the debate, and quit it, continuing to hold their view.
    – Internet discussion often (more than 10% of the time) has actual arguments. But when a proper debate could run to dozens of argument/counter-argument iterations, most of the time only the first few are made, over and over again. (E.g. (1) Women make 77% of what men make. (2) But they choose various jobs in different proportions.) Most internet venues don’t facilitate proper back-and-forth debate (e.g. on this site you can’t get notifications when someone replies to your comment), and you don’t get punished for posting argument (1) (which has already been made 100,000,000 times) without attempting to address (2) (which has been made 40,000,000 times in reply).

    Another big one is that we have to form opinions from a limited amount of information. Even if you’re intelligent, decent, and philosophically sophisticated, you don’t have enough time to make a full survey of the evidence on every issue you’re supposed to have an opinion on (for example, because they matter when deciding whom to vote for). So, often the best you can do is listen to a few opinion leaders, and try to guess which one is more likely to be telling the truth. At this point an argument like “Senators who oppose gun control are in the pocket of the NRA” becomes relevant, since it increases P(senators oppose gun control | gun control is right) and thereby decreases P(gun control is wrong | senators oppose it).
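
    To make that update concrete, here is a minimal Bayes sketch in Python. All the numbers (the 50/50 prior and the likelihoods) are illustrative assumptions, not anything from the comment: the point is that if NRA money means senators would oppose gun control almost regardless of whether it is right, then observing their opposition barely raises the probability that it is wrong.

        # Toy Bayes update; every number here is an illustrative assumption.
        prior_wrong = 0.5  # P(gun control is wrong) before hearing the senators

        def posterior_wrong(p_oppose_if_wrong, p_oppose_if_right):
            """P(wrong | senators oppose), by Bayes' rule over the two hypotheses."""
            joint_wrong = p_oppose_if_wrong * prior_wrong
            joint_right = p_oppose_if_right * (1 - prior_wrong)
            return joint_wrong / (joint_wrong + joint_right)

        # If senators track the truth, their opposition is strong evidence:
        print(posterior_wrong(0.9, 0.2))  # ~0.82

        # If they're in the NRA's pocket, P(oppose | right) rises, and the
        # same observation barely moves us:
        print(posterior_wrong(0.9, 0.8))  # ~0.53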

    • hnau says:

      “The shorter and more public the medium, the more pressure there is to stick to the lower levels.”

      This should be emphasized more. I think it’s the primary reason most internet debates are stupid.

      Yes. This.

      Your sub-reasons are all plausible, but I’ll add one more proposal. I suspect this is in fact the largest contributing factor. In internet discussions, most of the poster’s satisfaction comes from the act of writing and posting, not any subsequent interaction.

      I have no clear theory as to why this is so. If you pressed me for a guess, it would probably be “something something superstimulus”.

      On the other hand, if you accept the above point, it immediately makes clear why and how internet discussion fails to create productive arguments. And the answer is very much consistent with my personal experience.

      • 10240 says:

        I don’t really think so. On a forum where I’m more likely to have feedback that I’ve convinced someone (or at least that they’ve considered my arguments), I feel more inclined to put effort into making good arguments.

        • albatross11 says:

          Noam Chomsky made exactly this point about political talk shows once. Basically, the time slot and format allows for only very conventional arguments that won’t take much time because they’re basically what everyone is already expecting. Trying to make a case for an unconventional view of anything is basically impossible–there’s not time to explain anything very complicated before the next commercial break.

          • My father’s view was that, in this respect, radio talk shows were much better than television talk shows.

          • Lambert says:

            Were?
            BBC Radio 4 are still putting out some decent debate.

            Like that time they got a pro-animal-testing activist and an anti-animal-testing activist to debate one another. The conclusion was that one thought that the most useful animal testing should be carried out, but less important tests should be stopped, while the other believed that less important testing should be stopped, but the most useful tests should be carried out. (Though that was a programme explicitly aimed at the ‘clarification’ level.)

          • “Were” because I was reporting an opinion he expressed many years ago.

        • hnau says:

          Fair. I may be overgeneralizing based on my experience of forums like Twitter and Facebook.

        • emblem14 says:

          By your own admission this doesn’t account for most online arguments. I think the figure of 90%+ of adversarial discussions online having some driving force other than persuasion is likely true.

          The scary question is, are most people consciously aware that they’re serving non-truth seeking ends by engaging in this fashion, or is this just some massive bug in human cognition that leads to virtually everyone being under a false impression that they’re doing A when they’re really doing X, Y or Z?

          Surely people must have noticed that all of this noise is fruitless and counterproductive if persuasion is what we’re really aiming for.

          Or does the relative rarity of “winning” an online debate, coupled with the sweet dopamine satisfaction when it does happen, lead to a kind of slot-machine dynamic where we’re happy to “pull the lever” 99 times for the chance of enjoying that 1-in-100 event of sweet victory?

          • 10240 says:

            The motive is still persuasion. We just don’t put a lot of effort into making good arguments, but a weak argument may still convince some people. Plus, the strength of an argument from a rational, philosophical standpoint is a different question from its persuasive power. Single facts and “gotchas” may be much weaker than a good-faith survey of the evidence from a rational standpoint, but not necessarily much weaker in persuasive power.

            The motive is often to avoid ceding the territory to one’s opponents. If people hear one side’s arguments all the time, and rarely hear opposing arguments, they’ll generally side with the side they hear. This is particularly true for people who are not particularly interested in the topic, but even those who are will mostly work within the Overton window. Thus if we see a lot of arguments we oppose, we feel it necessary to make our side heard. This is still about persuading people, though often aimed more at the bystanders than at the opponent, much as Scott wrote about the social shaming category. But the aim is not necessarily to silence the other side, just to increase the share of your side, and to avoid falling out of the Overton window yourself.

  50. Jiro says:

    I don’t believe in double-cruxing. It has one of the same problems that bets have–you’ve basically told the other guy “you don’t need to refute the spirit of my argument; you just need to refute my literal words”, which will lead to your opponent looking for trivial mistakes in how you presented your argument rather than real refutations.

    “If you can name one country where Communists were elected into office, never set up an authoritarian government, and were peacefully removed from office, I’ll believe that Communism is okay.”

    “The Republic of San Marino (population 14000) had a Communist government between 1945 and 1957. I win!”

    “But I meant a good-sized country.”

    “Tough. You agreed that you were refuted if I showed you an example and you didn’t say ‘good-sized’ then.”

    . . .

    “I’ll believe in miracles if you show me a case where scientists studied something like a regrown limb and positively concluded it was real.”

    “Here’s an example. It’s from the 19th Century.”

    “I didn’t mean that. I generally don’t trust scientific experiments on touchy subjects made that long ago.”

    “You wanted an example, I gave you one. Better start believing in miracles.”

    • hnau says:

      Sorry if I’m off base here– I’ve never done a double-crux myself– but isn’t what you describe more like “single-cruxing”? The person you’re arguing against doesn’t have anything on the line.

      A more realistic double-crux of your first example might be: “Communism is okay” reduces to “Communism doesn’t reliably lead to authoritarianism” reduces to “there are good historical examples of non-authoritarian communist societies” etc., where if there *aren’t* good examples, your opponent agrees that this would weaken their opinion that Communism is okay.

      More generally, it seems like “If the opposite were the case, that wouldn’t change your mind” should be a valid refutation of any claim to “win” a double-crux.

      • Jiro says:

        Yes, those are only single cruxes. Double cruxes are one of them on each side, which doesn’t help–they both have the same problem, and not only that, they give the advantage to uncharitable sides who are more willing to go against the spirit of the point when necessary.

        More generally, it seems like “If the opposite were the case, that wouldn’t change your mind” should be a valid refutation of any claim to “win” a double-crux.

        I don’t think that’s true either. It has to reduce your confidence in the result somewhat, but it may not reduce it by enough to matter. If the disputed issue is “it is possible to put a man on the moon”, one example of putting a man on the moon is going to do a lot more to confirm it than the opposite of that is going to disconfirm it.
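
        (To make that asymmetry concrete, here is a minimal Bayesian sketch in Python. The probabilities below are purely illustrative assumptions, not anything from the thread; only the shape of the update matters.)

          # Why one success confirms 'X is possible' far more than one failure
          # disconfirms it. All numbers below are illustrative assumptions.
          def posterior(p_e_given_h, p_e_given_not_h, prior):
              """P(H | E) by Bayes' rule, for evidence E and hypothesis H."""
              return (p_e_given_h * prior
                      / (p_e_given_h * prior + p_e_given_not_h * (1 - prior)))

          prior = 0.5  # start at even odds that moon landings are possible

          # One observed success: impossible things are never seen succeeding,
          # so the update is decisive.
          print(posterior(0.1, 0.0, prior))  # -> 1.0

          # One observed failure: failures are nearly as likely either way,
          # so the update is tiny.
          print(posterior(0.9, 1.0, prior))  # -> ~0.47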

        • hnau says:

          OK, yes, I see the difficulty of applying double crux specifically to claims of the form “X is possible / X exists”. I don’t think this constitutes a generalized objection to double-cruxing as a tool for resolving arguments, though.

          • Jiro says:

            That’s not the only case that has a problem; it’s just one where the problem is unusually obvious. Consider the disputed issue “Bill Gates is a Martian”. Showing that he has one nonhuman trait is much stronger evidence for that claim than showing that he has one human trait is evidence against it. (The latter is of course not the exact opposite, which would be “he has no nonhuman traits”, but it’s what will actually be offered as evidence in arguments.)

            But that’s irrelevant to my first objection, which is that your opponent can pick on a deficiency in your wording or something you forgot to mention that doesn’t really have any bearing on what you’re trying to prove. If I say that I’d believe in miracles if you proved that someone could walk on water, and you then gave me an example of someone walking on ice, that fits my literal request but only because I wasn’t specific enough. If I don’t then go on to believe in miracles, you shouldn’t be able to claim that I reneged on my agreement.

        • emblem14 says:

          If the double-crux exercise falls apart because the original proposition is poorly phrased, not specific enough, or open to lawyerly interpretations, and the counterparty exploits that for a technical “win”, all that signifies is that it was a bad formulation of the double crux to begin with (garbage in, garbage out) and that the counterparty isn’t acting in good faith, but merely looking to score a cheap win on a technicality, which defeats the purpose of the exercise.

          Both parties have to be of the mindset that the reason they’re doing a double crux is to refine their own beliefs and achieve greater clarity. In your example, one of the agents clearly doesn’t share that objective – they’re still in the “gotcha” frame. A good faith counterpart would actually suggest a better crux proposal if the one offered was too weak.

          • Nornagest says:

            Both parties have to be of the mindset that the reason they’re doing a double crux is to refine their own beliefs and achieve greater clarity.

            If both parties have that mindset, do you need to be doing the exercise?

  51. carvenvisage says:

    For example, a Trump supporter might admit he would probably vote Hillary if he learned that Trump was more likely to start a war than Hillary was.

    I think “be convinced” would be more precise/accurate/universal than “learn” here.

  52. johnWH says:

    Sarcasm aside, it is completely legal for a private institution to be employed as a public relations firm, but we can no longer classify their output as academic, and so it would be substantially degraded in the taxonomy of arguments that was outlined in the OP.

    I don’t think that’s a fair move to make. It sounds an awful lot like the genetic fallacy. Good research is good research. GMU scholars do get published in mainstream outlets, which means that their research has passed through the peer-review system. This isn’t a perfect system, of course, but I think it indicates that the research in question is worth being engaged with by other reasonable people.

    The only way you can determine if a particular academic output is good or bad is to actually read it. Knowing that it was produced by a Koch affiliated scholar might give you some reason to doubt it, but that should only further encourage you to read the article, book, etc. in question.

  53. enye-word says:

    Meta-debate is discussion of the debate itself rather than the ideas being debated. Is one side being hypocritical? Are some of the arguments involved offensive? Is someone being silenced? What biases motivate either side? Is someone ignorant? Is someone a “fanatic”? Are their beliefs a “religion”? Is someone defying a consensus? Who is the underdog? I’ve placed it in a sphinx outside the pyramid to emphasize that it’s not a bad argument for the thing, it’s just an argument about something completely different.

    What sphinx of sentiment and ad hominem bashed open their skulls and ate up their brains and imagination?

  54. Guy in TN says:

    “Truth-seeking” and “ideology normalizing/advancing/implementing” are different pyramids. While social shaming is at the bottom of the truth-seeking pyramid, it’s at the healthy middle of the ideology-normalization/implementation one. Be careful not to mistake the tactics used to advance a political position for the reasoning used to arrive at one.

    As others have mentioned, when someone responds using tactics low on your truth-seeking pyramid, it’s not to defeat your argument, it’s to defeat you. This is only “bad” if truth-seeking is your goal. But truth-seeking is not everyone’s goal, at least not all of the time (it is certainly not a terminal goal for myself). Sometimes the point is not to convince you, but to demonstrate shame, ridicule, or solidarity against you. These social demonstrations are often more effective tactics for implementing policies and norms than truth-seeking is. This is because I don’t need your consent to advance policy; I just need a majority of observers to the conversation to be on my side.

    This is the “why” of social shaming: it’s irrational behavior in the truth-seeking sense, used as a rational tactic in the ideology-advancement sense. I get that it sucks to be on the receiving end of it, but there’s a good reason for its widespread social prevalence.

    • emblem14 says:

      Good insight. Sad in what it tells us about our social conditions – that most people are willing to steamroll those they disagree with to either impose their will on unconsenting parties or protect their own interests from the same.

      An underlying cultural consensus of “live-and-let-live”, which actually defuses the need for a lot of high-stakes political gamesmanship, is sorely lacking in the “land of the free”.

  55. Deej says:

    I don’t think saying people can’t be against something they benefit from is A. necessarily isolated or B. a demand for rigour.

    Taking B first, then A.

    B. It’s a mostly* stupid argument rather than a demand for rigour. It’s a stupid argument in that it doesn’t follow that you can’t be against things that you benefit from. A demand for rigour, to my mind at least, is more about asking for more or better evidence, a higher bar of proof, or better logic and argument.

    A. It can be applied pretty consistently by people. Plenty of radio talk-show hosts, and people who think like them, would apply it to capitalism, the class system, unions, sexist culture (e.g. female actors “benefitting” from being objectified) and loads of other stuff.

    *It obviously doesn’t follow that something that benefits you is something you should support, as it might be bad for other people and you might be nice or you might benefit more from something else. But I say mostly because, at times, the real meaning behind it can be, “you’re a hypocrite as you choose to do something that you say you’re against” or “you’re being disloyal to people you owe loyalty to”. Agree with these or not, they follow from some people’s starting assumptions (you should be loyal to group x, you should not do something that you’re against if you have the choice not to), so aren’t stupid, at least in the sense I’m using stupid here.

  56. hnau says:

    I had a weird reaction to this post. Namely:
    (1) All the kinds of arguments mentioned here are correctly defined, categorized, and pyramid-ized.
    (2) Nothing here bears any resemblance to the way I normally argue.

    When I argue, it doesn’t often seem to be about definitions or evidence. Normally, hearing my opponent’s side on either of those points wouldn’t greatly surprise me or shift my position. Instead, my impression is that I normally do one or more of several things:
    – Bring in examples / motivations / relevant points from my own experience
    – Try to present “point of view” / “worldview” / context so others understand where I’m coming from
    – Attack crucial points of opposing arguments that seem fundamentally wrong in how they think about the issue
    In other words, something roughly like what I’ve been doing so far in this comment. Didn’t come off as so weird, did it?

    I’d propose that we should make like the FDA and add more vertical slices to the pyramid. “Factual evidence” and “definitions” don’t do enough to capture the entirety of a dispute. Some tentative suggestions of possible categories would be “personal relevance” (why do I / should people care about this), “outcome evaluation” (if it’s a policy question, we may have legitimately different visions of what’s a good / bad outcome), and “perspective / worldview” (are there real differences in how we frame / approach the issue– this goes toward addressing Chesterton’s Fence).

    Yes, it’s possible to cut those categories down until either they fit in “evidence” or “philosophy” or just get thrown out into the Sphinx– if you rationalize hard enough. But part of my point is that most people don’t break it down into just “evidence” / “philosophy” mentally. And it’s not clear how one would argue, within this framework, that they should.

  57. Baeraad says:

    Good post. I can find nothing in it I disagree with.

    I would like to state for the record that I’m willing to change my mind on any and all issues where someone can convince me that the other side better increases the ability of all people to sit quietly and contemplate their humanity as much as possible. I would happily become a capitalist pro-life Christian if I genuinely believed that it would lead to more blissful OMMMMMing instead of less.

    Conversely, I think I do in fact assume that people who disagree with me have their facts straight and that they are trying to do good as they define goodness. It’s just that their definition is anti-OMMMMM instead of pro-OMMMMM.

  58. Krisztian says:

    I think you left out the highest form of disagreement: betting.

    Being the highest form, it is impossible to find unless the stars perfectly align.

    • fr8train_ssc says:

      Betting in most cases constitutes operationalizing. You’ve agreed on a set of goal posts for whether something is or isn’t so, and have established a stake of money to enforce the agreement. In the case where you bet heads-or-tails and the coin lands on its edge, you’ve veered into higher generators of disagreement, and possibly a seventh, unknown tier.

  59. Thegnskald says:

    There is some scattered discussion here about how important clarification is.

    An observation, however: Clarification almost never works.

    Okay, we agree that a flitchet is a zomp and a frim, rather than a tock and a frim.

    What do THOSE words mean, however?

    We greatly overestimate how much context we share; imagine, for a moment, trying to have a conversation with someone in a foreign language; your dictionary is composed of the meaning of the words as defined by Conservapedia; theirs, Wikipedia. And you can’t consult each other’s wikis.

    That is closer to the true problem. And even that greatly underestimates the difficulty involved.

    Communication seems to work because in most cases the differences are trivial; if I ask you for an apple, and you bring me a Granny Smith when I was expecting a Red Delicious, I can blame myself for being insufficiently specific, and be more specific in the future. Apparently we can communicate, even if your default conceptualization of an apple differs from my own. But an apple is a concrete thing we can point at, and even if there are fuzzy boundaries on our conceptualizations – maybe a hybrid satisfies your definition but not mine – we can ultimately use reality itself as an objective communication device, and demonstrate our meaning with examples and exceptions.

    Anything too abstract to be pointed at? Forget it. There is no common context, and no means of establishing one. Mathematics might be the closest thing we have to such a context, but the myriad interpretations of quantum physics demonstrate that even mathematics cannot provide a true context of meaning.

    • albatross11 says:

      This is why shared context is really valuable for conversations. A roomful of economists will have a large set of shared concepts and internal mental models and definitions and approaches to thinking about problems. A given economist may think some of those concepts aren’t meaningful or some of those models are not very useful, but he will still understand what they mean. And so a discussion can take place within that shared understanding and context.

  60. nameless1 says:

    I think one needs a separate pyramid for PO. The lowest level is social shaming, or even expression of hatred.

    One level higher, a basic friend-or-foe expression.

    One level higher, a more accurate social categorization: you belong to group A, I to group B, and we are neither arch-enemies nor arch-allies; these groups tend to get along but sometimes clash.

    One level higher, an empathic understanding: what life experience, emotion or memory made you feel about the topic the way you do? I said feel, not think. They are different things. I can think a free market is a good thing or a bad thing and bring up many rational TO arguments. But the feeling aspect is that one defender of the free market is detached while another is passionate. While the first may have arrived at his position just by thinking and can be convinced otherwise, the second got that passionate emotion from somewhere. This is what there is to find out empathically. Maybe a regulation destroyed his dad’s business and his dad killed himself. Once you know that, you understand why he cares so passionately about the issue.

    So it is like (shittiest ascii art ever):


          /      empathy      \
         /      subgroup       \
        /    friend or foe      \
       /   shaming or hatred     \

    Also, having figured it out, or at least I think I partially did, now I feel like PO is something any intelligent TO person would learn in three weeks if he cares to do so and then go back to learn other TO things. Sorry if I am an ass, probably I am. But PO sounds like something simple and intellectually unchallenging… quickly learnable…

    Sorry again. Just one remark. There is Toastmasters. I have seen people go from unintelligible, shy mumblers to actually decent, understandable and likeable public speakers in something like 3 sessions, 3 x 10-20 min talking + feedback. OK that is just one element of PO but I sincerely feel like we could teach people in something like 120 hours how to function socially like an okay PO person…

  61. fr8train_ssc says:

    Curious that I see some similarities and coincidences between both pyramids and Kohlberg’s stages of moral development, in the sense that as one travels higher, greater cognitive load and reasoning skills are required. If you don’t want to read the rest of this post, the tl;dr is that the six stages (and stage 4.5) seem to map onto the levels of Scott’s pyramid well, and you can probably see this by putting Scott’s diagram and the six stages side by side.

    To begin, Scott divides the pyramid into three sections with dotted lines, similar to Kohlberg’s three sections of Pre-Conventional, Conventional, and Post-Conventional morality. The bottom of the pyramid basically represents the first two minutes of this, and should be avoided if you’re trying to persuade. Social shaming seems to map well to Obedience/Punishment (stage 1): one is trying to reinforce a base level of punishment to convince the other not to make the argument, or trying to get the other to rationalize based on obedience/punishment ideas rather than on the merits of the argument. Once we elevate to Gotchas, we’re at stage 2. Gotchas are still Pre-Conventional, given the focus of the Gotcha. As Scott says:

    It’s not saying “calculate the value of these parameters, because I think they work out in a way where this is a pretty strong argument against controlling guns”. It’s saying “gotcha!”.

    Gotchas map similarly onto primitive self-interest (stage 2). One says a gotcha not because it’s a compelling argument, but because one is trying to boost one’s individual reputation at the expense of the other (“Hey y’all, remember that time I made so-and-so look like an idiot?”), without considering whether it actually enhances the argument space at all.

    Once we go beyond the first dotted line, we’ve now gone into Conventional territory. In this sense, single facts or demands for rigor map somewhat onto stage-three social-consensus motivations: “I am providing this anecdote/piece of information because I believe it will appeal to your considerations” or “You believe in bodily autonomy, so how can you support thing-that-goes-against-it?” While these are still considerably weak, they are a significant improvement over Pre-Conventional arguments and reasoning.

    The next level of evidence, studies and disputing definitions, shows an enhancement over the previous level of single facts and isolated rigor, in the same way that stage 4 improves on stage 3: stage 4 morality/reasoning looks not only at social consensus but also at norms and practical limitations for the functioning of society. A scientific study is no longer just “I am providing this to appeal to your considerations” but “I am providing this to appeal to your considerations, and it follows conventions that will allow you to replicate the results”, similar to how laws are used to codify norms to make behavior more predictable. Similarly, arguments over definitions try to improve the conventions of the words used, and appeal to prior authoritative sources of language rather than a personal interpretation of contradiction (i.e. “You claim to be for bodily autonomy, which could mean this-thing, and this-thing contradicts thing-that-goes-against-it which you support”).

    A further coincidence, mentioned in the Wikipedia page:

    In particular Kohlberg noted a stage 4½ or 4+, a transition from stage four to five, that shared characteristics of both

    Scott similarly puts clarifying on the dotted line between the middle and highest thirds of the pyramid, while the 4+ stage straddles Conventional and Post-Conventional morality/reasoning. The 4+ stage, according to Kohlberg, involves dissatisfaction with social systems and a desire to reform them (or at least an understanding of their limitations). Similarly, clarification builds on disputing definitions, but this time tries to do things like taboo the words, or remove the appeal to the authoritative basis of the word and instead get down to the details of what is being argued.

    If you’re intelligent, decent, and philosophically sophisticated, you can avoid everything below the higher dotted line. Everything below that is either a show or some form of mistake; everything above it is impossible to avoid no matter how great you are.

    Once we get past the second dotted line, we’re into Post-Conventional morality, or stages 5 and 6. Kohlberg believed most people develop to stage 4 and remain there for most of their lives. Similarly, as Scott states, getting into the top tiers of the pyramid isn’t easy (requiring some moral decency, intelligence, and knowledge of philosophy).

    Operationalizing and evidence surveys map well onto stage 5 morality. People with stage 5 reasoning recognize that there are numerous perspectives that contradict each other, so complex meta-laws, mechanisms, and social contracts are needed to reconcile conflicting desires. Good-faith surveys of evidence try not only to look at multiple studies, but also to seek out studies that come to different conclusions, to narrow down what is true (“these things appeal to some of your considerations, but the whole body of science rejects these others”). Operationalizing focuses not just on definitions but on establishing goal posts, and rules for how to reach them. The double crux functions as a sort of social contract for argument (“I will reconsider or apply the term bodily-autonomy in these circumstances”).

    What’s also interesting is that the wiki page mentions:

    Because post-conventional individuals elevate their own moral evaluation of a situation over social conventions, their behavior, especially at stage six, can be confused with that of those at the pre-conventional level

    This also doubles back to what Scott was saying about gotchas. A gotcha has the potential to be a good point with mental effort (“Studies show background checks don’t help, plus here are other studies on criminal subversion of rules…” vs “If guns are outlawed only outlaws will have guns!”), but the motivation and effort are different. To give an example of Kohlberg’s idea, a stage 2 person’s reasoning for evading taxes might be “This tax takes away my money. I prefer to keep my money and I know how to get away with it”, whereas a stage 5/6 person’s might be “This tax is unfairly assessed and enforced, and I plan to show how one can evade it and am ready to go to jail if prosecuted. Helping others evade it creates a more equitable application of the law (no one is getting taxed), and both actions reinforce publicity behind it, making it more likely citizens will petition to have it removed” – or just consider Colbert’s abuse of SuperPAC rules vs why your regular politician might be motivated to create a SuperPAC.

    Finally we reach the apex of Kohlberg’s stages and Scott’s pyramid: Universal Ethical Principles and High-Level Generators of Disagreement, respectively. Both claim the highest stages are elusive, and both, again, can be confused with lower levels of disagreement. They also seem to intertwine in a certain way. In Scott’s case, the high-level generators could map to what would ostensibly be stage 6 individuals trying to reconcile their own universal principles (“Unlike other compelling cases for regulating this-bodily-autonomy-thing, I’ve seen other cases where regulating-this-bodily-autonomy-thing backfires, and maintain a general heuristic of not regulating bodily-autonomy-things”), or stage 5 individuals briefly coming up to stage 6 reasoning when evaluating their own positions on issues. Likewise, to get to stage 6, you basically have to internalize high-level generators of disagreement within yourself to arrive at a rational and consistent base of ethics that you follow. A stage 6 person’s rationale for opposing gun control isn’t “a compelling argument that guns support individual liberty, social contracts and opposing tyranny of the government.” A stage 6 person would oppose gun control because they have internalized high-level generator arguments for individual liberty, social contracts, and opposing tyranny of the government, recognized how guns could both be used to support (self-defense) and subvert (crime) those high-level principles, and developed a mental calculus that supports less government control of guns.

    On the other hand, maybe I’m just seeing pyramids everywhere

    • whateverthisistupd says:

      Darmok and Jalad at Tanagra. Shaka, when the Walls Fell. Darmok, His Eyes Open.

      • fr8train_ssc says:

        Ekaku’s instruction with one hand; Tolkien translating Beowulf; Darwin reading Wallace’s letter;

  62. whateverthisistupd says:

    A few things.

    First-
    Personally, I actually do treat people who do violence according to a consistent ethical standard, and there are some libertarian/anarchist types who follow the same ethical reasoning.

    Second-

    I think there can be legitimate cases for not legitimizing discourse that don’t serve the “shaming/gang sign” function you’re talking about. As a recent example, in a Facebook group called Beyond left/right politics or something like that, there were quite a few posts about understanding Neo-Reactionary perspectives. In a modern age where something like the “incel” movement can go from the fringe of the fringe to the relative mainstream, where the media, in the interest of shaming, can make fringe ideologies quickly go viral, and where people take up new reactionary politics in response to the intersectionalist left, I think there could be a legitimate argument that increasing the exposure of fringe ideologies is genuinely a bad idea.

    Third-

    Increasingly, I find myself frustrated by the incredible inferential distance between myself and so many people. We can understand how conversation and debate function, but how do we use this knowledge to communicate with others who don’t?

    These days, I find that so much of what I want to express requires long explanations of concepts, and frustrating attempts to ask the right questions – to figure out what the other person understands, how they are thinking, the best way to explain it to them, and how to unpack the linguistic traps that limit their thought – and most people just don’t have the patience for that, or don’t want to have their worldview deconstructed just to have a conversation with me.

    The more rationalist I become, the more I feel like I need something like the metaphor language of Darmok to communicate efficiently. And the more difficult it becomes to actually communicate with the majority of people.

  63. qwertie says:

    Nitpick: It’s not that “97% of climate scientists think the earth is warming”; it’s more like “97% of experienced climate scientists – those with at least 20 published papers – find humans are causing the Earth to warm via greenhouse gases, though some estimates run as low as 91%, and there is less consensus among those who have published fewer papers (the simpler question of whether the Earth is warming is not in dispute, although a small minority challenge the quantity of warming)”. Several studies have been done; see details here.

    • I believe the original source of the 97% figure was Cook et al. 2013. Their 97% was for humans as one of the causes of warming: the example for one of the categories that went into the 97% used the term “contributed to.”

      I suspect your “at least 20 published papers” is Anderegg et al. If you look at the webbed supporting information, you discover that about a third of their initial group of climate researchers were in the unconvinced category. It was only after they reduced the initial 1372 researchers to the top 100, defined by publications, that they got a figure of 97%.

      Note also that they were classifying researchers not by what they said about climate but by whether they were in one or another of various groups, such as signatories to statements on one side or another of climate issues. About 2/3 of the people they classified as “convinced by the evidence” were classified that way because they were IPCC AR4 Working Group I Contributors. I don’t think you can assume that everyone who contributes to the IPCC report agrees with the particular claim (“anthropogenic greenhouse gases have been responsible for “most” of the “unequivocal” warming of the Earth’s average global temperature over the second half of the 20th century”) that the authors are talking about.

      • qwertie says:

        There’s no need to believe or suspect what my sources are: simply click the link and read the bullet points (I’m the author). In particular, I’ve mentioned nuances regarding Cook 2013 that you might not be aware of.

        I had not seen your claims about Anderegg 2010 before, so I had another look.

        This is my second time reading the paper, and it struck me that no consensus percentage was provided for the full 908 researchers who had their names on at least 20 papers. It stops after saying that “The UE [unconvinced by the evidence] group comprises only 2% of the top 50 climate researchers as ranked by expertise (number of climate publications), 3% of researchers of the top 100, and 2.5% of the top 200”. For the whole 908 one must calculate manually from the numbers (N[CE] = 817; N[UE] = 93). The numbers should add up to 911 (since 3 researchers were in both groups) but somehow add up to 910 instead. Anyway, if we assume (unlike Anderegg) that the 3 researchers in both groups are actually UE (it’s quite plausible that certain people contribute to the IPCC report but publicly disagree with its core conclusion), that gives us 90% [89.8% – I somehow miscalculated this the first time]. Very interesting, and roughly consistent with results from that AAAS survey showing 93% consensus, and with Verheggen 2014 (91% of the ~467 respondents with the most publications).
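
        (For anyone re-deriving that figure, here is a minimal sketch of the arithmetic in Python. The counts are the ones quoted above; treating the 3 dual-listed researchers as UE is this comment’s assumption, not the paper’s.)

          # Re-deriving the ~90% consensus figure from Anderegg et al.'s counts.
          n_ce, n_ue, n_both = 817, 93, 3  # convinced, unconvinced, listed in both

          n_unique = n_ce + n_ue - n_both  # 907 distinct researchers (one short of 908)
          ce_only = n_ce - n_both          # 814, if the dual-listed 3 count as UE
          print(ce_only / n_unique)        # -> ~0.897, i.e. roughly 90%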

        I also checked your second claim about “2/3” – it’s not in the paper itself so I went to “SI” (supplementary info) which links to more detailed info here. I verified that you’re correct. Evidently, Anderegg is not an ideal source.

  64. soreff says:

    (Also, this hews so close to a Hansonesque “Argumentation isn’t about the arguments” thesis that I should probably just shut up and go Google where he’s discussed this. Will do so now and link if I can find something.)

    This isn’t an exact match, but it turns out that Hanson blogged today about meta-discussion issues closely related to this blog article: http://www.overcomingbias.com/2018/05/skip-value-signals.html

  65. Robert Jones says:

    The third is wrong because eg prison is just state-sanctioned kidnapping; “it is exactly as wrong for the state to do something as for a random criminal to do it” is a fake rule we never apply to anything else.

    I don’t think this is right. I don’t fully understand republicanism, so I’m just going to express the following in monarchical terms, but I think it should port over mutatis mutandis.

    The crown never does anything itself. All acts are carried out by subjects. Of course the queen does things personally, but she doesn’t do things like levy taxes or sit in judgment or go to war, and it’s been established since the Case of Proclamations in 1610 that she cannot (except perhaps in the case of going to war).

    It therefore seems to me that the distinction between a state act and the act of a “random criminal” begs the question. If a person does something, claiming to be acting for the crown, but the act is illicit, then the person is not a true servant of the crown, but just a random criminal. Contrariwise, anybody may perform a “citizen’s arrest”, so what seemed to be a kidnapping might turn out, on closer examination, to be a lawful arrest. The question of whether the arrest is lawful does not depend on whether the arresting agent is a crown employee.

    The relevant distinction is not between crown acts and the acts of subjects. To say so would violate the principle of equality before the law. The relevant distinction is between lawful and unlawful acts.

    This is illustrated if you consider that really the analogy ought to say that “prison is state-sanctioned unlawful imprisonment”.

    What is expressed by saying “capital punishment is state-sanctioned murder” is that capital punishment can never be lawful. All definitions of murder include the element “unlawful”. On the other hand, it seems clear that capital punishment would satisfy the other elements of the definition (since we no longer have the concept of “outlaws”, the convicted criminal remains under the queen’s peace).

    The problem with saying “capital punishment is state-sanctioned murder” is that it tends to obscure the real debate: it generates more heat than light. If one says “capital punishment can never be lawful”, it’s clear where the real debate is. The proponents of capital punishment believe (wrongly, in my view) that it can be lawful.

    The problem isn’t that it’s an isolated demand for rigour, because it’s perfectly correct to say that otherwise unlawful acts do not become licit when carried out by the state.

    • Murphy says:

      This hinges on whether murder is only a legal definition.

      Is kidnapping only a legal definition? Is it possible to be kidnapped in a country with no laws about kidnapping? If someone in one of those countries abducts you and holds you for ransom, is it fair to declare that it’s not “really” kidnapping?

      If you abduct, slaughter and eat a prostitute in a country lacking any coherent government, are you a murderer?

      Or

      Let’s imagine I’m best buddies with a dictator; let’s say I was roomies with Kim Jong Un at uni.

      Now I convince him – as the lawful leader of the country, who legally has the right to make or remove laws – that I should have an exception in law. (I am not an expert on NK legalities, but let’s assume he legally can under the law of the country.)

      He writes an exception into law such that I’m exempt from being charged with murder as long as I avoid powerful or rich people and their families.

      If I then go on a gleeful slaughter spree, am I not a murderer?

      To bring it back to your example: if I’m in a country where the government has collapsed, and am thus stateless, and I handcuff someone and lock them in a cell in my basement, am I a kidnapper?

      How about if I saw them doing something wrong first? How about if I think that thing is wrong but they don’t?

      If I make a citizen’s arrest and the cops never show up to collect the person, am I a kidnapper if I lock them in a cell in my basement and otherwise treat them as the police would if they were in a cell? Suppose I schedule a “trial” for a few months down the road and set a “bail” that they or their kin must pay if they want them released before trial, and if they don’t turn up for the trial I keep the bail money.

      Some might call that kidnapping and ransom if done by an ordinary citizen.

      Many people don’t believe you have the right to grant rights to others that you don’t yourself have. If you don’t have the right to kill me if you think I’ve done something wrong, then they don’t believe you have the right to nominate your mate Bob as the official executioner such that when he cuts my head off it doesn’t count as murder.

      Personally, I don’t really subscribe to that worldview. There’s a practical side: in reality, laws tend to come from the point of a sword without much coherent philosophical backing, though that leaves the lines between murder and execution, or imprisonment with bail and kidnapping with ransom, kinda fuzzy. It feels unsatisfying to say that the difference depends on whether the person or group doing it has enough guns and power to be the de facto government in the area where it happens.

  66. Robert Jones says:

    One could argue that “Banning abortion is unconscionable because it denies someone the right to do what they want with their own body” is an isolated demand for rigor, given that we ban people from selling their organs, accepting unlicensed medical treatments, using illegal drugs, engaging in prostitution, accepting euthanasia, and countless other things that involve telling them what to do with their bodies – “everyone has a right to do what they want with their own bodies” is a fake rule we never apply to anything else.

    This also seems unconvincing as an example. As a matter of fact, we do ban people from selling their organs, accepting unlicensed medical treatments, using illegal drugs, engaging in prostitution and accepting euthanasia, but every one of those examples is a bad law. We should allow people to sell their organs, accept whatever medical treatment they choose, use whatever drugs they choose, engage in prostitution if they wish and accept euthanasia. Therefore it’s reasonable to say that, consistent with believing that people should be allowed to do those things with their bodies we should also believe that people should be allowed to have abortions.

    I actually feel somewhat more convinced of the pro-abortion argument now.

    • baconbits9 says:

      The argument is doubly bad. First, few people who say “my body, my choice” support the proposition of fully legalizing all drugs and organ sales, which is Scott’s point.

      Second, pregnancy is one of the few situations in which there is a grey area, in that it isn’t just your body anymore. Some of the same people yelling “my body, my choice” are supportive of preventing pregnant women from smoking and drinking.

  67. Given the discussions of consensus and bias, I thought it might be worth mentioning the sources of my bias against consensus before this thread vanishes.

    I grew up in the middle of the destruction of a well established consensus in economics. When I was an undergraduate at Harvard in the early sixties a fellow student, who almost certainly knew nothing about me, remarked that he couldn’t take an economics course at Chicago because he would burst out laughing. I took that as a mildly exaggerated picture of the views presented to him in the, probably introductory, econ course he had taken, hence of the accepted view at one of (indeed, all but one of) the leading departments.

    Within about fifteen years of that conversation, the Chicago position had become orthodoxy or near it on some of the major issues disputed between the two schools. On Wikipedia’s list of economics Nobels classified by institution, Chicago beats Harvard eleven to five (that includes one or two figures not associated with the Chicago School but excludes at least one associated with the school but not at Chicago).

    My second bias comes from having political views that are far from the orthodoxy in the environment, American academia, that I have spent my life in—and observing how poor an idea most of the people sharing that orthodoxy have of the arguments against it. That goes with another story, also from my time at Harvard.

    It was 1964 and I was supporting Goldwater. I got into a friendly conversation with a stranger who wanted to know how I could do so. In the course of the conversation I ran through a number of arguments for my position. In each case it was clear that the man I was talking with had never heard the argument and had no immediate rebuttal. At the end of the conversation he asked me, in what I took to be a “taking care not to offend” tone, whether perhaps I was defending all of these positions for fun, not because I believed in them. I took that as the intellectual equivalent of “what is a nice girl like you doing in a place like this?” How could I be smart enough to offer arguments he couldn’t readily rebut for views he knew were wrong, and stupid enough to actually believe them?

    In the years since, I have held a political position, anarcho-capitalism, considerably farther from orthodoxy than support of Goldwater was – and observed its status moving, at least in circles I frequent, from “No reasonable person could believe it” to “hardly anyone agrees with it, but it’s an interesting position worth thinking about.”

    All of which may help explain why my instinctive response to being told that everyone believes X is to start looking for reasons why X might be false.

    • 10240 says:

      I’d expect this to be less likely in the natural sciences than in the social sciences, because 1. the factual questions discussed are more exact, 2. the questions are less directly related to politics, and 3. hidden motives are less likely. (That is, there may be hidden motives from external sources such as outside financial interests, but they are less likely to come from the scientists’ own political convictions. E.g. a leftist may be biased against claiming that redistribution hurts economic growth because he worries that the claim would be misinterpreted and exaggerated, but also because he actually wants redistribution even if it hurts growth; the opposite for a libertarian. A climate scientist might also be biased against making (inconclusive) claims against AGW, as they could be exaggerated, but it’s unlikely that he would want to reduce CO2 emissions even if they don’t cause global warming.)

      • The Nybbler says:

        A climate scientist might also be biased against making (inconclusive) claims against AGW as they could be exaggerated, but it’s unlikely that he would want to reduce CO2 emissions even if they don’t cause global warming.

        Unfortunately, this well-known cartoon suggests they might.

      • A climate scientist might also be biased against making (inconclusive) claims against AGW as they could be exaggerated, but it’s unlikely that he would want to reduce CO2 emissions even if they don’t cause global warming.

        That depends on whether the things proposed to reduce CO2 emissions were things he also wanted to do for other reasons. I think it is clear that for many of the people campaigning for action against warming, they are. For some evidence consider this cartoon, popular, in my experience, with such people.

        Here is my blog post on it.

  68. There should be another level above the pyramid which is “Extreme humility after reading John Ioannidis”.

  69. Galle says:

    This is a fantastic chart that I’d love to spread hither and yon in the hope of getting more arguments that aren’t a complete waste of time for everyone involved, but unfortunately the “t” in “Gotchas” overlaps the black dividing bar between the red and blue sections of the pyramid and makes it oddly hard to read. Any chance you could fix that?

  70. bmcmolo says:

    Longtime reader, first (or possibly second) time poster. I just wanted to say I’ve bookmarked and reread this article many times over the past month, and I’m still learning from it. “Arguments above the second line are rare” has become something of a catch-phrase for me, at least in my head.